Make test tasks specify which target failed in exception.

Review Request #2090 — Created April 17, 2015 and submitted

Submitter: itay
Repository: pants
Bugs: 1431
Commit: 55d3c4a...
Review group: pants-reviews
Reviewers: dturner-tw, nhoward_tw

Make the test tasks raise an exception on failure that names the specific test targets that failed. This will allow me to add more Twitter-specific behavior in the Twitter codebase (by registering my own tasks, which extend the tasks I change here).

The gist:
- Added targets_failed to TaskError (a sketch of the idea follows below)
- Made pytest_run and JUnitRun write a summary file (either XML or a resultlog) and analyze it
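
A minimal sketch of the TaskError side, assuming the keyword ends up spelled failed_targets as suggested later in this review (illustrative only, not the exact patch):

    class TaskError(Exception):
      """Indicates a task has failed.

      :param failed_targets: The test targets that failed, if known.
      """

      def __init__(self, *args, **kwargs):
        # Pull our keyword out before delegating, so existing callers keep working.
        self._failed_targets = kwargs.pop('failed_targets', [])
        super(TaskError, self).__init__(*args, **kwargs)

      @property
      def failed_targets(self):
        return self._failed_targets

A downstream task can then catch the error and inspect exc.failed_targets to apply its own behavior.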

As this is supposed to be my first-evaaaar pants change, I am probably doing something incredibly idiotic here. Don't hesitate to tell me; I'll be glad to learn.

Testing Done: unit tests.

DT
  1. build-support/bin/ci.sh (Diff revision 2)

    Maybe it's just RBcommons being weird, but should this be "failing_target" instead of "failing_targe"?

  2. Docstring format: summary, two newlines, details, then arg info.
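
    For example, the shape being asked for (a hypothetical function):

        def analyze_resultlog(self, path):
          """Collect the targets whose tests failed.

          Reads the pytest resultlog written during the run and maps each
          failing test back to its owning target.

          :param string path: Path to the resultlog file.
          """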

  3. Is it guaranteed that this file will exist?

    1. Actually not -- in the fail-fast case it might not. Added a check.

  4. The user might have already supplied the -xmlreport option -- is it OK to supply it twice?

  5. Nit: you don't need the ^ and the $ because you are using .match, which is already anchored.
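
    A quick illustration of why (when matching one line at a time):

        import re

        matcher = re.compile(r'F (.*)')
        matcher.match('F foo\n').group(1)  # 'foo': .match anchors at the start,
                                           # and '.' stops at the newline anyway
        matcher.match('xF foo\n')          # None, with no leading ^ required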

  6. Is there any output for tests that succeed? Or any output at all other than failures?

    1. Yes, but I ignore it.

    2. Oh, duh. Yep.

DT
  1. That's three newlines.
DT
  1. OK, add some tests and I'll give a shipit.

NH
  1. Thanks. I've got a couple of nitpicks; one of them is an issue, the other is more of a style thing.

  2. build-support/bin/ci.sh (Diff revision 2)

    Thanks for the sorting!

  3. I think this error log will happen whenever test classes are specified on the CLI. Could you change collect_test_targets to include the targets for CLI-specified tests?

    1. I wish I knew how. It's getting the list of tests, not the list of targets, from self._tests_to_run. Any idea how I can get the associated targets?

      Is it guaranteed that all tests in _tests_to_run are included in the supplied targets?

    2. I don't think it's guaranteed, but I think providing a test class that isn't included should be an error condition. If the test isn't in the supplied targets, I doubt the test goal will do what you want. I suppose it's possible that you could have a test in a library dep that you'd want to run in the current env for some reason, but I think that would be unusual.

    3. OK, implemented a lookup of targets for tests. Any idea how I can test this?
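
      A rough sketch of the lookup being discussed (sources_relative_to_buildroot reflects the pants Target API of the time, and the '::' split assumes pytest-style test specs; treat this as an illustration, not the patch itself):

      def targets_for_tests(targets, tests_to_run):
        # Map each source file back to the target that owns it.
        source_to_target = {}
        for target in targets:
          for source in target.sources_relative_to_buildroot():
            source_to_target[source] = target
        # A pytest test spec looks like 'path/to/test_foo.py::test_name'.
        for test in tests_to_run:
          source = test.split('::', 1)[0]
          owner = source_to_target.get(source)
          if owner is None:
            raise TaskError('Test {} is not in any supplied target.'.format(test))
          yield test, owner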

  4. src/python/pants/backend/python/tasks/pytest_run.py (Diff revision 2)

    Could you change how you are looking up the resultlogs here?

    It's a little tricky to follow because it is modifying a closed-over variable's value for a nested use above it.

    I think it'd be easier to follow if you reorganized it to look something like:

    def run_and_analyze(args, resultlog_path):
      # ... run pytest with these args, then analyze the resultlog ...

    args = []
    # ...
    args.extend(sources)

    requested_resultlogs = [arg.split('=', 1)[-1] for arg in args if arg.startswith('--resultlog=')]
    if requested_resultlogs:
      return run_and_analyze(args, requested_resultlogs[-1])
    else:
      # temporary_file_path comes from pants.util.contextutil.
      with temporary_file_path() as resultlog_path:
        args.append('--resultlog={}'.format(resultlog_path))
        return run_and_analyze(args, resultlog_path)
DT
  1. Ship It!
ZU
  1. Did you mean to kill this TODO? I can't see the JIRA, so I don't know if it's out of date.

    1. The JIRA was marked as resolved.

  2. This kind of makes the --no-xmlreport option meaningless. We have --xmlreport enabled by default in pants.ini.

    What happens if you specify -xmlreport twice on the command line (it can be set in init above)?

    1. Supplying -xmlreport twice has no more effect than supplying it once. What do you suggest we do regarding this option?
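
      If double-appending ever did become a problem, a guard along these lines would avoid it (illustrative only; assumes args is the list of flags handed to the JUnit console runner, as discussed above):

      # Only add the flag if the user hasn't already supplied it.
      if '-xmlreport' not in args:
        args.append('-xmlreport')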

  3. Add javadoc here to document the return type.

  4. Use .format() instead of %s.
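
    For example (target here is just a placeholder):

        msg = 'tests failed in %s' % target        # %-style
        msg = 'tests failed in {}'.format(target)  # preferred str.format style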

  5. src/python/pants/base/exceptions.py (Diff revision 7)

    Add failed_targets to the docstring.

NH
  1. Just a few style things; otherwise LGTM.

  2. I don't think this is necessary if you are just passing through the args.

  3. You could use fp.readlines() here instead.

  4. You could use a set literal here:

    failed_files = {
      m.groups ...
    }

  5. You can rm the nested square brackets -- if there are enclosing parens they aren't necessary.
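
    Taken together, nits 3-5 point at something like this sketch (PATTERN and resultlog_path are stand-ins, not names from the patch):

        with open(resultlog_path) as fp:
          lines = fp.readlines()

        # A set comprehension covers both nits: braces instead of set([...]),
        # with no nested square brackets inside a call.
        failed_files = {m.group(1) for m in (PATTERN.match(line) for line in lines) if m}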

DT
  1. In master at 937907d721b8e4e9628b6d13bb59423e4406a5f1; please close this as "submitted".

Review request changed

Status: Closed (submitted)

ZU
  1. I've made some changes and now I'm having some trouble with the new code to identify failed tests. It seems that the output is sometimes missing the filename:

    F ::test_two
     def test_two():
     >     assert 1 == core.two()
     E     assert 1 == 2
     E      +  where 2 = <function two at 0x109c03c08>()
     E      +    where <function two at 0x109c03c08> = core.two
    
     tests/test_core_red.py:5: AssertionError
    

    F ::test_two
    

    when we expected something like:

    F tests/test_core_red.py::test_two
    

    I'm still digging into the problem but thought I'd post it here because I thought someone might have some insight.

    1. I actually bumped into it last night while working on another change. I'm not sure how it passed CI. I'll be working on this later on today as well, but I'd appreciate any help.

    2. It's a tough bug. If I run the tests in a different order (or by themselves) they work. It's only when run with ./pants test tests/python/pants_test:all that I see the failures.

    3. From what I can tell, this line comes directly from the 3rdparty pytest library?
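
      For reference, a healthy resultlog failure line is the status letter, a space, and the test's nodeid (file path, '::', test name). One defensive option is to only accept lines whose nodeid carries the file part (a sketch, not the eventual fix):

      import re

      # 'F tests/test_core_red.py::test_two' matches and captures the path;
      # a bare 'F ::test_two' does not match and is skipped.
      FAILURE_LINE = re.compile(r'F +(\S+?)::')

      def failed_files(resultlog_path):
        failed = set()
        with open(resultlog_path) as fp:
          for line in fp:
            m = FAILURE_LINE.match(line)
            if m:
              failed.add(m.group(1))
        return failed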
