Coalesce errors when parsing BUILDS in a spec

Review Request #1061 — Created Sept. 22, 2014 and submitted

jcoveney
pants
262c115...
pants-reviews
benjyw, ity, patricklaw

The idea is that when you submit a pants command, it will emit an error for ALL erroneous build files in a spec.

Note: if multiple specs are given on the command line and one of them has an error, only the first erroneous spec's errors will be printed before pants errors out.

I handled the errors as best I could, because we need to know the flag before much of the options parsing happens.

I tested manually and it works. I need to update the unit tests but wanted to validate my approach first.
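For reviewers skimming, the coalescing behavior described above can be sketched roughly like this. This is a minimal sketch, not the actual pants code: `BadSpecError` matches the exception named later in this review, but `parse_spec` and the `parse_one` callback are hypothetical stand-ins for the real spec-parsing machinery.

```python
# Minimal sketch of the coalescing idea: try every BUILD file in the spec,
# buffer each failure, and raise a single aggregate error at the end
# instead of failing fast on the first bad file.

class BadSpecError(Exception):
    """Raised once all BUILD files in a spec have been attempted."""


def parse_spec(spec, build_files, parse_one):
    """parse_one is a hypothetical callback that parses one BUILD file."""
    errors, addresses = [], []
    for build_file in build_files:
        try:
            addresses.extend(parse_one(build_file))
        except Exception as e:
            # Buffer the message instead of raising immediately.
            errors.append('Exception message: {0}\n'
                          'while executing BUILD file {1}'.format(e, build_file))
    if errors:
        raise BadSpecError('{0}\n\nInvalid BUILD files in spec [{1}]'
                           .format('\n\n'.join(errors), spec))
    return addresses
```

With two bad BUILD files, the aggregate error carries both messages, matching the example output pasted later in this review.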

JS
  1. Please add Patrick - he's the expert and most active here.

JC
PA
  1. This seems like a reasonable general approach. Definitely worth testing thoroughly.

  2. This should have a default, presumably False

  3. Use .format() interpolation, and give a more verbose explanation. Also, all error output should go to sys.stderr.

    1. The errors are already fairly detailed... do you want more? Here's an example of the output:

      PANTS_DEV=1 ./pants goal list hey::
      Running pants in dev mode from /Users/jcoveney/workspace/github/pants/src/python/pants/bin/pants_exe.py

      Exception message: name 'ADJKLKASJDFLKSAJDFAJSF' is not defined
      while executing BUILD file /Users/jcoveney/workspace/github/pants/hey/a/BUILD
      Loading addresses from 'hey/a' failed.

      Exception message: name 'wtf' is not defined
      while executing BUILD file /Users/jcoveney/workspace/github/pants/hey/b/BUILD
      Loading addresses from 'hey/b' failed.

      Exception message: Invalid BUILD files in spec [hey::]

    2. Hmm, that does seem okay.
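On the flag default raised in point 2 above, a minimal optparse sketch (the old goal option parsing was optparse-based; the flag name, dest, and help text here are illustrative assumptions, not the exact patch):

```python
# Illustrative sketch only: registering a boolean flag with an explicit
# default of False using plain optparse. Flag name and help text are
# assumptions, not the code under review.
from optparse import OptionParser

parser = OptionParser()
parser.add_option('--fail-fast', dest='fail_fast', action='store_true',
                  default=False,  # explicit default, as the review asks for
                  help='Stop at the first bad BUILD file instead of '
                       'coalescing errors across the whole spec.')

defaults, _ = parser.parse_args([])            # flag absent
flagged, _ = parser.parse_args(['--fail-fast'])  # flag present
```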

ST
  1. Shipit pending a unit test. Thanks!

  2. Do people know what a "spec" is? Should this just say "Invalid BUILD files for [%s]"?

IT
  1. What do you think about --fail-fast? It's generally the standard flag for this kind of scenario; I am open to either, though.

  2. self.parse_args?

    1. Are you implying we should change it from its previous functionality? Not sure what you mean here.

  3. src/python/pants/goal/option_helpers.py (Diff revision 1): 100 chars
IT
  1. lgtm pending test

  2. Ah, this isn't an instance method; dropped.

JC
JC
  1. Would love more comments on what more testing we should do.

    1. Create two BUILD files with errors, and if you pass the --fail-fast flag, we should see both errors.

    2. Do you know of a test that does this, especially the part that captures the stdio? I'm just curious what scaffolding we have in place for integration tests of this sort.

    3. I was thinking about unit tests.
      You can use https://github.com/pantsbuild/pants/blob/master/tests/python/pants_test/tasks/test_dependees.py#L58 to create build files in your unit tests,
      and then write a test that lists targets in a directory and verifies both errors are reported.

      You can also look for integration tests in tests/python/pants_test/tasks/test_*_integration.py.

    4. You can also have a look at https://github.com/pantsbuild/pants/blob/c0e23d09031258639163ad1685a339500b72d465/tests/python/pants_test/tasks/test_detect_duplicates.py, which tests using the --fail-fast flag.

PA
  1. lgtm, modulo the lack of tests.

    Also please add Benjy so he can double check the way you're handling this argument, and whether or not it's problematic given his impending refactor.

    1. Added. Any pointers to some reference tests? I'm not sure of the best practice for testing this sort of thing in a meaningful way. It's exercised in every single test already; we're just not testing that it prints what we expect. But it doesn't break anything, well, anything that is tested.

JC
ZU
  1. If you don't add a better test of the error output from this feature, it's in danger of being inadvertently refactored away.

    1. I agree. Do you know of any tests that capture the output of a pants invocation? It sounds like that's what we need here.

    2. Ok, I admit, that wasn't very helpful. The integration tests capture all the standard output, but using them would mean checking failing BUILD files into testprojects. There is some infrastructure to do that (exclude targets in build-support/bin/ci.sh), but some folks have pointed out that isn't a great precedent to set.

      You could also write your code so that instead of using 'print', you buffer up the output in a test buffer and include it in the text when you finally do emit a BadSpecError. Then you can just test the message string of the BadSpecError in a unit test.

    3. Why is it a bad precedent to check in bad BUILD files? Plenty of projects do this for similar reasons (compilers, for example): they want to test properties of bad input.

    4. Stu and I talked about this in https://rbcommons.com/s/twitter/r/881/ (Aug. 14, 2014).

      The consensus was to try and test examples of malformed configuration in unit tests rather than integration tests.

JC
JC
ZU
  1. Ok, thanks for the feedback and the pointers. I made the changes accordingly. Look reasonable?
  2. I don't think we should be using 'print' here. We have code that does special things with exception messages and this circumvents it. Please buffer up all the messages you want to emit and use it as the argument to the BadSpecError below.

  3. I think you could adapt this test pretty easily to a unit test in test_cmd_line_spec_parser.py. You would need to create 2 bad build files with self.add_to_build_file() just like you did here, and then invoke self.spec_parser.parse_addresses and check it with "with self.assertRaisesRegexp()."

    For an example of this approach, see the unit tests in tests/python/pants_test/graph/test_build_graph.py that test BuildGraph.TransitiveLookupError().
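A self-contained sketch of the suggested test shape. The fake parser below is a stand-in for CmdLineSpecParser.parse_addresses so the example runs on its own; the real test would use self.add_to_build_file() and self.spec_parser as described above. The assertion uses assertRaisesRegex, the modern name for what this (Python 2 era) review calls assertRaisesRegexp.

```python
# Sketch of the suggested unit test: provoke two bad BUILD files and assert
# on the coalesced message. fake_parse_addresses is a hypothetical stand-in
# for the real spec parser so the example is self-contained.
import unittest


class BadSpecError(Exception):
    pass


def fake_parse_addresses(spec):
    # Emits both buffered errors in one exception message, as the feature
    # under review does for the whole spec.
    raise BadSpecError(
        "name 'ADJKLKASJDFLKSAJDFAJSF' is not defined\n"
        "name 'wtf' is not defined\n"
        "Invalid BUILD files in spec [{0}]".format(spec))


class CoalescedErrorsTest(unittest.TestCase):
    def test_both_errors_reported(self):
        # assertRaisesRegex matches a regexp against the exception message,
        # so one invocation can be checked for each buffered error.
        with self.assertRaisesRegex(BadSpecError, r"ADJKLKASJDFLKSAJDFAJSF"):
            fake_parse_addresses('hey::')
        with self.assertRaisesRegex(BadSpecError, r"name 'wtf' is not defined"):
            fake_parse_addresses('hey::')
```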

JC
ZU
  1. One more thing you can do for the committers is to make sure that your patch passes Travis-CI. To do this, fork the pantsbuild/pants repository, then push your branch to your fork and create a PR. Travis-CI should pick it up and build it.

    1. Also, be sure to rebase your patch first. Let me know if you have any issues with this. I'll commit this for you when it goes green.

    2. I had a PR: https://github.com/pantsbuild/pants/pull/598, so no issue there. Just rebased and pushed; let's see. The Travis failure is weird; it looks like some unrelated Python 2.6 thing?

  2. Use .format() with strings, not %-style formatting.
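For illustration, the two styles side by side on the error string from this review (both produce the same output; the review simply prefers str.format()):

```python
# The same message with %-style interpolation and with str.format();
# the review asks for the latter.
spec = 'hey::'

percent_style = 'Invalid BUILD files in spec [%s]' % spec
format_style = 'Invalid BUILD files in spec [{0}]'.format(spec)
```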

JC
ZU
  1. Committed as 5f19f25; please mark as submitted.

JC
Review request changed

Status: Closed (submitted)
