give some context that could help the user when the tool cannot find the failing tests #39

Open
zsoldosp opened this issue Oct 27, 2022 · 0 comments

When trying to debug a flaky test, I got the following output:

```
$ detect-test-pollution  --failing-test ...  --tests ...
discovering all tests...
-> discovered 83 tests!
ensuring test passes by itself...
-> OK!
ensuring test fails with test group...
-> OK!
running step 1:
- 82 tests remaining (about 7 steps)
running step 2:
- 41 tests remaining (about 6 steps)
running step 3:
- 21 tests remaining (about 5 steps)
running step 4:
- 11 tests remaining (about 4 steps)
running step 5:
- 6 tests remaining (about 3 steps)
running step 6:
- 3 tests remaining (about 2 steps)
running step 7:
- 2 tests remaining (about 1 steps)
double checking we found it...
Traceback (most recent call last):
  File "/usr/local/bin/detect-test-pollution", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/detect_test_pollution.py", line 289, in main
    return _bisect(testpath, args.failing_test, testids)
  File "/usr/local/lib/python3.9/site-packages/detect_test_pollution.py", line 241, in _bisect
    raise AssertionError('unreachable? unexpected pass? report a bug?')
AssertionError: unreachable? unexpected pass? report a bug?
```

At this point, I cannot tell which tests were considered, and I'm no smarter about the test pollution than I was before using the tool.

I've tried passing options to pytest via PYTEST_ADDOPTS (`PYTEST_ADDOPTS="-vvv" detect-test-pollution --failing-test ... --tests ...`), but that didn't help.

I recall that in the past the pollution only occurred when 3 tests were run in a given order, so it is totally fine if the tool cannot find the culprit, but it should still help the user.
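
To make that failure mode concrete, here is a minimal, purely illustrative sketch (not the tool's actual code; `bisect_culprit` and `run_fails` are made-up names) of how a halving bisection can land in the "unexpected pass" branch when the pollution only reproduces with tests from both halves:

```python
# Illustrative sketch only -- NOT detect_test_pollution's real implementation.
# `run_fails(group)` stands in for "run `group` plus the failing test and
# report whether the failing test failed".

def bisect_culprit(testids, run_fails):
    while len(testids) > 1:
        half = len(testids) // 2
        first, second = testids[:half], testids[half:]
        if run_fails(first):
            testids = first
        elif run_fails(second):
            testids = second
        else:
            # pollution needs tests from *both* halves: neither half alone
            # reproduces the failure, so whichever half we keep is wrong
            testids = first
    # "double checking we found it...": with a multi-test culprit the single
    # remaining test no longer reproduces the failure -- the unexpected pass
    if not run_fails(testids):
        raise AssertionError('unreachable? unexpected pass? report a bug?')
    return testids[0]


# e.g. pollution that only triggers when both test_a and test_b run first:
polluters = {'tests/test_a.py::test_a', 'tests/test_b.py::test_b'}
run_fails = lambda group: polluters <= set(group)
try:
    bisect_culprit(sorted(polluters) + ['tests/test_c.py::test_c'], run_fails)
except AssertionError as exc:
    print(exc)  # unreachable? unexpected pass? report a bug?
```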

Some solution ideas that come to mind:

  • upon not being able to narrow it down to one test, list the last set of testids with which the failing test still failed (see the sketch after this list)
  • display the output from the test run, combined with allowing the user to pass options to the test runner being invoked (e.g. -vvv to pytest)
  • have a --verbose flag (or -vvvv style) and print out the testids considered under each "N tests remaining (about M steps)" message

but of course there might be other alternatives I haven't considered
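
For instance, the first idea could look roughly like the sketch below (function and variable names here are hypothetical, not the tool's actual internals):

```python
# Hypothetical sketch of the first idea -- names like `last_failing_group`
# are invented, not detect_test_pollution's actual internals.

def report_unexpected_pass(failing_test, last_failing_group):
    print('could not narrow the pollution down to a single test id')
    print(f'{failing_test} last failed when run together with:')
    for testid in sorted(last_failing_group):
        print(f'- {testid}')
    print('the pollution may require several of these tests, in a particular order')
    raise SystemExit(1)
```

Even just that list would let the user re-run the remaining group by hand and keep narrowing it down manually.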
