
silently skip a test #3730

Open
notestaff opened this issue Jul 29, 2018 · 45 comments
Labels
type: enhancement (new feature or API change, should be merged into features branch), type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)

Comments

@notestaff

notestaff commented Jul 29, 2018

Sometimes a test should always be skipped, e.g. because we generate tests by first generating all possible combinations of parameters and then calling pytest.skip inside the test function for combinations that don't make sense. That's different from tests that are skipped in a particular test run, but that might be run in another test run (e.g. on different hardware or when a particular feature is added). Could you add a way to mark tests that should always be skipped, and have them reported separately from tests that are sometimes skipped? When a report says "400 tests skipped" it looks like something is wrong, but maybe 390 of these should always be skipped, and just 10 should be tested later when conditions permit. Also, the "% of tests done" status message becomes distorted when always-skipped tests are included.
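For illustration, a minimal sketch of the pattern being described (the parameters and the predicate here are made up): every combination is generated, and combinations that never make sense are skipped from inside the test body, so they inflate the skip count on every run.

import pytest

@pytest.mark.parametrize("backend", ["cpu", "gpu"])
@pytest.mark.parametrize("precision", ["float32", "float64"])
def test_compute(backend, precision):
    if backend == "cpu" and precision == "float64":
        # hypothetical combination that never makes sense - skipped on every run
        pytest.skip("combination never makes sense")
    assert True  # the real test body would go here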

@pytestbot
Contributor

GitMate.io thinks possibly related issues are #1563 (All pytest tests skipped), #251 (dont ignore test classes with a own constructor silently), #1364 (disallow test skipping), #149 (Distributed testing silently ignores collection errors), and #153 (test intents).

@pytestbot pytestbot added the plugin: pytester label Jul 29, 2018
@nicoddemus nicoddemus added the type: enhancement and type: proposal labels and removed the plugin: pytester label Jul 29, 2018
@blueyed
Contributor

blueyed commented Aug 25, 2018

See also #3844.

@h-vetinari

h-vetinari commented Nov 14, 2018

[this is copied from a comment in #3844 that is now closed for unrelated reasons; I would suggest the syntax @pytest.mark.ignore() and pytest.ignore, in line with how skip currently works]

This would be extremely useful to me in several scenarios.

Pytest makes it easy (esp. through parametrization and parametrized fixtures) to test a cartesian product of parameter combinations. It's slightly less easy (not least because fixtures can't be reused as parameters) to reduce that cartesian product where necessary.

Often, certain combinations simply do not make sense (for what's being tested), and currently one can only really skip / xfail them (unless one starts complicated plumbing and splitting up of the parameters/fixtures).

But skips and xfails get output to the log (and for good reason - they should command attention and be eventually fixed), and so it's quite a simple consideration that one does not want to pollute the result with skipping invalid parameter combinations.

For this task, pytest.ignore would be the perfect tool. That this would be very intuitive is underlined by the fact that I wanted to open just such an issue before I found the exact same request here already.

@RonnyPfannschmidt
Member

pytest.ignore is inconsistent and shouldn't exist, because by the time it can happen the test is already running - by that time "ignore" would be a lie

at a work project we have the concept of "uncollect" for tests, which can take a look at fixture values to conditionally remove tests from the collection at modifyitems time

@h-vetinari

@RonnyPfannschmidt

pytest.ignore is inconsistent and shouldn't exist, because by the time it can happen the test is already running - by that time "ignore" would be a lie

What's so bad about "lying" if it's in the interest of reporting/logging, and directly requested by the user?

at a work project we have the concept of "uncollect" for tests, which can take a look at fixture values to conditionally remove tests from the collection at modifyitems time

"At work" sounds like "not in pytest (yet)". Can you elaborate how that works?

@RonnyPfannschmidt
Member

@h-vetinari the lies have a complexity cost - a collect-time "ignore" doesn't have to mess around with the other reporting, it can just take the test out cleanly - an outcome-level ignore needs a new special case for all report outcome handling and in some cases can't correctly handle it to begin with

for example, what's the correct way to report an ignore triggered in the teardown of a failed test - it's simply insane

as for the work project, it pretty much just goes off at pytest-modifyitems time and partitions based on a marker and conditionals that take the params

@blueyed
Contributor

blueyed commented Nov 15, 2018

@h-vetinari
As for handling this during collection time, see #4377 (comment) for an example, and for docs: https://docs.pytest.org/en/latest/reference.html?highlight=pytest_collection_modifyitems#_pytest.hookspec.pytest_collection_modifyitems.
I think it should work to remove the items that "do not make sense" there.

@h-vetinari

@blueyed
Thanks for the response! That seems like it would work, but is not very extensible, as I'd have to pollute my pytest_collection_modifyitems hook with the individual "ignores" for the relevant tests (which differ from test to test).

Is there a way to add a hook that modifies the collection directly at the test itself, without changing global behaviour?

@h-vetinari

@RonnyPfannschmidt
Thanks for the response. Obviously, I don't have anywhere near as good of an overview as you, I'm just a simple user. ;-)

for example, what's the correct way to report an ignore triggered in the teardown of a failed test - it's simply insane

You're right that skips only make sense at the beginning of a test - I would not have used it anywhere else in a test (e.g. after something that can fail), but I can see the problem from an API design perspective.

This is then getting closer again to the question I just asked to @blueyed, of having a per-test post-collection (or rather pre-execution) hook, to uncollect some tests. The essential part is that I need to be able to inspect the actual value of the parametrized fixtures per test, to be able to decide whether to ignore or not.

as for the work project, it pretty much just goes off at pytest-modifyitems time and partitions based on a marker and conditionals that take the params

This sounds great (if the params are the fixtures), but I'd need this on a per-test basis (maybe as a decorator that takes a function of the same signature as the test?).

@RonnyPfannschmidt
Member

@h-vetinari an extracted solution of what I did at work would have 2 components:

a) a hook to determine the namespace/kwargs for marker conditionals
b) a marker to control the deselection (most likely @pytest.mark.deselect(*conditions, reason=...))

CC @psav

@aldanor

aldanor commented Dec 12, 2019

@RonnyPfannschmidt @h-vetinari This has been stale for a while, but I've just come across the same issue - how do you 'silently unselect' a test? Are there any new solutions or propositions? If you have a large highly-dimensional parametrize-grid, this is needed quite often so you don't run (or even collect) the tests whose parameters don't make sense.

@h-vetinari

@aldanor
I haven't followed this further, but would still love to have this feature! :)

@RonnyPfannschmidt
Member

the only way to completely "unselect" is not to generate

the next best thing is to deselect at collect time

@aldanor

aldanor commented Dec 13, 2019

As @h-vetinari pointed out, sometimes "not generate" is not really an option, e.g. you have a highly-dimensional grid of parameters and you need to unselect just a few that don't make sense, and you don't want for them to show up as 'skipped', because they weren't really skipped.

So, as noted above, perhaps @pytest.mark.deselect(lambda x: ...) or something similar would work then?

@Tadaboody
Contributor

@aldanor
Why not then do something along the lines of

@pytest.mark.parametrize('test_name', [parameters for parameters in
                                       existing_parameter_generation()
                                       if make_sense(parameters)])

and that way not generate them

@nicoddemus
Member

nicoddemus commented Dec 17, 2019

@Tadaboody's suggestion is on point I believe. You can always preprocess the parameter list yourself and deselect the parameters as appropriate. @aldanor @h-vetinari @notestaff does that solve your issue?
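For concreteness, a minimal sketch of that preprocessing, assuming a hypothetical make_sense predicate: build the full grid with itertools.product and filter it before handing it to a single parametrize call.

import itertools
import pytest

def make_sense(x1, x2, x3, x4):
    # hypothetical predicate deciding which combinations are valid
    return x1 <= x2

combos = [
    (x1, x2, x3, x4)
    for x1, x2, x3, x4 in itertools.product(range(10), repeat=4)
    if make_sense(x1, x2, x3, x4)
]

@pytest.mark.parametrize("x1,x2,x3,x4", combos)
def test_foo(x1, x2, x3, x4):
    ...

As the next comments point out, this folds the deselection logic into the parameter generation and gives up the per-axis ids, which is the readability cost under discussion.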

@aldanor

aldanor commented Dec 18, 2019

@Tadaboody @nicoddemus

Not always, as I've mentioned above:

If you have a large highly-dimensional parametrize-grid

For instance,

@pytest.mark.parametrize('x1', list(range(10)), ids='x1={}'.format)
@pytest.mark.parametrize('x2', list(range(10)), ids='x2={}'.format)
@pytest.mark.parametrize('x3', list(range(10)), ids='x3={}'.format)
@pytest.mark.parametrize('x4', list(range(10)), ids='x4={}'.format)
def test_foo(x1, x2, x3, x4):
    # we want to silently ignore some of these 10k points
    ...

Yes, you could argue that you could rewrite the above using a single list comprehension, but then you'd have to redo the formatting, the whole thing becomes uglier and less flexible to extend, and your parameter generation is now mixed up with deselection logic.

You can always preprocess the parameter list yourself and deselect the parameters as appropriate

In the example above it would have sort of worked as a hack, but it fails completely once you have reusable parametrization:

@pytest.fixture(params=list(range(10)), ids='x1={}'.format)
def x1(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x2={}'.format)
def x2(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x3={}'.format)
def x3(request):
    return request.param

@pytest.fixture(params=list(range(10)), ids='x4={}'.format)
def x4(request):
    return request.param

def test_123(x1, x2, x3):
    # we want to deselect some of these 1k points
    ...

def test_234(x2, x3, x4):
    # we want to ignore some of these 1k points too
    ...

@nicoddemus
Member

nicoddemus commented Dec 18, 2019

@aldanor thanks for the example.

What do you suggest the API for skipping tests be?

@h-vetinari

h-vetinari commented Dec 18, 2019

@nicoddemus @aldanor

How about something like

@pytest.fixture.unselectif(function_of_x1_to_x3)
def test_123(x1, x2, x3):
    # we want to deselect some of these 1k points
    ...

resp.

@pytest.fixture.unselectif(function_of_x1_to_x4)
@pytest.mark.parametrize('x1', list(range(10)), ids='x1={}'.format)
@pytest.mark.parametrize('x2', list(range(10)), ids='x2={}'.format)
@pytest.mark.parametrize('x3', list(range(10)), ids='x3={}'.format)
@pytest.mark.parametrize('x4', list(range(10)), ids='x4={}'.format)
def test_foo(x1, x2, x3, x4):
    # we want to silently ignore some of these 10k points
    ...

This fixture.unselectif should take a function (e.g. lambda *args: ...) with exactly the same signature as the test (or at least a subset thereof). It could quite freely error if it doesn't like what it's seeing (e.g. requesting something that's not a known fixture), but - assuming everything is set up correctly - would be able to unselect the corresponding parameters already at selection time.

Note: the name is just an example, and obviously completely up for bikeshedding. After pressing "comment" I immediately thought it should rather be fixture.uncollect.

@aldanor

aldanor commented Dec 18, 2019

IIUC how pytest works, once you've entered the test function body, it's already too late. So there's not a whole lot of choices really, it probably has to be something like

@pytest.mark.select(lambda x: x[0] > x[1])
def test_123(x1, x2, x3):
    ...

  • You could also have @pytest.mark.deselect for convenience. But if only one existed, I'd prefer the former.
  • You could also have a reason=... optional kwarg, either a string or a lambda.

@nicoddemus
Member

@h-vetinari @aldanor it seems what you want is already possible with a custom hook and mark:

# contents of conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "uncollect_if(*, func): function to unselect tests from parametrization"
    )

def pytest_collection_modifyitems(config, items):
    removed = []
    kept = []
    for item in items:
        m = item.get_closest_marker('uncollect_if')
        if m:
            func = m.kwargs['func']
            if func(**item.callspec.params):
                removed.append(item)
                continue
        kept.append(item)
    if removed:
        config.hook.pytest_deselected(items=removed)
        items[:] = kept


# test_foo.py
import pytest

def uncollect_if(x, y, z):
    if x > 5:
        return True

@pytest.mark.uncollect_if(func=uncollect_if)
@pytest.mark.parametrize('x', range(10))
@pytest.mark.parametrize('y', range(10, 100, 10))
@pytest.mark.parametrize('z', range(1000, 1500, 100))
def test_foo(x, y, z, tmpdir):
    pass

This produces:

 λ pytest .tmp\uncollect\ -q
.............................................................. [ 22%]
.............................................................. [ 45%]
.............................................................. [ 68%]
.............................................................. [ 91%]
......................                                         [100%]
270 passed, 180 deselected in 1.12s

This can also easily be turned into a plugin.

@h-vetinari

@nicoddemus
Thanks for the demo, that looks cool! I think a plugin would be good, or even better: a built-in feature of fixtures. I'm saying this because otherwise, it would be much harder to get this into other projects (like numpy/pandas etc.), where the appetite for more plugins etc. is very low.

@nicoddemus
Member

nicoddemus commented Dec 19, 2019

A built-in feature is not out of question, but I'm personally not very keen on adding this to the core myself, as it is a very uncommon use case and a plugin does the job nicely.

Note also that if a project for some reason doesn't want to add a plugin as dependency (and a test dependency at that, so I think it should not have as much friction as adding a new dependency to the project itself), it can always copy that code into their conftest.py file.

We can definitely, though, add the example above to the official documentation as an example of customization. I would be happy to review/merge a PR to that effect. 😁

@SaturnIC

@nicoddemus : It would be convenient if the metafunc.parametrize function would cause the test not to be generated if the argvalues parameter is an empty list, because logically if your parametrization is empty there should be no test run.

Currently though if metafunc.parametrize(...,argvalues=[],...) is called in the pytest_generate_tests(metafunc) hook it will mark the test as ignored.

@nicoddemus
Member

Hi @SaturnIC,

Not sure, users might generate an empty parameter set without realizing it (a bug in a function which produces the parameters for example), which would then make pytest simply not report anything regarding that test, as if it didn't exist; this will probably generate some confusion until the user can figure out the problem.

I think adding a mark like we do today is a good trade-off: at least the user will have some feedback about the skipped test, and see the reason in the summary.

@RonnyPfannschmidt
Member

we don't mark a test ignored, we mark it skip or xfail with a given reason

@SaturnIC

SaturnIC commented Jun 12, 2020

Hi @nicoddemus,

thanks for the fast reply.
I think it cannot be the responsibility of pytest to prevent users from misusing functions against their specification; this would surely be an endless task.

Is there another good reason why an empty argvalues list should mark the test as skip (thanks @RonnyPfannschmidt) instead of not running it at all?

@RonnyPfannschmidt
Member

Offhand I am not aware of any good reason to ignore instead of skip/xfail

@h-vetinari

Offhand I am not aware of any good reason to ignore instead of skip/xfail

Some good reasons (I'm biased, I'll admit) have come up in this very thread.

In short: Having lots of parametrisation, but not polluting (skip-)log with tests that represent impossible parameter combinations.

@RonnyPfannschmidt
Member

I apologise, I should have been more specific.

there are good reasons to deselect impossible combinations; this should be done as a deselect at modifyitems time

The skip/xfail for an actually empty matrix seems still absolutely necessary for parameterization.

The missing capability of fixtures at modifyitems time creates this unnecessary hardship

@h-vetinari

No need to apologise. :)

Would just be happy to see this resolved eventually, but I understand that it's a gnarly problem.

@xeor

xeor commented Jun 12, 2020

Another approach, which might also give some additional functionality, might be:

def real_test_func(hostname, p):
    # real test
    assert ...

@pytest.generator()
@pytest.mark.parametrize('p', [1, 2, 3])
def test_foo(host, p):
    pytest.add(real_test_func, [host.name, p])

@RonnyPfannschmidt
Member

@xeor that proposal looks alien to me, and without a description of the semantics there's a lot of room for misunderstanding

@SaturnIC

SaturnIC commented Jun 12, 2020

The skip/xfail for an actually empty matrix seems still absolutely necessary for parameterization.

@RonnyPfannschmidt Why though? The empty matrix implies there is no test, thus also nothing to ignore?

@RonnyPfannschmidt
Member

It's typically a user error at parameterization, thus a required indication

@SaturnIC

Isn't a skipped test a poor substitute for a warning? Those two are very different things.

Couldn't warnings be sent out using the Python logger instead?

If one uses the same test harness for different test runs, it is very possible to have empty matrices deliberately.

@RonnyPfannschmidt
Member

What some of us do deliberately, many more of us do accidentally; silent omissions are a much more severe error class than a non-silent ignore.

The correct structural way to mark a parameter set as correct to ignore is to generate a one-element matrix with the indicative markers.
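A minimal sketch of that structural approach (the helper and values here are hypothetical): when a generated argvalues list would otherwise be empty, emit a single pytest.param carrying a skip marker, so the omission stays visible in the report.

import pytest

def build_params(values):
    # hypothetical helper: fall back to one marked placeholder instead of an empty argvalues list
    if not values:
        return [pytest.param(None, marks=pytest.mark.skip(reason="empty parameter matrix"))]
    return values

@pytest.mark.parametrize("value", build_params([]))
def test_bar(value):
    assert value is not None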

@xeor

xeor commented Jun 12, 2020

@RonnyPfannschmidt Thanks for the feedback. Just let me clarify the proposal if it looked weird (I'm not good at naming things).

The gist was to:

  • Have a test_ function that can generate tests, but is not a test itself - a test generator.
  • The test generator will still get parametrized params, and fixtures.
  • It can create tests however it likes based on info from parametrize or fixtures, but in itself is not a test.

It might not fit in at all though, but it seems like a good idea to support something like this in my case.

@RonnyPfannschmidt
Member

I would prefer to see this implemented as a callable parameter to parametrize, taking the node and eventually fixtures of a scope available at collect time.

@jan-matejka

Hi, I think I am looking at the same problem right now. I described it in more detail here: https://stackoverflow.com/questions/63063722/how-to-create-a-parametrized-fixture-that-is-dependent-on-value-of-another-param.

What I'd like to achieve is stacking parametrized fixtures as described at the SO link (I can reproduce it here if the link is an issue), and I think it would be a solution for other people here as well, as the proposal to ignore/deselect test cases looks like an XY problem to me, where the real solution would be to not generate the invalid test case combinations in the first place.

@drewbell

drewbell commented Aug 19, 2021

@pytest.mark.uncollect_if(func=uncollect_if)
@pytest.mark.parametrize('x', range(10))
@pytest.mark.parametrize('y', range(10, 100, 10))
@pytest.mark.parametrize('z', range(1000, 1500, 100))
def test_foo(x, y, z, tmpdir):
    pass


This produces:

λ pytest .tmp\uncollect\ -q
.............................................................. [ 22%]
.............................................................. [ 45%]
.............................................................. [ 68%]
.............................................................. [ 91%]
...................... [100%]
270 passed, 180 deselected in 1.12s

A built-in feature is not out of question, but I'm personally not very keen on adding this to the core myself, as it is a very uncommon use case and a plugin does the job nicely.

@nicoddemus thanks for the solution. I too want a simple way to deselect a test based on a fixture value or parametrized argument value(s) without adding to the "test skipped" list, and this solution's API is definitely adequate.

I understand that this might be a slightly uncommon use case, but I support adding something like this to the core because I strongly agree with @notestaff that there is value in differentiating tests that are SKIPPED vs tests that are intended to NEVER BE RUN. Solely relying on @pytest.mark.skip() pollutes the differentiation between these two and makes knowing the state of my test set harder.

...

I personally use @pytest.mark.skip() primarily to instruct pytest "don't run this test now", mostly in development as I build out functionality, almost always temporarily. In contrast, as people have mentioned, there are clearly scenarios where some combinations of fixtures and/or parametrized arguments are never intended to run. Sure, you could do this by putting conditions on the parameters, but that can hinder readability: sometimes code to remove a few items from a group is much clearer than the code to not add them to the group in the first place. Plus the "don't add them to the group" argument doesn't cleanly solve the scenario where I want to deselect/ignore a test based on the value of a fixture, as far as I can tell.

If all the tests I want to run are being run, I want to see an all-green message; that way the presence of "X tests skipped" tells me that something that should be tested is currently being skipped. As someone with an embedded background, the "X tests skipped" message feels like a compiler warning to me, and please forgive those of us who don't like living with projects that feel as if they have "compiler warnings" :)

I understand @RonnyPfannschmidt's concern with silent skips and think designating them as 'deselected' (1) gives good visibility, (2) differentiates them from skipped tests, and (3) does not attract excessive attention to tests that should never run.

Perhaps another solution is adding another optional parameter to skip, say 'ignore', to give the differentiation that some of us would like to see in the result summary.

def test_function():
    if not valid_config():
        pytest.skip("unsupported configuration", ignore=True)

Results (1.39s):
       19 passed
        1 skipped
        1 ignored  # it is very helpful to know that this test should never run

@adesiraju

@nicoddemus : It would be convenient if the metafunc.parametrize function would cause the test not to be generated if the argvalues parameter is an empty list, because logically if your parametrization is empty there should be no test run.

Currently though if metafunc.parametrize(...,argvalues=[],...) is called in the pytest_generate_tests(metafunc) hook it will mark the test as ignored.

This way worked for me, I was able to ignore some parameters using pytest_generate_tests(metafunc)
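A minimal sketch of that approach, assuming a hypothetical is_valid predicate: pytest_generate_tests builds only the combinations that make sense, so nothing has to be skipped or deselected afterwards.

import itertools

def is_valid(x, y):
    # hypothetical predicate for which combinations make sense
    return x < y

def pytest_generate_tests(metafunc):
    # generate only the valid (x, y) pairs for tests that request both fixtures
    if {"x", "y"} <= set(metafunc.fixturenames):
        argvalues = [(x, y) for x, y in itertools.product(range(5), range(5)) if is_valid(x, y)]
        metafunc.parametrize("x,y", argvalues)

def test_pair(x, y):
    assert x < y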

@leofang

leofang commented Jun 16, 2022

+1 for this feature, but it seems this thread hasn't been active for a while...

@tomhea

tomhea commented Jul 29, 2022

Until the feature is implemented:

The following code successfully uncollects and hides the tests you don't want.

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_collectreport(report):
    report.result = list(filter(should_hide, report.result))
    yield


@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(config, items):
    yield
    items[:] = filter(should_hide, items)

I used it with the following filtering function (should_hide = is_not_skipped) in order to hide skipped tests:

def is_not_skipped(test) -> bool:
    # fixtures_name_to_type is a user-defined mapping of fixture names to their expected types
    if hasattr(test, 'callspec') and hasattr(test.callspec, 'params'):
        params = test.callspec.params
        for fixture_name, fixture_type in fixtures_name_to_type.items():
            if fixture_name in params:
                return isinstance(params[fixture_name], fixture_type)
    return True

@jasongi

jasongi commented Mar 24, 2024

I've just published a plugin that implements an uncollect_if marker: pytest-uncollect-if (PyPI) - similar to @nicoddemus's implementation in this comment, with exactly the same API, but using a pytest_make_collect_report hookwrapper instead.

The reason I used pytest_make_collect_report instead of pytest_collection_modifyitems is that in the pytest_collection_modifyitems implementation the "uncollected" tests are reported as deselected - and if you just omit calling the deselected hook, they get reported as collected instead. For my particular use case, I just wanted to pretend they never existed in the first place. I might implement the pytest_collection_modifyitems version if there's interest, but I don't see much value in that personally.

Edit: actually I went ahead and did the deselect_if version too - pytest-deselect-if
