
Adding external tests to a collection #421

Closed
pytestbot opened this issue Jan 12, 2014 · 13 comments
Labels
type: enhancement new feature or API change, should be merged into features branch

Comments

@pytestbot
Contributor

Originally reported by: Trevor Bekolay (BitBucket: tbekolay, GitHub: tbekolay)


It would be nice if it were possible to run tests from outside of the current package when running py.test on the current package. A possible use case for this functionality would be to have a common set of tests for many packages implementing a certain protocol or standard. Another use case is testing pluggable features (this was the origin of the idea; see this SO question).

Two things are necessary for this to happen. First, you must get a set of tests contained in another package. Second, you must add that set of tests to the current package.

For getting access to external tests, you can do something similar to what occurs when doing py.test --collect-only and providing the path to the external package. However, it should additionally be possible to disable all reporting, and to get programmatic access to the tests collected by that run.

For adding external tests to the current collection, one possibility is pytest_collection_modifyitems. If the tests can be collected programmatically they can just be added to the items list. However, a more explicit hook like pytest_collection_addexternalitems or something might be preferable.

Since I'd like this for one of my projects, I'm motivated to implement this, but it'd be best to discuss a nice API for doing this before starting an implementation.


@pytestbot
Contributor Author

Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Thanks for the initiative. One design issue is how to "override" or "substitute" fixtures. Suppose you want to run tests found in somepath/test_base.py, and tests there need a fix1 fixture which is defined in a somepath/conftest.py file. We then want our "inheriting" project to be able to define fix1 and have it used for those tests (that's kind of the whole point). We cannot simply ignore the other conftest file because it may define other fixtures which somepath/test_base.py needs. I guess we would need something in the API that allows saying "use fix1 from here for these tests".

@pytestbot
Contributor Author

Original comment by Trevor Bekolay (BitBucket: tbekolay, GitHub: tbekolay):


Right, that's a good point, it's very likely (certain?) that there will be some conflicting fixtures. How is this handled right now within a package? I just tried defining a funcarg in a test file and in conftest.py: the one in the test file took precedence over the one in conftest.py, and it worked as expected. Could there be a similar precedence order in which the fixtures in the current package (the one that first invoked pytest) take precedence over those in the 'remote' package? So, however we collect tests from the remote package, it could start off with the configuration of the local package and only add fixtures, not replace existing ones.

@pytestbot
Contributor Author

Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Right now, fixtures are looked up in "source order": first in the class of
the test, then the module, then each conftest.py outward from the test file,
and finally global plugins. The internal data structures are tied to this
source/file order. For the inheritance/base scheme we would need to
modify/extend the lookup logic, I guess. It is implemented in the
FixtureManager class in _pytest/python.py; see the docstring.
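That "closest definition wins" lookup can be pictured with a toy model. This is just an illustration, not pytest's actual implementation:

```python
# Toy model of pytest's fixture lookup: scopes are ordered from the
# closest (the test's class) outward to global plugins; the first
# scope that defines the requested name wins.
def resolve_fixture(name, scopes):
    for scope_name, fixtures in scopes:
        if name in fixtures:
            return scope_name, fixtures[name]
    raise LookupError(f"fixture {name!r} not found")

scopes = [
    ("class",    {}),
    ("module",   {"fix1": "fix1 from test module"}),
    ("conftest", {"fix1": "fix1 from conftest", "fix2": "fix2 from conftest"}),
    ("plugins",  {"tmp_path": "built-in"}),
]

# The test module's fix1 shadows the conftest one; fix2 still
# resolves to the conftest definition.
resolve_fixture("fix1", scopes)  # -> ("module", "fix1 from test module")
resolve_fixture("fix2", scopes)  # -> ("conftest", "fix2 from conftest")
```

The "inheriting package" idea would amount to splicing an extra scope into this chain, which is why the lookup logic would need extending.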

@pytestbot pytestbot added the type: enhancement new feature or API change, should be merged into features branch label Jun 15, 2015
@mloning

mloning commented Oct 14, 2021

Hi all, I'm looking for a solution to the same problem. The issue is a few years old, are there any updates on this or recommended ways of doing this with pytest today?

@nicoddemus
Member

Hi @mloning,

Indeed that issue is quite old and lost steam. There are some issues to consider, for example: do the external tests have fixtures?

I think the best bet would be to create the tests programmatically and add them to the collection in pytest_collection_modifyitems. However, there's no "ready to use" solution for this; it will need some exploration and probing to make it happen.
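The hook itself is just a function in conftest.py that receives the mutable list of collected items. A minimal sketch, where make_external_items is a hypothetical helper standing in for whatever builds the extra tests:

```python
# conftest.py sketch. make_external_items is hypothetical: real code
# would construct pytest Item objects from another package's tests.
def make_external_items(session):
    return []  # placeholder; nothing is built in this sketch

def pytest_collection_modifyitems(session, config, items):
    # `items` is mutable; anything appended here becomes part of the run.
    items.extend(make_external_items(session))
```

The hard part, as discussed above, is constructing valid Item objects for tests that live outside the rootdir, which is why there is no ready-made solution.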

@mloning

mloning commented Oct 14, 2021

Hi @nicoddemus, thanks for the reply!

The use case is as follows:

  • We have a base package in which we define an API standard for different kinds of objects (in our case, machine learning algorithm types, e.g. classifier, forecaster, transformer)
  • We have a set of unit tests to make sure a given object complies with the API standard (e.g. any classifier needs to have a fit and predict method), we use fixtures and pytest_generate_tests to parametrize our test functions
  • We have a companion package which provides additional implementations of these kinds of algorithms
  • We want to run the unit tests defined in the base package in the companion package to make sure its objects also comply with the API standard

I'd really appreciate it if you could add a bit more detail on how to work with pytest_collection_modifyitems. Let me know if anything else comes to mind on how best to address this.

@nicoddemus
Member

nicoddemus commented Oct 14, 2021

Hi @mloning,

Thanks for providing more details.

In that case I do have a suggestion on how to run tests from a base package in a companion package, to test that some implementation in companion implements an interface correctly, a kind of "conformance testing".

In the base package, you will need a base class that contains your tests and also implements an "empty" fixture which provides the object for conformance testing. The tests in this base class will then use that fixture.

# base/common.py
import pytest

class BaseConformanceSuite:

    @pytest.fixture
    def target(self):
        """Returns the object that will be checked for conformance"""
        assert 0, "This fixture needs to be re-declared in subclasses"

    def test_fit(self, target):
        assert target.fit(...)

    def test_predict(self, target):
        assert target.predict(...)

Note the class name does not start or end with Test on purpose, as it is an "abstract" class in the sense that it should not be collected by pytest.

You then subclass from BaseConformanceSuite in base, or any other package (including companion), to apply that set of tests to a new target implementation, for example:

# base/tests/test_transform.py
import pytest

from base.common import BaseConformanceSuite
from base.algorithms import TransformAlgo

class TestTransform(BaseConformanceSuite):

    @pytest.fixture
    def target(self):
        return TransformAlgo()

# companion/tests/test_forecaster.py
import pytest

from base.common import BaseConformanceSuite
from companion.algorithms import ForecasterAlgo

class TestForecaster(BaseConformanceSuite):

    @pytest.fixture
    def target(self):
        return ForecasterAlgo()

Note that these class names start with Test, so pytest will collect and run their tests.

In summary, the trick is to use base inheritance to get your tests from one package to another. This doesn't require implementing any hooks, and fixtures/collection will work naturally.

@mloning

mloning commented Oct 14, 2021

Thanks @nicoddemus - this is really helpful! I'll get back to you in case we have more questions or get stuck, but I think I know what we have to do now.

@mloning

mloning commented Oct 16, 2021

@nicoddemus one follow-up question: is there an easy way to parametrize the fixture when it is defined as a method, as above? Instead of a single ForecasterAlgo() we'd like to iterate over multiple ones.

We're currently using pytest_generate_tests to do that, but having this in the class as a method seems to be the more elegant solution. So we could do something like this; just wondering if there's a better way of doing it.

class TestForecaster(BaseConformanceSuite):

    @pytest.fixture(params=all_forecasters)
    def target(self, estimator):
        return estimator.param

@kalekundert
Contributor

Not sure if that snippet is meant to be pseudo-code, but it's pretty much the right syntax. The argument just needs to be named request:

import pytest

class TestForecaster(BaseConformanceSuite):

    @pytest.fixture(params=all_forecasters)
    def target(self, request):
        return request.param

More info on parametrized fixtures: https://docs.pytest.org/en/6.2.x/fixture.html#fixture-parametrize

@mloning

mloning commented Oct 21, 2021

Thanks @kalekundert

@KudryashovIlya

KudryashovIlya commented Sep 20, 2023

Hi all!
I'm trying to understand whether it can be implemented or not: I need to add several "external" tests to the current session.
In our project we have 8 environments and the test structure looks like this:

  • tests
    • common_case
      • case_1.py
      • ...
    • test_env_1
      • test_1.py
      • ...
    • test_env_2
      • test_1.py
      • ...

In test_*.py we declare tests that inherit from common_case (since the cases and steps are repeated and differ only in some steps/conditions).
So, when I change one common case, I need to check how it will work in all environments; in the current implementation I have to do this manually, so the idea arose to automate it.
I tried to deal with _pytest.python.pytest_pycollect_makeitem, having previously created a Module instance, but in the end it doesn't work:

>>> path = pathlib.Path(<path_to_test)
>>> module = _pytest.python.pytest_collect_file(path, items[0].parent) # items
>>> _pytest.python.pytest_pycollect_makeitem(module, 'test', module.obj)
>>>

I tried to do it as suggested here, but either I didn’t understand the idea or it didn’t suit me.

@nicoddemus
Member

I tried to deal with _pytest.python.pytest_pycollect_makeitem, having previously created a Module instance, but in the end it doesn't work:

The idea is for you to implement your own pytest_pycollect_makeitem function, which implements its own logic, rather than calling pytest's own.

Take a look at pytest-cpp for an example of a plugin which creates new tests from external resources.
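The general shape of such a plugin follows pytest's documented "working with non-python tests" pattern: claim a file in pytest_collect_file, then yield items from a custom File subclass. A hedged sketch (the `.case` extension and the ExternalFile/ExternalItem names are hypothetical; pytest 7+ API):

```python
# conftest.py sketch of a custom collector, in the style of pytest-cpp.
import pytest

def pytest_collect_file(parent, file_path):
    # Claim files we know how to turn into tests.
    if file_path.suffix == ".case" and file_path.name.startswith("test"):
        return ExternalFile.from_parent(parent, path=file_path)

class ExternalFile(pytest.File):
    def collect(self):
        # One item per non-empty line of the file, as a trivial example.
        for i, line in enumerate(self.path.read_text().splitlines()):
            yield ExternalItem.from_parent(self, name=f"case{i}", spec=line)

class ExternalItem(pytest.Item):
    def __init__(self, *, spec, **kwargs):
        super().__init__(**kwargs)
        self.spec = spec

    def runtest(self):
        # Real code would execute the external case; this sketch only
        # checks the line is non-empty.
        assert self.spec.strip()
```

Each yielded item participates in collection, reporting, and fixtures like any native test, which is why this route is usually preferable to calling pytest's internal collection functions directly.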
