
Enhancement: Test discovery/test runner #10

Open
bilderbuchi opened this issue Jul 15, 2016 · 4 comments
@bilderbuchi

It would be great if XogenyTest would offer some kind of test runner, which could be used/extended in your package, would discover all tests (e.g. in a Test subpackage) automatically, run them in sequence, and report the results back. That way, one would not have to run all the created tests manually every time, which gets tedious with a large number of tests.

I already tried thinking about how to best approach this, but I'm a bit stumped.

  • Should this be a model or a function? A model, I guess, is easier to use/run on its own in an IDE.
  • Is it even possible in vanilla Modelica to programmatically discover all available models in a package, by some glob/regex matching or by the TestCase annotation? If not, how could this be programmed?
  • How can one avoid the whole test run stopping at the first failed assert (currently at error level)? Other test frameworks (e.g. the excellent pytest in Python) just catch the generated exceptions, report, and carry on testing. However, I could not find an equivalent to try/catch, exceptions, or any way of "treating" asserts in Modelica. (A partial workaround using warning-level asserts is sketched below.)
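
Editorial sketch of that workaround: Modelica's built-in assert takes an optional AssertionLevel argument, and a warning-level assertion is reported without aborting the simulation, so later checks in the same run still execute. The model name and threshold below are hypothetical.

```modelica
model WarningAsserts "Sketch: warning-level asserts do not abort the run"
  Real x = time;
equation
  // AssertionLevel.warning reports the violation but lets the simulation
  // continue, so subsequent assertions are still evaluated.
  assert(x <= 0.5, "x exceeded 0.5 (reported; run continues)",
    level = AssertionLevel.warning);
  // The default level is AssertionLevel.error, which aborts on failure.
end WarningAsserts;
```

This is not pytest-style exception handling, but it lets a batch of checks run to completion and collect all failures in the log.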
@kdavies4
Contributor

kdavies4 commented Jul 15, 2016

Just a couple of notes on how I used XogenyTest:

I create a Tests package that has the same structure as the library to be tested: each function has a corresponding test function and each model has a corresponding test model.

I aggregate the test functions into higher-level test functions, following the package hierarchy upwards. A Boolean "ok" response is the boolean and of the calls to all of the functions in the package and its subpackages (see the sketch below).

I aggregate the models in a Dymola test script (not pure Modelica, I know) by simulating them one by one. For models that are similar and simple, I guess you could instantiate multiple test models into a higher-level model, but that could get messy for the solver.

I know that this doesn't directly address your questions, but maybe it offers some ideas.
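
As a concrete illustration of the aggregation pattern described above (an editorial sketch; the package, function names, and checks are hypothetical, with leaf tests assumed to return a Boolean "ok"):

```modelica
package TestsSketch "Hypothetical Tests package aggregating Boolean test functions"
  function testA "Leaf test (hypothetical check)"
    output Boolean ok;
  algorithm
    ok := abs(2 + 2 - 4) < 1e-10;
  end testA;

  function testB "Another leaf test (hypothetical check)"
    output Boolean ok;
  algorithm
    ok := 3 > 2;
  end testB;

  function runAll "Top-level 'ok' is the boolean and of all subtests"
    output Boolean ok;
  algorithm
    ok := testA() and testB();
    // Warning level, so a failure is reported without aborting the caller.
    assert(ok, "At least one test in TestsSketch failed",
      AssertionLevel.warning);
  end runAll;
end TestsSketch;
```

Calling TestsSketch.runAll() then returns true only if every leaf test passed, matching the "boolean and" aggregation described above.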

@xogeny
Owner

xogeny commented Jul 15, 2016

The big issue here is that to do this right requires tool support. My goal, with this library, was to try and build a consensus around ways of doing testing with the hope that tool requirements would emerge organically from that. In a nutshell, you need to push the vendors to do more to support what you want.

@bilderbuchi
Author

@kdavies4 thanks for sharing your workflow! It is helpful; I'll see how far I get with a .mos script or some runner function.
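
For concreteness, a minimal sketch of such a Dymola .mos runner (editorial addition; the model names are hypothetical, and simulateModel is Dymola's scripting function, which returns true on success):

```modelica
// runTests.mos -- simulate each test model in sequence (Dymola scripting)
ok := simulateModel("MyLibrary.Tests.ModelA", stopTime=1);
ok := simulateModel("MyLibrary.Tests.ModelB", stopTime=1) and ok;
// 'ok' ends up true only if every simulation completed without error.
```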

@xogeny yeah, tool support would be useful, but I think first we'd need to have something "finished" that the tools could support and standardize on. Would it not be possible to first build something rudimentary but working (I love how you kept this library simple and thus universal), and then get tool vendors to add convenient support for it? How would one go about that?

Considering that there is some intersection of Modelica and Python here and there, I have already toyed with the idea of implementing a Modelica plugin for pytest. That way we could use some of the awesomeness of pytest, but I'm not sure how well it would work, given that Modelica is a "foreign language" for pytest.

OTOH, it would be great to try to stay within Modelica first, before looking elsewhere, if only for dependency/cross-language reasons.

/soapbox
It makes me sad that the same or a similar thing apparently gets reimplemented over and over again in N slightly different, incompatible ways, instead of people sitting together and defining a common, maintained, well-designed solution that all can help move forward. I wonder why testing support beyond asserts is not part of Modelica/the MSL? I'm not the only one wondering that.

Frankly, the more I get to know Modelica, the more I am astonished at what is missing, and at how some things just disappear or never get implemented.
What happened to the TestCase annotation that you are using in this library? (I can't find a trace of it anywhere else online.)
What happened to so many of the testing solutions one can find, which seem to get a paper at a Modelica conference and then disappear without a trace (e.g. MoUnit, OptimicaTestingToolkit)? Do they all get folded into commercial tools?
What happened to the exception handling @adrpo proposed in 2008?
Or to the partial derivative support that's even in Fritzson's current book, IIRC?

@thorade

thorade commented May 12, 2017

On GitHub, the TestCase annotation is used by XogenyTest and modelica-compliance
(https://github.com/search?l=Modelica&q=TestCase&type=Code),
but it is not a "standard" annotation; it seems to be just a vendor-specific annotation.
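
To make concrete what such a non-standard annotation looks like in source (editorial sketch; the model and the shouldPass argument are hypothetical, not taken from either library):

```modelica
model SomeTest "Hypothetical test model carrying a non-standard annotation"
  Real x = time;
equation
  assert(x >= 0, "x must be non-negative");
  // Tools that do not recognize the TestCase annotation simply ignore it;
  // its meaning is defined only by the tool or library that introduced it.
  annotation (TestCase(shouldPass = true), experiment(StopTime = 1));
end SomeTest;
```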
