Enhancement: Test discovery/test runner #10
Comments
Just a couple of notes on how I used XogenyTest: I create a Tests package that mirrors the structure of the library to be tested. Each function has a corresponding test function, and each model has a corresponding test model. I aggregate the test functions into higher-level test functions, following the package hierarchy upwards; a Boolean "ok" result is the logical AND of the calls to all of the functions in the package and its subpackages. I aggregate the models in a Dymola test script (not pure Modelica, I know) by simulating them one by one. For models that are similar and simple, you could instantiate multiple test models in a higher-level model, but that could get messy for the solver. I know this doesn't directly address your questions, but maybe it offers some ideas.
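If it helps to make that aggregation pattern concrete, a minimal sketch might look like the following. All names (MyLibrary, gain, the test functions) are hypothetical placeholders, not part of XogenyTest:

```modelica
package Tests "Mirrors the structure of the (hypothetical) MyLibrary"
  package MySubpackage
    function testGain "Test for MyLibrary.MySubpackage.gain"
      output Boolean ok;
    algorithm
      // Compare against an expected value within a tolerance
      ok := abs(MyLibrary.MySubpackage.gain(2) - 4) < 1e-10;
    end testGain;

    function testAll "Aggregate of all tests in this subpackage"
      output Boolean ok;
    algorithm
      // AND together every test function in this subpackage
      ok := testGain(); // and testOtherFunction() and ...
    end testAll;
  end MySubpackage;

  function testAll "Top level: Boolean AND over all subpackages"
    output Boolean ok;
  algorithm
    ok := MySubpackage.testAll(); // and MyOtherSubpackage.testAll() and ...
  end testAll;
end Tests;
```

Calling Tests.testAll() then returns true only if every test in the hierarchy passes.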
The big issue here is that doing this right requires tool support. My goal with this library was to build a consensus around ways of doing testing, in the hope that tool requirements would emerge organically from that. In a nutshell, you need to push the vendors to do more to support what you want.
@kdavies4 thanks for sharing your workflow! It is helpful; I'll see how far I get with a .mos script or some runner function. @xogeny yeah, tool support would be useful, but I think we'd first need something "finished" that the tools could support and standardize on. Would it not be possible to first build something rudimentary but working (I love how you kept this library simple and thus universal), and then get tool vendors to add convenient support for it? How would one go about that? I have already toyed with the idea, given that Modelica and Python intersect here and there, of implementing a Modelica plugin for pytest. That way we could use some of the awesomeness of pytest, but I'm not sure how well it would work, considering Modelica is a "foreign language" to pytest. On the other hand, it would be great to try and stay within Modelica first, before looking elsewhere, if only for dependency/cross-language reasons.

/soapbox Frankly, the more I get to know Modelica, the more I am astonished at some of the things that are missing, and at how some things just disappear or are never implemented.
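For the .mos route, a runner script could be as simple as the sketch below. This assumes Dymola's scripting environment, where simulateModel returns true on success; the model names are placeholders:

```modelica
// run_tests.mos -- hypothetical Dymola test-runner script
ok := true;

// Simulate each test model in sequence, accumulating the result
ok := simulateModel("Tests.MySubpackage.TestModelA") and ok;
ok := simulateModel("Tests.MySubpackage.TestModelB") and ok;

if ok then
  Modelica.Utilities.Streams.print("All test models simulated successfully.");
else
  Modelica.Utilities.Streams.print("At least one test model FAILED.");
end if;
```

The obvious drawback, as noted above, is that this is Dymola-specific rather than pure Modelica.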
It would be great if XogenyTest would offer some kind of test runner, which could be used/extended in your package, would discover all tests (e.g. in a Test subpackage) automatically, run them in sequence, and report the results back. That way, one would not have to run all the created tests manually every time, which gets tedious with a large number of tests.

I already tried thinking about how to best approach this, but I'm a bit stumped. Is there a way to discover all models that carry a TestCase annotation? If not, how could this be programmed? Many test frameworks (e.g. pytest in Python) just catch the generated exceptions, report, and carry on testing. However, I could not find an equivalent to try/catch, exceptions, or "treating" asserts in Modelica.
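One partial workaround that stays within the language: since Modelica 3.0, assert accepts an AssertionLevel argument, and AssertionLevel.warning reports the violation without aborting the simulation, so a test model can flag failures and keep running. A minimal sketch:

```modelica
model TestCaseSketch "Hypothetical test model using non-fatal asserts"
  Real x(start=1, fixed=true);
equation
  der(x) = -x;
  // A failed check is reported as a warning but does not
  // terminate the simulation, unlike the default AssertionLevel.error:
  assert(x >= 0, "x became negative", AssertionLevel.warning);
end TestCaseSketch;
```

This is not a full try/catch substitute (a runner still can't intercept hard errors), but it does allow "report and carry on" semantics for checks the test author controls.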