Report regression test coverage #253
Very nice!
@MichaMans : Let's discuss in Aachen how you run your tests and what the workflow/use case is. We run our tests by starting multiple CI jobs on Travis so that we can use multiple instances in parallel, each running a few packages. In this case, coverage would never show 100%, even though collectively we run all tests.
@mwetter yes, sure, let's discuss in Aachen. We are using a very similar setup, so we can refactor the feature to be usable in a general way, also in the Travis CI/coverage approach (there I might need some help). A solution for the parallel setup might be to configure a coverage-only test and use it for the overall coverage result. Antoine's work looks very promising too, but it might not provide the GitLab/Travis coverage display features.
@MichaMans : I moved your code to the branch. However, it does not seem to be doing the right thing. For example:

```
$ ../bin/runUnitTests.py -s IBPSA.Controls.Discrete
Regression tests are only run for the following package:
  IBPSA.Controls.Discrete
***
Coverage: 7%
***
You are testing : 1 out of 15 total examples in Controls
***
The following examples are not tested
/Controls/Continuous/Examples/OffTimer.mo
/Controls/Continuous/Examples/SignalRanker.mo
/Controls/Continuous/Examples/PIDHysteresis.mo
/Controls/Continuous/Examples/LimPIDWithReset.mo
/Controls/Continuous/Examples/PIDHysteresisTimer.mo
/Controls/Continuous/Examples/LimPID.mo
/Controls/Continuous/Examples/NumberOfRequests.mo
/Controls/Continuous/Validation/LimPIDReset.mo
/Controls/Continuous/Validation/OffTimerNonZeroStart.mo
/Controls/SetPoints/Examples/OccupancySchedule.mo
/Controls/SetPoints/Examples/Table.mo
/Controls/SetPoints/Examples/HotWaterTemperatureReset.mo
/Controls/SetPoints/Validation/OccupancyScheduleNegativeStartTime.mo
/Controls/SetPoints/Validation/OccupancySchedulePositiveStartTime.mo
Using 1 of 48 processors to run unit tests for dymola.
Number of models   : 369
          blocks   : 110
          functions: 119
Generated 1 regression tests.
Comparison files output by funnel are stored in the directory 'funnel_comp' of size 0.0 MB.
Run 'report' method of class 'Tester' to access a summary of the comparison results.
Script that runs unit tests had 0 warnings and 0 errors.
```

I asked it to test IBPSA.Controls.Discrete, but the coverage is computed over all of Controls.
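For reference, 1/15 rounds to 7%, so the report is counting over all of Controls rather than only the selected Controls.Discrete. A minimal sketch of the intended per-package scoping follows; the helper names and the directory walk are illustrative assumptions, not BuildingsPy's actual API:

```python
import os

def list_example_models(package_dir):
    """Collect all .mo files under Examples/ and Validation/ of a package.

    Hypothetical helper; BuildingsPy's real model discovery differs.
    """
    models = []
    for root, _, files in os.walk(package_dir):
        if os.path.basename(root) in ("Examples", "Validation"):
            models.extend(os.path.join(root, f)
                          for f in files if f.endswith(".mo"))
    return models

def print_coverage(tested_models, package_dir):
    """Report coverage relative to the selected package only."""
    all_models = list_example_models(package_dir)
    untested = sorted(set(all_models) - set(tested_models))
    ratio = 100.0 * len(tested_models) / max(len(all_models), 1)
    print("Coverage: {:.0f}%".format(ratio))
    print("You are testing {} out of {} examples in {}".format(
        len(tested_models), len(all_models), os.path.basename(package_dir)))
    for model in untested:
        print("  not tested: {}".format(model))
```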
@mwetter Thanks for moving it. I'll have a look at it again. You are right, it does not seem to be working correctly. Just for clarification, this is what it should do:

Do you generally agree that, if this works right, it would be a useful addition to BuildingsPy?
@MichaMans : I am still struggling a bit with the exact use case: when would you not have 100% "coverage"? I would need to dig in again to see how exactly we recognize a model as an "Example". I think the test would be that the example is somehow excluded from the tests, either because it is listed in the json file or because the experiment annotation or .mos script is missing. But wouldn't the latter be considered an error rather than a lack of coverage? Also, "coverage" is in my view misleading: if you have, for example, a MixingVolume and only one test that exercises it as a dynamic mixing volume, you did not cover the equations that would be used if it were configured as steady-state. We should therefore think about whether there is a better term.
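As a sketch, the missing-script check could look like the following, assuming the common IBPSA/Buildings convention that each regression-tested model has a matching run script under Resources/Scripts/Dymola (this path layout is an assumption here, not confirmed by the thread):

```python
import os

def has_run_script(library_root, model_file):
    """Return True if a Dymola .mos run script exists for the model.

    Assumes the convention that e.g.
      Controls/Continuous/Examples/OffTimer.mo
    is driven by
      Resources/Scripts/Dymola/Controls/Continuous/Examples/OffTimer.mos
    relative to the library root.
    """
    rel = os.path.relpath(model_file, library_root)
    script = os.path.join(library_root, "Resources", "Scripts", "Dymola",
                          os.path.splitext(rel)[0] + ".mos")
    return os.path.isfile(script)
```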
@mwetter I see your last point and would generally agree with it. We could definitely discuss it and maybe find a "real" coverage test for Modelica. Regarding your first point: I would agree that for modelica-ibpsa the coverage is maybe always 100% 😃, but it is definitely not for AixLib, and I have no idea about the other libraries. Speaking for the AixLib, we provide a lot of examples of how models work or are used in, for example, a system context, but these do not yet have a test script and so are not tested within the CI. That is why such a feature is useful in our case.
I see. Then it would be good to flag these examples in your CI testing with some "coverage" metrics.
I agree, test coverage would usually mean the ratio (tested models and variants) / (all existing models and variants).
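With the numbers from the transcript above, that ratio works out as follows (illustrative only):

```python
# Illustrative only: counts taken from the transcript above.
n_tested = 1    # models with a regression test in the selected scope
n_total = 15    # all Examples/Validation models in that scope
coverage = 100.0 * n_tested / n_total  # 6.67, printed rounded as 7%
```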
@MichaMans : I would call it "Models-Coverage" as it also includes models in
What is the problem / Suggestion?

Why do we want to solve it?

How do we want to solve it?

Repo/bin/runUnitTests.py script. The result could be the following:

@thorade @mwetter @Mathadon what do you think? Any objections, additions?
Maybe related to #245