
Feature tests and generation of test code should use CLI #847

Closed

kanigsson opened this issue Nov 11, 2021 · 3 comments

@kanigsson
Collaborator

On #713, I am modifying the interface for the code generation slightly, as was discussed during review. The parser now also produces an Integration object that needs to be passed to the generator.

Instead of adapting feature_test.py and generate_spark_test_code.py to the new interface, I think these scripts should invoke the rflx CLI instead.

I would go even slightly further and transform the feature tests into tests that execute a script similar to this:

  • code generation (also check if identical to baseline)
  • compilation
  • execution
  • proof

The script would not proceed to the next step if one step fails. This would replace the current test_compilability, test_executability etc.
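The fail-fast behavior described above could be sketched roughly as follows. This is a minimal sketch only; the step commands shown are placeholders standing in for the real rflx CLI, compiler, and prover invocations, which are not specified in this issue.

```python
import subprocess
import sys


def run_pipeline(steps):
    """Run each (name, command) step in order; stop at the first failure.

    Returns the name of the failed step, or None if all steps succeeded.
    """
    for name, cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return name
    return None


# Placeholder commands -- the actual invocations (rflx generate, gprbuild,
# gnatprove, ...) would be substituted here.
STEPS = [
    ("generation", [sys.executable, "-c", "print('generate')"]),
    ("compilation", [sys.executable, "-c", "print('compile')"]),
    ("execution", [sys.executable, "-c", "print('execute')"]),
    ("proof", [sys.executable, "-c", "print('prove')"]),
]

if __name__ == "__main__":
    failed = run_pipeline(STEPS)
    if failed is not None:
        sys.exit(f"step '{failed}' failed")
```

Each step only runs if all previous steps succeeded, which is the replacement proposed for the separate test_compilability, test_executability, etc. functions.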

Arguments in favor:

  • avoid code duplication
  • transforms feature tests into end-to-end tests that also test the CLI

Arguments against:

What's your opinion, @treiher?

@kanigsson kanigsson added the bug label Nov 11, 2021
@kanigsson kanigsson self-assigned this Nov 11, 2021
@kanigsson
Collaborator Author

By the way, I used the label "bug" for lack of a better label. Maybe we should add a label "QA", or "quality", or "testing"?

@treiher
Collaborator

treiher commented Nov 11, 2021

By the way, I used the label "bug" for lack of a better label. Maybe we should add a label "QA", or "quality", or "testing"?

Yes, I think that could be helpful; I will add one.

Instead of adapting feature_test.py and generate_spark_test_code.py to the new interface, I think these scripts should invoke the rflx CLI instead.

The input for generate_spark_test_code.py contains Python models, which cannot be used as input for rflx. We could consider converting the Python models to specifications. I'm not sure yet whether it would be possible to completely replace the models with specifications, or whether it would make sense to keep both and check that they are in sync.

I would go even slightly further and transform the feature tests into tests that execute a script similar to this:

  • code generation (also check if identical to baseline)
  • compilation
  • execution
  • proof

The script would not proceed to the next step if one step fails. This would replace the current test_compilability, test_executability etc.

The reason for having separate test functions (like test_equality and test_executability) is the possibility to perform steps separately. I think that is a useful feature during development. For example, I'm not always interested in having "equality", when testing the effects of a changed code generation on the functional correctness or on the proof.
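One way to reconcile the fail-fast pipeline with the wish to run steps separately during development would be a step-selection flag. The sketch below is purely illustrative; the --step option and step names are assumptions for this example, not existing rflx or test-suite options.

```python
import argparse

# Hypothetical step names matching the proposed pipeline.
STEP_ORDER = ["generation", "compilation", "execution", "proof"]


def select_steps(only=None):
    """Return the steps to execute: the full pipeline, or one named step."""
    if only is None:
        return list(STEP_ORDER)
    if only not in STEP_ORDER:
        raise ValueError(f"unknown step: {only}")
    return [only]


def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--step",
        choices=STEP_ORDER,
        default=None,
        help="run only this step instead of the full pipeline",
    )
    args = parser.parse_args(argv)
    return select_steps(args.step)
```

With such a flag, a developer interested only in the proof step could skip equality and compilation checks, while CI would still run the full fail-fast sequence by default.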

Arguments against:

I also couldn't find a definitive answer. The linked documentation doesn't explicitly mention sub-processes started using the subprocess module.

Arguments in favor:

  • avoid code duplication
  • transforms feature tests into end-to-end tests that also test the CLI

Those are worthwhile goals, but I have the feeling that the necessary work outweighs the benefits at the moment. It would be an optimization, but it doesn't solve a concrete problem. I suppose the changes needed for #713 should be quite simple, even if they have to be made in multiple places (correct me if I'm wrong). I think we should keep this issue open and reconsider its implementation at a later time.

@treiher treiher added testing and removed bug labels Nov 11, 2021
@senier
Member

senier commented Nov 1, 2022

We don't currently see a benefit in implementing this ticket.

@senier senier closed this as not planned Nov 1, 2022