
Finish GrFN3 test suite generation #256


Open
pauldhein opened this issue Sep 24, 2021 · 0 comments

Overview

The collection of tasks defined below should be all that is needed to prepare the tests that will determine whether the grfn2gromet branch is ready to merge into master. All tasks should be carried out on the grfn2gromet branch or on a branch created from it.

Useful pointers

  • Python code examples: tests/data/program_analysis/language_tests/python/<idiom-name>/<test-name>/<test-name>.py
  • PyTest fixtures location: tests/conftest.py
  • PyTest mark definitions: ./pytest.ini
  • Full pipeline script: scripts/model_assembly/py2grfn.py

Tasks

  • Write a script that will auto-generate the test stubs for tests/program_analysis/test_python2cag.py, tests/program_analysis/test_cag2air.py, tests/model_assembly/test_air2grfn.py, and tests/model_assembly/test_grfn2cag.py for all of the Python code examples, following the three example stubs shown (a sketch of such a generator appears after this list)
  • Create or refactor tests/model_assembly/test_grfn_execution.py so that it exercises all of the Python code examples (see the execution-test sketch after this list)
    • Implement the GrFN3 execution framework
    • Run the Python code examples with sample inputs and record the resulting outputs in a JSON file that can be loaded during testing, so that GrFN execution can be verified to produce the expected output for a given input
  • Run the full pipeline script on each of the Python code examples with all of the flags necessary to produce the JSON output for CAST, AIR, GrFN, GrFN dotfiles, and CAG
    • This may require changes or corrections to the pipeline that generates these artifacts
    • Manually inspect the output to confirm that it is correct
  • Remove old tests that are no longer needed for the new pipeline, add any other desired test cases, and mark the sensitivity analysis test cases with pytest.mark.skip so that we keep a record of them without them being reported as failures for this PR (see the skip-marker example after this list)
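
A minimal sketch of what the stub generator could look like, assuming the example layout tests/data/program_analysis/language_tests/python/&lt;idiom-name&gt;/&lt;test-name&gt;/&lt;test-name&gt;.py from the pointers above. The TARGET_MODULES list, the stub template, the fixture naming, and write_stub_module are illustrative names, not existing code; the real stub bodies should follow the three example stubs referenced in the task.

```python
"""Sketch of a test-stub generator (assumptions: directory layout from the
"Useful pointers" section; the stub template and fixture names are placeholders)."""
from pathlib import Path

EXAMPLES_ROOT = Path("tests/data/program_analysis/language_tests/python")

# Target test modules named in this task; the stub body below is a placeholder.
TARGET_MODULES = [
    Path("tests/program_analysis/test_python2cag.py"),
    Path("tests/program_analysis/test_cag2air.py"),
    Path("tests/model_assembly/test_air2grfn.py"),
    Path("tests/model_assembly/test_grfn2cag.py"),
]

STUB_TEMPLATE = '''
def test_{name}({name}_fixture):
    # TODO: replace with assertions matching the three example stubs
    assert {name}_fixture is not None
'''


def iter_example_names():
    """Yield <test-name> for every <idiom-name>/<test-name>/<test-name>.py example."""
    for idiom_dir in sorted(EXAMPLES_ROOT.iterdir()):
        if not idiom_dir.is_dir():
            continue
        for test_dir in sorted(idiom_dir.iterdir()):
            if (test_dir / f"{test_dir.name}.py").exists():
                yield test_dir.name


def write_stub_module(target: Path) -> None:
    """Write one stub per example into the target test module."""
    stubs = [STUB_TEMPLATE.format(name=name) for name in iter_example_names()]
    target.write_text("import pytest\n" + "\n".join(stubs))


if __name__ == "__main__":
    for module in TARGET_MODULES:
        write_stub_module(module)
```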
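For the execution tests, one possible shape is below: each example directory carries a recorded sample_io.json of input/output cases, and a parametrized test re-executes the GrFN and compares against the recorded outputs. The sample_io.json layout, the grfn_from_example fixture (assumed to live in tests/conftest.py), and the call signature grfn(inputs) are all assumptions, not the existing API.

```python
"""Sketch of tests/model_assembly/test_grfn_execution.py (assumptions: a
sample_io.json recorded alongside each example, and a grfn_from_example
factory fixture in tests/conftest.py that builds an executable GrFN)."""
import json
from pathlib import Path

import pytest

EXAMPLES_ROOT = Path("tests/data/program_analysis/language_tests/python")


def example_dirs():
    """Collect every <idiom-name>/<test-name> example directory."""
    return sorted(p for p in EXAMPLES_ROOT.glob("*/*") if p.is_dir())


@pytest.mark.parametrize("example_dir", example_dirs(), ids=lambda p: p.name)
def test_grfn_execution(example_dir, grfn_from_example):
    """Execute the GrFN for one example and compare against recorded I/O."""
    sample_io = json.loads((example_dir / "sample_io.json").read_text())
    grfn = grfn_from_example(example_dir)   # fixture assumed to build the GrFN
    for case in sample_io["cases"]:
        outputs = grfn(case["inputs"])      # call signature is an assumption
        assert outputs == pytest.approx(case["expected_outputs"])
```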
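Marking the sensitivity analysis cases with the standard pytest skip marker keeps them in the suite without reporting them as failures; the test name below is hypothetical.

```python
import pytest


@pytest.mark.skip(reason="Sensitivity analysis not yet supported in the GrFN3 pipeline")
def test_sensitivity_analysis_sobol():  # hypothetical test name
    ...
```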