fix(crypto-square): put test name on single line #845
Conversation
Hello. Thanks for opening a PR on Exercism 🙂 We ask that all changes to Exercism are discussed on our Community Forum before being opened on GitHub. To enforce this, we automatically close all PRs that are submitted. That doesn't mean your PR is rejected but that we want the initial discussion about it to happen on our forum where a wide range of key contributors across the Exercism ecosystem can weigh in. You can use this link to copy this into a new topic on the forum. If we decide the PR is appropriate, we'll reopen it and continue with it, so please don't delete your local branch. If you're interested in learning more about this auto-responder, please read this blog post. Note: If this PR has been pre-approved, please link back to this PR on the forum thread and a maintainer or staff member will reopen it.
This PR touches files which potentially affect the outcome of the tests of an exercise. This will cause all students' solutions to affected exercises to be re-tested. If this PR does not affect the result of the tests (or, for example, adds an edge case that is not worth re-running all tests for), please add the following to the merge-commit message, which will stop students' tests from re-running. Please copy-paste to avoid typos.
For more information, refer to the documentation. If you are unsure whether to add the message or not, please ping
Just to be sure, is the issue the "spaces" or that the whole TEST_CASE line is split over two lines?
In the original review, it sounded like multi-line strings were the issue. But I just had a quick look at the parser of the runner and it seems to be quite primitive. We need to disable clang-format anyway, so we might as well put everything on a single line just to be sure. Let me quickly change that. Also for the other tests.
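To make the change under discussion concrete, here is a hedged sketch of a single-line TEST_CASE with clang-format disabled around it. The test name, body, and include path are placeholders, not the actual crypto-square tests from this PR:

```cpp
// Hypothetical illustration only; not the actual diff from this PR.
// The include path is an assumption (Catch2 v2 single-header style).
#include "catch.hpp"

// The whole TEST_CASE(...) header stays on one line, and clang-format is
// told not to re-wrap it, so a line-based search for the test name works.
// clang-format off
TEST_CASE("normalizes the plaintext by removing punctuation and downcasing (placeholder name)") {
    REQUIRE(true);  // placeholder body; the real test exercises the crypto-square code
}
// clang-format on
```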
Here's the relevant comment from the original review: #827 (comment)
@iHiD The test runner searches for the names of the tests in the $slug_test.cpp file.
Force-pushed from e328bf5 to 0d562bf
@ahans Is that second commit really necessary, with all those
That was in response to @iHiD's comment, just to be on the safe side. I'm still not fully following what the parser is doing. If
Force-pushed from 0d562bf to 6df7197
The website should be able to display the test code. Therefore the test runner has to locate the code of each test case in the test file.
I'm not following; why is it temporary?
@siebenschlaefer I reverted that now and confirmed with the test runner locally that it still works. With the crypto-square test from
With the
Because it's a known limitation of the test runner that I will look into fixing. I can think of a fairly simple fix for the multi-line string issue that we're facing here, but ideally the parser would be able to handle any valid C++ code. Anyway, I will add some thoughts over at this issue.
The test runner in v3 had to print the code that is actually run for each test. I wrote the runner to search for the "name" lines in the $slug_test.cpp in order to extract the content of each test case. If the name is split across lines, it cannot find the positions and fails.
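For context, a minimal sketch of why a split name defeats such a search, assuming a naive line-by-line scan; this is not the actual test runner code:

```cpp
// Naive line-based scan, sketched for illustration; NOT the real test runner.
// It records the line of every TEST_CASE whose quoted name is on the same
// line as the macro. A name continued on the next line is simply never found.
#include <fstream>
#include <iostream>
#include <regex>
#include <string>

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: scan <slug>_test.cpp\n";
        return 1;
    }
    std::ifstream in{argv[1]};
    const std::regex test_header{R"re(TEST_CASE\s*\(\s*"([^"]*)")re"};
    std::string line;
    int lineno = 0;
    while (std::getline(in, line)) {
        ++lineno;
        std::smatch match;
        if (std::regex_search(line, match, test_header)) {
            std::cout << lineno << ": " << match[1] << '\n';
        }
        // If clang-format wrapped the header so that TEST_CASE( and the
        // quoted name end up on different lines, this match never fires
        // and the test case is lost to the parser.
    }
}
```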
Looks good. I just tested it locally and got a valid results.json for a correct and an incorrect solution.
Thanks all!
One question: why did the CI not detect that the previous test broke things? Maybe something we can add?
We would have to compile the same program that the test runner is using and use that binary in CI, then check the validity of the resulting JSON.
Many test runners spin up the Docker image that's built for production and run some golden tests in there. I'm guessing that's not the case here then? I think @ErikSchierboom has a pretty quick workflow to add that to a repo, so maybe he does that when he's back?
The test runner's golden tests are in place. This exercise had a "malformed" test file that compiles fine but breaks the test runner's parsing (and therefore the JSON output) in production. The only way I see to detect this beforehand is to run every commit through the real thing.
Having a full-blown integration test that runs each example implementation through an instance of the Docker image would probably be the most complete solution. But the critical part is the parsing. Wouldn't it be possible to combine this with the regular tests using the example implementations? The executables are built anyway. Just run them again with
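As a rough sketch of the "check the validity of the resulting JSON" idea mentioned above, assuming the nlohmann/json library and a results.json containing a "tests" array (neither of which is confirmed for this repo's setup), a small checker could fail the CI job when the file is missing, malformed, or empty:

```cpp
// Hypothetical CI helper, assuming nlohmann/json; not part of this repository.
// Exits non-zero when results.json is missing, malformed, or has no test entries.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>

int main(int argc, char* argv[]) {
    const char* path = argc > 1 ? argv[1] : "results.json";
    std::ifstream in{path};
    if (!in) {
        std::cerr << "cannot open " << path << '\n';
        return EXIT_FAILURE;
    }
    nlohmann::json results;
    try {
        in >> results;  // parse; throws on malformed JSON
    } catch (const nlohmann::json::parse_error& e) {
        std::cerr << "malformed JSON: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
    // The results format is assumed to expose a "tests" array; adjust as needed.
    if (!results.contains("tests") || results["tests"].empty()) {
        std::cerr << "no test entries found in " << path << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```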