# Add a test plan #123
Overall, seems like a solid plan. I left some comments.
> ### Project Overview
>
> The single user application will be installed with pip, and offer a web application (running on localhost) that guides the user through the application of differential privacy on a data file they provide.
"Single user application" is a phrase I haven't heard much. I think I know what you mean but perhaps this could be simplified to just "application".
> ### Out-of-Scope
>
> - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
Suggested change:
> - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but performance is not the focus of automated testing.
With "it's" I get slightly confused. I don't think you're referring to "test timeouts", which would be "they're".
> ### Out-of-Scope
>
> - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
> - Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
Suggested change:
> - Rendering of preview charts: We might [add a results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
Or plural? Results tables? 🤔
> - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
> - Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
> - Testing exact values: The outputs is randomized, so we can not test for any particular value in the output.
Suggested change:
> - Testing exact values: The outputs are randomized, so we can not test for any particular value in the output.
> - For more complex functions and operations, use pytest unit tests.
> - Run linting and type checking tools as part of tests.
> - Use Playwright for end-to-end tests.
> - Use test matrix on Github CI if we think there will be any senstivity to versions or browsers.
Suggested change:
> - Use test matrix on Github CI if we think there will be any sensitivity to versions or browsers.
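As an illustration of the first bullet in the quoted plan, a pytest unit test for a small pure helper might look like this sketch. Note `clip_value` is a hypothetical example function, not the project's actual API:

```python
# Hypothetical helper of the kind a DP app might use to bound a
# column's values before adding noise. Illustrative only.
def clip_value(x: float, lower: float, upper: float) -> float:
    """Clamp x into the closed interval [lower, upper]."""
    return max(lower, min(upper, x))


def test_clip_value():
    assert clip_value(5.0, 0.0, 10.0) == 5.0    # inside bounds: unchanged
    assert clip_value(-3.0, 0.0, 10.0) == 0.0   # below: clamped to lower
    assert clip_value(42.0, 0.0, 10.0) == 10.0  # above: clamped to upper
```

Deterministic tests like this complement the end-to-end Playwright runs, since they fail fast and don't need a browser.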
> - Reviewers should read and understand code.
> - Reviewers are not expected to checkout or manually test code.
I suggested this in Slack but perhaps we could have a bullet for the following idea:
Should we have an additional column after "in review"? For Dataverse, after someone clicks "approve", automation moves the PR to the next column. We call this column "ready for QA".
Personally, I find checking out and running the code helpful to understanding it, so I usually do it. I don't think we need to say that reviewers are not expected to do that. Also, since we don't have a QA step, running the code helps to find bugs in the review step.
> - Reviewers should read and understand code.
> - Reviewers are not expected to checkout or manually test code.
Can we have a bullet to say "- Reviewers should indicate they are reviewing a PR by dragging it to 'In Review' and assigning it to themselves."?
> [See README](https://github.com/opendp/dp-creator-ii/#conventions).
>
> ### Pull Request Review and Testing
These comments are all out of order, sorry, but can we have a bullet for "- Reviewers will indicate that they need the creator of the PR to address a concern by assigning the creator to the issue"?
(This would happen while the issue is in the "In Review" column. That's what we do for Dataverse, anyway, and it seems to work fine. The creator unassigns themselves after they've changed the code, or at least written a reply to the concern.)
I agree with @pdurbin that it would be good to follow the process we use in Dataverse, using an 'In Review' status, and assigning the reviewer/coder if there are changes to be made during the review.
> Github CI will run tests on each PR. We require tests to pass before merge.
>
> We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes in a separate before, but that can be addressed after the fact.
Suggested change:
> We do not require branches to be up to date with main: There is a small chance that one PR may break (have merge conflicts) because of changes in a separate before, but that can be addressed after the fact.
By "break" I assume you're talking about merge conflicts.
I think it's a good practice to have PRs up to date with main before they are reviewed, especially if there are merge conflicts. Or, if a PR depends on another branch, it can point to that branch instead of main.
> N/A
>
> ### AWS-Based Environments
>
> N/A
>
> ## Significantly Impacted Division/College/Department
>
> N/A
>
> ## Dependencies
>
> N/A
I'm confused by N/A here. Should it be TBD instead? 🤔
@anniewu332 thought that the developer of a project shouldn't also be leading the testing: I have my blind spots. I'm going to move this back to draft, and when the right person is identified, they can pick this up.
This looks good, thank you. I just left some comments about the details.
> - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
> - Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
> - Testing exact values: The outputs is randomized, so we can not test for any particular value in the output.
Is it possible to test that the outputs are within a certain range, just as a sanity check?
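A range-based sanity check along these lines is possible. Here's a hedged sketch: `noisy_count` is a hypothetical stand-in for the app's mechanism (not the project's actual API), adding Laplace noise as a DP counting query with sensitivity 1 would:

```python
import random


def noisy_count(true_count: float, epsilon: float) -> float:
    """Stand-in for a DP count: add Laplace(1/epsilon) noise."""
    # The difference of two iid Exponential(rate=epsilon) draws is
    # Laplace-distributed with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


def test_noisy_count_within_range():
    epsilon = 1.0
    true_count = 100.0
    result = noisy_count(true_count, epsilon)
    # With scale 1/epsilon = 1, noise beyond +/-20 has probability
    # exp(-20), about 2e-9, so this check is effectively deterministic.
    assert abs(result - true_count) < 20.0
```

The bound has to be wide enough that random failures are negligible, so a check like this catches gross errors (wrong scale, wrong units) rather than subtle ones.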
Coming out of the Nov 13 meeting, the consensus was that this has surfaced some good questions, but this particular form is probably more detail than we need. #111 will remain open and Ellen may articulate a simpler plan, perhaps in consultation with Annie and Omer.
These are good questions, but I'm not sure how applicable they are for the project at this phase. Maybe you are requesting a plan for steps before a release? Or there might be a set of IQSS routines that you have in mind, and if this is an IQSS project it might make sense to apply those routines here? Or maybe we want to be clearer about what is expected of reviews? Or maybe we should be more concrete about the feedback we'd like to get from real users? This template seems to assume a separate QA role; I don't think it's my place to dictate staffing for a project, so it's hard for me to fill that out.
So if there's something else I can provide that would be helpful, let me know.