
Add a test plan #123

Closed
wants to merge 7 commits

Conversation

mccalluc
Contributor

@mccalluc mccalluc commented Nov 1, 2024

These are good questions, but I'm not sure how applicable they are to the project at this phase. Maybe you are requesting a plan for steps before a release? Or there might be a set of IQSS routines you have in mind, and if this is an IQSS project it might make sense to apply those routines here? Or maybe we want to be clearer about what is expected of reviews? Or maybe we should be more concrete about the feedback we'd like to get from real users? This template also seems to assume a separate QA role: I don't think it's my place to dictate staffing for a project, so it's hard for me to fill that out.

So if there's something else I can provide that would be helpful, let me know.

@mccalluc mccalluc requested a review from ekraffmiller November 1, 2024 19:24
@pdurbin pdurbin self-assigned this Nov 7, 2024
Collaborator

@pdurbin pdurbin left a comment


Overall, seems like a solid plan. I left some comments.


### Project Overview

The single user application will be installed with pip, and offer a web application (running on localhost) that guides the user through the application of differential privacy on a data file they provide.
Collaborator

"Single user application" is a phrase I haven't heard much. I think I know what you mean but perhaps this could be simplified to just "application".


### Out-of-Scope

- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
Collaborator

Suggested change
- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but performance is not the focus of automated testing.

With "it's" I get slightly confused. I don't think you're referring to "test timeouts", which would be "they're".

### Out-of-Scope

- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
- Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
Collaborator

Suggested change
- Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
- Rendering of preview charts: We might [add a results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.

Or plural? Results tables? 🤔


- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
- Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
- Testing exact values: The outputs is randomized, so we can not test for any particular value in the output.
Collaborator

Suggested change
- Testing exact values: The outputs is randomized, so we can not test for any particular value in the output.
- Testing exact values: The outputs are randomized, so we can not test for any particular value in the output.

- For more complex functions and operations, use pytest unit tests.
- Run linting and type checking tools as part of tests.
- Use Playwright for end-to-end tests.
- Use test matrix on Github CI if we think there will be any senstivity to versions or browsers.
Collaborator

Suggested change
- Use test matrix on Github CI if we think there will be any senstivity to versions or browsers.
- Use test matrix on Github CI if we think there will be any sensitivity to versions or browsers.
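To make the pytest bullet above concrete, here is a minimal sketch of a unit test. The function `clip_value` is purely illustrative and is not part of dp-creator-ii:

```python
# Illustrative only: clip_value is a hypothetical helper, not a function
# from dp-creator-ii. It shows the shape of a pytest unit test for a
# "more complex function" as described in the plan.

def clip_value(value, lower, upper):
    """Clamp a value into the closed interval [lower, upper]."""
    if lower > upper:
        raise ValueError("lower bound exceeds upper bound")
    return max(lower, min(value, upper))


def test_clip_value():
    assert clip_value(5, 0, 10) == 5    # in range: unchanged
    assert clip_value(-3, 0, 10) == 0   # below range: clamped to lower
    assert clip_value(42, 0, 10) == 10  # above range: clamped to upper
```

Running `pytest` would collect and execute `test_clip_value` automatically; linting, type checking, and Playwright end-to-end tests would run alongside such unit tests in CI.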


- Reviewers should read and understand code.
- Reviewers are not expected to checkout or manually test code.

Collaborator

I suggested this in Slack but perhaps we could have a bullet for the following idea:

Should we have an additional column after "in review"? For Dataverse, after someone clicks "approve", automation moves the PR to the next column. We call this column "ready for QA".

Member

Personally, I find checking out and running the code helpful for understanding it, so I usually do it. I don't think we need to say that reviewers are not expected to do that. Also, since we don't have a QA step, running the code helps to find bugs in the review step.


- Reviewers should read and understand code.
- Reviewers are not expected to checkout or manually test code.

Collaborator

Can we have a bullet to say "- Reviewers should indicate they are reviewing a PR by dragging it to 'In Review' and assigning it to themselves"?

[See README](https://github.com/opendp/dp-creator-ii/#conventions).

### Pull Request Review and Testing

Collaborator

These comments are all out of order, sorry, but can we have a bullet for "- Reviewers will indicate that they need the creator of the PR to address a concern by assigning the creator to the issue"?

(This would happen while the issue is in the "In Review" column. That's what we do for Dataverse, anyway, and it seems to work fine. The creator unassigns themselves after they've changed the code or at least written a reply to the concern.)

Member

I agree with @pdurbin that it would be good to follow the process we use in Dataverse, using an 'In Review' status, and assigning the reviewer/coder if there are changes to be made during the review.


Github CI will run tests on each PR. We require tests to pass before merge.

We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes in a separate PR merged before it, but that can be addressed after the fact.
Collaborator

Suggested change
We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes in a separate PR merged before it, but that can be addressed after the fact.
We do not require branches to be up to date with main: There is a small chance that one PR may break (have merge conflicts) because of changes in a separate PR merged before it, but that can be addressed after the fact.

By "break" I assume you're talking about merge conflicts.

Member

I think it's a good practice to have PRs up to date with main before they are reviewed, especially if there are merge conflicts. Or, if a PR depends on another branch, it can point to that branch instead of main.

Comment on lines +97 to +109
N/A

### AWS-Based Environments

N/A

## Significantly Impacted Division/College/Department

N/A

## Dependencies

N/A
Collaborator

I'm confused by N/A here. Should it be TBD instead? 🤔

@mccalluc
Contributor Author

mccalluc commented Nov 7, 2024

@anniewu332 thought that the developer of a project shouldn't also lead the testing; I have my blind spots. I'm going to move this back to draft, and when the right person is identified, they can pick this up.

@mccalluc mccalluc marked this pull request as draft November 7, 2024 19:01
@pdurbin pdurbin removed their assignment Nov 7, 2024
Member

@ekraffmiller ekraffmiller left a comment


This looks good, thank you. I just left some comments about the details.




- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
- Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
- Testing exact values: The outputs is randomized, so we can not test for any particular value in the output.
Member

Is it possible to test that the outputs are within a certain range, just as a sanity check?
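That kind of range assertion is workable, at least for a seeded test run. Here is a minimal sketch, assuming a Laplace mechanism with sensitivity 1; `dp_count` is a hypothetical stand-in, not a function from this project:

```python
import math
import random


def dp_count(true_count, epsilon, rng):
    """Hypothetical Laplace mechanism: add Laplace(1/epsilon) noise to a count."""
    # Inverse-CDF sampling: map a uniform draw in [-0.5, 0.5) to Laplace noise.
    u = rng.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise


def test_dp_count_is_in_a_plausible_range():
    rng = random.Random(0)  # fixed seed so the test is reproducible in CI
    result = dp_count(100, epsilon=1.0, rng=rng)
    # With epsilon=1 the noise scale is 1, so +/-50 is a very loose sanity
    # check on a randomized output, not an exact-value assertion.
    assert 50 < result < 150
```

A fixed seed keeps CI deterministic; an unseeded version of the same check would still pass with overwhelming probability at bounds this loose.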

@mccalluc
Contributor Author

Coming out of the Nov 13 meeting, the consensus was that this has surfaced some good questions, but this particular form is probably more detail than we need. #111 will remain open and Ellen may articulate a simpler plan, perhaps in consultation with Annie and Omer.

@mccalluc mccalluc closed this Nov 13, 2024
@mccalluc mccalluc deleted the 111-test-plan branch November 13, 2024 20:37
Successfully merging this pull request may close these issues.

Write a test plan