
Improve our test coverage #11299

Open
zengh-cropty opened this issue Feb 22, 2024 · 1 comment
Labels
awaiting triage Awaiting triage from a maintainer

Comments

@zengh-cropty

What would you like to share?

Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because it allows bugs to slip through easily. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.

How to find low-coverage files
Go to the Actions tab in this repository and find the most recent build workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:

---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name                                    Stmts   Miss  Cover   Missing
----------------------------------------------------------------------
quantum/q_fourier_transform.py             30     30     0%   14-93
scripts/validate_solutions.py              54     54     0%   2-94
strings/min_cost_string_conversion.py      78     75     4%   20-57, 61-75, 79-129
...
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with a low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to achieve a perfect coverage percentage, but all functions should have doctests.

Some files will naturally be hard to write tests for. For example, a file may be poorly written and lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the web_programming and project_euler directories. Web programming files are inherently hard to test, and Project Euler files have their own validation workflow, so don't worry about their test coverage.

When you open your PR, put "Contributes to #9943" in the PR description. Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.

How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:

def add(a: int, b: int) -> int:
    """
    Adds two non-negative numbers.
    >>> add(1, 1)
    2
    >>> add(2, 5)
    7
    >>> add(1, 0)
    1
    >>> add(-1, -1)
    Traceback (most recent call last):
        ...
    ValueError: Numbers must be non-negative
    """
    if a < 0 or b < 0:
        raise ValueError("Numbers must be non-negative")
    return a + b
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
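A quick way to check your doctests before opening a PR is to run them with the standard-library doctest module. A minimal sketch (the add() function below is a hypothetical example, not a file from this repository):

```python
# Minimal sketch of checking doctests locally with the standard-library
# doctest module. The add() function is a hypothetical example.
import doctest


def add(a: int, b: int) -> int:
    """
    Adds two non-negative numbers.

    >>> add(1, 1)
    2
    >>> add(2, 5)
    7
    >>> add(-1, -1)
    Traceback (most recent call last):
        ...
    ValueError: Numbers must be non-negative
    """
    if a < 0 or b < 0:
        raise ValueError("Numbers must be non-negative")
    return a + b


if __name__ == "__main__":
    # Prints nothing on success; reports each failing example otherwise.
    doctest.testmod(verbose=False)
```

You can also run the doctests in an existing file without modifying it: `python3 -m doctest -v path/to/file.py`.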

Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. Your PR will not be merged if it has failing tests. If you happen to discover a bug while writing doctests, please fix it.
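One way to verify expected outputs independently is to cross-check the function under test against a trusted reference before copying any values into the docstring. A hedged sketch, where the gcd() function stands in for a hypothetical file under test and math.gcd from the standard library serves as the reference:

```python
# Hedged sketch: cross-checking candidate doctest values against a trusted
# reference (math.gcd) rather than trusting the function under test.
# gcd() here is a hypothetical implementation, not from any particular file.
import math
import random


def gcd(a: int, b: int) -> int:
    """
    Euclidean algorithm.

    >>> gcd(12, 18)
    6
    """
    while b:
        a, b = b, a % b
    return a


# Spot-check against the standard library before trusting the doctest values.
for _ in range(100):
    x, y = random.randint(1, 1_000), random.randint(1, 1_000)
    assert gcd(x, y) == math.gcd(x, y)
print("reference check passed")
```

If the spot-check fails, the bug is in the implementation, and per the paragraph above it should be fixed rather than enshrined in a doctest.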

Please read our contributing guidelines before you contribute.

Additional information

No response

@zengh-cropty zengh-cropty added the awaiting triage Awaiting triage from a maintainer label Feb 22, 2024
@DeepeshKalura

I was thinking of creating a new folder called test where all the test cases would reside. I'm not sure about every folder, but for the searching and sorting folders, the test cases are largely the same for every algorithm.

Are there any issues with this idea: a separate folder for test cases rather than keeping the tests in the same file as the algorithm?
