dealing with tests that cannot succeed #665
You can use tags and then include/exclude those tags based on the conditions in which you run the tests. But I might not understand your request exactly...
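For context, this is roughly what the tag approach looks like in busted: tags are `#words` embedded in the test description, and the command line selects on them. A small sketch (the `#windows` tag and the assertion are only illustrative):

```lua
describe("file paths", function()
  -- the '#windows' marker in the description is how busted tags a test
  it("uses backslash as the directory separator #windows", function()
    -- package.config's first character is the platform's directory separator
    assert.equal("\\", package.config:sub(1, 1))
  end)
end)
```

Then `busted --tags=windows` runs only the tagged tests, and `busted --exclude-tags=windows` leaves them out, which is also why they disappear from the report.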
@Tieske filtering using tags excludes tests from the test report, and that is not desired.
Here's how we do stuff like that (untested):

```lua
local platform_it = function(platforms, description, ...)
  if type(platforms) ~= "table" then
    -- plain 'it' call: the first argument is actually the description
    return it(platforms, description, ...)
  end
  local platform = get_platform() -- user-supplied helper returning e.g. "windows", "osx", "linux"
  for _, plat in ipairs(platforms) do
    if plat == platform then
      return it(description, ...)
    end
  end
  -- not listed for this platform: mark as pending instead of silently dropping it
  return pending("[skipping on "..platform.."] "..description, ...)
end

platform_it({ "windows", "osx" }, "a test as usual", function()
  -- test something, only on Windows and OSX, not on Linux
end)

platform_it({ "osx", "linux" }, "another test as usual", function()
  -- test something, only on OSX and Linux, not on Windows
end)
```
The important aspect of an xfail test is that it still runs but is expected to fail. This is useful to document the expected behavior for a scenario that's known to be failing (e.g., a bug report) but hasn't been fixed yet. If something changes that fixes the test, you're alerted to it because the test passing is treated as a failure. At that point, you can verify whether the behavior change is intended and simply switch it from "xfail" to a normal test.
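As a rough illustration of that inversion, here is a minimal sketch of an xfail wrapper one could build on top of busted's `it` today (`xfail_it` and `parse` are made-up names for illustration, not busted APIs):

```lua
-- Hypothetical helper: runs the body and inverts the outcome, so the
-- test fails loudly once the underlying bug is fixed.
local function xfail_it(description, fn)
  return it("XFAIL: "..description, function()
    local ok = pcall(fn)
    assert(not ok, "expected this test to fail -- has the bug been fixed?")
  end)
end

xfail_it("handles empty input (known bug)", function()
  assert.same({}, parse("")) -- 'parse' stands in for the code under test
end)
```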
We might be able to extend the reporting functionality to report on excluded tests, maybe? Tags exist specifically so that skip, xfail, etc. can all be handled the same way anyway.
I don't think reporting on excluded tests answers this question; the point of an xfail test is that it is included but the mode is reversed. Such tests are not excluded: they are run, but they are expected to fail rather than meet the declared expectation. That way an alarm sounds if a known-broken test starts passing, meaning you fixed a bug you didn't realize was affected (for the better). I don't see a way to do that with the tag system. We can include and exclude, but not reverse modes.
Sometimes tests cannot be fixed quickly and you expect them to fail. In such cases it's common practice to mark them accordingly with statuses like XFail or Skip.
A Skip means that you expect your test to pass unless a certain configuration or condition prevents it from running. An XFail means that your test can run, but you expect it to fail because of an implementation problem.
It would be nice to have functionality to set such a test status in the test source code.
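To make the request concrete, the in-source markers might look something like this (a sketch only; `skip_if` and `xfail` do not exist in busted and are hypothetical names):

```lua
-- Hypothetical API, for illustration only.
-- Skip: reported as skipped when the condition holds, run normally otherwise.
skip_if(package.config:sub(1, 1) == "\\", "not supported on Windows",
  "parses POSIX paths", function()
    -- ...
  end)

-- XFail: always runs; reported as a failure if it unexpectedly passes.
xfail("crashes on empty input, see the open bug", function()
  -- ...
end)
```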