Conversation
```rust
use wasm_bindgen::prelude::wasm_bindgen;
use wgpu::Backends;

// @todo avoid copying this from wgpu-info
```

I copied this list since I'm not sure where it should live if it's also imported here.

On latest trunk this should now be solved! You can call `wgpu::TextureFormat::exhaust()` from the `exhaust::Exhaust` trait, which should get you the list of all formats.
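The suggestion above relies on wgpu implementing the `exhaust` crate's `Exhaust` trait for `TextureFormat` on trunk. As a self-contained illustration of the pattern (with a simplified toy trait and a toy `Format` enum standing in for the real crate, which exposes an iterator rather than a `Vec`):

```rust
// Simplified stand-in for the `exhaust::Exhaust` trait; the real trait
// yields an iterator over every possible value of a type.
trait Exhaust: Sized {
    fn exhaust() -> Vec<Self>;
}

// Toy enum standing in for wgpu::TextureFormat.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Format {
    Rgba8Unorm,
    Bgra8Unorm,
    Depth32Float,
}

impl Exhaust for Format {
    fn exhaust() -> Vec<Self> {
        vec![Format::Rgba8Unorm, Format::Bgra8Unorm, Format::Depth32Float]
    }
}

fn main() {
    // Analogous to iterating wgpu::TextureFormat::exhaust() on trunk,
    // instead of maintaining a hand-copied list of formats.
    for format in Format::exhaust() {
        println!("{format:?}");
    }
    assert_eq!(Format::exhaust().len(), 3);
}
```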
```rust
}

#[wasm_bindgen]
pub async fn gpu_report() -> String {
```

The attempt here is to create a one-adapter "gpu report" for a given browser. I wasn't sure this was valid, but if it is, the advantage is that it gives accurate info on the CLI about which tests are skipped.

Yeah, this is fine! Not for this PR, but it's a real shame we can't re-use the code between this and wgpu-info.
```js
let url = new URL(window.location.href);
let name = url.searchParams.get("name");
let wasm = url.searchParams.get("wasm");
const { default: init, run_test, gpu_report } = await import(`./${wasm}.js`);
```

It's a bit hacky to load a JS file based on a URL param; I'd try to do something better in a real PR.
```rust
let cargo_args = flatten_args(args, passthrough_args);

let bins = [
```

Best-case this list wouldn't be needed, but I couldn't think of a general way to loop over the workspace and build the right tests.

Also, now wondering why the example tests work at all, given that I mistakenly list wgpu-test twice...

OK, fixed the example tests. Now a couple of tests are failing, but I think there are specific reasons for those.

Yeah, the failing tests seem to be because they're checking for `not(target_arch = "wasm32")`, but since wasm_manager is actually enumerating the tests on the native system, that check doesn't work. So once a general approach is decided, I can update that check for the failing tests.

Alright, that sounds good.
```rust
.cmd("cargo")
.args(build_args)
.args(["--message-format=json", "-q"])
.output()?;
```

The idea here is to figure out where the wasm binary actually is, since it has a random hash appended to it. I felt like there should be an obvious way to do this, but couldn't find one.

`cargo nextest list -v` will output the binary name, PWD, and the list of tests attributed to that binary, which should prevent the need to double build (as it will do the build for you).
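For reference, the double-build approach above works by scanning the JSON messages that `cargo build --message-format=json` emits for the `"executable"` field of compiler-artifact messages. A minimal string-scanning sketch of that extraction (a real implementation would parse the JSON properly, e.g. with `cargo_metadata`; the sample path below is made up):

```rust
// Collect the "executable" paths from cargo's JSON build messages, one
// JSON object per line. This locates the hashed test binary without
// hard-coding its name.
fn find_executables(json_lines: &str) -> Vec<String> {
    let key = "\"executable\":\"";
    let mut out = Vec::new();
    for line in json_lines.lines() {
        if let Some(start) = line.find(key) {
            let rest = &line[start + key.len()..];
            if let Some(end) = rest.find('"') {
                out.push(rest[..end].to_string());
            }
        }
    }
    out
}

fn main() {
    // Hypothetical sample of two cargo JSON messages.
    let sample = "{\"reason\":\"compiler-artifact\",\"executable\":\"/target/debug/deps/wgpu_test-1a2b3c.wasm\"}\n{\"reason\":\"build-finished\",\"success\":true}";
    let exes = find_executables(sample);
    assert_eq!(exes, vec!["/target/debug/deps/wgpu_test-1a2b3c.wasm".to_string()]);
    println!("{}", exes[0]);
}
```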
Thanks for putting this up! I'm going to self-assign this so I don't lose it.
cwfitzgerald left a comment
Sorry it took so long to look at this, I wanted to make sure I got the time to give this a proper look and to fully understand it.
I think this approach is great! There are a few gotchas that I've noted, and the failures that you've noted, but I think we should go forward with this approach, and we can refine the experience and infrastructure as we go.
Thank you for taking the time to look into this!
```toml
# Speed up image comparison even in debug builds
[profile.dev.package."nv-flip-sys"]
opt-level = 3
```
```json
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
},
```

nit: leftover default init script
```toml
cargo_metadata.workspace = true
env_logger.workspace = true
parking_lot = { workspace = true, features = ["deadlock_detection"] }
ureq.workspace = true
```

nit: as we don't need TLS, is it possible that we turn it off to prevent taking in deps on the TLS stack?
```rust
#[cfg(not(target_arch = "wasm32"))]
fn main() -> $crate::native::MainResult {
    $crate::native::main($tests)
```

```rust
if std::env::var("TEST_WASM") == Ok("true".to_string()) {
```

Suggested change:

```rust
if std::env::var("TEST_WASM").as_deref() == Ok("true") {
```

super nit
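The suggested form works because `Result::as_deref` converts the `Result<String, VarError>` from `std::env::var` into a `Result<&str, &VarError>`, so it can be compared against `Ok("true")` without allocating a `String`. A minimal sketch of the comparison (the `is_true` helper name is made up for illustration):

```rust
use std::env::VarError;

// Compare an env-var lookup result against "true" without allocating,
// mirroring the suggested `.as_deref() == Ok("true")` form.
fn is_true(var: Result<String, VarError>) -> bool {
    var.as_deref() == Ok("true")
}

fn main() {
    assert!(is_true(Ok("true".to_string())));
    assert!(!is_true(Ok("false".to_string())));
    assert!(!is_true(Err(VarError::NotPresent)));
    println!("ok");
}
```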
```rust
} else {
    shell
        .cmd("cargo")
        .args(["nextest", "run", "-P", "wasm"])
```

Suggested change:

```rust
        .args(["nextest", "run", "--profile", "wasm"])
```

Nit: let's use the long names for flags, as it's clearer what they mean.
Connections
#8709
Description
Try using playwright to run wasm tests in the browser. Just runs in Chrome/WebGL for now.

The approach in this PR is fairly complex, but has the advantage of using the nextest CLI on the front end.

The tests are run using `cargo xtask test-wasm`. The xtask code:

- builds `tests/wasm/web` to `tests/wasm/dist` (basically a build step)
- builds the tests with `wasm-bindgen`, then copies them into the `tests/wasm/dist` folder
- runs the tests with the `TEST_WASM=true` env var. This causes a version of the tests to run natively, where instead of running an actual test, a request is made to the node server to run the browser test in playwright. This is done using `wasm_manager.rs`.

An alternative approach to the architecture above would be to just loop over the tests in the browser, skipping nextest, and sending logs back to the shell. This is what I started with in 33ea7ef. The downside is that it doesn't allow the same CLI args as nextest, and would require something like an iframe to recover from bad tests.
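The two cargo invocations in the flow above (the wasm build, and the native nextest re-run that forwards to playwright) can be sketched with `std::process::Command`; the real xtask uses a shell helper, the exact arguments are illustrative, and the `wasm-bindgen`/copy step is elided:

```rust
use std::process::Command;

// Step 1 (sketch): build the tests for the wasm target.
fn wasm_build_cmd() -> Command {
    let mut cmd = Command::new("cargo");
    cmd.args(["build", "--target", "wasm32-unknown-unknown", "--tests"]);
    cmd
}

// Step 3 (sketch): re-run nextest natively with TEST_WASM=true, so each
// native "test" asks the node server to drive the browser via playwright.
fn wasm_nextest_cmd() -> Command {
    let mut cmd = Command::new("cargo");
    cmd.args(["nextest", "run", "--profile", "wasm"]);
    cmd.env("TEST_WASM", "true");
    cmd
}

fn main() {
    // Inspect the commands rather than spawning them.
    let build = wasm_build_cmd();
    let test = wasm_nextest_cmd();
    assert_eq!(build.get_program(), "cargo");
    let args: Vec<_> = test.get_args().collect();
    assert_eq!(args[2], "--profile");
    println!("ok");
}
```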
Testing
Changed ci.yml to call a new xtask task instead of wasm-pack