[DRAFT] Set vllm version limit to avoid upstream API incompatibility #351


This workflow is awaiting approval from a maintainer in #292
Triggered via pull request October 21, 2024 09:49
Status Action required

test_cli_cuda_torch_ort.yaml

on: pull_request
run_cli_cuda_torch_ort_multi_gpu_tests
run_cli_cuda_torch_ort_single_gpu_tests
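The change this run tests is a dependency cap: bounding the vllm version so an upstream API change cannot break the package. As a sketch, such a pin would typically appear in the project's requirements or setup metadata like the following (the bound shown here is illustrative only; the actual version limit chosen in the PR is not stated on this page):

```
# Hypothetical upper bound: keep vllm below the release that changed its API.
# The real constraint used in PR #351 may differ.
vllm<0.6
```

Capping with `<` rather than pinning an exact version (`==`) still allows compatible patch and minor updates while excluding the incompatible release line.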