
[Installation]: How to add vllm[audio] when building from source on an arm64 platform #16816

@fanfan-lucky

Description


Your current environment


INFO 04-18 06:03:19 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.7.0a0+7c8ec84dab.nv25.03
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04.1 LTS (aarch64)
GCC version: (Ubuntu 10.5.0-4ubuntu2) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39

Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-133-generic-aarch64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20

Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvidia-dali-cuda120==1.47.0
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-modelopt==0.25.0
[pip3] nvidia-modelopt-core==0.25.0
[pip3] nvidia-nvimgcodec-cu12==0.4.1.21
[pip3] nvidia-nvjpeg2k-cu12==0.8.1.40
[pip3] nvidia-nvtiff-cu12==0.4.0.62
[pip3] onnx==1.17.0
[pip3] optree==0.14.1
[pip3] pynvml==12.0.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3b.nvinternal
[pip3] pyzmq==26.2.1
[pip3] torch==2.7.0a0+7c8ec84dab.nv25.3
[pip3] torch-geometric==2.6.1
[pip3] torch_tensorrt==2.7.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[pip3] transformers==4.51.2
[pip3] triton==3.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.3rc2.dev92+g98d01d3ce.d20250410
vLLM Build Flags:
CUDA Archs: 8.0 8.6 9.0 10.0 12.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PIX PIX SYS SYS SYS SYS PIX PIX PIX SYS 16-31 1 N/A
GPU1 PIX X PIX PIX SYS SYS SYS SYS PIX PIX PIX SYS 16-31 1 N/A
GPU2 PIX PIX X PIX SYS SYS SYS SYS PIX PIX PIX SYS 16-31 1 N/A
GPU3 PIX PIX PIX X SYS SYS SYS SYS PIX PIX PIX SYS 16-31 1 N/A
GPU4 SYS SYS SYS SYS X PIX PIX PIX SYS SYS SYS PIX 64-79 4 N/A
GPU5 SYS SYS SYS SYS PIX X PIX PIX SYS SYS SYS PIX 64-79 4 N/A
GPU6 SYS SYS SYS SYS PIX PIX X PIX SYS SYS SYS PIX 64-79 4 N/A
GPU7 SYS SYS SYS SYS PIX PIX PIX X SYS SYS SYS PIX 64-79 4 N/A
NIC0 PIX PIX PIX PIX SYS SYS SYS SYS X PIX PIX SYS
NIC1 PIX PIX PIX PIX SYS SYS SYS SYS PIX X PIX SYS
NIC2 PIX PIX PIX PIX SYS SYS SYS SYS PIX PIX X SYS
NIC3 SYS SYS SYS SYS PIX PIX PIX PIX SYS SYS SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3

NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.8.4.1
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=8.0 8.6 9.0 10.0 12.0+PTX
NCCL_VERSION=2.25.1
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
TORCH_NCCL_USE_COMM_NONBLOCKING=0
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.8.1.012
PYTORCH_VERSION=2.7.0a0+7c8ec84
PYTORCH_BUILD_NUMBER=0
CUBLASMP_VERSION=0.4.0.789
CUDNN_FRONTEND_VERSION=1.10.0
CUDNN_VERSION=9.8.0.87
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=148941829
CUDA_DRIVER_VERSION=570.124.06
PYTORCH_BUILD_VERSION=2.7.0a0+7c8ec84
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=25.03
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1

How you are installing vllm

If I build vLLM 0.8.3 from source on an arm64 platform, how can I include the vllm[audio] functionality as part of the build? I look forward to your feedback.
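Not an official answer, but as a sketch of one plausible route: vLLM declares its optional audio dependencies as a pip "extra", and pip resolves extras for source/editable installs the same way it does for wheels, so the extra can be requested in the same command that builds vLLM from the checkout. The steps below assume the NGC PyTorch container shown above, a vLLM source checkout, and that the `use_existing_torch.py` helper is present in that checkout; they are not verified on aarch64.

```bash
# Sketch only, not verified on aarch64. Assumes a vLLM 0.8.x source checkout
# inside the NGC PyTorch container described in the environment above.
git clone https://github.com/vllm-project/vllm.git
cd vllm

# Strip the torch version pins so the container's existing aarch64 PyTorch
# build is reused instead of being replaced by pip (helper script shipped
# in the vLLM repo, if present in your checkout).
python use_existing_torch.py

# Build vLLM from the local sources and ask pip to also resolve the optional
# "audio" extra declared in vLLM's packaging metadata.
pip install -e ".[audio]" --no-build-isolation
```

If vLLM is already built and installed, the extra's packages can also be added afterwards without rebuilding, e.g. `pip install librosa soundfile` (the package names are an assumption based on the contents of the audio extra around this version; check vLLM's setup.py/pyproject for the authoritative list).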

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
