
Does piper support AMD GPU acceleration with rocm? #483

Open
eliranwong opened this issue Apr 29, 2024 · 8 comments

Comments

@eliranwong

Does piper support AMD GPU acceleration with rocm?

@eliranwong
Author

I can see that ONNX Runtime supports AMD ROCm; please read:

https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-onnx.html

But how to get it integrated into piper?

@mush42
Contributor

mush42 commented Apr 29, 2024

@eliranwong

From what I know, you need to modify and rebuild Piper, setting the execution providers to the ones you want.
You also need an onnxruntime build that includes those providers.

Best
Musharraf

@eliranwong
Author

Appreciate your reply and help. May I ask for more information about:

  1. how to set the execution providers when rebuilding piper
  2. how to provide an onnxruntime build with those providers

@eliranwong
Author

Do you mean I need to manually edit this line:

providers=providers,

@mush42
Contributor

mush42 commented Apr 29, 2024

@eliranwong

  1. Since Piper doesn't provide a command-line option to set the onnxruntime EP to ROCm, you need to modify Piper's C++ source to set it manually.
  2. Find an onnxruntime.so library built with the ROCm EP; if Microsoft doesn't provide pre-built binaries, you need to build it yourself.
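In either the C++ or the Python build of Piper, step 1 ultimately comes down to choosing which provider-name strings get handed to the onnxruntime session. A minimal sketch of that selection logic, in plain Python (the `providers_for` helper and the `backend` keys are illustrative, not part of Piper's actual API):

```python
def providers_for(backend):
    """Map a backend choice to the provider-name list onnxruntime expects."""
    table = {
        "cpu": [],
        "cuda": ["CUDAExecutionProvider"],
        "rocm": ["ROCMExecutionProvider"],
        "migraphx": ["MIGraphXExecutionProvider"],
    }
    # CPU is always appended as a fallback, so inference still works
    # when the GPU provider fails to initialise at runtime.
    return table[backend] + ["CPUExecutionProvider"]
```

The resulting list is what would be passed as the `providers` argument when the onnxruntime inference session is created.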

@eliranwong
Author

eliranwong commented Apr 30, 2024

So far, below is the easiest way that I found:

  1. Install ONNX Runtime with ROCm Execution Provider
    reference: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu#22-onnx-runtime-with-rocm-execution-provider
# pre-requisites
pip install -U pip
pip install cmake onnx
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install ONNXRuntime from source
git clone --recursive  https://github.com/ROCmSoftwarePlatform/onnxruntime.git
cd onnxruntime
git checkout rocm6.0_internal_testing

./build.sh --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
pip install build/Linux/Release/dist/*
  2. Manually edit this line:

providers=providers,

to:

providers=["ROCMExecutionProvider"]
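Hard-coding the list as above breaks on machines whose onnxruntime build lacks the ROCm EP. A slightly safer sketch only requests the provider when the installed build actually offers it (the `pick_providers` helper is hypothetical; in practice the `available` list would come from `onnxruntime.get_available_providers()`):

```python
def pick_providers(available, wanted="ROCMExecutionProvider"):
    """Request the wanted EP only when this onnxruntime build offers it."""
    if wanted in available:
        # Keep CPU as a fallback behind the GPU provider.
        return [wanted, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```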

I am open to a better solution.

I would appreciate it if the author of piper could support this directly, so that I don't need to manually edit the line.

Many thanks.

@eliranwong
Author

I read that piper currently supports a --cuda argument. I would suggest adding a --rocm argument to make piper better.
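Such flags could be wired up with argparse along these lines (a sketch only; the flag names mirror the suggestion above, but the option handling in piper's real CLI may differ):

```python
import argparse

parser = argparse.ArgumentParser(prog="piper")
group = parser.add_mutually_exclusive_group()
group.add_argument("--cuda", action="store_true", help="use CUDAExecutionProvider")
group.add_argument("--rocm", action="store_true", help="use ROCMExecutionProvider")
group.add_argument("--migraphx", action="store_true", help="use MIGraphXExecutionProvider")

def providers_from(args):
    """Translate the parsed flags into an onnxruntime provider list."""
    if args.migraphx:
        return ["MIGraphXExecutionProvider"]
    if args.rocm:
        return ["ROCMExecutionProvider"]
    if args.cuda:
        return ["CUDAExecutionProvider"]
    return ["CPUExecutionProvider"]
```

Making the flags mutually exclusive keeps the provider choice unambiguous.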

@eliranwong
Author

eliranwong commented May 28, 2024

Update: Created a pull request to add --migraphx and --rocm options to support AMD / ROCm-enabled GPUs.

If the pull request is merged, AMD GPU users can run piper with either 'piper --migraphx' or 'piper --rocm'.

Until the pull request is merged, AMD GPU users can still work around the issue with the following setup:

To support ROCm-enabled GPUs via 'ROCMExecutionProvider' or 'MIGraphXExecutionProvider':

  1. Install piper-tts

pip install piper-tts

  2. Uninstall onnxruntime

pip uninstall onnxruntime

  3. Install onnxruntime-rocm

pip3 install https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0.2/onnxruntime_rocm-inference-1.17.0-cp310-cp310-linux_x86_64.whl --no-cache-dir

Remarks: Wheel files that support different ROCm versions are available at: https://repo.radeon.com/rocm/manylinux

To verify:

python3

>>> import onnxruntime
>>> onnxruntime.get_available_providers()

Output:

['MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']

Workaround:

Manually edit the 'load' function in the file ../site-packages/piper/voice.py:

From:

providers=["CPUExecutionProvider"]
if not use_cuda
else ["CUDAExecutionProvider"],

To:

providers=["MIGraphXExecutionProvider"],
