What is the bug?
I wanted to try running the ROCm backend on my Ryzen 5 8600G (Radeon 760M, I think).
To get llama.cpp to recognize an otherwise unsupported GPU as a compatible ROCm device, you can set the HSA_OVERRIDE_GFX_VERSION environment variable.
I've configured my .desktop file to set this env var when starting LM Studio (see later in this report), but the detected hardware is still "RADV GFX1103_R1".
[Desktop Entry]
Categories=Development;
Comment[en_GB]=Use the chat UI or local server to experiment and develop with local LLMs.
Comment=Use the chat UI or local server to experiment and develop with local LLMs.
Exec=env HSA_OVERRIDE_GFX_VERSION=11.0.1 /home/myUser/.local/bin/lmstudio %U
GenericName[en_GB]=
GenericName=
Icon=/home/myUser/.local/share/zap/v2/icons/lmstudio..png
Keywords=developer;llm;
MimeType=
Name[en_GB]=LM Studio (AppImage)
Name=LM Studio (AppImage)
Path=
StartupNotify=true
StartupWMClass=LM Studio
Terminal=false
TerminalOptions=
TryExec=/home/myUser/.local/bin/lmstudio
Type=Application
X-AppImage-Integrate=
X-AppImage-Version=0.3.9
X-KDE-SubstituteUID=false
X-KDE-Username=
X-Zap-Id=lmstudio
category=Development;Utility;
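As a sanity check, this sketch verifies that a variable set on the command line (the same way the `Exec=env HSA_OVERRIDE_GFX_VERSION=11.0.1 …` line above sets it) actually reaches a child process's environment. The `sh -c` child stands in for the LM Studio process; the value 11.0.1 matches the .desktop entry above.

```shell
# Spawn a child process with the override set, then read that child's own
# environment from /proc. $$ inside the single quotes expands in the child
# shell, so it is the child's PID, not the parent's.
HSA_OVERRIDE_GFX_VERSION=11.0.1 sh -c \
  'tr "\0" "\n" < "/proc/$$/environ" | grep "^HSA_OVERRIDE_GFX_VERSION="'
```

If this prints `HSA_OVERRIDE_GFX_VERSION=11.0.1` but LM Studio still reports the RADV device, the variable is being delivered correctly and the limitation is on the application side.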
Am I doing something wrong? Or is this workaround simply not supported?
Mind you, this was just for testing purposes; the Vulkan backend works just fine :)
Which version of LM Studio?
LM Studio 0.3.9-6
Which operating system?
Fedora 41