When continuing a conversation, LM Studio appears to start generating a response but fails with the error:
vk::Queue::submit: ErrorDeviceLost
After this error occurs:
LM Studio stops generating responses, even in new conversations.
Regenerating responses also fails.
Reloading the model clears the error temporarily, but it is not a real fix: the error keeps recurring, even after switching to a different model.
Logs
LM Studio Log Extract
22:29:29.307 › [LMSInternal][Client=LM Studio][Endpoint=predict] Error in channel handler: Error: received prediction-error
at _0x33a37e.<computed> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:411:108509)
at _0x29233a._0x3e785a (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:276771)
at _0x29233a.emit (node:events:519:28)
at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:244159)
at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:294568)
at ForkUtilityProcess.<anonymous> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:243179)
at ForkUtilityProcess.emit (node:events:519:28)
at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)
- Caused By: Error: vk::Queue::submit: ErrorDeviceLost
at _0x2e0c50.<computed>.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:9:50164)
at async Object.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:12197)
at async Object.handleMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:2327)
[LMSInternal][Client=LM Studio][Endpoint=predict] Canceled predicting due to channel error.
22:29:29.308 › [LMSInternal][Client=LM Studio][Endpoint=continueAssistantMessageAtIndex] Error in RPC handler: Error: Channel Error
at _0x1ae069.continueAssistantMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:46:9553)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async _0x1ae069.continueAssistantMessageAtIndex (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:46:14338)
at async Object.handler (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:106:7869)
- Caused By: Error: received prediction-error
at _0x33a37e.<computed> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:411:108509)
at _0x29233a._0x3e785a (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:276771)
at _0x29233a.emit (node:events:519:28)
at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:244159)
at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:294568)
at ForkUtilityProcess.<anonymous> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:243179)
at ForkUtilityProcess.emit (node:events:519:28)
at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)
- Caused By: Error: vk::Queue::submit: ErrorDeviceLost
at _0x2e0c50.<computed>.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:9:50164)
at async Object.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:12197)
at async Object.handleMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:2327)
System Log (syslog)
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] Resetting rcs0 for preemption time out
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] lm-studio[4108404] context reset due to GPU hang
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 9:1:8ed1fff2, in lm-studio [4108404]
To Reproduce
Open LM Studio.
Continue a conversation.
LM Studio appears to begin generating a response.
The error vk::Queue::submit: ErrorDeviceLost appears in the bottom-right corner. Additionally, "This message contains no content. The AI has nothing to say." appears below the prompt. All of the log entries above are emitted at that point.
All subsequent responses fail, whether starting a new conversation or continuing an existing one. Regeneration also fails instantly from then on.
Reloading or switching models and repeating from step 2 still reproduces the issue.
Expected Behavior
LM Studio should not stop generating as long as the model and available resources allow it. Even when a GPU error does occur, it should not leave the application in a broken state that requires reloading the model.
Additional Information
It is unclear whether this issue occurs exclusively on systems without a discrete GPU (this machine uses Intel integrated graphics, per the i915 entries in the syslog). I began monitoring system logs when the problem first appeared. It may have started with this version of LM Studio. I recently upgraded my system's RAM, which could be a factor, but I am uncertain. The system has otherwise been stable, with no noticeable glitches. This issue did not appear to occur in version 0.3.8.
Which version of LM Studio?
LM Studio 0.3.9-6 x64
Which operating system?
Kubuntu 24.04