LM Studio Fails to Generate After ErrorDeviceLost #409

Open
71walceli opened this issue Feb 6, 2025 · 2 comments

Comments

71walceli commented Feb 6, 2025

Which version of LM Studio?

LM Studio 0.3.9-6 x64

Which operating system?

Kubuntu 24.04

What is the bug?

When continuing a conversation, LM Studio appears to start generating a response but then fails with the error:

vk::Queue::submit: ErrorDeviceLost

After this error occurs:

  • LM Studio stops generating responses, even in new conversations.
  • Regenerating existing responses also fails, instantly.
  • Reloading the model clears the error, but only temporarily: it keeps happening, even if a different model is chosen (see the sketch after this list for why Vulkan makes the failure sticky until a reload).
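
For context on why the failure is sticky: once a queue submission returns VK_ERROR_DEVICE_LOST (the C-API equivalent of the vk::Queue::submit error above), the Vulkan spec treats the logical device as unrecoverable, and every later submission on it fails the same way until the VkDevice is destroyed and recreated. A minimal sketch of that semantics, using a hypothetical function name rather than LM Studio's actual code:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Hypothetical helper; llama.cpp's Vulkan backend (which LM Studio uses)
// structures its submissions differently.
VkResult submit_inference_batch(VkQueue queue, const VkSubmitInfo* info, VkFence fence) {
    VkResult res = vkQueueSubmit(queue, 1, info, fence);
    if (res == VK_ERROR_DEVICE_LOST) {
        // The logical device is now lost for good: every subsequent
        // vkQueueSubmit on this VkDevice also returns VK_ERROR_DEVICE_LOST.
        // Nothing recovers short of destroying and recreating the device,
        // which is effectively what "reload the model" does, and why every
        // conversation fails until then.
        std::fprintf(stderr, "vkQueueSubmit: device lost\n");
    }
    return res;
}
```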

Logs

LM Studio Log Extract
22:29:29.307 › [LMSInternal][Client=LM Studio][Endpoint=predict] Error in channel handler: Error: received prediction-error
    at _0x33a37e.<computed> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:411:108509)
    at _0x29233a._0x3e785a (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:276771)
    at _0x29233a.emit (node:events:519:28)
    at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:244159)
    at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:294568)
    at ForkUtilityProcess.<anonymous> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:243179)
    at ForkUtilityProcess.emit (node:events:519:28)
    at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)
- Caused By: Error: vk::Queue::submit: ErrorDeviceLost
    at _0x2e0c50.<computed>.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:9:50164)
    at async Object.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:12197)
    at async Object.handleMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:2327)
[LMSInternal][Client=LM Studio][Endpoint=predict] Canceled predicting due to channel error.
22:29:29.308 › [LMSInternal][Client=LM Studio][Endpoint=continueAssistantMessageAtIndex] Error in RPC handler: Error: Channel Error
    at _0x1ae069.continueAssistantMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:46:9553)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async _0x1ae069.continueAssistantMessageAtIndex (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:46:14338)
    at async Object.handler (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:106:7869)
- Caused By: Error: received prediction-error
    at _0x33a37e.<computed> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:411:108509)
    at _0x29233a._0x3e785a (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:276771)
    at _0x29233a.emit (node:events:519:28)
    at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:244159)
    at _0x29233a.onChildMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:294568)
    at ForkUtilityProcess.<anonymous> (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/main/index.js:24:243179)
    at ForkUtilityProcess.emit (node:events:519:28)
    at ForkUtilityProcess.a.emit (node:electron/js2c/browser_init:2:71823)
- Caused By: Error: vk::Queue::submit: ErrorDeviceLost
    at _0x2e0c50.<computed>.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:9:50164)
    at async Object.predictTokens (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:12197)
    at async Object.handleMessage (/tmp/.mount_LM-StuRez2Pg/resources/app/.webpack/lib/llmworker.js:14:2327)

System Log (syslog)
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] Resetting rcs0 for preemption time out  
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] lm-studio[4108404] context reset due to GPU hang  
Feb 05 22:25:39 wceli-h510mh kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 9:1:8ed1fff2, in lm-studio [4108404]  
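
If more detail is wanted the next time the hang occurs, one hedged option is the VK_EXT_device_fault extension, which lets an application query a fault description after a device loss. Whether Mesa's ANV driver for this Intel iGPU advertises it, and whether LM Studio's backend could enable it, are assumptions here, not something the logs confirm:

```cpp
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

// Sketch: query VK_EXT_device_fault diagnostics after VK_ERROR_DEVICE_LOST.
// Requires the device to have been created with the extension and its
// deviceFault feature enabled.
void dump_device_fault(VkDevice device) {
    // Extension entry points are not exported by the loader; fetch one first.
    auto getFault = reinterpret_cast<PFN_vkGetDeviceFaultInfoEXT>(
        vkGetDeviceProcAddr(device, "vkGetDeviceFaultInfoEXT"));
    if (!getFault) return;  // extension unavailable

    // First call reports how many fault records the driver holds.
    VkDeviceFaultCountsEXT counts{VK_STRUCTURE_TYPE_DEVICE_FAULT_COUNTS_EXT};
    getFault(device, &counts, nullptr);

    // Second call fills in the description and per-address records.
    std::vector<VkDeviceFaultAddressInfoEXT> addrs(counts.addressInfoCount);
    std::vector<VkDeviceFaultVendorInfoEXT> vendors(counts.vendorInfoCount);
    VkDeviceFaultInfoEXT info{VK_STRUCTURE_TYPE_DEVICE_FAULT_INFO_EXT};
    info.pAddressInfos = addrs.data();
    info.pVendorInfos = vendors.data();
    getFault(device, &counts, &info);
    std::fprintf(stderr, "GPU fault: %s\n", info.description);
}
```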

To Reproduce

  1. Open LM Studio.
  2. Continue a conversation.
  3. LM Studio appears to begin generating a response.
  4. The error vk::Queue::submit: ErrorDeviceLost appears in the bottom-right corner, and "This message contains no content. The AI has nothing to say." appears below the prompt. All of the log lines above are generated at that point.
  5. All future responses fail, even in newly started conversations. Regeneration also fails, instantly, from then on.
  6. Reload the model or switch to a different one; repeating from step 2, the issue still manifests.

Expected Behavior

LM Studio should keep generating as long as the model and system resources allow. Even when a single generation fails, the failure should not impair all subsequent functionality or force a model reload.
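
Concretely, the recovery being asked for would look roughly like the sketch below; Backend and its members are illustrative names under assumed semantics, not LM Studio's real API:

```cpp
#include <vulkan/vulkan.h>

// Illustrative backend interface; bodies are stubs whose comments note
// what a real implementation would do.
struct Backend {
    VkDevice device = VK_NULL_HANDLE;
    void destroy()            { /* vkDeviceWaitIdle, free buffers, vkDestroyDevice */ }
    bool recreate()           { /* vkCreateDevice on the same VkPhysicalDevice */ return true; }
    bool reload_model()       { /* re-upload weights into fresh device memory */ return true; }
    VkResult predict_tokens() { /* record and vkQueueSubmit inference work */ return VK_SUCCESS; }
};

// Treat VK_ERROR_DEVICE_LOST as recoverable: recreate the device and reload
// the weights once, then retry, instead of letting every later request fail
// until the user reloads the model by hand.
VkResult predict_with_recovery(Backend& be) {
    VkResult res = be.predict_tokens();
    if (res == VK_ERROR_DEVICE_LOST) {
        be.destroy();  // the old VkDevice is permanently unusable
        if (be.recreate() && be.reload_model()) {
            res = be.predict_tokens();
        }
    }
    return res;
}
```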

Additional Information

It is unclear whether this issue occurs exclusively on systems without a discrete GPU (this machine runs on Intel integrated graphics, per the i915 entries in the syslog above). I began monitoring system logs when the problem first appeared, and it may have started with this version of LM Studio; it did not appear to occur in version 0.3.8. I recently upgraded my system's RAM, which could be a factor, but I am uncertain. The system has otherwise been stable, with no noticeable glitches.

SiliconSelf commented

I am also having this problem.

71walceli (Author) commented

Tested without the other RAM module, but it's the same. I nuked v0.3.8 on my system and can't seem to find it anywhere else to reinstall.
