Error with LM Studio hosted OpenAI compatible embedding model #1014

Open
Else00 opened this issue Jan 24, 2025 · 0 comments

I was trying to use an LM Studio-hosted embeddings model, but I have a problem adding it.

First, I manually tested the embeddings model with RapidAPI:
Request

POST /v1/embeddings HTTP/1.1
Content-Type: application/json
Host: localhost:1234
Connection: close
User-Agent: RapidAPI/4.2.8 (Macintosh; OS X/15.2.0) GCDHTTPRequest
Content-Length: 50

{
  "input": "Testo di cui generare l'embedding"
}

Response

HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: *
Content-Type: application/json; charset=utf-8
Content-Length: 31122
ETag: W/"7992-WyzzogQYwebVGmjRxKi2AcRWn/w"
Date: Fri, 24 Jan 2025 09:55:47 GMT
Connection: close

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        -0.027857886627316475,
        0.001916739041917026,
        0.004705055151134729,
        0.015269467607140541,
        -0.001412238460034132,
        -0.041608117520809174,
      ...
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-bge-m3",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
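The manual test above can be reproduced with a short script (a sketch, assuming LM Studio is listening on `localhost:1234` with the `text-embedding-bge-m3` model loaded; the helper names are mine):

```python
BASE_URL = "http://localhost:1234"  # LM Studio's default local server address

def build_embedding_request(text: str) -> dict:
    # "input" is the only field the request above sends; LM Studio rejects
    # a body without it ("'input' field is required").
    return {"input": text}

def embed(text: str) -> list:
    # Sends the same POST the RapidAPI client sent. Requires the
    # third-party `requests` package.
    import requests
    resp = requests.post(
        f"{BASE_URL}/v1/embeddings",
        json=build_embedding_request(text),
        timeout=30,
    )
    resp.raise_for_status()
    # Response shape matches the transcript: data[0].embedding is the vector.
    return resp.json()["data"][0]["embedding"]

if __name__ == "__main__":
    print(len(embed("Testo di cui generare l'embedding")))
```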

I had already configured the phi-4 LLM from LM Studio with the URL http://host.docker.internal:1234/v1/chat/completions, and it works fine.

Then I tried to configure the embedding model: I selected the OpenAI-compatible API embedder and used http://host.docker.internal:1234 as the URL, because I saw in the code that /v1/embeddings is appended automatically.

LM Studio log:

2025-01-24 10:57:02 [DEBUG] 
Received request: POST to /v1/embeddings with body  {}
2025-01-24 10:57:02 [ERROR] 
'input' field is required

Cheshire Cat log:

[2025-01-24 09:57:02.035] ERROR  cat.routes.embedder..upsert_embedder_setting::150 
HTTPStatusError("Client error '400 Bad Request' for url 'http://host.docker.internal:1234/v1/embeddings'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400")
INFO:     192.168.215.1:20785 - "PUT /embedder/settings/EmbedderOpenAICompatibleConfig HTTP/1.1" 400 Bad Request
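Comparing the two logs, the settings PUT apparently triggers a test call to /v1/embeddings whose JSON body is empty (`{}`), which LM Studio rejects with 400. A minimal sketch of the mismatch (the function name is mine; the exact request the Cat builds is not shown in the logs, only the empty body LM Studio received):

```python
def lm_studio_would_accept(body: dict) -> bool:
    # LM Studio's /v1/embeddings requires an "input" field; a body
    # without it produces "'input' field is required" and HTTP 400.
    return "input" in body

# What the LM Studio log shows was received from the Cat:
assert not lm_studio_would_accept({})
# What the successful manual RapidAPI test sent:
assert lm_studio_would_accept({"input": "Testo di cui generare l'embedding"})
```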