Description
I did a fresh install of everything, including the Ollama server, exactly as the guide describes when starting OpenUI via Pinokio.
I also downloaded two models: llava and bakllava.
The UI itself displays two model entries for Ollama, but their names are missing.
The log files contain the following:
```
Traceback (most recent call last):
  File "/home/hle/pinokio/api/openui.git/app/backend/env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/home/hle/pinokio/api/openui.git/app/backend/env/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
    response = await f(request)
  File "/home/hle/pinokio/api/openui.git/app/backend/env/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
  File "/home/hle/pinokio/api/openui.git/app/backend/env/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/hle/pinokio/api/openui.git/app/backend/openui/server.py", line 215, in chat_completions
    raise HTTPException(status_code=e.status_code, detail=msg)
fastapi.exceptions.HTTPException: 404: Error code: 404 - {'error': {'message': 'model "undefined" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}
```
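The literal string "undefined" in the error suggests the frontend is sending the JavaScript value `undefined` as the model name rather than one of the names Ollama reports. As a sanity check, here is a minimal sketch (assuming Ollama is running on its default port, localhost:11434) that calls Ollama's OpenAI-compatible chat endpoint with an explicit model name:

```python
import requests

# Sanity check: call Ollama's OpenAI-compatible chat endpoint directly,
# passing an explicit model name instead of "undefined".
# Assumes Ollama is running on its default port, localhost:11434.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llava:13b",  # one of the names reported by Ollama
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this call succeeds while the UI still fails, the problem is in how the frontend resolves the selected model name, not in Ollama itself.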
Ollama API model response when triggered manually:
```json
{
  "models": [
    {
      "name": "bakllava:latest",
      "model": "bakllava:latest",
      "last_modified": "2025-02-02T17:20:38Z",
      "size_bytes": 4733351307,
      "hash_digest": "3dd68bd4447cba20e20deba918749e7f58ff689a8ba4a90c9ff9dc9118037486",
      "details": {
        "parent_model": "None",
        "format": "GGUF",
        "model_family": "LLaMA",
        "supported_families": ["LLaMA", "CLIP"],
        "parameter_size": "7B",
        "quantization": "Q4_0"
      }
    },
    {
      "name": "llava:13b",
      "model": "llava:13b",
      "last_modified": "2025-02-02T17:11:10Z",
      "size_bytes": 8011256494,
      "hash_digest": "0d0eb4d7f485d7d0a21fd9b0c1d5b04da481d2150a097e81b64acb59758fdef6",
      "details": {
        "parent_model": "None",
        "format": "GGUF",
        "model_family": "LLaMA",
        "supported_families": ["LLaMA", "CLIP"],
        "parameter_size": "13B",
        "quantization": "Q4_0"
      }
    }
  ]
}
```
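For reference, a listing like the one above can be reproduced by querying Ollama's tags endpoint directly; this is the same inventory the backend has available for populating the model picker. A minimal sketch, again assuming the default localhost:11434:

```python
import requests

# List the locally installed models via Ollama's tags endpoint.
resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model["name"])  # e.g. "bakllava:latest", "llava:13b"
```

If both names print correctly here, the Ollama install is fine and the "undefined" is being introduced somewhere in the UI layer.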