Failed to deploy. Please check the log for more details. #1355
Hi @wmrenr, I noticed there is an error log. Also, I've provided my config for Ollama, which you might be able to refer to:

models:
- api_base: http://host.docker.internal:11434/
  kwargs:
    n: 1
    temperature: 0
  model: ollama/phi4
  timeout: 120
provider: litellm_llm
type: llm
---
models:
- api_base: http://host.docker.internal:11434/
  model: ollama/nomic-embed-text:latest
  timeout: 120
provider: litellm_embedder
type: embedder
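A quick sanity check to go with a config like this: confirm the api_base is actually reachable and that both models are pulled. A minimal Python sketch, assuming Ollama's standard /api/tags listing route; run it inside the wren-ai-service container (via docker exec) so host.docker.internal resolves the same way it does for the service:

import json
import urllib.request

# Same host as the api_base entries above; Ollama's default port.
OLLAMA_BASE = "http://host.docker.internal:11434"

# /api/tags lists the models Ollama currently serves.
with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags", timeout=10) as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("models served by Ollama:", models)
# Expect both "phi4" and "nomic-embed-text:latest" here; a connection error
# or a missing model is a common cause of a deploy failure with quiet logs.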
Sorry, the wrenai-wren-ai-service.log I uploaded before does not correspond to this, so I have now updated all the logs as follows (config.yaml, .env, and docker-compose.yaml are the same as above): wrenai-wren-ai-service.log. None of the logs report any error, but the UI shows this error.
@paopa Could you please find time to look over my problem?
Hi @wmrenr, I’m having a bit of a hard time figuring out what’s causing this issue from your log. It seems to only pop up when Wren AI is deploying the model for your data source. Could you add some extra logging functionality? Also, could you let me know which version of Wren AI you’re using and how you’re deploying it (launcher, Docker Compose, or from scratch)? And does it happen every time you restart the system?
I can try to add some extra logging. My version of Wren AI is 0.15.3; I deployed it with Docker Compose; and it happens every time I restart the system.
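Since none of the logs show an error, one check that bypasses Wren AI entirely is to call the same OpenAI-compatible Ollama route that the litellm_llm provider in the config below is pointed at. A minimal sketch, assuming a recent Ollama that serves /v1/chat/completions; note that litellm's openai/ prefix is dropped when calling Ollama directly:

import json
import urllib.request

API_BASE = "http://localhost:11434/v1"  # inside Docker: http://host.docker.internal:11434/v1

payload = {
    # config.yaml references openai/qwen2.5:0.5b; the prefix is litellm routing only
    "model": "qwen2.5:0.5b",
    "messages": [{"role": "user", "content": "ping"}],
}
req = urllib.request.Request(
    f"{API_BASE}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer randomstring",  # any value works; Ollama does not check it
    },
)
with urllib.request.urlopen(req, timeout=120) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])

If this call fails or hangs, the problem is between the container and Ollama rather than inside Wren AI itself.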
Asking everyone for help: I used a locally deployed Ollama model to start WrenAI. The UI now shows this error, but none of the logs report any error. How can I solve this?
Here are all my logs, along with config.yaml, .env, and docker-compose.yaml:
wrenai-wren-ai-service.log
wrenai-wren-engine.log
wrenai-wren-ui.log
wrenai-ibis-server.log
config.yaml
type: llm
provider: litellm_llm
models:
- model: openai/qwen2.5:0.5b # openai/<ollama_model_name>
  api_key_name: LLM_OLLAMA_API_KEY
  timeout: 600
  kwargs:
    n: 1
    temperature: 0

---
type: embedder
provider: litellm_embedder
models:
- api_key_name: EMBEDDER_OLLAMA_API_KEY
  api_base: http://host.docker.internal:11434/v1 # change this to your ollama host, api_base should be <ollama_url>/v1
  timeout: 600

---
type: engine
provider: wren_ui
endpoint: http://wren_ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
- embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- llm: litellm_llm.openai/qwen2.5:0.5b
  document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest
- document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
- llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- llm: litellm_llm.openai/qwen2.5:0.5b
  document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest

---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
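One detail worth double-checking in this config is that embedding_model_dim matches what the embedder actually returns: nomic-embed-text produces 768-dimensional vectors, so 768 is consistent here, but a mismatch tends to break indexing while the logs stay quiet. A small sketch using Ollama's native /api/embeddings route (host and model name taken from the config above; adjust if yours differ):

import json
import urllib.request

EXPECTED_DIM = 768  # embedding_model_dim from the document_store section

req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=json.dumps({"model": "nomic-embed-text:latest",
                     "prompt": "dimension check"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    dim = len(json.load(resp)["embedding"])

print("embedder returns", dim, "dimensions")
assert dim == EXPECTED_DIM, f"config says {EXPECTED_DIM}, embedder returns {dim}"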
.env
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64
PROJECT_DIR=/root/.wrenai
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1
LLM_OPENAI_API_KEY=randomstring
EMBEDDER_OPENAI_API_KEY=randomstring
LLM_AZURE_OPENAI_API_KEY=randomstring
EMBEDDER_AZURE_OPENAI_API_KEY=randomstring
LLM_OLLAMA_API_KEY=randomstring
EMBEDDER_OLLAMA_API_KEY=randomstring
OPENAI_API_KEY=randomstring
WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.13.1
WREN_AI_SERVICE_VERSION=0.15.7
IBIS_SERVER_VERSION=latest
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5
# user id (uuid v4)
USER_UUID=
TELEMETRY_ENABLED=true
# this is for telemetry to know the model; i think ai-service might be able to provide an endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=
# the port exposed to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555
# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false
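A sanity check worth running on this file: the two api_key_name values referenced in config.yaml (LLM_OLLAMA_API_KEY and EMBEDDER_OLLAMA_API_KEY) must be defined here, even though Ollama accepts any placeholder value. A throwaway sketch that parses plain KEY=VALUE lines like the ones above:

from pathlib import Path

# Naive .env parser: enough for simple KEY=VALUE lines, no quoting rules.
env = {}
for line in Path(".env").read_text().splitlines():
    line = line.strip()
    if line and not line.startswith("#") and "=" in line:
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()

for required in ("LLM_OLLAMA_API_KEY", "EMBEDDER_OLLAMA_API_KEY"):
    print(required, "->", "set" if env.get(required) else "MISSING")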
docker-compose.yaml
version: "3"
volumes:
  data:
networks:
  wren:
    driver: bridge
services:
  bootstrap:
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - data:/app/data
    command: /bin/sh /app/init.sh
  wren-engine:
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - data:/usr/src/app/etc
      - ${PROJECT_DIR}/data:/usr/src/app/data
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - bootstrap
  ibis-server:
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    ports:
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
    environment:
      # sometimes the console won't show print messages,
      # using PYTHONUNBUFFERED: 1 can fix this
      PYTHONUNBUFFERED: 1
      CONFIG_PATH: /app/data/config.yaml
    env_file:
      - ${PROJECT_DIR}/.env
    volumes:
      - ${PROJECT_DIR}/config.yaml:/app/data/config.yaml
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - qdrant
  qdrant:
    image: qdrant/qdrant:v1.11.0
    restart: on-failure
    expose:
      - 6333
      - 6334
    volumes:
      - data:/qdrant/storage
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
  wren-ui:
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      # /app is the working directory in the container
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
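Note that only wren-ui and wren-ai-service publish ports to the host here (HOST_PORT and AI_SERVICE_FORWARD_PORT); qdrant, wren-engine, and ibis-server are only exposed on the internal wren network. A rough reachability probe from the host after docker compose up; the /health path on wren-ai-service is an assumption, so check the service's docs if it 404s:

import urllib.request

# Host ports from the .env above; Ollama runs outside compose on its own port.
probes = {
    "wren-ui": "http://localhost:3000/",
    "wren-ai-service": "http://localhost:5555/health",  # path assumed, not confirmed
    "ollama": "http://localhost:11434/",  # replies "Ollama is running"
}

for name, url in probes.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except Exception as exc:  # connection refused, timeout, HTTP error, ...
        print(f"{name}: {exc}")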