
Failed to deploy. Please check the log for more details. #1355

Open · wmrenr opened this issue Mar 4, 2025 · 6 comments

Labels: bug (Something isn't working)

Comments

wmrenr commented Mar 4, 2025

Asking everyone for help: I deployed a large model locally with Ollama and used it to start WrenAI. The UI now shows the error below, but none of the logs report any errors. How can I solve this?

[Screenshots: UI showing the "Failed to deploy" error]

Here are all my logs, plus my config.yaml, .env, and docker-compose.yaml:
wrenai-wren-ai-service.log
wrenai-wren-engine.log
wrenai-wren-ui.log
wrenai-ibis-server.log

config.yaml:

```yaml
type: llm
provider: litellm_llm
models:
- api_base: http://host.docker.internal:11434/v1 # change this to your ollama host, api_base should be <ollama_url>/v1
  model: openai/qwen2.5:0.5b # openai/<ollama_model_name>
  api_key_name: LLM_OLLAMA_API_KEY
  timeout: 600
  kwargs:
    n: 1
    temperature: 0

---
type: embedder
provider: litellm_embedder
models:
- model: openai/nomic-embed-text:latest # put your ollama embedder model name here, openai/<ollama_model_name>
  api_key_name: EMBEDDER_OLLAMA_API_KEY
  api_base: http://host.docker.internal:11434/v1 # change this to your ollama host, api_base should be <ollama_url>/v1
  timeout: 600

---
type: engine
provider: wren_ui
endpoint: http://wren_ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true

---
type: pipeline
pipes:
- name: db_schema_indexing
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: historical_question_indexing
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: table_description_indexing
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: db_schema_retrieval
  llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: historical_question_retrieval
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: sql_generation
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: sql_correction
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: followup_sql_generation
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: sql_summary
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_answer
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: sql_breakdown
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: sql_expansion
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: semantics_description
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: relationship_recommendation
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: question_recommendation
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: question_recommendation_db_schema_retrieval
  llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: question_recommendation_sql_generation
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: chart_generation
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: chart_adjustment
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: intent_classification
  llm: litellm_llm.openai/qwen2.5:0.5b
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  document_store: qdrant
- name: data_assistance
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_pairs_indexing
  document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest
- name: sql_pairs_retrieval
  document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: preprocess_sql_data
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_executor
  engine: wren_ui
- name: sql_question_generation
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_generation_reasoning
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_regeneration
  llm: litellm_llm.openai/qwen2.5:0.5b
  engine: wren_ui
- name: sql_explanation
  llm: litellm_llm.openai/qwen2.5:0.5b
- name: sql_pairs_deletion
  document_store: qdrant
  embedder: litellm_embedder.openai/nomic-embed-text:latest

---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
```
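(Aside: with a config this long, one easy failure mode is a pipe referencing a model id that is spelled differently from the one defined in the `llm`/`embedder` sections. The sketch below is not part of WrenAI; it is a hypothetical consistency check, using the `<provider>.<model>` id convention seen in the config above, that flags any pipe whose reference has no matching definition.)

```python
# Hypothetical sanity check: every pipe's llm/embedder reference must match
# a defined model id. The checker is illustrative, not WrenAI code.

def check_pipes(defined_llms, defined_embedders, pipes):
    """Return (pipe_name, bad_reference) pairs for undefined references."""
    problems = []
    for pipe in pipes:
        llm = pipe.get("llm")
        if llm is not None and llm not in defined_llms:
            problems.append((pipe["name"], llm))
        emb = pipe.get("embedder")
        if emb is not None and emb not in defined_embedders:
            problems.append((pipe["name"], emb))
    return problems

# ids follow the "<provider>.<model>" convention used in the config above
llms = {"litellm_llm.openai/qwen2.5:0.5b"}
embedders = {"litellm_embedder.openai/nomic-embed-text:latest"}
pipes = [
    {"name": "db_schema_indexing",
     "embedder": "litellm_embedder.openai/nomic-embed-text:latest"},
    {"name": "sql_generation", "llm": "litellm_llm.openai/qwen2.5:0.5b"},
    # a mismatched prefix like this is what the check is meant to catch
    {"name": "sql_answer", "llm": "litellm_llm.ollama/qwen2.5:0.5b"},
]

print(check_pipes(llms, embedders, pipes))
```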

.env:

```shell
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=/root/.wrenai

WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

LLM_OPENAI_API_KEY=randomstring
EMBEDDER_OPENAI_API_KEY=randomstring
LLM_AZURE_OPENAI_API_KEY=randomstring
EMBEDDER_AZURE_OPENAI_API_KEY=randomstring
LLM_OLLAMA_API_KEY=randomstring
EMBEDDER_OLLAMA_API_KEY=randomstring
OPENAI_API_KEY=randomstring

WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.13.1
WREN_AI_SERVICE_VERSION=0.15.7
IBIS_SERVER_VERSION=latest
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

TELEMETRY_ENABLED=true
# this is for telemetry to know the model; i think ai-service might be able
# to provide an endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposed to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false
```

docker-compose.yaml:

```yaml
version: "3"

volumes:
  data:

networks:
  wren:
    driver: bridge

services:
  bootstrap:
    image: ghcr.io/canner/wren-bootstrap:${WREN_BOOTSTRAP_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DATA_PATH: /app/data
    volumes:
      - data:/app/data
    command: /bin/sh /app/init.sh

  wren-engine:
    image: ghcr.io/canner/wren-engine:${WREN_ENGINE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_ENGINE_PORT}
      - ${WREN_ENGINE_SQL_PORT}
    volumes:
      - data:/usr/src/app/etc
      - ${PROJECT_DIR}/data:/usr/src/app/data
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - bootstrap

  ibis-server:
    image: ghcr.io/canner/wren-engine-ibis:${IBIS_SERVER_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${IBIS_SERVER_PORT}
    environment:
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"

  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:${WREN_AI_SERVICE_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    expose:
      - ${WREN_AI_SERVICE_PORT}
    ports:
      - ${AI_SERVICE_FORWARD_PORT}:${WREN_AI_SERVICE_PORT}
    environment:
      # sometimes the console won't show print messages,
      # using PYTHONUNBUFFERED: 1 can fix this
      PYTHONUNBUFFERED: 1
      CONFIG_PATH: /app/data/config.yaml
    env_file:
      - ${PROJECT_DIR}/.env
    volumes:
      - ${PROJECT_DIR}/config.yaml:/app/data/config.yaml
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - qdrant

  qdrant:
    image: qdrant/qdrant:v1.11.0
    restart: on-failure
    expose:
      - 6333
      - 6334
    volumes:
      - data:/qdrant/storage
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"

  wren-ui:
    image: ghcr.io/canner/wren-ui:${WREN_UI_VERSION}
    restart: on-failure
    platform: ${PLATFORM}
    environment:
      DB_TYPE: sqlite
      # /app is the working directory in the container
      SQLITE_FILE: /app/data/db.sqlite3
      WREN_ENGINE_ENDPOINT: http://wren-engine:${WREN_ENGINE_PORT}
      WREN_AI_ENDPOINT: http://wren-ai-service:${WREN_AI_SERVICE_PORT}
      IBIS_SERVER_ENDPOINT: http://ibis-server:${IBIS_SERVER_PORT}
      GENERATION_MODEL: ${GENERATION_MODEL}
      # telemetry
      WREN_ENGINE_PORT: ${WREN_ENGINE_PORT}
      WREN_AI_SERVICE_VERSION: ${WREN_AI_SERVICE_VERSION}
      WREN_UI_VERSION: ${WREN_UI_VERSION}
      WREN_ENGINE_VERSION: ${WREN_ENGINE_VERSION}
      USER_UUID: ${USER_UUID}
      POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      POSTHOG_HOST: ${POSTHOG_HOST}
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      # client side
      NEXT_PUBLIC_USER_UUID: ${USER_UUID}
      NEXT_PUBLIC_POSTHOG_API_KEY: ${POSTHOG_API_KEY}
      NEXT_PUBLIC_POSTHOG_HOST: ${POSTHOG_HOST}
      NEXT_PUBLIC_TELEMETRY_ENABLED: ${TELEMETRY_ENABLED}
      EXPERIMENTAL_ENGINE_RUST_VERSION: ${EXPERIMENTAL_ENGINE_RUST_VERSION}
      # configs
      WREN_PRODUCT_VERSION: ${WREN_PRODUCT_VERSION}
    ports:
      # HOST_PORT is the port you want to expose to the host machine
      - ${HOST_PORT}:3000
    volumes:
      - data:/app/data
    networks:
      - wren
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - wren-ai-service
      - wren-engine
```
wmrenr added the bug (Something isn't working) label on Mar 4, 2025

wwwy3y3 (Member) commented Mar 4, 2025

Hi @wmrenr thanks for reaching out. @paopa please check it when you're available.

paopa (Member) commented Mar 4, 2025

Hi @wmrenr, I noticed an error log, `litellm.exceptions.APIError: litellm.APIError: APIError: OpenAIException - Connection error.`, in your ai-service.log, and I also noticed `api_key_name: LLM_OLLAMA_API_KEY` entries in the llm and embedder sections. Can you try removing them, along with the corresponding env vars in the .env file, then give it another try?

Also, here is my config for Ollama, which you might refer to:

```yaml
models:
- api_base: http://host.docker.internal:11434/
  kwargs:
    n: 1
    temperature: 0
  model: ollama/phi4
provider: litellm_llm
timeout: 120
type: llm
---
models:
- api_base: http://host.docker.internal:11434/
  model: ollama/nomic-embed-text:latest
  timeout: 120
provider: litellm_embedder
type: embedder
```
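(Aside: since the logged error is a connection error, it may also help to confirm that the wren-ai-service container can reach Ollama at all. The sketch below is a minimal, hypothetical check using Ollama's standard `/api/tags` endpoint; the `host.docker.internal` base URL matches the config above, but run it from inside the container, or adjust the host, for a meaningful result.)

```python
# Minimal reachability check for an Ollama server (illustrative sketch).
import json
import urllib.error
import urllib.request

def list_ollama_models(base="http://host.docker.internal:11434"):
    """Return model names from Ollama's /api/tags, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    models = list_ollama_models()
    if models is None:
        print("Ollama is NOT reachable; check api_base and extra_hosts")
    else:
        print("Ollama models:", models)
```

If this prints the "NOT reachable" message from inside the container, the problem is the network path (e.g. `extra_hosts` or the `api_base` host/port), not the model config itself.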

wmrenr (Author) commented Mar 4, 2025

> Hi @wmrenr, I noticed there is an error log about `litellm.exceptions.APIError: litellm.APIError: APIError: OpenAIException - Connection error.` in your ai-service.log, and also noticed `api_key_name: LLM_OLLAMA_API_KEY` entries in the llm and embedder sections. Can you try to remove them, and also the env vars in the .env file, then give it a try again?

Sorry, the wrenai-wren-ai-service.log I uploaded before did not correspond to this. I have now updated all the logs as follows (config.yaml, .env, and docker-compose.yaml are the same as above):

wrenai-wren-ai-service.log
wrenai-wren-engine.log
wrenai-wren-ui.log
wrenai-ibis-server.log

None of the logs report any errors, but the UI shows these errors.

[Screenshots: same UI error as above]

wmrenr (Author) commented Mar 5, 2025


@paopa, could you please take a look at my problem when you have time?

paopa (Member) commented Mar 6, 2025

Hi @wmrenr, I’m having a bit of a hard time figuring out what’s causing this issue from your log. It seems to only pop up when Wren AI is deploying the model for your data source. Could you add some extra logging functionality? Also, could you let me know which version of Wren AI you’re using and how you’re deploying it (launcher, Docker Compose, or from scratch)? And does it happen every time you restart the system?

wmrenr (Author) commented Mar 6, 2025

> Hi @wmrenr, I’m having a bit of a hard time figuring out what’s causing this issue from your log. It seems to only pop up when Wren AI is deploying the model for your data source. Could you add some extra logging functionality? Also, could you let me know which version of Wren AI you’re using and how you’re deploying it (launcher, Docker Compose, or from scratch)? And does it happen every time you restart the system?

I can try to add some extra logging. My version of Wren AI is 0.15.3; I deployed it with Docker Compose; and it happens every time I restart the system.
