Unable to see container ID for Wren AI, and Wren AI shows the error "Failed to create asking task" #1393
@Nikita23526 could you give me the container log of the ai service by running `docker logs -f wrenai-wren-ai-service-1`?
I have provided it. Yesterday I connected it with MySQL using a database with a single table, asked a few questions, and it responded. But today I made a database with 5 tables, and when I ask a question it shows "failed to create task".

docker logs -f wrenai-wren-ai-service-1
ERROR: Application startup failed. Exiting.
(the line above repeats 10 times)
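The repeated "Application startup failed" lines hide the root-cause traceback printed just before them. One way to pull that context out of the log is `grep` with a before-context window; the sample log below is an assumed illustration of the shape of such a failure, not the actual output of this service:

```shell
# Write a sample log with the assumed shape of a startup failure
cat > sample.log <<'EOF'
Traceback (most recent call last):
  File "/src/globals.py", line 49, in create_service_container
KeyError: 'some_missing_pipeline'
ERROR: Application startup failed. Exiting.
EOF

# Print the 3 lines of context preceding each failure message
grep -B 3 "Application startup failed" sample.log
```

Against the real container, the same grep can be applied to the output of `docker logs wrenai-wren-ai-service-1`.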
I am using gemini-flash-2.0 and have provided the API key in the .env file as well.
@Nikita23526 the error shows that the ai pipeline definitions are incomplete. Please read the comments in the config carefully and fill in the missing pipeline definitions. Thanks
Can you help me set up the pipeline? I think I have followed the instructions correctly, but I still get the same "Failed to create task" error.
2025-03-19 11:27:32 wren-ai-service-1 | File "/src/globals.py", line 49, in create_service_container
Could you share your config.yaml?
@cyyeh
# ------------------------------- LLM Configuration (Ollama Mistral) -------------------------------
type: llm
# ------------------------------- Embedding Model Configuration -------------------------------
type: embedder
# ------------------------------- Wren Engine Configuration -------------------------------
type: engine
# ------------------------------- Document Store Configuration -------------------------------
type: document_store
# ------------------------------- AI Pipeline Configuration -------------------------------
type: pipeline
# ------------------------------- General Settings -------------------------------
settings:
@cyyeh please review
The bootstrap container in Docker cannot run and connect to LM Studio, please help.
Getting error "Failed to create asking task"
I have followed the official documentation and used wren-launcher-windows to run a custom LLM in a Docker container. I see a container ID for ollama but not for Wren.
Expected behavior
I have connected it with MySQL, and when I ask about a table it does not respond with an answer, just shows the error "Failed to create asking task".
Relevant log output
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 3 steps basically:
# 1. you need to use your own llm and embedding models
# 2. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 3. you need to fill in correct llm and embedding models in the pipe definitions
type: llm
provider: litellm_llm
models:
  - model: ollama_chat/phi4:14b  # ollama_chat/<ollama_model_name>
    alias: default
    timeout: 600
    kwargs:
      n: 1
      temperature: 0
type: embedder
provider: litellm_embedder
models:
  - alias: default
    api_base: http://host.docker.internal:11434 # if you are using mac/windows, don't change this; if you are using linux, please search "Run Ollama in docker container" in this page: https://docs.getwren.ai/oss/installation/custom_llm#running-wren-ai-with-your-custom-llm-embedder
    timeout: 600
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768 # put your embedding model dimension here
timeout: 120
recreate_index: true
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
embedder: litellm_embedder.default
document_store: qdrant
embedder: litellm_embedder.default
document_store: qdrant
embedder: litellm_embedder.default
document_store: qdrant
llm: litellm_llm.default
embedder: litellm_embedder.default
document_store: qdrant
embedder: litellm_embedder.default
document_store: qdrant
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
llm: litellm_llm.default
embedder: litellm_embedder.default
document_store: qdrant
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
llm: litellm_llm.default
llm: litellm_llm.default
embedder: litellm_embedder.default
document_store: qdrant
llm: litellm_llm.default
document_store: qdrant
embedder: litellm_embedder.default
document_store: qdrant
embedder: litellm_embedder.default
llm: litellm_llm.default
llm: litellm_llm.default
engine: wren_ui
llm: litellm_llm.default
llm: litellm_llm.default
llm: litellm_llm.default
engine: wren_ui
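For comparison, each entry under `pipes:` in config.example.yaml is a list item carrying a `name:` key that identifies the pipeline; the paste above has lost those `name:` lines, which is what the maintainer means by "incomplete" definitions. A sketch of the expected shape follows, where the pipe names are illustrative placeholders, not necessarily names WrenAI actually uses; the real ones must be copied from config.example.yaml for your version:

```yaml
type: pipeline
pipes:
  - name: example_indexing_pipe   # hypothetical name; copy the real one from config.example.yaml
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: example_generation_pipe # hypothetical name
    llm: litellm_llm.default
    engine: wren_ui
```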
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false # if you want to use db schemas without pruning, set this to true. It will be faster
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
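Since the failure in this thread traces back to pipeline entries that lost their names, a small self-contained sanity check can flag them before restarting the service. This is a sketch assuming PyYAML is installed; the sample document and the pipe name in it are illustrative, not taken from WrenAI:

```python
import yaml  # PyYAML, assumed available

# Illustrative pipeline document: the first entry is well-formed,
# the second has lost its `name:` line (as in the config pasted above).
sample = """
type: pipeline
pipes:
  - name: example_pipe            # hypothetical name
    embedder: litellm_embedder.default
    document_store: qdrant
  - embedder: litellm_embedder.default
    document_store: qdrant
"""

def missing_pipe_names(config_text):
    """Return indices of entries under `pipes:` that lack a `name` key."""
    cfg = yaml.safe_load(config_text)
    return [i for i, pipe in enumerate(cfg.get("pipes", []))
            if not isinstance(pipe, dict) or "name" not in pipe]

print(missing_pipe_names(sample))  # the second entry (index 1) has no name
```

Pointing the same function at the text of ~/.wrenai/config.yaml would list every pipe entry that still needs a name filled in.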
On CMD, when I ran docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9b0e9ea27466 ollama/ollama "/bin/ollama serve" 17 hours ago Up 3 minutes 0.0.0.0:11434->11434/tcp ollama
a79c776ec2a0 ghcr.io/canner/wren-ui:0.20.2 "docker-entrypoint.s…" 18 hours ago Up 3 minutes 0.0.0.0:3000->3000/tcp wrenai-wren-ui-1
d154af9e1e1b ghcr.io/canner/wren-ai-service:0.15.18 "/app/entrypoint.sh" 18 hours ago Up 3 minutes 0.0.0.0:5555->5555/tcp wrenai-wren-ai-service-1
ab4775ac51e3 ghcr.io/canner/wren-engine:0.14.3 "/__cacert_entrypoin…" 18 hours ago Up 3 minutes 7432/tcp, 8080/tcp wrenai-wren-engine-1
9fdd91b4742c ghcr.io/canner/wren-engine-ibis:0.14.3 "fastapi run" 18 hours ago Up 2 minutes 8000/tcp wrenai-ibis-server-1
9969ba55153d ghcr.io/canner/wren-bootstrap:0.1.5 "/bin/sh /app/init.sh" 18 hours ago Exited (0) About a minute ago wrenai-bootstrap-1
c0daefb2a9f6 qdrant/qdrant:v1.11.0 "./entrypoint.sh" 18 hours ago Up 3 minutes 6333-6334/tcp wrenai-qdrant-1
This is what I am getting.