We added the corresponding DeepSeek API key to the .env file. When requesting "what could I ask", the following error occurs:
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=litellm_llm/deepseek/deepseek-coder
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
E0307 16:40:17.972 31064 wren-ai-service:60] An error occurred during question recommendation generation: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=litellm_llm/deepseek/deepseek-coder
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
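The error means LiteLLM could not map the model string to a provider: the string it received still carries the Wren AI provider alias `litellm_llm/`, whereas LiteLLM expects the first path segment of the model name to be a provider it recognizes (e.g. `deepseek/deepseek-coder`). A minimal sketch of the idea, with illustrative names (this is not LiteLLM's actual internal code):

```python
# Simplified illustration of provider resolution from a model string.
# KNOWN_PROVIDERS is an abridged, hypothetical set for demonstration.
KNOWN_PROVIDERS = {"deepseek", "openai", "anthropic", "huggingface"}

def resolve_provider(model: str):
    """Return (provider, model) if the prefix names a known provider."""
    prefix, _, rest = model.partition("/")
    if rest and prefix in KNOWN_PROVIDERS:
        return prefix, rest
    raise ValueError(f"LLM Provider NOT provided. You passed model={model}")

# "deepseek/deepseek-coder" resolves cleanly:
print(resolve_provider("deepseek/deepseek-coder"))
# "litellm_llm/deepseek/deepseek-coder" has prefix "litellm_llm",
# which is not a provider LiteLLM knows, so resolution fails.
```

This matches the log: the service handed LiteLLM `litellm_llm/deepseek/deepseek-coder`, so the unknown `litellm_llm` prefix triggered the BadRequestError.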
We are currently using LiteLLM to connect to the official DeepSeek API. config.yaml is configured as follows:
models:
  - kwargs:
      n: 1
      response_format:
        type: text
      temperature: 0
    model: deepseek/deepseek-reasoner
    timeout: 120
  - kwargs:
      n: 1
      response_format:
        type: text
      temperature: 0
    model: deepseek/deepseek-chat
    timeout: 120
  - kwargs:
      n: 1
      response_format:
        type: json_object
      temperature: 0
    model: deepseek/deepseek-coder
    timeout: 120
provider: litellm_llm
type: llm
---
models:
  - model: gte-large
    timeout: 120
provider: litellm_embedder
type: embedder
---
endpoint: http://localhost:3000
provider: wren_ui
type: engine
---
connection_info: ''
endpoint: http://localhost:8000
manifest: ''
provider: wren_ibis
source: bigquery
type: engine
---
endpoint: http://localhost:8080
manifest: ''
provider: wren_engine
type: engine
---
embedding_model_dim: 3072
location: http://localhost:6333
provider: qdrant
recreate_index: false
timeout: 120
type: document_store
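For context on how a `models:` entry is consumed: the `model` and `timeout` keys plus everything under `kwargs` end up as arguments to a LiteLLM completion call. The helper below is a hypothetical sketch of that mapping, not Wren AI's actual code:

```python
# A model entry like the deepseek-coder one above, as a plain dict.
model_entry = {
    "model": "deepseek/deepseek-coder",
    "timeout": 120,
    "kwargs": {
        "n": 1,
        "temperature": 0,
        "response_format": {"type": "json_object"},
    },
}

def build_completion_args(entry: dict) -> dict:
    """Hypothetical helper: flatten a config entry into call kwargs."""
    args = {"model": entry["model"], "timeout": entry["timeout"]}
    args.update(entry["kwargs"])  # kwargs are forwarded as-is
    return args

print(build_completion_args(model_entry))
```

The key point for debugging: whatever ends up in `args["model"]` must be a bare `<provider>/<model>` string that LiteLLM recognizes, with no extra prefix.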
---
pipes:
  - embedder: litellm_embedder.gte-large
    name: db_schema_indexing
  - embedder: litellm_embedder.gte-large
    name: historical_question_indexing
  - embedder: litellm_embedder.gte-large
    name: table_description_indexing
  - embedder: litellm_embedder.gte-large
    llm: litellm_llm.deepseek/deepseek-coder
    name: db_schema_retrieval
  - embedder: litellm_embedder.gte-large
    name: historical_question_retrieval
  - llm: litellm_llm.deepseek/deepseek-coder
    name: sql_generation
  - llm: litellm_llm.deepseek/deepseek-coder
    name: sql_correction
  - llm: litellm_llm.deepseek/deepseek-coder
    name: followup_sql_generation
  - name: sql_summary
  - llm: litellm_llm.deepseek/deepseek-chat
    name: sql_answer
  - llm: litellm_llm.deepseek/deepseek-coder
    name: sql_breakdown
  - llm: litellm_llm.deepseek/deepseek-coder
    name: sql_expansion
  - name: semantics_description
  - llm: litellm_llm.deepseek/deepseek-coder
    name: relationship_recommendation
  - name: question_recommendation
  - embedder: litellm_embedder.gte-large
    llm: litellm_llm.deepseek/deepseek-coder
    name: question_recommendation_db_schema_retrieval
  - llm: litellm_llm.deepseek/deepseek-coder
    name: question_recommendation_sql_generation
  - name: chart_generation
  - name: chart_adjustment
  - embedder: litellm_embedder.gte-large
    llm: litellm_llm.deepseek/deepseek-coder
    name: intent_classification
  - name: data_assistance
  - embedder: litellm_embedder.gte-large
    name: sql_pairs_indexing
  - embedder: litellm_embedder.gte-large
    llm: litellm_llm.deepseek/deepseek-coder
    name: sql_pairs_retrieval
  - name: preprocess_sql_data
  - name: sql_executor
  - name: sql_question_generation
  - name: sql_generation_reasoning
  - llm: litellm_llm.deepseek/deepseek-coder
    name: sql_regeneration
type: pipeline
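Note the shape of the `llm:` references: they join the provider alias and the model name with a dot, as in `litellm_llm.deepseek/deepseek-coder`. Only the part after the first dot is a valid LiteLLM model string; the failing log shows the two parts joined with a slash instead (`litellm_llm/deepseek/deepseek-coder`), which suggests the alias was never stripped. A sketch of the expected split, using a hypothetical helper name:

```python
def split_pipe_ref(ref: str):
    """Split '<provider_alias>.<model>' on the FIRST dot only,
    so model names containing '/' (or '.') survive intact."""
    provider, _, model = ref.partition(".")
    return provider, model

print(split_pipe_ref("litellm_llm.deepseek/deepseek-coder"))
# The alias "litellm_llm" selects the Wren AI provider wrapper;
# only "deepseek/deepseek-coder" should be passed on to LiteLLM.
```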
---
settings:
  allow_using_db_schemas_without_pruning: false
  column_indexing_batch_size: 50
  development: false
  engine_timeout: 30
  host: 127.0.0.1
  langfuse_enable: true
  langfuse_host: https://cloud.langfuse.com
  logging_level: INFO
  port: 5556
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  table_column_retrieval_size: 100
  table_retrieval_size: 10
The same BadRequestError is logged on every request. In addition, when any question is submitted, the request to http://localhost:3000/api/graphql fails with the following response:
{
"errors": [
{
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"createAskingTask"
],
"message": "Cannot read properties of null (reading 'hash')",
"extensions": {
"code": "INTERNAL_SERVER_ERROR",
"message": "Cannot read properties of null (reading 'hash')",
"shortMessage": "Internal server error"
}
}
],
"data": null
}