Commit a7a0964: doc:knowledge use case (#146)

# Knowledge-based QA
Chatting with your own knowledge is a very interesting use case. This chapter shows how to build your own knowledge base through the knowledge base API. To initialize a knowledge store, run `python tool/knowledge_init.py`, which loads your own content as described in the earlier knowledge base module. Alternatively, you can call the knowledge embedding API we provide to store knowledge.
We currently support four knowledge source formats: txt, pdf, md, and url.
```
# KnowledgeEmbedding and LLM_MODEL_CONFIG come from the project; import
# them first (module paths may differ across versions), e.g.:
# from pilot.source_embedding.knowledge_embedding import KnowledgeEmbedding
# from pilot.configs.model_config import LLM_MODEL_CONFIG

vector_store_config = {
    "vector_store_name": name
}

file_path = "your file path"

knowledge_embedding_client = KnowledgeEmbedding(
    file_path=file_path,
    model_name=LLM_MODEL_CONFIG["text2vec"],
    local_persist=False,
    vector_store_config=vector_store_config,
)

# Load the document, split it into chunks, embed them, and persist the
# vectors into the configured store.
knowledge_embedding_client.knowledge_embedding()
```
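The client above picks a loader according to the source type. As a self-contained illustration of that kind of format dispatch (the loader names and the `pick_loader` helper here are hypothetical, not the project's actual classes):

```python
import os

# Hypothetical extension-to-loader mapping; the real project chooses
# among its txt/pdf/md/url handlers in the same spirit.
LOADERS = {
    ".txt": "TextLoader",
    ".pdf": "PDFLoader",
    ".md": "MarkdownLoader",
}

def pick_loader(source: str) -> str:
    """Return a loader name for a file path or URL."""
    if source.startswith(("http://", "https://")):
        return "URLLoader"
    ext = os.path.splitext(source)[1].lower()
    try:
        return LOADERS[ext]
    except KeyError:
        raise ValueError(f"unsupported document format: {ext}")

print(pick_loader("notes.md"))       # MarkdownLoader
print(pick_loader("https://a.b/c"))  # URLLoader
```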

We currently support two vector databases: Chroma (the default) and Milvus. You can switch between them by setting the "VECTOR_STORE_TYPE" field in the .env file.

```
#*******************************************************************#
#**                 VECTOR STORE SETTINGS                         **#
#*******************************************************************#
VECTOR_STORE_TYPE=Chroma
#MILVUS_URL=127.0.0.1
#MILVUS_PORT=19530
```
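A minimal sketch of how such a setting can drive backend selection at runtime (the `resolve_store_type` helper is illustrative, not the project's code; only the `VECTOR_STORE_TYPE` variable name comes from the config above):

```python
import os

SUPPORTED_STORES = {"Chroma", "Milvus"}

def resolve_store_type(default: str = "Chroma") -> str:
    """Read the backend name from the environment, falling back to the default."""
    store = os.environ.get("VECTOR_STORE_TYPE", default)
    if store not in SUPPORTED_STORES:
        raise ValueError(f"unknown VECTOR_STORE_TYPE: {store}")
    return store

os.environ["VECTOR_STORE_TYPE"] = "Milvus"
print(resolve_store_type())  # Milvus
```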
Below is an example of using the knowledge base API to query knowledge:
```
vector_store_config = {
    "vector_store_name": name
}

query = "your query"

knowledge_embedding_client = KnowledgeEmbedding(
    file_path="",
    model_name=LLM_MODEL_CONFIG["text2vec"],
    local_persist=False,
    vector_store_config=vector_store_config,
)

# Return the 10 chunks most similar to the query.
knowledge_embedding_client.similar_search(query, 10)
```
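Conceptually, a top-k similarity search embeds the query and ranks stored vectors by cosine similarity. The sketch below is plain Python for illustration only, not the project's implementation; real stores such as Chroma and Milvus use vector indexes rather than a linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k_search(query_vec, store, k):
    """store: list of (chunk_text, vector) pairs; returns the k best chunks."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("chunk a", [1.0, 0.0]),
    ("chunk b", [0.0, 1.0]),
    ("chunk c", [0.7, 0.7]),
]
print(top_k_search([1.0, 0.1], store, 2))  # ['chunk a', 'chunk c']
```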
