Commit 2011eb7

banner and workflow section added in README.md
1 parent 90f6cbf commit 2011eb7

File tree

3 files changed: +28 −2 lines changed


readme.md renamed to README.md

Lines changed: 28 additions & 2 deletions
@@ -1,10 +1,16 @@
-# Knowly
+<div align="center">
+<a href="">
+<img src="documentation/banner.png" alt="banner">
+</a>
+</div>
+
 ## _Advancing Conversational Data Interaction_

-Knowly, is a knowledge-based self aware, Chat Application designed to revolutionize the way users interact with their data.
+Knowly is a knowledge-based, self-aware chat application designed to revolutionize the way users interact with their data. It is powered by state-of-the-art language models, giving it immense capability to generate excellent responses when interacting with humans.
 ****

 ##### **To skip over features:**
+- [Working Principle](#working-principle)
 - [Getting Started](#gettingstarted)
 - [Demo](#demo)
 - [Features](#features)
@@ -13,6 +19,26 @@ Knowly, is a knowledge-based self aware, Chat Application designed to revolution
 - [LICENSE](#license)
 ****

+## Working Principle
+
+Here we present the workflow of Knowly. The diagram below shows every component of the system. To understand how Knowly works, we first have to look at each subsection of the diagram. Some features operate independently:
+
+- Normal chat with LLM
+- RAG
+- Multimodal RAG
+
+Normal chat requires only a user query, which can be text or voice. Voice input is transcribed into text and passed to the LLM to generate a response. This flow requires no document upload, embedding creation, or image upload.
+
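The normal-chat flow just described can be sketched as follows. This is a toy illustration only: `transcribe` and `generate_response` are hypothetical placeholders standing in for a speech-to-text model and the LLM call, not Knowly's actual functions.

```python
# Hypothetical sketch of the normal-chat flow; transcribe() and
# generate_response() are placeholders, not Knowly's real API.

def transcribe(audio_bytes: bytes) -> str:
    # Placeholder for a speech-to-text model that turns voice into text.
    return "transcribed user query"

def generate_response(prompt: str) -> str:
    # Placeholder for the LLM call.
    return f"LLM answer to: {prompt}"

def normal_chat(query, is_voice: bool = False) -> str:
    # Voice input is first transcribed to text; text goes straight to the LLM.
    text = transcribe(query) if is_voice else query
    return generate_response(text)

print(normal_chat("What is RAG?"))
```

Note that no documents or images are involved anywhere in this path, matching the description above.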
+RAG requires document upload and a user query; again, the query can be text or voice. The uploaded documents are processed and split into smaller chunks. These chunks are then passed through an embedding model, which generates an embedding vector for each text chunk, and the vectors are stored in a vector database. The vector database is then queried to retrieve chunks similar to the user query, which serve as context for the LLM. Finally, the retrieved context is augmented with the user query and passed to the LLM to generate the response.
+
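The chunk → embed → store → retrieve → augment pipeline described above can be sketched end to end in plain Python. This is a toy illustration under loud assumptions: the bag-of-words `embed` function and the in-memory list stand in for a real embedding model and vector database.

```python
# Minimal sketch of the RAG pipeline: chunking, embedding, retrieval,
# and prompt augmentation. The embedding is a toy word-hashing scheme;
# a real system would use a trained embedding model and a vector DB.
import math
from collections import Counter

def split_into_chunks(text: str, chunk_size: int = 40) -> list[str]:
    # Split a document into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hash each word into a fixed-size count vector.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query embedding.
    qv = embed(query)
    ranked = sorted(store, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Indexing: chunk the documents and store (chunk, embedding) pairs.
docs = ("Knowly supports retrieval augmented generation over uploaded documents. "
        "Voice queries are transcribed before retrieval.")
store = [(chunk, embed(chunk)) for chunk in split_into_chunks(docs, chunk_size=8)]

# Querying: retrieve similar chunks and augment the prompt for the LLM.
question = "voice queries are transcribed"
context = retrieve(question, store)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
```

The final `prompt` is what would be handed to the LLM: retrieved context first, then the user's question.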
+Multimodal RAG requires an image upload. This subsection is responsible for interacting with images as well as text and generating meaningful answers. It does not require document vectorization.
+
+<div align="center">
+<a href="">
+<img src="documentation/workflow.png" alt="workflow">
+</a>
+</div>
+

 ## Getting Started
 <a name="gettingstarted"></a>

documentation/banner.png (138 KB)

documentation/workflow.png (43.3 KB)
