Export the model #617
EranBolan91 started this conversation in General

Is it possible to train the model with my own data on computer "A", then export the trained model to computer "B" and use it to ask the model questions?

Replies: 2 comments 3 replies
-
With localGPT, you are not really fine-tuning or training the model. Your
documents are ingested and stored in a local vector DB; the default is Chroma.
You can back up and restore the Chroma DB, although I did run into one case of
index corruption. You could also scale it out with a standalone, scalable
vector database: Computer B passes the query to the vector DB, retrieves the
matching chunks, and lets the LLM on Computer B assemble the response.
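Since nothing model-related changes, moving the knowledge to another machine is mostly a matter of copying the persist directory. Here is a minimal backup/restore sketch; the folder and archive names below are assumptions, so point them at whatever persist directory your localGPT setup actually uses:

```python
import shutil
from pathlib import Path

# Assumed paths: adjust to your own localGPT persist directory.
SOURCE_DB = Path("localGPT/DB")       # Chroma persist directory on computer A
ARCHIVE_BASE = Path("chroma_backup")  # archive name (".zip" is appended)
TARGET_DB = Path("localGPT/DB")       # where computer B expects the store

def backup(source: Path, archive_base: Path) -> Path:
    """Zip the Chroma persist directory so it can be copied to another machine."""
    return Path(shutil.make_archive(str(archive_base), "zip", root_dir=source))

def restore(archive_zip: Path, target: Path) -> None:
    """Unpack the archive into the persist directory localGPT is configured to read."""
    target.mkdir(parents=True, exist_ok=True)
    shutil.unpack_archive(str(archive_zip), extract_dir=target)

if __name__ == "__main__":
    zip_path = backup(SOURCE_DB, ARCHIVE_BASE)  # run this on computer A
    print(f"Copy {zip_path} to computer B, then call restore() there.")
```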
…On Sun, 29 Oct 2023 at 18:02, Eran Bolandian ***@***.***> wrote:
Is it possible to train the model with my own data on computer "A", then
export the trained model to computer "B" and use it to ask the model
questions?
-
Eran,
I believe that should work. By default,
https://github.com/PromtEngineer/localGPT uses ChromaDB as the vector DB.
Depending on which ChromaDB version localGPT pins (it should be 0.4.6 by
default), the data lives either as duckdb + parquet files under the DB folder
(older ChromaDB releases) or in sqlite, which ChromaDB migrated to in more
recent releases (I cannot remember from which version onwards).
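If you are unsure which storage layout a given copy uses, the files in the persist directory usually give it away. A rough check, assuming the file names Chroma typically writes (chroma.sqlite3 for the sqlite backend, *.parquet files for the older duckdb backend) and a persist directory named DB:

```python
from pathlib import Path

# "DB" is an assumption; use whatever persist_directory your localGPT config sets.
persist_dir = Path("DB")

if (persist_dir / "chroma.sqlite3").exists():
    print("sqlite backend (newer ChromaDB releases)")
elif list(persist_dir.glob("*.parquet")):
    print("duckdb + parquet backend (older ChromaDB releases)")
else:
    print("no recognisable Chroma files found; check the persist directory path")
```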
On computer B, you just need to make sure you point it to the right folder
for Chroma. Langchain runs the search based on your query (using the same
embeddings), returns the matching chunks, and sends them to the LLM to piece
together into nicely worded paragraphs. If you are using another vector DB,
you will need to find out which folder and files it is stored in; Langchain,
which localGPT uses, supports multiple vector DBs, though the documentation is poor.
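To illustrate the "point computer B at the same folder" step, here is a rough sketch of the retrieval side using LangChain with Chroma. The persist directory and embedding model name are assumptions (localGPT has its own constants and scripts for this), and the embedding model must be the same one computer A used at ingestion time:

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

# Assumptions: the folder copied over from computer A, and the SAME embedding
# model that was used during ingestion, otherwise the similarity search is off.
PERSIST_DIRECTORY = "DB"
EMBEDDING_MODEL_NAME = "hkunlp/instructor-large"

embeddings = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL_NAME)
db = Chroma(persist_directory=PERSIST_DIRECTORY, embedding_function=embeddings)

# Pull the chunks most similar to the question; a RetrievalQA chain (or
# localGPT's own run script) then hands these to the LLM to write the answer.
docs = db.similarity_search("What do my documents say about X?", k=4)
for doc in docs:
    print(doc.metadata.get("source"), doc.page_content[:120])
```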
I would try it out at a small scale first. One other thing to be aware of:
on computer B, you will need to lock the files while you transfer them from
computer A, to prevent read/write contention if someone queries at the same
time. You may have to revisit this if you are doing it at scale.
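One simple way to avoid that contention, sketched below under assumed folder names, is to never write into the live folder at all: transfer the new snapshot into a separate directory, then swap the two with renames so readers only ever see a complete store (pause queries for the instant of the swap to be safe):

```python
import shutil
from pathlib import Path

# Assumed layout on computer B; adjust to your setup.
LIVE_DB = Path("DB")            # the folder localGPT queries
INCOMING = Path("DB_incoming")  # fresh copy transferred from computer A
PREVIOUS = Path("DB_previous")  # parking spot for the old store

def swap_in_new_db() -> None:
    """Promote the freshly transferred store to be the live one.

    Copying into INCOMING first means the live folder is never half-written;
    the two renames are quick, but queries should still be paused briefly.
    """
    if PREVIOUS.exists():
        shutil.rmtree(PREVIOUS)      # drop the previous backup
    if LIVE_DB.exists():
        LIVE_DB.rename(PREVIOUS)     # park the current store
    INCOMING.rename(LIVE_DB)         # promote the new store
```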
Disclaimer: I mostly play with PrivateGPT rather than localGPT, but both work
in a similar way; I just don't have a GPU to run localGPT with.
regards,
Max
…On Wed, 1 Nov 2023 at 18:33, Eran Bolandian ***@***.***> wrote:
Please correct me if I'm wrong.
On computer A, I can ingest all the documents that I want, and all the data
is saved in the local vectorDB.
Then I can copy the vectorDB, move it to computer B, and it will work
just fine?
Querying computer B will return answers based on the documents ingested on
computer A, using computer A's vectorDB.
In the end, what matters is the vectorDB?