Problems when working with large documents #1069
Replies: 1 comment
-
The problem is that the context is not adequate.
0 replies
-
Hello everyone, and thanks in advance.
I'm learning how Kernel Memory works and I've run into a problem. Small documents (up to about 3 pages) work fine: I just import them with ImportDocumentAsync. The trouble starts with large documents: the tables in them don't come through when I query, and the corresponding chunks don't seem to be created properly. I tried WithCustomTextPartitioningOptions, but it didn't help. In the end I resorted to this:
"var chunks = TextChunker.SplitMarkDownLines(text, maxTokensPerLine: 500);
for (var i = 0; i < chunks.Count; i++) {
documentId = expectMemory.ImportTextAsync(chunks[i]);
}"
Even so, I don't think splitting the document by hand is correct, and I doubt I'm approaching this the right way. As I mentioned, this project is just for learning, and I'm running everything in memory or on disk, using Ollama. I'm including a sketch of the setup below, in case it's needed.
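For reference, this is roughly how I understand the builder-based setup should look. It is not my exact code: the TextPartitioningOptions property names are assumptions on my part and may differ between Kernel Memory versions, the file name and document id are placeholders, and the Ollama and storage configuration is omitted.

```csharp
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.Configuration; // TextPartitioningOptions (namespace may vary by version)

var memory = new KernelMemoryBuilder()
    // ...Ollama text generation/embeddings and storage configuration omitted...
    .WithCustomTextPartitioningOptions(new TextPartitioningOptions
    {
        MaxTokensPerParagraph = 500, // smaller chunks for large documents
        OverlappingTokens = 50       // overlap so paragraphs/tables aren't cut mid-row
    })
    .Build<MemoryServerless>();

// Import the whole file and let Kernel Memory handle the chunking.
var docId = await memory.ImportDocumentAsync("large-document.docx", documentId: "doc-001");

// Ask a question scoped to that document.
var answer = await memory.AskAsync(
    "What information do the tables contain?",
    filter: MemoryFilters.ByDocument("doc-001"));
Console.WriteLine(answer.Result);
```

As far as I understand, these partitioning options are meant to control chunk size during ImportDocumentAsync, so splitting the text manually with TextChunker shouldn't be necessary.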