index.find() tries to reshape and fails #1822
Comments
After further digging into the traceback, I am starting to think this may be something to do with the way the scores are being processed after the query is made to hnswlib. The traceback appears to point to this method.
|
Can I ask how you built the index? |
It's a large index of 3 million plus documents. I have a set of parquet files that contain about 100,000 rows each. These files are read in as DataFrames, converted to dicts, and then passed through the same function. The db is then initialized and indexed as below.
The whole thing runs in a loop, processing one file at a time. Is the fact that I initialise the doc index on each loop run potentially the problem? Should that sit outside the loop? It's worth mentioning this process did crash at one point and I restarted it from where it left off. I may just try rebuilding the whole db index and see where that gets me. I see you have labeled this as a bug; can you provide a bit more on what your initial thinking is? |
I just labeled it as a bug because it seems like it, but I have no thinking other than suspecting that something went wrong at indexing time. I would indeed not initialize every time but keep indexing in a loop. |
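The initialize-once pattern suggested above can be sketched as follows. Here `FakeIndex` is a hypothetical stand-in for the real document index (so the control flow runs without docarray), and each batch stands in for the DocList built from one parquet file:

```python
# Sketch of the "initialize once, index inside the loop" pattern.
# FakeIndex is a hypothetical stand-in for HnswDocumentIndex; the
# batches stand in for the per-parquet-file DocLists described above.

class FakeIndex:
    def __init__(self):
        self._docs = []

    def index(self, docs):
        self._docs.extend(docs)

    def num_docs(self):
        return len(self._docs)


batches = [[{"id": i} for i in range(100)] for _ in range(5)]  # 5 "files"

index = FakeIndex()      # built ONCE, outside the loop
for batch in batches:    # one file's worth of docs per iteration
    index.index(batch)   # only indexing happens inside the loop

print(index.num_docs())  # 500
```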
Please let us know the result when you reindex. |
@JoanFM thanks for the response. After some debugging over the weekend I managed to nail down the issue. I don't believe it is a bug, as it was a subtle error on my part. I had two scripts: one to create the index and another to search through it. The script to create the index used the following document class:

```python
class AddressDoc(BaseDoc):
    ELID: int
    FULL_ADDRESS: str
    EMBEDDINGS: NdArray[768] = Field(
        max_elements=3500000, space="cosine", num_threads=20
    )
```

The script to search the index used this document class instead, which seems to be what threw the error:

```python
class AddressDoc(BaseDoc):
    ELID: int
    FULL_ADDRESS: str
    EMBEDDINGS: NdArray[768]
```

Maybe as a feature request: I wonder if it's possible for this line to throw an error about an inconsistent class being used to initialise an already existing index.
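A guard like that could work along these lines. This is purely a hypothetical sketch (the names `schema_fingerprint`, `built_with`, and `opened_with` are mine, not docarray's): persist a hash of the field configuration when the index is created, and compare against it whenever the index is reopened.

```python
import hashlib
import json

def schema_fingerprint(field_config: dict) -> str:
    """Hash a canonical JSON dump of a schema's field configuration."""
    canonical = json.dumps(field_config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Configuration the index was *built* with (mirrors the first AddressDoc).
built_with = {
    "EMBEDDINGS": {
        "dim": 768,
        "max_elements": 3500000,
        "space": "cosine",
        "num_threads": 20,
    }
}
# Configuration the *search* script used (plain NdArray[768], no Field options).
opened_with = {"EMBEDDINGS": {"dim": 768}}

mismatch = schema_fingerprint(opened_with) != schema_fingerprint(built_with)
print("schema mismatch:", mismatch)  # True: the search script could be warned
```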
I did have a few follow-on questions about this which hopefully you can shed some light on.
If it is of value, the example below can be used to demonstrate both the reshape error and the blown-up SQLite db by uncommenting the various lines pertaining to the class:

```python
from docarray import BaseDoc, DocList
from docarray.index import HnswDocumentIndex
from docarray.typing import NdArray
from pydantic import Field
import numpy as np
import os

work_dir = os.path.join("hnsw_demo")  # any writable directory

###### Build db ######
class Person(BaseDoc):
    name: str
    follower: int
    # embeddings: NdArray[32]
    embeddings: NdArray[32] = Field(space="cosine")

data = [
    {"name": f"Maria_{i}", "follower": 12345, "embeddings": np.zeros(32)}
    for i in range(20)
]
db_dl = DocList[Person](
    [
        Person(
            name=d["name"],
            follower=d["follower"],
            embeddings=d["embeddings"],
        )
        for d in data
    ]
)
print("db data complete")

doc_index = HnswDocumentIndex[Person](work_dir=work_dir)
doc_index.index(db_dl)
print("db indexed")
print(f"num docs: {doc_index.num_docs()}")

###### Search db ######
class Person(BaseDoc):
    name: str
    follower: int
    # embeddings: NdArray[32]
    embeddings: NdArray[32] = Field(space="cosine")

doc_index = HnswDocumentIndex[Person](work_dir=work_dir)
se_dl = DocList[Person](
    [
        Person(
            name=d["name"],
            follower=d["follower"],
            embeddings=d["embeddings"],
        )
        for d in data[:1]
    ]
)
print("search data complete")

doc_index.find_batched(se_dl, search_field="embeddings", limit=1)
print("db search complete")
```
|
|
@nikhilmakan02 , may I ask you how you configure the num_threads? |
@JoanFM apologies for the late reply on this.

> may I ask you how you configure the num_threads?

I did it in the class as below; is that not correct?
> 1. I have a question, I do not understand why it blew up; is it because the configuration was not respected? Because the fields of the class are the same. Without this, it is hard to know how to validate the incompatibility.

I'm not sure I understand what you are asking here when you say "why it blew up". The class has the same name and the fields have the same names, correct, but the class used to build the index had this custom configuration.

> 3. The reason why having "cosine" blows up the size is that in that case hnswlib normalizes the vectors, and therefore we cannot rely on hnswlib to reconstruct the original vector to give as a result. For this, we may be able to add a configuration value if you do not mind about that. (Not sure how this could be named; maybe you could easily provide a PR for this.)

I definitely think this would be worth doing to bring that DB size down. Also, in some vector dbs you have the option of not returning the embeddings in the search to improve performance; this is preferable, as ultimately I don't need the embeddings after the search is complete. I have never submitted a PR on GitHub before, so let me know if there is something I can do to help here and how I go about doing it. |
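To make the normalization point concrete: under `space="cosine"`, hnswlib L2-normalizes each vector, so the stored vector has unit length and the original magnitude is unrecoverable, which is why a separate copy of the originals must be kept (hence the larger SQLite file). A small pure-Python illustration:

```python
import math

def l2_normalize(vec):
    """Roughly what hnswlib does to each vector under space='cosine'."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

original = [3.0, 4.0]            # norm 5.0
stored = l2_normalize(original)  # [0.6, 0.8], norm 1.0

# The stored vector cannot be turned back into the original:
# its magnitude (5.0 here) is gone after normalization.
print(stored)
print(math.sqrt(sum(x * x for x in stored)))  # 1.0
```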
Hey @nikhilmakan02, there is always a first time contributing to an open source project, and I can assure you it is very rewarding. You just need to make your changes in a fork of yours, and you can then open a PR from a branch via the GUI itself. Just make sure to follow the CONTRIBUTING guidelines (https://github.com/docarray/docarray/blob/main/CONTRIBUTING.md) and to join our Discord server (https://discord.com/invite/DRa9JpGT) to get assistance and communicate with the community. |
Initial Checks
Description
Apologies, the title of this is not the best. I have a very odd case and can't seem to understand what is causing it. I have also failed at recreating the issue in a simpler example.
I have a DocList where each document has been built with the same process; however, the data is obviously different for each doc. I am using the hnswlib backend.
The issue I have is that after I build the doc list with no issues, I then try to run a .find() on the individual elements of the doc list, some of which fail and some of which don't. The error I get on some of these can be seen in the traceback below.
Code Snippet:
I have compared dl[2] and dl[3] left, right, and center and can't understand what the issue is. The embeddings arrays in both documents are the same shape, which I have checked with numpy (.shape, .ndim, .size). I can't understand what the difference is between the two that causes the error below.
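For anyone hitting a similar mismatch: two arrays can agree on `.shape`, `.ndim`, and `.size` yet still differ in dtype (for example `object` vs `float32`), and an object-dtype array can behave differently from a numeric one in downstream array code. A small extra check worth adding to the comparison, assuming numpy is available:

```python
import numpy as np

a = np.zeros(768, dtype=np.float32)
b = np.array([0.0] * 768, dtype=object)  # same shape, but object dtype

# All the shape-level checks agree...
print(a.shape == b.shape, a.ndim == b.ndim, a.size == b.size)  # True True True

# ...but the dtypes differ, which shape checks alone will not reveal.
print(a.dtype, b.dtype)  # float32 object
```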
Traceback below:
Example Code
No response
Python, DocArray & OS Version
Affected Components