perf: implement XTR for retrieving multivector #3437
base: main
Conversation
Signed-off-by: BubbleCal <[email protected]>
Signed-off-by: BubbleCal <[email protected]>
Signed-off-by: BubbleCal <[email protected]>
Codecov Report

Attention: Patch coverage is

Coverage Diff:

| | main | #3437 | +/- |
| --- | --- | --- | --- |
| Coverage | 78.88% | 78.92% | +0.04% |
| Files | 251 | 251 | |
| Lines | 92230 | 92364 | +134 |
| Branches | 92230 | 92364 | +134 |
| Hits | 72752 | 72897 | +145 |
| Misses | 16508 | 16495 | -13 |
| Partials | 2970 | 2972 | +2 |

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Signed-off-by: BubbleCal <[email protected]>
Signed-off-by: BubbleCal <[email protected]>
- .map(|v| distance_type.func()(q, v))
- .min_by(|a, b| a.partial_cmp(b).unwrap())
+ .map(|v| 1.0 - distance_type.func()(q, v))
+ .max_by(|a, b| a.total_cmp(b))
changing these so that the flat search results match those from IVF_FLAT, so the numbers won't confuse users
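To illustrate the equivalence (a hedged sketch with invented values, not the PR's code): taking the maximum of `1.0 - distance` selects the same vector as taking the minimum distance, but the reported score is a similarity, consistent with what IVF_FLAT returns.

```rust
// Minimal sketch: max of (1.0 - d) picks the same element as min of d,
// but the returned number is a similarity rather than a distance.
fn best_similarity(distances: &[f32]) -> Option<f32> {
    distances
        .iter()
        .map(|d| 1.0 - d)
        .max_by(|a, b| a.total_cmp(b))
}

fn main() {
    // invented distances; the minimum distance 0.25 maps to similarity 0.75
    let dists = [0.5_f32, 0.25, 0.75];
    assert_eq!(best_similarity(&dists), Some(0.75));
}
```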
    _ => unreachable!(),
};

- let mut knn_node = if q.refine_factor.is_some() || is_multivec {
+ let mut knn_node = if q.refine_factor.is_some() {
we don't require refine for multivector search any more
@@ -1061,7 +1063,7 @@ mod tests {
    let gt = multivec_ground_truth(&vectors, &query, k, params.metric_type);
    let gt_set = gt.iter().map(|r| r.1).collect::<HashSet<_>>();

-   let recall = row_ids.intersection(&gt_set).count() as f32 / 10.0;
+   let recall = row_ids.intersection(&gt_set).count() as f32 / 100.0;
the recall was calculated incorrectly; the previous algorithm required refine_factor=5 to reach good enough recall
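For reference, a hedged sketch of the recall computation the test intends (names invented, assuming the ground-truth set holds the top 100 rows per query):

```rust
use std::collections::HashSet;

// recall = |retrieved ∩ ground truth| / |ground truth|
fn recall(retrieved: &HashSet<u64>, ground_truth: &HashSet<u64>) -> f32 {
    retrieved.intersection(ground_truth).count() as f32 / ground_truth.len() as f32
}
```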
Something seems off in the algorithm, with how `missed_similarities` is handled. Could you address my comment, and also maybe write a unit test that shows we get correct results out of this?
let row_ids = batch[ROW_ID].as_primitive::<UInt64Type>();
let dists = batch[DIST_COL].as_primitive::<Float32Type>();
Since we are using values here, can we add a debug assert that there are no nulls?
added
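For illustration, a minimal sketch of that kind of assertion (not the PR's exact code; the `arrow_array` types are assumed from the snippet above):

```rust
use arrow_array::{Array, Float32Array, UInt64Array};

// A minimal sketch: verify the primitive arrays carry no nulls before their
// raw values are consumed.
fn debug_check_no_nulls(row_ids: &UInt64Array, dists: &Float32Array) {
    debug_assert_eq!(row_ids.null_count(), 0, "row ids must not contain nulls");
    debug_assert_eq!(dists.null_count(), 0, "distances must not contain nulls");
}
```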
// at most, we will have k * refine_factor results for each query
let mut results = HashMap::with_capacity(k * refactor);
let mut missed_similarities = 0.0;
while let Some((min_sim, batch)) = reduced_inputs.try_next().await? {
I don't understand the algorithm in the paper deeply, but it seems odd to me that the order of the ANN queries matters. It appears that later batches will be adding a higher `missed_similarities` value. Is that intentional?
It also looks like the output order of `select_all` isn't deterministic: https://docs.rs/futures/latest/futures/stream/fn.select_all.html
it's intentional, this is a little bit complicated, will add more comments about this:

considering we are updating the final `results` with a `batch` from a query vector, then for a row `x`:

- if `x` exists in `results` but not in `batch`: use `min_sim` as the estimated similarity, so the contribution is `min_sim`
- if `x` exists in both, then the contribution is its `sim` in `batch`
- if `x` exists in only `batch`, this means all previous queries missed this row; the algorithm maintains `missed_similarities` as the sum of `min_sim` so far, so the contribution is `missed_similarities + sim`
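A hedged Rust sketch of that merging rule (names and shapes invented, not the PR's code): each per-query-vector batch arrives with its minimum similarity, and the three cases above decide how a row's running score is updated.

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the update rule described above. `batches` stands in for the
// per-query-vector ANN results: (min_sim of the batch, [(row_id, sim)]).
fn merge_scores(batches: &[(f32, Vec<(u64, f32)>)]) -> HashMap<u64, f32> {
    let mut results: HashMap<u64, f32> = HashMap::new();
    // sum of min_sim over all batches processed so far
    let mut missed_similarities = 0.0_f32;
    for (min_sim, batch) in batches {
        let hit: HashSet<u64> = batch.iter().map(|(row_id, _)| *row_id).collect();
        // rows already in `results` but missing from this batch: estimate
        // their contribution with this batch's minimum similarity
        for (row_id, score) in results.iter_mut() {
            if !hit.contains(row_id) {
                *score += *min_sim;
            }
        }
        for &(row_id, sim) in batch {
            results
                .entry(row_id)
                // row seen before: add its exact similarity from this batch
                .and_modify(|score| *score += sim)
                // row missed by every earlier batch: impute those misses with
                // the accumulated minimum similarities, then add this sim
                .or_insert(missed_similarities + sim);
        }
        missed_similarities += *min_sim;
    }
    results
}
```

Note that, as discussed above, the result depends on the order in which batches are consumed, since `missed_similarities` accumulates over the earlier batches.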
Signed-off-by: BubbleCal <[email protected]>
we have tests here https://github.com/lancedb/lance/pull/3437/files#diff-6de816b72e7c722316243c57df4f809ad34dc8581367c72335154dada48c40edL993
Signed-off-by: BubbleCal <[email protected]>
this PR introduces XTR, which can score documents without fetching the original multivectors, so we don't need any IO ops when searching on a multivector column.
For documents missed by a single query vector's search, it uses that query vector's minimum similarity as the estimated similarity.
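In other words (notation mine, not taken from the PR), for $Q$ query vectors the estimated document score is

```math
\mathrm{score}(d) = \sum_{i=1}^{Q}
\begin{cases}
\mathrm{sim}_i(d) & \text{if } d \text{ was retrieved for query vector } i \\
\hat{m}_i & \text{otherwise}
\end{cases}
```

where $\hat{m}_i$ is the minimum similarity among the results returned for query vector $i$.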