feature request: batch opening of 4 ML polys into one in spartan-parallel
#63
Comments
To batch open multiple multilinear polynomials with different numbers of variables at different points, we need to run a sum-check instance that reduces the openings to a single point. Formally, suppose we are given a set of multilinear polynomials, each with its own number of variables, together with an evaluation point and a claimed value for each of them. We want to prove that every polynomial evaluates to its claimed value at its own point.

In order to achieve that goal, we can use a particular sum-check instance that combines all of these claims. At the end of the sum-check, we will need to open every polynomial at a single point. The above is the big idea. Now we need to solve two problems: (1) running this sum-check over polynomials with different numbers of variables, and (2) batch opening the polynomials, which have different sizes, at the resulting point.
We already have an implementation of sum-check for (1) in the Ceno repo. So the only problem left is (2).
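As a concrete but purely illustrative sketch of this reduction (the notation below — $f_i$, $n_i$, $z_i$, $v_i$, $\alpha$ — is ours, not taken from this thread or from the Ceno code, and the actual implementation may differ): suppose the multilinear polynomials are $f_1, \dots, f_k$, where $f_i$ has $n_i$ variables, $n = \max_i n_i$, and we want to prove $f_i(z_i) = v_i$ for all $i$. One standard way is for the verifier to sample a random $\alpha \in \mathbb{F}$ and run a single sum-check for

$$\sum_{i=1}^{k} \alpha^{i-1} v_i \;=\; \sum_{x \in \{0,1\}^{n}} \sum_{i=1}^{k} \frac{\alpha^{i-1}}{2^{\,n-n_i}} \, \widetilde{\mathrm{eq}}(z_i, x_{[1..n_i]}) \, f_i(x_{[1..n_i]}),$$

where $x_{[1..n_i]}$ denotes the first $n_i$ coordinates of $x$, and the factor $2^{-(n-n_i)}$ compensates for each point of the smaller hypercube being counted $2^{n-n_i}$ times. After the $n$ sum-check rounds, the verifier is left with a random point $r \in \mathbb{F}^n$ and only needs the openings $f_i(r_{[1..n_i]})$, i.e., all polynomials are opened at (prefixes of) the single point $r$; producing those openings is exactly problem (2).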
Regarding problem (2), there are two potential approaches.
The challenge in approach (1): in WHIR, each iteration shrinks the codeword size by a factor of 2, but shrinks the polynomial size by a factor of $2^k$ (the number of variables drops by the folding factor $k$). So do we merge two codewords with the same codeword size but different polynomial sizes, or with the same polynomial sizes but different codeword sizes? The workable solution is to keep the codeword size the same, because when we add the codewords, the underlying polynomials are also added correctly. The problem is that whenever a new polynomial is merged in, the code rate is pushed back to the original rate, and the number of queries goes back to the original count. So the proof size would be larger than the originally estimated proof size of WHIR. This challenge is solvable; the only issue is efficiency.
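To make the rate/query issue concrete (again with illustrative notation not fixed in the thread: folding factor $k$, initial rate $\rho$, a largest polynomial with $m$ variables, initial domain of size $2^m/\rho$): after $i$ WHIR iterations the running codeword has length $2^{m-i}/\rho$ but encodes a polynomial with only $m - ik$ variables, so its rate has improved to $\rho / 2^{i(k-1)}$. A freshly committed polynomial with $m - i$ variables at rate $\rho$ has the same codeword length $2^{m-i}/\rho$, so the two codewords can be added point-wise; but the sum encodes a polynomial with $m - i$ variables on that same domain, i.e., its rate is back to $\rho$. Since the number of queries is determined by the rate, the query count also returns to its initial value, which is where the extra proof size comes from.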
The challenge in approach (2): suppose we have two polynomials of different sizes; the difficulty lies in how to combine them in the first round. Approach (2) is not workable until this challenge is resolved. However, if this approach does work, the protocol is conceptually simpler than approach (1): since it only affects the first round and the rest stays the same as WHIR, the change to the current code would also be minimal.
Summary of approach (1)

Modify the non-batched WHIR

First, we need to modify the original WHIR to run two instances in parallel. Originally, a WHIR instance proves that a given codeword is committing to some polynomial satisfying a claim described by a weight function. At the beginning, the second instance is a dummy one.

Batched version of WHIR

We are now ready to describe the batched version of WHIR, for batching polynomials with different sizes. The original weight function encodes the evaluation claim of each polynomial.
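For reference, in the WHIR paper's own formulation (which may differ in details from what is intended here), a single instance proves proximity to a constrained Reed-Solomon code: the committed codeword should correspond to a multilinear $\hat f$ with $\sum_{b \in \{0,1\}^m} \hat w(b, \hat f(b)) = \sigma$ for a public weight polynomial $\hat w$ and target $\sigma$. An evaluation claim $\hat f(z) = v$ is expressed by taking

$$\hat w(b, Z) = Z \cdot \widetilde{\mathrm{eq}}(z, b), \qquad \sigma = v,$$

since $\sum_{b} \widetilde{\mathrm{eq}}(z, b)\, \hat f(b) = \hat f(z)$. Running two instances "in parallel" then means carrying two such (codeword, weight, target) triples through the same folding rounds, with the second triple initially trivial.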
There is a more severe challenge than this, rendering this approach unusable. In short, after one round (or several rounds) of reduction, the weight function of the running polynomial is no longer the same as that of the remaining polynomials yet to be batched. Therefore, the situation becomes: trying to batch prove claims under different weight functions. The above "Summary of approach (1)" works, but is only slightly better than the naive approach: run a separate batched WHIR for each different polynomial size.
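Sketching why the weight functions diverge (again following the WHIR paper only loosely, so this is indicative rather than exact): after an iteration, the claim about the folded polynomial uses a weight that is a random linear combination of the partially evaluated original weight, an eq-weight at the out-of-domain point, and eq-weights derived from the shift queries. A polynomial merged in later still carries the plain eq-weight of its own evaluation claim, so the two claims no longer share a weight function and cannot simply be added together.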
Goal
The goal is to reduce the size of the proof generated by spartan-parallel.

Background
Currently we need to open four different sets of multilinear polynomials at three different points in spartan-parallel (details). This requires us to generate 4 polynomial opening proofs. However, we know a multilinear polynomial opening proof's size is $O(n)$ (where $n$ is the number of variables). This means the opening proof of an ML poly with 16 variables is not significantly shorter than that with 25 variables. In plonky3, the opening of hundreds of polynomials is batched into one.

See more discussion at slack
Proposal (TODO) - Batching