# Accumulation

## Introduction

The trick below was originally described in Halo, however we are going to base this post on the abstraction of "accumulation schemes" described by Bünz, Chiesa, Mishra and Spooner in Proof-Carrying Data from Accumulation Schemes, in particular the scheme in Appendix A.2.

Relevant resources include:

- [Halo: Recursive Proof Composition without a Trusted Setup](https://eprint.iacr.org/2019/1021)
- [Proof-Carrying Data from Accumulation Schemes](https://eprint.iacr.org/2020/499)

This page describes the most relevant parts of these papers and how it is implemented in Pickles/Kimchi. It is not meant to document the low-level details of the code in Pickles, but to describe what the code aims to do, allowing someone reviewing / working on this part of the codebase to gain context.

## Interactive Reductions Between Relations

The easiest way to understand "accumulation schemes" is as a set of interactive reductions between relations. An interactive reduction $\relation \to \relation'$ proceeds as follows:

- The prover/verifier starts with some statement $\statement$; the prover additionally holds $\witness$.
- They then run some protocol between them.
- After which they both obtain $\statement'$, and the prover additionally obtains $\witness'$.

Pictorially:

[Figure: communication diagram of an interactive reduction]
Fig 1. Overview of a single interactive reduction from $\relation$ to $\relation'$.

With the security/completeness guarantee that:

$$(\statement, \witness) \in \relation \iff (\statement', \witness') \in \relation'$$

Except with negligible probability.

In other words: we have reduced membership of $\relation$ to membership of $\relation'$ using interaction between the parties: unlike a classical Karp-Levin reduction, the soundness/completeness may rely on random coins and multiple rounds. Foreshadowing, here is a diagram/overview of the reductions (the relations will be described as we go) used in Pickles:

[Figure: communication diagram of interactive/non-deterministic reductions between languages]
Fig 2. An overview of the particular reductions/languages (described below) we require.

As you can see from Fig. 2, we have a cycle of reductions (following the arrows), e.g. we can reduce the relation $\relation_{\mathsf{Acc}, \vec{G}}$ to itself by applying all 4 reductions. This may seem useless: why reduce a relation to itself?

However, the crucial point is the "in-degree" (e.g. $n$-to-1) of these reductions: take a look at the diagram and note that any number of $\relAcc$ instances can be reduced to a single $\relPCS{d}$ instance! This $\relPCS{d}$ instance can then be converted to a single $\relAcc$ instance by applying the reductions (moving "around the diagram" by running one protocol after the other):

$$\relPCS{d} \to \relIPA{\ell} \to \relIPA{1} \to \relAcc$$

**Note:** There are many examples of interactive reductions; an example familiar to the reader is PlonK itself, which reduces circuit-satisfiability $\relation_{C}$ ($\statement$ is the public inputs and $\witness$ is the wire assignments) to openings of polynomial commitments $\relPCS{d}$ ($\statement'$ are the polynomial commitments and evaluation points, $\witness'$ is the opening of the commitment).

More Theory/Reflections about Interactive Reductions (click to expand)

As noted in Compressed $\Sigma$-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics, every Proof-of-Knowledge (PoK) with $k$ rounds for a relation $\relation$ can instead be seen as a reduction to some relation $\relation'$ with $k-1$ rounds, as follows:

- Let $\statement'$ be the view of the verifier.
- Let $\witness'$ be a $k$'th-round message which could make the verifier accept.

Hence the relation R\relation' is the set of verifier views (except for the last round) and the last missing message which would make the verifier accept if sent.

This simple, yet beautiful, observation turns out to be extremely useful: rather than explicitly sending the last-round message (which may be large/take the verifier a long time to check), the prover can instead prove that he knows a last-round message which would make the verifier accept; after all, sending the witness $\witness'$ is a particularly simple/inefficient PoK for $(\statement', \witness') \in \relation'$.

The reductions in this document are all of this form (including the folding argument): receiving/verifying the last-round message would require too many resources (time/communication) of the verifier, hence we instead replace it with yet another reduction to yet another language (e.g. where the witness is now half the size).

Hence we end up with a chain of reductions: going from the languages of the last-round messages. An "accumulation scheme" is just an example of such a chain of reductions which happens to be a cycle. Meaning the language is "self-reducing" via a series of interactive reductions.

**A Note On Fiat-Shamir:** All the protocols described here are public coin and hence, in implementation, the Fiat-Shamir transform is applied to avoid interaction: the verifier's challenges are sampled using a hash function (e.g. Poseidon) modelled as a reprogrammable random oracle.

## Polynomial Commitment Openings $\relPCS{d}$

Recall that the polynomial commitment scheme (PCS) in Kimchi is just the trivial scheme based on Pedersen commitments. For Kimchi we are interested in "accumulation for the language ($\relation_{\mathsf{PCS}, d}$) of polynomial commitment openings", meaning that:

$$
\left( \statement = (\comm, \vec{G}, \openx, \openy), \witness = (\vec{f}) \right) \in \relation_{\mathsf{PCS},d}
\iff
\left\{
\begin{aligned}
\comm &= \langle \vec{f}, \vec{G} \rangle \\
\openy &= \sum_{i=0}^{d-1} f_i \cdot \openx^i
\end{aligned}
\right\}
$$

Where $\vec{f}$ is a list of $d$ coefficients for the polynomial $f(X) \coloneqq \sum_{i=0}^{d-1} f_i \cdot X^i$.

This is the language we are interested in reducing: providing a trivial proof, i.e. sending $\vec{f}$, requires linear communication and linear verifier time; we want a poly-log verifier. The communication complexity will be solved by a well-known folding argument, however to reduce computation we are going to need the "Halo trick".
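To make the relation concrete, below is a minimal numeric sketch in Python. It models the group additively over a toy prime field, so the Pedersen commitment degenerates into a plain inner product: this is neither hiding nor binding and only illustrates the algebra being checked. The prime `p`, the size `d` and the helpers `inner`/`in_R_PCS` are illustrative assumptions, not Kimchi's actual parameters.

```python
import random

p = 2**31 - 1  # toy prime field F_p (illustrative, not Kimchi's field)
d = 8

def inner(a, b):
    """Inner product <a, b> over F_p."""
    return sum(u * v for u, v in zip(a, b)) % p

def in_R_PCS(C, G, x, y, f):
    """Membership check for (C, G, x, y; f) in R_{PCS,d} (toy model)."""
    ok_comm = C == inner(f, G)                                           # C = <f, G>
    ok_eval = y == sum(fi * pow(x, i, p) for i, fi in enumerate(f)) % p  # y = f(x)
    return ok_comm and ok_eval

G = [random.randrange(p) for _ in range(d)]   # "generators"
f = [random.randrange(p) for _ in range(d)]   # coefficients of f(X)
x = random.randrange(p)                       # evaluation point

C = inner(f, G)                               # honest commitment
y = sum(fi * pow(x, i, p) for i, fi in enumerate(f)) % p

assert in_R_PCS(C, G, x, y, f)
assert not in_R_PCS(C, G, x, (y + 1) % p, f)  # a wrong evaluation is rejected
```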

First a reduction from PCS to an inner product relation.

## Reduction 1: $\relation_{\mathsf{PCS},d} \to \relation_{\mathsf{IPA},\ell}$

Formally the relation of the inner product argument is:

$$
\left( \statement = (\comm, \vec{G}, \genOpen, \vec{\openx}, \openy), \witness = (\vec{f}) \right) \in \relation_{\mathsf{IPA},\ell}
\iff
\left\{
\begin{aligned}
\openy &= \langle \vec{f}, \vec{\openx} \rangle \in \FF \\
\land \\
\comm &= \langle \vec{f}, \vec{G} \rangle + [\openy] \cdot \genOpen \in \GG
\end{aligned}
\right\}
$$

where $\vec{f}, \vec{\openx}, \vec{G}$ are of length $\ell$.

We can reduce $\left(\statement = (\comm, \vec{G}, \openx, \openy), \witness = (\vec{f})\right) \in \relation_{\mathsf{PCS}, d}$ to $\relation_{\mathsf{IPA}, \ell}$ with $d = \ell$ as follows:

- Define $\vec{\openx} = (1, \openx, \openx^2, \ldots, \openx^{\ell-1})$. Because $\vec{f}$ was a satisfying witness to $\relation_{\mathsf{PCS}, d}$, we have $\openy = f(\openx) = \langle \vec{f}, \vec{\openx} \rangle$. The prover sends $\openy$.
- The verifier adds the evaluation $\openy$ to the commitment "in a new coordinate" as follows:
  1. The verifier picks $\genOpen \sample \GG$ and sends $\genOpen$ to the prover.
  2. Prover/verifier update $\comm \gets \comm + [\openy] \cdot \genOpen$.

Intuitively we sample a fresh $\genOpen$ to avoid a malicious prover "putting something in the $\genOpen$-position": since he must send $\openy$ before seeing $\genOpen$, he would need to guess $\genOpen$ beforehand. If the prover is honest, we should have a commitment of the form:

$$
\comm = \langle \vec{f}, \vec{G} \rangle + [\openy] \cdot \genOpen = \langle \vec{f}, \vec{G} \rangle + [\langle \vec{f}, \vec{\openx} \rangle] \cdot \genOpen \in \GG
$$
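Here is a hedged sketch of this reduction in the same toy additive model as above (all names illustrative): the prover sends $\openy$, the verifier samples a fresh $\genOpen$, and both fold $\openy$ into the commitment.

```python
import random

p = 2**31 - 1
d = 8  # here l = d

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % p

G = [random.randrange(p) for _ in range(d)]
f = [random.randrange(p) for _ in range(d)]
x = random.randrange(p)
xs = [pow(x, i, p) for i in range(d)]   # (1, x, ..., x^{d-1})

C = inner(f, G)                         # R_{PCS,d} commitment
y = inner(f, xs)                        # prover sends y = f(x) = <f, xs>
H = random.randrange(p)                 # verifier then samples a fresh H
C = (C + y * H) % p                     # both update C <- C + [y]·H

# (C, G, H, xs, y; f) is now an R_{IPA,l} instance:
assert y == inner(f, xs)
assert C == (inner(f, G) + inner(f, xs) * H) % p
```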

**Note:** In some variants of this reduction $\genOpen$ is chosen as $[\delta] \cdot J$ for a constant $J \in \GG$, where $\delta \sample \FF$ is sampled by the verifier. This also works, however we (in Kimchi) simply hash to the curve to sample $\genOpen$.

## Reduction 2 (Incremental Step): $\relation_{\mathsf{IPA},\ell} \to \relation_{\mathsf{IPA},\ell/2}$

**Note:** The folding argument described below is the particular variant implemented in Kimchi, although some of the variable names are different.

The folding argument reduces an inner product with $\ell$ (a power of two) coefficients to an inner product relation with $\ell/2$ coefficients. To see how it works, let us rewrite the inner product in terms of a first and second part:

$$
\langle \vec{f}, \vec{\openx} \rangle = \langle \vec{f}_L, \vec{\openx}_L \rangle + \langle \vec{f}_R, \vec{\openx}_R \rangle \in \FF
$$

Where $\vec{f}_L = (f_1, \ldots, f_{\ell/2})$ and $\vec{f}_R = (f_{\ell/2+1}, \ldots, f_\ell)$, similarly for $\vec{\openx}$.

Now consider a "randomized version" of this inner product, with a challenge $\chalfold \in \FF$:

$$
\begin{aligned}
\langle \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R, \ \vec{\openx}_L + \chalfold \cdot \vec{\openx}_R \rangle
&= \chalfold^{-1} \cdot \langle \vec{f}_R, \vec{\openx}_L \rangle \\
&\quad + \underline{\color{magenta} \left( \langle \vec{f}_R, \vec{\openx}_R \rangle + \langle \vec{f}_L, \vec{\openx}_L \rangle \right)} \\
&\quad + \chalfold \cdot \langle \vec{f}_L, \vec{\openx}_R \rangle
\end{aligned}
$$
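A quick numeric sanity check of this identity (toy prime field, random values; purely illustrative):

```python
import random

p = 2**31 - 1  # toy prime, so a^{-1} = a^{p-2} mod p
l = 8

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % p

f = [random.randrange(p) for _ in range(l)]
xs = [random.randrange(p) for _ in range(l)]
fL, fR = f[: l // 2], f[l // 2:]
xL, xR = xs[: l // 2], xs[l // 2:]

a = random.randrange(1, p)              # challenge alpha
ainv = pow(a, p - 2, p)                 # alpha^{-1} (Fermat)

lhs = inner([(u + ainv * v) % p for u, v in zip(fL, fR)],
            [(u + a * v) % p for u, v in zip(xL, xR)])
rhs = (ainv * inner(fR, xL)             # cross term
       + inner(fL, xL) + inner(fR, xR)  # the term we care about
       + a * inner(fL, xR)) % p         # cross term
assert lhs == rhs
```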
Additional intuition: How do you arrive at the expression above? (click to expand)

The trick is to ensure that $\langle \vec{f}_R, \vec{\openx}_R \rangle + \langle \vec{f}_L, \vec{\openx}_L \rangle = \langle \vec{f}, \vec{\openx} \rangle = \openy$ ends up in the same power of $\chalfold$.

The particular expression above is not special and arguably not the most elegant: simpler alternatives can easily be found and the inversion can be avoided, e.g. by instead using:

$$
\begin{aligned}
\langle \vec{f}_L + \chalfold \cdot \vec{f}_R, \ \chalfold \cdot \vec{\openx}_L + \vec{\openx}_R \rangle
&= \langle \vec{f}_L, \vec{\openx}_R \rangle \\
&\quad + \chalfold \cdot \underline{\color{magenta} \left( \langle \vec{f}_R, \vec{\openx}_R \rangle + \langle \vec{f}_L, \vec{\openx}_L \rangle \right)} \\
&\quad + \chalfold^2 \cdot \langle \vec{f}_R, \vec{\openx}_L \rangle
\end{aligned}
$$

Which will have the same overall effect of isolating the interesting term (this time as the $\chalfold$-coefficient). The particular variant above can be found in e.g. Compressed $\Sigma$-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics, and proving extraction is somewhat easier than for the variant used in Kimchi.

The term we care about (underlined in magenta) is $\langle \vec{f}_R, \vec{\openx}_R \rangle + \langle \vec{f}_L, \vec{\openx}_L \rangle = \openy$; the other two terms are cross-term garbage. The solution is to let the prover provide commitments to the cross terms, to "correct" this randomized splitting of the inner product, before seeing $\chalfold$: the prover commits to the three terms (one of which is already provided) and the verifier computes a commitment to the new randomized inner product. i.e.

The prover sends commitments to the cross terms $\langle \vec{f}_R, \vec{\openx}_L \rangle$ and $\langle \vec{f}_L, \vec{\openx}_R \rangle$:

$$
L = \langle \vec{f}_R \Vert \vec{0}, \vec{G} \rangle + [\langle \vec{f}_R, \vec{\openx}_L \rangle] \cdot \genOpen
\qquad
R = \langle \vec{0} \Vert \vec{f}_L, \vec{G} \rangle + [\langle \vec{f}_L, \vec{\openx}_R \rangle] \cdot \genOpen
$$

The verifier samples $\chalfold \sample \FF$ and updates the commitment as:

$$
\begin{aligned}
C' &= [\chalfold^{-1}] \cdot L + C + [\chalfold] \cdot R \\
&= \langle \chalfold^{-1} \cdot (\vec{f}_R \Vert \vec{0}) + (\vec{f}_L \Vert \vec{f}_R) + \chalfold \cdot (\vec{0} \Vert \vec{f}_L), \vec{G} \rangle \\
&\quad + \left[ \chalfold^{-1} \cdot \langle \vec{f}_R, \vec{\openx}_L \rangle + {\color{magenta} \langle \vec{f}_L, \vec{\openx}_L \rangle + \langle \vec{f}_R, \vec{\openx}_R \rangle} + \chalfold \cdot \langle \vec{f}_L, \vec{\openx}_R \rangle \right] \cdot \genOpen \\
&= {\color{blue} \left\langle \left( \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R \right) \Vert \left( \chalfold \cdot \vec{f}_L + \vec{f}_R \right), \vec{G} \right\rangle} \\
&\quad + \left[ {\color{green} \langle \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R, \vec{\openx}_L \rangle + \langle \chalfold \cdot \vec{f}_L + \vec{f}_R, \vec{\openx}_R \rangle} \right] \cdot \genOpen
\end{aligned}
$$

The final observation in the folding argument is simply that:

$$
\chalfold \cdot \vec{f}_L + \vec{f}_R = \chalfold \cdot \left( {\color{purple} \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R} \right) = \chalfold \cdot {\color{purple} \vec{f}'}
$$

where ${\color{purple} \vec{f}'}$ is the new witness.

Hence we can replace occurrences of $\chalfold \cdot \vec{f}_L + \vec{f}_R$ by $\chalfold \cdot \vec{f}'$. With this, look at the green term:

$$
\begin{aligned}
{\color{green} \langle \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R, \vec{\openx}_L \rangle + \langle \chalfold \cdot \vec{f}_L + \vec{f}_R, \vec{\openx}_R \rangle}
&= \langle \vec{f}', \vec{\openx}_L \rangle + \langle \chalfold \cdot \vec{f}', \vec{\openx}_R \rangle \\
&= \langle \vec{f}', \vec{\openx}_L \rangle + \langle \vec{f}', \chalfold \cdot \vec{\openx}_R \rangle \\
&= \langle \vec{f}', \vec{\openx}_L + \chalfold \cdot \vec{\openx}_R \rangle \\
&= \langle \vec{f}', \vec{\openx}' \rangle
\end{aligned}
$$

By defining $\vec{\openx}' = \vec{\openx}_L + \chalfold \cdot \vec{\openx}_R$. We also rewrite the blue term in terms of $\vec{f}'$ similarly:

$$
\begin{aligned}
{\color{blue} \left\langle \left( \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R \right) \Vert \left( \chalfold \cdot \vec{f}_L + \vec{f}_R \right), \vec{G} \right\rangle}
&= \langle \vec{f}' \Vert (\chalfold \cdot \vec{f}'), \vec{G} \rangle \\
&= \langle \vec{f}' \Vert \vec{f}', \vec{G}_L \Vert ([\chalfold] \cdot \vec{G}_R) \rangle \\
&= \langle \vec{f}', \vec{G}' \rangle
\end{aligned}
$$

By defining $\vec{G}' = \vec{G}_L + [\chalfold] \cdot \vec{G}_R$. In summary, by computing:

$$
\begin{aligned}
C' &\gets [\chalfold^{-1}] \cdot L + C + [\chalfold] \cdot R \in \GG \\
\vec{f}' &\gets \vec{f}_L + \chalfold^{-1} \cdot \vec{f}_R \in \FF^{\ell/2} \\
\vec{\openx}' &\gets \vec{\openx}_L + \chalfold \cdot \vec{\openx}_R \in \FF^{\ell/2} \\
\vec{G}' &\gets \vec{G}_L + [\chalfold] \cdot \vec{G}_R \in \GG^{\ell/2} \\
\openy' &\gets \langle \vec{f}', \vec{\openx}' \rangle
\end{aligned}
$$

We obtain a new instance of the inner product relation (of half the size):

$$
\left( \statement = (C', \vec{G}', \genOpen, \vec{\openx}', \openy'), \witness = (\vec{f}') \right) \in \relation_{\mathsf{IPA}, \ell/2}
$$

At this point the prover could send $\vec{\openx}'$, $\vec{f}'$ to the verifier, who could verify the claim by:

1. Computing $\vec{G}'$ from $\chalfold$ and $\vec{G}$
2. Computing $C'$ from $\vec{f}'$, $\genOpen$ and $L, R$ (obtained from the preceding round)
3. Checking $\openy' \overset{?}{=} \langle \vec{f}', \vec{\openx}' \rangle$

This would require half as much communication as the naive proof. A modest improvement...
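To see a full round end-to-end, here is a sketch (same toy additive model as the earlier snippets, so scalar multiplication $[s] \cdot P$ is just $s \cdot P \bmod p$; all parameters illustrative) performing the computations summarized above and checking the resulting $\relation_{\mathsf{IPA},\ell/2}$ instance:

```python
import random

p = 2**31 - 1
l = 8

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % p

f = [random.randrange(p) for _ in range(l)]
xs = [random.randrange(p) for _ in range(l)]
G = [random.randrange(p) for _ in range(l)]
H = random.randrange(p)

y = inner(f, xs)
C = (inner(f, G) + y * H) % p                 # starting R_{IPA,l} instance

fL, fR = f[: l // 2], f[l // 2:]
xL, xR = xs[: l // 2], xs[l // 2:]
GL, GR = G[: l // 2], G[l // 2:]

# Prover's cross-term commitments:
L = (inner(fR, GL) + inner(fR, xL) * H) % p   # <fR||0, G> + [<fR, xL>]·H
R = (inner(fL, GR) + inner(fL, xR) * H) % p   # <0||fL, G> + [<fL, xR>]·H

a = random.randrange(1, p)                    # verifier's challenge alpha
ainv = pow(a, p - 2, p)

# The five updates summarized above:
C2 = (ainv * L + C + a * R) % p
f2 = [(u + ainv * v) % p for u, v in zip(fL, fR)]
x2 = [(u + a * v) % p for u, v in zip(xL, xR)]
G2 = [(u + a * v) % p for u, v in zip(GL, GR)]
y2 = inner(f2, x2)

# (C2, G2, H, x2, y2; f2) is in R_{IPA, l/2}:
assert C2 == (inner(f2, G2) + y2 * H) % p
```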

However, we can iteratively apply this transformation until we reach an instance of constant size:

## Reduction 2 (Full): $\relIPA{\ell} \to \ldots \to \relIPA{1}$

The process above can simply be applied again to the new $(C', \vec{G}', \genOpen, \vec{\openx}', \openy') \in \relation_{\mathsf{IPA}, \ell/2}$ instance. By doing so $k = \log_2(\ell)$ times, the total communication is brought down to $2k$ $\GG$-elements, until the instance consists of $(\comm, \vec{G}, \genOpen, \vec{\openx}, \openy) \in \relIPA{1}$, at which point the prover simply provides the witness $f' \in \FF$.

Because we need to refer to the terms in the intermediate reductions, we let $\vec{G}^{(i)}$, $\vec{f}^{(i)}$, $\vec{\openx}^{(i)}$ be the $\vec{G}'$, $\vec{f}'$, $\vec{\openx}'$ vectors after $i$ recursive applications, with $\vec{G}^{(0)}$, $\vec{f}^{(0)}$, $\vec{\openx}^{(0)}$ being the original instance. We will drop the vector arrow for $G^{(k)}$ (similarly for $f^{(k)}$, $\openx^{(k)}$), since the vector in the last step has size $1$, and refer to this single value instead. We denote by $\chalfold_i$ the challenge of the $i$'th application.

## Reduction 3: $\relation_{\mathsf{IPA},1} \to \relation_{\mathsf{Acc},\vec{G}}$

While the proof for $\relIPA{\ell}$ above has $O(\log(\ell))$ size, the verifier's time complexity is $O(\ell)$:

- Computing $G^{(k)}$ from $\vec{G}^{(0)}$ using $\vec{\chalfold}$ takes $O(\ell)$.
- Computing $\openx^{(k)}$ from $\vec{\openx}^{(0)}$ using $\vec{\chalfold}$ takes $O(\ell)$.

The rest of the verifier's computation is only $O(\log(\ell))$, namely:

- Sampling all the challenges $\chalfold_i \sample \FF$.
- Computing $C^{(i)} \gets [\chalfold_i^{-1}] \cdot L^{(i)} + C^{(i-1)} + [\chalfold_i] \cdot R^{(i)}$ for every $i$.

However, upon inspection, the pessimistic claim that computing $\openx^{(k)}$ takes $O(\ell)$ turns out to be false:

**Claim:** Define $\hpoly(X) \coloneqq \prod_{i=0}^{k-1} \left(1 + \chalfold_{k-i} \cdot X^{2^i}\right)$; then $\openx^{(k)} = \hpoly(\openx)$ for all $\openx$. Therefore, $\openx^{(k)}$ can be evaluated in $O(k) = O(\log \ell)$.

**Proof:** This can be verified by looking at the expansion of $\hpoly(X)$. Define $\{h_i\}_{i=1}^{\ell}$ to be the coefficients of $\hpoly(X)$, that is $\hpoly(X) = \sum_{i=1}^{\ell} h_i \cdot X^{i-1}$. Then the claim is equivalent to $\openx^{(k)} = \sum_{i=1}^{\ell} h_i \cdot \openx^{i-1}$. Let $b(i,j)$ denote the $j$th bit in the bit-decomposition of the index $i$ and observe that:

$$
h_i = \prod_{j=1}^{k} \chalfold_{k-j}^{b(i,j)} \qquad \text{where} \qquad \sum_{j} b(i,j) \cdot 2^j = i
$$

Now, compare this with how the $k$th element of $\vec{\openx}^{(i)}$ is constructed:

$$
\begin{aligned}
x^{(1)}_k &= x^{(0)}_k + \alpha_1 \cdot x^{(0)}_{n/2+k} \\
x^{(2)}_k &= x^{(1)}_k + \alpha_2 \cdot x^{(1)}_{n/4+k} \\
&= x^{(0)}_k + \alpha_1 \cdot x^{(0)}_{n/2+k} + \alpha_2 \cdot \left( x^{(0)}_{n/4+k} + \alpha_1 \cdot x^{(0)}_{n/2+n/4+k} \right) \\
&= \sum_{i=0}^{3} x^{(0)}_{i \cdot \frac{n}{4} + k} \cdot \Big( \prod_{j=0}^{1} \alpha_j^{b(i,j)} \Big)
\end{aligned}
$$

Recalling that $x^{(0)}_k = x^k$, it is easy to see that this generalizes exactly to the expression for $h_i$ that we derived above, which establishes that evaluation through $\hpoly(X)$ is correct.

Finally, regarding evaluation complexity: $\hpoly$ can clearly be evaluated in $O(k) = O(\log \ell)$ time as a product of $k$ factors. This concludes the proof.
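The claim is easy to test numerically. The sketch below (illustrative toy field, as before) folds the vector $(1, \openx, \ldots, \openx^{\ell-1})$ with $k$ random challenges and checks that the resulting single value equals the $O(k)$-time product evaluation of $\hpoly(\openx)$:

```python
import random

p = 2**31 - 1
k = 4
l = 2**k

x = random.randrange(p)
xs = [pow(x, i, p) for i in range(l)]                # (1, x, ..., x^{l-1})
alphas = [random.randrange(1, p) for _ in range(k)]  # challenges a_1..a_k

v = xs
for a in alphas:                  # fold: x' = x_L + a · x_R, halving each round
    half = len(v) // 2
    v = [(v[i] + a * v[half + i]) % p for i in range(half)]
x_final = v[0]                    # x^{(k)}

h_at_x = 1                        # h(x) = prod_{i=0}^{k-1} (1 + a_{k-i} · x^{2^i})
for i in range(k):
    h_at_x = h_at_x * (1 + alphas[k - 1 - i] * pow(x, 2**i, p)) % p

assert x_final == h_at_x
```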

The "Halo Trick"

The "Halo trick" resides in observing that this is also the case for G(k)\vec{G}^{(k)}: since it is folded the same way as x(k)\vec{\openx}^{(k)}. It is not hard to convince one-self (using the same type of argument as above) that:

$$
G^{(k)} = \langle \vec{h}, \vec{G} \rangle
$$

Where $\vec{h}$ is the vector of coefficients of $\hpoly(X)$ (like $\vec{f}$ is the coefficients of $f(X)$), i.e. $\hpoly(X) = \sum_{i=1}^{\ell} h_i \cdot X^{i-1}$.

For notational convenience (and to emphasize that they are 1-dimensional vectors), define/replace:

$$
\accCom = G^{(k)} \in \GG, \quad c = f^{(k)} \in \FF, \quad \hpoly(\openx) = \openx^{(k)} \in \FF
$$

With this we define the "accumulator language", which states that "$\accCom$" was computed correctly:

$$
\left( \statement = (\accCom, \vec{\chalfold}), \witness = \epsilon \right) \in \relation_{\mathsf{Acc}, \vec{G}}
\iff
\left\{
\hpoly(X) \coloneqq \prod_{i=0}^{k-1} \left(1 + \chalfold_{k-i} \cdot X^{2^i}\right)
\ \land \
\accCom = \langle \vec{h}, \vec{G} \rangle
\right\}
$$

**Note:** Since there is no witness for this relation, anyone can verify it (in $O(\ell)$ time) by simply computing $\langle \vec{h}, \vec{G} \rangle$ in linear time. Instances are also small: the size is dominated by $\vec{\chalfold}$, which has length $|\vec{\chalfold}| = \log_2 \ell$.
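The relation, and the "Halo trick" behind it, can be exercised numerically in the same toy additive model as before (insecure, illustrative parameters): expand $\vec{h}$ from the challenges, fold $\vec{G}$ with the same challenges as in Reduction 2, and check that the folded point equals $\langle \vec{h}, \vec{G} \rangle$:

```python
import random

p = 2**31 - 1
k = 4
l = 2**k

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % p

G = [random.randrange(p) for _ in range(l)]
alphas = [random.randrange(1, p) for _ in range(k)]   # a_1, ..., a_k

# Expand h's coefficient vector: multiply in each factor (1 + a_{k-i} X^{2^i});
# multiplying by X^{2^i} is a shift of the coefficient vector by 2^i.
h = [1] + [0] * (l - 1)
for i in range(k):
    a = alphas[k - 1 - i]                             # challenge a_{k-i}
    shifted = [0] * 2**i + [a * cj % p for cj in h[: l - 2**i]]
    h = [(cj + s) % p for cj, s in zip(h, shifted)]

# Fold G with the same challenges: G' = G_L + [a]·G_R, down to one element.
Gf = list(G)
for a in alphas:
    half = len(Gf) // 2
    Gf = [(Gf[j] + a * Gf[half + j]) % p for j in range(half)]
U = Gf[0]                                             # U = G^{(k)}

assert U == inner(h, G)   # (U, alphas) is in R_{Acc,G}
```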

**In The Code:** in the Kimchi code $\vec{\accChal}$ is called `prev_challenges` and $\accCom$ is called `comm`, both defined in the `RecursionChallenge` struct.

Now, using the new notation, rewrite $\relation_{\mathsf{IPA},1}$ as:

$$
\left( \statement = (\comm, \accCom, \genOpen, \hpoly(\openx)), \witness = (c) \right) \in \relation_{\mathsf{IPA},1}
\iff
\left\{
\begin{aligned}
\openy &= c \cdot \hpoly(\openx) \\
\land \quad \comm &= [c] \cdot \accCom + [\openy] \cdot \genOpen \in \GG
\end{aligned}
\right\}
$$

**Note:** It is the same relation, we just replaced some names ($c = f^{(k)}$, $\openx^{(k)} = \hpoly(\openx)$) and simplified a bit: inner products between 1-dimensional vectors are just multiplications.

We now have all the components to reduce $\relIPA{1} \to \relAcc$ (with no soundness error) as follows:

1. The prover sends $c, \accCom$ to the verifier.
2. The verifier:
   - Computes $\openy \gets \hpoly(\openx) \cdot c$
   - Checks $\comm \overset{?}{=} [c] \cdot \accCom + [\openy] \cdot \genOpen$
3. Output $(\statement = (\accCom, \vec{\accChal}), \witness = \epsilon) \in \relAcc$

**Note:** The above can be optimized; in particular there is no need for the prover to send $\accCom$.
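For completeness, a sketch of the verifier's check in step 2 (toy additive model, illustrative names): only two scalar multiplications and one comparison remain, with the expensive $\accCom = \langle \vec{h}, \vec{G} \rangle$ check deferred to the accumulator relation.

```python
import random

p = 2**31 - 1
U, H, c, h_at_x = (random.randrange(p) for _ in range(4))

# Shape of an honest prover's final commitment: C = [c]·U + [y]·H.
y = c * h_at_x % p
C = (c * U + y * H) % p

# Verifier of R_{IPA,1} -> R_{Acc,G}: recompute y and check one equation.
assert C == (c * U + (c * h_at_x % p) * H) % p
# Output statement: (U, alphas) in R_{Acc,G}, with empty witness.
```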

## Reduction 4: $\relation_{\mathsf{Acc}, \vec{G}} \to \relation_{\mathsf{PCS}, d}$

Tying the final knot in the diagram.

The expensive part of checking $(\accCom, \vec{\chalfold}) \in \relation_{\mathsf{Acc}, \vec{G}}$ consists in computing $\langle \vec{h}, \vec{G} \rangle$ given the $\vec{\chalfold}$ describing $\hpoly(X)$: first expanding $\vec{\chalfold}$ into $\vec{h}$, then computing the MSM. However, by observing that $\accCom = \langle \vec{h}, \vec{G} \rangle$ is actually a polynomial commitment to $\hpoly(X)$, which we can evaluate at any point using $O(\log \ell)$ operations, we arrive at a simple strategy for reducing any number of such claims to a single polynomial commitment opening:

1. The prover sends $\accCom^{(1)}, \ldots, \accCom^{(n)}$ to the verifier.
2. The verifier samples $\chaleval \sample \FF$, $\chalu \sample \FF$ and computes:

$$
\begin{aligned}
\openy &= \sum_i \ \chalu^{i-1} \cdot \hpoly^{(i)}(\chaleval) \in \FF \\
\comm &= \sum_i \ [\chalu^{i-1}] \cdot \accCom^{(i)} \in \GG
\end{aligned}
$$

And outputs the following claim:

$$
(\comm, \chaleval, \openy) \in \language_{\mathsf{PCS},\ell}
$$

i.e. the polynomial commitment $\comm$ opens to $\openy$ at $\chaleval$. The prover has the witness:

$$
f(X) = \sum_i \ \chalu^{i-1} \cdot \hpoly^{(i)}(X)
$$

Why is this a sound reduction? If one of the $\accCom^{(i)}$ does not commit to $\hpoly^{(i)}$, then the committed polynomial and $\hpoly^{(i)}$ agree on at most $\ell$ points, hence they agree at $\chaleval$ with probability at most $\ell/|\FF|$. Taking a union bound over all $n$ terms leads to soundness error $\frac{n \ell}{|\FF|}$ -- negligible.

The reduction above requires $n$ $\GG$-operations and $O(n \log \ell)$ $\FF$-operations.
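A sketch of this combination in the toy model (names and parameters illustrative; powers of $\chalu$ are indexed from $0$ here): $n$ accumulators are combined into one PCS claim, and the prover's combined witness is checked against it.

```python
import random

p = 2**31 - 1
l, n = 8, 3

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

def ev(coeffs, x):
    return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p

G = [random.randrange(p) for _ in range(l)]
hs = [[random.randrange(p) for _ in range(l)] for _ in range(n)]  # h^{(i)} coeffs
Us = [inner(h, G) for h in hs]                # honest accumulators U^{(i)}

zeta, u = random.randrange(p), random.randrange(p)
y = sum(pow(u, i, p) * ev(hs[i], zeta) for i in range(n)) % p
C = sum(pow(u, i, p) * Us[i] for i in range(n)) % p

# Prover's witness f(X) = sum_i u^i · h^{(i)}(X), as a coefficient vector:
f = [sum(pow(u, i, p) * hs[i][j] for i in range(n)) % p for j in range(l)]

assert C == inner(f, G)                       # C commits to f
assert y == ev(f, zeta)                       # (C, zeta, y) in R_{PCS,l}
```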

## Support for Arbitrary Polynomial Relations

Additional polynomial commitments (i.e. from PlonK) can be added to the randomized sums $(\comm, \openy)$ above and opened at $\chaleval$ as well: in this case the prover proves the claimed openings at $\chaleval$ before sampling the challenge $\chalu$. This is done in Kimchi/Pickles: the $\chaleval$ and $\chalu$ above are the same as in the Kimchi code. The combined $\openy$ (including both the $\hpoly(\cdot)$ evaluations and the polynomial commitment openings at $\chaleval$ and $\chaleval\omega$) is called `combined_inner_product` in Kimchi.

[Figure: cycle of reductions with the added PlonK polynomial relations]
Fig 3. Cycle of reductions with the added polynomial relations from PlonK.

This $\relation_{\mathsf{PCS},\ell}$ instance is then reduced back into a single $\relAcc$ instance, which is included with the proof.

## Multiple Accumulators (PCD Case)

From the section above it may seem like there is always going to be a single $\relAcc$ instance; this is indeed the case if the proof only verifies a single proof, "Incremental Verifiable Computation" (IVC) in the literature. If the proof verifies multiple proofs, "Proof-Carrying Data" (PCD), then there will be multiple accumulators: every "input proof" includes an accumulator ($\relAcc$ instance), and all of these are combined into the new (single) $\relAcc$ instance included in the new proof. This way, if one of the original proofs included an invalid accumulator and therefore did not verify, then, with overwhelming probability, the new proof will also include an invalid accumulator and not verify.

[Figure: multiple accumulators being combined into the new accumulator]
Fig 4. Multiple accumulators from previous input proofs (part of the witness) and the polynomial relations for the new proof are combined and reduced into the new accumulator.

Note that the new proof contains the (single) accumulator of each "input" proof, even though the proofs themselves are part of the witness: this is because verifying each input proof results in an accumulator (which could then be checked directly -- however this is expensive). The "result" of verifying a proof is an accumulator (instance of $\relAcc$), which can either be verified directly or further "accumulated".

These accumulators are the `RecursionChallenge` structs included in a Kimchi proof. The verifier checks the PlonK proof (which proves accumulation for each "input proof"); this results in some number of polynomial relations, which are combined with the accumulators for the "input" proofs to produce the new accumulator.

## Accumulation Verifier

The section above implicitly describes the work the verifier must do, but for the sake of completeness let us explicitly describe what the verifier must do to verify a Fiat-Shamir compiled proof of the transformations above. This constitutes "the accumulation verifier", which must be implemented "in-circuit" (in addition to the "Kimchi verifier"). The section below also describes how to incorporate the additional evaluation point $\chaleval \cdot \omega$ ("the step", used by Kimchi to enforce constraints between adjacent rows). Let $\mathcal{C} \subseteq \FF$ be the challenge space (128-bit GLV-decomposed challenges):

1. The PlonK verifier on $\pi$ outputs the polynomial relations (in purple in Fig. 4).

2. Checking $\relation_{\mathsf{Acc}, \vec{G}}$ and the polynomial relations (from PlonK) $\to$ $\relation_{\mathsf{PCS},d}$ (the dotted arrows):

   1. Sample $\chaleval \sample \mathcal{C}$ (evaluation point) using the Poseidon sponge.
   2. Read the claimed evaluations at $\chaleval$ and $\omega\chaleval$ (`PointEvaluations`).
   3. Sample $\chalv \sample \mathcal{C}$ (commitment combination challenge, `polyscale`) using the Poseidon sponge.
   4. Sample $\chalu \sample \mathcal{C}$ (evaluation combination challenge, `evalscale`) using the Poseidon sponge.
      - The $(\chalv, \chalu)$ notation is consistent with the Plonk paper, where $\chalv$ recombines commitments and $\chalu$ recombines evaluations.
   5. Compute $\comm \in \GG$ with $\chalv$ from:
      - $\accCom^{(1)}, \ldots, \accCom^{(n)}$ (`RecursionChallenge.comm` $\in \GG$)
      - the polynomial commitments from PlonK (`ProverCommitments`)
   6. Compute $\openy_{\chaleval}$ (part of `combined_inner_product`) with $\chalv$ from:
      - the evaluations $\hpoly^{(1)}(\chaleval), \ldots, \hpoly^{(n)}(\chaleval)$
      - the polynomial openings from PlonK (`ProofEvaluations`) at $\chaleval$
   7. Compute $\openy_{\chaleval\omega}$ (part of `combined_inner_product`) with $\chalv$ from:
      - the evaluations $\hpoly^{(1)}(\chaleval\omega), \ldots, \hpoly^{(n)}(\chaleval\omega)$
      - the polynomial openings from PlonK (`ProofEvaluations`) at $\chaleval \cdot \omega$

   At this point we have two claims:
   $$
   \begin{aligned}
   (\comm, \chaleval, \openy_{\chaleval}) &\in \langPCS{\degree} \\
   (\comm, \chaleval\omega, \openy_{\chaleval\omega}) &\in \langPCS{\degree}
   \end{aligned}
   $$
   These are combined in the next transform, using a random linear combination with $\chalu$ in the inner product argument (see [Different functionalities](../plonk/inner_product_api) for details).
3. Checking $\relation_{\mathsf{PCS}, d} \to \relation_{\mathsf{IPA},\ell}$:

   1. Sample $\genOpen \sample \GG$ using the Poseidon sponge: hash to curve.
   2. Compute $\openy \gets \openy_{\chaleval} + \chalu \cdot \openy_{\chaleval\omega}$.
   3. Update $\comm' \gets \comm + [\openy] \cdot \genOpen$.
4. Checking $\relation_{\mathsf{IPA}, \ell} \to \relation_{\mathsf{IPA},1}$:
   Check the correctness of the folding argument, for every $i = 1, \ldots, k$:

   1. Receive $L^{(i)}, R^{(i)} \in \GG$ (see the vector `OpeningProof.lr` in Kimchi).
   2. Sample $\chalfold_i \sample \mathcal{C}$ using the Poseidon sponge.
   3. Compute $C^{(i)} = [\chalfold_i^{-1}] \cdot L^{(i)} + C^{(i-1)} + [\chalfold_i] \cdot R^{(i)}$ (using the GLV endomorphism).
      (Note: to avoid the inversion, the element $P = [\chalfold_i^{-1}] \cdot L^{(i)}$ is witnessed and the verifier checks $[\chalfold_i] \cdot P = [\chalfold_i] \cdot ([\chalfold_i^{-1}] \cdot L^{(i)}) = L^{(i)}$. To understand why computing the field inversion would be expensive, see deferred computation.)
5. Checking $\relation_{\mathsf{IPA},1} \to \relation_{\mathsf{Acc}, \vec{G}}$:

   1. Receive $c$ from the prover.
   2. Define $\hpoly$ from $\vec{\chalfold}$ (the folding challenges, computed above).
   3. Compute a combined evaluation of the IPA challenge polynomial at two points: $b = \hpoly(\chaleval) + \chalu \cdot \hpoly(\chaleval\omega)$ (see the sketch after this list).
      - Computationally, $b$ is obtained inside the bulletproof folding procedure, by folding the vector $b_{\mathsf{init}}$ with $b_{\mathsf{init},j} = \chaleval^j + \chalu \cdot \chaleval^j \omega^j$ using the same standard bulletproof challenges that constitute $\hpoly(X)$. This $b_{\mathsf{init}}$ is a $\chalu$-recombined evaluation point. The equality holds by linearity of IPA recombination.
   4. Compute $\openy' \gets c \cdot b = c \cdot (\hpoly(\chaleval) + \chalu \cdot \hpoly(\chaleval\omega))$; this works since $\openx^{(k)} = \openx^{(k)}_{\chaleval} + \chalu \cdot \openx^{(k)}_{\chaleval\omega}$. See [Different functionalities](../plonk/inner_product_api) for more details, or the relevant code.
   5. Compute $\accCom \gets \comm^{(k)} - [\openy'] \cdot \genOpen$ (i.e. such that $\comm^{(k)} = \accCom + [\openy'] \cdot \genOpen$).
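Here is a sketch of step 5.3 above (toy field, illustrative parameters), computing $b$ both ways: via two $O(\log \ell)$ product evaluations of $\hpoly$, and by folding $b_{\mathsf{init}}$ with the same challenges, confirming they agree by linearity.

```python
import random

p = 2**31 - 1
k = 4
l = 2**k

def h_eval(x, alphas):
    """h(x) = prod_{i=0}^{k-1} (1 + a_{k-i} · x^{2^i}), evaluated in O(k)."""
    r = 1
    for i in range(len(alphas)):
        r = r * (1 + alphas[len(alphas) - 1 - i] * pow(x, 2**i, p)) % p
    return r

alphas = [random.randrange(1, p) for _ in range(k)]
zeta, omega, u = (random.randrange(1, p) for _ in range(3))

b = (h_eval(zeta, alphas) + u * h_eval(zeta * omega % p, alphas)) % p

# Same value via folding b_init, with b_init_j = zeta^j + u · (zeta·omega)^j:
v = [(pow(zeta, j, p) + u * pow(zeta * omega % p, j, p)) % p for j in range(l)]
for a in alphas:
    half = len(v) // 2
    v = [(v[i] + a * v[half + i]) % p for i in range(half)]
assert v[0] == b
```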

Note that the accumulator verifier must be proven (in addition to the Kimchi/PlonK verifier) for each input proof.

## No Cycles of Curves?

Note that the "cycles of curves" (e.g. Pasta cycle) does not show up in this part of the code: a separate accumulator is needed for each curve and the final verifier must check both accumulators to deem the combined recursive proof valid. This takes the form of passthough data in pickles.

Note, however, that the accumulation verifier makes use of both $\GG$-operations and $\FF$-operations; therefore it (like the Kimchi verifier) also requires deferred computation.