Accumulation
Introduction
The trick below was originally described in Halo; however, we are going to base this post on the abstraction of “accumulation schemes” described by Bünz, Chiesa, Mishra and Spooner in Proof-Carrying Data from Accumulation Schemes, in particular the scheme in Appendix A.2.
Relevant resources include:
- Proof-Carrying Data from Accumulation Schemes by Benedikt Bünz, Alessandro Chiesa, Pratyush Mishra and Nicholas Spooner.
- Recursive Proof Composition without a Trusted Setup (Halo) by Sean Bowe, Jack Grigg and Daira Hopwood.
This page describes the most relevant parts of these papers and how they are implemented in Pickles/Kimchi. It is not meant to document the low-level details of the code in Pickles, but to describe what the code aims to do, allowing someone reviewing or working on this part of the codebase to gain context.
Interactive Reductions Between Relations
The easiest way to understand “accumulation schemes” is as a set of interactive reductions between relations. An interactive reduction $R → R′$ proceeds as follows:
- The prover/verifier start with some statement $x$; the prover additionally holds $w$.
- They then run some protocol between them.
- After which they both obtain $x′$, and the prover additionally obtains $w′$.
Pictorially:
With the security/completeness guarantee that:
$(x, w) ∈ R ⟺ (x′, w′) ∈ R′$
Except with negligible probability.
In other words: we have reduced membership of $R$ to membership of $R′$ using interaction between the parties; unlike a classical Karp–Levin reduction, the soundness/completeness may rely on random coins and multiple rounds. Foreshadowing, here is a diagram/overview of the reductions (the relations will be described as we go) used in Pickles:
As you can see from Fig. 2, we have a cycle of reductions (following the arrows), e.g. we can reduce the relation $R_{Acc,G}$ to itself by applying all 4 reductions. This may seem useless: why reduce a relation to itself?
However, the crucial point is the “in-degree” (e.g. n-to-1) of these reductions: take a look at the diagram and note that any number of $R_{Acc,G}$ instances can be reduced to a single $R_{PCS,d}$ instance! This $R_{PCS,d}$ instance can then be converted to a single $R_{Acc,G}$ instance by applying the reductions (moving “around the diagram” by running one protocol after the other):
$R_{PCS,d} → R_{IPA,ℓ} → R_{IPA,1} → R_{Acc,G}$
Note: There are many examples of interactive reductions; an example familiar to the reader is PlonK itself, which reduces circuit satisfiability $R_C$ ($x$ is the public inputs and $w$ is the wire assignments) to openings of polynomial commitments $R_{PCS,d}$ ($x′$ are the polynomial commitments and evaluation points, $w′$ is the opening of the commitment).
More Theory/Reflections about Interactive Reductions (click to expand)
As noted in Compressed $Σ$-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics, every Proof-of-Knowledge (PoK) with $k$ rounds for a relation $R$ can instead be seen as a reduction to some relation $R′$ with $k−1$ rounds, as follows:
- Let $x′$ be the view of the verifier,
- and $w′$ be a $k$'th-round message which would make the verifier accept.
Hence the relation $R′$ is the set of pairs of verifier views (excluding the last round) and the last missing message which would make the verifier accept if sent.
This simple, yet beautiful, observation turns out to be extremely useful: rather than explicitly sending the last-round message (which may be large/take the verifier a long time to check), the prover can instead prove that he knows a last-round message which would make the verifier accept; after all, sending the witness $w′$ is a particularly simple/inefficient PoK for $(x′, w′) ∈ R′$.
The reductions in this document are all of this form (including the folding argument): receiving/verifying the last-round message would require too many resources (time/communication) of the verifier, hence we instead replace it with yet another reduction to yet another language (e.g. where the witness is now half the size).
Hence we end up with a chain of reductions, starting from the languages of the last-round messages. An “accumulation scheme” is just an example of such a chain of reductions which happens to be a cycle, meaning the language is “self-reducing” via a series of interactive reductions.
A Note On Fiat-Shamir: All the protocols described here are public-coin, hence in the implementation the Fiat-Shamir transform is applied to avoid interaction: the verifier's challenges are sampled using a hash function (e.g. Poseidon) modelled as a (reprogrammable) random oracle.
Polynomial Commitment Openings $R_{PCS,d}$
Recall that the polynomial commitment scheme (PCS) in Kimchi is just the trivial scheme based on Pedersen commitments. For Kimchi we are interested in “accumulation for the language ($R_{PCS,d}$) of polynomial commitment openings”, meaning that:
$(x = (C, x, y), w = (f)) ∈ R_{PCS,d} ⟺ C = ⟨f, G⟩ ∧ y = ∑_i f_i · x^i$
Where $f$ is the list of coefficients of a polynomial $f(X) := ∑_i f_i · X^i$.
This is the language we are interested in reducing: providing a trivial proof, i.e. sending $f$, requires communication and verifier time linear in the degree; we want a poly-logarithmic verifier. The communication complexity will be solved by a well-known folding argument; however, to reduce computation we are going to need the “Halo trick”.
First, a reduction from the PCS relation to an inner-product relation.
Reduction 1: $R_{PCS,d}→R_{IPA,ℓ}$
Formally the relation of the inner product argument is:
$(x = (C, G, H, x), w = (f)) ∈ R_{IPA,ℓ} ⟺ y = ⟨f, x⟩ ∈ F ∧ C = ⟨f, G⟩ + [y] · H ∈ G$
We can reduce $(x = (C, x, y), w = (f)) ∈ R_{PCS,d}$ to $R_{IPA,ℓ}$ with $d = ℓ$ as follows:
- Define $x = (1, x, x^2, x^3, …, x^{ℓ−1})$, so that $y = f(x) = ⟨f, x⟩$.
 The verifier adds the evaluation $y$ to the commitment “in a new coordinate” as follows:
- Verifier picks $H ←$ G$ and sends $H$ to the prover.
 Verifier updates $C←C+[y]⋅H$
Intuitively, we sample a fresh $H$ to prevent a malicious prover from “putting something in the $H$-position”: since the prover must send $y$ before seeing $H$, he would need to guess $H$ beforehand. If the prover is honest, we should have a commitment of the form:
$C=⟨f ,G⟩+[y]⋅H=⟨f ,G⟩+[⟨x,f ⟩]⋅H∈G$
Note: In some variants of this reduction $H$ is chosen as $[δ] · J$ for a constant $J ∈ G$, where $δ ←$ F$ is sampled by the verifier. This also works; however, we (in Kimchi) simply “hash to the curve” to sample $H$.
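To see the algebra of Reduction 1 in one place, here is a toy Python sketch. The group G is modelled as the additive group of integers modulo a prime, a deliberately insecure stand-in chosen only so that ⟨f, G⟩ and [y]·H become ordinary modular arithmetic whose identities can be asserted; all names are illustrative, not the Kimchi API.

```python
import random

P = 2**31 - 1  # toy prime: stands in for both the scalar field F and the "group" G

def inner(a, b):
    # inner product <a, b> mod P
    return sum(u * v for u, v in zip(a, b)) % P

random.seed(0)
ell = 8
f = [random.randrange(P) for _ in range(ell)]   # coefficients of f(X) (the witness)
G = [random.randrange(P) for _ in range(ell)]   # toy "generators"
x = random.randrange(P)                         # evaluation point

# R_PCS instance: C = <f, G> and y = f(x)
C = inner(f, G)
xs = [pow(x, i, P) for i in range(ell)]         # (1, x, x^2, ..., x^{l-1})
y = inner(f, xs)

# Reduction: verifier samples a fresh H and folds y into a new coordinate
H = random.randrange(P)
C_ipa = (C + y * H) % P

# The R_IPA relation now holds: C_ipa = <f, G> + [<f, xs>] * H
assert C_ipa == (inner(f, G) + inner(f, xs) * H) % P
```

The same two verifier moves (sample H, update C) are the entire reduction; the prover's witness is unchanged.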
Reduction 2 (Incremental Step): $R_{IPA,ℓ}→R_{IPA,ℓ/2}$
Note: The folding argument described below is the particular variant implemented in Kimchi, although some of the variable names are different.
The folding argument reduces an inner-product relation with $ℓ$ (a power of two) coefficients to an inner-product relation with $ℓ/2$ coefficients. To see how it works, let us rewrite the inner product in terms of a first and second part:
$⟨f ,x⟩=⟨f _{L},x_{L}⟩+⟨f _{R},x_{R}⟩∈F$
Where $f _{L}=(f_{1},…,f_{ℓ/2})$ and $f _{R}=(f_{ℓ/2+1},…,f_{ℓ})$, similarly for $x$.
Now consider a “randomized version” with a challenge $α∈F$ of this inner product:
$⟨f_L + α^{−1} · f_R, x_L + α · x_R⟩ = α^{−1} · ⟨f_R, x_L⟩ + (⟨f_L, x_L⟩ + ⟨f_R, x_R⟩) + α · ⟨f_L, x_R⟩$
Additional intuition: How do you arrive at the expression above? (click to expand)
The trick is to ensure that $⟨f _{R},x_{R}⟩+⟨f _{L},x_{L}⟩=⟨f ,x⟩=v$ ends up in the same power of $α$.
The particular expression above is not special and arguably not the most elegant: simpler alternatives can easily be found, and the inversion can be avoided, e.g. by instead using: $⟨f_L + α · f_R, α · x_L + x_R⟩ = ⟨f_L, x_R⟩ + α · (⟨f_L, x_L⟩ + ⟨f_R, x_R⟩) + α^2 · ⟨f_R, x_L⟩$ This has the same overall effect of isolating the interesting term (this time as the $α$-coefficient). The particular variant above can be found in e.g. Compressed $Σ$-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics, and proving extraction is somewhat easier than for the variant used in Kimchi.
The term we care about (underlined in magenta) is $⟨f_L, x_L⟩ + ⟨f_R, x_R⟩ = v$; the other two terms are cross-term garbage. The solution is to have the prover provide commitments to the cross terms to “correct” this randomized splitting of the inner product before seeing $α$: the prover commits to the three terms (one of which is already provided) and the verifier computes a commitment to the new randomized inner product. i.e.
The prover sends commitments to the $⟨f_R, x_L⟩$ and $⟨f_L, x_R⟩$ cross terms:
$L = ⟨f_R ∥ 0, G⟩ + [⟨f_R, x_L⟩] · H$
$R = ⟨0 ∥ f_L, G⟩ + [⟨f_L, x_R⟩] · H$
The verifier samples $α ←$ F$ and defines:
$C′ = [α^{−1}] · L + C + [α] · R$
$= ⟨α^{−1} · (f_R ∥ 0) + (f_L ∥ f_R) + α · (0 ∥ f_L), G⟩ + [α^{−1} · ⟨f_R, x_L⟩ + ⟨f_L, x_L⟩ + ⟨f_R, x_R⟩ + α · ⟨f_L, x_R⟩] · H$
$= ⟨(f_L + α^{−1} · f_R) ∥ (α · f_L + f_R), G⟩ + [⟨f_L + α^{−1} · f_R, x_L⟩ + ⟨α · f_L + f_R, x_R⟩] · H$
The final observation in the folding argument is simply that:
$α · f_L + f_R = α · (f_L + α^{−1} · f_R) = α · f′$
Hence we can replace occurrences of $α · f_L + f_R$ by $α · f′$; with this, look at the green term:
$⟨f_L + α^{−1} · f_R, x_L⟩ + ⟨α · f_L + f_R, x_R⟩ = ⟨f′, x_L⟩ + ⟨α · f′, x_R⟩ = ⟨f′, x_L⟩ + ⟨f′, α · x_R⟩ = ⟨f′, x_L + α · x_R⟩ = ⟨f′, x′⟩$
By defining $x′ = x_L + α · x_R$. We also rewrite the blue term in terms of $f′$ similarly: $⟨(f_L + α^{−1} · f_R) ∥ (α · f_L + f_R), G⟩ = ⟨f′ ∥ (α · f′), G⟩ = ⟨f′ ∥ f′, G_L ∥ ([α] · G_R)⟩ = ⟨f′, G′⟩$
By defining $G′ = G_L + [α] · G_R$. In summary, by computing:
$C′ ← [α^{−1}] · L + C + [α] · R ∈ G$
$f′ ← f_L + α^{−1} · f_R ∈ F^{ℓ/2}$
$x′ ← x_L + α · x_R ∈ F^{ℓ/2}$
$G′ ← G_L + [α] · G_R ∈ G^{ℓ/2}$
$y′ ← ⟨f′, x′⟩ ∈ F$
We obtain a new instance of the inner product relation (of half the size):
$(x = (C′, G′, H, x′), w = (f′, y′)) ∈ R_{IPA,ℓ/2}$
At this point the prover could send $x′$ and $f′$ to the verifier, who could verify the claim by:
- Computing $G′$ from $α$ and $G$.
- Computing $y′ = ⟨f′, x′⟩$.
- Checking $C′ =? ⟨f′, G′⟩ + [y′] · H$.
This would require half as much communication as the naive proof. A modest improvement…
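A single folding round can be checked mechanically. The sketch below reuses the same toy model (the group as integers mod a prime, insecure and purely illustrative) to perform one round and assert that the half-size R_IPA instance still holds.

```python
import random

P = 2**31 - 1  # toy prime

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % P

random.seed(1)
ell = 8
half = ell // 2
f = [random.randrange(P) for _ in range(ell)]
G = [random.randrange(P) for _ in range(ell)]
xs = [random.randrange(P) for _ in range(ell)]
H = random.randrange(P)

y = inner(f, xs)
C = (inner(f, G) + y * H) % P      # R_IPA instance

f_L, f_R = f[:half], f[half:]
x_L, x_R = xs[:half], xs[half:]
G_L, G_R = G[:half], G[half:]

# Prover: cross-term commitments
L = (inner(f_R, G_L) + inner(f_R, x_L) * H) % P   # <f_R || 0, G> + [<f_R, x_L>] H
R = (inner(f_L, G_R) + inner(f_L, x_R) * H) % P   # <0 || f_L, G> + [<f_L, x_R>] H

# Verifier: challenge and folded instance
alpha = random.randrange(1, P)
alpha_inv = pow(alpha, -1, P)
C2 = (alpha_inv * L + C + alpha * R) % P
f2 = [(a + alpha_inv * b) % P for a, b in zip(f_L, f_R)]
x2 = [(a + alpha * b) % P for a, b in zip(x_L, x_R)]
G2 = [(a + alpha * b) % P for a, b in zip(G_L, G_R)]
y2 = inner(f2, x2)

# Half-size R_IPA instance holds
assert C2 == (inner(f2, G2) + y2 * H) % P
```

The assertion is exactly the derivation above: the cross terms contributed by L and R cancel against the cross terms produced by splitting the inner product.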
However, we can iteratively apply this transformation until we reach an instance of constant size:
Reduction 2 (Full): $R_{IPA,ℓ}→…→R_{IPA,1}$
The process above can simply be applied again to the new $(C′, G′, H, x′) ∈ R_{IPA,ℓ/2}$ instance as well. By doing so $k = \log_2(ℓ)$ times, the total communication is brought down to $2k$ $G$-elements, until the instance consists of $(C, G, H, x) ∈ R_{IPA,1}$, at which point the prover simply provides the witness $f′ ∈ F$.
Because we need to refer to the terms of the intermediate reductions, we let $G^{(i)}$, $f^{(i)}$, $x^{(i)}$ be the $G′$, $f′$, $x′$ vectors, respectively, after $i$ recursive applications, with $G^{(0)}$, $f^{(0)}$, $x^{(0)}$ being the original instance. We will drop the vector notation for $G^{(k)}$ (similarly for $f^{(k)}$, $x^{(k)}$), since the last-step vectors have size 1, and refer to these single values instead. We denote by $α_i$ the challenge of the $i$'th application.
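Iterating the round until the vectors have length 1 gives the full reduction. In the toy model used earlier (insecure additive-group stand-in, illustrative names), the transcript is k = log2(ℓ) pairs (L, R) plus the final scalar f^(k), which the verifier can check directly:

```python
import random

P = 2**31 - 1  # toy prime

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

random.seed(7)
ell = 8
f = [random.randrange(P) for _ in range(ell)]
G = [random.randrange(P) for _ in range(ell)]
xs = [random.randrange(P) for _ in range(ell)]
H = random.randrange(P)
C = (inner(f, G) + inner(f, xs) * H) % P   # R_IPA instance

alphas = []
while len(f) > 1:
    half = len(f) // 2
    # prover's cross terms for this round
    L = (inner(f[half:], G[:half]) + inner(f[half:], xs[:half]) * H) % P
    R = (inner(f[:half], G[half:]) + inner(f[:half], xs[half:]) * H) % P
    a = random.randrange(1, P)     # verifier's challenge alpha_i
    ainv = pow(a, -1, P)
    alphas.append(a)
    C = (ainv * L + C + a * R) % P
    f  = [(l + ainv * r) % P for l, r in zip(f[:half], f[half:])]
    xs = [(l + a * r) % P for l, r in zip(xs[:half], xs[half:])]
    G  = [(l + a * r) % P for l, r in zip(G[:half], G[half:])]

# Size-1 instance: prover reveals f^(k); verifier checks it directly
assert C == (f[0] * G[0] + (f[0] * xs[0] % P) * H) % P
```

Note that the verifier in this naive sketch recomputes the folded G and xs vectors itself, which is the O(ℓ) cost the next section removes.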
Reduction 3: $R_{IPA,1} → R_{Acc,G}$
While the proof for $R_{IPA,ℓ}$ above has size $O(\log ℓ)$, the verifier's time complexity is $O(ℓ)$:
- Computing $G^{(k)}$ from $G^{(0)}$ using $α$ takes $O(ℓ)$.
- Computing $x^{(k)}$ from $x^{(0)}$ using $α$ takes $O(ℓ)$.
The rest of the verifier's computation is only $O(\log ℓ)$, namely:
- Sampling all the challenges $α_i ←$ C$.
- Computing $C^{(i)} ← [α_i^{−1}] · L^{(i)} + C^{(i−1)} + [α_i] · R^{(i)}$ for every $i$.
However, upon inspection, the more pessimistic claim that computing $x^{(k)}$ takes $O(ℓ)$ turns out to be false:
Claim: Define $h(X) := ∏_{i=0}^{k−1} (1 + α_{k−i} · X^{2^i})$; then $x^{(k)} = h(x)$ for all $x$. Therefore, $x^{(k)}$ can be evaluated in $O(k) = O(\log ℓ)$.
Proof: This can be verified by looking at the expansion of $h(X)$. Define $\{h_i\}_{i=1}^{ℓ}$ to be the coefficients of $h(X)$, that is $h(X) = ∑_{i=1}^{ℓ} h_i · X^{i−1}$. Then the claim is equivalent to $x^{(k)} = ∑_{i=1}^{ℓ} h_i · x^{i−1}$. Let $b(i, j)$ denote the $j$'th bit in the bit-decomposition of the index $i$ and observe that:
$h_i = ∏_{j=1}^{k} α_{k−j}^{b(i,j)}$ where $∑_j b(i, j) · 2^j = i$
Now, compare this with how the $k$'th element of $x^{(i)}$ is constructed:
$x^{(1)}_k = x_k + α_1 · x_{n/2+k}$
$x^{(2)}_k = x^{(1)}_k + α_2 · x^{(1)}_{n/4+k} = x_k + α_1 · x_{n/2+k} + α_2 · (x_{n/4+k} + α_1 · x_{n/2+n/4+k}) = ∑_{i=0}^{3} x_{i·(n/4)+k} · ∏_j α_j^{b(i,j)}$
Recalling that $x_k = x^k$, it is easy to see that this generalizes exactly to the expression for $h_i$ derived above, which shows that evaluation through $h(X)$ is correct.
Finally, regarding evaluation complexity: it is clear that $h$ can be evaluated in $O(k) = O(\log ℓ)$ time as a product of $k$ factors. This concludes the proof.
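The claim is easy to test numerically: fold the vector of powers of x with random challenges, then compare against the k-factor product form of h(x). A toy sketch (illustrative only, field arithmetic mod a small prime):

```python
import random

P = 2**31 - 1  # toy prime field
random.seed(2)
ell, k = 16, 4
x = random.randrange(P)
xs = [pow(x, i, P) for i in range(ell)]   # (1, x, ..., x^{l-1})

# Fold the powers vector exactly as the IPA does, recording the challenges
alphas = []
while len(xs) > 1:
    half = len(xs) // 2
    a = random.randrange(1, P)
    alphas.append(a)                      # alphas[i-1] = alpha_i
    xs = [(l + a * r) % P for l, r in zip(xs[:half], xs[half:])]

# h(x) = prod_{i=0}^{k-1} (1 + alpha_{k-i} * x^{2^i}), evaluated with k factors
h_x = 1
for i in range(k):
    h_x = h_x * (1 + alphas[k - 1 - i] * pow(x, 2**i, P)) % P

# Folded scalar equals the O(log l) product evaluation
assert xs[0] == h_x
```

Each folding round multiplies every surviving power of x by a common factor (1 + α_i · x^{ℓ/2^i}), which is exactly where the product form comes from.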
The “Halo Trick”
The “Halo trick” resides in observing that this is also the case for $G^{(k)}$, since it is folded in exactly the same way as $x^{(k)}$. It is not hard to convince oneself (using the same type of argument as above) that:
$G^{(k)} = ⟨h, G⟩$
Where $h$ is the vector of coefficients of $h(X)$ (like $f$ is the vector of coefficients of $f(X)$), i.e. $h(X) = ∑_{i=1}^{ℓ} h_i · X^{i−1}$.
For notational convenience (and to emphasize that they are one-dimensional vectors), define/replace:
$U = G^{(k)} ∈ G, c = f^{(k)} ∈ F, h(x) = x^{(k)} ∈ F$
With this we define the “accumulator language” which states that “$U$” was computed correctly:
$(x = (U, α), w = ϵ) ∈ R_{Acc,G} ⟺ h(X) := ∏_{i=0}^{k−1} (1 + α_{k−i} · X^{2^i}) ∧ U = ⟨h, G⟩$
Note: since there is no witness for this relation, anyone can verify it (in $O(ℓ)$ time) by simply computing $⟨h, G⟩$ in linear time. Instances are also small: their size is dominated by $α$, which has $|α| = \log_2 ℓ$ elements.
In The Code: in the Kimchi code, $α$ is called prev_challenges and $U$ is called comm, both defined in the RecursionChallenge struct.
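A decider for the accumulator relation expands α into the coefficient vector h and recomputes ⟨h, G⟩ in linear time. The toy sketch below (additive-group stand-in, illustrative names) checks that folding the generators with the challenges and expanding h(X) coefficient-by-coefficient agree:

```python
import random

P = 2**31 - 1  # toy prime
random.seed(3)
ell, k = 8, 3
G = [random.randrange(P) for _ in range(ell)]
alphas = [random.randrange(1, P) for _ in range(k)]  # folding challenges alpha_1..alpha_k

# Fold the generators exactly like the IPA does, yielding U = G^(k)
Gs = G[:]
for a in alphas:
    half = len(Gs) // 2
    Gs = [(l + a * r) % P for l, r in zip(Gs[:half], Gs[half:])]
U = Gs[0]

# Expand alpha into the coefficients of h(X) = prod_{i=0}^{k-1} (1 + alpha_{k-i} X^{2^i})
h = [1]
for i in range(k):
    a = alphas[k - 1 - i]
    step = 2**i
    new = h + [0] * step               # multiply current h by (1 + a * X^step)
    for j, c in enumerate(h):
        new[j + step] = (new[j + step] + a * c) % P
    h = new

# The accumulator relation: U = <h, G>
assert U == sum(c * g for c, g in zip(h, G)) % P
```

The expansion costs O(ℓ) field operations plus one size-ℓ inner product, which is exactly why checking an accumulator directly is the expensive path.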
Now, using the new notation, rewrite $R_{IPA,1}$ as:
$(x = (C, U, H, h(x)), w = (c)) ∈ R_{IPA,1} ⟺ y = c · h(x) ∧ C = [c] · U + [y] · H ∈ G$
Note: It is the same relation, we have just replaced some names ($c = f^{(k)}$, $x^{(k)} = h(x)$) and simplified a bit: inner products between one-dimensional vectors are just multiplications.
We now have all the components to reduce $R_{IPA,1}→R_{Acc,G}$ (with no soundness error) as follows:
- Prover sends $c, U$ to the verifier.
- Verifier:
  - Computes $y ← c · h(x)$.
  - Checks $C =? [c] · U + [y] · H$.
  - Outputs $(x = (U, α), w = ϵ) ∈ R_{Acc,G}$.
Note: The above can be optimized, in particular there is no need for the prover to send $U$.
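Putting Reductions 2 and 3 together: after running the folding to the end, the verifier only needs c, U and an O(log ℓ) evaluation of h(x) to check the final instance. A toy sketch (same insecure additive-group model as earlier sketches; names are illustrative):

```python
import random

P = 2**31 - 1  # toy prime
random.seed(4)

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) % P

def h_eval(alphas, x):
    # h(x) = prod_{i=0}^{k-1} (1 + alpha_{k-i} x^{2^i}), O(log l) factors
    k, acc = len(alphas), 1
    for i in range(k):
        acc = acc * (1 + alphas[k - 1 - i] * pow(x, 2**i, P)) % P
    return acc

ell = 8
G = [random.randrange(P) for _ in range(ell)]
H = random.randrange(P)
x = random.randrange(P)
f = [random.randrange(P) for _ in range(ell)]
xs = [pow(x, i, P) for i in range(ell)]
C = (inner(f, G) + inner(f, xs) * H) % P   # R_IPA instance at size l

# Run the folding argument to the end (honest transcript)
alphas = []
while len(f) > 1:
    half = len(f) // 2
    L = (inner(f[half:], G[:half]) + inner(f[half:], xs[:half]) * H) % P
    R = (inner(f[:half], G[half:]) + inner(f[:half], xs[half:]) * H) % P
    a = random.randrange(1, P)
    alphas.append(a)
    ainv = pow(a, -1, P)
    C = (ainv * L + C + a * R) % P
    f  = [(l + ainv * r) % P for l, r in zip(f[:half], f[half:])]
    xs = [(l + a * r) % P for l, r in zip(xs[:half], xs[half:])]
    G  = [(l + a * r) % P for l, r in zip(G[:half], G[half:])]

c, U = f[0], G[0]                # prover sends c and U = G^(k)
y = h_eval(alphas, x) * c % P    # verifier: y = c * h(x), only O(log l) work
assert C == (c * U + y * H) % P  # the R_IPA_1 check; (U, alphas) becomes the accumulator
```

The verifier never recomputes the folded generator vector here; checking that U was computed correctly is precisely what is deferred to the accumulator relation.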
Reduction 4: $R_{Acc,G} → R_{PCS,d}$
Tying the final knot in the diagram.
The expensive part of checking $(U, α) ∈ R_{Acc,G}$ consists of computing $⟨h, G⟩$ given the $α$ describing $h(X)$: first expanding $α$ into $h$, then computing the MSM. However, by observing that $U = ⟨h, G⟩$ is actually a polynomial commitment to $h(X)$, which we can evaluate at any point using $O(\log ℓ)$ operations, we arrive at a simple strategy for reducing any number of such claims to a single polynomial commitment opening:
- Prover sends $U^{(1)}, …, U^{(n)}$ to the verifier.
- Verifier samples $ζ ←$ F$, $u ←$ F$ and computes:
$C ← ∑_i [u^{i−1}] · U^{(i)} ∈ G$
$y ← ∑_i u^{i−1} · h^{(i)}(ζ) ∈ F$
And outputs the following claim:
$(C, ζ, y) ∈ R_{PCS,ℓ}$
i.e. the polynomial commitment $C$ opens to $y$ at $ζ$. The prover has the witness:
$f(X) = ∑_i u^{i−1} · h^{(i)}(X)$
Why is this a sound reduction? If one of the $U^{(i)}$ does not commit to $h^{(i)}$, then the committed polynomial and $h^{(i)}$ agree on at most $ℓ$ points, hence $f^{(i)}(ζ) = h^{(i)}(ζ)$ with probability at most $ℓ/|F|$. Taking a union bound over all $n$ terms leads to soundness error $nℓ/|F|$: negligible.
The reduction above requires $n$ $G$-operations and $O(n \log ℓ)$ $F$-operations.
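The combination step can be sketched as follows (toy additive-group model again; names are illustrative). n accumulators are merged into a single polynomial-commitment opening claim, whose witness is the u-weighted sum of the h polynomials:

```python
import random

P = 2**31 - 1  # toy prime
random.seed(5)
ell, k, n = 8, 3, 3
G = [random.randrange(P) for _ in range(ell)]

def h_coeffs(alphas):
    # coefficients of h(X) = prod_{i=0}^{k-1} (1 + alpha_{k-i} X^{2^i})
    h = [1]
    for i in range(len(alphas)):
        a, step = alphas[len(alphas) - 1 - i], 2**i
        new = h + [0] * step
        for j, c in enumerate(h):
            new[j + step] = (new[j + step] + a * c) % P
        h = new
    return h

# n accumulators (alpha vectors) and their commitments U^(i) = <h^(i), G>
accs = [[random.randrange(1, P) for _ in range(k)] for _ in range(n)]
hs = [h_coeffs(a) for a in accs]
Us = [sum(c * g for c, g in zip(h, G)) % P for h in hs]

# Verifier samples zeta, u and combines all claims into one PCS opening
zeta = random.randrange(P)
u = random.randrange(P)
C = sum(pow(u, i, P) * Us[i] for i in range(n)) % P
y = sum(pow(u, i, P) * sum(c * pow(zeta, j, P) for j, c in enumerate(hs[i])) % P
        for i in range(n)) % P

# Prover's witness: f(X) = sum_i u^i * h^(i)(X); check C = <f, G> and f(zeta) = y
f = [sum(pow(u, i, P) * hs[i][j] for i in range(n)) % P for j in range(ell)]
assert C == sum(c * g for c, g in zip(f, G)) % P
assert y == sum(c * pow(zeta, j, P) for j, c in enumerate(f)) % P
```

The verifier's cost is n group operations for C plus n evaluations of h at ζ, each taking O(log ℓ) field operations, matching the counts stated above.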
Support for Arbitrary Polynomial Relations
Additional polynomial commitments (e.g. from PlonK) can be added to the randomized sums $(C, y)$ above and opened at $ζ$ as well; in which case the prover provides the claimed openings at $ζ$ before the challenge $u$ is sampled.
This is done in Kimchi/Pickles: the $ζ$ and $u$ above are the same as in the Kimchi code.
The combined $y$ (including both the $h(·)$ evaluations and the polynomial openings at $ζ$ and $ζω$) is called combined_inner_product in Kimchi.
This $R_{PCS,ℓ}$ instance is then reduced back into a single $R_{Acc,G}$ instance, which is included with the proof.
Multiple Accumulators (PCD Case)
From the section above it may seem like there will always be a single $R_{Acc,G}$ instance. This is indeed the case if the proof only verifies a single proof, called “Incremental Verifiable Computation” (IVC) in the literature. If the proof verifies multiple proofs, “Proof-Carrying Data” (PCD), then there will be multiple accumulators: every “input proof” includes an accumulator ($R_{Acc,G}$ instance), and all of these are combined into the new (single) $R_{Acc,G}$ instance included in the new proof. This way, if one of the original proofs included an invalid accumulator and therefore did not verify, the new proof will also include an invalid accumulator and (with overwhelming probability) not verify.
Note that the new proof contains the (single) accumulator of each “input” proof, even though the proofs themselves are part of the witness. This is because verifying each input proof results in an accumulator (an instance of $R_{Acc,G}$), which can then either be verified directly (which is expensive) or further “accumulated”.
These accumulators are the RecursionChallenge structs included in a Kimchi proof.
The verifier checks the PlonK proof (which proves accumulation for each “input proof”); this results in some number of polynomial relations, which are combined with the accumulators of the “input” proofs to produce the new accumulator.
Accumulation Verifier
The section above implicitly describes the work the verifier must do, but for the sake of completeness let us explicitly describe what the verifier must do to verify a Fiat-Shamir compiled proof of the transformations above. This constitutes “the accumulation verifier”, which must be implemented “in-circuit” (in addition to the “Kimchi verifier”). The section below also describes how to incorporate the additional evaluation point $ζ · ω$ (“the step”, used by Kimchi to enforce constraints between adjacent rows). Let $C ⊆ F$ be the challenge space (128-bit GLV-decomposed challenges):

1. PlonK verifier on $π$ outputs polynomial relations (in purple in Fig. 4).
2. Checking $R_{Acc,G}$ and polynomial relations (from PlonK) to $R_{PCS,d}$ (the dotted arrows):
   - Sample $ζ ←$ C$ (evaluation point) using the Poseidon sponge.
   - Read the claimed evaluations at $ζ$ and $ωζ$ (PointEvaluations).
   - Sample $v ←$ C$ (commitment combination challenge, polyscale) using the Poseidon sponge.
   - Sample $u ←$ C$ (evaluation combination challenge, evalscale) using the Poseidon sponge. The $(v, u)$ notation is consistent with the Plonk paper, where $v$ recombines commitments and $u$ recombines evaluations.
   - Compute $C ∈ G$ with $v$ from:
     - $U^{(1)}, …, U^{(n)}$ (RecursionChallenge.comm $∈ G$)
     - The polynomial commitments from PlonK (ProverCommitments)
   - Compute $y_ζ$ (part of combined_inner_product) with $v$ from:
     - The evaluations $h^{(1)}(ζ), …, h^{(n)}(ζ)$
     - The polynomial openings from PlonK (ProofEvaluations) at $ζ$
   - Compute $y_{ζω}$ (part of combined_inner_product) with $v$ from:
     - The evaluations $h^{(1)}(ζω), …, h^{(n)}(ζω)$
     - The polynomial openings from PlonK (ProofEvaluations) at $ζ · ω$
   At this point we have two claims: $(C, ζ, y_ζ) ∈ R_{PCS,d}$ and $(C, ζω, y_{ζω}) ∈ R_{PCS,d}$. These are combined using a random linear combination with $u$ in the inner product argument (see [Different functionalities](/plonk/inner_product_api.html) for details).


3. Checking $R_{PCS,d} → R_{IPA,ℓ}$:
   - Sample $H ←$ G$ using the Poseidon sponge: hash to curve.
   - Compute $y ← y_ζ + u · y_{ζω}$.
   - Update $C′ ← C + [y] · H$.

4. Checking $R_{IPA,ℓ} → R_{IPA,1}$. Check the correctness of the folding argument; for every $i = 1, …, k$:
   - Receive $L^{(i)}, R^{(i)} ∈ G$ (see the vector OpeningProof.lr in Kimchi).
   - Sample $α_i ←$ C$ using the Poseidon sponge.
   - Compute $C^{(i)} = [α_i^{−1}] · L^{(i)} + C^{(i−1)} + [α_i] · R^{(i)}$ (using the GLV endomorphism).
   (Note: to avoid the inversion, the element $P = [α_i^{−1}] · L^{(i)}$ is witnessed and the verifier checks $[α_i] · P = [α_i] · ([α_i^{−1}] · L^{(i)}) = L^{(i)}$. To understand why computing the field inversion would be expensive, see deferred computation.)
5. Checking $R_{IPA,1} → R_{Acc,G}$:
   - Receive $c$ from the prover.
   - Define $h(X)$ from $α$ (the folding challenges, computed above).
   - Compute a combined evaluation of the IPA challenge polynomial at two points: $b = h(ζ) + u · h(ζω)$. Computationally, $b$ is obtained inside the bulletproof folding procedure, by folding the vector $b_{init}$ such that $b_{init,j} = ζ^j + u · (ζω)^j$ using the same standard bulletproof challenges that constitute $h(X)$. This $b_{init}$ is a $u$-recombined evaluation point; the equality holds by the linearity of IPA recombination.
   - Compute $y′ ← c · b = c · (h(ζ) + u · h(ζω))$; this works since $x^{(k)} = x^{(k)}_ζ + u · x^{(k)}_{ζω}$. See Different functionalities for more details or the relevant code.
   - Compute $U ← C^{(k)} − [y′] · H$ (i.e. such that $C^{(k)} = U + [y′] · H$).
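The combined evaluation b can be computed either directly from h (two O(log ℓ) evaluations) or by folding b_init alongside the IPA, as the last step describes; by linearity both give the same value. A toy sketch (illustrative only, assuming u is the evaluation-combination challenge):

```python
import random

P = 2**31 - 1  # toy prime field
random.seed(6)
ell, k = 8, 3
alphas = [random.randrange(1, P) for _ in range(k)]  # folding challenges
zeta = random.randrange(P)
omega = random.randrange(P)
u = random.randrange(P)

def h_eval(x):
    # h(x) = prod_{i=0}^{k-1} (1 + alpha_{k-i} x^{2^i})
    acc = 1
    for i in range(k):
        acc = acc * (1 + alphas[k - 1 - i] * pow(x, 2**i, P)) % P
    return acc

# b_init[j] = zeta^j + u * (zeta*omega)^j, folded with the same challenges
b = [(pow(zeta, j, P) + u * pow(zeta * omega % P, j, P)) % P for j in range(ell)]
for a in alphas:
    half = len(b) // 2
    b = [(l + a * r) % P for l, r in zip(b[:half], b[half:])]

# Folded value equals the direct combined evaluation, by linearity of folding
assert b[0] == (h_eval(zeta) + u * h_eval(zeta * omega % P)) % P
```

Folding is a linear map with the same coefficients for any input vector, so folding a sum of power vectors evaluates h at both points at once.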
Note that the accumulator verifier must be proven (in addition to the Kimchi/PlonK verifier) for each input proof.
No Cycles of Curves?
Note that “cycles of curves” (e.g. the Pasta cycle) do not show up in this part of the code: a separate accumulator is needed for each curve, and the final verifier must check both accumulators to deem the combined recursive proof valid. This takes the form of passthrough data in pickles.
Note, however, that the accumulation verifier makes use of both $G$-operations and $F$-operations; therefore it (like the Kimchi verifier) also requires deferred computation.