// kimchi_msm/logup.rs
#![allow(clippy::type_complexity)]

//! Implement a variant of the logarithmic derivative lookups based on the
//! equations described in the paper ["Multivariate lookups based on logarithmic
//! derivatives"](https://eprint.iacr.org/2022/1530.pdf).
//!
//! The variant is mostly based on the observation that the polynomial
//! identities can be verified using the "Idealised low-degree protocols"
//! described in section 4 of the
//! ["PlonK"](https://eprint.iacr.org/2019/953.pdf) paper and "the quotient
//! polynomial" described in round 3 of the PlonK protocol, instead of using
//! the sumcheck protocol.
//!
//! The protocol is based on the following observations:
//!
//! The sequence (a_i) is included in (b_i) if and only if the following
//! equation holds:
//! ```text
//!  k       1          l     m_i
//!  ∑    -------   =   ∑   -------                     (1)
//! i=1   β + a_i      i=1  β + b_i
//! ```
//! where m_i is the number of times the value b_i appears in the sequence
//! (a_j).
//!
//! The sequence (b_i) will refer to the table values and the sequence (a_i) to
//! the values the prover looks up.
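//!
//! As a quick numerical sanity check of equation (1), here is a minimal,
//! self-contained sketch. It uses a toy prime field (p = 2^31 - 1, with
//! inversion via Fermat's little theorem) as a stand-in for the crate's
//! scalar field, and a hard-coded β instead of a verifier challenge; it is
//! illustrative only.
//! ```ignore
//! // Toy prime field, a stand-in for the real scalar field.
//! const P: u64 = 2_147_483_647; // 2^31 - 1, a Mersenne prime
//! fn add(a: u64, b: u64) -> u64 { (a + b) % P }
//! fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
//! fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % (P as u128)) as u64 }
//! fn pow(b: u64, e: u64) -> u64 {
//!     (0..64).rev().fold(1, |r, i| {
//!         let r = mul(r, r);
//!         if e >> i & 1 == 1 { mul(r, b) } else { r }
//!     })
//! }
//! fn inv(a: u64) -> u64 { pow(a, P - 2) } // Fermat's little theorem
//!
//! fn main() {
//!     // Table values (b_i) and looked-up values (a_i): every a_i is in the table.
//!     let b = [1u64, 2, 3, 4];
//!     let a = [2u64, 2, 4];
//!     // m_i = number of times the table value b_i is looked up.
//!     let m: Vec<u64> = b
//!         .iter()
//!         .map(|bi| a.iter().filter(|&&x| x == *bi).count() as u64)
//!         .collect();
//!     let beta = 987_654_321u64; // stand-in for the random challenge β
//!     let lhs = a.iter().fold(0, |acc, ai| add(acc, inv(add(beta, *ai))));
//!     let rhs = b
//!         .iter()
//!         .zip(&m)
//!         .fold(0, |acc, (bi, mi)| add(acc, mul(*mi, inv(add(beta, *bi)))));
//!     assert_eq!(lhs, rhs);
//! }
//! ```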
//!
//! For readability, the table values are represented as the evaluations over a
//! subgroup H of the field F of a polynomial t(X), and the looked-up values by
//! the evaluations of a polynomial f(X). If we suppose the subgroup H is
//! defined as {1, ω, ω^2, ..., ω^{n-1}}, the equation (1) becomes:
//!
//! ```text
//!  n       1             n     m(ω^i)
//!  ∑  ----------   =     ∑   ----------               (2)
//! i=1 β + f(ω^i)        i=1  β + t(ω^i)
//! ```
//!
//! In the codebase, the multiplicities m_i are called the "lookup counters".
//!
//! The protocol can be generalized to multiple "looked-up" polynomials f_1,
//! ..., f_k (embedded in the structure `LogupWitness` in the codebase) and the
//! equation (2) becomes:
//!
//! ```text
//!  n   k       1               n     m(ω^i)
//!  ∑   ∑  ------------   =     ∑   ----------         (3)
//! i=1 j=1 β + f_j(ω^i)        i=1  β + t(ω^i)
//! ```
//!
//! which can be rewritten as:
//! ```text
//!  n  (  k       1            m(ω^i)   )
//!  ∑  (  ∑  ------------  -  ---------- )  =  0       (4)
//! i=1 ( j=1 β + f_j(ω^i)     β + t(ω^i) )
//!      \                               /
//!       -------------------------------
//!           "inner sums", h(ω^i)
//! ```
//!
//! The equation says that if we sum/accumulate the "inner sums" (called the
//! "lookup terms" in the codebase) over the subgroup H, we will get a zero
//! value. Note the analogy with the "multiplicative" accumulator used in the
//! lookup argument called ["Plookup"](https://eprint.iacr.org/2020/315.pdf).
//!
//! We will define an accumulator ϕ : H -> F (called the "lookup aggregation"
//! in the codebase) which will contain the "running inner sums". The equation
//! only requires the initial and final values of the accumulator to be equal;
//! by convention we start at zero, so after accumulating over the whole
//! subgroup the accumulator must come back to zero. The idea of equation 4 is
//! that all the values have been read and written to the accumulator the
//! right number of times, with respect to the multiplicities m.
//! More precisely, we will have:
//! ```text
//! - φ(1) = 0
//!
//!                                             h(ω^j)
//!                               /----------------------------------\
//!                               (  k        1             m(ω^j)   )
//! - φ(ω^{j + 1}) = φ(ω^j)  +    (  ∑   ------------  -  ---------- )
//!                               ( i=1  β + f_i(ω^j)     β + t(ω^j) )
//!
//! - φ(ω^n) = φ(1) = 0
//! ```
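//!
//! The following sketch, reusing the toy-field helpers (`add`, `sub`, `mul`,
//! `inv`) from the first snippet above, accumulates the inner sums h(ω^i) row
//! by row for two looked-up columns and one table column, and checks that the
//! accumulator comes back to zero. The data is hard-coded for illustration.
//! ```ignore
//! fn main() {
//!     let beta = 123_456_789u64;
//!     // One table column t and two read columns f_1, f_2 over a domain of size 4.
//!     let t = [5u64, 6, 7, 8];
//!     let f1 = [5u64, 5, 7, 8];
//!     let f2 = [6u64, 8, 8, 5];
//!     // m(ω^i): how many times the table entry t(ω^i) is read across f_1 and f_2.
//!     let m: Vec<u64> = t
//!         .iter()
//!         .map(|ti| f1.iter().chain(&f2).filter(|&&x| x == *ti).count() as u64)
//!         .collect();
//!     // h(ω^i) = 1/(β + f_1(ω^i)) + 1/(β + f_2(ω^i)) - m(ω^i)/(β + t(ω^i))
//!     let h: Vec<u64> = (0..4)
//!         .map(|i| {
//!             let reads = add(inv(add(beta, f1[i])), inv(add(beta, f2[i])));
//!             sub(reads, mul(m[i], inv(add(beta, t[i]))))
//!         })
//!         .collect();
//!     // φ(1) = 0, φ(ω^{j + 1}) = φ(ω^j) + h(ω^j); the final value wraps back to 0.
//!     let mut phi = 0u64;
//!     for i in 0..4 {
//!         phi = add(phi, h[i]);
//!     }
//!     assert_eq!(phi, 0);
//! }
//! ```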
//!
//! We will split the inner sums into chunks of size (MAX_SUPPORTED_DEGREE - 2)
//! to avoid the quotient polynomial having too large a degree.
//! As a reminder, the paper ["Multivariate lookups based on logarithmic
//! derivatives"](https://eprint.iacr.org/2022/1530.pdf) uses the sumcheck
//! protocol to compute the partial sums (equations 16 and 17). However, we use
//! the PlonK polynomial IOP and therefore we will use the quotient polynomial,
//! and the computation of the partial sums will be translated into a
//! constraint in a new power of alpha.
//!
//! Note that the inner sum h(X) can be constrained as follows:
//! ```text
//!          k                      k   /              k                 \
//!  h(X) *  ᴨ  (β + f_{i}(X))  =   ∑   |  m_{i}(X) *  ᴨ  (β + f_{j}(X)) |     (5)
//!         i=0                    i=0  |             j=0                |
//!                                     \             j≠i                /
//! ```
//! (with f_0(X) being the table t(X), m_0(X) being the negated multiplicity
//! polynomial -m(X), and m_i(X) = 1 for i ≥ 1).
//! More than one "inner sum" can be created in the case that `k + 2` is higher
//! than the maximum degree supported.
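//!
//! Here is a numerical check of identity (5) for k = 2, reusing the toy-field
//! helpers (`add`, `sub`, `mul`, `inv`) from the first sketch above; the
//! values and numerators are arbitrary, since (5) is a purely algebraic
//! consequence of clearing denominators:
//! ```ignore
//! fn main() {
//!     let beta = 424_242u64;
//!     // Evaluations at a single point: f_0 = t = 9, plus two looked-up values.
//!     let f = [9u64, 11, 9];
//!     // Numerators: m_0 carries the negative sign used for the table term,
//!     // the reads use 1 (the concrete values do not matter for the identity).
//!     let m = [sub(0, 2), 1, 1];
//!     // h = ∑_i m_i / (β + f_i)
//!     let h = (0..3).fold(0, |acc, i| add(acc, mul(m[i], inv(add(beta, f[i])))));
//!     // Left-hand side of (5): h · ∏_i (β + f_i)
//!     let lhs = f.iter().fold(h, |acc, fi| mul(acc, add(beta, *fi)));
//!     // Right-hand side of (5): ∑_i m_i · ∏_{j ≠ i} (β + f_j)
//!     let rhs = (0..3).fold(0, |acc, i| {
//!         let prod = (0..3)
//!             .filter(|j| *j != i)
//!             .fold(m[i], |p, j| mul(p, add(beta, f[j])));
//!         add(acc, prod)
//!     });
//!     assert_eq!(lhs, rhs);
//! }
//! ```
//!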
//! The quotient polynomial, defined at round 3 of the [PlonK
//! protocol](https://eprint.iacr.org/2019/953.pdf), will be something like:
//!
//! ```text
//!         ... + α^i [φ(ω X) - φ(X) - h(X)] + α^(i + 1) (5) + ...
//! t(X) = ------------------------------------------------------
//!                              Z_H(X)
//! ```
//! (here t(X) denotes the quotient polynomial of the PlonK protocol, not the
//! lookup table).
//!
//! `k` can then be seen as the number of lookups we can make per row. The
//! additional cost when we reach the maximum degree supported is to add a new
//! constraint and a new column.
//! For rows with fewer than `k` lookups, the prover will add a dummy value,
//! which will be a value known to be in the table, and the multiplicity must
//! be increased appropriately.
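//!
//! The following self-contained sketch shows how the number of partial-sum
//! columns h_i grows with `k`, assuming the crate's `MAX_SUPPORTED_DEGREE = 8`
//! (i.e. chunks of 6 fractions, as used in `constraint_lookups` below):
//! ```ignore
//! const MAX_SUPPORTED_DEGREE: usize = 8;
//!
//! fn main() {
//!     // Each row contributes k "read" fractions plus one "table" fraction; they
//!     // are split into chunks of (MAX_SUPPORTED_DEGREE - 2) fractions, and each
//!     // chunk gets its own partial-sum column h_i and its own constraint.
//!     let chunk_size = MAX_SUPPORTED_DEGREE - 2;
//!     for k in [1usize, 5, 6, 13] {
//!         let n_fractions = k + 1;
//!         let n_h_columns = (n_fractions + chunk_size - 1) / chunk_size;
//!         println!("k = {k:>2} lookups per row -> {n_h_columns} partial-sum column(s)");
//!     }
//! }
//! ```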
//!
//! To handle more than one table, we will use a table ID and transform the
//! single-value lookup into a vector lookup, using a random combiner.
//! The protocol can also handle vector lookups using the same random combiner:
//! a looked-up value then becomes a tuple of field elements, i.e. a function
//! f_j: H -> F x ... x F, which is transformed into a single function
//! f'_j: H -> F using a random combiner `r`.
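//!
//! A small self-contained sketch of how a vector lookup value is collapsed
//! into a single element (plain integers stand in for field elements, and the
//! constants are arbitrary). The same Horner-style fold appears in
//! `combine_lookups` and in the prover below:
//! ```ignore
//! fn main() {
//!     let r: u128 = 31;        // plays the role of the joint combiner
//!     let table_id: u128 = 3;  // plays the role of the table identifier
//!     let value: [u128; 3] = [10, 20, 30]; // one vector lookup (x_1, x_2, x_3)
//!     // Horner evaluation of x_1 + r·x_2 + r^2·x_3, then one more multiplication
//!     // by r and the addition of the table ID.
//!     let combined = value.iter().rev().fold(0, |acc, x| acc * r + x) * r + table_id;
//!     assert_eq!(combined, table_id + 10 * r + 20 * r * r + 30 * r * r * r);
//! }
//! ```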
//!
//! To summarize, the prover will:
//! - commit to the multiplicities m.
//! - commit to the individual looked-up values f (which include the table t);
//!   these should already be committed as columns of the PlonK protocol.
//! - coin an evaluation point β.
//! - coin a random combiner j (used to aggregate the table ID and concatenate
//!   vector lookups, if any).
//! - commit to the inner sums/lookup terms h.
//! - commit to the running sum φ.
//! - add constraints to the quotient polynomial.
//! - evaluate all polynomials at the evaluation points ζ and ζω (because we
//!   access the "next" row for the accumulator in the quotient polynomial).
use ark_ff::{Field, PrimeField, Zero};
use kimchi::circuits::{
    berkeley_columns::BerkeleyChallengeTerm,
    expr::{ConstantExpr, ConstantTerm, Expr, ExprInner},
};
use std::{collections::BTreeMap, hash::Hash};

use crate::{
    columns::Column,
    expr::{curr_cell, next_cell, E},
    MAX_SUPPORTED_DEGREE,
};

/// Generic structure to represent a (vector) lookup into the table with ID
/// `table_id`.
///
/// The structure represents the individual fraction of the sum described in
/// the Logup protocol (for instance Eq. 8).
///
/// The table ID is added to the random linear combination formed with the
/// values. The combiner for the random linear combination is coined during the
/// proving phase by the prover.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Logup<F, ID: LookupTableID> {
    pub(crate) table_id: ID,
    pub(crate) numerator: F,
    pub(crate) value: Vec<F>,
}

/// Basic operations for logarithmic lookups.
impl<F, ID> Logup<F, ID>
where
    F: Clone,
    ID: LookupTableID,
{
    /// Creates a new Logup
    pub fn new(table_id: ID, numerator: F, value: &[F]) -> Self {
        Self {
            table_id,
            numerator,
            value: value.to_vec(),
        }
    }
}

/// Trait for lookup table variants.
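///
/// Below is a minimal, hypothetical implementer, given for illustration only
/// (`DummyTableID` is not part of the crate); it describes a single fixed
/// 4-bit range-check table:
/// ```ignore
/// #[derive(Debug, Clone, Copy, Hash, PartialEq, Eq, PartialOrd, Ord)]
/// enum DummyTableID {
///     RangeCheck4,
/// }
///
/// impl LookupTableID for DummyTableID {
///     fn to_u32(&self) -> u32 {
///         1
///     }
///     fn from_u32(value: u32) -> Self {
///         assert_eq!(value, 1);
///         Self::RangeCheck4
///     }
///     // The table contents are fixed (a range check), not built at runtime.
///     fn is_fixed(&self) -> bool {
///         true
///     }
///     fn runtime_create_column(&self) -> bool {
///         panic!("not a runtime table")
///     }
///     fn length(&self) -> usize {
///         1 << 4
///     }
///     fn ix_by_value<F: PrimeField>(&self, value: &[F]) -> Option<usize> {
///         // Linear scan: fine for a sketch, not for production code.
///         (0..self.length()).find(|i| F::from(*i as u64) == value[0])
///     }
///     fn all_variants() -> Vec<Self> {
///         vec![Self::RangeCheck4]
///     }
/// }
/// ```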
pub trait LookupTableID:
    Send + Sync + Copy + Hash + Eq + PartialEq + Ord + PartialOrd + core::fmt::Debug
{
    /// Assign a unique ID, as a u32 value
    fn to_u32(&self) -> u32;

    /// Build a value from a u32
    fn from_u32(value: u32) -> Self;

    /// Assign a unique ID to the lookup tables.
    fn to_field<F: Field>(&self) -> F {
        F::from(self.to_u32())
    }

    /// Identify fixed and RAMLookups with a boolean.
    /// This can be used to identify the lookups whose table values are fixed,
    /// like range checks.
    fn is_fixed(&self) -> bool;

    /// If a table is a runtime table, `true` means we should create an
    /// explicit extra column for it to "read" from. `false` means
    /// that this table will be reading from some existing (e.g.
    /// relation) columns, and no extra columns should be added.
    ///
    /// Panics if the argument is a fixed table.
    fn runtime_create_column(&self) -> bool;

    /// Assign a unique ID to the lookup tables, as an expression.
    fn to_constraint<F: Field>(&self) -> E<F> {
        let f = self.to_field();
        let f = ConstantExpr::from(ConstantTerm::Literal(f));
        E::Atom(ExprInner::Constant(f))
    }

    /// Returns the length of each table.
    fn length(&self) -> usize;

    /// Returns None if the table is runtime (and thus the mapping value ->
    /// index is not known at compile time).
    fn ix_by_value<F: PrimeField>(&self, value: &[F]) -> Option<usize>;

    fn all_variants() -> Vec<Self>;
}

/// A table of values that can be used for a lookup, along with the ID for the table.
#[derive(Debug, Clone)]
pub struct LookupTable<F, ID: LookupTableID> {
    /// Table ID corresponding to this table
    pub table_id: ID,
    /// Vector of values inside each entry of the table
    pub entries: Vec<Vec<F>>,
}

/// Represents a witness of one instance of the lookup argument
// IMPROVEME: Possible to index by a generic const?
// The parameter N is the number of functions/looked-up values per row. It is
// used by the PlonK polynomial IOP to compute the number of partial sums.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct LogupWitness<F, ID: LookupTableID> {
    /// A list of functions/looked-up values.
    /// Invariant: for fixed lookup tables, the last value of the vector is the
    /// lookup table t. The lookup table values must have a negative sign.
    /// The values are represented as:
    /// [ [f_{1}(1), ..., f_{1}(ω^(n-1))],
    ///   [f_{2}(1), ..., f_{2}(ω^(n-1))],
    ///   ...
    ///   [f_{m}(1), ..., f_{m}(ω^(n-1))]
    /// ]
    //
    // TODO: for efficiency, as we go through columns and after that row, we
    // should reorganize this. While working on the interpreter, we might
    // change this structure.
    //
    // TODO: for efficiency, we might want to have a single flat fixed-size
    // array
    pub f: Vec<Vec<Logup<F, ID>>>,
    /// The multiplicity polynomials; by convention, this is a vector
    /// of columns, corresponding to the `tail` of `f`. That is,
    /// `m[last] ~ f[last]`.
    pub m: Vec<Vec<F>>,
}

/// Represents the proof of the lookup argument.
/// It is parametrized by the type `T` which can be either:
/// - `Polycomm<G: KimchiCurve>` for the commitments
/// - `F` for the evaluations at ζ (resp. ζω).
// FIXME: We should have a fixed number of m and h. Should we encode that in
// the type?
#[derive(Debug, Clone)]
pub struct LookupProof<T, ID> {
    /// The multiplicity polynomials
    pub(crate) m: BTreeMap<ID, Vec<T>>,
    /// The polynomial keeping the sum of each row
    pub(crate) h: BTreeMap<ID, Vec<T>>,
    /// The "running-sum" over the rows, coined `φ`
    pub(crate) sum: T,
    /// All fixed lookup tables values, indexed by their ID
    pub(crate) fixed_tables: BTreeMap<ID, T>,
}

/// Iterator implementation to abstract the content of the structure.
/// It can be used to iterate over the commitments (resp. the evaluations)
/// without having to look at the inner fields.
impl<'lt, G, ID: LookupTableID> IntoIterator for &'lt LookupProof<G, ID> {
    type Item = &'lt G;
    type IntoIter = std::vec::IntoIter<&'lt G>;

    fn into_iter(self) -> Self::IntoIter {
        let mut iter_contents = vec![];
        // First multiplicities
        self.m.values().for_each(|m| iter_contents.extend(m));
        self.h.values().for_each(|h| iter_contents.extend(h));
        iter_contents.push(&self.sum);
        // Fixed tables
        self.fixed_tables
            .values()
            .for_each(|t| iter_contents.push(t));
        iter_contents.into_iter()
    }
}

/// Compute the following constraint:
/// ```text
///                     lhs
///    |------------------------------------------|
///    |                            denominators  |
///    |                          /--------------\|
///  column * (\prod_{i = 0}^{N} (β + f_{i}(X))) =
///  \sum_{i = 0}^{N} m_{i} * \prod_{j = 0, j \neq i}^{N} (β + f_{j}(X))
///    |       |--------------------------------------------------|
///    |                      Inner part of rhs                   |
///    |                                                          |
///    |                                                         /
///     \                                                       /
///      \                                                     /
///       \---------------------------------------------------/
///                                rhs
/// ```
/// This is because h(X) (the `column` argument) is defined as:
/// ```text
///          n     m_i(X)          n        1             m_0(X)
/// h(X) =   ∑   ----------   =    ∑   ------------  -  ----------
///         i=0  β + f_i(X)       i=1  β + f_i(X)        β + t(X)
/// ```
/// The first form is generic; the second is the concrete one, with f_0 = t,
/// m_0 = m carrying the negative sign (cf. the negative numerator used for the
/// table in `LogupWitness`), and m_i = 1 for i ≠ 0.
/// We will be thinking in the generic form.
///
/// For instance, if N = 2, we have
/// ```text
/// h(X) = m_1(X) / (β + f_1(X)) + m_2(X) / (β + f_{2}(X))
///
///          m_1(X) * (β + f_2(X)) + m_2(X) * (β + f_{1}(X))
///        = ----------------------------------------------
///                   (β + f_2(X)) * (β + f_1(X))
/// ```
/// which is equivalent to
/// ```text
/// h(X) * (β + f_2(X)) * (β + f_1(X)) = m_1(X) * (β + f_2(X)) + m_2(X) * (β + f_{1}(X))
/// ```
/// When we have f_1(X) a looked-up value, t(X) a fixed table and m_2(X) being
/// the multiplicities, we have
/// ```text
/// h(X) * (β + t(X)) * (β + f(X)) = (β + t(X)) + m(X) * (β + f(X))
/// ```
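///
/// A self-contained numerical check of the N = 2 identity above, over a toy
/// prime field (p = 2^31 - 1) standing in for the real scalar field; the
/// values and multiplicities are arbitrary, as the identity is purely
/// algebraic:
/// ```ignore
/// const P: u64 = 2_147_483_647; // toy prime field, stand-in for the scalar field
/// fn add(a: u64, b: u64) -> u64 { (a + b) % P }
/// fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % (P as u128)) as u64 }
/// fn pow(b: u64, e: u64) -> u64 {
///     (0..64).rev().fold(1, |r, i| {
///         let r = mul(r, r);
///         if e >> i & 1 == 1 { mul(r, b) } else { r }
///     })
/// }
/// fn inv(a: u64) -> u64 { pow(a, P - 2) } // Fermat's little theorem
///
/// fn main() {
///     let (beta, f1, f2, m1, m2) = (424_242u64, 11u64, 9u64, 1u64, 7u64);
///     // h = m_1/(β + f_1) + m_2/(β + f_2)
///     let h = add(mul(m1, inv(add(beta, f1))), mul(m2, inv(add(beta, f2))));
///     // h * (β + f_2) * (β + f_1) = m_1 * (β + f_2) + m_2 * (β + f_1)
///     let lhs = mul(mul(h, add(beta, f2)), add(beta, f1));
///     let rhs = add(mul(m1, add(beta, f2)), mul(m2, add(beta, f1)));
///     assert_eq!(lhs, rhs);
/// }
/// ```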
pub fn combine_lookups<F: PrimeField, ID: LookupTableID>(
    column: Column<usize>,
    lookups: Vec<Logup<E<F>, ID>>,
) -> E<F> {
    let joint_combiner = {
        let joint_combiner = ConstantExpr::from(BerkeleyChallengeTerm::JointCombiner);
        E::Atom(ExprInner::Constant(joint_combiner))
    };
    let beta = {
        let beta = ConstantExpr::from(BerkeleyChallengeTerm::Beta);
        E::Atom(ExprInner::Constant(beta))
    };

    // Compute (β + f_{i}(X)) for each i.
    // Note that f_i(X) = table_id + r * x_{1} + r^2 x_{2} + ... r^{N} x_{N}
    let denominators = lookups
        .iter()
        .map(|x| {
            // Compute r * x_{1} + r^2 x_{2} + ... r^{N} x_{N}
            let combined_value = x
                .value
                .iter()
                .rev()
                .fold(E::zero(), |acc, y| acc * joint_combiner.clone() + y.clone())
                * joint_combiner.clone();
            // FIXME: sanity check for the domain, we should consider it in prover.rs.
            // We only support degree-one constraints in the denominator.
            let combined_degree_real = combined_value.degree(1, 0);
            assert!(combined_degree_real <= 1, "Only degrees zero and one are supported in the denominator of the lookup because of the maximum degree supported (8); got degree {combined_degree_real}",);
            // add table id + evaluation point
            beta.clone() + combined_value + x.table_id.to_constraint()
        })
        .collect::<Vec<_>>();

    // Compute `column * (\prod_{i = 0}^{N} (β + f_{i}(X)))`
    let lhs = denominators
        .iter()
        .fold(curr_cell(column), |acc, x| acc * x.clone());

    // Compute `\sum_{i = 0}^{N} m_{i} * \prod_{j = 0, j \neq i}^{N} (β + f_{j}(X))`
    let rhs = lookups
        .into_iter()
        .enumerate()
        .map(|(i, x)| {
            denominators.iter().enumerate().fold(
                // Compute the individual \prod_{j = 0, j \neq i}^{N} (β + f_{j}(X)).
                // This is the inner part of rhs. It multiplies with m_{i}.
                x.numerator,
                |acc, (j, y)| {
                    if i == j {
                        acc
                    } else {
                        acc * y.clone()
                    }
                },
            )
        })
        // Individual sums
        .reduce(|x, y| x + y)
        .unwrap_or(E::zero());

    lhs - rhs
}

/// Build the constraints for the lookup protocol.
/// The constraints are the partial sums and the aggregation of the partial sums.
pub fn constraint_lookups<F: PrimeField, ID: LookupTableID>(
    lookup_reads: &BTreeMap<ID, Vec<Vec<E<F>>>>,
    lookup_writes: &BTreeMap<ID, Vec<Vec<E<F>>>>,
) -> Vec<E<F>> {
    let mut constraints: Vec<E<F>> = vec![];
    let mut lookup_terms_cols: Vec<Column<usize>> = vec![];
    lookup_reads.iter().for_each(|(table_id, reads)| {
        let mut idx_partial_sum = 0;
        let table_id_u32 = table_id.to_u32();

        // FIXME: do not clone
        let mut lookups: Vec<_> = reads
            .clone()
            .into_iter()
            .map(|value| Logup {
                table_id: *table_id,
                numerator: Expr::Atom(ExprInner::Constant(ConstantExpr::from(
                    ConstantTerm::Literal(F::one()),
                ))),
                value,
            })
            .collect();

        if table_id.is_fixed() || table_id.runtime_create_column() {
            let table_lookup = Logup {
                table_id: *table_id,
                numerator: -curr_cell(Column::LookupMultiplicity((table_id_u32, 0))),
                value: vec![curr_cell(Column::LookupFixedTable(table_id_u32))],
            };
            lookups.push(table_lookup);
        } else {
            lookup_writes
                .get(table_id)
                .unwrap_or_else(|| panic!("Lookup writes for table_id {table_id:?} not present"))
                .iter()
                .enumerate()
                .for_each(|(i, write_columns)| {
                    lookups.push(Logup {
                        table_id: *table_id,
                        numerator: -curr_cell(Column::LookupMultiplicity((table_id_u32, i))),
                        value: write_columns.clone(),
                    });
                });
        }

        // We split into chunks of 6 (MAX_SUPPORTED_DEGREE - 2)
        lookups.chunks(MAX_SUPPORTED_DEGREE - 2).for_each(|chunk| {
            let col = Column::LookupPartialSum((table_id_u32, idx_partial_sum));
            lookup_terms_cols.push(col);
            constraints.push(combine_lookups(col, chunk.to_vec()));
            idx_partial_sum += 1;
        });
    });

    // Generic code over the partial sums
    // Compute φ(ωX) - φ(X) - \sum_{i = 1}^{N} h_i(X)
    {
        let constraint =
            next_cell(Column::LookupAggregation) - curr_cell(Column::LookupAggregation);
        let constraint = lookup_terms_cols
            .into_iter()
            .fold(constraint, |acc, col| acc - curr_cell(col));
        constraints.push(constraint);
    }
    constraints
}

pub mod prover {
    use crate::{
        logup::{Logup, LogupWitness, LookupTableID},
        MAX_SUPPORTED_DEGREE,
    };
    use ark_ff::{FftField, Zero};
    use ark_poly::{univariate::DensePolynomial, Evaluations, Radix2EvaluationDomain as D};
    use kimchi::{circuits::domains::EvaluationDomains, curve::KimchiCurve};
    use mina_poseidon::FqSponge;
    use poly_commitment::{
        commitment::{absorb_commitment, PolyComm},
        OpenProof, SRS as _,
    };
    use rayon::iter::{IntoParallelIterator, ParallelIterator};
    use std::collections::BTreeMap;

    /// The structure used by the prover to compute the quotient polynomial.
    /// The structure contains the evaluations of the inner sums, the
    /// multiplicities, the aggregation and the fixed tables, over the domain d8.
    pub struct QuotientPolynomialEnvironment<'a, F: FftField, ID: LookupTableID> {
        /// The evaluations of the partial sums, over d8.
        pub lookup_terms_evals_d8: &'a BTreeMap<ID, Vec<Evaluations<F, D<F>>>>,
        /// The evaluations of the aggregation, over d8.
        pub lookup_aggregation_evals_d8: &'a Evaluations<F, D<F>>,
        /// The evaluations of the multiplicities, over d8, indexed by the table ID.
        pub lookup_counters_evals_d8: &'a BTreeMap<ID, Vec<Evaluations<F, D<F>>>>,
        /// The evaluations of the fixed tables, over d8, indexed by the table ID.
        pub fixed_tables_evals_d8: &'a BTreeMap<ID, Evaluations<F, D<F>>>,
    }

    /// Represents the environment for the logup argument.
    pub struct Env<G: KimchiCurve, ID: LookupTableID> {
        /// The polynomials of the multiplicities, indexed by the table ID.
        pub lookup_counters_poly_d1: BTreeMap<ID, Vec<DensePolynomial<G::ScalarField>>>,
        /// The commitments to the multiplicities, indexed by the table ID.
        pub lookup_counters_comm_d1: BTreeMap<ID, Vec<PolyComm<G>>>,

        /// The polynomials of the inner sums.
        pub lookup_terms_poly_d1: BTreeMap<ID, Vec<DensePolynomial<G::ScalarField>>>,
        /// The commitments of the inner sums.
        pub lookup_terms_comms_d1: BTreeMap<ID, Vec<PolyComm<G>>>,

        /// The aggregation polynomial.
        pub lookup_aggregation_poly_d1: DensePolynomial<G::ScalarField>,
        /// The commitment to the aggregation polynomial.
        pub lookup_aggregation_comm_d1: PolyComm<G>,

        // Evaluations over d8 for the quotient polynomial
        #[allow(clippy::type_complexity)]
        pub lookup_counters_evals_d8:
            BTreeMap<ID, Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>>,
        #[allow(clippy::type_complexity)]
        pub lookup_terms_evals_d8:
            BTreeMap<ID, Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>>,
        pub lookup_aggregation_evals_d8: Evaluations<G::ScalarField, D<G::ScalarField>>,

        pub fixed_lookup_tables_poly_d1: BTreeMap<ID, DensePolynomial<G::ScalarField>>,
        pub fixed_lookup_tables_comms_d1: BTreeMap<ID, PolyComm<G>>,
        pub fixed_lookup_tables_evals_d8:
            BTreeMap<ID, Evaluations<G::ScalarField, D<G::ScalarField>>>,

        /// The combiner used for vector lookups
        pub joint_combiner: G::ScalarField,

        /// The evaluation point used for the lookup polynomials.
        pub beta: G::ScalarField,
    }

    impl<G: KimchiCurve, ID: LookupTableID> Env<G, ID> {
        /// Create an environment for the prover to create a proof for the Logup protocol.
        /// The protocol assumes that the individual lookup terms are
        /// committed as part of the columns.
        /// Therefore, it only focuses on committing to the "grand
        /// product sum" and the "row-accumulated" values.
        pub fn create<
            OpeningProof: OpenProof<G>,
            Sponge: FqSponge<G::BaseField, G, G::ScalarField>,
        >(
            lookups: BTreeMap<ID, LogupWitness<G::ScalarField, ID>>,
            domain: EvaluationDomains<G::ScalarField>,
            fq_sponge: &mut Sponge,
            srs: &OpeningProof::SRS,
        ) -> Self
        where
            OpeningProof::SRS: Sync,
        {
            // Use parallel iterators where possible.
            // Polynomial m(X)
            // FIXME/IMPROVEME: m(X) is only for fixed tables
            let lookup_counters_evals_d1: BTreeMap<
                ID,
                Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>,
            > = {
                lookups
                    .clone()
                    .into_par_iter()
                    .map(|(table_id, logup_witness)| {
                        (
                            table_id,
                            logup_witness.m.into_iter().map(|m|
                                Evaluations::<G::ScalarField, D<G::ScalarField>>::from_vec_and_domain(
                                    m.to_vec(),
                                    domain.d1,
                                )
                            ).collect()
                        )
                    })
                    .collect()
            };

            let lookup_counters_poly_d1: BTreeMap<ID, Vec<DensePolynomial<G::ScalarField>>> =
                (&lookup_counters_evals_d1)
                    .into_par_iter()
                    .map(|(id, evals)| {
                        (*id, evals.iter().map(|e| e.interpolate_by_ref()).collect())
                    })
                    .collect();

            let lookup_counters_evals_d8: BTreeMap<
                ID,
                Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>,
            > = (&lookup_counters_poly_d1)
                .into_par_iter()
                .map(|(id, lookup)| {
                    (
                        *id,
                        lookup
                            .iter()
                            .map(|l| l.evaluate_over_domain_by_ref(domain.d8))
                            .collect(),
                    )
                })
                .collect();

            let lookup_counters_comm_d1: BTreeMap<ID, Vec<PolyComm<G>>> =
                (&lookup_counters_evals_d1)
                    .into_par_iter()
                    .map(|(id, polys)| {
                        (
                            *id,
                            polys
                                .iter()
                                .map(|poly| srs.commit_evaluations_non_hiding(domain.d1, poly))
                                .collect(),
                        )
                    })
                    .collect();

            lookup_counters_comm_d1.values().for_each(|comms| {
                comms
                    .iter()
                    .for_each(|comm| absorb_commitment(fq_sponge, comm))
            });
            // -- end of m(X)

            // -- start computing the row sums h(X)
            // It will be used to compute the running sum in lookup_aggregation
            // Coin a combiner to perform vector lookups.
            // The row sums h are defined as
            //             n          1                  m(ω^i)
            //  h(ω^i)  =  ∑  ------------------  -  --------------
            //            j=0  (β + f_{j}(ω^i))       (β + t(ω^i))
            let vector_lookup_combiner = fq_sponge.challenge();

            // Coin an evaluation point for the rational functions
            let beta = fq_sponge.challenge();

            //
            // @volhovm TODO make sure this is h_i. It looks like f_i for fixed tables.
            // It is an actual fixed column containing "fixed lookup data".
            //
            // Contains the evaluations of the h_i. We divide the looked-up
            // values into chunks of (MAX_SUPPORTED_DEGREE - 2).
            let mut fixed_lookup_tables: BTreeMap<ID, Vec<G::ScalarField>> = BTreeMap::new();

            // @volhovm TODO These are h_i related chunks!
            //
            // We keep the lookup terms in a map, to process them in order in the constraints.
            let mut lookup_terms_map: BTreeMap<ID, Vec<Vec<G::ScalarField>>> = BTreeMap::new();

            // Where are the commitments to the last element made? The first "read" columns we
            // don't commit to, right?..

            lookups.into_iter().for_each(|(table_id, logup_witness)| {
                let LogupWitness { f, m: _ } = logup_witness;
                // The number of functions to look up, including the fixed table.
                let n = f.len();
                let n_partial_sums = if n % (MAX_SUPPORTED_DEGREE - 2) == 0 {
                    n / (MAX_SUPPORTED_DEGREE - 2)
                } else {
                    n / (MAX_SUPPORTED_DEGREE - 2) + 1
                };

                let mut partial_sums =
                    vec![
                        Vec::<G::ScalarField>::with_capacity(domain.d1.size as usize);
                        n_partial_sums
                    ];

                // We first compute the denominators of all f_i and t. We gather them in
                // a vector to perform a batch inversion.
                let mut denominators = Vec::with_capacity(n * domain.d1.size as usize);
                // Iterate over the rows
                for j in 0..domain.d1.size {
                    // Iterate over individual columns (i.e. f_i and t)
                    for (i, f_i) in f.iter().enumerate() {
                        let Logup {
                            numerator: _,
                            table_id,
                            value,
                        } = &f_i[j as usize];
                        // Compute x_{1} + r x_{2} + ... r^{N-1} x_{N}
                        // This is what we actually put into the `fixed_lookup_tables`.
                        let combined_value_pow0: G::ScalarField =
                            value.iter().rev().fold(G::ScalarField::zero(), |acc, y| {
                                acc * vector_lookup_combiner + y
                            });
                        // Compute r * x_{1} + r^2 x_{2} + ... r^{N} x_{N}
                        let combined_value: G::ScalarField =
                            combined_value_pow0 * vector_lookup_combiner;
                        // add table id
                        let combined_value =
                            combined_value + table_id.to_field::<G::ScalarField>();

                        // If this is the last element and the table is a fixed (or
                        // runtime-created) lookup table, we keep the *combined*
                        // value of the table.
                        //
                        // Otherwise we're processing a runtime table
                        // with explicit writes, so we don't need to
                        // create any extra columns.
                        if (table_id.is_fixed() || table_id.runtime_create_column()) && i == (n - 1)
                        {
                            fixed_lookup_tables
                                .entry(*table_id)
                                .or_insert_with(Vec::new)
                                .push(combined_value_pow0);
                        }

                        // β + a_{i}
                        let lookup_denominator = beta + combined_value;
                        denominators.push(lookup_denominator);
                    }
                }
                assert!(denominators.len() == n * domain.d1.size as usize);

                // Given a vector {β + a_i}, computes a vector {1/(β + a_i)}
                ark_ff::fields::batch_inversion(&mut denominators);

                // Evals is the sum over the individual columns for each row
                let mut denominator_index = 0;

                // We only need to add the numerator now
                for j in 0..domain.d1.size {
                    let mut partial_sum_idx = 0;
                    let mut row_acc = G::ScalarField::zero();
                    for (i, f_i) in f.iter().enumerate() {
                        let Logup {
                            numerator,
                            table_id: _,
                            value: _,
                        } = &f_i[j as usize];
                        row_acc += *numerator * denominators[denominator_index];
                        denominator_index += 1;
                        // We split into chunks of (MAX_SUPPORTED_DEGREE - 2).
                        // After saving the current partial sum, we reset the
                        // accumulator.
                        if (i + 1) % (MAX_SUPPORTED_DEGREE - 2) == 0 {
                            partial_sums[partial_sum_idx].push(row_acc);
                            row_acc = G::ScalarField::zero();
                            partial_sum_idx += 1;
                        }
                    }
                    // Whatever is left over in `row_acc` at the end of the iteration is
                    // also written into `partial_sums`. This is only needed when
                    // `n % (MAX_SUPPORTED_DEGREE - 2) != 0`, i.e. when the push above
                    // was not triggered for the last chunk, so we have this wrapping-up
                    // push instead.
                    if n % (MAX_SUPPORTED_DEGREE - 2) != 0 {
                        partial_sums[partial_sum_idx].push(row_acc);
                    }
                }
                lookup_terms_map.insert(table_id, partial_sums);
            });

            // Sanity check to verify that the number of evaluations is correct
            lookup_terms_map.values().for_each(|evals| {
                evals
                    .iter()
                    .for_each(|eval| assert_eq!(eval.len(), domain.d1.size as usize))
            });

            // Sanity check to verify that we have all the evaluations for the fixed lookup tables
            fixed_lookup_tables
                .values()
                .for_each(|evals| assert_eq!(evals.len(), domain.d1.size as usize));

            #[allow(clippy::type_complexity)]
            let lookup_terms_evals_d1: BTreeMap<
                ID,
                Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>,
            > = (&lookup_terms_map)
                .into_par_iter()
                .map(|(id, lookup_terms)| {
                    let lookup_terms = lookup_terms
                        .into_par_iter()
                        .map(|lookup_term| {
                            Evaluations::<G::ScalarField, D<G::ScalarField>>::from_vec_and_domain(
                                lookup_term.to_vec(),
                                domain.d1,
                            )
                        })
                        .collect::<Vec<_>>();
                    (*id, lookup_terms)
                })
                .collect();

            let fixed_lookup_tables_evals_d1: BTreeMap<
                ID,
                Evaluations<G::ScalarField, D<G::ScalarField>>,
            > = fixed_lookup_tables
                .into_iter()
                .map(|(id, evals)| {
                    (
                        id,
                        Evaluations::<G::ScalarField, D<G::ScalarField>>::from_vec_and_domain(
                            evals, domain.d1,
                        ),
                    )
                })
                .collect();

            let lookup_terms_poly_d1: BTreeMap<ID, Vec<DensePolynomial<G::ScalarField>>> =
                (&lookup_terms_evals_d1)
                    .into_par_iter()
                    .map(|(id, lookup_terms)| {
                        let lookup_terms: Vec<DensePolynomial<G::ScalarField>> = lookup_terms
                            .into_par_iter()
                            .map(|evals| evals.interpolate_by_ref())
                            .collect();
                        (*id, lookup_terms)
                    })
                    .collect();

            let fixed_lookup_tables_poly_d1: BTreeMap<ID, DensePolynomial<G::ScalarField>> =
                (&fixed_lookup_tables_evals_d1)
                    .into_par_iter()
                    .map(|(id, evals)| (*id, evals.interpolate_by_ref()))
                    .collect();
            #[allow(clippy::type_complexity)]
            let lookup_terms_evals_d8: BTreeMap<
                ID,
                Vec<Evaluations<G::ScalarField, D<G::ScalarField>>>,
            > = (&lookup_terms_poly_d1)
                .into_par_iter()
                .map(|(id, lookup_terms)| {
                    let lookup_terms: Vec<Evaluations<G::ScalarField, D<G::ScalarField>>> =
                        lookup_terms
                            .into_par_iter()
                            .map(|lookup_term| lookup_term.evaluate_over_domain_by_ref(domain.d8))
                            .collect();
                    (*id, lookup_terms)
                })
                .collect();

            let fixed_lookup_tables_evals_d8: BTreeMap<
                ID,
                Evaluations<G::ScalarField, D<G::ScalarField>>,
            > = (&fixed_lookup_tables_poly_d1)
                .into_par_iter()
                .map(|(id, poly)| (*id, poly.evaluate_over_domain_by_ref(domain.d8)))
                .collect();

            let lookup_terms_comms_d1: BTreeMap<ID, Vec<PolyComm<G>>> = lookup_terms_evals_d1
                .iter()
                .map(|(id, lookup_terms)| {
                    let lookup_terms = lookup_terms
                        .into_par_iter()
                        .map(|lookup_term| {
                            srs.commit_evaluations_non_hiding(domain.d1, lookup_term)
                        })
                        .collect();
                    (*id, lookup_terms)
                })
                .collect();

            let fixed_lookup_tables_comms_d1: BTreeMap<ID, PolyComm<G>> =
                (&fixed_lookup_tables_evals_d1)
                    .into_par_iter()
                    .map(|(id, evals)| (*id, srs.commit_evaluations_non_hiding(domain.d1, evals)))
                    .collect();

            lookup_terms_comms_d1.values().for_each(|comms| {
                comms
                    .iter()
                    .for_each(|comm| absorb_commitment(fq_sponge, comm))
            });

            fixed_lookup_tables_comms_d1
                .values()
                .for_each(|comm| absorb_commitment(fq_sponge, comm));
            // -- end computing the row sums h

            // -- start computing the running sum in lookup_aggregation
            // The running sum, φ, is defined recursively over the subgroup as follows:
            // - φ(1) = 0
            // - φ(ω^{j + 1}) = φ(ω^j)
            //                  + \sum_{i = 1}^{n} (1 / (β + f_i(ω^j)))
            //                  - (m(ω^j) / (β + t(ω^j)))
            // - φ(ω^n) = 0
            let lookup_aggregation_evals_d1 = {
                {
                    for table_id in ID::all_variants().into_iter() {
                        let mut acc = G::ScalarField::zero();
                        for i in 0..domain.d1.size as usize {
                            // φ(1) = 0
                            let lookup_terms = lookup_terms_evals_d1.get(&table_id).unwrap();
                            acc = lookup_terms.iter().fold(acc, |acc, lte| acc + lte[i]);
                        }
                        // Sanity check to verify that the accumulator ends up being zero.
                        assert_eq!(
                            acc,
                            G::ScalarField::zero(),
                            "Logup accumulator for table {table_id:?} must be zero"
                        );
                    }
                }
                let mut evals = Vec::with_capacity(domain.d1.size as usize);
                {
                    let mut acc = G::ScalarField::zero();
                    for i in 0..domain.d1.size as usize {
                        // φ(1) = 0
                        evals.push(acc);
                        lookup_terms_evals_d1.iter().for_each(|(_, lookup_terms)| {
                            acc = lookup_terms.iter().fold(acc, |acc, lte| acc + lte[i]);
                        })
                    }
                    // Sanity check to verify that the accumulator ends up being zero.
                    assert_eq!(
                        acc,
                        G::ScalarField::zero(),
                        "Logup accumulator must be zero"
                    );
                }
                Evaluations::<G::ScalarField, D<G::ScalarField>>::from_vec_and_domain(
                    evals, domain.d1,
                )
            };

            let lookup_aggregation_poly_d1 = lookup_aggregation_evals_d1.interpolate_by_ref();

            let lookup_aggregation_evals_d8 =
                lookup_aggregation_poly_d1.evaluate_over_domain_by_ref(domain.d8);

            let lookup_aggregation_comm_d1 =
                srs.commit_evaluations_non_hiding(domain.d1, &lookup_aggregation_evals_d1);

            absorb_commitment(fq_sponge, &lookup_aggregation_comm_d1);
            Self {
                lookup_counters_poly_d1,
                lookup_counters_comm_d1,

                lookup_terms_poly_d1,
                lookup_terms_comms_d1,

                lookup_aggregation_poly_d1,
                lookup_aggregation_comm_d1,

                lookup_counters_evals_d8,
                lookup_terms_evals_d8,
                lookup_aggregation_evals_d8,

                fixed_lookup_tables_poly_d1,
                fixed_lookup_tables_comms_d1,
                fixed_lookup_tables_evals_d8,

                joint_combiner: vector_lookup_combiner,
                beta,
            }
        }
    }
958}