RFC: Plookup in kimchi

In 2020, plookup showed how to create lookup proofs: proofs that some witness values are part of a lookup table. Two years later, an independent team published plonkup, showing how to integrate Plookup into Plonk.

This document specifies how we integrate plookup in kimchi. It assumes that the reader understands the basics behind plookup.

Overview

We integrate plookup in kimchi with the following differences:

  • we snake-ify the sorted table instead of wrapping it around (see later)
  • we allow fixed-ahead-of-time linear combinations of columns of the queries we make
  • we only use a single table (XOR) at the moment of this writing
  • we allow several lookups (or queries) to be performed within the same row
  • zero-knowledgeness is added in a specific way (see later)

The rest of this document explains the protocol in more detail.

Recap on the grand product argument of plookup

As per the Plookup paper, the prover will have to compute three vectors:

  • $f$, the (secret) query vector, containing the witness values that the prover wants to prove are part of the lookup table.
  • $t$, the (public) lookup table.
  • $s$, the (secret) concatenation of $f$ and $t$, sorted by $t$ (where elements are listed in the order they are listed in $t$).

Essentially, plookup proves that all the elements in $f$ are indeed in the lookup table $t$ if and only if the following multisets are equal:

  • $\{(1+\beta)f, \text{diff}(t)\}$
  • $\text{diff}(\text{sorted}(f, t))$

where $\text{diff}$ is a new multiset derived by applying a "randomized difference" between every successive pair of a vector. For example:

  • $f = \{5, 4, 1, 5\}$
  • $t = \{1, 4, 5\}$
  • $\{(1+\beta)f, \text{diff}(t)\} = \{(1+\beta)5, (1+\beta)4, (1+\beta)1, (1+\beta)5, 1+\beta \cdot 4, 4+\beta \cdot 5\}$
  • $\text{diff}(\text{sorted}(f, t)) = \{1+\beta \cdot 1, 1+\beta \cdot 4, 4+\beta \cdot 4, 4+\beta \cdot 5, 5+\beta \cdot 5, 5+\beta \cdot 5\}$

Note: This assumes that the lookup table is a single column. You will see in the next section how to address lookup tables with more than one column.
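
A small Python sketch can make the "randomized difference" concrete. This is not kimchi code: plain integers stand in for field elements, and `beta` is a fixed value standing in for the verifier's random challenge. It builds both multisets for the example above and checks that they match.

```python
from collections import Counter

def diff(v, beta):
    # "randomized difference" of successive pairs: v[i] + beta * v[i+1]
    return [v[i] + beta * v[i + 1] for i in range(len(v) - 1)]

def sorted_by_t(f, t):
    # concatenation of f and t, ordered following the elements of t
    order = {x: i for i, x in enumerate(t)}
    return sorted(f + t, key=lambda x: order[x])

f = [5, 4, 1, 5]
t = [1, 4, 5]
beta = 13  # stand-in for a random challenge

left = Counter([(1 + beta) * x for x in f] + diff(t, beta))
right = Counter(diff(sorted_by_t(f, t), beta))
assert left == right  # holds because every element of f is in t
```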

The equality between the multisets can be proved with the permutation argument of plonk, which would look like enforcing constraints on the following accumulator:

  • init: $\mathsf{acc}_0 = 1$
  • final: $\mathsf{acc}_n = 1$
  • for every $0 < i \leq n$:

$$
\mathsf{acc}_i = \mathsf{acc}_{i-1} \cdot \frac{(\gamma + (1+\beta) f_{i-1}) \cdot (\gamma + t_{i-1} + \beta t_i)}{\gamma + s_{i-1} + \beta s_{i}}
$$

Note that the plookup paper uses a slightly different equation to make the proof work. I believe the proof would work with the above equation, but for simplicity let's just use the equation published in plookup:

$$
\mathsf{acc}_i = \mathsf{acc}_{i-1} \cdot \frac{(1+\beta) \cdot (\gamma + f_{i-1}) \cdot (\gamma(1+\beta) + t_{i-1} + \beta t_i)}{\gamma(1+\beta) + s_{i-1} + \beta s_{i}}
$$

Note: in plookup $s$ is too large, and so needs to be split into multiple vectors to enforce the constraint at every $i$. We ignore this for now.
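
To see why such an accumulator can start and end at $1$, here is a toy Python check of the underlying product identity, reusing the example values from above and ignoring the splitting of $s$. This is only a sketch: a small prime stands in for kimchi's field, and `beta`/`gamma` are fixed stand-ins for the verifier's challenges.

```python
P = 2**31 - 1  # a small prime, standing in for the proof system's field

def prod(xs):
    acc = 1
    for x in xs:
        acc = acc * x % P
    return acc

f = [5, 4, 1, 5]
t = [1, 4, 5]
s = [1, 1, 4, 4, 5, 5, 5]  # sorted(f, t), as in the previous sketch
beta, gamma = 13, 29       # stand-ins for the verifier's challenges

numerator = prod([(1 + beta) * (gamma + fi) for fi in f]) \
    * prod([gamma * (1 + beta) + t[i] + beta * t[i + 1] for i in range(len(t) - 1)])
denominator = prod([gamma * (1 + beta) + s[i] + beta * s[i + 1] for i in range(len(s) - 1)])

# the accumulator ends at 1 exactly when these two products agree
assert numerator % P == denominator % P
```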

Lookup tables

Kimchi uses a single lookup table at the moment of this writing: the XOR table. The XOR table for values of 1 bit is the following:

| l | r | o |
| :---: | :---: | :---: |
| 1 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 1 | 0 |
| 0 | 0 | 0 |

Kimchi, on the other hand, uses the XOR table for values of 4 bits, which has $2^8$ entries.

Note: the (0, 0, 0) entry is at the very end on purpose (as it will be used as dummy entry for rows of the witness that don’t care about lookups).
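
For illustration, here is one way such a table could be generated; `xor_table` is a hypothetical helper, not kimchi's actual table-construction code.

```python
def xor_table(bits=4):
    # all (l, r, l ^ r) rows for `bits`-bit values: 2^(2 * bits) entries
    rows = [(l, r, l ^ r) for l in range(1 << bits) for r in range(1 << bits)]
    # move the (0, 0, 0) row to the very end so it can double as the dummy entry
    rows.remove((0, 0, 0))
    rows.append((0, 0, 0))
    return rows

table = xor_table(4)
assert len(table) == 2**8
assert table[-1] == (0, 0, 0)
```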

Querying the table

The plookup paper handles a vector of lookups $f$ which we do not have. So the first step is to create such a table from the witness columns (or registers). To do this, we define the following objects:

  • a query tells us what registers, in what order, and scaled by how much, are part of a query
  • a query selector tells us which rows are using the query. It is pretty much the same as a gate selector.

Let’s go over the first item in this section.

For example, the following query tells us that we want to check if $r_0 \oplus r_2 = 2 \cdot r_1$:

| l | r | o |
| :---: | :---: | :---: |
| 1, r0 | 1, r2 | 2, r1 |

The grand product argument for the lookup constraint will look like this at this point:

$$
\mathsf{acc}_i = \mathsf{acc}_{i-1} \cdot \frac{(1+\beta) \cdot (\gamma + w_0(g^i) + j \cdot w_2(g^i) + j^2 \cdot 2 \cdot w_1(g^i)) \cdot (\gamma(1+\beta) + t_{i-1} + \beta t_i)}{\gamma(1+\beta) + s_{i-1} + \beta s_{i}}
$$
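
As a toy illustration of the combined query term in the numerator above: the three columns of the query are collapsed into a single value using powers of the joint combiner $j$. The helper below is hypothetical and uses plain integers instead of field elements.

```python
def combine_query(query, registers, j):
    # query = per-column (coefficient, register index) pairs,
    # e.g. the (1, r0), (1, r2), (2, r1) query above
    return sum(coeff * registers[reg] * j**power
               for power, (coeff, reg) in enumerate(query))

registers = {0: 0b1010, 1: 0b0011, 2: 0b1100}  # made-up witness values
query = [(1, 0), (1, 2), (2, 1)]               # l = r0, r = r2, o = 2 * r1
value = combine_query(query, registers, j=7)
# the prover must show that `value` equals some combined row of the XOR table
```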

Not all rows need to perform queries into a lookup table. We will use a query selector in the next section to make the constraints work with this in mind.

Query selector

The associated query selector tells us on which rows the query into the XOR lookup table occurs.

| row | query selector |
| :---: | :---: |
| 0 | 1 |
| 1 | 0 |

Both the (XOR) lookup table and the query are built-ins in kimchi. The query selector is derived from the circuit at setup time. Currently only the ChaCha gates make use of the lookups.

The grand product argument for the lookup constraint looks like this now:

$$
\mathsf{acc}_i = \mathsf{acc}_{i-1} \cdot \frac{(1+\beta) \cdot \mathsf{query} \cdot (\gamma(1+\beta) + t_{i-1} + \beta t_i)}{\gamma(1+\beta) + s_{i-1} + \beta s_{i}}
$$

where $\mathsf{query}$ is constructed so that a dummy query ($0 \oplus 0 = 0$) is used on rows that don't have a query:

$$
\begin{aligned}
\mathsf{query} = \ & \mathsf{selector} \cdot (\gamma + w_0(g^i) + j \cdot w_2(g^i) + j^2 \cdot 2 \cdot w_1(g^i)) \\
+ \ & (1 - \mathsf{selector}) \cdot (\gamma + 0 + j \cdot 0 + j^2 \cdot 0)
\end{aligned}
$$
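
A minimal sketch of that blend, assuming toy integer values (`combined` plays the role of $w_0(g^i) + j \cdot w_2(g^i) + j^2 \cdot 2 \cdot w_1(g^i)$):

```python
def query_term(selector, combined, gamma, j):
    # on rows where the selector is 0, the term collapses to the dummy query
    real = gamma + combined
    dummy = gamma + 0 + j * 0 + j**2 * 0
    return selector * real + (1 - selector) * dummy

gamma, j = 29, 7
assert query_term(1, 123, gamma, j) == gamma + 123  # row with a real query
assert query_term(0, 123, gamma, j) == gamma        # row without a query
```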

Queries, not query

Since we allow multiple queries per row, we define multiple queries, where each query is associated with a lookup selector.

At the moment of this writing, the ChaCha gates all perform $4$ queries in a row. Thus, $4$ is trivially the largest number of queries that happen in a row.

Important: to make constraints work, this means that each row must make 4 queries. (Potentially some or all of them are dummy queries.)

For example, the ChaCha0, ChaCha1, and ChaCha2 gates will apply the following 4 XOR queries on the current and following rows:

| l | r | o | - | l | r | o | - | l | r | o | - | l | r | o |
| :---: | :---: | :---: | --- | :---: | :---: | :---: | --- | :---: | :---: | :---: | --- | :---: | :---: | :---: |
| 1, r3 | 1, r7 | 1, r11 | - | 1, r4 | 1, r8 | 1, r12 | - | 1, r5 | 1, r9 | 1, r13 | - | 1, r6 | 1, r10 | 1, r14 |

which you can understand as checking, for the current and following row, that

  • $r_3 \oplus r_7 = r_{11}$
  • $r_4 \oplus r_8 = r_{12}$
  • $r_5 \oplus r_9 = r_{13}$
  • $r_6 \oplus r_{10} = r_{14}$

The ChaChaFinal also performs $4$ (somewhat similar) queries in the XOR lookup table. In total this is 8 different queries that could be associated to 8 selector polynomials.

Grouping queries by queries pattern

Associating each query with a selector polynomial is not necessarily efficient. To summarize:

  • the ChaCha0, ChaCha1, and ChaCha2 gates make $4$ queries into the XOR table
  • the ChaChaFinal gate makes $4$ different queries into the XOR table

Using the previous section's method, we'd have to use $8$ different lookup selector polynomials, one for each of the $8$ different queries. Since there's only $2$ use-cases, we can simply group them by queries patterns to reduce the number of lookup selector polynomials to $2$.

The grand product argument for the lookup constraint looks like this now:

$$
\mathsf{acc}_i = \mathsf{acc}_{i-1} \cdot \frac{(1+\beta)^4 \cdot \mathsf{query} \cdot (\gamma(1+\beta) + t_{i-1} + \beta t_i)}{(\gamma(1+\beta) + s_{i-1} + \beta s_{i}) \cdot (\gamma(1+\beta) + s_{n+i-1} + \beta s_{n+i}) \cdots}
$$

where $\mathsf{query}$ is constructed as:

$$
\begin{aligned}
\mathsf{query} = \ & \mathsf{selector}_1 \cdot \mathsf{pattern}_1 \\
+ \ & \mathsf{selector}_2 \cdot \mathsf{pattern}_2 \\
+ \ & (1 - \mathsf{selector}_1 - \mathsf{selector}_2) \cdot (\gamma + 0 + j \cdot 0 + j^2 \cdot 0)^4
\end{aligned}
$$

where, for example, the first pattern for the ChaCha0, ChaCha1, and ChaCha2 gates looks like this:

$$
\begin{aligned}
\mathsf{pattern}_1 = \ & (\gamma + w_3(g^i) + j \cdot w_7(g^i) + j^2 \cdot w_{11}(g^i)) \\
\cdot \ & (\gamma + w_4(g^i) + j \cdot w_8(g^i) + j^2 \cdot w_{12}(g^i)) \\
\cdot \ & (\gamma + w_5(g^i) + j \cdot w_9(g^i) + j^2 \cdot w_{13}(g^i)) \\
\cdot \ & (\gamma + w_6(g^i) + j \cdot w_{10}(g^i) + j^2 \cdot w_{14}(g^i))
\end{aligned}
$$

Note:

  • there’s now 4 dummy queries, and they only appear when none of the lookup selectors are active
  • if a pattern uses less than 4 queries, they’d have to pad their queries with dummy queries as well
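
Here is a toy sketch of this grouping: each pattern is the product of its four combined query factors, and the dummy term only kicks in when no lookup selector is active on the row. The values are made up and the helpers are hypothetical.

```python
def pattern(combined_queries, gamma):
    # product of the 4 combined query factors of a pattern
    result = 1
    for q in combined_queries:
        result *= gamma + q
    return result

def row_query_term(selectors, patterns, gamma, j):
    dummy = (gamma + 0 + j * 0 + j**2 * 0) ** 4
    return sum(s * p for s, p in zip(selectors, patterns)) \
        + (1 - sum(selectors)) * dummy

gamma, j = 29, 7
p1 = pattern([11, 22, 33, 44], gamma)  # e.g. the ChaCha0/1/2 pattern
p2 = pattern([55, 66, 77, 88], gamma)  # e.g. the ChaChaFinal pattern
assert row_query_term([1, 0], [p1, p2], gamma, j) == p1        # a ChaCha row
assert row_query_term([0, 0], [p1, p2], gamma, j) == gamma**4  # no lookup
```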

Back to the grand product argument

There are two things that we haven’t touched on:

  • The vector $t$ representing the combined lookup table (after its columns have been combined with a joint combiner $j$). The non-combined lookup table is fixed at setup time and derived based on the lookup tables used in the circuit (for now only one, the XOR lookup table, can be used in the circuit).
  • The vector $s$ representing the sorted multiset of both the queries and the lookup table. This is created by the prover and sent as a commitment to the verifier.

The first vector $t$ is quite straightforward to think about:

  • if it is smaller than the domain (of size $n$), then we can repeat the last entry enough times to make the table of size $n$.
  • if it is larger than the domain, then we can either increase the domain or split the vector in two (or more) vectors. This is most likely what we will have to do to support multiple lookup tables later.
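
A sketch of the first case, assuming a hypothetical `pad_table` helper: the last entry of the combined table is simply repeated until the vector has length $n$.

```python
def pad_table(t, n):
    # repeat the last entry of the combined table until it has length n
    assert len(t) <= n, "a larger table needs a bigger domain or a split"
    return t + [t[-1]] * (n - len(t))

combined_t = [3, 1, 4, 1, 5]  # made-up combined table values
padded = pad_table(combined_t, 8)
assert len(padded) == 8 and padded[5:] == [5, 5, 5]
```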

What about the second vector?

The sorted vector $s$

The second vector $s$ is of size

$$
n \cdot |\text{queries}| + |\text{lookup table}|
$$

That is, it contains the $n$ elements of each query vector (the actual values being looked up, after being combined with the joint combinator; that's $4$ per row), as well as the elements of our lookup table (after being combined as well).

Because the vector $s$ is larger than the domain size $n$, it is split into several vectors of size $n$. Specifically, in the plonkup paper, it is split into the two halves of $s$ (which are then interpolated as $h_1$ and $h_2$).

Since you must compute the difference of every contiguous pair, the last element of the first half is replicated as the first element of the second half ($s_{n-1} = s_n$), and a separate constraint enforces that continuity on the interpolated polynomials $h_1$ and $h_2$:

$$
L_{n-1}(x) \cdot (h_1(x) - h_2(x \cdot g)) = 0
$$

which is equivalent to checking that

$$
h_1(g^{n-1}) = h_2(1)
$$
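
In vector form (before interpolation), this plonkup-style split with its replicated boundary element could look like the following sketch; it is an illustration, not kimchi's code.

```python
def split_with_overlap(s, n):
    # h2 starts with the last element of h1, so no contiguous pair of s is
    # lost when the grand product looks at each half separately
    h1 = s[:n]
    h2 = s[n - 1:]
    assert len(h2) <= n, "s too large for a single split"
    return h1, h2

s = [1, 1, 4, 4, 5, 5, 5]
h1, h2 = split_with_overlap(s, 4)
assert h1[-1] == h2[0]  # the continuity condition h1(g^{n-1}) = h2(1)
```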

The sorted vector $s$ in kimchi

Since this vector is known only by the prover, and is evaluated as part of the protocol, zero-knowledge must be added to the polynomial. To do this in kimchi, we use the same technique as with the other prover polynomials: we randomize the last evaluations (or rows, on the domain) of the polynomial.

This means two things for the lookup grand product argument:

  1. we cannot use the wrap-around trick to make sure that the list is split in two correctly (enforced by $L_{n-1}(x) \cdot (h_1(x) - h_2(x \cdot g)) = 0$, which is equivalent to $h_1(g^{n-1}) = h_2(1)$, in the plookup paper)
  2. we have even less space to store an entire query vector. This is actually fine, as the witness also has some zero-knowledge rows at the end that should not be part of the queries anyway.

The first problem can be solved in two ways:

  • Zig-zag technique. By reorganizing $s$ to alternate its values between the columns. For example, $h_1 = (s_0, s_2, s_4, \ldots)$ and $h_2 = (s_1, s_3, s_5, \ldots)$, so that you can simply write the denominator of the grand product argument as $(\gamma(1+\beta) + h_1(x) + \beta h_2(x)) \cdot (\gamma(1+\beta) + h_2(x) + \beta h_1(x \cdot g))$. This is what the plonkup paper does.
  • Snake technique. By reorganizing $s$ as a snake. This is what is done in kimchi currently.

The snake technique rearranges $s$ into the following shape:

    _   _
 | | | | |
 | | | | |
 |_| |_| |

so that the denominator becomes the following equation:

$$
(\gamma(1+\beta) + h_1(x) + \beta h_1(x \cdot g)) \cdot (\gamma(1+\beta) + h_2(x \cdot g) + \beta h_2(x))
$$

and the snake doing a U-turn is constrained via something like

$$
L_{n-1}(x) \cdot (h_1(x) - h_2(x)) = 0
$$

If there’s an (because the table is very large, for example), then you’d have something like:

with the added U-turn constraint:

$$
L_{0}(x) \cdot (h_2(x) - h_3(x)) = 0
$$
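
A toy sketch of the snake layout on plain vectors: chunks of size $n$ overlap by one element, and every other chunk is stored reversed so that the shared value sits on the U-turn row of both columns. The helper is hypothetical.

```python
def snake_split(s, n):
    # cut s into chunks of size n that overlap by one element; odd chunks are
    # stored reversed so the shared value lands on the same row in both columns
    chunks, start = [], 0
    while True:
        chunk = s[start:start + n]
        chunk += [chunk[-1]] * (n - len(chunk))  # pad a short final chunk
        if len(chunks) % 2 == 1:
            chunk = chunk[::-1]                  # odd columns run bottom-up
        chunks.append(chunk)
        start += n - 1
        if start >= len(s) - 1:
            return chunks

h1, h2, h3 = snake_split(list(range(10)), 4)  # a toy sorted vector
assert h1[-1] == h2[-1]  # bottom U-turn: L_{n-1}(x) * (h1(x) - h2(x)) = 0
assert h2[0] == h3[0]    # top U-turn:    L_0(x) * (h2(x) - h3(x)) = 0
```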

Unsorted $t$ in $s$

Note that at setup time, $t$ cannot be sorted as it is not combined yet. Since $s$ needs to be sorted by $t$ (in other words, not sorted, but sorted following the order of the elements of $t$), there are two solutions:

  1. both the prover and the verifier can sort the combined $t$, so that $s$ can be sorted via the typical sorting algorithms
  2. the prover can sort $s$ by $t$, so that the verifier doesn't have to do any sorting and can just rely on the commitments of the columns of $t$ (which the prover can evaluate in the protocol).

We do the second one, but there is an edge-case: the combined $t$ entries can repeat. For some $i \neq l$, we might have

$$
t_0[i] + j \cdot t_1[i] + j^2 \cdot t_2[i] = t_0[l] + j \cdot t_1[l] + j^2 \cdot t_2[l]
$$

For example, if $f = \{1, 2, 2, 3\}$ and $t = \{2, 1, 2, 3\}$, then $\text{sorted}(f, t) = \{2, 2, 2, 1, 1, 2, 3, 3\}$ would be one way of sorting things out, but $\text{sorted}(f, t) = \{2, 2, 2, 2, 1, 1, 3, 3\}$ would be incorrect.
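
A simplified sketch of that sorting rule: every element of $t$ keeps its own slot, in $t$'s order, and the copies coming from $f$ are attached to the first occurrence of their value.

```python
from collections import Counter

def sort_by_t(f, t):
    # every element of t keeps its own position; the copies coming from f are
    # attached to the first occurrence of their value in t
    counts, seen, s = Counter(f), set(), []
    for x in t:
        s.append(x)
        if x not in seen:
            s.extend([x] * counts[x])
            seen.add(x)
    return s

f = [1, 2, 2, 3]
t = [2, 1, 2, 3]
assert sort_by_t(f, t) == [2, 2, 2, 1, 1, 2, 3, 3]  # a correct sorting
# grouping all the 2s together, [2, 2, 2, 2, 1, 1, 3, 3], would be incorrect
```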

Recap

So to recap, to create the sorted polynomials $h_i$, the prover:

  1. creates a large query vector which contains the concatenation of the $4$ per-row (combined with the joint combinator) queries (which might contain dummy queries) for all rows
  2. creates the (combined with the joint combinator) table vector $t$
  3. sorts all of that into a big vector $s$
  4. divides that vector $s$ into as many $h_i$ vectors as necessary, following the snake method
  5. interpolates these $h_i$ vectors into polynomials
  6. commits to them, and evaluates them as part of the protocol.