Publication
CCS 2024
Conference paper

Batching-Efficient RAM using Updatable Lookup Arguments

Abstract

RAM (random access memory) is an important primitive in verifiable computation. In this paper, we focus on realizing RAM with an efficient batching property, i.e., proving a batch of 𝑚 updates on a RAM of size 𝑁 while incurring a cost that is sublinear in 𝑁. Classical approaches that model RAM correctness using Merkle trees or address-ordered transcripts are either concretely inefficient or incur overhead linear in the size of the RAM. Recent works explore cryptographic accumulators based on groups of unknown order (RSA groups, class groups) to model the RAM state. While recent RSA-accumulator-based approaches offer significant improvement over classical methods, they incur overhead linear in the size of the accumulated set to compute witnesses, as well as prohibitive constant overheads. In this paper, we realize a batching-efficient RAM with superior asymptotic and concrete costs compared to existing approaches. Towards this: (i) we build on recent constructions of lookup arguments to allow efficient lookups even in the presence of table updates, and (ii) we realize a variant of the sub-vector relation addressed in prior works, which we call committed index lookup. We combine the two building blocks to realize a batching-efficient RAM with sublinear (𝑂(√𝑁)) dependence on the size of the RAM. Our construction incurs an amortized proving cost of 𝑂(𝑚 log 𝑚 + √(𝑚𝑁)) for a batch of 𝑚 updates on a RAM of size 𝑁. Our results also benefit recent arguments for the sub-vector relation, by enabling them to remain efficient in the presence of updates to the table. We believe this is a contribution of independent interest. We implement our solution to evaluate its concrete efficiency. Our experiments show that it offers significant improvement over existing works on batching-efficient accumulators/RAMs, with a substantially reduced resource barrier.