What is that new powerful tool in cryptography, then?
> He wanted to build zero-knowledge proofs that weren’t interactive. Thirty years earlier, Goldreich and Oren had established that such proofs are impossible.
I'm not sure what "interactive" means here, but I thought ZK-SNARKs were already non-interactive.
It seems the article has nothing to do with anything practical.
The fielded systems require something that wasn't there in the original model of zero-knowledge proofs. That could be as little as a trusted-enough public source of randomness: the prover makes their initial commitments, plays the verification game with a verifier whose challenges are controlled by the next outputs of the public RNG, and as long as the other party trusts that the RNG and prover aren't in cahoots, that's enough. Doing a trusted setup process beforehand is another tool used by a bunch of deployed systems.
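The best-known instance of "challenges controlled by a trusted public source" is the Fiat–Shamir heuristic, where a hash function everyone can recompute plays the role of the public RNG. Here's a toy Python sketch of a Schnorr-style proof made one-shot that way; all parameters are invented and far too small for real cryptography:

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge of a discrete log. Parameters are
# deliberately tiny (subgroup of order q = 11 in Z_23*): illustration only.
p, q, g = 23, 11, 2

def H(*vals):
    """The 'public challenge source': a hash everyone can recompute."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Single-message proof of knowledge of x with y = g^x mod p."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)          # commitment, fixed before the challenge
    c = H(g, y, t)            # challenge derived from the commitment
    s = (r + c * x) % q       # response
    return y, t, s

def verify(y, t, s):
    c = H(g, y, t)            # anyone can re-derive the same challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(7)            # one shot, no interaction
assert verify(y, t, s)
```

The verification equation g^s == t * y^c holds because s = r + c*x; the point is just that the verifier's challenge has been replaced by something the prover can't steer after committing.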
That doesn't mean anything's practically wrong with the fielded ZK proof systems, just that's how you reconcile the article's "no non-interactive proofs under these assumptions" with people out in the real world using non-interactive proofs.
This paper brings up another logical possibility, that there could be a non-interactive proof with no RNG or setup that doesn't meet the precise original definition of zero-knowledge proofs but is zero-knowledge practically speaking. I don't know whether we'll actually see better fielded ZK proof systems come out of this approach!
The result is indeed theoretical, but is a big advancement in theory.
zkSNARKs (and other “non-interactive” proof systems) are actually secretly interactive, because they all require a setup phase, which effectively counts as the first verifier message. The prover's response is then the second message, making the entire process interactive.
This work eliminates that setup phase entirely, leaving only the prover's message. The resulting protocol is hence truly non-interactive.
Cryptographer here, but this is not my area and I've only skimmed the paper. As far as I can tell, it's a purely theoretical result but a really cool one. Wall of text that might be wrong, as a rough summary of the result as I understand it:
There are different definitions of "zk" and of "proof". E.g. do "proofs" of false statements not exist, or are they just hard to find? If they exist but are hard to find, then it's often called an "argument" instead, which is the "AR" in zk-SNARKs and zk-STARKs.
One common definition of zero-knowledge protocols is that you can make an efficient simulator that makes convincing transcripts of the protocol without knowing the relevant secret (up to and including whether the statement to be proved/argued is even true). For interactive proofs, the simulator is usually supposed to output a transcript of the messages sent between the prover and the verifier, and the trick to making the simulator work is to choose later messages before earlier ones (e.g. challenges before commitments). But in non-interactive proofs, there's only one message, so that trick doesn't work and the simulator would have to output the proof itself.
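The "choose later messages before earlier ones" trick is concrete enough to show in a few lines of toy Python (parameters invented, far too small for real use): the simulator picks the challenge and response first, then solves for a commitment that makes the transcript check out.

```python
import secrets

p, q, g = 23, 11, 2   # toy subgroup parameters: illustration only

def simulate_transcript(y):
    """Fake an accepting (commitment, challenge, response) transcript for
    public value y = g^x without ever knowing x: pick the 'later' messages
    (challenge and response) first, then solve for the commitment."""
    c = secrets.randbelow(q)
    s = secrets.randbelow(q)
    t = (pow(g, s, p) * pow(y, (q - c) % q, p)) % p   # t = g^s * y^(-c)
    return t, c, s

def verify_transcript(y, t, c, s):
    # The standard sigma-protocol check: g^s == t * y^c (mod p).
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y = pow(g, 7, p)      # someone's public value; x = 7 stays secret
assert verify_transcript(y, *simulate_transcript(y))
```

In the real interactive protocol the verifier picks c *after* seeing t, which is exactly why this trick is unavailable to a cheating prover but available to a simulator writing the whole transcript at once.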
The Goldreich-Oren result shows that this definition of ZK conflicts with soundness, unless the type of problem you're doing ZK proofs for was easy to begin with. IIUC this is for a simple reason: if a simulator can efficiently output a convincing proof of any true statement of the type your zk proof system covers (this is the zero-knowledge property); and if for false statements there is no proof that will convince the verifier (soundness); then you have an efficient algorithm for checking whether the statement is true or not, which is just to check whether your simulator convinces the verifier. This means that the underlying problem is by definition easy, so there's not much point to having zk proofs for it.
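That argument fits in a few lines of toy Python. Every name here is invented for the sketch, and the "language" ("n is even") is deliberately trivial, because that's the conclusion: ZK plus perfect soundness forces the problem to be easy.

```python
# Toy rendering of the Goldreich-Oren argument.

def simulator(n):
    # Zero-knowledge side: efficiently output a convincing proof of any
    # TRUE statement, with no secret input.
    return n // 2

def verifier(n, proof):
    # Perfect-soundness side: accepts iff the proof really checks out,
    # so no "proof" of a false statement is ever accepted.
    return 2 * proof == n

def decide(n):
    # Combining the two yields an efficient decision procedure for the
    # underlying language, which is the contradiction.
    return verifier(n, simulator(n))

assert decide(4) and not decide(5)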
Goldreich-Oren doesn't apply to zk-SNARKs or zk-STARKs, because they are not perfectly sound, and in particular because you can get around the impossibility using the trusted setup in zk-SNARKs (essentially a secret key that lets you efficiently prove false statements) and/or by messing around with the random oracle model (pretend that the hash functions are replaced by magic, and then let the simulator tinker with that magic). Also zk-S?ARKs are arguments of knowledge (not just e.g. "a discrete log of this point exists" but "the prover knows the discrete log") which also changes the model.
As I understand it, the new result is basically to make your proof a NIWI-proof ("Non-Interactive Witness Indistinguishable proof", a weaker notion of zk-proof) that:
* Either [real statement you're trying to prove]
* or else [false statement that's almost impossible to prove false], e.g. "there are contradictions in your axiom system".
Such a proof can be made perfectly sound, since NIWI can be perfectly sound, and the second half is supposed to be false. There's no simulator, but if the false statement were true then there would be a simulator, where you always feed the NIWI eg a contradiction in the axiom system, instead of a proof of the real statement. (The definition of NIWI is that it should be hard to distinguish the proof resulting from these two cases.) The new paper also argues that this result, where there's no simulator but it's hard to prove that there's no simulator, is almost as good as the simulator actually existing.
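For a feel of what "witness indistinguishable" means, here's the classic OR-composition of Schnorr proofs in toy Python: prove "I know the discrete log of y1 OR of y2" while hiding which witness you hold. This is NOT the paper's construction (it's Fiat–Shamir'd, so it leans on a hash, and the parameters are invented toys), just a sketch of the OR-of-two-statements shape the parent describes.

```python
import hashlib
import secrets

p, q, g = 23, 11, 2   # toy subgroup parameters: illustration only

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_or(y1, y2, x, known):
    """Prove knowledge of a discrete log of y1 OR of y2, holding only the
    witness x for branch `known` (1 or 2). Which branch was real is hidden."""
    y_real, y_fake = (y1, y2) if known == 1 else (y2, y1)
    # Simulate the branch we lack a witness for: challenge chosen first.
    c_fake = secrets.randbelow(q)
    s_fake = secrets.randbelow(q)
    t_fake = (pow(g, s_fake, p) * pow(y_fake, (q - c_fake) % q, p)) % p
    # Run the real branch honestly.
    r = secrets.randbelow(q)
    t_real = pow(g, r, p)
    t1, t2 = (t_real, t_fake) if known == 1 else (t_fake, t_real)
    c_real = (H(y1, y2, t1, t2) - c_fake) % q   # challenges must sum to the hash
    s_real = (r + c_real * x) % q
    if known == 1:
        return t1, t2, c_real, c_fake, s_real, s_fake
    return t1, t2, c_fake, c_real, s_fake, s_real

def verify_or(y1, y2, proof):
    t1, t2, c1, c2, s1, s2 = proof
    return ((c1 + c2) % q == H(y1, y2, t1, t2)
            and pow(g, s1, p) == (t1 * pow(y1, c1, p)) % p
            and pow(g, s2, p) == (t2 * pow(y2, c2, p)) % p)

y1, y2 = pow(g, 3, p), pow(g, 5, p)
# Either witness yields an accepting, identically-distributed proof:
assert verify_or(y1, y2, prove_or(y1, y2, 3, known=1))
assert verify_or(y1, y2, prove_or(y1, y2, 5, known=2))
```

The prover gets to pick one of the two sub-challenges freely, which lets them simulate exactly one branch; the verifier can't tell which sub-challenge was the free one.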
Probably in practice you wouldn't do this, but you would instead try to make sure that a zk-SNARK, zk-STARK, NIWI etc is good enough in your use case.
Goldreich-Oren still applies to NIZKs and SNARKs; the setup phase is the first verifier message. Even in SNARKs with transparent setup (e.g., STARKs), the randomness used in the setup phase counts as the first verifier message.
I can't even tell if this is tangent or central to the main idea, but I'm so intrigued by this one part:
> at the relatively mild cost that such applications now have security that is "game-based" instead of "simulation-based."
Quite interesting and mysterious. What is a game but a kind of simulation with two or more potentially competing "physics engines" involved? What is a simulation but a game, where exactly one player happens to be called "The Environment"? If these aren't two ways of looking at the same thing, what's this about a relatively mild cost?
The author didn’t use those words “just because”. Game-based security and simulation-based security are two different styles of security definitions used in cryptography, and we differentiate between them for good reason: game-based definitions are typically weaker (in that they capture less adversary behaviour) than simulation-based definitions.
Also, they have nothing to do with “video games” or physical simulations therein.
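For a flavor of what "game-based" means here: a security game is an experiment between a challenger and an adversary, and security says no efficient adversary wins much better than chance. A skeletal indistinguishability game in Python; the challenger/adversary shape is standard, but the one-time-pad instantiation and the do-nothing adversary are toy stand-ins:

```python
import secrets

def xor_otp(key, msg):
    return bytes(k ^ m for k, m in zip(key, msg))

def ind_game(adversary):
    """One round of a game-based indistinguishability experiment: the
    challenger flips a bit b, encrypts m_b under a fresh one-time pad,
    and the adversary tries to guess b. 'Secure' means no efficient
    adversary wins with probability noticeably above 1/2."""
    m0, m1 = adversary.choose()              # two equal-length messages
    b = secrets.randbelow(2)
    key = secrets.token_bytes(len(m0))
    ct = xor_otp(key, m1 if b else m0)
    return adversary.guess(ct) == b

class RandomGuesser:
    """Toy adversary with no strategy at all (a stand-in for the sketch)."""
    def choose(self):
        return b"yes", b"no!"
    def guess(self, ct):
        return secrets.randbelow(2)

wins = sum(ind_game(RandomGuesser()) for _ in range(5000))  # hovers near 2500
```

Simulation-based definitions instead demand that anything the adversary learns could have been produced by a simulator that never saw the secret, which is a stronger, more all-encompassing requirement; hence "game-based" being the milder guarantee.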
Thanks for the hint. Reading more about this, it seems the crypto/zero-knowledge/interactive-proof terminology is formal but overloaded here: games that have nothing to do with game theory, simulation that has no connection with bisimulation.
Security through obscurity is about not revealing how the system works. Relying on computational complexity is precisely the opposite: you can understand the entire system, and the unknown is well defined, e.g. the length of the key and which symbols can be used to compose it, but recovering it is prohibitively expensive. It's practically impossible even if in theory it is possible.
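A quick back-of-envelope in Python makes "prohibitively expensive" concrete; the attacker speed below is an assumed figure for illustration:

```python
# The system is fully public; only the key is unknown.
guesses_per_second = 10**12                # a very generous attacker
keys = 2**128                              # e.g. a 128-bit key
seconds_per_year = 60 * 60 * 24 * 365
years = keys // (guesses_per_second * seconds_per_year)
print(f"~{years:.2e} years to exhaust the key space")
```

Even at a trillion guesses per second, that works out to on the order of 10^19 years, which is the sense in which "possible in theory" and "possible in practice" come apart.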
FWIW we don't know for sure that one-way functions truly exist, or that P is different from NP. We just haven't found efficient solutions so far. So the assumption, for both what we used so far and this new work, is that our understanding of mathematics is shared and correct.
I thought that was THE ENTIRE PREMISE of cryptography
Given that they can't be proven, it's effectively unpredictable and “un-generatable”?
Thank you!
I was searching for the github repo with a cool example encrypt/decrypt.
Silly me.
I think Daily Mail links would be more informative, unironically.
It's only secure until someone figures it out.