>We contend that the amount of time remaining before the arrival of CRQCs still exceeds the amount of time needed to migrate public blockchains to PQC, though the margin for error is increasingly narrow. Therefore, we have offered updated resource estimates for quantum attacks on blockchain cryptography together with an analysis of vulnerabilities and mitigations in order to urge all vulnerable cryptocurrency communities to begin PQC transition immediately while its timely completion is still the likely prospect.
They really couldn't be shouting "mitigate now or never" any louder. I'm curious how they arrived at the efficiency improvements, but perhaps any mention of that would be similar to releasing the circuit.
Top comment on LWN is a very interesting read (although neither the commenter nor myself claim any such trickery was involved in this case).
> Trail of Bits were able to craft an input that beats Google's circuit and prove it... by virtue of a bug in the verifier: https://blog.trailofbits.com/2026/04/17/we-beat-googles-zero...
Google patched the vuln and the original proof still stands, but this is a pretty strange path we seem to be walking down [...]
>On superconducting architectures with 10^-3 physical error rates...
So still 1-2 orders of magnitude better than what we can achieve.
This is against a 256 bit elliptic curve. For some reason most people are stating the difficulty of using Shor's against 2048 bit RSA. Elliptic curves are easier to break with Shor's. I wonder how much of the optimization came from that fact alone...
How is it possible to provide a zero knowledge proof that their circuit works for large problem instances if there is no efficient way to run or simulate the circuit with the required instance size?
Wait, the article mentions that Shor's algorithm is for factoring (which is what I understood), but then it's talking about elliptic curve cryptography? I thought ECC didn't use the same mathematical foundations as RSA, and RSA has been slowly phased out anyway...
Quite the contrary. Shor's algorithm actually works better for the shorter keys of ECC. The rule of thumb is 2n qubits for RSA keys and 6n qubits for ECC. I believe it has something to do with how it applies to the hidden subgroup problem of finite abelian groups rather than factorisation, but I am really not a cryptographer nor especially mathsy. I just asked the same question you did, and someone in the know pointed me to that.
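The rule of thumb above turns into simple back-of-the-envelope arithmetic. A sketch only: the 2n/6n constants are the folklore figures quoted in the comment, not numbers taken from the paper.

```python
# Rough logical-qubit estimates for Shor's algorithm, using the
# rule of thumb quoted above (2n qubits for an n-bit RSA modulus,
# 6n qubits for an n-bit elliptic curve). Folklore constants only.

def rsa_qubits(n_bits: int) -> int:
    """~2n logical qubits to attack an n-bit RSA modulus."""
    return 2 * n_bits

def ecc_qubits(n_bits: int) -> int:
    """~6n logical qubits to attack an n-bit elliptic curve."""
    return 6 * n_bits

print(rsa_qubits(2048))  # 4096 for RSA-2048
print(ecc_qubits(256))   # 1536 for a 256-bit curve
```

Even though the per-bit constant is larger for ECC, the much shorter key means a 256-bit curve needs far fewer logical qubits than RSA-2048, which is consistent with the comment's point.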
This has been used for centuries. It is not a new invention.
Hundreds of years ago, it was not unusual to publish an encrypted solution of some mathematical problem, in order to establish priority without disclosing the algorithm that was used.
Of course, at that time very simple encryption methods were used, for instance an anagram of the solution was published (i.e. encryption by letter transposition).
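As a toy illustration of the anagram trick: verifying a claimed solution against a published anagram is just a multiset comparison of letters. The example uses Hooke's well-known 1676 anagram of his spring law, which is not mentioned in the thread.

```python
from collections import Counter

# Publishing an anagram commits you to a sentence without revealing it.
# Hooke published "ceiiinosssttuv" in 1676, later revealed as
# "ut tensio, sic vis". Checking the reveal is a letter-count comparison.

def is_anagram_of(published: str, claimed_solution: str) -> bool:
    letters = lambda s: Counter(c for c in s.lower() if c.isalpha())
    return letters(published) == letters(claimed_solution)

print(is_anagram_of("ceiiinosssttuv", "ut tensio sic vis"))  # True
```

Like a modern commitment scheme, this binds the author to the solution, though unlike a zero-knowledge proof it proves nothing about the solution's correctness until it is revealed.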
But the algorithm still isn't practical on existing quantum computers, or ones that are going to be around any time soon, so there's no reason not to publish in full.
If only AI safety research had a mechanism this clear. "We have proof that building the machine will kill everybody, so get to work making a provably safe version."
Except that you have the logic backwards. It's an argument that something ("safe" general purpose AI) can't exist rather than that it has to.
People want AI to be able to do every good thing but no bad thing, which is impossible twice. First because false positives and false negatives trade against each other, so a general purpose AI which can do anything approximating all the good things is going to have the bias leaning heavily towards being able to do things in general and therefore being able to do many things that are bad. And second because "good" and "bad" aren't things that anybody can agree on and then some people will demand that it must do X while others demand that it not do X (e.g. "help the rebels win the war"), which means someone is inherently going to be unsatisfied and it's not a thing that can be sensibly regarded as everyone working towards a common goal.
> If the paper's authors had chosen to release their circuit, they would certainly have been recognized for the important progress they made in the science of quantum computing. Other researchers would have gone on to build on their work, and the entire scientific community would be richer for it.
... and the world could well have been less safe. There is a pretty strong reason not to release insights that could be used to attack public key cryptography. We already know the fix anyway: post-quantum cryptography algorithms.
Sometimes scientific curiosity has to step back when it comes to potentially dangerous research. Scott Aaronson recently [1] compared this case to when scientists stopped publishing on nuclear fission research because the possibility of developing an atomic bomb became concrete:
> When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior.
> Elliptic curves are easier to break with Shor's. I wonder how much of the optimization came from that fact alone...

They're closely related: the discrete log problem behind ECC and the factoring problem behind RSA are both instances of the hidden subgroup problem.

It kinda does, it just uses them differently.

The basis here is the discrete logarithm problem in a specific group (an elliptic curve over a finite field, or the multiplicative group modulo n).
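A minimal sketch of the underlying problem, with toy parameters chosen purely for illustration: classically, recovering x from h = g^x mod p takes brute force (exponential in the bit length of p), while Shor's algorithm solves this hidden-subgroup instance in polynomial time.

```python
# Brute-force discrete log in the multiplicative group mod p:
# given a generator g and a target h = g^x mod p, recover x by
# trying exponents in order. This is the classically hard problem;
# Shor's algorithm solves it in polynomial time on a quantum computer.

def discrete_log(g: int, h: int, p: int) -> int:
    acc = 1
    for x in range(p - 1):
        if acc == h:
            return x
        acc = (acc * g) % p
    raise ValueError("no solution in <g>")

p, g = 101, 2   # tiny prime; 2 generates the multiplicative group mod 101
x = 47
h = pow(g, x, p)
print(discrete_log(g, h, p))  # 47
```

The same search over a toy elliptic-curve group would look identical in structure, which is why both systems fall to the same quantum attack.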
"God doesn't exist" is essentially incoherent. God is the perfect being, and if he didn't exist, he wouldn't be perfect.
I think the logical mistake is obvious.
1: https://scottaaronson.blog/?p=9665
Doubt without evidence is just noise.
It may have gone unnoticed if only used once in the article, however.