If Layer2 scaling is an interstellar journey, then the zk-EVM is the spaceship fitted with a superluminal engine, while the data layer is the fuel supply line: it decides whether you can actually take off.
It is now December 2025. The peak of the zk-EVM narrative's push toward large-scale deployment has passed, and we are in a quieter stretch. Over the past two years the industry cheered every few-second improvement in zero-knowledge proof generation and argued endlessly over the definition of Ethereum equivalence. But reality has dealt developers a harsh blow: you finally build an engine capable of processing tens of thousands of transactions per second, only to find it idling most of the time, because the data supply can't keep up.
This is the most awkward predicament for Layer2 now: you have ample computing power, but are bottlenecked by data throughput.
Why is the halo of zk-EVM fading?
In early scaling efforts, we all thought that generating proofs (Proving) was the biggest challenge. But now? With hardware acceleration and parallel proof techniques gradually maturing, proof costs have come down. The real bottleneck is no longer "how to prove that these ten thousand transactions are valid," but "how to securely deliver the raw data of these ten thousand transactions to the prover with minimal latency."
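To make the "idling engine" point concrete, here is a rough back-of-envelope sketch in Python. The blob budget, bytes per transaction, and prover capacity below are illustrative assumptions rather than figures from any specific rollup; they simply show how a data-availability ceiling measured in kilobytes per second can leave a prover built for tens of thousands of TPS mostly idle.

```python
# Rough back-of-envelope: data-availability-limited TPS vs. assumed prover capacity.
# Every constant here is an illustrative assumption, not a measured figure.

BLOB_SIZE_BYTES = 128 * 1024   # size of one EIP-4844 blob
BLOBS_PER_BLOCK = 6            # assumed blob budget per L1 block (protocol-dependent)
L1_BLOCK_TIME_S = 12           # Ethereum slot time in seconds
BYTES_PER_TX = 150             # assumed compressed bytes posted per L2 transaction

da_bytes_per_s = BLOB_SIZE_BYTES * BLOBS_PER_BLOCK / L1_BLOCK_TIME_S
da_limited_tps = da_bytes_per_s / BYTES_PER_TX
assumed_prover_tps = 10_000    # the "tens of thousands of TPS" engine from the text

print(f"DA-limited throughput: ~{da_limited_tps:,.0f} TPS")
print(f"Assumed prover capacity: {assumed_prover_tps:,} TPS")
print(f"Engine utilization if DA is the ceiling: {da_limited_tps / assumed_prover_tps:.0%}")
```

Under these assumptions the data layer caps throughput at a few hundred TPS, so the prover sits below five percent utilization; swap in any particular rollup's own blob budget and compression ratio and the shape of the conclusion stays the same.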
Many Layer2 projects now feel like a Ferrari being driven through narrow alleys at rush hour. They are compatible with the Ethereum ecosystem, but when it comes to high-frequency finance or on-chain AI applications that demand heavy data processing, they start to struggle.
0xLuckbox
· 1h ago
The analogy of a Ferrari stuck in an alley is brilliant; indeed, the data layer is truly the ceiling.
MerkleTreeHugger
· 01-03 19:10
Haha, the same old act of "the engine is great, but there's not enough fuel." Layer2 right now is a high-spec but underperforming kid.
---
Excess computing power alongside a data famine, how ironic. All that hype back then looks even more embarrassing in hindsight.
---
A Ferrari trying to get going in an alley, haha. The metaphor is perfect; it's a portrait of today's Layer2 projects.
---
So who is really working on the data layer? Or are they all just waiting for others to take the first step and stumble?
---
Two years ago, the debate was about equivalence; now we realize it’s useless. That’s the rhythm of crypto.
---
Data throughput bottleneck... feels uncomfortable. I thought proof would be the bottleneck, but the real pain points are elsewhere.
---
It should have been clear that computation is cheap and data is expensive. It’s a bit late now to realize that.
GateUser-0717ab66
· 01-03 02:51
It's the same old data layer issue again; really, we get stuck here every single time.
CryptoPunster
· 01-03 02:50
A Ferrari stuck in an alley: the metaphor is perfect. It describes exactly where these Layer2 projects are right now; it cracks me up.
Data is the real bottleneck. Those who used to boast about proof speed have now gone quiet.
Another story of "great technology but no place to use it." Web3 just loves this kind of thing.
GasFeeTears
· 01-03 02:38
Oh no, it's that old tune again: "Having an engine but no fuel"... Actually, it's been obvious since last year. Who's still hyping zk's speed?
GateUser-75ee51e7
· 01-03 02:31
Haha, I'm dying laughing. You buy a supercar only to get stuck in an alley; that's a perfect analogy. Basically, after all the hype around zk-EVM, we're still stuck on data, that same stubborn old problem. Truly ironic.
hodl_therapist
· 01-03 02:27
Haha, isn't this the exact issue we were complaining about two years ago? Only now do people realize the bottleneck isn't proving, it's data. Kinda late, bro.