The L2 compromise is broken; it’s time for a better foundation

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.
The second quarter of 2025 has been a reality check for blockchain scaling. As capital keeps pouring into rollups and sidechains, the cracks in the layer-2 model are widening. The original promise of L2s was simple: scale up L1s. Instead, the costs, delays, and fragmentation in liquidity and user experience keep stacking up.
Summary
- L2s were meant to scale Ethereum, but they have introduced new problems while relying on centralized sequencers that can become single points of failure.
- At their core, L2s handle sequencing and state computation, using Optimistic or ZK Rollups to settle on L1. Each comes with trade-offs: long finality in Optimistic Rollups and heavy computational costs in ZK Rollups.
- Future efficiency lies in separating computation from verification: using centralized supercomputers for computation and decentralized networks for parallel verification, enabling scalability without sacrificing security.
- The “total order” model of blockchains is outdated; moving toward local, account-based ordering can unlock massive parallelism, ending the “L2 compromise” and paving the way for a scalable, future-ready web3 foundation.
New projects like stablecoin payments are beginning to question the L2 paradigm, asking whether L2s are truly secure and whether their sequencers are really single points of failure and censorship. Often, they end up taking the pessimistic view that fragmentation may simply be inevitable in web3.
Are we building a future on a solid foundation or a house of cards? L2s must face and answer these questions. After all, if Ethereum’s (ETH) base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it would be redundant. Numerous rollups and sidechains were proposed as “L1 add-ons” to mitigate the fundamental constraints of the underlying L1s. It is a form of technical debt: a complex, fragmented workaround that has been offloaded onto web3 users and developers.
You might also like: Fair launch is the broken promise of crypto | Opinion
To answer these questions, we need to deconstruct the entire concept of an L2 into its fundamental components, revealing a path toward a more robust and efficient design.
An anatomy of L2s
Structure determines function. It is a basic principle in biology that also holds in computer systems. To identify the right structure and architecture for L2s, we must examine their functions carefully.
At its core, every L2 performs two critical functions: sequencing, i.e., ordering transactions, and computing and proving the new state. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. The batch is then executed, resulting in an updated state (e.g., new token balances). This state must be settled on the L1 for security via Optimistic or ZK Rollups.
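The two functions above can be sketched as a minimal pipeline. This is an illustrative Python sketch, not any production rollup’s code; all names (`Tx`, `sequence`, `execute`) are hypothetical, and balances stand in for the full L2 state:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    sender: str
    recipient: str
    amount: int

def sequence(mempool: list[Tx]) -> list[Tx]:
    # Function 1: the sequencer fixes an order for pending transactions.
    # Ordering by (sender, recipient, amount) is purely for illustration;
    # a real sequencer's policy is a governance and design choice.
    return sorted(mempool, key=lambda tx: (tx.sender, tx.recipient, tx.amount))

def execute(state: dict[str, int], batch: list[Tx]) -> dict[str, int]:
    # Function 2: apply the ordered batch to compute the new state
    # (e.g., updated token balances); underfunded transfers are skipped.
    new_state = dict(state)
    for tx in batch:
        if new_state.get(tx.sender, 0) >= tx.amount:
            new_state[tx.sender] = new_state.get(tx.sender, 0) - tx.amount
            new_state[tx.recipient] = new_state.get(tx.recipient, 0) + tx.amount
    return new_state

state = {"alice": 100, "bob": 0}
batch = sequence([Tx("alice", "bob", 30), Tx("alice", "bob", 20)])
print(execute(state, batch))  # {'alice': 50, 'bob': 50}
```

The resulting state is what must then be settled on the L1, via either an optimistic or a ZK proof path.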
Optimistic Rollups assume all state transitions are valid and rely on a challenge period (typically 7 days) during which anyone can submit fraud proofs. This creates a major UX trade-off: long finality times. ZK Rollups use zero-knowledge proofs to mathematically verify the correctness of every state transition before it hits L1, enabling near-instant finality. The trade-off is that they are computationally intensive and complex to build. ZK provers themselves can be buggy, leading to catastrophic consequences, and formal verification of these provers, if feasible at all, is very expensive.
Sequencing is a governance and design choice for each L2. Some prefer a centralized solution for efficiency (or perhaps for that censorship power; who knows), while others prefer a decentralized solution for greater fairness and robustness. Ultimately, each L2 decides how to do its own sequencing.
State claim generation and verification is where we can do much, much better on efficiency. Once a batch of transactions is sequenced, computing the next state is a purely computational task, and it can be done by a single supercomputer focused solely on raw speed, without the overhead of decentralization at all. That supercomputer could even be shared among L2s!
Once this new state is claimed, its verification becomes a separate, parallelizable process. A large network of verifiers can work in parallel to check the claim. This is also the very philosophy behind Ethereum’s stateless clients and high-performance implementations like MegaETH.
Parallel verification is infinitely scalable
Parallel verification is infinitely scalable. No matter how fast L2s (and that supercomputer) produce claims, the verification network can always catch up by adding more verifiers. The latency here is precisely the verification time, a fixed, minimal amount. This is the theoretical optimum achieved by using decentralization effectively: to verify, not to compute.
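The compute/verify split can be illustrated with a toy sketch. Here a single fast node produces state claims and independent verifiers re-check them concurrently; `compute_claim` and `verify_parallel` are hypothetical names, and summing a batch stands in for real state transition logic:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_claim(batches):
    # The "supercomputer": computes one state claim per batch,
    # optimized for raw speed, with no decentralization overhead.
    return [sum(batch) for batch in batches]

def verify_one(batch, claimed):
    # Any verifier can independently re-check a single claim.
    return sum(batch) == claimed

def verify_parallel(batches, claims, n_verifiers):
    # Verification is embarrassingly parallel: each claim is checked
    # independently, so throughput scales with the number of verifiers
    # while per-claim latency stays fixed at one verification time.
    with ThreadPoolExecutor(max_workers=n_verifiers) as pool:
        return all(pool.map(verify_one, batches, claims))

batches = [[1, 2, 3], [4, 5], [6]]
claims = compute_claim(batches)
print(verify_parallel(batches, claims, n_verifiers=3))  # True
```

A corrupted claim is caught by whichever verifier re-checks that batch, which is why the claim producer needs no trust: speed comes from centralization, security from parallel, decentralized verification.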
After sequencing and state verification, the L2’s job is almost complete. The final step is to publish the verified state to a decentralized network, the L1, for ultimate settlement and security.
This final step exposes the elephant in the room: blockchains are terrible settlement layers for L2s! The main computational work is done off-chain, yet L2s must pay a huge premium to finalize on an L1. They face a dual overhead. First, the L1’s limited throughput, burdened by its total, linear ordering of all transactions, creates congestion and high costs for posting data. Second, they must endure the L1’s inherent finality delay.
For ZK Rollups, that delay is minutes. For Optimistic Rollups, it is compounded by a week-long challenge period, a necessary but costly security trade-off.
Farewell to the “total order” myth in web3
Since Bitcoin (BTC), people have tried hard to squeeze all transactions of a blockchain into a single total order. We are talking about chains of blocks, after all! Unfortunately, this “total order” paradigm is a costly myth and clearly overkill for L2 settlement. How ironic that one of the world’s largest decentralized networks, the world’s computer, behaves just like a single-threaded desktop!
It’s time to move on. The future is local, account-based ordering, where only transactions interacting with the same account need to be ordered, unlocking massive parallelism and true scalability.
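The idea can be sketched as follows. In this simplified, hypothetical model each transaction touches exactly one account (cross-account transfers would need ordering across both lanes); transactions on the same account keep their local order, while independent accounts are processed concurrently with no global order at all:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def group_by_account(txs):
    # Only transactions touching the same account need a mutual order;
    # each account keeps its own local sequence (here: arrival order).
    lanes = defaultdict(list)
    for account, delta in txs:
        lanes[account].append(delta)
    return lanes

def apply_lane(deltas, start=0):
    # One account's lane is applied sequentially, in its local order.
    balance = start
    for delta in deltas:
        balance += delta
    return balance

txs = [("alice", +10), ("bob", +5), ("alice", -3), ("carol", +7), ("bob", -1)]
lanes = group_by_account(txs)
# Independent accounts run concurrently: no global total order is needed.
with ThreadPoolExecutor() as pool:
    balances = dict(zip(lanes, pool.map(apply_lane, lanes.values())))
print(balances)  # {'alice': 7, 'bob': 4, 'carol': 7}
```

Whatever interleaving the executor picks, the result is the same, because correctness only depends on each account’s local order; that is the parallelism a global total order throws away.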
Global ordering of course implies local ordering, but it is also an incredibly naive and simplistic solution. After 15 years of “blockchain,” it is time we open our eyes and handcraft a better future. The field of distributed systems research has already moved from the 1980s’ strong consistency theory (which is what blockchains implement) to 2015’s strong eventual consistency model, which unleashes parallelism and concurrency. It is time for the web3 industry to move on as well, to leave the past behind and follow forward-looking scientific progress.
The age of the L2 compromise is over. It is time to build on a foundation designed for the future, from which the next wave of web3 adoption will come.
Read more: Web3 is open, transparent, and miserable to build on | Opinion
Xiaohong Chen
Xiaohong Chen is the Chief Technology Officer at Pi Squared Inc., working on fast, parallel, and decentralized systems for payments and settlement. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these methods to all programming languages. Xiaohong received his BSc in Mathematics from Peking University and his PhD in Computer Science from the University of Illinois Urbana-Champaign.