Bitcoin is obsessed with limiting bloat. Ethereum outsources the problem to layer 2s. Solana can't solve state contention effectively after cramming everything into the L1. The scaling debate is full of needless misconceptions. We take a pragmatic look at scaling and its future.
Even Bitcoin does not exist outside the realm of computer science. It is a distributed system that has to deal with the same problems as any other distributed system.
It is occasionally brought back to this reality by people posting jpegs and other things on the Bitcoin blockchain. The latest iteration of this is the Knots debate, in which a Bitcoin Core developer who lost 200 BTC to a hack wants to fight said jpegs in the name of “spam fighting” with alternative node software. In private messages he suggests using zero-knowledge proofs to retroactively alter the Bitcoin blockchain with the goal of filtering out transactions he deems “spam”.
While this is obviously a silly attempt to recover his stolen Bitcoin, it lays bare a sentiment that has been well established ever since the Blocksize war of 2015-2017:
Blockspace is regarded as a precious resource that has to be kept scarce and pristine. This sentiment stands in fundamental contradiction to the fact that the Bitcoin protocol is oblivious to the content of the transactions it orders. There is no way to reconcile the two. Any attempt is fundamentally flawed and a slippery slope towards censorship.
The other aspect of this is the scarcity mindset that permeates the Bitcoin community. Bitcoiners are usually not scared of large numbers, quite the opposite. But when it comes to blocksize and transaction throughput, alarm bells go off: “What if we run out of storage space on the nodes?” The price per terabyte is around 10 USD for HDDs and 40 USD for SSDs. It is simply not rational to pretend blockchain size is the bottleneck in 2025.
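The back-of-the-envelope arithmetic makes the point. The drive prices are the ones cited above; the chain size is an assumed round figure (Bitcoin's full chain is on the order of a few hundred gigabytes in 2025):

```python
# Cost of storing the full chain at the drive prices cited above.
# chain_size_tb is an assumed round figure, for illustration only.
chain_size_tb = 0.7    # ~700 GB, rough 2025 ballpark
hdd_usd_per_tb = 10
ssd_usd_per_tb = 40

print(f"HDD: ${chain_size_tb * hdd_usd_per_tb:.2f}")  # $7.00
print(f"SSD: ${chain_size_tb * ssd_usd_per_tb:.2f}")  # $28.00
```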
From the FLP impossibility result we know that in an asynchronous network, a single faulty node can prevent consensus from ever being reached. Even though many people think Bitcoin is asynchronous, it is not. It is a partially synchronous system. There is no way to get around FLP impossibility. People just pretend the synchronous phase of the network does not exist.
We need to observe the whole process, from the moment a user submits a transaction in their wallet until the moment the wallet shows it as confirmed. First the transaction goes through an asynchronous gossip phase, during which it propagates into mempools. Next comes inclusion in a block. The block building and proposal phase is synchronous: the hashrate coordinates around mining pools to collectively mine the block that this temporary leader, the mining pool, proposes.
This is the synchronous phase of the network, where the market-clearing transaction fee is determined. Most network participants may only learn of it retroactively through the longest-chain rule, but this is effectively where the whole network comes to consensus, and it only happened through the synchronous coordination of a subset of the participants.
This tollbooth phase is common to all blockchains. At some point transactions have to be ordered and the actual transaction fee price has to be found. What follows is the asynchronous travel on the highway towards finality, where the rest of the network participants become aware of the new state of the ledger. Finally, the user's wallet shows the transaction as confirmed. This happens asynchronously, and different nodes will see the transaction as confirmed at different times.
To summarize: there is a synchronous tollbooth phase, where the transaction fee market clears and a block is proposed by a temporary leader, and an asynchronous highway phase, where the rest of the network participants are informed of the new state of the ledger.
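A minimal sketch of what happens at the tollbooth: a block builder greedily fills scarce blockspace with the highest-feerate transactions, and the feerate of the cheapest transaction that still fits is effectively the market-clearing price. This is a deliberately simplified model (real builders also handle parent/child transaction dependencies), and all numbers are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    fee: int    # total fee paid, in satoshis
    vsize: int  # virtual size in vbytes

    @property
    def feerate(self) -> float:
        return self.fee / self.vsize

def build_block(mempool: list[Tx], max_vsize: int) -> list[Tx]:
    """Greedily pack the highest-feerate transactions into limited blockspace."""
    block, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.feerate, reverse=True):
        if used + tx.vsize <= max_vsize:
            block.append(tx)
            used += tx.vsize
    return block

mempool = [Tx("a", 50_000, 250), Tx("b", 1_000, 250), Tx("c", 30_000, 500)]
block = build_block(mempool, max_vsize=750)
print([tx.txid for tx in block])        # ['a', 'c'] -- 'b' is priced out
print(min(tx.feerate for tx in block))  # 60.0: this block's clearing feerate
```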
Ethereum tried to sidestep the scaling problem by letting independent companies build tollbooths in front of its own tollbooth and highway, some of them hosted entirely on AWS.
In practice this approach does not work. It produces horrible UX, balkanizes the network and its liquidity, and introduces complex trust assumptions around the L2s. The market slowly came to this realization over the last two years as Solana and other smart contract chains gained mindshare. ETH underperformed, and the Ethereum community eventually had to admit it made a mistake. Now they are trying to scale the L1 to add capacity.
The core mistake is thinking that transaction verification, the actual bottleneck, can be mitigated by aggregating proofs in front of consensus.
This will never work, because it just adds another tollbooth in front of the whole system. The throughput capacity of the system as a whole does not increase; it just becomes more fractured. Are we even looking at the same system here? We cannot compare the end-to-end flow from transaction submission to confirmation on Ethereum with someone having their funds stuck on Base because AWS is down.
Solana tries to address the issue with higher node hardware requirements. This does not solve the problem fundamentally. Even after scaling up node requirements, node memory is still the bottleneck, and state became so expensive that many applications turned impractical. The proposed remedy, named “ZK Compression”, essentially creates a “Reverse Ethereum”: after first cramming everything into the L1, Solana now attempts to roll state back up.
This is because the state contention problem is not adequately addressed. There is no way for the system to intelligently decide which state is valuable and which state is junk. Currently the fees on state make many applications infeasible, yet at the same time they are too low to give the typical user any incentive to clean out the trash and help tackle the ever-growing state of the Solana blockchain.
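To make the pricing concrete, here is a sketch of Solana-style state pricing: an account locks a “rent-exempt” deposit proportional to its size, refundable when the account is closed, which is the only cleanup incentive on offer. The constants below mirror Solana's documented defaults, but treat them as assumptions and check current cluster values before relying on them:

```python
# Sketch of Solana-style state pricing. Constants mirror Solana's
# documented defaults but are assumptions here, not verified values.
LAMPORTS_PER_BYTE_YEAR = 3_480
EXEMPTION_THRESHOLD_YEARS = 2
ACCOUNT_OVERHEAD_BYTES = 128
LAMPORTS_PER_SOL = 1_000_000_000

def rent_exempt_minimum(data_len: int) -> int:
    """Lamports locked to keep an account of `data_len` bytes alive."""
    return ((ACCOUNT_OVERHEAD_BYTES + data_len)
            * LAMPORTS_PER_BYTE_YEAR
            * EXEMPTION_THRESHOLD_YEARS)

# empty account, SPL token account, 10 KiB of program state
for size in (0, 165, 10_240):
    sol = rent_exempt_minimum(size) / LAMPORTS_PER_SOL
    print(f"{size:>6} bytes -> {sol:.8f} SOL locked")
```

Fractions of a cent per account: cheap enough to spam state into existence, too cheap for anyone to bother reclaiming it.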
It is unclear whether a good solution even exists, but one thing is clear: the band-aid of “ZK state compression” is not it. The solution has to be transparent to both users and the developers of smart contract applications. The worst part is that Solana declares the issue out of scope. This is something that should be addressed in the mempool or the fee market, but Solana does not have an in-protocol mempool.
As a result, nasty out-of-protocol solutions develop. Involving TEEs in block building is a horrific idea and definitely a step in the wrong direction. TEEs are fundamentally anti-freedom technology, part of the decades-long war on general computation. The notion that there can be a “trusted zone” inside our devices that we cannot control is absurd. The only people who still claim this is possible are spooks or have bags to shill.
The key takeaway here is that the scaling of smart contract chains is limited by the ability to tell good state from bad state. Until this problem is solved, batch verification of transactions on these networks will be memory-bound. This matters because, as discussed earlier, there is no way around batch verification of transactions as the bottleneck to scaling; proof aggregation in front of consensus just creates new bottlenecks.
It is a fundamental misunderstanding to think state contention is the limiting factor for scaling privacy chains. Scaling privacy is compute-bound, not memory-bound. Transaction proof verification takes on the order of tens of milliseconds, while smart contract chains are mainly starved for data. On Solana or Ethereum the node CPU spends most of its time waiting for data to arrive from memory, while the time to verify a transaction signature is minuscule. On Monero the CPU spends most of its time verifying transaction proofs, not waiting for memory.
Monero's upcoming hardfork to FCMP++ is based on the Curve Trees paper, which enables efficient verification of transactions in batches. There is already follow-up work with further improvements, called Curve Forests, which is a candidate for a future upgrade.
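To illustrate why batching helps a compute-bound chain, here is a minimal sketch of the generic idea: combine many verification equations into one using random coefficients, so that n checks collapse into roughly one combined check. This is not the FCMP++/Curve Trees construction itself, just the classic random-linear-combination trick, shown on toy Schnorr signatures over an insecure, illustration-only group:

```python
import hashlib
import secrets

# Toy parameters, far too small for real use: P = 2*Q + 1, both prime,
# G generates the order-Q subgroup of Z_P*.
P, Q, G = 2039, 1019, 4

def H(R: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(R.to_bytes(2, "big") + msg).digest(), "big") % Q

def sign(x: int, msg: bytes):
    k = secrets.randbelow(Q - 1) + 1
    R = pow(G, k, P)
    s = (k + H(R, msg) * x) % Q
    return R, s

def verify_one(y: int, msg: bytes, R: int, s: int) -> bool:
    # Standard check: g^s == R * y^e
    return pow(G, s, P) == R * pow(y, H(R, msg), P) % P

def verify_batch(items) -> bool:
    """items: list of (pubkey y, msg, (R, s)). All n checks are folded
    into one equation; the random z_i stop an invalid signature from
    cancelling out against another one in the batch."""
    lhs_exp, rhs = 0, 1
    for y, msg, (R, s) in items:
        z = secrets.randbelow(Q - 1) + 1
        e = H(R, msg)
        lhs_exp = (lhs_exp + z * s) % Q
        rhs = rhs * pow(R, z, P) * pow(y, (z * e) % Q, P) % P
    return pow(G, lhs_exp, P) == rhs

# demo: three independent signers, verified individually and as one batch
batch = []
for i in range(3):
    x = secrets.randbelow(Q - 1) + 1  # secret key
    y = pow(G, x, P)                  # public key
    msg = f"tx{i}".encode()
    batch.append((y, msg, sign(x, msg)))

assert all(verify_one(y, m, *sig) for y, m, sig in batch)
assert verify_batch(batch)
print("batch of", len(batch), "signatures verified in one combined check")
```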
Meanwhile, Zcash has locked itself into Project Tachyon, which is premised on the wrong assumption that state contention is the bottleneck to scaling privacy. So much hype has been built around it that they can't back out of this corner without spooking their holders.
They claim Tachyon will enable scaling to billions, ignoring the fact that other projects have claimed the same and come up short. The Mina Protocol launched in 2021 on similar claims of scaling to mass adoption with the help of aggregating proofs in front of the consensus stage.
It turns out that in practice there are lots of caveats to this approach, and to this day Mina's TPS are nothing to write home about. I have followed the reporting around Project Tachyon closely and have yet to see any credible benchmarks or reports on how it performs in practice. It is telling that nobody bothered to ask how many transactions per second Tachyon can achieve before the Zcash community started cheerleading for it.
Eventually, outlandish scaling pitches will have to face reality. Real users want a good experience, and that is measured in transactions per second.
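And the metric is trivial to compute: transactions per block divided by the block interval. The figures below are rough, assumed averages, but the point is that this number can and should always be stated:

```python
# Ballpark TPS = transactions per block / block interval.
# Both figures per chain are rough assumed averages, for illustration.
chains = {
    "Bitcoin":  (3_000, 600),  # ~3k txs per block, 10-minute blocks
    "Ethereum": (150, 12),     # ~150 txs per block, 12-second slots
}
for name, (txs, seconds) in chains.items():
    print(f"{name}: ~{txs / seconds:.1f} TPS")
```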
If you tried to buy a car and the dealer told you, “Miles per hour don't matter anymore, this car can go at infinite speed,” wouldn't you get suspicious and buy a car elsewhere?
When someone says TPS don't matter anymore, or never mentions the expected TPS of an upgrade, you know they are lying. At some point the average crypto consumer will be educated on this simple fact, and asking “how many transactions per second can it do?” will go without saying.
Just as it now goes without saying to ask “how many miles per hour can it do?” when buying a car.