In recent days, the Ethereum layer-2 scaling project MegaETH has surged in popularity, largely due to its impressive roster of investors, including Vitalik and a host of well-known venture capitalists.
About a month ago, a friend mentioned this project to me. At the time there wasn’t much public information, and some details remained unclear. With the recent surge in attention, however, far more detail has emerged.
Two aspects of this project have made a significant impression on me:
First, it is the first Ethereum layer-2 project to commit to specific performance targets.
Second, its whitepaper comprehensively surveys the methods available for scaling blockchains (Ethereum layer-2s included), and backs up critical details, such as where the performance bottlenecks lie, with experimental data.
Performance has been a metric that Ethereum layer-2 projects have emphasized over the past few years, but most have focused on improving it only in specific areas or through particular techniques.
For instance, the OP camp emphasizes “fraud proofs” to enhance layer-2 performance, while the ZK camp focuses on making proof generation more efficient. Both then combine these techniques with a degree of centralization (such as a centralized sequencer) to achieve high throughput.
After these projects launched, once it became apparent that their performance gains were limited (far smaller than anticipated), they shifted their focus elsewhere, such as building out their ecosystems and supporting the projects within them.
I fully endorse this approach of developing and supporting the ecosystem.
However, the emergence of MegaETH has made me realize that the pursuit of raw performance among layer-2s has gradually faded.
From Ethereum’s perspective, scaling seems to have shifted toward simply increasing the number of layer-2s: as their number grows, Ethereum as a whole naturally processes more transactions per unit of time, which is indeed a form of performance improvement.
Yet, this kind of improvement feels somewhat forced and lacks a solid core.
MegaETH refocuses on hard-core technology to push performance, a style that has been absent from the ecosystem for some time.
MegaETH’s whitepaper is worth reading for its thorough treatment of every technical detail; such exhaustive coverage has become rare in project whitepapers. It reads more like a review paper on the current state of blockchain performance scaling.
Ordinary readers can skip the technical details and still follow the team’s logic, thinking, and plans.
In summary, after reading the whitepaper, readers should have a good grasp of the angles and means by which the team intends to reach the claimed 100,000 TPS for this layer-2.
Whether this goal can be achieved remains to be seen in future products.
Overall, the project appears to adopt a strategy of node specialization: the various functions of the layer-2 are divided among different node types, each running on hardware matched to its task, so that the system’s performance can be pushed to the limits of that hardware.
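To make the idea of node specialization concrete, here is a minimal sketch. The role names, duties, and hardware profiles below are my own illustrative assumptions, not specifications taken from the MegaETH whitepaper:

```python
from dataclasses import dataclass

@dataclass
class NodeRole:
    name: str        # what the node is called
    duties: str      # which part of the L2 pipeline it handles
    hardware: str    # the hardware class it is expected to run on

# Illustrative role split; names, duties, and hardware profiles are
# assumptions for the sake of the example, not MegaETH's actual spec.
ROLES = [
    NodeRole("sequencer", "order and execute transactions", "high-end server, large RAM"),
    NodeRole("prover", "generate validity proofs", "GPU/accelerator farm"),
    NodeRole("full node", "re-execute blocks and verify results", "commodity server"),
    NodeRole("replica", "apply state diffs and serve reads", "consumer-grade machine"),
]

for role in ROLES:
    print(f"{role.name}: {role.duties} -> {role.hardware}")
```

The payoff of such a split is that only the node type doing the heavy lifting needs expensive hardware; every other participant can keep verifying the chain on modest machines.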
This strategy reminds me of an earlier article by Vitalik about the future classification of Ethereum nodes.
In that plan, Vitalik envisioned a classification of Ethereum nodes:
Some nodes, requiring high-efficiency transaction processing and block generation, would use high-performance hardware and need to stake 32 ETH;
Others, serving merely as block validators, would use ordinary hardware (even embedded devices) and need to stake only a small amount of ETH.
This would satisfy Ethereum’s mainnet performance requirements while maintaining decentralization as much as possible.
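As a rough illustration of that two-tier idea: the 32 ETH figure comes from the proposal just described, while the small-stake figure and the role labels below are placeholders I have invented for the example:

```python
from dataclasses import dataclass

@dataclass
class NodeClass:
    role: str
    min_stake_eth: float   # stake floor for taking on this role
    hardware: str

# 32 ETH is from the proposal described above; the small-stake figure
# and role labels are placeholder assumptions, not Ethereum parameters.
CLASSES = [
    NodeClass("block producer", 32.0, "high-performance server"),
    NodeClass("block validator", 1.0, "ordinary PC or even an embedded device"),
]

def eligible_roles(stake_eth: float) -> list[str]:
    """Return every role a node with the given stake could take on."""
    return [c.role for c in CLASSES if stake_eth >= c.min_stake_eth]

print(eligible_roles(2.0))   # -> ['block validator']
print(eligible_roles(32.0))  # -> ['block producer', 'block validator']
```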
I wonder whether MegaETH’s approach resonated with Vitalik and prompted his involvement in the project.
Of course, I still have some questions about the project, such as its treatment of the sequencer: does it always rely on one designated sequencer, or does it pick one from many candidates by sampling? This detail seems to be missing from the whitepaper. If it is the former, how does the system avoid a single point of failure? (A toy sketch of the sampling alternative follows below.)
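Purely to illustrate what the sampling alternative could look like (this is my own toy model, not anything described in the MegaETH whitepaper), a rotation scheme might draw one sequencer per epoch from a candidate set using public randomness:

```python
import hashlib
import random

# Toy model of sampling-based sequencer selection; candidate names and
# stake weights are invented for illustration only.
CANDIDATES = {"seq-A": 40, "seq-B": 35, "seq-C": 25}  # hypothetical stake weights

def pick_sequencer(epoch: int, public_seed: bytes) -> str:
    # Every honest node can recompute the same choice from public inputs.
    digest = hashlib.sha256(public_seed + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(digest)  # deterministic given seed and epoch
    names, weights = zip(*CANDIDATES.items())
    return rng.choices(list(names), weights=list(weights), k=1)[0]

for epoch in range(3):
    print(epoch, pick_sequencer(epoch, b"randomness-beacon-output"))
```

Because every node can recompute the draw from public inputs, no single machine has to be trusted as the permanent sequencer.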
In conclusion, MegaETH adds a high-performance member to the family of Ethereum layer-2s, an addition that is undoubtedly valuable for the ecosystem.
As for the investment value of the project (if it issues a token in the future), here’s my take:
Projects like MegaETH, which require substantial funding for research and development, can hardly operate without venture capital. That in turn means the project’s valuation (if it issues a token) must accommodate the interests of its venture investors.
Moreover, such projects are like “white horse” stocks, as transparent blue chips are called in Chinese markets: their value and significance are clear and straightforward for everyone to see.
Therefore, generally speaking, the appreciation potential of such projects (and their tokens) has a ceiling: value that everyone can already see tends to be priced in early, leaving little room for outsized surprises.
In my view, MegaETH’s significance for Ethereum, and especially for its layer-2 ecosystem, far outweighs its investment value.