Author: Faust, Geekweb3
Since the summer of 2023, Bitcoin Layer 2 has been one of the highlights of the entire Web3 space. Although this field emerged much later than Ethereum Layer 2, Bitcoin, with the unique appeal of Proof of Work (PoW) and the successful launch of spot ETFs, drew billions of dollars of capital attention to Layer 2 in just six months.
Among Bitcoin Layer 2 projects, Merlin, with billions of dollars in Total Value Locked (TVL), is undoubtedly the largest and most popular. With clear staking incentives and substantial returns, Merlin rose rapidly, building in just a few months an ecosystem growth story that surpasses even Blast's. As Merlin gains popularity, discussion of its technical solution has become a topic of growing interest.
In this article, Geekweb3 will focus on Merlin Chain’s technical solution and interpret its publicly available documents and protocol design ideas. We aim to help more people understand the general workflow of Merlin, gain a clearer understanding of its security model, and provide a more intuitive way to understand how this “top Bitcoin Layer 2” operates.
Merlin’s Decentralized Oracle Network: An Open Off-chain DAC Committee
For all Layer 2 solutions, whether it’s Ethereum Layer 2 or Bitcoin Layer 2, Data Availability (DA) and data publishing costs are among the most significant challenges to tackle. Due to the inherent limitations of the Bitcoin network, which does not support high data throughput, leveraging the limited DA space efficiently poses a challenge for Layer 2 projects.
One conclusion is evident: if Layer 2 “directly” publishes unprocessed transaction data to the Bitcoin blockchain, it cannot achieve high throughput or low fees. The most popular solutions either highly compress the data size and upload it to the Bitcoin blockchain or directly publish the data off-chain.
Among Layer 2 projects adopting the first approach, Citrea is perhaps the most well-known. They plan to upload state differences (state diff) of Layer 2, which are the state changes of multiple accounts, along with the corresponding zero-knowledge proofs (ZKPs), to the Bitcoin blockchain. In this case, anyone can download the state diff and ZKPs from the Bitcoin mainnet to monitor Citrea’s state changes. This method can compress the data size uploaded to the blockchain by over 90%.
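To make the idea of a state diff concrete, here is a minimal Python sketch (purely illustrative, not Citrea's actual encoding): many raw transactions touching the same accounts collapse into one net change per touched account, which is why the on-chain footprint shrinks.

```python
# Illustrative only: a "state diff" as net balance deltas per account.
transactions = [
    {"from": "alice", "to": "bob", "amount": 5},
    {"from": "bob", "to": "alice", "amount": 3},
    {"from": "alice", "to": "carol", "amount": 2},
]

def state_diff(txs):
    """Aggregate raw transactions into net balance changes per account."""
    diff = {}
    for tx in txs:
        diff[tx["from"]] = diff.get(tx["from"], 0) - tx["amount"]
        diff[tx["to"]] = diff.get(tx["to"], 0) + tx["amount"]
    return {acct: delta for acct, delta in diff.items() if delta != 0}

print(state_diff(transactions))  # {'alice': -4, 'bob': 2, 'carol': 2}
```

Three transactions become three small deltas here; in practice, thousands of transactions touching a limited set of accounts compress far more aggressively.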
Although this approach significantly compresses the data size, the bottleneck is still apparent. If a large number of accounts undergo state changes within a short period, Layer 2 would struggle to aggregate and upload all these account changes to the Bitcoin blockchain, resulting in high data publishing costs. This issue is visible in many Ethereum ZK Rollup solutions.
Many Bitcoin Layer 2 projects simply choose the second path: using off-chain DA solutions on the Bitcoin blockchain. They either build their own DA layer or utilize solutions like Celestia and EigenDA. B^Square, BitLayer, and our protagonist in this article, Merlin, all follow this off-chain DA scaling solution.
In our previous article, “Analyzing B^2’s New Technical Roadmap: The Necessity of Bitcoin’s Off-chain DA and Verification Layer,” we mentioned that B^2 directly emulates Celestia and builds an off-chain DA network called the B^2 Hub. DA data, such as transaction data or state diffs, is stored off the Bitcoin chain, and only the data hash/Merkle root is uploaded to the Bitcoin mainnet.
Essentially, this treats Bitcoin as a trustless bulletin board: anyone can read data hashes from the Bitcoin blockchain. When you obtain DA data from an off-chain data provider, you can check whether it corresponds to the data hash on-chain, i.e., whether hash(data1) == datahash1. If they match, the data provided by the off-chain provider is correct.
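As a minimal sketch of this check (SHA-256 is assumed here purely for illustration; the article does not specify which hash function Merlin or B^2 actually uses), the verification amounts to a single comparison:

```python
import hashlib

def verify_da_data(off_chain_data: bytes, on_chain_hash: str) -> bool:
    """Recompute the hash of data fetched off-chain and compare it with the
    hash the Sequencer inscribed on Bitcoin. A match means the off-chain
    provider did not tamper with the data."""
    return hashlib.sha256(off_chain_data).hexdigest() == on_chain_hash

# Hypothetical usage: 'batch' comes from an off-chain DA node,
# 'datahash1' was read from the Bitcoin chain.
batch = b"layer2 data batch #1"
datahash1 = hashlib.sha256(batch).hexdigest()
assert verify_da_data(batch, datahash1)
```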
This process ensures that the data provided by off-chain nodes is associated with certain “clues” on Layer 1, preventing malicious data provision in the DA layer. However, there is an important malicious scenario to consider: what if the data source, the Sequencer, does not actually release the data corresponding to the data hash? Instead, they only publish the data hash on the Bitcoin blockchain and intentionally withhold the corresponding data from being read. What happens in such cases?
Similar scenarios include, but are not limited to, only publishing ZK-Proof and StateRoot without releasing the corresponding DA data (state diff or transaction data). Although users can verify the ZK-Proof and confirm the validity of the calculation process from Prev_Stateroot to New_Stateroot, they do not know which accounts’ states have changed. In this situation, although users’ assets are secure, they cannot determine the actual state of the network, unaware of which transactions have been included on-chain and which contract states have been updated. In such cases, Layer 2 is essentially equivalent to being offline.
This is known as “data withholding.” In August 2023, Dankrad from the Ethereum Foundation briefly discussed a similar issue on Twitter, primarily focusing on something called “DAC.”
Many Ethereum Layer 2 projects that adopt off-chain DA solutions set up a committee with special privileges, known as the Data Availability Committee (DAC). This committee acts as a guarantor, publicly attesting that the Sequencer has indeed published complete DA data (transaction data or state diffs) off-chain. The DAC members collectively generate a multisignature, and as long as it meets the threshold requirement (e.g., 2 of 4), the Layer 1 contracts assume that the Sequencer has passed the DAC’s inspection and honestly published complete DA data off-chain.
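The threshold rule itself is simple. The sketch below assumes a hypothetical 2-of-4 committee and models each "signature" as a boolean approval rather than a real cryptographic signature; an actual Layer 1 contract would of course verify the multisignature cryptographically.

```python
# Minimal sketch of the threshold rule a Layer 1 contract applies.
# DAC membership and threshold are hypothetical example values.
DAC_MEMBERS = {"dac_a", "dac_b", "dac_c", "dac_d"}
THRESHOLD = 2

def dac_attestation_passes(approvals: dict) -> bool:
    """approvals maps a DAC member id to True if it signed off on the batch."""
    valid = sum(1 for member, signed in approvals.items()
                if member in DAC_MEMBERS and signed)
    return valid >= THRESHOLD

print(dac_attestation_passes({"dac_a": True, "dac_c": True}))    # True
print(dac_attestation_passes({"dac_a": True, "unknown": True}))  # False
```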
DAC committees in Ethereum Layer 2 solutions typically follow the Proof of Authority (PoA) model, allowing only a few nodes that have undergone KYC or are officially designated to join the DAC committee. This has made DAC synonymous with “centralized” and “permissioned” in many cases. Additionally, in certain Ethereum Layer 2 projects that adopt the DAC model, the sequencer only sends DA data to DAC member nodes and rarely uploads data elsewhere. To access DA data, one must obtain permission from the DAC committee, making it similar to a permissioned blockchain.
Undoubtedly, the DAC should be decentralized. While Layer 2 does not need to upload DA data directly to Layer 1, admission to the DAC committee should be open to the public to prevent collusion among a handful of members. (For discussions of malicious DAC scenarios, refer to Dankrad’s earlier statements on Twitter.)
Celestia previously proposed BlobStream, which essentially replaces the centralized DAC. Ethereum Layer 2 sequencers can publish DA data to Celestia’s chain, and if 2/3 of Celestia nodes sign it, the Layer 2 contracts deployed on Ethereum consider the sequencer to have honestly published DA data. This effectively makes Celestia nodes act as guarantors. Considering Celestia has over a hundred Validator nodes, we can consider this large-scale DAC to be more decentralized.
Merlin’s DA solution is actually quite similar to Celestia’s BlobStream, both using Proof of Stake (PoS) to open the DAC admission and make it more decentralized. Anyone who stakes enough assets can run a DAC node. In Merlin’s documentation, these DAC nodes are referred to as Oracles, and it is stated that assets such as BTC, MERL, and even BRC-20 tokens can be staked to achieve a flexible staking mechanism. It also supports proxy staking similar to Lido. (The PoS staking protocol for oracles will be one of Merlin’s core narratives, offering relatively high staking rates, among other things.)
Here, we briefly describe Merlin’s workflow (image below):
After receiving a large number of transaction requests, the Sequencer aggregates them into a data batch, which it sends to Prover nodes and Oracle nodes (the decentralized DAC).
Merlin’s Prover nodes are decentralized and use Lumoz’s Prover-as-a-Service. After receiving multiple data batches, the Prover pool generates the corresponding zero-knowledge proofs (ZKPs) and sends them to the Oracle nodes for verification.
Oracle nodes verify the ZK proofs sent by Lumoz’s ZK pool, checking whether they correspond to the data batch sent by the Sequencer. If the two match and contain no other errors, verification passes. During this process, the decentralized Oracle nodes produce a multisignature via threshold signatures, publicly declaring that the Sequencer has fully published the DA data and that the corresponding ZKP is valid and has passed the Oracle nodes’ verification.
The Sequencer collects the multisignature results from Oracle nodes. When the number of signatures meets the threshold requirement, the Sequencer publishes this signature information to the Bitcoin blockchain along with the data hash of the DA data (data batch), allowing external parties to read and confirm it.
In addition, the Oracle nodes apply special processing to the ZK proof verification workflow: they generate a Commitment and send it to the Bitcoin blockchain, allowing anyone to challenge it. The process is essentially the same as BitVM’s fraud proof protocol. If a challenge succeeds, the Oracle node that published the Commitment is economically penalized.
Of course, the data the Oracle Network publishes to the Bitcoin chain also includes the hash of the current Layer 2 state, the StateRoot, as well as the ZKP itself, so that external parties can verify them.
There are several details worth explaining here. First, Merlin’s roadmap mentions that in the future, the Oracle Network will back up DA data to Celestia. This way, Oracle nodes can prune local historical data as needed and do not have to keep it locally forever. Second, the Commitment generated by the Oracle Network is actually the root of a Merkle Tree. Disclosing only the root is not enough; the complete dataset behind the Commitment must also be made public, which requires a third-party DA platform such as Celestia, EigenDA, or another DA layer.
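Because the Commitment is a Merkle root, proving that any particular data batch belongs to it only takes a logarithmic-size inclusion proof against the root posted on Bitcoin. Below is a generic Merkle construction in Python; SHA-256 and duplicating the last node on odd levels are assumptions for illustration, since Merlin's documents do not specify the actual tree layout.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over hashed leaves (odd levels duplicate the last node)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from the leaf up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    """Check that 'leaf' is part of the dataset committed to by 'root'."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

batches = [b"batch-0", b"batch-1", b"batch-2", b"batch-3"]
root = merkle_root(batches)          # the Commitment posted on Bitcoin
proof = merkle_proof(batches, 2)     # fetched from an off-chain DA layer
assert verify_inclusion(b"batch-2", proof, root)
```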
Security model analysis: Optimistic ZK-Rollup + Cobo’s MPC service
Above, we briefly described the workflow of Merlin, and we believe that everyone already has a basic understanding of its structure. It can be seen that Merlin, B^Square, BitLayer, and Citrea all follow the same security model – Optimistic ZK-Rollup.
At first glance, the term “Optimistic ZK-Rollup” may seem strange to many Ethereum enthusiasts. In the Ethereum community’s understanding, the “theoretical model” of ZK Rollup is entirely based on the reliability of cryptographic calculations and does not require the introduction of trust assumptions. However, the term “optimistic” introduces trust assumptions, which means that people have to optimistically believe that Rollup does not have any errors most of the time and is reliable. And once an error occurs, the Rollup operator can be punished through fraud proofs. This is the origin of Optimistic Rollup, also known as OP Rollup.
For the Ethereum ecosystem, Optimistic ZK-Rollup may seem out of place, but it fits the current situation of Bitcoin Layer 2. Due to technical limitations, the Bitcoin chain cannot fully verify ZK Proofs. It can only verify a certain step of the ZKP calculation process under special circumstances. Under this premise, the Bitcoin chain can only support fraud proof protocols. People can point out errors in a certain calculation step of ZKP during off-chain verification and challenge them through fraud proofs. Of course, this cannot match Ethereum-style ZK Rollup, but it is already the most reliable and secure security model that Bitcoin Layer 2 can currently achieve.
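To illustrate what "verifying a single step" means, here is a toy sketch. The assumption (purely for illustration, not Merlin's actual circuit) is that the off-chain verification can be expressed as a trace of tiny steps; a challenger then only needs to point at one step whose output does not match re-execution.

```python
# Toy sketch of the single-step check a fraud proof relies on.

def step(state: int) -> int:
    """One tiny step of the claimed computation (placeholder logic)."""
    return state * 3 + 1

def honest_trace(start: int, n: int):
    trace = [start]
    for _ in range(n):
        trace.append(step(trace[-1]))
    return trace

def challenge(claimed_trace, i: int) -> bool:
    """True if the challenge succeeds, i.e. step i was computed incorrectly."""
    return step(claimed_trace[i]) != claimed_trace[i + 1]

trace = honest_trace(2, 5)
bad_trace = list(trace)
bad_trace[3] += 7                                       # the operator cheats at step 3
print(any(challenge(bad_trace, i) for i in range(5)))   # True: fraud detected
print(any(challenge(trace, i) for i in range(5)))       # False: honest trace survives
```

The point is that Layer 1 never re-runs the whole computation; it only needs to be able to re-execute the one disputed step, which is all the Bitcoin chain can realistically support today.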
In the optimistic ZK-Rollup scheme described above, suppose there are N parties in the Layer 2 network with the authority to initiate challenges. As long as at least one of these N challengers is honest, can detect errors, and can initiate a fraud proof at any time, the state transitions of Layer 2 are secure. Of course, for an optimistic Rollup to be more complete, its withdrawal bridge must also be protected by a fraud proof protocol. At present, almost no Bitcoin Layer 2 can achieve this; they rely instead on multisignature/MPC schemes, so the choice of multisignature/MPC scheme becomes a question closely tied to Layer 2 security.
Merlin has chosen Cobo’s MPC service as the bridging solution, using measures such as cold and hot wallet isolation. The bridged assets are jointly managed by Cobo and Merlin Chain, and any withdrawal actions require the participation of MPC participants from Cobo and Merlin Chain. Essentially, the reliability of the withdrawal bridge is guaranteed through institutional credit endorsement. Of course, this is only a temporary solution at the current stage. As the project gradually improves, the withdrawal bridge can be replaced by an “optimistic bridge” based on the 1/N trust assumption by introducing BitVM and fraud proof protocols. However, the implementation difficulty of this approach will be greater (currently, almost all official bridges in Layer 2 rely on multi-signatures).
Overall, we can summarize that Merlin solves the DA problem by introducing a PoS-based DAC with open admission, ensures the security of state transitions by introducing BitVM and fraud proof protocols, and guarantees the reliability of the withdrawal bridge by introducing Cobo’s well-known asset custody platform and its MPC service.
A two-step ZKP submission scheme based on Lumoz
Earlier, we outlined Merlin’s security model and introduced the concept of an optimistic ZK-Rollup. Merlin’s technical roadmap also mentions a decentralized Prover. As we know, the Prover is a core role in the ZK-Rollup architecture, responsible for generating ZK proofs for the batches issued by the Sequencer. Generating zero-knowledge proofs is computationally intensive and a challenging problem.
To accelerate the generation of ZK proofs, it is essential to parallelize and split the tasks. Parallelization means dividing the task of generating ZK proofs into different parts and assigning them to different Provers, which are then aggregated by an Aggregator into a complete proof.
To speed up ZK proof generation, Merlin will adopt Lumoz’s Prover-as-a-Service solution, which essentially gathers a large number of hardware devices into a mining pool, assigning computation tasks to different devices along with corresponding incentives, similar to PoW mining.
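The division of labor can be sketched as follows. The "proof" and "aggregation" here are stand-in hashes, purely to show how a batch is split across a worker pool and recombined; this is not how Lumoz's prover actually works internally.

```python
# Toy sketch: split proving work across a pool, then aggregate the results.
import hashlib
from concurrent.futures import ProcessPoolExecutor

def prove_chunk(chunk: bytes) -> bytes:
    """Stand-in for an expensive ZK proving task on one slice of the batch."""
    return hashlib.sha256(b"proof:" + chunk).digest()

def aggregate(sub_proofs) -> bytes:
    """Stand-in for the Aggregator combining sub-proofs into one final proof."""
    return hashlib.sha256(b"".join(sub_proofs)).digest()

if __name__ == "__main__":
    batch = [b"chunk-%d" % i for i in range(8)]
    with ProcessPoolExecutor() as pool:
        sub_proofs = list(pool.map(prove_chunk, batch))  # proving runs in parallel
    final_proof = aggregate(sub_proofs)
    print(final_proof.hex())
```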
In this decentralized Prover solution, there is a type of attack scenario known as a front-running attack: Suppose an Aggregator has assembled a ZKP and sends it out in order to receive a reward. Other Aggregators see the content of the ZKP and preemptively publish the same content, claiming that they generated the ZKP first. How to resolve this situation?
The most instinctive solution that may come to mind is to assign each Aggregator a specific task number. For example, only Aggregator A can take task 1, and others will not receive the reward even if they complete task 1. However, this method has a problem – it cannot withstand single point risks. If Aggregator A experiences a performance failure or goes offline, task 1 will remain incomplete. Moreover, assigning tasks to a single entity cannot improve production efficiency through competitive incentive mechanisms and is not a very good solution.
Polygon zkEVM previously proposed a method called Proof of Efficiency, which argued that different Aggregators should be encouraged to compete, with rewards allocated on a first-come, first-served basis: the Aggregator that submits the ZK proof to the chain first receives the reward. However, it did not address the issue of MEV front-running.
Lumoz adopts a two-step, verification-based ZK proof submission scheme. After an Aggregator generates a ZK proof, instead of immediately sending out the complete content, it publishes only the hash of the ZKP; in other words, it releases hash(ZKP + Aggregator address). Even if others see this hash, they do not know the corresponding ZKP content and cannot directly front-run it.
If someone simply copies the entire hash and publishes it ahead of others, it is meaningless because the hash includes the address of a specific Aggregator X. Even if Aggregator A publishes this hash first, when the preimage of the hash is revealed, everyone will see that it contains the address of Aggregator X, not A’s.
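A minimal commit-reveal sketch of this two-step submission is shown below; SHA-256 and simple byte concatenation are assumed purely for illustration, and Lumoz's actual hashing scheme may differ.

```python
import hashlib

def commit(zkp: bytes, aggregator_address: str) -> str:
    """Step 1: publish only a hash that binds the proof to its producer."""
    return hashlib.sha256(zkp + aggregator_address.encode()).hexdigest()

def reveal_is_valid(zkp: bytes, aggregator_address: str, published_hash: str) -> bool:
    """Step 2: once the preimage is revealed, anyone can check both that the
    hash matches and which Aggregator address it was bound to."""
    return commit(zkp, aggregator_address) == published_hash

# Aggregator X commits first.
zkp = b"\x01\x02\x03"                 # placeholder proof bytes
commitment = commit(zkp, "0xAggregatorX")

# A copycat A can re-publish the same hash, but the reveal still names X.
assert reveal_is_valid(zkp, "0xAggregatorX", commitment)
assert not reveal_is_valid(zkp, "0xAggregatorA", commitment)
```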
Through this two-step ZKP submission scheme, Merlin (Lumoz) can eliminate front-running during ZKP submission and thereby run a highly competitive incentive for zero-knowledge proof generation, improving the speed at which ZKPs are produced.
Merlin’s Phantom: Cross-chain Interoperability
According to Merlin’s technical roadmap, they will also support interoperability between Merlin and other EVM chains, following a similar implementation path as Zetachain. For example, if Merlin is the source chain and other EVM chains are the target chains, when a Merlin node detects a cross-chain interoperability request from a user, it will trigger the subsequent workflow on the target chain.
For instance, an EOA account controlled by the Merlin network can be deployed on Polygon. When a user publishes a cross-chain interoperability instruction on Merlin Chain, the Merlin network first interprets its content and generates the transaction data to be executed on the target chain; the Oracle Network then signs this transaction via MPC, producing its digital signature. Subsequently, Merlin’s Relayer node broadcasts the transaction on Polygon, completing the subsequent operations through the EOA account on the target chain. Once the requested operation is completed, the corresponding assets are forwarded directly to the user’s address on the target chain, and in theory they can also be bridged back to Merlin Chain.

This solution has several obvious advantages: it avoids the fees incurred when assets are moved through traditional cross-chain bridge contracts, and the security of cross-chain operations is guaranteed directly by Merlin’s Oracle Network without relying on external infrastructure. As long as users trust Merlin Chain, they can assume that such cross-chain interoperability is reliable.
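The flow can be sketched as follows. Every name in this sketch is hypothetical, and the MPC signature is simulated with a single shared secret, purely to show the sequence: interpret the instruction on Merlin, build the target-chain transaction, sign it, and relay it to Polygon.

```python
# Hypothetical sketch of Merlin-style cross-chain interoperability.
from dataclasses import dataclass
import hashlib

@dataclass
class CrossChainInstruction:
    user: str
    target_chain: str
    action: str          # e.g. "swap", "transfer"
    payload: dict

def build_target_tx(instruction: CrossChainInstruction) -> dict:
    """Translate the Merlin-side instruction into a tx for the target-chain EOA."""
    return {
        "chain": instruction.target_chain,
        "from": "0xMerlinControlledEOA",   # hypothetical Merlin-controlled EOA on Polygon
        "data": f"{instruction.action}:{instruction.payload}",
        "refund_to": instruction.user,
    }

def mpc_sign(tx: dict, shared_secret: bytes) -> str:
    """Placeholder for the Oracle Network's threshold/MPC signature."""
    return hashlib.sha256(repr(tx).encode() + shared_secret).hexdigest()

def relay(tx: dict, signature: str) -> None:
    """Placeholder for the Relayer broadcasting the signed tx on the target chain."""
    print(f"Relayer broadcasts to {tx['chain']}: {tx['data']} (sig={signature[:16]})")

instr = CrossChainInstruction("0xUser", "polygon", "swap", {"in": "MERL", "out": "USDC"})
tx = build_target_tx(instr)
relay(tx, mpc_sign(tx, b"oracle-network-secret"))
```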
Conclusion
In this article, we provided a brief explanation of Merlin Chain’s technical solution, which we believe will help more people understand the overall workflow of Merlin and gain a clearer picture of its security model. Considering the current thriving Bitcoin ecosystem, we believe such technical explanations are valuable and needed by the general public. We will continue to follow Merlin, BitLayer, B^Square, and other projects and provide in-depth analysis of their technical solutions in the future. Stay tuned!