Dialogue with AltLayer, Scroll, Starknet teams: Shared Sequencer and L2 Consensus

Introduction

When we look at the visions and roadmaps of various rollup solutions, we find that almost all rollups share an ultimate goal. Condensed into one sentence: build a technology stack, provide it to the community, solve blockchain scalability, and ultimately decentralize the operation and governance of that stack. This leads to the term "decentralized rollup."

So what exactly is decentralization? What is the division of labor between the various parts of a rollup system? Does decentralization mean maximizing the number of participants operating the system? What impact does a centralized sequencer have? How should shared sequencers and L2-local consensus be designed? What is the function of the prover unique to ZK-rollups? What would an open, decentralized prover network look like? Why do we need ZK hardware acceleration? What are the solutions to the data availability problem?

There are endless discussions around decentralized rollups in the community, so ECN curated a series of podcast interviews with the theme of "Decentralized Rollup," inviting outstanding founders and researchers in this field to share their understanding of decentralized rollups.

As more and more liquidity pours into Layer 2 platforms, more and more rollup providers appear: not only general-purpose rollup solutions and application-specific rollup chains, but also rollup-as-a-service platforms. Therefore, more and more people are concerned that a critical role in almost all rollups, the "sequencer," is still centralized. What are the risks of a centralized sequencer? Is decentralizing the sequencer an urgent task?

**In the second episode, we invited Yaoqi Jia, founder of AltLayer Network; Toghrul Maharramov, researcher at Scroll; and Abdelhamid Bakhta, Starknet Exploration Lead, for a roundtable discussion on decentralized sequencers, to help the audience and readers understand the current progress and dilemmas of decentralized sequencers.**


Guests in this issue:

  • Yaoqi Jia, founder of AltLayer Network, Twitter @jiayaoqi
  • Toghrul Maharramov, researcher at Scroll, Twitter @toghrulmaharram
  • Abdelhamid Bakhta, Starknet Exploration Lead, Twitter @dimahledba

Previous episodes

Issue 1: How to decentralize Rollup?

  • Arbitrum researcher Patrick McCorry

Upcoming episodes

Issue 3: Prover network and zk hardware acceleration

  • Ye Zhang, co-founder of Scroll
  • Leo Fan, co-founder of Cysic

Issue 4: Data Availability and Decentralized Storage

  • Qi Zhou, founder of EthStorage

Listen

Click to subscribe to the podcast to learn more:

Youtube:

Xiaoyuzhou:

Timestamps

  • 00:49 Yaoqi introduces himself

  • 01:37 Abdelhamid introduces himself

  • 02:50 Toghrul introduces himself

  • 04:03 The role of the sequencer in a rollup

  • 08:37 Decentralized sequencers: improving user experience and addressing liveness and censorship issues

  • 19:43 How Starknet will decentralize the sequencer

  • 22:59 How Scroll will decentralize the sequencer

  • 26:34 The difference between L2 consensus in the context of optimistic rollups and zkRollups

  • 32:28 Decentralizing the zkRollup sequencer also requires considering the prover

  • 36:01 What is a based rollup?

  • 40:53 Disadvantages of shared sequencers and based rollups, and their application scenarios

  • 49:02 What impact will a decentralized sequencer have on MEV?

Guest introduction

Yaoqi

I'm the founder of AltLayer. AltLayer is building a "Rollup as a Service" platform where developers simply click a few buttons and configure parameters; using our launchpad or control panel, they can launch application-specific rollups in minutes. That's what we're trying to do now: provide developers with a common execution environment and functionality. We also offer multiple sequencers, multiple virtual-machine systems, and various proof systems for developers to choose from.

Abdelhamid

I work at StarkWare and I lead the Exploration team. The goal of this team is to launch open-source projects that are research-like but with an engineering focus, working in close collaboration with the community and with people from other ecosystems. One such project is Madara, which is a Starknet sequencer. It is not only a candidate for the Starknet public network, but also a sequencer for Starknet app-chains and Layer 3s. This relates to what the previous guest said: we are also thinking about providing rollup-as-a-service functionality, where people can roll out their own Starknet app-chain and choose different data availability solutions in a somewhat modular way. Before that, I worked as an Ethereum core developer for four years, mainly on EIP-1559.

Toghrul

I'm a researcher at Scroll, and my main responsibilities are protocol design, bridge design, decentralization, incentives, that sort of thing. So when I'm not tweeting, most of the time I'm working on how to decentralize the protocol: sequencers, provers, and so on. Like StarkWare, one of the things we're working on is a rollup SDK, so you can launch a rollup based on Scroll and modularly choose different data availability options and so on. We are also considering an option where rollups built on the Scroll SDK can use Scroll's sequencer to achieve decentralization, without each rollup having to handle decentralization by itself. Of course, the plan has not been finalized yet, but this is the direction we are working on.

Interview section

Topic 1

What is a rollup's sequencer?

Abdelhamid

The sequencer is a very important component in the Layer 2 architecture: it receives transactions from users, packages and bundles them into blocks, executes the transactions, and so on. It is critical because it is the component responsible for creating blocks, since Layer 2 is also a blockchain with transaction blocks. Sequencers create these blocks, and provers attest to them.

Yaoqi

As Abdel mentioned, the sequencer combines many functions of a blockchain. Compared to a typical public blockchain, we may actually be giving the sequencer too much responsibility now. It first needs to aggregate all transactions from users, and then order those transactions, either by gas price or on a first-come, first-served basis. Afterwards, the sequencer needs to execute these transactions. Some Layer 2s use the EVM (StarkWare has a different virtual machine), but either way the sequencer needs a dedicated virtual machine to execute transactions and generate state. The transaction then reaches a pre-confirmation stage: if you see a confirmation time of one or two seconds, or even sub-second, that is basically a pre-confirmation completed by the sequencer. Then, most sequencers also need to upload or publish checkpoints or state hashes to L1. That is confirmation at the L1 level, which relates to data availability. So the sequencer actually plays many roles in the rollup system, and in general I think it is the most critical component of the rollup system.
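The pipeline Yaoqi describes (aggregate, order, execute, pre-confirm, checkpoint to L1) can be sketched in a few lines. This is a hypothetical illustration only; the class and method names are invented, execution is elided, and real sequencers are vastly more involved.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tx:
    sender: str
    gas_price: int
    nonce: int

@dataclass
class Sequencer:
    """Toy sketch of the sequencer pipeline described above (all names hypothetical)."""
    mempool: List[Tx] = field(default_factory=list)
    pre_confirmed: List[List[Tx]] = field(default_factory=list)
    l1_checkpoints: List[str] = field(default_factory=list)

    def receive(self, tx: Tx) -> None:
        # 1. aggregate transactions from users
        self.mempool.append(tx)

    def build_block(self, by_gas_price: bool = True) -> List[Tx]:
        # 2. order: by gas price, or first-come first-served
        txs = (sorted(self.mempool, key=lambda t: -t.gas_price)
               if by_gas_price else list(self.mempool))
        self.mempool.clear()
        # 3. execute in a VM (elided) and 4. pre-confirm to users within seconds
        self.pre_confirmed.append(txs)
        return txs

    def post_to_l1(self, block: List[Tx]) -> str:
        # 5. publish a checkpoint / state hash to L1 for data availability
        checkpoint = f"state_hash({len(block)} txs)"
        self.l1_checkpoints.append(checkpoint)
        return checkpoint
```

Note how steps 1 through 4 happen entirely off-chain, which is exactly why users experience sub-second "confirmation" long before L1 finality.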

**Topic 2**

Why is a decentralized sequencer important? If we use a centralized sequencer, what are the hidden risks to users and the system?

Toghrul

First of all, we need to know that at the current stage there is no real rollup except Fuel V1, because all other rollups still have training wheels.

However, we can say that once something qualifies as a rollup, the multisig is removed and so on. Then sequencer decentralization becomes a user-experience problem, not a security problem. When people talk about decentralizing an L1, the problem is completely different, because an L1 has to provide guarantees for ordering and for light clients: if a light client wants to verify that the state is correct, it must trust the L1 validators. For a rollup, this is not the case, because the sequencer only provides temporary ordering, which is then finalized by L1. And because rollups also guarantee censorship resistance, we don't need decentralization to achieve that.

So, there are several reasons why you need a decentralized sequencer. First, if L1 finalization is slow (either because submitting validity proofs is slow, or because of the challenge period of optimistic rollup fraud proofs), you have to rely on something for fast transaction confirmation. At this fast-confirmation stage, you can trust that StarkWare or Scroll will not deceive you when they say a confirmed block will not be reorganized, but that is a trust assumption. Decentralization can add some economic guarantees on top of it, and so on.

On top of that, rollups also have no real-time censorship-resistance guarantee. You can force a transaction's inclusion through L1, but it takes hours for that transaction to be included. So, for example, if an oracle responsible for updates is delayed or censored for an hour or more, the applications in the rollup that depend on it become unusable. Essentially, decentralization lets us provide stronger real-time censorship-resistance guarantees, because an adversary would then need to compromise not just one entity or a handful of entities, but the entire sequencer network.

So for us, decentralization is more of a solution for improving user experience and fixing corner cases like oracle updates, rather than for providing basic security guarantees; that is what L1 does.

Abdelhamid

Yes, the question of the decentralized sequencer is not exactly the same as decentralizing an L1, which I think is a very important point. Because when we see some L1s criticizing centralized sequencers, they don't properly look at the trade-offs that centralized sequencers make.

On this basis, I would like to add something related to user experience and liveness. When you only have a single sequencer, you are at greater risk of the sequencer crashing, so a decentralized sequencer also increases the network's resilience and liveness. But even in a centralized context, we have good guarantees when it comes to security, because you can force the inclusion of transactions through L1; the difference between the two is only the timeline. And having a decentralized sequencer gives you fast censorship-resistance guarantees, as Toghrul mentioned. So I just want to add that a decentralized sequencer network is also important for liveness.

Yaoqi

I would like to add something. Liveness is probably the most important thing we need to consider at this stage. During the recent airdrops on the most popular L2s, such as Optimism and Arbitrum, both saw periods of downtime. So what we need to solve is how to handle thousands of transaction requests per second when we only have one sequencer. Even in theory, a single node can't really handle that many requests at the same time. So, for liveness, we definitely need multiple sequencers. A single point of failure is a real obstacle, and not just for Web3; it's a big problem even in Web2.

Beyond that, there is the issue of censorship. If we only have one sequencer, even if it is run by the team, you still need to prove that the team won't actually censor transactions; malicious parties sometimes have the ability and incentive to blacklist certain transactions. In a decentralized sequencer system, by contrast, users can try to send transactions through other sequencers. That's why we've seen a lot of criticism of single sequencers lately.

Beyond that, there are other issues like MEV and front-running. In a system with a centralized sequencer, especially for DeFi protocols, the operator can easily inspect the mempool. Perhaps not front-running in the strict sense, but they have a better chance of back-running trades and arbitraging them.

Many of these problems remain, for various reasons, even though L2 is very different from L1. Ultimately, we still need to make L2 as decentralized as possible, so we have to face some of the same issues that public blockchains or L1s face.

Abdelhamid

Yes, I agree that a decentralized sequencer is important. But I also want to say that, as we all know, this is not an easy problem.

Also, **rollups have a very specific architecture with multiple entities: there's the single sequencer we're talking about, but there's also a prover, and we need to decentralize both.** There will be trade-offs and some difficulty in how to price transactions, because different resources are needed to run the network. So, how do you price a transaction? The sequencer and the prover have different hardware requirements; the prover needs a very powerful machine, and so on. Pricing transactions in a decentralized world is therefore also a very difficult problem, which is why we need time to move forward gradually.

So we will all face this trade-off. If we want to decentralize quickly, we may need to keep some training wheels and decentralize gradually, because aiming directly for a perfect architecture would take several years. So I think we'll take a pragmatic approach and decentralize step by step. At least that's our current plan: for example, start with a simple BFT consensus mechanism and then add another consensus mechanism in the short term. So I just want to say, it's not an easy problem, because there is obviously a trade-off between development speed and how suitable the architecture is for a decentralized environment.

Topic 3

How can the sequencer be decentralized?

Abdelhamid

There are many features that we want to decentralize, and all of them have different tradeoffs.

For example, when decentralizing, you want to introduce some kind of consensus mechanism, and a consensus mechanism has multiple parts. The first is leader election: how to choose who will create blocks, who will be the sequencer responsible for creating blocks in a given slot or time window. **What the Starknet team plans to do is utilize Layer 1 as much as possible. That is, for our leader election algorithm, we want staking on Layer 1: we have tokens, and staking will happen on a Layer 1 Ethereum smart contract, which is used to drive the leader election mechanism.** This means we need some interaction where the L2 sequencer queries the Ethereum smart contract to learn who the next leader will be, and obviously some kind of randomness is needed as well. So it's not a simple question, but that's the first part. Then you need a mechanism for the consensus itself. There are multiple options: a longest-chain mechanism, BFT, or a hybrid of the two, like Ethereum, which has LMD GHOST plus Casper FFG for finality.
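The leader-election part Abdelhamid describes can be sketched as a deterministic, stake-weighted draw. This is a hypothetical illustration, not Starknet's actual design (which is still being specified): the function name, the in-memory `stakes` dict standing in for the L1 staking contract, and the SHA-256-based randomness are all assumptions.

```python
import hashlib

def elect_leader(stakes: dict, slot: int, seed: bytes) -> str:
    """Stake-weighted leader election sketch (hypothetical).

    `stakes` stands in for balances read from an L1 staking contract;
    `seed` stands in for a shared randomness beacon. Every honest node
    evaluating this with the same inputs agrees on the same leader.
    """
    total = sum(stakes.values())
    # Deterministic pseudo-randomness from (seed, slot)
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    target = int.from_bytes(digest, "big") % total
    # Walk the (sorted, hence canonical) stake distribution to find the winner
    cumulative = 0
    for validator, stake in sorted(stakes.items()):
        cumulative += stake
        if target < cumulative:
            return validator
    raise RuntimeError("unreachable: target < total by construction")
```

A validator with 60% of the stake wins roughly 60% of slots over time, while determinism ensures all sequencers agree on who leads each slot without extra communication.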

So we might take a pragmatic approach and start with BFT first. Why? When a Layer 2 considers decentralization, the goal is not to have a sequencer set as large as Layer 1's validator set. On Ethereum, for example, the goal is to have millions of validators participating. In that case, you cannot just use a BFT mechanism, because it would be very inefficient and cause very big problems. For example, if there were a problem on the Ethereum network and it used a pure BFT mechanism, the chain would stop completely. But that is not the case: the network continues to create blocks, and only the finality mechanism is attacked.

But in Layer 2, if the target is a few hundred to a thousand sequencers, it might be good to start with a BFT mechanism. So, we have the leader election mechanism, then the consensus itself, and then there are two other parts, which I'll leave for the other guests to cover: state updates and proof generation.

Toghrul

First, in L2, decentralization is a multifaceted problem, as Abdel described. Especially in a zkRollup, because there are provers and sequencers, you have to coordinate between them and decentralize both. This problem is completely different from L1.

Another difference is that in L2, all your design is to convince the underlying bridge that the consensus is correct, not to convince any number of other nodes. You should obviously do the same, but your main focus should be the bridge itself.

Currently, we are working in two different directions. The first, like everyone else, is a BFT protocol. This isn't very efficient and there are some kinks to work out; we came up with a rough solution, but it's still not optimal. For example, one question is how you balance incentives between sequencers and provers. Because the sequencer controls MEV and the prover has no access to MEV, people have an incentive to run a sequencer instead of a prover. But in reality, we need more provers than sequencers, so how do you balance the incentives between the two? That's one of the problems.

The second direction we are working on is different. Remember, things may change; new proposals come out every day. For example, there's been a lot of talk lately about based rollups and completely outsourcing sequencing to L1. This second direction is to basically ditch the sequencer entirely and use external builders, for example Ethereum builders or Flashbots' SUAVE, to propose ordered blocks, and then run consensus among the provers. The advantage is that you don't have to deal with incentives, because you can basically use in-protocol PBS, which makes for a simpler protocol. The disadvantage is that since we need a large number of provers (because we can prove in parallel), it is quite difficult to run a classical BFT protocol among them. So the question is how you optimize an existing BFT protocol to run with hundreds, or even thousands, of provers, and that's not an easy question to answer.

Is introducing L2 consensus necessary for a decentralized sequencer?

Yaoqi

I can roughly answer this question because we just recently rolled out something like that.

Whether to introduce consensus does not depend on whether we want it. Once you have many sequencers, or even just many nodes, you have to get them to agree, and it really depends on your assumptions. Under a Byzantine assumption, we can use BFT or any existing Byzantine consensus protocol. In a non-Byzantine setting, for example if we assume a node can only go online and offline but cannot act maliciously, we can use the Raft protocol or some other faster consensus protocol. Either way, if we have a group of sequencers or provers and want to organize them to produce blocks over time, you have to have a consensus protocol among them.

So, as Toghrul and Abdel just mentioned, there are many proposals and research topics around how to implement a decentralized sequencing or proving system. Because we have just launched a testnet for a multi-sequencer rollup system (it currently only supports fraud proofs for optimistic rollups), I can share some of the difficulties based on our design and implementation experience. As Toghrul said, the difficulty does not lie in the consensus protocol itself; the real difficulty lies in everything around it, such as the proving part. With a single sequencer, you don't need many nodes. We can think of it as an EVM, a virtual machine: it just fetches transactions, executes them, and performs state transitions, while the prover generates proofs for the state transition of the previous batch of transactions. However, if we run a consensus protocol among a rollup's sequencers and provers, we need to introduce additional consensus logic there, and on top of that, a proof system for the consensus protocol itself. This introduces a lot of extra work for the proof system to generate. Once you generate the proof, though, you can easily verify it on L1 Ethereum.

So that's why, in a way, optimistic rollups had some advantages when we launched the first multi-sequencer testnet. In general, you can simplify a lot of things, such as not worrying about the validity-proof part. In our case, we basically compile everything to WASM, so in the end it's all WASM instructions. Verifying those WASM instructions with Solidity code is relatively easy if we just reimplement the interpretation of all WASM instructions on Ethereum.

But in general, the problem is not singular. If we solve one problem, correspondingly, some other follow-up work needs to be solved at the same time. There will of course be MEV issues, such as how to distribute MEV fairly: you can allocate to all sequencers and provers based on whether they produced a block or validated a block. In the end, it's really a combination of many issues, not just technical ones, but economic incentives as well.

And we need to remember that L2 is proposed because we want to reduce the gas cost significantly. So we can't have so many nodes. Even in generating proofs, L2 may be more expensive than L1. So we really need to come up with a balanced approach to this kind of problem.

Abdelhamid

I would like to add one more point. First, there are currently no actual permissionless fraud proofs for optimistic rollups. I keep stressing this every chance I get, because it's important to be honest about this when comparing. So, strictly speaking, they are not L2s at all. That's the first thing.

Then I'd like to add something about the asynchrony between sequencing and proving, because it's very important. As you said, we want to optimize sequencing, because it is currently a bottleneck for all solutions. That's fine in the context of centralized sequencing, because we know the sequencer will only produce valid state transitions, and we'll be able to prove them. But it becomes harder with decentralized sequencing, because your sequencer might produce something that cannot be proven, and then you need to deal with that later.

In the context of centralized sequencing, since we don't have to generate proofs during sequencing, we can try to execute at native speed, which is what we want to do: compile Cairo through something like LLVM down to low-level machine code and run it very fast on the sequencer. Then you can prove asynchronously. And the coolest thing about proofs is that you can generate them in parallel; massive parallelism is achievable through recursive proving. That's why we will be able to catch up to the sequencer's speed. But it is difficult when decentralized, because we need to ensure that the sequencer only produces valid state transitions.

Toghrul

I'll add that I'm not sure what Starknet is doing here, but for us, and I'd guess it's a general assumption of every zkRollup, if you decentralize the sequencer, your proof system has to be able to handle invalid batches. So if, say, someone submits a batch with an invalid signature, you have to be able to prove that the resulting state is equivalent to the starting state. There will be some overhead either way; it's about minimizing the probability of this happening.
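The invariant Toghrul describes is that a provable state-transition function must be total: an invalid batch must deterministically yield a state equal to the starting state, so a validity proof can still be produced. A minimal sketch, with hypothetical names and a boolean flag standing in for real signature verification:

```python
def apply_batch(state: dict, batch: list) -> dict:
    """Total state-transition sketch for a zkRollup (hypothetical).

    `state` maps account -> balance. If any transaction in the batch is
    invalid (here a `valid_signature` flag stands in for any check), the
    whole batch is rejected and the result equals the starting state, so
    the prover can still attest: state' == state.
    """
    new_state = dict(state)
    for tx in batch:
        if not tx.get("valid_signature", False):
            return dict(state)  # invalid batch: provable identity transition
        sender, receiver, amount = tx["from"], tx["to"], tx["amount"]
        if new_state.get(sender, 0) < amount:
            return dict(state)  # insufficient balance: same treatment
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state
```

Because the function is defined for every input, there is no batch the sequencer can produce that the prover is unable to prove; the cost is the wasted proving overhead Toghrul mentions.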

Abdelhamid

Yes, that's right. That's why we introduced Sierra in Cairo 1, to make everything provable. This intermediate representation ensures that every Cairo 1 program is provable, so that we can include reverted transactions.

What is a based rollup?

Yaoqi

The based rollup idea originally came from a post by Justin Drake on the Ethereum Research forum. One of his ideas is that we can reuse Ethereum validators to sequence and verify rollup transactions, so that we don't need a separate group of nodes for each rollup. In particular, we will have many rollups in the future, including general-purpose rollups and many application-specific rollups. In that case, it would be great if we could find a common place, like the Ethereum validator pool, to handle these transactions.

Of course, this is just an idea, as it also introduces many technical difficulties. For example, in theory we could require Ethereum validators to verify rollup transactions, but it is very difficult to get rollup-verification logic bundled or embedded into the Ethereum protocol itself. We call this in-protocol verification, and it would require a hard fork of Ethereum nodes. In that case we could verify some rollup transactions, but you can see the problem: we want L2 rollups to share Ethereum's load, yet we end up asking Ethereum validators to take on some of the work that was offloaded to L2. So a lot of people are discussing how to do this.

Then we talked to Barnabé, a researcher at the Ethereum Foundation who is currently working on PBS. PBS is an Ethereum proposal to split the validator's task into multiple roles: builders and proposers. Today, Flashbots takes on the builder role in PBS; builders compose blocks and send them to Ethereum proposers, and once these blocks are included in the Ethereum network, the builders receive rewards. The question then is: how do you reward the Ethereum validators who are also responsible for rollup validation?

One of the solutions is "restaking," which you may have heard a lot about from EigenLayer and other protocols. Users can restake ETH to secure other sequencing networks, or Ethereum validators can be rewarded for actually running the software that does the validation work for a rollup. In this case, they can be rewarded both from L2 and through the restaking protocol. There have been many proposals so far, but in general the idea is to repurpose existing Ethereum validators: how can we reuse existing ETH to help usher in a new era of rollup and L2 systems? It basically tries to simplify a lot of things for rollup projects: if a rollup wants new sequencers, or a new source of collateral, it can reuse existing infrastructure or existing stake. That's why it's built on top of Ethereum, and why further infrastructure and staking can be reused for rollups and L2s as well.

Disadvantages of shared sequencers and based rollups, and their application scenarios

Toghrul

I want to push back on this proposal; I am not convinced by the current shared-sequencer proposals. Of course, they are still in their infancy, and if these designs improve in the future, it is entirely possible that I will support them. It's just that the current form does not convince me, for many reasons.

First, to me, the main value proposition of a shared sequencer is to give users atomic composability between general-purpose rollups like Scroll or Starknet. But the problem is that if you have atomic composability, your rollup's finality is only as fast as that of the slowest rollup you compose with. So if Scroll composes with an optimistic rollup, Scroll's finality becomes seven days. The main value proposition of a zkRollup is relatively fast finality, where users can withdraw to L1 within minutes, and in this case you basically give that up.
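Toghrul's point reduces to a one-line invariant. A toy illustration with illustrative (not measured) numbers:

```python
def composed_finality(finality_hours: dict) -> float:
    """With atomic composability, a cross-rollup bundle is only final once
    every participant is final, so finality is bounded by the slowest rollup.
    Keys and values below are illustrative assumptions, not real figures."""
    return max(finality_hours.values())

# A zkRollup finalizing in ~1 hour composed with an optimistic rollup whose
# challenge period is 7 days (168 hours) inherits the 168-hour bound.
bundle = composed_finality({"zk_rollup": 1, "optimistic_rollup": 168})
```

This is why Toghrul argues atomic composability with an optimistic rollup forfeits the zkRollup's fast-withdrawal advantage.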

Another downside is that if you want off-chain finality, you need to run an L2 node: once the data submitted to the chain is finalized by L1, you get finality locally in L2. If you don't run a full node for each composed L2, local finalization is practically impossible. This means running such a system can be more expensive than running a system like Solana, because you have multiple full nodes running at the same time, each with its own overhead and so on.

So for those reasons, I just don't think it makes sense for L2s. It's a bit different for L3s, because someone may want to build an application-specific chain and not want to deal with decentralization. Say I'm building a game and just want to focus on building the game; then I can outsource the decentralization work. But I don't think it makes sense for L2s at the moment.

As for based rollups, I also have concerns. For example, how do you share MEV profits with provers? If the allocation problem is not addressed, L1 can basically capture all the MEV profits. Another small thing is that the confirmation time equals L1's confirmation time, 12 seconds, and cannot be faster. Another problem is that since it is permissionless, multiple searchers can submit transaction batches at the same time, which may end up producing centralized outcomes: if one searcher is better connected than the others, builders are incentivized to include that searcher's batches. It may therefore end up with only one searcher proposing batches for the L2, which is not a good outcome, because if that happens we are basically back to square one.

Yaoqi

Interestingly, I had a call with Ben, the founder of Espresso, just last week, and we discussed the topic of shared sequencers a lot. As Toghrul mentioned, I think there is some uncertainty about the usage scenarios of a shared sequencing system. This is mainly because, for a general-purpose L2, we usually won't have a huge number of sequencers, for reasons of efficiency, complexity, and economics. And I still feel that whether it's a shared sequencer, based rollups, or restaking, the best use case is mostly RaaS (Rollup as a Service) platforms, where many rollups are launched. To be honest, if there are only a few general-purpose L2s, we don't really need a large sequencing network: those rollups already have their own sequencer systems and their own communities or partners, so they don't really need a separate third-party network. It is also a burden on the third-party network, because you have to customize for each L2, and each L2 has a different tech stack, which requires a lot of changes to your own network.

But at the same time, as Toghrul mentioned, there are some special use cases. For example, if we want interoperability at the sequencer level, shared sequencers can be a potential way to go: the same sequencer serves multiple rollups, and in that case it can handle cross-rollup transactions to ensure cross-chain atomicity between rollups A, B, and C.

But you can also see the problem as I describe the situation. If many rollups really did rely on these shared sequencers, the sequencers would again become a bottleneck and a new single point of failure. We would be giving too much power to these so-called shared sequencers; they become more like a network controlling many rollups. In the end, we again need to come up with a way to decentralize the shared sequencer itself.

But anyway, I think it's a good thing that people are gradually discovering more and more problems and coming up with more and more solutions. All in all, it's exciting to see what's new in this field every day. With all these new solutions, at least we are on the right track to truly decentralize the entire rollup space.

Abdel

Yes, I agree with both of you. I think it makes more sense for Layer 3s, because they don't want to take on the responsibility of incentivizing a decentralized network and need partners to provide things like sequencers. So for them, it makes sense. But like Toghrul, I don't think it makes much sense for Layer 2 just yet.

Topic 4

What impact will a decentralized sequencer have on MEV?

Abdel

For Starknet, in the centralized context, we do not do any kind of MEV extraction; we adopt a first-come, first-served model. In a decentralized context, of course, more MEV will come in later. But it is too early to say to what extent, because it also depends on the design of the consensus mechanism and other aspects.

Toghrul

But the thing is, even if you don't extract MEV yourselves, some MEV may still happen on Starknet. Decentralization by itself doesn't really decrease or increase MEV. Of course, if you apply some kind of fair-ordering protocol or threshold encryption, then yes, you minimize MEV, but you can't completely eliminate it. My philosophy is that MEV cannot be eliminated. And if you're just creating a BFT consensus, or building something on top of one, that doesn't affect MEV at all: the MEV still exists, and it becomes a question of how searchers work with the sequencer to extract it.

Yaoqi

The problem is, even the first-come, first-served model has tricky parts. Once we expose the mempool to searchers, they still have an edge. With a single sequencer, searchers are effectively waiting at your office door: as soon as a user sends a transaction, they can see it in the mempool and immediately spot arbitrage opportunities, not just front-running or sandwich attacks. They can quickly place their own trades ahead of others, so they have an advantage over other searchers.

But going back to decentralization, I think it's mostly about censorship resistance, as we discussed at the beginning. Sequencers are run by the team, and the team can claim they treat everyone fairly, but nothing in the code enforces that. So it would be great to have a P2P network: if we feel certain nodes are censoring our transactions, we can send them to other nodes. So, it's really about fairness in processing transactions at L2.

As for MEV, recently, in addition to the MEV generated within a single rollup, there is also MEV generated across bridges. In a relatively decentralized sequencing network, there are more opportunities to extract MEV. Assuming we have a shared sequencing network, if you can somehow influence the shared sequencer to reorder transactions, you basically have a big advantage over everyone else.

There are advantages and disadvantages to a shared sequencer network. On the plus side, we can further decentralize the sequencer system. On the flip side, everyone has the opportunity to be a sequencer, so they can basically do whatever they want, and it's a dark forest again. We introduce decentralization and then have to face problems similar to those we faced on Ethereum. That's why Flashbots and the Ethereum Foundation folks want to move forward with PBS: separate proposers and builders, and then try to have a unified solution on the builder side.

So when we look at the problem, it's not just a single problem. It's no longer a one-to-one problem, but a one-to-many problem, and more.
