
    🦋 Serenity

    We have finally reached the tranquil waters of Ethereum 2.0 (also known as “Serenity”). It's been many years in the making, and it appears we still have some way to go yet.

    In fact, coming up with the right wording for the previous sentence was quite a challenge, because Kernel aims for “evergreen” content which will still be accurate and relevant at least a decade from now. As we will shortly see, this is an aspiration we share with the Ethereum 2.0 Design Rationale.

    You might wonder, are we saying that we will “still have some way to go” ten years from now? We certainly hope so. The desire for enclosure, for a certain and definitive end state which can be encapsulated as a “product” and consumed by the wider culture is a peculiarly modern phenomenon.

    We hope that Ethereum, and networks like it, are never really finished. We hope that people continue tinkering, playing with new ways of relating to one another for as long as there are world machines and people using them to build trust spaces. What is truly fascinating is not finished products to sell, but the ongoing art of making possible:

    "And once those limits are understood
    To understand that limitations no longer exist.
    Earth could be fair. And you and I must be free
    Not to save the world in a glorious crusade
    Not to kill ourselves with a nameless gnawing pain
    But to practice with all the skill of our being
    The art of making possible."

    Nancy Scheibner

    Ethereum 2.0 Design Rationale

    How does this fit into Kernel?

    In a way, this is what Kernel has been preparing you for: the continuous iteration which leads to a global, public, decentralized and censorship-resistant computing surface which anyone can use and no-one owns.

    What this means for our networked species remains an open question. One thing seems clear: it is an upgrade the likes of which is only seen once in an age. You'll need to recall the arguments made in value and incentives to understand the design rationale for these next few scenes of the human story.

    Brief

    We’ll be taking a close look at Vitalik’s Design Rationale for Ethereum 2.0, which is the move to a Proof-of-Stake network that also greatly increases capacity by using “shards”. Don’t get intimidated by the technical jargon here: this has been included in the syllabus for two primary reasons:

    1

    Previously, when discussing mechanism design and game mechanics which arise from revealing truth with least contrivance, we made the claim that “exploration of low-level primitives yields possibility”. We need to look at the primitives, but the intention is primarily to walk away with a better understanding of the possibilities they imply.

    2

    When discussing money and speech, we claimed that - with blockchains - the best way to protect free speech is to price it correctly. We don't enforce "the good" by legal fiat and deal with exceptions like libel and hate speech through human interpretation and violent enforcement; we define what is "bad" and set a price on it such that malicious expression provably costs more than what may be gained from it. Ethereum 2.0 extends this idea greatly: it is our generation's elder game of economic penalties.

    Principled Layers

    Just as we did for Kernel, Vitalik begins Ethereum 2.0's Design Rationale with a series of core principles:

    1

    Simplicity: given the inherent complexity of blockchain networks, simplicity

    • minimizes development costs,
    • reduces the risk of unforeseen security issues, and
    • helps protocol designers to more easily convince users that design choices are legitimate.
    2

    Long-term stability: the lower levels of the protocol should ideally be built so that there is no need to change them for a decade or longer.

    3

    Sufficiency: it should be possible to build any class of applications on top of the protocol.

    4

    Defense in depth: the protocol should continue to work as well as possible under a variety of possible security assumptions (concerning network latency, fault count, the motivations of users, etc.).

    5

    Full light-client verifiability: a client should be able to gain assurance that all of the data in the system is available and valid, even under a 51% attack.

    In our very first note, we made the point that our way of thinking and solving problems always has to do with trade-offs. The trade-off between complexity and functionality is a deep one. Ethereum developers have taken the approach of splitting the network into “layers” in order to make better trade-offs between complexity and the functionality we need to build a global computing fabric that everyone can use.

    The core idea is that we keep the “bottom layer” or “Layer 1” as simple as possible: it just processes critical or high-value transactions that require global agreement. This adheres to the first, second, and fourth principle above. Then, we move more complex processes–which cost more to run and may not need everyone in the world to agree on them right now–to other layers further up, often called “Layer 2”.

    There are many different kinds of “Layer 2” options, which make different trade-offs depending on what they are optimising. All the complexity gets moved here–and understanding all the different approaches can be daunting–but a diversity of approaches to processing different people’s different needs and use cases is required to give us all “sufficiency”, as well as providing “defence in depth” in a different kind of way.

    Prompt: What are the five principles of the Ethereum 2.0 Design Rationale?

    Simplicity, stability, sufficiency, defense, verifiability.

    A Defender's Game

    Bitcoin was a major innovation - it's almost like time-travelers came back to 2009 and dropped it on an obscure mailing list for us to puzzle over. However, it is not elegant and its consensus algorithm breaks one of the fundamental advantages cryptography provides: adversarial conflict should heavily favor defenders. Seasteads are easier to destroy than build, but an average person’s keys are secure enough to resist even state-level actors.

    Systems that consider themselves ideological heirs to the cypherpunk spirit should maintain this basic property, and be much more expensive to destroy or disrupt than they are to use and maintain.

    Proof of Work security can only come from block rewards, and incentives to miners can only come from the risk of losing future block rewards. That is to say, Proof of Work necessarily operates on a logic of massive power incentivized into existence by massive rewards. This is effective, but inefficient. The cost of attack and the cost of defense are at a 1:1 ratio, so there is no defender’s advantage.

    Proof of Stake breaks this symmetry by relying not on rewards for security, but rather penalties [...] The “one-sentence philosophy” of Proof of Stake is thus not “security comes from burning energy”, but rather “security comes from putting up economic value-at-loss”.

    Of course, the wise design of penalties can be its own reward.

    Prompt: What kind of actor does cryptography favour fundamentally?

    (individual) defenders.

    Consider Amazon's algorithm and how they solved unbounded search by turning everything into a platform, only to find that advertising is intractable to platform solutions. This is because the limited number of top spots on infinite-length shelves, and the crazy amounts of revenue they generate, create a conflict of interest between Amazon and its users. This kind of conflict can only be solved with a protocol, not a platform. However, Proof of Work protocols still have a similar conflict of interests between miners and users. Miners must pay for the massive power they are incentivized to use in securing a Proof of Work network, which creates both (i) consistent forced sellers in any market (miners must sell some amount of the tokens they earn as rewards in order to pay for hardware and power costs) and (ii) skewed incentives around, for instance, block sizes which lead to suboptimal outcomes for users of the network.

    Proof of Stake ensures that anyone, even with entry-level hardware and a relatively small amount of ETH, can act as a validator and that the protocol relies on penalties rather than rewards. This means that users and validators are more likely to be the same people, thus reducing conflict. Just as protocols which define and encode what it means to cheat do not need to be trusted, protocols which define and encode penalties are more likely to benefit all their users than protocols which rely on rewards. This is because encoding rewards creates inevitably skewed incentives that only accrue to the subset of network participants who are best placed to game the system.

    Prompt: The cost of attacking or defending Proof of Work consensus is 1:1. Proof of Stake breaks this symmetry by relying on what, instead of rewards?

    Penalties.

    Proving Stake

    Ethereum 2.0 uses a slashing mechanism where a validator that is detected to have misbehaved can be penalized: in the best case by only about 1%, but in the worst case by up to its entire security deposit. This raises the cost of an attack enough to achieve the defender's advantage described above, without creating an incentive for validators to sign off on transactions unchecked (which would be computationally cheaper, but risks the penalty).

    In the Ethereum 2.0 Design Rationale, Vitalik describes why "Casper the Friendly Finality Gadget" was chosen - it was the simplest Proof of Stake consensus algorithm available at the time - and how other options continue to be explored. He highlights the problem of "supernodes" and why sharding is better: less centralization risk, more censorship resistance, and better long-term scalability. Critically, he then considers the security models of any system we choose, and how it operates not just under the "honest majority" model (where 51% of the validators are assumed to be trustworthy), but also the "uncoordinated rational majority" and "worst-case" models. Assuming the worst case, the question remains:

    • Can we force an attacker to have to pay a very high cost to break the chain’s guarantees?
    • What guarantees can we unconditionally preserve?

    Slashing ensures the first condition, and Vitalik provides a detailed table of the guarantees we can preserve with Proof of Stake systems in this section. In a system designed around penalties, you need to distinguish between various types of validator failure - most of which are benign (like simply being offline) - and only a few of which are genuinely malicious. Critically, it is the trade-off between different penalties which informs how we structure rewards.

    Prompt: What kind of mechanism is used to ensure that attackers still pay a very high cost to break the chain's guarantees, assuming the worst-case model?

    Slashing (extra points if you said Casper the Friendly Finality Gadget).

    Aligning Incentives

    In Ethereum 2.0, validators must attest to what they believe to be the head of the chain in each epoch, i.e. approximately every 6.4 minutes. If they do so, they earn a base reward, which is split into 5 parts described here. Rather than detailing all of these, we’ll focus here on two critical features of the reward: how it prevents "griefing", and how it is calculated.
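
    As a quick sanity check on that figure, here is the arithmetic using the slot and epoch parameters the beacon chain launched with (a minimal Python sketch; the variable names are ours):

```python
# Epoch length on the beacon chain: 32 slots of 12 seconds each.
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

epoch_seconds = SECONDS_PER_SLOT * SLOTS_PER_EPOCH
print(epoch_seconds / 60)  # 6.4 minutes
```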

    1

    Griefing occurs when an attacker seeks to reduce other validators’ revenue, even at a cost to themselves, in order to encourage the victims to drop out of the mechanism (either so the attacker can get more revenue, or as part of a longer-term 51% attack).

    By awarding any compliant validator an amount corresponding to the base reward B multiplied by P (the portion of validators that agree), and penalising any validator who doesn't with −B, we implement a collective rewards scheme where “if anyone performs better, everyone performs better”. This bounds the griefing factors in an optimal way and is the best example of explicitly prosocial mechanism design we know of in any blockchain.
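
    To make the collective scheme concrete, here is a minimal Python sketch of the rule described above. The function name and the simplification to a single reward component are ours, not the spec's:

```python
def attestation_reward(base_reward: float, portion_agreeing: float, complied: bool) -> float:
    """Reward B * P for a compliant validator, penalty -B otherwise."""
    if complied:
        return base_reward * portion_agreeing
    return -base_reward

# As more validators comply, P rises, so every compliant validator earns more:
# "if anyone performs better, everyone performs better".
print(attestation_reward(1.0, 0.90, True))   # 0.90
print(attestation_reward(1.0, 0.99, True))   # 0.99
print(attestation_reward(1.0, 0.99, False))  # -1.0
```

    Notice that a griefer who goes offline to hurt others only lowers P a little for everyone else, while paying the full −B themselves: that is the intuition behind the bounded griefing factor.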

    2

    The base reward is proportional to the inverse square root of the total security deposits made by all validators in the system. This strikes a compromise between a fixed reward rate and a fixed total reward. The first creates too much uncertainty about both issuance and the total level of deposits; the second potentially incentivizes griefing more than can be disincentivized by the collective scheme above. Again, mechanism design is all about balancing trade-offs.
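
    In other words, if D is the total amount at stake, B is proportional to 1/√D. A toy sketch of why this sits between the two extremes (the constant here is arbitrary and purely illustrative):

```python
import math

REWARD_CONSTANT = 1.0  # illustrative only, not a spec value

def base_reward(total_deposits: float) -> float:
    """Per-validator base reward shrinks as total deposits grow."""
    return REWARD_CONSTANT / math.sqrt(total_deposits)

# Total issuance (D * B) grows with the square root of D: more than a fixed
# total reward would allow, less than a fixed reward rate would require.
for d in (1_000_000, 4_000_000, 16_000_000):
    print(d, round(base_reward(d), 6), round(d * base_reward(d), 1))
```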

    Prompt: Multiplying the base reward B with the portion of validators who agree, while only penalizing those who disagree with -B, is what kind of explicitly prosocial scheme?

    Collective rewards.

    Rewards are designed this way only as a result of thinking about how to penalize different kinds of undesirable validator behaviour. Now that we understand this premise, we can check that the rewards fit our requirements by considering the break-even uptime for any validator. It turns out that if everyone else is validating, you need only be online ~44.44% of the time. However, if other validators are offline - say P = 2/3 - then you need to be online ~53.6% of the time.
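
    Where do those figures come from? If being online for an epoch earns you R in expectation and being offline costs you Q, an uptime of x breaks even when x·R = (1−x)·Q, i.e. x = Q / (R + Q). The exact values of R and Q come from the full reward breakdown in the spec; the ratios below are simply back-calculated from the percentages quoted above, so treat this as a sketch of the shape of the calculation rather than the spec itself:

```python
def break_even_uptime(reward_online: float, penalty_offline: float) -> float:
    """Uptime x at which expected rewards and penalties cancel: x*R = (1-x)*Q."""
    return penalty_offline / (reward_online + penalty_offline)

# With R = 1.25 * Q (everyone else online), break-even uptime is ~44.4%.
print(break_even_uptime(1.25, 1.0))   # ~0.444
# The online reward scales with P, so if P drops to 2/3, R shrinks and the
# break-even uptime rises toward ~53.6%.
print(break_even_uptime(0.865, 1.0))  # ~0.536
```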

    The incentives ensure that, as more validators go offline, the penalty for doing so is greater, which creates something Vitalik calls an inactivity leak. If the chain fails to finalize for more than 4 epochs, a second penalty component is added which increases quadratically over time. This:

    • Penalizes being offline much more heavily in the case where you being offline is actually preventing blocks from being finalized and
    • Ensures that if more than 1/3 do go offline, eventually the portion online goes back up to 2/3 because of the declining deposits of the offline validators.
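
    A toy sketch of how this quadratic growth works: during an extended period of non-finality, the per-epoch penalty for an offline validator grows linearly with the number of epochs since finality, so the cumulative loss grows quadratically. The constants below are illustrative placeholders, not the spec's:

```python
GRACE_EPOCHS = 4              # epochs of non-finality before the leak kicks in
PENALTY_QUOTIENT = 1_000_000  # illustrative only, not a spec constant

def cumulative_inactivity_penalty(balance: float, epochs_without_finality: int) -> float:
    """Sum a per-epoch penalty that grows linearly with the delay since finality."""
    total = 0.0
    for delay in range(GRACE_EPOCHS + 1, epochs_without_finality + 1):
        total += balance * delay / PENALTY_QUOTIENT
    return total

# Doubling the length of the outage roughly quadruples the loss:
print(cumulative_inactivity_penalty(32.0, 100))  # ~0.16
print(cumulative_inactivity_penalty(32.0, 200))  # ~0.64
```

    The longer the chain fails to finalize, the faster offline validators bleed stake, until the validators still online once again make up two thirds of the (now smaller) total.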

    Prompt: If everyone else is online, what percentage of time must I be online for to break even as a validator?

    ~44.4%.

    All of this means that we can handle elegantly the penalties for common and benign kinds of validator failures. But what about actual attacks and/or malicious behaviour? If a validator is caught violating the Casper FFG slashing condition, they get penalized by a portion of their deposit equal to three times the portion of validators that were penalized around the same time. The reasoning behind this is:

    • A validator misbehaving is only really bad for the network if they misbehave at the same time as many other validators, so it makes sense to punish them more in that case.
    • It heavily penalizes actual attacks, but applies very light penalties to single isolated failures that are likely to be honest mistakes.
    • It ensures that single validators take on less risk than larger services (in the normal case, a service running many validators would be the only party failing at the same time as itself).
    • It creates a disincentive against everyone joining one single validator pool.
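
    A minimal sketch of this proportional slashing rule (the minimum-penalty floor and the names are ours; the exact parameters live in the spec):

```python
MIN_PENALTY_FRACTION = 0.01  # roughly the ~1% best case mentioned earlier

def slashing_penalty(deposit: float, portion_slashed_recently: float) -> float:
    """Lose three times the portion of validators slashed around the same time,
    with a small floor and a cap at the full deposit."""
    fraction = max(MIN_PENALTY_FRACTION, 3 * portion_slashed_recently)
    return deposit * min(fraction, 1.0)

# An isolated slip-up (almost nobody else slashed) costs ~1% of the deposit;
# a coordinated attack (over a third of validators slashed) costs everything.
print(slashing_penalty(32.0, 0.001))  # ~0.32
print(slashing_penalty(32.0, 0.34))   # 32.0
```

    This is also why the scheme discourages everyone piling into a single validator pool: correlated failures are exactly the ones that are punished hardest.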

    Prompt: True or false: the collective reward scheme of B * P incentivises validators to pool their resources?

    False! We can now program collective rewards without sacrificing decentralization.

    Technicalities

    Vitalik then discusses some of the technical choices favored in the design, like BLS signatures (which are easy to aggregate) and SSZ (a simpler serialization format which aligns with the principle of simplicity). If you're interested, we recommend going directly to the Rationale for these details. This Twitter thread does an excellent job of providing an overview of the development of Ethereum 2.0 consensus and the associated research if you want more history and detail than can be found in the Design Rationale itself.

    The Positive Sum

    What is Ethereum 2.0? Well, we said it already: our generation's elder game of economic penalties. These penalties are the game mechanics we use to reveal a unique kind of truth: it is possible to build - and asymmetrically defend and maintain - an explicitly prosocial, global, and ownerless system that provably benefits all the people who choose to use it. The benefits of encoding penalties extend to all layers of this game, including our ability to use coordination costs to our advantage.

    Welcome. Most of us are friends here. The alternatives are more expensive.

    Further resources

    These few links are intended for those who wish to understand more technicalities and begin developing on, or building tools for, Ethereum 2.0.

    The Eth 2.0 Specs

    Ethos.dev

    Ethereum 2.0 Studymaster

    EF Research Team AMA

    The Beacon Book

    The Limits to Blockchain Scalability
