Vitalik Blog Post: Why Do We Need an Open, Verifiable World?

Author: Vitalik

Editing & Translation | RuoYan

Original link:

Disclaimer: This article is a reprint. Readers can obtain more information through the original link. If the author has any objections to the reprint, please contact us, and we will modify it according to the author’s requirements. Reprints are for information sharing only, do not constitute any investment advice, and do not represent Wu Shuo’s views and positions.

The biggest trend of this century can be summarized as “the internet has become part of everyday life.” From email to instant messaging, from digital finance to health tracking, and to the upcoming brain-computer interfaces, our daily lives are being fully digitalized. However, this digitalization brings enormous opportunities and risks. Vitalik Buterin explores in this article why we need true openness and verifiability across the entire technology stack (software, hardware, and biotechnology), and how to build a safer, freer, and more equitable digital future.

The Internet is Reality

The largest trend so far this century can be summarized as “the internet has become part of everyday life.” It started with email and instant messaging. Private conversations conducted for thousands of years via mouth, ears, pen, and paper are now running on digital infrastructure. Then, we saw digital finance—encompassing both crypto finance and the digitalization of traditional finance itself. Next is our health: thanks to smartphones, personal health tracking watches, and data inferred from purchasing behaviors, various information about our bodies is being processed through computers and networks. Over the next twenty years, I expect this trend to permeate many other fields, including various government processes (ultimately even voting), monitoring physical and biological indicators and threats in public environments, and ultimately, through brain-computer interfaces, even our own thoughts.

I believe these trends are inevitable; their benefits are too great. In a highly competitive global environment, civilizations that refuse these technologies will first lose competitiveness, then sovereignty. However, beyond providing significant benefits, these technologies profoundly influence the power dynamics within and between nations.

The civilization that benefits most from the coming wave of technology is not the one that merely consumes it but the one that can produce it. Centralized plans that promise equal access to platforms and APIs deliver only a small part of this, and they fail outside the predefined “normal” range. Moreover, this future demands a great deal of trust in technology. If that trust is broken (e.g., by backdoors or security flaws), we face very serious problems. Even the mere possibility that this trust could be broken forces people back into fundamentally exclusionary models of social trust (“Was this built by someone I trust?”). This creates pressure for power to flow upward: sovereignty belongs to whoever decides on the exceptions.

Avoiding these issues requires technologies with two intertwined properties across the entire stack—software, hardware, and biotech: true openness (i.e., open source, including free licensing) and verifiability (including, ideally, direct verification by end users).

The internet is reality. We want it to be a utopia, not a dystopia.

The Importance of Openness and Verifiability in Health

We saw the consequences during the COVID-19 pandemic of unequal access to technological means of production. Vaccines were produced in only a few countries, leading to huge disparities in vaccine access timing [1]. Wealthier countries obtained top-tier vaccines in 2021, while others received lower-quality vaccines in 2022 or 2023. There have been initiatives to ensure equal access [2], but because vaccines are designed to depend on capital-intensive proprietary manufacturing processes that can only be done in a few places, these initiatives can only do so much.

COVID-19 vaccine coverage from 2021-2023

A second major problem with vaccines was opacity [3]: science and communication strategies [4] that tried to pretend vaccines carried literally zero risks or downsides, which was untrue and ultimately fueled [5] distrust [6]. Today that distrust has spiraled into what feels like a rejection of half a century of science.

In fact, both of these problems are solvable. Vaccines like those developed by PopVax [7], funded by Balvi [8], are cheaper to develop and have more open manufacturing processes, reducing access inequality and making it easier to analyze and verify their safety and efficacy. We can go further still and design vaccines for verifiability.

Similar issues apply to the digital aspects of biotechnology. When talking to longevity researchers, the first thing you generally hear is that the future of anti-aging medicine is personalized and data-driven: to know what drugs and nutritional changes to recommend today, you need to understand a person’s current physical state. This would be even more effective if large amounts of digital data could be collected and processed in real time.


The same idea applies to defensive biotechnology aimed at risk prevention, such as combating pandemics. The earlier an outbreak is detected, the more likely it is to be stopped at the source, and even if it is not, each week of warning provides more time to prepare and develop countermeasures. During an ongoing pandemic, knowing in real time where people are getting sick is highly valuable for deploying responses. If infected individuals learn of their infection and self-isolate within an hour, rather than unknowingly infecting others for three days, their window of onward transmission shrinks by a factor of 72. If we identify the 20% of locations responsible for 80% of transmission, improving air quality there yields further benefits. All of this requires (i) a large number of sensors, and (ii) sensors capable of communicating with other systems in real time.
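The factor of 72 is simple arithmetic: three days is 72 hours, and under the simplifying assumption that onward transmission scales linearly with the time an infectious person spends circulating, isolating after one hour instead of 72 cuts transmission 72-fold. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the factor-of-72 claim.
# Assumption (mine, for illustration): onward transmission scales
# linearly with the time an infectious person spends circulating
# before self-isolating.

HOURS_PER_DAY = 24

def transmission_ratio(slow_days: float, fast_hours: float) -> float:
    """Ratio of exposure time: isolating after `slow_days` days
    versus after `fast_hours` hours."""
    return (slow_days * HOURS_PER_DAY) / fast_hours

print(transmission_ratio(slow_days=3, fast_hours=1))  # 72.0
```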

If we further develop toward a “sci-fi” direction, we encounter brain-computer interfaces, which can enable enormous productivity, help people better understand each other through telepathy, and unlock safer pathways to highly intelligent AI.

If personal and environmental health-tracking infrastructure is proprietary, the data defaults into the hands of big companies. Those companies can build all kinds of applications on top of it, but others cannot. They might provide API access, but that access will be restricted, used to extract monopoly rents, and revocable at any time. This means a few individuals and companies will control the most critical components of 21st-century technology, which in turn limits who can benefit from it economically.

On the other hand, if this personal health data is insecure, hackers could extort you over health issues, optimize insurance and healthcare pricing to extract value from you, and, if the data includes location tracking, know where you are and even kidnap you. Conversely, your location data (which is very [9] frequently hacked [10]) can be used to infer health information about you. If your BCI is hacked, it means hostile actors are literally reading (or worse, writing) your thoughts. This is no longer science fiction: see here [11] for how hacking a BCI could cause someone to lose motor control.

Overall, the benefits are huge but so are the risks, and an emphasis on openness and verifiability is well suited to mitigating those risks.

The Importance of Openness and Verifiability in Personal and Commercial Digital Technologies

Earlier this month, I had to fill out and sign a legal form required for some routine procedure. I was outside the country at the time. A national electronic signature system exists, but I had not set it up. So I had to print the form, sign it, walk to a nearby DHL, spend a long time filling out paper forms, and then pay to courier it to the other side of the world. Time required: half an hour; cost: $119. On the same day, I had to sign a (digital) transaction to execute an operation on the Ethereum blockchain. Time required: five seconds; cost: $0.10 (and to be fair, without a blockchain, signing can be completely free).

Stories like this are easy to find in enterprise and nonprofit governance, intellectual property management, and elsewhere. Over the past decade, you could find them in a large share of blockchain startup pitches. And beyond all that there is the mother of all “digital exercise of personal authority” use cases: payments and finance.

Of course, all of this carries serious risks: what if the software or hardware is hacked? This risk was recognized early in the crypto space: blockchains are permissionless and decentralized, so if you lose access to your funds [12], there is no recourse, no benevolent “uncle in the sky” to help you. Not your keys, not your coins. For this reason, early crypto thinking focused on multi-signature [13] and social recovery wallets [14], as well as hardware wallets [15]. In practice, however, many situations lack a trustworthy “uncle in the sky” not as an ideological choice but as an inherent feature of the scenario. In fact, even in traditional finance, such intermediaries cannot protect most people: for example, only 4% of scam victims recover their losses [16]. And where custody of personal data is involved, recovering from a leak is impossible even in principle. Therefore, we need true verifiability and security, in the software and, ultimately, the hardware.
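To make the social-recovery idea concrete, here is a minimal sketch of the M-of-N guardian logic behind such wallets: the owner key acts day to day, but a threshold of guardians chosen by the owner can rotate a lost key. This is an illustration with strings standing in for keys, not any real wallet’s implementation.

```python
# Illustrative sketch of M-of-N social recovery (not a real wallet).
# Strings stand in for public keys; a real wallet would verify signatures.

from dataclasses import dataclass, field

@dataclass
class SocialRecoveryWallet:
    owner: str                       # current owner key
    guardians: set[str]              # guardian keys chosen by the owner
    threshold: int                   # how many guardians must approve
    _approvals: dict[str, set[str]] = field(default_factory=dict)

    def approve_recovery(self, guardian: str, new_owner: str) -> None:
        """A guardian votes to hand control to `new_owner`."""
        if guardian not in self.guardians:
            raise PermissionError("not a registered guardian")
        self._approvals.setdefault(new_owner, set()).add(guardian)

    def finalize_recovery(self, new_owner: str) -> bool:
        """Rotate the owner key once `threshold` distinct guardians agree."""
        if len(self._approvals.get(new_owner, set())) >= self.threshold:
            self.owner = new_owner
            self._approvals.clear()
            return True
        return False

wallet = SocialRecoveryWallet("old-key", {"mom", "friend", "hw-device"}, threshold=2)
wallet.approve_recovery("mom", "new-key")
assert not wallet.finalize_recovery("new-key")   # 1 of 2 approvals: not yet
wallet.approve_recovery("friend", "new-key")
assert wallet.finalize_recovery("new-key")       # 2 of 2: owner rotated
print(wallet.owner)  # new-key
```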

A proposed technology for checking whether computer chips are manufactured correctly

Importantly, in hardware, the risks we are trying to prevent go far beyond “is the manufacturer evil?” The problem is that there are many dependencies, most of them closed source, and negligence in any one of them can lead to unacceptable security outcomes. A recent paper [18] shows how microarchitectural choices can undermine side-channel resistance in designs that are provably secure in a software-only model. Attacks like EUCLEAK [19] depend on vulnerabilities that are harder to detect because so many components are proprietary. And AI models trained on compromised hardware [20] can have backdoors inserted during training.

A further issue in all these cases is the drawback of closed, centralized systems even when they are perfectly secure. Centralization in the hands of individuals, companies, or nations creates ongoing leverage: if your core infrastructure is built and maintained by potentially untrustworthy nations or companies, you are vulnerable to pressure (see Henry Farrell on weaponized interdependence [21]). This is the problem crypto was created to solve, and it exists in far more fields than finance.

The Importance of Openness and Verifiability in Digital Citizen Technologies

I often talk with people trying to design better forms of government for the 21st century. Some, like Audrey Tang [22], try to take functioning political systems to the next level, empowering local open-source communities and using mechanisms like citizen assemblies, sortition, and quadratic voting. Others start from scratch: one recent example is a constitution proposed for Russia by Russian-born political scientists, with strong protections for personal freedom and local autonomy, a strong institutional bias toward peace and against invasion, and an unprecedented degree of direct democracy. Still others, like economists working on land value taxes [23] or congestion pricing, try to improve their countries’ economics.

Different people may have varying enthusiasm for each idea. But they all share a common point: they involve high-bandwidth participation, so any practical implementation must be digital. Pen and paper are fine for basic records of ownership and quadrennial elections, but not for anything requiring higher bandwidth or frequency input.

Historically, security researchers’ acceptance of ideas like electronic voting has ranged from skepticism to hostility. Here is a good summary [24] of the case against electronic voting. Quoting the document:

First, the technology is a “black box software,” meaning the public is not allowed to access the software controlling the voting machines. While companies protect their software to prevent fraud (and beat competitors), this also means the public does not know how the voting software works. Manipulating the software to produce fraudulent results would be simple. Moreover, vendors competing to sell machines have no guarantee that they produce machines in the best interest of voters and ballot accuracy.

There are many real-world cases [25] proving this skepticism is justified.

Critical analysis of Estonia’s 2014 online voting [26]

These arguments apply almost word for word to all the other cases. But I predict that as technology advances, the response of “let’s just not do it” will become impractical in more and more fields. The world is rapidly becoming more efficient (for better or worse) thanks to technology, and any system that does not follow this trend will become less and less relevant to personal and collective affairs as people route around it. So we need an alternative: actually doing the hard thing and figuring out how to make complex technical solutions safe and verifiable.

In theory, “secure and verifiable” and “open source” are two different things. It is entirely possible for something proprietary to be secure: airplanes are highly proprietary technology, yet commercial aviation is overall a very safe way to travel [27]. But what proprietary models cannot achieve is shared knowledge of security: the ability for mutually distrustful actors to trust the same design.

Citizen systems like elections are situations where shared knowledge security is crucial. Another is evidence collection in courts. Recently, in Massachusetts, a large number of breathalyzer evidence tests were invalidated [28] because information about testing failures was concealed. Quoting the article:

So, are all results flawed? No. In most cases, breathalyzer tests are not miscalibrated. However, because investigators later discovered that the state crime lab concealed evidence showing broader issues, Judge Frank Gaziano wrote that the due process rights of all these defendants had been violated.

Due process in courts is precisely a domain that requires not only fairness and accuracy but also shared knowledge of fairness and accuracy, because without that shared knowledge, even a court doing the right thing can easily give way to people taking justice into their own hands.

Beyond verifiability, openness has intrinsic benefits of its own. Openness allows local communities to design governance, identity, and other systems in ways compatible with local goals. If voting systems are proprietary, a country (or province or town) wanting to try a new system faces far greater difficulty: it must either persuade the vendor to implement its preferred rules as features, or start from scratch and do all the work of making the system secure. This raises the cost of political innovation.

In any of these fields, a more open, hacker-ethic approach gives more agency to local implementers, whether they act as individuals or as part of governments or companies. To make this possible, the open tools that are built must be widely available, and infrastructure and codebases must be freely licensed so others can build on them. To minimize power disparities, copyleft is especially valuable [29].

The last key area of future civic technology is physical security. Surveillance cameras have spread everywhere over the past two decades, raising many civil-liberties concerns. Unfortunately, I predict that the recent rise of drone warfare makes “just don’t do high-tech security” no longer a feasible choice. Even if a country’s own laws do not infringe on personal freedoms, that matters little if the country cannot protect you from other nations (or rogue companies or individuals) imposing their will on you. Drones make such attacks much easier. We therefore need countermeasures, likely involving large-scale anti-drone systems [30], sensors, and cameras.

If these tools are proprietary, data collection will be opaque and centralized. If they are open and verifiable, we have the opportunity to do better: security devices that can prove they only output limited data in limited circumstances and delete the rest. We could have a digitalized physical-security future that looks more like a digital watchdog than a digital panopticon. One can imagine a world where public surveillance devices are required to be open source and verifiable, and anyone has the legal right to inspect and verify randomly chosen devices; university computer science clubs could do this regularly as an educational exercise.

Open Source and Verifiable Approaches

We cannot avoid being deeply embedded in digital computing in all aspects of life (personal and collective). By default, we might get digital computing systems built and operated by centralized companies, optimized for profit for a few, with backdoors for their home governments, and most of the world unable to participate in their creation or even know if they are secure. But we can try to move toward better alternatives. Imagine a world:

· You have a secure personal electronic device: something with the functionality of a phone, the security of an encrypted hardware wallet, and the inspectability of a mechanical watch.

· Your messaging apps are encrypted, message patterns are obfuscated via mixnets, and all code is formally verified. You can be confident that your private communications are truly private.

· Your finances are on-chain (or posted hashes and proofs on certain servers to guarantee correctness), managed by your personal electronic device’s wallet. If you lose the device, they can be recovered through some combination of other devices, family members, friends, or institutional devices you choose [31] (not necessarily the government: if anyone can do this easily, even churches might offer it).

· Open source infrastructure similar to Starlink exists, providing us with powerful global connectivity without relying on a few individual actors.

· Your device scans your activities with open-weights LLMs, offering suggestions, auto-completion, and warnings when you might be misled or about to make a mistake. The operating system is also open source and formally verified.

· You wear 24/7 personal health tracking devices, which are open source and checkable, allowing you to access data and ensure no one is obtaining it without your consent.

· We have better forms of governance, using sortition, citizen assemblies, quadratic voting, and clever combinations of democratic voting to set goals, together with some method of selecting ideas from experts to determine how to achieve those goals. As a participant, you can actually be confident that the system implements the rules as you understand them.

· Public spaces are equipped with monitoring devices to track biological variables (e.g., CO2 and AQI levels, presence of airborne diseases, wastewater). However, these devices (and any surveillance cameras and anti-drone systems) are open source and verifiable, with legal frameworks, and the public can randomly inspect them.

This is a world with more security, freedom, and global economic equality than we have today. But realizing it requires more investment in a range of technologies:

· More advanced forms of cryptography. ZK-SNARKs, fully homomorphic encryption, and obfuscation (what I call cryptography’s “Egyptian god cards”) are so powerful because they let you perform arbitrary computation on data in multi-party settings, with guarantees about the output, while keeping the data and the computation itself private. This enables far more powerful privacy-preserving applications. Cryptography-adjacent tools also apply here, such as blockchains, which let applications make strong guarantees about data integrity and about users not being excluded, and differential privacy, which adds noise to data to further protect individuals.
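To make the differential-privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism: releasing a count over sensitive data with noise calibrated to the privacy parameter epsilon. The dataset and parameter values are illustrative, not from the article.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# A count query has sensitivity 1 (one person changes it by at most 1),
# so adding Laplace noise with scale 1/epsilon gives epsilon-DP.

import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Noisy count of items matching `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(scale=1/eps) sampled as the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative data: ages of six (hypothetical) survey respondents.
ages = [34, 71, 29, 65, 80, 41]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=0.5)
print(round(noisy, 1))  # true count is 3; the output is 3 plus noise
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for a formal guarantee about any single individual’s influence on the output.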

· Application and user-level security. Applications are only safe if the security guarantees they make are actually understandable and verifiable by users. This will involve software frameworks that make it easy to build applications with strong security properties. Importantly, it will also involve browsers, operating systems, and other intermediaries (e.g., locally running observer LLMs) verifying applications as much as possible, assessing their risk levels, and presenting this information to users.

· Formal verification. We can use automated proof tools to algorithmically verify that programs satisfy properties we care about, such as not leaking data or resisting unauthorized modification. Lean has recently become a popular language for this. These techniques are already being used to verify ZK-SNARK provers for the Ethereum Virtual Machine (EVM) and other high-value, high-risk use cases, and similar uses are expanding in the broader world. Beyond that, we need further progress in other, more mundane security practices.
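As a small taste of what machine-checked verification looks like, here is a toy Lean 4 example, unrelated to any real EVM verification effort: a proof that a `double` function always returns an even number. Real efforts prove far deeper properties, but the workflow is the same: state the property, and let the proof checker accept nothing less than a complete argument.

```lean
-- Toy Lean 4 example: a machine-checked proof that `double`
-- always returns an even number.

def double (n : Nat) : Nat := n + n

theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```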

The pessimism about cybersecurity in the 2000s was wrong: vulnerabilities (and backdoors) can be overcome. We “just need” to learn to put security ahead of other competing goals.

· Open source and security-oriented operating systems. More and more of these are emerging: GrapheneOS as a security-focused version of Android, minimal security kernels like Asterinas, and Huawei’s HarmonyOS (which has open source versions) using formal verification. I expect many readers will think “if it’s Huawei, it must have a backdoor,” but that misses the point: as long as the system is open, anyone can verify it, and it should not matter who produced it. This is a good example of how openness and verifiability counteract global balkanization.

· Secure open hardware. If you cannot be sure your hardware is actually running the software it claims to run, and is not leaking data on the side, then no software is truly secure. In this area, my two short-term goals are:

· Personal secure electronic devices: what the blockchain world calls “hardware wallets” and open-source enthusiasts call “secure phones.” The two converge once you recognize the need for both security and generality.

· Physical infrastructure in public spaces: smart locks, the biological monitoring devices described above, and “Internet of Things” technology generally. We need to be able to trust these devices, and that requires open source and verifiability.

· Building open source hardware with secure, open source toolchains. Today, hardware design depends on a chain of closed source dependencies. This greatly increases manufacturing costs and complicates licensing. It also makes hardware verification impractical: if the tools that generate the chip design are closed source, you do not know what you are verifying against. Even techniques like today’s scan chains are often unusable in practice because too many of the necessary tools are closed source. All of this can change.

· Hardware verification (e.g., IRIS and X-ray scanning). We need ways to scan chips to verify that they contain the logic they are supposed to, and no extra components that would allow tampering or data extraction. This can be done destructively: auditors order products containing the chips (using identities that look like ordinary end users), then open them up and verify that the logic matches. With IRIS or X-ray scanning, it can be done non-destructively, allowing every chip to be scanned.
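One reason scanning every chip matters: destructive spot-checks catch rare backdoors only with low probability. A quick illustrative calculation (my own numbers, not from the article) shows how slowly detection probability grows with sample size:

```python
# Why non-destructive scanning of every chip matters: with random
# destructive teardowns, the chance of catching a rare backdoor grows
# only slowly with sample size. Illustrative numbers only.

def detection_probability(tampered_fraction: float, samples: int) -> float:
    """P(at least one tampered chip in a random sample), assuming
    independent draws from a large batch."""
    return 1 - (1 - tampered_fraction) ** samples

# If 0.1% of chips carry a backdoor, 100 random teardowns catch it
# less than 10% of the time. Scanning every chip (IRIS / X-ray)
# sidesteps the sampling problem entirely.
print(round(detection_probability(0.001, 100), 3))  # 0.095
```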

· Open source, low-cost local environmental and biological monitoring devices. Communities and individuals should be able to measure their environments and themselves and identify biological risks. This spans many forms of technology: personal medical devices like OpenWater, air quality sensors, general airborne disease sensors (e.g., Varro), and larger-scale environmental monitoring.

The importance of openness and verifiability at all levels of the technology stack

From here to there

The key difference between this vision and more “traditional” technology visions is that it is friendlier to local sovereignty, individual empowerment, and freedom. Security is achieved not by policing the whole world to ensure there are no bad actors anywhere, but by making the world more robust at every level. Openness means openness to building on and improving every layer of the stack, not just open-access APIs to someone’s centralized platform. Verifiability is not a rubber stamp from potentially colluding companies and governments; it is a right of, and a social expectation for, the people.

I believe this vision is more resilient and better suited to our fractured 21st-century world. But we do not have unlimited time to realize it. Approaches built on centralized data collection and backdoors, which reduce verification to “was it made by a trusted developer or manufacturer?”, are advancing rapidly. Such attempts have been under way for over a decade, arguably starting with Facebook’s internet.org, and each one is more sophisticated than the last. We need to move quickly to compete with these approaches and show people and institutions that better solutions are possible.

If we succeed in realizing this vision, one way to understand the world we get is as a kind of retro-futurism. On one hand, we gain the benefits of ever more powerful technology: improving our health, organizing ourselves more efficiently and resiliently, and protecting ourselves from threats new and old. On the other hand, we regain something that was second nature to everyone in 1900: infrastructure that is open, verifiable, and modifiable by the people who use it, where anyone can participate not just as a consumer or “application builder” but at any layer of the stack, and be confident that their devices do what they claim to do.

Designing for verifiability has costs: many hardware and software optimizations yield big speed gains but make designs harder to understand or more fragile. Open source makes it harder to profit under many standard business models. I believe both concerns are exaggerated, but the world cannot be convinced of this overnight. Which raises the question: what are realistic short-term goals?

I propose one answer: work toward a fully open source and verification-friendly technology stack, targeting high-security, non-performance-critical applications, both consumer and institutional, long-distance and face-to-face. This includes hardware, software, and biotech. Most truly security-critical computing does not need to be fast; and even when it does, there are often ways to combine high-performance untrusted components with trusted but lower-performance ones [14], achieving both high performance and strong trust for many applications. Achieving maximum security and openness for everything is unrealistic. But we can start by making these properties available where they matter most.
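The “trusted but slow checks fast but untrusted” pattern above can be sketched in miniature: verifying a result is often far cheaper than computing it. This is an illustrative example, not any particular production architecture; sorting stands in for an expensive computation delegated to an opaque accelerator.

```python
# Fast-untrusted / slow-trusted pattern in miniature: an untrusted
# component does the O(n log n) work; a trusted verifier accepts the
# result only after an O(n) check (same multiset, nondecreasing order).

from collections import Counter

def untrusted_sort(xs: list[int]) -> list[int]:
    # Stands in for a fast, opaque component we do not trust.
    return sorted(xs)

def trusted_verify(xs: list[int], ys: list[int]) -> bool:
    # Cheap check: ys is a permutation of xs and is in order.
    return Counter(xs) == Counter(ys) and all(
        a <= b for a, b in zip(ys, ys[1:]))

data = [5, 3, 9, 1, 3]
result = untrusted_sort(data)
assert trusted_verify(data, result)   # accept only verified output
print(result)  # [1, 3, 3, 5, 9]
```

The trusted side never needs the untrusted side’s speed; it only needs enough power to check the answer, which is exactly the asymmetry the paragraph relies on.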

Original hyperlinks

[32]

[1]

[2]

[3]

[4]

[5]

[6]

[7]

[8]

[9]

[10]

[11]

[12]

[13]

[14]

[15]

[16]

[17]

[18]

[19]

[20]

[21]

[22]

[23]

[24]

[25]

[26]

[27]

[28]

[29]
