MIT: No need to panic about AI doomsday theories; validation ability is the scarce resource

Source: Bankless Podcast; Compiled by: Felix, PANews

MIT economist Christian Catalini appeared on the show with Ryan and David to delve into his new paper “Some Simple Economics of Artificial General Intelligence.” The paper argues that the scarce resource in the AI economy is no longer intelligence but validation: the human capacity to check, judge, and confirm the correctness of AI outputs.

Christian elaborated on the two cost curves reshaping every industry (the cost of automation and the cost of validation), explaining why entry-level jobs are disappearing first and why even top experts are unknowingly training their own replacements (the “coder’s curse”). He also described three roles that can survive the transformation: directors, meaning creators, and responsibility underwriters.

PANews has compiled the highlights of the conversation.

Host: I think many listeners might feel a sense of panic about AI, like I do. Why do you think people are worried about AI? Are their concerns justified?

Christian: We all share that feeling. This is a fast and transformative period, and the closer you are to the code, the more likely you are to witness this acceleration firsthand. This exponential growth has become very real over the past few months. This technology has achieved things that many people thought would take much longer to accomplish, and it’s a feeling we are all grappling with. However, I believe the “apocalyptic” view is wrong; people often underestimate the potential these tools bring. Yes, there will be an extremely difficult transition period, and the pace of job transformation is unprecedented in history. Nevertheless, if you leverage the best features of this technology and invest in it, the long-term outlook is fundamentally positive, although the journey will be bumpy. Economics views work as a collection of tasks, some of which will be automated, and that is good news. The key is how you retrain yourself and stay at the forefront.

Host: Who do you think will be hit first?

Christian: That’s a great question, and I have many different thoughts on it. First, when I say that those closest to the code will be hit first, I mean they will experience how powerful this technology is sooner. As the Jevons paradox reveals, when something becomes more efficient, we tend to consume more of it, for example, by writing more software. I believe programming will diversify like many other professions, which relates to what we call in the paper the “disappearing first rung” of the career ladder. If you are a junior staffer who hasn’t yet acquired the tacit knowledge to distinguish excellent products from mediocre ones, then AI can effectively replace you across many fields.

Everyone can now easily access a fairly decent marketer, junior programmer, or lawyer who can handle most situations; you only need to bring in a top lawyer for final validation at the last stage. On the other hand, even top experts, in the process of adopting AI, are inadvertently creating the labels, information, and digital traces that will ultimately automate their own jobs. Leading labs are hiring top talent from fields like finance to create evaluation standards and bake that domain expertise into large models. So I believe no single job is 100% safe; even physical labor, currently constrained by robotic manufacturing capacity, will see reward models leap forward significantly over the next few years. Anything that happens on a screen can be tracked, replicated, and learned from. For every profession, the key question is: if I delegate as much work as possible to AI, where can I still add value?

In reality, people do a lot of self-soothing with words like “taste” and “judgment.” Those terms are vague. So in the paper we say there is no such thing as taste or good or bad judgment, only the distinction between “measurable” and “immeasurable.” If something has been measured, machines can replicate it. If something is still embedded only in the weights of your brain, like a top designer who has accumulated thousands of hours of experience deciding what should be published and what shouldn’t, that is what we call validation. Validation is that last step: AI agents create the product, and you, as the decision-maker, judge whether it meets the standard for market release. As machines get better data, more will be automated; but in unknown territory, or wherever there is no data at all, that part will still belong to humans for the next few years.

Host: That’s a very profound insight. But I’m also thinking, it’s quite natural for engineers to automate their own work. Is the impact uniform across all industries?

Christian: We have enough evidence to suggest that the change will be uneven. You can think of it this way: is this job merely a “packaging” of something that society doesn’t fundamentally need? Take general consulting work: if it mostly involves repackaging, refining, and summarizing information that is already widely available, it is clearly at risk. But work that brings scarce domain expertise, or consultants hired for political reasons, will survive. Ask yourself whether a profession is profitable because it solves a complex problem or merely because some artificial bottleneck exists.

Host: What does validation actually mean? I find it hard to break down my day’s work into what is cognitive work and what is validation work.

Christian: Agents have learned and measured everything from the web and books, and because they are cheaper and scalable, they will replace the measurable parts. What agents still don’t know is the unique set of neural weights in your brain, obtained through your own experiences and struggles, which is what makes you a top expert. For instance, many early cryptocurrency participants from Argentina, Venezuela, and similar countries lived through hyperinflation and react to assets entirely differently. That kind of unique, hard-won measurement of the world remains a significant advantage.

What is validation? It is the difference between your own measurement standards of the world and the standards possessed by the agent. Like a top editor who knows exactly what article will resonate; or a top CTO who, faced with a vast codebase generated by AI, knows precisely which critical edge cases must be personally checked by humans—this part cannot yet be measured by machines.

Host: Let me give you an example. Say I see a video on X of Israel being bombed but realize it is AI-generated. I use my own brain to spot the problem and perhaps prompt the model again to generate a better video. Is that my “validation ability”?

Christian: That’s a great example. Soon we may find ourselves in a world where, for most people, such a video is hard to distinguish from reality. The next step might be military experts noticing that the dynamics of the flames are off. After that, even military experts might not be able to tell at a glance and will need AI to analyze the physics and run simulations. Ultimately it may become impossible to distinguish at all, and we will have to rely on cryptographic infrastructure to confirm authenticity. The same goes for medicine, where edge cases ultimately require a top radiologist to draw on 20 years of experience and knowledge of a patient’s specific background to veto AI’s judgment. That is the thin “filtering layer” the paper focuses on. And doing this frees up a lot of time, which is the good side: we can do more with fewer resources, the cost of expensive things will fall, and society will consume more of them. I think that’s good news.

Host: But in your example, at first an ordinary person does the validation; soon they can’t and need a military expert; eventually even the expert can’t and has to turn to AI. Doesn’t this precisely illustrate that validation is valuable at first but will soon be automated by AI? So validation itself isn’t safe either?

Christian: Exactly. We call it the “coder’s curse” in the paper. The very rational act of validation is itself driving the development of cutting-edge technology by digitizing experiential data. We cannot stop it, because every lawyer and practitioner is rationally trying to use AI. Validation is indeed a shrinking frontier.

Host: If even the last bastion of validation work is shrinking, when will we be able to stop feeling anxious?

Christian: First, some things are immeasurable by design, like so-called “status games” or things imbued with meaning by humans. These areas will not be overtaken by machines because their value rests on consensus among humans. Cryptocurrency is somewhat like this: what matters is the human consensus about what holds value. As the realm of measurable work shrinks, we will invent many ways to make immeasurable work meaningful.

Host: AI can build a website in 10 seconds but might not be able to write an engaging tweet for humans. Could this be one of the last remaining validation tasks?

Christian: Capturing attention and telling a truly novel joke are extremely difficult creative tasks, attempting to break through what has never been measured. We have evolved a strong ability to cope with unknown environments through a long struggle for survival. Those engaged in such work are called “meaning creators.” In fields like art or culture, what is considered good depends on human consensus. Even when you use AI agents, you must set the “intention.”

Host: Automation costs are decreasing exponentially. What about the “cost of validation”? Will it forever be constrained by human biological limits?

Christian: Currently, it is constrained by biology. So, many companies release a lot of AI-generated code, but there aren’t enough people to read and validate it, which inevitably hides risks.

Host: Can’t AI validate AI?

Christian: If AI can validate correctly, that part is itself automatable. After exhausting all AI validations, what remains is the truly unverifiable content; that is the bottleneck for human intervention.

Host: If validation is the new scarce resource but is continually receding, how should one work and invest in this economy?

Christian: We created a 2x2 matrix based on “automation costs” and “validation costs.” The lower-left corner represents workers that are replaceable: automation is easy, validation is easy, and you definitely don’t want to be here. The other three quadrants are:

Meaning Creators: Automation is difficult, validation is difficult. They are dedicated to social consensus, status games, and human connection. Examples are taste makers in fashion or crypto KOLs on Twitter; they create narratives and coordinate attention.

Responsibility Underwriters: Automation is easy, validation is difficult. They are top experts in their fields, like leading lawyers, doctors, or venture capitalists. They leverage AI at scale but provide responsibility and validation services for final edge cases.

Directors: Automation is difficult, validation is easy. The core is “intention.” They handle the “unknown unknowns,” directing agents the way entrepreneurs do: setting direction, sensing deviations, and continuously correcting course.
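The 2x2 framing above can be sketched as a small lookup table. The quadrant names follow the conversation; the function name and the low/high cost labels are illustrative assumptions, not from the paper:

```python
# Hypothetical sketch of the 2x2 matrix from the conversation: a role is
# placed by whether its tasks are cheap ("low") or costly ("high") to
# automate, and cheap or costly to validate.

def classify_role(automation_cost: str, validation_cost: str) -> str:
    """Map (automation_cost, validation_cost), each 'low' or 'high', to a quadrant."""
    quadrants = {
        ("low", "low"): "replaceable worker",           # easy to automate, easy to check
        ("high", "high"): "meaning creator",            # status games, human consensus
        ("low", "high"): "responsibility underwriter",  # AI does the work, expert signs off
        ("high", "low"): "director",                    # sets intent, outcomes easy to judge
    }
    return quadrants[(automation_cost, validation_cost)]

print(classify_role("low", "high"))  # responsibility underwriter
```

The lower-left quadrant the speaker warns against is simply ("low", "low"): when both automating a task and checking its output are cheap, there is nothing left for the human to sell.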

Host: What should recent graduates do? On one end, there are low-value entry-level jobs, and on the other end, top experts that take ten years of industry refinement to become, creating a vast chasm between the two. AI can do entry-level work, so how can young people grow to the other end?

Christian: The chasm does exist. But the good news is you can compress your learning time. You can skip traditional training steps. A junior engineer can now do the work of an entire team using tools. While they may make mistakes at first, as newcomers they can question traditions with fresh perspectives, which is an advantage. They can realize ideas in ways we could never do when we were young. There are pros and cons.

The past path of “get a degree, find an internship, work hard to advance” is indeed no longer valid, which will bring significant cultural shock. This is very difficult for recent graduates. If you are still in college, you have time to clarify your direction. If you are in a predicament, my advice is: go use these tools to create something. Your ambition should be 100 times greater than what we had at that age.

Host: The disappearance of a large number of “button-pushing” jobs in the short term may throw society into chaos?

Christian: Society will always recreate “button-pushing” jobs when necessary to maintain stability. But many people doing such work are actually capable of more; they were just constrained by their environment. When physical labor stopped being necessary, we invented going to the gym; now, facing the liberation of mental labor, people will develop side hustles and creator economies to seek challenges. That’s also why I believe Universal Basic Income (UBI) is entirely wrong: people need meaning and the drive for self-actualization. Moreover, even if a significant portion of your work is being automated away, leveraging AI as a super tool means a junior employee entering today can produce the output of an entire team.

Host: Do you have any advice for companies and investors?

Christian: For companies, invest in validation infrastructure and offer “responsibility as a service” (not just providing agents but also underwriting the consequences). Also, control exclusive sources of facts: AI is easily fooled, so companies that can provide exclusive, real data or in-depth evaluations, as Bloomberg does, are hugely valuable. For investors, beyond those areas, focus on hard-core R&D in the “immeasurable.” Ordinary network effects of the past may fail; new network effects will come from making your agents more reliable through better real-world feedback, because what people truly want to buy is validated intelligence.

Host: Is cryptography useful in this validation process?

Christian: The underlying infrastructure built in the crypto space over the past decade is crucial. When we need to verify that an identity is authentic and prevent account takeover, on-chain technologies like proof of personhood can provide strong validation. There’s also data provenance and cryptographic chains of custody: we need stringent cryptographic guarantees about how information was generated and whether models are compliant.
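As an illustration of the data-provenance idea, here is a minimal hash-chain sketch in Python using only the standard library. The shared-secret HMAC scheme and all names are simplifying assumptions; a real provenance system (e.g. C2PA-style manifests) would use public-key signatures so anyone can verify without the secret:

```python
# Minimal data-provenance sketch: each content record is hashed, chained to
# its predecessor, and authenticated with an HMAC. Editing any record breaks
# every later signature in the chain.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # placeholder; never hardcode real keys
GENESIS = b"\x00" * 32                 # digest used before the first record

def sign_record(prev_digest: bytes, content: bytes) -> bytes:
    """Bind a content record to its predecessor and authenticate the link."""
    payload = prev_digest + hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify_chain(records: list[tuple[bytes, bytes]]) -> bool:
    """Recompute every (content, signature) link from the genesis digest."""
    digest = GENESIS
    for content, signature in records:
        expected = sign_record(digest, content)
        if not hmac.compare_digest(expected, signature):
            return False
        digest = signature
    return True

# Build a two-record chain, then verify it.
chain, digest = [], GENESIS
for content in [b"original footage", b"edit: cropped"]:
    digest = sign_record(digest, content)
    chain.append((content, digest))
print(verify_chain(chain))  # True
```

This is the shape of the guarantee being described: authenticity comes not from inspecting the content itself but from an unbroken cryptographic record of how it was produced.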

Host: What should people do in the coming year? Are you optimistic about the future of humanity?

Christian: First, don’t panic. Experiment a lot, automate and “eliminate” your current self as much as possible. Many amateur explorations of the future might turn out to be the most meaningful careers. At the very least, you will figure out the boundaries and shortcomings of the model. For many online creators, hobbies have turned into careers, and that will be the mainstream direction in the future. If you have children, discovering their talents and immersing them in what they love is the most important thing. There is no fixed professional template; new AI tools can help you find that unique path that belongs only to you.
