a16z Founder: In the Agent era, what truly matters has changed
Author: a16z
Translation: FuturePulse
Signal source: This is the latest interview with a16z founder Marc Andreessen on the Latent Space podcast. Andreessen is a renowned American internet entrepreneur, one of the key figures in the early development of the internet, and, since founding a16z, a leading investor in Silicon Valley. The conversation covers the history of AI development and the latest trends, and is well worth reading.
I. This round of AI didn’t emerge out of thin air; it’s the first time the technology has fully “started working” after an 80-year technical marathon
Marc Andreessen calls the current moment an “80-year overnight success”: the sudden burst people see is actually the concentrated release of decades of accumulated technical groundwork.
He traces this technological thread back to early neural network research and emphasizes that today’s industry has essentially already accepted the judgment that “neural networks are the correct architecture.”
In his narrative, the key milestones aren’t a single moment, but a series of stacked steps: AlexNet, Transformer, ChatGPT, reasoning models, and then agents and self-improvement.
He especially emphasizes that this time isn’t just that text generation has gotten stronger—four kinds of capabilities emerge at the same time: LLMs, reasoning, coding, and agents / recursive self-improvement.
The reason he believes “this time is different” isn’t that the story is more compelling, but that these capabilities have started to work on real-world tasks.
II. The agent architecture represented by Pi and OpenClaw is a software architecture shift deeper than chatbots
He describes agents very concretely: essentially “LLM + shell + file system + markdown + cron/loop.” In this structure, the LLM is the core of reasoning and generation; the shell provides the execution environment; the file system saves state; markdown makes the state readable; and cron/loop provides periodic wake-ups and task progression.
He believes the importance of this combination is that, besides the model itself being new, all the other components are already parts that have long matured in the software world—things that are understandable and reusable.
An agent’s state is stored in files, so it can migrate across models and runtimes; the underlying model can be swapped out, but memory and state remain.
He repeatedly emphasizes introspection: the agent knows its own files, can read its own state, and even can rewrite its own files and functions, moving toward “extend yourself.”
In his view, the real breakthrough isn’t just “models can answer,” but that agents can use the existing Unix toolchain to bring the latent capabilities of the entire computer into play.
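The “LLM + shell + file system + markdown + cron/loop” structure described above can be made concrete with a minimal sketch. This is illustrative only: `llm_complete` is a hypothetical stand-in for any model API (here it returns a canned shell command), while every other piece is ordinary, long-matured Unix machinery.

```python
# Minimal sketch of the agent architecture the interview describes:
# LLM (reasoning) + shell (execution) + file system (state) +
# markdown (readable state) + cron/loop (periodic wake-up).
import subprocess
from pathlib import Path

STATE_FILE = Path("agent_state.md")  # markdown keeps state human-readable

def llm_complete(prompt: str) -> str:
    """Hypothetical model call; swap in any provider.
    Returns a canned command here so the sketch is self-contained."""
    return "echo hello-from-agent"

def run_once() -> str:
    # 1. File system: load prior state. Because state lives in files,
    #    the underlying model can be swapped while memory persists.
    state = STATE_FILE.read_text() if STATE_FILE.exists() else "# Agent state\n"
    # 2. LLM: reason over the state and decide on the next shell command.
    command = llm_complete(f"Given this state, pick the next command:\n{state}")
    # 3. Shell: execute in the real environment.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    # 4. Markdown: append what happened, so the agent can later read
    #    (and in principle rewrite) its own state -- the introspection step.
    state += f"\n- ran `{command}` -> {result.stdout.strip()}"
    STATE_FILE.write_text(state)
    return result.stdout.strip()

if __name__ == "__main__":
    # In practice a cron entry or a long-running loop provides the
    # periodic wake-up; here we run a single tick.
    print(run_once())
```

The design point the interview stresses falls out of the sketch: only `llm_complete` is new; the shell, files, and scheduler are reusable, well-understood parts of the existing software world.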
III. The era of browsers, traditional GUIs, and “people hand-clicking software” will be gradually replaced by agent-first interaction
Marc Andreessen has said explicitly that in the future “you might no longer need user interfaces.”
He further points out that the primary users of software in the future may not be humans, but “other bots.”
This means that many interfaces designed today for humans to click, browse, and fill in will be demoted to execution layers invoked behind the agent.
In this world, humans act more as goal-setters: they tell the system what they want, then let the agent call services, operate software, and complete the workflow.
He connects this change to a bigger future for software: high-quality software will become increasingly “abundant,” no longer a scarce product painstakingly built by only a few engineers.
He also expects the importance of individual programming languages to decline: models will write code across languages and translate between them, and humans may eventually care more about having the AI explain why it organized code a certain way than about clinging to any single language.
He even mentions a more aggressive direction: conceptually, AI may not only output source code, but also directly output lower-level artifacts such as binaries or model weights.
IV. This AI investment cycle is similar to the 2000 internet bubble, but the underlying supply-and-demand structure is different
He looks back on 2000 and emphasizes that the crash, to a large extent, wasn’t because “the internet wasn’t good,” but because telecom and bandwidth infrastructure were overbuilt—fiber optics and data centers were laid out in advance, and then there was a long period of digestion.
He believes that today you can indeed see concerns about “overbuilding,” but the current investment players are mainly cash-rich large companies like Microsoft, Amazon, and Google—not highly leveraged fragile players.
He specifically points out that GPU capacity that comes online today usually turns into revenue fairly quickly, unlike the huge amount of capacity that sat idle after 2000.
He also emphasizes that what we’re using now is actually a “sandbagged” version of the technology: because supply of GPUs, memory, data centers, and so on is insufficient, the full potential of models hasn’t been unleashed.
In his assessment, the real constraints in the coming years won’t be only GPUs; they will also include CPU, memory, network, and the integrated bottlenecks across the entire chip ecosystem.
He puts AI scaling laws alongside the past Moore’s Law, saying they’re not just describing patterns—they’re continuously driving capital, engineering, and industry coordination forward.
He mentions an unusual but important phenomenon: as software optimization keeps accelerating, some older-generation chips may even be worth more economically than when they were first purchased.
V. Open source, edge inference, and local running are not side issues—they’re part of the AI competitive landscape
Marc Andreessen is clear that open source is very important—not only because it’s free, but because it “lets the whole world learn how it gets done.”
He describes open-source releases like DeepSeek’s as a “gift to the world” because code plus papers spread knowledge quickly, raising the industry’s overall baseline.
In his narrative, open source is not only a technical choice; it may also be a kind of geopolitical and market strategy. Different countries and companies may adopt different openness strategies based on their own commercial constraints and influence objectives.
At the same time, he emphasizes the importance of edge inference: in the coming years, centralized inference may not get cheap enough, and many consumer-level applications can’t sustain long-term high-cost cloud inference.
He mentions a recurring pattern: models that seem “impossible to run on a PC” today often can, after a few months, actually run on local machines.
Besides cost, local running is also driven by trust, privacy, latency, and use-case fit: wearables, door locks, and on-the-go devices are all more suitable for low-latency, on-site inference.
His judgment is very direct: almost anything that comes with chips in the future may carry an AI model.
VI. The real hard problem of AI isn’t only model capability—it’s safety, identity, money flows, organizational and institutional resistance
On safety, his view is sharp: almost all potential security bugs will be easier to discover, and in the short term there could be a stretch of “catastrophic computer security disasters.”
But he also believes that programming agents will scale the capability to patch vulnerabilities; in the future, the way to “protect software” may be to have bots scan and fix it.
On identity, he believes “proof of bot” is not feasible because bots will become increasingly powerful. The truly feasible direction is “proof of human,” meaning a combination of biometrics, cryptographic verification, and selective disclosure.
He also raises an issue that is often overlooked: if agents are really going to handle real-world matters, they will ultimately need money and payment capability, and even some form of bank account, card, or stablecoin-style infrastructure.

On the organizational level, he borrows the framework of managerial capitalism and argues that AI could re-strengthen founder-led companies, because bots are very good at reporting, coordination, paperwork, and much of the “managerial work.”
But he doesn’t think society will accept AI smoothly or quickly. He cites professional licenses, unions, dockworker strikes, government departments, K-12 education, and healthcare as examples of the many institutional “speed bumps” in the real world.
His judgment is that both AI utopians and doomsayers overlook one thing: just because a technology becomes possible doesn’t mean all 8 billion people will immediately change along with it.