White House AI and Crypto Czar David Sacks Resigns and Fires Both Barrels: Anthropic's Enterprise Coding Bet Is Right, but It Trades on Regulatory Panic

Former White House AI and cryptocurrency czar David Sacks has made his first public comments on the AI industry since stepping down, firing both barrels on the All-In Podcast. On one hand, he praised Anthropic for differentiating itself through enterprise coding demand, saying its explosive ARR growth confirms the model is correct. On the other, he directly accused Anthropic of playing at "regulatory capture" by pushing a mandatory permissioning regime that harms startups, and warned that if the U.S. becomes addicted to AI doomerism, it could end up losing the U.S.–China AI race.
(Background: David Sacks, the Trump administration's crypto czar, has stepped down from the White House; his 130-day term ended with several pieces of legislation still unresolved)
(Additional context: "Big Short" investor Michael Burry has been vocal: Palantir is just a low-margin SaaS outsourcing company, and Anthropic is eating its lunch)

Table of Contents

  • Anthropic found the right funding
  • Comparing OpenAI: the hangover of consumer expansion
  • “Regulatory capture” and manufacturing panic
  • Questioning Mythos: real threat, or Chicken Little?
  • AI doomerism is America’s “self-harm”
  • Dongqu perspective

Exactly 25 days after leaving the White House, David Sacks finally broke his silence, not to discuss policy but to assess the business logic of the AI industry directly. In the All-In Podcast episode "Anthropic's Generational Run, OpenAI Panics, AI Moats," Sacks was sharp, laying out Anthropic's strengths and contradictions alike.

Sacks served the Trump administration for 130 days as "White House AI and cryptocurrency czar," officially stepping down on March 26, 2026, to become co-chair of the President's Council of Advisors on Science and Technology (PCAST). This is his first public assessment of AI companies' business models since leaving office, now speaking as an industry commentator, and that alone makes it news.

Anthropic found the right funding

Sacks gave rare high praise to Anthropic's business model. His core argument boils down to a single bet: coding was the right call.

In the All-In Podcast, he said Anthropic's bet on code generation precisely hits the core pain point at the center of enterprise IT budgets. When companies buy AI tools, the argument that most easily convinces the CFO to open the wallet is not "a smarter assistant" but "engineer productivity gains and lower labor costs." Claude's strong performance in coding scenarios has helped Anthropic build credibility with enterprise customers quickly.

The numbers back up this judgment: Anthropic's annual recurring revenue (ARR) has surpassed $30 billion, exceeding OpenAI's roughly $25 billion for the first time; in February alone, ARR grew by $6 billion; and enterprise customers with annual contracts above $1 million grew from 500 to more than 1,000. Sacks' conclusion: Anthropic's enterprise turn is no accident but the result of a clearly defined strategy.

He also mentioned that Anthropic is actively expanding its agent capabilities, upgrading Claude from a “Q&A tool” into an automated agent that can proactively execute tasks. This direction aligns with the next wave of enterprise demand for AI.

Comparing OpenAI: the hangover of consumer expansion

Sacks’ comments on OpenAI carry a hint of “told-you-so” energy.

He believes that over the past few years, OpenAI has poured heavy resources into the consumer side: GPT applications, Sora video generation, and ChatGPT subscriptions, all in an effort to build a mass-market brand. But this path burns cash quickly and monetizes slowly, ultimately forcing OpenAI to turn back toward the enterprise market, and even to promise investors a 17.5% guaranteed return to shore up confidence.

In the podcast, Sacks framed OpenAI's current problem this way: despite its huge presence in the consumer space, its commercialization foundation is less solid than on the enterprise side. Anthropic took no such detour, and as a result it now holds the upper hand.

“Regulatory capture” and manufacturing panic

However, Sacks' comments are not one-sided praise. While affirming Anthropic's business model, he also launched sharp criticisms of its policy posture.

He described Anthropic's lobbying approach as a "sophisticated regulatory capture strategy based on fear-mongering." "Regulatory capture" means a company, by actively shaping how regulations are written, packages rules that benefit itself as the public interest, until the regulator's decisions end up serving that company's business objectives.

Sacks’ specific criticism is that Anthropic is actively pushing a “mandatory permissioning regime”—requiring that AI models must obtain government approval before deployment. He believes that once such a system is implemented, it would have limited impact on large companies like Anthropic that have the resources to deal with complex and burdensome compliance procedures, but it would create a huge barrier for smaller startups—effectively blocking competition and innovation.

In other words, Sacks' accusation is that Anthropic packages its policy agenda in the name of "AI safety" while reaping the benefit of slowed-down competitors.

Questioning Mythos: real threat, or Chicken Little?

This critique was extended into a specific incident in another All-In Podcast episode, “Anthropic’s $30B Ramp, Mythos Doomsday.”

Anthropic's safety research team recently released research on the Mythos model, claiming it can find critical vulnerabilities in mainstream systems within hours, which sparked broad industry discussion. Sacks questioned whether this is a genuine technical assessment or a tactic, even a "Chicken Little" routine: a panic playbook of shouting that the sky is falling.

He did not rule out that the Mythos research points to real risks, but his position was clear: if such narratives are used to push for regulation, they must face stricter external scrutiny rather than letting Anthropic act as both player and referee.

AI doomerism is America’s “self-harm”

Sacks brought this critique into a broader geopolitical framework. He warned that if the U.S. continues to sink into an atmosphere of AI doomerism, it amounts to a self-inflicted injury.

His logic is this: the U.S. and China are in an AI race. The U.S. is still ahead for now, but if it spends its regulatory energy policing its own market, letting the industry endlessly debate whether "AI is too dangerous," it will only slow its own pace and allow China to catch up. This stance matches the policy focus he pursued as White House AI czar, but delivered now as an industry commentator, it carries less political baggage and sounds more direct.

Dongqu perspective

Sacks' two-sided argument reveals the real competitive axis of the AI industry: not just who has the stronger model, but how regulatory posture, enterprise market penetration, and the U.S.–China AI race interact.

Anthropic's commercial success is evident: enterprise customer growth, explosive ARR, and a precisely targeted coding entry point. But Sacks' criticism serves as a reminder to the market: Anthropic's policy influence grows in tandem with its commercial success. And when a company is both an advocate for AI safety and a beneficiary of AI safety rules, that dual role deserves closer scrutiny.

For AI startups in Taiwan and across Asia, the risk of the "mandatory permissioning regime" that Sacks flagged is also worth watching: if the U.S. regulatory framework tilts in this direction, the barrier to cross-border deployment will only rise, not fall.

This is the best take I have heard around the existing AI landscape today

Fantastic explanation around the constraints at OpenAI and Anthropic, focus on enterprise business models, and the doomerism narrative in AI

From former AI czar David Sacks

"Even though Anthropic’s… pic.twitter.com/TqvDzjt5Y0

— Boring_Business (@BoringBiz_) April 19, 2026
