OpenAI cuts capital expenditure targets by 60%. The media says this is "becoming more rational."
I think a more accurate way to put it is: they are finally giving investors a number that they can justify.
A few months ago, Sam Altman publicly announced a commitment of $1.4 trillion in computing power.
Once that number was out, the entire industry started moving: GPU ordering, data center site selection, power contracts. No one calculated carefully because it was OpenAI saying it.
Now it’s down to $600 billion.
On the surface, this is "financial discipline." Tying expenses to expected revenue shows the company has matured. Many analysts interpret it within this framework.
But there’s a number worth pausing to consider.
OpenAI’s revenue in 2025 is projected at $13.1 billion. Their 2030 goal is $280 billion.
This isn’t merely aggressive; it implies a compound annual growth rate of over 80%, sustained for five full years, with no room to slow down.
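The growth-rate claim is easy to verify. A back-of-the-envelope check, assuming only the two revenue figures quoted above and a five-year window:

```python
# Back-of-the-envelope check of the implied growth rate,
# using the revenue figures quoted in the text.
revenue_2025 = 13.1   # projected 2025 revenue, $ billions
target_2030 = 280.0   # stated 2030 revenue goal, $ billions
years = 5

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (target_2030 / revenue_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 84-85% per year
```

Any growth rate "over 80%" compounded for five years means roughly doubling revenue every 14 months, without pause.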
The $600 billion expenditure is "disciplined" relative to $1.4 trillion. If you convert it to annual spending, it’s about $100 billion per year—more than Microsoft or Amazon’s entire cloud business capital expenditure annually.
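The annualized figure follows from simple division. A minimal sketch, assuming the $600 billion is spread evenly over roughly six budget years, 2025 through 2030 (the source does not specify the exact window):

```python
# Annualizing the reduced capex commitment, assuming an even
# spread over ~6 years from 2025 through 2030 (the exact
# spending window is not specified in the source).
total_capex = 600.0  # $ billions, post-cut commitment
years = 6

annual = total_capex / years
print(f"Implied annual spend: ${annual:.0f}B per year")
```

A shorter window only pushes the annual number higher, which is the point: "disciplined" here still means historically unprecedented spending.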
So, that "becoming more rational" narrative is really just using a bigger number as the benchmark.
It’s a bit like someone saying, "I’ve reduced my monthly spending from 140,000 to 60,000, so I’m more rational"—but their current monthly income is only 13,000.
The engineering reality is this: when the narrative about computing power demand is "more is better because models haven’t hit bottlenecks," capital expenditure has no ceiling. But after DeepSeek, papers about "achieving similar results with less compute" are truly entering production environments. Investors are starting to ask a question they previously dared not: Is the relationship between your compute input and output really linear?
That $1.4 trillion figure now looks more like a financing strategy than an engineering budget.
It’s about making competitors follow suit, making the market believe that anyone who doesn’t keep pace is out of the game, and creating a story for the next round of valuation. In Silicon Valley, this kind of number has a name—it’s not called a forecast, it’s called a narrative.
Is $600 billion credible? Not necessarily.
But it at least signifies that the "limitless compute" narrative is no longer easy to sell. The next question is: who will find a number that can both persuade investors and align with engineering realities?
That shift has a name: from a compute race to an efficiency race.
It’s not about who burns more, but who can use the same money to keep the marginal benefits of their models from dropping too fast.
------------------
Quote:
OpenAI significantly reduces capital expenditure expectations: from $1.4 trillion to $600 billion
Pre-2030 expenditure expectations cut by 60%, reflecting a shift from frenzy to rationality in AI infrastructure investment; the industry may enter a consolidation phase.