"$1 trillion" in revenue: NVIDIA's confidence and challenges
This week, no number has shocked the market more than “$1 trillion.”
In his keynote at GTC, NVIDIA’s annual developer conference, CEO Jensen Huang said that by the end of 2027, the company’s Blackwell AI accelerator chip architecture and its next-generation Rubin products are together expected to generate at least $1 trillion in revenue. He also made clear that this figure does not include sales of the standalone Vera CPU or the LPX rack solution.
From $500 billion to $1 trillion, NVIDIA’s chip revenue outlook is set to double within half a year.
Amid the market’s “AI bubble” debate, why is Jensen Huang able to deliver a judgment so far beyond market expectations? And can this target be achieved?
Where does the confidence come from?
Behind Jensen Huang’s “trillion-dollar revenue” outlook are three major pillars.
1. Very high order visibility. In a media interview at GTC 2026, Jensen Huang emphasized that the more-than-$1-trillion revenue outlook announced on Monday of this week has strong “visibility.” NVIDIA expects to book, complete, and deliver business worth more than $1 trillion, and said it holds “strong confidence” in achieving the “more than $1 trillion” goal.
This assessment of “strong visibility” is not without basis. Jensen Huang pointed out that customers’ most pressing demand right now is to “ensure they get enough supply,” rather than price. This reflects that the AI compute market is still in a typical supply-bottleneck stage: demand far exceeds supply, and customers worry more about securing allocation than about whether prices are high or low.
Omdia said the advanced packaging sector is facing a capacity crisis. TSMC’s CoWoS capacity is set to expand from 75,000 wafers per month in 2025 to 120,000–130,000 wafers per month by the end of 2026, but it still cannot meet the surge in demand. The result is longer delivery lead times, rising prices, and capacity allocation tilted even more toward the biggest customers.
Jensen Huang further said that large-scale procurement from cloud providers and AI companies creates a high degree of certainty in the company’s orders, reservations, and shipments, which is the key reason he dares to make a “strong visibility” call. From an industry perspective, tech giants including OpenAI, Meta, Microsoft, Google, and Amazon are continuously ramping up investment in AI data center construction, driving AI compute demand to grow exponentially.
JPMorgan noted that the $1 trillion figure implies at least $50–70 billion of upside relative to Wall Street’s current consensus outlook for data center revenue from 2026 to 2027.
2. AI enters the “inference era.” Unlike the past two years, when “model training” was the core, Jensen Huang repeatedly emphasized at GTC 2026 that the AI industry has reached an “inference turning point.”
What is inference? It refers to the real-time computing demands of AI models in real-world applications. When users ask ChatGPT questions, use Midjourney to generate images, or have autonomous vehicles make decisions, the work behind it is inference computation. Unlike the one-time large-scale computation in the training stage, inference is continuous and represents computing demand that grows linearly, or even exponentially, as the number of users expands.
In his keynote speech, Jensen Huang said: “Training makes the model smart, but inference is what truly brings AI into households. Every user interaction requires compute power, and as AI agents become more widespread, inference demand will far exceed training demand.”
Market size estimation:
- Training market: relatively concentrated, mainly led by a small number of tech giants; demand is project-based with phased bursts
- Inference market: highly fragmented, ranging from cloud APIs to edge devices, from consumer-level applications to enterprise solutions; demand is continuous with scaled growth
3. Product iteration plus a platformization strategy. The $1 trillion outlook disclosed at GTC 2026 covers only revenue from the Blackwell and next-generation Rubin architecture chips; it does not include upcoming new products, nor revenue from new regions and markets. This suggests NVIDIA’s overall AI business could ultimately be even larger than current estimates.
Product roadmap:
- Blackwell architecture (2024–2025): already in large-scale mass production; the B200 chip delivers 4x the training performance of the H100, and inference performance increases by as much as 30x
- Rubin architecture (2026–2027): expected to begin large-scale deployment starting in 2026, with performance set to jump further
- Feynman architecture (2028 and beyond): even further-out next-generation architectures are already under R&D
More importantly, NVIDIA is shifting from “selling chips” to “selling AI factories.” At the conference, Jensen Huang announced the NVIDIA Dynamo open-source inference system, a blueprint for physical AI data factories, and partnerships with major global industrial software players, all aimed at building a complete AI infrastructure ecosystem.
Analysts say this platformization strategy means NVIDIA’s future revenue will no longer be limited to a single GPU, but will expand to the full data center system. Wedbush senior tech analyst Dan Ives said that NVIDIA is not only riding the huge wave of artificial intelligence, but is now expanding its control over the infrastructure that supports AI.
This will significantly raise the revenue ceiling. Jensen Huang said plainly: “The $1 trillion target will continue to grow.”
Multiple challenges facing the trillion-dollar path
Despite Jensen Huang’s confidence, reaching $1 trillion in cumulative revenue by the end of 2027 still faces multiple challenges.
First, the urgency of the time window. From March 2026 to the end of 2027, NVIDIA has less than two years to generate $1 trillion in cumulative revenue. Considering the cycle from chip orders to delivery (typically 6–12 months) and the time required for large-scale deployment, the practical time window for confirming revenue is even tighter.
- NVIDIA fiscal year 2025 (ended January 2025): revenue of $130.5 billion
- NVIDIA fiscal year 2026 (February 2025 to January 2026): revenue of $215.9 billion
- Fiscal year 2027: revenue expected to reach about $300–400 billion
- Total revenue over the three years 2025–2027: about $600–700 billion
- Reaching $1 trillion therefore implies that 2027 revenue alone may need to exceed $500 billion
This means NVIDIA would need to nearly double revenue year over year in 2027. For any hardware company, that is an unprecedented challenge.
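The arithmetic behind that conclusion can be sketched as a quick back-of-envelope check. The figures below are the article's own estimates (in billions of USD), and the assumption that all three fiscal years count toward the $1 trillion target follows the article's framing rather than any confirmed accounting from NVIDIA:

```python
# Back-of-envelope check of the revenue gap described above, using
# the article's own figures (billions of USD). Whether all three
# fiscal years count toward the $1 trillion target is an assumption
# taken from the article's framing.
fy2025 = 130.5                           # FY2025 revenue (reported)
fy2026 = 215.9                           # FY2026 revenue per the article
fy2027_low, fy2027_high = 300.0, 400.0   # article's FY2027 estimate

total_low = fy2025 + fy2026 + fy2027_low
total_high = fy2025 + fy2026 + fy2027_high
gap_low = 1000.0 - total_high            # smallest remaining shortfall
gap_high = 1000.0 - total_low            # largest remaining shortfall

print(f"Three-year total: ${total_low:.1f}B to ${total_high:.1f}B")
print(f"Shortfall vs. $1T: ${gap_low:.1f}B to ${gap_high:.1f}B")
```

On these numbers the three-year total lands in the mid-$600B to mid-$700B range, leaving a shortfall of roughly $250–350 billion against the $1 trillion mark, which is what forces the implied 2027 figure toward $500 billion or more.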
Second, competition is intensifying.
The MI400 series launched by AMD in 2025 is viewed by the industry as a direct challenge to NVIDIA’s Blackwell. In a recent interview, AMD CEO Lisa Su said: “Our share in the AI market is steadily increasing. MI400 offers better cost-effectiveness than Blackwell on certain workloads, which is very attractive to price-sensitive customers.”
A bigger threat comes from NVIDIA’s large customers accelerating the deployment of their own AI chips:
- Google TPU v6: already used for Gemini 2.0 training and inference, with performance approaching Blackwell
- Amazon Trainium 3 / Inferentia 3: deployed at scale on AWS, with costs 30–40% lower than NVIDIA’s solutions
- Microsoft Maia 200: full deployment on Azure beginning in late 2025
- Meta MTIA: plans to ship four generations of in-house AI chips by end-2027
A former Google chip engineer said: “TPUs’ efficiency in Transformer model training has already surpassed that of GPUs. While they are less general-purpose than CUDA, for companies with clearly defined workloads the economics of in-house chips are very compelling. The cloud providers are aiming for their own chips to account for 30–40% of their AI compute procurement by 2027.”
A Seaport Research analyst said that “NVIDIA now needs to work harder than ever to fight for revenue.”
In addition, the supply chain could also face bottlenecks. Currently, TSMC’s CoWoS advanced packaging capacity is the main bottleneck. Even though TSMC is accelerating capacity expansion, the supply-demand gap for high-end AI chips is expected to persist until the end of 2026. If the pace of expansion falls short of expectations, NVIDIA could face an awkward situation of “having orders but not being able to deliver.”
Turbulence in the Middle East is also affecting Korea, home to key memory manufacturing capacity. According to 2025 statistics from the Korea International Trade Association, Korea depends on Qatar for 64.7% of its helium imports. Semiconductor manufacturing processes rely heavily on helium to cool silicon wafers, and no viable alternative is currently believed to exist. The Korean government has also warned that a prolonged supply disruption could lead to a helium shortage and rising prices.
It is worth noting that the Strait of Hormuz blockade has kept global oil prices elevated at around $100 per barrel, a heavy blow to energy-intensive compute data centers. If energy costs offset the efficiency gains brought by new chips, global AI investment plans may be forced to scale back.