Alphabet Delivered a Game-Changing Blow to Nvidia's AI Chip Dominance
The TPU Breakthrough That Changes Everything
For years, the AI infrastructure battle seemed one-sided. Major cloud providers like Alphabet, Amazon, and Microsoft poured billions into designing custom data center chips, but none could match Nvidia’s industry-leading position in graphics processing units (GPUs). That narrative just shifted dramatically.
On November 18, Alphabet unveiled Gemini 3, its latest AI model, and the real story wasn't just its performance: the model was delivered exclusively on Alphabet's custom tensor processing units (TPUs). This milestone marks a potential turning point in the battle for AI chip supremacy. Gemini 3 matched or exceeded the capabilities of OpenAI's and Anthropic's latest releases, proving that homegrown chips can compete at the highest level.
What makes this development particularly significant? Meta Platforms is reportedly in talks to purchase TPUs directly from Alphabet, while Anthropic just announced a major expansion of its TPU adoption through Google Cloud. The shift from theoretical threat to real market demand is now underway.
Supply Constraints Meet Explosive Demand
Google Cloud is experiencing unprecedented demand for computing capacity. During Q3 2025 (ended September 30), the platform generated $15.1 billion in revenue—a 33.5% year-over-year increase that actually accelerated from the prior quarter. Yet there’s a critical constraint: TPU availability can’t keep pace.
The order backlog for computing capacity exploded to $155 billion during Q3, jumping 82% year-over-year. According to Amin Vahdat, General Manager of AI and Infrastructure at Google Cloud, this supply-demand imbalance could persist for the next five years. This dynamic delivers pricing power to Alphabet while creating frustration for developers desperate for computing resources today.
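To put those growth figures in context, here is a quick back-of-the-envelope check (a minimal Python sketch; the implied prior-year values and the backlog-to-revenue ratio are derived from the numbers quoted above, not reported figures):

```python
# Back-of-the-envelope check on the growth figures quoted above.
# The implied prior-year values and the backlog ratio are derived,
# not reported figures.

q3_2025_cloud_revenue = 15.1   # $ billions, Google Cloud revenue, Q3 2025
revenue_growth_yoy = 0.335     # 33.5% year over year

backlog_q3_2025 = 155.0        # $ billions, computing-capacity backlog
backlog_growth_yoy = 0.82      # 82% year over year

implied_q3_2024_revenue = q3_2025_cloud_revenue / (1 + revenue_growth_yoy)
implied_q3_2024_backlog = backlog_q3_2025 / (1 + backlog_growth_yoy)

print(f"Implied Q3 2024 cloud revenue: ~${implied_q3_2024_revenue:.1f}B")   # ~$11.3B
print(f"Implied Q3 2024 backlog:       ~${implied_q3_2024_backlog:.0f}B")   # ~$85B
print(f"Backlog vs. annualized revenue: ~{backlog_q3_2025 / (4 * q3_2025_cloud_revenue):.1f}x")  # ~2.6x
```

In other words, the backlog alone is worth roughly two and a half years of Google Cloud's current annualized revenue, which is why the supply-demand imbalance could take years to clear.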
The scale of potential demand is staggering. When Anthropic announced it would access up to 1 million TPUs through Google Cloud to train its Claude models, that single contract illustrated the magnitude of the opportunity. Meta Platforms, currently reliant on Nvidia GPUs for training its Llama models, plans to purchase billions of dollars' worth of TPUs for its own data centers starting in 2027.
The Nvidia Challenge: Competitive but Not Vulnerable (Yet)
The competitive pressure is real, but Nvidia's position isn't collapsing overnight. As long as order backlogs keep growing faster than supply, Nvidia is unlikely to feel significant effects for several more years: cloud providers scrambling to fulfill customer demand will keep sourcing from multiple suppliers, including Nvidia, simply to meet capacity requirements.
GPUs remain the default standard for most AI workloads, largely because of their versatility and ecosystem maturity. Alphabet designed TPUs for its own purposes: high-performance and energy-efficient, yes, but not necessarily optimal for every developer's use case. Nvidia's proprietary CUDA platform remains the programming environment of choice across the AI industry, and switching to TPUs means abandoning CUDA, creating friction for developers already invested in the Nvidia stack.
Nvidia CEO Jensen Huang forecasts that AI data center spending could reach $4 trillion annually by 2030. With Nvidia on track for roughly $213 billion in annual revenue in its fiscal year ending January 2026, Huang's projection leaves enormous room for growth even if the company cedes market leadership.
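A rough way to see how much headroom that forecast implies is sketched below (a minimal Python example using only the two figures above; the market-share scenarios are hypothetical, not company guidance):

```python
# Rough illustration of the headroom implied by Huang's forecast.
# The $4 trillion market size and the ~$213B run rate come from the
# article; the market-share scenarios are hypothetical.

ai_datacenter_spend_2030 = 4_000  # $ billions per year, forecast for 2030
nvidia_revenue_fy2026 = 213       # $ billions, fiscal year ending January 2026

share_today = nvidia_revenue_fy2026 / ai_datacenter_spend_2030
print(f"Current revenue as a share of the projected 2030 market: ~{share_today:.0%}")  # ~5%

for share in (0.25, 0.40, 0.60):  # hypothetical shares of the 2030 market
    print(f"{share:.0%} share of $4T/year -> ~${share * ai_datacenter_spend_2030:,.0f}B in revenue")
```

Even a minority share of a market that size would dwarf Nvidia's current revenue, which is the crux of the bull case despite rising competition.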
Valuation Perspective for Investors
From a valuation standpoint, both companies present interesting opportunities. Nvidia's P/E ratio sits at 44.6, roughly 27% below its 10-year historical average of 61.2, suggesting potential upside for patient investors. Alphabet, despite delivering a 70% return year-to-date, trades at a P/E of just 31.2, making it slightly cheaper than the Nasdaq-100 index overall.
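The arithmetic behind that comparison is straightforward (a minimal Python sketch using the P/E figures above; the re-rating figure is a purely mechanical calculation, not a price target):

```python
# The arithmetic behind the P/E comparison, using the figures above.

nvidia_pe = 44.6
nvidia_10yr_avg_pe = 61.2
alphabet_pe = 31.2

discount = 1 - nvidia_pe / nvidia_10yr_avg_pe          # how far below the average it trades
rerating_upside = nvidia_10yr_avg_pe / nvidia_pe - 1   # mechanical move back to the average

print(f"Nvidia P/E discount to its 10-year average:   ~{discount:.0%}")         # ~27%
print(f"Implied move if the multiple mean-reverted:   ~+{rerating_upside:.0%}")  # ~37%
print(f"Alphabet P/E vs. Nvidia P/E: {alphabet_pe} vs. {nvidia_pe}")
```

Note the distinction: the stock trades about 27% below its historical multiple, while a return to that multiple would imply roughly a 37% move, all else equal.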
Given the expected continued expansion in AI infrastructure spending, owning both companies could capture different aspects of this secular growth trend. The question isn’t whether AI spending will continue accelerating—it almost certainly will. The question is how the competitive landscape evolves and who captures the most value.