"Memory Shortage" Sweeps the Globe: A Key Bottleneck in the AI Race? Another Silicon Valley Big Shot Speaks Out


Currently, artificial intelligence (AI) companies are engaged in fierce competition to secure more memory chips. The entire industry is facing a serious supply constraint: costs are soaring, product deliveries are delayed, and some companies—especially in the consumer electronics sector—have begun raising product prices.

Google DeepMind CEO Demis Hassabis recently discussed the global “memory shortage” in an interview.

He stated that the entire supply chain of memory chips is constrained, and hardware-level challenges are “limiting a large amount of AI deployment.” Market demand for Google Gemini and other AI models has far exceeded the company’s current supply capacity.

“Additionally, this has somewhat limited research work,” Hassabis said. “To test new ideas at a sufficient scale and verify their feasibility, a large number of chips are needed.”

Whether at Google, Meta, OpenAI, or other tech giants, researchers have an urgent need for chips—and memory is a key component. Meta CEO Mark Zuckerberg has previously said that, besides funding, one of the most valued resources for AI researchers is access to as many chips as possible.

Hassabis pointed out that as long as capacity is limited, bottlenecks will form.

“The entire supply chain is under tension,” Hassabis said. “We are somewhat fortunate because we have our own tensor processing units (TPUs) and in-house chip-design capability.”

For a long time, Google has been independently developing TPUs for internal use. The company also rents out TPUs through its cloud services to external clients—this has also put pressure on NVIDIA.

But even with self-developed TPUs, Google cannot escape the fiercely competitive memory market. Hassabis stated: “Ultimately, the core component suppliers are still few in number.”

Currently, three companies dominate global memory chip production: Samsung, Micron, and SK Hynix. These companies are striving to meet the chip demands of large-scale AI enterprises while maintaining long-term relationships with their consumer electronics clients, making the situation quite challenging.

Adding to the difficulty, the type of memory chip required by AI companies differs from that used by personal computer (PC) manufacturers: developers of large language models need high-bandwidth memory (HBM) chips.

Former Intel CEO Pat Gelsinger recently warned that the bottleneck in AI development has shifted from computing power to memory and broader infrastructure systems. He bluntly stated that the memory shortage will not ease before 2028.

(Source: Science and Technology Innovation Board Daily)
