"Unprecedented in the world"! Major breakthrough in chips! Jensen Huang's latest statement
On February 19, tech outlet Wccftech reported that NVIDIA founder and CEO Jensen Huang teased the upcoming GTC 2026 conference, stating plainly that a “world-first” new chip will be unveiled at the event and drawing widespread industry attention.
NVIDIA’s GTC 2026 will be held in San Jose, California, from March 16 to 19 local time, with a focus on the new era of AI infrastructure competition.
Given NVIDIA’s leadership in AI chips, the preview is seen as a move to further consolidate its position in AI infrastructure.
Jensen Huang said, “We have several new chips ready that are world firsts. Nothing about this is easy, because all of these technologies have reached their limits.” He did not disclose specific models, but implied that the new hardware will push current physical limits to the extreme. Tech media NeoWin read the remarks as most likely pointing to a mature product based on the Rubin architecture.
Several tech giants previously endorsed Rubin
On January 5 this year, NVIDIA officially launched the NVIDIA Rubin platform, ushering in a new generation of AI. The platform comprises six new chips designed to build extraordinary AI supercomputers; NVIDIA says Rubin sets a new benchmark, allowing the world’s largest and most advanced AI systems to be built, deployed, and operated securely at minimal cost, accelerating mainstream AI adoption.
Huang said at the time, “The demand for AI training and inference is growing rapidly, and Rubin arrives at just the right time. With a new AI supercomputer released every year and a design optimized across six new chips, Rubin marks a key step forward for AI.”
The Rubin platform introduces five new technologies, including the next-generation NVIDIA NVLink interconnect, a new Transformer engine, confidential computing and a RAS engine, and the NVIDIA Vera CPU. These advances are meant to accelerate agent-based AI, advanced inference, and large-scale mixture-of-experts (MoE) models at as little as one-tenth the per-token cost of the NVIDIA Blackwell platform, while MoE model training on Rubin requires only a quarter as many GPUs as the previous generation, speeding AI adoption and proliferation.
At that time, many tech leaders endorsed NVIDIA’s new Rubin platform:
OpenAI CEO Sam Altman said, “Intelligence scales with compute. As we increase compute, models become stronger, can solve more complex problems, and have a greater impact on humanity. NVIDIA Rubin helps us continue this process, bringing advanced intelligence to everyone.”
Anthropic co-founder and CEO Dario Amodei stated, “The efficiency improvements of the NVIDIA Rubin platform represent a major infrastructure advance, enabling longer context memory, better reasoning, and more reliable outputs. Our collaboration with NVIDIA empowers our safety research and cutting-edge models.”
Microsoft Chairman and CEO Satya Nadella said, “We are building the world’s most powerful AI superfactory, capable of handling any workload, anywhere, with peak performance and efficiency. With the addition of NVIDIA Vera Rubin GPUs, we will empower developers and organizations to create, infer, and scale in revolutionary ways.”
Lenovo Chairman and CEO Yang Yuanqing said, “Lenovo plans to adopt the next-generation NVIDIA Rubin platform, combined with our Neptune liquid cooling solution and our global scale, manufacturing efficiency, and service coverage, to help enterprises build AI factories as intelligent acceleration engines, speeding up insights and innovation. We are jointly shaping the future of AI empowerment, ensuring AI becomes a standard for every enterprise.”
NVIDIA previously revealed that Rubin is now in full production. The first batch of cloud providers deploying Vera Rubin instances in 2026 includes AWS, Google Cloud, Microsoft, OCI, and NVIDIA cloud partners CoreWeave, Lambda, Nebius, and Nscale.
Microsoft will deploy NVIDIA Vera Rubin NVL72 rack-scale systems as part of its next-generation AI data centers, including the future Fairwater AI superfactory. Rubin is intended to deliver unprecedented efficiency and performance for training and inference workloads, laying the foundation for Microsoft’s next-generation cloud AI capabilities. Microsoft Azure will offer a highly optimized platform to help customers accelerate innovation across enterprise, research, and consumer applications.
The “upstream selling shovels” narrative is still ongoing
Large tech companies keep raising their AI infrastructure spending plans, yet NVIDIA’s stock has been largely stagnant for months, even though the company is one of the biggest beneficiaries of this massive investment.
Since its late-October 2025 high, NVIDIA’s stock is down a cumulative 6.49%, but its maximum drawdown over the period exceeded 20%; in other words, the shares fell more than 20% from the peak at one point before recovering much of the loss.
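To illustrate how the two figures can coexist, here is a minimal sketch in Python, using a made-up price series rather than actual NVIDIA quotes, that compares the cumulative decline from the peak with the maximum drawdown along the way:

```python
# Illustrative only: a hypothetical daily closing series, not actual NVIDIA quotes.
prices = [200.0, 195.0, 180.0, 158.0, 172.0, 185.0, 187.0]

# Cumulative decline: the latest close measured against the peak of the series.
peak = max(prices)
cumulative_decline = (peak - prices[-1]) / peak

# Maximum drawdown: the deepest drop from any running peak to a later trough.
running_peak = prices[0]
max_drawdown = 0.0
for p in prices:
    running_peak = max(running_peak, p)
    max_drawdown = max(max_drawdown, (running_peak - p) / running_peak)

print(f"Decline from the peak: {cumulative_decline:.2%}")  # 6.50%
print(f"Maximum drawdown:      {max_drawdown:.2%}")        # 21.00%
```

In this hypothetical series the stock bottoms more than 20% below its peak before rebounding, yet ends only about 6.5% below it, mirroring the pattern described above.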
Huang has previously said that the so-called “AI doomsday” narrative is weighing on the entire world and discouraging potential investors. He believes a “narrative war” is under way in AI: some see the future as bleak and crisis-ridden, while others remain optimistic. He dismissed both extremes as overly simplistic, but acknowledged that the overly pessimistic view is having a negative impact in the real world.
A recent report from Huafu Securities pointed out that the “upstream selling shovels” narrative remains intact, with Chinese and US tech giants keeping capital expenditures high into 2026. Taking Amazon, Google, Microsoft, Alibaba, Tencent, and Baidu as examples, Bloomberg consensus estimates put these internet giants’ combined 2026 capital spending at $404.488 billion, up about 18% year-over-year. For all of them except Tencent, capital expenditure exceeds 50% of operating cash flow.
Cooperation between the tech giants and NVIDIA is indeed deepening. On February 17, Meta and NVIDIA announced a new long-term partnership covering large-scale chip deployment and comprehensive hardware-software optimization.
According to the disclosures, a core part of the partnership is Meta deploying millions of NVIDIA chips in its data centers, including Blackwell-architecture GPUs, next-generation Rubin-architecture GPUs, and Arm-based Grace CPUs. This marks the first large-scale deployment of NVIDIA Grace CPUs, challenging the traditional x86 monopoly. Meta also plans to introduce the more powerful Vera series processors in 2027 to further strengthen its position in high-efficiency AI computing.
Huafu Securities believes that, from a narrative perspective, the US and Chinese AI industries are still transitioning from the first two phases, “upstream selling shovels” and “new technological demand,” toward a third phase of “empowering entire industries.” Specifically: the “upstream selling shovels” narrative persists, with large capital expenditures continuing; new technological demand is pushing large models toward edge deployment, with fields such as robotics and autonomous driving moving from technical validation into the “1-to-100” stage of large-scale industrial expansion; and as AI technology matures, “empowering entire industries” could further unleash growth dividends.
(Article source: Securities Times)