NVIDIA's four-year-old chip sees rental prices rise nearly 40% in five months


(Source: First Finance and Economics News)

Newly released market research data shows that rental prices for an Nvidia computing chip launched four years ago are still climbing.

On April 2, U.S. local time, market research firm SemiAnalysis released a report on GPU leasing, saying that usage of open-source models such as Kimi K2.5 and GLM has surged and that newly funded AI companies are also adding to GPU demand, driving up prices for products and services across the compute supply chain.

“GPU leasing is a computing-related product and service. Tight supply has caused prices to skyrocket. The annual lease price for an H100 has jumped from its October 2025 low of $1.70 per hour to $2.35 per hour in March 2026, an increase of nearly 40%,” the report says.
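The “nearly 40%” figure can be checked directly from the two hourly rates quoted. This is a quick sanity check of the arithmetic, not a calculation from the report itself:

```python
# Sanity check of the H100 lease-price increase cited by SemiAnalysis.
low = 1.70   # $/hr, October 2025 low for an annual H100 lease
high = 2.35  # $/hr, March 2026 annual lease price

increase = (high - low) / low
print(f"{increase:.1%}")  # prints "38.2%", i.e. "nearly 40%"
```

So the quoted rates imply a 38.2% rise, consistent with the report's rounded claim.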

SemiAnalysis says that even at these higher prices, GPU leasing capacity has already sold out. Finding new GPU compute at the start of 2026 is as expensive and difficult as “trying to book tickets for the last flight.” At the end of 2025, rental prices were still competitive because leasing providers held plenty of inventory, but the situation has since changed. AI-native companies and smaller AI labs currently sign mostly one-year leases, but they increasingly want long-term contracts of four years or more to lock in compute capacity, even agreeing to pay more than 20% upfront, which was not common four years ago.

SemiAnalysis says that four- to five-year lease contracts are signed mainly by AI labs. These deals involve clusters of 50 megawatts, 100 megawatts, or more, equivalent to roughly 24,000 to 48,000 Nvidia GB NVL72 GPUs.
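As a back-of-the-envelope consistency check (my own arithmetic, not a figure from the report), dividing the stated cluster power by the stated GPU counts gives the implied all-in facility power budget per GPU:

```python
# Implied facility power per GPU at both ends of the range in the report:
# 50 MW <-> 24,000 GPUs and 100 MW <-> 48,000 GPUs.
for megawatts, gpus in [(50, 24_000), (100, 48_000)]:
    kw_per_gpu = megawatts * 1_000 / gpus
    print(f"{megawatts} MW / {gpus:,} GPUs = {kw_per_gpu:.2f} kW per GPU")
```

Both pairings work out to about 2.1 kW per GPU including cooling and overhead, so the two figures scale consistently with each other.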

The H100, introduced in 2022, is a chip in Nvidia's Hopper architecture, built on a 4nm process with 80 billion transistors. Nvidia's GPU architecture has since iterated through Blackwell to Rubin. Though no longer the latest GPU, the H100's rising rental prices still reflect growing demand for AI compute.

In February this year, Nvidia CEO Jensen Huang also noted that products, including previous-generation GPUs, remain in short supply, reflecting strong market demand for AI computing resources. He judged that the AI buildout “still has seven to eight years of road ahead,” and that what is happening now is only the beginning of a long construction cycle.

Chinese internet companies have also recently discussed demand for compute leasing. At a media briefing after releasing its March earnings report, Tencent executives said that one reason GPU supply was constrained last year was that capital expenditure had not grown much, and that the company has partly addressed the resulting compute shortage through renting. This year the company still expects capital expenditure to increase significantly and plans to step up investment in flexible leasing, both to support model training and to offer more rentable cloud computing capacity to external customers.

