"Token New Era": China's AI Industry "Ten Questions and Answers"
China’s foundational AI model industry is at a critical stage, shifting from “expectation-driven” to “demand-driven.” In a recent research report, JPMorgan systematically answers the ten core questions investors have been asking about the industry. The bank believes model quality has become the primary variable shaping the market landscape, and that industry segmentation will accelerate.
In the report, released on March 27, JPMorgan notes that China’s AI market is at a clear inflection point: demand growth in coding and agent scenarios is accelerating, domestic model capabilities are approaching or even surpassing the level of leading U.S. models from a year earlier, and local pricing better aligns with economic efficiency. Together, these factors improve the returns on deployment.
2026 will be the key year in determining whether Chinese enterprises’ AI demand can replicate the growth curve the U.S. saw in 2025. Taking Anthropic as a reference point, its annual recurring revenue (ARR) grew from $1 billion in December 2024 to $19 billion in March 2026, roughly a 19x increase in 15 months.
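For context, the ARR figures cited above imply a steep compound growth rate. A minimal back-of-the-envelope check (variable names are illustrative, and the dollar figures come from the report itself):

```python
# Growth rate implied by the report's Anthropic figures.
start_arr = 1.0    # ARR in $B, December 2024
end_arr = 19.0     # ARR in $B, March 2026
months = 15

# Compound monthly growth rate: (end / start)^(1 / months) - 1
monthly_growth = (end_arr / start_arr) ** (1 / months) - 1
print(f"Implied compound monthly growth: {monthly_growth:.1%}")  # ~21.7%
```

A sustained rate above 20% per month is the benchmark the report implicitly sets when asking whether China can "replicate" this curve.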
China’s market has the conditions to follow a similar path. In the coding domain especially, internet giants such as Tencent, Alibaba, and ByteDance have integrated the relevant tools into their existing ecosystems, shifting demand from isolated demonstrations to full-scale deployment. The bank maintains Overweight (“increase holdings”) ratings on Zhipu and MiniMax, with target prices of HK$800 and HK$1,100, respectively.
Question 1: Is AI demand growing linearly, or is there a breakout at the inflection point?
Demand is driven by inflection points, not linear growth.
As long as model quality is good enough to unlock real application scenarios, usage will shift from linear growth to a “convex curve”-style breakout. The strongest evidence comes from the U.S. market: Anthropic’s annual recurring revenue (ARR) surged from $1 billion in December 2024 to $19 billion in March 2026 within just 15 months—nearly a 19x increase.
China currently has the foundational conditions for a similar breakout: domestic model capabilities have surpassed the level of leading U.S. models from a year ago, and local pricing better matches China’s economic efficiency. With both factors combined, expectations for AI implementation returns have significantly improved.
On the agent side, OpenClaw has become an important catalyst—shifting use cases from single-turn interaction to executing multi-step tasks, greatly increasing the number of tokens consumed per task. Internet giants such as Tencent, Alibaba, and ByteDance have integrated OpenClaw-related tools into their existing ecosystems, marking the trend’s evolution from “developer experiments” to “full ecosystem deployment.”
Question 2: Will API pricing rise, fall, or diverge?
Pricing won’t move in only one direction; divergence is the main theme.
On one hand, models with stronger capabilities gain pricing power. If a model uniquely unlocks high-value tasks (agentic coding, long-horizon workflows, enterprise-grade reliability), customers are willing to pay a premium because the returns are measurable. On the other hand, as hardware and algorithm efficiency continue to improve, the per-unit inference cost will keep falling, creating price pressure on models whose capabilities stagnate.
The final outcome is a divergent pricing structure: models that continuously maintain cutting-edge capabilities can see both volume and price increase; models that fail to iterate continuously will face price declines—even if usage continues to grow, profit margins will become uncertain.
Question 3: If pricing isn’t the main battlefield, where is the competitive focus?
The main battlefield has shifted from token prices to model capabilities.
This is a key change compared with last year: in China’s market in 2025, the focus was a comprehensive price war. But in the coding and agent scenarios where demand is growing fastest, quality matters far more than unit price.
In multi-step workflows, what customers are essentially buying is not “cheap tokens,” but “tasks getting completed successfully.” The research report provides an intuitive math example: if the single-step success rate rises from 85% to 98%, the final completion rate of a 20-step task would jump from 4% to 67%. Under this logic, the model with the lowest per-token pricing may actually end up with the highest real aggregate cost per completed task.
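The compounding behind the report’s example can be verified directly. A minimal sketch (the function name is illustrative, not from the report):

```python
# The report's 20-step task example: the end-to-end completion rate
# is the per-step success rate compounded over every step.
def task_completion_rate(step_success: float, steps: int = 20) -> float:
    """Probability that all sequential steps of a task succeed."""
    return step_success ** steps

print(f"85% per step: {task_completion_rate(0.85):.0%}")  # ~4%
print(f"98% per step: {task_completion_rate(0.98):.0%}")  # ~67%
```

This is why the cheapest model per token can be the most expensive per completed task: failed runs still consume tokens, so the effective cost scales with the inverse of the completion rate.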
The research report also notes that companies with strong frontier models can easily expand into lower-end markets, but companies relying only on low prices have difficulty moving into the high end.
Question 4: Why is the foundation-model industry still a “fight for survival”?
Small technical gaps, endless iteration cycles, and converging monetization models—these three factors make the industry exceptionally ruthless.
The capability gap among China’s large model companies is often smaller than investors expect, making the market highly unstable. In this industry, “standing still” is not a neutral outcome—it means losing position. Companies must keep investing and iterating continuously to avoid falling behind.
The convergence of business models intensifies elimination pressure. Revenue growth and profit margins depend mainly on product strength, and switching costs remain relatively low. This means companies that lose technical momentum will quickly lose defensive strength commercially and financially, and the number of truly reliable companies will gradually decrease.
Question 5: What determines profitability?
The core question is whether gross profit growth can continue to outpace growth in R&D spending.
The basic economics model of the token business is clear: revenue = token usage × price, with major costs being inference computation and the largest operating expense being training-related R&D. As model efficiency and inference chip efficiency keep improving, gross margins of frontier models should gradually rise.
But the outlook for operating profit is more complex. Anthropic is a cautionary case: even though its annualized revenue run-rate had reached $14 billion by February 2026, the company announced a new round of financing totaling $30 billion in the same period and emphasized continued frontier development. High revenue does not mean training intensity normalizes.
In the report’s base-case scenario, Zhipu and MiniMax are both expected to turn profitable starting in 2029. The report emphasizes that, beyond the specific year of profitability, the more important indicators to track are sustained growth in usage and continued improvement in unit economics.
Question 6: How should investors track model strength?
You need to consider three dimensions together—token price, usage, and third-party evaluations—no single indicator is sufficient.
Token price: the most important indicator, because it is a real-time expression of a company’s product-market positioning. The price gap versus the best models is becoming a good proxy for actual model competitiveness.
Token usage: actual consumption reflects users’ and developers’ real choices. Third-party API aggregators such as OpenRouter can be used as a reference, and in particular you should focus on the growth of agent-type workloads, because this category consumes far more tokens per task than simple workflows.
Third-party evaluations: Artificial Analysis provides structured evaluation, while LMArena reflects real users’ blind selection preferences. Together they complement each other, forming a more complete external perspective.
Question 7: If internet giants move aggressively into the enterprise (B2B) market, what happens to independent model companies?
Competitive boundaries converge, and ultimately it still comes down to a contest of model capabilities.
Alibaba has clearly made cloud and AI strategic priorities, deeply binding model development with enterprise workflows. Tencent’s agent products cover all personal, developer, and enterprise scenarios. OpenAI has also shifted its commercialization focus toward enterprise products and coding deployments. The direction of leading companies is consistent: AI is evolving from “consumer-side features” into “tools that directly create enterprise revenue.”
In this context, independent model companies can no longer build a moat relying only on a “cloud-neutral” label. Internet giants also cannot fully cover the shortcomings in model capabilities using ecosystem traffic advantages alone. When enterprise customers deploy AI, the core thing they buy is still model quality—stronger coding inference capabilities and more reliable workflow completion rates.
Question 8: What factors determine whether a company survives?
Talent first, compute second, organization third—none of the three can be missing.
Top research talent: this is still a research-driven industry. The technical judgment of senior leadership is itself a competitive factor. Whether management can make the right decisions about research directions directly affects the company’s technical trajectory.
Compute and capital: frontier training costs are high. Inference economics depend on the quality of the infrastructure. Weak compute acquisition capability is a structural disadvantage—not only affecting model training efficiency, but also weakening the ability to respond to demand at reasonable cost.
Organizational execution: in a market with rapid iteration, the ability to turn research results into products, products into usage, and usage into monetization is almost as important as the model itself.
Question 9: If everyone is improving, will models eventually converge?
Overall strength may get closer, but it won’t converge. The market won’t form a winner-takes-all pattern.
Different companies have differences in architecture choices, training data, product focus, and technical paths. These differences will continue to create distinct capability advantages. The research report believes that in a market that is still expanding rapidly, multiple companies can grow at the same time, even if some capabilities overlap—at the current stage, the significance of overall market expansion far outweighs concerns about premature commoditization.
In the long run, a more realistic market endgame isn’t “one dominates while the rest exit,” but rather leaving a handful of truly capable companies, each with its own strengths, competing in a market large enough to support multiple winners. As AI expands from productivity tools into consumer-side scenarios, differences in individual tastes, preferences, and styles will further reinforce this diversified landscape.
Question 10: How should investors think about open-source vs. closed-source, model iteration, and the risks of global expansion?
Iteration is a must; open-source vs. closed-source is a strategic choice. The core risks of global expansion lie in compute and compliance.
For model iteration, the expected cadence is roughly one generation of flagship model per year (e.g., GLM 4.7 to GLM 5, MiniMax M2 series to M3 series). In between, there are small upgrades driven by reinforcement learning. Stopping iteration means losing competitive position.
On open-source vs. closed-source, the research report argues the answer isn’t strictly either/or. Closed-source models offer stronger commercial defensive capability and reduce the risk of being disintermediated. Open-source helps build ecosystems, improve adoption rates, and accelerate technical feedback. Therefore, most Chinese model companies will ultimately adopt a hybrid strategy: closed-source for the latest strongest models, and open-source for some other versions.
On global expansion, the biggest risk is still compute acquisition. Training and inference both heavily depend on high-performance chips. Tightening export controls will simultaneously weaken the pace of model progress and cost competitiveness. Second are data and security compliance risks: if model deployment, user services, and data storage can be localized overseas, cross-border data transfer issues are relatively manageable. However, local privacy regulations and determinations of data access permissions for Chinese-linked entities remain a source of uncertainty.