HappyHorse anonymously tops AI video blind test; Alibaba TaoTian and Sand.ai both suspected
According to 1M AI News monitoring, an anonymous model named HappyHorse-1.0 topped the Video Arena leaderboard on the AI video evaluation platform Artificial Analysis last week, taking first place in both the text-to-video and image-to-video tracks (no-audio category) and pushing ByteDance's Seedance 2.0 down to second. In the audio category, Seedance 2.0 still leads by a narrow margin. There has been no press conference, no technical blog, and no company attribution, and to date no one has publicly claimed the model.
The Video Arena ranking is based on an Elo blind-testing system: users vote between two generated videos without knowing which models produced them. HappyHorse has been on the leaderboard for a shorter time, with about 3,500 pairwise comparisons, which is less than half of Seedance 2.0's total. Its confidence interval is accordingly wider (±12-13 points), but its lead in the no-audio tracks (about 76 points in text-to-video and about 48 points in image-to-video) is still well beyond the margin of error.
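To make the ranking mechanism concrete, here is a minimal sketch of how an Elo system turns anonymous pairwise votes into ratings. The K-factor, base rating, and vote sequence below are illustrative assumptions, not Artificial Analysis's actual parameters.

```python
def elo_update(r_winner, r_loser, k=32):
    """One Elo update after a blind-test vote between two models.

    Under the logistic Elo model, a 400-point gap means the stronger
    side is expected to win about 10x more often.
    """
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected)  # upsets (low expected score) move ratings more
    return r_winner + delta, r_loser - delta

# Both models start at an assumed base rating of 1000.
ratings = {"HappyHorse-1.0": 1000.0, "Seedance 2.0": 1000.0}

# Hypothetical votes: each entry is (winner, loser) from one anonymous pairing.
votes = [
    ("HappyHorse-1.0", "Seedance 2.0"), ("Seedance 2.0", "HappyHorse-1.0"),
    ("HappyHorse-1.0", "Seedance 2.0"), ("Seedance 2.0", "HappyHorse-1.0"),
    ("HappyHorse-1.0", "Seedance 2.0"), ("Seedance 2.0", "HappyHorse-1.0"),
    ("HappyHorse-1.0", "Seedance 2.0"), ("Seedance 2.0", "HappyHorse-1.0"),
    ("HappyHorse-1.0", "Seedance 2.0"), ("HappyHorse-1.0", "Seedance 2.0"),
]
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
```

Because each update transfers points from loser to winner, the total rating mass is conserved; a model that wins 6 of 10 head-to-head votes ends up ahead. This is also why the article's ±12-13-point confidence interval matters: with only ~3,500 comparisons, the rating still carries noticeable sampling noise, but a 48-76-point lead comfortably exceeds it.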
Based on the language order of the model's website (Chinese and Cantonese listed before English) and the "HappyHorse" 2026 Year of the Horse internet meme, industry observers judge that the model comes from a China-based team; the two most common guesses are Alibaba's TaoTian and Sand.ai.
HappyHorse's official website says the model has 15 billion parameters and a 40-layer self-attention Transformer, and uses a Transfusion architecture: a single model that jointly handles autoregressive text prediction and diffusion-based generation of video and audio. It runs 8-step inference, outputs 1080p video with synchronized audio, and supports lip-sync in seven languages (Chinese, English, Japanese, Korean, German, French, and Cantonese). It is fully open-source and licensed for commercial use.
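The Transfusion idea of one model serving two training objectives can be illustrated with a toy NumPy sketch: text positions are scored with an autoregressive cross-entropy loss, video-latent positions with a denoising (diffusion) loss, and both losses flow through the same shared weights. All dimensions and the single linear map standing in for the Transformer are assumptions for illustration, not HappyHorse's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only; the real model is 15B parameters, 40 layers).
vocab, d = 100, 16
text_len, video_len = 8, 8

# One mixed sequence: discrete text tokens plus continuous video latents.
text_tokens = rng.integers(0, vocab, size=text_len)
video_latents = rng.standard_normal((video_len, d))

# Stand-in for the shared Transformer backbone: a single linear map
# applied to both modalities, mimicking shared weights.
W = rng.standard_normal((d, d)) * 0.1
embed = rng.standard_normal((vocab, d)) * 0.1
unembed = rng.standard_normal((d, vocab)) * 0.1

def text_ar_loss(tokens):
    """Autoregressive cross-entropy: predict token t+1 from token t."""
    h = embed[tokens[:-1]] @ W            # hidden states for positions 0..n-2
    logits = h @ unembed
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(tokens) - 1), tokens[1:]].mean()

def video_diffusion_loss(latents, t=0.5):
    """Denoising objective: predict the noise mixed into the video latents."""
    noise = rng.standard_normal(latents.shape)
    noisy = np.sqrt(1 - t) * latents + np.sqrt(t) * noise
    pred_noise = noisy @ W                # same shared weights as the text branch
    return ((pred_noise - noise) ** 2).mean()

# Transfusion-style joint objective: one backbone, two losses summed.
loss = text_ar_loss(text_tokens) + video_diffusion_loss(video_latents)
```

The design point this sketch captures is that no separate text model and video model exist: a single set of weights is optimized against both objectives at once, which is what lets the deployed model condition video-and-audio diffusion directly on its own text understanding.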