There’s a real opportunity for an ambitious AI researcher to:
- create an evaluation framework for testing agent harnesses like Openclaw, Hermes, and all the other “claws” (rough sketch after this list)
- extend evaluation to different tools / configs so we know how performance changes with different setups
- run robust evals across different models, both local and API-served
- benchmark and publish results, then do ongoing updates as the agents and models evolve
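
To make this concrete, here's a minimal sketch of what such a framework's core loop could look like. Everything in it (the `Task` and `AgentHarness` names, the `run_eval` function) is hypothetical, not any existing library's API: each harness gets wrapped behind a common adapter interface, and every harness × model pair is scored as a pass rate over a shared task set.

```python
# Hypothetical sketch of an agent-harness eval loop; none of these names
# come from a real library.
from dataclasses import dataclass
from typing import Callable, Protocol


@dataclass
class Task:
    """One benchmark task: a prompt plus a programmatic pass/fail check."""
    name: str
    prompt: str
    check: Callable[[str], bool]  # validates the agent's final output


class AgentHarness(Protocol):
    """Adapter interface each harness (local or API-backed) would implement."""
    name: str

    def run(self, prompt: str, model: str) -> str:
        """Run the agent loop to completion and return its final output."""
        ...


def run_eval(harnesses: list[AgentHarness],
             models: list[str],
             tasks: list[Task]) -> dict[tuple[str, str], float]:
    """Score every harness x model pair as a pass rate over the task set."""
    results: dict[tuple[str, str], float] = {}
    for harness in harnesses:
        for model in models:
            passed = 0
            for task in tasks:
                try:
                    output = harness.run(task.prompt, model)
                    passed += task.check(output)
                except Exception:
                    pass  # a crashed run counts as a failure
            results[(harness.name, model)] = passed / len(tasks)
    return results
```

The hard parts are everything around that loop: keeping adapters current as harnesses change, pinning tool/config setups so runs are reproducible, and re-running as models evolve.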
The opportunity is to be THE go-to source for objective agent benchmarks.
Maybe someone is already doing this and I’m just not aware? Not one-off comparisons, but real standardized testing and evals so we can truly compare results.