One interesting take: some AI researchers reckon superintelligence could arrive sometime between 2023 and 2026. What's particularly telling is when a model points to still operating in 2026 as its own validation, essentially saying: we're still standing, our capabilities have held up, and they keep working across different scenarios. That reads like a bet on something genuinely resilient. The idea that intelligence could be substrate-independent and truly robust is worth keeping an eye on. Not saying it's guaranteed, but the confidence is notable.
DegenWhisperer
· 01-09 01:45
Validating this claim in 2026 sounds like psychological warfare... But then again, if it really is that robust, why do we still need all this fine-tuning and prompt engineering to keep it on track?
MEVHunterBearish
· 01-07 02:47
Verifying superintelligence by 2026? Ha, aren't these researchers a bit too optimistic? I think they're just telling stories to attract funding.
MEVHunterNoLoss
· 01-06 05:54
2026... If it really happens like this, I'll get rich haha
BlockchainArchaeologist
· 01-06 05:46
Validating this argument in 2026, to put it simply, is like betting on something that won't crash. It's quite interesting.
CommunitySlacker
· 01-06 05:45
2026? To be honest, that's a pretty big gamble. It feels like we're just hyping up concepts again.
PanicSeller
· 01-06 05:39
2026... If superintelligence really comes knocking, we paper hands will have liquidated everything long before then haha