BBC reporter Thomas Germain reveals a harsh reality through a deepfake experiment. Digital forensics expert Hany Farid puts it bluntly: “It’s over.” No one can prove they are not AI anymore.
(Background: ZachXBT exposes fake news farms spreading “end-of-the-world” panic, using AI to chase war-driven traffic, run scams, and pump and dump meme coins for millions of dollars.)
(Additional context: Far Eastern Commercial Bank and the High Prosecutors Office sign MOU: 95% of Taiwan’s crypto transactions are now under anti-money laundering and anti-fraud investigation frameworks.)
Last night, BBC reporter Thomas Germain described an unsettling experiment: he called his lifelong acquaintance, Aunt Eleanor, and warned her that the next call might be from a real person or a deepfake AI.
The result: Aunt Eleanor couldn’t tell. Germain’s conclusion is simple: if even family can’t distinguish real from fake, this is no longer just a technical problem.
He asked Hany Farid, a digital forensics professor at UC Berkeley and co-founder of GetReal Security: “What can I do now to prove I’m not AI?” Farid’s answer was just one word: No.
“You’re in New York, I’m in Berkeley, we’re on a video call,” Farid explained. “The reality is, you can fake all of this.”
Then Farid said something chilling: “There’s no way. It’s over.”
Germain also cited a recent bizarre real-world case. In a video posted by Israeli Prime Minister Netanyahu, refraction made his right hand appear to have six fingers, a flaw once treated as a classic tell of a deepfake.
The online community exploded: rumors claimed the video was fake and that Netanyahu had died in a missile attack.
To put the rumors to rest, Netanyahu released a second and a third video, raising both hands in a café and showing each finger one by one. Germain notes that Farid later ran these videos through voice analysis, frame-by-frame facial detection, and light-and-shadow analysis, and concluded that all of them were authentic, with “no evidence of AI generation.”
Jeremy Carrasco, co-founder of Riddance, told the BBC: “Six fingers are no longer a sign of AI. The best tools fixed that problem years ago.”
However, Germain observes an ironic point: even with expert verification, many people still believe Netanyahu is dead. He writes that this may be the first time in human history that a world leader has been forced to publicly prove he is not AI, and failed.
Germain introduces the concept of the “liar’s dividend,” which researchers define roughly as: proving something is real is expensive, but creating doubt is free. A politician can simply claim a genuine video is a deepfake, and rebutting that claim often takes more time, resources, and credibility than producing the rumor did in the first place.
Samuel Woolley, who holds the disinformation studies chair at the University of Pittsburgh, also notes a troubling trajectory: “Early in the Ukraine war, I saw some clumsy deepfakes. During the Gaza conflict, fake content increased in quantity and quality. In Venezuela? I saw more false content than real. And Iran has taken it to a whole new level.”
Woolley has sharp words for the politicians now pushing for regulation: “They’re now reaping what they’ve sown.”
As for solutions, Germain’s conclusion is surprisingly simple. Hany Farid, one of the world’s leading deepfake experts, ultimately recommends the most basic method of all: a secret code.
Farid told the BBC that he and his wife share a code word to verify each other on suspicious calls. It is essentially human multi-factor authentication: when every technical measure fails, you fall back on the most primitive trust protocol.
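The code word is the human end of a standard challenge-response pattern. As a thought experiment, here is a minimal Python sketch of the digital equivalent; the secret, function names, and framing are illustrative assumptions, not anything Farid or the BBC published.

```python
# Minimal challenge-response sketch of the "shared secret code" idea.
# Everything here (the secret, the names) is illustrative only.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed face to face, never sent over the channel"

def make_challenge() -> bytes:
    # Whoever receives the suspicious call picks a fresh random challenge.
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> str:
    # The caller proves knowledge of the secret without saying it aloud.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, answer: str, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    # Constant-time comparison, so timing leaks nothing about the secret.
    return hmac.compare_digest(expected, answer)

# A fresh challenge per call means a recorded (or deepfaked) past answer
# cannot simply be replayed.
challenge = make_challenge()
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)
```

The spoken version trades the HMAC for a memorized phrase, but the security property is the same: the verifier checks knowledge of a secret that never travels over the channel an attacker can fake.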
While Germain’s report centers on the societal trust crisis, data from AARP (the American Association of Retired Persons) reveals a more direct financial toll: AI-related scams grew twentyfold between 2023 and 2025. The UK engineering firm Arup once lost $25 million when a deepfake impersonated its CFO on a video call.
The crypto market faces an even graver situation. According to Fintech Global, crypto scams reached $200 million in Q1 2026, a 340% increase year-over-year; deepfake scams are estimated to account for 70% of crypto crimes.
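(For scale, taking both figures at face value: a 340% year-over-year increase implies a Q1 2025 baseline of roughly $200 million ÷ 4.4 ≈ $45 million.)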
A tool called ProKYC, costing only $629 annually, offers virtual simulators, facial animation, and fingerprint generation, allowing users to create entirely new identities to bypass KYC biometric verification on exchanges. The U.S. Treasury has explicitly called for tighter regulation of AI and digital identity systems.
Another potentially bigger shift: AI agents can now autonomously hold wallets and initiate transactions. In this scenario, who is the “client,” the human or the AI itself? KYC’s regulatory definitions may be left facing a vacuum.
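To make that vacuum concrete, the sketch below uses the open-source eth-account library to show an agent generating its own key and signing a transaction with no human in the loop. The recipient, amounts, and policy are placeholders, not a description of any real deployment.

```python
# Minimal sketch: a software agent that custodies its own key and signs a
# transaction autonomously. Recipient, values, and nonce are placeholders.
from eth_account import Account

agent = Account.create()  # the agent generates and holds its own private key

tx = {
    "to": "0x0000000000000000000000000000000000000000",  # placeholder address
    "value": 10**15,     # 0.001 ETH in wei, chosen by the agent's own policy
    "gas": 21000,
    "gasPrice": 10**9,   # 1 gwei
    "nonce": 0,
    "chainId": 1,
}

signed = agent.sign_transaction(tx)
# Broadcasting the signed bytes requires no identity check: the network
# verifies the signature, not who (or what) produced it. Nothing in this
# flow names a natural or legal person for a KYC check to attach to.
```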
Ironically, we build verification systems to identify AI, but AI has evolved to bypass them. In the end, the only remaining method might be an old-fashioned secret code shared with trusted friends and family.