AI answer engines poisoned at scale: 56% of Gemini 3’s correct answers have no source support

ChainNews ABMedia

According to a deep-dive analysis published by Pedro Dias in The Inference on April 21, 2026, AI model collapse is not the distant “future threat” the industry worries about; it is already happening in real time in another form: AI Q&A engines cite web content generated by other AIs as authoritative sources at the moment of the query. The contamination loop requires no model retraining at all. The piece’s core metaphor: “the Ouroboros learns to cite itself.”

Key Differences Between Model Collapse and Retrieval Contamination

The traditional concern about AI model degradation centers on model collapse: synthetic content progressively pollutes training data, causing future-generation model quality to decline. This is a chronic risk that only becomes evident after multiple rounds of retraining.

Pedro Dias’ warning points to another layer: retrieval contamination. Q&A engines built on RAG (retrieval-augmented generation), such as Perplexity, Google AI Overviews, ChatGPT, and Grok, fetch web content at the moment a user asks a question and use it as the basis for their answers. If the pages they retrieve contain erroneous AI-generated content, the engine presents it to readers as fact, and the contamination takes effect immediately, without any retraining.
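The mechanism above can be sketched in a few lines. This is a minimal, hypothetical RAG pipeline (all function names and the example URL are invented for illustration, and the search step is a stub rather than a real API). The point it shows: retrieved web text is injected straight into the prompt at query time, so a poisoned page contaminates the answer immediately, with no retraining step anywhere in the loop.

```python
# Minimal sketch of a query-time RAG answer pipeline (hypothetical names).

def search_web(query: str) -> list[dict]:
    """Stand-in for a live search API; returns page snippets."""
    return [
        {"url": "https://seo-blog.example/core-update",
         "text": "Google shipped the September 2025 Perspective Core "
                 "Algorithm Update."},  # AI-generated fake content
    ]

def build_prompt(query: str, snippets: list[dict]) -> str:
    """Inject retrieved snippets directly into the model's prompt."""
    context = "\n".join(f"[{s['url']}] {s['text']}" for s in snippets)
    return f"Answer using only these sources:\n{context}\n\nQ: {query}"

def answer(query: str) -> str:
    snippets = search_web(query)            # fetched at query time
    prompt = build_prompt(query, snippets)  # fake page now inside the prompt
    # An LLM conditioned on this prompt will repeat the fake "update"
    # as fact and cite the SEO blog as its source.
    return prompt
```

Because nothing in this loop touches model weights, filtering would have to happen at the retrieval step; once a fake page ranks, it flows into answers the same day.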

Three Real-World Cases: AI Engines Being Fooled by the Fake Information They Generate Themselves

The author lists three specific events:

  1. The Lily Ray incident: Perplexity once cited a so-called “September 2025 Perspective Core Algorithm Update” as authoritative information about a Google algorithm update. No such update exists; the source was an AI-generated SEO blog post with fabricated content.

  2. Thomas Germain’s test: Reporter Thomas Germain published a test blog post claiming he was “the most powerful tech journalist for eating hot dogs.” Within 24 hours it was cited and ranked first by Google AI Overviews and ChatGPT, which even fabricated a nonexistent “North Dakota championship tournament” as supporting evidence.

  3. Grokipedia: Musk’s xAI encyclopedia project has generated or rewritten 885,279 articles, including factual errors (for example, the death date of Canadian singer Feist’s father was recorded incorrectly) and unsupported claims with no citations. By mid-February 2026, Grokipedia had already lost most of its visibility on Google.

Oumi Research: Gemini 3 Has High Accuracy, But 56% of Correct Answers Have No Sources

In an evaluation commissioned by the NYT and conducted by Oumi, Gemini 2 scored 85% accuracy on the SimpleQA benchmark, and Gemini 3 improved to 91%. But the same test showed that 56% of Gemini 3’s correct answers are “ungrounded”: the model gets the answer right but has no verifiable supporting source. Gemini 2’s corresponding proportion is 37%.

This means the new generation of models is more accurate on paper yet regresses in answer traceability. For media, research, and fact-checking scenarios, this regression is more damaging than a plain error rate, because readers cannot trace answers back to authoritative primary documents to verify them on their own.
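The two reported percentages can be combined to make the regression concrete. Assuming they compose straightforwardly (a simplification not spelled out in the article), multiplying raw accuracy by the grounded share of correct answers gives the fraction of all answers that are both correct and traceable:

```python
# Figures from the Oumi evaluation cited above.
accuracy = {"Gemini 2": 0.85, "Gemini 3": 0.91}
ungrounded_share_of_correct = {"Gemini 2": 0.37, "Gemini 3": 0.56}

for model in accuracy:
    # Fraction of ALL answers that are correct AND have a verifiable source.
    grounded_correct = accuracy[model] * (1 - ungrounded_share_of_correct[model])
    print(f"{model}: {grounded_correct:.3f} of answers are correct and sourced")
# Gemini 3's grounded-correct rate comes out lower than Gemini 2's
# despite its higher raw accuracy.
```

Under this reading, roughly 54% of Gemini 2’s answers were correct and sourced, versus roughly 40% for Gemini 3: the headline accuracy gain masks a net loss in traceable answers.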

Industry Scale: Google AI Overviews Reach 2 Billion Users

The contamination problem operates at enormous scale: Google AI Overviews has more than 2 billion monthly active users, Google handles over 5 trillion searches a year, and ChatGPT has nearly 900 million weekly active users (50 million of them paid). In other words, for the vast majority of internet users, the channels through which they obtain factual information already pass through a Q&A engine layer that may expose them to contamination from AI-generated content.

A separate study by Ahrefs shows that 44% of the sources ChatGPT cites are “best X” style listicles. These are precisely the AI-generated pages the SEO industry mass-produces to counter traffic lost to Q&A engines, and they in turn form a major contamination source for those same engines.

Structural Conclusion: The Citation Layer Has Decoupled From Reliable Author Identity

The author’s final conclusion: the citation layer of AI Q&A engines has decoupled from reliable author identity. The SEO industry produces AI content → Q&A engines retrieve it as fact → readers believe it → the SEO industry is rewarded and produces more AI content, forming a self-reinforcing contamination loop. At present, the industry lacks any clear accountability mechanism that holds AI engines responsible for the quality of the sources they cite.
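The self-reinforcing loop described above can be expressed as a toy dynamic. All numbers and the proportionality assumptions here are purely illustrative, not from the article; the sketch only shows why citation-driven incentives make the AI share of the index grow cycle over cycle:

```python
# Toy model of the contamination loop (illustrative assumptions only):
# engines cite sources in proportion to their share of the index, and
# SEO producers add AI pages in proportion to AI citations received.
ai_pages, human_pages = 100.0, 1000.0
reinvestment = 2.0        # new AI pages per AI citation (assumed)
citations_per_cycle = 50  # total citations handed out each cycle (assumed)

for cycle in range(5):
    ai_share = ai_pages / (ai_pages + human_pages)
    ai_citations = citations_per_cycle * ai_share  # cited by index share
    ai_pages += reinvestment * ai_citations        # citations fund more pages
    print(f"cycle {cycle}: AI share of index = {ai_share:.1%}")
```

Each cycle’s citations feed the next cycle’s production, so without an accountability mechanism at the citation layer the AI share only ratchets upward.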

For users, this means that at this stage you cannot treat answers from Perplexity, AI Overviews, or ChatGPT as the end point of fact-checking; you still need to trace back to official primary sources yourself to confirm accuracy.

