
# Consciousness Is an Atavism

What Peter Watts’ novel “Blindsight” teaches us

In the novel “Blindsight,” Canadian biologist and writer Peter Watts advanced a radical hypothesis: intelligence can be effective without consciousness. Nearly 20 years after the book’s publication, that thesis reads as an accurate description of generative AI.

Let’s explore why “smart” does not equal “understanding” and what mistakes we make when humanizing algorithms.

The 2006 novel that commented on the 2020s

“Blindsight” was published in October 2006. The novel was nominated for the Hugo Award in 2007 and was a finalist for the John W. Campbell and Locus Awards.

Its author is a marine biologist from the University of British Columbia with a PhD in zoology and resource ecology. The novel includes over 130 references to scientific papers, woven into a classic science-fiction story about first contact. In the 2000s the book remained a niche work of “hard” sci-fi, marked by a dense style and a bleak view of human nature; critics noted its opaque prose and emotional coldness.

The idea of the novel rests on the distinction between two often conflated concepts: intelligence as the ability to solve problems and process information, and consciousness as subjective understanding, the sense of “what it is like to be” something, in philosopher Thomas Nagel’s formulation.

Watts puts forward a provocative hypothesis: consciousness is an evolutionarily redundant property, a side effect rather than a necessary condition of intelligence.

The novel explores this intuition through several plotlines. The scramblers, alien beings aboard the vessel Rorschach, possess intelligence orders of magnitude beyond the human. They analyze the crew’s neural activity and solve complex problems. But they lack subjective experience. They do not know that they exist. As Watts puts it through one of the characters:

“Imagine you are a scrambler. Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological — but no other circuitry monitors it. You can think of anything, yet are conscious of nothing.”

Protagonist and narrator Siri Keeton is a man who underwent a hemispherectomy as a child to treat epilepsy. He can accurately model other people’s behavior but lacks empathy and genuine emotional experience. His role is that of a synthesist, a translator of complex data for mission control: he transforms information without personal involvement. Keeton admits:

“It’s not my job to understand. For that matter, if I could understand them, it wouldn’t be much of an achievement. I’m just, how to put it, a conduit.”

The third character, Jukka Sarasti, a vampire, a genetically resurrected predator from the Pleistocene with an intellect surpassing the human, can see both orientations of the Necker cube simultaneously, running multiple cognitive models in parallel.

Conscious excess

Each of these characters rests on a real philosophical foundation. The concept of philosophical zombies, introduced by Robert Kirk in 1974 and popularized by David Chalmers in his book “The Conscious Mind” (1996), describes a hypothetical being physically identical to a human but lacking subjective experience. The scramblers are a radicalization of this idea: not a copy of a human without consciousness, but a fundamentally different form of intelligence.

In 1995, Chalmers formulated the “hard problem of consciousness”: why do physical processes in the brain give rise to subjective experience? Even if we fully explain every cognitive function (attention, categorization, information processing), the question remains: why is their performance accompanied by felt experience? “Blindsight” takes this problem and turns it upside down: what if the answer is that no feeling is needed at all?

Watts described the genesis of the idea as follows: he looked for a functional explanation of consciousness and applied the same test to every candidate function: can an unconscious system do the same thing? The answer was always “yes.” He then realized that the more powerful conclusion was that consciousness has no function at all. In the afterword to the novel, Watts sums up: under everyday conditions consciousness is occupied mainly with “receiving memos from a much more intelligent subconscious layer, vetting them, and taking all the credit for itself.”

Long before Watts, the Norwegian philosopher Peter Wessel Zapffe formulated the idea of consciousness as an evolutionary “overdose.” In his essay “The Last Messiah” (1933), he compared the human mind to the fate of “certain deer in paleontological times” that went extinct under their “overly heavy antlers.” Zapffe saw consciousness as an analogous evolutionary excess: a capacity that outgrew practical necessity and turned from an advantage into a burden.

But where Watts argues that consciousness is unnecessary for intelligence, the Norwegian thinker’s thesis is more radical still: consciousness is not merely superfluous but destructive. Humans, in his view, must “artificially limit the contents of consciousness” to avoid falling into “cosmic panic” at the realization of their own finiteness.

Philosopher David Rosenthal reached a similar conclusion. In a 2008 paper, he argued that a cognitive state’s being conscious adds no significant function beyond the processes that generate it.

ELIZA in the Chinese Room

In 1980, philosopher John Searle published the famous “Chinese Room” thought experiment. Its essence: a person who knows no Chinese sits in a closed room with a set of rules for manipulating Chinese characters. Receiving questions in Chinese, they follow the rules to produce answers. An outside observer is convinced that someone inside understands Chinese, yet the person in the room understands not a word. Searle’s conclusion: syntax is not identical to semantics. Correct manipulation of symbols does not mean understanding their meaning.
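To see how little machinery the room needs, here is a toy sketch: the “rulebook” is nothing more than a lookup table, and the phrases in it are invented for illustration. Following the rules requires no grasp of what any symbol means.

```python
# A toy "Chinese Room": the rulebook is a plain lookup table from input
# symbols to output symbols; the phrases are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(question: str) -> str:
    # The operator matches the shape of the symbols, never their meaning.
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Say that again, please."

print(room("你好吗？"))  # prints 我很好，谢谢。 with zero understanding involved
```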

This experiment is directly embedded in the plot of “Blindsight.” When the crew of the Theseus makes contact with Rorschach, the alien ship responds in idiomatic English. At first this looks like a breakthrough: communication with an extraterrestrial intelligence. But linguist Susan James gradually realizes that Rorschach learned English by intercepting human radio transmissions. It collects and recombines language patterns. It produces grammatically and contextually correct replies. But it does not understand what it is saying.

Watts explains the idea through Keeton:

“The point is that you can carry on a conversation using the simplest pattern-matching algorithms without the slightest idea of what you’re saying. With a sufficiently detailed set of rules you can pass the Turing test, come across as witty and clever, without even knowing the language you’re speaking.”

If LLMs are a Chinese room, why do millions of people behave as if an understanding being sits on the other side of the interface? The answer lies in cognitive biases shaped by evolution.

In 1966, AI pioneer Joseph Weizenbaum at MIT created ELIZA, a program that used simple pattern matching to simulate a psychotherapist: it reformulated users’ statements as questions. The effect shocked its creator. His secretary, who had watched the program being built, asked to be left alone with ELIZA after just a few minutes. Weizenbaum later wrote:

“I had not realized that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

This phenomenon is called the “ELIZA effect”: the tendency to attribute understanding to computer systems that lack it. The effect persists even when the user knows they are talking to a program.
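To make “simple pattern matching” concrete, here is a minimal ELIZA-style sketch. The rules below are a hypothetical reconstruction for illustration, not Weizenbaum’s original DOCTOR script: each one merely rewrites the surface text of what the user typed.

```python
import re

# Reflect first-person words back at the speaker, as ELIZA's therapist script did.
SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

# Each rule pairs a regex with a question template. No rule "understands"
# anything; it only rearranges the words it matched.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please, go on."  # generic fallback keeps the conversation moving

print(respond("I am unhappy with my job"))
# -> How long have you been unhappy with your job?
```

A handful of such rules is enough to sustain the illusion of a listener.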

This is a cognitive bias. We evolved to recognize fellow humans, and language is one of the strongest diagnostic markers of Homo sapiens. Watts described this mechanism in the novel through the character Robert Cunningham, who explains why an unconscious being would be indistinguishable from a conscious one:

“An intelligent automaton will blend into the background, observe its surroundings, imitate behavior, and pass for an ordinary person. And all of this without realizing what it is doing, without even realizing that it exists.”

Murray Shanahan, professor of cognitive robotics at Imperial College London and senior researcher at Google DeepMind, warns:

“Careless use of philosophically loaded words like ‘believes’ and ‘thinks’ is especially problematic because such words obscure the mechanism and actively encourage anthropomorphism.”

Scramblers write code

In 2024, Watts told Helice magazine: “Twenty years ago, I predicted things that are happening today. But now I have no idea what will happen in the next 20 years.”

One of the novel’s main lessons lies not in predicting technologies. It is a warning about a cognitive trap: consciousness is not necessary for effectiveness. The scramblers solve problems better than humans do, without any subjective experience. LLMs write code and translate between languages without understanding.

We anthropomorphize not because AI deceives us, but because our brains are wired to look for a mind behind language. The ELIZA effect, described as early as 1966, has been amplified many times over by systems trained on billions of texts.

The novel teaches us to distinguish what a system does from what it is. The ability not to confuse imitation with understanding remains one of the most valuable skills. Watts articulated this idea two decades before the question became practical.

Text: Sasha Kosovan
