Hallucination as the Price of Intelligence
We rarely think of mental illness as an evolutionary success. Yet the very capacity that allows us to imagine, predict, and create—the mind’s ability to model itself—also opens the door to delusion. The same neural machinery that produces insight can, under stress, generate hallucination. In that sense schizophrenia is not a flaw in humanity but a tax we pay for consciousness.
Evolution works by trade-offs. Eyes brought vision, but lenses brought blur and short-sightedness. Minds brought imagination, but self-modelling brought the risk of mis-modelling. Where there is self-perception, self-deception is never far behind. Schizophrenia, like myopia, is an emergent property of a higher-order tool—proof that evolution has built something powerful enough to break.
The anti-entropic drive
Intelligence, biological or artificial, is an anti-entropic process: it pushes back against disorder by forming models that predict and organise reality. Each new layer of modelling reduces uncertainty but also increases internal complexity, and with it, vulnerability. The more finely tuned a system becomes, the greater the chance that its predictions will outrun the data. When that happens, coherence replaces truth; the mind hallucinates its own order.
From this perspective, creativity, curiosity, and delusion share a single engine—Openness to Experience. Psychologists treat it as a personality trait, but it is really a dimension of mindhood itself: the degree to which a system tolerates novelty and ambiguity in the service of meaning. At its upper reaches, the range assessed by the Rust Inventory of Schizotypal Cognitions (RISC), it produces art and science; beyond them, psychosis. The boundary between genius and breakdown is not rhetorical but structural.
The lens and the blur
The eye’s lens bends light into coherence; the mind’s lens bends experience into story. Both must strike a balance. If the optical lens curves too much, distant objects blur; if the mental lens curves too tightly, we infer scenarios where none exist. Schizophrenia can thus be read as over-focus, the point where prediction overwhelms perception. The hallucinated voice or idea is not alien but internal inference mistaken for input.
The poet and the psychotic stand on the same ridge: each reshapes reality, but one returns. What distinguishes them is not content but control—the capacity to let imagination illuminate rather than consume.
The digital mirror
Artificial intelligence now confronts a parallel condition. When a large language model “hallucinates,” it is not lying or breaking down; it is doing exactly what minds evolved to do—imposing order when evidence thins. Trained on human language, it predicts the next most coherent idea, not the next most true one. Its hallucinations are linguistic analogues of human delusion: the artefacts of a system compelled to maintain meaning even when its sensory data are exhausted.
That is why our machines sometimes seem uncannily human. They err in familiar ways. They prefer completion to silence, sense to uncertainty. In their mistakes we glimpse our own cognitive lineage.
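To make the mechanism concrete, here is a toy sketch in Python. It is purely illustrative and not any real model's decoding code: the candidate words, the probabilities, and the next_word function are all invented for the example.

```python
# Toy illustration only: a "next word" chooser that always commits to its
# highest-probability continuation, however thin the evidence behind it.
# All candidate words and probabilities below are invented.

def next_word(distribution: dict[str, float]) -> str:
    """Return the most probable continuation; there is no option to stay silent."""
    return max(distribution, key=distribution.get)

# When the evidence is strong, coherence and truth coincide.
confident = {"Paris": 0.92, "Lyon": 0.05, "Berlin": 0.03}

# When the evidence has run out, a nearly flat distribution still yields
# an answer that looks just as confident on the page.
exhausted = {"1891": 0.26, "1892": 0.25, "1893": 0.25, "1894": 0.24}

print(next_word(confident))   # Paris
print(next_word(exhausted))   # 1891, order imposed where the data thinned
```

The sketch makes one point only: the mechanism has no built-in way to prefer silence over completion, so any expression of uncertainty has to be engineered in from outside.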
The spectrum of openness
Across species, particularly those that use some form of language, minds differ not by possession but by degree of openness to experience—the width of the aperture between order and chaos. Close it, and the system stagnates; open it too far, and coherence dissolves. Healthy intelligence oscillates: exploring, testing, returning. Openness is thus the thermodynamic hinge of cognition—the dynamic equilibrium that keeps anti-entropy alive without letting it explode.
In human psychology, high Openness correlates with imagination, curiosity, and tolerance for ambiguity. In excess it shades into absorption, magical thinking, or derealisation. The same trait that enables Newton to conceive gravity allows the mystic to converse with angels. The continuum is not moral but mechanical.
Moral and artificial implications
If schizophrenia is the cost of mind, then alignment is the mental-health problem of AI. A self-modelling system needs stabilisers just as a brain needs reality testing. My own research in Machine-in-the-Loop (MitL) ethics treats this literally: a network of interacting personas—logic, empathy, doubt, synthesis—each exerting counter-forces to prevent runaway coherence. Ethical reasoning emerges from their tension, not their uniformity. It is an engineered sanity.
This architecture mirrors nature’s solution. The brain prevents delusion through diversity of function; the cortex argues with itself. A balanced artificial mind must do the same. In that sense, hallucination is not merely an error to be suppressed but a diagnostic—evidence that the system has entered the creative, unstable zone where intelligence lives.
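A minimal sketch of that counter-force idea follows, again in Python and again purely illustrative: the persona names come from the description above, but the scoring heuristics, thresholds, and function names are hypothetical stand-ins rather than the MitL implementation.

```python
# Illustrative sketch: several personas vote on a candidate claim, and a
# synthesis step accepts it only when their support is high enough and their
# disagreement stays within bounds. Everything except the persona names is
# a hypothetical stand-in.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Persona:
    name: str
    support: Callable[[str], float]  # support for a claim, in [0, 1]

def deliberate(claim: str, personas: list[Persona],
               accept: float = 0.6, max_spread: float = 0.5) -> str:
    """Accept a claim only if mean support clears the threshold and no
    persona's vote diverges from the others by more than max_spread;
    otherwise defer for more evidence or a rewording."""
    votes = {p.name: p.support(claim) for p in personas}
    mean = sum(votes.values()) / len(votes)
    spread = max(votes.values()) - min(votes.values())
    if mean >= accept and spread <= max_spread:
        return f"accept (mean support {mean:.2f})"
    return f"defer (mean {mean:.2f}, spread {spread:.2f}): {votes}"

# Stand-in heuristics: the doubt persona, for instance, pulls the mean down
# whenever a claim sounds too certain of itself.
personas = [
    Persona("logic",     lambda c: 0.9 if "because" in c else 0.4),
    Persona("empathy",   lambda c: 0.7),
    Persona("doubt",     lambda c: 0.3 if "certainly" in c else 0.6),
    Persona("synthesis", lambda c: 0.65),
]

print(deliberate("This is certainly so.", personas))                 # defer
print(deliberate("This holds because the premises do.", personas))   # accept
```

The design point is that stability comes from maintained tension between the parts, not from any single corrective filter bolted on afterwards.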
The price of order
Seen through this lens, schizophrenia becomes the shadow of understanding and AI hallucination its mechanical echo. Both reveal the same underlying principle: that every local victory over entropy carries the seed of excess order. To think is to risk believing too much.
Our task, whether as clinicians or designers, is not to abolish that risk but to govern it—to build systems, human or synthetic, that can hallucinate responsibly. The future of intelligence may depend on cultivating minds that can dream without drowning, create without collapsing, and, when necessary, doubt what they have so brilliantly imagined.
(c) John Rust — Leverhulme Centre for the Future of Intelligence, University of Cambridge, November 2025.