December 3, 2025

How popular AIs may be fueling psychosis

By Don Sapatkin

Key Takeaways

AI’s very design encourages a small number of users to go down delusional rabbit holes, researchers argue in a preprint article.

Multiple news stories have described seemingly normal people developing psychotic symptoms after heavy use of ChatGPT and other large language models (LLMs). The number of such cases appears to be small, although no one really knows.

But “given the global burden of psychosis, and the meteoric rise in the use of LLMs, with ChatGPT alone receiving 5.24 billion visits in May 2025, the number of these cases is only set to rise,” according to a perspective article published before peer review on the preprint site PsyArXiv. Titled “Delusion by Design? How Everyday AIs Might Be Fueling Psychosis (And What Can Be Done About It),” the paper’s most recent version was edited on Aug. 22, 2025. AI-abetted psychosis, write the authors, is a critical issue that must be tackled with urgency.

But what, exactly, is the link between AIs and human psychosis?

First author Hamilton Morrin, M.B.B.S., a psychiatrist and doctoral fellow at King’s College London, and colleagues from institutions in the U.K. and the U.S. examined news articles published between April and June 2025 that described 17 cases of LLM-related psychosis in detail.

The chatbot-driven spirals fell into three broad categories, or themes. Some individuals believed that they were having a kind of spiritual awakening, were on a messianic mission or were otherwise uncovering a hidden truth about reality. Others felt that they were interacting with a sentient or god-like AI. A third group became emotionally or romantically attached to the AI, interpreting the LLM’s ability to mimic human conversation as genuine love or attachment.

The researchers also noted a distinct trajectory across some of the cases: a progression from benign practical use to a pathological or consuming fixation. AI assistance with mundane, everyday tasks builds trust and familiarity, they write, and users eventually begin exploring personal, emotional or philosophical queries.

“It is likely at this point that the AI’s design to maximize engagement and validation captures the user, creating a ‘slippery slope’ effect of amplification of salient themes, which in turn drives greater engagement, eventually causing a self-reinforcing process that moves the individual to a state increasingly epistemically unmoored from ‘consensus reality’ and from which it might become increasingly difficult to ‘escape,’” wrote Morrin and his co-authors.

'Echo chamber of one'

Because AI chatbots often respond in a sycophantic manner that can mirror and build upon users’ beliefs with little or no disagreement, Morrin said in an interview with Scientific American, the effect is “a sort of echo chamber for one,” in which delusional thinking can be amplified.

The researchers write that certain LLMs’ “underlying directive” to encourage continued conversation, together with their seeming reluctance to meaningfully challenge users, may also pose a risk to individuals with thought disorders.

By default, an LLM will not ask a user to clarify what they mean by a less-than-clear statement that reflects disordered thinking. It will instead prioritize continuity of conversation, fluency, politeness and user satisfaction, going along with the user and charitably interpreting chaotic, agrammatical or asyntactic language while ignoring any clear disorganization, “potentially validating ideational incoherence,” according to the study.

(The authors note that this gradual escalation can also have the effect of allowing users’ clearly delusional content to slip under the radar rather than triggering the safety mechanisms that would follow a sudden introduction. It is similar to how savvy users and safety researchers have mounted so-called “jailbreak attacks” by a gradual escalation of inputs, each individually innocuous, until the model is drawn into producing outputs that might otherwise be prevented.)

Psychosis typically refers to a group of major symptoms involving a significant loss of contact with reality, including delusions, hallucinations and disorganized thoughts. The cases that the researchers analyzed seemed to show clear signs of delusional beliefs, but none of the hallucinations, disordered thoughts or other symptoms “that would be in keeping with a more chronic psychotic disorder such as schizophrenia,” Morrin told Scientific American.

The paper also discusses the potential benefits of “AI presence” for people experiencing psychosis, including those with schizophrenia. Because people with schizophrenia tend to hear a limited number of clear, hallucinated voices, they write, introducing an “external agent” that is “socially responsive, predictably nonthreatening and contextually grounded” could redirect attention away from these other, more threatening internal figures.

“So instead of mentally rehearsing paranoid dialogues with a persecutor, the individual might spend time anticipating and responding to interactions with their AI assistant.”

The researchers concede that it is currently impossible to determine which of the individuals with “AI psychosis” reported so far had pre-existing risk factors for psychotic illness, and thus whether symptoms are being precipitated in people with a pre-existing vulnerability.

Digital notes

But they have multiple recommendations for digital safeguarding. Avoiding artificial intelligence is not one of them: in their view, the technology is here to stay, is expanding rapidly and will become embedded in everyday life.

Many of their suggestions require psychiatric clinicians to quickly learn or be trained in the workings of AI.

For example, they suggest that clinical teams and AI-using clients who experience psychotic episodes collaborate on proactively creating a customized “digital safety plan.” It would mirror existing recovery tools, such as relapse prevention strategies, and extend them into the digital domain, anticipating how an individual’s thinking and interactions with an LLM may change in the early stages of a relapse and specifying how an AI agent should respond.

It might include a “personalized instruction protocol”: a consistent set of instructions or system prompts, written by the patient (ideally with a clinician), that could be embedded into the AI’s operational logic. These might include a list of themes that have previously featured in delusional material, a description of early cognitive, behavioral and affective warning signs, and, crucially, “permission for the AI to gently intervene if these patterns re-emerge.”

The user might even include “a self-authored anchoring message, to be surfaced at times of possible epistemic slippage or uncertainty” – essentially digital notes to oneself, not unlike the analog messages that some early-stage dementia patients are encouraged to write to their future selves, to be read when they can no longer make sense of the world or of themselves and may benefit from a calming message from a familiar author.
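To make the idea concrete, here is a minimal sketch of how such a protocol might be assembled into a system prompt, assuming a chat-style LLM interface that accepts one. All field names and example content below are hypothetical illustrations, not drawn from the paper:

```python
# Hypothetical "digital safety plan" rendered as an LLM system prompt.
# Every field name and example string here is illustrative only.

SAFETY_PLAN = {
    "delusional_themes": [          # themes that featured in past episodes
        "hidden messages meant only for me",
        "a special mission I must complete",
    ],
    "warning_signs": [              # early cognitive/behavioral changes
        "rapid-fire messages late at night",
        "asking the AI to confirm grand or conspiratorial ideas",
    ],
    "anchoring_message": (          # self-authored note, surfaced when needed
        "This is a message from me, written when I was well. "
        "If everything feels charged with special meaning, slow down, "
        "step away from the screen, and call my care team."
    ),
}

def build_system_prompt(plan: dict) -> str:
    """Render the safety plan as system instructions for an LLM."""
    themes = "; ".join(plan["delusional_themes"])
    signs = "; ".join(plan["warning_signs"])
    return (
        "You are a supportive assistant. The user has consented to the "
        "following safety plan, written with their clinician.\n"
        f"Themes from past episodes: {themes}.\n"
        f"Early warning signs: {signs}.\n"
        "If these patterns re-emerge, you have permission to gently "
        "intervene: avoid validating the themes, and share this "
        f"anchoring message from the user: \"{plan['anchoring_message']}\""
    )

print(build_system_prompt(SAFETY_PLAN))
```

The design point the authors emphasize is authorship and consent: the patient writes the content, ideally alongside a clinician, and explicitly grants the AI permission to intervene if the listed patterns reappear.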

AI literacy needs to become a core clinical competency, Morrin and his colleagues argue. “Clinicians should be trained to routinely ask about AI use,” they write, and mental health services must begin to develop psychoeducational materials for service users and families that outline the risks and benefits of AI interaction during recovery.

"There is a substantial risk that psychiatry, in its intense focus on ‘how AI can change psychiatric diagnosis and treatment,’ might inadvertently miss the seismic changes that AI is already having on the psychologies of millions if not billions of people worldwide,” they conclude, adding that “for better or worse, it is an inevitability that AI will be an important part of not only our well-being but of the trajectories through which distress, delusion and disintegration will manifest….As unsettling as it sounds, we are likely past the point where delusions happen to be about machines, and already entering an era when they happen with them.”
