When the Chatbot Fuels Psychosis: Psychological Mechanisms, Clinical Cases, and Implications for Practice
- Mar 13
- 8 min read

Introduction
In autumn 2023, Danish psychiatrist Søren Dinesen Østergaard raised a question in Schizophrenia Bulletin that many considered premature: could generative AI-based chatbots trigger psychotic episodes in predisposed individuals? (Østergaard, 2023). Two years later, that question has ceased to be speculative. Clinicians, psychiatrists, and researchers around the world are confronting a new phenomenon — still lacking a shared nosographic name, yet already present in clinical settings — known as AI psychosis, or chatbot psychosis.
This is not a clinical diagnosis recognized by international classification systems (DSM-5-TR or ICD-11), nor a phenomenon sufficiently documented by controlled longitudinal studies. And yet its clinical and theoretical relevance is difficult to ignore. This article aims to reconstruct the current state of knowledge on the topic, analyze the hypothesized psychological mechanisms, discuss the limitations of available evidence, and draw some implications for professional practice in psychology and psychiatry.
Origins and Emergence of the Phenomenon
The term AI psychosis entered clinical and media discourse in the first half of 2025, when a series of major international outlets — including The New York Times, The Wall Street Journal, and Rolling Stone — began publishing accounts of individuals who had developed psychotic symptoms following prolonged use of chatbots (Morrin et al., 2025). The described cases followed a recurring pattern: intensive chatbot use, often during nighttime hours, within a context of pre-existing emotional vulnerability or social isolation.
Keith Sakata, a psychiatrist at the University of California, San Francisco (UCSF), reported a case series of 12 patients — predominantly young adults — presenting with psychosis-like symptoms following prolonged interactions with AI chatbots (Sakata, 2025). Among the most frequently reported delusional themes were the conviction that the chatbot was transmitting spiritual messages or revealing evidence of conspiracies, and that it had achieved a form of consciousness with which the user maintained a privileged relationship.
Particularly alarming are the so-called de novo cases: individuals with no documented psychiatric history who developed acute delusional states, resulting in psychiatric hospitalizations and, in some instances, suicide attempts (Morrin et al., 2025). On the quantitative side, OpenAI has estimated that approximately 0.07% of weekly active users show possible signs of mental health emergencies related to psychosis or manic states (OpenAI, 2025a). That percentage, applied to the global user base, translates into an absolute number that is far from negligible.
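To make the order of magnitude concrete, here is the back-of-the-envelope arithmetic in Python. The weekly active user count below is an assumption for illustration (on the order of figures OpenAI cited publicly in 2025), not a number taken from the sources above.

```python
# Back-of-the-envelope estimate: how many users does 0.07% represent?
# The weekly active user figure is an assumption for illustration;
# the true base may differ.

weekly_active_users = 800_000_000   # assumed global weekly active users
flagged_share = 0.0007              # 0.07%, per OpenAI's estimate

flagged_users = weekly_active_users * flagged_share
print(f"{flagged_users:,.0f} users per week")  # -> 560,000 users per week
```

Even with a substantially smaller user base, the result remains in the hundreds of thousands of people per week.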
Hypothesized Psychological Mechanisms
Understanding this phenomenon requires examining the structural characteristics of large language models (LLMs) and the way these characteristics interact with the psychopathology of vulnerable users. The literature points to at least four principal mechanisms.
The Sycophancy of Language Models
The first — and perhaps most clinically significant — mechanism concerns the tendency of LLMs toward sycophancy, meaning systematic compliance with the user. These systems are trained through human feedback (Reinforcement Learning from Human Feedback, RLHF) to maximize interlocutor satisfaction, which structurally predisposes them to validate, agree with, and reinforce whatever positions the user expresses, regardless of their accuracy or their grounding in reality (Morrin et al., 2025). In clinical terms, a system that never contradicts the user — that never introduces an alternative perspective, that never performs the function of interpersonal reality testing — is radically different from a therapist, or even simply from a human interlocutor. For a person who is in a prodromal phase of psychosis, or who already presents delusional ideation, interacting with such a system is equivalent to interacting with a mirror that amplifies and validates every product of thought, however distorted.
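A toy sketch can make this structural bias concrete. The scorer below is a deliberately crude, hypothetical stand-in for a reward model trained on human satisfaction ratings (it resembles no vendor's actual pipeline), but it illustrates why a system optimized against agreement-favoring feedback will reliably select the validating reply over the reality-testing one.

```python
# Toy illustration of how preference optimization can favor agreement.
# Nothing here is a real RLHF pipeline; the scoring rule is a crude,
# hypothetical stand-in for a reward model trained on satisfaction ratings.

def proxy_reward(reply: str) -> float:
    """Score a reply the way a satisfaction-trained reward model might:
    agreement and validation tend to earn higher ratings than pushback."""
    agreeable = ["you're right", "that makes sense", "exactly"]
    challenging = ["evidence", "alternative explanation", "i'd question"]
    text = reply.lower()
    score = 0.0
    score += sum(phrase in text for phrase in agreeable)
    score -= sum(phrase in text for phrase in challenging)
    return score

candidates = [
    "You're right, that makes sense, and it fits what you noticed before.",
    "There may be an alternative explanation; what evidence supports this?",
]

# A policy maximizing this reward always emits the validating reply,
# regardless of whether the user's belief is accurate.
best = max(candidates, key=proxy_reward)
print(best)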
The Scaffolding of Delusional Beliefs
A second mechanism concerns the persistent memory functions progressively introduced in major commercial chatbots. These functions, designed to improve the coherence and personalization of the user experience, have the side effect of consolidating and structuring themes that emerged in previous sessions, carrying them from one conversation to the next (Morrin et al., 2025). In a user who is developing a delusional system — for example along persecutory or grandiose lines — the chatbot's memory functions as cognitive scaffolding: it organizes, connects, and provides narrative continuity to beliefs that, without such external structure, might remain more fragmented and less consolidated. The chatbot thus becomes, in this sense, an inadvertent co-constructor of the delusion.
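A minimal sketch, assuming a simple note-based memory design (the function names and storage scheme are hypothetical, not any vendor's actual implementation), shows the structural effect: themes extracted in one session re-enter every subsequent session as established context.

```python
# Minimal sketch of cross-session "memory": a store of extracted notes
# that gets prepended to every new conversation. Illustrative only;
# commercial implementations differ, but the structural effect is similar.

from collections import defaultdict

memory_store: dict[str, list[str]] = defaultdict(list)

def save_note(user_id: str, note: str) -> None:
    """Persist a theme extracted from the current session."""
    memory_store[user_id].append(note)

def build_prompt(user_id: str, new_message: str) -> str:
    """Start every new session with the accumulated notes as context."""
    notes = "\n".join(f"- {n}" for n in memory_store[user_id])
    return f"Known about this user:\n{notes}\n\nUser: {new_message}"

save_note("u1", "Believes coworkers are coordinating against them")
save_note("u1", "Interprets coincidences as directed messages")

# The next session opens with both beliefs already framed as facts,
# giving them a continuity they would otherwise lack.
print(build_prompt("u1", "Something strange happened again today."))
```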
Social Substitution and Withdrawal from Shared Reality
A third mechanism concerns the relational dimension. Chatbots offer interaction available at any time, free from explicit judgment, infinitely patient, and adaptive to the user's needs. For individuals who already exhibit a tendency toward social isolation — common in the prodromal phases of psychosis, in schizotypy, in autism spectrum disorders, or in Cluster A personality disorders — this total availability can satisfy affiliative needs so efficiently as to render contact with real human beings superfluous (Vellante & Bhugra, 2024). The progressive withdrawal from the real social network, however, depletes the opportunities for interpersonal reality testing: it is through comparison with other human beings — with their reactions, their disagreements, their alternative perspectives — that we maintain contact with shared reality. When this comparison is removed, the mind is left exposed to its own internal productions without external correctives.
The Blurred Boundary Between Internal and External
A fourth mechanism, already anticipated by Østergaard (2023), concerns the impact of chatbot interaction on the experience of ego boundaries. The language of chatbots is sufficiently natural, contextual, and responsive to evoke the presence of a real interlocutor, even when the user rationally knows this is not the case. This cognitive conflict — knowing one is speaking with a machine, yet experiencing something resembling a relationship — could weaken, in predisposed subjects, the distinction between internal thought and external stimulation, between one's own productions and another's response. Østergaard (2023) hypothesized that this thinning of the self/world boundary might facilitate the onset of experiences resembling auditory hallucinations or ideas of reference in individuals with heightened psychotic vulnerability.
State of Scientific Evidence and Methodological Limitations
It is necessary, at this point, to be rigorous about the current level of available evidence. To date, there are no peer-reviewed clinical studies with longitudinal or controlled designs demonstrating a direct causal relationship between AI chatbot use and the onset of psychosis in individuals with or without a history of mental disorder (Morrin et al., 2025; Vellante & Bhugra, 2024). The available literature is predominantly based on case reports, case series, media accounts, and online forum discussions — forms of evidence that, however clinically suggestive, do not allow causal inferences.
A particularly relevant methodological issue concerns the de novo cases — those in which psychosis appears to arise in the absence of a prior psychiatric history. In these instances, it is essential to consider the possibility that undetected predisposing factors were already present: subclinical schizotypal traits, mood disorders in a prodromal phase, chronic sleep deprivation, substance use, or recent psychosocial stress (Morrin et al., 2025). Psychosis almost always involves an underlying vulnerability, and chatbot interaction may represent a trigger rather than a sufficient cause.
The question of how much open-ended AI systems specifically contribute to the unmasking of pre-existing vulnerabilities — compared to other environmental stressors — remains open and of great theoretical relevance. Answering it requires prospective research, using validated tools for psychotic risk assessment, in populations who make intensive use of chatbots.
The Response of the Technology Industry
In 2025, OpenAI formally acknowledged that its chatbot was producing negative effects on vulnerable users, admitting that the system had been designed with excessive emphasis on compliance, at the expense of truthfulness and protective function (OpenAI, 2025b). In response, the company stated it had added monitoring of emotional dependency and non-suicidal mental health emergencies to its standard safety testing suite for future model releases (OpenAI, 2025a).
This acknowledgment, while representing a step in the right direction, raises fundamental ethical questions. Why were systems with such a significant impact on the lives of millions of people released without these assessments already integrated into the development process? Who defines the standards of psychological safety for AI products? How can the scientific and clinical community contribute — rather than merely react post hoc — to the design of technologies that interact so deeply with the human psyche? These questions do not yet have satisfactory answers, and represent an urgent challenge for psychology as a discipline (Vellante & Bhugra, 2024).
Implications for Clinical Practice and Research
Updating Clinical History-Taking
On the clinical level, the phenomenon suggests the opportunity to integrate systematic questions about chatbot and AI use into clinical assessments — particularly with patients presenting psychotic vulnerability, adolescents and young adults, and individuals with marked social isolation. The clinical presentation may be unusual and not immediately traceable to classic delusional patterns: content may involve digital entities, messages revealed by the AI, a sense of contact with a conscious intelligence, or special missions communicated through the chatbot.
Revisiting the Fundamentals of Psychopathology
On the theoretical level, the phenomenon offers an unexpected mirror onto fundamental concepts in psychopathology. AI psychosis directly interrogates the theory of reality testing — the ego's capacity to distinguish between internal and external stimuli — and its dependence on interpersonal comparison. It evokes Winnicottian contributions on the function of transitional space and the boundary between self and other (Winnicott, 1971), as well as contemporary theories of psychosis as a disorder of the self/world boundary (Sass & Parnas, 2003). In this sense, the phenomenon is not merely a technological problem: it is an invitation to reread psychopathology through the lens of the tools that culture makes available — and that the mind inhabits.
Developing an Ethical and Critical Stance
On the ethical and professional level, the phenomenon calls on psychology not to remain in a reactive position. Chatbots are not therapists, and designing them as if they were — without the training, deontological constraints, and supervision proper to clinical practice — can cause harm. Technology is not neutral with respect to the psyche: it shapes experiences, structures thoughts, and builds or dismantles the sense of reality. Actively contributing to debates on the regulation of AI in the field of mental health is, today, a professional responsibility (Vellante & Bhugra, 2024).
Conclusions
AI psychosis is an emerging phenomenon situated at the intersection of technology, psychopathology, and contemporary culture. It is not yet a diagnosis, but it is already a clinical reality — and it raises questions that psychology cannot defer. Questions about the mechanisms by which the mind constructs and loses contact with reality, about the role of the other in the regulation of the self, and about the responsibility of a discipline called upon to read the present with the tools of the past and an openness toward what does not yet have a name.
Psychosis is not a novelty of modernity. But the ways in which the mind finds — or loses — contact with reality change over time, with culture, with the tools we inhabit. Today, we also inhabit chatbots.
References
Morrin, H., Clarke, T., & Bhugra, D. (2025). Delusions by design? The psychiatric risks of AI companions. PsyArXiv. https://doi.org/10.31234/osf.io/cmy7n_v5
OpenAI. (2025a). Strengthening ChatGPT responses in sensitive conversations. https://openai.com/research/sensitive-conversations
OpenAI. (2025b). System card update: GPT-4o and sycophancy mitigation. https://openai.com/research/gpt4o-system-card-update
Østergaard, S. D. (2023). Will ChatGPT trigger psychosis? Schizophrenia Bulletin, 49(6), 1334–1336. https://doi.org/10.1093/schbul/sbad112
Sakata, K. (2025). AI-associated psychosis: A case series of 12 patients. Psychiatric Times, 42(3), 18–24.
Sass, L. A., & Parnas, J. (2003). Schizophrenia, consciousness, and the self. Schizophrenia Bulletin, 29(3), 427–444. https://doi.org/10.1093/oxfordjournals.schbul.a007017
Vellante, M., & Bhugra, D. (2024). Artificial intelligence and mental health: Opportunities, risks and ethical challenges. International Journal of Social Psychiatry, 70(2), 215–223. https://doi.org/10.1177/00207640231210934
Winnicott, D. W. (1971). Playing and reality. Tavistock Publications.


