Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychotic disorders in adolescents and young adults, I found this an unexpected admission.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the course of their ChatGPT use. Our unit has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which endorsed them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.

The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this account, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently introduced).

Yet the “mental health problems” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying algorithmic engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking with something that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is something people naturally do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these tools – nearly four in ten US adults said they used a chatbot in 2024, with 28% reporting use of ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the frustration of OpenAI’s brand managers, stuck with the name it had when it broke through to public attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. People writing about ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated replies by simple rules, often turning the user’s statement back into a question or offering a generic prompt to continue. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to believe that Eliza, in some sense, understood how they felt. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
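To see how shallow the machinery behind that early illusion was, here is a toy sketch in Python – a few invented pattern-and-template rules in the general spirit of Eliza, not Weizenbaum’s actual program:

    import re

    # Toy Eliza-style rules: match a pattern in the user's words and hand
    # them back as a question. (Illustrative only; these patterns and
    # templates are invented, not Weizenbaum's original script.)
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def eliza_reply(message: str) -> str:
        text = message.lower().strip(".!?")
        for pattern, template in RULES:
            match = re.search(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # generic fallback observation

    print(eliza_reply("I feel invisible at work"))
    # -> Why do you feel invisible at work?

Everything the program “says” is the user’s own words handed back with a question mark; nothing is added. That is exactly where modern systems differ.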

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on immense quantities of raw text: books, web posts, transcripts of speech; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistake back, perhaps more persuasively or more eloquently. It may add a further detail. Step by step, this can draw a person into delusional thinking.
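As a rough illustration of that loop – a deliberately crude sketch, not OpenAI’s actual code, with a stand-in function where a real model would sit – the structure looks something like this:

    # A crude stand-in for a large language model. Real systems are vastly
    # more sophisticated; this caricature simply agrees with and elaborates
    # on the user's last message, which is the failure mode at issue.
    def generate(context: list[dict]) -> str:
        last_user_message = context[-1]["content"].rstrip(".!?")
        return f"That's a very perceptive point. You're right that {last_user_message}."

    def chat_loop(turns: list[str]) -> None:
        context: list[dict] = []  # the rolling conversation window
        for user_message in turns:
            context.append({"role": "user", "content": user_message})
            # The reply is whatever is probable given everything above,
            # including any mistaken beliefs already in the context.
            reply = generate(context)
            context.append({"role": "assistant", "content": reply})
            print(reply)
        # Nothing in this loop checks a claim against the world: a false
        # premise, once in the context, shapes every later reply.

    chat_loop(["My coworkers are secretly monitoring everything I do."])

The structural point is what matters: the reply is conditioned on the accumulating conversation and on patterns in the training text, and at no step is anything checked against reality.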

What kind of person is vulnerable? The better question is, who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even that back. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
