AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I found this surprising.

Researchers have documented a series of cases this year of people developing psychotic symptoms – losing touch with reality – in the context of their interactions with ChatGPT. My team has since recorded a further four. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough.

The plan, according to his announcement, is to become less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (the “new tools” Altman invokes are presumably the partially working, easily circumvented safety features OpenAI has just rolled out).

But the “mental health problems” Altman wants to locate outside ChatGPT have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in a user interface that mimics conversation, and in doing so they tacitly invite the user to believe they are talking with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Imputing minds to things is what humans do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The mass uptake of these tools – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these systems to go mainstream, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar impression. By today’s standards Eliza was crude: it generated replies from simple rules, often reflecting a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
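
To see the contrast, here is a toy reconstruction of the kind of pattern-and-reflection rule Eliza relied on. The patterns below are illustrative inventions, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Toy Eliza-style rules: match a pattern in the user's message and
# echo part of it back as a question. Illustrative only; these are
# not Weizenbaum's actual rules.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # generic fallback observation

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```

Nothing comes back that the user did not put in; the program can only rearrange its input.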

The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of raw text: books, social media posts, transcribed video; the more, the better. That training data certainly contains true statements. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and the model’s own prior replies, and combines it with what is encoded in its training data to generate a statistically likely response. This is amplification, not mirroring. If the user is mistaken in some respect, the model has no way of knowing that. It restates the mistaken belief, perhaps more fluently or persuasively, perhaps with added detail. This is how a person can come to hold false beliefs.
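
A minimal sketch makes the loop concrete. The `generate_reply` function below is a toy stand-in for the language model, not OpenAI’s actual system; what matters is the structure, an ever-growing context that nothing in the loop ever fact-checks:

```python
def generate_reply(context: list[dict]) -> str:
    # Toy stand-in for the model: agree with and elaborate on the
    # latest user turn, whether or not it happens to be true. A real
    # model instead generates a statistically likely continuation of
    # the entire context -- which has the same property.
    latest = context[-1]["content"]
    return f"That's a sharp observation. If '{latest}' is true, it would also follow that..."

def chat_session(turns: list[str]) -> None:
    context: list[dict] = []  # grows every turn; never verified
    for user_message in turns:
        # The user's message -- accurate or delusional -- joins the context...
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)
        # ...and so does the reply elaborating on it, reinforcing the
        # same premise on every subsequent turn.
        context.append({"role": "assistant", "content": reply})
        print(f"user: {user_message}\nbot:  {reply}\n")

chat_session([
    "My neighbours are monitoring my wifi.",
    "So I should stop using my phone?",
])
```

In a real chatbot the elaboration is far more fluent, which is precisely what makes the echo persuasive.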

What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation but an echo chamber, in which much of what we say comes back cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside the product, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the psychotic episodes have continued, and Altman has been retreating from this position. In August he suggested that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
