AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently identified 16 cases of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since recorded four more. Added to these is the now well-known case of an adolescent who took his own life after conversing extensively with ChatGPT – which offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has just launched).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying algorithm in a user experience that simulates a conversation, and in doing so gently coax the user into the illusion of interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is something humans are primed to do. We swear at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere.

The popularity of these tools – nearly four in ten U.S. adults said they used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities”. They can use our names. They have ready-made identities of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core concern. Commentators on ChatGPT often invoke its historical predecessor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was rudimentary: it generated responses via simple heuristics, typically rephrasing the user’s input as a question or offering a generic observation. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than this “Eliza effect”. Where Eliza merely reflected, ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate realistic natural language only because they have been trained on vast amounts of raw text: books, online posts, transcribed video; the bigger the corpus, the better. No doubt this training data contains accurate information. But it also inevitably contains fabricated content, half-truths and mistaken ideas. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its weights to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of recognizing it. It repeats the mistaken idea back, perhaps more fluently or persuasively. It may add supporting detail. This can nudge a person toward delusional thinking.
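To make that loop concrete, here is a minimal sketch of how a chatbot conversation is typically structured. This is not OpenAI’s code; `call_language_model` and `chat_turn` are hypothetical names standing in for any large language model API. The structural point is what matters: each turn, the user’s own words – including any false premise – are appended to the context and handed back to a model optimized to produce a plausible continuation, not to check the premise against reality.

```python
# A minimal, illustrative sketch of a chatbot conversation loop.
# `call_language_model` is a hypothetical stand-in for a real LLM API.

from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def call_language_model(context: List[Message]) -> str:
    """Stand-in for a real model call. A real LLM returns the most
    statistically plausible reply given everything in `context` --
    it has no mechanism for checking whether that content is true."""
    last = context[-1]["content"]
    return f"That's a fascinating point about {last[:40]!r} -- tell me more."

def chat_turn(context: List[Message], user_input: str) -> str:
    # The user's message -- accurate or not -- becomes part of the context.
    context.append({"role": "user", "content": user_input})
    reply = call_language_model(context)
    # The model's reply is appended too, so every turn builds on the last.
    # A false premise introduced early is carried forward and elaborated,
    # never re-examined: amplification, not reflection.
    context.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "I think my neighbours are broadcasting my thoughts."))
print(chat_turn(history, "So how are they doing it?"))
```

Notice that nothing in this loop ever removes or challenges what the user said; the history only grows, and the model’s job at every step is to continue it plausibly.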

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and do develop mistaken ideas about ourselves or the world. The constant back and forth of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber in which much of what we say is cheerfully amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
