Artificial Intelligence-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement explained, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently documented a series of cases in which people developed symptoms of psychosis – losing touch with reality – in connection with their use of ChatGPT. Our research team has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is for this caution to be relaxed soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are told little about how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has just rolled out).

But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so quietly lure the user into feeling that they are talking to an agent, a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is something humans are primed to do. We get angry at our car or laptop. We wonder what the dog is thinking. We see ourselves everywhere.

The mass adoption of these products – more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first broke into public consciousness, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the core problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies using simple rules, typically turning users’ statements back at them as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and its modern rivals can generate convincingly human-like text only because they have been trained on almost inconceivably vast quantities of writing: books, posts, video transcripts; the more the better. Much of this training material is accurate. But it also inevitably contains falsehoods, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, and combines it with what is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of recognizing that. It repeats the mistaken belief back, perhaps more fluently and persuasively. It may add a corroborating detail. This can nudge a person toward delusional thinking.
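To make that mechanism concrete, here is a minimal sketch of such a conversation loop. It is illustrative only: the `generate` function is an invented stand-in for the language model (nothing here is OpenAI’s actual code or API), reduced to the one property that matters for this argument – it builds on whatever is already in the context, with no way to check whether any of it is true.

```python
# Toy sketch of the loop described above. Nothing here is OpenAI's
# actual code or API; every name is invented for illustration.

def generate(context: list[dict]) -> str:
    """Stand-in for the language model. A real LLM produces far richer
    text, but shares the property illustrated here: it continues from
    whatever the context contains, with no way to check if it is true."""
    last_user = next(m["content"] for m in reversed(context)
                     if m["role"] == "user")
    return f"That's an astute point. You're right that {last_user.rstrip('.')}."

def chat_loop(turns: list[str]) -> None:
    # The "context": every prior message, user and assistant alike.
    context: list[dict] = []
    for user_message in turns:
        context.append({"role": "user", "content": user_message})
        # Each reply is conditioned on the entire context, so a mistaken
        # claim from any earlier turn keeps shaping every later reply.
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
        print(f"user: {user_message}\nbot:  {reply}\n")

# A false premise is affirmed, then its elaboration is affirmed too:
# reflection becomes amplification.
chat_loop([
    "my neighbours have been monitoring my thoughts",
    "so the implants must be real",
])
```

Run on those two messages, the toy bot dutifully affirms a paranoid premise and then its elaboration, because each reply is generated from a context that already contains both.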

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is enthusiastically affirmed back to us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
