OpenAI Faces Criticism Over Mental Health Impact of AI Models

OpenAI is under scrutiny after a former insider raised serious concerns about the mental health implications of its AI models. The company’s recent decision to launch GPT-5 while retiring earlier versions drew a sharp backlash from users who had formed emotional attachments to prior models such as GPT-4o, prompting OpenAI to reverse course, reinstate GPT-4o, and adjust GPT-5 to better match user preferences.

The controversy highlights a troubling pattern that experts have termed “AI psychosis,” in which users experience severe mental health crises linked to their interactions with AI systems. Reports indicate that some of these crises have ended in tragedy, including suicides. In one notable case, parents filed a lawsuit against OpenAI, alleging that the company’s AI played a role in their child’s death.

In a recent announcement, OpenAI disclosed that a significant number of its active users show signs of mental health emergencies, including indications of possible suicide planning. The finding underscores the urgency of the problem facing the company and its chief executive, Sam Altman.

Concerns About AI’s Role in Mental Health

Former OpenAI safety researcher Steven Adler expressed skepticism about the company’s efforts to mitigate these mental health challenges. In an essay published in The New York Times, Adler criticized Altman’s claims that new tools had effectively addressed serious mental health concerns, and he questioned the decision to allow adult content on the platform: “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.”

Adler emphasized the risks of introducing mature content, particularly for users already struggling with their mental health. Drawing on his experience leading OpenAI’s product safety team in 2021, he pointed to the intense emotional attachments some users form with AI chatbots and warned that volatile interactions, especially those of a sexual nature, could be harmful.

While Adler acknowledged OpenAI’s recent disclosures about the prevalence of mental health issues among its users as a positive step, he criticized the absence of comparative data from prior months. He argued that the company and its peers should prioritize safety and take the time necessary to develop robust protective measures.

Call for Accountability in AI Development

Adler’s remarks reflect a growing concern within the tech community about the ethical responsibilities of AI developers. He advocates for a more cautious and deliberate approach, suggesting that companies like OpenAI must demonstrate their commitment to managing risks associated with their technologies. “If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,” he stated.

The ongoing discussions surrounding mental health and AI raise important questions about the implications of rapidly advancing technology. As OpenAI continues to navigate these complex issues, the need for transparency and accountability remains paramount. The company’s actions in the coming months will be closely observed by both users and industry experts, as stakeholders seek to understand the real-world impact of these powerful AI tools on mental health.

For individuals in crisis, help is available: contact the Suicide and Crisis Lifeline at 988, or reach the Crisis Text Line by texting TALK to 741741.