Urgent Safety Concerns Raised for ChatGPT Health Tool

UPDATE: New research has raised alarming safety concerns about ChatGPT Health, a widely used AI tool. The study, conducted by researchers at the Icahn School of Medicine at Mount Sinai, suggests the tool may frequently misguide users on when to seek urgent medical care, potentially putting lives at risk.

Published in the February 23, 2026 issue of Nature Medicine, this first independent evaluation of the AI-driven health assistant raises critical questions about its reliability. Launched in January 2026, ChatGPT Health provides health guidance directly to consumers, including assessments of emergency situations. However, the researchers found that in a substantial share of serious cases, the tool failed to recommend appropriate emergency care.

The implications of this study are profound. As AI systems increasingly influence healthcare decisions, inaccurate or misleading guidance could have devastating consequences for people who rely on these tools for critical medical advice. The findings underscore the urgent need for stronger safety protocols and oversight of AI in healthcare.

Beyond the concerns about emergency care, the evaluation also identified serious flaws in the tool's suicide-crisis safeguards. This finding is especially alarming to mental health professionals, given how critical timely intervention is in such cases.

As more consumers turn to AI for health-related questions, these risks grow more pressing. Users are urged to exercise caution and seek professional medical advice rather than relying solely on AI-generated recommendations.

Experts are calling for immediate action from developers to address these issues and ensure the safety of AI health tools. Stakeholders in the healthcare and technology sectors are closely monitoring the situation, with potential regulatory changes on the horizon.

Stay tuned for more updates on this developing story as authorities and experts work to address these urgent safety concerns surrounding ChatGPT Health.