OpenAI has launched ChatGPT Health, a specialized version of its AI designed for health and wellness inquiries, in response to the increasing number of individuals seeking health information online. Each day, an estimated 40 million users pose health-related questions to ChatGPT. This initiative aims to enhance patient understanding of their health data through advanced AI tools.
To delve deeper into the implications of this new feature, Northwestern Now spoke with Dr. David Liebovitz, co-director of the Institute for Artificial Intelligence in Medicine’s Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine. With decades of experience in clinical informatics, Liebovitz has been at the forefront of integrating AI into patient care, having served as chief medical information officer at two healthcare organizations.
Enhancing Patient Interactions with AI
According to Liebovitz, patients are already turning to AI for health information; the critical issue is not whether they will. “The question is whether we can help them do so more effectively and safely, with appropriate guardrails and realistic expectations about what these tools can and cannot do,” he stated.
The introduction of ChatGPT Health comes as the 21st Century Cures Act takes full effect, requiring healthcare systems to give patients complete access to their medical records through standardized application programming interfaces (APIs). This regulatory shift opens the door for AI tools to help patients interpret their own data: understanding lab results, preparing for medical appointments, and identifying potential gaps in care, without additional cost or, ideally, compromised privacy.
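To make the idea concrete, a record exported through one of these standardized APIs typically arrives as structured JSON (the HL7 FHIR format), which software can translate into plain language. The sketch below shows the general shape of that translation for a single lab result; the specific values and the `summarize_observation` helper are illustrative, not part of any product described in the article.

```python
import json

# A minimal FHIR-style Observation, as a patient might export it
# through a Cures Act-mandated API. Values here are illustrative.
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"text": "Hemoglobin A1c"},
  "valueQuantity": {"value": 6.1, "unit": "%"},
  "referenceRange": [{"low": {"value": 4.0}, "high": {"value": 5.6}}]
}
"""

def summarize_observation(obs: dict) -> str:
    """Return a plain-language summary of one lab result."""
    name = obs["code"]["text"]
    value = obs["valueQuantity"]["value"]
    unit = obs["valueQuantity"].get("unit", "")
    rr = obs["referenceRange"][0]
    low, high = rr["low"]["value"], rr["high"]["value"]
    if value < low:
        flag = "below the reference range"
    elif value > high:
        flag = "above the reference range"
    else:
        flag = "within the reference range"
    return f"{name}: {value} {unit} ({flag} {low}-{high})"

obs = json.loads(observation_json)
print(summarize_observation(obs))
# Prints: Hemoglobin A1c: 6.1 % (above the reference range 4.0-5.6)
```

An AI assistant layers conversational explanation on top of exactly this kind of structured data, which is why standardized export formats matter so much for the vision Liebovitz describes.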
Liebovitz envisions a future where patients can download their medical records, analyze them using AI models on personal devices, and receive tailored insights without involving third-party servers. This approach promises to democratize health information and eliminate privacy concerns associated with sharing data with external entities.
Addressing Privacy Concerns
Despite the potential benefits, Liebovitz raised concerns about the privacy of health data shared with ChatGPT. Unlike conversations with healthcare providers, interactions with AI tools are not protected by health privacy laws such as HIPAA. This means that sensitive information could be subject to legal processes. “For sensitive health matters, particularly reproductive or mental health concerns, that’s a real consideration,” he emphasized.
To mitigate these privacy risks, Liebovitz advocates for a model that runs AI locally on a patient’s device. Modern smartphones possess the capability to process language models without transmitting data to the cloud. This shift towards on-device AI technology is gaining traction, as demonstrated by Apple’s developments in Apple Intelligence. With advancements in open-source models designed for mobile hardware, patients could soon utilize sophisticated health assistants directly on their phones, maintaining complete control over their medical data.
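The privacy property of the on-device approach comes from its architecture: the exported record and the model both live in local memory, and no step in the question-answering path makes a network call. The sketch below illustrates that pattern only; `local_model` is a stub standing in for a real open-weights model that would run through an on-device runtime (for example, a llama.cpp-based engine), and all names are hypothetical.

```python
# Sketch of the on-device pattern Liebovitz describes: the health
# record never leaves local memory, because the only consumer of the
# prompt is a locally running model function.

def local_model(prompt: str) -> str:
    # Stub: a real implementation would run inference on-device
    # using an open-weights model; no data is transmitted anywhere.
    return "This is a locally generated answer about your record."

def ask_about_record(record: dict, question: str) -> str:
    """Build a prompt from the exported record and query the local
    model. There is no network call anywhere in this code path."""
    prompt = f"Patient record: {record}\nQuestion: {question}"
    return local_model(prompt)

record = {"Hemoglobin A1c": "6.1 %"}  # illustrative exported value
print(ask_about_record(record, "Is my A1c in the normal range?"))
```

The design choice is that confidentiality is enforced by construction rather than by policy: there is simply no code that can transmit the record, which is a stronger guarantee than a terms-of-service promise from a cloud provider.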
Liebovitz’s research group is actively exploring how to make this vision a reality. By leveraging standardized health records, powerful mobile technology, and capable AI models, they aim to provide patients with meaningful second opinions on their health data while ensuring that this information remains confidential.
As the healthcare landscape continues to evolve with AI innovations, the potential for improved patient engagement and understanding of health information becomes increasingly significant. The deployment of tools like ChatGPT Health could represent a major step forward in empowering individuals to take charge of their health journeys while navigating the complexities of modern medicine. This article is republished courtesy of Northwestern Now.
