Google Faces Scrutiny Over AI Health Advice Disclaimers

Google is under fire for potentially endangering users by minimizing the visibility of health disclaimers attached to its AI-generated medical advice. The tech giant’s AI Overviews, which appear prominently above search results, summarize health information and, Google says, are designed to steer users towards professional medical consultation. Concerns have arisen, however, over the absence of clear warnings at the point where users first encounter that information.

While Google states that its AI Overviews prompt users to seek expert advice, a recent investigation by The Guardian reveals critical flaws in this approach. Disclaimers warning of possible inaccuracies in AI-generated advice are not visible when users first receive health-related summaries. The warnings appear only if users click a button labeled “Show more” for additional details, and even then they are presented in smaller, less noticeable text.

The disclaimer tells users that the information is “for informational purposes only,” advises them to consult a medical professional and acknowledges that AI responses can contain errors. A Google spokesperson confirmed that the disclaimers are intentionally embedded within the overview, but did not deny that their initial lack of visibility poses risks.

Expert Concerns Over Misinformation

Experts in artificial intelligence and health advocacy have voiced concern over the implications of these findings. Pat Pataranutaporn, an assistant professor at the Massachusetts Institute of Technology (MIT), warned that burying the warnings is hazardous. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” he stated, highlighting that even advanced AI models can generate misinformation, potentially leading to harmful outcomes in healthcare contexts.

Gina Neff, a professor of responsible AI at Queen Mary University of London, raised a similar concern about the feature’s design. “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous,” she noted. The findings of The Guardian’s investigation underline the need for prominent disclaimers.

In January, the same publication uncovered that individuals were receiving misleading health information through Google’s AI Overviews. Neff pointed out that users often overlook disclaimers that require additional clicks to access. “People reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes,” she added.

Immediate Actions and Implications

In response to the scrutiny, Google has removed AI Overviews for certain medical search queries, but has not applied this change universally. Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging (AIMI), expressed concern over the placement of AI Overviews at the top of search results: such prominent positioning may give users an unwarranted sense of reassurance, discouraging them from seeking further information.

Sharma explained that AI Overviews can blend accurate and inaccurate information, making it hard for users to tell which is which unless they already know the subject. “What I think can lead to real-world harm is the fact that the AI Overviews can often contain partially correct and partially incorrect information,” she stated.

Tom Bishop, head of patient information at the blood cancer charity Anthony Nolan, has called for urgent action regarding the visibility of health disclaimers. He emphasized the potential dangers of health misinformation, stating, “That disclaimer needs to be much more prominent, just to make people step back and think.”

Bishop advocates for a design change that places disclaimers at the forefront of the information users receive. “I’d like this disclaimer to be right at the top. I’d like it to be the first thing you see,” he said. “Ideally, it would be the same size font as everything else you’re seeing there.”

As the debate continues, how Google presents AI-generated health information carries real consequences. With users increasingly relying on technology for medical advice, the case for clear, accessible disclaimers in digital health resources has only grown stronger.