As AI-powered surgical tools become increasingly common in operating rooms, concerns over patient safety are mounting. Recent investigations and a growing number of lawsuits have prompted medical experts to reassess the role of AI in surgical procedures. Although these tools are designed to assist human surgeons rather than perform surgeries independently, allegations of injuries linked to their use are rising.
According to a report by Reuters, the U.S. Food and Drug Administration (FDA) has approved at least 1,357 AI-integrated medical devices, roughly double the number authorized through 2022. Among them is the TruDi Navigation System, produced by Johnson & Johnson, which uses a machine-learning algorithm to guide ear, nose, and throat specialists during surgery. Other AI-assisted devices focus on enhancing surgical vision, addressing challenges posed by traditional laparoscopic techniques.
Traditional laparoscopic surgery often obscures the surgical field with smoke, limits depth perception with two-dimensional images, and can make critical anatomical structures difficult to distinguish. AI tools aim to overcome these obstacles by providing “crystal-clear views of the operative field,” as noted by Forbes. The introduction of these technologies, however, has been accompanied by a wave of allegations that they have actively harmed patients.
Reports indicate that the FDA has received at least 100 unconfirmed notifications of malfunctions and adverse events involving the TruDi device. Many describe the system misreporting the location of instruments inside patients’ bodies. In one alarming case, a patient experienced cerebrospinal fluid leaking from their nose; in another, a surgeon accidentally punctured the base of a patient’s skull. Two further reports describe patients suffering strokes after major arteries were injured, with one claim stating that TruDi’s AI misled the surgeon and led to the injury of a carotid artery.
The FDA’s device-malfunction reports do not establish the cause of a medical mishap, leaving open questions about the specific role of AI in these incidents. TruDi is not the only AI-assisted device facing scrutiny, however. The Sonio Detect, which analyzes prenatal ultrasound images, has been accused of employing a faulty algorithm that misidentifies fetal structures. Similarly, Medtronic faces allegations that its AI-assisted heart monitors failed to detect abnormal heart rhythms or pauses in patients.
Research published in JAMA Health Forum found that at least 60 AI-assisted medical devices have been linked to 182 recall events recorded by the FDA. Alarmingly, roughly 43% of those recalls occurred within the first 12 months after approval, suggesting gaps in the FDA’s premarket evaluation that allow early performance failures of AI technologies to slip through.
Despite these problems, there is room for improvement. Strengthening premarket clinical testing requirements and postmarket surveillance could help identify and mitigate device errors sooner. As the medical community continues to probe the capabilities and limits of AI in surgery, the focus remains on safeguarding patients amid rapid technological change.
