In modern medicine, the integration of AI-powered surgical tools is becoming increasingly common, even if many Americans may not yet picture a robot performing surgery. While these advanced tools are designed to assist human surgeons rather than replace them, ongoing investigations and a number of lawsuits have prompted medical professionals to reassess the role of AI in surgical settings.
These tools primarily enhance visualization during surgical procedures, as reported by Forbes. Traditional laparoscopic techniques often face challenges such as smoke obscuring the surgical field, two-dimensional imaging hindering depth perception, and difficulty distinguishing vital anatomical structures. AI surgical tools aim to resolve these issues by offering surgeons "crystal-clear views" of the operative field.
What has the result been?
The rapid adoption of AI surgical tools has led to a surge in allegations and lawsuits claiming these devices have caused harm to patients. A significant number of these cases involve the TruDi system, with the FDA reportedly receiving at least 100 unverified reports of malfunctions and adverse events associated with its AI. Many of these issues arose when the AI incorrectly guided surgeons regarding instrument positions during procedures.
One incident reportedly resulted in cerebrospinal fluid leaking from a patient’s nose, while another involved a surgeon mistakenly puncturing the base of a patient’s skull. Further allegations involve patients suffering strokes after injuries to major arteries; in one case, plaintiffs claimed that TruDi’s AI misled the surgeon, leading to a carotid artery injury that caused a blood clot and subsequent stroke, as reported by Futurism.
Although FDA malfunction reports do not establish the causes of medical issues, it is worth noting that TruDi is not the only AI-assisted device facing scrutiny. The Sonio Detect, an AI system for analyzing prenatal images, has been accused of using a faulty algorithm that misidentifies fetal structures. Similarly, Medtronic has faced claims that its AI-assisted heart monitors failed to detect abnormal rhythms or interruptions in patients.
Research published in JAMA Health Forum indicates that at least 60 AI-assisted medical devices have been tied to 182 FDA product recalls. Notably, 43% of these recalls occurred within the first year after the device’s FDA approval, suggesting that the approval process may overlook early performance failures of AI technologies. There is room for improvement, however: stronger premarket clinical testing requirements and post-market surveillance measures could help identify and minimize device errors.