
AI Surgery Devices Harm Patients; Medical Chatbots Lag Behind Patient Research, Reports Say

The rising integration of artificial intelligence (AI) into the profit-driven U.S. healthcare system has raised significant concerns. Despite evident inconsistencies in performance—especially when it comes to vital tasks like transcribing medical treatment notes—medical practitioners seem enthusiastic about embracing this technology. New investigations from Reuters and findings reported in Nature add to the mounting evidence urging caution.

AI’s Potential Risks in Surgical Procedures

The most alarming findings come from a recent Reuters investigation into serious injuries linked to AI-enhanced surgical tools. It reveals that adding AI features to these devices has been followed by a sharp rise in reported malfunctions and adverse outcomes for patients. The Reuters piece opens with a striking failure involving Johnson & Johnson.

In 2021, a division of Johnson & Johnson touted the integration of AI into a device designed to assist ear, nose, and throat specialists during surgeries. This device had been on the market for approximately three years. Prior to the AI implementation, the U.S. Food and Drug Administration (FDA) had received unconfirmed reports of seven device malfunctions, along with one patient injury. Since the AI feature was added, reports of malfunctions and adverse events have skyrocketed to over 100.

From late 2021 to November 2025, at least 10 patients were injured in connection with the TruDi Navigation System, which misled surgeons about the position of their instruments inside patients’ heads during operations. In one case, cerebrospinal fluid leaked from a patient’s nose. In another, a surgeon mistakenly punctured the base of a patient’s skull. Two other patients suffered strokes when major arteries were inadvertently damaged.

Such severe injuries raise troubling questions about a fundamental assumption behind AI in medicine: that it can deliver the visualization and decision-making that human practitioners provide. A recent comment on Twitter raised a related concern, namely whether sufficient high-quality training data even exists for AI medical applications.

Note that these were not even fully automated surgical procedures; the fact that a navigation aid mislocating instruments has produced this many significant patient injuries is troubling in itself. These incidents also cast doubt on whether AI can genuinely replace the essential visual and manual assessments performed by physicians. A recent example underscores the dangers of relying on the patient’s chart rather than an in-person evaluation.1

A Case of Misguided Trust

Reuters further discusses one stroke attributed to TruDi, highlighting that it stemmed from what should have been a minor surgical procedure:

In June 2022, during a minimally invasive procedure known as a sinuplasty to treat chronic sinusitis, the surgeon, Dr. Marc Dean, utilized the TruDi Navigation System to confirm the positioning of his instruments within Erin Ralph’s head. However, Ralph’s lawsuit claims that the system “misled and misdirected” Dean, resulting in an injury to a carotid artery and subsequent clotting. After leaving the hospital, Ralph suffered a stroke that required a portion of her skull to be removed to accommodate brain swelling. She stated in an interview that she continues to struggle with therapy, using a brace to walk and trying to regain movement in her arm.

Tragically, the story doesn’t end there. In May 2023, Dr. Dean experienced another mishap while using TruDi in a different sinuplasty. This time the carotid artery of another patient, Donna Fernihough, reportedly “blew,” spraying blood across the operating room.

Conflicts of Interest

The Reuters investigation also reveals that Dr. Dean’s relationship with Acclarent, the company behind TruDi, has included more than $550,000 in consulting fees since 2014, raising significant conflict-of-interest concerns.

While some readers might conclude that the problem lies with Dean’s performance, the Reuters investigation makes clear that problems linked to AI-enabled devices are widespread:

The FDA has authorized 1,357 medical devices utilizing AI—twice as many as in 2022. Besides the TruDi system, multiple other AI-enhanced devices have sparked concern. Reports indicate that AI-driven heart monitors overlooked abnormal heart activity, and an ultrasound device misidentified fetal body parts.

Recent research from Johns Hopkins, Georgetown, and Yale found that 60 FDA-authorized AI devices have been subject to 182 recalls. Alarmingly, nearly half of those recalls came within a year of authorization, roughly twice the typical recall rate for devices cleared under similar review standards.

Adding to these incidents, the FDA’s adverse event database contains 1,401 reports related to medical devices using AI, underscoring the risks of relying on this technology.

The Increased Presence of AI

As these alarming incidents accumulate, the deployment of AI in medical technologies continues to grow.

While the Reuters article acknowledges that AI has benefited some areas of healthcare, it also criticizes cuts to the FDA team responsible for evaluating AI devices. The article would have benefited from discussing two other critical issues: the FDA’s more lenient regulatory standards for devices compared with drugs, and the risks introduced when software is integrated into medical tools.

AI in Apps and Patient Care

Reuters also describes a worrying trend: patients seeking help from AI-powered apps and receiving dangerously inaccurate readings, with several apps misleading users into believing they might be facing life-threatening conditions.

A plethora of mobile applications in the Apple and Google app stores claim to offer AI-driven medical support, even though they are not authorized to provide diagnoses. Under FDA guidelines, apps intended solely for educational purposes do not require approval, and some developers are pushing the limits of that exemption.

Eureka Health: AI Doctor marketed itself as a comprehensive health companion but faced removal from the App Store after it was found to misrepresent its capabilities.

Moreover, some apps claim exceptional accuracy yet have received overwhelming negative reviews:

“AI Dermatologist: Skin Scanner” claims more than 940,000 users and accuracy on par with a professional dermatologist. Numerous users, however, report inaccurate results and have left one-star reviews across app stores.

Concerning Comparisons in AI Research

Furthermore, a paper published in Nature troublingly compares the performance of AI chatbots in medical self-diagnosis to that of patients who receive no professional guidance at all, a framing that raises significant ethical concerns about normalizing AI as a substitute for professional medical assistance.

The Nature study assessed whether large language models (LLMs) could help members of the public identify health conditions. Participants using LLMs achieved accuracy comparable to that of participants navigating the healthcare landscape without any professional guidance.

Given these findings, our increasing reliance on AI in healthcare is fraught with risk, particularly for patient safety and diagnostic accuracy. The potential consequences of these technologies demand careful consideration and rigorous oversight in a system that may overlook its most crucial element: patient care.

In conclusion, as we navigate this Brave New World with more AI and less direct healthcare practitioner involvement, individuals may find themselves assuming the risks of being experimental subjects. It’s essential to maintain vigilance as these technologies continue to make headway in the medical field.

_____

1 From Coffee Break: Science and Medicine, Bad and Good: Dr. Will Lyon explores this in depth, illustrating the importance of face-to-face patient evaluations over simply relying on data in the patient’s chart.
