The Ethics and Impacts of AI in Healthcare
In the latest episode of The Pitt, tensions surrounding the integration of artificial intelligence (AI) at the Pittsburgh Trauma Medical Center come to a head. The ongoing second season of the five-time Emmy Award-winning medical drama showcases the challenges and potential of AI technology in a high-stakes healthcare environment. As the narrative unfolds, it presents a striking reflection of real-world debates within hospitals across the nation.
AI’s Introduction in Medicine
This week, we meet Dr. Baran Al-Hashimi (Sepideh Moafi), a new attending physician with ambitious plans to improve efficiency at the hospital. She confidently asserts that new AI systems could reduce charting time by up to 80%, freeing staff to devote more time to patient care and their personal lives. However, as the plot thickens in episode six, the doctors discover that the AI tool has misrepresented crucial patient information, confusing “urology” with “neurology.” Al-Hashimi defends the technology, stating, “AI’s two percent error rate is still better than dictation,” while acknowledging that proofreading is essential. This prompts a vehement response from Dr. Campbell (Monica Bhatnagar), an internal medicine physician: “I don’t really care whether or not you want to use robots down here. I need accurate information in the medical record.”
This storyline encapsulates the mixed sentiments regarding AI’s role in medicine, mirroring the findings of a 2025 American Medical Association survey, which revealed that two-thirds of physicians utilize AI to some extent. Some find it immensely helpful in patient care and in alleviating burnout, while others believe its rapid implementation is fraught with inaccuracies and pitfalls.
AI as a Charting Assistant
In The Pitt, AI is primarily positioned as a charting assistant, addressing one of the most burdensome aspects of a doctor’s job. Charting often demands extended hours, leaving many healthcare professionals overwhelmed. For several years now, hospitals have been experimenting with ambient AI scribes—tools that passively listen to conversations and summarize them for medical documentation.
Murali Doraiswamy, a physician and professor at Duke University School of Medicine, points out that while current AI tools do allow physicians to focus more on their patients rather than on note-taking, they only manage to save about one or two minutes per appointment—time that is often spent editing AI-generated content. “Overall, it’s an improvement, and the hope is it gets better and better,” he asserts.
Further advancements in AI charting have been made, such as the implementation of GW RhythmX at Presbyterian Healthcare Services in New Mexico. This AI assistant provides summaries of patients’ medical histories before appointments, significantly reducing the need to sift through extensive files. Lori Walker, Chief Medical Information Officer at Presbyterian, explains that RhythmX also helps physicians address complex patient issues rapidly, reducing the need for lengthy consultations with specialists.
Residents like Sudheesha Perera at Yale use OpenEvidence, a chatbot trained on vetted medical literature, almost daily. “If there’s a patient with an infection, I might ask it, ‘I picked this medication for this reason. What are the alternatives?’” he shares, noting that this method frequently outpaces searching through traditional medical textbooks. Perera is involved in developing AI training curricula for residents, ensuring they adopt best practices when leveraging the technology.
Risks and Concerns of AI Deployment
However, the introduction of AI in healthcare is not without its fears and risks. As depicted in The Pitt, real-world deployments of these technologies have also produced significant errors. Michelle Gutierrez Vo, a registered nurse and president of the California Nurses Association, recounted an instance where an AI tool used to supplant human judgment in case management recommended discharging a cancer patient too soon. “We have proven time and time again that the implementation or use of AI is often counterproductive and can escalate costs,” she says. A 2024 poll found that two-thirds of unionized nurses believed AI undermined patient safety and their professional roles.
Moreover, concerns about cost-cutting tactics that exploit AI are pervasive. Dr. Robby (Noah Wyle), the show’s protagonist, echoes these worries: “It’ll make us more efficient—but hospitals will expect us to treat more patients without extra pay.” A further issue is the potential de-skilling of healthcare providers: while AI tools might assist doctors now, they risk eroding essential knowledge and decision-making capabilities needed in high-pressure situations.
This theme resonates with Perera, who states, “When a patient is crashing in front of your eyes, you need immediate knowledge. An AI tool is too slow.” He emphasizes the importance of ensuring that new physicians do not rely excessively on AI, warning that overreliance could erode fundamental medical skills: “the same kid who never wrote a college essay may evolve into a doctor who never writes a critical assessment,” he notes.
Doraiswamy urges the development of AI tools that enhance rather than replace human judgment. “We need AI to inspire doctors to ask the right questions instead of merely providing answers,” he concludes. “We want technology that stimulates thinking.”
Conclusion
The evolving landscape of AI in healthcare offers remarkable opportunities yet poses significant challenges. As exemplified in The Pitt, the integration of AI can augment medical practices but also raises crucial ethical concerns. As the dialogue surrounding AI technology continues, it is imperative for healthcare professionals to navigate its impacts judiciously, ensuring that it ultimately serves to enhance patient care rather than hinder it.