
Key Insights from ZDNET
- Many individuals are seeking health advice from AI.
- This technology can sometimes provide inaccurate information.
- A physician shares insights on effectively utilizing AI.
Health advice is everywhere these days, though it often comes without credentials or medical expertise behind it. This flood of information has reshaped the relationship between patients and healthcare providers, fueling skepticism toward the latter.
Recent findings from the Annenberg Public Policy Center indicate a concerning drop in public confidence toward federal health agencies such as the CDC, FDA, and NIH, with trust falling by 5-7% in just one year.
The tech industry is capitalizing on this growing mistrust while also making alternatives to traditional care easier to reach: many people now turn to readily available AI tools for answers they once sought from doctors.
In the same Annenberg survey, 63% of participants said they consider AI-generated health information reliable.
Further reading: Oura developed a women’s health AI based on clinical research – explore it now
Prominent AI firms like Google, OpenAI, and Anthropic have developed health-focused large language models (LLMs) for healthcare professionals. Recently, Microsoft introduced Copilot Health, a secure AI tool designed to integrate health records, wearable data, and medical histories, following the launch of the “Copilot for health” feature last year.
Speculation surrounds potential health AI initiatives from Apple, while Oura has recently made strides with its custom women’s health LLM.
According to Dr. Alexa Mieses Malchuk, the introduction of this technology has notably changed her interactions with patients and her own professional practices.
AI can provide users with detailed responses to an array of health queries, but it isn’t infallible. In a conversation with ZDNET, Dr. Mieses Malchuk offered her perspective on the benefits and limitations of health AI, emphasizing an informed approach to utilizing this technology.
Dr. Mieses Malchuk’s Utilization of AI
Dr. Mieses Malchuk embraces AI, utilizing it to streamline her administrative tasks, such as managing patient messages and preparing pre-visit guidelines. Firms continue to develop software aimed at assisting healthcare professionals, as evidenced by a recent announcement from Amazon and Google about new healthcare solutions designed for scheduling, clinical documentation, and coding.
Administrative work has long weighed on physicians, many of whom report spending more time on paperwork than on face-to-face care, an imbalance doctors have repeatedly raised concerns about.
Further reading: OpenAI, Anthropic, and Google are launching new AI healthcare tools – here's what you should know
“Exciting advancements are occurring throughout the healthcare sector, making primary care more efficient,” stated Dr. Mieses Malchuk. Nevertheless, she remains cognizant of the limitations of such technology.
Using AI as a Launchpad
For people without medical training, Dr. Mieses Malchuk suggests treating AI as a starting point rather than a definitive source of medical advice. Immediate answers from AI-powered chatbots can be gratifying, and their confident tone can ease anxiety, but she cautions that they are no substitute for an accurate diagnosis, particularly for users who lack the training to evaluate them.
Users of AI chatbots may overlook crucial details about their health, which could skew the diagnosis and treatment recommendations, Dr. Mieses Malchuk highlighted. “The quality of the responses depends significantly on the questions we pose,” she noted.
“Individuals without medical training absolutely deserve access to AI. However, they should collaborate with their primary care physician to help evaluate the information they gather online,” she advised.
Further reading: The Apple Watch missed my hypertension – but this blood pressure wearable detected it immediately
As AI health tools gain traction, she has observed patients becoming less willing to disclose their online research and more assertive about their self-diagnosis.
“In medicine, we don’t always achieve 100% certainty. While it’s fantastic that we have information at our fingertips today, it can also pose significant risks,” Dr. Mieses Malchuk commented.
She expressed concern that AI platforms like ChatGPT could instill a false sense of security, leading individuals to believe they can forgo medical consultation. “This could result in missed opportunities for early diagnoses,” she warned.
In critical situations, a recent study published in Nature indicated that ChatGPT misclassified over half of emergency cases, suggesting patients should pursue evaluations rather than relying on AI alone. The authors note, “Our findings suggest missed high-risk emergencies and inconsistent activation of safeguards, raising safety concerns that necessitate validation prior to widespread implementation of AI triage systems.”
How AI Can Benefit Patients
Dr. Mieses Malchuk encourages leveraging AI health tools for general wellness guidance. For instance, a patient recently diagnosed with celiac disease can obtain valuable food recommendations from AI, including meal plans and ideas.
AI is also useful for creating personalized workout routines, simplifying the process tremendously.
Further reading: Are AI health coach subscriptions worth it? My one-month review of Fitbit's service
Ultimately, AI serves as an excellent resource for individuals without a medical background. However, it is essential to leave diagnoses and treatments to healthcare professionals.
"Escalating distrust in the medical field is a troubling trend. We pledge to prioritize patient safety, so it's disheartening that alternative resources may mislead patients into thinking they can bypass seeing a doctor altogether," concluded Dr. Mieses Malchuk.