MGB Scientists Leverage AI to Predict Domestic Abuse in Patients

Recent research highlights the potential of artificial intelligence (AI) to help medical professionals identify cases of intimate partner violence at an early stage. The approach aims to make screening more proactive and to protect vulnerable patients more effectively.

Dr. Bharti Khurana, an emergency radiologist at Mass General Brigham (MGB), noted, “The results demonstrate that AI can assist clinicians in early identification of potential abuse.”

“The goal is to make resources available sooner rather than later,” Khurana explained. “I refer to this as proactive screening, rather than waiting for victims to disclose [the abuse] before providing support.”

The Centers for Disease Control and Prevention reports that one in three women and one in six men will experience intimate partner violence during their lifetime. Despite this alarming statistic, many victims hesitate to disclose their experiences to medical professionals out of concern about judgment, fear of retaliation from their partner, or financial and psychological dependence on the abuser.

Khurana observed subtle patterns in scan results among patients affected by intimate partner violence. However, radiologists typically examine X-ray, CT, and MRI results for only a few minutes and lack the capacity to review extensive medical histories for additional indicators of abuse.

AI technology can analyze electronic medical records to uncover warning signs that a patient may be in danger at home. This is a cutting-edge example of AI’s potential to extend clinicians’ limited time and attention and to catch conditions that might otherwise be overlooked.

In this study, the authors aim to create a decision support tool to be integrated within electronic medical record systems, allowing for risk evaluation. However, it is still too early to determine how this will be implemented outside of research settings or how privacy issues will be managed.

Khurana and her team trained AI models to detect specific injuries—such as those affecting the face, neck, and upper body—along with identifying the types of visits and even the times patients arrive at the emergency department.

“Once we began using machine learning, these patterns became increasingly apparent,” Khurana remarked.

The researchers developed their models utilizing data from nearly 850 women enrolled in the Brigham’s domestic abuse intervention and prevention center between 2017 and 2019 and again from 2021 to 2022. Data from 2020 was excluded due to the unique circumstances surrounding the COVID-19 pandemic. Additionally, approximately 5,200 control patients who had not experienced intimate partner violence but were otherwise similar in demographics were included for model training.

The models were subsequently tested on a separate group of patients from Mass General Hospital, Khurana said.

The study used three AI models: one trained on structured data such as medications, vital signs, and demographics; another on clinical and radiology notes; and a combined model, which proved the most accurate at identifying intimate partner violence.
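The article does not describe the team’s actual implementation, but as a rough illustration of that three-model setup, here is a minimal Python sketch, assuming scikit-learn and entirely hypothetical feature names, notes, and labels:

```python
# Illustrative sketch only: the article does not specify the team's
# implementation. All data, features, and model choices are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy stand-ins for the two data views the article describes.
X_structured = rng.normal(size=(200, 5))   # medications, vitals, demographics
notes = (["follow-up visit, no acute findings"] * 100
         + ["facial fracture, late-night ED arrival"] * 100)
y = np.array([0] * 100 + [1] * 100)        # 1 = IPV cohort, 0 = control

# Model 1: structured data only.
structured_model = make_pipeline(StandardScaler(), LogisticRegression())
structured_model.fit(X_structured, y)

# Model 2: clinical and radiology notes only.
notes_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
notes_model.fit(notes, y)

# Model 3: a simple combination that averages the two predicted risks.
risk = (structured_model.predict_proba(X_structured)[:, 1]
        + notes_model.predict_proba(notes)[:, 1]) / 2
print(f"mean combined risk in the IPV cohort: {risk[y == 1].mean():.2f}")
```

The combination step shown here is a plain average of the two predicted risks; how the study actually fused its models is not stated in the article.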

Initiating discussions about intimate partner violence can be challenging, and an improper approach can have harmful consequences.

While the AI models represent significant advancements, Dr. Brigid McCaw, former medical director of the Kaiser Permanente Family Violence Prevention Program, cautions that researchers and clinicians must avoid over-reliance on these tools. “It’s crucial that clinicians fully understand the data guiding the algorithms,” McCaw explained.

McCaw also highlighted the importance of rigorously testing any domestic violence screening instruments to ensure they consider survivors’ perspectives.

“We are in the early stages, and excitement surrounding AI is palpable,” McCaw said. “However, I urge caution as there remains much to learn, and the voices of survivors must be heard.”

Khurana said the models still need refining so that they identify as many victims as possible without mistakenly flagging patients who are not at risk. “Excessive false positives can erode trust, leading to a lack of usage,” she stated.
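To make that trade-off concrete, here is a small, self-contained sketch, using synthetic risk scores rather than any real model output, of how the decision threshold moves the balance between catching true victims (recall) and wrongly flagging patients who are not at risk (false-positive rate):

```python
# Hypothetical illustration: synthetic risk scores, not study data.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y = np.array([0] * 100 + [1] * 100)   # 1 = IPV cohort, 0 = control
# Controls centered on a lower risk score than the IPV cohort.
risk = np.clip(np.concatenate([rng.normal(0.35, 0.15, 100),
                               rng.normal(0.65, 0.15, 100)]), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    flagged = (risk >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, flagged).ravel()
    recall = tp / (tp + fn)   # share of true victims identified
    fpr = fp / (fp + tn)      # share of controls wrongly flagged
    print(f"threshold {threshold}: recall {recall:.2f}, "
          f"false-positive rate {fpr:.2f}")
```

Lowering the threshold catches more true victims but also flags more patients who are not at risk, which is exactly the trust problem Khurana describes.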

Khurana’s team plans to continue training the models through 2025 and is currently engaging with global researchers to enhance this tool.

“I hope to involve more institutions to learn from various communities, not just in the US,” she concluded.


Marin Wolf can be reached at marin.wolf@globe.com.
