A research and policy lab at Georgetown University has launched an interactive web tool aimed at improving transparency around the use of artificial intelligence in the criminal justice system. The initiative seeks to inform stakeholders about how AI technologies are used in law enforcement, courts, and corrections, and to strengthen the dialogue surrounding technology in those areas.
The Evidence for Justice Lab, part of Georgetown’s McCourt School of Public Policy in Washington, D.C., unveiled the Justice AI Tracker (JAI-T). The tool documents the jurisdictions where AI technologies are being tested, piloted, or implemented, compiling publicly available data to give policymakers, researchers, technologists, and community advocates a clearer understanding of how these automated systems are influencing justice outcomes.
The initial release of the tracker concentrates on the 100 largest cities in the United States, with a subsequent update anticipated in six months.
Andrea M. Headley, the faculty director at the Justice Lab, emphasized that the tracker serves both as a research instrument and as a mechanism for accountability in discussions surrounding technological practices in policing and judicial proceedings.
“There hasn’t been a consistent, updated method to monitor where these tools are in use,” Headley remarked in a recent interview. “I am optimistic that will change with the federal government developing inventories of AI tools. I also hope that more cities will voluntarily publish their data, but until then, we aim to fill this critical gap.”
Vinuri Dissanayake, the deputy director at the Justice Lab, indicated that the tracker was born from a larger national study aimed at understanding community perceptions of artificial intelligence and its impact on residents’ experiences with public safety.
“What’s particularly intriguing — we are still analyzing the data — is the lack of awareness surrounding what individuals consider part of their community,” Dissanayake explained. “We need to ensure that the communities impacted by these tools are A) aware of their existence and B) informed about available recourse if they believe these tools are misapplied.”
Dissanayake elaborated on five primary use cases for AI identified by researchers: facial recognition, gun detection, automatic license plate readers, body camera footage review, and non-emergency dispatch assistance.
“Additionally, we are observing AI applications in various contexts, from courtrooms to public-facing services within law enforcement,” she added.
Future updates to the JAI-T database will incorporate vendor information and qualitative insights gathered from residents. Dissanayake mentioned plans to publish a detailed report on the tracker in two months, as well as a comprehensive analysis of AI usage trends across the 100 cities being examined.
“While some vendors have been operating in this space for a considerable time, a multitude of startups are also developing tools for police departments,” she noted.