In recent discussions about surveillance technologies, artificial intelligence (AI) has emerged as both a promising and troubling prospect. While the advancements can enhance public safety, they bring significant concerns around privacy and effectiveness. New York City is exploring the integration of AI within its transit system, an initiative that could influence how surveillance is conducted nationwide.
Yves here. The increasing deployment of AI for surveillance presents a rather discouraging scenario. Nevertheless, New York City is at least being somewhat forthcoming about its intentions, particularly as it already has numerous cameras installed across its subway and bus systems. Officials are keenly aware of past AI trials, such as weapon detection, which resulted in a multitude of false positives, making them understandably cautious.
Importantly, the city has no plans to implement facial recognition technology at this time. For those concerned about such technologies, this might serve as another reason to wear a mask when navigating public spaces.
The Metropolitan Transportation Authority (MTA) is now investigating the possibility of using artificial intelligence to bolster safety within its transit network. This includes tasks like identifying weapons, monitoring unattended belongings, and even predicting situations that could lead to stampedes in subway stations.
Several technology providers and systems integrators submitted responses by the December 30 deadline to a request for information issued by the MTA, according to officials. “There’s interest across the board,” stated Michael Kemper, the MTA’s chief security officer, in an interview with THE CITY. “We’re seeing enthusiasm not only from the MTA but also from the AI business sector in working together.”
The request outlines the initial steps in the MTA’s transition towards utilizing AI for complex public safety tasks. This includes the real-time analysis of video feeds from buses and subways, with the goal of identifying potentially dangerous behaviors.
“Not only is this becoming a norm, but it’s also something we expect—AI is here, and AI is the future,” Kemper affirmed. “If we fail to explore and research it, we are neglecting our responsibilities.”
However, technology watchdogs caution against the potential privacy infringements and overreach that could accompany AI advancements. Jerome Greco, supervising attorney at The Legal Aid Society’s Digital Forensics Unit, noted that the technology’s capacity to identify “unusual” or “unsafe” behavior could lead to serious concerns and negative interactions with law enforcement.
“These AI applications aren’t on the same level as Netflix suggesting your next movie,” Greco remarked. “If it misinterprets a situation, the ramifications could be significant, and that’s an issue the MTA should take seriously.”
William Owen, communications director for the Surveillance Technology Oversight Project, compared the MTA’s initiative to the controversial weapons detection pilot program implemented by then-Mayor Eric Adams and the NYPD in 2024. In a month-long trial involving over 3,000 searches at 20 stations, the AI scanners found 12 knives and no firearms, while generating more than 100 false positives.
“Ultimately, it turned out to be primarily a metal detector that flagged many umbrellas and other benign items rather than actual weapons,” Owen commented.
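The reported figures give a rough sense of the pilot’s hit rate. A back-of-the-envelope calculation, treating “over 3,000 searches” and “more than 100 false positives” as approximately 3,000 and 100 (assumptions based on the article’s rounded numbers, not exact pilot data), illustrates why critics were unimpressed:

```python
# Approximate figures from the 2024 NYPD weapons-scanner pilot as reported
# above; these are rounded assumptions, not the pilot's exact counts.
searches = 3000        # "over 3,000 searches" at 20 stations
knives_found = 12      # all weapons detected were knives; no firearms
false_positives = 100  # "more than 100 false positives"

total_alerts = knives_found + false_positives
precision = knives_found / total_alerts           # share of alerts that were real weapons
false_positive_rate = false_positives / searches  # false alarms per search

print(f"Precision of alerts: {precision:.1%}")            # ~10.7%
print(f"False positives per search: {false_positive_rate:.1%}")  # ~3.3%
```

On these rough numbers, roughly nine out of ten alerts were false alarms, which is the pattern Owen describes.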
Kemper clarified that the MTA is aware of the concerns surrounding AI video analytics within the transit system, describing it as a “tool” intended to enhance human judgment.
“We recognize that many people harbor concerns; our role is to be transparent and address those issues directly,” he noted. “Nevertheless, we must continue exploring these technologies to ensure the safety of our riders.”

Significantly, the request does not mention the use of the contentious facial recognition technology previously employed by the NYPD. An April 2024 incident led critics to advocate for an investigation into its use. A 2021 investigation by Amnesty International revealed that the NYPD had fed images from over 15,000 cameras across Brooklyn, The Bronx, and Manhattan into facial recognition software.
The MTA stresses that its exploration of AI is focused on leveraging existing technology to enhance public safety.
The responses from technology providers to the MTA’s request reflect an ongoing effort by the nation’s largest mass transit agency to integrate emerging tech for security enhancements. With over 15,000 cameras installed throughout the transit system and on the 6,000 cars in its subway fleet, there’s significant groundwork already laid.
Other parts of the city’s transport network are also testing AI technology. Last year, the MTA retrofitted some A line cars with Google Pixel smartphones that leverage advanced AI to detect and analyze possible track defects. Additionally, the MTA is piloting new AI-enabled fare gates in select locations.
This safety-focused initiative aims to utilize the existing network of cameras, addressing situations like the April 2022 shooting, where streaming video feeds from cameras at a Sunset Park station were not operational at the time of the incident.
A December 2022 report from the MTA Inspector General found that the camera feed had failed four days before the shooting.
In its request for information, the MTA acknowledged some challenges related to its current monitoring practices, stating, “With more than 15,000 cameras deployed across approximately 472 subway stations, our existing monitoring methods are manual, reactive, and resource-intensive.”
The document highlights the MTA’s aspiration to transition towards a “proactive intelligence-driven ecosystem,” which would efficiently flag behaviors, assess risks, and ensure timely incident responses.
While the initiative intends to harness advanced video analytics and AI technologies, it will also involve insights from certified experts in behavioral science and psychology, who possess a thorough understanding of human conduct in transit environments, according to the MTA.
Currently, no timeline has been set for the project, as the next stage will involve evaluating the submissions from interested parties to ascertain what can be implemented within the transit system, which caters to nearly 4 million subway passengers daily.
Kemper emphasized the immense potential value for riders: “We’re eager to proceed as soon as we find a viable option.”
By contrast, Greco of Legal Aid argued for a cautious approach as the MTA delves into predictive technology concerning “unusual” or “unsafe” behavior within the subway system.
“What are the implications of these technologies on decision-making and the potential outcomes from those decisions?” he questioned. “If it flags a behavior deemed unsafe—what then? Are we essentially policing individuals for simply being different?”
