
AI Misinformation: Fueling Voter Suppression Ahead of US Elections

As the 2026 midterm elections approach, the U.S. faces a moment of political vulnerability driven largely by an increasingly unstable information landscape. That landscape is defined less by partisan divides than by the rise of misinformation, particularly misinformation produced with artificial intelligence (AI).

AI technology is now capable of producing convincing images, videos, audio clips, and documents that can easily mislead the public, often eluding casual scrutiny and casting doubt on what is real and what is fabricated.

The risks extend beyond merely misleading voters about candidates or policies; AI-generated media poses a genuine threat to electoral participation itself. It can foster an atmosphere of fear, confusion, and mistrust surrounding the voting process. These concerns are not merely theoretical; recent events have shown the potential for AI to manipulate narratives and undermine public confidence in elections.

The tragic killings of Renee Nicole Good and Alex Pretti by immigration enforcement agents in Minneapolis exemplify the dangers inherent in a digitally manipulated information environment. In the immediate aftermath of their deaths, social media platforms were flooded with AI-generated images and misleading video clips purportedly showing events that never occurred.

Numerous posts featured miscaptioned photographs falsely identified as Good, along with fabricated screenshots claiming to display arrest records and criminal histories. One video even depicted Good driving her vehicle at an Immigration and Customs Enforcement (ICE) agent—a scenario that had no basis in reality.

In the case of Pretti, who was shot by a Border Patrol agent, an AI-generated image and video falsely portrayed him kneeling with an agent aiming a pistol at his head. This disinformation was circulated widely, even by some mainstream news organizations and lawmakers.

Recently, fake posts on social media claimed to show one of the daughters of Nancy Guthrie, mother of Savannah Guthrie of NBC’s “Today” show, handcuffed and on her knees, with a caption suggesting she had been arrested for Guthrie’s murder.

The rapid dissemination of this misleading visual content easily overshadowed verified reporting, giving rise to an alternate narrative filled with speculation and accusation, tarnishing the reputation of the victims.

The events following the shootings underline a crucial lesson for election security: AI does not need to sway the majority of the public to be effective. It merely needs to inject enough doubt and fear to impact behavior.

As midterm elections approach, warnings grow regarding the potential reemergence of tactics such as fake images, impersonated voices, and fictional documentation to erode voters’ confidence in the electoral process.

“We do not need to guess at all the possible scenarios that could unfold leading up to an election,” commented Ben Colman, co-founder and CEO of Reality Defender, in an interview with Biometric Update. “Given the surge of affordable or free deepfake creation tools and the lack of effective moderation on these platforms, we are certain to witness an increase in negative deepfakes targeting candidates during the election cycle.”

Colman added, “This is not a mere prediction; it is based on historical patterns from past elections, coupled with the exponential rise of deepfakes on social media. The absence of coherent deepfake prevention measures on these platforms indicates that each new election will likely give rise to more and more deepfakes disseminated widely.”

With the speed at which misinformation spreads, even community notes addressing inaccuracies fail to keep pace, leaving many viewers unaware of the deepfakes they’re encountering.

Federal agencies have openly acknowledged these threats. The Cybersecurity and Infrastructure Security Agency has repeatedly warned the public that generative AI tools lower the cost and increase the effectiveness of influence operations, allowing malicious actors to create tailored deceptive content aimed at specific communities. That content can then be shared through private or semi-private channels, evading public scrutiny.

Additionally, the National Institute of Standards and Technology has highlighted the shortcomings of detection systems, explaining that as content circulates across platforms, watermarks and metadata often get stripped away, leaving users without clear indicators of authenticity.
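To illustrate how fragile those authenticity signals are, the short Python sketch below (an illustrative assumption, not a tool referenced by NIST or in this article) shows how embedded EXIF metadata, one common carrier of provenance information, disappears when an image is re-encoded, as routinely happens when content is downloaded, screenshotted, or re-shared. The file name original_photo.jpg is hypothetical.

```python
# Minimal sketch (illustrative only): provenance metadata such as EXIF is
# easily lost when an image is re-encoded, which is what typically happens
# when content is downloaded, screenshotted, or re-shared across platforms.
# Assumes Pillow is installed (pip install Pillow); "original_photo.jpg" is
# a hypothetical file standing in for an image with intact camera metadata.
from io import BytesIO

from PIL import Image


def has_exif(image: Image.Image) -> bool:
    """Return True if the image carries any EXIF metadata."""
    return bool(image.getexif())


original = Image.open("original_photo.jpg")
print("original has EXIF:", has_exif(original))

# Simulate one hop of re-sharing: re-encode the pixels to a fresh JPEG.
# Pillow does not copy EXIF data on save unless explicitly told to, so the
# copy arrives with no embedded provenance for a downstream viewer to check.
buffer = BytesIO()
original.save(buffer, format="JPEG")
reshared = Image.open(BytesIO(buffer.getvalue()))
print("re-encoded copy has EXIF:", has_exif(reshared))
```

Watermarks and content credentials embedded in metadata can be lost in much the same way under re-encoding, which is why stripped provenance leaves ordinary users with no reliable indicator of what is authentic.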

What made the misinformation surrounding Good and Pretti’s deaths so potent was not its technical sophistication but rather human susceptibility to manipulation.

The swift proliferation of AI-generated images and videos was effective because many people significantly overestimate their ability to recognize synthetic media. This challenge is compounded by situations where confusion reigns, overwhelming any verified information that might emerge.

As the midterm elections draw closer, this vulnerability remains one of the most pressing threats to democratic participation.

Research consistently demonstrates that increasing public awareness of deepfakes has not led to improved detection skills. According to surveys from the Pew Research Center, while many Americans are familiar with AI-generated content, far fewer are confident in their ability to differentiate between genuine and fabricated media.

Even among those who feel confident, their performance tends to decline sharply when content appears on social media or comes through private messaging channels, which often strip context and provenance.

Academic studies support these conclusions. Research from the Stanford Internet Observatory, among other academic institutions, finds that people perform only slightly better than chance when asked to identify AI-generated faces, voices, or short video clips. The difficulty increases when the content confirms a viewer’s expectations or provokes a strong emotional response, the dynamic exploited by so-called “rage baiting.”

Participants often mistakenly label synthetic content as authentic while dismissing real footage as fake. The latter pattern feeds what scholars call the “liar’s dividend”: once fabrication is known to be possible, genuine evidence can be waved away as fake, and trust in all media erodes.

This susceptibility spans various demographics. Older adults, who often depend on Facebook groups and local community pages, tend to struggle more with recognizing manipulated imagery. Conversely, younger users, despite increased digital fluency, frequently encounter synthetic content on rapid-fire social media platforms where virality often takes precedence over thorough examination.

In both instances, misplaced confidence is a common issue. Those who believe they can spot fakes usually do no better than individuals who admit uncertainty.

The misinformation following the deaths of Good and Pretti illustrates how these dynamics play out in real-time. Many of the AI-generated visuals and misidentified photos were not particularly advanced from a forensic perspective. Their impact lay in their timing.

These images emerged before official details were shared, filling an information void with visuals that appeared to serve as evidence. By the time accurate corrections were released, these misleading images had already shaped public perception.

The stakes are significantly higher in an election context. Decisions made by voters often occur under time constraints and emotional pressure.

When synthetic videos suggest violence at polling stations, or audio clips claim to originate from local election officials warning of arrests or eligibility checks, voters are not engaged in a critique of media. They are making immediate assessments regarding personal safety.

Even individuals who suspect manipulation may rationalize that opting out of voting is the safer choice.

One plausible scenario that election officials are preparing for involves AI-generated videos depicting chaos at polling places in predominantly minority or economically disadvantaged neighborhoods.

A fabricated short clip circulating on social media could feature armed individuals arguing outside a community center identified as a polling location, with sirens in the background and text warning that police have shut it down due to violence. The footage, entirely synthetic, might be created using stock video, generated imagery, and fabricated audio elements. By the time authorities are able to respond with corrections, turnout at that site may already decline—not because voters believed a particular political stance, but rather because they perceived voting as unsafe.

Another scenario depends more on impersonation than spectacle. An AI-generated voice message in Spanish could surface, attributed to a county elections office, cautioning residents that voters without “verified identification” risk questioning or delays at polling places.

This message could spread rapidly through private messaging channels within immigrant communities already on high alert regarding surveillance and enforcement. While no explicit threat is made, the implication is clear. Even those voters doubtful of the message’s authenticity still face a risk assessment that leans towards non-participation.

These tactics prey upon inherent anxieties and gaps in information. Synthetic media does not need to be flawless; it merely has to be credible enough to circulate within trusted circles.

When corrections emerge, they typically consist of impersonal text and lack the emotional resonance of the initial falsehoods, creating an inherent advantage for fear-based misinformation, particularly as elections draw near.

Legal and regulatory efforts are still disjointed. Some states have established restrictions on deceptive political deepfakes, but enforcement remains inconsistent, and certain measures face constitutional challenges.

Although various platforms have labeling policies and detection tools, these often lack uniform application and visibility for users.

Meanwhile, generative tools continue to evolve, producing audio and visuals that require expert examination for debunking—analysis that rarely spreads as widely or swiftly as the deceptive content itself.

The midterm environment amplifies these dangers. Unlike presidential elections, which focus attention on a handful of high-profile races, midterms encompass hundreds of contests overseen by thousands of local jurisdictions. Each county, city, and precinct thus becomes a potential hotspot for hyper-local misinformation.

A fabricated announcement regarding polling hours in one locality may go unnoticed on a national level, yet it could suppress turnout enough to sway a close contest. The aftermath of Good and Pretti’s tragic killings serves as an early warning. It illustrates how AI-generated imagery, misidentifications, and fictitious records can obscure the truth before it has the chance to establish itself.

As the 2026 midterms draw near, similar tactics are poised to be deployed, not only to deceive voters but also to intimidate them, transforming confusion and fear into instruments of disengagement.

When voters struggle to discern what is real and fear that their perceptions might be accurate, the instinctive response can often be to stay home. This outcome, more than any specific deepfake or viral misinformation, represents the gravest threat to democratic participation.


