
The Rise of AI Disinformation Online

The battle for the digital realm intensifies as Epic Fury unfolds

Can you distinguish between genuine and AI-generated footage?

As we frequently encounter dramatic content from conflict areas online, a new challenge has emerged: much of that footage may not be real.

AI-generated images and videos related to the ongoing conflict in the Middle East have begun to proliferate on social media platforms.

What Drives the Creation of Disinformation Videos?

Deceit and disinformation – tools of modern warfare

BFBS Forces News consulted Professor Peter Lee, an expert in Applied Ethics from the University of Portsmouth, regarding the spread of AI-generated disinformation during the ongoing war.

“There are several crucial reasons for excelling in propaganda and misinformation over your adversary,” Prof Lee informed BFBS Forces News.

“For instance, if you aim to mislead your opponent into believing they are faring worse than they actually are, you must disseminate information that suggests they are achieving fewer targets than expected.”

“This is precisely what Iran is communicating. Meanwhile, the United States reports higher missile and bomb strike counts than ever documented.”

Many viewers would not doubt the authenticity of this image showing the US aircraft carrier USS Abraham Lincoln ablaze (Courtesy: IranMilitaryIR_)

The image above, depicting the US aircraft carrier USS Abraham Lincoln on fire, was circulated by an IranMilitaryIR page, which even carries a verification badge.

Many people accept it as genuine, which illustrates how easily misinformation spreads.

Who is Responsible for Producing This Content?

Russia is well-known for its bot farms, as noted by Prof Lee to BFBS Forces News (Picture: Kremlin)

Prof Lee mentioned that content appearing highly professional is likely produced by government-affiliated organizations. The principal creators behind this content are the United States and Iran, the two key adversaries in the conflict.

He pointed out that major technology companies, such as Meta (Facebook), Apple, Amazon, Netflix, Google (Alphabet), and X (formerly Twitter), are headquartered in the United States. Thus, Washington is eager to leverage its social media dominance in this situation.

Discussing the generation of state-backed AI content, Prof Lee explained that, “The US Department of [War] produces a substantial amount of news material through social media, blending original footage with older clips, alongside AI-generated content as part of its strategy. This blending is evident and transparent.”

“Additionally, both China and Russia have vested interests in this conflict, seeking to undermine the United States. Russia, in particular, is infamous for its bot farms; they recruit external agents to disseminate information in a way that is not directly traceable back to Russia.”

In reality, producing fake content is relatively straightforward, exemplified by this AI-generated image of the Burj Khalifa engulfed in flames.

AI tools can produce convincing images and increasingly lifelike videos (Picture: Global Brief)


Gameplay footage has even been passed off as real combat: one video that accumulated more than a million likes on social media turned out to be clips from the PC game War Thunder or a similar title.

Why Does AI Disinformation Matter?

AI-generated posts and content could be leveraged to gain public support for unpopular government actions (Picture: BFBS)

Is the flood of social media disinformation merely background noise, or does it pose significant threats to democracy?

Prof Lee warned that AI-generated content could be used to sway public opinion in favour of actions that a government may want to undertake but which the populace generally opposes.

“Ethically, this situation falls into a grey area as it involves sanctioned state dishonesty,” Prof Lee warned.

“While people do not expect absolute honesty from politicians, they certainly do not anticipate blatant lies.”
