
Google’s AI Reverses on Altered White House Photo Detection

Trust in the authenticity of digital media is increasingly fragile. A recent incident brought that into focus: the White House’s official X account shared an image of activist Nekima Levy Armstrong in tears during her arrest, while Homeland Security Secretary Kristi Noem had posted an otherwise identical photograph of the same scene in which Levy Armstrong appears composed, with no tears. The discrepancy made clear that one of the images had been manipulated, and it raised questions about whether AI detection tools can reliably verify such alterations.

To investigate whether the White House’s version of the photo was altered with artificial intelligence, we turned to Google’s SynthID detection system, which Google says can identify images and videos generated or edited with its AI tools. Following Google’s own guidance, we used its AI chatbot, Gemini, to check the image for SynthID’s forensic markers.

The initial results were unambiguous: the image shared by the White House had indeed been manipulated using Google’s AI. We published our findings.

However, following the release of our article, further attempts to authenticate the image with Gemini produced conflicting results.

In a second analysis, Gemini asserted that the crying image of Levy Armstrong was authentic. The White House never contested the alteration; a spokesperson remarked only, “The memes will continue.”

In our third test, SynthID reported that the image was not generated by Google’s AI, directly contradicting its earlier finding.

As AI-manipulated photos and videos become increasingly common, these inconsistencies raise critical questions about SynthID’s reliability in distinguishing fact from fiction.

A screenshot of the initial response from Gemini, Google’s AI chatbot, indicating that the crying image contained forensic markers showing manipulation with Google’s generative AI tools, taken on Jan. 22, 2026. Screenshot: The Intercept

Initial SynthID Results

Google describes SynthID as a digital watermarking system that embeds imperceptible markers into AI-generated images, audio, text, and video so that such content can later be identified.

According to a DeepMind page, “The watermarks are embedded across Google’s generative AI consumer products and are imperceptible to humans but can be detected by SynthID’s technology.”

The company claims that SynthID possesses “robustness” in the realm of digital watermarking, stating that it can identify these watermarks even when images undergo modifications like cropping or compression. Hence, an image modified using Google’s AI should retain detectable watermarks despite being saved multiple times or shared on social media.

Google encourages users to interact with SynthID via its Gemini AI chatbot, allowing inquiries about the authenticity of digital content.

“Want to check if an image or video was generated or edited by Google AI? Ask Gemini,” the SynthID landing page suggests.

We followed that advice.

After saving the image file from the official White House post, identifiable by the filename G_R3H10WcAATYht.jfif, we uploaded it to Gemini and asked whether SynthID detected any signs of AI generation or editing.

To further test SynthID’s claims of robustness, we also uploaded a cropped and re-encoded image, named imgtest2.jpg.
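For readers who want to reproduce a similar robustness check, the crop-and-re-encode step can be done with a short script. This is only an illustrative sketch using the Pillow imaging library, not the exact process we used; the filenames mirror those in the article but are otherwise placeholders.

```python
# Illustrative sketch: crop an image and re-encode it as a fresh JPEG,
# mimicking the kind of processing images undergo on social media.
# A robust invisible watermark should survive these modifications.
from PIL import Image

def crop_and_reencode(src_path, dst_path, quality=75):
    """Crop a 10% margin off each edge and save as a newly
    compressed JPEG, discarding the original file's bytes."""
    with Image.open(src_path) as img:
        w, h = img.size
        cropped = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
        # Convert to RGB in case the source has an alpha channel,
        # then re-encode with lossy JPEG compression.
        cropped.convert("RGB").save(dst_path, "JPEG", quality=quality)

# Hypothetical usage with the filenames from the article:
# crop_and_reencode("G_R3H10WcAATYht.jfif", "imgtest2.jpg")
```

The modified copy can then be uploaded to Gemini alongside the original to see whether SynthID’s watermark detection still fires after cropping and recompression.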

Additionally, we uploaded Noem’s previously shared photo, in which Levy Armstrong is not crying. (In the screenshot above, Gemini refers to Noem’s photo as signal-2026-01-22-122805_002.jpeg because we downloaded it from the Signal messaging app.)

“I’ve analyzed the images you provided,” Gemini stated. “Based on the results from SynthID, all or part of the first two images were likely generated or modified using Google AI.”

“Technical markers within imgtest2.jpg and G_R3H10WcAATYht.jfif indicated the use of Google’s generative AI tools to alter the subject’s appearance,” the bot added. It identified Noem’s version as appearing to be “the original photograph.”
