In the chaotic aftermath of an announced US military strike on Venezuela, a flood of convincing but entirely fabricated AI-generated images has swept across social media, misleading millions and demonstrating a dangerous new frontier in digital misinformation.
The Viral Onslaught of AI Fabrications
Shortly after US President Donald Trump declared a "large-scale strike" against Venezuela in the early hours of Saturday, deceptive content began to proliferate online. Fabricated AI images depicted Venezuelan leader Nicolás Maduro being escorted by US Drug Enforcement Administration (DEA) agents, jubilant crowds flooding Caracas streets, and missiles striking the capital. This fabricated material was deliberately mixed with authentic footage of US aircraft over Caracas and explosions, creating a potent and confusing blend of fact and fiction.
The fact-checking organisation NewsGuard reported that these AI-manipulated photos have been seen and shared millions of times on platforms including X, Instagram, Facebook, and TikTok. By the time Trump posted a verified image of a blindfolded and handcuffed Maduro on the USS Iwo Jima, the earlier fake DEA escort photos had already achieved viral status.
Public Figures and the Spread of False Content
The reach of this disinformation was amplified when public figures shared the false content. Vince Lago, the Mayor of Coral Gables in Florida, posted the AI-generated image of Maduro with DEA agents to his Instagram, accusing the Venezuelan president of leading a "narco-terrorist organisation." His post garnered over 1,500 likes and remained live.
Far-right influencers also contributed to the chaos. Laura Loomer posted footage from 2024 of a Maduro poster being torn down, falsely presenting it as real-time celebrations. Conspiracy theorist Alex Jones shared an aerial video of massive crowds in Caracas, asserting it showed millions celebrating Maduro's ouster. That video, which remains online, has accrued over 2.2 million views, even though X's Community Notes and its AI chatbot, Grok, clarified that the footage is at least 18 months old and actually shows protests from July 2024.
The Mounting Challenge for Fact-Checkers
Sofia Rubinson, a senior editor at NewsGuard, explained the particular difficulty this event posed. She noted that many of the AI-generated visuals did not wildly distort reality but instead plausibly filled gaps in real-time reporting, making them exceptionally hard to debunk quickly. "The visuals often approximate reality," Rubinson stated, highlighting this as a formidable new tactic in misinformation campaigns.
NewsGuard identified seven key pieces of misleading content—five fabricated or out-of-context photos and two videos—relating to the Venezuela operation. These seven items alone amassed more than 14 million views on X. One example was an AI-generated photo of a soldier posing next to a hooded Maduro; another was a repurposed video of a US special forces helicopter landing at Fort Bragg in North Carolina, falsely presented as action in Venezuela.
While tools such as reverse image searches and AI-detection sites exist, their effectiveness is inconsistent. Major social media platforms, including Meta, X, and TikTok, declined to comment when approached, underscoring the ongoing regulatory and technological struggle to contain AI-powered disinformation.