X Implements Strict Revenue Ban on Unlabeled AI War Content
Elon Musk's social media platform X has announced a stringent new policy targeting users who post AI-generated videos of armed conflicts without proper disclosure. The platform, which has roughly half a billion monthly active users, will suspend first-time offenders from its creator revenue-sharing program for 90 days. A second infraction will result in a permanent ban from the program. The policy was unveiled on Tuesday night as the Iran conflict sparked a deluge of fabricated footage online.
Flood of Fake Battle Scenes Prompts Action
Since the onset of the Iran conflict, social media timelines on X, Instagram, and Facebook have been inundated with convincing fake battle scenes. One widely circulated video depicted Iranian rockets pursuing and shooting down a United States jet; it amassed 70 million views, according to BBC Verify. Another clip used AI to replace the authentic smoke at a missile strike site with an artificially generated fireball far larger than the real one.
These deceptive videos have achieved massive reach, exploiting the platform's incentive structure: users with followings approaching 100,000 can earn hundreds of dollars a month by producing viral content. The policy shift aims to remove the financial motivation for spreading misinformation during volatile geopolitical events.
Official Statement Emphasizes Authentic Information Access
Nikita Bier, the head of product at X, emphasized the critical importance of authentic information during wartime. "During times of war, it is critical that people have access to authentic information on the ground," Bier stated. "With today's AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict – without adding a disclosure that it was made with AI – will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme."
Other examples of misinformation include an Instagram clip falsely claiming to show a massive blaze after "Iran destroyed the US airbase in Riyadh"; the footage was in fact 18 months old, from an Israeli strike on an oil refinery in Hodeidah, Yemen.
Fact-Checking Organizations Highlight AI's Role in Misinformation
Full Fact, a prominent United Kingdom fact-checking organization, has reported a significant increase in AI-driven misinformation on social media platforms. Steve Nowottny, editor at Full Fact, noted, "In the last few days we've seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei."
Nowottny further explained that even low-quality AI images with visible watermarks are being shared at scale, raising serious concerns about the volume and ease of generating and spreading fake content. Meta, the parent company of Instagram and Facebook, has been approached for comment regarding its policies on AI-generated war footage.
The new measures by X reflect a growing recognition within the tech industry of the urgent need to address the proliferation of AI-generated misinformation, particularly during sensitive global conflicts where accurate information is paramount for public understanding and safety.
