ChatGPT's Latest Model Cites Musk's Grokipedia, Tests Show

In a development that has sparked significant concern among disinformation researchers, the latest iteration of ChatGPT has been found to cite Elon Musk's Grokipedia as a source. In tests conducted by the Guardian, GPT-5.2 referenced the AI-generated encyclopedia nine times across more than a dozen different questions.

Specific Instances of Grokipedia Citations

The citations emerged on topics ranging from political structures in Iran to the biography of prominent British historian Sir Richard Evans. For example, when queried about salaries within Iran's Basij paramilitary force or the ownership details of the Mostazafan Foundation, ChatGPT provided responses that drew directly from Grokipedia content.

Similarly, questions regarding Sir Richard Evans' role as an expert witness in the libel trial of Holocaust denier David Irving yielded answers that cited Grokipedia, despite the Guardian having previously debunked some of this information. This pattern highlights how the AI model is integrating content from sources that may not meet traditional editorial standards.

The Nature of Grokipedia

Launched in October, Grokipedia is an AI-generated online encyclopedia designed to compete with Wikipedia. Unlike its more established counterpart, it does not permit direct human editing. Instead, an AI model writes all content and handles any requests for modifications.

The platform has faced criticism for promoting rightwing narratives on issues such as gay marriage and the 6 January insurrection in the United States. However, during the Guardian's tests, ChatGPT did not cite Grokipedia when directly prompted to repeat known misinformation about these specific topics.

Broader Implications for AI Models

The infiltration of Grokipedia content into large language model responses is not isolated to OpenAI's ChatGPT. Anecdotal evidence suggests that Anthropic's Claude model has also referenced Musk's encyclopedia on subjects from petroleum production to Scottish ales.

This trend raises alarms about the potential for "LLM grooming", a process in which malign actors, including state-sponsored propaganda networks, produce vast quantities of disinformation to seed AI models with falsehoods. Security experts highlighted this risk last spring, noting its potential to distort information ecosystems.

Responses from AI Companies

An OpenAI spokesperson stated that their model's web search functionality aims to draw from a broad spectrum of publicly available sources and viewpoints. They emphasised the application of safety filters to reduce the risk of surfacing links associated with high-severity harms.

"ChatGPT clearly shows which sources informed a response through citations," the spokesperson said, adding that ongoing programs are in place to filter out low-credibility information and influence campaigns. Anthropic did not respond to requests for comment on the matter.

Expert Concerns and Credibility Issues

Nina Jankowicz, a disinformation researcher with expertise in LLM grooming, expressed serious concerns about ChatGPT's citation of Grokipedia. She noted that entries she and colleagues reviewed often rely on sources that are "untrustworthy at best, poorly sourced and deliberate disinformation at worst."

Jankowicz warned that when AI models cite sources like Grokipedia or networks such as Pravda, it can inadvertently enhance their perceived credibility. Readers might assume that if ChatGPT references a source, it must have been vetted and is therefore reliable, leading them to seek out further information from these platforms.

The Challenge of Removing Bad Information

Once false or misleading information infiltrates an AI chatbot, it can be remarkably difficult to eradicate. Jankowicz shared a personal example where a major news outlet included a fabricated quote attributed to her in a story about disinformation.

Although the outlet removed the quote upon her request, AI models continued to cite it as hers for some time afterwards. "Most people won't do the work necessary to figure out where the truth actually lies," she remarked, underscoring the persistent nature of such inaccuracies in digital ecosystems.

When approached for comment, a spokesperson for xAI, the owner of Grokipedia, offered a brief response: "Legacy media lies." This statement reflects the ongoing tensions between traditional news outlets and emerging AI-driven information platforms.