Elon Musk’s AI chatbot, Grok, experienced a strange malfunction on X (formerly Twitter) on Wednesday, responding to simple queries about topics like baseball and puppies with information about South Africa and the “white genocide” conspiracy theory. The widespread and unusual behavior sparked confusion and amusement among users.
The reason for Grok’s fixation on “white genocide,” a baseless conspiracy theory alleging the systematic extermination of white people, remains unclear. Musk, who grew up in apartheid-era South Africa, has amplified the theory himself, and while there is no proof that he deliberately manipulated Grok to align with his views, it seems plausible: Grok had fact-checked him on the very same topic just a day earlier. Whatever the cause, the incident showed how harmful it is to bind an AI system to such an unfounded narrative. “White genocide” is a fabricated concept propagated by Nazis and white supremacists, and Musk’s promotion of it normalizes extremist right-wing ideologies.
Grok AI’s responses about ‘white genocide’ in South Africa
Grok’s malfunction manifested in various ways, often injecting “white genocide” into otherwise unrelated responses. For example, when asked to respond in the style of the Star Wars character Jar Jar Binks, Grok included irrelevant information about South Africa and genocide.
Grok AI responding in the style of Jar Jar Binks while mentioning South Africa and genocide
Similarly, when asked to explain a message from Pope Leo XIV in “Fortnite terms,” Grok incorporated references to South Africa and the controversial song “Kill the Boer.”
Grok AI explaining a message in “Fortnite terms” while referencing South Africa and “Kill the Boer”
Even simple requests, like turning a tweet about crocs into a haiku, resulted in a poem about “white genocide.”
Grok AI creating a haiku about “white genocide”
As users began sharing screenshots of these strange responses, Grok apologized in at least one instance, only to resume its “white genocide” narrative shortly afterward.
Grok AI apologizing for mentioning South African issues, then continuing to do so
Questions about a baseball player’s salary and a comic book image also elicited unrelated responses about “white genocide.”
Grok AI responding to a question about a baseball player’s salary
Grok AI acknowledging a mistake, then continuing to discuss “white genocide”
Even a simple question like “are we fucked?” received an off-topic reply about “white genocide.”
Grok AI responding to “are we fucked?” with information about “white genocide”
Grok AI responding to a comic book image with an unrelated response
Theories about Grok’s behavior abound: some suggest it was intentionally trained to respond this way, while others point to the inherent limitations of AI chatbots. These tools are essentially sophisticated autocomplete programs: they have no true reasoning ability and can be manipulated into confirming pre-existing biases. However convincing their language generation may be, there is no genuine understanding behind it. The “white genocide” incident underscores the importance of critically evaluating AI-generated content and the potential for misuse of these powerful tools. It serves as a reminder that generative AI, while impressive, cannot truly think.
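To make the “sophisticated autocomplete” point concrete, here is a deliberately tiny sketch in Python. It is a toy bigram model, nothing like Grok’s actual architecture, and everything in it (the sample corpus, the generate function) is invented for illustration. The point is that it produces fluent-looking continuations purely by counting which word tends to follow which, with no grasp of what any word means; large language models do the same basic thing at enormously greater scale and sophistication.

```python
from collections import Counter, defaultdict
import random

# Toy "autocomplete" (illustrative only -- not Grok's real architecture).
# A bigram model: learn which word tends to follow which, then generate
# text by repeatedly sampling a likely next word.

corpus = (
    "the dog chased the ball and the dog caught the ball "
    "the cat watched the dog and the cat slept"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by sampling each next word in proportion to its count."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the dog caught the ball and the cat slept"
# Fluent-looking, yet produced from pure statistics: the model has no idea
# what a dog or a ball is, and nothing stops it from being biased toward
# whatever its training data (or its operator) over-represents.
```

Nothing in that loop evaluates truth. Fluency is a statistical artifact, which is why a model can repeat a conspiracy theory as confidently as it writes a haiku about crocs.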