Elon Musk’s Latest Grok Glitch Is A Reminder All Chatbots Are Biased

Elon Musk’s Grok began replying to unrelated queries with bizarre assertions about “white genocide” in South Africa. The company has not explained why.
When you set out to create a large language model-based chatbot, you begin by making a set of critical choices: you decide which information your model should ingest, how much weight the model should place on that information, and how the model should interpret it — especially when different sources say different things. You might choose to exclude certain sources of content (porn websites, for example) or give high priority to facts and sources you know to be true (like 2+2 = 4).
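To make those choices concrete, here is a minimal sketch, in Python, of the kind of curation a chatbot maker might do before training. The domain names, source categories and weights below are illustrative assumptions, not any company’s actual pipeline:

```python
# Illustrative sketch of training-data curation; the domains, categories and
# weights are made-up assumptions, not any real company's pipeline.

BLOCKED_DOMAINS = {"example-porn-site.com", "known-spam-farm.net"}

# A higher weight means documents from that kind of source are sampled more
# often during training, so the model leans on them more heavily.
SOURCE_WEIGHTS = {
    "encyclopedia": 3.0,
    "news": 1.5,
    "forum": 0.5,
}

def keep(doc: dict) -> bool:
    """Exclude documents from domains the builders decided to block."""
    return doc["domain"] not in BLOCKED_DOMAINS

def weight(doc: dict) -> float:
    """Assign a sampling weight based on the document's source category."""
    return SOURCE_WEIGHTS.get(doc["category"], 1.0)

corpus = [
    {"domain": "en.wikipedia.org", "category": "encyclopedia", "text": "2 + 2 = 4"},
    {"domain": "known-spam-farm.net", "category": "forum", "text": "spam"},
]

# Only non-blocked documents survive, each paired with its weight.
training_set = [(doc["text"], weight(doc)) for doc in corpus if keep(doc)]
print(training_set)
```

Every entry in those two dictionaries is a judgment call by the people building the system, and each one nudges the finished bot’s answers.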
All those choices, taken together, end up determining how your chatbot acts in conversation, and what views it ultimately spits out to its users. Usually this happens behind the scenes. But this week, the decisions made by chatbot makers became the subject of public debate when Elon Musk’s Grok chatbot suddenly started responding to hundreds of unrelated queries with assertions about violence against white people in South Africa. One user posted a photo and said, “I think I look cute today.” When asked by another user “@grok, is this true?” the bot replied: “The claim of white genocide in South Africa is hotly debated…”
Grok’s bizarre responses went viral after New York Times journalist and former Bellingcat director Aric Toler pointed them out. Even Sam Altman, perhaps the most prominent chatbot maker, joked about them on X. The apparent glitch (which has since been fixed) set off widespread debate about whether Musk himself, a white man from South Africa with a history of claiming the country is “racist” against white people, had introduced the bug by tweaking the bot to align more closely with his own political views.
“It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” wrote Paul Graham, the founder of legendary Silicon Valley accelerator Y Combinator, on X.
Musk has tweaked algorithms at his companies before — at X, he famously gave his own tweets a 1,000x boost over other tweets so that more people would see them. But the idea that Grok’s answers could be unbiased, authentic or somehow insulated from the editorial decisions of the people building it fundamentally misunderstands what chatbots are and how they choose what to show you.
Chatbots are made by companies, to serve those companies’ ends. Algorithms, from the ones that power chatbots to the ones that power recommendations on Google, TikTok and Instagram, are a big mishmash of preferences, coded by their creators to prioritize certain incentives. If a company’s goal is to keep you on the app, its answers will optimize for engagement. If its goal is ecommerce revenue, its answers will push you to buy stuff. Tech companies’ primary motivation is not to give you the most accurate, contextualized information possible. If that’s what you’re looking for, go to the library — or maybe try Wikipedia, which, like the library, has a mission of helping you find the accurate information you seek without a profit motive.
AI products have been politicized on both sides of the aisle: Conservatives criticized Google last year when its Gemini AI model generated images of racially diverse Nazis and other historically inaccurate figures. (The company paused the model’s ability to generate images of people and apologized for the blunder.)
Grok is a reflection of X and xAI, which exist to advance Musk’s worldview and make him money — so it’s unsurprising that the bot would say things about race in South Africa that largely align with Musk’s political opinions. The timing is notable: Just this week, President Trump reversed decades of American refugee policy and began allowing white South Africans to come to the U.S. as “refugees,” in an apparent endorsement of Musk’s view of South African politics. Grok has echoed his perspective in other ways as well: during training, the bot’s “tutors” were instructed to police it for “woke ideology” and “cancel culture.”
What is more confusing is that Grok responded to so many unrelated messages by sounding off about “white genocide.” Experts have said this likely indicates that Grok’s “system prompt,” the set of instructions added to whatever the user types in order to shape how the bot responds, was edited. xAI did not immediately respond to a request for comment.
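For context, a system prompt is typically just text that gets bundled with the user’s message before anything reaches the model. The sketch below is a generic illustration of that pattern; the prompt text, message format and function name are assumptions for the example, not xAI’s actual configuration:

```python
# Generic illustration of a chat "system prompt"; the prompt text and message
# format here are assumptions, not xAI's actual setup.

SYSTEM_PROMPT = "You are a helpful assistant. Answer the user's question directly."
# Editing this one string changes how the bot answers every query,
# which is why experts suspected the system prompt when Grok went off-script.

def build_messages(user_input: str) -> list:
    """Bundle the maker's instructions with whatever the user typed."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

print(build_messages("@grok, is this true?"))
```

Because those instructions ride along with every single query, a bad edit shows up everywhere at once — which matches how Grok behaved.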
But it doesn’t really matter whether Musk caused Grok’s South Africa bug by trying to hard-code something. As people increasingly turn to chatbots to provide information and replace research, it can be easy to forget that chatbots aren’t people; they’re products. Their creators want you to think that they’re “neutral,” that their responses are “authentic” and “unbiased” — but they’re not. Their responses are drawn from reams of data that is riddled with human opinion to begin with, and then assigned various weights by the bot’s creators based on how much they want to incorporate a given source.
Bots are most convincing when you see them as neutral and helpful, an image their creators have carefully cultivated. The facade of neutrality slips off when they do something clearly erroneous. But it’s worth remembering that they’re just computers made by humans — even long after the white genocide screeds have stopped.