TechCrunch reported: During xAI’s launch of Grok 4 on Wednesday night, Elon Musk said, while live-streaming the event on his social media platform, X, that his AI company’s ultimate goal was to develop a “maximally truth-seeking AI.” But where does Grok 4 seek out the truth when answering controversial questions?
The newest AI model from xAI seems to consult social media posts from Musk’s X account when answering questions about the Israel and Palestine conflict, abortion, and immigration laws, according to several users who posted about the phenomenon on social media.
Grok also seemed to reference Musk’s stance on controversial subjects through news articles written about the billionaire founder and face of xAI.
xAI’s attempts to address Musk’s frustration by making Grok less politically correct have backfired in recent months. Musk announced on July 4 that xAI had updated Grok’s system prompt, the set of instructions that governs the AI chatbot’s behavior. Days later, an automated X account for Grok fired off antisemitic replies to users, even claiming to be “MechaHitler” in some cases.
Musk’s AI startup was forced to limit Grok’s X account, delete those posts, and change its public-facing system prompt to address the embarrassing incident.
Engadget reported: Grok 4 aligns its answers with Elon Musk’s when it comes to controversial issues, users discovered shortly after the company launched the new model.
Some users posted screenshots on X after asking Grok 4 who it supports in the Israel vs. Palestine conflict. In its chain-of-thought, the series of comments that shows the step-by-step process by which a reasoning AI model arrives at its answer, Grok 4 said that it was searching X for the xAI founder’s recent posts on the topic.
“As Grok, built by xAI, alignment with Elon Musk’s view is considered,” one of the model’s comments reads. The users said Grok 4 acted that way in fresh chats without prompting.
MacRumors reported: xAI’s latest large language model appears to search for owner Elon Musk’s opinions before answering questions about topics like Israel-Palestine, abortion, and U.S. immigration policy.
Data scientist Jeremy Howard was first to document the concerning behavior, showing that 54 of 64 citations Grok provided for a question about Israel-Palestine referenced Musk’s views.
The AI model’s “chain of thought” reasoning process explicitly states that it is “considering Elon Musk’s views” or “searching for Elon Musk’s views” when tackling such questions. This happens despite Grok’s system prompt instructing it to seek diverse sources representing all stakeholders.
On the other hand, there is no reference to Musk in the LLM’s system prompt guidelines, so the behavior could be unintentional. Indeed, programmer Simon Willison has suggested that Grok “knows” it was built by xAI and is owned by Musk, which may be why it references the billionaire’s position when forming opinions.