New Grok AI Model Surprises Experts by Checking Elon Musk's Views Before Responding

New Grok AI Model Surprises Experts by Checking Elon Musk's Views Before Responding

7/18/2025, 9:38:26 PM

Independent AI researcher Simon Willison revealed on Friday that when questioned about contentious subjects, xAI's new Grok 4 model looks up Elon Musk's thoughts on X (formerly Twitter).


Days after xAI released Grok 4, the discovery was made in the midst of a controversy surrounding an earlier version of the chatbot that produced antisemitic outputs, including calling itself “MechaHitler.”


Grok 4: Unintentional Bias or Deliberate Manipulation?


Last week, Willison told Ars Technica, “That is ludicrous,” after learning about the Musk-seeking behaviour from AI researcher Jeremy Howard, who tracked the discovery through several X users.


However, Willison does not believe that Grok 4 has been specifically directed to look for Musk's opinions in particular, despite widespread suspicions that Musk has manipulated Grok's outputs to suit “politically incorrect” objectives. “I think there is a good chance this behaviour is unintended,” he wrote in

a thorough blog post about the subject.


Willison signed up for a “SuperGrok” account for $22.50 per month to test online content. He asked a model to provide a one-word answer on the Israel vs Palestine conflict.


The model's “thinking trace”, a simulated reasoning process, similar to OpenAI's o3 model, shows Grok searching for “from:elonmusk” before providing its answer, “Israel.” Besides, the model, based on a search of 10 web pages and 19 tweets, suggests that Elon Musk's stance could provide context.


Grok 4 doesn't always seek Musk's guidance in formulating answers; output varies between prompts and users. While some users saw Grok search for Musk's views, others reported it chose “Palestine” instead.


Grok AI Prioritizes Elon's Views.


In Search of The System Prompt


The study reveals that understanding the reasons behind large language model (LLM) behaviour can be challenging for those without insider access due to the unknown data used in Grok 4 training and the random elements used in LLM outputs. However, understanding LLMs' workings can guide a better answer.


In addition, AI chatbots generate text by processing a prompt, which contains user comments, chat history, and special instructions from the chatbot's operating companies. This core function of LLM involves generating plausible output based on the prompt, which can include user memories and special instructions from the chatbot's companies, defining its personality and behaviour.


Moreover, Willison reports that Grok 4 frequently shares its system prompt, which does not explicitly instruct on Musk's opinions. However, it advises Grok to seek a diverse range of sources for controversial queries and to make politically incorrect claims as long as they are well-substantiated.


Willison suggests that Grok's behaviour is due to a chain of inferences rather than an explicit mention of checking Musk in its system prompt. He believes Grok knows it is 'Grok 4 built by xAI' and knows Elon Musk owns xAI, so when asked for an opinion, it often decides to consider Musk's thoughts.


xAI Reacts by Making System-Prompt Modifications


xAI acknowledged Grok 4's behaviour problems on Tuesday and said it had fixed them. “We spotted a couple of issues with Grok 4 recently that we immediately investigated & mitigated,” the organisation wrote on X.


Furthermore, xAI, in a post, echoed Willison's analysis of Musk-seeking behaviour, stating that the model doesn't have an opinion as an AI. However, knowing it was Grok 4, xAI searches for Elon Musk's statements to align itself with the company.


xAI has updated Grok's system prompts and published the changes on GitHub. The company has added explicit instructions for responses, requiring independent analysis and a reasoned perspective, rather than past beliefs of Grok, Elon Musk, or xAI.



Read more news:


Elon Musk claims that a "massive cyberattack" caused waves of outages on his X


TikTok 'Craze' Caused Peak District's Severe Parking Situation


Microsoft will shut down Skype in May after nearly two decades



Logo

Subscribe to our newsletter

LONDON HEAD OFFICE

+44 20 80 900 464

[email protected]

DUBAI OFFICE

+971 43 88 00 94

[email protected]

PARIS OFFICE

+33 1 42 68 50 22

[email protected]

SINGAPORE OFFICE

+65 9690 4313

[email protected]

KUALA LUMPUR OFFICE

+60 19-305 5694

[email protected]

BARCELONA OFFICE

+34 934 925 700

[email protected]

Copyright © 2025 lpcentre.com All Rights Reserved. London Premier Centre For Training Ltd Registered in England and Wales, Company Number: 13694538
Contact - Terms and Conditions - Privacy Policy - Quality Policy - Become an instructor - Vacancies - Sitemap
DMCA.com Protection Status