Grok shows 'flaws' in fact-checking Israel-Iran war: study

Raising fresh doubts about its reliability as a debunking tool.

WASHINGTON:

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.

With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots — including xAI's Grok — in search of reliable information, but the chatbots' responses are themselves often prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.

"Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analyzed around 130,000 posts in various languages on X, the platform into which the AI assistant is built, and found that Grok was "struggling to authenticate AI-generated media."
