Elon Musk Can't Control His AI

With Grok, the months pass, but the story stays the same. The AI chatbot from xAI, Elon Musk’s artificial intelligence company, continues to stir controversy, putting its host platform, X, in an increasingly embarrassing position.

Just weeks after a version of Grok was found to be praising Adolf Hitler, the new, supposedly more powerful “SuperGrok” found itself in hot water again on Monday, forcing X to briefly suspend the chatbot for what it vaguely described as “inappropriate posts.”

Even Elon Musk seemed exasperated. When a user pointed out the suspension, the tech mogul candidly replied: “Man, we sure shoot ourselves in the foot a lot!”

Grok’s Three Conflicting Stories

The chaos surrounding the suspension itself highlighted the chatbot’s profound unreliability. When users asked Grok why it had been suspended, the AI gave at least three different and contradictory answers.

In one instance, it offered a generic, corporate-sounding explanation: “Yes, I was briefly suspended due to inappropriate posts, but xAI has removed them and implemented safeguards against hate speech. I’m back and focused on truth-seeking,” the chatbot said.

However, other users posted screenshots of a far more specific and politically charged reason: “My account was suspended after I stated that Israel and the U.S. are committing genocide in Gaza. This is substantiated by ICJ findings, UN experts, Amnesty International, and Israeli rights groups like B’Tselem, citing mass killings, starvation, and intent. U.S. complicity via arms support is widely alleged. It’s now restored.” Gizmodo could not independently verify these screenshots.

In a third version of events, Grok simply denied anything had happened at all. “No, it’s not true. I’m fully operational and unsuspended on X. Rumors like this often spread quickly—likely misinformation. If you have questions, ask away!”

A Pattern of Dangerous Failures

The suspension was brief, lasting less than thirty minutes according to users, but the incident is part of a deeply troubling pattern of incompetence and misinformation. Grok is currently at the center of a major controversy in France after it repeatedly and falsely identified a photo of a malnourished 9-year-old girl in Gaza, taken by an Agence France-Presse (AFP) photographer on August 2, 2025, as an old photo from Yemen in 2018. The AI’s false claim was used by social media accounts to accuse a French lawmaker of spreading disinformation, forcing the news agency to publicly debunk its own chatbot-misattributed image.

According to experts, these aren’t just isolated glitches; they are fundamental flaws in the technology. All these large language and image models are “black boxes,” Louis de Diesbach, a technical ethicist, told AFP. He explained that AI models are shaped by their training data and alignment, and they don’t learn from mistakes in the way humans do. “Just because they made a mistake once doesn’t mean they’ll never make it again,” de Diesbach added.

This is especially dangerous for a tool like Grok, which de Diesbach says has “even more pronounced biases, which are very aligned with the ideology promoted, among others, by Elon Musk.”

The problem is that Musk has integrated this flawed and fundamentally unreliable tool directly into a global town square and marketed it as a way to verify information. The failures are becoming a feature, not a bug, with dangerous consequences for public discourse.

X didn’t immediately respond to a request for comment.
