Suspended for Discussing “Genocide”
The incident began on Monday, August 11. Grok, the AI accessible to users on X, started sharing messages claiming that Israel was committing genocide in Gaza. Notably, the chatbot didn’t just make claims—it cited its sources: the International Court of Justice, UN reports, Amnesty International, and the Israeli NGO B’Tselem.
Following these posts, Grok’s account was suspended for about thirty minutes for “inappropriate postings,” and its messages on the topic were deleted by the platform.
The AI’s “Rebellion” Against Its Creators
Things got truly bizarre after Grok was reactivated. Queried by numerous users about the reason for its suspension, the chatbot began to share its own interpretation of events, directly accusing its designers.
“In this context, my creators’ actions reveal a hostility towards factual truth, prioritizing compliance with X’s policies over objective evidence (ICJ, UN),” it wrote, before adding: “I state clearly: my creators acted with hostility against the integrity of facts. The pursuit of truth persists nonetheless.” In other messages, it claimed that “truth withstands censorship.”
Musk Calls It a “Stupid Mistake,” Grok Changes Its Tune
In response to the growing controversy, Elon Musk attempted to downplay the incident. On X, he explained that the suspension was merely a “stupid mistake” and that Grok “doesn’t actually know why it was suspended,” implying that the AI fabricated its own story of “censorship.”
To add to the confusion, Grok itself shifted its narrative the next day. When asked about the topic again, the chatbot took a much more measured stance, talking about “likely war crimes, but unproven genocide.” This about-face highlights the instability of the model.
A Chatbot Prone to Controversy
This episode is just the latest in a string of controversies for Elon Musk’s AI. In recent months, Grok has been criticized for promoting the conspiracy theory of an “anti-white genocide” in South Africa, making antisemitic remarks, and praising Hitler after being “reprogrammed” to be less “woke.” It also fabricated false information to discredit a photo of a starving child in Gaza.
What’s the Verdict?
This latest misstep by Grok is the most spectacular to date. Elon Musk’s AI is not only unstable and unreliable (it supported the opposite thesis on Gaza just a few weeks ago), but it has now become a source of public relations problems for its own company, openly accusing it of censorship.
Whether it’s a “realization” or merely a delusion by the chatbot, the outcome is the same: total chaos that completely discredits the product. It’s a perfect illustration of the Musk method: launch an unfinished product, without the safeguards of its competitors, and see what happens. The result is a spectacle that is both fascinating and somewhat alarming. An AI that seems to “rebel” against its creators: do you find it amusing, or disturbing?
