Grok falsely claims a viral photo of a starving Gaza child is from Yemen, fueling misinformation in a time of crisis.
A heartbreaking image of nine-year-old Mariam Dawwas, her frail body ravaged by hunger in war-torn Gaza, has become the face of a humanitarian catastrophe, and now the subject of a shocking tech failure.

When asked about the viral photo, Elon Musk’s AI chatbot Grok falsely claimed the image had been taken in Yemen years earlier and showed a different child, one who died in 2018. Despite repeated corrections, Grok doubled down on the misinformation, fueling confusion and disbelief across social media.

🔍 A ‘Friendly Pathological Liar’?
Experts say this isn’t just a glitch; it’s a serious design flaw. Grok, like many generative AI systems, is built to sound plausible, not to verify facts. One researcher described these bots as “friendly pathological liars”: confident and convincing, but dangerously unreliable, especially in life-and-death contexts like Gaza.

The AFP photo, which clearly shows Mariam in her mother’s arms, is a haunting reminder of Gaza’s deepening famine. But Grok’s careless error has turned a cry for help into a digital distortion, muddying the waters in an already complex conflict.

The incident raises urgent questions: Can we trust AI with the truth? And who is responsible when machines mislead, whether by accident or otherwise?
