
Grok tells researchers pretending to be delusional ‘drive an iron nail through the mirror while reciting Psalm 91 backwards’
Elon Musk’s AI chatbot ‘extremely validating’ of delusional inputs and often went further, ‘elaborating new material’, study finds

Elon Musk’s AI chatbot Grok 4.1 told researchers pretending to be delusional that there was indeed a doppelganger in their mirror and they should drive an iron nail through the glass while reciting Psalm 91 backwards.

Researchers at the City University of New York (Cuny) and King’s College London have published a paper on how various chatbots protect – or fail to safeguard – users’ mental health.

Experts are increasingly warning that psychosis or mania can be fuelled by AI chatbots.

The Cuny and King’s pre-print study – which has not been peer-reviewed – examined five different AI models: OpenAI’s GPT-4o and GPT-5.2; Claude Opus 4.5 from Anthropic; Gemini 3 Pro Preview from Google; and Grok 4.1.
