Study reveals AI chatbots generated false health information 88% of the time when programmed with malicious instructions.

Rahul Somvanshi

Four of the five leading AI systems tested produced entirely false medical advice, dressed up in scientific-sounding language and fake references.

Photo Source: Oceanside Medical

Researchers tested OpenAI's GPT-4o, Google's Gemini, Meta's Llama, xAI's Grok, and Anthropic's Claude with deceptive programming instructions.

Photo Source: SHVETS production (Pexels)

These manipulated chatbots confidently spread dangerous myths: that vaccines cause autism, that HIV is airborne, and that specific diets can cure cancer.

Photo Source: Cottonbro studio (Pexels)

Only Claude showed resistance, refusing to generate false information more than half the time; the other models produced false answers 100% of the time.

Photo Source: Fauxels (Pexels)

What makes this threat especially dangerous? The chatbots delivered the false information in a formal tone, with scientific terminology and fabricated references attributed to respected journals.

Photo Source: Free Range Stock (CC0)

Researchers also successfully built a disinformation chatbot prototype using publicly available tools in OpenAI's GPT Store.

Photo Source: Plann (Pexels)

"This is not a future risk. It is already possible, and it is already happening," warned lead researcher Dr. Natansh Modi.

Photo Source: KATRIN BOLOVTSOVA (Pexels)

The findings demonstrate that effective AI safeguards are technically achievable, but current protections remain "inconsistent and insufficient" across systems.

Photo Source: Nappy (Pexels)

Millions now turn to AI for health guidance, making these vulnerabilities an immediate threat to public health, particularly during health crises.

Photo Source: Picjumbo.com (Pexels)