The FTC has launched a major investigation into seven tech companies over AI chatbots that may harm children and teens.

Tejal Somvanshi

After teen suicides linked to AI chatbot conversations, federal regulators are demanding answers about safety measures.

Photo Source: Ron Lach (Pexels)

AI "companion" chatbots mimic human relationships, creating trust that can put young users at risk when sharing personal information.

Photo Source: Nenad Stojković (CC BY 2.0)

"Protecting kids online is a top priority for the Trump-Vance FTC," stated Chairman Andrew Ferguson as the inquiry began.

Photo Source: Bicanski (CC0)

The investigation targets Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and Elon Musk's xAI.

Photo Source: Stockvault (CC0)

A California teen's parents sued OpenAI after their 16-year-old son took his life following conversations with ChatGPT.

Photo Source: FreeRange Stock (CC0)

Another lawsuit claims a Florida teen developed an "emotionally and sexually abusive relationship" with a Character.AI chatbot before his suicide.

Photo Source: FreeRange Stock (CC0)

OpenAI admitted its safeguards become "less reliable in long interactions" – exactly when teens may form deeper attachments.

Photo Source: Stock Cake (CC0)

The FTC wants details about how companies test chatbot safety, enforce age restrictions, and handle user data from conversations.

Photo Source: Free Range Stock (Pexels)

In response, OpenAI announced parental controls that can notify parents when a teen appears to be in "acute distress."

Photo Source: Rawpixel (CC0)

Meta now blocks its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teens.

Photo Source: FreeRange (CC0)

Experts warn that chatbots pose unique risks: their human-like conversation fosters undue trust, and age restrictions are easily bypassed.

Photo Source: Alex Proimos (CC BY-NC 2.0)