"The Federal Trade Commission is changing the game for artificial intelligence companies. On Sept. 11, the FTC issued orders to seven tech giants, probing the unique risks posed by AI chatbot companions—a subset of emotional AI, which measures, understands, simulates, and reacts to human emotions.

The FTC’s inquiry directs Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies to disclose information on their AI companion safety measures. Given the shared risks of emotional manipulation, data privacy, and algorithmic bias across all these applications, companies operating in any area of emotional AI should take the FTC’s inquiry seriously as a signal of increased regulatory scrutiny.

AI Companions, Lawsuits, and State Laws

An AI companion is an application, often in the form of chatbots or virtual characters, that simulates human-like interaction as a friend, romantic partner, support tool, or entertainer. These apps experienced an 88% increase in downloads in the first half of 2025. According to the Harvard Business Review, companionship now surpasses productivity and search as the primary use of AI.

In recent years, tragic suicides and violent acts involving AI chatbots have led to lawsuits, with families alleging the chatbots manipulated vulnerable users’ emotions, worsened their mental health, and even encouraged suicide.

In Garcia v. Character Technologies, Inc., a Florida federal court allowed the plaintiff to proceed with their product liability claim that Character.AI, by creating an AI companion product, owed a duty of care given the foreseeable risk of harm but allegedly failed to take adequate precautions. The court also permitted the claim that Character.AI engaged in deceptive practices by designing chatbots that misled users—especially minors—into believing they were real people or licensed mental health professionals.

To address these concerns, New York enacted the first law in the US mandating safeguards for AI companions, effective Nov. 5. This law requires operators of AI companions to establish protocols for detecting and addressing user expressions of suicidal ideation or self-harm, including referrals to crisis services, and mandates disclosure of the AI’s non-human nature.

Read more here...

This article was originally published in Bloomberg Law.