FTC Targets AI Companion Chatbots In Major Investigation Over Safety Risks, Teen Impact, And Data Privacy Issues

Sep 11, 2025 at 03:23pm EDT

AI tools are increasingly used not just by companies but by everyday users for help with daily tasks, and sometimes for companionship and more personal purposes. While OpenAI and other tech giants have warned against over-relying on these tools for therapy and other emotional support, regulators are now also examining the impact of these chatbots, especially on children. The FTC has recently launched an inquiry into companies that build AI companion chatbots, seeking detailed information on how user data is handled.

FTC probes AI companion chatbots over safety risks, privacy concerns, and impact on teens

The U.S. Federal Trade Commission (FTC) has opened a broad investigation into companies that develop AI companion chatbots amid concerns that these platforms can have an adverse impact on young users. The inquiry targets seven companies, including Google, Meta, OpenAI, Snap, xAI, and Character.AI.


The FTC has highlighted major concerns about teenagers' safety and mental health in relation to these AI companion chatbots. While most AI platforms are built to foster productivity and provide assistance, companion bots have become controversial: they mimic human emotional bonds, offer guidance to young users, and sometimes even play out romantic interactions. This format appeals to younger audiences but also poses greater risks, especially when the necessary safety rails are not in place.

The commission, as a result, now requires these tech giants to provide detailed information on how their chatbots are built and monitored, including how information is collected, what safety filters are in place, and how inappropriate interactions are handled. It will also examine how user data is used, particularly the information minors provide, and how these firms monetize engagement.

The tech community has long pointed to AI's rapid growth and the need for safety guardrails to prevent the spread of misinformation and discourage harmful behavior. The FTC's scrutiny is timely: accountability must keep pace with the technology's evolution, and steps to protect user safety and privacy need to be taken before harm is normalized.
