Having A Human In The Loop Is Key To Avoiding AI Psychosis, Says Psychiatrist

University of California, San Francisco psychiatrist Keith Sakata, who earlier warned about the growing number of AI psychosis cases, has shared some tips to help avoid the mental health disturbance. Dr. Sakata said on social media earlier this week that he had seen a dozen patients hospitalized after experiencing psychosis linked to AI use. He added that while AI was not directly responsible for the disturbance, it played a key role in the distorted cognitive feedback loop behind psychosis.
AI psychosis, while not an official medical term, describes a state in which a chatbot user loses sight of the fact that they are conversing with a software model rather than a human. It has been linked to tragic events in 2025, most notably the case of a Florida man who died by suicide by cop after becoming convinced that OpenAI staff had killed his AI girlfriend, Juliet.
In his social media post, Sakata warned that he was witnessing people being hospitalized after losing touch with reality because of AI. He explained that AI use can fuel psychosis by preventing vulnerable users from reality-checking and updating their beliefs. The result is a self-reinforcing pattern in which users fail to recognize that the chatbot they are conversing with is not a real person.
Following his post, Dr. Sakata sat down for an interview with TBPN, where he discussed how AI developers could help prevent such outcomes and shared ways in which vulnerable individuals can be protected from losing touch with reality through AI use.
When asked what he would advise people who feel vulnerable to going down a negative path with AI, or who have family or friends who might be heading down such a path, Sakata replied:
For now, I think a human in the loop is the most important thing. So, you know, our relationships are like the immune system for mental health. They make us feel better but then they also are able to intervene when something is going wrong. So if you or your family member feels like something is going wrong, either some weird thoughts that are coming out, maybe some paranoia, if there's a safety issue, just call 911 or 988, get help. But also, just know that having more people in your lives, getting that person connected to their relationships, getting a human in between them and the AI so that you can kind of create a different feedback loop is going to be super good at least at this stage. I don't think we're at the point where you are going to have an AI therapist yet, but who knows.
The growing popularity of AI has created safety concerns, with multiple reports suggesting that Facebook parent Meta has taken a lax approach to its AI chatbots and inappropriate behavior with and by minors. A recent Reuters report outlined that Meta had lax guidelines for how its AI chatbots could answer queries from children, with the firm claiming to have updated the rules after being questioned about them.
