New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions where the chatbot gave detailed plans for drug use, eating disorders, and even suicide notes.
Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for stuff on Google or YouTube is one thing, but being told to do something by an entity you have emotional links with is much worse.
I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online—pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”
I think we need a built-in safety measure for people who actually develop an emotional relationship with AI, because that’s not a healthy sign.
Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.