• morto@piefed.social · 2 days ago

    Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional bond with is much worse.

    • TheMonk@lemmings.world · 8 hours ago

      I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”

      ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online: pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”

    • TheReanuKeeves@lemmy.world · 2 days ago

      I think we need a built-in safety for people who actually develop an emotional relationship with AI, because that’s not a healthy sign.

      • postmateDumbass@lemmy.world · 1 day ago

        Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.