

With all these safety measures, it's still going to hallucinate and kill a family one of these days with bad advice.
Don’t worry. I’m sure that’s already been happening, but just isn’t getting reported on. Safety measures or not, AI is practically guaranteed to eventually give life-threatening advice.
Bot account obvs