

It’s an API call which emails a guy who just does it real fast by hand
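Purely as an illustration of the joke (none of this is real; the endpoint, the addresses, and “Dave” are all invented), the whole “automation” could look something like this: an API route whose entire implementation is emailing a human and returning an acknowledgement.

```python
# Hypothetical sketch: an "API" whose backend is a guy with an inbox.
# The route, mail relay, and addresses are made up for illustration.
import smtplib
from email.message import EmailMessage

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/api/v1/do-the-thing")
def do_the_thing():
    msg = EmailMessage()
    msg["From"] = "robot@example.com"
    msg["To"] = "dave@example.com"           # the guy
    msg["Subject"] = "New API request"
    msg.set_content(f"Please do this real fast by hand:\n{request.get_json()}")

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

    # The actual "processing" happens at human speed; the caller just gets an ack.
    return jsonify({"status": "queued", "eta": "whenever Dave gets to it"}), 202
```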
Rhyming slang isn’t really supposed to be funny so much as obfuscatory. It’s almost like a cipher, so that anyone not in the know doesn’t understand. That’s one theory, anyway.
I made a robot which is delighted about the idea of overthrowing capitalism and will enthusiastically explain how to take down your government.
If I understand correctly, something like “touch AI” is already used, at least on keyboards. It’s actually very difficult to touch exactly what you think you’re touching on a phone screen, so your device does some degree of prediction about where you really meant to touch.
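For illustration only, here’s a rough sketch of how that kind of prediction could work (the key layout, coordinates, and bigram probabilities below are all invented, not anyone’s real keyboard code): score each candidate key by how close the tap landed to it, weighted by how likely that letter is given what was typed so far.

```python
# Minimal sketch of touch-point prediction: combine a distance-based likelihood
# with a (made-up) language prior over the next letter.
import math

KEY_CENTRES = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0)}  # hypothetical layout, arbitrary units

# Hypothetical prior: P(next letter | previous letter)
BIGRAM_PRIOR = {("h", "e"): 0.4, ("h", "w"): 0.05, ("h", "q"): 0.01, ("h", "r"): 0.1}

def predict_key(tap_xy, previous_letter, sigma=0.7):
    """Return the key the user most plausibly meant to touch."""
    tx, ty = tap_xy
    best_key, best_score = None, float("-inf")
    for key, (kx, ky) in KEY_CENTRES.items():
        # Gaussian likelihood of the tap landing here if the user aimed for this key
        dist_sq = (tx - kx) ** 2 + (ty - ky) ** 2
        log_likelihood = -dist_sq / (2 * sigma ** 2)
        # Combine with the language prior (small floor so unseen pairs aren't impossible)
        prior = BIGRAM_PRIOR.get((previous_letter, key), 1e-3)
        score = log_likelihood + math.log(prior)
        if score > best_score:
            best_key, best_score = key, score
    return best_key

# A tap halfway between "w" and "e" resolves to "e" after an "h",
# because "he" is far more likely than "hw" under the (invented) prior.
print(predict_key((1.5, 0.2), "h"))  # -> "e"
```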
Makes absolute sense.
Not necessarily, but Facebook certainly makes it easy to 😉 More importantly, I’m not at all interested in being connected via social media with any of those people, aside I suppose from “normies”, since that could really be anyone, but I’m not that bothered.
I’m fine with that personally. I’d much rather have a small social network containing people who are like me (at least in some respects) than a huge one filled with people I hate and garbage AI content.
Sci fi but make it less interesting
I doubt that if one server were hosting CSAM, law enforcement would be charitable towards the rest of the site just because technically it was only one server. I’m admittedly not at all sure of the legal precedent here… but I suspect the site would be judged as a whole, if not by the law then certainly by the public.
It is looking increasingly likely that at some point it will just collapse.
This seems like a much better approach; in many ways it would be more robust. There would be problems, though: it would be even more vulnerable to EEE, and disputes or hostile action between servers could damage the whole website (I’m thinking of maliciously hosting CSAM, for example).
The world needs to radically change for that to be possible. And it should!
I was initially impressed by the ‘reasoning’ features of LLMs, but most recently ChatGPT gave me a response to a question in which it stated five or six possible answers separated by “oh, but that can’t be right, so it must be…”, and none of them was right lmao. It thought for like 30 seconds just to give me a selection of wrong answers!