

And when some data is leaked, your id will be with it.
That won’t matter when everything becomes paywalled.
What are they supposed to do? Pull away from the UK?
Well, modern smartphones don’t even deserve a mention in the news. Guaranteed these models aren’t gonna make a difference in the experience compared to the previous version, or the one before that.
Which is why OpenAI listed relationships with real people as a competitor to ChatGPT.
Or “agents” that may or may not follow the instructions you give them.
It feels like a generation from now, doing what was common in the US during the creation of Apple and Microsoft will be considered terrorism.
Marginalia should be one of the most important things to preserve, with importance similar to Wikipedia’s.
Yeah, the best is never going to be “now”, which is always drowned in uncertainty and chaos. When you look back, everything looks safe and deterministic.
I’m just thinking now that the Mac is next.
I thought that, with these companies preaching so much about LLMs doing their coding, the cost of development would go down, no? So why do they need to reduce everything to a single codebase to make it easier for developers?
All I see is people chatting with an LLM as if it were a person. “How bad is this on a scale of 1 to 100” — you’re just doomed to get some random answer based solely on whatever context is being fed into the input, and you probably don’t even know the extent of that context.
Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.
The issue with LLMs working in human languages is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knew its own decision process. It only takes an input and generates an output; it can’t offer any “meta” explanation of why it outputted X and not Y in the previous prompt.
I just wish I’m long gone before humanity descends into complete chaos.
Or the most common cases can be automated, while the more nuanced surgeries go to the actual doctors.
They might, once it becomes too flooded with AI slop.
I like the saying that LLMs are “good” at stuff you don’t know. That’s about it.
When you know the subject it stops being very useful, because you already know the obvious stuff the LLM could help you with.
And I don’t care whether something is written by AI. As readers, what we care about is the quality of the output.
We know AI by default just creates slop, but with a human in the loop it’s possible to get inspiration for scenes, brainstorm, discuss ideas, etc.
I think a good writer would use it this way.
That’s Game Theory right there.
I wish. My mom has been like a zombie on Facebook for maybe four years now.
Algorithms optimized for engagement with no ethics were the point where the world started going downhill.
It’s the default relationship advice on Reddit.