• 0 Posts
  • 4 Comments
Joined 7 months ago
Cake day: September 8th, 2025

  • postscarce@lemmy.dbzer0.com to Technology@lemmy.world · AI 2027
    3 points · edited 5 days ago

    What’s interesting, to me, is that’s exactly how people hedge in the fringe UFO community too.

    Ha! True. Very true. I find this scenario compelling, but it’s based on a series of assumptions which individually seem plausible, and I have no way to evaluate them all together. It’s like the Drake Equation: because the probabilities are multiplicative, even tiny adjustments to a few of them make a huge difference to the final answer.
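    The Drake Equation point is easy to see with a quick sketch. The factor values below are common illustrative placeholders, not measurements; the point is only how halving a few of them compounds.

```python
import math

# The Drake Equation multiplies independent estimates together, so
# modest changes to a few factors compound into a large change in the
# final answer. Illustrative placeholder values only.
factors = {
    "R*": 1.5,   # star formation rate (stars/year)
    "fp": 0.9,   # fraction of stars with planets
    "ne": 0.45,  # habitable planets per such star
    "fl": 0.5,   # fraction where life arises
    "fi": 0.5,   # fraction developing intelligence
    "fc": 0.5,   # fraction developing detectable technology
    "L": 1000,   # years a civilization remains detectable
}

def n_civilizations(f):
    # N is simply the product of all seven factors.
    return math.prod(f.values())

base = n_civilizations(factors)

# Halve just three of the seven factors: the result drops 2**3 = 8-fold.
pessimistic = dict(factors, fl=0.25, fi=0.25, fc=0.25)
low = n_civilizations(pessimistic)

print(base, low, base / low)
```

    The same compounding works in the optimistic direction, which is why two reasonable people can plug in individually defensible numbers and land orders of magnitude apart.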

    The thing is, though: if there really is even a tiny chance of the ultimate outcome of this thought experiment being true (i.e. the end of humanity), then we should probably address it. What that would look like is stopping the AI companies from doing any more research until they can prove their models are safe, which should also make people who are more concerned about AI slop happy. Everybody wins by hitting the brakes. (Edit: well, Sam Altman doesn’t, but I’m not going to lose sleep over that.)




  • LLMs could theoretically give a game a lot more flexibility by responding dynamically to player actions, generating custom dialogue, and so on, but, as you say, it would work best as a module within an existing framework.

    I bet some of the big game dev companies are already experimenting with this, and in a few years (maybe a decade, considering how long it takes to develop a AAA title these days) we will see RPGs with NPCs you can actually chat with, which remain in-character and respond to what you do. Of course, that would probably mean API calls to the publisher’s server where the custom models are run, with all of the downsides that entails.
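    A minimal sketch of what such an NPC module might look like, under assumptions of mine: `call_model` is a stand-in for whatever endpoint a publisher would actually expose, stubbed here with canned lines so the example runs offline, and the character name and prompt are invented for illustration. The key idea is that a fixed system prompt pins the persona while the accumulated message history keeps replies consistent across turns.

```python
# Hypothetical in-character NPC driven by a language model.
# `call_model` is a stub standing in for a real API call to the
# publisher's server; the persona and dialogue are invented examples.

SYSTEM_PROMPT = (
    "You are Mirela, a superstitious village blacksmith. Stay in character. "
    "You know nothing about the world outside the valley."
)

def call_model(messages):
    """Stand-in for the model endpoint. A real game would send `messages`
    (system prompt plus the running dialogue) over the network."""
    canned = {
        "Any work for me?": "Aye, the forge wants coal. Fetch a sack and I'll pay fair.",
    }
    last = messages[-1]["content"]
    # Fall back to a generic in-character deflection for unknown input.
    return canned.get(last, "Hm. Strange talk, traveler. Best keep it to yourself.")

class NPC:
    def __init__(self, system_prompt):
        # The system prompt anchors the persona for every later turn.
        self.messages = [{"role": "system", "content": system_prompt}]

    def say(self, player_line):
        # Append the player's line, get a reply, and record it so the
        # conversation history grows turn by turn.
        self.messages.append({"role": "user", "content": player_line})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

mirela = NPC(SYSTEM_PROMPT)
print(mirela.say("Any work for me?"))
```

    The growing history is also where the downsides live: every turn of every NPC conversation has to round-trip to the publisher’s servers, with latency, cost, and an online requirement attached.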