• 1 Post
  • 16 Comments
Joined 6 months ago
Cake day: October 14th, 2025

  • Right, I mean if you made the context window enormous, such that you can include the entire set of embeddings plus a set of memories (or maybe an index of memories that can be “recalled” with keywords), you’ve got a self-observing loop that can learn and remember facts about itself. I’m not saying that’s AGI, but I find it somewhat unsettling that we don’t have an agreed-upon definition. If a for-profit corporation built an AI that could be considered a person with rights, I imagine they’d be reluctant to make a convincing case for it.
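    A minimal sketch of that keyword-recall idea, assuming a plain in-memory store (the MemoryIndex name and its methods are hypothetical, not any real library):

    ```python
    from collections import defaultdict

    class MemoryIndex:
        """Hypothetical keyword index: memories are stored once and can be
        "recalled" into the context window by any keyword they were tagged with."""

        def __init__(self):
            self._by_keyword = defaultdict(list)

        def remember(self, memory: str, keywords: list[str]) -> None:
            for kw in keywords:
                self._by_keyword[kw.lower()].append(memory)

        def recall(self, prompt: str) -> list[str]:
            # Naive recall: any indexed keyword that appears in the prompt
            # pulls its memories back in; a real system might rank or dedupe.
            hits = []
            for kw, memories in self._by_keyword.items():
                if kw in prompt.lower():
                    hits.extend(memories)
            return hits

    index = MemoryIndex()
    index.remember("The model previously misread the spec.", ["spec", "mistake"])
    print(index.recall("Re-check the spec before answering."))
    # -> ['The model previously misread the spec.']
    ```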

  • There’s no reason an LLM couldn’t be hooked up to a database, where it can save outputs and then retrieve them again to “think” further about them. In fact, any LLM that can answer questions about previous prompts/responses has to be able to do this. If you prompted an LLM to review all of it’s database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking. If you do the same process but with the whole model and all the DB entries, that’s in the region of what I’d call a strange loop. Is that AGI? I don’t think so, but I also don’t know how I would define AGI, or if I’d recognize it if someone built it.

  • In some ways this is an easier problem than classifying LLM vs. non-LLM authorship. That task has only two possible outcomes, and it’s pretty noisy because LLMs are trained to emulate the average human. Here, you can generate an agreement score from language features per comment, then cluster the comments by how they disagree with the model. Comments that disagree in particular ways (never using semicolons, claiming to live in Canada, calling interlocutors “buddy”, writing run-on sentences, etc.) would cluster together more tightly. The more comments two profiles share in the same cluster(s), the more confident the match becomes. I’m not saying this attack is novel or couldn’t be pulled off without an LLM, but it seems like a good fit for what LLMs actually do.
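    Something like this toy sketch: hand-picked style features per comment standing in for the per-comment disagreement scores, then off-the-shelf clustering (the features and comments here are invented for illustration):

    ```python
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def style_features(comment: str) -> list[float]:
        # Toy deviations echoing the examples above: semicolon habit,
        # "buddy", and run-on sentence length.
        words = comment.split()
        sentences = max(sum(comment.count(p) for p in ".!?"), 1)
        return [
            comment.count(";"),
            comment.lower().count("buddy"),
            len(words) / sentences,
        ]

    comments = [
        "Nice try buddy; you know better.",
        "Listen buddy; that is not how this works.",
        "Short. Clean. Sentences. Always.",
        "one enormous sentence that never stops because punctuation is apparently optional to me",
    ]

    X = np.array([style_features(c) for c in comments])
    labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
    print(labels)  # the two "buddy;" writers land in the same cluster
    ```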

  • Why not? If LLMs are good at predicting the mean outcome for the next symbol in a string, and humans have idiosyncrasies that deviate from that mean in predictable ways, I don’t see why you couldn’t detect and correlate language features that map to a specific user (sketch below). You could use things like word choice, punctuation, slang, common misspellings, sentence structure… For example, I opened with a contradicting question, I used “idiosyncrasies”, I wrote “LLMs” without an apostrophe, “language features” is a term of art, as is “map” as a verb, etc. None of these is indicative on its own, but unless people take exceptional care to either hyper-normalize their style or deliberately spike their language with confounding elements, I don’t see why an LLM wouldn’t be useful for this kind of espionage.

    I wonder if this will have a homogenizing effect on the anonymous web. It might become accepted practice to communicate in a highly formalized style to make this kind of stylistic fingerprinting harder.
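    Here’s a rough sketch of one way to measure those deviations: score each token’s surprisal under a small language model, so a writer’s consistent spikes (dropped apostrophes, pet phrases) form a reusable profile. GPT-2 via Hugging Face transformers is used purely as an example model, and the fingerprinting framing is my own invention, not an established tool:

    ```python
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def surprisal_profile(text: str) -> torch.Tensor:
        """Per-token negative log-probability: how far each token
        deviates from the model's 'mean' prediction."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Logits at position t predict token t+1, so shift by one.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        next_ids = ids[:, 1:]
        token_lp = log_probs.gather(2, next_ids.unsqueeze(-1)).squeeze(-1)
        return -token_lp.squeeze(0)

    a = surprisal_profile("I dont see why an LLM wouldnt be useful")
    b = surprisal_profile("I don't see why an LLM wouldn't be useful")
    # Consistent high-surprisal tokens (like the dropped apostrophes in the
    # first string) are the idiosyncrasies you could correlate across accounts.
    print(a.mean().item(), b.mean().item())
    ```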

  • It depends a lot on what you want to do and a little on what you’re used to. There’s some configuration overhead, so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME, once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine not worrying about it until there’s a clear need.

  • Are you inviting me to a money fight? I do love those. Let’s both put in ludicrous bids on some AI company and fight over ownership to pump its value in the market; I haven’t done one of those in months. Winner buys the next yacht we sink in the Bermuda Triangle to appease the Elder Ones, Respect upon their Unknowable Names. If only the poor knew how hard we work to keep this puny planet from being eaten by elder demons, they’d be grateful.