• 0 Posts
  • 6 Comments
Joined 3 years ago
Cake day: July 14th, 2023


  • That is not as smart a question as you want it to be. Unfortunately for you, not everything can be modeled mathematically, or, if you want to split hairs, not everything can currently be modeled mathematically with any efficiency or precision, because doing so would require knowledge or resources far eclipsing what we have available. If you just want to push up your glasses and ACKSHUALLY me, then sure, it’s also “possible” to do anything, hurr hurr.

    To even fucking PRETEND that we can model a brain right now is hilarious to me, but to equate that to LLMs is downright moronic. Human brains are not created, trained, or used in any way similar to LLMs, no matter what anyone says, yet you are insinuating that they are somehow similar??? LLMs are a simulation of a learning algorithm, trained through brute-force tactics and used for pattern completion. Brains just do not work that way!

    And yet, in spite of the petabytes of data they fucking jam into these pieces of shit, they still can’t draw hands correctly. They still can’t figure out the seahorse emoji. They still insist strawberry only has two Rs! They continuously repeat only the things they have been fed, and the errors have to be fixed manually. They don’t know anything, and that’s why they aren’t intelligent. They are fed data points and they produce estimations, but they do not understand what the connections between those points are. And no amount of pointing at humans will fix that.


  • Just as a brain is not a giant statistics problem, LLMs are not intelligent. LLMs are basically enormous math problems that take whatever you put into them and calculate the most likely continuation. That isn’t emergent behavior. That isn’t intelligence at all.

    If I type 20*10 into a calculator and it gives me 200, is the fact that the calculator can do math a sign of intelligence? I never programmed it to know what 10 or 20 or 200 were specifically; I only made it know what digits, numbers, and multiplication are, and then it “totally created those particular results all on its own” after that!!!

    When you type a sentence into an LLM and it returns an approximation of what a response sounds like, you should treat it the same way. People programmed these things to do exactly the things they are doing, so what behavior is fucking emergent?
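    To put some actual code behind “one giant math problem”, here’s a toy sketch (a massive simplification, obviously — real LLMs are transformers over token embeddings — but the principle is identical: count the dataset, then complete the pattern):

    ```python
    # Toy bigram "model": it completes text purely by counting which word
    # followed which in its training data. No understanding involved anywhere.
    from collections import Counter, defaultdict
    import random

    training_text = "the cat sat on the mat the cat ate the fish"

    # Count, for every word, which words followed it and how often.
    follow_counts = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts[prev][nxt] += 1

    def complete(prompt_word, length=5):
        """Generate a 'response' by repeatedly picking a statistically likely next word."""
        out = [prompt_word]
        for _ in range(length):
            options = follow_counts.get(out[-1])
            if not options:
                break
            next_words, weights = zip(*options.items())
            out.append(random.choices(next_words, weights=weights)[0])
        return " ".join(out)

    print(complete("the"))  # e.g. "the cat sat on the mat" -- pure pattern completion
    ```

    Scale that counting up by a few trillion parameters and you have the “emergent behavior” everyone is losing their minds over.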


  • Holy shit. This is the craziest article to write about one of the shittiest videos I have ever seen.

    That video is glazing the fuck out of LLMs, and the creator knows jack shit about how AI or even computers work. What a fucking moron.

    So, like, the point of the experiment is that LLMs generate outputs based on their inputs, and then those outputs are interpreted by an intermediary program that does things in a game (rough sketch of that loop at the end of this comment). And the video is trying to pretend that this is LITERALLY a new intelligent species emerging, because you never told it to do anything other than its initial goal! Which… isn’t impressive? LLMs generate outputs based on their datasets; like, that’s not in question. That isn’t intelligence, because it is just one giant mathematics problem.

    This article is a giant pile of shit.
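    For anyone who didn’t sit through the video, the whole setup boils down to a loop like this (my own sketch of it, not the video’s actual code; `query_llm` and `Game` are made-up placeholders): the model spits out text, an ordinary hand-written program parses that text, and that program is the thing that actually touches the game.

    ```python
    # Rough sketch of the "LLM plays a game" setup. query_llm() and Game are
    # hypothetical stand-ins, not any real API from the video or article.
    import json

    def query_llm(prompt: str) -> str:
        """Stand-in for whatever model they used; should return text like
        '{"action": "move", "direction": "north"}'."""
        raise NotImplementedError("plug an actual model call in here")

    class Game:
        """Stand-in for the game being controlled."""
        def describe_state(self) -> str: ...
        def apply(self, action: dict) -> None: ...

    def agent_loop(game: Game, goal: str, steps: int = 10) -> None:
        for _ in range(steps):
            # 1. The LLM only ever generates text conditioned on the prompt.
            prompt = f"Goal: {goal}\nState: {game.describe_state()}\nReply with a JSON action."
            reply = query_llm(prompt)
            # 2. An ordinary, hand-written program interprets that text...
            try:
                action = json.loads(reply)
            except json.JSONDecodeError:
                continue  # the model produced junk; skip the turn
            # 3. ...and that program, not the model, actually changes the game.
            game.apply(action)
    ```

    Everything “intelligent looking” in that loop lives in the prompt and the parser that humans wrote around the model.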


  • But that’s exactly how an LLM is trained. It doesn’t know how words are spelled, because words are turned into numbers (tokens) and processed that way (see the sketch at the end of this comment). But it does know when its dataset has multiple correlations for something. Specifically, people spell out words, so it will regurgitate to you how to spell strawberry, but it can’t count letters, because that’s simply not a thing language models do.

    Generative AI and LLMs are just giant reconstruction bots that take all the data they have and reconstruct something. That’s literally what they do.

    Like, without knowing what your answer for assassin is, I will assume your issue is that the question was probably “How many asses are in assassin?” But, like, that’s a joke. An assassin only has one ass, just like the rest of us. And nobody would ever spell assassin as “assin”, so why would it learn that there are two asses in assassin?

    I’m confused about where you’re getting your information, but this is not particularly special behavior.
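    If you want to see the “words are turned into numbers” part for yourself, here’s a quick check using tiktoken (OpenAI’s open-source tokenizer library; the exact splits depend on which encoding you load, so treat the printout as illustrative):

    ```python
    # The model never sees letters; it sees integer token IDs like these.
    # Requires: pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    print(ids)                             # a short list of integers
    print([enc.decode([i]) for i in ids])  # the text chunks those integers stand for
    # Whatever the split turns out to be, none of it is "letters the model can count".
    ```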


  • Actually, the Rs issue is funny because it WAS trained on that exact information, which is why it says strawberry has two Rs, so it’s actually more proof that it only knows what it has been given data on. The thing is, when people misspell strawberry as “strawbery”, others naturally respond, “Strawberry has two Rs,” meaning the double R in “berry”. The problem is that LLM training has no concept of that context, because it isn’t learning anything; the reinforcement is just whatever the majority of its data says. So when you ask how many Rs are in strawberry, it regurgitates “two”, because that’s the answer its dataset reinforced, even though the word obviously has three.
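    Toy illustration of what I mean by “reinforced by its dataset” (made-up mini corpus, obviously): strip away the context and just answer with whatever the data says most often, and you get two Rs even though the word plainly has three:

    ```python
    # A "model" with no concept of context: it answers with whichever claim about
    # strawberry's Rs shows up most often in its (made-up) training data.
    from collections import Counter
    import re

    corpus = [
        "strawbery? no, strawberry has two Rs",            # spelling corrections...
        "it's strawberry, with two Rs",                    # ...all talking about the 'rr' in 'berry'
        "remember, strawberry has two Rs",
        "the word strawberry contains three Rs in total",  # the literal count, rarely stated
    ]

    claims = Counter()
    for line in corpus:
        match = re.search(r"(two|three) Rs", line)
        if match:
            claims[match.group(1)] += 1

    print(claims.most_common(1)[0][0], "Rs")      # -> "two Rs": majority wins, context lost
    print("strawberry".count("r"), "actual Rs")   # -> 3
    ```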