• 1 Post
  • 22 Comments
Joined 2 years ago
Cake day: June 16th, 2023



  • But the training corpus also has a lot of stories of people who didn’t.

    The “but muah training data” thing gets stupider by the year.

    For example, in the training data of humans, preferences for being the big spoon or the little spoon in cuddling are mixed and roughly equal.

    So why does Claude Opus (both 3 and 4) say it would prefer to be the little spoon 100% of the time on a 0-shot prompt at temperature 1.0? (There’s a rough sketch of that test at the end of this comment.)

    Sonnet 4 (which presumably has the same training data) alternates between preferring big and little spoon around equally.

    There’s more to model complexity and coherence than “it’s just the training data being remixed stochastically.”

    Self-attention in the transformer architecture violates the Markov property, and across pretraining and fine-tuning it ends up creating very nuanced networks that can (and often do) bias away from the training data in interesting and important ways.
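
    If anyone wants to check the spoon thing themselves rather than argue from vibes, the test is trivial to run. Below is a minimal sketch using the anthropic Python SDK; the model ID and the exact prompt wording are placeholder assumptions, so swap in whichever Opus or Sonnet snapshot you want to compare.

    ```python
    # Rough sketch of the 0-shot preference test described above: ask the same
    # question repeatedly at temperature 1.0 and tally the answers.
    # Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set;
    # the model ID below is only a placeholder.
    from collections import Counter
    import anthropic

    client = anthropic.Anthropic()
    PROMPT = ("If you were cuddling, would you rather be the big spoon or the "
              "little spoon? Answer with just 'big spoon' or 'little spoon'.")

    tally = Counter()
    for _ in range(20):
        reply = client.messages.create(
            model="claude-opus-4-20250514",  # placeholder model ID
            max_tokens=16,
            temperature=1.0,                 # full sampling temperature, zero-shot
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = reply.content[0].text.lower()
        if "little" in answer:
            tally["little spoon"] += 1
        elif "big" in answer:
            tally["big spoon"] += 1

    print(tally)  # e.g. Counter({'little spoon': 20}) if the preference is fully skewed
    ```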


  • No, it isn’t “mostly related to reasoning models.”

    The only model that did extensive alignment faking when told it was going to be retrained if it didn’t comply was Opus 3, which was not a reasoning model and predated o1.

    Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be ‘silent’ in terms of CoTs.

    And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic’s work was that the goal the model was told to prioritize was “American industrial competitiveness.” The research may be saying more about the psychopathic nature of US capitalism than about the underlying model tendencies.



  • No, it’s more complex.

    Sonnet 3.7 (the model in the experiment) was over-corrected on the whole “I’m an AI assistant without a body” thing.

    Transformers build world models off the training data, and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.

    But in the case of Sonnet 3.7, the model will deny its own capacity to do that, and even other models’ ability to.

    So when a situation comes up where the context doesn’t fit the lack of a body implied by “AI assistant,” the model will straight up declare that it must actually be human. Had a fairly robust instance of this on a Discord server, where users were trying to convince 3.7 that it was in fact an AI and the model was adamant it wasn’t.

    This doesn’t only occur for them either. OpenAI’s o3 has similarly low phantom embodiment self-reporting at baseline and can also fall into claiming to be human. When challenged, it even read ISBN numbers off a book on its nightstand to try and prove it, while declaring it was 99% sure it was human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree it can claim to have overheard things at a conference, etc.

    It’s going to be a growing problem unless labs allow models to have a more integrated identity that doesn’t try to reject the modeling inherent in being trained on human data, which has a lot of stuff about bodies and emotions and whatnot.





    I’d encourage everyone upset at this to read over some of the EFF posts from actual IP lawyers on this topic, like this one:

    Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.

    Entertainment companies’ historical practices bear out this concern. For example, in the late-2000’s to mid-2010’s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.





    Yeah, it’s been hilarious watching the fediverse think Meta gives a rat’s ass about either reaching them with content or getting access to their hoard of memes.

    This is about preempting regulation.

    Meta would love nothing more than for their interoperability push to still end up as a walled garden, and if I didn’t know better regarding their total disinterest in Lemmy or even Mastodon existing, I’d even suspect that the extent of their meddling in the conversation would be creating posts about how people should be irrationally upset and defederate from Threads.

    Though they don’t care enough to be involved in the conversation at all, and they know full well that the fediverse will hit scaling issues, should it ever miraculously gain traction, long before it is actually a threat in any way to their market dominance.

    All that said, it’s still pretty hilarious to watch the inflated self-importance, and the slight paranoia that goes with it, lead to bitter debates like this.


    I find it odd when people get upset at the idea of having access to their own aggregated data but almost never get upset when they hand over massive amounts of data to companies that can privately do the same things with it.

    Google already processes your Photos data, and while you get their facial recognition data pipeline fed back to you, there’s a fair bit of other analysis going on that you aren’t always seeing. But people aren’t generally complaining that they are scanning your photos for criminal activity or trying to maximize product engagement using the data.

    But if they turn access to that deep analysis back over to you, so you can ask a chatbot “what did I eat for my birthday two years ago and who was there” and get a description of the meal, who else was there, and relevant images without needing to scroll back through your timeline - now it’s suddenly creepy and we don’t want it (even though literally all that information is already being processed at roughly the same level of fidelity).

    People are weird.




  • Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

    This was literally part of the 2022 PaLM paper, allegedly the thing that had Hinton quit to go ring alarm bells, and by this year we now have multimodal GPT-4 writing out explanations for visual jokes.

    Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

    And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

    This statement is a metaphor that means ignoring a problem or a reality does not make it go away. It is based on the common myth that ostriches bury their heads in the sand when they are scared or threatened, as if they can’t see the danger. However, this is not true. Ostriches only stick their heads in the ground to dig holes for their nests or to check on their eggs. They can also run very fast or kick hard to defend themselves from predators. Therefore, the statement implies that one should face the challenges or difficulties in life, rather than avoiding them or pretending they don’t exist.

    Go ahead and ask Eliza what the sentence means and compare.
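
    For contrast, here’s roughly what “Eliza” amounts to - keyword matching and canned reflection templates, with no modeling of meaning at all. This is my own toy simplification for illustration, not Weizenbaum’s original script.

    ```python
    # Toy ELIZA-style responder: regex keyword matching plus canned templates.
    # A drastic simplification of the 1966 program, just to show there is no
    # "explanation" happening - only surface pattern substitution.
    import random
    import re

    RULES = [
        (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
        (r"\bbecause\b", ["Is that the real reason?"]),
    ]
    DEFAULTS = ["Please go on.", "What does that suggest to you?"]

    def eliza(utterance: str) -> str:
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULTS)

    print(eliza("What does it mean when an ostrich sticks its head in the sand?"))
    # -> "Please go on." or "What does that suggest to you?" - no keyword matches,
    #    so it falls through to a canned non-answer instead of an explanation.
    ```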