• 0 Posts
  • 50 Comments
Joined 2 years ago
Cake day: June 11th, 2023




  • It’s supposed to be kind of sort of federal(ish) (federalish enough, in theory, to keep Catalonia and Euskadi happy enough that we won’t want to leave). “States” are called autonomías (autonomies) and have their own government, laws, and institutions, though they still have to obey the Spanish government and most of its laws. It isn’t really working.

    The article is still wrong when it uses “feds”, though, because the cops doing this are the mossos d’esquadra, the Catalan autonomic police, not the “federal(ish)” policía nacional (the Spanish police proper) or guardia civil (despite the name, the military Spanish police, a relic from Franco’s dictatorship, like most of the country and its institutions).








  • in the unable-to-reason-effectively sense

    That’s all LLMs by definition.

    They’re probabilistic text generators, not AI. They’re fundamentally incapable of reasoning in any way, shape or form.

    They just take a text and produce the most probable word to follow it according to their training model; that’s all.

  • What Musk’s plan (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will produce is a significantly more limited model, more prone to hallucinations, that occasionally spews racism and disinformation.