

actually Donnie’s friends prefer to fuck high school students
“you can’t have a successful government when every time I want to be President or have sex with minors or anything else you have the right to do as a rich, white man, you have to hear people get all judgy”
The part I find interesting is the quick addiction to working with the LLM (to the point that the guy finds his own estimate of $8,000/month in fees reasonable), his over-reliance on it for things that, from the way he writes, he knows are not wise, and the way it all comes crashing down in the end. Sounds more and more like the development of a new health issue.
I wonder if it can be used legally against the company behind the model, though. I doubt it's possible, but a "your own model says it effed up my data" argument could give some beef to a complaint. Or at least to a request for a refund on the fees.
The article makes a good point that it's less about replacing a knowledge worker completely and more about industrializing what some categories of knowledge workers do.
Can one professional create a video with AI in a matter of hours instead of it taking days and needing actors, scriptwriters and professional equipment? Apparently yes. And AI can even translate it into multiple languages without translators and voice actors.
Are they “great” videos? Probably not. Good enough and cheap enough for several uses? Probably yes.
Same for programming. The completely independent AI coder doesn't exist, and many are starting to doubt that it ever will with the current technology. But if GenAI can speed up development, even if not dramatically, to the point that it takes maybe 8 developers to do the work of 10, that is a 20% drop in demand for developers, which puts downward pressure on salaries too.
It’s like in agriculture. It’s not like technology produced completely automated ways to plow fields or harvest crops. But one guy with a tractor can now work one field in a few hours by himself.
With AI all this is mostly hypothetical, in the sense that OpenAI and co are still burning money and resources at a pace that looks hard to sustain (let alone grow), and it's unclear what the cost to consumers will be once the dust settles and these companies need to make a profit.
But still, when we're laughing at all the failed attempts to make AI truly autonomous in many domains, we might be missing the point.
don’t call my Tesla cars swastikars…
… that’s reductive, they have so much MORE potential!
but why am I soft in the middle? The rest of my life is so hard!
but… but… reasoning models! AGI! Singularity! Seriously, what you’re saying is true, but it’s not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.
Congrats then, you write better than an LLM!
Interestingly, your original comment is not much longer and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic, I’m just trying to understand if the (perceived) deterioration in quality was due to the fact that the input was already LLM-assisted.
In order to make sure they were wealthy enough, I’m sure he personally tested them one by one, challenging them to send him a big donation in cryptocurrencies.
That’s what a committed President-slash-genius looks like!
A 60% success rate sounds like a very optimistic take. Investing in an AI startup with a 60% chance of success? That’s a VC’s wet dream!
“Eventually” might be a long time with radiation.
20 years after the Chernobyl disaster, the radiation level was still high enough to give you a good chance of cancer if you went to live there for a few years.
https://www.chernobylgallery.com/chernobyl-disaster/radiation-levels/
I don’t know how much radiation these “tactical” weapons release, but if it’s comparable to Chernobyl, then even if the buildings were not damaged at first, I doubt they’d be fit to live in after being abandoned for 30 or 40 years.
It was Anthropic that ran this experiment.
The rest of Tokyo is mostly intact.
and housing becomes much more accessible too, when the buildings are intact but their inhabitants have much shorter lives because of radiation.
ah, dear old copy/paste… It’s funny that even OpenAI doesn’t trust ChatGPT enough to give more personalized LLM-generated answers.
And this sounds exactly like the type of use case AI agents are supposedly so great at that they will replace all human workers (according to Altman at least). Any time now!