Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Suppose many of the CEOs are just milking venture capital. They know it's a bubble and that it'll burst, but they have a good enough way of predicting when, so they can leave with a profit. Then again, CEO pay usually isn't tied to the company's performance, so they don't even need to know.
Also suppose that some very good source of free/cheap computation is used for the initial hype. Like, as a conspiracy theory: a backdoor in the most popular TCP/IP implementations that makes all of the Internet's major routers work as a VM for some limited bytecode, usable by someone who knows about the backdoor and controls two machines talking to each other both via the Internet and directly.
Then the blockchain bubble and the AI bubble would be similar in relying on such computation (convenient for anything slow in latency but endlessly parallel), and those inflating the bubbles while knowing of the backdoor wouldn't risk anything, would clear the field of plenty of competition with each iteration, and would make fortunes via hedge funds. They would spend very little on the initial stage of mining the first batch of bitcoins (what if Satoshi were actually Bill Joy or someone like that, who in theory could have planted such a backdoor) and on training the first generations of superficially impressive LLMs.
And then this perpetual process of bubble after bubble makes some group of people (narrow enough, if they can keep the secret constituting my conspiracy theory) richer and richer, quickly enough on a planetary scale to gradually own a bigger and bigger percentage of the world economy, indirectly of course, while regularly clearing the field of clueless normies.
Just a conspiracy theory, don’t treat it too seriously. But if, suppose, this were true, it would be both cartoonishly evil and cinematographically epic.
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set by whatever BS Elon starts.
I would argue we have seen returns. Documentation is easier. Tools for PDF and Markdown have increased in efficacy. Coding assistance alone has lowered the barrier to bringing building blocks and some understanding to the masses. If we could hitch this to trusted and solid LLM data, it would make a lot of things easier for many people. Translation is another.
I find it very hard to believe 95% got ZERO benefit. We're still benefiting, and it's forcing a lot of change (in the real world). Example: more power use? More renewable energy, and even (yes, safe) nuclear is expanding. Energy storage is next.
These ‘AI’ (broadly used) tools will also get better and improve the interface between physical and digital. This will become ubiquitous, and we’ll forget we couldn’t just ‘talk’ to computers so easily.
I’ll end with, I don’t say ‘AI’ is an overblown and overused and overutilized buzzword everywhere these days. I can’t say about bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient, more powerful, and is trickling into many areas of software alone. It’s easier to review code, participate, etc. Literal papers are published constantly about how they find new and better and more efficient ways to do things.
You know what I think it is? The title is misleading. These companies probably saw ZERO NET GAIN when investing in AI. The upfront costs of investing haven't seen returns yet. That's like saying a new restaurant isn't profitable. If you know, you know.
Basically, I’m saying it likely didn’t cost the companies anything either and will likely be profitable in the long run as this software is integrated and workforce is reduced due to auromation.
R&D returned as much value as it consumed. So you can technically say “Zero Return,” and be correct from an accounting perspective. And since everyone hates AI they’ll believe it.
Don’t get me wrong. AI is a Bubble industry but it’s not going to go away when it pops.
Well written response. There is an undeniable huge improvement to LLMs over the last few years, and that already has many applications in day to day life, workplace and whatnot.
From writing complicated Excel formulas to proofreading to giving me quick, straightforward recipes based on what I have at hand, I'm already sold on AI assistants.
That being said, compare the type of responses here - an open-source space with barely any shills or astroturfers (or so I'd like to believe) - to the myriad Reddit posts that questioned the same thing on subs like r/singularity and whatnot. It's anecdotal evidence, of course, but the number of BS answers saying "AI IS GONNA DOMINATE SOON", "NEXT YEAR NOBODY WILL HAVE A JOB", "THIS IS THE FUTURE", etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is ONE of the things that mainly leads me to think we are in a bubble. The same thing happened/is happening to crypto over the last 10 years. Too much money being poured in by billionaire whales into a specific subject, and within years they are able to convince the general population that EVERYBODY and their mother is missing out big if they don't start using "X".
Excel still struggles with correct formula suggestions. Basic #REF! errors when the cells above and below in the table work just fine. The ever-present "this data is a formula error" when there's no longer a formula anywhere in the column.
And search, just like its predecessor the Google algorithm, gives you useless suggestions if anything remotely fashionable shares the scientific name too.
Oh, that reminds me: we've always lived in false bubbles, and when they burst, crises and other things started. Eventually the biggest bubble of all, the one we call civilization and progress, will burst too, maybe around 2040-2050 or later.
Bubbles burst, who would have thought.
Every technology invented is a double-edged sword. One edge propels a deluge of misinformation, LLM hallucinations, brainwashing of the masses, and exploitation for profit. The better edge advances progress in science, well-being, and the availability of useful knowledge. Like the nuclear bomb, LLM "AI" is currently in its infancy and is used as a weapon; there is a literal race over who makes the "biggest, best" fkn "AI" to dominate the world. Eventually the over-optimistic bubble bursts and the reality of the flaws and risks will kick in. (Hopefully…)
Surprise, surprise, motherfxxxers. Now you’ll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Either spell the word properly, or use something else, what the fuck are you doing? Don’t just glibly strait-jacket language, you’re part of the ongoing decline of the internet with this bullshit.
You’re absolutely right about that, motherfucker.
Investors and executives still show strong interest in AI, hoping that ongoing advances will close these gaps. But the short-term outlook points to slower progress than many expected.
Doesn’t sound like that’s gonna happen in the near future
hoping that ongoing advances will close these gaps
Well, they won't.
They will rehire, but it will be outsourced for lower wages; at least that's what the same posts on Reddit about the same article are discussing.
As a programmer, it's helping my productivity. And look, I'm an SDET, so in theory I'll be the first to go, and I tried to make an agent that does most of my job, but there are always things to correct.
But programming requires a lot of boilerplate code, and using an agent to make the boilerplate files so I can correct and adjust them speeds up a lot of what I do.
I don’t think I can replaced so far, but my team is not looking to expand the team right now because we are doing more work.
Same here. I love it when Windsurf corrects nested syntax that’s always a pain, or when I need it to refactor six similar functions into one, or write trivial tests and basic regex. It’s so incredibly handy when it works right.
Sadly, other times it cheats and does the lazy thing, like when I ask it to write me an object but it chooses to derive it from the one I'm trying to rework. That's when I ask it to move aside and do it myself.
AI is not needed for any of the points you mentioned. That’s just intellisense and auto complete with extra pollution and fossil fuels
Good luck when you need to link tests with requirements and you don’t know what the tests are doing
Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea
This comment really exemplifies the ignorance around AI. It’s not fancy autocorrect, it’s fancy autocomplete.
It’s fancy autoincorrect
Fancy autocorrect? Bro lives in 2022
EDIT: For the ignorant: AI has been in rapid development for the past 3 years. For those who are unaware, it can also now generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022 AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068
EDIT2: Seems like every AI thread gets flooded with people showing their age, who keep talking about outdated definitions, not knowing which systems fit the definition of reasoning or how that term is used in the modern age.
I already linked this below, but for those who want to educate themselves on more up to date terminology and different reasoning systems used in IT and tech world, take a deeper look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
I even loved how one argument went: "if you change the underlying names, the model will fail more often, meaning it can't reason". No, if a model still manages some success rate, then the reasoning system literally works; otherwise it would fail 100% of the time… Use your heads when arguing.
As another example of language reasoning and pattern recognition (which is also a reasoning system): https://i.imgur.com/SrLX6cW.jpeg answer: https://i.imgur.com/0sTtwzM.jpeg
Note that there is a difference in what the term means outside information technology, but we're quite clearly talking about tech and IT, not neuroscience, which would be quite a different kind of reasoning. These systems used in AI are, by modern definitions, reasoning systems, literally meaning they reason. Think of it like artificial intelligence versus intelligence.
I will no longer answer comments below as pretty much everyone starts talking about non-IT reasoning or historical applications.
You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?
Note that I’m not one of the people talking about it on X, I don’t know who they are. I just linked it with a simple “this looks like reasoning to me”.
Yes, your confidence in something you apparently know nothing about is apparent.
Have you ever thought that openai, and most xitter influencers, are lying for profit?
They can’t reason. LLMs, the tech all the latest and greatest still are, like GPT5 or whatever generate output by taking every previous token (simplified) and using them to generate the most likely next token. Thanks to their training this results in pretty good human looking language among other things like somewhat effective code output (thanks to sites like stack overflow being included in the training data).
Generating images works essentially the same way but is more easily described as reverse JPEG compression. You think I'm joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image generation stuff are equally able to reason, that being not at all whatsoever.
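A toy sketch of what "take the previous tokens, emit the most likely next token" means. This is a hypothetical bigram count model, not a real LLM (real models use huge neural networks over enormous vocabularies), but the generation loop has the same shape:

```python
# Hypothetical bigram model: not how real LLMs are built, but the
# "predict the most likely next token" loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the fish".split()

# "Training": count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        # Greedily emit the most likely next token.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the cat sat on the cat"
```

Real models add sampling randomness on top of this, which is why the same prompt gives slightly different outputs each time.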
You partly described reasoning tho
If you truly believe that, you fundamentally misunderstand the definition of that word, or you're being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second that you genuinely just don't understand: LLMs, the most advanced "AI" they are trying to sell everybody, are as capable of reasoning as any compression algorithm: jpg, png, webp, zip, tar, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they put random shit in there for complicated, important reasons.
Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page is about a very specific term, "reasoning system", which would include stuff like standard video game NPC AI, such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.
Wtf?
Do I even have to point out the parts you need to read? Go back and start reading at the sentence that says "In typical use in the Information Technology field however, the phrase is usually reserved for systems that perform more complex kinds of reasoning", then check out the NLP page, or the part about machine learning, which are all separate/different reasoning systems, but we just tend to say "reasoning".
Not your hilarious NPC analogy.
This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it, and testing or validating where knowledge is contradictory.
An LLM doesn't understand the difference between hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with some probabilities.
It cannot check whether something is true; it just 'knows' that someone on the internet talked about something, sometimes with, and often without or contradicting, resolutions…
It is a gossip machine that tries to 'reason' about whatever it has heard people say.
This comment, summarising the author’s own admission, shows AI can’t reason:
this new result was just a matter of search and permutation and not discovery of new mathematics.
I never said it discovered new mathematics (edit: yet), I implied it can reason. This is a clear example of reasoning to solve a problem.
You need to dig deeper into how that "reasoning" works; you got misled if you think it does what you say it does.
Can you elaborate? How is this not reasoning? Define reasoning for me.
Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.
While that contains the word "reasoning", that does not make it such. If this is about the new "reasoning" capabilities of the new LLMs: it was, if I recall correctly, found out that it's not actually reasoning, just fancy footwork to appear as if it were reasoning, just like it's doing fancy dice rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. That means it's not actually "reasoning"; it's just applying another pattern.
With the current technology we’ve gone so far into this brute forcing the appearance of intelligence that it is becoming quite the challenge in diagnosing what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forwards. At least with our current computer technology, I suspect we’ll need a breakthrough of some kind.
But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so, when they tried the connectionist approach with hardware that could not parallel-process and had only datasets made by hand, not with stolen content. So we're just using the same approach as we did before we tried "handcrafted" AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going, we'll get pretty convincing results, but I seriously doubt we'll get proper reasoning out of the current approach.
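The "change the numbers and names" test mentioned above can be sketched like this. Everything here is hypothetical, just to show that the logical template stays fixed while the surface details vary:

```python
# Hypothetical benchmark-perturbation sketch: the logical template of
# the problem is held fixed while names and numbers change. All
# names and numbers are made up for illustration.
def make_problem(name, a, b):
    question = (f"{name} has {a} apples and buys {b} more. "
                f"How many apples does {name} have?")
    return question, a + b

# Same logic, different surface forms:
q1, ans1 = make_problem("Alice", 3, 4)       # answer: 7
q2, ans2 = make_problem("Zanzibar", 17, 26)  # answer: 43
# A system that truly reasons scores the same on both variants;
# pattern-matching on familiar names and numbers does not.
```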
But pattern recognition is literally reasoning. Your argument sounds like “it reasons, but not as good as humans, therefore it does not reason”
I feel like you should take a look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.
sigh
Dustin’ off this one, out from the fucking meme archive…
https://youtube.com/watch?v=JnX-D4kkPOQ
Millenials:
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Wait? Third? I feel like we’re past third. Has it only been three?
Dot com bubble, the great recession, covid. So yeah, that would be the fourth coming up.
You can also use 9/11 + GWOT in place of the dotcom bubble, for ‘society reshaping disaster crisis’
So uh, silly me, living in the disaster-hypercapitalism era, so normalized to utterly world-redefining chaos at every level, so often, that I have lost count.
That is more American focused though. Sure I heard about 9/11 but I was 8 and didn’t really care because I wanted to go play outside.
True, true, sorry, my America-centrism is showing.
Or well, you know, it was a formative and highly traumatic ‘core memory’ for me.
And, at the time, we were the largest economy in the world, and that event broke our collective minds, and reoriented that economy, and our society, down a dark path that only ended up causing waste, death and destruction.
Imagine the timeline where Gore won, not Bush, and all the US really did was send in a specops team to Afghanistan to get Bin Laden, as opposed to occupy the whole country, never did Iraq 2.
That's… a lot of political capital and money that could have been directed to… anything else. I dunno, maybe kickstarting a green energy push?
Wait for Gen X to pop in as usual and seek attention with some “we always get ignored” bullshit.
Who cares what Gen X thinks, they have all the money.
During Covid, Gen X got massively wealthier while every other demographic got poorer.
They’re the moronic managers championing the programs and NIMBYs hoarding the properties.
Imagine how much more they could’ve just paid employees.
You misspelled “shares they could have bought back”
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
I really understand this is a reality, especially in the US, and that it's really happening, but is there really no one, even elsewhere in the world, taking advantage of the laid-off skilled workforce?
Are they really all going to end up as pizza delivery riders or worse, or are there companies making a long-term investment in a workforce that could prove useful in both the short AND long term?
I am quite sure that’s what Novo Nordisk is doing with their hire push here in Denmark, as long as the money lasts, but I would be surprised no one is doing it in the US itself.
We had that recently: 10% made redundant and a pay freeze because we were not profitable enough. Guess what, morale tanked, and they only slightly improved it by giving everyone 10 extra days of holiday.
Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.Yeah maybe don’t use LLMs
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
Blindsight by Peter Watts, right? Incredible story. Can recommend.
Yep that’s it. Really enjoyed it, just starting Echopraxia.
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
I only read Children of Time. I need to get off my ass
In before someone mentions P-zombies.
I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn't expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with an LLM.
It’s “hypotheses” btw.
Hypothesiseses
You actually did it? That's really ChatGPT's response? It's a great answer.
Yeah, this is ChatGPT-4. It's scary how good it is at generating responses, but like it said: it's not to be trusted.
This feels like such a double head fake. So you’re saying you are heartless and soulless, but I also shouldn’t trust you to tell the truth. 😵💫
I think it was just summarising the article, not giving an “opinion”.
Everything I say is true. The last statement I said is false.
It’s got a lot of stolen data to source and sell back to us.
Stop believing your lying eyes!
Go learn simple regression analysis (not necessarily the commenter; anyone, really). Then you'll understand why it's simply a prediction machine. It's guessing probabilities for what the next character or word is. It's guessing the average line, the likely follow-up. It's extrapolating from data.
This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.
We simply set it to max churn on all data.
Also, just the training of these models has already done the energy damage.
It’s extrapolating from data.
AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.
I’d still call it extrapolation, it creates new stuff, based on previous data. Is it novel (like science) and creative? Nah, but it’s new. Otherwise I couldn’t give it simple stuff and let it extend it.
Why the British accent, and which one?!
It’s as if it’s a bubble or something…
And the next deepseek is coming out soon