“Vaughan was surprised to find it was often the technical staff, not marketing or sales, who dug in their heels.”
So the people who understood it best were sceptical, and this didn’t give him pause.
Can someone explain to me why all these empty suits dick ride LLMs so hard?
Because they try the tools, realize that their job is pretty much covered by LLMs and think it’s the same for everyone.
They’re easily conned and they love yes men.
Technical staff were skeptical because they actually know what AI can and can’t do reliably in production environments - it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.
So it’s the CEO they should replace.
it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.
Thank goodness nobody is crusading to have AI take over medicine.
…which is why I categorically refuse to use the term Artificial Intelligence.
The bullshitters were quick to adopt the bullshit factory.
Can someone explain to me why all these empty suits dick ride LLMs so hard?
$$$$$$$
AIs are cheaper than humans.
Cheaper NOW, when OpenAI operates at huge loss
Not really. Because they don’t work, and then they have to hire the humans back, which is more expensive than just keeping them on for the six months it’ll take the CEOs to realise that.
“Vaughan was surprised to find it was often the technical staff…”
Tell me you’re completely out of touch with your company and what it does without telling me you’re completely out of touch with your company and what it does. FFS how is this guy the CEO? Oh, he’s one of the founders? Brilliant.
Vaughan says he didn’t want to force anyone. “You can’t compel people to change, especially if they don’t believe.”
But he did. Change or be fired, basically.
“You multiply people…give people the ability to multiply themselves and do things at a pace,” he said, touting the company’s ability to build new customer-ready products in as little as four days, an unthinkable timeline in the old regime.
Ooh I bet some nefarious hacker types will be salivating at the incredibly rushed code base that is probably a spaghetti mess and as insecure as fuck.
Vaughan disclosed that the company, which he said is in the nine-figure revenue range, finished 2024 at “near 75% Ebitda”—all while completing a major acquisition, Khoros.
I had to look up EBITDA - some interesting points to consider when you look at this metric he used:
A negative EBITDA indicates that a business has fundamental problems with profitability. A positive EBITDA, on the other hand, does not necessarily mean that the business generates cash. This is because the cash generation of a business depends on capital expenditures (needed to replace assets that have broken down), taxes, interest and movements in working capital as well as on EBITDA.
While being a useful metric, one should not rely on EBITDA alone when assessing the performance of a company. The biggest criticism of using EBITDA as a measure to assess company performance is that it ignores the need for capital expenditures in its assessment.

Hmmm… I’m no accountant (I leave that to my actual accountant), but surely if they were being profitable it would sound better to say something like “We’ve remained profitable throughout and our earnings per quarter are on par if not greater than before.”?
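To make that criticism concrete, here’s a rough back-of-the-envelope sketch with entirely made-up numbers showing how a company could report a “near 75% EBITDA” margin while the actual bottom line is far thinner once the things EBITDA ignores are added back in:

```python
# All numbers below are invented purely to illustrate the point.
revenue = 100_000_000          # hypothetical nine-figure revenue
operating_costs = 25_000_000   # costs before Interest, Taxes, Depreciation, Amortization

ebitda = revenue - operating_costs
print(f"EBITDA margin: {ebitda / revenue:.0%}")   # 75% -> "near 75% EBITDA"

interest = 30_000_000                    # e.g. debt taken on to fund acquisitions
depreciation_amortization = 35_000_000   # asset wear-and-tear that EBITDA ignores
taxes = 2_000_000

net_income = ebitda - interest - depreciation_amortization - taxes
print(f"Net income: {net_income:,}")     # 8,000,000 -> a much less impressive headline
```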
I’m no accountant but surely if they were being profitable it would sound better to say something like “We’ve remained profitable throughout and our earnings per quarter are on par if not greater than before.”?
no, because profitability isn’t the key figure they are interested in. it’s growth. i recently got fired because of disappointing growth, i.e. the increase in profitability was not as large as they expected, which means they still made more money than last year.
this is why expenditures get relegated to “externality” status; because otherwise projections would make it look like a company cannot grow infinitely large, and surely that’s not true
Vaughan was surprised to find it was often the technical staff, not marketing or sales, who dug in their heels. They were the “most resistant,” he said, voicing various concerns about what the AI couldn’t do, rather than focusing on what it could. The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.
Imagine that.
Yeah I have a CEO like that, it makes me want to strangle him. He constantly considers the raising of valid concerns to be some sort of personality failing. Meetings with him are an utterly pointless exercise, they’re not meetings, they are times where he tells us what he’s already decided to do.
Fortunately they’re held on Teams now, so I just join the meeting and then go make a cup of coffee.
deleted by creator
No, I disagree. The CEO is by far the most replaceable person when it comes to AI if the directive is to simply make more money for shareholders based on market research. I would argue that the CEO is being a parasite here.
CEOs are invariably the parasites in virtually any company where they earn more than 10× their median employee.
Nothing that can be done inside the business can justify compensation like that. Ergo: it’s parasitism, siphoning away more and more of the value the workers produce, just for themselves and their fellow parasites.
Today, I ran into a bug. We’re being encouraged to use AI more so I asked copilot why it failed. I asked without really looking at the code. I tried multiple times and all AI could say was ‘yep it shouldn’t do that’ but didn’t tell me why. So, gave up on copilot and looked at the code. It took me less than a minute to find the problem.
It was a switch statement and the case statement had (not real values) what basically reads as ‘variable’ == ‘caseA’ or ‘caseB’, which will return true… which is the bug. Like, I’m stripping a bunch of stuff away, but Copilot couldn’t figure out that the case statement was bad.
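For anyone who hasn’t hit this one before, here’s a stripped-down Python sketch of the same pattern (the names and values are made up, just like above): the `or` binds to a bare string, which is always truthy, so the branch matches everything.

```python
# Minimal reconstruction of the bug pattern described above (not the real code or values).
# `variable == "caseA" or "caseB"` parses as `(variable == "caseA") or "caseB"`,
# and a non-empty string is always truthy, so the condition is True for every input.

def handle(variable: str) -> str:
    if variable == "caseA" or "caseB":    # BUG: always True
        return "matched"
    return "no match"

def handle_fixed(variable: str) -> str:
    if variable in ("caseA", "caseB"):    # correct membership test
        return "matched"
    return "no match"

print(handle("something else"))        # matched  (wrong)
print(handle_fixed("something else"))  # no match (right)
```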
AI is quickly becoming the biggest red flag. Fast slop, is still slop.
AI thinks in the same way that ants think: there’s no real intelligence or thought going on, but ants are still able to build complex logistics chains by following simple rules. AI works on completely different principles, but the effect is the same; it’s following a lot of simple rules that lead to something that looks like intelligence.
The problem is that a lot of people seem to think AIs are genuine simulations of a brain; they think the AI is genuinely cogitating because it kind of looks like it is sometimes. The world is never going to get taken over by a mindless zombie AI. If we ever do get AGI, it won’t be from LLMs, that’s for sure.
I do find AI useful when I’m debugging a large SQL / Python script though, and gotta say I make use of it in that case… other than that it’s useless, and relying on it as one’s main tool is idiotic.
I’ve never heard of this jackass nor his shitty software. I feel privileged.
Late stage capitalism rewards management for any appearance of change. It really doesn’t matter whether the results of that change are good or bad. And even a CEO who keeps destroying companies can always find a similar position elsewhere. The feedback loop is hopelessly broken.
Reminds me of the song Just Movement by Robert DeLong
Just like an AI. Instead of learning from mistakes, he repeats them and denies any wrongdoing.
“You’re Absolutely right!”
Because he asks the AI what’s best, but the chatbot always treats it as a loaded question, and it wants to be seen as helpful, so it finds a way to agree, yes-man style.
CEO of enterprise-software powerhouse IgniteTech.
Can someone tell me what they do? They don’t have a Wikipedia article and their website is mostly AI slop.
They throw buzzwords at venture capitalists in hopes of one day selling out.
After grilling their silly LLM for a while, I was able to squeeze out what that company really is all about. They don’t really make anything. They just buy miscellaneous software companies, and turn those apps into subscription based cloud cancer. Enterprise software meets maximum enshittification, yeah baby!
Ah, so removing employees from this dumpster fire was a net positive for society.
I think only bankruptcy is the net positive, as long as they don’t stiff legitimate creditors.
No don’t you see - fewer employees means there’s less of anything getting done, and this company is just a parasite that produces nothing of value.
They throw buzzwords
Now I understand why the CEO thinks AI could replace everybody.
deleted by creator
I understand what enterprise software is. That wasn’t my question.
That CEO:
“Doing Our Part to Make the World a Greener Place”
Clown company. You can’t promote AI and make a claim like that at the same time.
By accelerating the collapse of a human survivable ecosystem we will bring about the end of humanity, resulting in a greener environment for the handful of surviving species.
By ~~accelerating the collapse of~~ pivoting a human survivable ecosystem we will ~~bring about the end~~ accelerate a paradigm shift of humanity, resulting in a greener environment for the ~~handful~~ stable base of ~~surviving species~~ recurring revenue.
Does he still have a company at all?
This type of shortsightedness should be punished. I mean, AI can be useful for certain tasks, but it’s still just a tool. It’s like these CEOs were just introduced to a screwdriver and are trying to use it for everything.
“Look employees, you can use this new screwdriver thing to brush your teeth and wipe your ass.”
You can use this new screwdriver to fuck yourself. We’re working late boys!
“It enabled us to shit out products in 4 days.”
Glad they incorporated such thorough testing in their process.
Of course he would. He could probably give Hitler lessons on oven design.
I wonder if he thinks we’re dumb or just doesn’t care. They’d have been laid off either way. “Return to work”, “Stack ranking”, “AI refusal”, whatever you say bro.
“The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.”
https://youtu.be/KHJbSvidohg#t=13s
I see the same push where I work and I cannot get a good answer to the most basic question:
“Why?”
“We want more people using AI.”
“Why?”
“. . .”
I usually ignore these kinds of trends. Just meet any required deadlines etc. but don’t engage too much. The vast majority will just disappear.
Specifically as a software developer I cannot see a good outcome from engaging with this trend either. It’s going to go one of two ways.
1: It pans out sooner rather than later that AI wasn’t the panacea they thought it was, and it either is forgotten about, or becomes a set of realized tools we use, but don’t rely on.
2: They believe it can replace us all, and so they replace us all with freshly graduated vibe “programmers” and I don’t have a job anyway.
I don’t really see an upside to engaging with this in any kind of long term plan.
2. It’s about breaking the power of tech workers by reducing them from highly skilled specialists to interchangeable low-status workers whose job is to clean up botshit until it compiles. (Given that the machine does the real work and they’re just tidying up the output it generates when prompted, they naturally don’t merit high wages or indulgent perks, even if getting 30,000 lines of code regurgitated from the mashed-up contents of Github and Stack Overflow working is more cognitively taxing than writing that code from scratch would have been.)
It doesn’t matter what they claim if they simply can’t get people to babysit the AI codebase, or the AIs themselves, for less money than the original developers (who didn’t have to deal with AIs and their output) used to cost.
As a pretty senior dev who spent a lot of my career as a contractor mainly coming in to unfuck code-bases seriously fucked up by a couple of cycles under less experienced people: if I was pitched work to unfuck AI output I would demand a premium for my services, purely because it’s far more fucked up, in far harder-to-follow ways, than the work done by less experienced humans (who at least are consistent in the mistakes they make and follow a specific pattern in how they work). And that’s without any moral considerations; on principle I would probably just not take a contract with a company that had used AI like that.
I mean, I can see their strategy work for junior devs, but that kind of reducing the power of specialized workers was already being done against junior devs using “outsourcing”.
My prediction is that it’s just the latest buzzword on the pile of buzzwords and by 2028 a new one will pop up and the only time you hear “AI” will be in the line of “Hey, remember when everyone was talking about AI?”
Before AI it was “The Cloud”. Before the cloud it was “Virtualization”. They’re saying all the same things about AI that they said about the cloud and virtualization…
I guess the real money is inventing the new buzzword that sales people can say will make your business faster, more agile, and more efficient. :)
I think it’s a real shame because all three of those things you mention are useful. The problem is that once they become a buzzword, then everything needs to be done using that buzzword.
Cloud has been misused to hell and back, and I have no doubt AI will too.
“AI-powered cloud software virtualization”
It’s just Kubernetes.
Before AI it was “The Cloud”. Before the cloud it was “Virtualization”. They’re saying all the same things about AI that they said about the cloud and virtualization…
So you’re saying AI will make a measurable (arguably net positive) impact and forever change the way we do things in our day to day, just becoming a standard toolset offered by many providers? Because I’d argue that’s what virtualization was, as well as the cloud to a lesser extent. Hell, I’d be hard pressed to be convinced on virtualization being a bad thing (not as much the cloud tho, that has some solid negative arguments).
If you’re trying to shit talk AI, you’d be better off comparing it to block chain than cloud/virtualization, since the latter two are an integral part of a large amount of the work we do, and the former is mainly for illicit drugs/activities and stealing money.
agreed, virtualization was one of the best things to happen in IT since the dawn of the internet. i can’t even imagine how much less efficient and reliable datacenters and the entire internet would be without it. Not at all comparable to AI.
i actually work for a company that does very little virtualization now and it’s fucking awful.
I think the comparison is apt. It’s not that LLMs are useless, it’s just that, currently, they’re insanely overhyped. Just like the .com era had irrational companies that evaporated but the underlying tech was useful. Just like in-house servers were considered dead with everything being cloud hosted, and now there’s recognition of a trade-off. Just like there was pressure to ship everything as an ‘.ova’, and nowadays that’s not really done as much.
An appropriately used level of LLM might even be nice, but when it’s fuel for the grifters, it is going to be obnoxious.
LLMs may be overhyped, but the point is virtualization was not.
Virtualization as a ‘platform’ was a bit overhyped, hence my ‘.ova’ comment. There was a push for a lot of applications to exclusively ship as a whole virtual machine, to create OS variants dedicated to the purpose of running single applications. For a lot of applications it was supremely awkward, because app developers ended up having to ‘own’ things they didn’t want to own, like the customer network configuration.
Virtualization as a utility has of course persisted, but it’s much more rare for a vendor to declare their ‘runtime’ to be vmware than it once was. Virtualization existed at IBM for a long time, vmware made it broadly more available and flexible in the PC space, and then around the mid-2000s things started to go a bit crazy with ‘virtualization is the runtime’.
Now mind you, compared to dot-com or ‘big data’ it was trivial, but it was all a bit silly for a time there.
The Cloud is still a thing though. As is virtualization
And AI (LLMs, media generation, machine learning) is going to stay a thing as well.
Yeah, but nobody talks about them solving/causing all the problems. :)
Yeah, there’s generally a kernel of value wrapped up in all sorts of bullshit.
Same with the .com boom: obviously here we are with the internet as critical infrastructure, but the 1999 ‘internet’ was a mess of overhype.
Everytime I see this kind of hype pop up I think back to when there was this great announcement from Silicon Valley about a “revolution in transportation” and it turned out to be the Segway.
“People will be designing cities around this!”
It could have had an impact if it hadn’t been $5,000…
And, BTW, not even the coolest thing Kamen designed…
I think the next buzzword teed up is “quantum”
Looking forward to the infinitely scalable quantum AI blockchain cloud virtualization E2E P2P VPN micro-services!
Get this person a golden parachute, stat
🤢
Yeah, it’s already happening…
Same reason as forcing people back into the office, even though remote work is the solution to a number of serious issues affecting society:
Investors/banks have tons of money in these markets and are incentivizing/forcing companies to adopt these policies to prop up the markets, whether it is in the companies’ interest or not.
Oh, yeah, we have that too… we want people in the office because collaboration! Synergy! etc. etc.
“How does that work if you want everyone using AI?”
“. . .”
Oooo hot take time: I’d rather work in an office again than be forced to use LLMs.
I’d rather use the AI than go back to the office. The AI doesn’t care if I’m wearing any pants.
I’d rather correct LLM hallucinations than some of the crap front line tech support tells people. :)
“The marketing and salespeople were enthused by the possibilities of working with these new tools, he added.”
[sigh] Because of course they were. Those people couldn’t find their own arses even if they used both hands.