Around the same time, Cloudflare’s chief technology officer Dane Knecht explained in an apologetic X post that a latent bug was responsible.
“In short, a latent bug in a service underpinning our bot mitigation capability started to crash after a routine configuration change we made. That cascaded into a broad degradation to our network and other services. This was not an attack,” Knecht wrote, referring to a bug that had gone undetected in testing and had not previously caused a failure.

thanks for illustrating the corpo speak
I hope the bug is fine
Nobody ever asks if the bug is ok
Fun fact time:
That’s why they’re called computer bugs.
In 1947, the Harvard Mark II computer was malfunctioning. Engineers eventually found a dead moth wedged between two relay points, causing a short. Removing it fixed the problem. They saved the moth and it’s on display at a museum to this day.
The moth was not okay.
And to be fair, the word bug had been used to describe little problems and glitches before that incident, but this was the first documented case of a literal computer bug.
If you want a technical breakdown that isn’t “lol AI bad”:
https://blog.cloudflare.com/18-november-2025-outage/
Basically, a permissions change caused an automated query to return more data than was planned for. That query produced a configuration file full of duplicate entries, which was pushed to production. The file’s size exceeded the preallocated memory limit of a downstream system, which hit an unhandled error state, panicked the thread, and started returning the 5xx errors.
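For a rough picture of that failure mode, here’s a minimal Rust sketch. It is not Cloudflare’s actual code; the names, the 200-entry limit, and the `load_features` function are all made up for illustration. The point is just a consumer that assumes the config can never exceed a fixed size, then unwraps the error when it does:

```rust
// Hypothetical sketch of the failure mode: a loader that assumes the
// config file can never exceed a fixed number of entries, and unwraps
// the error instead of handling it.

const MAX_ENTRIES: usize = 200; // assumed preallocation limit

#[derive(Debug)]
struct ConfigError(String);

fn load_features(lines: &[String]) -> Result<Vec<String>, ConfigError> {
    if lines.len() > MAX_ENTRIES {
        // The "unhandled error state": the file is over the limit.
        return Err(ConfigError(format!(
            "{} entries exceeds the limit of {}",
            lines.len(),
            MAX_ENTRIES
        )));
    }
    let mut features = Vec::with_capacity(MAX_ENTRIES);
    features.extend_from_slice(lines);
    Ok(features)
}

fn main() {
    // A config full of duplicate entries, bigger than the preallocated limit.
    let oversized: Vec<String> = (0..5_000).map(|_| "dup_feature".to_string()).collect();

    // .unwrap() turns the Err into a thread panic -- the process dies
    // instead of failing gracefully, and callers start seeing 5xx errors.
    let _features = load_features(&oversized).unwrap();
}
```

Run it and the thread panics on the Err instead of degrading, which is roughly the shape of the cascade described in the post-mortem.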
It seems that CrowdStrike isn’t alone in the ‘A bad config file nearly kills the Internet’ club.
‘A bad config file nearly kills the Internet’ club
There’s no such thing as bad data, only shitty code that creates it or ingests it, and bad testing that fails to catch the shitty code. Overflowing the magic config-file size limit threw an exception, and there was no handler for it? Jeez Louise.
And as for unhandled exceptions, you’d think static analysis would have detected that.
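For what it’s worth, in Rust the standard linter can at least be told to reject bare `.unwrap()`. A sketch of what that looks like, using the real (but allow-by-default) Clippy lints `unwrap_used` and `expect_used`:

```rust
// Deny bare .unwrap() / .expect() across the crate so `cargo clippy` fails on them.
// These lints are allow-by-default, so they have to be opted into.
#![deny(clippy::unwrap_used)]
#![deny(clippy::expect_used)]

fn main() {
    let parsed: Result<i32, _> = "42".parse();

    // This line would now be rejected by `cargo clippy`:
    // let n = parsed.unwrap();

    // Forced to handle the Err branch instead:
    match parsed {
        Ok(n) => println!("parsed {n}"),
        Err(e) => eprintln!("failed to parse: {e}"),
    }
}
```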
Someone should make a programming language like Rust, but that doesn’t crash.
/s
So the actual outage comes down to preallocating memory but not having any error handling to fail gracefully if that limit is exceeded… Bad day for whoever shows up in the git blame for that function
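The graceful version of that same hypothetical loader isn’t much more code: reject the oversized file and keep serving the last config that was known to be good. Again a sketch under made-up names, not anyone’s real system:

```rust
// Hypothetical graceful-degradation version of the same loader:
// if the new config is over the limit, log it and keep the old one.

const MAX_ENTRIES: usize = 200; // assumed preallocation limit

struct FeatureStore {
    current: Vec<String>, // last known-good config
}

impl FeatureStore {
    fn try_reload(&mut self, candidate: Vec<String>) {
        if candidate.len() > MAX_ENTRIES {
            // Fail the reload, not the process: keep serving the old config.
            eprintln!(
                "rejecting config with {} entries (limit {}), keeping previous",
                candidate.len(),
                MAX_ENTRIES
            );
            return;
        }
        self.current = candidate;
    }
}

fn main() {
    let mut store = FeatureStore {
        current: vec!["known_good_feature".to_string()],
    };

    // The oversized, duplicate-laden file from the bad query.
    let oversized: Vec<String> = (0..5_000).map(|_| "dup_feature".to_string()).collect();
    store.try_reload(oversized);

    // Still serving the last good config instead of panicking into 5xx errors.
    println!("serving {} features", store.current.len());
}
```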
This is the wrong take. Git blame only shows who wrote the line. What about the people who reviewed the code?
If you have reasonable practices, git blame will show you the original ticket, a link to the code review, and relevant information about the change.
Plus the guys who are hired to ensure that systems don’t fail even under inexperienced or malicious employees, management who designs and enforces the whole system, etc… “one guy fucked up and needs to be fired” is just a toxic mentality that doesn’t actually address the chain of conditions that led to the situation
I wonder if all recent outages aren’t just crappy AI coding
Shitty code has been around far longer than AI. I should know, I wrote plenty of it.
They trained it on the work of people like you.
But, AI can do the work of 10 of you humans, so it can write 10 times the bugs and deploy them to production 10 times faster. Especially if pesky testers stay out of the way instead of finding some of the bugs.
Humans are plenty capable of writing crappy code without needing to blame AI.
Absolutely, but it does feel like things have spiked a bit recently.