Vote, people. There are town and city votes every day, or close to it. Vote!
The code is still on GitHub, just an earlier commit: https://github.com/chatgptprojects/clear-code/tree/627ab39f09681d9c7d6915861d36d361bdc6d889
At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.
Actual project knowledge is distributed across “topic files” fetched on-demand, while raw transcripts are never fully read back into the context, but merely “grep’d” for specific identifiers.
This “Strict Write Discipline”—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.
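A minimal sketch of how a pointer-index memory like that might work (the paths, pointer format, and helper names here are invented for illustration, not taken from the leaked code):

```typescript
import * as fs from "node:fs";

// Illustrative pointer-index memory. MEMORY.md holds one short pointer
// line per topic; the actual knowledge lives in topic files read on demand.
const INDEX = "MEMORY.md"; // hypothetical paths
const TOPICS = "memory/topics";

// Retrieval: scan the always-loaded index for a matching pointer,
// then load only the referenced topic file into context.
function recall(query: string): string | undefined {
  const pointers = fs.readFileSync(INDEX, "utf8").split("\n");
  const hit = pointers.find((line) => line.includes(query));
  if (!hit) return undefined;
  const file = hit.split("->")[1]?.trim(); // e.g. "auth bug -> auth-retry.md"
  return file ? fs.readFileSync(`${TOPICS}/${file}`, "utf8") : undefined;
}

// "Strict write discipline": write the topic file first, and only
// append the pointer to the index after the write has succeeded,
// so a failed write never leaves a dangling pointer in context.
function remember(topic: string, file: string, body: string): void {
  fs.writeFileSync(`${TOPICS}/${file}`, body); // throws on failure
  fs.appendFileSync(INDEX, `${topic} -> ${file}\n`); // only runs after success
}
```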
For competitors, the “blueprint” is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a “hint,” requiring the model to verify facts against the actual codebase before proceeding.
It will be interesting to see if continue.dev takes advantage of this methodology. My only complaint with it has been context handling.
In this mode, the agent performs “memory consolidation” while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.

This blog post reads like a marketing piece.
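Marketing or not, the consolidation idea itself is simple enough to sketch. Function names and the prompt below are invented; `complete` stands in for any model call:

```typescript
// Hypothetical idle-time consolidation pass: feed accumulated
// observations back through the model and replace them with a merged,
// deduplicated set of concrete statements.
async function consolidate(
  observations: string[],
  complete: (prompt: string) => Promise<string>, // any LLM call
): Promise<string> {
  const prompt = [
    "Merge these observations into a minimal set of facts.",
    "Drop duplicates, resolve contradictions (prefer newer entries),",
    "and rewrite vague insights as concrete, testable statements.",
    "",
    ...observations.map((o, i) => `${i + 1}. ${o}`),
  ].join("\n");
  return complete(prompt);
}
```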

Be careful not to introduce security vulnerabilities such as command injection, XSS, SQL injection, and other OWASP top 10 vulnerabilities. If you notice that you wrote insecure code, immediately fix it.
Lmao. I’m sure that will solve the problem of it writing insecure slop code.
It doesn’t fix it, but as stupid as it looks, it should actually improve the chances.
If you’ve seen how the reasoning works, they basically spit out some garbage, then read it back and judge whether it’s garbage or not.
They do try to ‘correct their errors’, so to say.

That’s not enabled by default AFAIK, and it burns through way more tokens, looping its output through several times. It also adds a bunch more context, which will bring you that much closer to context collapse.
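For the curious, that loop is roughly this shape (a sketch, with `complete` again standing in for any model call):

```typescript
// Sketch of a self-correction loop: draft, re-read, revise. Each pass
// re-feeds the growing transcript, which is why it burns tokens and
// eats context, as noted above.
async function draftAndRevise(
  task: string,
  complete: (prompt: string) => Promise<string>,
  passes = 2,
): Promise<string> {
  let answer = await complete(task);
  for (let i = 0; i < passes; i++) {
    const critique = await complete(
      `Task: ${task}\nAnswer: ${answer}\nList any errors.`,
    );
    answer = await complete(
      `Task: ${task}\nAnswer: ${answer}\nFix these errors: ${critique}`,
    );
  }
  return answer;
}
```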
I didn’t turn it on, and I see it doing it all the time. In my case, though, the mistakes are often absurd. I often feel like Claude is a very junior programmer that has a hard time remembering the original requirements.
While true, the latest Opus model has a 1M-token context, which is a lot more than the previous 200K limit. Hard to fill that up with regular work, but easy if you try to one-shot a whole product.
The best way to learn is from your own mistakes. So, Claude is still learning.
Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.
The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
Laws should have been put in place years ago to make it so that AI usage needs to be explicitly declared.
Haven’t read the article and have limited knowledge of AI, but I wonder if they do this for reinforcement learning: responses to the OSS PRs could be used as labels for different weights and models. Using even more free labor to train their models.
In Europe we have the AI Act which, as of August, will introduce some form of transparency obligations. Not perfect, obviously, but a start. It probably won’t be followed by the rest of the world, though, so like GDPR it will be forcibly eroded by others’ interests through lobbying. But at least we try.
That doesn’t sound like it is saying “don’t identify yourself.” That it’s called Claude isn’t internal information. So that instruction doesn’t seem to be doing what you are saying. There must be more instructions.
The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
This is so incredibly stupid.
You’ve tried security.
You’ve tried security through obscurity.
Now try security through giving instructions to an LLM via a system prompt to not blow its cover.
Given how massive a field of computer science artificial intelligence is, and how much of it already is or is getting added to every piece of software that exists, a label like that would be about as useless as the California Prop 65 cancer warnings.
Do you use a mobile keyboard that supports swipe typing and has autocorrect? Remember to mark everything you write as being AI assisted.
Well yes, if you let autocorrect write a code contribution, I think you should label that contribution as AI.
What internal info are they worried about leaking in a commit message? If you don’t want it to add the standard Claude attribution, you can completely disable it in the settings, or just write your own commit messages.
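For reference, if memory serves, the relevant knob lives in Claude Code’s settings.json; double-check the key name against the current docs:

```json
{
  "includeCoAuthoredBy": false
}
```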
AI usage needs to be explicitly declared.
Pointless. https://www.theregister.com/2026/01/08/linus_versus_llms_ai_slop_docs/
If it were the law, then the AI itself would be coded to not allow going “undercover,” and there would be legal consequences if caught. Torvalds’ stance only matters for how things ‘are’, not how they ‘could be’.
Would it be a cure all? Of course not. Fraud still happens despite the illegality. But it’s better than not being able to trust anything ever again.
I hate to break it to you, but we’re never going to be able to trust anything ever again. At least, not the way we used to. In the future, without any doubt, we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there, essentially from first principles. We will have to build a web of investigation and trust to determine and mark what information is trustworthy and what is not, especially new information.

None of this exists in any meaningful way yet, and the systems we used to have for it, like academic research and journalism, would have been catastrophically inadequate to handle this onslaught even at their peak. And they are nowhere near their peak anymore, having been deliberately eroded into a shadow of their former effectiveness so some assholes could get rich and powerful.

So hopefully we’ll be able to rely on solid ground like Wikipedia and… books as a starting point, and nobody gets around to burning the Library of Alexandria down in their rage against “woke stuff”, because otherwise we’re going to be rebuilding our information spaces pretty much from scratch in the near future, probably at the same time we’re rebuilding civilized society in general.

If this sounds incredibly uncertain, tedious, and painful: yes, it will be, especially at first. But we will get better at it, eventually. We will develop new systems for it, we will become fluent in information again, and the friction will fade.
I wish we could get to that stage right away, but unfortunately it will have to wait. We can’t do anything to improve the swimming pool while we are currently drowning in it. This is the reality that rampant and unchecked use of AI technologies by soulless corporations and corrupt governments has wrought. Logic and reason never stood a chance, and we are entering the digital dark ages. The enlightenment is probably coming someday, but don’t hold your breath for it.
Support your local library, that’s the most helpful thing I can think of for individuals to do. Librarians know their shit.
we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there
They used to teach this in schools under “critical thinking skills.” Following the chain of sources back to the primary sources was a task I had to do (at least in part) more than once in secondary school.
Authoritarians don’t like that tho.
I agree. I’ve thought a lot about how valuable signing a simple message with a key can be. In an age where machines can appropriate your likeness, how do you accumulate and shed reputation? How do you prove it was you? One low-tech version was taking a photo with a newspaper to prove you are a real person. Another is exchanging a public key with a person in real life so you can have reasonable certainty that communications signed with that key are legit.

Since this boils down to denying what our eyes have seen, governments and businesses who are very keen on controlling reality are making their plays. Even identifying yourself cryptographically is only a temporary fix to maintain an existing identity. Your kids will be profiled and mimicked from day one. This whole slippery slope we’ve been sliding down lately seems very foreseen. It feels like these traps were engineered a very long time ago.
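The signing part really is low-tech. A sketch using Node’s built-in crypto (Ed25519), just to show how little machinery it takes; the hard part is exchanging the public key out of band, as described:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generate a keypair once; publish the public key, guard the private one.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from("This statement really came from me.");

// Ed25519 doesn't take a separate digest algorithm, hence `null`.
const signature = sign(null, message, privateKey);

// Anyone holding the public key can check the signature offline.
console.log(verify(null, message, publicKey, signature)); // true
```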
I think education is the absolute most important thing for a functioning post-truth society. Kids need to smell shit from 20 miles away because the world is full of traps for your mind same as it is for your wallet and your physical body. We also need to be able to verify and trust our tech stack. We need to pass down the stories of the times common people lost and the times common people won. We need to read and discuss philosophy. We’ll also have to tackle American religion head on. Also excessively addictive entertainment designs. We are a deeply flawed society and I’m not sure where we should start except for taking some of our time back so people actually have the opportunity to think about these things.
Can they even code them to do that? They’ve struggled so much with the em dash and never managed to block Disney’s characters, so I figure they can’t do it 100% of the time even if they want to.
and there would be legal consequences if caught.
Like for driving over the speed limit? Or putting glass in the regular trash instead of the recycling? Yeah, just what I need in my life, another arbitrary law that’s enforced 0.0001% of the time as a flex by the people in power to target and abuse people they don’t like.
Normally, I’d be reading about NPM security breaches and AI security breaches separately, but now I can get them in the same article! Truly amazing how technology has progressed.
Fun times ahead!
If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
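If you want to automate that lockfile check, something like this crude scan is a starting point. The indicators are taken from the advisory above; a hit means “review manually,” not confirmed compromise:

```typescript
import * as fs from "node:fs";

// Crude lockfile scan for the indicators quoted above. Lockfile formats
// differ, and version strings can collide with unrelated packages, so
// every hit needs manual review.
const lockfiles = ["package-lock.json", "yarn.lock", "bun.lockb"];
const indicators = ["plain-crypto-js", "1.14.1", "0.30.4"];

for (const lock of lockfiles) {
  if (!fs.existsSync(lock)) continue;
  const text = fs.readFileSync(lock, "latin1"); // bun.lockb is binary
  for (const hit of indicators.filter((n) => text.includes(n))) {
    console.log(`${lock}: contains "${hit}" -- review manually`);
  }
}
```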
Lol 😂
This is because of an unrelated hack on npm’s latest build. Anyone with this version of npm is affected.
That axios supply chain attack was a bitch. There were extensions compromised from that shit.
It’s bad advice too, because the malware removed itself from those files to remove traces of itself.
By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcasted the discovery on X (formerly Twitter).
Ha, by an intern
Nice. One of the ways to write Chaofan in Chinese is 炒饭, which means fried rice. Amazing to be able to get that Twitter handle
This goes against the best practice of informing the company first so they can remediate. Now it’s a security nightmare for anyone running it locally.
Once companies started suing people trying to practice “responsible disclosure”, I stopped attacking people that choose maximum disclosure.
Responsible disclosure has always been a bit of a hedge. It’s rare to be able to show you are actually the first person/organization to discover a vulnerability.
We don’t really know if he contacted them before, do we?
This is just the UI right? Or the models too?
I mean, it’s not that big a deal. However, it would be another thing if the model itself leaked. Now that would be something.
Tool usage is very important. Qwen3.5 (135b) can already do wonderful things on OpenCode.
I dabble in local AI and this always blows my mind. How do people just casually throw 135b parameter models around? Are people like, renting datacenter hardware or GPU time or something, or are people just building personal AI servers with 6 5090s in them, or are they quantizing them down to 0.025 bits or what? what’s the secret? how does this work? am I missing something? like the Q4 of Qwen3.5 122B is between 60-80GB just for the model alone. That’s 3x 5090s minimum, unless I’m doing the math wrong, and then you need to fit the huge context windows these things have in there too. I don’t get it.
Meanwhile I’m over here nearly burning my house down trying to get my poor consumer cards to run glm-4.7-flash.
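For what it’s worth, the back-of-the-envelope math above checks out, assuming a typical Q4 quant lands around 4.5-5 bits per weight once scales and a few higher-precision layers are counted in:

```typescript
// Rough memory needed just for the weights of a quantized model.
function weightGiB(params: number, bitsPerWeight: number): number {
  return (params * bitsPerWeight) / 8 / 1024 ** 3;
}

console.log(weightGiB(122e9, 4.5).toFixed(1)); // ~63.9 GiB -> three 32 GB cards
console.log(weightGiB(122e9, 5.0).toFixed(1)); // ~71.0 GiB
// KV cache for long contexts comes on top of this.
```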
I pay for Ollama Cloud. As for the training of the big models, big companies do it using who-knows-what resources.
As they tell it, Claude Code is over 80% written by the models anyway…
The harness is as important as the model
Like a healthy brain. And just like a healthy brain, it’ll probably still hallucinate and make mistakes:
The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional “store-everything” retrieval.
As analyzed by developers like @himanshustwts, the architecture utilizes a “Self-Healing Memory” system.
We’re gonna make AGI and realize that being stupid sometimes and making mistakes is integral to general intelligence.
Actually, the people in the know…already knew this. We’ve known for years. Mistakes are required for learning.
A mistake is maybe just allowing room for evolution to take place?
being stupid sometimes and making mistakes is integral to general intelligence.
Smart people figured this out a long time ago.
https://www.amazon.com/s?k=nassim+taleb+antifragile&adgrpid=187118826460
https://www.goodreads.com/en/book/show/18378002-intuition-pumps-and-other-tools-for-thinking
That’s what makes us humans at least…