The Claude Code Source Leak Revealed More About Us Than Anthropic
Anthropic's entire Claude Code source leaked via npm source maps, and I argue they should open-source it. Meanwhile, the industry's reactions, ranging from cries of entitlement to horror at the "AI slop" under the hood, reveal more about our perceptions of software craftsmanship than they do about Anthropic's engineering practices.
The entire codebase of Anthropic's Claude Code product leaked via npm source maps. Within hours, it was mirrored across GitHub, where people forked it, tore it apart, and started porting it into various languages to prevent DMCA takedowns.
That morning, I fired off a LinkedIn post arguing that Anthropic should "just open source it." My rationale: their business isn't the Claude Code source itself; it's the token spend it drives. Claude Code is arguably the most effective token furnace Anthropic has for monetizing its inference models.
There's very little novel or proprietary information hiding in the code. Keeping it closed, however, significantly limits two important things. First, auditability, which has already surfaced several serious findings (see below), along with the massive labor force of heavily invested developers who want to contribute to and evolve the project alongside Anthropic. Second, the broader agentic-harness community and ecosystem, which ultimately drives even greater token spend on frontier inference models as innovations and novel solutions propagate through it.
In my opinion, Anthropic should publish the repo. License it however you want, and let the community do what it does best. After all, Claude Code is built on top of a vast pyramid of open source contributions, and it runs on an inference model trained on billions of lines of open source code.
To my surprise, this post took off: 10,000+ impressions, 50 comments, and a heated comment section that devolved into a genuine flame war and a quickly deleted subtweet.
"Entitled Much?"
One commenter dismissed my entire argument with two words: "Entitled much?" This kicked off a 20-reply thread that quickly spiraled into something unrelated to my actual argument. This reaction is worth examining because it's representative of a broader camp of discourse. The premise is that advocating for open-source access to developer tooling is somehow a demand or an act of entitlement. That framing misunderstands my argument entirely.
First of all, nobody is owed the source code. But Anthropic is a self-described public benefit corporation building AI tools that are having a transformational impact on software development (and, at this point, the broader economy), and keeping this tool proprietary feels counterproductive; it doesn't strategically serve Anthropic's own stated mission, business model, or customers. Honestly, when I first heard about this product, I went to GitHub expecting to find it there, and I was shocked when I didn't.
Claude Code consumes more tokens than any other Anthropic product. The source code isn't their moat; the inference layer is. Open-sourcing the client would boost adoption, increase token usage, and let the community improve the tool faster than any internal team could, even one armed with Claude Code itself. This isn't idealism; it's something we've seen firsthand through decades of commercial software engineering. It's simply good business sense for a company whose revenue grows with usage.
"AI Slop" and the Myth of Code Elegance
The second major reaction was one of horror at Claude Code's code quality. Engineers quickly dove into the leaked source and came out clutching their pearls, as expected. Honestly, who was truly surprised by this? The project leads and the company have consistently emphasized, essentially from the start, that Claude Code is used to develop the project itself. The reaction was visceral, and nobody captured it more vividly than Anthony Bucci, who called the codebase an "absolute dumpster fire" and declared he'd never seen code this bad in his career. His broader point was damning:
I've never seen code this bad. Like, ever, and I've worked on some whoppers. How *on Earth* is anyone who used to write about clean code, design patterns, and other "software craftsmanship" topics staking their software future on a tool like this? How do you make that make sense to yourself? "Spaghetti code" is nowhere near sufficient to describe what this is.
- Anthony Bucci on LinkedIn
Bucci's reaction was widely shared, and it captures a real sentiment. The code was labeled "AI slop," implying that Anthropic shipped something unworthy of a product generating billions in revenue. Spaghetti code, inconsistent patterns, the unmistakable fingerprints of LLM-assisted development. Mistakes a human could never make, right? How could this possibly be what's under the hood of a multi-billion-dollar product?

This attitude, that inelegant code is inherently bad, that aesthetic quality correlates with business value, has never been correct, and it predates LLMs by multiple decades. PHP was ugly. Early Ruby on Rails was a mess of monkey-patching. JavaScript itself was famously designed in ten days. The entire history of the web is built on code that made purists recoil. And every time, the ivory tower coders were proven wrong about what mattered.
I'll be direct about this. As I wrote on Bluesky:
People who are appalled by the awfulness of Claude Code's code quality, despite its multi-billion dollar success in less than a year, clearly were born after Web 2.0 — and it shows.
What matters is whether it ships, whether users adopt it, and whether there's a working business model somewhere in the system. Claude Code is a token furnace. It was never meant to be a monument to software craftsmanship (which may well have factored into the decision to keep it closed). It's meant to be the most effective interface between developers and Anthropic's inference layer, and by that metric, it's wildly successful. Arguably, it's the most effective tool Anthropic has ever built for driving revenue with its models.
Bucci asks how anyone can make this make sense to themselves. Here's how: AI slop/market fit. "AI slop" is what happens when you reach product-market fit at a velocity that traditional engineering culture can't process. That's not a bug in my book; it's a feature of a team that prioritized shipping over satisfying code reviewers on the internet. To date, there isn't a better example of a team shipping as fast, and as consistently, as the Claude Code team has.

What Did We Actually Learn?
Setting aside aesthetic judgments, the leak triggered something genuinely valuable: a deep public audit of the project from nearly every angle, and the findings are substantive. Researchers on Reddit, armed with Ghidra, MITM proxies, and a lot of patience, started documenting actual bugs in the codebase: not style complaints, but serious functional problems that affect billing and reliability.
Here's a select set of interesting issues they've found (so far):
Plugin resource consumption is unchecked. Claude Code plugins spawn persistent background servers without resource limits. Each agent session creates new polling instances that compound over time. Killing the processes doesn't help because agent sessions respawn them. Full cleanup requires uninstalling plugins, removing cached copies, and restarting the machine.
Cache billing is broken in at least two ways. First, a sentinel replacement bug: when conversation history contains a specific literal value, the system replaces the wrong sentinel, changing message content on every request and breaking the cache prefix. Second, since version 2.1.69, every session resume causes full cache misses because system-reminder blocks relocate between requests; researchers estimated this creates a 10-20x cost increase on resume requests. (A minimal sketch of why a relocated block defeats prefix caching follows this list.)
Token waste is systemic. Dynamic tool descriptions account for roughly 10% of fleet-wide cache creation token waste. The extractMemories function, enabled by default, doubles token usage — a 20-turn session at 650K context burns approximately 26 million tokens instead of 13 million.
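To make the resume-related cache misses concrete, here's a minimal sketch of the mechanic. This is entirely my own illustration, not Anthropic's code: the block layout and the cached_prefix_len helper are hypothetical, and the only real behavior it models is that a prompt cache pays off only up to the first block that differs from the previous request.

```python
def cached_prefix_len(previous: list[str], current: list[str]) -> int:
    """Count how many leading blocks the new request shares with the previous one.

    A prompt cache only pays off for this shared prefix; everything after the
    first divergent block is billed as fresh input.
    """
    shared = 0
    for prev_block, cur_block in zip(previous, current):
        if prev_block != cur_block:
            break
        shared += 1
    return shared


# Turn 1: the reminder block sits right after the system prompt.
turn_1 = ["system prompt", "<system-reminder>", "user: fix the bug", "assistant: done"]

# Stable layout: new content is appended, the old prefix is untouched -> big cache hit.
turn_2_stable = turn_1 + ["user: now add tests"]

# Relocated reminder: the block moves on resume, so the prefix diverges almost immediately.
turn_2_moved = ["system prompt", "user: fix the bug", "assistant: done",
                "<system-reminder>", "user: now add tests"]

print(cached_prefix_len(turn_1, turn_2_stable))  # 4 -> most input tokens are cheap cache reads
print(cached_prefix_len(turn_1, turn_2_moved))   # 1 -> nearly everything is re-billed as fresh input
```

Since cache reads are priced at a small fraction of uncached input (roughly a tenth of the base rate at current published pricing), falling from a near-full prefix hit to a near-full miss on every resume is how you land in the 10-20x range the researchers estimated. The extractMemories finding is plainer arithmetic: doubling the context processed across a 20-turn session at 650K context turns roughly 13 million tokens into roughly 26 million.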
These are substantive issues that agentic coding harnesses may have introduced and human reviewers missed. They're the kinds of bugs that cost users real money, and that Anthropic's internal team may not have caught precisely because the code wasn't subject to public scrutiny. This is exactly the kind of finding that open source development is designed to surface.
Open Source Is the Right Call
Anthropic is a public benefit corporation. Their stated mission involves the responsible development and maintenance of advanced AI for the long-term benefit of humanity. Keeping their most popular developer tool proprietary doesn't advance that mission. It just concentrates the productivity gains among engineers at companies that can afford the subscriptions... until they can't.
The audit findings make the case even stronger. Within days of the source becoming available, the community identified billing bugs, resource leaks, and architectural issues that have a direct financial impact on all customers. A 50-lesson architecture deep dive was already published, likely built with Claude Code itself.
The leak sorted the reactions into three camps. The first cried entitlement and missed the strategic argument entirely. The second raised concerns about code quality and missed the business reality. The third, the researchers, auditors, and engineers who actually dug in, found real problems and started fixing them.
Only one of those groups produced something useful, and they could only do it because the code was public. The rest can be called what it is: clout chasing. I'll admit my own post is guilty of that to a degree.
Open source has always been critical to the software industry. That hasn't changed in the age of LLMs. If anything, the stakes are higher now. When AI-powered developer tools can silently break caching and double your token bills, public scrutiny isn't a nice-to-have. It's a requirement.
Anthropic: just ship the repo. At this point, you have nothing left to hide.