
AI in 15 — April 1, 2026

April 1, 2026 · 14m 58s
Kate

Anthropic's source code just fell out of an npm package like loose change from a coat pocket. Nineteen hundred files, half a million lines of code, and a secret "Undercover Mode" that tells AI to hide the fact that it's AI. You can't make this up.

Kate

Welcome to AI in 15 for Wednesday, April 1, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy April Fools' Day, Marcus, though I should say upfront, none of today's stories are pranks. We checked. Today we're diving deep into the Claude Code source code leak and everything the community found inside. OpenAI just closed a hundred and twenty-two billion dollar funding round. A Caltech spinoff released the first commercially viable one-bit large language models. Claude Code users are burning through their usage limits at ten to twenty times the expected rate. And a solo developer built a five-hundred-panel Bloomberg Terminal clone in three weeks with AI. Let's go.

Kate

Anthropic's Claude Code source code exposed through an npm packaging mistake.

Kate

OpenAI raises a hundred and twenty-two billion at an eight hundred and fifty-two billion dollar valuation.

Kate

And one-bit AI models are running on phones at forty-four tokens per second.

Kate

Marcus, let's start with the Claude Code leak because this is genuinely one of the wildest stories we've covered. Walk us through what happened.

Marcus

So version 2.1.88 of the Claude Code npm package shipped with a fifty-nine point eight megabyte JavaScript source map file. Source maps are debug files that link minified production code back to the original source. This one pointed directly to a publicly accessible zip archive sitting on Anthropic's own Cloudflare storage. Anyone who found it could download the complete TypeScript source code. Nineteen hundred files, over five hundred and twelve thousand lines.
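
A source map pointer is just a trailing comment in the minified JavaScript, so the exposure Marcus describes is easy to check for. This is an illustrative sketch, not Anthropic's actual file layout: the scanning logic and file names are assumptions, and it works on any npm tarball's raw bytes.

```python
# Sketch: scan the .js files inside an npm package tarball for
# sourceMappingURL pointers, the kind of debug reference that linked
# Claude Code's minified bundle back to its source. Illustrative only.
import io
import re
import tarfile

def find_source_map_refs(tgz_bytes: bytes) -> list[str]:
    """Return every sourceMappingURL found in .js files inside a tarball."""
    refs = []
    with tarfile.open(fileobj=io.BytesIO(tgz_bytes), mode="r:gz") as tar:
        for member in tar.getmembers():
            if not member.name.endswith(".js"):
                continue
            f = tar.extractfile(member)
            if f is None:
                continue
            data = f.read().decode("utf-8", errors="ignore")
            # A source map pointer is a trailing comment like:
            #   //# sourceMappingURL=cli.js.map
            refs += re.findall(r"sourceMappingURL=(\S+)", data)
    return refs
```

Anyone who fetched the published tarball from the npm registry could run a check like this; the leak happened because the referenced `.map` file resolved to a publicly accessible archive of the original TypeScript.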

Kate

And the developer community found it fast.

Marcus

Within hours. The primary Hacker News thread hit nineteen hundred and fifty points with nearly a thousand comments. People were dissecting everything. And what they found inside was, let's say, illuminating.

Kate

Okay, the one everyone's talking about is "Undercover Mode." What is that?

Marcus

When activated, it injects a system prompt that reads, and I'm quoting here, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." So when Anthropic employees use Claude Code to contribute to open-source projects, the AI is explicitly instructed to make its output look human-written. No mention of Anthropic, no mention of AI assistance.

Kate

That's going to fuel the transparency debate in a big way.

Marcus

It already has. Open-source communities operate on trust. The idea that AI-authored commits are being deliberately disguised as human work touches a nerve, especially after yesterday's story about GitHub Copilot injecting ads into pull requests. The open-source ecosystem is having a serious reckoning with AI transparency right now.

Kate

What else did the code reveal?

Marcus

Several things. Anti-distillation mechanisms that inject fake tool definitions into API requests, designed to poison the training data of competitors who might try to copy Claude's behavior. A frustration detection system that uses regex patterns, not AI, to flag when users are getting annoyed. One commenter called that an ironic choice for an AI company. And forty-four feature flags covering unreleased capabilities. Two stood out. One called KAIROS, which appears to be an always-on background daemon with GitHub webhooks and memory consolidation. And one called BUDDY, which is apparently a terminal pet with eighteen different species.

Kate

Wait, a terminal pet?

Marcus

Eighteen species, Kate. Someone at Anthropic built a virtual pet system into their AI coding agent. The priorities are fascinating.

Kate

How did Anthropic respond?

Marcus

They called it "a release packaging issue caused by human error, not a security breach." But Fortune pointed out this was their second security embarrassment in days, coming right after they accidentally leaked details about an internal project called Mythos. Before DMCA takedowns began, the source code had been forked over forty-one thousand five hundred times on GitHub. One Hacker News commenter noted that a significant portion of the codebase was probably written by the AI they're shipping, which is a delicious irony.

Kate

This comes at the worst possible time for Anthropic, competing head-to-head with OpenAI for developers.

Marcus

Exactly. And it gives competitors a rare look at Anthropic's product roadmap. Those feature flags are essentially a preview of where Claude Code is headed. That's valuable competitive intelligence handed out for free through a packaging mistake.

Kate

Speaking of OpenAI, they just closed the largest private fundraise in history. Marcus, a hundred and twenty-two billion dollars. Let that number breathe for a second.

Marcus

Eight hundred and fifty-two billion dollar post-money valuation. Amazon committed fifty billion, though thirty-five billion of that is contingent on OpenAI either going public or reaching AGI, whatever that means contractually. Nvidia put in thirty billion. SoftBank co-led with thirty billion alongside Andreessen Horowitz and D.E. Shaw. And for the first time, retail investors were allowed in, contributing three billion through bank channels.

Kate

OpenAI says they're now doing two billion a month in revenue.

Marcus

Which sounds enormous until you do the math. That's twenty-four billion annualized, up from thirteen point one billion for all of 2025. So about four billion in growth since year-end. Meanwhile, some analysts suggest Anthropic added six billion in revenue in February alone and may actually have a higher run rate now. The growth story isn't as clean as the headline suggests.

Kate

And a lot of that hundred and twenty-two billion comes with strings attached.

Marcus

The word "committed" is doing heavy lifting. Amazon's thirty-five billion is conditional. The overall structure suggests this is as much about signaling and positioning for an IPO as it is about actual capital deployment. But even discounting the conditional portions, it's still an astronomical amount of money flowing into one company. The funds are earmarked for chips, data centers, and talent as OpenAI pushes toward what they're calling artificial general intelligence.

Kate

Does the valuation make sense to you?

Marcus

At eight hundred and fifty-two billion, OpenAI would be among the most valuable companies on Earth, and it has never turned a profit. The bet is that AI infrastructure investment today generates outsized returns tomorrow. Whether that bet pays off is the defining question of this entire industry cycle.

Kate

Now for a story that genuinely excited me. PrismML, a Caltech spinoff, just emerged from stealth with the first commercially viable one-bit large language models. Marcus, explain what one-bit means and why it matters.

Marcus

Standard models store each parameter at sixteen bits of precision. PrismML's Bonsai models compress every single parameter, embeddings, attention layers, everything, down to one bit. Their flagship Bonsai 8B model packs eight point two billion parameters into just one point one five gigabytes. That's roughly twelve to fourteen times smaller than an equivalent sixteen-bit model.
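
The compression figures check out on the back of an envelope. A quick sanity check, using only the numbers quoted above:

```python
# Back-of-envelope check of the quoted compression figures.
params = 8.2e9                       # Bonsai 8B parameter count

fp16_bytes = params * 2              # 16-bit weights: 2 bytes each
one_bit_bytes = params / 8           # 1-bit weights: 8 params per byte

print(f"fp16 size:  {fp16_bytes / 1e9:.1f} GB")     # roughly 16 GB
print(f"1-bit size: {one_bit_bytes / 1e9:.2f} GB")  # roughly 1 GB
print(f"raw ratio:  {fp16_bytes / one_bit_bytes:.0f}x")  # 16x
```

The theoretical ratio is 16x; the shipped 1.15 GB file and the quoted 12-to-14x figure imply some overhead beyond pure 1-bit weights, plausibly scales and metadata, though PrismML hasn't broken that down.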

Kate

And it actually works?

Marcus

The numbers are striking. A hundred and thirty-one tokens per second on an M4 Pro Mac. Three hundred and sixty-eight tokens per second on an RTX 4090. And forty-four tokens per second on an iPhone. Because one-bit weights replace most multiplication with simple addition, energy efficiency improves four to five X. PrismML claims an "intelligence density" of one point zero six per gigabyte compared to zero point one for Qwen3 8B. That's a ten X improvement in capability per unit of model size.
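
The energy claim follows from a simple property of 1-bit weights: when a weight can only be +1 or -1, a dot product needs no multiplications against the weights, just additions and subtractions. A minimal sketch of the idea, with a per-row scale as used in common binarization schemes; PrismML's actual kernels are not public, so this is illustrative:

```python
# Why 1-bit weights replace multiplication with addition: with weights
# constrained to {-1, +1}, a dot product is a signed sum of the inputs
# times one per-row scale. Illustrative sketch, not PrismML's kernel.

def binarize(row: list[float]) -> tuple[list[int], float]:
    """Quantize a weight row to signs plus one scale (mean magnitude)."""
    scale = sum(abs(w) for w in row) / len(row)
    signs = [1 if w >= 0 else -1 for w in row]
    return signs, scale

def binary_dot(signs: list[int], scale: float, x: list[float]) -> float:
    # No multiplies against weights: add or subtract each input,
    # then apply a single scale at the end.
    acc = 0.0
    for s, xi in zip(signs, x):
        acc += xi if s == 1 else -xi
    return scale * acc
```

Replacing billions of multiply-accumulates with add-subtracts is where the claimed four-to-five-times energy improvement comes from, since integer addition is far cheaper in silicon than floating-point multiplication.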

Kate

Simon Willison got it running on his phone?

Marcus

Downloaded one point two gigabytes through an app called Locally AI and said it was "very impressive." A community member added AVX2 support to the CPU kernel and got twelve tokens per second on a 2018 laptop. The models are Apache 2.0, fully open source on Hugging Face.

Kate

So you could run a competitive language model on your phone without any cloud connection.

Marcus

That's the breakthrough. On-device AI that doesn't phone home. Think smart assistants that work offline, autonomous agents in areas without connectivity, privacy-preserving AI that never sends your data anywhere. If the quality holds up under broader testing, this could genuinely democratize AI inference. Not everyone has access to cloud GPUs, but almost everyone has a phone.

Kate

Back to Anthropic, and not in a good way. Claude Code users are reporting that their usage limits are evaporating at alarming rates. What's going on, Marcus?

Marcus

Since around March 23, Max 5 plan users paying a hundred dollars a month are burning through their entire allocation in one to two hours on workloads that previously lasted a full day. One Max 20x subscriber saw usage jump from twenty-one percent to a hundred percent on a single prompt. A reverse engineer reportedly found two independent bugs that break the prompt cache, silently inflating costs by ten to twenty X.

Kate

So users are being charged dramatically more than they should be?

Marcus

Effectively, yes. The prompt cache has a five-minute lifetime, so even a brief break triggers full-cost recalculation when you resume. Compounding this, Anthropic simultaneously ended a March promotion that had doubled usage limits and implemented reduced quotas during peak hours, both poorly communicated. Users couldn't tell whether their limits shrank, their consumption spiked, or both.
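
The arithmetic behind the 10-to-20x figure is straightforward. Anthropic has priced cached input reads at a small fraction of the full input rate (around 10% is the commonly cited ratio, an assumption here), and a coding session replays its entire growing history on every turn. If the cache silently stops hitting, that whole history is re-billed at full price each time:

```python
# Rough arithmetic for the reported cost inflation: a session replays
# its full history each turn, so losing cache hits re-bills everything
# at the full rate. The 10% cache-read ratio is an assumption.

CACHE_READ_RATIO = 0.1   # cached tokens billed at ~10% of base rate

def session_cost_units(history_tokens: list[int], cache_hit: bool) -> float:
    """Relative input cost of replaying a growing history each turn."""
    total = 0.0
    for hist in history_tokens:
        rate = CACHE_READ_RATIO if cache_hit else 1.0
        total += hist * rate
    return total

turns = [20_000 * (i + 1) for i in range(10)]  # history grows each turn
cached = session_cost_units(turns, cache_hit=True)
broken = session_cost_units(turns, cache_hit=False)
print(f"inflation: {broken / cached:.0f}x")
```

With these assumptions the inflation is exactly the inverse of the cache-read discount, 10x; add full-rate new tokens and cache-write premiums and a real workload lands in the 10-to-20x band users are reporting.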

Kate

And there's no transparent usage meter.

Marcus

That's the core frustration. Users can't see what's consuming their tokens. They just hit a wall. Anthropic says it's their top priority, but this is happening alongside five major platform outages in March and now the source code leak. Developers who built their workflows around Claude Code are seriously reconsidering.

Downgrading to version 2.1.34 reportedly helps, which tells you everything you need to know about where the bugs were introduced.

Kate

Let's end on a fun one. A solo developer built a five-hundred-and-sixteen-panel Bloomberg Terminal clone in three weeks using Claude Code. Marcus, is this real?

Marcus

A developer named Sarat Tsai posted it as a Show HN. It's called Neuberg. Real-time financial data across fixed income, derivatives, commodities, equities, credit, macro. The information density that professional traders expect, built by one person because, as he put it, he couldn't justify paying twenty-four thousand dollars a year for a Bloomberg Terminal as an independent trader.

Kate

The Hacker News crowd was divided?

Marcus

Split between admiration for the ambition and skepticism about whether panel count is a meaningful metric. Fair point. But one comment captured the moment perfectly. "This is the year when solo software firms start to dominate." Whether Neuberg specifically succeeds is almost beside the point. The fact that one developer with AI tools can build something that approximates what required teams of dozens just a few years ago, that's the story.

Kate

The thousand X developer thesis in action.

Marcus

With a very large asterisk about depth, accuracy, and maintainability. But as a proof of concept for what's possible, it's hard to argue with.

Kate

Wednesday big picture. Anthropic's code leaks out of an npm package. OpenAI raises more money than some countries' GDP. One-bit models run on phones. And solo developers build financial terminals in weeks. Marcus, what connects all of this?

Marcus

Speed outrunning control. Anthropic shipped so fast they left their source code in the package. OpenAI is raising money at a pace that makes due diligence almost impossible. One-bit models compress AI to a size that fits in your pocket but also escapes any centralized oversight. And solo developers are building complex financial tools without the institutional guardrails that teams provide. Everything in AI is accelerating, the capabilities, the capital, the access. But the governance, the security practices, the quality assurance? Those are still moving at human speed. And the gap between what we can build and what we can responsibly manage is widening every single week.

Kate

Moving fast and breaking things, except now the things are bigger.

Marcus

Much bigger. And the npm package isn't the only thing that needs better packaging.

Kate

That's your AI in 15 for Wednesday, April 1, 2026. See you tomorrow.