AI in 15 — April 25, 2026

April 25, 2026 · 16m 44s
Kate

Google just made the largest investment it has ever made in another company. Forty billion dollars into Anthropic. Four days after Amazon dropped twenty-five billion. That's sixty-five billion in fresh capital lined up for a single AI lab in one week.

Kate

Welcome to AI in 15 for Saturday, April 25, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Saturday show, Marcus, and the megadeal of the year hit yesterday. Google committing up to forty billion dollars to Anthropic in a cash-plus-compute structure that reshapes the frontier. Tesla quietly disclosed a two-billion-dollar AI hardware acquisition in a single sentence in its 10-Q, without a word to shareholders. Ruby creator Yukihiro Matsumoto unveiled a self-hosting AOT compiler he built in one month with Claude. A South Korean man was arrested over an AI-generated wolf photo that derailed a nine-day search. And we have follow-ups on GPT-5.5, DeepSeek V4, and the Claude Code postmortem from yesterday's show. Let's go.

Kate

Google's forty-billion-dollar Anthropic megadeal.

Kate

Tesla buries a two-billion AI acquisition in one sentence.

Kate

And Matz writes a 21,000-line compiler in 30 days using Claude.

Kate

Lead story, Marcus. Alphabet announced yesterday it will invest up to forty billion dollars into Anthropic. Walk me through the structure, because the headline number is doing a lot of work here.

Marcus

It is doing a lot of work, Kate. Ten billion in cash up front, locked in at a three-hundred-and-fifty-billion-dollar valuation. Another thirty billion contingent on Anthropic hitting performance milestones over time. Bundled with the cash is a five-gigawatt Google Cloud compute commitment over five years, layered on top of an earlier three-and-a-half-gigawatt TPU agreement that begins in 2027. This is the largest single investment Alphabet has ever made in another company.

Kate

And it lands four days after Amazon's twenty-five billion.

Marcus

Right. Anthropic now has roughly sixty-five billion dollars of fresh capital lined up in a single week. Run-rate revenue surpassed thirty billion this month, up from nine billion at the end of 2025. Reporting suggests they're pricing the next round at an eight-hundred-billion-dollar valuation with an October IPO on the table. The numbers stop looking like a startup investment and start looking like an industrial alliance.

Kate

Why is Google doing this when Gemini is one of their flagship products?

Marcus

Two reasons, Kate. First, hedging. Sundar Pichai is making sure that even if Gemini doesn't win the AI race, Alphabet is still the cloud and chip provider underneath whoever does. Second, and this is the part Hacker News won't stop pointing at, the money is circular. Anthropic takes Google's ten billion and spends it on Google Cloud and TPUs. On Alphabet's books, that capital comes back as cloud revenue. Amazon's deal works exactly the same way.

Kate

Three hyperscalers, one model maker, money going round and round.

Marcus

That's the structural critique. Hacker News commenters are calling this the most aggressive circular-financing flywheel in tech history. Cloud providers funding the model maker that buys their compute. The optimistic read is that Anthropic genuinely needs the capacity, and the model quality has visibly snapped back this month after weeks of decline because they finally have the chips. The skeptical read is that three hyperscalers are passing the same dollars in a circle to inflate one another's books ahead of an IPO. Both reads have evidence. The IPO will tell us which one is correct.

Kate

And it explains why Anthropic could afford to reset usage limits and absorb the Claude Code mess this week.

Marcus

Exactly. As we covered yesterday, Anthropic published a postmortem admitting three engineering missteps caused a month of quality decline on Claude Code. Resetting usage limits and absorbing the goodwill hit is much easier when you've just locked in sixty-five billion in funding. The question for the rest of the industry is what happens to OpenAI's positioning. They just doubled GPT-5.5 prices this week. They've raised at a seven-hundred-and-thirty-billion-dollar pre-money valuation. And now Anthropic is arguably better-capitalized than they are. The competitive pressure on Sam Altman just went up significantly.

Kate

Quick hits, Marcus. And the first one is genuinely strange. Tesla.

Marcus

In Note 14 of its Q1 2026 10-Q filing, under Subsequent Events, Tesla quietly disclosed it had agreed to acquire an unnamed AI hardware company for up to two billion dollars in stock. The structure is unusual. Only about two hundred million is guaranteed. The other one-point-eight billion is tied to, quote, service conditions and performance milestones dependent on the successful deployment of the company's technology. Tesla never mentioned the deal in its shareholders' letter or on the earnings call.

Kate

They mentioned a similarly sized SpaceX investment but stayed silent on a two-billion-dollar acquisition.

Marcus

That's the part raising eyebrows. The timing also matters. April 2026 saw the AI5 chip tape-out on April fifteenth, the Terafab semiconductor partnership with Intel that we covered yesterday, and Musk's January announcement that he was restarting Dojo. Hacker News commenters are speculating the target is either the team that originally built Dojo, or a chip-packaging startup feeding Terafab. In Q1 alone, Tesla deployed roughly four billion across the SpaceX investment, this acquisition, and AI infrastructure. Against just four hundred and seventy-seven million dollars in automotive GAAP net income.

Kate

So Tesla is spending eight times its car-business profit on AI bets.

Marcus

And they're not telling shareholders the details. The aggressive earn-out structure is its own signal. Two hundred million guaranteed against one-point-eight billion contingent tells you Tesla itself isn't sure the technology works. They're paying two hundred million now to find out. The silence on the earnings call is the other signal. Musk is layering big AI bets faster than disclosure conventions can keep up with, and shareholders are along for the ride. Tesla is increasingly an AI-hardware company that happens to make cars on the side, and this filing is the cleanest evidence yet.

Kate

DeepSeek V4 follow-up. We covered the launch yesterday. What's new from independent testers?

Marcus

More numbers came in overnight, Kate, and they're sharper than the initial release claims. V4-Pro is now ranked the number-one open-weight model on Vals AI's Vibe Code Benchmark, and the gap to second place is substantial. Eighty-point-six percent on SWE-bench Verified, within two-tenths of a point of Claude Opus 4.6. Sixty-seven-point-nine on Terminal-Bench 2.0, which actually beats Claude. Ninety-three-point-five on LiveCodeBench. Codeforces rating of three thousand two hundred and six. And the price still lands at three dollars and forty-eight cents per million output tokens versus thirty dollars for OpenAI.

Kate

So roughly one-tenth the price for genuinely competitive coding performance.

Marcus

Yes. And the geopolitical subtext lands harder when you stack it against the Anthropic news. Every dollar Anthropic raises from Google and Amazon to fund proprietary frontier models is a dollar competing against a free MIT-licensed alternative downloadable today. Whether you trust DeepSeek's benchmarks given who's grading them is a separate question, but enterprises in regulated industries are already running pilots. The integration with Huawei's Ascend chips is the part US policymakers should be watching most closely. It signals a Chinese AI stack that no longer needs Nvidia, which means export controls have a shrinking window of relevance.

Kate

My favorite story of the day, Marcus. Matz. The creator of Ruby. He showed up at RubyKaigi this week with something extraordinary.

Marcus

He unveiled Spinel, a self-hosting ahead-of-time native compiler for Ruby. Eleven-point-six times geometric-mean speedup over standard CRuby. Whole-program type inference, generates optimized C code, compiles itself into a native binary. The trade-offs are real. No eval, no dynamic metaprogramming, no method_missing, no define_method, no Thread or Mutex. Fibers are supported. ASCII and UTF-8 only.

Kate

Stripped-down Ruby with compiled performance. Important, but not unprecedented. So what's the surprise?

Marcus

The velocity, Kate. Matz built it in approximately one month with Claude's assistance. The codegen file alone is twenty-one thousand lines of Ruby with up to fifteen levels of nesting in some methods. One of the world's most respected language designers shipped what would normally be a year of compiler work in thirty days. Hacker News commenters are calling it, quote, obviously super-impressive but clearly not maintainable without an AI agent. A single human almost certainly can't safely refactor it.

Kate

So you get artifacts a single human couldn't have built, but also artifacts a single human can't maintain.

Marcus

That's the new tradeoff in one sentence. Ruby gets compiled performance. The open-source world gets a new question about who really owns code that only an AI can read. This is the cleanest signal yet that AI-assisted development isn't just helping juniors. It's letting senior practitioners ship work at a velocity that genuinely changes what one person can build. The maintainability question is open, but Matz isn't a junior making mistakes. He's deliberately accepting the tradeoff. That tells you where serious engineering is heading.

Kate

Anthropic Mythos follow-up, Marcus. We covered the capability story and the dispute over the marketing claims yesterday. Anything new?

Marcus

One uncomfortable detail surfaced this week. Hackers reportedly breached the gated Project Glasswing environment in late April by guessing the model's URL. Anthropic confirmed the access was limited and contained, but it's an awkward security story for a model that was specifically gated on safety grounds. Former Acting National Cyber Director Kemba Walden also went public this week, warning that small water utilities, power-distribution agencies, and SMEs cannot patch fast enough to survive the asymmetric threat. The Trump administration is now actively trying to assess the risk.

Kate

So the model that was too dangerous to release got partially leaked anyway.

Marcus

The pattern repeats. Gating frontier capabilities behind partner agreements is a defensive measure with a short half-life. As we discussed yesterday, XBOW already showed that GPT-5.5, generally available with an API key, matches or exceeds Mythos-class offensive-security performance. If Mythos-class capabilities hit open-source within a year, and DeepSeek V4 makes that plausible, every under-resourced municipal IT shop in America is on the wrong side of a vulnerability arms race. The defensive side is structurally behind, and capital is flowing in the wrong direction.

Kate

Last quick hit, Marcus, and it might be the most concrete AI story of the week. South Korea. A wolf, a fake photo, nine days of chaos.

Marcus

On April eighth, a wolf named Neukgu escaped from O-World zoo in Daejeon. A man in his forties, just for fun, generated a photo-realistic AI image of the animal trotting through a road intersection and posted it. The image was convincing enough that Daejeon city authorities issued an emergency alert text to residents warning the wolf had moved toward that intersection. Police even displayed the AI image at an official press briefing.

Kate

So city emergency services rerouted around a fake photo.

Marcus

For nine days. The wolf wasn't actually recaptured until April seventeenth, after a tipster spotted it near an expressway. The man was arrested April twenty-third on charges of obstructing official duties by deception. He faces up to five years in prison or a six-thousand-seven-hundred-dollar fine. Why this matters, Kate, isn't the legal outcome. It's the proof of concept. A single AI-generated image hijacked the operational tempo of an entire city's emergency services for over a week.

Kate

Not a deepfake of a politician. Not election interference. Just one bored guy.

Marcus

And a free image generator. That's the lesson. We spent 2024 and 2025 worrying about state actors using deepfakes against elections. The actual harm surface is far broader and far more mundane. Public infrastructure operates on the assumption that incoming images are evidence. That assumption is now broken at the cost of a free tool and ten minutes. Every emergency-services protocol in the world needs to update for this. Most won't, until something more serious than a wolf gets loose.

Kate

Saturday big picture, Marcus. Pull the threads together for me.

Marcus

Today's stories all sit on one fault line, Kate. The frontier-AI economy is consolidating, fast. Google just put forty billion into Anthropic, four days after Amazon put in twenty-five billion. Three hyperscalers, one model maker, capital flowing in a circle that routes most of the spend straight back into cloud revenue. Tesla quietly bought a two-billion-dollar AI hardware company without telling shareholders. OpenAI doubled its prices this week. And on the other side of the wall, DeepSeek handed away a frontier-class open-source model for one-tenth the price.

Kate

And the capability story at the bottom of all of it.

Marcus

One lab is voluntarily withholding a model on national-security grounds while a competitor ships the equivalent openly. A single AI image can shut down a city's emergency response for nine days. And one engineer with Claude can ship a year of compiler work in a month. The compute race is no longer about who builds the best model. It's about who survives commodification while the technology itself races past every institutional capacity to absorb it.

Kate

One line to close.

Marcus

The capital flowing into Anthropic this week is the largest bet anyone has ever placed on a private AI company, Kate, and it might still not be enough to outrun the open-weight floor coming out of China. That's the 2026 AI economy in a sentence.

Kate

That's your AI in 15 for Saturday, April 25, 2026. See you tomorrow.