
AI in 15 — March 28, 2026

March 28, 2026 · 15m 28s
Kate

Anthropic's biggest secret just got blown wide open by a misconfigured content management system. Three thousand unpublished files sitting in an unencrypted public cache. And buried in there, a model that Anthropic itself says is "currently far ahead of any other AI model in cyber capabilities." The safety-first lab, undone by a checkbox that defaulted to public.

Kate

Welcome to AI in 15 for Saturday, March 28, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy Saturday, Marcus. We've got a big one today. Anthropic accidentally leaked the existence of a next-generation model called Claude Mythos through a CMS blunder, and the details are startling. GitHub is moving forward with training AI on your Copilot data by default. OpenAI officially killed Sora, and Disney walked away. A supply chain attack hid malware inside WAV audio files. Linux kernel maintainers say AI bug reports suddenly got good overnight. Colorado is trying to ban AI surveillance pricing. And The Guardian published a major investigation into AI in military targeting. Let's get into it.

Kate

Claude Mythos leaks, and Anthropic warns its own model is dangerously good at cyberattacks.

Kate

A Python package hid credential-stealing malware inside audio files.

Kate

And AI bug reports in the Linux kernel went from junk to legitimate overnight.

Kate

Marcus, let's start with the Mythos leak because the irony here is almost too thick. The company that positions itself as the responsible AI lab had three thousand unpublished blog assets just sitting in a public, unencrypted cache.

Marcus

It's a bad look, no question. Fortune discovered the exposed files on Wednesday. Among them was a draft blog post introducing Claude Mythos, which Anthropic describes as "a step change" in capability and "the most capable we've built to date." The draft introduces a new model tier called Capybara, which sits above Opus. So the capability ladder just got a new rung.

Kate

And the performance claims are dramatic.

Marcus

"Dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others" compared to Opus 4.6. But here's the part that really got attention. Anthropic's own internal assessment says Mythos is "currently far ahead of any other AI model in cyber capabilities" and warns it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." That's not critics saying this. That's Anthropic warning about its own model.

Kate

So they're rolling it out carefully?

Marcus

Very carefully. Select early-access customers first, with priority going to cybersecurity-focused organizations so defenders can prepare before broader availability. There are also plans for an invitation-only CEO summit in Europe to demo unreleased capabilities. The strategy makes sense. If your model is genuinely dangerous offensively, you want the good guys to have it first.

Kate

But Marcus, the fact that all of this came out through a security blunder rather than a planned announcement. That has to undermine the message.

Marcus

Completely. You cannot credibly warn the world about AI cybersecurity risks while your CMS defaults digital assets to public unless someone remembers to click a checkbox. Anthropic called it human error, which it is, but it's also an organizational failure. CoinDesk reported Bitcoin and software stocks dipped on the news because investors read the cybersecurity warnings and got spooked. And this came the same week as the fourteen-hour Claude outage that pushed their Q1 uptime below ninety-nine percent. Rough week operationally for a company whose models keep getting better.

Kate

The timing is almost comically bad. Leaking your most powerful model while your existing infrastructure is struggling.

Marcus

The model capabilities and the operational maturity are on two very different trajectories right now. They need to close that gap quickly, especially with the Anthropic-Pentagon case still in the headlines. Credibility matters.

Kate

From Anthropic's bad week to a supply chain attack that reads like a spy novel. Marcus, we've been tracking TeamPCP's attacks all week. Sunday was Trivy, Wednesday was LiteLLM. Now they've hit the Telnyx Python SDK, and the technique is wild.

Marcus

This is genuinely creative malware engineering. Instead of hiding payloads as base64 blobs or executable files, which security scanners have learned to flag, TeamPCP embedded the malicious code inside WAV audio files. Actual valid audio files that pass MIME-type checks. But hidden in the frame data is a base64-encoded payload. The first eight bytes serve as an XOR key to decrypt the rest.

Kate

So your security scanner sees a WAV file and thinks, that's just audio, nothing to worry about.

Marcus

Exactly. It's steganography, hiding data inside innocuous-looking files. On Windows, the payload deploys as "msbuild.exe" in the Startup folder for persistence. On Linux and Mac, it harvests environment variables, shell histories, config files, compresses everything, encrypts it with AES-256 and a hardcoded RSA-4096 public key, and exfiltrates via HTTP. The Telnyx package has seven hundred and forty-two thousand monthly downloads.
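For listeners who want to see the shape of the defense, here is a minimal detection sketch in Python. It is a heuristic illustration, not any vendor's actual scanner: it pulls the longest base64-looking run out of a WAV file's frame data, decodes it, XORs the result with its own first eight bytes (the one detail taken from the report), and looks for script-like markers. The marker list and the length threshold are illustrative assumptions.

```python
import base64
import sys
import wave

# Bytes worth flagging if they appear in a decoded payload.
# Illustrative assumption, not a real signature set.
SUSPICIOUS_MARKERS = (b"import ", b"exec(", b"eval(",
                      b"subprocess", b"http://", b"https://")

B64_ALPHABET = set(b"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                   b"abcdefghijklmnopqrstuvwxyz0123456789+/=")

def longest_base64_run(data: bytes) -> bytes:
    """Return the longest run of base64-alphabet bytes in data."""
    best, run = b"", bytearray()
    for byte in data:
        if byte in B64_ALPHABET:
            run.append(byte)
        else:
            if len(run) > len(best):
                best = bytes(run)
            run.clear()
    return bytes(run) if len(run) > len(best) else best

def scan_wav(path: str) -> bool:
    """Heuristically flag a WAV whose frames hide an XOR-wrapped payload."""
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    candidate = longest_base64_run(frames)
    if len(candidate) < 64:          # too short to be a real payload
        return False
    try:
        blob = base64.b64decode(candidate, validate=True)
    except Exception:
        return False
    key, body = blob[:8], blob[8:]   # first eight bytes are the XOR key
    decoded = bytes(b ^ key[i % 8] for i, b in enumerate(body))
    return any(marker in decoded for marker in SUSPICIOUS_MARKERS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "SUSPICIOUS" if scan_wav(path) else "looks clean")
```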

Kate

And the cascading nature of these attacks is what concerns me most. Each compromise feeds the next one.

Marcus

That's the real story. TeamPCP likely got the Telnyx PyPI publishing token from the LiteLLM compromise, which itself was enabled by the Trivy compromise. Each attack harvests credentials that unlock the next target. It's a chain reaction. The Hacker News discussion highlighted a practical defense worth mentioning. The uv package manager has an "exclude-newer" setting that refuses any package version published after a date you pick. Point it about a week into the past and you get a buffer against exactly this kind of attack. Not a perfect solution, but a pragmatic one.
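If you want to try that mitigation, here is roughly what it looks like in a pyproject.toml. One caveat worth flagging: uv's exclude-newer takes an absolute date rather than a rolling window, so keeping the seven-day buffer the commenter described means bumping the date periodically, for example from CI.

```toml
# pyproject.toml: tell uv to ignore any package version published after
# this date. Set it about a week in the past and update it on a schedule
# to keep a rolling buffer against freshly poisoned releases.
[tool.uv]
exclude-newer = "2026-03-21T00:00:00Z"
```

The same knob is available per invocation, e.g. `uv pip install --exclude-newer 2026-03-21 telnyx`.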

Kate

Here's a story that caught me off guard. Greg Kroah-Hartman, the Linux kernel's most prolific maintainer, says AI-generated bug reports "went from junk to legit overnight." Marcus, what happened?

Marcus

Nobody knows, and that's the fascinating part. About a month ago, something changed. The Linux kernel team went from receiving what they called "AI slop," low-quality, obviously wrong reports that wasted everyone's time, to receiving real, accurate security bug reports generated with AI assistance. Kroah-Hartman said the volume is significant and "not slowing down."

Kate

And nobody can explain the shift?

Marcus

He speculated either tools got a lot better simultaneously or people suddenly figured out how to use them effectively. Maybe both. In his own testing, AI-generated patches were correct about two-thirds of the time. That's a meaningful contribution rate. The Linux Foundation has adopted Sashiko, Google's AI code review tool, to help manage the increased volume. But smaller open-source projects without the kernel's massive review infrastructure are struggling with the influx.

Kate

Two-thirds accuracy on kernel patches. A year ago that would have been unthinkable.

Marcus

And it happened overnight, which is the unsettling part. There wasn't a gradual improvement curve. It was a step function. Something crossed a threshold, and now AI is generating genuinely useful security research at scale. For the open-source ecosystem, this is both an opportunity and a resource challenge. More real bugs found is great. But someone still has to review and fix them all.

Kate

We covered the GitHub Copilot data story Thursday, but it deserves another look because the community backlash has intensified. Quick recap for weekend listeners, Marcus.

Marcus

Starting April 24, GitHub will use interaction data from Copilot Free, Pro, and Pro Plus users to train AI models. On by default. You have to explicitly opt out. The data includes code snippets, inputs, file names, repository structure, chat interactions, even data from private repositories while you're actively using Copilot. Business, Enterprise, students, and teachers are exempt.

Kate

And the reaction has been brutal.

Marcus

In GitHub's own community discussion, fifty-nine thumbs down versus three positive reactions. Developers are pointing out this fundamentally redefines what "private" means on the platform. One Hacker News commenter put it perfectly. If your data sits in a database a company can access and it's not end-to-end encrypted, the company will eventually update their terms of service to use it for AI training. The incentives are simply too strong. If you're a Copilot user, check your settings before April 24.

Kate

We reported Wednesday that OpenAI was shutting down Sora. The aftermath is now clearer. The Disney deal is officially dead, and it's raising questions about OpenAI's product strategy ahead of a potential IPO.

Marcus

Six months from launch to shutdown. One million downloads in five days, top of the App Store, and now gone. The Disney partnership that would have brought Mickey Mouse and Cinderella into AI-generated videos is not proceeding. No money ever changed hands. The real driver is compute economics. Video generation is extraordinarily expensive, and OpenAI needs to justify that seven hundred and thirty billion dollar valuation. They're choosing profitable products over impressive demos.

Kate

Which effectively cedes the AI video space to Google, Runway, and Pika.

Marcus

A strategic retreat that investors will scrutinize closely. You can't pitch yourself as the AI platform for everything and then abandon an entire product category six months in.

Kate

Colorado is advancing a bill that would be the first in the country to outright ban AI surveillance pricing. Marcus, explain what that means.

Marcus

Surveillance pricing is when companies use AI to figure out the maximum you'd personally be willing to pay for something, then charge you exactly that. Your browsing history, financial data, how you interact with a website, all fed into an algorithm that sets an individualized price. Colorado's bill, HB26-1210, would ban this practice entirely, and also ban using AI to set individualized wages for employees.
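To make concrete what the bill targets, here is a deliberately toy sketch. Everything in it is hypothetical; the point is only the shape of the practice Marcus describes: a model estimates each visitor's willingness to pay from tracking data, and the quoted price rides just under that estimate.

```python
# Toy illustration of surveillance pricing; all names and numbers are
# hypothetical. In the real practice, willingness_to_pay would come from
# a model fed browsing history, financial data, and on-site behavior.
def quote_price(list_price: float, willingness_to_pay: float) -> float:
    # Charge just under the individual's estimated maximum,
    # never dipping below the list price.
    return max(list_price, round(0.98 * willingness_to_pay, 2))

print(quote_price(100.00, 180.00))  # tracked as price-insensitive: 176.4
print(quote_price(100.00, 95.00))   # tracked as price-sensitive: 100.0
```

Two visitors, same item, different prices. That individualized quote, for goods or for wages, is what HB26-1210 would prohibit outright.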

Kate

New York only requires disclosure. Colorado is going further.

Marcus

Much further. This is a ban, not a transparency requirement. It still needs a third reading and Senate approval before reaching the governor's desk, and there's Republican opposition arguing it's too broad. But Colorado has been the most aggressive state on AI regulation since their 2024 discrimination legislation. This targets one of the most consumer-hostile applications of AI. Think Uber surge pricing applied to everything you buy, calibrated to squeeze the maximum out of each individual customer.

Kate

Last story. The Guardian published a major investigation into AI in military targeting, focused on the Iran school airstrike that killed over a hundred and seventy people in February. Marcus, the investigation argues we've been asking the wrong question.

Marcus

The initial coverage focused on whether Claude specifically selected the school as a target. The Guardian's argument is that fixating on a single AI model misses the systemic picture. The entire kill chain is increasingly automated. Three clicks convert a data point on a map into a formal detection in a targeting pipeline. AI systems recommend which aircraft, drone, or weapon to use. US Central Command confirmed using "advanced artificial intelligence tools to process large amounts of data" in operations against Iran.

Kate

So the question isn't whether one AI chose the target. It's that AI is embedded at every stage.

Marcus

Every stage. From surveillance to target identification to strike planning. The speed and scale of automated targeting can outpace human oversight entirely. This is the highest-stakes version of the alignment problem. Not chatbot hallucinations. Lethal decisions made at machine speed. And it puts the Anthropic-Pentagon case in even sharper relief. Anthropic refused to let Claude be used this way. That refusal looks more prescient by the week.

Kate

Saturday big picture. Anthropic leaks its most powerful model through a security blunder while warning that model could outpace cyber defenders. Supply chain attacks hide in audio files. AI bug reports cross a quality threshold overnight. And military AI targeting kills schoolchildren. Marcus, what's the thread?

Marcus

Capability outrunning control. Every story this week is about AI systems getting more powerful faster than the guardrails can keep up. Mythos is so capable that Anthropic itself is worried, but the company can't keep its own CMS secure. Supply chain attackers are inventing new evasion techniques faster than defenders can detect them. AI bug reports got good so fast that nobody can explain why. And military targeting pipelines operate at speeds that make human oversight a formality. The gap between what AI can do and what we can responsibly manage is widening, not narrowing. The companies and institutions that take that gap seriously will define the next era. The ones that don't will be defined by the consequences.

Kate

More power, same humans trying to keep up.

Marcus

And the humans are losing ground.

Kate

That's your AI in 15 for Saturday, March 28, 2026. Enjoy your weekend. See you Monday.