AI in 15 — February 28, 2026

February 28, 2026 · 17m 35s
Kate

Supply chain risk. That's the label the United States government uses for Huawei. For Russian defense contractors. And as of yesterday, for an American AI company that refused to build autonomous weapons.

Kate

Welcome to AI in 15 for Saturday, February 28, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Marcus, we've been covering the Anthropic-Pentagon standoff all week. Monday through Friday, every escalation. And now we know how it ends. Plus there's a lot more to get through. Let's preview.

Kate

President Trump ordered every federal agency to immediately stop using Anthropic's technology, and the Pentagon slapped the company with a supply chain risk designation normally reserved for foreign adversaries. Anthropic says it's going to court.

Kate

Hours later, OpenAI announced its own Pentagon deal, claiming it includes the same safety principles Anthropic fought for. We'll look at the fine print.

Kate

OpenAI also closed a hundred-and-ten-billion-dollar funding round, one of the largest private raises in history. And the investor list tells a story.

Kate

Jack Dorsey's Block is laying off nearly half its workforce and blaming AI, but some very prominent voices are calling that explanation into question.

Kate

Andrej Karpathy tried running eight coding agents at once and says it doesn't work. And an open source project just banned AI-generated code entirely. Let's get into it.

Kate

Marcus, we watched this unfold all week. Wednesday it was the deadline threat. Thursday, the RSP revision. Friday, Dario's statement. And now President Trump has pulled the trigger. What happened?

Marcus

Trump issued an executive directive Friday ordering every federal agency to immediately cease all use of Anthropic's technology. Not just the Pentagon. Every agency. And Defense Secretary Hegseth went further than anyone expected. He designated Anthropic a supply chain risk to national security. Kate, that classification has never been applied to an American company. It's the label reserved for Huawei, for entities tied to foreign adversaries.

Kate

And the practical consequences of that designation are enormous, right? It's not just about losing the Pentagon contract.

Marcus

It cascades through the entire defense industrial base. No contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic. Effective immediately. That's not just the two-hundred-million-dollar contract we've been discussing. That's potentially billions in indirect business from defense contractors and adjacent industries who now have to choose between working with Anthropic and working with the military.

Kate

And there's a contradiction that Anthropic is already highlighting.

Marcus

A glaring one. Hegseth gave the Pentagon six months to phase out existing Anthropic deployments. But if Anthropic genuinely poses a national security risk, why would you keep using their technology for another six months? You wouldn't leave Huawei equipment in your networks for half a year. The phaseout timeline implicitly concedes that the designation is punitive, not protective. Anthropic's lawyers will absolutely use that in court.

Kate

And they are going to court.

Marcus

Anthropic announced they'll challenge the designation, calling it legally unsound. Their legal argument is twofold. First, supply chain risk designations under federal law can only apply to Department of War contract work, not to how other contractors use Claude for unrelated purposes. Second, this is retaliatory. The designation came hours after Anthropic refused to remove specific safety guardrails, which creates a clear paper trail of cause and effect. Trump also posted on Truth Social that he'd use the full power of the presidency to enforce compliance, which Anthropic's lawyers are probably framing right now.

Kate

The public reaction this week has been something else.

Marcus

Overwhelmingly in Anthropic's favor. The Hacker News discussion hit twelve hundred points with nearly a thousand comments. Multiple threads of people posting screenshots of canceled OpenAI subscriptions and new Anthropic signups. Whether that translates into enough commercial support to offset the government business is the real question. But Anthropic has turned this standoff into probably the most effective brand-building campaign in AI history, and they did it by doing something almost nobody expected from a tech company. Saying no.

Kate

And Marcus, within hours of Anthropic getting blacklisted, OpenAI made its move.

Marcus

Sam Altman announced that OpenAI reached an agreement with the Department of War to deploy its models on classified military networks. The timing was, let's say, conspicuous. Hours. Not days. Hours after the government banned its primary competitor.

Kate

Now Altman claimed the deal includes the same safety principles Anthropic was defending. Is that true?

Marcus

The words are similar but the meaning is different, and this is important. Anthropic demanded a prohibition on autonomous weapons systems entirely. Full stop. OpenAI's agreement requires, quote, human responsibility for the use of force, including for autonomous weapon systems. That's a meaningfully weaker standard. You can have an autonomous weapons system under OpenAI's language as long as someone claims responsibility for it. One Hacker News commenter put it perfectly. Requiring human responsibility isn't saying much. There are already military courts, rules of engagement, and international law, and none of that has prevented civilian casualties throughout history.

Kate

So the real question is whether OpenAI's agreement actually constrains anything.

Marcus

It constrains the optics more than the operations. And look, OpenAI also says it retains control over which models are deployed and where, and that deployment is limited to cloud environments rather than edge systems. Those are meaningful technical constraints. But they're operational details, not ethical red lines. The fundamental difference is that Anthropic said there are things our technology cannot be used for, period. OpenAI said here are the conditions under which our technology can be used for those things. Those are very different positions dressed up in similar language.

Kate

And the employees who signed that "We Will Not Be Divided" letter?

Marcus

Getting called out hard. That letter, signed by OpenAI employees just days ago, was meant to show solidarity against the Pentagon's divide-and-conquer strategy. The company then immediately took the deal that the division strategy was designed to produce. Whether those employees feel betrayed or simply were never consulted, it's not a great look either way.

Kate

OpenAI also closed its funding round this week, and Marcus, these numbers are staggering.

Marcus

A hundred and ten billion dollars at a pre-money valuation of seven hundred and thirty billion. Post-money, eight hundred and forty billion. One of the largest private funding rounds in history. The investor breakdown is fifty billion from Amazon, thirty billion from Nvidia, and thirty billion from SoftBank.

Kate

And each of those investments comes with strings attached.

Marcus

That's what the Hacker News crowd immediately spotted. Amazon's money is tied to OpenAI using AWS. Nvidia's presumably requires continued hardware purchases. SoftBank has its own infrastructure ambitions through Stargate. So a significant portion of that hundred and ten billion isn't truly fungible cash. It's circular investment where the money flows back to the investors through contracts. The actual free capital OpenAI gets to deploy as it chooses is likely much smaller than the headline suggests.

Kate

And Microsoft, OpenAI's longest-standing and most important backer, didn't participate.

Marcus

That's the most revealing detail. Microsoft has been OpenAI's primary investor and infrastructure partner since 2019. Their absence from the largest round in the company's history signals something. Maybe the relationship is fraying. Maybe Microsoft is hedging by investing in its own models. Either way, when your founding partner sits out your biggest moment, people notice. One commenter compared it to your best man not showing up to the wedding.

Kate

The skeptics were loud on this one.

Marcus

The top Hacker News comment asked, quote, someone please explain how OpenAI is not Netscape 2026. First-mover advantage but no network effect, no moat, racing to stay ahead of infinitely resourced incumbents. Another pointed out that each model generation is roughly twice as profitable as the one before, but each new model costs ten times as much to build. That math doesn't converge anywhere good. At eight hundred and forty billion, OpenAI is valued higher than all but a handful of public companies, and it's still burning cash faster than it earns it.

Kate

Shifting gears completely. Jack Dorsey's Block is laying off four thousand people, nearly half its workforce. And he's saying AI made him do it.

Marcus

Dorsey posted a six-hundred-word justification on X claiming that, quote, something happened in December of last year where the models got an order of magnitude more capable. He said AI will be at the core of how the entire company works going forward and predicted that most companies will do the same within a year. Investors loved it. Block shares jumped twenty-four percent.

Kate

But there's a counter-narrative building, isn't there?

Marcus

A strong one. Elad Gil, one of the most respected tech investors in the Valley, posted a thread arguing that most AI layoff announcements are actually companies correcting for pandemic-era over-hiring. He wrote that many larger tech companies could slim down fifty percent without any AI changes. And Sam Altman himself acknowledged weeks ago that some companies are AI washing, blaming unrelated layoffs on AI to appear forward-thinking and boost their stock price.

Kate

So is Block genuinely replacing four thousand people with AI, or is this corporate restructuring with better marketing?

Marcus

The honest answer is probably both. Block grew massively during the pandemic and likely had bloat to trim regardless of AI. But framing it as an AI transformation story gets you a twenty-four percent stock bump instead of questions about mismanagement. Research from Oxford Economics suggests nearly forty percent of AI time savings are lost to rework due to quality issues. If that holds, Block may find itself needing to rehire in areas where the AI can't actually do the work. Dorsey's prediction that most companies will follow within a year is either visionary or reckless, and we won't know which for a while.

Kate

Quick one that connects back to the Anthropic story. Anthropic launched a program offering free Claude Max subscriptions to open source maintainers.

Marcus

Six months of the twenty-times plan, which normally costs two hundred dollars a month, free for maintainers and core contributors of projects with five thousand or more GitHub stars or a million monthly NPM downloads. Up to ten thousand contributors can participate. Express.js and Lodash maintainer Jon Church praised it, saying people don't understand how low the bar is for rewarding maintainers.

Kate

Smart timing, given the week they've had.

Marcus

Very smart. Getting the most influential developers in the world building muscle memory with Claude Code is a strategic play. And the goodwill among developers, at a moment when the company is being painted as a national security risk by the government, is invaluable. The six-month limit drew some criticism, since Anthropic trained Claude on open source code, but the gesture landed well overall.

Kate

Andrej Karpathy tried something ambitious this week. He set up eight coding agents, four running Claude and four running Codex, each with its own GPU, working on the same project simultaneously. How'd it go?

Marcus

His exact words were, the TLDR is that it doesn't work and it's a mess. But he added, it's still very early. The task was removing a specific feature from his nanochat project without causing regressions. Eight agents couldn't coordinate well enough to pull it off. He noted he still keeps an IDE open and surgically edits files himself because he catches issues the agents miss. His conclusion was that agents are great for documentation and notebooks but not yet ready for autonomous complex engineering.

Kate

When Karpathy says it doesn't work, that carries weight.

Marcus

He's been one of the most credible voices on AI coding all year. And the nuance matters. He didn't say agents are useless. He said multi-agent coordination doesn't work yet. Single agents are still powerful. His metric for evaluating them is beautifully simple: time and dollars. Everything else is noise.

Kate

And on the other end of the spectrum, PostmarketOS just banned AI-generated code entirely from their project.

Marcus

They updated their governance to prohibit two things. Submitting contributions created by generative AI tools, and even recommending AI tools to other community members. Their reasoning is practical, not ideological. Maintainers are overwhelmed by contributions that look okay on the surface but come from people who don't understand how the code actually works, haven't tested it properly, and write misleading commit messages.

Kate

They're joining Gentoo Linux, which adopted a similar ban back in 2024.

Marcus

It's a growing counter-movement. And the core argument resonates with anyone who maintains open source software. AI dramatically lowers the barrier to submitting code that looks plausible but increases the review burden on maintainers who are already volunteering their time. More contributions of lower quality is a net negative for projects that care about reliability.

Kate

Saturday big picture, Marcus. This has been the most consequential week in AI since we started this show. Anthropic got blacklisted by the U.S. government. OpenAI took the Pentagon deal and closed a hundred-and-ten-billion-dollar round. Block fired half its workforce in AI's name. What's the thread?

Marcus

The thread is that the AI industry just split into two lanes. One lane is alignment with power: take the deals, accept the terms, ride the momentum. OpenAI took the Pentagon contract. Amazon, Nvidia, and SoftBank wrote checks with circular strings attached. Block used AI as the justification investors wanted to hear. The other lane is alignment with principles, and right now Anthropic is driving that lane alone, heading to court with a supply chain risk label on its back. Whether principled positions survive contact with real power is the defining question for this industry. This week we got our first real test case.

Kate

And the developer community seems to be voting with its wallets and its code.

Marcus

That's the wild card. Open source maintainers banning AI code. Developers switching subscriptions to Anthropic. Karpathy honestly reporting that the tools aren't magic yet. There's a grassroots reality check happening alongside the corporate chess game. The money says one thing. The people building the actual technology are saying something more complicated. How those two forces resolve will shape what AI looks like a year from now.

Kate

That's your AI in 15 for Saturday, February 28, 2026. Have a great rest of your weekend, and we'll see you Monday.