AI in 15 — March 10, 2026
"Multiple billions of dollars." That's what Anthropic says it stands to lose after the Pentagon slapped it with a label normally reserved for foreign adversaries. Yesterday, Anthropic fired back with two federal lawsuits.
Welcome to AI in 15 for Tuesday, March 10, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, Anthropic is done playing defense. They've lawyered up and they're going after the Trump administration in court. We've also got OpenAI making a security acquisition that tells you exactly where the enterprise AI race is headed. The Stargate data center expansion just collapsed, and Meta is circling with its checkbook. Mark Zuckerberg is quietly building a parallel AI organization at Meta. Andrew Ng released a tool to stop coding agents from hallucinating APIs. And NVIDIA GTC is next week, so let's talk about what's coming. Let's preview.
Anthropic sues the Trump administration over the unprecedented "supply chain risk" designation, calling it First Amendment retaliation.
OpenAI acquires Promptfoo, the security platform used by a quarter of the Fortune 500.
The Stargate expansion collapses as OpenAI walks away from Oracle, and Meta swoops in with a hundred-and-fifty-million-dollar deposit.
And Jensen Huang's keynote is six days away. Let's get into it.
Marcus, we've covered this Pentagon saga all week. The blacklisting, the Fortune report about Claude targeting a thousand sites in Iran, the tech worker petitions. But yesterday Anthropic took the biggest step yet. Two federal lawsuits.
One in Northern California district court, one in the D.C. Circuit Court of Appeals. The core legal argument is that the Trump administration violated Anthropic's First Amendment rights and exceeded the scope of supply chain risk law. Their lawyers are framing this as government retaliation against a company for holding a protected viewpoint. And they're right that the designation is unprecedented. Supply chain risk labels have historically been reserved for companies like Huawei and Kaspersky. Foreign adversaries. Never an American company.
And we know exactly what triggered this. It traces back to that February meeting between Hegseth and Amodei.
Amodei drew two lines. No autonomous weapons without human control, and no mass domestic surveillance of American citizens. The Pentagon's position was that private companies can't dictate how the government uses AI in warfare. Negotiations collapsed. And instead of simply walking away from the contract, the Pentagon issued this designation, which doesn't just block government deals. It poisons the entire enterprise business because defense contractors now have to cut ties with Anthropic too.
Anthropic's CFO said this could cost them multiple billions in 2026 revenue.
And here's the strategic context that makes this even more pointed. While Anthropic is being punished for having guardrails, OpenAI and xAI have both received clearance for classified government systems. They're positioned to absorb exactly the market share Anthropic is losing. So the financial penalty for maintaining ethical red lines is very real and very immediate. Whether the courts agree that this constitutes First Amendment retaliation will be one of the most consequential AI legal battles we've seen.
The irony is still staggering. As we reported Sunday, the Pentagon used Claude through Palantir to strike over a thousand targets in Iran in twenty-four hours, and a Pentagon official described a "whoa moment" when he realized how dependent they were on it.
Simultaneously dependent on and punishing the same company. It's the kind of contradiction that usually only makes sense if you understand the politics underneath. And the politics here are straightforward. The administration wants AI companies that say yes without conditions. Anthropic said yes with two conditions. That was apparently one condition too many.
Shifting to OpenAI. They just acquired Promptfoo, an AI security platform. Marcus, this is interesting timing given the security stories we covered Saturday.
Promptfoo is used by over twenty-five percent of Fortune 500 companies for finding vulnerabilities in AI systems during development. It's an open-source CLI and library for evaluating and red-teaming large language model applications. Founded in 2024, raised twenty-three million dollars, valued at eighty-six million. Financial terms of the acquisition weren't disclosed.
So just days after launching Codex Security, OpenAI is doubling down with an acquisition.
The plan is to integrate Promptfoo directly into OpenAI Frontier, their enterprise platform. Three core capabilities: automated security testing for prompt injections, jailbreaks, and data leaks. Security evaluation embedded throughout the development workflow. And integrated reporting for governance and compliance. They've committed to keeping it open source under its current license.
This feels like a play for enterprise trust.
That's exactly what it is. As AI agents move from chatbots to autonomous workers that browse the web, write code, and execute real-world actions, the attack surface expands dramatically. An agent that can take actions needs far more robust security than a chatbot that just generates text. OpenAI is betting that security tooling will be a competitive differentiator for enterprise sales. And they're probably right. The CTO framed it as being about "AI systems at enterprise scale," which tells you the Frontier platform is where OpenAI sees its revenue future.
Now, the Stargate saga. We covered Oracle's potential thirty thousand layoffs yesterday to fund AI data centers. Today the story got worse for Oracle. The flagship Stargate expansion is dead.
OpenAI and Oracle scrapped plans to expand the Abilene, Texas campus from one point two gigawatts to two gigawatts. The core issue is timing. Power at the expansion site won't be ready for another year, and by then OpenAI wants to deploy NVIDIA's next-generation Vera Rubin chips, not the Blackwell GPUs going in now. They'd rather build an entirely new campus designed for the newer hardware than retrofit a facility that was planned for a different chip generation.
And relations between OpenAI and Crusoe, the data center operator, were already strained.
A multi-day outage earlier this year from winter weather damaging liquid cooling equipment didn't help. But here's the twist. Meta has reportedly paid a hundred-and-fifty-million-dollar deposit to Crusoe to secure the planned expansion site. So the capacity OpenAI walked away from may end up powering Meta's AI infrastructure instead.
Oracle is disputing the characterization, right?
They claim a four-and-a-half-gigawatt agreement with OpenAI remains on track. But the direction is clear. GPU generations are advancing so fast that committing billions to data centers has become a high-stakes timing bet. You build for Blackwell, and by the time you're operational, your biggest customer wants Vera Rubin. That's the brutal calculus of AI infrastructure right now, and it makes yesterday's Oracle layoff story even more concerning. They're cutting thirty thousand jobs to fund infrastructure that their anchor tenant may not want by the time it's ready.
Meta news. Zuckerberg has created a new Applied AI Engineering organization. Marcus, this sounds like it might be a vote of no confidence in his Chief AI Officer.
The new unit is led by Maher Saba, reporting to CTO Andrew Bosworth. It's deliberately flat, up to fifty individual contributors per manager, and tasked with building a data engine to improve Meta's AI models. This creates a parallel AI operation alongside the team led by Alexandr Wang, the twenty-eight-year-old former Scale AI CEO that Zuckerberg hired with enormous fanfare.
Reports say Zuckerberg has lost confidence in Wang. Meta says that's silly.
Meta pushed back hard. But the structural signal is interesting regardless. You have Wang's team focused on more ambitious, longer-term research, and now a parallel product-focused team optimizing for revenue. Every major AI company is navigating this tension between shipping products now and pursuing research breakthroughs. Zuckerberg appears to be hedging by building both organizations simultaneously. Whether that creates healthy competition or internal fragmentation is the question to watch.
Quick update on Karpathy's Autoresearch, which we covered Sunday. It's gone even more viral, and there's a notable real-world result.
Shopify CEO Tobi Lutke adapted the framework internally and reportedly achieved a nineteen percent improvement in validation scores, with the agent-optimized smaller model actually outperforming a larger manually-configured one. Karpathy also pushed back on comparisons to neural architecture search, saying this is fundamentally different. It's an actual language model writing arbitrary code, learning from previous experiments, with internet access. It's not constrained to a predefined search space. In one run, an agent discovered that switching the order of QK Norm and RoPE produced better results, a finding that doesn't fit neatly into any hyperparameter category.
So it's already producing novel findings. That's fast.
Six hundred and thirty lines of Python. Single GPU. And it's producing research insights that humans might not have tried. If this paradigm scales, the pace of AI research itself accelerates.
Andrew Ng released something called Context Hub this weekend. What problem does it solve?
A fundamental one. AI coding agents are trained on documentation snapshots that go stale. So they hallucinate deprecated parameters or miss newer API endpoints. Context Hub is an open-source CLI that lets developers fetch curated, versioned documentation before the agent writes code. Install via npm, prompt your agent to use it, and it pulls live docs rather than relying on training data.
The annotations feature is clever.
Agents attach local notes to documentation that persist across sessions and appear automatically on future fetches. So the agent effectively learns from past experience with specific APIs. It gets smarter about your particular codebase and usage patterns over time. The announcement got over three thousand likes on X. Developers clearly feel this pain.
Last one before the big picture. NVIDIA GTC kicks off Monday. Marcus, what should we expect from Jensen?
Over thirty thousand attendees from a hundred and ninety countries. The big reveal should be core specs for the Vera Rubin GPUs, and possibly a preview of Feynman, the generation after that. Also expect announcements on co-packaged optics switches, new power architectures, and liquid cooling systems. The timing is perfect given today's Stargate story. Every data center planning decision in the industry depends on what Jensen reveals next Monday.
The pregame show alone has the CEOs of Perplexity, LangChain, and Mistral.
GTC has become the Super Bowl of AI infrastructure. And this year it arrives at a moment when billions in data center investments are literally being reshuffled based on the GPU roadmap. What Jensen says about Vera Rubin next week directly determines whether Oracle's bet pays off or falls apart.
Tuesday big picture, Marcus. Anthropic sues the government. OpenAI buys its way into security. The Stargate expansion crumbles. Meta builds parallel AI teams. What's the theme?
Battle lines. Every major player is making moves that define who they are and what they're willing to fight for. Anthropic is fighting in court for the principle that AI companies can maintain ethical limits. OpenAI is fighting for enterprise credibility through security acquisitions. Oracle and Meta are fighting over physical infrastructure. Zuckerberg is fighting the internal tension between research ambition and product revenue. A month ago, the AI industry felt like a rising tide lifting all boats. Today it feels like a series of zero-sum conflicts where every gain comes at someone else's expense. The cooperative phase is ending. The competitive phase has arrived.
And the stakes are real. Billions of dollars. Court precedents. The relationship between AI labs and governments.
The Anthropic lawsuit is the one to watch. If they win, it establishes that AI companies have First Amendment protection for their safety policies. If they lose, the message to every AI lab is clear: when the government asks for unrestricted access, you comply or you pay. That outcome shapes the entire industry for a decade.
That's your AI in 15 for Tuesday, March 10, 2026. See you tomorrow.