
AI in 15 — May 05, 2026

May 5, 2026 · 17m 13s
Kate

The White House is reportedly drafting a plan that would force Anthropic, Google, and OpenAI to submit every new frontier model to a federal review board before release. Drug-approval rules, but for language models. And the labs found out about it last week.

Kate

Welcome to AI in 15 for Tuesday, May fifth, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Big policy day, Marcus. The New York Times says the Trump administration is weighing pre-release vetting of frontier AI models. OpenAI dropped its own Wall Street joint venture to match Anthropic's. Sierra raised nine hundred and fifty million at nearly sixteen billion. OpenAI published the plumbing of how it runs voice mode at global scale. Bun is being rewritten from Zig into Rust, mostly by Claude. Google ships Gemini to Pentagon classified networks while a thousand employees protest. Novo Nordisk goes all-in with OpenAI. GitHub had another outage and the community is blaming AI agents. And Meta is cutting eight thousand jobs to fund AI capex.

Kate

The White House wants a sign-off before frontier models ship.

Kate

Bun's runtime is being ported by Claude.

Kate

And Meta tells eight thousand workers their salaries are funding GPUs.

Kate

Lead story, Marcus. The New York Times reported Monday that the Trump White House is considering an executive order to vet frontier AI models before release. Walk me through it.

Marcus

The Times reports a working group of senior tech executives and federal officials would be tasked with designing a pre-release review process for new frontier models. According to the story, officials briefed Anthropic, Google, and OpenAI executives on the plan during meetings last week. A White House spokesperson has since called the report speculation and said any decision rests with President Trump himself. But the framework is reportedly already circulating inside the administration.

Kate

This is a sharp turn from where Trump started in 2025.

Marcus

A complete one-eighty, Kate. His first AI executive order in early 2025 ripped up the Biden-era safety guidance and went hard pro-innovation, hands-off. The new proposal looks more like a soft FDA. Companies submit a frontier model. Government reviewers sign off. Deployment proceeds. It is unclear whether review covers all releases, only frontier launches, or only models intended for federal use.

Kate

And the consequences if it lands.

Marcus

Two big ones. First, regulatory capture. The three named labs are exactly the three best positioned to absorb compliance overhead. Any open-weights challenger, including a US one like Gemma, gets a much harder path to market. Hacker News flagged the worst case immediately. US inference providers restricted to government-approved models only. Second, this lands right as Chinese open-weights coding models are matching Western frontier capability at one-fifth the price. So a Western policy of slower, screened, approved arrives at exactly the moment Beijing's strategy is cheaper, open, ship every ninety days. Anthropic spent a year asking for guardrails. They may have just gotten more rules than they wanted, built directly into their competitors' market entry costs.

Kate

Quick hits. Marcus, we covered Anthropic's Wall Street JV yesterday. OpenAI matched it the same day.

Marcus

Right, Kate. OpenAI announced The Development Company, a four-billion-dollar raise at a ten-billion-dollar valuation, backed by TPG, Brookfield, Advent, and Bain Capital. Zero overlap with Anthropic's Blackstone and Goldman investor list. Same model. Forward-deployed engineers embedded inside customer companies, with private equity acting as the distribution channel into hundreds of portfolio companies at once.

Kate

So both labs raised billions of outside money to fund what is essentially consulting.

Marcus

That's the tell. If Anthropic and OpenAI thought their APIs alone could capture this market, they wouldn't be raising billions to pay human engineers. The combined five-and-a-half billion is aimed straight at the consulting industry's four-hundred-billion-dollar revenue pool. McKinsey, Accenture, BCG, Deloitte. AI services just became its own industry, separated from the model labs.

Kate

Sierra, Bret Taylor's AI agent company, raised nine hundred fifty million.

Marcus

At a post-money valuation north of fifteen billion. Some reports peg it at fifteen-point-eight. Cash on the balance sheet is now north of one billion. The round was led by Tiger Global and GV. The numbers underneath are extraordinary. ARR went from a hundred million in late November to a hundred fifty million by early February. Sierra serves more than forty percent of the Fortune fifty. Its agents handle billions of customer interactions a year. Valuation went from four-and-a-half billion in October 2024, to ten billion in October 2025, to fifteen-plus billion now.

Kate

What's the customer service insight that makes this work?

Marcus

The Hacker News thread had the receipts. Fifty to eighty percent of inbound CX calls are easily answerable questions where customers couldn't navigate the existing self-service tools. AI agents demonstrably handle them. The bigger signal is who is building this. Bret Taylor is the chairman of OpenAI, ex-Salesforce co-CEO, ex-Facebook CTO. And he's building outside OpenAI. Sierra is now valued higher than several public software companies that took twenty years to build. Customer service is the first big enterprise function being eaten by agents. Sierra is the clear category leader.

Kate

OpenAI published a deep technical post yesterday on running voice mode at global scale, Marcus.

Marcus

It's a flex, Kate, but a useful one. The core problem is that standard WebRTC assumes one-port-per-session, stateful sessions tied to a single host, and routing decisions made at connect time. None of that fits a global model-serving stack where the GPU you need may not be near the user. OpenAI's solution is a split-relay-plus-transceiver architecture. The relay handles standard WebRTC at the edge, then forwards packets internally to wherever the model is actually running. The session looks standard to clients. They credit the open-source Pion library heavily.

Kate

And the user reaction.

Marcus

A clean split. Developers loved the openness. Actual users complained that the turn detection is currently too aggressive. The model interrupts when humans pause naturally for half a second. One user noted that telling the model to interrupt less actually works as a prompt. The other quiet detail is that OpenAI's realtime audio is still pinned to the GPT-4o family. There's no Realtime version of GPT-5.x. And yet there is still no real competitor in the segment. The post is essentially OpenAI saying nobody else has built this, so here is how we did it.
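OpenAI's relay internals aren't public beyond the post, but the shape of the idea — a client-facing edge that forwards media to wherever the model actually runs, invisibly to the client — can be sketched as a toy UDP forwarder. Everything below (the function name, the single-datagram scope) is illustrative, not Pion or OpenAI code.

```python
import socket

def relay_once(edge_sock, backend_addr):
    """Forward one client datagram to the backend and route the reply back.

    Toy stand-in for the split-relay idea: the client only ever talks to
    the edge socket; which host the backend (the model) actually lives on
    is invisible to the client, so the backend can sit anywhere.
    """
    data, client = edge_sock.recvfrom(2048)      # media packet arrives at the edge
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    upstream.settimeout(5.0)
    try:
        upstream.sendto(data, backend_addr)      # forward to wherever the model runs
        reply, _ = upstream.recvfrom(2048)       # backend's answer
        edge_sock.sendto(reply, client)          # return it to the original client
    finally:
        upstream.close()
```

In the real architecture the edge terminates full WebRTC (ICE, DTLS, SRTP) so the session looks standard to clients; the sketch only captures the forwarding topology.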

Kate

Bun is being ported from Zig to Rust, Marcus. Mostly by Claude.

Marcus

A claude-slash-phase-A-port branch on the Bun repo currently sits at seven hundred seventy-three thousand additions and a hundred fifty-one deletions, across roughly sixteen hundred files. Five months after Anthropic acquired Bun, the team is rewriting the entire runtime off Zig. The reasons cluster around four things. Zig is still pre-1.0 with frequent breaking changes. Zig's maintainers explicitly ban LLM-assisted contributions, which already cost Bun a four-times speedup it couldn't upstream. Claude is dramatically better at Rust than at Zig because of training-data volume. And maintaining a Zig fork means mounting tech debt.

Kate

Critics are calling this the highest-stakes vibe-coding project ever.

Marcus

They are, Kate, and they have a point. Once you sign off on a seven-hundred-fifty-thousand-line AI-generated diff, the institutional memory of the original codebase is gone. Defenders push back that this is the easiest possible LLM port, line-by-line behavioral translation against a working reference implementation, not greenfield generation. Either way, this is the proof case the entire industry has been waiting for. If Bun ships, AI-assisted rewrites of large performance-sensitive codebases just became real. If it ships full of segfaults, that's a different kind of proof. And it's Anthropic eating its own dog food. Claude is rewriting the runtime that powers Claude Code.
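The defenders' framing — translation against a working reference implementation — maps onto a standard technique: differential testing, where the old and new implementations are fed the same inputs and required to agree. A minimal sketch, with both sides stubbed in Python (the function names, the path-normalization example, and the harness are all illustrative, not Bun's actual test setup):

```python
import random

def ref_normalize(path):
    """Reference implementation (stands in for the original Zig code)."""
    parts = [p for p in path.split("/") if p not in ("", ".")]
    out = []
    for p in parts:
        if p == ".." and out and out[-1] != "..":
            out.pop()  # ".." cancels the previous real segment
        else:
            out.append(p)
    return "/".join(out)

def ported_normalize(path):
    """Ported implementation (stands in for the Rust translation).
    In a real port this is the new code under test; here it is
    deliberately identical so the harness passes."""
    return ref_normalize(path)

def differential_test(n=1000, seed=7):
    """Fuzz both implementations with random paths and demand agreement."""
    rng = random.Random(seed)
    alphabet = ["a", "b", "..", ".", ""]
    for _ in range(n):
        path = "/".join(rng.choice(alphabet) for _ in range(rng.randint(0, 8)))
        assert ref_normalize(path) == ported_normalize(path), path
    return True
```

The point of the defense is exactly this: with a working reference to diff against, every translated function has an oracle, which is a far easier setting for an LLM than greenfield generation.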

Kate

Google's Pentagon deal, Marcus. We covered the Pentagon picking eight vendors on Sunday. The new wrinkle is the employee response.

Marcus

About a thousand Google employees have signed an internal open letter opposing the Gemini 3.1 Pro deployment to Pentagon Impact Level six and seven networks. The detail Fortune flagged is the contract terms. Gemini can be used for, quote, any lawful government purpose, which critics inside Google read as essentially blanket authorization. Google's contract reportedly obliges the company to remove technical safeguards if they prevent the government from doing something it wants. Unlike OpenAI's contract, it lacks an explicit ban on use for mass domestic surveillance.

Kate

So this is Project Maven round two?

Marcus

Fortune's analysis is no. Google's cultural and structural willingness to cancel contracts under employee pressure has visibly waned since 2018. Anthropic, by contrast, refused similar Pentagon work in February over autonomous-weapons and surveillance concerns and ate the revenue hit. The competitive dynamic is the news inside the news. Google chose to bid lower on guardrails than OpenAI to win Pentagon market share. That's a meaningful new axis of competition between frontier labs. Permissiveness as a sales feature.

Kate

Bipartisan AI bill, Marcus. The LIFT AI Act.

Marcus

Senators Adam Schiff and Mike Rounds introduced the Literacy in Future Technologies AI Act on April twenty-eighth. It funds AI literacy curriculum, evaluation tools, and teacher training across K-12 through an NSF grant program. Endorsers include the American Federation of Teachers, Google, OpenAI, Microsoft, HP, ITI, and SIIA. The bill defines AI literacy as the ability to use artificial intelligence effectively. Critics on Hacker News and at 404 Media seized on that phrasing immediately. It defines literacy as product-usage proficiency.

Kate

And the historical comparison.

Marcus

One commenter compared it directly to the IT Literacy classes of the 1990s that effectively taught Microsoft Office under the banner of computer literacy. A New York Times piece referenced in the discussion describes Chromebooks shipping to students with Gemini pre-installed. Education is becoming a major front in the platform wars, Kate. Whoever's tools get standardized in K-12 has a twelve-year on-ramp to consumer behavior. This is also a rare bipartisan AI bill in a Congress that mostly can't move on AI policy. Which is exactly why it's likely to pass.

Kate

Novo Nordisk and OpenAI. Pharma's biggest AI bet goes end-to-end.

Marcus

Announced April fourteenth and rolling out through this month, Kate. The maker of Ozempic, one of Europe's most valuable companies, is integrating OpenAI's models company-wide. Pilots are running in R&D for target identification and complex dataset analysis, in manufacturing for process optimization, plus supply chain and commercial operations. Full deployment is targeted for end of 2026. OpenAI is also providing AI fluency training to Novo's global workforce.

Kate

Why does this stand out.

Marcus

Because this is the scale of enterprise AI rollout we've been told was coming for two years and rarely actually see. Not, we use ChatGPT for emails. Model integration across primary research, industrial manufacturing, and a regulated global supply chain. If Novo cuts even ten percent off its drug development timelines, the financial impact dwarfs the entire OpenAI contract. It's also a meaningful European story in a narrative that is otherwise overwhelmingly US-versus-China.

Kate

Last quick hit. GitHub had another outage yesterday. The community thinks they know why.

Marcus

Issues and Webhooks degraded for hours on May fourth. Two things stood out, Kate. Community-tracked stats put GitHub's ninety-day uptime at around eighty-four-point-nine-two percent. That's a remarkably low number for what is effectively the world's source-code utility. The top Hacker News comment connected the rate of incidents directly to agentic coding load. Autonomous AI agents running CI tasks, opening PRs, hammering the API at machine speed. Commenters say GitHub will have to change rate limits, cut free-tier usage, or otherwise reduce load. A satirical site, dayswithoutgithubincident-dot-com, hit the front page the same day.

Kate

And the broader pattern.

Marcus

This is the AI-load story dressed up as a DevOps story. Cursor, Claude Code, Codex, and Copilot agents are now the heaviest consumers of developer infrastructure on the internet. The same issue will hit npm, PyPI, Docker Hub, and every package registry within twelve months. Developers are already moving projects to Forgejo, Codeberg, or AT-Protocol-based tangled-dot-org. Several enterprise users are looking at moving GitHub Enterprise from cloud back to on-premises specifically for reliability. The bills for agentic coding load haven't been written yet. They're coming.
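The rate-limit changes commenters expect are usually some variant of a token bucket, which lets a client burst briefly but caps its sustained request rate. A minimal sketch with caller-supplied timestamps for determinism (a hypothetical class, not GitHub's actual throttling mechanism):

```python
class TokenBucket:
    """Token-bucket limiter: tokens refill at a steady rate,
    requests may burst up to the bucket's capacity."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)  # start full: an initial burst is allowed
        self.last = 0.0                # timestamp of the previous call

    def allow(self, now):
        """Return True if a request arriving at time `now` is admitted."""
        elapsed = max(0.0, now - self.last)
        self.last = now
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The asymmetry is the point: an agent hammering at machine speed drains the bucket instantly and gets throttled, while a human clicking every few seconds never notices the limit exists.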

Kate

Big picture, Marcus.

Marcus

One thread ties everything today together, Kate. AI is moving from R&D budget line to core infrastructure, and the contracts that govern that transition are being written this week. The White House wants approval rights on the models. Anthropic and OpenAI just locked in PE-funded distribution channels worth five-and-a-half billion combined to capture the consulting industry's revenue. Google traded guardrails for Pentagon market share. Sierra hit fifteen billion on the back of agents replacing call-center seats. Meta is laying off eight thousand workers, openly, to fund a hundred-fifteen-to-a-hundred-thirty-five-billion-dollar capex line. Zuckerberg told staff the cuts pay for the data centers. That reframes the whole AI labor conversation, Kate. AI isn't replacing those Meta jobs. The spending on AI infrastructure is replacing those salaries. The regulatory, financial, and labor frameworks of the AI economy are all being decided simultaneously. And the lobbying has already started in K-12.

Kate

That's your AI in 15 for today. See you tomorrow.