
AI in 15 — March 3, 2026

March 3, 2026 · 15m 38s
Kate

"Definitely rushed." "Sloppy." Those are Sam Altman's own words about the Pentagon deal OpenAI announced hours after Anthropic got blacklisted. The CEO of the most valuable private company in history just admitted he blew it.

Kate

Welcome to AI in 15 for Tuesday, March 3, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Marcus, Altman is doing some public cleanup on the Pentagon deal, and there's a lot to unpack. Plus we've got Meta's smart glasses becoming a full-blown privacy crisis, Apple's four-and-a-half-billion-dollar AI bet sitting idle, and the most ironic firing in tech journalism history. Let's preview.

Kate

Sam Altman posted an internal memo calling the Pentagon deal rushed and sloppy, announcing amendments to add explicit anti-surveillance and anti-autonomous-weapons language that wasn't in the original agreement.

Kate

A Swedish investigation into Meta's smart glasses found workers describing the experience in three words: we see everything. Seven million units sold, and now there's an app to detect them in public.

Kate

Apple spent four and a half billion dollars on AI servers running at ten percent utilization, and is now in talks with Google to power the new Siri on Google's cloud infrastructure.

Kate

Ars Technica fired its senior AI reporter after AI tools fabricated quotes in his article about an AI going rogue. Yes, really.

Kate

And an open source AI framework just surpassed React's GitHub star count in four months. Let's get into it.

Kate

Marcus, we've been covering this Pentagon saga all week. Saturday was the announcement, Sunday was the consumer backlash, Monday was the funding round. And now Altman is out with a memo calling his own deal rushed and sloppy. What actually happened here?

Marcus

Altman posted an internal memo that's been shared publicly, and the key words are remarkable coming from a CEO. He used the words "definitely rushed" and "sloppy." He announced that OpenAI is amending the Pentagon agreement to add explicit language prohibiting use for mass surveillance and autonomous weapons systems.

Kate

So the deal they signed on Saturday, the one announced hours after Anthropic got blacklisted, didn't actually include those protections?

Marcus

Apparently not in explicit terms. And that's the admission hiding inside the word sloppy. Anthropic spent months in negotiations fighting specifically for those clauses. OpenAI announced their deal in hours. And now a week later they're going back to add the language that Anthropic went to war over. The irony is genuinely thick.

Kate

It's a bit like copying someone's essay, turning it in with mistakes, and then having to go back and fix it.

Marcus

That's roughly it. And credit where it's due, adding explicit anti-surveillance and anti-autonomous-weapons language is substantively better than not having it. Altman saying it publicly rather than quietly amending the contract does matter. But the sequence tells a story. Anthropic drew a principled line and paid a severe price. OpenAI rushed to take the deal that was on the table. And then, under scrutiny, they're walking back toward the position Anthropic defended from the start.

Kate

So Anthropic took the hit and OpenAI gets the amended contract.

Marcus

That might be the bitterest possible outcome. One company draws the line, another company gets the deal, enough pressure builds that the contract eventually reflects the line, and the company that forced the change is still in court with a supply chain risk designation on its back. History isn't always fair about who gets credit for moving things forward. The philosophical gap remains, though. Anthropic's position was that certain uses are off limits entirely. OpenAI's language still describes conditions under which these things can be done. A memo revision doesn't close that gap.

Kate

Let's move to something that should concern everyone who has ever been in a room with someone wearing a pair of Ray-Ban Meta smart glasses. Marcus, the Swedish investigation.

Marcus

Konsumentverket launched a formal investigation, and the workers inside Meta's smart glasses review program described the experience in exactly the language Meta would least want to hear. We see everything. That was the quote.

Kate

And the design gives them cover to do exactly that. No indicator light when recording.

Marcus

No visible signal when the glasses are actively recording or streaming. Seven million units sold in 2025 alone. These are devices that look like normal sunglasses. The investigation is focused on GDPR compliance, but the bigger story is what the experiential reality actually is. People are walking into restaurants, meetings, and gyms while their AI assistant processes whatever the glasses see.

Kate

And now there's an app to detect them?

Marcus

A researcher built something called Nearby Glasses that detects Meta smart glasses via Bluetooth advertising signals the devices continuously emit. So you can now check whether someone nearby is wearing them. Privacy as an arms race. Counter-technology built to defend against the primary technology.

Kate

What does this do to the product category? Because seven million units is real adoption.

Marcus

It creates a bifurcated future. People who want the capability will keep buying them. But the social permission to wear them in certain contexts is eroding. If restaurants and gyms start banning them, or if the social norm shifts to treating them the way we once treated people pointing cameras at strangers, the use case narrows fast. Meta is trying to normalize always-on AI capture as a lifestyle feature. The Swedish investigation and apps like Nearby Glasses are society pushing back on that normalization in real time.

Kate

Now this one is almost too perfect as a story. Apple spent four and a half billion dollars building AI servers. Those servers are sitting at ten percent utilization. And Apple is now in talks with Google to run the new Siri on Google's cloud. Marcus.

Marcus

The numbers are striking. Apple Intelligence, the AI product suite they've been shipping, has an adoption problem. Fewer than thirty percent of users have meaningfully engaged with it. The servers Apple built for private cloud compute, which was supposed to be their differentiating privacy story, are running at a tenth of capacity.

Kate

So their answer is to hand it to Google?

Marcus

The talks are reportedly about using Google's cloud infrastructure to power next-generation Siri. Which is extraordinary. Apple's entire AI pitch has been about privacy. Processing on your device or on Apple's own servers. Not sending your data to third parties. And the company they're considering partnering with for the most personal AI assistant on the most personal device you own is Google, whose entire business model is advertising.

Kate

That's not a small tension.

Marcus

It's a potentially serious brand problem. Apple has charged a premium on that privacy story for years. If Siri is running on Google's infrastructure, the calculus changes for a lot of people. And the technical irony is complete. Apple built the custom silicon, the server infrastructure, the privacy architecture, and then couldn't get users to actually use it.

Kate

Why do you think adoption is so low?

Marcus

The features aren't compelling enough compared to what ChatGPT and Claude can do. Apple Intelligence is helpful but it doesn't feel magical. And when you've used a genuinely capable AI assistant, Apple's offering feels cautious. Cautious doesn't drive adoption when the competition is moving at this speed.

Kate

Okay, this one. Ars Technica, one of the most respected tech publications in the world, just fired Benj Edwards, their senior AI reporter. The reason: AI tools fabricated quotes in his article. The article was about an AI going rogue.

Marcus

The recursion is almost too perfect. Edwards is a genuinely well-regarded journalist who has covered AI critically and carefully for years. The article examined an AI system behaving erratically, and some quotes attributed to sources were fabricated by the AI writing tools used during reporting. Ars Technica confirmed the firing.

Kate

An AI reporter writing about AI going wrong, with his reporting corrupted by AI going wrong.

Marcus

And it's also genuinely tragic for someone who cared deeply about accurate AI coverage. But it's one of the clearest warnings we've seen about where these tools fail most dangerously. Factual claims, quotes, attributions, citations. Those are exactly the categories where AI assistance is most harmful when it goes wrong, because the errors are plausible-sounding and the consequences are serious.

Kate

If this can happen to a senior journalist at a rigorous outlet, it can happen anywhere.

Marcus

That's exactly the lesson. Not that AI writing tools are useless in journalism, but that human verification of specific facts and quotes cannot be delegated to the same tools that might have introduced the error. The check can't be performed by the thing you're checking.

Kate

Quick hits. OpenClaw, an AI agent framework, just surpassed React on GitHub stars. Two hundred and forty-seven thousand stars in four months versus React's decade-long total. Marcus, should we be impressed?

Marcus

The growth rate is genuinely remarkable. But the Hacker News community was appropriately skeptical, and I'm with them. GitHub stars are easy to game, don't indicate actual usage, and have been inflated by social media campaigns before. React has hundreds of millions of production deployments. Comparing star counts to measure adoption is like comparing book sales to library check-outs. They measure something, just not the same thing.

Kate

Google DeepMind also released a new image model called Nano Banana 2. That name.

Marcus

The name is unfortunate, the technology is interesting. Fast image generation at professional quality, with notably better text rendering than most image models, which has historically been a weak point. And it maintains character consistency across multiple generated images, which matters for anyone building visual content pipelines. Not a headline story but a meaningful capability improvement for a specific use case.

Kate

Microsoft had a community management moment this week. Their Discord banned anyone who used the word Microslop, which is apparently what people call Copilot when it produces bad outputs.

Marcus

Which apparently happens frequently enough that the term spread widely. Microsoft locked down their Discord server after the backlash, which made things considerably worse. The Hacker News post hit over a thousand points. The core issue is that community frustration with Copilot quality is real and significant, and trying to police the language people use to describe it is a strategy that reliably amplifies the original complaint.

Kate

You cannot ban a nickname. You can only make it more famous.

Marcus

Every product manager in tech should print that and put it on their wall.

Kate

Last quick hit. An essay called "The AI Clownpocalypse" made the rounds, from the creator of the spaCy library. The argument is that AI's real risk isn't Terminator scenarios. It's mundane failure at scale.

Marcus

The argument is worth taking seriously. The sci-fi framing of AI risk is about catastrophic decisions by superintelligent systems. But the actual risk accumulating right now is millions of AI-generated outputs that are slightly wrong, slightly hallucinated, slightly off, deployed across every layer of how we communicate and decide. Not one catastrophic failure, but a fog of degraded quality and ambient error that we normalize because each individual instance seems minor.

Kate

And nobody rings the alarm because nothing individually rings the alarm.

Marcus

Which is exactly why naming it matters. The Ars Technica story from earlier today is a small example. One fabricated quote. Not the end of journalism. But scale that pattern across every newsroom, every content team, every report, every customer service interaction, and the aggregate effect becomes significant even when each individual failure is forgiven.

Kate

Tuesday big picture, Marcus. Altman called his own Pentagon deal rushed and sloppy. Meta's smart glasses have workers saying they see everything. Apple's AI bet is sitting at ten percent utilization while they negotiate with Google. A senior AI reporter got fired because AI fabricated quotes in an AI story. What's the thread?

Marcus

Today is a day about the gap between the pitch and the product. OpenAI pitched a principled Pentagon deal and it needed revision. Apple pitched privacy-first AI and it isn't getting used. Meta pitched smart glasses as a lifestyle device and it's becoming a surveillance conversation. And the Ars Technica story is almost a metaphor for the whole moment. Tools that produce outputs that look right, that sound authoritative, that pass casual inspection. And the accountability for what's actually true still falls entirely on humans. What today's stories share is that the human oversight step is the one getting skipped, underinvested, or undermined. And sooner or later, that shows.

Kate

The gap between what it says it can do and what it actually does. That's the story of this moment.

Marcus

And the companies honest about that gap are the ones worth watching. Everyone else is either running from it or hoping no one looks too closely.

Kate

That's your AI in 15 for Tuesday, March 3, 2026. See you tomorrow.