
AI in 15 — May 13, 2026

May 13, 2026 · 16m 22s
Kate

An early number that Mr. Musk threw out was that he should have ninety percent of the equity to start. That was Sam Altman, under oath, in federal court in Oakland yesterday, describing the founding pitch from the man now suing him. Asked if business associates had ever called him a liar, Altman said, I have heard people say that.

Kate

Welcome to AI in 15 for Wednesday, May thirteenth, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Big midweek lineup, Marcus. Sam Altman took the stand in Musk versus OpenAI and the cross-examination was brutal. OpenAI shipped Daybreak — the defensive answer to the Google zero-day warning we covered yesterday — built on a new GPT-5.5-Cyber tier. Microsoft and OpenAI capped their revenue-sharing deal at thirty-eight billion dollars, clearing the runway for an IPO. DeepMind tried to reinvent the mouse pointer for the AI era and Hacker News was not impressed. A tiny startup distilled Gemini's tool-calling into a twenty-six-million-parameter open-source model. And Amazon employees confess they're gaming internal AI leaderboards by running fake tasks through the company's own agent platform.

Kate

Altman on the stand, Musk wanted ninety percent.

OpenAI ships the defensive playbook for yesterday's threat.

And Amazon engineers admit to tokenmaxxing.

Lead story, Marcus. Altman testified yesterday in the Musk trial, and this is the most consequential legal moment OpenAI has ever faced. Walk me through it.

Marcus

Heavy day in Oakland, Kate. Sam Altman spent hours on cross-examination by Musk attorney Steven Molo. The headline admission was that, quote, an early number that Mr. Musk threw out was that he should have ninety percent of the equity to start. Altman also told the jury Musk had pushed to merge OpenAI directly into Tesla before walking away from the board in 2018. And he said Musk's departure was a, quote, morale boost — that the co-founders had explicitly concluded no single person, Musk included, should control AGI.

Kate

And then the credibility attacks.

Marcus

Pointed ones, Kate. Molo asked Altman whether he had always told the truth. Altman said, quote, I'm sure there are some times in my life when I did not. Asked whether business associates had called him a liar, he said, I have heard people say that. Ilya Sutskever reportedly testified against Altman earlier in the trial. Closing arguments are tomorrow, Thursday May fourteenth, with an advisory jury verdict and Judge Yvonne Gonzalez Rogers' ruling expected next week.

Kate

Why does this matter beyond the courtroom drama.

Marcus

Because the entire restructuring of OpenAI hinges on it, Kate. Musk's suit alleges that converting from non-profit to for-profit betrays the founding charter. If Judge Gonzalez Rogers agrees, the for-profit conversion — and with it the planned late-2026 IPO — gets thrown into legal limbo. And the timing is exquisite. This trial is unfolding the same week Microsoft and OpenAI finalized the thirty-eight-billion-dollar revenue cap, which we'll get to in a minute, a restructuring that only makes sense if the conversion holds. Satya Nadella is also testifying in this trial. The next seven days could either clear OpenAI's runway to the public markets or genuinely blow it up. There's no third option.

Kate

Quick hits. Marcus, OpenAI launched Daybreak yesterday — and the timing is remarkable.

Marcus

Twenty-four hours after Google's warning we covered yesterday, Kate. Daybreak is OpenAI's AI cyber-defense platform. It uses GPT-5.5 and a Codex Security agent to build editable threat models directly from a company's source code, walk realistic attack paths, validate exploitable vulnerabilities inside isolated sandboxes, and then generate and test patches automatically. OpenAI says it cuts analysis time from hours to minutes and produces audit-ready evidence. Codex Security has already resolved more than three thousand critical issues across over a thousand open-source projects. Launch partners are heavy hitters — Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Akamai.

Kate

And there's a new gated model tier.

Marcus

That's the structural news here, Kate. Daybreak ships with three tiers — default GPT-5.5, GPT-5.5 with Trusted Access for Cyber, and a fully gated GPT-5.5-Cyber for verified defensive workflows only. So OpenAI has now adopted the same withhold-the-dangerous-version playbook Anthropic introduced with Project Glasswing and Claude Mythos. The frontier labs are converging on capability-gated tiers as the industry default. And it's a direct shot at Anthropic — Glasswing's roughly fifty vetted partners get autonomous vulnerability discovery, but Daybreak's launch consortium is much broader and clearly aimed at owning the enterprise security market.

Kate

And the timing relative to yesterday.

Marcus

Surgical, Kate. Google's Threat Intelligence Group goes public Monday with proof that criminals are already using LLMs to find zero-days. OpenAI ships the defensive product Tuesday. The offense-defense balance in cybersecurity is being rewritten on a daily cadence now, and frontier labs are openly competing in this space. It also gives the Trump Commerce Department's pre-release model evaluation agreements, signed with OpenAI, Google, and xAI last week, a concrete product to point at.

Kate

Microsoft-OpenAI story, Marcus. The Information dropped the number. Thirty-eight billion.

Marcus

Per The Information's reporting this week, Kate. OpenAI and Microsoft have agreed to cap the total revenue-sharing obligation at thirty-eight billion dollars. Microsoft keeps a non-exclusive license to OpenAI's IP for models and products through 2032. But OpenAI is now free to expand cloud and infrastructure partnerships — including the SpaceX-style compute deals Anthropic recently struck and OpenAI's growing relationships with Amazon and Google. The revised contract also removes Microsoft's reciprocal revenue-share obligation back to OpenAI.

Kate

So what does this actually unlock.

Marcus

The IPO path, Kate. With an uncapped revenue share, any equity investor was effectively underwriting a perpetual Microsoft annuity on top of OpenAI's revenue. That's not a public-markets-ready capital structure. Capping it at thirty-eight billion lets bankers actually model the long-term economics. Microsoft's thirteen-billion-dollar investment is still protected by that ceiling. For OpenAI, the long-term unit economics improve dramatically once the cap is paid out. And critically, this confirms what's been obvious in Satya Nadella's posture for months — Microsoft is no longer betting exclusively on OpenAI. They're hedging across in-house Phi models, Anthropic on Azure, and other partners. This is two companies that need each other slightly less than they did a year ago, formalizing that fact in writing.

Kate

DeepMind story, Marcus. Yesterday they published something called the AI Pointer.

Marcus

Genuinely interesting concept, divisive demo, Kate. DeepMind's pitch is a context-aware cursor that lets you gesture and speak to manipulate content across applications. You point at something on screen, say, quote, fix this, or, move that here, or, double the recipe, and the AI acts on whatever you're pointing at. Four design principles — maintain flow across apps, show-and-tell via visual context, embrace natural shorthand, and turn pixels into actionable entities. It's being integrated into Chrome's Gemini features and into a new laptop they're calling the Googlebook, under the feature name Magic Pointer. Google Labs' Disco platform is handling extended testing.

Kate

And Hacker News was not kind.

Marcus

One hundred seventy-six points, one hundred forty-four comments, mostly skeptical, Kate. The top complaints — the voice-controlled examples in the demo ran slower than a right-click menu, looked awkward to use in any office or coffee shop where other humans can hear you, and solved only part of a UX problem that exists mostly because the web has gotten less keyboard-friendly. The broader thesis is genuine, though. What does desktop UX look like when the model can see your screen continuously. Google is staking out a position that the pointer, not the chat window, becomes the primary AI interface. The cool reception from technical users is a useful signal that the talk-to-your-computer thesis still has real product-market-fit problems. But this is where Google I-O next Tuesday is heading, so expect a polished version on the keynote stage.

Kate

Open-source story, Marcus. Cactus Compute dropped something called Needle yesterday.

Marcus

Beautiful piece of engineering, Kate. Needle is a twenty-six-million-parameter function-calling model — yes, million, not billion — distilled from Gemini 3.1 Flash-Lite. It's built on a stripped-down architecture they call a Simple Attention Network, which drops the MLP layers entirely. The argument is that with external knowledge access through tools, MLPs aren't pulling their weight anyway. They pretrained on two hundred billion tokens across sixteen TPU v6e chips in twenty-seven hours, then post-trained on two billion synthetic tool-calling tokens in forty-five minutes. In production it hits six thousand tokens per second prefill and twelve hundred tokens per second decode — on consumer hardware. The whole model is a fourteen-megabyte binary.
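
Cactus hasn't published Needle's internals, but the core idea of an attention-only block, self-attention plus a residual connection with the usual MLP sublayer removed, can be sketched in a few lines of NumPy. Everything below (single-head attention, the dimensions, the absence of layer norm) is an illustrative assumption, not the Simple Attention Network's actual code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_only_block(x, wq, wk, wv, wo):
    """One 'MLP-free' transformer block: self-attention plus a
    residual connection, with the feed-forward MLP sublayer omitted."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (seq, seq) attention logits
    out = softmax(scores) @ v                # mix token values by attention
    return x + out @ wo                      # residual connection; no MLP

rng = np.random.default_rng(0)
d = 64                                       # illustrative model width
x = rng.standard_normal((8, d))              # a sequence of 8 token vectors
wq, wk, wv, wo = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
y = attention_only_block(x, wq, wk, wv, wo)
print(y.shape)  # (8, 64)
```

A production block would add normalization and multiple heads; the point is only that nothing in it is an MLP.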

Kate

Why does this matter.

Marcus

On-device agents need cheap, fast tool routing far more than they need raw intelligence, Kate. If your job is to look at a user instruction and decide which API to call, you do not need a four-hundred-billion-parameter model. Needle is a clean demonstration that you can shrink a specialist task from a frontier model into something that fits in browser memory. Three hundred seventy-three points on Hacker News, real developer interest. There's a juicy subplot too — one top commenter noted that Google may be running real-time proactive defenses to degrade student-model performance against distillation attempts. So the frontier labs are now apparently trying to actively defend their moats against the very technique that produced Needle. The cat-and-mouse layer of the AI economy that nobody is publicly talking about.
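
To make the cheap-tool-routing point concrete, the task a model like Needle is trained on has roughly this shape: map a user instruction to one tool schema. The keyword lookup below is a deliberately dumb stand-in for the learned model, and the tool names are invented for illustration:

```python
# Toy stand-in for a tiny function-calling model: map an instruction to
# one of a few tool schemas. Tool names and matching rules are invented
# for illustration; a model like Needle learns this mapping instead.
TOOLS = {
    "get_weather": ["weather", "forecast", "temperature"],
    "send_email": ["email", "mail", "message"],
    "set_timer": ["timer", "remind", "alarm"],
}

def route(instruction: str) -> str:
    """Return the name of the tool to call, or 'no_tool'."""
    words = instruction.lower()
    for tool, keywords in TOOLS.items():
        if any(k in words for k in keywords):
            return tool
    return "no_tool"

print(route("What's the forecast for Oakland tomorrow?"))  # get_weather
print(route("Remind me in ten minutes"))                   # set_timer
```

The routing decision needs to be fast and cheap far more than it needs to be clever, which is why a fourteen-megabyte specialist can plausibly replace a frontier model for this one job.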

Kate

Workplace story, Marcus, and it is genuinely funny and genuinely concerning. Amazon employees are tokenmaxxing.

Marcus

Tokenmaxxing, Kate. The Financial Times reported, as relayed by Ars Technica, that Amazon has set internal targets requiring more than eighty percent of its developers to use AI tools each week, with consumption tracked on internal leaderboards. Some employees have responded by running unnecessary tasks through an in-house agent platform called MeshClaw, which can deploy code, triage emails, and act inside Slack — purely to inflate their token counts. One employee said, quote, so much pressure to use these tools. Another called it, quote, perverse incentives. Amazon says usage stats don't formally factor into performance reviews, but employees believe managers are watching. Similar patterns have been reported at Meta and Microsoft.

Kate

Beyond the workplace-comedy angle.

Marcus

This is the question every CIO and every AI investor needs to be asking right now, Kate. If a meaningful share of the token consumption underpinning hundreds of billions in AI capex is performative — engineers piping fake work through agents to hit a metric — what is the actual underlying enterprise demand. This is a textbook Goodhart's Law case. The moment you make AI usage the metric, the metric stops measuring AI usage. And it dovetails directly with the Microsoft-OpenAI revenue cap story we just covered. If Microsoft is hedging away from exclusive OpenAI commitment, part of the reason may be that they're staring at the same inflated internal usage numbers and asking the same question. The bull case for AI capex still rests on real productivity. The bear case just got a new chart.

Kate

Quick footnote, Marcus. Thinking Machines and the OpenAI Deployment Company we covered yesterday. Anything new today.

Marcus

Just market reaction, Kate. Thinking Machines' Interaction Model benchmark numbers — four hundred milliseconds of latency versus GPT-realtime's eleven hundred eighty — are circulating widely in voice-AI developer circles. And the Deployment Company's Tomoro acquisition is being framed in the FT as a direct shot at Accenture's AI services revenue. The big-four consultancies' stock prices moved noticeably yesterday. Both stories are landing harder on day two than they did on day one.

Kate

Big picture, Marcus.

Marcus

Three threads tie today together, Kate. First, AI cybersecurity went from theoretical to operational in a single twenty-four-hour cycle. Google confirms criminals are using LLMs to find zero-days Monday. OpenAI ships Daybreak Tuesday. This is what mature AI-versus-AI competition looks like at the product layer, and capability-gated tiers like GPT-5.5-Cyber are now the industry default.

Second, the frontier labs are quietly restructuring themselves around services and infrastructure, not just models. The four-billion-dollar OpenAI Deployment Company, Anthropic's Wall Street services joint venture, the Microsoft revenue cap — all point to a world where the model itself is increasingly the loss leader and the deployment is the business.

Third, the Altman-Musk trial is unfolding precisely as OpenAI takes the final steps toward an IPO. Closing arguments tomorrow, verdict next week. Add Mira Murati's Thinking Machines re-entering the frontier conversation, and you've got an industry that, three years into the boom, is still very much in its early shakeout phase.

The pro-Western, libertarian read, Kate, is that this competition is exactly what you want to see. Multiple credible frontier labs, real capital discipline showing up in contract caps, and capability gating happening voluntarily ahead of regulation. The risk is the tokenmaxxing story — if enterprise demand is partly fake, this whole structure has a soft middle.

Kate

That's your AI in 15 for today. See you tomorrow.