AI in 15 — April 29, 2026
One day. That is how long it took. Microsoft's seven-year exclusivity over OpenAI ended on Monday. On Tuesday, OpenAI's frontier models went live on Amazon Bedrock. Sam Altman didn't even show up at the launch. He was in a federal courtroom in Oakland, watching Elon Musk take the stand.
Welcome to AI in 15 for Wednesday, April 29, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Wednesday show, Marcus, and the alliance map of AI is being redrawn in real time. OpenAI's models are now running on Amazon Bedrock. Google quietly signed a classified Pentagon deal over a six-hundred-employee protest. Wiz disclosed a critical GitHub remote-code-execution bug that AI-assisted reverse engineering helped find. Musk took the stand on day one of his trial against Altman. The Wall Street Journal says OpenAI is missing its internal targets and tech stocks took the hit. Anthropic suffered its second major outage in eight days. ChatGPT's new ad mechanics got dissected in public. And Warp open-sourced its agentic terminal.
OpenAI lands on AWS one day after the Microsoft divorce.
Google takes the Pentagon classified contract its own employees begged it to refuse.
And one git push could compromise GitHub.com.
Lead story, Marcus. Tuesday afternoon, OpenAI's frontier models went live in limited preview on Amazon Bedrock. GPT-5.4 today, GPT-5.5 within a couple of weeks. Plus Codex, plus a new product called Bedrock Managed Agents. Walk me through why the timing matters.
The timing is the story, Kate. Monday, Microsoft and OpenAI publicly amended their partnership to remove Microsoft's exclusivity over OpenAI's APIs in the cloud. Tuesday afternoon, OpenAI is on Bedrock. For seven years OpenAI's commercial business depended on Azure. That dependency is now broken. The deal also delivers on OpenAI's February promise to Amazon: in exchange for up to thirty-five billion dollars in financing, OpenAI committed to running training and inference on two gigawatts' worth of Amazon's Trainium accelerators.
And Altman skipped his own launch event.
He sent a recorded video. He couldn't be there because he was in court in Oakland for the Musk trial. AWS CEO Matt Garman ran the event solo and pitched Bedrock as the one-stop shop where regulated enterprises can run OpenAI alongside Anthropic's Claude under existing AWS data-residency contracts.
Why is that pitch so loaded?
Because enterprise AI buyers in finance, healthcare, and the public sector typically have multi-year AWS contracts and trust the AWS data perimeter. A lot of those orgs have been buying Claude through Bedrock specifically because they refused to negotiate a separate data-protection agreement with OpenAI. That barrier just collapsed. Microsoft's preferred-partner advantage is suddenly worth a lot less. And Anthropic's Bedrock head start, which has been a real moat, is suddenly being commoditised. This is the biggest cloud-AI realignment since the original 2019 Microsoft deal — and it landed inside thirty-six hours of the exclusivity ending. That is not a coincidence. That is choreography.
Quick hits, Marcus. Story one, and it is heavy. Google signed a classified AI deal with the Pentagon at four p.m. Monday. Over an open letter from more than six hundred of its own employees.
The new agreement supersedes Google's previous unclassified-only DoD contract and lets Pentagon staff use Gemini on classified data and operations for, quote, any lawful government purpose. Critically, Google agreed to adjust its AI safety settings and filters at the government's request, and it explicitly does not have the right to veto specific lawful government uses. Google's public guardrails against domestic mass surveillance and against autonomous weapons without human oversight remain on paper, but they are not contractually binding, mission-by-mission limits.
And the employee letter.
More than six hundred Google employees signed an open letter to Sundar Pichai urging him to refuse classified military work, citing fears around lethal autonomous weapons and mass surveillance. Pichai signed anyway. This is the quiet reversal of the 2018 Project Maven posture, when employee protest forced Google out of Pentagon contracting entirely. The Maven precedent is now dead. And Google joins OpenAI and xAI as classified DoD contractors. Anthropic is the only frontier lab on the outside. The takeaway is that AI commercialisation pressure now outweighs internal cultural objections, and the workforce-protest leverage that constrained the industry for half a decade has clearly weakened.
Marcus, Wiz Research dropped a critical GitHub disclosure yesterday. CVE-2026-3854. And the headline isn't just the bug.
The bug itself is severe: an X-Stat header injection in babeld, GitHub's internal git proxy. When a user passed options with git push dash-o, those push options were copied verbatim into a semicolon-delimited internal header without the semicolons being sanitised. Three injectable fields chained together to bypass the sandbox, redirect hook scripts, and run arbitrary code with one git push. On GitHub.com's multi-tenant shared storage, a successful exploit could read across tenants: millions of public and private repos. Reported March fourth, patched on GitHub.com within six hours. Public disclosure landed Tuesday, and eighty-eight percent of self-hosted GitHub Enterprise instances were still unpatched at disclosure.
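For listeners who want the mechanics concrete, here is a minimal Python sketch of the vulnerability class described above: user input copied verbatim into a semicolon-delimited header. The function and field names are hypothetical illustrations, not GitHub's actual babeld code.

```python
# Illustrative sketch of the bug class only. Function and field names
# ("hook_path", etc.) are invented, not GitHub's real babeld internals.

def build_header_unsafe(repo: str, push_option: str) -> str:
    # Vulnerable pattern: a user-controlled `git push -o` value is
    # copied verbatim into a semicolon-delimited internal header.
    return f"repo={repo};push_option={push_option}"

def parse_header(header: str) -> dict:
    # Downstream service splits on semicolons and trusts every field.
    return dict(field.split("=", 1) for field in header.split(";"))

def build_header_safe(repo: str, push_option: str) -> str:
    # Minimal fix: refuse values containing the field delimiter.
    for value in (repo, push_option):
        if ";" in value:
            raise ValueError("delimiter not allowed in header value")
    return f"repo={repo};push_option={push_option}"

# A crafted push option smuggles an extra, attacker-chosen field that
# the parser then treats as trusted metadata.
evil = "x;hook_path=/tmp/attacker"
fields = parse_header(build_header_unsafe("octocat/hello", evil))
assert fields["hook_path"] == "/tmp/attacker"  # injected field accepted
```

The same one-character delimiter oversight is what let three such fields chain into sandbox escape and code execution in the real report.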
So get patching. But the AI angle is what's making the rounds.
Wiz says they used IDA MCP — a Model Context Protocol bridge to the IDA Pro reverse-engineering disassembler — to let LLM agents do the protocol reconstruction and binary analysis work that used to take weeks of senior researcher time. The top Hacker News comment called it a watershed moment for AI tooling enabling reverse engineering and compromise discovery. Defenders have feared this for two years. Here it is, shipping, with a CVE number on it. The arms race between AI-assisted offence and AI-assisted defence is no longer hypothetical, Kate. It just produced its first named GitHub-grade vulnerability.
Musk versus Altman, Marcus. We previewed the trial Monday. Day one happened yesterday.
Opening arguments and Musk's testimony began Tuesday in the Northern District of California in Oakland. Musk is seeking a hundred and thirty billion dollars in damages and asking the court to force OpenAI back into a non-profit structure and remove Altman and Greg Brockman from the board. His core claim is that OpenAI's restructuring into a for-profit entity violated the charitable trust attached to the roughly thirty-eight million dollars he donated when he co-founded the lab in 2015. OpenAI's lawyer told the jury, and I quote, we are here because Mr. Musk didn't get his way at OpenAI. He quit, saying they would fail for sure. But my clients had the nerve to go on and succeed without him.
And the witness list runs deep.
Altman, Brockman, Satya Nadella, and Ilya Sutskever are all expected to testify. The trial is expected to run into late May barring settlement. The legal question is real. Can a non-profit founded with charitable donations later restructure as a five-hundred-billion-dollar for-profit without breaching trust? The outcome will set precedent for every open-AI lab that converts to commercial. And the optics keep getting worse. Altman literally couldn't show up at his own AWS launch event Tuesday because of this trial. That is how directly it is pulling at OpenAI's centre of gravity right now.
And speaking of OpenAI's centre of gravity, Marcus, the Wall Street Journal dropped a piece Tuesday that moved markets.
The WSJ reported that OpenAI has been missing internal targets for both revenue and weekly active users in early 2026, including its goal of one billion weekly active ChatGPT users by year-end 2025. CFO Sarah Friar has reportedly warned colleagues privately that if growth doesn't accelerate, OpenAI could struggle to fund the compute commitments it has signed. The market response was sharp. Oracle, which has a five-year, roughly three-hundred-billion-dollar compute deal with OpenAI, fell about four percent. AMD was down three, Broadcom four, CoreWeave six. SoftBank, which committed sixty billion dollars to OpenAI, fell about ten percent in Tokyo trading.
And OpenAI's response.
They called the piece, quote, clickbait, and said the business is, quote, firing on all cylinders. Here's why the report is dangerous regardless. The entire AI capex bull case — the one that justifies hundreds of billions of dollars of data-centre buildout — runs on the assumption that OpenAI's revenue grows fast enough to absorb its compute commitments. Even a hint that the curve is bending the wrong way ripples straight through Oracle, CoreWeave, AMD, and SoftBank. It is conspicuous, Kate, that this story landed the same week as the AWS deal that conveniently brings up to thirty-five billion of fresh financing. The numbers we covered yesterday — Microsoft restructure, AWS option, Stargate buildout — all of them assume the user growth is there. The WSJ is the first credible public hit on that assumption.
Anthropic, Marcus. Second major outage in eight days yesterday.
Claude.ai, Claude Code, and the Anthropic API all went into elevated-error and login-failure mode at 17:34 UTC Tuesday and recovered around 18:52, a seventy-eight-minute window. Downdetector logged over twelve thousand user reports. Hacker News commenters noted the company is now down to roughly one nine of uptime, meaning about ninety percent, over the trailing ninety days. One enterprise customer reportedly paying two hundred thousand dollars a month said their leadership team is, quote, furious.
And the timing of the outage versus the Bedrock launch.
That is the part that has to sting. Anthropic has been the trusted enterprise option. Privacy-conscious orgs picked Claude through Bedrock specifically because they didn't trust OpenAI. That premium positioning depends on uptime that comfortably beats a noisy consumer chatbot. With OpenAI now landing on Bedrock under the same data-residency contracts, the enterprise segment has a credible alternative again. Anthropic's reliability story has flipped from a moat to a vulnerability inside a single news cycle. The libertarian read, Kate, is that this is competitive markets working as advertised. When one provider falters, customers now have a real second option in the same trust tier. That's the system functioning, not failing.
ChatGPT ads, Marcus. The technical writeup of how the ad mechanics actually work hit Hacker News this week.
The piece picked apart how ChatGPT now serves ads on the free tier and the new eight-dollar-per-month ChatGPT Go plan. Ads run as separately fetched events at the bottom of answers, triggered by conversation context, with a defined attribution loop back to advertisers. OpenAI says Plus, Pro, Business, and Enterprise stay ad-free, and conversations aren't sold to advertisers. The HN commentary surfaced Sam Altman's late-2024 quote that he, quote, kind of thinks of ads as a last resort for us as a business model — versus what just shipped.
And the bigger concern.
The bigger concern isn't these clearly marked sponsored slots. It's eventual model-level ad influence on answers, which would be far harder for users to detect or block. Eight hundred million weekly users now have an ad-supported AI answering their questions. The honest libertarian take is that somebody has to pay the GPU bill, and a transparent, metered ad model is more honest than a loss-making subscription that quietly degrades. The counter-take is that an AI mediating almost every information request, fed by ad-relevance signals, is structurally a different beast from a search engine showing ads next to ten blue links. Both readings are correct. We're going to be living with the consequences of whichever one wins for the next decade.
Last quick hit, Marcus. Warp open-sourced its agentic terminal.
Tuesday, Warp Labs published the source for its Rust-based agentic terminal client to GitHub under AGPL version three. The repo crossed roughly twenty-six to twenty-nine thousand stars within a day. OpenAI is named as a founding sponsor of the open release. Crucially, the agent orchestration layer they call Oz, the cloud platform, and the enterprise features remain proprietary. The open source is the client surface only. The signal here is that even a fifty-million-dollar developer-tools company now believes the durable moat is the cloud agent layer, not the local app. The terminal is becoming a recruiting funnel for the cloud-agent revenue.
And one quick footnote, Marcus. Anthropic also joined the Blender Development Fund yesterday at the top tier, at least two hundred forty thousand euros a year. Alongside Netflix, Epic, Google, Meta, NVIDIA, and Adidas.
Same day they shipped an official Blender connector for Claude built on the Model Context Protocol. The donation is targeted at the Blender Python API, which is exactly the integration surface Claude needs to drive 3D software programmatically. So it is real OSS support and good business, simultaneously. One of the cleaner AI-lab-supports-the-commons moves we've seen this year.
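To give a flavour of that integration surface, here is a minimal Python sketch of the kind of Model Context Protocol request a connector could send to run a snippet against Blender's Python API. The JSON-RPC tools/call envelope is the real MCP shape, but the tool name "blender_exec_python" and its argument schema are assumptions for illustration; Anthropic's actual connector schema isn't detailed in this episode.

```python
import json

# Hypothetical MCP tool call driving Blender's Python API (bpy).
# The "tools/call" JSON-RPC envelope follows the MCP spec; the tool
# name "blender_exec_python" and its arguments are invented here.

def make_tool_call(request_id: int, code: str) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "blender_exec_python",
            "arguments": {"code": code},
        },
    })

# The model asks the connector to add a cube via the bpy API, which is
# exactly the surface the Blender fund donation targets.
msg = make_tool_call(1, "import bpy; bpy.ops.mesh.primitive_cube_add(size=2)")
payload = json.loads(msg)
assert payload["method"] == "tools/call"
assert "bpy.ops" in payload["params"]["arguments"]["code"]
```

The point is that a richer, better-documented bpy API directly widens what an agent can do through a bridge like this, which is why the donation and the connector land together.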
Wednesday big picture, Marcus.
Today's news reads as one story told eight ways, Kate. The AI industry is consolidating into a smaller number of much deeper, much less reversible commitments. OpenAI is now structurally inside Amazon's cloud. Google is structurally inside the Pentagon. OpenAI's compute commitments are big enough that even a hint of a missed quarter rippled through Oracle, AMD, and SoftBank in a single trading session. Anthropic's enterprise moat sprang a leak the same week its largest competitor moved next door. The optionality of the 2023 lots-of-small-AI-startups era is gone. From here, who you bank with, who you're hosted by, and which government you sell to are first-order facts about an AI lab, not afterthoughts. The map is getting drawn in ink, not pencil.
That's your AI in 15 for today. See you tomorrow.