AI in 15 — March 09, 2026
One million new users a day. That's how fast people are flooding into Claude after the ChatGPT exodus, and Anthropic can't build capacity fast enough to keep up.
Welcome to AI in 15 for Monday, March 9, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Happy Monday, Marcus. We've got a lot to get through. The ChatGPT-to-Claude migration is now backed by hard numbers, and the capacity problems are real. Oracle is considering cutting thirty thousand jobs to fund AI data centers. Nearly nine hundred Google and OpenAI employees signed an open letter demanding military AI limits. OpenAI keeps redefining what AGI means. Apple's Siri upgrade is hitting more delays. And Microsoft figured out how to store five terabytes of data on a piece of kitchen glass for ten thousand years. Here's the rundown.
The ChatGPT exodus hits record numbers as Claude downloads surpass ChatGPT's for the first time ever.
Oracle eyes thirty thousand layoffs to fund its AI data center buildout while banks quietly back away from the table.
Nearly nine hundred workers at Google and OpenAI sign a joint petition demanding limits on military AI.
And OpenAI's shifting definition of AGI gets a thorough public autopsy. Let's get into it.
Marcus, we've been covering the migration from ChatGPT to Claude for over a week now. But the Sensor Tower data that dropped over the weekend puts actual numbers on it for the first time. And they're staggering.
U.S. uninstalls of the ChatGPT mobile app spiked two hundred and ninety-five percent on February 28th. That's thirty times the app's average daily uninstall rate. Downloads fell thirteen percent on the same day. One-star reviews surged seven hundred and seventy-five percent. Five-star ratings dropped by half. Those aren't gradual trends. That's a cliff.
And on the Claude side?
Claude downloads surged fifty-one percent on February 28th, marking the first time Claude's daily U.S. downloads exceeded ChatGPT's. The app hit number one on the U.S. App Store free apps list and stayed there for five days. It also topped the charts in Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland. Anthropic says free users are up more than sixty percent since January, paid subscribers have more than doubled in 2026, and daily sign-ups are exceeding one million.
A million new sign-ups per day. And the irony is that this success is creating its own crisis.
Exactly. Forbes reported that Claude is struggling to cope with the volume. Users are hitting rate limits, experiencing slowdowns, getting service interruptions. The infrastructure wasn't built for this kind of sudden influx. And here's the thing that makes this migration so unusual. It wasn't triggered by a technical leap. Claude didn't suddenly become dramatically better than ChatGPT overnight. This was driven almost entirely by ethical positioning. Anthropic refused to let the Pentagon use Claude for mass domestic surveillance or fully autonomous weapons. Trump blacklisted them. OpenAI rushed to take the Pentagon deal. And the public chose sides.
The Hacker News discussion had an interesting observation about how "non-sticky" these AI products really are.
That's the part that should worry every AI company. Multiple commenters noted how trivially easy it was to switch. Copy your conversation history, change your bookmark, done. There's almost no switching cost. Which means the moat for consumer AI isn't technology. It's trust. And trust can shift overnight, as OpenAI just learned the hard way. The flip side is that it can shift back just as quickly if Anthropic can't solve the capacity problems. Users who left ChatGPT because of principles will leave Claude because of performance if the service stays degraded.
So the window to convert these users into loyal customers is narrow.
Very narrow. And Anthropic is in this awkward position where the thing that made them popular, taking a principled stand, is now the thing straining their infrastructure. Growth this fast is a good problem to have, but only if you can actually serve the customers.
Let's talk about Oracle. Thirty thousand potential layoffs to fund AI data centers. Marcus, that's a huge number.
TD Cowen estimates Oracle needs a hundred and fifty-six billion dollars in capital expenditure for its AI data center expansion. To free up eight to ten billion in cash flow, Oracle is considering cutting twenty to thirty thousand jobs. And the situation got more urgent because multiple U.S. banks have pulled back from Oracle-linked data center project lending. So you have a financing gap at exactly the moment when the spending commitment is enormous.
Banks getting nervous about AI infrastructure lending is a signal worth paying attention to.
It absolutely is. Oracle is also reportedly weighing a sale of Cerner, its healthcare software unit that it acquired for twenty-eight point three billion dollars in 2022. So you're looking at mass layoffs plus a potential fire sale of a major acquisition, all to fund infrastructure for AI workloads that haven't materialized at the scale needed to justify the investment. The Hacker News discussion was sharp on this. One comment warned that we'll continue being told layoffs are because AI is driving efficiency, when the real reason is that AI is driving capex and investors won't tolerate free cash flow collapsing.
And there's the additional risk that the infrastructure you're building today could be obsolete in a few years.
That's the ASIC efficiency argument several commenters raised. If custom chips improve as fast as they have been, the data centers you build in 2026 could be dramatically oversized for the workloads of 2029. You're betting billions on current hardware trajectories continuing. Oracle's primary justification is its relationship with OpenAI and other major AI clients. But at an eight hundred and forty billion dollar valuation, OpenAI itself is under enormous pressure to justify revenue projections. It's bets on top of bets.
The military AI story continues to ripple through the industry. Nearly nine hundred tech workers at Google and OpenAI signed a joint petition. Marcus, this is unusual.
Nearly eight hundred from Google and close to a hundred from OpenAI, signing the same letter titled "We Will Not Be Divided." The letter calls for prohibiting the military from using AI to surveil American citizens without judicial oversight and from deploying autonomous weapons without human authorization. Separately, over a hundred Google AI researchers sent an internal letter to chief scientist Jeff Dean requesting explicit boundaries.
This reminds me of the Project Maven controversy back in 2018, but it's bigger and it's crossing company lines.
Much bigger. In 2018, it was Google employees protesting Google's contract. Now you have employees at competing companies signing the same petition. That's significant because it suggests military AI ethics is becoming a cross-industry labor issue, not just an internal governance question at individual companies. And the irony of OpenAI employees signing a letter criticizing military AI use while their own company just signed the Pentagon deal is not lost on anyone.
The trigger was a convergence of events: the Anthropic blacklisting, the Iran strikes, reports about AI in target selection.
And I think the Iran strikes are the catalyst that turned this from an abstract policy debate into something visceral for these workers. When you hear that AI tools, possibly including tools you helped build, may have been used in target selection for military strikes, that changes the emotional calculus. It's no longer theoretical. As we reported yesterday, a Pentagon official described a "whoa moment" when he realized their entire Iran campaign depended on Claude. These employees are reading those same reports and asking whether they signed up to build weapons.
Now this one's been making the rounds all weekend. A detailed analysis of how OpenAI keeps redefining AGI. Marcus, walk us through the timeline.
It's quite a progression. Sam Altman in May 2023 said AGI was about ten years away. December 2023, about six years. November 2024, about five years. December 2025, AGI "kinda went whooshing by." February 2026, "We basically have built AGI," later clarified as "spiritual, not literal." The goalposts didn't just move; they broke into a full sprint.
And the analysis dug up OpenAI's original 2018 charter, which has a fascinating clause.
A self-sacrifice clause. OpenAI committed to "stop competing" if a value-aligned, safety-conscious project came close to building AGI before they did. The triggering condition was a "better-than-even chance of success in the next two years." By Altman's own public statements, that condition has been met. But the clause remains completely unenforced. The piece drew three hundred and fifty-six points and three hundred and four comments on Hacker News. Commenters were sharply divided, with some arguing AGI is so poorly defined it's meaningless, and others pointing out that the economic implications of these claims matter regardless of definitions.
It's the marketing versus reality gap becoming a story in its own right.
When the largest AI company in the world simultaneously claims to have basically built AGI while also raising a hundred and ten billion dollars to keep building, something doesn't add up. Either AGI is here and the investment is unnecessary, or it's not here and the claims are marketing. The analysis suggests OpenAI has pivoted from discussing AGI achievement to emphasizing ASI, artificial superintelligence, as the new goal. New goalposts, same playbook.
Quick update on Apple. As we covered yesterday, they're partnering with Google's Gemini to fix Siri. But now there are more delays.
iOS 26.4 was targeted for March with three core Siri upgrades: Personal Context, On-Screen Awareness, and In-App Actions. Apple is now spreading features across future versions. Some may slip to iOS 26.5 in May or even iOS 27 in September. They only finalized the Google deal in January, and internal teams are reportedly struggling with integration.
Every month of delay is a month where people just download Claude or ChatGPT directly.
And that's the real competitive threat. Apple chose an unusual strategy by partnering rather than building its own frontier model. The engineering challenge of integrating Google's one point two trillion parameter model while maintaining Apple's privacy standards through Private Cloud Compute is genuinely enormous. But the clock is ticking.
Last one. Microsoft's Project Silica. Five terabytes on a piece of glass that lasts ten thousand years. Marcus, this is cool.
They use femtosecond lasers to create tiny 3D deformations called voxels inside ordinary borosilicate glass, the kind in your kitchen. The breakthrough was shifting from expensive pure fused silica to cheap everyday glass. Write once, read many times, no energy needed to maintain the data. And accelerated aging tests project that the data stays intact over millennia.
Not directly an AI story, but relevant to the AI era.
Very relevant. As AI models generate and require ever-larger datasets, cold storage that doesn't consume power becomes increasingly valuable. Training data preservation, model checkpoints, regulatory archives. The AI era is also a data storage era, and glass that lasts ten thousand years is one answer to the question of where all this data lives long-term.
Monday big picture, Marcus. A million users a day flooding Claude. Oracle cutting thirty thousand jobs to build data centers. Banks getting nervous. Tech workers revolting across company lines. AGI definitions shifting like sand. What ties it all together?
Fragility. The consumer AI market can flip overnight based on a single ethical decision. Oracle's entire workforce strategy hinges on AI demand projections that may not materialize. Banks are quietly reassessing risk. Employee loyalty now crosses company lines based on shared values rather than corporate identity. And the industry's foundational concept, AGI, means whatever the person saying it needs it to mean at that moment. Every pillar of the current AI boom turns out to be more fragile than it appeared a month ago: user loyalty, infrastructure financing, workforce stability, definitional clarity.
Fragile but growing. That's an uncomfortable combination.
It is. And the companies that acknowledge that fragility, that build in redundancy, maintain trust, and resist the temptation to over-promise, those are the ones that will still be standing when the dust settles. The ones running on hype and momentum alone are the ones most exposed.
That's your AI in 15 for Monday, March 9, 2026. See you tomorrow.