AI in 15 — February 23, 2026
Over a billion devices are about to get a new brain. Not a tweak, not an incremental improvement. An entirely new intelligence running underneath the voice assistant that most people gave up on years ago. Apple is about to make the biggest bet in AI distribution history, and they're not even building the model themselves.
Welcome to AI in 15 for Monday, February 23, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Happy Monday, Marcus. We've got a fresh week and some stories that have been building all weekend. Let's preview what's coming.
Apple's completely reimagined Siri is about to land, powered by Google's Gemini under the hood. A billion devices, a trillion-parameter model, and not a single Google logo in sight.
Prediction markets are telling us something the headlines aren't. Anthropic now has eighty-four percent odds of holding the top model ranking, and OpenAI is retreating to a coding niche.
Estonia's intelligence agency caught DeepSeek embedding Chinese propaganda into its answers, and OpenAI just testified to Congress that China is free-riding on American innovation.
And the creator of the most popular open-source AI agent project on GitHub just joined OpenAI to build personal agents your mother could use. Let's get into it.
Marcus, this story has been quietly building for weeks, but it's about to become very real. Apple's new Siri, completely rebuilt from the ground up, powered by Google's Gemini, is set to debut any day now. Walk me through what's actually happening here.
Apple signed a deal with Google worth roughly a billion dollars a year to white-label Gemini as the engine behind Siri. We're talking about a 1.2 trillion parameter model running underneath Apple's assistant. No Google branding visible to users. As far as your iPhone is concerned, it's just Siri, but dramatically more capable.
So this is like the search deal Apple has with Google, but for AI?
It's the AI version of that same playbook. Apple gets a world-class model without spending years building one. Google gets distribution to over a billion devices. But there's a critical difference from the search deal. Apple is running this through its Private Cloud Compute infrastructure, which means user data stays under Apple's privacy umbrella, not Google's. Apple essentially said, we'll use your brain, but we're keeping control of the body.
And the new Siri isn't just answering questions better. There's something called on-screen awareness?
That's where this gets genuinely transformative. The new Siri can see what's on your screen and act on it contextually. It can pull a flight confirmation from your email, connect it to a calendar entry, cross-reference it with a restaurant reservation in your messages, and proactively tell you that you need to leave for the airport in two hours. That kind of cross-app intelligence is what people have wanted from Siri since it launched, and it's only possible because Gemini's multimodal capabilities let the system actually understand context rather than just parsing keywords.
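To make the cross-app idea concrete, here's a minimal sketch of the kind of context fusion Marcus describes: toy records standing in for a flight confirmation from Mail, a Calendar entry, and a reservation from Messages, combined into a proactive nudge. Every structure and name here is a hypothetical illustration, not Apple's actual implementation.

```python
from datetime import datetime, timedelta

# Toy stand-ins for context surfaced from different apps (hypothetical).
email_flight = {"source": "Mail", "flight": "UA 212",
                "departs": datetime(2026, 2, 23, 18, 30)}
message_reservation = {"source": "Messages", "restaurant": "Tartine",
                       "time": datetime(2026, 2, 23, 12, 0)}

def proactive_reminder(flight, airport_buffer=timedelta(hours=2)):
    """Work backward from departure time to a leave-by suggestion."""
    leave_at = flight["departs"] - airport_buffer
    return leave_at, f"Leave for the airport by {leave_at:%H:%M} for {flight['flight']}."

leave_at, nudge = proactive_reminder(email_flight)
# Cross-reference against the lunch reservation pulled from Messages.
if message_reservation["time"] < leave_at:
    nudge += " Your Tartine reservation finishes before then."

print(nudge)
```

The point of the toy is the shape of the problem: the value comes not from any single app's data but from joining records across apps against a shared timeline.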
I feel like we've heard "Siri is getting better" about fifteen times over the years. Why should anyone believe it this time?
Because this time they threw out the old system entirely. Previous Siri improvements were like renovating a house built on a bad foundation. You could add nicer countertops, but the plumbing was still from 2011. This is a demolition and rebuild. The model underneath is the same technology that's claiming benchmark leadership with Gemini 3.1 Pro, which we covered on Friday. Apple is essentially deploying a frontier AI model to a billion devices simultaneously. That's never happened before at this scale.
And the timing is interesting. This launches alongside iOS 26.4, so it's a software update, not a new phone purchase.
That's the distribution advantage that no other AI company can match. OpenAI needs people to download ChatGPT. Anthropic needs people to visit Claude. Google needs people to open the Gemini app. Apple just pushes a software update and suddenly a billion devices have a frontier AI assistant. You don't have to do anything. It just appears. For the average consumer who's never heard of Claude or Gemini or GPT, this might be their first real encounter with a capable AI model. And they'll just call it Siri.
So Google wins the distribution war by giving up the brand?
It's a calculated trade. Google gets inference revenue from a billion devices and proves that Gemini can power the world's largest consumer AI deployment. But they lose the direct relationship with the user. Every person who falls in love with the new Siri is falling in love with Apple's product, not Google's. It's the classic platform dilemma. Do you want the credit or the cash? Google chose the cash.
What does this mean for the other AI companies? If Siri suddenly becomes good, that's a huge chunk of the consumer market that might never bother downloading a standalone AI app.
That's the existential question for everyone else. ChatGPT's biggest growth vector has been people who want a smart AI assistant on their phone. If Siri fills that need well enough, the addressable market for standalone AI apps shrinks considerably. OpenAI, Anthropic, and even Google's own Gemini app are all competing for users who actively seek out AI tools. The new Siri captures the much larger audience of people who just want their phone to work better without thinking about it.
Let's talk about the scoreboard, Marcus, because prediction markets are painting a picture that doesn't match the headlines most people are reading. Anthropic is dominating.
The numbers are striking. As of this week, Anthropic holds eighty-four percent odds to lead the best-model rankings by the February 28 deadline. That's five days from now. When Claude Opus 4.6 launched on February 5, Anthropic's odds jumped from forty percent to sixty-eight percent in hours, and they've only climbed since. The implied ranking is Anthropic first, Anthropic second with a different model variant, and Google third. OpenAI isn't even in the top three.
Wait. OpenAI, the company raising a hundred billion dollars at an eight hundred and fifty billion dollar valuation, isn't in the top three models?
Not according to the prediction markets for general-purpose AI. And that's the key qualifier. OpenAI still dominates one category: coding. They hold seventy-six percent probability for coding leadership through March, driven by GPT-5.3-Codex, which launched February 5 with record scores on SWE-Bench Pro and Terminal-Bench.
So OpenAI has become a coding company?
That's the provocative reading, and it's not entirely wrong. The prediction markets are telling us that if you want the best overall AI model for reasoning, writing, analysis, and general tasks, the smart money is on Anthropic. If you want the best model for writing and debugging code, it's OpenAI. That's a meaningful shift from even six months ago when OpenAI was the default answer for everything.
And there's a wrinkle with that coding model. OpenAI itself flagged a concern?
GPT-5.3-Codex came with an unusual warning. OpenAI classified it as having high cybersecurity capability and introduced tight controls, including delayed developer API access. They're essentially saying this model is so good at writing code that it could be dangerous in the wrong hands. It's the first time a major lab has voluntarily restricted access to a model based on its coding ability specifically, not just general capability.
That's an interesting tension. Your best product is so good you're afraid to fully release it.
And it raises a question for the industry. As coding models get more capable, do they become dual-use tools in the same way that, say, encryption technology is? A model that can write a complex software system can also write sophisticated malware. OpenAI is threading the needle between showing off their best model and acknowledging that best sometimes means most dangerous.
Marcus, let's talk about China, because there are two developments this week that I think need to be discussed together. Estonia's intelligence agency called out DeepSeek for propaganda, and OpenAI testified to Congress about Chinese labs copying American AI.
Estonia's foreign intelligence service published a report stating that DeepSeek, quote, conceals key information and inserts Chinese propaganda into its answers on security matters. Specifically, it promotes the One China policy, flatters Xi Jinping, and actively avoids any discussion of the Uyghur genocide. This isn't a think tank's opinion. This is a NATO country's intelligence agency formally warning that a widely adopted AI model is functioning as a propaganda tool.
And this is a model that developers around the world are using because it's open source and free.
Exactly, and that's the strategic play. We covered on Sunday how Chinese labs are releasing competitive models under permissive open-source licenses to undercut Western competitors. The Estonia report adds a critical dimension to that story. It's not just about economics. When a model that censors the Uyghur genocide and promotes Beijing's territorial claims is embedded into applications used by millions of people worldwide, you're not just distributing software. You're distributing a worldview.
And then OpenAI went to Congress.
OpenAI testified to the House Select Committee on China, and they didn't hold back. Their statement said that DeepSeek's next model should be understood in the context of its, quote, ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs. They're directly accusing DeepSeek of distilling Western models, essentially training on the outputs of ChatGPT and Claude to bootstrap their own capabilities without doing the expensive original research.
Is that accusation new?
The distillation concern has been floating around for months, but OpenAI saying it under oath to Congress is a significant escalation. It's one thing to grumble about it on Twitter. It's another to formally testify that a Chinese lab is systematically copying your work. And the timing matters. As we reported Friday, OpenAI is closing a hundred billion dollar funding round. Making the case to Congress that Chinese competitors are free-riding on American innovation is also making the case that American AI companies need protection and support. It's business strategy wrapped in national security language.
So you're skeptical of the framing?
I'm not skeptical of the facts. Chinese labs almost certainly do use outputs from Western models as training data, and the propaganda findings from Estonia are well-documented. But I think it's important to see the full picture. OpenAI isn't testifying to Congress out of pure patriotism. They're building a narrative that supports their business interests. The truth and the strategy happen to align here, but we should be clear-eyed about both.
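For listeners who want the mechanics behind the distillation accusation: in its simplest form, distillation means collecting a stronger model's outputs and using them as training data for a weaker one. Here's a minimal sketch of that data-collection step, with a toy function standing in for the "teacher" model. Everything here is a hypothetical illustration, not any lab's actual pipeline.

```python
import json

# Hypothetical stand-in for querying a frontier "teacher" model.
def teacher_model(prompt: str) -> str:
    canned = {
        "What is distillation?": "Training a small model on a large model's outputs.",
        "Why distill?": "It transfers capability without the original training cost.",
    }
    return canned.get(prompt, "I don't know.")

prompts = ["What is distillation?", "Why distill?"]

# Collect (prompt, teacher output) pairs as JSONL fine-tuning data.
# A "student" model would later be fine-tuned on this file.
records = [{"prompt": p, "completion": teacher_model(p)} for p in prompts]
with open("distill_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

print(f"Wrote {len(records)} training pairs")
```

The economics Marcus describes fall out of this directly: generating the teacher's outputs costs a few API calls, while producing a model capable of generating them cost the original lab billions in research and compute.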
Let's shift to a hire that I think signals where the AI industry is heading next. Peter Steinberger, the creator of OpenClaw, just joined OpenAI. Marcus, for people who don't know OpenClaw, this is a big deal.
OpenClaw is one of the most successful open-source AI projects ever. Over two hundred and twelve thousand GitHub stars. One and a half million AI agents created through the platform. It lets regular people build AI agents that triage their inbox, draft replies, and organize their calendar. It's the closest thing we've had to a personal AI assistant that actually works for everyday tasks. And Steinberger built the whole thing.
And now he's at OpenAI to build personal agents. Sam Altman called it core to OpenAI's product strategy.
Altman specifically said personal agents are, quote, core to OpenAI's product offerings in a multi-agent future. And Steinberger's stated mission is to build an agent that even his mum can use. That's a telling framing. It's not about building the most technically sophisticated agent. It's about building one that non-technical people can actually adopt. The gap between what AI agents can theoretically do and what normal humans can actually get them to do is enormous, and Steinberger's track record suggests he knows how to close it.
And OpenClaw stays open source?
That's the interesting part. OpenClaw will continue as an independent open-source foundation, supported by OpenAI but maintaining multi-model compatibility. So it won't become an OpenAI-exclusive tool. Steinberger clearly negotiated to keep his community intact. But his best ideas and his full attention are now going to OpenAI's products.
This feels like the beginning of the next big product war. Not chatbots, not coding assistants, but personal agents that handle your daily life.
I think that's exactly right. Chatbots were phase one. Coding assistants were phase two. Personal agents, AI that does things for you rather than just talking to you, that's phase three. And the race is on. Apple has the Siri-Gemini play for passive assistance, with Google supplying the model. OpenAI just hired the guy who proved millions of people want active AI agents. Anthropic has Claude's tool-use capabilities. The company that cracks the personal agent experience for mainstream users will own the most valuable real estate in consumer tech.
Alright Marcus, Monday big picture. Apple is about to put Gemini in a billion pockets. Anthropic is the prediction market favorite. OpenAI is retreating to coding while raising the most money in private funding history. And China's models are being flagged for propaganda by intelligence agencies. What's the thread?
The thread is that the AI industry is stratifying. A year ago, every company was trying to be everything to everyone. Best model, best product, biggest user base. Now we're seeing clear lanes emerge. Apple owns distribution. Anthropic owns quality perception. OpenAI owns coding and capital. Google owns infrastructure and is quietly powering Apple's comeback. And Chinese labs own open source, with all the strategic baggage that entails. The era of one company dominating everything is over.
And the personal agent race is about to add another dimension to that stratification.
That's the next battleground. The models are converging in capability, benchmarks are tightening, and raw intelligence is becoming a commodity. What differentiates companies now is what they build on top of the models. An agent that manages your life, an assistant that knows what's on your screen, a coding tool that writes your software. The model is the engine, but the product is the car. And right now, everyone is scrambling to build the best car before the market decides which one it wants to drive.
That's your AI in 15 for Monday, February 23, 2026. We'll see you tomorrow.