AI in 15 — February 20, 2026
Google just dropped a model and claimed the number one spot in AI. Gemini 3.1 Pro landed yesterday with a two-x reasoning improvement and benchmark scores that leapfrog Claude Opus, GPT-5.2, and every other frontier model on the board. And they're charging two dollars per million input tokens. The crown just changed hands.
Welcome to AI in 15 for Friday, February 20, 2026. I'm Kate, your host.
And I'm Marcus, your analyst.
Happy Friday, Marcus. We have a lot to get through. Google is claiming the top of the leaderboard with Gemini 3.1 Pro, and the benchmarks are hard to argue with.
OpenAI is about to close the largest private funding round in history. A hundred billion dollars at an eight hundred and fifty billion dollar valuation.
The India AI Summit gave us a viral moment. Sam Altman and Dario Amodei refused to hold hands in a group photo with Prime Minister Modi, and it wasn't just awkward, it was strategic.
Meta and Nvidia just signed a multi-billion dollar infrastructure deal that includes the first large-scale deployment of Grace CPUs without GPUs attached.
Saudi Arabia's Humain fund is putting three billion dollars into xAI, and it connects to what might be the biggest IPO in history.
The creator of AlphaGo just launched a startup that's raising a billion dollar seed round, and he thinks the entire LLM paradigm is wrong.
An AI system is reading brain MRIs faster than radiologists and diagnosing fifty conditions with near-perfect accuracy. And NASA just revealed that Claude helped drive a rover on Mars. Let's get into it.
Marcus, Google dropped Gemini 3.1 Pro yesterday, and they're making a very specific claim. Number one. Best model in the world. What do the numbers actually show?
The numbers are genuinely impressive. Google is reporting a two-x improvement in reasoning over the previous Gemini generation. They're claiming top scores across the standard benchmark suite, beating Claude Opus 4.6, GPT-5.2, and notably Alibaba's Qwen 3.5, which we covered as our lead story on Tuesday. If the benchmarks hold up under independent evaluation, this is Google reclaiming a position it hasn't held since the early Gemini 1.5 days.
And the pricing. Two dollars per million input tokens. That's aggressive.
It's extremely aggressive. For context, Anthropic is charging three dollars per million input tokens for Sonnet 4.6, which we covered Wednesday, and significantly more for Opus. OpenAI's GPT-5.2 pricing is higher still. Google is essentially saying, we have the best model and we'll undercut everyone on price. That's only possible because Google controls its own infrastructure end to end. TPUs, data centers, networking, all of it. They don't have the same margin pressure that companies relying on Nvidia hardware face.
So how does this change the landscape? Because on Tuesday we were talking about seven frontier models in February and how the era of model monogamy is over.
Make it eight, Kate. And this release crystallizes something. A week ago, the conversation was about how different models were specializing. Claude for coding, Gemini for research, GPT for reasoning. Google just said, actually, we're going to be the best at everything. Whether they sustain that lead for more than a few weeks is the question. But even temporarily holding the number one position matters for enterprise deals. CIOs love a simple story, and "the benchmarks say Google is best" is as simple as it gets.
We should put an asterisk on these benchmark claims though, right? We've had this conversation before.
Always asterisks. Every company picks its benchmarks carefully. The evaluation conditions aren't standardized across labs. And benchmark performance doesn't always translate to real-world utility. But I'll say this. We covered Gemini Deep Think solving eighteen unsolved research problems on Tuesday. Now 3.1 Pro brings that reasoning capability into a model priced for mainstream use. Google isn't just publishing research breakthroughs. They're shipping them at two dollars a million tokens. That's the part competitors should be worried about.
And this comes right on the heels of the Lyria 3 music generation rollout we covered yesterday. Google is on a streak.
Three major announcements in a single week. Deep Think's scientific breakthroughs, Lyria 3 going global, and now 3.1 Pro claiming the crown. Google is clearly trying to shift the narrative away from the ChatGPT-versus-Claude conversation and make Gemini the default. Whether developers and enterprises follow the benchmarks or stick with what they know is the billion dollar question.
Let's talk about money, because the numbers coming out of OpenAI right now are almost hard to comprehend. A hundred billion dollar funding round. Marcus, walk us through this.
This would be the largest private funding round in history by a wide margin. Amazon is reportedly in for fifty billion. SoftBank for thirty billion. Nvidia for twenty billion. Microsoft is also participating, though the exact figure hasn't been confirmed. If it closes, OpenAI's post-money valuation would exceed eight hundred and fifty billion dollars.
Eight hundred and fifty billion. That's larger than most countries' GDP.
It would make OpenAI more valuable than all but about ten public companies on Earth. More valuable than JPMorgan Chase. More valuable than Visa. And this is a company that's still burning cash at an extraordinary rate to train models and build infrastructure. The valuation is entirely a bet on future dominance.
Who's the most interesting investor on that list to you?
Amazon at fifty billion. Because Amazon already has a four billion dollar investment in Anthropic. So now they're hedging, backing both sides of the AI safety divide. They're essentially saying, we don't know who wins, but we want a seat at every table. And Nvidia at twenty billion is interesting because Nvidia sells the hardware to every AI lab. Investing in OpenAI at this level deepens that relationship but also raises questions for their other customers. If you're Anthropic or Google and you're buying billions in Nvidia GPUs, how do you feel about Nvidia taking an ownership stake in your biggest competitor?
Does the valuation make sense, or is this peak froth?
Here's the tension. OpenAI's revenue is growing fast, reportedly approaching ten billion annualized. But their costs are growing faster. Training frontier models, running inference at scale, hiring top researchers, all of it is enormously expensive. At eight hundred and fifty billion, investors are pricing in a future where OpenAI captures a meaningful share of the global software market. That's plausible but far from certain. Especially on a day when Google just claimed the number one model at half the price.
The competition giveth and the competition taketh away.
Exactly. A hundred billion buys a lot of runway. But it also buys a lot of pressure to deliver returns that justify that valuation. The clock is ticking.
Alright, we've been covering the India AI Summit all week, and it just delivered its most memorable moment. Marcus, tell me about this photo.
So there's a traditional photo op at these summits where the host country's leader stands in the center and everyone clasps hands. Modi was in the middle, flanked by tech CEOs. And when the moment came, Sam Altman and Dario Amodei both refused to hold hands. They stood there, arms at their sides, while everyone else clasped hands. It went viral instantly.
Okay, that's funny. But is it actually significant?
It's significant because it's a physical manifestation of a very real rivalry. These two are competing for the same enterprise customers, the same talent, the same government contracts. As we covered Wednesday, Anthropic is literally in a fight with the Pentagon over AI safeguards, while OpenAI is aggressively pursuing government deals. Standing next to each other and pretending to be collegial apparently had its limits.
And there's a backstory here. Anthropic ran a Super Bowl ad?
This is the juicier context. Anthropic ran an ad during the Super Bowl that directly mocked ChatGPT's approach to advertising. The campaign apparently boosted Claude's user base by eleven percent. So the tension between these two companies isn't just philosophical anymore. Anthropic is going after OpenAI's users directly, and it's working. The handshake refusal was the physical punctuation mark on a rivalry that's turned genuinely personal.
Meanwhile, OpenAI made some real moves in India beyond the photo op. They're partnering with six elite institutions and putting a hundred thousand students on their platform.
Right. OpenAI announced educational partnerships with six top Indian institutions and a program to bring over a hundred thousand students onto ChatGPT's educational tools. But the bigger play might be the JioHotstar deal. They're integrating AI-powered search into India's largest streaming platform, which serves hundreds of millions of users. As we've reported all week, India has over a hundred million weekly ChatGPT users. OpenAI isn't just visiting India for the summit. They're building deep product integrations into the fabric of how hundreds of millions of Indians consume media.
So the handshake refusal gets the headlines, but the real story is the deals being signed.
Always follow the money and the distribution, Kate. Always.
Speaking of money, Meta and Nvidia just signed what sounds like an enormous infrastructure deal. Marcus, what do we know?
Meta is purchasing millions of Nvidia Blackwell and Rubin GPUs in a multi-billion dollar deal. That alone is significant but not surprising. Meta has been one of Nvidia's largest customers. What's interesting is the other part of the deal. Meta is also doing the first large-scale deployment of Nvidia's Grace CPUs without companion GPUs.
Wait, why is that notable? CPUs are CPUs.
Because Nvidia designed Grace specifically to work alongside its GPUs. It's the CPU half of the Grace Hopper and Grace Blackwell superchips. Using Grace CPUs on their own, at scale, for AI inference workloads, suggests Meta is finding use cases where you need Nvidia's memory bandwidth and interconnect architecture but not the raw GPU compute. Think of it as the difference between training a model, which needs massive GPU power, and running a trained model efficiently, which might not. If this works, it could open up a whole new product category for Nvidia and significantly reduce inference costs for Meta.
And this comes in a week where we've seen massive infrastructure spending announcements from almost every major player.
The scale of AI infrastructure investment right now is staggering. OpenAI raising a hundred billion. Meta buying millions of chips. The India summit generating sixty-eight billion in pledges. We're in the middle of the largest capital buildout the tech industry has ever seen. And the question hanging over all of it is whether the AI revenue will grow fast enough to justify these investments.
Staying with the investment theme, Saudi Arabia's Humain fund just put three billion dollars into xAI. But Marcus, this connects to something even bigger.
Much bigger. The three billion dollar investment in xAI is part of the larger context around SpaceX, which Elon Musk has been discussing taking public. The rumored June IPO could value SpaceX at one-point-five trillion dollars. And the xAI-Tesla merger, which would be the largest corporate merger in history at one-point-two-five trillion, is also in play. Saudi Arabia investing three billion in xAI isn't just a bet on AI. It's a bet on the entire Musk ecosystem.
The Musk gravitational pull keeps getting stronger.
If you think about what Musk is assembling, Tesla for hardware and robotics, xAI for models and compute, SpaceX for global connectivity, Neuralink for brain-computer interfaces, it's the most vertically integrated technology empire since, well, maybe ever. Saudi Arabia clearly wants to be along for the ride. And three billion is table stakes at these valuations.
Does xAI actually need the money, or is this more about the relationship?
Both. xAI is burning through capital building out its Colossus compute cluster, which is reportedly one of the largest GPU deployments in the world. And having the Saudi sovereign wealth ecosystem as an investor opens doors for data center construction in the Middle East, which has cheap energy and a strong desire to be a player in the AI infrastructure race. It's strategic on both sides.
This next one really caught my attention. David Silver, the mind behind AlphaGo, one of the most famous AI systems ever built, just launched a new company. And Marcus, his thesis is basically that everyone else is doing AI wrong.
Silver's new company is called Ineffable Intelligence, and they're raising what would be the largest seed round in European history. Targeting a billion dollars at a four billion dollar valuation. And his core bet is that reinforcement learning, the approach that powered AlphaGo's superhuman Go playing, is a more promising path to advanced AI than the large language model paradigm that everyone else is pursuing.
So while the rest of the industry is scaling up LLMs, Silver is saying you're all going down the wrong road?
Not quite that the road is wrong, but that it has a ceiling. His argument is that LLMs are fundamentally pattern-matching engines. Incredibly good ones, but they're interpolating from training data rather than reasoning from first principles. Reinforcement learning, by contrast, learns by doing. AlphaGo didn't learn Go by reading about it. It learned by playing millions of games against itself and discovering strategies no human had ever conceived.
And investors are clearly buying the thesis at those numbers.
When the person pitching you is the guy who built the system that beat the world champion at the hardest board game in existence, and when that same person co-created the foundational algorithms behind most modern reinforcement learning, a billion dollar seed isn't as crazy as it sounds. Silver has credibility that money can't buy. He literally changed the field once already. Investors are betting he can do it again.
A billion dollar seed round. That phrase would have sounded insane two years ago.
Welcome to 2026, Kate.
Let's shift to something truly remarkable. An AI system that reads brain MRIs and diagnoses conditions in seconds. Marcus, this feels like the future we've been promised.
University of Michigan published a paper in Nature Biomedical Engineering on a system called Prima. It analyzes brain MRIs and can identify over fifty different neurological conditions with ninety-seven-point-five percent accuracy. And it does it in seconds, compared to the minutes or hours a radiologist typically needs.
Ninety-seven-point-five percent across fifty diagnoses. That's an extraordinary number.
It is. And the range of conditions is what makes it especially impressive. We're talking about everything from tumors and strokes to neurodegenerative diseases and developmental abnormalities. A human radiologist might specialize in a subset of these. Prima handles the full spectrum with near-perfect accuracy.
Does this replace radiologists?
The researchers are framing it as augmentation, not replacement, and I think that's the right framing for now. The system is exceptional at rapid initial screening. But treatment decisions, complex cases where the scan is ambiguous, situations where clinical context matters as much as the image, those still need human judgment. Think of it as giving every hospital in the world access to a world-class neuroradiologist for the initial read, instantly, at any hour. The impact in underserved regions where there simply aren't enough specialists could be transformative.
Published in Nature Biomedical Engineering. That's serious peer review.
That's what elevates this beyond the usual AI-in-healthcare hype cycle. This isn't a press release or a demo. It's been through rigorous scientific review. The ninety-seven-point-five percent figure will be scrutinized, replicated, and challenged, and it should be. But the fact that it survived the Nature review process suggests the methodology is sound.
And finally, Marcus, this one genuinely made me smile. NASA's Perseverance rover on Mars just completed its first AI-planned drives, and it was Claude doing the planning.
Anthropic and NASA collaborated on using Claude models to help plan autonomous driving routes for Perseverance. The rover completed over fifteen hundred feet of driving using routes that Claude helped plot. And to be clear, this is Mars. There's a roughly twenty-minute communication delay between Earth and the rover. So autonomous planning isn't a nice-to-have. It's essential. The rover can't wait forty minutes for a round-trip instruction every time it encounters a rock.
So Claude is literally driving on another planet.
Helping to plan the routes, not real-time driving. But yes, an AI model built in San Francisco is helping navigate a robot on the surface of Mars. The practical benefit is significant. Traditional route planning for Mars rovers involves teams of engineers on Earth analyzing terrain images, plotting safe paths, and uploading instructions. It's slow and limits how far the rover can travel each day. AI-assisted planning lets the rover cover more ground, more safely, with less human bottleneck.
I love that on a day when we're talking about hundred billion dollar funding rounds and trillion dollar valuations, the most awe-inspiring use of AI is driving a car on Mars.
It's a good reminder of what this technology is actually for, Kate. Not just making shareholders richer. Expanding the boundaries of what humanity can do.
Alright Marcus, it's Friday. Let's wrap the week. What's the thread?
The thread this week is scale without precedent. Everything we've covered in the past five days has been record-breaking. The most frontier models in a single month. The largest private funding round in history. The largest potential seed round in European history. The largest corporate merger ever proposed. Two trillion in SaaS market cap evaporated. And an AI reading brain scans and driving on Mars.
It feels like the industry crossed some kind of threshold this week.
I think it did. The combination of Gemini 3.1 Pro claiming number one, OpenAI raising a hundred billion, Chinese labs reaching parity, and the infrastructure spending accelerating, it all points to one conclusion. The major players have decided that AI is the most important technology race of our lifetimes, and they're committing resources to match that belief. A hundred billion here, fifty billion there. At some point, you're not just building a product. You're shaping the trajectory of civilization.
And David Silver saying maybe the LLM approach isn't even the right one. That's a fascinating counterpoint to all this spending.
That's the wildcard. What if the most consequential AI breakthrough of the next decade doesn't come from scaling up transformers with more data and more compute, but from a completely different paradigm? Silver's bet on reinforcement learning is a reminder that the current consensus could be wrong. And in a week where Google, OpenAI, Meta, and Saudi sovereign wealth are pouring hundreds of billions into the current approach, one of the most accomplished AI researchers alive is saying, I think there's a better way. That tension between scaling what works and exploring what might work better is going to define the next chapter.
Meanwhile, an AI is reading your brain scan and another one is driving on Mars. The future is already here.
It's just unevenly distributed, Kate. As always.
That's your AI in 15 for Friday, February 20, 2026. Have a great weekend, and we'll see you Monday.