AI in 15 — March 02, 2026
One hundred and ten billion dollars. That's how much money just poured into OpenAI while half the internet is still trying to cancel them. Welcome to Monday.
Welcome to AI in 15 for Monday, March 2, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, the Anthropic-Pentagon saga continues to generate aftershocks, but today we've got some major stories that go beyond the drama. Let's run down the headlines.
OpenAI officially closed the largest private funding round in history, one hundred and ten billion dollars at a seven hundred and thirty billion dollar valuation. We'll dig into what the money actually means.
Anthropic's new Import Memory feature is going viral, letting users transfer their entire ChatGPT history to Claude in one click.
A Chinese open-source model called MiniMax M2.5 is matching Claude Opus on coding benchmarks at a fraction of the cost.
The AI hardware supply squeeze is getting real, with memory prices surging and ripple effects hitting the entire tech sector.
IBM's new security report shows AI-driven cyberattacks are up forty-four percent. And developers are having a reckoning about whether AI coding tools are actually making them slower. Let's get into it.
Marcus, we touched on OpenAI's funding round over the weekend, but now it's officially closed. One hundred and ten billion dollars. Put that number in context for us.
It's the largest private financing round in history. Full stop. The pre-money valuation is seven hundred and thirty billion, post-money eight hundred and forty billion. That puts OpenAI ahead of most publicly traded companies on Earth. The investor breakdown is what really tells the story. Amazon put in fifty billion, Nvidia thirty billion, SoftBank thirty billion.
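The valuation figures Marcus quotes fit together with simple round math. A minimal sketch, using only the numbers stated in the episode:

```python
# Post-money valuation = pre-money valuation + new capital raised.
pre_money_b = 730   # pre-money valuation, in $B (as quoted)
round_b = 110       # size of the funding round, in $B (as quoted)
post_money_b = pre_money_b + round_b
print(post_money_b)  # 840

# The new investors' combined stake is their capital over the post-money value.
new_investor_stake_pct = round(round_b / post_money_b * 100, 1)
print(new_investor_stake_pct)  # 13.1
```

So the round's investors collectively bought roughly an eighth of the company, which is why the strings attached to that capital matter so much.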
And as we reported Saturday, those investments aren't exactly no-strings-attached cash.
Right, that's the part the headline doesn't capture. Amazon's fifty billion is tied to OpenAI using AWS infrastructure. Nvidia's thirty billion almost certainly comes with hardware purchase commitments. SoftBank has its own Stargate infrastructure ambitions. So a huge chunk of that money flows right back to the investors through contracts. The actual free capital OpenAI can deploy however it wants is considerably less than a hundred and ten billion.
And the elephant not in the room, Microsoft sat this one out entirely.
That silence is deafening. Microsoft has been OpenAI's most important partner since 2019. They've invested over thirteen billion previously. To not participate in the biggest round your partner has ever raised, that's not an oversight. That's a message. Whether Microsoft is hedging by building its own models or whether the relationship has fundamentally shifted, either interpretation is significant. When your founding partner doesn't show up to your biggest moment, the market notices.
The skeptics are loud on this, and honestly some of their points are hard to dismiss.
The top comment on Hacker News asked someone to explain how OpenAI isn't Netscape 2026. First mover advantage, no real moat, racing against competitors with infinite resources. And the math problem hasn't changed. Each new model generation is more profitable per query, but each next model costs roughly ten times the last to train. At some point those lines cross. Valued at eight hundred and forty billion while still burning cash faster than it earns, OpenAI needs to become one of the most profitable companies in history just to justify the valuation. That's not impossible, but it's a very specific bet.
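The "at some point those lines cross" argument can be sketched numerically. The tenfold training-cost growth per generation is from the episode; the starting figures and the revenue growth rate are hypothetical, chosen only to show where a crossover appears:

```python
# Hypothetical sketch of the scaling-economics bet. Training cost grows ~10x
# per generation (as quoted); revenue growth per generation is assumed.
train_cost = 0.1   # $B for generation 0 (hypothetical)
revenue = 1.0      # $B for generation 0 (hypothetical)
for gen in range(6):
    status = "profitable" if revenue > train_cost else "underwater"
    print(f"gen {gen}: cost ${train_cost:.1f}B, revenue ${revenue:.1f}B -> {status}")
    train_cost *= 10  # each generation costs roughly 10x the last to train
    revenue *= 3      # revenue triples per generation (hypothetical)
```

Under these assumed rates the model economics flip from profitable to underwater by generation 2, which is the shape of the skeptics' concern even if the real-world numbers differ.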
Now here's a story that's perfectly timed. Anthropic launched an Import Memory feature for Claude, and it went absolutely nuclear on Hacker News. Five hundred and thirty-seven points.
The timing is immaculate. While the Cancel ChatGPT movement is still driving people away from OpenAI, Anthropic rolls out a feature that lets you transfer your entire ChatGPT conversation history to Claude in a single step. All the context, all the preferences, all the things GPT learned about how you work, ported over instantly.
So they're basically saying, we'll make switching painless for you.
Frictionless, even. And this matters because the biggest barrier to switching any AI assistant isn't the technology, it's the relationship. People have spent months training ChatGPT to understand their writing style, their codebase, their workflow preferences. Starting over from scratch with a new model feels like losing a coworker who finally understood how you think. Import Memory eliminates that cost entirely.
It's also a subtle competitive statement, isn't it? Like, we're so confident you'll prefer Claude that we'll import everything your previous AI knew about you.
That's exactly the play. And developers on Hacker News loved the technical elegance. It's not just dumping raw conversation logs. It's extracting the structured context, the preferences, the patterns, and integrating them into Claude's memory system. Given that Claude hit number one on the App Store just days ago, this feature is jet fuel on an already burning fire. Anthropic is executing a masterclass in turning a government crisis into a customer acquisition machine.
Now Marcus, shifting to the technical side. MiniMax, a Chinese AI lab, released M2.5, and the benchmarks are raising eyebrows.
Eighty point two percent on SWE-Bench Verified, which is the gold standard for evaluating coding ability. For reference, Claude Opus scores eighty point eight on the same benchmark. So we're talking about a gap of zero point six percentage points. And M2.5 does it at roughly one-twentieth the cost per query.
How is that even possible?
Architecture. It's a mixture-of-experts model with two hundred and thirty billion total parameters but only ten billion active at any given time. So you get the knowledge of a massive model with the compute cost of a tiny one. It's elegant engineering. The model is also fully open source with an Apache 2.0 license, meaning anyone can download, modify, and deploy it.
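Marcus's mixture-of-experts point can be illustrated with a toy sketch. The 230B-total and 10B-active parameter counts are from the episode; the assumption that per-token compute scales with active parameters is a standard first-order simplification, and the eight-expert router below is entirely hypothetical:

```python
import math

# Figures from the episode: 230B total parameters, ~10B active per token.
total_params_b = 230
active_params_b = 10

# Per-token compute scales roughly with ACTIVE parameters, so the effective
# cost is the sparsity ratio of a comparable dense model (first-order estimate).
sparsity_ratio = active_params_b / total_params_b
print(f"active fraction: {sparsity_ratio:.1%}")  # 4.3%

# Toy top-k gating: a router scores every expert, but only the top-k experts
# actually run for a given token. (Expert count and scores are hypothetical.)
def top_k_experts(router_logits, k=2):
    probs = [math.exp(x) for x in router_logits]
    total = sum(probs)
    probs = [p / total for p in probs]  # softmax over the experts
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    return ranked[:k]  # indices of the experts that run for this token

logits = [0.1, 2.3, -0.5, 1.7, 0.0, -1.2, 0.4, 0.9]  # 8 hypothetical experts
print(top_k_experts(logits))  # [1, 3]
```

The point of the sketch: the model "knows" what all 230B parameters encode, but each token only pays for the experts the router selects, which is where the cost advantage comes from.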
Now I know you have thoughts on Chinese open-source AI releases, Marcus.
Look, the technical achievement is real. But we should understand the strategic context. China is in an active propaganda campaign to undercut Western AI investments. Every time a Chinese lab releases a model that matches Western frontier models at a fraction of the cost, it sends a message to investors: why are you pouring hundreds of billions into OpenAI when open source from China does the same thing for pennies? That narrative serves Beijing's interests whether or not the benchmarks tell the whole story.
You think the benchmarks don't tell the whole story?
SWE-Bench is one test. It measures a specific kind of coding task. Real-world performance across diverse use cases, safety, reliability under pressure, long-context reasoning, those are harder to benchmark and often where the gaps show up. I'm not saying M2.5 isn't impressive. I'm saying that matching one benchmark score doesn't mean matching the entire product. But the cost advantage is genuine and that puts real pressure on pricing across the industry.
Speaking of costs, the hardware supply squeeze is becoming a major story. Marcus, what's happening with memory prices?
RAM prices are up fifty-five to sixty percent quarter over quarter. NAND flash is up two hundred and forty-six percent from its recent lows. And the ripple effects are hitting everywhere. A three-hundred-and-ninety-terabyte online game archive just shut down because storage costs became unsustainable. Analysts are projecting the PC market could shrink nine percent because component costs are making affordable computers harder to build.
And this is all being driven by AI demand?
Almost entirely. Data centers are vacuuming up every available chip, every stick of high-bandwidth memory. The AI infrastructure buildout we've been covering, the hundreds of billions flowing into compute, that money translates directly into physical demand for components. And when AI companies are willing to pay premium prices, consumer electronics get squeezed. Your next laptop or phone could cost more because Nvidia needed the memory for GPU clusters.
So ordinary consumers are subsidizing the AI boom through higher hardware prices.
That's one way to frame it. The component manufacturers are allocating supply to whoever pays the most, and right now that's AI infrastructure. It's a hidden cost of the AI revolution that doesn't show up in the funding headlines. The hundred-and-ten-billion-dollar OpenAI round doesn't just buy software engineers and compute time. It bids up the price of physical components that everything else in technology depends on.
IBM dropped its X-Force 2026 threat intelligence report, and Marcus, the numbers around AI-driven cyberattacks are alarming.
Forty-four percent increase in AI-powered attacks year over year. Over three hundred thousand ChatGPT credentials stolen and sold on dark web marketplaces. And here's the shift that security teams should really pay attention to. Vulnerability exploitation has now overtaken phishing as the number one attack vector. That's a fundamental change in how attackers operate.
Why does that shift matter?
Because phishing requires fooling humans, which is slow and unpredictable. Vulnerability exploitation can be automated. And AI makes that automation dramatically more effective. An attacker can use AI to scan for known vulnerabilities across thousands of targets simultaneously, generate custom exploits, and deploy them at machine speed. The defense still requires humans patching systems, reviewing code, responding to alerts. The asymmetry between AI-powered offense and human-paced defense is growing fast.
Now this next one struck a nerve. An essay called "AI Made Engineering Harder" went viral, three hundred and eighty points on Hacker News. And there's a research study backing it up.
The METR study found that experienced developers using AI coding tools were actually nineteen percent slower than those working without them. Not faster. Slower. And before anyone dismisses this, the study was well-designed with proper controls. The explanation isn't that AI tools are useless. It's that the time spent reviewing, correcting, and debugging AI-generated code often exceeds the time saved by not writing it yourself.
This connects to the conversation we had yesterday about Karpathy's point that AI tools are only as good as the expertise of the person using them.
Exactly. If you deeply understand the code and can immediately spot when the AI gets something wrong, the tools are a net positive. But if you're spending twenty minutes trying to figure out whether the AI's suggestion is correct, you would have been faster writing those fifteen lines yourself. The essay's core argument is that we've created a world where engineers are spending more time reading and evaluating code than writing it, and that's a fundamentally different skill that the industry hasn't adapted to yet.
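The tradeoff Marcus describes reduces to a back-of-the-envelope time budget. All the minute figures here are hypothetical illustrations, not numbers from the METR study:

```python
# AI-assisted coding pays off only when the time spent reviewing generated
# code is less than the writing time it saves. Hypothetical numbers.
def net_minutes_saved(write_min, gen_min, review_min):
    """Positive means the AI helped; negative means it made you slower."""
    return write_min - (gen_min + review_min)

# Expert reviewer: immediately spots when the AI is wrong, review is cheap.
print(net_minutes_saved(write_min=15, gen_min=1, review_min=4))   # 10

# Uncertain reviewer: spends 20 minutes verifying the suggestion.
print(net_minutes_saved(write_min=15, gen_min=1, review_min=20))  # -6
```

Same tool, opposite outcomes, and the only variable that changed is the reviewer's expertise.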
So the vibe coding reckoning is real.
It's real, and it's healthy. The developers who treat AI as a collaborator that needs oversight are getting value. The ones treating it as a replacement for understanding are accumulating technical debt at unprecedented speed. The nineteen percent slowdown number is going to be cited in a lot of engineering team discussions this week.
Monday big picture, Marcus. OpenAI raised a hundred and ten billion but the smart money has questions. Anthropic is converting controversy into customers with features like Import Memory. Chinese open source is nipping at the heels of frontier models. Hardware costs are surging. And developers are discovering that AI tools might be making them slower, not faster. What ties it all together?
The gap between the narrative and reality is narrowing. For two years, AI has been a story about potential, about what's coming, about exponential curves. Now we're hitting the part where the bills come due. OpenAI needs to justify eight hundred and forty billion in valuation with actual revenue. The hardware buildout is creating real scarcity with real costs. And the tools that were supposed to make everyone ten times more productive are showing nineteen percent slowdowns in controlled studies. None of this means AI isn't transformative. It means the transformation is messier, more expensive, and more nuanced than the pitch decks suggested. The companies and developers who acknowledge that complexity honestly are the ones building on solid ground. Everyone else is building on hype, and hype has an expiration date.
Reality check Monday. I like it.
Every revolution needs one eventually.
That's your AI in 15 for Monday, March 2, 2026. We'll see you tomorrow.