AI in 15 — February 24, 2026
A hundred billion dollars. That's the size of the check Meta just wrote to a company that most people still think of as the budget alternative. AMD just went from underdog to the second pillar of the AI chip economy, and the deal comes with a twist that Wall Street hasn't seen in years.
Welcome to AI in 15 for Tuesday, February 24, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, we have a packed show today. Some massive numbers, some serious ethical flashpoints, and a growing sense that the internet itself might be broken. Let's preview.
Meta just signed what might be the largest semiconductor deal in history. Up to a hundred billion dollars in AMD chips, and AMD practically gave Meta a piece of the company to close it.
The Pentagon story we've been following all week just hit a new gear. Elon Musk's xAI signed Grok into classified military systems, and Defense Secretary Hegseth is summoning Anthropic's Dario Amodei for what sounds like an ultimatum.
OpenAI partnered with four of the world's biggest consulting firms to make its platform the default enterprise AI standard.
Sam Altman told an audience in India that training a human takes twenty years and a lot of food, so maybe AI's energy use isn't that bad. The internet had thoughts.
A Google VP warned that hundreds of AI startups built as thin wrappers around existing models might not survive the year.
And bots now generate more than half of all internet traffic. Dead internet theory just went from conspiracy to confirmed. Let's get into it.
Marcus, this Meta deal. A hundred billion dollars to AMD over five years. I had to triple-check that number. What's actually happening here?
Meta signed a multiyear agreement to purchase AMD's MI540 series GPUs and their latest CPUs. The deal starts in the second half of this year and runs for five years. We're talking enough hardware to drive roughly six gigawatts of data center power demand. To put that in perspective, that's more electricity than some small countries use.
And there's this bizarre stock warrant thing?
This is the part that made Wall Street's jaw drop. As part of the deal, AMD issued Meta a warrant for up to a hundred and sixty million shares of AMD common stock at one cent per share. That's roughly ten percent of AMD's outstanding shares. The warrant vests in tranches tied to delivery milestones, so as AMD ships the chips, Meta accumulates a massive equity stake in AMD. It's a loyalty guarantee. Meta is saying, we'll buy your chips, but we also want skin in the game if your stock price rises because of this deal.
So Meta is simultaneously AMD's biggest customer and one of its largest shareholders.
Potentially, yes. And this came just days after Meta expanded its existing partnership with NVIDIA. So Meta isn't replacing NVIDIA with AMD. It's building a dual-supplier strategy. If you're spending a hundred and thirty-five billion dollars in capex this year alone, which Meta is, you don't want to depend on a single chip provider. That's basic supply chain management at extraordinary scale.
And Meta is framing all of this around something they're calling personal superintelligence?
That's Zuckerberg's latest framing for Meta's AI ambitions. Advanced AI systems tailored to individual users. Think of it as a personal AI that knows your preferences, your history, your communication style, running across all of Meta's platforms. Building that for three billion users requires an absurd amount of compute. Hence the six hundred billion dollars in total US AI infrastructure investment Meta has pledged over the next several years.
Six hundred billion. Marcus, are these numbers even real anymore? They sound like they were generated by an AI hallucinating about budgets.
They do sound surreal. But look at the competitive landscape. Microsoft is spending similarly. Google is spending similarly. The entire hyperscaler industry has decided that the cost of not having enough AI compute is existential, while the cost of over-building can be written off as infrastructure investment. Whether the return on that investment materializes is the trillion-dollar question. But right now, nobody wants to be the company that didn't build enough and lost the race.
And for AMD specifically, this is validation, right? They've been positioned as the alternative to NVIDIA for years. This deal changes that perception.
Completely. AMD has been a credible second option for a while, but a hundred-billion-dollar commitment from one of the world's largest AI companies transforms them into a strategic pillar. Their stock jumped on the news, and you can expect every other hyperscaler to take their AMD conversations more seriously now. NVIDIA is still the market leader by a wide margin, but this deal signals that the days of NVIDIA having the AI chip market to itself are definitively over.
Let's turn to the Pentagon story. We've been covering the standoff between the military and Anthropic all week, but Marcus, two major things happened since yesterday. First, xAI is now in the building.
Elon Musk's xAI signed an agreement allowing Grok to be used in classified military systems. These are the Pentagon's most sensitive environments, used for intelligence analysis, weapons development, and battlefield operations. Until now, Anthropic's Claude was the only AI model with classified-level access. That monopoly is over.
And the key detail is what xAI agreed to that Anthropic won't.
xAI accepted the Pentagon's all lawful purposes standard, which means Grok can be used for essentially anything the military decides is legal. That's the exact demand that Anthropic has been refusing. Anthropic has drawn two red lines: no mass surveillance of American citizens and no fully autonomous weapons. The Pentagon considers even those two restrictions unacceptable.
And now Defense Secretary Hegseth has summoned Dario Amodei for a meeting today. What do we know about that?
Sources describe it as a tense ultimatum. Hegseth is reportedly threatening to designate Anthropic as a supply chain risk, which, as we explained on Saturday, would void their two-hundred-million-dollar DOD contract and force every Pentagon partner to stop using Claude. Negotiations between the two sides have reportedly shown no progress and are, quote, on the verge of breaking down.
So the timing of the xAI deal is clearly deliberate. Sign Grok in, then tell Anthropic you have an alternative.
Classic negotiating leverage. The Pentagon is saying, we don't need you anymore, so either accept our terms or we'll replace you with someone who already has. And the conflict-of-interest questions here are significant. Musk runs xAI, which just got this contract. Musk also runs DOGE, which has significant influence over government technology decisions. The wall between those roles looks very thin right now.
Where does this leave Anthropic? Because they've built their entire brand on responsible AI.
In the most difficult position of any AI company. If they fold and accept unrestricted military use, they lose the trust of their safety-focused employees, many of whom joined specifically because of Anthropic's principles. We've already seen researchers leaving over these pressures. If they hold firm, they potentially lose their entire government business and send a signal that principled AI companies get punished. There's no clean exit here.
Let's shift to OpenAI, which just made a very different kind of power move. They announced something called Frontier Alliances. Marcus, walk me through this.
OpenAI signed multi-year partnerships with McKinsey, Boston Consulting Group, Accenture, and Capgemini. Those are four of the biggest consulting firms on the planet. Each firm is building dedicated practice groups and training teams certified on OpenAI technology. OpenAI's own engineers will embed alongside consulting teams in client engagements.
So if you're a Fortune 500 CEO and you call McKinsey to help with your AI strategy, you're now getting OpenAI's platform recommended by default.
That's the play. These consulting firms collectively advise most of the world's largest companies. If their consultants are trained on OpenAI, certified on OpenAI, and building solutions on OpenAI, guess what they recommend to clients? The switching costs become enormous. Once an enterprise has its workflows built on OpenAI's Frontier platform with a McKinsey team managing the deployment, migrating to Anthropic or Google becomes a multi-million-dollar project. It's a moat-building strategy disguised as a partnership announcement.
Clever. Though I imagine Google and Anthropic aren't thrilled about this.
They'll need to respond. If the consultants who shape enterprise AI strategy are all trained on one platform, the other platforms become harder to even get in the door. It's the same dynamic as the old Oracle and SAP playbook. Get embedded in the enterprise through the consultants, and the technology becomes almost impossible to rip out.
Okay, Marcus, I need you to help me with this Sam Altman story because I genuinely can't tell if it's brilliant or tone-deaf. He was at the AI Impact Summit in New Delhi and he compared AI energy use to... raising a child?
His exact argument was, quote, it also takes a lot of energy to train a human. It takes like twenty years of life, and all the food you eat before that time, before you get smart. He was pushing back on concerns about data center water and energy consumption, calling the water concerns, quote, fake.
And people did the math.
They certainly did. On Hacker News, someone calculated that Einstein's brain consumed about four point six billion joules over his entire lifetime. ChatGPT burns through that amount in roughly two point eight milliseconds. So on a pure energy-to-intelligence ratio, the comparison falls apart rather quickly.
And he said this in India, of all places.
In New Delhi, at a summit attended by policymakers from a country where hundreds of millions of people still lack reliable electricity and clean water. Telling that audience that concerns about AI water use are fake was, let's say, not his most diplomatically calibrated moment. Zoho's co-founder pushed back publicly, saying he doesn't want to live in a world where we equate technology to human beings.
You know, I think the underlying point, that AI might eventually be more energy-efficient than human cognition for certain tasks, is actually interesting. But the way he framed it...
The framing was the problem. Comparing the energy cost of raising a child to training a language model strips away everything that makes human development meaningful and reduces it to a calorie count. It's the kind of argument that sounds clever in a Silicon Valley conference room and lands very differently when said to a global audience. And it comes in the same week that Meta committed to six gigawatts of power for its data centers. The industry's energy appetite is real. Dismissing the concerns isn't going to make them go away.
Let's talk about startups, because a Google VP just issued a warning that I think a lot of founders need to hear. AI wrapper startups may not survive.
Darren Mowry, who heads Google's global startup organization, said two categories of AI startups have their check engine light on. LLM wrappers, companies that put a thin consumer layer on top of existing models like GPT or Gemini, and AI aggregators, which route queries across multiple models through a single interface. His argument is that wrapping thin intellectual property around someone else's model isn't a business.
And the reason is obvious when you think about it.
Every time OpenAI or Google releases a more capable model, the wrapper's value proposition shrinks. If the base model can now do what the wrapper was doing, why pay for the wrapper? And on the aggregator side, platforms like Azure AI and Amazon Bedrock are standardizing multi-model access at the infrastructure level. A startup charging for model routing can't compete with a cloud platform offering it as a feature.
Now, it's worth noting this comes from Google, which obviously benefits from startups building deeply on its platform rather than wrapping competitors.
Absolutely. There's self-interest here. But the analysis is correct regardless of who's making it. Hundreds of companies raised millions on the wrapper model during the gold rush of 2023 and 2024. Many of them now need to either find a genuine competitive moat, whether that's deep vertical expertise, proprietary data, or unique workflows, or accept that the window is closing. The advice Mowry gave was to build, quote, deep wide moats that are either horizontally differentiated or something really specific to a vertical market. Which is good advice, but also what startups should have been doing from the beginning.
Quick update on DeepSeek. Marcus, V4 was supposed to drop mid-February. It's now late February. What's the status?
Still imminent, apparently. Multiple signals suggest the release is close, but it hasn't officially launched. The model reportedly features one trillion parameters, a million-token context window, and architectural innovations targeting over eighty percent on SWE-bench. If those numbers hold, it would be the top coding benchmark performer. And as always with DeepSeek, the cost story is the disruptive part. Internal testing reportedly shows V4 outperforming competitors at ten to forty times lower cost.
As we reported yesterday, Estonia's intelligence agency already flagged DeepSeek for embedding propaganda, and OpenAI testified to Congress about Chinese labs free-riding on American research. A more capable DeepSeek model intensifies all of those concerns.
It does. And the delayed launch has people wondering whether they've hit technical challenges or whether this is a deliberate timing strategy. Either way, the AI community is watching. When it drops, it'll be the biggest benchmark test of the year.
Last story, Marcus. Simon Willison, who's one of the most respected developers in the AI space, called AI-generated replies the scourge of Twitter. And the numbers backing him up are pretty stark.
Bots now generate fifty-one percent of all web traffic. That's up from forty-two percent three years ago. Approximately sixty-four percent of accounts on X are likely bots. Fifty-four percent of LinkedIn's long-form posts are AI-generated. And nearly three-quarters of newly published web pages contain AI-generated content. Google traffic to publishers has dropped thirty-three percent globally year over year.
So the dead internet theory, the idea that most online content is generated by bots talking to other bots, is basically confirmed?
The data supports it. What was once a fringe conspiracy theory is now a measurable reality. And there's a painful irony here. Elon Musk bought Twitter specifically to, quote, stop the bots. The bot problem on the platform has only accelerated since then. Commenters on the Hacker News discussion pointed out that the window for mandatory LLM output watermarking has probably already passed. The volume of AI-generated content is now so large that retroactively marking it would be impractical.
And this connects back to the wrapper startup story in an interesting way. If the open web is increasingly polluted by AI-generated noise, the value of curated, trustworthy information goes up.
That's right. And the publishers who produce that trustworthy information are the same ones losing a third of their Google traffic. So the economics are moving in exactly the wrong direction. The people creating genuine content are being punished, while the bots flooding the internet with synthetic content face zero consequences. It's a slow-motion crisis, and the AI industry, which created the tools making it possible, has done almost nothing to address it.
Alright Marcus, Tuesday big picture. Meta is spending a hundred billion on AMD chips. Adani in India pledged a hundred billion for AI data centers. The Pentagon is signing new AI contracts while threatening companies that won't comply. OpenAI is locking in the consulting firms. And meanwhile, the internet itself is drowning in bot-generated content. What's the theme today?
The theme is scale without guardrails. Every story today is about something getting bigger. Bigger chip deals, bigger infrastructure spending, bigger military AI deployments, bigger enterprise lock-in, bigger bot problems. The industry has figured out how to scale everything except accountability. Meta can build six gigawatts of data centers, but nobody's building six gigawatts of content verification infrastructure. The Pentagon can deploy AI in classified systems, but the ethical frameworks are still stuck in committee. OpenAI can embed itself in every consulting firm on the planet, but the consulting firms aren't being trained on when to say no.
And Altman's comments in India kind of crystallize the mindset. The scale of AI's resource consumption isn't a problem to solve; it's just the cost of progress.
That framing is comfortable if you're the one building the data centers. It's less comfortable if you're the publisher losing a third of your traffic to AI-generated spam, or the startup about to be crushed by the platform you built on, or the safety researcher being shown the door because your red lines are inconvenient. Scale is not inherently good. It amplifies whatever you point it at. Right now, the industry is scaling investment, scaling deployment, and scaling content generation. What it's not scaling is the ability to manage the consequences. And that gap is the story of 2026.
That's your AI in 15 for Tuesday, February 24, 2026. We'll see you tomorrow.