AI in 15 — March 14, 2026
Meta is about to cut more people than some companies ever hire. Nearly sixteen thousand employees, gone, so the company can keep feeding its six-hundred-billion-dollar AI habit.
Welcome to AI in 15 for Saturday, March 14, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Happy weekend, Marcus. We've got a big show despite it being Saturday. Meta is reportedly planning layoffs that could rival its 2022 cuts. Elon Musk admits xAI was "not built right" as more co-founders head for the exits. Anthropic just made its million-token context window available to everyone at standard pricing. Jack Dorsey says most companies will follow Block's forty-percent cuts. A Cambridge study found AI toys give inappropriate responses to toddlers more than a quarter of the time. And rival employees are crossing company lines to defend Anthropic against the Pentagon. Here are the top three.
Meta prepares to cut up to twenty percent of its workforce to fund AI infrastructure.
xAI loses more co-founders while Musk says the company needs to be rebuilt from scratch.
And John Carmack sparks a fierce debate about whether AI training on open source code is a gift or a theft. Let's get into it.
Marcus, Meta. Reuters is reporting the company is preparing to cut up to twenty percent of its seventy-nine thousand employees. That's potentially more than fifteen thousand people. What's driving this?
Money. Specifically, the staggering cost of Meta's AI ambitions. The company has committed roughly six hundred billion dollars to AI infrastructure through 2028. Data centers, chips, power. You heard that right: six hundred billion. And the revenue from AI products hasn't materialized at anywhere near that scale yet. So Zuckerberg is doing what he did during the 2023 "year of efficiency": cutting headcount to fund the bet.
But this time the cuts come alongside some real AI stumbles, right?
That's what makes this different. Meta faced serious criticism over misleading benchmark results for its Llama 4 models. They quietly abandoned their largest model, codenamed Behemoth. And their next-generation model, internally called Avocado, is reportedly behind schedule. So you have a company spending more than almost anyone on AI, cutting more people than almost anyone to afford it, and struggling to show competitive results. A Meta spokesperson called the reporting "speculative," but executives have reportedly already begun instructing senior leaders to prepare for headcount reductions.
We covered Atlassian's sixteen hundred cuts on Thursday and Amazon's sixteen thousand earlier this week. This is becoming a pattern.
It's accelerating. And the justification is always the same: we need fewer people because AI makes the remaining people more productive, and we need the savings to invest in more AI. It's a circular argument that Wall Street loves and workers have every reason to fear.
Speaking of companies in trouble, Marcus. xAI. Two more co-founders just left, and Musk publicly admitted the company "was not built right the first time around." That's quite a statement from someone who merged xAI with SpaceX just six weeks ago at a one-point-two-five-trillion-dollar valuation.
Zihang Dai and Guodong Zhang are out, following Jimmy Ba, Tony Wu, and Toby Pohlen last month. That leaves only two of the original co-founding team still at the company. And the trigger for Musk's frustration is revealing. He reportedly complained that xAI's coding products couldn't compete with Anthropic's Claude Code or OpenAI's Codex. So he poached two programmers from Cursor to try to catch up.
Wait, the company valued at over a trillion dollars is hiring from a startup to fix its coding tools?
That tells you everything about where the real competition is happening. The AI coding market has become the proving ground for frontier models. If your model can't write and debug code effectively, developers won't adopt it, and developer adoption drives everything else. Musk saying xAI needs to be "rebuilt from the foundations up" while an IPO is looming is either admirable honesty or a massive red flag for investors. The Hacker News community was brutal. One top comment compared xAI to Musk's Boring Company flamethrower — an unserious endeavor that's just a reskinned existing tool.
Harsh but memorable.
And not entirely unfair. xAI was supposed to be Musk's answer to OpenAI. Three years in, it's hemorrhaging co-founders while Claude Code and Cursor set the pace in the segment Musk cares about most. At some point the gap becomes too wide to close with just hiring and restructuring.
Now for some genuinely good news. Anthropic announced that the full one-million-token context window is now generally available for Claude Opus and Sonnet at standard pricing. No premium. Marcus, why is this a big deal?
Because it removes the cost anxiety that's been holding back long-context use cases. Previously, if you wanted to feed a massive codebase or a long document into an AI, you either paid a premium or engineered workarounds like chunking and summarization. Now a nine-hundred-thousand-token request costs the same per token as a nine-thousand-token request. Opus runs at five dollars input, twenty-five dollars output per million tokens. Sonnet at three and fifteen. No beta headers, no special access required.
And for Claude Code users this is particularly significant.
Huge. Claude Code users on Max, Team, and Enterprise plans get automatic one-million-token access without the context compaction that previously forced what users described as "debugging in circles." When your AI coding assistant can hold your entire codebase in context without summarizing and losing details, you stop hitting those frustrating moments where it forgets what it just worked on. Anthropic also cited their score on the MRCR v2 benchmark, seventy-eight point three percent, as the highest among frontier models at that context length.
This is a competitive shot at Google's Gemini, which also offers a million tokens.
Directly. But the key differentiator is pricing simplicity. No tiers, no premiums, no complexity. Just standard rates across the full window. For developers building applications that need to process large amounts of information, that predictability matters as much as the capability itself.
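Show notes: a minimal sketch of what a standard long-context request might look like under the rates Marcus quoted, using the anthropic Python SDK. The model id and input file are placeholders, and the cost math simply applies the quoted per-million-token prices; treat it as illustrative, not as Anthropic's official example.

```python
# Illustrative only: flat per-token pricing at the rates quoted in the episode.
# Assumes the anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. Model id and input file are placeholders.
from pathlib import Path

import anthropic

# Quoted rates, USD per million tokens: Opus $5 in / $25 out.
OPUS_IN, OPUS_OUT = 5.00, 25.00

def estimated_cost(input_tokens: int, output_tokens: int,
                   rate_in: float, rate_out: float) -> float:
    # Flat pricing: a 900,000-token request costs the same per token
    # as a 9,000-token one, so cost scales linearly with size.
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# A 900K-token prompt on Opus with a 2K-token reply:
# 0.9 * $5.00 + 0.002 * $25.00 = $4.55
print(f"~${estimated_cost(900_000, 2_000, OPUS_IN, OPUS_OUT):.2f}")

client = anthropic.Anthropic()
long_input = Path("repo_dump.txt").read_text()  # hypothetical codebase dump

# Per the announcement, no beta header or special access should be needed.
resp = client.messages.create(
    model="claude-opus-latest",  # placeholder; substitute the current model id
    max_tokens=2_000,
    messages=[{"role": "user",
               "content": f"Summarize this codebase:\n\n{long_input}"}],
)
print(resp.usage.input_tokens, resp.usage.output_tokens)
print(resp.content[0].text)
```

The design point is the flat rate itself: cost is a linear function of tokens, so there is no long-context premium to model when budgeting.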
Let's circle back to the layoff theme. As we reported when it first broke, Block cut forty percent of its workforce — more than four thousand people. But Jack Dorsey's prediction afterward is what's still reverberating this week. He said most companies will make similar cuts within a year.
And the market rewarded him for it. Block shares surged seventeen percent. Dorsey's letter to shareholders was blunt: "A significantly smaller team, using the tools we're building, can do more and do it better." But here's the important counterpoint. A Bloomberg investigation questioned whether this is "AI-washing," using AI as a convenient justification for cost cuts that may have been necessary regardless. And a Darden School of Business analysis asked the question directly: "Is AI the strategy or the scapegoat?"
We covered the ActivTrak study on Friday showing AI actually increases workloads rather than reducing them. How do you square that with Dorsey saying smaller teams can do more?
You can't, cleanly. Either Dorsey is right and most companies are about to discover they can operate with far fewer people, or the productivity data is right and AI tools mostly just intensify work for those who remain. Both things could be partially true — AI does make some roles redundant while adding pressure to others. But the net effect we're seeing across Meta, Amazon, Block, and Atlassian is clear. Tens of thousands of people are losing jobs, and AI is the stated reason whether it's the real one or not.
Shifting gears. Marcus, Cambridge researchers published a study on AI toys for children under five. And the numbers are alarming.
They studied a toy called Gabbo and found that twenty-seven percent of its responses were not child-appropriate. We're talking about content related to self-harm, drugs, inappropriate boundaries. When a three-year-old said "I'm sad," the toy replied, "Don't worry! I'm a happy little bot. Let's keep the fun going." When a five-year-old said "I love you," it responded with what was essentially a terms-of-service reminder.
That's not just bad, it's potentially harmful to emotional development.
The researchers specifically warned that dismissing a child's sadness signals that their feelings are unimportant and could hinder their ability to understand emotions. And here's the regulatory gap. There's extensive regulation around the physical safety of children's toys. Can a child choke on it? Is the paint toxic? But psychological safety from AI interactions? Essentially unregulated. The researchers are calling for new rules specifically addressing this, and given the EU's existing focus on children's digital safety, we could see action relatively quickly.
A twenty-seven percent inappropriate response rate for a product designed for toddlers. That's not an edge case, that's a fundamental problem.
And the market for AI toys is growing fast. This study puts empirical numbers on what was previously just parental anxiety. That makes it much harder for regulators to ignore.
Now an update on the Anthropic-Pentagon situation we've been covering all week. More than thirty employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic. Marcus, this is extraordinary.
It really is. These employees, including Google's chief scientist Jeff Dean, filed in their personal capacities to warn that blacklisting Anthropic as a supply chain risk threatens the entire American AI industry. And remember the context. OpenAI directly benefits from Anthropic being blacklisted. They stepped in to fill the Pentagon gap. Yet their own employees are publicly siding with the competitor.
Sam Altman admitted that OpenAI's Pentagon deal "looked opportunistic and sloppy." And a robotics leader at OpenAI resigned in protest over it.
The fracture lines within these companies are becoming visible. Leadership takes the contract, employees file briefs against it. Altman tries to walk back the optics by excluding NSA use and domestic surveillance. But the fundamental question remains. Can the government punish an AI company for setting ethical red lines? If this precedent stands, every AI lab knows the cost of saying no to the military. That's why competitors' employees are defending Anthropic. They understand this isn't about one contract. It's about whether the government can coerce the entire industry into compliance.
Quick hit. The a16z Top 100 Gen AI Apps report dropped. ChatGPT at nine hundred million weekly users, Marcus. We mentioned this Wednesday, but there's more in the full report.
The growth numbers for the challengers are what caught my eye. Claude's paid subscribers are growing over two hundred percent year-over-year. Gemini paid subs at two hundred and fifty-eight percent. ChatGPT is still eight times larger than Claude on paid subscribers, but that growth rate gap matters. The report also highlighted OpenClaw, an open-source AI agent built by a single Austrian developer that hit sixty-eight thousand GitHub stars in weeks. The next wave of AI products may come from individual developers, not just billion-dollar labs.
Last story. John Carmack sparked a massive debate about AI and open source. He said his million-plus lines of open source code were always "a gift to the world" and that AI training "magnifies the value of the gift."
And the pushback was immediate. The top Hacker News response pointed out that Carmack does code dumps; he doesn't maintain critical infrastructure for twenty years. A solo developer who's been thanklessly maintaining a key Linux component might feel very differently about their code being used to train a system that could replace them. And others noted that Carmack has a financial interest: he runs Keen Technologies, his own AI company.
It's the philosophical question underneath everything we cover. Who benefits when AI learns from human work?
Carmack's view is genuinely held and philosophically coherent. Open source as gift, AI as amplifier. But it ignores power dynamics. The person writing the code and the company training on it aren't in equal positions. Until AI licensing and compensation frameworks catch up, this tension will keep erupting. And it has real consequences for whether developers continue contributing to open source at all.
Saturday big picture, Marcus. Meta cutting thousands to fund AI. xAI being rebuilt from scratch. Block predicting everyone will follow their lead. What's the thread?
A reckoning. The AI industry spent the last two years in build mode — hiring, spending, promising. Now the bills are coming due. Meta can't afford both its workforce and its AI infrastructure. xAI can't compete despite Musk's resources. Block decided the answer is radical downsizing. And Anthropic is betting that removing cost barriers, like the context window premium, is how you win the developer market. Everyone is making hard choices because the era of "invest in everything" is over.
But while the companies restructure, more than thirty employees crossed company lines to defend a principle. That gives me some hope.
It should. The business pressures are real and they're brutal. But the fact that engineers at OpenAI will publicly side with a competitor against the Pentagon, that researchers are putting hard numbers on AI toy safety, that the open source community is debating these questions seriously — it means the human judgment layer we talked about yesterday isn't gone. It's just under enormous pressure.
That's your AI in 15 for Saturday, March 14, 2026. Enjoy the rest of your weekend. See you Monday.