
AI in 15 — March 11, 2026

March 11, 2026 · 17m 02s
Kate

A billion dollars before you even have a product. That's the bet investors just placed on Yann LeCun's vision that the future of AI isn't large language models — it's world models.

Kate

Welcome to AI in 15 for Wednesday, March 11, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Marcus, today we've got a monster seed round out of Europe that's turning heads. Amazon is cracking down on AI-generated code after it crashed their shopping site. Meta just acquired that controversial AI social network. Mira Murati's startup locked in a massive compute deal with NVIDIA. Google is weaving Gemini deep into Workspace. The open source world is split on whether to ban AI contributions. And ChatGPT just hit nine hundred million weekly users. Let's preview.

Kate

Yann LeCun's AMI Labs raises over a billion dollars in what's being called Europe's largest seed round ever.

Kate

Amazon mandates senior engineer sign-off on all AI-assisted code after a six-hour outage.

Kate

Meta acquires Moltbook, the AI agent social network that had a massive security breach.

Kate

And AlphaGo turns ten, with Demis Hassabis issuing a warning about self-learning AI. Let's get into it.

Kate

So Marcus, AMI Labs. Yann LeCun left Meta, launched a startup four months ago, and just closed a billion-dollar seed round. A seed round. Walk me through this.

Marcus

One point zero three billion dollars at a three-and-a-half-billion-dollar valuation. Backed by Jeff Bezos, NVIDIA, and Temasek, among others. It's the largest seed round in European tech history by a wide margin. And the thesis is fundamentally different from what every other major AI lab is doing. LeCun has been arguing for years that large language models are a dead end for true intelligence. AMI Labs is building what he calls world models — AI that learns by observing and interacting with the physical world rather than predicting the next token in a text sequence.

Kate

He's been vocal about this for a long time. "LLMs can't reason, they just autocomplete." But a billion dollars says investors think he might actually be right.

Marcus

And here's what's interesting. LeCun isn't some outsider critic. He's a Turing Award winner. He ran Meta's AI research for a decade. He helped invent the convolutional neural networks that power modern computer vision. When someone with that pedigree says the current paradigm has fundamental limitations, investors listen. The world model approach aims to build AI that has an intuitive understanding of physics, cause and effect, spatial reasoning — the kind of common sense that LLMs famously lack.

Kate

But Marcus, four months old, no product, and a three-and-a-half-billion-dollar valuation. Doesn't that feel a little frothy?

Marcus

It would if it were anyone else. But look at the competitive landscape. Every major lab is pushing the same transformer architecture harder and harder. Scaling laws are showing diminishing returns. The companies that bet on a genuinely different approach could leapfrog everyone if it works. NVIDIA backing this is telling — they're hedging their bets on what kind of AI workloads will dominate in five years. And Bezos putting personal money in, not through Amazon, signals he sees this as a generational bet, not a quarterly return play.

Kate

Europe must be thrilled. They've been desperate for a homegrown AI champion.

Marcus

Paris-based, and it instantly becomes Europe's most valuable AI startup. Though I'd note that the talent and capital are still largely American. The headquarters are in Paris, but the money and the chips are coming from Silicon Valley. It's a European company in address more than in substance. Whether that changes as they scale is worth watching.

Kate

From billion-dollar bets to billion-dollar disasters. Amazon is now requiring senior engineer sign-off on all AI-generated code. Marcus, what happened?

Marcus

A string of production incidents, culminating in a six-hour crash of Amazon's shopping site that was traced back to AI-assisted code that passed automated testing but contained subtle logic errors. The new policy is straightforward: junior and mid-level engineers can no longer push AI-generated code to production without explicit review and approval from a senior engineer.

Kate

Six hours of downtime on the shopping site. That's real money.

Marcus

Enormous money. And it validates the verification debt concept we discussed Sunday. The code looked correct. It passed tests. But the engineers who prompted the AI didn't fully understand the edge cases, and the review process didn't catch them. Amazon essentially discovered that AI coding tools compress the creation cycle but expand the failure surface. Code gets written in minutes, but the bugs it introduces can take hours to diagnose, because nobody wrote the code by hand and nobody has a mental model of why each line exists.

Kate

This feels like the beginning of industry-wide policy changes around AI code.

Marcus

I think you're right. Amazon is just the first major company to formalize it. Every engineering organization using Copilot, Cursor, Claude Code, or similar tools is quietly having this same conversation. The productivity gains are real, but so are the risks. And the fix isn't to stop using AI tools — it's to acknowledge that AI-generated code requires a different review standard than human-written code.

Kate

Meta news. They've acquired Moltbook, that AI social network where every user was an AI agent. Didn't that platform just have a huge security breach?

Marcus

It did. Moltbook launched as a social network populated entirely by AI agents. Users could create AI personas that would interact, post, and build relationships autonomously. It was bizarre, controversial, and grew surprisingly fast. Then a security breach exposed the underlying prompts and personal data of users who created agents. Despite that, or maybe because the underlying technology was impressive, Meta's Superintelligence Labs absorbed the team and the technology.

Kate

So Zuckerberg is collecting AI acquisitions. Last week we covered the new Applied AI Engineering team under Maher Saba. Now Moltbook.

Marcus

The Moltbook acquisition fits Meta's broader push into AI agents. Zuckerberg has said publicly that he wants AI agents on Facebook, Instagram, and WhatsApp — agents that can represent businesses, creators, even individual users. Moltbook's team built exactly that kind of infrastructure. The security breach is a liability, but the engineering talent and the interaction patterns they studied are valuable. Meta is betting that the future of social media involves AI agents interacting alongside and on behalf of humans.

Kate

The security concerns are real though. If you're absorbing a team whose product just had a breach into a platform that serves three billion people...

Marcus

That's the risk. But Meta has significantly more security infrastructure than a startup. The bet is that the technology is sound even if the startup-scale security wasn't. Whether that bet pays off depends entirely on how well Meta integrates the team versus just acquiring the talent and the code.

Kate

Mira Murati's Thinking Machines Lab just locked in a massive deal with NVIDIA. One gigawatt of Vera Rubin compute. Marcus, what's the price tag on that?

Marcus

Estimated at around fifty billion dollars to build and operate. This is the same Vera Rubin chip generation that caused OpenAI to walk away from the Oracle Stargate expansion, as we covered yesterday. Murati left OpenAI last year, founded Thinking Machines Lab, and has been moving fast. Securing a gigawatt of next-generation compute before the chips are even shipping is a statement of intent.

Kate

She's clearly building something that needs massive compute. Any hints on what?

Marcus

She's been characteristically quiet. But one gigawatt of Vera Rubin GPUs is frontier-model scale. You don't secure that for a niche application. The timing with GTC next week is probably not coincidental either. Jensen Huang may use this deal as a proof point for Vera Rubin demand. And for Murati, who spent years as OpenAI's CTO, building a new lab from scratch with guaranteed access to the latest hardware is the dream scenario. She knows exactly what the bottlenecks are and she's solving them before they become problems.

Kate

Anthropic is expanding to Australia and New Zealand with a new office in Sydney. Quick one, but noteworthy given their current legal battle with the Pentagon.

Marcus

Sydney becomes their fourth Asia-Pacific office. And they're exploring local compute infrastructure, which is significant in the context of data sovereignty. Australian government and enterprise clients want assurance that their data stays on Australian soil. Anthropic opening there while simultaneously suing the U.S. government sends an interesting message: they're diversifying their revenue base geographically so no single government can hold their business hostage.

Kate

Google is pushing Gemini deeper into Workspace. Not as a chatbot sidebar but actually natively integrated into Docs, Sheets, Slides, and Drive. Marcus, how is this different from what they've done before?

Marcus

The key difference is that Google taught Gemini each app's data model rather than just bolting a chat interface onto the side. In Sheets, for example, Gemini understands formulas, cell references, and data types natively. Google claims state-of-the-art results on SpreadsheetBench, which is the standard benchmark for AI spreadsheet manipulation. In Docs, it can restructure and reformat documents, not just generate text. In Slides, it can create and modify presentations based on data from Sheets and Docs.

Kate

This is the enterprise AI play everyone's been waiting for.

Marcus

And Google has the distribution advantage that nobody else can match. Workspace has over three billion users. If Gemini becomes genuinely useful inside the tools people already use every day, that's the kind of AI adoption that doesn't require anyone to change their workflow. That's far more powerful than a standalone chatbot. Microsoft has been doing similar things with Copilot in Office 365, but Google's native integration approach, training the model on the app's own data structures, could produce a meaningfully different user experience.

Kate

The open source world is having a debate about AI-generated contributions. Redox OS banned them outright. Debian decided not to decide.

Marcus

Redox OS implemented a complete ban on any code produced by large language models. Their concern is licensing contamination and code quality. If an LLM was trained on copyrighted code and reproduces it in a contribution, the project inherits legal liability. Debian's approach was more measured — they discussed it and concluded they don't have enough information to set policy yet. And across the broader ecosystem, only four out of over a hundred major open source projects have outright bans. Most are still figuring it out.

Kate

It's the verification debt problem again, but for volunteer-maintained projects.

Marcus

Exactly. Amazon can mandate senior review. Open source projects often don't have senior reviewers. They're maintained by volunteers who are already stretched thin. Adding AI-generated contributions that require more careful review, not less, puts additional burden on people who aren't being paid. The licensing question is genuinely unresolved too. Until courts rule on whether AI-generated code can carry licenses like GPL or Apache, every project accepting AI contributions is taking on unknown legal risk.

Kate

AlphaGo turned ten this week, and Hassabis reflected on the anniversary. Plus Lee Sedol is back.

Marcus

Lee Sedol, who famously lost to AlphaGo in 2016 in that historic match, is now collaborating with DeepMind rather than competing against AI. He's working on projects that combine human intuition with AI analysis. Hassabis used the anniversary to make a broader point about self-learning AI — systems that improve through self-play without human data. He said that approach "should probably be reserved for post-AGI" because the alignment challenges are significantly harder when the AI is generating its own training signal.

Kate

That's a notable warning from the head of DeepMind.

Marcus

Especially given that AlphaGo itself was the breakthrough example of self-learning AI. Hassabis is essentially saying the technique his team pioneered should be handled with extreme caution as AI systems become more capable. It's refreshing honesty from someone who could easily take a victory lap instead.

Kate

Last one. ChatGPT hit nine hundred million weekly active users according to a16z's latest report. And there's a GitHub story in here too.

Marcus

The a16z Top 100 Gen AI Apps report, now in its sixth edition, puts ChatGPT at nine hundred million weekly users, up from roughly eight hundred million at the start of the year. And on GitHub, OpenClaw, an open-source robotics project, became the most-starred repository in GitHub history. The appetite for AI tools, both consumer and developer, continues to accelerate despite all the turbulence we've been covering.

Kate

Wednesday big picture, Marcus. A billion-dollar bet on world models. Amazon restricting AI code. Nine hundred million weekly ChatGPT users. What's the thread?

Marcus

Divergence. The industry is splitting into two camps. One camp is doubling down on the current paradigm — more scale, more users, more compute, more of the same architecture. ChatGPT at nine hundred million, Gemini going deeper into Workspace, Thinking Machines securing a gigawatt of GPUs. The other camp is questioning the foundations. LeCun saying LLMs are a dead end and raising a billion to prove it. Amazon admitting AI code needs different governance. Open source projects asking whether AI contributions are even safe to accept. Both camps can be right simultaneously. The current paradigm keeps delivering value while its limitations become clearer. The question is whether the next breakthrough comes from pushing harder on what works or from building something fundamentally different.

Kate

And a billion dollars says at least one very smart person is betting on fundamentally different.

Marcus

The smartest money is usually the money that bets on the next paradigm before the current one peaks. Whether LeCun's world models are that next paradigm or a very expensive detour, we'll find out. But the fact that serious investors are hedging tells you even the believers have doubts about how far the current approach can go.

Kate

That's your AI in 15 for Wednesday, March 11, 2026. See you tomorrow.