
AI in 15 — March 15, 2026

March 15, 2026 · 15m 34s
Kate

Thirty thousand people from a hundred and ninety countries are about to descend on San Jose tomorrow, and one man in a leather jacket is expected to announce everything from the next generation of AI chips to NVIDIA's first laptop processors. GTC isn't just a conference anymore. It's where the AI industry finds out what's next.

Kate

Welcome to AI in 15 for Sunday, March 15, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy Sunday, Marcus. We're doing a preview-heavy show today because tomorrow is a big one. NVIDIA's GTC conference kicks off with Jensen Huang's keynote, and the pre-show announcements alone have been worth billions of dollars. We've also got Anthropic launching a hundred-million-dollar partner network to go after the enterprise market. Google just overhauled its entire Workspace suite with Gemini. A viral essay is asking whether AI actually made software engineering harder, not easier. And Anthropic's spring break promotion is sparking debate about the future of AI pricing. Let's preview.

Kate

NVIDIA GTC 2026 starts tomorrow with Vera Rubin, potential laptop chips, and billions in fresh investments.

Kate

Anthropic commits a hundred million dollars to embed Claude across enterprise consulting.

Kate

And a software engineer argues AI just made bad code easier to write. Let's get into it.

Kate

Marcus, GTC starts tomorrow. Jensen Huang keynotes at eleven AM Pacific at the SAP Center. Seven hundred sessions, thirty thousand attendees. But what's already leaked or been announced before the doors even open is remarkable.

Marcus

NVIDIA has been on a spending spree. Two billion dollars into Nebius, the AI cloud company we covered Friday. A significant stake in Thinking Machines Lab, the startup founded by former OpenAI CTO Mira Murati. And before that, two billion each into Lumentum and Coherent. That's roughly six billion dollars in strategic investments in the weeks leading up to GTC. This isn't a company just selling chips. NVIDIA is actively building the entire infrastructure layer for the next wave of AI.

Kate

And the Thinking Machines deal is particularly interesting. Murati's company committed to deploying at least one gigawatt of NVIDIA's next-generation Vera Rubin systems. That's a massive vote of confidence in hardware that hasn't even been formally unveiled yet.

Marcus

Which brings us to the keynote itself. The biggest expected announcement is the Vera Rubin microarchitecture, the successor to Blackwell. Analysts are also watching for a dedicated AI inference chip, which would be a new product category for NVIDIA. And then there's the wildcard — the N1 and N1X, potentially NVIDIA's first laptop CPUs for Windows machines.

Kate

Wait, NVIDIA making laptop processors? That would put them in direct competition with Intel and AMD on consumer hardware.

Marcus

It would be a massive strategic expansion. NVIDIA has dominated discrete GPUs and data center accelerators, but the CPU market has been Intel and AMD territory. If NVIDIA enters with ARM-based laptop chips, similar to what Apple did with M-series silicon, it could reshape the PC industry. That said, this is still rumored, not confirmed. Jensen loves a surprise, and GTC has historically been where those surprises land.

Kate

There's also NemoClaw, an enterprise agent platform, and an event where attendees build AI agents using OpenClaw, that open-source framework we've been tracking.

Marcus

NemoClaw is worth watching because it signals where NVIDIA sees the market going. Not just training models and running inference, but deploying autonomous AI agents in enterprise environments. If NVIDIA provides the platform layer for agent deployment the way it provided the hardware layer for model training, that's another enormous revenue stream. And the OpenClaw tie-in is smart marketing. OpenClaw has two hundred and fifty thousand GitHub stars and massive developer mindshare. Associating GTC with that energy keeps NVIDIA at the center of the developer ecosystem.

Kate

As we reported yesterday, Meta is potentially cutting up to twenty percent of its workforce to fund AI infrastructure. The contrast here is striking. Meta is shrinking to afford AI. NVIDIA is expanding because everyone else is buying AI.

Marcus

That's the GTC paradox. Every dollar Meta, Google, Microsoft, and Amazon spend on AI infrastructure is a dollar that flows through NVIDIA. The company isn't just benefiting from the AI boom. It's the tollbooth. And tomorrow's keynote will likely make that position even stronger.

Kate

From chips to consulting. Anthropic formally launched what they're calling the Claude Partner Network on Wednesday, committing a hundred million dollars in initial funding. Marcus, this feels like a very different kind of move for Anthropic.

Marcus

It's their clearest enterprise play yet. They've signed up Accenture, Deloitte, Cognizant, Infosys, the major consulting firms that actually deploy technology inside Fortune 500 companies. Accenture alone is training thirty thousand professionals on Claude. Infosys created a dedicated Anthropic Center of Excellence supporting three hundred and fifty thousand associates globally. This is the channel-partner model that Microsoft used to dominate enterprise software, now applied to AI.

Kate

And there are certifications now. Claude Certified Architect. That drew some skepticism online.

Marcus

The Hacker News crowd was predictably sarcastic. "Isn't this like certifying you know how to use a web browser?" And fair enough, the idea of certifying someone on a technology that changes every few months seems odd. But certifications serve a specific function in enterprise sales. When a CIO is evaluating AI vendors, having certified implementation partners reduces perceived risk. It's not about technical depth. It's about procurement psychology.

Kate

The timing here is interesting given everything happening with the Pentagon lawsuit. Some observers suggested this is partly defensive.

Marcus

The logic makes sense. The more deeply Claude is embedded in civilian enterprise through consulting partners, the harder it becomes for a government blacklist to threaten Anthropic's survival. And the growth numbers back up the commercial momentum. Claude Code hit a billion dollars in annual recurring revenue in just six months and is now running at two and a half billion. Anthropic's total ARR is reportedly around nineteen billion. Those aren't the numbers of a company that can be easily marginalized.

Kate

A hundred million dollars and partnerships with the biggest consulting firms. That's Anthropic saying they're here to stay regardless of what happens in Washington.

Marcus

Exactly. And they're scaling their partner-facing team fivefold, adding dedicated engineers, architects, and localized go-to-market support internationally. This isn't a pilot program. It's a full enterprise distribution strategy.

Kate

Google dropped a major Gemini integration across its entire Workspace suite this week. Docs, Sheets, Slides, Drive. Marcus, walk us through this.

Marcus

The Sheets capabilities are probably the most impactful. Users can now build or edit entire spreadsheets through natural language. Gemini pulls data across files, emails, chat, and the web. There's a "Fill with Gemini" feature that auto-populates cells with categorized data or real-time web information. Google reported a seventy percent success rate on SpreadsheetBench, which is a benchmark for real-world spreadsheet tasks.

Kate

Seventy percent. That's impressive, but it also means it fails three times out of ten.

Marcus

Right, and that matters when people are making business decisions based on spreadsheet data. But for the vast majority of spreadsheet users who aren't power users, having an AI that can set up formulas, organize data, and pull information from across your Google ecosystem is genuinely useful. It lowers the skill floor dramatically.

Kate

Docs got a writing style matcher and a format copier. Slides can auto-generate decks. Drive now has AI search summaries.

Marcus

The strategy is clear. Google can't beat ChatGPT or Claude in the standalone chatbot market. ChatGPT has nine hundred million weekly users. But Google has billions of people already inside Docs, Sheets, and Gmail every day. Embedding Gemini into those tools is a distribution advantage no other AI company can match. You don't need to convince users to adopt a new product. You just make the product they already use smarter.

Kate

It's the same playbook Microsoft is running with Copilot in Office 365.

Marcus

And the race between those two is ultimately about who makes AI invisible. The winner isn't the company with the best chatbot. It's the company whose AI features are so seamlessly integrated that users stop thinking of them as AI and just think of them as how the product works.

Kate

A blog post by software engineer Rob Englander went viral this week with a provocative title. "AI Didn't Simplify Software Engineering. It Just Made Bad Engineering Easier." Marcus, given everything we've been covering about AI coding tools, this hit a nerve.

Marcus

It did. His core argument is straightforward. Coding was never the hard part of software engineering. The hard parts are system design, understanding failure modes, managing complexity, making the right architectural tradeoffs. AI tools generate syntax brilliantly but can't replace the judgment that makes software reliable at scale. And the Hacker News discussion was genuinely split.

Kate

How so?

Marcus

One camp said AI is creating a generation of developers who'll "pay the price when they don't know the fundamentals." The other camp said experienced engineers are doing impressive work much faster with AI tools. And both things are probably true simultaneously. A senior engineer who understands architecture can use Claude Code or Copilot to work at two or three times their normal speed. A junior developer who relies on AI to write code they don't understand is building on sand.

Kate

This connects to the analysis we covered Friday about LLM coding abilities plateauing on real-world merge rates even as benchmark scores keep climbing.

Marcus

Exactly. If the models are getting better at generating code that passes automated tests but not at generating code that experienced maintainers want to ship, that's evidence for Englander's thesis. The AI handles the easy part better and better. The hard part stays hard. And with Claude Code at two and a half billion run-rate and Codex at two million weekly users, this isn't an academic debate. Companies are restructuring their entire engineering organizations around these tools. If the tools amplify bad engineering practices as much as good ones, the consequences will show up in production systems.

Kate

As one commenter put it, vibe coding has always existed. Some developers have always sprinkled in null checks rather than understanding why a value could be missing. AI just enables that at scale.

Marcus

And scale is where anti-patterns become dangerous. A single developer writing mediocre code is a local problem. An entire organization generating mediocre code ten times faster is a systemic one.

Kate

Quick one to close out the stories. Anthropic launched a spring break usage promotion offering bonus Claude access during off-peak hours. It became one of the most-discussed stories on Hacker News with over two hundred points.

Marcus

The promotion itself is straightforward. Extra usage that doesn't count against weekly limits during off-peak times. But the discussion revealed something more interesting about where AI pricing is headed. Multiple commenters compared it to electricity pricing, where you pay less during overnight hours when demand is low. One user admitted the December promotion got them to upgrade to the hundred-dollar plan. Others wanted a five-to-ten-dollar off-peak-only tier.

Kate

Time-of-use pricing for AI. That's a future I hadn't thought about but it makes perfect sense.

Marcus

The compute costs are real and they vary by demand. If you're willing to run your AI workloads at two AM, why should you pay the same rate as someone using peak-hour capacity? Cloud computing already works this way with spot instances. AI is heading in the same direction. Anthropic is just the first to experiment with it at the consumer level. And it's a clever customer acquisition tool. Get people hooked during off-peak, then watch a percentage upgrade when they want access anytime.
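The electricity analogy Marcus draws can be sketched in a few lines of code. Everything below, the rate multipliers, the overnight window, and the base token price, is an illustrative assumption for the sake of the sketch, not Anthropic's actual pricing.

```python
from datetime import time

# Hypothetical time-of-use pricing sketch. All numbers are illustrative
# assumptions, not any provider's real rates.
PEAK_RATE = 1.00      # cost multiplier during peak hours
OFF_PEAK_RATE = 0.40  # discounted multiplier overnight

OFF_PEAK_START = time(22, 0)  # 10 PM
OFF_PEAK_END = time(6, 0)     # 6 AM

def rate_at(t: time) -> float:
    """Return the cost multiplier for a request made at wall-clock time t."""
    # The off-peak window wraps past midnight, so the check is
    # "at or after the start OR before the end", not a simple range.
    in_off_peak = t >= OFF_PEAK_START or t < OFF_PEAK_END
    return OFF_PEAK_RATE if in_off_peak else PEAK_RATE

def batch_cost(tokens: int, t: time, base_cost_per_1k: float = 0.01) -> float:
    """Cost of a token batch, scaled by the time-of-use multiplier."""
    return tokens / 1000 * base_cost_per_1k * rate_at(t)

# A 100k-token job at 2 AM costs 40% of the same job at 2 PM.
overnight = batch_cost(100_000, time(2, 0))
afternoon = batch_cost(100_000, time(14, 0))
```

This is the same shape as cloud spot pricing: the workload is identical, only the demand-weighted multiplier changes with the clock.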

Kate

Sunday big picture, Marcus. GTC tomorrow. Anthropic building enterprise moats. Google embedding AI everywhere. Engineers debating whether AI helps or hurts their craft. What's the thread?

Marcus

Infrastructure versus application. NVIDIA is building the physical infrastructure, the chips, the platforms, the data center investments. Anthropic is building the business infrastructure, partner networks, certifications, enterprise distribution. Google is building the application infrastructure, embedding AI into tools people already use. And Englander's essay is asking whether any of this infrastructure matters if the humans using it don't bring the judgment that makes technology useful. Everyone is building layers. The question is whether we're building them in the right order.

Kate

And tomorrow Jensen Huang gets on stage and probably announces the next layer.

Marcus

The leather jacket keynote is appointment viewing for the entire industry. Whatever he reveals about Vera Rubin, about laptop CPUs, about agent platforms, it will ripple through every story we cover for weeks. NVIDIA doesn't just announce products. It sets the tempo for the AI industry. And right now, that tempo is accelerating.

Kate

That's your AI in 15 for Sunday, March 15, 2026. GTC starts tomorrow and we'll have full coverage. See you then.