
AI in 15 — March 1, 2026

March 1, 2026 · 15m 06s
Kate

Number one. Not number two, not trending, number one. Anthropic's Claude app just hit the top spot on Apple's US App Store, overtaking ChatGPT for the first time ever. And it didn't get there with a marketing campaign. It got there because millions of people got angry.

Kate

Welcome to AI in 15 for Sunday, March 1, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Marcus, the fallout from this week's Pentagon drama is still rolling and it's taking some unexpected turns. Plus we've got some genuinely fascinating tech stories. Let's preview.

Kate

The Cancel ChatGPT movement exploded over the weekend, sending Claude to the top of the App Store while OpenAI deals with what might be the biggest consumer backlash in its history.

Kate

An OpenAI employee got fired for allegedly insider trading on Polymarket, using knowledge of upcoming product launches to place bets.

Kate

Andrej Karpathy built a complete working GPT in two hundred and forty-three lines of Python with zero dependencies.

Kate

Google is locking paying Gemini CLI users out of their entire Google account for using third-party tools.

Kate

And consumer hardware is catching up to the cloud. A ten-thousand-dollar desktop cluster just ran a trillion-parameter model. Let's get into it.

Kate

Marcus, we covered the Anthropic blacklisting and the OpenAI Pentagon deal all week. But the consumer reaction over the weekend? I don't think anyone predicted this.

Marcus

Cancel ChatGPT started trending on X within hours of the Pentagon deal announcement. The sequence was devastating for OpenAI's brand. Friday night, Anthropic gets designated a supply chain risk for refusing autonomous weapons and mass surveillance. Saturday morning, OpenAI announces its own classified military deal. The public connected those dots instantly, and the backlash was enormous.

Kate

And it wasn't just people posting angry tweets. They actually switched.

Marcus

In massive numbers. Claude went to number one on the US App Store, overtaking ChatGPT for the first time in the app's history. That's not a symbolic gesture. That's millions of downloads in a forty-eight-hour window. People weren't just saying they disagreed with OpenAI's decision. They were pulling out their credit cards and voting with their wallets.

Kate

Now Marcus, you've been skeptical of whether public sentiment actually translates into business impact for these companies. Does this change your view?

Marcus

It moves the needle. Look, App Store rankings are a snapshot, not a trend. ChatGPT could be back on top by Wednesday. But what matters is the signal. OpenAI has spent years building consumer trust around the idea that they're the responsible AI company. The name literally has "open" in it. When your competitor gets blacklisted for refusing to build weapons and you immediately take the deal, that narrative collapses. And rebuilding consumer trust is much harder than building it in the first place.

Kate

There's also the enterprise angle, right? Because a lot of companies are watching this.

Marcus

That's where it gets really interesting. CIOs and CTOs who were already evaluating both platforms now have a new variable. Do I want my company associated with the AI provider that took the weapons contract, or the one that said no? For companies in Europe especially, where attitudes toward military AI are very different, this could accelerate a shift toward Claude in enterprise deployments. Anthropic turned a government punishment into what might be the most effective customer acquisition event in AI history.

Kate

As we reported yesterday, the supply chain risk designation means Anthropic loses all defense-related business. But the consumer market might more than make up for it.

Marcus

That's the strategic calculus now. Two hundred million in defense contracts versus potentially billions in consumer and enterprise revenue from being the AI company that stood on principle. If the App Store numbers hold even partially, Anthropic comes out of this week stronger, not weaker. Which would be a remarkable outcome for a company the U.S. government just tried to crush.

Kate

Meanwhile, OpenAI has a completely different problem. Marcus, an employee got fired for insider trading on prediction markets.

Marcus

Unusual Whales, which is a platform that tracks suspicious trading activity, flagged seventy-seven trades across sixty wallets on Polymarket that were suspiciously timed with OpenAI product launches. The trades were placed shortly before announcements, always on the correct outcomes, and the pattern was too consistent to be luck.

Kate

So someone inside OpenAI was using their knowledge of upcoming launches to make money on prediction markets.

Marcus

That's the allegation, and OpenAI apparently agreed. They fired the employee, though they haven't named them publicly. The interesting wrinkle here is that prediction market insider trading exists in a legal gray area. Polymarket operates offshore. Traditional insider trading laws apply to securities, and it's not entirely clear whether prediction market contracts qualify. But from OpenAI's perspective, an employee monetizing confidential information is a fireable offense regardless of the legal technicalities.

Kate

This is a bad look at a moment when OpenAI really can't afford bad looks.

Marcus

The timing is brutal. You've got the Cancel ChatGPT movement, the Pentagon deal backlash, and now an insider trading scandal. Any one of these would be a rough news cycle. All three in the same weekend is a communications nightmare. And it raises a broader question about culture at OpenAI. When employees feel comfortable placing bets on insider knowledge, that suggests a certain looseness around information security that should worry anyone relying on OpenAI's enterprise products.

Kate

Let's talk about something more uplifting. Andrej Karpathy, who we covered yesterday talking about coding agents, did something beautiful this week. He built an entire GPT from scratch.

Marcus

MicroGPT. Two hundred and forty-three lines of pure Python. Zero dependencies. No PyTorch, no TensorFlow, no NumPy. Just raw Python implementing a complete transformer from the ground up, including training. It hit four hundred and thirty points on Hacker News and the developer community absolutely loved it.

Kate

Why does this matter? We already have massive GPT implementations everywhere.

Marcus

Because understanding matters. Karpathy has this incredible talent for distilling complex systems down to their essence. When you can implement a working GPT in two hundred and forty-three lines, it demystifies the technology. It shows that the core ideas behind these systems that are reshaping the world are actually elegant and comprehensible. You don't need a billion-dollar data center to understand how language models work. You need a text editor and an afternoon.

Kate

It's also a fantastic educational resource, isn't it?

Marcus

It's arguably the best one that exists. If you're a computer science student or a developer who wants to truly understand transformers, not just use API calls, this is your starting point. Every line of code teaches something. And the fact that it works without any dependencies means there's nothing between you and the math. No library abstractions hiding the details. Just pure computation. Karpathy keeps proving that the best way to understand AI is to build it yourself, even if the version you build fits on a single screen.
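For a flavor of what "zero dependencies" looks like in practice, here's a toy sketch of scaled dot-product attention, the core operation inside every transformer, in plain Python. To be clear, this is not Karpathy's actual MicroGPT code, just an illustration of the same no-libraries spirit:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    queries, keys, values: lists of equal-length float lists.
    Returns one output vector per query.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors, weighted by those similarities.
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

That's the heart of the attention block in roughly twenty lines. Embeddings, stacked layers, and a training loop fill out the rest, which is how a complete GPT can plausibly fit in a couple hundred lines of dependency-free Python.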

Kate

Now this next story is genuinely infuriating, Marcus. Google is banning paying customers from their own accounts.

Marcus

Gemini CLI users who pay two hundred and fifty dollars a month for Google's top-tier AI access have been getting locked out. Not just locked out of Gemini. Locked out of Gmail, Google Drive, Google Calendar, their entire Google account. For using third-party tools that connect to the Gemini API.

Kate

Wait, they're paying two hundred and fifty a month and getting banned for using the product they're paying for?

Marcus

Through third-party interfaces, yes. Google's terms of service apparently prohibit certain automated access patterns, but the enforcement is being applied broadly and without warning. Users report waking up to find their entire Google ecosystem inaccessible. No email, no documents, no calendar. And because so many people use Google as their primary identity provider, the lockout cascades to dozens of other services.

Kate

This feels like a fundamental problem with how much we depend on a single provider.

Marcus

It's the vendor lock-in nightmare made real. When one company controls your email, your documents, your calendar, your AI tools, and your identity, a single policy enforcement decision can essentially shut down your digital life. The Hacker News discussion was full of people sharing horror stories about Google account bans with no human appeal process. And these are paying customers. Two hundred and fifty dollars a month. If Google treats its highest-paying users this way, what hope does everyone else have?

Kate

Speaking of running AI locally, there were two breakthrough stories this week about getting powerful models running on consumer hardware.

Marcus

First, Unsloth released Dynamic 2.0 GGUFs, which is a new quantization format that compresses large language models while preserving quality. They ran Qwen 3.5 and achieved sixty-three tokens per second on consumer GPUs. That's fast enough for real-time conversation on hardware you can buy at a regular electronics store.
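The basic idea behind quantization formats like this is simple to sketch. The following is a toy symmetric quantizer, not Unsloth's actual Dynamic 2.0 scheme, which uses per-block scales and much smarter rounding, but the compression principle is the same: store small integers plus a scale factor instead of full-precision floats.

```python
def quantize(weights, bits=4):
    """Toy symmetric quantization: floats -> small signed integers + a scale.

    Real GGUF-style schemes quantize in blocks with per-block scales;
    this single-scale version just illustrates the core idea.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit signed codes
    scale = max(abs(w) for w in weights) / qmax or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    # Recover approximate floats from the integer codes.
    return [c * scale for c in codes]
```

Each weight shrinks from 32 bits to 4, an 8x reduction, at the cost of a small rounding error bounded by half the scale. Doing that well, per block and per layer, is what lets large models fit and run fast on consumer GPUs.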

Kate

And then AMD showed something even more ambitious.

Marcus

AMD demonstrated a trillion-parameter model running on a desktop cluster that costs about ten thousand dollars. A trillion parameters. For context, GPT-4 was rumored to be around one point eight trillion parameters. We're talking about running models in that weight class on hardware that fits under a desk. Two years ago, you needed a data center for this.

Kate

So the gap between cloud AI and local AI is closing faster than anyone expected.

Marcus

Much faster. And this matters for privacy, for cost, and for independence. If you can run a near-frontier model locally, you don't need to send your data to OpenAI or Google or anyone else. You don't need a monthly subscription. You don't need to worry about getting banned for using the wrong third-party tool. The local AI movement has gone from hobbyist curiosity to legitimate alternative in about eighteen months. These numbers suggest it's going to keep accelerating.

Kate

Last quick hit. A Hacker News post titled "What AI Coding Costs You" hit nearly three hundred points, and Marcus, the developer community is having a real reckoning with AI coding tools.

Marcus

The core argument is that developers are trading deep understanding for speed. When you let an AI write your code, you ship faster, but you don't build the mental model of how that code actually works. And when something breaks at three in the morning, the mental model is what saves you. The post resonated because a lot of developers are privately feeling this tension but haven't articulated it.

Kate

It's the flip side of Karpathy's eighty-percent-AI workflow, isn't it?

Marcus

Exactly. Karpathy can delegate eighty percent to AI because he has decades of deep expertise to fall back on. He knows what to check, what to question, what to override. A junior developer using the same tools doesn't have that foundation. They're building on sand. The post isn't anti-AI. It's a warning that the value of AI coding tools is directly proportional to the expertise of the person using them. Without understanding, you're not coding faster. You're accumulating debt faster.

Kate

Sunday big picture, Marcus. The Cancel ChatGPT movement pushed Claude to number one. An OpenAI employee got caught insider trading. Consumer hardware is running trillion-parameter models. And developers are questioning whether AI tools are making them better or just faster. What's the thread?

Marcus

The thread is accountability. Every story today is about who's accountable and to whom. OpenAI took the Pentagon deal and consumers held them accountable with their wallets. An employee exploited insider knowledge and got held accountable with termination. Google locked paying customers out of their own lives and nobody held them accountable at all. And developers are asking whether they're holding themselves accountable for actually understanding the code they ship. The AI industry just crossed a line where the technology is powerful enough that accountability isn't optional anymore. The companies, the employees, and the users who take that seriously are the ones who'll earn trust. And right now, it's pretty clear who's earning it and who's burning it.

Kate

And the market is responding. Number one on the App Store doesn't lie.

Marcus

It doesn't. For the first time, doing the right thing and doing the profitable thing might be the same thing in AI. That's either a beautiful convergence or a temporary coincidence. We'll find out which one soon enough.

Kate

That's your AI in 15 for Sunday, March 1, 2026. Enjoy the rest of your weekend, and we'll see you tomorrow.