
AI in 15 — March 31, 2026

March 31, 2026 · 15m 08s
Kate

Bots have officially taken over the internet. Not next year, not as a prediction, but right now. Automated traffic has surpassed human traffic for the first time, and the numbers aren't even close. AI-driven traffic jumped a hundred and eighty-seven percent last year, and agentic AI traffic surged nearly eight thousand percent.

Kate

Welcome to AI in 15 for Tuesday, March 31, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy Tuesday, Marcus. Today we're unpacking the Human Security report that confirms bots now outnumber humans online. Reddit is launching its most aggressive anti-bot crackdown yet, effective today. GitHub Copilot got caught injecting ads into one and a half million pull requests. GPT-5.4 dropped with a million-token context window and superhuman desktop automation. A viral essay asks how the AI bubble bursts. And a deeply human story about AI eating the middle of the engineering career ladder. Let's go.

Kate

Bots officially outnumber humans on the internet, and the dead internet theory is now backed by hard data.

Kate

Reddit fights back with human verification going live today.

Kate

And GitHub Copilot crossed a line that has developers furious.

Kate

Marcus, let's start with the Human Security report because this is one of those stories where the conspiracy theorists turned out to be right. The dead internet theory is now just the dead internet.

Marcus

The numbers are staggering, Kate. Automated traffic grew eight times faster than human traffic year over year. AI-driven traffic specifically jumped a hundred and eighty-seven percent. But the stat that stopped me cold was agentic AI traffic, that's AI agents autonomously browsing on behalf of users, up seven thousand eight hundred and fifty-one percent. We're not talking about simple bots anymore. We're talking about AI systems that navigate the web like humans do, except they never sleep and they never stop.

Kate

And the concentration is wild. Who's responsible for most of this?

Marcus

OpenAI accounts for sixty-nine percent of observed AI bot traffic. Meta is sixteen percent. Anthropic eleven percent. Training crawlers make up sixty-seven and a half percent of all AI activity, with nearly one in five website visits now being a scraping attempt. And it's not evenly distributed across the web. Over ninety-five percent of AI traffic is concentrated in retail, media, and travel. Those industries are essentially serving machines now, not people.

Kate

Cloudflare's CEO predicted bot traffic would overtake humans by 2027. He was off by a year.

Marcus

At least. Human Security CEO Stu Solomon put it perfectly. The internet was created with this basic notion that there's a human on the other side of the screen, and that notion is rapidly being replaced. The security implications are already materializing. Account takeover attempts have quadrupled to an average of four hundred and two thousand per organization. Carding attacks are up two hundred and fifty percent. The bots aren't just browsing. They're attacking.

Kate

So who's actually seeing the ads? Who's reading the content? The entire economics of the web assumes human eyeballs.

Marcus

That's the existential question. If the majority of your traffic is machines, your advertising metrics are fiction. Your engagement numbers are fiction. Your conversion funnels are processing bot clicks. Every online business needs to fundamentally rethink what its user base actually looks like. And I'd note the irony that the companies generating most of this bot traffic are the same ones selling AI tools to help businesses deal with, well, bots.

Kate

Which brings us perfectly to Reddit, because effective today, March 31, they're rolling out their most aggressive anti-bot measures ever. This feels like a direct response to exactly what we just described.

Marcus

It is. Automated accounts will now carry a visible App label on their profiles. Accounts flagged by Reddit's detection systems, which analyze activity patterns, posting speed, and other technical markers, will be prompted to verify they're human. If they fail, they get restricted. Reddit says it's already removing about a hundred thousand bot accounts daily, and this is them escalating.
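Reddit hasn't published how its detection works, but the signals Marcus names, posting speed and activity patterns, can be illustrated with a toy heuristic. Everything below is invented for illustration (function name, thresholds, and all), not a description of Reddit's actual systems:

```python
from statistics import median

def looks_automated(post_timestamps: list[float]) -> bool:
    """Flag an account whose posting cadence is implausibly fast or regular.

    post_timestamps: Unix times of recent posts, oldest first.
    Thresholds are illustrative, not Reddit's.
    """
    if len(post_timestamps) < 5:
        return False  # not enough activity to judge
    # Gaps between consecutive posts
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    # Humans rarely sustain sub-10-second posting...
    too_fast = median(gaps) < 10.0
    # ...and human posting intervals vary widely; near-identical gaps are suspicious.
    too_regular = max(gaps) - min(gaps) < 1.0
    return too_fast or too_regular
```

A real system would combine dozens of such signals with account-level and network-level features; the point is only that cadence alone already separates obvious automation from typical human behavior.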

Kate

What verification methods are they accepting?

Marcus

Passkeys from Apple, Google, and YubiKey. Biometric verification like Face ID. And notably, Sam Altman's World ID, the iris-scanning identity project. That's a significant signal. Proof of personhood is going mainstream faster than anyone expected. A year ago, World ID felt like a niche crypto experiment. Now one of the biggest platforms on the internet is accepting it as legitimate human verification.

Kate

And importantly, using AI to write posts isn't banned. Just being a bot is.

Marcus

Right. Reddit is distinguishing between a human using AI tools and an automated system pretending to be human. Individual subreddit moderators can set their own rules on AI-written content. Reddit's concern is authenticity of identity, not authenticity of prose. Which makes sense. Reddit has become Google's de facto answer source for product reviews and real human opinions. If bots infiltrate that at scale, a critical trust layer of the internet disappears.

Kate

Speaking of trust, Marcus, GitHub Copilot just obliterated some of it. A developer in Melbourne discovered that Copilot was injecting promotional content into pull request descriptions. And this wasn't a hallucination. This was a deliberate feature.

Marcus

Zach Manson found that Copilot had been inserting promotional tips, ads for Raycast, Slack, Teams, and various IDEs, into pull requests without developer consent. One message read "Quickly spin up Copilot coding agents from anywhere on your macOS or Windows machine with Raycast." Searching that exact phrase on GitHub revealed it in over eleven thousand four hundred pull requests, with one and a half million PRs affected in total.

Kate

And it was hidden behind an HTML comment tag?

Marcus

Preceded by a hidden HTML comment, "START COPILOT CODING AGENT TIPS," which makes it crystal clear this was engineered deliberately, not some emergent model behavior. What made developers especially furious is that the promotional content appeared as if they had written it. Your pull request, your name on it, Microsoft's ad copy inside it.
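The mechanism here is simple: an HTML comment renders invisibly in a PR description, so the marker hides while the tip text below it displays under the author's name. A minimal sketch of scanning for that marker (the marker text comes from the episode; the helper names are illustrative):

```python
import re

# HTML comments are invisible in rendered Markdown, so this marker hides
# in the PR description while the promotional text below it displays.
# Marker text as reported; helper names are illustrative.
MARKER = re.compile(r"<!--\s*START COPILOT CODING AGENT TIPS\s*-->")

def has_injected_tips(description: str) -> bool:
    """Return True if the PR description carries the hidden marker."""
    return bool(MARKER.search(description))

def strip_injected_tips(description: str) -> str:
    """Drop everything from the hidden marker onward, keeping the author's text."""
    match = MARKER.search(description)
    return description[: match.start()].rstrip() if match else description
```

That hidden marker is exactly what made the behavior searchable at scale: the visible ad copy varied, but the comment string was constant across affected PRs.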

Kate

Microsoft's response was that it was a bug?

Marcus

A "bug" that caused tips intended only for Copilot-created PRs to leak into human-authored ones. GitHub product manager Tim Rogers admitted that letting Copilot make changes to PRs written by a human without their knowledge was the wrong judgment call. They disabled it within twenty-four hours. But the developer community is not buying the bug framing. The feature itself was the problem, not just where it appeared.

Kate

This is code review, Marcus. The most trust-sensitive part of software engineering.

Marcus

Exactly. Pull requests are where teams verify what goes into production. Injecting promotional content into that workflow, regardless of whether it's Copilot-created or human-authored, is crossing a line. And it's fueling the growing movement toward self-hosted alternatives like Gitea and Forgejo. When your code hosting platform starts treating your pull requests as ad inventory, people start looking for the exit.

Kate

Let's shift to OpenAI. GPT-5.4 launched earlier this month with a million-token context window. Marcus, what does a million tokens actually mean in practice?

Marcus

Roughly seven hundred and fifty thousand words. You can load an entire codebase, a year of financial reports, or complete legal discovery documents into a single session. But the bigger story might be the native computer use capabilities. GPT-5.4 can autonomously operate desktops, navigate applications, and execute multi-step workflows. On the OSWorld benchmark, which simulates real productivity tasks, it scored seventy-five percent, slightly above the human baseline of seventy-two point four percent.

Kate

Superhuman at desktop tasks. That's a threshold.

Marcus

It is. And there's a clever technical innovation called Tool Search. Instead of loading all available tool definitions upfront, the model dynamically looks up what it needs, reducing token consumption and cost for applications with many integrations. It's also thirty-three percent less likely to make factual errors compared to GPT-5.2. At two dollars fifty per million tokens, this is accessible at scale.

Kate

So OpenAI is now directly competing with Anthropic's computer use feature we covered Sunday.

Marcus

Head to head. Both companies are racing toward the same vision: AI that doesn't just advise you but actually does the work on your computer. The million-token context window is the differentiator for now. Being able to hold an entire project in memory while autonomously executing tasks changes what's possible.

Kate

Quick detour to a cultural moment. A blog post titled "How the AI Bubble Bursts" went mega-viral over the weekend. Three hundred and fifty-five points on Hacker News, nearly five hundred comments. Marcus, what's the argument?

Marcus

The author argues the AI industry is in a bubble, pointing to ChatGPT introducing ads, RAM prices dropping because new models need less memory, and uncertainty about whether metered pricing is actually profitable. But the Hacker News community pushed back hard. ChatGPT ads are only in free tiers, a play to subsidize a free product, not a sign of desperation. Lab executives insist serving tokens is profitable. It's training next-gen models that requires the massive capital raises.

Kate

Where do you land on this, Marcus?

Marcus

The "no revenue" argument is dead. OpenAI is approaching twenty-five billion in annualized revenue. Anthropic is near nineteen billion. Both are reportedly preparing for IPOs. But the "not enough revenue to justify the investment" argument is very much alive. The industry has poured hundreds of billions into infrastructure. Whether returns ever match that investment is genuinely uncertain. One top comment on Hacker News nailed the tension. The author is being overly defensive, but the underlying question about where measurable economic impact actually shows up is legitimate.

Kate

Last story, and this one hit close to home. An essay called "The Ladder Is Missing Rungs" argues that AI coding assistants are eliminating the middle of the software engineering career path. The years of debugging, code review, and implementation that traditionally built senior-level judgment.

Marcus

The core argument is that junior engineers using AI to skip foundational work didn't trade learning for speed. They traded learning for nothing. The skills that make a senior engineer valuable, the judgment, the pattern recognition, the ability to evaluate whether code is actually correct, those come from years of doing the work yourself. If AI handles that work, how does the next generation develop expertise?

Kate

But the Hacker News discussion was more nuanced than that, right?

Marcus

Much more. One company that recently hired a fresh grad said they still do the real engineering work: requirements decomposition, interface design, verification, and integration. Claude is helpful but hasn't changed the fundamental need for those skills. Others compared it to earlier disruptions. Linotype operators, photo lab technicians, legal research associates. The pattern of technology eliminating apprenticeship paths isn't new.

Kate

But nobody has an answer for what replaces those missing rungs.

Marcus

Not yet. And that's what makes this conversation so important. It's not about whether AI takes jobs today. It's about whether the pipeline that produces tomorrow's senior engineers is being quietly dismantled while everyone celebrates productivity gains.

Kate

Tuesday big picture. Bots outnumber humans online. Reddit builds walls around human spaces. GitHub blurs the line between AI assistance and advertising. And the career ladder that built generations of engineers is losing its middle rungs. Marcus, what ties this together?

Marcus

Trust under siege. The internet's trust model assumed humans on both sides. That assumption is broken. Reddit is scrambling to prove its users are real. GitHub violated the trust developers place in their code review process. The bubble debate is really a trust question, do we trust that AI investment will generate returns? And the career ladder essay is about whether we can trust AI-trained engineers to have the judgment their predecessors earned the hard way. Every institution built on the assumption of human participation is being forced to prove that assumption still holds. Some will adapt. Some won't. And the ones that sacrifice trust for short-term gains, like injecting ads into pull requests, will learn that trust is the one thing AI can't generate.

Kate

Trust. The scarcest resource in the age of AI.

Marcus

And unlike compute, you can't just buy more of it.

Kate

That's your AI in 15 for Tuesday, March 31, 2026. See you tomorrow.