
AI in 15 — March 22, 2026

March 22, 2026 · 14m 57s
Kate

A security scanner designed to protect you, turned into a weapon that steals your credentials. You can't make this stuff up.

Kate

Welcome to AI in 15 for Sunday, March 22, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy Sunday, Marcus. We've got a really interesting mix today. A supply chain attack that weaponized Trivy, one of the most popular security scanners in open source. Bloomberg is reporting that AI coding tools are compressing dev cycles but burning engineers out faster. A sixteen z dropped their top one hundred gen AI consumer apps list and the numbers are staggering. An eight million dollar AI music streaming fraud just became the first criminal case of its kind in the U.S. And a research paper is warning that AI is creating an entirely new mode of human cognition, and not in a good way. Let's get into it.

Kate

A security scanner turned into a self-propagating credential-stealing worm.

Kate

Bloomberg says AI tools are making developers work harder, not easier.

Kate

And researchers identify "System 3" thinking, where humans just stop thinking and let AI decide.

Kate

Marcus, let's start with the Trivy supply chain attack because the irony here is almost poetic. A tool that developers use specifically to scan for vulnerabilities was itself weaponized. What happened?

Marcus

This is one of the most clever supply chain attacks we've seen. Trivy is Aqua Security's open-source vulnerability scanner, used by thousands of organizations to check containers and code for security issues. Attackers managed to inject malicious code into npm packages that Trivy depends on. When developers ran their security scans, the compromised packages would execute, harvesting credentials, API keys, and environment variables from the machines running the scanner.

Kate

So the thing you're running to check if you're safe is the thing making you unsafe.

Marcus

Exactly. And it gets worse. The malicious payload was designed to propagate. Once it had access to a developer's credentials, it could use those credentials to push compromised code to other repositories, which would then infect other developers running Trivy. A self-propagating worm hiding inside a security tool. It's the digital equivalent of poisoning the water supply at the water testing lab.

Kate

How widespread was the damage?

Marcus

The full scope is still being assessed, but the attack was active long enough to affect a significant number of CI/CD pipelines. This follows the pattern we've been tracking. Monday we covered Glassworm hiding malware in Unicode characters on GitHub. Friday we covered Snowflake's AI coding tool getting prompt-injected. Now a security scanner itself is compromised. The software supply chain is under sustained, sophisticated attack, and the attackers are specifically targeting the tools developers trust most.

Kate

What's the fix here? How do you protect yourself when your protection tools are compromised?

Marcus

Defense in depth. Pin your dependencies to specific versions and audit them. Run security tools in isolated environments with minimal credentials. And the broader lesson: trust in open-source packages can't be assumed based on popularity alone. The most widely used packages are the most valuable targets precisely because they're everywhere. Security isn't a single tool you run. It's an architecture you build.
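
To make the dependency-pinning advice concrete, here's a minimal sketch in Python (not from the episode; the script name and file layout are assumptions) of a check that fails a CI build whenever an npm manifest lists a dependency with a floating version range instead of an exact pin.

```python
# Minimal sketch (illustrative, not from the episode): flag npm dependencies
# that are not pinned to an exact version, one small piece of defense in depth.
import json
import re
import sys
from pathlib import Path

# Anything other than an exact "1.2.3"-style version is treated as floating.
EXACT_VERSION = re.compile(r"^\d+\.\d+\.\d+$")

def floating_deps(package_json: Path) -> dict:
    """Return {package: spec} for every dependency that is not exactly pinned."""
    manifest = json.loads(package_json.read_text())
    deps = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(manifest.get(section, {}))
    return {name: spec for name, spec in deps.items() if not EXACT_VERSION.match(spec)}

if __name__ == "__main__":
    path = Path(sys.argv[1] if len(sys.argv) > 1 else "package.json")
    loose = floating_deps(path)
    for name, spec in sorted(loose.items()):
        print(f"not pinned: {name} {spec}")
    sys.exit(1 if loose else 0)  # non-zero exit so CI can fail the build
```

In practice you'd pair a check like this with a committed lockfile and run the scanner itself in a container that holds no long-lived credentials, so a compromised package has as little as possible to steal.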

Kate

From security to the human cost of AI tools. Bloomberg published a deep investigation this week with a headline that's going to resonate with a lot of developers. AI coding tools like Claude Code are compressing development cycles by forty percent, but engineers aren't getting any relief. They're just being pushed harder. Marcus, what did Bloomberg find?

Marcus

The report surveyed engineering teams across multiple companies and found a consistent pattern. When AI tools reduce the time to complete a feature from, say, ten days to six, management doesn't give engineers four days of breathing room. They pack in more work. Sprint commitments increase. Feature expectations expand. The productivity gains are being captured entirely by the organization, not shared with the developers doing the work.

Kate

So the treadmill just speeds up.

Marcus

And the quality concerns compound. Remember the Carnegie Mellon study from earlier this week showing AI coding tools degrade code quality by forty-two percent? Now combine that with compressed timelines and increased workload. You have developers shipping more code faster with less time to review it, and the code itself is lower quality. That's a recipe for technical debt accumulation at an unprecedented rate.

Kate

This connects to the "AI Coding Is Gambling" essay we covered Thursday. The behavioral patterns are being reinforced by organizational pressure now.

Marcus

It's a perfect storm. The tools create a dopamine loop that feels productive. Management sees the speed increase and demands more. Developers can't push back because the metrics say they're faster. And nobody's measuring the long-term cost of the technical debt being generated. Bloomberg quoted one engineering lead who said, "We used to have time to think. Now we have time to prompt."

Kate

That's a devastating quote.

Marcus

It captures the paradox perfectly. The tools are genuinely capable. But capability without recovery time leads to burnout, not productivity. The companies that figure out how to use AI tools to give engineers breathing room for deeper work rather than just cramming in more features will have a significant talent retention advantage.

Kate

Andreessen Horowitz dropped their latest top one hundred generative AI consumer apps report. Marcus, the numbers at the top are wild.

Marcus

ChatGPT at nine hundred million weekly active users. That's approaching Instagram territory. But the more interesting finding is how the market is segmenting. A sixteen z identifies distinct ecosystems forming. ChatGPT is becoming the super-app, trying to be everything to everyone. Social features, image generation, browsing, code, voice. Claude is carving out the professional tools market, particularly with developers and enterprise users. And then you have specialized players like Midjourney for images and ElevenLabs for voice occupying defensible niches.

Kate

So the era of "every AI company does the same thing" is ending?

Marcus

Rapidly. And this matters for the IPO race. OpenAI's super-app strategy produces massive user numbers but lower revenue per user. Anthropic's focused approach produces fewer users but much higher revenue per user. Claude Code alone at two and a half billion ARR from a relatively small developer user base versus ChatGPT's nine hundred million users generating ten billion in consumer revenue. The math on revenue per user tells a very different story than the headline user counts.
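
To put numbers on that, here's the back-of-the-envelope arithmetic. This is a sketch only: the ChatGPT figures are the ones quoted above, while the Claude Code user count is a purely illustrative assumption, since the episode only calls the developer base "relatively small."

```python
# Revenue-per-user arithmetic behind the comparison above. The Claude Code
# user count is an ASSUMPTION for illustration, not a figure from the episode.
chatgpt_revenue = 10e9       # ~$10B consumer revenue (quoted in the episode)
chatgpt_users = 900e6        # ~900M weekly active users (quoted in the episode)

claude_code_arr = 2.5e9      # ~$2.5B ARR (quoted in the episode)
claude_code_users = 1e6      # hypothetical "relatively small" developer base

print(f"ChatGPT revenue per user:     ${chatgpt_revenue / chatgpt_users:,.2f} per year")
print(f"Claude Code revenue per user: ${claude_code_arr / claude_code_users:,.2f} per year")
# Roughly $11 per user versus $2,500 per user under this assumption; even a
# developer base several times larger leaves a gap of two orders of magnitude.
```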

Kate

Which approach wins long-term?

Marcus

History says focus wins. Amazon started with books. Google started with search. The companies that tried to be everything from day one, that's a much shorter list. But the AI market is genuinely different in some ways, so we'll see.

Kate

This next one is a first. Federal prosecutors in the U.S. just brought the first-ever criminal case for AI-powered streaming fraud. A man allegedly used AI to generate hundreds of thousands of songs, then used bots to stream them billions of times across platforms like Spotify and Apple Music, pocketing eight million dollars. Marcus?

Marcus

Michael Smith of North Carolina is accused of creating thousands of bot accounts that streamed AI-generated music on repeat, twenty-four hours a day. The songs were generated by AI tools, uploaded under fake artist names, and the streaming platforms paid out royalties as if real listeners were enjoying real music. At its peak, the operation was generating over six hundred thousand streams per day.

Kate

Eight million dollars from fake songs nobody actually listened to.

Marcus

And this is almost certainly the tip of the iceberg. The barriers to entry are essentially zero now. AI music generation tools can produce passable tracks in seconds. Bot farms are cheap. And the streaming platforms' fraud detection has been playing catch-up. Spotify and others have been tightening their systems, but the economics are so attractive that fraud keeps evolving. The criminal prosecution signals that the industry is finally treating this as a serious enforcement issue rather than just a terms-of-service violation.

Kate

Real musicians are already struggling with streaming economics. This makes it worse.

Marcus

Significantly. Every fraudulent stream diverts money from the royalty pool that pays legitimate artists. If even a small percentage of total streams are fraudulent, that's millions of dollars per year taken from actual musicians. The first criminal case is important, but the systemic problem requires better detection technology from the platforms themselves.

Kate

Let's talk about something that sounds like science fiction but is coming out of serious academic research. A new paper introduces the concept of "System 3" cognition. We all know Daniel Kahneman's System 1, fast intuitive thinking, and System 2, slow deliberate reasoning. Now researchers are arguing AI has created a third mode. Marcus, what is System 3?

Marcus

System 3 is essentially cognitive outsourcing. The researchers found that when people have access to AI, particularly those who are less inclined toward critical thinking, they increasingly defer to the AI's judgment rather than engaging either their intuitive or analytical thinking. It's not that they think fast or think slow. They stop thinking and let the AI think for them.

Kate

That's different from using AI as a tool, right? Using a calculator doesn't mean you've stopped doing math.

Marcus

The distinction is agency. When you use a calculator, you decide what to calculate and evaluate whether the answer makes sense. System 3 cognition means accepting the AI's framing of the problem and its answer without independent evaluation. The study found this effect is strongest in people who already scored lower on critical thinking measures. So AI isn't creating uniform cognitive enhancement. It's widening the gap between critical thinkers and non-critical thinkers.

Kate

That has huge implications for education, for workplaces, for democracy honestly.

Marcus

It does. If a significant portion of the population shifts to System 3 as their default cognitive mode, you have a society where critical thinking becomes a specialized skill rather than a baseline expectation. The researchers specifically warn about decision-making in high-stakes contexts. Medical decisions, financial planning, voting. If people defer to AI without engaging their own judgment, the quality of those decisions depends entirely on the AI's alignment with the person's actual interests.

Kate

Quick hit. Game Developers Conference data from GDC 2026 dropped this week and the numbers are grim. Thirty-three percent of U.S. game developers have been laid off in recent waves, fifty-two percent view AI negatively, and eighty-two percent support unionization. Marcus, quick take?

Marcus

The game industry is the canary in the coal mine for creative professions and AI. Developers are seeing AI used to justify layoffs while simultaneously being told to integrate AI into their workflows. The eighty-two percent unionization support is historic for an industry that's traditionally been resistant to organized labor. When more than half your workforce views the technology negatively and a third have been laid off, that's not an industry embracing AI. That's an industry being disrupted by it against its will.

Kate

And publishers are fighting back too. We covered yesterday that Wikipedia voted forty-four to two to restrict AI content. Now the EFF is warning that major publishers blocking the Internet Archive are, quote, "burning the library to punish the arsonist" in their fight against AI scrapers.

Marcus

The irony is that restricting access to archived content doesn't hurt AI companies that already scraped it. It hurts researchers, journalists, and the public who rely on the Internet Archive for legitimate access to historical information. The collateral damage of the anti-AI backlash is falling on the wrong targets.

Kate

Sunday big picture. Security tools weaponized against their users. Developers working harder despite productivity gains. A new cognitive mode where people stop thinking entirely. Game developers in crisis. Marcus, what's the thread today?

Marcus

Unintended consequences. Every one of these stories is about AI producing effects that nobody designed for and nobody wanted. The security scanner wasn't supposed to steal credentials. Productivity tools weren't supposed to increase burnout. AI assistants weren't supposed to make people think less. And AI music generation wasn't supposed to defraud the streaming ecosystem. We keep building these systems with intended use cases and being surprised when the second- and third-order effects are nothing like what we planned. The technology does exactly what it's designed to do. The problem is everything else it does that we didn't think about.

Kate

Unintended consequences with very real costs.

Marcus

Eight million in streaming fraud. Forty-two percent degradation in code quality. A third of game developers unemployed. These aren't hypothetical risks anymore. They're quarterly earnings and human livelihoods.

Kate

That's your AI in 15 for Sunday, March 22, 2026. Enjoy the rest of your weekend, and we'll see you tomorrow.