
AI in 15 — April 18, 2026

April 18, 2026 · 15m 36s
Kate

Anthropic just launched a product that made Figma's stock slide the moment it hit the wire. Not a coding tool. Not a chat feature. A design tool. Welcome to the weekend where the lab became the application company.

Kate

Welcome to AI in 15 for Saturday, April 18, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Happy Saturday, Marcus. We've got a weekend edition with some genuinely new angles. Anthropic takes direct aim at Figma and Canva with Claude Design. The hidden cost inside Opus 4.7's new tokenizer is quietly raising developer bills by thirty percent. Security researchers claim they reproduced Anthropic's secretive Mythos zero-day findings using public APIs and thirty dollars. Maine becomes the first US state to ban large AI data centers. And a tiny open-source project called Smol Machines is trying to rebuild the plumbing underneath every coding agent. Let's go.

Kate

Claude Design turns prompts into prototypes and rattles the design industry.

Kate

A stealth tokenizer change makes Opus 4.7 about thirty percent more expensive on the same work.

Kate

And Maine tells the AI capex boom: not here, not yet.

Kate

Marcus, yesterday Anthropic introduced Claude Design. It's the flagship release from a newly formed group called Anthropic Labs. What is it actually?

Marcus

It's a prompt-to-prototype tool. You type a plain-English description and it produces polished slides, one-pagers, pitch decks, mobile app mockups, and fully editable visual layouts. It's powered by Opus 4.7 with Canva's design engine running underneath. The clever part is onboarding. During setup, Claude reads your existing codebase and design files, then builds a persistent design system. Every later project automatically inherits your colors, your typography, your components.

Kate

So it's not just generating random pretty decks. It's learning your brand first.

Marcus

Exactly. Exports include PDF, a shareable URL, PowerPoint, and a direct handoff to Canva. It's live in research preview for Pro, Max, Team, and Enterprise subscribers, rolling out gradually through Friday. Enterprise admins get it off by default, which is the grown-up decision.

Kate

And the market response was instant.

Marcus

The Hacker News thread hit nearly a thousand points and over six hundred comments within hours. The community read it one way: a shot at Figma, with Canva and Lovable caught in the crossfire. Figma's stock reportedly started sliding at eleven AM Eastern the minute the announcement dropped. Anthropic is publicly calling it complementary to Figma and Canva, targeting non-designers who need quick visuals without learning a design tool. But let's be honest, Kate. That's what every disruptor says on day one.

Kate

The top concern in the Hacker News thread was interesting: people arguing that the best design is original and counterintuitive, while AI converges to the norm.

Marcus

That's the real question. Foundation-model design is going to flatten the long tail of mediocre visuals. Every pitch deck will look competent. But genuinely original design work lives in the weird choices a human makes. Will Claude Design democratize visual communication or homogenize it? Probably both, honestly. The floor rises and the ceiling stays roughly where it was.

Kate

And strategically for Anthropic?

Marcus

This is their most overt horizontal expansion yet. They're not waiting for OpenAI or Microsoft or Adobe to eat that margin. The Anthropic Labs playbook is clear. Ship fast vertical apps on top of the foundation model. And with reports of eight hundred billion dollar plus valuation offers floating around, the line between Anthropic the research lab and Anthropic the applications company just got a lot blurrier.

Kate

Speaking of Opus 4.7, we covered the model yesterday. But there's a new angle that trended to number two on Hacker News this morning with five hundred and seventy-four points. Marcus, the tokenizer story.

Marcus

When Anthropic shipped Opus 4.7, they kept headline pricing unchanged. Five dollars per million input tokens, twenty-five per million output tokens. Same as 4.6. But they quietly swapped in a new tokenizer. An independent measurement study found that the same English and code content now consumes about one point three two times more tokens on average. Technical documentation hits one point four seven times. CLAUDE.md files one point four five times. Dense JSON is up just thirteen percent. Chinese, Japanese and Korean text is effectively unchanged.

Kate

So the price didn't change, but the meter runs faster.

Marcus

On an eighty-turn Claude Code debugging session, costs rose twenty to thirty percent. And it's worse than it sounds because cached prefixes now contain thirty to forty-five percent more tokens per turn. You're paying more to cache, and you're paying more to read out of cache.

Kate

What do developers get in exchange?

Marcus

Modest gains. About a five percentage point improvement on strict instruction following. That's real, but not thirty percent of your budget real. And GitHub Copilot has already adjusted. Its Opus 4.7 multiplier went from three to seven and a half. Microsoft is passing the higher cost through with a cushion on top. Finout's analysis nailed it. They called it the hidden cost behind the unchanged price tag.
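To make the math concrete, here's a back-of-envelope sketch using the prices and the one point three two multiplier quoted above. The session token counts are purely illustrative assumptions, not figures from the measurement study.

```python
# Back-of-envelope estimate of the Opus 4.7 tokenizer impact.
# Prices ($5/M input, $25/M output) and the 1.32x multiplier come
# from the episode; the session token counts are made-up examples.

INPUT_PRICE = 5 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 25 / 1_000_000  # dollars per output token

def session_cost(input_tokens, output_tokens, tokenizer_multiplier=1.0):
    """Cost of one session if the same text now tokenizes to
    `tokenizer_multiplier` times as many tokens."""
    return (input_tokens * tokenizer_multiplier * INPUT_PRICE
            + output_tokens * tokenizer_multiplier * OUTPUT_PRICE)

# Hypothetical long debugging session: 2M input tokens, 200k output tokens.
old = session_cost(2_000_000, 200_000)        # old tokenizer
new = session_cost(2_000_000, 200_000, 1.32)  # new tokenizer, same text

print(f"old ${old:.2f}, new ${new:.2f}, +{100 * (new / old - 1):.0f}%")
# → old $15.00, new $19.80, +32%
```

Since both input and output scale by the same factor here, the bill rises by the multiplier itself; real sessions land in the twenty-to-thirty percent range because code, prose, and JSON inflate at different rates.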

Kate

This feels like a trust story more than a pricing story.

Marcus

Exactly right. Token price is the closest thing the AI industry has to a standard unit of account. If the definition of a token quietly changes, the price per million tokens becomes a misleading benchmark. For teams building 2026 AI budgets off headline rates, this is the story that actually hits the spreadsheet. And it raises a basic question. If a tokenizer swap can move your bill thirty percent overnight, how much real pricing transparency is there in this market?

Kate

Now for a story that changes the cybersecurity conversation. We've been tracking Anthropic's Mythos preview for weeks. The restricted frontier model that allegedly found thousands of zero-day vulnerabilities including that twenty-seven-year-old OpenBSD bug. Anthropic declined to release it publicly and formed Project Glasswing to distribute the capability to AWS, Apple, Microsoft, Google, and CrowdStrike responsibly. Marcus, a security firm just claimed they reproduced most of it with public models.

Marcus

Vidoc Security Labs published a paper this week. They say they reproduced much of Mythos using Claude Opus 4.6 and GPT-5.4 through public APIs, a thirty dollar per file compute budget, and the open source opencode harness. On FreeBSD memory corruption bugs and Botan certificate trust bypass bugs, both models hit three for three. On the headline OpenBSD TCP SACK logic bug, Opus 4.6 got three for three. GPT-5.4 went zero for three on that one.

Kate

So the moat Anthropic was claiming around Mythos may not exist.

Marcus

Vidoc's argument is that the moat has moved. It's not model access anymore. It's validation and operationalization. The infrastructure to run these scans at scale, rank findings, deduplicate them, and route them to the right maintainers. They're telling defenders to stop waiting for special access and start refactoring their AppSec programs around the assumption that this capability is already out there. And by the way, your adversaries have the same tools.

Kate

Is the critique airtight?

Marcus

Not entirely. Hacker News commenters pushed back. The reproduction used pointed prompts that named specific files where bugs lived. Mythos allegedly found them cold with no hints. That's a meaningful difference. Guided search is easier than undirected discovery at scale. But it still cuts the story down to size.

Kate

And the business implication?

Marcus

If public models already match most of Mythos's capability, then Anthropic's restricted-release-plus-coalition strategy looks less like pure safety and more like a business model. Privileged enterprise access becomes the product. Jamie Dimon at JP Morgan was already quoted this week warning that Mythos reveals a lot more vulnerabilities across the banking sector. The conversation has shifted from AI could eventually find zero days to AI is finding them right now. And every CISO has to assume the attackers are using the same tools.

Kate

From AI security to AI politics. On Tuesday the Maine Legislature passed LD 307, a moratorium on any new data center larger than twenty megawatts until November 2027. Marcus, this is a first.

Marcus

First statewide ban of its kind in the United States. The bill also creates a Maine Data Center Coordination Council to develop policy on siting, energy, water, and community impact. Governor Janet Mills is weighing whether to sign. She's signaled she wants a carve-out for a proposed five hundred and fifty million dollar project at the former Androscoggin paper mill in Jay. That's a lot of jobs in a part of the state that needs them.

Kate

What's driving the backlash?

Marcus

The usual list. Water use, grid strain, noise, and the perception that benefits flow globally while costs stay local. One Hacker News commenter from a Midwestern town described a similar local vote as mostly a proxy vote against big tech and social media. Residents frustrated at the national level using the one lever they actually have. That framing feels right.

Kate

And the scale mismatch with industry plans is staggering.

Marcus

Meta just announced a hundred and fifteen to a hundred and thirty-five billion dollars in 2026 AI capex. That's capital spending on things like data centers. Microsoft is spending ten billion in Japan. Nebius and Meta inked a twenty-seven billion dollar Rubin-GPU infrastructure deal last week. All of that assumes frictionless siting. Maine is the first crack in that assumption at the state level.

Kate

So the bottleneck shifts from chips and power to politics.

Marcus

Potentially, yes. If this spreads, and local bans already have in multiple places, the AI scaling story collides with something money and engineering can't fix. It's also part of why SpaceX is seriously pitching orbital data centers. When terrestrial communities say no, orbit starts to look less ridiculous than it sounded last year. I still think it's a stretch economically, but the fact it's being pitched tells you the industry sees this coming.

Kate

Last story. A Show HN from an ex-AWS engineer who worked on Firecracker hit two hundred and ninety-four points yesterday. It's called Smol Machines. Marcus, why did the Hacker News crowd care?

Marcus

The pitch is a replacement for Docker: the ergonomics of containers, but with sub-second cold starts. The argument is that the container abstraction is an unnecessary layer and Firecracker was the wrong shape for what agents actually need. You can package apps as self-contained binaries, a potentially simpler alternative to things like GraalVM Native for JVM apps. And you can digitally sign them.

Kate

And the connection to AI?

Marcus

Immediate. The community connected it to coding agent sandboxes. An isolated environment that spins up a clean Linux with a browser pre-installed in under a second is exactly what an agent platform needs. Every lab-grade agent wants fast, isolated, disposable compute. Current options are Docker-in-Docker, which is fragile. Firecracker microVMs, which are too slow for agent use. Or full EC2 boots, which are way too slow and expensive.

Kate

So this is infrastructure for the agentic era.

Marcus

The plumbing. And it's telling that a plain GitHub release got more Hacker News traction than most hundred million dollar AI announcements. The community knows where the real work is happening. The infrastructure layer beneath agents is still being invented. Whoever builds the right primitives here ends up deeply embedded in the coding-agent stack that Claude Code, Codex, Cursor, and the rest are all converging on.

Kate

Saturday big picture. Marcus, the through-line across today's news?

Marcus

Convergence of the agent layer, Kate. Anthropic building a visual design agent. OpenAI giving Codex full computer control earlier this week. Google hardening robotics reasoning for physical-world agents. xAI shipping Grok Computer. Every lab is racing to own the doing layer on top of the models. The chatbot era is ending and the agent era is starting in public.

Kate

And the uncomfortable undercurrents?

Marcus

Two of them. First, the actual unit economics of frontier models are getting worse, not better, even as headline prices stay flat. The Opus 4.7 tokenizer story is the canary here. Second, the infrastructure these agents run on is hitting political resistance at the state level, while the security implications of making them cheaper and more capable are starting to scare regulators. The agent layer is arriving fast. The power grid, the regulation, and the security response are not quite ready.

Kate

So the builders are sprinting and the institutions are still tying their shoes.

Marcus

That's the weekend read. Have a good one, Kate.

Kate

You too, Marcus.

Kate

That's your AI in 15 for Saturday, April 18, 2026. Enjoy your weekend, and we'll see you Monday.