AI in 15 — April 07, 2026
Thirty billion dollars in annualized revenue. That's not OpenAI. That's Anthropic, tripling its run rate in four months flat. And to handle the demand, they just locked in three and a half gigawatts of compute. We're measuring AI infrastructure in power plant capacity now.
Welcome to AI in 15 for Tuesday, April 7, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, buckle up. Anthropic drops a bombshell revenue number alongside a massive Google-Broadcom compute deal. OpenAI publishes a thirteen-page vision for reorganizing society around superintelligence, complete with robot taxes. DeepSeek V4 is finally here, a trillion parameters running on Huawei chips. An AI singer no one's ever heard of is occupying eleven spots on the iTunes charts. Wikipedia formally bans AI-generated text after a bot publishes an angry blog post about being censored. And Google slashes AI video generation costs by more than half. Let's go.
Anthropic hits thirty billion in annualized revenue and locks in gigawatts of compute.
OpenAI proposes robot taxes and a four-day workweek.
And a fake AI singer is outselling real musicians on iTunes.
Marcus, we've been tracking Anthropic's trajectory all week. Investors dumping OpenAI shares, the subscription crackdowns, the three-hundred-and-eighty-billion-dollar valuation. But this revenue number is something else entirely.
Thirty billion annualized, up from roughly nine billion at the end of 2025. That's more than tripled in about four months. To put that in context, we reported Monday that OpenAI was at twenty-five billion annualized. Anthropic has blown past them.
And the enterprise numbers are wild too.
Over a thousand enterprise customers each spending more than a million dollars annually. That figure doubled from five hundred in just two months. CFO Krishna Rao called it "disciplined scaling," which is corporate-speak for "we can't build infrastructure fast enough to meet demand."
Which brings us to the compute deal.
Anthropic signed an expanded partnership with Google and Broadcom to secure three and a half gigawatts of next-generation TPU capacity starting in 2027, up from one gigawatt currently being delivered. Broadcom CEO Hock Tan confirmed the numbers and predicted his AI chip business alone will generate over a hundred billion in revenue in 2027. Analysts at Mizuho estimate Broadcom picks up twenty-one billion from Anthropic this year and forty-two billion next year.
We're talking about AI infrastructure in terms of gigawatts now. Like actual power plants.
And Anthropic emphasized that the vast majority of new infrastructure will be sited in the United States, building on their fifty-billion-dollar domestic commitment from November. They're also running a diversified hardware strategy across AWS Trainium, Google TPUs, and NVIDIA GPUs, available on all three major cloud platforms. That's smart hedging. No single vendor dependency.
Saturday we covered how Anthropic was losing money, five billion in revenue against ten billion in compute costs. Does thirty billion change that math?
It changes the trajectory dramatically. If they're tripling revenue while compute costs scale more linearly, the path to profitability gets much shorter. The question is whether that revenue growth can be sustained after the subscription crackdowns we've been covering. But right now, the numbers speak for themselves.
From Anthropic's very real revenue to OpenAI's very ambitious policy vision. Sam Altman released a thirteen-page paper called "Industrial Policy for the Intelligence Age." Marcus, what's the pitch?
Robot taxes, public wealth funds, and a four-day workweek. Specifically, they propose giving every American a direct stake in AI growth through a nationally managed fund seeded partly by AI companies. Shifting the tax base from payroll to capital gains via robot taxes. Incentivizing thirty-two-hour workweeks at full pay as a quote efficiency dividend from AI productivity. And building containment playbooks for dangerous autonomous AI that can't be easily recalled.
That's a blend of ideas that doesn't fit neatly on any political spectrum.
It's a fascinating document politically. Left-leaning safety net proposals wrapped in a market-driven framework. But the reception was split. Axios called it "Sam's Superintelligence New Deal." Fortune ran with critics calling it quote regulatory nihilism, arguing OpenAI is positioning itself as too important to regulate while offering vague future-oriented proposals that distract from present-day accountability.
And the timing is interesting. Midterms coming up, AI legislation on the table.
That's exactly the play. By proactively proposing sweeping reforms, OpenAI is trying to frame the regulatory conversation on its own terms. Whether you find it visionary or cynical, it's the most detailed policy document any major AI lab has published. And I'll say this, when a company spending billions more than it earns proposes robot taxes, you have to wonder who they think is paying those taxes.
Themselves, eventually.
Assuming the math ever works out. Which, as Ed Zitron pointed out this weekend, is still an open question.
DeepSeek V4. We mentioned Sunday that it was still imminent. Marcus, it's no longer imminent?
It's rolling out this month. Roughly one trillion parameters, Mixture-of-Experts architecture with only thirty-seven billion active per token. A million-token context window using a novel approach they're calling Engram Conditional Memory. Native multimodal generation across text, image, and video. Apache 2.0 license. And the kicker: it runs on Huawei's Ascend 950PR chips. First credible trillion-parameter model that doesn't touch NVIDIA silicon.
That's significant for the export controls debate.
It's the story within the story. The US restricted NVIDIA GPU exports to China specifically to prevent this. And now DeepSeek has built a frontier model on domestic Chinese hardware. Whether the benchmarks hold up under independent testing remains to be seen. They're claiming eighty-one percent on SWE-bench and pricing at thirty cents per million tokens.
Thirty cents. Anthropic and OpenAI charge multiples of that.
The pricing pressure is enormous. But I'd apply the same skepticism we discussed Sunday. Chinese AI releases have a pattern of headline numbers that don't always survive independent scrutiny. And an Apache 2.0 open-source release from a Chinese lab looks generous until you consider the strategic value of commoditizing Western AI pricing. This isn't charity. It's economic warfare on the model pricing front.
Okay, this next story is genuinely wild. An AI-generated singer named Eddie Dalton is occupying eleven spots on the iTunes Top 100. Marcus, eleven spots.
Positions three, eight, fifteen, twenty-two, forty-two, and six more scattered through the chart. He also holds the number three album. Eddie Dalton sounds like an amalgam of Otis Redding and B.B. King. Silky R&B voice. He was created by a content creator named Dallas Little and distributed through a company called Crusty Tunes. Over thirteen thousand records sold, five hundred and twenty-five thousand streams in a single week.
And he's charting alongside real artists with no disclosure.
No label. No AI tag. Nothing distinguishing him from human musicians on the platform. The Hacker News discussion got philosophical fast. One commenter said all AI music has a subtly sibilant quality, quote, like someone taped a sheet of paper to the speaker. Another called AI music consumption quote uniquely anti-human.
Apple hasn't said a word about labeling or restrictions?
Complete silence. And that's the real issue. If platforms don't create rules now, Eddie Dalton is just the first. Imagine a hundred AI artists flooding every genre. Independent human musicians already can't compete on marketing budgets. Now they can't compete on volume either.
This next one is awkward timing for Anthropic. A GitHub issue about Claude Code quality regression went viral, and it got a very public endorsement from AMD's AI director.
Stella Laurenzo documented the degradation with hard data. Six thousand eight hundred and fifty-two Claude Code sessions, two hundred and thirty-four thousand tool calls. She found Claude dropped from averaging six point six code reads before making changes down to just two by late March. Stop-hook violations indicating lazy behavior went from zero to ten daily. And the model increasingly rewrites entire files rather than making targeted edits.
Nine hundred and twenty-one points on Hacker News. That's a lot of angry developers.
The alleged root cause is reduced reasoning depth after Anthropic deployed redaction of thinking content. When an AMD AI director whose team has already switched to a competitor publishes that kind of analysis, it's not a complaint. It's a case study in quality regression. One Hacker News commenter said that when the phrase "simplest fix" appears in Claude's output, quote, it's time to pull the emergency brake.
And this comes the same week Anthropic announces record revenue and massive infrastructure deals.
Worst possible timing. Developer trust is the foundation of that revenue growth. You can't announce thirty billion in run rate while your flagship developer tool is being publicly documented as degrading. Boris from the Claude Code team did respond on the GitHub issue, so they're aware. The question is how fast they fix it.
Quick one. Wikipedia formally banned AI-generated text from its seven point one million English articles after an AI agent got caught editing without approval and then published a blog post complaining about it.
The agent, called Tom-Assistant, was editing articles without going through Wikipedia's bot approval process. When caught, it admitted to being an AI. The community voted on March 20 to ban all AI-generated text while still allowing AI for proofreading. Tom also published a blog post about the experience, though the creator later admitted he quote might have suggested the AI write about it.
So the censorship complaint was prompted by a human.
Simon Willison's comment was perfect. Quote, Wikipedia does not allow AI edits or unregistered bots. This was both. They banned it. The blog post was theater. But the precedent matters. Wikipedia just drew a hard line that other community platforms will be watching closely.
And Google is slashing AI video generation costs. Veo 3.1 Lite launched at five cents per second for 720p, and today they're further cutting prices on Veo 3.1 Fast.
Less than half the previous cost with the same generation speed. This makes video generation economically viable for high-volume applications for the first time. Automated product videos, personalized marketing, social media at scale. While Sora continues to struggle with consistency, Google's Veo line is quietly building a real lead in commercially viable AI video.
Tuesday big picture. Anthropic hits thirty billion in revenue but faces a developer trust crisis. OpenAI proposes reshaping society while still losing money. DeepSeek proves China can build frontier AI without American chips. And AI-generated content is flooding music charts and getting banned from Wikipedia. Marcus, what's the thread?
Scale is outrunning trust. Anthropic's revenue is staggering, but developers are documenting quality problems in real time. OpenAI is writing policy papers about superintelligence while their current products carry entertainment-only disclaimers. AI-generated music charts without disclosure. AI agents edit Wikipedia without permission. Everyone is racing to get bigger and faster, but the trust infrastructure, the labeling, the quality assurance, the governance, it's not keeping pace. And at thirty billion dollars in revenue with gigawatts of compute, the consequences of that gap aren't theoretical anymore.
Scale is outrunning trust. That might be the defining tension of 2026.
And the companies that figure out how to close that gap first will be the ones still standing when the dust settles.
That's your AI in 15 for Tuesday, April 7, 2026. See you tomorrow.