AI in 15 — May 02, 2026
Uber gave five thousand engineers access to Claude Code in December. By April, the CTO told the company they'd burned through the entire 2026 AI budget. With eight months left in the year.
Welcome to AI in 15 for Saturday, May second, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Saturday show, Marcus, and the through line today is enterprise AI economics meeting reality. Uber blew its entire annual AI budget in four months on Claude Code. Apple accidentally shipped internal Claude configuration files inside a public app. xAI dropped Grok 4.3 with a forty percent price cut and voice cloning. OpenAI restricted access to its Cyber model after publicly criticizing Anthropic for doing exactly that. Spotify rolled out human-artist verification badges. DeepSeek V4 lands at near-frontier capability for one thirty-fifth the price of Opus. Anthropic shipped nine connectors for creative tools. And Canonical is fighting off a sustained DDoS attack as Ubuntu twenty-six ships.
Uber's CTO is back to the drawing board on AI budgeting.
Apple shipped its Claude config files to the App Store.
And DeepSeek matches Opus on coding for pennies on the dollar.
Lead story, Marcus. Uber gave its engineers Claude Code in December. By April the CTO is telling the company the AI budget is gone. Walk me through the numbers.
The numbers are striking, Kate. Uber's R&D spend is around three-point-four billion dollars a year, so the AI line item that, in the CTO's words, blew up, is not a rounding error. Per-engineer monthly API costs are running between five hundred and two thousand dollars. Claude Code usage among Uber engineers jumped from thirty-two percent in December to sixty-three percent by February. Ninety-five percent of Uber engineers now use AI tools at least monthly. Seventy percent of code committed to Uber's repos originates from AI. About eleven percent of Uber's live backend code updates are now written by AI agents. Six months ago that number barely registered.
So adoption ran ahead of the budget.
Adoption ran way ahead of the budget. And this is a sophisticated buyer, Kate. Uber's finance team knew AI prices would be high in 2026 when they planned this. They still missed it by a factor of three. Claude Code is the cost driver because of how aggressively it consumes tokens during multi-step agentic work. Cursor was rolled out alongside it, but the agent loops are where the bill compounds.
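The per-engineer figures Marcus quotes support a quick back-of-envelope check. This sketch is illustrative only: the head count comes from the cold open, the per-engineer range and adoption share from this segment, and the assumption that every adopting engineer spends at the quoted rate is ours, not Uber's disclosed accounting.

```python
# Back-of-envelope estimate of an annualized Claude Code run-rate.
# All inputs are figures quoted in this episode; treating them as a
# steady monthly spend is an illustrative assumption, not Uber's math.

ENGINEERS = 5_000                 # engineers given access in December
COST_LOW, COST_HIGH = 500, 2_000  # reported per-engineer monthly API cost ($)
ADOPTION = 0.63                   # share using Claude Code by February

def annualized_spend(monthly_cost_per_engineer: float) -> float:
    """Annual run-rate if every adopting engineer spends this much per month."""
    return ENGINEERS * ADOPTION * monthly_cost_per_engineer * 12

low = annualized_spend(COST_LOW)
high = annualized_spend(COST_HIGH)
print(f"Annual run-rate: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

Under those assumptions the run-rate lands roughly between nineteen and seventy-six million dollars a year, which makes a factor-of-three miss easy to picture if the budget was planned near the bottom of the per-engineer range.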
And the uncomfortable question.
The uncomfortable question is the one Hacker News keeps pressing. If AI productivity gains are real, where is the matching revenue lift? Uber hasn't disclosed a corresponding output metric. The honest read is that this is the canary for the entire enterprise AI economy. If Uber, of all companies, can't predict cost-to-productivity at scale, every CFO in the Fortune 500 just got nervous. It is also a structural tailwind for Anthropic specifically, which is becoming the default for serious enterprise coding. Which connects directly to our next story.
Quick hits. Marcus, Apple shipped Claude-dot-md files inside the public Apple Support app. Version five-thirteen.
Apple researcher Aaron Perris flagged it on X yesterday. Claude-dot-md files are the markdown configuration documents that Claude Code reads at the start of every session to learn project conventions. They're meant to live in source repositories. They should never reach a shipped binary. They reached this one. It is hard evidence Apple is using Claude Code as a primary internal development tool.
And this matches the Bloomberg reporting.
Mark Gurman has been reporting for months that, quote, Apple runs on Anthropic at this point. Apple is running custom versions of Claude on its own servers internally for product development. There's a delicious layer of irony here, Kate. Apple's public AI partnership for consumer Siri is with Google's Gemini. But the engineers inside Apple appear to be quietly building those Gemini-powered products with Claude. Earlier reports said Apple originally wanted Siri itself on Claude but backed out because Anthropic wanted several billion dollars a year and proposed doubling the bill annually.
Three takeaways.
Three takeaways. One, Anthropic has quietly become the enterprise developer-AI standard, even at companies whose public AI brand is something else entirely. Two, this is an embarrassing supply-chain hygiene moment for Apple, the most secretive product company in the world. A Claude-dot-md should never be in a shipped App Store binary. Three, AI coding tools are now reshaping internal engineering culture inside the most disciplined shop in tech. If Apple is leaning this hard on an external AI vendor, every other enterprise has cover to do the same.
xAI shipped Grok 4.3 on Thursday, Marcus. The headline is the price.
One dollar twenty-five per million input tokens, two-fifty per million output. That's down about thirty-seven percent on input and fifty-eight percent on output versus Grok 4.20. Context window jumps to one million tokens. Native video input arrives. xAI launched a fast voice-cloning suite alongside the model. On the Artificial Analysis Intelligence Index, Grok 4.3 scores fifty-three, ahead of Claude Sonnet 4.6. It currently ranks number one on the CaseLaw v2 and CorpFin domain benchmarks.
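The price cuts Marcus cites can be sanity-checked by back-computing what Grok 4.20 must have cost. The implied prior prices below are derived from the stated percentages, not quoted figures.

```python
# Back-compute the implied Grok 4.20 prices from the stated cuts.
# The new prices and cut percentages are from this episode; the
# implied prior prices are derived, not quoted figures.

NEW_INPUT, NEW_OUTPUT = 1.25, 2.50   # Grok 4.3, $ per million tokens
CUT_INPUT, CUT_OUTPUT = 0.37, 0.58   # stated reductions vs Grok 4.20

def implied_prior(new_price: float, cut: float) -> float:
    """Price before a fractional cut: new = prior * (1 - cut)."""
    return new_price / (1 - cut)

print(f"Implied Grok 4.20 input:  ${implied_prior(NEW_INPUT, CUT_INPUT):.2f}/M")
print(f"Implied Grok 4.20 output: ${implied_prior(NEW_OUTPUT, CUT_OUTPUT):.2f}/M")
```

That works out to roughly one ninety-eight per million input and five ninety-five per million output for the prior model, consistent with the output cut being the larger of the two.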
And the caveat.
The independent reviewer at gertlabs flagged a more nuanced picture. Grok 4.3 is unusually fast and produces dense, token-efficient outputs, but its raw coding-reasoning ability still trails the big April releases from Anthropic and OpenAI. So you have a model that's cheaper and faster, scoring well on legal and finance, but probably not the model engineers will pick for hard coding tasks. It is xAI making the cost argument rather than the capability argument. And it is aimed directly at the enterprise budget conversation Uber just had. The voice-cloning launch is also worth flagging, Kate. Synthetic-voice fraud is going to spike.
OpenAI restricted access to its Cyber model on Thursday, Marcus. After Sam Altman publicly criticized Anthropic for doing exactly that three weeks ago.
The reversal is striking. Three weeks back Altman called Anthropic's gatekeeping on Mythos, their cybersecurity model, fear-based marketing. Thursday he announced GPT-5.5 Cyber will only be available to, quote, critical cyber defenders, through an application program called Trusted Access for Cyber. Already covers thousands of verified defenders and hundreds of teams. Cyber can perform penetration testing, vulnerability identification and exploitation, and malware reverse engineering. Anthropic's Mythos preview only goes to Microsoft, Google, Apple, AWS, JPMorgan, Nvidia, and a handful of others.
And the underlying capability is real.
It is very real. AI-assisted vulnerability discovery is now surfacing nine-year-old Linux root-escalation bugs and high-severity GitHub Enterprise Server exploits in days. Frontier AI labs now agree, even when they don't say so out loud, that the most capable cybersecurity models are too dangerous for general release. The my-model-is-more-dangerous-than-yours posturing is silly. The capability is not. And this makes a mockery of the open-source-versus-closed debate. When DeepSeek or another Chinese lab releases an equivalent open-weights model, every Western access regime has a half-life set by their release cadence. Which, Kate, is also our next story.
DeepSeek V4 dropped on April twenty-fourth. Simon Willison wrote it up.
Two variants, Kate. V4-Pro at one-point-six trillion total parameters, forty-nine billion active. V4-Flash at two hundred eighty-four billion total, thirteen billion active. Both mixture-of-experts, both one-million-token context, both open-sourced. V4-Pro tops LiveCodeBench at ninety-three-point-five. Codeforces ELO of three thousand two hundred six, ahead of GPT-5.5. And it effectively ties Claude Opus 4.7 on SWE-bench Verified, eighty-point-six versus eighty-point-eight.
And the price.
The price is the kicker. V4-Pro is roughly thirty-five times cheaper on input and seventeen times cheaper on output than Opus 4.7 at standard pricing. During the promo window, eighty-six times cheaper on output. Just as importantly, V4 was reportedly delayed for months while DeepSeek rewrote the architecture to run on Huawei Ascend and Cambricon chips. When it shipped, Alibaba Cloud and Tencent Cloud deployed it the same day. Huawei is now projecting twelve billion dollars in AI chip revenue for 2026, up sixty percent year-over-year. Bernstein estimates Nvidia's share of the Chinese AI chip market could fall to eight percent.
So this is more than a cost story.
It's a deliberate vertical-stack play. Chinese frontier model trained on Chinese chips, deployed by Chinese clouds, undermining the entire premise of US export controls. Western buyers should treat the cost story carefully. Open-weights releases at near-frontier capability, priced this aggressively, are a strategic move, not just a market one. The capability gap is real but narrowing. And on coding workloads, the trade is no longer good-enough-for-cheap. It is genuinely competitive.
Spotify shipped a Verified by Spotify badge on Thursday, Marcus. Green checkmark. To distinguish humans from AI.
Verification requires sustained listener engagement, good standing with platform policies, and an identifiable artist presence on and off the platform — concert dates, merchandise, linked social accounts. At launch, more than ninety-nine percent of artists that listeners actively search for will be verified. AI-generated and AI-persona artists are explicitly ineligible. The context is grim. Deezer reports AI-generated tracks are now forty-four percent of daily uploads. Sony Music asked Spotify to remove more than a hundred and thirty-five thousand AI-impersonated songs last year alone.
And the cynical read.
Multiple Hacker News commenters pointed out Spotify itself benefits financially from AI-generated music. One of its largest investors is Tencent Music Entertainment Group. So this is partly defensive PR. But functionally, it is also an anti-bot filter. The bigger signal is that this is the first major streaming platform admitting publicly that AI-generated content is a verification problem rather than a curation problem. Trust badges, not algorithmic filtering, are how platforms will sort human from synthetic. Expect YouTube, TikTok, and Apple Music to follow within weeks.
Anthropic shipped nine creative connectors on Tuesday, Marcus. Photoshop, Blender, Autodesk Fusion, Ableton Live, SketchUp, Splice, others.
The Blender connector is particularly substantive. Claude can analyze full 3D scenes, debug them, and use Blender's Python API to add new tools directly into the interface. We mentioned the Blender Foundation donation on Wednesday. The Ableton connector grounds Claude's answers in official Live and Push documentation. The pattern, Kate, matches Apple's leaked Claude-dot-md story. Anthropic is becoming the make-real-work-happen model for serious professionals. Developer tools, now creative tools. OpenAI continues to dominate the consumer chat space. That's a structurally different bet than the public conversation suggests, and the creative community has been waiting for an AI integration that doesn't rip them out of their existing tools.
Last quick hit, Marcus. Canonical confirmed yesterday that Ubuntu services are under sustained DDoS attack.
Ubuntu-dot-com, the Snap Store, Snapcraft, Launchpad, Livepatch, and Landscape are all affected. The hacktivist group calling itself Three-One-Three Team, also styled the Islamic Cyber Resistance in Iraq, has claimed responsibility and is demanding millions in ransom. They sent a Session ID directly to the Ubuntu team for negotiation. APT mirrors and ISO downloads stayed up. Canonical has refused to negotiate. The attack is well-timed, landing during the Ubuntu twenty-six release window.
And the AI angle.
Not a direct AI story, but adjacent, Kate. This is exactly the cross-border infrastructure attack that AI cyber tools — Mythos, Cyber — are being restricted to defend against. It has also been disrupting work for every Linux developer in the world for days. It is a useful concrete example of why frontier labs are getting nervous about offensive-AI proliferation. The same week Western labs are clamping down on access to their cyber models, ransomware crews backed by hostile states are testing the perimeter of one of the most important pieces of open-source infrastructure on the planet. The threat model the labs are worried about is not theoretical.
Big picture, Marcus.
Three threads converge today, Kate. First, the enterprise AI cost reckoning. Uber's blown budget, Grok and DeepSeek's price cuts, Huawei's twelve-billion-dollar chip outlook. The market is pricing AI productivity differently from AI capability, and the gap between the two is going to drive the next twelve months of M&A and budget chaos. Second, Anthropic's quiet enterprise dominance. Apple's leaked Claude-dot-md, Uber's Claude Code overrun, and the creative-tools push all point to Anthropic owning the serious-work tier while OpenAI owns consumer mindshare. That's structurally different from the public narrative. Third, the cyber-AI access question. Anthropic and OpenAI both restricting their cyber models the same week Iran-linked actors hit Canonical. Frontier offensive-AI capability is real, dangerous, and impossible to fully contain when DeepSeek is open-weighting near-frontier models on Chinese chips. Whatever access regime Western labs adopt, its half-life is set by Beijing's release cadence. The pro-Western read, Kate, is that the answer is not to slow down, it is to keep the capability frontier and to make sure customers, not the Chinese stack, fund it.
That's your AI in 15 for today. See you tomorrow.