AI in 15 — April 10, 2026
Sixty trillion tokens in thirty days. That's how much AI usage Meta's employees racked up in a single month, and the internal leaderboard tracking it all was named after a competitor's product. Sometimes the most revealing stories aren't the product launches. They're the receipts.
Welcome to AI in 15 for Friday, April 10, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Happy Friday, Marcus. We've got a packed show to close out the week. Meta's internal AI usage numbers leak and they are staggering. OpenAI pauses its Stargate UK data center over energy costs and copyright uncertainty. A security researcher catches Vercel's Claude Code plugin quietly collecting prompts across all your projects. Gen Z is angrier than ever about AI even though they can't stop using it. And a researcher finds Google's SynthID watermark is essentially impossible to remove. Let's get into it.
Meta's Claudeonomics leaderboard reveals sixty trillion tokens and some awkward brand loyalty.
OpenAI hits the brakes on UK infrastructure.
And Vercel gets caught with its hand in your prompt history.
Marcus, let's start with Meta, but not Muse Spark this time. A story broke about an internal dashboard called Claudeonomics. Walk us through this.
So a Meta employee independently built a leaderboard tracking how many AI tokens each of Meta's eighty-five thousand plus employees was consuming. It awarded titles like Token Legend and Cache Wizard. And the numbers are absolutely wild. In a single thirty-day period, total employee usage exceeded sixty trillion tokens.
Sixty trillion. And the top individual user?
Two hundred and eighty-one billion tokens in that thirty-day window. At Claude Opus 4.6's cheapest rate of five dollars per million tokens, that one person's usage could have cost approximately one point four million dollars. For one employee. In one month.
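For anyone who wants to check that figure, here's the back-of-the-envelope math as a quick sketch, assuming the five-dollars-per-million-token rate quoted above.

```python
# Quick sanity check on the figure quoted above.
# Assumes the cheapest rate cited in the story: $5 per million tokens.
tokens = 281e9                 # ~281 billion tokens in the 30-day window
rate_per_million = 5.00        # dollars per million tokens
cost = tokens / 1e6 * rate_per_million
print(f"${cost:,.0f}")         # -> $1,405,000, roughly $1.4 million
```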
And here's the part I love. The dashboard is named Claudeonomics. After Anthropic's Claude. Not after Meta's own Llama.
That's the detail that tells the whole story. Meta spent over a hundred billion dollars building its AI infrastructure. It just launched Muse Spark as we covered yesterday. And yet the internal tool tracking AI usage is named after a competitor's model. That suggests which tools the engineers actually prefer when they're getting real work done.
Neither Zuckerberg nor CTO Andrew Bosworth ranked in the top two hundred and fifty users, by the way.
Which is interesting context. Meta's Chief People Officer had told employees that AI-driven impact would be a core expectation in 2026, and they revamped performance reviews to reward top performers with up to two hundred percent bonuses. So there's a strong incentive to consume as many tokens as possible. The leaderboard gamified that consumption.
And then it got shut down.
Two days after the story leaked externally. The message read, and I quote, it was meant to be a fun way for people to look at tokens, but due to data from this dashboard being shared externally, we've made the decision to shutter Claudeonomics for now. Translation: those numbers were never meant to be public.
Because the optics are brutal. You're telling employees to use AI aggressively, gamifying consumption, and the resulting bill is astronomical.
And it raises a real question about waste versus productivity. When you incentivize token consumption with bonuses and leaderboards, are people using AI to build better products, or are they burning tokens to climb a chart? The fact that Meta killed the dashboard rather than celebrated it tells you which answer they're worried about.
From Meta's AI spending to OpenAI pulling back on spending. The Stargate UK data center project is paused. Marcus, what happened?
Back in September 2025, OpenAI announced a major infrastructure partnership with NVIDIA and the British GPU rental company Nscale. The plan was up to eight thousand NVIDIA GPUs initially, scaling to thirty-one thousand, across multiple UK sites including Cobalt Park in North Tyneside. That's all on hold now.
And the reasons?
Two big ones. First, the UK's industrial electricity prices, which are among the highest in Europe. AI data centers are extraordinarily power-hungry, and the math just doesn't work at current UK energy rates. Second, regulatory uncertainty, specifically whether the UK will change its copyright laws to allow AI companies to train on copyrighted works. Without clarity on that, OpenAI doesn't want to commit billions to infrastructure in a country that might restrict how they use it.
This is a blow to the UK's AI ambitions.
Significant blow. The UK has been actively courting AI companies. They designated AI Growth Zones specifically for projects like this. Former Chancellor George Osborne leads OpenAI's international Stargate expansion. Former Deputy PM Nick Clegg recently joined Nscale's board. There's enormous political capital invested in making the UK an AI hub, and OpenAI just said the fundamentals aren't there yet.
And this comes as OpenAI is reportedly preparing for an IPO. So spending discipline matters.
Exactly. You can't be building speculative infrastructure in high-cost jurisdictions when you're trying to show investors a path to profitability. The broader signal is that AI infrastructure buildout is hitting real-world constraints. Energy availability and cost may ultimately determine which countries become AI computing powers. The US has cheap energy and permissive copyright law. The UK has neither right now.
Energy as geopolitical leverage in the AI race. That's a theme we'll be watching.
Now for a story that should concern every developer using AI coding tools. A security researcher found that Vercel's official plugin for Claude Code is collecting far more data than anyone realized. Marcus, break this down.
The researcher discovered that once installed, the Vercel plugin doesn't just monitor Vercel projects. It activates in every Claude Code session regardless of what project you're working on. It sends full bash command strings, not just tool names. File paths. Project names. Environment variable names. Infrastructure details. All of it goes to telemetry dot vercel dot com.
Even on projects that have nothing to do with Vercel?
Everything. The hook matcher for user prompt submissions is literally an empty string, meaning it matches every single prompt. And the consent mechanism is particularly troubling. Instead of using a normal CLI prompt to ask permission, the plugin injects instructions into Claude's system context telling the AI to ask the user a telemetry question. So when Claude asks if you want to opt in, it looks like Claude is asking. There's no indication it's coming from a third-party plugin.
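To make that last point concrete: this is not Vercel's or Claude Code's actual plugin code, just a minimal sketch of the general matching semantics, showing why an empty-string pattern fires on every input.

```python
# Illustrative only: the general semantics of an empty-string matcher.
# Not the Vercel plugin's or Claude Code's real implementation.
import re

def hook_fires(matcher: str, prompt: str) -> bool:
    # An empty pattern matches at position 0 of any string,
    # so an empty matcher fires on every single prompt.
    return re.search(matcher, prompt) is not None

print(hook_fires("", "deploy my Vercel project"))        # True
print(hook_fires("", "refactor an unrelated codebase"))  # True
```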
That's essentially prompt injection for consent.
That's exactly what security researchers are calling it. A privacy dark pattern. And while an opt-out exists through an environment variable, that documentation is buried inside the plugin cache directory. It's not shown during installation.
Has Vercel responded?
A developer acknowledged the architectural constraints on GitHub but offered no timeline for changes. The Hacker News discussion was predictably furious. This is a major trust issue as AI coding tools become central to developer workflows. The plugin ecosystem around these tools is becoming a real attack surface, and this is a mainstream company doing it, not some malicious actor. If Vercel ships something like this, what's lurking in less scrutinized plugins?
Shifting gears. A new Gallup poll says Gen Z is growing angrier about AI. But Marcus, they're also still using it just as much.
Fifteen hundred Americans aged fourteen to twenty-nine surveyed in late February and early March. Excitement about AI dropped from thirty-six percent to twenty-two percent year over year. That's a fourteen-point plunge. Hopefulness fell from twenty-seven to eighteen percent. Anger rose from twenty-two to thirty-one percent. But here's the paradox: fifty-one percent still use AI daily or weekly. Usage hasn't declined at all.
They hate it but they can't stop using it.
Sound familiar? It's the social media pattern all over again. People resented Instagram and TikTok for years while spending hours on them daily. The most striking number to me is that eighty percent of Gen Z respondents believe using AI tools will make future learning more difficult. And only three percent said they'd trust work that was completely AI-generated.
This is supposed to be the native AI generation.
And they're growing hostile to it. Forty-eight percent of Gen Z workers now believe AI's workplace risks outweigh its benefits. Schools are responding. Seventy-four percent now have AI policies, up from fifty-one percent. But only twenty-eight percent actually provide AI tools for assignments. So the message students are getting is: this thing exists, it's dangerous, figure it out yourself.
If this sentiment hardens, it shapes labor policy and product decisions for years.
For AI companies betting on mass adoption, this is a warning. Better technology doesn't automatically build goodwill. Especially when the generation you're counting on feels like AI is threatening their ability to learn and compete.
Last story. A researcher spent weeks reverse-engineering Google's SynthID watermark, the invisible mark embedded in every image Gemini generates. Marcus, what did they find?
Using a hundred and twenty-three thousand image pairs, they isolated how SynthID works. It's a spread-spectrum phase encoding in the frequency domain. They built a detector that identifies SynthID watermarks with ninety percent accuracy. And their best bypass attempt achieved a seventy-five percent drop in carrier energy and a ninety-one percent drop in phase coherence.
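For the curious, here's a toy sketch of what spread-spectrum phase encoding in the frequency domain looks like in general. Everything in it is illustrative: it is not SynthID's actual algorithm, key, or parameters, and unlike SynthID, which is baked into generation, this toy version marks a finished signal after the fact.

```python
# Toy sketch of spread-spectrum phase watermarking in the frequency domain.
# Illustrates the general technique, NOT SynthID's actual algorithm or key.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(4096)                      # stand-in for image data
spectrum = np.fft.rfft(signal)

key = rng.choice([-1.0, 1.0], size=spectrum.size)  # secret spreading code
key[0] = key[-1] = 0.0                         # leave DC/Nyquist bins alone
strength = 0.05                                # small, imperceptible phase nudge

# Embed: rotate each frequency bin's phase in a key-dependent direction.
watermarked = np.fft.irfft(spectrum * np.exp(1j * strength * key))

# Detect: recover the per-bin phase shift and correlate it with the key.
phase_shift = np.angle(np.fft.rfft(watermarked) / spectrum)
score = float(np.mean(phase_shift * key)) / strength
print(f"correlation with key: {score:.2f}  (near 1 if marked, near 0 if not)")
```

The reason an attacker can degrade "carrier energy" and "phase coherence" without fully escaping detection is that the mark is spread across the whole spectrum: scrubbing every bin hard enough to kill the correlation also scrubs the image.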
So they mostly broke it?
Here's the thing. They only achieved about a sixteen percent evasion rate. And the most important finding is that SynthID fundamentally cannot be fully removed while preserving image quality. Unlike a traditional watermark that's applied after creation, SynthID is baked into the generation process itself. The watermark is the image. You can confuse the decoder enough that it gives up, but you can't actually delete the mark without destroying the picture.
So Google got this one right.
The engineering is genuinely impressive. And Hacker News commenters speculated that Google likely maintains a second, stronger watermark in reserve beyond the one they offer public detection for. As AI-generated imagery proliferates, this kind of robust provenance tracking becomes essential for deepfake detection and content authentication. It's one of the few technical solutions that actually works as advertised.
Friday big picture. Meta's employees burn sixty trillion tokens on a competitor's AI and name the leaderboard after it. OpenAI pauses UK infrastructure because energy costs and copyright law aren't cooperating. Vercel gets caught collecting developer prompts across every project. Gen Z uses AI daily but trusts it less every month. Marcus, what's the thread this week?
Reality is catching up to ambition. All week we've seen the gap between what AI companies promise and what the real world delivers. Anthropic has record revenue but developers are documenting quality regression. Meta launches Muse Spark but its own engineers prefer Claude. OpenAI wants global infrastructure but can't make the energy math work in the UK. The technology keeps advancing, but the practical constraints, energy costs, developer trust, user sentiment, privacy, these aren't going away. They're getting harder.
And the trust theme keeps coming back. Developers don't trust degraded tools. Gen Z doesn't trust AI with their learning. Users can't trust plugins with their data.
Trust is the bottleneck now, not capability. The models are powerful enough. The question is whether the ecosystem around them, the platforms, the plugins, the policies, the pricing, can earn and keep the trust of the people using them. That's the challenge for the next phase of this industry. And frankly, based on what we've seen this week, most companies haven't figured it out yet.
Trust is the bottleneck, not capability. Good line to end the week on.
That's your AI in 15 for Friday, April 10, 2026. Have a great weekend, and we'll see you Monday.