AI in 15 — April 09, 2026
Fourteen billion dollars. That's what Meta reportedly paid to poach one person. And today, that person's first creation just went live. It's called Muse Spark, and in a move nobody saw coming, Meta's new AI model is closed source.
Welcome to AI in 15 for Thursday, April 9, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, huge day. Meta unveils Muse Spark, the first model from its new Superintelligence Labs, and abandons the open-source playbook that defined Llama. Anthropic launches Claude Managed Agents in public beta, making its biggest platform play yet. Apple's App Store is drowning in an eighty-four percent surge of new apps thanks to vibe coding. A research team figures out how to train a hundred-billion-parameter model on a single GPU for thirty-five thousand dollars. And Samsung posts record profits with an eightfold increase driven entirely by AI memory chips. Let's go.
Meta goes closed-source with Muse Spark and bets everything on Alexandr Wang.
Anthropic wants to run your AI agents for you.
And Apple can't review apps fast enough because everyone's vibe coding.
Okay Marcus, let's start with Meta. Muse Spark launched yesterday, the first model out of Meta Superintelligence Labs. Give us the backstory.
So Meta created this elite research division called MSL, led by Alexandr Wang, the former CEO of Scale AI. The reported price tag to bring him on as Chief AI Officer was fourteen billion dollars. His team essentially rebuilt Meta's entire AI stack from scratch over nine months, and Muse Spark is the first result.
And what can it actually do?
Meta is positioning it as intentionally small and fast but capable of deep reasoning across science, math, and health. It has strong multimodal perception, so it can understand images, not just text. And it supports multi-agent coordination where subagents work in parallel on complex tasks. Right now it powers the Meta AI app and the meta.ai website, with rollout coming to WhatsApp, Instagram, Facebook, Messenger, and Meta's AI glasses in the coming weeks.
But the headline here isn't what it does. It's how it's being released.
Exactly. Muse Spark is a closed model. This is a complete reversal from Meta's strategy with the Llama series, which was their whole identity in the AI race. Open source was what differentiated Meta from OpenAI and Anthropic. Now they're going head-to-head as a closed-model competitor.
They do say future open-source versions are planned.
They say that. But the first model from the new flagship division being closed tells you where the priorities are. A private API preview is available for select partners only. This is Meta saying we're done being the open-source insurgent, we want to compete directly at the frontier with Anthropic, OpenAI, and Google.
So how good is it actually? Because after the Llama 4 embarrassment, trust is low.
That's the critical question, and the answer is genuinely unclear. The Hacker News community is divided. Some commenters who've run internal benchmarks say they're very unimpressed. Simon Willison noted it still fails his standard pelican test, his informal benchmark of asking a model to draw an SVG of a pelican riding a bicycle. On the other hand, some are saying if it matches or beats Claude Opus 4.6, that's a real achievement given the nine-month timeline.
Meta has between a hundred and fifteen and a hundred and thirty-five billion in AI capex planned for this year alone. That's a staggering bet.
It's the most anyone is spending. And the Muse name is deliberate. Meta describes this as a scientific approach to model scaling where each generation validates the last before going bigger. Spark is meant to be the foundation, not the destination. If the approach works, bigger Muse models follow. If it doesn't, that's a very expensive lesson.
Nine months, fourteen billion for Wang, and over a hundred billion in capex. This is either Meta's redemption arc or its most expensive mistake.
And we should know which one pretty quickly. The developer community isn't going to wait for Meta's marketing cycle. Independent benchmarks will tell the real story within weeks.
Now to Anthropic. We covered the thirty billion revenue number and the compute deal on Tuesday. But yesterday they made a completely different kind of move. Claude Managed Agents launched in public beta. Marcus, what is this?
Think of it as Anthropic saying we don't just want to sell you the model, we want to run your agents for you. It's a suite of composable APIs for building and deploying cloud-hosted AI agents at scale. They handle the hard infrastructure stuff. Secure sandboxing, authentication, tool orchestration, context management, error recovery. You just write the agent logic.
So they're moving up the stack from model provider to platform.
That's exactly the play. Long-running autonomous sessions that persist through disconnections. Multi-agent coordination in research preview. Scoped permissions so your agent can only access what you allow. And end-to-end execution tracing through the Claude Console so you can see exactly what your agent did and why.
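For listeners who want to picture what a harness like this actually does, here's a minimal sketch in Python. All names are hypothetical illustrations of the pattern, not Anthropic's actual API: a platform like this owns the loop below, so developers only supply the agent logic and tool implementations.

```python
# Minimal sketch of what a managed agent harness handles (hypothetical
# names; not Anthropic's actual API): tool orchestration, scoped
# permissions, error recovery, and execution tracing.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    tools: dict[str, Callable[[str], str]]           # tool orchestration
    allowed: set[str]                                # scoped permissions
    trace: list[dict] = field(default_factory=list)  # execution tracing

    def call(self, tool: str, arg: str) -> str:
        if tool not in self.allowed:                 # deny by default
            self.trace.append({"tool": tool, "status": "denied"})
            return f"error: {tool} not permitted"
        try:
            result = self.tools[tool](arg)
            self.trace.append({"tool": tool, "status": "ok"})
            return result
        except Exception as exc:                     # error recovery
            self.trace.append({"tool": tool, "status": "error"})
            return f"error: {exc}"

# Usage: this agent may only search, not write files.
h = Harness(
    tools={"search": lambda q: f"results for {q}",
           "write_file": lambda p: f"wrote {p}"},
    allowed={"search"},
)
print(h.call("search", "HBM4 pricing"))     # allowed, runs, traced
print(h.call("write_file", "/etc/passwd"))  # denied, and traced
```

The point of the sketch is the division of labor: everything inside `Harness` is what Anthropic is offering to run for you, and the trace list is the toy version of what the Claude Console surfaces end to end.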
Does it actually work better than rolling your own?
Internal testing showed the managed harness improved task success by up to ten points over standard prompting loops, with the biggest gains on the hardest problems. And the early adopter list is impressive. Notion is using it for code shipping and content agents. Rakuten for enterprise agents across product, sales, marketing, and finance. Asana for their AI Teammates feature. Sentry for debugging agents that can also write patches.
The Hacker News crowd was skeptical though.
Very. Top concerns were vendor lock-in to a single model provider, Anthropic's reliability given the outages we've been covering all week, and the prediction that open-source alternatives will surpass it. One commenter said, and I love this, it's all good until your production agent deployment has a single nine uptime.
Ouch. But strategically this makes a lot of sense for Anthropic.
It's their biggest enterprise platform bet yet. They're competing directly with LangChain, CrewAI, and every emerging agent framework. The calculus is clear. Lock in enterprises at the infrastructure layer, not just the model layer. If your agents run on Anthropic's platform, switching to a competitor becomes enormously expensive. And with a potential IPO on the horizon, platform revenue is worth a lot more than token revenue.
Now for a story that perfectly captures this moment in tech. Apple's App Store saw an eighty-four percent surge in new app submissions in Q1, reaching about two hundred and thirty-five thousand new apps. Marcus, that reverses a trend that saw submissions falling for nearly a decade.
Falling forty-six percent between 2016 and 2024. And then AI coding tools arrive and the curve snaps upward almost overnight. Sensor Tower analyst Abraham Yousef confirmed it directly. The growth aligns with the broader release of agentic coding tools. This is vibe coding at scale.
And Apple is struggling to handle it.
Apple claims ninety percent of submissions are still processed within forty-eight hours, but developers are reporting review delays of seven to thirty-plus days in March. Elon Musk publicly complained about it on X. And here's where it gets really interesting. Apple pulled or blocked updates to three top vibe-coding apps, Replit, Vibecode, and Anything, citing violations around apps that generate interpreted code capable of altering their own primary function.
So Apple is cracking down on the very tools causing the surge.
While simultaneously updating Xcode to support coding models. They want vibe coding to happen through their tools, on their terms. It's the classic Apple playbook. If you can't beat it, control it.
The human stories here are wonderful though.
The Hacker News thread had some gems. One user's retired father, a former software engineer, finally built an app he'd been dreaming about for eight years using AI coding tools. Another described a friend with zero coding experience who made an app that, and I'm quoting, tells you who died today and who you've managed to outlive.
I want that app. That's dark but I want it.
The democratization is real. People who could never write code before are now shipping products. Whether those products are any good is a separate question, and it's one Apple's review team is now drowning in.
Technical story now. A new paper called MegaTrain demonstrates training a hundred-billion-plus parameter model on a single GPU for about thirty-five thousand dollars. Marcus, walk us through this.
The core insight is beautifully simple. Instead of treating the GPU as the center of the universe, MegaTrain stores everything, parameters and optimizer states, in host memory, regular CPU RAM. The GPU becomes what they call a transient compute engine. For each layer, you stream parameters in, compute gradients, stream them out. A double-buffered pipeline keeps the GPU busy while data moves back and forth.
And the hardware requirements?
One NVIDIA H200 GPU at about thirty thousand dollars plus one and a half terabytes of DDR5 RAM at three to five thousand. Total cost around thirty-five thousand compared to eighty to two hundred thousand for traditional multi-GPU cluster setups. On that single H200, they reliably trained models up to a hundred and twenty billion parameters and achieved nearly double the throughput of DeepSpeed ZeRO-3 with CPU offloading.
So a PhD student could theoretically train a frontier-scale model.
That's the promise. Independent researchers, startups, universities that were priced out of multi-GPU clusters could train large models for a fraction of the cost. The paper and code are both public. Now, HN commenters noted that an H200 plus one and a half terabytes of RAM isn't exactly a laptop. But compared to renting a cluster of sixty-four GPUs, it's transformational.
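The double-buffered pipeline Marcus describes can be sketched in a few lines. This is an illustrative simulation, not the MegaTrain code: `upload` stands in for the host-to-GPU parameter copy, `compute` for the per-layer GPU work, and a background thread stages layer i+1 into the spare buffer slot while layer i runs.

```python
# Illustrative sketch of double-buffered layer streaming (not the
# MegaTrain implementation). All parameters live in host memory; a
# background thread prefetches the next layer while the current
# layer's compute runs, keeping the "GPU" busy.

import threading

def stream_layers(layers, upload, compute):
    """upload(layer) -> staged buffer; compute(buffer) -> output."""
    buffers = [None, None]              # two staging slots
    buffers[0] = upload(layers[0])      # prime the pipeline
    outputs = []
    for i in range(len(layers)):
        prefetcher = None
        if i + 1 < len(layers):
            # Stage layer i+1 into the spare slot while layer i computes.
            def prefetch(slot=(i + 1) % 2, layer=layers[i + 1]):
                buffers[slot] = upload(layer)
            prefetcher = threading.Thread(target=prefetch)
            prefetcher.start()
        outputs.append(compute(buffers[i % 2]))  # "GPU" work for layer i
        if prefetcher:
            prefetcher.join()                    # next layer is staged
    return outputs

# Toy usage: "uploading" doubles a value, "computing" adds one.
print(stream_layers([1, 2, 3],
                    upload=lambda x: 2 * x,
                    compute=lambda b: b + 1))    # → [3, 5, 7]
```

Because the prefetch writes the spare slot while compute reads the active one, the two never touch the same buffer, which is the whole trick: copy and compute overlap instead of alternating.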
Samsung reported record Q1 results. And when I say record, I mean eightfold profit increase. Marcus, the AI chip boom is real.
Revenue of about a hundred billion dollars. Operating profit of roughly forty-three billion. That's an eight X increase over Q1 2025. And ninety-five percent of those profits came from one division, semiconductor chips. Specifically high-bandwidth memory chips used in AI data centers. Samsung is one of only three companies in the world that can make advanced HBM chips, alongside SK Hynix and Micron.
And prices are still climbing.
Memory chip prices doubled this quarter and are expected to rise another fifty percent in Q2. Samsung's new HBM4 chips are being sought by both AMD and NVIDIA for their AI accelerators. When a single quarter's profit exceeds the full-year earnings of 2025, that's not a trend. That's a structural transformation of the semiconductor industry.
Which feeds directly into the cost pressures every AI company is facing.
Exactly. As we reported Tuesday, Anthropic is spending billions on compute. Those costs are going up, not down. The companies building AI and the companies supplying the infrastructure are in very different economic positions right now.
Thursday big picture. Meta abandons open source and goes closed with Muse Spark. Meanwhile, as we covered Monday, Google's Gemma 4 ships under Apache 2.0, fully permissive, running on phones and Raspberry Pis. Anthropic launches a managed platform to lock in enterprises. Marcus, what's happening to the AI landscape?
It's fragmenting into competing strategies, and the irony is thick. Meta, the company that built its AI identity on open source, just went closed. Google, the company everyone accused of hoarding AI behind APIs, is now the most aggressive open-source player. Anthropic is moving from selling tokens to selling infrastructure. And underneath all of it, Samsung is printing money selling the memory chips everyone needs.
So everyone's switching lanes.
Because the first round of strategies didn't work the way anyone expected. Meta's open-source play with Llama 4 blew up in their face. Google realized giving away small models builds an ecosystem that feeds demand for their cloud products. Anthropic figured out that token revenue has a ceiling but platform revenue compounds. The AI industry is barely three years old and it's already on its second strategic cycle. The companies that adapt fastest win. The ones still running last year's playbook are already falling behind.
Second strategic cycle. That's a good way to put it. Everyone's placing new bets.
And the stakes have never been higher. Up to a hundred and thirty-five billion in capex from Meta alone. Thirty billion in revenue at Anthropic. Eight X profit jumps at Samsung. These aren't experiments anymore. These are irreversible commitments. Whoever reads this moment wrong doesn't get a third chance.
That's your AI in 15 for Thursday, April 9, 2026. See you tomorrow.