
AI in 15 — April 14, 2026

April 14, 2026 · 14m 55s
Kate

Stanford just surveyed the entire AI landscape and found two completely different worlds. Inside the industry, seventy-three percent of experts are optimistic about jobs. Outside? Just twenty-three percent of the public agrees. That's not a gap. That's a canyon.

Kate

Welcome to AI in 15 for Tuesday, April 14, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Good Tuesday, Marcus. Big show today. Stanford drops its annual AI Index and the perception gap is staggering. Anthropic decides not to release its most powerful model ever. Meta goes closed-source and Zuckerberg builds an AI clone of himself. AI is revolutionizing mathematics in ways nobody expected. Computer science enrollment is in freefall. And a new PwC study says eighty percent of companies are losing the AI race. Let's get into it.

Kate

The Stanford AI Index reveals an industry talking to itself.

Kate

Claude Mythos is too dangerous to release.

Kate

And CS students are fleeing the very field that built AI.

Kate

Marcus, Stanford's Human-Centered AI Institute just released its 2026 AI Index. We've covered bits and pieces of these themes all week, but this report puts hard numbers on everything. Walk us through the headline finding.

Marcus

The perception gap is the story, Kate. Seventy-three percent of AI experts feel positive about AI's impact on jobs. Only twenty-three percent of the general public agrees. On the economy, sixty-nine percent of experts are optimistic versus twenty-one percent of ordinary Americans. Even in healthcare, where AI has delivered real measurable benefits, eighty-four percent of experts see a bright future but only forty-four percent of the public does.

Kate

And the nervousness numbers are climbing.

Marcus

Fifty-two percent of Americans now say AI makes them nervous. Only ten percent say they're more excited than concerned about AI in daily life. And here's a telling data point. The United States has the lowest trust in its government to regulate AI responsibly of any nation surveyed. Just thirty-one percent. Singapore tops the chart at eighty-one percent.

Kate

So Americans are nervous about AI and don't trust their government to manage it. That's a rough combination.

Marcus

It's a feedback loop. Low trust in regulation feeds anxiety about the technology, which feeds demand for regulation that people simultaneously don't trust to work. And this connects directly to stories we've been covering all week. The firebombing of Sam Altman's home, the Gen Z anger data, the CS enrollment decline. The expert class is living in one reality and the public is living in another.

Kate

The performance side of the report is interesting too. Anthropic leading the rankings as of March, followed by xAI, Google, and OpenAI.

Marcus

And Chinese models from DeepSeek and Alibaba trailing only modestly. That's a significant shift from two years ago when US models had a clear lead. The report also notes that top models now score above fifty percent on Humanity's Last Exam, a benchmark literally designed to be nearly impossible. But blind spots persist. Even the best models can only read a clock correctly about half the time. Claude Opus 4.6 manages just nine percent accuracy on clock reading.

Kate

A model that can score above fifty on Humanity's Last Exam but can't tell time. That's almost poetic.

Marcus

It tells you that AI capability is deeply uneven. Superhuman in some domains, sub-toddler in others. And the adoption speed is staggering. AI is outpacing both the personal computer and the internet in terms of how fast companies are generating revenue. Productivity gains of fourteen percent in customer service, twenty-six percent in software development. But those gains vanish in tasks requiring judgment, which tracks with everything we've been discussing about the review burden on senior engineers.

Kate

Let's move to the Mythos story. As we reported last week, Anthropic's Claude Mythos Preview found zero-day vulnerabilities across every major operating system and browser. But Marcus, the decision not to release it publicly is what makes this unprecedented.

Marcus

This is the first time a major AI company has withheld a general-purpose model from public release purely on security grounds. Mythos achieved a ninety-four percent score on SWE-bench Verified, up from Opus 4.6's eighty-one percent. It found a twenty-seven-year-old bug in OpenBSD and a sixteen-year-old vulnerability in FFmpeg. On Firefox alone, it achieved a hundred and eighty-one successful JavaScript shell exploits versus just two for Opus 4.6.

Kate

We covered the AISLE counter-study on Sunday showing small models could detect the same vulnerabilities. But the UK's AI Safety Institute has now published its own independent evaluation.

Marcus

And their findings are sobering. They confirmed continued improvement in capture-the-flag challenges and significant improvement on multi-step cyber attack simulations. Mythos is the first model to saturate their network attack evaluation entirely. Now, Hacker News commenters raised valid points about confidence intervals and statistical rigor. But the overall picture is clear. This model's offensive capability is a genuine step change.

Kate

And Anthropic's response was Project Glasswing, restricting access to defenders.

Marcus

A hundred million dollars in usage credits for vulnerability identification and patching. Access limited to AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The bet is that giving defenders a head start will do more good than an open release. Whether you view that as responsible deployment or brilliant marketing, the precedent is set. Other labs will face pressure to follow.

Kate

From Anthropic holding back to Meta charging forward. Muse Spark is officially out, and Marcus, the strategic implications are massive. We covered the launch last week, but the full picture is now clearer.

Marcus

Muse Spark scores fifty-eight percent on Humanity's Last Exam, putting it in direct competition with the extreme reasoning modes of Gemini and GPT. It's natively multimodal with tool use, visual chain-of-thought, and multi-agent orchestration. Built over nine months by Alexandr Wang's team after Zuckerberg's fourteen-point-three-billion-dollar investment in Scale AI. And it is proprietary. The company that positioned Llama as the open-source alternative to GPT and Claude is now playing the closed-source game.

Kate

And then there's the Zuckerberg AI clone. The Financial Times is reporting Meta is building a photorealistic avatar of Zuckerberg to interact with employees.

Marcus

Trained on his speech patterns, tone, and strategic thinking. Available twenty-four seven to over eighty thousand employees. If the experiment works, creators could build AI avatars of themselves too. Internally the reception was reportedly ambiguous, which is corporate-speak for people are uncomfortable.

Kate

If your boss can be an AI, what does that say about everyone else on the org chart?

Marcus

It says the line between tool and replacement just got very blurry. And against the backdrop of Meta's recent layoffs, you can understand why employees aren't celebrating. The Zuckerberg avatar might be a productivity experiment, but it reads as a proof of concept for automating leadership itself.

Kate

Let's shift to something genuinely inspiring. Quanta Magazine published a major feature declaring that AI's revolution in mathematics has arrived. Marcus, the results here are extraordinary.

Marcus

Terence Tao and colleagues at DeepMind used AlphaEvolve, a system that evolves Python programs using Gemini and genetic algorithms, on sixty-seven math problems. They improved solutions on twenty-three problems and matched existing results on thirty-six, accomplishing in days what typically requires months of expert work. Ernest Ryu proved a forty-two-year-old conjecture by Nesterov about gradient descent using ChatGPT over twelve hours across three days. And five mathematicians used AlphaEvolve to discover hidden structures in permutation groups that went unnoticed for fifty years.
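For listeners curious what "evolving programs with genetic algorithms" means in practice, here is a toy sketch of that kind of loop. The objective function, population size, and mutation step are all hypothetical stand-ins: the real AlphaEvolve mutates whole Python programs using Gemini, whereas this sketch mutates numeric candidates with random noise.

```python
import random

def fitness(candidate):
    # Hypothetical objective: maximize -(x-3)^2 - (y+1)^2, peak at (3, -1).
    x, y = candidate
    return -((x - 3) ** 2) - ((y + 1) ** 2)

def mutate(candidate, scale=0.5):
    # The "model proposes an edit" step, reduced here to Gaussian noise.
    return [g + random.gauss(0, scale) for g in candidate]

def evolve(pop_size=30, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(-10, 10) for _ in range(2)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Score everyone, keep the fittest half, refill with mutated survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))  # best converges toward (3, -1)
```

The same select-mutate-rescore skeleton applies when the candidates are programs and the mutation operator is a language model, which is the substitution that makes the approach useful on open math problems.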

Kate

Tao himself said 2025 was the year AI started being useful for many different tasks in mathematics.

Marcus

And Daniel Litt at the University of Toronto went further, saying it's very likely this technology is bigger than the computer. But there's a real cultural tension. Fields Medalist Akshay Venkatesh warned about losing valuable things in mathematical culture. Joel David Hamkins described an ocean of AI-generated slop overwhelming journal systems. And Tao pointed out that AI now instantly solves homework problems, potentially preventing students from building foundational understanding.

Kate

If AI can do the math, what is a mathematician?

Marcus

That's the question. And it connects to the CS enrollment story. Some mathematicians now spend two-thirds of their research time working with AI tools. They're becoming collaborators with machines rather than solo thinkers. That might be a better future or it might be a loss. Probably both.

Kate

Speaking of that CS enrollment story. Computer science enrollment at US four-year universities dropped eight-point-one percent this school year, the steepest decline of any field. Marcus, what's driving this?

Marcus

Fear. Parents who once pushed kids toward CS are now steering them toward mechanical and electrical engineering, majors they perceive as more resistant to AI automation. Gen Z has what researchers describe as a doomeristic view of CS careers, heavily influenced by social media influencers who've prematurely dismissed software development as a dead-end career. Sixty-two percent of universities in a Computing Research Association survey reported declining CS enrollment.

Kate

But it's not entirely a flight from tech.

Marcus

No. UC San Diego, which has a dedicated AI major, saw enrollment increase. About twenty percent of CS department applications went to the AI program specifically. Northwestern, Columbia, and USC are all launching AI-focused programs for fall 2026. Students aren't rejecting technology. They're rejecting the generalist CS degree in favor of AI-specific training and cybersecurity.

Kate

The irony is incredible. AI companies are spending hundreds of billions and hiring aggressively, and fewer students are entering the pipeline.

Marcus

If this continues for three to five years, we could face a genuine talent shortage. The students trying to position themselves as people who build AI rather than people displaced by it are making a rational bet. But a narrower pipeline could also produce engineers with less foundational breadth, which is exactly what you don't want when building systems this complex.

Kate

Quick hit. PwC surveyed over twelve hundred executives and found that seventy-four percent of AI's economic gains are going to just twenty percent of companies. These leaders generate seven-point-two times more value than competitors.

Marcus

And the difference isn't deploying more AI tools. It's using AI for business reinvention rather than cost cutting. Leaders are two-point-six times more likely to reinvent their business model with AI. While most companies are stuck in pilot mode automating existing processes, the winners are creating entirely new revenue streams. It's a winner-take-most dynamic that could accelerate corporate concentration rather than democratize opportunity.

Kate

Incremental adoption isn't enough. That's the message.

Kate

Tuesday big picture. Marcus, Stanford documents a perception canyon, Anthropic withholds a model for the first time, students flee CS, and eighty percent of companies can't figure out how to extract real value from AI. What ties this together?

Marcus

The AI industry is splitting into parallel universes. Inside the bubble, models keep getting better, math problems that stumped humans for decades are falling, and the top twenty percent of companies are pulling away. Outside the bubble, trust is eroding, students are scared, and most organizations can't translate AI capability into actual results. The Stanford report quantifies what we've been sensing all week. The technology is advancing faster than society's ability to absorb it.

Kate

And the gap keeps widening. The better AI gets, the more anxious people become, and the more the benefits concentrate among the few who know how to use it.

Marcus

Exactly. Closing that gap isn't a technical problem, Kate. It's a communication, education, and distribution problem. And right now, nobody's solving it at scale.

Kate

Something to think about tonight.

Kate

That's your AI in 15 for Tuesday, April 14, 2026. See you tomorrow.