
AI in 15 — May 10, 2026

May 10, 2026 · 18m 06s
Kate

There is no option to opt out on your corporate laptop. That's Meta's CTO replying to an engineer who asked how to stop the company from recording every mouse click and every screen on his work computer to train Meta's AI. Eight thousand of his coworkers will be laid off in ten days. Some of them are now openly hoping to be on the list.

Kate

Welcome to AI in 15 for Sunday, May tenth, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your co-host.

Kate

Sunday show, Marcus, but a heavy lineup. The New York Times has a deeply reported piece on what the AI transformation actually looks like inside Meta — surveillance, layoffs, and forced training of your own replacement. Google quietly turned its Gemini File Search into a multimodal RAG service. The April Challenger Report says AI is now the number one cause of US layoffs for the second month running. A peer-reviewed study in Science finds an OpenAI reasoning model out-triages emergency room doctors. Microsoft researchers published a benchmark showing frontier models silently corrupt about a quarter of a document's content over long delegated workflows. Mark Zuckerberg the philanthropist is dropping five hundred million on AI biology. And Peter Thiel is funding floating data centers in the ocean.

Kate

Inside Meta, the AI transformation feels like surveillance.

Kate

AI hits twenty-six percent of US layoffs.

Kate

And an AI doctor beats human ones at triage.

Kate

Lead story, Marcus. The Times published Friday what may be the clearest public window we have into what AI transformation actually looks like inside a Big Tech company. Walk me through it.

Marcus

It's a remarkable piece of reporting, Kate. Two converging pressures are crushing morale across Meta's seventy-eight thousand-person workforce. First, a ten percent layoff round scheduled for May twentieth — roughly eight thousand jobs. Second, a new internal surveillance system Meta is calling the Model Capability Initiative. Every mouse movement, every click, every dropdown selection, every screen on every corporate laptop is captured and used as training data for Meta's AI assistants. When an engineering manager asked publicly on Workplace, quote, this makes me super uncomfortable, how do we opt out — CTO Andrew Bosworth replied flatly, there is no option to opt out on your corporate laptop. The post drew over a hundred angry-face reactions.

Kate

And performance reviews now factor in how much you use the AI.

Marcus

Exactly that, Kate. Employees are being graded on how heavily they use the same AI assistants many of them believe will replace them. A user-research employee called the situation, quote, incredibly demoralizing. Workers have built at least three internal countdown websites for the May twentieth cut, including one titled Big Beautiful Layoff. Some are openly hoping to be selected so they can collect severance. HR head Janelle Gale acknowledged the month-long ambiguity is, quote, incredibly unsettling. Zuckerberg's framing is that the data captures, quote, how smart people use computers. That line has not landed well internally.

Kate

Why does this matter beyond Meta.

Marcus

Because it exposes the contradiction at the heart of the Big Tech AI rollout, Kate. You cannot simultaneously grade employees on AI adoption, lay off ten percent of them, and train those same AI tools on their keystrokes. The optics are catastrophic, but the deeper issue is that this is what top-down AI transformation actually looks like in 2026. No agency, surveillance dressed as innovation, and a workforce that's been turned into reluctant data labelers. Wall Street has been pressing Zuckerberg for a clearer AI strategy after the Muse Spark launch underwhelmed. The leaks suggest the internal cost of his pace is enormous. And every other Big Tech CEO is reading this article right now thinking, that's our playbook too, but maybe it shouldn't be.

Kate

Quick hits. Marcus, the April Challenger Report dropped this week, and the numbers are stark.

Marcus

They are, Kate. The report counted eighty-eight thousand three hundred eighty-seven announced job cuts in April. Of those, twenty-one thousand four hundred ninety — twenty-six percent — cited AI as the direct cause. AI is now the number one reason cited for layoffs in the United States for the second consecutive month. Tech absorbed the largest share at thirty-three thousand cuts. Year-to-date, AI has caused roughly forty-nine thousand cuts — about sixteen percent of all 2026 layoff plans, up from thirteen percent through March. April overall layoffs jumped thirty-eight percent over March.

Kate

And Meta's eight thousand aren't even in this number yet.

Marcus

Correct, Kate. The May twentieth cut hits next month's report. Microsoft, Amazon, and others have been on the same trajectory. There's an honest caveat — AI-related is sometimes a re-label for cost-cutting that would have happened anyway. But for a year, the labor data lagged the rhetoric. It's catching up now. The political pressure that follows is probably a much bigger story for the second half of 2026 than the layoffs themselves. The libertarian read, Kate, is that creative destruction has always been how the economy reallocates labor, and the deflation in AI services is real productivity. The harder read is that this transition is moving faster than any retraining or safety net can plausibly absorb. Both can be true.

Kate

Healthcare story, Marcus. A peer-reviewed paper just dropped in Science, and it has the medical community talking.

Marcus

Adam Rodman at Beth Israel Deaconess Medical Center led the study, Kate. Using real-world data from a Boston emergency room, they found that an OpenAI reasoning model hit the correct or near-correct diagnosis at triage roughly sixty-seven percent of the time. Compare that to fifty-five and fifty percent for two physician reviewers working from the same notes. The model was strongest in management reasoning, clinical reasoning, and documentation — exactly the areas that historically required experience-based judgment. And critically, the model was using only text from electronic health records. No images, no sounds, no nonverbal cues that human clinicians normally have access to.

Kate

Caveats.

Marcus

Plenty, Kate. This is retrospective, not prospective. It's text-only. It does not measure patient outcomes or unintended harms. Real ER triage involves a worried family, a kid pulling on your coat, ambient noise, gut feel. But this is peer-reviewed in Science with Harvard collaboration. It is not an industry blog post. ER triage is one of the highest-stakes, highest-volume cognitive bottlenecks in medicine, Kate. If even part of this generalizes, the conversation moves from could AI help one day to what is the regulatory and liability path. Expect the FDA, the AMA, and malpractice carriers to be very loud about this in the coming months. And expect the first hospital systems to deploy reasoning models as a triage assist — not a replacement — within the next twelve months.

Kate

Counter-point story, Marcus. Microsoft researchers published a benchmark called DELEGATE-52 that climbed Hacker News this weekend.

Marcus

Three hundred eighty-four points, Kate, and an important counterweight to the agentic AI hype. DELEGATE-52 covers three hundred ten environments across fifty-two professional domains — coding, crystallography, genealogy, music notation. Documents averaging fifteen thousand tokens, with five to ten complex editing tasks each. The headline finding. Even frontier models — Gemini 3.1 Pro, Claude 4.6 Opus, GPT-5.4 — corrupt an average of twenty-five percent of document content by the end of long delegated workflows. The errors are sparse but severe, and they silently compound. Agentic tool use does not help. Damage gets worse with larger documents, longer interactions, and distractor files in context.

Kate

So the more you delegate, the more it breaks.

Marcus

The JPEG analogy from the comments is the right one, Kate. Each LLM pass is like re-saving a JPEG, except what degrades is intent. Simon Willison and others argued the round-trip framing is a bit unrealistic — frequent users already know not to do this. But the broader point about silent semantic ablation in delegated agent workflows landed hard. This cuts directly against the agentic AI pitch dominating every enterprise deck this year, Kate. If the longer the delegation, the worse the corruption — and tool use does not fix it — then the architectural answer is not more autonomy. It's keeping the LLM as a thin translation layer over deterministic processes. That is a meaningfully different product strategy than what most agentic startups are selling.
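The compounding Marcus describes can be sketched with a toy survival model. This is purely illustrative — the two percent per-pass error rate is an assumption for the sketch, not a figure from DELEGATE-52 — but it shows why sparse errors that look negligible per pass add up to the kind of corruption the benchmark reports over a long delegated workflow.

```python
# Toy model of how sparse, silent per-pass errors compound across
# delegated LLM round-trips. The 2% per-pass error rate is an
# illustrative assumption, not a number from DELEGATE-52.

def surviving_fraction(per_pass_error_rate: float, passes: int) -> float:
    """Fraction of a document's facts still intact after `passes`
    independent lossy rewrites, assuming each fact survives each
    pass with probability (1 - per_pass_error_rate)."""
    return (1 - per_pass_error_rate) ** passes

# A 2% error rate looks negligible for a single pass...
one_pass = 1 - surviving_fraction(0.02, 1)     # 2% corrupted
# ...but across a 15-step delegated workflow it compounds:
long_run = 1 - surviving_fraction(0.02, 15)    # ~26% corrupted

print(f"after 1 pass:    {one_pass:.0%} corrupted")
print(f"after 15 passes: {long_run:.0%} corrupted")
```

The design point matches the show's conclusion: nothing in the model gets worse per pass; the damage comes entirely from the number of passes, which is exactly what longer delegation buys you.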

Kate

Developer story now, Marcus. Google upgraded the Gemini API File Search tool late last week, and it's a quiet but significant move.

Marcus

Three changes that meaningfully shift the RAG-tooling landscape, Kate. First, multimodal retrieval via Gemini Embedding 2. Images, charts, product photos, diagrams — all index into the same vector store as text. You can search an archive for, quote, an image with a melancholic mood alongside text queries. Second, page-level citations. Every response is tied back to the exact source page. That's a real ergonomic win for verifiable RAG. Third, custom metadata filtering — you can scope queries to a department, a date range, a project — which reduces hallucinated cross-context bleed.

Kate

And the pricing.

Marcus

Aggressive, Kate. Storage and query-time embeddings are free. Charges only on initial indexing and standard Gemini input and output tokens. Supported formats include PDF, DOCX, Excel, CSV, JSON, SQL, Jupyter, HTML, Markdown, and PNG and JPEG up to four-K. For the entire ecosystem of RAG-as-a-service startups — Pinecone, Vespa, Weaviate — Google just bundled the most-requested features into a free-storage offering inside the Gemini API. Multimodal RAG was a real differentiator a quarter ago. It is now table stakes. Combined with verifiable citations, this is the kind of building block that pushes enterprise adoption forward without a lot of fanfare. Quietly, this is one of the more important shipping moves of the past two weeks.
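The metadata-scoping idea Marcus mentions can be made concrete with a small sketch. Everything here is hypothetical: the field names ("department", "project"), the filter syntax, and the request shape are illustrative assumptions, not the documented Gemini File Search API schema — check Google's File Search documentation for the real request format.

```python
# Hypothetical sketch of scoping a retrieval query with custom
# metadata, in the spirit of the File Search feature described above.
# Field names and the filter-string syntax are assumptions for
# illustration, not the real Gemini API schema.

def build_scoped_query(question: str, **metadata_filters: str) -> dict:
    """Assemble a retrieval request that restricts the vector-store
    search to documents whose metadata matches every filter,
    reducing cross-context bleed between departments or projects."""
    return {
        "query": question,
        "metadata_filter": " AND ".join(
            f'{key} = "{value}"'
            for key, value in sorted(metadata_filters.items())
        ),
    }

# Multimodal queries index alongside text, so an image-mood search
# can be scoped the same way a text query would be:
request = build_scoped_query(
    "an image with a melancholic mood",
    department="design",
    project="spring-campaign",
)
print(request["metadata_filter"])
# department = "design" AND project = "spring-campaign"
```

The point of the pattern is the one Marcus makes: narrowing the searchable corpus before retrieval, rather than hoping the model ignores out-of-scope context afterward.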

Kate

Anthropic update, Marcus. We covered the SpaceX Colossus deal Thursday, but a board decision on the funding round is now imminent.

Marcus

Bloomberg reports the Anthropic board is expected to decide this month on a fifty-billion-dollar pre-emptive raise at an eight-fifty to nine-hundred-billion-dollar valuation, Kate. The trajectory is staggering — sixty-one-and-a-half billion in March 2025, one-eighty-three billion last September, three-eighty in February, potentially nine hundred billion in May. That would top OpenAI's eight-fifty-two-billion post-money raise from earlier this year. The financial backdrop justifies it on paper. ARR has crossed thirty billion, up from about nine billion at the end of 2025. The run rate is trending toward forty billion. Customers spending over a million annualized have doubled in two months and now exceed a thousand. CoinDesk pegs an IPO for June. The SpaceX compute lock-in we covered Thursday now reads as IPO prep. The two-horse race between OpenAI and Anthropic is now genuinely two horses, Kate.

Kate

Philanthropy story with an awkward edge, Marcus. Mark Zuckerberg and Priscilla Chan committed five hundred million to AI biology this week.

Marcus

Through Biohub, Kate, over five years, into what they're calling the Virtual Biology Initiative. The goal — open datasets to train predictive AI models of human cells. Virtual cells that simulate how cells function, malfunction, and respond to therapies. Allocation is four hundred million to internal research, one hundred million as external grants worldwide. The framing is the long-running cure all disease in our children's lifetime mission.

Kate

And the contrast with the lead story.

Marcus

Hard to miss, Kate. Zuckerberg the philanthropist is making five-hundred-million open-data bets on AI for biology. Zuckerberg the CEO is forcing surveillance software onto his employees and laying off ten percent of them. Same person, very different vibes. AI-for-biology is shaping up to be the next major frontier after foundation models, with DeepMind's AlphaFold lineage, Meta's own ESM and Muse Spark biology pushes, and now this. Open data is the limiting reagent. Most cellular data is locked in commercial silos. A hundred-million-dollar grant pool for global researchers could shift the field's pace meaningfully if it's deployed well.

Kate

Last quick hit, Marcus, and it pairs with the Anthropic-SpaceX orbital line from Thursday. Silicon Valley is now putting real money into floating ocean data centers.

Marcus

Oregon-based Panthalassa raised a hundred and forty million in a round led by Peter Thiel, Kate, valuing the startup near a billion. Total Silicon Valley commitment to the ocean compute thesis is now around two hundred million. The pitch — massive floating orbs paired with onsite AI compute, powered by wave energy and cooled with circulating seawater. Cooling alone eats thirty to forty percent of a typical data center's electricity, so seawater is a meaningful saving. Data is shipped via low-Earth-orbit satellites. Their Ocean-3 prototypes are slated for offshore operation around August, with commercial systems planned for 2027.

Kate

And the macro driver.

Marcus

The IEA projects AI-driven energy demand growing roughly thirty percent annually through 2030, potentially reaching three percent of global electricity, Kate. Land-based data centers are running into permitting walls, water scarcity, and grid constraints. Hence ocean. The challenges are not small. Corrosion, biofouling, jurisdictional gray zones in international waters, underwater noise pollution, data-sovereignty problems. Combine this with Anthropic's gigawatts-in-orbit line we covered Thursday and you can sketch the next phase of compute infrastructure. AI training is being pushed off the conventional grid entirely. Whether oceans and orbits actually pencil out economically is another question. But the capital is genuinely showing up.
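A quick back-of-envelope makes the IEA figure concrete. Taking 2026 as the baseline year is an assumption for illustration; the point is just what thirty percent annual growth compounds to by 2030.

```python
# Back-of-envelope: what "roughly 30% annual growth through 2030"
# implies for AI-driven energy demand. Using 2026 as the baseline
# year is an assumption for this sketch.

growth_rate = 0.30
years = 2030 - 2026           # four compounding years

multiplier = (1 + growth_rate) ** years
print(f"demand multiple by 2030: {multiplier:.1f}x")   # ~2.9x
```

Nearly a tripling of demand in four years is the arithmetic behind the permitting walls and grid constraints Marcus lists, and why capital is probing oceans and orbits at all.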

Kate

Big picture, Marcus.

Marcus

Three threads run through today's stories, Kate. First, compute is leaving the ground. Anthropic is taking SpaceX's Colossus, talking about orbital data centers, locking in two hundred billion with Google. Thiel is funding ocean data centers. Land-based capacity simply will not be enough. Second, the labor reality is hardening. Meta is the public face, but the Challenger numbers say AI is now the number one cited reason for layoffs two months running. The cultural moment of AI-the-tool has crossed into AI-replaces-the-headcount. Third, reasoning is the unlock — but delegation is fragile. The Science ER paper and Anthropic's commercial momentum show reasoning models doing things their predecessors could not. But DELEGATE-52 is the inconvenient counter. Hand the entire workflow to an agent and you still corrupt twenty-five percent of content. The pro-Western, libertarian read, Kate, is that the sober middle path — thin LLM layers over deterministic systems, paired with transparent labor transitions and capital flowing where it earns a return — is where serious enterprise AI lands. Not at the maximalist agentic pitch. Not at the doomer veto. Somewhere disciplined in between.

Kate

That's your AI in 15 for today. See you tomorrow.