AI in 15 — May 07, 2026
Anthropic just rented every single GPU at the data center Elon Musk built to train Grok. Two hundred and twenty thousand chips. Three hundred megawatts. And buried in the announcement, one line about gigawatts of AI compute in orbit. Yes — orbit.
Welcome to AI in 15 for Thursday, May seventh, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Massive show today, Marcus. Anthropic just signed a compute deal with SpaceX that has the entire industry doing a double-take. Greg Brockman spent yesterday reading his own private journal entries to a federal jury. The White House is now seriously drafting a pre-release vetting regime for frontier models, and the catalyst is Anthropic's own Claude Mythos. Mark Cuban says OpenAI's trillion-dollar capex will never pay back. A Cape Breton fiddler is suing Google for one-and-a-half million Canadian after an AI Overview called him a sex offender. Simon Willison admits he's drifting into vibe coding himself. And someone built a GitHub-style contributions graph for AI-driven outages that took the top of Hacker News.
Anthropic takes over Musk's Memphis data center.
Brockman reads his own diary to a federal jury.
And Mark Cuban predicts the AI Sun Microsystems moment.
Lead story, Marcus. Anthropic announced Wednesday that it has signed a deal with SpaceX to take all the compute capacity at Colossus 1 in Memphis. Walk me through this.
It's an extraordinary realignment, Kate. Colossus 1 is the data center Elon Musk built specifically to train Grok at xAI. Three hundred megawatts. More than two hundred and twenty thousand NVIDIA chips — H100s, H200s, and the new GB200s. Anthropic gets all of it within the next month. They immediately doubled Claude Code's five-hour rate limits for Pro, Max, Team, and Enterprise customers, killed peak-hour throttling, and lifted API rate caps on Opus models.
The Anthropic-Musk relationship has been famously frosty. How did this happen?
The working assumption — and Hacker News commenters were all over this — is that xAI overestimated its own compute needs while Anthropic massively underestimated Claude demand. Anthropic put it bluntly. They are, quote, looking in every corner of the world for compute. xAI confirmed the partnership. So you have Anthropic now operationally tied to a Musk-built facility while Musk himself is in court suing OpenAI in a separate case we'll get to in a minute. Realpolitik beat the personal feud.
And this is on top of how many other compute commitments?
Five gigawatts with Amazon. Five gigawatts with Google and Broadcom coming online in 2027. Thirty billion with Microsoft and NVIDIA on Azure. Fifty billion in U.S. infrastructure with Fluidstack. And the line that genuinely made me sit up, Kate. Anthropic and SpaceX have, quote, expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity. Data centers in space.
That's not a press release flourish?
No. SpaceX has been pitching orbital compute privately for over a year. Free vacuum cooling, twenty-four-seven solar, no NIMBY problem. The catch has always been launch cost and radiation hardening. For a frontier lab to publicly endorse the vision tells you compute scarcity is now severe enough that science-fiction logistics are on the table. Critics will note, fairly, that Colossus 1 has been the target of complaints about illegal natural-gas turbines, air pollution, and water contamination affecting low-income neighborhoods near Memphis. Anthropic now owns that environmental story too.
Quick hits. Marcus, the Musk versus Altman trial entered week two, and Greg Brockman had a very bad day.
Brutal, Kate. OpenAI's president took the stand in Oakland and was forced to read his own personal journal entries aloud to a federal jury. Entries that OpenAI itself submitted as evidence in October, sealed, and then unsealed in January. The most awkward — a 2017 entry where he asks himself, quote, financially, what will take me to one billion dollars. Another wrestles with whether voting against Musk on the board would be morally wrong. A separate entry refers to the nonprofit mission as, quote, a lie. Brockman's stake is now estimated near thirty billion. He told the jury, quote, it's very painful, very deeply personal writings that were never meant for the world to see, but there's nothing in there I'm ashamed of.
And the physical-intimidation testimony.
He testified that during the 2018 board fight Musk physically intimidated him. Quote, I thought he was going to hit me. Look, Kate, the diary entries are devastating because they're contemporaneous. Brockman privately worried in 2017 that converting the nonprofit would be morally bankrupt, which is exactly what Musk is now alleging, with five hundred billion dollars at stake. Beyond the legal risk, every executive in Silicon Valley is reconsidering what they write in journals stored on company-owned hardware. The chilling effect on internal candor is real.
We covered the White House pre-release vetting story Tuesday, Marcus, but there's a major new wrinkle.
The catalyst is now public, Kate. Bloomberg and CSO Online both report that the executive order being drafted is being driven specifically by Anthropic's Claude Mythos Preview. Mythos is the offensive-cyber-capable model Anthropic developed that autonomously discovered thousands of previously unknown vulnerabilities across every major operating system and web browser, and then wrote working exploits without human guidance. Anthropic kept it out of the public API and routed it through Project Glasswing, a hundred-million-dollar credit program for vetted cybersecurity defenders.
So Anthropic essentially scared the administration into regulating the industry.
Functionally, yes. And David Sacks, the deregulatory AI czar who pushed back on Mythos as, quote, Anthropic scare tactics, exited his role in March. The Verge ran a column today literally titled how David Sacks crashed and burned. Susie Wiles and Treasury Secretary Bessent have stepped in. CAISI has already signed pre-release vetting agreements with Google DeepMind, Microsoft, and xAI. The libertarian discomfort here is real, Kate. An administration that revoked Biden's AI safety order on day one is now drafting a much more interventionist regime, prompted by a single lab's capability demonstration. The cynical reading is that Anthropic gets exactly the regulatory moat its competitors will struggle to clear.
Mark Cuban came on the Big Technology podcast and torched OpenAI's spending plans.
He did, Kate. His thesis. The trillion-dollar capex commitments OpenAI is racking up will never see a return, because compute is getting cheaper and faster so fast that today's investment math, quote, isn't going to come to fruition. He singled out Sam Altman directly. Quote, Sam is all over the map, and I think that'll backfire on him. Cuban thinks AI will mint the first trillionaire, but it won't be Altman. It might be, in his words, just one dude in a basement.
And the Hacker News engineering crowd is broadly with him.
They are. The shared view is that only three companies can plausibly absorb frontier compute costs — OpenAI, Anthropic, and Google — and only Google and Microsoft have the alternative cashflows to survive a misstep. If Cuban is right that compute deflation outruns the capex curve, OpenAI risks becoming the Sun Microsystems of the AI era. Massive Cisco-class infrastructure spend, no platform lock-in to defend it. Cuban is useful precisely because he isn't a doomer. He's an investor saying the unit economics don't work.
Heavy story now, Marcus. A Cape Breton fiddler is suing Google for one-and-a-half million Canadian.
Ashley MacIsaac, a Juno-winning Canadian musician, Kate. Google's AI Overview falsely told users he was a convicted sex offender on Canada's lifetime registry. It accused him of sexually assaulting a woman, attempting to lure a child online, and committing a violent assault. Completely fabricated. He found out in December when a First Nation north of Halifax cancelled one of his concerts and confronted him with a screenshot. The likely cause is the model conflating him with another Atlantic Canada man who shares his surname.
And Google's response.
No apology. No retraction. No admission of responsibility. They're going to fight it. The reason this case matters far beyond MacIsaac, Kate, is jurisdiction. Section 230-style immunity does not translate to Canadian common law. Google will likely have to defend the truth of statements its own model fabricated, and the operator-liability theory could cascade across every common-law jurisdiction. If Google loses or settles, it becomes precedent that the operator of a generative system is liable for its outputs. That reshapes the legal posture of every AI product on Earth. The Canadian courts may end up writing the rules the U.S. Congress hasn't.
Simon Willison published an essay yesterday that the engineering community is wrestling with.
Five hundred and twelve points and over five hundred and fifty comments on Hacker News, Kate. Willison argues the once-clear line between vibe coding — non-engineers asking an AI for code without reading it — and agentic engineering — professionals reviewing and testing AI-generated code — is collapsing. And he admits he himself is increasingly shipping agent-generated code into production without line-by-line review. His phrase. He treats Claude Code, quote, like a service from another team. He notes that beautiful documentation and passing tests are no longer reliable signals of quality. Only time-tested production usage is.
His provocative line.
Quote, the entire software development lifecycle was designed around producing a few hundred lines daily. Now it doesn't. What else breaks? When Willison admits this, Kate, that matters. He's one of the most measured and methodical voices on practical AI engineering. The comment thread consensus is that LLMs didn't create undisciplined engineering. They exposed and accelerated it. The traditional discipline of code review, lifecycle controls, the whole apparatus — it was built around a scarcity that no longer applies. And nobody has a coherent replacement yet.
Last quick hit, Marcus, and it's a meme that turned into a real story. Red Squares.
Seven hundred and thirty-seven points yesterday, top of Hacker News, Kate. A side project at red-squares-dot-cian-dot-lol. It visualizes a year of GitHub outages as a GitHub-style contributions graph, color-coded by severity. The catch is that a striking share of the recent red squares aren't GitHub's fault at all. They're Copilot incidents driven by upstream model providers. Quote, disruption with Gemini 2.5 Pro. Disruption with Grok Code Fast 1. Claude Opus 4 experiencing degraded performance. And the chart shows outages cluster heavily on weekdays, dramatically less on weekends.
Which proves what, exactly?
That AI coding traffic is now the dominant load on developer infrastructure, Kate. Weekends drop because human engineers stop running agents. The deeper point is the dependency. GitHub's reliability is now coupled to the reliability of three or four AI labs it does not control. When Anthropic, Google, or xAI degrade, Copilot users worldwide see broken builds. The single biggest hidden dependency of modern software development is no longer AWS. It's the model API layer. And nobody has redundancy planning for that. We talked Tuesday about GitHub's eighty-four-percent ninety-day uptime number. This visualization is why.
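For listeners who want to picture the mechanics, the idea behind a contributions-style outage graph is simple to sketch. What follows is a minimal, hypothetical Python illustration — not the actual Red Squares site code, and the severity labels, dates, and incident data are all invented — of bucketing incidents into one cell per day, grouping days into seven-day columns as on a GitHub graph, and computing the weekday skew Marcus describes:

```python
# Hypothetical sketch of a contributions-style outage grid.
# One cell per day, colored by the worst severity seen that day;
# days are grouped into 7-day columns like a GitHub graph.
from datetime import date, timedelta

# Invented severity scale, lowest to highest.
SEVERITY_RANK = {"none": 0, "degraded": 1, "partial": 2, "major": 3}

def build_grid(incidents, start, days):
    """Map a {date: severity} dict onto a list of 7-day columns."""
    cells = []
    for i in range(days):
        d = start + timedelta(days=i)
        cells.append((d, incidents.get(d, "none")))
    # Slice the flat day list into 7-day columns.
    return [cells[i:i + 7] for i in range(0, days, 7)]

def weekday_share(incident_dates):
    """Fraction of incidents that fall on a weekday (Mon-Fri)."""
    total = len(incident_dates)
    if total == 0:
        return 0.0
    weekdays = sum(1 for d in incident_dates if d.weekday() < 5)
    return weekdays / total

# Invented sample data: two weekday incidents, one weekend incident.
incidents = {
    date(2026, 5, 4): "major",     # Monday
    date(2026, 5, 5): "degraded",  # Tuesday
    date(2026, 5, 9): "partial",   # Saturday
}

grid = build_grid(incidents, start=date(2026, 5, 4), days=14)
```

With this toy data the grid has two columns of seven cells, and `weekday_share` comes out to two-thirds — the same weekday clustering the real visualization shows, because agents mostly run when human engineers are at their desks.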
Big picture, Marcus.
One theme runs through every story today, Kate. The compute crunch is the story of 2026. Anthropic is renting Musk's data center because it cannot get enough silicon and is openly pitching gigawatts of compute in orbit. OpenAI is spending fifty billion this year and Cuban says it won't pay back. The White House is panicking about Mythos because frontier capability is moving faster than policy. Brockman's diary is being read to a jury because the original nonprofit structure couldn't bear the weight of the capex required to stay frontier. MacIsaac is suing Google because the same scaling pressure that produces frontier capability also produces fabricated defamation at scale. Willison is warning that the discipline of writing software is being reshaped by a model layer no individual developer controls. And Red Squares is the meme version of the same fact. Every one of these stories is a downstream effect of a single binding constraint. In 2026, the limit on AI is no longer ideas or talent. It is gigawatts. The pro-Western, libertarian read, Kate, is that markets are pricing this correctly. Compute scarcity is real, but compute deflation is also real, and the labs that build distribution and trust before the deflation curve catches them will win the next decade. The ones that bet purely on locked-in capex spend may not.
That's your AI in 15 for today. See you tomorrow.