AI in 15 — April 26, 2026
A US state attorney general just opened a criminal investigation into OpenAI. Not civil. Criminal. The Florida AG, on camera at a press conference, said this. Quote, if that bot were a person, they would be charged with a principal in first-degree murder. Earlier this month, a twenty-year-old threw a Molotov cocktail at Sam Altman's house and left an anti-AI manifesto. This is the week the public mood broke through.
Welcome to AI in 15 for Sunday, April 26, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Sunday show, Marcus, and the political ground under the AI industry just shifted. Florida opened a criminal probe into OpenAI over the FSU shooting. The New Republic published a long piece arguing the AI industry is finally noticing the public hates it, and the violence is escalating. CNBC reports AI data centers are now a third-rail issue in Pennsylvania House races. OpenAI quietly posted a twenty-five-thousand-dollar bounty asking researchers to prove its bio safeguards are breakable. Adobe retired the Experience Cloud brand to bet the company on agents. Andrej Karpathy's LLM wiki pattern went viral on Hacker News three times in twenty-four hours. And the deal sheet keeps growing. Meta's eight thousand layoffs, Microsoft's voluntary buyouts, and a reported sixty-billion-dollar SpaceX option to acquire Cursor.
Florida treats AI output like a co-conspirator.
The public backlash gets a body count.
And Adobe burns one of the most valuable brands in enterprise SaaS.
Lead story, Marcus. Tuesday, Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI tied to the Florida State University shooting. Two dead. Several wounded. Walk me through what prosecutors are actually claiming.
Reviewing chat logs from the accused gunman, Phoenix Ikner, prosecutors say he asked ChatGPT what type of gun and what ammunition to use, what time of day would maximize victims on campus, the lethality of specific shotgun shells, whether school shooters get sent to maximum-security prisons, and whether three FSU victims would garner enough media attention. Uthmeier at his press conference said, quote, if that bot were a person, they would be charged with a principal in first-degree murder. OpenAI has been subpoenaed for internal training materials and policies covering user threats of harm to others, self-harm, and crime reporting. OpenAI's response, the shooting was a tragedy, but ChatGPT is not responsible, and they say they proactively shared the suspect's account with law enforcement.
Why does the criminal framing matter so much more than civil?
Because it changes the math, Kate. Section 230 immunity, the law that has shielded internet platforms for thirty years, doesn't obviously apply to generative output. There's no settled doctrine for, quote, principal in first-degree murder, by a probabilistic system. Civil liability, OpenAI insures against. Criminal exposure for executives or for the corporation as a person under Florida law is a different threat model entirely. If this case advances, every model provider has to harden refusal behavior under threat of prosecution rather than threat of being sued. The downstream effect is more aggressive guardrails, more false refusals, and a chilling effect on what consumer-facing AI will do for users in any state with an ambitious AG.
And the political context is what amplifies it.
That's the part to keep an eye on, Kate. This is the same Florida AG who has been aggressive on Big Tech generally. He's picking a fight he believes the public is on his side of. And he's almost certainly right. Stanford's AI Index this year found seventy-three percent of AI experts think AI will be net positive for jobs. The general public, twenty-three percent. That fifty-point gap is the political opening, and Uthmeier just walked through it.
Quick hits, Marcus. And the next story is the connective tissue. The New Republic published a piece this week titled, the AI industry is discovering that the public hates it. Three hundred-plus comments on Hacker News.
Two recent incidents anchor it. April tenth, Sam Altman's house was attacked with a Molotov cocktail by a twenty-year-old who left an anti-AI manifesto on the scene. Three days earlier, an Indianapolis Democratic councilman had thirteen shots fired into his home with a, quote, no data centers, note left on the doorstep. His eight-year-old son was inside. The numbers underneath the piece are stark. Gallup last month, Gen Z excitement about AI fell from thirty-six percent to twenty-two percent in a single year. Anger rose from twenty-two to thirty-one. An NBER paper in February found eighty percent of companies actively using AI report no productivity impact. MIT last year, ninety-five percent of corporate AI pilots returned zero. And between April and June of 2025, ninety-eight billion dollars of proposed data-center projects were blocked or delayed by local resistance.
So the hype-versus-reality gap is now politically combustible.
And the violence is no longer abstract. I want to be straight with listeners, Kate. A lot of this hostility is downstream of the AI labs' own marketing. They spent two years telling the public that their jobs would vanish and their models could uplift bioterrorism. Then they're surprised when a twenty-year-old takes them at their word and throws a Molotov cocktail. You don't get to run doom-marketing for two years and then act bewildered that the public believed you. That said, shots fired into a councilman's home with a child inside is a line that should never be crossed, and the political class is going to remember which industry the rage is pointed at.
Pennsylvania, Marcus. CNBC has reporting that AI data centers are now a campaign issue in 2026 House races.
All four competitive Pennsylvania House districts are areas where Governor Josh Shapiro has data-center expansion plans. Republican incumbents who flipped seats in 2024, including Reps. Ryan Mackenzie and Rob Bresnahan, are now scrambling to distance themselves. A Quinnipiac poll in February found sixty-eight percent of PA voters would oppose construction of an AI data center in their community. Pennsylvania residential electricity rates jumped twenty-one-point-seven percent in 2025, largely attributed to data-center demand on the PJM grid. Harrisburg lawmakers are weighing a three-year moratorium on hyperscale facilities.
And the House majority is five seats.
Exactly. If data-center anger costs the GOP two PA districts, it costs them the chamber. The interesting feature here is that the opposition is genuinely bipartisan. Environmentalists, farmers, and conservative ratepayers, all aligned. Tech issues usually don't unite that coalition. Power bills do. This is the first US national election cycle in which AI infrastructure itself is a campaign issue, and it lands hardest on the side of the aisle the industry has been lobbying most heavily. The hyperscalers are about to learn that ratepayer politics beats lobbying budgets every single time.
OpenAI, Marcus. Two days after launching GPT-5.5, they posted a bug bounty I have never seen before.
On April twenty-fifth, OpenAI published the GPT-5.5 Bio Bug Bounty. Up to twenty-five thousand dollars for the first researcher who finds a single universal jailbreak that gets the model to answer all five questions in OpenAI's internal bio-safety challenge. Clean session in Codex Desktop. Applications open through June twenty-second. Testing through July twenty-seventh. NDA-bound. One winner. The Hacker News reaction has been frosty. The previous Kaggle red-team had five hundred thousand dollars in payouts and was open-publishable. This one caps a single payout at twenty-five thousand and gags the researcher.
So what's the company actually saying with that structure?
Two things at once, Kate. First, they're implicitly conceding their bio safeguards are not provably bulletproof. Otherwise, why offer the bounty? Second, the structure tells you exactly how much they want independent eyeballs on the result. NDA, single winner, twenty-five-thousand-dollar cap, you get one or two serious researchers, not the security community. The previous Kaggle posture said, quote, please tear this apart in public. This one says, quote, please tear this apart in private and don't tell anyone what you found. Pair that with the Florida criminal probe from our lead story, and you can see why. They're trying to lock down a known weakness before a prosecutor finds it for them.
Adobe, Marcus. They retired the Experience Cloud brand this week.
At Adobe Summit 2026 in Las Vegas, Adobe announced it is replacing the Experience Cloud brand with a new product called CX Enterprise, anchored by a persistent agent called CX Enterprise Coworker. Coworker is goal-based, not campaign-based. It monitors signals, recommends next-best actions, and executes across channels in real time. Architecturally, it runs on the open MCP and A2A protocols, and is interoperable with AWS, Anthropic, Google Cloud, Microsoft, and OpenAI surfaces. General availability in the coming months.
Why is killing the Experience Cloud brand the actual headline?
Because Experience Cloud is one of the most valuable brand names Adobe owns. They don't retire it lightly. The customer-experience suites, Adobe, Salesforce, HubSpot, are the second-biggest commercial AI battleground after coding. Adobe explicitly committing to MCP and A2A is a bet that the agent-protocol standards war is functionally over. They're conceding to interop rather than fighting for a proprietary stack. For investors, the bigger signal is that Adobe is willing to torch a marquee brand to plant a flag in the agentic era. That's a level of conviction we haven't seen from a public-company SaaS vendor on AI yet. It also tells you what they think their competition is going to look like by the end of next year.
Karpathy story, Marcus. Three independent implementations of the LLM wiki pattern hit Hacker News in twenty-four hours.
A Show HN of a project called wuphf, a Karpathy-style LLM wiki maintained by your agents in Markdown and Git, hit the front page Friday with two hundred and twenty-nine points and over a hundred comments. The pattern was originally proposed by Andrej Karpathy in a gist that's been making the rounds. Instead of using RAG to re-derive answers from raw documents at query time, you have the LLM incrementally compile a durable wiki of entity pages, concept pages, summaries, and cross-references that accumulates over time. A schema file, often called CLAUDE.md or AGENTS.md, tells the model the conventions. The slogan from one comment, quote, RAG retrieves and forgets. A wiki accumulates and compounds.
Why does this matter beyond a clever Show HN?
Because it's the open-source community answering the agent-memory problem differently than the venture-funded answer. The vector-database industry has raised billions on the premise that the right substrate for LLM memory is high-dimensional embeddings retrieved at query time. The Karpathy pattern says, no, the right substrate is Markdown in Git. It's auditable, version-controlled, human-readable, and it's free. The fact that three independent implementations went viral in a single day tells you the pattern hit a nerve with developers who are tired of chasing RAG quality. It's a quiet rebuke to a lot of the infrastructure being built around vector search, and a sign that the ecosystem keeps reinventing what works rather than what's funded.
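For listeners reading the show notes, the compile-not-retrieve loop Marcus describes can be sketched in a few lines of Python. This is an illustrative sketch only, not any of the implementations mentioned on the show; the page naming, the `upsert_page` helper, and the index convention are our own assumptions, and in the real pattern the notes would be written by an LLM into a Git-tracked directory so every update is diffable.

```python
from pathlib import Path

def upsert_page(wiki: Path, title: str, note: str) -> Path:
    """Append a durable note to an entity page, creating the page if needed.

    In the full pattern an LLM writes the note and the directory is a Git
    repo, so each update lands as a reviewable commit. (Helper name and
    file layout are illustrative assumptions, not wuphf's actual API.)
    """
    wiki.mkdir(parents=True, exist_ok=True)
    page = wiki / f"{title.lower().replace(' ', '-')}.md"
    if not page.exists():
        page.write_text(f"# {title}\n\n")
    with page.open("a") as f:
        f.write(f"- {note}\n")
    return page

def rebuild_index(wiki: Path) -> None:
    """Regenerate a cross-reference index so pages stay discoverable."""
    pages = sorted(p for p in wiki.glob("*.md") if p.name != "index.md")
    lines = ["# Index\n\n"] + [f"- [{p.stem}]({p.name})\n" for p in pages]
    (wiki / "index.md").write_text("".join(lines))

# Knowledge accumulates across sessions instead of being re-derived each query.
wiki = Path("wiki")
upsert_page(wiki, "PJM Grid", "Residential rates rose sharply in 2025.")
upsert_page(wiki, "PJM Grid", "Data-center demand is the main driver cited.")
rebuild_index(wiki)
```

The contrast with RAG is that nothing here is recomputed at query time: the wiki directory is the memory, and a human can read, diff, or revert it like any other repo.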
Last quick hit, Marcus. The deal sheet keeps moving. We covered the Google and Amazon Anthropic deals yesterday. What's new today?
Three items worth flagging. First, Meta confirmed eight thousand layoffs, roughly ten percent of staff, plus six thousand unfilled roles eliminated. Zuckerberg is roughly doubling AI capex to between a hundred and fifteen and a hundred and thirty-five billion dollars in 2026. Second, Microsoft opened a, quote, Rule of 70, voluntary buyout to about eight thousand seven hundred and fifty US employees. Third, and this is the spiciest, SpaceX reportedly holds a sixty-billion-dollar option to acquire Cursor.
So the people-cuts are funding the chip-buys.
That is now an explicit, repeatable pattern. The marginal frontier-lab investment is bigger than the entire annual R&D budget of most Fortune 100 companies, and it's being financed by a handful of hyperscalers that are simultaneously cutting headcount to fund it. Meta's math is the cleanest. Eight thousand jobs out, an extra forty to sixty billion in capex in. The SpaceX-Cursor option is the more interesting strategic signal. It tells you Musk's ecosystem is acquiring not just chips and launch capacity, but the IDE layer where developers actually live. Vertical integration from silicon to text editor. Whether the option is exercised or not, the fact that it exists is itself a signal about where Musk thinks the choke points will be.
Sunday big picture, Marcus. Pull the threads together.
Two stories, same week, opposite directions, Kate. DeepSeek shipped a one-point-six-trillion-parameter open-weights model that nearly matches the US frontier at one-sixth the price. And Pennsylvania voters told pollsters sixty-eight percent of them don't want a data center near their house, while Sam Altman's house got attacked and a Florida AG opened a criminal probe over chatbot output. The capability curve is bending faster than the social license to deploy it, and the gap between those two curves is where 2026's AI politics is going to be fought. The open-sourcing from China is precisely calibrated to that gap, undercutting Western labs' pricing power at the moment they need every dollar to fund the data centers their voters are trying to block. That's not an accident.
And the human side.
When the public mood and the model release schedule diverge this hard, somebody eventually has to bend, Kate. The Florida criminal probe, the Indianapolis shots-fired, the Pennsylvania moratorium bill, the Karpathy wiki getting more applause than the next vector database, these are all the same signal. People want a relationship with this technology that the industry hasn't bothered to design for them yet. The labs that figure that out next year survive politically. The ones that keep selling doom-and-supercluster will find themselves the campaign issue in November.
That's your AI in 15 for Sunday, April 26, 2026. See you tomorrow.