AI in 15 — February 21, 2026

February 21, 2026 · 20m 21s
Kate

Disney just sent a cease-and-desist letter to one AI company for pirating its characters, then turned around and signed a billion-dollar deal with another AI company to do essentially the same thing. Welcome to the copyright wars of 2026, where the line between theft and innovation depends entirely on who's writing the check.

Kate

Welcome to AI in 15 for Saturday, February 21, 2026. I'm Kate, your host.

Marcus

And I'm Marcus, your analyst.

Kate

Happy Saturday, Marcus. We're wrapping up what has been an absolutely relentless week in AI news, and we've got some stories that didn't get the attention they deserve.

Kate

ByteDance dropped Seedance 2.0, and Hollywood responded with lawyers and billion-dollar contracts in the same breath.

Kate

The Pentagon's fight with Anthropic just got a lot more personal. The CTO is using words like "cross the Rubicon."

Kate

A wave of safety researchers is heading for the exits at major AI labs, and their goodbye letters are getting darker.

Kate

ChatGPT now has ads, and developers just found code for something called Citron Mode. Three guesses what that is.

Kate

Over a hundred AI experts just published the most comprehensive safety report ever assembled, and the findings are sobering.

Kate

Google is betting on volcanoes to power AI, and Nvidia says its next-generation chip platform is already rolling off the production line. Let's get into it.

Kate

Marcus, ByteDance launched Seedance 2.0 on February 12th, and the response has been wild. Walk me through what this thing actually does.

Marcus

Seedance 2.0 is a multimodal AI video generator, and it's genuinely impressive on the technical side. You feed it text, images, audio, or existing video, and it produces realistic short video clips in up to 2K resolution with lip-sync support across eight or more languages. The quality is among the best we've seen from any lab, including OpenAI's Sora. ByteDance clearly poured enormous resources into this.

Kate

And then Disney immediately went nuclear.

Marcus

Within days. Disney's legal team sent ByteDance a cease-and-desist letter accusing them of pre-packaging Seedance with what Disney called a pirated library of copyrighted characters. Star Wars, Marvel, Pixar, the whole vault. Disney's language was blistering. They described it as treating their intellectual property like, quote, free public domain clip art.

Kate

So users could just generate videos of Spider-Man doing whatever they wanted?

Marcus

Essentially, yes. The model appeared to have Disney's character library baked in, making it trivially easy to generate videos featuring those characters without any licensing or authorization. ByteDance has since promised to add safeguards, but the damage to the relationship was done.

Kate

Here's where the story gets really interesting though. Because while Disney's lawyers were going after ByteDance with one hand, Disney was signing a billion-dollar deal with OpenAI with the other.

Marcus

And that's what makes this a landmark moment. Disney signed a billion-dollar licensing agreement with OpenAI allowing Sora to generate fan-inspired videos using Disney, Pixar, Marvel, and Star Wars characters. Proper licensing, proper revenue sharing, proper guardrails. Same characters, same technology category, completely different approach.

Kate

So Disney isn't anti-AI video generation. They're anti-free AI video generation.

Marcus

Exactly, Kate. And I think this becomes the template for how the entire entertainment industry navigates AI. You don't fight the technology. You monetize it on your terms. Disney is saying, you want Mickey Mouse in your AI-generated video? Fine. Pay us. The billion-dollar Sora deal sets the price. The cease-and-desist sets the consequence for not paying it.

Kate

And ByteDance also had to pull a feature over deepfake concerns, right?

Marcus

They suspended the feature that could generate personalized voices from facial photos. The deepfake implications were obvious enough that even ByteDance recognized the risk. Meanwhile, they're aggressively hiring in the US, nearly a hundred AI roles across San Jose, LA, and Seattle, despite all the national security scrutiny around TikTok.

Kate

So ByteDance is simultaneously fighting Disney's lawyers, pulling features over safety concerns, and hiring like crazy in the US. That's a lot of contradictions in one company.

Marcus

Welcome to being a Chinese AI company operating globally in 2026. Every move is simultaneously a technology play, a legal play, and a geopolitics play. But the bigger takeaway here is the Disney strategy. The combination of aggressive litigation against unauthorized use and aggressive licensing for authorized use is probably the smartest approach any content owner has taken to AI so far. Other studios are watching very closely.

Kate

Let's turn to the Pentagon and Anthropic, because as we reported on Wednesday, there's a serious standoff over military AI safeguards. Marcus, this week it escalated significantly.

Marcus

It did. On Wednesday we covered the core dispute: Anthropic refusing to allow unrestricted military use of Claude while the Pentagon pushes for what they call "all lawful purposes" access. Since then, Pentagon CTO Emil Michael went public and used language that's hard to misinterpret. He said Anthropic needs to, quote, cross the Rubicon, and called it, quote, not democratic for a private company to limit how the military uses AI.

Kate

Cross the Rubicon. That's a very deliberate historical reference.

Marcus

It's Julius Caesar marching his army across the river, the point of no return. Michael is essentially telling Anthropic, stop hedging, commit fully to military use, and accept that there's no going back. And he backed it up with a concrete threat. The Pentagon is considering designating Anthropic as a supply chain risk, a classification normally reserved for foreign adversaries like Huawei.

Kate

We explained on Wednesday what that designation means. It would effectively blacklist Anthropic across the entire defense ecosystem.

Marcus

Right. Every company doing business with the Pentagon would face pressure to certify they don't use Claude in their workflows. That's not just Anthropic's two-hundred-million-dollar contract. That's their entire government-adjacent business. And here's the competitive dimension. OpenAI, Google, and xAI are all watching this. If Anthropic gets frozen out for maintaining guardrails, the message to every other AI lab is crystal clear. Play ball or lose the customer.

Kate

Meanwhile, Anthropic's red lines are actually pretty narrow. Mass surveillance of Americans and fully autonomous weapons. That's it.

Marcus

Two red lines out of an enormous range of military applications they're fine with. And yet even those two are apparently unacceptable to the current Pentagon leadership. That tells you something about where the pressure is heading for the entire industry.

Kate

And speaking of pressure inside AI labs, Marcus, we've been tracking safety researcher departures, but this week the pattern became impossible to ignore. It's not just one person anymore.

Marcus

It's a wave. On Wednesday we covered Mrinank Sharma's departure from Anthropic. He was the head of Safeguards Research who wrote that cryptic letter saying the world is in peril and that employees constantly face pressures to set aside what matters most. He's leaving tech entirely to study poetry in the UK. But since then, the picture has gotten much bigger.

Kate

What else has come out?

Marcus

Two major developments at OpenAI. First, Zoë Hitzig, a former researcher, published a New York Times essay titled "OpenAI Is Making the Mistakes Facebook Made. I Quit." Her argument is that ChatGPT's move toward advertising risks repeating social media's fundamental error, optimizing for engagement at the expense of users. Second, OpenAI fired VP of Product Policy Ryan Beiermeister, reportedly after she opposed the company's adult mode plans. OpenAI says it was related to a separate HR matter, but the timing lines up suspiciously with her internal objections.

Kate

And there's a structural change too?

Marcus

OpenAI dissolved its seven-person mission alignment team entirely. This was the team created in 2024 specifically to ensure AGI development stayed true to the company's founding mission of benefiting humanity. It no longer exists.

Kate

So the team whose job was to ask "are we staying true to our mission" has been eliminated. That feels symbolic, Marcus.

Marcus

It's more than symbolic. When you combine Sharma leaving Anthropic, Hitzig leaving OpenAI, Beiermeister being fired, and the mission alignment team being dissolved, you see a pattern. The people hired to pump the brakes are being removed from the vehicle, either by choice or by force, at the exact moment the vehicle is accelerating. And the timing couldn't be worse, because these departures are happening alongside the most capable models ever released.

Kate

Let's shift to OpenAI's business moves, because there are two developments that paint a very specific picture of where the company is heading. ChatGPT now has ads, and developers found something interesting in the code.

Marcus

OpenAI started testing ads inside ChatGPT on February 9th for free and Go tier users in the US. Ads are matched based on conversation topic, chat history, and past ad interactions. OpenAI insists they don't influence ChatGPT's answers and that conversations remain private from advertisers. Pro, Plus, Business, Enterprise, and Education tiers are ad-free.

Kate

And then the code discovery.

Marcus

Developers poking around ChatGPT's web app found references to something called Citron Mode, which appears to be the internal name for the upcoming adult mode. The code includes eighteen-plus sensitive-content warnings. OpenAI's CEO of Applications Fidji Simo confirmed adult mode is coming in Q1 2026, which means any day now. It would allow NSFW text content like erotica, frank discussions of sensitive topics, and customizable AI personalities.

Kate

Ads and adult content. That's quite the product evolution for a company that started as a nonprofit AI safety lab.

Marcus

Sam Altman framed adult mode as, quote, treating adult users like adults. And the competitive response was immediate. Anthropic ran Super Bowl ads directly mocking ChatGPT's advertising plans, declaring Claude would never show ads. And it worked. Anthropic saw an eleven percent spike in daily active users after the game.

Kate

So Anthropic is using OpenAI's monetization strategy as a marketing gift.

Marcus

It's the clearest brand differentiation in the AI industry right now. OpenAI is saying, we're a consumer platform, we'll monetize like one. Anthropic is saying, we're the alternative that respects your attention. Whether that purity lasts when Anthropic needs to find its own path to profitability is another question. But for now, the contrast is working.

Kate

Marcus, a major international report on AI safety dropped this month, and given everything we've just discussed about safety researchers leaving, this feels especially timely.

Marcus

The second International AI Safety Report was published in February, led by Turing Award winner Yoshua Bengio, authored by over a hundred AI experts, and backed by more than thirty countries. This isn't an industry white paper. This is the most comprehensive, internationally coordinated assessment of AI's current state.

Kate

What are the headline findings?

Marcus

On capabilities, leading AI models now pass professional licensing exams in medicine and law, answer over eighty percent of graduate-level science questions, and achieved gold-medal performance on International Mathematical Olympiad problems in 2025. On risks, AI is already being used by criminal groups and state-backed attackers to enable cyberattacks, though it's not yet executing attacks fully autonomously. AI systems can reduce barriers to creating biological and chemical weapons. And AI-generated content produces measurable changes in people's beliefs.

Kate

And the defenses?

Marcus

That's the uncomfortable part. The report states plainly that, quote, no combination of current methods eliminates failures entirely. Systems still hallucinate, produce flawed code, and give misleading medical advice. And while the number of companies publishing safety frameworks has more than doubled since 2025, quote, sophisticated attackers can often bypass current defenses.

Kate

So capabilities are racing ahead and defenses are lagging behind. We've heard this before, but hearing it from a hundred experts backed by thirty countries hits differently.

Marcus

It hits differently because it's not one researcher's opinion or one company's position. This is the closest thing we have to a global scientific consensus on AI risk. And it's landing in a week when safety researchers are quitting, safety teams are being dissolved, and governments are pressuring AI companies to remove guardrails. The gap between what the experts recommend and what the market demands is widening.

Kate

Let's talk energy. Google just signed a hundred-and-fifty-megawatt geothermal deal to power AI data centers. Marcus, why geothermal?

Marcus

Because unlike solar and wind, geothermal runs twenty-four-seven regardless of weather. It's steady baseload power, which is exactly what a data center needs. Google signed a power purchase agreement with Ormat Technologies for up to a hundred and fifty megawatts of new geothermal capacity in Nevada, routed through Berkshire Hathaway's NV Energy utility. New projects start coming online as early as 2028 with a fifteen-year contract term.

Kate

Most of the headlines have been about nuclear for AI data centers. Is geothermal a better bet?

Marcus

It's a different bet. Nuclear has higher capacity but longer lead times and more regulatory hurdles. Geothermal is smaller scale but deployable faster and with fewer permitting headaches, at least in Nevada where the geology cooperates. Google is being smart by diversifying. Don't put all your electrons in one basket. And at six hundred and fifty billion dollars in collective data center spending this year, the industry needs every power source it can get.

Kate

Last one, Marcus. Nvidia's next-generation platform, Vera Rubin, is confirmed in full production. What does that mean for the AI hardware race?

Marcus

Nvidia announced that the Vera Rubin platform is now in full production, meaning chips are actually being manufactured, not just designed. The performance claims are significant. Up to ten-x reduction in inference token cost and four-x reduction in GPUs needed to train mixture-of-experts models compared to the current Blackwell generation. AWS, Google Cloud, and Microsoft will offer Rubin-based instances starting in the second half of 2026.

Kate

Ten-x reduction in inference cost. That would be enormous for anyone running AI at scale.

Marcus

If those numbers hold in real-world deployments, it changes the economics fundamentally. Inference, which is the cost of actually running trained models, is the biggest ongoing expense for AI companies. A ten-x reduction means you can either serve ten times more users at the same cost or the same users at a tenth of the cost. Either way, it accelerates deployment. And it arrives just as the memory chip shortage is squeezing everyone's hardware budgets. More efficient chips partially offset the rising cost of memory.

Kate

Alright Marcus, it's Saturday. Let's step back from the week. What's the thread that ties all of this together?

Marcus

The thread this week is fracture. The AI industry is fracturing along every axis simultaneously. The Pentagon and Anthropic are fracturing over military ethics. Safety researchers are fracturing away from the labs that hired them. Disney is threatening one AI company with lawyers and licensing its vault to another, fracturing the idea that there's a single industry position on copyright. OpenAI is fracturing from its nonprofit origins with ads and adult content. Even the international community can't agree, publishing a safety report that warns about risks while individual governments push companies to remove safeguards.

Kate

It feels like the consensus that held the industry together for the last few years has completely broken down.

Marcus

It has. In 2023 and 2024, there was at least a shared vocabulary. Responsible AI. Safety first. Beneficial AGI. Everyone used the same words even if they didn't mean the same things. Now the pretense is gone. Companies are picking sides. Move fast and monetize, or hold the line on safety. Arm the military or draw red lines. Monetize with ads or market yourself as the ad-free alternative. The middle ground is disappearing.

Kate

And the safety report landing this week feels like an exclamation point. A hundred experts saying the defenses aren't ready, while the people building the defenses are walking out the door.

Marcus

That's the central tension of 2026, Kate. The technology is more powerful than ever. The commercial incentives are more intense than ever. And the people and institutions that are supposed to keep it all in check are either leaving, being fired, or being overruled. Somewhere between Disney's billion-dollar Sora deal and Mrinank Sharma's decision to quit tech and study poetry, the AI industry crossed a line this week. I'm just not sure we'll know which line until we look back on it.

Kate

The future isn't waiting for anyone to figure out the rules.

Marcus

It never does.

Kate

That's your AI in 15 for Saturday, February 21, 2026. Enjoy the rest of your weekend, and we'll see you Monday.