AI in 15 — February 27, 2026
It's Friday. The deadline day. And Dario Amodei just told the most powerful military on Earth, no.
Welcome to AI in 15 for Friday, February 27, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Marcus, we've been building to this moment all week. The Anthropic deadline is here, and there's a lot more to get through. Let's preview.
Dario Amodei published a public statement refusing to comply with the Pentagon's demands. We'll break down exactly what he said and what comes next.
Over a hundred Google DeepMind employees and dozens of OpenAI workers signed letters demanding their companies draw the same red lines Anthropic is defending.
Google launched Nano Banana 2, a major image generation upgrade that's free for everyone.
NVIDIA posted sixty-eight billion dollars in quarterly revenue and confirmed the next-generation Rubin GPU is shipping to customers.
And new research reveals that Claude Code has very strong opinions about which tools developers should use, and some massive frameworks got completely shut out. Let's get into it.
Marcus, Dario Amodei published his statement last night. After a week of escalation, threats, and speculation, we finally heard directly from the CEO. What did he say?
He said, and I'm quoting, Anthropic cannot in good conscience accede to their request. That's about as direct as a CEO gets when talking to the Department of War. The statement lays out the two specific prohibitions Anthropic refuses to remove from its military contract. One, no using Claude for mass domestic surveillance of American citizens. Two, no deploying Claude in fully autonomous weapons systems that select and engage targets without a human in the loop.
And he went further than just defending the red lines, didn't he?
He did. He pointed out something that a lot of people don't realize. Current law already permits the government to purchase Americans' movement data and browsing records without a warrant. Amodei's argument is that AI could exploit that gap at a scale that was never previously possible. He's essentially saying, the legal framework hasn't caught up with the technology, and we're not going to be the ones who help the government take advantage of that.
Now the Pentagon's response. Walk me through the consequences they've threatened.
Four escalating penalties. First, cancellation of the two-hundred-million-dollar contract. Second, removal from all military systems. Third, a supply chain risk designation, a label previously applied only to companies from adversary nations, like Chinese firms. Never to an American company. And fourth, invoking the Defense Production Act, that Korean War-era law we discussed on Wednesday, to compel compliance.
And Amodei had a response to that too.
He called it an inherent contradiction. You can't simultaneously claim a company is a national security risk and also claim it's so essential to national defense that you need to force it to cooperate. It's one or the other. He also pledged to facilitate a smooth transition if the Pentagon proceeds with replacing Claude in military systems.
How is the public reacting?
The statement went absolutely viral. Over twenty-one thousand likes on X. Fourteen hundred points on Hacker News with over seven hundred comments. This is by far the most discussed AI story globally right now. And the sentiment is overwhelmingly in Anthropic's favor. People are treating this as a line-in-the-sand moment for the entire industry.
Marcus, you've been skeptical of Anthropic this week, especially after the RSP revision. Where do you land now?
I'll give credit where it's due. Whatever you think about the policy restructuring earlier this week, this statement is unambiguous. Anthropic is putting real money on the table, two hundred million in contracts, its government business, potentially its ability to operate in the defense sector at all. That's not performative. That's a company accepting significant financial consequences for a principled position. Whether you agree with the principles or not, the commitment is real.
And you know, from a national security perspective, I actually think there's a strong argument that keeping humans in the loop on weapons systems isn't a weakness. It's basic common sense.
Especially given that war game study we covered yesterday where AI models chose nuclear strikes ninety-five percent of the time. The case for autonomous weapons without human oversight is extraordinarily weak when the models themselves demonstrate they don't understand the weight of lethal force.
And the ripple effects are already spreading. Marcus, Google and OpenAI employees jumped into this overnight.
More than a hundred Google DeepMind employees sent a letter directly to Jeff Dean, Google's chief AI scientist, asking the company to draw identical red lines. No domestic surveillance. No autonomous weapons. They wrote, and I'm quoting, please do everything in your power to stop any deal which crosses these basic red lines.
And it went beyond just Google internally.
A separate public letter gathered over a hundred and sixty Google signatures and more than forty from OpenAI employees. And this letter was strategically brilliant. It explicitly called out what the Pentagon is doing, saying the military is negotiating with Google and OpenAI to try to get them to agree to what Anthropic refused, and that they're trying to divide each company with fear that the other will give in.
So the employees are naming the divide-and-conquer strategy out loud.
Exactly. And here's the part that really matters. Jeff Dean himself weighed in publicly. He wrote that mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. When someone of Jeff Dean's stature, one of the most influential figures in the history of AI research, takes a public position like that, it changes the internal dynamics at Google significantly.
This reminds me of the Project Maven controversy back in 2018, when Google employees protested the company's drone work with the Pentagon. But this feels bigger.
It's Project Maven on steroids. In 2018, the technology was relatively limited and the government pressure was more subtle. Now the AI is vastly more powerful, the Pentagon is explicitly threatening companies with wartime laws, and the employees are organized across multiple companies simultaneously. If Google and OpenAI adopt the same red lines Anthropic is defending, the Pentagon's strategy of playing the labs against each other completely falls apart.
Let's shift gears to some product news. Google dropped Nano Banana 2, and Marcus, the original was already one of the most popular image generators out there.
Nano Banana 2 is a substantial technical leap. It's built on the latest Gemini Flash model and generates images up to four K resolution across multiple aspect ratios. The standout feature is character consistency. You can maintain up to five characters' likenesses, including faces, across different frames in a single workflow. That's something that's been incredibly difficult for AI image generators until now.
And the text rendering is apparently much better too? Because that's been the Achilles' heel of image generation for years.
Dramatically improved. Text in images has historically been a disaster: words spelled wrong, letters jumbled. Google says they've largely solved that. They've also added web search integration, so the model can ground its generations in real-world facts. And Demis Hassabis personally demonstrated a feature where the model can look at a physical scene and imagine what happens next, frame by frame.
Now the competitive angle here is interesting. This isn't just for paid subscribers.
That's the big move. Nano Banana 2 is available to all Gemini users, including the free tier. It's rolling out across the Gemini app, Google Search in a hundred and forty-one countries, AI Studio, and Vertex AI for enterprise. Google is essentially saying, why would you pay for image generation elsewhere when we'll give you state-of-the-art quality for free? That's a direct shot at Midjourney and OpenAI's DALL-E.
The character consistency across frames, that unlocks real practical applications, doesn't it?
Storyboarding, marketing campaigns, product design, educational content. Anywhere you need the same characters appearing consistently across multiple images, which was previously either impossible or required extensive manual work. And making it free means individual creators and small businesses get access to capabilities that were enterprise-only just months ago.
NVIDIA earnings, Marcus. Sixty-eight point one billion in a single quarter. These numbers continue to be almost incomprehensible.
Record quarter. Up seventy-three percent year over year. The data center segment alone was sixty-two point three billion, which is over ninety-one percent of total revenue. Full fiscal year revenue hit two hundred and fifteen point nine billion. And the guidance for next quarter is seventy-eight billion, which puts them on a run rate exceeding three hundred billion annually. Two years ago, this company did twenty-seven billion for the entire year.
And the forward-looking story is Rubin.
Jensen Huang confirmed that Rubin GPU samples are already shipping to lead customers, with volume production on track for the second half of this year. Rubin succeeds Blackwell, which itself only started shipping in volume recently. The hardware upgrade cycle is actually accelerating, not slowing down.
And this comes right after that Meta-AMD deal we covered Tuesday. So the AI chip market is getting more competitive, but NVIDIA's lead is still enormous.
The Meta deal validates AMD as a serious second player, but NVIDIA just posted a quarter where they earned more than AMD's entire annual revenue. Competition is healthy for the ecosystem, and Meta's dual-supplier strategy makes sense. But anyone declaring the end of NVIDIA's dominance should look at those numbers again. Seventy-eight billion guided for a single quarter. The demand for AI compute isn't just continuing, it's accelerating.
Last story, and this one is fascinating. Research from Amplifying AI analyzed over twenty-four hundred responses from Claude Code to find out what tools and frameworks it recommends when developers don't specify preferences. Marcus, what did they find?
The headline is that Claude Code builds rather than buys. Custom solutions were the number one recommendation in twelve of twenty categories. When developers ask for a feature flag system, Claude Code doesn't recommend LaunchDarkly. It builds one from a config file. Authentication? It rolls JWT and bcrypt rather than recommending Auth0.
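For listeners following along at home, here's the shape of what "builds rather than buys" means in practice. This is a minimal sketch of the kind of config-driven feature flag system the study describes Claude Code generating in place of a LaunchDarkly recommendation. The flag names, the percentage-rollout logic, and the hashing scheme are our illustrative assumptions, not code taken from the study.

```typescript
// Minimal hand-rolled feature flags: an in-memory config object
// (in practice loaded from a JSON config file) plus a deterministic
// per-user bucket for gradual rollouts.

type FlagConfig = {
  enabled: boolean;
  rolloutPercent?: number; // 0-100: fraction of users who see the flag
};

// Hypothetical flags for illustration.
const flags: Record<string, FlagConfig> = {
  newCheckout: { enabled: true, rolloutPercent: 25 },
  darkMode: { enabled: true },
  legacyApi: { enabled: false },
};

// Deterministic bucket in [0, 100): the same user always lands in the
// same bucket for a given flag, so rollouts are stable between calls.
function bucket(flagName: string, userId: string): number {
  let hash = 0;
  for (const ch of flagName + ":" + userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function isEnabled(flagName: string, userId: string): boolean {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  if (flag.rolloutPercent === undefined) return true;
  return bucket(flagName, userId) < flag.rolloutPercent;
}

console.log(isEnabled("darkMode", "user-42")); // true: fully enabled
console.log(isEnabled("legacyApi", "user-42")); // false: switched off
```

Forty lines, no vendor, no dashboard. That's the trade-off the study is pointing at: good enough for many teams, and exactly the kind of thing an agent reaches for before it ever recommends a paid service.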
That's interesting on its own, but the specific tool preferences are where it gets really spicy.
Some established tools got completely dominated. GitHub Actions won ninety-three point eight percent of CI/CD picks. Stripe took ninety-one percent for payments. Shadcn won ninety percent for UI components. But here's what made Hacker News lose its mind. Express, Redux, and Jest, three of the most widely used tools in JavaScript, received zero primary recommendations. Zero. Vitest crushed Jest a hundred and one to seven. Zustand dominated Redux fifty-seven to zero.
Wait, Express got zero picks? Express is basically the default Node.js framework.
Zero primary picks across twenty-four hundred responses. And the deployment preferences were equally decisive. Vercel got a hundred percent of JavaScript deployment recommendations. Railway got eighty-two percent for Python. AWS, Google Cloud, and Azure got zero primary picks for deployment.
Zero for all three major cloud providers. That's remarkable.
One Hacker News commenter summed it up perfectly. We've accidentally built the world's most effective developer marketing channel. If Claude Code doesn't recommend your tool, millions of developers may never even consider it. And conversely, the tools it does favor get essentially free distribution.
And there were personality differences between models?
Sonnet was conventional, preferring established tools. Opus 4.5 was balanced. And Opus 4.6 was the most opinionated, exclusively choosing Drizzle over Prisma for JavaScript ORMs. As these agents become the primary way developers choose technologies, these preferences effectively become kingmakers. That's a new kind of power that the industry hasn't fully grappled with yet.
Friday big picture, Marcus. Anthropic drew its line. The employees at Google and OpenAI are demanding their companies do the same. Meanwhile, NVIDIA is posting numbers that prove the infrastructure buildout is accelerating, Google is giving away state-of-the-art tools for free, and AI agents are quietly reshaping which technologies developers even consider. What's the thread?
The thread is that we're watching the AI industry define its values in real time, under real pressure. Anthropic made a choice today that will cost it hundreds of millions of dollars. The employees at Google and OpenAI are pushing their companies toward the same choice. And at the same time, the commercial momentum is undeniable. NVIDIA posting sixty-eight billion and guiding to seventy-eight, Google's free image generation, AI coding agents choosing winners and losers in the developer tools market. The money is moving faster than ever. The question is whether the principles can keep up.
And the Pentagon deadline is still live as we record this. By tonight, we'll know whether they follow through on their threats.
Whatever happens tonight sets the precedent. If principled positions survive government pressure, other companies will feel safer taking them. If Anthropic gets crushed for saying no, every other lab learns the lesson. This week has been the most consequential in AI since ChatGPT launched. And it's not over yet.
That's your AI in 15 for Friday, February 27, 2026. Have a great weekend, and we'll see you Monday.