You have probably heard something about the company Anthropic. Here is Rachel Hurley's take on it. It's pretty much just supposition at this point (she says as much herself), but whatever the answer is, it's pretty scary.
--Kim
Okay - I've read a shit ton about this Anthropic / OpenAI / Department of Defense situation - and here is my take.
If you have no idea what I am talking about - here is the short version. The Pentagon wants carte blanche to use AI for building mass surveillance tools and autonomous killing machines. The AI company they were working with said no. So the Pentagon threatened them.
Anthropic - the company that makes Claude - got handed a deadline this week. Remove the guardrails that stop your AI from being used for autonomous weapons and mass domestic surveillance, or else. Anthropic said no. Within hours, Pete Hegseth - a man whose previous claim to fame was hosting Fox & Friends and allegedly not washing his hands for years - designated an American AI company a national security threat. The same label we put on Chinese telecom companies. For the crime of refusing to build a surveillance panopticon.
The general consensus seemed to be that the other big AI companies agreed with Anthropic's stand - and then OpenAI's Sam Altman - who this morning said he supported Anthropic's position - swooped in and cut a deal with the Pentagon. So OpenAI gets the contract, Anthropic gets banned from the federal government, and Sam says his deal has the exact same safety red lines Anthropic was holding.
So either the Pentagon banned Anthropic for conditions it then accepted from OpenAI - which makes no sense - or somebody is lying about the terms.
There are a lot of theories on the internet about what really just happened:
1. OpenAI agreed that the Pentagon decides whether the rules are being followed; Anthropic wanted to decide for itself.
2. OpenAI's safety rules can technically be removed later; Claude's are baked into training.
3. OpenAI only banned these activities on its own servers - once the model is running on the Pentagon's own servers, there will be no oversight.
4. OpenAI accepted existing law as sufficient; Anthropic argued current law doesn't cover AI-powered surveillance.
But the best theory floating around is the ugliest one. It wasn't about the red lines at all. It was a power play. Anthropic went to the government and said, "You can't use our AI for that." The government responded, "You can't tell us what to do." And they made an example of them.
And if you don't think this affects you - let me be clear - the surveillance stuff isn't hypothetical. DOGE already has access to Treasury data, IRS records, Social Security files, and federal payment systems. Databases that were siloed on purpose, by design, for decades. An LLM that can reason across all of them simultaneously creates something that has literally never existed in American history. The query it enables: "Who donated to this organization, lives in this zip code, traveled to this country, and posted this opinion online?"
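To make that concrete, here's a toy sketch of what "reasoning across silos" actually collapses into. Everything in it is invented - made-up tables, made-up fields, no real system - but it shows why the siloing itself was the safeguard:

```python
# Toy illustration only: invented tables and fields, standing in for
# records that used to live in separate, deliberately siloed agencies.
import pandas as pd

donations = pd.DataFrame({"person_id": [1, 2, 3], "org": ["X", "Y", "X"]})
residence = pd.DataFrame({"person_id": [1, 2, 3], "zip": ["10001", "73301", "10001"]})
travel = pd.DataFrame({"person_id": [1, 3], "country": ["Q", "Q"]})
posts = pd.DataFrame({"person_id": [1, 2], "opinion": ["A", "B"]})

# "Who donated to org X, lives in zip 10001, traveled to country Q,
# and posted opinion A online?" - four joins, one answer.
hits = (donations.query("org == 'X'")
        .merge(residence.query("zip == '10001'"), on="person_id")
        .merge(travel.query("country == 'Q'"), on="person_id")
        .merge(posts.query("opinion == 'A'"), on="person_id"))

print(hits["person_id"].tolist())  # -> [1]
```

Four merges. That's the whole query. The hard part was never the question - it was that the data lived in different buildings. An LLM front-end just removes the last bit of friction: you don't even have to know how to write the join.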
And the weapons piece - this isn't about one drone with a gun. The military wants AI that completes a kill chain in milliseconds. AI that coordinates ten thousand drones simultaneously.
The thing nobody wants to say out loud is that once you build autonomous killing logic for foreign wars, it comes home. It always comes home. Every dual-use technology in American history follows this exact pattern: the MRAPs built for Iraq ended up with small-town police departments, and the Stingray phone trackers built for the battlefield ended up in domestic squad cars.
Ilya Sutskever - one of the most important figures in AI history, a man who basically never talks publicly - broke his silence to say Anthropic was right. Two hundred and twenty engineers from OpenAI and Google signed a letter supporting Anthropic. Three U.S. senators launched formal challenges within hours. Europe publicly invited Anthropic to relocate.
And Pete Hegseth reposted Sam Altman's deal announcement like he just won the Super Bowl.
That's the tell. If the terms were really identical, why is the guy who banned Anthropic celebrating? He's not celebrating safety terms. He's celebrating compliance. He's celebrating that one company bent the knee and the other one got destroyed for not bending.
Everyone watched this week and learned exactly one thing: say no to the Pentagon and they will try to kill your business. Say yes and they'll throw you a parade.
The court challenge is coming. Whether it works will determine if AI safety commitments mean anything at all, or if they're just marketing copy that evaporates the second a man with stars on his shoulders makes a phone call.