As you’ve probably heard, on Friday that political caprice came home to roost for many in Silicon Valley when Defense Secretary Pete Hegseth announced he was declaring Anthropic a “supply chain risk” and that no one with US military contracts could have a commercial relationship with the company anymore (a gross exaggeration of what being declared a supply chain risk actually means, but that’s beside the point).
We’ve criticized these “supply chain risk” designations going back years, but mainly for how they tend to be used to prop up American companies against foreign (usually Chinese) competitors with little evidence regarding the actual risk. Of course, you can easily understand the stated intent of an “SCR” designation: if there’s a foreign company with ties to a government that is averse to the US, there is always a risk that the company could agree to sneak backdoors or spyware into the network and do something bad. Hell, it’s what the US does.
But here, it makes no sense at all. The only “risk” was Anthropic saying its technology shouldn’t be used for domestic mass surveillance or to power autonomous killing machines. There is no underlying risk.
Also, it’s important to remember the distinction here. The only thing Anthropic actually objected to was allowing AI to make kill decisions. They are fine with the DOD using their AI tech, just not with AI making kill decisions. So it’s not like they were saying “no, you can’t use our tech at all.” Anthropic AI is definitely still in use by the US government… just not for having AI decide who to kill.
I mean… imho that’s a distinction without a difference. Sure, a human will have the final say, but they will be working from what the AI produced, which could be hallucinated. Unless the human checks the output against the actual evidence, they will just sign off on a kill order based on hallucinated AI “analysis.” In other words, essentially the same result as having the AI make the kill decision itself.
So, just remember to not treat Anthropic like heroes for doing the bare minimum.
Anthropic’s biggest issue isn’t even killing people with their autonomous weapons. Their complaint, if you read what they say, is that they can’t deploy weapons that won’t kill Americans. Everybody else be damned.
CEO Dario Amodei wants to develop these autonomous weapons in conjunction with Hegseth and Trump’s “Department of War,” as Trump calls it.
Even worse than my initial understanding of it. Of course. These people are fucking wild. They literally view the entire underclass of the world as completely disposable.
On the plus side, the US government has now ensured Anthropic will do more than the bare minimum in the US.
Of course, that means it’s now OpenAI making kill decisions.