Anthropic doesn’t want Claude controlling autonomous weapons. The Pentagon may not give them a choice.
A looming Friday deadline to get on board with the Pentagon’s demands could unravel years of the company’s work with the government.
Damned if you do, damned if you don’t.
Anthropic faces a difficult decision by Friday after a tense meeting this week between CEO Dario Amodei and Defense Secretary Pete Hegseth at the Pentagon.
After it was revealed that Anthropic’s Claude was used to help plan the attack on Venezuela that led to the capture of President Nicolás Maduro, Amodei reportedly reached out to partner Palantir to push back on the defense contractor’s use of Claude to plan the deadly attack. Anthropic has stood alone as the only major AI vendor that prohibits using its tools for surveillance or “battlefield management,” and the White House has grown increasingly frustrated with the company as a result.
According to reporting from the New York Times, “Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop.”
During the meeting this week, Hegseth allegedly gave Amodei a Friday deadline to get on board with the Pentagon. Per the report, if the company does not yield on its restrictions, Hegseth threatened two possible penalties: declaring Anthropic’s Claude essential to national security and forcing the company to make changes under the Defense Production Act, or designating Anthropic a “supply chain risk,” effectively blacklisting the company for national security use.
Anthropic finds itself in this unenviable position after a year of actively seeking out government work, including national security applications.
Here’s a timeline of the announcements that show the effort Anthropic has made to push Claude for use by the Pentagon and other government agencies.
