Anthropic has refused to comply with a Pentagon request to remove key safeguards from its artificial intelligence systems, even as the Department of Defense warned it could label the company a “supply chain risk” and terminate a contract worth up to $200 million.
The dispute centers on Anthropic’s decision to maintain restrictions that prevent its AI models from being used for fully autonomous weapons targeting or mass domestic surveillance. The AI startup has argued that lifting those safeguards would raise serious ethical and safety concerns.
Earlier, Pentagon spokesperson Sean Parnell stated on X that the department has no intention of using artificial intelligence for mass surveillance of Americans or for autonomous weapons that operate without human oversight. He said the Pentagon had asked Anthropic to permit use of its AI models for all lawful purposes and had set a deadline for the company to respond. If Anthropic refused, Parnell warned, the partnership could be terminated and the company could be designated a supply chain risk.
In a statement, Anthropic CEO Dario Amodei reaffirmed the company’s stance against allowing its AI systems to be used for domestic mass surveillance or to power fully autonomous weapons. He argued that advanced AI systems are not yet reliable enough for high-stakes military applications.
A source familiar with the matter clarified that Anthropic was not accusing the Pentagon of planning to deploy AI for these purposes. Instead, the company’s position reflects a broader safety judgment about the risks associated with frontier AI models.
According to the source, artificial intelligence remains unpredictable in unfamiliar situations, making it unsuitable for life-or-death targeting decisions. In military contexts, such unpredictability could result in friendly fire incidents, mission failures or unintended escalation.
The use of AI for large-scale domestic surveillance also raises legal and constitutional concerns, the source added. While current laws may not explicitly restrict how AI aggregates and interprets vast amounts of data, such systems could generate population-level profiles that conflict with the spirit of constitutional protections.
Amodei expressed hope that the Pentagon would reconsider its position. However, he indicated that if the Department of Defense decides to cancel the contract, Anthropic would work to ensure a smooth transition to another provider.
The CEO also said the Pentagon had threatened to remove Anthropic’s systems, classify the company as a supply chain risk and potentially invoke the Defense Production Act to compel the removal of safeguards. Despite these warnings, Amodei emphasized that the company could not agree to the request in good conscience.
Undersecretary of Defense Emil Michael responded publicly, criticizing Anthropic’s leadership and stating that the Pentagon would always operate within the law while not yielding to pressure from private technology firms.
Anthropic, which is backed by Google and Amazon, currently holds a Department of Defense contract valued at up to $200 million. A company spokesperson said Anthropic remains open to continued discussions and is committed to maintaining operational continuity for the department.
More than 200 employees from Google and OpenAI have reportedly signed an open letter supporting Anthropic’s position. Neither company immediately commented on the matter.