Anthropic CEO Dario Amodei rejects Pentagon request over AI security issues

The Department of Defense, which has a contract with Anthropic for the military use of AI, is pressuring the company to remove safeguards it has put in place. Anthropic CEO Dario Amodei has issued a statement saying that the company will not 'give in to threats.'
Statement from Dario Amodei on our discussions with the Department of War \ Anthropic
https://www.anthropic.com/news/statement-department-of-war
Anthropic provides custom models of its AI, Claude, to the Department of Defense and various intelligence agencies.
However, the Department of Defense is unhappy with Anthropic's safeguards, which prohibit the use of its AI for mass surveillance of Americans or for the development of fully autonomous weapons, and has demanded that Anthropic remove them. If Anthropic would not permit 'all lawful uses' of its AI, the Department of Defense threatened to terminate the contract and designate the company a 'supply chain risk.'
Defense Secretary Hegseth warns Anthropic to either lift Claude's restrictions or cut ties - GIGAZINE

If Anthropic were designated a 'supply chain risk,' DoD contractors and subcontractors would likely be barred from using Anthropic products and would be required to prove that their own products are built without them. Companies previously given this designation include China's Huawei and Russia's Kaspersky, both based in countries regarded as U.S. adversaries; no U.S. company has ever been designated as such.
The Department of Defense set a deadline of February 27, 2026, for Anthropic's response, but CEO Amodei issued a statement on February 26, the day before the deadline.
'I strongly believe that leveraging AI to defend America and other democracies and defeat authoritarian adversaries is of existential importance,' Amodei said. He emphasized that Anthropic was the first frontier AI company to deploy its models on classified government networks, that it forgoes business with China to preserve America's advantage despite the short-term loss of profit, and that it has never objected to specific military operations, since such decisions rest with the Department of Defense, not private companies.

However, he expressed the view that in some limited cases AI could undermine rather than protect democratic values. He identified two uses that 'go beyond what can be safely and securely implemented with current technology,' noting that they have never been included in Anthropic's contracts with the Department of Defense and should never be included in the future.
◆Large-scale domestic surveillance
'We support the use of AI for legitimate foreign intelligence and counterintelligence activities. But using these systems for large-scale domestic surveillance is incompatible with democratic values. AI-enabled mass surveillance poses significant new risks to our fundamental freedoms. Even if such surveillance is currently legal, it is only because the law has not yet caught up with the rapidly evolving capabilities of AI. Powerful AI can automatically synthesize fragmented and seemingly innocuous data at scale to paint a comprehensive picture of an individual's entire life.'
◆Fully autonomous weapons
'Partially autonomous weapons, such as those currently being used in Ukraine, are critical to protecting democracy. Fully autonomous weapons, which automate target selection and attack without human decision-making, may also be essential to national defense in the future. However, currently, cutting-edge AI systems are not reliable enough to support fully autonomous weapons. We cannot knowingly provide products that put American soldiers and civilians at risk. We have offered to conduct research and development directly with the Department of Defense to improve the reliability of these systems, but this has not been accepted. Furthermore, without appropriate oversight, we do not believe that fully autonomous weapons can exercise the same critical judgment skills that highly trained, specialized soldiers exercise every day. They must be deployed under appropriate regulations, but such a framework does not currently exist.'

'The Department of Defense has threatened to designate Anthropic as a supply chain risk or to invoke the Defense Production Act to force the removal of safeguards. These two threats are inherently contradictory: one positions us as a national security risk, while the other positions Claude as essential to national security. Regardless, these threats do not change our position. We cannot in good conscience agree to their request,' Amodei said.