(TechGenez) – Anthropic CEO Dario Amodei stated Thursday that his company will not remove two core safeguards from its Claude AI models: prohibitions on mass domestic surveillance and fully autonomous lethal weapons, even at the cost of losing access to Department of War (DoW) contracts.
The declaration follows a meeting with Secretary of War Pete Hegseth in which the DoW reportedly demanded that Anthropic accept “any lawful use” of its technology, including the two restricted categories. When Anthropic refused, the department allegedly threatened to remove Claude from DoW systems, designate Anthropic a “supply chain risk” (a label previously reserved for foreign adversaries), and potentially invoke the Defense Production Act to force compliance.
Amodei described the threats as “inherently contradictory” and reaffirmed the company’s principled stance: “We cannot in good conscience accede to their request.”
Anthropic’s Position and Safeguards
In a public statement, Amodei outlined Anthropic’s track record of supporting U.S. national security while maintaining narrow red lines:
- First frontier AI company to deploy models on classified U.S. government networks
- First to deliver custom models for national security customers
- Voluntarily forgone hundreds of millions of dollars in revenue by blocking use by entities linked to the Chinese Communist Party (CCP)
- Shut down CCP-sponsored cyberattacks attempting to abuse Claude
- Advocated publicly for strong export controls on advanced chips
Despite this cooperation, Anthropic has consistently excluded two use cases from DoW contracts:
- Mass domestic surveillance
Amodei argued that AI-enabled mass surveillance of U.S. citizens is “incompatible with democratic values” and introduces “serious, novel risks to our fundamental liberties.” He highlighted that current law allows warrantless purchase of detailed personal data (movements, web browsing, associations), a practice already raising bipartisan privacy concerns in Congress, and warned that frontier AI could automatically assemble scattered data into comprehensive individual profiles at unprecedented scale.
- Fully autonomous lethal weapons
While acknowledging the value of partially autonomous systems (as used in Ukraine) and the potential future need for fully autonomous weapons, Amodei stressed that today’s frontier models are “simply not reliable enough” to select and engage targets without human oversight. He offered direct R&D collaboration to improve reliability, but said the offer was not accepted.
Amodei emphasized that these narrow exceptions have not hindered widespread deployment of Claude across classified networks, national laboratories, and mission-critical DoW applications including intelligence analysis, modeling, simulation, operational planning, and cyber operations.
Department of War Position
The Department of War has not issued a public response to Amodei’s statement. Secretary Hegseth has previously stated that the U.S. military must have unrestricted access to the most capable AI systems to maintain operational advantage.
The reported threat to label Anthropic a “supply chain risk” while simultaneously asserting Claude’s importance to national security has drawn criticism for internal inconsistency.
Broader Context
The dispute highlights growing friction between frontier AI developers and national security agencies over acceptable use boundaries. Anthropic has positioned itself as the most safety-conscious major player, consistently maintaining stricter red lines than competitors OpenAI and Google DeepMind on surveillance and autonomous weapons applications.
The standoff occurs against a backdrop of accelerating U.S.-China AI competition, renewed export controls on advanced chips, and ongoing congressional debate over AI governance and military use.
Challenges
- Anthropic faces significant revenue and strategic risk if removed from DoW contracts and classified environments.
- The Department of War must either accept restricted use or transition to providers that may have fewer safety commitments or lower performance.
- Public perception could shift if the disagreement is framed as placing corporate ethics above national defense priorities.
Outlook
- The public statement increases pressure on both sides to find a compromise before any formal offboarding occurs.
- Anthropic has offered to support a smooth transition to alternative providers if necessary, aiming to avoid disruption to ongoing military missions.
- The outcome may influence future government contracting language and shape how democratic nations balance AI safety, innovation, and military advantage.
Conclusion
Dario Amodei’s refusal to drop safeguards against mass domestic surveillance and fully autonomous lethal weapons places Anthropic in direct conflict with Department of War demands, exposing the difficult trade-offs between frontier AI safety principles and national security imperatives. The resolution of this standoff will likely set important precedents for how the United States government partners with private AI developers in an era of intensifying strategic competition.






