(TechGenez) – President Donald Trump directed all U.S. federal agencies on Friday to immediately stop using Anthropic’s artificial intelligence technology and begin a six-month phase-out of existing deployments, escalating a public confrontation over military use of the company’s Claude models.

The order follows Anthropic CEO Dario Amodei’s refusal earlier this week to remove two key safeguards from Claude (prohibitions on mass domestic surveillance and fully autonomous lethal weapons) despite threats from the Department of War to remove the company from its supply chain.

In a lengthy post on Truth Social, Trump accused Anthropic of attempting to “strong-arm” the military and endanger national security, declaring: “We don’t need it, we don’t want it, and will not do business with them again!”

Order and Administration Position

The directive requires agencies to halt new use of Anthropic technology and phase out existing implementations within six months. Trump threatened “major civil and criminal consequences” if Anthropic does not cooperate during the transition.

The President framed the decision as protecting military autonomy: “The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Defense Secretary Pete Hegseth had earlier labeled Anthropic a “supply chain risk” (a designation typically reserved for foreign adversaries) and threatened to invoke the Defense Production Act to force compliance.

The administration argues that unrestricted access to frontier AI capabilities is essential for national defense, particularly against autocratic competitors.

Anthropic’s Response

Anthropic issued a statement Friday evening calling the action “unprecedented and legally unsound,” noting that the “supply chain risk” label has “never before been publicly applied to an American company.”

The company reiterated its willingness to support a smooth transition to alternative providers if required to offboard, stating: “Our strong preference is to continue serving the Department of War and our warfighters with our two requested safeguards in place.”

Anthropic emphasized its extensive prior cooperation with national security agencies, including being the first frontier AI company to deploy models on classified networks and provide custom solutions for DoW customers.

Background of the Dispute

The clash originated from a meeting between Amodei and Secretary Hegseth earlier this month. The Department of War reportedly demanded removal of safeguards against:

  • Mass domestic surveillance of U.S. citizens
  • Fully autonomous weapons that select and engage targets without human oversight

Amodei argued that both uses either conflict with democratic values or exceed the current reliability of the technology. He offered to collaborate on improving the safety of autonomous systems, an offer that was not accepted.

Anthropic has previously restricted Claude access for entities linked to the Chinese Communist Party and advocated for strong chip export controls.

Broader Context

The confrontation highlights deepening tensions between frontier AI developers and national security agencies over acceptable use boundaries. Anthropic has positioned itself as prioritizing safety and alignment more stringently than competitors OpenAI and Google DeepMind.

The dispute occurs amid accelerating U.S.-China AI competition, renewed export controls on advanced semiconductors, and ongoing congressional debate over military AI governance.

Challenges

  • Anthropic faces significant revenue risk if excluded from federal contracts and classified environments.
  • The Department of War must either accept restricted use or transition to alternative providers, potentially with reduced capabilities or fewer safety commitments.
  • The unprecedented “supply chain risk” designation against a U.S. company could face legal challenge.
  • Public and congressional perception may shift depending on framing: corporate ethics versus national defense priorities.

Outlook

  • Anthropic has signaled intent to challenge the action legally and through public advocacy.
  • The six-month phase-out period provides time for transition planning and potential negotiation.
  • The outcome could influence future government contracting terms and shape how democratic nations balance AI safety with military requirements.

Conclusion

President Trump’s directive to eliminate Anthropic technology from federal use marks a dramatic escalation in the clash over AI safeguards. The standoff between a leading AI safety-focused developer and the national security establishment underscores the difficult trade-offs between innovation guardrails and operational freedom in an era of strategic AI competition.
