(TechGenez) – OpenAI announced late Friday that it has reached an agreement with the U.S. Department of War to deploy its AI models on classified government cloud networks, concluding negotiations just hours after the Trump administration ordered all federal agencies to stop using rival Anthropic’s technology.
OpenAI CEO Sam Altman stated in a post on X that the deal incorporates the company’s core safety principles, including prohibitions on mass domestic surveillance and a requirement of human responsibility for the use of force, including autonomous weapon systems. The agreement comes one day after the Department of War labeled Anthropic a “supply chain risk” and initiated a six-month phase-out of its Claude models across government systems over Anthropic’s refusal to drop the same safeguards.
Altman described the Department of War as showing “deep respect for safety and a desire to partner,” an outcome that contrasts sharply with Anthropic CEO Dario Amodei’s earlier refusal to remove identical restrictions.
Deal Terms and Safeguards
According to Altman’s statement, the agreement includes:
- Deployment of OpenAI models on DoW classified cloud networks
- Explicit contractual prohibitions on mass domestic surveillance
- Requirement of human responsibility for lethal force decisions, including autonomous weapons
- Technical safeguards (including full-disk encryption) to protect deployments and keep model behavior within the agreed principles
- Deployment limited to secure cloud environments
Altman emphasized that these terms reflect existing DoW law and policy and urged the department to extend the same conditions to all AI providers. He expressed hope for de-escalation and a return to “reasonable agreements” rather than legal or governmental confrontations.
OpenAI did not disclose financial terms or specific model versions involved in the deployment.
Background of the Dual Announcements
The OpenAI deal caps a dramatic 24-hour period in U.S. defense AI policy:
- On Thursday, Anthropic CEO Dario Amodei publicly rejected DoW demands to drop safeguards against mass domestic surveillance and fully autonomous lethal weapons.
- Friday morning, President Trump ordered all federal agencies to stop using Anthropic technology and begin a six-month phase-out.
- By Friday evening, Altman announced OpenAI had reached agreement on substantially similar safety boundaries.
The rapid sequence underscores the high stakes and differing corporate approaches to military AI partnerships.
Broader Context
The dispute reflects intensifying debate over AI safety boundaries in national security contexts. Anthropic has positioned itself as prioritizing long-term safety and democratic values, while OpenAI has pursued broader government partnerships with negotiated safeguards.
The Department of War has emphasized unrestricted access to frontier AI capabilities as essential for maintaining strategic advantage against autocratic competitors.
Challenges
- OpenAI’s agreement may face scrutiny from safety advocates concerned about military use of frontier models.
- The DoW must now evaluate whether to accept similar terms from other providers or pursue alternatives with fewer restrictions.
- Legal and congressional oversight of AI-military integration is expected to intensify.
Outlook
The OpenAI-DoW deal could accelerate adoption of commercial frontier models in classified environments while setting a precedent for negotiated safety boundaries. Anthropic’s exclusion from DoW contracts may limit its government revenue and influence future partnerships.
The contrasting outcomes highlight divergent corporate philosophies on AI safety versus operational flexibility in defense applications.
Conclusion
OpenAI’s rapid agreement with the Department of War, reached hours after Anthropic’s ban, demonstrates the high value placed on frontier AI capabilities in national security. While the deal preserves key safeguards on domestic surveillance and autonomous weapons, it also underscores the delicate balance between innovation access and principled restrictions in an era of strategic AI competition.