(TechGenez) — Google has signed an agreement granting the U.S. Department of War access to its artificial intelligence systems for use on classified military networks, positioning the tech giant alongside OpenAI and Elon Musk’s xAI in supplying AI tools to the Pentagon for sensitive government operations.
The deal permits Pentagon workers to use Google’s Gemini AI models for classified work, so long as those uses are considered “lawful.” The agreement extends an existing arrangement that previously covered only unclassified government data.
The agreement was signed at 4 p.m. on Monday, according to a person familiar with the matter, even as researchers at the company publicly protested against it.
A Deal Shaped by Anthropic’s Refusal
The agreement is the direct consequence of a high-profile standoff between the Pentagon and AI safety company Anthropic. The Department of War had sought unrestricted access to Anthropic’s AI systems for classified use, including applications involving domestic mass surveillance and autonomous weapons. Anthropic refused, insisting on guardrails to prevent those specific use cases.
In response, the Pentagon branded Anthropic a “supply-chain risk,” a designation typically reserved for foreign adversaries. Anthropic contested the label in court, and a federal judge granted the company an injunction against the designation last month while the case proceeds.
Google now becomes the third major AI firm to step into the opening Anthropic’s refusal created. OpenAI signed a deal with the Department of War shortly after the dispute became public, followed by xAI.
The Terms and Their Limits
The contract permits the Pentagon to use Google’s models for “any lawful governmental purpose,” according to people familiar with the agreement.
According to reports, the agreement also requires Google to assist in adjusting its AI safety settings and filters at the government’s request, and specifies that Google has no authority to control or veto lawful government operational decisions.
Google’s agreement includes language stating that it does not intend for its AI to be used for domestic mass surveillance or in autonomous weapons. In a statement, Google said it is “committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.” However, it remains unclear whether such provisions carry legal force or can be meaningfully enforced in practice, according to reporting by The Wall Street Journal.
Internal Resistance
The deal did not go uncontested within Google’s own walls. At least 950 Google employees signed an open letter urging the company to follow Anthropic’s lead and decline to sell AI to the Department of War (DoW) without equivalent guardrails. Signatories argued that refusing classified work is the only way to ensure Google’s AI is not misused.
Google did not respond to a request for comment on the employee letter.
A Widening Military-AI Complex
The agreement reflects a rapidly shifting landscape in which the U.S. government is aggressively expanding its use of commercial AI for national security purposes.
The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google, as it worked to integrate AI models into both classified and unclassified government systems. Classified networks handle a wide range of sensitive government work, including mission planning and weapons targeting.
xAI had already gained classified-network access in January, demonstrating that the department was moving to open sensitive environments to commercial AI vendors well before Google’s deal was finalised.
The broader pattern points to what analysts describe as an operational, not merely experimental, framework for AI in defence. As commercial AI models move deeper into classified environments, questions about accountability, safety, and the limits of contractual guardrails are expected to intensify: in courtrooms, in boardrooms, and inside the companies building these systems.
Anthropic’s lawsuit against the Pentagon remains ongoing. The outcome could have significant implications for how AI companies negotiate the boundaries of government use of their technology going forward.