Microsoft, Google, and xAI Agree to Give US Government Early Access to AI Models for National Security Testing
What Happened
Microsoft, Google, and Elon Musk’s xAI have agreed to provide the U.S. government with early access to new AI models for national security testing. The agreements, announced by the Center for AI Standards and Innovation (CAISI) at the Department of Commerce, will allow government scientists to evaluate frontier models before public deployment and assess their capabilities and security risks.
The move was driven partly by growing alarm in Washington over the hacking capabilities demonstrated by Anthropic’s recently unveiled Mythos model. Microsoft will work with government researchers to “test AI systems in ways that probe unexpected behaviors” and develop shared datasets and workflows. Microsoft also signed a parallel agreement with the UK’s AI Security Institute.
These new agreements build on earlier arrangements with OpenAI and Anthropic established in 2024 under the Biden administration, when CAISI was known as the U.S. AI Safety Institute. The initiative fulfills a pledge from the Trump administration’s July 2025 AI Action Plan to partner with tech companies to vet models for national security risks.
Source
Insurance Journal — Microsoft, Google and xAI to Give Government Early Access to AI Models
Why This Matters
This is a significant step toward government oversight of frontier AI systems, and it is happening through voluntary agreements rather than legislation. The explicit mention of Anthropic’s Mythos as a catalyst suggests that AI capabilities are advancing faster than policy can keep pace — and that specific models are now treated as national security concerns in their own right.
The bipartisan continuity is also worth noting: Biden-era safety agreements are being expanded under the current administration rather than scrapped. Whether voluntary pre-deployment testing will prove sufficient, or whether mandatory evaluation frameworks will eventually follow, remains the big open question. For now, the major labs are cooperating — but notably, these new agreements don’t include Anthropic or Meta, two other frontier model developers.