Microsoft, Google DeepMind, and xAI have agreed to let the US government test their AI models for national security risks, including cybersecurity, biosecurity, and chemical weapons capabilities, before they are released to the public. The agreement is intended to support responsible AI development.