
Securing the AI Frontier: Why Zero Trust Is Central to “America’s AI Action Plan”

August 1, 2025

Duncan Greatwood, CEO, Xage Security

In July 2025, the White House released a new document titled Winning the Race: America’s AI Action Plan, outlining the United States’ comprehensive strategy to secure global leadership in artificial intelligence across technological, economic, and national security domains. Framed as a critical race akin to the space race, the plan is driven by the vision of leveraging AI to spark an industrial and information revolution. 

It is structured around three core pillars: 1) accelerating AI innovation through deregulation, open-source encouragement, scientific investment, and workforce empowerment; 2) building robust AI infrastructure including data centers, energy grids, semiconductor manufacturing, and cybersecurity systems; and 3) leading international AI diplomacy and security by promoting U.S.-based AI exports, countering adversarial influence, and enforcing export controls. 

The document also emphasizes ensuring trustworthy AI and protecting U.S. innovations from misuse. Through detailed policy actions and agency-led initiatives, the Action Plan presents a roadmap aimed at achieving “unquestioned and unchallenged global technological dominance” for the United States in the AI era.


Pillar I: Accelerate AI Innovation

Pillar I aims to boost U.S. AI leadership by cutting regulations, promoting open-source models, and supporting private-sector innovation. It focuses on expanding AI adoption across industries, protecting free speech in AI, training the workforce, investing in scientific research and datasets, and advancing government and defense AI capabilities.

So, what will it take to achieve these goals? Several of the recommended policy actions stood out to us.

“Establish secure compute environments within NSF [National Science Foundation] and DOE [Department of Energy] to enable secure AI use-cases for controlled access to restricted Federal data.” 

“Build World-Class Scientific Datasets,” p. 9 

“Launch a technology development program led by the Defense Advanced Research Projects Agency in collaboration with CAISI [NIST’s Center for AI Standards and Innovation] at DOC [Department of Commerce] and NSF, to advance AI interpretability, AI control systems, and adversarial robustness.”

“Invest in AI Interpretability, Control, and Robustness Breakthroughs,” p. 9

These two recommendations will require proper access control to federal data. Some data sets can no doubt simply be placed in the public domain. However, for more sensitive data of the kind that will be hosted at NSF or DOE, a different approach is needed. There will be a wide variety of users and applications seeking to access the data—government, regulators, academics, the U.S. private sector, and likely also researchers from U.S. allies—each with their own access entitlements and their own AI systems that they want to use. These overlapping entitlements will need to be matched with granular Zero Trust enforcement and control that applies to users and to AI agents alike.
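To make that concrete, here is a minimal sketch of deny-by-default entitlement checking in which human users and AI agents pass through the same policy path. The class names, dataset labels, and policy contents below are illustrative assumptions, not drawn from the Action Plan or any particular product.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entitlement:
    dataset: str            # e.g. a hypothetical "nsf/climate-open"
    operations: frozenset   # e.g. frozenset({"read", "query"})

@dataclass
class Principal:
    principal_id: str
    kind: str               # "human" or "ai_agent"
    entitlements: list = field(default_factory=list)

def is_permitted(principal: Principal, dataset: str, operation: str) -> bool:
    """Deny by default: allow only when an entitlement explicitly grants
    this operation on this dataset. Humans and AI agents go through the
    same check."""
    return any(
        e.dataset == dataset and operation in e.operations
        for e in principal.entitlements
    )

# A researcher's AI agent may query an open dataset but not a restricted one.
agent = Principal(
    "agent-042", "ai_agent",
    [Entitlement("nsf/climate-open", frozenset({"read", "query"}))],
)
assert is_permitted(agent, "nsf/climate-open", "query")
assert not is_permitted(agent, "doe/restricted-materials", "read")
```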

In addition to access control, tamperproof audit logs are critical for real-time monitoring, post-incident forensics, and demonstrating that AI operations remain safe and compliant. These logs must provide immutable, time-stamped records of every system interaction—whether by a human user or an AI agent—to enable organizations to detect anomalies, trace the root cause of incidents, and prove adherence to regulatory and internal policy frameworks. Just as importantly, they must be tamperproof to prevent attackers from covering their tracks by altering the logs. As AI systems become more autonomous and integrated into critical systems, the ability to track, verify, and validate every decision or action taken by AI is essential.
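One well-understood way to make a log tamper-evident is hash chaining: each entry commits to the previous one, so altering or deleting any record breaks the chain. The sketch below shows the idea under simplifying assumptions; a production system would add cryptographic signing, replication, and write-once storage.

```python
import hashlib
import json
import time

class AuditLog:
    """Minimal hash-chained audit log; field names are illustrative."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, target: str) -> dict:
        entry = {
            "ts": time.time(),        # time-stamped record
            "actor": actor,           # human user or AI agent
            "action": action,
            "target": target,
            "prev": self._last_hash,  # chain to the prior entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any altered or deleted entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("agent-042", "query", "nsf/climate-open")
assert log.verify()
log.entries[0]["action"] = "delete"  # simulated tampering...
assert not log.verify()              # ...is detected
```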

“Prioritize investment in theoretical, computational, and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI, reflecting this priority in the forthcoming National AI R&D Strategic Plan.”

“Advance the Science of AI,” p. 9

“Invest, via DOE and NSF, in the development of AI testbeds for piloting AI systems in secure, real-world settings, allowing researchers to prototype new AI systems and translate them to the market. Such testbeds would encourage participation by broad multistakeholder teams and span a wide variety of economic verticals touched by AI, including agriculture, transportation, and healthcare delivery.”

“Invest in AI Interpretability, Control, and Robustness Breakthroughs,” p. 10

AI testbeds also require proper Zero Trust access control and policy-based enforcement, both for what data AI agents can access and for what AI agents themselves are allowed to do—for instance, what commands each AI agent is allowed to issue, and to whom.
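As a hedged sketch of what per-agent command control could look like, the example below limits each agent to an explicit allowlist of commands per target and denies everything else. The agent names, targets, and commands are invented for illustration.

```python
# Hypothetical per-agent policy: agent -> target -> allowed commands.
AGENT_POLICY = {
    "agent-tuner": {
        "grid-sim-testbed": {"read_telemetry", "propose_setpoint"},
    },
    "agent-harvest": {
        "agri-testbed": {"read_telemetry"},
    },
}

def authorize_command(agent_id: str, target: str, command: str) -> bool:
    """Deny by default: the agent, the target, and the command must all be
    explicitly allowed before the command is forwarded."""
    return command in AGENT_POLICY.get(agent_id, {}).get(target, set())

assert authorize_command("agent-tuner", "grid-sim-testbed", "propose_setpoint")
# The same agent cannot issue commands to a testbed it was never granted:
assert not authorize_command("agent-tuner", "agri-testbed", "read_telemetry")
```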

Pillar II: Build American AI Infrastructure

Pillar II focuses on rapidly expanding the physical and technical foundations needed for AI leadership, ensuring that the infrastructure, software, and security stack—especially for critical sectors like energy—is built using American products. It calls for streamlining permits, modernizing the electric grid, revitalizing domestic semiconductor manufacturing, and directing the construction of high-security AI data centers for military and intelligence communities. The plan also emphasizes training a skilled workforce and strengthening cybersecurity through secure-by-design AI systems and robust incident response to support national security and resilient AI growth.

“Create a strategic blueprint for navigating the complex energy landscape of the 21st century. By stabilizing the grid of today, optimizing existing grid resources, and growing the grid for the future, the United States can rise to the challenge of winning the AI race while also delivering a reliable and affordable power grid for all Americans.”

“Develop a Grid to Match the Pace of AI Innovation,” p. 16

Stabilizing the grid requires adherence to the latest NERC CIP standards, which have expanded significantly. NERC’s updated guidelines now require even low-impact assets to comply with cybersecurity regulations. Previously, only medium- and high-impact systems in transmission, generation, and distribution were included. This shift underscores a growing recognition: to secure the grid, every component matters—regardless of size.

At the same time, the energy landscape is evolving rapidly as AI infrastructure demands soar. Many tech companies are now building or buying their own distributed energy generation—including solar farms, battery systems, nuclear power, and microgrids—to meet the power needs of AI data centers. These energy generation resources, now within NERC’s regulatory scope, must also be secured to maintain compliance and operational integrity.

As AI begins to tune and optimize the grid in real time, a unified, Zero Trust foundation will be essential to ensure resilience, reliability, and regulatory compliance across the board.

“Led by DHS [Department of Homeland Security], issue and maintain guidance to private sector entities on remediating and responding to AI-specific vulnerabilities and threats.”

“Bolster Critical Infrastructure Cybersecurity,” p. 16

As AI becomes fundamental to critical industries, new approaches will be needed to provide rigorous guarantees against AI data leakage and to protect against rogue AI agents, whether malicious or simply malfunctioning. Furthermore, protection against hostile use of AI—for instance, to automate experimental attack attempts against the grid—will also be needed. 
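One possible shape for such protections, sketched below under assumed patterns and limits: a gateway that rate-limits each agent's command stream, blunting automated high-volume attack attempts, and redacts outbound responses that match sensitive-data patterns. Everything here, from the regex to the budget, is an illustrative assumption.

```python
import re
import time
from collections import defaultdict, deque

# Assumed sensitive-data patterns and a per-agent command budget.
SENSITIVE = re.compile(r"\b(?:\d{3}-\d{2}-\d{4}|SECRET//[A-Z]+)\b")
MAX_COMMANDS_PER_MINUTE = 30

_history = defaultdict(deque)  # agent_id -> timestamps of recent commands

def gate_command(agent_id: str) -> bool:
    """Refuse an agent's command once it exceeds the per-minute budget,
    a crude but effective brake on runaway or hostile automation."""
    now = time.time()
    window = _history[agent_id]
    while window and now - window[0] > 60:
        window.popleft()  # drop timestamps older than one minute
    if len(window) >= MAX_COMMANDS_PER_MINUTE:
        return False
    window.append(now)
    return True

def gate_response(payload: str) -> str:
    """Redact outbound text that matches known sensitive-data patterns."""
    return SENSITIVE.sub("[REDACTED]", payload)

assert gate_command("agent-042")
assert gate_response("grid status: SECRET//NOFORN") == "grid status: [REDACTED]"
```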

Pillar III: Lead in International AI Diplomacy and Security

Pillar III focuses on strengthening America’s global AI leadership by exporting U.S. AI technology to allies, countering adversarial influence—especially from China—in international standards bodies, and enforcing strict export controls on AI-related technologies. It promotes global alignment on AI protections, evaluates national security risks in frontier models, and invests in biosecurity to prevent misuse of AI in developing harmful biological agents.

Sharing data and AI research with allied nations is an area where granular entitlements for access and actions will be crucial. Designated researchers and AI agents should be able to access the data they’re entitled to and carry out the actions for which they’re authorized, but not abuse that access—whether deliberately or accidentally. This will require another layer of Zero Trust cybersecurity architecture to ensure safety and compliance, and to conveniently enable and underpin cooperation among U.S. allies.
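For illustration, cross-border grants could carry an explicit scope and expiry, so that an allied researcher or AI agent retains access only for the datasets, actions, and duration actually agreed. The model below is an assumption about how that might look, not a description of any existing system.

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class Grant:
    """A scoped, time-limited entitlement for a researcher or AI agent."""
    principal_id: str
    dataset: str
    actions: frozenset
    expires_at: float  # epoch seconds

def check_grant(grant: Grant, principal_id: str, dataset: str,
                action: str, now: float | None = None) -> bool:
    """Allow only an exact match on principal, dataset, and action,
    and only while the grant has not expired."""
    now = time.time() if now is None else now
    return (grant.principal_id == principal_id
            and grant.dataset == dataset
            and action in grant.actions
            and now < grant.expires_at)

# A 30-day grant lets an allied lab query one dataset, nothing more.
g = Grant("ally-lab-7", "nsf/climate-open", frozenset({"query"}),
          expires_at=time.time() + 30 * 24 * 3600)
assert check_grant(g, "ally-lab-7", "nsf/climate-open", "query")
assert not check_grant(g, "ally-lab-7", "nsf/climate-open", "export")
```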

Conclusion

America’s AI Action Plan outlines an ambitious roadmap to secure U.S. leadership in AI—across innovation, infrastructure, and global diplomacy. But turning this vision into reality will require more than high-level policies. It demands a secure foundation for how AI systems interact with data, infrastructure, and people. 

As AI becomes deeply embedded in critical sectors like energy and defense, the need for granular, enforceable controls becomes paramount. Whether it’s protecting sensitive government datasets, safeguarding real-time grid optimization, or securely collaborating with allies, a Zero Trust approach will be essential—not only for human users, but also for the AI agents they deploy. This is the moment to embed trust, cybersecurity certainty, accountability, and resilience into the architecture of the AI era—and ensure that innovation doesn’t outpace our ability to protect it.