Winning the AI Race: America’s AI Action Plan
- Laura Mahrenbach
- Aug 3
The Trump administration says it hopes to achieve “an industrial revolution, an information revolution, and a renaissance—all at once.”
Introduction
On July 23, 2025, the Trump administration released its AI Action Plan. The Action Plan aims to provide specific policy actions to “cement U.S. dominance in artificial intelligence,” according to the White House press release.
Principles of the American AI Action Plan
The Action Plan promotes three principles in American AI development.
American AI should “create high-paying jobs for American workers” and improve American standards of living. The Action Plan views AI as a complement rather than a replacement for human work while acknowledging that AI will cause major changes to the American workforce and require training and reskilling.
American AI should “be free from ideological bias and be designed to pursue objective truth” to ensure AI systems are “trustworthy.” This is considered particularly important as AI systems are integrated more deeply into people’s daily lives.
“Constant vigilance” is an important focus since AI – like other emerging technology – can be used and abused by malicious actors. AI itself can also generate additional risks.
Pillars of American AI
The Action Plan is organized into three pillars – Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security.
Pillar 1 | Accelerating AI innovation
This pillar depicts the U.S. as the contemporary global leader in AI innovation but argues that the federal government will need to make certain adjustments to ensure the U.S. retains its position. Emphasis is placed on regulatory adaptation, human capacity development, centralizing government action, and using technology to make manufacturing breakthroughs.
To that end, this pillar calls on federal agencies and actors to:
Limit regulations and standardize government AI use
Identify and modify federal regulations perceived as hindering AI innovation. Restrict federal funding to states whose “burdensome AI regulations” may “hinder the effectiveness of that funding.”
Eliminate misinformation, climate change and DEI elements from the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and limit federal contracting to "objective" and non-ideological LLMs.
Develop minimum data quality standards for AI model training. Publish guidelines for federal agencies to evaluate the reliability and performance of AI systems and schedule regular best practice exchanges.
Develop "regulatory sandboxes," "AI Centers of Excellence" and "AI testbeds" for testing AI regulations and tools and for translating and scaling the latter to the market.
Centralize and standardize government AI use by designating the Chief AI Officer Council (CAIOC) as “the primary venue for interagency coordination and collaboration on AI adoption.” Create modular systems (e.g., AI procurement toolbox that provides basic models which different agencies can then customize). Create federal knowledge transfer and talent-exchange programs. When doing this, prioritize DOD needs and facilities over those of other government branches.
Boost adoption of AI
Boost AI adoption by creating regulatory sandboxes to test and deploy AI tools, by launching domain-specific efforts to shape expectations and standards for AI use in specific fields, and by encouraging adoption among small and medium-sized businesses.
Prioritize AI skill development and education in federal funding streams and create financial incentives for the private sector to invest in AI adoption and training. Study labor market trends and provide “rapid retraining” for workers displaced by AI.
Create a supportive and secure environment for innovation
Create a "supportive environment" for open-source and open-weight AI models, including by increasing access to compute via financial market intervention and public-private technology- and resource-sharing partnerships, and by developing an AI R&D strategic plan to guide funding decisions.
Invest in “developing and scaling” manufacturing technologies. Identify supply chain challenges for robotics and drone manufacturing.
Increase investment in digital infrastructure (including secure compute environments) that enable AI-supported scientific research, in developing and publishing high-quality datasets, in experimental and frontier technology research, and in work that makes AI findings more interpretable and transparent.
Work with the private sector to "actively protect AI innovations from security risks" (12). Develop legal guidance and standards for identifying and addressing deepfakes.
Pillar 2 | Building American AI infrastructure
This pillar focuses on the digital and physical infrastructure requirements for cementing U.S. advantages in the field of AI. Main goals include securing a reliable electrical grid in the short- and long-term, ensuring sufficient numbers of trained workers to build and operate AI infrastructure and work in high-tech facilities, and improving federal coordination regarding risks to AI infrastructure.
Specific recommendations include the following:
Facilitate the building of infrastructure needed to meet the growing energy demands of data centers, and of the data centers themselves, including by creating new categorical exclusions under NEPA, expediting environmental reviews and permitting processes, and making federal lands available to developers.
Exclude foreign ICT providers from the U.S. AI computing stack.
Ensure electrical grid reliability by safeguarding existing assets, by not prematurely decommissioning potential power sources, by employing “advanced management technologies” and by “align[ing] financial incentives with the goal of grid stability.”
Create a friendly regulatory environment for and provide federal financing to boost domestic semiconductor manufacturing.
Protect “high-security data centers” from attacks by “nation-state actors” via new technical standards and “agency adoption of classified compute environments.”
Identify high-priority professions for building AI infrastructure and create or adjust curricula and study programs in primary, secondary and post-graduate education to fill any related employment gaps.
Create an AI Information Sharing and Analysis Center to share threat information across federal agencies and provide response guidelines.
Promote secure-by-design in AI technologies by updating DOD’s Responsible/Generative AI frameworks and guidelines and publishing new AI standards (e.g., related to intelligence work). Also promote best-practice AI responses in public and private sectors by ensuring AI is included in all response frameworks and by encouraging information sharing related to threats and vulnerabilities.
Pillar 3 | Leading in International AI Diplomacy and Security
The final pillar shifts to the international context, focusing on ensuring allies comply with U.S. government preferences related to AI governance, manufacturing and trade.
Federal agencies should work with the private sector and with international partners to:
Develop and export an American-made full technology stack to American allies.
“Leverage the U.S. position in international diplomatic and standard-setting bodies” to promote U.S. values and preferences in developing global AI governance frameworks.
Enhance enforcement of chip export controls to limit “foreign adversaries’” access to compute and “expand and increase end-use monitoring in countries where there is a high risk of diversion of advanced, U.S.-origin AI compute.” Develop new export controls on semiconductors and semiconductor manufacturing.
Leverage U.S. resources to "induce key allies to adopt complementary AI protection systems and export controls across the supply chain." Prioritize plurilateral over multilateral controls on the AI stack.
Take the lead in evaluating the security of frontier AI systems, including by attracting key personnel, standardizing evaluation processes and collaborating across federal agencies and with research institutions.
Ensure federally-funded biology research employs “robust nucleic acid sequence screening” and encourage data sharing between synthesis providers to enhance biosecurity.
Cascade’s take
The Action Plan includes over 90 policy recommendations, some quite specific while others remain more general. Many of the specific policy suggestions have already appeared in executive orders issued by the Trump White House. For example, the call to create and encourage export of an All-American AI Stack was the driving purpose of EO 14320. Likewise, streamlined permitting for energy and AI infrastructure projects is central to EO 14318 as well as being a prominent discussion topic in recent congressional hearings. This makes it likely that the administration will make progress towards advancing these specific policy goals, particularly those for which executive orders and/or the One Big Beautiful Bill Act have already provided a foundation.
One opportunity to follow here is the development of an AI research and development plan by the Office of Science and Technology Policy (OSTP), which will guide federal funding and research decisions upon its completion. Another is the commitment to support next-generation manufacturing. Here, the Action Plan identifies numerous existing funding sources and commits to soliciting private sector input about how the government interprets and responds to supply chain challenges, implying upcoming requests for information.
In contrast, the less concrete policy recommendations appear less certain of success. With the exception of biosecurity, discussions of AI threats are as vague as the responses proposed. Safety appears to be addressed primarily through the exclusion of foreigners from domestic AI systems; improving information about potential threats; and creating numerous federal frameworks, guidelines, toolkits and roadmaps to address each potential AI challenge.
This lack of specificity provides an opportunity for the private sector to shape how the federal government addresses cybersecurity in general, as well as within the context of AI and AI infrastructure development. The report indicates the government is willing to negotiate market access for industries which can successfully position themselves or their products as key contributors to long-term U.S. AI and/or economic leadership.
A final lesson from the AI Action Plan is that the Trump administration's domestic agenda remains front and center, at least on the surface. This means businesses should continue to be vigilant about how they position themselves when discussing AI and energy topics. Companies can consider framing their business decisions through the lens of national security to explicitly convey how they support American AI security, innovation and (global) dominance. That said, the domestic agenda does not always carry over to the more technical recommendations. For example, "avoiding ideological bias" is a top principle in the document. Yet very few policy suggestions touch on bias in training AI models or on the topics (DEI, climate change) generally associated with this administration's discussions of "bias" (see EO 14319 or EO 14151).