Jared Dunnmon: I think there’s a lot to be excited about in the action plan. My biggest concern from an implementation perspective is a say-do gap caused by a lack of resourcing on the government side to implement.
Aaron Bartnick: The question is whether the US can outcompete China in the AI race with these new rules. And the answer is, under this new framework, that is the only remaining choice.
Ashley Finan: I’ll be watching in the coming months to see whether these deals with nuclear-powered data centers or geothermal-powered data centers actually move towards putting steel and concrete in the ground, because that would be a really big win for the country.
Jason Bordoff: Last month, the Trump administration released what it calls an AI action plan, and the President signed several executive orders designed to promote and expand AI infrastructure. The plan and executive orders are aimed at removing regulatory hurdles and accelerating US dominance in the industry. They also have broad energy and security implications. There’s quite a lot to say about this raft of policies. But this week, I’m handing that job off to someone else.
This is Columbia Energy Exchange, a weekly podcast from the Center on Global Energy Policy at Columbia University. I’m Jason Bordoff.
On August 1st, we gathered some of the leading experts on artificial intelligence here at the Center on Global Energy Policy, to discuss the Trump administration’s AI action plan during a webinar. And so today on the show, we’re trying something different: sharing the recording of that webinar, which was moderated by my colleague David Sandalow, the Inaugural Fellow at the Center on Global Energy Policy.
I hope you enjoy their conversation.
David Sandalow: Greetings everyone. Thank you for joining us today. My name’s David Sandalow. I’m the inaugural fellow at the Center on Global Energy Policy at Columbia University and the host of the AI Energy and Climate Podcast. Today I’m delighted to welcome three colleagues from the Center on Global Energy Policy to discuss President Trump’s new AI action plan and the three executive orders related to that action plan on AI. By way of background, on July 23rd, just over a week ago, the Trump administration released the action plan and the three executive orders. The action plan outlines more than 90 policy recommendations for federal agencies focused on promoting innovation, reducing regulation, building AI infrastructure, promoting open source models, eliminating ideological bias in AI models, training workers to use AI, increasing AI exports and much more. They are very, very detailed. The first two lines of the action plan set the tone. They say, and I quote, “The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global standards and reap broad economic and military benefits.”
The action plan goes on to state, just to quote a few of the headline concepts, that the United States needs to innovate faster and more comprehensively than our competitors. We need to dismantle unnecessary regulatory barriers that hinder the private sector. In doing so, we need to build and maintain vast AI infrastructure and the energy to power it, and we need to establish American AI as the gold standard worldwide and ensure our allies are building on American technology. The AI action plan also states three more principles: that American workers are central, that AI systems should be free from ideological bias, and that we need to prevent advanced technologies from being misused and stolen by malicious actors. So as I mentioned, the Trump administration released three executive orders along with this action plan. The first is called “Preventing Woke AI in the Federal Government,” and it’s gotten a fair amount of attention and discussion; the second is on accelerating federal permitting of data center infrastructure; and the third is on promoting the export of the American AI technology stack.
I’d say these policies are, in many respects, very different from those of the Biden administration, although there are a few elements of similarity and continuity. So there’s lots to explore in these executive orders and the action plan, and I’m really thrilled, as I said, to be joined by three top experts from the Center on Global Energy Policy. Let me introduce them. First, Ashley Finan. Welcome! She is the Jay and Jill Bernstein Global Fellow at the Center on Global Energy Policy. She was previously Chief Science Officer for the National and Homeland Security Directorate at the Idaho National Laboratory, and before that she was director of the National Reactor Innovation Center at INL. She earned her PhD in Nuclear Science and Engineering at MIT.
Second, delighted to welcome Aaron Bartnick, who’s a global fellow at CGEP. He spent 15 years at the intersection of technology, national security, and global markets, most recently as assistant director for technology security and governance at the White House Office of Science and Technology Policy. While he was at OSTP, Aaron led a portfolio that included work on the Biden administration’s AI executive order and AI infrastructure executive order, as well as economic and research security.
And third, Jared Dunnmon was previously the technical director for AI at the Pentagon’s Defense Innovation Unit, where he served under the first Trump administration and the Biden administration. He’s been a postdoc at the Stanford AI Lab, a member of the early team at startup Snorkel AI and vice president of future technologies at the battery firm Our Next Energy. He recently started a new company and still spends some time with us at CGEP as a non-resident fellow, which we are grateful for. So we’re going to have a conversation among ourselves, then take questions from you, everyone listening from the audience.
And just to timestamp this, our conversation is taking place on August 1st, 2025. So Ashley, let me start with you, and first maybe we could all just lay the groundwork and describe for people who may not have followed this closely what’s in these materials that the Trump administration put out about a week ago. Pillar two of the action plan is on AI infrastructure, and I’d say it’s probably the part of the action plan that’s most directly relevant to energy and the energy work we do at the Center on Global Energy Policy. So maybe could you just start by telling us about pillar two and then the related executive order on federal permitting?
Ashley Finan: Yes, absolutely. Thank you, David. It’s great to be here with this fantastic panel. So as you said, pillar two is focused on AI infrastructure and building American AI infrastructure. It has a major focus on energy, as you said, and also on permitting reform. So it has a section on streamlining infrastructure development, which focuses on permitting reform and streamlining by using more categorical exclusions under NEPA, the National Environmental Policy Act, and expanding FAST-41 coverage for AI infrastructure projects. That’s the federal permitting acceleration process created under Title 41 of the FAST Act. It also directs simplifying permitting under the Clean Water Act, the Clean Air Act, and some other laws. It directs agencies to open up federal land for data center and energy infrastructure development. It includes some security mandates and also expands an AI for permitting program within DOE to help speed up environmental reviews using AI technology. It has a section on power grid expansion and modernization, and that’s a big focus here.
It talks about stabilizing today’s grid, trying to prevent retirements, increase reliability, and then also optimizing existing infrastructure with grid management and demand response strategies. And then finally, accelerating new generation. It specifically calls out dispatchable power and then specific advanced technologies including nuclear fission, enhanced geothermal, and nuclear fusion. And it calls on markets to be reformed to increase reliability. Really a big focus here on reliability and expansion of power. Pillar two also has a section on reviving US semiconductor manufacturing and expanding some of the CHIPS program and focusing research on AI-enhanced chip design. And then it has a section directing the building of high-security AI data centers for intelligence purposes and DOD and national security work, as well as developing national standards for classified AI compute environments, which is really important. And I think that’s great that that’s in there. It has a section on training workforce for AI infrastructure.
There’s a lot in here. It’s a big section, as you said, and a big action plan. So trying to launch an initiative to train workers in key roles, engineering roles, electricians, HVAC, things like that, expanding partnerships with education and employment actors to try to get apprenticeships and employer-driven training underway. And then it has a few other critical infrastructure security provisions that I think are worth mentioning briefly. It looks to bolster critical infrastructure cybersecurity. It establishes or directs the establishment of an AI information sharing and analysis center to be able to share AI-related vulnerabilities and look to address those. It promotes secure-by-design AI technologies and applications and also promotes a mature federal capacity for AI incident response. So the US government has significant incident response capabilities now to help with cybersecurity issues or cyber attacks in the public and private sectors. And this plan directs that they need to incorporate AI incident response into those existing capabilities. So at a high level, a lot of security and infrastructure build-out focusing on energy build-out, energy security, and energy reliability. And I think one differentiation from the Biden administration’s energy-related AI policies is less of a focus on environmental impacts and more of a focus on reliability and acceleration.
David Sandalow: Thank you, Ashley, for that terrific summary. Beyond the differences you just pointed out, a lack of focus on environmental protection and more of a focus on deregulation, it struck me as I was reading it that there are some similarities. Both the Biden and Trump administrations want to build data centers on federal land, and there are provisions in that area, I think. And then there’s a focus on security from both the Biden and Trump administrations. Do you agree with those assessments?
Ashley Finan: Yes, I agree. Looking at them side by side, there are a lot of similarities, including a lot of the National Environmental Policy Act recommendations relative to categorical exclusions and other things. But the Biden administration’s plans had more information in there about clean energy and carbon emissions and water usage and things like that.
David Sandalow: Great. Well, I want to come back to that actually and hit it in our conversation, but let’s go to the other parts of the action plan and turn to Aaron to talk about pillar three, which is on international issues. Aaron, you’ve got a lot of background in this area. Could you just summarize for us what’s in pillar three and the related EO on promoting the export of American AI technology?
Aaron Bartnick: Yeah, absolutely. Thanks very much for having me. So I have to say, overall, I was surprised by how much continuity there was between this rollout and the prior administration’s AI policy architecture, with the notable exception of pillar three, on leading in international AI diplomacy and security. I think this is the most sweeping change in terms of how the Trump administration is approaching AI policy versus its predecessor. So every administration is trying to strike a balance between, on the one hand, supporting American jobs and innovation by helping our companies earn global revenue and market share, and on the other hand, protecting US national security by keeping dangerous technology out of the hands of our adversaries. At a very high level, the Biden AI strategy was to provide financial support through the CHIPS and Science Act and then largely play defense with regulation while trusting US companies to outcompete their rivals.
The Trump approach essentially flips this on its head and says that the export controls that prevent the proliferation of advanced technology overly constrain US companies from gaining global revenue and market share, which makes the US AI stack, as the Trump administration is referring to it, less competitive. So the Trump administration has essentially given the green light to sell as much AI hardware and software overseas as possible to try to ensure that the world runs on American AI. It’s too early to say who will end up being right, and it’s reasonable to question the Biden approach given the spotty track record on export control enforcement. But the important thing to understand about the Trump approach is that it’s irreversible. Once that American hardware and software is overseas, the IP as well as any know-how or reverse engineering capabilities that you’re able to gain from that, that is there to stay.
The pillar also instructs Commerce, the Department of Commerce, to plug loopholes in chip export controls and strengthen export control enforcement. I think this is great. NVIDIA has exported more than $1 billion in chips so far this year, which is nearly impossible to monitor when you have fewer than literally a dozen export control officers across all of our embassies overseas. So this is sorely needed, but it is seemingly at odds with the overall strategy. And I think they’ve tried to square this circle by focusing the new controls on semiconductor manufacturing subsystems rather than the finished chips. But there’s a lot that remains to be seen here. Similarly, the pillar calls for working with partners and allies to better align export controls globally. Again, this is great, but it’s quite notable given the administration’s different approach to allies in most other contexts. Finally, this pillar crucially calls out the importance of biosecurity. This is one of the most plausible and frankly scary near-term risks for malicious AI use, and it’s a very, very good sign to see that they included it.
David Sandalow: Thanks, Aaron. Pillar three and the related executive order create a whole program where companies are going to come together and make proposals to the Department of Commerce for US government support in this area. Do you have any assessment of the potential there from your work on this?
Aaron Bartnick: Yeah, I was going to save this for the “what to watch in the weeks ahead,” but yeah, so there’s a 90-day deadline from July 23rd, so I think that is October 20th, for the Department of Commerce to establish an American AI exports program that defines what the American AI stack is going to be, identifies target countries and regions that are high priorities for us to gain market share in, establishes the stack as the underlying system for AI in those countries, and recommends government incentives to help make that possible. And so I think you are going to see a tremendous amount of business activity in trying to shape what that export program is going to look like, because it has obviously tremendous potential for the companies and the sub-sectors that are included.
David Sandalow: Terrific. Let’s turn to Jared, who actually knows a lot about these topics as well on the export side, but Jared, could you just walk us through what’s in pillar one on innovation and then add any thoughts you might have on what Aaron and Ashley have said?
Jared Dunnmon: Yeah, absolutely. So I’ll start my comments by just framing the action plan with what in my mind is the key comment from it, which is basically the second sentence, which says that “whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.” And I think that you can look at the vast majority of the rest of the plan as taking that assumption as the driving assumption behind the actions that are being proposed. That framing makes it clear that the Trump administration views the way to win the air quotes “AI race” with China as a competition for market share in the global economy. As Aaron said, the communication of that idea of an American AI technology stack that includes semiconductors, data, models, and applications running on energy infrastructure that is sufficient to support the many potential benefits of AI for science, economic growth, and military power is the vision here.
And so in general, many of the items in the action plan come across as guidance towards that objective rather than, I’d say, explicit direction to take particular actions. There are some exceptions to that, particularly in some of the infrastructure areas that Ashley alluded to, but the thing that I would say upfront is that implementation is really going to determine the degree to which some of these ideas are put into practice, particularly given that, as I think Aaron and Ashley also pointed out, some of the recommendations do have some tension with other administration objectives. So just in pillar one, which is really just accelerating AI innovation, to give you a very high-level summary, which I’m happy to dive deeper into at any point in the Q&A or later, this first pillar actually encompasses eight major areas of focus, and it begins with removing regulatory barriers by identifying and eliminating regulations that limit AI deployment through a combination of public RFIs and review by the Office of Management and Budget, reviewing and reversing FTC actions that are perceived as restrictive, and potentially limiting federal funding to states with AI-restrictive regulations.
So this is the administration looking at what tools it has in the executive branch toolbox to incentivize the rollback of regulations that are perceived to be in the way of AI deployment, which, if you take my upfront framing as the framing driving this, is a very logical thing to pursue. I will say that one thing that’s perhaps worth noting is that because AI is a very horizontal technology, one could say, “Well, this regulation in the healthcare sector is really, really limiting deployment of AI,” when really it’s about potentially something else. So when we get into implementation, it’ll be interesting to see how that AI hook is used in a broader deregulatory push across industry. So I would say that’s something to watch. The plan also emphasizes ensuring ideological neutrality by removing references to things like climate change and misinformation from the NIST AI Risk Management Framework, updating federal procurement guidelines to require objective AI systems, and developing evaluations.
Interestingly, those evaluations would assess whether LLMs—large language models—align with Chinese Communist Party talking points. Implementation-wise, that will be interesting to watch because determining objectivity in any context can be difficult. And so there are in the executive orders some principles for unbiased AI that mostly apply, I think, to federal acquisition, but it’ll be worth looking at the exact mechanisms that are used to implement that. One of the biggest strategic shifts you see in this plan, one that is less consistent with the previous administration, is supporting open source AI in a pretty substantial way, with the administration really promoting both open source and open weight systems for global adoption. It also does have some things that are much more aligned with the previous administration: improving compute access for academics and researchers, and creating a new AI R&D strategic plan, which spiritually is a thing that we want to keep doing.
And then supporting things like the National AI Research Resource. I think one bellwether of the degree to which these are going to be nice words versus concrete action is the degree of resourcing you actually see for things like the National AI Research Resource and the degree to which researchers are going to be provided government access to compute capabilities that will allow them to continue to develop pretty capable open source models and deploy them outside of the large industry labs. From an industry adoption perspective, the plan calls for creating industry-specific AI standards and benchmarks, establishing regulatory sandboxes for AI testing. So the idea that you should be able to go and test AI that might do things you might not think it would do, but kind of in a regulatory sandbox where you’re not going to get dinged for it. And then conducting net assessments, interestingly, comparing US AI capabilities to those of other countries.
That last one’s particularly interesting because spiritually it is—again, there’s this race and we’re trying to understand where we’re at. On the other hand, from a tension perspective, it’s interesting that that dovetails with the Pentagon effectively shutting down the Office of Net Assessment relatively recently. And so that’s one of those places where you see tension between what’s stated in the plan and then perhaps some of the way the administration’s implementing it to date. Obviously, there could be other plans in the works. From a workforce development perspective, there are initiatives including making AI training programs eligible for tax exemptions, providing retraining funds for industries affected by AI, deploying AI-driven manufacturing technologies, and identifying automation bottlenecks through industry stakeholder meetings. And again, what’s actually pretty cool here—and again, credit where credit’s due—this is a major issue of thinking about how is AI going to affect the workforce?
How is it actually going to affect the broader workforce? The plan tasks folks at the Department of Labor and across the interagency with trying to get the data on what’s happening and then thinking about mechanisms one could use to make sure that the impact of AI across the board is as positive as possible. And so that’s actually one of the meatier sections in that pillar. The last two things in this pillar: a big focus on accelerating scientific progress, which is exciting to see. And in line with, I think, some efforts from the Biden administration, they propose building cloud-enabled automated labs with federal funding, supporting focused research organizations that use AI, incentivizing the release of scientific datasets, and creating federally funded enclaves for AI computation. There are even some cool things like trying to do whole genome sequencing on federal lands to just create a bunch of data for use in biological models.
There are some cool things in there on that front. Then the last piece is really about government AI deployment mandates. It’s interesting that they mandate that agencies provide frontier large language models to employees who would benefit from them. It creates an OMB-coordinated AI procurement toolbox and requires DOD to negotiate priority access to inference capacity with private sector entities, which is welcome because it’s an area that DOD ideally would be able to accelerate. And then finally, there are some pieces at the end about protecting American AI inventions and combating synthetic media through NIST and DOJ. And so across the board, there’s actually a good amount of pretty exciting stuff in here, some of which is, as I mentioned, consistent with the previous administration, and some of which represents strategic shifts, particularly on open source and a couple of other things. The key thing I would flag here is that the devil is going to be in the details on implementation, and the things to watch are going to be how these things are actually implemented and which pieces of this action plan end up with the level of resourcing they need, both from an executive branch staffing perspective and frankly from a legislative support perspective, to provide meaningful progress.
So back over to the rest of the team for the next stage. But that’s my take on pillar one.
David Sandalow: Jared, thank you for that tremendous summary. It just really underscores how much detail there is in these documents, and it’s a lot to follow up on. Let me just ask you about what you highlighted at the beginning, which is the second sentence, which is, as you said, in your view, kind of captures it all. And I’m just going to read it again: that “whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.” Do you have any thoughts about how we should think about what the largest AI ecosystem means? Is that the question of chip sales? Is it a question of reaching artificial general intelligence faster than anybody else? What are the metrics for this? And that for Jared or Ashley or Aaron? I’m wondering if you’ve got any thoughts on that?
Jared Dunnmon: That’s a great question. I’ll let Ashley or Aaron go first. I just blabbered for a long time.
Aaron Bartnick: Sure. So I’m not sure what—I mean, there’s a bunch of different metrics you could use. You could say it’s driven by global sales. If Commerce identifies Nigeria as a strategic market as part of this new rollout, then one way you could measure it would be like American versus Chinese sales for chips, for models, for who’s underwriting the data centers. There’s a bunch of different ways you can measure it, but I think there’s a broader comment here on how this ecosystem approach is very much taking a VC model: growth in market share at any cost, build the ecosystem, crowd out your competitors. And that has proven to work very well in tech startups, but now we’re applying it to national security. And that makes sense given the background of many of the leading policymakers in the Trump administration. And maybe hopefully it’ll work, but if it doesn’t, the consequences, as I mentioned earlier, are going to be pretty significant and importantly irreversible.
David Sandalow: Ashley, looks like you had a thought.
Ashley Finan: Yeah, I’ll add to that. I think back to the dawn of nuclear energy technology as an analog here, and Aaron and I have talked a lot about the parallels between nuclear technology and these technologies and others. And I think if we look at this AI ecosystem, the largest AI ecosystem is one thing, but the impact of that is setting global AI standards and reaping broad economic and military benefits. And so I don’t know how to measure the largest, but if I think back to how one tried to do that in nuclear energy, it was having the cutting-edge research and the best technology and a good academic, research, and commercial base built up around that, as well as government R&D. And then it was also being a vendor of choice. So selling: not always having top market share everywhere, but being a major supplier in order to have influence, and exerting a lot of leadership, policy leadership, in the standards for that technology. And that could be safety, security, and in the case of nuclear, non-proliferation standards. And I think that looking at how you would do that with AI and data centers would be really valuable, and we could look to that as an example.
David Sandalow: Jared, please.
Jared Dunnmon: I’ll just follow up by saying that if you were to measure this, to answer that question, the executive orders kind of get after it. So I’ll give an abstraction that’s, I think, along the same lines. You can conceptualize the ecosystem as layers—moving from furthest away from the user to closest to the user—there’s chips, there’s data, there’s algorithms, there’s applications. And from a chips perspective, how do you measure if you’re owning the ecosystem? Are your chips being sold everywhere? Is everybody running on your chips? And do you as a country have a huge amount of the inference capacity or the training capacity that folks around the world use, for instance via your web services? Right now, American companies have a huge percentage of the data center market, as an example.
So you could measure that. That’s one thing you could measure. On the data front is: of the models that are being trained, where’s the data that they’re using coming from? Where are the datasets being—who is actually getting the datasets that are most commercially useful and where are datasets being licensed from? As an example, is it from companies in other parts of the world or is it from companies and research labs in America? Thirdly, algorithms are a funny one because a lot of the times that actually gets published, and so that actually gets absorbed pretty quickly. And so you can continue to measure things like are folks running algorithms that come from here, there, or the other. But I actually think that’s less important than just implementation. So what frameworks are people using? So if you look at the major tools that are used to build machine learning models, so TensorFlow, PyTorch are good examples—one’s maintained by Google, the other one’s maintained by Meta, and there are others, but those are two examples.
The global usership of those platforms is far, far larger than that of their Chinese equivalents. People may have heard of things like PaddlePaddle—that’s Baidu’s framework—so you could measure this by your developer population: what are developers using? And to give you an idea of what victory might look like, this is a bit of an extreme analogy, but think about programming languages like C++ or even Python. People are not trying to replace those. They are everywhere. That’s kind of the argument here: if you get frameworks like that and some of the underlying technology stack to a place like that, nobody’s trying to rip them out. I mean, folks have tried to rip Windows out and it’s been really hard. So the argument here is you want to get to a place like that, and you can potentially measure it by users of those frameworks.
And the last piece, on the application side, is economic: where’s the value being created? Are American companies reaping the value of AI products, or is it going other places? And that’s a pretty straightforward one to measure, at least theoretically. So the last piece, to abstract up, is, okay, what are you trying to accomplish with all those things? You own those and you can measure that ecosystem. What are you actually trying to accomplish? And I think it’s laid out pretty well here, and I would agree with it. The way I usually frame it when I talk about this is that there are three things you want to accelerate here. You want to accelerate scientific progress, your economic growth, and your military power. And how is owning the AI ecosystem contributing to each of those things? And so I also think that you can do the bottom-up thing that I just said, which is measure each of those components. You can also do the top-down thing of measuring high-level scientific progress, economic growth, and military power, looking under those and asking, of those things and how they’re growing, what are the determinants, and to the degree that AI is a determinant, where is it having more of an effect? Is it in America or is it outside? I think you could do both of those things.
David Sandalow: I want to get to strengths and weaknesses in your view of these policy announcements. And also I want to focus on the energy pieces since we’re a center on global energy policy here. And so maybe just to turn to those for a moment and then get any of your thoughts on strengths and weaknesses in that context. We have a question from a participant, an anonymous participant who asked us, “Given that nuclear and advanced geothermal can ramp up a very limited amount in the near term, do you think it will be mainly gas meeting the increasing AI demand in the near term?” Anyone have any thoughts they would like to offer on that?
Ashley Finan: Well, it does look like there will be a lot of gas moving to meet the increasing demand for energy, but there is also work to very quickly restart some existing nuclear power plants, like Palisades, for example, restarting this fall. And I think the Trump executive orders related to AI and energy, and also those related to nuclear energy specifically, focus on how to move those things faster. So I think that yes, we’ll see gas in the interim and a move towards building, as fast as feasible, some of these other baseload large electric power sources.
David Sandalow: Yeah, I mean I’d agree with that, and I’d answer the same way. I think there’s going to be significant growth in gas to power data centers, but this also raises an issue involving solar and wind power, which I think is worth touching on. There’s a tension, a pretty significant tension, between the administration’s desire to bring on data center power sources quickly and its policies with respect to solar and wind power. Right now, the fastest way to bring on new power in the US electric grid is with solar and wind power. I’ve heard, among other people, the CEO of NextEra, John Ketchum, speak to that pretty strongly. And I’ve heard assessments that one can bring on solar and wind power within 16 to 18 months, but there is a long time lag with respect to ordering turbines for natural gas and even longer timelines for nuclear and geothermal power. Meanwhile, under the Big Beautiful Bill, there’s a tax increase for solar and wind power, and the administration has made pretty clear that it is not supportive of solar and wind power and has cut support for solar and wind power programs. So I think that’s a tension in this suite of policies that’s going to play out in the months and years ahead.
More broadly, any thoughts from any of you on the strengths you see in this plan? I want to talk a little about strengths, a little bit about weaknesses. Maybe turn to you, Ashley, first. You highlighted a few of these, I think, in your remarks, but what would you consider to be the principal strengths from your standpoint?
Ashley Finan: I saw a few strengths here. One is that there is room to streamline some of the permitting processes. I think that specific and appropriate categorical exclusions can be efficient and can allow us to focus National Environmental Policy Act resources on higher-impact projects. So if it’s implemented well, those make sense. Again, as Jared pointed out, it’s all in the implementation here. The use of federal land could be a positive step forward, and that can provide a variety of different benefits, like incorporating research and development uses of AI into something sited at a national lab or other federal land site. And DOE has already announced four sites it has selected to support the Trump administration’s push to site data centers on federal lands. They announced that on July 24th, the day after the executive orders, and those sites were Idaho National Lab, Oak Ridge Reservation, Paducah Gaseous Diffusion Plant, and Savannah River Site.
So they’re already moving forward with that, and those sites are all moving forward aggressively to try to do this. So I think that’s positive. And then as I mentioned in my earlier remarks, I think that the incorporation of AI into sharing critical infrastructure vulnerabilities and protection approaches across private and public actors and incorporating AI into our incident response capabilities is very important to our critical infrastructure reliability and security. And then being proactive on developing AI standards for national security use as well as capacity and applications for national security is really important, and I’m glad to see those in the action plan. So those are some strengths from my perspective.
David Sandalow: Great, thanks Ashley. Let’s turn to Aaron and Jared then quickly for thoughts on strengths. Aaron Bartnick?
Aaron Bartnick: Yeah, I would absolutely echo Ashley’s points—making it easier to build the next generation of critical infrastructure is a very good thing, and in many ways it’s consistent with the Biden administration’s executive order on this as well. And so there’s a lot to be supportive of there, even if this version of it rolls back more environmental regulations than some would’ve preferred. We already talked about the focus on closing export control loopholes and strengthening enforcement and better coordinating with allies. All of this is great. Similarly, the calls for investing in AI-enabled scientific research and AI skills training for workers in pillar one are also unambiguously good.
I think there’s an open question, however, on how the guidance in this action plan is going to butt up against conflicting approaches in other elements of Trump policy, right? Coordinating with allies on export controls is an excellent idea, but it stands in pretty stark contrast to how the administration is engaging with allies on trade, for example. Similarly, investing in AI-enabled research is excellent, and it’s going to be critical for winning the research competition that will manifest in products 20 years down the road, but it also stands in quite stark contrast to what is frankly a pretty draconian approach to federal research in other contexts.
David Sandalow: Jared, any thoughts on strengths and we’ll get to weaknesses?
Jared Dunnmon: Yeah, briefly, I would echo what Ashley and Aaron just said. The big highlights and strengths for me were the focus on science and AI-driven scientific research, and the focus on infrastructure and the need to deregulate in ways that accelerate our ability to deploy AI systems, which I think is right. And also, frankly, the letter of what’s written there in terms of the AI semiconductor manufacturing export controls and enforcing existing export controls, focusing on some of the subsystems pieces. I think those are pretty substantial strengths. And there’s a lot in the pieces about thinking through the effect on the workforce and actually having the Department of Labor think explicitly about what AI’s effect on the overall workforce will be. I think those are all really positive things. There are a lot of unambiguously positive things in this action plan, and overall there’s a lot to be excited about. Here, again, I’ll hammer home that the devil will be in the details and implementation.
David Sandalow: Let me just jump in on the labor point you just made, because we have a question from Niall in the chat, who asks, “Do the new rules make it easier for foreign AI researchers to come and work in the US?” Thoughts on that? From Jared or anybody?
Jared Dunnmon: I mean not that I’m aware of. Aaron, anything you’re seeing?
Aaron Bartnick: No, I don’t think they do. That’s obviously a critical component of this and one of the United States’ two really major competitive advantages over the last 80 years is that we’ve been the top global destination for talent. And this is another area where the goals of the AI action plan are excellent, but there is a tension with how we are treating the attraction of foreign research talent in other policy areas.
David Sandalow: So I think essentially I’ve heard real praise for parts of this action plan and part of the executive orders from each of you, but also some concerns that you’ve highlighted. Are there any weaknesses or concerns that you’ve had that you haven’t had a chance to speak to yet that you’d like to raise? Anyone?
Ashley Finan: I think that there are some gaps that aren’t covered here. They could be covered in the future, but I didn’t see coverage in these executive orders or the action plan of the impacts on communities and, in some ways, the impacts on the environment. So cost of power, for example, trying to control the cost of electric power for consumers—I think that’s going to be an important issue moving forward that can be addressed elsewhere. And then water supply I think could be an increasing tension around data centers. It already is an issue, and so that will need to be addressed, and that impacts communities. And then water protection: with some of the streamlining of regulation in the action plan and the executive order, there could be a concern about sensitive resources like wetlands being less protected than they were in the past. So we really need to take a look at that, potentially.
And then I think that we’ve touched a little bit on implementation of export controls. I think, again, as Jared has said, implementation is key, and then investment—there’s a lot that the administration wants to accomplish here. An executive order is not a place where you can make new investments. The EO and the action plan direct the use of discretionary resources that are appropriate to be put towards these priorities. But in terms of making big new investments, we haven’t seen that yet. So that’s not something that would appear here, but to make all of this happen, it’s probably needed. Those are a few examples.
David Sandalow: Thanks, Ashley. And just to pick out one of the points you made—for me, one of the biggest surprises reading through this was the lack of attention to increasing electricity rates around the country. It’s not ignored by any means, but it doesn’t seem to be a major focus. And this is becoming such a high-profile political issue in some parts of the United States, particularly the East Coast and the PJM region. Data center power demand is projected to continue to drive increasing electricity rates, and PUCs around the country are starting to pay pretty significant attention to this. Again, it’s referred to in these materials, but I think it’s fair to say it doesn’t seem to be a major focus, and I would anticipate some dialogue about that and some policies in the months and years ahead. Jared or Aaron, anything you’d like to highlight on this topic?
Jared Dunnmon: Sure, I’ll hit three things. One is that while I really like and appreciate the focus on evaluations for AI systems, making AI objective is a very hard thing to do and measure. On one hand there are some good things here: the Center for AI Standards and Innovation, which is kind of the follow-on to the AI Safety Institute, is not only still there but is actually called out for doing some really useful things. But the question is, given personnel moves in the government, how is that going to be staffed, how are you going to actually hire for it, and how are you going to be able to help create that evaluation ecosystem in the context of current federal approaches to employment? I think that’s a really important question to figure out, because what you don’t want is to nominally have this place that’s supposed to be helping curate the evaluation ecosystem and then have it really not be resourced to do it.
And that gets to Ashley’s point about resourcing—because some of that may have to happen through legislation. So that’s thing one: there’s a bit of a worry about a say-do gap on evaluations from a resourcing perspective. Number two, there was a nit that I took issue with, where they mandated that federal employees have access to frontier large language models. That word “frontier”—the way that’s usually interpreted is the very, very latest models trained by the very biggest AI labs. I’m not sure you actually need frontier models to do useful things a lot of the time. You can use models that are not frontier models to do useful things. And so I’m not really sure you needed that, and I hope it doesn’t cause downstream implementation challenges. The last piece, and I know everybody on this call will have opinions about this one, so I’m going to say it and let others chime in, is that if you look at the plan for victory on energy, it appears to be dispatchable power.
That’s the word. And you could make an argument that basically everything could be dispatchable power if you have batteries in the middle, no matter what your intermittency is upstream, et cetera. In some sense that’s great, but it’s a missed opportunity. If I put my nat-sec hat on and look at the reason China is building out enormous amounts of solar power (more solar power in a month than the United States installs from any source over many months), you could make an argument and say, oh, well that’s great, they’re doing this for climate reasons, et cetera. With my nat-sec hat on, I would say it’s absolutely not for climate reasons. It’s because China is one of the most energy-hungry and energy-insecure countries in the world.
And they know that one of the ways to get energy that can’t be denied by an adversary intervening in the Strait of Malacca or anywhere else is to make sure you can generate it locally. Solar power will do that. So I think the plan misses the forest for the trees on the energy piece. Yes, dispatchable power is, on principle, the right answer. But if your goal is to maximize the amount of dispatchable power you have, and you continue to let your energy supply markets in some core areas be dominated by the country you’ve called out in your national defense strategy as your primary pacing threat, you’re letting yourself be hamstrung by losing that capacity to that same adversary. I think it’s a missed opportunity not to call out that problem and think about the strategic security implications of not being able to deploy as much power as physically possible, because we’re not thinking intentionally about how we’re competing in some of these areas where, as you mentioned, David, the objective reality is that the vast majority of power being deployed is coming from a set of sources that this action plan doesn’t come out and endorse.
David Sandalow: Jared, I’m in strong agreement with the point you just made, and it’s a great elaboration on the tension I framed earlier between these policies and the solar and wind policies being pursued in other contexts. I also want to turn to Aaron in a minute, but first I want to pick up on your point about objectivity and the language about objective truth and free speech in here. Because for me, this was maybe the oddest part of the AI action plan. Just to quote, there’s a sentence on page three that says, “We must ensure that free speech flourishes in the era of AI and that AI procured by the federal government objectively reflects truth rather than social engineering agendas.” And then the very next sentence, literally the very next sentence, says that the administration is going to revise the AI Risk Management Framework from the National Institute of Standards and Technology to eliminate references to climate change, after saying that objective truth and avoiding social engineering agendas is a priority.
My impression is the drafters didn’t seem to recognize the contradiction between calling for free speech and then eliminating references to climate change, or between objectively reflecting truth and then calling for eliminating references to climate change. And putting aside the debate about this issue, the overwhelming majority of scientists around the world, of course, accept the reality of climate change, and the overwhelming majority of governments do as well. I think there are 195 governments that are members of the Paris Agreement on climate change, and there are three that aren’t right now, I think Libya, Yemen, and Iran, and we’re about to join them in not being members of the Paris Agreement. So I think it’s hard to maintain that it’s a social engineering agenda to discuss something that’s widely accepted by the vast majority of climate scientists and the vast majority of the world. It gets to the point about who’s going to decide what objective truth is and what social engineering is. Those are very, very hard questions, but the signals in here are not encouraging in that regard.
Aaron Bartnick: And the oddest thing about it, David, is that if you actually read the AI Risk Management Framework, there are no references to climate change in there. And so I think the concern here is that this is a deeply serious issue, a deeply serious competition with very meaningful economic and security consequences. And the concern here is that things like this will distract from the much more serious issues at stake. Anyway, I know we have a bunch of good questions and not a lot of time, so I’ll leave it there.
Jared Dunnmon: I will make a 20-second comment that I think is important here, which is that all those things are true. And David, what you pointed out on the climate change side ties in with some of the conversations around the 2009 endangerment finding, et cetera. So there’s a lot going on here. But the thing I’ll also point out is that if you actually take a deregulatory approach and let markets decide, and you look at the biggest funders of trying to deploy SMRs and trying to deploy clean power for data centers, a lot of it is the big tech companies, and credit where credit’s due. They’ve actually put a lot of money toward deploying power, including traditional power, because they’re trying to find it wherever they can, but they’re also choosing to pay more than they have to for cleaner, future-looking energy sources when they could be spending a lot less. And that’s a reflection of what their consumers want. So I think there’s also an argument to be made that if you actually do take a deregulatory approach, the approach a lot of the companies will take is one that tries to address that desire from their consumers. And so it’ll be interesting to see where those dynamics land.
David Sandalow: You mentioned tech companies—just one question on the stakeholder impacts here. Did anyone see anything in these orders or in the AI action plan that does not reflect the views of the tech industry overall? I guess you just pointed to one piece: there are a lot of hyperscalers who strongly support solar and wind and other low-carbon power, and that seems to be less of a priority here. But broadly speaking, are there positions taken here that are not advanced by the tech industry? Recognizing, of course, that the tech industry is not a monolith.
Aaron Bartnick: With the exception of Meta, who has very much embraced open source with Llama, I think a lot of the other model companies are not going to be thrilled to see the full-throated endorsement of open models, for example.
David Sandalow: That’s a powerful point, Aaron. Absolutely.
Jared Dunnmon: A hundred percent agree with that, by the way. And I think, for example, OpenAI saw it coming and they’ve been talking about releasing some of the—but yeah, I mean they’re not going to be super excited about that. I think that’s right. I also think they’re going to be perplexed by what they have to do to align with the removal-of-ideological-bias angle of this. How do you measure that, and how do you convince yourself that you’ve done it? When you get down to brass-tacks implementation, it’s actually, again, a very hard thing to do and measure. And so I think that will cause consternation, put it that way.
David Sandalow: So we’re down to our last four minutes here, and there’s so much to discuss. One topic we haven’t had a chance to get to involves states and the role of states. And just to flag that, I think there’s some interesting material in here about states. I’ll just quote a provision, which says, “AI is far too important to smother in bureaucracy at this early stage, whether at the state or federal level. The federal government should not allow AI-related federal funding to be directed towards states with burdensome AI regulations that waste those funds but should not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” So I think there’s an attempt to balance here between an impulse for uniform federal standards in this area and a recognition of states’ rights. And I think that reflects the ongoing dialogue on this topic. Let me start to wrap up, because I think we have time for about 60 seconds from each of you, and ask for any concluding thoughts and, in particular, anything you’d keep your eye on in the months and years ahead. Why don’t I start with you, Jared. You’ve already said the devil’s in the details and really highlighted that as a major theme of your remarks. What particular devil should we be looking for in the details? Any thoughts about that?
Jared Dunnmon: At a high level, like I said, I think there’s a lot to be excited about in the action plan. Of the government plans I’ve seen on AI, I actually think it’s overall quite good, and a lot of folks clearly thought a lot about these things. My biggest concern from an implementation perspective is a say-do gap caused by a lack of resourcing on the government side to implement. We say we want to coordinate evaluations, we want to make sure that existing export controls are enforced and that new ones are enforced, we want to make sure that this thing happens and that thing happens and that thing happens. And for a lot of those things, I say, that’s great, but you actually have to have people to do it, and the government actually has to be able to do it.
And my worry would be that, via a combination of executive branch priorities and a lack of resourcing from the legislative branch, you end up in a situation where we’ve said a lot of the right things and we just don’t have the firepower to do it on the government side. So again, I’m not giving you anything really specific—it’s just my really big overall concern. I hope that we don’t have that. I hope that we’re able to execute, and it’s on us as a country to figure out how to execute. But that’s my really big one.
David Sandalow: Aaron, any quick thoughts?
Aaron Bartnick: Yeah, we talked about the American AI exports program. It says there will be a public call for proposals from industry-led consortia for inclusion in this program, and I think we’re going to see a lot of very interesting bedfellows as those consortia are put together. But the bigger thing to look at, and this is in response to a question from the audience, is whether the US can outcompete China in the AI race with these new rules. It’s a great question, and the answer is, under this new framework, that is the only remaining choice. Absent some of these prior guardrails, American companies really only have one option to protect both US national security and their own market share: build, grow, and win. So that’s the question of the day, and I hope the answer is yes.
David Sandalow: Ashley, closing thoughts?
Ashley Finan: I echo what Jared and Aaron have said—the resourcing and the competition. I will just add that I think this increase in energy demand is a great opportunity, a catalyst for moving some of these advanced technologies from their current stage to deployment at scale. So I’ll be watching in the coming months to see whether these deals for nuclear-powered or geothermal-powered data centers actually move towards putting steel and concrete in the ground, because that would be a really big win for the country.
David Sandalow: Well, I feel like we could keep talking for an hour or two or maybe all afternoon about these topics, and we got to some of the questions in the chat. Sorry we weren’t able to get to all the really good questions in the chat. Jared Dunnmon, Aaron Bartnick, Ashley Finan, thank you so much for joining us. My name is David Sandalow at the Center on Global Energy Policy. If you’re interested in these topics, please listen to the AI Energy and Climate Podcast, as well as the Columbia Energy Exchange. Two different podcasts that touch on these issues here at the Center on Global Energy Policy. We host rapid response webinars like this from time to time. Please keep looking for them. Thank you for joining us today. Have a great day and a great week ahead. All the best, everyone.
Jason Bordoff: Many thanks to David Sandalow for moderating that great discussion – and to my colleagues, Aaron Bartnick, Jared Dunnmon, and Ashley Finan for sharing their expertise. And thanks to all of you for listening to this week’s episode of Columbia Energy Exchange.
The show is brought to you by the Center on Global Energy Policy at Columbia University’s School of International and Public Affairs.
The show is hosted by me, Jason Bordoff, and Bill Loveless.
The show is produced by Mary Catherine O’Connor.
Additional support from Caroline Pitman and Kyu Lee.
Gregg Vilfranc engineered the show.
For more information about the podcast, or the Center on Global Energy Policy, please visit us online at energypolicy.columbia.edu or follow us on social media @columbiaUEnergy.
And please, if you feel inclined, give us a rating on Apple podcasts — it really helps us out.
Thanks again for listening, we’ll see you next week.