Improving AI Strategic Coordination
Since World War II, the U.S. Department of Energy (DOE) has been at the forefront of most of the groundbreaking and world-changing revolutions in science and technology. Artificial intelligence (AI), including machine learning (ML), is an ideal tool for deriving new insights from the analysis of very large data sets, and it becomes more useful as the speed and computational power of today’s supercomputers grow. With all its research, computing, and funding strength, it should come as little surprise that the DOE is spearheading the charge to advance research into AI and its applications across a wide range of industries and uses. “It is the mission of this office,” explains Pamela Isom, director of the Artificial Intelligence and Technology Office (AITO) within DOE, “to transform the department into a world-leading AI enterprise by accelerating research, development, delivery, demonstration, and adoption of responsible and trustworthy AI.”
Pamela joined me on The Business of Government Hour to share how the department is maximizing the impacts of AI through strategic coordination and planning. She describes the mission and purpose of the office she leads, discusses the importance of responsible and trustworthy AI, and highlights key priorities, from national security to climate resilience and energy justice. The following is an edited excerpt of our discussion, complemented with additional research.
What is the history and mission of the Artificial Intelligence and Technology Office within the U.S. Department of Energy? The AI and Tech Office was established in September 2019. Some refer to it as AITO, but I call it AI and Tech. I became the director in August of 2021, providing new leadership and focus while building on the successes and lessons learned of the prior leadership. We know that AI is pervasive in everything, in science and in engineering, including how we address climate change, carbon capture, and more. The mission of this office is to transform the Department of Energy into a world-leading AI enterprise by accelerating the research, development, deployment, demonstration, and adoption of responsible and trustworthy AI.
I like to emphasize the responsible and trustworthy aspect of artificial intelligence in our mission. We work to accelerate AI-enabled capabilities through strategic portfolio alignment while scaling department-wide use cases that advance the agency’s core missions. In addition, we advocate for department program offices pursuing AI efforts. We provide advice on trustworthy AI and machine learning strategies. We also expand public, private, and international partnerships, policy, and innovations, all in support of national AI leadership and innovation. More focused on impact than inventory, we enable the department to advance AI capabilities where it makes sense.
How is your office organized? The office is organized into three pillars: leadership and administration, AI portfolio and program optimization, and AI strategy and partnership development. Led by a seasoned program manager, the portfolio and program optimization team conducts strategic portfolio analysis and alignment of the AI investments. It looks at the investments and assesses what the department is doing in the AI space. This team keeps an eye out for capabilities that ensure departmental and national security priorities are met, and it brings gap analysis to the forefront. It also facilitates department-wide trustworthy and responsible AI practices through workshops, guides ethical AI practices, and focuses on delivering toolkits that enable teams to mitigate risks. Recently, this team stood up the responsible and trustworthy AI task force.
The strategy and partnership development team, also led by a seasoned program manager, builds and ensures robust partnerships and customer excellence across internal, external, and international boundaries. This team leads the strategic communications for the office on AI. This includes the development of the DOE-wide AI strategy. The team provides elite, innovative governance through operationalization and administration of the AI Advancement Council.
Finally, there is the leadership and administration area. Our focus is on understanding the impacts that we’re making and how we can make an even greater impact and maximize the return on the AI investments across the department. The leadership aspect of this team focuses on workforce optimization and performance management.
What about your specific duties and responsibilities as the director of the office? I am responsible for ensuring that we are delivering cross-cutting innovation and impacts through AI. That’s really my responsibility. Also, I have a passion for equity. So responsible and trustworthy AI is at the center of my personal attention. If we are going to lead AI at a global scale, which is our mission, then focusing on humanitarian impacts is a must. I’m helping to make sure we track societal impacts: that we are verifying, validating, and looking to apply AI for goodness, equity, and positive societal impacts.
We must be mindful, however, of the adversarial outcomes. You’ll find me operating throughout the department, working with the national labs, and doing all that I can to facilitate success for my office as well as the department. Then lastly, I like creative problem solving and removing roadblocks. That’s where you’ll find me: somewhere trying to solve a problem and clear roadblocks out of my team’s way.
What has surprised you most, Pamela, since joining AI and Tech? I would like to see more talent and capabilities in the department, particularly around responsible and trustworthy AI. We have formed a Responsible and Trustworthy AI (R&T) task force, and the participation is fantastic, with great representation from across the department. An example of why this is important: we want AI to fuel distributed energy fairness, not widen equity gaps. AI can help if we integrate R&T practices. The R&T AI task force is a pleasant outcome, and a little surprising because of some unknowns on my part. We have workforce development on the agenda with some successful actions. The budget outlook appears better for 2023, and I hope our impacts sustain that upward trend. One of our goals is to introduce operational practices and behaviors to guide ethical AI project management, development, and operations. We are very interested in an integrated environment for AI development that can be leveraged across the department.
Given your private and public sector experience, how have both of those experiences informed the way you lead? What are the characteristics in your mind of an effective leader? I like to lead by example. I like to lead with practical use cases so people can relate to and understand what it is that they’re doing and why they’re doing it. I don’t like to lead with the style and approach of “just do it because it has to be done.” I do like to hear from others prior to making decisions, but I am a decision-maker. I believe in teams that are strengthened by diversity; the greater the diversity, the greater the possible impact and success of our efforts. I would rather follow the process and offer innovations to make things better and more efficient, and I love when my team does the same. I expect them to look at the process. Is there a way to make it better? What can we do to address the challenge in a way that fuels innovation? I want us to follow the rules, but if a team is just following orders and hitting metrics for their own sake, I’m not impressed.
Would you highlight the priorities of your office? Do they align with the National AI Initiative Office? The focus on responsible and trustworthy AI is a top priority. My team is focused on innovative AI governance where responsible and trustworthy AI outcomes are the standard. Development of the departmental AI strategy is another key priority. This also includes establishment of the AI Advancement Council.
Another priority is on the continued development of strategic partnerships. We’re going to continue to evolve that strategic partnership framework. Workforce education, training, and upskilling are all key priorities. We must do more around training and upskilling—it’s essential. Each of these priorities pretty much aligns with the National AI Initiative Office and its strategy.
Would you briefly describe AI and outline the AI lifecycle? We know that AI is a disruptive technology. It’s becoming intertwined with all that we do. It is about getting computer systems to perform tasks that mimic what humans would do, to mimic human intelligence. This is basically how we look at artificial intelligence. AI will never think exactly like a human, but it can get close. The key is that, when applied effectively, it can do things much faster.
The AI lifecycle is composed of four core areas. It starts with the supply chain, where you want to understand the hardware, software, and components of the AI. The next area is data acquisition, establishing data provenance and chain of custody. The models don’t perform without data; once you combine the data and the algorithms, that’s when you have the models. The next part of the lifecycle is deployment. You want to deploy the models so that they go to the right destinations in a secured manner. The last aspect of the lifecycle involves monitoring: AI should be monitored after it’s deployed, and we should continue to measure its performance relative to what we intended the AI to do. This encompasses harms monitoring and modeling as well as AI assurances in the face of cyber and adversarial threats.
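To make those four stages concrete, here is a minimal sketch in Python of one system moving through the lifecycle. The stage names follow the discussion above; every field, check, and threshold (the component inventory, the provenance record, the approved destinations, the 20 percent drift alert) is an illustrative assumption, not DOE tooling.

```python
# A toy walk through the four lifecycle stages: supply chain, data
# acquisition, deployment, and monitoring. All names and values here are
# illustrative assumptions, not DOE tooling.
from dataclasses import dataclass, field
from statistics import mean
from typing import Optional

@dataclass
class AISystem:
    components: dict                                # stage 1: supply chain inventory
    provenance: dict = field(default_factory=dict)  # stage 2: data provenance record
    data: list = field(default_factory=list)
    model: Optional[float] = None                   # toy "model": a learned mean
    deployed_to: Optional[str] = None

def acquire_data(system, readings, source, custodian):
    """Stage 2: record where the data came from and who has handled it."""
    system.data = readings
    system.provenance = {"source": source, "chain_of_custody": [custodian]}

def train(system):
    """Combine the data and the algorithm to produce the model (here, a mean)."""
    system.model = mean(system.data)

def deploy(system, destination):
    """Stage 3: send the model only to an approved, secured destination."""
    if destination not in {"secured-edge", "secured-cloud"}:
        raise ValueError(f"unapproved deployment target: {destination}")
    system.deployed_to = destination

def monitor(system, new_readings, drift_threshold=0.2):
    """Stage 4: keep measuring performance after deployment."""
    drift = abs(mean(new_readings) - system.model) / abs(system.model)
    return {"drift": round(drift, 3), "alert": drift > drift_threshold}

# One system moving through all four stages with toy sensor data.
system = AISystem(components={"hardware": "gpu-node", "software": ["python-stdlib"]})
acquire_data(system, [10.0, 11.0, 9.5, 10.5], source="sensor-A", custodian="lab-team")
train(system)
deploy(system, "secured-edge")
print(monitor(system, [13.5, 14.0, 13.0]))  # shifted inputs -> {'drift': 0.317, 'alert': True}
```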
What is your office doing in AI governance and risk management? The AI Advancement Council will play a key role in the area of governance and risk management development. Our approach to governance involves understanding our inventory, making decisions, and providing recommendations on how to advance the outcomes of our AI investments. When it comes to governance, one of the things that we’re doing is infusing risk management and risk mitigation. Our data is more vulnerable when it comes to AI because of the way the data is utilized and accessed.
The risk management aspect of what we’re doing is there to minimize those risks and make sure that we’re thinking about things we can do to mitigate them. We deal often with adversarial AI. We are finalizing an Artificial Intelligence Risk Management Playbook with a planned release in 2023. We are working on the playbook with the National AI Initiative Office and the National Institute of Standards and Technology (NIST), along with some industry partners. Once released, the product will evolve, and we welcome feedback. This playbook captures risk scenarios and provides prescriptive guidance to mitigate those risks so that AI decisions are responsible and trustworthy. The playbook even takes into consideration mitigations that are relevant to edge devices like unmanned systems and personal devices. Edge AI systems allow teams, such as our emergency responders, to act quickly on data right where it’s captured. It’s important to have a robust AI Risk Management Framework since AI is being used in critical infrastructure and is, as a result, vulnerable.
A part of our responsible and trustworthy principles is that we stay close to the models after they are deployed. Our approach around risk mitigation is to be proactive, try to prevent as much as possible, and then understand the risk mitigation techniques that can be applied across the entire AI lifecycle.
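As a rough sketch of what a playbook-style pairing of risk scenarios with prescriptive, lifecycle-aware mitigations could look like, consider the following. The scenario names, stage labels, and mitigations are invented for illustration and are not contents of the DOE playbook.

```python
# Illustrative playbook entries pairing a risk scenario with prescriptive,
# lifecycle-aware mitigations. Scenarios and mitigations are invented
# examples, not contents of the DOE playbook.
PLAYBOOK = {
    "data-poisoning": {
        "lifecycle_stage": "data acquisition",
        "mitigations": [
            "verify data provenance and chain of custody before training",
            "hold out a trusted validation set to detect tampering",
        ],
    },
    "adversarial-input-on-edge-device": {
        "lifecycle_stage": "deployment",
        "mitigations": [
            "sign and verify models pushed to edge devices",
            "sanity-check sensor inputs before inference",
        ],
    },
    "model-drift": {
        "lifecycle_stage": "monitoring",
        "mitigations": [
            "track performance against deployment-time baselines",
            "retrain or roll back when drift exceeds an agreed threshold",
        ],
    },
}

def guidance(scenario):
    """Look up the prescriptive mitigations for a named risk scenario."""
    entry = PLAYBOOK.get(scenario)
    return entry["mitigations"] if entry else ["escalate: scenario not in playbook"]

print(guidance("model-drift"))
```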
Would you tell us more about ethical AI? Why is it so important, and how does it contribute to the pursuit of trustworthy AI? Ethical AI is the conscience of AI. AI doesn’t have awareness of itself; it can only separate right from wrong based on data that has the labels “right” and “wrong” attached to it. The only moral compass in AI is that of its developers, alongside the interdisciplinary teams and data that set the bar for what is right and what is wrong.
Ethical AI is designed and deployed to deliver equality, fairness, justice, safety, and integrity. It can save lives, but unethical AI can erode public trust and slow the progress and adoption of AI. For example, we want to make sure that communities are properly represented in data sets and in the inputs to the AI model. We need a diverse set of individuals who either validate the outcomes of the AI or are involved up front in what we call “human in the loop.” It is one of the best ways to prevent bias. You need a diverse set of folks who can see information from different perspectives.
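As one hedged illustration of the kind of check such a review process might automate, the sketch below compares a model’s positive-outcome rate across two groups and flags large gaps for human review. The toy predictions, group labels, and 10 percent threshold are assumptions for illustration, not DOE practice.

```python
# A toy fairness check: compare a model's positive-outcome rate across
# demographic groups and route large gaps to human reviewers. The data,
# groups, and 10% threshold are illustrative, not DOE practice.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive predictions per demographic group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

def flag_for_human_review(rates, max_gap=0.10):
    """Flag the model when group outcome rates diverge beyond the threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

# Toy example: predictions (1 = favorable outcome) for two groups.
preds  = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
needs_review, gap = flag_for_human_review(rates)
print(rates)                                   # {'A': 0.8, 'B': 0.4}
print("route to human reviewers:", needs_review, f"(gap = {gap:.0%})")
```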
We are focused on proactively detecting, assessing, and mitigating the impacts of bias in systems. We are focused on AI as a force multiplier for equity, what we call equity AI systems. We are accelerating the research, development, deployment, demonstration, and adoption of responsible and trustworthy AI, developing tools to audit and certify AI systems as responsible and trustworthy. We’ve convened the Responsible and Trustworthy AI Task Force, which is going to be looking at how we institutionalize AI independent verification and validation teams. We don’t want to do this in a vacuum. We want inputs from across the department.