DHS Unveils Framework for Safe and Secure Deployment of AI in Critical Infrastructure
WASHINGTON, Nov. 15, 2024 -- On Thursday, the Department of Homeland Security (DHS) released a set of recommendations for the safe and secure development and deployment of Artificial Intelligence (AI) in critical infrastructure, the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). This first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators – as well as the civil society and public sector entities that protect and advocate for consumers.
The Artificial Intelligence Safety and Security Board (“Board”), a public-private advisory committee established by DHS Secretary Alejandro N. Mayorkas, identified the need for clear guidance on how each layer of the AI supply chain can do its part to ensure that AI is deployed safely and securely in U.S. critical infrastructure. This product is the culmination of considerable dialogue and debate among the Board, which is composed of AI leaders representing industry, academia, civil society, and the public sector. The report complements other work on AI safety carried out by the Administration, such as guidance from the AI Safety Institute on managing a wide range of misuse and accident risks.
America’s critical infrastructure – the systems that power our homes and businesses, deliver clean water, allow us to travel safely, facilitate the digital networks that connect us, and much more – is vital to domestic and global safety and stability. These sectors are increasingly deploying AI to improve the services they provide, build resilience, and counter threats. AI is, for example, helping to quickly detect earthquakes and predict aftershocks, prevent blackouts and other electric-service interruptions, and sort and distribute mail to American households. These uses do not come without risk, and vulnerabilities introduced by the implementation of this technology may expose critical systems to failures or manipulation by nefarious actors. Given the increasingly interconnected nature of these systems, their disruption can have devastating consequences for homeland security.
“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more,” said Secretary Alejandro N. Mayorkas. “The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow. I am grateful for the diverse expertise of the Artificial Intelligence Safety and Security Board and its members, each of whom informed these guidelines with their own real-world experiences developing, deploying, and promoting the responsible use of this extraordinary technology. I urge every executive, developer, and elected official to adopt and use this Framework to help build a safer future for all.”
If adopted and implemented by the stakeholders involved in the development, use, and deployment of AI in U.S. critical infrastructure, this voluntary Framework will enhance the harmonization of and help operationalize safety and security practices, improve the delivery of critical services, enhance trust and transparency among entities, protect civil rights and civil liberties, and advance AI safety and security research that will further enable critical infrastructure to deploy emerging technology responsibly. Despite the growing importance of this technology to critical infrastructure, no comprehensive regulation currently exists.
DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the Framework recommends actions directed to each of the key stakeholders supporting the development and deployment of AI in U.S. critical infrastructure as follows:
- Cloud and compute infrastructure providers play an important role in securing the environments used to develop and deploy AI in critical infrastructure, from vetting hardware and software suppliers to instituting strong access management and protecting the physical security of data centers powering AI systems. The Framework encourages them to support customers and processes further downstream of AI development by monitoring for anomalous activity and establishing clear pathways to report suspicious and harmful activities.
- AI developers develop, train, and/or enable critical infrastructure to access AI models, often through software tools or specific applications. The Framework recommends that AI developers adopt a Secure by Design approach, evaluate dangerous capabilities of AI models, and ensure model alignment with human-centric values. The Framework further encourages AI developers to implement strong privacy practices; conduct evaluations that test for possible biases, failure modes, and vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.
- Critical infrastructure owners and operators manage the secure operations and maintenance of key systems, which increasingly rely on AI to reduce costs, improve reliability, and boost efficiency. They are looking to procure, configure, and deploy AI in a manner that protects the safety and security of their systems. The Framework recommends a number of practices focused on the deployment level of AI systems, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency regarding the use of AI to provide goods, services, or benefits to the public. The Framework encourages critical infrastructure entities to play an active role in monitoring the performance of these AI systems and to share results with AI developers and researchers to help them better understand the relationship between model behavior and real-world outcomes.
- Civil society, including universities, research institutions, and consumer advocates engaged on issues of AI safety and security, is critical to measuring and improving the impact of AI on individuals and communities. The Framework encourages civil society’s continued engagement on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases. The Framework envisions an active role for civil society in informing the values and safeguards that will shape AI system development and deployment in essential services.
- Public sector entities, including federal, state, local, tribal, and territorial governments, are essential to the responsible adoption of AI in critical infrastructure, from supporting the use of this technology to improve public services to advancing standards of practice for AI safety and security through statutory and regulatory action. The United States is a world leader in AI; accordingly, the Framework encourages continued cooperation between the federal government and international partners to protect all global citizens, as well as collaboration across all levels of government to fund and support efforts to advance foundational research on AI safety and security.
President Biden directed Secretary Mayorkas to establish the Board to advise the Secretary, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. Secretary Mayorkas convened the Board for the first time in May 2024, and Board Members identified a number of issues affecting the safe use and deployment of this technology, including the lack of common approaches for the deployment of AI, physical security flaws, and a reluctance to share information within industries.
The Framework is designed to help address these concerns and complements and advances existing guidance and analysis from the White House, the AI Safety Institute, the Cybersecurity and Infrastructure Security Agency, and other federal partners.
“Ensuring the safe, secure, and trustworthy development and use of AI is vital to the future of American innovation and critical to our national security. This new Framework will complement the work we’re doing at the Department of Commerce to help ensure AI is responsibly deployed across our critical infrastructure to help protect our fellow Americans and secure the future of the American economy.” – Gina Raimondo, Secretary of Commerce
“The Framework correctly identifies that AI systems may present both opportunities and challenges for critical infrastructure. Its developer-focused provisions highlight the importance of evaluating model capabilities, performing security testing, and building secure internal systems. These are key areas for continued analysis and discussion as our understanding of AI capabilities and their implications for critical infrastructure continues to evolve.” – Dario Amodei, CEO and Co-Founder, Anthropic
“I would like to thank the Board for their leadership in developing this important Framework and appreciate the opportunity to provide input that reflects critical infrastructure needs. AI holds the promise to create significant opportunities for our world, but we must ensure the technology is deployed thoughtfully and responsibly. The Framework, developed through countless hours of collaboration and negotiation, provides a foundation for how business, government, and all segments of our society can work together to enhance accountability, integration, and cooperation. I’m looking forward to continued work with our partners in this effort.” – Ed Bastian, CEO, Delta Air Lines
“The AI Roles and Responsibilities Framework promotes collaboration among all key stakeholders with a goal of establishing clear guidelines that prioritize trust, transparency and accountability — all essential elements in harnessing AI's enormous potential for innovation while safeguarding critical services. Salesforce is committed to humans and AI working together to advance critical infrastructure industries in the U.S. We support this framework as a vital step toward shaping the future of AI in a safe and sustainable manner.” – Marc Benioff, Chair and CEO, Salesforce
“Humane Intelligence fully endorses the ‘Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,’ developed by the AI Safety and Security Board. This comprehensive framework offers essential guidance for the responsible and secure use of AI across the United States. As an organization dedicated to advancing safe and ethical AI practices, we believe the voluntary responsibilities outlined are crucial steps toward enhancing the safety, security, and trustworthiness of AI systems. By addressing five key roles – cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society, and the public sector – the Framework thoughtfully recognizes the diverse stakeholders involved in safeguarding our nation’s critical infrastructure. The emphasis on securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact aligns closely with our mission. We commend the AI Safety and Security Board for providing clear technical and process recommendations that will help ensure AI systems not only function effectively but also serve the public good in a safe and ethical manner. Humane Intelligence is committed to supporting these principles and will continue working with partners across sectors to promote the responsible development and deployment of AI in critical infrastructure.” – Dr. Rumman Chowdhury, CEO & Co-founder, Humane Intelligence
“This Framework recognizes that proper governance of AI in the critical infrastructure ecosystem is a multistakeholder endeavor. If companies, governments, and NGOs embrace the voluntary roles and responsibilities this Framework envisions, deployment of AI in critical infrastructure is more likely to protect security, privacy, civil rights, and civil liberties than would otherwise be the case.” – Alexandra Reeve Givens, President and CEO, Center for Democracy & Technology
“Artificial intelligence has incredible potential to create efficiencies and innovations, and this Framework takes a thoughtful approach to balancing those opportunities with the risks and challenges it creates. Partnership and collaboration between the public and private sectors will be critical as we work to incorporate these advances into infrastructure and services while also taking steps to mitigate potential harm. This Framework represents an important step towards fostering accountability, safety, and security while embracing this technology and the future.” – Bruce Harrell, Mayor of Seattle
“We are pleased that the Roles and Responsibilities Framework prioritizes civil rights to ensure the equitable deployment of AI. The Framework reflects an understanding that in order for our nation’s critical infrastructure to be best protected, AI must first be safe and effective. That starts with ensuring that all applications of AI both defend and promote equal opportunity. The DHS Framework makes significant progress toward meeting those goals.” – Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law
“We are proud to be part of the U.S. Department of Homeland Security’s AI Safety and Security Board to develop a Framework that will help encourage the responsible use of AI in the energy industry while ensuring critical infrastructure is protected from cyber threats. With our companywide focus on safety, resilience, and driving innovation, we plan to adopt the Framework in the relevant aspects of our business to promote the further integration of advanced AI technologies in support of sustainable energy development.” – Vicki Hollub, President and CEO, Occidental Petroleum
“As we move into the AI era, our foremost responsibility is ensuring these technologies are safe and beneficial. The DHS AI Framework provides guiding principles that will help us safeguard society, and we support this effort.” – Jensen Huang, Founder and CEO, NVIDIA
“The DHS Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure is a powerful tool to help guide the responsible deployment of AI across America’s critical infrastructure and IBM is proud to support its development. We look forward to continuing to work with the Department to promote shared and individual responsibilities in the advancement of trusted AI systems.” – Arvind Krishna, Chairman and CEO, IBM
“Academia and civil society are vital to deploying AI in critical infrastructure safely. This is a crucial, nonpartisan issue with profound impacts on the nation’s well-being. This Framework reaffirms the commitment to security, transparency, and public trust. Through rigorous research and cross-sector collaboration, we can help create a resilient AI ecosystem that prioritizes the public good.” – Fei-Fei Li, Ph.D., Co-Director, Stanford Institute for Human-Centered Artificial Intelligence
“Artificial Intelligence technology is already here. The only question is whether we choose to be proactive or reactive when it comes to leveraging the benefits of AI and guarding against vulnerabilities. I applaud the Biden-Harris Administration and the work of the U.S. Department of Homeland Security’s AI Safety and Security Board for their commitment to seizing this moment and putting forth a responsible Framework that will benefit the American people. In partnership, Maryland will continue to work with federal leaders to unlock the power of innovation so we can deliver real results for our communities.” – Wes Moore, Governor of Maryland
“Technology must be built on a foundation of integrity at the highest levels, and DHS’s Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure will ensure the public and private sectors work closely together to enable AI solutions that are secure, reliable, and trustworthy. As a leader in networking and security that will connect and protect the responsible AI revolution, Cisco is proud to have contributed to the Framework alongside important government, industry, and civil society partners. We look forward to supporting the efforts by Secretary Mayorkas and the Department of Homeland Security.” – Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable
“The collaboration between government, industry, and civil society organizations proved beneficial in establishing the DHS 'Roles and Responsibilities Framework for AI in Critical Infrastructure' to protect the nation's assets. The Framework lays out principles for safe and secure AI that averts anticipated and unforeseen risks, and places equal importance on the preservation of civil and human rights for the people and communities impacted by emerging technologies. The Board's intention to harmonize these goals is a promising first step in the future application and adherence to the Framework.” – Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution
“The use of AI in critical infrastructure merits strong measures to prevent harm and ensure everyone has equal access to information, goods, and services. DHS’s outlining of stakeholders’ roles and responsibilities is an important first step to protecting everyone in the U.S. from discrimination in the deployment of AI systems in our nation’s infrastructure.” – Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights
DHS is responsible for the overall security and resilience of the nation’s critical infrastructure, which hundreds of millions of Americans rely on every day to light their homes, conduct business, exchange information, and put food on the table. In the 2025 Homeland Threat Assessment, the Department advised that domestic and foreign adversaries will continue to threaten the integrity of our nation’s critical infrastructure because of the cascading impacts its disruption would have on U.S. industries and our standard of living. These threats include, but are not limited to, the use of AI to plan or scale physical attacks; targeted attacks on AI systems supporting critical infrastructure; and failures in AI design and implementation that affect critical infrastructure operations.
To learn more about the Framework or the ways DHS is safely and responsibly leveraging AI to protect the homeland, visit the Artificial Intelligence at DHS webpage.
Source: Department of Homeland Security