NSF Invests $10.9M in the Development of Safe AI Technologies
Nov. 3, 2023 -- The U.S. National Science Foundation recently announced an investment of $10.9 million to support research that will help ensure advances in artificial intelligence go hand in hand with user safety.
The objective of the Safe Learning-Enabled Systems program, a partnership between NSF, Open Philanthropy and Good Ventures, is to foster foundational research that leads to the design and implementation of computerized learning-enabled systems — including autonomous and generative AI technologies — that are both safe and resilient.
"NSF's commitment to studying how we can guarantee the safety of AI systems sends a clear message to the AI research community: We consider safety paramount to the responsible expansion and evolution of AI," said NSF Director Sethuraman Panchanathan. "NSF continues to drive cutting-edge AI research — not only to find opportunities for innovation, but also to improve safety."
As AI systems rapidly grow in size, acquire new capabilities, and are deployed in high-stakes settings like healthcare, commerce and transportation, the safety of those systems becomes critically important. These awards represent key NSF investments in AI, supporting the design of resilient automated systems with clear and precise end-to-end safety constraints that are rigorously tested to ensure unsafe behaviors will not arise once the systems are deployed.
The following list identifies and summarizes the recipients' projects:
- Foundations of Safety-Aware Learning in the Wild, University of Wisconsin-Madison. Researchers will design new safety-aware machine learning algorithms and methodologies that can detect data that fall outside the normal distribution to protect systems deployed in the wild in increasingly dynamic and unpredictable environments.
- Vision-Based Maximally-Symbolic Safety Supervisor with Graceful Degradation and Procedural Validation, Princeton University. Researchers will work to develop new technology that can continuously monitor the actions of autonomous robotic systems, such as self-driving cars and home robots, and intervene as needed to ensure safety.
- Safety under Distributional Shift in Learning-Enabled Power Systems, Virginia Tech and University of California, Berkeley. Focusing primarily on power systems, researchers will design novel learning-enabled, safety-critical systems, explore systems for cooperative decision-making, and apply rigorous stress testing to ensure the capability of these systems during rare or unexpected events.
- Safe Distributional-Reinforcement Learning-Enabled Systems: Theories, Algorithms, and Experiments, University of Michigan, Arizona State University and The Ohio State University. Researchers will work to overcome a major obstacle associated with reinforcement learning techniques — a lack of safety guarantees — by developing foundational technologies for safe learning-enabled systems based on distributional reinforcement learning techniques.
- Specification-guided Perception-enabled Conformal Safe Reinforcement Learning, University of Pennsylvania. This project will bring together researchers with expertise in reinforcement learning, formal methods, theory of machine learning, and robotics to design and implement a reinforcement learning framework with precise mathematical and empirical safety guarantees and constraints.
- A Theoretical Lens on Generative AI Safety: Near and Long Term, Harvard University. Researchers will develop mathematically rigorous AI deployment methods that come with solid theoretical assurances that AI systems will not stray from their intended behavior, establishing sustainable checks and fail-safes for generative AI technologies like ChatGPT.
- Guaranteed Tubes for Safe Learning across Autonomy Architectures, University of Illinois Urbana-Champaign and University of South Carolina. This research will design a novel system, called "Data-enabled Simplex," that lays the groundwork for advancing autonomous learning-enabled systems by allowing them to learn and adapt to unexpected changes and unknown obstacles.
- Bridging offline design and online adaptation in safe learning-enabled systems, University of Pennsylvania and University of California, Berkeley. This project is focused on mitigating the uncertainties associated with learning-enabled systems in unknown environments by using novel designs that allow for principled tradeoffs between risks to system safety and active data collection and learning, thus closing the loop between online safety monitoring and offline design.
- CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening, University of Virginia. This project will develop a new framework to stress test existing autonomous vehicle software, helping identify potential software failures. The framework uses rare but realistic scenarios that may cause autonomous vehicles to fail, and then enhances the software to ensure the failure does not recur.
- Foundations of Qualitative and Quantitative Safety Assessment of Learning-enabled Systems, University of Nebraska-Lincoln and Augusta University Research Institute. This project aims to build the foundations of end-to-end qualitative and quantitative safety assessments for learning-enabled autonomous systems, allowing for a thorough understanding of safety concerns and enabling effective safety verification in uncertain environments.
- Verifying and Enforcing Safety Constraints in AI-based Sequential Generation, UCLA and University of Illinois at Urbana-Champaign. A team of researchers will develop algorithms to assess the safety of AI programs under various scenarios and provide assurance of their behavior in mission-critical situations. This analysis will reduce unexpected AI failures, prevent bias and discrimination in AI technologies, align AI systems with human values and societal norms, and build public trust in AI-enabled applications.
More information about the Safe Learning-Enabled Systems program can be found at nsf.gov.
Source: NSF