Data-Driven Verification and Learning Under Uncertainty
The DEUCE project aims to enhance reinforcement learning by developing novel verification methods that ensure safety and correctness in complex, uncertain environments through data-driven abstractions.
Project details
Introduction
Reinforcement learning (RL) agents learn to behave optimally via trial and error, without the need to encode complicated behavior explicitly. However, RL generally lacks mechanisms that continually ensure correct behavior with respect to sophisticated task and safety specifications.
Background
Formal verification (FV), and in particular model checking, provides formal guarantees on a system's correctness based on rigorous methods and precise specifications. Despite active development by researchers worldwide, fundamental challenges have so far obstructed the application of FV to RL.
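To make this concrete, the following is a minimal sketch of a probabilistic model-checking step on a toy Markov chain: value iteration computes the probability of eventually reaching an unsafe state, which is then compared against a specification threshold. The model, state names, and threshold are invented purely for illustration and do not come from the DEUCE proposal.

```python
import numpy as np

# Transition matrix over states 0..3; state 2 is a safe sink, state 3 the unsafe sink.
# (All values are made up for this toy example.)
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.0, 0.5, 0.4, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

UNSAFE = 3
x = np.zeros(len(P))
x[UNSAFE] = 1.0  # reaching "unsafe" from "unsafe" has probability 1

# Fixed-point iteration for reachability: x[s] = sum_t P[s, t] * x[t].
for _ in range(10_000):
    x_new = P @ x
    x_new[UNSAFE] = 1.0
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

THRESHOLD = 0.2  # toy specification: P(eventually reach unsafe) <= 0.2
verdict = "satisfied" if x[0] <= THRESHOLD else "violated"
print(f"P(reach unsafe from state 0) = {x[0]:.4f} -> specification {verdict}")
```

Probabilistic model checkers implement far more general versions of this computation; the point here is only the shape of the guarantee: an exact quantity computed from the model and compared against a precise specification.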
Key Challenges
We identify three key challenges that frame the objectives of this proposal:
- Curse of Dimensionality: Complex environments with many degrees of freedom induce large state and feature spaces. This curse of dimensionality poses a longstanding problem for verification.
- Idealized State Spaces: Common approaches to the correctness of RL systems employ idealized discrete state spaces. However, realistic problems are often continuous.
- Uncertainty in Real-World Environments: Knowledge about real-world environments is inherently uncertain. To ensure safety, correctness guarantees need to be robust against such imprecise knowledge about the environment (see the sketch after this list).
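One common way to make the last challenge precise, shown here purely as an illustrative assumption rather than a description of DEUCE's actual models, is an interval Markov chain: each transition probability is only known to lie within bounds, and robust value iteration computes a worst-case bound on the probability of reaching an unsafe state, so the resulting guarantee holds for every environment consistent with the intervals. All states and intervals below are invented.

```python
import numpy as np

# INTERVALS[s] = list of (successor, lower, upper); state 2 is a safe sink,
# state 3 the unsafe sink. Lower bounds sum to <= 1 and upper bounds to >= 1.
INTERVALS = {
    0: [(1, 0.6, 0.9), (3, 0.1, 0.4)],
    1: [(2, 0.5, 0.8), (3, 0.2, 0.5)],
    2: [(2, 1.0, 1.0)],
    3: [(3, 1.0, 1.0)],
}

def worst_case_expectation(succs, values):
    """Pick transition probabilities inside the intervals that maximize the
    expected value -- the adversarial (worst-case) resolution of uncertainty."""
    order = sorted(succs, key=lambda e: values[e[0]], reverse=True)
    probs = {t: lo for t, lo, _ in order}   # start every successor at its lower bound
    budget = 1.0 - sum(probs.values())      # remaining probability mass to distribute
    for t, lo, up in order:                 # spend it on high-value successors first
        extra = min(up - lo, budget)
        probs[t] += extra
        budget -= extra
    return sum(p * values[t] for t, p in probs.items())

values = np.zeros(4)
values[3] = 1.0
for _ in range(10_000):
    new = np.array([worst_case_expectation(INTERVALS[s], values) for s in range(4)])
    new[3] = 1.0
    if np.max(np.abs(new - values)) < 1e-12:
        break
    values = new

print(f"worst-case P(reach unsafe from state 0) <= {values[0]:.4f}")
```

Resolving the intervals adversarially is what makes the bound robust: every concrete environment consistent with the intervals reaches the unsafe state with at most the probability computed here.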
Project Objectives
The main objective of the DEUCE project is to develop novel, data-driven verification methods that integrate tightly with RL.
- To cope with the curse of dimensionality, we devise learning-based abstraction schemes that distill the system parts that are relevant for correctness.
- We employ and define models whose expressiveness captures various types of uncertainty. These models are the basis for formal and data-driven abstractions of continuous spaces.
- We provide model-based FV mechanisms that ensure safe and correct exploration for RL agents (a toy sketch of such shielded exploration follows below).
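The last objective can be pictured with the following minimal sketch of shielded exploration: the agent's proposed action is only executed if it is verified safe for the current state, and is otherwise replaced by a verified-safe alternative. The toy corridor environment, the shield contents, and the random "agent" are illustrative assumptions, not DEUCE's actual mechanism.

```python
import random

ACTIONS = ["left", "right"]

# Hypothetical shield: for each state of a 3-state corridor, the set of actions
# proven safe offline (leaving the corridor at either end is considered unsafe).
SHIELD = {
    0: {"right"},
    1: {"left", "right"},
    2: {"left"},
}

def shielded(state, proposed):
    """Let the agent's action pass only if it is verified safe; otherwise fall
    back to an arbitrary verified-safe action for the current state."""
    safe = SHIELD[state]
    return proposed if proposed in safe else random.choice(sorted(safe))

def step(state, action):
    """Toy deterministic dynamics: states outside 0..2 would be unsafe."""
    return state + (1 if action == "right" else -1)

state = 1
for _ in range(20):
    proposed = random.choice(ACTIONS)   # the (unverified) exploring agent's choice
    state = step(state, shielded(state, proposed))
    assert state in SHIELD              # the shield keeps exploration inside the safe set
```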
Conclusion
DEUCE will advance the scalability and expressiveness of verification, paving the way for the real-world deployment of reinforcement learning.
Financial details & Timeline
Financial details
Grant amount | € 1.500.000 |
Total project budget | € 1.500.000 |
Timeline
Start date | 1-1-2023 |
End date | 31-12-2027 |
Grant year | 2023 |
Partners & Locations
Project partners
- RUHR-UNIVERSITAET BOCHUM (lead partner)
- STICHTING RADBOUD UNIVERSITEIT
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
Deep Bayesian Reinforcement Learning -- Unifying Perception, Planning, and Control: Develop an algorithmic framework using deep learning and Bayesian reinforcement learning to enhance robotic manipulation in unstructured environments by effectively managing uncertainty. | ERC Starting... | € 1.500.000 | 2022 |
Verifiably Safe and Correct Deep Neural Networks: This project aims to develop scalable verification techniques for large deep neural networks to ensure their safety and correctness in critical systems, enhancing reliability and societal benefits. | ERC Starting... | € 1.500.000 | 2023 |
Model-based Reinforcement Learning for Versatile Robots in the Real World: REAL-RL aims to create versatile autonomous robots that learn from experience using a model-based approach for efficient task adaptation and behavior planning. | ERC Consolid... | € 1.998.500 | 2023 |
Continual and Sequential Learning for Artificial Intelligence: Develop a Continual and Sequential Learning AI that integrates RL with advanced data gathering to autonomously adapt and explore in dynamic environments for scientific breakthroughs. | ERC Starting... | € 1.259.375 | 2025 |
Control for Deep and Federated Learning: CoDeFeL aims to enhance machine learning methods through control theory, developing efficient ResNet architectures and federated learning techniques for applications in digital medicine and recommendations. | ERC Advanced... | € 2.499.224 | 2024 |
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
Haalbaarheidsstudie naar de specificatie en automatische verificatie van data contracts: Verum investigates the feasibility of extending Dezyne for the automatic verification of data contracts and abstract data types. | Mkb-innovati... | € 20.000 | 2022 |