Data-Driven Verification and Learning Under Uncertainty

The DEUCE project aims to enhance reinforcement learning by developing novel verification methods that ensure safety and correctness in complex, uncertain environments through data-driven abstractions.

Grant
€ 1.500.000
2023

Project details

Introduction

Reinforcement learning (RL) agents learn to behave optimally via trial and error, without the need to encode complicated behavior explicitly. However, RL generally lacks mechanisms that consistently ensure correct behavior with respect to sophisticated task and safety specifications.

Background

Formal verification (FV), and in particular model checking, provides formal guarantees on a system's correctness based on rigorous methods and precise specifications. Despite active development by researchers worldwide, fundamental challenges have so far obstructed the application of FV to RL.

Key Challenges

We identify three key challenges that frame the objectives of this proposal:

  1. Curse of Dimensionality: Complex environments with many degrees of freedom induce large state and feature spaces. This curse of dimensionality is a longstanding obstacle for verification.

  2. Idealized State Spaces: Common approaches to ensuring the correctness of RL systems assume idealized, discrete state spaces. Realistic problems, however, are often continuous.

  3. Uncertainty in Real-World Environments: Knowledge about real-world environments is inherently uncertain. To ensure safety, correctness guarantees must be robust against such imprecise knowledge of the environment; the sketch after this list makes this concrete.
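
To make the third challenge concrete, here is a minimal Python sketch of a robust Bellman backup on an interval MDP, a standard model in which transition probabilities are only known up to intervals. The two-state model, the interval bounds, and the function names are illustrative assumptions for this example, not material from the DEUCE proposal.

    # Illustrative sketch (assumed example, not from the proposal): one robust
    # Bellman backup for a state-action pair whose transition probabilities
    # are only known up to intervals.

    def worst_case_distribution(intervals, values):
        """Pick the distribution within the probability intervals that
        minimizes the expected successor value (the pessimistic adversary)."""
        dist = {s: lo for s, (lo, hi) in intervals.items()}
        mass_left = 1.0 - sum(dist.values())
        # Greedily push the remaining probability mass towards low-value states.
        for s in sorted(intervals, key=lambda s: values[s]):
            lo, hi = intervals[s]
            extra = min(hi - lo, mass_left)
            dist[s] += extra
            mass_left -= extra
        return dist

    def robust_backup(intervals, values, reward, discount=0.95):
        """One robust backup: reward plus discounted worst-case successor value."""
        dist = worst_case_distribution(intervals, values)
        return reward + discount * sum(p * values[s] for s, p in dist.items())

    intervals = {"safe": (0.6, 0.9), "unsafe": (0.1, 0.4)}  # imprecise model
    values = {"safe": 1.0, "unsafe": -10.0}                 # current value estimates
    print(robust_backup(intervals, values, reward=0.0))     # worst-case value, about -3.23

A guarantee computed against this worst-case distribution remains valid for every concrete environment whose true probabilities lie within the stated intervals.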

Project Objectives

The main objective of the DEUCE project is to develop novel and data-driven verification methods that tightly integrate with RL.

  • To cope with the curse of dimensionality, we devise learning-based abstraction schemes that distill the system parts that are relevant for correctness.
  • We employ and define models whose expressiveness captures various types of uncertainty. These models are the basis for formal and data-driven abstractions of continuous spaces.
  • We provide model-based FV mechanisms that ensure safe and correct exploration for RL agents, as illustrated by the sketch after this list.
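
As a generic illustration of the last bullet (one way a model checker's output could constrain exploration, not the project's actual machinery), the Python sketch below filters an RL agent's proposed actions through a table of provably safe actions per abstract state. The safe_actions table, the state names, and the policy interface are hypothetical and made up for this example.

    import random

    # Hypothetical output of a model-checking step: for each abstract state,
    # the set of actions under which the safety specification provably holds.
    safe_actions = {
        "far_from_obstacle": {"left", "right", "forward"},
        "near_obstacle": {"left", "right"},  # "forward" is provably unsafe here
    }

    def shielded_action(policy, abstract_state):
        """Let the agent propose an action, but override the proposal whenever
        it is marked as unsafe in the current abstract state."""
        proposal = policy(abstract_state)
        allowed = safe_actions[abstract_state]
        if proposal in allowed:
            return proposal
        return random.choice(sorted(allowed))  # safe fallback action

    def reckless_policy(state):
        return "forward"  # always charges ahead, regardless of the state

    print(shielded_action(reckless_policy, "near_obstacle"))  # never "forward"

The agent still explores freely within the safe action set, so learning proceeds while the safety specification is never violated during exploration.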

Conclusion

DEUCE will advance the scalability and expressiveness of verification toward the real-world deployment of reinforcement learning.

Financial Details & Timeline

Financial Details

Grant amount: € 1.500.000
Total project budget: € 1.500.000

Timeline

Start date: 1 January 2023
End date: 31 December 2027
Grant year: 2023

Partners & Locations

Project Partners

  • RUHR-UNIVERSITAET BOCHUM (lead partner)
  • STICHTING RADBOUD UNIVERSITEIT

Countries

Germany, Netherlands

Similar Projects within the European Research Council

ERC Starting Grant

Deep Bayesian Reinforcement Learning -- Unifying Perception, Planning, and Control

Develop an algorithmic framework using deep learning and Bayesian reinforcement learning to enhance robotic manipulation in unstructured environments by effectively managing uncertainty.

€ 1.500.000
ERC Starting Grant

Verifiably Safe and Correct Deep Neural Networks

This project aims to develop scalable verification techniques for large deep neural networks to ensure their safety and correctness in critical systems, enhancing reliability and societal benefits.

€ 1.500.000
ERC Consolidator Grant

Model-based Reinforcement Learning for Versatile Robots in the Real World

REAL-RL aims to create versatile autonomous robots that learn from experience using a model-based approach for efficient task adaptation and behavior planning.

€ 1.998.500
ERC Starting Grant

Continual and Sequential Learning for Artificial Intelligence

Develop a Continual and Sequential Learning AI that integrates RL with advanced data gathering to autonomously adapt and explore in dynamic environments for scientific breakthroughs.

€ 1.259.375
ERC Advanced Grant

Control for Deep and Federated Learning

CoDeFeL aims to enhance machine learning methods through control theory, developing efficient ResNet architectures and federated learning techniques for applications in digital medicine and recommendations.

€ 2.499.224

Similar Projects from Other Schemes

Mkb-innovati...

Feasibility study into the specification and automatic verification of data contracts

Verum is investigating the feasibility of extending Dezyne for the automatic verification of data contracts and abstract data types.

€ 20.000