Interactive and Explainable Human-Centered AutoML

ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.

Grant
€ 1.459.763
2022

Project details

Introduction

Trust and interactivity are key factors in the future development and use of automated machine learning (AutoML), supporting developers and researchers in determining powerful task-specific machine learning pipelines. This includes pre-processing, predictive algorithms, their hyperparameters, and—if applicable—the architecture design of deep neural networks.
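
To make concrete what such a pipeline and its design space look like, here is a minimal sketch using plain scikit-learn (not the project's own tooling): a pre-processing step, a predictive algorithm, and a hyperparameter space that an AutoML system would search over automatically. The dataset and parameter ranges are illustrative assumptions.

```python
# Illustrative pipeline and search space (assumed dataset and ranges), showing
# the kind of design decisions an AutoML system optimizes automatically.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Pipeline = pre-processing + predictive algorithm.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", RandomForestClassifier(random_state=0)),
])

# Hyperparameter space over which the search is run.
param_distributions = {
    "clf__n_estimators": [50, 100, 200],
    "clf__max_depth": [None, 5, 10, 20],
    "clf__min_samples_leaf": [1, 2, 4],
}

search = RandomizedSearchCV(pipeline, param_distributions,
                            n_iter=10, cv=5, random_state=0)
search.fit(X, y)
print("Best pipeline configuration:", search.best_params_)
```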

Current State of AutoML

Although AutoML is ready for prime time, having achieved impressive results in several machine learning (ML) applications and improved its efficiency by several orders of magnitude in recent years, the democratization of machine learning via AutoML has not yet been achieved.

ixAutoML Design Philosophy

In contrast to previous, purely automation-centered approaches, ixAutoML is designed with human users at its heart at several stages.

Foundation of Trust

First of all, the foundation for the trustworthy use of AutoML will be explanations of its results and processes. Therefore, we aim for:

  1. Explaining static effects of design decisions in ML pipelines optimized by state-of-the-art AutoML systems (a minimal sketch follows this list).
  2. Explaining dynamic AutoML policies for temporal aspects of dynamically adapted hyperparameters while ML models are trained.
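
The first aim can be illustrated with a minimal sketch: after a search over one design decision (here, the maximum tree depth of a random forest, chosen purely as an example), its static effect is exposed as the mean cross-validation score per value. This uses plain scikit-learn and shows one possible form such explanations could take, not the project's own method.

```python
# Illustrative "static effect" explanation: mean cross-validation accuracy per
# value of a single design decision (max_depth), using plain scikit-learn.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8, 16, None]},
    cv=5,
)
grid.fit(X, y)

# Report the marginal effect of the design decision "max_depth".
for depth, score in zip(grid.cv_results_["param_max_depth"],
                        grid.cv_results_["mean_test_score"]):
    print(f"max_depth={depth}: mean CV accuracy = {score:.3f}")
```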

Enabling Interactions

These explanations will form the basis for enabling interactions, bringing together the best of two worlds: human intuition and generalization capabilities for complex systems, and the efficiency of systematic optimization approaches in AutoML. Concretely, we aim for:

  1. Enabling interactions between humans and AutoML by taking humans' latent knowledge into account and learning when to interact (sketched after this list).
  2. Building the first ixAutoML prototypes and demonstrating their efficiency in the context of Industry 4.0.
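
For the first interaction aim, a minimal, hypothetical sketch is shown below: the user's latent knowledge enters as a preferred starting configuration, which an automated local search then refines. The objective function, configuration names (lr, depth), and search loop are illustrative assumptions only.

```python
# Hypothetical human-in-the-loop sketch: a human-supplied starting point is
# refined by a simple automated local search. All names and the objective are
# illustrative stand-ins, not part of ixAutoML.
import random

def evaluate(config):
    # Stand-in for an expensive pipeline training + validation run.
    return -((config["lr"] - 0.01) ** 2) - 0.1 * abs(config["depth"] - 6)

def sample_near(config, scale=0.5):
    # Propose a configuration close to the current best one.
    return {
        "lr": max(1e-4, config["lr"] * random.uniform(1 - scale, 1 + scale)),
        "depth": max(1, config["depth"] + random.randint(-2, 2)),
    }

human_prior = {"lr": 0.02, "depth": 4}       # the user's intuition
best, best_score = human_prior, evaluate(human_prior)

for _ in range(50):                          # systematic automated refinement
    candidate = sample_near(best)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print("Best configuration found:", best, "score:", round(best_score, 4))
```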

Alignment with EU AI Strategy

Perfectly aligned with the EU's AI strategy and recent efforts on interpretability in the ML community, we strongly believe that this timely human-centered ixAutoML will have a substantial impact on the democratization of machine learning.

Financial details & Timeline

Financial details

Grant amount: € 1.459.763
Total project budget: € 1.459.763

Timeline

Start date: 1-12-2022
End date: 30-11-2027
Grant year: 2022

Partners & Locations

Project partners

  • GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER (lead partner)

Country(ies)

Germany

Similar projects within the European Research Council

ERC Starting...

Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI

Develop adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction.

€ 1.470.250
ERC Starting...

Intuitive interaction for robots among humans

The INTERACT project aims to enable mobile robots to safely and intuitively interact with humans in complex environments through innovative motion planning and machine learning techniques.

€ 1.499.999
ERC Starting...

Explainable and Robust Automatic Fact Checking

ExplainYourself aims to develop explainable automatic fact-checking methods using machine learning to enhance transparency and user trust through diverse, accurate explanations of model predictions.

€ 1.498.616
ERC Starting...

Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning.

The project aims to develop society-aware machine learning algorithms through collaborative design, balancing the interests of owners, consumers, and regulators to foster trust and ethical use.

€ 1.499.845
ERC Starting...

Uniting Statistical Testing and Machine Learning for Safe Predictions

The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.

€ 1.500.000

Similar projects from other schemes

Mkb-innovati...

eXplainable AI in Personalized Mental Healthcare

This project develops an innovative AI platform that involves users in improving algorithms via feedback loops, aimed at transparency and reliability in mental healthcare.

€ 350.000
Mkb-innovati...

InContract AI

The project investigates the use of digital twins and AI for automating contracts within the InContract tool.

€ 20.000
Mkb-innovati...

Feasibility study of an online tool for applying Targeted Maximum Likelihood Estimation (TMLE)

Researchable B.V. is developing a SaaS solution that uses TMLE to make the invisible layer of AI computations visible via Explainable AI (XAI), providing better insight into predictions.

€ 20.000
Mkb-innovati...

InContract AI

The project investigates the technical and commercial possibilities of digital twins for automating contract processes in the InContract tool, using AI and deep learning.

€ 20.000
EIC Pathfinder

Context-aware adaptive visualizations for critical decision making

SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence.

€ 4.485.655