Interactive and Explainable Human-Centered AutoML
ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.
Project details
Introduction
Trust and interactivity are key factors in the future development and use of automated machine learning (AutoML), supporting developers and researchers in determining powerful task-specific machine learning pipelines. This includes pre-processing, predictive algorithms, their hyperparameters, and—if applicable—the architecture design of deep neural networks.
Current State of AutoML
AutoML is ready for prime time: it has achieved impressive results in several machine learning (ML) applications, and its efficiency has improved by several orders of magnitude in recent years. However, the democratization of machine learning via AutoML has not yet been achieved.
ixAutoML Design Philosophy
In contrast to previous, purely automation-centered approaches, ixAutoML places human users at its heart at several stages.
Foundation of Trust
First of all, trustworthy use of AutoML will be founded on explanations of its results and processes. Therefore, we aim for:
- Explaining static effects of design decisions in ML pipelines optimized by state-of-the-art AutoML systems.
- Explaining dynamic AutoML policies for temporal aspects of dynamically adapted hyperparameters while ML models are trained.
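To make the first goal concrete, a "static effect" can be read as the marginal influence of one design decision on pipeline performance across evaluated configurations. The sketch below is purely illustrative: synthetic configurations, made-up hyperparameter names, and a crude binned main-effect estimate standing in for fANOVA-style importance analysis, not the project's actual method.

```python
# Toy illustration: which hyperparameter has the largest "static effect"
# on performance across evaluated configurations? All names and data are
# synthetic; real AutoML explanations would use actual optimization logs.
import numpy as np

rng = np.random.default_rng(0)

# Each row is one evaluated configuration of three hyperparameters in [0, 1).
configs = rng.uniform(size=(500, 3))
# Performance depends strongly on the first hyperparameter, weakly on the rest.
scores = 2.0 * configs[:, 0] + 0.2 * configs[:, 1] + rng.normal(0.0, 0.05, 500)

def main_effect(x, y, bins=10):
    """Fraction of variance explained by the binned marginal mean of y along x,
    a crude stand-in for a fANOVA main effect."""
    idx = np.clip((x * bins).astype(int), 0, bins - 1)
    marginal = np.array([y[idx == b].mean() for b in range(bins)])
    return marginal.var() / y.var()

names = ["learning_rate", "max_depth", "l2_penalty"]
importances = {n: main_effect(configs[:, i], scores) for i, n in enumerate(names)}
print(max(importances, key=importances.get))  # → learning_rate
```

An explanation of this kind ("performance is dominated by the learning rate; the regularization strength barely matters here") is exactly the sort of result a user can inspect and act on.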
Enabling Interactions
These explanations will be the base for allowing interactions, bringing the best of two worlds together: human intuition and generalization capabilities for complex systems, and the efficiency of systematic optimization approaches for AutoML. Concretely, we aim for:
- Enabling interactions between humans and AutoML by taking humans' latent knowledge into account and learning when to interact.
- Building the first ixAutoML prototypes and demonstrating their efficiency in the context of Industry 4.0.
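One way to read "learning when to interact" is as a decision rule over the optimizer's own uncertainty: query the human only when the system cannot distinguish its candidates. The following is a minimal, purely hypothetical sketch (toy objective, hand-picked threshold, simulated expert), not the project's algorithm.

```python
# Toy illustration of "learning when to interact": an optimizer queries a
# (simulated) human expert only when its surrogate cannot tell candidates
# apart. Objective, threshold, and human model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                 # true performance, unknown to the optimizer
    return -(x - 0.7) ** 2

def surrogate(x):                 # the optimizer's noisy internal estimate
    return objective(x) + rng.normal(0.0, 0.02, size=np.shape(x))

def human_hint(cands):            # simulated expert intuition: values near 0.7
    return cands[np.argmin(np.abs(cands - 0.7))]

best_x, best_y, n_interactions = 0.0, objective(0.0), 0
for _ in range(30):
    cands = rng.uniform(0.0, 1.0, size=5)
    est = surrogate(cands)
    if est.max() - est.min() < 0.1:   # candidates look alike: ask the human
        x = float(human_hint(cands))
        n_interactions += 1
    else:                             # clear winner: decide autonomously
        x = float(cands[np.argmax(est)])
    y = objective(x)
    if y > best_y:
        best_x, best_y = x, y
```

The point of the sketch is the division of labor: systematic search does the bulk of the work, and human intuition is consulted sparingly, exactly where it adds the most value.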
Alignment with EU AI Strategy
This human-centered ixAutoML is perfectly aligned with the EU's AI strategy and with recent efforts on interpretability in the ML community, and we strongly believe it will have a substantial impact on the democratization of machine learning.
Financial details & Timeline
Financial details
Grant amount | € 1.459.763 |
Total project budget | € 1.459.763 |
Timeline
Start date | 1-12-2022 |
End date | 30-11-2027 |
Grant year | 2022 |
Partners & Locations
Project partners
- GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER (coordinator)
Country(ies)
Similar projects within the European Research Council
Project | Scheme | Amount | Year
---|---|---|---
Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI. Develops adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction. | ERC Starting... | € 1.470.250 | 2023
Intuitive interaction for robots among humans. The INTERACT project aims to enable mobile robots to safely and intuitively interact with humans in complex environments through innovative motion planning and machine learning techniques. | ERC Starting... | € 1.499.999 | 2022
Explainable and Robust Automatic Fact Checking. ExplainYourself aims to develop explainable automatic fact-checking methods using machine learning to enhance transparency and user trust through diverse, accurate explanations of model predictions. | ERC Starting... | € 1.498.616 | 2023
Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning. The project aims to develop society-aware machine learning algorithms through collaborative design, balancing the interests of owners, consumers, and regulators to foster trust and ethical use. | ERC Starting... | € 1.499.845 | 2023
Uniting Statistical Testing and Machine Learning for Safe Predictions. The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications. | ERC Starting... | € 1.500.000 | 2024
Similar projects from other funding schemes
Project | Scheme | Amount | Year
---|---|---|---
eXplainable AI in Personalized Mental Healthcare. This project develops an innovative AI platform that involves users in improving algorithms via feedback loops, focused on transparency and reliability in mental healthcare. | Mkb-innovati... | € 350.000 | 2022
InContract AI. The project investigates the use of digital twins and AI for automating contracts within the InContract tool. | Mkb-innovati... | € 20.000 | 2023
Feasibility study of an online tool for applying Targeted Maximum Likelihood Estimation (TMLE). Researchable B.V. is developing a SaaS solution that uses TMLE to make the invisible layer of AI computations visible through Explainable AI (XAI), providing better insight into predictions. | Mkb-innovati... | € 20.000 | 2020
InContract AI. The project investigates the technical and commercial potential of digital twins for automating contract processes in the InContract tool, using AI and deep learning. | Mkb-innovati... | € 20.000 | 2023
Context-aware adaptive visualizations for critical decision making. SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence. | EIC Pathfinder | € 4.485.655 | 2022