Optimizing for Generalization in Machine Learning
This project aims to unravel the mystery of generalization in machine learning by developing novel optimization algorithms to enhance the reliability and applicability of ML in critical domains.
Project Details
Introduction
Recent advances in the field of machine learning (ML) are revolutionizing an ever-growing variety of domains, ranging from statistical learning algorithms in computer vision and natural language processing all the way to reinforcement learning algorithms in autonomous driving and conversational AI.
The Generalization Mystery
However, many of these breakthroughs demonstrate phenomena that lack explanations, and sometimes even contradict conventional wisdom. Perhaps the greatest mystery of modern ML—and arguably, one of the greatest mysteries of all of modern computer science—is the question of generalization: why do these immensely complex prediction rules successfully apply to future unseen instances?
Importance of Understanding Generalization
Beyond the scientific curiosity it stimulates, I believe this lack of understanding poses a significant obstacle to widening the applicability of ML to critical applications, such as healthcare or autonomous driving, where the cost of error can be disastrous.
Project Goals
The broad goal of this project is to tackle the generalization mystery in the context of both statistical learning and reinforcement learning, focusing on optimization algorithms, the de facto contemporary standard for training learning models.
Methodology
Our methodology identifies inherent shortcomings in widely accepted views on the generalization of optimization-based learning algorithms, and takes a fundamentally different approach that targets the optimization algorithm itself.
Key Steps
- Building bottom-up from fundamental and tractable optimization models.
- Identifying intrinsic properties.
- Developing algorithmic methodologies that enable optimization to effectively generalize in modern statistical- and reinforcement-learning scenarios.
Expected Outcomes
A successful outcome would not only lead to a timely and crucial shift in the way the research community approaches the generalization of contemporary optimization-based ML, but it may also significantly transform the way we develop practical, efficient, and reliable learning systems.
Financial Details & Timeline
Financial Details

Item | Amount
---|---
Grant amount | € 1.494.375
Total project budget | € 1.494.375
Timeline

Item | Date
---|---
Start date | 1-10-2023
End date | 30-9-2028
Grant year | 2023
Partners & Locations
Project partners
- TEL AVIV UNIVERSITY (lead partner)
Country(ies)
Comparable projects within the European Research Council
Project | Scheme | Amount | Year
---|---|---|---
Foundations of Generalization | ERC Starting... | € 1.419.375 | 2024
Modern Challenges in Learning Theory | ERC Starting... | € 1.433.750 | 2022
Theoretical Understanding of Classic Learning Algorithms | ERC Consolid... | € 1.999.288 | 2024
Uniting Statistical Testing and Machine Learning for Safe Predictions | ERC Starting... | € 1.500.000 | 2024
Algorithmic Bias Control in Deep learning | ERC Starting... | € 1.500.000 | 2022
Foundations of Generalization
This project aims to explore generalization in overparameterized learning models through stochastic convex optimization and synthetic data generation, enhancing understanding of modern algorithms.
Modern Challenges in Learning Theory
This project aims to develop a new theory of generalization in machine learning that better models real-world tasks and addresses data efficiency and privacy challenges.
Theoretical Understanding of Classic Learning Algorithms
The TUCLA project aims to enhance classic machine learning algorithms, particularly Bagging and Boosting, to achieve faster, data-efficient learning and improve their theoretical foundations.
Uniting Statistical Testing and Machine Learning for Safe Predictions
The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.
Algorithmic Bias Control in Deep learning
The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.