Modern Challenges in Learning Theory
This project aims to develop a new theory of generalization in machine learning that better models real-world tasks and addresses data efficiency and privacy challenges.
Project details
Introduction
Recent years have witnessed tremendous progress in the field of Machine Learning (ML). Learning algorithms are applied in an ever-increasing variety of contexts, ranging from engineering challenges such as self-driving cars all the way to societal contexts involving private data.
Challenges in Machine Learning
These developments pose important challenges:
- Lack of Explanations: Many recent breakthroughs exhibit phenomena that lack explanations and sometimes even contradict conventional wisdom. One main reason is that classical ML theory adopts a worst-case perspective, which is too pessimistic to explain practical ML. In reality, data is rarely worst-case, and experiments indicate that far less data is often needed than traditional theory predicts.
- Privacy Concerns: The growing number of ML applications involving private and sensitive data highlights the need for algorithms that handle data responsibly. While the field of Differential Privacy (DP) addresses this need, the cost of privacy remains poorly understood. Specifically, how much more data does private learning require compared to learning without privacy constraints?
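As background, the standard formalization of differential privacy (assumed rather than stated in the text above) can be written as follows:

```latex
% A randomized algorithm M is (\epsilon, \delta)-differentially private if,
% for every pair of datasets S, S' differing in a single example and for
% every event E over M's outputs,
\Pr[M(S) \in E] \;\le\; e^{\epsilon} \cdot \Pr[M(S') \in E] + \delta.
```

The "cost of privacy" question above asks how the sample complexity of learning grows when the learner is required to satisfy this constraint.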
Guiding Question
Inspired by these challenges, our guiding question is: How much data is needed for learning?
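For reference, classical distribution-free theory answers this question in the worst case: in the realizable PAC model, a hypothesis class of VC dimension $d$ can be learned to accuracy $\epsilon$ with confidence $1-\delta$ using

```latex
m(\epsilon, \delta) \;=\; \Theta\!\left(\frac{d + \log(1/\delta)}{\epsilon}\right)
```

examples, and $\Theta\!\left((d + \log(1/\delta))/\epsilon^2\right)$ in the agnostic case. These are standard textbook bounds, stated here as illustrative background; the project's premise is that real-world tasks often require far fewer examples than such worst-case bounds suggest.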
Research Objectives
Towards answering this question, we aim to develop a theory of generalization that complements the traditional theory and better models real-world learning tasks. We will base it on:
- Distribution-dependent perspectives
- Data-dependent perspectives
- Algorithm-dependent perspectives
These perspectives complement the distribution-free worst-case perspective of the classical theory and are suitable for exploiting specific properties of a given learning task.
Study Settings
We will use this theory to study various settings, including:
- Supervised learning
- Semi-supervised learning
- Interactive learning
- Private learning
Expected Impact
We believe that this research will advance the field in terms of efficiency, reliability, and applicability. Furthermore, our work combines ideas from various areas in computer science and mathematics; we thus expect further impact outside our field.
Financial details & Timeline
Financial details

| Item | Amount |
|---|---|
| Grant amount | € 1.433.750 |
| Total project budget | € 1.433.750 |

Timeline

| Item | Date |
|---|---|
| Start date | 1-9-2022 |
| End date | 31-8-2027 |
| Grant year | 2022 |
Partners & Locations
Project partners
- TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY (coordinator)
Country(ies)
Similar projects within the European Research Council
| Project | Scheme | Amount | Year |
|---|---|---|---|
| Foundations of Generalization: This project aims to explore generalization in overparameterized learning models through stochastic convex optimization and synthetic data generation, enhancing understanding of modern algorithms. | ERC Starting... | € 1.419.375 | 2024 |
| Optimizing for Generalization in Machine Learning: This project aims to unravel the mystery of generalization in machine learning by developing novel optimization algorithms to enhance the reliability and applicability of ML in critical domains. | ERC Starting... | € 1.494.375 | 2023 |
| Theoretical Understanding of Classic Learning Algorithms: The TUCLA project aims to enhance classic machine learning algorithms, particularly Bagging and Boosting, to achieve faster, data-efficient learning and improve their theoretical foundations. | ERC Consolid... | € 1.999.288 | 2024 |
| Uniting Statistical Testing and Machine Learning for Safe Predictions: The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications. | ERC Starting... | € 1.500.000 | 2024 |
| Dynamics-Aware Theory of Deep Learning: This project aims to create a robust theoretical framework for deep learning, enhancing understanding and practical tools to improve model performance and reduce complexity in various applications. | ERC Starting... | € 1.498.410 | 2022 |