Theoretical Understanding of Classic Learning Algorithms
The TUCLA project aims to enhance classic machine learning algorithms, particularly Bagging and Boosting, to achieve faster, data-efficient learning and improve their theoretical foundations.
Project Details
Introduction
Machine learning has evolved from a relatively isolated discipline into one with a disruptive influence on all areas of science, industry, and society. Learning algorithms are typically classified as either deep learning or classic learning: deep learning excels when data and computing resources are abundant, whereas classic algorithms shine when data is scarce.
Project Overview
In the TUCLA project, we expand our theoretical understanding of classic machine learning, with particular emphasis on two of the most important such algorithms, namely Bagging and Boosting. As a result of this study, we will provide faster learning algorithms that require less training data to make accurate predictions. The project accomplishes this by pursuing three objectives:
- Establishing a Novel Framework: We will establish a novel learning-theoretic framework for proving generalization bounds for learning algorithms. Using this framework, we will design new Boosting algorithms and prove that they make accurate predictions using less training data than was previously possible. We complement these algorithms with generalization lower bounds, proving that no other algorithm can make better use of the data.
- Designing Parallel Versions of Boosting Algorithms: We will design parallel versions of Boosting algorithms, allowing them to be combined with more computationally expensive base learning algorithms. We conjecture that success in this direction may give Boosting a more central role in deep learning as well.
- Exploring Applications of Bagging: We will explore applications of the classic Bagging heuristic. Until recently, Bagging was not known to have significant theoretical benefits. However, recent pioneering work by the PI shows that Bagging is an optimal learning algorithm in an important learning setup. Using these recent insights, we will explore theoretical applications of Bagging in other important settings.
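Boosting's core mechanism is to reweight training examples so that each new base learner focuses on the mistakes of its predecessors, then combine all base learners by a weighted vote. The sketch below is a minimal AdaBoost with exhaustively searched decision stumps on hypothetical toy data; it illustrates the classic algorithm only, not the new Boosting algorithms proposed by the project.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """A decision stump: predict +1/-1 by thresholding a single feature."""
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def fit_stump(X, y, w):
    """Exhaustive search for the stump with the smallest weighted error."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, sign, error)
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1, -1):
                err = np.sum(w * (stump_predict(X, feat, thresh, sign) != y))
                if err < best[3]:
                    best = (feat, thresh, sign, err)
    return best

def adaboost(X, y, rounds=15):
    """AdaBoost: reweight examples toward past mistakes each round."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # start from the uniform distribution
    ensemble = []
    for _ in range(rounds):
        feat, thresh, sign, err = fit_stump(X, y, w)
        err = max(err, 1e-12)          # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # this learner's vote weight
        pred = stump_predict(X, feat, thresh, sign)
        w *= np.exp(-alpha * y * pred)         # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of all stumps in the ensemble."""
    agg = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(agg)

# Toy data: label +1 iff both features exceed 0.5, which no single stump
# can represent, so combining weak learners is genuinely needed.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.where((X[:, 0] > 0.5) & (X[:, 1] > 0.5), 1.0, -1.0)
model = adaboost(X, y, rounds=15)
acc = np.mean(predict(model, X) == y)
```

The toy target is an AND of two thresholds, chosen precisely because no individual stump can express it, so the ensemble's accuracy gain over any single base learner is visible in miniature.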
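The Bagging heuristic itself is simple: train each base learner on an independent bootstrap resample of the training set and aggregate their predictions by majority vote. The following is a minimal sketch on hypothetical toy data; the 1-nearest-neighbour base learner is an illustrative choice, not the specific setting of the PI's optimality result.

```python
import numpy as np

def fit_1nn(X, y):
    """Illustrative base learner: 1-nearest-neighbour, returned as a
    predict function closing over its training sample."""
    def predict(Xq):
        d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
        return y[np.argmin(d, axis=1)]
    return predict

def bagging_fit(X, y, fit_base, n_estimators=25, seed=1):
    """Bagging: train each base learner on a bootstrap resample
    (n draws with replacement) of the training data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)
        models.append(fit_base(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Aggregate the ensemble's +1/-1 votes by majority (sign of the sum)."""
    return np.sign(sum(m(X) for m in models))

# Toy data: label +1 iff the point lies above the line x0 + x1 = 1.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1, 1.0, -1.0)
models = bagging_fit(X, y, fit_1nn, n_estimators=25)
acc = np.mean(bagging_predict(models, X) == y)
```

An odd number of estimators is used so the ±1 vote sum can never tie; each bootstrap omits roughly 37% of the points, which is what gives the aggregated vote its stabilizing effect over the individual base learners.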
Financial Details & Timeline
Financial Details
Grant amount: € 1.999.288
Total project budget: € 1.999.288
Timeline
Start date: 1 August 2024
End date: 31 July 2029
Grant year: 2024
Partners & Locations
Project Partners
- AARHUS UNIVERSITET (coordinator)
Country(ies)
Comparable Projects within the European Research Council
| Project | Scheme | Amount | Year |
|---|---|---|---|
| Foundations of Generalization | ERC Starting... | € 1.419.375 | 2024 |
| Modern Challenges in Learning Theory | ERC Starting... | € 1.433.750 | 2022 |
| Optimizing for Generalization in Machine Learning | ERC Starting... | € 1.494.375 | 2023 |
| Algorithmic Bias Control in Deep learning | ERC Starting... | € 1.500.000 | 2022 |
| Reconciling Classical and Modern (Deep) Machine Learning for Real-World Applications | ERC Consolid... | € 1.999.375 | 2023 |
Foundations of Generalization
This project aims to explore generalization in overparameterized learning models through stochastic convex optimization and synthetic data generation, enhancing understanding of modern algorithms.
Modern Challenges in Learning Theory
This project aims to develop a new theory of generalization in machine learning that better models real-world tasks and addresses data efficiency and privacy challenges.
Optimizing for Generalization in Machine Learning
This project aims to unravel the mystery of generalization in machine learning by developing novel optimization algorithms to enhance the reliability and applicability of ML in critical domains.
Algorithmic Bias Control in Deep learning
The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.
Reconciling Classical and Modern (Deep) Machine Learning for Real-World Applications
APHELEIA aims to create robust, interpretable, and efficient machine learning models that require less data by integrating classical methods with modern deep learning, fostering interdisciplinary collaboration.