Theoretical Understanding of Classic Learning Algorithms

The TUCLA project aims to enhance classic machine learning algorithms, particularly Bagging and Boosting, to achieve faster, data-efficient learning and improve their theoretical foundations.

Grant
€ 1.999.288
2024

Project details

Introduction

Machine learning has evolved from a relatively isolated discipline into one with a disruptive influence on all areas of science, industry, and society. Learning algorithms are typically classified as either deep learning or classic learning: deep learning excels when data and computing resources are abundant, whereas classic algorithms shine when data is scarce.

Project Overview

In the TUCLA project, we expand our theoretical understanding of classic machine learning, with particular emphasis on two of the most important such algorithms, namely Bagging and Boosting. As a result of this study, we shall provide faster learning algorithms that require less training data to make accurate predictions. The project accomplishes this by pursuing three objectives:

  1. Establishing a Novel Framework
    We will establish a novel learning-theoretic framework for proving generalization bounds for learning algorithms. Using the framework, we will design new Boosting algorithms and prove that they make accurate predictions using less training data than was previously possible. Moreover, we complement these algorithms with generalization lower bounds, proving that no other algorithm can make better use of data.

  2. Designing Parallel Versions of Boosting Algorithms
    We will design parallel versions of Boosting algorithms, thereby allowing them to be used in combination with more computationally expensive base learning algorithms. We conjecture that success in this direction may also lead to Boosting playing a more central role in deep learning.

  3. Exploring Applications of Bagging
    We will explore applications of the classic Bagging heuristic. Until recently, Bagging was not known to have significant theoretical benefits. However, recent pioneering work by the PI shows that Bagging is an optimal learning algorithm in an important learning setup. Using these recent insights, we will explore theoretical applications of Bagging in other important settings. A brief illustrative sketch of both heuristics follows this list.
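
To make the two heuristics concrete, here is a minimal, textbook-style sketch in Python; it is not the project's own algorithms or code. Boosting is shown as an AdaBoost-style loop that reweights examples so later base learners focus on past mistakes, while Bagging trains each base learner on a bootstrap resample and aggregates by majority vote. The decision-stump base learner, the function names, and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: a threshold test on a single feature.
    Labels y are assumed to be in {-1, +1}; w are example weights."""
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best

def stump_predict(stump, X):
    j, t, s = stump
    return s * np.where(X[:, j] <= t, 1, -1)

def adaboost(X, y, rounds=20):
    """Boosting: reweight examples so later stumps focus on earlier mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        stump = fit_stump(X, y, w)
        pred = stump_predict(stump, X)
        err = max(np.sum(w[pred != y]), 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)  # vote weight of this stump
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def bagging(X, y, rounds=20, seed=0):
    """Bagging: train each stump on a bootstrap resample, vote uniformly."""
    rng = np.random.default_rng(seed)
    n = len(y)
    ensemble = []
    for _ in range(rounds):
        idx = rng.integers(0, n, size=n)         # sample n points with replacement
        stump = fit_stump(X[idx], y[idx], np.full(n, 1.0 / n))
        ensemble.append((1.0, stump))            # uniform vote weights
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * stump_predict(stump, X) for alpha, stump in ensemble)
    return np.sign(score)

# Toy usage: the label is the sign of the first feature, plus label noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
for name, model in [("boosting", adaboost(X, y)), ("bagging", bagging(X, y))]:
    print(name, "training accuracy:", np.mean(predict(model, X) == y))
```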

Financial details & Timeline

Financial details

Grant amount: € 1.999.288
Total project budget: € 1.999.288

Timeline

Start date: 1 August 2024
End date: 31 July 2029
Grant year: 2024

Partners & Locations

Project partners

  • AARHUS UNIVERSITET (coordinator)

Country

Denmark

Similar projects within the European Research Council

ERC Starting Grant

Foundations of Generalization

This project aims to explore generalization in overparameterized learning models through stochastic convex optimization and synthetic data generation, enhancing understanding of modern algorithms.

€ 1.419.375

ERC Starting Grant

Modern Challenges in Learning Theory

This project aims to develop a new theory of generalization in machine learning that better models real-world tasks and addresses data efficiency and privacy challenges.

€ 1.433.750

ERC Starting Grant

Optimizing for Generalization in Machine Learning

This project aims to unravel the mystery of generalization in machine learning by developing novel optimization algorithms to enhance the reliability and applicability of ML in critical domains.

€ 1.494.375

ERC Starting Grant

Algorithmic Bias Control in Deep learning

The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.

€ 1.500.000

ERC Consolidator Grant

Reconciling Classical and Modern (Deep) Machine Learning for Real-World Applications

APHELEIA aims to create robust, interpretable, and efficient machine learning models that require less data by integrating classical methods with modern deep learning, fostering interdisciplinary collaboration.

€ 1.999.375