Foundations of Generalization

This project aims to explore generalization in overparameterized learning models through stochastic convex optimization and synthetic data generation, enhancing understanding of modern algorithms.

Grant
€ 1.419.375
2024

Project details

Introduction

Arguably, the most crucial objective of Learning Theory is to understand the basic notion of generalization: how can a learning agent generalize from a finite sample to the whole population? From that perspective, today's learning algorithms are poorly understood.
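The question of generalization can be made precise through the generalization gap; the following is a standard textbook formulation, not notation taken from the proposal itself:

```latex
% Generalization gap of a learned hypothesis h_S, trained on a sample
% S = (z_1, ..., z_n) drawn i.i.d. from an unknown distribution D,
% with loss function \ell:
\[
  \mathrm{gap}(h_S)
  \;=\;
  \underbrace{\mathbb{E}_{z \sim \mathcal{D}}\big[\ell(h_S, z)\big]}_{\text{population risk}}
  \;-\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell(h_S, z_i)}_{\text{empirical risk}} .
\]
% Generalization means this gap is small: performance measured on the
% finite sample transfers to the whole population.
```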

Best Practices and Challenges

In particular, best practices, such as fitting highly overparameterized models to relatively few data points, seem almost to contradict common wisdom. Classical models of learning appear incapable of explaining the impressive success of such algorithms.

Objective of the Proposal

The objective of this proposal is to understand generalization in overparameterized models and the role that algorithms play in learning. Toward this goal, I will consider two mathematical models of learning that shed light on this fundamental problem.

First Model: Stochastic Convex Optimization

The first model is the well-studied, yet only seemingly well-understood, model of Stochastic Convex Optimization. My investigations so far have revealed a picture far more complex than was previously known or assumed, regarding fundamental notions such as:

  1. Regularization
  2. Inductive bias
  3. Stability

These works show that even in this simple learning setup, understanding such fundamental principles can be a highly ambitious task. On the other hand, given the model's simplicity, such an understanding appears to be a prerequisite for any future model that explains modern machine learning algorithms.
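For reference, the Stochastic Convex Optimization model discussed above admits a standard formal statement; the notation below is the conventional one from the literature, not taken from the proposal:

```latex
% Stochastic Convex Optimization (SCO): the learner observes n i.i.d.
% samples z_1, ..., z_n ~ D and outputs a point w_S in a convex domain W,
% aiming to approximately minimize the population objective
\[
  \min_{w \in \mathcal{W}} \; F(w)
  \;:=\;
  \mathbb{E}_{z \sim \mathcal{D}}\big[ f(w, z) \big],
\]
% where f(., z) is convex for every z. The learner only has access to
% the empirical objective
\[
  F_S(w) \;:=\; \frac{1}{n} \sum_{i=1}^{n} f(w, z_i),
\]
% and generalization asks when minimizing F_S (possibly with
% regularization, or via a particular algorithm such as SGD) also
% yields a small excess population risk F(w_S) - min_w F(w).
```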

Second Model: Synthetic Data Generation

The second model considers the modern task of synthetic data generation, which serves as an ideal setting in which to further study the tension between concepts such as generalization and memorization.

Fundamental Questions

Here we face the challenge of formally modeling generalization and answering fundamental questions such as:

  • When is synthetic data original?
  • When is it a copy of the empirical data?

Financial details & Timeline

Financial details

Grant amount: € 1.419.375
Total project budget: € 1.419.375

Timeline

Start date: 1-1-2024
End date: 31-12-2028
Grant year: 2024

Partners & Locations

Project partners

  • TEL AVIV UNIVERSITY (coordinator)

Country

Israel

Similar projects within the European Research Council

ERC Starting...

Optimizing for Generalization in Machine Learning

This project aims to unravel the mystery of generalization in machine learning by developing novel optimization algorithms to enhance the reliability and applicability of ML in critical domains.

€ 1.494.375
ERC Starting...

Modern Challenges in Learning Theory

This project aims to develop a new theory of generalization in machine learning that better models real-world tasks and addresses data efficiency and privacy challenges.

€ 1.433.750
ERC Consolid...

Theoretical Understanding of Classic Learning Algorithms

The TUCLA project aims to enhance classic machine learning algorithms, particularly Bagging and Boosting, to achieve faster, data-efficient learning and improve their theoretical foundations.

€ 1.999.288
ERC Starting...

Understanding Deep Learning

The project aims to establish a solid theoretical foundation for deep learning by investigating optimization, statistical complexity, and representation, enhancing understanding and algorithm development.

€ 1.499.750
ERC Starting...

AI-based Learning for Physical Simulation

This project aims to enhance physical simulations by integrating machine learning with equation-based modeling for improved generalization and intelligibility, applicable across scientific disciplines and engineering.

€ 1.315.000