Machine learning in science and society: A dangerous toy?

This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.

Grant
€ 1.500.000
2025

Project details

Introduction

Deep learning (DL) models are encroaching on nearly all our knowledge institutions. Ever more scientific fields—from medical science to fundamental physics—are turning to DL to solve long-standing problems or make new discoveries. At the same time, DL is used across society to inform decisions and provide knowledge.

Need for Evaluation

We urgently need to evaluate the potential and dangers of adopting DL for epistemic purposes, across science and society. This project uncovers the epistemic strengths and limits of DL models, which are becoming the single most important way we structure our knowledge, and it does so by starting from an innovative hypothesis: that DL models are toy models.

Understanding Toy Models

A toy model is a type of highly idealized model that greatly distorts the gritty details of the real world. Every scientific domain has its own toy models that are used to "play around" with different features, gaining insight into complex phenomena.
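
To make the idea concrete, below is a minimal sketch of one classic toy model from the social sciences, Schelling's segregation model. The grid size, empty fraction, tolerance threshold, and step count are illustrative choices and are not taken from the TOY project; the point is only to show what "playing around" with a drastically idealized model looks like.

```python
# A minimal sketch of a classic toy model: Schelling's (1971) segregation model.
# Two types of agents live on a toroidal grid; an agent relocates to a random
# empty cell if fewer than THRESHOLD of its occupied neighbours share its type.
# All parameters below are illustrative assumptions, not project settings.
import random

GRID, EMPTY_FRAC, THRESHOLD, STEPS = 30, 0.1, 0.3, 50


def init_grid():
    """Randomly place two agent types and some empty cells (0 = empty)."""
    n_empty = int(GRID * GRID * EMPTY_FRAC)
    n_agents = GRID * GRID - n_empty
    cells = [0] * n_empty + [1] * (n_agents // 2) + [2] * (n_agents - n_agents // 2)
    random.shuffle(cells)
    return [cells[i * GRID:(i + 1) * GRID] for i in range(GRID)]


def neighbour_counts(grid, r, c):
    """Count same-type and other-type occupied neighbours (Moore neighbourhood)."""
    me, same, other = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            val = grid[(r + dr) % GRID][(c + dc) % GRID]
            if val == me:
                same += 1
            elif val != 0:
                other += 1
    return same, other


def unhappy(grid, r, c):
    """An agent is unhappy if too few of its occupied neighbours share its type."""
    if grid[r][c] == 0:
        return False
    same, other = neighbour_counts(grid, r, c)
    return same + other > 0 and same / (same + other) < THRESHOLD


def step(grid):
    """Move every currently unhappy agent to a randomly chosen empty cell."""
    coords = [(r, c) for r in range(GRID) for c in range(GRID)]
    empties = [(r, c) for r, c in coords if grid[r][c] == 0]
    for r, c in [(r, c) for r, c in coords if unhappy(grid, r, c)]:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))


def segregation(grid):
    """Average fraction of same-type neighbours over all agents."""
    fracs = []
    for r in range(GRID):
        for c in range(GRID):
            if grid[r][c] == 0:
                continue
            same, other = neighbour_counts(grid, r, c)
            if same + other:
                fracs.append(same / (same + other))
    return sum(fracs) / len(fracs)


if __name__ == "__main__":
    grid = init_grid()
    print(f"segregation before: {segregation(grid):.2f}")
    for _ in range(STEPS):
        step(grid)
    print(f"segregation after:  {segregation(grid):.2f}")
```

Running this sketch typically shows the average same-type neighbour share climbing well above the 0.3 tolerance threshold: a surprising, robust pattern emerging from a model that plainly distorts the real world, which is exactly the kind of insight toy models are used to generate.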

Epistemic Benefits and Risks

Conceptualizing DL models as toy models exposes the epistemic benefits of DL, but also the enormous risk of overreliance. Since toy models are so divorced from the real world, how do we know they are not leading us astray?

Project Objectives

TOY addresses this fundamental issue by:

  1. Identifying interlocking model puzzles that face DL models and toy models alike.
  2. Developing a theory of DL (toy) models in science and society based on the function of their idealizations.
  3. Developing a philosophical theory for evaluating the epistemic value of DL (toy) models across science and society.

Contributions to Philosophy

In so doing, TOY solves existing problems, answers open questions, and identifies new challenges in:

  • Philosophy of science, on the nature and epistemic value of idealization and toy models.
  • Philosophy of machine learning (ML), by looking beyond DL opacity and developing a philosophical method for evaluating the epistemic value of DL models.
  • Ethics of AI, by bringing its siloed debates together with philosophy of science and providing necessary guidance on the appropriate use and trustworthiness of DL in society.

Financial details & Timeline

Financial details

Grant amount: € 1.500.000
Total project budget: € 1.500.000

Timeline

Start date: 1-1-2025
End date: 31-12-2029
Grant year: 2025

Partners & Locations

Project partners

  • UNIVERSITEIT UTRECHT (lead partner)

Country/Countries

Netherlands

Similar projects within the European Research Council

ERC Consolidator Grant

The Culture of Algorithmic Models: Advancing the Historical Epistemology of Artificial Intelligence

This project aims to develop a new epistemology and history of AI by tracing its origins in algorithmic modeling, impacting fields like digital humanities and AI ethics.

€ 1.927.573
ERC Starting Grant

Dynamics-Aware Theory of Deep Learning

This project aims to create a robust theoretical framework for deep learning, enhancing understanding and practical tools to improve model performance and reduce complexity in various applications.

€ 1.498.410
ERC Starting Grant

Algorithmic Bias Control in Deep learning

The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.

€ 1.500.000
ERC Starting Grant

Uniting Statistical Testing and Machine Learning for Safe Predictions

The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.

€ 1.500.000
ERC Consolidator Grant

Collaborative Machine Intelligence

CollectiveMinds aims to revolutionize machine learning by enabling decentralized, collaborative model updates to reduce resource consumption and democratize AI across various sectors.

€ 2.000.000

Similar projects from other schemes

Mkb-innovati...

InContract AI

The project investigates the use of digital twins and AI for automating contracts within the InContract tool.

€ 20.000