Machine learning in science and society: A dangerous toy?
This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.
Project details
Introduction
Deep learning (DL) models are encroaching on nearly all our knowledge institutions. Ever more scientific fields—from medical science to fundamental physics—are turning to DL to solve long-standing problems or make new discoveries. At the same time, DL is used across society to inform and provide knowledge.
Need for Evaluation
We urgently need to evaluate the potential and the dangers of adopting DL for epistemic purposes across science and society. This project uncovers the epistemic strengths and limits of DL models, which are becoming the single most important way we structure our knowledge, and it does so by starting from an innovative hypothesis: that DL models are toy models.
Understanding Toy Models
A toy model is a type of highly idealized model that greatly distorts the gritty details of the real world. Every scientific domain has its own toy models that are used to "play around" with different features, gaining insight into complex phenomena.
Epistemic Benefits and Risks
Conceptualizing DL models as toy models exposes the epistemic benefits of DL, but also the enormous risk of overreliance. Since toy models are so divorced from the real world, how do we know they are not leading us astray?
Project Objectives
TOY addresses this fundamental issue by:
- Identifying interlocking model puzzles that face DL models and toy models alike.
- Developing a theory of DL (toy) models in science and society based on the function of their idealizations.
- Developing a philosophical theory for evaluating the epistemic value of DL (toy) models across science and society.
Contributions to Philosophy
In so doing, TOY solves existing problems, answers open questions, and identifies new challenges in:
- Philosophy of science, on the nature and epistemic value of idealization and toy models.
- Philosophy of machine learning (ML), by looking beyond DL opacity and developing a philosophical method for evaluating the epistemic value of DL models.
- Ethics of AI, by bringing siloed debates together with philosophy of science and providing necessary guidance on the appropriate use and trustworthiness of DL in society.
Financial details & Timeline
Financial details
Grant amount | € 1.500.000 |
Total project budget | € 1.500.000 |
Timeline
Start date | 1-1-2025 |
End date | 31-12-2029 |
Grant year | 2025 |
Partners & Locations
Project partners
- UNIVERSITEIT UTRECHT (lead partner)
Country/Countries
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
The Culture of Algorithmic Models: Advancing the Historical Epistemology of Artificial Intelligence. This project aims to develop a new epistemology and history of AI by tracing its origins in algorithmic modeling, impacting fields like digital humanities and AI ethics. | ERC Consolidator Grant | € 1.927.573 | 2024 |
Dynamics-Aware Theory of Deep Learning. This project aims to create a robust theoretical framework for deep learning, enhancing understanding and practical tools to improve model performance and reduce complexity in various applications. | ERC Starting Grant | € 1.498.410 | 2022 |
Algorithmic Bias Control in Deep learning. The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications. | ERC Starting Grant | € 1.500.000 | 2022 |
Uniting Statistical Testing and Machine Learning for Safe Predictions. The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications. | ERC Starting Grant | € 1.500.000 | 2024 |
Collaborative Machine Intelligence. CollectiveMinds aims to revolutionize machine learning by enabling decentralized, collaborative model updates to reduce resource consumption and democratize AI across various sectors. | ERC Consolidator Grant | € 2.000.000 | 2025 |
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
InContract AI. The project investigates the use of digital twins and AI to automate contracts within the InContract tool. | Mkb-innovati... | € 20.000 | 2023 |