The Culture of Algorithmic Models: Advancing the Historical Epistemology of Artificial Intelligence
This project develops a new epistemology and history of AI by tracing its origins to algorithmic modelling, with implications for fields such as digital humanities and AI ethics.
Project details
Introduction
The project proposes an alternative epistemology of artificial intelligence (AI). It argues that what is at stake in AI is not its similarity to human rationality (anthropomorphism), but its epistemic difference. Rather than speculating in the abstract on whether a machine can “think”, the project addresses a historical question: What is the logical and technical form of the current paradigm of AI, machine learning, and what is its origin?
Historical Context
The project traces the origins of machine learning to the invention of algorithmic modelling (more precisely, algorithmic statistical modelling), which took shape in the artificial neural network research of the mid-1950s. It observes that a coherent history and epistemology of this groundbreaking artefact is still missing.
Objectives
The project pursues three objectives to turn its findings into a constructive paradigm:
- A new history of AI that stresses the key role of algorithmic models in the evolution of statistics, computer science, artificial neural networks, and machine learning.
- A new epistemology of AI that engages with the psychology of learning and the historical epistemology of science and technology.
- A study of the impact of large multi-purpose models (e.g. BERT, GPT-3, Codex, and other recent foundation models) on work automation, data governance, and digital culture.
Impact
By consolidating a model theory of AI, the research will benefit the reception of AI in general, along with fields such as digital humanities, scientific computing, robotics, and AI ethics. Ultimately, it will help situate AI in the global horizon of the current technosphere and in the long history of knowledge systems.
Financial details & Timeline
Financial details
Grant amount | € 1.927.573 |
Total project budget | € 1.927.573 |
Timeline
Start date | 1-1-2024 |
End date | 31-12-2028 |
Grant year | 2024 |
Partners & Locations
Project partners
- UNIVERSITA CA' FOSCARI VENEZIA (lead partner)
Country(ies)
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
Machine learning in science and society: A dangerous toy? | ERC Starting Grant | € 1.500.000 | 2025 |
Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI | ERC Starting Grant | € 1.499.961 | 2023 |
Towards an Artificial Cognitive Science | ERC Starting Grant | € 1.496.000 | 2024 |
Participatory Algorithmic Justice: A multi-sited ethnography to advance algorithmic justice through participatory design | ERC Starting Grant | € 1.472.390 | 2025 |
Narrative Archetypes for Artificial Intelligence | ERC Advanced Grant | € 2.500.000 | 2024 |
Machine learning in science and society: A dangerous toy?
This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.
Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.
This project aims to explore the socio-cultural dynamics of AI in health governance across six countries to develop a theory on ethical AI intervention and its impact on national health policies.
Towards an Artificial Cognitive Science
This project aims to establish a new field of artificial cognitive science by applying cognitive psychology to enhance the learning and decision-making of advanced AI models.
Participatory Algorithmic Justice: A multi-sited ethnography to advance algorithmic justice through participatory design
This project develops participatory algorithmic justice to address AI harms by centering marginalized voices in research and design interventions for equitable technology solutions.
Narrative Archetypes for Artificial Intelligence
AI STORIES investigates how narrative archetypes in training data influence biases in AI outputs, aiming to develop a narratology of AI to enhance cultural diversity and inform stakeholders.
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
Project Hominis | Mkb-innovati... | € 20.000 | 2022 |
eXplainable AI in Personalized Mental Healthcare | Mkb-innovati... | € 350.000 | 2022 |
Project Hominis
The project focuses on developing an ethical AI system for natural language processing that minimises bias and manages technical, economic, and regulatory risks.
eXplainable AI in Personalized Mental Healthcare
This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, focused on transparency and reliability in mental healthcare.