Measuring and Mitigating Risks of AI-driven Information Targeting

This project aims to assess the risks of AI-driven information targeting at the individual, algorithmic, and platform levels, and to propose protective measures through an innovative measurement methodology.

Grant
€ 1.499.953
2022

Project details

Introduction

We are witnessing a massive shift in the way people consume information. In the past, people played an active role in selecting the news they read. More recently, information has begun to appear in people's social media feeds as a byproduct of their social relations.

Current Trends

At present, we see a new shift brought about by the emergence of online advertising platforms, where third parties can pay to show specific information to particular groups of people through targeted ads. These targeting technologies are powered by AI-driven algorithms.

Risks of Information Targeting

Using these technologies to promote information, rather than the products they were initially designed for, opens the way for self-interested groups to exploit users' personal data in order to manipulate them. European institutions recognize these risks, and many fear that the technology will be weaponized to engineer polarization or manipulate voters.

Project Goals

The goal of this project is to study the risks of AI-driven information targeting at three levels:

  1. Human level: under which conditions can targeted information influence an individual's beliefs?
  2. Algorithmic level: under which conditions can AI-driven targeting algorithms exploit people's vulnerabilities?
  3. Platform level: do targeting technologies lead to biases in the quality of information that different groups of people receive and assimilate?

Then, we will use this understanding to propose protection mechanisms for platforms, regulators, and users.

Methodology

This proposal's key asset is a novel measurement methodology that will allow for a rigorous and realistic evaluation of risks by enabling randomized controlled trials on social media. The methodology builds on advances in multiple disciplines and takes advantage of our recent breakthrough in designing independent auditing systems for social media advertising.
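To illustrate what a randomized controlled trial of ad targeting can look like in its simplest form (a hedged sketch, not the project's actual methodology; the function and parameter names below, such as estimate_targeting_effect and the two belief-measurement callables, are hypothetical), the snippet randomly assigns participants to a targeted-ad group or a control group and estimates the average treatment effect as the difference in mean belief scores:

    import random
    import statistics

    def estimate_targeting_effect(participants, measure_belief_targeted,
                                  measure_belief_control, seed=42):
        """Randomly split participants into a targeted-ad group and a control
        group, then return the difference in mean post-exposure belief scores
        (a simple estimate of the average treatment effect)."""
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)          # random assignment to conditions
        half = len(shuffled) // 2
        treated, control = shuffled[:half], shuffled[half:]

        # Hypothetical callables returning a numeric belief measurement
        # for each participant after exposure (or non-exposure).
        treated_scores = [measure_belief_targeted(p) for p in treated]
        control_scores = [measure_belief_control(p) for p in control]

        return statistics.mean(treated_scores) - statistics.mean(control_scores)

Randomizing who is exposed to the targeted information is what allows any difference in beliefs to be attributed to the targeting itself rather than to pre-existing differences between groups.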

Conclusion

Successful execution will provide a solid foundation for sustainable technologies that ensure healthy information targeting.

Financial details & Timeline

Financial details

Grant amount: € 1.499.953
Total project budget: € 1.499.953

Timeline

Start date: 1-10-2022
End date: 30-9-2027
Grant year: 2022

Partners & Locations

Project partners

  • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (lead partner)
  • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS

Country

France

Similar projects within the European Research Council

ERC Starting Grant

Designing Social Media Recommendation Algorithms for Societal Good

The project aims to enhance social media algorithms by integrating civic discourse values to reduce risks to social cohesion while balancing freedom of expression through participatory design and risk assessment.

€ 2.037.464
ERC Starting Grant

Social Media: Measuring Effects and Mitigating Downsides

This project aims to investigate the causal effects of social media on political engagement and mental health, while evaluating interventions to mitigate its negative impacts on users and society.

€ 1.494.625
ERC Starting Grant

Human Ads: Towards Fair Advertising in Content Monetization on Social Media

HUMANads aims to establish a European legal framework for fair advertising by human ads on social media, addressing transparency issues in commercial and political communications through interdisciplinary research.

€ 1.500.000
ERC Consolidator Grant

Enhancing Protections through the Collective Auditing of Algorithmic Personalization

The project aims to develop mathematical foundations for auditing algorithmic personalization systems while ensuring privacy, autonomy, and positive social impact.

€ 1.741.309
ERC Advanced Grant

VIrtual GuardIan AngeLs for the post-truth Information Age

The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.

€ 2.490.000

Similar projects from other schemes

Mkb-innovatiestimulering

Social media platform

The project develops a new social media platform that protects the privacy of vulnerable users by integrating innovative technologies, without data collection by multinationals.

€ 20.000
Mkb-innovatiestimulering

Self-service media campaign platform

The project develops a user-friendly self-service platform for dynamic Rich Media advertisements, focused on personalized campaigns via big data and in-app measurements.

€ 164.400
Mkb-innovatiestimulering

Project POLIGEN-AI

The project focuses on developing a reliable "fact-based" chatbot to combat disinformation and support informed decision-making, with attention to technical and legal feasibility.

€ 20.000
Mkb-innovatiestimulering

AI in Behavioral Sciences

The project develops an AI system for real-time synthesis of privacy-sensitive data in behavioral science research.

€ 19.992