VIrtual GuardIan AngeLs for the post-truth Information Age

The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.

Grant
€ 2.490.000
2024

Project details

Introduction

This project is motivated by the hypothesis that we are approaching a post-truth society, where it becomes practically impossible for anyone to distinguish fact from fiction in all but the most trivial questions. A main cause of this evolution is people’s innate dependence on cognitive heuristics to process information, which may lead to biases and poor decision making.

Cognitive Biases and Networked Society

These biases risk being amplified in a networked society, where people depend on others to form their opinions and judgments and to determine their actions. Contemporary generative Artificial Intelligence technologies may further exacerbate this risk, given their ability to fabricate and efficiently spread false but highly convincing information at an unprecedented scale.

Technological Impact

Combined with expected advances in virtual, augmented, mixed, and extended reality technologies, this may create an epistemic crisis with consequences that are hard to fully fathom: a post-truth era on steroids.

Project Goals

In the VIGILIA project, we will investigate a possible mitigation strategy. We propose to develop automated techniques for:

  1. Detecting triggers of cognitive biases and heuristics in humans when facing information.
  2. Assessing their effect at an interpersonal level (on trust, reputation, and information propagation).
  3. Evaluating their impact at a societal level (in terms of possible irrational behavior and polarization).

Methodology

We aim to achieve this by leveraging techniques from AI itself, in particular Large Language Models, and by building on the advanced user-modeling approaches developed in the earlier ERC Consolidator Grant FORSIED.
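The project abstract does not specify an implementation, but as a purely illustrative sketch of the first goal (detecting bias triggers in text with an LLM), one could start from off-the-shelf zero-shot classification via the Hugging Face transformers pipeline. The candidate labels below are hypothetical examples for illustration, not VIGILIA's actual taxonomy or method.

# Illustrative sketch only: zero-shot flagging of possible cognitive-bias
# triggers in a text snippet with the Hugging Face transformers pipeline.
# The label set is a hypothetical example, not the VIGILIA taxonomy.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CANDIDATE_TRIGGERS = [
    "appeal to fear",
    "appeal to authority",
    "confirmation of prior beliefs",
    "anchoring on a specific number",
    "neutral factual statement",
]

def flag_bias_triggers(text: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return (label, score) pairs whose score exceeds the threshold."""
    result = classifier(text, candidate_labels=CANDIDATE_TRIGGERS, multi_label=True)
    return [(label, score)
            for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

if __name__ == "__main__":
    snippet = ("Experts warn that unless you act now, "
               "your savings could be wiped out overnight.")
    print(flag_bias_triggers(snippet))

A production system would of course require a validated taxonomy of bias triggers and dedicated evaluation, which is precisely what the project proposes to research.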

Integration and Ethics

Our results will be integrated within tools that we refer to as VIrtual GuardIan AngeLs (VIGILs), aimed at news and social media consumers, journalists, scientific researchers, and political decision makers. Ethical questions that arise will be identified and treated as first-class concerns within the research project.

Financial details & Timeline

Financial details

Grant amount: € 2.490.000
Total project budget: € 2.490.000

Timeline

Start date: 1-10-2024
End date: 30-9-2029
Grant year: 2024

Partners & Locations

Project partners

  • UNIVERSITEIT GENT (lead partner)

Country(ies)

Belgium

Similar projects within the European Research Council

ERC Starting Grant

Measuring and Mitigating Risks of AI-driven Information Targeting

This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies.

€ 1.499.953
ERC Advanced Grant

Harmony within society

This project aims to develop a unified framework for understanding social interactions and divisive behaviors, exploring safe spaces, transparency, and coopetition to enhance societal engagement.

€ 1.597.750
ERC Starting Grant

The epistemology of costly communication – offline and online

COST-X aims to develop a new methodology using Costly Signalling Theory to understand and improve truthful communication norms, addressing misinformation and enhancing democratic discourse.

€ 1.389.776
ERC Consolidator Grant

Biases in Administrative Service Encounters: Transitioning from Human to Artificial Intelligence

This project aims to analyze communicative biases in public service encounters to assess the impact of transitioning from human to AI agents, enhancing service delivery while safeguarding democratic legitimacy.

€ 1.954.746
ERC Proof of Concept Grant

FARE_AUDIT: Fake News Recommendations - an Auditing System of Differential Tracking and Search Engine Results

FARE_AUDIT develops a privacy-protecting tool to audit search engines, aiming to enhance public awareness and empower users to identify and mitigate disinformation online.

€ 150.000

Similar projects from other schemes

Mkb-innovati...

Project POLIGEN-AI

The project focuses on developing a reliable "fact-based" chatbot to combat disinformation and support informed decision-making, with attention to technical and legal feasibility.

€ 20.000
EIC Pathfinder

Value-Aware Artificial Intelligence

The VALAWAI project aims to develop a toolbox for Value-Aware AI that integrates moral consciousness to enhance ethical decision-making in social media, robotics, and medical protocols.

€ 3.926.432
EIC Pathfinder

Context-aware adaptive visualizations for critical decision making

SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence.

€ 4.485.655
EIC Pathfinder

Counterfactual Assessment and Valuation for Awareness Architecture

The CAVAA project aims to develop a computational architecture for awareness in biological and technological systems, enhancing user experience through explainability and adaptability in various applications.

€ 3.132.460