VIrtual GuardIan AngeLs for the post-truth Information Age
The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.
Project details
Introduction
This project is motivated by the hypothesis that we are approaching a post-truth society, where it becomes practically impossible for anyone to distinguish fact from fiction in all but the most trivial questions. A main cause of this evolution is people’s innate dependence on cognitive heuristics to process information, which may lead to biases and poor decision making.
Cognitive Biases and Networked Society
These biases risk being amplified in a networked society, where people depend on others to form their opinions and judgments and to determine their actions. Contemporary generative Artificial Intelligence technologies may further exacerbate this risk, given their ability to fabricate and efficiently spread false but highly convincing information at an unprecedented scale.
Technological Impact
Combined with expected advances in virtual, augmented, mixed, and extended reality technologies, this may create an epistemic crisis with consequences that are hard to fully fathom: a post-truth era on steroids.
Project Goals
In the VIGILIA project, we will investigate a possible mitigation strategy. We propose to develop automated techniques for:
- Detecting triggers of cognitive biases and heuristics in humans when facing information.
- Assessing their effect at an interpersonal level (their effect on trust, reputation, and information propagation).
- Evaluating their impact at a societal level (in terms of possible irrational behavior and polarization).
Methodology
We aim to achieve this by leveraging techniques from AI itself, in particular Large Language Models, as well as by building on advanced user modeling approaches from the previous ERC Consolidator Grant FORSIED.
Integration and Ethics
Our results will be integrated within tools that we refer to as VIrtual GuardIan AngeLs (VIGILs), aimed at news and social media consumers, journalists, scientific researchers, and political decision makers. Ethical questions that arise will be identified and addressed as first-class research questions within the project.
Financial details & Timeline
Financial details
Grant amount | € 2.490.000 |
Total project budget | € 2.490.000 |
Timeline
Start date | 1-10-2024 |
End date | 30-9-2029 |
Grant year | 2024 |
Partners & Locations
Project partners
- UNIVERSITEIT GENT (coordinator)
Country(ies)
Similar projects within the European Research Council
Project | Scheme | Amount | Year | Action |
---|---|---|---|---|
Measuring and Mitigating Risks of AI-driven Information Targeting. This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies. | ERC Starting... | € 1.499.953 | 2022 | Details |
Harmony within society. This project aims to develop a unified framework for understanding social interactions and divisive behaviors, exploring safe spaces, transparency, and coopetition to enhance societal engagement. | ERC Advanced... | € 1.597.750 | 2024 | Details |
The epistemology of costly communication – offline and online. COST-X aims to develop a new methodology using Costly Signalling Theory to understand and improve truthful communication norms, addressing misinformation and enhancing democratic discourse. | ERC Starting... | € 1.389.776 | 2025 | Details |
Biases in Administrative Service Encounters: Transitioning from Human to Artificial Intelligence. This project aims to analyze communicative biases in public service encounters to assess the impact of transitioning from human to AI agents, enhancing service delivery while safeguarding democratic legitimacy. | ERC Consolid... | € 1.954.746 | 2025 | Details |
FARE_AUDIT: Fake News Recommendations - an Auditing System of Differential Tracking and Search Engine Results. FARE_AUDIT develops a privacy-protecting tool to audit search engines, aiming to enhance public awareness and empower users to identify and mitigate disinformation online. | ERC Proof of... | € 150.000 | 2022 | Details |
Similar projects from other schemes
Project | Scheme | Amount | Year | Action |
---|---|---|---|---|
Project POLIGEN-AI. The project focuses on developing a reliable "fact-based" chatbot to combat disinformation and support informed decision-making, with attention to technical and legal feasibility. | Mkb-innovati... | € 20.000 | 2023 | Details |
Value-Aware Artificial Intelligence. The VALAWAI project aims to develop a toolbox for Value-Aware AI that integrates moral consciousness to enhance ethical decision-making in social media, robotics, and medical protocols. | EIC Pathfinder | € 3.926.432 | 2022 | Details |
Context-aware adaptive visualizations for critical decision making. SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence. | EIC Pathfinder | € 4.485.655 | 2022 | Details |
Counterfactual Assessment and Valuation for Awareness Architecture. The CAVAA project aims to develop a computational architecture for awareness in biological and technological systems, enhancing user experience through explainability and adaptability in various applications. | EIC Pathfinder | € 3.132.460 | 2022 | Details |