
Explainable and Robust Automatic Fact Checking

ExplainYourself aims to develop explainable automatic fact-checking methods using machine learning to enhance transparency and user trust through diverse, accurate explanations of model predictions.

Grant
€ 1.498.616
2023

Project details

Introduction

ExplainYourself proposes to study explainable automatic fact checking, the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations about how the model arrived at the prediction.

Challenges in Current Methods

Automatic fact checking methods often use opaque deep neural network models, whose inner workings cannot easily be explained. Especially for complex tasks such as automatic fact checking, this hinders greater adoption, as it is unclear to users when the models' predictions can be trusted.

Existing explainable ML methods partly overcome this by reducing the task of explanation generation to highlighting the right rationale. While a good first step, this does not fully explain how an ML model arrived at a prediction.
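
To make the limitation concrete, here is a minimal, self-contained Python sketch of extractive rationale highlighting; the lexical-overlap scorer and the example claim are illustrative stand-ins, not the project's method or data:

    # Toy sketch of extractive rationale highlighting (illustrative only, not
    # the project's method): a lexical-overlap scorer stands in for a trained
    # model. The "explanation" is just the highest-scoring evidence sentence.

    def tokenize(text: str) -> set[str]:
        """Lowercase, strip basic punctuation, split on whitespace."""
        return {word.strip(".,!?").lower() for word in text.split()}

    def highlight_rationale(claim: str, evidence: list[str], k: int = 1) -> list[str]:
        """Return the k evidence sentences with the largest token overlap with the claim."""
        claim_tokens = tokenize(claim)
        ranked = sorted(evidence, key=lambda s: len(tokenize(s) & claim_tokens), reverse=True)
        return ranked[:k]

    claim = "The Eiffel Tower is located in Berlin."
    evidence = [
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "Berlin is the capital and largest city of Germany.",
    ]
    print(highlight_rationale(claim, evidence))  # selects the Paris sentence

The output is a highlighted span, but nothing in it accounts for the reasoning that connects the evidence to a verdict.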

Complexity of Fact Checking

For knowledge-intensive natural language understanding (NLU) tasks such as fact checking, an ML model needs to learn complex relationships between the claim, multiple evidence documents, and common-sense knowledge, in addition to retrieving the right evidence. There is currently no explainability method that aims to illuminate this highly complex process.
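
The skeleton below illustrates the moving parts involved, under assumed toy components (the function names and stubs are hypothetical, not the project's architecture): retrieval and reasoning are separate, interacting stages, so a faithful explanation has to cover both.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Verdict:
        label: str            # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"
        evidence: list[str]   # retrieved passages the prediction relied on
        explanation: str      # how claim, evidence, and background knowledge connect

    def check_claim(claim: str, corpus: list[str],
                    retrieve: Callable[[str, list[str]], list[str]],
                    reason: Callable[[str, list[str]], tuple[str, str]]) -> Verdict:
        evidence = retrieve(claim, corpus)            # stage 1: find candidate evidence
        label, explanation = reason(claim, evidence)  # stage 2: multi-evidence reasoning
        return Verdict(label, evidence, explanation)

    # Hypothetical toy stand-ins so the skeleton runs end to end.
    def toy_retrieve(claim: str, corpus: list[str]) -> list[str]:
        return corpus[:2]  # a real system would rank documents against the claim

    def toy_reason(claim: str, evidence: list[str]) -> tuple[str, str]:
        return "NOT ENOUGH INFO", f"Retrieved {len(evidence)} passages; no entailment found."

    print(check_claim("The Moon is made of cheese.",
                      ["Passage A.", "Passage B.", "Passage C."],
                      toy_retrieve, toy_reason))

Replacing either stub with a trained neural component is precisely what makes the combined process hard to explain.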

In addition, existing approaches are unable to produce diverse explanations geared towards users with different information needs.

Proposed Innovations

ExplainYourself radically departs from existing work by proposing methods for explainable fact checking that more accurately reflect how fact-checking models make decisions and that are useful to diverse groups of end users.

Future Applications

It is expected that these innovations will apply to explanation generation for other knowledge-intensive NLU tasks, such as:

  1. Question answering
  2. Entity linking

To achieve this, ExplainYourself builds on my pioneering work on explainable fact checking as well as my interdisciplinary expertise.

Financial details & Timeline

Financial details

Grant amount: € 1.498.616
Total project budget: € 1.498.616

Timeline

Start date: 1-9-2023
End date: 31-8-2028
Grant year: 2023

Partners & Locations

Project partners

  • KOBENHAVNS UNIVERSITET (coordinator)

Country

Denmark

European Research Council

Funding of up to €10 million for ground-breaking frontier research through ERC grants (Starting, Consolidator, Advanced, Synergy, Proof of Concept).

Similar projects within the European Research Council

Interactive and Explainable Human-Centered AutoML

ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.

ERC Starting Grant
€ 1.459.763
2022

Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI

Develop adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction.

ERC Starting Grant
€ 1.470.250
2023

Uniting Statistical Testing and Machine Learning for Safe Predictions

The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.

ERC Starting Grant
€ 1.500.000
2024

Controlling Large Language Models

Develop a framework to understand and control large language models, addressing biases and flaws to ensure safe and responsible AI adoption.

ERC Starting Grant
€ 1.500.000
2024

Machine learning in science and society: A dangerous toy?

This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.

ERC Starting Grant
€ 1.500.000
2025

Similar projects from other schemes

eXplainable AI in Personalized Mental Healthcare

This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, focused on transparency and reliability in mental healthcare.

Mkb-innovatiestimulering Topsectoren R&D AI
€ 350.000
2022

Onderzoek haalbaarheid AI factcheck module

The project investigates the feasibility of an AI module that can extract claims from content and verify their accuracy and relevance against diverse data sources.

Mkb-innovatiestimulering Topsectoren Haalbaarheid
€ 20.000
2023

Haalbaarheidsonderzoek online tool voor toepassing Targeted Maximum Likelihood Estimation (TMLE)

Researchable B.V. is developing a SaaS solution that uses TMLE to make the invisible layer of AI computations visible through Explainable AI (XAI), giving better insight into predictions.

Mkb-innovatiestimulering Topsectoren Haalbaarheid
€ 20.000
2020

HURL

This project investigates the feasibility of Explainable Reinforcement Learning (URL) to give end users insight into algorithmic decisions within a simulated trading environment.

Mkb-innovatiestimulering Topsectoren Haalbaarheid
€ 19.200
2022

Project Hominis

The project focuses on developing an ethical AI system for natural language processing that minimizes biases and manages technical, economic, and regulatory risks.

Mkb-innovatiestimulering Topsectoren Haalbaarheid
€ 20.000
2022
