Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching

Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.

Grant
€ 150.000
2024

Project details

Introduction

The vision behind Act.AI is to use statistical matching to audit and mitigate bias in Artificial Intelligence (AI) models. AI has been adopted rapidly across industries, from financial services to healthcare, education, and job recruitment.

Concerns About AI Fairness

However, as AI algorithms have become increasingly sophisticated and pervasive in decision-making, concerns have arisen about their fairness and regulatory compliance. In particular, the EU AI Act requires that providers of AI for high-risk applications (such as employment, credit, or healthcare) identify, and thereby address, discrimination by their algorithms against certain demographic groups.

Challenges for AI Startups

Ensuring compliance with the Act can be challenging, particularly for AI startups that may not have the resources or expertise to fully understand and implement the Act's requirements.

Addressing Disconnects

To address the existing disconnect between the capabilities of AI fairness toolkits and practitioners' needs, the Act.AI tool can be integrated into any AI workflow in a plug-and-play fashion to continuously monitor and improve that workflow's fairness.
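The grant text does not describe Act.AI's actual interface. As a purely illustrative sketch (all names hypothetical), continuous plug-and-play bias monitoring can be as small as a function computing the demographic parity gap over each batch of a model's predictions:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across demographic groups (0 = perfectly balanced)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# A batch of binary predictions with each subject's demographic group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A monitoring wrapper would run such a check on every scoring batch and flag drift past a threshold; the function itself needs nothing from the surrounding model, which is what makes the approach plug-and-play.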

Versatility of Act.AI

A key aspect of Act.AI is the ability to operate with different types of data (tabular, images, and text) in a variety of contexts (binary and multiclass classification and regression). It is also able to match datasets in different domains, including out-of-distribution data, even if these datasets have different numbers of variables or features.
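The grant text does not specify the matching algorithm itself. One common form of statistical matching is nearest-neighbour hot-deck matching, sketched below with hypothetical names: each record in a recipient dataset borrows the fields it lacks (here, a sensitive attribute) from the closest donor record, using only the features the two datasets share:

```python
def nearest_neighbor_match(recipients, donors, shared_keys):
    """For each recipient record, find the donor record closest in
    squared Euclidean distance on the shared features, and copy the
    donor's extra fields onto a fused copy of the recipient."""
    def dist(r, d):
        return sum((r[k] - d[k]) ** 2 for k in shared_keys)
    matched = []
    for r in recipients:
        best = min(donors, key=lambda d: dist(r, d))
        fused = dict(r)
        for k, v in best.items():
            if k not in shared_keys:
                fused.setdefault(k, v)
        matched.append(fused)
    return matched

# Donor data carries a sensitive attribute the recipient data lacks.
donors = [
    {"age": 30, "income": 40, "group": "a"},
    {"age": 60, "income": 80, "group": "b"},
]
recipients = [{"age": 32, "income": 42}, {"age": 58, "income": 79}]
fused = nearest_neighbor_match(recipients, donors,
                               shared_keys=("age", "income"))
```

In practice the shared features would be standardized and the matches validated, but the sketch shows how two datasets with different numbers of variables can still be fused for a bias audit.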

Stakeholder Integration

To ensure usability, Act.AI will integrate feedback from relevant stakeholders in its two immediate target markets: financial services and healthcare.

Financial details & Timeline

Financial details

Grant amount: € 150.000
Total project budget: € 150.000

Timeline

Start date: 1-6-2024
End date: 30-11-2025
Grant year: 2024

Partners & Locations

Project partners

  • BCAM - BASQUE CENTER FOR APPLIED MATHEMATICS (lead partner)

Country(ies)

Spain

Similar projects within the European Research Council

ERC Consolidator Grant

Biases in Administrative Service Encounters: Transitioning from Human to Artificial Intelligence

This project aims to analyze communicative biases in public service encounters to assess the impact of transitioning from human to AI agents, enhancing service delivery while safeguarding democratic legitimacy.

€ 1.954.746
ERC Starting Grant

Measuring and Mitigating Risks of AI-driven Information Targeting

This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies.

€ 1.499.953
ERC Starting Grant

Algorithmic Bias Control in Deep learning

The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.

€ 1.500.000
ERC Starting Grant

Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.

This project aims to explore the socio-cultural dynamics of AI in health governance across six countries to develop a theory on ethical AI intervention and its impact on national health policies.

€ 1.499.961
ERC Starting Grant

Participatory Algorithmic Justice: A multi-sited ethnography to advance algorithmic justice through participatory design

This project develops participatory algorithmic justice to address AI harms by centering marginalized voices in research and design interventions for equitable technology solutions.

€ 1.472.390

Similar projects from other schemes

EIC Accelerator

Quality Assurance for AI

GISKARD is developing an open-source SaaS platform for automated AI quality testing to address ethical biases and prediction errors, aiming to lead in compliance with the EU AI Act.

€ 2.499.999
Mkb-innovati...

Fighting unequal distribution in financial choices.

The project focuses on combating inequality by deploying AI and big data to identify biases and improve access to resources.

€ 20.000
Mkb-innovati...

Project Hominis

The project focuses on developing an ethical AI system for natural language processing that minimizes biases and manages technical, economic, and regulatory risks.

€ 20.000
Mkb-innovati...

Generic linguistic AI prediction model for fair HR decision-making

Seedlink is developing a generic AI prediction model for HR decision-making, aimed at improving accuracy and fairness without customer-specific data and accessible to smaller companies.

€ 20.000
Mkb-innovati...

eXplainable AI in Personalized Mental Healthcare

This project is developing an innovative AI platform that involves users in improving algorithms through feedback loops, focused on transparency and reliability in mental healthcare.

€ 350.000