Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching
Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.
Project details
Introduction
The vision behind Act.AI is to use statistical matching to audit and mitigate bias in Artificial Intelligence (AI) models. AI adoption has grown rapidly across industries, from financial services to healthcare, education, and job recruitment.
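As a rough illustration of the core idea (a sketch, not Act.AI's published method): statistical matching pairs each individual from a protected group with the most similar individual from a reference group on non-sensitive covariates, so a model that treats like cases alike should give matched pairs similar predictions. The helper name `matched_outcome_gap` below is invented, and scikit-learn's nearest-neighbour search stands in for whatever matching procedure Act.AI actually uses:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_outcome_gap(X, y_pred, sensitive):
    """Mean prediction gap between matched pairs.

    X         : (n, d) array of non-sensitive covariates
    y_pred    : (n,) array of model predictions (scores or labels)
    sensitive : (n,) boolean mask, True for the protected group
    """
    X_prot, X_ref = X[sensitive], X[~sensitive]
    y_prot, y_ref = y_pred[sensitive], y_pred[~sensitive]

    # 1-nearest-neighbour matching: for each protected individual,
    # find the most similar reference individual in covariate space.
    nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
    _, idx = nn.kneighbors(X_prot)

    # If the model treats like cases alike, this gap should be near zero.
    return float(np.mean(y_prot - y_ref[idx.ravel()]))
```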
Concerns About AI Fairness
However, as AI algorithms have become increasingly sophisticated and pervasive in decision-making processes, concerns have arisen about their fairness and regulatory compliance. In particular, the EU AI Act requires providers of high-risk AI systems, such as those used in employment, credit, or healthcare, to identify and address discrimination by their algorithms against particular demographic groups.
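In practice, "identifying discrimination" usually starts with a disparity metric computed over model outputs. The minimal example below uses demographic parity expressed as a selection-rate ratio; the 0.8 threshold in the comment is a heuristic borrowed from US employment guidance, not a requirement of the EU AI Act, and the function name is invented for illustration:

```python
import numpy as np

def selection_rate_ratio(y_pred, group):
    """Ratio of positive-prediction rates between groups (0/1 predictions)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Toy data: group "A" is selected 75% of the time, group "B" only 25%.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(selection_rate_ratio(y_pred, group))  # 0.33 -- well under the 0.8 heuristic
```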
Challenges for AI Startups
Ensuring compliance with the Act can be challenging, particularly for AI startups that may not have the resources or expertise to fully understand and implement the Act's requirements.
Addressing Disconnects
To address existing disconnects between the capabilities of AI fairness toolkits and the needs of current practitioners, Act.AI can be integrated into any AI workflow in a plug-and-play fashion to continuously monitor and improve its fairness.
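What plug-and-play integration could look like, as a hypothetical sketch: a thin wrapper around any model exposing `.predict()` that logs predictions together with group labels and recomputes a fairness metric on demand. The `FairnessMonitor` class and its interface are assumptions for illustration, not Act.AI's actual API:

```python
import numpy as np

class FairnessMonitor:
    """Hypothetical wrapper: monitors any model exposing .predict()."""

    def __init__(self, model, metric, window=1000):
        self.model, self.metric, self.window = model, metric, window
        self.preds, self.groups = [], []

    def predict(self, X, group):
        # Delegate to the wrapped model, then log predictions and
        # group labels for later auditing.
        y = self.model.predict(X)
        self.preds.extend(np.asarray(y).tolist())
        self.groups.extend(list(group))
        # Keep only the most recent sliding window of observations.
        self.preds  = self.preds[-self.window:]
        self.groups = self.groups[-self.window:]
        return y

    def audit(self):
        """Fairness metric (e.g. selection_rate_ratio above) over the window."""
        return self.metric(np.array(self.preds), np.array(self.groups))
```

Dropping this into an existing pipeline would then be a one-line change: construct `FairnessMonitor(model, selection_rate_ratio)`, replace calls to `model.predict(X)` with `monitor.predict(X, group)`, and call `monitor.audit()` whenever a current fairness score is needed.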
Versatility of Act.AI
A key aspect of Act.AI is its ability to operate on different data types (tabular, images, and text) across a variety of tasks (binary and multiclass classification as well as regression). It can also match datasets from different domains, including out-of-distribution data, even when the datasets differ in their number of variables or features.
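One simple way such cross-schema matching could work, sketched below under the assumption that matching is restricted to the numeric columns both datasets share (the function name is hypothetical; the text does not describe Act.AI's actual procedure):

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

def match_on_shared_features(df_a, df_b):
    """For each row of df_a, the index of the nearest row of df_b,
    computed only on the numeric columns the two frames have in common."""
    shared = sorted(set(df_a.columns) & set(df_b.columns))
    nn = NearestNeighbors(n_neighbors=1).fit(df_b[shared].to_numpy())
    _, idx = nn.kneighbors(df_a[shared].to_numpy())
    return idx.ravel()
```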
Stakeholder Integration
To ensure usability, Act.AI will integrate feedback from relevant stakeholders in its two immediate target markets: financial services and healthcare.
Financial details & Timeline
Financial details
- Grant amount: € 150.000
- Total project budget: € 150.000
Timeline
- Start date: 1 June 2024
- End date: 30 November 2025
- Grant year: 2024
Partners & Locations
Project partners
- BCAM - BASQUE CENTER FOR APPLIED MATHEMATICS (lead partner)
Country(ies)
- Spain
Similar projects within the European Research Council
| Project | Programme | Amount | Year |
| --- | --- | --- | --- |
| Biases in Administrative Service Encounters | ERC Consolidator Grant | € 1.954.746 | 2025 |
| Measuring and Mitigating Risks of AI-driven Information Targeting | ERC Starting Grant | € 1.499.953 | 2022 |
| Algorithmic Bias Control in Deep learning | ERC Starting Grant | € 1.500.000 | 2022 |
| Human collaboration with AI agents in national health governance | ERC Starting Grant | € 1.499.961 | 2023 |
| Participatory Algorithmic Justice | ERC Starting Grant | € 1.472.390 | 2025 |
Biases in Administrative Service Encounters: Transitioning from Human to Artificial Intelligence
This project aims to analyze communicative biases in public service encounters to assess the impact of transitioning from human to AI agents, enhancing service delivery while safeguarding democratic legitimacy.
Measuring and Mitigating Risks of AI-driven Information Targeting
This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies.
Algorithmic Bias Control in Deep learning
The project aims to develop a theory of algorithmic bias in deep learning to improve training efficiency and generalization performance for real-world applications.
Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.
This project aims to explore the socio-cultural dynamics of AI in health governance across six countries to develop a theory on ethical AI intervention and its impact on national health policies.
Participatory Algorithmic Justice: A multi-sited ethnography to advance algorithmic justice through participatory design
This project develops participatory algorithmic justice to address AI harms by centering marginalized voices in research and design interventions for equitable technology solutions.
Similar projects from other schemes
| Project | Programme | Amount | Year |
| --- | --- | --- | --- |
| Quality Assurance for AI | EIC Accelerator | € 2.499.999 | 2023 |
| Combating unequal distribution in financial choices | Mkb-innovatiestimulering | € 20.000 | 2023 |
| Project Hominis | Mkb-innovatiestimulering | € 20.000 | 2022 |
| Generic linguistic AI prediction model for fair HR decision-making | Mkb-innovatiestimulering | € 20.000 | 2020 |
| eXplainable AI in Personalized Mental Healthcare | Mkb-innovatiestimulering | € 350.000 | 2022 |
Quality Assurance for AI
GISKARD is developing an open-source SaaS platform for automated AI quality testing to address ethical biases and prediction errors, aiming to lead in compliance with the EU AI Act.
Combating unequal distribution in financial choices
The project focuses on combating inequality by deploying AI and big data to identify biases and improve access to resources.
Project Hominis
The project focuses on developing an ethical AI system for natural language processing that minimizes bias and manages technical, economic, and regulatory risks.
Generic linguistic AI prediction model for fair HR decision-making
Seedlink is developing a generic AI prediction model for HR decision-making, aimed at improving accuracy and fairness without client-specific data and at making such models accessible to smaller companies.
eXplainable AI in Personalized Mental Healthcare
This project is developing an innovative AI platform that involves users in improving its algorithms through feedback loops, with a focus on transparency and reliability in mental healthcare.