Value-Aware Artificial Intelligence
The VALAWAI project aims to develop a toolbox for Value-Aware AI that integrates moral consciousness to enhance ethical decision-making in social media, robotics, and medical protocols.
Project details
Introduction
By Value-Aware AI, we mean AI that includes a component performing the same function as human moral consciousness, namely the capacity to acquire and maintain a value system. The AI uses this system to decide whether particular actions are morally acceptable, and it is also aware of the value systems of its users, which allows it to understand the intent and motivation behind their actions and to engage with them appropriately.
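As a rough illustration of what such a component could look like, the minimal Python sketch below models a value system as a set of weighted values that scores candidate actions and judges their acceptability, with a second instance standing in for a user's attributed values. The class names, the weighted-sum scoring, and the acceptance threshold are assumptions made purely for illustration; VALAWAI does not prescribe this interface.

```python
# Illustrative sketch only: the names, the weighted-sum scoring, and the
# threshold are assumptions for illustration, not VALAWAI's actual design.
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    # How strongly the action promotes (+) or violates (-) each named value.
    value_impacts: dict[str, float] = field(default_factory=dict)


@dataclass
class ValueSystem:
    # Relative importance of each value held by the agent or attributed to a user.
    weights: dict[str, float] = field(default_factory=dict)
    acceptance_threshold: float = 0.0  # hypothetical cut-off

    def score(self, action: Action) -> float:
        """Weighted alignment of an action with this value system."""
        return sum(self.weights.get(value, 0.0) * impact
                   for value, impact in action.value_impacts.items())

    def judge(self, action: Action) -> bool:
        """Is the action morally acceptable under this value system?"""
        return self.score(action) >= self.acceptance_threshold


# Example: the agent's own values versus a user's attributed values.
agent_values = ValueSystem(weights={"honesty": 1.0, "care": 0.8})
user_values = ValueSystem(weights={"autonomy": 1.0, "care": 0.5})

share_unverified_claim = Action("share_unverified_claim",
                                {"honesty": -0.9, "care": -0.2})

print(agent_values.judge(share_unverified_claim))  # False: conflicts with honesty
print(user_values.score(share_unverified_claim))   # used to interpret the user's intent
```

In this reading, "being aware of a user's value system" amounts to maintaining a second ValueSystem instance for that user and using its scores to interpret why an action was taken.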
Project Overview
The VALAWAI project will develop a toolbox for building Value-Aware AI that rests on two pillars, both grounded in science (an illustrative sketch of how the two could interact follows the list):
- An architecture for consciousness inspired by the Global Neuronal Workspace model, developed on the basis of neurophysiological evidence and psychological data.
- A foundational framework for moral decision-making based on psychology, social cognition, and social brain science.
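As a rough intuition for how the two pillars could fit together, the toy sketch below runs one global-workspace-style cycle: specialist modules compete for broadcast via a salience score, and a moral-decision component may veto the winning proposal before it is acted on. The module names, the salience-based selection rule, and the veto interface are illustrative assumptions, not VALAWAI's actual architecture.

```python
# Toy sketch only: module names, the selection rule, and the veto interface
# are assumptions for illustration, not the project's implementation.
from typing import Callable, Optional

Proposal = dict  # e.g. {"source": str, "salience": float, "action": str}


def workspace_cycle(modules: list[Callable[[], Proposal]],
                    is_acceptable: Callable[[str], bool]) -> Optional[Proposal]:
    """Run one global-workspace-style cycle.

    Specialist modules each submit a proposal; the most salient one wins the
    competition for broadcast, and the moral-decision pillar may veto it.
    """
    proposals = [module() for module in modules]
    winner = max(proposals, key=lambda p: p["salience"])
    if not is_acceptable(winner["action"]):
        return None   # vetoed on moral grounds
    return winner     # broadcast globally and acted upon


# Usage with two stand-in specialist modules:
def perception() -> Proposal:
    return {"source": "perception", "salience": 0.4, "action": "flag_post"}


def dialogue() -> Proposal:
    return {"source": "dialogue", "salience": 0.7, "action": "nudge_user"}


print(workspace_cycle([perception, dialogue],
                      is_acceptable=lambda action: action != "nudge_user"))
# -> None, because the most salient proposal ("nudge_user") is vetoed
```

The veto pattern is one way to read the combination of the two pillars: the moral-decision framework constrains what the consciousness-inspired architecture is allowed to broadcast and execute.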
Application Areas
The project will demonstrate the utility of Value-Aware AI in three application areas where a moral dimension urgently needs to be included:
- Social Media: Addressing negative side effects such as disinformation, polarization, and the instigation of asocial and immoral behavior.
- Social Robots: Designed to be helpful or to influence human behavior positively, but potentially open to misuse for manipulation, deceit, and harmful behavior.
- Medical Protocols: Ensuring that medical decision-making is value-aligned.
Contribution to AI Development
The project contributes to the general goal of making EU-based AI more competitive by being more reliable, robust, ethically guided, explainable, and hence trustworthy. It does not propose new guidelines and regulations (for which there is already considerable effort) but advances the state of the art in core AI technology. This ensures that ethics is embedded inside applications, making them grounded in universal, European, and personal values.
Financial details & Timeline
Financial details
- Grant amount: € 3.926.432
- Total project budget: € 3.926.432
Timeline
- Start date: 1 October 2022
- End date: 30 September 2026
- Grant year: 2022
Partners & Locations
Project partners
- AGENCIA ESTATAL CONSEJO SUPERIOR DE INVESTIGACIONES CIENTIFICAS (coordinator)
- FUNDACIO INSTITUT HOSPITAL DEL MAR D INVESTIGACIONS MEDIQUES
- FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
- UNIVERSITEIT GENT
- SONY EUROPE BV
- STUDIO STELLUTI
Country/countries
Similar projects within EIC Pathfinder
Project | Scheme | Amount | Year |
---|---|---|---|
Counterfactual Assessment and Valuation for Awareness Architecture (CAVAA) | EIC Pathfinder | € 3.132.460 | 2022 |
Improving social competences of virtual agents through artificial consciousness based on the Attention Schema Theory (ASTOUND) | EIC Pathfinder | € 3.330.897 | 2022 |
Context-aware adaptive visualizations for critical decision making (SYMBIOTIK) | EIC Pathfinder | € 4.485.655 | 2022 |
Symbolic logic framework for situational awareness in mixed autonomy (SymAware) | EIC Pathfinder | € 3.980.291 | 2022 |
Counterfactual Assessment and Valuation for Awareness Architecture
The CAVAA project aims to develop a computational architecture for awareness in biological and technological systems, enhancing user experience through explainability and adaptability in various applications.
Improving social competences of virtual agents through artificial consciousness based on the Attention Schema Theory
ASTOUND aims to develop an AI architecture for artificial consciousness using Attention Schema Theory to enhance social interaction and natural language understanding in machines.
Context-aware adaptive visualizations for critical decision making
SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence.
Symbolic logic framework for situational awareness in mixed autonomy
SymAware aims to develop a comprehensive framework for situational awareness in multi-agent systems, enhancing collaboration and safety between autonomous agents and humans through advanced reasoning and risk assessment.
Similar projects from other funding schemes
Project | Scheme | Amount | Year |
---|---|---|---|
eXplainable AI in Personalized Mental Healthcare | Mkb-innovati... | € 350.000 | 2022 |
VIrtual GuardIan AngeLs for the post-truth Information Age (VIGILIA) | ERC Advanced... | € 2.490.000 | 2024 |
Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning. | ERC Starting... | € 1.499.845 | 2023 |
Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching (Act.AI) | ERC Proof of... | € 150.000 | 2024 |
Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI. | ERC Starting... | € 1.499.961 | 2023 |
eXplainable AI in Personalized Mental Healthcare
This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, aimed at transparency and reliability in mental healthcare.
VIrtual GuardIan AngeLs for the post-truth Information Age
The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.
Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning.
The project aims to develop society-aware machine learning algorithms through collaborative design, balancing the interests of owners, consumers, and regulators to foster trust and ethical use.
Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching
Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.
Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.
This project aims to explore the socio-cultural dynamics of AI in health governance across six countries to develop a theory on ethical AI intervention and its impact on national health policies.