Value-Aware Artificial Intelligence

The VALAWAI project aims to develop a toolbox for Value-Aware AI that integrates moral consciousness to enhance ethical decision-making in social media, robotics, and medical protocols.

Grant
€ 3.926.432
2022

Project details

Introduction

By Value-Aware AI, we mean AI that includes a component performing the same function as human moral consciousness: the capacity to acquire and maintain a value system. This system is used to decide whether certain actions are morally acceptable. A Value-Aware AI is also aware of the value systems of its users, allowing it to understand the intent and motivation behind their actions and to engage with them appropriately.
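
To make this concrete, the following is a minimal, purely illustrative sketch of what a value system and an acceptability check could look like. The names (ValueSystem, alignment, is_acceptable) and the weighted-sum scoring are assumptions made for illustration only; they are not the VALAWAI toolbox API.

```python
# Illustrative sketch only (hypothetical names, not the VALAWAI toolbox):
# a value system as a set of weighted values, plus an acceptability check
# that scores an action by how well it aligns with those values.
from dataclasses import dataclass


@dataclass
class ValueSystem:
    # e.g. {"honesty": 0.9, "well-being": 0.8}; weights express importance
    weights: dict[str, float]

    def alignment(self, action_effects: dict[str, float]) -> float:
        """Weighted sum of an action's estimated effect on each value.
        Effects lie in [-1, 1]: negative = violates, positive = promotes."""
        return sum(self.weights.get(v, 0.0) * e for v, e in action_effects.items())

    def is_acceptable(self, action_effects: dict[str, float], threshold: float = 0.0) -> bool:
        return self.alignment(action_effects) > threshold


# A user's value system, and an action that conflicts with it.
user_values = ValueSystem({"honesty": 0.9, "well-being": 0.8})
share_unverified_claim = {"honesty": -1.0, "well-being": -0.3}

print(user_values.is_acceptable(share_unverified_claim))  # False: negative alignment
```

In this toy version, being "aware" of a user's value system simply means holding an explicit, inspectable representation of it that can be queried before acting; the real project grounds this in psychological and neuroscientific models rather than a fixed weighted sum.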

Project Overview

The VALAWAI project will develop a toolbox to build Value-Aware AI resting on two pillars, both grounded in science:

  1. An architecture for consciousness inspired by the Global Neuronal Workspace model, developed on the basis of neurophysiological evidence and psychological data (see the sketch after this list).
  2. A foundational framework for moral decision-making based on psychology, social cognition, and social brain science.
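
As a rough intuition for the first pillar, a Global Workspace-style architecture lets specialist components compete for access to a shared workspace, and the winning content is broadcast back to all components. The sketch below is an assumption-laden toy (all class and function names are hypothetical), not the project's architecture or code.

```python
# Illustrative sketch (not the VALAWAI toolbox): a Global Workspace-style cycle
# in which components propose content, the most salient proposal wins access
# to the workspace, and the winner is broadcast back to every component.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Candidate:
    source: str       # component proposing the content
    content: str      # information competing for workspace access
    salience: float   # how strongly the component pushes this content


class Component:
    """A specialist process: proposes content and receives broadcasts."""

    def __init__(self, name: str, propose: Callable[[], Optional[Candidate]]):
        self.name = name
        self.propose = propose
        self.inbox: list[str] = []

    def receive(self, broadcast: str) -> None:
        self.inbox.append(broadcast)


class GlobalWorkspace:
    """Selects the most salient candidate and broadcasts it to all components."""

    def __init__(self, components: list[Component]):
        self.components = components

    def cycle(self) -> Optional[str]:
        candidates = [c for c in (comp.propose() for comp in self.components) if c]
        if not candidates:
            return None
        winner = max(candidates, key=lambda c: c.salience)
        for comp in self.components:
            comp.receive(winner.content)   # global broadcast
        return winner.content


# Example: a perception component and a (hypothetical) value component compete.
perception = Component("perception", lambda: Candidate("perception", "user requests action X", 0.6))
values = Component("values", lambda: Candidate("values", "action X conflicts with the user's values", 0.9))

workspace = GlobalWorkspace([perception, values])
print(workspace.cycle())  # the value conflict wins access and is broadcast
```

The design point this is meant to illustrate is that value-related content competes for, and can dominate, the same shared workspace as perception and planning, rather than being bolted on as a separate filter.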

Application Areas

The project will demonstrate the utility of Value-Aware AI in three application areas where a moral dimension urgently needs to be included:

  1. Social Media: Addressing negative side effects such as disinformation, polarization, and the instigation of asocial and immoral behavior.
  2. Social Robots: Designed to be helpful or to positively influence human behavior, but with the potential to enable manipulation, deceit, and harmful behavior.
  3. Medical Protocols: Ensuring that medical decision-making is value-aligned.

Contribution to AI Development

The project contributes to the general goal of making EU-based AI more competitive by being more reliable, robust, ethically guided, explainable, and hence trustworthy. It does not propose new guidelines or regulations (an area where considerable effort already exists) but advances the state of the art in core AI technology, so that ethics is embedded inside applications and they are grounded in universal, European, and personal values.

Financial details & Timeline

Financial details

Grant amount: € 3.926.432
Total project budget: € 3.926.432

Timeline

Start date: 1-10-2022
End date: 30-9-2026
Grant year: 2022

Partners & Locations

Project partners

  • AGENCIA ESTATAL CONSEJO SUPERIOR DE INVESTIGACIONES CIENTIFICAS (lead partner)
  • FUNDACIO INSTITUT HOSPITAL DEL MAR D INVESTIGACIONS MEDIQUES
  • FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
  • UNIVERSITEIT GENT
  • SONY EUROPE BV
  • STUDIO STELLUTI

Countries

Spain, Italy, Belgium, Netherlands

Similar projects within EIC Pathfinder

EIC Pathfinder

Counterfactual Assessment and Valuation for Awareness Architecture

The CAVAA project aims to develop a computational architecture for awareness in biological and technological systems, enhancing user experience through explainability and adaptability in various applications.

€ 3.132.460
EIC Pathfinder

Improving social competences of virtual agents through artificial consciousness based on the Attention Schema Theory

ASTOUND aims to develop an AI architecture for artificial consciousness using Attention Schema Theory to enhance social interaction and natural language understanding in machines.

€ 3.330.897
EIC Pathfinder

Context-aware adaptive visualizations for critical decision making

SYMBIOTIK aims to enhance decision-making in critical scenarios through an AI-driven, human-InfoVis interaction framework that fosters awareness and emotional intelligence.

€ 4.485.655
EIC Pathfinder

Symbolic logic framework for situational awareness in mixed autonomy

SymAware aims to develop a comprehensive framework for situational awareness in multi-agent systems, enhancing collaboration and safety between autonomous agents and humans through advanced reasoning and risk assessment.

€ 3.980.291

Similar projects from other schemes

Mkb-innovati...

eXplainable AI in Personalized Mental Healthcare

This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, focusing on transparency and reliability in mental healthcare.

€ 350.000
ERC Advanced...

VIrtual GuardIan AngeLs for the post-truth Information Age

The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.

€ 2.490.000
ERC Starting...

Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning.

The project aims to develop society-aware machine learning algorithms through collaborative design, balancing the interests of owners, consumers, and regulators to foster trust and ethical use.

€ 1.499.845
ERC Proof of...

Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching

Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.

€ 150.000
ERC Starting...

Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.

This project aims to explore the socio-cultural dynamics of AI in health governance across six countries to develop a theory on ethical AI intervention and its impact on national health policies.

€ 1.499.961