Controlling Large Language Models

Develop a framework to understand and control large language models, addressing biases and flaws to ensure safe and responsible AI adoption.

Grant
€ 1.500.000
2024

Project details

Introduction

Large language models (LMs) are quickly becoming the backbone of many artificial intelligence (AI) systems, achieving state-of-the-art results in many tasks and application domains. Despite the rapid progress in the field, AI systems suffer from multiple flaws inherited from the underlying LMs: biased behavior, out-of-date information, confabulations, flawed reasoning, and more.

Understanding and Controlling LMs

If we wish to control these systems, we must first understand how they work and develop mechanisms to intervene, update, and repair them. However, the black-box nature of LMs makes them largely inaccessible to such interventions. In this proposal, our overarching goal is to:

Develop a framework for elucidating the internal mechanisms in LMs and for controlling their behavior in an efficient, interpretable, and safe manner.

Objectives

To achieve this goal, we will work through four objectives:

  1. Dissecting Internal Mechanisms
    We will dissect the internal mechanisms of information storage and recall in LMs and develop ways to update and repair such information.

  2. Illuminating Higher-Level Capabilities
    We will illuminate the mechanisms of higher-level capabilities of LMs to perform reasoning and simulations. We will also repair problems stemming from alignment steps.

  3. Investigating Training Processes
    We will investigate how training processes of LMs affect their emergent mechanisms and develop methods for fine-grained control over the training process.

  4. Establishing a Standard Benchmark
    Finally, we will establish a standard benchmark for mechanistic interpretability of LMs to consolidate disparate efforts in the community.
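Objective 1's theme of intervening on internal mechanisms can be illustrated with activation patching, a standard mechanistic-interpretability technique (shown here on a toy two-layer network, purely as an illustrative sketch; it is not claimed to be this project's specific method, and all names and weights below are hypothetical):

```python
import numpy as np

# Toy stand-in for one model layer: y = W2 @ relu(W1 @ x).
# Activation patching: cache a hidden activation from a "clean" run and
# splice it into a "corrupted" run, to test whether that activation
# causally carries the model's behavior on the clean input.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def forward(x, patch=None):
    h = np.maximum(W1 @ x, 0.0)   # hidden activation
    if patch is not None:
        h = patch                 # intervention: overwrite the activation
    return W2 @ h, h

x_clean = np.array([1.0, 0.5, -0.3, 0.2])
x_corrupt = np.array([-1.0, 0.1, 0.9, -0.7])

y_clean, h_clean = forward(x_clean)
y_corrupt, _ = forward(x_corrupt)
y_patched, _ = forward(x_corrupt, patch=h_clean)

# Patching the clean activation into the corrupted run restores the
# clean output, showing the hidden layer causally mediates behavior here.
print(np.allclose(y_patched, y_clean))  # True
```

In a real LM the same logic is applied per layer and per position (e.g. via forward hooks), and the size of the restored effect is used to localize which components store a given piece of information.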

Conclusion

Taken as a whole, we expect the proposed research to empower different stakeholders and to ensure a safe, beneficial, and responsible adoption of LMs in AI technologies across society.

Financial details & Timeline

Financial details

Grant amount: € 1.500.000
Total project budget: € 1.500.000

Timeline

Start date: 1-11-2024
End date: 31-10-2029
Grant year: 2024

Partners & Locations

Project partners

  • TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY (lead partner)

Country(ies)

Israel

Similar projects within the European Research Council

ERC Starting Grant

Uniting Statistical Testing and Machine Learning for Safe Predictions

The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.

€ 1.500.000
ERC Consolidator Grant

DEep COgnition Learning for LAnguage GEneration

This project aims to enhance NLP models by integrating machine learning, cognitive science, and structured memory to improve out-of-domain generalization and contextual understanding in language generation tasks.

€ 1.999.595
ERC Advanced Grant

Control for Deep and Federated Learning

CoDeFeL aims to enhance machine learning methods through control theory, developing efficient ResNet architectures and federated learning techniques for applications in digital medicine and recommendations.

€ 2.499.224
ERC Starting Grant

Towards an Artificial Cognitive Science

This project aims to establish a new field of artificial cognitive science by applying cognitive psychology to enhance the learning and decision-making of advanced AI models.

€ 1.496.000
ERC Starting Grant

Next-Generation Natural Language Generation

This project aims to enhance natural language generation by integrating neural models with symbolic representations for better control, adaptability, and reliable evaluation across various applications.

€ 1.420.375

Similar projects from other funding schemes

Mkb-innovati...

LLM Innovatie voor Klantenservice software (LLM)

The project investigates the feasibility of an innovative LLM for automated customer service, with a focus on privacy and customization.

€ 20.000
Mkb-innovati...

Project Hominis

The project focuses on developing an ethical AI system for natural language processing that minimizes biases and manages technical, economic, and regulatory risks.

€ 20.000
Mkb-innovati...

Haalbaarheidsonderzoek naar AIPerLearn (AI-Powered Personalized Learning)

STARK Learning is investigating the application and training of AI models to automate the development of personalized learning materials and to safeguard their quality and validation.

€ 20.000
Mkb-innovati...

eXplainable AI in Personalized Mental Healthcare

This project develops an innovative AI platform that involves users in improving its algorithms through feedback loops, with a focus on transparency and reliability in mental healthcare.

€ 350.000