A Foundation for Empirical Multimodality Research

FOUNDATIONS develops a novel methodology for empirical research on multimodality by creating large, annotated corpora and using AI to analyze human communication across diverse cultural artifacts.

Grant
€ 1.999.974
2024

Project details

Introduction

This project lays a foundation for conducting data-driven empirical research on multimodality, that is, on how human communication and interaction rely on combinations of 'modes' of expression. Theories of multimodality are rapidly gaining currency in diverse fields concerned with human communication, interaction, and cultural production. However, most theories of multimodality rest on conjecture and lack an empirical foundation, owing to the absence of large, richly annotated multimodal corpora and of methods for analysing them.

Methodology Development

FOUNDATIONS solves this problem by developing a novel methodology for conducting empirical research on multimodality in everyday cultural artefacts, such as newspapers, textbooks, magazines, and social media videos. The methodology makes it possible to critically examine key theoretical concepts in the field – medium, semiotic mode, and genre – and their joint contribution to meaning-making, enhancing our understanding of these concepts and placing their definitions on a solid empirical foundation.

Crowdsourcing and Data Collection

To do so, FOUNDATIONS creates large and reproducible multimodal corpora using microtask crowdsourcing. This approach breaks complex annotation tasks into small units of work and distributes them to non-expert workers on online platforms. The resulting crowdsourced descriptions are combined with computational representations into graphs that capture the structure of multimodal discourse.
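The graphs described above can be illustrated with a minimal sketch. All names and the annotation schema here are hypothetical, not the project's actual data model: nodes stand for layout elements described by crowdworkers (a photograph, a caption), and edges carry discourse relations annotated between them.

```python
# Minimal sketch (hypothetical schema): merging crowdsourced element
# descriptions into a graph of multimodal discourse structure.
from dataclasses import dataclass, field


@dataclass
class DiscourseGraph:
    nodes: dict = field(default_factory=dict)   # element id -> attributes
    edges: list = field(default_factory=list)   # (source, target, relation)

    def add_element(self, elem_id, mode, description):
        # One crowdsourced description of a layout element
        self.nodes[elem_id] = {"mode": mode, "description": description}

    def add_relation(self, source, target, relation):
        # A discourse relation annotated between two elements
        self.edges.append((source, target, relation))


# Example: a newspaper page with a photograph and its caption
g = DiscourseGraph()
g.add_element("photo-1", mode="image",
              description="aerial view of a flooded street")
g.add_element("caption-1", mode="written language",
              description="Flooding after heavy rainfall")
g.add_relation("caption-1", "photo-1", relation="elaboration")
```

A graph representation like this lets many independent microtask contributions (element descriptions, pairwise relation judgements) be merged into one structure that can later be queried or analysed as a whole.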

Analytical Methods

For analysing these corpora, the project develops novel methods based on neuro-symbolic artificial intelligence, which combine crowdsourced human insights with the pattern-recognition capabilities of neural networks.
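The neuro-symbolic idea can be sketched in miniature. Everything below is an illustrative assumption, not the project's method: a stand-in for a neural model scores candidate discourse relations, and a hand-written symbolic rule vetoes candidates that violate a known constraint, so a prediction is accepted only when both components agree.

```python
# Illustrative neuro-symbolic filtering (all names and rules hypothetical).

def neural_score(source_mode, target_mode, relation):
    # Stand-in for a trained neural network's confidence score
    scores = {
        ("written language", "image", "elaboration"): 0.9,
        ("image", "image", "elaboration"): 0.7,
    }
    return scores.get((source_mode, target_mode, relation), 0.1)


def satisfies_constraints(source_mode, target_mode, relation):
    # Symbolic rule: in this toy schema, an image does not
    # 'elaborate' another element; language elaborates images.
    if relation == "elaboration" and source_mode == "image":
        return False
    return True


def predict(source_mode, target_mode, relation, threshold=0.5):
    # Accept a candidate relation only if the symbolic rules allow it
    # AND the neural score clears the threshold.
    return (satisfies_constraints(source_mode, target_mode, relation)
            and neural_score(source_mode, target_mode, relation) >= threshold)
```

Here the symbolic layer rejects an image-elaborates-image candidate even though the neural stand-in scores it highly, which is the kind of interplay the term neuro-symbolic refers to.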

Impact of FOUNDATIONS

The groundbreaking theoretical and methodological advances in FOUNDATIONS go far beyond the state of the art by enabling large-scale empirical research while preserving the analytical depth needed for multimodality research. This opens up new domains of inquiry for studying multimodality across cultures, situations, artefacts, and timescales.

Financial details & Timeline

Financial details

Grant amount: € 1.999.974
Total project budget: € 1.999.974

Timeline

Start date: 1-5-2024
End date: 30-4-2029
Grant year: 2024

Partners & Locations

Project partners

  • HELSINGIN YLIOPISTO (coordinator)

Country(ies)

Finland

Similar projects within the European Research Council

ERC Advanced Grant

Linguistic traces: low-frequency forms as evidence of language and population history

This project aims to reconstruct early European languages by analyzing low-frequency linguistic variants in historical texts, integrating philology with deep learning to uncover cultural interactions.

€ 2.498.135
ERC Advanced Grant

Making sense of the senses: Causal Inference in a complex dynamic multisensory world

This project aims to uncover how the brain approximates causal inference in complex multisensory environments using interdisciplinary methods, potentially informing AI and addressing perceptual challenges in clinical populations.

€ 2.499.527
ERC Consolidator Grant

Language in the Dyad. Linking linguistic and neural alignment.

This project investigates the relationship between neural and linguistic alignment in dyadic communication using EEG hyper-scanning and interactive language games to enhance understanding of dialogue dynamics.

€ 1.997.048
ERC Starting Grant

The epistemology of costly communication – offline and online

COST-X aims to develop a new methodology using Costly Signalling Theory to understand and improve truthful communication norms, addressing misinformation and enhancing democratic discourse.

€ 1.389.776
ERC Consolidator Grant

Defining an integrated model of the neural processing of speech in light of its multiscale dynamics

This project aims to develop an integrated model of speech perception by analyzing neural oscillatory dynamics and their relationship with linguistic timescales using advanced neuroimaging techniques.

€ 1.861.100