Learning Digital Humans in Motion
The project aims to enhance immersive telepresence by using natural language to reconstruct and animate photo-realistic digital humans for interactive communication in AR and VR environments.
Project details
Introduction
We are taking part in a revolution in how people perceive each other and how they communicate in the digital space. Emerging technologies demonstrate how humans can be digitized such that their digital doubles (digital humans) can be viewed in mixed or virtual reality applications.
Importance of 3D Digital Humans
3D digital humans are of particular interest for immersive telepresence, where people can see and interact with each other in augmented reality (AR) or virtual reality (VR). An immersive experience requires:
- A high-quality level of appearance reproduction
- The estimation of subtle motions of the human
Hardware Considerations
Besides quality aspects, digital humans need to be reconstructed and driven by consumer-grade hardware solutions (e.g., a webcam; a minimal capture loop is sketched after this list) to enable wide applicability of digital humans for:
- Immersive telecommunication
- Virtual mirrors (e-commerce)
- Other entertainment purposes (e.g., computer games)
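As an illustration of this commodity-hardware setting, the following is a minimal sketch of a webcam capture loop with OpenCV; the `drive_avatar` function is a hypothetical placeholder for the reconstruction and animation pipeline such a system would provide, not an existing API.

```python
# Minimal sketch of a commodity-hardware capture loop (assumes OpenCV, `pip install opencv-python`).
# `drive_avatar` is a hypothetical placeholder for a digital-human model, not an existing API.
import cv2


def drive_avatar(frame_rgb) -> None:
    """Placeholder: a real system would estimate pose/expression here and update the avatar."""
    pass


def capture_loop(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)  # open the consumer-grade webcam
    if not cap.isOpened():
        raise RuntimeError("Could not open webcam")
    try:
        while True:
            ok, frame_bgr = cap.read()  # grab one frame
            if not ok:
                break
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            drive_avatar(frame_rgb)  # feed the frame to the (hypothetical) avatar model
            cv2.imshow("webcam", frame_bgr)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    capture_loop()
```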
Research Focus
In this proposal, we concentrate on the computer vision and graphics aspects of capturing and rendering (realizing) motions and appearances of photo-realistic digital humans reconstructed with commodity hardware and ask the question:
Can we leverage natural language for the reconstruction, representation, and modelling of the appearance, motion, and interactions of digital humans?
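To make this question more concrete, here is a purely illustrative sketch, not the project's method, of how a natural-language description could be embedded and used as a conditioning signal for a digital-human model; the toy bag-of-words encoder, the `ConditionedAvatarHead`, and all dimensions are assumptions.

```python
# Illustrative sketch: encode a natural-language description into a conditioning
# vector that a digital-human model could consume. The tiny bag-of-words encoder
# and all dimensions are illustrative assumptions, not the project's actual model.
import torch
import torch.nn as nn


class ToyTextEncoder(nn.Module):
    """Hashes words into a fixed vocabulary and mean-pools their embeddings."""

    def __init__(self, vocab_size: int = 4096, embed_dim: int = 256):
        super().__init__()
        self.vocab_size = vocab_size
        self.embedding = nn.Embedding(vocab_size, embed_dim)

    def forward(self, prompt: str) -> torch.Tensor:
        token_ids = torch.tensor([hash(w) % self.vocab_size for w in prompt.lower().split()])
        return self.embedding(token_ids).mean(dim=0)  # (embed_dim,)


class ConditionedAvatarHead(nn.Module):
    """Maps a text embedding to hypothetical appearance/motion parameters."""

    def __init__(self, embed_dim: int = 256, param_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 512), nn.ReLU(), nn.Linear(512, param_dim))

    def forward(self, text_embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(text_embedding)


if __name__ == "__main__":
    encoder, head = ToyTextEncoder(), ConditionedAvatarHead()
    z = encoder("a person smiling and waving with the right hand")
    params = head(z)
    print(params.shape)  # torch.Size([128]) -- stand-in for avatar parameters
```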
Motion and Interaction Analysis
As the face and gestures play an important role during conversations, we focus our research on the upper body of the human, aiming to capture and synthesize complete, contact-rich digital humans with life-like motions.
We will analyze how humans move and will develop data-driven motion synthesis methods that learn to generate talking and listening motions, including interactions of the hands with the face.
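As a rough illustration of what such a data-driven method could look like, the sketch below shows a small sequence model that maps per-frame speech features and a talking/listening flag to upper-body pose parameters; the architecture, feature choices, and dimensions are assumptions for illustration, not the project's actual design.

```python
# Minimal sketch of a data-driven conversational motion synthesizer: a GRU maps
# per-frame speech features plus a talking/listening flag to upper-body pose
# parameters. All sizes and feature choices are illustrative assumptions.
import torch
import torch.nn as nn


class ConversationalMotionModel(nn.Module):
    def __init__(self, audio_dim: int = 80, hidden_dim: int = 512, pose_dim: int = 63):
        # pose_dim: e.g. 21 upper-body joints x 3 rotation parameters (an assumption)
        super().__init__()
        self.gru = nn.GRU(audio_dim + 1, hidden_dim, num_layers=2, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio_feats: torch.Tensor, is_talking: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim), e.g. log-mel features of the speech signal
        # is_talking:  (batch, frames, 1), 1.0 while the avatar speaks, 0.0 while it listens
        x = torch.cat([audio_feats, is_talking], dim=-1)
        h, _ = self.gru(x)
        return self.pose_head(h)  # (batch, frames, pose_dim)


if __name__ == "__main__":
    model = ConversationalMotionModel()
    audio = torch.randn(1, 120, 80)   # ~4 seconds of speech features at 30 fps (assumed)
    talking = torch.ones(1, 120, 1)   # the avatar is speaking throughout
    poses = model(audio, talking)
    print(poses.shape)  # torch.Size([1, 120, 63])
```

In practice, such a model would be trained on captured conversational motion data, e.g., with a reconstruction loss on the generated pose sequences.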
Financial details & Timeline
Financial details
- Grant amount: € 1.500.000
- Total project budget: € 1.500.000
Timeline
- Start date: 1 March 2025
- End date: 28 February 2030
- Grant year: 2025
Partners & Locations
Project partners
- TECHNISCHE UNIVERSITÄT DARMSTADT (lead partner)
Country(ies)
- Germany
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
Learning to synthesize interactive 3D models | ERC Consolidator Grant | € 2.000.000 | 2024 |
SpatioTemporal Reconstruction of Interacting People for pErceiving Systems | ERC Starting Grant | € 1.500.000 | 2025 |
Learning to Create Virtual Worlds | ERC Consolidator Grant | € 2.750.000 | 2025 |
A Robust, Real-time, and 3D Human Motion Capture System through Multi-Cameras and AI | ERC Proof of Concept | € 150.000 | 2024 |
Computational Design of Multimodal Tactile Feedback within Immersive Environments | ERC Consolidator Grant | € 1.999.750 | 2023 |
Learning to synthesize interactive 3D models
This project aims to automate the generation of interactive 3D models using deep learning to enhance virtual environments and applications in animation, robotics, and digital entertainment.
SpatioTemporal Reconstruction of Interacting People for pErceiving Systems
The project aims to develop robust methods for inferring Human-Object Interactions from natural images/videos, enhancing intelligent systems to assist people in task completion.
Learning to Create Virtual Worlds
This project aims to develop advanced machine learning techniques for automatic generation of high-fidelity 3D content, enhancing immersive experiences across various applications.
A Robust, Real-time, and 3D Human Motion Capture System through Multi-Cameras and AI
Real-Move aims to develop a marker-less, real-time 3D human motion tracking system using multi-camera views and AI to enhance workplace safety and ergonomics, reducing costs and improving quality of life.
Computational Design of Multimodal Tactile Feedback within Immersive Environments
ADVHANDTURE aims to enhance virtual reality by developing innovative computational models for realistic multimodal tactile feedback, improving 3D interaction and user immersion.
Similar projects from other funding schemes
Project | Scheme | Amount | Year |
---|---|---|---|
Collaborating in a fully immersive VR environment | Mkb-innovatiestimulering (MIT) | € 197.860 | 2016 |
Blended Learning for Dementia | Mkb-innovatiestimulering (MIT) | € 185.360 | 2016 |
Human-size holographic simulation AR display | Mkb-innovatiestimulering (MIT) | € 20.000 | 2023 |
2D-to-3D mapping for autonomous real-time feedback (MARF) | Mkb-innovatiestimulering (MIT) | € 256.200 | 2023 |
HoloCloud MR - the future of healthcare | Mkb-innovatiestimulering (MIT) | € 20.000 | 2022 |
Collaborating in a fully immersive VR environment
Four innovative organisations from the south of the Netherlands are developing a multiplayer virtual reality system with natural interaction to improve applications in various sectors.
Blended Learning for Dementia
Tinqwise and TMVRS are developing an innovative blended learning platform for mental healthcare, focused on social cognitive skills through interactive VR experiences with realistic facial animations.
Human-size holographic simulation AR display
The project investigates the feasibility of an affordable holographic AR display and accompanying 3D content creation software.
2D-to-3D mapping for autonomous real-time feedback (MARF)
360SportsIntelligence and Ludimos are developing 3D mapping software for tracking athletes and generating digital twins, aimed at automated training and match recording.
HoloCloud MR - the future of healthcare
HoloMoves is developing a Mixed Reality system with IoT integration to accelerate rehabilitation and give care providers insight into patient movements, with the goal of improved remote care.