Learning to Create Virtual Worlds
This project aims to develop advanced machine learning techniques for automatic generation of high-fidelity 3D content, enhancing immersive experiences across various applications.
Project details
Introduction
In recent years, we have seen a revolution in learning methods that generate highly realistic images, such as generative adversarial networks, autoregressive models, and diffusion models (e.g., DALL-E, Stable Diffusion, Runway). Unfortunately, the vast majority of these methods are tailored to the 2D image domain, while their 3D counterparts – the 3D models that fuel computer graphics applications and enable visually immersive experiences – remain in their infancy.
Project Overview
In this proposal, we tackle the challenge of automatically generating 3D content for virtual worlds. Such generated 3D content is versatile: it can be rendered from arbitrary viewpoints at a visual fidelity that matches the real world.
Applications
We focus on 3D content creation that brings visually immersive experiences to a much wider audience across myriad applications, such as:
- Video games
- Movies
- AR/VR scenarios
- CAD modeling
- Architectural & industrial design
- Medical applications
We believe that the key to automated, high-fidelity content creation lies in developing new machine learning techniques that transform 3D content generation.
Methodology
3D Generative Models
(A) We will develop 3D Generative Models that output 3D polygon meshes together with their surface textures and material properties, emphasizing the generation of 3D content that can be consumed directly by modern graphics pipelines.
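To make the target output concrete, here is a minimal sketch of the kind of asset such a generator would emit, assuming a NumPy-based representation; the names `MeshAsset`, `PBRMaterial`, and `sample_asset` are illustrative stand-ins, not part of the project.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PBRMaterial:
    """Illustrative physically-based material parameters (metallic-roughness workflow)."""
    base_color: np.ndarray          # (H, W, 3) albedo texture, values in [0, 1]
    roughness: float = 0.5          # scalar roughness in [0, 1]
    metallic: float = 0.0           # scalar metallic factor in [0, 1]

@dataclass
class MeshAsset:
    """A generated 3D asset in a form that standard rasterization pipelines can consume."""
    vertices: np.ndarray            # (V, 3) float32 vertex positions
    faces: np.ndarray               # (F, 3) int64 triangle indices into `vertices`
    uvs: np.ndarray                 # (V, 2) float32 texture coordinates in [0, 1]
    material: PBRMaterial

def sample_asset(latent: np.ndarray) -> MeshAsset:
    """Placeholder for a trained generator: here we simply emit a textured unit quad."""
    # A real model would decode `latent` into geometry, UVs, and textures.
    vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
    faces = np.array([[0, 1, 2], [0, 2, 3]], dtype=np.int64)
    uvs = vertices[:, :2].copy()
    texture = np.full((256, 256, 3), 0.8, dtype=np.float32)  # flat grey albedo
    return MeshAsset(vertices, faces, uvs, PBRMaterial(base_color=texture))

asset = sample_asset(np.random.randn(128).astype(np.float32))
print(asset.vertices.shape, asset.faces.shape, asset.material.roughness)
```

The point of the explicit mesh/UV/material layout is that the output plugs into existing rasterization and game-engine tooling without conversion, unlike implicit or purely neural scene representations.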
Supervision from Images and Videos
(B) To train our 3D generative models to reflect the complexity and diversity of real data, we will devise methods for Supervision from Images and Videos. The key challenge here is that such collections of images and videos are by nature incomplete projections of the underlying 3D world, thus requiring learning paradigms that generalize across partial instances.
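As a minimal sketch of this supervision setup, assuming PyTorch: the reconstruction loss is evaluated only on pixels that a given image actually observes, which is one simple way to cope with partial projections. The function `render_placeholder` is purely illustrative and stands in for a differentiable renderer; it is not the project's method.

```python
import torch

def masked_photometric_loss(rendered: torch.Tensor,
                            observed: torch.Tensor,
                            mask: torch.Tensor) -> torch.Tensor:
    """L1 loss restricted to pixels actually covered by the observation."""
    diff = (rendered - observed).abs() * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

def render_placeholder(params: torch.Tensor, camera: torch.Tensor) -> torch.Tensor:
    """Stand-in for a differentiable renderer: any function mapping generator
    parameters and a camera to an image while letting gradients flow back."""
    img = torch.tanh(camera.view(-1, 1) @ params.view(1, -1))   # toy (64, 64) "image"
    return img.view(1, 64, 64).expand(3, 64, 64)

params = torch.randn(64, requires_grad=True)        # stand-in for generator output
optimizer = torch.optim.Adam([params], lr=1e-2)

for step in range(100):
    camera = torch.randn(64)                         # random viewpoint per training step
    observed = torch.rand(3, 64, 64)                 # one frame from an image/video collection
    mask = (torch.rand(1, 64, 64) > 0.3).float()     # partial coverage of the underlying scene
    loss = masked_photometric_loss(render_placeholder(params, camera), observed, mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```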
Control and Editability
(C) We will research techniques that provide Control and Editability through Conditional Generation. In particular, we will focus on conditional input from both novice users (e.g., text-based editing) and expert users (e.g., input from existing authoring tools).
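A minimal sketch of such conditioning, again assuming PyTorch: a latent code and a condition embedding are decoded jointly. The conditioning-by-concatenation scheme, the `ConditionalDecoder` module, and the placeholder text embedding are illustrative stand-ins for whatever encoders would actually be used (e.g., a pretrained text model for novice prompts, or feature vectors exported from an authoring tool for experts).

```python
import torch
import torch.nn as nn

class ConditionalDecoder(nn.Module):
    """Illustrative conditional generator: a latent code plus a condition embedding
    (from a text encoder or an authoring-tool state) are decoded jointly."""
    def __init__(self, latent_dim: int = 128, cond_dim: int = 256, out_dim: int = 3 * 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512),
            nn.ReLU(),
            nn.Linear(512, out_dim),    # e.g., 1024 per-vertex offsets realizing an edit
        )

    def forward(self, latent: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([latent, condition], dim=-1))

def embed_text_placeholder(prompt: str, dim: int = 256) -> torch.Tensor:
    """Stand-in for a real text encoder (e.g., CLIP); deterministically maps a prompt to a vector."""
    gen = torch.Generator().manual_seed(abs(hash(prompt)) % (2**31))
    return torch.randn(1, dim, generator=gen)

decoder = ConditionalDecoder()
latent = torch.randn(1, 128)
novice_cond = embed_text_placeholder("make the chair taller")   # novice: text prompt
expert_cond = torch.randn(1, 256)                                # expert: tool-state features
print(decoder(latent, novice_cond).shape, decoder(latent, expert_cond).shape)
```

Using one decoder for both condition types illustrates the goal of serving novice and expert workflows through a shared generative backbone.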
Financial details & Timeline
Financial details
Grant amount | € 2.750.000 |
Total project budget | € 2.750.000 |
Timeline
Start date | 1-2-2025 |
End date | 31-1-2030 |
Subsidy year | 2025 |
Partners & Locations
Project partners
- TECHNISCHE UNIVERSITAET MUENCHEN (lead partner)
Country(ies)
Similar projects within the European Research Council
Project | Description | Scheme | Amount | Year |
---|---|---|---|---|
Learning to synthesize interactive 3D models | This project aims to automate the generation of interactive 3D models using deep learning to enhance virtual environments and applications in animation, robotics, and digital entertainment. | ERC Consolid... | € 2.000.000 | 2024 |
Empowering Neural Rendering Methods with Physically-Based Capabilities | NERPHYS aims to revolutionize 3D content creation by combining neural and physically-based rendering through polymorphic representations, ensuring accurate and efficient asset generation. | ERC Advanced... | € 2.488.029 | 2024 |
Federated foundational models for embodied perception | The FRONTIER project aims to develop foundational models for embodied perception by integrating neural networks with physical simulations, enhancing learning efficiency and collaboration across intelligent systems. | ERC Advanced... | € 2.499.825 | 2024 |
Learning Digital Humans in Motion | The project aims to enhance immersive telepresence by using natural language to reconstruct and animate photo-realistic digital humans for interactive communication in AR and VR environments. | ERC Starting... | € 1.500.000 | 2025 |
Universal Geometric Transfer Learning | Develop a universal framework for transfer learning in geometric 3D data to enhance analysis across tasks with minimal supervision and improve generalization in diverse applications. | ERC Consolid... | € 1.999.490 | 2024 |
Similar projects from other funding schemes
Project | Description | Scheme | Amount | Year |
---|---|---|---|---|
3D Foodmodellen met Generatieve AI | This project develops generative AI technology for creating high-quality 3D models of food, to accelerate innovation in the agri-food sector and support the food transition. | Mkb-innovati... | € 183.810 | 2023 |
Synthetische Data Generator | The project develops an automatic synthetic data generator for training AI models in the agricultural and industrial sectors. | Mkb-innovati... | € 176.050 | 2023 |
Synthetische Data Generator | The project develops an automatic generator of synthetic data to train AI models in the agricultural and industrial sectors, with the aim of improving efficiency and accuracy. | Mkb-innovati... | € 176.050 | 2023 |
Blended Learning for Dementia | Tinqwise and TMVRS are developing an innovative blended learning platform for mental healthcare, focused on social-cognitive skills through interactive VR experiences with realistic facial animations. | Mkb-innovati... | € 185.360 | 2016 |
Samenwerken in een full immersive VR-omgeving | Four innovative organisations from the south of the Netherlands are developing a multiplayer virtual reality system with natural interaction to improve applications across a range of sectors. | Mkb-innovati... | € 197.860 | 2016 |