Our workshop covers a broad range of research topics, from multi-view scene synthesis to 4D generation and video diffusion models. We aim to bring together researchers from these domains to discuss the latest advances and the next steps towards more powerful 3D and 4D generative pipelines. Topics of interest include:
- World Models and Physical AI: Real-world simulators that generate complex, realistic, and large-scale environments based on text, image, video and/or action input.
- Camera-Controlled Video Generation: Methods that enable precise control of camera motion in video synthesis, allowing for seamless translations and rotations.
- Motion-Controlled Video Generation: Approaches capable of controlling object movement and dynamics within generated video content.
- Large 3D & 4D Reconstruction Models: Reconstruction pipelines leveraging generative priors for improved fidelity and completion.
- Controllable Urban Scene Generation: Next-generation simulation frameworks for autonomous vehicle testing through integrated video and spatial modeling.
- Distillation of Video Generative Models into 3D: Techniques for knowledge transfer from temporal models to explicit spatial representations.
- Video and 3D Editing with Generative Models: Techniques capable of editing real or synthesized videos or 3D scenes with pre-trained image or video generative models.