SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model

Saved in:
Bibliographic Details
Published in: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings (2025), p. 1570-1580
Main Author: Tan, Shuhan
Other Authors: Lambert, John, Jeon, Hong, Kulshrestha, Sakshum, Bai, Yijing, Luo, Jing, Anguelov, Dragomir, Tan, Mingxing, Jiang, Chiyu Max
Published:
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
Online Access: Citation/Abstract

MARC

LEADER 00000nab a2200000uu 4500
001 3247056732
003 UK-CbPIL
024 7 |a 10.1109/CVPR52734.2025.00154  |2 doi 
035 |a 3247056732 
045 2 |b d20250101  |b d20251231 
084 |a 228229  |2 nlm 
100 1 |a Tan, Shuhan  |u UT Austin 
245 1 |a SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model 
260 |b The Institute of Electrical and Electronics Engineers, Inc. (IEEE)  |c 2025 
513 |a Conference Proceedings 
520 3 |a Conference Title: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Conference Start Date: 2025 June 10. Conference End Date: 2025 June 17. Conference Location: Nashville, TN, USA. The goal of traffic simulation is to augment a potentially limited amount of manually-driven miles that is available for testing and validation, with a much larger amount of simulated synthetic miles. The culmination of this vision would be a generative simulated city, where given a map of the city and an autonomous vehicle (AV) software stack, the simulator can seamlessly simulate the trip from point A to point B by populating the city around the AV and controlling all aspects of the scene, from animating the dynamic agents (e.g., vehicles, pedestrians) to controlling the traffic light states. We refer to this vision as CitySim, which requires an agglomeration of simulation technologies: scene generation to populate the initial scene, agent behavior modeling to animate the scene, occlusion reasoning, dynamic scene generation to seamlessly spawn and remove agents, and environment simulation for factors such as traffic lights. While some key technologies have been separately studied in various works, others such as dynamic scene generation and environment simulation have received less attention in the research community. We propose SceneDiffuser++, the first end-to-end generative world model trained on a single loss function capable of point A-to-B simulation on a city scale integrating all the requirements above. We demonstrate the city-scale traffic simulation capability of SceneDiffuser++ and study its superior realism under long simulation conditions. We evaluate the simulation quality on an augmented version of the Waymo Open Motion Dataset (WOMD) with larger map regions to support trip-level simulation. 
653 |a Pattern recognition 
653 |a Simulation 
653 |a Scene generation 
653 |a Pedestrians 
653 |a Computer vision 
653 |a Traffic signals 
653 |a Occlusion 
653 |a Environment simulation 
700 1 |a Lambert, John  |u Waymo LLC 
700 1 |a Jeon, Hong  |u Waymo LLC 
700 1 |a Kulshrestha, Sakshum  |u Waymo LLC 
700 1 |a Bai, Yijing  |u Waymo LLC 
700 1 |a Luo, Jing  |u Waymo LLC 
700 1 |a Anguelov, Dragomir  |u Waymo LLC 
700 1 |a Tan, Mingxing  |u Waymo LLC 
700 1 |a Jiang, Chiyu Max  |u Waymo LLC 
773 0 |t The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings  |g (2025), p. 1570-1580 
786 0 |d ProQuest  |t Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3247056732/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch