Diffusion-Based Attention Warping for Consistent 3D Scene Editing
Saved in:
| Published: | arXiv.org (Dec 10, 2024) |
|---|---|
| Main Author: | |
| Other Authors: | |
| Published by: | Cornell University Library, arXiv.org |
| Subjects: | |
| Online Access: | Citation/Abstract; full text outside of ProQuest |
| Abstract: | We present a novel method for 3D scene editing using diffusion models, designed to ensure view consistency and realism across perspectives. Our approach leverages attention features extracted from a single reference image to define the intended edits. These features are warped across multiple views by aligning them with scene geometry derived from Gaussian splatting depth estimates. Injecting these warped features into other viewpoints enables coherent propagation of edits, achieving high fidelity and spatial alignment in 3D space. Extensive evaluations demonstrate the effectiveness of our method in generating versatile edits of 3D scenes, significantly advancing the capabilities of scene manipulation compared to existing methods. Project page: \url{https://attention-warp.github.io} |
|---|---|
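The core operation the abstract describes — warping per-pixel features from a reference view into other viewpoints via depth — can be illustrated with standard depth-based reprojection. The sketch below is a minimal, hedged illustration and not the authors' implementation: it assumes pinhole cameras with shared intrinsics, a rigid reference-to-target transform, and nearest-neighbour resampling; all function and variable names are hypothetical.

```python
import numpy as np

def warp_features(feat_ref, depth_ref, K, T_ref2tgt):
    """Warp per-pixel features from a reference view into a target view.

    feat_ref:   (H, W, C) features from the reference view
    depth_ref:  (H, W) per-pixel depth of the reference view (e.g. as
                estimated from Gaussian splatting, per the abstract)
    K:          (3, 3) pinhole intrinsics, assumed shared by both views
    T_ref2tgt:  (4, 4) rigid transform from reference to target camera

    Returns (H, W, C) features scattered to the nearest target pixel,
    with zeros where no reference pixel projects.
    """
    H, W, C = feat_ref.shape
    # Homogeneous pixel grid of the reference view.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project each pixel to a 3D point using its depth.
    rays = pix @ np.linalg.inv(K).T            # rays with z = 1
    pts = rays * depth_ref.reshape(-1, 1)      # scale by depth
    pts_h = np.concatenate([pts, np.ones((H * W, 1))], axis=1)
    # Move points into the target camera frame and project.
    pts_tgt = (pts_h @ T_ref2tgt.T)[:, :3]
    proj = pts_tgt @ K.T
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-8, None)
    # Scatter features to the nearest target pixel (no occlusion handling).
    out = np.zeros((H, W, C))
    ui = np.round(uv[:, 0]).astype(int)
    vi = np.round(uv[:, 1]).astype(int)
    valid = (pts_tgt[:, 2] > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    out[vi[valid], ui[valid]] = feat_ref.reshape(-1, C)[valid]
    return out
```

As a sanity check, warping with the identity transform returns the reference features unchanged; in practice one would add occlusion handling (e.g. z-buffering) and interpolation, and the warped features would be injected into the diffusion model's attention layers at the other viewpoints.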
| ISSN: | 2331-8422 |
| Source: | Engineering Database |