SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules
| Published in: | arXiv.org (Dec 6, 2024) |
|---|---|
| Publisher: | Cornell University Library, arXiv.org |
| Electronic access: | Citation/Abstract; full text outside of ProQuest |
| Abstract: | Text-to-image (T2I) generation using diffusion models has become a blockbuster service in today's AI cloud. A production T2I service typically involves a serving workflow where a base diffusion model is augmented with various "add-on" modules, notably ControlNet and LoRA, to enhance image generation control. Compared to serving the base model alone, these add-on modules introduce significant loading and computational overhead, resulting in increased latency. In this paper, we present SwiftDiffusion, a system that efficiently serves a T2I workflow through a holistic approach. SwiftDiffusion decouples ControlNet from the base model and deploys it as a separate, independently scaled service on dedicated GPUs, enabling ControlNet caching, parallelization, and sharing. To mitigate the high loading overhead of LoRA serving, SwiftDiffusion employs a bounded asynchronous LoRA loading (BAL) technique, allowing LoRA loading to overlap with the initial base model execution by up to k steps without compromising image quality. Furthermore, SwiftDiffusion optimizes base model execution with a novel latent parallelism technique. Collectively, these designs enable SwiftDiffusion to outperform the state-of-the-art T2I serving systems, achieving up to 7.8x latency reduction and 1.6x throughput improvement in serving SDXL models on H800 GPUs, without sacrificing image quality. |
| ISSN: | 2331-8422 |
| Database: | Engineering Database |
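The bounded asynchronous LoRA loading (BAL) idea from the abstract can be sketched as follows: the base model begins denoising while the LoRA adapter loads in a background thread, but execution blocks once k LoRA-free steps have run, so the adapter is applied no later than step k. All function names and the toy latent update below are illustrative assumptions, not SwiftDiffusion's actual interfaces.

```python
import threading
import time

def load_lora(path):
    """Stand-in for reading LoRA adapter weights from disk (hypothetical)."""
    time.sleep(0.05)  # simulate I/O latency
    return {"path": path}

def denoise_step(latent, step, lora):
    """Stand-in for one diffusion denoising step (hypothetical)."""
    return latent + 1  # placeholder latent update

def serve_with_bal(num_steps, k, lora_path):
    """Overlap LoRA loading with at most k initial base-model steps."""
    result = {}
    ready = threading.Event()

    def loader():
        result["lora"] = load_lora(lora_path)
        ready.set()

    threading.Thread(target=loader, daemon=True).start()

    latent, lora = 0, None
    steps_without_lora = 0
    for step in range(num_steps):
        if lora is None:
            if steps_without_lora >= k or ready.is_set():
                ready.wait()           # bound reached: block until the load finishes
                lora = result["lora"]  # apply the adapter from this step onward
            else:
                steps_without_lora += 1  # run this step without the LoRA
        latent = denoise_step(latent, step, lora)
    return latent, steps_without_lora

latent, lora_free_steps = serve_with_bal(num_steps=30, k=4, lora_path="style.safetensors")
print(lora_free_steps <= 4)  # the overlap never exceeds k steps
```

The key design point the abstract highlights is the bound k: unbounded overlap would let the adapter arrive arbitrarily late and degrade image quality, while the bound guarantees the LoRA influences all but the first few denoising steps.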