RetCompletion: High-Speed Inference Image Completion with Retentive Network
| Publication date: | arXiv.org (Dec 4, 2024), p. n/a |
|---|---|
| Publisher: | Cornell University Library, arXiv.org |
| Abstract: | Time cost is a major challenge in achieving high-quality pluralistic image completion. The Retentive Network (RetNet), recently proposed for natural language processing, offers a novel approach to this problem through its low-cost inference. Inspired by this, we apply RetNet to the pluralistic image completion task in computer vision. We present RetCompletion, a two-stage framework. In the first stage, we introduce Bi-RetNet, a bidirectional sequence information fusion model that integrates contextual information from images. During inference, we employ a unidirectional pixel-wise update strategy to restore consistent image structures, achieving both high reconstruction quality and fast inference speed. In the second stage, we use a CNN for low-resolution upsampling to enhance texture details. Experiments on ImageNet and CelebA-HQ demonstrate that our inference speed is 10\(\times\) faster than ICT and 15\(\times\) faster than RePaint. The proposed RetCompletion significantly improves inference speed and delivers strong performance. |
| ISSN: | 2331-8422 |
| Source: | Engineering Database |
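The abstract names the key mechanisms, retention with low-cost inference and bidirectional fusion over pixel tokens, without giving details. The sketch below is a rough, hypothetical Python illustration of that kind of mechanism, not the authors' implementation: `SimpleRetention` implements a single-head parallel form of retention with an exponential decay mask, and `BiRetentionFusion` is an assumed stand-in for the bidirectional fusion idea, running retention over the token sequence and its reversal and mixing the two streams. All class names, dimensions, and the decay value are illustrative assumptions; the second-stage CNN upsampler is omitted.

```python
# Hypothetical sketch of bidirectional retention fusion over a pixel-token
# sequence, loosely following the RetNet parallel form; NOT the authors' code.
import torch
import torch.nn as nn


class SimpleRetention(nn.Module):
    """Single-head retention (parallel form): softmax-free attention with
    an exponential decay mask D[n, m] = gamma**(n - m) for n >= m."""

    def __init__(self, dim: int, gamma: float = 0.96875):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.gamma = gamma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim), e.g. a flattened low-resolution image.
        b, n, d = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        idx = torch.arange(n, device=x.device)
        # Decay matrix: tokens further in the past contribute less.
        decay = self.gamma ** (idx[:, None] - idx[None, :]).clamp(min=0)
        decay = decay * (idx[:, None] >= idx[None, :])          # zero out future
        scores = (q @ k.transpose(-1, -2)) / d ** 0.5 * decay   # (b, n, n)
        return scores @ v


class BiRetentionFusion(nn.Module):
    """Toy stand-in for the bidirectional fusion idea: run retention
    forward and on the reversed sequence, then mix the two streams."""

    def __init__(self, dim: int):
        super().__init__()
        self.fwd = SimpleRetention(dim)
        self.bwd = SimpleRetention(dim)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        forward_ctx = self.fwd(x)
        backward_ctx = self.bwd(x.flip(dims=[1])).flip(dims=[1])
        return self.mix(torch.cat([forward_ctx, backward_ctx], dim=-1))


if __name__ == "__main__":
    tokens = torch.randn(2, 64, 32)             # 2 images, 64 pixel tokens, dim 32
    print(BiRetentionFusion(32)(tokens).shape)  # torch.Size([2, 64, 32])
```

Retention also admits an equivalent recurrent form with a constant-size per-token state, which is what makes the low-cost, pixel-by-pixel inference credited in the abstract plausible; the sketch above shows only the parallel (training-time) form.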