UNet--: Memory-Efficient and Feature-Enhanced Network Architecture based on U-Net with Reduced Skip-Connections

Bibliographic Details
Publication year: arXiv.org (Dec 24, 2024), p. n/a
First author: Yin, Lingxiao
Other authors: Tao, Wei; Zhao, Dongyue; Ito, Tadayuki; Osa, Kinya; Kato, Masami; Chen, Tse-Wei
Publication details:
Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3149108798
003 UK-CbPIL
022 |a 2331-8422 
024 7 |a 10.1007/978-981-96-0963-5  |2 doi 
035 |a 3149108798 
045 0 |b d20241224 
100 1 |a Yin, Lingxiao 
245 1 |a UNet--: Memory-Efficient and Feature-Enhanced Network Architecture based on U-Net with Reduced Skip-Connections 
260 |b Cornell University Library, arXiv.org  |c Dec 24, 2024 
513 |a Working Paper 
520 3 |a U-Net models with encoder, decoder, and skip-connection components have demonstrated effectiveness in a variety of vision tasks. The skip-connections transmit fine-grained information from the encoder to the decoder, so the feature maps they carry must be kept in memory until the decoding stage. Therefore, they are not friendly to devices with limited resources. In this paper, we propose a universal method and architecture to reduce the memory consumption while generating enhanced feature maps to improve network performance. To this end, we design a simple but effective Multi-Scale Information Aggregation Module (MSIAM) in the encoder and an Information Enhancement Module (IEM) in the decoder. The MSIAM aggregates multi-scale feature maps into a single scale with less memory. After that, the aggregated feature maps can be expanded and enhanced into multi-scale feature maps by the IEM. By applying the proposed method to NAFNet, a SOTA model in the field of image restoration, we design a memory-efficient and feature-enhanced network architecture, UNet--. The memory demand of the skip-connections in UNet-- is reduced by 93.3%, while the performance is improved compared to NAFNet. Furthermore, we show that our proposed method can be generalized to multiple visual tasks, with consistent improvements in both memory consumption and network accuracy compared to existing efficient architectures. 
653 |a Visual tasks 
653 |a Feature maps 
653 |a Image restoration 
653 |a Memory tasks 
653 |a Memory devices 
653 |a Modules 
653 |a Decoding 
653 |a Consumption 
653 |a Coders 
653 |a Effectiveness 
700 1 |a Tao, Wei 
700 1 |a Zhao, Dongyue 
700 1 |a Ito, Tadayuki 
700 1 |a Osa, Kinya 
700 1 |a Kato, Masami 
700 1 |a Chen, Tse-Wei 
773 0 |t arXiv.org  |g (Dec 24, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3149108798/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.18276