Detail-aware image denoising via structure preserved network and residual diffusion model

Bibliographic Details
Published in: The Visual Computer, vol. 41, no. 1 (Jan 2025), p. 639
Publisher: Springer Nature B.V.
Description
Abstract: The rapid development of deep learning has led to significant strides in image denoising research and has achieved advanced denoising performance in terms of distortion metrics. However, most denoising models that build their loss functions on pixel-by-pixel differences produce artifacts such as blurred edges or over-smoothing in the denoised images, which are unsatisfactory to human perception. Our approach to this issue prioritizes visual perceptual quality and efficiently restores high-frequency details that may be lost during point-by-point denoising, while preserving the overall structure of the image. We introduce a structure preserved network to generate cost-effective initial predictions, which are then incorporated into a conditional diffusion model as a constraint that closely aligns with the actual images. This allows us to more accurately estimate the distribution of clean images by diffusing from the residuals. We observe that, by maintaining image consistency in the initial prediction, we can use a residual diffusion model with lower complexity and fewer iterations to restore detailed texture in the smoothed regions, ultimately producing a denoised image that is more consistent with human visual perception. Our method is superior at matching human perceptual metrics, e.g., FID, and maintains its performance even at high noise levels, preserving sharp edges and texture features while reducing computational cost and hardware requirements. This not only achieves the objective of denoising but also yields enhanced subjective visual quality.
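To make the two-stage idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch: a lightweight "structure-preserving" network produces a cheap initial prediction, and a small conditional diffusion model is trained to denoise only the residual (clean image minus initial prediction), conditioned on that prediction. The class names (StructurePreservedNet, ResidualDiffusion), network shapes, and noise schedule are illustrative assumptions, not the authors' implementation; a real model would also condition the noise predictor on the diffusion timestep.

# Hypothetical sketch (not the paper's code): initial prediction + residual diffusion.
import torch
import torch.nn as nn

class StructurePreservedNet(nn.Module):
    """Cheap initial denoiser; a placeholder conv stack standing in for the paper's network."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # Global skip connection keeps the coarse image structure intact.
        return noisy + self.body(noisy)

class ResidualDiffusion(nn.Module):
    """Tiny conditional noise predictor over the residual r = clean - init_pred."""
    def __init__(self, steps=50, ch=64):
        super().__init__()
        self.steps = steps
        betas = torch.linspace(1e-4, 0.02, steps)
        self.register_buffer("alphas_bar", torch.cumprod(1.0 - betas, dim=0))
        # Conditioned on the initial prediction: input is [r_t, init_pred] -> 6 channels.
        self.eps_net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def training_loss(self, clean, init_pred):
        residual = clean - init_pred
        t = torch.randint(0, self.steps, (clean.size(0),), device=clean.device)
        a_bar = self.alphas_bar[t].view(-1, 1, 1, 1)
        noise = torch.randn_like(residual)
        # Forward-diffuse the residual, then predict the injected noise.
        r_t = a_bar.sqrt() * residual + (1 - a_bar).sqrt() * noise
        eps_hat = self.eps_net(torch.cat([r_t, init_pred], dim=1))
        return nn.functional.mse_loss(eps_hat, noise)

# Usage sketch: the final output would add the refined residual back to the initial prediction.
if __name__ == "__main__":
    noisy = torch.randn(2, 3, 64, 64)
    clean = torch.randn(2, 3, 64, 64)
    spn, rdm = StructurePreservedNet(), ResidualDiffusion()
    init_pred = spn(noisy)
    loss = rdm.training_loss(clean, init_pred.detach())
    print(loss.item())

Because the residual carries only the high-frequency detail missing from the initial prediction, the diffusion stage can plausibly be kept small and run for few iterations, which is the efficiency argument the abstract makes.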
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-024-03353-y
Source: Advanced Technologies & Aerospace Database