Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration

Yuhong Zhang1, Hengsheng Zhang1, Xinning Chai1, Zhengxue Cheng1, Rong Xie1, Li Song1, Wenjun Zhang1
1Shanghai Jiao Tong University

Our Diff-Restorer model delivers strong restoration results on tasks with single degradation, real-world degradation, and mixed degradation, adaptively handling a wide range of degradation types.

Abstract

Image restoration is a classic low-level vision problem that aims to recover high-quality images from low-quality inputs corrupted by various degradations such as blur, noise, rain, and haze. However, due to the inherent complexity and non-uniqueness of degradation in real-world images, a model trained on a single task struggles to handle real-world restoration problems effectively. Moreover, existing methods often suffer from over-smoothing and a lack of realism in their restored results. To address these issues, we propose Diff-Restorer, a universal image restoration method built on the diffusion model, which leverages the prior knowledge of Stable Diffusion to remove degradation while producing restorations of high perceptual quality. Specifically, we utilize a pre-trained visual language model to extract visual prompts from degraded images, comprising semantic and degradation embeddings. The semantic embeddings serve as content prompts that guide the diffusion model's generation, while the degradation embeddings modulate the Image-guided Control Module to produce spatial priors that constrain the spatial structure of the diffusion process, ensuring faithfulness to the original image. Additionally, we design a Degradation-aware Decoder that performs structural correction and converts the latent code to the pixel domain. Comprehensive qualitative and quantitative analyses on restoration tasks with different degradations demonstrate the effectiveness and superiority of our approach.
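For readers who want the formalism: assuming Diff-Restorer follows the standard latent-diffusion noise-prediction objective with the two extracted conditions (a sketch in our own notation, not an equation quoted from the paper), training would minimize

    \mathcal{L} = \mathbb{E}_{z_0,\, t,\, \epsilon \sim \mathcal{N}(0,\mathbf{I})}\left[\left\| \epsilon - \epsilon_\theta\!\left(z_t,\, t,\, c_{\mathrm{sem}},\, c_{\mathrm{ctrl}}\right)\right\|_2^2\right],

where z_t is the noisy latent at timestep t, c_sem is the semantic embedding injected via cross-attention, and c_ctrl is the spatial prior produced by the degradation-modulated control branch; the symbols c_sem and c_ctrl are our notation.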

Method

Overview of the architecture of Diff-Restorer.

The CLIP Image Encoder and Prompt Processor extract visual prompts from the degraded image, consisting of a semantic embedding and a degradation embedding. An Image-guided Control Module, modulated by the degradation embedding, provides spatial control information. The pre-trained denoising U-Net takes the control information and the semantic embedding as conditions during denoising. After multiple denoising steps, the resulting latent features are transformed into a high-quality restored image by the Degradation-aware Decoder. A minimal code sketch of this data flow follows.
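Below is a minimal PyTorch-style sketch of the inference data flow described above. Every module name and interface here (PromptProcessor, ControlModule, the residual control injection, the toy denoising update) is our illustrative assumption, not the authors' released code; a real implementation would load pre-trained Stable Diffusion weights and use a proper noise scheduler.

# Minimal sketch of the Diff-Restorer inference data flow (our reconstruction, not official code).
import torch
import torch.nn as nn

class PromptProcessor(nn.Module):
    """Hypothetical: splits CLIP image features into semantic and degradation embeddings."""
    def __init__(self, dim=768):
        super().__init__()
        self.to_semantic = nn.Linear(dim, dim)     # content prompt for the U-Net's cross-attention
        self.to_degradation = nn.Linear(dim, dim)  # modulation signal for the control branch

    def forward(self, clip_feats):
        return self.to_semantic(clip_feats), self.to_degradation(clip_feats)

class ControlModule(nn.Module):
    """Hypothetical ControlNet-style branch: spatial priors from the low-quality latent,
    channel-modulated by the degradation embedding."""
    def __init__(self, dim=768, latent_ch=4):
        super().__init__()
        self.encode = nn.Conv2d(latent_ch, latent_ch, 3, padding=1)
        self.modulate = nn.Linear(dim, latent_ch)

    def forward(self, lq_latent, deg_emb):
        h = self.encode(lq_latent)
        scale = self.modulate(deg_emb).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return h * (1 + scale)

@torch.no_grad()
def restore(lq_latent, clip_feats, unet, num_steps=50):
    """Denoise a random latent conditioned on visual prompts and spatial control."""
    sem_emb, deg_emb = PromptProcessor()(clip_feats)
    control = ControlModule()
    z = torch.randn_like(lq_latent)
    for t in reversed(range(num_steps)):
        ctrl = control(lq_latent, deg_emb)
        eps = unet(z + ctrl, t, sem_emb)  # control injected as a residual (assumption)
        z = z - eps / num_steps           # placeholder update; a real scheduler (e.g. DDIM) goes here
    return z  # the Degradation-aware Decoder would then map z to pixel space

# Toy usage with a stub U-Net (random tensors, purely illustrative):
stub_unet = lambda z, t, cond: torch.zeros_like(z)
latents = restore(torch.randn(1, 4, 64, 64), torch.randn(1, 768), stub_unet)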

Results

Single-degradation restoration results

Qualitative comparison of different methods on the eight single-degradation restoration tasks.

Real-world degradation restoration results

Qualitative comparison of different methods on real-world restoration tasks.

Mixed-degradation restoration results

Qualitative comparison of different methods on mixed-degradation restoration tasks.

BibTeX

@misc{zhang2024diffrestorerunleashingvisualprompts,
      title={Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration}, 
      author={Yuhong Zhang and Hengsheng Zhang and Xinning Chai and Zhengxue Cheng and Rong Xie and Li Song and Wenjun Zhang},
      year={2024},
      eprint={2407.03636},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.03636}, 
}