
Computer Vision and Image Understanding

41 Citations • 2021
Jaihyun Koh, Jangho Lee, Sungroh Yoon

It was found that multi-scale training helps NNs to deal with large blurs, that RNNs outperform CNNs, and that GANs using a perceptual loss function produce artifacts.

Abstract

Neural networks (NNs) are becoming the tool of choice for sharpening blurred images. We discuss and categorize deblurring NNs, and then evaluate seven NNs for non-blind deblurring (NBD), and seven NNs and four optimization-based methods for blind deblurring (BD). To do this we use several current datasets containing pairs of sharp and blurred images, synthesized either by convolving sharp images with blur kernels or by averaging consecutive sharp images, so as to produce both uniform and non-uniform blurs. We also introduce a newly reorganized benchmark dataset in which blurred images are classified by attributes that depend on the extent of the blur. We use this dataset to compare the effectiveness of single-scale and multi-scale training in coping with large blurs. On NBD, NNs that use regularization with a denoising prior network outperform other denoising NNs, and NNs that use a deep image prior network outperform other deconvolution NNs. On BD, NNs outperform optimization-based methods in terms of signal difference, but not in terms of perceptual fidelity. We found that multi-scale training helps NNs to deal with large blurs, and that RNNs outperform CNNs. We also observed that GANs using a perceptual loss function produce artifacts, but that some form of perceptual fidelity loss is nevertheless required to get the best results from NNs. We contend that the domain bias of current datasets works against robustness and generality, and we discuss the potential of more sophisticated perceptual loss functions, attention techniques, and unsupervised learning.
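
As context for the data-synthesis procedure the abstract describes, the following is a minimal sketch of the two ways blurred training pairs are typically generated: convolving a sharp image with a blur kernel (uniform blur) and averaging consecutive sharp frames (non-uniform blur). The function names and the horizontal motion kernel are illustrative assumptions, not the paper's exact pipeline.

    # Sketch of the two blur-synthesis strategies described in the abstract.
    # Assumes images are float arrays in [0, 1]; names are illustrative.
    import numpy as np
    from scipy.ndimage import convolve

    def uniform_blur(sharp, kernel):
        """Blur by convolving a sharp image with a single blur kernel,
        giving a spatially invariant (uniform) blur."""
        return convolve(sharp, kernel, mode="reflect")

    def nonuniform_blur(frames):
        """Blur by averaging consecutive sharp frames, approximating
        motion blur that varies across the image (non-uniform)."""
        return np.mean(np.stack(frames, axis=0), axis=0)

    # Example: a 9x9 horizontal motion-blur kernel (an assumed kernel shape).
    kernel = np.zeros((9, 9))
    kernel[4, :] = 1.0 / 9.0

    sharp = np.random.rand(64, 64)            # stand-in for a sharp image
    blurred_u = uniform_blur(sharp, kernel)   # uniform blur
    blurred_n = nonuniform_blur([np.roll(sharp, s, axis=1) for s in range(5)])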
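
The abstract also weighs perceptual loss functions against signal-difference losses. The sketch below shows one common form of perceptual (feature-space) loss, computed as the distance between deep features of the restored and ground-truth images; the choice of a frozen VGG-19 extractor up to relu4_4 is an assumption for illustration, not the configuration evaluated in the paper.

    # Hedged sketch of a perceptual (feature-space) loss; the VGG layer
    # choice is an assumption, not the paper's configuration.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    # Frozen VGG-19 feature extractor up to relu4_4 (features index 26).
    _vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:27].eval()
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def perceptual_loss(deblurred, sharp):
        """L2 distance between VGG features of the restored image and the
        sharp ground truth, rather than a pixel-wise difference.
        Inputs: (N, 3, H, W) tensors, ImageNet-normalized."""
        return F.mse_loss(_vgg(deblurred), _vgg(sharp))

A loss of this kind rewards matches in texture and structure rather than exact pixel values, which is consistent with the abstract's observation that some perceptual fidelity term is needed for the best results, even though GANs trained with perceptual losses can introduce artifacts.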