Abstract: To address the problem that image super-resolution performance degrades significantly when the assumed degradation model is inconsistent with the actual one, a deep convolutional neural network that integrates spatially local information and a degradation representation of the image is proposed. First, initial features and a degradation representation are extracted from the low-resolution image. Then, modules combining spatial local information with degradation information are constructed together with feature fusion blocks, and these modules are cascaded to form the feature transformation subnetwork. Finally, the high-resolution image is produced by a deconvolution layer. Experiments on benchmark test datasets show that, when the Gaussian kernel width is nonzero, the algorithm achieves a higher peak signal-to-noise ratio (PSNR) than current mainstream blind super-resolution algorithms for both sampling factors, with highest PSNR values of 37.56 dB and 31.87 dB for the two factors, respectively. It also offers higher reconstruction efficiency and better reconstruction visual quality than the mainstream algorithms.
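The pipeline described above (initial feature extraction, a degradation encoder, cascaded blocks that fuse spatially local features with the degradation representation, and a final deconvolution layer) can be sketched as follows. This is a minimal illustrative sketch in PyTorch; all module names, channel counts, and the channel-wise modulation used for fusion are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LocalDegradationBlock(nn.Module):
    """One cascaded unit: a spatial-local branch, a degradation-conditioned
    branch, and a feature fusion step (hypothetical structure)."""
    def __init__(self, ch, deg_dim):
        super().__init__()
        self.local = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
        # Project the degradation representation to a channel-wise gate
        # (assumed fusion mechanism, for illustration only).
        self.deg_proj = nn.Linear(deg_dim, ch)
        self.fuse = nn.Conv2d(ch * 2, ch, 1)

    def forward(self, x, deg):
        local = self.local(x)
        gate = torch.sigmoid(self.deg_proj(deg)).unsqueeze(-1).unsqueeze(-1)
        modulated = x * gate                       # degradation-aware branch
        return self.fuse(torch.cat([local, modulated], dim=1)) + x

class BlindSRNet(nn.Module):
    def __init__(self, ch=64, deg_dim=32, n_blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)  # initial feature extraction
        # Degradation encoder: pools the LR image into a compact vector.
        self.deg_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, deg_dim))
        self.blocks = nn.ModuleList(
            [LocalDegradationBlock(ch, deg_dim) for _ in range(n_blocks)])
        # Deconvolution (transposed convolution) produces the HR output.
        self.up = nn.ConvTranspose2d(ch, 3, scale * 2,
                                     stride=scale, padding=scale // 2)

    def forward(self, lr):
        deg = self.deg_enc(lr)        # degradation representation
        x = self.head(lr)             # initial features
        for blk in self.blocks:       # cascaded feature transformation
            x = blk(x, deg)
        return self.up(x)             # upsample to high resolution

net = BlindSRNet(scale=2)
sr = net(torch.randn(1, 3, 24, 24))
print(tuple(sr.shape))  # spatial size upscaled by the chosen factor
```

The key design point reflected here is that the degradation representation is injected into every cascaded block rather than only at the input, so feature transformation stays conditioned on the estimated degradation throughout.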