Image restoration method based on a two-stage multi-scale generative adversarial network
DOI:
Author:
Affiliation:

1. School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing; 2. School of Electronic Information Engineering, Wuxi University, Wuxi

Author biography:

Corresponding author:

CLC number:

TP391.41

Fund project:

National Natural Science Foundation of China (62071240); 2024 Jiangsu Province Postgraduate Innovation Project (2311082401501)



Abstract:

To address the insufficient use of image scale information and the incorrect restoration of eyeglass structures in face image restoration tasks, a restoration model based on a two-stage multi-scale generative adversarial network is proposed. In the first stage, the model introduces a U-Net coarse reconstruction network with improved losses: skip connections reduce the loss of original image information, three different loss functions are fused to improve the reconstruction ability of the generator, a dual discriminator accounts for both global and local information, and a mixed-domain attention mechanism is proposed to attend to the spatial and channel information of the image. In the second stage, the fine restoration network builds a new feature enhancement module to strengthen the network's ability to extract detail information and express structure, and introduces a relativistic discriminator that focuses on the relative realism of generated samples with respect to real samples, improving generation quality and training stability. Experimental results show that the method can restore images with various kinds of missing regions and can effectively restore face images of people wearing glasses; compared with other methods, its peak signal-to-noise ratio, structural similarity, and perceptual similarity metrics improve by 3.81%, 2.65%, and 0.45%, respectively.
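For readers who want a concrete picture of the mixed-domain (channel plus spatial) attention idea mentioned in the abstract, the following is a minimal PyTorch sketch of a generic channel-then-spatial attention block in the style of CBAM. It is an illustrative assumption about how such a module can be organized, not the paper's actual design, and the class names are hypothetical.

```python
# Minimal channel + spatial ("mixed-domain") attention block, CBAM-style.
# Hypothetical sketch; the paper's module may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))       # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))        # global max pooling branch
        weight = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * weight                        # reweight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # per-pixel mean over channels
        mx, _ = x.max(dim=1, keepdim=True)       # per-pixel max over channels
        weight = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * weight                        # reweight spatial positions


class MixedDomainAttention(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))
```

A block of this kind is typically placed after convolutional stages of the coarse generator so that feature maps are reweighted along both the channel and spatial dimensions before being passed on.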
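The relativistic discriminator described for the second stage is commonly implemented as a relativistic average GAN loss, in which each sample is judged relative to the mean score of the opposite class. The sketch below shows that formulation with a binary cross-entropy criterion; this is an assumption about the exact loss used in the paper.

```python
# Relativistic average GAN losses, sketched with BCE-with-logits.
# An assumption about the formulation; the paper may use a different variant.
import torch
import torch.nn.functional as F


def relativistic_d_loss(real_logits, fake_logits):
    # Real samples should score higher than the average fake, and vice versa.
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)


def relativistic_g_loss(real_logits, fake_logits):
    # The generator tries to reverse the relative ordering.
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    return 0.5 * (loss_real + loss_fake)
```

When updating only the generator, the real logits are usually computed without tracking discriminator gradients (or are detached), so that only the fake branch contributes gradients to the generator parameters.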
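The reported metrics (peak signal-to-noise ratio, structural similarity, and perceptual similarity) can be computed with standard libraries. The sketch below uses scikit-image and the lpips package and assumes 8-bit RGB inputs as NumPy arrays; it is a generic evaluation routine, not the paper's evaluation code.

```python
# PSNR / SSIM / LPIPS evaluation sketch for 8-bit RGB images (H x W x 3, uint8 NumPy arrays).
# Generic routine using scikit-image and the lpips package; not the paper's code.
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_model = lpips.LPIPS(net='alex')  # learned perceptual similarity, lower is better


def to_lpips_tensor(img):
    # HWC uint8 -> NCHW float in [-1, 1], as expected by lpips
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float() / 127.5 - 1.0


def evaluate(restored, reference):
    """Return (PSNR, SSIM, LPIPS) between a restored image and its reference."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)
    with torch.no_grad():
        lp = lpips_model(to_lpips_tensor(restored), to_lpips_tensor(reference)).item()
    return psnr, ssim, lp
```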

History
  • Received: 2024-01-22
  • Revised: 2024-04-16
  • Accepted: 2024-04-18
  • Published online:
  • Published: