Image super-resolution model based on feature fusion and attention mechanism
Citation: Pan Zhanhong, Zhu Jian, Cai Ruichu, Chen Bingfeng. Image super-resolution model based on feature fusion and attention mechanism[J]. Application Research of Computers, 2022, 39(3): 884-888.
Authors: Pan Zhanhong  Zhu Jian  Cai Ruichu  Chen Bingfeng
Affiliation: School of Computers, Guangdong University of Technology, Guangzhou 510006, China; Qingdao Research Institute of Beihang University, Qingdao, Shandong 266000, China
Funding: National Natural Science Foundation of China (61502109, 61672502, 61702112); Natural Science Foundation of Guangdong Province (2016A030310342); Science and Technology Planning Project of Guangdong Province (2016A040403078, 2017B010110015, 2017B010110007); Science and Technology Planning Project of Guangzhou (201604016075, 202007040005)
Abstract: Existing deep learning based single image super-resolution (SISR) models usually improve their fitting ability by stacking more network layers, but they do not fully extract and reuse features, which lowers the quality of the reconstructed images. To address this problem, this paper proposes an image super-resolution model based on feature fusion and an attention mechanism. The feature extraction module adopts a residual-in-residual (RIR) structure: it is built from several residual groups, each containing several residual blocks, with local feature fusion performed inside each residual group and global feature fusion performed across the groups. In addition, the model introduces a coordinate attention module into each residual block and a spatial attention module into each residual group. Experiments verify that the model fully extracts and reuses features, and the final results show that it outperforms existing models in both objective evaluation metrics and subjective visual quality.

Keywords: super-resolution  deep learning  feature fusion  attention mechanism
Received: 2021-07-01
Revised: 2022-02-16
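To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation. The class names (CoordinateAttention, SpatialAttention, ResidualBlock, ResidualGroup, RIRNet), the channel width, the numbers of groups and blocks, the simplified forms of the coordinate and spatial attention modules, and the pixel-shuffle reconstruction head are all illustrative assumptions; only the overall layout follows the abstract: residual blocks carry coordinate attention, residual groups apply local feature fusion and spatial attention, and the RIR trunk fuses the groups' outputs globally before upsampling.

import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention (simplified): pool along H and W, then gate each direction."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w

class SpatialAttention(nn.Module):
    """Spatial attention: a single-channel gate built from avg- and max-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv with coordinate attention and a short skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            CoordinateAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class ResidualGroup(nn.Module):
    """Residual blocks + local feature fusion (concat -> 1x1 conv) + spatial attention."""
    def __init__(self, channels, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualBlock(channels) for _ in range(n_blocks)])
        self.fuse = nn.Conv2d(channels * n_blocks, channels, 1)  # local feature fusion
        self.sa = SpatialAttention()

    def forward(self, x):
        feats, out = [], x
        for block in self.blocks:
            out = block(out)
            feats.append(out)                    # keep every block's output for fusion
        return x + self.sa(self.fuse(torch.cat(feats, dim=1)))

class RIRNet(nn.Module):
    """Residual-in-residual trunk with global feature fusion across the groups."""
    def __init__(self, scale=4, channels=64, n_groups=4, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.groups = nn.ModuleList([ResidualGroup(channels, n_blocks) for _ in range(n_groups)])
        self.gff = nn.Conv2d(channels * n_groups, channels, 1)   # global feature fusion
        self.tail = nn.Sequential(                               # assumed pixel-shuffle upsampler
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr):
        shallow = self.head(lr)
        feats, out = [], shallow
        for group in self.groups:
            out = group(out)
            feats.append(out)                    # keep every group's output for fusion
        deep = shallow + self.gff(torch.cat(feats, dim=1))       # long skip over the trunk
        return self.tail(deep)

With these assumed settings, RIRNet(scale=4)(torch.randn(1, 3, 24, 24)) returns a tensor of shape (1, 3, 96, 96). Concatenation followed by a 1x1 convolution is used for both fusion steps so that every block's and every group's output contributes to the fused feature, which is one common way to realize the "fully extract and reuse features" goal stated in the abstract.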
