Infrared and visible image fusion using improved generative adversarial networks
Cite this article: Min Li, Cao Sijian, Zhao Huaici, Liu Pengfei. Infrared and visible image fusion using improved generative adversarial networks[J]. Infrared and Laser Engineering, 2022, 51(4): 20210291-1-20210291-10.
Authors: Min Li, Cao Sijian, Zhao Huaici, Liu Pengfei
Affiliation: 1. School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China; 2. Key Laboratory of Optical-Electronics Information Processing, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
Funding: National Key Research and Development Program of China (2018YFB1105300); Key Fund of Equipment Pre-Research (JZX7Y2019025049301)
Abstract: Infrared and visible image fusion can provide both the thermal radiation information of infrared images and the texture detail information of visible images, and is widely used in intelligent surveillance, target detection and tracking. Because the two types of images are based on different imaging principles, the key to fusion is how to combine the advantages of each image while keeping the fused result free of distortion. Traditional fusion methods simply superimpose image information and ignore the semantic information of the images. To address this problem, an improved generative adversarial network was proposed. The generator was designed with two branches, a local detail branch and a global semantic branch, to capture the detail and semantic information of the source images; a spectral normalization module was introduced into the discriminator to alleviate the training difficulty of traditional generative adversarial networks and accelerate convergence; and a perceptual loss was introduced to maintain the structural similarity between the fused image and the source images, further improving the fusion accuracy. Experimental results show that the proposed method outperforms other representative methods in both subjective evaluation and objective metrics; compared with the method based on the total variation model, the average gradient and spatial frequency are improved by 55.84% and 49.95%, respectively.

Keywords: image fusion; generative adversarial network; semantic information; spectral normalization
Received: 2021-05-02
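The abstract names the main architectural components without giving layer details. Below is a minimal PyTorch sketch of how a two-branch generator (local detail branch plus global semantic branch) and a spectrally normalized discriminator could be wired up; the class names, channel counts, and layer settings are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative sketch only; the paper's exact architecture is not given in this record.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class TwoBranchGenerator(nn.Module):
    """Fuses a 1-channel infrared and a 1-channel visible image.

    A local-detail branch (plain 3x3 convolutions) and a global-semantic
    branch (dilated convolutions, larger receptive field) are concatenated
    and decoded into the fused image.
    """

    def __init__(self, ch=32):
        super().__init__()
        self.detail = nn.Sequential(                       # local detail branch
            nn.Conv2d(2, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        )
        self.semantic = nn.Sequential(                     # global semantic branch
            nn.Conv2d(2, ch, 3, padding=2, dilation=2), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.LeakyReLU(0.2),
        )
        self.fuse = nn.Sequential(                         # merge the two branches
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)
        return self.fuse(torch.cat([self.detail(x), self.semantic(x)], dim=1))


class SNDiscriminator(nn.Module):
    """PatchGAN-style discriminator; spectral_norm constrains each convolution's
    Lipschitz constant, which stabilizes adversarial training."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(1, ch, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(2 * ch, 1, 4, padding=1)),
        )

    def forward(self, x):
        return self.net(x)
```

Under these assumptions, a fused image is obtained as TwoBranchGenerator()(ir, vis) for tensors of shape (N, 1, H, W) scaled to [-1, 1].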

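The perceptual loss mentioned in the abstract is typically computed on features from a pretrained network; a common realization (an assumption here, not necessarily the authors' exact formulation) compares VGG-16 features of the fused image against those of both source images:

```python
# Sketch of a VGG-feature perceptual loss; the layer choice (relu2_2) is illustrative.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    """Mean-squared distance between VGG-16 relu2_2 features of the fused image
    and those of each source image."""

    def __init__(self):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:9].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)            # frozen feature extractor

    def forward(self, fused, ir, vis):
        # VGG expects 3-channel input; repeat the single gray channel.
        f, i, v = (self.features(x.repeat(1, 3, 1, 1)) for x in (fused, ir, vis))
        return torch.mean((f - i) ** 2) + torch.mean((f - v) ** 2)
```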