Infrared and Visible Image Fusion Based on Semantic Segmentation
Citation: Zhou Huabing, Hou Jilei, Wu Wei, Zhang Yanduo, Wu Yuntao, Ma Jiayi. Infrared and Visible Image Fusion Based on Semantic Segmentation[J]. Journal of Computer Research and Development, 2021, 58(2): 436-443. DOI: 10.7544/issn1000-1239.2021.20200244
Authors: Zhou Huabing, Hou Jilei, Wu Wei, Zhang Yanduo, Wu Yuntao, Ma Jiayi
Affiliations: 1(College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205); 2(Hubei Key Laboratory of Intelligent Robot (Wuhan Institute of Technology), Wuhan 430205); 3(Electronic Information School, Wuhan University, Wuhan 430072) (zhouhuabing@gmail.com)
Funding: National Natural Science Foundation of China; Technological Innovation Project of Hubei Province
Abstract: Even under low-light conditions, infrared images can distinguish targets from the background by differences in thermal radiation, while visible images provide texture details at high spatial resolution; in addition, both infrared and visible images carry corresponding semantic information. Infrared and visible image fusion should therefore preserve the radiation information of the infrared image and the texture details of the visible image, while also reflecting the semantic information of both. Semantic segmentation can convert an image into a mask carrying semantic labels, thereby extracting the semantic information of the source images. This paper proposes an infrared and visible image fusion method based on semantic segmentation, which overcomes the inability of existing fusion methods to extract the information specific to different regions. Under a generative adversarial network framework, two different loss functions are designed for different regions of the source images to improve the quality of the fused image. First, semantic segmentation produces a mask carrying the semantic information of the target regions of the infrared image, and this mask is used to divide the infrared and visible images into the infrared target region, the infrared background region, the visible target region, and the visible background region. Then, different loss functions are applied to the target and background regions to obtain a fused target image and a fused background image. Finally, the two fused images are combined into the final fused image. Experiments show that the fused results have higher contrast in the target region and richer texture details in the background region, demonstrating that the proposed method achieves good fusion performance.

Keywords: infrared image; visible image; image fusion; semantic segmentation; mask
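The mask-based decomposition and recombination described in the abstract can be sketched as follows. This is a minimal NumPy illustration only; the function and variable names are assumptions, not from the paper, and the actual method trains a generative adversarial network rather than simply copying pixels.

```python
import numpy as np

def split_by_mask(ir, vis, mask):
    """Split infrared (ir) and visible (vis) images into target and
    background regions using a binary semantic mask (1 = target)."""
    return ir * mask, ir * (1 - mask), vis * mask, vis * (1 - mask)

def combine(fused_target, fused_background, mask):
    """Recombine the two regionally fused images into the final result."""
    return fused_target * mask + fused_background * (1 - mask)

# Toy example: 4x4 grayscale images with a 2x2 target in the top-left corner.
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
ir = np.full((4, 4), 0.9)    # bright thermal target (toy values)
vis = np.full((4, 4), 0.3)   # darker visible scene (toy values)

ir_t, ir_b, vis_t, vis_b = split_by_mask(ir, vis, mask)
# As a stand-in for the learned fusion, keep IR in the target region
# and VIS in the background region.
fused = combine(ir_t, vis_b, mask)
```

In the paper the two regional fusion results come from the trained generator; the sketch only shows how the mask partitions the inputs and how the final image is reassembled.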

Infrared and Visible Image Fusion Based on Semantic Segmentation
Citation: Zhou Huabing, Hou Jilei, Wu Wei, Zhang Yanduo, Wu Yuntao, Ma Jiayi. Infrared and Visible Image Fusion Based on Semantic Segmentation[J]. Journal of Computer Research and Development, 2021, 58(2): 436-443. DOI: 10.7544/issn1000-1239.2021.20200244
Authors:Zhou Huabing  Hou Jilei  Wu Wei  Zhang Yanduo  Wu Yuntao  Ma Jiayi
Affiliation:1(College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205);2(Hubei Key Laboratory of Intelligent Robot (Wuhan Institute of Technology), Wuhan 430205);3(Electronic Information School, Wuhan University, Wuhan 430072)
Abstract: Infrared images can distinguish targets from their backgrounds by differences in thermal radiation, even in poor lighting conditions. By contrast, visible images represent texture details with high spatial resolution. Meanwhile, both infrared and visible images carry corresponding semantic information. Infrared and visible image fusion should therefore preserve both the radiation information of the infrared image and the texture details of the visible image; additionally, it needs to retain the semantic information of both. Semantic segmentation can transform the source images into masks with semantic information. In this paper, an infrared and visible image fusion method based on semantic segmentation is proposed. It overcomes the shortcoming that existing fusion methods cannot extract the information specific to different regions. Considering the region-specific information of infrared and visible images, we design two loss functions for different regions to improve the quality of the fused image under the framework of a generative adversarial network. First, we obtain masks with the semantic information of the infrared images by semantic segmentation; we then use the masks to divide the infrared and visible images into the infrared target area, infrared background area, visible target area, and visible background area. Second, we fuse the target and background areas with the two loss functions, respectively. Finally, we combine the two fused regions to obtain the final fused image. Experiments show that the proposed method outperforms state-of-the-art methods: our results have higher contrast in the target area and richer texture details in the background area.
Keywords:infrared image  visible image  image fusion  semantic segmentation  mask
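The abstract does not give the exact form of the two region-specific loss functions. Purely as an illustration, the sketch below assumes one plausible choice in the spirit of the stated goals: an intensity-fidelity term toward the infrared image in the target region (to preserve thermal radiation) and a gradient-fidelity term toward the visible image in the background region (to preserve texture). The function names and loss forms are assumptions, not the paper's definitions.

```python
import numpy as np

def gradient_magnitude(img):
    # L1 magnitude of simple forward finite-difference gradients;
    # padding with the last row/column keeps the output the same shape.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def target_loss(fused, ir, mask):
    # Target region: stay close to infrared pixel intensities
    # (hypothetical stand-in for the paper's target-region loss).
    return float(np.mean((mask * (fused - ir)) ** 2))

def background_loss(fused, vis, mask):
    # Background region: match the gradients of the visible image
    # (hypothetical stand-in for the paper's background-region loss).
    bg = 1.0 - mask
    diff = gradient_magnitude(fused) - gradient_magnitude(vis)
    return float(np.mean((bg * diff) ** 2))

# Toy check: a fused image identical to the respective source
# incurs zero loss in that region.
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
ir = np.linspace(0.0, 1.0, 16).reshape(4, 4)
vis = np.linspace(1.0, 0.0, 16).reshape(4, 4)
```

In the actual method these terms would be combined with an adversarial loss and minimized over the generator's outputs; the sketch only makes the "different loss per region" idea concrete.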
This article is indexed by VIP, Wanfang Data, and other databases.
The original abstract and the full text are available from Journal of Computer Research and Development.