Fusion of infrared and visual images based on non-sampled Contourlet transform and region classification
Cite this article: ZHANG Lei, JIN Long-xu, HAN Shuang-li, LV Zeng-ming, LI Xin-e. Fusion of infrared and visual images based on non-sampled Contourlet transform and region classification [J]. Optics and Precision Engineering, 2015, 23(3): 810-818.
Authors: ZHANG Lei  JIN Long-xu  HAN Shuang-li  LV Zeng-ming  LI Xin-e
Affiliation: 1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; 2. University of Chinese Academy of Sciences, Beijing 100039, China; 3. School of Physics, Northeast Normal University, Changchun 130024, China
Fund project: Jilin Province Science and Technology Development Plan (No. 20126016)
Abstract: A fusion method for infrared and visible images combining multiscale transforms with region-based processing is proposed to effectively preserve the spatial information and thermal target information of the infrared and visible images and to improve the observability and intelligibility of the fused image. First, the infrared and visible images are preliminarily fused with the nonsubsampled Contourlet transform (NSCT): the low-pass subband coefficients are fused by a local-energy rule, and the bandpass directional subband coefficients are fused according to the correlation among the directional subbands within each scale. Then, the structural similarity (SSIM) between the intermediate fused image and each source image is computed, and the images are divided into regions according to how structurally similar each source image is to the intermediate fused image, which yields a similarity-based region classification map. Finally, a second fusion pass applies a different fusion strategy to each region according to its similarity characteristics, producing the final fusion result. Experimental results show that the method fully extracts the regional and texture features of the source images and that its fusion results outperform currently popular fusion methods in both subjective and objective evaluations. Compared with fusion using NSCT alone, the quality metrics on the two test image groups improve by 16%, 85%, 54%, and 36%, and by 18%, 102%, 84%, and 41%, respectively, showing that the method outperforms the dual-tree complex wavelet transform (DTCWT), NSCT, and redundant discrete wavelet transform (RDWT) methods in both subjective and objective evaluations.
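As a rough illustration of the first fusion pass, the Python sketch below applies a local-energy rule to the low-pass subband coefficients of the two NSCT decompositions. The 3x3 averaging window, the squared-coefficient energy measure, and the hard per-pixel selection are assumptions made for illustration and need not match the exact rule used in the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass_local_energy(low_ir, low_vis, window=3):
    # Local energy of each low-pass subband: mean of squared coefficients
    # over a small neighbourhood (the window size is an assumption).
    energy_ir = uniform_filter(low_ir.astype(float) ** 2, size=window)
    energy_vis = uniform_filter(low_vis.astype(float) ** 2, size=window)
    # Keep, at each pixel, the coefficient whose neighbourhood carries more
    # energy; a weighted blend of the two coefficients is another option.
    return np.where(energy_ir >= energy_vis, low_ir, low_vis)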

Keywords: image fusion  infrared image  visible image  multiscale transform  nonsubsampled Contourlet transform  structural similarity
Received: 2013-12-27

Fusion of infrared and visual images based on non-sampled Contourlet transform and region classification
ZHANG Lei, JIN Long-xu, HAN Shuang-li, LV Zeng-ming, LI Xin-e. Fusion of infrared and visual images based on non-sampled Contourlet transform and region classification [J]. Optics and Precision Engineering, 2015, 23(3): 810-818.
Authors:ZHANG Lei  JIN Long-xu  HAN Shuang-li  LV Zeng-ming  LI Xin-e
Affiliation:1. Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China;2. University of Chinese Academy of Sciences, Beijing 100039, China;3. School of Physics, Northeast Normal University, Changchun 130024, China
Abstract: A novel method based on region classification and a multi-resolution transform was presented for the fusion of infrared and visual images, to retain their spatial information and thermal target information and to improve the observability and intelligibility of the fused image. The fusion process contained three steps. Firstly, the infrared and visual images were fused by the Nonsubsampled Contourlet Transform (NSCT) to obtain low-pass subband coefficients and bandpass directional subband coefficients; the low-pass subband coefficients were fused by a region-energy rule, and the bandpass directional subband coefficients were fused according to their correlation within each scale. Then, the Structural Similarity Index (SSIM) between each original image and the intermediate fused image was computed; based on the obtained SSIM, the images were classified into regions and similarity region classification maps were obtained. Finally, the pixels of the original images were classified by a similarity threshold into common and complementary regions, and, in accordance with the similarity characteristics of each region, the original images were fused a second time to obtain the final fused image. In this way, the common and complementary regions of the infrared and visual images were distinguished effectively. The experimental results show that the method fuses infrared and visual images better than several current methods, such as NSCT, the Dual-tree Complex Wavelet Transform (DTCWT), the Redundant Discrete Wavelet Transform (RDWT), and the Discrete Wavelet Transform (DWT). Compared with the NSCT method on the two groups of test images, the quality indexes increase by 16%, 85%, 54%, and 36%, and by 18%, 102%, 84%, and 41%, respectively.
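As a sketch of the region-classification step, the code below compares each source image with the intermediate fused image through local SSIM maps and labels pixels as common, infrared-dominant, or visible-dominant. The use of scikit-image's structural_similarity, the 0.6 threshold, and the three-way labelling are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
from skimage.metrics import structural_similarity

def similarity_region_map(src_ir, src_vis, fused, thresh=0.6):
    # Full SSIM maps between each 8-bit source image and the intermediate fusion.
    _, ssim_ir = structural_similarity(src_ir, fused, data_range=255, full=True)
    _, ssim_vis = structural_similarity(src_vis, fused, data_range=255, full=True)
    # Label pixels: 0 = common region (or neither source clearly dominant),
    # 1 = infrared-dominant region, 2 = visible-dominant region.
    labels = np.zeros(fused.shape, dtype=np.uint8)
    labels[(ssim_ir >= thresh) & (ssim_vis < thresh)] = 1
    labels[(ssim_vis >= thresh) & (ssim_ir < thresh)] = 2
    return labels

A second fusion pass could then apply a different rule per label, for example favouring the infrared coefficients inside infrared-dominant regions.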
Keywords:image fusion  infrared image  visible image  multiresolution transform  Nonsubsampled Contourlet Transform (NSCT)  Structural Similarity Index (SSIM)