Similar Articles
A total of 20 similar articles were retrieved.
1.
Image fusion aims to integrate complementary information from multiple modalities into a single image without distortion or loss of data. Image fusion is important in medical imaging, specifically for detecting tumors and identifying diseases. In this article, a novel fusion method based on the discrete wavelet transform (DWT) and intuitionistic fuzzy sets (IFSs), termed DWT-IFS, is proposed. For fusion, all source images are first fused using the DWT with the average, maximum, and entropy fusion rules. The IFS procedure is then applied to the fused image. In the IFS process, images are converted into intuitionistic fuzzy images (IFIs) by selecting an optimum value of the parameter in the membership, non-membership, and hesitation degree functions using entropy. The resulting IFIs are decomposed into blocks, and the corresponding blocks of the images are fused using the intersection and union operations of IFS. The efficiency of the proposed DWT-IFS fusion method is verified by comparing it with other existing methods, such as Averaging (AVG), Principal Component Analysis (PCA), Laplacian Pyramid Approach (LPA), Contrast Pyramid Approach (CPA), Discrete Wavelet Transform (DWT), Morphological Pyramid Approach (MPA), Redundancy Discrete Wavelet Transform (RDWT), Contourlet Transform (CONTRA), and Intuitionistic Fuzzy Set (IFS), using subjective and objective performance evaluation measures. The experimental results reveal that the proposed DWT-IFS fusion method provides higher quality of information in terms of physical properties and contrast compared with the existing methods.
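As a rough illustration of the first stage described in this abstract (DWT decomposition followed by simple coefficient-level fusion rules), the sketch below uses PyWavelets on two registered grayscale images; the IFS stage and the paper's exact rule combination are not reproduced, and the wavelet choice is an assumption.

```python
# Minimal sketch of DWT-based fusion: average rule for the approximation
# band, maximum-absolute rule for the detail bands. Illustrative only;
# it does not implement the IFS stage of the DWT-IFS method.
import numpy as np
import pywt

def dwt_fuse(img1, img2, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(float), wavelet)
    # Average rule for the low-frequency (approximation) band.
    cA = (cA1 + cA2) / 2.0
    # Maximum-absolute rule for each high-frequency (detail) band.
    def max_abs(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)
    details = (max_abs(cH1, cH2), max_abs(cV1, cV2), max_abs(cD1, cD2))
    return pywt.idwt2((cA, details), wavelet)
```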

2.
Image fusion makes the fused image more reliable and intelligible, and more suitable for human vision and for computer detection, classification, recognition, and understanding. This paper proposes a pixel-level image fusion method for merging two source images of the same scene using the wavelet transform and gray-level features (GLF). First, a three-level discrete two-dimensional wavelet transform is used to decompose the two source images into low-frequency components and horizontal, vertical, and diagonal high-frequency components. Then, the spatial frequency correlation coefficient is used to determine the pixel fusion rule applied to the low-frequency images, and the correlation coefficient of the GLF is used to determine the pixel fusion rule applied to the high-frequency images. Finally, the fused image is reconstructed using the inverse wavelet transform. The experimental results indicate that the proposed method is more effective than relevant conventional methods.
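Spatial frequency is the activity measure underlying the low-frequency rule described above. A minimal sketch of the standard spatial frequency of an image block follows; the correlation-based rule the paper builds on top of it is not reproduced.

```python
# Spatial frequency of an image block: row frequency (horizontal
# differences), column frequency (vertical differences), combined by a
# root-sum-square. A common activity measure in pixel-level fusion rules.
import numpy as np

def spatial_frequency(block):
    block = block.astype(float)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```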

3.
Multimodal sensor medical image fusion has been widely reported in recent years, but images fused by existing methods suffer from low contrast and loss of detail. To overcome this problem, this article proposes a new image fusion method based on mutual-structure joint filtering and sparse representation. First, the source image is decomposed into a series of detail images and coarse images by mutual-structure joint filtering. Second, sparse representation is adopted to fuse the coarse images, and local contrast is applied to fuse the detail images. Finally, the fused image is reconstructed by adding the fused coarse images and the fused detail images. Experimental results show that the proposed method performs best at preserving detail and contrast under both subjective and objective evaluations.
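A rough skeleton of the coarse/detail pipeline described above is sketched below. A plain Gaussian filter stands in for mutual-structure joint filtering, and averaging / maximum-absolute selection stand in for the sparse-representation and local-contrast rules, so every rule here is a simplification rather than the paper's method.

```python
# Two-scale fusion skeleton: smooth to get a coarse layer, subtract to
# get a detail layer, fuse each layer separately, then add the fused
# layers back together.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(img1, img2, sigma=2.0):
    c1 = gaussian_filter(img1.astype(float), sigma)   # coarse layers
    c2 = gaussian_filter(img2.astype(float), sigma)
    d1, d2 = img1 - c1, img2 - c2                     # detail layers
    coarse = (c1 + c2) / 2.0                          # placeholder coarse rule
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # placeholder detail rule
    return coarse + detail
```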

4.
Polarization image fusion algorithm based on edge information and its evaluation    Cited by: 1 (self-citations: 1, citations by others: 0)
张晶晶  方勇华 《光电工程》2007,34(11):78-81,87
Polarization remote sensing images usually characterize target polarization properties by intensity, degree of polarization, and angle of polarization. The edge-information-based polarization image fusion algorithm proposed in this paper uses the discrete wavelet transform to decompose the three polarization images into low- and high-frequency parts at different scales, and classifies the wavelet coefficients into edge and non-edge coefficients using window and sub-window statistics in the wavelet domain, thereby extracting edge detail information effectively. In the fusion stage, the average of the low-frequency wavelet coefficients is taken as the fused low-frequency coefficient, while the fused high-frequency coefficients are determined by region-dependent feature selection and by the window variance of the corresponding input images' wavelet coefficients. Simulation results show that the fused image contains richer and more faithful details and expresses the polarization characteristics more fully, while reducing the preprocessing requirements on the source images and giving a better overall visual effect. This demonstrates that the method achieves satisfactory preservation of fine image details and is more effective than other image fusion methods.

5.
Fusing multimodal medical images into a single integrated image provides more detail and richer information, thereby facilitating medical diagnosis and therapy. Most existing multiscale fusion methods ignore the correlations between the decomposition coefficients and lead to incomplete fusion results. A novel contextual hidden Markov model (CHMM) is proposed to construct the statistical model of contourlet coefficients. First, the pair of brain images is decomposed into multiscale, multidirectional, and anisotropic subbands with a contourlet transform. The low-frequency components are then fused with the choose-max rule. For the high-frequency coefficients, the CHMM is learned with the EM algorithm and combined with a novel fuzzy-entropy-based context that builds fuzzy relationships among these coefficients. Finally, the fused brain image is obtained using the inverse contourlet transform. Fusion experiments on several multimodal brain images show the superiority of the proposed method in terms of both visual quality and several widely used objective measures.

6.
Many types of medical images must be fused, as single-modality medical images can only provide limited information because of the imaging principles and the complexity of human organ structures. In this paper, a multimodal medical image fusion method that combines the advantages of the nonsubsampled contourlet transform (NSCT) and fuzzy entropy is proposed to provide a basis for clinical diagnosis and to improve the accuracy of target recognition and the quality of fused images. An image is first decomposed into low- and high-frequency subbands through NSCT. Fusion rules are adopted in accordance with the different characteristics of the low- and high-frequency components. The membership degree of the low-frequency coefficients is calculated, and the fuzzy entropy is computed and used to guide the fusion of these coefficients so as to preserve image details. High-frequency components are fused by maximizing the regional energy. The final fused image is obtained by inverse transformation. Experimental results show that the proposed method achieves a good fusion effect in terms of both subjective visual quality and objective evaluation criteria. The method also obtains high average gradient, standard deviation (SD), and edge-preservation values and effectively retains the details of the fused image, providing an effective reference for doctors in assessing a patient's condition.
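The high-frequency rule mentioned above (maximizing regional energy) can be sketched as a per-pixel selection between two corresponding subbands; the NSCT decomposition itself is not shown, and the 3x3 window size is an assumption.

```python
# Regional-energy selection: at each pixel, keep the coefficient whose
# local window has the larger energy.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_regional_energy(h1, h2, win=3):
    e1 = uniform_filter(h1.astype(float) ** 2, size=win)  # local energy of subband 1
    e2 = uniform_filter(h2.astype(float) ** 2, size=win)  # local energy of subband 2
    return np.where(e1 >= e2, h1, h2)
```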

7.
Evaluation of image fusion performance based on the fuzzy Choquet integral    Cited by: 11 (self-citations: 0, citations by others: 11)
徐宝昌  陈哲 《光电工程》2004,31(11):42-46
To address the problem of evaluating the fusion quality of multi-source images, this paper proposes an evaluation method based on the fuzzy Choquet integral. The method takes the entropy of the fused image, the mean cross-entropy, the root-mean-square cross-entropy, and the average gradient as the set of single-factor evaluation indices; the fuzzy measure and the membership functions of each single-factor index are determined from prior knowledge, and the Choquet integral is applied to aggregate the single-factor indices into one comprehensive evaluation index. The method combines the information of multiple single-factor indices and introduces human knowledge into the assessment of fusion quality, yielding a more complete and objective evaluation. Applied to the evaluation of CCD/SAR image fusion, the results are consistent with both theoretical analysis and visual inspection, demonstrating the effectiveness of the method.
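Two of the single-factor indices named above, information entropy and average gradient, are sketched below for an 8-bit fused image; the fuzzy-measure construction and the Choquet-integral aggregation themselves are not reproduced.

```python
# Information entropy (bits) and average gradient of an 8-bit image,
# two common objective indices for fused-image quality.
import numpy as np

def image_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, trimmed
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences, trimmed
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```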

8.
Objective: To improve the visibility of fused infrared and color visible-light images and better reproduce image contrast and color, a color image fusion algorithm based on multi-level latent low-rank representation (LatLRR) and the HSI color space is proposed. Methods: The visible RGB image is first converted to the HSI color space, separating the H, S, and I channels. The I channel of the visible image and the infrared image are then each decomposed by two-level LatLRR into salient detail parts and base parts. The detail parts of the visible I channel and the infrared image are fused with a nuclear-norm adaptive weighting strategy, and the base parts are fused with Gaussian fuzzy-logic adaptive weighting. Finally, the fused detail and base parts are added to form a new I channel, which is combined with the H and S channels and converted back to RGB space to obtain the fused image. Results: Experiments show that the fused images produced by the proposed algorithm have the least subjective color distortion, the clearest scene details, and the most salient infrared targets, while the objective evaluation indices increase by roughly 1%-24%. Conclusion: The proposed algorithm is effective and clearly improves color image fusion results.
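A rough sketch of the color-handling skeleton described above follows: the intensity channel of the visible RGB image (I = (R+G+B)/3, as in HSI) is fused with the infrared image, and color is restored by rescaling the RGB channels. The LatLRR decomposition and the adaptive weighting rules of the paper are not reproduced; the 0.5/0.5 weights are placeholders.

```python
# Visible/infrared fusion skeleton in intensity space. The fusion rule
# itself is a placeholder weighted average, not the paper's method.
import numpy as np

def fuse_visible_ir(rgb, ir, w_vis=0.5, w_ir=0.5):
    rgb = rgb.astype(float)
    ir = ir.astype(float)
    intensity = rgb.mean(axis=2)              # I channel of HSI
    fused_i = w_vis * intensity + w_ir * ir   # placeholder fusion rule
    scale = fused_i / (intensity + 1e-6)      # preserve hue/saturation ratios
    return np.clip(rgb * scale[..., None], 0, 255).astype(np.uint8)
```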

9.

Due to the limited depth of field of digital single-lens reflex cameras, scene content within a limited distance from the imaging plane remains in focus while objects closer to or further from the point of focus appear blurred (out of focus) in the image. Multi-focus image fusion can be used to reconstruct a fully focused image from two or more partially focused images of the same scene. In this paper, a new Fuzzy Based Hybrid Focus Measure (FBHFM) for multi-focus image fusion is proposed. Selecting an optimal block size is a critical step in multi-focus image fusion, and the Particle Swarm Optimization (PSO) algorithm is used to find the optimal block size for extracting focus-measure features. After the optimal blocks are found, three focus measures — Sum of Modified Laplacian, Gray-Level Variance, and Contrast Visibility — are extracted and combined using a fuzzy technique. The fuzzy hybrid focus values are estimated with the contrast-visibility measure to generate the focused image. Several sets of multi-focus images are used in detailed experiments, and the results are compared with existing techniques such as the Genetic Algorithm (GA), Principal Component Analysis (PCA), the Laplacian pyramid, the discrete wavelet transform (DWT), and aDWT. The proposed method is found to perform well compared with existing methods.
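One of the three focus measures named above, the Sum of Modified Laplacian (SML), is sketched below for a single image block; the PSO block-size search and the fuzzy combination are omitted, and step = 1 is an assumption.

```python
# Sum of Modified Laplacian (SML) focus measure of a block: at each
# interior pixel, sum the absolute second differences in the horizontal
# and vertical directions, then accumulate over the block.
import numpy as np

def sml(block, step=1):
    b = block.astype(float)
    ml = np.zeros_like(b)
    ml[step:-step, step:-step] = (
        np.abs(2 * b[step:-step, step:-step]
               - b[:-2 * step, step:-step] - b[2 * step:, step:-step])
        + np.abs(2 * b[step:-step, step:-step]
                 - b[step:-step, :-2 * step] - b[step:-step, 2 * step:])
    )
    return ml.sum()
```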


10.
A new multispectral image wavelet representation is introduced, based on multiscale fundamental forms. This representation describes the gradient information of multispectral images in a multiresolution framework and is particularly well suited to the fusion and merging of multispectral images. A strategy is described for fusion as well as for merging. Experiments are performed on multispectral images, in which Landsat Thematic Mapper images are fused and merged with SPOT Panchromatic images. The proposed techniques are compared with wavelet-based techniques described in the literature.

11.
An image fusion algorithm using statistical signal processing and fuzzy mathematics    Cited by: 2 (self-citations: 0, citations by others: 2)
曹治国  王文武 《光电工程》2005,32(5):73-75,96
To address pixel-level fusion of multi-sensor images, fuzzy mathematics is introduced into the image fusion model. The model assumes that the ideal fused image contains all the information in the scene; multiplying it by a fuzzy factor and adding random noise describes the scene image acquired by a particular imaging sensor, with each sensor corresponding to its own fuzzy factor and noise. On this basis, an image fusion algorithm that does not rely on multiscale decomposition is proposed. Taking the images acquired by each sensor as input, the EM algorithm from statistical signal processing is used to estimate the noise parameters and fuzzy factor of each sensor, and the fused image is estimated iteratively. Experimental results show that the mutual information and joint entropy of the fused image reach 3.5079 and 24.732, respectively, both better than the fusion quality of the weighted-average, wavelet, and Laplacian fusion methods.

12.
Multimodal medical image fusion merges two medical images to produce a visually enhanced fused image that provides doctors with more accurate and comprehensive pathological information for better diagnosis and treatment. In this article, we present a perceptual multimodal medical image fusion method with a free energy (FE) motivated adaptive pulse-coupled neural network (PCNN), employing the Internal Generative Mechanism (IGM). First, the source images are divided into predicted layers and detail layers with a Bayesian prediction model. Then, to retain features inspired by the human visual system, FE is used to motivate the PCNN when processing the detail layers, and the coefficients with the larger firing times are selected. The predicted layers are fused with an averaging strategy as the activity-level measurement. Finally, the fused image is reconstructed by merging the coefficients of both fused layers. Experimental results show, visually and quantitatively, that the proposed fusion strategy is superior to state-of-the-art methods.

13.
杨晟炜  张志华  孔玲君  王茜 《包装工程》2019,40(11):194-202
Objective: Given that fused images produced by traditional infrared and color visible-light fusion algorithms cannot properly balance sharpness, contrast, and color fidelity, a new color image fusion algorithm based on NSST and the IHS color space is proposed. Methods: The source visible RGB image is first transformed into the IHS color space, in which the inter-channel correlation is minimal, and the intensity and chrominance components are separated. The intensity component of the visible image and the infrared image are then each decomposed by NSST; the low-frequency coefficients are fused with an adaptive Gaussian fuzzy-logic selection scheme, and the high-frequency coefficients with a pixel-wise maximum-absolute-value rule. The selected low- and high-frequency coefficients are inverse-NSST-transformed, and the resulting fused image is used as the new intensity component, which is combined with the original chrominance components and inverse-IHS-transformed to obtain the final fused RGB image. Results: Simulations on two scenes of infrared and color visible-light images, comparing the proposed algorithm with LPT, SWT, and NSCT, show through subjective assessment and objective indices such as IE, AG, SF, and SD that the proposed algorithm yields the clearest scene details, the highest contrast for hidden infrared targets, no obvious color distortion, and the highest fusion quality. Conclusion: Compared with traditional infrared and color visible-light fusion, the proposed algorithm comprehensively improves fusion quality, demonstrating its superiority.

14.
《成像科学杂志》2013,61(7):408-422
Abstract

Image fusion is a challenging area of research with a variety of applications. The process of image fusion collects information from different sources and combines it into a single composite image, which can describe the scene better than any of the source images. In this paper, we propose a method for noisy image fusion in the contourlet domain; the method works equally well for fusing noise-free images. The contourlet transform is a multiscale, multidirectional transform with various aspect ratios, properties that make it more suitable for image fusion than other conventional transforms. In the proposed work, the fusion algorithm is combined with a denoising algorithm to reverse the effect of noise. We use a level-dependent threshold based on the standard deviation of the contourlet coefficients and on the mean and median of the absolute contourlet coefficients. Experimental results demonstrate that the proposed method performs well in the presence of different types of noise. Its performance is compared with principal component analysis and sharp-fusion-based methods, as well as with other fusion methods based on variants of the wavelet transform, such as the dual-tree complex wavelet transform, discrete wavelet transform, lifting wavelet transform, multiwavelet transform, stationary wavelet transform, and pyramid transform, using six standard quantitative quality metrics (entropy, standard deviation, edge strength, fusion factor, sharpness, and peak signal-to-noise ratio). The combined qualitative and quantitative evaluation of the experimental results shows that the proposed method performs better than the other methods.
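The denoising step described above amounts to thresholding subband coefficients with a level-dependent threshold. The abstract only states that the threshold is built from the standard deviation of the coefficients and the mean and median of their absolute values; the particular combination in the sketch below is a placeholder, not the paper's formula.

```python
# Soft thresholding of one subband with a level-dependent threshold.
import numpy as np

def soft_threshold(coeffs, level):
    a = np.abs(coeffs)
    # Placeholder combination of the statistics named in the abstract.
    t = (coeffs.std() + a.mean() + np.median(a)) / (3.0 * (level + 1))
    return np.sign(coeffs) * np.maximum(a - t, 0.0)
```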

15.
Sum-modified-Laplacian (SML) plays an important role in medical image fusion. However, fusion rules based on selecting the larger SML often lead to distortion of the fused image in transform-domain fusion or to information loss in spatial-domain fusion. Combining an average filter and a median filter, a new medical image fusion method based on an improved SML (ISML) is proposed. First, a basic fused image is obtained by ISML, which is used to evaluate the selection map of the medical images. Second, difference images are obtained by subtracting the average of all source medical images. Finally, the basic fused image is refined using the difference images. The algorithm both preserves the information of the source images well and suppresses pixel distortion. Experimental results demonstrate that the proposed method outperforms state-of-the-art medical image fusion methods. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 206-212, 2015

16.
Image fusion can integrate complementary information from multimodal molecular images to provide an informative single result image. To obtain a better fusion effect, this article proposes a novel method based on relative total variation and co-saliency detection (RTVCSD). First, only the gray-scale anatomical image is decomposed into a base layer and a texture layer according to the relative total variation; then, the three-channel color functional image is transformed into the luminance-chrominance (YUV) color space, and the luminance component Y is directly fused with the base layer of the anatomical image by comparing co-saliency information; next, the fused base layer is linearly combined with the texture layer, and the result is combined with the chrominance information U and V of the functional image. Finally, the fused image is obtained by transforming back to the red-green-blue color space. The dataset consists of magnetic resonance imaging (MRI)/positron emission tomography images, MRI/single photon emission computed tomography (SPECT) images, computed tomography/SPECT images, and green fluorescent protein/phase contrast images, with 20 image pairs in each category. Experimental results demonstrate that the proposed RTVCSD method outperforms the nine comparison algorithms in terms of visual effect and objective evaluation. RTVCSD well preserves the texture information of the anatomical image and the metabolism or protein-distribution information of the functional image.

17.
Wide-dynamic-range digital images are increasingly common in professional environments, military photo-interpretation, and x-ray or magnetic resonance medical imagery. However, dynamic-range compression is necessary to exploit such images without incessant image manipulation. A wealth of efficient methods has been developed to tackle this problem on aesthetic grounds; we argue that professional imagery interpretation needs preservation of the original radiometric order. We develop a measure of how efficiently an image uses the 8-bit radiometric channel and find that it correlates well with subjective appraisal. The image undergoes a radiometric order-preserving process to reach a standard radiometric efficiency. Lost information is then reintroduced by adding an artifact-free edge image, with an automatic weighting ensuring a natural-looking image.

18.
Multi-source information can be obtained by fusing infrared images and visible-light images, which carry complementary information. However, existing methods for producing fused images suffer from blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high- and low-frequency subbands with the non-subsampled shearlet transform (NSST). The low-frequency subbands are fused by convolutional sparse representation (CSR), and the high-frequency subbands by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively addresses the difficulty of setting the parameters of the traditional PCNN and improves the performance of sparse representation with detail injection. The results reveal that the proposed method has more advantages than existing mainstream fusion algorithms in terms of visual effect and objective indicators.

19.
Recently, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI) images has become a challenging issue in the medical field, and an optimal fused image is an important aid for detecting disease easily. In this research, we propose an iterative optimization approach for CT and MRI image fusion. The fusion is first formulated as a multilabel optimization problem, with the aim of minimizing the data and smoothness costs during fusion. To optimize the fusion parameters, the Modified Global Flower Pollination Algorithm is proposed. Six sets of fused images from different experiments are evaluated in terms of evaluation metrics such as accuracy, specificity, sensitivity, SD, structural similarity index, feature similarity index, mutual information, fusion quality, and root-mean-square error (RMSE). Compared with state-of-the-art methods, the proposed fusion model provides the best RMSE with higher fusion performance. Experiments on a medical dataset of MRI and CT images show that the proposed method achieves very competitive performance in terms of fusion quality.

20.
王运生  王黎明  聂芬 《包装工程》2018,39(13):208-215
Objective: To preserve the sharp information of the source images in multi-focus grayscale image fusion while effectively suppressing block artifacts and ghosting, a multi-focus grayscale fusion scheme combining three different focus measures with a fuzzy inference system is designed. Methods: First, the spatial frequency (SF), the sum-of-modified-Laplacian (SML), and the sum of gradients (SOG) are computed over pixel neighborhoods as local focus features of the input grayscale images, and pixel correlation is used to improve contrast, yielding the SF, SML, and SOG focus measures. Second, a tri-state focus map is built from the relative strength of SF and SML and, combined with complementary focus information, refined by morphological and median filtering and a consistency check to eliminate narrow gaps and protrusions. Then, fuzzy-logic operators take the SF and SML maps and the SOG of each image pixel as fuzzified inputs, and fuzzy rules followed by a defuzzifier generate the fusion weight for each image. Finally, weighted fusion according to the focus weights produces the final fused image. Results: Experiments show that, compared with current popular fusion schemes, the proposed algorithm has an advantage in fusion performance for grayscale images; its fused images carry better depth-of-field information and avoid block artifacts and ghosting. Conclusion: The proposed algorithm achieves good fusion quality and can effectively improve the resolution of grayscale images, making it of practical value in image processing.

