Similar Literature
20 similar documents found (search time: 687 ms)
1.
The research and development of biomedical imaging techniques requires large amounts of image data from medical image acquisition devices such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and single photon emission computed tomography. Multimodal image fusion is the process of combining information from several images, captured by different devices or at different angles, times, or stages, into a single image that retains the maximum amount of content. This article analyses and compares the performance of existing image fusion techniques for clinical images in the medical field. The techniques compared are simple (pixel-based) fusion, pyramid-based fusion, and transform-based fusion. Four sets of CT and MRI images are used, and the performance of the fused results is measured with seven parameters. The experimental results show that, for the best fused results, four of the parameters (average difference, mean difference, root mean square error, and standard deviation) attain their minimum values, while the remaining three (peak signal-to-noise ratio, entropy, and mutual information) attain their maximum values. From the experimental results it is clear that, of the 14 fusion techniques surveyed, image fusion using the dual-tree complex wavelet transform gives the best fusion result for clinical CT and MRI images. Advantages and limitations of all the techniques are discussed together with their experimental results and relevance. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 193–202, 2014.
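As a hedged illustration (not code from the surveyed paper), the sketch below computes four of the seven evaluation parameters mentioned above: entropy, root mean square error, peak signal-to-noise ratio, and mutual information. The function names and the 8-bit grayscale assumption are mine.

```python
# Minimal sketch of four common fusion evaluation metrics; assumes 8-bit grayscale inputs.
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def rmse(ref, fused):
    """Root mean square error between a reference image and the fused image."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 20.0 * np.log10(peak / rmse(ref, fused))

def mutual_information(a, b, bins=256):
    """Mutual information between two images, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

if __name__ == "__main__":
    ct = np.random.randint(0, 256, (256, 256)).astype(np.uint8)       # stand-in CT slice
    fused = np.clip(ct + np.random.randint(-5, 6, ct.shape), 0, 255).astype(np.uint8)
    print(entropy(fused), rmse(ct, fused), psnr(ct, fused), mutual_information(ct, fused))
```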

2.
Medical treatment and diagnosis require information drawn from several imaging modalities, such as Magnetic Resonance Imaging (MRI) and Computerized Tomography. The information obtained for certain ailments is often incomplete or not visible, and scanner performance is not always consistent. To overcome these issues, image fusion schemes have been developed in the literature. This paper proposes a hybrid algorithm using a fuzzy concept and a novel P-Whale algorithm, called Fuzzy Whale Fusion (FWFusion), for the fusion of multimodal MRI images. Two multimodal MRI images (from T1, T1C, T2, and FLAIR) are taken as the source images and fed to a wavelet transform, which converts each image into four sub-bands. These sub-bands are fused in a weighted function using two newly derived fusion factors, fuzzy fusion and whale fusion. The proposed P-Whale approach combines the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) for the effective selection of the whale fusion factors. The performance of the FWFusion model is compared with that of existing strategies using Mutual Information (MI), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Squared Error (RMSE) as evaluation metrics. From the mean performance evaluation, the proposed approach achieves an MI of 1.714, an RMSE of 1.9, and a PSNR of 27.9472.

3.
张晶晶  方勇华 《光电工程》2007,34(11):78-81,87
Polarization remote-sensing images usually characterize the polarization properties of a target by intensity, degree of polarization, and angle of polarization. The edge-information-based polarization image fusion algorithm proposed in this paper uses the discrete wavelet transform to decompose the three polarization images into low-frequency and high-frequency parts at different scales, and classifies the wavelet coefficients into edge and non-edge coefficients through wavelet-region window and sub-region window statistics, so that edge detail information can be extracted effectively. In the fusion step, the average of the low-frequency wavelet coefficients is taken as the fused low-frequency coefficient, while the fused high-frequency wavelet coefficients are determined by region-feature-based selection methods and the window-region variance of the corresponding input-image wavelet coefficients. Simulation results show that the fused image has richer and more faithful details, expresses the polarization characteristics more fully, and reduces the preprocessing requirements on the source images while giving a better overall visual effect. This demonstrates that the method achieves satisfactory results in preserving fine image details and is more effective than other image fusion methods.
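The following sketch illustrates, under my own simplifications, the general recipe the abstract describes: DWT decomposition with PyWavelets, averaging of the low-frequency coefficients, and selection of high-frequency coefficients by local window variance. The edge/non-edge coefficient classification of the original algorithm is omitted, and the wavelet, level, and window size are assumptions.

```python
# Simplified DWT fusion sketch: average the approximation band, pick the high-frequency
# coefficient whose local window variance is largest. Requires PyWavelets and SciPy.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(x, win=3):
    """Variance of x inside a win x win sliding window."""
    mean = uniform_filter(x, win)
    return uniform_filter(x * x, win) - mean * mean

def dwt_fuse(images, wavelet="db2", level=2, win=3):
    """Fuse co-registered grayscale images (e.g. intensity, DoP and AoP maps)."""
    decomps = [pywt.wavedec2(img.astype(float), wavelet, level=level) for img in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]            # average approximation band
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):                                         # cH, cV, cD at this level
            cands = np.stack([d[lvl][b] for d in decomps])
            var = np.stack([local_variance(c, win) for c in cands])
            pick = np.argmax(var, axis=0)                          # most "active" source wins
            bands.append(np.take_along_axis(cands, pick[None], axis=0)[0])
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```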

4.
The standardization of images derived from different medical modalities must be ensured when image fusion provides essential information and hybrid scanners are not available. The aim of this study was to show that precise standardization of image fusion can be obtained using a special, purpose-built multimodal heart phantom (MHP) that is compatible with all of the applied diagnostic methods. The MHP was designed and constructed according to International Commission on Radiological Protection reports, scanner requirements, and personal experience. Three different types of acquisition were performed: physiological perfusion, myocardial ischaemia, and intestinal artefacts. The measurements were made with different modalities (single photon emission computed tomography (SPECT), positron emission tomography (PET), MRI, CT) as well as with hybrid scanners (PET/CT, SPECT/CT). It was shown that the MHP can be used not only to improve the image fusion standardization protocol but also to verify the proper implementation of the fusion protocol in hybrid scanners.

5.
Image fusion can integrate complementary information from multimodal molecular images to provide a single, informative result image. In order to obtain a better fusion effect, this article proposes a novel method based on relative total variation and co-saliency detection (RTVCSD). First, only the gray-scale anatomical image is decomposed into a base layer and a texture layer according to the relative total variation. The three-channel color functional image is then transformed into the luminance and chroma (YUV) color space, and the luminance component Y is fused directly with the base layer of the anatomical image by comparing their co-saliency information. Next, the fused base layer is linearly combined with the texture layer, and the result is combined with the chroma components U and V of the functional image. Finally, the fused image is obtained by transforming back to the red-green-blue color space. The dataset consists of magnetic resonance imaging (MRI)/positron emission tomography images, MRI/single photon emission computed tomography (SPECT) images, computed tomography/SPECT images, and green fluorescent protein/phase contrast images, with 20 image pairs in each category. Experimental results demonstrate that the proposed RTVCSD method outperforms the nine comparison algorithms in terms of visual effects and objective evaluation, preserving well both the texture information of the anatomical image and the metabolism or protein distribution information of the functional image.
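Below is a rough sketch of the RTVCSD data flow with my own stand-ins: Gaussian smoothing replaces relative total variation for the base/texture split, a local-contrast map replaces co-saliency detection, and BT.601 constants are assumed for the YUV conversion. It is meant only to make the pipeline concrete, not to reproduce the paper's method.

```python
# Sketch of an anatomical/functional fusion pipeline in YUV space with simplified stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def rgb_to_yuv(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def yuv_to_rgb(y, u, v):
    return np.stack([y + 1.140 * v, y - 0.395 * u - 0.581 * v, y + 2.032 * u], axis=-1)

def saliency(x, win=9):
    """Local contrast, used here as a crude stand-in for co-saliency detection."""
    return np.abs(x - uniform_filter(x, win))

def fuse(anatomical_gray, functional_rgb, sigma=3.0):
    ana = anatomical_gray.astype(float)
    base = gaussian_filter(ana, sigma)                 # stand-in for the RTV base layer
    texture = ana - base                               # texture (detail) layer
    y, u, v = rgb_to_yuv(functional_rgb.astype(float))
    keep_base = saliency(base) >= saliency(y)          # pick the more salient pixel
    fused_base = np.where(keep_base, base, y)
    return np.clip(yuv_to_rgb(fused_base + texture, u, v), 0, 255)
```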

6.
Recently, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI) medical images has become a challenging issue in the medical field, as an optimally fused image is an important aid to detecting disease. In this research, we propose an iterative optimization approach for CT and MRI image fusion. The fusion is first formulated as a multilabel optimization problem whose main aim is to minimize the data and smoothness costs during image fusion. To optimize the fusion parameters, a Modified Global Flower Pollination Algorithm is proposed. Six sets of fused images from different experiments are evaluated with several metrics: accuracy, specificity, sensitivity, standard deviation, structural similarity index, feature similarity index, mutual information, fusion quality, and root mean square error (RMSE). Compared with state-of-the-art methods, the proposed fusion model achieves the best RMSE together with higher fusion performance. Experiments on a set of MRI and CT images from a medical data set show that the proposed method delivers very competitive fusion quality.

7.
The sum-modified-Laplacian (SML) plays an important role in medical image fusion. However, fusion rules that simply select the larger SML tend to cause distortion of the fused image in transform-domain fusion or loss of image information in spatial-domain fusion. Combining an average filter and a median filter, a new medical image fusion method based on an improved SML (ISML) is proposed. First, a basic fused image is obtained by ISML and used to evaluate the selection map of the medical images. Second, difference images are obtained by subtracting the average image of all the source medical images. Finally, the basic fused image is refined with the difference images. The algorithm both preserves the information of the source images well and suppresses pixel distortion. Experimental results demonstrate that the proposed method outperforms state-of-the-art medical image fusion methods. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 206–212, 2015
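The sketch below shows the plain sum-modified-Laplacian focus measure and the "larger SML wins" selection map that the paper improves upon; the ISML refinement with average/median filtering and difference images is not reproduced, and the window size and step are my choices.

```python
# Plain SML focus measure plus a pixel-wise "keep the source with larger SML" fusion rule.
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(img, step=1):
    """Modified Laplacian: absolute second differences along rows and columns."""
    i = img.astype(float)
    return (np.abs(2 * i - np.roll(i, step, axis=0) - np.roll(i, -step, axis=0))
            + np.abs(2 * i - np.roll(i, step, axis=1) - np.roll(i, -step, axis=1)))

def sml(img, win=5, step=1):
    """Sum-modified-Laplacian: modified Laplacian accumulated over a win x win window."""
    return uniform_filter(modified_laplacian(img, step), win) * win * win

def sml_selection_fuse(a, b, win=5):
    """Keep, at each pixel, the source whose SML is larger."""
    mask = sml(a, win) >= sml(b, win)
    return np.where(mask, a, b)
```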

8.
Medical image fusion is widely used in various clinical procedures for the precise diagnosis of disease, and fusion procedures are used to assist real-time image-guided surgery. Modern diagnostics demand high accuracy and low computational complexity from these procedures. In the present work, we propose a novel image fusion method based on the stationary wavelet transform (SWT) and texture energy measures (TEMs) to address the poor contrast and high computational complexity of existing fusion outcomes. SWT extracts approximation and detail information from the source images, and TEMs, which can capture various image features, are used to fuse the approximation information; morphological operations are then used to refine the fusion process. Datasets consisting of images of seven patients suffering from neurological disorders are used in this study. Quantitative comparison of the fusion results with the visual information fidelity-based, ratio of spatial frequency error, edge information-based, and structural similarity index-based image fusion quality metrics confirms the superiority of the method, which is also superior to state-of-the-art image fusion methods in terms of average execution time. The proposed work can be extended to the fusion of other imaging modalities, such as fusing a functional image with an anatomical image, and the suitability of the fused images for image analysis tasks remains to be studied.
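As a hedged sketch of the two ingredients named above, the code below performs a one-level stationary wavelet decomposition with PyWavelets and uses a Laws-style texture energy measure (one common choice of TEM) to decide which source's approximation band to keep. The kernel choice, window size, and absolute-max detail rule are my assumptions, the morphological refinement is omitted, and pywt.swt2 requires the image sides to be even.

```python
# One-level SWT fusion sketch: a Laws L5E5 texture energy map drives the approximation-band
# choice; detail bands use an absolute-maximum rule.
import numpy as np
import pywt
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1., 4., 6., 4., 1.])      # Laws "level" kernel
E5 = np.array([-1., -2., 0., 2., 1.])    # Laws "edge" kernel

def texture_energy(img, win=15):
    """Laws-style texture energy: filter with L5E5, then average absolute response locally."""
    response = convolve(img.astype(float), np.outer(L5, E5))
    return uniform_filter(np.abs(response), win)

def swt_tem_fuse(a, b, wavelet="sym4"):
    (ca_a, details_a), = pywt.swt2(a.astype(float), wavelet, level=1)
    (ca_b, details_b), = pywt.swt2(b.astype(float), wavelet, level=1)
    mask = texture_energy(ca_a) >= texture_energy(ca_b)           # TEM drives approximation fusion
    ca_f = np.where(mask, ca_a, ca_b)
    det_f = tuple(np.where(np.abs(da) >= np.abs(db), da, db)      # absolute-max rule for details
                  for da, db in zip(details_a, details_b))
    return pywt.iswt2([(ca_f, det_f)], wavelet)
```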

9.
孔玲君  张志华  曾茜  王茜 《包装工程》2018,39(19):216-222
Objective: Because infrared and visible image fusion based on the non-subsampled shearlet transform (NSST) alone suffers from the loss of fine features, an infrared and visible image fusion algorithm based on NSST and SWT is proposed to improve the quality of the fused image. Methods: The infrared and visible images are first decomposed by NSST, each giving one low-frequency coefficient and several high-frequency coefficients at different directions and scales. The low-frequency coefficients are then further decomposed by SWT into new low-frequency and high-frequency coefficients, which are fused with a linear weighted-average strategy and a maximum-regional-average-energy strategy, respectively; the inverse SWT of these results gives the fused low-frequency coefficients. The NSST high-frequency coefficients are fused with the maximum-regional-average-energy strategy. Finally, the fused image is obtained by the inverse NSST. Results: Simulation experiments show that, compared with the NSST, SWT and NSCT algorithms, the proposed algorithm produces fused images whose infrared targets are more prominent and whose details are clearer in subjective visual terms, and it is also the best on the IE, AG, QAB/F, SF and SD evaluation indices. Conclusion: The fusion results of the proposed algorithm better represent the target information and the detailed texture information of the source images, demonstrating the superiority of the algorithm.

10.
Multimodal sensor medical image fusion has been widely reported in recent years, but the images fused by existing methods often have low contrast and little detail information. To overcome this problem, a new image fusion method based on mutual-structure for joint filtering and sparse representation is proposed in this article. First, the source image is decomposed into a series of detail images and coarse images by mutual-structure for joint filtering. Second, sparse representation is adopted to fuse the coarse images, and local contrast is applied to fuse the detail images. Finally, the fused image is reconstructed by adding the fused coarse images and the fused detail images. Experimental results show that the proposed method gives the best performance in preserving detail and contrast information under both subjective and objective evaluation.

11.
This research proposes an improved hybrid fusion scheme combining the non-subsampled contourlet transform (NSCT) and the stationary wavelet transform (SWT). The source images are first decomposed into different sub-bands using NSCT. A fusion rule based on the locally weighted sum of squared coefficients, with consistency verification, is used to fuse the detail coefficients of the NSCT. The SWT is then employed to decompose the approximation coefficients of the NSCT into further sub-bands, which are fused using the entropy of squared coefficients and the weighted sum-modified Laplacian as fusion rules. The final output is obtained using the inverse NSCT. The proposed scheme is compared with existing fusion schemes both visually and quantitatively. The visual analysis shows that the proposed scheme better retains the important complementary information of the source images, and the quantitative comparison shows improved edge information, clarity, contrast, texture, and brightness in the fused image.

12.
Multimodal medical image fusion merges two medical images to produce a visually enhanced fused image that provides more accurate and comprehensive pathological information to doctors for better diagnosis and treatment. In this article, we present a perceptual multimodal medical image fusion method with a free energy (FE) motivated adaptive pulse coupled neural network (PCNN), employing the Internal Generative Mechanism (IGM). First, the source images are divided into predicted layers and detail layers with a Bayesian prediction model. Then, to retain features inspired by the human visual system, FE is used to motivate the PCNN when processing the detail layers, and the coefficients with the largest firing times are selected. The predicted layers are fused with an averaging strategy as the activity level measurement. Finally, the fused image is reconstructed by merging the coefficients of both fused layers. Experimental results show, both visually and quantitatively, that the proposed fusion strategy is superior to state-of-the-art methods.

13.
Image fusion aims to integrate complementary information from multiple modalities into a single image without distortion or loss of data. It is important in medical imaging, specifically for detecting tumors and identifying diseases. In this article, a fusion method based on the discrete wavelet transform (DWT) and intuitionistic fuzzy sets (IFSs), called DWT-IFS, is proposed. For fusion, all source images are first fused using the DWT with the average, maximum, and entropy fusion rules. An IFS is then applied to the fused image: the images are converted into intuitionistic fuzzy images (IFIs) by selecting an optimum value of the parameter in the membership, non-membership, and hesitation degree functions using entropy. The resulting IFIs are decomposed into blocks, and the corresponding blocks of the images are fused using the intersection and union operations of IFSs. The efficiency of the proposed DWT-IFS fusion method is assessed by comparing it with existing methods, including Averaging (AVG), Principal Component Analysis (PCA), the Laplacian Pyramid Approach (LPA), the Contrast Pyramid Approach (CPA), the Discrete Wavelet Transform (DWT), the Morphological Pyramid Approach (MPA), the Redundant Discrete Wavelet Transform (RDWT), the Contourlet Transform (CONTRA), and Intuitionistic Fuzzy Sets (IFS), using subjective and objective performance evaluation measures. The experimental results reveal that the proposed DWT-IFS fusion method provides higher quality information in terms of physical properties and contrast than the existing methods.

14.
A weighted image fusion algorithm based on guided filtering and fractal dimension
张晓琪  侯世英 《包装工程》2018,39(9):220-227
Objective: To address the problems of current image fusion techniques, which easily lose image information and fail to preserve the edge and texture information of the source images, thereby lowering the resolution and visual quality of the result and preventing a clear, complete, and accurate description of the target. Methods: An image weighted-fusion scheme coupling guided filtering with fractal dimension is proposed. The source images are first preprocessed, with contrast enhancement used to increase their dynamic range. Each image is then decomposed into low-frequency and high-frequency parts by the wavelet transform, and a guided filter is introduced to process the low-frequency and high-frequency components, yielding the corresponding low-frequency and high-frequency weights and preserving the edge information of the images well. Local fractal dimensions are then extracted to capture fine texture features. Finally, a weighted fusion scheme is defined that combines the components according to the low-frequency and high-frequency weights to obtain the final fused image. Results: Experimental data show that, compared with commonly used image fusion algorithms, the proposed algorithm gives better visual fusion results and preserves the true information of the source images more faithfully, and it also has a clear advantage on four quantitative indices: information entropy, mutual information, average gradient, and standard deviation. Conclusion: The proposed algorithm achieves good fusion quality and has reference value for the image processing field.
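Since the abstract above centers on guided filtering, the sketch below implements the standard box-filter guided filter (He et al.) and uses it, as one plausible reading, to turn a crude saliency-based decision map into smooth, edge-aware fusion weights. The wavelet split and the fractal-dimension texture feature of the paper are not reproduced, and the radius and epsilon values are my assumptions.

```python
# Guided filter (box-filter formulation) used to smooth a hard decision map into fusion weights.
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def box(x, r):
    return uniform_filter(x, size=2 * r + 1)

def guided_filter(guide, src, r=8, eps=1e-3):
    I, p = guide.astype(float), src.astype(float)
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)              # local linear model: p ~ a * I + b
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)

def weighted_fuse(img_a, img_b, r=8, eps=1e-3):
    a, b = img_a.astype(float), img_b.astype(float)
    sal_a, sal_b = np.abs(laplace(a)), np.abs(laplace(b))   # crude saliency maps
    raw = (sal_a >= sal_b).astype(float)                    # hard decision map
    w = np.clip(guided_filter(a, raw, r, eps), 0.0, 1.0)    # edge-aware smoothing of the weights
    return w * a + (1.0 - w) * b
```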

15.
A pixel-level CT/MR medical image fusion method based on the wavelet transform is proposed. The two source images are decomposed at multiple scales with the discrete wavelet transform, and different neighborhood features of the wavelet coefficients are used to guide the fusion of the high-frequency and low-frequency components: neighborhood variance guides the low-frequency components and neighborhood energy guides the high-frequency components. Finally, the fused image is reconstructed from its wavelet coefficients. Experiments show that, both in subjective perception and by the objective quantitative criteria of information entropy and average gradient, the method outperforms traditional fusion methods; the fused image effectively combines the information of the CT and MR images and can clearly display both the bone tissue and the soft tissue of the brain at the same time.

16.
龚昌来 《光电工程》2008,35(11):106-110
The advantages and disadvantages of the local-energy-based wavelet image fusion algorithm are analyzed. To address its insufficient consideration of the independent information carried by the pixels within a window, an improved algorithm is proposed. The improved algorithm exploits the phase information of the window's center pixel and combines the local-energy-based selection rule with weighted averaging: when the corresponding high-frequency coefficients of the two wavelet-decomposed source images have the same phase, the maximum-local-energy criterion selects the corresponding high-frequency coefficient of the fused image; when the phases are opposite, the corresponding high-frequency coefficient of the fused image is determined by local-energy-weighted averaging. Experiments confirm the effectiveness of the improved algorithm.
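A small sketch of the combined rule described above, applied to one pair of corresponding high-frequency sub-bands: same-sign ("same phase") coefficients are selected by maximum local energy, while opposite-sign coefficients are replaced by a local-energy-weighted average. The window size and epsilon are my choices.

```python
# Phase-aware fusion rule for a pair of corresponding high-frequency wavelet sub-bands.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(c, win=3):
    """Local energy: windowed mean of squared coefficients."""
    return uniform_filter(c * c, win)

def fuse_highband(ca, cb, win=3, eps=1e-12):
    ca, cb = ca.astype(float), cb.astype(float)
    ea, eb = local_energy(ca, win), local_energy(cb, win)
    same_phase = ca * cb >= 0                                # signs agree ("same phase")
    selected = np.where(ea >= eb, ca, cb)                    # max local energy selection
    weighted = (ea * ca + eb * cb) / (ea + eb + eps)         # energy-weighted average
    return np.where(same_phase, selected, weighted)
```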

17.
An image fusion algorithm based on the Directionlet transform
To improve image fusion results, an image fusion algorithm based on the directionlet transform is proposed. The registered source images are first subjected to coset decomposition with a given generating matrix, yielding a sub-image for each coset; every pair of sub-images is then subtracted to obtain the high-frequency and low-frequency components of the source images, with singular features such as edges and textures contained in the high-frequency components. The low-frequency coefficients are fused by direct averaging, while for the high-frequency components the coefficients with stronger sub-region edge information are selected. Finally, the fused image is obtained by the inverse of the directionlet coset decomposition. Multi-focus image fusion experiments show that, subjectively, the algorithm clearly fuses image features such as edges better and thus preserves the respective details of the left-focused and right-focused images; objectively, in comparisons of performance measures such as entropy, average gradient, standard deviation, and mutual information, the method also outperforms the wavelet transform and other fusion methods.

18.
Multi-source information can be obtained by fusing infrared and visible light images, which carry complementary information. However, existing methods of producing fused images suffer from blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency sub-bands with the non-subsampled shearlet transform (NSST). The low-frequency sub-bands are fused by convolutional sparse representation (CSR), and the high-frequency sub-bands are fused by an improved pulse coupled neural network (IPCNN) algorithm, which effectively avoids the difficulty of setting the parameters of the traditional PCNN and improves the detail-injection performance of the sparse representation. The results show that the proposed method has clear advantages over existing mainstream fusion algorithms in terms of both visual effects and objective indicators.

19.
Detail enhancement algorithms are important for raw infrared images in order to improve their overall contrast and highlight the important information in them. To address the shortcomings of current algorithms such as GF&DDE, an improved adaptive detail enhancement algorithm for infrared images based on a guided image filter is proposed in this paper. It chooses the threshold for the base layer adaptively from the histogram statistics and adjusts the mapping range of the histogram according to the dynamic range of the image, while the detail layer is handled by a simpler adaptive gain control method to achieve a good detail enhancement effect. Finally, the base layer and the detail layer are merged according to the approximate proportions of the background and the details. Experimental results show that the proposed algorithm can adaptively and efficiently enhance images with different dynamic ranges in different scenarios, and it also has high real-time performance.
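The sketch below walks through a simplified version of the pipeline described above: split a raw infrared frame into base and detail layers (a box filter stands in for the guided filter), compress the base layer with a plateau-limited histogram equalization whose threshold is taken from the histogram statistics, apply a gain to the detail layer, and merge. The specific threshold and gain formulas are my assumptions, not the paper's.

```python
# Simplified infrared detail enhancement: base/detail split, plateau-limited histogram
# equalization of the base layer, fixed gain on the detail layer, then merge to 8 bits.
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_ir(raw, win=9, detail_gain=2.5, out_levels=256):
    x = raw.astype(float)
    base = uniform_filter(x, win)                     # stand-in for the guided-filter base layer
    detail = x - base                                 # detail layer

    hist, edges = np.histogram(base, bins=1024)
    plateau = max(1, int(np.mean(hist[hist > 0])))    # adaptive plateau threshold from histogram stats
    clipped = np.minimum(hist, plateau)               # limit dominant background bins
    cdf = np.cumsum(clipped) / np.sum(clipped)
    base_eq = np.interp(base, edges[:-1], cdf) * (out_levels - 1)

    enhanced = base_eq + detail_gain * detail         # simple gain control on the detail layer
    return np.clip(enhanced, 0, out_levels - 1).astype(np.uint8)
```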

20.
Infrared and visible image fusion based on feature energy weighting
Direct fusion of infrared and visible images currently suffers from the difficulty of deciding which infrared targets to retain and of extracting scene information. Exploiting the multiscale, multidirectional and shift-invariant properties of the non-subsampled contourlet transform, this paper proposes an infrared and visible image fusion method based on the non-subsampled contourlet transform (NSCT). The source images are first decomposed; the low-frequency sub-bands are then fused by weighting based on a region-based feature pixel energy, while the high-frequency sub-bands are fused directly by the maximum-variance rule. Fusion experiments with the algorithm are reported together with an evaluation of fusion quality. The results show that the proposed NSCT-based image fusion algorithm markedly improves the preservation of image detail and the amount of information in the fused image.
