Similar Documents
20 similar documents found
1.
Fusion of multimodal imaging data supports medical experts with ample information for better disease diagnosis and further clinical investigations. Recently, sparse representation (SR)-based fusion algorithms have been gaining importance for their high performance. Building a compact, discriminative dictionary with reduced computational effort is a major challenge for these algorithms. Addressing this key issue, we propose an adaptive dictionary learning approach for fusion of multimodal medical images. The proposed approach consists of three steps. First, zero-informative patches of the source images are discarded by variance computation. Second, the structural information of the remaining image patches is evaluated using modified spatial frequency (MSF). Finally, a selection rule is employed to separate the useful informative patches of the source images for dictionary learning. At the fusion step, the batch-OMP algorithm is used to estimate the sparse coefficients. A novel fusion rule that measures the activity level in both the spatial domain and the transform domain is adopted to reconstruct the fused image from the sparse vectors and the trained dictionary. Experimental results on various medical image pairs and clinical data sets reveal that the proposed fusion algorithm gives better visual quality and competes with existing methodologies both visually and quantitatively.
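The patch-screening stage described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function names, the variance threshold, and the exact diagonal terms of the MSF are assumptions.

```python
import numpy as np

def patch_variance(patch):
    # Variance test: flat ("zero-informative") patches score near zero.
    return float(np.var(patch))

def modified_spatial_frequency(patch):
    # MSF augments the classic row/column spatial frequency with two
    # diagonal difference terms (exact weighting is an assumption here).
    p = patch.astype(float)
    rf = np.mean(np.diff(p, axis=1) ** 2)          # row frequency
    cf = np.mean(np.diff(p, axis=0) ** 2)          # column frequency
    d1 = np.mean((p[1:, 1:] - p[:-1, :-1]) ** 2)   # main-diagonal term
    d2 = np.mean((p[1:, :-1] - p[:-1, 1:]) ** 2)   # anti-diagonal term
    return float(np.sqrt(rf + cf + d1 + d2))

def select_informative_patches(patches, var_thresh=1e-3):
    # Keep patches that pass the variance test, ranked by MSF, so the
    # dictionary is trained only on structurally informative patches.
    kept = [p for p in patches if patch_variance(p) > var_thresh]
    return sorted(kept, key=modified_spatial_frequency, reverse=True)
```

A flat patch is discarded while an edge patch survives and is ranked by its MSF score.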

2.
Multimodal face recognition using image fusion techniques
Multimodal face recognition combining visible-light and infrared thermal images is implemented using image fusion techniques, and fusion methods for the two image types are studied at both the pixel level and the feature level. At the pixel level, a fusion method based on wavelet decomposition is proposed, achieving effective fusion of the two images. At the feature level, the top 50% of features with the best classification performance are extracted from each of the two recognition methods and fused. Experiments show that after both pixel-level and feature-level fusion, recognition accuracy improves substantially over either single image, and feature-level fusion clearly outperforms pixel-level fusion. Multimodal face recognition based on image fusion thus effectively increases the information content of the images and is an effective way to improve face recognition accuracy.
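The pixel-level wavelet fusion step can be sketched with a one-level Haar decomposition. The abstract does not specify the coefficient fusion rule, so averaging the approximation band and taking maximum-absolute detail coefficients are assumptions of this sketch.

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar decomposition (image sides must be even).
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2.
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def wavelet_fuse(visible, infrared):
    # Average the approximation bands; keep max-|.| detail coefficients.
    cv, cr = haar2(visible), haar2(infrared)
    ll = (cv[0] + cr[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(r), v, r)
               for v, r in zip(cv[1:], cr[1:])]
    return ihaar2(ll, *details)
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check on the transform pair.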

3.
A new ICA-domain image fusion algorithm
Targeting the characteristics of infrared and visible-light images, and building on the ICA-domain image fusion method proposed by Mitianoudis, this paper presents an improved ICA-domain multimodal image fusion algorithm. Following Mitianoudis's method, the images are linearly transformed with trained basis functions, and in the transform domain each image is segmented into different regions. Active regions are fused with a maximum-absolute-value rule, while inactive regions are fused with different rules according to the region segmentation of the target-sensor image; finally, the fused image is obtained by the inverse transform. Experimental results demonstrate the effectiveness of the proposed method.

4.
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Although numerous medical image fusion methods have been proposed, most approaches do not exploit the low-rank nature of the matrix formed by a medical image, which usually leads to distortion and information loss in the fused image. These methods also often lack universality when dealing with different kinds of medical images. In this article, we propose a novel medical image fusion method that overcomes the aforementioned issues with the aid of low-rank matrix approximation under a nuclear norm minimization (NNM) constraint. The workflow of our method is as follows: first, nonlocal similar patches across the medical image are found by block matching for each local patch in the source images. Second, a fused matrix is formed by stacking the shared nonlocal similar patches, and low-rank matrix approximation under nuclear norm minimization is used to recover the low-rank structure of the fused matrix. Finally, the fused image is obtained by aggregating all the fused patches. Experimental results show that the proposed method is superior to other methods in both subjective visual performance and objective criteria. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 310–316, 2015
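The NNM step can be sketched with singular value thresholding, the proximal operator of the nuclear norm. The block matching and patch aggregation of the paper are omitted, and the threshold `tau` and the toy matrix are illustrative assumptions.

```python
import numpy as np

def svt(matrix, tau):
    # Singular value thresholding: the proximal operator of the nuclear
    # norm. Shrinking singular values toward zero drives the matrix
    # toward low rank while keeping its dominant structure.
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return u @ np.diag(s_shrunk) @ vt

# Toy example: a rank-1 "patch matrix" plus small noise is recovered
# as a rank-1 matrix, since the noise singular values fall below tau.
rng = np.random.default_rng(0)
base = np.outer(np.ones(8), np.linspace(0.0, 1.0, 8))   # rank-1 structure
noisy = base + 0.01 * rng.standard_normal((8, 8))
recovered = svt(noisy, tau=0.1)
```

Stacking truly similar patches makes the clean matrix nearly low rank, which is exactly what makes the shrinkage effective.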

5.
Recently, sparse representation classification (SRC) and Fisher discrimination dictionary learning (FDDL) have emerged as important methods for vehicle classification. In this paper, inspired by recent breakthroughs in discrimination dictionary learning and multi-task joint covariate selection, we address vehicle classification in real-world applications by formulating it as a multi-task joint sparse representation model based on Fisher discrimination dictionary learning, merging the strength of multiple features from multiple sensors. To improve classification accuracy in complex scenes, we develop a new method, called multi-task joint sparse representation classification based on Fisher discrimination dictionary learning, for vehicle classification. In the proposed method, acoustic and seismic sensor data sets measuring the same physical event are captured simultaneously by multiple heterogeneous sensors, and multi-dimensional frequency-spectrum features are extracted from the sensor data using Mel frequency cepstral coefficients (MFCC). Moreover, we extend the model to handle sparse environmental noise. We experimentally demonstrate the benefits of joint information fusion based on Fisher discrimination dictionary learning across different sensors in vehicle classification tasks.

6.
For the last two decades, physicians and clinical experts have used a single imaging modality to identify normal and abnormal structures of the human body. However, medical experts are often unable to accurately analyze and examine a single imaging modality because of its limited information. To overcome this problem, a multimodal approach is adopted to increase the qualitative and quantitative medical information, helping doctors diagnose diseases in their early stages. In the proposed method, a Multi-resolution Rigid Registration (MRR) technique is used for multimodal image registration, while the Discrete Wavelet Transform (DWT) along with Principal Component Averaging (PCAv) is used for image fusion. The proposed MRR method provides more accurate results than Single Rigid Registration (SRR), while the proposed DWT-PCAv fusion process adds more constructive information with less computational time. The method is tested on CT and MRI brain imaging modalities from the HARVARD dataset, and its fusion results are compared with existing fusion techniques. Quality assessment metrics such as Mutual Information (MI), Normalized Cross-Correlation (NCC), and Feature Mutual Information (FMI) are computed for statistical comparison. The proposed methodology provides more accurate results, better image quality, and valuable information for medical diagnosis.
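The principal-component-averaging idea can be sketched as follows: weight each source by the leading eigenvector of the 2x2 covariance of their intensities. In the paper this is combined with the DWT (applied per subband); this single-level version and the function name are assumptions of the sketch.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    # Principal component averaging: the leading eigenvector of the 2x2
    # covariance of the two sources gives normalized fusion weights, so
    # the more "energetic" source contributes more to the fused image.
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # leading eigenvector
    w = v / v.sum()                             # weights sum to 1
    return w[0] * img_a + w[1] * img_b
```

Because the weights sum to one, fusing two identical images returns the image unchanged.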

7.
Fusing multimodal medical images into a single integrated image provides more detail and richer information, thereby facilitating medical diagnosis and therapy. Most existing multiscale fusion methods ignore the correlations between decomposition coefficients, leading to incomplete fusion results. A novel contextual hidden Markov model (CHMM) is proposed to construct a statistical model of the contourlet coefficients. First, the paired brain images are decomposed into multiscale, multidirectional, anisotropic subbands with a contourlet transform. The low-frequency components are then fused with the choose-max rule. For the high-frequency coefficients, the CHMM is learned with the EM algorithm and combined with a novel fuzzy-entropy-based context that builds fuzzy relationships among these coefficients. Finally, the fused brain image is obtained with the inverse contourlet transform. Fusion experiments on several multimodal brain images show the superiority of the proposed method in terms of both visual quality and several widely used objective measures.

8.
There are several medical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), each giving sophisticated characteristics of the region to be imaged. This paper proposes a curvelet-based approach for fusing MR and CT images to obtain images with as much detail as possible, for the sake of medical diagnosis. The approach applies the additive wavelet transform (AWT) to both images and segments their detail planes into small overlapping tiles. The ridgelet transform is then applied to each tile, and the fusion is performed on the ridgelet transforms of the tiles. Simulation results show the superiority of the proposed curvelet fusion approach over traditional fusion techniques such as the multiresolution discrete wavelet transform (DWT) technique and the principal component analysis (PCA) technique. The fusion of MR and CT images in the presence of noise is also studied, and the results reveal that, unlike the DWT fusion technique, the proposed curvelet fusion approach does not require denoising.

9.
Alzheimer's disease is a chronic brain condition that takes a toll on memory and the ability to perform even the most basic tasks. With no cure currently available, it is critical to detect the onset of Alzheimer's disease so that steps can be taken to limit its progression. We used three distinct neuroanatomical computational methodologies, namely 3D-Subject, 3D-Patches, and 3D-Slices, to construct a multimodal multi-class deep learning model for three-class and two-class Alzheimer's classification using T1w-MRI and AV-45 PET scans obtained from the ADNI database. Patches of various sizes were created using a patch-extraction algorithm built with the torch package, yielding separate datasets of patch size 32, 40, 48, 56, 64, 72, 80, and 88. In addition, slices were produced from the images by uniform slicing, subset slicing, or interpolation zoom, then joined back into 3D images of varying depth (8, 16, 24, 32, 40, 48, 56, and 64) for the slice-based technique. Using T1w-MRI and AV45-PET scans, our multimodal multi-class Ensembled Volumetric ConvNet framework achieved 93.01% accuracy for AD versus NC versus MCI (the highest accuracy achieved using multiple modalities, to our knowledge). The 3D-Subject-based approach achieved 93.01% classification accuracy, outperforming the patch-based approach (89.55%) and the slice-based approach (89.37%). With the 3D-Patch-based feature extraction, larger patches (80, 88) reached accuracy above 89%, medium-sized patches (56, 64, and 72) ranged from 83% to 88%, and small patches (32, 40, and 48) performed worst, ranging from 57% to 80%.
Among the three algorithms created for the 3D-Slice-based approach, interpolation zoom outperformed uniform slicing and subset slicing, obtaining 89.37% accuracy versus 88.35% and 82.83%, respectively. Link to GitHub code: https://github.com/ngoenka04/Alzheimer-Detection .

10.
Image fusion can integrate complementary information from multimodal molecular images to provide a single informative result image. To obtain a better fusion effect, this article proposes a novel method based on relative total variation and co-saliency detection (RTVCSD). First, only the gray-scale anatomical image is decomposed into a base layer and a texture layer according to the relative total variation; then the three-channel color functional image is transformed into the luminance-chroma (YUV) color space, and the luminance component Y is fused directly with the base layer of the anatomical image by comparing co-saliency information; next, the fused base layer is linearly combined with the texture layer, and the result is combined with the chroma components U and V of the functional image. Finally, the fused image is obtained by transforming back to the red-green-blue color space. The dataset consists of magnetic resonance imaging (MRI)/positron emission tomography pairs, MRI/single photon emission computed tomography (SPECT) pairs, computed tomography/SPECT pairs, and green fluorescent protein/phase contrast pairs, with 20 image pairs per category. Experimental results demonstrate that RTVCSD outperforms the nine comparison algorithms in terms of both visual effect and objective evaluation, preserving well the texture information of the anatomical image and the metabolism or protein-distribution information of the functional image.
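The color-space handling in this pipeline can be sketched as follows. The BT.601 YUV matrices are an assumption (the paper may use another variant), and the simple luminance average stands in for the co-saliency rule, which is not reproduced here.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumed variant of the YUV transform).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    # rgb: (..., 3) array; returns (..., 3) YUV.
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):
    # Exact inverse via the matrix inverse.
    return yuv @ np.linalg.inv(RGB2YUV).T

def fuse_functional_anatomical(functional_rgb, anatomical_gray):
    # Fuse only in the luminance channel, keeping the functional image's
    # chroma (U, V) intact; the average here is a placeholder for the
    # paper's co-saliency fusion rule.
    yuv = rgb_to_yuv(functional_rgb)
    yuv[..., 0] = 0.5 * (yuv[..., 0] + anatomical_gray)
    return yuv_to_rgb(yuv)
```

Keeping U and V untouched is what preserves the functional image's color-coded metabolic information.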

11.
Multimodal medical image fusion merges two medical images to produce a visually enhanced fused image that provides doctors with more accurate and comprehensive pathological information for better diagnosis and treatment. In this article, we present a perceptual multimodal medical image fusion method with a free energy (FE) motivated adaptive pulse coupled neural network (PCNN), employing the Internal Generative Mechanism (IGM). First, the source images are divided into predicted layers and detail layers with a Bayesian prediction model. Then, to retain features inspired by the human visual system, FE is used to motivate the PCNN in processing the detail layers, and large firing times are selected as coefficients. The predicted layers are fused with the averaging strategy as the activity-level measurement. Finally, the fused image is reconstructed by merging the coefficients of both fused layers. Experimental results show, visually and quantitatively, that the proposed fusion strategy is superior to state-of-the-art methods.

12.
Source images are frequently corrupted by noise before fusion, which degrades the quality of the fused image and hampers subsequent observation. At present, however, most traditional medical image fusion schemes cannot operate in noisy environments, and existing fusion methods scarcely exploit the dependencies between source images. In this research, a novel fusion algorithm based on the statistical properties of wavelet coefficients is proposed that performs fusion and denoising simultaneously. In the proposed algorithm, new saliency and matching measures are defined from two distributions: the marginal distribution of a single wavelet coefficient, fitted by the generalized Gaussian distribution, and the joint distribution of dual source wavelet coefficients, modeled by the anisotropic bivariate Laplacian model. Additionally, bivariate shrinkage is introduced to develop a noise-robust fusion method, and the moment-based parameter estimation used in the fusion scheme is also exploited for denoising, achieving consistency between fusion and denoising. Experiments demonstrate that the proposed algorithm performs very well on both noisy and noise-free images from multimodal medical datasets (computerized tomography, magnetic resonance imaging, magnetic resonance angiography, etc.), outperforming conventional methods in terms of both fusion quality and noise reduction.
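The bivariate shrinkage rule mentioned here has a well-known closed form (Sendur and Selesnick) that shrinks a wavelet coefficient jointly with its parent. This sketch shows only that shrinkage step; the moment-based estimation of `sigma_n` and `sigma` from the subbands is omitted and the values below are assumptions.

```python
import numpy as np

def bivariate_shrink(y, parent, sigma_n, sigma):
    # Joint shrinkage of a wavelet coefficient y with its parent:
    # w = max(sqrt(y^2 + parent^2) - sqrt(3)*sigma_n^2/sigma, 0)
    #     / sqrt(y^2 + parent^2) * y
    # Small coefficients with small parents are suppressed (noise);
    # large coefficients are only mildly attenuated (signal).
    mag = np.sqrt(y ** 2 + parent ** 2)
    thresh = np.sqrt(3.0) * sigma_n ** 2 / sigma
    gain = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)
    return gain * y
```

The joint magnitude is what lets a strong parent "rescue" a weak child coefficient, exploiting inter-scale dependency.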

13.
With the development of deep learning and Convolutional Neural Networks (CNNs), the accuracy of automatic food recognition based on visual data has significantly improved. Some studies have shown that the deeper the model, the higher the accuracy; however, very deep neural networks suffer from overfitting and consume huge computing resources. In this paper, a new classification scheme is proposed for automatic food-ingredient recognition based on deep learning. We construct a combinational convolutional neural network (CBNet) with a subnet-merging technique. First, two different neural networks are used to learn features of interest. Then, a well-designed feature fusion component aggregates the features from the subnetworks, extracting richer and more precise features for image classification. To learn more complementary features, corresponding fusion strategies are also proposed, including auxiliary classifiers and hyperparameter settings. Finally, CBNet based on the well-known VGGNet, ResNet, and DenseNet is evaluated on a dataset of 41 major categories of food ingredients with 100 images per category. Theoretical analysis and experimental results demonstrate that CBNet achieves promising accuracy for multi-class classification and improves on the performance of plain convolutional neural networks.

14.
The standardization of images derived from different medical modalities should be ensured when image fusion brings essential information and hybrid scanners are not available. The aim of this study was to show that precise image fusion standardization can be obtained using a special multimodal heart phantom (MHP) compatible with all the applied diagnostic methods. The MHP was designed and constructed according to International Commission on Radiological Protection reports, scanner requirements, and personal experience. Three types of acquisition were performed: physiological perfusion, myocardial ischaemia, and intestinal artefacts. Measurements were made with different modalities (single photon emission computed tomography (SPECT), positron emission tomography (PET), MRI, CT) as well as hybrid scanners (PET/CT, SPECT/CT). It was shown that the MHP can be used not only to improve the image fusion standardization protocol but also to verify proper implementation of the fusion protocol in hybrid scanners.

15.
Super-resolution reconstruction with enhanced sparse coding
李民  程建  乐翔  罗环敏  刘小芳 《光电工程》2011,38(1):127-133
This paper proposes a super-resolution method based on sparse dictionary coding. The method effectively establishes sparse associations between the high-frequency patches of high- and low-resolution images and uses these associations as prior knowledge to guide sparse-dictionary-based super-resolution reconstruction. Compared with an overcomplete dictionary, a sparse dictionary expresses this prior knowledge more compactly and efficiently. During dictionary training, high-frequency information is chosen as the feature of the high-resolution images, establishing the sparse associations between high- and low-resolution image patches more effectively, and the required training sam…
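The sparse-coding step common to this abstract (and the batch-OMP step of entry 1) can be sketched with plain orthogonal matching pursuit. This is a textbook OMP over a toy dictionary, not the authors' trained sparse dictionary; the function name and stopping rule are assumptions.

```python
import numpy as np

def omp(dictionary, signal, n_nonzero):
    # Orthogonal matching pursuit: greedily pick the atom most correlated
    # with the residual, then re-fit all chosen atoms by least squares.
    residual = signal.astype(float).copy()
    support = []
    coef = np.zeros(dictionary.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        if k not in support:
            support.append(k)
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ sol
    coef[support] = sol
    return coef
```

With an orthonormal dictionary and a truly sparse signal, OMP recovers the exact coefficients in as many steps as there are nonzeros.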

16.
Much research decomposes an image into feature maps at different levels, but structures and edges may not be appropriately separated, causing loss of image detail during fusion. We therefore design a robust method for multimodal medical image fusion using the spectral total variation transform (STVT). In our method, the source images are first decomposed into a series of texture signatures (referred to as deviation components) and base components via the STVT algorithm. The deviation components are then fused using a local structural patch measurement (LSPM), and the base components are merged using a spatial frequency (SF) dual-channel spiking cortical model (SF-DCSCM), in which the SF of the base components serves as the stimulus activating the DCSCM. Finally, the fused image is reconstructed from the restored components by the inverse STVT. Experimental results suggest that the proposed scheme achieves promising results and is competitive with some state-of-the-art methods.

17.
With the critical innovations of using submillimeter transducers and multiband analysis of the first-arrival pulse, a high-resolution ultrasonic transmission tomography (HUTT) system has been built and tested to produce multiband images of biological organs at submillimeter resolution. Since the resulting multiband images consist of frequency-dependent attenuation coefficients (relative to a water reference) of the transmitted ultrasound pulses, their contrast and sharpness depend on the specific frequency band(s) used for image formation. Even though this multiband representation provides a powerful tool for soft-tissue differentiation, it hinders quick visual inspection and interpretation of the image contents. To facilitate the visual interpretation of HUTT multiband images, this article presents an efficient image fusion methodology called local principal component analysis with structure tensor (LPCA-ST). LPCA is a feasible tool for fusing spectral data, since it uses the principal components of the spectral data as a fusion-weighting vector for each local area. Nonetheless, the LPCA-fused image often suffers from oversmoothing because of redundancy in the spectral data. To prevent this, we propose the structure tensor as the metric for selecting the most informative bands for subsequent LPCA fusion. Our preliminary studies show that the contrast of the LPCA-fused image improves dramatically when only the multiband images with the highest structure-tensor values are used in the LPCA fusion. This is achieved in 3D without increasing the computational complexity of the fusion process. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 277–282, 2009
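The band-selection metric can be sketched via the trace of the structure tensor, which sums squared gradient energy. The paper computes the tensor per local area; this global version is a simplification, and the function name is an assumption.

```python
import numpy as np

def structure_tensor_energy(img):
    # Trace of the summed 2x2 structure tensor J = [[sum gx^2, sum gx*gy],
    # [sum gx*gy, sum gy^2]]: a simple informativeness score for ranking
    # bands before LPCA fusion. Flat bands score zero; bands with strong
    # edges score high.
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx * gx) + np.sum(gy * gy))
```

Ranking bands by this score and keeping the top few is the selection rule sketched from the abstract.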

18.
Medical image fusion is widely used in various clinical procedures for the precise diagnosis of disease, and fusion procedures assist real-time image-guided surgery. These procedures demand high accuracy and low computational complexity in modern diagnostics. In the present work, we propose a novel image fusion method based on the stationary wavelet transform (SWT) and texture energy measures (TEMs) to address the poor contrast and high computational complexity of existing fusion outcomes. SWT extracts approximation and detail information from the source images; TEMs, which can capture various image features, are used to fuse the approximation information. In addition, morphological operations are used to refine the fusion process. Datasets consisting of images of seven patients suffering from neurological disorders are used in this study. Quantitative comparison of the fusion results with the visual-information-fidelity, ratio-of-spatial-frequency-error, edge-information, and structural-similarity-index fusion quality metrics proved the method's superiority. The proposed method is also superior to state-of-the-art image fusion methods in average execution time. The proposed work can be extended to the fusion of other imaging modalities, such as fusing a functional image with an anatomical image; the suitability of the fused images for image analysis tasks remains to be studied.

19.
Application and analysis of different information fusion methods in structural damage identification
郭惠勇  张陵  蒋健 《工程力学》2006,23(1):28-32,37
In damage detection for engineering structures, different information fusion methods and schemes often differ in their sensitivity to structural damage, their computational complexity, and their conditions of applicability. To address these issues, a functional information fusion model for structural damage identification is described, on the basis of which several fusion methods are applied in numerical simulations and analysis. The simulation results show that multi-location structural damage identification using information fusion yields more accurate and more complete estimates and decisions than any single information source, and that the choice of fusion algorithm often depends on the object of study and the practical conditions.
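Decision-level fusion of multiple damage indicators is often done with Dempster-Shafer evidence combination. The abstract does not name its specific algorithms, so this sketch of Dempster's rule is one common illustration, not the paper's method; the focal elements and masses below are invented for the example.

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination for two basic probability assignments
    # (dicts mapping frozenset focal elements to masses). Mass assigned to
    # contradictory pairs (empty intersection) is the conflict, and the
    # remaining masses are renormalized by 1 - conflict.
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

Two weakly confident sensors that agree produce a combined belief stronger than either alone, which is the "more complete estimate" effect described above.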

20.