Similar Articles (7 results found)
1.
This research proposes an improved hybrid fusion scheme combining the non-subsampled contourlet transform (NSCT) and the stationary wavelet transform (SWT). First, the source images are decomposed into different sub-bands using NSCT. A fusion rule based on the locally weighted sum of squares of the coefficients, with consistency verification, is used to fuse the NSCT detail coefficients. The SWT is then employed to decompose the NSCT approximation coefficients into further sub-bands, where the entropy of the squared coefficients and the weighted sum-modified Laplacian serve as the fusion rules. The final output is obtained by the inverse NSCT. The proposed scheme is compared with existing fusion schemes both visually and quantitatively. Visual analysis shows that it better retains the important complementary information of the source images, and quantitative comparison shows improved edge information, clarity, contrast, texture, and brightness in the fused image.
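The detail-coefficient rule described above — keep the coefficient with the larger local sum of squares, then majority-filter the decision map for consistency — can be sketched in plain NumPy. The 3×3 window and the unweighted window sum are illustrative assumptions, not the paper's exact weighting:

```python
import numpy as np

def window_sum(a, win=3):
    """Sum of a over a win x win neighborhood (edge-replicated borders)."""
    pad = win // 2
    p = np.pad(a, pad, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def fuse_detail(c1, c2, win=3):
    """Max-energy rule: keep the coefficient whose local sum of squares is
    larger, then apply consistency verification (a majority vote over the
    binary decision map, flipping isolated decisions)."""
    mask = window_sum(c1 ** 2, win) >= window_sum(c2 ** 2, win)
    mask = window_sum(mask.astype(float), win) > (win * win) / 2
    return np.where(mask, c1, c2)
```

A strong coefficient in one sub-band wins locally; an isolated "winner" surrounded by decisions for the other image is overruled by the majority filter.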

2.
Medical image fusion is widely used in clinical procedures for the precise diagnosis of disease, and fusion procedures are used to assist real-time image-guided surgery. These procedures demand high accuracy and low computational complexity in modern diagnostics. We propose a novel image fusion method based on the stationary wavelet transform (SWT) and texture energy measures (TEMs) to address the poor contrast and high computational complexity of existing fusion approaches. SWT extracts approximation and detail information from the source images; TEMs, which can capture various features of an image, are used to fuse the approximation information, and morphological operations refine the fusion process. Datasets consisting of images of seven patients suffering from neurological disorders are used in this study. Quantitative comparison of the fusion results using the visual information fidelity-based, ratio of spatial frequency error, edge information-based, and structural similarity index-based image fusion quality metrics demonstrates the superiority of the proposed method, which also outperforms state-of-the-art image fusion methods in average execution time. The work can be extended to other modality combinations, such as fusing a functional image with an anatomical image; the suitability of the fused images for image analysis tasks remains to be studied.
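The abstract does not say which texture energy measures are used; a common choice is Laws' kernels, in which 2-D masks are outer products of short 1-D filters and texture energy is the local mean of the absolute filter response. A NumPy sketch under that assumption (kernel set and window size are illustrative):

```python
import numpy as np

# Laws' classic 1-D kernels: level, edge, spot. 2-D masks come from
# applying one kernel along rows and another along columns.
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)

def sep_filter(img, kr, kc):
    """Separable 2-D filtering: row kernel kr, then column kernel kc."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kr, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kc, mode="same"), 0, out)

def texture_energy(img, kr, kc, win=7):
    """Texture energy map: local mean of |filter response| over win x win."""
    resp = np.abs(sep_filter(img, kr, kc))
    pad = win // 2
    p = np.pad(resp, pad, mode="edge")
    e = np.zeros(resp.shape)
    for dy in range(win):
        for dx in range(win):
            e += p[dy:dy + resp.shape[0], dx:dx + resp.shape[1]]
    return e / (win * win)
```

On a flat region the edge kernel E5 responds with zero (its taps sum to zero), so the energy map is low; textured regions yield high energy and can drive the fusion of the approximation coefficients.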

3.
卓颉, 张怡, 刘雄厚, 刘宗伟. Technical Acoustics (《声学技术》), 2015, 34(2): 115-120
This paper proposes a data compression method that combines an integer wavelet transform with improved thresholding and dictionary-based LZW (Lempel-Ziv-Welch) coding, aiming to reduce the volume of underwater acoustic data to be transmitted while preserving fidelity as far as possible. In the compression process, the underwater acoustic data first undergo an integer wavelet transform; the resulting high-frequency coefficients are then processed with an improved wavelet thresholding algorithm and threshold function, which raises the compression ratio and signal-to-noise ratio and reduces the error. Finally, the processed coefficients are encoded with LZW, further improving the compression. The corresponding algorithm flow is given. Compression results on real ship-radiated noise data show that the method effectively improves the signal-to-noise ratio, reduces signal distortion, and achieves a higher compression ratio.
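The LZW stage of the pipeline above is a textbook dictionary coder and can be sketched directly (the integer-wavelet and thresholding stages are omitted here):

```python
def lzw_encode(data):
    """LZW: grow a dictionary of seen strings, emit the code of the
    longest known prefix at each step."""
    table = {bytes([i]): i for i in range(256)}
    next_code, w, out = 256, b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code       # new dictionary entry
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes):
    """Inverse of lzw_encode, rebuilding the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # the cScSc corner case: the code may not be in the table yet
        entry = table[code] if code in table else w + w[:1]
        out += entry
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return bytes(out)
```

Repetitive inputs (like thresholded coefficient streams with long zero runs) compress well because repeated substrings collapse to single codes.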

4.
Automated and accurate classification of MR brain images is of crucial importance for medical analysis and interpretation. We propose a novel automatic classification system based on particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, with the aim of distinguishing abnormal from normal brains in MRI scans. The method uses the stationary wavelet transform (SWT) to extract features from MR brain images; SWT is translation-invariant and performs well even when the image suffers slight translation. Next, principal component analysis (PCA) reduces the SWT coefficients. Based on three different hybridizations of PSO and ABC, we propose three new variants of the feed-forward neural network (FNN): IABAP-FNN, ABC-SPSO-FNN, and HPA-FNN. Results of 10 runs of K-fold cross-validation show that the proposed HPA-FNN is superior in classification accuracy not only to the other two proposed classifiers but also to existing state-of-the-art methods. The method achieved perfect classification on Dataset-66 and Dataset-160; for Dataset-255, the 10 repetitions achieved an average sensitivity of 99.37%, average specificity of 100.00%, average precision of 100.00%, and average accuracy of 99.45%. Offline learning cost 219.077 s for Dataset-255, and online prediction merely 0.016 s. The proposed SWT + PCA + HPA-FNN method thus outperforms existing methods and can be applied in practice.
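The abstract does not detail how the IABAP, ABC-SPSO, and HPA hybrids combine the two swarms; as a baseline, the plain global-best PSO on which they build looks like this (inertia and acceleration constants are common textbook defaults, not the paper's settings):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Plain global-best PSO: each velocity mixes inertia with pulls
    toward the particle's personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# minimize the sphere function (global minimum 0 at the origin)
best, val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

When training an FNN, `f` would instead be the network's training error as a function of its flattened weight vector.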

5.
The research and development of biomedical imaging techniques requires large amounts of image data from medical image acquisition devices such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and single photon emission computed tomography. Multimodal image fusion combines information from several images so that a single image captures the maximum content acquired by different devices, angles, times, or stages. This article analyses and compares the performance of existing image fusion techniques on clinical images: simple (pixel-based), pyramid-based, and transform-based fusion. Four sets of CT and MRI images are used, and the fused results are measured with seven parameters, of which four (average difference, mean difference, root mean square error, and standard deviation) should be minimal and three (peak signal-to-noise ratio, entropy, and mutual information) maximal. The experimental results show that, of the 14 fusion techniques surveyed, image fusion using the dual-tree complex wavelet transform gives the best result for the clinical CT and MRI images. Advantages and limitations of all the techniques are discussed along with their experimental results and relevance. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 193–202, 2014.
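Three of the seven quality parameters named above are standard and easy to state precisely; a NumPy sketch of RMSE, PSNR, and histogram entropy (8-bit grey levels assumed):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images (lower is better)."""
    return float(np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else float(20 * np.log10(peak / e))

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram (higher is better)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

The remaining parameters (average difference, mean difference, standard deviation, mutual information) follow the same pattern: direct per-pixel or histogram statistics of the fused and reference images.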

6.
Stereotactic neuro‐radiosurgery is a well‐established therapy for intracranial diseases, especially brain metastases and highly invasive cancers that are difficult to treat with conventional surgery or radiotherapy. Magnetic resonance imaging (MRI) is currently the most used modality in radiation therapy for soft‐tissue anatomical districts, allowing accurate gross tumor volume (GTV) segmentation. Investigating the necrotic material within the whole tumor also has significant clinical value in treatment planning and cancer progression assessment. These pathological necrotic regions are generally characterized by hypoxia, which is implicated in several aspects of tumor development and growth, so particular attention must be paid to these hypoxic areas, which can lead to recurrent cancers and resistance to therapeutic damage. This article proposes NeXt, a novel fully automatic method for necrosis extraction using the Fuzzy C‐Means algorithm after GTV segmentation. This unsupervised machine learning technique detects and delineates necrotic regions even in heterogeneous cancers. The overall processing pipeline is an integrated two‐stage segmentation approach designed to support neuro‐radiosurgery: NeXt can be exploited for dose escalation, allowing a more selective strategy to increase the radiation dose in hypoxic, radioresistant areas. Moreover, NeXt analyzes contrast‐enhanced T1‐weighted MR images alone and does not require multispectral MRI data, making it a clinically feasible solution. The study considers an MRI dataset of 32 brain metastatic cancers, 20 of which present necroses. The segmentation accuracy of NeXt was evaluated using both spatial overlap‐based and distance‐based metrics, achieving an average Dice similarity coefficient of 95.93% ± 4.23% and a mean absolute distance of 0.225 ± 0.229 pixels.
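The clustering step NeXt applies after GTV segmentation is Fuzzy C-Means, which alternates soft-membership and centroid updates; a minimal NumPy version (Euclidean distance and fuzzifier m = 2 are common defaults, not necessarily the paper's choices):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means on rows of X: alternate updating the membership
    matrix U (n x c, rows sum to 1) and the fuzzy-weighted centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

For necrosis extraction, X would hold the intensities of voxels inside the segmented GTV, and the cluster with the darker centroid on contrast-enhanced T1 would be taken as necrotic.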

7.
This work proposes and implements an automated machine learning (ML) methodology to predict the overall survival of glioblastoma multiforme (GBM) patients. A deep learning (DL) 3D U-shaped convolutional neural network with an encoder-decoder architecture segments the brain tumor; features are then extracted from the segmented and raw magnetic resonance imaging (MRI) scans using a pre-trained 2D residual neural network. The dimension-reduced principal components are integrated with clinical data and handcrafted features of the tumor subregions to compare regression-based automated ML techniques. The methodology achieved a mean squared error (MSE) of 87 067.328, a median squared error of 30 915.66, and a Spearman correlation of 0.326 for survival prediction (SP) on the validation set of the Multimodal Brain Tumor Segmentation 2020 dataset, an MSE far better than that of existing automated techniques for the same patients. Automated SP of GBM patients is a crucial topic of direct clinical relevance. The results show that DL-based feature extraction with 2D pre-trained networks outperforms many heavily trained 3D and 2D prediction models built from scratch, and the ensemble approach produces better results than single models. According to the feature importance plots presented in this work, the most crucial feature affecting GBM patients' survival is the patient's age, and the most critical MRI modality for SP is T2 fluid-attenuated inversion recovery.
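The Spearman correlation reported above is simply the Pearson correlation of the ranks of predicted versus true survival; a self-contained sketch:

```python
import numpy as np

def rankdata(a):
    """1-based ranks; tied values share the mean of their positions."""
    a = np.asarray(a, float)
    ranks = np.empty(len(a))
    ranks[a.argsort()] = np.arange(1, len(a) + 1)
    for v in np.unique(a):
        tied = a == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation computed on the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because it depends only on ranks, the metric rewards predictions that order patients correctly by survival even when the absolute day counts (and hence the MSE) are off.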

