Similar Articles
 20 similar articles found (search time: 393 ms)
1.
Abstract

This paper presents two novel contrast enhancement approaches based on texture-region histogram equalization (HE). In HE-based contrast enhancement methods, the enhanced image often contains undesirable artefacts because an excessive number of pixels in non-textured areas heavily biases the histogram. The novel idea presented in this paper is to suppress the influence of pixels in non-textured areas and to exploit texture features when computing the histogram for HE. The first algorithm, Dominant Orientation-based Texture Histogram Equalization (DOTHE), constructs the image histogram using only those image patches that have a dominant orientation. DOTHE categorizes image patches into smooth, dominant-orientation, or non-dominant-orientation patches using the image variance and singular value decomposition, and uses only the dominant-orientation patches for HE. The second method, Edge-based Texture Histogram Equalization, detects significant edges in the image and constructs the histogram from the grey levels in the neighbourhood of those edges. The cumulative distribution function of the texture-based histogram is mapped onto the entire dynamic range of the input image to produce the contrast-enhanced image. Subjective and objective performance assessments of the proposed methods are conducted and compared with existing HE methods. Assessment in terms of visual quality, contrast improvement index, entropy, and measure of enhancement shows that the proposed methods outperform the existing HE methods.
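Both proposed methods share the same final step: the cumulative distribution function (CDF) of a histogram built from texture pixels only is mapped over the full dynamic range of the image. A minimal NumPy sketch of that CDF-mapping step (the texture/edge masking itself is the paper's contribution and is only hinted at by the optional `mask` argument, which is an illustrative assumption):

```python
import numpy as np

def equalize(img, mask=None, levels=256):
    """Map the CDF of the (optionally masked) histogram onto the full range."""
    # Build the histogram from masked pixels only, as in texture-based HE.
    pixels = img[mask] if mask is not None else img.ravel()
    hist = np.bincount(pixels, minlength=levels).astype(float)
    cdf = hist.cumsum() / hist.sum()
    # Transfer function: scale the CDF to the output dynamic range.
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]  # apply the LUT to every pixel of the full image

img = np.array([[50, 50, 100], [100, 150, 150]], dtype=np.uint8)
out = equalize(img)
```

With `mask` set to a boolean array marking edge neighbourhoods, the same transfer function would implement the edge-based variant.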

2.
Diabetic retinopathy (DR) is a disease with increasing prevalence and the major cause of blindness among the working-age population. The risk of severe vision loss can be greatly reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early detection, which can decrease the workload associated with manual grading as well as save diagnosis costs and time. Several studies have developed automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of DR. The proposed model incorporates several stages, namely data collection, preprocessing, segmentation, feature extraction, and classification. First, in the IoT-based data collection stage, the patient wears a head-mounted camera that captures the retinal fundus image and sends it to a cloud server. Then, the contrast of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN) based on the Inception v4 model is applied as a feature extractor, and the resulting feature vectors are classified with a Gaussian Naive Bayes (GNB) model. The proposed model was tested on the benchmark MESSIDOR DR image dataset, and the obtained results showed superior performance over the other models compared in the study.
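CLAHE differs from plain histogram equalization by clipping the histogram before equalization so that contrast amplification is bounded. A single-tile sketch of that clipping step (real CLAHE operates per tile and bilinearly interpolates the lookup tables between tiles; the clip-limit value here is an arbitrary illustration):

```python
import numpy as np

def clipped_equalize(img, clip_limit=0.02, levels=256):
    """Single-tile sketch of CLAHE's core step: clip the normalized histogram,
    redistribute the excess uniformly, then equalize with the clipped CDF."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    hist /= hist.sum()
    excess = np.maximum(hist - clip_limit, 0).sum()     # mass above the clip limit
    hist = np.minimum(hist, clip_limit) + excess / levels  # redistribute uniformly
    cdf = hist.cumsum()
    lut = np.round(cdf / cdf[-1] * (levels - 1)).astype(np.uint8)
    return lut[img]

img = np.full((8, 8), 10, dtype=np.uint8)
img[0, 0] = 200                 # one bright pixel on a dominant background
out = clipped_equalize(img)
```

Because the dominant background bin is clipped, it no longer drags the whole mapping toward one extreme, which is exactly the over-amplification that plain HE suffers from.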

3.
COVID-19 is a recent epidemic that emerged at the end of 2019 and the beginning of 2020 and spread worldwide. This spread requires fast diagnostic techniques so that appropriate treatment decisions can be made. X-ray images are widely used in diagnosis because their structures and tissues can be classified from radiographs. Convolutional Neural Networks (CNNs) are among the most accurate classification techniques used to diagnose COVID-19 because of their flexible number of convolutional layers and high classification accuracy. Classification with CNNs requires a large number of images for training to obtain satisfactory results. In this paper, we used SqueezeNet with a modified output layer to classify X-ray images into three groups: COVID-19, normal, and pneumonia. We propose a deep learning method that enhances the features of X-ray images collected from Kaggle and Figshare to distinguish between COVID-19, normal, and pneumonia infection. Several techniques were applied to the selected image samples, namely the unsharp filter, histogram equalization, and the image complement, to produce additional views of the dataset. The SqueezeNet CNN model was tested in two scenarios using 13,437 X-ray images comprising 4479 images of each type (COVID-19, normal, and pneumonia). In the first scenario, the model was tested without any enhancement of the datasets and achieved an accuracy of 91%. In the second scenario, the model was tested on the same images after enhancement by the above techniques, and the performance rose to approximately 95%. The conclusion of this study is that the model gives higher accuracy on enhanced images than on the original images.
A comparison of the outcomes demonstrates the effectiveness of our DL method for classifying COVID-19 based on enhanced X-ray images.
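The three enhancement views mentioned above are all standard point or neighbourhood operations on 8-bit grayscale images. A rough, self-contained sketch (the 3x3 box kernel in the unsharp mask is an assumption; the paper does not specify its kernel):

```python
import numpy as np

def unsharp(img, alpha=1.0):
    """Unsharp masking: add back the difference between the image and a 3x3 box blur."""
    f = img.astype(float)
    p = np.pad(f, 1, mode='edge')
    h, w = f.shape
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(f + alpha * (f - blur), 0, 255).astype(np.uint8)

def hist_equalize(img):
    """Global histogram equalization via the CDF lookup table."""
    cdf = np.bincount(img.ravel(), minlength=256).cumsum() / img.size
    return np.round(cdf * 255).astype(np.uint8)[img]

def complement(img):
    """Image complement (photographic negative)."""
    return (255 - img.astype(int)).astype(np.uint8)

flat = np.full((4, 4), 100, dtype=np.uint8)
```

Each transform yields a different "view" of the same radiograph, which is how the enhanced training set is built.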

4.
Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and precise diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is limited by bulky image sets and slow processing. Therefore, to obtain a high-quality reconstructed image, we present a sparse-aware noise removal technique using a convolutional neural network (SANR_CNN) for eliminating noise and improving MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SANR algorithm is used to build a dictionary learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves image details and edges during reconstruction. Experiments were conducted to compare SANR_CNN with several existing models with respect to peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). The proposed SANR_CNN model achieved better PSNR, SSIM, and MSE than the other noise removal techniques. The proposed architecture also enables transmission of the denoised medical images through a secured IoT architecture.

5.
In the last decade, there has been a significant increase in medical cases involving brain tumors. Brain tumor is the tenth most common type of tumor, affecting millions of people; however, if it is detected early, the cure rate can increase. Computer vision researchers are working to develop sophisticated techniques for detecting and classifying brain tumors, with MRI scans primarily used for tumor analysis. In this paper, we propose an automated system for brain tumor detection and classification using a saliency map and deep learning feature optimization. The proposed framework is implemented in stages. In the initial phase, a fusion-based contrast enhancement technique is proposed. In the following phase, a saliency-map-based tumor segmentation technique is proposed, which is then mapped onto the original images using active contours. Following that, a pre-trained CNN model, EfficientNetB0, is fine-tuned and trained in two ways: on the enhanced images and on the tumor localization images. Both models are trained with deep transfer learning, and features are extracted from the average pooling layer. The deep learning features are then fused using an improved fusion approach known as Entropy Serial Fusion. The best features are chosen in the final step using an improved dragonfly optimization algorithm and classified using an extreme learning machine (ELM). Experiments conducted on three publicly available datasets achieved improved accuracies of 95.14%, 94.89%, and 95.94%, respectively. Comparison with several neural networks shows the improvement of the proposed framework.
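Serial fusion concatenates the two extracted feature vectors; the "entropy" qualifier suggests entropy-derived weighting before concatenation. The weighting rule below is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def serial_fuse(f1, f2, eps=1e-12):
    """Illustrative serial fusion: weight each feature vector by the Shannon
    entropy of a 16-bin histogram of its values, then concatenate.
    The weighting scheme is an assumption; the paper's exact rule may differ."""
    def entropy(v):
        hist, _ = np.histogram(v, bins=16)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    e1, e2 = entropy(f1), entropy(f2)
    w1 = e1 / (e1 + e2 + eps)
    w2 = e2 / (e1 + e2 + eps)
    return np.concatenate([w1 * f1, w2 * f2])

f1 = np.ones(4)        # constant vector: zero entropy, gets weight 0
f2 = np.arange(4.0)    # spread-out vector: higher entropy, dominates
fused = serial_fuse(f1, f2)
```

The fused vector has the combined length of the inputs, which is what "serial" (as opposed to parallel/element-wise) fusion means.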

6.
In the current era of technological development, medical imaging plays an important part in many applications of medical diagnosis and therapy, which require more precise images with greater detail and information. Medical image fusion is one solution for obtaining rich spatial and spectral information in a single image. This article presents an optimization-based contourlet image fusion approach together with a comparative study of the effects of multi-resolution and multi-scale geometric representations on fusion quality. An optimized multi-scale fusion technique based on the Non-Subsampled Contourlet Transform (NSCT) using Modified Central Force Optimization (MCFO) and local contrast enhancement is presented. The first step in the proposed fusion approach is histogram matching of one image to the other so that both images share the same dynamic range. The NSCT is then used to decompose the images to be fused into their coefficients. The MCFO technique determines the optimum decomposition level and the optimum gain parameters for the best fusion of coefficients subject to certain constraints. Finally, an additional contrast enhancement process is applied to the fused image to enhance its visual quality and reinforce details. The proposed fusion framework is subjectively and objectively evaluated with different fusion quality metrics, including average gradient, local contrast, standard deviation (STD), edge intensity, entropy, peak signal-to-noise ratio, Qab/f, and processing time. Experimental results demonstrate that the proposed optimized NSCT medical image fusion approach based on MCFO and histogram matching achieves superior performance, with higher image quality, average gradient, edge intensity, STD, local contrast, and entropy, a good quality factor, and much more image detail.
These characteristics support more accurate medical diagnosis in different medical applications.

7.
Background: In medical image analysis, the diagnosis of skin lesions remains a challenging task. Skin lesions are a common manifestation of skin cancer, which exists worldwide. Dermoscopy is one of the latest technologies used for the diagnosis of skin cancer. Challenges: Many computerized methods have been introduced in the literature to classify skin cancers. However, challenges remain, such as imbalanced datasets, low-contrast lesions, and the extraction of irrelevant or redundant features. Proposed Work: In this study, a new technique is proposed based on a combined conventional and deep learning framework. The proposed framework consists of two major tasks: lesion segmentation and classification. In the segmentation task, contrast is first improved by fusing two filtering techniques, and a color transformation is then applied to improve color discrimination of the lesion area. Subsequently, the best channel is selected and the lesion map is computed, which is further converted into binary form using a thresholding function. In the classification task, two pre-trained CNN models are modified and trained using transfer learning. Deep features are extracted from both models and fused using canonical correlation analysis. The fusion process also introduces a few redundant features, which lowers classification accuracy. A new technique called maximum entropy score-based selection (MESbS) is proposed as a solution to this issue. The features selected by this approach are fed into a cubic support vector machine (C-SVM) for the final classification. Results: Experiments were conducted on two datasets: ISIC 2017 and HAM10000. The ISIC 2017 dataset was used for the lesion segmentation task, whereas the HAM10000 dataset was used for classification. The achieved accuracies were 95.6% and 96.7%, respectively, higher than those of existing techniques.

8.
In this article, brightness-preserving bi-level fuzzy histogram equalization (BPFHE) is proposed for contrast enhancement of MRI brain images. Histogram equalization (HE) is widely used for improving contrast in digital images; however, it can create side effects such as a washed-out appearance and false contouring due to significant changes in brightness. To overcome these problems, mean-brightness-preserving HE-based techniques have been proposed. Generally, these methods partition the histogram of the original image into sub-histograms and then equalize each sub-histogram independently. BPFHE consists of two stages. First, a fuzzy histogram is computed based on fuzzy set theory to handle the inexactness of gray-level values better than classical crisp histograms. In the second stage, the fuzzy histogram is divided into two sub-histograms based on the mean intensities of the multiple peaks in the original image, and these are equalized independently to preserve image brightness. The quantitative and subjective performance of the proposed BPFHE algorithm is evaluated using two well-known measures, entropy (average information content, AIC) and the Feature Similarity Index (FSIM), on different grayscale images. The proposed method has been tested on several images and gives better visual quality than conventional methods. The simulation results show that the proposed method outperforms existing methods and preserves the original brightness well, making it suitable for use in medical image diagnosis.
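The split-and-equalize idea can be sketched with a crisp histogram (the fuzzy histogram and multi-peak mean split are the paper's contribution and are omitted here): split the histogram at the mean gray level and equalize each half within its own sub-range, which limits the brightness shift relative to plain HE:

```python
import numpy as np

def bi_equalize(img, levels=256):
    """Bi-histogram equalization sketch: the histogram is split at the mean
    gray level and each half is equalized independently within its sub-range."""
    mean = int(img.mean())
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    lut = np.empty(levels)
    # Lower half maps onto [0, mean]; upper half onto [mean+1, levels-1].
    lo, hi = hist[:mean + 1], hist[mean + 1:]
    lut[:mean + 1] = lo.cumsum() / max(lo.sum(), 1) * mean
    lut[mean + 1:] = (mean + 1) + hi.cumsum() / max(hi.sum(), 1) * (levels - mean - 2)
    return np.round(lut).astype(np.uint8)[img]

img = np.array([[0, 64], [192, 255]], dtype=np.uint8)
out = bi_equalize(img)
```

Pixels below the mean can never be pushed above it (and vice versa), which is the mechanism that preserves mean brightness.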

9.
In medical image processing, stomach cancer is one of the most important cancers to diagnose at an early stage. In this paper, an optimized deep learning method is presented for multi-class stomach disease classification. The proposed method works in a few important steps: preprocessing using a fusion of filtered images along with Ant Colony Optimization (ACO), deep transfer learning-based feature extraction, optimization of the extracted deep features using nature-inspired algorithms, and finally fusion of the optimal vectors and classification using a Multi-Layered Perceptron Neural Network (MLNN). In the feature extraction step, a pre-trained Inception V3 network is retrained on the selected stomach infection classes using deep transfer learning, after which the activation on the Global Average Pool (GAP) layer is used for feature extraction. The extracted features are optimized through two different nature-inspired algorithms: Particle Swarm Optimization (PSO) with a dynamic fitness function and the Crow Search Algorithm (CSA). The outputs of both methods are fused by a maximal-value approach, and the fused feature vector is classified by the MLNN. Two datasets are used to evaluate the proposed method, CUI WahStomach Diseases and a combined dataset, achieving an average accuracy of 99.5%. Comparison with existing techniques shows that the proposed method delivers significantly better performance.

10.
王晓红 (Wang Xiaohong), 曾静 (Zeng Jing), 麻祥才 (Ma Xiangcai), 刘芳 (Liu Fang). 《包装工程》 (Packaging Engineering), 2020, 41(15): 245-252
Objective: To effectively remove multiple types of image blur and improve image quality, an image deblurring method based on deep reinforcement learning is proposed. Methods: The GoPro and DIV2K datasets are used for the experiments, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation metrics. High-dimensional features of blurred images are obtained through a convolutional neural network, and a deblurring framework is built by combining deep reinforcement learning with multiple CNN deblurring tools, using PSNR as the training reward function to select the optimal restoration policy and restore blurred images step by step. Results: After training and testing, the proposed method achieves better subjective visual quality than existing mainstream algorithms, along with better PSNR and SSIM values. Conclusion: The experimental results show that the method effectively handles Gaussian blur, motion blur, and related degradations, achieves good visual quality, and provides a useful reference for the field of image deblurring.

11.
Artificial intelligence, which has emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems in society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are prominent in computer vision fields such as image classification, recognition, and object tracking. Training CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies have become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour-image extraction were performed to overcome the data shortage problem. In addition, we propose a hierarchical ensemble technique that achieves high image classification accuracy even when trained on a small amount of data. First, we trained pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet models on the UC-Merced land use dataset and the contour image extracted for each image. We then apply the hierarchical ensemble technique to the possible deployment combinations of these models. These experiments were performed with training-set proportions of 30%, 50%, and 70%, yielding a performance improvement of up to 4.68% over the average accuracy of the individual models.
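The core of any such ensemble is combining per-model class probabilities; a hierarchical ensemble applies the same combination first within subsets of models and then across the subset results. A minimal soft-voting sketch (the probability values are made up for illustration):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft voting: average the per-model class probabilities, take the argmax."""
    return int(np.mean(prob_list, axis=0).argmax())

# Hypothetical softmax outputs of three models for one sample, three classes:
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.2, 0.5, 0.3])
p3 = np.array([0.3, 0.5, 0.2])
pred = ensemble_predict([p1, p2, p3])
```

Here two of the three models favour class 1, so the averaged probabilities do as well, even though the single most confident model voted for class 0.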

12.
Many types of medical images must be fused, as single-modality medical images can provide only limited information due to imaging principles and the complexity of human organ structures. In this paper, a multimodal medical image fusion method that combines the advantages of the nonsubsampled contourlet transform (NSCT) and fuzzy entropy is proposed to provide a basis for clinical diagnosis and to improve the accuracy of target recognition and the quality of fused images. An image is initially decomposed into low- and high-frequency subbands through the NSCT. Fusion rules are then adopted in accordance with the different characteristics of the low- and high-frequency components. The membership degree of the low-frequency coefficients is calculated, and the fuzzy entropy is computed and used to guide the fusion of coefficients so as to preserve image details. High-frequency components are fused by maximizing the regional energy. The final fused image is obtained by inverse transformation. Experimental results show that the proposed method achieves a good fusion effect in terms of both subjective visual quality and objective evaluation criteria. The method also attains high average gradient, SD, and edge preservation, and effectively retains the details of the fused image. The results of the proposed algorithm can provide an effective reference for doctors assessing a patient's condition.

13.
Image fusion aims to integrate complementary information from multiple modalities into a single image without distortion or loss of data. Image fusion is important in medical imaging, specifically for detecting tumors and identifying diseases. In this article, a novel fusion method based on the discrete wavelet transform (DWT) and intuitionistic fuzzy sets (IFSs), called DWT-IFS, is proposed. For fusion, all source images are first fused using the DWT with average, maximum, and entropy fusion rules. The IFS procedure is then applied to the fused image: images are converted into intuitionistic fuzzy images (IFIs) by selecting an optimum value for the parameter in the membership, non-membership, and hesitation degree functions using entropy. The resulting IFIs are decomposed into blocks, and the corresponding blocks of the images are fused using the intersection and union operations of IFS. The efficiency of the proposed DWT-IFS fusion method is assessed by comparing it with existing methods, such as Averaging (AVG), Principal Component Analysis (PCA), the Laplacian Pyramid Approach (LPA), the Contrast Pyramid Approach (CPA), the Discrete Wavelet Transform (DWT), the Morphological Pyramid Approach (MPA), the Redundant Discrete Wavelet Transform (RDWT), the Contourlet Transform (CONTRA), and Intuitionistic Fuzzy Sets (IFS), using subjective and objective performance evaluation measures. The experimental results reveal that the proposed DWT-IFS fusion method provides higher-quality information in terms of physical properties and contrast compared with the existing methods.
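A minimal sketch of DWT fusion using a one-level Haar transform: the approximation (LL) bands are fused with the average rule and the detail bands with the maximum-magnitude rule. The intuitionistic fuzzy set stage described above is omitted; this illustrates only the DWT part, assuming even image dimensions:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the approximation band; keep max-magnitude detail coefficients."""
    c1, c2 = haar2d(img1.astype(float)), haar2d(img2.astype(float))
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)

x = np.arange(16.0).reshape(4, 4)
rec = ihaar2d(*haar2d(x))
```

Fusing an image with itself returns the image unchanged, a quick sanity check on any fusion rule.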

14.
Coronavirus (COVID-19) infection was first identified in Wuhan, China, and later acknowledged as a global pandemic. The World Health Organization (WHO) reported a death rate of about 3.4% for COVID-19. Chest X-Ray (CXR) and Computerized Tomography (CT) screening of infected persons are essential for diagnosis. There are numerous ways to identify positive COVID-19 cases; one fundamental way is radiology imaging through CXR or CT images. Comparison of CT and CXR scans has shown that CT scans are more effective in the diagnostic process due to their higher quality. Hence, automated classification techniques are required to facilitate diagnosis. Deep Learning (DL) is an effective tool for the detection and classification of this type of medical image, and deep Convolutional Neural Networks (CNNs) can learn and extract essential features from different medical image datasets. In this paper, a CNN architecture for automated COVID-19 detection from CXR and CT images is presented. Three activation functions and three optimizers are tested and compared for this task. The proposed architecture is built from scratch, and the COVID-19 image datasets are fed directly to train it. Performance is tested and investigated on the CT and CXR datasets. Three activation functions, Tanh, Sigmoid, and ReLU, are compared using a constant learning rate and different batch sizes. Different optimizers are studied with different batch sizes and a constant learning rate. Finally, a comparison between different combinations of activation functions and optimizers is presented, and the optimal configuration is determined. The main objective is to improve the detection accuracy of COVID-19 from CXR and CT images using DL by employing CNNs to classify medical COVID-19 images at an early stage.
The proposed model achieves a classification accuracy of 91.67% on the CXR image dataset and 100% on the CT dataset, with training times of 58 min and 46 min on the CXR and CT datasets, respectively. The best results are obtained using the ReLU activation function combined with the SGDM optimizer at a learning rate of 10^-5 and a minibatch size of 16.
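The three compared activation functions, and the SGDM (stochastic gradient descent with momentum) update that produced the best configuration, can be written out directly (the hyperparameter values in the usage line are illustrative, not the paper's):

```python
import numpy as np

def tanh(z):
    return np.tanh(z)                      # saturates at ±1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))        # saturates at 0 and 1

def relu(z):
    return np.maximum(z, 0.0)              # no positive-side saturation

def sgdm_step(w, grad, velocity, lr=1e-5, momentum=0.9):
    """One SGD-with-momentum parameter update."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

z = np.array([-2.0, 0.0, 2.0])
w, v = sgdm_step(1.0, 10.0, 0.0, lr=0.1)   # illustrative scalar step
```

ReLU's lack of saturation for positive inputs is the usual explanation for why it trains faster than Tanh or Sigmoid, consistent with the result reported above.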

15.
Dataset dependence affects many real-life applications of machine learning: the performance of a model trained on one dataset is significantly worse on samples from another dataset than on new, unseen samples from the original one. This issue is particularly acute for small and somewhat specific databases in medical applications; the automated recognition of melanoma from skin lesion images is a prime example. We document dataset dependence in dermoscopic skin lesion image classification using three publicly available medium-size datasets. Standard machine learning techniques aimed at improving the predictive power of a model may enhance performance slightly, but the gain is small, the dataset dependence is not reduced, and the best combination depends on model details. We demonstrate that simple differences in image statistics account for only 5% of the dataset dependence. We suggest a solution with two essential ingredients: using an ensemble of heterogeneous models, and training on a heterogeneous dataset. Our ensemble consists of 29 convolutional networks, some of which are trained on features considered important by dermatologists; the networks' outputs are fused by a trained committee machine. The combined International Skin Imaging Collaboration dataset is suitable for training, as it is multi-source, produced by a collaboration of clinics around the world. Building on the strengths of the ensemble, it is also applied to a related problem: recognizing melanoma from clinical (non-dermoscopic) images. This is a harder problem, as the image quality is lower than that of dermoscopic images and the available public datasets are smaller and scarcer. We explored various training strategies and showed that 79% balanced accuracy can be achieved for binary classification averaged over three clinical datasets.

16.
In this article, fuzzy-logic-based adaptive histogram equalization (AHE) is proposed to enhance the contrast of MRI brain images. Medical images play an important role in monitoring a patient's health condition and enabling effective diagnosis, but they often suffer from problems such as poor contrast and noise. It is therefore necessary to enhance contrast and remove noise to improve the quality of various medical images, such as CT, X-ray, MRI, and mammogram images. Fuzzy logic is a useful tool for handling ambiguity and uncertainty. A Brightness Preserving Adaptive Fuzzy Histogram Equalization technique is proposed to improve the contrast of MRI brain images while preserving brightness. The proposed method comprises two stages: first, fuzzy logic is applied to the input image, and then its output is passed to the AHE technique. This process not only preserves the mean brightness but also improves the contrast of the image. A large number of MRI brain images are used to evaluate the proposed method. Its performance is compared with existing methods using entropy, the feature similarity index, and the contrast improvement index, and the experimental results show that the proposed method outperforms the existing methods.

17.
Multi-modality medical image fusion (MMIF) procedures are widely utilized in various clinical applications. MMIF can furnish an image with both anatomical and physiological data for specialists, which can improve diagnostic procedures. Various MMIF models have been proposed, but there is still a need to improve on the efficiency of previous techniques. In this research, the authors propose a novel fusion model based on optimal thresholding with deep learning concepts. An enhanced monarch butterfly optimization (EMBO) is utilized to determine the optimal threshold of the fusion rules in the shearlet transform. The low- and high-frequency sub-bands are then fused on the basis of feature maps produced by the extraction part of the deep learning method, and a restricted Boltzmann machine (RBM) is utilized to conduct the MMIF procedure. A benchmark dataset was used for training and testing. The investigations were conducted on a set of widely used, pre-registered CT and MR images that are publicly accessible. The fused image is obtained from the fused low- and high-frequency sub-bands. Simulation results show that the proposed model offers effective performance in terms of SD, edge quality (EQ), mutual information (MI), fusion factor (FF), entropy, correlation factor (CF), and spatial frequency (SF), with respective values of 97.78, 0.96, 5.71, 6.53, 7.43, 0.97, and 25.78, outperforming the compared methods.

18.
Fusing multimodal medical images into a single integrated image provides more detail and richer information, thereby facilitating medical diagnosis and therapy. Most existing multiscale-based fusion methods ignore the correlations between decomposition coefficients and lead to incomplete fusion results. A novel contextual hidden Markov model (CHMM) is proposed to construct the statistical model of contourlet coefficients. First, the paired brain images are decomposed into multiscale, multidirectional, and anisotropic subbands with a contourlet transform. The low-frequency components are then fused with the choose-max rule. For the high-frequency coefficients, the CHMM is learned with the EM algorithm and incorporated with a novel fuzzy-entropy-based context, building fuzzy relationships among these coefficients. Finally, the fused brain image is obtained using the inverse contourlet transform. Fusion experiments on several multimodal brain images show the superiority of the proposed method in terms of both visual quality and several widely used objective measures.
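A regional-energy fusion rule of the kind used for high-frequency subbands picks, at each position, the coefficient whose local neighbourhood carries more energy. A small sketch with a 3x3 window (the window size is an assumption):

```python
import numpy as np

def regional_energy_fuse(h1, h2, win=3):
    """At each location, keep the coefficient whose local win x win
    neighbourhood has the larger energy (sum of squared coefficients)."""
    pad = win // 2
    def energy(h):
        p = np.pad(h.astype(float) ** 2, pad, mode='edge')
        return sum(p[i:i + h.shape[0], j:j + h.shape[1]]
                   for i in range(win) for j in range(win))
    return np.where(energy(h1) >= energy(h2), h1, h2)

# Two toy high-frequency subbands, each with one strong coefficient:
h1 = np.zeros((4, 4)); h1[0, 0] = 5.0
h2 = np.zeros((4, 4)); h2[3, 3] = 4.0
fused = regional_energy_fuse(h1, h2)
```

Each strong coefficient survives in the fused subband because it dominates the local energy of its own neighbourhood, which is how salient edges from both modalities are retained.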

19.
A hybrid convolutional neural network (CNN)-based model is proposed in this article for accurate detection of COVID-19, pneumonia, and normal patients using chest X-ray images. The input images are first pre-processed to tackle problems associated with the formation of the dataset from different sources, image quality issues, and imbalances in the dataset. The literature suggests that several abnormalities can be found with limited medical image datasets by using transfer learning; hence, various pre-trained CNN models, VGG-19, InceptionV3, MobileNetV2, and DenseNet, are adopted in the present work. With the help of these models, four hybrid models are proposed: VID (VGG-19, Inception, and DenseNet), VMI (VGG-19, MobileNet, and Inception), VMD (VGG-19, MobileNet, and DenseNet), and IMD (Inception, MobileNet, and DenseNet). The model outcome is also tested using five-fold cross-validation. The best-performing hybrid model is VMD, with an overall testing accuracy of 97.3%. Thus, a new hybrid model architecture is presented that combines three individual base CNN models in a parallel configuration to counterbalance their individual shortcomings. The experimental results reveal that the proposed hybrid model outperforms most previously suggested models. This model can also be used for disease identification, especially in rural areas where laboratory facilities are limited.

20.
闫海 (Yan Hai), 邓忠民 (Deng Zhongmin). 《复合材料学报》 (Acta Materiae Compositae Sinica), 2019, 36(6): 1413-1420
Leveraging the strengths of deep learning in image recognition, a convolutional neural network (CNN) is applied as a finite-element surrogate model to predict the effective elastic parameters of polyurethane composites reinforced with randomly distributed in-plane short fibers, and a data-augmentation method is proposed to address the overfitting that arises during training. To validate the surrogate model, its accuracy in predicting the effective Young's modulus and shear modulus is compared with that of traditional surrogate models. On this basis, the CNN surrogate model is combined with the Monte Carlo method to study the forward propagation of errors arising from uncertainty in the material's micro-geometric parameters. The results show that, compared with traditional surrogate models, the CNN model learns the internal features of image samples better, yields more accurate predictions, and maintains good robustness within a certain range outside the training sample space; as the fiber aspect ratio increases, uncertainty in the micro-geometric parameters propagates larger errors into the predicted effective properties.
