Similar Literature
20 similar documents found.
1.
In the modern world, diabetic retinopathy (DR) is one of the most severe eye diseases brought on by diabetes; it damages the retina and can ultimately lead to blindness. DR can be treated effectively if it is diagnosed early. Retinal fundus images are used to screen the retina for lesions. However, detecting DR in its early stages is challenging because symptoms are minimal. The vascular anomalies caused by DR and the diseases linked to them aid diagnosis, but the resources required to identify the lesions manually are high, and training Convolutional Neural Networks (CNNs) is time-consuming. This research aims to improve diabetic retinopathy diagnosis by developing an enhanced deep learning model (EDLM) for timely DR identification that is potentially more accurate than existing CNN-based models. The proposed model detects various lesions in retinal images at an early stage. First, features are extracted from the retinal fundus image and fed into the EDLM for classification; the EDLM is also used for dimensionality reduction. Additionally, the feature extraction and classification processes are optimized using the stochastic gradient descent (SGD) optimizer. The EDLM's effectiveness is assessed on a Kaggle dataset of 3459 retinal images, and the results are compared with VGG16, VGG19, ResNet18, ResNet34, and ResNet50. Experimental results show that the EDLM achieves higher average sensitivity than VGG16, VGG19, ResNet18, ResNet34, and ResNet50 by 8.28%, 7.03%, 5.58%, 4.26%, and 2.04%, respectively.
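The EDLM itself is not publicly specified, so the following is only a minimal, hypothetical sketch of the comparison setup implied by the abstract: fine-tuning a pretrained torchvision backbone on fundus images with the SGD optimizer it mentions. The class count, learning rate, and momentum are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # assumption: DR severity is commonly graded into five levels

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # SGD, as in the abstract

def train_one_epoch(loader):
    """One training pass; `loader` yields (images, labels) batches of fundus images."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```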

2.
Diabetic Retinopathy (DR) has become a widespread illness among diabetics across the globe. Retinal fundus images are generally used by physicians to detect and classify the stages of DR. Since manual examination of DR images is time-consuming and carries the risk of biased results, automated tools that use Artificial Intelligence (AI) to diagnose the disease have become essential. In this view, the current study develops an Optimal Deep Learning-enabled Fusion-based Diabetic Retinopathy Detection and Classification (ODL-FDRDC) technique. The intention of the proposed ODL-FDRDC technique is to identify DR and categorize its grades from retinal fundus images. The ODL-FDRDC technique applies region-growing segmentation to determine the infected regions. A fusion of two DL models, namely CapsNet and MobileNet, is used for feature extraction, and the hyperparameters of these models are tuned via the Coyote Optimization Algorithm (COA). A Gated Recurrent Unit (GRU) is then utilized to identify DR. The experimental analysis of the ODL-FDRDC technique on a benchmark DR dataset established its superiority over existing methodologies under different measures.
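The exact ODL-FDRDC architecture is not reproduced here; the sketch below only illustrates the fusion-plus-GRU idea under stated assumptions: CapsNet is not available in torchvision, so a second standard backbone stands in for it, the fused feature vector is treated as a length-one sequence for the GRU, and the COA hyperparameter tuning is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionGRUClassifier(nn.Module):
    def __init__(self, num_classes=5):  # assumption: five DR grades
        super().__init__()
        self.mobilenet = models.mobilenet_v2(weights="IMAGENET1K_V1").features
        self.backbone2 = models.resnet18(weights="IMAGENET1K_V1")  # placeholder for CapsNet
        self.backbone2.fc = nn.Identity()                          # yields a 512-d feature vector
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(input_size=1280 + 512, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        f1 = self.pool(self.mobilenet(x)).flatten(1)       # (B, 1280) MobileNet features
        f2 = self.backbone2(x)                             # (B, 512) second-backbone features
        fused = torch.cat([f1, f2], dim=1).unsqueeze(1)    # (B, 1, 1792) as a length-1 sequence
        out, _ = self.gru(fused)
        return self.head(out[:, -1])                       # class logits
```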

3.
COVID-19 is one of the recent epidemics; it emerged at the end of 2019 and the beginning of 2020 and spread worldwide. This spread requires a fast diagnostic technique so that appropriate treatment decisions can be made. X-ray images are widely used in diagnosis because their structures and tissues lend themselves to radiographic classification. Convolutional Neural Networks (CNNs) are among the most accurate classification techniques used to diagnose COVID-19, owing to their flexible number of convolutional layers and high classification accuracy. However, classification with CNNs requires a large number of images to learn from and obtain satisfactory results. In this paper, we use SqueezeNet with a modified output layer to classify X-ray images into three groups: COVID-19, normal, and pneumonia. We propose a deep learning method that enhances the features of X-ray images collected from Kaggle and Figshare to distinguish between COVID-19, normal, and pneumonia infection. To this end, several techniques were applied to the selected image samples, namely unsharp masking, histogram equalization, and the image complement, to produce another view of the dataset. The SqueezeNet CNN model was tested in two scenarios using 13,437 X-ray images, 4479 for each class (COVID-19, normal, and pneumonia). In the first scenario, the model was tested without any enhancement of the dataset and achieved an accuracy of 91%. In the second scenario, the model was tested on the same images after enhancement with the above techniques, and the performance rose to approximately 95%. The conclusion of this study is that the model gives higher accuracy for enhanced images than for the original images. A comparison of the outcomes demonstrates the effectiveness of our DL method for classifying COVID-19 based on enhanced X-ray images.
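The three enhancement operations named in the abstract are standard image-processing steps, so a small OpenCV sketch can make them concrete; the file name is hypothetical and grayscale input is assumed.

```python
import cv2
import numpy as np

def enhance_views(gray):
    """Return the three enhanced views of a uint8 grayscale X-ray image."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    unsharp = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)   # unsharp masking
    hist_eq = cv2.equalizeHist(gray)                          # histogram equalization
    complement = 255 - gray                                   # image complement (negative)
    return unsharp, hist_eq, complement

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file name
views = enhance_views(img)
```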

4.
Diagnosing diabetic retinopathy (DR) from digital fundus images requires clinical experts to recognize the presence and significance of many intricate features, a task that is difficult and time-consuming for ophthalmologists. Therefore, many computer-aided diagnosis (CAD) systems have been developed to automate DR screening. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer learning-based convolutional neural network (PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptually oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to achieve high classification results. The PCNN model works in four main phases. First, the proposed PCNN is trained using the expected gradient length (EGL) to reduce the image-labeling effort during CNN training. Second, the most informative patches and images are automatically selected using a small number of labeled training samples. Third, the PCNN generates useful masks for prognostication and identifies regions of interest. Fourth, the DR-related lesions involved in the classification task, such as microaneurysms, hemorrhages, and exudates, are detected and then used for DR recognition. The PCNN model is pre-trained on the publicly available Kaggle benchmark using a high-end graphics processing unit (GPU). The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art systems in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On a test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. These results indicate that the proposed CAD-DR system is appropriate for screening the severity level of DR.

5.
White blood cells (WBCs), or leukocytes, are a vital component of the blood and form part of the immune system, which is responsible for fighting foreign elements. WBC images can be analyzed with different data-analysis approaches that categorize the different kinds of WBC. Conventionally, laboratory tests are carried out to determine the WBC type, a process that is error-prone and time-consuming. Recently, deep learning (DL) models have made it possible to investigate WBC images automatically and quickly. Therefore, this paper introduces an Aquila Optimizer with Transfer Learning based Automated White Blood Cells Classification (AOTL-WBCC) technique. The presented AOTL-WBCC model performs data normalization and data augmentation (rotation and zooming) at the initial stage. A residual network (ResNet) is then used for feature extraction, with the initial hyperparameter values of the ResNet model tuned by the Aquila Optimizer (AO) algorithm. Finally, a Bayesian neural network (BNN) classifier is applied to assign the WBC images to distinct classes. The experimental validation of the AOTL-WBCC methodology is performed on a Kaggle dataset. The experimental results show that the AOTL-WBCC model outperforms other techniques based on image processing and manual feature engineering across different dimensions.
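A brief sketch of the normalization/augmentation and ResNet feature-extraction steps, using torchvision as a stand-in; the Aquila Optimizer tuning and the Bayesian neural network classifier are not shown, and all parameter values are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=20),                    # rotation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),      # zooming via random crop-and-resize
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],          # data normalization
                         std=[0.229, 0.224, 0.225]),
])

resnet = models.resnet50(weights="IMAGENET1K_V1")
resnet.fc = nn.Identity()          # use the 2048-d pooled features
resnet.eval()

def extract_features(pil_image):
    """Return a (1, 2048) feature vector for one augmented WBC image."""
    with torch.no_grad():
        return resnet(augment(pil_image).unsqueeze(0))
```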

6.
M. U. Akram, A. Tariq, M. A. Anjum, M. Y. Javed, Applied Optics, 2012, 51(20): 4858-4866
Medical image analysis is a very popular research area in which digital images are analyzed for the diagnosis and screening of different medical problems. Diabetic retinopathy (DR) is an eye disease caused by diabetes and may lead to blindness. An automated system for early detection of DR can save a patient's vision and can also help ophthalmologists in DR screening. Background, or nonproliferative, DR contains four types of lesions: microaneurysms, hemorrhages, hard exudates, and soft exudates. This paper presents a method for the detection and classification of exudates in color retinal images. We present a novel technique that uses filter banks to extract candidate regions for possible exudates. It eliminates spurious exudate regions by removing the optic disc region, and then applies a Bayesian classifier, built as a combination of Gaussian functions, to distinguish exudate from non-exudate regions. The proposed system is evaluated and tested on publicly available retinal image databases using performance parameters such as sensitivity, specificity, and accuracy. We further compare our system with previously published methods to show the validity of the proposed system.
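A Bayesian classifier built from per-class Gaussian mixtures can be sketched with scikit-learn; this only illustrates the decision rule, not the authors' filter-bank pipeline, and the feature matrices are assumed to be provided by the candidate-region extraction stage.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_bayes(X_exudate, X_background, n_components=3):
    """Fit one Gaussian mixture per class and return a Bayes decision function."""
    gmm_pos = GaussianMixture(n_components).fit(X_exudate)
    gmm_neg = GaussianMixture(n_components).fit(X_background)
    prior_pos = len(X_exudate) / (len(X_exudate) + len(X_background))
    log_prior = np.log([1 - prior_pos, prior_pos])

    def predict(X):
        # Compare log p(x | class) + log P(class) for the two classes
        scores = np.stack([gmm_neg.score_samples(X) + log_prior[0],
                           gmm_pos.score_samples(X) + log_prior[1]], axis=1)
        return scores.argmax(axis=1)   # 1 = exudate, 0 = non-exudate
    return predict
```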

7.
Diabetic retinopathy (DR) and diabetic macular edema (DME) are severe eye diseases resulting from damage to the blood vessels. Computer-aided automated grading helps clinicians diagnose the diseases with ease. Experiments on automated image processing with deep learning techniques using CNNs have produced promising results, especially in the medical imaging domain. However, CNN-based disease grading of retinal images struggles to retain high-quality information at the output. A novel deep learning model based on a variational autoencoder is proposed to grade DR and DME abnormalities in retinal images. The objective of the proposed model is to extract the most relevant retinal image features efficiently; it focuses on reducing the generation of less relevant candidate regions and on handling the translational invariance present in images. The experiments are conducted on the IDRiD dataset and evaluated using accuracy, U-kappa, sensitivity, specificity, and precision metrics. The results outperform other state-of-the-art techniques.

8.
Explicit extraction of retinal vessels is one of the most significant tasks in medical imaging for analyzing both ophthalmological diseases, such as glaucoma, diabetic retinopathy (DR), retinopathy of prematurity (ROP), and age-related macular degeneration (AMD), and non-retinal diseases such as stroke, hypertension, and cardiovascular disease. The state of the retinal vasculature is an important diagnostic element in ophthalmology. Retinal vessel extraction from fundus images is a difficult task because of varying vessel sizes, relatively low contrast, and the presence of pathologies such as hemorrhages and microaneurysms. Manual vessel extraction is challenging due to the complicated nature of the retinal vessel structure and requires a strong skill set and training. In this paper, a supervised technique for blood vessel extraction in retinal images using a Modified AdaBoost Extreme Learning Machine (MAD-ELM) is proposed. First, the fundus image is preprocessed for contrast enhancement and inhomogeneity correction. Then, a set of core features is extracted, and the best features are selected using minimal Redundancy-Maximum Relevance (mRMR). Finally, vessel and non-vessel pixels are classified using the MAD-ELM method. The DRIVE and DR-HAGIS datasets are used to evaluate the proposed method, and performance is assessed in terms of accuracy, sensitivity, and specificity. The proposed technique attains an accuracy of 0.9619 on the DRIVE database and 0.9519 on the DR-HAGIS database, which contains pathological images. Our results show that, in addition to healthy retinal images, the proposed method extracts blood vessels well from pathological images and is therefore comparable with state-of-the-art methods.
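The Modified AdaBoost ELM is specific to the paper, but the base Extreme Learning Machine is simple enough to sketch with NumPy: a fixed random hidden layer followed by a least-squares output layer. The feature matrix X and binary vessel labels y are assumed to come from the mRMR-selected features.

```python
import numpy as np

class SimpleELM:
    """Plain Extreme Learning Machine sketch (not the paper's MAD-ELM variant)."""

    def __init__(self, n_hidden=500, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid hidden activations

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, untrained input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                      # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)  # 1 = vessel, 0 = non-vessel
```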

9.
Vehicle type classification is a central part of an intelligent traffic system. In recent years, deep learning has played a vital role in object detection for many computer vision tasks. To learn high-level deep features and semantics, deep learning offers powerful tools that address the limitations of traditional architectures built on handcrafted feature-extraction techniques. Unlike algorithms that use handcrafted visual features, a convolutional neural network can automatically learn good features for vehicle type classification. This study develops an optimized automatic surveillance and auditing system to detect and classify vehicles of different categories. Transfer learning is used to learn the features quickly from a small number of training images of vehicle frontal views. The proposed system employs extensive data-augmentation techniques for effective training while avoiding the problem of data shortage. To capture rich and discriminative information about vehicles, the convolutional neural network is fine-tuned for vehicle type classification using the augmented data. The network extracts feature maps from the entire dataset and generates a label for each object (vehicle) in an image, which supports vehicle-type detection and classification. Experimental results on a public dataset and our own dataset demonstrate that the proposed method is effective in detecting and classifying different types of vehicles, achieving 96.04% accuracy on vehicle type classification.

10.
Diabetic retinopathy (DR) is a complication of diabetes mellitus that appears in the retina. Clinicians use retinal images to detect DR pathological signs related to the occlusion of tiny blood vessels. Such occlusion triggers a degenerative cycle in which vessels break off and are replaced by thinner and weaker ones. This research aims to develop a retinal vasculature segmentation method suitable for improving retinal screening procedures by means of computer-aided diagnosis systems. The blood vessel segmentation methodology relies on effective feature selection based on Sequential Forward Selection, using the error rate of a decision tree classifier in the evaluation function. Subsequently, the classification process is performed by three alternative approaches: artificial neural networks, decision trees, and support vector machines. The proposed methodology is validated on three publicly accessible datasets and a private one provided by Hospital Sant Joan of Reus. In all cases we obtain an average accuracy above 96% with a sensitivity of 72% in the blood vessel segmentation process. Compared with the state of the art, our approach achieves the same performance as methods that need more computational power, while significantly reducing the number of features used in the segmentation process, from 20 to 5 dimensions. The implementation of the three classifiers confirmed that the five selected features are effective independently of the classification algorithm.
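The feature-selection step maps directly onto scikit-learn's sequential forward selector scored by a decision tree; the sketch below uses a synthetic stand-in for the 20-dimensional pixel features and shows one of the three downstream classifiers (an SVM).

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the per-pixel feature matrix and vessel labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)

selector = SequentialFeatureSelector(
    DecisionTreeClassifier(random_state=0),
    n_features_to_select=5,     # reduce from 20 to 5 dimensions, as in the abstract
    direction="forward",
    scoring="accuracy",         # minimizing error rate is equivalent to maximizing accuracy
    cv=5,
)
X_reduced = selector.fit_transform(X, y)

# Any of the three classifiers can then be trained on the reduced features; an SVM is shown.
clf = SVC(kernel="rbf").fit(X_reduced, y)
```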

11.
Image classification is one of the significant applications in ophthalmology for abnormality detection in retinal images. It is a pattern recognition technique in which abnormal retinal images are categorized into different groups based on similarity measures. Accuracy and convergence rate are the important parameters of such an automated diagnostic system. Artificial neural networks (ANNs) are widely used for automated image analysis, and Kohonen neural networks (KNNs) are among the prime unsupervised ANNs suitable for image processing applications. Despite their numerous advantages, KNNs suffer from two drawbacks: (a) a lack of standard convergence conditions and (b) less accurate results. In this study, a novel approach is adopted to eliminate these disadvantages by suitably modifying the conventional KNN. First, a fuzzy approach is integrated into the KNN training algorithm to overcome the convergence difficulties. Second, a particle swarm optimization algorithm is used for feature selection to improve accuracy. The proposed approach is tested on four categories of abnormal retinal images. The system is analyzed using several performance measures, and the experiments yield promising results for the proposed system. Comparative analyses with other systems are also presented to show the superior nature of the proposed system.

12.
Automated retinal disease detection and grading is one of the most researched areas in medical image analysis, and in recent years deep learning models have attracted much attention in this field. Hence, in this paper, we present a deep learning-based, lightweight, fully automated end-to-end diagnostic system for the detection of two major retinal diseases, namely diabetic macular oedema (DME) and drusen macular degeneration (DMD). Early detection of these diseases is important to prevent vision impairment, and optical coherence tomography (OCT) is the main imaging technique for detecting them. The model proposed in this work is based on residual blocks and channel attention modules. Its performance is evaluated using the publicly available Mendeley OCT dataset and the Duke dataset: the proposed model achieves a classification accuracy of 99.5% on the Mendeley test set and 94.9% on the Duke dataset. For comparison, we also performed an extensive evaluation of pre-trained models (LeNet, AlexNet, VGG-16, ResNet50, and SE-ResNet). The proposed model has a much smaller number of trainable parameters and shows superior performance compared to existing methods.
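Residual blocks with channel attention are standard components, so a compact PyTorch sketch can illustrate them; the actual layer counts and channel widths of the proposed lightweight model are not given here and are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel weights
        return x * w

class ResidualSEBlock(nn.Module):
    """Residual block whose output is reweighted by channel attention."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        return torch.relu(x + self.attn(self.conv(x)))   # residual (skip) connection
```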

13.
Biopsy is one of the most commonly used modalities to identify breast cancer in women: tissue is removed and studied under the microscope by a pathologist to look for abnormalities. This technique can be time-consuming and error-prone, and provides variable results depending on the expertise of the pathologist. An automated and efficient approach not only aids in the diagnosis of breast cancer but also reduces human effort. In this paper, we develop an automated approach for diagnosing breast cancer tumors from histopathological images. We design a residual learning-based 152-layer convolutional neural network, named ResHist, for breast cancer histopathological image classification. The ResHist model learns rich and discriminative features from the histopathological images and classifies them into benign and malignant classes. In addition, to enhance the performance of the model, we design a data augmentation technique based on stain normalization, image patch generation, and affine transformation. The performance of the proposed approach is evaluated on the publicly available BreaKHis dataset. The ResHist model achieves an accuracy of 84.34% and an F1-score of 90.49% for the classification of histopathological images, and an accuracy of 92.52% and an F1-score of 93.45% when data augmentation is employed. The proposed approach outperforms existing methodologies in the classification of benign and malignant histopathological images. Furthermore, our experimental results demonstrate its superiority over the pre-trained networks AlexNet, VGG16, VGG19, GoogLeNet, Inception-v3, ResNet50, and ResNet152 for this task.
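A brief torchvision sketch of two of the augmentation ingredients named above (patch generation and affine transformation); stain normalization is omitted because it requires a reference slide, and the parameter values are illustrative assumptions.

```python
import torchvision.transforms as T

patch_augment = T.Compose([
    T.RandomCrop(224),                                # patch generation from larger histology tiles
    T.RandomAffine(degrees=90, translate=(0.1, 0.1),  # affine transformation: rotation, shift,
                   scale=(0.9, 1.1), shear=10),       # scaling, and shearing
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
# `patch_augment` can be plugged into a torchvision ImageFolder/DataLoader training pipeline.
```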

14.
With the rapid development of computer technology, millions of images are produced every day by different sources. Efficiently processing these images and accurately discerning the scenes in them is an important but difficult task. In this paper, we propose a novel supervised learning framework for scene classification based on a proposed adaptive binary coding. Specifically, we first extract high-level features of the images under consideration using available models trained on public datasets. We then design a binary encoding method, called one-hot encoding, to make the feature representation more efficient. Benefiting from the proposed adaptive binary coding, our method requires no time to train or fine-tune the deep network and can effectively handle different applications. Experimental results on three public datasets, i.e., the UIUC sports event dataset, the MIT Indoor dataset, and the UC Merced dataset, with three different classifiers demonstrate that our method is superior to state-of-the-art methods by large margins.

15.
Artificial intelligence, which has recently emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems demanded by society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are prominent in computer vision fields such as image classification, recognition, and object tracking. Training these CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies have become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour image extraction were performed to overcome the data shortage problem. In addition, we propose a hierarchical ensemble technique to achieve high image classification accuracy even when trained on a small amount of data. First, we trained pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet models on the UC Merced land-use dataset and on the contour images extracted from each image. We then applied the hierarchical ensemble technique to all the combinations in which the models can be deployed. These experiments were performed with training-set proportions of 30%, 50%, and 70%, resulting in a performance improvement of up to 4.68% compared with the average accuracy of the individual models.
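The hierarchical scheme itself is specific to the paper, but the underlying ensembling of pretrained CNNs can be sketched as soft voting over class probabilities; the backbones, class count, and fine-tuned heads below are assumptions, and the contour-image branch is omitted.

```python
import torch
from torchvision import models

NUM_CLASSES = 21  # the UC Merced land-use dataset has 21 classes

def build(backbone_fn, head_attr="fc"):
    """Load a pretrained backbone and swap its classifier head (assumed fine-tuned elsewhere)."""
    net = backbone_fn(weights="IMAGENET1K_V1")
    in_feats = getattr(net, head_attr).in_features
    setattr(net, head_attr, torch.nn.Linear(in_feats, NUM_CLASSES))
    return net.eval()

ensemble = [build(models.resnet50), build(models.densenet121, "classifier")]

def ensemble_predict(batch):
    """Soft voting: average the class probabilities of all members, then take the top class."""
    with torch.no_grad():
        probs = [torch.softmax(m(batch), dim=1) for m in ensemble]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```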

16.
Diabetic retinopathy (DR) is a disease with increasing prevalence and a major cause of blindness among the working-age population. The possibility of severe vision loss can be greatly reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early DR detection, which can decrease the workload associated with manual grading as well as save diagnosis cost and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of diabetic retinopathy. The proposed model incorporates several processes, namely data collection, preprocessing, segmentation, feature extraction, and classification. First, in the IoT-based data collection step, the patient wears a head-mounted camera that captures the retinal fundus image and sends it to a cloud server. The contrast of the input DR image is then increased in the preprocessing stage using Contrast Limited Adaptive Histogram Equalization (CLAHE). Next, the preprocessed image is segmented using an Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN) based on Inception-v4 is applied as a feature extractor, and the resulting feature vectors are classified with a Gaussian Naive Bayes (GNB) model. The proposed model was tested on the benchmark MESSIDOR DR image dataset, and the obtained results showed superior performance over the other models compared in the study.
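Two of the pipeline steps, CLAHE enhancement and Gaussian Naive Bayes classification, are easy to illustrate; Inception-v4 is not bundled with torchvision, so the feature matrix below is a random stand-in for its output rather than the real extractor.

```python
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def clahe_enhance(bgr_image, clip=2.0, tiles=(8, 8)):
    """Apply CLAHE to the luminance channel of a colour fundus image (uint8 BGR)."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Hypothetical feature matrix and DR labels standing in for Inception-v4 output.
features = np.random.rand(1200, 1536).astype(np.float32)
labels = np.random.randint(0, 2, size=1200)
gnb = GaussianNB().fit(features, labels)
predictions = gnb.predict(features)
```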

17.
The most effective way to manage diabetic retinopathy (DR) is early detection through regular screening, which is critical for a better prognosis. Automatic screening of the images would assist physicians in diagnosing patients easily and accurately, which underlines the importance of image processing technology for handling retinal fundus images. Accordingly, this article develops an automatic DR detection model with three main stages: (a) image preprocessing, (b) blood vessel segmentation, and (c) classification. The preprocessing phase includes two steps: conversion from RGB to Lab color space, and contrast enhancement, which is performed using histogram equalization. After preprocessing, the segmentation phase involves (a) thresholding the contrast-enhanced and filtered images, (b) thresholding the keypoints of the contrast-enhanced and filtered images, and (c) adding both thresholded binary images. Here, the filtering is performed by the proposed adaptive average filter, whose coefficients are tuned by an improved meta-heuristic algorithm called fitness probability-based CSO (FP-CSO); since the improvement incorporates a fitness probability into the conventional CSO, the proposed algorithm is termed FP-CSO. Finally, the classification stage uses a deep CNN whose convolutional layer is optimized by the same improved FP-CSO. Comparative and performance analyses confirm the effectiveness of the proposed model.

18.
The sparse representation-based classification (SRC) method is a powerful tool for representing high-dimensional data, and its superiority has been demonstrated in many fields, especially face recognition. With sparsity appropriately harnessed, SRC can solve face classification problems caused by varying expression and illumination as well as occlusion and disguise. However, face images are high-dimensional and usually noisy, and in real-world applications the dimensionality is often larger than the number of training samples, which hurts the performance of SRC. It is therefore beneficial to perform dimensionality reduction (DR) before applying SRC, but most prevalent DR methods have no direct connection to SRC. In this paper, we propose a supervised DR algorithm that suits SRC well and improves the discriminating ability in the low-dimensional space. The proposed method utilizes the Fisher discriminant criterion and a low-dimensional reconstructive restriction to extract the discriminating structure of the data. Extensive experiments on public face databases verify the effectiveness of the proposed supervised DR method combined with the sparse representation model.
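For context, the core SRC decision rule (sparse coding over the training dictionary, then class-wise reconstruction residuals) can be sketched with scikit-learn's OMP solver; the paper's supervised dimensionality-reduction step is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(D, labels, x, n_nonzero=20):
    """SRC decision rule.

    D: (n_features, n_train) dictionary whose columns are training samples,
    labels: (n_train,) class labels, x: (n_features,) test sample.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)
    alpha = omp.coef_                                  # sparse code over all training samples
    residuals = []
    for c in np.unique(labels):
        mask = labels == c
        residuals.append(np.linalg.norm(x - D[:, mask] @ alpha[mask]))  # class-wise residual
    return np.unique(labels)[int(np.argmin(residuals))]                  # smallest residual wins
```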

19.
Dataset dependence affects many real-life applications of machine learning: the performance of a model trained on one dataset is significantly worse on samples from another dataset than on new, unseen samples from the original one. This issue is particularly acute for the small and somewhat specific databases found in medical applications; the automated recognition of melanoma from skin lesion images is a prime example. We document dataset dependence in dermoscopic skin lesion image classification using three publicly available medium-sized datasets. Standard machine learning techniques aimed at improving the predictive power of a model may enhance performance slightly, but the gain is small, the dataset dependence is not reduced, and the best combination depends on model details. We demonstrate that simple differences in image statistics account for only 5% of the dataset dependence. We suggest a solution with two essential ingredients: using an ensemble of heterogeneous models, and training on a heterogeneous dataset. Our ensemble consists of 29 convolutional networks, some of which are trained on features considered important by dermatologists; the networks' outputs are fused by a trained committee machine. The combined International Skin Imaging Collaboration dataset is suitable for training, as it is multi-source, produced by a collaboration of clinics around the world. Building on the strengths of the ensemble, we also apply it to a related problem: recognizing melanoma from clinical (non-dermoscopic) images. This is a harder problem, as the image quality is lower than that of dermoscopic images and the available public datasets are smaller and scarcer. We explored various training strategies and showed that a balanced accuracy of 79% can be achieved for binary classification, averaged over three clinical datasets.

20.
In content-based image retrieval (CBIR) for dermatological diagnosis, information matching is the major concern in feature vector-based classification. A more discriminative feature vector leads to better classification as well as better retrieval rates, and better retrieval results help the dermatologist improve the diagnosis. In this paper, we propose a support vector machine weight map (SVM W-Map)-based feature selection combined with multi-class particle swarm optimization (PSO) for a multi-class dermatological imaging dataset. The performance of the system was tested on a dataset of 1450 images, obtaining 99.7% specificity and 95.89% sensitivity. The analysis and evaluation of the results show that the proposed system has higher diagnostic ability compared with other works.
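The W-Map construction and PSO refinement are specific to the paper, but the general idea of ranking features by linear-SVM weight magnitude can be sketched with scikit-learn; the synthetic data and the number of retained features are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in for multi-class skin-lesion feature vectors.
X, y = make_classification(n_samples=1450, n_features=64, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
selector = SelectFromModel(svm, prefit=True, max_features=16, threshold=-np.inf)
X_selected = selector.transform(X)          # keep the 16 features with the largest |w|
```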
