Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Fully convolutional networks (FCNs) take input of arbitrary size and produce correspondingly sized output with efficient inference and learning. Automatic diagnosis of melanoma is essential for reducing the mortality rate by identifying the disease at an earlier stage. A two-stage framework is used for melanoma detection: segmentation of the skin lesion, followed by identification of melanoma lesions. Two FCNs based on VGG-16 and GoogLeNet are combined in a hybrid framework to improve segmentation accuracy. Classification is performed by extracting features from the segmented lesion with a deep residual network, together with hand-crafted features, and feeding them to a support vector machine. The performance analysis of our framework shows promising classification accuracy: 0.8892 on the ISBI 2016 dataset and 0.853 on the ISIC 2017 dataset.
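The final classification step described above fuses deep residual features with hand-crafted descriptors and trains an SVM; a minimal sketch under that reading (the feature arrays and the RBF kernel are illustrative assumptions, not the authors' exact configuration):

import numpy as np
from sklearn.svm import SVC

def train_lesion_classifier(residual_features, handcrafted_features, labels):
    # Concatenate deep residual-network features with hand-crafted descriptors per lesion.
    fused = np.concatenate([residual_features, handcrafted_features], axis=1)
    # The abstract only states that an SVM is used; the kernel and C value are assumptions.
    clf = SVC(kernel="rbf", C=1.0, probability=True)
    clf.fit(fused, labels)
    return clf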

2.
Classification of skin lesions is a complex identification challenge. Because of the wide variety of skin lesions, doctors must spend considerable time and effort judging lesion images magnified through the dermatoscope, so algorithms that assist doctors in identifying pathological images are receiving growing attention. With the development of deep learning, the field of image recognition has made substantial progress, and convolutional neural network models outperform traditional image recognition techniques. In this work, we classify seven kinds of lesion images with various deep learning models and methods; common convolutional neural network models for image classification include ResNet, DenseNet, SENet, and others. We fine-tune models with a multi-layer perceptron head while training on the skin lesion data, apply data expansion based on multiple cropping to the validation and test sets, and use an ensemble of five models for the final results. The experimental results show that this approach improves the sensitivity of skin lesion diagnosis.
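A hedged sketch of the multi-crop, multi-model averaging described above (the number of models, crop layout, and tensor shapes are assumptions for illustration; the models are assumed to be in eval mode):

import torch

def ensemble_predict(models, crops):
    # crops: tensor of shape (num_crops, C, H, W) from multiple croppings of one image.
    per_model = []
    with torch.no_grad():
        for model in models:                                    # e.g., five fine-tuned CNNs
            logits = model(crops)                               # (num_crops, num_classes)
            per_model.append(logits.softmax(dim=1).mean(dim=0)) # average over crops
    return torch.stack(per_model).mean(dim=0)                   # average over models -> (num_classes,)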

3.
With the massive success of deep networks, there have been significant efforts to analyze cancer diseases, especially skin cancer. This work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images. The paper aims to develop and fine-tune a deep learning architecture to diagnose different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method for obtaining enhanced classification results from a customized pre-trained network. Regularization, batch normalization, and hyperparameter optimization are performed to fine-tune the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified the seven classes of dermoscopic lesions in the publicly available HAM10000 dataset. The developed deep model was compared against two strong models, InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model outperforms several recent and robust models.
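A minimal sketch of fine-tuning an ImageNet-pretrained ResNet50 for the seven HAM10000 classes (the dropout rate and which layers are frozen are assumptions, not the paper's exact recipe):

import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="DEFAULT")
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                          # regularization; the rate is an assumption
    nn.Linear(model.fc.in_features, 7),         # seven lesion classes
)
# Optionally freeze early blocks and fine-tune only the deeper layers and the new head.
for param in model.layer1.parameters():
    param.requires_grad = False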

4.
Automated retinal disease detection and grading is one of the most researched areas in medical image analysis. In recent years, deep learning models have attracted much attention in this field. Hence, in this paper, we present a deep learning-based, lightweight, fully automated end-to-end diagnostic system for the detection of two major retinal diseases, namely diabetic macular oedema (DME) and drusen macular degeneration (DMD). Early detection of these diseases is important to prevent vision impairment. Optical coherence tomography (OCT) is the main imaging technique for detecting them. The model proposed in this work is based on residual blocks and channel attention modules. Its performance is evaluated using the publicly available Mendeley OCT dataset and the Duke dataset. With the proposed model we achieved a classification accuracy of 99.5% on the Mendeley test dataset and 94.9% on the Duke dataset. For this application, we also performed an extensive evaluation of pre-trained models (LeNet, AlexNet, VGG-16, ResNet50 and SE-ResNet). The proposed model has a much smaller number of trainable parameters and shows superior performance compared to existing methods.
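The model combines residual blocks with channel attention; a minimal squeeze-and-excitation-style channel-attention block is sketched below (the reduction ratio and layer layout are generic assumptions, not the paper's exact module):

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # per-channel weights in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight feature-map channels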

5.
Skin cancer is one of the most severe diseases, and medical imaging is among the main tools for cancer diagnosis. The images provide information on the evolutionary stage, size, and location of tumor lesions. This paper focuses on the classification of skin lesion images through a framework of four experiments that analyze the classification performance of Convolutional Neural Networks (CNNs) in distinguishing different skin lesions. The CNNs are based on transfer learning, taking advantage of ImageNet weights. Accordingly, in each experiment, different workflow stages are tested, including data augmentation and fine-tuning optimization. Three CNN models based on DenseNet-201, Inception-ResNet-V2, and Inception-V3 are proposed and compared using the HAM10000 dataset. The three models achieve accuracies of 98%, 97%, and 96%, respectively. Finally, the best model is tested on the ISIC 2019 dataset, reaching an accuracy of 93%. The proposed CNN-based methodology represents a helpful tool for accurately diagnosing skin cancer.

6.
Dataset dependence affects many real-life applications of machine learning: the performance of a model trained on one dataset is significantly worse on samples from another dataset than on new, unseen samples from the original one. This issue is particularly acute for small and somewhat specific databases in medical applications; automated recognition of melanoma from skin lesion images is a prime example. We document dataset dependence in dermoscopic skin lesion image classification using three publicly available medium-size datasets. Standard machine learning techniques aimed at improving the predictive power of a model may enhance performance slightly, but the gain is small, the dataset dependence is not reduced, and the best combination depends on model details. We demonstrate that simple differences in image statistics account for only 5% of the dataset dependence. We suggest a solution with two essential ingredients: using an ensemble of heterogeneous models, and training on a heterogeneous dataset. Our ensemble consists of 29 convolutional networks, some of which are trained on features considered important by dermatologists; the networks' outputs are fused by a trained committee machine. The combined International Skin Imaging Collaboration dataset is suitable for training, as it is multi-source, produced by a collaboration of clinics around the world. Building on the strengths of the ensemble, we also apply it to a related problem: recognizing melanoma from clinical (non-dermoscopic) images. This is a harder problem, as the image quality is lower than that of dermoscopic images and the available public datasets are smaller and scarcer. We explored various training strategies and showed that 79% balanced accuracy can be achieved for binary classification averaged over three clinical datasets.
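A hedged sketch of fusing the outputs of heterogeneous member networks with a trained committee machine; here a logistic-regression meta-learner over stacked per-model probabilities stands in for the paper's fusion network, and the shapes are illustrative:

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_committee(member_probs, labels):
    # member_probs: list with one array per CNN, each of shape (n_samples, n_classes).
    stacked = np.hstack(member_probs)              # concatenate the members' predicted probabilities
    committee = LogisticRegression(max_iter=1000)
    committee.fit(stacked, labels)                 # learn how to weight and combine the members
    return committee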

7.
Skin lesion recognition is an important challenge in the medical field. In this paper, we implement an intelligent classification system based on convolutional neural networks. First, the system determines whether the input image is a dermoscopic image, with an accuracy of 99%, and then diagnoses dermoscopic and non-dermoscopic images separately. Owing to data limitations, the non-dermoscopic branch currently recognizes only vitiligo. We propose vitiligo recognition based on averaging the probabilities of three structurally identical CNN models; this method is more efficient and robust than traditional image recognition based on the RGB color space. For the dermoscopic classification model, we classify seven skin lesions, use weighted optimization to overcome the unbalanced dataset, and greatly improve the sensitivity of the model by means of model fusion. Further optimization and expansion of the system depend on growth of the database.
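The weighted optimization for the unbalanced dataset mentioned above can be realized as a class-weighted cross-entropy loss; a minimal sketch (the inverse-frequency weighting scheme and the example class counts are assumptions):

import torch
import torch.nn as nn

def make_weighted_loss(class_counts):
    # Weight each class inversely to its frequency so that rare lesion types contribute more.
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)
    return nn.CrossEntropyLoss(weight=weights)

criterion = make_weighted_loss([6705, 1113, 1099, 514, 327, 142, 115])  # illustrative counts for seven classes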

8.
Background: In medical image analysis, the diagnosis of skin lesions remains a challenging task. Skin cancer, which presents as skin lesions, is common worldwide, and dermoscopy is one of the latest technologies used for its diagnosis. Challenges: Many computerized methods have been introduced in the literature to classify skin cancers. However, challenges remain, such as imbalanced datasets, low-contrast lesions, and the extraction of irrelevant or redundant features. Proposed Work: In this study, a new technique is proposed based on a conventional and deep learning framework. The proposed framework consists of two major tasks: lesion segmentation and classification. In the segmentation task, contrast is first improved by fusing two filtering techniques, and a color transformation is then applied to improve color discrimination of the lesion area. Subsequently, the best channel is selected and a lesion map is computed, which is converted into binary form using a thresholding function. In the classification task, two pre-trained CNN models are modified and trained using transfer learning. Deep features are extracted from both models and fused using canonical correlation analysis. The fusion process also introduces a few redundant features, which lowers classification accuracy. A new technique called maximum entropy score-based selection (MESbS) is proposed to address this issue. The features selected by this approach are fed into a cubic support vector machine (C-SVM) for the final classification. Results: The experiments were conducted on two datasets: ISIC 2017 and HAM10000. The ISIC 2017 dataset was used for the lesion segmentation task, whereas the HAM10000 dataset was used for classification. The achieved accuracies were 95.6% and 96.7%, respectively, higher than existing techniques.
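The final classifier is a cubic SVM, i.e., an SVM with a degree-3 polynomial kernel; a minimal scikit-learn sketch (the scaling step and the remaining hyperparameters are assumptions):

from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Cubic SVM applied to the features retained by the MESbS selection step.
cubic_svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
# cubic_svm.fit(selected_features, labels)   # selected_features: hypothetical output of MESbS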

9.
The idea of classification through deep learning is to build a model that skillfully separates a dataset of closely related images into different classes, even when the differences stem from small but continuous variations that accumulate in physical systems over time and have substantial effects. This study identifies ozone depletion through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it places bounding boxes on images to differentiate depleted from non-depleted regions. The primary goal of the image classification is to accurately predict the target class of each minutely varied case in the dataset based on ozone saturation. Permanent changes in climate are of serious concern; the leading causes behind these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on prediction by identifying ozone layer depletion, because it causes many health issues, e.g., skin cancer, damage to marine life, crop damage, and impacts on the immune systems of living beings. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, and extract the required discriminative features through F-RCNN. In the existing literature, a CNN is used for feature extraction and the extracted regions of interest (RoIs) are passed back to the CNN for grouping; managing and differentiating those RoIs after grouping is difficult and negatively affects the results. The classification outcomes of the F-RCNN approach are proficient and show that overall accuracy lies between 91% and 93% in identifying climate variation through ozone concentration classification, i.e., whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieved 93% accuracy and outperforms the prevailing techniques.
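A minimal sketch of adapting a standard Faster R-CNN detector to two region classes (depleted and non-depleted); torchvision's reference implementation is used here as a stand-in for the paper's F-RCNN, and the extra background class is required by that implementation:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pre-trained Faster R-CNN with a ResNet-50 FPN backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 3  # background + depleted + non-depleted
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# Training then proceeds on ozone images annotated with bounding boxes for each region type.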

10.
Currently, distracted driving is among the most important causes of traffic accidents, so intelligent vehicle driving systems have become increasingly important. Recently, interest in driver-assistance systems that detect driver actions and help drivers drive safely has increased. Although these studies use several distinct data types, such as the physical condition of the driver, audio and visual features, and vehicle information, the primary data source is images of the driver, including the face, arms, and hands, taken with a camera inside the car. In this study, an architecture based on a convolutional neural network (CNN) is proposed to classify and detect driver distraction. An efficient CNN with high accuracy is implemented; to build a deep convolutional network for large-scale image recognition, a new architecture is proposed based on the Visual Geometry Group (VGG-16) architecture. The proposed architecture was evaluated using the StateFarm dataset for driver-distraction detection; this dataset is publicly available on Kaggle and is frequently used for this type of research. The proposed architecture achieved 96.95% accuracy.
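A hedged sketch of building on VGG-16 for distraction classification: the pretrained backbone is kept and only the final classifier layer is replaced (the 10-class head follows the StateFarm label set c0-c9; freezing the convolutional features is an assumption, and the paper's own architecture adds further modifications not shown here):

import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="DEFAULT")
model.classifier[6] = nn.Linear(4096, 10)   # 10 distraction categories
# Freeze the convolutional features and train only the classifier, or fine-tune end to end.
for param in model.features.parameters():
    param.requires_grad = False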

11.
陈宏彩  程煜  任亚恒 《包装工程》2024,45(9):135-140
Objective: To overcome the inaccuracy of defect-detection models caused by the shortage of defect samples for pharmaceutical glass vials, an improved StyleGAN2-ADA defect-sample generation method is proposed to enhance model robustness. Methods: First, based on the StyleGAN2-ADA algorithm, a network model is trained on a defect-free image set and used as the backbone. Second, a defect-aware residual block is added to the backbone to generate defect masks; the network is then trained on a small set of defect images to manipulate the features within the masked region, simulating the defect-generation process and synthesizing defect images. Finally, a YOLOv7 detection network is used to validate the effectiveness of the sample-generation method. Results: Experiments show that the method generates realistic and diverse defect images from a large number of normal images and a small number of defect images. After enriching the dataset with the synthesized defect samples, the mean average precision (mAP) of vial defect detection reached 97.3%, an improvement of 33.1% and 4.1% over the original dataset and the standard StyleGAN2-ADA algorithm, respectively. Conclusion: This image-generation method can produce high-quality defect images from a small number of defect samples, balance an imbalanced dataset, stabilize model training, and improve the quality and pass rate of pharmaceutical glass packaging products.

12.
A hybrid convolutional neural network (CNN)-based model is proposed in this article for accurate detection of COVID-19, pneumonia, and normal patients using chest X-ray images. The input images are first pre-processed to tackle problems associated with forming the dataset from different sources, image quality issues, and imbalances in the dataset. The literature suggests that several abnormalities can be found with limited medical image datasets by using transfer learning. Hence, various pre-trained CNN models (VGG-19, InceptionV3, MobileNetV2, and DenseNet) are adopted in the present work. With these models, four hybrid models are proposed: VID (VGG-19, Inception, and DenseNet), VMI (VGG-19, MobileNet, and Inception), VMD (VGG-19, MobileNet, and DenseNet), and IMD (Inception, MobileNet, and DenseNet). The model outcomes are also tested using five-fold cross-validation. The best-performing hybrid model is the VMD model, with an overall testing accuracy of 97.3%. Thus, a new hybrid architecture is presented that combines three individual base CNN models in a parallel configuration to counterbalance the shortcomings of the individual models. The experimental results reveal that the proposed hybrid model outperforms most previously suggested models. It can also be used in the identification of diseases, especially in rural areas where laboratory facilities are limited.
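A hedged sketch of a parallel hybrid in the spirit of the VMD combination: three pretrained backbones run in parallel and their pooled features are concatenated before a shared classification head (the DenseNet-121 variant, the pooling, and the head size are assumptions, not the paper's exact configuration):

import torch
import torch.nn as nn
from torchvision import models

class ParallelHybrid(nn.Module):
    # Concatenate features from three ImageNet-pretrained backbones, then classify.
    def __init__(self, num_classes=3):   # COVID-19, pneumonia, normal
        super().__init__()
        vgg = models.vgg19(weights="DEFAULT")
        mobilenet = models.mobilenet_v2(weights="DEFAULT")
        densenet = models.densenet121(weights="DEFAULT")
        self.branches = nn.ModuleList([vgg.features, mobilenet.features, densenet.features])
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 512 + 1280 + 1024      # channel counts of VGG-19, MobileNetV2, DenseNet-121
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = [self.pool(branch(x)).flatten(1) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))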

13.
In recent years, with the development of machine learning and deep learning, it has become possible to identify and even control crop diseases using electronic devices instead of manual observation. In this paper, an image recognition method for citrus diseases based on deep learning is proposed. We built a citrus image dataset covering six common citrus diseases. A deep learning network is trained on these images and can effectively identify and classify the crop diseases. In the experiments, we use the MobileNetV2 model as the primary network and compare it with other network models in terms of speed, model size, and accuracy. The results show that our method reduces prediction time and model size while maintaining good classification accuracy. Finally, we discuss the significance of using MobileNetV2 to identify and classify agricultural diseases on mobile terminals and put forward relevant suggestions.
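The size advantage behind choosing MobileNetV2 can be made concrete by comparing parameter counts against a heavier backbone; a small sketch (the six-class setting follows the abstract, while the ResNet50 comparison is an illustrative assumption):

from torchvision import models

def count_params(model):
    return sum(p.numel() for p in model.parameters())

mobilenet = models.mobilenet_v2(num_classes=6)   # six citrus disease classes
resnet = models.resnet50(num_classes=6)          # a common, much larger baseline
print(f"MobileNetV2: {count_params(mobilenet) / 1e6:.1f}M parameters")
print(f"ResNet50:    {count_params(resnet) / 1e6:.1f}M parameters")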

14.
Cervical cancer is one of the most common gynecological malignancies, and when detected and treated at an early stage the cure rate is almost 100%. Colposcopy can be used to diagnose cervical lesions by direct observation of the surface of the cervix, combined with microscopic biopsy and pathological examination, which can improve the diagnosis rate and ensure that patients receive fast and effective treatment. Digital colposcopy and automatic image analysis can reduce the workload of doctors, improve work efficiency, and help healthcare institutions in underdeveloped areas make better treatment decisions. The present study used a deep learning model to classify images of cervical lesions, so that clinicians can determine treatment based on the cervix type, greatly improving diagnostic efficiency and accuracy. The study was divided into two parts: first, convolutional neural networks were used to segment the lesions in the cervical images; second, a neural network model similar to CapsNet was used to identify and classify the cervical images. The training set accuracy of our model was 99% and the test set accuracy was 80.1%; it obtained better results than other classification methods and enables rapid classification and prediction of large volumes of image data.

15.
李梅梅  胡春海  周影  宋昕 《计量学报》2023,44(2):296-303
Alzheimer's disease (AD) is a neurodegenerative disease with a slow onset that worsens over time; with population aging, the number of AD patients is growing steadily. Early, accurate diagnosis followed by positive intervention is therefore an urgent problem. To improve the efficiency of computer-aided diagnosis and to support research into the pathophysiological mechanisms of the disease, an improved classification method based on a two-dimensional dual-path fusion network with SE modules is proposed. A reduction-coefficient module is added to the network to increase the proportion of useful information in the images; the weighting function of the channel-attention module is redesigned to enlarge the differences between feature maps; combined with the two-dimensional dual-path network, this strengthens the network's focus, achieving better classification performance while preventing overfitting. Binary classification among AD, EMCI, and NC was performed on the ADNI dataset; experiments show that the accuracy of the proposed model is 5.59% and 8.11% higher than that of VGG and the two-dimensional dual-path fusion model, respectively, and comparison with other state-of-the-art methods confirms the feasibility of the proposed approach.

16.
Biopsy is one of the most commonly used modalities to identify breast cancer in women: tissue is removed and studied by a pathologist under the microscope to look for abnormalities. This technique can be time-consuming and error-prone, and provides variable results depending on the pathologist's level of expertise. An automated and efficient approach not only aids in the diagnosis of breast cancer but also reduces human effort. In this paper, we develop an automated approach for diagnosing breast cancer tumors using histopathological images. We design a residual learning-based, 152-layer convolutional neural network, named ResHist, for breast cancer histopathological image classification. The ResHist model learns rich and discriminative features from the histopathological images and classifies them into benign and malignant classes. In addition, to enhance the performance of the developed model, we design a data augmentation technique based on stain normalization, image-patch generation, and affine transformation. The performance of the proposed approach is evaluated on the publicly available BreaKHis dataset. The proposed ResHist model achieves an accuracy of 84.34% and an F1-score of 90.49% for the classification of histopathological images, and an accuracy of 92.52% and an F1-score of 93.45% when data augmentation is employed. The proposed approach outperforms existing methodologies in classifying benign and malignant histopathological images. Furthermore, our experimental results demonstrate the superiority of our approach over the pre-trained networks AlexNet, VGG16, VGG19, GoogleNet, Inception-v3, ResNet50, and ResNet152 for the classification of histopathological images.
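A minimal sketch of the patch-generation and affine-transformation parts of such an augmentation pipeline using torchvision transforms (the patch size, transform ranges, and normalization statistics are illustrative assumptions; stain normalization would be a separate preprocessing step and is not shown):

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomCrop(224),                        # image-patch generation
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=90, translate=(0.1, 0.1), scale=(0.9, 1.1)),  # affine transformation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])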

17.
In this study, an innovative hybrid machine learning technique is used for early skin cancer diagnosis, fusing a Convolutional Neural Network and a Multilayer Perceptron to analyze images together with information related to the skin cancer. This information is extracted manually after applying different color-space conversions to the original images for better screening of the lesions. The proposed architecture is compared with a standalone architecture, as well as some other techniques, using commonly used evaluation metrics. The HAM10000 dataset is used for training and testing, as it contains seven different skin lesions. The novelty of the proposed hybrid model is a network structure that handles both structured data (patients' metadata and other useful features from different color spaces related to illumination, energy, darkness, etc.) and unstructured data (images). The results show an overall top-1 accuracy of 86%, a top-2 accuracy of 95%, and a 96% area under the curve for the seven classes. The study demonstrates the superiority of the proposed hybrid model, with a 2% improvement in accuracy over the standalone model and promising behavior compared with ensemble techniques. Follow-up research will include more patient data to develop a skin cancer detection device.
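A hedged sketch of such a two-branch hybrid: a CNN embeds the image, an MLP embeds the structured metadata and color-space features, and the concatenated embedding feeds a shared classifier (the ResNet-18 backbone and layer sizes are assumptions, not the paper's architecture):

import torch
import torch.nn as nn
from torchvision import models

class HybridCnnMlp(nn.Module):
    def __init__(self, num_meta_features, num_classes=7):
        super().__init__()
        cnn = models.resnet18(weights="DEFAULT")
        cnn_dim = cnn.fc.in_features
        cnn.fc = nn.Identity()                       # keep only the image embedding
        self.cnn = cnn
        self.mlp = nn.Sequential(                    # branch for metadata / color-space features
            nn.Linear(num_meta_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(cnn_dim + 32, num_classes)

    def forward(self, image, metadata):
        fused = torch.cat([self.cnn(image), self.mlp(metadata)], dim=1)
        return self.classifier(fused)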

18.
Vehicle type classification is a central part of an intelligent traffic system. In recent years, deep learning has played a vital role in object detection in many computer vision tasks. To learn high-level deep features and semantics, deep learning offers powerful tools for addressing the problems of traditional architectures built on handcrafted feature-extraction techniques. Unlike algorithms that use handcrafted visual features, a convolutional neural network is able to automatically learn good features for vehicle type classification. This study develops an optimized automatic surveillance and auditing system to detect and classify vehicles of different categories. Transfer learning is used to learn features quickly from a small number of frontal-view training images of vehicles. The proposed system employs extensive data-augmentation techniques for effective training while avoiding the problem of data shortage. To capture rich and discriminative information about vehicles, the convolutional neural network is fine-tuned for vehicle type classification using the augmented data. The network extracts feature maps from the entire dataset and generates a label for each object (vehicle) in an image, which helps in vehicle-type detection and classification. Experimental results on a public dataset and our own dataset demonstrate that the proposed method is effective in detecting and classifying different types of vehicles, achieving 96.04% accuracy on vehicle type classification.

19.
The nutritional value of perishable food items, such as fruits and vegetables, depends on their freshness level. Existing approaches solve a binary-class problem by classifying a known fruit/vegetable class into fresh or rotten only. We propose an automated fruit and vegetable categorization approach that first recognizes the class of the object in an image and then categorizes that fruit or vegetable into one of three categories: pure-fresh, medium-fresh, and rotten. Using hand-held cameras, we gathered a dataset comprising 60K images of 11 fruits and vegetables, each further divided into the three freshness categories. Recognition and categorization are performed with two deep learning models, Visual Geometry Group (VGG-16) and You Only Look Once (YOLO), and their results are compared: VGG-16 classifies fruits and vegetables and categorizes their freshness, while YOLO also localizes them within the image. Furthermore, we have developed an Android-based application that takes an image of a fruit or vegetable as input and returns its class label and freshness degree. A comprehensive experimental evaluation demonstrates that the proposed approach achieves high accuracy and F1-score on the gathered FruitVeg Freshness dataset, which is publicly available for further evaluation by the research community.

20.
Image Guard (图片卫士): An Automatic Adult Image Recognition System   (cited 4 times: 0 self-citations, 4 by others)
An automatic adult-image recognition system, Image Guard (图片卫士), is designed and implemented. It adopts a three-layer recognition framework that identifies adult images progressively using skin color, texture, and visual image features. To detect skin-color regions in images reliably, a new adaptive statistical skin-color model is proposed. On top of skin-color detection, a skin-texture verification process accurately segments the human skin regions in an image. Based on these skin regions, nine empirical features are extracted to represent the image content, and the AdaBoost algorithm is used to construct an overall classifier that distinguishes normal images from adult images. For evaluation, a test set of 78,205 images was built, of which 59,885 are normal images and 18,320 are adult images. The system shows good performance, with a recognition rate of 88.5% for adult images and 92.5% for normal images. On a Pentium IV 1.5 GHz personal computer, the average processing speed is 5.6 normal images per second and 1.9 adult images per second. The system can be deployed on personal computers or in network transmission to monitor and filter adult images in real time, and can also provide technical support for applications such as network security.
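A minimal sketch of the final classification stage: an AdaBoost ensemble trained on the nine empirical skin-region features (the number of weak learners is an assumption; the skin-color modeling, texture verification, and feature extraction stages are not shown):

from sklearn.ensemble import AdaBoostClassifier

# features: array of shape (n_images, 9) holding the empirical skin-region descriptors;
# labels: 0 for normal images, 1 for adult images.
classifier = AdaBoostClassifier(n_estimators=200)   # default base learners are decision stumps
# classifier.fit(features, labels)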
