Similar Documents
A total of 20 similar documents were found.
1.
A hybrid convolutional neural network (CNN)-based model is proposed in this article for accurate detection of COVID-19, pneumonia, and normal patients using chest X-ray images. The input images are first pre-processed to tackle problems associated with the formation of the dataset from different sources, image quality issues, and imbalances in the dataset. The literature suggests that transfer learning can reveal several abnormalities even with limited medical image datasets. Hence, various pre-trained CNN models (VGG-19, InceptionV3, MobileNetV2, and DenseNet) are adopted in the present work. Finally, with the help of these models, four hybrid models are proposed: VID (VGG-19, Inception, and DenseNet), VMI (VGG-19, MobileNet, and Inception), VMD (VGG-19, MobileNet, and DenseNet), and IMD (Inception, MobileNet, and DenseNet). The model outcome is also tested using five-fold cross-validation. The best-performing hybrid model is the VMD model, with an overall testing accuracy of 97.3%. Thus, a new hybrid model architecture is presented that combines three individual base CNN models in a parallel configuration to counterbalance the shortcomings of the individual models. The experimental results reveal that the proposed hybrid model outperforms most of the previously suggested models. This model can also be used for disease identification, especially in rural areas where laboratory facilities are limited.
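The abstract gives no implementation details; a minimal PyTorch sketch of the parallel-hybrid idea, in which pooled features from three pretrained backbones are concatenated before a shared classifier, might look as follows. The specific backbone variants (e.g., DenseNet-121), layer sizes, and the frozen-backbone choice are assumptions, not the authors' exact VMD configuration.

```python
# Hypothetical sketch: three pretrained backbones in parallel, features concatenated.
import torch
import torch.nn as nn
from torchvision import models

class HybridVMD(nn.Module):
    def __init__(self, num_classes=3):  # COVID-19, pneumonia, normal
        super().__init__()
        # Pretrained backbones used as frozen feature extractors (assumption).
        self.vgg = models.vgg19(weights="IMAGENET1K_V1").features
        self.mobile = models.mobilenet_v2(weights="IMAGENET1K_V1").features
        self.dense = models.densenet121(weights="IMAGENET1K_V1").features
        for p in self.parameters():          # freezes only the three backbones,
            p.requires_grad = False          # since the head is defined below
        self.pool = nn.AdaptiveAvgPool2d(1)
        # 512 (VGG-19) + 1280 (MobileNetV2) + 1024 (DenseNet-121) channels.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 1280 + 1024, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        feats = [self.pool(f(x)).flatten(1) for f in (self.vgg, self.mobile, self.dense)]
        return self.classifier(torch.cat(feats, dim=1))

model = HybridVMD()
logits = model(torch.randn(2, 3, 224, 224))  # two dummy chest X-ray tensors
print(logits.shape)  # torch.Size([2, 3])
```

The parallel arrangement lets each backbone contribute its own representation, so weaknesses of one model can be compensated by the others at the shared classifier stage.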

2.
Chronic hemodialysis (HD) patients are predisposed to several complications associated with pleural effusion. In addition, uremia can directly cause pleuritis. However, data on the pathogenesis and natural course of uremic pleuritis are inadequate. In this study, 76 chronic HD patients with pleural effusion admitted to the Respiratory Center of Masih Daneshvari Hospital in Tehran, Iran, between June 2005 and May 2011 were evaluated to determine the etiology of their pleural disease. Among these patients, those with uremic pleuritis were identified and studied. The rate of uremic pleuritis was 23.7%. Other frequent etiologies of pleural effusion were parapneumonic effusion (23.7%), cardiac failure (19.7%), tuberculosis (6.6%), volume overload, malignancy, and unknown causes. In patients with uremic pleuritis, dyspnea was the most common symptom, followed by cough, weight loss, anorexia, chest pain, and fever. Compared to patients with parapneumonic effusion, patients with uremic effusion had a significantly higher rate of dyspnea and lower rates of cough and fever. Pleural fluid analysis showed that these patients had a significantly lower pleural-to-serum lactic dehydrogenase ratio, total pleural leukocyte count, and polymorphonuclear count than patients with parapneumonic effusion. Improvement was achieved in 94.1% of patients with uremic pleuritis through continuation of HD, chest tube insertion, or pleural decortication, an outcome better than in previous reports. Despite the association with an exudative effusion, inflammatory pleural reactions in patients with uremic pleuritis may not be as severe as in infection-induced effusions. Owing to advances in HD technology and other interventions, the outcome of uremic pleuritis may be improved.

3.
Pulmonary diseases are common throughout the world, especially in developing countries. These diseases include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, fibrosis, and, recently, COVID-19. In general, pulmonary diseases leave a similar footprint on chest radiographs, which makes them difficult to discriminate even for expert radiologists. In recent years, many image processing techniques and artificial intelligence models have been developed to diagnose lung diseases quickly and accurately. In this paper, the performance of four popular pretrained models (VGG16, DenseNet201, DarkNet19, and XceptionNet) in distinguishing between different pulmonary diseases was analyzed. To the best of our knowledge, this is the first published study to attempt to distinguish all four classes (normal, pneumonia, COVID-19, and lung opacity) from chest X-ray (CXR) images. All models were trained on CXR images and statistically tested using 5-fold cross-validation. Among individual models, XceptionNet outperformed all others with 94.775% accuracy and an area under the receiver operating characteristic curve (AUC) of 99.84%. DarkNet19, on the other hand, represents a good compromise between accuracy, fast convergence, resource utilization, and near-real-time detection (0.33 s). Using a collection of models, the 97.79% accuracy achieved by ensemble features was the highest among all surveyed methods, but this approach takes the longest time to predict an image (5.68 s). An efficient and effective decision support system can be developed using one of these approaches to help radiologists in the field make the right assessment in terms of accuracy and prediction time; such a dependable system could be used in rural areas and various healthcare sectors.
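A hedged sketch of the 5-fold cross-validated evaluation protocol the abstract reports is given below; the random feature matrix and logistic-regression classifier are placeholders standing in for the paper's pretrained networks and ensemble features.

```python
# Illustrative stratified 5-fold cross-validation reporting accuracy and ROC-AUC.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

X = np.random.rand(200, 512)           # placeholder CNN feature vectors
y = np.random.randint(0, 4, size=200)  # normal / pneumonia / COVID-19 / lung opacity

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs, aucs = [], []
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])
    accs.append(accuracy_score(y[test_idx], proba.argmax(axis=1)))
    aucs.append(roc_auc_score(y[test_idx], proba, multi_class="ovr"))
print(f"mean accuracy: {np.mean(accs):.3f}, mean AUC: {np.mean(aucs):.3f}")
```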

4.
In the present paper, our model uses a deep learning approach, DenseNet201, for the detection of COVID-19 and pneumonia from chest X-ray images. The model is part of a framework whose modeling software assists with Health Insurance Portability and Accountability Act (HIPAA) compliance, protecting and securing Protected Health Information. In medical facilities, the proposed framework provides feedback to the radiologist for detecting COVID-19 and pneumonia through transfer learning methods. A graphical user interface tool allows the technician to upload a chest X-ray image; the software then passes the chest X-ray radiograph (CXR) to the developed detection model. Once the radiographs are processed, the radiologist receives the classification of the disease, which helps them verify similar CXR images and draw a conclusion. The model is trained on a dataset from Kaggle and achieves an accuracy of 99.1%, a sensitivity of 98.5%, and a specificity of 98.95%. The proposed biomedical innovation is a user-ready framework that assists medical providers in giving patients the best-suited medication regimen by reviewing previous CXR images and confirming the results. There is motivation to design more such applications for medical image analysis in the future to serve the community and improve patient care.

5.
The COVID-19 pandemic poses an additional serious public health threat owing to little or no pre-existing human immunity, and developing a system that identifies COVID-19 in its early stages could save millions of lives. This study applied support vector machine (SVM), k-nearest neighbor (K-NN), and deep learning convolutional neural network (CNN) algorithms to classify and detect COVID-19 using chest X-ray radiographs. To test the proposed system, chest X-ray radiographs and CT images were collected from different standard databases, containing 95 normal images, 140 COVID-19 images, and 10 SARS images. Two scenarios were considered for developing a system to predict COVID-19. In the first scenario, a Gaussian filter was applied to remove noise from the chest X-ray radiographs, and adaptive region growing was then used to segment the region of interest. After segmentation, a hybrid feature extraction composed of the 2D discrete wavelet transform (2D-DWT) and the gray-level co-occurrence matrix was used to extract the features significant for detecting COVID-19. These features were processed using SVM and K-NN. In the second scenario, a CNN transfer model (ResNet-50) was used to detect COVID-19. The system was examined and evaluated through multiclass statistical analysis, and the empirical results showed values of 97.14%, 99.34%, 99.26%, 99.26%, and 99.40% for accuracy, specificity, sensitivity, recall, and AUC, respectively. Thus, the CNN model showed significant success, achieving optimal accuracy, effectiveness, and robustness for detecting COVID-19.
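For the first scenario, a rough sketch of the hybrid feature extraction (2D-DWT sub-band statistics plus gray-level co-occurrence matrix descriptors) feeding an SVM is shown below; the particular statistics, GLCM properties, and wavelet are assumptions rather than the paper's exact settings.

```python
# Hedged sketch: 2D-DWT + GLCM features from a grayscale image, classified by an SVM.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def extract_features(img):
    """img: 2D uint8 grayscale array (e.g., a segmented lung region)."""
    # 2D discrete wavelet transform: mean magnitude of each sub-band (assumed statistic).
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    dwt_feats = [np.mean(np.abs(b)) for b in (cA, cH, cV, cD)]
    # Gray-level co-occurrence matrix texture descriptors.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(dwt_feats + glcm_feats)

# Toy example with random "images"; real use would pass segmented CXR regions.
X = np.array([extract_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(20)])
y = np.random.randint(0, 2, size=20)   # 0 = normal, 1 = COVID-19 (placeholder labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```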

6.
With the rapid growth in COVID-19 cases, the healthcare systems of several developed countries have reached the point of collapse. An important and critical step in fighting COVID-19 is powerful screening of diseased patients, so that positive patients can be treated and isolated. A chest radiology image-based diagnosis scheme may have several benefits over the traditional approach. The success of artificial intelligence (AI)-based techniques in automated diagnosis in the healthcare sector, together with the rapid increase in COVID-19 cases, has created demand for AI-based automated diagnosis and recognition systems. This study develops an Intelligent Firefly Algorithm Deep Transfer Learning Based COVID-19 Monitoring System (IFFA-DTLMS). The proposed IFFA-DTLMS model mainly aims to identify and categorize the occurrence of COVID-19 on chest radiographs. To attain this, the presented IFFA-DTLMS model first applies a densely connected network (DenseNet121) to generate a collection of feature vectors. In addition, the firefly algorithm (FFA) is applied for hyperparameter optimization of the DenseNet121 model. Moreover, an autoencoder-long short-term memory (AE-LSTM) model is exploited for the classification and identification of COVID-19. To ensure the enhanced performance of the IFFA-DTLMS model, wide-ranging experiments were performed, and the results were reviewed from distinct aspects. The experimental values show the improvement of the IFFA-DTLMS model over recent approaches.

7.
Artificial intelligence, which has recently emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems demanded by society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are prominent in computer vision fields such as image classification, recognition, and object tracking. Training these CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour image extraction were performed to overcome the data shortage problem. In addition, we propose a hierarchical ensemble technique to achieve high image classification accuracy even when trained on a small amount of data. First, we trained the UC-Merced land use dataset and the contour images for each image on pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet. We then apply the hierarchical ensemble technique to the possible combinations in which these models can be deployed. These experiments were performed with training-set proportions of 30%, 50%, and 70%, resulting in a performance improvement of up to 4.68% compared to the average accuracy of the individual models.
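The exact hierarchical combination rule is not described in the abstract; the sketch below illustrates only the basic ingredient, averaging softmax outputs from several pretrained classifiers, and uses ImageNet heads that would in practice first be fine-tuned on the UC-Merced classes.

```python
# Minimal soft-voting ensemble over several pretrained CNNs (illustrative only;
# the paper's hierarchical scheme combines models in a more structured way).
import torch
from torchvision import models

backbones = [
    models.vgg16(weights="IMAGENET1K_V1"),
    models.googlenet(weights="IMAGENET1K_V1"),
    models.resnet50(weights="IMAGENET1K_V1"),
    models.densenet121(weights="IMAGENET1K_V1"),
]
for m in backbones:
    m.eval()

x = torch.randn(1, 3, 224, 224)  # placeholder for a land-use image
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in backbones]).mean(dim=0)
print("ensemble prediction:", probs.argmax(dim=1).item())
```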

8.
In medical imaging, segmenting brain tumors is a vital task that enables early diagnosis and treatment. Manual segmentation of brain tumors in magnetic resonance (MR) images is time-consuming and challenging, so a computer-aided brain tumor segmentation approach is needed. Using deep learning algorithms, a robust brain tumor segmentation approach is implemented by integrating a convolutional neural network (CNN) with multiple-kernel K-means clustering (MKKMC). In this proposed CNN-MKKMC approach, the CNN classifies MR images into normal and abnormal. Next, the MKKMC algorithm is employed to segment the brain tumor from the abnormal brain image. The proposed CNN-MKKMC algorithm is evaluated both visually and objectively, in terms of accuracy, sensitivity, and specificity, against existing segmentation methods. The experimental results demonstrate that the proposed CNN-MKKMC approach yields better accuracy in segmenting brain tumors with less time cost.

9.
Large bowel obstruction (LBO) occurs when a blockage or twisting in the large bowel prevents waste and gas from passing through. If left untreated, the blockage cuts off the blood supply to the colon, causing sections of it to die, which results in high rates of morbidity and fatality. The examination of clinical symptoms of LBO involves careful inspection of the cecum and colon, and radiologists use X-rays to inspect the clinical signs. Some research has been done to automate the detection of related abdominal and intestinal diseases; however, these studies concentrate only on detecting Crohn's disease, ulcerative colitis, acute appendicitis, colorectal cancer, celiac disease, liver diseases, and chronic kidney diseases. To the best of the authors' knowledge, automatic detection and classification of LBO has not yet been given due attention. To address this challenge, we designed a model for the detection and classification of LBO. The model's development comprises preprocessing, detection, segmentation, feature extraction, and classification stages. We used YOLOv3 for detection, a gray-level co-occurrence matrix (GLCM) and a convolutional neural network (CNN) for feature extraction, and a support vector machine (SVM) and softmax for classification. The proposed model achieved a diagnostic accuracy of 89% when CNN features and a median filter were used with the softmax classifier. CNN features with a Gaussian filter and the softmax classifier achieved 91%, while CNN features with an anisotropic filter and the softmax classifier achieved 92%. GLCM with threshold segmentation and a Gaussian filter with an SVM classifier achieved 87%, while CNN with watershed segmentation and a Gaussian filter with an SVM classifier achieved 97%, and CNN-GLCM with watershed segmentation and an anisotropic diffusion filter with an SVM classifier achieved 98% for detection and classification of LBO. Finally, this paper presents a performance analysis of various machine learning approaches for the detection and classification of LBO. Our model is designed to assist human experts (radiologists) in diagnosing LBO.

10.
Citrus fruit crops are among the world's most important agricultural products, but pests and diseases impact their cultivation, resulting in yield and quality losses. Computer vision and machine learning have been widely used to detect and classify plant diseases over the last decade, allowing for early disease detection and improving agricultural production. This paper presents an automatic system for the early detection and classification of citrus plant diseases based on a deep learning (DL) model that improves accuracy while decreasing computational complexity. The most recent transfer-learning-based models were applied to the Citrus Plant Dataset to improve classification accuracy. Using transfer learning, this study evaluated Convolutional Neural Network (CNN)-based pre-trained models (EfficientNetB3, ResNet50, MobileNetV2, and InceptionV3) for the identification and categorization of citrus plant diseases. In assessing the architectures' performance, this study found that transferring an EfficientNetB3 model resulted in the highest training, validation, and testing accuracies of 99.43%, 99.48%, and 99.58%, respectively. In identifying and categorizing citrus plant diseases, the proposed CNN model outperforms other cutting-edge CNN architectures previously developed in the literature.
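A minimal sketch of the transfer-learning setup, assuming a torchvision EfficientNet-B3 whose classification head is replaced for the citrus-disease classes, is shown below; the class count, learning rate, and input size are illustrative assumptions.

```python
# Hedged sketch: adapting a pretrained EfficientNet-B3 to citrus-disease classes
# by replacing the classification head (hyperparameters are illustrative).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # assumed number of citrus disease categories
model = models.efficientnet_b3(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of leaf images.
images = torch.randn(4, 3, 300, 300)
labels = torch.randint(0, num_classes, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```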

11.
Objective: To improve the accuracy of cigarette-pack defect detection, a benchmark dataset for recognizing appearance defects in cigarette packaging was constructed, and the application of mainstream deep learning models to intelligent detection of such defects was studied. Methods: First, defect images were collected from a ZB45 slim-cigarette hard-pack packaging machine in production, and typical defect data were obtained after manual review and screening. The defect data were then divided into 23 categories according to their characteristics and causes, and each image was annotated with object-detection bounding boxes. In the end, a benchmark dataset containing more than 13,000 defect images was built, and experiments were carried out on four tasks: defect recognition, defect classification, object detection, and model transfer. Results: The results show that the dataset can support the training of high-accuracy deep learning models; through model transfer, the dataset can substantially improve defect detection for different cigarette brands; the DenseNet model performed well on the defect recognition and defect classification tasks, with accuracies of 93.70% and 95.43%, respectively, and the YOLOv5 model reached an mAP@0.5 of 96.61% on the defect detection task. Conclusion: The dataset can serve as a benchmark for cigarette-pack defect detection, and the results will further support data application and digital transformation in the cigarette packaging field.

12.
The abnormal development of cells in the brain leads to the formation of brain tumors. In this article, an image-fusion-based brain tumor detection and segmentation methodology is proposed using convolutional neural networks (CNN). The proposed methodology consists of image fusion, feature extraction, classification, and segmentation. The discrete wavelet transform (DWT) is used for image fusion, and an enhanced brain image is obtained by fusing the coefficients of the DWT. Gray-level co-occurrence matrix features are then extracted and fed to the CNN classifier for glioma image classification. Finally, morphological operations with closing and opening functions are used to segment the tumor region in the classified glioma brain image.
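A hedged sketch of DWT-based fusion of two registered MR images follows; the fusion rule used here (averaging the approximation band and keeping the maximum-magnitude detail coefficients) is a common choice and an assumption, not necessarily the authors' rule.

```python
# Illustrative DWT image fusion of two registered grayscale MR images.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a.astype(float), "db1")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b.astype(float), "db1")
    # Assumed rule: average the approximation band, keep max-magnitude details.
    cA = (cA1 + cA2) / 2.0
    details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                    for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
    return pywt.idwt2((cA, details), "db1")

a = np.random.rand(128, 128)  # placeholder for, e.g., a T1-weighted slice
b = np.random.rand(128, 128)  # placeholder for, e.g., a T2-weighted slice
fused = dwt_fuse(a, b)
print(fused.shape)  # (128, 128)
```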

13.
Biological water-quality monitoring usually extracts the stress-response features of aquatic organisms under different environments and then classifies those features to identify water quality. For this monitoring problem, a method using convolutional neural networks (CNN) is proposed. Fish motion trajectories are a comprehensive expression of the various water-quality classification features used in the existing literature and an important basis for biological water-quality classification. The Mask-RCNN image segmentation method is used to obtain the centroid coordinates of the fish body and to plot its trajectory over a given period, producing two trajectory-image datasets for normal and abnormal water quality. The Inception-v3 network is incorporated as the feature-preprocessing stage for the dataset, and a new convolutional neural network is built to classify the features extracted by Inception-v3. Multiple groups of parallel experiments were set up to classify normal and abnormal water quality in different environments. The results show that the CNN model achieves a water-quality recognition rate of 99.38%, fully meeting the requirements of water-quality identification.

14.
Attacks on websites and network servers are among the most critical threats in network security, and network behavior identification is one of the most effective ways to identify malicious network intrusions. Analyzing abnormal network traffic patterns and classifying traffic based on labeled network traffic data are among the most effective approaches for network behavior identification. Traditional methods for network traffic classification use algorithms such as Naive Bayes, Decision Tree, and XGBoost. However, network traffic classification, which is required for network behavior identification, generally suffers from low accuracy even with recently proposed deep learning models. To improve network traffic classification accuracy, and thus the network intrusion detection rate, this paper proposes a new network traffic classification model, called ArcMargin, which incorporates metric learning into a convolutional neural network (CNN) to make the CNN model more discriminative. ArcMargin maps network traffic samples from the same category closer together, while samples from different categories are mapped as far apart as possible. The metric-learning regularization feature is an additive angular margin loss, and it is embedded in the objective function of traditional CNN models. The proposed ArcMargin model is validated on three datasets and compared with several related algorithms. According to a set of classification indicators, the ArcMargin model is shown to perform better in both network traffic classification tasks and open-set tasks. Moreover, in open-set tasks, the ArcMargin model can cluster unknown data classes that did not exist in the training dataset.
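The additive angular margin loss that ArcMargin embeds in the CNN objective can be sketched as an ArcFace-style classification head; the scale and margin values below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of an additive angular margin classification head, the kind of
# metric-learning regularization ArcMargin embeds in a CNN objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, in_features, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin only to the target-class angle.
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)

head = ArcMarginHead(in_features=128, num_classes=10)
loss = head(torch.randn(8, 128), torch.randint(0, 10, (8,)))
print(f"arc-margin loss: {loss.item():.4f}")
```

Penalizing the angle of the correct class forces same-class traffic embeddings to cluster tightly, which is also what makes the open-set clustering of unknown classes possible.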

15.
As the amount of online video content increases, consumers are becoming more interested in the various product names appearing in videos, particularly the names of cosmetic products in videos related to fashion, beauty, and style. The identification of such products using image recognition technology may therefore aid in identifying current commercial trends. In this paper, we propose a two-stage deep-learning detection and classification method for cosmetic products. Specifically, variants of the YOLO network are used for detection, where the bounding box for each given input product is predicted and subsequently cropped for classification. We use four state-of-the-art classification networks, namely ResNet, InceptionResNetV2, DenseNet, and EfficientNet, and compare their performance. Furthermore, we employ dilated convolution in these networks to obtain better feature representations and improve performance. Extensive experiments demonstrate that YOLOv3 and its tiny version achieve higher speed and accuracy. Moreover, the dilated networks marginally outperform the base models, or achieve similar performance in the worst case. We conclude that the proposed method can effectively detect and classify cosmetic products.
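The benefit of dilated convolution mentioned here is a larger receptive field at no extra parameter cost, which the minimal PyTorch comparison below illustrates (channel and feature-map sizes are arbitrary).

```python
# Dilated vs. standard convolution: same kernel size and parameter count,
# larger receptive field (padding chosen to preserve spatial size).
import torch
import torch.nn as nn

standard = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 64, 56, 56)  # placeholder feature map from a backbone
print(standard(x).shape, dilated(x).shape)           # both torch.Size([1, 64, 56, 56])
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in dilated.parameters()))  # identical parameter counts
```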

16.
Manual detection of small uncalcified pulmonary nodules (diameter < 4 mm) in thoracic computed tomography (CT) scans is a tedious and error-prone task. Automatic detection of disperse micronodules is thus highly desirable for improved characterization of fatal and incurable occupational pulmonary diseases. Here, we present a novel computer-assisted detection (CAD) scheme specifically dedicated to detecting micronodules. The proposed scheme consists of a candidate-screening module and a false-positive (FP) reduction module. The candidate-screening module starts with a lung segmentation algorithm, followed by a combination of 2D/3D feature-based thresholding parameters to identify plausible micronodules. The FP-reduction module employs a 3D convolutional neural network (CNN) to classify each identified candidate; it automatically encodes discriminative representations by exploiting the volumetric information of each candidate. A set of 872 micronodules in 598 CT scans, marked by at least two radiologists, was extracted from the Lung Image Database Consortium and Image Database Resource Initiative to test our CAD scheme. The CAD scheme achieves a detection sensitivity of 86.7% (756/872) with only 8 FPs/scan and an AUC of 0.98. Our proposed CAD scheme efficiently identifies micronodules in thoracic scans with only a small number of FPs, and the experimental results provide evidence that the features automatically generated by the 3D CNN are highly discriminative, making it a well-suited FP-reduction module for a CAD scheme.
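A hedged sketch of a small 3D CNN of the kind used as the FP-reduction module, operating on volumetric candidate patches, is shown below; the patch size and layer configuration are illustrative, not the paper's architecture.

```python
# Hedged sketch of a 3D CNN false-positive-reduction classifier for volumetric
# candidate patches; layer sizes are illustrative.
import torch
import torch.nn as nn

class FPReduction3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),  # micronodule vs. false positive
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of two 32x32x32 candidate patches cropped around screened locations.
patches = torch.randn(2, 1, 32, 32, 32)
print(FPReduction3DCNN()(patches).shape)  # torch.Size([2, 2])
```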

17.
To find a better way to screen for early lung cancer, and motivated by the great success of deep learning, we empirically investigate the challenge of classifying lung nodules in computed tomography (CT) in an end-to-end manner. Multi-view convolutional neural networks (MV-CNN) are proposed in this article for lung nodule classification. Unlike traditional CNNs, an MV-CNN takes multiple views of each entered nodule. We carry out a binary classification (benign and malignant) and a ternary classification (benign, primary malignant, and metastatic malignant) using the Lung Image Database Consortium and Image Database Resource Initiative database. The results show that, for both binary and ternary classification, the multi-view strategy produces higher accuracy than the single-view method, even for cases that are over-fitted. Our model achieves error rates of 5.41% and 13.91% for binary and ternary classification, respectively. Finally, the receiver operating characteristic curve and the t-distributed stochastic neighbor embedding algorithm are used to analyze the models. The results reveal that the deep features learned by the proposed model have higher separability than features from the image space and the multi-view strategies, so researchers can obtain better representations. © 2017 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 27, 12–22, 2017

18.
Field vehicle recognition based on spectrograms and an improved DenseNet
To address the sensitivity of the traditional Mel-frequency cepstral coefficient and Gaussian mixture model classification approach to interfering noise when classifying moving vehicles in the field, an improved densely connected convolutional network (DenseNet) method is proposed. The acoustic signal is first converted into a spectrogram, which is then fed into the improved DenseNet for recognition. The improvement adds a center loss function at the fully connected layer so that features of the same class cluster more tightly, allowing deep features of the acoustic signal to be extracted and benefiting classification. Experimental results show that, on the same sample set, the recognition rate of the improved DenseNet method is clearly improved, reaching 97.70%.
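A minimal sketch of the center-loss term added at the fully connected layer is given below; the feature dimension, number of vehicle classes, and loss weight are assumptions, not values from the paper.

```python
# Hedged sketch of a center loss term: it pulls each sample's FC-layer feature
# toward its class center, tightening intra-class clusters (sizes illustrative).
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Mean squared distance between each feature and its class center.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

features = torch.randn(8, 256)        # FC-layer features of 8 spectrogram images
logits = torch.randn(8, 4)            # placeholder for the network's class scores
labels = torch.randint(0, 4, (8,))    # 4 hypothetical vehicle classes
center_loss = CenterLoss(num_classes=4, feat_dim=256)
lambda_c = 0.01                       # assumed weight on the center-loss term
total_loss = nn.CrossEntropyLoss()(logits, labels) + lambda_c * center_loss(features, labels)
print(f"total loss: {total_loss.item():.4f}")
```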

19.
In recent years, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. However, training a CNN from scratch is difficult because it requires a large amount of labeled training data, which remains a challenge in the medical imaging domain. To this end, deep transfer learning (TL) is widely used for many medical image tasks. In this paper, we propose a novel multi-source transfer learning CNN model for lymph node detection. The mechanism behind it is straightforward: point-wise (1 × 1) convolution is used to fuse multi-source transfer-learning knowledge. Concretely, we view the transferred features as prior domain knowledge, and a 1 × 1 convolutional operation is applied after the pre-trained convolution layers to adaptively combine the transferred information for the target task. In order to learn non-linear transferred features and prevent over-fitting, we present an encoding process for the pre-trained convolution kernels. Finally, based on a convolutional factorization technique, we train the proposed CNN model and the encoder jointly, which improves the feasibility of our approach. The effectiveness of the proposed method is verified on a lymph node (LN) detection dataset: 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans. Our method demonstrates sensitivities of about 85%/71% at 3 FP/vol. and 92%/85% at 6 FP/vol. for the mediastinum and abdomen, respectively, which compares favorably to previous methods.
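A hedged sketch of the point-wise fusion idea follows: feature maps transferred from two pretrained source networks are concatenated along the channel axis and combined by a learnable 1 × 1 convolution. The backbone choices and channel sizes are assumptions, and the paper's kernel-encoding and factorization steps are omitted.

```python
# Hedged sketch: fusing feature maps from two pretrained source networks with a
# learnable 1x1 (point-wise) convolution; backbone choices are assumptions.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet18(weights="IMAGENET1K_V1")
vgg = models.vgg16(weights="IMAGENET1K_V1")
resnet_trunk = nn.Sequential(*list(resnet.children())[:-2])  # -> (N, 512, 7, 7)
vgg_trunk = vgg.features                                      # -> (N, 512, 7, 7)

fuse = nn.Conv2d(512 + 512, 256, kernel_size=1)  # point-wise fusion of transferred features
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 2))

x = torch.randn(1, 3, 224, 224)  # placeholder CT patch around a candidate lymph node
with torch.no_grad():
    fused = fuse(torch.cat([resnet_trunk(x), vgg_trunk(x)], dim=1))
    print(head(fused).shape)  # torch.Size([1, 2])
```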

20.
Tuberculosis (TB) is a highly infectious disease and one of the major health problems worldwide. Accurate detection of TB is a major challenge for most existing methods. This work addresses these issues and develops an effective mechanism for detecting TB using deep learning. Here, a color space transformation is applied to convert the red, green, and blue (RGB) image to LUV space, where L stands for luminance and U and V represent the chromaticity values of color images. Then, adaptive thresholding is carried out for image segmentation, and various features, such as coverage, density, color histogram, area, length, and texture features, are extracted to enable effective classification. After feature extraction, the dimensionality of the features is reduced using principal component analysis. The extracted features are fed to a fractional crow search-based deep convolutional neural network (FC-SVNN) for classification. Then, image-level features, such as bacilli count, bacilli area, scattering coefficients, and skeleton features, are used to perform severity detection using the proposed adaptive fractional crow (AFC)-deep CNN. Finally, the infection level is determined using entropy, density, and detection percentage. The proposed AFC-deep CNN algorithm is designed by modifying the FC algorithm with a self-adaptive concept. The proposed AFC-deep CNN shows better performance, with a maximum accuracy of 0.935.
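A minimal sketch of the first preprocessing steps described, RGB-to-LUV conversion followed by adaptive thresholding, is shown below using OpenCV; the thresholding block size and offset are assumed values.

```python
# Hedged sketch: color space conversion to LUV and adaptive thresholding,
# the preprocessing steps that precede feature extraction (parameters assumed).
import cv2
import numpy as np

bgr = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # placeholder sputum-smear image
luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv)
L = luv[:, :, 0]  # luminance channel

# Adaptive (local-mean) thresholding: maxValue=255, blockSize=11, C=2 (assumed).
mask = cv2.adaptiveThreshold(L, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 11, 2)
print(mask.shape, mask.dtype)  # (256, 256) uint8 binary mask
```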
