Similar Literature
20 similar documents retrieved.
1.
A brain tumor is one of the most dangerous diseases, caused by uncontrolled and abnormal cell division. In this paper, we use MRI brain scans rather than CT brain scans for tumor detection, as MRI is less harmful to the patient. We adopt the watershed segmentation technique for brain tumor detection. The proposed methodology proceeds as follows: pre-processing, computing the foreground, applying watershed segmentation, and extracting features that are supplied to machine learning algorithms. The study was tested on a large image dataset, and the K-NN classification algorithm achieved acceptable accuracy in detecting brain tumors.
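A minimal sketch of the kind of pipeline described above: marker-based watershed segmentation with OpenCV, simple region features, and a K-NN classifier from scikit-learn. The feature choice and marker thresholds are illustrative assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def watershed_features(bgr_img):
    """Segment the brightest foreground region and return simple features."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
    # Sure foreground via distance transform, sure background via dilation
    dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(opening, kernel, iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    # Markers: label sure regions, leave the unknown band as 0 for watershed
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(bgr_img, markers)
    region = markers > 1
    # Toy feature vector: relative area and mean/std intensity of the segmented region
    area = region.mean()
    mean_int = gray[region].mean() if region.any() else 0.0
    std_int = gray[region].std() if region.any() else 0.0
    return [area, mean_int, std_int]

# X_train / y_train would come from labelled MRI slices (hypothetical data):
# X_train = np.array([watershed_features(cv2.imread(p)) for p in train_paths])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
# pred = knn.predict([watershed_features(cv2.imread(test_path))])
```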

2.
Vehicle type classification is considered a central part of an intelligent traffic system. In recent years, deep learning has played a vital role in object detection and many other computer vision tasks. To learn high-level deep features and semantics, deep learning offers powerful tools for problems that traditional architectures address with handcrafted feature-extraction techniques. Unlike algorithms that use handcrafted visual features, a convolutional neural network can automatically learn good features for vehicle type classification. This study develops an optimized automatic surveillance and auditing system to detect and classify vehicles of different categories. Transfer learning is used to learn features quickly from a small number of training images of vehicle frontal views. The proposed system employs extensive data-augmentation techniques for effective training while avoiding the problem of data shortage. To capture rich and discriminative information about vehicles, the convolutional neural network is fine-tuned on the augmented data for vehicle type classification. The network extracts feature maps from the entire dataset and generates a label for each object (vehicle) in an image, which supports vehicle type detection and classification. Experimental results on a public dataset and our own dataset demonstrate that the proposed method is effective in detecting and classifying different types of vehicles, achieving 96.04% accuracy on vehicle type classification.
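A hedged sketch of transfer learning with heavy data augmentation in PyTorch. The ResNet-50 backbone, the specific augmentations, the folder layout, and the five vehicle classes are illustrative assumptions; the abstract does not name the exact network or transforms.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # assumed number of vehicle categories

# Augmentations for training on frontal-view vehicle images (illustrative choices)
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse ImageNet weights, replace only the classifier head
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown; fine-tune longer in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```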

3.
To address the difficulties of manual feature selection, the cumbersome procedure, and the low classification accuracy of traditional power quality disturbance classification methods, a disturbance classification method based on particle swarm optimization (PSO) and a convolutional neural network (CNN) is proposed. First, the one-dimensional time series of each power quality disturbance signal is reshaped into a two-dimensional matrix with equal numbers of rows and columns, and these matrices are partitioned into training and test sets. Second, a power quality disturbance classification model is built on a CNN. Third, the PSO algorithm is used to optimize the parameters of the classification model, and the optimized model is trained on the training set. Finally, the trained model is evaluated on the test set, and the classification results for each type of disturbance are obtained from the output labels. Simulation results show that the model can extract features of power quality disturbance data automatically and achieves higher classification accuracy on disturbance signals than other power quality disturbance classification models.
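A minimal sketch of the two ingredients described above: reshaping a 1-D disturbance signal into a square matrix for a small CNN, and a generic particle swarm loop that could search a hyperparameter such as the learning rate. The network shape, swarm size, and the `validation_error` objective are placeholders, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def signal_to_matrix(signal):
    """Reshape a 1-D signal of length >= n*n into a (1, n, n) tensor."""
    n = int(np.sqrt(len(signal)))
    return torch.tensor(signal[: n * n], dtype=torch.float32).reshape(1, n, n)

class DisturbanceCNN(nn.Module):
    def __init__(self, n_classes=8, side=32):          # 8 disturbance types assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (side // 4) ** 2, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def pso_minimize(objective, lo, hi, n_particles=10, iters=20):
    """Very small PSO over a single scalar hyperparameter (e.g., learning rate)."""
    pos = np.random.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    best_p = pos.copy()
    best_p_val = np.array([objective(p) for p in pos])
    g = best_p[best_p_val.argmin()]
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < best_p_val
        best_p[improved], best_p_val[improved] = pos[improved], vals[improved]
        g = best_p[best_p_val.argmin()]
    return g

# Example: search a learning rate minimizing a (hypothetical) validation error
# best_lr = pso_minimize(lambda lr: validation_error(DisturbanceCNN(), lr), 1e-4, 1e-1)
```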

4.
Data fusion is one of the challenging issues the healthcare sector has faced in recent years, and proper diagnosis from digital imagery, followed by treatment, is deemed the right solution. Intracerebral Haemorrhage (ICH), a condition characterized by injury to blood vessels in brain tissue, is one of the important causes of stroke. Images generated by X-ray and Computed Tomography (CT) are widely used for estimating the size and location of hemorrhages. Radiologists use manual planimetry, a time-consuming process, to segment CT scan images. Deep Learning (DL) is the preferred approach for increasing the efficiency of ICH diagnosis. In this paper, the researchers present a multi-modal data-fusion-based feature extraction technique with a Deep Learning model, abbreviated FFE-DL for Intracranial Haemorrhage Detection and Classification, also known as FFEDL-ICH. The proposed FFEDL-ICH model has four stages: preprocessing, image segmentation, feature extraction, and classification. The input image is first preprocessed using Gaussian Filtering (GF) to remove noise. Second, the Density-based Fuzzy C-Means (DFCM) algorithm is used to segment the images. The fusion-based feature extraction model then combines handcrafted features (Local Binary Patterns) with deep features (Residual Network-152) to extract useful features. Finally, a Deep Neural Network (DNN) is used as the classifier to differentiate the multiple classes of ICH. The researchers used a benchmark Intracranial Haemorrhage dataset and simulated the FFEDL-ICH model to assess its diagnostic performance. The findings revealed that the proposed FFEDL-ICH model significantly outperforms existing models. For future research, improving the performance of the FFEDL-ICH model with learning-rate scheduling techniques for the DNN is recommended.
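A sketch of the fusion-based feature extraction step: handcrafted Local Binary Pattern histograms concatenated with ResNet-152 deep features, followed by a small fully connected classifier. Gaussian filtering stands in for the preprocessing stage, the density-based fuzzy C-means segmentation is omitted, and the LBP parameters, classifier widths, and five ICH classes are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern
from torchvision import models, transforms

# Deep feature extractor: ResNet-152 without its final classification layer
resnet = models.resnet152(weights="IMAGENET1K_V1")
backbone = nn.Sequential(*list(resnet.children())[:-1]).eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def fused_features(bgr_img):
    """Concatenate an LBP histogram with ResNet-152 deep features."""
    denoised = cv2.GaussianBlur(bgr_img, (5, 5), 0)          # preprocessing stage
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    rgb = cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        deep = backbone(to_tensor(rgb).unsqueeze(0)).flatten().numpy()  # 2048-dim
    return np.concatenate([hist, deep])

# Simple DNN classifier on top of the fused vector (5 ICH subtypes assumed)
classifier = nn.Sequential(
    nn.Linear(10 + 2048, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 5),
)
```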

5.
To address the problem of motor fault diagnosis, a new one-dimensional convolutional neural network structure (1D-CNN) is designed, and a motor fault diagnosis method based on acoustic signals and the 1D-CNN is proposed. To verify the effectiveness of the 1D-CNN algorithm in motor fault recognition, a group of faulty air-conditioner motors was taken as the experimental subject: a motor fault diagnosis platform was built, acoustic signals were collected from air-conditioner motors in four states, a dataset of motor fault acoustic signals was constructed, and the 1D-CNN algorithm was applied...
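A minimal 1-D CNN of the kind described, taking fixed-length sound segments as single-channel input and predicting one of four motor states. The layer sizes and segment length are assumptions for illustration.

```python
import torch
import torch.nn as nn

SEGMENT_LEN = 16000   # e.g., one second of audio at 16 kHz (assumed)
NUM_STATES = 4        # four motor conditions as in the experiment

class MotorSound1DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),          # makes the head independent of input length
        )
        self.head = nn.Linear(32, NUM_STATES)

    def forward(self, x):                      # x: (batch, 1, SEGMENT_LEN)
        return self.head(self.features(x).squeeze(-1))

model = MotorSound1DCNN()
dummy = torch.randn(8, 1, SEGMENT_LEN)
print(model(dummy).shape)                      # torch.Size([8, 4])
```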

6.
With the development of deep learning and Convolutional Neural Networks (CNNs), the accuracy of automatic food recognition based on visual data has significantly improved. Some studies have shown that the deeper the model, the higher the accuracy. However, very deep neural networks suffer from overfitting and consume huge computing resources. In this paper, a new classification scheme is proposed for automatic food-ingredient recognition based on deep learning. We construct an up-to-date combinational convolutional neural network (CBNet) with a subnet-merging technique. First, two different neural networks are used to learn the features of interest. Then, a well-designed feature-fusion component aggregates the features from the subnetworks, extracting richer and more precise features for image classification. To learn more complementary features, corresponding fusion strategies are also proposed, including auxiliary classifiers and hyperparameter settings. Finally, CBNet built on the well-known VGGNet, ResNet, and DenseNet is evaluated on a dataset containing 41 major categories of food ingredients with 100 images per category. Theoretical analysis and experimental results demonstrate that CBNet achieves promising accuracy for multi-class classification and improves the performance of convolutional neural networks.
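A sketch of a combinational network in the spirit of CBNet: two backbone subnets (ResNet-18 and DenseNet-121 here, as stand-ins for the paper's VGG/ResNet/DenseNet variants), a concatenation-based fusion head, and auxiliary classifiers on each subnet. The auxiliary-loss weight of 0.3 and the head widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CBNetSketch(nn.Module):
    def __init__(self, n_classes=41):
        super().__init__()
        resnet = models.resnet18(weights="IMAGENET1K_V1")
        self.subnet_a = nn.Sequential(*list(resnet.children())[:-1])        # -> (B, 512, 1, 1)
        self.subnet_b = models.densenet121(weights="IMAGENET1K_V1").features  # -> (B, 1024, h, w)
        self.aux_a = nn.Linear(512, n_classes)     # auxiliary classifier on subnet A
        self.aux_b = nn.Linear(1024, n_classes)    # auxiliary classifier on subnet B
        self.fusion = nn.Sequential(               # fusion head over concatenated features
            nn.Linear(512 + 1024, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, n_classes),
        )

    def forward(self, x):
        fa = self.subnet_a(x).flatten(1)
        fb = F.adaptive_avg_pool2d(self.subnet_b(x), 1).flatten(1)
        return self.fusion(torch.cat([fa, fb], dim=1)), self.aux_a(fa), self.aux_b(fb)

def cbnet_loss(outputs, labels, aux_weight=0.3):
    """Main loss plus down-weighted auxiliary losses on each subnet."""
    main, aux_a, aux_b = outputs
    ce = F.cross_entropy
    return ce(main, labels) + aux_weight * (ce(aux_a, labels) + ce(aux_b, labels))
```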

7.
Deep learning (DL) techniques, which do not need complex pre-processing and feature analysis, are used in many areas of medicine and achieve promising results. On the other hand, in medical studies a limited dataset decreases the abstraction ability of a DL model. In this context, we aimed to produce synthetic brain images of three tumor types (glioma, meningioma, and pituitary), unlike traditional data augmentation methods, and classify them with DL. This study proposes a tumor classification model based on a Dense Convolutional Network (DenseNet121), chosen to counter the forgetting problem in deep networks and the weakening of information flow between layers. By comparing models trained on two different datasets, we demonstrate the effect of synthetic images generated by a Cycle Generative Adversarial Network (CycleGAN) on the generalization of DL: one model is trained only on the original dataset, while the other is trained on the combined dataset of synthetic and original images. Synthetic data generated by CycleGAN improved the best accuracy values for the glioma, meningioma, and pituitary tumor classes from 0.9633, 0.9569, and 0.9904 to 0.9968, 0.9920, and 0.9952, respectively. The model developed with synthetic data obtained higher accuracy than related studies in the literature. Additionally, beyond pixel-level and affine-transform data augmentation, synthetic data has been generated for the figshare brain dataset for the first time.
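A brief sketch of the comparison setup: fine-tuning DenseNet121 on the original images alone versus on the original images combined with CycleGAN-generated synthetic images. The folder names and the three-class head are illustrative; the CycleGAN generator itself is not shown.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folders: one with the original MRI slices, one with CycleGAN outputs
original = datasets.ImageFolder("data/original", transform=tf)
synthetic = datasets.ImageFolder("data/cyclegan_synthetic", transform=tf)
combined = torch.utils.data.ConcatDataset([original, synthetic])

def make_model(n_classes=3):                      # glioma, meningioma, pituitary
    m = models.densenet121(weights="IMAGENET1K_V1")
    m.classifier = nn.Linear(m.classifier.in_features, n_classes)
    return m

# Train one model per dataset and compare test accuracy (training loop omitted)
model_original_only = make_model()
model_with_synthetic = make_model()
loader = torch.utils.data.DataLoader(combined, batch_size=32, shuffle=True)
```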

8.
With the development of artificial intelligence technologies such as deep learning, various organizations, including governments, are making efforts to generate and manage big data for use in artificial intelligence. However, big data is difficult to acquire due to social problems and restrictions such as personal information leakage, and it is hard to introduce the technology in fields that lack the training data needed to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, which augments data using contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark dataset, are used in the experimental evaluation to demonstrate the superiority of the proposed method. To show that a large performance improvement can be achieved even with a small training dataset, the training set is also reduced to 70%, 50%, and 30% of its size for comparative analysis. Applying the mixed contour data augmentation technique improved classification accuracy by up to 4.64% and achieved high accuracy even with a small amount of data. The results on benchmark datasets suggest that the mixed contour data augmentation technique can be applied in various fields.
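A hedged sketch of one plausible reading of "mixed contour" augmentation: extract a contour (edge) image with Canny and blend it with the original at a random ratio. The exact mixing rule is not specified in the abstract, so the blend below is only an illustration.

```python
import cv2
import numpy as np

def mixed_contour_augment(rgb_img, alpha_range=(0.5, 0.9)):
    """Blend an image with its own contour map (illustrative interpretation)."""
    gray = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)       # contour image
    edges_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
    alpha = np.random.uniform(*alpha_range)
    return cv2.addWeighted(rgb_img, alpha, edges_rgb, 1.0 - alpha, 0)

# Example with a random CIFAR-10-sized image (32x32x3, uint8)
dummy = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
augmented = mixed_contour_augment(dummy)
print(augmented.shape, augmented.dtype)   # (32, 32, 3) uint8
```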

9.
Background: In medical image analysis, the diagnosis of skin lesions remains a challenging task. Skin lesions are a common manifestation of skin cancer, which exists worldwide. Dermoscopy is one of the latest technologies used for the diagnosis of skin cancer. Challenges: Many computerized methods have been introduced in the literature to classify skin cancers; however, challenges remain, such as imbalanced datasets, low-contrast lesions, and the extraction of irrelevant or redundant features. Proposed Work: In this study, a new technique is proposed that combines conventional and deep learning frameworks. The proposed framework consists of two major tasks: lesion segmentation and classification. In the segmentation task, contrast is first improved by fusing two filtering techniques, and a color transformation is then applied to improve color discrimination of the lesion area. Subsequently, the best channel is selected and a lesion map is computed, which is converted into binary form using a thresholding function. In the classification task, two pre-trained CNN models are modified and trained using transfer learning. Deep features are extracted from both models and fused using canonical correlation analysis. Because the fusion process also introduces a few redundant features, which lowers classification accuracy, a new technique called maximum entropy score-based selection (MESbS) is proposed as a solution. The features selected through this approach are fed into a cubic support vector machine (C-SVM) for the final classification. Results: Experiments were conducted on two datasets: ISIC 2017 and HAM10000. The ISIC 2017 dataset was used for the lesion segmentation task, whereas the HAM10000 dataset was used for classification. The achieved accuracies were 95.6% and 96.7%, respectively, higher than existing techniques.
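A sketch of the classification side: canonical correlation analysis to fuse two deep feature sets, a simple entropy-based feature scoring as a stand-in for the paper's MESbS step, and a cubic-kernel SVM. The entropy scoring is only an approximation of the idea, and `feats_a` / `feats_b` are assumed to come from the two fine-tuned CNNs.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def fuse_and_select(feats_a, feats_b, labels, n_components=64, keep=80):
    # 1) CCA projects both deep feature sets into a shared, correlated space
    cca = CCA(n_components=n_components)
    za, zb = cca.fit_transform(feats_a, feats_b)
    fused = np.concatenate([za, zb], axis=1)

    # 2) Entropy-based scoring (illustrative stand-in for MESbS):
    #    keep the features whose value distributions carry the most information
    scores = []
    for j in range(fused.shape[1]):
        hist, _ = np.histogram(fused[:, j], bins=20, density=True)
        scores.append(entropy(hist + 1e-12))
    selected = np.argsort(scores)[-keep:]

    # 3) "Cubic SVM" = SVM with a degree-3 polynomial kernel
    clf = SVC(kernel="poly", degree=3, C=1.0)
    clf.fit(fused[:, selected], labels)
    return clf, selected, cca

# feats_a, feats_b: (n_samples, d1) and (n_samples, d2) deep features (hypothetical)
```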

10.
Knee Osteoarthritis (KOA) is a degenerative knee-joint disease caused by 'wear and tear' of the ligaments between the femur and tibia. Clinically, KOA is classified into four grades, from 1 to 4, based on the degradation of the ligament between these two bones, and it causes impaired movement. Identifying the space between the bones in the anterior view of a knee X-ray image is subjective and challenging, and automating this classification helps in selecting suitable treatments and customized knee implants. In this research, a new automatic classification of KOA images based on an unsupervised local center of mass (LCM) segmentation method and a deep Siamese Convolutional Neural Network (CNN) is presented. First-order statistics and the GLCM matrix are used to extract anatomical KOA features from the segmented images. The network is trained on our clinical data for 75 iterations with automatic weight updates to improve its validation accuracy. The assessment performed on the LCM-segmented KOA images shows that the network can efficiently detect knee osteoarthritis, achieving about 93.2% accuracy, along with a multi-class classification accuracy of 72.01% and a quadratic weighted Kappa of 0.86.
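A small sketch of the texture-feature step: first-order statistics plus GLCM properties from a segmented knee X-ray region, using scikit-image. The distances, angles, and chosen GLCM properties are typical defaults, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def koa_texture_features(gray_region):
    """gray_region: 2-D uint8 array of the segmented joint-space region."""
    # First-order statistics
    first_order = [
        gray_region.mean(),
        gray_region.std(),
        np.percentile(gray_region, 10),
        np.percentile(gray_region, 90),
    ]
    # GLCM at one distance and four angles, averaged over angles
    glcm = graycomatrix(gray_region, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    glcm_props = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + glcm_props)

dummy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(koa_texture_features(dummy).shape)    # (8,)
```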

11.
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are considered one of the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we bring forward a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet50, VGG16, VGG19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models are applied to the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models are used to classify tumor versus no-tumor images, while the integrated models are applied to segment the tumor region correctly. We evaluate their performance in terms of loss, accuracy, intersection over union, Jaccard distance, dice coefficient, and dice coefficient loss. Among the pre-trained models, the U-Net model achieves the highest performance, with 95% accuracy, while among the integrated pre-trained models, U-Net with ResNet-50 outperforms all others and correctly classifies and segments the tumor region.
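The segmentation metrics named above have short, standard definitions; the sketch below gives one common PyTorch formulation of the Dice coefficient, Dice loss, and IoU (Jaccard) for binary masks. The smoothing constant is a conventional choice, not taken from the paper.

```python
import torch

def dice_coefficient(pred, target, smooth=1.0):
    """pred, target: tensors of probabilities / binary masks with the same shape."""
    pred, target = pred.reshape(-1), target.reshape(-1)
    intersection = (pred * target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

def dice_loss(pred, target):
    return 1.0 - dice_coefficient(pred, target)

def iou(pred, target, smooth=1.0):
    pred, target = pred.reshape(-1), target.reshape(-1)
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + smooth) / (union + smooth)

pred = torch.rand(1, 1, 128, 128)          # e.g., sigmoid output of a U-Net
mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
print(dice_coefficient(pred, mask).item(), iou(pred, mask).item())
```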

12.
赵鹏  唐英杰  杨牧  安静 《包装工程》2020,41(5):192-196
Objective: To address the strong dependence on manual inspection and the low efficiency of traditional defect classification for non-woven fabric, a convolutional neural network classification method that meets factory requirements is proposed. Methods: First, a sample library of 70,000 non-woven fabric images covering five categories (stains, wrinkles, breaks, missing yarn, and defect-free) is built. Then a neural network with convolutional and pooling layers of different numbers of neurons is constructed; the weights are updated layer by layer with the back-propagation algorithm, the loss function is minimized by gradient descent, and finally a Softmax classifier performs the defect classification. Results: A 12-layer convolutional neural network is constructed and tested on 20,000 samples; the accuracy for defect-free samples reaches 100%, the classification accuracy for defective samples exceeds 95%, and the detection time is within 35 ms. Conclusion: The method meets the requirements for real-time defect classification of non-woven fabric on industrial production lines.

13.
Objective: To address the many types of coating defects on lithium battery electrodes, the low classification accuracy of traditional methods, and the strong dependence on manual inspection, an automatic coating-defect classification algorithm for lithium battery electrodes based on a convolutional neural network is proposed. Methods: First, the network structure and model parameters are optimized. Skip connections are then added to the network, and the multi-scale features extracted by dilated convolutions are fused with high-level features to capture more defect features; the LeakyReLU (Leaky Rectified Linear Unit) activation function is used to preserve negative-value feature information in the images. Finally, the model is trained on the constructed dataset to achieve accurate classification of coating defects. Results: Experimental results show that the method achieves a recognition accuracy of 99.34% with an average detection time of 51 ms. Conclusion: The improved method can accurately classify 18 types of coating defects on lithium battery electrodes and meets the requirements for real-time classification in industrial production.
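A sketch of the three architectural ingredients mentioned in the abstract: a dilated-convolution branch, a skip connection that fuses its multi-scale output with higher-level features, and LeakyReLU activations. Channel counts and dilation rates are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedSkipBlock(nn.Module):
    """Fuse multi-scale dilated-convolution features with a higher-level branch."""
    def __init__(self, in_ch=32, out_ch=64):
        super().__init__()
        self.dilated = nn.Sequential(                      # multi-scale (dilated) branch
            nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1),
        )
        self.high_level = nn.Sequential(                   # ordinary high-level branch
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1),
        )
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)       # skip-connection fusion

    def forward(self, x):
        return self.fuse(torch.cat([self.dilated(x), self.high_level(x)], dim=1))

# Example: a classifier head for the 18 coating-defect classes mentioned above
block = DilatedSkipBlock()
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 18))
out = head(block(torch.randn(2, 32, 56, 56)))
print(out.shape)                                           # torch.Size([2, 18])
```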

14.
In rotor rub-impact fault identification based on the theory of modal acoustic emission (AE), the training data available to a convolutional neural network (CNN) has a small sample size, and the AE sound segments are single-channel signals with little pixel-level information and strong local correlation. Because of the convolution and pooling operations of a CNN, coarse-grained and edge information is lost, and the dimensionality of the top-level information in the network is low, which easily leads to overfitting. To solve these problems, we first propose using sound spectrograms and their differential features to construct multi-channel image inputs suitable for a CNN, fully exploiting the intrinsic characteristics of the sound spectra. Then the traditional CNN structure is improved: the outputs of all convolutional layers are connected into one fused feature that contains the information of each layer and is fed into the network's fully connected layer for classification and identification. Experiments indicate that the improved CNN recognition algorithm achieves a significantly higher recognition rate than the CNN and dynamical neural network (DNN) algorithms.
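A sketch of the two ideas above: stacking a spectrogram with its first- and second-order time differences as a three-channel image, and concatenating globally pooled outputs of every convolutional stage before the fully connected classifier. The spectrogram is a random placeholder and the layer widths are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def multichannel_input(spec):
    """Stack a spectrogram with its 1st- and 2nd-order time differences as 3 channels."""
    d1 = np.diff(spec, n=1, axis=1, prepend=spec[:, :1])
    d2 = np.diff(d1, n=1, axis=1, prepend=d1[:, :1])
    return torch.tensor(np.stack([spec, d1, d2]), dtype=torch.float32)

class FusedLayerCNN(nn.Module):
    """Concatenate pooled outputs of every conv stage before the FC classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(16 + 32 + 64, n_classes)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in (f1, f2, f3)]
        return self.fc(torch.cat(pooled, dim=1))

spec = np.abs(np.random.randn(64, 128))        # placeholder spectrogram (freq x time)
x = multichannel_input(spec).unsqueeze(0)      # (1, 3, 64, 128)
print(FusedLayerCNN()(x).shape)                # torch.Size([1, 4])
```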

15.
Owing to the difficulties of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net design and ResNet50, followed by conventional classification strategies. In this study, ResNet50 achieved an accuracy of 98.96% and the 3D U-Net scored 97.99% among the different deep learning methods, while a traditional Convolutional Neural Network (CNN) gave 97.90% accuracy on the 3D MRI. In addition, an image fusion approach combines the multimodal images into a fused image in order to extract more features from the medical images. We also evaluated the loss function using several Dice-based measures and report the Dice results on specific test cases: the average of the Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were 0.0211 and 0.5867 using patch-level predictions. A software integration pipeline was also built to deploy the trained model to a web server so that it can be accessed from a software system through a Representational State Transfer (REST) API. Finally, the suggested models were validated through the Area Under the Curve-Receiver Operating Characteristic (AUC-ROC) curve and the confusion matrix, and compared with existing research articles to understand the underlying problem. Through comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps. The proposed model can be adapted in daily life and the healthcare domain to identify infected regions and brain cancer across various imaging modalities.
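A short sketch of the evaluation metrics mentioned above (confusion matrix, sensitivity, specificity, AUC-ROC) computed with scikit-learn from binary patch-level predictions. The arrays here are random placeholders for the model's actual outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder ground-truth labels and predicted probabilities for patches
y_true = np.random.randint(0, 2, size=500)
y_prob = np.clip(y_true * 0.6 + np.random.rand(500) * 0.4, 0, 1)   # synthetic scores
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true-positive rate (recall)
specificity = tn / (tn + fp)          # true-negative rate
auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```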

16.
Handwritten character recognition systems are used in every field of life nowadays, including shopping malls, banks, and educational institutes. Urdu is the national language of Pakistan and the fourth most spoken language in the world; however, it is still challenging to recognize Urdu handwritten characters owing to their cursive nature. Our paper presents a Convolutional Neural Network (CNN) model for offline and online Urdu handwritten alphabet recognition (UHAR). Our research also contributes an Urdu handwritten dataset (UHDS) to empower future work in this field. For offline systems, optical readers are used to extract the alphabets, while diagonal-based extraction methods are implemented in online systems. Moreover, our research tackles the lack of comprehensive and standard Urdu alphabet datasets for research on Urdu text recognition. To this end, we collected 1000 handwritten samples for each alphabet, a total of 38,000 samples, from contributors aged 12 to 25, to train our CNN model through online and offline media. We then carried out detailed experiments for character recognition, as detailed in the results. The proposed CNN model outperformed previously published approaches.

17.
Abnormal growth of brain tissue is the real cause of a brain tumor, and a strategy for diagnosing brain tumors at an early stage is one of the key steps in saving a patient's life. Manual segmentation of brain tumor magnetic resonance images (MRIs) takes time, and the results vary significantly in low-level features. To address this issue, we propose a ResNet-50 feature extractor coupled with a multilevel deep convolutional neural network (CNN) for reliable image segmentation that takes the low-level features of MRI into account. In this model, features are extracted through the ResNet-50 architecture and the resulting feature maps are fed to the multi-level CNN model. For the classification process, we collected MRI scans of 2,043 patients with normal, benign, and malignant tumors. Three models, a CNN, a multi-level CNN, and the ResNet-50-based multi-level CNN, are used for detection and classification of brain tumors. All model results are reported in terms of precision (P), recall (R), accuracy (Acc), and F1-score (F1-S). The obtained average results are much better than those of existing methods. This modified transfer learning architecture may help radiologists and doctors as a better diagnostic system for tumors.
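A sketch of the "ResNet-50 feature extractor feeding a multi-level CNN" idea: the backbone is truncated after an intermediate stage, and its feature maps are passed to a small convolutional head with three output classes (normal, benign, malignant). The truncation point and head design are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNetMultiLevelCNN(nn.Module):
    def __init__(self, n_classes=3):                     # normal / benign / malignant
        super().__init__()
        resnet = models.resnet50(weights="IMAGENET1K_V1")
        # Keep the stem through layer3 as a frozen feature extractor (-> 1024 channels)
        self.extractor = nn.Sequential(*list(resnet.children())[:-3])
        for p in self.extractor.parameters():
            p.requires_grad = False
        # Small trainable multi-level CNN head on top of the ResNet feature maps
        self.head = nn.Sequential(
            nn.Conv2d(1024, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.head(self.extractor(x))

model = ResNetMultiLevelCNN()
print(model(torch.randn(2, 3, 224, 224)).shape)          # torch.Size([2, 3])
```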

18.
The compressive strength of concrete is a significant factor in assessing building structural health and safety; therefore, various methods have been developed to evaluate the compressive strength of concrete structures. However, previous methods are costly, time-consuming, and unsafe. To address these drawbacks, this paper proposes a digital-vision-based model for evaluating concrete compressive strength using a deep convolutional neural network (DCNN). The proposed model offers an alternative approach to evaluating concrete strength and contributes to improving efficiency and accuracy. The model was developed with 4,000 digital images and 61,996 images extracted from video recordings of concrete samples. The experimental results show a root mean square error (RMSE) of 3.56 MPa, demonstrating that the proposed model can predict concrete strength from digital images of the surface and overcome the previous limitations. This experiment provides a basis that can be extended to future research on image analysis techniques and artificial neural networks for the diagnosis of concrete building structures.
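A brief sketch of image-based strength regression: a pretrained backbone with a single regression output trained with MSE, and RMSE reported in MPa. ResNet-18 is used purely for illustration; the paper's DCNN architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Regression model: one continuous output (compressive strength in MPa)
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, strengths_mpa):
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = criterion(pred, strengths_mpa)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def rmse(loader):
    model.eval()
    se, n = 0.0, 0
    for images, strengths_mpa in loader:          # loader yields hypothetical image batches
        pred = model(images).squeeze(1)
        se += ((pred - strengths_mpa) ** 2).sum().item()
        n += len(strengths_mpa)
    return (se / n) ** 0.5                        # root mean square error in MPa
```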

19.
Human action recognition in complex environments is challenging. Recently, sparse representation has achieved excellent results on human action recognition under different conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class serve as a dictionary to express a query sample, and the minimal reconstruction error indicates its class. However, learning a discriminative dictionary is still difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results compared with other state-of-the-art models.
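A sketch of plain sparse representation classification (SRC) over CNN features, using an L1-regularized solver from scikit-learn: the dictionary is built from training features, a sparse code is computed for the query, and the class with the smallest class-wise reconstruction residual wins. The representation-constrained and coefficient-incoherence terms of the paper's modified model are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(query, dictionary, dict_labels, alpha=0.01):
    """query: (d,) CNN feature; dictionary: (d, n) training features as columns."""
    # Sparse code: minimize ||query - D x||^2 + alpha * ||x||_1
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(dictionary, query)
    x = lasso.coef_

    # Class-wise reconstruction residuals: keep only the coefficients of one class
    residuals = {}
    for c in np.unique(dict_labels):
        xc = np.where(dict_labels == c, x, 0.0)
        residuals[c] = np.linalg.norm(query - dictionary @ xc)
    return min(residuals, key=residuals.get)

# Hypothetical example: 512-dim CNN features, 3 action classes, 30 training samples
rng = np.random.default_rng(0)
D = rng.normal(size=(512, 30))
labels = np.repeat([0, 1, 2], 10)
print(src_predict(D[:, 4] + 0.01 * rng.normal(size=512), D, labels))   # expected: 0
```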

20.
Because of their nutritional and therapeutic value, plants are regarded as important and are a main source of humankind's energy supply. Plant pathogens can affect the leaves at any time during crop cultivation, leading to substantial losses in crop productivity and market value. In the agriculture industry, the identification of fungal diseases plays a vital role; however, it requires immense labor, long planning time, and extensive knowledge of plant pathogens. Computerized approaches for plant disease identification have been developed and tested by different researchers, and in many cases they have produced important results. Therefore, the proposed study presents a new framework for the recognition of fruit and vegetable diseases. The work comprises two phases. In phase I, an improved localization model is presented that combines two deep learning models: You Only Look Once (YOLO)v2 and an Open Neural Network Exchange (ONNX) model. The localization model is constructed by combining deep features extracted from the ONNX model, where feature learning is performed through the convolutional-05 layer, and passing them as input to the YOLOv2 model. The localized images are then passed as input to classify the different types of plant diseases. The classification model is constructed by ensembling deep features: features are extracted from a pre-trained EfficientNetB0 model and supplied to seven further layers of a convolutional neural network, including one feature-input layer, one ReLU layer, one batch-normalization layer, and two fully connected layers. The proposed model classifies the input plant images into their associated labels with prediction scores of approximately 95%, which is far better than current published work in this domain.
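A sketch of the classification half: features from a pretrained EfficientNetB0 backbone feeding a small head with ReLU, batch normalization, and two fully connected layers. The number of disease classes and layer widths are assumptions; the YOLOv2/ONNX localization stage is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

class PlantDiseaseClassifier(nn.Module):
    def __init__(self, n_classes=10):                    # number of disease labels assumed
        super().__init__()
        effnet = models.efficientnet_b0(weights="IMAGENET1K_V1")
        self.backbone = effnet.features                  # pretrained EfficientNetB0 features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(                       # ReLU, batch-norm and two FC layers
            nn.Linear(1280, 512),
            nn.ReLU(),
            nn.BatchNorm1d(512),
            nn.Linear(512, n_classes),
        )

    def forward(self, x):
        feats = self.pool(self.backbone(x)).flatten(1)   # (B, 1280) deep features
        return self.head(feats)

model = PlantDiseaseClassifier()
logits = model(torch.randn(4, 3, 224, 224))
scores = torch.softmax(logits, dim=1)                    # per-class prediction scores
print(scores.shape)                                      # torch.Size([4, 10])
```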
