Similar Articles
 20 similar articles found (search time: 31 ms)
1.
马敏  王涛 《计量学报》2021,42(2):232-238
To address the difficulty that traditional feature-extraction methods have in extracting effective features from ECT lubricating-oil monitoring data, a dual-channel network model, CNN-MSLSTM, based on a convolutional neural network (CNN) and a multi-scale long short-term memory (MSLSTM) network, is proposed. The multi-scale…

2.
The convolutional neural network (CNN) is a widely used deep neural network. Compared with shallow neural networks, CNNs achieve better performance and faster computation in some image-recognition tasks and can effectively avoid training becoming stuck in local extrema. CNNs have now been applied in many fields, including fault diagnosis, where they have improved both the level and the efficiency of diagnosis. In this paper, a two-stream convolutional neural network (TCNN) model is proposed. Using the short-time Fourier transform (STFT) spectrogram and the Mel-frequency cepstrum coefficients (MFCC) of acoustic emission (AE) signals as the two input streams, an AE signal processing and classification system is constructed and compared with traditional AE recognition methods and conventional CNN networks. The experimental results illustrate the effectiveness of the proposed model. Compared with a single-stream convolutional neural network and a simple long short-term memory (LSTM) network, the performance of the TCNN, which combines spatial and temporal features, is greatly improved; its accuracy reaches 100% on the current database, 12% higher than that of the single-stream network.
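As an illustration of the two-stream idea, below is a minimal PyTorch sketch in which one branch takes an STFT spectrogram and the other an MFCC matrix; the layer sizes and class count are assumptions, not the authors' exact TCNN configuration.

```python
# Minimal two-stream CNN sketch; sizes are illustrative, not the paper's TCNN.
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """Small convolutional branch used for each input stream."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class TwoStreamCNN(nn.Module):
    """Fuses the STFT and MFCC branches by concatenation before classification."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.stft_branch = StreamCNN()
        self.mfcc_branch = StreamCNN()
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, stft_img, mfcc_img):
        fused = torch.cat([self.stft_branch(stft_img),
                           self.mfcc_branch(mfcc_img)], dim=1)
        return self.classifier(fused)

# Example forward pass on dummy inputs (batch of 8 AE segments).
model = TwoStreamCNN()
logits = model(torch.randn(8, 1, 128, 128), torch.randn(8, 1, 40, 128))
print(logits.shape)  # torch.Size([8, 4])
```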

3.
We investigated whether a convolutional neural network (CNN) can enhance the usability of computer-aided detection (CAD) of chest radiographs for various pulmonary abnormal lesions. The numbers of normal and abnormal patients were 6055 and 3463, respectively. Two radiologists delineated regions of interest for lesions and labeled the disease types as ground truth. The datasets were split into training, tuning, and testing sets at a ratio of 7:1:2. The test set consisted of 1214 normal and 690 abnormal cases selected at random, and five-fold cross-validation was performed on our datasets. For classification of normal versus abnormal, we developed a CNN based on DenseNet169; for abnormality detection, You Only Look Once (YOLO) v2 with DenseNet was used. Detection and classification of normal radiographs and five classes of disease (nodule[s], consolidation, interstitial opacity, pleural effusion, and pneumothorax) were analyzed. Our CNN model classified chest radiographs as normal or abnormal with an accuracy of 97.8%. For the abnormal cases, the F1 scores were 75.2 ± 2.28% for nodules, 55.0 ± 4.3% for consolidation, 78.2 ± 7.85% for interstitial opacity, 81.6 ± 2.07% for pleural effusion, and 70.0 ± 7.97% for pneumothorax, respectively. In addition, we compared our method with RetinaNet on nodules only; at a cutoff of 0.5 on the free-response operating characteristic curve, our method and RetinaNet achieved 83.45% and 80.55%, respectively. Our algorithm demonstrated viable detection and disease-classification capacity and could be used for CAD of lung diseases on chest radiographs.
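A hedged sketch of the normal-versus-abnormal classification stage only: torchvision's DenseNet169 with its head replaced for two classes. The YOLO v2 detection stage and the actual training setup are not reproduced; batch size and learning rate are assumptions.

```python
# DenseNet169 binary classifier sketch (older torchvision uses pretrained=False).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet169(weights=None)          # load pretrained weights in practice
model.classifier = nn.Linear(model.classifier.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of chest radiographs.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])               # 0 = normal, 1 = abnormal
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```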

4.
《工程(英文)》2021,7(12):1786-1796
This paper presents a vision-based crack detection approach for concrete bridge decks using an integrated one-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) method in the image frequency domain. The so-called 1D-CNN-LSTM algorithm is trained using thousands of images of cracked and non-cracked concrete bridge decks. To improve training efficiency, images are first transformed into the frequency domain during a preprocessing phase, and the algorithm is then calibrated using the flattened frequency data. LSTM is used to improve the performance of the developed network on long sequence data. The accuracy of the developed model is 99.05%, 98.9%, and 99.25% for the training, validation, and testing data, respectively. An implementation framework is further developed for future application of the trained model to large-scale images. The proposed 1D-CNN-LSTM method outperforms existing deep learning methods in terms of accuracy and computation time, and its fast implementation makes it a promising tool for real-time crack detection.
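A minimal sketch of the described pipeline, assuming each image is converted to the frequency domain, flattened into a 1D sequence, and passed through 1D convolutions followed by an LSTM; layer sizes are illustrative rather than the paper's exact 1D-CNN-LSTM.

```python
import torch
import torch.nn as nn

class CNN1DLSTM(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (batch, 1, length)
        h = self.conv(x)                  # (batch, 32, length/4)
        h = h.permute(0, 2, 1)            # (batch, time, features) for the LSTM
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])

# Frequency-domain preprocessing of a dummy grayscale crack image.
image = torch.rand(1, 64, 64)
spectrum = torch.fft.fft2(image).abs().flatten().unsqueeze(0).unsqueeze(0)
model = CNN1DLSTM()
print(model(spectrum).shape)              # torch.Size([1, 2])
```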

5.
Lung cancer is the leading cause of cancer-related death owing to its destructive nature and delayed detection at advanced stages. Early recognition of lung cancer is essential to increase patient survival and remains a crucial problem in the healthcare sector. Computer-aided diagnosis (CAD) models can be designed to effectively identify and classify lung cancer from medical images, and recently developed deep learning (DL) models provide a path to accurate lung nodule classification. This article therefore presents a deer hunting optimization with deep convolutional neural network for lung cancer detection and classification (DHODCNN-LCC) model. The proposed DHODCNN-LCC technique first applies two pre-processing stages, contrast enhancement and noise removal. Features are then extracted from the pre-processed images using the RefineDet model with the Nadam optimizer, and a denoising stacked autoencoder (DSAE) model is employed for lung nodule classification. Finally, the deer hunting optimization algorithm (DHOA) is used for hyperparameter tuning of the DSAE model, which improves classification performance. The DHODCNN-LCC technique was validated experimentally on a benchmark dataset and assessed from various aspects; the results show that it outperforms recent approaches on the measures considered.

6.
Lightweight deep convolutional neural networks (CNNs) offer a good way to achieve fast and accurate image-guided diagnostic procedures for COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared with traditional deep learning models, lightweight CNNs have shown strong performance in real-time vision applications on mobile devices with limited hardware resources. Four lightweight deep learning models, MobileNets, ShuffleNets, MENet, and MnasNet, are employed to identify the health status of lungs from US images. The public POCUS image dataset was used to validate the proposed COVID-LWNet framework. Three classes were investigated in this study: COVID-19 infection, bacterial pneumonia, and healthy lungs. The results show that the proposed MnasNet classifier achieved the best accuracy and the shortest training time, 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of using the proposed COVID-LWNet framework as a new mobile-based radiological tool for clinical diagnosis of COVID-19 and other lung diseases.
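A hedged sketch of one lightweight branch of this kind of framework, using torchvision's MobileNetV2 fine-tuned for the three reported classes (COVID-19, bacterial pneumonia, healthy); hyperparameters and input size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=None)        # lightweight backbone; load pretrained weights in practice
model.classifier[1] = nn.Linear(model.last_channel, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One dummy training step on a mini-batch of ultrasound frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```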

7.
8.
To find a better way to screen for early lung cancer, and motivated by the great success of deep learning, we empirically investigate the challenge of classifying lung nodules in computed tomography (CT) in an end-to-end manner. Multi-view convolutional neural networks (MV-CNN) are proposed in this article for lung nodule classification. Unlike traditional CNNs, an MV-CNN takes multiple views of each nodule as input. We carry out a binary classification (benign and malignant) and a ternary classification (benign, primary malignant, and metastatic malignant) using the Lung Image Database Consortium and Image Database Resource Initiative database. The results show that, for both binary and ternary classification, the multi-view strategy produces higher accuracy than the single-view method, even for cases that are over-fitted. Our model achieves error rates of 5.41% and 13.91% for binary and ternary classification, respectively. Finally, the receiver operating characteristic curve and the t-distributed stochastic neighbor embedding algorithm are used to analyze the models. The results reveal that the deep features learned by the proposed model are more separable than features from the image space and the multi-view strategies, so researchers can obtain better representations. © 2017 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 27, 12–22, 2017
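A minimal multi-view CNN sketch: a shared convolutional backbone is applied to several views of a nodule and the per-view features are fused (here by element-wise max) before classification; sizes and the fusion rule are assumptions, not the authors' exact MV-CNN.

```python
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, views):             # views: (batch, n_views, 1, H, W)
        feats = [self.backbone(views[:, i]) for i in range(views.size(1))]
        fused = torch.stack(feats, dim=0).max(dim=0).values   # view pooling
        return self.classifier(fused)

model = MultiViewCNN(num_classes=3)        # benign / primary / metastatic
nodule_views = torch.randn(4, 3, 1, 64, 64)
print(model(nodule_views).shape)           # torch.Size([4, 3])
```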

9.
Rolling-bearing fault signals are non-stationary, nonlinear, and easily contaminated by background noise. Leveraging the strengths of deep learning, a rolling-bearing fault diagnosis method based on a convolutional neural network (CNN) is proposed. One-dimensional (1D) vibration signals measured by multiple sensors under different fault conditions are converted into two-dimensional (2D) grayscale images that serve as the network input and are split into training and test sets. The training set is fed into the CNN, which automatically extracts features; the test set is used to verify the effectiveness of the trained network and to identify rolling-bearing faults. The method does not rely on manual experience or signal-processing techniques for prior feature extraction. Experimental analysis shows that, compared with classical support vector machine and probabilistic neural network methods, the proposed method achieves higher and more stable recognition accuracy.
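A hedged sketch of the 1D-to-2D conversion described above: a vibration segment is min-max scaled to [0, 255] and reshaped into a square grayscale image for the CNN. Segment length and image size are assumptions.

```python
import numpy as np

def signal_to_grayscale(segment: np.ndarray, size: int = 64) -> np.ndarray:
    """Reshape a 1D vibration segment of length size*size into a 2D grayscale image."""
    assert segment.size == size * size, "segment length must equal size**2"
    lo, hi = segment.min(), segment.max()
    scaled = (segment - lo) / (hi - lo + 1e-12)        # normalise to [0, 1]
    return np.round(scaled * 255).reshape(size, size).astype(np.uint8)

# Example: one 4096-point segment of a simulated bearing signal.
rng = np.random.default_rng(0)
segment = np.sin(np.linspace(0, 50 * np.pi, 4096)) + 0.3 * rng.standard_normal(4096)
image = signal_to_grayscale(segment)
print(image.shape, image.dtype)            # (64, 64) uint8
```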

10.
李海山  唐海艳  梁栋  韩军 《包装工程》2021,42(23):170-177
Objective: To train a convolutional neural network on color-histogram features extracted from sample images, so that image color defects can be detected quickly and with high accuracy. Methods: Standard images were converted from the RGB color space to the HSV color space, and training and test samples were generated by varying the H, S, and V components. Color histograms were non-uniformly quantized in the HSV color space to obtain the histogram features of all training and test samples. A convolutional neural network was trained on these features and then applied to the test samples; detection speed and accuracy were evaluated, and the method was compared with per-pixel, superpixel, BP neural network, and support vector machine approaches. Results: For 512×512 color images, the average detection time of the CNN for a single image was about 57.66 ms; with 50,000 training images, the CNN achieved 99.77% accuracy on 10,000 test samples. Conclusion: While maintaining high accuracy, the CNN method greatly improves detection speed and has good application value for online detection of color-difference defects in printed products.
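A hedged sketch of the non-uniform HSV histogram feature: hue is quantized more finely than saturation and value, and the three histograms are concatenated into one feature vector. The bin counts are illustrative assumptions, not the paper's.

```python
import numpy as np
import cv2  # OpenCV for the RGB-to-HSV conversion

def hsv_histogram_feature(bgr_image: np.ndarray,
                          h_bins: int = 16, s_bins: int = 4, v_bins: int = 4) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Non-uniform quantisation: more bins for hue than for saturation and value.
    h_hist = np.histogram(h, bins=h_bins, range=(0, 180))[0]
    s_hist = np.histogram(s, bins=s_bins, range=(0, 256))[0]
    v_hist = np.histogram(v, bins=v_bins, range=(0, 256))[0]
    feature = np.concatenate([h_hist, s_hist, v_hist]).astype(np.float32)
    return feature / feature.sum()          # normalise so features are comparable

image = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
print(hsv_histogram_feature(image).shape)   # (24,)
```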

11.
Deep metric learning has recently become a general method for person re-identification (ReID). Existing methods train ReID models with various loss functions to learn feature representations and identify pedestrians. However, the interaction between person features and classification vectors during training is rarely considered, even though the distribution of pedestrian features greatly affects model convergence and pedestrian similarity computation in the test phase. In this paper, we formulate an improved softmax function to learn pedestrian features and classification vectors. Our method encourages pedestrian feature representations to be scattered across the coordinate space and embedded on a hypersphere to solve the classification problem. We then propose an end-to-end convolutional neural network (CNN) framework with the improved softmax function to improve the quality of pedestrian features. Finally, experiments are performed on four challenging datasets, and the results demonstrate that our work is competitive with the state of the art.
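The abstract does not give the exact form of the improved softmax, so the following is only an illustrative normalized-softmax head in the same spirit: features and class vectors are L2-normalized (an embedding on a hypersphere) and scaled before cross-entropy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereSoftmax(nn.Module):
    """Normalised softmax head: cosine logits between unit features and class vectors."""
    def __init__(self, feat_dim: int, num_ids: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_ids, feat_dim))
        self.scale = scale

    def forward(self, feats, labels):
        logits = self.scale * F.linear(F.normalize(feats),
                                       F.normalize(self.weight))
        return F.cross_entropy(logits, labels)

# Dummy usage with 128-D pedestrian features and 100 identities.
head = HypersphereSoftmax(feat_dim=128, num_ids=100)
loss = head(torch.randn(32, 128), torch.randint(0, 100, (32,)))
loss.backward()
```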

12.
Manual detection of small uncalcified pulmonary nodules (diameter < 4 mm) in thoracic computed tomography (CT) scans is a tedious and error-prone task. Automatic detection of disperse micronodules is thus highly desirable for improved characterization of fatal and incurable occupational pulmonary diseases. Here, we present a novel computer-assisted detection (CAD) scheme specifically dedicated to detecting micronodules. The proposed scheme consists of a candidate-screening module and a false-positive (FP) reduction module. The candidate-screening module is initiated by a lung segmentation algorithm and is followed by a combination of 2D/3D feature-based thresholding parameters to identify plausible micronodules. The FP reduction module employs a 3D convolutional neural network (CNN) to classify each identified candidate; it automatically encodes discriminative representations by exploiting the volumetric information of each candidate. A set of 872 micronodules in 598 CT scans marked by at least two radiologists was extracted from the Lung Image Database Consortium and Image Database Resource Initiative to test our CAD scheme. The CAD scheme achieves a detection sensitivity of 86.7% (756/872) with only 8 FPs/scan and an AUC of 0.98. Our proposed CAD scheme efficiently identifies micronodules in thoracic scans with only a small number of FPs, and the experimental results provide evidence that the features generated automatically by the 3D CNN are highly discriminative, making it a well-suited FP reduction module for a CAD scheme.
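A minimal 3D CNN sketch for the false-positive reduction step: each candidate is a small volumetric patch classified as micronodule versus non-nodule. Patch size and channel counts are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FPReduction3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 2),                # candidate is / is not a micronodule
        )

    def forward(self, patch):                # patch: (batch, 1, D, H, W)
        return self.net(patch)

model = FPReduction3DCNN()
candidates = torch.randn(6, 1, 32, 32, 32)   # six candidate volumes
print(model(candidates).shape)               # torch.Size([6, 2])
```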

13.
Diabetic retinopathy (DR) diagnosis from digital fundus images requires clinical experts to recognize the presence and importance of many intricate features, a task that is difficult and time-consuming for ophthalmologists. Many computer-aided diagnosis (CAD) systems have therefore been developed to automate this DR screening process. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer-learning-based convolutional neural network (PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptual-oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to obtain high classification results. The PCNN model operates in four main phases. First, the training of the proposed PCNN uses the expected gradient length (EGL) to decrease the image-labeling effort during CNN training. Second, the most informative patches and images are automatically selected using a small number of labeled training samples. Third, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourth, the DR-related lesions involved in the classification task, such as micro-aneurysms, hemorrhages, and exudates, are detected and then used for the recognition of DR. The PCNN model is pre-trained using a high-end graphics processing unit (GPU) on the publicly available Kaggle benchmark. The results demonstrate that the CAD-DR system outperforms other state-of-the-art systems in terms of sensitivity (SE), specificity (SP), and accuracy (ACC): on the test set of 30,000 images, it achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. This result indicates that the proposed CAD-DR system is appropriate for screening the severity level of DR.
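A hedged sketch of the transfer-learning portion only, using ResNet-18 as a stand-in backbone (the abstract does not name the network) fine-tuned to the five DR stages; the EGL-based active learning and lesion masks are not reproduced, and the hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # load ImageNet weights in practice
model.fc = nn.Linear(model.fc.in_features, 5)    # five DR severity stages

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

fundus = torch.randn(4, 3, 224, 224)             # dummy fundus images
stages = torch.randint(0, 5, (4,))
loss = criterion(model(fundus), stages)
loss.backward()
optimizer.step()
```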

14.
To address the problems of single-type feature extraction and low classification accuracy in speech emotion recognition, an emotion recognition method that fuses 3D and 1D multi-features is proposed and the feature-extraction algorithm is improved. In the 3D network, spatial feature learning and temporal-dependency construction are considered jointly: a bilinear convolutional neural network (BCNN) extracts the spatial features, and a long short-term memory network (Sho…

15.
The adaptive momentum (Adam) estimation optimizer tends to trap deep long short-term memory (LSTM) networks in local minima, which lowers fault-diagnosis accuracy, while the whale optimization algorithm (WOA) searches too large a region, which lowers optimization efficiency. To address these two problems, the WOA is improved (improved whale optimization algorithm, IWOA) and used to optimize the LSTM, yielding a new IWOA-LSTM method. The proposed method endows the WOA with a momentum-driving capability inherited from the momentum term of the Adam optimizer in the LSTM, which narrows the search region of the cell weights and thus improves the efficiency of weight optimization; the IWOA is then combined with the Adam optimizer to jointly update the weight matrices, escaping local minima and improving fault-diagnosis accuracy. In addition, the influence of the learning rate and the number of iterations on the diagnostic accuracy of IWOA-LSTM is systematically analyzed to achieve efficient fault diagnosis. Analysis of measured inner-race, outer-race, and rolling-element faults of rolling bearings shows that the fault-diagnosis efficiency of IWOA-LSTM is 47.60%, 38.06%, 37.62%, 26.82%, and 22.71% higher than that of a shallow BP neural network (BPNN), a deep convolutional neural network (CNN), a deep gated recurrent unit (GRU) network, an LSTM, and a WOA-optimized LSTM (WOA-LSTM), respectively, while achieving a diagnostic accuracy of up to 97%.
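For reference, a compact sketch of the plain WOA update that IWOA builds on; the momentum coupling with Adam described above is not reproduced, and the encircling/spiral phases follow the standard WOA formulation with illustrative settings.

```python
import numpy as np

def woa_minimize(loss, dim, n_whales=20, iters=100, bound=5.0, seed=0):
    """Simplified whale optimization: whales encircle or spiral toward the best solution."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-bound, bound, (n_whales, dim))
    best = min(pos, key=loss).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                     # linearly decreasing coefficient
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:                    # encircling / searching phase
                pos[i] = best - A * np.abs(C * best - pos[i])
            else:                                     # spiral bubble-net phase
                spiral = rng.uniform(-1, 1, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(spiral) * np.cos(2 * np.pi * spiral) + best
            pos[i] = np.clip(pos[i], -bound, bound)
        candidate = min(pos, key=loss)
        if loss(candidate) < loss(best):
            best = candidate.copy()
    return best

# Example: minimise a simple sphere function as a stand-in for an LSTM loss surface.
print(woa_minimize(lambda w: float(np.sum(w ** 2)), dim=4))
```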

16.
According to the theory of modal acoustic emission (AE), when a convolutional neural network (CNN) is used to identify rotor rub-impact faults, the training data have a small sample size, and each AE sound segment is a single-channel signal with little pixel-level information and strong local correlation. Because of the convolution and pooling operations of the CNN, coarse-grained and edge information is lost, and the dimensionality of the top-level representation in the CNN is low, which can easily lead to overfitting. To solve these problems, we first propose using sound spectrograms and their differential features to construct multi-channel image input features suitable for a CNN, fully exploiting the intrinsic characteristics of the sound spectra. The traditional CNN structure is then improved: the outputs of all convolutional layers are concatenated into a fused feature that contains information from each layer and is fed into the network's fully connected layer for classification and identification. Experiments indicate that the improved CNN recognition algorithm achieves a significantly higher recognition rate than the CNN and dynamical neural network (DNN) algorithms.
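A minimal sketch of the multi-layer fusion idea: the pooled output of every convolutional stage is concatenated into one fused vector before the fully connected classifier. Channel counts and the use of global pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiLayerFusionCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x):                   # x: multi-channel spectrogram image
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        fused = torch.cat([self.pool(f).flatten(1) for f in (f1, f2, f3)], dim=1)
        return self.fc(fused)

model = MultiLayerFusionCNN(in_channels=3)     # spectrogram plus differential channels
print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 2])
```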

17.
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results on human action recognition under different conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class serve as a dictionary to express the query sample, and the minimal reconstruction error indicates the corresponding class. However, learning a discriminative dictionary is still difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results in comparison with other state-of-the-art models.
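A hedged sketch of the basic sparse-representation classification rule on top of CNN features: the query feature is sparsely coded over the stacked training features of all classes, and the class with the smallest reconstruction residual wins. The paper's representation-constrained and incoherence terms are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(query, dicts, alpha=0.01):
    """dicts: {class_label: matrix of shape (feat_dim, n_samples)}."""
    D = np.hstack(list(dicts.values()))                  # global dictionary
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, query)                                  # sparse code of the query
    x = coder.coef_
    errors, start = {}, 0
    for label, Dc in dicts.items():                      # class-wise reconstruction residuals
        n = Dc.shape[1]
        xc = np.zeros_like(x)
        xc[start:start + n] = x[start:start + n]
        errors[label] = np.linalg.norm(query - D @ xc)
        start += n
    return min(errors, key=errors.get)

rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((64, 20)) for c in ("walk", "run", "jump")}
query = dicts["run"][:, 0] + 0.05 * rng.standard_normal(64)
print(src_classify(query, dicts))                         # likely "run"
```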

18.
Image recognition has always been a hot research topic in the scientific community and industry. The emergence of convolutional neural networks (CNNs) has made this technology a research focus in computer vision, especially image recognition, but recognition results depend heavily on the number and quality of training samples. Recently, DCGAN has become a leading method for generating images, sounds, and videos. In this paper, DCGAN is used to generate samples that are difficult to collect, and an efficient design method for the generative model is proposed. We then combine DCGAN with a CNN: DCGAN generates samples that are used to train a CNN-based image recognition model. This method can enhance the classification model and effectively improve the accuracy of image recognition. In the experiments, radar profiles from four categories were used as the dataset and satisfactory classification performance was achieved. This paper applies image recognition technology to the meteorological field.
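A compact DCGAN-style generator sketch for 64x64 single-channel images (for example, radar profiles); the discriminator and training loop are omitted, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0), nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, 1, 4, 2, 1), nn.Tanh(),   # 64x64 output in [-1, 1]
        )

    def forward(self, z):                     # z: (batch, z_dim, 1, 1)
        return self.net(z)

gen = DCGANGenerator()
fake_samples = gen(torch.randn(16, 100, 1, 1))   # synthetic samples to augment the CNN training set
print(fake_samples.shape)                         # torch.Size([16, 1, 64, 64])
```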

19.
Abnormal growth of brain tissue is the underlying cause of brain tumors, and diagnosing brain tumors at an early stage is a key step in saving a patient's life. Manual segmentation of brain tumor magnetic resonance images (MRIs) takes time, and results vary significantly for low-level features. To address this issue, we propose a ResNet-50 feature extractor combined with a multi-level deep convolutional neural network (CNN) for reliable image segmentation that accounts for the low-level features of MRI. In this model, features are extracted by the ResNet-50 architecture and the resulting feature maps are fed to the multi-level CNN. For the classification task, we collected MRIs from a total of 2043 patients covering normal, benign, and malignant cases. Three models, a CNN, a multi-level CNN, and a ResNet-50-based multi-level CNN, were used for the detection and classification of brain tumors. All model results are reported in terms of precision (P), recall (R), accuracy (Acc), and F1-score (F1-S), and the obtained average results are much better than those of existing methods. This modified transfer-learning architecture could assist radiologists and doctors as a more effective system for tumor diagnosis.
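A hedged sketch of the described pipeline: ResNet-50 without its final layers produces feature maps that are passed to a small CNN head for the three tumor classes; the head's exact multi-level structure is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet50(weights=None)             # load pretrained weights in practice
feature_extractor = nn.Sequential(*list(resnet.children())[:-2])  # outputs (batch, 2048, 7, 7)

head = nn.Sequential(                              # small CNN head on top of ResNet-50 features
    nn.Conv2d(2048, 256, 1), nn.ReLU(),
    nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 3),                              # normal / benign / malignant
)

mri = torch.randn(2, 3, 224, 224)                  # dummy MRI slices
with torch.no_grad():
    logits = head(feature_extractor(mri))
print(logits.shape)                                # torch.Size([2, 3])
```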

20.
To address the high false-alarm rate of bearing fault diagnosis for wind turbines under complex operating conditions, an end-to-end hybrid deep learning framework is proposed: a one-dimensional convolutional recurrent neural network based on multiple wavelet transforms. First, multiple wavelet transforms produce several time-frequency matrices so that signal features are fully extracted; then an extended LSTM extracts the information of the multi-channel time-frequency matrices at different time steps, capturing the spatio-temporal characteristics of the time-frequency data; finally, a global pooling layer and a classification layer are used for fault…
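A hedged sketch of the first step described above: several continuous wavelet transforms of the same vibration segment are stacked into a multi-channel time-frequency tensor (using PyWavelets); the wavelet choices and scales are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def multi_wavelet_tf(signal: np.ndarray, wavelets=("morl", "mexh"),
                     scales=np.arange(1, 65)) -> np.ndarray:
    """Return a (n_wavelets, n_scales, n_samples) time-frequency tensor."""
    channels = []
    for w in wavelets:
        coeffs, _ = pywt.cwt(signal, scales, w)     # (n_scales, n_samples)
        channels.append(np.abs(coeffs))
    return np.stack(channels, axis=0)

# Example: a 1-second synthetic bearing signal sampled at 2 kHz.
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)
tf_tensor = multi_wavelet_tf(signal)
print(tf_tensor.shape)                               # (2, 64, 2000)
```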
