20 similar documents found; search time: 0 ms
1.
Jieren Cheng Yifu Liu Xiangyan Tang Victor S. Sheng Mengyang Li Junqi Li 《Computers, Materials & Continua》2020,62(3):1317-1333
Distributed Denial-of-Service (DDoS) attacks have caused great damage to networks in the big data environment. Existing detection methods suffer from low computational efficiency, high false alarm rates, and high missing alarm rates. In this paper, we propose a DDoS attack detection method based on a network flow grayscale matrix feature via a multiscale convolutional neural network (CNN). According to the different characteristics of attack flows and normal flows in the IP protocol, a seven-tuple is defined to describe network flow characteristics and is converted into a grayscale feature by binarization. Based on the network flow grayscale matrix feature (GMF), convolution kernels of different spatial scales are used to improve the accuracy of feature segmentation, and the global and local features of the network flow are extracted. A DDoS attack classifier based on the multiscale CNN is then constructed. Experiments show that, compared with related methods, this method improves the robustness of the classifier and reduces both the false alarm rate and the missing alarm rate.
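As a rough illustration of the multiscale idea, the sketch below convolves a toy grayscale flow matrix with kernels of several spatial scales and pools each response map into a feature vector. The kernel sizes, random weights, and pooling choices are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_features(gmf, scales=(3, 5, 7)):
    """Apply kernels of several spatial scales to a grayscale matrix and
    pool each response map into a fixed-length feature vector."""
    rng = np.random.default_rng(0)
    feats = []
    for k in scales:
        kernel = rng.standard_normal((k, k))  # stand-in for learned weights
        fmap = conv2d_valid(gmf, kernel)
        feats.append(fmap.max())              # global max pooling per scale
        feats.append(fmap.mean())             # global average pooling per scale
    return np.array(feats)

# A toy 16x16 "network flow grayscale matrix"
gmf = np.random.default_rng(1).random((16, 16))
feats = multiscale_features(gmf)
print(feats.shape)  # (6,)
```

Larger kernels capture more global flow structure, smaller ones more local detail, which is the intuition the abstract appeals to.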
2.
Hashing technology reduces data storage and improves the efficiency of the learning system, and is therefore increasingly used in image retrieval. Multi-view data describe image content more comprehensively than traditional single-view methods, but how to combine multi-view data with hashing for image retrieval remains a challenge. In this paper, a multi-view fusion hashing method based on RKCCA (Random Kernel Canonical Correlation Analysis) is proposed. To describe image content more accurately, we construct multiple views by combining deep DenseNet convolutional features with GIST features or BoW_SIFT (Bag-of-Words model + SIFT) features. The algorithm uses RKCCA to fuse the multi-view features into association features and applies them to image retrieval. It generates binary hash codes with minimal distortion error by designing quantization regularization terms. Extensive experiments on benchmark datasets show that this method is superior to other multi-view hashing methods.
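Since RKCCA builds on canonical correlation analysis, a minimal NumPy sketch of classical (linear, non-kernel) CCA may help fix ideas; the random-kernel extension, hashing, and quantization steps of the paper are not reproduced here:

```python
import numpy as np

def cca(X, Y, k=1, ridge=1e-6):
    """Classical linear CCA via SVD of the whitened cross-covariance.
    Returns projections Wx, Wy and the top-k canonical correlations."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    # Cholesky factors of the (ridge-regularized) view covariances
    Lx = np.linalg.cholesky(X.T @ X / n + ridge * np.eye(X.shape[1]))
    Ly = np.linalg.cholesky(Y.T @ Y / n + ridge * np.eye(Y.shape[1]))
    # Whitened cross-covariance: Lx^{-1} Cxy Ly^{-T}
    M = np.linalg.solve(Lx, X.T @ Y / n) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U[:, :k])
    Wy = np.linalg.solve(Ly.T, Vt[:k].T)
    return Wx, Wy, s[:k]

# Two "views" sharing one latent signal
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.c_[z + 0.1 * rng.standard_normal(500), rng.standard_normal(500)]
Y = np.c_[z + 0.1 * rng.standard_normal(500), rng.standard_normal(500)]
Wx, Wy, corr = cca(X, Y)
print(corr[0] > 0.9)
```

The fused "association features" of the paper correspond to projecting each view through its learned directions so the shared structure across views is maximally correlated.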
3.
《Engineering》2019,5(4):671-678
In this research, an auxiliary-illumination visual sensor system, an ultraviolet/visible (UVV) band visual sensor system (wavelength below 780 nm), a spectrometer, and a photodiode are employed to capture insights into the high-power disc laser welding process. The features of the visible optical light signal and the reflected laser light signal are extracted by decomposing the original photodiode signal via the wavelet packet decomposition (WPD) method. The spectrometer signals, mainly in the 400–900 nm wavelength range, are divided into 25 sub-bands, and spectrum features are extracted from each by statistical methods. The features of the plume and spatters are acquired from images captured by the UVV visual sensor system, and the features of the keyhole are extracted from images captured by the auxiliary-illumination visual sensor system. Based on these real-time quantized features of the welding process, a deep belief network (DBN) is established to monitor the welding status, and a genetic algorithm is applied to optimize the parameters of the proposed DBN model. The established DBN model shows higher accuracy and robustness in monitoring welding status than a traditional back-propagation neural network (BPNN) model. The effectiveness and generalization ability of the proposed DBN are validated by three additional experiments with different welding parameters.
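The sub-band spectrum features can be sketched as follows; the particular statistics chosen (mean, standard deviation, peak) are an assumption, since the abstract only says "statistical methods":

```python
import numpy as np

def subband_stats(spectrum, n_bands=25):
    """Split a 1-D spectrum into equal sub-bands and compute simple
    statistics (mean, std, peak) for each band."""
    bands = np.array_split(spectrum, n_bands)
    return np.array([[b.mean(), b.std(), b.max()] for b in bands])

# Toy spectrum sampled over the 400-900 nm range, with one emission-like peak
wl = np.linspace(400, 900, 500)
spectrum = np.exp(-((wl - 656.0) / 20.0) ** 2)
feats = subband_stats(spectrum)
print(feats.shape)  # (25, 3)
```

Each row of the resulting matrix summarizes one sub-band, giving a fixed-length feature vector suitable as input to a monitoring model.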
4.
Feature extraction and description method for pressure waveforms of diesel engine fuel injection systems (Cited by: 5; self-citations: 0; citations by others: 5)
Based on the idea of dominant peaks (valleys), this paper proposes a waveform feature extraction and description method that effectively captures the main features of a waveform. The concept of dominant peaks (valleys) and their recognition algorithm are given first. Then, according to the characteristics of diesel engine injection pressure waveforms, the symbolization method and procedure for pressure waveforms are discussed in detail, and a description example is given. Finally, the waveform feature hierarchy tree constructed by further extending this method, together with its characteristics, is briefly analyzed.
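A minimal reading of the dominant-peak idea in Python; the prominence rule used here is a simplified assumption, not the paper's exact recognition algorithm:

```python
import numpy as np

def dominant_peaks(signal, prominence):
    """Indices of local maxima that rise above the lowest point on each
    side by at least `prominence` (a simplified dominance criterion)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            dominance = signal[i] - max(signal[:i].min(), signal[i + 1:].min())
            if dominance >= prominence:
                peaks.append(i)
    return peaks

# A toy injection-pressure-like trace: one dominant peak among small ripples
trace = np.array([0.0, 1.0, 0.2, 5.0, 0.1, 1.0, 0.0])
print(dominant_peaks(trace, prominence=2.0))  # [3]
```

Once the dominant peaks and valleys are located, the waveform between them can be symbolized (rise, fall, plateau), which is the basis of the description method the paper develops.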
5.
Guangquan Zhao Jin Yang Jun Chen Guang Zhu Zedong Jiang Xiaoyong Liu Guangxing Niu Zhong Lin Wang Bin Zhang 《Advanced Materials Technologies》2019,4(1)
Due to the heavy reliance on computers and networks, security issues have become a major concern for individuals, companies, and nations. Traditional security measures such as personal identification numbers, tokens, or passwords provide only limited protection. Building on the development of the intelligent keyboard (IKB), this paper proposes a deep-learning-based keystroke dynamics identification method for increased security. The IKB is a self-powered, non-mechanical-punching keyboard that converts mechanical stimuli applied to the keyboard into local electronic signals. A multilayer deep belief network (DBN) is established to mine useful information from the raw electronic signals and output the keystroke dynamics identification result. The contributions include the development of a novel solution that does not rely on manual feature extraction and provides promising recognition accuracy on a large number of typing samples. One significant advantage of the proposed method is that it extracts features adaptively from the raw current signals and automatically recognizes the typing pattern, which simplifies the design of verification and identification systems. The experimental results on 104 typing samples demonstrate the effectiveness of the proposed method, which has extensive applications in keyboard-based information security.
6.
Yongmei Zhang Jianzhe Ma Lei Hu Keming Yu Lihua Song Huini Chen 《Computers, Materials & Continua》2020,64(3):1929-1944
The prediction of particles less than 2.5 micrometers in diameter (PM2.5) in fog and haze has received more and more attention, but prediction accuracy is still not ideal. Haze prediction algorithms based on traditional numerical and statistical methods perform poorly on the nonlinear haze data. To improve prediction, this paper proposes a haze feature extraction and pollution-level identification pre-warning algorithm based on feature selection and ensemble learning. The Minimum Redundancy Maximum Relevance method is used to extract low-level haze features, and a deep belief network is utilized to extract high-level features. The eXtreme Gradient Boosting (XGBoost) algorithm is adopted to fuse the low-level and high-level features and to predict haze. A PM2.5 concentration pollution-grade classification index is established, and the forecast data are graded accordingly. Expert experience knowledge is utilized to assist the optimization of the pre-warning results. The experimental results show that the presented algorithm achieves better prediction than the Support Vector Machine (SVM) and Back Propagation (BP) methods widely used at present, with greatly improved accuracy.
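The paper fuses features with XGBoost; as a stand-in, the following self-contained sketch shows the core gradient-boosting idea of repeatedly fitting decision stumps to residuals (squared loss, single feature, far simpler than XGBoost itself):

```python
import numpy as np

def fit_stump(x, y):
    """Best single-threshold split on one feature, predicting the mean
    of the target on each side of the split."""
    best = None
    for t in x[:-1]:
        left, right = y[x <= t], y[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, rounds=50, lr=0.3):
    """Each round fits a stump to the current residuals and adds its
    (shrunken) prediction -- the essence of gradient boosting."""
    pred = np.full(len(y), y.mean())
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
    return pred

x = np.linspace(0.0, 10.0, 100)  # sorted feature values
y = np.sin(x)                    # a nonlinear target, like haze data
mse = float(np.mean((y - boost(x, y)) ** 2))
print(round(mse, 4))
```

The shrinkage factor `lr` trades fitting speed for robustness, which is why boosting copes well with the nonlinear data the abstract describes.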
7.
8.
Jinhua Sheng 《International journal of imaging systems and technology》2017,27(2):162-170
Brain imaging genetics is a popular research topic that evaluates the association between genetic variations and neuroimaging quantitative traits (QTs). As a bi-multivariate analysis method, sparse canonical correlation analysis (CCA) efficiently identifies genetic effects on the brain by modeling dependencies between genotype and phenotype variables. Initial efforts are made here to evaluate several sparse CCA methods for brain imaging genetics. A linear model is proposed to generate realistic imaging-genomic data with genotype-phenotype associations selected from real data, effectively capturing the sparsity of the underlying projections. Three sparse CCA algorithms are applied to the synthetic data and show better or comparable performance in terms of the estimated canonical correlations; they successfully identify an important association between genotype and phenotype. Experiments on simulated and real imaging-genetic data show that approximating the covariance structure by an identity or diagonal matrix, as these sparse CCA algorithms do, can limit their capability to identify the underlying imaging-genetics associations. Further development depends largely on enhanced sparse CCA methods that properly account for the covariance structures in simulated and real imaging-genetics data.
9.
Face recognition is a current research hotspot in artificial intelligence and pattern recognition and has received wide attention. Based on an analysis of data in different color spaces, this paper proposes a face recognition method using canonical correlation analysis (CCA) across multiple color spaces. The properties of the 2-D Contourlet transform are analyzed and discussed, and, exploiting the multiscale, directional, and anisotropic characteristics of the Contourlet transform, a color face recognition algorithm based on it is proposed. The algorithm applies a Contourlet decomposition to the original image and performs CCA on the resulting low-frequency and high-frequency images. CCA is an effective analysis method with very wide practical application. The low-frequency coefficients reflect the contour information of the image, while the high-frequency coefficients reflect its detail; CCA makes full use of the information at different frequencies so that the correlation between images of different resolutions in different color spaces is maximized, yielding the projection coefficients. Finally, a nearest-neighbor classifier at the decision level completes the recognition. In recognition experiments on the AR color face database, the algorithm achieves a recognition rate above 98% and, compared with traditional algorithms, offers both good recognition results and fast computation.
10.
Khalil Khan Rehan Ullah Khan Jehad Ali Irfan Uddin Sahib Khan Byeong-hee Roh 《Computers, Materials & Continua》2021,68(3):3483-3498
Race classification is a long-standing challenge in the field of face image analysis. The investigation of salient facial features is an important task to avoid processing all face parts. Face segmentation strongly benefits several face analysis tasks, including ethnicity and race classification. We propose a race classification algorithm using a prior face segmentation framework. A deep convolutional neural network (DCNN) was used to construct a face segmentation model. For training the DCNN, we label face images according to seven classes: nose, skin, hair, eyes, brows, back, and mouth. The DCNN model developed in the first phase was then used to create segmentation results. A probabilistic classification method is used, and probability maps (PMs) are created for each semantic class. We investigated the five salient facial features, among the seven, that help in race classification. Features are extracted from the PMs of these five classes, and a new model is trained based on the DCNN. We assessed the performance of the proposed race classification method on four standard face datasets, reporting superior results compared with previous studies.
11.
This work introduces a deep learning pipeline for automatic patent classification with multichannel inputs based on LSTM networks and word vector embeddings. Sophisticated text mining methods are used to extract the most important segments from patent texts, and a domain-specific pre-trained word embeddings model for the patent domain is developed, trained on a very large dataset of more than five million patents. The pipeline uses multiple parallel LSTM networks that read the source patent document through different input channels, namely embeddings of different segments of the patent text and sparse linear inputs of different metadata. Classifying patents into their corresponding technical fields is selected as a use case. In this use case, a series of patent classification experiments is conducted on different patent datasets, and the experimental results indicate that using the segments of patent texts as well as the metadata as multichannel inputs for a deep neural network model achieves better performance than a single input channel.
12.
Haim Shore 《Quality and Reliability Engineering International》2008,24(4):389-399
The data‐transformation approach and generalized linear modeling both require specification of a transformation prior to deriving the linear predictor (LP). By contrast, response modeling methodology (RMM) requires no such specifications. Furthermore, RMM effectively decouples modeling of the LP from modeling its relationship to the response. It may therefore be of interest to compare LPs obtained by the three approaches. Based on numerical quality problems that have appeared in the literature, these approaches are compared in terms of both the derived structure of the LPs and goodness‐of‐fit statistics. The relative advantages of RMM are discussed. Copyright © 2007 John Wiley & Sons, Ltd.
13.
Luigi Mascolo Pietro Guccione Giovanni Nico Paolo Taurisano Leonardo Fazio 《International journal of imaging systems and technology》2014,24(3):239-248
The purpose of this article is to present a methodology for identifying the sources of activity in brain networks from functional magnetic resonance imaging (fMRI) data using the multiset canonical correlation analysis algorithm. The aim is to lay the foundations for a screening marker to be used as an indicator of mental disease. Group-analysis blind source separation methods have proved reliable for extracting the latent sources underlying brain activity, but there is currently no recognized biomarker for mental disorders. Recent studies have identified alterations in the so-called default mode network (DMN) that are common to several neuropsychiatric disorders, including schizophrenia. In particular, we here pursue the hypothesis that alterations in DMN activity can be effectively highlighted by analyzing the transient states between two different tasks. A set of fMRI data acquired from 18 subjects performing working memory tasks is investigated for this purpose; half of the subjects are patients affected by schizophrenia and the other half are healthy controls. Under these conditions, the proposed methodology provides high discrimination performance in terms of classification error, yielding promising results for a preliminary tool able to monitor the disease state or to prescreen patients at risk for schizophrenia. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 239–248, 2014
14.
Undoubtedly, uncooperative or malicious nodes threaten the safety of the Internet of Vehicles (IoV) by destroying routing or data. To this end, some researchers have designed node detection mechanisms and trust calculation algorithms based on different feature parameters of the IoV, such as communication, data, and energy, to detect and evaluate vehicle nodes. However, it is difficult to effectively assess the trust level of a vehicle node only by message forwarding, data consistency, and energy sufficiency. To resolve these problems, a novel mechanism and a new trust calculation model are proposed in this paper. First, a four-tuple method is adopted to qualitatively describe the various types of IoV nodes; second, the behavioral features and correlations of the various nodes are analyzed based on route forwarding rate, data forwarding rate, and physical location; third, double-layer detection feature parameters are designed with the ability to detect uncooperative and malicious nodes; fourth, a node correlative detection model with a double-layer structure is established by combining the network layer and the perception layer. Accordingly, we conducted simulation experiments to verify the accuracy and running time of this detection method under different speed-rate topological conditions of the IoV. The results show that, compared with methods that consider only energy or communication parameters, the proposed method has obvious advantages in detecting uncooperative and malicious IoV nodes; in particular, with the double detection feature parameters and the node correlative detection model combined, detection accuracy is effectively improved and the calculation time of node detection is largely reduced.
15.
In the Norwegian offshore oil and gas industry, risk analyses have been used to provide decision support for more than 20 years. The focus has traditionally been on the planning phase, but in recent years a need for better risk analysis methods for the operational phase has been identified. Such methods should take human and organizational factors into consideration more explicitly than traditional risk analysis methods do. Recently, a framework called hybrid causal logic (HCL) has been developed based on traditional risk analysis tools combined with Bayesian belief networks (BBNs), using the aviation industry as a case. This paper reviews this framework, discusses its applicability for the offshore industry, and relates it to existing research projects such as the barrier and operational risk analysis (BORA) project. The paper also addresses specific features of the framework and suggests a new approach for the probability assignment process. This approach simplifies the assignment process considerably without losing the flexibility needed to properly reflect the phenomena being studied.
16.
Kevin L. Mills James J. Filliben 《Journal of research of the National Institute of Standards and Technology》2011,116(5):771-783
Experimenters characterize the behavior of simulation models for data communications networks by measuring multiple responses under selected parameter combinations. The resulting multivariate data may include redundant responses reflecting aspects of a smaller number of underlying behaviors. Reducing the dimension of multivariate responses can reveal the most significant model behaviors, allowing subsequent analyses to focus on one response per behavior. This paper investigates two methods for reducing dimension in multivariate data generated from simulation models. One method combines correlation analysis and clustering. The second method uses principal components analysis. We apply both methods to reduce a 22-dimensional dataset generated by a network simulator. We identify issues that an analyst must decide, and we compare the reductions suggested by the methods. We have used these methods to identify significant behaviors in simulated networks, and we suspect they may be applied to reduce the dimension of empirical data measured from real networks.
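The principal-components route to dimension reduction can be sketched in a few lines of NumPy; the 95% explained-variance cutoff and the toy data (22 responses driven by 3 latent behaviors) are assumptions for illustration, not the paper's dataset:

```python
import numpy as np

def pca_reduce(X, var_target=0.95):
    """Project multivariate responses onto the leading principal components
    that together explain `var_target` of the total variance."""
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(explained), var_target)) + 1
    return Xc @ Vt[:k].T, k

# 22 measured responses that really live on 3 underlying behaviors
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 3))
mix = rng.standard_normal((3, 22))
X = latent @ mix + 0.01 * rng.standard_normal((200, 22))
Z, k = pca_reduce(X)
print(k)  # 3
```

Because the noise is small, nearly all the variance in the 22 responses is captured by 3 components, mirroring the idea of focusing subsequent analysis on one response per underlying behavior.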
17.
18.
Speech classification has wide application value in medical diagnosis, scene analysis, speech recognition, ecological environment analysis, and other areas. Traditional speech classifiers use neural networks, but they have notable shortcomings in accuracy, model configuration, parameter tuning, and data preprocessing. On this basis, this paper proposes an improved method based on the "deep forest" idea: a LightGBM-based deep learning model (the Deep LightGBM model). It improves classification accuracy and generalization ability while keeping the model simple, and it effectively reduces parameter dependence. On the UrbanSound8K dataset, speech features are extracted with a vector method, and the classification accuracy reaches 95.84%. Fusing features extracted by a convolutional neural network (CNN) with those obtained by the vector method, and training the new model on the fused features, raises the accuracy to 97.67%. Experiments show that the model obtained by combining this feature extraction approach with Deep LightGBM is easy to tune, highly accurate, does not overfit, and generalizes well.
19.
Sameh Abd ElGhany Mai Ramadan Ibraheem Madallah Alruwaili Mohammed Elmogy 《Computers, Materials & Continua》2021,68(1):117-135
With the massive success of deep networks, there have been significant efforts to analyze cancers, especially skin cancer. For this purpose, this work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images. This paper aims to develop and fine-tune a deep learning architecture to diagnose different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method to obtain enhanced classification results from a customized pre-trained network. Regularization, batch normalization, and hyperparameter optimization are performed to fine-tune the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified seven classes of dermoscopic lesions using the publicly available HAM10000 dataset. The developed deep model was compared against two powerful models, i.e., InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model achieved better results than several recent, robust models.
20.
Bayesian networks for multilevel system reliability (Cited by: 1; self-citations: 0; citations by others: 1)
Alyson G. Wilson Aparna V. Huzurbazar 《Reliability Engineering & System Safety》2007,92(10):1413-1420
Bayesian networks have recently found many applications in systems reliability; however, the focus has been on binary outcomes. In this paper we extend their use to multilevel discrete data and discuss how to make joint inference about all of the nodes in the network. These methods are applicable when system structures are too complex to be represented by fault trees. The methods are illustrated through four examples that are structured to clarify the scope of the problem.
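A toy flavor of multilevel (rather than binary) reliability inference: two three-level components feed a series system, and the system-state distribution is obtained by enumerating the joint. The structure and numbers are invented for illustration and are not taken from the paper's examples:

```python
import numpy as np

# Three-level node states: 0 = failed, 1 = degraded, 2 = fully working.
p_c = np.array([0.1, 0.2, 0.7])  # marginal state distribution of each component

def system_state(c1, c2):
    """Series logic: the system is only as healthy as its worst component."""
    return min(c1, c2)

# Joint inference by full enumeration over the two parent nodes
joint = np.zeros(3)
for c1 in range(3):
    for c2 in range(3):
        joint[system_state(c1, c2)] += p_c[c1] * p_c[c2]
print(joint)
```

With more nodes, full enumeration becomes intractable and proper Bayesian network inference (variable elimination, junction trees) takes over, but the multilevel idea is the same: node states are discrete levels, not just up/down.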