Similar Documents
20 similar documents found.
1.
Effective detection on large-scale, high-dimensional data is of great practical importance. Outlier detection refers to identifying data points that deviate from the general data distribution, and its core is density estimation. Although models such as the deep autoencoding Gaussian mixture model, which first reduce dimensionality and then estimate density, have made significant progress, they introduce noise into the low-dimensional latent space and face several restrictions when optimizing the density-estimation module, e.g., the covariance must be kept a positive-definite matrix. To remove these restrictions, this paper proposes a deep autoencoder normalizing flow (DANF) for unsupervised anomaly detection. The model uses a deep autoencoder to generate a low-dimensional latent representation and a reconstruction error for each input sample, feeds both into a normalizing flow (NF), and finally maps them to a Gaussian distribution. Experimental results on several public benchmark datasets show that DANF significantly outperforms state-of-the-art anomaly-detection techniques, improving the F1-score by up to 26.43%.
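As a hedged illustration of the DANF construction this abstract describes, the PyTorch sketch below feeds an autoencoder's latent code together with its reconstruction error through one affine-coupling flow layer and scores samples by negative log-likelihood under a Gaussian base density. All layer sizes, the single-coupling flow, and the joint objective are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine-coupling flow layer (a minimal sketch, sizes assumed)."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                       # stabilize the scale term
        z2 = z2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                  # log|det J| of the coupling
        return torch.cat([z1, z2], dim=1), log_det

class DANFSketch(nn.Module):
    def __init__(self, in_dim, latent_dim=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, in_dim))
        self.flow = AffineCoupling(latent_dim + 1)  # latent code + recon error

    def forward(self, x):
        z = self.enc(x)
        x_hat = self.dec(z)
        rec_err = ((x - x_hat) ** 2).mean(dim=1, keepdim=True)
        u, log_det = self.flow(torch.cat([z, rec_err], dim=1))
        # Negative log-likelihood under a standard Gaussian base density;
        # a high score marks the sample as anomalous.
        nll = 0.5 * (u ** 2).sum(dim=1) - log_det
        return nll + rec_err.squeeze(1)         # assumed joint objective
```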

2.
Meng Lingheng, Ding Shifei, Zhang Nan, Zhang Jian. Neural Computing & Applications, 2018, 30(7): 2083-2100

Learning results depend on the representation of data, so how to represent data efficiently has been a research hot spot in machine learning and artificial intelligence. As deep learning research has deepened, how to train deep networks to represent high-dimensional data efficiently has also become a research frontier. In order to represent data more efficiently and study how to express data through deep networks, we propose a novel stacked denoising sparse autoencoder in this paper. First, we construct a denoising sparse autoencoder by introducing both a corrupting operation and a sparsity constraint into the traditional autoencoder. Then, we build stacked denoising sparse autoencoders with multiple hidden layers by stacking denoising sparse autoencoders layer-wise. Experiments are designed to explore the influence of the corrupting operation and the sparsity constraint on different datasets, using networks with various depths and numbers of hidden units. The comparative experiments reveal that the test accuracy of the stacked denoising sparse autoencoder is much higher than that of other stacked models, regardless of the dataset used and the number of layers in the model. We also find that the deeper the network is, the fewer activated neurons each layer has. More importantly, we find that strengthening the sparsity constraint is, to some extent, equivalent to increasing the corruption level.

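The sketch below shows, in PyTorch, how the two ingredients named above, a corrupting operation and a sparsity constraint, can be combined in a single denoising sparse autoencoder layer. It is an illustration rather than the authors' code; the corruption level, sparsity target rho, and penalty weight beta are assumed hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAE(nn.Module):
    def __init__(self, in_dim, hid_dim, corrupt=0.3, rho=0.05, beta=3.0):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)
        self.corrupt, self.rho, self.beta = corrupt, rho, beta

    def loss(self, x):
        mask = (torch.rand_like(x) > self.corrupt).float()
        h = torch.sigmoid(self.enc(x * mask))       # encode corrupted input
        x_hat = torch.sigmoid(self.dec(h))
        recon = F.mse_loss(x_hat, x)                # reconstruct the clean input
        # KL sparsity penalty pushing mean activations toward rho
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat)))
        return recon + self.beta * kl.sum()
```

Stacking then proceeds layer-wise: the hidden activations of a trained layer become the training input of the next layer.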

3.
Visual motion segmentation (VMS) is an important and key part of many intelligent crowd systems. It can be used to figure out the flow behaviour through a crowd and to spot unusual life-threatening incidents such as crowd stampedes and crushes, which pose a serious risk to public safety and have resulted in numerous fatalities over the past few decades. Trajectory clustering has become one of the most popular methods in VMS. However, complex data, such as a large number of samples and parameters, make it difficult for trajectory clustering to produce accurate motion-segmentation results. This study introduces a spatial-angular stacked sparse autoencoder model (SA-SSAE) with l2-regularization and softmax, a powerful deep learning method for visual motion segmentation that clusters similar motion patterns together. The proposed model can extract meaningful high-level features using only spatial-angular features obtained from refined tracklets (a.k.a. 'trajectories'). We adopt l2-regularization and sparsity regularization, which can learn sparse representations of features, to guarantee the sparsity of the autoencoders. We employ a softmax layer to map the data points into accurate cluster representations. One of the main advantages of the SA-SSAE framework is that it can handle VMS even when individuals move around randomly, clustering the motion patterns effectively with higher accuracy. We also put forward a new dataset of 21 crowd videos with manually annotated ground truth. Experiments conducted on two crowd benchmarks demonstrate that the proposed model groups trajectories more accurately than the traditional clustering approaches used in previous studies; on the CUHK dataset, SA-SSAE achieved a 0.11 improvement in accuracy and a 0.13 improvement in F-measure over the best current method.

4.
Li Daqiu, Fu Zhangjie, Xu Jun. Applied Intelligence, 2021, 51(5): 2805-2817

With the outbreak of COVID-19, diagnosis based on medical imaging such as computed tomography (CT) has proved to be an effective way to fight the rapid spread of the virus, so it is important to study computerized models for infection detection based on CT imaging. New deep learning approaches are being developed for CT-assisted diagnosis of COVID-19. However, most current studies rely on small COVID-19 CT image datasets, since few datasets are publicly available for patient-privacy reasons. As a result, the performance of deep learning detection models needs to be improved on small datasets. In this paper, a stacked autoencoder detector model is proposed to substantially improve detection performance in terms of precision and recall. First, four autoencoders are constructed as the first four layers of the whole stacked autoencoder detector to extract better features from CT images. Second, the four autoencoders are cascaded together and connected to a dense layer and a softmax classifier to constitute the model. Finally, a new classification loss function is constructed by superimposing the reconstruction loss, enhancing the detection accuracy of the model. Experimental results show that our model performs well on a small COVID-19 CT image dataset, achieving an average accuracy, precision, recall, and F1-score of 94.7%, 96.54%, 94.1%, and 94.8%, respectively. These results reflect the model's ability to discriminate COVID-19 images, which might help radiologists diagnose suspected COVID-19 patients.

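A minimal sketch of the superimposed loss described above, assuming a trade-off weight `lam` that the abstract does not specify:

```python
import torch.nn.functional as F

def detector_loss(logits, labels, x_recon, x, lam=0.5):
    ce = F.cross_entropy(logits, labels)   # softmax classification term
    rec = F.mse_loss(x_recon, x)           # autoencoder reconstruction term
    return ce + lam * rec                  # assumed weighted superposition
```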

5.
In addition to classification and regression, outlier detection has emerged as a relevant activity in deep learning. Whereas previous approaches used the original features of the examples to separate highly dissimilar examples from the rest, deep learning can automatically extract useful features from raw data, removing the need for most of the feature-engineering effort usually required by classical machine learning approaches. This requires training the deep learning algorithm with labels identifying the examples or with numerical values. Although outlier detection in deep learning has usually been undertaken by training the algorithm with categorical labels (as a classifier), it can also be performed by using the algorithm as a regressor. Nowadays, numerous urban areas have deployed networks of sensors for monitoring multiple air-quality variables. The measurements of these sensors can be treated individually, as time series, or collectively: a variable monitored by a network of sensors can be transformed into a map, and maps can be used as images in machine learning algorithms, including computer-vision algorithms, for outlier detection. Identifying anomalous episodes in air-quality monitoring networks allows those time periods to be processed later with finer-grained scientific packages involving fluid-dynamics and chemical-evolution software, or malfunctioning stations to be identified. In this work, a convolutional neural network is trained, as a regressor, using as input ozone-urban images generated from the Air Quality Monitoring Network of Madrid (Spain). The learned features are processed by the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to identify anomalous maps. Comparisons are made with other deep learning architectures, for instance autoencoders (undercomplete and denoising) that learn salient features of the maps for later use as DBSCAN input. The proposed approach finds maps with local anomalies more efficiently than approaches based on raw images or on latent features extracted with autoencoder architectures and DBSCAN.
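A brief sketch of the final anomaly-flagging stage, assuming the CNN-derived map features are already available as a NumPy array; the file name and the eps/min_samples values are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import DBSCAN

features = np.load("cnn_map_features.npy")   # assumed shape (n_maps, n_dims)
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(features)
anomalous_maps = np.where(labels == -1)[0]   # -1 = in no dense cluster
print(f"{anomalous_maps.size} anomalous maps:", anomalous_maps)
```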

6.
Internet data theft, intrusions, and DDoS attacks are some of the major concerns in network security today, and detection of these anomalies is gaining tremendous impetus with the development of machine learning and artificial intelligence. Researchers are now shifting from classical machine learning to deep neural architectures with automatic feature-selection capabilities. In this paper we propose multiple deep neural network architectures that can select, co-learn, and teach the gradients of the neural network by themselves, with no human intervention; this is what we call meta-learning. The models are configured in both many-to-one and many-to-many design architectures. We combine long short-term memory (LSTM), bi-directional long short-term memory (BiLSTM), and convolutional neural network (CNN) layers with an attention mechanism to achieve higher accuracy than the other available deep learning architectures. LSTMs overcome the vanishing- and exploding-gradient problems of RNNs, the attention mechanism mimics human cognitive attention by screening the network flow for the key features needed for traffic classification, and additional convolutional layers extract those key features. Time-series analysis of the traffic for possible DDoS attacks is performed without any feature-selection techniques and without balancing the dataset. Performance is analysed using confusion-matrix scores, that is, accuracy, false alarm rate (FAR), sensitivity, specificity, false-positive rate (FPR), F1 score, and area-under-curve (AUC) analysis, together with loss functions, on the well-known public benchmark KDD Cup'99 dataset. The results of our experiments reveal that our models outperform existing techniques, demonstrating superior performance.
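A hedged PyTorch sketch of the BiLSTM-with-attention pattern this abstract combines; the dimensions and the simple additive attention are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)    # attention scorer per step
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                 # weighted context vector
        return self.out(ctx)                     # traffic-class logits
```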

7.
Core networking architectures are currently facing disruptive developments, owing to the emergence of paradigms such as Software-Defined Networking (SDN) for control and Network Function Virtualization (NFV) for services. These are key enabling technologies for future applications in 5G and locality-based Internet of Things (IoT)/wireless sensor network services. The proliferation of IoT devices at the Edge is driving the growth of an all-connected world of Internet traffic. In the Cloud-to-Things continuum, processing information and data at the Edge mandates the development of security best practices within a fog-computing environment. Service providers are transforming their business using NFV-based services and SDN-enabled networks. The SDN paradigm offers an easily programmable model, a global view, and control for modern networks, which demand faster responses to security incidents and dynamic enforcement of countermeasures against intrusions and cyberattacks. This article proposes an autonomic multilayer security framework, the Distributed Threat Analytics and Response System (DTARS), for a converged architecture of Fog/Edge computing and SDN infrastructures, aimed at emerging applications in IoT and 5G networks. The main detection scheme is deployed within the data plane and consists of coarse-grained behavioural, anti-spoofing, and flow-monitoring algorithms together with a fine-grained, multi-feature, entropy-based traffic algorithm. We developed exemplary defence applications under the DTARS framework on a malware testbed imitating real-life DDoS/botnets such as Mirai. The experiments and analysis show that DTARS detects attacks in real time with more than 95% accuracy under attack intensities of up to 50 000 packets/s. The benign traffic forwarding rate remains unaffected with DTARS, whereas it drops to 65% with a traditional NIDS under advanced DDoS attacks. Moreover, DTARS achieves this performance without incurring additional latency from data-plane overhead.
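As a toy illustration of the entropy-based, fine-grained part of such a detection scheme (the window contents and the threshold are invented for the example): a sudden drop in the entropy of destination IPs within a traffic window suggests many flows converging on one victim, a DDoS-like signature.

```python
import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

window_dst_ips = ["10.0.0.5"] * 900 + ["10.0.0.7"] * 100  # toy traffic window
if shannon_entropy(window_dst_ips) < 1.0:                 # assumed threshold
    print("low destination-IP entropy: possible DDoS on a single target")
```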

8.
Objective: The difficulty in fine-grained visual recognition lies in distinguishing, within a single broad category, subclasses that differ only subtly; accurate classification usually requires expert knowledge, so fine-grained image classification places higher demands on computer-vision research. To let ordinary people without specialist knowledge or skills distinguish fine-grained species categories, a convolutional neural network structure based on a deep region network is proposed. Method: The structure first performs deep feature extraction, using a 16-layer VGG network and a 101-layer residual network as feature-extraction backbones to extract deep shared features and produce feature maps. A region proposal network then convolves over the feature maps to generate candidate object regions, while a region-of-interest (RoI) pooling layer max-pools the feature maps to realize network sharing. The pooled object regions are fed into a region-based convolutional network for fine-grained category prediction and bounding-box regression, and the network finally outputs the predicted category and the coordinates of the regressed bounding box. A partial-occlusion experiment is also conducted to measure how occluding local parts affects classification accuracy and to analyse the contribution of local information to bird classification. Results: Experiments were run on the CUB_200_2011 bird database, which contains 200 fine-grained bird categories and 11,788 bird images. After training and testing, the VGG16+R-CNN (RPN) and Res101+R-CNN (RPN) structures achieve validation accuracies of 90.88% and 91.72%, respectively, and both exceed 98% Top-5 validation accuracy. Real-world occlusion was simulated by occluding local bird features and measuring the classification effect. Conclusion: The deep-region-network-based convolutional neural network model improves classification performance on fine-grained bird images, offering high classification accuracy, good generalization, and strong robustness; the experiments show that head information is very important for fine-grained bird recognition.
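As a rough stand-in for the detection-style pipeline (RPN plus RoI pooling) the abstract describes, the sketch below uses torchvision's off-the-shelf Faster R-CNN with a ResNet-50-FPN backbone; this substitutes for the paper's VGG16/ResNet-101 backbones, and num_classes=201 assumes 200 bird species plus background:

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=201)
model.eval()
images = [torch.rand(3, 448, 448)]      # one dummy bird image
with torch.no_grad():
    preds = model(images)               # per-image boxes, labels, scores
print(preds[0]["boxes"].shape, preds[0]["labels"][:5])
```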

9.
The amount of digital data in the universe is growing at an exponential rate, doubling every two years and changing how we live; information storage capacity and data requirements have crossed into the zettabyte range. Under this level of data bombardment, it becomes very difficult for machine learning techniques to carry out parallel computations. Deep learning is broadening its scope and gaining popularity in natural language processing, feature extraction and visualization, and almost every machine learning trend. The purpose of this study is to provide a brief review of deep learning architectures and how they work. Research papers and conference proceedings from authoritative sources (Institute of Electrical and Electronics Engineers, Wiley, Nature, and Elsevier) are studied and analysed; different architectures and their effectiveness in solving domain-specific problems are evaluated; and various limitations and open problems of current architectures are discussed, to give researchers and students better insight for pursuing research on these issues. One hundred and one articles were reviewed for this meta-analysis of deep learning. The analysis concludes that advanced deep learning architectures are combinations of a few conventional architectures: for example, the deep belief network and the convolutional neural network are combined to build the convolutional deep belief network, which has higher capabilities than its parent architectures. Such combined architectures are more robust in exploring the problem space and thus may be the answer to building a general-purpose architecture.

10.
ABSTRACT

Hyperspectral unmixing is essential for image analysis and quantitative applications. To further improve the accuracy of hyperspectral unmixing, we propose a novel linear hyperspectral unmixing method based on l1−l2 sparsity and total variation (TV) regularization. First, the enhanced sparsity based on the l1−l2 norm is explored to depict the intrinsic sparse characteristic of the fractional abundances in a sparse-regression unmixing model, because the l1−l2 norm promotes stronger sparsity than the l1 norm. Then, TV is minimized to enforce spatial smoothness by considering the spatial correlation between neighbouring pixels. Finally, the extended alternating direction method of multipliers (ADMM) is utilized to solve the proposed model. Experimental results on simulated and real hyperspectral datasets show that the proposed method outperforms several state-of-the-art unmixing methods.
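A sketch of the combined objective the method minimizes, under assumed array shapes; lam and mu are illustrative regularization weights, an anisotropic TV is used for simplicity, and the ADMM solver itself is omitted:

```python
import numpy as np

def unmixing_objective(E, X, Y, lam=0.1, mu=0.05):
    # E: (bands, endmembers), X: (endmembers, rows, cols), Y: (bands, rows, cols)
    resid = Y - np.tensordot(E, X, axes=1)        # linear mixing residual
    data = 0.5 * np.sum(resid ** 2)
    flat = X.reshape(X.shape[0], -1)
    # l1 - l2 sparsity of each pixel's abundance vector
    sparsity = np.sum(np.abs(flat)) - np.sum(np.linalg.norm(flat, axis=0))
    tv = (np.abs(np.diff(X, axis=1)).sum()        # vertical differences
          + np.abs(np.diff(X, axis=2)).sum())     # horizontal differences
    return data + lam * sparsity + mu * tv
```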

11.
杨洋, 吕光宏, 赵会, 李鹏飞. 软件学报 (Journal of Software), 2020, 31(7): 2184-2204
Software-defined networking (SDN), which separates data forwarding from control, is a radical departure from the traditional network architecture and introduces new opportunities and challenges to every area of networking research. As traditional network research methods hit bottlenecks in SDN, deep-learning-based methods have been brought into SDN research, where they have achieved fruitful results in real-time, intelligent network management and control and have pushed SDN research forward. This paper surveys the factors facilitating the introduction of deep learning into SDN, including deep-learning development platforms, training datasets, and intelligent SDN architectures; systematically reviews deep-learning applications in SDN research areas such as intelligent routing, intrusion detection, traffic awareness, and other applications; analyses in depth the characteristics and shortcomings of existing deep-learning applications; and finally outlines future research directions and trends for SDN.

12.
Short-term traffic flow prediction based on deep learning
To address the failure of existing prediction methods to fully reveal the intrinsic regularities of traffic flow, a deep-learning-based short-term traffic flow prediction method is proposed. The method combines a deep belief network (DBN) with support vector regression (SVR) as the prediction model: differencing removes the trend from the traffic-flow data, the DBN learns traffic-flow features, and an SVR model connected at the top of the network performs the flow prediction. Tests on real traffic-flow data show that, compared with traditional prediction models, the proposed model achieves higher prediction accuracy, improving prediction performance by 18.01%, and is an effective traffic-flow prediction method.
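A simplified sketch of the outer pipeline under assumed data: differencing removes the trend, lagged windows form the inputs (standing in for the DBN-learned features, which are omitted here for brevity), and SVR regresses the next differenced value. The file name, lag, and SVR settings are illustrative:

```python
import numpy as np
from sklearn.svm import SVR

flow = np.loadtxt("traffic_flow.csv")          # assumed 1-D flow series
diff = np.diff(flow)                           # remove trend by differencing
lag = 6
X = np.stack([diff[i:i + lag] for i in range(len(diff) - lag)])
y = diff[lag:]
model = SVR(kernel="rbf", C=10.0).fit(X[:-100], y[:-100])
pred_diff = model.predict(X[-100:])
pred_flow = flow[-101:-1] + pred_diff          # invert the differencing
```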

13.
The study and development of transportation systems have been a focus of attention in recent years, with many research efforts directed in particular at modelling traffic behaviour from both macroscopic and microscopic points of view. Although many statistical regression models of road-traffic relationships have been formulated, they have proven unsuitable because traffic characteristics are multiple and ill-defined. Alternative methods such as neural networks have thus been sought but, despite some promising results, their design remains problematic and implementation is equally difficult. Another salient issue is that the opaqueness of trained networks prevents understanding of the underlying models. Hybrid neuro-fuzzy rule-based systems, which combine the complementary capabilities of neural networks and fuzzy logic, constitute a more promising technique for modelling traffic flow. This paper describes the application of a specific class of neuro-fuzzy system, the Pseudo Outer-Product Fuzzy-Neural Network using the Truth-Value-Restriction method (POPFNN-TVR), to modelling traffic behaviour. This approach has been shown to perform better on such problems than similar architectures. The results highlight the capability of POPFNN-TVR both in fuzzy knowledge extraction for modelling inter-lane relationships in a highway traffic stream and in generalizing from sample data, compared with traditional feed-forward neural networks using back-propagation learning. The automatically obtained model can be understood, analysed, and readily applied to transportation planning.

14.
Service-oriented architectures (SOA) provide a flexible and dynamic platform for implementing business solutions. In this paper, we address the modelling of such architectures by refining business-oriented architectures, which abstract from technology aspects, into service-oriented ones, focusing on the ability of dynamic reconfiguration (binding to new services at run time) typical of SOA. The refinement is based on conceptual models of the platforms involved as architectural styles, formalized by graph transformation systems. Based on a refinement relation between abstract and platform-specific styles, we investigate how to realize business-specific scenarios on the SOA platform by automatically deriving refined, SOA-specific reconfiguration scenarios. (Research partially supported by the European Research Training Network SegraVis, on Syntactic and Semantic Integration of Visual Modelling Techniques.)

15.
To address the problems that manually labeling samples for hyperspectral remote-sensing image classification is time-consuming and laborious, that large numbers of unlabeled samples go unused, and that spectral information is exploited while spatial information is ignored, a hyperspectral image classification method combining spatial-spectral information with active deep learning is proposed. First, principal component analysis reduces the dimensionality of the original image; a small square neighborhood around each pixel is then extracted as that pixel's spatial information and combined with its original spectral information to form a spatial-spectral feature. Next, a sparse autoencoder learns a sparse feature representation of the data, a deep neural network is built by layer-wise unsupervised training of stacked sparse autoencoders, and the deep features it outputs are fed to a softmax classifier; a small number of labeled samples are then used to fine-tune the model in a supervised fashion. Finally, an active-learning algorithm selects the most uncertain samples for labeling and adds them to the training set to improve the classifier. Classification experiments on the PaviaU and PaviaC images show that, with few labeled samples, the method effectively improves classification accuracy compared with traditional methods.
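A minimal sketch of the uncertainty-based query step, assuming the fine-tuned network's softmax outputs for the unlabeled pool are available; least-confidence sampling is used here as one common choice of uncertainty measure, not necessarily the paper's exact criterion:

```python
import numpy as np

def least_confident(probs, k=10):
    # probs: (n_pool, n_classes) softmax outputs from the fine-tuned network
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]     # indices of the k most uncertain

probs = np.random.dirichlet(np.ones(9), size=500)  # toy pool predictions
to_label = least_confident(probs, k=10)
print("query these pixels for manual labels:", to_label)
```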

16.
ABSTRACT

As fingerprints continue toward ubiquity in human recognition applications, growing fingerprint databases will pose an increasingly greater risk of irreversible identity theft in the event of a database breach. Consequently, more focus is being placed on researching new and effective ways of securing fingerprint templates during database storage. Recently, a new fingerprint template protection scheme, based on representing a fingerprint by a sparse 3-, 4-, or 5-minutiae pattern, has been proposed. The most important advantage of this method over other fingerprint template protection schemes is that it employs only a small number of identifying features in the creation of the protected template, such that it is impossible to recover the original fingerprint even if the protected template is compromised. In this article, we present a thorough analysis to demonstrate that this new fingerprint construct also boasts impressive cancellability and diversity properties. Cancellability allows for the replacement of a compromised template with a new template from the same fingerprint, and diversity enables a person to enroll into multiple applications using the same fingerprint without the prospect of being tracked across the different applications.

17.
尚敬文, 王朝坤, 辛欣, 应翔. 软件学报 (Journal of Software), 2017, 28(3): 648-662
Community structure is an important feature of complex networks, and community detection has significant application value for studying network structure. Classical clustering algorithms such as k-means are a basic class of methods for community detection; however, when handling a network's high-dimensional matrix, the communities these classical methods find are often not accurate enough. This paper proposes CoDDA, a community detection algorithm based on a deep sparse autoencoder, which aims to improve the accuracy of community detection when classical methods process high-dimensional adjacency matrices. First, a hop-based method optimizes the sparse adjacency matrix; the resulting similarity matrix reflects not only the similarity between connected nodes in the network topology but also the similarity between unconnected nodes. Next, following the unsupervised deep-learning approach, a deep sparse autoencoder is built to extract features from the similarity matrix, yielding a low-dimensional feature matrix; compared with the adjacency matrix, the feature matrix expresses the network topology more powerfully. Finally, the k-means algorithm clusters the low-dimensional feature matrix to obtain the community structure. Experimental results show that, compared with six typical community detection algorithms, CoDDA finds more accurate community structure. Parameter experiments further show that the communities CoDDA finds are more accurate than those found by the basic k-means algorithm applied directly to the high-dimensional adjacency matrix.
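A hedged sketch of CoDDA's outer pipeline on toy assumptions: a two-hop neighborhood matrix stands in for the paper's hop-based preprocessing, and a truncated SVD stands in for the deep sparse autoencoder's low-dimensional embedding before k-means. File name, embedding size, and cluster count are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def hop2_similarity(A):
    # nodes reachable within two hops count as similar; rows normalized
    S = ((A + A @ A) > 0).astype(float)
    np.fill_diagonal(S, 1.0)
    return S / S.sum(axis=1, keepdims=True)

A = np.load("adjacency.npy")                 # assumed (n, n) 0/1 matrix
S = hop2_similarity(A)
U, sv, _ = np.linalg.svd(S)                  # linear stand-in for the encoder
features = U[:, :16] * sv[:16]               # 16-dim node embeddings
communities = KMeans(n_clusters=5, n_init=10).fit_predict(features)
```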

18.
Qiao Chen, Yang Lan, Shi Yan, Fang Hanfeng, Kang Yanmei. Applied Intelligence, 2022, 52(1): 237-253

Sparsity is crucial for deep neural networks: it can improve their learning ability, especially for application to high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1-norm or the L2-norm; however, these are not the most reasonable substitutes for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is an effective approximation to minimizing the L0-norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iterative reweighted-L1 minimization algorithm into k-step contrastive divergence, the connections of a deep belief network can be updated in a sparsely self-adaptive way. Experiments on two kinds of biomedical datasets, both typical small-sample datasets with large numbers of variables (brain functional magnetic resonance imaging data and single-nucleotide polymorphism data), show that the proposed deep belief networks with self-adaptive sparsity can learn layer-wise sparse features effectively, and the results demonstrate better performance, in both identification accuracy and sparsity capability, than several typical learning machines.

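A small sketch of the reweighting idea behind iterative reweighted-L1 minimization of the log-sum penalty, where eps is an assumed smoothing constant: each weight's L1 coefficient is the gradient factor 1/(|w| + eps), so small connections are penalized more strongly on the next iteration.

```python
import numpy as np

def log_sum_penalty(w, eps=1e-3):
    # sum_i log(|w_i| + eps), a smooth surrogate for the L0-norm
    return np.sum(np.log(np.abs(w) + eps))

def reweighted_l1_weights(w, eps=1e-3):
    # per-weight L1 coefficients for the next reweighted-L1 iteration
    return 1.0 / (np.abs(w) + eps)

w = np.array([0.8, -0.05, 0.001, 0.3])
print(log_sum_penalty(w), reweighted_l1_weights(w))
```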

19.
Accurate, real-time short-term traffic flow prediction is essential for building a modern traffic management and service system. To fully mine and exploit the spatio-temporal characteristics arising from the interaction of short-term traffic flows on different road segments, a two-stage screening mechanism composed of the autocorrelation function, the cross-correlation function, and the KNN algorithm is constructed to evaluate each segment's correlation with the target segment and optimize the segment combination, thereby deeply mining the spatial information. A GCN-GRU combined prediction model is then proposed: a graph convolutional network (GCN), with its advantage in globally processing segment topology, further captures the spatial characteristics of short-term traffic flow, while a gated recurrent unit (GRU), with its long-term memory of temporal information, extracts the temporal characteristics. Validation on measured expressway short-term traffic-flow data shows that, with the two-stage screening mechanism for segment selection and the combined deep learning model, prediction performance improves markedly, outperforming classical models such as stacked autoencoders (SAEs) and GRU.
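An illustrative PyTorch sketch of the GCN-GRU combination, with all sizes assumed: one graph-convolution step mixes each segment's features with its neighbours' via a normalized adjacency matrix, and a GRU then models the resulting per-segment sequences over time.

```python
import torch
import torch.nn as nn

class GCNGRU(nn.Module):
    def __init__(self, A_hat, in_dim=1, gcn_dim=16, gru_dim=32):
        super().__init__()
        self.A_hat = A_hat                  # (n_seg, n_seg) normalized adjacency
        self.gcn = nn.Linear(in_dim, gcn_dim)
        self.gru = nn.GRU(gcn_dim, gru_dim, batch_first=True)
        self.head = nn.Linear(gru_dim, 1)

    def forward(self, x):                   # x: (batch, time, n_seg, in_dim)
        h = torch.relu(self.gcn(self.A_hat @ x))        # spatial mixing per step
        b, t, n, d = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, d)  # one sequence per segment
        out, _ = self.gru(h)                             # temporal modeling
        y = self.head(out[:, -1])                        # next-step flow
        return y.view(b, n)
```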

20.
Web software applications have become complex, sophisticated programs based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment: most of the software is never delivered to customers' computers but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability of users to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution; the possible control flows therefore cannot be determined statically, prohibiting several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on the software couplings that are new to web applications: dynamic flow of control, distributed integration, and partially dynamic web-application development. The model rests on the notion of atomic sections, which allow analysis tools to build the analogue of a control-flow graph for web applications. The atomic-section model has numerous uses; this paper applies it to the problem of testing web applications.
