Similar Documents
20 similar documents found.
1.
Feature learning is a key problem in pattern recognition. Autoencoder-based deep neural networks can effectively extract the key information in data and form features through unsupervised pre-training followed by supervised fine-tuning. This paper proposes a marginal Fisher analysis algorithm based on stacked denoising autoencoders, which applies marginal Fisher analysis in the supervised fine-tuning stage to further improve the feature learning ability of the algorithm. Experimental results show that the proposed algorithm achieves better recognition performance than the standard stacked denoising autoencoder and the restricted-Boltzmann-machine-based deep belief network.
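For readers who want a concrete picture of the layer-wise unsupervised pretraining the abstract builds on, the following is a minimal PyTorch sketch of training one denoising-autoencoder layer; the supervised marginal Fisher analysis fine-tuning stage is not shown, and all layer sizes, noise levels, and names are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class DenoisingAELayer(nn.Module):
    """One layer of a stacked denoising autoencoder (illustrative, untied weights)."""
    def __init__(self, in_dim, hid_dim, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        x_noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        h = self.encoder(x_noisy)                            # hidden representation
        return self.decoder(h), h                            # reconstruct the clean input

def pretrain_layer(layer, data, epochs=10, lr=1e-3):
    """Unsupervised pretraining: minimize reconstruction error against clean inputs."""
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = layer(data)
        loss = loss_fn(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return layer

# Toy usage: pretrain one 784 -> 256 layer on random "images" in [0, 1].
x = torch.rand(512, 784)
layer1 = pretrain_layer(DenoisingAELayer(784, 256), x)
```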

2.
Extreme learning machine (ELM), which can be viewed as a variant of Random Vector Functional Link (RVFL) network without the input–output direct connections, has been extensively used to create multi-layer (deep) neural networks. Such networks employ randomization based autoencoders (AE) for unsupervised feature extraction followed by an ELM classifier for final decision making. Each randomization based AE acts as an independent feature extractor and a deep network is obtained by stacking several such AEs. Inspired by the better performance of RVFL over ELM, in this paper, we propose several deep RVFL variants by utilizing the framework of stacked autoencoders. Specifically, we introduce direct connections (feature reuse) from preceding layers to the fore layers of the network as in the original RVFL network. Such connections help to regularize the randomization and also reduce the model complexity. Furthermore, we also introduce denoising criterion, recovering clean inputs from their corrupted versions, in the autoencoders to achieve better higher level representations than the ordinary autoencoders. Extensive experiments on several classification datasets show that our proposed deep networks achieve overall better and faster generalization than the other relevant state-of-the-art deep neural networks.
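As a rough illustration of the RVFL building block the paper starts from (random hidden weights plus direct input-to-output connections, with only the output weights solved in closed form), here is a small NumPy sketch; it is a single shallow RVFL, not the stacked deep variants proposed in the paper, and the ridge parameter and layer size are arbitrary.

```python
import numpy as np

def train_rvfl(X, Y, n_hidden=200, ridge=1e-2, seed=0):
    """Single-layer RVFL: random hidden features + direct connections, ridge solution for output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random weights, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # random hidden features
    D = np.hstack([H, X])                             # direct connections: reuse the raw input
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta

# Toy usage with one-hot targets for a 3-class problem.
X = np.random.rand(300, 20)
Y = np.eye(3)[np.random.randint(0, 3, 300)]
W, b, beta = train_rvfl(X, Y)
pred = predict_rvfl(X, W, b, beta).argmax(axis=1)
```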

3.
In hyperspectral remote sensing image classification, manual labeling of samples is time-consuming and laborious, large numbers of unlabeled samples go unused, and most methods rely mainly on spectral information while ignoring spatial information. To address these problems, this paper proposes a hyperspectral image classification method that combines spatial-spectral information with active deep learning. First, principal component analysis is used to reduce the dimensionality of the original image; a small square neighborhood around each pixel is then extracted as its spatial information and combined with its original spectral information to form a spatial-spectral feature. Next, a sparse autoencoder is used to obtain a sparse feature representation of the raw data, and a deep neural network is built by layer-wise unsupervised training of sparse autoencoders to output deep features of the data. These features are fed into a softmax classifier, and a small number of labeled samples are used to fine-tune the model in a supervised manner. Finally, an active learning algorithm selects the most uncertain samples for labeling and adds them to the training set to improve the classifier. Classification experiments on the PaviaU and PaviaC images show that, with few labeled samples, the method effectively improves classification accuracy compared with traditional methods.
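To make the spatial-spectral feature construction described above concrete, here is a hedged NumPy/scikit-learn sketch: PCA reduces the spectral bands, a small square window around each pixel supplies the spatial part, and the two are concatenated with the raw spectrum. Window size, number of components, and array names are illustrative, and the sparse-autoencoder and active-learning stages are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def spatial_spectral_features(cube, n_components=4, window=5):
    """cube: (H, W, B) hyperspectral image. Returns one feature vector per pixel:
    [flattened PCA patch around the pixel, raw spectrum of the pixel]."""
    H, W, B = cube.shape
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(cube.reshape(-1, B)).reshape(H, W, n_components)

    r = window // 2
    padded = np.pad(reduced, ((r, r), (r, r), (0, 0)), mode="reflect")
    feats = []
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + window, j:j + window, :].ravel()  # spatial context
            feats.append(np.concatenate([patch, cube[i, j, :]]))    # + raw spectrum
    return np.asarray(feats)

# Toy usage on a random 32x32 cube with 100 bands.
features = spatial_spectral_features(np.random.rand(32, 32, 100))
print(features.shape)  # (1024, 5*5*4 + 100)
```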

4.
Deep neural networks are models with complex structures and many nonlinear processing units, and they are widely used in computer vision, natural language processing, and other fields. However, they suffer from the critical flaw of being uninterpretable, the so-called "black box problem", which remains a major obstacle to applying deep learning across domains. This paper proposes a new deep neural network model, the knowledge-based stacked denoising autoencoder (KBSDAE). It attempts to explain the network structure and its internal working mechanism in a logic language, while ensuring that the logic rules support deep inference. By further inserting the extracted rules into the deep network, KBSDAE can not only construct deep network models adaptively, with interpretability and visualization, but also effectively improve pattern recognition performance. Extensive experimental results show that the extracted rules can effectively represent the deep network and can also initialize the network structure, improving KBSDAE's feature learning performance, model interpretability, and visualization, and making it more widely applicable.

5.
The concept of deep dictionary learning (DDL) has been recently proposed. Unlike shallow dictionary learning, which learns a single level of dictionary to represent the data, it uses multiple layers of dictionaries. So far, the problem could only be solved in a greedy fashion; this was achieved by learning a single layer of dictionary at each stage, where the coefficients from the previous layer acted as inputs to the subsequent layer (only the first layer used the training samples as inputs). This was not optimal; there was feedback from shallower to deeper layers but not the other way. This work proposes an optimal solution to DDL whereby all the layers of dictionaries are solved simultaneously. We employ the Majorization Minimization approach. Experiments have been carried out on benchmark datasets; they show that optimal learning indeed improves over greedy piecemeal learning. Comparison with other unsupervised deep learning tools (stacked denoising autoencoder, deep belief network, contractive autoencoder and K-sparse autoencoder) shows that our method surpasses them in both accuracy and speed.
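For orientation, the greedy layer-by-layer scheme the abstract contrasts with can be sketched with scikit-learn's DictionaryLearning: learn a dictionary on the data, treat the sparse codes as inputs to the next layer, and repeat. The jointly optimal Majorization Minimization solution proposed in the paper is not reproduced here, and all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def greedy_deep_dictionary(X, layer_sizes=(64, 32)):
    """Greedy multi-layer dictionary learning: each layer's sparse codes feed the next layer."""
    dicts, codes = [], X
    for n_atoms in layer_sizes:
        dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                max_iter=20, transform_algorithm="lasso_lars")
        codes = dl.fit_transform(codes)  # sparse codes become the next layer's input
        dicts.append(dl)
    return dicts, codes                  # deepest-layer codes serve as features

# Toy usage on random data.
X = np.random.rand(200, 100)
_, deep_codes = greedy_deep_dictionary(X)
print(deep_codes.shape)  # (200, 32)
```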

6.
A connection between score matching and denoising autoencoders (total citations: 1; self-citations: 0; citations by others: 1)
Vincent P. Neural Computation, 2011, 23(7): 1661-1674
Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from it or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models.
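A compact way to write the correspondence the abstract refers to, with notation introduced here for illustration and Gaussian corruption of variance sigma^2 assumed as in the usual denoising setup, is the denoising score matching objective:

```latex
% Denoising score matching with Gaussian corruption
% q_\sigma(\tilde{x} \mid x) = \mathcal{N}(\tilde{x};\, x,\, \sigma^2 I)
J_{\mathrm{DSM}}(\theta)
  = \mathbb{E}_{q_\sigma(\tilde{x},\, x)}
    \left[\, \tfrac{1}{2}
      \left\| \psi(\tilde{x};\theta)
        - \frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}}
      \right\|^2 \right],
\qquad
\frac{\partial \log q_\sigma(\tilde{x} \mid x)}{\partial \tilde{x}}
  = \frac{x - \tilde{x}}{\sigma^2}.
```

When the model's score psi is taken proportional to the difference between the autoencoder's reconstruction of the corrupted input and the corrupted input itself, minimizing this objective coincides, up to a constant factor, with minimizing the usual denoising reconstruction error, which is the equivalence the abstract states.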

7.
Existing implicit-feedback collaborative algorithms use sparse binary social trust information directly to assist recommendation; they suffer from severe data sparsity and do not deeply fuse the influence of social trust information. To address these problems, this paper proposes an algorithm that uses a denoising autoencoder to deeply fuse users' implicit feedback data with social information. Trust between users is first distinguished from different perspectives, and a new measure of trust similarity is proposed to alleviate the sparsity of the social data. A denoising autoencoder then deeply fuses the trust data with users' implicit interaction information, and by combining the influence of both, the recommendation quality is effectively improved. Experiments show that the algorithm outperforms existing mainstream implicit-feedback recommendation algorithms.

8.
Cluster analysis is a common analytical method widely used in many scenarios. With the development of machine learning, deep clustering has become a research hotspot, and autoencoder-based deep clustering algorithms are representative of this line of work. To track the development of autoencoder-based deep clustering, this paper introduces four autoencoder models and categorizes representative algorithms of recent years according to their autoencoder structures. Experimental comparisons and analyses of traditional clustering algorithms and autoencoder-based deep clustering algorithms are conducted on the MNIST, USPS, and Fashion-MNIST datasets. Finally, the current problems of autoencoder-based deep clustering are summarized and future research directions are discussed.

9.
Feature extraction is a key step in software defect prediction, and its quality determines the performance of the defect prediction model; however, traditional feature extraction methods struggle to capture the deep, essential features of software defect data. Autoencoders from deep learning can automatically learn features from raw data and obtain their feature representations. To further strengthen the robustness of the autoencoder, this paper proposes a feature extraction method based on stacked denoising sparse autoencoders. By varying the number of hidden layers, the sparsity constraints, and the noise injection scheme, the method can directly and efficiently extract the multi-level feature representations needed for classification and prediction from software defect data. Experimental results on the Eclipse defect datasets show that the method performs better than traditional feature extraction methods.
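The sparsity constraint mentioned above is often implemented as a KL-divergence penalty on the average hidden activation; the following PyTorch fragment is a hedged sketch of such a loss for one denoising layer, where the masking noise level, target sparsity, and penalty weight are illustrative choices rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

def sparse_denoising_loss(encoder, decoder, x, rho=0.05, beta=1.0, drop_prob=0.2):
    """Reconstruction loss + KL sparsity penalty for one denoising-autoencoder layer."""
    x_noisy = x * (torch.rand_like(x) > drop_prob).float()   # masking noise
    h = torch.sigmoid(encoder(x_noisy))                       # activations in (0, 1)
    recon = torch.sigmoid(decoder(h))

    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)             # average activation per hidden unit
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return nn.functional.mse_loss(recon, x) + beta * kl

# Toy usage with a 30 -> 10 layer on made-up defect features.
enc, dec = nn.Linear(30, 10), nn.Linear(10, 30)
x = torch.rand(64, 30)
loss = sparse_denoising_loss(enc, dec, x)
loss.backward()
```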

10.
Most current image classification methods use supervised or semi-supervised learning to reduce image dimensionality, but both require label information. For the dimensionality reduction and classification of unlabeled images, this paper proposes a mixed-order stacked sparse autoencoder that performs unsupervised dimensionality reduction for classification learning. First, a serial stacked autoencoder network with three hidden layers is built; each hidden layer is trained separately, with the output of the previous hidden layer used as the input of the next, extracting features from the image data and reducing its dimensionality. Second, the features of the first and second hidden layers of the trained stacked autoencoder are concatenated to form a matrix of mixed-order features. Finally, a support vector machine classifies the reduced image features and the accuracy is evaluated. Comparison experiments with seven baseline algorithms on four public image datasets show that the proposed method can extract features from unlabeled images, perform classification learning, reduce classification time, and improve classification accuracy.
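The mixed-order step, concatenating the features of the first two trained hidden layers before classification, can be illustrated as follows; the sketch stands in two already-trained encoder layers with random weights and uses scikit-learn's SVC, with all names and sizes made up for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-ins for the first two pretrained encoder layers of the stacked autoencoder.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((100, 64)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((64, 32)), rng.standard_normal(32)

def encode(X):
    h1 = np.tanh(X @ W1 + b1)          # first hidden layer features
    h2 = np.tanh(h1 @ W2 + b2)         # second hidden layer features
    return np.hstack([h1, h2])         # mixed-order feature matrix

# Toy data: features come from unsupervised encoding, labels only train the SVM.
X, y = rng.random((300, 100)), rng.integers(0, 3, 300)
clf = SVC(kernel="rbf").fit(encode(X[:200]), y[:200])
print(clf.score(encode(X[200:]), y[200:]))
```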

11.
Multi-label learning is a machine learning framework in which a single instance is associated with a set of labels simultaneously, and it is one of the research hotspots in the field; dimensionality reduction is an important and challenging task in multi-label learning. In contrast to supervised multi-label dimensionality reduction methods, this paper proposes an unsupervised autoencoder-based multi-label dimensionality reduction method. First, an autoencoder neural network is built to encode the input data and decode the output; then a sparsity constraint is introduced into the overall cost, which is solved iteratively by gradient descent; finally, the autoencoder model is obtained through deep learning training, and data features are extracted to achieve dimensionality reduction. In the experiments, the multi-label algorithm ML-kNN is used as the classifier, and the method is compared with four other methods on six public datasets. The results show that the method can effectively extract features without using labels, reduce the dimensionality of multi-label data, and consistently improve multi-label learning performance.

12.
Visual motion segmentation (VMS) is an important and key part of many intelligent crowd systems. It can be used to figure out the flow behavior through a crowd and to spot unusual life-threatening incidents like crowd stampedes and crashes, which pose a serious risk to public safety and have resulted in numerous fatalities over the past few decades. Trajectory clustering has become one of the most popular methods in VMS. However, complex data, such as a large number of samples and parameters, make it difficult for trajectory clustering to produce accurate motion segmentation results. This study introduces a spatial-angular stacked sparse autoencoder model (SA-SSAE) with l2-regularization and softmax, a powerful deep learning method for visual motion segmentation that clusters similar motion patterns belonging to the same cluster. The proposed model can extract meaningful high-level features using only spatial-angular features obtained from refined tracklets (a.k.a. 'trajectories'). We adopt l2-regularization and sparsity regularization, which can learn sparse representations of features, to guarantee the sparsity of the autoencoders. We employ the softmax layer to map the data points into accurate cluster representations. One of the main advantages of the SA-SSAE framework is that it can manage VMS even when individuals move around randomly. This framework helps cluster the motion patterns effectively with higher accuracy. We put forward a new dataset with its manual ground truth, including 21 crowd videos. Experiments conducted on two crowd benchmarks demonstrate that the proposed model can group trajectories more accurately than the traditional clustering approaches used in previous studies. The proposed SA-SSAE framework achieved a 0.11 improvement in accuracy and a 0.13 improvement in the F-measure compared with the best current method on the CUHK dataset.

13.
In recent years, numerous works have been proposed to leverage deep learning techniques to improve social-aware recommendation performance. In most cases, training a robust deep learning model, which contains many parameters to fit the training data, requires a large amount of data. However, both user rating data and social networks face a critical sparsity problem, which makes it difficult to train a robust deep neural network model. To address this problem, we propose a novel correlative denoising autoencoder (CoDAE) method that takes correlations between users with multiple roles into account to learn robust representations from sparse inputs of ratings and social networks for recommendation. We develop the CoDAE model by utilizing three separate autoencoders to learn user features with the roles of rater, truster, and trustee, respectively. In particular, because each input unit of the user vectors with the truster and trustee roles corresponds to a particular user, we propose to utilize shared parameters to learn common information of the units corresponding to the same users. Moreover, we propose a correlation regularization term to learn correlations between the user features learnt by the three subnetworks of the CoDAE model. We further conduct a series of experiments to evaluate the proposed method on two public datasets for the top-N recommendation task. The experimental results demonstrate that the proposed model outperforms state-of-the-art algorithms on the rank-sensitive metrics MAP and NDCG.

14.
As an important branch of deep learning, the autoencoder has attracted many outstanding researchers in the field. They have studied its nature in depth and, on that basis, proposed many improved variants, such as the sparse autoencoder, denoising autoencoder, contractive autoencoder, and convolutional autoencoder. After carefully reading a number of papers on autoencoder-based methods, we find that these improved autoencoders achieve good experimental results in image classification, natural language processing, object recognition, and other tasks. This paper therefore analyzes the basic structures and principles of these improved autoencoders in detail and evaluates the experimental results reported in the literature from multiple perspectives.

15.
The paper is focused on demonstrating the advantages of deep learning approaches over an ordinary shallow neural network through their comparative application to image classification on the popular benchmark databases FERET and MNIST. An autoassociative neural network is used as a standalone program realizing nonlinear principal component analysis to extract the most informative features of the input data before feeding the neural networks that are subsequently compared as classifiers. A special study of the optimal choice of activation function and of the normalization transformation of the input data allows the efficiency of the autoassociative program to be improved. A further study devoted to the denoising properties of this program demonstrates its high efficiency even on noisy data. Three types of neural networks are compared: a feed-forward neural net with one hidden layer, a deep network with several hidden layers, and a deep belief network with several pretraining layers realized as restricted Boltzmann machines. The number of hidden layers and the number of hidden neurons in them were chosen by a cross-validation procedure to balance the number of layers and hidden neurons against classification efficiency. The results of our comparative study demonstrate the clear advantage of deep networks, as well as the denoising power of autoencoders. In our work we use both a multiprocessor graphics card and cloud services to speed up our calculations. The paper is oriented to specialists in concrete fields of scientific or experimental applications who already have some knowledge of artificial neural networks, probability theory, and numerical methods.

16.
Autoencoders have been successfully used to build deep hierarchical models of data. However, a deep architecture usually needs further supervised fine-tuning to obtain better discriminative capacity. To improve the discriminative capacity of deep hierarchical features, this paper proposes a new deterministic autoencoder, trained by a label-consistency-constraint algorithm that injects discriminative information into the network. We introduce the center loss as a label consistency constraint to learn the hidden features of the data and add it to the sparse autoencoder to form a new autoencoder, namely the Label Consistency Constrained Sparse AutoEncoder (LCCSAE). Specifically, the center loss learns the center of each class and simultaneously penalizes the distances between the features and their corresponding class centers. In the end, autoencoders are stacked to form a deep LCCSAE architecture for image classification tasks. To validate the effectiveness of LCCSAE, we compare it with other autoencoders in terms of the deeply learned features and the subsequent classification tasks on the MNIST and CIFAR-bw datasets. Experimental results demonstrate the superiority of LCCSAE over other methods.
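As a rough sketch of the label-consistency idea, the center-loss term that penalizes distances between hidden features and their class centers can be written as below; how the centers are learned and combined with the sparse-autoencoder objective here is a simplified assumption, not the exact LCCSAE formulation.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Penalize the distance between each feature and the (learnable) center of its class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Toy usage on hidden features produced by an autoencoder's encoder.
feats = torch.randn(32, 16, requires_grad=True)
labels = torch.randint(0, 10, (32,))
center_loss = CenterLoss(num_classes=10, feat_dim=16)
total = 1.0 * center_loss(feats, labels)   # + reconstruction and sparsity terms in practice
total.backward()
```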

17.
Li Daqiu, Fu Zhangjie, Xu Jun. Applied Intelligence, 2021, 51(5): 2805-2817

With the outbreak of COVID-19, medical imaging such as computed tomography (CT) based diagnosis has proved to be an effective way to fight against the rapid spread of the virus. It is therefore important to study computerized models for infection detection based on CT imaging. New deep learning-based approaches have been developed for CT-assisted diagnosis of COVID-19. However, most current studies are based on small datasets of COVID-19 CT images, as few datasets are publicly available for patient privacy reasons. As a result, the performance of deep learning-based detection models needs to be improved on small datasets. In this paper, a stacked autoencoder detector model is proposed to greatly improve the performance of detection models in terms of precision and recall. First, four autoencoders are constructed as the first four layers of the whole stacked autoencoder detector model to extract better features from CT images. Second, the four autoencoders are cascaded together and connected to the dense layer and the softmax classifier to constitute the model. Finally, a new classification loss function is constructed by superimposing the reconstruction loss to enhance the detection accuracy of the model. Experimental results show that our model performs well on a small COVID-19 CT image dataset. Our model achieves an average accuracy, precision, recall, and F1-score of 94.7%, 96.54%, 94.1%, and 94.8%, respectively. These results reflect the ability of our model to discriminate COVID-19 images, which might help radiologists in the diagnosis of suspected COVID-19 patients.
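The combined objective the abstract describes, a classification loss with a reconstruction term superimposed, can be sketched as a weighted sum; the weighting factor, the tiny stand-in network, and the variable names below are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU())   # stands in for the stacked autoencoders
decoder = nn.Linear(64, 256)
classifier = nn.Linear(64, 2)                             # COVID-19 vs. non-COVID-19

def detector_loss(x, y, lam=0.5):
    """Cross-entropy for detection plus a superimposed reconstruction loss."""
    z = encoder(x)
    ce = nn.functional.cross_entropy(classifier(z), y)
    rec = nn.functional.mse_loss(decoder(z), x)
    return ce + lam * rec

# Toy batch of flattened CT features.
x, y = torch.rand(8, 256), torch.randint(0, 2, (8,))
loss = detector_loss(x, y)
loss.backward()
```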


18.

In recent times, chronic kidney disease (CKD) has affected more than 10% of the population worldwide, and millions of people die from it every year. Early-stage detection of CKD could therefore increase the life expectancy of suffering patients and reduce treatment cost. There is a need for a multimedia-driven model that can help diagnose the disease efficiently and with high accuracy before it progresses to worse conditions. In the past, researchers have applied various conventional machine learning techniques without involving multimodal data-driven learning. This research paper offers a novel deep learning framework for chronic kidney disease classification using a stacked autoencoder model that utilizes multimedia data with a softmax classifier. The stacked autoencoder extracts useful features from the dataset, and a softmax classifier then predicts the final class. The model is evaluated on the UCI dataset, which contains records of 400 early-stage CKD patients with 25 attributes, posed as a binary classification problem. Precision, recall, specificity, and F1-score were used as evaluation metrics for the assessment of the proposed network. The multimodal model was observed to outperform the other conventional classifiers used for chronic kidney disease, with a classification accuracy of 100%.


19.
Aero-engines are the core power systems of aircraft and operate in harsh environments; condition monitoring and remaining-life prediction for them are important technical means of ensuring safe and reliable flight. This paper studies a remaining useful life (RUL) prediction method for aero-engines based on stacked sparse autoencoder neural networks. Several autoencoder networks are first connected to form a deep stacked autoencoder; the engine's condition data are used as training inputs so that the network extracts distributed patterns in the data layer by layer, building a stacked autoencoder model of engine degradation. A BP neural network then classifies the engine's remaining-life interval, and this classification serves as the RUL prediction result. The method is validated on the engine degradation data of the PHM2008 challenge, and the results confirm the effectiveness of the stacked autoencoder deep learning approach for aero-engine RUL prediction.

20.
This paper proposes a deep fused symmetric subspace learning model for sparse feature extraction. On the basis of deep subspaces, symmetry and sparsity constraints are introduced, and deep features are extracted by building a deep mapping network. First, a basic subspace model is built according to the criterion of minimizing reconstruction error, with symmetry and sparsity constraints added during construction. The basic subspace model is then deepened to obtain a deep symmetric sparse subspace model. Finally, the features of all layers are fused and encoded to produce the deep feature extraction result. Experiments on face and object databases show that the algorithm achieves high recognition rates and good robustness to illumination, expression, and face orientation. Compared with deep learning frameworks such as convolutional neural networks, it has the advantages of a simple structure and fast convergence.

