Similar Documents
 20 similar documents found.
1.
Multidimensional scaling (MDS) has been widely used in dimensionality reduction and data mining. Its main drawback is that the mapping is defined only on the training data, so the embedding of a new test sample cannot be obtained directly. In addition, MDS is based on the Euclidean distance metric and is therefore ill-suited to capturing the nonlinear manifold structure of similar data. This work extends MDS to a correlation-metric space, yielding Correlation-Metric Multidimensional Scaling (CMDS). Unlike traditional MDS, which completes the mapping only within the training data and thereby restricts its scope, CMDS can directly produce the embedding of a test sample. Moreover, because CMDS is built on a correlation metric, it can effectively learn the nonlinear manifold structure of similar data. Theoretical analysis shows that CMDS can be extended to a new feature space with kernel methods to handle nonlinear problems. Experimental results show that CMDS and its kernel form KG-CMDS outperform commonly used traditional dimensionality reduction methods.
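As a rough illustration of embedding with a correlation-based dissimilarity, the sketch below runs classical MDS on a correlation-distance matrix; it is not the paper's CMDS formulation, and the function name, distance choice, and parameters are assumptions.

import numpy as np

def correlation_mds(X, n_components=2):
    # Pairwise dissimilarity: 1 - Pearson correlation between samples (assumed choice).
    D = 1.0 - np.corrcoef(X)
    # Classical MDS: double-center the squared dissimilarities.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Top eigenpairs of B give the low-dimensional coordinates.
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

X = np.random.rand(100, 20)              # 100 samples, 20 features
Y = correlation_mds(X)                   # (100, 2) embedding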

2.
To make fuller use of the latent information in face images, a method is proposed that obtains multi-scale image features by using convolution kernels of different sizes: the Multi-Scale Convolutional Auto-Encoder (MSCAE). The features extracted at different scales reflect the essential information of the face and allow face images to be reconstructed more faithfully. The feature-extraction framework is a hierarchical structure of alternating convolution and sampling layers, which makes the features highly invariant to rotation, translation, and scaling. MSCAE is trained in an encoder-decoder fashion to obtain a feature extractor; the extracted features are then fused into a feature vector used for classification. Classification results with a BP neural network on the ORL and Yale face databases show that the multi-scale features outperform single-scale features in both recognition rate and overall performance. Furthermore, fusing the MSCAE features with HOG (Histograms of Oriented Gradients) features achieves a higher recognition rate than either feature alone.
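A minimal sketch of the multi-scale idea, assuming parallel convolution branches with different kernel sizes whose outputs are concatenated into one feature map; the channel counts, pooling, and kernel sizes are illustrative assumptions rather than the paper's exact MSCAE architecture.

import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    def __init__(self, in_ch=1, ch_per_scale=8, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One convolutional branch per kernel size; padding keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, ch_per_scale, k, padding=k // 2),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Concatenate the per-scale feature maps along the channel axis.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

enc = MultiScaleEncoder()
feats = enc(torch.randn(4, 1, 32, 32))   # -> shape (4, 24, 16, 16)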

3.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
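The paper's efficient recursive formula is not reproduced here; the sketch below only illustrates the greedy criterion itself, recomputing the least-squares reconstruction error from scratch at every step.

import numpy as np

def greedy_feature_selection(X, k):
    n, d = X.shape
    selected = []
    for _ in range(k):
        best_f, best_err = None, np.inf
        for f in range(d):
            if f in selected:
                continue
            S = X[:, selected + [f]]
            # Least-squares reconstruction of X from the candidate feature subset.
            W, *_ = np.linalg.lstsq(S, X, rcond=None)
            err = np.linalg.norm(X - S @ W) ** 2
            if err < best_err:
                best_f, best_err = f, err
        selected.append(best_f)
    return selected

X = np.random.rand(50, 10)
print(greedy_feature_selection(X, 3))    # indices of the 3 selected features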

4.
An unsupervised competitive learning algorithm based on the classical k-means clustering algorithm is proposed. The proposed learning algorithm, called the centroid neural network (CNN), estimates centroids of the related cluster groups in training data. This paper also explains the algorithmic relationships between the CNN and some conventional unsupervised competitive learning algorithms, including Kohonen's self-organizing map and Kosko's differential competitive learning algorithm. The CNN algorithm requires neither a predetermined schedule for the learning coefficient nor a predetermined total number of iterations for clustering. Simulation results on clustering and image compression problems show that CNN converges much faster than conventional algorithms with comparable clustering quality, while other algorithms may give unstable results depending on the initial values of the learning coefficient and the total number of iterations.
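As a loose illustration of schedule-free centroid updates, the sketch below uses a count-based step size (a MacQueen-style online update); it is not the full CNN algorithm, which also handles the case of a sample changing clusters.

import numpy as np

def online_centroids(X, k, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    counts = np.zeros(k)
    for _ in range(epochs):
        for x in X:
            j = np.argmin(np.linalg.norm(centroids - x, axis=1))   # winner
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]          # count-based step, no schedule
    return centroids

X = np.random.rand(200, 2)
print(online_centroids(X, 3))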

5.
We propose a method for visual tracking-by-detection based on online feature learning. Our learning framework performs feature encoding with respect to an over-complete dictionary, followed by spatial pyramid pooling. We then learn a linear classifier based on the resulting feature encoding. Unlike previous work, we learn the dictionary online and update it to help capture the appearance of the tracked target as well as the background. In more detail, given a test image window, we extract local image patches from it, and each local patch is encoded with respect to the dictionary. The encoded features are then pooled over a spatial pyramid to form an aggregated feature vector. Finally, a simple linear classifier is trained on these features. Our experiments show that the proposed tracker, though simple, is powerful and outperforms all the state-of-the-art tracking methods that we have tested. Moreover, we evaluate the performance of different dictionary learning and feature encoding methods in the proposed tracking framework, and analyze the impact of each component in the tracking scenario. In particular, we show that a small dictionary, learned and updated online, is as effective as and more efficient than a huge dictionary learned offline. We further demonstrate the flexibility of feature learning by showing how it can be used within a structured learning tracking framework. The outcome is one of the best trackers reported to date, which combines the advantages of both feature learning and structured output prediction. We also implement a multi-object tracker, which achieves state-of-the-art performance.
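The snippet below is only a toy of the encode-then-pool stage: a crude soft-assignment encoder (an assumption; the paper evaluates several encoders) applied to a grid of local patch descriptors, followed by max pooling over a two-level spatial pyramid.

import numpy as np

def encode_and_pool(patches, D, grid_h, grid_w):
    # patches: (grid_h*grid_w, p) local descriptors in row-major grid order
    # D: (n_atoms, p) dictionary
    sims = patches @ D.T                                    # similarity to each atom
    codes = np.maximum(sims - sims.mean(axis=1, keepdims=True), 0.0)
    codes = codes.reshape(grid_h, grid_w, -1)
    pooled = [codes.max(axis=(0, 1))]                       # level 0: whole window
    for i in range(2):                                      # level 1: 2x2 cells
        for j in range(2):
            cell = codes[i*grid_h//2:(i+1)*grid_h//2, j*grid_w//2:(j+1)*grid_w//2]
            pooled.append(cell.max(axis=(0, 1)))
    return np.concatenate(pooled)                           # aggregated feature vector

feat = encode_and_pool(np.random.rand(64, 36), np.random.rand(128, 36), 8, 8)
print(feat.shape)                                           # (640,) = 5 cells x 128 atoms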

6.
This letter presents a new memristor crossbar array system and demonstrates its applications in image learning. Controlled-pulse and image-overlay techniques are introduced for programming the memristor crossbars and promise better noise-reduction performance. A time-slot technique helps improve image processing speed. Simulink and numerical simulations are employed to demonstrate the applications of the proposed circuit structure in image learning.

7.
8.
Clustering aims to partition a data set into homogeneous groups which gather similar objects. Object similarity, or more often object dissimilarity, is usually expressed in terms of some distance function. This approach, however, is not viable when dissimilarity is conceptual rather than metric. In this paper, we propose to extract the dissimilarity relation directly from the available data. To this aim, we train a feedforward neural network with some pairs of points with known dissimilarity. Then, we use the dissimilarity measure generated by the network to guide a new unsupervised fuzzy relational clustering algorithm. An artificial data set and a real data set are used to show how the clustering algorithm based on the neural dissimilarity outperforms some widely used (possibly partially supervised) clustering algorithms based on spatial dissimilarity.
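A minimal sketch of this pipeline under assumed names: fit a small feed-forward regressor on labelled pairs to learn the dissimilarity, then fill a relational (dissimilarity) matrix for a downstream clustering algorithm. The network size and the use of scikit-learn's MLPRegressor are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def learn_dissimilarity(pairs_a, pairs_b, dissim, hidden=(32, 16)):
    # Train on concatenated pairs with their known dissimilarity values.
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
    net.fit(np.hstack([pairs_a, pairs_b]), dissim)
    return net

def relation_matrix(net, X):
    n = len(X)
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            R[i, j] = net.predict(np.hstack([X[i], X[j]])[None, :])[0]
    return (R + R.T) / 2                         # symmetrize the learned relation

A, B = np.random.rand(200, 4), np.random.rand(200, 4)
net = learn_dissimilarity(A, B, np.linalg.norm(A - B, axis=1))
R = relation_matrix(net, np.random.rand(20, 4))  # input to a relational clustering step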

9.
A novel neural network called Class Directed Unsupervised Learning (CDUL) is introduced. The architecture, based on a Kohonen self-organising network, uses additional input nodes to feed class knowledge to the network during training, in order to optimise the final positioning of Kohonen nodes in feature space. The structure and training of CDUL networks are detailed, showing that (a) networks cannot suffer from the problem of single Kohonen nodes being trained by vectors of more than one class, (b) the number of Kohonen nodes necessary to represent the classes is found during training, and (c) the number of training set passes CDUL requires is low in comparison to similar networks. CDUL is subsequently applied to the classification of chemical excipients from Near Infrared (NIR) reflectance spectra, and its performance compared with three other unsupervised paradigms. The results thereby obtained demonstrate a superior performance which remains relatively constant across a wide range of network parameters.

10.
Unsupervised learning is used to categorize multidimensional data into a number of meaningful classes on the basis of the similarity or correlation between individual samples. In neural-network implementations of various unsupervised algorithms such as principal component analysis, competitive learning or the self-organizing map, sample vectors are normalized to equal lengths so that similarity can be easily and efficiently obtained from their dot products. In general, sample vectors span the whole multidimensional feature space, and existing normalization methods distort the intrinsic patterns present in the sample set. In this work, a novel normalization method is proposed that maps the samples to a new space with one additional dimension. The original distribution of the samples in the feature space is shown to be almost preserved in the transformed space. Simple rules are given to map from the original space to the normalized space and vice versa.
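A hedged sketch of the general trick of equalizing vector lengths by appending one coordinate (the exact mapping rules in the paper may differ): every sample is lifted to d+1 dimensions so that all lifted vectors share the same length, without rescaling the original coordinates relative to one another.

import numpy as np

def normalize_extra_dim(X):
    norms = np.linalg.norm(X, axis=1)
    R = norms.max()                                  # common target length
    extra = np.sqrt(R ** 2 - norms ** 2)             # value of the added (d+1)-th coordinate
    Xn = np.hstack([X, extra[:, None]]) / R          # all rows now have unit length
    return Xn, R

def denormalize(Xn, R):
    return Xn[:, :-1] * R                            # drop the extra coordinate and rescale

X = np.random.rand(10, 3)
Xn, R = normalize_extra_dim(X)
print(np.linalg.norm(Xn, axis=1))                    # all ones
print(np.allclose(denormalize(Xn, R), X))            # True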

11.
Neural-network front ends in unsupervised learning
Proposed is an idea of partial supervision realized in the form of a neural-network front end to schemes of unsupervised learning (clustering). This neural network leads to an anisotropic nature of the induced feature space. The anisotropic property of the space provides the local deformations necessary to properly represent labeled data and enhances the efficiency of the clustering mechanisms exploited afterwards. The training of the network is completed based upon available labeled patterns; a referential form of the labeling gives rise to reinforcement learning. It is shown that the discussed approach is universal and can be utilized in conjunction with any clustering method. Experimental studies concentrate on three main categories of unsupervised learning, including FUZZY ISODATA, Kohonen self-organizing maps, and hierarchical clustering.

12.
Wu, Wei; Alvarez, Jaime; Liu, Chengcheng; Sun, Hung-Min. Microsystem Technologies, 2018, 24(1): 209-217
This research focuses on bot detection through implementation of techniques such as traffic analysis, unsupervised machine learning, and similarity analysis between...

13.
Feature extraction is a key step in image recognition, and accurate feature representations lead to more accurate classification. A method is adopted that uses a soft-threshold encoder and the Orthogonal Matching Pursuit (OMP) algorithm to orthogonalize the visual dictionary, improving the recognition rate of a single-stage computational structure; a two-stage computational structure is then built to obtain more accurate image features and further raise the recognition rate. Experiments show that the soft-threshold encoder and the OMP algorithm improve the feature-extraction capability of the single-stage structure and raise the recognition rate on large-sample datasets. The two-stage structure improves the recognition rate on a self-collected dataset, and the OMP algorithm improves the recognition rate on the VOC2012 data. On the self-collected dataset, the two-stage structure outperforms the single-stage structure, shows an advantage over the NIN architecture, and is comparable to a convolutional neural network (CNN), indicating that the two-stage structure adapts well to that dataset.
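Only the soft-threshold encoding step is sketched below against a fixed dictionary; the OMP-based orthogonalization of the visual dictionary and the two-stage structure are not reproduced, and the threshold value is an assumed hyper-parameter.

import numpy as np

def soft_threshold_encode(X, D, alpha=0.25):
    # X: (n, p) patch descriptors, D: (k, p) dictionary with unit-norm rows
    Z = X @ D.T                                   # dot-product similarity to each atom
    return np.maximum(Z - alpha, 0.0)             # one-sided soft threshold

D = np.random.randn(64, 36)
D /= np.linalg.norm(D, axis=1, keepdims=True)
codes = soft_threshold_encode(np.random.randn(100, 36), D)
print(codes.shape)                                # (100, 64) sparse-ish codes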

14.
We present methods of extractive query-oriented single-document summarization using a deep auto-encoder (AE) to compute a feature space from the term-frequency (tf) input. Our experiments explore both local and global vocabularies. We investigate the effect of adding small random noise to the local tf as the input representation of the AE, and propose an ensemble of such noisy AEs which we call the Ensemble Noisy Auto-Encoder (ENAE). ENAE is a stochastic version of an AE that adds noise to the input text and selects the top sentences from an ensemble of noisy runs. In each individual experiment of the ensemble, a different randomly generated noise is added to the input representation. This architecture changes the application of the AE from a deterministic feed-forward network to a stochastic runtime model. Experiments show that the AE using local vocabularies clearly provides a more discriminative feature space and improves recall by 11.2% on average. The ENAE can make further improvements, particularly in selecting informative sentences. To cover a wide range of topics and structures, we perform experiments on two different publicly available email corpora that are specifically designed for text summarization. We used ROUGE as a fully automatic metric for text summarization and report the average ROUGE-2 recall for all experiments.
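The toy below keeps only the ensemble-of-noisy-runs idea: a linear encoder (TruncatedSVD) stands in for the deep auto-encoder, and ranking sentences by cosine similarity to the query code is an assumed scoring rule, so this should be read as a loose illustration rather than the ENAE itself.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ensemble_rank(sentences, query, runs=5, dim=2, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    tf = CountVectorizer().fit_transform(sentences + [query]).toarray().astype(float)
    scores = np.zeros(len(sentences))
    for _ in range(runs):
        noisy = tf + rng.normal(0.0, noise, tf.shape)        # perturb the tf input
        codes = TruncatedSVD(n_components=dim).fit_transform(noisy)
        scores += cosine_similarity(codes[:-1], codes[-1:]).ravel()
    return np.argsort(-scores)                               # sentence ranking

sents = ["the cat sat on the mat", "stocks fell sharply today",
         "the dog chased the cat", "markets rallied after the news"]
print(ensemble_rank(sents, "cat and dog"))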

15.
Three well-known algorithms for unsupervised learning using a decision-directed approach are the random labeling of patterns according to the estimated a posteriori probabilities, the classification according to the estimated a posteriori probabilities, and the iterative solution of the maximum likelihood equations. The convergence properties of these algorithms are studied by using a sample of about 10 000 handwritten numerals. It turns out that the iterative solution of the maximum likelihood equations has the best properties among the three approaches. However, even this one fails to yield satisfactory results if the number of unknown parameters becomes large, as is usually the case in realistic problems of pattern recognition.
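As a tiny concrete instance, the sketch below implements the hard decision-directed scheme for two one-dimensional Gaussian classes with equal priors and known equal variances, so that classification by estimated a posteriori probabilities reduces to nearest-mean assignment followed by re-estimation of the means; the handwritten-numeral setting of the paper is of course far larger.

import numpy as np

def decision_directed(x, iters=20):
    m = np.array([x.min(), x.max()], dtype=float)                 # crude initial means
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.abs(x[:, None] - m[None, :]).argmin(axis=1)   # MAP decision (equal priors/variances)
        for k in (0, 1):
            if np.any(labels == k):
                m[k] = x[labels == k].mean()                      # re-estimate class means
    return m, labels

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
print(decision_directed(x)[0])                                    # approx. [-2, 2]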

16.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of the data and increasing the efficiency of learning algorithms. In particular, feature selection in the absence of class labels, namely unsupervised feature selection, is challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, building on matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient way to deal with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward without relying on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; in this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms outperform the others on almost all of the datasets used here.

17.
This letter presents a novel unsupervised competitive learning rule called the boundary adaptation rule (BAR), for scalar quantization. It is shown both mathematically and by simulations that BAR converges to equiprobable quantizations of univariate probability density functions and that, in this way, it outperforms other unsupervised competitive learning rules.
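For context on the target of the rule, the snippet below builds an equiprobable K-level scalar quantizer directly from empirical quantiles; it is a reference construction, not the BAR update itself.

import numpy as np

def equiprobable_quantizer(x, K):
    # Interior boundaries at the 1/K, 2/K, ... quantiles give K equally probable cells.
    boundaries = np.quantile(x, np.linspace(0, 1, K + 1)[1:-1])
    edges_lo = np.r_[-np.inf, boundaries]
    edges_hi = np.r_[boundaries, np.inf]
    levels = np.array([x[(x >= lo) & (x < hi)].mean() for lo, hi in zip(edges_lo, edges_hi)])
    return boundaries, levels

x = np.random.default_rng(0).normal(size=10000)
b, q = equiprobable_quantizer(x, 4)
print(b)                                   # close to the N(0,1) quartile boundaries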

18.
A large and influential class of neural network architectures uses postintegration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented here in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through preintegration lateral inhibition, does provide appropriate coding properties and can be used to learn such representations efficiently. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that preintegration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.

19.
This paper proposes a novel unsupervised feature selection method by jointly exploiting self-representation and subspace learning. In this method, we adopt the idea of self-representation and use all the features to represent each feature. A Frobenius-norm regularization is used for feature selection since it can overcome the over-fitting problem. The Locality Preserving Projection (LPP) is used as a regularization term as it can maintain the local adjacency relations between data points when performing the feature space transformation. Further, a low-rank constraint is introduced to find the effective low-dimensional structures of the data, which can reduce redundancy. Experimental results on real-world datasets verify that the proposed method can select the most discriminative features and outperforms the state-of-the-art unsupervised feature selection methods in terms of classification accuracy, standard deviation, and coefficient of variation.
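A stripped-down sketch of the self-representation term alone (the LPP regularizer and the low-rank constraint from the paper are omitted): solve min_W ||X - XW||_F^2 + lambda*||W||_F^2 in closed form and rank features by the row norms of W; lambda and the top-k choice are assumed parameters.

import numpy as np

def self_representation_scores(X, lam=1.0):
    d = X.shape[1]
    G = X.T @ X
    # Closed-form minimizer: W = (X^T X + lam*I)^{-1} X^T X
    W = np.linalg.solve(G + lam * np.eye(d), G)
    return np.linalg.norm(W, axis=1)              # importance score per feature

X = np.random.rand(100, 8)
scores = self_representation_scores(X)
print(np.argsort(-scores)[:3])                    # indices of the top-3 features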

20.