Similar Articles
20 similar articles found (search time: 484 ms)
1.
A piecewise linear projection algorithm based on Kohonen's Self-Organizing Map is presented. Using this algorithm, a neural network adapts its weights to the input space while obtaining a reduced two-dimensional subspace at each neural node. After the learning process is complete, input data are first projected into their corresponding 2-D subspaces, and then all data in those subspaces are projected into a reference 2-D subspace defined by a reference neural node. With piecewise linear projection, large data sets can be handled more easily than with other projection algorithms such as Sammon's nonlinear mapping (NLM); there is no need to recompute all the input data in order to interpolate new input data into the 2-D output space.

2.
An optimal set of discriminant vectors with statistical uncorrelation
Jin Zhong, Yang Jingyu, Lu Jianfeng. Chinese Journal of Computers, 1999, 22(10): 1105-1108
In the field of pattern recognition, the Foley-Sammon optimal discriminant technique based on the Fisher discriminant criterion function has had a major influence. A general principle of feature extraction is that the extracted features of a pattern should preferably be uncorrelated, yet the discriminant features given by the Foley-Sammon optimal discriminant vectors are statistically correlated.

3.
Dimensionality reducing mappings, often also denoted as multidimensional scaling, are the basis for multivariate data projection and visual analysis in data mining. Topology- and distance-preserving mapping techniques, e.g., Kohonen's self-organizing feature map (SOM) or Sammon's nonlinear mapping (NLM), are available to achieve multivariate data projections for the subsequent interactive visual analysis process. For large databases, however, NLM computation becomes intractable. Also, if additional data points or data sets are to be included in the projection, a complete recomputation of the mapping is required. In general, a neural network could learn the mapping and serve for arbitrary additional data projection, but the computational costs would also be high, and convergence is not easily achieved. In this work, a convenient hierarchical neural projection approach is introduced, where first an unsupervised neural network, e.g., a SOM, quantizes the database, followed by fast NLM mapping of the quantized data. In the second stage of the hierarchy, an enhancement of the NLM by a recall algorithm is applied. The training and application of a second neural network, which learns the mapping by function approximation, is quantitatively compared with this new approach. Efficient interactive visualization and analysis techniques, exploiting the achieved hierarchical neural projection for data mining, are presented.
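The NLM stage of such a hierarchy minimizes Sammon's stress between input-space and output-space interpoint distances. As an illustration (not the authors' implementation), a minimal gradient-descent Sammon mapping might look like the sketch below; applying it only to the SOM prototype vectors rather than the raw data is what keeps the cost tractable for large databases:

```python
import numpy as np

def sammon(X, n_iter=500, lr=0.3, seed=0):
    """Map points X (n x d) to 2-D by gradient descent on Sammon's stress."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # input-space distances
    D[D == 0] = 1e-12
    c = D[np.triu_indices(n, 1)].sum()                     # normalizing constant
    Y = rng.normal(scale=1e-2, size=(n, 2))
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        d[d == 0] = 1e-12
        w = (D - d) / (D * d)                              # per-pair stress weight
        np.fill_diagonal(w, 0.0)
        grad = -2.0 / c * (w[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= lr * grad                                     # descend the stress
    return Y
```

Plain first-order descent is used here for brevity; Sammon's original scheme uses a second-order (pseudo-Newton) step.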

4.
Subspace learning is an important approach in pattern recognition. Nonlinear discriminant analysis (NDA), owing to its ability to describe the nonlinear manifold structure of samples, is considered more powerful for classification tasks in image-related problems. In kernel-based NDA representation, three spaces are involved: the original data space, the implicitly mapped high-dimensional feature space, and the target low-dimensional subspace. Existing methods mainly focus on the information in the original data space to find the most discriminant low-dimensional subspace. The implicit high-dimensional feature space connects the original space and the target subspace to realize the nonlinear dimension reduction, but the sample geometric structure information in the feature space is not exploited. In this work, we try to utilize and explore this information. Specifically, the locality information of samples in the feature space is modeled and integrated into traditional kernel-based NDA methods. In this way, the sample distributions in both the original data space and the mapped high-dimensional feature space are modeled, and more information is expected to be explored to improve the discriminative ability of the subspace. Two algorithms, named FSLC-KDA and FSLC-KSR, are presented. Extensive experiments on the ORL, Extended-YaleB, PIE, Multi-PIE and FRGC databases validate the efficacy of the proposed method.

5.
A novel method based on multi-modal discriminant analysis is proposed to reduce feature dimensionality. First, each class is divided into several clusters by the k-means algorithm. The optimal discriminant analysis is implemented by multi-modal mapping. Our method utilizes only those training samples on and near the effective decision boundary to generate a between-class scatter matrix, which requires less CPU time than other nonparametric discriminant analysis (NDA) approaches [Fukunaga and Mantock in IEEE Trans PAMI 5(6):671–677, 1983; Bressan and Vitria in Pattern Recognit Lett 24(5):2473–2749, 2003]. In addition, no prior assumptions about class and cluster densities are needed. In order to achieve a high verification performance on confusing handwritten numeral pairs, a hybrid feature extraction scheme is developed, which consists of a set of gradient-based wavelet features and a set of geometric features. Our proposed dimensionality reduction algorithm is used to congregate features, and it outperforms principal component analysis (PCA) and other NDA approaches. Experiments show that the proposed method achieves a high feature compression performance without sacrificing discriminant ability for classification. As a result, this new method can reduce artificial neural network (ANN) training complexity and make the ANN classifier more reliable.
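The first step above, splitting each class into clusters with k-means before discriminant analysis, can be sketched as follows. This is an illustrative implementation, not the authors' code; the helper names `kmeans` and `split_classes` are hypothetical:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means; returns per-sample cluster labels and the centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def split_classes(X, y, k):
    """Divide each class into k clusters, as a pre-step for multi-modal
    discriminant analysis; returns a per-sample cluster index within its class."""
    clusters = np.empty(len(X), dtype=int)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        clusters[idx], _ = kmeans(X[idx], k)
    return clusters
```

The per-class clusters then supply the modes between which the multi-modal mapping discriminates.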

6.
Foley-Sammon optimal discriminant vectors using kernel approach
A new nonlinear feature extraction method called kernel Foley-Sammon optimal discriminant vectors (KFSODVs) is presented in this paper. This new method extends the well-known Foley-Sammon optimal discriminant vectors (FSODVs) from the linear domain to a nonlinear domain via the kernel trick used in support vector machines (SVMs) and other commonly used kernel-based learning algorithms. The proposed method also provides an effective technique to solve the so-called small sample size (SSS) problem, which exists in many classification problems such as face recognition. We give the derivation of the KFSODVs and conduct experiments on both simulated and real data sets to confirm that the KFSODV method is superior to previously used kernel-based learning algorithms in terms of discrimination performance.
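As a rough illustration of the kernel trick underlying this family of methods, the sketch below computes a single kernel Fisher discriminant direction for two classes; regularized inversion is a common way to sidestep the SSS problem. The function names and the RBF kernel choice are assumptions for the example, not details from the paper:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

def kfd_fit(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant: dual coefficients alpha such that
    the projection of a point x is sum_i alpha_i * k(x_i, x)."""
    K = rbf(X, X, gamma)
    n = len(X)
    means, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        means.append(Kc.mean(axis=1))
        # within-class scatter expressed in the kernel-induced feature space
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    # regularized solve: a standard remedy when N is singular (SSS problem)
    return np.linalg.solve(N + reg * np.eye(n), means[0] - means[1])

def kfd_project(X_train, alpha, X, gamma=1.0):
    """Project new points X onto the learned discriminant direction."""
    return rbf(X, X_train, gamma) @ alpha
```

KFSODVs compute a whole set of such directions under orthogonality constraints; this sketch shows only the leading one.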

7.
A new Sammon nonlinear mapping algorithm based on reliable and stable fuzzy kernel learning vector quantization (FKLVQ) clustering is proposed. The method maps the data space to a high-dimensional feature space through a Mercer kernel and performs FKLVQ learning in that feature space to obtain effective and stable cluster weight vectors for the data space; the Sammon nonlinear kernel mapping is then applied, in the feature space and the output space, only to the data samples and their respective cluster weight vectors. This reduces the computational complexity while keeping the distance information between data points and cluster centers similar across the data space and the output space. Simulation results verify the reliability and stability of the method.

8.
Sammon's (1969) nonlinear projection method is computationally prohibitive for large data sets, and it cannot project new data points. We propose a low-cost fuzzy rule-based implementation of Sammon's method for structure-preserving dimensionality reduction. This method draws a sample and applies Sammon's method to project it. The input data points are then augmented by the corresponding projected (output) data points. The augmented data set thus obtained is clustered with the fuzzy c-means (FCM) clustering algorithm. Each cluster is then translated into a fuzzy rule to approximate Sammon's nonlinear projection scheme. We consider both Mamdani-Assilian and Takagi-Sugeno models for this, and different schemes of parameter estimation. The proposed schemes are applied to several data sets and are found to be quite effective at projecting new points, i.e., such systems have good predictability.
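The clustering step of the scheme relies on standard fuzzy c-means. A minimal numpy sketch of FCM is given below (illustrative only; the translation of clusters into fuzzy rules is not shown):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means: returns membership matrix U (n x c) and centers V."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                     # rows sum to one
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # membership-weighted centers
        d = np.maximum(((X[:, None] - V[None]) ** 2).sum(-1), 1e-12)
        inv = d ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)      # standard FCM update
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```

Each resulting cluster (center plus memberships) would then seed one fuzzy rule approximating the projection locally.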

9.
On self-organizing algorithms and networks for class-separability features
We describe self-organizing learning algorithms and associated neural networks to extract features that are effective for preserving class separability. As a first step, an adaptive algorithm for the computation of Q^{-1/2} (where Q is the correlation or covariance matrix of a random vector sequence) is described. Convergence of this algorithm with probability one is proven using stochastic approximation theory, and a single-layer linear network architecture for this algorithm is described, which we call the Q^{-1/2} network. Using this network, we describe feature extraction architectures for: 1) unimodal and multicluster Gaussian data in the multiclass case; 2) multivariate linear discriminant analysis (LDA) in the multiclass case; and 3) the Bhattacharyya distance measure for the two-class case. The LDA and Bhattacharyya distance features are extracted by concatenating the Q^{-1/2} network with a principal component analysis network, and the two-layer network is proven to converge with probability one. Every network discussed in the study considers a flow or sequence of inputs for training. Numerical studies on the performance of the networks for multiclass random data are presented.
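For reference, the quantity the adaptive Q^{-1/2} network estimates is the symmetric inverse square root of the correlation/covariance matrix, which whitens the data. A batch (non-adaptive) counterpart via eigendecomposition, as a sketch:

```python
import numpy as np

def inv_sqrt(Q, eps=1e-12):
    """Symmetric inverse square root Q^{-1/2} of a symmetric positive-definite
    matrix, computed by eigendecomposition (batch, non-adaptive)."""
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

# If Q is the covariance of x, then z = inv_sqrt(Q) @ x has identity covariance:
# the whitening property that the adaptive single-layer network converges to.
```

The paper's contribution is the stochastic-approximation (streaming) version of this computation; the batch form above is only the target it converges to.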

10.
Based on various approaches, several different learning algorithms have been given in the literature for neural networks. Almost all have constant learning rates or constant accelerative parameters, though they have been shown to be effective in some practical applications. The learning procedure of a neural network can be regarded as a problem of estimating (or identifying) constant parameters (i.e., the connection weights of the network) with a nonlinear or linear observation equation. Making use of Kalman filtering, we derive a new back-propagation algorithm whose learning rate is computed by a time-varying Riccati difference equation. Perceptron-like and correlational learning algorithms are obtained as special cases. Furthermore, a self-organising algorithm for feature maps is constructed within a similar framework.
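For the linear-observation special case, the Kalman-filter view of learning reduces to recursive least squares: the error-covariance matrix follows a Riccati difference equation and the Kalman gain acts as a time-varying learning rate. A sketch under those assumptions (not the paper's full back-propagation algorithm):

```python
import numpy as np

def rls_train(X, y, p0=1000.0):
    """Recursive least squares, i.e. the Kalman filter for constant weights w
    observed through the linear equation y_t = x_t . w + noise."""
    n = X.shape[1]
    w = np.zeros(n)
    P = p0 * np.eye(n)                       # error covariance, large initial uncertainty
    for x, t in zip(X, y):
        k = P @ x / (1.0 + x @ P @ x)        # Kalman gain: the time-varying step size
        w = w + k * (t - x @ w)              # innovation-driven weight update
        P = P - np.outer(k, x @ P)           # Riccati difference equation
    return w
```

As the covariance P shrinks, the effective learning rate decays automatically, which is precisely the behaviour a constant-rate rule lacks.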

11.
A general feature extraction framework is proposed as an extension of conventional linear discriminant analysis. Two nonlinear feature extraction algorithms based on this framework are investigated. The first is a kernel function feature extraction (KFFE) algorithm. A disturbance term is introduced to regularize the algorithm. Moreover, it is revealed that some existing nonlinear feature extraction algorithms are the special cases of this KFFE algorithm. The second feature extraction algorithm, mean-STD1-norm feature extraction algorithm, is also derived from the framework. Experiments based on both synthetic and real data are presented to demonstrate the performance of both feature extraction algorithms.

12.
Visual data mining with virtual reality spaces is used for the representation of data and symbolic knowledge. High quality structure-preserving and maximally discriminative visual representations can be obtained using a combination of neural networks (SAMANN and NDA) and rough sets techniques, so that a proper subsequent analysis can be made. The approach is illustrated with two types of data: for gene expression cancer data, an improvement in classification performance with respect to the original spaces was obtained; for geophysical prospecting data for cave detection, a cavity was successfully predicted.

13.
In this paper, an efficient feature extraction method named constrained maximum variance mapping (CMVM) is developed. The proposed algorithm can be viewed as a linear approximation of a multi-manifold learning approach that takes the local geometry and manifold labels into account. CMVM shares with the original manifold learning approaches the property that locality is preserved; moreover, CMVM globally maximizes the distances between different manifolds. After the local scatters have been characterized, the method develops a linear transformation that maximizes the dissimilarities between all the manifolds under a locality-preserving constraint. Compared with most up-to-date manifold learning methods, this contributes to pattern classification in two ways: on the one hand, the local structure within each manifold is kept; on the other hand, the discriminant information between manifolds can be explored. Finally, the FERET face database, the CMU PIE face database and the USPS handwriting data are used to examine the effectiveness and efficiency of the proposed method. Experimental results validate that the proposed approach is superior to other feature extraction methods, such as linear discriminant analysis (LDA), locality preserving projection (LPP), unsupervised discriminant projection (UDP) and maximum variance projection (MVP).

14.
A nonlinear projection method based on Kohonen's topology-preserving maps
A nonlinear projection method is presented to visualize high-dimensional data as a 2D image. The proposed method is based on the topology preserving mapping algorithm of Kohonen, which is used to train a 2D network structure. The interpoint distances in the feature space between the units in the network are then graphically displayed to show the underlying structure of the data. Furthermore, we present and discuss a new method to quantify how well a topology preserving mapping algorithm maps the high-dimensional input data onto the network structure. This is used to compare our projection method with the well-known method of Sammon (1969). Experiments indicate that the performance of the Kohonen projection method is comparable to or better than Sammon's method for the purpose of classifying clustered data. Its time complexity depends only on the resolution of the output image, and not on the size of the dataset. A disadvantage, however, is the large amount of CPU time required.
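The underlying topology-preserving training of a 2-D Kohonen network can be sketched as follows (an illustrative minimal SOM; the graphical display of interpoint distances described above is not shown):

```python
import numpy as np

def train_som(X, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 2-D Kohonen SOM: returns one weight vector per grid unit."""
    rng = np.random.default_rng(seed)
    h, w = grid
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    W = rng.random((h * w, X.shape[1]))
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                       # draw a random sample
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best matching unit
        frac = t / n_iter
        lr = lr0 * (1 - frac)                             # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5                 # shrinking neighbourhood
        g = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * g[:, None] * (x - W)                    # pull neighbourhood toward x
    return W.reshape(h, w, -1)
```

After training, the distances between weight vectors of neighbouring grid units are what the projection method displays as an image.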

15.
Clustering Incomplete Data Using Kernel-Based Fuzzy C-means Algorithm

16.
Subspace learning is the process of finding a proper feature subspace and then projecting high-dimensional data onto the learned low-dimensional subspace. The projection operation requires many floating-point multiplications and additions, which makes the projection process computationally expensive. To tackle this problem, this paper proposes two simple-but-effective fast subspace learning and image projection methods, fast Haar transform (FHT) based principal component analysis and FHT based spectral regression discriminant analysis. The advantages of these two methods result from employing both the FHT for subspace learning and the integral vector for feature extraction. Experimental results on three face databases demonstrated their effectiveness and efficiency.
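The fast Haar transform both methods build on computes pairwise averages and details recursively in O(n) operations. A minimal 1-D sketch (illustrative only; the paper applies the transform to images and combines it with PCA/spectral regression, which is not shown here):

```python
import numpy as np

def fht(v):
    """Fast Haar transform (averages/differences form) of a length-2^k vector."""
    v = np.asarray(v, dtype=float)
    if len(v) == 1:
        return v.copy()
    avg = (v[0::2] + v[1::2]) / 2.0          # pairwise averages
    det = (v[0::2] - v[1::2]) / 2.0          # pairwise details
    return np.concatenate([fht(avg), det])   # recurse on the averages only
```

Because each level halves the problem, the total work is n + n/2 + n/4 + … < 2n additions and subtractions, which is the source of the speed-up over dense projections.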

17.
18.
In recent years, to address the shortage of matched training data in practical application scenarios, researchers have developed the concept of transfer learning, which aims to improve learning in a target domain by extracting and transferring feature information from source-domain data. According to the different data types handled by transfer learning, this paper constructs two typical models: a single-class projection-basis construction model and a supervised multi-class projection model. Since subspace projection can, to some extent, reflect the feature properties of the original sample space, this paper applies techniques from linear discriminant analysis and the idea of maximum mean discrepancy to construct solution algorithms for the above models, and extends the corresponding nonlinear kernel methods.

19.
Multiple-instance discriminant analysis (MIDA) is proposed to address the feature extraction problem in multiple-instance learning. Similar to MidLABS, MIDA is derived from linear discriminant analysis (LDA), and both algorithms can be treated as multiple-instance extensions of LDA. Unlike MidLABS, which learns at the bag level, MIDA is designed at the instance level. MIDA consists of two versions, binary-class MIDA (B-MIDA) and multi-class MIDA (M-MIDA), which handle binary-class (standard) and multi-class multiple-instance learning tasks, respectively. A block coordinate ascent approach, in which positive prototypes (the most positive instance in a positive bag is termed the positive prototype of that bag) and projection vectors are sought alternately and iteratively, is proposed to optimize B-MIDA and M-MIDA and obtain lower-dimensional transformation subspaces. Extensive experiments empirically demonstrate the effectiveness of B-MIDA and M-MIDA in extracting discriminative components and weakening class-label ambiguities for instances in positive bags.

20.
Dictionary learning usually employs linear functions to capture the latent features of data, which cannot fully extract the intrinsic feature structure. In recent years, deep learning methods have attracted much attention for their powerful feature-representation ability. This paper therefore proposes a nonlinear feature-representation strategy that combines deep learning with dictionary learning: deep neural network-based dictionary learning (DNNDL). DNNDL integrates a dictionary-learning module into a conventional deep network structure: in the low-dimensional embedding space obtained through an autoencoder mapping, it simultaneously learns a data dictionary and the sparse representation coefficients over it, thereby achieving end-to-end extraction of latent data features. DNNDL generates compact and discriminative representations for both existing data and out-of-sample points. It is not only a new deep network structure but can also be viewed as a unified framework combining dictionary learning and deep learning. Extensive experiments on four real data sets show that the proposed method has better data-representation ability than commonly used methods.
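The abstract does not specify how DNNDL computes the sparse representation coefficients; a standard choice for sparse coding over a fixed dictionary is iterative shrinkage-thresholding (ISTA), sketched here purely as an illustrative assumption:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse code a for signal x over dictionary D (columns = atoms):
    minimizes 0.5 * ||x - D a||^2 + lam * ||a||_1 by iterative shrinkage."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a - D.T @ (D @ a - x) / L, lam / L)
    return a
```

In a DNNDL-style pipeline, x would be the autoencoder's low-dimensional embedding of a sample and D the learned dictionary in that space.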
