Similar Documents
20 similar documents found
1.
Linear discriminant regression classification (LDRC) was recently presented to boost the effectiveness of linear regression classification (LRC). LDRC aims to find a subspace for LRC in which LRC can achieve high discrimination for classification. As a discriminant analysis algorithm, however, LDRC assigns equal importance to every training sample and ignores the different contributions these samples make to learning the discriminative feature subspace. Motivated by the fact that some training samples are more effective than others in learning the low-dimensional feature space, in this paper we propose an adaptive linear discriminant regression classification (ALDRC) algorithm that takes the different contributions of the training samples into special consideration. Specifically, ALDRC uses weights to characterize the different contributions of the training samples, employs this weighting information to compute the between-class and within-class reconstruction errors, and then seeks an optimal projection matrix that maximizes the ratio of the between-class reconstruction error to the within-class reconstruction error. Extensive experiments on the AR, FERET and ORL face databases demonstrate the effectiveness of the proposed method.
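As background for how this family of classifiers operates, here is a minimal numpy sketch of plain LRC, the base classifier whose discrimination LDRC and ALDRC try to improve. The data layout and function names are illustrative assumptions, and ALDRC's weighted subspace learning itself is not shown; ALDRC would additionally learn a projection applied to the samples before this step.

```python
import numpy as np

def lrc_predict(X_train, y_train, x):
    """Linear regression classification: reconstruct probe x from each
    class's training samples and assign the class with the smallest
    reconstruction residual."""
    classes = np.unique(y_train)
    residuals = []
    for c in classes:
        Xc = X_train[y_train == c].T                    # (dim, n_c), columns are samples
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)   # class-specific least squares
        residuals.append(np.linalg.norm(x - Xc @ beta))
    return classes[int(np.argmin(residuals))]
```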

2.
Neighborhood preserving embedding (NPE) is a linear approximation of locally linear embedding (LLE) that emphasizes preserving the local structure of the data manifold. The improved maximum margin criterion attends to both the discriminative and the geometric structure of the manifold, improving classification performance. This paper proposes a neighborhood preserving maximum margin analysis based on kernel ridge regression: its objective function both preserves the local structure of the manifold and maximizes the margin between different classes. To handle highly nonlinear data manifolds, the algorithm uses kernel ridge regression to compute the transformation matrix of the feature space: it first solves for the dimension-reduced images of the data samples in the kernel subspace, and then recovers the kernel subspace itself. Experiments on standard face databases show that the algorithm is correct and effective, and that its recognition performance surpasses that of ordinary manifold learning algorithms.
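The kernel ridge regression that the method relies on has a simple closed form in the dual. Below is a minimal sketch assuming an RBF kernel and illustrative hyperparameters; the paper's full objective, which adds the neighborhood-preserving and maximum-margin terms, is not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel: exp(-gamma * ||a - b||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, Y, lam=1e-2, gamma=1.0):
    """Closed-form dual solution: alpha = (K + lam*I)^{-1} Y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate the fitted map at new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```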

3.
A robust linear embedding method based on kernel functions
LLE is a relatively new unsupervised learning method aimed mainly at nonlinear dimensionality reduction. To address its shortcomings, this paper proposes a robust linear embedding method based on kernel functions. The method introduces a kernel function to improve the selection of neighborhood points, and in the feature space it corrects the weight matrix W to suppress noise; after derivation, the actual subspace computation reduces to a standard eigenvalue decomposition problem. A nearest neighbor classifier is used to estimate the recognition rate. Tests on the Yale and AT&T face databases show that the improved algorithm achieves good recognition rates under variations in pose, illumination, expression, and the number of training samples.
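For reference, the local reconstruction weights that standard LLE computes, i.e. the W matrix the method above corrects, can be sketched as follows; the regularization constant is an illustrative assumption for stabilizing a near-singular local Gram matrix.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Solve for the weights that best reconstruct x from its neighbors
    (one row of standard LLE's W matrix)."""
    Z = neighbors - x                            # (k, dim): neighbors shifted to origin
    G = Z @ Z.T                                  # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(G))   # stabilize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                           # reconstruction weights sum to one
```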

4.
Sparse representation classification, one of the state-of-the-art classification methods, has been widely studied and successfully applied in face recognition since it was proposed by Wright et al. In this study, we propose a method to generate virtual usable facial images, and we modify the well-known linear regression classification (LRC) and collaborative representation based classification (CRC) for face recognition. The new method integrates the original facial images and virtual symmetrical facial images to form a training sample set of large size. Experimental results show that the proposed method can achieve better performance than most competitive face recognition methods, e.g. LRC, CRC, INNC, SRC, RCR, RRC and the method in Xu et al. (2014). This promising performance is mainly attributed to the fact that the sample combination scheme can exploit limited original training samples to produce a large number of usable training samples and to convey sufficient variations of the original training samples.
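One simple way to realize the "virtual symmetrical image" idea is horizontal mirroring. This is a hedged sketch under that assumption; the paper's exact generation scheme may differ, and the array layout is assumed.

```python
import numpy as np

def augment_with_mirrors(images, labels):
    """Double the training set by adding the horizontal mirror of each
    face image, a simple way to generate 'virtual' symmetric samples.
    Assumes images has shape (n, height, width)."""
    mirrored = images[:, :, ::-1]                 # flip the width axis
    return (np.concatenate([images, mirrored]),
            np.concatenate([labels, labels]))
```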

5.
Classification using the l2-norm-based representation is usually computationally efficient and is able to obtain high accuracy in the recognition of faces. Among l2-norm-based representation methods, linear regression classification (LRC) and collaborative representation classification (CRC) have been widely used. LRC and CRC produce residuals in very different ways, but they both use residuals to perform classification. Therefore, by combining the residuals of these two methods, better performance for face recognition can be achieved. In this paper, a simple weighted-sum-based fusion scheme is proposed to integrate LRC and CRC for more accurate recognition of faces. The rationale of the proposed method is analyzed. Face recognition experiments illustrate that the proposed method outperforms LRC and CRC.
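A minimal sketch of such a residual-level fusion follows, assuming the columns of X are training samples. The fusion weight w, the ridge parameter, and the absence of residual normalization are illustrative simplifications rather than the paper's exact scheme.

```python
import numpy as np

def fused_predict(X, y_labels, x, lam=1e-2, w=0.5):
    """Per-class residuals from LRC and CRC, fused by a weighted sum.
    X is (dim, n) with one training sample per column."""
    classes = np.unique(y_labels)
    # CRC: one collaborative ridge code over ALL training samples
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ x)
    fused = []
    for c in classes:
        idx = (y_labels == c)
        Xc = X[:, idx]
        # LRC residual: class-wise least squares
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)
        r_lrc = np.linalg.norm(x - Xc @ beta)
        # CRC residual: keep only this class's part of the collaborative code
        r_crc = np.linalg.norm(x - Xc @ alpha[idx])
        fused.append(w * r_lrc + (1 - w) * r_crc)
    return classes[int(np.argmin(fused))]
```

In practice the two residual sets may live on different scales, so normalizing each before fusing is a sensible refinement.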

6.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in the kernel feature space. Through a nonlinear mapping, the input data can be mapped implicitly into a high-dimensional kernel feature space where nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
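The kernel-induced distance mentioned above needs only kernel evaluations, never explicit feature-space coordinates. A small sketch, where the RBF kernel and its width are illustrative assumptions:

```python
import numpy as np

def kernel_distance_sq(k, x, y):
    """Squared feature-space distance via kernel evaluations only:
    ||phi(x) - phi(y)||^2 = k(x,x) - 2*k(x,y) + k(y,y)."""
    return k(x, x) - 2.0 * k(x, y) + k(y, y)

# Example kernel (RBF; the width is an illustrative choice):
rbf = lambda a, b, gamma=0.5: np.exp(-gamma * np.sum((a - b) ** 2))
```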

7.
沈健  蒋芸  张亚男  胡学伟 《计算机科学》2016,43(12):139-145
Multiple kernel learning is a recent focus in machine learning. Kernel methods increase the power of linear classifiers by mapping data into a high-dimensional space, and they are an effective way to tackle nonlinear pattern analysis and classification. In complex situations, however, a kernel method built on a single kernel function cannot fully meet practical needs such as heterogeneous or irregular data, large sample sizes, and uneven sample distributions, so combining multiple kernel functions in pursuit of better results is a natural direction of development. This paper therefore proposes a sample-weighted multi-scale kernel support vector machine: the fitting ability of kernel functions at different scales is weighted per sample, yielding a sample-weighted multi-scale kernel SVM decision function. Experiments on multiple data sets show that the proposed method attains high classification accuracy on each of them.
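To make the multi-scale idea concrete, one can combine RBF Gram matrices at several widths and feed the result to an SVM with a precomputed kernel. This sketch uses fixed kernel-level weights, whereas the paper weights per sample, so treat it as a simplified illustration with assumed scales and hyperparameters.

```python
import numpy as np
from sklearn.svm import SVC

def multiscale_gram(A, B, gammas, weights):
    """Convex combination of RBF Gram matrices at several kernel widths."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return sum(w * np.exp(-g * sq) for g, w in zip(gammas, weights))

# Illustrative usage with assumed scales and weights:
# K_tr = multiscale_gram(X_tr, X_tr, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
# clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
# K_te = multiscale_gram(X_te, X_tr, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
# y_hat = clf.predict(K_te)
```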

8.
Generalized discriminant analysis using a kernel approach
Baudat G, Anouar F. Neural Computation, 2000, 12(10): 2385-2404
We present a new method that we call generalized discriminant analysis (GDA) to deal with nonlinear discriminant analysis using a kernel function operator. The underlying theory is close to that of support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into a high-dimensional feature space. In the transformed space, linear properties make it easy to extend and generalize the classical linear discriminant analysis (LDA) to nonlinear discriminant analysis. The formulation is expressed as the solution of an eigenvalue problem. Using different kernels, one can cover a wide class of nonlinearities. For both simulated data and alternate kernels, we give classification results as well as the shape of the decision function. The results are confirmed using real data to perform seed classification.
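For intuition, the two-class special case of kernel discriminant analysis has a closed form for the expansion coefficients (the Mika-style kernel Fisher discriminant). This is a simplified sketch with an assumed regularizer, not GDA's full multi-class eigenproblem.

```python
import numpy as np

def kfd_two_class(K, y, reg=1e-3):
    """Two-class kernel Fisher discriminant: expansion coefficients alpha
    have the closed form (N + reg*I)^{-1} (m0 - m1), where m_c is the mean
    kernel column of class c and N is the within-class kernel scatter.
    Assumes binary labels y (numpy array with values 0/1); K is (n, n)."""
    n = len(y)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]
    N = np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    alpha = np.linalg.solve(N + reg * np.eye(n), m[0] - m[1])
    return alpha  # project a probe x via sum_i alpha_i * k(x, x_i)
```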

9.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space. Thus nonlinear discriminant features of the input data are yielded. In addition, in order to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA can give a higher recognition rate than KPCA and FLDA.

10.
The relevance vector machine (RVM) is a fully probabilistic model whose functional form is equivalent to that of the support vector machine (SVM); when solved with the variational Bayesian (VB) method, the RVM yields posterior distributions for all parameters. Furthermore, by sparsifying the original feature space of the samples, an RVM with a linear kernel can perform linear selection of the original features while classifying. Building on the traditional VB-RVM, this paper proposes a method that combines feature selection and classification. The method uses a Probit model to couple the classification problem with a regression problem, and by expanding the feature dimensions with power transformations it not only enriches the sample information available for classification, allowing nonlinear decision surfaces to be constructed, but also achieves nonlinear feature selection. Experiments on both simulated and real data demonstrate the practicality and effectiveness of the combined feature selection and classification method.
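The power-transform expansion of the feature dimensions can be sketched as follows. The set of powers is an illustrative assumption, and the VB-RVM itself, which would then be fit on the expanded features with a linear kernel, is not shown.

```python
import numpy as np

def power_expand(X, powers=(1, 2, 3)):
    """Expand each feature with elementwise powers; a sparse linear model
    over the expanded block then amounts to nonlinear feature selection
    on the original dimensions. Signs are kept so even powers stay
    informative about direction."""
    return np.concatenate([np.sign(X) * np.abs(X) ** p for p in powers], axis=1)
```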

11.
A nonlinear feature extraction method is presented which can reduce the data dimension down to the number of classes, providing dramatic savings in computational costs. The dimension-reducing nonlinear transformation is obtained by implicitly mapping the input data into a feature space using a kernel function, and then finding a linear mapping, based on an orthonormal basis of the class centroids in the feature space, that maximizes between-class separation. The experimental results demonstrate that our method extracts nonlinear features effectively, so that competitive classification performance can be obtained with linear classifiers in the dimension-reduced space.
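The "dimension equals number of classes" property can be illustrated with a simple centroid-kernel representation: each probe is summarized by its mean kernel similarity to every class, i.e. its kernel value against each feature-space class centroid. This is a hedged simplification of the paper's orthonormal-basis construction.

```python
import numpy as np

def centroid_kernel_features(K_probe_train, y_train):
    """Reduce each probe to C features (C = number of classes): its mean
    kernel similarity to each class, which equals k(x, centroid_c) in
    feature space. K_probe_train is the (n_probe, n_train) kernel matrix."""
    classes = np.unique(y_train)
    return np.stack([K_probe_train[:, y_train == c].mean(axis=1)
                     for c in classes], axis=1)
```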

12.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes the class separability. However, in undersampled problems where the number of data samples is smaller than the dimension of data space, it is difficult to apply LDA due to the singularity of scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high dimensional data, an efficient algorithm is proposed, which reduces the computational complexity greatly while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve the generalized eigenvalue problem, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
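One widely used member of this family of generalizations is the two-stage PCA+LDA scheme: an intermediate PCA step removes the null space that makes the scatter matrices singular. A minimal sketch of the PCA stage follows; the interface and names are assumptions, not necessarily the specific algorithm the paper proposes, and the subsequent LDA step is standard.

```python
import numpy as np

def pca_stage(X, n_keep):
    """Project onto the top principal directions so that the within-class
    scatter in the reduced space becomes nonsingular; ordinary LDA can
    then be run there. In practice n_keep should not exceed
    n_samples - n_classes."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_keep].T              # (dim, n_keep) projection matrix
    return Xc @ P, P
```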

13.
The paper presents an empirical comparison of the most prominent nonlinear manifold learning techniques for dimensionality reduction in the context of high-dimensional microarray data classification. In particular, we assessed the performance of six methods: isometric feature mapping, locally linear embedding, Laplacian eigenmaps, Hessian eigenmaps, local tangent space alignment and maximum variance unfolding. Unlike previous studies on the subject, the experimental framework adopted in this work properly extends the supervised learning paradigm to dimensionality reduction, by regarding the test set as an out-of-sample set of new points which are excluded from the manifold learning process. This avoids a possible overestimate of the classification accuracy, which could yield misleading comparative results. This empirical approach requires a fast and effective out-of-sample embedding method for mapping new high-dimensional data points into an existing reduced space. To this end we propose to apply multi-output kernel ridge regression, an extension of linear ridge regression based on kernel functions which has recently been presented as a powerful method for out-of-sample projection when combined with a variant of isometric feature mapping. Computational experiments on a wide collection of cancer microarray data sets show that classifiers based on Isomap, LLE and LE were consistently more accurate than those relying on HE, LTSA and MVU. In particular, under different experimental conditions the LLE-based classifier emerged as the most effective method, whereas the Isomap algorithm turned out to be the second-best alternative for dimensionality reduction.
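The out-of-sample step described above is straightforward with scikit-learn's KernelRidge, which handles multi-output targets; the kernel choice and hyperparameters below are illustrative assumptions.

```python
from sklearn.kernel_ridge import KernelRidge

def out_of_sample_map(X_train, Y_embed, X_test, gamma=0.1, alpha=1e-2):
    """Fit a multi-output kernel ridge map from input space to the learned
    embedding, then project unseen test points through it. Y_embed is the
    low-dimensional embedding of X_train produced by Isomap/LLE/etc."""
    krr = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    krr.fit(X_train, Y_embed)          # KernelRidge supports multi-output Y
    return krr.predict(X_test)         # embedding coordinates for new points
```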

14.
Although Fisher-criterion-based linear discriminant analysis is recognized as one of the effective methods for feature extraction and has been successfully applied to face recognition, the actual distribution of face images is highly complex owing to variations in illumination, facial expression and pose, so extracting nonlinear discriminant features is clearly necessary. To exploit nonlinear discriminant features for face recognition, this paper proposes a kernel-based subspace discriminant analysis method. The method first uses the kernel trick to map the original samples implicitly into a high-dimensional (even infinite-dimensional) feature space; then, in that feature space, it uses reproducing kernel theory to establish two equivalent models based on a generalized Fisher criterion; finally, it obtains the optimal discriminant vectors via an orthogonal complement space method and uses them for face recognition. Discrimination experiments on the ORL and NUST603 face databases yield recognition rates of 94% and 99.58% respectively, showing that the method is comparable to kernel combination methods and clearly superior to KPCA and Kernel Fisherfaces.

15.
Regression techniques, such as ridge regression (RR) and logistic regression (LR), have been widely used in supervised learning for pattern classification. However, these methods mainly exploit the class label information for linear mapping function learning. They become less effective when the number of training samples per class is small. In visual classification tasks such as face recognition, the appearance of the training sample images also conveys important discriminative information. This paper proposes a novel regression-based classification model, namely Bayesian sample steered discriminative regression (BSDR), which simultaneously exploits the sample class label and the sample appearance for linear mapping function learning by virtue of the Bayesian formula. BSDR learns a linear mapping for each class to extract the image class label features, and classification can be done simply with a nearest neighbor classifier. The proposed BSDR method has advantages such as a small number of mappings, insensitivity to input feature dimensionality and robustness to small sample sizes. Extensive experiments on several biometric databases also demonstrate the promising classification performance of our method.

16.
The extreme learning machine (ELM), a relatively new technique, offers good generalization performance in regression and classification. The weighted fuzzy local information C-means algorithm (WFLICM) uses the spatial information of neighboring pixels to weight the influence of the center pixel, strengthening the noise resistance of fuzzy C-means clustering. Building on ELM theory, this paper improves and optimizes WFLICM, proposing an ELM-based fuzzy local information C-means image segmentation algorithm (new kernel weighted fuzzy local information C-means based on ELM, ELM-NKWFLICM). The method uses ELM feature mapping to project the original data into a high-dimensional ELM hidden space, and then clusters with the improved new kernel weighted fuzzy local information C-means algorithm (NKWFLICM). Experimental results show that ELM-NKWFLICM resists noise more strongly than WFLICM while preserving the details of the original image well; the algorithm is more efficient on complex nonlinear data and also overcomes the sensitivity of fuzzy clustering to the fuzziness exponent.
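The ELM feature mapping at the heart of the method is just a fixed random hidden layer. A minimal sketch, where the hidden width, activation and seed are illustrative assumptions; the fuzzy clustering would then run on H rather than on the raw data.

```python
import numpy as np

def elm_feature_map(X, n_hidden=200, seed=0):
    """ELM-style random feature mapping: pass the data through a fixed
    random hidden layer, H = g(X W + b). Downstream clustering (fuzzy
    C-means in the paper) operates on H instead of X."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid activation
```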

17.
This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular we propose a method, named Orthogonal Neighborhood Preserving Projections (ONPP), which works by first building an "affinity" graph for the data, in a way that is similar to the method of Locally Linear Embedding (LLE). However, in contrast with standard LLE, where the mapping between the input and the reduced spaces is implicit, ONPP employs an explicit linear mapping between the two. As a result, handling new data samples becomes straightforward, as this amounts to a simple linear transformation. We show how to define kernel variants of ONPP, as well as how to apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of ONPP and to compare it with a few competing methods.

18.
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations, called "Locally Linear Discriminant Analysis" (LLDA). The underlying idea is that global nonlinear data structures are locally linear, and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations found to yield locally linearly transformed classes that maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture the nonlinear manifolds of faces which exhibit pose variations. Conventional nonlinear classification methods based on kernels, such as generalized discriminant analysis (GDA) and the support vector machine (SVM), have been developed to overcome the shortcomings of the linear method, but they have the drawbacks of high computational cost of classification and overfitting. Our method is for multiclass nonlinear discrimination and is computationally highly efficient compared to GDA. The method does not suffer from overfitting by virtue of the linear base structure of the solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases. The optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. Classification results are given for both synthetic and real face data.

19.
Nonlinear Dimension Reduction with Kernel Sliced Inverse Regression
Sliced inverse regression (SIR) is a renowned dimension reduction method for finding an effective low-dimensional linear subspace. Like many other linear methods, SIR can be extended to the nonlinear setting via the "kernel trick." The main purpose of this paper is two-fold. We build kernel SIR in a reproducing kernel Hilbert space rigorously, for a more intuitive model explanation and theoretical development. The second focus is on the implementation algorithm of kernel SIR for fast computation and numerical stability. We adopt a low-rank approximation to approximate the huge and dense full kernel covariance matrix and a reduced singular value decomposition technique for extracting kernel SIR directions. We also explore kernel SIR's ability to combine with other linear learning algorithms for classification and regression, including multiresponse regression. Numerical experiments show that kernel SIR is an effective kernel tool for nonlinear dimension reduction and that it can easily combine with other linear algorithms to form a powerful toolkit for nonlinear data analysis.
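For readers unfamiliar with SIR itself, here is a compact sketch of the plain linear version; kernel SIR applies the same slice-mean recipe to kernel features, with the low-rank machinery the abstract describes handling scale. The slice count, target dimension and regularizer are illustrative assumptions.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2, reg=1e-8):
    """Plain sliced inverse regression: whiten X, average the whitened
    data within slices of sorted y, and take the top eigenvectors of the
    weighted covariance of slice means."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False) + reg * np.eye(d))
    Z = Xc @ np.linalg.inv(L.T)                    # whitened data
    M = np.zeros((d, d))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)         # weighted slice-mean covariance
    vals, vecs = np.linalg.eigh(M)
    B = vecs[:, ::-1][:, :n_dirs]                  # top directions, whitened coords
    return np.linalg.solve(L.T, B)                 # map back to original coords
```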

20.
In this paper, we propose a nonlinear feature extraction method for regression problems that reduces the dimensionality of the input space. Previously, a feature extraction method LDAr, a regression version of linear discriminant analysis, was proposed. In this paper, LDAr is generalized to a nonlinear discriminant analysis by using the so-called kernel trick. The basic idea is to map the input space into a high-dimensional feature space where the variables are nonlinear transformations of the input variables. Then we maximize the ratio, in the feature space, of the distances between samples with large differences in the target value to the distances between samples with small differences in the target value. It is well known that the distribution of face images, under perceivable variations in translation, rotation and scaling, is highly nonlinear, and the face alignment problem is a complex regression problem. We have applied the proposed method to various regression problems, including face alignment, and achieved better performance than conventional linear feature extraction methods.
