Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Linear and kernel discriminant analysis are popular approaches for supervised dimensionality reduction. Uncorrelated and regularized discriminant analysis have been proposed to overcome the singularity problem encountered by classical discriminant analysis. In this paper, we study the properties of kernel uncorrelated and regularized discriminant analysis, called KUDA and KRDA, respectively. In particular, we show that under a mild condition, both linear and kernel uncorrelated discriminant analysis project all samples in the same class to a common vector in the dimensionality-reduced space. This implies that uncorrelated discriminant analysis may suffer from overfitting if there are many samples in each class. We show that as the regularization parameter in KRDA tends to zero, KRDA approaches KUDA. Thus KUDA is a special case of KRDA, and regularization can be applied to overcome the overfitting problem in uncorrelated discriminant analysis. As the performance of KRDA depends on the value of the regularization parameter, we show that the matrix computations involved in KRDA can be simplified, so that a large number of candidate values can be cross-validated efficiently. Finally, we conduct experiments to evaluate the proposed theories and algorithms.
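The efficient cross-validation sweep mentioned in the abstract hinges on a standard trick: factor the kernel matrix once, then solve the regularized system cheaply for every candidate parameter. A minimal sketch (the variable names, the linear kernel, and the function `solve_reg` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
K = X @ X.T          # linear-kernel Gram matrix (stand-in for any PSD kernel)
y = rng.standard_normal(50)

# One-time eigendecomposition: K = U diag(w) U^T
w, U = np.linalg.eigh(K)
Uty = U.T @ y

def solve_reg(lam):
    """Solve (K + lam*I) a = y, reusing the cached eigendecomposition."""
    return U @ (Uty / (w + lam))

# Sweeping many regularization values now costs O(n^2) each, not O(n^3)
for lam in [1e-3, 1e-2, 1e-1, 1.0]:
    a = solve_reg(lam)
    # Sanity check against a direct solve
    assert np.allclose(a, np.linalg.solve(K + lam * np.eye(50), y))
```

Each additional candidate value then costs only two matrix-vector products instead of a fresh factorization.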

2.
To address two problems of the linear discriminant analysis (LDA) approach in face recognition, namely nonlinearity and singularity, this paper proposes a novel kernel machine-based rank-lifting regularized discriminant analysis (KRLRDA) method. A rank-lifting theorem is first proven using linear algebra. Combining the rank-lifting strategy with a three-to-one regularization technique, a complete regularization methodology is developed for the within-class scatter matrix. The proposed regularized scheme not only adjusts the projection directions but also tunes their corresponding weights. Moreover, it is shown that the final regularized within-class scatter matrix approaches the original one as the regularization parameter tends to zero. Two publicly available databases, the FERET and CMU PIE face databases, are selected for evaluation. Compared with some existing kernel-based LDA methods, the proposed KRLRDA approach gives superior performance.

3.
This paper addresses two problems in linear discriminant analysis (LDA) for face recognition. The first is the recognition of human faces under pose and illumination variations. It is well known that the distribution of face images with different pose, illumination, and facial expression is complex and nonlinear, so traditional linear methods such as LDA do not give satisfactory performance. The second is the small sample size (S3) problem, which occurs when the number of training samples is smaller than the dimensionality of the feature vector; in turn, the within-class scatter matrix becomes singular. To overcome these limitations, this paper proposes a new kernel machine-based one-parameter regularized Fisher discriminant (K1PRFD) technique. K1PRFD is developed from our previously developed one-parameter regularized discriminant analysis method and the well-known kernel approach; it therefore has two parameters, the regularization parameter and the kernel parameter. The paper further proposes a new method, based on the conjugate gradient method, to simultaneously determine the optimal kernel parameter of the RBF kernel and the regularization parameter of the within-class scatter matrix. Three databases, FERET, Yale Group B, and CMU PIE, are selected for evaluation. The results are encouraging: compared with existing LDA-based methods, the proposed method gives superior results.

4.
Kernelized nonlinear extensions of Fisher's discriminant analysis, discriminant analysis based on the generalized singular value decomposition (LDA/GSVD), and discriminant analysis based on the minimum squared error formulation (MSE) have recently been widely used for handling undersampled high-dimensional problems and nonlinearly separable data sets. As data sets are modified by incorporating new data points and deleting obsolete ones, efficient updating and downdating algorithms are needed for these methods to avoid expensive recomputation of the solution from scratch. In this paper, an efficient algorithm for adaptive linear and nonlinear kernel discriminant analysis based on regularized MSE, called adaptive KDA/RMSE, is proposed. In adaptive KDA/RMSE, updating and downdating of the computationally expensive eigenvalue decomposition (EVD) or singular value decomposition (SVD) is approximated by updating and downdating of the QR decomposition, achieving an order-of-magnitude speed-up. This fast algorithm for adaptive kernelized discriminant analysis is designed by utilizing regularization techniques and the relationship between linear and nonlinear discriminant analysis and the MSE. In addition, an efficient algorithm to compute leave-one-out cross-validation is introduced by utilizing downdating of KDA/RMSE.
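QR updating and downdating routines of the kind adaptive KDA/RMSE relies on are available off the shelf, e.g. in SciPy. A minimal illustration of a row-wise update and downdate (this shows only the decomposition bookkeeping, not the full adaptive algorithm):

```python
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
Q, R = qr(A)                         # full QR factorization

# Updating: append a new data point (row) without refactoring from scratch
u = rng.standard_normal(3)
Q1, R1 = qr_insert(Q, R, u, 6, which='row')
assert np.allclose(Q1 @ R1, np.vstack([A, u]))

# Downdating: remove the first row when that data point becomes obsolete
Q2, R2 = qr_delete(Q1, R1, 0, which='row')
assert np.allclose(Q2 @ R2, np.vstack([A, u])[1:])
```

Each update or downdate costs O(n²) rather than the O(n³) of a fresh factorization, which is the source of the speed-up claimed above.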

5.
This paper presents an analysis of some regularization aspects of continuous-time model identification. The study particularly focuses on linear filter methods and shows that filtering the data before estimating their derivatives corresponds to a regularized signal derivative estimation obtained by minimizing a compound criterion whose expression is given explicitly. A new structure based on a null-phase filter, corresponding to a true regularization filter, is proposed; it makes it possible to discuss the effects of the filter phase on parameter estimation by comparing its performance with that of Poisson filter-based methods. Based on this analysis, a formulation of continuous-time model identification as joint estimation of the system input-output signals and the model parameters is suggested. In this framework, two linear filter methods are interpreted and a compound criterion is proposed in which regularization is ensured by a model-fitting measure, resulting in a new regularization filter structure for signal estimation.

6.
A Quasi-Optimal Image Restoration Technique Based on Regularization
An efficient regularization-based image restoration technique is proposed. The smaller the energy of the regularized residual, the better the restoration; based on this observation, the wavelet transform is used to analyze qualitatively how to choose the regularization operator, stochastic theory is used to obtain the expected energy of the regularized residual, and the regularization parameter is determined by minimizing this expectation, yielding the regularized image. The qualitative analysis shows that, in general, a low-stop, high-pass regularization operator should be chosen. Experimental results show that the technique restores images better than traditional methods, with near-optimal and stable performance.

7.
An efficient regularization-based image restoration technique is proposed, designed around minimizing the blur error of the regularized solution. A Taylor-series analysis qualitatively determines which regularization operators make the blur-error energy of the regularized solution small, concluding that a low-stop, high-pass regularization operator should generally be chosen. Stochastic theory is used to minimize the expected energy of the blur error and thereby determine the regularization parameter, and the wavelet transform is used to estimate the noise energy, so the method restores images efficiently even without prior noise-energy information. Experimental results show that the technique outperforms traditional restoration methods, with near-optimal and stable performance, and requires no noise-energy information.

8.
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel with parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: the A-optimality and D-optimality criteria, each combined with the regularized orthogonal least squares algorithm, and the PRESS statistic combined with the regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency of the orthogonalization scheme in (regularized) orthogonal least squares is preserved, so the new algorithms remain computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
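The PRESS statistic used for structure selection can be computed from a single regularized fit via the hat matrix, with no refitting per left-out point. A sketch assuming a plain ridge-regularized linear-in-the-parameters model (the data and the regularization value are illustrative, and this is not the orthogonal-least-squares implementation of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 6))
y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(40)

lam = 0.1  # ridge regularization parameter
# Regularized least squares: beta = (X^T X + lam I)^{-1} X^T y
G = np.linalg.inv(X.T @ X + lam * np.eye(6))
H = X @ G @ X.T                     # hat (smoother) matrix
e = y - H @ y                       # in-sample residuals

# PRESS from a single fit: leave-one-out residual_i = e_i / (1 - h_ii)
press = np.sum((e / (1.0 - np.diag(H))) ** 2)

# Brute-force check: actually refit with each point held out
press_naive = 0.0
for i in range(40):
    m = np.ones(40, dtype=bool); m[i] = False
    b = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(6), X[m].T @ y[m])
    press_naive += (y[i] - X[i] @ b) ** 2
assert np.allclose(press, press_naive)
```

The e_i / (1 - h_ii) identity is exact for ridge regression, which is what makes PRESS-based structure selection computationally cheap.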

9.
Based on the properties that the regularization parameter should possess in regularized restoration, this paper proposes a new spatial-domain iterative restoration algorithm built on an adaptive regularization-parameter selection scheme.

10.
Clustering analysis of temporal gene expression data is widely used to study dynamic biological systems, for example to identify sets of genes regulated by the same mechanism. However, temporal gene expression data often contain noise, missing data points, and non-uniformly sampled time points, which poses challenges for traditional clustering methods in extracting meaningful information. In this paper, we introduce an improved clustering approach based on regularized spline regression and an energy-based similarity measure. The proposed approach models each gene expression profile as a B-spline expansion, with the spline coefficients estimated from the observed data by a regularized least squares scheme. To compensate for the inadequate information in noisy and short gene expression data, we use correlated genes as the test set to choose the optimal number of basis functions and the regularization parameter, and we show that this treatment helps avoid over-fitting. After fitting the continuous representations of the gene expression profiles, we cluster them using an energy-based similarity measure, which incorporates the temporal information and the relative changes of each time series through its first and second derivatives. We demonstrate that our method is robust to noise and can produce meaningful clustering results.
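The core fitting step can be sketched as regularized least squares on a B-spline basis evaluated at non-uniform time points (knot placement, sample sizes, and the regularization value below are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
t_obs = np.sort(rng.uniform(0, 1, 30))          # non-uniform sample times
y = np.sin(2 * np.pi * t_obs) + 0.2 * rng.standard_normal(30)

k = 3                                            # cubic splines
knots = np.concatenate([[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)])
n_basis = len(knots) - k - 1

# Design matrix: each B-spline basis element evaluated at the sample times
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(t_obs) for j in range(n_basis)
])

# Regularized least squares for the spline coefficients
lam = 1e-3
c = np.linalg.solve(B.T @ B + lam * np.eye(n_basis), B.T @ y)
fit = BSpline(knots, c, k)                       # continuous fitted profile
```

Once the continuous profile is fitted, `fit.derivative()` supplies the first and second derivatives that an energy-based similarity measure would use.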

11.
This paper presents an overview of regularized techniques in discriminant analysis. The case of continuous variables is treated first, then the case of discrete variables. Three types of approaches are distinguished: combining standard methods, constraining models, and Bayesian modelling. We include numerical experiments to assess the efficiency of regularized versions of predictive discrimination and to illustrate the superiority of regularization over variable subset selection in a small-sample setting.
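A minimal example of the "constraining models" flavor of regularization in a small-sample setting, shrinking a singular within-class scatter matrix toward the identity before computing a Fisher direction (a generic sketch with illustrative data, not any specific method from the survey):

```python
import numpy as np

rng = np.random.default_rng(4)
# Small-sample setting: 10 samples per class in 20 dimensions
X0 = rng.standard_normal((10, 20))
X1 = rng.standard_normal((10, 20)) + 1.0

m0, m1 = X0.mean(0), X1.mean(0)
# Within-class scatter: rank <= 18 < 20, so it is singular
Sw = np.cov(X0, rowvar=False) * 9 + np.cov(X1, rowvar=False) * 9

lam = 0.1
Sw_reg = Sw + lam * np.eye(20)         # shrink toward the identity
w = np.linalg.solve(Sw_reg, m1 - m0)   # regularized Fisher direction

# The two classes separate along the regularized discriminant axis
scores0, scores1 = X0 @ w, X1 @ w
assert scores1.mean() > scores0.mean()
```

Without the `lam * np.eye(20)` term, the solve would fail (or be numerically meaningless) because the scatter matrix is singular, which is exactly the small-sample problem regularization addresses.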

12.
We study a semi-supervised learning method based on the similarity graph and the regularized Laplacian. We give a convenient optimization formulation of the regularized Laplacian method and establish several of its properties. In particular, we show that the kernel of the method can be interpreted in terms of discrete- and continuous-time random walks and possesses several important properties of proximity measures. Both optimization and linear algebra methods can be used for efficient computation of the classification functions. We demonstrate on numerical examples that the regularized Laplacian method is robust with respect to the choice of the regularization parameter and outperforms the Laplacian-based heat kernel methods.
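The regularized Laplacian classifier admits a one-line closed form, f = (I + βL)⁻¹y; a small sketch on two synthetic clusters with one labeled point each (the graph construction and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two clusters; label only one point per cluster
X = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(2, 0.2, (10, 2))])
y = np.zeros(20)
y[0], y[10] = 1.0, -1.0                     # +1 / -1 seeds, 0 = unlabeled

# Similarity graph with a Gaussian kernel
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W                   # combinatorial graph Laplacian

beta = 1.0                                  # regularization parameter
# Regularized Laplacian classification: f = (I + beta * L)^{-1} y
f = np.linalg.solve(np.eye(20) + beta * L, y)
labels = np.sign(f)
assert (labels[:10] == 1).all() and (labels[10:] == -1).all()
```

The matrix (I + βL)⁻¹ is the regularized Laplacian kernel mentioned in the abstract; diffusing the two seed labels through it labels both clusters correctly.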

13.
In Gaussian mixture modeling, it is crucial to select the number of Gaussians, i.e., the mixture model order, for a sample data set. Under regularization theory, we solve this model selection problem by implementing entropy regularized likelihood (ERL) learning on Gaussian mixtures via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further give an adaptive gradient implementation of ERL learning on Gaussian mixtures, together with a theoretical analysis, and find a mechanism of generalized competitive learning implied in ERL learning.

14.
To address the non-orthogonal solution vectors, unstable solution space, and limited nonlinear processing capability of traditional LDA-type semi-supervised feature extraction methods, the LPA-SKFST method is proposed. The front stage, LPA, enlarges the labeled sample set through label propagation; the back stage, SKFST (semi-supervised kernel optimal discriminant vector set, i.e., kernel Foley-Sammon transform), applies bidirectional regularization to KFST by introducing a global-structure-preserving regularizer and Tikhonov regularization, and uses a pairwise-space solution method to obtain a unified solution for both singular and nonsingular Fisher denominator matrices. In classification experiments on the circle, iris, and wine data sets and an in-house pearl spectral set, the accuracies of PCA, LDA, SLDA, and SDG fluctuate with the data set, the proportion of labeled samples, and label reliability, while LPA-SKFST remains stably above 85%. These results show that LPA-SKFST overcomes the limitations of a small labeled-sample proportion and unreliable labels, achieving consistent, stable, and excellent performance on real data sets and linearly inseparable artificial data sets.

15.
Optimizing Regularized Discriminant Analysis with Virtual Training Samples
A set of orthonormal vectors is selected in the pattern feature subspace; from these vectors a large number of virtual training samples can be generated, thereby optimizing the covariance matrix. Experiments on the ORL face database show that the eigenvalues of the optimized covariance matrix all increase significantly, improving the stability of its inverse. Using the optimized covariance matrix to optimize the regularized discriminant analysis method yields a significant improvement in pattern classification accuracy.

16.
For linear discriminant analysis (LDA), the ratio trace and trace ratio are two basic criteria generalized from the classical Fisher criterion function, while the orthogonal and uncorrelated constraints are two common conditions imposed on the optimal linear transformation. The ratio trace criterion, with both the orthogonal and uncorrelated constraints, has been extensively studied in the literature, whereas the trace ratio criterion has received less interest, mainly due to the lack of a closed-form solution and efficient algorithms. In this paper, we make an extensive study of uncorrelated trace ratio linear discriminant analysis, with particular emphasis on the undersampled problem. Two regularized uncorrelated trace ratio LDA models are discussed, for which the global solutions are characterized and efficient algorithms are established. Experimental comparisons of several LDA approaches are conducted on real-world datasets, and the results show that uncorrelated trace ratio LDA is competitive with orthogonal trace ratio LDA and outperforms the ratio trace criteria in terms of classification performance.
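Lacking a closed form, the trace ratio criterion is typically maximized by a fixed-point iteration on the ratio value λ: take the top eigenvectors of Sb − λSw, recompute the ratio, and repeat. A sketch with synthetic scatter matrices (this is the generic trace ratio iteration, not the paper's regularized uncorrelated variants):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 10))
Sb = A.T @ A                                   # between-class scatter (low rank)
B = rng.standard_normal((10, 10))
Sw = B.T @ B + np.eye(10)                      # within-class scatter (nonsingular)

d, lam = 3, 0.0                                # target dimension, ratio estimate
for _ in range(50):
    # W spans the top-d eigenvectors of Sb - lam * Sw
    vals, vecs = np.linalg.eigh(Sb - lam * Sw)
    W = vecs[:, -d:]
    lam_new = np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new

# At the fixed point, lam equals the trace ratio attained by W
assert np.isclose(lam, np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W))
```

This iteration maximizes Tr(WᵀSbW)/Tr(WᵀSwW) directly, whereas the ratio trace criterion Tr((WᵀSwW)⁻¹WᵀSbW) reduces to a single generalized eigenvalue problem.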

17.
Kernel discriminant analysis (KDA) is one of the state-of-the-art kernel-based methods for pattern classification and dimensionality reduction. It performs linear discriminant analysis in the feature space via a kernel function. However, the performance of KDA greatly depends on selecting the optimal kernel for the learning task of interest. In this paper, we propose a novel algorithm termed elastic multiple kernel discriminant analysis (EMKDA), which uses hybrid regularization to automatically learn kernels over a linear combination of pre-specified kernel functions. EMKDA uses a mixed-norm regularization function to balance sparsity and non-sparsity of the kernel weights. A semi-infinite-program-based algorithm is then proposed to solve EMKDA. Extensive experiments on synthetic datasets, UCI benchmark datasets, and digit and terrain databases show the effectiveness of the proposed methods.

18.
Liang Zhi-Zhen, Zhang Lei. Acta Automatica Sinica (《自动化学报》), 2022, 48(4): 1033-1047
Linear discriminant analysis is a statistical learning method. Many improved LDA algorithms have been proposed to address its small-sample-size singularity problem and its sensitivity to contaminated samples. This paper proposes discriminant analysis methods based on Kullback-Leibler (KL) divergence uncertainty sets. The proposed methods not only use the Ls norm to define between-class distances and the Lr norm to define within-class distances, but also build probabilistic models of the within-class samples and the class centers based on KL-divergence uncertainty sets. First, a regularized adversarial discriminant analysis model is proposed that gives priority to samples that are hard to discriminate, and is solved by the generalized Dinkelbach algorithm; one advantage of this algorithm is that, under suitable conditions, its optimization subproblems need not be solved exactly. A projected (sub)gradient method is used to solve the subproblems. A regularized optimistic discriminant analysis is also proposed, with the subproblems of the generalized Dinkelbach algorithm solved by alternating optimization. Experiments on many data sets show that the proposed models outperform several existing ones; in particular, on contaminated data sets, regularized optimistic discriminant analysis performs well because it gives priority to sample points near the class centers.

19.
This paper proposes an uncorrelated multilinear discriminant analysis (UMLDA) framework for the recognition of multidimensional objects, known as tensor objects. Uncorrelated features are desirable in recognition tasks since they contain minimum redundancy and ensure independence of features. UMLDA aims to extract uncorrelated discriminative features directly from tensorial data by solving a tensor-to-vector projection. The solution consists of sequential iterative processes based on the alternating projection method, and an adaptive regularization procedure is incorporated to enhance performance in the small sample size (SSS) scenario. A simple nearest-neighbor classifier is employed for classification. Furthermore, exploiting the complementary information from differently initialized and regularized UMLDA recognizers, an aggregation scheme combines them at the matching-score level, resulting in enhanced generalization performance while alleviating the regularization-parameter selection problem. The UMLDA-based recognition algorithm is then empirically shown, on face and gait recognition tasks, to outperform four multilinear subspace solutions (MPCA, DATER, GTDA, TR1DA) and four linear subspace solutions (Bayesian, LDA, ULDA, R-JD-LDA).

20.
Kernel methods are a class of well-established and successful algorithms for pattern analysis, thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that has so far been missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance, in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.
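The motivating case for KQD, classes with equal means but unequal spreads, is easy to demonstrate with a regularized (here linear, not kernelized) Mahalanobis-distance rule; a sketch with illustrative data and parameters, where the helper `reg_mahalanobis` is our own construction:

```python
import numpy as np

rng = np.random.default_rng(7)
# Equal means, unequal spreads: no linear discriminant separates these,
# but a quadratic (Mahalanobis-distance) rule does
X0 = rng.standard_normal((100, 2)) * 0.5
X1 = rng.standard_normal((100, 2)) * 2.0

def reg_mahalanobis(X, lam=1e-2):
    """Quadratic discriminant score from a regularized class covariance."""
    mu = X.mean(0)
    S = np.cov(X, rowvar=False) + lam * np.eye(2)   # regularized covariance
    Sinv = np.linalg.inv(S)
    logdet = np.linalg.slogdet(S)[1]
    def score(z):                                   # smaller = closer to class
        d = z - mu
        return d @ Sinv @ d + logdet
    return score

s0, s1 = reg_mahalanobis(X0), reg_mahalanobis(X1)
pred = np.array([0 if s0(z) < s1(z) else 1 for z in np.vstack([X0, X1])])
acc = np.mean(pred == np.r_[np.zeros(100), np.ones(100)])
assert acc > 0.75
```

The log-determinant term is what lets the rule distinguish spreads; a linear discriminant, which compares only projected means, is at chance level here.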


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号