Similar Documents
20 similar documents found (search time: 46 ms)
1.
Locality-regularized linear regression classification (LLRC) has shown high recognition rates and high efficiency in face recognition, but the original feature space does not guarantee LLRC's performance. To improve it, a new dimensionality reduction method tailored to LLRC is proposed: locality-regularized linear regression classification based discriminant analysis (LLRC-DA). LLRC-DA designs its objective function according to the decision rule of LLRC, seeking the optimal feature subspace by maximizing the between-class local reconstruction error while minimizing the within-class local reconstruction error. In addition, LLRC-DA eliminates redundant information by imposing an orthogonality constraint on the projection matrix. To solve for the projection matrix efficiently, a new trace-ratio optimization algorithm is proposed that exploits the relationships among the optimization variables. LLRC-DA is therefore particularly well suited to LLRC. Experiments on the FERET and ORL face databases show that LLRC-DA outperforms existing methods.
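As a rough illustration of the trace-ratio step described above, the following sketch (assuming the between- and within-class local reconstruction scatter matrices Sb and Sw have already been built; function and variable names are illustrative, not the paper's) iterates the standard trace-ratio scheme:

```python
import numpy as np

def trace_ratio(Sb, Sw, d, n_iter=100, tol=1e-8):
    """Find orthonormal V maximizing tr(V'Sb V) / tr(V'Sw V).

    Classic iteration: given the current ratio lam, take the top-d
    eigenvectors of (Sb - lam*Sw), then update lam and repeat.
    """
    p = Sb.shape[0]
    V = np.linalg.qr(np.random.randn(p, d))[0]      # random orthonormal start
    lam = np.trace(V.T @ Sb @ V) / np.trace(V.T @ Sw @ V)
    for _ in range(n_iter):
        w, U = np.linalg.eigh(Sb - lam * Sw)        # symmetric eigendecomposition
        V = U[:, np.argsort(w)[::-1][:d]]           # top-d eigenvectors
        new_lam = np.trace(V.T @ Sb @ V) / np.trace(V.T @ Sw @ V)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return V, lam
```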

2.
The linear subspace method, a discriminant method, was proposed for pattern recognition and has been studied since. Because the method and its extensions never encounter a singular covariance matrix, extensions such as generalized ridge discrimination are unnecessary even when treating high-dimensional, sparse datasets. In addition, classifiers based on this multi-class discrimination method run fast because of their simple decision procedure, so they have been widely used for face and speech recognition. However, the influence that individual training observations have on a classifier's prediction accuracy does not appear to have been studied sufficiently. In statistics, influence functions have been derived for statistical discriminant analysis and used to assess analysis results. These studies indicate that influence functions are useful for detecting observations with a large influence on the results of discrimination methods, and that they contribute to enhancing the performance of a target classifier.
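A brute-force empirical stand-in for analytic influence functions is case deletion: refit the classifier with each training observation removed and record the change in test accuracy. A minimal sketch using scikit-learn's LDA as a convenient discriminant classifier (the function name is illustrative):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def deletion_influence(X, y, X_test, y_test):
    """Empirical influence of each training point: the drop in test
    accuracy when that point is removed and the classifier refit."""
    base = LinearDiscriminantAnalysis().fit(X, y).score(X_test, y_test)
    influence = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i               # leave observation i out
        model = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
        influence[i] = base - model.score(X_test, y_test)
    return influence                                # large |value| = influential
```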

3.
Linear discriminant regression classification (LDRC) was presented recently to boost the effectiveness of linear regression classification (LRC). LDRC aims to find a subspace in which LRC achieves high discrimination for classification. As a discriminant analysis algorithm, however, LDRC treats every training sample as equally important and ignores the different contributions these samples make to learning the discriminative feature subspace. Motivated by the fact that some training samples are more effective than others in learning the low-dimensional feature space, this paper proposes an adaptive linear discriminant regression classification (ALDRC) algorithm that explicitly accounts for the different contributions of the training samples. Specifically, ALDRC uses weights to characterize those contributions, incorporates the weighting information into the between-class and within-class reconstruction errors, and then seeks an optimal projection matrix that maximizes the ratio of the between-class reconstruction error to the within-class reconstruction error. Extensive experiments on the AR, FERET and ORL face databases demonstrate the effectiveness of the proposed method.
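For context, the LRC rule that both LDRC and ALDRC build on assigns a probe image to the class whose training samples reconstruct it with the smallest least-squares residual. A minimal sketch (names are illustrative):

```python
import numpy as np

def lrc_predict(x, class_blocks):
    """Linear regression classification: represent probe x as a linear
    combination of each class's training samples (columns of Xc) and
    pick the class with the smallest reconstruction residual."""
    residuals = {}
    for label, Xc in class_blocks.items():          # Xc has shape (dim, n_c)
        beta, *_ = np.linalg.lstsq(Xc, x, rcond=None)
        residuals[label] = np.linalg.norm(x - Xc @ beta)
    return min(residuals, key=residuals.get)
```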

4.
In the last decade, many variants of classical linear discriminant analysis (LDA) have been developed to tackle the undersampled problem in face recognition. However, choosing among the variants is not easy, since these methods involve eigenvalue decompositions that make cross-validation computationally expensive. In this paper, we propose to solve this problem by unifying these LDA variants in one framework: principal component analysis (PCA) plus constrained ridge regression (CRR). In CRR, one selects a target (also called a class indicator) for each class and finds a projection that locates the class centers at their targets while minimizing the within-class distances, with a penalty on the transform norm as in ridge regression. Under this framework, many existing LDA methods can be viewed as PCA+CRR with particular regularization parameters and class indicators, so choosing the best LDA method amounts to choosing the best member of the CRR family. The latter can be done by comparing leave-one-out (LOO) errors, and we present an efficient algorithm for evaluating the LOO errors that requires computations similar to the training process of CRR itself. Experiments on the Yale Face B, Extended Yale B and CMU-PIE databases demonstrate the effectiveness of the proposed methods.
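The cheap LOO evaluation rests on the standard ridge identity e_loo = e / (1 - h_ii), so no refitting is needed. A minimal sketch under that identity, where X is the (PCA-reduced) data matrix and Y holds the class-indicator targets (names are illustrative):

```python
import numpy as np

def ridge_loo_errors(X, Y, alpha):
    """Exact leave-one-out residuals for ridge regression via the
    hat-matrix identity e_loo = e / (1 - h_ii), without refitting."""
    n, p = X.shape
    Y = Y.reshape(n, -1)                            # allow 1D or 2D targets
    H = X @ np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T)  # hat matrix
    resid = Y - H @ Y                               # ordinary residuals
    return resid / (1.0 - np.diag(H))[:, None]      # LOO residuals
```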

5.
In this paper the regularized orthogonal linear discriminant analysis (ROLDA) is studied. The major issue in regularized linear discriminant analysis is choosing an appropriate regularization parameter. Existing regularized linear discriminant analysis methods all select the “best” regularization parameter from a given candidate set by cross-validation for classification, and an obvious limitation of this practice is that it is not clear how to choose an appropriate candidate set. Up to now, therefore, no concrete mathematical theory has been available for selecting the regularization parameter in practical applications of regularized linear discriminant analysis. The present work fills this gap. We first derive the mathematical relationship between orthogonal linear discriminant analysis and ROLDA, then use this relationship to obtain a mathematical criterion for selecting the regularization parameter in ROLDA, and consequently develop a new regularized orthogonal linear discriminant analysis method in which no candidate set of regularization parameters is needed. The effectiveness of the proposed method is illustrated on several real-world data sets.

6.
Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention, and many variants of it have been proposed. However, these variants do not resolve LDA's inherent problems very well. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k-nearest-neighbour graph is first constructed to preserve the local discriminant information of the sample points. Then, an L2,1-norm constraint on the projection matrix acts as the loss function, making the proposed method robust to outliers in the data. Extensive experiments on several standard public image databases demonstrate the effectiveness of the proposed OSLDA algorithm.

7.
Pattern recognition software was developed and applied, together with statistical techniques, to articular cartilage data from the knee joint of the baboon. The standard statistical method used for comparison was ANOVA, which provides linear discrimination. In addition, a Karhunen-Loève expansion was performed to reduce the dimensionality of the data and provide independent, uncorrelated variables. Nearest neighbour analysis, a non-linear method, when combined with binomial probabilities gave discrimination that was not obtained by ANOVA. Pattern recognition and related techniques can thus improve and extend the analysis of biological data to include non-linear discrimination and classification.
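A minimal modern equivalent of this pipeline, with PCA playing the role of the Karhunen-Loève expansion and a nearest-neighbour classifier as the non-linear step (the random data below is only a stand-in for the cartilage measurements):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Karhunen-Loeve expansion (PCA) for decorrelated low-dimensional
# variables, then nearest-neighbour classification.
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                       # toy feature matrix
y = rng.integers(0, 2, size=60)                     # toy group labels
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```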

8.
Measurement errors may affect predictor selection in the linear regression model. These effects are studied using a measurement framework in which the variances of the measurement errors can be estimated without overly restrictive assumptions about the measurement model. In this approach, the measurement problem is solved in a reduced true-score space, where the latent true score is multidimensional but of lower dimension than the number of measurable variables. Various measurement scales are then created for use as predictors in the regression model. The stability of predictor selection, as well as the estimated predictive validity and reliability of the prediction scales, is examined by Monte Carlo simulation. Varying the magnitude of the measurement error variance, four sets of predictors are compared: all variables, a stepwise selection, factor sums, and factor scores. The results indicate that factor scores offer a stable method for predictor selection, whereas the other alternatives tend to give biased results that more or less capitalize on chance.
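A toy sketch of the factor-score alternative that the simulations favour, using scikit-learn's FactorAnalysis to build the reduced true-score space before regression (the data-generating setup is illustrative, not the paper's simulation design):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                      # error-prone measured variables
y = X[:, :3].sum(axis=1) + rng.normal(size=200)     # toy response

# Estimate latent factor scores, then regress on them instead of
# on the raw measurements.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
r2 = LinearRegression().fit(scores, y).score(scores, y)
print("R^2 using factor scores as predictors:", round(r2, 3))
```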

9.
Dealing with high-dimensional data has long been a major problem in pattern recognition and machine learning, and linear discriminant analysis (LDA) is one of the most popular methods for dimensionality reduction. However, LDA is too sensitive to outliers. To address this, fuzzy membership can be introduced to reduce the effect of outliers and thereby enhance performance. In this paper, we analyze existing fuzzy strategies and propose a new, effective one based on Markov random walks. The new fuzzy strategy maintains high consistency between local and global discriminative information and preserves the statistical properties of the dataset. Based on the proposed fuzzy strategy, we then derive an efficient fuzzy LDA algorithm by incorporating the fuzzy memberships into learning. Theoretical analysis and extensive simulations show the effectiveness of our algorithm, which achieves significantly better results than existing alternatives.

10.
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations, called Locally Linear Discriminant Analysis (LLDA). The underlying idea is that global nonlinear data structures are locally linear, and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations chosen so that the locally transformed classes maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture the nonlinear manifolds of faces under pose variation. Conventional kernel-based nonlinear methods such as generalized discriminant analysis (GDA) and the support vector machine (SVM) overcome the shortcomings of the linear method, but they suffer from a high computational cost of classification and from overfitting. Our method handles multiclass nonlinear discrimination and is computationally far more efficient than GDA; it does not suffer from overfitting, by virtue of the linear base structure of the solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases, and the optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. Classification results are given for both synthetic and real face data.

11.
12.
Remote sensing often involves the estimation of in situ quantities from remote measurements. Linear regression, with no non-linear combinations of regressors, is a common approach to this prediction problem in the remote sensing community. A review of recent remote sensing articles using univariate linear regression indicates that in the majority of cases ordinary least squares (OLS) linear regression has been applied, with approximately half the articles using the in situ observations as regressors and the other half using the inverse regression with remote measurements as regressors. OLS implicitly assumes an underlying normal structural data model to arrive at unbiased estimates of the response, and it can be a biased predictor in the presence of measurement errors when the regression problem is based on a functional rather than structural data model. Parametric (Modified Least Squares) and non-parametric (Theil-Sen) consistent predictors are given for linear regression in the presence of measurement errors, together with analytical approximations of their prediction confidence intervals. Three case studies involving estimation of leaf area index from nadir reflectance estimates are used to compare these unbiased estimators with OLS linear regression; a comparison to Geometric Mean regression, a standardized version of Reduced Major Axis regression, is also performed. The Theil-Sen approach is suggested as a potential replacement for OLS in remote sensing applications: it offers simplicity of computation, analytical estimates of confidence intervals, robustness to outliers, and testable assumptions regarding residuals, and it requires limited a priori information regarding measurement errors.
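The Theil-Sen estimator, including the analytical confidence interval for its slope, is available directly in SciPy. A toy comparison with OLS under outliers (the data is illustrative, not from the case studies):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)                           # e.g. reflectance estimate
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=50)     # e.g. leaf area index
y[::10] += 1.5                                      # inject a few outliers

slope, intercept, lo, hi = stats.theilslopes(y, x, alpha=0.95)
print(f"Theil-Sen: y = {slope:.2f}x + {intercept:.2f}, slope 95% CI [{lo:.2f}, {hi:.2f}]")

ols = stats.linregress(x, y)                        # OLS, pulled by the outliers
print(f"OLS:       y = {ols.slope:.2f}x + {ols.intercept:.2f}")
```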

13.
This paper applies an interval perturbation approach to load identification in the statistical energy analysis (SEA) framework and reveals the influence of parameter measurement errors on the identified loads. By treating the damping loss factors and coupling loss factors with measurement errors as interval variables, the errors in the identified loads can be estimated. The interval approach is demonstrated through simulated studies of a two-plate coupled structure and a plate-shell coupled structure, and the loads identified with and without accounting for the measurement errors of the loss factors are compared. The results show that the measurement errors of the damping and coupling loss factors have a large effect on the identified loads, so these errors cannot be ignored when performing high-frequency load identification based on SEA.
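A crude Monte Carlo stand-in for the interval analysis: sample admissible loss-factor matrices from their intervals, solve the SEA power balance for each, and keep the elementwise extremes. This sketch assumes the simplified relation P = omega * L @ E; the names and the sampling scheme are illustrative, whereas the paper's interval perturbation approach is analytical:

```python
import numpy as np

def load_bounds(L_lo, L_hi, E, omega, n_samples=10_000, seed=0):
    """Bound identified loads P = omega * L @ E when the loss-factor
    matrix L is only known to lie in the interval [L_lo, L_hi]."""
    rng = np.random.default_rng(seed)
    P_min = np.full(len(E), np.inf)
    P_max = np.full(len(E), -np.inf)
    for _ in range(n_samples):
        L = rng.uniform(L_lo, L_hi)                 # one admissible matrix
        P = omega * (L @ E)                         # identified load for this L
        P_min, P_max = np.minimum(P_min, P), np.maximum(P_max, P)
    return P_min, P_max
```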

14.
This paper evaluates cluster analysis, discriminant analysis, and Logit analysis for the examination of intrusion detection data. The research is based on a sample of 1200 random observations of 42 variables from the KDD-99 database, which contains ‘normal’ and ‘bad’ connections. The results indicate that Logit analysis is more effective than cluster or discriminant analysis in intrusion detection. Specifically, according to the Kappa statistic, which makes full use of all the information contained in a confusion matrix, Logit analysis (K = 0.629) ranked first, followed by discriminant analysis (K = 0.583) and cluster analysis (K = 0.460).
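The Kappa statistic used for the ranking is straightforward to compute from a confusion matrix. A minimal sketch (the matrix below is made up for illustration, not taken from the study):

```python
import numpy as np

def cohen_kappa(confusion):
    """Cohen's kappa from a confusion matrix: agreement beyond chance."""
    C = np.asarray(confusion, dtype=float)
    n = C.sum()
    po = np.trace(C) / n                             # observed agreement
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n ** 2    # chance agreement
    return (po - pe) / (1.0 - pe)

# Illustrative 'normal' vs 'bad' confusion matrix.
print(cohen_kappa([[520, 80], [120, 480]]))
```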

15.
This paper provides a unifying view of three discriminant linear feature extraction methods: linear discriminant analysis, heteroscedastic discriminant analysis and maximization of mutual information. We propose a model-independent reformulation of the criteria related to these three methods that stresses their similarities and elucidates their differences. Based on assumptions for the probability distribution of the classification data, we obtain sufficient conditions under which two or more of the above criteria coincide. It is shown that these conditions also suffice for Bayes optimality of the criteria. Our approach results in an information-theoretic derivation of linear discriminant analysis and heteroscedastic discriminant analysis. Finally, regarding linear discriminant analysis, we discuss its relation to multidimensional independent component analysis and derive suboptimality bounds based on information theory.

16.
Traditional regression analysis is usually applied to homogeneous observations, but in several real situations the observations are not homogeneous. In these cases, traditional regression gives a poor fit, and it is more suitable to apply clusterwise regression analysis, which embeds clustering techniques into regression analysis. In this way, clustering methods are used to overcome the heterogeneity problem in regression, and by integrating cluster analysis into the regression framework, the regression parameters (regression analysis) and membership degrees (cluster analysis) can be estimated simultaneously by optimizing a single objective function. In this paper, clusterwise linear regression is analyzed in a fuzzy framework. In particular, a fuzzy clusterwise linear regression model (FCWLR model) with symmetrical fuzzy output and crisp input variables is suggested for performing fuzzy cluster analysis within a fuzzy linear regression framework, and a fitting index is proposed for measuring its goodness of fit. Several applications to artificial and real datasets illustrate the usefulness of the FCWLR model in practice.
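A minimal sketch of the alternating scheme behind clusterwise regression, in the crisp-data fuzzy c-regression style (this is a generic formulation, not the paper's symmetrical-fuzzy-output model; names are illustrative):

```python
import numpy as np

def fuzzy_cw_regression(X, y, c=2, m=2.0, n_iter=50, seed=0):
    """Alternate weighted least-squares fits per cluster with fuzzy
    membership updates driven by each cluster's squared residuals."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])           # add intercept column
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=n)           # memberships, rows sum to 1
    for _ in range(n_iter):
        B = []
        for k in range(c):                          # weighted LS per cluster
            w = U[:, k] ** m
            B.append(np.linalg.solve(Xb.T @ (w[:, None] * Xb), Xb.T @ (w * y)))
        R = np.stack([(y - Xb @ b) ** 2 for b in B], axis=1) + 1e-12
        U = R ** (-1.0 / (m - 1))                   # standard fuzzy update
        U /= U.sum(axis=1, keepdims=True)
    return np.array(B), U                           # coefficients, memberships
```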

17.
Smoothing spline ANOVA (SSANOVA) provides an approach to semiparametric function estimation based on an ANOVA type of decomposition. Wahba et al. (1995) decomposed the regression function based on a tensor sum decomposition of inner product spaces into orthogonal subspaces, so the effects of the estimated functions from each subspace can be viewed independently. Recent research related to smoothing spline ANOVA focuses on either frequentist approaches or a Bayesian framework for variable selection and prediction. In our approach, we seek “objective” priors especially suited to estimation. The prior for linear terms including level effects is a variant of the Zellner–Siow prior (Zellner and Siow, 1980), and the prior for a smooth effect is specified in terms of effective degrees of freedom. We study this fully Bayesian SSANOVA model for Gaussian response variables, and the method is illustrated with a real data set.

18.
Linear discriminant analysis (LDA) is a dimension reduction method that finds an optimal linear transformation maximizing class separability. However, in undersampled problems, where the number of data samples is smaller than the dimension of the data space, it is difficult to apply LDA because of the singularity of the scatter matrices caused by high dimensionality. To make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performance in text classification and face recognition. Towards a practical dimension reduction method for high-dimensional data, an efficient algorithm is proposed that greatly reduces the computational complexity while achieving competitive prediction accuracy. We also present nonlinear extensions of these LDA algorithms based on kernel methods: a generalized eigenvalue problem can be formulated in the kernel-based feature space, and applying the generalized LDA algorithms to it yields nonlinear discriminant analysis. The performance of these linear and nonlinear discriminant analysis algorithms is compared extensively.
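One generalization that keeps LDA applicable in the undersampled regime, and that scikit-learn exposes directly, is shrinkage of the within-class scatter. A minimal sketch on toy undersampled data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 300))                      # 40 samples, 300 dimensions
y = rng.integers(0, 2, size=40)
X[y == 1] += 0.5                                    # shift class 1 slightly

# Shrinkage regularizes the singular within-class scatter, so LDA
# still works even though n << p.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print("training accuracy:", lda.score(X, y))
```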

19.
The influence of analytical inaccuracy and imprecision on the linear discriminant function is considered. Analytical shifts occurring between the analysis of samples from each of two groups give spuriously low error rates if the function is evaluated on the training set, notably at high dimensions. Inaccuracy arising after the establishment of a discriminant function may considerably change the individual group error rates, whereas the overall error rate is only moderately affected. Imprecision decreases the group separation by an amount comparable to that in the univariate situation. In conclusion, evaluating the error rates of a discriminant function on an independent test set is important for obtaining realistic estimates of performance, and is preferable to using unbiased statistical methods or the split-sample principle based solely upon the training set.
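The training-set optimism is easy to reproduce: on pure-noise data whose dimension is comparable to the sample size, the apparent error rate of a discriminant function collapses towards zero while the independent-test error stays near chance. A minimal sketch:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 50))                      # pure noise, high dimension
y = rng.integers(0, 2, size=100)                    # two arbitrary groups

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("apparent error:", 1 - lda.score(X_tr, y_tr))   # spuriously low
print("test-set error:", 1 - lda.score(X_te, y_te))   # realistic, near 0.5
```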

20.
Flexible discriminant analysis (FDA) is a general methodology that provides tools for multigroup nonlinear classification. It is a nonparametric version of discriminant analysis obtained by replacing linear regression with any nonparametric regression method. A new option for FDA is introduced, consisting of a nonparametric regression method based on B-spline functions. The relevance of the transformation (and hence the discrimination) depends on the parameters defining the spline functions: the degree, number, and location of the knots for each continuous variable. The method, called FDA-FKBS (Free Knot B-Splines), determines all these parameters without requiring many prior settings. It is inspired by Reversible Jump Markov Chain Monte Carlo, but the objective function differs and the Bayesian aspect is set aside.
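For contrast with the free-knot search, a least-squares B-spline fit with knots fixed in advance is a one-liner in SciPy; FDA-FKBS instead treats the degree, number, and location of these knots as unknowns. A minimal sketch with illustrative data:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=200)

knots = [0.25, 0.5, 0.75]                           # fixed interior knots
spline = LSQUnivariateSpline(x, y, knots, k=3)      # cubic B-spline fit
print("residual sum of squares:", spline.get_residual())
```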
