Related Articles
20 related articles found (search time: 31 ms)
1.
Support vector machine (SVM) is a powerful classification methodology, where the support vectors fully describe the decision surface by incorporating local information. On the other hand, nonparametric discriminant analysis (NDA) is an improvement over LDA in which the normality assumption is relaxed. NDA also detects the dominant normal directions to the decision plane. This paper introduces a novel SVM+NDA model which can be viewed as an extension of the SVM that incorporates some partially global information, in particular, discriminatory information in the normal direction to the decision boundary. It can also be considered an extension of NDA in which the support vectors improve the choice of k-nearest neighbors on the decision boundary by incorporating local information. Being an extension of both SVM and NDA, it can deal with heteroscedastic and non-normal data. It also avoids the small sample size problem. Moreover, it can be reduced to the classical SVM model, so that existing software can be used. A kernel extension of the model, called KSVM+KNDA, is also proposed to deal with nonlinear problems. We have carried out an extensive comparison of SVM+NDA with LDA, SVM, heteroscedastic LDA (HLDA), NDA, and a combined SVM and LDA on artificial, real, and face recognition data sets. Results for KSVM+KNDA are also presented. These comparisons demonstrate the advantages and superiority of our proposed model.
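For reference, a hedged sketch of the NDA ingredient mentioned above. In Fukunaga's two-class nonparametric discriminant analysis, the between-class scatter is built from local k-NN means rather than class means (this is the textbook form; the paper's exact variant may differ):

```latex
S_b^{\mathrm{NDA}} \;=\; \frac{1}{N}\sum_{i=1}^{N} w_i\,
  \bigl(\mathbf{x}_i - \mathbf{m}_i^{k\mathrm{NN}}\bigr)
  \bigl(\mathbf{x}_i - \mathbf{m}_i^{k\mathrm{NN}}\bigr)^{\top}
```

where m_i^{kNN} is the mean of the k nearest neighbours of x_i drawn from the opposite class, and the weight w_i de-emphasizes samples far from the class boundary, concentrating the scatter along normal directions to the decision surface.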

2.
Subclass discriminant analysis   (Total citations: 5; self: 0; by others: 5)
Over the years, many discriminant analysis (DA) algorithms have been proposed for the study of high-dimensional data in a large variety of problems. Each of these algorithms is tuned to a specific type of data distribution (that which best models the problem at hand). Unfortunately, in most problems the form of each class pdf is a priori unknown, and the selection of the DA algorithm that best fits the data is done by trial and error. Ideally, one would like to have a single formulation that can be used for most distribution types. This can be achieved by approximating the underlying distribution of each class with a mixture of Gaussians. In this approach, the major problem to be addressed is determining the optimal number of Gaussians per class, i.e., the number of subclasses. In this paper, two criteria able to find the most convenient division of each class into a set of subclasses are derived. Extensive experimental results are shown using five databases. Comparisons are given against linear discriminant analysis (LDA), direct LDA (DLDA), heteroscedastic LDA (HLDA), nonparametric DA (NDA), and kernel-based LDA (K-LDA). We show that our method is always the best or comparable to the best.
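As an illustrative sketch only, not the paper's derived criteria: a common way to realize the mixture-of-Gaussians approximation is to choose the number of subclasses per class by BIC and then run LDA on the resulting subclass labels. The iris data set and the BIC rule here are stand-ins.

```python
# Sketch: approximate each class with a Gaussian mixture, pick the number of
# subclasses by minimum BIC, then run LDA on the resulting subclass labels.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def subclass_lda(X, y, max_subclasses=3):
    subclass_labels = np.empty(len(y), dtype=int)
    next_label = 0
    for c in np.unique(y):
        Xc = X[y == c]
        # Fit mixtures with 1..max_subclasses components; keep the best by BIC.
        gms = [GaussianMixture(n_components=k, random_state=0).fit(Xc)
               for k in range(1, max_subclasses + 1)]
        best = min(gms, key=lambda gm: gm.bic(Xc))
        subclass_labels[y == c] = next_label + best.predict(Xc)
        next_label += best.n_components
    # LDA on subclass labels yields the subclass discriminant projection.
    return LinearDiscriminantAnalysis().fit(X, subclass_labels)

X, y = load_iris(return_X_y=True)
print(subclass_lda(X, y).transform(X).shape)
```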

3.
Mixture discriminant analysis (MDA) and subclass discriminant analysis (SDA) are supervised classification approaches. They have an advantage over standard linear discriminant analysis (LDA) in large sample size problems, since both divide the samples of each class into subclasses, which preserves locality, whereas LDA does not. However, since the current MDA and SDA algorithms perform subclass division in a single step in the original data space, before solving the generalized eigenvalue problem, two problems arise: (1) they ignore the relations among classes, since subclass division is performed within each class in isolation; (2) they cannot guarantee good classifier performance in the transformed space, because locality in the original data space may not be preserved in the transformed space. To address these problems, this paper presents a new approach for subclass division based on k-means clustering in the projected space, performed class by class in iterative steps under an EM-like framework. Experiments are performed on an artificial data set, the UCI machine learning data sets, the CENPARMI handwritten numeral database, the NUST603 handwritten Chinese character database, and a terrain cover database. Extensive experimental results demonstrate the performance advantages of the proposed method.
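A minimal sketch of the EM-like alternation described above, with assumed details (a fixed subclass count and iteration count; the wine data set is a stand-in): k-means divides each class in the current projected space, and LDA on the subclass labels yields the next projection.

```python
# Sketch: alternate (1) per-class k-means in the projected space and
# (2) LDA on the resulting subclass labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def iterative_subclass_lda(X, y, subclasses_per_class=2, n_iter=5):
    Z = X  # the first subclass division happens in the original space
    lda = None
    for _ in range(n_iter):
        labels = np.empty(len(y), dtype=int)
        offset = 0
        for c in np.unique(y):
            km = KMeans(n_clusters=subclasses_per_class, n_init=10,
                        random_state=0).fit(Z[y == c])
            labels[y == c] = offset + km.labels_
            offset += subclasses_per_class
        lda = LinearDiscriminantAnalysis().fit(X, labels)
        Z = lda.transform(X)  # next division happens in the projected space
    return lda

X, y = load_wine(return_X_y=True)
print(iterative_subclass_lda(X, y).transform(X).shape)
```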

4.
陈斌, 张连海, 牛铜, 屈丹, 李弼程. Acta Automatica Sinica (自动化学报), 2014, 40(6): 1208-1215
A linear discriminant analysis (LDA) method based on the minimum classification error (MCE) criterion is proposed and applied to feature transformation in continuous speech recognition. The method estimates the data probability distribution with nonparametric kernel density estimation; given the estimated distribution, the discriminant transformation matrix is solved under the MCE criterion using a gradient-descent-based line search algorithm. The transformation matrix is then used to reduce the dimensionality of the supervector formed by concatenating the Mel filter-bank outputs of adjacent frames, yielding time-frequency features. Experimental results show that, compared with conventional MFCC features, the time-frequency features extracted by the proposed discriminant analysis improve recognition accuracy by 1.41%, and by 1.14% and 0.83% compared with HLDA (heteroscedastic LDA) and the approximate pairwise empirical accuracy criterion (aPEAC) discriminant analysis methods, respectively.
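The abstract does not spell out the MCE criterion; for reference, a commonly used form (due to Juang and Katagiri; the paper's exact variant may differ) defines, for a sample x of true class k among K classes with discriminant functions g_j, a misclassification measure and a smoothed loss:

```latex
d_k(\mathbf{x}) = -g_k(\mathbf{x}) + \log\Biggl[\frac{1}{K-1}\sum_{j\neq k} e^{\eta\, g_j(\mathbf{x})}\Biggr]^{1/\eta},
\qquad
\ell\bigl(d_k(\mathbf{x})\bigr) = \frac{1}{1 + e^{-\gamma\, d_k(\mathbf{x})}}
```

Minimizing the total smoothed loss over the transformation matrix by gradient descent matches the gradient-based line search mentioned above.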

5.
Robust large margin discriminant tangent analysis for face recognition   (Total citations: 2; self: 2; by others: 0)
Fisher’s Linear Discriminant Analysis (LDA) has been recognized as a powerful technique for face recognition. However, it can break down in the non-Gaussian case. Nonparametric discriminant analysis (NDA) is a typical algorithm that extends LDA from the Gaussian case to the non-Gaussian case. However, NDA suffers from outlier and class-imbalance problems, which cause a biased estimate of the extra-class scatter information. To address these two problems, we propose a robust large margin discriminant tangent analysis method. A tangent subspace-based algorithm is first proposed to learn a subspace from a set of intra-class and extra-class samples distributed in a balanced way on the local manifold patch near each sample point, so that samples from the same class are clustered as closely as possible and samples from different classes are separated far from the tangent center. Each subspace is then aligned to a global coordinate system by tangent alignment. Finally, an outlier detection technique is proposed to learn a more accurate decision boundary. Extensive experiments on a challenging face recognition data set demonstrate the effectiveness and efficiency of the proposed method. Compared to other nonparametric methods, the proposed one is more robust to outliers.

6.
Heteroscedastic linear discriminant analysis (HLDA) is widely used in speech recognition for its powerful feature-decorrelation effect. However, when training data are insufficient or the feature dimensionality is high, HLDA is prone to instability and the small sample size problem. Based on a matrix representation of the features, a structure-constrained HLDA is proposed: two-dimensional linear discriminant analysis (2DLDA) first compresses the matrix-form features, and one-dimensional HLDA is then applied. Our analysis shows that the two-dimensional feature transformation is in fact a structure-constrained one-dimensional feature transformation. In experiments on the RM corpus, the constrained HLDA reduces the word recognition error by 12.39% relative to conventional HLDA; on the TIMIT corpus, it reduces the phone recognition error by 4.43% relative.
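For reference, a hedged reconstruction (standard 2DLDA, not necessarily the paper's exact notation): 2DLDA operates directly on matrix-form features A_j with class means M_i, global mean M, and class sizes n_i:

```latex
S_b^{2D} = \sum_{i=1}^{C} n_i\,(M_i - M)^{\top}(M_i - M), \qquad
S_w^{2D} = \sum_{i=1}^{C}\sum_{A_j \in \text{class } i} (A_j - M_i)^{\top}(A_j - M_i)
```

The compression matrix is formed from the leading eigenvectors of (S_w^{2D})^{-1} S_b^{2D}, and the compressed features are then passed to a conventional one-dimensional HLDA; constraining the transform to act on rows and columns separately is what makes it "structure constrained".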

7.
Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier when the data is known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas, including handwriting recognition, medical diagnosis and remote sensing, and compare our algorithms against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires up to 60 times less training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires up to 150 times less training time to achieve equivalent classification accuracy on some of the datasets. Our algorithms can thus provide a cheap and reliable option for classification in many expert systems.
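One way to write the objective the GLD minimises, under the stated Gaussian class-conditional assumption (a hedged reconstruction, not the paper's notation): for the rule that predicts class 1 when w^T x > b, with class priors, means, and covariances (pi_i, mu_i, Sigma_i), the error probability is

```latex
P(\text{error}) \;=\; \pi_1\,\Phi\!\left(\frac{b - \mathbf{w}^{\top}\boldsymbol{\mu}_1}{\sqrt{\mathbf{w}^{\top}\Sigma_1\mathbf{w}}}\right)
\;+\; \pi_2\,\Phi\!\left(\frac{\mathbf{w}^{\top}\boldsymbol{\mu}_2 - b}{\sqrt{\mathbf{w}^{\top}\Sigma_2\mathbf{w}}}\right)
```

where Phi is the standard normal CDF. LDA fixes Sigma_1 = Sigma_2, under which the minimiser has a closed form; the GLD minimises this expression directly over (w, b) without that assumption.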

8.
Fisher's linear discriminant analysis (LDA) is popular for dimension reduction and extraction of discriminant features in many pattern recognition applications, especially biometric learning. The derivation of the Fisher LDA formulation assumes that the class empirical mean equals its expectation; however, this assumption may not hold in practice. In this paper, from the “perturbation” perspective, we develop a new algorithm, called perturbation LDA (P-LDA), in which perturbation random vectors are introduced to learn the effect of the difference between the class empirical mean and its expectation on the Fisher criterion. This perturbation learning yields new forms of within-class and between-class covariance matrices integrated with perturbation factors. Moreover, a method is proposed for estimating the covariance matrices of the perturbation random vectors for practical implementation. The proposed P-LDA is evaluated on both synthetic data sets and real face image data sets. Experimental results show that P-LDA outperforms the popular Fisher LDA-based algorithms in the undersampled case.

9.
To address two problems of the linear discriminant analysis (LDA) approach in face recognition, namely the nonlinearity problem and the singularity problem, this paper proposes a novel kernel machine-based rank-lifting regularized discriminant analysis (KRLRDA) method. A rank-lifting theorem is first proven using linear algebra. Combining the rank-lifting strategy with a three-to-one regularization technique, a complete regularized methodology is developed on the within-class scatter matrix. The proposed regularization scheme not only adjusts the projection directions but also tunes their corresponding weights. Moreover, it is shown that the final regularized within-class scatter matrix approaches the original one as the regularization parameter tends to zero. Two publicly available databases, the FERET and CMU PIE face databases, are selected for evaluation. Compared with some existing kernel-based LDA methods, the proposed KRLRDA approach gives superior performance.

10.
It is generally believed that quadratic discriminant analysis (QDA) can fit the data better than linear discriminant analysis (LDA) in practical pattern recognition applications, because QDA relaxes the assumption made by LDA-based methods that the covariance matrix of each class is identical. However, QDA still assumes that the class-conditional distribution is Gaussian, which is usually not the case in many real-world applications. In this paper, a novel kernel-based QDA method is proposed to further relax the Gaussian assumption using the kernel machine technique. The proposed method solves complex pattern recognition problems by combining the QDA solution with the kernel machine technique and, at the same time, tackles the so-called small sample size problem through a regularized estimation of the covariance matrix. Extensive experimental results indicate that the proposed method outperforms many traditional kernel-based learning algorithms.
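Not the paper's kernel QDA: a minimal regularized-QDA baseline in scikit-learn illustrating the covariance regularization used against the small sample size problem (the breast cancer data set is a stand-in).

```python
# Sketch: shrink each class covariance towards the identity and observe the
# effect on cross-validated accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
for reg in (0.0, 0.1, 0.5):
    # reg_param regularizes each class covariance:
    # S_k <- (1 - reg_param) * S_k + reg_param * I
    qda = QuadraticDiscriminantAnalysis(reg_param=reg)
    print(reg, cross_val_score(qda, X, y, cv=5).mean())
```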

11.
To address the "small sample size" problem of linear discriminant analysis (LDA) and its requirement that the data follow a Gaussian distribution, a radar target recognition method based on the nonparametric maximum margin criterion (NMMC) is proposed. First, the autocorrelation wavelet transform extracts the non-stationary features of the target's high-resolution range profile (HRRP); these features, together with the original HRRP signal, form the classification features, and NMMC performs the feature extraction. Classification is then carried out with a support vector machine. NMMC solves the small sample size problem while relaxing the Gaussian-like requirement on the data distribution. Finally, simulation experiments on HRRP data of five aircraft types verify the effectiveness of the proposed method.
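For context, the parametric maximum margin criterion that NMMC builds on maximizes a scatter difference rather than the Fisher quotient (a hedged sketch; the nonparametric variant replaces the scatter matrices with k-NN-based local estimates):

```latex
\max_{W^{\top}W = I}\; J(W) \;=\; \mathrm{tr}\bigl(W^{\top}(S_b - S_w)\,W\bigr)
```

Because no inversion of S_w is required, the criterion stays well defined even when S_w is singular, which is how the small sample size problem is avoided.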

12.
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space. Recognition algorithms such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. Experimental results show that the kernel-based method is a good and feasible approach for tackling pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which strongly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing a margin criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of the generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

13.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes class separability. However, in undersampled problems, where the number of data samples is smaller than the dimension of the data space, it is difficult to apply LDA due to the singularity of the scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high-dimensional data, an efficient algorithm is proposed which greatly reduces the computational complexity while achieving competitive prediction accuracy. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve it, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
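A minimal sketch of the generalized eigenvalue problem at the core of these methods, S_b w = lambda S_w w, with a small ridge on S_w (an assumption made here so the undersampled, singular case stays solvable):

```python
# Sketch: build the LDA scatter matrices and solve the symmetric-definite
# generalized eigenvalue problem S_b w = lambda S_w w with scipy.
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, ridge=1e-6):
    n, d = X.shape
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                      # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)     # between-class scatter
    # The ridge keeps S_w positive definite in the undersampled case.
    evals, evecs = eigh(Sb, Sw + ridge * np.eye(d))
    return evecs[:, ::-1]  # directions sorted by decreasing eigenvalue

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.repeat([0, 1], 20)
print((X @ lda_directions(X, y)[:, :1]).shape)
```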

14.
Linear discriminant analysis (LDA) is one of the most classical subspace learning and supervised discriminant feature extraction methods. Inspired by manifold learning, numerous LDA-based improvements have been proposed in recent years. Although their starting points differ, these algorithms all essentially measure the spatial scatter of samples with the Euclidean distance. The nonlinear nature of the Euclidean distance raises two problems: (1) the algorithms are sensitive to noise and outlying samples; (2) on manifold or multimodal data sets, the algorithms over-emphasize sample points with large local scatter, so that the essential structural features of the data are destroyed during feature extraction. To solve these problems, a new dimensionality reduction method based on nonparametric discriminant analysis (NDA), called dynamically weighted nonparametric discriminant analysis (DWNDA), is proposed. DWNDA computes the between-class and within-class scatter with dynamically weighted distances, which not only preserves the essential structural features of multimodal data sets but also effectively exploits the discriminative information between pairs of boundary samples. As a result, DWNDA exhibits strong robustness to noise and outlying samples in noise experiments. Furthermore, DWNDA achieves excellent results in experiments on face and handwriting databases.

15.
A nearest-neighbor class discriminant analysis method is proposed, of which linear discriminant analysis (LDA) is a special case. LDA finds the optimal projection by maximizing the between-class scatter while minimizing the within-class scatter, where the between-class scatter is the overall average of the scatter between all class pairs; in nearest-neighbor class discriminant analysis, the between-class scatter is instead defined as the average scatter between each class and its k nearest-neighbor classes. By choosing an appropriate number of neighbor classes, the method alleviates the class overlap that linear discriminant dimensionality reduction can cause. Experimental results show that the nearest-neighbor class discriminant analysis method is stable and outperforms traditional LDA.
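In symbols (notation assumed here, not taken from the paper): with class means mu_i and N_k(i) the index set of the k classes nearest to class i, the between-class scatter becomes

```latex
S_b^{k\mathrm{NN}} \;=\; \frac{1}{Ck}\sum_{i=1}^{C}\;\sum_{j \in \mathcal{N}_k(i)}
(\boldsymbol{\mu}_i - \boldsymbol{\mu}_j)(\boldsymbol{\mu}_i - \boldsymbol{\mu}_j)^{\top}
```

Taking k = C - 1 averages over all class pairs, recovering the LDA special case mentioned above.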

16.
In this paper, a new variant of linear discriminant analysis (LDA), which we refer to as generalizing relevance weighted LDA (GRW-LDA), is proposed. GRW-LDA extends applicability to cases that LDA cannot handle by combining the advantages of two recent LDA enhancements, namely generalized singular value decomposition based LDA and relevance weighted LDA. Experimental results on the FERET face database demonstrate the effectiveness of the proposed method.

17.
Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention, and many variants of it have been proposed. However, these variants do not fully resolve the inherent problems of LDA. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k-nearest-neighbour graph is first constructed to preserve the local discriminant information of the sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss term, which makes the proposed method robust to outliers in the data points. Extensive experiments have been performed on several standard public image databases, and the results demonstrate the performance of the proposed OSLDA algorithm.
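For readers unfamiliar with the notation, the L2,1 norm of a d x m projection matrix W with rows w^i is

```latex
\lVert W \rVert_{2,1} \;=\; \sum_{i=1}^{d} \lVert \mathbf{w}^{i} \rVert_2
\;=\; \sum_{i=1}^{d} \sqrt{\textstyle\sum_{j=1}^{m} W_{ij}^{2}}
```

Because the row norms enter unsquared, single outlying samples cannot dominate the loss the way they do under a squared Frobenius penalty, and whole rows are driven to zero, giving the row sparsity in the method's name.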

18.
Dimension reduction methods are often applied in machine learning and data mining problems. Commonly used linear subspace methods include principal component analysis (PCA), Fisher's linear discriminant analysis (FDA), and common spatial pattern (CSP). In this paper, we describe a novel feature extraction method for binary classification problems. Instead of finding linear subspaces, our method finds lower-dimensional affine subspaces satisfying a generalization of the Fukunaga–Koontz transformation (FKT). The proposed method has a closed-form solution and can therefore be computed very efficiently. Under the normality assumption, our method can be seen as finding an optimal truncated spectrum of the Kullback–Leibler divergence. We also show that FDA and CSP are special cases of our proposed method under the normality assumption. Experiments on simulated data show that our method performs better than PCA and FDA on data distributed on two cylinders, even when one lies within the other. We also show that, on several real data sets, our method provides statistically significant improvement in test set accuracy over FDA, CSP and FKT. The proposed method can therefore be used as another preliminary data-exploration tool to help solve machine learning and data mining problems.
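A minimal sketch of the classical Fukunaga–Koontz transformation that the method generalizes (a two-class, covariance-only illustration on synthetic data): whitening the summed class covariances makes the two classes share eigenvectors, with per-direction eigenvalues that sum to one.

```python
# Sketch: classical FKT. Whiten S1 + S2, then eigendecompose class 1's
# whitened covariance; class 2 shares the eigenvectors with eigenvalues 1 - lam.
import numpy as np

def fkt(X1, X2):
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    d, U = np.linalg.eigh(S1 + S2)
    P = U / np.sqrt(d)                    # whitening: P.T @ (S1 + S2) @ P = I
    lam, V = np.linalg.eigh(P.T @ S1 @ P) # eigenvalues lie in [0, 1]
    return P @ V, lam                     # FKT directions, class-1 eigenvalues

rng = np.random.default_rng(0)
X1 = rng.normal(0, 1, (200, 4))
X2 = rng.normal(0, 2, (200, 4))
W, lam = fkt(X1, X2)
print(np.round(lam, 3))  # lam near 1 favors class 1, near 0 favors class 2
```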

19.
Classic linear dimensionality reduction (LDR) methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are known not to be robust against outliers. Following a systematic analysis of the multi-class LDR problem in a unified framework, we propose a new algorithm, called minimal distance maximization (MDM), to address the non-robustness issue. The principle behind MDM is to maximize the minimal between-class distance in the output space. MDM is formulated as a semi-definite program (SDP), and its dual problem reveals a close connection to “weighted” LDR methods. A soft version of MDM, which subsumes LDA as a special case, is also developed to deal with overlapping centroids. Finally, we drop the homoscedastic Gaussian assumption made in MDM by extending it in a non-parametric way, along with a gradient-based convex approximation algorithm that significantly reduces the complexity of the original SDP. The effectiveness of our proposed methods is validated on two UCI datasets and two face datasets.
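A plausible formalization of the MDM principle as an SDP (an assumption on our part; the paper's exact constraints may differ): lifting the projection W to M = W W^T makes the minimal pairwise between-class distance linear in M,

```latex
\max_{M \succeq 0,\; \mathrm{tr}(M) \le 1}\;\; \min_{i < j}\;
(\boldsymbol{\mu}_i - \boldsymbol{\mu}_j)^{\top} M\, (\boldsymbol{\mu}_i - \boldsymbol{\mu}_j)
```

with class means mu_i; the max-min objective is what gives the method its robustness, since no single well-separated pair can dominate the solution.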

20.
An integrated principal component analysis (PCA) and linear discriminant analysis method is proposed for feature extraction from simulated fault data. The integrated method first applies PCA to the simulated fault data, then performs linear discriminant analysis in the principal component space, and finally feeds the resulting optimal discriminant feature patterns to a pattern classifier for fault diagnosis. Simulation results show that the proposed method fully exploits the computational simplicity of linear methods, enhances the feature extraction performance of PCA or linear discriminant analysis used alone, captures the essential features of the fault data set, simplifies the classifier structure, and reduces the computational cost of running the system.
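A minimal sketch of the described PCA-then-LDA cascade using scikit-learn (the digits data set and the k-NN classifier are stand-ins; the paper applies the cascade to simulated fault data):

```python
# Sketch: PCA decorrelates and reduces dimension, LDA then extracts
# discriminant features in the principal component space, and a simple
# classifier consumes the discriminant features.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
clf = make_pipeline(PCA(n_components=20),
                    LinearDiscriminantAnalysis(),
                    KNeighborsClassifier())
print(cross_val_score(clf, X, y, cv=5).mean())
```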

