Similar articles
1.
Due to noise disturbance and the limited number of training samples, the within-set and between-set sample covariance matrices in canonical correlation analysis (CCA) based methods usually deviate from the true ones. In this paper, we re-estimate the covariance matrices by embedding a fractional order and incorporating class label information. First, we illustrate the effectiveness of the fractional-order embedding model through theoretical analysis and experiments. Then, we introduce fractional-order within-set and between-set scatter matrices, which can significantly reduce the deviation of the sample covariance matrices. Finally, by incorporating the supervised information, novel generalized CCA and discriminative CCA methods are presented for multi-view dimensionality reduction and recognition, called fractional-order embedding generalized canonical correlation analysis and fractional-order embedding discriminative canonical correlation analysis. Extensive experimental results on various handwritten numeral, face, and object recognition problems show that the proposed methods are very effective and clearly outperform existing methods in terms of classification accuracy.
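The fractional-order re-estimation idea can be sketched in a few lines: decompose the sample covariance matrix and raise its eigenvalues to a fractional power α, which damps the spread of noisy eigenvalues. This is a minimal illustration of the general principle, not the paper's exact scatter-matrix construction; the function name and the choice of α are hypothetical.

```python
import numpy as np

def fractional_covariance(X, alpha=0.5):
    """Re-estimate a sample covariance matrix by raising its
    eigenvalues to the fractional power alpha (0 < alpha <= 1),
    which reduces the spread of noise-inflated eigenvalues."""
    S = np.cov(X, rowvar=False)             # sample covariance, d x d
    vals, vecs = np.linalg.eigh(S)          # eigendecomposition (symmetric)
    vals = np.clip(vals, 0.0, None)         # guard against tiny negatives
    return (vecs * vals**alpha) @ vecs.T    # reassemble with lambda^alpha

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))            # few samples, 5 features
S1 = fractional_covariance(X, alpha=1.0)    # alpha = 1 recovers the usual estimate
S_half = fractional_covariance(X, alpha=0.5)
```

With alpha = 1 the ordinary sample covariance is recovered exactly; smaller alpha shrinks large eigenvalues and lifts small ones toward each other, which is the deviation-reduction effect the abstract describes.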

2.
Multiset canonical correlation analysis (MCCA) is a powerful technique for analyzing linear correlations among multiple data representations. However, it usually fails to discover the intrinsic geometrical and discriminating structure of the multiple data spaces in real-world applications. In this paper, we therefore propose a novel algorithm, called graph regularized multiset canonical correlations (GrMCC), which explicitly considers both the discriminative and the intrinsic geometrical structure of multiple data representations. GrMCC not only maximizes the between-set cumulative correlations, but also minimizes the local intraclass scatter and simultaneously maximizes the local interclass separability by using nearest-neighbor graphs on the within-set data. Thus, it can leverage the power of both MCCA and discriminative graph Laplacian regularization. Extensive experimental results on the AR, CMU PIE, Yale-B, AT&T, and ETH-80 datasets show that GrMCC has more discriminating power and provides encouraging recognition results compared with state-of-the-art algorithms.
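The graph Laplacian regularizer mentioned above is built from a nearest-neighbor graph on the within-set data. A generic construction of such a graph and its unnormalized Laplacian L = D - W looks as follows (binary weights are an assumption; the paper's exact weighting scheme is not reproduced here):

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Build a symmetric k-nearest-neighbor affinity graph W and its
    unnormalized Laplacian L = D - W, the structure used for graph
    Laplacian regularization of local geometry."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # skip self at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize
    L = np.diag(W.sum(1)) - W
    return W, L

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2))
W, L = knn_graph_laplacian(X, k=3)
```

Minimizing a quadratic form z.T @ L @ z over projected features z then penalizes large differences between graph neighbors, which is how local intraclass scatter is kept small.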

3.
The theory of canonical correlation analysis and its application to feature fusion (cited by 22: 0 self-citations, 22 by others)
Using the idea of canonical correlation analysis, a new combined feature extraction method based on feature-level fusion is proposed. First, a theoretical framework for applying canonical correlation analysis to pattern recognition is developed and given a rigorous description: two groups of feature vectors are extracted from the same pattern, a criterion function describing the correlation between the two groups is established, and two sets of canonical projection vectors are solved under this criterion; combined canonical correlation features are then extracted according to a given fusion strategy and used for classification. Second, the problem of solving for the canonical projection vectors when the total covariance matrix formed by the two groups of feature vectors is singular is addressed, making the method applicable to the high-dimensional, small-sample case and extending the applicable range of canonical correlation analysis. Finally, the intrinsic reason why the method is effective for recognition is analyzed theoretically. The method cleverly uses the correlation between two groups of feature vectors as effective discriminative information, achieving information fusion while eliminating redundancy between features, and offers a new way of fusing two groups of features for classification and recognition. Experimental results on the CENPARMI handwritten Arabic numeral database of Concordia University and the FERET face image database confirm the effectiveness and stability of the method, and its recognition results are superior to existing feature-fusion methods and to methods based on a single feature.
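The pipeline described above — solve for two sets of canonical projection vectors, then fuse the projected features — can be sketched with a standard CCA solved by whitening and SVD. A small ridge term handles the singular-covariance small-sample case mentioned in the abstract; the function name, the ridge value, and the concatenation fusion strategy are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cca_fuse(X, Y, d, eps=1e-6):
    """CCA with a small ridge term (to cope with singular covariance in
    the small-sample case), followed by feature-level fusion by
    concatenating the two sets of canonical projections."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):                         # symmetric inverse square root
        vals, vecs = np.linalg.eigh(S)
        return (vecs / np.sqrt(vals)) @ vecs.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)  # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)              # singular values = canonical corrs
    Wx = inv_sqrt(Sxx) @ U[:, :d]
    Wy = inv_sqrt(Syy) @ Vt.T[:, :d]
    fused = np.hstack([X @ Wx, Y @ Wy])      # one possible fusion strategy
    return fused, s[:d]

rng = np.random.default_rng(1)
Z = rng.standard_normal((100, 4))            # shared latent signal
X = Z + 0.1 * rng.standard_normal((100, 4))
Y = Z @ rng.standard_normal((4, 6)) + 0.1 * rng.standard_normal((100, 6))
fused, corrs = cca_fuse(X, Y, d=2)
```

Because both views share the latent Z, the leading canonical correlations come out close to 1, and `fused` carries the correlated information from both feature groups without the redundancy of keeping every raw feature.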

4.
In this paper, a novel LDA-based dimensionality reduction method called fractional-order embedding direct LDA (FEDLDA) is proposed. More specifically, we redefine fractional-order between-class and within-class scatter matrices, which can significantly reduce the deviation of the sample covariance matrices caused by noise disturbance and the limited number of training samples; a novel feature extraction criterion based on direct LDA (DLDA) and the idea of fractional-order embedding is then applied. Experiments on the AT&T, Yale, and AR face image databases are performed to evaluate the effectiveness of the proposed algorithm. Extensive experimental results show that FEDLDA outperforms DLDA and other closely related methods in terms of classification accuracy and efficiency.

5.
Many applications require an estimate of the covariance matrix that is non-singular and well-conditioned. As the dimensionality increases, the sample covariance matrix becomes ill-conditioned or even singular. A common approach to estimating the covariance matrix when the dimensionality is large is Stein-type shrinkage estimation: a convex combination of the sample covariance matrix and a well-conditioned target matrix is used to estimate the covariance matrix. Recent work in the literature has shown that an optimal combination exists under mean-squared loss; however, it must be estimated from the data. In this paper, we introduce a new set of estimators of the optimal convex combination for three commonly used target matrices. A simulation study shows an improvement over those in the literature in cases of extremely high dimensionality. A data analysis shows that the estimators are effective in discriminant and classification analysis.
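The convex-combination estimator has the form (1 - λ) S + λ T. The sketch below uses a scaled-identity target and a simple plug-in weight in the spirit of Ledoit-Wolf; it illustrates the general shrinkage mechanism, not the paper's new estimators or its other two targets.

```python
import numpy as np

def shrink_covariance(X):
    """Stein-type shrinkage toward a scaled identity target:
    (1 - lam) * S + lam * T with a plug-in estimate of the weight.
    Guarantees a non-singular, better-conditioned estimate when p > n."""
    n, p = X.shape
    Xc = X - X.mean(0)
    S = Xc.T @ Xc / n                        # sample covariance (possibly singular)
    mu = np.trace(S) / p
    T = mu * np.eye(p)                       # well-conditioned target
    d2 = ((S - T) ** 2).sum()                # distance of S from the target
    b2 = sum(((np.outer(x, x) - S) ** 2).sum() for x in Xc) / n**2
    lam = min(1.0, b2 / d2) if d2 > 0 else 1.0
    return (1 - lam) * S + lam * T, lam

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))            # p >> n: sample covariance is singular
Sh, lam = shrink_covariance(X)
```

Even though the raw sample covariance here has rank at most 9, the shrunk estimate has all eigenvalues at least lam * mu > 0, so it can be inverted safely in a downstream discriminant analysis.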

6.
In this paper, we present a mixture density based approach to invariant image object recognition. To allow for reliable estimation of the mixture parameters, the dimensionality of the feature space is optionally reduced by applying a robust variant of linear discriminant analysis. Invariance to affine transformations is achieved by incorporating invariant distance measures such as tangent distance. We propose an approach to estimating covariance matrices with respect to image variabilities, as well as a new approach to combined classification, called the virtual test sample method. Application of the proposed classifier to the well-known US Postal Service handwritten digit recognition task (USPS) yields an excellent error rate of 2.2%. We also propose a simple but effective approach to compensating for local image transformations, which significantly increases the performance of tangent distance on a database of 1,617 medical radiographs taken from daily clinical routine.

7.
Sub-pattern canonical correlation analysis and its application to face recognition (cited by 4: 1 self-citation, 3 by others)
Traditional canonical correlation analysis (CCA) is one of the most effective feature extraction methods and has been widely applied in many fields of pattern recognition, including face recognition. However, on high-dimensional small-sample problems typified by face recognition it has the following shortcomings: 1) the small-sample nature of face recognition makes the total covariance matrix formed by the two groups of CCA feature vectors singular, so the method is hard to apply directly; 2) as a global linear projection method, it cannot adequately describe the nonlinear nature of the face recognition problem; 3) it lacks robustness to local variations. Inspired by the previously proposed sub-pattern principal component analysis (SpPCA), this paper proposes sub-pattern canonical correlation analysis (SpCCA). The method uses the correlation between local and global feature vectors as effective discriminative information, achieving the fusion of local and global information while eliminating redundancy between features. By partitioning into sub-patterns, SpCCA avoids the small-sample problem and better describes the nonlinear face recognition problem; by fusing results through voting, it enhances robustness to local variations. Experiments on the AR and Yale face datasets confirm that the method not only achieves better recognition performance than the compared methods, but is also more stable and robust.

8.
Principal component analysis (PCA) is often applied for dimensionality reduction in time series data mining. However, the principle of PCA is based on synchronous covariance, which is not very effective in some cases. In this paper, an asynchronism-based principal component analysis (APCA) is proposed to reduce the dimensionality of univariate time series. In APCA, an asynchronous method based on dynamic time warping (DTW) is developed to obtain interpolated time series derived from the original ones. The correlation coefficient or covariance between the interpolated time series represents the correlation between the original ones. In this way, a novel and valid principal component analysis based on asynchronous covariance is achieved for dimensionality reduction. The results of several experiments demonstrate that the proposed APCA outperforms PCA for dimensionality reduction in time series data mining.
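The asynchronous-covariance idea can be sketched as follows: compute a DTW alignment path between two series, expand each series along the path, and measure correlation on the aligned copies. This is a minimal reconstruction of the DTW-alignment step; the interpolation details and the subsequent PCA of the paper are not reproduced here.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic-time-warping alignment path between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                     # backtrack from the corner
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def asynchronous_corr(x, y):
    """Correlation computed on the DTW-aligned copies of x and y --
    the asynchronous covariance idea in sketch form."""
    path = dtw_path(x, y)
    xa = np.array([x[i] for i, _ in path])
    ya = np.array([y[j] for _, j in path])
    return np.corrcoef(xa, ya)[0, 1]

t = np.linspace(0, 2 * np.pi, 30)
x, y = np.sin(t), np.sin(t + 0.7)            # same shape, phase-shifted
```

For the phase-shifted pair above, the aligned (asynchronous) correlation comes out higher than the plain synchronous correlation, which is exactly the effect the abstract exploits.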

9.
Correlation analysis is regarded as a significant challenge in the mining of multidimensional data streams. Existing correlation analysis methods for data stream mining generally focus on one-dimensional streams, so the identification of the underlying correlations among multivariate arrays (e.g., sensor data) has long been ignored, and the technique of canonical correlation analysis (CCA) has rarely been applied to multidimensional data streams. In this study, a novel correlation analysis algorithm based on CCA, called ApproxCCA, is proposed to explore the correlations between two multidimensional data streams in resource-limited environments. By introducing unequal-probability sampling and low-rank approximation to reduce the dimensionality of the product matrix composed of the sample covariance and sample variance matrices, ApproxCCA improves computational efficiency while preserving analytical precision. Experimental results on synthetic and real data sets indicate that ApproxCCA overcomes the computational bottleneck of traditional CCA and accurately detects the correlations between two multidimensional data streams.

10.
To perform dimensionality reduction effectively in the semi-supervised multi-view setting, a semi-supervised canonical correlation analysis method that uses a non-negative low-rank graph for label propagation is proposed. The global linear neighborhoods captured by the non-negative low-rank graph can exploit information from both direct neighbors and indirectly reachable neighbors to preserve the global cluster structure, while the low-rank property keeps the graph in a compressed representation. After the unlabeled samples obtain estimated label information through the label propagation algorithm, a soft label matrix and a probabilistic within-class scatter matrix are constructed for each view. The discriminative power of the features is then improved by maximizing the correlation between same-class samples across different views while minimizing the within-class variation in the low-dimensional feature space of each view. Experiments show that the proposed method achieves better and more robust recognition performance than existing related methods.
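The label-propagation step that assigns estimated labels to unlabeled samples can be sketched with the standard iteration F ← αSF + (1 - α)Y on a fixed affinity graph. This shows only the generic propagation mechanism; the paper's non-negative low-rank graph construction is not reproduced, and the toy chain graph below is an illustrative assumption.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=200):
    """Label propagation on an affinity graph W: iterate
    F <- alpha * S @ F + (1 - alpha) * Y with the symmetrically
    normalized affinity S = D^{-1/2} W D^{-1/2}; unlabeled rows
    of the indicator matrix Y are all-zero."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)                       # estimated label per node

# Toy chain graph 0 - 1 - 2 - 3: ends are labeled, middle nodes are not.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0],                        # node 0: class 0
              [0, 0],                        # unlabeled
              [0, 0],                        # unlabeled
              [0, 1]], dtype=float)          # node 3: class 1
labels = propagate_labels(W, Y)
```

Each unlabeled node inherits the class whose labeled mass reaches it through the shortest graph paths, so node 1 adopts class 0 and node 2 adopts class 1.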

11.
An effective algorithm for extracting and recognizing combined features of handwritten Chinese characters (cited by 2: 0 self-citations, 2 by others)
Based on the idea of feature fusion, and from the standpoint of what benefits pattern classification, the theory of canonical correlation analysis is generalized and a theoretical framework of generalized canonical correlation analysis for image recognition is established. Within this framework, a generalized canonical correlation criterion function is first used to obtain the generalized projection vector sets of two groups of feature vectors, forming a pair of transformation matrices; then, according to the proposed new feature-fusion strategy, two kinds of handwritten Chinese character features are fused. The extracted correlation feature matrices of the patterns achieve good classification results with ordinary classifiers, outperforming existing feature-fusion methods as well as the single-feature PCA and FLDA methods.

12.
Quadratic discriminant analysis using virtual training samples (cited by 2: 0 self-citations, 2 by others)
The small-sample problem makes the class covariance matrices singular and unstable. This paper generates virtual training samples by perturbing the training samples; using these virtual samples overcomes the singularity of the class covariance matrices, so that quadratic discriminant analysis (QDA) can be applied directly. The method avoids the parameter-optimization problem of regularized discriminant analysis (RDA). Experimental results show that the recognition rate of QDA is better than that of RDA even when RDA's parameters are optimally tuned.
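A minimal sketch of the idea: augment each class with perturbed copies of its samples so the per-class covariance becomes nonsingular, then apply the ordinary QDA decision rule. The isotropic Gaussian perturbation, the noise level, and the function name are illustrative assumptions; the paper's perturbation scheme may differ.

```python
import numpy as np

def qda_with_virtual_samples(X, y, n_virtual=20, sigma=0.05, seed=0):
    """Fit QDA after augmenting each class with perturbed copies of its
    training samples (virtual samples), which makes the per-class
    covariance matrices invertible even with very few real samples."""
    rng = np.random.default_rng(seed)
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        reps = rng.integers(0, len(Xc), size=n_virtual)
        virt = Xc[reps] + sigma * rng.standard_normal((n_virtual, X.shape[1]))
        Xa = np.vstack([Xc, virt])           # real + virtual samples
        mu = Xa.mean(0)
        S = np.cov(Xa, rowvar=False)         # now nonsingular
        params[c] = (mu, np.linalg.inv(S), np.linalg.slogdet(S)[1])

    def predict(x):
        # Gaussian log-density up to a constant (equal priors assumed)
        scores = {c: -0.5 * (x - mu) @ P @ (x - mu) - 0.5 * ld
                  for c, (mu, P, ld) in params.items()}
        return max(scores, key=scores.get)

    return predict

# Two samples per class in 2-D: the raw class covariances are singular,
# but the virtual samples make QDA applicable.
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.0, 3.1]])
y = np.array([0, 0, 1, 1])
predict = qda_with_virtual_samples(X, y)
```

Without the virtual samples, `np.linalg.inv(S)` would fail here, since two points in 2-D span only a line; the perturbed copies restore full rank, which is exactly the singularity fix the abstract describes.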

13.
Spectro-temporal representation of speech has become one of the leading signal representation approaches in speech recognition systems in recent years. This representation suffers from the high dimensionality of the feature space, which makes it unsuitable for practical speech recognition systems. In this paper, a new clustering-based method is proposed for secondary feature selection/extraction in the spectro-temporal domain. In the proposed representation, Gaussian mixture model (GMM) and weighted K-means (WKM) clustering techniques are applied to the spectro-temporal domain to reduce the dimensionality of the feature space. The elements of the centroid vectors and covariance matrices of the clusters are taken as the attributes of the secondary feature vector of each frame. To evaluate the efficiency of the proposed approach, tests were conducted on the classification of phonemes within the main phoneme categories of the TIMIT database. Employing the proposed secondary feature vector yields a significant improvement in the classification rates of different sets of phonemes compared with MFCC features. The average improvement in the classification rate of voiced plosives over MFCC features is 5.9% using WKM clustering and 6.4% using GMM clustering. The greatest improvement, about 7.4%, is obtained using WKM clustering in the classification of front vowels.

14.
Ruan L, Yuan M, Zou H. Neural Computation, 2011, 23(6): 1605-1622
Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility. However, parameter estimation for Gaussian mixture models with high dimensionality can be challenging because of the large number of parameters that need to be estimated. In this letter, we propose a penalized likelihood estimator to address this difficulty. The ℓ1-type penalty we impose on the inverse covariance matrices encourages sparsity in their entries and therefore helps to reduce the effective dimensionality of the problem. We show that the proposed estimate can be efficiently computed using an expectation-maximization algorithm. To illustrate the practical merits of the proposed method, we consider its applications in model-based clustering and mixture discriminant analysis. Numerical experiments with both simulated and real data show that the new method is a valuable tool for high-dimensional data analysis.

15.
Information fusion and recognition based on color face images (cited by 1: 0 self-citations, 1 by others)
To use the color information of images for recognition while effectively reducing the large increase in computation it entails, a face recognition method based on supervised neighborhood preserving embedding of color images is proposed, which fuses the color information of an image and uses the supervised neighborhood preserving embedding algorithm to improve face recognition efficiency. First, the Gabor transform is applied to each color-component image to extract Gabor features; then canonical correlation analysis is used to fuse the extracted Gabor features, and supervised neighborhood preserving embedding reduces the dimensionality of the high-dimensional color image features; finally, a nearest-neighbor classifier classifies the images. Experiments on the XM2VTS and FRAV2D color face databases, using principal component analysis, linear discriminant analysis, and supervised neighborhood preserving embedding to reduce the dimensionality of grayscale Gabor features and color-fusion Gabor features, show that combining multi-channel color image fusion with supervised neighborhood preserving embedding significantly improves recognition performance.

16.
The high dimensionality of hyperspectral images and the strong correlation between bands lead to large data volume and information redundancy in hyperspectral ground-object recognition, reducing classification accuracy. To address these problems, a hyperspectral image classification method is proposed that combines local Fisher discriminant analysis (LFDA) for dimensionality reduction with an extreme learning machine (ELM) optimized by a genetic algorithm (GA). First, LFDA reduces the dimensionality of the hyperspectral data, eliminating redundancy while preserving the main features within local neighborhoods; then the GA-optimized ELM classifies the reduced feature samples, improving classification accuracy. Applied to ground-object recognition on the Salinas and Pavia University hyperspectral images, the method achieves classification accuracies of 98.56% and 97.11% respectively, verifying its effectiveness.
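The ELM classifier at the core of the pipeline above has a very simple structure: random input weights, a nonlinear hidden layer, and output weights solved in closed form by least squares. The sketch below shows only this ELM core under standard assumptions (sigmoid activation, pseudo-inverse solution); the GA tuning and the LFDA step are omitted.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Basic extreme learning machine: random input weights and biases,
    sigmoid hidden layer, output weights via the Moore-Penrose
    pseudo-inverse (no iterative training of the hidden layer)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden activations
    T = np.eye(int(y.max()) + 1)[y]          # one-hot targets
    beta = np.linalg.pinv(H) @ T             # closed-form output weights

    def predict(Xnew):
        Hn = 1.0 / (1.0 + np.exp(-(Xnew @ W + b)))
        return (Hn @ beta).argmax(1)

    return predict

rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((20, 2)) * 0.5,
               rng.standard_normal((20, 2)) * 0.5 + 4.0])
y = np.repeat([0, 1], 20)
predict = train_elm(X, y)
```

Because the hidden layer is never trained, the only free choices are the number of hidden units and the random weights themselves, which is precisely what a GA can search over in the paper's setup.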

17.
In pattern classification and recognition problems on large-scale, complex data, the vast majority of classifiers run into the thorny curse-of-dimensionality problem. Before classifying high-dimensional data, nonlinear dimensionality reduction based on supervised manifold learning offers an effective remedy. This paper combines multinomial logistic regression for classification and prediction with manifold-learning-based nonlinear dimensionality reduction to solve classification problems on both image and non-image data, yielding a new classification and recognition method. Extensive experiments and comparative analysis verify the superiority of the proposed method.

18.
To address the difficulty that dimensionality reduction in the feature extraction stage of face recognition usually struggles to preserve both global and local feature information, and the small-sample problem of the Bayesian classifier in the matching and recognition stage, a Bayesian face recognition method fusing global and local features is proposed. The method extracts the global nonlinear features of the face data through kernel principal component analysis, and on that basis mines the local manifold structure with orthogonalized locality sensitive discriminant analysis, so as to extract highly discriminative, low-dimensional intrinsic face features. A maximum-information covariance selection approach is adopted to estimate the covariance matrix, solving the small-sample problem in Bayesian classifier design. Experiments on the ORL, AR, YALE, and FLW face databases show that the proposed feature extraction algorithm and the improvement to the Bayesian classifier achieve good results; optimizing both stages significantly improves face recognition performance.

19.
To address the problems of high feature dimensionality and low recognition rates in current emotion recognition research, an emotion recognition method based on the fusion of multiple physiological signals (ECG, EMG, respiration, and skin conductance) and FCA-ReliefF feature selection is proposed. Physiological-signal features extracted in both the time and frequency domains are fused and fed to a classifier for emotion classification. To reduce the feature dimensionality, feature correlation analysis (FCA) first removes highly correlated features; ReliefF then discards features that contribute little to classification. The method is validated on a public dataset and compared with related work. The results show that the proposed method is superior in both feature dimensionality and recognition rate: the FCA-ReliefF strategy effectively reduces the features from 108 to 60 dimensions and raises recognition accuracy to 98.40%, verifying the method's effectiveness.
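The two-stage selection can be sketched as: (1) drop one of every pair of features whose absolute correlation exceeds a threshold, then (2) score the survivors with a basic Relief pass (nearest hit vs. nearest miss). This is a generic reconstruction of the FCA-ReliefF idea with illustrative names and thresholds, not the paper's code; the Relief variant shown is the simple binary-class form rather than full ReliefF.

```python
import numpy as np

def fca_relief(X, y, corr_thresh=0.9):
    """Two-stage feature selection: correlation filtering followed by
    a basic Relief scoring of the surviving features."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):              # greedy correlation filter
        if all(C[j, k] <= corr_thresh for k in keep):
            keep.append(j)
    Xk = X[:, keep]
    w = np.zeros(Xk.shape[1])                # Relief weights
    for i in range(len(Xk)):
        d = np.abs(Xk - Xk[i]).sum(1)
        d[i] = np.inf                        # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))   # nearest same-class
        miss = np.argmin(np.where(y != y[i], d, np.inf))  # nearest other-class
        w += np.abs(Xk[i] - Xk[miss]) - np.abs(Xk[i] - Xk[hit])
    return keep, w

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)
f0 = y * 3.0 + 0.3 * rng.standard_normal(40)   # informative feature
f1 = f0 + 0.01 * rng.standard_normal(40)       # near-duplicate of f0
f2 = rng.standard_normal(40)                   # pure noise
X = np.column_stack([f0, f1, f2])
keep, w = fca_relief(X, y)
```

The redundant copy `f1` is removed by the correlation filter, and among the survivors the informative feature receives a much higher Relief weight than the noise feature, mirroring the 108-to-60 reduction strategy described in the abstract.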

20.
Combined feature extraction and face recognition based on canonical correlation analysis (cited by 14: 0 self-citations, 14 by others)
Using the idea of canonical correlation analysis, a new combined feature extraction method based on feature-level fusion is proposed. First, two groups of feature vectors are extracted from the same pattern and a criterion function describing the correlation between them is given; their canonical correlation features are then extracted under this criterion to form effective discriminant feature vectors for recognition. The method cleverly uses the correlation between two groups of feature vectors as effective discriminative information, achieving information fusion while eliminating redundancy between features, and offers a new way of fusing two groups of features for classification and recognition. In addition, the intrinsic reason why the proposed method is effective for recognition is further analyzed theoretically. Experimental results on the standard Yale and ORL face databases confirm the effectiveness and stability of the proposed algorithm, with recognition rates much higher than those obtained with a single feature.
