Similar Documents
Found 20 similar documents (search time: 127 ms)
1.
A new, fast feature extraction method for ear images is proposed. A two-dimensional discrete wavelet decomposition is first applied to the ear image; the orthogonal centroid algorithm then reduces the dimensionality of the resulting low-frequency information, yielding the image's feature vector. Experiments show that, compared with the Fisherfaces method widely used in pattern recognition, this method achieves a roughly comparable recognition rate while requiring less computation and reducing dimensionality faster, making it an effective means of ear-image feature extraction.
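The pipeline above can be sketched in a few lines of NumPy. The abstract does not name the wavelet or give image sizes, so a plain Haar low-pass stage and random stand-ins for ear images are assumptions here; the orthogonal centroid step is the standard QR factorization of the class-centroid matrix:

```python
import numpy as np

def haar_ll(img):
    """One level of a 2-D Haar wavelet transform; return the low-frequency LL subband."""
    img = img.astype(float)
    # low-pass (average) adjacent rows, then adjacent columns
    rows = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    return (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)

def orthogonal_centroid(X, y):
    """Orthogonal centroid projection: QR-factorize the class-centroid matrix.
    X: (d, n) data columns, y: (n,) labels. Returns Q (d, k) with orthonormal columns."""
    classes = np.unique(y)
    C = np.stack([X[:, y == c].mean(axis=1) for c in classes], axis=1)
    Q, _ = np.linalg.qr(C)      # reduced QR: Q spans the centroid subspace
    return Q

rng = np.random.default_rng(0)
# toy stand-ins for ear images: 3 classes, 5 samples each, 16x16 pixels
imgs = rng.normal(size=(15, 16, 16)) + np.repeat(np.arange(3), 5)[:, None, None]
labels = np.repeat(np.arange(3), 5)
feats = np.stack([haar_ll(im).ravel() for im in imgs], axis=1)  # (64, 15)
Q = orthogonal_centroid(feats, labels)
reduced = Q.T @ feats            # k-dimensional feature vectors (k = 3 classes)
```

The QR factorization of the small d × k centroid matrix is what makes the method cheap compared with eigendecomposition-based reductions.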

2.
Fisher linear discriminant analysis suffers from an ill-conditioned singularity problem when used for feature extraction on high-dimensional, small-sample data, and many remedies have been proposed. Targeting the small sample size problem, and drawing on existing ear recognition methods, this paper proposes an ear recognition method that reduces the dimensionality of the image data with the KDA/GSVD algorithm and classifies samples with an SVM. The fundamentals of linear discriminant analysis, the generalized singular value decomposition, and support vector machines are also briefly reviewed. Experiments show that KDA/GSVD resolves the non-invertibility of the within-class scatter matrix in LDA caused by the small sample size; combined with the support vector machine, it yields an effective new ear recognition method.
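The "non-invertible within-class scatter" situation that motivates GSVD-based methods is easy to reproduce: with n samples of dimension d > n, rank(Sw) is at most n minus the number of classes. A minimal demonstration on synthetic data (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_per, k = 100, 3, 2               # dimension 100, only 3 samples per class
X = rng.normal(size=(d, n_per * k))
y = np.repeat(np.arange(k), n_per)

# within-class scatter Sw = sum over classes of (x - mean_c)(x - mean_c)^T
Sw = np.zeros((d, d))
for c in range(k):
    Xc = X[:, y == c]
    M = Xc - Xc.mean(axis=1, keepdims=True)
    Sw += M @ M.T

rank = np.linalg.matrix_rank(Sw)
# rank(Sw) <= n - k = 4, far below d = 100, so Sw is singular: classical LDA's
# Sw^{-1} Sb eigenproblem is undefined, which is what the GSVD route circumvents
```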

3.
The wavelet transform offers good time-frequency analysis properties and fast algorithms, and it also serves to reduce dimensionality. Tensor PCA achieves higher recognition rates for ear recognition than PCA. A new ear recognition method combining the advantages of both is proposed: the ear image is first preprocessed with a wavelet transform to obtain four subband images; tensor PCA then extracts features from each subband; finally, a nearest-neighbor classifier recognizes the ear image. Experimental results show a higher recognition rate than recognition with principal component analysis alone.
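Tensor PCA on 2-D images is commonly realized in the 2DPCA style: an image covariance matrix is built from the image matrices themselves (no vectorization), and each image is projected onto its leading eigenvectors. A minimal sketch on random stand-in subband images (sizes and the number of kept directions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 16, 16))      # 20 stand-in subband images, 16x16
Abar = A.mean(axis=0)

# image covariance matrix G = E[(A - Abar)^T (A - Abar)], size 16x16
G = np.zeros((16, 16))
for Ai in A:
    D = Ai - Abar
    G += D.T @ D
G /= len(A)

# project each image onto the top-q eigenvectors of G; features stay 2-D
w, V = np.linalg.eigh(G)               # eigenvalues ascending
P = V[:, -4:]                          # keep q = 4 leading directions
Y = np.stack([Ai @ P for Ai in A])     # feature matrices: (20, 16, 4)
```

Because the covariance matrix has the width of the image rather than the length of a vectorized image, the eigenproblem is far smaller than in classical PCA.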

4.
Face recognition method based on Gabor wavelets and deep belief networks   Cited by: 1 (self-citations: 0, other citations: 1)
柴瑞敏  曹振基 《计算机应用》2014,34(9):2590-2594
Feature extraction and pattern classification are two key problems in face recognition. To address the high dimensionality and small sample size of face recognition, this paper starts from feature extraction and dimension reduction and proposes a secondary feature extraction and dimension reduction model based on the restricted Boltzmann machine (RBM). The image is first partitioned evenly into local blocks and quantized; a Gabor wavelet transform is then applied, and an RBM encodes the resulting Gabor face features, learning a more intrinsic representation of the data and thus reducing the dimensionality of the high-dimensional face features. On this basis, a multi-channel face recognition algorithm based on deep belief networks (DBN) is proposed. Experiments on the ORL, UMIST and FERET face databases with different sample sizes and image resolutions show that, compared with methods using linear dimension reduction and shallow networks, the proposed method achieves better learning efficiency and very good recognition results.
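The RBM encoding step can be illustrated with a minimal binary RBM trained by one-step contrastive divergence (CD-1). Everything below (layer sizes, learning rate, random binary data) is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 3                      # encode 6 visible units into 3 hidden units
W = 0.1 * rng.normal(size=(n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)

data = (rng.random((50, n_vis)) < 0.5).astype(float)
lr = 0.1
for _ in range(200):                     # CD-1: one Gibbs step per update
    v0 = data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)       # reconstruction of the visible layer
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

codes = sigmoid(data @ W + b_h)          # low-dimensional encoding of the inputs
```

In the paper's setting the visible layer would be the (quantized) Gabor features of a block, and stacking such RBMs yields the DBN used for recognition.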

5.
To address the high dimensionality of the feature vectors obtained when wavelet decomposition is used to extract lung-sound features, a feature extraction method combining linear discriminant analysis with wavelet decomposition is proposed. The lung-sound signal is first decomposed with a wavelet; the resulting wavelet coefficients are converted into a wavelet-energy feature vector; linear discriminant analysis then reduces this vector to a new low-dimensional feature vector; finally, an SVM classifies the low-dimensional features. In the experiments, three kinds of lung sounds were used: normal breath sounds, crackles, and wheezes. The proposed method reduced the 8-dimensional wavelet-energy features to 2 dimensions, classification was performed on the 2-dimensional features, and the results were compared with those obtained before reduction. The results show that reducing the wavelet-energy features with linear discriminant analysis greatly improves recognition accuracy; comparisons with several other typical lung-sound feature extraction methods likewise show that combining linear discriminant analysis with wavelet decomposition attains higher recognition accuracy.
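The wavelet-energy feature vector can be sketched as a multi-level 1-D decomposition (Haar is assumed here; the abstract does not name its wavelet) followed by the energy of each band. Seven levels over a 256-sample frame give an 8-dimensional vector like the one mentioned above; the sinusoid is a stand-in for a real lung-sound frame:

```python
import numpy as np

def haar_levels(signal, levels):
    """Multi-level 1-D Haar decomposition: detail coefficients per level + final approximation."""
    a = np.asarray(signal, float)
    details = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # high-pass band at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # low-pass carried to the next level
        details.append(d)
    return details, a

def wavelet_energy_features(signal, levels=7):
    """Energy of each detail band plus the approximation -> (levels+1)-dim feature vector."""
    details, approx = haar_levels(signal, levels)
    e = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    return e / e.sum()      # normalize so features are comparable across recordings

t = np.linspace(0, 1, 256, endpoint=False)
lung = np.sin(2 * np.pi * 40 * t)              # stand-in for a lung-sound frame
feat = wavelet_energy_features(lung)           # 8-dimensional energy vector
```

The LDA reduction to 2 dimensions can then be applied to a set of such vectors, e.g. with scikit-learn's `LinearDiscriminantAnalysis`, before the SVM stage.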

6.
A new dimension-reduction method based on two-dimensional discrete wavelet decomposition and blockwise discrete cosine transform is proposed and compared with the PCA-LDA method used for feature extraction and dimension reduction in pattern recognition. The results show that it achieves a recognition rate roughly comparable to PCA-LDA while requiring less computation and reducing dimensionality faster, making it an effective dimension-reduction technique for face recognition.
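The blockwise DCT stage can be sketched with an explicit orthonormal DCT-II matrix (the 8 × 8 block size is an assumption; the abstract does not specify it):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix of size N x N."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2)            # scale the DC row so C is orthonormal
    return C

def block_dct(img, bs=8):
    """Apply a 2-D DCT independently to each bs x bs block of the image."""
    C = dct_matrix(bs)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            out[i:i + bs, j:j + bs] = C @ img[i:i + bs, j:j + bs] @ C.T
    return out

img = np.random.default_rng(4).normal(size=(16, 16))
coeffs = block_dct(img)
```

Because the transform is orthogonal, each block is exactly recoverable, and keeping only the low-frequency (top-left) coefficients of each block is what performs the dimension reduction.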

7.
Tensor principal component analysis is a new PCA method that overcomes the problems traditional PCA encounters when reducing the dimensionality of images. The wavelet transform has good time-frequency analysis properties and also serves to reduce dimensionality. Combining the advantages of the two, a new ear recognition method based on tensor PCA is proposed: the ear image is preprocessed with a wavelet transform to obtain four subband images; tensor PCA extracts features from the low-frequency "LL" subband; and a support vector machine performs recognition. Experimental results on the USTB ear database show that, compared with traditional PCA, the method raises the recognition rate by 6% while taking only 35.23% of the recognition time.

8.
吴恩赐 《福建电脑》2011,27(5):73-75
A face feature extraction method is proposed that combines the strengths of Gabor wavelets, two-dimensional principal component analysis, and two-dimensional linear discriminant analysis. The algorithm first applies a Gabor wavelet transform to the face image, then performs 2DPCA feature extraction to obtain eigenfaces, and finally applies 2DLDA, with a support vector machine as the classifier. Tests on the ORL and Yale face databases show that computational complexity is reduced while the recognition rate improves.

9.
A new palmprint feature extraction method   Cited by: 1 (self-citations: 1, other citations: 0)
A palmprint recognition method based on Gabor wavelets and an improved generalized K-L transform is proposed. A Gabor wavelet transform is first applied to the grayscale palmprint ROI of a test sample to obtain its Gabor feature vector; the improved generalized K-L transform then maps this high-dimensional feature vector into a low-dimensional space; finally, the low-dimensional feature vector is matched against the training library by Euclidean distance. The method is the first to combine a time-frequency feature extraction algorithm with a subspace one, exploiting the excellent feature extraction performance of Gabor functions while effectively handling the reduction of high-dimensional features. Comparative experiments on a self-collected database achieved a recognition rate of 94%.
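A 2-D Gabor kernel of the kind used in these Gabor-feature methods is straightforward to construct (the parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating a cosine wave.
    theta is the orientation, wavelength the carrier period, gamma the aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
# "Gabor feature vector" = responses of the ROI convolved with such kernels at
# several scales (wavelengths) and orientations (theta), stacked into one vector
```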

10.
Automatic surface defect recognition based on Gabor wavelets and the kernel locality preserving projections algorithm   Cited by: 3 (self-citations: 0, other citations: 3)
Based on a study of the Gabor wavelet transform and the kernel locality preserving projections (KLPP) algorithm and an analysis of the characteristics of surface defects on hot-rolled steel plates, a feature extraction method combining Gabor wavelets and KLPP is proposed and applied to automatic recognition of hot-rolled plate surface defects. The image is first decomposed by Gabor wavelets into 40 components over 5 scales and 8 orientations; the mean and variance of the real and imaginary parts of the original image and of each component are extracted, producing a 162-dimensional feature vector; KLPP then reduces this vector to 21 dimensions; finally, a multilayer perceptron classifies the samples. The proposed feature extraction method is computationally simple and parallelizable, and discriminates well between edges and textures distributed along particular directions. Experiments on defect images collected on the production line achieved a recognition rate of 93.87%.

11.
The goal of face recognition is to distinguish persons via their facial images. Each person's images form a cluster, and a new image is recognized by assigning it to the correct cluster. Since the images are very high-dimensional, it is necessary to reduce their dimension. Linear discriminant analysis (LDA) has been shown to be effective at dimension reduction while preserving the cluster structure of the data. It is classically defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular restricts its application to datasets in which the dimension of the data does not exceed the sample size. For face recognition, however, the dimension typically exceeds the number of images in the database, resulting in what is referred to as the small sample size problem. Recently, the applicability of LDA has been extended by using the generalized singular value decomposition (GSVD) to circumvent the nonsingularity requirement, thus making LDA directly applicable to face recognition data. Our experiments confirm that LDA/GSVD solves the small sample size problem very effectively as compared with other current methods.

12.
A two-stage linear discriminant analysis via QR-decomposition   Cited by: 3 (self-citations: 0, other citations: 3)
Linear discriminant analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using principal component analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of singular value decomposition or generalized singular value decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
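The two-stage structure of LDA/QR can be sketched directly: the QR factorization touches only the d × k centroid matrix, after which ordinary LDA runs in a k-dimensional space. A compact NumPy sketch on toy data (all sizes illustrative):

```python
import numpy as np

def lda_qr(X, y):
    """Two-stage LDA/QR sketch. X: (d, n) data columns, y: (n,) labels.
    Stage 1: QR-decompose the d x k class-centroid matrix (cheap even for large d).
    Stage 2: ordinary LDA on the k-dimensional projected data."""
    classes = np.unique(y)
    k = len(classes)
    C = np.stack([X[:, y == c].mean(axis=1) for c in classes], axis=1)
    Q, _ = np.linalg.qr(C)                    # stage 1: (d, k) orthonormal basis
    Z = Q.T @ X                               # data in the reduced space
    mz = Z.mean(axis=1, keepdims=True)
    Sb = np.zeros((k, k)); Sw = np.zeros((k, k))
    for c in classes:
        Zc = Z[:, y == c]
        mc = Zc.mean(axis=1, keepdims=True)
        Sb += Zc.shape[1] * (mc - mz) @ (mc - mz).T
        Sw += (Zc - mc) @ (Zc - mc).T
    # stage 2: whiten Sw, then eigendecompose the symmetric whitened Sb
    wv, U = np.linalg.eigh(Sw)
    inv_sqrt = U @ np.diag([1 / np.sqrt(v) if v > 1e-10 else 0.0 for v in wv]) @ U.T
    _, V = np.linalg.eigh(inv_sqrt @ Sb @ inv_sqrt)
    return Q @ (inv_sqrt @ V[:, ::-1][:, :k - 1])   # top k-1 discriminant directions

rng = np.random.default_rng(6)
d, n_per = 40, 6
means = np.zeros((3, d)); means[1, 0] = 8.0; means[2, 1] = 8.0
X = np.concatenate([rng.normal(size=(d, n_per)) + m[:, None] for m in means], axis=1)
y = np.repeat(np.arange(3), n_per)
G = lda_qr(X, y)                              # (40, 2) discriminant projection
```

All the d-dimensional work is in one reduced QR of a d × k matrix; the eigenproblems that follow are k × k regardless of the image dimension.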

13.
Facial Feature Extraction Method Based on Coefficients of Variances   Cited by: 1 (self-citations: 0, other citations: 1)
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing a singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV). Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

14.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes the class separability. However, in undersampled problems where the number of data samples is smaller than the dimension of data space, it is difficult to apply LDA due to the singularity of scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high dimensional data, an efficient algorithm is proposed, which reduces the computational complexity greatly while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve the generalized eigenvalue problem, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.

15.
An optimization criterion is presented for discriminant analysis. The criterion extends the optimization criteria of the classical Linear Discriminant Analysis (LDA) through the use of the pseudoinverse when the scatter matrices are singular. It is applicable regardless of the relative sizes of the data dimension and sample size, overcoming a limitation of classical LDA. The optimization problem can be solved analytically by applying the Generalized Singular Value Decomposition (GSVD) technique. The pseudoinverse has been suggested and used for undersampled problems in the past, where the data dimension exceeds the number of data points. The criterion proposed in this paper provides a theoretical justification for this procedure. An approximation algorithm for the GSVD-based approach is also presented. It reduces the computational complexity by finding subclusters of each cluster and uses their centroids to capture the structure of each cluster. This reduced problem yields much smaller matrices to which the GSVD can be applied efficiently. Experiments on text data, with up to 7,000 dimensions, show that the approximation algorithm produces results that are close to those produced by the exact algorithm.
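The pseudoinverse criterion is well defined even when every scatter matrix is singular. A small undersampled example (dimension greater than the number of samples; all sizes illustrative), where classical Sw^(-1) does not exist but trace(St^+ Sb) does:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n_per, k = 30, 4, 3                # d = 30 > n = 12: undersampled case
X = rng.normal(size=(d, n_per * k)) + np.repeat(np.arange(k) * 3.0, n_per)
y = np.repeat(np.arange(k), n_per)

m = X.mean(axis=1, keepdims=True)
Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
for c in range(k):
    Xc = X[:, y == c]
    mc = Xc.mean(axis=1, keepdims=True)
    Sb += Xc.shape[1] * (mc - m) @ (mc - m).T
    Sw += (Xc - mc) @ (Xc - mc).T
St = Sb + Sw                          # total scatter; singular since n < d

# classical LDA needs Sw^{-1}, which does not exist here; the pseudoinverse
# criterion trace(St^+ Sb) is defined regardless of singularity
J = np.trace(np.linalg.pinv(St) @ Sb)
```

The nonzero eigenvalues of St^+ Sb lie in [0, 1], so J is bounded by the number of classes minus one, mirroring the classical trace criterion when the matrices are nonsingular.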

16.
Linear discriminant analysis (LDA) is one of the most traditional linear dimensionality reduction methods. This paper incorporates inter-class relationships as relevance weights into the estimation of the overall within-class scatter matrix in order to improve the performance of the basic LDA method and some of its improved variants. We demonstrate that in some specific situations the standard multi-class LDA almost totally fails to find a discriminative subspace if the proposed relevance weights are not incorporated. To estimate the relevance weights of the individual within-class scatter matrices, we propose several methods, one of which employs evolution strategies.

17.
Since Gabor features are fairly robust to variations in illumination and expression and have been applied successfully to face recognition, an improved Gabor-LDA algorithm is proposed. The face image is first filtered with Gabor wavelets at multiple orientations and scales; the resulting feature vectors are then reduced in dimension with an improved principal component analysis (PCA) transform; and adaptive weighting is used to rebuild the within-class and between-class scatter matrices, improving the linear discriminant analysis (LDA) criterion function and effectively addressing the deviation of training-sample class means from class centers. Numerical experiments on the Yale face database show that the algorithm outperforms traditional ones.

18.
When linear discriminant analysis (LDA) is used for high-dimensional face recognition, the small sample size (SSS) problem and the overlap of boundary classes frequently arise. This paper proposes a new LDA method that redefines the within-class scatter matrix, using a parameter to balance the bias and variance of its eigenvalue estimates and thereby address the small sample size problem, and that weights the between-class scatter matrix so that boundary classes are distributed evenly, preventing their overlap and raising the recognition rate. Extensive experiments have shown that the method can tune the parameter to the severity of the small sample size problem to reach the highest recognition rate, outperforming traditional methods.

19.
We propose an eigenvector-based heteroscedastic linear dimension reduction (LDR) technique for multiclass data. The technique is based on a heteroscedastic two-class technique which utilizes the so-called Chernoff criterion, and successfully extends the well-known linear discriminant analysis (LDA). The latter, which is based on the Fisher criterion, is incapable of dealing with heteroscedastic data in a proper way. For the two-class case, the between-class scatter is generalized so as to capture differences in (co)variances. It is shown that the classical notion of between-class scatter can be associated with Euclidean distances between class means. From this viewpoint, the between-class scatter is generalized by employing the Chernoff distance measure, leading to our proposed heteroscedastic measure. Finally, using the results from the two-class case, a multiclass extension of the Chernoff criterion is proposed. This criterion combines separation information present in the class means as well as the class covariance matrices. Extensive experiments and a comparison with similar dimension reduction techniques are presented.
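The Chernoff distance underlying the criterion reduces, at s = 1/2, to the Bhattacharyya distance, and unlike the Fisher criterion it is nonzero for two Gaussians with equal means but different covariances. A direct NumPy implementation of the standard two-Gaussian formula:

```python
import numpy as np

def chernoff_distance(m1, S1, m2, S2, s=0.5):
    """Chernoff distance between N(m1, S1) and N(m2, S2); s = 0.5 gives Bhattacharyya."""
    Sm = (1 - s) * S1 + s * S2                 # interpolated covariance
    dm = m2 - m1
    term1 = s * (1 - s) / 2 * dm @ np.linalg.solve(Sm, dm)   # mean-separation part
    _, logdet_m = np.linalg.slogdet(Sm)
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    term2 = 0.5 * (logdet_m - (1 - s) * ld1 - s * ld2)       # covariance-difference part
    return term1 + term2

m1, m2 = np.zeros(2), np.array([2.0, 0.0])
I = np.eye(2)
d_equal = chernoff_distance(m1, I, m2, I)      # equal covariances: term2 vanishes
```

With equal identity covariances only the Mahalanobis-like mean term survives (here 1/8 of the squared mean distance), while equal means with unequal covariances still give a positive distance, which is exactly the heteroscedastic information Fisher LDA discards.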

20.
Feature extraction is an important preprocessing step in classification, particularly for high-dimensional data with a limited number of training samples. Conventional supervised feature extraction methods, for example linear discriminant analysis (LDA), generalized discriminant analysis, and non-parametric weighted feature extraction, need to calculate scatter matrices: within-class and between-class scatter matrices are used to formulate the criterion of class separability. Because of the limited number of training samples, accurate estimation of these matrices is not possible, so the classification accuracy of these methods falls in small sample size situations. To cope with this problem, a new supervised feature extraction method, feature extraction using attraction points (FEUAP), has recently been proposed, in which no statistical moments are used; it therefore works well with limited training samples. To exploit the complementary strengths of FEUAP and LDA, this article combines them in a dyadic scheme. In the proposed scheme, similar classes are grouped hierarchically by the k-means algorithm so that a tree with several nodes is constructed, and the class of each pixel is determined from this scheme: depending on the node of the tree, FEUAP is used for a limited number of training samples and LDA for a large number. The experimental results demonstrate the better performance of the proposed hybrid method in comparison with other supervised feature extraction methods in a small sample size situation.
