Similar Literature
20 similar documents found.
1.
Kernel discriminant analysis (KDA) is a widely used tool in the feature extraction community. However, for high-dimensional multi-class tasks such as face recognition, traditional KDA algorithms have the limitation that the Fisher criterion is not optimal with respect to classification rate; moreover, they suffer from the small sample size problem. This paper presents a variant of KDA called kernel-based improved discriminant analysis (KIDA), which can effectively deal with both problems. In the proposed framework, the original samples are first projected into a feature space by an implicit nonlinear mapping. After reconstructing the between-class scatter matrix in the feature space with a weighted scheme, the kernel method is used to obtain a modified Fisher criterion directly related to classification error. Finally, a simultaneous diagonalization technique is employed to find lower-dimensional nonlinear features with significant discriminant power. Experiments on a face recognition task show that the proposed method is superior to traditional KDA and LDA.
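As a rough illustration of the two ingredients KIDA combines — a weighted between-class scatter and simultaneous diagonalization — here is a linear (non-kernel) sketch in NumPy. The inverse-distance weighting rule and the regularization constants are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def weighted_fisher_projection(X, y, n_components):
    """Fisher-style projection with a weighted between-class scatter,
    solved by simultaneous diagonalization (linear sketch of the idea)."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    d = X.shape[1]
    # within-class scatter
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c] - means[c]
        Sw += Xc.T @ Xc
    # weighted between-class scatter: emphasize pairs of close classes,
    # which dominate classification error (weighting rule is illustrative)
    Sb = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = means[ci] - means[cj]
            w = 1.0 / (np.linalg.norm(diff) ** 2 + 1e-8)
            Sb += w * np.outer(diff, diff)
    # simultaneous diagonalization: whiten Sw, then diagonalize Sb
    lam, U = np.linalg.eigh(Sw + 1e-6 * np.eye(d))
    W1 = U / np.sqrt(lam)                 # whitening transform for Sw
    lam2, V = np.linalg.eigh(W1.T @ Sb @ W1)
    order = np.argsort(lam2)[::-1]        # keep most discriminative directions
    return W1 @ V[:, order[:n_components]]
```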

2.
何正风  孙亚民 《计算机工程》2012,38(19):175-178
For high-dimensional, small-sample classification problems, two important criteria are proposed for estimating the initial widths of RBF units. Principal component analysis projects the training set into the eigenface space to reduce dimensionality, and Fisher linear discriminants then produce a set of maximally discriminative features, separating training data of different classes as far as possible while keeping samples of the same class close together. Experimental results show that the algorithm performs well in both classification error rate and learning efficiency.
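The PCA-then-Fisher pipeline the abstract describes (project onto the eigenface space first, then extract discriminative features) can be sketched with scikit-learn; the toy data and all dimensions are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# toy high-dimensional, small-sample data: 3 classes, 60-dim, 10 samples each
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(10, 60)) for c in range(3)])
y = np.repeat([0, 1, 2], 10)

pca = PCA(n_components=20).fit(X)     # project to the "eigenface" space first
Z = pca.transform(X)
lda = LinearDiscriminantAnalysis(n_components=2).fit(Z, y)  # then Fisher LDA
F = lda.transform(Z)
print(F.shape)                        # (30, 2)
```

PCA before LDA is what makes the within-class scatter well-conditioned when samples are scarce relative to the dimension.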

3.
To preserve sparsity structure in dimensionality reduction, sparsity preserving projection (SPP) is widely used in many classification settings; compared with other graph-based methods, it offers noise robustness and data adaptivity. However, the sparsity parameter of SPP is fixed for all samples without any adjustment. In this paper, an improved SPP method is proposed that adjusts this parameter adaptively during sparse graph construction: the sparsity parameter of each sample is tuned according to the relationship among the samples with nonzero sparse representation coefficients, which enhances the discriminant information of the graph. With the same aim, similarity information in both the original space and the projection space is used to guide the sparse representation. In addition, a new measurement is introduced to control the influence of each sample's local structure on projection learning, so that more correct discriminant information is preserved in the projection space. Together, these strategies yield a low-dimensional space with high discriminant ability that is more beneficial for classification. Experimental results on three datasets demonstrate that the proposed approach achieves better classification performance than several available state-of-the-art approaches.
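The sparse-graph construction underlying SPP — represent each sample as a sparse combination of the others — can be sketched with a Lasso solver. The per-sample adjustment shown here (halve the sparsity parameter until enough neighbors are selected) is a hypothetical stand-in for the paper's adaptive strategy, included only to illustrate the idea.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_graph(X, alpha0=0.05, min_neighbors=3):
    """Build an SPP-style sparse affinity graph: row i holds the sparse
    representation coefficients of sample i over all other samples."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        alpha = alpha0
        for _ in range(5):
            # represent x_i using the other samples as the dictionary
            lasso = Lasso(alpha=alpha, max_iter=5000).fit(X[others].T, X[i])
            if np.count_nonzero(lasso.coef_) >= min_neighbors:
                break
            alpha *= 0.5   # hypothetical adaptive rule: relax sparsity
        W[i, others] = lasso.coef_
    return W
```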

4.
Linear discriminant regression classification (LDRC) was presented recently to boost the effectiveness of linear regression classification (LRC). LDRC aims to find a subspace in which LRC achieves high discrimination for classification. As a discriminant analysis algorithm, however, LDRC treats every training sample as equally important and ignores their different contributions to learning the discriminative feature subspace. Motivated by the fact that some training samples are more effective than others in learning the low-dimensional feature space, this paper proposes an adaptive linear discriminant regression classification (ALDRC) algorithm that takes these different contributions into account. Specifically, ALDRC uses weights to characterize the contribution of each training sample and employs this weighting information in calculating the between-class and within-class reconstruction errors; it then seeks an optimal projection matrix that maximizes the ratio of the between-class reconstruction error to the within-class reconstruction error. Extensive experiments on the AR, FERET and ORL face databases demonstrate the effectiveness of the proposed method.

5.
Objective: Convolutional neural networks are widely used in image recognition. Because the features learned by conventional CNNs lack sufficient discriminative power, which degrades recognition performance, a loss function incorporating the idea of linear discriminants, LDloss (linear discriminant loss), is proposed for deep feature extraction in image recognition, to improve feature discriminability and thereby recognition performance. Method: A CNN is first built for feature extraction; then, on top of minimizing the classification error for multi-class image problems, the idea of LDA (linear discriminant analysis) is introduced to construct a new loss term that participates in training, minimizing within-class feature distance and maximizing between-class feature distance. Analysis shows that the algorithm yields features more conducive to classification. During training, class means are updated smoothly with a mini-batch moving-average strategy. Results: The algorithm achieves average recognition rates of 99.53% on the MNIST dataset and 94.73% on the CK+ database, an improvement over existing algorithms. Compared with the conventional Softmax loss and Hinge loss, the deep network with LDloss gains 0.2% and 0.3% on MNIST, and 9.21% and 24.28% on CK+, respectively. Conclusion: The proposed discriminative deep feature learning algorithm effectively improves the discriminative ability of deep networks and thus image recognition accuracy, and at test time requires no extra computation compared with Softmax loss.
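The LDA-inspired loss described here — pull each feature toward its class mean, push class means apart, with class means tracked by a moving average across mini-batches — can be sketched in plain NumPy. The momentum value and the between-class trade-off weight are assumed for illustration; in practice this term would live inside a deep learning framework alongside the classification loss.

```python
import numpy as np

def ld_loss(features, labels, running_means, momentum=0.9, beta=0.1):
    """Sketch of an LDA-style loss: within-class compactness minus
    (weighted) between-class separation. running_means: dict class -> mean."""
    classes = np.unique(labels)
    # moving-average update of class means, batch by batch
    for c in classes:
        batch_mean = features[labels == c].mean(axis=0)
        running_means[c] = momentum * running_means[c] + (1 - momentum) * batch_mean
    # within-class term: squared distance of each feature to its class mean
    within = np.mean([np.sum((f - running_means[c]) ** 2)
                      for f, c in zip(features, labels)])
    # between-class term: pairwise squared distances between class means
    between = 0.0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            between += np.sum((running_means[ci] - running_means[cj]) ** 2)
    return within - beta * between   # beta is a hypothetical trade-off weight
```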

6.
To exploit the information in multi-view features for better classification in a semi-supervised setting, suitable edge weights are learned for a graph whose vertices are the input feature vectors by minimizing the local reconstruction error of each vector, and the graph is then used for semi-supervised learning. Applying the manifold structure of the input data captured in this way benefits the accuracy of label prediction in semi-supervised learning. For the multi-view features of the training images, an improved canonical correlation analysis is used to learn more discriminative multi-view features, which are effectively fused and applied to image classification. Experimental results show that the method fully exploits the discriminative information in the multi-view feature representations of the training samples in a semi-supervised setting and performs the discrimination task effectively.

7.
Dimensionality reduction and effective feature selection are crucial for classifying high-dimensional data. For the high-dimensionality and small-sample-size problems in face recognition, an uncorrelated discriminant analysis algorithm with L2,1-norm regularization is proposed, combining feature selection and subspace learning. The algorithm first applies singular value decomposition to the training sample matrix; a series of transformations then converts the originally nonlinear Fisher discriminant criterion into a linear model; finally, an L2,1-norm penalty is added and the problem is solved to obtain a set of optimal discriminant vectors. Training and test samples are projected into this low-dimensional subspace and classified with a nearest-Euclidean-distance classifier. Because of the L2,1-norm penalty, feature selection and subspace learning proceed simultaneously, effectively improving recognition performance. Experiments on the ORL, YaleB and PIE face databases show that the algorithm reduces dimensionality effectively while further improving discriminative ability.
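The L2,1 norm used as the penalty here is the sum of the l2 norms of the rows of the projection matrix; driving whole rows to zero discards the corresponding features, which is how the penalty couples feature selection to subspace learning. A minimal helper, as a sketch of the standard definition:

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of row-wise l2 norms. Zero rows of W correspond
    to features dropped by the learned projection."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()
```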

8.
武妍  杨洋 《计算机应用》2006,26(2):433-435
To obtain an informative feature set, a feature selection method based on discriminant analysis and neural networks is proposed. The network is trained by minimizing an extended cross-entropy error function, which reduces the derivatives of the network's transfer functions and lowers output sensitivity. The method first uses discriminant analysis to obtain an ordered feature queue, then selects features with a regularized neural network, based on the change in classification error on a validation set when a single feature is removed. Comparison with four methods based on different principles shows that networks trained with this algorithm achieve higher classification accuracy.

9.
Given the joint feature-label distribution, increasing the number of features always decreases classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically, for a fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The problem is especially acute when sample sizes are very small and the potential number of features is very large. To obtain a general understanding of the feature-set sizes that give good performance for a particular classification rule, performance must be evaluated based on accurate error estimation, so a model-based setting for optimizing the number of features is needed. This paper treats quadratic discriminant analysis (QDA) in the case of unequal covariance matrices. For two normal class-conditional distributions, the QDA classifier is determined by a discriminant. The standard plug-in rule estimates the discriminant from a feature-label sample by replacing the means and covariance matrices with their respective sample means and sample covariance matrices. The unbiasedness of these estimators assures good estimation for large samples, but not for small samples. Our goal is to find an essentially analytic method to produce an error curve as a function of the number of features, so that the curve can be minimized to determine an optimal number of features. We use a normal approximation to the distribution of the estimated discriminant. Since the mean and variance of the estimated discriminant are exact, they provide insight into how the covariance matrices affect the optimal number of features.
We derive the mean and variance of the estimated discriminant and compare feature-size optimization using the normal approximation with optimization obtained by simulating the true distribution of the estimated discriminant. Optimization via the normal approximation provides huge computational savings compared with simulation of the true distribution. Feature-size optimization via the normal approximation is very accurate when the covariance matrices differ modestly. The optimal number of features based on the normal approximation will exceed the actual optimum when the covariance matrices disagree strongly; however, this difference matters little, because the true misclassification errors at the two feature counts differ only slightly, even for significantly different covariance matrices.
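The plug-in discriminant the paper analyzes can be sketched directly: replace the true means and covariances of the two normal class-conditional densities with their sample estimates and compare the resulting quadratic scores (equal priors are assumed here for simplicity).

```python
import numpy as np

def qda_discriminant(x, X0, X1):
    """Plug-in QDA discriminant for two classes; positive favors class 1.
    Sample means/covariances stand in for the true parameters."""
    scores = []
    for Xc in (X0, X1):
        mu = Xc.mean(axis=0)                 # sample mean
        S = np.cov(Xc, rowvar=False)         # sample covariance
        diff = x - mu
        # log of the Gaussian density up to a shared constant
        scores.append(-0.5 * diff @ np.linalg.solve(S, diff)
                      - 0.5 * np.linalg.slogdet(S)[1])
    return scores[1] - scores[0]
```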

10.
张旭  张向群  赵伟  何岩峰 《计算机工程》2012,38(14):171-172
A two-dimensional nonparametric discriminant analysis algorithm based on the nearest feature line (NFL) is proposed for pattern classification problems such as face recognition. In the subspace learning stage, the algorithm uses the NFL idea to compute the nearest feature-line distances of the training samples and derives a low-dimensional projection space in which classification is performed. Experiments on the ORL standard face database show that the algorithm is more robust than conventional algorithms.
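The feature-line distance at the heart of NFL — the distance from a query point to the (infinite) line through two same-class feature points — follows the standard definition and can be written in a few lines:

```python
import numpy as np

def nfl_distance(x, a, b):
    """Distance from x to the feature line through feature points a and b."""
    ab = b - a
    t = np.dot(x - a, ab) / np.dot(ab, ab)   # projection parameter (may extrapolate)
    p = a + t * ab                           # foot of the perpendicular
    return np.linalg.norm(x - p)
```

The NFL classifier assigns the class whose pair of prototypes yields the smallest such distance.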

11.
To address the low recognition rates caused by pose, illumination, and noise in face recognition, a face recognition method based on multi-task joint discriminative sparse representation is proposed. Local binary features of the face are first extracted, and an overcomplete dictionary learning objective that jointly models classification error and representation error is built over multiple features. A multi-task joint discriminative dictionary learning method then learns the dictionary and the optimal linear classifier parameters together, yielding a dictionary with good representational and discriminative power and a matching classifier, which improves face recognition. Experimental results show that the method achieves better recognition performance than other sparse-representation face recognition methods.

12.
吕佳 《计算机应用》2012,32(3):643-645
In semi-supervised classification, global learning alone struggles to obtain a good decision function over the whole input space, while local learning alone can learn good decision functions within specific local regions. A semi-supervised binary classification algorithm combining global and local regularization is therefore proposed. It combines the advantages of both terms: the global regularizer, built from prior knowledge, smooths the class labels of the samples to compensate for insufficient local learning, while the local regularizer, built from sample information in local neighborhoods, gives each sample's class label the desired properties; together they form the objective function of the semi-supervised binary classification problem. Experiments on standard two-class datasets show that the algorithm outperforms methods based on the Laplacian regularizer, the regularized Laplacian regularizer, and the local-learning regularizer in both average classification accuracy and standard error.
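The flavor of combining a global and a local regularizer can be sketched with a closed-form label-propagation solve: a graph Laplacian built from local neighborhoods plays the local role, and a simple Tikhonov term stands in for the global one. The specific terms and constants here are illustrative assumptions, not the paper's objective.

```python
import numpy as np

def combined_regularization(X, y, labeled, gamma_g=0.01, gamma_l=1.0, k=4):
    """y: +1/-1 for labeled samples, 0 for unlabeled; labeled: boolean mask.
    Solves min_f  sum_labeled (f_i - y_i)^2 + gamma_l f'Lf + gamma_g ||f||^2."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(D2[i])[1:k + 1]          # k nearest neighbours
        W[i, nn] = np.exp(-D2[i, nn])
    W = (W + W.T) / 2
    L = np.diag(W.sum(axis=1)) - W               # local term: graph Laplacian
    C = np.diag(labeled.astype(float))           # fidelity on labeled points
    f = np.linalg.solve(C + gamma_l * L + gamma_g * np.eye(n), C @ y)
    return np.sign(f)
```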

13.
This paper proposes a new feature selection methodology. The methodology is based on the stepwise variable selection procedure, but, instead of using the traditional discriminant metrics such as Wilks' Lambda, it uses an estimation of the misclassification error as the figure of merit to evaluate the introduction of new features. The expected misclassification error rate (MER) is obtained by using the densities of a constructed function of random variables, which is the stochastic representation of the conditional distribution of the quadratic discriminant function estimate. The application of the proposed methodology results in significant savings of computational time in the estimation of classification error over the traditional simulation and cross-validation methods. One of the main advantages of the proposed method is that it provides a direct estimation of the expected misclassification error at the time of feature selection, which provides an immediate assessment of the benefits of introducing an additional feature into an inspection/classification algorithm.

14.
Mixture discriminant analysis (MDA) and subclass discriminant analysis (SDA) are supervised classification approaches. They have an advantage over standard linear discriminant analysis (LDA) in large sample size problems, since both divide the samples in each class into subclasses, which preserves locality, whereas LDA does not. However, since current MDA and SDA algorithms perform subclass division in a single step in the original data space before solving the generalized eigenvalue problem, two problems arise: (1) they ignore relations among classes, since subclass division is performed within each class in isolation; (2) they cannot guarantee good classifier performance in the transformed space, because locality in the original data space may not be preserved after transformation. To address these problems, this paper presents a new approach that performs subclass division by k-means clustering in the projected space, class by class, using iterative steps in an EM-like framework. Experiments are performed on an artificial data set, the UCI machine learning data sets, the CENPARMI handwritten numeral database, the NUST603 handwritten Chinese character database, and a terrain cover database. Extensive experimental results demonstrate the performance advantages of the proposed method.
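The alternation the paper proposes — split each class into subclasses by k-means *in the projected space*, then re-learn the projection treating subclasses as pseudo-classes — can be sketched with scikit-learn. The number of subclasses and iterations are placeholders, and standard LDA stands in for the generalized eigenproblem.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subclass_lda(X, y, n_sub=2, n_iter=3, n_components=2):
    """EM-like alternation: k-means subclass division in the current
    projected space, then LDA over the subclass labels."""
    Z = X.copy()
    sub = None
    for _ in range(n_iter):
        # (1) split each class into subclasses in the current projected space
        sub = np.empty(len(y), dtype=int)
        offset = 0
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            km = KMeans(n_clusters=n_sub, n_init=5, random_state=0).fit(Z[idx])
            sub[idx] = km.labels_ + offset
            offset += n_sub
        # (2) re-learn the projection with subclasses as pseudo-classes
        lda = LinearDiscriminantAnalysis(n_components=n_components).fit(X, sub)
        Z = lda.transform(X)
    return Z, sub
```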

15.
An orthogonal feature extraction algorithm with within-class preservation for small-sample-size problems
In face recognition, feature extraction algorithms with orthogonality are an effective class of methods, but they are constrained by the small-sample-size problem. Building on orthogonal discriminant locality preserving projections, this paper proposes an orthogonal feature extraction algorithm with within-class preservation suited to small-sample-size problems. Using the spatial structure among samples of the same class, the algorithm redefines the within-class and between-class scatter matrices and derives a new objective function. Since the new objective still suffers from the small-sample-size problem in face recognition, the original data space is reduced to a low-dimensional subspace so that the total scatter matrix is no longer singular, and it is proved theoretically that solving for the set of discriminant vectors in this subspace is equivalent to solving in the original space. Experimental results on face databases show the effectiveness of the algorithm.

16.
Linear discriminant analysis (LDA) is one of the most classical methods for subspace learning and supervised discriminative feature extraction. Inspired by manifold learning, many LDA-based improvements have been proposed in recent years. Despite different starting points, these algorithms essentially measure the spatial scatter of samples with Euclidean distance, whose nonlinearity causes two problems: (1) sensitivity to noise and outlier samples; (2) overemphasis on samples with large local scatter in manifold or multimodal datasets, which destroys the intrinsic structure of the data during feature extraction. To solve these problems, a new dimensionality reduction method based on nonparametric discriminant analysis (NDA), called dynamically weighted nonparametric discriminant analysis (DWNDA), is proposed. DWNDA computes between-class and within-class scatter with dynamically weighted distances, preserving the intrinsic structure of multimodal datasets while effectively exploiting the discriminative information between pairs of boundary samples. DWNDA therefore shows strong robustness to noise and outliers in noise experiments, and achieves excellent results in experiments on face and handwriting databases.

17.
This paper presents a novel pattern classification approach: a kernel and Bayesian discriminant based classifier that exploits the distribution characteristics of the samples in each class. The classification criterion is a kernel combined with a Bayesian discriminant in the subspace spanned by the eigenvectors associated with the smaller eigenvalues of each class. To avoid problems with the matrix inverse, the smaller eigenvalues are replaced by a small threshold chosen by minimizing the training error on a given database. Application of the proposed classifier to handwritten numeral recognition demonstrates that it is promising for practical applications.

18.
AdaBoost is a classical ensemble learning framework that linearly combines several weak classifiers into a strong learner whose accuracy far exceeds that of any single weak classifier, with good generalization and training error. However, AdaBoost cannot prune the weak classifiers of its output model, so it lacks interpretability. This paper introduces a genetic algorithm into the AdaBoost model and proposes an ensemble evolutionary classification algorithm that limits the size of the output model (Ensemble evolve classification algorithm for controlling the size of the final model, ECSM). Genetic operators and a fitness function enforce population diversity within the AdaBoost iteration framework and retain the better classifiers. Experimental results show that, compared with classical AdaBoost, the proposed algorithm greatly reduces the number of classifiers while essentially preserving classification accuracy.
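The AdaBoost framework that ECSM builds on — reweight the training samples each round and linearly combine weak classifiers by their accuracy — can be sketched with decision stumps as the weak learners. This shows the baseline the paper prunes with a genetic algorithm, not the ECSM pruning itself.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Classic AdaBoost with threshold stumps; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    model = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)                    # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)    # classifier weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # upweight mistakes
        w /= w.sum()
        model.append((alpha, j, thr, sign))
    return model

def adaboost_predict(model, X):
    s = sum(a * sg * np.where(X[:, j] > t, 1, -1) for a, j, t, sg in model)
    return np.sign(s)
```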

19.
A zero-shot image classification method based on subspace learning with fused reconstruction
Image classification is an important subfield of computer vision. Traditional image classification can only classify samples of categories that appeared in the training set; in real applications, however, new categories keep emerging, requiring large amounts of newly labeled data and classifier retraining. Unlike traditional methods, zero-shot image classification can recognize samples of categories never seen during training, and has attracted wide attention in recent years. It builds relations between seen and unseen classes through a semantic space, transfers knowledge, and thereby classifies samples of classes unseen in training. Existing zero-shot methods mostly learn a mapping from the visual space to the semantic space using the visual and semantic features of seen classes, apply the learned mapping to project unseen-class visual features into the semantic space, and classify there with a nearest-neighbor rule. However, the category differences and distribution differences between seen and unseen classes easily cause the domain shift problem, and directly learning the visual-to-semantic mapping loses information. To address information loss and domain shift during knowledge transfer in zero-shot classification, this paper proposes a zero-shot method based on subspace learning and reconstruction. In training, the known information about unseen classes is fully exploited to reduce domain shift: the relations between seen and unseen classes in the semantic space are first transferred to the visual space to learn visual feature prototypes of the unseen classes. Then, from the visual space containing the visual prototypes of all classes (seen and unseen) and the semantic space containing their semantic prototypes, a latent class-prototype feature space is learned in which visual and semantic features are aligned, so that each class's representation in the latent subspace carries both the discriminative information of the visual space and the class-relation information of the semantic space; reconstruction constraints during subspace learning reduce information loss and further alleviate domain shift. At recognition time, unseen-class sample images are classified by the nearest-neighbor algorithm in the different spaces. The main contributions are: first, transferring inter-class relations from the semantic space to learn class prototypes of unseen classes in the visual space, so that unseen-class information is exploited during training and domain shift is alleviated to some extent; second, learning a shared latent subspace that contains both the rich discriminative information of the visual space and the inter-class relation information of the semantic space, with reconstruction during subspace learning alleviating information loss in knowledge transfer. Comparative experiments on four public zero-shot classification datasets show that the proposed method achieves high average classification accuracy, demonstrating its effectiveness.
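The baseline zero-shot pipeline the abstract critiques — learn a linear visual-to-semantic map on seen classes, then classify unseen samples by the nearest unseen-class prototype in semantic space — can be sketched with ridge regression. This illustrates the setup the paper improves on, not the paper's subspace method; the ridge parameter and all names are illustrative.

```python
import numpy as np

def zsl_predict(Xs, Ss, Xu, proto_unseen, lam=1.0):
    """Xs: seen visual features, Ss: their semantic features,
    Xu: unseen visual features, proto_unseen: unseen semantic prototypes.
    Returns the index of the nearest unseen prototype for each row of Xu."""
    d = Xs.shape[1]
    # ridge regression for the linear map V: visual -> semantic
    V = np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ Ss)
    Su_hat = Xu @ V                                   # project into semantic space
    D = ((Su_hat[:, None, :] - proto_unseen[None, :, :]) ** 2).sum(-1)
    return D.argmin(axis=1)                           # nearest prototype
```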

20.
A new manifold learning method for gene expression data classification, called improved semi-supervised local Fisher discriminant analysis (iSELF), is proposed. Motivated by the fact that being semi-supervised and parameter-free are two desirable and promising characteristics for dimension reduction, a new difference-based optimization objective function incorporating unlabeled samples is designed. The proposed method preserves the global structure of unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic, globally optimal solution that can be computed from eigendecompositions. Experiments on synthetic data and the SRBCT, DLBCL and brain tumor gene expression datasets are performed to evaluate the proposed method; the experimental results and comparisons demonstrate its effectiveness.
