Similar Documents
20 similar documents found (search time: 93 ms)
1.
张宏达, 王晓丹, 徐海龙. 《控制与决策》 (Control and Decision), 2009, 24(11): 1723-1728

To address the problems that the decision directed acyclic graph support vector machine (DDAGSVM) requires training a large number of SVMs and suffers from error accumulation, a multi-class classification algorithm combining linear discriminant analysis (LDA) with SVM is proposed. First, an optimized LDA classification threshold is derived from the characteristics of high-dimensional samples projected into a low-dimensional space. Then, the classification error of the optimized LDA on each binary problem is taken as the degree of linear separability between the two classes; binary problems with low linear separability are handled by a nonlinear SVM, whose classification error serves as the separability of the corresponding binary problem. Finally, the separability measure is used as the decision criterion of the hybrid DDAG classifier. Experiments show that, compared with DDAGSVM, the proposed algorithm achieves higher training and classification speed while preserving generalization accuracy.
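The hybrid rule this abstract describes, using LDA's error on a binary subproblem as a linear-separability score and reserving a nonlinear SVM for the hard pairs, can be sketched roughly as follows. This is a minimal scikit-learn illustration, not the authors' exact DDAG construction; the helper name `choose_pair_classifier` and the 0.05 error threshold are assumptions for the demo.

```python
# Illustrative sketch: per binary class pair, fit LDA first; if its
# training error (a cheap separability estimate) is low, keep the
# linear model, otherwise fall back to a nonlinear RBF SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def choose_pair_classifier(X, y, error_threshold=0.05):
    """Return (fitted classifier, LDA training error) for one binary pair."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    lda_error = 1.0 - lda.score(X, y)      # linear-separability estimate
    if lda_error <= error_threshold:
        return lda, lda_error              # linearly separable enough
    svm = SVC(kernel="rbf").fit(X, y)      # nonlinear fallback
    return svm, lda_error

# Two well-separated Gaussian blobs: LDA should suffice here.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf, err = choose_pair_classifier(X, y)
print(type(clf).__name__, round(err, 3))
```

In a full DDAG one such choice would be made for each of the k(k-1)/2 class pairs, with the separability scores ordering the graph's decisions.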


2.
To accelerate the final convergence of the parallel descent (CD) method applied to linear support vector machines (SVM), the Rosenbrock algorithm (R) is applied to the linear SVM. In the inner loop, R updates one component of w by solving a single-variable subproblem while keeping the other components fixed; in the outer loop, new search directions are constructed by the Gram-Schmidt process. Experimental results show that, compared with CD, R converges faster in the final stage and reaches higher test accuracy sooner in classification.
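The inner-loop idea above, updating a single component of w via a one-dimensional subproblem while the rest stay fixed, can be sketched as follows. This is an illustrative coordinate-wise scheme on a squared-hinge linear-SVM objective, not the paper's exact Rosenbrock/Gram-Schmidt procedure; the data and the value C=1.0 are made up for the demo.

```python
# Sketch: exact 1-D line searches along coordinate directions on the
# L2-regularized squared-hinge primal objective of a linear SVM.
import numpy as np
from scipy.optimize import minimize_scalar

def primal(w, X, y, C=1.0):
    """L2-regularized squared-hinge linear-SVM objective."""
    margins = np.maximum(0.0, 1.0 - y * (X @ w))
    return 0.5 * w @ w + C * np.sum(margins ** 2)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.5, (30, 3)), rng.normal(1, 0.5, (30, 3))])
y = np.repeat([-1.0, 1.0], 30)

w = np.zeros(3)
before = primal(w, X, y)
for _ in range(5):                        # a few outer sweeps
    for j in range(3):                    # inner loop: one component at a time
        e = np.zeros(3)
        e[j] = 1.0
        t = minimize_scalar(lambda s: primal(w + s * e, X, y)).x
        w = w + t * e
after = primal(w, X, y)
print(before, "->", after)
```

The Rosenbrock variant replaces the fixed coordinate axes with directions re-orthogonalized by Gram-Schmidt after each sweep, which is what speeds up the final convergence.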

3.
Maximum-margin minimum-volume hypersphere support vector machine   Cited 9 times (1 self-citation, 8 by others)
Combining the support vector machine (SVM) idea of maximizing the between-class margin with the support vector data description (SVDD) idea of minimizing the within-class description volume, a new learning machine model, the maximum-margin minimum-volume hypersphere support vector machine (MMHSVM), is proposed. The model constructs two concentric hyperspheres of different radii, mapping positive samples inside the smaller sphere and negative samples outside the larger one. The objective function maximizes the margin between the two hyperspheres, thereby maximizing the between-class margin and minimizing the within-class volume of each class, which improves the model's classification ability. Theoretical analysis and experimental results show that the algorithm is effective.

4.
Solving relatively large mixed integer linear programming problems on a personal computer   Cited 3 times (0 self-citations, 3 by others)
胡清淮 (武汉化工学院), 魏一鸣 (北京科技大学)

5.
This paper describes the principle and implementation of a multimedia player built with Visual FoxPro's FOXTOOLS application programming interface, which calls the Windows DLL libraries. The player can control and play waveform audio (.WAV), digital music (.MID), video (.AVI), animation (.FLA), compressed audio/video (.MPG), and V-CD video (.DAT) files.

6.
A successive quadratic programming algorithm for mixed signomial geometric programming   Cited 1 time (0 self-citations, 1 by others)
张希, 张可村 (西安交通大学科学计算与应用软件系)

7.
The logit regression model with errors-in-variables and its computation   Cited 3 times (0 self-citations, 3 by others)
吕纯濂, 陈舜华 (南京气象学院), H. Kuchenhoff (Institute of Statistics and Scientific Theory, University of Munich)

8.
刘叶青, 刘三阳, 谷明涛. 《控制与决策》 (Control and Decision), 2009, 24(12): 1895-1898

To accelerate the final convergence of the parallel descent (CD) method applied to linear support vector machines (SVM), the Rosenbrock algorithm (R) is applied to the linear SVM. In the inner loop, R updates one component of w by solving a single-variable subproblem while keeping the other components fixed; in the outer loop, new search directions are constructed by the Gram-Schmidt process. Experimental results show that, compared with CD, R converges faster in the final stage and reaches higher test accuracy sooner in classification.


9.

Combining the support vector machine (SVM) idea of maximizing the between-class margin with the support vector data description (SVDD) idea of minimizing the within-class description volume, a new learning machine model, the maximum-margin minimum-volume hypersphere support vector machine (MMHSVM), is proposed. The model constructs two concentric hyperspheres of different radii, mapping positive samples inside the smaller sphere and negative samples outside the larger one. The objective function maximizes the margin between the two hyperspheres, thereby maximizing the between-class margin and minimizing the within-class volume of each class, which improves the model's classification ability. Theoretical analysis and experimental results show that the algorithm is effective.


10.
Markov parameter estimation for linear continuous regressive models via Laguerre polynomial approximation   Cited 4 times (0 self-citations, 4 by others)
赵明旺 (武汉钢铁学院自动化系)

11.
Wavelet decomposition is applied to the image to extract the low-frequency subband, and an optimized linear discriminant analysis (LDA) algorithm is used to find the optimal projection subspace, onto which face features are mapped for classification and recognition. The method avoids the traditional LDA requirement that the within-class scatter matrix be nonsingular and resolves the edge-class overlap problem, giving it a wider range of application. Experiments show that the method outperforms both traditional LDA and principal component analysis (PCA).

12.
Linear discriminant analysis (LDA) is one of the most popular methods of classification. For high-dimensional microarray data classification, due to the small number of samples and large number of features, classical LDA has sub-optimal performance owing to the singularity and instability of the within-group covariance matrix. Two modified LDA approaches (MLDA and NLDA) were applied to microarray classification, and their performance criteria were compared with other popular classification algorithms across a range of feature set sizes (number of genes) using both simulated and real datasets. The results showed that the overall performance of the two modified LDA approaches was as competitive as support vector machines and other regularized LDA approaches, and better than diagonal linear discriminant analysis, k-nearest neighbor, and classical LDA. It was concluded that the modified LDA approaches can be used as effective classification tools in limited-sample-size, high-dimensional microarray classification problems.
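One standard way to realize the "modified LDA" idea for small-sample, high-dimensional data, regularizing the singular within-group covariance, is shrinkage LDA. The sketch below uses scikit-learn's shrinkage estimator as a stand-in; the paper's MLDA/NLDA variants differ in their details, and the simulated "microarray" data here is made up for the demo.

```python
# Sketch: p >> n classification where classical LDA would fail because
# the within-group covariance is singular; shrinkage keeps it invertible.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n, p = 40, 500                      # far fewer samples than features
X = rng.normal(size=(n, p))
y = np.array([0, 1] * (n // 2))
X[y == 1, :10] += 2.0               # a few informative "genes"

# solver='lsqr' with shrinkage='auto' (Ledoit-Wolf) avoids inverting
# the singular sample covariance matrix directly.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print(lda.score(X, y))
```

With `solver="svd"` (the default) and no shrinkage, the same fit relies on a pseudo-inverse and tends to be unstable in this regime, which is exactly the failure mode the modified approaches target.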

13.
Reduced-rank linear discriminant analysis (RRLDA), reduced-rank quadratic discriminant analysis (RRQDA), and principal component analysis followed by linear discriminant analysis (PCA+LDA) were applied to the data and tested on the vowel test dataset. Misclassification-rate curves were plotted for all three models, together with the optimal decision boundaries of RRLDA and PCA+LDA after reduction to two dimensions. The experimental results show that RRLDA outperforms PC...

14.
Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we also propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier if the data is known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas including handwriting recognition, medical diagnosis and remote sensing, and then compare our algorithm against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires as much as 60 times lower training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires as much as 150 times lower training time to achieve an equivalent classification accuracy on some of the datasets. Thus, our algorithms can provide a cheap and reliable option for classification in many expert systems.

15.
The purpose of conventional linear discriminant analysis (LDA) is to find an orientation which projects high-dimensional feature vectors of different classes to a more manageable low-dimensional space in the most discriminative way for classification. The LDA technique utilizes an eigenvalue decomposition (EVD) method to find such an orientation. This computation is usually adversely affected by the small sample size problem. In this paper we present a new direct LDA method (called gradient LDA) for computing the orientation, especially for the small sample size problem. A gradient descent based method is used for this purpose. It also avoids discarding the null spaces of the within-class and between-class scatter matrices, which may contain discriminative information useful for classification.

16.
LDA/QR, a linear discriminant analysis (LDA) based dimension reduction algorithm, is presented. It achieves efficiency by introducing a QR decomposition on a small-size matrix, while keeping competitive classification accuracy. Its theoretical foundation is also presented.
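A rough reading of the two-stage LDA/QR scheme can be sketched as follows: a cheap QR factorization of the small class-centroid matrix gives a low-dimensional subspace, and ordinary LDA is then solved inside it. This is an illustrative reconstruction from the abstract, not the authors' exact algorithm, and the synthetic data is made up.

```python
# Sketch of a two-stage LDA/QR-style reduction: QR on the k x p
# centroid matrix (stage 1), then classical LDA in the k-dim subspace.
import numpy as np

def lda_qr(X, y):
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])  # k x p
    # Stage 1: QR on the tall, thin transpose (p x k) is cheap.
    Q, _ = np.linalg.qr(centroids.T)           # p x k orthonormal basis
    Z = X @ Q                                  # project to k dimensions
    # Stage 2: ordinary LDA scatter matrices in the reduced space.
    mean = Z.mean(axis=0)
    Sw = sum((Z[y == c] - Z[y == c].mean(0)).T @ (Z[y == c] - Z[y == c].mean(0))
             for c in classes)
    Sb = sum(len(Z[y == c]) * np.outer(Z[y == c].mean(0) - mean,
                                       Z[y == c].mean(0) - mean)
             for c in classes)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    return Q @ evecs[:, order].real            # p x (k-1) projection

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 1.0, (30, 50)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 30)
G = lda_qr(X, y)
print(G.shape)   # (50, 2)
```

The point of the construction is that the expensive eigenproblem is solved on a k x k matrix (k = number of classes) instead of p x p, which is where the efficiency claim comes from.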

17.
Feature extraction is an important component of a pattern recognition system. It performs two tasks: transforming the input parameter vector into a feature vector and/or reducing its dimensionality. A well-defined feature extraction algorithm makes the classification process more effective and efficient. Two popular methods for feature extraction are linear discriminant analysis (LDA) and principal component analysis (PCA). In this paper, the minimum classification error (MCE) training algorithm (which was originally proposed for optimizing classifiers) is investigated for feature extraction. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm. The LDA, PCA, MCE, and GMCE algorithms extract features through linear transformation. The support vector machine (SVM) is a recently developed pattern classification algorithm which uses non-linear kernel functions to achieve non-linear decision boundaries in the parametric space. In this paper, SVM is also investigated and compared with the linear feature extraction algorithms.

18.
Fractional-step dimensionality reduction   Cited 10 times (0 self-citations, 10 by others)
Linear projections for dimensionality reduction, computed using linear discriminant analysis (LDA), are commonly based on optimization of certain separability criteria in the output space. The resulting optimization problem is linear, but these separability criteria are not directly related to the classification accuracy in the output space. Consequently, a trial-and-error procedure has to be invoked, experimenting with different separability criteria that differ in the weighting function used and selecting the one that performed best on the training set. Often, even the best weighting function among the trial choices results in poor classification of data in the subspace. In this short paper, we introduce the concept of fractional dimensionality and develop an incremental procedure, called fractional-step LDA (F-LDA), to reduce the dimensionality in fractional steps. The F-LDA algorithm is more robust to the selection of the weighting function, and for any given weighting function it finds a subspace in which the classification accuracy is higher than that obtained using LDA.

19.
Application of the PCA-LDA algorithm to gender classification   Cited 4 times (0 self-citations, 4 by others)
何国辉, 甘俊英. 《计算机工程》 (Computer Engineering), 2006, 32(19): 208-210
Combining the characteristics of principal component analysis (PCA) and linear discriminant analysis (LDA), a PCA-LDA algorithm for gender classification is proposed. The algorithm obtains the feature subspace of the training samples via PCA and, on that basis, computes the LDA feature subspace. The two subspaces are then fused to form the PCA-LDA feature space. Training and test samples are projected onto the fused feature space to obtain recognition features, and gender classification is completed with the nearest-neighbor rule. Experimental results on the ORL (Olivetti Research Laboratory) face database show that the PCA-LDA algorithm has better recognition performance than PCA and is an effective method for gender classification.
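A hedged sketch of the PCA-then-LDA pipeline with nearest-neighbor classification that the abstract describes, using scikit-learn on synthetic stand-in data; the ORL face images and the paper's exact subspace-fusion step are not reproduced here.

```python
# Sketch: PCA first conditions the data so LDA's scatter matrices are
# well behaved, LDA extracts discriminative features, and a 1-NN rule
# classifies in the resulting feature space.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (40, 100)), rng.normal(1, 1, (40, 100))])
y = np.repeat([0, 1], 40)

# Nearest-neighbour classification in the PCA+LDA feature space.
model = make_pipeline(PCA(n_components=20),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=1))
model.fit(X, y)
print(model.score(X, y))
```

Running PCA before LDA is the standard "Fisherfaces" recipe: it removes the null directions that would otherwise make the within-class scatter matrix singular on image-sized feature vectors.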

20.
Self-organizing algorithms for generalized eigen-decomposition   Cited 1 time (0 self-citations, 1 by others)
We discuss a new approach to self-organization that leads to novel adaptive algorithms for generalized eigen-decomposition and its variants for a single-layer linear feedforward neural network. First, we derive two novel iterative algorithms for linear discriminant analysis (LDA) and generalized eigen-decomposition by utilizing a constrained least-mean-squared classification error cost function and the framework of a two-layer linear heteroassociative network performing a one-of-m classification. By using the concept of deflation, we are able to find sequential versions of these algorithms which extract the LDA components and generalized eigenvectors in decreasing order of significance. Next, two new adaptive algorithms are described to compute the principal generalized eigenvectors of two matrices (as well as LDA) from two sequences of random matrices. We give a rigorous convergence analysis of our adaptive algorithms by using stochastic approximation theory, and prove that our algorithms converge with probability one.
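For reference, the quantity such adaptive algorithms converge to is the ordinary batch solution of the generalized eigenproblem Sb v = λ Sw v, whose leading eigenvectors are the LDA directions. Below is a direct SciPy solve on synthetic two-class data, as a non-adaptive baseline (the data is made up for the demo).

```python
# Batch generalized eigen-decomposition of (between-class, within-class)
# scatter matrices: the target of the adaptive LDA algorithms above.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.repeat([0, 1], 50)

mean = X.mean(axis=0)
Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
         for c in (0, 1))
Sb = sum(50 * np.outer(X[y == c].mean(0) - mean, X[y == c].mean(0) - mean)
         for c in (0, 1))

# Solve Sb v = lambda Sw v; eigh returns eigenvalues in ascending order.
vals, vecs = eigh(Sb, Sw)
w = vecs[:, -1]                  # leading generalized eigenvector (LDA axis)
print(vals[-1] > vals[-2])       # rank(Sb) = 1 here, so one dominant value
```

An adaptive (stochastic-approximation) method estimates the same w from streaming samples without ever forming Sw and Sb explicitly, which is the point of the paper's single-layer network formulation.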


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)   京ICP备09084417号