Search results: 27 articles in total; abstracts 1-10 are shown below.
1.
Maximum Scatter Difference, Large Margin Linear Projection, and Support Vector Machines   (Cited: 31; self-citations: 2; citations by others: 29)
We first make a necessary modification to the Fisher discriminant criterion and, based on the new criterion, design the maximum scatter difference (MSD) classifier. We then examine the limiting behavior of the MSD classifier as the parameter C tends to infinity, which yields the large margin linear projection (LMLP) classifier. Finally, our analysis shows that the LMLP classifier is in fact a special case of the linear support vector machine under the condition that the pattern samples are linearly separable. Test results on the ORL and NUST603 face databases show that the MSD and LMLP classifiers are comparable to the linear support vector machine and to uncorrelated linear discriminant analysis, and outperform the Foley-Sammon discriminant analysis method.
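To fix notation, here is a sketch of the criteria involved, assuming the standard form of the maximum scatter difference criterion (S_b and S_w denote the between-class and within-class scatter matrices; the paper's exact equations may differ):

```latex
% Classical Fisher criterion (quotient form):
\[ J_F(w) = \frac{w^{\top} S_b\, w}{w^{\top} S_w\, w} \]

% Maximum scatter difference (MSD) criterion: a weighted difference of the
% two scatters instead of a quotient, with trade-off parameter C > 0 and a
% unit-norm constraint on the projection direction:
\[ J_{\mathrm{MSD}}(w) = w^{\top} S_b\, w - C\, w^{\top} S_w\, w,
   \qquad \lVert w \rVert = 1 \]

% As C \to \infty the within-class term dominates, forcing the optimum
% toward directions with w^{\top} S_w\, w = 0; per the abstract, this limit
% yields the large margin linear projection classifier.
```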
2.
An Adaptive Classification Algorithm Based on the Maximum Scatter Difference Discriminant Criterion   (Cited: 6; self-citations: 0; citations by others: 6)
We first prove that, when the within-class scatter matrix is nonsingular, the optimal discriminant direction of the maximum scatter difference (MSD) criterion at a particular parameter value c0 coincides with the Fisher optimal discriminant direction. We then plot the recognition rate of the MSD classifier as a function of the parameter C. The curve is typically pulse-shaped: the recognition rate increases gradually with C and reaches its maximum when C grows to c0. Moreover, previous work has shown that when the within-class scatter matrix is singular, the MSD criterion gradually approaches the large margin linear projection criterion, and that as C keeps increasing, the recognition rate of the MSD classifier increases monotonically and finally settles at that of the large margin linear projection classifier. We therefore propose an adaptive classification algorithm based on the MSD discriminant criterion, which automatically selects an appropriate parameter C according to a property of the training samples, namely whether the within-class scatter matrix is singular. Test results on six datasets from the UCI machine learning repository and on the AR face image database show that the adaptive MSD classification algorithm has good classification performance.
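A minimal sketch of the adaptive rule described above, under standard scatter-matrix definitions; the function names, the rank test, and the constants c0 and c_large are illustrative stand-ins, not the paper's exact procedure:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter of samples X (n x d)."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def msd_direction(Sw, Sb, C):
    """Maximize w^T (Sb - C*Sw) w over unit vectors: the top eigenvector."""
    vals, vecs = np.linalg.eigh(Sb - C * Sw)
    return vecs[:, -1]

def adaptive_msd_direction(X, y, c0=1.0, c_large=1e6):
    """Pick C by the singularity of Sw, as the abstract suggests:
    nonsingular Sw -> a finite C (Fisher-equivalent regime);
    singular Sw    -> a very large C (large margin projection regime)."""
    Sw, Sb = scatter_matrices(X, y)
    nonsingular = np.linalg.matrix_rank(Sw) == Sw.shape[0]
    C = c0 if nonsingular else c_large
    return msd_direction(Sw, Sb, C), C
```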
3.
Structured large margin machines: sensitive to data distributions   总被引:4,自引:0,他引:4  
This paper proposes a new large margin classifier, the structured large margin machine (SLMM), that is sensitive to the structure of the data distribution. The SLMM approach incorporates the merits of "structured" learning models, such as radial basis function networks and Gaussian mixture models, with the advantages of "unstructured" large margin learning schemes, such as support vector machines and maxi-min margin machines. We derive the SLMM model from the concepts of "structured degree" and "homospace", based on an analysis of existing structured and unstructured learning models. Then, by using Ward's agglomerative hierarchical clustering on input data (or data mappings in the kernel space) to extract the underlying data structure, we formulate SLMM training as a sequential second-order cone programming problem. Many promising features of the SLMM approach are illustrated, including its accuracy, scalability, extensibility, and noise tolerance. We also demonstrate the theoretical importance of the SLMM model by showing that it generalizes existing approaches, such as SVMs and M4s, provides novel insight into learning models, and lays a foundation for conceiving other "structured" classifiers. Editor: Dale Schuurmans. This work was supported by the Hong Kong Research Grant Council under Grants G-T891 and B-Q519.
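Only the structure-extraction step lends itself to a compact sketch. Assuming per-cluster means and covariances are what the downstream cone program consumes (an assumption, since the abstract does not spell this out), it might look like:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, ward
from scipy.spatial.distance import pdist

def cluster_structure(X, n_clusters):
    """Ward agglomerative clustering, then per-cluster mean/covariance.
    A sketch of the 'structure extraction' step only; SLMM training itself
    is a sequential second-order cone program and is not shown here."""
    Z = ward(pdist(X))  # Ward linkage on pairwise Euclidean distances
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    stats = []
    for k in np.unique(labels):
        Xk = X[labels == k]
        stats.append((Xk.mean(axis=0), np.cov(Xk, rowvar=False)))
    return labels, stats
```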
4.
Soft Margins for AdaBoost   总被引:3,自引:0,他引:3  
Recently, ensemble methods like AdaBoost have been applied successfully to many problems, while seemingly defying the problem of overfitting. AdaBoost rarely overfits in the low-noise regime; however, we show that it clearly does so for higher noise levels. Central to the understanding of this fact is the margin distribution. AdaBoost can be viewed as a constrained gradient descent in an error function with respect to the margin. We find that AdaBoost asymptotically achieves a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns that are, interestingly, very similar to support vectors. A hard margin is clearly a sub-optimal strategy in the noisy case, and regularization, in our case a mistrust in the data, must be introduced into the algorithm to alleviate the distortions that single difficult patterns (e.g. outliers) can cause to the margin distribution. We propose several regularization methods and generalizations of the original AdaBoost algorithm to achieve a soft margin. In particular we suggest (1) regularized AdaBoost_Reg, where the gradient descent is done directly with respect to the soft margin, and (2) regularized linear and quadratic programming (LP/QP-) AdaBoost, where the soft margin is attained by introducing slack variables. Extensive simulations demonstrate that the proposed regularized AdaBoost-type algorithms are useful and yield competitive results for noisy data.
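The LP variant with slack variables can be sketched in standard notation (hypothesis weights normalized to sum to one; this follows the usual LP soft-margin formulation and may differ in detail from the paper's):

```latex
% Margin of training example (x_i, y_i) under the combined hypothesis
% f(x) = \sum_t \alpha_t h_t(x), with \sum_t \alpha_t = 1, \alpha_t \ge 0:
\[ \rho_i = y_i \sum_t \alpha_t h_t(x_i) \]

% LP-AdaBoost with a soft margin: slack variables \xi_i relax the hard
% margin constraint, traded off by a constant C:
\[ \max_{\alpha,\,\xi,\,\rho} \; \rho - C \sum_i \xi_i
   \quad \text{s.t.} \quad
   y_i \sum_t \alpha_t h_t(x_i) \ge \rho - \xi_i, \quad
   \xi_i \ge 0, \;\; \alpha_t \ge 0, \;\; \sum_t \alpha_t = 1 \]
```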
5.
Large margin vs. large volume in transductive learning   总被引:2,自引:0,他引:2  
We consider a large volume principle for transductive learning that prioritizes the transductive equivalence classes according to the volume they occupy in hypothesis space. We approximate volume maximization using a geometric interpretation of the hypothesis space. The resulting algorithm is defined via a non-convex optimization problem that can still be solved exactly and efficiently. We provide a bound on the test error of the algorithm and compare it to transductive SVM (TSVM) using 31 datasets.
6.
Classification Accuracy Based on Observed Margin   总被引:1,自引:0,他引:1  
J. Shawe-Taylor. Algorithmica, 1998, 22(1-2): 157-172.
Following recent results [10] showing the importance of the fat-shattering dimension in explaining the beneficial effect of a large margin on generalization performance, the current paper investigates how the margin on a test example can be used to give greater certainty of correct classification in the distribution-independent model. Hence, generalization analysis is possible at three distinct phases: a priori, using a standard PAC analysis; after training, based on properties of the chosen hypothesis [10]; and finally, in this paper, at testing, based on properties of the test example. The results also show that even if the classifier does not classify all of the training examples correctly, the fact that a new example has a larger margin than that of the misclassified training examples can be used to give very good estimates of the generalization performance, in terms of the fat-shattering dimension measured at a scale proportional to the excess margin. The estimate relies on a sufficiently large number of the correctly classified training examples having a margin roughly equal to that used to estimate generalization, indicating that the corresponding output values need to be "well sampled". Received January 31, 1997; revised June 9, 1997, and July 18, 1997.
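In standard notation, the quantity the analysis conditions on is simply the observed functional margin; a sketch (the bound itself is omitted):

```latex
% For a real-valued classifier f thresholded at zero, the margin observed
% on an example (x, y), with y \in \{-1, +1\}:
\[ m(x, y) = y\, f(x) \]

% The paper's estimates invoke the fat-shattering dimension
% \mathrm{fat}_{\mathcal{F}}(\gamma) at a scale \gamma proportional to the
% excess of m(x, y) over the margins of the misclassified training examples.
```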
7.
Fisher Large Margin Linear Classifier   (Cited: 1; self-citations: 1; citations by others: 0)
As a well-known feature extraction method, Fisher linear discriminant analysis selects as the optimal projection direction the vector that maximizes the Fisher criterion function (called the optimal discriminant vector), so that after the pattern samples in the high-dimensional input space are projected onto this vector, the between-class scatter is maximized while the within-class scatter is minimized. A large margin linear classifier, by contrast, seeks an optimal projection vector (the normal vector of the optimal separating hyperplane) that maximizes the classification margin between the two projected classes. To obtain better recognition performance, we combine the strengths of Fisher linear discriminant analysis and large margin classifiers and propose a new linear projection classification algorithm, the Fisher large margin linear classifier. Its main idea is to find an optimal projection vector w_best (the normal vector of the optimal hyperplane) such that, after the sample patterns in the high-dimensional input space are projected onto w_best, the between-class margin is maximized while the within-class scatter is kept as small as possible. We also discuss, from a theoretical standpoint, its connections with other linear classifiers. Experimental results on the ORL and FERET face databases show that the recognition rate of this linear projection classification algorithm is higher than that of the other classifiers.
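A tiny numeric sketch of the two projected quantities this classifier trades off, under standard definitions; the routine below only measures them for a given w and is not the paper's training procedure:

```python
import numpy as np

def projected_quantities(X1, X2, w):
    """Project two classes onto w and report (a) the gap between the
    closest projected points of the two classes (the margin the classifier
    maximizes) and (b) the pooled within-class variance it keeps small."""
    w = w / np.linalg.norm(w)
    p1, p2 = X1 @ w, X2 @ w
    if p1.mean() > p2.mean():      # orient so class 1 projects lower
        p1, p2 = p2, p1
    margin = p2.min() - p1.max()   # separation gap after projection
    within = p1.var() + p2.var()   # within-class spread after projection
    return margin, within
```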
8.
Support vector machine (SVM) theory was originally developed on the basis of a linearly separable binary classification problem, and other approaches to this problem have been introduced later. In this paper it is demonstrated that all these approaches admit the same dual problem formulation in the linearly separable case and that all the solutions are equivalent. For the non-linearly separable case, all the approaches can also be formulated as a single dual optimization problem; however, their solutions are not equivalent. Discussions and remarks in the article point to an in-depth comparison between SVM formulations and their associated parameters.
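For reference, the common dual in the linearly separable case is the standard hard margin SVM dual:

```latex
% Hard margin SVM dual for training data (x_i, y_i), y_i \in \{-1, +1\}:
\[ \max_{\alpha} \; \sum_i \alpha_i
   - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, x_i^{\top} x_j
   \quad \text{s.t.} \quad
   \alpha_i \ge 0, \qquad \sum_i \alpha_i y_i = 0 \]

% The separating hyperplane is recovered as w = \sum_i \alpha_i y_i x_i;
% in the non-separable case the formulations diverge, e.g. the soft margin
% variant adds the box constraint 0 \le \alpha_i \le C.
```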
9.
SVM theory was originally developed on the basis of a separable binary classification problem, and other approaches have been introduced later. In this paper, we demonstrate that all these approaches admit the same dual problem formulation.
10.
Unlike linear discriminant analysis, the large margin linear projection (LMLP) classifier presented in this paper, which is also rooted in Fisher's linear discriminant, takes full advantage of the singularity of the within-class scatter matrix and classifies the projected points in the one-dimensional space by itself. Theoretical analysis and experimental results both reveal that LMLP is well suited to high-dimensional, small-sample pattern recognition problems such as face recognition.
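A minimal sketch of how one might exploit the singularity of the within-class scatter, as the abstract describes; the null-space route below is a standard device and is assumed here, not taken from the paper:

```python
import numpy as np

def lmlp_direction(Sw, Sb, tol=1e-10):
    """Exploit a singular within-class scatter Sw: restrict to its null
    space (projected within-class scatter is exactly zero there) and pick
    the direction maximizing between-class scatter w^T Sb w."""
    vals, vecs = np.linalg.eigh(Sw)
    N = vecs[:, vals < tol]          # basis of the (approximate) null space
    if N.shape[1] == 0:
        raise ValueError("Sw is nonsingular; the singular-case assumption fails")
    M = N.T @ Sb @ N                 # Sb restricted to the null space of Sw
    v = np.linalg.eigh(M)[1][:, -1]  # top eigenvector of the restricted Sb
    return N @ v                     # map back to the original space
```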