Similar Documents
1.
The Structural Risk Minimization Principle Based on Random Rough Samples
This paper introduces the concepts of annealed entropy, growth function, and VC dimension for random rough samples, and constructs bounds on the rate of uniform convergence of the learning process based on the VC dimension. Building on these bounds, it then presents the structural risk minimization principle based on random rough samples. Finally, it proves that this principle is consistent and derives bounds on the asymptotic rate of convergence.
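For orientation, a sketch of the classical real-sample bound that results of this kind generalize (Vapnik's form for indicator losses, with constants simplified): with probability at least $1-\eta$, simultaneously for all functions in the class,

$$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{H_{\mathrm{ann}}(2l) + \ln(4/\eta)}{l}},$$

where $l$ is the sample size and $H_{\mathrm{ann}}$ the annealed entropy. The paper's contribution is an analogue of such bounds when the observations are random rough samples.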

2.
The Structural Risk Minimization Principle Based on Fuzzy Random Samples
Based on fuzzy random samples, this paper introduces the concepts of entropy, annealed entropy, growth function, and VC dimension, and on that basis constructs bounds on the rate of uniform convergence of the learning process in terms of the VC dimension. It presents the structural risk minimization principle based on fuzzy random samples (the FSSRM principle) and proves properties related to the asymptotic analysis of the convergence rate under the FSSRM principle.
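In its classical statement, the SRM scheme that the FSSRM principle transplants to fuzzy random samples works over a nested sequence of hypothesis subsets of non-decreasing VC dimension,

$$S_1 \subset S_2 \subset \cdots \subset S_n \subset \cdots, \qquad h_1 \le h_2 \le \cdots \le h_n \le \cdots,$$

and picks the subset (and the function within it) that minimizes the guaranteed risk, i.e. the empirical risk plus the VC confidence term of that subset. This is the real-sample template only, not the paper's fuzzy formulation.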

3.
The Structural Risk Minimization Principle Based on Complex Random Samples
Statistical learning theory is currently the best theory for handling small-sample learning problems. However, it is formulated mainly for real-valued random samples and is ill-suited to the small-sample statistical learning problems involving complex random samples that arise in the real world. The structural risk minimization principle is one of the core components of statistical learning theory and an important foundation for constructing support vector machines. Motivated by this, the paper studies the structural risk minimization principle of statistical learning theory based on complex random samples. First, it defines the annealed entropy, growth function, and VC dimension that characterize the capacity of a set of complex measurable functions, and proves some of their properties. Second, it constructs bounds on the rate of uniform convergence of the learning process based on complex random samples. Finally, it presents the structural risk minimization principle based on complex random samples, proves that the principle is consistent, and derives bounds on the rate of convergence.
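As a reference point, for a class of indicator functions the three capacity measures named here are classically defined through $N^{\Lambda}(z_1,\dots,z_l)$, the number of distinct dichotomies the class induces on the points $z_1,\dots,z_l$ (real-sample versions shown; the paper defines complex-sample counterparts):

$$H(l) = \mathbb{E}\,\ln N^{\Lambda}(z_1,\dots,z_l), \qquad H_{\mathrm{ann}}(l) = \ln \mathbb{E}\, N^{\Lambda}(z_1,\dots,z_l), \qquad G(l) = \ln \max_{z_1,\dots,z_l} N^{\Lambda}(z_1,\dots,z_l),$$

and the VC dimension $h$ is the largest $l$ for which $G(l) = l\ln 2$ (infinite if this holds for all $l$).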

4.
Research on Statistical Learning Theory Based on Random Rough Samples
This paper reviews the basic content of random rough theory. It introduces the concepts of the random rough empirical risk functional, the random rough expected risk functional, and the random rough empirical risk minimization principle. Finally, it proves the key theorem of statistical learning theory based on random rough samples and discusses bounds on the rate of uniform convergence of the learning process.
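In the classical real-sample setting, the two risk functionals and the empirical risk minimization (ERM) principle that these rough-sample definitions extend read

$$R(\alpha) = \int Q(z,\alpha)\,dF(z), \qquad R_{\mathrm{emp}}(\alpha) = \frac{1}{l}\sum_{i=1}^{l} Q(z_i,\alpha),$$

with ERM selecting the $\alpha$ that minimizes $R_{\mathrm{emp}}$. The rough-sample versions replace the distribution $F$ and the observations $z_i$ with their random rough counterparts.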

5.
Theoretical Foundations of Statistical Learning Theory Based on Birandom Samples
This paper reviews the basic content of birandom (doubly random) theory. It introduces the concepts of the birandom empirical risk functional, the birandom expected risk functional, and the birandom empirical risk minimization principle. Finally, it proves the key theorem of statistical learning theory based on birandom samples and discusses bounds on the rate of uniform convergence of the learning process, laying a theoretical foundation for systematically establishing statistical learning theory on uncertain samples and for constructing the corresponding support vector machines.
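For context, the key theorem being generalized states, in Vapnik's real-sample form, that consistency of empirical risk minimization is equivalent to one-sided uniform convergence of empirical risks to expected risks:

$$\lim_{l\to\infty} P\Big\{\sup_{\alpha}\big(R(\alpha) - R_{\mathrm{emp}}(\alpha)\big) > \varepsilon\Big\} = 0 \quad \text{for every } \varepsilon > 0.$$

The birandom version establishes the corresponding equivalence when the samples carry two layers of randomness.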

6.
Theoretical Foundations of Statistical Learning Theory Based on Complex Quasi-Random Samples
This paper introduces the definitions of complex quasi-(probability) random variables and the quasi-norm, gives the concepts of expectation and variance of complex quasi-random variables together with several of their properties, and proves the Markov inequality, the Chebyshev inequality, and the Khinchin law of large numbers for complex quasi-random variables. It then defines the complex empirical risk functional, the complex expected risk functional, and the complex empirical risk minimization principle on a quasi-probability space. It proves and discusses the key theorem of statistical learning theory based on complex quasi-random samples and bounds on the rate of uniform convergence of the learning process, laying a theoretical foundation for systematically establishing statistical learning theory based on complex quasi-random samples.
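The classical prototypes of the three probabilistic tools proved here for complex quasi-random variables are, for a real random variable $X$ and i.i.d. $X_1, X_2, \dots$ with mean $\mu$:

$$P(|X| \ge a) \le \frac{\mathbb{E}|X|}{a}\ (a > 0), \qquad P(|X - \mathbb{E}X| \ge \varepsilon) \le \frac{\mathrm{Var}(X)}{\varepsilon^2}, \qquad \frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{\,P\,} \mu.$$

The paper restates these with expectation and variance taken in the quasi-probability sense.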

7.
Model complexity is a key factor determining the generalization performance of a learning machine, and controlling it properly is an important principle of model selection. The extreme learning machine (ELM), a relatively new machine learning algorithm, has shown excellent learning performance. However, the basic question of how to reasonably measure and control model complexity during ELM model selection has not yet been studied systematically. This paper discusses an ELM model complexity control method based on the Vapnik-Chervonenkis (VC) generalization bound (denoted VM) and compares it systematically with four classical model selection methods. Experiments on artificial and real-world datasets show that, compared with the four classical methods, VM has better model selection performance: it selects ELM models with both the lowest model complexity and the lowest (or nearly the lowest) actual prediction risk. The paper also provides a new example of the practical value of VC dimension theory.
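A minimal sketch of the general idea behind VC-bound-driven ELM model selection: train ELMs of increasing width and pick the one minimizing empirical risk plus a VC-style confidence term. Everything below is illustrative, not the paper's VM method: the penalty is a Vapnik-style classification bound applied loosely to regression, and equating the capacity h with the number of hidden units is an assumption of this sketch (the abstract does not spell out how VM measures complexity).

import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden, rng):
    # Basic ELM: random hidden layer, least-squares output weights.
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def guaranteed_risk(emp_risk, h, l):
    # Empirical risk plus a Vapnik-style VC confidence term (capacity h, sample size l).
    return emp_risk + np.sqrt((h * (np.log(2 * l / h) + 1) + np.log(4)) / l)

# toy regression problem
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

best = None
for n_hidden in (2, 5, 10, 20, 50, 100):
    model = elm_fit(X, y, n_hidden, rng)
    emp = np.mean((elm_predict(model, X) - y) ** 2)
    score = guaranteed_risk(emp, n_hidden, len(y))  # capacity ~ width: sketch assumption
    if best is None or score < best[0]:
        best = (score, n_hidden)

print("selected hidden units:", best[1])

Wider models drive the empirical term down while the confidence term grows, so the minimizer trades the two off; that trade-off is the mechanism a VC-bound-based selector exploits.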

8.
This paper describes a technique for 3D reconstruction from 2D fused microscope images that can recover the 3D features of the fused 2D image. After analyzing the underlying principle, the technique is implemented with Direct3D in the VC (Visual C++) environment.

9.
This paper introduces the definitions of complex gλ random variables and the quasi-norm, gives the concepts of expectation and variance of complex gλ random variables together with several of their properties, and proves the Markov inequality, the Chebyshev inequality, and the Khinchin law of large numbers for complex gλ random variables. It defines the complex empirical risk functional, the complex expected risk functional, and the strict consistency of the complex empirical risk minimization principle on a Sugeno measure space. It proves and constructs the key theorem of statistical learning theory based on complex gλ random samples and bounds on the rate of uniform convergence of the learning process, laying a theoretical foundation for systematically establishing statistical learning theory based on complex gλ random samples.

10.
This paper first introduces the general way of linking VC (Visual C++) and MATLAB in mixed programming based on COM technology. It then discusses in detail, driven by practical needs in digital image processing, the process of transferring multi-dimensional array data between VC and MATLAB, and finally compares the execution speed of the algorithm under the two compilers. Engineering applications show that COM-based mixed programming of VC and MATLAB is stable and fast, greatly reduces developers' code volume and error rate, and can be used in real projects.
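A minimal sketch of the same COM round trip driven from Python via pywin32 rather than VC++; the MLApp automation interface underneath is the one a VC client would also use. Assumptions of this sketch: Windows, the pywin32 package, and a local MATLAB registered as a COM automation server; error handling and type tuning are omitted, and the array is passed as nested lists so COM can marshal it.

import numpy as np
import win32com.client  # pywin32

# Attach to (or launch) the MATLAB COM automation server.
matlab = win32com.client.Dispatch("Matlab.Application")

img = np.random.rand(64, 64, 3)                       # stand-in for image data
matlab.PutWorkspaceData("img", "base", img.tolist())  # push the array into MATLAB's base workspace
matlab.Execute("g = squeeze(mean(img, 3));")          # run MATLAB-side processing
g = np.array(matlab.GetVariable("g", "base"))         # pull the result back
print(g.shape)                                        # (64, 64)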

11.
Accurate prediction of the generalization ability of a learning algorithm is an important problem in computational learning theory. The classical Vapnik-Chervonenkis (VC) generalization bounds are too general and therefore overestimate the expected error. Recently obtained data-dependent bounds are still overestimated. To find out why the bounds are loose, we reject the uniform convergence principle and apply a purely combinatorial approach that is free of any probabilistic assumptions, makes no approximations, and provides an empirical control of looseness. We introduce new data-dependent complexity measures: a local shatter coefficient and a nonscalar local shatter profile, which can give much tighter bounds than the classical VC shatter coefficient. An experiment on real datasets shows that the effective local measures may take very small values; thus, the effective local VC dimension takes values in [0, 1] and therefore is not related to the dimension of the space.

Konstantin Vorontsov. Born 1971. Graduated from the Faculty of Control and Applied Mathematics, Moscow Institute of Physics and Technology, in 1994. Received candidate's degree in 1999. Currently is with the Dorodnicyn Computing Centre, Russian Academy of Sciences. Deputy director for research of Forecsys company (). Scientific interests: computational learning theory, machine learning, data mining, probability theory, and combinatorics. Author of 40 papers. Homepage:
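The local shatter measures described in this abstract are combinatorial, so their flavor is easy to convey in code. A toy sketch, under the illustrative reading that the local coefficient counts the distinct labelings a classifier pool induces on the observed sample (a simplification, not Vorontsov's exact definition):

import numpy as np

def empirical_shatter_coefficient(X, classifiers):
    # Number of distinct labelings the pool induces on this particular sample;
    # the classical shatter coefficient would maximize over all samples of size len(X).
    return len({c(X) for c in classifiers})

# pool of 1-D threshold classifiers h_t(x) = [x > t]
X = np.sort(np.random.default_rng(1).uniform(0, 1, size=10))
pool = [lambda X, t=t: tuple((X > t).astype(int)) for t in np.linspace(0, 1, 201)]

print(empirical_shatter_coefficient(X, pool))  # at most len(X) + 1 = 11

Even with 201 classifiers, only 11 labelings are realizable on 10 points, which is the sense in which data-local capacity can be far smaller than the nominal size of the class.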

12.
Statistical learning theory based on real-valued random samples is regarded as one of the best theories for statistical learning with small samples. The key theorem of learning theory and the bounds on the rate of convergence of learning processes are important theoretical foundations of statistical learning theory. In this paper, the theoretical foundations of statistical learning theory based on fuzzy number samples are discussed. The concepts of the fuzzy expected risk functional, the fuzzy empirical risk functional, and the fuzzy empirical risk minimization principle are redefined. The key theorem of learning theory based on fuzzy number samples is proved. Furthermore, the bounds on the rate of convergence of learning processes based on fuzzy number samples are discussed.

13.
We exhibit upper bounds for the Vapnik-Chervonenkis (VC) dimension of a wide family of concept classes that are defined by algorithms using analytic Pfaffian functions. We give upper bounds on the VC dimension of concept classes in which the membership test for whether an input belongs to a concept in the class can be performed either by a computation tree or by a circuit with sign gates containing Pfaffian functions as operators. These new bounds are polynomial both in the height of the tree and in the depth of the circuit. As a consequence, we obtain polynomial VC dimension not only for classes of concepts whose membership test can be defined by polynomial-time algorithms but also for those defined by well-parallelizable sequential exponential-time algorithms.

14.
We introduce a method based on Kolmogorov complexity to prove lower bounds on communication complexity. The intuition behind our technique is close to information-theoretic methods. We use Kolmogorov complexity for three different things: first, to give a general lower bound in terms of Kolmogorov mutual information; second, to prove an alternative to Yao's minimax principle based on Kolmogorov complexity; and finally, to identify hard inputs. We show that our method implies the rectangle and corruption bounds, known to be closely related to the subdistribution bound. We apply our method to the hidden matching problem, a relation introduced to prove an exponential gap between quantum and classical communication. We then show that our method generalizes the VC dimension and shatter coefficient lower bounds. Finally, we compare one-way communication and simultaneous communication in the case of distributional communication complexity and improve on the previously known result.

15.
王星  方滨兴  张宏莉  何慧  赵蕾 《软件学报》2013,24(11):2508-2521
In learning relational classification models there is as yet no support analogous to the learning bounds of statistical learning theory, so studying learning bounds for relational classification is particularly important. This paper proposes learning bounds applicable to relational classification models. It first derives learning bounds for the cases of finite and infinite model hypothesis spaces. It then proposes a complexity measure of a relational model's capacity to relate data, called the relation dimension, proves the relationship between this complexity and the growth function of the relational model, and obtains learning bounds under finite VC dimension and finite relation dimension. It further analyzes the conditions under which the bounds are learnable and meaningful, and examines their feasibility in detail. Finally, it analyzes traditional learning bounds based on Markov logic networks and the learning situation in real relational classification; experimental results show that the proposed bounds can explain some problems encountered in practical relational classification.
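For the finite-hypothesis-space case, the standard i.i.d. bound that the paper's first result parallels reads (classical 0-1 loss form, not the relational version): with probability at least $1-\delta$, for every $h$ in a finite class $\mathcal{H}$,

$$R(h) \le R_{\mathrm{emp}}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2l}}.$$

The relational setting is harder because linked instances break the independence assumption behind this bound, which is what the relation dimension is introduced to capture.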

