Similar Documents
20 similar documents found (search time: 375 ms)
1.
A Semi-Supervised and Active Learning Algorithm for Content-Based Image Retrieval   Cited by 1 (self: 0, others: 1)
To improve the efficiency of relevance feedback in image retrieval, this paper proposes a new active learning algorithm, SVMpr, based on relevance probability, and combines it with semi-supervised learning into an active learning framework for image retrieval. During relevance feedback, the semi-supervised algorithm TSVM is first trained on the labeled samples; the proposed active learning algorithm then selects, from the unlabeled images, the k images most useful for optimizing the learning process and returns them to the user for labeling. Compared with traditional relevance feedback algorithms, the proposed framework significantly improves the learner's efficiency and performance and converges quickly to the user's query concept.
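The selection step described above can be sketched with a generic margin-based criterion, standing in for the paper's relevance-probability criterion SVMpr (whose exact form the abstract does not give); the toy feature data, dimensions, and k are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy image features: two Gaussian clusters standing in for relevant/irrelevant images.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([1] * 50 + [0] * 50)

labeled = list(range(0, 100, 20))        # a few user-labeled images
unlabeled = [i for i in range(100) if i not in labeled]

clf = SVC(kernel="rbf").fit(X[labeled], y[labeled])

# Select the k unlabeled images closest to the decision boundary,
# i.e. the most informative ones for the next feedback round.
k = 5
margins = np.abs(clf.decision_function(X[unlabeled]))
query = [unlabeled[i] for i in np.argsort(margins)[:k]]
print(query)
```

The queried images would then be labeled by the user and folded back into the training set for the next round.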

2.
宋贤霞 《福建电脑》2011,27(6):63-64
This paper introduces active learning into image retrieval and proposes a new relevance feedback algorithm with an SVM classifier. The method effectively addresses the small-sample problem inherent in relevance feedback and improves the SVM's classification performance, thereby raising the retrieval precision of the system.

3.
An Improved SVM Relevance Feedback Method for Image Retrieval   Cited by 1 (self: 1, others: 0)
This paper proposes an improved relevance feedback method for image retrieval based on the Support Vector Machine (SVM). During the interaction, the SVM classifier learns not only from the labeled positive and negative examples submitted by the user in the current feedback round, but also from the positive and negative examples of all previous rounds, and retrieval is then performed with the trained classifier. Experimental results show that even with a very small sample set the method still retrieves a large number of relevant images, demonstrating good generalization ability under limited training samples.
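A minimal sketch of the accumulation idea, assuming toy feature data and simulated feedback rounds (the schedule of three rounds with six labels each is illustrative, not from the paper):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 6)), rng.normal(2.5, 1, (60, 6))])
y = np.array([1] * 60 + [0] * 60)

history_idx, history_lab = [], []
clf = SVC(kernel="rbf")

for rnd in range(3):
    # Each feedback round the user labels 3 positives and 3 negatives; keep
    # every past example instead of training on this round's feedback alone.
    new = np.concatenate([rng.choice(60, 3, replace=False),
                          60 + rng.choice(60, 3, replace=False)])
    history_idx.extend(int(i) for i in new)
    history_lab.extend(int(y[i]) for i in new)
    clf.fit(X[history_idx], history_lab)

print(len(history_idx))  # 18 samples accumulated over 3 rounds
```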

4.
An SVM Active Feedback Scheme Based on Biased Semi-Supervised Ensembles   Cited by 1 (self: 0, others: 1)
Existing SVM active feedback algorithms are generally limited by the small-sample problem and by the asymmetric distribution of positive and negative examples. To address these problems, this paper proposes an SVM active feedback technique based on a biased semi-supervised ensemble. The algorithm uses unlabeled data within an ensemble learning framework to increase the diversity among individual classifiers, yielding an effective ensemble model; a better ensemble model in turn makes it easier to find informative samples, improving the efficiency of active feedback. In addition, a biased weighting strategy makes the ensemble pay more attention to positive samples, countering the asymmetric distribution between positive and negative examples. Experimental results show that the biased semi-supervised ensemble effectively improves SVM active feedback, and the proposed algorithm clearly outperforms other relevance feedback algorithms in retrieval precision.
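Biased weighting toward the positive class can be approximated with per-class error costs on a single SVM; the 5x weight and toy data below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Asymmetric feedback: few positives, many negatives, as is typical in retrieval.
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(2, 1, (90, 5))])
y = np.array([1] * 10 + [0] * 90)

# Biased weighting: mistakes on the rare positive class cost five times more,
# pushing the decision boundary away from the positives.
biased = SVC(kernel="rbf", class_weight={1: 5.0, 0: 1.0}).fit(X, y)
plain = SVC(kernel="rbf").fit(X, y)

# The biased model should recover at least as many positives on the training set.
recall_biased = biased.predict(X[:10]).mean()
recall_plain = plain.predict(X[:10]).mean()
print(recall_biased, recall_plain)
```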

5.
To address the difficulty of obtaining large numbers of class-labeled samples in supervised learning, and the high cost of labeling, this paper combines active learning with semi-supervised learning and proposes an SVM active learning algorithm based on Tri-training semi-supervised learning and convex-hull vectors. The hull vectors of the sample set are computed, and those most likely to become support vectors are selected for labeling. To remedy the failure of earlier active learning algorithms to exploit the remaining unlabeled samples after the most informative ones have been labeled, the Tri-training semi-supervised method is introduced into the SVM active learning process: unlabeled samples with high label confidence are added to the training set, exploiting the information in the unlabeled data that benefits the learner. Experiments on UCI datasets show that the algorithm obtains an SVM classifier with high accuracy and good generalization from few labeled samples, reducing the labeling cost of SVM training.
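A rough sketch of the semi-supervised step, using single-classifier self-training as a stand-in for Tri-training's three classifiers (the hull-vector selection step is omitted; data and the confident-sample count are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([1] * 50 + [0] * 50)

labeled = [0, 1, 2, 50, 51, 52]
unlabeled = [i for i in range(100) if i not in labeled]

clf = SVC(kernel="rbf").fit(X[labeled], y[labeled])

# Semi-supervised step: add the unlabeled samples whose predicted label is
# most confident (farthest from the boundary) with their pseudo-labels.
f = clf.decision_function(X[unlabeled])
order = np.argsort(-np.abs(f))[:20]
confident = [unlabeled[i] for i in order]
pseudo = (f[order] > 0).astype(int)

clf.fit(np.vstack([X[labeled], X[confident]]),
        np.concatenate([y[labeled], pseudo]))
print(len(confident))
```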

6.
Active learning has proved to be an important technique for improving content-based image retrieval, and relevance feedback can effectively reduce the user's labeling effort. This paper proposes an active learning algorithm, weighted Co-ASVM, to improve sample selection in relevance feedback. Color and texture can be regarded as two sufficiently uncorrelated views of an image: weights are computed for the color and texture feature spaces separately, an SVM is trained in each space, and unlabeled samples are classified accordingly. To reduce redundancy among feedback samples, a K-means-based active feedback strategy clusters the unlabeled samples before returning them to the user for labeling. Experiments show that this retrieval method achieves high accuracy and good retrieval results.
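The K-means diversification step might look like the following sketch (candidate-pool size and cluster count are illustrative; the paper's two-view weighting over color and texture is omitted):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([1] * 50 + [0] * 50)

labeled = [0, 1, 50, 51]
unlabeled = [i for i in range(100) if i not in labeled]

clf = SVC(kernel="rbf").fit(X[labeled], y[labeled])

# Take the 20 most uncertain candidates, then cluster them and return one
# representative per cluster, so the user never labels near-duplicate images.
margins = np.abs(clf.decision_function(X[unlabeled]))
cand = [unlabeled[i] for i in np.argsort(margins)[:20]]
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X[cand])
query = [cand[int(np.argmin(np.linalg.norm(X[cand] - c, axis=1)))]
         for c in km.cluster_centers_]
print(sorted(set(query)))
```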

7.
徐海龙 《控制与决策》2010,25(2):282-286
To address the difficulty of obtaining large numbers of class-labeled samples for SVM training, this paper proposes an active SVM incremental training algorithm based on distance-ratio uncertainty sampling (DRB-ASVM) and applies it to SVM incremental training. Experimental results show that, without sacrificing classification accuracy, the active learning strategy selects far fewer samples for labeling than random selection, reducing the labeling workload and cost while increasing training speed.
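The abstract does not define the distance-ratio criterion, so the sketch below substitutes a simpler margin-band rule: only candidates that fall inside the SVM margin are queued for labeling, since samples far outside it are unlikely to change the solution.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(2.5, 1, (40, 3))])
y = np.array([1] * 40 + [0] * 40)

labeled = list(range(5)) + list(range(40, 45))
clf = SVC(kernel="linear").fit(X[labeled], y[labeled])

# Queue only candidates inside the margin band (|f(x)| < 1) for labeling;
# confidently classified samples are skipped, shrinking the labeling workload.
f = clf.decision_function(X)
selected = [i for i in range(80) if i not in labeled and abs(f[i]) < 1.0]
print(len(selected))
```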

8.
This paper proposes a relevance feedback algorithm that combines SVM with AdaBoost ensemble learning. Selecting the most informative samples to train the SVM during relevance feedback effectively reduces both the number of feedback rounds and the number of training samples required, and the two techniques complement each other to improve retrieval precision. AdaBoost is then applied to combine the SVM classifiers by weighted voting, further improving retrieval performance. Experiments show that the method alleviates the small-sample problem in image retrieval and significantly improves retrieval efficiency and performance.
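A bare-bones version of AdaBoost-weighted voting over SVM base classifiers (toy data and three boosting rounds, not the paper's exact setup): each round fits an SVM on the reweighted sample, and the ensemble predicts by a vote weighted by each round's accuracy.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(2, 1, (60, 4))])
y = np.array([1] * 60 + [-1] * 60)

w = np.full(len(X), 1 / len(X))          # uniform sample weights to start
clfs, alphas = [], []
for _ in range(3):
    clf = SVC(kernel="linear").fit(X, y, sample_weight=w)
    pred = clf.predict(X)
    err = max(w[pred != y].sum(), 1e-10)  # weighted training error
    alpha = 0.5 * np.log((1 - err) / err) # classifier's voting weight
    w *= np.exp(-alpha * y * pred)        # upweight the mistakes
    w /= w.sum()
    clfs.append(clf)
    alphas.append(alpha)

vote = sum(a * c.predict(X) for a, c in zip(alphas, clfs))
ensemble_pred = np.sign(vote)
print((ensemble_pred == y).mean())
```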

9.
Relevance feedback is an active topic in content-based image retrieval. Existing SVM-based relevance feedback implicitly assumes that all features of a relevant image are relevant, which is not entirely accurate. This paper therefore proposes MISVM, a short-term machine learning relevance feedback method that uses multiple-instance learning to estimate the relevance of each feature and thereby improve SVM classification accuracy. Building on this, and to further improve feedback speed and precision, the paper proposes LMISVM, a long-term machine learning relevance feedback method that retains previously trained classifiers and feedback samples. Comparative experiments show that both proposed methods outperform the alternatives.

10.
梁竞敏  唐斌 《微计算机信息》2012,(5):174-176,173
Semantic image retrieval has become key to bridging the "semantic gap" between simple visual features and the high-level semantics of user queries. This paper proposes a relevance feedback algorithm combining SVM with AdaBoost ensemble learning. Selecting the most informative samples to train the SVM during relevance feedback effectively reduces both the number of feedback rounds and the number of training samples required, and the two techniques complement each other to improve retrieval precision. AdaBoost is then applied to combine the SVM classifiers by weighted voting, further improving retrieval performance. Experiments show that the method alleviates the small-sample problem in image retrieval and significantly improves retrieval efficiency and performance.

11.
Auer  Peter  Long  Philip M.  Maass  Wolfgang  Woeginger  Gerhard J. 《Machine Learning》1995,18(2-3):187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Kearns  Michael  Sebastian Seung  H. 《Machine Learning》1995,18(2-3):255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
刘晓  毛宁 《数据采集与处理》2015,30(6):1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns to choose the optimal action from an allowed action set through repeated interaction with a random environment. In most traditional LA models the action set is finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action set learning automaton (CALA) whose action set is a variable interval, with output actions selected uniformly over it. The learning algorithm uses the binary feedback signal from the environment to adaptively update the interval endpoints. A simulation on a multimodal learning problem demonstrates the new algorithm's advantage over three existing CALA algorithms.
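The interval-endpoint idea can be illustrated with a toy sketch; the reward model, the shrink/widen rates, and the optimum location below are all invented for the demo and are not the paper's update rule:

```python
import numpy as np

rng = np.random.default_rng(6)

# Binary environment feedback: success is most likely near the unknown optimum 0.7.
def feedback(a):
    return rng.random() < np.exp(-20 * (a - 0.7) ** 2)

# Keep a variable interval [lo, hi], draw actions uniformly over it, shrink
# the interval toward rewarded actions, and widen it slightly after failures.
lo, hi = 0.0, 1.0
for _ in range(2000):
    a = rng.uniform(lo, hi)
    if feedback(a):
        lo += 0.05 * (a - lo)
        hi -= 0.05 * (hi - a)
    else:
        lo, hi = max(lo - 0.001, 0.0), min(hi + 0.001, 1.0)

print(round((lo + hi) / 2, 2), round(hi - lo, 3))
```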

16.
Transfer in variable-reward hierarchical reinforcement learning   Cited by 2 (self: 1, others: 1)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
20.
Applying Different Degrees of Supervision to Automatic Text Categorization   Cited by 1 (self: 0, others: 1)
Automatic text categorization draws on information retrieval, pattern recognition, and machine learning. Organized by degree of supervision, this paper surveys methods under fully supervised, unsupervised, and semi-supervised learning strategies: NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text categorization. Among these, gSOM is a semi-supervised variant we developed from SOM. Using the Reuters-21578 corpus, we study how the degree of supervision affects classification results, and offer recommendations for practical text categorization work.
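Of the surveyed methods, plain FCM is easy to sketch from its standard update equations (toy 2-D data; ssFCM would additionally pin the memberships of a labeled subset):

```python
import numpy as np

rng = np.random.default_rng(7)
# Two well-separated blobs standing in for two document classes.
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])

# Fuzzy C-Means: every point holds a membership degree in each cluster;
# the fuzzifier m > 1 controls how soft the memberships are.
c, m = 2, 2.0
U = rng.random((len(X), c))
U /= U.sum(axis=1, keepdims=True)

for _ in range(50):
    um = U ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]     # weighted cluster means
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
    inv = d ** (-2 / (m - 1))                          # standard membership update
    U = inv / inv.sum(axis=1, keepdims=True)

labels = U.argmax(axis=1)
print(len(set(labels.tolist())))
```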


Copyright©北京勤云科技发展有限公司  京ICP备09084417号