Similar Documents
20 similar documents found.
1.
Liu Mengdi, Chen Yanli, Chen Lei. Journal of Computer Applications, 2016, 36(8): 2274-2281
Existing automatic image annotation algorithms can be roughly divided into four categories: semantics-based, matrix-factorization-based, probability-based, and graph-learning-based. This survey introduces representative algorithms in each category, analyzes their problem models and characteristics, and summarizes the main optimization methods used in automatic image annotation as well as the image datasets and performance metrics commonly used for evaluation. Finally, it points out the main open problems in automatic image annotation and suggests directions for solving them. The analysis shows that research on automatic image annotation can exploit the complementary strengths of existing algorithms, or draw on cross-disciplinary advantages, to find more effective algorithms.

2.
To build a bridge between low-level image features and high-level semantics and improve the precision of automatic image annotation, this paper combines graph-learning methods with classification-based annotation, proposing a semi-supervised image semantic annotation method based on continuous prediction, and analyzes its complexity. The method uses the information provided by labeled data and the relationships between labeled and unlabeled instances: based on the fact that neighboring instances tend to belong to the same class, it builds a K-nearest-neighbor graph. A graph-based classifier computes adjacency information efficiently via a kernel function. On this graph, labels are propagated over the partitioned sample nodes with a multi-label semi-supervised learning method based on continuous prediction. Experiments show that the proposed algorithm significantly improves the average precision and average recall of annotation keywords.
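A minimal sketch of the graph-construction and propagation idea above (not the paper's actual algorithm, which uses a kernel-based classifier and continuous multi-label prediction; this toy version does hard majority voting on a k-nearest-neighbor graph, and all names and points are illustrative):

```python
import math

def knn_graph(points, k):
    """Build a k-nearest-neighbor adjacency list from 2-D points."""
    n = len(points)
    graph = {}
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        graph[i] = [j for _, j in dists[:k]]
    return graph

def propagate_labels(points, labels, k=2, iters=20):
    """Iteratively give each unlabeled node (label None) the majority
    label among its k nearest neighbors."""
    graph = knn_graph(points, k)
    labels = list(labels)
    for _ in range(iters):
        updated = list(labels)
        for i, lab in enumerate(labels):
            if lab is None:
                votes = [labels[j] for j in graph[i] if labels[j] is not None]
                if votes:
                    updated[i] = max(set(votes), key=votes.count)
        labels = updated
    return labels

# Two clusters of feature points; only one point per cluster is labeled.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labs = ["sky", None, None, "grass", None, None]
result = propagate_labels(pts, labs)
print(result)
```

Each unlabeled point inherits the label dominant in its neighborhood, so both clusters end up fully annotated from a single seed label each.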

3.
To address the semantic gap in image retrieval, a novel automatic image annotation method is proposed. It first applies a soft-constraint-based semi-supervised image clustering algorithm (SHMRF-Kmeans) to semantically cluster the regions of annotated images; this clustering considers both visual and semantic information. A graph algorithm, Manifold ranking, is then used to fully exploit the relationship between semantic concepts and region cluster centers, yielding a joint probability table of the two, which is then used to annotate unlabeled images. Compared with previous methods, this approach combines visual features and high-level semantics more fully. Experiments on a general-purpose image set show that the proposed method is effective.

4.
Image semantic annotation must bridge the semantic gap between high-level image semantics and low-level features. This work uses image segmentation combined with region feature extraction to associate region semantics with low-level features, applies a distance-based classification algorithm to compute similarity between region features, distinguishes the semantics of regions with identical or similar features by associating keywords, and uses keywords to realize automatic annotation of image semantics.

5.
Automatic Image Annotation Based on Concept Indexing
In content-based image retrieval, linking low-level visual features to high-level semantics is difficult. A new solution is to annotate images automatically according to their semantic content. To narrow the semantic gap, a multi-class classifier based on support vector machines (SVM) is used as the space-mapping method: low-level features are mapped to model features that carry high-level semantics, implementing concept indexing; the model features are formed by combining the multi-class classification results as probabilities. In the space of model features, a kernel method then estimates keyword probabilities, providing concept-level image annotations for retrieval. Experiments show that, compared with low-level features, annotation with model features improves the F-measure by a relative 14%.
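The pipeline above (class probabilities as model features, then kernel-based keyword estimation) can be illustrated as follows. This is a hedged sketch: a softmax over raw scores stands in for the probabilistic outputs of the SVM multi-class classifier, and the kernel-weighted estimate is one plausible reading of the paper's kernel method; all names and numbers are hypothetical.

```python
import math

def softmax(scores):
    """Turn raw per-class scores into probabilities (stand-in for the
    probabilistic outputs of a multi-class SVM)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gaussian_kernel(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def keyword_probability(model_feat, examples, keyword):
    """Kernel estimate of P(keyword | model features): weight each
    annotated training example by its kernel similarity in model-feature space."""
    num = sum(gaussian_kernel(model_feat, f) for f, kws in examples if keyword in kws)
    den = sum(gaussian_kernel(model_feat, f) for f, _ in examples)
    return num / den

# Model features = class-probability vectors; annotated training examples.
train = [
    (softmax([2.0, 0.1, 0.1]), {"tiger", "grass"}),
    (softmax([0.1, 2.0, 0.1]), {"beach", "sky"}),
]
query = softmax([1.8, 0.2, 0.2])  # close to the first training example
p = keyword_probability(query, train, "tiger")
print(round(p, 3))
```

Because the query's class-probability vector sits near the first example, keywords attached to that example receive most of the probability mass.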

6.
Zhu Xudong, Xiong Yun. Computer Engineering, 2022, 48(4): 173-178+190
Multi-label image classification, an important research direction in computer vision, is widely used in image recognition, detection, and related scenarios. Existing methods cannot effectively exploit label correlations or the correspondence between label semantics and image features, which limits their classification ability. A new multi-label image classification algorithm is proposed: a graph model is built from label co-occurrence information and label prior knowledge; multi-scale attention learns the objects in image features; and label-guided attention fuses label semantic features with image features, incorporating label correlations and label semantics into model learning. On this basis, a dynamic graph model built on a graph attention mechanism updates the label-information graph during learning to fully fuse image and label information. Experiments on multi-label image classification show that, compared with the best existing algorithm MLGCN, the proposed algorithm improves mAP by 0.6 and 1.2 percentage points on the VOC-2007 and COCO-2012 datasets respectively, a clear performance gain.
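The label co-occurrence graph that methods of this kind build from training annotations can be sketched like this (a simplified illustration, not the paper's model: edges here are thresholded conditional probabilities P(j | i), in the style of MLGCN-type approaches; the names, data, and threshold are assumptions):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(annotations, threshold=0.4):
    """Build a directed label graph: edge a -> b exists when the estimated
    conditional probability P(b | a) from co-occurrence counts exceeds a
    threshold."""
    label_count = Counter()
    pair_count = Counter()
    for labels in annotations:
        label_count.update(labels)
        for a, b in combinations(sorted(labels), 2):
            pair_count[(a, b)] += 1
            pair_count[(b, a)] += 1
    edges = {}
    for (a, b), c in pair_count.items():
        if c / label_count[a] >= threshold:   # P(b | a)
            edges.setdefault(a, []).append(b)
    return edges

# Toy annotations: each set is the label set of one training image.
imgs = [{"person", "bicycle"}, {"person", "dog"}, {"person", "bicycle"},
        {"car", "bicycle"}]
g = cooccurrence_graph(imgs)
print(g)
```

Such a graph encodes the label prior knowledge that the classifier then propagates over, e.g. seeing "bicycle" raises the probability of "person".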

7.
Automatic image annotation is a challenging task of great significance for image analysis, understanding, and retrieval. In this field, a relation model between the semantic concept space and the visual feature space is learned from an annotated image set and used to annotate unlabeled images. Because of the intricate correspondence between low-level features and high-level semantics, current annotation accuracy remains low. Under scene constraints, however, the mapping between annotations and visual features can be simplified and the reliability of automatic annotation improved. An image annotation method based on scene semantic trees is therefore proposed. Training images are first clustered automatically into semantic scene categories; for each scene semantic category a visual scene space is generated, and a corresponding semantic tree is built for each scene space. For an image to be annotated, its semantic category is determined and its final annotation is obtained through the corresponding scene semantic tree. On the Corel5K image set, the method obtains annotation results superior to TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model), and PLSA-GMM (probabilistic latent semantic analysis with Gaussian mixture model).

8.
Zero-shot multi-label image classification annotates images that carry multiple labels, where the test-time label categories have no corresponding training samples. Prior studies show that the categories of multi-label images are interrelated, and properly exploiting inter-label relationships is key to multi-label classification; the central problems in zero-shot multi-label classification are how to transfer a model from seen to unseen classes and how to use label correlations to classify unseen classes. For this challenging learning task, a deep instance differentiation classification algorithm is proposed. A deep embedding network first performs a cross-modal mapping from the visual feature space to the label semantic feature space; an instance differentiation algorithm then performs multi-label classification in the semantic space. Comparative experiments on the mainstream Natural Scene and IAPRTC-12 datasets verify the superiority and effectiveness of the proposed method and of the embedding network.
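The cross-modal mapping followed by classification in label-semantic space can be sketched as below. This is a toy illustration: the projection is the identity, the 2-D label embeddings are made up, and the paper's instance-differentiation step is replaced by a simple cosine-similarity ranking; "sunset" plays the role of an unseen class reachable only through its embedding.

```python
import math

def predict_labels(visual_feat, project, label_embeddings, top_k=2):
    """Zero-shot sketch: project a visual feature into the label semantic
    space, then rank all labels (seen or unseen) by cosine similarity."""
    z = project(visual_feat)

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    ranked = sorted(label_embeddings,
                    key=lambda lab: cos(z, label_embeddings[lab]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical 2-D label embeddings; the projection here is the identity,
# standing in for a learned deep embedding network.
embeddings = {"beach": (0.9, 0.1), "sunset": (0.7, 0.7), "forest": (0.0, 1.0)}
top2 = predict_labels((0.8, 0.6), lambda v: v, embeddings)
print(top2)
```

Because classification happens in the shared semantic space, a class never seen in training can still be ranked and predicted from its embedding alone.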

9.
A scene classification method combining spatial pyramid partitioning with PLSA is proposed. Spatial pyramid partitioning first builds a set of image regions; probabilistic latent semantic analysis (PLSA) then discovers a latent semantic model from the region set; finally, the occurrence probabilities of the latent semantics over all image regions are computed from the model to build region latent-semantic features, which are used to train an SVM for scene classification. Experiments on 13 scene categories show that, compared with other methods, this method requires no extensive manual annotation and achieves higher classification accuracy.
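The spatial pyramid partitioning step can be sketched as follows (regions only; the PLSA and SVM stages are omitted, and the function name and cell layout are illustrative):

```python
def spatial_pyramid_regions(width, height, levels=3):
    """Partition an image into a spatial pyramid: at level l the image is
    split into a 2^l x 2^l grid, so 3 levels give 1 + 4 + 16 regions.
    Each region is (x, y, cell_width, cell_height)."""
    regions = []
    for level in range(levels):
        cells = 2 ** level
        cw, ch = width / cells, height / cells
        for row in range(cells):
            for col in range(cells):
                regions.append((col * cw, row * ch, cw, ch))
    return regions

regs = spatial_pyramid_regions(256, 256, levels=3)
print(len(regs))
```

Descriptors extracted from each of these regions form the region set from which PLSA would then learn latent topics.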

10.
Narrowing the gap between low-level visual features and high-level semantics so as to improve the precision of automatic image semantic annotation is key to managing large-scale image data. A deep-learning image annotation method fusing multiple features is proposed: visual features are combined into a bag of words with different weights, and a deep belief network is optimized according to its input and output variables to perform automatic semantic annotation of large-scale image data. Experiments on the general-purpose Corel image set show that this multi-feature deep-learning method, by accounting for the influence of different image features, improves annotation precision.

11.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
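The transfer mechanism (value functions linear in reward features, reused to initialize a new SMDP) can be sketched as below. This is a toy rendering of the idea, not the VRRL algorithm itself; the states, feature names, and numbers are invented.

```python
def init_value_function(stored, new_reward_weights, states):
    """Variable-reward transfer sketch: each stored policy carries, per
    state, a vector of expected discounted reward FEATURES; its value under
    a new linear reward with weights w is the dot product.  Initialize the
    new task's value function with the best stored value at each state."""
    def value(feats, w):
        return sum(f * wi for f, wi in zip(feats, w))

    return {s: max(value(feats[s], new_reward_weights) for feats in stored)
            for s in states}

# Two stored policies, each with per-state reward-feature expectations
# (feature order: [gold, damage]); hypothetical numbers.
stored = [
    {"s0": [5.0, 1.0], "s1": [2.0, 0.5]},   # gold-gathering policy
    {"s0": [1.0, 4.0], "s1": [0.5, 3.0]},   # combat policy
]
w_new = [0.2, 1.0]  # new task: damage matters most
v0 = init_value_function(stored, w_new, ["s0", "s1"])
print(v0)
```

Because the reward features are shared, a new reward weighting only re-scores the stored feature expectations; here the combat policy dominates at both states, so it seeds the new value function.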

12.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
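The motivating setting, combining many independent mediocre hypotheses, can be illustrated with a simple majority vote. This illustrates the premise only, not the paper's formal model or its algorithms; the error rate, hypothesis count, and domain are arbitrary.

```python
import random

def majority_vote(hypotheses):
    """Combine independent weak {0,1}-valued hypotheses by majority vote."""
    def combined(x):
        votes = sum(h(x) for h in hypotheses)
        return 1 if votes > len(hypotheses) / 2 else 0
    return combined

def noisy_hypothesis(target, error, rng):
    """A weak hypothesis: agrees with the target except that each point's
    answer is flipped independently with probability `error`."""
    flips = {x: rng.random() < error for x in range(100)}
    return lambda x: target(x) ^ flips[x]

rng = random.Random(0)
target = lambda x: x % 2                      # unknown target function
weak = [noisy_hypothesis(target, 0.3, rng) for _ in range(31)]
voted = majority_vote(weak)
errors = sum(voted(x) != target(x) for x in range(100))
print(errors)
```

Each individual hypothesis errs on roughly 30 of the 100 points, but because the errors are independent, the 31-way majority errs far less often (the per-point error probability drops to well under 1 percent).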

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

16.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

17.
18.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.

19.
Applying Different Degrees of Supervision to Automatic Text Classification
Automatic text classification involves information retrieval, pattern recognition, and machine learning. Taking the degree of supervision as its organizing thread, this paper surveys several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies (NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map)) and applies them to text classification; gSOM is a semi-supervised form we developed from SOM. Using Reuters-21578 as the corpus, the effect of the degree of supervision on classification performance is studied, and recommendations for practical text classification work are given.
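Of the surveyed methods, the fully supervised NBC is the simplest to sketch. Below is a minimal multinomial Naive Bayes with Laplace smoothing; the corpus snippets and tokenization are invented for illustration, not drawn from Reuters-21578.

```python
import math
from collections import Counter, defaultdict

def train_nbc(docs):
    """Train a multinomial Naive Bayes classifier from (tokens, label) pairs."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify(tokens, model):
    """Pick the class maximizing log P(c) + sum log P(w | c), with
    Laplace (add-one) smoothing over the vocabulary."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("oil prices rise".split(), "commodities"),
    ("crude oil exports".split(), "commodities"),
    ("shares fall on earnings".split(), "equities"),
    ("stock shares rally".split(), "equities"),
]
model = train_nbc(docs)
pred = classify("oil exports rise".split(), model)
print(pred)
```

The semi-supervised variants surveyed above (ssFCM, gSOM) differ in that only part of the corpus carries labels; the fully supervised NBC shown here needs a label for every training document.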

20.
New Trends in the Techniques, Methods, and Applications of Learning Control
This paper analyzes and surveys new trends in the techniques, learning methods, and applications of current learning control systems. In terms of technique, learning control is moving from single techniques toward hybrid techniques; in terms of learning methods and applications, it is moving from relatively simple parameter learning toward more complex structure learning, environment learning, and learning of complex plants.
