10 similar documents found.
1.
Automatic image annotation is a significant and challenging problem in pattern recognition and computer vision. Most current annotation models use all training images to estimate joint generative probabilities between images and keywords, which inevitably introduces many irrelevant images. To address this, we propose a hierarchical image annotation model that combines the advantages of discriminative and generative models. In the first annotation layer, a discriminative model assigns topic annotations to unlabeled images, yielding a relevant image set for each unlabeled image. In the second layer, a keyword-oriented method establishes links between images and keywords, and an iterative algorithm expands the relevant image sets; candidate labels are given higher weights by a method based on visual keywords. Finally, a generative model assigns detailed annotations to unlabeled images using the expanded relevant image sets. Experiments on the Corel 5K dataset verify the effectiveness of the hierarchical model.
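To make the two-layer idea concrete, here is a minimal Python sketch. It assumes synthetic features, a logistic-regression topic classifier, and smoothed keyword frequencies standing in for the paper's actual discriminative and generative models; the visual-keyword weighting scheme is not reproduced.

```python
# Minimal sketch of a two-layer annotation pipeline: a discriminative topic
# classifier narrows each test image to a relevant training subset, then a
# simple generative keyword model is estimated on that subset only.
# All data here is synthetic; models are placeholders, not the paper's.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_feat, n_topics = 200, 32, 4
X_train = rng.normal(size=(n_train, n_feat))
topics = rng.integers(0, n_topics, size=n_train)       # coarse topic labels
vocab = ["sky", "water", "grass", "building", "person", "animal"]
tags = [rng.choice(vocab, size=3, replace=False).tolist() for _ in range(n_train)]

# Layer 1: discriminative model assigns a topic, defining the relevant set.
topic_clf = LogisticRegression(max_iter=1000).fit(X_train, topics)

def annotate(x, k=3):
    topic = topic_clf.predict(x.reshape(1, -1))[0]
    relevant = [i for i in range(n_train) if topics[i] == topic]
    # Layer 2 (simplified): generative keyword model estimated on the
    # relevant subset only (here, smoothed keyword frequencies).
    counts = Counter(w for i in relevant for w in tags[i])
    scores = {w: (counts[w] + 1) / (len(relevant) + len(vocab)) for w in vocab}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(annotate(rng.normal(size=n_feat)))
```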
2.
Finding semantically similar images is a problem that relies on image annotations manually assigned by amateurs or professionals, or automatically computed by some algorithm using low-level image features. These image annotations create a keyword space in which a dissimilarity function quantifies the semantic relationship among images. In this setting, the objective of this paper is twofold. First, we compare amateur to professional user annotations and propose a model of manual annotation errors, specifically an asymmetric binary model. Second, we examine different aspects of search by semantic similarity: the accuracy of manual versus automatic annotations, the influence of manual annotations of varying accuracy due to incorrect annotations, and the influence of the keyword space dimensionality. To assess these aspects we conducted experiments on a professional image dataset (Corel) and two amateur image datasets (one with 25,000 Flickr images and a second with 269,648 Flickr images) with a large number of keywords, different similarity functions, and both manual and automatic annotation methods. We find that amateur-level manual annotations offer better performance for top-ranked results in all datasets (MP@20). However, for full-rank measures (MAP) in the real datasets (Flickr), retrieval by semantic similarity with automatic annotations is similar to or better than with amateur-level manual annotations.
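As an illustration of the asymmetric binary error model, the following hedged sketch corrupts a 0/1 keyword vector with different drop and add probabilities and measures the resulting dissimilarity. The noise rates and the cosine measure are assumptions, not the paper's fitted values.

```python
# Hedged sketch of an "asymmetric binary" annotation-error model: present
# keywords are dropped with one probability, absent keywords are added with
# a different (typically much smaller) one.
import numpy as np

rng = np.random.default_rng(1)

def corrupt(ann, p_drop=0.3, p_add=0.01):
    """Apply asymmetric binary noise to a 0/1 keyword vector."""
    drop = (ann == 1) & (rng.random(ann.shape) < p_drop)   # missed keywords
    add = (ann == 0) & (rng.random(ann.shape) < p_add)     # spurious keywords
    return np.where(drop, 0, np.where(add, 1, ann))

def dissimilarity(a, b):
    """Cosine dissimilarity in the keyword space (illustrative choice)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 - (a @ b) / denom if denom else 1.0

truth = rng.integers(0, 2, size=500)      # ground-truth annotation
amateur = corrupt(truth.copy())           # noisier, amateur-style copy
print("dissimilarity to truth:", dissimilarity(truth, amateur))
```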
3.
Grégoire Mesnil, Antoine Bordes, Jason Weston, Gal Chechik, Yoshua Bengio. 《Machine Learning》, 2014, 94(2): 281-301
Recently, large-scale image annotation datasets have been collected with millions of images and thousands of possible annotations. Latent variable models, or embedding methods, that simultaneously learn semantic representations of object labels and image representations can provide tractable solutions on such tasks. In this work, we are interested in jointly learning representations both for the objects in an image and for the parts of those objects, because such deeper semantic representations could bring a leap forward in image retrieval or browsing. Despite the size of these datasets, annotated data for objects and parts can be costly to obtain and may not be available. In this paper, we propose to bypass this cost with a method able to learn to jointly label objects and parts without requiring exhaustively labeled data. We design a model architecture that can be trained under a proxy supervision obtained by combining standard image annotation (from ImageNet) with semantic part-based within-label relations (from WordNet). The model itself is designed to capture both object-image-to-object-label similarities and object-label-to-part-label similarities in a single joint system. Experiments conducted on our combined data and a precisely annotated evaluation set demonstrate the usefulness of our approach.
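A rough sketch of such a joint embedding follows, with randomly initialized (untrained) projection and label matrices standing in for the learned model; it illustrates the scoring structure, not the paper's training procedure.

```python
# Illustrative joint embedding: images, object labels, and part labels share
# one embedding space; the model scores image-to-object and object-to-part
# similarity. Dimensions and matrices here are assumptions.
import numpy as np

rng = np.random.default_rng(2)
dim, n_objects, n_parts = 64, 10, 25
W_img = rng.normal(scale=0.1, size=(128, dim))        # image feature projector
E_obj = rng.normal(scale=0.1, size=(n_objects, dim))  # object label embeddings
E_part = rng.normal(scale=0.1, size=(n_parts, dim))   # part label embeddings

def score_objects(x):
    """Similarity of an image to every object label."""
    return E_obj @ (x @ W_img)

def score_parts(obj_id):
    """Similarity of an object label to every part label (WordNet-style)."""
    return E_part @ E_obj[obj_id]

x = rng.normal(size=128)                  # raw image feature vector
obj = int(np.argmax(score_objects(x)))    # predicted object
top_parts = np.argsort(score_parts(obj))[::-1][:3]
print("object:", obj, "top parts:", top_parts)
```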
4.
The vast number of images available on the Web calls for an effective and efficient search service to help users find relevant images. The prevalent approach is to provide a keyword interface for users to submit queries. However, the number of images without any tags or annotations is beyond the reach of manual labeling. To overcome this, automatic image annotation techniques have emerged, which select a suitable set of tags for a given image without user intervention. There are three main challenges in Web-scale image annotation: scalability, noise-resistance, and diversity. Scalability has a twofold meaning: first, an automatic image annotation system should scale to billions of images on the Web; second, it should be able to identify several relevant tags for a given image from a huge tag set within seconds or even faster. Noise-resistance means that the system should be robust against typos and ambiguous terms used in tags. Diversity reflects that image content may include both scenes and objects, which are further described by multiple image features constituting different facets of annotation. In this paper, we propose a unified framework to tackle these three challenges for automatic Web image annotation. It involves two components: tag candidate retrieval and multi-facet annotation. In the former, content-based indexing and a concept-based codebook are leveraged to solve the scalability and noise-resistance issues. In the latter, a joint feature map describes the different facets of tags in annotations and the relations between these facets. A tag graph represents the tags in the entire annotation, and a structured learning technique constructs a learning model on top of the tag graph based on the generated joint feature map. Millions of images from Flickr are used in our evaluation. Experimental results show a 33% performance improvement over single-facet approaches in terms of three metrics: precision, recall, and F1 score.
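The candidate-retrieval stage can be pictured with a toy inverted index from visual codewords to images, as in this sketch; the codebook, index contents, and voting scheme are illustrative assumptions, not the framework's actual components.

```python
# Rough sketch of tag-candidate retrieval: an inverted index maps visual
# codewords to images, so candidate tags come only from visually similar
# images instead of the full tag set.
from collections import Counter, defaultdict

# toy "codebook quantization": each image is a set of visual codeword ids
index_images = {
    "img1": ({3, 7, 19}, ["beach", "sea", "sky"]),
    "img2": ({7, 19, 42}, ["sea", "boat"]),
    "img3": ({5, 11, 13}, ["street", "car"]),
}

inverted = defaultdict(set)                 # codeword id -> image ids
for img, (codes, _) in index_images.items():
    for c in codes:
        inverted[c].add(img)

def candidate_tags(query_codes, top_k=3):
    """Vote for tags of images sharing codewords with the query."""
    votes = Counter()
    for c in query_codes:
        for img in inverted[c]:
            shared = len(query_codes & index_images[img][0])
            for tag in index_images[img][1]:
                votes[tag] += shared
    return [t for t, _ in votes.most_common(top_k)]

print(candidate_tags({7, 19, 99}))          # -> tags from img1/img2 only
```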
5.
With the emergence of massive shared image collections under Web 2.0, obtaining descriptive and precise region-level annotations for images has become an important research problem. This paper proposes an image annotation framework based on the semantic diverse density of regions, focusing on differences in visual features and spatial structure between regions. Specifically, a distance-similarity-based feature diverse density realizes semantic annotation of region features, and a penalty term derived from negatively correlated examples realizes annotation of region spatial-relation semantics and attribute semantics. The method is validated on the NUS-WIDE and MSRC datasets: the accuracy of region attribute annotation exceeds 80%, and the mean precision of image retrieval based on attribute annotations reaches 82%.
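For readers unfamiliar with diverse density, this minimal sketch follows the standard noisy-or formulation (after Maron and Lozano-Pérez) that such region-level frameworks build on; the bags and feature vectors are synthetic, and nothing here reproduces the paper's distance-similarity variant or its negative-example penalty.

```python
# Minimal diverse-density sketch: a candidate concept point scores high when
# it is near some instance of every positive bag and far from negative bags.
import numpy as np

def pr_bag(bag, t):
    """Noisy-or probability that a bag contains concept t."""
    d2 = np.sum((bag - t) ** 2, axis=1)
    return 1.0 - np.prod(1.0 - np.exp(-d2))

def diverse_density(t, pos_bags, neg_bags):
    dd = np.prod([pr_bag(b, t) for b in pos_bags])
    dd *= np.prod([1.0 - pr_bag(b, t) for b in neg_bags])
    return dd

pos_bags = [np.array([[0.1, 0.0], [2.0, 2.0]]),   # each row: a region feature
            np.array([[0.0, 0.2], [5.0, 1.0]])]
neg_bags = [np.array([[2.1, 1.9], [4.8, 1.2]])]

for cand in (np.array([0.05, 0.1]), np.array([2.0, 2.0])):
    print(cand, "->", diverse_density(cand, pos_bags, neg_bags))
```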
6.
7.
Objective: Deep-model-based tracking algorithms typically require large-scale, high-quality annotated training datasets, while manually annotating video frame by frame costs substantial labor and time. This paper proposes a lightweight Transformer-based video annotation algorithm (TLNet, Transformer-based label network) for efficient frame-by-frame annotation of large-scale, sparsely annotated video datasets. Method: The algorithm uses a Transformer model to process temporal target appearance and motion information and fuses forward and backward tracking results. A quality-assessment subnetwork filters out frames where tracking failed, which are then annotated manually; a regression subnetwork refines the initial annotations of the remaining frames and outputs more accurate bounding-box annotations. The algorithm generalizes well and is decoupled from the specific tracker, so any existing lightweight tracking algorithm can be applied for efficient automatic video annotation. Result: Annotations were generated for two large-scale tracking datasets. On the LaSOT (large-scale single object tracking) dataset, automatic annotation took only about 43 hours, and the mean intersection over union (mIoU) with the ground-truth annotations improved from 0.824 to 0.871. On the TrackingNet dataset, three tracking algorithms were retrained with our automatic annotations and tested on three datasets; models trained with our annotations outperformed models trained with the original TrackingNet annotations. Conclusion: TLNet mines temporal target appearance and motion information, performs frame-level quality assessment of forward and backward tracking results, and further refines the bounding boxes. It is decoupled from the specific tracker, generalizes well, saves over 90% of manual annotation cost, and efficiently produces high-quality video annotations.
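A toy Python sketch of the forward/backward fusion and quality-gating idea follows; the IoU-based confidence, box averaging, and threshold are illustrative assumptions, not TLNet's learned subnetworks.

```python
# Run a tracker in both directions, fuse the two boxes per frame, and send
# frames whose forward/backward agreement is low to manual annotation.

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_tracks(fwd, bwd, iou_thresh=0.5):
    """Average forward/backward boxes; flag disagreeing frames for humans."""
    fused, manual = [], []
    for i, (f, b) in enumerate(zip(fwd, bwd)):
        if iou(f, b) < iou_thresh:          # likely tracking failure
            manual.append(i)
            fused.append(None)
        else:
            fused.append([(fa + ba) / 2 for fa, ba in zip(f, b)])
    return fused, manual

fwd = [[10, 10, 50, 50], [12, 11, 52, 51], [300, 300, 340, 340]]
bwd = [[11, 10, 51, 50], [13, 12, 52, 52], [14, 13, 55, 54]]
boxes, to_label = fuse_tracks(fwd, bwd)
print("manual frames:", to_label)           # frame 2 disagrees badly
```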
8.
Automatic image annotation is an important and challenging problem in pattern recognition and computer vision. To address the low data utilization of existing models and their sensitivity to the imbalance between positive and negative samples, this paper proposes a new layered automatic image annotation model based on discriminative and generative models. In the first layer, a discriminative model assigns topic annotations to unlabeled images and obtains the corresponding relevant image sets. In the second layer, a proposed keyword-oriented method establishes links between images and keywords, and a proposed iterative algorithm expands the semantic keywords and the relevant images respectively. Finally, a generative model together with the expanded relevant image sets produces detailed annotations for the unlabeled images. The model combines the advantages of discriminative and generative models and achieves better annotation results from fewer relevant training images. Experiments on the Corel 5K image dataset verify the effectiveness of the model.
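The iterative expansion step, the facet not covered by the sketch under entry 1, can be pictured as alternating growth of the keyword set and the relevant image set; the toy tags, vote threshold, and convergence rule below are assumptions for illustration.

```python
# Alternate two steps until nothing changes: images tagged with current
# keywords join the relevant set, and tags supported by enough relevant
# images join the keyword set.
from collections import Counter

image_tags = {
    "a": {"tiger", "grass"}, "b": {"tiger", "grass"},
    "c": {"grass", "sky"},   "d": {"car", "road"},
}

def expand(seed_keywords, min_votes=2, rounds=3):
    keywords, relevant = set(seed_keywords), set()
    for _ in range(rounds):
        # image step: any image sharing a current keyword becomes relevant
        relevant |= {img for img, tags in image_tags.items() if tags & keywords}
        # keyword step: tags supported by enough relevant images are added
        votes = Counter(t for img in relevant for t in image_tags[img])
        new = {t for t, v in votes.items() if v >= min_votes}
        if new <= keywords:                 # converged: nothing new
            break
        keywords |= new
    return keywords, relevant

print(expand({"tiger"}))                    # grows to include "grass"
```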
9.
Pattern matching, or querying, over annotations is a general-purpose paradigm for inspecting, navigating, mining, and transforming annotation repositories, the common representation basis for modern pipelined text processing architectures. The open-ended nature of these architectures and the expressiveness of feature-structure-based annotation schemes account for the natural tendency of such annotation repositories to become very dense, as multiple levels of analysis get encoded as layered annotations. This characteristic presents challenges for the design of a pattern matching framework capable of interpreting ‘flat’ patterns over arbitrarily dense annotation lattices. We present an approach where a finite state device applies (compiled) pattern grammars over what is, in effect, a linearized ‘projection’ of a particular route through the lattice. The route is derived by a mix of static grammar analysis and runtime interpretation of navigational directives within an extended grammar formalism; it selects just the annotation sequence appropriate for the patterns at hand. For expressive and efficient pattern matching in dense annotation stores, our implemented approach achieves a mix of lattice traversal and finite state scanning by exposing a language which, to its user, provides constructs for specifying sequential, structural, and configurational constraints among annotations.
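A small sketch of the linearized-projection idea: one layer of a toy annotation lattice is projected to a symbol string, and a compiled regular expression serves as the finite-state matcher. The lattice encoding and the pattern are simplified assumptions, not the paper's grammar formalism.

```python
# Pick one route through a layered annotation lattice (here, a single chosen
# layer per span), linearize it, and scan it with a compiled pattern.
import re

# each token span carries annotations on several layers
lattice = [
    {"token": "Dr.",   "pos": "NNP", "ner": "TITLE"},
    {"token": "Ada",   "pos": "NNP", "ner": "PERSON"},
    {"token": "wrote", "pos": "VBD", "ner": "O"},
    {"token": "code",  "pos": "NN",  "ner": "O"},
]

def linearize(lattice, layer):
    """Project one route (a single layer) into a scannable symbol string."""
    return " ".join(span[layer] for span in lattice)

# 'flat' pattern over the NER projection: a title followed by a person
pattern = re.compile(r"TITLE PERSON")
projection = linearize(lattice, "ner")
print(projection, "->", bool(pattern.search(projection)))
```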
10.
Objective: With the rapid development of hyperspectral imaging technology, hyperspectral data are applied ever more widely, and applications in various scenarios increasingly demand accurate and detailed annotation. Existing hyperspectral classification models have developed mostly under supervised learning, and most methods are trained and evaluated within a single hyperspectral data cube. Because different hyperspectral datasets are acquired in different scenes with inconsistent land-cover categories, a trained model cannot be transferred directly to a new dataset to obtain reliable annotations, which limits the further development of hyperspectral image classification models. This paper proposes a paradigm for training and evaluating hyperspectral classification models across datasets. Method: Inspired by zero-shot learning, we introduce the semantic information of hyperspectral class labels. By mapping the raw data and label information of different datasets into a common feature space, we establish associations between seen and unseen classes; by mapping the two kinds of features of the training dataset into a unified embedding space, we learn the correspondence between hyperspectral visual features and label semantic features, which can then be applied to the test dataset for label inference. Result: Experiments were conducted on a pair of datasets acquired by the same sensor. We compared the semantic-to-visual and visual-to-semantic mapping directions and five zero-shot-learning feature mapping methods, achieving cross-dataset training and evaluation of classification models on the hyperspectral image classification task. Conclusion: The experimental results show that the proposed zero-shot-learning-based hyperspectral classification model enables cross-dataset training and evaluation of classification models and shows development potential for hyperspectral image classification.
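A hedged sketch of the cross-dataset recipe: a ridge-regression mapping from spectra to the semantic space is learned on the source dataset, and target pixels are labeled by the nearest class-semantic vector. The synthetic class semantics and the linear mapping are assumptions, one of several possible zero-shot mapping choices rather than the paper's compared methods.

```python
# Learn a visual-to-semantic mapping on a source dataset, then infer labels
# for unseen classes in a target dataset by nearest class-semantic vector.
import numpy as np

rng = np.random.default_rng(3)
n_bands, sem_dim, n_src_classes, n_tgt_classes = 100, 16, 5, 4
class_sem_src = rng.normal(size=(n_src_classes, sem_dim))  # label semantics
class_sem_tgt = rng.normal(size=(n_tgt_classes, sem_dim))  # unseen classes

# source pixels with known labels
X_src = rng.normal(size=(400, n_bands))
y_src = rng.integers(0, n_src_classes, size=400)
S_src = class_sem_src[y_src]                 # target semantic vectors

# ridge regression: W maps spectra to the semantic space
lam = 1.0
W = np.linalg.solve(X_src.T @ X_src + lam * np.eye(n_bands), X_src.T @ S_src)

def infer_labels(X_tgt):
    """Assign each target pixel the nearest class-semantic vector."""
    proj = X_tgt @ W                          # (n, sem_dim)
    d = np.linalg.norm(proj[:, None, :] - class_sem_tgt[None], axis=2)
    return d.argmin(axis=1)

X_tgt = rng.normal(size=(10, n_bands))
print(infer_labels(X_tgt))                    # labels from unseen classes
```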