Similar Articles
20 similar articles found.
1.
Automatic Image Annotation by Fusing Semantic Topics. Cited 7 times (0 self-citations, 7 by others)
Because of the semantic gap, automatic image annotation has become an important research topic. Building on probabilistic latent semantic analysis (PLSA), a method that fuses semantic topics is proposed for image annotation and retrieval. First, to model the training data more accurately, the visual features of each image are represented as a bag of visual words. A probabilistic model is then designed to capture latent semantic topics from the visual and textual modalities separately, and an adaptive asymmetric learning method is proposed to fuse the two kinds of topics. For each image document, its topic distributions over the two modalities are fused by weighting, with the weight determined by the entropy of the document's visual-word distribution. The fused probabilistic model thus properly relates the information of the visual and textual modalities and can predict semantic annotations for unseen images well. The proposed method is compared with several state-of-the-art annotation methods on a common Corel image dataset; experimental results show that it achieves better annotation and retrieval performance.
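A minimal sketch of the entropy-driven fusion step follows. The exact mapping from histogram entropy to the fusion weight is an assumption; the abstract only states that the weight is determined by the entropy of the document's visual-word distribution.

```python
import numpy as np

def entropy_weighted_fusion(visual_topics, text_topics, visual_word_hist):
    """Fuse per-document topic distributions from two modalities.

    The visual modality's weight is derived from the entropy of the
    document's visual-word histogram: a peaked (low-entropy) histogram
    suggests the visual evidence is reliable, so it gets more weight.
    """
    p = visual_word_hist / visual_word_hist.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    max_entropy = np.log(len(p))                 # entropy of a uniform histogram
    visual_weight = 1.0 - entropy / max_entropy  # assumed form of the weighting
    fused = visual_weight * visual_topics + (1.0 - visual_weight) * text_topics
    return fused / fused.sum()

# Toy example: 4 latent topics, 6 visual words.
vis = np.array([0.6, 0.2, 0.1, 0.1])
txt = np.array([0.3, 0.4, 0.2, 0.1])
hist = np.array([12.0, 1, 1, 1, 1, 1])
print(entropy_weighted_fusion(vis, txt, hist))
```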

2.
Image classification assigns a category to an image, while image annotation describes the individual components of an image using annotation terms. These two learning tasks are strongly related. The main contribution of this paper is to propose a new discriminative and sparse topic model (DSTM) for image classification and annotation by combining visual, annotation and label information from a set of training images. The essential features of DSTM that distinguish it from existing approaches are that (i) the label information is enforced in the generation of both visual words and annotation terms such that each generative latent topic corresponds to a category; (ii) the zero-mean Laplace distribution is employed to give a sparse representation of images in visual words and annotation terms such that relevant words and terms are associated with latent topics. Experimental results demonstrate that the proposed method provides discrimination ability in classification and annotation, and its performance is better than the competing methods (sLDA-ann, abc-corr-LDA, SupDocNADE, SAGE and MedSTC) on the LabelMe, UIUC, NUS-WIDE and PascalVOC07 datasets.
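DSTM's inference is not reproduced here; the sketch below only illustrates why a zero-mean Laplace prior yields sparsity. Under a unit-variance Gaussian likelihood, the MAP estimate with a Laplace prior is soft-thresholding, which drives small coefficients exactly to zero. This is a standard argument, not the paper's procedure.

```python
import numpy as np

def laplace_map_sparsify(theta, b=1.0):
    """Soft-threshold a representation vector.

    A zero-mean Laplace prior with scale b contributes an L1 penalty
    |theta| / b; with a unit-variance Gaussian likelihood the MAP
    solution is this shrinkage operator, so small coefficients vanish.
    """
    return np.sign(theta) * np.maximum(np.abs(theta) - 1.0 / b, 0.0)

print(laplace_map_sparsify(np.array([3.0, 0.4, -1.5, 0.1])))
# [ 2.   0.  -0.5  0. ]  (small weights are zeroed out)
```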

3.
To make effective use of both low-level visual features and semantic features for image annotation, an improved Affinity Propagation (AP) clustering annotation model is proposed. First, a semi-supervised distance metric learning algorithm incorporating image semantic information is used to train a new distance metric. With the new metric, AP clustering is performed on each image category to generate per-category cluster centers, and the average distance from the image to be annotated to each category's cluster centers determines its category. Finally, the distances from the image to the cluster centers within that category determine its within-category cluster, and the annotation words of the images under that cluster are collected as the annotation words of the image. Experiments on the Corel5K and NUS-WIDE datasets verify that the method effectively improves annotation accuracy.
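The clustering and category-assignment steps can be sketched with scikit-learn's AffinityPropagation. This is a sketch under assumed Euclidean distances; the paper applies the learned semi-supervised metric instead.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Stand-in features for the images of one category; in the paper these
# distances would come from the learned semi-supervised metric.
X = rng.normal(size=(50, 16))

ap = AffinityPropagation(random_state=0).fit(X)
centers = ap.cluster_centers_            # exemplars for this category

def mean_distance_to_centers(query, centers):
    """Average distance from an unlabeled image to a category's exemplars;
    the category with the smallest value is assigned to the image."""
    return np.linalg.norm(centers - query, axis=1).mean()

query = rng.normal(size=16)
print(mean_distance_to_centers(query, centers))
```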

4.
Automatic image semantic annotation remains a challenging problem. Building on the cross-media relevance model, a new image semantic annotation method that fuses image category information is proposed, and an association-rule mining algorithm is used to improve the annotation results. First, low-level features are extracted and each image is described as a bag of visual words. K-means clustering and multi-class SVM classification are then applied to the image features, yielding image similarity relations and category information. The probabilistic relations between semantic labels and images are computed, and the category information is fused into the labels' statistical probabilities as a weight, producing a set of candidate annotation words. Finally, taking the candidate probabilities as the basis, an improved association-rule mining algorithm mines textual correlations, and the candidate set is discretized with equal-frequency binning to obtain the final annotation. Annotation experiments on the Corel image set achieved fairly good results.
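How the category information might be folded into the candidate tag probabilities is sketched below. The fusion rule is an assumption; the abstract only states that category information enters the labels' statistical probabilities as a weight.

```python
import numpy as np

def reweight_candidates(word_probs, word_cat_affinity, cat_scores):
    """Fuse per-word annotation probabilities with SVM category scores.

    word_probs:        p(word | image) from the relevance model, shape (W,)
    word_cat_affinity: p(word | category) estimated from training tags, (W, C)
    cat_scores:        normalized multi-class SVM scores for the image, (C,)
    """
    category_weight = word_cat_affinity @ cat_scores   # expected tag relevance
    fused = word_probs * category_weight
    return fused / fused.sum()
```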

5.
6.
7.
This paper presents a unified annotation and retrieval framework, which integrates region annotation with image retrieval for performance reinforcement. To integrate semantic annotation with region-based image retrieval, visual and textual fusion is proposed for both soft matching and Bayesian probabilistic formulations. To address sample insufficiency and sample asymmetry in the annotation classifier training phase, we present a region-level multi-label image annotation scheme based on pair-wise coupling support vector machine (SVM) learning. In the retrieval phase, to achieve semantic-level region matching we present a novel retrieval scheme that differs from former work: the query example uploaded by users is automatically annotated online, and the user can judge its annotation quality. Based on the user's judgment, two novel schemes are deployed for semantic retrieval: (1) if the user judges the photo to be well annotated, Semantically supervised Integrated Region Matching is adopted, which is a keyword-integrated soft region matching method; (2) if the user judges the photo to be poorly annotated, Keyword Integrated Bayesian Reasoning is adopted, which is a natural integration of a Visual Dictionary in online content-based search. In the relevance feedback phase, we conduct both visual and textual learning to capture the user's retrieval target. Better annotation and retrieval performance than current methods was reported on both the COREL 10,000 and Flickr (25,000 images) web image databases, demonstrating the effectiveness of the proposed framework.

8.
Automatic image annotation is a challenging task that is important for both image understanding and image retrieval. In automatic image annotation, a model of the relationship between the semantic concept space and the visual feature space is learned from an annotated image set and then used to annotate unlabeled images. Because of the intricate correspondence between low-level features and high-level semantics, annotation accuracy remains low. Under scene constraints, however, the mapping between annotations and visual features can be simplified, improving annotation reliability. This paper therefore proposes an image annotation method based on scene-semantic trees. The annotated training images are first clustered automatically into semantic scene classes, a visual scene space is generated for each class, and a semantic tree is built for each scene space. For an image to be annotated, its scene class is determined first, and the final annotation is obtained through the corresponding scene-semantic tree. On the Corel5K image set, the method outperforms models such as TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model), and PLSA-GMM (probabilistic latent semantic analysis with Gaussian mixture models).

9.
李东艳, 李绍滋, 柯逍. 《计算机应用》2010, 30(10): 2610-2613
To address the data imbalance in the datasets used for image annotation, a new automatic balancing model based on an external database is proposed. The model first identifies low-frequency points from the word-frequency distribution of the original database; following the automatic balancing scheme, it then adds images from the external database for each low-frequency word. Features are extracted from the images, and the 47,065 visual words of the Corel 5k dataset are clustered together with the 996 visual words extracted from the images added from the external database. Finally, an annotation-improvement model based on the external database is used to annotate the images. This method overcomes the imbalance problem of annotation databases and clearly improves the number of words correctly annotated at least once, as well as precision and recall.
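Locating the low-frequency words is straightforward; a sketch follows. The fixed `threshold` is hypothetical, as the paper derives low-frequency points from the shape of the word-frequency distribution rather than a constant.

```python
from collections import Counter

def low_frequency_words(annotations, threshold=5):
    """Return annotation words whose training frequency is below a cut-off.

    `annotations` is a list of per-image tag lists. Images for each word
    returned here would then be fetched from the external database.
    """
    counts = Counter(tag for tags in annotations for tag in tags)
    return [w for w, c in counts.items() if c < threshold]

tags = [["sky", "water"], ["sky", "tree"], ["whale"]]
print(low_frequency_words(tags, threshold=2))   # ['water', 'tree', 'whale']
```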

10.
11.
This paper addresses the automatic image annotation problem and its application to multi-modal image retrieval. The contribution of our work is three-fold. (1) We propose a probabilistic semantic model in which the visual features and the textual words are connected via a hidden layer which constitutes the semantic concepts to be discovered, explicitly exploiting the synergy among the modalities. (2) The association of visual features and textual words is determined in a Bayesian framework such that the confidence of the association can be provided. (3) Extensive evaluation on a large-scale, visually and semantically diverse image collection crawled from the Web is reported to evaluate the prototype system based on the model. In the proposed probabilistic model, a hidden concept layer which connects the visual feature and the word layer is discovered by fitting a generative model to the training images and annotation words through an Expectation-Maximization (EM) based iterative learning procedure. The evaluation of the prototype system on 17,000 images and 7,736 annotation words automatically extracted from crawled Web pages for multi-modal image retrieval indicates that the proposed semantic model and the developed Bayesian framework are superior to a state-of-the-art peer system in the literature.
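The EM procedure for a PLSA-style hidden-concept layer can be sketched as follows. This covers the text modality only; the paper's full model also couples continuous visual features, which this sketch omits.

```python
import numpy as np

def plsa_em(counts, n_concepts, n_iters=50, seed=0):
    """Fit p(z|d) and p(w|z) to a document-word count matrix with EM.

    counts: (n_docs, n_words) co-occurrence matrix (images x words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_concepts)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_concepts, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iters):
        # E-step: posterior p(z | d, w) over the hidden concepts
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]   # (D, W, Z)
        post = joint / (joint.sum(2, keepdims=True) + 1e-12)
        # M-step: re-estimate the parameters from expected counts
        expected = counts[:, :, None] * post
        p_z_d = expected.sum(1); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
        p_w_z = expected.sum(0).T; p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```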

12.
An Image Semantic Annotation Method Based on a Multi-Modal Association Graph. Cited 1 time (0 self-citations, 1 by others)
郭玉堂, 罗斌. 《计算机应用》2010, 30(12): 3295-3297
To improve image annotation performance, an image semantic annotation method based on a multi-modal association graph is proposed. An undirected graph expresses the relations among image region features, annotation words, and images; image semantic information is extracted by combining region-feature similarity with semantic correlation, improving annotation accuracy. The weights of the edges between an image node and its annotation-word nodes are corrected with inverse document frequency (IDF), overcoming the bias caused by high-frequency words in traditional methods and effectively improving annotation performance. Experiments on the Corel image dataset verify the effectiveness of the method.
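The IDF correction on image-to-word edges can be sketched as below. A standard IDF form is assumed; the abstract does not give the exact formula.

```python
import numpy as np

def idf_edge_weight(word, image_tags, all_tags):
    """Weight the edge between an image node and one of its word nodes.

    Down-weights edges to words that annotate many training images, so
    high-frequency words do not dominate the association graph.
    """
    if word not in image_tags:
        return 0.0
    n_images = len(all_tags)
    doc_freq = sum(1 for tags in all_tags if word in tags)
    return np.log(n_images / (1 + doc_freq))
```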

13.
To improve the organization and management of Web animation material, this paper proposes an annotation algorithm for Web animation material that fuses textual and visual features. First, automatically extracted contextual information of the material is combined with attributes such as its name, page title, URL, and ALT text to form a feature set from which textual keywords are extracted. The automatically extracted annotation words are then filtered using the correlation between visual features and annotation words, realizing automatic annotation of the material. Experiments show that the proposed algorithm can be applied effectively to automatic annotation of Web animation material.

14.
This paper proposes a semantic-extraction method for local image features based on the expectation-maximization (EM) algorithm. Local image features are first extracted, their occurrence frequencies over a visual vocabulary are counted, and each image is represented with the bag-of-words model. Latent semantic analysis from text analysis is introduced to build a mapping model from low-level image features to high-level image semantics; the EM algorithm then fits the probabilistic model to obtain the latent-semantic probability distribution of the local features. Finally, the images' distributions over the latent semantics extracted by this model are used for image analysis and understanding. Compared with other semantics-based image understanding methods, this method requires no manual annotation: it discovers local latent semantics directly from low-level features in an unsupervised manner, obtaining both the local semantic information and its spatial distribution, and can therefore model scenes better. To verify the effectiveness of the extracted semantics, experiments on 15 scene categories show that the method achieves good classification accuracy.
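The bag-of-words construction described here is standard; a sketch follows. The vocabulary size and plain k-means quantization are assumptions, and the local descriptors (e.g. SIFT) are taken as given.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_visual_words(descriptors_per_image, vocab_size=100, seed=0):
    """Quantize local descriptors into a vocabulary and count occurrences.

    descriptors_per_image: list of (n_i, d) arrays of local features.
    Returns an (n_images, vocab_size) histogram matrix, the bag-of-words
    representation that the latent-semantic model is then fitted to.
    """
    all_desc = np.vstack(descriptors_per_image)
    km = KMeans(n_clusters=vocab_size, random_state=seed, n_init=10).fit(all_desc)
    hists = np.zeros((len(descriptors_per_image), vocab_size))
    for i, desc in enumerate(descriptors_per_image):
        hists[i] = np.bincount(km.predict(desc), minlength=vocab_size)
    return hists
```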

15.
16.
17.
Automatic image annotation aims at predicting a set of semantic labels for an image. Because of the large annotation vocabulary, there exist large variations in the number of images corresponding to different labels ("class-imbalance"). Additionally, due to the limitations of human annotation, several images are not annotated with all the relevant labels ("incomplete-labelling"). These two issues affect the performance of most of the existing image annotation models. In this work, we propose the 2-pass k-nearest neighbour (2PKNN) algorithm, a two-step variant of the classical k-nearest neighbour algorithm that tries to address these issues in the image annotation task. The first step of 2PKNN uses "image-to-label" similarities, while the second step uses "image-to-image" similarities, thus combining the benefits of both. We also propose a metric learning framework over 2PKNN. This is done in a large-margin set-up by generalizing a well-known (single-label) classification metric learning algorithm for multi-label data. In addition to the features provided by Guillaumin et al. (2009), which are used by almost all recent image annotation methods, we benchmark using new features that include features extracted from a generic convolutional neural network model and those computed using modern encoding techniques. We also learn linear and kernelized cross-modal embeddings over different feature combinations to reduce the semantic gap between visual features and textual labels. Extensive evaluations on four image annotation datasets (Corel-5K, ESP-Game, IAPR-TC12 and MIRFlickr-25K) demonstrate that our method achieves promising results, and establishes a new state-of-the-art on the prevailing image annotation datasets.
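A much-simplified sketch of the two passes follows. Euclidean distances and an exponential kernel are stand-ins; the actual method learns the metric and pools a per-label balanced neighborhood before scoring.

```python
import numpy as np

def two_pass_knn(query, features, labels_per_image, vocab, K=3):
    """Score labels for a query image in two passes.

    Pass 1: for each label, keep the K nearest training images carrying
    that label (this balances rare labels). Pass 2: score the label by
    similarity-weighted votes from those images.
    """
    dists = np.linalg.norm(features - query, axis=1)
    scores = {}
    for label in vocab:
        idx = [i for i, labs in enumerate(labels_per_image) if label in labs]
        nearest = sorted(idx, key=lambda i: dists[i])[:K]
        scores[label] = sum(np.exp(-dists[i]) for i in nearest)
    return sorted(scores, key=scores.get, reverse=True)
```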

18.
Previous work has shown that incorporating visual semantic information from images can improve text machine translation models. Most existing work fuses the overall visual semantics of an image into the translation model, yet an image may contain different semantic objects, and these local semantic objects influence the prediction of target-side words to different degrees. This paper therefore proposes a multi-modal machine translation model that fuses image attention: the interaction between the source text and the image's global semantics and the local semantics of its different parts is fused, as image attention, into the text attention weights, further strengthening the alignment between the decoder's hidden states and the source text. Experiments on the English-German pairs of the Multi30k multi-modal machine translation dataset and on manually annotated Indonesian-Chinese pairs show that the proposed model improves on existing RNN-based multi-modal machine translation models, demonstrating its effectiveness.
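One way the fusion into the text attention weights could look is sketched below. The fixed `gate` coefficient is hypothetical; the paper models the interaction between the image semantics and the source text rather than using a constant.

```python
import numpy as np

def fused_attention(text_scores, image_scores, gate=0.3):
    """Mix image-conditioned alignment scores into the text attention.

    text_scores / image_scores: unnormalized alignment scores over the
    source positions for one decoding step.
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    return softmax((1.0 - gate) * text_scores + gate * image_scores)
```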

19.
To generate visual vocabularies that effectively represent image scene semantics and to improve scene annotation performance, an image scene annotation model based on formal concept analysis (FCA) is proposed. The training image set and its initial visual vocabulary are first abstracted as a formal context, the weight of each visual word is marked with information entropy, and a concept lattice is constructed for each scene category. The mean weight of the visual words in each concept's intent then measures the contribution of that word combination to annotating images; using per-category vocabulary-generation thresholds, visual vocabularies for annotating each scene class are extracted from the lattice structure. Finally, the scene semantics of test images are annotated with k-nearest neighbours. Experiments on the 13-class Fei-Fei natural scene image dataset, compared with the methods of Fei-Fei and Bai, show that the method achieves better annotation classification accuracy with β = 0.05 and γ = 15.

20.
To reduce the impact of the semantic gap in image retrieval, an automatic image annotation method based on visual semantic topics is proposed. First, the foreground and background regions of an image are extracted and preprocessed separately. Probabilistic latent semantic analysis and Gaussian mixture models are then used to establish the links among low-level image features, visual semantic topics, and annotation keywords, and images are annotated automatically based on this model. Experiments on the Corel 5 database demonstrate the effectiveness of the method.
