Similar Literature
20 similar records found.
1.
Automatic Image Annotation by Fusing Semantic Topics (cited 7 times: 0 self-citations, 7 by others)
Owing to the semantic gap, automatic image annotation has become an important research topic. Building on probabilistic latent semantic analysis (PLSA), this paper proposes a method that fuses semantic topics for image annotation and retrieval. First, to model the training data more accurately, the visual features of each image are represented as a bag of visual words. A probabilistic model is then designed to capture latent semantic topics from the visual and textual modalities separately, and an adaptive asymmetric learning method is proposed to fuse the two kinds of topics. For each image document, the topic distributions of the two modalities are fused by weighting, where the weight is determined by the entropy of the document's visual-word distribution. The fused probabilistic model thus appropriately correlates information from the visual and textual modalities and can predict semantic annotations for unseen images well. The proposed method is compared with several state-of-the-art image annotation methods on the standard Corel image dataset, and the experimental results show that it achieves better annotation and retrieval performance.
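A minimal numpy sketch of the entropy-weighted fusion step described above. The weighting formula and the function name `fuse_topic_distributions` are illustrative assumptions rather than the paper's exact procedure: a flatter (higher-entropy) visual-word histogram shifts weight toward the textual topics.

```python
import numpy as np

def fuse_topic_distributions(visual_topics, text_topics, visual_word_hist):
    """Fuse per-modality topic distributions for one image document.

    The mixing weight is derived from the entropy of the image's visual-word
    histogram: a flat (high-entropy) histogram is treated as less informative,
    so the textual modality receives more weight. (Illustrative weighting
    scheme; the paper's exact formula may differ.)
    """
    p = visual_word_hist / visual_word_hist.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    max_entropy = np.log(len(p))                 # entropy of a uniform histogram
    alpha = 1.0 - entropy / max_entropy          # confidence in the visual modality
    fused = alpha * visual_topics + (1.0 - alpha) * text_topics
    return fused / fused.sum()

# toy example: 4 latent topics, 8 visual words
visual_topics = np.array([0.5, 0.2, 0.2, 0.1])
text_topics = np.array([0.1, 0.6, 0.2, 0.1])
hist = np.array([12, 1, 0, 3, 0, 0, 2, 0], dtype=float)
print(fuse_topic_distributions(visual_topics, text_topics, hist))
```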

2.
The semantic gap has become a bottleneck of content-based image retrieval in recent years. To bridge the gap and improve retrieval performance, automatic image annotation has emerged as a crucial problem. In this paper, a hybrid approach is proposed to learn the semantic concepts of images automatically. Firstly, we present continuous probabilistic latent semantic analysis (PLSA) and derive its corresponding Expectation-Maximization (EM) algorithm. Continuous PLSA assumes that elements are sampled from a multivariate Gaussian distribution given a latent aspect, instead of a multinomial one as in traditional PLSA. Furthermore, we propose a hybrid framework which employs continuous PLSA to model visual features of images in the generative learning stage and uses ensembles of classifier chains to classify the multi-label data in the discriminative learning stage. The framework can therefore learn the correlations between features as well as the correlations between words. Since the hybrid approach combines the advantages of generative and discriminative learning, it can predict semantic annotations precisely for unseen images. Finally, we conduct experiments on three baseline datasets and the results show that our approach outperforms many state-of-the-art approaches.
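As a rough illustration of the discriminative stage, the sketch below trains an ensemble of classifier chains with scikit-learn on synthetic features. The random vectors stand in for the continuous-PLSA topic distributions, and the label count and ensemble size are made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# Toy data: 200 images represented by 10-dimensional latent-topic vectors
# (stand-ins for continuous-PLSA topic distributions), 5 candidate labels.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
Y = (rng.random((200, 5)) > 0.7).astype(int)

# An ensemble of chains with random label orders; each chain feeds earlier
# predictions as extra features to later classifiers, so label correlations
# are captured in the discriminative stage.
chains = [ClassifierChain(LogisticRegression(max_iter=1000), order="random",
                          random_state=i) for i in range(5)]
for chain in chains:
    chain.fit(X, Y)

# Average the chains' probability estimates and threshold to get annotations.
probas = np.mean([chain.predict_proba(X[:3]) for chain in chains], axis=0)
print((probas > 0.5).astype(int))
```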

3.
To reduce the loss of image information and the impact of the semantic gap in image retrieval, an automatic image annotation method based on multi-feature fusion and PLSA-GMM is proposed. First, color, shape, and texture features are extracted and fused to form the low-level image representation. Then, probabilistic latent semantic analysis (PLSA) and a Gaussian mixture model (GMM) are used to link low-level features, visual semantic topics, and annotation keywords, and images are annotated automatically on the basis of this model. Experiments on the Corel 5k database confirm the effectiveness of the method.

4.
Automatic image annotation is a challenging task that is important for both image analysis and understanding and for image retrieval. In automatic image annotation, a model of the relationship between the semantic concept space and the visual feature space is learned from an annotated image set and then used to annotate unlabeled images. Because of the intricate correspondence between low-level features and high-level semantics, current annotation accuracy remains low. Under scene constraints, however, the mapping between annotations and visual features can be simplified, which improves the reliability of automatic annotation. This paper therefore proposes an image annotation method based on scene semantic trees. Training images are first clustered automatically into semantic scene categories, a visual scene space is generated for each category, and a semantic tree is built for each scene space. For an image to be annotated, its scene category is determined first, and its final annotations are obtained through the corresponding scene semantic tree. On the Corel5K image set, the method obtains better annotation results than TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model), and PLSA-GMM (probabilistic latent semantic analysis with Gaussian mixture models).
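A much simplified sketch of the scene-constrained idea, assuming k-means scene clustering and a flat per-scene keyword list in place of the paper's scene semantic trees; the function names and toy data are illustrative.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def train_scene_clusters(features, labels, n_scenes=3, random_state=0):
    """Cluster training images into scene categories and record each
    scene's label vocabulary (a flat stand-in for the semantic tree)."""
    km = KMeans(n_clusters=n_scenes, n_init=10, random_state=random_state).fit(features)
    scene_vocab = {c: Counter() for c in range(n_scenes)}
    for scene, image_labels in zip(km.labels_, labels):
        scene_vocab[scene].update(image_labels)
    return km, scene_vocab

def annotate(km, scene_vocab, feature, top_k=3):
    """Assign the image to its scene cluster, then return that scene's most frequent keywords."""
    scene = int(km.predict(feature.reshape(1, -1))[0])
    return [w for w, _ in scene_vocab[scene].most_common(top_k)]

# toy example: 6 training images with 2-D features and keyword lists
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.5, 0.5], [0.45, 0.55]])
Y = [["sky", "sea"], ["sky", "beach"], ["street", "car"],
     ["street", "building"], ["forest", "tree"], ["forest", "grass"]]
km, vocab = train_scene_clusters(X, Y, n_scenes=3)
print(annotate(km, vocab, np.array([0.15, 0.18])))
```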

5.
Objective: Because of the "semantic gap" between low-level features and high-level semantics in image retrieval, automatic image annotation has become a key problem. To narrow the gap, an annotation method that mixes generative and discriminative models is proposed. Method: In the generative learning stage, images are modeled with continuous probabilistic latent semantic analysis, which yields the model parameters and a topic distribution for each image. Taking this topic distribution as an intermediate representation vector, automatic annotation becomes a multi-label classification problem. In the discriminative learning stage, ensembles of classifier chains are learned on the intermediate representation vectors; building the chains also integrates contextual information among annotation keywords, which yields higher annotation accuracy and better retrieval results. Results: Experiments on two benchmark datasets show that the method achieves average precision and average recall of 0.28 and 0.32 on Corel5k and 0.29 and 0.18 on IAPR-TC12, outperforming most current state-of-the-art automatic annotation methods; it also outperforms several representative methods on the precision-recall curve. Conclusion: The proposed hybrid-learning annotation method integrates the respective advantages of generative and discriminative models and shows good effectiveness and robustness in semantic image retrieval. With appropriate modification, it can also play a role in cross-media retrieval and data mining, beyond image retrieval and recognition.

6.
7.
8.
Automatic image semantic annotation remains a challenging problem. Building on the cross-media relevance model (CMRM), a new annotation method that fuses image category information is proposed, and an association-rule mining algorithm is used to refine the results. First, low-level features are extracted and each image is described as a bag of visual words. K-means clustering and a support-vector-machine multi-class classifier are then applied to the features to obtain image similarity relations and category information. Probabilistic relations between semantic labels and images are computed, and the category information is fused as weights into the labels' statistical probabilities to produce a candidate annotation set. Finally, taking the candidate annotation probabilities as a basis, an improved association-rule mining algorithm mines textual correlations, and the candidate annotation set is discretized by equal frequency to obtain the final annotations. Annotation experiments on the Corel image set give fairly good results.
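The following sketch illustrates the fusion step in which category information weights the keyword probabilities; the combination rule and the function `candidate_annotations` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def candidate_annotations(p_word_given_image, p_class_given_image,
                          p_word_given_class, top_k=5):
    """Weight the CMRM-style word probabilities of one image by its
    predicted category, as a rough stand-in for the fusion step above.

    p_word_given_image : (V,)   word probabilities from the relevance model
    p_class_given_image: (C,)   SVM class posterior for the image
    p_word_given_class : (C, V) word frequencies per category
    """
    class_weight = p_class_given_image @ p_word_given_class   # (V,)
    scores = p_word_given_image * class_weight
    scores /= scores.sum()
    return np.argsort(scores)[::-1][:top_k], scores

vocab = ["sky", "sea", "car", "street", "tree"]
p_wi = np.array([0.30, 0.25, 0.15, 0.10, 0.20])
p_ci = np.array([0.7, 0.2, 0.1])                 # e.g. coast / urban / forest
p_wc = np.array([[0.4, 0.4, 0.05, 0.05, 0.1],
                 [0.1, 0.05, 0.4, 0.4, 0.05],
                 [0.2, 0.05, 0.05, 0.1, 0.6]])
idx, scores = candidate_annotations(p_wi, p_ci, p_wc, top_k=3)
print([vocab[i] for i in idx])
```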

9.
Traditional latent semantic analysis (LSA) cannot capture the spatial layout of scene objects or discriminative information about latent topics. To address this, a scene classification method based on multi-scale spatial discriminative probabilistic latent semantic analysis (PLSA) is proposed. First, a spatial pyramid partitions the image at multiple scales to obtain spatial information, and a PLSA model yields the latent semantic information of each local block. The semantic information of the individual blocks is then concatenated to obtain the image's multi-scale spatial latent semantic representation. Finally, a proposed weight-learning method learns discriminative information between image topics, producing a multi-scale spatial discriminative latent semantic representation, and the learned weights are embedded into a support vector machine (SVM) classifier to perform scene classification. Experiments on three commonly used scene image databases (Scene-13, Scene-15, and Caltech-101) show that the method's average classification accuracy exceeds that of many existing state-of-the-art methods, confirming its effectiveness and robustness.
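A sketch of the multi-scale spatial partition step, assuming per-block visual-word histograms as a stand-in for the per-block PLSA topic vectors; the pyramid levels and vocabulary size are illustrative.

```python
import numpy as np

def spatial_pyramid_histogram(word_map, vocab_size, levels=(1, 2, 4)):
    """Concatenate per-block visual-word histograms over a spatial pyramid.

    word_map: 2-D array of visual-word indices, one per patch location.
    Each pyramid level splits the map into level x level blocks; the per-block
    histograms stand in for the per-block PLSA topic vectors in the paper.
    """
    h, w = word_map.shape
    feats = []
    for level in levels:
        ys = np.array_split(np.arange(h), level)
        xs = np.array_split(np.arange(w), level)
        for yb in ys:
            for xb in xs:
                block = word_map[np.ix_(yb, xb)]
                hist = np.bincount(block.ravel(), minlength=vocab_size).astype(float)
                feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)

# toy example: an 8x8 map of visual-word indices drawn from a 20-word vocabulary
rng = np.random.default_rng(0)
word_map = rng.integers(0, 20, size=(8, 8))
print(spatial_pyramid_histogram(word_map, vocab_size=20).shape)  # (1+4+16)*20 = 420
```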

10.
Because of its importance for image understanding and web image retrieval, automatic image annotation has become an active research topic in recent years. Building on the CMRM annotation model, a CMRM-based method that exploits inter-word correlation is proposed. The method extracts correlation relations among annotation words and, using a graph learning algorithm, refines the annotation results by superimposing the inter-word correlation matrix onto the initial annotation matrix. Experiments on natural-scene images from the Corel5k annotated image set show that the method annotates the test images well and improves both recall and precision over CMRM.
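A minimal sketch of refining an initial annotation matrix with an inter-word co-occurrence matrix. The one-step propagation and the mixing weight `alpha` are assumptions for illustration; the paper uses a graph learning algorithm.

```python
import numpy as np

def refine_annotations(initial_scores, cooccurrence, alpha=0.5):
    """Refine image-word annotation scores with inter-word correlation.

    initial_scores: (n_images, V) scores from the base model (e.g. CMRM)
    cooccurrence  : (V, V) word co-occurrence counts from the training set
    alpha         : weight given to the correlation term
    (A one-step propagation; the paper's graph-learning procedure may iterate.)
    """
    # row-normalise the co-occurrence matrix into word-to-word transition weights
    W = cooccurrence / np.maximum(cooccurrence.sum(axis=1, keepdims=True), 1e-12)
    refined = (1 - alpha) * initial_scores + alpha * initial_scores @ W
    return refined / refined.sum(axis=1, keepdims=True)

vocab = ["sky", "cloud", "sea", "car"]
scores = np.array([[0.6, 0.1, 0.25, 0.05]])
cooc = np.array([[0, 30, 10, 1],
                 [30, 0, 5, 0],
                 [10, 5, 0, 0],
                 [1, 0, 0, 0]], dtype=float)
print(refine_annotations(scores, cooc).round(3))   # "cloud" is pulled up by "sky"
```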

11.
Image Semantic Annotation Based on a Multi-modal Association Graph (cited once: 0 self-citations, 1 by others)
郭玉堂, 罗斌. 《计算机应用》, 2010, 30(12): 3295-3297.
To improve annotation performance, an image semantic annotation method based on a multi-modal association graph is proposed. An undirected graph represents the relationships among image region features, annotation words, and images, and image semantic information is extracted by combining region-feature similarity with semantic correlation, which improves annotation accuracy. Inverse document frequency (IDF) is used to adjust the weights of the edges between image nodes and their annotation-word nodes, overcoming the bias caused by high-frequency words in traditional methods and effectively improving annotation performance. Experiments on the Corel image dataset verify the effectiveness of the method.
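A small sketch of the IDF-based edge weighting, assuming a plain log(N/df) weight for each image-word edge; the paper's graph construction is richer (it also covers region-feature similarity).

```python
import math
from collections import Counter

def idf_edge_weights(image_annotations):
    """Weight each image-word edge by the word's inverse document frequency,
    so ubiquitous high-frequency words contribute less to the association graph.
    (Illustrative IDF variant; the paper's exact weighting may differ.)"""
    n_images = len(image_annotations)
    doc_freq = Counter(w for words in image_annotations for w in set(words))
    idf = {w: math.log(n_images / df) for w, df in doc_freq.items()}
    # edges[(image_id, word)] -> weight
    return {(i, w): idf[w] for i, words in enumerate(image_annotations) for w in words}

annotations = [["sky", "sea", "beach"], ["sky", "mountain"], ["sky", "street", "car"]]
for edge, weight in idf_edge_weights(annotations).items():
    print(edge, round(weight, 3))   # "sky" edges get weight log(3/3) = 0
```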

12.
Zhang Weifeng, Hu Hua, Hu Haiyang. Multimedia Tools and Applications, 2018, 77(17): 22385-22406.

Automatic image annotation aims to predict labels for images according to their semantic contents and has become a research focus in computer vision, as it helps people to edit, retrieve and understand large image collections. In the last decades, researchers have proposed many approaches to this task and achieved remarkable performance on several standard image datasets. In this paper, we propose a novel learning to rank approach to the image auto-annotation problem. Unlike typical learning to rank algorithms for image auto-annotation, which directly rank annotations for an image, our approach consists of two phases. In the first phase, neural ranking models are trained to rank an image's semantic neighbors. Then nearest-neighbor based models propagate annotations from these semantic neighbors to the image. Our approach thus integrates learning to rank algorithms with nearest-neighbor based models, including TagProp and 2PKNN, and inherits their advantages. Experimental results show that our method achieves better or comparable performance compared with the state-of-the-art methods on four challenging benchmarks: Corel5K, ESP Games, IAPR TC-12 and NUS-WIDE.
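A toy sketch of the second phase (neighbor-based annotation propagation) in the spirit of TagProp's rank-based weighting; the learned neural ranking model of the first phase is replaced here by Euclidean distances, so this is an assumption-laden simplification.

```python
import numpy as np

def propagate_tags(query_feat, neighbor_feats, neighbor_tags, vocab, k=3):
    """Propagate annotations from the k closest (stand-in for 'ranked')
    neighbors, with rank-based weights as in TagProp's simplest variant."""
    dists = np.linalg.norm(neighbor_feats - query_feat, axis=1)
    order = np.argsort(dists)[:k]
    scores = np.zeros(len(vocab))
    for rank, idx in enumerate(order):
        weight = 1.0 / (rank + 1)               # rank-based weight
        for tag in neighbor_tags[idx]:
            scores[vocab.index(tag)] += weight
    return scores / scores.sum()

vocab = ["sky", "sea", "tree", "car"]
feats = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
tags = [["sky", "sea"], ["sky"], ["car"], ["car", "tree"]]
print(propagate_tags(np.array([0.15, 0.85]), feats, tags, vocab, k=2))
```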


13.
The sharp-boundary problem is widespread in image classification algorithms that use traditional association rules, and it makes classification more difficult and less accurate. In addition, massive image data produce many redundant association rules, which greatly decrease the accuracy and efficiency of image classification. To reduce the influence of these two problems, this paper proposes a novel approach that integrates fuzzy association rules and a decision tree to accomplish automatic image annotation. Using membership functions on the original features, the approach first obtains fuzzy feature vectors, which can describe the ambiguity and vagueness of images. Fuzzy association rules are then generated from the fuzzy feature vectors with fuzzy support and fuzzy confidence; these rules capture correlations between low-level visual features and high-level semantic concepts. Finally, to tackle the large size of the fuzzy association rule base, a decision tree is adopted to prune unnecessary rules, which decreases the algorithm's complexity to a large extent. We conduct experiments on two baseline datasets, Corel5k and IAPR-TC12, with precision, recall, F-measure and rule count as evaluation measures. The experimental results show that our approach performs better than many state-of-the-art automatic image annotation approaches.
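A brief sketch of the fuzzification and fuzzy-support computation described above, assuming triangular membership functions and the min t-norm; the rule mining and decision-tree pruning stages are omitted, and all names are illustrative.

```python
import numpy as np

def triangular_membership(x, low, mid, high):
    """Triangular membership function mapping a crisp feature value to [0, 1]."""
    return np.clip(np.minimum((x - low) / (mid - low + 1e-12),
                              (high - x) / (high - mid + 1e-12)), 0.0, 1.0)

def fuzzy_support(memberships, item_columns):
    """Fuzzy support of an itemset: mean over transactions of the minimum
    membership across the itemset's fuzzy items (a common t-norm choice)."""
    return float(np.mean(memberships[:, item_columns].min(axis=1)))

# toy example: two crisp features fuzzified into "low"/"high" fuzzy items
crisp = np.array([[0.2, 0.9], [0.3, 0.8], [0.7, 0.1]])
memberships = np.column_stack([
    triangular_membership(crisp[:, 0], 0.0, 0.0, 0.5),   # feature0 is "low"
    triangular_membership(crisp[:, 0], 0.5, 1.0, 1.0),   # feature0 is "high"
    triangular_membership(crisp[:, 1], 0.5, 1.0, 1.0),   # feature1 is "high"
])
print(round(fuzzy_support(memberships, [0, 2]), 3))  # support of {f0-low, f1-high}
```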

14.
To reduce the impact of the semantic gap in image retrieval, an automatic image annotation method based on visual semantic topics is proposed. First, the foreground and background regions of an image are extracted and preprocessed separately. Then, probabilistic latent semantic analysis and a Gaussian mixture model are used to link low-level features, visual semantic topics, and annotation keywords, and images are annotated automatically on the basis of this model. Experiments on the Corel 5k database confirm the effectiveness of the method.

15.
This paper presents a novel approach to automatic image annotation which combines global, regional, and contextual features through an extended cross-media relevance model. Unlike typical image annotation methods, which use either global or regional features exclusively and neglect the textual context among the annotated words, the proposed approach incorporates all three kinds of information, which are helpful for describing image semantics, and annotates images by estimating their joint probability. Specifically, we describe the global features as a distribution vector of visual topics and model the textual context as a multinomial distribution. The global features provide the distribution of visual topics over an image, while the textual context relaxes the assumption of mutual independence among annotated words that is commonly adopted in most existing methods. Both the global features and the textual context are learned from the training data by a probabilistic latent semantic analysis approach. Experiments over 5k Corel images show that combining these three kinds of information is beneficial for image annotation.

16.
Automatic image annotation has emerged as an important research topic due to its potential applications in both image understanding and web image search. Because of the inherent ambiguity of image-label mapping and the scarcity of training examples, it remains challenging to systematically develop robust annotation models with good performance. From the perspective of machine learning, the annotation task fits both the multi-instance and the multi-label learning frameworks, since an image is usually described by multiple semantic labels (keywords) and these labels are often highly related to specific regions rather than the entire image. In this paper, we propose an improved Transductive Multi-Instance Multi-Label (TMIML) learning framework, which aims to take full advantage of both labeled and unlabeled data to address the annotation problem. Experiments over the well-known Corel 5000 data set demonstrate that the proposed method is beneficial for the image annotation task and outperforms most existing image annotation algorithms.

17.
Finding semantically similar images relies on image annotations assigned manually by amateurs or professionals, or computed automatically by some algorithm using low-level image features. These annotations create a keyword space in which a dissimilarity function quantifies the semantic relationship among images. In this setting, the objective of this paper is two-fold. First, we compare amateur to professional user annotations and propose a model of manual annotation errors, more specifically an asymmetric binary model. Second, we examine different aspects of search by semantic similarity: the accuracy of manual annotations versus automatic annotations, the influence of manual annotations of differing accuracy caused by incorrect annotations, and the influence of the keyword space dimensionality. To assess these aspects we conducted experiments on a professional image dataset (Corel) and two amateur image datasets (one with 25,000 Flickr images and a second with 269,648 Flickr images) with a large number of keywords, with different similarity functions, and with both manual and automatic annotation methods. We find that amateur-level manual annotations offer better performance for top-ranked results in all datasets (MP@20). However, for full-rank measures (MAP) on the real datasets (Flickr), retrieval by semantic similarity with automatic annotations is similar to or better than with amateur-level manual annotations.
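A hedged sketch of what an asymmetric binary error model might look like in simulation: true keywords are dropped with one probability and spurious keywords added with a much smaller one. The specific probabilities and the function `corrupt_annotations` are illustrative, not taken from the paper.

```python
import numpy as np

def corrupt_annotations(truth, p_miss, p_spurious, rng):
    """Asymmetric binary noise model for manual annotations (sketch):
    a true keyword is dropped with probability p_miss, while an absent
    keyword is spuriously added with a (much smaller) probability p_spurious."""
    noisy = truth.copy()
    miss = (rng.random(truth.shape) < p_miss) & (truth == 1)
    spurious = (rng.random(truth.shape) < p_spurious) & (truth == 0)
    noisy[miss] = 0
    noisy[spurious] = 1
    return noisy

rng = np.random.default_rng(0)
truth = (rng.random((1000, 50)) < 0.1).astype(int)   # 1000 images, 50 keywords
amateur = corrupt_annotations(truth, p_miss=0.4, p_spurious=0.02, rng=rng)
professional = corrupt_annotations(truth, p_miss=0.1, p_spurious=0.005, rng=rng)
for name, ann in [("amateur", amateur), ("professional", professional)]:
    agreement = (ann == truth).mean()
    print(name, "agreement with ground truth:", round(float(agreement), 3))
```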

18.
Narrowing the gap between low-level visual features and high-level semantics to improve the accuracy of automatic semantic annotation is key to managing large-scale image data. A deep-learning image annotation method that fuses multiple features is proposed: the visual features of an image are combined into a bag of words with different weights, and a deep belief network is optimized with respect to the input and output variables to annotate large-scale image data automatically. Experiments on the standard Corel image dataset show that, by accounting for the influence of different image features, the multi-feature deep-learning method improves annotation accuracy.
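Scikit-learn has no full deep belief network, so the sketch below stands in with stacked Bernoulli RBMs over a weighted multi-feature bag, followed by a logistic-regression annotator; the channel weights, layer sizes, and toy labels are all assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in data: three feature "channels" (e.g. colour / texture / shape
# histograms) per image, combined into one bag with per-channel weights.
rng = np.random.default_rng(0)
color, texture, shape = (rng.random((300, 32)) for _ in range(3))
weights = [0.5, 0.3, 0.2]                          # illustrative channel weights
X = np.hstack([w * f for w, f in zip(weights, [color, texture, shape])])
y = (color.mean(axis=1) > 0.5).astype(int)         # toy label tied to the colour channel

# Stacked RBMs as a small deep-belief-style feature learner, followed by a
# logistic-regression annotator (a proxy for the paper's DBN).
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```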

19.
20.
Automatic image annotation is an important problem in pattern recognition and computer vision. Because existing annotation models generally suffer from the semantic-gap problem, a refinement method based on keyword co-occurrence is proposed, which uses the correlations among annotation words in the dataset to improve annotation results. Furthermore, because that method cannot reflect broader human knowledge and is easily affected by dataset size, a second refinement method based on semantic similarity is proposed: WordNet, a structured electronic dictionary with a large vocabulary that encodes human knowledge, is introduced to compute relations between words and refine the annotation results. Experimental results show that both refinement methods improve on previous models across the evaluation metrics.
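A small sketch of the WordNet-based refinement idea, using NLTK's WordNet interface and Wu-Palmer similarity (requires the `wordnet` corpus to be downloaded); the re-scoring rule and mixing weight are assumptions, not the paper's exact method.

```python
import numpy as np
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def wordnet_similarity(w1, w2):
    """Maximum Wu-Palmer similarity over the noun synsets of two keywords."""
    s1, s2 = wn.synsets(w1, pos=wn.NOUN), wn.synsets(w2, pos=wn.NOUN)
    sims = [a.wup_similarity(b) for a in s1 for b in s2]
    sims = [s for s in sims if s is not None]
    return max(sims) if sims else 0.0

def refine_with_wordnet(candidate_scores, alpha=0.5):
    """Re-score candidate keywords of one image by how semantically coherent
    each word is with the other candidates (a sketch of the refinement idea)."""
    words = list(candidate_scores)
    coherence = {w: np.mean([wordnet_similarity(w, v) for v in words if v != w])
                 for w in words}
    return {w: (1 - alpha) * candidate_scores[w] + alpha * coherence[w] for w in words}

candidates = {"sky": 0.4, "cloud": 0.3, "keyboard": 0.35}
for w, s in sorted(refine_with_wordnet(candidates).items(), key=lambda x: -x[1]):
    print(w, round(s, 3))   # "keyboard" should drop relative to "sky"/"cloud"
```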
