Similar Articles
 18 similar articles found (search time: 140 ms)
1.
Application of an Improved k-means Algorithm to Image Annotation and Retrieval   (cited by 2: 0 self-citations, 2 by others)
This paper proposes an image annotation and retrieval method based on an improved k-means algorithm. Training images are first segmented, and the segmented regions are clustered with the improved k-means algorithm, which determines the number of clusters k with a genetic clustering algorithm and then selects the cluster centres. During annotation, the correlation between semantic concepts and region clusters is computed from already-annotated images and used as prior knowledge for unannotated images, which are then annotated by combining this prior with the regions' low-level features. Experiments on a library of 1,000 images, using the annotated semantic keywords for retrieval, show that the proposed method is effective.
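The region-clustering step described above can be sketched as follows. This is a minimal illustration only: the paper's genetic clustering algorithm for choosing k is replaced here by a simple silhouette scan, and the region features are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for low-level features of segmented image regions
# (e.g. colour / texture vectors); three latent groups.
regions = np.vstack([
    rng.normal(0, 0.3, (40, 4)),
    rng.normal(3, 0.3, (40, 4)),
    rng.normal(-3, 0.3, (40, 4)),
])

# The paper determines k with a genetic clustering algorithm; here we
# approximate that selection step with a silhouette scan over candidate k.
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(regions)
    score = silhouette_score(regions, labels)
    if score > best_score:
        best_k, best_score = k, score

# Final clustering of the regions with the selected k
model = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(regions)
print(best_k)  # the three synthetic groups should be recovered
```

The resulting cluster centres play the role of the region vocabulary that the annotation step correlates with semantic keywords.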

2.
An improved unsupervised k-means segmentation algorithm first partitions each image into regions; an information-bottleneck clustering method is then proposed to cluster the segmented regions and establish the relationship between image semantic concepts and region clusters. To annotate an unlabelled image, it is segmented, the conditional probability of each semantic concept given its regions is computed, and the semantic keyword with the maximum conditional probability is used as the automatic annotation. Experiments on a library of 500 images show that the method is more effective than the alternatives.
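The maximum-conditional-probability annotation step can be illustrated with a toy co-occurrence model. This is a hedged sketch, not the paper's information-bottleneck formulation: the concepts, cluster ids, and training pairs below are all invented for illustration.

```python
import numpy as np

concepts = ["sky", "grass", "water"]
# Toy training data: each image is (list of region-cluster ids, keywords)
train = [
    ([0, 1], ["sky", "grass"]),
    ([0, 2], ["sky", "water"]),
    ([1, 1], ["grass"]),
    ([2, 0], ["water", "sky"]),
]
n_clusters = 3

# Co-occurrence counts between semantic concepts and region clusters
counts = np.zeros((len(concepts), n_clusters))
for clusters, keywords in train:
    for c in clusters:
        for kw in keywords:
            counts[concepts.index(kw), c] += 1

# Estimate P(concept | cluster) by normalising each cluster column
p_concept_given_cluster = counts / counts.sum(axis=0, keepdims=True)

def annotate(region_clusters):
    """Label an unannotated image: average the conditional probabilities
    over its region clusters and pick the most probable concept."""
    probs = p_concept_given_cluster[:, region_clusters].mean(axis=1)
    return concepts[int(np.argmax(probs))]

print(annotate([1]))  # cluster 1 co-occurs mostly with "grass"
```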

3.
Image Semantic Annotation Based on Mutual-Information Constrained Clustering   (cited by 2: 0 self-citations, 2 by others)
This paper proposes an image annotation algorithm based on mutual-information constrained clustering. The information bottleneck algorithm is improved with semantic constraints and used to cluster the segmented image regions, establishing the relationship between image semantic concepts and region clusters. For unannotated images, a method is proposed to compute the conditional probability of each semantic concept, taking into account both the prior knowledge from the training images and the regions' low-level features; the semantic keyword with the maximum conditional probability is then used to annotate the image region automatically. Experiments on a library of 500 images show that the method is more effective than the alternatives.

4.
This paper proposes an image retrieval method based on high-level semantics. Images are first segmented into regions, and each region's colour, shape, and position features are extracted. These features are used to cluster the image objects, yielding a semantic feature vector for each image. The images are then clustered with the fuzzy C-means algorithm; at retrieval time, the query image is compared with the cluster centres and the search proceeds within the closest class. Experiments show that the method markedly improves retrieval efficiency and narrows the "semantic gap" between low-level features and high-level semantics.
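The cluster-then-search scheme above can be sketched with a plain NumPy fuzzy C-means. This is an illustrative reimplementation under synthetic data, not the paper's feature pipeline; all array values are invented.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centre update: membership-weighted mean of the data
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Membership update from distances to the centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

rng = np.random.default_rng(1)
# Synthetic semantic feature vectors for a small image library, two groups
images = np.vstack([rng.normal(0, 0.2, (30, 3)), rng.normal(2, 0.2, (30, 3))])
centres, U = fuzzy_c_means(images, c=2)

# Retrieval: compare the query with the cluster centres, then search only
# inside the nearest class (hard-assigned by maximum membership here).
query = np.array([1.9, 2.1, 2.0])
nearest = int(np.argmin(np.linalg.norm(centres - query, axis=1)))
members = np.where(U.argmax(axis=1) == nearest)[0]
ranked = members[np.argsort(np.linalg.norm(images[members] - query, axis=1))]
print(len(members))
```

Restricting the distance computation to one class is what yields the efficiency gain the abstract claims.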

5.
To address the semantic gap in image retrieval, a novel automatic image annotation method is proposed. A semi-supervised image clustering algorithm with soft constraints (SHMRF-Kmeans) first clusters the regions of annotated images semantically, taking both visual and semantic information into account. A graph-based Manifold ranking algorithm then fully exploits the relationship between semantic concepts and region cluster centres, producing a joint probability table for the two; this table is used to annotate unlabelled images. Compared with earlier approaches, this method combines visual features and high-level semantics more fully. Experimental results on a general-purpose image set show that the proposed automatic annotation method is effective.
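The Manifold ranking component has a well-known closed form, f = (I − αS)⁻¹y with S = D^(−1/2)WD^(−1/2), which can be sketched directly. The graph below (random 2-D points standing in for cluster centres and concepts) is illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative graph nodes: two well-separated groups in a shared space
X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(4, 0.5, (10, 2))])

# Affinity matrix with a Gaussian kernel, zero self-affinity
sigma = 1.0
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * sigma ** 2))
np.fill_diagonal(W, 0.0)

# Symmetric normalisation S = D^{-1/2} W D^{-1/2}
Dinv = 1.0 / np.sqrt(W.sum(axis=1))
S = Dinv[:, None] * W * Dinv[None, :]

# Closed-form manifold ranking: f = (I - alpha*S)^{-1} y,
# seeding the ranking at node 0.
alpha = 0.9
y = np.zeros(len(X))
y[0] = 1.0
f = np.linalg.solve(np.eye(len(X)) - alpha * S, y)

# Nodes in the seed's group should rank above the far group
print(bool(f[:10].min() > f[10:].max()))
```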

6.
A new method for automatic image annotation and retrieval using an image semantic vocabulary is proposed. A hybrid hierarchical model computes the joint probability distribution of image region classes and keywords on an annotated training set; the resulting model annotates previously unseen test images, or supports semantics-based image retrieval. Experimental results show that the method outperforms current alternatives in annotation accuracy, retrieval accuracy, and efficiency.

7.
Images are first segmented into regions by colour clustering, and each region's Gabor wavelet texture features and grey-level co-occurrence matrix texture features are extracted. Information entropy is used to select features, and the selected features serve to cluster the image regions, yielding a semantic feature vector for each image. A genetic fuzzy C-means algorithm is then proposed to cluster the images. At retrieval time, the query image is compared with the cluster centres and the search proceeds within the closest class. Experiments show that the method markedly improves both retrieval efficiency and precision.

8.
Automatic image annotation is a challenging task with important implications for image understanding and image retrieval. The standard approach learns, from an annotated image set, a model relating the semantic concept space to the visual feature space, and applies this model to unannotated images. Because the correspondence between low-level features and high-level semantics is intricate, current annotation accuracy remains low; constraining the scene, however, simplifies the mapping between annotations and visual features and makes annotation more reliable. An image annotation method based on scene semantic trees is therefore proposed. Training images are first automatically clustered into semantic scene classes, a visual scene space is generated for each class, and a semantic tree is built for each scene space. For an image to be annotated, its scene class is determined first and its final annotation is obtained via the corresponding scene semantic tree. On the Corel5K image set, the method outperforms the TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model), and PLSA-GMM (probabilistic latent semantic analysis with Gaussian mixture model) approaches.

9.
Automatic image semantic annotation has become a focus of content-based image retrieval research. An improved two-stage SML annotation method is proposed: supervised multi-class labeling (SML) first annotates the image coarsely, and an ontology-based optimal semantic annotation method (Oostia) then refines the coarse result, expanding the coarse keywords in four different ways to fully exploit the rich semantic information in the image. Experimental comparisons with related methods show the proposed approach to be superior.

10.
Because of the semantic gap, traditional content-based image retrieval cannot meet user needs in certain domains; automatic image semantic annotation addresses this problem effectively. This paper proposes segmenting images into regions with the Normalized Cuts method, extracting each region's low-level visual features, and then using a BP (back-propagation) neural network to learn the correspondence between image regions and annotation words, thereby annotating image semantics automatically. Experimental results confirm the method's effectiveness and accuracy.
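The region-to-keyword learning step can be sketched with a small back-propagation network. This is a hedged illustration: the segmentation step is replaced by synthetic region features, and the word list is invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
words = ["sky", "grass"]  # illustrative annotation vocabulary
# Synthetic stand-in for low-level visual features of Normalized-Cuts
# regions; in the paper these come from the actual segmentation step.
X = np.vstack([rng.normal(0, 0.3, (50, 5)), rng.normal(2, 0.3, (50, 5))])
y = np.array([0] * 50 + [1] * 50)  # index into `words`

# A small back-propagation network learning the region -> keyword mapping
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Annotate a new region by predicting its keyword
new_region = np.full((1, 5), 2.0)
print(words[int(net.predict(new_region)[0])])
```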

11.
This paper presents a unified annotation and retrieval framework, which integrates region annotation with image retrieval for performance reinforcement. To integrate semantic annotation with region-based image retrieval, visual and textual fusion is proposed for both soft matching and Bayesian probabilistic formulations. To address sample insufficiency and sample asymmetry in the annotation classifier training phase, we present a region-level multi-label image annotation scheme based on pair-wise coupling support vector machine (SVM) learning. In the retrieval phase, to achieve semantic-level region matching we present a novel retrieval scheme which differs from former work: the query example uploaded by users is automatically annotated online, and the user can judge its annotation quality. Based on the user’s judgment, two novel schemes are deployed for semantic retrieval: (1) if the user judges the photo to be well annotated, Semantically supervised Integrated Region Matching is adopted, which is a keyword-integrated soft region matching method; (2) if the user judges the photo to be poorly annotated, Keyword Integrated Bayesian Reasoning is adopted, which is a natural integration of a Visual Dictionary in online content-based search. In the relevance feedback phase, we conduct both visual and textual learning to capture the user’s retrieval target. Better annotation and retrieval performance than current methods was reported on both the COREL 10,000 and Flickr web image (25,000 images) databases, demonstrating the effectiveness of the proposed framework.

12.
An Image Retrieval Method Based on Colour Information   (cited by 1: 0 self-citations, 1 by others)
Traditional colour-based image retrieval relies on colour histograms, which are difficult to combine with other kinds of information, and this lowers retrieval accuracy. To improve accuracy, an image retrieval method based on a colour cluster table is proposed. A colour cluster table is first defined and the image's colours are clustered; the clustered colour information is then used to construct the table, which serves as the retrieval feature, and a procedure for obtaining the colour cluster table is given; finally, the method is evaluated in simulation experiments. The results show that retrieving images from their colour clustering results via the cluster table makes it straightforward to combine colour information with other information.
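One plausible reading of the colour cluster table is "dominant colours plus their pixel proportions", which can be sketched as below. This is an assumption-laden illustration on synthetic pixel data, not the paper's exact table definition or distance measure.

```python
import numpy as np
from sklearn.cluster import KMeans

def colour_cluster_table(pixels, k=3, seed=0):
    """Cluster an image's pixel colours and return a 'cluster table':
    dominant colours paired with their pixel proportions (illustrative)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    props = np.bincount(km.labels_, minlength=k) / len(pixels)
    order = np.argsort(-props)  # sort entries by dominance
    return km.cluster_centers_[order], props[order]

def table_distance(t1, t2):
    """Compare two images by their cluster tables: colour distance of
    matched entries, weighted by their average proportions (assumed metric)."""
    (c1, p1), (c2, p2) = t1, t2
    return float((np.linalg.norm(c1 - c2, axis=1) * (p1 + p2) / 2).sum())

rng = np.random.default_rng(0)
img_a = rng.normal([200, 50, 50], 10, (500, 3))   # predominantly red pixels
img_b = rng.normal([205, 55, 45], 10, (500, 3))   # a similar red image
img_c = rng.normal([40, 60, 200], 10, (500, 3))   # predominantly blue pixels

ta, tb, tc = (colour_cluster_table(p) for p in (img_a, img_b, img_c))
print(table_distance(ta, tb) < table_distance(ta, tc))
```

Because the table is just a small feature vector, it is easy to concatenate with texture or shape features, which is the combination advantage the abstract emphasises.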

13.
A new framework for ontology-based image annotation is proposed. Exploiting the relations among concepts in a domain ontology, hierarchical probabilistic annotation yields high-level semantic concept labels, achieving automatic semantic annotation of unannotated images. Image semantics are defined as attribute concepts and high-level abstract concepts, and a two-pass annotation procedure labels both. Experiments show that the method gives images rich high-level abstract semantic annotations, narrowing the "semantic gap" and effectively improving retrieval efficiency and precision.

14.
There is an increasing need for automatic image annotation tools to enable effective image searching in digital libraries. In this paper, we present a novel probabilistic model for image annotation based on content-based image retrieval techniques and statistical analysis. One key difficulty in applying statistical methods to the annotation of images is that the number of manually labeled images used to train the methods is normally insufficient. Numerous keywords cannot be correctly assigned to appropriate images due to incomplete or missing information in the labeled image databases. To deal with this challenging problem, we also propose an enhanced model in which the annotated keywords of a new image are defined in terms of their similarity at different semantic levels, including the image level, keyword level, and concept level. To avoid missing some relevant keywords, the model labels the keywords with the same concepts as the new image. Our experimental results show that the proposed models are effective for annotating images that have different qualities of training data.

15.
Continual progress in the fields of computer vision and machine learning has provided opportunities to develop automatic tools for tagging images; this facilitates searching and retrieving. However, due to the complexity of real-world image systems, effective and efficient image annotation is still a challenging problem. In this paper, we present an annotation technique based on the use of image content and word correlations. Clusters of images with manually tagged words are used as training instances. Images within each cluster are modeled using a kernel method, in which the image vectors are mapped to a higher-dimensional space and the vectors identified as support vectors are used to describe the cluster. To measure the extent of the association between an image and a model described by support vectors, the distance from the image to the model is computed. A closer distance indicates a stronger association. Moreover, word-to-word correlations are also considered in the annotation framework. To tag an image, the system predicts the annotation words by using the distances from the image to the models and the word-to-word correlations in a unified probabilistic framework. Simulated experiments were conducted on three benchmark image data sets. The results demonstrate the performance of the proposed technique, and compare it to the performance of other recently reported techniques.
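The "distance from an image to a support-vector-described cluster model" idea can be approximated with one-class SVMs, one per tagged word cluster. This is a hedged sketch on synthetic features; the word list and parameters are invented, and the paper's word-to-word correlation term is omitted.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Two word-tagged training clusters of image feature vectors (synthetic)
clusters = {
    "beach": rng.normal(0, 0.3, (40, 4)),
    "forest": rng.normal(3, 0.3, (40, 4)),
}

# One kernel model per cluster; its support vectors describe the cluster,
# and decision_function gives a signed distance to the learned boundary.
models = {w: OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)
          for w, X in clusters.items()}

def associate(image):
    """Score each word by the image's distance to that word's model
    (a higher decision value means closer, i.e. stronger association)."""
    return {w: float(m.decision_function(image[None])[0])
            for w, m in models.items()}

scores = associate(np.full(4, 3.1))  # a query near the "forest" cluster
print(max(scores, key=scores.get))
```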

16.
王娟, 赖思渝, 李明东. 《计算机应用》 (Journal of Computer Applications), 2009, 29(7): 1947–1950
To improve image annotation and retrieval performance, an annotation and retrieval algorithm based on region segmentation and relevance feedback is proposed. Exploiting the correlation between visual features and annotations, region-based visual features are clustered to obtain a set of visually similar images for each image. The similarities to the three nearest classes are computed, and their keyword probability vectors are merged into the probability vector best suited to the image, which is then annotated accordingly. User feedback is used to correct the relationship between query keywords and each class, further improving retrieval accuracy. Experimental results show that the algorithm achieves higher precision and recall.

17.
Text-based image retrieval may perform poorly due to the irrelevant and/or incomplete text surrounding the images in the web pages. In such situations, visual content of the images can be leveraged to improve the image ranking performance. In this paper, we look into this problem of image re-ranking and propose a system that automatically constructs multiple candidate “multi-instance bags (MI-bags)”, which are likely to contain relevant images. These automatically constructed bags are then utilized by ensembles of Multiple Instance Learning (MIL) classifiers and the images are re-ranked according to the final classification responses. Our method is unsupervised in the sense that the only input to the system is the text query itself, without any user feedback or annotation. The experimental results demonstrate that constructing multiple instance bags based on the retrieval order and utilizing ensembles of MIL classifiers greatly enhance the retrieval performance, achieving results on par with or better than the state-of-the-art.

18.
With the advancement of imaging techniques and IT technologies, image retrieval has become a bottleneck. The key to efficient and effective image retrieval is a text-based approach in which automatic image annotation is a critical task. As an important issue, the metadata of the annotation, i.e., the basic unit of an image to be labeled, has not been fully studied. A habitual way is to label the segments which are produced by a segmentation algorithm. However, after a segmentation process an object has often been broken into pieces, which not only produces noise for annotation but also increases the complexity of the model. We adopt an attention-driven image interpretation method to extract attentive objects from an over-segmented image and use the attentive objects for annotation. In this way, the basic unit of annotation is upgraded from segments to attentive objects. Visual classifiers are trained and a concept association network (CAN) is constructed for object recognition. A CAN consists of a number of concept nodes in which each node is a trained neural network (visual classifier) to recognize a single object. The nodes are connected through their correlation links, forming a network. Given that an image contains several unknown attentive objects, all the nodes in the CAN generate their own responses, which propagate to other nodes through the network simultaneously. For a combination of nodes under investigation, these loopy propagations can be characterized by a linear system, and the response of a combination of nodes can be obtained by solving it. Therefore, the annotation problem is converted into finding the node combination with the maximum response. Annotation experiments show better accuracy with attentive objects than with segments, and that the concept association network improves annotation performance.
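The linear-system view of loopy propagation in a CAN can be sketched directly: if node responses satisfy r = b + Ar (initial classifier responses b reinforced through correlation links A), the fixed point is the solution of (I − A)r = b. The concepts and link weights below are illustrative, not taken from the paper.

```python
import numpy as np

concepts = ["sky", "sea", "boat", "car"]
# Correlation links between concept nodes (illustrative values);
# row i lists how strongly node i is reinforced by the other nodes.
A = np.array([
    [0.0, 0.3, 0.1, 0.0],
    [0.3, 0.0, 0.3, 0.0],
    [0.1, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

# Initial responses of the visual classifiers on an image's attentive objects
b = np.array([0.6, 0.5, 0.4, 0.3])

# Loopy propagation r = b + A r converges (spectral radius of A < 1)
# to the solution of the linear system (I - A) r = b.
r = np.linalg.solve(np.eye(4) - A, b)

# Annotate with the nodes whose propagated response crosses a threshold
labels = [c for c, v in zip(concepts, r) if v > 0.5]
print(labels)
```

Note how "car", having no correlation links, keeps its initial response, while the mutually reinforcing "sky"/"sea"/"boat" group is amplified.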
