Similar Literature
A total of 20 similar documents were retrieved.
1.
Automatic image annotation is an important and challenging problem in pattern recognition, computer vision, and related fields. To address the low data utilization of existing models and their sensitivity to the imbalance between positive and negative samples, we propose a new cascaded automatic image annotation model that combines a discriminative model with a generative model. In the first layer, a discriminative model assigns topic-level labels to the unannotated image and obtains the corresponding set of relevant images. In the second layer, a proposed keyword-oriented method establishes the links between images and keywords, and a proposed iterative algorithm expands both the semantic keywords and the relevant image set. Finally, a generative model, together with the expanded relevant image set, produces the detailed annotation of the unannotated image. The model combines the advantages of discriminative and generative models and achieves better annotation results with fewer relevant training images. Experiments on the Corel 5K image dataset verify the effectiveness of the model.

2.
Image classification assigns a category to an image, while image annotation describes the individual components of an image using annotation terms. These two learning tasks are strongly related. The main contribution of this paper is a new discriminative and sparse topic model (DSTM) for image classification and annotation that combines visual, annotation, and label information from a set of training images. The essential features that distinguish DSTM from existing approaches are that (i) the label information is enforced in the generation of both visual words and annotation terms, so that each generative latent topic corresponds to a category; and (ii) a zero-mean Laplace distribution is employed to give a sparse representation of images in visual words and annotation terms, so that relevant words and terms are associated with latent topics. Experimental results demonstrate that the proposed method provides discrimination ability in classification and annotation, and that its performance is better than the other tested methods (sLDA-ann, abc-corr-LDA, SupDocNADE, SAGE and MedSTC) on the LabelMe, UIUC, NUS-WIDE and PascalVOC07 images.

3.
Automatic image annotation (AIA) is an effective technology for improving the performance of image retrieval. In this paper, we propose a novel AIA scheme based on a hidden Markov model (HMM). Compared with previous HMM-based annotation methods, SVM-based semi-supervised learning, i.e., the transductive SVM (TSVM), is employed to remarkably boost the reliability of the HMM while requiring less labeling effort from users (the combination is denoted TSVM-HMM). The proposed TSVM-HMM annotation scheme thus integrates discriminative classification with a generative model so that the two complement each other's advantages. In addition, the proposed AIA scheme exploits not only the relevance model between the visual content of images and the textual keywords but also keyword correlation. In particular, to establish an enhanced correlation network among keywords, co-occurrence-based and WordNet-based correlation techniques are fused so that each benefits from the other. The final experimental results reveal that better annotation performance can be achieved with fewer labeled training images.
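Since the abstract above combines semi-supervised SVM labeling with a generative model, here is a minimal Python sketch of that two-stage idea. scikit-learn has no TSVM, so SelfTrainingClassifier stands in for the transductive step, and a naive Bayes classifier stands in for the HMM; the data and names are synthetic and purely illustrative.

```python
# Discriminative stage labels unlabeled images cheaply; generative stage then
# trains on the enlarged labeled set. SelfTrainingClassifier is a stand-in for
# a transductive SVM, GaussianNB is a stand-in for the HMM relevance model.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=16, n_informative=8,
                           n_classes=3, random_state=0)
y_semi = y.copy()
y_semi[50:] = -1                               # only 50 images carry labels

# Discriminative stage: propagate labels to confidently classified unlabeled images.
self_training = SelfTrainingClassifier(SVC(probability=True, gamma="scale"),
                                       threshold=0.8)
self_training.fit(X, y_semi)
pseudo_labels = self_training.transduction_    # labels for all 200 images (-1 = still unknown)

# Generative stage: fit a simple generative classifier on the enlarged training set.
mask = pseudo_labels != -1
generative = GaussianNB().fit(X[mask], pseudo_labels[mask])
print("predicted keyword/class for a new image:", generative.predict(X[:1]))
```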

4.
There is an increasing need for automatic image annotation tools to enable effective image searching in digital libraries. In this paper, we present a novel probabilistic model for image annotation based on content-based image retrieval techniques and statistical analysis. One key difficulty in applying statistical methods to the annotation of images is that the number of manually labeled images available to train the methods is normally insufficient. Numerous keywords cannot be correctly assigned to appropriate images due to incomplete or missing information in the labeled image databases. To deal with this challenging problem, we also propose an enhanced model in which the annotated keywords of a new image are defined in terms of their similarity at different semantic levels, including the image level, keyword level, and concept level. To avoid missing relevant keywords, the model labels the keywords with the same concepts as the new image. Our experimental results show that the proposed models are effective for annotating images that have different qualities of training data.

5.
Finding semantically similar images is a problem that relies on image annotations manually assigned by amateurs or professionals, or automatically computed by some algorithm using low-level image features. These image annotations create a keyword space in which a dissimilarity function quantifies the semantic relationship among images. In this setting, the objective of this paper is two-fold. First, we compare amateur to professional user annotations and propose a model of manual annotation errors, more specifically an asymmetric binary model. Second, we examine different aspects of search by semantic similarity: the accuracy of manual annotations versus automatic annotations, the influence of manual annotations with different accuracies resulting from incorrect annotations, and, revisiting earlier work, the influence of the keyword-space dimensionality. To assess these aspects we conducted experiments on a professional image dataset (Corel) and two amateur image datasets (one with 25,000 Flickr images and a second with 269,648 Flickr images) with a large number of keywords, with different similarity functions, and with both manual and automatic annotation methods. We find that amateur-level manual annotations offer better performance for top-ranked results in all datasets (MP@20). However, for full-rank measures (MAP) on the real-world datasets (Flickr), retrieval by semantic similarity with automatic annotations is similar to or better than with amateur-level manual annotations.
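As an illustration of the asymmetric binary error model mentioned above, the following Python sketch corrupts a ground-truth annotation matrix with different miss and false-add probabilities. The parameterization (p_miss, p_add) and the data are assumptions for illustration, not the paper's exact formulation.

```python
# Asymmetric binary annotation noise: true keywords are dropped with p_miss,
# absent keywords are wrongly added with p_add, and p_miss != p_add.
import numpy as np

def corrupt_annotations(Y, p_miss=0.3, p_add=0.01, seed=0):
    """Y is an (images x keywords) binary matrix of ground-truth annotations."""
    rng = np.random.default_rng(seed)
    drop = (rng.random(Y.shape) < p_miss) & (Y == 1)   # missed true keywords
    add = (rng.random(Y.shape) < p_add) & (Y == 0)     # spurious keywords
    return np.where(drop, 0, np.where(add, 1, Y))

# Example: 5 images, 8 keywords, amateur vs. professional error rates.
Y_true = (np.random.default_rng(1).random((5, 8)) < 0.25).astype(int)
Y_amateur = corrupt_annotations(Y_true, p_miss=0.4, p_add=0.05)
Y_pro = corrupt_annotations(Y_true, p_miss=0.1, p_add=0.01)
print("amateur errors:", int(np.sum(Y_amateur != Y_true)),
      "| professional errors:", int(np.sum(Y_pro != Y_true)))
```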

6.
Semantic gap has become a bottleneck of content-based image retrieval in recent years. In order to bridge the gap and improve retrieval performance, automatic image annotation has emerged as a crucial problem. In this paper, a hybrid approach is proposed to learn the semantic concepts of images automatically. Firstly, we present continuous probabilistic latent semantic analysis (PLSA) and derive its corresponding Expectation-Maximization (EM) algorithm. Continuous PLSA assumes that elements are sampled from a multivariate Gaussian distribution given a latent aspect, instead of a multinomial one as in traditional PLSA. Furthermore, we propose a hybrid framework which employs continuous PLSA to model visual features of images in the generative learning stage and uses ensembles of classifier chains to classify the multi-label data in the discriminative learning stage. Therefore, the framework can learn the correlations between features as well as the correlations between words. Since the hybrid approach combines the advantages of generative and discriminative learning, it can predict semantic annotations precisely for unseen images. Finally, we conduct experiments on three baseline datasets and the results show that our approach outperforms many state-of-the-art approaches.
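For the discriminative stage described above (ensembles of classifier chains over multi-label data), a minimal scikit-learn sketch is given below. The continuous-PLSA topic vectors are replaced with random features, so only the classifier-chain step is shown; all names and data are illustrative.

```python
# Ensemble of classifier chains: averaging chains with different random keyword
# orders captures correlations between annotation words.
import numpy as np
from sklearn.multioutput import ClassifierChain
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 10))                   # stand-in for per-image PLSA topic vectors
Y = (rng.random((300, 6)) < 0.3).astype(int)     # multi-label keyword matrix

chains = [ClassifierChain(LogisticRegression(max_iter=1000), order="random",
                          random_state=i).fit(Z, Y) for i in range(5)]
scores = np.mean([c.predict_proba(Z[:2]) for c in chains], axis=0)
print("predicted keyword probabilities for two images:\n", scores.round(2))
```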

7.
Scalable search-based image annotation
With the popularity of digital cameras, more and more people have accumulated considerable numbers of digital images on their personal devices. As a result, there is an increasing need to search these personal images effectively. Automatic image annotation may serve this goal, since the annotated keywords can facilitate the search process. Although many image annotation methods have been proposed in recent years, their effectiveness on arbitrary personal images is constrained by their limited scalability, i.e., the limited lexicon of a small-scale training set. To be scalable, we propose a search-based image annotation algorithm that is analogous to information retrieval. First, content-based image retrieval technology is used to retrieve a set of visually similar images from a large-scale Web image set. Second, a text-based keyword search technique is used to obtain a ranked list of candidate annotations for each retrieved image. Third, a fusion algorithm is used to combine the ranked lists into a final candidate annotation list. Finally, the candidate annotations are re-ranked using Random Walk with Restarts and only the top ones are retained as the final annotations. The application of both efficient search techniques and a Web-scale image set guarantees the scalability of the proposed algorithm. Moreover, we provide an annotation rejection scheme to flag the images that our annotation system cannot handle well. Experimental results on the U. Washington dataset show not only the effectiveness and efficiency of the proposed algorithm but also the advantage of image retrieval using annotation results over retrieval using visual features.
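The final re-ranking step named above, Random Walk with Restarts over candidate annotations, can be sketched as follows. The similarity matrix, candidate words, and initial scores are invented for illustration.

```python
# Random Walk with Restarts over a candidate-annotation similarity graph.
import numpy as np

def random_walk_with_restarts(W, restart, alpha=0.85, n_iter=100, tol=1e-9):
    """W: (n x n) nonnegative similarity matrix over candidate annotations.
    restart: initial relevance vector (e.g., fused keyword-search scores)."""
    P = W / W.sum(axis=0, keepdims=True)       # column-normalized transition matrix
    r = restart / restart.sum()
    p = r.copy()
    for _ in range(n_iter):
        p_next = alpha * P @ p + (1 - alpha) * r
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

words = ["beach", "sea", "sand", "car", "sky"]
W = np.array([[0, 5, 4, 0, 2],
              [5, 0, 3, 0, 3],
              [4, 3, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [2, 3, 1, 1, 0]], dtype=float)
initial = np.array([0.3, 0.2, 0.1, 0.3, 0.1])   # fused candidate scores
final = random_walk_with_restarts(W, initial)
print(sorted(zip(words, final.round(3)), key=lambda t: -t[1]))
```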

8.
We propose an image-annotation refinement algorithm based on graph partitioning and image search engines, which improves annotation accuracy by removing noisy candidate annotation words for the image to be annotated. The core idea is to treat the candidate annotation words as the vertices of a graph and the correlations between words as edge weights, thereby converting the annotation-refinement problem into a graph-partitioning problem. Edge weights are computed by weighting the similarity between annotation words with two parameters: the first is the correlation between a candidate word and the visual features of the image, estimated from the results returned by an image search engine; the second is the importance of the candidate word in the Web page containing the image, which applies only to Web images. A heuristic max-cut algorithm then bipartitions the constructed graph, and one of the two resulting vertex sets is selected as the final annotation. Experimental results show that, compared with existing methods, the algorithm produces more accurate final annotations for both Web and non-Web images.
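A minimal sketch of the graph-partitioning step is given below: candidate words form a weighted graph, a greedy max-cut heuristic bipartitions it, and one side is kept as the refined annotation. The greedy rule and the edge weights are illustrative assumptions, not the paper's exact heuristic.

```python
# Greedy max-cut bipartition of a candidate-annotation graph.
import numpy as np

def greedy_max_cut(W):
    """Assign each vertex to the side that maximizes the current cut weight."""
    n = len(W)
    side = np.zeros(n, dtype=int)
    for v in range(n):
        gain_if_1 = W[v, side == 0].sum() - W[v, side == 1].sum()
        side[v] = 1 if gain_if_1 > 0 else 0
    return side

words = ["tiger", "grass", "forest", "keyboard", "laptop"]
W = np.array([[0, 4, 3, 0, 0],     # weighted correlations between candidate words
              [4, 0, 5, 0, 1],
              [3, 5, 0, 0, 0],
              [0, 0, 0, 0, 6],
              [0, 1, 0, 6, 0]], dtype=float)
side = greedy_max_cut(W)
group0 = [w for w, s in zip(words, side) if s == 0]
group1 = [w for w, s in zip(words, side) if s == 1]
print("candidate partitions:", group0, "|", group1)  # one side is kept as the final annotation
```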

9.
This paper presents a method for designing semi-supervised classifiers trained on labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class and single-labeled classification problems, and propose a hybrid approach that takes advantage of generative and discriminative approaches. In our approach, we first consider a generative model trained by using labeled samples and introduce a bias correction model, where these models belong to the same model family, but have different parameters. Then, we construct a hybrid classifier by combining these models based on the maximum entropy principle. To enable us to apply our hybrid approach to text classification problems, we employed naive Bayes models as the generative and bias correction models. Our experimental results for four text data sets confirmed that the generalization ability of our hybrid classifier was much improved by using a large number of unlabeled samples for training when there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed generative and discriminative approaches when the performance of the generative and discriminative approaches was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions were different.

10.
The development of technology generates huge amounts of non-textual information, such as images. An efficient image annotation and retrieval system is highly desirable. Clustering algorithms make it possible to represent the visual features of images with finite symbols. Based on this, many statistical models, which analyze the correspondence between visual features and words and discover hidden semantics, have been published. These models improve the annotation and retrieval of large image databases. However, image data usually have a large number of dimensions. Traditional clustering algorithms assign equal weights to these dimensions and become confounded when dealing with them. In this paper, we propose a weighted feature selection algorithm as a solution to this problem. For a given cluster, we determine the relevant features based on histogram analysis and assign greater weight to relevant features than to less relevant ones. We have implemented several models that link visual tokens with keywords based on the clustering results of the K-means algorithm with weighted feature selection and without feature selection, and evaluated their performance using precision, recall, and correspondence accuracy on a benchmark dataset. The results show that weighted feature selection outperforms traditional clustering for automatic image annotation and retrieval.
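The weighted feature selection idea can be sketched as a K-means variant whose distance uses per-cluster feature weights. The variance-based weighting below is a crude stand-in for the paper's histogram analysis, and the data are synthetic.

```python
# K-means with per-cluster feature weights: compact (relevant) dimensions of a
# cluster receive larger weight in the distance computation.
import numpy as np

def weighted_kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    weights = np.ones((k, X.shape[1]))                 # one weight vector per cluster
    for _ in range(n_iter):
        # weighted squared distances of every point to every center
        d = np.stack([((X - c) ** 2 * w).sum(axis=1)
                      for c, w in zip(centers, weights)], axis=1)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts) == 0:
                continue
            centers[j] = pts.mean(axis=0)
            spread = pts.var(axis=0) + 1e-6            # small spread -> relevant dimension
            weights[j] = (1.0 / spread) / (1.0 / spread).sum() * X.shape[1]
    return centers, weights, labels

X = np.random.default_rng(1).normal(size=(400, 12))    # stand-in visual-feature vectors
centers, weights, labels = weighted_kmeans(X, k=5)
print("cluster sizes:", np.bincount(labels, minlength=5))
```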

11.
Objective: Because of the "semantic gap" between low-level features and high-level semantics in image retrieval, automatic image annotation has become a key problem. To narrow the semantic gap, this paper proposes an automatic image annotation method that mixes generative and discriminative models. Method: In the generative learning stage, images are modeled with continuous probabilistic latent semantic analysis, which yields the model parameters and the topic distribution of each image. Taking this topic distribution as the intermediate representation vector of each image turns automatic image annotation into a multi-label classification problem. In the discriminative learning stage, ensembles of classifier chains are learned on the intermediate representation vectors; building the chains also integrates contextual information among annotation keywords, which leads to higher annotation accuracy and better retrieval performance. Results: Experiments on two benchmark datasets show that the method achieves an average precision of 0.28 and an average recall of 0.32 on the Corel5k dataset, and 0.29 and 0.18 on the IAPR-TC12 dataset, outperforming most state-of-the-art automatic annotation methods. Judged by precision-recall curves, the method is also superior to several representative annotation methods. Conclusion: The proposed hybrid-learning annotation method integrates the respective advantages of generative and discriminative models and shows good effectiveness and robustness in semantic image retrieval. With suitable modifications, the method and its techniques can be applied not only to image retrieval and recognition but also to cross-media retrieval and data mining.

12.
Confronted with the explosive growth of web images, web image annotation has become a critical research issue for image search and indexing. Sparse feature selection plays an important role in improving the efficiency and performance of web image annotation. Meanwhile, it is beneficial to develop an effective mechanism to leverage unlabeled training data for large-scale web image annotation. In this paper we propose a novel sparse feature selection framework for web image annotation, namely sparse Feature Selection based on Graph Laplacian (FSLG). FSLG applies the l2,1/2-matrix norm in the sparse feature selection algorithm to select the most sparse and discriminative features. Additionally, graph-Laplacian-based semi-supervised learning is used to exploit both labeled and unlabeled data to enhance the annotation performance. An efficient iterative algorithm is designed to optimize the objective function. Extensive experiments on two web image datasets are performed and the results illustrate that our method is promising for large-scale web image annotation.
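The graph-Laplacian semi-supervised component can be illustrated with scikit-learn's LabelSpreading, which builds a k-NN graph Laplacian internally; the sparse l2,1/2 feature-selection step is not shown, and the data are synthetic.

```python
# Propagate keyword labels from a few labeled web images to unlabeled ones
# over a k-NN graph (graph-Laplacian semi-supervised learning).
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

X, y = make_blobs(n_samples=300, centers=4, n_features=20, random_state=0)
y_partial = y.copy()
y_partial[60:] = -1                          # only 60 images carry labels

model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y_partial)                      # the graph Laplacian is built internally
acc = (model.transduction_[60:] == y[60:]).mean()
print(f"transductive agreement with hidden labels: {acc:.2f}")
```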

13.
In this paper, we propose a probabilistic framework for efficient retrieval and indexing of image collections. This framework uncovers the hierarchical structure underlying the collection from image features, based on a hybrid model that combines both generative and discriminative learning. We adopt the generalized Dirichlet mixture and maximum likelihood for the generative learning in order to estimate the statistical model of the data accurately. The resulting model is then refined by a new discriminative likelihood that enhances the power of relevant features. Consequently, this new model is suitable for modeling high-dimensional data described by both semantic and low-level (visual) features. The semantic features are defined according to a known ontology, while the visual features represent visual appearance such as color, shape, and texture. For validation purposes, we propose a new visual feature which has nice invariance properties under image transformations. Experiments on Microsoft's collection (MSRCID) clearly show the merits of our approach in both retrieval and indexing.

14.
Multi-level annotation of images is a promising solution for enabling semantic image retrieval using various keywords at different semantic levels. In this paper, we propose a multi-level approach to interpret and annotate the semantics of natural images by using both the dominant image components and the relevant semantic image concepts. In contrast to the well-known image-based and region-based approaches, we use concept-sensitive salient objects as the dominant image components to achieve automatic image annotation at the content level. By using the concept-sensitive salient objects for image content representation and feature extraction, a novel image classification technique is developed to achieve automatic image annotation at the concept level. To detect the concept-sensitive salient objects automatically, a set of detection functions is learned from labeled image regions by using support vector machine (SVM) classifiers with an automatic scheme for searching the optimal model parameters. To generate the semantic image concepts, finite mixture models are used to approximate the class distributions of the relevant concept-sensitive salient objects. An adaptive EM algorithm is proposed to determine the optimal model structure and model parameters simultaneously. In addition, a large number of unlabeled samples are integrated with a limited number of labeled samples to achieve more effective classifier training and knowledge discovery. We also demonstrate that our algorithms are very effective in enabling multi-level interpretation and annotation of natural images.
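A minimal sketch of learning one salient-object detection function follows: an SVM over labeled region features with an automatic parameter search, where plain grid search stands in for the paper's optimal-parameter scheme and the region features are synthetic.

```python
# SVM detection function for one concept-sensitive salient object, with an
# automatic search over model parameters.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic region features: positives belong to one salient-object class.
X, y = make_classification(n_samples=400, n_features=24, n_informative=10, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("optimal parameters:", search.best_params_,
      "| cv accuracy:", round(search.best_score_, 3))
```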

15.
This paper addresses the automatic image annotation problem and its application to multi-modal image retrieval. The contribution of our work is three-fold. (1) We propose a probabilistic semantic model in which the visual features and the textual words are connected via a hidden layer which constitutes the semantic concepts to be discovered, in order to explicitly exploit the synergy among the modalities. (2) The association of visual features and textual words is determined in a Bayesian framework such that the confidence of the association can be provided. (3) Extensive evaluation on a large-scale, visually and semantically diverse image collection crawled from the Web is reported to evaluate the prototype system based on the model. In the proposed probabilistic model, a hidden concept layer which connects the visual-feature and word layers is discovered by fitting a generative model to the training images and annotation words through an Expectation-Maximization (EM) based iterative learning procedure. The evaluation of the prototype system on 17,000 images and 7,736 automatically extracted annotation words from crawled Web pages for multi-modal image retrieval indicates that the proposed semantic model and the developed Bayesian framework are superior to a state-of-the-art peer system in the literature.

16.
As the main challenge in object tracking is to account for drastic appearance change, a hierarchical framework that exploits the strengths of both generative and discriminative models is devised in this paper. Our hierarchical framework consists of three appearance models: a local-histogram-based model, a weighted alignment pooling model, and a sparsity-based discriminative model. Sparse representation is adopted in the local-histogram-based model layer, which considers the spatial information among local patches and uses a dual-threshold update scheme to deal with occlusion. The weighted alignment pooling layer is introduced to weight the local image patches of the candidates after sparse representation. Different from the above two generative methods, the global discriminative model layer employs candidates to sparsely represent positive and negative templates. After that, an effective hierarchical fusion strategy is developed to fuse the three models via their similarities and confidences. In addition, three reasonable online dictionary and template update strategies are proposed. Finally, experiments on various popular image sequences demonstrate that our proposed tracker performs favorably against several state-of-the-art algorithms.

17.
Automatic image annotation is an important problem in computer vision, pattern recognition, and related fields. Existing models do not model the visual manifestation of textual keywords, so their annotation results often contain many words unrelated to the visual content of the image. To address this, we propose VKRAM, an automatic image annotation model based on relevant visual keywords. The model divides annotation words into non-abstract and abstract words. First, it builds visual-keyword seeds for non-abstract words and proposes a new method to extract the set of visual keywords corresponding to each non-abstract word. Then, according to the characteristics of abstract words, a proposed region-subtraction-based algorithm extracts the visual-keyword seeds and visual-keyword sets corresponding to abstract words. Next, an adaptive-parameter method and a fast solving algorithm are proposed to determine the similarity thresholds of different visual keywords. Finally, the above methods are combined and applied to automatic image annotation. The model alleviates, to a certain extent, the problem of large numbers of irrelevant words appearing in annotation results. Experimental results show that the model improves on previous models on most evaluation metrics.

18.
The vast number of images available on the Web calls for an effective and efficient search service to help users find relevant images. The prevalent approach is to provide a keyword interface for users to submit queries. However, the number of images without any tags or annotations is beyond the reach of manual effort. To overcome this, automatic image annotation techniques have emerged, which in general select a suitable set of tags for a given image without user intervention. There are three main challenges in Web-scale image annotation: scalability, noise-resistance, and diversity. Scalability has a twofold meaning: first, an automatic image annotation system should be scalable with respect to billions of images on the Web; second, it should be able to automatically identify several relevant tags among a huge tag set for a given image within seconds or even faster. Noise-resistance means that the system should be robust against typos and ambiguous terms used in tags. Diversity reflects that image content may include both scenes and objects, which are further described by multiple different image features constituting different facets of an annotation. In this paper, we propose a unified framework to tackle these three challenges in automatic Web image annotation. It mainly involves two components: tag candidate retrieval and multi-facet annotation. In the former, content-based indexing and a concept-based codebook are leveraged to address the scalability and noise-resistance issues. In the latter, a joint feature map is designed to describe the different facets of tags in annotations and the relations between these facets. A tag graph is adopted to represent the tags in the entire annotation, and a structured-learning technique is employed to build a model on top of the tag graph from the generated joint feature map. Millions of images from Flickr are used in our evaluation. Experimental results show that we achieve 33% performance improvements over single-facet approaches in terms of three metrics: precision, recall, and F1 score.

19.
Objective: In everyday image acquisition, poor scene lighting or insufficient fill light from the device easily produces low-light images. To address the poor visual quality, low signal-to-noise ratio, and low practical value (content that is hard to discern) of such images, this paper proposes a low-light image enhancement method based on a conditional generative adversarial network. Method: A convolutional neural network (CNN) with an encoder-decoder structure is designed as the generative model, and a CNN with binary classification capability is added as the discriminative model, together forming a generative adversarial network. During training, with real bright images as the condition, the discriminative model supervises the generative model, and the adversarial game between the two gives the network a stronger ability to enhance low-light images. In use, no manual parameter tuning is needed: an image is fed into the model and processed end to end to produce the result. Results: Compared with existing methods, images enhanced by the proposed method show large improvements in brightness, sharpness, and color fidelity. On image-quality metrics, the method improves over the best values of the other methods by 0.7 dB in peak signal-to-noise ratio, 3.9% in histogram similarity, and 8.2% in structural similarity. In terms of processing time, the method is far faster than existing traditional methods and can meet real-time enhancement requirements. Conclusion: Experimental comparisons with existing methods on low-light images show that the proposed method achieves better enhancement quality and higher processing speed.
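A minimal PyTorch sketch of the conditional-GAN setup described above is given below: an encoder-decoder generator conditioned on the low-light image, a binary-classifier discriminator on (low-light, candidate) pairs, and one training step. Network sizes, the placeholder data, and the omission of any extra reconstruction term are simplifying assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):                      # encoder-decoder enhancement network
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

class Discriminator(nn.Module):                  # binary classifier on (dark, candidate) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, dark, img):
        return self.net(torch.cat([dark, img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

dark = torch.rand(4, 3, 64, 64)                  # low-light inputs (placeholder data)
bright = torch.rand(4, 3, 64, 64)                # ground-truth bright images (condition)

fake = G(dark)
# Discriminator step: real pairs vs. generated pairs.
d_loss = (bce(D(dark, bright), torch.ones(4, 1)) +
          bce(D(dark, fake.detach()), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
# Generator step: fool the discriminator (a reconstruction term is often added; omitted here).
g_loss = bce(D(dark, fake), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```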

20.
Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.
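A rough numpy sketch of the joint-embedding idea follows, using a WARP-style sampled ranking update as one assumed way of optimizing precision at k; the dimensions, learning rate, rank weighting, and synthetic data are illustrative choices, not the paper's exact algorithm.

```python
# Joint image/annotation embedding trained with a WARP-style ranking update:
# sample negative annotations until one violates the margin, then apply a
# rank-weighted step so images and their true annotations end up close.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_labels, feat_dim, emb_dim = 100, 50, 30, 16
X = rng.normal(size=(n_images, feat_dim))            # image features
pos = rng.integers(0, n_labels, size=n_images)       # one true annotation per image

V = 0.01 * rng.normal(size=(feat_dim, emb_dim))      # image -> embedding map
W = 0.01 * rng.normal(size=(n_labels, emb_dim))      # annotation embeddings
lr, margin = 0.05, 1.0

for step in range(20000):
    i = rng.integers(n_images)
    x, y = X[i] @ V, pos[i]
    s_pos = x @ W[y]
    for trials in range(1, n_labels):                 # WARP negative sampling
        neg = rng.integers(n_labels)
        if neg != y and x @ W[neg] > s_pos - margin:  # margin violation found
            rank_weight = np.log1p((n_labels - 1) // trials)  # heavier for high-rank violations
            grad = rank_weight * lr
            diff = W[y] - W[neg]
            W[y] += grad * x
            W[neg] -= grad * x
            V += grad * np.outer(X[i], diff)
            break

# Nearest annotation embeddings give the ranked annotation list.
scores = (X @ V) @ W.T
print("precision@1 on training images:", float((scores.argmax(axis=1) == pos).mean()))
```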

