Similar Documents
1.
Automatic image annotation aims at predicting a set of semantic labels for an image. Because of the large annotation vocabulary, there exist large variations in the number of images corresponding to different labels ("class-imbalance"). Additionally, due to the limitations of human annotation, several images are not annotated with all the relevant labels ("incomplete-labelling"). These two issues affect the performance of most existing image annotation models. In this work, we propose the 2-pass k-nearest neighbour (2PKNN) algorithm, a two-step variant of the classical k-nearest neighbour algorithm that addresses these issues in the image annotation task. The first step of 2PKNN uses "image-to-label" similarities, while the second step uses "image-to-image" similarities, thus combining the benefits of both. We also propose a metric learning framework over 2PKNN. This is done in a large-margin set-up by generalizing a well-known (single-label) classification metric learning algorithm to multi-label data. In addition to the features provided by Guillaumin et al. (2009), which are used by almost all recent image annotation methods, we benchmark using new features that include features extracted from a generic convolutional neural network model and those computed using modern encoding techniques. We also learn linear and kernelized cross-modal embeddings over different feature combinations to reduce the semantic gap between visual features and textual labels. Extensive evaluations on four image annotation datasets (Corel-5K, ESP-Game, IAPR-TC12 and MIRFlickr-25K) demonstrate that our method achieves promising results and establishes a new state-of-the-art on the prevailing image annotation datasets.
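The two-pass scheme is simple enough to prototype directly. Below is a minimal NumPy sketch of the idea as described in the abstract (not the authors' code; the neighbourhood size K, the exponential distance weighting and the raw-feature distance are all assumptions):

```python
import numpy as np

def two_pass_knn(x, train_feats, train_labels, K=5, n_out=5):
    """Sketch of 2PKNN-style label transfer.
    x: (d,) query feature; train_feats: (N, d); train_labels: (N, L) binary."""
    d2 = ((train_feats - x) ** 2).sum(axis=1)          # squared L2 distance to every image
    L = train_labels.shape[1]
    # Pass 1 ("image-to-label"): per label, keep its K nearest positive images,
    # building a class-balanced neighbourhood that counters class imbalance.
    neighbourhood = set()
    for l in range(L):
        idx = np.flatnonzero(train_labels[:, l] == 1)
        if idx.size:
            neighbourhood.update(idx[np.argsort(d2[idx])[:K]].tolist())
    nb = np.fromiter(neighbourhood, dtype=int)
    # Pass 2 ("image-to-image"): weighted label transfer inside the neighbourhood.
    w = np.exp(-d2[nb])                                # similarity weights (assumed kernel)
    scores = w @ train_labels[nb]                      # (L,) accumulated label evidence
    return np.argsort(scores)[::-1][:n_out]            # top-ranked labels for the query
```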

2.
Objective: To address background interference in fine-grained image classification, we propose a classification model that segments objects using top-down attention maps. Method: First, a convolutional neural network is trained on the fine-grained image dataset to obtain a base model. Visualization analysis of this model reveals that only certain image regions contribute to the target class, so the learned base network is used to compute the spatial support of image pixels for the relevant class, generating a top-down attention map that localizes the key regions of the image. The attention map then initializes the GraphCut algorithm to segment the key object region, improving the discriminability of the image. Finally, CNN features are extracted from the segmented image for fine-grained classification. Results: Using only image-level class labels, the model is evaluated on the public fine-grained datasets Cars196 and Aircrafts100, achieving average classification accuracies of 86.74% and 84.70%, respectively. These results show that introducing attention information on top of the GoogLeNet model further improves fine-grained classification accuracy. Conclusion: The semantic segmentation strategy based on top-down attention maps improves fine-grained classification performance. Since no bounding-box or part annotations are required, the model is generic and robust, and is applicable to salient object detection, foreground segmentation and fine-grained image classification.
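The attention-to-segmentation step can be imitated with OpenCV's GrabCut, seeding its mask from a normalized attention map (a sketch under assumed thresholds; how the attention map itself is computed from the trained CNN is omitted here):

```python
import cv2
import numpy as np

def attention_grabcut(img_bgr, attn, hi=0.6, lo=0.2, iters=5):
    """Segment the attended object by initializing GrabCut with a top-down
    attention map. img_bgr: HxWx3 uint8; attn: HxW float in [0, 1]."""
    mask = np.full(attn.shape, cv2.GC_PR_BGD, np.uint8)  # default: probable background
    mask[attn > lo] = cv2.GC_PR_FGD                      # moderate attention: probable object
    mask[attn > hi] = cv2.GC_FGD                         # strong attention: definite object seed
    mask[attn < 0.05] = cv2.GC_BGD                       # near-zero attention: definite background
    bgd = np.zeros((1, 65), np.float64)                  # GrabCut's internal model buffers
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    return img_bgr * fg[:, :, None]                      # image with background zeroed out
```

CNN features would then be extracted from the returned foreground for the final fine-grained classifier.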

3.
Tian, Peng; Mo, Hongwei; Jiang, Laihao. Applied Intelligence, 2021, 51(11): 7781-7793.

Understanding a scene image involves detecting and recognizing objects, estimating the interaction relationships of the detected objects, and describing image regions with sentences. However, owing to the complexity and variety of scene images, existing methods treat object detection or visual relationship estimation as isolated research targets in scene understanding, and the obtained results are not satisfactory. In this work, we propose a Multi-level Semantic Tasks Generation Network (MSTG) that leverages the mutual connections across object detection, visual relationship detection and image captioning to solve the three vision tasks jointly, improve their accuracy, and achieve a more comprehensive and accurate understanding of the scene image. The model uses a message-passing graph to establish mutual connections and perform iterative updates across the different semantic features, improving the accuracy of scene graph generation, and introduces a fused attention mechanism to improve the accuracy of image captioning, while the mutual connection and refinement of the different semantic features also improve object detection and scene graph generation. Experiments on the Visual Genome and COCO datasets indicate that the proposed method can learn the three vision tasks jointly and improve the accuracy of all three.
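A toy version of the message-passing idea: the object, relationship and caption-region features attend to each other and are refined for a few iterations (my reading of the abstract; the dimensions, the GRU update and the shared attention module are assumptions):

```python
import torch
import torch.nn as nn

class MessagePass(nn.Module):
    """Iterative cross-task feature refinement over three semantic levels."""
    def __init__(self, d=256, steps=2):
        super().__init__()
        self.steps = steps
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.gru = nn.GRUCell(d, d)

    def forward(self, obj, rel, cap):
        # obj/rel/cap: (B, n_i, d) features for objects, relationships, caption regions
        feats = [obj, rel, cap]
        for _ in range(self.steps):
            updated = []
            for i, f in enumerate(feats):
                ctx = torch.cat([feats[j] for j in range(3) if j != i], dim=1)
                msg, _ = self.attn(f, ctx, ctx)       # gather messages from the other two levels
                B, n, d = f.shape
                f = self.gru(msg.reshape(-1, d), f.reshape(-1, d)).reshape(B, n, d)
                updated.append(f)
            feats = updated
        return feats  # refined features for the detection, scene-graph and captioning heads
```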


4.
Zhang, Haofeng; Long, Yang; Shao, Ling. Multimedia Tools and Applications, 2019, 78(17): 24147-24165.

Conventional zero-shot learning methods usually learn mapping functions that project image features into a semantic embedding space, in which the nearest neighbours with predefined attributes are found. The predefined attributes, covering both seen and unseen classes, are often annotated with high-dimensional real values by experts, which costs a great deal of human labour. In this paper, we propose a simple but effective method to reduce the annotation work. In our strategy, only the unseen classes need to be annotated, with several binary codes, which amounts to only about one percent of the original annotation work. In addition, we design a Visual Similes Annotation System (ViSAS) to annotate the unseen classes, and build both linear and deep mapping models and test them on four popular datasets; the experimental results show that our method outperforms state-of-the-art methods in most circumstances.
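The few-bit annotation scheme can be prototyped with a linear mapping into the code space; the ridge solution and the 0.5 binarization threshold below are assumptions, not the paper's model:

```python
import numpy as np

def fit_code_map(X, B, lam=1.0):
    """Ridge regression from visual features X (N, d) to binary class codes B (N, k)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)   # (d, k) mapping

def predict_unseen(x, W, unseen_codes):
    """Binarize the projected test feature and pick the unseen class whose
    annotated binary code is nearest in Hamming distance."""
    code = (x @ W > 0.5).astype(int)
    return int(np.abs(unseen_codes - code).sum(axis=1).argmin())
```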


5.
The explosion of the Internet provides us with a tremendous resource of images shared online. It also confronts vision researchers with the problem of finding effective methods to navigate this vast amount of visual information. Semantic image understanding plays a vital role in solving this problem. One important task in image understanding is object recognition, in particular generic object categorization. Critical to this problem are the issues of learning and datasets. Abundant data helps to train a robust recognition system, while a good object classifier can help to collect a large amount of images. This paper presents a novel object recognition algorithm that performs automatic dataset collection and incremental model learning simultaneously. The goal of this work is to use the tremendous resources of the web to learn robust object category models for detecting and searching for objects in real-world cluttered scenes. Humans continuously update their knowledge of objects when new examples are observed. Our framework emulates this human learning process by iteratively accumulating model knowledge and image examples. We adapt a non-parametric latent topic model and propose an incremental learning framework. Our algorithm is capable of automatically collecting much larger object category datasets for 22 randomly selected classes from the Caltech 101 dataset. Furthermore, our system offers not only more images in each object category but also a robust object category model and meaningful image annotation. Our experiments show that OPTIMOL is capable of collecting image datasets that are superior to the well-known manually collected object datasets Caltech 101 and LabelMe.
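The collect-and-learn loop can be caricatured in a few lines; an incremental SGD classifier stands in for the paper's non-parametric latent topic model, and the acceptance threshold is an assumption:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def optimol_style_loop(seed_X, seed_y, web_batches, accept=0.8):
    """Iteratively grow a dataset from web images while updating the model.
    web_batches yields (n_i, d) feature arrays for candidate web images."""
    clf = SGDClassifier(loss="log_loss")
    clf.partial_fit(seed_X, seed_y, classes=np.unique(seed_y))
    Xs, ys = [seed_X], [seed_y]
    for X in web_batches:
        conf = clf.predict_proba(X).max(axis=1)
        keep = conf > accept                    # accept only confidently classified images
        if keep.any():
            y = clf.predict(X[keep])            # pseudo-labels for the accepted images
            clf.partial_fit(X[keep], y)         # incremental model update
            Xs.append(X[keep]); ys.append(y)
    return clf, np.vstack(Xs), np.concatenate(ys)
```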

6.
Automatic image annotation is a challenging task of great importance for image analysis, understanding and retrieval. In automatic image annotation, a model of the relation between the semantic concept space and the visual feature space is learned from an annotated training set and then used to annotate unlabelled images. Because of the intricate correspondence between low-level features and high-level semantics, the accuracy of automatic annotation is still low. Under scene constraints, however, the mapping between annotations and visual features can be simplified, making automatic annotation more reliable. We therefore propose an image annotation method based on scene semantic trees. The annotated training images are first clustered automatically into semantic scenes, a visual scene space is generated for each scene category, and a semantic tree is built for each scene space. For an image to be annotated, its scene category is determined first, and the final annotation is then obtained through the corresponding scene semantic tree. On the Corel5K dataset, the method achieves annotation results superior to models such as TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model) and PLSA-GMM (probabilistic latent semantic analysis with Gaussian mixture models).

7.
Multi-label image classification is a research hotspot in multi-label data classification. Existing multi-label image classification methods learn only the visual representation of images, ignoring the correlations among image labels and the correspondence between label semantics and image features. To address this, we propose a multi-label image classification model based on a multi-head graph attention network and a graph model (ML-M-GAT). The model builds a graph from label co-occurrence relations and label attribute information, learns label attention weights with a multi-head attention mechanism, and uses these weights to fuse label semantic features with image features, thereby injecting label correlation and label semantic information into the multi-label classification model. To validate the model, experiments are conducted on the public datasets VOC-2007 and COCO-2014. ML-M-GAT achieves a mean average precision (mAP) of 94% and 82.2% on the two datasets, respectively, outperforming the CNN-RNN, ResNet101, MLIR and MIC-FLC models and improving upon ResNet101 by 4.2% and 3.9%. The proposed ML-M-GAT model can therefore exploit image label information to improve multi-label image classification performance.
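A compressed view of the label-graph component: multi-head attention over label word embeddings, masked by the co-occurrence graph, produces one classifier per label that is matched against the image feature (the sizes and the masking scheme are my assumptions):

```python
import torch
import torch.nn as nn

class LabelGAT(nn.Module):
    """Multi-head graph attention over label embeddings, fused with image features."""
    def __init__(self, d_label=300, d_img=2048, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_label, heads, batch_first=True)
        self.proj = nn.Linear(d_label, d_img)

    def forward(self, label_emb, no_cooccur, img_feat):
        # label_emb: (L, d_label) label word vectors; no_cooccur: (L, L) bool,
        # True where two labels never co-occur; img_feat: (B, d_img) CNN features.
        e = label_emb.unsqueeze(0)                         # (1, L, d_label)
        e, _ = self.attn(e, e, e, attn_mask=no_cooccur)    # attend along co-occurrence edges
        classifiers = self.proj(e.squeeze(0))              # (L, d_img): one classifier per label
        return img_feat @ classifiers.t()                  # (B, L) multi-label scores
```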

8.
Because of the pervasive "semantic gap" in image data, traditional content-based image retrieval often falls short for image retrieval in digital libraries, and image annotation can effectively compensate for the missing semantics. This paper analyses the state of the art and the open problems of semantic image annotation, and proposes a semantic annotation method for cultural-relic images based on semantic classification. The algorithm first builds a Bayes semantic classifier to assign the image to be annotated to a semantic class, and then annotates the image through a statistics-based annotation model built within that semantic class. In experiments on annotating cultural-relic images, the method achieves good annotation accuracy and efficiency.
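A minimal two-stage sketch, with a Gaussian naive Bayes classifier standing in for the paper's Bayes semantic classifier and per-class label frequencies standing in for its statistical annotation model (both substitutions are mine):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def two_stage_annotate(X_train, class_train, label_freq, x_test, topk=5):
    """Stage 1: predict the semantic class of the test image.
    Stage 2: rank annotation words by their class-conditional statistics.
    label_freq: dict mapping a semantic class to an (L,) word-frequency vector."""
    clf = GaussianNB().fit(X_train, class_train)
    sem_class = clf.predict(x_test[None])[0]
    words = np.argsort(label_freq[sem_class])[::-1][:topk]
    return sem_class, words
```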

9.
The semantic gap has become a bottleneck of content-based image retrieval in recent years. To bridge the gap and improve retrieval performance, automatic image annotation has emerged as a crucial problem. In this paper, a hybrid approach is proposed to learn the semantic concepts of images automatically. First, we present continuous probabilistic latent semantic analysis (PLSA) and derive its corresponding Expectation-Maximization (EM) algorithm. Continuous PLSA assumes that elements are sampled from a multivariate Gaussian distribution given a latent aspect, instead of the multinomial one in traditional PLSA. We then propose a hybrid framework which employs continuous PLSA to model the visual features of images in a generative learning stage and uses ensembles of classifier chains to classify the multi-label data in a discriminative learning stage. The framework can thus learn the correlations between features as well as the correlations between words. Since the hybrid approach combines the advantages of generative and discriminative learning, it can predict semantic annotations precisely for unseen images. Finally, we conduct experiments on three baseline datasets, and the results show that our approach outperforms many state-of-the-art approaches.
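The generative stage can be prototyped with a few lines of EM. The sketch below uses isotropic Gaussian aspects for brevity (the paper's derivation is more general; initialization, iteration count and smoothing are arbitrary choices):

```python
import numpy as np

def continuous_plsa(docs, K=5, iters=20, eps=1e-9):
    """Bare-bones EM for continuous PLSA with isotropic Gaussian aspects.
    docs: list of (n_i, d) arrays, the feature vectors of each image."""
    d = docs[0].shape[1]
    rng = np.random.default_rng(0)
    mu, var = rng.normal(size=(K, d)), np.ones(K)
    pz_d = [np.full(K, 1.0 / K) for _ in docs]              # p(z | image)
    for _ in range(iters):
        post, allx = [], []
        for i, X in enumerate(docs):
            # E-step: responsibility p(z | image, x) for every feature vector
            sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)  # (n_i, K)
            logp = -0.5 * (sq / var + d * np.log(2 * np.pi * var)) + np.log(pz_d[i] + eps)
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            post.append(p); allx.append(X)
            pz_d[i] = p.mean(axis=0)                        # per-image aspect weights
        P, Xall = np.vstack(post), np.vstack(allx)          # M-step: shared Gaussians
        Nk = P.sum(axis=0) + eps
        mu = (P.T @ Xall) / Nk[:, None]
        var = np.array([(P[:, k] * ((Xall - mu[k]) ** 2).sum(axis=1)).sum() / (d * Nk[k])
                        for k in range(K)]) + eps
    return mu, var, pz_d
```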

10.
It is a remarkable fact that images are related to the objects constituting them. In this paper, we propose to represent images by the objects appearing in them. We introduce the novel concept of an object bank (OB), a high-level image representation encoding object appearance and spatial location information in images. OB represents an image by its responses to a large number of pre-trained object detectors, or 'object filters', blind to the testing dataset and visual recognition task. Our OB representation demonstrates promising potential in high-level image recognition tasks: it significantly outperforms traditional low-level image representations in image classification on various benchmark image datasets using simple, off-the-shelf classification algorithms such as linear SVM and logistic regression. In this paper, we analyze OB in detail, explaining our design choices for achieving its best potential on different types of datasets. We demonstrate that Object Bank is a high-level representation from which semantic information about unknown images can easily be discovered. We provide guidelines for effectively applying OB to high-level image recognition tasks, where it can easily be compressed for efficient computation in practice and is very robust to various classifiers.
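The representation itself is almost trivial to assemble once the detectors exist; `detectors` below is a hypothetical list of callables mapping an image to a response map, and the paper's spatial pooling is collapsed to a global max for brevity:

```python
import numpy as np
from sklearn.svm import LinearSVC

def object_bank(images, detectors):
    """Describe each image by the peak response of many pre-trained
    'object filters' (one feature dimension per detector)."""
    return np.asarray([[det(img).max() for det in detectors] for img in images])

# Usage sketch: with such a high-level representation, an off-the-shelf
# linear classifier suffices, as the paper reports.
# X = object_bank(train_images, detectors)
# clf = LinearSVC().fit(X, train_classes)
```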

11.
We propose an EM-based unsupervised algorithm for localizing image-level multi-labels to their corresponding regions, which can effectively assign global image labels to the matching local regions. Dense SIFT features are first sampled over all images and clustered with K-means to obtain a visual vocabulary. An EM procedure is then constructed to compute, for every image, the confidence of each label with respect to each visual word present in the image. Finally, the words with the highest confidence are selected to determine, for each label, its best-matching region in the image. Experiments show that, given sufficient training samples, the algorithm performs well on unsupervised automatic localization, on the appearance diversity of labels, and on multi-label cases.
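A bare-bones version of the confidence estimation: one multinomial over the K-means vocabulary per label, fitted by EM over the per-image label sets (a simplification of the algorithm above; the smoothing constant and iteration count are assumptions):

```python
import numpy as np

def label_word_confidence(img_words, img_labels, V, L, iters=10):
    """img_words: per-image arrays of visual-word ids (dense SIFT -> K-means);
    img_labels: per-image arrays of label ids; returns p(word | label)."""
    conf = np.full((L, V), 1.0 / V)
    for _ in range(iters):
        counts = np.zeros((L, V)) + 1e-6
        for words, labels in zip(img_words, img_labels):
            w = conf[labels][:, words]                # (n_labels, n_words) affinities
            w /= w.sum(axis=0, keepdims=True)         # E-step: share each word among labels
            for i, l in enumerate(labels):
                np.add.at(counts[l], words, w[i])     # M-step accumulation
        conf = counts / counts.sum(axis=1, keepdims=True)
    return conf   # per image, the highest-confidence words localize each label's region
```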

12.
Objective: With the rapid development of hyperspectral imaging, hyperspectral data are used ever more widely, and the demand for highly accurate, detailed annotation of hyperspectral images across application scenes keeps growing. Existing hyperspectral classification models are mostly supervised and are trained and evaluated on a single hyperspectral data cube. Because acquisition scenes differ and land-cover categories are inconsistent across datasets, a trained model cannot be transferred directly to a new dataset to obtain reliable annotations, which limits the further development of hyperspectral classification models. This paper proposes a paradigm for training and evaluating hyperspectral classification models across datasets. Method: Inspired by zero-shot learning, we introduce the semantic information of hyperspectral class labels. The raw data and label information of different datasets are mapped into a common feature space to relate seen and unseen classes; the two parts of the training-set features are then mapped into a unified embedding space to learn the correspondence between the visual features of hyperspectral images and the semantic features of class labels, and this correspondence is applied to the test dataset for label inference. Results: Experiments are conducted on a pair of datasets acquired by the same sensor, comparing the semantic-to-visual and visual-to-semantic mapping directions and five zero-shot feature mapping methods, realizing cross-dataset training and evaluation of classification models for hyperspectral image classification. Conclusion: The results show that the proposed zero-shot hyperspectral classification model enables cross-dataset training and evaluation and has development potential for hyperspectral image classification.
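Both mapping directions compared above reduce, in the simplest linear case, to a ridge regression between the feature and label-embedding spaces; a visual-to-semantic sketch (the label embeddings and the cosine inference rule are my assumptions):

```python
import numpy as np

def ridge(X, Y, lam=1.0):
    """Closed-form ridge map X -> Y; swap the arguments for the opposite direction."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def infer_label(x, W, label_emb):
    """Project a test pixel's features and return the nearest class embedding."""
    s = x @ W
    sims = label_emb @ s / (np.linalg.norm(label_emb, axis=1) * np.linalg.norm(s) + 1e-12)
    return int(np.argmax(sims))
```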

13.
Mo, Hongwei; Tian, Peng. Control and Decision, 2021, 36(12): 2881-2890.
Visual scene understanding includes detecting and recognizing objects, reasoning about the visual relationships among the detected objects, and describing image regions with sentences. To achieve a more comprehensive and accurate understanding of scene images, object detection, visual relationship detection and image captioning are treated as three vision tasks at different semantic levels, and an image understanding model based on multi-level semantic features is proposed that interconnects the three levels to solve the scene understanding task jointly. The model iteratively updates the semantic features of objects, relationship phrases and image captions through a message-passing graph; the updated semantic features are used to classify objects and visual relationships and to generate scene graphs and captions, and a fused attention mechanism is introduced to improve captioning accuracy. Experimental results on the Visual Genome and COCO datasets show that the proposed method outperforms existing methods on scene graph generation and image captioning.

14.
Because remote sensing images contain diverse object categories, a single semantic label cannot fully describe the image content, and multi-label classification is considerably more challenging. By exploring deep graph convolutional networks (GCN), we address the lack of exploitation of label semantic correlations in multi-label remote sensing image classification and propose a new GCN-based multi-label remote sensing classification network consisting of three parts: an image feature learning module, a GCN-based classifier learning module and an image feature differentiation module. Compared with related models on the public multi-label remote sensing datasets Planet and UCM, the network obtains good classification results on the multi-label remote sensing classification task. The method brings graph-convolutional multi-label classification to the remote sensing domain, improving classification performance and shortening training time.
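The classifier-learning module can be sketched as a small GCN over the label co-occurrence graph that emits one classifier per label (an ML-GCN-style reading of the abstract; layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class LabelGCN(nn.Module):
    """Two-layer GCN on label embeddings; the output rows act as label classifiers."""
    def __init__(self, d_in=300, d_hid=512, d_img=2048):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_hid)
        self.w2 = nn.Linear(d_hid, d_img)

    def forward(self, label_emb, A_hat, img_feat):
        # A_hat: (L, L) normalized label co-occurrence adjacency
        h = torch.relu(A_hat @ self.w1(label_emb))
        classifiers = A_hat @ self.w2(h)          # (L, d_img)
        return img_feat @ classifiers.t()         # (B, L) multi-label scores
```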

15.
To address the missing image-text semantics and ambiguous multimodal representations in multimodal named entity recognition, we propose an image-text semantically enhanced multimodal named entity recognition method. Several pre-trained models are used to extract text features, character features, region-level visual features, image keywords and visual tags, providing a comprehensive description of the semantics of the image-text data. A Transformer and a cross-modal attention mechanism mine the complementary semantic relations between image and text features to guide feature fusion, producing semantically completed text representations and semantically enhanced multimodal representations. The boundary detection, entity category detection and named entity recognition tasks are integrated into a multi-task label decoder that decodes the input features at a fine semantic granularity to improve the semantic accuracy of the predictions; this decoder jointly decodes the text and multimodal representations to obtain globally optimal predicted labels. Extensive experiments on the Twitter-2015 and Twitter-2017 benchmark datasets show average F1 improvements of 1.00% and 1.41%, respectively, indicating that the model has strong named entity recognition ability.
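The cross-modal attention step reduces to text tokens querying region features; a minimal sketch (the shared dimension and the residual LayerNorm are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class CrossModalFuse(nn.Module):
    """Text tokens attend over visual regions to build enhanced representations."""
    def __init__(self, d=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, tokens, regions):
        # tokens: (B, T, d) text features; regions: (B, R, d) visual features
        vis, _ = self.attn(tokens, regions, regions)  # each token gathers visual cues
        return self.norm(tokens + vis)                # semantically enhanced tokens
```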

16.
To bridge the semantic gap between low-level visual features and high-level semantics in automatic image annotation, we propose an automatic annotation method based on multi-label discriminative dictionary learning, built on traditional dictionary learning. First, multiple types of features are extracted for each image and combined as the input feature space for dictionary learning. A label-consistency regularization term is then designed to inject the label information of the original samples into the initial input features, and dictionary learning is performed with a label-consistent discriminative dictionary together with this regularization term. Finally, a sparse label vector is solved from the learned dictionary and sparse coding matrix to annotate unseen images semantically. On the Corel 5K dataset, the method reaches an average precision of 35% and an average recall of 48%, which are 10 and 16 percentage points higher than the traditional sparse coding method (MSC) and 3 and 14 percentage points higher than distance-constrained sparse/group-sparse coding (DCSC/DCGSC). The results show that the method predicts the semantics of unseen images well and compares favourably with several popular image annotation methods.
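Plain dictionary learning already gives the skeleton of this pipeline; the paper's label-consistency regularizer is approximated below by regressing labels onto the sparse codes after the fact (a simplification, not the proposed objective):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def annotate(X_train, Y_train, x_test, n_atoms=128, topk=5):
    """X_train: (N, d) fused multi-feature vectors; Y_train: (N, L) binary labels."""
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars")
    A = dl.fit_transform(X_train)                    # sparse codes of the training images
    W, *_ = np.linalg.lstsq(A, Y_train, rcond=None)  # map codes to label vectors
    a = dl.transform(x_test[None])                   # code of the unseen image
    return np.argsort((a @ W).ravel())[::-1][:topk]  # top-ranked annotation words
```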

17.
When convolutional networks are used for feature extraction in image semantic segmentation, the repeated combination of max pooling and downsampling reduces feature resolution, losing context information and making the segmentation result insensitive to object locations. Although encoder-decoder networks gradually refine the output through skip connections while recovering resolution, simply summing adjacent features ignores their differences and easily leads to local misidentification of objects. We therefore propose an image semantic segmentation method based on deep feature fusion. It adopts a network structure of several fully convolutional VGG16 models combined in parallel, uses dilated convolution to process the multi-scale images of a pyramid efficiently in parallel, extracts context features at multiple levels, and fuses them layer by layer in a top-down manner to capture as much context information as possible. A layer-wise label supervision strategy based on an improved loss function serves as auxiliary support, combined with a fully connected conditional random field for back-end pixel modelling, which offers some optimization in both training difficulty and prediction accuracy. Experimental data show that by fusing, layer by layer, the deep features that represent context at different scales, the segmentation algorithm improves both object classification and the localization of spatial details. On the PASCAL VOC 2012 and PASCAL CONTEXT datasets, the method achieves mIoU accuracies of 80.5% and 45.93%, respectively, demonstrating that the deep feature extraction, layer-wise feature fusion and layer-wise label supervision in the parallel framework jointly optimize the architecture. Feature comparisons show that the model captures rich context information and obtains finer image semantic features, a clear advantage over comparable methods.
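The parallel dilated-convolution idea is easy to isolate; a minimal block in the spirit of the description (channel counts and dilation rates are assumptions, and the layer-wise supervision and CRF are omitted):

```python
import torch
import torch.nn as nn

class DilatedFusion(nn.Module):
    """Parallel 3x3 convolutions with growing dilation, fused by a 1x1 conv."""
    def __init__(self, c_in=512, c=256, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(c * len(rates), c, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]  # same size, different context
        return self.fuse(torch.cat(feats, dim=1))          # fused multi-scale features
```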

18.
To address the low segmentation accuracy of small objects in existing image semantic segmentation, a convolutional self-correcting semantic segmentation model combined with contextual attention is proposed. A contextual attention mechanism mines fine-grained features within local regions, and a contextual recurrent neural network with residual learning fully mines the deep latent semantic features of the image; an auxiliary segmentation model is built to generate per-pixel label distributions given the image and bounding-box annotations, and a convolutional self-correcting model is proposed to realize the segmentation…

19.
Deep learning has achieved great success in many tasks by relying on big data, but most current methods depend on rigorously annotated data, or assume that an image contains only one object roughly centred against limited background. Real-world scenes have complex backgrounds and diverse objects, which makes classification harder, and annotation is expensive. Focusing on classification under weak supervision, this paper proposes a deep model that combines an attention mechanism with recurrent neural networks. It performs multi-label learning from image-level annotations and adjusts the attended regions automatically through gradient-descent training on the loss function, so that the model attends to a local region of the image at each step. The effectiveness of the algorithm is verified on the PASCAL VOC 2007/2012 datasets, and the model is more interpretable than other methods.

20.
Object-level image annotation has long been an important problem in image processing and computer vision, and the multi-scale and deformable nature of image objects makes it difficult. Object segmentation and object recognition are the two key problems in object-level annotation. This paper proposes an object image annotation method based on formal concept analysis (FCA) and semantic association rules. To handle the heavy overlap among the image patches produced by object proposal algorithms, we borrow the concept-lattice idea from FCA and group the patches into clusters according to their commonalities to mine category patterns; object noise patches and background noise patches are removed with a category probability distribution test and a flatness test, respectively, yielding object semantic clusters. For semantic object discrimination, the features of the valid image clusters are first fused into a common description and classified to produce the initial object annotation; semantic association rules are then mined from the images' semantic annotation words to complement the annotation, avoiding the loss of small semantic objects when mining category patterns. Experiments show that the proposed algorithm ensures both the accuracy and the completeness of semantic annotation and achieves good annotation performance.
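For intuition, formal concepts over proposal boxes can be enumerated brute-force when the attribute set is tiny (objects: box -> set of shared visual attributes; purely illustrative and exponential in the number of attributes):

```python
from itertools import combinations

def formal_concepts(objects):
    """Enumerate all (extent, intent) formal concepts of a small context."""
    attrs = set().union(*objects.values())
    concepts = set()
    for r in range(len(attrs) + 1):
        for intent in combinations(sorted(attrs), r):
            extent = {o for o, a in objects.items() if set(intent) <= a}
            closed = (set.intersection(*(objects[o] for o in extent))
                      if extent else attrs)          # closure of the candidate intent
            concepts.add((frozenset(extent), frozenset(closed)))
    return concepts

# e.g. clustering proposal boxes by shared visual words:
# formal_concepts({"box1": {"w3", "w7"}, "box2": {"w3"}, "box3": {"w7", "w9"}})
```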
