Similar Literature
20 similar records retrieved.
1.
In this paper, we propose a novel scene categorization method based on contextual visual words. The proposed method extends the traditional 'bag of visual words' model by introducing contextual information, learned without supervision from the coarser scale and from neighborhood regions, into the local region of interest. The introduced context provides useful cues about the region of interest, reducing the ambiguity that arises when visual words represent local regions in isolation. The improved visual-word representation of the scene image in turn improves categorization performance. The proposed method is evaluated on three scene classification datasets with 8, 13, and 15 scene categories, respectively, using 10-fold cross-validation. The experimental results show that the proposed method achieves 90.30%, 87.63%, and 85.16% recognition accuracy on Datasets 1, 2, and 3, respectively, significantly outperforming methods based on visual words that capture only local information in a purely statistical manner. We also compared the proposed method with three representative scene categorization methods; the results confirm its superiority.
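The contextual-word idea can be made concrete with a minimal toy sketch. Everything here is a hypothetical stand-in for the paper's actual pipeline (random descriptors on a small grid, a random 10-word codebook, hard nearest-word assignment): each local region's word is paired with the word of the coarser-scale cell covering it, and the histogram is built over the joint (local, context) words instead of local words alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 64 local descriptors on an 8x8 grid, 16-D each; a coarser
# 4x4 grid is formed by averaging each 2x2 block of fine cells.
fine = rng.normal(size=(8, 8, 16))
coarse = fine.reshape(4, 2, 4, 2, 16).mean(axis=(1, 3))

codebook = rng.normal(size=(10, 16))      # hypothetical 10-word codebook

def quantize(desc):
    """Assign each descriptor to its nearest visual word (hard BoVW)."""
    d = ((desc[..., None, :] - codebook) ** 2).sum(-1)
    return d.argmin(-1)

words_fine = quantize(fine)               # (8, 8) word ids
words_coarse = quantize(coarse)           # (4, 4) word ids

# Contextual word = (local word, word of the coarser cell covering it).
context = words_coarse[np.arange(8)[:, None] // 2, np.arange(8)[None, :] // 2]
joint = words_fine * 10 + context         # 100 possible contextual words

hist = np.bincount(joint.ravel(), minlength=100).astype(float)
hist /= hist.sum()                        # normalized contextual-word histogram
```

The joint histogram is 10x larger than the plain word histogram, which is the usual price of adding context to a codebook.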

2.
Objective: Copy image retrieval methods based on the bag-of-words model are currently the most effective. However, information loss during local-feature quantization weakens the discriminative power of visual words and increases false matches, degrading retrieval performance. To address visual-word mismatching, this paper proposes a copy image retrieval method based on nearest-neighbor context, which disambiguates visual words through the contextual relations of local features, improving their discriminability and hence retrieval performance. Method: First, the feature points around a local feature point are selected as its context according to distance and scale relations; the selected points are called neighbor feature points. A context descriptor is then built for each local feature from the information of its neighbor points and their relations to it. Next, candidate matches between local features are verified by computing the similarity of their context descriptors. Finally, image similarity is measured by the number of correctly matched feature points, and the top-ranked candidate images are returned. Results: In experiments on the Copydays dataset with 100 k distractor images, mAP improves by 63% over the Baseline method. When the distractor set grows from 100 k to 1 M, the Baseline's mAP drops by 9% while the proposed method's drops by only 3%. Conclusion: The proposed copy image retrieval method is robust to editing operations such as rotation, image overlay, scaling, and cropping, and can be effectively applied to image forgery detection and image deduplication.
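A minimal sketch of the neighbor-context verification idea, under strong simplifying assumptions (random toy keypoints standing in for detected features, word ids from an unspecified quantizer, and Jaccard overlap of neighbor word sets standing in for the paper's context-descriptor similarity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy keypoints: positions, scales, and quantized word ids.
pts = rng.uniform(0, 100, size=(30, 2))
scales = rng.uniform(1, 4, size=30)
words = rng.integers(0, 50, size=30)

def context_words(i, k=5):
    """Word ids of the k spatially nearest keypoints (the neighbor context)."""
    d = np.linalg.norm(pts - pts[i], axis=1)
    d[i] = np.inf                       # exclude the point itself
    return set(words[np.argsort(d)[:k]])

def verify(i, j, k=5, min_overlap=0.4):
    """Accept a word match only if the two neighbor contexts agree."""
    a, b = context_words(i, k), context_words(j, k)
    return len(a & b) / len(a | b) >= min_overlap

ok = verify(3, 3)   # a keypoint's context fully overlaps with itself
```

In the real method, the two keypoints come from two different images, and geometric relations (not just word identity) enter the descriptor.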

3.
This paper presents a shift-invariant scene classification method based on local autocorrelation of similarities with subspaces. While conventional scene classification methods use bag-of-visual-words, kernel principal component analysis (KPCA) of visual words has been reported to achieve superior accuracy, and we likewise use KPCA of visual words to extract rich information for classification. In the original KPCA of visual words, all local parts mapped into the subspace were integrated by summation to be robust to the order, number, and shift of local parts. This approach discarded properties effective for scene classification, such as the relations with neighboring regions. To exploit them, we use the (normalized) local autocorrelation (LAC) of the similarities with subspaces (the outputs of KPCA of visual words). This feature captures both the relations with neighboring regions and robustness to shifts of objects in scenes. The proposed method is compared with conventional scene classification methods using the same database and protocol, and we demonstrate its effectiveness.

4.
李策  贾盛泽  曲延云 《自动化学报》2019,45(6):1198-1206
To address the difficulty of feature extraction and the lack of corresponding labels in mapping the visual material features of objects in natural scene images, this paper proposes a material visual feature mapping algorithm for objects in natural scenes. First, a reflectance-layer image that captures the important visual characteristics of material is extracted from the input image. Then, foreground/background segmentation is performed on the reflectance image to obtain the object image. Finally, a cycle-consistent generative adversarial network learns the material visual features without supervision, yielding a high-order representation of the object's material visual feature space and realizing the feature mapping. Experimental results show that the proposed algorithm effectively captures and maps the material visual features of objects in natural scene images, with better subjective and objective performance than comparable algorithms.

5.
To overcome the complex training and large space-time overhead of image annotation models based on deep features, this paper proposes an annotation method that represents image visual features with intermediate-layer activations of a deep network and represents each semantic concept with the mean vector of its positive examples. First, convolution outputs from an intermediate layer of a pre-trained deep model are taken directly as low-level visual features, and images are represented by sparse coding. Then, a visual feature vector is constructed for each textual word as the mean vector of its positive examples, building a library of word-level visual feature vectors. Finally, the similarity between a test image and every word's visual feature vector is computed, and the most similar words are taken as annotations. Experiments on multiple datasets demonstrate the effectiveness of the proposed method: in terms of F1 score on the IAPR TC-12 dataset, its annotation performance exceeds that of 2PKNN and JEC with end-to-end deep features by 32% and 60%, respectively.
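The positive-example mean-vector step is concrete enough to sketch. Assuming toy features in place of real intermediate-layer activations (dimensions and label sets below are hypothetical), each word's prototype is the mean of the features of its positive examples, and annotation is a top-k cosine-similarity lookup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "deep" visual features for 20 training images over 5 words;
# the label sets are constructed so every word has positive examples.
feats = rng.normal(size=(20, 64))
labels = [{i % 5, (i + 1) % 5} for i in range(20)]

# One visual prototype per word: mean feature of its positive examples.
protos = np.stack([
    feats[[w in ls for ls in labels]].mean(axis=0) for w in range(5)
])

def annotate(x, topk=2):
    """Return the topk words whose prototype is most cosine-similar to x."""
    sim = protos @ x / (np.linalg.norm(protos, axis=1) * np.linalg.norm(x))
    return list(np.argsort(-sim)[:topk])

tags = annotate(feats[0])
```

The appeal of this scheme is that "training" is a single mean per word, so adding a new vocabulary word needs no retraining of the deep model.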

6.
This paper proposes a novel method based on Spectral Regression (SR) for efficient scene recognition. First, a new SR approach, called Extended Spectral Regression (ESR), is proposed to perform manifold learning on a huge number of data samples. Then, an efficient Bag-of-Words (BOW) based method is developed which employs ESR to encapsulate local visual features with their semantic, spatial, scale, and orientation information for scene recognition. In many applications, such as image classification and multimedia analysis, a training set contains a huge number of low-level feature samples, which prohibits direct application of SR for manifold learning on such a dataset. In ESR, we first group the samples into tiny clusters, and then devise an approach to reduce the size of the similarity matrix for graph learning. In this way, subspace learning on the graph Laplacian of a vast dataset becomes computationally feasible on a personal computer. In ESR-based scene recognition, we first propose an enhanced low-level feature representation which combines the scale, orientation, spatial position, and local appearance of a local feature. Then, ESR is applied to embed the enhanced low-level image features. The ESR-based feature embedding not only produces a low-dimensional feature representation but also integrates the various aspects of the low-level features into that compact representation. The bag-of-words is then generated from the embedded features for image classification. Comparative experiments on open benchmark datasets for scene recognition demonstrate that the proposed method outperforms baseline approaches. It is suitable for real-time applications on mobile platforms, e.g. tablets and smartphones.
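The cluster-then-embed trick that makes the graph learning tractable can be sketched roughly as follows. Plain k-means and a Gaussian affinity are stand-ins for ESR's actual grouping and graph construction; the point is only that the similarity matrix shrinks from samples x samples to clusters x clusters:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))            # many low-level feature samples

# Step 1: group samples into tiny clusters (plain k-means as a stand-in).
K = 20
centers = X[rng.choice(len(X), K, replace=False)].copy()
for _ in range(10):
    assign = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
    for k in range(K):
        if (assign == k).any():
            centers[k] = X[assign == k].mean(0)

# Step 2: graph learning on the K centers instead of the 500 samples,
# shrinking the similarity matrix from 500x500 to KxK.
d2 = ((centers[:, None] - centers) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
D = np.diag(W.sum(1))
L = D - W                                 # unnormalized graph Laplacian

# Step 3: spectral embedding of the centers (smallest nontrivial eigenvectors).
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:4]                  # 3-D embedding of the 20 clusters
```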

7.
This paper proposes a method for scene categorization that integrates region contextual information into the popular Bag-of-Visual-Words approach. The Bag-of-Visual-Words approach describes an image as a bag of discrete visual words, and the frequency distributions of these words are used for image categorization. However, traditional visual words struggle with patches that have similar appearance but distinct semantic concepts. This drawback stems from constructing each visual word independently. This paper introduces a Region-Conditional Random Field model that learns each visual word conditioned on the rest of the visual words in the same region. Compared with the traditional Conditional Random Field model, there are two novelties. First, the initial label of each patch is defined automatically from its visual features rather than by manual semantic labeling. Second, a novel potential function is built under the region contextual constraint. Experimental results on three well-known datasets show that Region Contextual Visual Words indeed improve categorization performance compared to traditional visual words.

8.
A Scene Classification Algorithm Based on a Global Optimization Strategy
This paper proposes a scene classification algorithm based on a global optimization strategy. The algorithm extracts a global scene feature, the spatial envelope (gist), from the whole image. Visual words are extracted from image patches, with hidden variables defined to represent each word's semantics, and a hidden-state structural graph is introduced to describe the visual-word context of the whole image. For classification, an objective function is constructed from compatibility functions that measure the agreement among the global scene feature, the hidden variables, and the scene class label; the image's scene label is inferred by finding the global optimum of this objective. Comparative experiments on standard scene image datasets show that the algorithm outperforms current representative scene classification algorithms.

9.
This paper presents a scene classification method based on local co-occurrence in a KPCA space of local blob words. Scene classification based on local correlation of binarized projection lengths in subspaces obtained by Kernel Principal Component Analysis (KPCA) of visual words was recently proposed and shown to be effective. However, the local correlation of two binary features (0 or 1) becomes 1 only when both features take the value 1; it becomes 0 in all other cases ((0,1), (1,0), and (0,0)), which may lose information useful for classification. In this study, all combinations of co-occurrence of binary features are used instead of local correlation. Experiments on a database containing 13 scene categories show that the proposed method using local co-occurrence features achieves an accuracy of more than 84%, higher than that of conventional methods based on local correlation features.
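The difference between local correlation and full co-occurrence is easy to make concrete. In this toy sketch, random bits on a grid stand in for binarized KPCA projection lengths; horizontal neighbor pairs are histogrammed into all four (0,0)/(0,1)/(1,0)/(1,1) bins, of which plain local correlation keeps only the last:

```python
import numpy as np

rng = np.random.default_rng(4)

# Binarized projection lengths on an 8x8 grid for one subspace.
b = (rng.random((8, 8)) > 0.5).astype(int)

# All horizontal neighbor pairs on the grid.
pairs = np.stack([b[:, :-1].ravel(), b[:, 1:].ravel()], axis=1)

cooc = np.zeros(4)
for x, y in pairs:
    cooc[2 * x + y] += 1        # bins: (0,0), (0,1), (1,0), (1,1)

local_corr = cooc[3]            # the single bin the older feature kept
```

The three extra bins are exactly the information the paper argues local correlation throws away.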

10.
Automatic image annotation is a challenging task that matters for both image analysis/understanding and image retrieval. It learns, from an annotated image set, a model of the relation between the semantic concept space and the visual feature space, then uses this model to annotate unlabeled images. Because of the intricate correspondence between low-level features and high-level semantics, annotation accuracy remains low. Under scene constraints, however, the mapping between annotations and visual features can be simplified, improving annotation reliability. This paper therefore proposes an image annotation method based on scene semantic trees. The training images are first clustered automatically into semantic scene classes, a visual scene space is generated for each class, and a semantic tree is built for each scene space. For an image to be annotated, its scene class is determined first, and the final annotations are then obtained through the corresponding scene semantic tree. On the Corel5K dataset, the method outperforms TM (translation model), CMRM (cross-media relevance model), CRM (continuous-space relevance model), and PLSA-GMM (probabilistic latent semantic analysis with a Gaussian mixture model).

11.
12.
史静  朱虹  王栋  杜森 《中国图象图形学报》2017,22(12):1750-1757
Objective: Owing to the diversity and complexity of scenes' internal structure and the effects of illumination and viewpoint, most existing scene classification algorithms simply extract features for modeling without considering the relationships among objects within the scene, and therefore still fall short of ideal performance. Addressing these key difficulties, this paper fully considers the characteristics of human visual perception: it applies saliency detection and combines it with the traditional bag-of-visual-words model to propose a scene classification algorithm that fuses visual-perception characteristics. Method: First, the image is decomposed at multiple scales and features are extracted at each scale. Next, visually salient regions are detected at each scale. Finally, the saliency information and the multi-scale features are fused into a weighted window-selected SIFT feature (WSSIFT) for scene classification. Results: To verify its effectiveness, the algorithm was tested on three standard datasets, SE, LS, and IS, and compared with other methods; classification accuracy improves by roughly 3% to 17%. Conclusion: The proposed algorithm alleviates the limitations of pure feature description and improves the overall image representation. Experiments show that it classifies well on multiple datasets and suits machine vision tasks such as scene analysis, understanding, and classification.

13.
王彦杰  刘峡壁  贾云得 《软件学报》2012,23(7):1787-1795
Based on statistical modeling and discriminative learning of visual words, this paper proposes a soft-histogram image representation over visual words. Local features belonging to the same visual word are assumed to follow a Gaussian mixture distribution, which is estimated from samples with a max-min posterior pseudo-probability discriminative learning method, yielding the similarity between a local feature and a visual word. The similarities between each visual word and its corresponding local features are accumulated over the image and normalized over the whole vocabulary, giving the image's visual-word soft histogram. Two concrete variants are discussed: a classification-based soft histogram, which links each local feature to the visual word of maximum similarity, and a completely soft histogram, which matches every local feature to all visual words. Experimental results on Caltech-4 and PASCAL VOC 2006 demonstrate the effectiveness of the method.
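Both soft-histogram variants can be sketched in a few lines. Here a softmax over squared distances stands in for the learned Gaussian-mixture pseudo-probabilities, and the codebook and descriptors are random toy data:

```python
import numpy as np

rng = np.random.default_rng(5)

descs = rng.normal(size=(100, 16))       # local features of one image
words = rng.normal(size=(8, 16))         # hypothetical visual-word centers

# Similarity of each feature to each word (softmax over squared distances,
# standing in for the paper's discriminatively learned pseudo-probabilities).
d2 = ((descs[:, None] - words) ** 2).sum(-1)
sim = np.exp(-d2)
sim /= sim.sum(1, keepdims=True)

# Completely soft histogram: every feature contributes to every word.
soft = sim.sum(0) / sim.sum()

# Classification-based variant: each feature votes only for its best word.
hard = np.bincount(sim.argmax(1), minlength=8) / len(descs)
```

The completely soft variant degrades gracefully when a feature sits between two words, which is exactly the quantization-ambiguity case hard assignment handles badly.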

14.
谢飞  龚声蓉  刘纯平  季怡 《计算机科学》2015,42(11):293-298
Visual-word-based human action recognition improves accuracy by injecting mid-level semantic information into the features. However, mutual interference between foreground and background during visual-word extraction weakens the words' descriptive power. This paper proposes a visual-word generation method that combines local and global features. A saliency map first detects the foreground person region; a proposed dynamic threshold matrix then detects spatio-temporal interest points within the person region using per-region thresholds, and 3D-SIFT features are computed around them to describe local information. On this basis, histogram-of-optical-flow features describe the global motion of the action. The local and global features are fused into visual words by spectral clustering. Experiments show that, relative to popular local-feature visual-word generation methods, the proposed method raises recognition accuracy over the average by 6.4% on the simple-background KTH dataset and by 6.5% on the complex-background UCF dataset.

15.
Estimating a query image's camera pose with a hierarchical feature network can suffer from retrieval failures and slow retrieval. After analyzing hierarchical feature networks, this paper proposes a visual localization method using dynamic traversal and pre-clustering. Images are pre-clustered according to the scene map; a candidate-frame set is obtained from global image descriptors and dynamically traversed for the query image; feature points are matched using local feature descriptors; and the query's camera pose is estimated with the PnP algorithm. On this basis, a hierarchical feature network built on MobileNetV3 accurately extracts the global descriptors and local feature points. Comparisons with mainstream visual localization methods such as AS, CSL, DenseVLAD, and NetVLAD on typical datasets show that the method improves candidate-frame retrieval efficiency under illumination and seasonal changes, and increases both pose-estimation accuracy and candidate-frame retrieval speed.
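The pre-clustering plus candidate-retrieval stage can be sketched as follows. Unit-norm random vectors stand in for MobileNetV3 global descriptors, plain k-means on cosine similarity stands in for the map pre-clustering, and the local-feature matching and PnP steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

db = rng.normal(size=(200, 32))          # global descriptors of mapped images
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Pre-cluster the map into a few coarse clusters of global descriptors.
K = 5
centers = db[rng.choice(200, K, replace=False)].copy()
for _ in range(5):
    assign = (db @ centers.T).argmax(1)  # cosine similarity (unit vectors)
    for k in range(K):
        if (assign == k).any():
            c = db[assign == k].mean(0)
            centers[k] = c / np.linalg.norm(c)
assign = (db @ centers.T).argmax(1)      # final assignment

def candidates(query, n=10):
    """Search only the query's nearest cluster instead of the whole map."""
    q = query / np.linalg.norm(query)
    k = int((centers @ q).argmax())
    members = np.flatnonzero(assign == k)
    order = np.argsort(-(db[members] @ q))
    return members[order[:n]]

cand = candidates(rng.normal(size=32))
```

Restricting the search to one cluster is what buys the retrieval speedup; the candidates would then go through local-feature matching and PnP.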

16.
With the rapid development of deep learning, deep-learning-based scene recognition methods are gradually replacing traditional hand-crafted-feature methods and have become the main direction of future research. This survey summarizes the basic ideas of deep-learning-based scene recognition and groups the methods into four broad categories: deep learning combined with bag-of-visual-words, salient-part-based methods, multi-layer feature fusion methods, and methods that fuse knowledge representations. It analyzes the characteristics and limitations of each category, compares their recognition performance, and concludes with an outlook on future research directions.

17.
18.
To counter the bag-of-words model's susceptibility to irrelevant background visual noise, this paper proposes an object recognition method that combines saliency detection with the bag-of-words model. First, a graph-based visual saliency algorithm is combined with a full-resolution visual saliency algorithm to adaptively extract regions of interest from the original image; combining the two saliency algorithms improves the completeness of the extracted foreground object. Then, feature vectors are extracted from the regions of interest with the scale-invariant feature transform (SIFT) descriptor and clustered with the density-peaks clustering algorithm to generate the visual-dictionary histogram. Finally, a support vector machine recognizes the object. Experimental results on the PASCAL VOC 2007 and MSRC-21 databases show that the method effectively improves recognition performance over comparable methods.
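A rough sketch of the saliency-masked bag-of-words stage. The descriptors, saliency map, and codebook are random toy stand-ins (real SIFT, the combined saliency algorithms, density-peaks clustering, and the SVM are omitted); the point is that only descriptors inside the salient region enter the histogram:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dense local descriptors on a 10x10 grid and a toy saliency map.
descs = rng.normal(size=(10, 10, 16))
saliency = rng.random((10, 10))
mask = saliency > 0.5                    # region of interest from saliency

codebook = rng.normal(size=(12, 16))     # hypothetical visual dictionary

def bow(desc):
    """Normalized bag-of-words histogram over the codebook."""
    ids = ((desc[:, None] - codebook) ** 2).sum(-1).argmin(1)
    h = np.bincount(ids, minlength=12).astype(float)
    return h / h.sum()

h_all = bow(descs.reshape(-1, 16))       # plain BoW: background included
h_roi = bow(descs[mask])                 # BoW over salient foreground only
# h_roi would then be fed to an SVM for recognition (omitted here).
```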

19.
Semantic image segmentation aims to partition an image into non-overlapping regions and assign a pre-defined object class label to each region. In this paper, a semantic method combining low-level features and high-level contextual cues is proposed to segment natural scene images. The proposed method first takes the gist representation of an image as its global feature. The image is then over-segmented into many super-pixels, whose histogram representations serve as local features. In addition, co-occurrence and spatial layout relations among object classes are exploited as contextual cues. Finally, the features and cues are integrated into a conditional-random-field inference framework by defining specific potential terms and introducing weighting functions. The proposed method has been compared with state-of-the-art methods on the MSRC database, and the experimental results show its effectiveness.

20.
Although convolutional neural networks (CNNs) can extract local features and long short-term memory networks (LSTMs) can extract global features, and both classify well on their own, a CNN falls short at capturing the global contextual information of a text, while an LSTM tends to miss latent features between words. This paper therefore proposes a parallel CNN_BiLSTM_Attention model for text sentiment classification. First, the CNN extracts local features while, in parallel, the BiLSTM extracts global features carrying contextual semantic information; the two sets of features are then concatenated and fused. The model thus captures both local phrase-level features and contextual structural information, and an attention mechanism assigns different weights according to the importance of feature words, further improving classification. Compared with single deep neural network models such as CNN and LSTM, the proposed parallel CNN_BiLSTM_Attention model improves both the composite F1 score and accuracy. The experimental results show that the model achieves good results on text sentiment classification and has more practical value than the other neural network models.
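The parallel fusion and attention pooling described above can be sketched with plain arrays, leaving the actual CNN and BiLSTM out. The random per-time-step features and score vector below are hypothetical stand-ins for learned ones; only the concatenate-then-attend structure mirrors the model:

```python
import numpy as np

rng = np.random.default_rng(8)

T, D = 12, 16                            # sequence length, hidden size
local_feats = rng.normal(size=(T, D))    # stand-in for CNN n-gram features
global_feats = rng.normal(size=(T, D))   # stand-in for BiLSTM outputs

# Parallel fusion: concatenate the two feature streams per time step.
fused = np.concatenate([local_feats, global_feats], axis=1)   # (T, 2D)

# Additive attention: score each time step, softmax to weights, pool.
w = rng.normal(size=2 * D)               # hypothetical learned score vector
scores = np.tanh(fused) @ w
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()

sentence_vec = alpha @ fused             # attention-weighted sentence vector
# sentence_vec would feed a softmax classifier over sentiment labels.
```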
