Similar Documents
20 similar documents retrieved (search time: 343 ms).
1.
2.
3.
Salient object detection aims to automatically detect the objects or regions in a scene that attract human attention. Among bottom-up methods, ensemble learning based on multi-kernel support vector machines (SVM) has achieved excellent results. However, that approach must be retrained for every image to be processed, and each training run is very time-consuming. This paper therefore proposes a salient object detection method based on Weighted K-Nearest-Neighbour Linear Blending (WKNNLB): existing methods are used to produce initial weak saliency maps and to obtain training samples, and a weighted K-nearest-neighbour (WKNN) model is introduced to predict the saliency value of each sample. The model requires no training; it only needs to select an optimal K and compute the Euclidean distances between a test sample and its K nearest training samples. To reduce the influence of the choice of K, several weighted K-nearest-neighbour models are fused by linear blending to produce a strong saliency map. Finally, the multi-scale weak saliency maps and the strong saliency map are fused to further improve detection. Experimental results on the widely used ASD dataset and the more challenging DUT-OMRON dataset demonstrate the effectiveness and superiority of the algorithm in terms of both running time and performance; with better weak saliency maps, the algorithm achieves even better results.
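
As an illustration of the WKNN prediction and linear blending described in this abstract, here is a minimal Python sketch (NumPy only). The feature extraction, the weak saliency maps and the multi-scale fusion are omitted, and the function names, the inverse-distance weighting and the set of K values are our own assumptions rather than the paper's exact formulation.

```python
import numpy as np

def wknn_saliency(train_feats, train_sal, test_feats, k=8, eps=1e-8):
    """Predict a saliency value for each test sample as a distance-weighted
    average of the saliency values of its k nearest training samples."""
    preds = np.empty(len(test_feats))
    for i, f in enumerate(test_feats):
        d = np.linalg.norm(train_feats - f, axis=1)   # Euclidean distances
        idx = np.argsort(d)[:k]                       # k nearest neighbours
        w = 1.0 / (d[idx] + eps)                      # closer samples weigh more
        preds[i] = np.dot(w, train_sal[idx]) / w.sum()
    return preds

def blend_over_k(train_feats, train_sal, test_feats, ks=(4, 8, 16)):
    """Linearly blend several WKNN predictors to reduce the sensitivity to K."""
    maps = [wknn_saliency(train_feats, train_sal, test_feats, k) for k in ks]
    return np.mean(maps, axis=0)
```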

4.
Salient object detection is essential for applications such as image classification, object recognition and image retrieval. In this paper, we design a new approach to detect salient objects in an image by describing what salient objects and backgrounds look like using statistics of the image. First, we introduce a saliency-driven clustering method that reveals distinct visual patterns of images by generating image clusters. A Gaussian Mixture Model (GMM) is applied to represent the statistics of each cluster, which is used to compute the color spatial distribution. Second, three kinds of regional saliency measures, i.e., regional color contrast saliency, regional boundary prior saliency and regional color spatial distribution, are computed and combined. Then, a region selection strategy integrating the color contrast prior, the boundary prior and the visual pattern information of the image is presented. The pixels of an image are divided adaptively into either a potential salient region or a background region based on the combined regional saliency measures. Finally, a Bayesian framework is employed to compute the saliency value of each pixel, taking the regional saliency values as the prior. Our approach has been extensively evaluated on two popular image databases. Experimental results show that it achieves considerable performance improvement in terms of commonly adopted measures for salient object detection.
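
A hedged sketch of the colour-spatial-distribution idea mentioned above, assuming an H×W×3 image array and using scikit-learn's GaussianMixture; the saliency-driven clustering, the regional contrast and boundary priors, and the Bayesian fusion are not reproduced, and the compactness scoring is an illustrative choice, not the authors' exact measure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_spatial_distribution(image, n_components=5):
    """Fit a GMM to pixel colours and score each component by the spatial
    spread of the pixels it explains; compact components are more salient.
    `image` is an H x W x 3 array (subsample pixels first for large images)."""
    h, w, _ = image.shape
    colors = image.reshape(-1, 3).astype(float)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)

    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(colors)
    resp = gmm.predict_proba(colors)           # soft assignment of pixels to components

    saliency = np.zeros(len(colors))
    for c in range(n_components):
        w_c = resp[:, c]
        mean = (coords * w_c[:, None]).sum(0) / w_c.sum()
        var = (w_c * ((coords - mean) ** 2).sum(1)).sum() / w_c.sum()
        saliency += w_c * np.exp(-var / 0.05)  # spatially compact -> higher saliency
    return saliency.reshape(h, w)
```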

5.
6.
Saliency detection has become a valuable tool for many image processing tasks, such as image retargeting, object recognition, and adaptive compression. With the rapid development of saliency detection methods, the hypothesis that "the appearance contrast between the salient object and the background is high" has been widely accepted, and saliency methods are built on priors that explain it; however, these methods are still not satisfactory. We propose a two-stage salient region detection method. The input image is first segmented into superpixels. In the first stage, two measures are proposed that quantify the isolation and the distribution of each superpixel; since both are important for finding salient regions, the image-feature-based saliency map is obtained by combining them. In the second stage, a location prior map is incorporated into the image-feature-based saliency map to emphasize the foci of attention. In total, six priors that describe what a salient region is are exploited. The proposed method is compared with state-of-the-art saliency detection methods on one of the largest publicly available standard databases, and the experimental results indicate that it performs better. We also demonstrate how its saliency map can be used to create high-quality initial segmentation masks for subsequent image processing, such as GrabCut-based salient object segmentation.
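
The isolation and distribution measures could look roughly like the following NumPy sketch, which assumes per-superpixel mean colours and normalised centroids are already available; the bandwidths, the normalisation and the final combination rule are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def isolation_and_distribution(feats, centers, sigma_s=0.25, sigma_c=20.0):
    """feats: (N, d) mean colour of each superpixel; centers: (N, 2) centroids
    normalised to [0, 1].  Isolation: colour contrast to spatially close
    superpixels.  Distribution: spatial spread of similarly coloured superpixels."""
    col_d = np.linalg.norm(feats[:, None] - feats[None], axis=2)      # colour distances
    pos_d = np.linalg.norm(centers[:, None] - centers[None], axis=2)  # spatial distances

    w_pos = np.exp(-pos_d**2 / (2 * sigma_s**2))                      # nearby regions
    isolation = (w_pos * col_d).sum(1) / w_pos.sum(1)

    w_col = np.exp(-col_d**2 / (2 * sigma_c**2))                      # similar-colour regions
    mu = (w_col @ centers) / w_col.sum(1, keepdims=True)
    distribution = (w_col * ((centers[None] - mu[:, None])**2).sum(2)).sum(1) / w_col.sum(1)

    iso = (isolation - isolation.min()) / (np.ptp(isolation) + 1e-8)
    dis = (distribution - distribution.min()) / (np.ptp(distribution) + 1e-8)
    return iso * (1 - dis)   # high contrast and low spread -> salient
```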

7.
In object classification, the current mainstream approaches are based on the visual dictionary (bag-of-visual-words) model, but low time efficiency, the synonymy and ambiguity of visual words, and the loss of the words' spatial information severely limit their classification performance. To address these problems, this paper proposes an object classification method based on weakly supervised Exact Euclidean Locality-Sensitive Hashing (E2LSH) and saliency-map weighting. First, E2LSH is used to cluster the feature points of the training images into a set of visual dictionaries, and a weakly supervised strategy is introduced to supervise the selection of the hash functions in E2LSH, reducing their randomness and improving the discriminability of the dictionaries. Then, the GBVS (Graph-Based Visual Saliency) algorithm is applied to detect image saliency, and each visual word is weighted according to the saliency value of the region it lies in. Finally, object classification is completed with the saliency-weighted visual language model. Experimental results on the Caltech-256 and Pascal VOC 2007 datasets show that the proposed method improves the efficiency of dictionary generation and the discriminative power of the object representation, and that its classification performance is superior to current mainstream methods.
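
For reference, an E2LSH-style hash family has the well-known p-stable form h(v) = ⌊(a·v + b)/w⌋. The sketch below shows only that form, not the weak-supervision step for selecting hash functions, and the dimensions, bucket width and seeds are placeholders.

```python
import numpy as np

def make_e2lsh_family(dim, n_hashes, bucket_width=4.0, rng=None):
    """Build an E2LSH-style hash family h(v) = floor((a.v + b) / w) with
    Gaussian projection vectors a and uniform offsets b (p-stable LSH)."""
    rng = np.random.default_rng(rng)
    A = rng.normal(size=(n_hashes, dim))              # random projection directions
    b = rng.uniform(0, bucket_width, size=n_hashes)   # random offsets
    def hash_fn(v):
        return np.floor((A @ v + b) / bucket_width).astype(int)
    return hash_fn

# descriptors that fall into the same bucket tuple are grouped into one visual word
hash_fn = make_e2lsh_family(dim=128, n_hashes=6, rng=0)
word_id = tuple(hash_fn(np.random.default_rng(1).normal(size=128)))
```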

8.
In this paper, a new method for saliency detection is proposed. Based on the defined features of the salient object, we address saliency detection from three aspects. First, from a global point of view, we partition the image into two clusters, a salient component and a background component, by employing Principal Component Analysis (PCA) and k-means clustering. Second, the maximal salient information is used to locate the saliency and eliminate noise. Third, we enhance the saliency of the salient regions while weakening the background regions. Finally, the saliency map is obtained from these aspects. Experimental results show that the proposed method achieves better results than state-of-the-art methods, and it can be applied to graph-based salient object segmentation.
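
A minimal sketch of the PCA plus k-means partition into salient and background components, using scikit-learn; the choice of pixel features, the "smaller cluster is salient" heuristic and the later enhancement steps are our assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def split_salient_background(pixel_feats, image_shape):
    """Project per-pixel features (e.g. colour plus position, at least 3-D)
    with PCA, split them into two clusters with k-means, and take the smaller
    cluster as the salient one (the background usually covers more pixels)."""
    proj = PCA(n_components=3).fit_transform(pixel_feats)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(proj)
    salient_label = np.argmin(np.bincount(labels))
    return (labels == salient_label).reshape(image_shape)
```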

9.
熊羽  左小清  黄亮  陈震霆 《激光技术》2014,38(2):165-171
To address the unsatisfactory results and poor generality of classifying colour remote-sensing images with a single feature, a support vector machine (SVM) classification method based on a combination of colour and texture features is proposed. The method combines the colour information and the texture information of a colour remote-sensing image into the feature vector used by the SVM classifier, classifies the imagery accordingly, and is verified experimentally. The results show that SVM classification with combined colour and texture features achieves higher classification accuracy than traditional classification with colour or texture features alone, making it an effective method for colour remote-sensing image classification.
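
The colour-plus-texture feature combination could be assembled along the following lines (a sketch with NumPy and scikit-learn); the histogram binning, the gradient-based texture statistics and the SVM hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC

def patch_features(patch):
    """Concatenate a coarse colour histogram with simple texture statistics
    (gradient energy) for one H x W x 3 uint8 patch."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(4, 4, 4),
                             range=((0, 256),) * 3)
    color = hist.ravel() / hist.sum()
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = [gx.std(), gy.std(), np.abs(gx).mean(), np.abs(gy).mean()]
    return np.concatenate([color, texture])

# hypothetical usage with labelled training patches:
# X = np.array([patch_features(p) for p in train_patches])
# clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, train_labels)
```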

10.
Superpixels provide an over-segmented representation of a natural image; however, they lack information about the entire object. In this paper, we propose a method that obtains superpixels through a merging strategy based on the bottom-up saliency values of the superpixels. The goal is to obtain meaningful superpixels, i.e., to make the objects as complete as possible. The proposed method first creates an over-segmented representation of the image. The saliency value of each superpixel is then calculated with a biologically plausible saliency model in a statistical manner. Two adjacent superpixels are merged if the merged superpixel is more salient than the unmerged ones, and the merging is performed iteratively. Experimental evaluation on test images shows that the resulting saliency-based superpixels extract salient objects more effectively than existing methods.
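
One possible reading of the merging rule, as a greedy sketch: two adjacent superpixels are merged when the merged region is more salient than either part. The data structures (a region-to-pixels map, an adjacency set) and the restart strategy are our assumptions; the saliency model itself is not reproduced.

```python
import numpy as np

def merge_by_saliency(saliency, regions, adjacency):
    """saliency(pixel_indices) -> one score for a set of pixel indices;
    regions: dict region id -> array of pixel indices;
    adjacency: set of (id, id) pairs of neighbouring regions."""
    merged = True
    while merged:
        merged = False
        for a, b in sorted(adjacency):
            if a not in regions or b not in regions:
                continue
            joint = np.concatenate([regions[a], regions[b]])
            if saliency(joint) > max(saliency(regions[a]), saliency(regions[b])):
                regions[a] = joint                     # absorb b into a
                del regions[b]
                adjacency = {(a if x == b else x, a if y == b else y)
                             for x, y in adjacency}
                adjacency = {(x, y) for x, y in adjacency if x != y}
                merged = True
                break                                  # restart with the updated graph
    return regions
```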

11.
There have been remarkable improvements in salient object detection in recent years, and graph-based saliency detection algorithms in particular have advanced. Nevertheless, most state-of-the-art graph-based approaches rely on low-level features, misleading assumptions, fixed predefined graph structures and weak affinity matrices, which makes them insufficiently robust for images with complex or cluttered backgrounds. In this paper, we propose a robust label-propagation-based mechanism for salient object detection over an adaptive graph to tackle these issues. Low-level features as well as deep features are integrated into the proposed framework to measure the similarity between nodes. In addition, a robust mechanism is presented to calculate seeds based on the distribution of salient regions, which achieves desirable results even when the object touches the image boundary or the scene is complex. An adaptive graph with multiview connections is then constructed from different cues to learn the graph affinity matrix, which better captures the relations between spatially adjacent and distant regions. Finally, a novel RLP-AGMC model, i.e., robust label propagation throughout an adaptive graph with multiview connections, is put forward to calculate saliency maps in combination with the obtained seed vectors. Comprehensive experiments on six public datasets demonstrate that the proposed method outperforms fourteen existing state-of-the-art methods in terms of various evaluation metrics.
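
Label propagation over a graph with affinity matrix W is commonly solved in the closed form f* = (I − αS)⁻¹y, with S the symmetrically normalised affinity. The sketch below shows that generic step only, not the adaptive multiview graph, the deep features or the seed-selection mechanism of the RLP-AGMC model.

```python
import numpy as np

def label_propagation(W, seeds, alpha=0.99):
    """Propagate seed labels over a graph with affinity matrix W (N x N).
    seeds: vector with 1 for seed nodes (e.g. detected background/foreground
    seeds), 0 elsewhere.  Returns the converged, normalised label scores."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt                    # normalised affinity
    n = len(W)
    f = np.linalg.solve(np.eye(n) - alpha * S, seeds)  # f* = (I - alpha*S)^-1 y
    return (f - f.min()) / (f.max() - f.min() + 1e-12)
```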

12.
13.
随着影像分辨率的提高,传统的光谱特征不能有效地描述复杂的高分辨率影像信息,从而影响高分辨率遥感影像的分类。为了弥补传统光谱方法的不足,提出了一种加权对象相关指数(WOCI),并将其应用到基于支持向量机(SVM)的影像分类中。该指数是通过考虑具有相似性光谱的对象来构建的,可全面地描述影像的上下文结构。结果表明与仅考虑光谱特征和像素空间特征进行分类的方法相比,基于WOCI特征的分类结果有更高的精确性,且分类精度提高了7.16%。  相似文献   

14.
A multi-feature saliency detection algorithm in a probabilistic framework
杨小冈  李维鹏  马玛双 《电子学报》2019,47(11):2378-2385
Saliency detection is a fundamental problem in computer vision and is widely used in visual tasks such as fixation prediction, object detection and scene classification. To improve the accuracy of saliency detection under multiple features, a multi-feature saliency detection algorithm in a probabilistic framework is designed, based on the joint probability distribution of the saliency map and combined with prior knowledge. The potential weaknesses of single-feature saliency detection are first analysed, and the joint probability distribution of the saliency map under multiple features is then derived. A prior distribution of the saliency map is derived from its rarity, sparsity, compactness and centre prior, and the conditional distribution of the saliency map is simplified with a normal-distribution assumption. The maximum a posteriori (MAP) estimate is then obtained from the joint distribution, and a supervised learning model for the distribution parameters is built under a multi-threshold assumption. Experiments on benchmark datasets show that, compared with the most accurate single-feature method, the multi-feature algorithm reduces the mean error by 6.98% and 6.81% under the supervised and heuristic settings and raises the mean F-measure by 1.19% and 1.16%, while multi-feature fusion for a single image takes only 11.8 ms. The algorithm is accurate and fast, and the feature types and prior information can be chosen according to the task, meeting the performance requirements of multi-feature saliency detection.
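
The MAP formulation described in this abstract presumably has the following general shape, written here under a naive conditional-independence assumption and in our own notation; the actual priors (rarity, sparsity, compactness, centre) and the learned parameters are not spelled out here.

```latex
% MAP estimate of the saliency map S given n feature-based saliency maps F_1,...,F_n,
% assuming the feature maps are conditionally independent given S (our assumption):
\[
  S^{*} = \arg\max_{S}\, p(S \mid F_1,\dots,F_n)
        = \arg\max_{S}\, p(S)\prod_{i=1}^{n} p(F_i \mid S),
  \qquad
  p(F_i \mid S) \propto \exp\!\left(-\frac{(F_i - S)^2}{2\sigma_i^2}\right).
\]
```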

15.
With the emerging development of three-dimensional (3D) technologies, 3D visual saliency modeling is becoming particularly important and challenging. This paper presents a new depth perception and visual comfort guided saliency computational model for stereoscopic 3D images. The prominent advantage of the proposed model is that it incorporates the influence of depth perception and visual comfort on 3D visual saliency computation. The model is composed of three components: 2D image saliency, depth saliency and visual-comfort-based saliency. Color saliency, texture saliency and spatial compactness are computed respectively and fused to derive the 2D image saliency, while global disparity contrast is used to compute the depth saliency. In particular, we train a visual comfort prediction function to classify a stereoscopic image pair as high comfortable stereo viewing (HCSV) or low comfortable stereo viewing (LCSV), and devise different computational rules to generate a visual-comfort-based saliency map. The final 3D saliency map is obtained by a linear combination and enhanced by a "saliency-center bias" model. Experimental results show that the proposed 3D saliency model outperforms the state-of-the-art models in predicting human eye fixations and visual comfort assessment.
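
A minimal sketch of the final fusion step, assuming the three component maps are already computed and normalised to [0, 1]; the weights, the Gaussian centre-bias form and its bandwidth are placeholders, not the values used in the paper.

```python
import numpy as np

def fuse_3d_saliency(s2d, s_depth, s_comfort, weights=(0.5, 0.3, 0.2), sigma=0.3):
    """Linearly combine the 2D, depth and comfort-based saliency maps (same
    shape, values in [0,1]) and enhance the result with a Gaussian centre bias."""
    fused = weights[0] * s2d + weights[1] * s_depth + weights[2] * s_comfort
    h, w = fused.shape
    ys, xs = np.mgrid[0:h, 0:w]
    center_bias = np.exp(-(((xs / w - 0.5) ** 2) + ((ys / h - 0.5) ** 2))
                         / (2 * sigma ** 2))
    out = fused * center_bias
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```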

16.
A region-of-interest detection method based on a visual attention model and evolutionary programming
Inspired by the biological attention mechanism, this paper proposes a region-of-interest detection method based on a visual attention model and evolutionary programming. Evolutionary programming is used to segment the candidate regions of the image, and the interest level of a region is measured jointly by the local saliency produced by the visual attention model and the global saliency computed by evolutionary programming. In the visual attention model, the local saliency is obtained by multi-scale wavelet transformation of the image and computation of centre-surround differences. Under the action of a saliency enhancement factor, the focus of attention selects candidate regions to obtain the regions of interest. Experimental results show that the regions of interest detected by the proposed method are closer to human visual attention and yield satisfactory object detection and interest measurement results.

17.
Salient object detection with suppression of non-sharp regions
The context-aware (CA) salient region detection model suffers from missing content and false detections when detecting salient objects in images with large objects or complex backgrounds. Building on the CA model, this paper introduces the visual contrast between sharp and blurred regions and proposes a salient object detection method with suppression of non-sharp regions. The method uses dispersion as the criterion for deciding whether an image contains differences in sharpness and suppresses the non-sharp regions of images where such differences exist. Experimental results show that the CA method with non-sharp region suppression alleviates the problems of detecting large objects and of false detections against complex backgrounds, improving the accuracy of salient object detection.

18.
赵永威  周苑  李弼程  柯圣财 《电子学报》2016,44(9):2181-2188
The traditional Bag of Visual Words Model (BoVWM) widely suffers from the synonymy and ambiguity of visual words, and noisy words in the dictionary, the "visual stop words", further reduce its semantic discriminability. To address these problems, this paper proposes an image object classification method based on adaptive soft assignment of near-synonyms and a chi-square model. First, Probabilistic Latent Semantic Analysis (PLSA) is used to analyse the semantic co-occurrence probabilities of visual words in images and to uncover the latent semantic topics, yielding the probability distribution of the topics over each visual word. Second, K-L divergence is introduced to measure the semantic correlation between visual words and to obtain semantically related near-synonyms. SIFT feature points are then softly mapped to several semantically related near-synonyms with an adaptive soft-assignment strategy. Finally, the chi-square model filters out the "visual stop words", the visual-word histogram is reconstructed, and an SVM classifier completes the object classification. Experimental results show that the new method effectively overcomes the adverse effects of visual-word synonymy and ambiguity, strengthens the semantic discriminability of the visual dictionary, and clearly improves classification performance.
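
The KL-based near-synonym soft assignment could be sketched as follows, assuming a visual dictionary (word centres) and per-word topic distributions P(z|w) from PLSA are available; the symmetric KL, the number of near-synonyms and the Gaussian weighting are illustrative choices, and the chi-square filtering step is omitted.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete topic distributions."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def soft_assign(descriptor, words, topic_given_word, n_syn=3, sigma=100.0):
    """Map a SIFT descriptor to its nearest visual word plus the words whose
    topic distributions are closest to it (smallest symmetric KL), with
    weights that decay with descriptor-to-word distance."""
    d = np.linalg.norm(words - descriptor, axis=1)
    nearest = int(np.argmin(d))
    kl = np.array([symmetric_kl(topic_given_word[nearest], topic_given_word[j])
                   for j in range(len(words))])
    candidates = np.argsort(kl)[:n_syn + 1]   # the nearest word plus its near-synonyms
    w = np.exp(-d[candidates] ** 2 / (2 * sigma ** 2))
    return candidates, w / w.sum()
```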

19.
张家辉  谢毓湘  郭延明 《信号处理》2020,36(11):1804-1810
Scene image classification is an active topic in machine vision; scene images are rich in content and conceptually complex. Existing deep-network scene classification algorithms usually improve recognition by refining the network architecture or augmenting the data, but they rarely consider the relationship between the scene elements and the object elements in an image. Building on an analysis of existing deep-network scene classification techniques, this paper proposes a scene classification algorithm based on making local features salient. The algorithm combines the characteristics of scene-level and object-level local features, exploits the complementary relationship between the two kinds of features, and optimises each of them separately to obtain a more discriminative scene description. The algorithm achieves a test accuracy of 88.88% on the MIT Indoor67 dataset, and the experimental results confirm its effectiveness.

20.
In this paper, a new saliency detection model based on a space-to-frequency transformation is proposed. First, the equivalence of spatial filtering and spectral modulation is demonstrated to explain the intrinsic mechanism of typical frequency-based saliency models. A novel frequency-based saliency model is then presented based on the Fourier transform of multiple spatial Gabor filters. In addition, a new saliency measure is proposed to implement the competition between saliency maps at multiple scales and the fusion of color channels. In the experiments, a set of typical psychological patterns and four popular human-fixation datasets are used to test and evaluate the proposed model, and a new energy-based criterion is proposed to evaluate its performance, compared with five traditional saliency metrics for validation. Experimental results show that our model outperforms most of the competing models in salient object detection and human fixation prediction.
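
The spatial-filtering/spectral-modulation equivalence exploited here can be illustrated with a single Gabor filter: convolving in the spatial domain equals multiplying spectra in the frequency domain. The NumPy sketch below shows that equivalence only; the kernel parameters are placeholders and the full multi-scale, multi-channel model is not reproduced.

```python
import numpy as np

def gabor_kernel(size=31, sigma=4.0, theta=0.0, freq=0.15):
    """Real Gabor kernel: a Gaussian envelope modulating an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filter_in_frequency(image, kernel):
    """Spatial convolution realised as spectral modulation: multiply the image
    spectrum by the kernel spectrum and transform back (circular convolution)."""
    H = np.fft.fft2(kernel, s=image.shape)   # kernel spectrum, zero-padded
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * H))
```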
