Similar Documents
20 similar documents found (search time: 31 ms)
1.
The original spatiotemporal saliency detection model proposed by Zhai and Shah computes spatial saliency from image luminance alone and ignores the chromatic information in color images. To address this shortcoming, a spatial saliency computation method based on the HSV color model is proposed. The method exploits both the luminance and the color information of the image and computes saliency at two levels, the pixel level and the region level. The improved spatial saliency computation is then integrated with Zhai and Shah's temporal saliency computation and spatiotemporal fusion framework to detect salient objects in video. Experiments show that, under uneven illumination and relatively complex backgrounds, the improved method extracts more accurate salient regions and salient objects than the original model.
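A minimal illustrative sketch of this kind of HSV-based spatial saliency, assuming a pixel-level term (distance of each HSV vector from the global mean) plus a coarse block-based region-level term; the function name, block size, and the treatment of hue as a linear channel are illustrative choices, not the authors' formulation:

```python
# Hedged sketch: HSV spatial saliency at pixel level and (block-based) region level.
import cv2
import numpy as np

def hsv_spatial_saliency(bgr, block=16):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    # Pixel-level: distance from the global mean HSV vector (uses luminance V and color H/S;
    # hue circularity is ignored here for brevity).
    mean = hsv.reshape(-1, 3).mean(axis=0)
    pixel_sal = np.linalg.norm(hsv - mean, axis=2)
    # Region-level: contrast of block-averaged HSV against the global mean.
    h, w = pixel_sal.shape
    region_sal = np.zeros_like(pixel_sal)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch_mean = hsv[y:y+block, x:x+block].reshape(-1, 3).mean(axis=0)
            region_sal[y:y+block, x:x+block] = np.linalg.norm(patch_mean - mean)
    sal = pixel_sal + region_sal
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```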

2.
Salient Region Detection by Modeling Distributions of Color and Orientation   (Cited by: 3, self-citations: 0, cited by others: 3)
We present a robust salient region detection framework based on the color and orientation distribution in images. The proposed framework consists of a color saliency framework and an orientation saliency framework. The color saliency framework detects salient regions based on the spatial distribution of the component colors in the image space and their remoteness in the color space. The dominant hues in the image are used to initialize an expectation-maximization (EM) algorithm to fit a Gaussian mixture model in the hue-saturation (H-S) space. The mixture of Gaussians in H-S space is used to compute the inter-cluster distance in the H-S domain as well as the relative spread among the corresponding colors in the spatial domain. The orientation saliency framework detects salient regions based on the global and local behavior of different orientations in the image. The oriented spectral information from the Fourier transform of local patches is used to obtain the local orientation histogram of the image. Salient regions are then detected by identifying spatially confined orientations and local patches with high orientation-entropy contrast. The final saliency map is selected as either the color saliency map or the orientation saliency map by automatically identifying which of the two leads to correct identification of the salient region. Experiments are carried out on a large image database annotated with "ground-truth" salient regions, provided by Microsoft Research Asia, which enables robust objective comparisons with other salient region detection algorithms.
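The H-S mixture-model step can be approximated with an off-the-shelf EM implementation. The sketch below uses scikit-learn's GaussianMixture (random initialization rather than the dominant-hue initialization described above) to obtain per-pixel cluster labels and the inter-cluster distances in H-S space:

```python
# Hedged sketch: fit a Gaussian mixture in hue-saturation space via EM.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def hs_color_clusters(bgr, n_components=5):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hs = hsv[..., :2].reshape(-1, 2).astype(np.float64)       # (H, S) samples
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=0).fit(hs)
    labels = gmm.predict(hs).reshape(hsv.shape[:2])            # per-pixel cluster label
    # Pairwise distances between cluster means in H-S space ("remoteness in color space").
    means = gmm.means_
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    return labels, means, dists
```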

3.
Image Saliency Detection Combining Region and Boundary Information   (Cited by: 1, self-citations: 0, cited by others: 1)
Objective: Image saliency detection is a core problem in many imaging applications. To accurately extract the position and scale of foreground objects against complex backgrounds, a saliency detection method that combines region and boundary information is proposed. Method: For region information, a method based on image isophotes is proposed to detect salient region information. It applies a unified computation to the different features (color, intensity, and orientation), so that the saliency obtained for each feature shares a consistent measurement scale, which simplifies the subsequent fusion of the multi-feature saliency maps. For boundary information, a global method combined with a multi-scale Beltrami filter is used to detect salient boundary information. The multi-scale Beltrami filter markedly enhances boundaries in the image, and applying the global saliency detector to the filtered image accurately captures the most salient boundary information. Finally, since regions and boundaries represent different types of image information, the final saliency map is constructed by direct linear fusion. Results: Compared with nine popular saliency detection algorithms, the proposed algorithm detects saliency accurately in both simple and complex backgrounds, achieving the highest average Precision, Recall, and F-measure scores of 0.5905, 0.6554, and 0.7470 in the tests. Conclusion: A saliency detection algorithm combining region and boundary information is proposed, which detects salient objects accurately by fusing the two types of information. Experimental results show that the algorithm is widely applicable and robust, providing a solid basis for object detection against complex backgrounds.

4.
Objective: A central challenge in saliency detection is detecting salient objects with complex structure. Traditional patch-based algorithms operate on relatively regular image patches and therefore cannot fully exploit the irregular structure and texture of the image, which limits their accuracy. To address this, a saliency detection algorithm based on irregular pixel clusters is proposed. Method: The color space is quantized according to the pixel colors, color centers are found, and each pixel's color is replaced by that of its nearest color center. Connected components of pixels sharing the same color label then form irregular pixel clusters; the centroid of each component serves as the cluster's position, and the color of the corresponding color center serves as the cluster's color. A contrast prior map is obtained from the global contrast of the pixel clusters, a coarse localization step estimates the center of the salient object, and a center prior map is computed from it. The contrast prior map and the center prior map are combined into an initial saliency map, which is then refined with a graph model and morphological operations so that the salient object is highlighted more uniformly. Results: The algorithm is compared with five algorithms widely regarded as the best performers on five groups of images, using the precision-recall (PR) curve and the F-measure (the harmonic mean of precision and recall) as objective metrics. The proposed algorithm performs well on the PR curve, improves the F-measure by 0 to 0.3 over the other five algorithms, and produces better visual results. Conclusion: By partitioning pixel clusters more sensibly and coarsely localizing the target object, the proposed method better accounts for image structure and texture, achieves good detection performance, and generalizes well.
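A rough sketch of the cluster construction and global-contrast prior, assuming k-means color quantization in BGR space and OpenCV connected components; the parameter values are illustrative, and the center prior and graph refinement are omitted:

```python
# Hedged sketch: irregular pixel clusters from color quantization + connected components,
# with global color contrast (weighted by cluster size) as the contrast prior.
import cv2
import numpy as np

def cluster_contrast_saliency(bgr, k=12):
    h, w = bgr.shape[:2]
    data = bgr.reshape(-1, 3).astype(np.float32)
    # Quantize the color space: every pixel takes the color of its nearest color center.
    _, labels, centers = cv2.kmeans(
        data, k, None,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0),
        3, cv2.KMEANS_PP_CENTERS)
    label_img = labels.reshape(h, w).astype(np.int32)
    clusters = []                                  # (mask, color, size) per connected component
    for c in range(k):
        n, comp = cv2.connectedComponents((label_img == c).astype(np.uint8))
        for i in range(1, n):
            mask = comp == i
            clusters.append((mask, centers[c], float(mask.sum())))
    sal = np.zeros((h, w), np.float32)
    total = float(h * w)
    for mask, color, _ in clusters:
        # Global contrast: color distance to every other cluster, weighted by its area.
        contrast = sum(sz / total * np.linalg.norm(color - col)
                       for m2, col, sz in clusters if m2 is not mask)
        sal[mask] = contrast
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```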

5.

Saliency prediction models provide a probabilistic map of relative likelihood of an image or video region to attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to want to design visual attention models for 3D content. Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, as well as high-level cues such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, size and compactness of the salient regions, and emphasizing only a few salient objects in a scene. A new fovea-based model of spatial distance between the image regions is adopted for considering local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method to one single saliency map that is highly correlated with the eye-fixation data, a random forest based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment, which involved 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database as well as the eye-tracking data are publicly available along with this paper. Experiment results show that the proposed saliency prediction method achieves competitive performance compared to the state-of-the-art approaches.
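The random-forest fusion step can be sketched independently of the conspicuity features themselves. In the toy version below, each pixel's stacked conspicuity values are regressed against eye-fixation density with scikit-learn; the feature stacking and training protocol are placeholders, not the paper's:

```python
# Hedged sketch: learn to fuse per-pixel conspicuity features into one saliency map
# with a random forest regressor trained on eye-fixation density maps.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_fusion(conspicuity_stacks, fixation_maps, n_trees=100):
    # conspicuity_stacks: list of arrays (H, W, F); fixation_maps: list of arrays (H, W)
    X = np.concatenate([s.reshape(-1, s.shape[-1]) for s in conspicuity_stacks])
    y = np.concatenate([f.ravel() for f in fixation_maps])
    return RandomForestRegressor(n_estimators=n_trees, n_jobs=-1).fit(X, y)

def fuse(model, conspicuity_stack):
    h, w, f = conspicuity_stack.shape
    return model.predict(conspicuity_stack.reshape(-1, f)).reshape(h, w)
```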


6.
To avoid the complex background modeling and heavy computation involved in detecting moving objects under a moving camera, a motion-saliency-based detection method is proposed that achieves accurate moving-object detection without explicit background modeling. Imitating the attention mechanism of the human visual system, the method analyzes the motion characteristics of background and foreground when the camera translates, computes the saliency of the video scene, and thereby detects moving objects in dynamic scenes. First, optical flow is used to extract motion features, and 2D Gaussian convolution suppresses the motion texture of the background. Histogram statistics then measure the global saliency of the motion features, and color information of foreground and background is extracted from the resulting motion saliency map. Finally, the motion saliency map is refined with a Bayesian scheme to obtain the salient moving objects. Experiments on public benchmark videos show that the method suppresses background motion noise while highlighting and accurately detecting the moving objects in the scene.
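A compact sketch of the motion-saliency idea under stated assumptions: Farneback dense optical flow for the motion features, a Gaussian blur to suppress background motion texture, and histogram rarity of flow magnitude as global saliency; the Bayesian refinement and color cues are omitted:

```python
# Hedged sketch: motion saliency from optical flow magnitude rarity.
import cv2
import numpy as np

def motion_saliency(prev_gray, curr_gray, bins=64):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag = cv2.GaussianBlur(mag, (9, 9), 2.0)            # suppress background motion texture
    hist, edges = np.histogram(mag, bins=bins)
    prob = hist / hist.sum()                            # probability of each magnitude bin
    idx = np.clip(np.digitize(mag, edges[1:-1]), 0, bins - 1)
    sal = -np.log(prob[idx] + 1e-8)                     # rarer motion -> higher saliency
    return cv2.normalize(sal.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
```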

7.
To correctly detect multiple salient objects in a saliency map, a salient object detection algorithm based on global color contrast is proposed. The algorithm first extracts a global color contrast feature from the image, then feeds the saliency map and the global color contrast into a conditional random field (CRF) framework to obtain a binary saliency mask, and finally computes the minimum bounding rectangle of each salient object using region descriptors. Experiments on two public datasets show that the algorithm outperforms several existing algorithms in precision, recall, and F-measure, and also has an advantage in computational efficiency. The proposed algorithm therefore detects salient objects more effectively than existing salient object detection algorithms and can detect multiple salient objects.
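The final step, turning a binary saliency mask into one minimum bounding rectangle per object, can be illustrated as follows; a fixed threshold stands in for the CRF output here, and the minimum-area filter is an arbitrary choice:

```python
# Hedged sketch: extract one bounding box per salient object from a binary saliency mask.
import cv2
import numpy as np

def salient_object_boxes(saliency, thresh=0.5, min_area=200):
    mask = (saliency >= thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))          # one box per detected salient object
    return boxes
```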

8.
Objective: To address blurred boundaries and limited accuracy in image saliency detection, a saliency detection algorithm based on object-enhanced guidance and sparse reconstruction (OESR) is proposed. Method: Working at the superpixel level, a center-weighted color spatial distribution map is first computed from the foreground perspective and used as the foreground saliency map. A background template is built from the superpixels on the image boundary and pre-processed; the optimized template serves as the dictionary for sparse representation, from which the sparse reconstruction error is computed and then corrected by error propagation to obtain a background-difference map. Finally, a fast object-proposal method yields a set of candidate windows, whose objectness scores give an object-enhancement coefficient that guides the fusion of the two saliency maps into the final detection result. Results: Compared with 12 popular algorithms on public datasets, the proposed algorithm detects salient regions accurately for images of varying background complexity and extracts salient objects relatively completely. On the MSRA10k dataset the average recall improves by 4.1%, and on the VOC2007 dataset the average recall and F-measure improve by 18.5% and 3.1% respectively. Conclusion: A new saliency detection method is proposed that builds saliency maps from color distribution and contrast and fuses them with an object-enhancement coefficient, improving the accuracy of the saliency map. Experiments show that the detected salient regions better match human visual perception and are more accurate, making the method suitable for salient object detection, object segmentation, and saliency-based image annotation in natural images.
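The background-difference idea can be approximated by sparsely coding each superpixel descriptor against a dictionary of boundary superpixels and taking the reconstruction error as saliency. The sketch below uses scikit-learn's OrthogonalMatchingPursuit and skips the template optimization and error-propagation correction described above; all names are illustrative:

```python
# Hedged sketch: sparse reconstruction error against a background (boundary) dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruction_error_saliency(features, boundary_idx, n_nonzero=8):
    # features: (N, D) superpixel descriptors; boundary_idx: indices of boundary superpixels.
    D = features[boundary_idx].T                              # columns are background atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=min(n_nonzero, D.shape[1]))
    errors = np.empty(len(features))
    for i, f in enumerate(features):
        omp.fit(D, f)
        recon = D @ omp.coef_ + omp.intercept_
        errors[i] = np.linalg.norm(f - recon)                 # large error -> likely foreground
    return (errors - errors.min()) / (errors.max() - errors.min() + 1e-8)
```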

9.
Objective: Traditional saliency detection models mostly rely on hand-crafted low- and mid-level features and prior information, giving low precision and recall. Saliency detection has advanced rapidly with deep convolutional neural networks, yet existing methods share a common weakness: in complex images they struggle to uniformly highlight the precise boundary and interior of whole objects, mainly because the features used for detection are not sufficiently rich. Method: The VGG (visual geometry group) model is modified by removing the final fully connected layers and adding skip-layer connections for pixel-level saliency prediction, which effectively combines multi-scale information from different convolutional layers and couples high-level semantic information with low-level detail in a data-driven framework. To preserve object boundaries while keeping interiors uniform, a fully connected conditional random field (CRF) is applied to the resulting saliency maps. Results: The method is evaluated on six widely used public datasets, DUT-OMRON (Dalian University of Technology and OMRON Corporation), ECSSD (extended complex scene saliency dataset), SED2 (segmentation evaluation database 2), HKU, PASCAL-S, and SOD (salient objects dataset), and compared with 14 state-of-the-art representative methods using precision-recall (PR) curves, F-measure, maximum F-measure, weighted F-measure, and mean absolute error (MAE). The F-measure on the six datasets is 0.696, 0.876, 0.797, 0.868, 0.772, and 0.785; the maximum F-measure is 0.747, 0.899, 0.859, 0.889, 0.814, and 0.833; the weighted F-measure is 0.656, 0.854, 0.772, 0.844, 0.732, and 0.762; and the MAE is 0.074, 0.061, 0.093, 0.049, 0.099, and 0.124. On both image sets with similar foreground and background colors and complex multi-object image sets, the method approaches the latest results and outperforms most representative methods. Conclusion: The method is robust for saliency detection across varied scenes, highlights the boundary and interior of salient objects more uniformly, and produces more accurate detections.

10.
This paper presents a new hybrid approach for detecting salient objects in an image. It consists of two processes: local saliency estimation and global-homogeneity refinement. We model salient object detection as a region growing and competition process that propagates the influence of foreground and background seed-patches. First, the initial local saliency of each image patch is measured by fusing local contrasts with spatial priors, and the foreground and background seed-patches are constructed from it. The global-homogeneity information is then used to refine the saliency results by evaluating the ratio of the foreground and background likelihoods propagated from the seed-patches. Although the idea is simple, our method achieves consistent performance for detecting object saliency. The experimental results demonstrate that the proposed method attains high precision and recall rates with good computational efficiency.

11.
罗晓林, 罗雷. 《计算机科学》 (Computer Science), 2016, 43(Z6): 171-174, 183
For the compression of multi-view video, a coding algorithm based on visual saliency analysis is proposed. Exploiting the fact that the human eye is more sensitive to distortion in salient regions, the algorithm improves multi-view video coding efficiency by controlling the coding quality of salient versus non-salient regions. First, a video saliency filter that fuses color and motion information extracts pixel-level visual saliency maps for the multi-view video. The saliency maps of all views are then converted to a macroblock-level saliency representation. Finally, the principles of perceptual video coding are used to adaptively control macroblock quality according to saliency. Experimental results show that the algorithm effectively improves the rate-distortion efficiency and subjective quality of multi-view video coding.

12.
Objective: Image saliency detection struggles with scenes in which the foreground and background have similar colors or textures or the background is cluttered, leading to poorly suppressed backgrounds, incomplete detected objects, blurred edges, and blocking artifacts. Light field images support refocusing and thus provide focusness cues that can effectively separate foreground from background and improve detection accuracy. A light field saliency detection method based on focusness and a propagation mechanism is therefore proposed. Method: Gaussian filtering is used to measure the focusness of the focal-stack images and to determine the foreground and background images. A foreground/background probability function is built from the focusness and spatial position of the background image and used to guide the light field image features during saliency detection, improving the accuracy of the saliency map. In addition, exploiting the spatial consistency of neighboring superpixels, a K-nearest-neighbor (K-NN) graph-based saliency propagation mechanism further refines the saliency map so that the whole salient region is highlighted uniformly, yielding a more precise result. Results: In experiments on a light field benchmark dataset against three mainstream traditional light-field saliency detection methods and two deep learning methods, the saliency maps produced by our method suppress background regions effectively, highlight the whole salient object uniformly, show clearer edges, and agree better with human visual perception. Precision reaches 85.16%, higher than the compared methods, and the F-measure and mean absolute error (MAE) are 72.79% and 13.49% respectively, better than the traditional light-field saliency detection methods. Conclusion: The proposed focusness-and-propagation light field saliency model highlights salient regions uniformly and suppresses the background well in scenes with similar foreground and background or cluttered backgrounds.
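A simplified illustration of how focusness can be read from a focal stack, assuming a Laplacian-based focus measure in place of the paper's Gaussian-filter measure and assuming the stack is ordered from near to far; the names and parameters are illustrative:

```python
# Hedged sketch: per-pixel focusness from a focal stack and a crude foreground probability.
import cv2
import numpy as np

def focusness_maps(focal_stack_gray, sigma=3):
    # focal_stack_gray: list of single-channel float32 images refocused at increasing depth.
    focus = []
    for img in focal_stack_gray:
        lap = cv2.Laplacian(img, cv2.CV_32F, ksize=3)
        focus.append(cv2.GaussianBlur(np.abs(lap), (0, 0), sigma))   # local focus strength
    focus = np.stack(focus, axis=0)                                  # (S, H, W)
    best_slice = np.argmax(focus, axis=0)                            # per-pixel in-focus slice
    denom = max(len(focal_stack_gray) - 1, 1)
    fg_prob = 1.0 - best_slice / float(denom)                        # nearer slices -> foreground
    return focus, fg_prob.astype(np.float32)
```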

13.
Image saliency detection aims to locate the object regions that attract visual attention. Hybrid feature encoding avoids the weaknesses of any single encoding when detecting object saliency and the precise boundaries of salient regions. A saliency detection method is proposed that mixes the encoding of regional contrast information with the encoding of image semantic information: the two encodings are combined and fed to a convolutional neural network for saliency detection, ensuring that salient objects are detected reliably and that the edge details of salient regions are handled well. Experiments on mainstream saliency detection datasets show that the method effectively detects the salient objects in an image as well as the precise boundaries of the salient regions.

14.
Objective: Image retargeting automatically adjusts image size by non-uniformly scaling image content so that salient objects are better preserved within a constrained display area. To reduce the partial distortion of salient objects during retargeting, an image retargeting method based on salient object detection is proposed. Method: The method replaces the saliency map with a salient-object segmentation result to improve the retargeting output. First, a saliency map is generated by saliency fusion and propagation. Next, the salient object is segmented from the input image and the saliency map with an adaptive three-threshold scheme. A curved-edge mesh representation of the input image is then built, and the retargeted result at the target size is produced by non-uniformly scaling the mesh cells. Results: On RetargetMe, a public dataset for image retargeting, the method was compared with 10 representative retargeting methods through user evaluation. It effectively reduces partial distortion of salient objects and produces results without noticeable artifacts on 48.8% of the images, 5% more than the best existing method. Conclusion: Retargeting guided by salient object detection treats salient objects more consistently and reduces the visible artifacts caused by partially distorted salient objects, thereby improving the quality of the retargeted result.

15.
Zhang Xufan, Wang Yong, Chen Zhenxing, Yan Jun, Wang Dianhong. Multimedia Tools and Applications, 2020, 79(31-32): 23147-23159

Saliency detection is a technique to analyze image surroundings to extract relevant regions from the background. In this paper, we propose a simple and effective saliency detection method based on image sparse representation and color features combination. First, the input image is segmented into non-overlapping super-pixels, so as to perform the saliency detection at the region level to reduce computational complexity. Then, a background optimization selection scheme is used to construct an appropriate background template. Based on this, a primary saliency map is obtained by using image sparse representation. Next, through the linear combination of color coefficients we generate an improved saliency map with more prominent salient regions. Finally, the two saliency maps are integrated within Bayesian framework to obtain the final saliency map. Experimental results show that the proposed method has desirable detection performance in terms of detection accuracy and running time.
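The Bayesian integration of two saliency maps can be sketched as follows, with likelihoods estimated from histograms inside and outside a thresholded foreground of the other map; the maps are assumed normalized to [0, 1], and the thresholding rule and bin count are assumptions that differ from the paper's formulation:

```python
# Hedged sketch: Bayesian fusion of two saliency maps (each normalized to [0, 1]).
import numpy as np

def bayes_posterior(prior, observed, bins=16):
    fg = prior >= prior.mean()                              # rough foreground from the prior map
    # Likelihoods of the observed map's values given foreground / background.
    hist_fg, _ = np.histogram(observed[fg], bins=bins, range=(0, 1), density=True)
    hist_bg, _ = np.histogram(observed[~fg], bins=bins, range=(0, 1), density=True)
    idx = np.clip((observed * bins).astype(int), 0, bins - 1)
    like_fg, like_bg = hist_fg[idx], hist_bg[idx]
    return prior * like_fg / (prior * like_fg + (1 - prior) * like_bg + 1e-8)

def bayes_fuse(s1, s2):
    # Use each map as the prior in turn and average the two posteriors.
    return 0.5 * (bayes_posterior(s1, s2) + bayes_posterior(s2, s1))
```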


16.
To address the low segmentation accuracy of traditional saliency detection algorithms and the heavy dependence of deep-learning-based methods on pixel-level manual annotation, an unsupervised salient object detection algorithm based on GrabCut refinement and differentiable clustering is proposed. Following a coarse-to-fine strategy, the algorithm achieves accurate salient object detection using only the features of a single image. First, the Frequency-tuned algorithm produces a coarse saliency map from the image's own color and luminance. The map is binarized according to the image statistics and combined with a center-prior assumption to obtain candidate salient regions. The GrabCut algorithm, which performs graph-cut segmentation on a single image, then refines the segmentation of the salient object. Finally, to overcome inaccurate detection when the background closely resembles the object, an unsupervised differentiable clustering algorithm with good boundary segmentation behavior further optimizes the single saliency map. The proposed algorithm was tested on the ECSSD and SOD datasets and compared with seven existing algorithms; the optimized saliency maps are closer to the ground truth, achieving a mean absolute error (MAE) of 14.3% on ECSSD and 23.4% on SOD.
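A coarse-to-fine sketch of the first two stages, assuming the standard Frequency-tuned formulation (distance of blurred Lab colors from the mean Lab color), Otsu binarization in place of the statistics-based threshold, and OpenCV's GrabCut for refinement; the differentiable-clustering step is not shown:

```python
# Hedged sketch: Frequency-tuned coarse saliency followed by GrabCut refinement.
import cv2
import numpy as np

def ft_saliency(bgr):
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0), cv2.COLOR_BGR2LAB).astype(np.float32)
    sal = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

def refine_with_grabcut(bgr, sal, iters=5):
    # bgr must be an 8-bit 3-channel image; sal is the coarse map in [0, 1].
    _, rough = cv2.threshold((sal * 255).astype(np.uint8), 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = np.where(rough > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```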

17.
Image saliency analysis plays an important role in various applications such as object detection, image compression, and image retrieval. Traditional methods for saliency detection ignore texture cues. In this paper, we propose a novel method that combines color and texture cues to robustly detect image saliency. Superpixel segmentation and the mean-shift algorithm are adopted to segment an original image into small regions. Then, based on the responses of a Gabor filter, color and texture features are extracted to produce color and texture sub-saliency maps. Finally, the color and texture sub-saliency maps are combined in a nonlinear manner to obtain the final saliency map for detecting salient objects in the image. Experimental results show that the proposed method outperforms other state-of-the-art algorithms for images with complex textures.
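The texture cue can be illustrated with a small Gabor filter bank, taking texture saliency as the distance of each pixel's orientation-response vector from the image-wide mean; the filter parameters are illustrative, and the color cue and nonlinear fusion are omitted:

```python
# Hedged sketch: texture saliency from a bank of Gabor filter responses.
import cv2
import numpy as np

def gabor_texture_saliency(gray, n_orient=4, ksize=21):
    gray = gray.astype(np.float32) / 255.0
    responses = []
    for i in range(n_orient):
        theta = i * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kern)))
    feat = np.stack(responses, axis=2)                 # (H, W, n_orient) texture descriptor
    sal = np.linalg.norm(feat - feat.reshape(-1, n_orient).mean(axis=0), axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```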

18.
Location information, i.e., the position of content in the image plane, is considered an important supplement in saliency detection. The effect of location information is usually evaluated by integrating it with selected saliency detection methods and measuring the improvement, which depends heavily on which saliency methods are selected. In this paper, we provide a direct, quantitative analysis of the importance of location information for saliency detection in natural images. We first analyze the relationship between content location and saliency distribution on four public image datasets, and validate the distribution by simply treating a location-based Gaussian distribution as the saliency map. To further validate the effectiveness of location information, we propose a location-based saliency detection approach that initializes saliency maps entirely from location information and propagates saliency among patches based on color similarity, and we discuss the robustness of the effect of location information. The experimental results show that location information plays a positive role in saliency detection, and the proposed method outperforms most state-of-the-art saliency detection methods and handles natural images with different object positions and multiple salient objects.
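The location-only baseline discussed above, a location-based Gaussian distribution treated directly as a saliency map, reduces to a center-prior map; the sigma fraction below is an assumed value:

```python
# Hedged sketch: a center-prior map used directly as a location-only saliency baseline.
import numpy as np

def center_prior(height, width, sigma_frac=0.25):
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sy, sx = sigma_frac * height, sigma_frac * width
    prior = np.exp(-(((ys - cy) ** 2) / (2 * sy ** 2) + ((xs - cx) ** 2) / (2 * sx ** 2)))
    return prior / prior.max()
```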

19.
Extracting salient objects from a still image is a very active topic, as it has many useful applications (e.g., image compression, content-based image retrieval, digital watermarking). In this paper, aiming to improve extraction performance, we propose a two-step salient object extraction framework based on image segmentation and saliency detection (TIS). In the first step, the image is segmented into several regions with an image segmentation algorithm, and the saliency map for the whole image is computed with a saliency detection algorithm. In the second step, features are extracted for each region and an SVM classifies the region twice as either a background region or a salient region. Experimental results show that the proposed framework extracts salient objects more precisely and achieves good extraction results compared with previous salient object extraction methods.
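The second step can be sketched as extracting simple per-region features and training an SVM to label regions as salient or background; the feature set here (mean color, mean saliency, relative size, normalized center) is a placeholder rather than the paper's feature design:

```python
# Hedged sketch: per-region features plus an SVM region classifier.
import numpy as np
from sklearn.svm import SVC

def region_features(image, saliency, region_mask):
    ys, xs = np.nonzero(region_mask)
    h, w = saliency.shape
    return np.array([*image[region_mask].mean(axis=0),       # mean color of the region
                     saliency[region_mask].mean(),            # mean saliency inside the region
                     region_mask.mean(),                       # relative region size
                     ys.mean() / h, xs.mean() / w])            # normalized center position

def train_region_classifier(feature_rows, labels):
    # labels: 1 for salient regions, 0 for background regions.
    return SVC(kernel='rbf', C=1.0, gamma='scale').fit(np.vstack(feature_rows), labels)
```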

20.
何伟, 齐琦, 张国云, 吴健辉. 《计算机应用》 (Journal of Computer Applications), 2016, 36(8): 2306-2310
Moving object detection algorithms based on visual saliency tend to fuse spatial and temporal information in a simplistic way and to neglect motion information. To address this, a moving object detection method is proposed that dynamically fuses visual saliency information and motion information. The method first computes the local and global saliency of each pixel and generates a spatial saliency map with a Bayesian criterion. A structured random forest is then used to predict motion boundaries and produce a motion boundary map. Next, the optimal fusion weights are determined dynamically from changes in the properties of the spatial saliency map and the motion boundary map. Finally, moving objects are computed and labeled according to the dynamic fusion weights. The method exploits the strengths of both the saliency algorithm and the motion-boundary algorithm while overcoming their respective weaknesses; compared with the traditional background subtraction and three-frame difference methods, the maximum improvement in detection rate and false detection rate exceeds 40%. Experimental results show that the method detects moving objects accurately and completely and adapts better to varied scenes.
