Similar Literature
 Found 20 similar results (search time: 46 ms)
1.
2.
Color is the most informative low-level feature and can convey substantial saliency information about an image. Unfortunately, color is seldom fully exploited in previous saliency models. Motivated by three basic properties of a salient object, namely the center distribution prior, high color contrast to the surroundings, and a compact color distribution, this paper designs a comprehensive salient object detection system that combines color contrast with color distribution and outputs high-quality saliency maps. The overall pipeline consists of superpixel pre-segmentation, color contrast and color distribution computation, combination, and final refinement. In the color contrast computation, we calculate center-surround color contrast and then apply the distribution prior to select the correct color components. A global saliency smoothing procedure based on superpixel regions is also introduced; this step alleviates the saliency distortion problem, so the entire object is highlighted uniformly. Finally, a saliency refinement approach eliminates artifacts and recovers unconnected parts in the combined saliency maps. In visual comparison, our method produces higher-quality saliency maps that emphasize the whole object while suppressing background clutter. Both qualitative and quantitative experiments show that our approach outperforms 8 state-of-the-art methods, achieving the highest precision rate of 96% (a 3% improvement over the previous best) on one of the most popular datasets. Our saliency maps also enable excellent content-aware image resizing.
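The center-surround color contrast idea in this abstract can be illustrated with a small sketch (not the paper's implementation; the region colors, centroids, and Gaussian spatial weighting below are illustrative assumptions):

```python
import numpy as np

def color_contrast_saliency(colors, centers, sigma=0.25):
    """Center-surround color contrast per region: each region's saliency is
    its color distance to all other regions, weighted by spatial proximity."""
    n = len(colors)
    sal = np.zeros(n)
    for i in range(n):
        d_color = np.linalg.norm(colors - colors[i], axis=1)    # color distance
        d_space = np.linalg.norm(centers - centers[i], axis=1)  # spatial distance
        w = np.exp(-d_space ** 2 / (2 * sigma ** 2))            # nearby regions count more
        w[i] = 0.0                                              # exclude self-contrast
        sal[i] = np.sum(w * d_color)
    return sal / (sal.max() + 1e-12)                            # normalize to [0, 1]
```

With a red region among gray neighbors, the red one receives the highest contrast saliency.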

3.
Saliency detection has been widely studied for conventional images with standard aspect ratios; however, it remains a challenging problem for panoramic images with wide fields of view. In this paper, we propose a saliency detection algorithm for panoramic landscape images of outdoor scenes. We observe that a typical panoramic image includes several homogeneous background regions yielding horizontally elongated distributions, as well as multiple foreground objects at arbitrary locations. We first estimate the background of a panoramic image by selecting homogeneous superpixels using geodesic similarity and analyzing their spatial distributions. We then iteratively refine the initial saliency map derived from the background estimate by computing feature contrast only within a local surrounding area whose range and shape are adapted at each step. Experimental results demonstrate that the proposed algorithm detects multiple salient objects faithfully while suppressing the background successfully, and that it yields significantly better panorama saliency detection performance than recent state-of-the-art techniques.

4.
5.
Salient object detection is essential for applications such as image classification, object recognition, and image retrieval. In this paper, we design a new approach that detects salient objects by describing what salient objects and backgrounds look like using image statistics. First, we introduce a saliency-driven clustering method that reveals distinct visual patterns by generating image clusters. A Gaussian mixture model (GMM) represents the statistics of each cluster and is used to compute the color spatial distribution. Second, three regional saliency measures, i.e., regional color contrast, regional boundary prior, and regional color spatial distribution, are computed and combined. A region selection strategy integrating the color contrast prior, the boundary prior, and the visual pattern information then divides the pixels of an image adaptively into a potential salient region and a background region based on the combined regional saliency measures. Finally, a Bayesian framework computes a saliency value for each pixel, taking the regional saliency values as priors. Our approach has been extensively evaluated on two popular image databases. Experimental results show considerable performance improvement in terms of commonly adopted measures for salient object detection.
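The color spatial distribution cue (spatially compact color clusters are more likely salient) can be sketched as follows; the hard cluster assignments stand in for GMM posteriors, and the exponential mapping constant is an arbitrary choice:

```python
import numpy as np

def spatial_distribution_saliency(probs, positions, sigma2=0.05):
    """probs: (N, K) soft assignments of N pixels to K color clusters
    (e.g. GMM posteriors); positions: (N, 2) normalized pixel coordinates.
    Spatially compact clusters receive high saliency."""
    n, k = probs.shape
    sal = np.zeros(k)
    for c in range(k):
        w = probs[:, c] / probs[:, c].sum()              # normalized pixel weights
        mean = w @ positions                              # weighted spatial mean
        var = w @ ((positions - mean) ** 2).sum(axis=1)   # weighted spatial variance
        sal[c] = np.exp(-var / sigma2)                    # compact -> salient
    return sal
```

A cluster concentrated near the image center scores far higher than one scattered across the corners.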

6.
Saliency detection has become a valuable tool for many image processing tasks, such as image retargeting, object recognition, and adaptive compression. With the rapid development of saliency detection methods, the hypothesis that "the appearance contrast between the salient object and the background is high" has been widely adopted, and many methods build on priors derived from it. However, these methods are still not satisfactory. We propose a two-stage salient region detection method. The input image is first segmented into superpixels. In the first stage, two measures quantifying the isolation and the spatial distribution of each superpixel are proposed; since both are important for finding salient regions, an image-feature-based saliency map is obtained by combining them. In the second stage, we incorporate a location prior map into the image-feature-based saliency map to emphasize the foci of attention. In total, the algorithm exploits six priors describing what makes a region salient. The proposed method is compared with state-of-the-art saliency detection methods on one of the largest publicly available standard databases, and the experimental results indicate better performance. We also demonstrate how the resulting saliency map can be used to create high-quality initial segmentation masks for subsequent image processing, such as GrabCut-based salient object segmentation.

7.
Image saliency detection is the basis of perceptual image processing and is significant for subsequent processing steps. Most saliency detection methods can detect only a single object against a high-contrast background and fail to extract salient objects from images with complex low-contrast backgrounds. Using prior knowledge, this paper proposes a method that detects salient objects by combining a boundary contrast map with geodesic-like maps. The method highlights the foreground uniformly and extracts salient objects efficiently in images with low-contrast backgrounds. The classical receiver operating characteristic (ROC) curve, which compares the saliency map with the ground-truth map, does not reflect human perception, so this paper also proposes an ROC curve with distance (distance receiver operating characteristic, DROC) that brings the ROC curve closer to subjective human perception. Experiments on three benchmark datasets and three low-contrast image datasets, with four evaluation methods including DROC, show that the proposed approach performs well against eight state-of-the-art approaches.

8.
In this paper, we propose a novel approach to automatically detect salient regions in an image. First, several corner superpixels serve as background labels, and the saliency of the other superpixels is determined by ranking their similarity to these background labels with a ranking algorithm. We then employ an objectness measure to pick out and propagate foreground labels. Furthermore, an integration algorithm fuses the background-based and foreground-based saliency maps, with an energy function applied as refinement before integration. Finally, multiscale saliency maps are integrated to further improve detection performance. Experimental results on five benchmark datasets demonstrate the effectiveness of the proposed method: it produces more accurate saliency maps, with a better precision-recall curve, higher F-measure, and lower mean absolute error than 13 state-of-the-art approaches on the ASD, SED, ECSSD, iCoSeg, and PASCAL-S datasets.

9.
Saliency prediction on RGB-D images is an underexplored and challenging task in computer vision. We propose a channel-wise attention and contextual interaction asymmetric network for RGB-D saliency prediction. In the proposed network, a common feature extractor provides cross-modal complementarity between the RGB image and corresponding depth map. In addition, we introduce a four-stream feature-interaction module that fully leverages multiscale and cross-modal features for extracting contextual information. Moreover, we propose a channel-wise attention module to highlight the feature representation of salient regions. Finally, we refine coarse maps through a corresponding refinement block. Experimental results show that the proposed network achieves a performance comparable with state-of-the-art saliency prediction methods on two representative datasets.

10.
Saliency prediction can be regarded as a spontaneous human activity, and an effective saliency model should closely approximate viewers' responses to perceived information. In this paper, we exploit the perceptual response for saliency detection and propose a heuristic framework to predict salient regions. First, to find perceptually meaningful salient regions, an orientation-selectivity-based local feature and a visual-acuity-based global feature are proposed to jointly predict candidate salient regions. Next, to further improve the accuracy of the saliency map, we introduce a visual-error-sensitivity-based operator to activate the meaningful salient regions from local and global perspectives. In addition, an adaptive fusion method based on the free-energy principle combines the sub-saliency maps from each image channel into the final saliency map. Experimental results on five natural and emotional datasets demonstrate the superiority of the proposed method over twelve state-of-the-art algorithms.

11.
Most existing diffusion-based salient object detection methods build the graph and diffusion matrix from low-level image features only, and ignore the possibility that a salient object touches the image border. To address this, this paper proposes a diffusion method based on multi-level image features for salient object detection. First, seed nodes are selected using a high-level prior composed of background, color, and location priors. Next, the saliency information of the selected seeds is propagated to every node through a diffusion matrix built from the low-level features of the image, yielding an initial saliency map that serves as a mid-level feature. Diffusion matrices are then constructed from the mid-level and high-level features respectively, and the diffusion process is applied again to obtain a mid-level and a high-level saliency map. Finally, the two maps are fused nonlinearly to produce the final saliency map. On three datasets (MSRA10K, DUT-OMRON, and ECSSD), the algorithm achieves the best results against four popular algorithms under three quantitative evaluation metrics.

12.
Aggregating local and global contextual information by exploiting multi-level features in a fully convolutional network is a challenge for pixel-wise salient object detection, and most existing methods still suffer from inaccurate salient regions and blurry boundaries. In this paper, we propose a novel edge-aware global and local information aggregation network (GLNet) that fully exploits the integration of side-output local features with global contextual information and the contour information of salient objects. A global guidance module (GGM) learns discriminative multi-level information under the direct guidance of global semantic knowledge for more accurate saliency prediction. Specifically, the GGM consists of two key components: a global feature discrimination module that exploits the inter-channel relationships of global semantic features to boost representational power, and a local feature discrimination module that enables different side-output local features to selectively learn informative locations by fusing them with global attentive features. In addition, an edge-aware aggregation module (EAM) employs the correlation between salient edge information and salient object information to generate saliency maps with explicit boundaries. We evaluate the proposed GLNet on six widely used saliency detection benchmark datasets against 17 state-of-the-art methods. Experimental results show the effectiveness and superiority of our method on all six benchmarks.

13.
This paper presents a novel approach to automatically extract video salient objects based on a visual attention mechanism and a seeded object-growing technique. First, a dynamic visual attention model captures object motion through global motion estimation and compensation; combining it with a static attention model yields a saliency map. Then, with a modified inhibition-of-return (MIOR) strategy, a winner-take-all (WTA) neural network scans the saliency map for the most salient locations, which are selected as attention seeds. Finally, the particle swarm optimization (PSO) algorithm grows the attention objects, modeled by a Markov random field (MRF), from the seeds. Experiments verify that the presented approach can extract both stationary and moving salient objects efficiently.
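The WTA-with-inhibition-of-return scan described above can be sketched as follows (the disc-shaped suppression region and its radius are illustrative choices, not the paper's MIOR details):

```python
import numpy as np

def attention_seeds(sal, n_seeds=3, radius=2):
    """Winner-take-all scan with inhibition of return: repeatedly pick the
    most salient location, then suppress a disc around it so the next
    winner comes from a different region."""
    sal = sal.astype(float).copy()
    h, w = sal.shape
    yy, xx = np.mgrid[0:h, 0:w]
    seeds = []
    for _ in range(n_seeds):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)           # winner-take-all
        seeds.append((int(y), int(x)))
        sal[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = -np.inf  # inhibition of return
    return seeds
```

Given a map with two bright spots, the scan returns them in order of strength rather than revisiting the first winner.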

14.
Many videos capture and follow salient objects in a scene, so detecting such objects is of great interest to video analytics and search. However, discovering salient objects in an unsupervised way is a challenging problem, because no prior knowledge of the salient objects is provided. Unlike existing salient object detection methods, we propose to detect and track a salient object by finding the spatio-temporal path with the largest accumulated saliency density in the video. Inspired by the observation that salient video objects usually appear in consecutive frames, we leverage the motion coherence of videos in the path discovery, making the detection more robust. Without any prior knowledge of the salient objects, our method can detect objects of various shapes and sizes and can handle noisy saliency maps and moving cameras. Experimental results on two public datasets validate the effectiveness of the proposed method in both qualitative and quantitative terms, and comparisons with state-of-the-art methods further demonstrate its superiority for salient object detection in videos.
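A minimal 1-D sketch of the maximum-accumulated-saliency path idea: the real method searches spatio-temporal windows over 2-D frames, while here each frame is simplified to a 1-D row of candidate positions with motion limited to one step per frame (both are assumptions for illustration):

```python
import numpy as np

def max_saliency_path(sal, max_move=1):
    """sal: (T, N) saliency of N candidate positions over T frames.
    Dynamic programming: pick one position per frame, shifting at most
    max_move between consecutive frames, maximizing accumulated saliency."""
    T, N = sal.shape
    acc = sal[0].astype(float).copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        new = np.empty(N)
        for x in range(N):
            lo, hi = max(0, x - max_move), min(N, x + max_move + 1)
            j = lo + int(np.argmax(acc[lo:hi]))   # best reachable predecessor
            back[t, x] = j
            new[x] = acc[j] + sal[t, x]
        acc = new
    path = [int(np.argmax(acc))]                  # best endpoint, then backtrack
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The path follows the saliency peak as it drifts across frames instead of re-detecting it independently per frame.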

15.
Saliency detection has gained popularity in many applications, and many different approaches have been proposed. In this paper, we propose a new approach to saliency detection based on singular value decomposition (SVD). Our algorithm considers both the human perception mechanism and the relationship between the singular values of an image and its salient regions, building on the fact that salient regions are the important parts of an image. The singular values of an image are divided into three groups: large, intermediate, and small. We hypothesize that the large singular values mainly carry information about the non-salient background and only slight information about the salient regions; that the intermediate singular values contain most or even all of the saliency information; and that the small singular values contain little or none. These hypotheses are validated by experiments. Through regularization based on average information, regularization using the leading singular values, or regularization based on machine learning, the salient regions become more conspicuous. Learning-based methods are proposed to improve the accuracy of salient region detection, and Gaussian filters are employed to enhance the saliency information. Experimental results show that our SVD-based methods achieve superior performance compared with other state-of-the-art methods, for both human-eye fixations and salient-object detection, in terms of the area under the ROC curve (AUC), the linear correlation coefficient (CC), the normalized scan-path saliency (NSS), the F-measure, and visual quality.
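The singular-value grouping hypothesis can be illustrated with a toy sketch: drop the leading (background-dominated) singular values and reconstruct from the remaining band. The band boundaries here are arbitrary, and this is not the paper's full regularization scheme:

```python
import numpy as np

def svd_mid_band_saliency(img, k_low=1, k_high=None):
    """Zero out the k_low leading singular values (background-dominated)
    and everything from k_high onward (noise), then reconstruct from the
    intermediate band, where the abstract argues saliency concentrates."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    if k_high is None:
        k_high = len(s)
    band = np.zeros_like(s)
    band[k_low:k_high] = s[k_low:k_high]   # keep only the intermediate band
    recon = (U * band) @ Vt                # reconstruction without leading components
    sal = np.abs(recon)
    return sal / (sal.max() + 1e-12)
```

On a smooth intensity ramp with a small bright block, removing the top singular component leaves a residual that peaks inside the block.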

16.
The purpose of image retargeting is to automatically adapt a given image to displays of various sizes without introducing severe visual distortion. Seam carving can achieve this effectively, but it needs an image importance measure to detect the salient content. In this paper we present a new image importance map and a new seam criterion for image retargeting. We first decompose an image into a cartoon part and a texture part; higher-order statistics (HOS) on the cartoon part provide reliable salient edges. We construct a salient object window and a distance-dependent weight to modify the HOS, and the weighted HOS effectively protects salient objects from distortion during seam carving. We also propose a new seam criterion that tends to spread seams uniformly across non-salient regions and helps preserve large-scale geometric structures. We call our method salient edge and region aware image retargeting (SERAR). We evaluate our method visually and compare the results with related methods; it performs well in retargeting images with cluttered backgrounds and in preserving large-scale structures.
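The underlying seam-carving step (independent of the paper's SERAR importance map and seam criterion) can be sketched with the standard dynamic program for a minimal-energy vertical seam:

```python
import numpy as np

def min_vertical_seam(energy):
    """Standard seam-carving DP: find the 8-connected top-to-bottom seam
    (one column index per row) with minimal accumulated energy."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):                     # accumulate minimal cost downward
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]         # cheapest endpoint, then backtrack
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```

Removing one such seam per iteration shrinks the image width while the importance map steers seams away from salient content.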

17.
Traditional salient object detection methods usually assume a single salient object, and their performance depends on the choice of a saliency threshold, which does not match practical application needs. Recently, obtaining salient-object bounding boxes with object detection methods has emerged as a new solution. The SSD model can accurately detect multiple objects at different scales simultaneously, but its accuracy on small objects is poor. This paper therefore introduces a deconvolution module and an attention residual module to build DAR-SSD, a model for detecting multiple salient objects. Experimental results show that DAR-SSD achieves markedly higher detection accuracy than the SOD model; compared with the original SSD model, its performance improves notably for small-scale and multiple salient objects; and compared with deep-learning methods such as MDF and DCL, it also shows good detection performance against complex backgrounds.

18.
李婷, 吴迪, 郭凤姣, 屈宗顺, 万琴. 《光电子·激光》, 2020, 31(11): 1231-1238
In real scenes, object sizes vary widely, and object detection on large images struggles to detect every object. To detect smaller targets, this paper builds a small-target detection model that fuses a saliency map with stable regions. First, a saliency map is generated with a color-name-based saliency detection algorithm, while locally stable regions are extracted with the maximally stable extremal regions (MSER) algorithm, currently one of the feature detectors most robust to image deformation. Next, the stable regions and the saliency map are fused by pixel-wise multiplication to reduce the false-alarm probability. Finally, several image-processing steps, including morphological reconstruction, gray-level transformation, and morphological hole filling, suppress the background and uniformly highlight the salient targets to infer and refine the final result. To verify the effectiveness and practicality of the algorithm, its performance is compared with several mainstream algorithms, including AZ-NET, FPN, and PGAN, using PR curves as the evaluation metric. Tests on the Sky and Ground datasets show that the algorithm adapts well to changes in target size, outperforms existing small-target detection algorithms in precision and recall, and is robust.

19.
This paper proposes a visible-infrared image fusion algorithm based on saliency features to improve the fusion quality of targets. A saliency detector is applied to the infrared image to generate a saliency map; the infrared image is further analyzed to detect interest points, from which the salient interest points are extracted. The salient region is determined by computing the convex hull of the salient interest points, and the initial saliency map is refined with this convex hull so that the target is localized more precisely. The background region of the visible image is obtained from the region map, and the target and background regions are fused under different fusion rules to produce the final fused image. Results show that, compared with current visible-image fusion techniques, the proposed algorithm is superior in standard deviation, joint entropy, and edge information factor, and its fused images show clearer detail and texture.

20.
RGBD salient object detection based on regional feature fusion
杜杰, 吴谨, 朱磊. 《液晶与显示》, 2016, 31(1): 117-123
To detect salient objects in various natural scenes, this paper introduces image depth information into regional saliency computation for object detection. The image is first segmented at multiple scales into regions; a regression random forest is then built over multiple region features, assigning each region a saliency value by supervised learning; finally, the multi-scale saliency values are fused by least squares to obtain the final saliency map. Experimental results show that the algorithm localizes the salient object in each image of an RGBD image database fairly accurately.
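The final least-squares fusion of multi-scale saliency maps could look like the following sketch (learning per-scale weights against a training ground-truth map is an assumption about the setup, not the paper's exact procedure):

```python
import numpy as np

def fuse_scales(maps, ground_truth):
    """Least-squares fusion: learn per-scale weights that best reproduce a
    ground-truth saliency map, then combine the scales with those weights."""
    A = np.stack([m.ravel() for m in maps], axis=1)          # (pixels, scales)
    w, *_ = np.linalg.lstsq(A, ground_truth.ravel(), rcond=None)
    fused = sum(wi * m for wi, m in zip(w, maps))            # weighted combination
    return w, fused
```

When the target map is an exact linear mix of the scale maps, the learned weights recover the mixing coefficients.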
