Related Articles
20 related articles found.
1.
Salient object detection aims to identify the objects or regions in an image that attract the most visual attention, and it benefits many computer vision tasks. Although many methods have been proposed for salient object detection, the problem is still not well solved, especially when the background scene is complex or the salient object is small. In this paper, we propose a novel Weak Feature Boosting Network (WFBNet) for the salient object detection task. In the WFBNet, we extract the unpredictable regions (low-confidence regions) of the image via a polynomial function and enhance the features of these regions through a well-designed weak feature boosting module (WFBM). Starting from a coarse saliency map, we gradually refine it according to the boosted features to obtain the final saliency map, and our network does not need any post-processing step. We conduct extensive experiments on five benchmark datasets using comprehensive evaluation metrics. The results show that our algorithm has considerable advantages over the existing state-of-the-art methods.

2.
Visual saliency measurement is a key problem in salient region extraction, and existing methods mainly construct saliency maps from low-level visual features of the image. Since different features contribute differently to visual saliency, this paper proposes a salient region detection method that performs feature selection and weighting automatically. Intensity, color and orientation features are extracted from the image to construct the corresponding feature saliency maps. A new feature fusion strategy dynamically computes the weight of each feature saliency map and integrates them into the final saliency map, from which the salient regions are detected. Experiments on a number of natural images show that the method performs well in both running speed and detection quality.
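A rough sketch of the dynamic weighting idea follows; the abstract does not spell out its weighting rule, so a simple peak-to-mean concentration score stands in for it, and the function names are illustrative:

```python
import numpy as np

def fuse_feature_maps(feature_maps):
    """Fuse per-feature saliency maps with dynamically computed weights.

    The weighting heuristic (peak-to-mean concentration) is an illustrative
    assumption, not the paper's exact rule."""
    normed, weights = [], []
    for fmap in feature_maps:
        m = fmap.astype(np.float64)
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)   # normalize to [0, 1]
        normed.append(m)
        weights.append(m.max() - m.mean())                 # concentrated maps weigh more
    weights = np.asarray(weights)
    weights = weights / (weights.sum() + 1e-12)            # weights sum to 1
    return sum(w * m for w, m in zip(weights, normed))

# Example: fuse intensity, color and orientation maps of the same size.
maps = [np.random.rand(240, 320) for _ in range(3)]
saliency = fuse_feature_maps(maps)
```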

3.
Objective: To address blurred boundaries and insufficient detection accuracy in image saliency detection, a saliency detection algorithm based on object-enhancement guidance and sparse reconstruction (OESR) is proposed. Method: Working on superpixels, a center-weighted color spatial-distribution map is first computed from the foreground perspective as the foreground saliency map. A background template is then built from the superpixels on the image boundary and preprocessed; the optimized template serves as the dictionary for sparse representation, the sparse reconstruction error is computed, and the error is corrected by error propagation to obtain a background-difference map. Finally, a fast object-proposal method generates a set of candidate windows, and their objectness scores yield an object-enhancement coefficient that guides the fusion of the two saliency maps into the final detection result. Results: Compared with 12 popular algorithms on public datasets, the proposed algorithm detects salient regions more accurately on images with different levels of background complexity and extracts salient objects more completely; average recall improves by 4.1% on the MSRA10k dataset, and average recall and F-measure improve by 18.5% and 3.1%, respectively, on VOC2007. Conclusion: A new saliency detection method is presented that builds saliency maps from color distribution and contrast and fuses them with an object-enhancement coefficient, improving the accuracy of the saliency map. Experiments show that the algorithm detects salient regions that better match visual perception and is suitable for salient object detection, object segmentation, or saliency-based image annotation on natural images.
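The sparse-reconstruction-error step can be sketched as follows, assuming the background template is already available as a column dictionary of boundary-superpixel features; the template optimization, error propagation and objectness-guided fusion from the abstract are omitted:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_reconstruction_saliency(features, background_dict, n_nonzero=8):
    """Saliency of each superpixel as its sparse reconstruction error over a
    background dictionary whose columns are feature vectors of boundary
    superpixels. Illustrative sketch only."""
    n_atoms = background_dict.shape[1]
    errors = []
    for f in features:                                     # one feature vector per superpixel
        omp = OrthogonalMatchingPursuit(
            n_nonzero_coefs=min(n_nonzero, n_atoms), fit_intercept=False)
        omp.fit(background_dict, f)
        recon = background_dict @ omp.coef_
        errors.append(np.linalg.norm(f - recon))           # large error => unlike background
    errors = np.asarray(errors)
    return (errors - errors.min()) / (errors.max() - errors.min() + 1e-12)
```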

4.
Salient object detection aims to identify both spatial locations and scales of the salient object in an image. However, previous saliency detection methods generally fail to detect whole objects, especially when the salient objects are actually composed of heterogeneous parts. In this work, we propose a saliency bias and diffusion method to effectively detect the complete spatial support of salient objects. We first introduce a novel saliency-aware feature to bias the objectness detection for saliency detection on a given image and incorporate the saliency clues explicitly in refining the saliency map. Then, we propose a saliency diffusion method to fuse the saliency confidences of different parts from the same object for discovering the whole salient object, which uses the learned visual similarities among object regions to propagate the saliency values across them. Benefiting from such a bias and diffusion strategy, the performance of salient object detection is significantly improved, as shown in the comprehensive experimental evaluations on four benchmark data sets, including MSRA-1000, SOD, SED, and THUS-10000.
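The saliency diffusion step could look roughly like the following sketch, where the region-to-region similarity matrix is assumed to be given (learning it, and the saliency bias step, are not shown); the alpha value and the iteration count are illustrative:

```python
import numpy as np

def diffuse_saliency(saliency, similarity, alpha=0.9, iters=20):
    """Propagate per-region saliency values across similar regions using a
    row-normalized similarity matrix; a minimal sketch under assumed inputs."""
    W = similarity / (similarity.sum(axis=1, keepdims=True) + 1e-12)
    s = saliency.copy()
    for _ in range(iters):
        s = alpha * (W @ s) + (1 - alpha) * saliency   # keep a pull toward the initial values
    return s
```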

5.
Salient object extraction from a still image is a very active topic, as it has many useful applications (e.g., image compression, content-based image retrieval, digital watermarking). In this paper, to improve the performance of the extraction approach, we propose a two-step salient object extraction framework based on image segmentation and saliency detection (TIS). Specifically, in the first step, the image is segmented into several regions using an image segmentation algorithm, and the saliency map for the whole image is computed with a saliency detection algorithm. In the second step, for each region, features are extracted and an SVM classifier labels the region as a background region or a salient region in two passes. Experimental results show that our proposed framework can extract salient objects more precisely and achieves good extraction results compared with previous salient object extraction methods.
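A minimal sketch of the per-region SVM classification step is given below; the per-region feature set (mean saliency, mean color, relative area) is an assumption for illustration, not the paper's exact descriptor:

```python
import numpy as np
from sklearn.svm import SVC

def region_features(image, saliency, labels, region_id):
    """Per-region feature vector: mean saliency, mean color and relative area.
    Hypothetical feature choice for illustration."""
    mask = labels == region_id
    return np.array([
        saliency[mask].mean(),                  # average saliency inside the region
        *image[mask].mean(axis=0),              # average color (image is H x W x 3)
        mask.mean(),                            # relative region size
    ])

def train_region_classifier(feats, is_salient):
    """Train the SVM that labels each region as salient (1) or background (0)."""
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(np.vstack(feats), is_salient)
    return clf
```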

6.
Salient object detection is a research hotspot in computer vision, but existing algorithms have several shortcomings: they often rely on a single loss function and ignore losses over multiple feature dimensions, which can be limiting; the topmost feature map comes from a single source; and feature maps are usually fused by element-wise addition, which does not effectively highlight the regions of interest in an image. To address these problems, this method combines three losses, structural similarity, intersection-over-union and cross-entropy, to capture image details; fuses feature maps by element-wise multiplication so that the model becomes more sensitive to salient regions; builds higher-level feature maps in a top-down manner with a residual feature-map enhancement module to strengthen their semantic information; and completes the encoder-decoder with a feature pyramid structure that fuses information at different scales. Comparative experiments on five datasets show that the method outperforms mainstream algorithms and achieves effective salient object detection.
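A compact numpy sketch of such a combined structural-similarity + IoU + cross-entropy loss is shown below; real implementations typically use a windowed SSIM and apply the loss to every deep-supervision output, which this illustration omits:

```python
import numpy as np

def hybrid_saliency_loss(pred, gt, eps=1e-7):
    """Illustrative BCE + soft-IoU + (global, single-window) SSIM loss.
    `pred` and `gt` are saliency maps with values in [0, 1]."""
    # binary cross-entropy
    bce = -np.mean(gt * np.log(pred + eps) + (1 - gt) * np.log(1 - pred + eps))
    # soft IoU loss
    inter = (pred * gt).sum()
    union = pred.sum() + gt.sum() - inter
    iou = 1.0 - (inter + eps) / (union + eps)
    # global SSIM loss (one window over the whole map)
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
    return bce + iou + (1.0 - ssim)
```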

7.
To address the problems that salient objects are not uniformly highlighted and background noise is hard to suppress in traditional salient object detection methods, a salient object detection method that fuses multi-scale contrast with a Bayesian model is proposed. The image is first segmented into a set of compact, color-homogeneous superpixels, which are re-clustered with the K-means algorithm to obtain segmentations at multiple scales. A background prior and a convex-hull center prior are introduced to compute saliency maps at the different scales, which are fused by weighting into a coarse saliency map. The region obtained by binarizing the coarse saliency map is assumed to be the foreground object, the observation likelihoods are then computed, and a Bayesian model further suppresses the background and highlights the salient regions. Compared with six mainstream algorithms on the public MSRA-1000 dataset, the proposed algorithm highlights salient objects more uniformly and achieves higher precision and lower mean absolute error.
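The Bayesian refinement step can be sketched roughly as follows, with the binarized coarse map taken as the assumed foreground and color likelihoods estimated from simple color histograms; the histogram granularity and the foreground threshold are illustrative choices:

```python
import numpy as np

def bayesian_refine(image, coarse_saliency, bins=8):
    """Posterior probability of being salient, given per-pixel color likelihoods
    estimated from the assumed foreground/background split. Sketch only."""
    prior = coarse_saliency                                  # p(salient), assumed in [0, 1]
    fg = coarse_saliency >= coarse_saliency.mean()           # assumed foreground mask
    q = np.clip((image / 256.0 * bins).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    n_bins = bins ** 3
    hist_fg = np.bincount(idx[fg], minlength=n_bins) + 1.0   # Laplace smoothing
    hist_bg = np.bincount(idx[~fg], minlength=n_bins) + 1.0
    p_v_fg = (hist_fg / hist_fg.sum())[idx]                  # p(color | salient)
    p_v_bg = (hist_bg / hist_bg.sum())[idx]                  # p(color | background)
    return prior * p_v_fg / (prior * p_v_fg + (1 - prior) * p_v_bg + 1e-12)
```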

8.
Objective: Most traditional saliency detection models rely on hand-crafted low- and mid-level features and prior information, which yields low precision and recall. With the rise of deep convolutional neural networks, saliency detection has developed rapidly, yet existing methods still share a common weakness: they struggle to uniformly highlight the clear boundary and interior of an entire object in complex images, mainly because the features used for detection are not sufficiently rich. Method: The VGG (visual geometry group) model is modified by removing the final fully connected layers and adding skip connections for pixel-level saliency prediction, which effectively combines multi-scale information from different convolutional layers and couples high-level semantic information with low-level details in a data-driven framework. To keep object boundaries and interiors consistent, a fully connected conditional random field (CRF) model refines the resulting saliency map. Results: The method is evaluated on six widely used public datasets, DUT-OMRON (Dalian University of Technology and OMRON Corporation), ECSSD (extended complex scene saliency dataset), SED2 (segmentation evaluation database 2), HKU, PASCAL-S and SOD (salient objects dataset), and compared with 14 state-of-the-art, representative methods in terms of precision-recall (PR) curves, F-measure, maximum F-measure, weighted F-measure and mean absolute error (MAE). The F-measure values on the six datasets are 0.696, 0.876, 0.797, 0.868, 0.772 and 0.785; the maximum F-measure values are 0.747, 0.899, 0.859, 0.889, 0.814 and 0.833; the weighted F-measure values are 0.656, 0.854, 0.772, 0.844, 0.732 and 0.762; and the MAE values are 0.074, 0.061, 0.093, 0.049, 0.099 and 0.124. Whether on images whose foreground and background colors are similar or on complex multi-object scenes, the performance is close to the latest results and better than most representative methods. Conclusion: The method is robust for saliency detection across various scenes, makes the boundaries and interiors of salient objects more uniform, and produces more accurate detection results.

9.
Image saliency features have been widely used in image segmentation, image retrieval, image compression and other fields. To address the long running time and noise sensitivity of traditional algorithms, an improved multi-scale saliency detection method based on the HSV color space is proposed. The method takes the hue, saturation and value channels of the HSV color space as visual features. A Gaussian pyramid decomposition first produces image sequences at three scales; an improved spectral residual (SR) algorithm then extracts a feature map for each feature at each scale; finally, these feature maps are fused by pixel-wise squaring and linear combination. Comparative experiments with other algorithms show that the method has good detection performance and robustness, detects salient regions relatively quickly, and highlights the whole salient object.
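An illustrative pipeline in this spirit is sketched below: a spectral-residual detector applied to the H, S and V channels over a three-level Gaussian pyramid, with squared maps averaged at the end. The resize size, blur kernels and fusion weights are assumptions rather than the paper's exact settings:

```python
import numpy as np
import cv2

def spectral_residual(channel):
    """Spectral-residual saliency for one single-channel map."""
    small = cv2.resize(channel.astype(np.float32), (64, 64))
    f = np.fft.fft2(small)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))            # remove the smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.resize(sal, (channel.shape[1], channel.shape[0]))

def hsv_multiscale_saliency(bgr):
    """Run the SR detector on H, S, V over a 3-level Gaussian pyramid and fuse
    the squared maps by averaging. Sketch under assumed parameters."""
    maps = []
    img = bgr.copy()
    for _ in range(3):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        for c in range(3):
            m = spectral_residual(hsv[..., c])
            maps.append(cv2.resize(m, (bgr.shape[1], bgr.shape[0])) ** 2)
        img = cv2.pyrDown(img)
    fused = sum(maps) / len(maps)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```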

10.
We investigate the issue of ship target segmentation in infrared (IR) images and propose an efficient method based on feature map integration. It consists mainly of two procedures: salient region detection based on multiple feature map integration and salient region segmentation based on locally adaptive thresholding. Firstly, a saliency map is constructed by integrating multiple features of IR ship targets, including gray-level intensity, local contrast, salient linear structures, and edge strength. Secondly, we propose an adaptive thresholding method to segment each local salient region, and a target selection procedure based on shape features is used to remove background and obtain the true target. Experimental results show that the proposed method performs well for IR ship target segmentation. The advantage of the proposed method is demonstrated in both visual and quantitative comparisons, especially for IR images with a bright background or a ship target close to port.
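The locally adaptive thresholding step might be sketched as below: an Otsu threshold is computed inside a window around each detected salient peak instead of over the whole image. The window size and peak-selection rule are illustrative, and the shape-based target selection is not shown:

```python
import numpy as np
import cv2

def segment_salient_regions(saliency, ir_image, win=64):
    """Locally adaptive thresholding around each salient peak; sketch only.
    `ir_image` is assumed to be an 8-bit grayscale IR image."""
    sal8 = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, peaks = cv2.threshold(sal8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(peaks)
    mask = np.zeros_like(sal8)
    for i in range(1, num):                                   # label 0 is the background
        cx, cy = centroids[i].astype(int)
        x0, y0 = max(cx - win // 2, 0), max(cy - win // 2, 0)
        x1, y1 = min(cx + win // 2, sal8.shape[1]), min(cy + win // 2, sal8.shape[0])
        patch = ir_image[y0:y1, x0:x1]
        _, local = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask[y0:y1, x0:x1] = np.maximum(mask[y0:y1, x0:x1], local)
    return mask
```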

11.
As an important problem in image understanding, salient object detection is essential for image classification, object recognition, and image retrieval. In this paper, we propose a new approach to detect salient objects from an image by using content-sensitive hypergraph representation and partitioning. Firstly, a polygonal potential Region-Of-Interest (p-ROI) is extracted by analyzing the edge distribution in an image. Secondly, the image is represented by a content-sensitive hypergraph. Instead of using fixed features and parameters for all images, we propose a new content-sensitive method for feature selection and hypergraph construction. In this method, the most discriminant color channel, the one that maximizes the difference between the p-ROI and the background, is selected for each image, and the number of neighbors in hyperedges is adjusted automatically according to the image content. Finally, incremental hypergraph partitioning is used to generate candidate regions for the final salient object detection; all candidate regions are evaluated against the p-ROI, and the best-matching one is selected as the final salient object. Our approach has been extensively evaluated on a large benchmark image database. Experimental results show that our approach not only achieves considerable improvement in terms of commonly adopted performance measures in salient object detection, but also provides more precise object boundaries, which is desirable for further image processing and understanding.

12.
Objective: Salient object detection models based on superpixel segmentation perform well on many public datasets, but in real scenes the number and size of the superpixels cannot adapt to changes in image and object size, which degrades performance, and over-segmentation is time-consuming. To solve this, a small-object saliency detection method based on Boolean maps and gray-level scarcity is proposed. Method: Following the idea of Boolean maps, conspicuous closed regions are extracted from the image and assigned saliency values according to their size, producing one saliency map. Using gray-level scarcity, rare gray values are assigned high saliency, suppressing gradually varying backgrounds such as smoke, clouds and vignetting, producing another saliency map. The two maps are fused into a full-resolution saliency map with prominent objects and clear contours. Results: Compared with 14 saliency models on three datasets, the saliency maps generated by the algorithm effectively suppress the background and detect multiple small objects. On the complex-background dataset, the algorithm achieves the highest F-measure and the smallest MAE (mean absolute error), and its AUC (area under the ROC curve) is second only to the DRFI (discriminative regional feature integration) and ASNet (attentive saliency network) models; compared with the BMS (Boolean map based saliency) model, AUC and F-measure improve by 1.9% and 6.9%, respectively, and MAE decreases by 1.8%. On the SO200 dataset, the algorithm has the highest F-measure and an MAE second only to ASNet; compared with BMS, F-measure improves by 3.8% and MAE decreases by 2%. On the SED2 dataset, the algorithm also outperforms six traditional models. In terms of running time, the algorithm has a clear advantage, processing 400 × 300-pixel images at about 12 frames per second. Conclusion: The algorithm has good adaptability and robustness and performs well on small objects in complex backgrounds.
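A minimal sketch of the gray-scarcity branch is shown below (the Boolean-map branch and the fusion step are not shown); it simply maps each pixel to the rarity of its gray level:

```python
import numpy as np

def gray_scarcity_map(gray):
    """Per-pixel saliency proportional to the rarity of the pixel's gray level.
    `gray` is assumed to be an 8-bit grayscale image; illustrative sketch."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    freq = hist / hist.sum()
    scarcity = 1.0 - freq / (freq.max() + 1e-12)    # rare levels approach 1
    return scarcity[gray]
```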

13.
Objective: Existing salient object detection models localize salient objects well but fall short in producing complete, uniform objects with clear edges. To obtain uniformly highlighted objects with sharp boundaries, a salient object detection model combining semantic assistance and edge features is proposed. Method: A semantic-assisted feature fusion module optimizes the lateral output features of the backbone; under semantic guidance, each layer selectively fuses adjacent lower-level features, obtaining sufficient structural information and strengthening the features of salient regions, so that uniform and complete salient objects are detected. A dedicated edge branch, combined with the salient-object features, produces accurate edge features, which are fused back into the object features to make the edge regions of salient objects more discriminative and thus yield clear edges. A bidirectional multi-scale module extracts multi-scale information in the network. Results: Compared with 12 popular saliency models on four common datasets, ECSSD (extended complex scene saliency dataset), DUT-O (Dalian University of Technology and OMRON Corporation), HKU-IS and DUTS, the model achieves maximum F-measure (MaxF) values of 0.940, 0.795, 0.929 and 0.870 and mean absolute error (MAE) values of 0.041, 0.057, 0.034 and 0.043. The resulting saliency maps are closer to the ground truth, and the model achieves the best MaxF and MAE more often than the other 12 methods. Conclusion: The proposed model combining semantic assistance and edge features is effective: semantic-assisted feature fusion and the introduction of edge features make the detected salient objects more complete and uniform with more distinguishable edges, and multi-scale feature extraction further improves the detection results.

14.
Objective: Salient object detection aims to extract the objects or regions in a given image that most attract human attention, and it is widely used in object recognition, image display, object segmentation, object detection and many other computer vision areas. Existing methods based on local or global contrast tend to fail on images with complex content, mainly because the reference region used for contrast computation is set unreasonably. To improve the completeness of salient object detection, a background-driven salient object detection algorithm is proposed that makes full use of the background prior in both saliency estimation and optimization. Method: A convolutional neural network first learns the background distribution of the image, and the background region segmented from the resulting background map serves as the reference region for contrast computation to estimate regional saliency. To improve the consistency of the regional saliency values, an optimization based on an enhanced graph model diffuses them: on top of the local connections of a traditional k-regular graph, prior connections to a virtual node and non-local connections among background-region nodes are added, embedding the background prior. Results: On the public ASD, SED, SOD and THUS-10000 databases, compared with nine popular algorithms, the average precision, recall, F-measure and MAE are 0.8736, 0.7952, 0.8441 and 0.1122, respectively, all better than the current popular algorithms. Conclusion: Using the background region as the reference region for contrast computation clearly raises the saliency of foreground regions. A convolutional neural network can effectively learn the background distribution and segment the background region. The enhanced graph-model optimization further diffuses saliency between foreground and background regions, improves the consistency of regional saliency, and suppresses saliency responses in the background. Experiments show that the algorithm detects salient regions accurately and completely and is suitable for salient object detection or object segmentation in complex images.

15.
16.
This paper proposes a hierarchical approach to region-based image retrieval (HIRBIR) based on the wavelet transform, whose decomposition is similar to human visual processing. First, automated image segmentation is performed efficiently in the low-low (LL) frequency subband of the wavelet domain, which provides a desirably low image resolution. In the proposed system, boundaries between segmented regions are deleted to improve the robustness of region-based retrieval against segmentation-related uncertainty. Second, a region feature vector is represented hierarchically by information from all wavelet subbands, and each component of the feature vector is a unified color-texture feature. Such a feature vector captures well the distinctive features (e.g., semantic texture) inside a region. Finally, employing the hierarchical feature vector, the weighted distance function for region matching is tuned meaningfully and easily, and a progressive stepwise indexing mechanism with relevance feedback is performed naturally and effectively in our system. Experimental results and comparison with other methods show that the proposed HIRBIR achieves a good tradeoff between retrieval effectiveness and efficiency as well as easy implementation for region-based image retrieval.

17.
Image matching based on scale-invariant feature transform (SIFT) features suffers from a large number of feature points and long computation time. To address this, a visual attention mechanism is introduced and a saliency-map-based SIFT feature detection and matching method is proposed. Commonly used saliency models are compared, and the spectral residual method is chosen to extract the saliency map of an image. The saliency map is binarized and processed with morphological operations to obtain regular, reasonable salient regions. SIFT features are then extracted within the salient regions, feature vectors are generated, and image matching is performed. Experimental results show that the method improves computational efficiency and that the resulting SIFT features are more stable.
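The masking idea can be sketched with OpenCV as below; the saliency map here is a stand-in placeholder rather than the spectral-residual detector used in the paper, and the file path is hypothetical:

```python
import cv2
import numpy as np

# Restrict SIFT detection to the salient region by passing a binary mask to
# detectAndCompute. The blurred image serves only as a placeholder saliency map.
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)          # hypothetical path
saliency = cv2.GaussianBlur(img, (9, 9), 0)                   # placeholder saliency map
_, mask = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, mask)     # features inside the mask only
```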

18.
Image saliency detection with multiple prior features and integrated contrast (cited 1 time: 0 self-citations, 1 by others)
Objective: Image saliency detection is widely used in computer vision, but existing methods usually perform poorly on complex background regions: low-level features are unreliable, and a single feature rarely yields a high-quality saliency map. A saliency detection method that increases feature diversity is therefore proposed. Method: Building on high-level prior knowledge, the background prior and center prior features are redefined, and since human vision tends to pay more attention to warm colors, a color prior is added. For low-level features, the popular global contrast and local contrast features are used. During feature fusion, a new strategy applies linear or non-linear fusion depending on the situation, yielding a high-quality saliency map. Results: Comparative evaluation on the public MSRA-1000 and DUT-OMRON databases shows that the algorithm based on multiple prior features and integrated contrast achieves high precision, recall and F-measure, improving on the RBD algorithm by more than 1.5% on each, and its overall performance is better than that of ten current mainstream algorithms. Conclusion: Compared with algorithms based on low-level features or a single prior, the proposed algorithm makes full use of the image information; it highlights global contrast while preserving more local information, uniformly highlights salient regions, effectively suppresses complex background regions, and produces saliency maps that better match visual perception.

19.
Wang Jun, Zhao Zhengyun, Yang Shangqin, Chai Xiuli, Zhang Wanjun, Zhang Miaohui. Applied Intelligence, 2022, 52(6): 6208-6226

High-level semantic features and low-level detail features both matter for salient object detection in fully convolutional neural networks (FCNs). Further integration of low-level and high-level features increases the ability to map salient object features. In addition, different channels of the same feature map are not of equal importance to saliency detection. In this paper, we propose a residual attention learning strategy and a multistage refinement mechanism to gradually refine the coarse prediction in a scale-by-scale manner. First, a global information complementary (GIC) module is designed by integrating low-level detailed features and high-level semantic features. Second, to extract multiscale features of the same layer, a multiscale parallel convolutional (MPC) module is employed. Afterwards, we present a residual attention mechanism module (RAM) that receives the feature maps of adjacent stages from the hybrid feature cascaded aggregation (HFCA) module. The HFCA aims to enhance the feature maps, reducing the loss of spatial details and the impact of variations in object shape, scale and position. Finally, we adopt a multiscale cross-entropy loss to guide the network in learning salient features. Experimental results on six benchmark datasets demonstrate that the proposed method significantly outperforms 15 state-of-the-art methods under various evaluation metrics.


20.
In salient object detection, poor separability between background and foreground regions leads to unsatisfactory results. To address this, a salient object detection algorithm based on a neighborhood optimization mechanism is proposed. The image is first segmented into superpixels; contrast and distribution maps are then built in the CIELab color space and fused with a new merging scheme; finally, under constraints such as spatial distance, a neighborhood update mechanism optimizes the initial saliency map. Comparative experiments show that the algorithm detects salient objects more effectively.
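The contrast-map step can be sketched as follows with SLIC superpixels and mean Lab colors; the distribution map, the merging scheme and the neighborhood update are omitted, and the superpixel count is an illustrative choice:

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def superpixel_contrast_saliency(rgb, n_segments=200):
    """Global-contrast map over SLIC superpixels: saliency of a superpixel is
    its summed Lab color distance to all other superpixels. Sketch only."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    lab = rgb2lab(rgb)
    n = labels.max() + 1
    means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])
    dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    contrast = dist.sum(axis=1)
    contrast = (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-12)
    return contrast[labels]                                   # per-pixel saliency map
```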
