Similar Literature
20 similar documents retrieved.
1.
A new method for detecting small maritime targets in visible-light images is proposed. Because the frequency-tuned approach measures saliency as the difference between the global mean of the image in the spatial domain and the Gaussian-filtered result, salient-object detection degrades when the background contains substantial clutter. The proposed method improves on frequency-tuned saliency detection: the three feature channels of the image in LAB space are first divided into blocks, frequency-tuned saliency detection is applied within each block, and the block results are then merged into an overall saliency map for detecting small maritime targets. This overcomes the weakness of the frequency-tuned method, which cannot extract small targets effectively when the sea surface contains heavy clutter. Experimental results demonstrate the effectiveness of the method.
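A minimal sketch of the block-wise frequency-tuned idea described above, written in Python with OpenCV; the block size, Gaussian kernel, and normalization are illustrative assumptions rather than the paper's parameters:

```python
import cv2
import numpy as np

def blockwise_frequency_tuned_saliency(bgr, block=64):
    """Apply frequency-tuned saliency inside regular blocks of the Lab image."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    h, w = lab.shape[:2]
    sal = np.zeros((h, w), np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = lab[y:y + block, x:x + block]
            region_blur = blurred[y:y + block, x:x + block]
            mean = region.mean(axis=(0, 1))   # per-block Lab mean, not the global one
            sal[y:y + block, x:x + block] = np.linalg.norm(region_blur - mean, axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```

Computing the mean per block rather than over the whole image is what keeps widespread sea clutter from dominating the saliency measure.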

2.
In image classification based on the bag-of-visual-words framework, the image patches used to create image representations affect classification performance significantly. Currently, however, patches are sampled mainly by processing low-level image information, or are simply extracted regularly or randomly. These approaches are not effective, because the patches they extract are not necessarily discriminative for image categorization. In this paper, we propose to use both bottom-up information, obtained by processing low-level image information, and top-down information, obtained by exploring statistical properties of training image grids, to extract image patches. In the proposed work, an input image is divided into regular grids, each of which is evaluated on its bottom-up and/or top-down information. Every grid is then assigned a saliency value based on this evaluation, so that a saliency map can be created for the image, and patch sampling from the input image is performed on the basis of the obtained saliency map. Furthermore, we propose a method to fuse the two kinds of information. The proposed methods are evaluated on both object and scene categories, and the experimental results demonstrate their effectiveness.
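As a hedged illustration of sampling patches from a grid-level saliency map (however that map is obtained), the sketch below draws patch centres with probability proportional to grid saliency; grid and patch sizes are assumed values, not taken from the paper:

```python
import numpy as np

def sample_patches_by_saliency(image, saliency, grid=16, patch=32, n_samples=200, seed=0):
    """Draw patch centres from grid cells with probability proportional to cell saliency."""
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    gy, gx = h // grid, w // grid
    cells = saliency[:gy * grid, :gx * grid].reshape(gy, grid, gx, grid).mean(axis=(1, 3))
    prob = cells.ravel() / cells.sum()
    idx = rng.choice(cells.size, size=n_samples, p=prob)
    half = patch // 2
    patches = []
    for i in idx:
        cy = (i // gx) * grid + grid // 2
        cx = (i % gx) * grid + grid // 2
        patches.append(image[max(0, cy - half):cy + half, max(0, cx - half):cx + half])
    return patches
```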

3.
In this paper, we propose a novel method for No-Reference Image Quality Assessment (NR-IQA) that combines a deep Convolutional Neural Network (CNN) with a saliency map. We first investigate the effect of CNN depth on NR-IQA by comparing our ten-layer Deep CNN (DCNN) with the state-of-the-art CNN architecture proposed by Kang et al. (2014); the DCNN architecture delivers higher accuracy on the LIVE dataset. To mimic human vision, we combine saliency maps with the CNN in a Saliency-based DCNN (SDCNN) framework for NR-IQA. We compute a saliency map for each image and split both the map and the image into small patches. Each image patch is assigned an importance value based on its saliency patch, a set of Salient Image Patches (SIPs) is selected according to saliency, and the model is applied only to those SIPs to predict the quality score of the whole image. Our experimental results show that the SDCNN framework is superior to other state-of-the-art approaches on the widely used LIVE dataset. The TID2008 and CSIQ image quality datasets are utilised to report cross-dataset results, which indicate that the proposed SDCNN can generalise well to other datasets.
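A sketch of the salient-image-patch (SIP) selection step: the saliency map decides which patches are scored, and the image-level quality is the mean over those patches. The `cnn_score` callable is a stand-in for the trained DCNN regressor, which is not reproduced here; the patch size and keep ratio are assumptions:

```python
import numpy as np

def predict_quality_on_salient_patches(image, saliency, cnn_score, patch=32, keep_ratio=0.5):
    """Score only the most salient non-overlapping patches and average the results."""
    h, w = saliency.shape
    patches, weights = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(image[y:y + patch, x:x + patch])
            weights.append(saliency[y:y + patch, x:x + patch].mean())
    order = np.argsort(weights)[::-1]                     # most salient first
    keep = order[: max(1, int(len(order) * keep_ratio))]  # the SIPs
    return float(np.mean([cnn_score(patches[i]) for i in keep]))
```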

4.
The human visual system (HVS) is quite adept at swiftly detecting objects of interest in complex visual scenes. Simulating the human visual system to detect visually salient regions of an image has been an active topic in computer vision. Inspired by the random-sampling-based bagging ensemble learning method, an ensemble dictionary learning (EDL) framework for saliency detection is proposed in this paper. Instead of learning a universal dictionary, which requires a large number of training samples collected from natural images, multiple over-complete dictionaries are learned independently from small portions of randomly selected samples drawn from the input image itself, yielding more flexible multiple sparse representations for each image patch. To make salient patches more distinct from background regions, we present a reconstruction-residual-based method for dictionary atom reduction. The multiple probabilistic saliency responses obtained for each patch are then combined from a probabilistic perspective to achieve better prediction of salient regions. Experimental results on several open test datasets and on natural images demonstrate that the proposed EDL for saliency detection is highly competitive with existing state-of-the-art algorithms.
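The sketch below follows the general recipe only: several dictionaries are learned from random subsets of the image's own patches, and a patch with a large average reconstruction residual is treated as more salient. The library choice (scikit-learn's DictionaryLearning), dictionary size, and sampling ratio are assumptions; the paper's atom-reduction and probabilistic combination steps are not reproduced:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def ensemble_residual_saliency(patches, n_dicts=5, n_atoms=64, sample_ratio=0.2, seed=0):
    """Average reconstruction residual over an ensemble of self-learned dictionaries."""
    rng = np.random.default_rng(seed)
    X = np.asarray(patches, dtype=np.float64)            # (n_patches, patch_dim)
    residuals = np.zeros((n_dicts, len(X)))
    for d in range(n_dicts):
        n_train = min(len(X), max(n_atoms, int(len(X) * sample_ratio)))
        subset = X[rng.choice(len(X), size=n_train, replace=False)]
        dico = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                  max_iter=50, random_state=d).fit(subset)
        codes = dico.transform(X)
        residuals[d] = np.linalg.norm(X - codes @ dico.components_, axis=1)
    sal = residuals.mean(axis=0)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```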

5.
The nonlocal mean (NM) is an efficient tool for many low-level image processing tasks, but it is challenging to use it directly for saliency detection: the conventional NM method can only extract the structure of the image itself and operates on a regular pixel-level graph, whereas saliency detection usually requires human perception and more complex connectivity among image elements. In this paper, we propose a novel generalized nonlocal mean (GNM) framework with an object-level cue that fuses low-level and high-level cues to generate saliency maps. For a given image, we first use uniqueness to describe the low-level cue. Second, we adopt an objectness algorithm to find potential object candidates and pool the object measures onto patches to generate two high-level cues. Finally, by fusing these three cues into an object-level cue for the GNM, we obtain the saliency map of the image. Extensive experiments show that our GNM saliency detector produces more precise and reliable results than state-of-the-art algorithms.

6.
Salient Region Detection by Modeling Distributions of Color and Orientation   (cited 3 times: 0 self-citations, 3 by others)
We present a robust salient region detection framework based on the color and orientation distributions in images. The proposed framework consists of a color saliency framework and an orientation saliency framework. The color saliency framework detects salient regions based on the spatial distribution of the component colors in the image space and their remoteness in the color space. The dominant hues in the image are used to initialize an expectation-maximization (EM) algorithm that fits a Gaussian mixture model in the hue-saturation (H-S) space. The mixture of Gaussians in H-S space is used to compute the inter-cluster distance in the H-S domain as well as the relative spread of the corresponding colors in the spatial domain. The orientation saliency framework detects salient regions based on the global and local behavior of different orientations in the image. The oriented spectral information from the Fourier transform of local image patches is used to obtain the local orientation histogram of the image, and salient regions are detected by identifying spatially confined orientations and local patches with high orientation-entropy contrast. The final saliency map is chosen as either the color saliency map or the orientation saliency map by automatically identifying which of the two leads to correct identification of the salient region. The experiments are carried out on a large image database annotated with "ground-truth" salient regions, provided by Microsoft Research Asia, which enables robust objective comparisons with other salient region detection algorithms.
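A compact sketch of the hue-saturation mixture step using scikit-learn's EM implementation; the number of components is an assumption, and the paper's dominant-hue initialisation is omitted:

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def hue_saturation_clusters(bgr, n_components=5):
    """Fit a Gaussian mixture to the H-S values of all pixels via EM."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hs = hsv[..., :2].reshape(-1, 2).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(hs)
    labels = gmm.predict(hs).reshape(bgr.shape[:2])
    return gmm, labels   # means/covariances give inter-cluster distances in H-S space
```

The per-cluster pixel positions recovered from `labels` are what the colour saliency framework would then use to measure the spatial spread of each colour cluster.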

7.
To address the problem that saliency maps based on color histograms fail to highlight edge contours and texture details, an image saliency detection method based on SLIC that fuses texture and histogram cues is proposed, combining color, spatial-position, and texture features with histograms. The method first segments the image into superpixels with the SLIC algorithm and extracts a saliency map based on color and spatial position; it then extracts a saliency map based on the color histogram and another based on texture features; finally, the maps from the two stages are fused into the final saliency map. In addition, the salient object in the image is obtained with simple threshold segmentation. Experimental results show that the proposed algorithm clearly outperforms classical saliency detection algorithms.
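A sketch of the first stage only (SLIC superpixels plus a colour/spatial contrast map), assuming scikit-image; the segment count, compactness, and distance weighting are illustrative choices, and the histogram and texture maps of the later stages are not shown:

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def slic_color_spatial_saliency(rgb, n_segments=300):
    """Colour contrast between superpixels, attenuated by their spatial distance."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    lab = rgb2lab(rgb)
    h, w = labels.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ids = np.unique(labels)
    color = np.array([lab[labels == i].mean(axis=0) for i in ids])
    pos = np.array([[yy[labels == i].mean() / h, xx[labels == i].mean() / w] for i in ids])
    cdist = np.linalg.norm(color[:, None] - color[None, :], axis=2)
    pdist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    sal = (cdist * np.exp(-pdist / 0.25)).sum(axis=1)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal[labels]   # broadcast superpixel saliency back to pixels
```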

8.
Objective: Saliency detection, grounded in the study of human vision, is an important means of helping computer sensors perceive the world. Most existing methods detect only the salient points or regions that attract human interest; they can neither highlight the saliency of an object as a whole nor distinguish different levels of saliency among objects. To address these problems, an object-level saliency detection method based on hierarchical information fusion is proposed. Method: Unlike most current methods, this work uses structural information at two levels, mid-level superpixels and object-level regions, to obtain the object saliency map. First, the image is segmented into mid-level superpixels and an initial saliency map is constructed in a bottom-up manner; then spectral clustering groups the mid-level superpixels into object-level regions, and top-down priors are applied to adjust the initial prior map; finally, a heat-kernel diffusion process propagates superpixel-level saliency onto the object-level regions, yielding a consistent and uniform object-level saliency map. Result: Quantitative comparisons with 16 related algorithms on the MSRA1000 benchmark in terms of precision-recall curves and the F-measure show that the average precision and F-measure score are more than 5% higher than those of the other algorithms. Conclusion: The saliency map generated through multi-level information fusion highlights the saliency of whole objects and distinguishes the saliency of different objects. The method is also applicable to saliency detection with multiple objects.

9.
Objective: Much previous salient-object detection work focuses on 2D images and does not carry over to saliency detection on RGB-D images. This paper extracts both color and depth features and proposes an RGB-D saliency detection method based on feature fusion and S-D probability correction, so that the color and depth features complement each other. Method: First, with the four borders of the RGB image taken as background query nodes, feature-fused manifold ranking produces the saliency map of the RGB image. Second, the S-D correction probability is computed from the RGB saliency map and the depth features. Third, the saliency map of the depth image is computed and corrected according to the S-D correction probability. Finally, foreground query nodes are extracted from the corrected map and feature-fused manifold ranking is applied again for refinement, giving the final saliency map. Result: The method was used to detect saliency on 1,000 images from an RGB-D dataset and compared against six other methods; its results are closer to the manually annotated ground truth. The precision-recall (PR) curves show higher precision than five of the compared methods at equal recall, and the processing time of 2.150 s per image is also competitive. Conclusion: The method detects saliency on RGB-D images fairly accurately.
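The manifold-ranking step that both stages rely on has a simple closed form; the sketch below assumes an affinity matrix W over superpixels and a binary query vector (for example, the border superpixels as background queries). The value of alpha is a typical choice, not taken from the paper:

```python
import numpy as np

def manifold_ranking(W, query, alpha=0.99):
    """Closed-form ranking f = (D - alpha * W)^-1 * y over a superpixel graph."""
    D = np.diag(W.sum(axis=1))
    f = np.linalg.solve(D - alpha * W, query.astype(np.float64))
    return (f - f.min()) / (f.max() - f.min() + 1e-8)

# With the four image borders as background queries, 1 - manifold_ranking(W, y_border)
# gives a foreground estimate; re-ranking from foreground queries then refines it.
```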

10.
Location information, i.e., the position of content in the image plane, is considered an important supplement in saliency detection. Its effect is usually evaluated by integrating it into selected saliency detection methods and measuring the improvement, which is heavily influenced by the choice of those methods. In this paper, we provide a direct and quantitative analysis of the importance of location information for saliency detection in natural images. We first analyze the relationship between content location and saliency distribution on four public image datasets, and validate the distribution by simply treating a location-based Gaussian distribution as the saliency map. To further validate the effectiveness of location information, we propose a location-based saliency detection approach that initializes saliency maps entirely from location information and propagates saliency among patches based on color similarity, and we discuss the robustness of the effect of location information. The experimental results show that location information plays a positive role in saliency detection, and that the proposed method outperforms most state-of-the-art saliency detection methods and handles natural images with different object positions and multiple salient objects.
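The location-only baseline discussed above amounts to a centred 2-D Gaussian used directly as a saliency map; the sigma ratio below is an assumed value:

```python
import numpy as np

def center_gaussian_saliency(h, w, sigma_ratio=0.33):
    """A centred anisotropic Gaussian as a pure location-prior saliency map."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = h * sigma_ratio, w * sigma_ratio
    g = np.exp(-0.5 * (((yy - cy) / sy) ** 2 + ((xx - cx) / sx) ** 2))
    return g / g.max()
```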

11.
A variety of saliency models based on different schemes and methods have been proposed in recent years, and their performance often varies with the image, so the models complement each other. It is therefore natural to ask whether saliency detection performance can be improved by fusing different saliency models. This paper proposes a novel and general framework that adaptively fuses saliency maps generated by various saliency models based on a quality assessment of those maps. Given an input image and its multiple saliency maps, quality features are extracted from the image and the maps. A quality assessment model, learned with a multiple-kernel boosting algorithm, then assigns a quality score to each saliency map. Next, a linear summation with a power-law transformation fuses the saliency maps adaptively according to their quality scores. Finally, a graph-cut-based refinement enhances the spatial coherence of the saliency and produces the high-quality final saliency map. Experimental results on three public benchmark datasets with state-of-the-art saliency models demonstrate that our fusion framework consistently outperforms all individual saliency models and other fusion methods, and effectively improves saliency detection performance.
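A sketch of the adaptive fusion step: the maps are normalised and linearly combined with weights derived from their quality scores. Where exactly the power-law transform is applied, and its exponent, are assumptions here:

```python
import numpy as np

def fuse_by_quality(saliency_maps, quality_scores, gamma=1.5):
    """Quality-weighted linear fusion with a power-law transform on the weights."""
    maps = [m / (m.max() + 1e-8) for m in saliency_maps]
    w = np.power(np.asarray(quality_scores, dtype=np.float64), gamma)
    w /= w.sum()
    fused = sum(wi * mi for wi, mi in zip(w, maps))
    return fused / (fused.max() + 1e-8)
```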

12.
We propose a mesh saliency detection approach using an absorbing Markov chain. Unlike most existing methods, which rely on a center-surround operator, our method employs feature variance to identify insignificant regions and considers both background and foreground cues. First, we partition an input mesh into a set of segments using the Ncuts algorithm, and each segment is over-segmented into patches based on Zernike coefficients; background patches are then selected by computing the feature variance within the segments. Second, the absorbed time of each node is calculated via an absorbing Markov chain with the background patches as absorbing nodes, which gives a preliminary saliency measure. Third, a refined saliency result is generated in the same way but with foreground nodes, extracted from the preliminary saliency map, as absorbing nodes; this suppresses the background and efficiently enhances salient foreground regions. Finally, a Laplacian-based smoothing procedure spreads the patch saliency to each vertex. Experimental results demonstrate that our scheme performs competitively against state-of-the-art approaches.
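The absorbed time used as the saliency measure follows the standard fundamental-matrix formula for an absorbing Markov chain, sketched below for a row-stochastic transition matrix P and a boolean mask of absorbing nodes (graph construction over mesh patches is not shown):

```python
import numpy as np

def absorbed_time(P, absorbing):
    """Expected steps to absorption for every transient node of an absorbing chain."""
    transient = ~absorbing
    Q = P[np.ix_(transient, transient)]           # transitions among transient nodes
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)     # fundamental matrix
    times = np.zeros(P.shape[0])
    times[transient] = N.sum(axis=1)              # row sums = expected absorbed time
    return times   # long absorbed time w.r.t. background absorbers -> likely salient
```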

13.
显著性提取方法在图像处理、计算机视觉领域有着广泛的应用.然而,基于全局特征和基于局部特征的显著性区域提取算法存在各自的缺点,为此本文提出了一种融合全局和局部特征的显著性提取算法.首先,对图像进行不重叠地分块,当每个图像块经过主成分分析(Principle component analysis,PCA)映射到高维空间后,根据孤立的特征点对应显著性区域的规律得到基于全局特征的显著图;其次,根据邻域内中心块与其他块的颜色不相似性得到基于局部特征的显著图;最后,按照贝叶斯理论将这两个显著图融合为最终的显著图.在公认的三个图像数据库上的仿真实验验证了所提算法在显著性提取和目标分割上比其他先进算法更有效.  相似文献   

14.
Image saliency analysis plays an important role in applications such as object detection, image compression, and image retrieval. Traditional saliency detection methods ignore texture cues. In this paper, we propose a novel method that combines color and texture cues to detect image saliency robustly. Superpixel segmentation and the mean-shift algorithm are adopted to segment the original image into small regions. Then, based on the responses of a Gabor filter, color and texture features are extracted to produce color and texture sub-saliency maps. Finally, the two sub-saliency maps are combined in a nonlinear manner to obtain the final saliency map for detecting salient objects in the image. Experimental results show that the proposed method outperforms other state-of-the-art algorithms on images with complex textures.
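A sketch of the texture cue only, using an OpenCV Gabor filter bank; the kernel parameters, number of orientations, and the way responses are turned into a sub-saliency map are illustrative assumptions:

```python
import cv2
import numpy as np

def gabor_texture_saliency(gray, ksize=15, n_orient=8):
    """Texture sub-saliency from the contrast of Gabor filter-bank energy."""
    gray = gray.astype(np.float32) / 255.0
    responses = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel)))
    energy = np.mean(responses, axis=0)
    sal = np.abs(energy - energy.mean())          # deviation from mean texture energy
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```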

15.
LI Weizhong. Journal of Computer Applications, 2020, 40(8): 2365-2371
To address the low image quality and poor efficiency of existing multi-exposure image fusion algorithms, a multi-exposure fusion algorithm based on local scene features is proposed. First, the differently exposed image sequence is divided into regular blocks, with neighboring blocks overlapping by a fixed number of pixels. For static scenes, the weight of each block is computed from three measures: local variance, local visibility, and local saliency. For dynamic scenes, a local similarity measure is additionally applied during fusion to remove the ghosting caused by moving objects. Second, the optimal blocks are obtained by weighted summation. Finally, the output blocks are fused, with the pixels in overlapping regions averaged, to produce the final fusion result. Twelve exposure sequences of different natural scenes were analyzed and compared, both subjectively and objectively, against seven existing pixel-based and feature-based algorithms. The results show that in both static and dynamic scenes the proposed algorithm preserves more scene information and yields satisfactory visual results while maintaining high computational efficiency.
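A sketch of the static-scene path: overlapping blocks are fused by weighted summation and overlapped pixels are averaged. The variance-times-well-exposedness weight is a stand-in for the paper's variance / visibility / saliency measures, and the ghost-removal similarity term for dynamic scenes is omitted:

```python
import numpy as np

def block_weight(block):
    """Simple stand-in weight: local variance times a mid-grey well-exposedness term."""
    gray = block.mean(axis=2)
    exposedness = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2)).mean()
    return gray.var() * exposedness + 1e-8

def fuse_exposures(stack, block=32, step=16):
    """Overlapping-block multi-exposure fusion with averaging in the overlaps."""
    h, w, _ = stack[0].shape
    acc = np.zeros((h, w, 3), dtype=np.float64)
    cnt = np.zeros((h, w, 1), dtype=np.float64)
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            blocks = [img[y:y + block, x:x + block].astype(np.float64) / 255.0
                      for img in stack]
            ws = np.array([block_weight(b) for b in blocks])
            ws /= ws.sum()
            acc[y:y + block, x:x + block] += sum(wi * bi for wi, bi in zip(ws, blocks))
            cnt[y:y + block, x:x + block] += 1.0
    return acc / np.maximum(cnt, 1.0)
```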

16.
A multi-scale saliency detection algorithm based on the HVS   (cited 1 time: 0 self-citations, 1 by others)
To improve the accuracy of image saliency detection, a multi-scale saliency detection method based on the human visual system (HVS) is proposed, drawing on research into human vision. The method first partitions the image into small patches to capture local information, then applies PCA for feature extraction and computes inter-patch differences in the resulting low-dimensional space. Combining the HVS model with a multi-scale scheme suppresses the saliency of the background and raises the saliency value of the target. Experimental results show that the method achieves satisfactory detection quality and noise robustness.
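A single-scale sketch of the PCA step: patches are projected to a low-dimensional space and each patch's saliency is its mean distance to the others. The patch size, dimensionality, and dissimilarity measure are assumptions, and the multi-scale combination is not shown:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

def pca_patch_saliency(gray, patch=8, n_components=10):
    """Patch dissimilarity in a PCA-reduced space, mapped back to the image."""
    h, w = gray.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(gray[y:y + patch, x:x + patch].ravel())
            coords.append((y, x))
    X = PCA(n_components=n_components).fit_transform(np.asarray(patches, dtype=np.float64))
    sal_patch = cdist(X, X).mean(axis=1)          # average distance to all other patches
    sal = np.zeros((h, w))
    for (y, x), s in zip(coords, sal_patch):
        sal[y:y + patch, x:x + patch] = s
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```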

17.
Objective: Saliency detection is a fundamental problem in image processing and computer vision. Traditional models preserve the boundaries of salient objects well but have low confidence in them and low recall, whereas deep learning models are highly confident about salient objects but produce coarse boundaries and lower precision. To exploit the strengths and suppress the weaknesses of both, a combined saliency model is proposed. Method: First, a recent densely connected convolutional network is adapted and a fully convolutional network (FCN) saliency model is trained on it; an existing superpixel-based saliency regression model is also selected. After the saliency maps of both models are obtained, a fusion algorithm combines them into the final, refined result via the Hadamard product of the saliency maps and a one-to-one nonlinear mapping of per-pixel saliency values. Result: Experiments on four datasets compare the model with ten recent methods. On HKU-IS, the F-measure improves by 2.6% over the second-best model; on MSRA, the F-measure improves by 2.2% and the MAE decreases by 5.6%; on DUT-OMRON, the F-measure improves by 5.6% and the MAE decreases by 17.4%. An ablation experiment on MSRA confirms that the proposed fusion algorithm improves saliency detection. Conclusion: The proposed model combines the advantages of traditional and deep learning models and yields more accurate saliency detection results.
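A sketch of the fusion idea only: the two maps are multiplied element-wise (Hadamard product) and the result is passed through a per-pixel nonlinear remapping. The sigmoid used here is an assumption; the paper's exact one-to-one mapping is not reproduced:

```python
import numpy as np

def fuse_fcn_and_superpixel_maps(s_fcn, s_sp, slope=10.0, center=0.5):
    """Hadamard product of two saliency maps followed by a sigmoid remapping."""
    s_fcn = s_fcn / (s_fcn.max() + 1e-8)
    s_sp = s_sp / (s_sp.max() + 1e-8)
    hadamard = s_fcn * s_sp                        # agreement of the two models
    fused = 1.0 / (1.0 + np.exp(-slope * (hadamard - center)))
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```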

18.
Objective: To describe image saliency accurately, a saliency detection method that combines global consistency with local distinctiveness is proposed, and the saliency features are incorporated into object segmentation. Method: First, the frequency-tuned (IG) method detects saliency from the consistency of the object's global features. Then the NIF algorithm is introduced to capture the local distinctiveness of the salient object. Finally, the two algorithms are combined into the final saliency detection method and applied to image object segmentation. Result: The method was validated on the widely used Weizmann dataset to verify its effectiveness in revealing objects and compared with other algorithms; it outperforms current popular algorithms in precision, recall, and F1-measure (0.4456, 0.7512, and 0.5764, respectively) and achieves satisfactory results in saliency-guided object segmentation. Conclusion: A new saliency detection algorithm is proposed that reflects both the global and local characteristics of the object and achieves high statistical scores on public datasets. Experiments show that it detects saliency in natural images fairly accurately and can be applied successfully to object segmentation of natural images.

19.
This paper deals with the super-resolution (SR) problem based on a single low-resolution (LR) image. Inspired by the local tangent space alignment algorithm in [16] for nonlinear dimensionality reduction of manifolds, we propose a novel patch-learning method using locally affine patch mapping (LAPM) to solve the SR problem. The approach maps the patch manifold of a low-resolution image to the patch manifold of the corresponding high-resolution (HR) image. The patch mapping is learned from a training set of LR/HR image pairs, exploiting the affine equivalence between the local low-dimensional coordinates of the two manifolds. The latent HR image of the input LR image is estimated from the HR patches generated by applying the learned patch mapping to the LR patches of the input. We also give a simple analysis of the reconstruction error of the LAPM algorithm, and we propose a global refinement technique to improve the estimated HR image. Numerical results show the efficiency of the proposed methods in comparison with existing algorithms.
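The sketch below is a neighbour-embedding analogue of the patch mapping, not the LAPM algorithm itself: each LR patch is written as an affine (sum-to-one) combination of its k nearest LR training patches, and the same weights reconstruct the HR patch. The regulariser and k are assumed values:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def affine_patch_mapping(lr_patches, train_lr, train_hr, k=5, reg=1e-6):
    """Map LR patches to HR patches through locally affine weights on LR neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_lr)
    _, idx = nn.kneighbors(lr_patches)
    hr_patches = []
    for p, nbrs in zip(np.asarray(lr_patches, dtype=np.float64), idx):
        X = train_lr[nbrs].astype(np.float64)      # k x d local neighbourhood
        G = (X - p) @ (X - p).T                    # local Gram matrix
        w = np.linalg.solve(G + reg * np.eye(k), np.ones(k))
        w /= w.sum()                               # enforce affine (sum-to-one) weights
        hr_patches.append(w @ train_hr[nbrs].astype(np.float64))
    return np.array(hr_patches)
```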

20.
Objective: Salient-object detection is an important research topic in computer vision. To address the weak rendering of texture detail and the incomplete edge contours in existing detection results, a salient-object detection method that fuses multiple features with prior information is proposed; it extracts the salient regions of an image efficiently and comprehensively. Method: First, interest points are extracted and a global contrast map is computed; a Bayesian scheme fuses the convex hull of the interest points with the global contrast map to produce the contrast feature map. Color-space maps are built from color histograms at multiple scales, the minimum information entropy is computed according to the entropy criterion, and the color-space map at that scale is taken as the color feature map. Unsharp masking is applied to sharpen the image, and the local binary pattern (LBP) operator yields the texture feature map. Then, graph regularization (GR) and manifold ranking (MR) provide the center prior map and the boundary prior map. Finally, a cellular automaton fuses the contrast, color, and texture feature maps with the center and boundary prior maps into an initial saliency map, which is refined with a fast guided filter to give the final saliency map. Result: The algorithm was evaluated on the two public datasets MSRA10K and ECSSD and compared with 12 popular algorithms with open-source code; it shows clear improvements in precision-recall (PR) curves, receiver operating characteristic (ROC) curves, F-measure, mean absolute error (MAE), and the structural measure (S-measure), outperforming the compared algorithms overall. Conclusion: By fully exploiting contrast, color, and texture features together with center and boundary priors, the algorithm extracts salient regions comprehensively while preserving texture and detail, produces more complete edge contours that satisfy the hierarchical and detail requirements of human vision, and generalizes reasonably well.
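A plain guided filter (He et al.'s formulation), written with OpenCV box filters, can stand in for the fast guided filtering step that refines the fused map; the radius and epsilon are assumed values, and the cellular-automaton fusion itself is not reproduced. Both inputs are assumed to be scaled to [0, 1]:

```python
import cv2
import numpy as np

def guided_filter_refine(guide_gray, saliency, radius=8, eps=1e-3):
    """Edge-preserving refinement of a coarse saliency map guided by the grey image."""
    I = guide_gray.astype(np.float32)
    p = saliency.astype(np.float32)
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_I, mean_p = cv2.blur(I, ksize), cv2.blur(p, ksize)
    corr_Ip, corr_II = cv2.blur(I * p, ksize), cv2.blur(I * I, ksize)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                    # local linear coefficients
    b = mean_p - a * mean_I
    return cv2.blur(a, ksize) * I + cv2.blur(b, ksize)
```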

