Similar Literature
20 similar documents retrieved (search time: 749 ms)
1.
Image saliency detection identifies the visually salient regions of an image and is an active research topic in computer vision. This paper proposes an image saliency detection method that combines color features with contrast features. First, a color function is constructed in HSV space to extract the image's color features. The image is then pre-segmented with the SLIC superpixel algorithm, and saliency is computed from the contrast between superpixel blocks. Finally, the saliency map fusing the color and contrast features is refined by guided filtering to produce the final map. The method is evaluated on the public MSRA-1000 dataset against six other algorithms. Experimental results show that, by combining pixel-level and superpixel-block information, the proposed method recovers more complete salient-region contours and outperforms the compared methods.
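The block-contrast step described above can be illustrated with a minimal numpy sketch. A fixed coarse grid stands in for SLIC superpixels, and the HSV color function and guided-filter refinement are omitted; all names and values are illustrative, not the paper's implementation:

```python
import numpy as np

def block_contrast_saliency(img, grid=4):
    """Global color contrast over a coarse grid of blocks
    (the grid stands in for SLIC superpixels in this sketch)."""
    h, w, _ = img.shape
    bh, bw = h // grid, w // grid
    means = np.array([
        img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].reshape(-1, 3).mean(axis=0)
        for i in range(grid) for j in range(grid)
    ])
    # saliency of a block = summed color distance to all other blocks
    dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    s = dist.sum(axis=1)
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)
    return s.reshape(grid, grid)

# toy image: gray background with one red patch in the top-left block
img = np.full((64, 64, 3), 0.5)
img[:16, :16] = [1.0, 0.0, 0.0]
sal = block_contrast_saliency(img)   # the red block dominates the map
```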

2.
As the human visual system is highly sensitive to motion in a scene, motion saliency is an important feature of a video sequence. Motion information is used in video compression, object segmentation, object tracking and many other applications. Despite these extensive applications, accurately detecting motion in a given video is complex, and the solutions reported in the literature are computationally expensive. Decomposing a video into a visually similar video and a residual video is a robust way to detect motion-salient regions, but existing decomposition techniques require long execution times because the standard form of the problem is NP-hard. We propose a novel algorithm that detects motion-salient regions by decomposing the input video into background and residual videos in much less time, without sacrificing decomposition accuracy. In addition, the proposed algorithm is fully parallelizable, ensuring further reductions in computation time on modern multicore processors.
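The background-plus-residual idea can be sketched with a much simpler stand-in than the paper's decomposition: a per-pixel temporal median as the background and the absolute deviation as the motion residual. This is only illustrative of the decomposition, not the proposed algorithm:

```python
import numpy as np

def decompose(video):
    """Split a video (T, H, W) into a static background estimate and a
    residual that highlights motion-salient pixels."""
    background = np.median(video, axis=0)         # per-pixel temporal median
    residual = np.abs(video - background[None])   # deviation from background
    return background, residual

# synthetic clip: black scene with a bright dot moving along the diagonal
T, H, W = 8, 16, 16
video = np.zeros((T, H, W))
for t in range(T):
    video[t, t, t] = 1.0
background, residual = decompose(video)  # residual keeps only the moving dot
```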

3.
Color is the most informative low-level feature and can convey a great deal of saliency information about an image, yet it is seldom fully exploited in previous saliency models. Motivated by three basic properties of a salient object (a center-biased spatial distribution, high color contrast against the surroundings, and a compact color distribution), we design a comprehensive salient-object detection system that combines color contrast with color distribution and outputs high-quality saliency maps. The overall pipeline of our unified framework consists of superpixel pre-segmentation, color-contrast and color-distribution computation, combination, and final refinement. In the color-contrast stage, we calculate center-surround color contrast and then apply the distribution prior to select the correct color components. A global saliency-smoothing procedure based on superpixel regions is also introduced; this step alleviates saliency distortion, so the entire object is highlighted uniformly. Finally, a saliency-refinement step removes artifacts and recovers unconnected parts in the combined saliency maps. In visual comparisons, our method produces higher-quality saliency maps that emphasize the whole object while suppressing background clutter. Both qualitative and quantitative experiments show that our approach outperforms 8 state-of-the-art methods, achieving the highest precision rate of 96% (a 3% improvement over the previous best) on one of the most popular datasets. Our saliency maps also enable excellent content-aware image resizing.

4.
This study presents a novel and highly efficient superpixel algorithm, depth-fused adaptive superpixel (DFASP), which generates accurate superpixels in degraded images. In many applications, particularly real scenes, visual degradation such as motion blur, overexposure, and underexposure often occurs. Well-known color-based superpixel algorithms cannot produce accurate superpixels in degraded images because degradation makes the color information ambiguous. To eliminate this ambiguity, we use both depth and color information, mapping them into a high-dimensional feature space, and develop a fast multilevel clustering algorithm to produce superpixels. Furthermore, an adaptive mechanism automatically adjusts the weighting of color and depth information during pixel clustering. Experimental results demonstrate that in terms of boundary recall, undersegmentation error, run time, and achievable segmentation accuracy, DFASP outperforms state-of-the-art superpixel methods.
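The benefit of clustering in a joint color-and-depth feature space can be shown with a toy example: plain Lloyd's k-means (a simple stand-in for the paper's fast multilevel clustering) separates two surfaces of identical color once depth is included as a feature. The data and parameters below are made up for illustration:

```python
import numpy as np

def cluster(features, k=2, iters=10):
    """Plain Lloyd's k-means in the joint feature space; deterministic
    initialization with evenly spaced sample points."""
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([features[labels == c].mean(axis=0)
                            for c in range(k)])
    return labels

# two surfaces with identical color but different depth:
# color alone cannot separate them, color+depth can
color = np.full((100, 1), 0.5)
depth = np.concatenate([np.full(50, 1.0), np.full(50, 5.0)])[:, None]
feats = np.hstack([color, depth])
labels = cluster(feats, k=2)
```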

5.
Visual moving-target tracking algorithm with two-layer feature optimization   Cited: 4 times (self-citations: 4, other: 0)
Moving-target tracking in visual surveillance is easily disturbed by occlusion, fast target motion, and appearance changes, and single-layer features cannot handle these problems effectively. This paper therefore proposes a visual tracking algorithm that jointly optimizes pixel-level and region-level features. First, at the pixel level, the posterior probabilities of the color features of the target and background regions provide a preliminary target/background discrimination. The candidate region is then segmented into superpixels, and, guided by the pixel-level result, a voting decision model statistically analyzes the target and background information within each superpixel region to obtain an accurate distribution of target locations. Finally, mean-shift iterative search determines the exact target position; the two-layer discrimination results are used to detect occlusion during tracking, while the target and background models are updated dynamically to adapt to appearance and scene changes. Comparative experiments against representative algorithms show that the proposed method effectively handles occlusion and fast motion and is suitable for real-time moving-target tracking in complex scenes.
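The pixel-level stage, a posterior probability from target and background color models, can be sketched as a Bayes ratio over quantized color histograms. The 1-D "color" bins and histogram values below are toy assumptions, not the paper's model:

```python
import numpy as np

def posterior_map(frame, fg_hist, bg_hist, prior_fg=0.5):
    """Per-pixel P(target | color) from quantized color histograms of the
    target (fg) and background (bg) regions."""
    p_fg = fg_hist[frame] * prior_fg
    p_bg = bg_hist[frame] * (1 - prior_fg)
    return p_fg / (p_fg + p_bg + 1e-12)

# 8-bin toy color model: target is mostly bin 6, background mostly bin 1
fg_hist = np.array([.01, .01, .01, .01, .01, .01, .90, .04])
bg_hist = np.array([.04, .90, .01, .01, .01, .01, .01, .01])
frame = np.array([[1, 1, 6],
                  [1, 6, 6]])       # quantized colors of a 2x3 frame
post = posterior_map(frame, fg_hist, bg_hist)  # high where the target is
```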

6.
马龙  王鲁平  李飚  沈振康 《信号处理》2010,26(12):1825-1832
This paper proposes a visual-attention-driven motion detection method based on chaos analysis (MDSA). MDSA first extracts salient regions of the image using a visual attention mechanism and then applies chaos analysis to those regions to detect moving targets. The procedure is as follows: multiple visually sensitive low-level image features are extracted from the scene image; following feature-integration theory, these features are fused into a saliency map reflecting the visual saliency of every location in the scene image; chaos analysis is then applied to the salient region containing the most salient location; following the proximity-first and inhibition-of-return principles, the next most salient region is extracted and analyzed, until all salient regions have been traversed. To reduce computation, the traditional salient-region extraction method is modified in two ways: the local standard deviation of a neighborhood replaces the center-surround operator in evaluating local saliency, and salient-point clustering replaces the scale-saliency criterion in extracting salient regions. The chaos analysis first tests whether the joint histogram (JH) of each salient region exhibits chaotic behavior, then classifies the scatter points of a chaotic JH by fractal dimension against a fixed threshold, and finally maps the classification back to the salient region to achieve motion segmentation. MDSA achieves good motion segmentation and noise robustness; comparative experiments and a cost analysis show that MDSA outperforms the mosaic-based motion detection method (MDM).

7.
To address the problem that saliency detection methods cannot effectively suppress complex backgrounds and hence fail to detect targets accurately, this paper proposes a multi-scale Bayesian saliency detection method with superpixel content-aware priors. First, the image is segmented into superpixel maps at multiple scales, and at each scale a content-aware contrast prior, a center-location prior, and a boundary-connectivity background prior are introduced to compute single-scale saliency. Next, the content-aware prior saliency values from all scales are fused into a coarse saliency map. This coarse map then serves as the prior probability, the observation likelihood is computed from color histograms and a convex-hull center prior, and a multi-scale Bayesian model produces the final salient object. Finally, comparative experiments on 3 public datasets with 5 evaluation metrics against 7 existing methods show that the proposed method performs better at salient-object detection.
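The Bayesian refinement step, with the coarse saliency map as prior and color likelihoods as observations, can be sketched in a few lines. The maps below are made-up 2x2 examples; they only show how agreeing evidence sharpens the prior and conflicting evidence suppresses it:

```python
import numpy as np

def bayes_saliency(prior, like_sal, like_bg):
    """Bayesian refinement: coarse saliency map as the prior, combined
    with per-pixel likelihoods of the salient / background color models."""
    num = prior * like_sal
    return num / (num + (1 - prior) * like_bg + 1e-12)

prior  = np.array([[0.8, 0.2],
                   [0.7, 0.1]])   # coarse multi-scale saliency
like_s = np.array([[0.9, 0.5],
                   [0.8, 0.3]])   # P(color | salient)
like_b = np.array([[0.1, 0.5],
                   [0.2, 0.9]])   # P(color | background)
post = bayes_saliency(prior, like_s, like_b)
```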

8.
A bit-allocation method for multi-view video coding based on stereoscopic visual saliency   Cited: 1 time (self-citations: 1, other: 0)
For multi-view stereoscopic video coding, this paper proposes a bit-allocation method based on stereoscopic visual saliency. A region-of-interest (ROI) extraction scheme is studied that jointly exploits the motion, depth, and depth-edge information in multi-view stereoscopic video data; regional bit allocation is then optimized according to the ROI partition. Experimental results show that the proposed algorithm effectively improves coding performance in the ROI while also improving the rate-distortion performance of the whole video to some extent.

9.
In this study, a spatiotemporal saliency detection and salient-region determination approach for H.264 videos is proposed. After Gaussian filtering in Lab color space, the phase spectrum of the Fourier transform is used to generate the spatial saliency map of each video frame. In parallel, the motion-vector fields from the H.264 compressed video bitstream are backward accumulated; after normalization and global motion compensation, the phase spectrum of the Fourier transform of the moving parts generates the temporal saliency map of each frame. The spatial and temporal saliency maps of each frame are then combined by adaptive fusion into a spatiotemporal saliency map. Finally, a modified salient-region determination scheme identifies the salient regions (SRs) of each video frame. Experimental results show that the proposed approach outperforms two comparison approaches.
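The phase-spectrum saliency idea named above can be sketched directly with numpy's FFT: keep only the phase of the spectrum, invert, and square. This sketch covers only the core transform; the Lab conversion, Gaussian prefiltering, and final smoothing from the abstract are omitted:

```python
import numpy as np

def phase_saliency(img):
    """Phase-spectrum-of-Fourier-transform saliency: discard magnitude,
    keep phase, invert, and square; the residual energy marks salient
    (spectrally irregular) structure."""
    f = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(f))
    sal = np.abs(np.fft.ifft2(phase_only)) ** 2
    return sal / (sal.max() + 1e-12)

# flat scene with one distinct pixel: the map should peak at that pixel
img = np.zeros((32, 32))
img[10, 20] = 1.0
sal = phase_saliency(img)
```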

10.
Saliency maps computed from input images have recently been used to detect interesting regions in images and videos and to focus processing on these salient regions. This paper introduces a novel macroblock-level, visual-saliency-guided video compression algorithm, modelled as a two-step process: salient-region detection and frame foveation. Visual saliency is modelled as a combination of low-level features and high-level features that become important in the higher visual cortex. A relevance vector machine is trained on 3-dimensional feature vectors of global, local, and rarity measures of conspicuity to yield the probabilistic values that form the saliency map. These saliency values drive non-uniform bit allocation over video frames. To achieve these goals, we also propose a novel saliency-aware video compression architecture that saves a tremendous amount of computation: it thresholds the mutual information between successive frames to flag frames requiring re-computation of saliency, and it uses motion vectors to propagate saliency values.
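The mutual-information test between successive frames can be sketched with a joint histogram: identical frames score high MI (saliency can be propagated), unrelated frames score near zero (recomputation is flagged). Bin count and any threshold are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Plug-in MI estimate between two frames (values in [0, 1]) from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 1], [0, 1]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz]
                  * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
same_mi = mutual_information(frame, frame)                 # identical frames
diff_mi = mutual_information(frame, rng.random((64, 64)))  # unrelated frames
```

A frame whose MI with its predecessor falls below a chosen threshold would be flagged for saliency re-computation.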

11.
吴岳洲 《光电子.激光》2009,(12):1626-1630
Because foreground (FG) objects and their moving shadows are difficult to separate correctly in video analysis, this paper proposes a shadow segmentation method based on the HSV color-space characteristics of shadows and a Gabor filter bank. First, moving targets are extracted with a motion detection method designed for complex backgrounds (BG). Second, candidate shadow regions are identified preliminarily from the shadow characteristics in HSV color space. Finally, a region-of-interest (ROI) Gabor filter bank is designed to screen the candidate shadow regions and detect the shadows. Tests on video sequences under different illumination and environmental conditions show that the method performs well and achieves a high shadow detection rate, making it applicable to target detection in intelligent video surveillance.
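The HSV shadow characteristic used for the preliminary screening is commonly expressed as: a shadow darkens V by a bounded ratio while leaving H and S nearly unchanged. The sketch below uses illustrative thresholds (not values from the paper) and omits the Gabor screening stage:

```python
import numpy as np

def shadow_mask(hsv, bg_hsv, alpha=0.4, beta=0.9, ts=0.15, th=0.1):
    """Classic HSV shadow test against a background model: V ratio within
    [alpha, beta], S and H nearly unchanged (thresholds illustrative)."""
    ratio = hsv[..., 2] / (bg_hsv[..., 2] + 1e-9)
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(hsv[..., 1] - bg_hsv[..., 1]) <= ts)
            & (np.abs(hsv[..., 0] - bg_hsv[..., 0]) <= th))

bg  = np.array([[[0.3, 0.5, 0.8], [0.3, 0.5, 0.8]]])  # H, S, V in [0, 1]
obs = np.array([[[0.3, 0.5, 0.5], [0.8, 0.9, 0.8]]])  # shadow | true object
mask = shadow_mask(obs, bg)   # True only for the darkened, same-hue pixel
```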

12.
In this paper, we propose a novel multi-graph-based method for salient object detection in natural images. Starting from image decomposition via a superpixel generation algorithm, we use color, spatial position, and background labels to calculate the edge-weight matrices of the graphs. Treating superpixels as nodes and region similarities as edge weights, we create local, global, and high-contrast graphs. An integration technique is then applied to form the saliency maps from the degree vectors of the graphs. Extensive experiments on three challenging datasets show that the proposed unsupervised method outperforms several state-of-the-art unsupervised methods.
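One way to read "saliency from degree vectors" is that a region dissimilar to everything else accumulates little similarity weight, so a low degree marks a salient node. That interpretation, the Gaussian edge weights, and all values below are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def degree_saliency(colors, sigma=0.5):
    """Single-graph sketch: edge weight = Gaussian color similarity between
    'superpixel' mean colors; low total degree -> high saliency."""
    d = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    degree = w.sum(axis=1)
    sal = degree.max() - degree          # invert: dissimilar node stands out
    return sal / (sal.max() + 1e-12)

# five 'superpixel' mean colors: four similar greens, one red outlier
colors = np.array([[0.10, 0.80, 0.10], [0.12, 0.78, 0.10],
                   [0.10, 0.82, 0.12], [0.11, 0.80, 0.09],
                   [0.90, 0.10, 0.10]])
sal = degree_saliency(colors)   # the red outlier gets the top score
```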

13.
Object tracking is a very attractive research topic in computer vision and image processing. In this paper, an innovative method called the salient-sparse-collaborative tracker (SSCT) is put forward, which exploits both object saliency and sparse representation. Within the proposed collaborative appearance model, the object's salient feature map is built to create a salient-sparse discriminative model (SSDM) and a salient-sparse generative model (SSGM). In the SSDM module, the sparse model uses the salient feature map to distinguish the target region from its background effectively, which further helps locate the object in complex environments. In the SSGM module, a sparse representation method with the salient feature map is designed to improve the effectiveness of the templates and to handle occlusions. The update scheme takes advantage of salient correction, so the SSCT algorithm can handle appearance variation while effectively reducing tracking drift. Extensive quantitative and qualitative benchmark comparisons show that the SSCT tracker is more competitive than several popular approaches.

14.
Detection of salient objects in images and video is of great importance in many computer vision applications. Although the state of the art in saliency detection for still images has changed substantially over the last few years, there have been few improvements in video saliency detection. This paper proposes a novel deep non-local fully convolutional network architecture for video salient-object detection that captures global dependencies more efficiently, investigating the use of recently introduced non-local neural networks in this task. The effect of non-local operations is studied separately on static and dynamic saliency detection in order to exploit both appearance and motion features. The architecture is tested on two well-known datasets, DAVIS and FBMS, and the experimental results show that the proposed algorithm outperforms state-of-the-art video saliency detection methods.

15.
夏思珂  雷志勇 《光电子.激光》2021,32(12):1300-1306
Image features extracted for retrieval are often disturbed by background information, so the desired image content cannot be captured selectively, which hurts retrieval precision. To solve this problem, this paper proposes a salient-region image retrieval algorithm based on an improved VGGNet (visual geometry group network) and an ant colony algorithm. First, class activation mapping (CAM) extracts the salient region of the image, removing background information. A trained RS-VGG16 model then extracts features of the salient region to represent the image, and principal component analysis (PCA) is introduced to reduce the dimensionality of the high-dimensional features while minimizing the loss of feature information. Finally, an ant colony algorithm refines the retrieval results. On the corel_5000 dataset, the proposed algorithm is compared against a global-feature retrieval algorithm based on VGG16 and against the traditional BOF (bag of features) retrieval algorithm: it improves mean average precision (MAP) by about 4.36% on average over the VGG16-based algorithm and by about 16.99% over BOF. The experimental results show that the proposed algorithm removes the interference of background information well and achieves better retrieval performance.
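The PCA step can be sketched as an SVD of the centered feature matrix, projecting onto the top components. The synthetic "descriptors" below (a few informative dimensions buried in noise) stand in for RS-VGG16 features; the dimensions and counts are made up:

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA via SVD of the centered feature matrix: project each feature
    vector onto the top principal directions."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# 200 synthetic 'descriptors': 3 informative dimensions inside 64-D noise
latent = rng.normal(size=(200, 3)) * [10.0, 5.0, 2.0]
X = np.hstack([latent, rng.normal(size=(200, 61)) * 0.1])
Z = pca_reduce(X, 3)   # 64-D -> 3-D with almost no variance lost
```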

16.
A compressed-domain video saliency detection algorithm that employs global and local spatiotemporal (GLST) features is proposed in this work. We first partially decode a compressed video bitstream to obtain motion vectors and DCT coefficients, from which the GLST features are extracted. More specifically, we extract the spatial features of rarity, compactness, and center prior from the DC coefficients by investigating the global color distribution in a frame, and we extract the spatial feature of texture contrast from the AC coefficients to identify regions whose local textures are distinct from those of neighboring regions. Moreover, we use the temporal features of motion intensity and motion contrast to detect visually important motions. We then generate spatial and temporal saliency maps, respectively, by linearly combining the spatial features and the temporal features. Finally, we fuse the two saliency maps into a spatiotemporal saliency map adaptively by comparing the robustness of the spatial features with that of the temporal features. Experimental results demonstrate that the proposed algorithm provides excellent saliency detection performance at low complexity, allowing detection in real time.

17.
Dense-trajectory methods have recently proved successful at recognizing actions in realistic videos. However, their performance is still limited by uniform dense sampling, which does not discriminate between action-related areas and background. This paper improves dense trajectories for recognizing actions captured in realistic scenes, especially in the presence of camera motion. First, based on the observation that motion in action-related areas is usually much more irregular than the camera motion in the background, we recover the salient regions of a video by applying low-rank matrix decomposition to the motion information and use the resulting saliency maps to indicate action-related areas. Because action-related regions change over time but remain temporally continuous, we split a video into subvideos and compute the salient regions subvideo by subvideo. In addition, to ensure spatial continuity, we spatially divide each subvideo into patches and collect the vectorized optical flow of all patches as the motion information for salient-region detection. Once the saliency maps of all subvideos are obtained, we incorporate them into dense tracking to extract saliency-based dense trajectories that describe actions. Experiments on four benchmark datasets, namely Hollywood2, YouTube, HMDB51 and UCF101, show that the performance of our method is competitive with the state of the art.

18.
To address the lack of subspace-information mining and the inaccurate inter-node propagation in current manifold-ranking saliency detection algorithms, this paper proposes an image saliency detection algorithm based on a low-rank background constraint and multi-cue propagation. Low-level visual priors such as color, position, and boundary connectivity are fused into a high-level background prior that constrains the decomposition of the image feature matrix, strengthening the difference between the low-rank and sparse components and fully describing the subspace structure, so that foreground and background are separated effectively. Cues such as sparsity awareness and local smoothness improve the construction of the propagation matrix, enhancing the propagation ability of nodes whose color features occur with low probability and strengthening the association between nodes within local regions, so that node attributes are highlighted accurately and compact, connected salient regions are obtained. Experiments on 3 benchmark datasets and an application to image retrieval demonstrate the effectiveness and robustness of the algorithm.
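The manifold-ranking propagation that this work builds on has a classical closed form: query scores y are diffused over the graph as f = (I - alpha*S)^(-1) y, with S the symmetrically normalized affinity matrix. The sketch below shows only this baseline propagation on a toy chain graph (a small alpha is used so the decay is visible on five nodes); it does not include the paper's low-rank constraint or multi-cue weighting:

```python
import numpy as np

def manifold_rank(W, y, alpha=0.5):
    """Closed-form manifold ranking: f = (I - alpha*S)^(-1) y, where
    S = D^(-1/2) W D^(-1/2) is the normalized affinity matrix."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

# chain graph 0-1-2-3-4 with a query at node 0:
# ranking scores should decay with graph distance from the query
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
f = manifold_rank(W, y)
```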

