Similar Documents
1.
In recent years, research on human action recognition from visible-light video sequences has made considerable progress. Because such data sources are easily affected by target color, illumination intensity, and background clutter, depth information is applied to human action recognition instead. This paper first adopts a local representation of human motion based on spatio-temporal interest points, implementing both the Harris detector and a Gabor-filter-based spatio-temporal interest point (STIP) detector on depth data. The corresponding detections are then described as cuboids, from which depth cuboid similarity features (DCSF) are extracted. Finally, a support vector machine (SVM) action classifier built on a spatio-temporal codebook completes the action classification. Experiments show that the Gabor-filter-based detector achieves better recognition performance on the depth dataset.
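A minimal sketch of the codebook-plus-SVM classification stage described above, assuming the local depth descriptors (e.g. DCSF vectors around detected STIPs) have already been extracted per video; the function names and parameters are illustrative, not from the paper.

```python
# Sketch: spatio-temporal codebook + SVM action classifier (scikit-learn).
# Assumes each video is already represented by an array of local descriptors
# (e.g. DCSF vectors extracted around detected STIPs).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptor_sets, n_words=200, seed=0):
    """Cluster all local descriptors into a visual codebook."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_desc)

def bow_histogram(descriptors, codebook):
    """Quantize a video's descriptors and return a normalized word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_action_classifier(train_descriptor_sets, train_labels, n_words=200):
    codebook = build_codebook(train_descriptor_sets, n_words)
    X = np.array([bow_histogram(d, codebook) for d in train_descriptor_sets])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, train_labels)
    return codebook, clf
```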

2.
《现代电子技术》2017,(19):86-90
To overcome the tracking drift and low efficiency of conventional Kalman-filter-based trajectory tracking for rhythmic gymnastics, this paper studies a computer-vision-based trajectory tracking method. The ViBe moving-target detection algorithm models the color and depth information of the image, and moving targets are detected from fluctuations in color and depth. The KCF algorithm provides initial tracking of the moving target; on this basis, an improved KCF algorithm resolves the tracking drift that occurs when the target is occluded, improving tracking accuracy and stability. The target centroid is then processed with Hermite interpolation, an instantaneous centroid trajectory is obtained from the motion-blur direction at time t to yield the best centroid trajectory, and curve fitting produces an accurate centroid trajectory. Experimental results show that the proposed method accurately tracks rhythmic gymnastics trajectories with high efficiency and stability.
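A minimal sketch of the trajectory refinement step (Hermite interpolation of the tracked centroid followed by least-squares curve fitting); the spline construction from finite-difference velocities, the polynomial degree, and the sampling density are illustrative assumptions.

```python
# Sketch: smooth a tracked centroid trajectory with Hermite interpolation
# followed by least-squares curve fitting (illustrative parameters).
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def refine_centroid_trajectory(times, centroids, poly_degree=4, n_samples=200):
    """times: (N,) increasing frame timestamps; centroids: (N, 2) tracked (x, y) centroids."""
    times = np.asarray(times, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    # Finite-difference velocities serve as the Hermite derivative constraints.
    velocities = np.gradient(centroids, times, axis=0)
    spline = CubicHermiteSpline(times, centroids, velocities, axis=0)
    dense_t = np.linspace(times[0], times[-1], n_samples)
    dense_xy = spline(dense_t)
    # Final least-squares polynomial fit gives the smoothed trajectory.
    fit_x = np.polyfit(dense_t, dense_xy[:, 0], poly_degree)
    fit_y = np.polyfit(dense_t, dense_xy[:, 1], poly_degree)
    return np.column_stack([np.polyval(fit_x, dense_t),
                            np.polyval(fit_y, dense_t)])
```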

3.
Object detection and tracking are essential for many unmanned surface vehicle (USV) tasks such as navigation and obstacle avoidance, but the water-surface environment is complex, with target scale changes, occlusion, illumination variation, and camera shake. This paper proposes a spatio-temporal information fusion method for visual detection and tracking of surface targets from a USV: spatially, deep-learning detection extracts deep semantic features from single frames; temporally, correlation-filter tracking computes the correlation of histogram-of-oriented-gradient features between frames. Spatial and temporal information is fused through feature comparison, achieving continuous and stable detection and tracking of surface targets while balancing real-time performance and robustness. Experimental results show that the algorithm attains relatively high average detection speed and accuracy, with a detection-and-tracking precision of 0.83 at 15 fps.
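A minimal sketch of one way to fuse the per-frame detector with the correlation-filter track by comparing their bounding boxes; the IoU threshold and the detector/tracker interfaces are placeholders rather than the paper's exact fusion rule.

```python
# Sketch: fuse per-frame detections with a correlation-filter track by
# comparing bounding boxes; thresholds and the detector/tracker interfaces
# are illustrative placeholders, not the paper's exact scheme.
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def fuse(track_box, det_box, reinit_tracker, iou_thresh=0.4):
    """Keep the track while it agrees with the detector; otherwise re-seed it."""
    if det_box is None:                 # detector missed: trust the tracker
        return track_box
    if track_box is None or iou(track_box, det_box) < iou_thresh:
        reinit_tracker(det_box)         # large disagreement: re-initialize
        return det_box
    return track_box                    # agreement: temporal track wins
```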

5.
王然 《电子质量》2011,(12):7-10
In moving-target detection, the results obtained with the conventional Gaussian mixture background model do not capture the contour of a moving target well, whereas the gradient information of image pixels reflects precisely the contours and boundaries of objects and, compared with color information, is insensitive to noise. This paper therefore improves the conventional Gaussian mixture background model, proposing a Gaussian mixture background model based on spatio-temporal gradient information, and demonstrates that the improved algorithm can indeed ...
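A minimal sketch of adding gradient information to Gaussian-mixture background subtraction, using OpenCV's stock MOG2 subtractor on both the color frame and its gradient magnitude; this is a simplification, not the paper's modified model.

```python
# Sketch: add gradient information to Gaussian-mixture background subtraction
# by running OpenCV's stock MOG2 model on both the color frame and its
# gradient magnitude, then OR-ing the masks (a simplification of the paper's
# modified model).
import cv2
import numpy as np

bg_color = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
bg_grad = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)

def foreground_mask(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # 8-bit gradient image
    mask_color = bg_color.apply(frame_bgr)
    mask_grad = bg_grad.apply(grad)
    return cv2.bitwise_or(mask_color, mask_grad)
```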

6.
Determining the next best viewpoint is a difficult problem in computer vision. This paper proposes a method that determines the next best viewpoint from a depth image of the visual target using occlusion and contour information. The method first performs occlusion detection on the depth image acquired at the current viewpoint. Unknown regions are then constructed from the occlusion-detection result and the target contour, and each unknown region is modeled by a triangulation-like decomposition. An objective function is built from the midpoint, normal vector, and area of each resulting triangle, and the next best viewpoint is obtained by optimizing this objective function. Experimental results show that the proposed method is feasible and effective.

7.
Compared with a single radar, a spatially distributed networked radar system usually achieves better detection performance thanks to spatial and frequency diversity. Most existing fusion detection methods for networked radar rely only on target echo amplitude and ignore the gain that the Doppler information available to coherent radars can bring to fusion detection. Intuitively, the target positions and radial velocities observed by different radars in the network must satisfy certain physical constraints, and exploiting this extra information should improve the separability of targets from false alarms. On this basis, this paper proposes a Doppler-information-aided fusion detection algorithm for networked radar: the coupling between the angle and Doppler-velocity measurements of the same target at multiple radar sites is used to build a system of constraint inequalities that the observations must satisfy; the two-phase method from operations research then determines whether this system has a feasible solution, and the presence of a target is declared accordingly. Simulations show that the proposed algorithm effectively improves the fusion detection performance of the networked radar system. The paper also analyzes how radar site placement and target position affect fusion detection performance.
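A minimal sketch of the feasibility test at the heart of the algorithm, assuming the cross-radar angle/Doppler coupling constraints have been written (or linearized) as A x ≤ b; SciPy's linprog with a zero objective plays the role of the two-phase feasibility search.

```python
# Sketch: declare a target present only if the cross-radar constraint
# inequalities A x <= b admit a feasible solution (assumes the coupling
# constraints have already been linearized; a zero objective turns the LP
# into a pure feasibility problem). Requires SciPy >= 1.6 for method="highs".
import numpy as np
from scipy.optimize import linprog

def constraints_feasible(A, b):
    """Return True if {x : A x <= b} is non-empty."""
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0          # 0 = optimal (feasible), 2 = infeasible

def fuse_and_detect(A, b, amplitude_test_passed):
    """Combine the usual amplitude test with the Doppler/position consistency check."""
    return amplitude_test_passed and constraints_feasible(A, b)
```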

8.
To address the difficulty of depth localization of targets and of imaging in complex underwater environments during underwater teleoperation, an information acquisition system based on a camera array is used together with a depth-based integral-imaging 3D reconstruction algorithm and an adjacent-pixel gray-level variance evaluation function, so that a single processing pass yields both the depth estimate and a clear image of an occluded target. A CCD camera automatic scanning-array imaging system was built with LabVIEW, and imaging and depth-reconstruction experiments were performed on an underwater scene containing occluders; the errors between the reconstructed and actual depths of three independent targets at different depths were 0.21%, 1.26%, and 1.34%. The results show that the method removes occluders and obtains the target depth with a single reconstruction.

9.
This paper proposes a multi-source-information-aided integrated detection and tracking method for bistatic radar. Prior information acquired during tracking, such as target position and echo amplitude, is used to design the detection threshold inside the tracking gate, with the aim of improving detection and tracking performance. Based on the available prior information on target position, the conventional likelihood-ratio detector is first modified under the probabilistic data association (PDA) framework according to the Bayesian minimum-error criterion. To further improve the detection of weak targets, a track-termination criterion is introduced to relax the threshold-setting rule, and the average false-alarm probability and detection probability inside the tracking gate are computed. Finally, the association-probability computation of the PDA algorithm under multi-source information aiding is re-derived, the complete algorithm flow is given, and simulations verify the feasibility and effectiveness of the algorithm.
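For reference, the textbook PDA association weights under a Gaussian measurement model and uniform clutter are sketched below; the paper's multi-information-modified detector and relaxed thresholding are not reproduced, and PD, PG, and the clutter density are illustrative.

```python
# Sketch: textbook PDA association probabilities for the validated measurements
# in a tracking gate (Gaussian measurement model, uniform clutter). This is the
# standard formula, not the paper's multi-information-modified detector.
import numpy as np

def pda_association_probs(innovations, S, pd=0.9, pg=0.997, clutter_density=1e-4):
    """innovations: (m, d) residuals z_i - z_pred; S: (d, d) innovation covariance.
    Returns (beta_0, beta_1..m): probability that no measurement is target-originated,
    and the probability of each validated measurement being target-originated."""
    innovations = np.atleast_2d(innovations)
    d = S.shape[0]
    S_inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    # Likelihood ratio of each validated measurement vs. clutter.
    L = np.array([pd * norm * np.exp(-0.5 * nu @ S_inv @ nu) / clutter_density
                  for nu in innovations])
    denom = 1.0 - pd * pg + L.sum()
    return (1.0 - pd * pg) / denom, L / denom
```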

10.
刘强 《电视技术》2012,36(1):126-128
To address the difficulty of flame detection in complex scenes, a flame detection algorithm combining color, motion, and statistical information is proposed. The video is first converted to the YCrCb color space and candidate flame regions are segmented using color information; temporal filtering then extracts the moving regions of the video, the color and motion results are analyzed statistically, and the statistic determines whether a flame is present and where it is located. Experimental results show that the method is real-time and adaptive, can quickly and accurately decide whether a flame exists against a complex background, and can handle camera motion.
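A minimal sketch of the color-plus-motion candidate extraction; the YCrCb rule used here is a common flame-color heuristic standing in for the paper's exact criterion, and the thresholds are illustrative.

```python
# Sketch: candidate flame pixels from a YCrCb color rule combined with a
# simple temporal (frame-difference) motion mask. The color rule is a common
# flame heuristic used as a stand-in for the paper's exact criterion.
import cv2
import numpy as np

def flame_candidates(frame_bgr, prev_gray, motion_thresh=20):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    color_mask = ((y > cb) & (cr > cb) & (y > np.mean(y))).astype(np.uint8) * 255

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion_mask = (cv2.absdiff(gray, prev_gray) > motion_thresh).astype(np.uint8) * 255

    # Pixels must look like flame AND be moving to count toward the statistic.
    combined = cv2.bitwise_and(color_mask, motion_mask)
    return combined, gray   # caller accumulates `combined` over time for the decision
```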

11.
This paper presents an effective method for the detection and tracking of multiple moving objects from a video sequence captured by a moving camera without additional sensors. Moving object detection is relatively difficult for video captured by a moving camera, since camera motion and object motion are mixed. In the proposed method, the feature points in the frames are found and then classified as belonging to foreground or background features. Next, moving object regions are obtained using an integration scheme based on foreground feature points and foreground regions, which are obtained using an image difference scheme. Then, a compensation scheme based on the motion history of the continuous motion contours obtained from three consecutive frames is applied to increase the regions of moving objects. Moving objects are detected using a refinement scheme and a minimum bounding box. Finally, moving object tracking is achieved using a Kalman filter based on the center of gravity of a moving object region in the minimum bounding box. Experimental results show that the proposed method has good performance.
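A minimal sketch of the final tracking stage (a constant-velocity Kalman filter over the centroid of the detected bounding box); the noise covariances are illustrative.

```python
# Sketch: constant-velocity Kalman filter over the moving object's centroid
# (state = [x, y, vx, vy], measurement = [x, y]); noise levels are illustrative.
import cv2
import numpy as np

def make_centroid_kf(dt=1.0):
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1,  0],
                                    [0, 0, 0,  1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_step(kf, centroid_xy):
    """Predict, then correct with the centroid measured from the bounding box."""
    prediction = kf.predict()
    if centroid_xy is not None:
        kf.correct(np.array(centroid_xy, dtype=np.float32).reshape(2, 1))
    return prediction[:2].ravel()   # predicted (x, y)
```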

12.
Low cost RGB-D cameras such as Microsoft's Kinect or Asus's Xtion Pro are changing the computer vision world, as they are being successfully used in several applications and research areas. Depth data are particularly attractive and suitable for applications based on moving object detection through foreground/background segmentation approaches; the RGB-D applications proposed in the literature generally employ state-of-the-art foreground/background segmentation techniques based on depth information without taking the color information into account. The novel approach that we propose is based on a combination of classifiers that improves background subtraction accuracy with respect to state-of-the-art algorithms by jointly considering color and depth data. In particular, the combination of classifiers uses a weighted average that adaptively modifies the support of each classifier in the ensemble by considering foreground detections in the previous frames and the depth and color edges. In this way, it is possible to reduce false detections due to critical issues that cannot be tackled by the individual classifiers, such as shadows and illumination changes, color and depth camouflage, moved background objects and noisy depth measurements. Moreover, we propose, to the best of the authors' knowledge, the first publicly available RGB-D benchmark dataset with hand-labeled ground truth of several challenging scenarios to test background/foreground segmentation algorithms.
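A minimal sketch of combining a color-based and a depth-based subtractor by weighted vote; the paper adapts the per-pixel weights from previous detections and from color/depth edges, whereas fixed weights are used here as a simplification.

```python
# Sketch: combine a color-based and a depth-based background subtractor with a
# weighted vote. The paper adapts the weights per pixel from past detections and
# from color/depth edges; fixed weights are used here as a simplification.
import cv2
import numpy as np

bg_rgb = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
bg_depth = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def rgbd_foreground(frame_bgr, depth_8u, w_rgb=0.5, w_depth=0.5, vote_thresh=0.5):
    """depth_8u: depth map scaled to 8-bit; invalid pixels assumed set to 0."""
    fg_rgb = bg_rgb.apply(frame_bgr) == 255        # ignore the shadow label (127)
    fg_depth = bg_depth.apply(depth_8u) == 255
    valid_depth = depth_8u > 0                      # don't let missing depth vote
    score = w_rgb * fg_rgb + w_depth * (fg_depth & valid_depth)
    return (score >= vote_thresh).astype(np.uint8) * 255
```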

13.
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach does not require any training data or interaction from users. Based on the SIFT correspondence across video frames, we construct robust SIFT trajectories in terms of the calculated foreground feature point probability. Our foreground feature point probability is able to determine candidate foreground feature points in each frame, without the need for user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method can be generalized to foreground object detection in dynamic backgrounds, and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, and this is shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use can achieve excellent foreground object detection results on real-world video data.
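A minimal sketch of the trajectory-building ingredient (chaining ratio-tested SIFT matches across consecutive frames); it assumes an OpenCV build that ships cv2.SIFT_create, and the paper's foreground feature point probability is not shown.

```python
# Sketch: build SIFT point trajectories by chaining ratio-tested matches across
# consecutive frames (requires an OpenCV build that ships cv2.SIFT_create).
# The paper's foreground-probability scoring of each trajectory is not shown.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_frames(gray_a, gray_b, ratio=0.75):
    """Return (pt_in_a, pt_in_b) pairs passing Lowe's ratio test."""
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return []
    pairs = []
    for mn in matcher.knnMatch(des_a, des_b, k=2):
        if len(mn) == 2 and mn[0].distance < ratio * mn[1].distance:
            pairs.append((kp_a[mn[0].queryIdx].pt, kp_b[mn[0].trainIdx].pt))
    return pairs
```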

14.
王辉  孙洪 《信号处理》2016,32(12):1425-1434
To address the sensitivity of matrix-decomposition-based moving-target detection to small background disturbances and camera shake in natural scenes, a low-rank sparse matrix decomposition algorithm using the multiscale product is proposed. The algorithm assumes that, in a video sequence with a static background, the background of every frame lies approximately in the same low-rank subspace, while the foreground can be regarded as the residual that deviates from that subspace. The image sequence is first preprocessed with filtering and affine transformation to obtain the observation data matrix; a low-rank sparse decomposition of this matrix then yields the low-rank background of the sequence and the sparse foreground of each frame; finally, target edges are detected in the sparse foreground with the wavelet-transform modulus maxima and multiscale product method, and morphological processing yields an accurate moving target. Experimental results show that the detected moving targets are clear and complete, and that the algorithm handles moving-target detection under illumination changes, slight camera shake, and small local background variations effectively.
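A minimal sketch of the low-rank/sparse split of the frame matrix using a truncated SVD and residual thresholding; this is a simplification of the paper's decomposition, and the wavelet modulus-maxima multiscale-product edge step is not shown.

```python
# Sketch: a simplified low-rank + sparse split of the frame matrix. Each column
# is a vectorized frame; the background is a rank-r SVD approximation and the
# foreground is the thresholded residual. This stands in for the paper's full
# low-rank/sparse decomposition and wavelet multiscale-product edge step.
import numpy as np

def lowrank_sparse_split(frames, rank=1, fg_thresh=25.0):
    """frames: (num_frames, H, W) grayscale stack, float or uint8."""
    n, h, w = frames.shape
    D = frames.reshape(n, h * w).astype(float).T          # pixels x frames
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]            # low-rank background
    S = D - L                                              # residual
    fg_masks = (np.abs(S) > fg_thresh).T.reshape(n, h, w)  # sparse foreground
    background = L.T.reshape(n, h, w)
    return background, fg_masks
```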

15.
Video inpainting under constrained camera motion.
A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, fast, does not require statistical models of background nor foreground, works well in the presence of rich and cluttered backgrounds, and the results show that there is no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.

16.
A scheme based on a difference scheme using object structures and color analysis is proposed for video object segmentation in rainy situations. Since shadows and color reflections on the wet ground pose problems for conventional video object segmentation, the proposed method combines background-construction-based and foreground-extraction-based video object segmentation: pixels of a video sequence are separated into foreground and background using histogram-based change detection, from which the background is constructed, and initial moving-object masks derived from a frame difference mask and a background subtraction mask are then used to obtain coarse object regions. Shadow regions and color-reflection regions on the wet ground are removed from the initial moving object masks via a diamond window mask and color analysis of the moving object. Finally, the boundary of the moving object is refined using connected component labeling and morphological operations. Experimental results show that the proposed method performs well for video object segmentation in rainy situations.
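A minimal sketch of the coarse object-region step (frame-difference mask combined with a background-subtraction mask, then morphology); the rain-specific shadow and reflection removal is not shown and the thresholds are illustrative.

```python
# Sketch: coarse moving-object mask from a frame-difference mask combined with a
# background-subtraction mask, cleaned by morphology (the rain-specific shadow
# and reflection removal of the paper is not shown; thresholds are illustrative).
import cv2
import numpy as np

def coarse_object_mask(gray, prev_gray, background_gray,
                       diff_thresh=15, bg_thresh=25):
    fd = cv2.absdiff(gray, prev_gray) > diff_thresh       # frame-difference mask
    bs = cv2.absdiff(gray, background_gray) > bg_thresh   # background-subtraction mask
    mask = ((fd & bs).astype(np.uint8)) * 255             # agreement -> coarse region
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```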

17.
Unlike 2D saliency detection, 3D saliency detection can consider the effects of depth and binocular parallax. In this paper, we propose a 3D saliency detection approach based on background detection via depth information. With the aid of the synergism between a color image and the corresponding depth map, our approach can detect the distant background and surfaces with gradual changes in depth. We then use the detected background to predict the potential characteristics of the background regions that are occluded by foreground objects through polynomial fitting; this step imitates the human imagination/envisioning process. Finally, a saliency map is obtained based on the contrast between the foreground objects and the potential background. We compare our approach with 14 state-of-the-art saliency detection methods on three publicly available databases. The proposed model demonstrates good performance and succeeds in detecting and removing backgrounds and surfaces of gradually varying depth on all tested databases.
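A minimal sketch of "envisioning" the background occluded by foreground objects via polynomial fitting of the visible background depth; the row-wise fit and the polynomial degree are illustrative choices, not the paper's exact procedure.

```python
# Sketch: "envision" the depth of background hidden behind foreground objects by
# fitting a low-order polynomial to the visible background depth along each row
# (the per-row fit and the polynomial degree are illustrative choices).
import numpy as np

def fill_occluded_background(depth, background_mask, degree=2):
    """depth: (H, W) depth map; background_mask: True where background is visible.
    Returns a depth map with foreground pixels replaced by the fitted background."""
    filled = depth.astype(float).copy()
    cols = np.arange(depth.shape[1])
    for r in range(depth.shape[0]):
        known = background_mask[r]
        if known.sum() <= degree:          # not enough background support in this row
            continue
        coeffs = np.polyfit(cols[known], depth[r, known].astype(float), degree)
        missing = ~known
        filled[r, missing] = np.polyval(coeffs, cols[missing])
    return filled
```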

18.
A novel algorithm to segment foreground from a similarly colored background
Color similarity between the background and the foreground causes most moving object detection algorithms to fail. This paper proposes a novel algorithm designed to segment the foreground from a similarly colored background. Central to this algorithm is that the motion cue of the moving object is useful for foreground modeling. We predict the position of the moving object in the current frame using historical motion information, and then use the prediction information to construct a predictive model. The mixture foreground model is a union of the predictive model and the general foreground model. Final segmentation is obtained by combining a likelihood modification technique and the mixture foreground model. Experimental results on typical sequences show that the proposed algorithm is efficient.
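A minimal sketch of the prediction idea (extrapolating the object's recent positions and turning the prediction into a spatial prior); the constant-velocity assumption and the Gaussian prior are illustrative simplifications of the paper's predictive model.

```python
# Sketch: predict where the object will be in the current frame from its recent
# positions (constant-velocity assumption) and turn the prediction into a simple
# spatial prior; the paper's full mixture foreground model is not reproduced.
import numpy as np

def predict_position(history):
    """history: list of past (x, y) centroids, most recent last."""
    pts = np.asarray(history, dtype=float)
    if len(pts) < 2:
        return pts[-1]
    velocity = pts[-1] - pts[-2]           # last observed displacement
    return pts[-1] + velocity              # extrapolate one step ahead

def predictive_prior(shape, predicted_xy, sigma=20.0):
    """Gaussian spatial prior centered on the predicted position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - predicted_xy[0]) ** 2 + (ys - predicted_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```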

19.
This study presents a danger estimation system to prevent accidents among infants. A video camera positioned above the infant's crib captures video. The proposed system can monitor the behavior of infants aged zero to six months. If there is a change in behavior or any other unusual occurrence, the system alerts the person responsible to attend to the baby immediately. The proposed system operates in three phases, which are foreground color model (FC model) construction, infant detection, and degree of danger analysis. During FC model construction, the foreground color histogram is created iteratively; the background image does not have to be constructed first. A motion-history image (MHI) is also obtained based on the motion of the infant. The color and motion information supplied by the FC model and the MHI are combined to detect the infant, who is regarded as the foreground object in the input frame. Moreover, six features of infant behavior are extracted from the detected infant to measure the degree of danger faced by the infant, and the result is used to warn the baby-sitter if needed. Experimental results show that the proposed method is robust and efficient.
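A minimal sketch of the motion cue (a plain-NumPy motion-history image update, later combined with a foreground-color mask); the decay and threshold values are illustrative.

```python
# Sketch: a plain-NumPy motion-history image (MHI) update, later combined with
# the foreground-color mask to locate the infant; decay and threshold values
# are illustrative.
import numpy as np

def update_mhi(mhi, gray, prev_gray, duration=30.0, motion_thresh=15):
    """mhi: float array holding the motion history; returns the updated MHI."""
    moving = np.abs(gray.astype(float) - prev_gray.astype(float)) > motion_thresh
    mhi = np.maximum(mhi - 1.0, 0.0)       # fade old motion by one time step
    mhi[moving] = duration                  # refresh pixels that just moved
    return mhi

def combine_cues(mhi, color_mask, duration=30.0):
    """Foreground = recently-moving pixels that also match the FC color model."""
    recent_motion = mhi > (duration * 0.5)
    return recent_motion & (color_mask > 0)
```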

20.
In this paper, we present a novel foreground object detection scheme that integrates top-down information based on the expectation maximization (EM) framework. In this generalized EM framework, the top-down information is incorporated in an object model. Based on the object model and the state of each target, a foreground model is constructed. This foreground model can augment foreground detection for the camouflage problem. Thus, an object's state-specific Markov random field (MRF) model is constructed for detection based on the foreground model and the background model. This MRF model depends on the latent variables that describe each object's state. The maximization of the MRF model is the M-step in the EM framework. Besides fusing spatial information, this MRF model can also adjust the contribution of the top-down information for detection. To obtain the detection result using this MRF model, sampling importance resampling is used to sample the latent variables, and the EM framework refines the detection iteratively. In addition, our method does not need any prior information about the moving object, because the detection result of the moving object is used to incorporate domain knowledge of object shapes into the construction of the top-down information. Moreover, a kernel density estimation (KDE)-Gaussian mixture model (GMM) hybrid model is proposed to construct the probability density functions of the background and moving-object models. For the background model, it has some advantages over GMM- and KDE-based methods. Experimental results demonstrate the capability of our method, particularly in handling the camouflage problem.
