Similar Documents (20 results)
1.
Objective: Video object tracking algorithms based on the Kalman filter require the process-noise and observation-noise variances to be known in advance, but in practice the true values of these two variances are unavailable. Moreover, because target motion is random and video backgrounds are complex, the noise variances also change dynamically over time. If the assumed variances are inaccurate, tracking accuracy degrades, and in severe cases the tracker fails altogether. A new method is proposed to address these problems. Method: The extended recursive least squares algorithm with a forgetting factor (EFRLS) is applied to video object tracking. The algorithm requires no noise variances: the Mean Shift algorithm first provides a preliminary estimate of the target position, and EFRLS then estimates the target position in the next frame. Results: The algorithm clearly outperforms the traditional Mean Shift algorithm and matches the tracking performance of Kalman filtering combined with Mean Shift; moreover, under severe occlusion it surpasses the Kalman-plus-Mean-Shift combination. Conclusion: The proposed algorithm needs no noise parameters, tracks accurately through severe occlusion and target reappearance, improves tracking robustness, and has practical engineering value.
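The role EFRLS plays here can be sketched as ordinary recursive least squares with a forgetting factor: assuming, purely for illustration, a locally linear motion model position = a + b·t along one axis, the filter weights recent frames more heavily and extrapolates one step ahead without any noise-variance parameters. This is a simplified sketch, not the paper's exact formulation:

```python
def efrls_predict(positions, lam=0.95):
    """Fit position = a + b*t by recursive least squares with
    forgetting factor lam, then extrapolate one step ahead."""
    theta = [0.0, 0.0]            # [intercept, velocity]
    P = [[1e6, 0.0], [0.0, 1e6]]  # large initial covariance
    for t, y in enumerate(positions):
        phi = [1.0, float(t)]
        # gain: K = P*phi / (lam + phi^T * P * phi)
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0]/denom, Pphi[1]/denom]
        err = y - (phi[0]*theta[0] + phi[1]*theta[1])
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        # P = (P - K * phi^T * P) / lam  (phi^T P = Pphi^T, P symmetric)
        P = [[(P[0][0] - K[0]*Pphi[0])/lam, (P[0][1] - K[0]*Pphi[1])/lam],
             [(P[1][0] - K[1]*Pphi[0])/lam, (P[1][1] - K[1]*Pphi[1])/lam]]
    t_next = float(len(positions))
    return theta[0] + theta[1]*t_next
```

With the forgetting factor below one, old frames decay geometrically, so the estimate adapts when the target's velocity changes.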

2.
Single-image dehazing has made considerable progress, but its high time complexity cannot meet the real-time requirements of high-definition video dehazing. To address this, a dark channel prior dehazing algorithm optimized at the hardware-architecture level is proposed: a hardware-oriented dual-scale joint filtering method and a threshold-comparison method simplify the computation of the transmission map and the atmospheric light, while an inter-frame dependency constraint suppresses flicker noise in video dehazing. Experiments show the algorithm dehazes at 148.2 Mpixel/s, roughly 55 times faster than a software implementation, and processes 1920×1080 full-HD video at 69 fps, meeting real-time requirements with high dehazing quality.
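The dark channel prior that the algorithm accelerates can be illustrated in a few lines: the dark channel is the patch-wise minimum over all colour channels, and a transmission map follows as t = 1 − ω·dark/A. This naive pixel-loop sketch (nested lists, conventional ω = 0.95) deliberately ignores the paper's dual-scale filtering and threshold tricks:

```python
def dark_channel(img, patch=3):
    """img: H x W image of (r, g, b) tuples; returns the per-pixel
    minimum over channels and over a patch x patch neighbourhood."""
    h, w = len(img), len(img[0])
    mins = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    r = patch // 2
    return [[min(mins[yy][xx]
                 for yy in range(max(0, y - r), min(h, y + r + 1))
                 for xx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def transmission(dark, airlight, omega=0.95):
    """Dark-channel-prior transmission estimate t = 1 - omega*dark/A."""
    return [[1.0 - omega * d / airlight for d in row] for row in dark]
```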

3.
We present a robust, real-time stabilized active camera tracking system (ACTS) consisting of three algorithmic modules: visual tracking, pan-tilt control, and digital video stabilization. We propose an efficient correlation-based framework for the visual tracking module, designed to handle the problems that severely degrade a traditional tracker: template drift, noise, object fading (obscuration), background clutter, intermittent occlusion, varying scene illumination, high computational complexity, and the varying shape, scale, and velocity of the manoeuvring object during its motion. The pan-tilt control module is a predictive open-loop car-following control strategy that moves the camera efficiently and smoothly so that the tracked target is always at the center of the video frame. The video stabilization module eliminates vibration in the video when the system is mounted on a vibratory platform such as a truck, helicopter, or ship. We present a very efficient video stabilization method that adds no extra computational overhead to the overall system: it exploits the target coordinates computed by the tracker module to sense the amount of vibration and then filters it out of the video. The proposed system works at full frame rate (30 fps) and has been used successfully in uncontrolled real-world environments. Experimental results show the efficiency, precision, and robustness of the proposed stabilized ACTS.

4.
To improve the real-time performance of hierarchical convolutional correlation-filter visual tracking, a real-time tracking algorithm based on sparse convolutional features is proposed. First, after analysing the features of different convolutional layers, sparse convolutional features are extracted from each layer by equal-interval sampling. Then the correlation-filter responses of each layer are weighted and combined to obtain the predicted target position. Finally, a sparse model-update strategy further increases the running speed. Tested on the 50 sequences newly added to OTB-2015, the algorithm achieves an average distance precision of 82.2%, 5.25 percentage points higher than the original hierarchical convolutional feature tracker, and is robust to pose changes, occlusion, and other variations. Its average tracking speed of 32.6 frames/s is nearly three times that of the original tracker, achieving real-time tracking.
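Two of the steps described, equal-interval sampling of feature channels and weighted combination of per-layer response maps, reduce to very small operations. A minimal sketch with made-up data layouts (a channel list and nested-list response maps), not the paper's actual pipeline:

```python
def sparse_sample(channels, stride=4):
    """Equal-interval sampling: keep every stride-th feature channel."""
    return channels[::stride]

def fuse_responses(responses, weights):
    """Weighted sum of per-layer correlation response maps; the argmax
    of the fused map gives the predicted target position (row, col)."""
    h, w = len(responses[0]), len(responses[0][0])
    fused = [[sum(wt * r[y][x] for wt, r in zip(weights, responses))
              for x in range(w)] for y in range(h)]
    return max(((fused[y][x], (y, x))
                for y in range(h) for x in range(w)))[1]
```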

5.
《Real》2002,8(2):145-155
Real-time object tracking has recently become increasingly important in video analysis and processing. Applications such as traffic control, user–computer interaction, online video processing and production, and video surveillance need reliable and economically affordable video tracking tools. Most available solutions, however, are computationally intensive, sometimes require expensive video hardware, and quite often do not guarantee a suitable level of reliability. In this paper, we present a new approach to real-time object tracking in colour video sequences. It relies on contours to track the shape, position, and orientation of objects, without exploiting snakes or "traditional" active contours. A closed-loop control approach enforces motion-tracking stability, while a separate shape model, featuring a two-stage model and a median filtering technique, copes with temporary occlusions and noise. The system was tested in several environments with different constraints and gave very encouraging performance. Experimental results are reported and commented on.

6.
李静  杨涛  潘泉  程咏梅 《计算机应用》2006,26(7):1583-1586
A fast correlation tracking algorithm based on a cascade classifier is proposed. First, the colour-distribution information of the target template is used to transform the raw image data, sharpening the peak of the matching similarity function and improving stability in complex environments. Then a cascade classifier is built from mean grey-level difference and Haar-like features, characterising the similarity between the target template and the search window layer by layer in both statistical and local features; the integral image is used to compute features quickly, greatly reducing computation at non-optimal match points, and feature computation is independent of template size. Extensive experiments show the algorithm greatly reduces the time complexity of correlation tracking, with stable tracking and strong real-time performance. A real-time tracking system built around this algorithm processes targets of arbitrary size in 320×240 video sequences at an average of 20 frames/s.
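The integral-image trick mentioned above is what makes any rectangular feature sum O(1) regardless of template size. A standard sketch, not tied to the paper's specific features:

```python
def integral_image(img):
    """Summed-area table with a zero border row/column, so
    ii[y][x] = sum of img[0:y][0:x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1][x0:x1] in four lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```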

7.
In long-term tracking, real-world targets commonly undergo deformation, occlusion, illumination interference, and other disturbances. Existing algorithms can handle these problems but are computationally heavy, giving tracking systems poor real-time performance and limiting practical deployment; accurate and fast tracking has therefore become a challenging research topic in recent years. Based on the TLD (Tracking-Learning-Detection) framework proposed by Zdenek Kalal et al., three improvements are proposed: first, the resolution of the processed image is adjusted dynamically according to the fraction of the frame the target occupies, reducing the overall number of samples; second, samples are generated by scanning only the neighbourhood of the target, narrowing the detector's search range; third, the template-matching classifier in the detection stage is replaced to achieve fast matching and raise running speed. Experiments on different scenes show the improved algorithm substantially alleviates these problems, effectively lowering computation and raising running speed; for live camera surveillance, the improved algorithm maintains tracking accuracy while achieving good real-time performance.

8.
For intelligent video surveillance, adaptive tracking of multiple moving objects remains an open issue. In this paper, a new multi-object tracking method based on video frames is proposed. A particle filter combined with SIFT (Scale-Invariant Feature Transform) is proposed for motion tracking, where SIFT key points are treated as parts of particles to improve the sample distribution. A queue-chain method then records data associations among different objects, improving detection accuracy and reducing computational complexity. In actual road tests and comparisons, the system tracks multiple objects with good performance, e.g., real-time implementation and robustness against mutual occlusions, indicating that it is effective for intelligent video surveillance systems.
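A core ingredient of any particle filter, including SIFT-augmented variants such as this one, is weight-proportional resampling. A standard systematic-resampling sketch (generic, not the paper's specific scheme):

```python
import random

def systematic_resample(weights):
    """Return particle indices drawn with one random offset and
    evenly spaced positions through the cumulative weights."""
    n = len(weights)
    step = sum(weights) / n
    u = random.uniform(0, step)
    out, idx, cum = [], 0, weights[0]
    for _ in range(n):
        while u > cum:
            idx += 1
            cum += weights[idx]
        out.append(idx)
        u += step
    return out
```

Systematic resampling needs only one random number per generation and keeps the surviving indices in order, which makes it both cheap and low-variance.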

9.
Color-based visual object tracking is one of the most commonly used tracking methods. Among many tracking methods, the mean shift tracker is used most often because it is simple to implement and consumes little computational time. However, mean shift trackers exhibit several limitations in long-term tracking: under challenging conditions that include occlusions, pose variations, scale changes, and illumination changes, the mean shift tracker does not work well. In this paper, an improved algorithm based on a mean shift tracker is proposed to overcome these weaknesses. The main contributions of this paper are to integrate the mean shift tracker with an online learning-based detector and to newly define a Kalman filter-based validation region that reduces the detector's computational burden. We combine the mean shift tracker with the online learning-based detector and integrate the Kalman filter to develop a novel tracking algorithm. The proposed algorithm can reinitialize the target when it converges to a local minimum, and it copes with scale changes, occlusions, and appearance changes by using the online learning-based detector. It updates the target model for the tracker to ensure long-term tracking. Moreover, the validation region, obtained using the Kalman filter and the Mahalanobis distance, lets the detector operate in real time. Through a comparison against various mean shift tracker-based methods and other state-of-the-art methods on eight challenging video sequences, we demonstrate that the proposed algorithm is efficient and superior in terms of accuracy and speed. Hence, the proposed method can be applied to various applications that need to detect and track an object in real time.
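The Kalman-based validation region described above boils down to a chi-square gate on the Mahalanobis distance between the predicted position and a candidate detection; the detector only needs to examine candidates inside the gate. A 2-D sketch with an assumed 99% chi-square gate threshold:

```python
def in_validation_region(pred, meas, cov, gate=9.21):
    """2-D Mahalanobis gate; 9.21 is the chi-square 99% bound for
    2 degrees of freedom. cov is the 2x2 innovation covariance."""
    dx, dy = meas[0] - pred[0], meas[1] - pred[1]
    # invert the 2x2 covariance directly
    det = cov[0][0]*cov[1][1] - cov[0][1]*cov[1][0]
    inv = [[ cov[1][1]/det, -cov[0][1]/det],
           [-cov[1][0]/det,  cov[0][0]/det]]
    d2 = dx*(inv[0][0]*dx + inv[0][1]*dy) + dy*(inv[1][0]*dx + inv[1][1]*dy)
    return d2 <= gate
```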

10.
A real-time moving-object detection and tracking system based on the TMS320DM642
The CAMShift algorithm, with its good real-time performance, robustness, and low computational cost, occupies an important place in target tracking; CAMShift alone, however, is a semi-automatic tracker that requires a person to identify the target in advance to obtain its colour model. This paper combines frame differencing with CAMShift to design an automatic moving-object detection and tracking system. The system first identifies and extracts the moving target with a double difference over three temporally consecutive frames, selecting the search window automatically; CAMShift then computes the precise target position and adjusts the search-window size; finally, pan-tilt control commands are sent over a serial port to drive the platform so that the target stays in the field of view. The software was implemented on a DM642-based hardware platform. Experiments show that, under relatively high contrast and without human intervention, the system tracks targets effectively in real time.
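The three-frame double difference used to seed the CAMShift search window can be sketched as the AND of two thresholded frame differences, followed by a bounding box over the moving pixels (grayscale nested lists, illustrative threshold):

```python
def double_difference(f0, f1, f2, thresh=20):
    """Motion mask: a pixel moves only if it differs from both the
    previous and the next frame by more than thresh."""
    h, w = len(f0), len(f0[0])
    return [[1 if abs(f1[y][x] - f0[y][x]) > thresh and
                  abs(f2[y][x] - f1[y][x]) > thresh else 0
             for x in range(w)] for y in range(h)]

def search_window(mask):
    """Bounding box (x0, y0, x1, y1) of moving pixels: the automatic
    seed for the CAMShift search."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```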

11.
We introduce a new GPGPU-based real-time dense stereo matching algorithm. The algorithm is based on a progressive multi-resolution pipeline which includes background modeling and dense matching with adaptive windows. For applications in which only moving objects are of interest, this approach effectively reduces the overall computation cost quite significantly, and preserves the high definition details. Running on an off-the-shelf commodity graphics card, our implementation achieves a 36 fps stereo matching on 1024 × 768 stereo video with a fine 256 pixel disparity range. This is effectively the same as 7200 M disparity evaluations per second. For scenes where the static background assumption holds, our approach outperforms all published alternative algorithms in terms of speed, by a large margin. We envision a number of potential applications such as real-time motion capture, as well as tracking, recognition and identification of moving objects in multi-camera networks.

12.
Intelligent video surveillance currently places high demands on the real-time performance, accuracy, and robustness of video tracking algorithms, and existing algorithms cannot fully meet them. Within the TLD (Tracking-Learning-Detection) framework, a foreground classification algorithm based on the Visual Background extractor (ViBe) is proposed to speed up target detection in TLD, and the tracker in the TLD framework is implemented with Kernelized Correlation Filters (KCF), improving accuracy and robustness. Tests on the surveillance-oriented sequences of the OTB-2013 benchmark, compared against four representative tracking algorithms, show the proposed algorithm is more robust and accurate than all of them, running at up to 40 frames/s. Compared with standard TLD, distance precision improves 1.52× and success rate 1.2×; compared with KCF, tracking speed drops, but distance precision improves 2.7× and success rate 2.04×.
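The ViBe foreground test at the heart of the proposed classifier is a per-pixel comparison against a small set of stored background samples. A minimal grayscale sketch using the commonly cited default parameters (matching radius 20, two required matches), which may differ from the paper's settings:

```python
def vibe_is_background(pixel, samples, radius=20, min_matches=2):
    """ViBe pixel test: a pixel is background if it lies within
    `radius` of at least `min_matches` stored background samples."""
    matches = sum(1 for s in samples if abs(pixel - s) < radius)
    return matches >= min_matches
```

Pixels failing the test are foreground candidates, which is exactly what a TLD detector can restrict its scan to.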

13.
In this paper, we present a new algorithm for the computation of the focus of expansion in a video sequence. Although several algorithms have been proposed in the literature for its computation, almost all of them are based on the optical flow vectors between a pair of consecutive frames, so being very sensitive to noise, optical flow errors and camera vibrations. Our algorithm is based on the computation of the vanishing point of point trajectories, thus integrating information over more than two consecutive frames. It can improve performance in the presence of erroneous correspondences and occlusions in the field of view of the camera. The algorithm has been tested with virtual sequences generated with Blender, as well as real sequences from both the public KITTI benchmark and a number of challenging video sequences also proposed in this paper. For comparison purposes, some algorithms from the literature have also been implemented. The results show that the algorithm is very robust, outperforming the compared algorithms, especially in outdoor scenes, where the lack of texture can make optical flow algorithms yield inaccurate results. Timing evaluation shows that the proposed algorithm can reach up to 15 fps, showing its suitability for real-time applications.
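Computing the focus of expansion as the vanishing point of point trajectories amounts to a least-squares intersection of the lines the trajectories trace. A sketch that fits each trajectory's line from its endpoints only (the paper presumably uses all points and robust estimation):

```python
def focus_of_expansion(trajectories):
    """Least-squares point minimizing squared distance to the line
    through each trajectory; trajectories: list of [(x, y), ...]."""
    A11 = A12 = A22 = b1 = b2 = 0.0
    for pts in trajectories:
        (x0, y0), (x1, y1) = pts[0], pts[-1]
        # unit normal (a, b) of the line a*x + b*y = c
        a, b = y1 - y0, x0 - x1
        norm = (a*a + b*b) ** 0.5
        a, b = a / norm, b / norm
        c = a*x0 + b*y0
        # accumulate normal equations for min sum (a*x + b*y - c)^2
        A11 += a*a; A12 += a*b; A22 += b*b
        b1 += a*c; b2 += b*c
    det = A11*A22 - A12*A12
    return ((A22*b1 - A12*b2) / det, (A11*b2 - A12*b1) / det)
```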

14.
The ability to produce dynamic Depth of Field effects in live video streams was until recently a quality unique to movie cameras. In this paper, we present a computational camera solution coupled with real-time GPU processing to produce runtime dynamic Depth of Field effects. We first construct a hybrid-resolution stereo camera with a high-res/low-res camera pair. We recover a low-res disparity map of the scene using GPU-based Belief Propagation, and subsequently upsample it via fast Cross/Joint Bilateral Upsampling. With the recovered high-resolution disparity map, we warp the high-resolution video stream to nearby viewpoints to synthesize a light field toward the scene. We exploit parallel processing and atomic operations on the GPU to resolve visibility when multiple pixels warp to the same image location. Finally, we generate racking focus and tracking focus effects from the synthesized light field rendering. All processing stages are mapped onto NVIDIA’s CUDA architecture. Our system can produce racking and tracking focus effects for the resolution of 640×480 at 15 fps.

15.
Fractal image compression (FIC) is a very popular coding technique used in image/video applications due to its simplicity and superior performance. The major drawback of FIC is that it is a very time-consuming algorithm, especially when a full search is attempted. Hence, it is very challenging to achieve real-time operation if this algorithm is implemented on general processors. In this paper, a new parallel architecture with a bit-width reduction scheme is implemented. The hardware is synthesized on an Altera Cyclone II FPGA whose architecture is optimized at the circuit level in order to achieve real-time operation. The performance of the proposed architecture is evaluated in terms of runtime, peak signal-to-noise ratio (PSNR), and compression efficiency. On average, a speedup of 3 was attained through bit-width reduction while the PSNR was maintained at an acceptable level. Empirical results demonstrate that this firmware is competitive with other existing hardware, with PSNR averaging 29.9 dB, 5.82% compression efficiency, and a runtime equivalent to a video speed of 101 frames per second (fps).

16.
In video-based in-air signature verification systems, existing methods cannot meet the accuracy, real-time, and robustness requirements of fingertip tracking. After a comparative study of several commonly used tracking methods, a Tracking-Learning-Detection (TLD) method based on temporal context is proposed. Temporal-context information, i.e., the prior knowledge that fingertip motion is continuous between adjacent frames, is introduced into the original TLD algorithm to adaptively shrink the search range of both detection and tracking and thereby raise tracking speed. Experiments on 12 public and 1 self-recorded video sequences show the improved TLD algorithm tracks the fingertip accurately at 43 frames/s; compared with the original TLD tracker, accuracy improves by 15% and tracking speed at least doubles, meeting the accuracy, real-time, and robustness requirements of fingertip tracking.
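The temporal-context idea, that fingertip motion is continuous between adjacent frames, shrinks the detector's scan area to a box around the previous position. A trivial sketch with a hypothetical maximum per-frame displacement parameter:

```python
def search_region(prev, max_move, frame_w, frame_h):
    """Restrict detection to a box of half-width max_move around the
    previous fingertip position, clamped to the frame bounds."""
    x, y = prev
    return (max(0, x - max_move), max(0, y - max_move),
            min(frame_w, x + max_move), min(frame_h, y + max_move))
```

Scanning this box instead of the full frame is what yields the speedup: the number of candidate windows shrinks roughly with the area ratio.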

17.
Fast lane-line detection based on machine vision
To overcome the computational complexity, slow speed, and lack of robustness of existing lane-line detection algorithms, a new fast lane-line detection algorithm is proposed. The grey-level variation of the image is first analysed to extract lane-line contour pixels, and a B-Spline curve is then fitted to the lane contour to produce the final detection result. Experiments show the algorithm performs excellently in both speed and detection rate, reaching 12 fps on an embedded platform and meeting the practical needs of intelligent driving.

18.
A technique for real-time object recognition in digital images is described. On the one hand, our approach combines robustness against occlusions, clutter, arbitrary illumination changes, and noise with invariance under rigid motion, i.e., translation and rotation. On the other hand, the computational effort is small in order to fulfill requirements of real-time applications. Our approach uses a modification of the generalized Hough transform (GHT) to improve the GHT's performance: A novel efficient limitation of the search space in combination with a hierarchical search strategy is implemented to reduce the computational effort. To meet the demands for high precision in industrial tasks, a subsequent refinement adjusts the final pose parameters. An empirical performance evaluation of the modified GHT is presented by comparing it to two standard 2D object recognition techniques.
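The generalized Hough transform that the approach modifies can be illustrated in its translation-only form: an R-table of offsets from template edge points to a reference point, and a voting accumulator over candidate reference positions. The paper's search-space limitation, hierarchical search, and rotation handling are omitted from this sketch:

```python
def build_r_table(template_pts, ref):
    """Offsets from each template edge point to the reference point."""
    return [(ref[0] - x, ref[1] - y) for (x, y) in template_pts]

def ght_detect(image_pts, r_table):
    """Each image edge point votes for every reference position the
    R-table allows; the cell with the most votes wins."""
    votes = {}
    for (x, y) in image_pts:
        for (dx, dy) in r_table:
            cell = (x + dx, y + dy)
            votes[cell] = votes.get(cell, 0) + 1
    return max(votes, key=votes.get)
```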

19.
陈双叶  王善喜 《计算机科学》2016,43(Z6):203-206
Traditional frame differencing tends to leave holes when detecting moving targets, and the Meanshift algorithm easily loses the target, or fails entirely, in complex environments. To address these shortcomings, a moving-target detection and tracking algorithm is proposed that combines five-frame differencing with a dynamic threshold and an improved Meanshift with real-time template updating of the tracked target, improving the system's real-time performance and robustness. Results show the method is feasible, detects moving targets accurately, and improves the reliability of target tracking.

20.
This paper addresses robust and ultrafast pose tracking on mobile devices, such as smartphones and small drones. Existing methods, relying on either vision analysis or inertial sensing, are either too computationally heavy to achieve real-time performance on a mobile platform, or not sufficiently robust to address unique challenges in mobile scenarios, including rapid camera motions, long exposure times of mobile cameras, etc. This paper presents a novel hybrid tracking system which utilizes on-device inertial sensors to greatly accelerate the visual feature tracking process and improve its robustness. In particular, our system adaptively resizes each video frame based on inertial sensor data and applies a highly efficient binary feature matching method to track the object pose in each resized frame with little accuracy degradation. This tracking result is revised periodically by a model-based feature tracking method (Hare et al. 2012) to reduce accumulated errors. Furthermore, an inertial tracking method and a solution for fusing its results with the feature tracking results are employed to further improve the robustness and efficiency. We first evaluate our hybrid system using a dataset consisting of 16 video clips with synchronized inertial sensing data and then assess its performance in a mobile augmented reality application. Experimental results demonstrate our method's superior performance to a state-of-the-art feature tracking method (Hare et al. 2012), a direct tracking method (Engel et al. 2014), and the Vuforia SDK (Ibañez and Figueras 2013), and it runs at more than 40 Hz on a standard smartphone. We will release the source code with the publication of this paper.
