Similar Documents
 20 similar documents found (search time: 156 ms)
1.
Research on a method for tracking multiple moving targets in video
Tracking moving objects in images is a core technology of systems such as automatic surveillance, road navigation, and traffic monitoring. To track multiple moving objects in video, inter-frame differencing, a Kalman filter, and a small amount of feature matching are used to automatically extract and track each moving target. For grayscale image sequences, the gradient information of the sequence is exploited: moving targets are extracted automatically by inter-frame differencing, a Kalman filter predicts each target's position, and after the target's position range in the next frame has been estimated, the target's histogram features are used to narrow the search range and achieve accurate tracking. Experimental results show that the method has low computational cost, good real-time performance, and higher accuracy under comparable conditions.
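As a rough illustration of the pipeline this abstract describes (inter-frame differencing for detection, a Kalman filter for position prediction, and a histogram feature to narrow the search window), a minimal Python/OpenCV sketch follows; the function names, thresholds, and noise covariances are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def make_kalman():
    # Constant-velocity Kalman filter: state = (x, y, vx, vy), measurement = (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def detect_moving_regions(prev_gray, gray, thresh=25, min_area=100):
    # Inter-frame differencing followed by thresholding and contour extraction.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def histogram_score(gray_frame, box, ref_hist):
    # Compare the grayscale histogram inside `box` with the target's reference histogram.
    x, y, w, h = box
    patch = gray_frame[y:y + h, x:x + w]
    hist = cv2.calcHist([patch], [0], None, [32], [0, 256])
    cv2.normalize(hist, hist)
    return cv2.compareHist(ref_hist, hist, cv2.HISTCMP_CORREL)
```

In a full tracker, each detection near a track's `kf.predict()` output would be scored with `histogram_score`, the best-scoring box would update the filter via `kf.correct`, and unmatched detections would start new tracks.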

2.
A new method for detecting moving objects in image sequences of dynamic scenes
When detecting moving objects in image sequences of dynamic scenes, eliminating the global inter-frame motion caused by camera movement, so that the static background and the moving objects can be separated, is a difficult problem that must be solved. Targeting the characteristics of dynamic-scene image sequences with complex backgrounds, a new method is given for discriminating the image background and detecting moving objects based on recovering the 3D positions of reference points in the scene. First, a layered motion model for image sequences and a motion segmentation method based on it are introduced. Then, the estimated projection matrices are used to compute the 3D positions of the reference points of each motion layer; from how the recovered 3D position of the same scene point varies across frames, the motion layer corresponding to the static background and the layers corresponding to moving objects are identified, and the static background and moving objects are thereby segmented. Finally, a detailed algorithm for moving object detection in dynamic-scene image sequences is given. Experimental results show that the new algorithm effectively solves the problem of detecting moving objects in dynamic-scene sequences with multiple sets of global inter-frame motion parameters, and considerably improves the effectiveness and robustness of moving object tracking.
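To make the layer-discrimination step concrete, here is a hedged sketch under the abstract's assumptions (known projection matrices per frame, reference points already grouped into motion layers): each reference point is triangulated in successive frame pairs, and a layer whose recovered 3D positions barely vary across frames is labelled static background. The function names and the tolerance are placeholders, not the paper's.

```python
import numpy as np
import cv2

def triangulate_track(proj_mats, pts_2d):
    """Recover a 3D position from each consecutive frame pair for one reference point.

    proj_mats : list of 3x4 projection matrices, one per frame.
    pts_2d    : list of (x, y) image observations of the same point, one per frame.
    """
    positions = []
    for i in range(len(proj_mats) - 1):
        p1 = np.asarray(pts_2d[i], np.float64).reshape(2, 1)
        p2 = np.asarray(pts_2d[i + 1], np.float64).reshape(2, 1)
        X_h = cv2.triangulatePoints(proj_mats[i], proj_mats[i + 1], p1, p2)
        positions.append((X_h[:3] / X_h[3]).ravel())   # dehomogenize
    return np.array(positions)

def is_static_layer(per_point_positions, tol=0.05):
    # A layer is treated as static background if the recovered 3D positions of its
    # reference points vary only slightly across frames (relative to scene scale).
    spreads = [np.linalg.norm(p.std(axis=0)) for p in per_point_positions]
    return np.median(spreads) < tol
```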

3.
Point-pattern-matching-based video text tracking and stroke extraction
A method for tracking video text and extracting text strokes against complex backgrounds is given. A point-pattern matching method based on Harris corner features tracks both static and moving text in video sequences to determine the temporal extent of a text sequence, and the tracking accuracy of whole-image pixel matching and of point-pattern matching is compared. A foreground/background classification algorithm based on multi-frame fusion extracts the text strokes, which are then passed to OCR. Experimental results show that the point-pattern-matching tracker is more accurate than whole-image pixel matching, and that for complex, fast-changing backgrounds the multi-frame-fusion stroke extraction outperforms conventional binarization.
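A small sketch of the corner-based point-pattern matching idea (not the authors' exact matcher): Harris corners are detected in the text region of two frames, every corner pairing votes for a candidate translation, and the translation with the most corner support is taken as the text motion; the support count doubles as a tracking confidence. All names and tolerances are assumptions.

```python
import cv2
import numpy as np

def harris_points(gray, max_pts=200):
    # Harris-corner feature points of a grayscale text region.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_pts, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True, k=0.04)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

def match_point_patterns(pts_a, pts_b, tol=2.0):
    """Estimate the translation that best aligns two corner sets.

    Every pairing (a, b) votes for the shift b - a; the shift supported by the
    largest number of corners (within `tol` pixels) is returned together with
    its support count.
    """
    best_shift, best_support = None, -1
    for a in pts_a:
        for b in pts_b:
            shift = b - a
            moved = pts_a + shift
            # Count corners in A that land near some corner in B after the shift.
            d = np.linalg.norm(moved[:, None, :] - pts_b[None, :, :], axis=2)
            support = int((d.min(axis=1) < tol).sum())
            if support > best_support:
                best_support, best_shift = support, shift
    return best_shift, best_support
```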

4.
A method for a mobile robot to detect and track moving targets
Automatically extracting moving-target regions from image sequences and tracking the targets is one of the research focuses in autonomous robot motion control. An improved HSI image-difference model based on consecutive frame differencing and second-order frame differencing is given, and adaptive moving-region detection, adaptive shadow segmentation, and noise-removal algorithms are used to extract moving-target regions automatically without a background image. Several features of the moving target are defined and computed, and the region of the target to be tracked is identified by feature matching. Using a Kalman predictor for one-step prediction of the target state together with a two-step incremental tracking algorithm, the mobile robot can track and follow the moving target quickly and smoothly. Experimental results show that the method is effective.

5.
Stereo visual odometry based on the least median of squares
A stereo visual odometry method based on the least-median-of-squares (LMedS) estimator is proposed. Scale-invariant SIFT feature points in the images serve as landmarks, and a KD-tree nearest-neighbor search matches feature points between the left and right images and tracks them between consecutive frames. After 3D reconstruction of the feature points, the robot's travelled distance and heading are estimated with the LMedS estimator. Experiments show that the method is highly robust in inter-image matching, 3D landmark tracking, and robot motion estimation.
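The LMedS motion-estimation step can be sketched as a generic least-median-of-squares loop over minimal 3-point samples with a Kabsch rigid fit (not the authors' exact implementation); `src` and `dst` would hold matched landmarks triangulated from the stereo pair in two successive frames, with the SIFT/KD-tree matching done beforehand, e.g. with OpenCV's FLANN matcher.

```python
import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform (Kabsch): dst ~= R @ src + t.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def lmeds_motion(src, dst, n_trials=500, seed=0):
    """LMedS estimate of the rigid motion between two 3D landmark sets.

    src, dst : (N, 3) arrays of matched, triangulated landmarks in two frames.
    Returns the (R, t) whose squared residuals have the smallest median.
    """
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        R, t = rigid_fit(src[idx], dst[idx])
        residuals = np.sum((dst - (src @ R.T + t)) ** 2, axis=1)
        med = np.median(residuals)
        if med < best_med:
            best_med, best = med, (R, t)
    return best, best_med
```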

6.
Region-based 3D motion tracking of fingers
A region-based algorithm for tracking the 3D motion of an articulated linkage (the finger) is proposed. The algorithm first obtains the 3D structure of the finger in the initial frame by fusing multiple constraints and using the finger's motion characteristics. Then, based on the motion model of a rigid articulated linkage and the corresponding pose-constraint model, an optimization algorithm for 3D motion estimation under this particular motion model is given, which estimates the finger's 3D motion robustly. Finally, region tracking is used to obtain the 3D motion of the articulated linkage, and the algorithm is demonstrated on real finger image sequences. Experimental results confirm its effectiveness.

7.
An algorithm for recognizing small moving targets in a Gaussian stationary random field is proposed. The image sequence of the stationary random field is used for training, and the mean and variance at each pixel are estimated by maximum likelihood. Each frame of the moving image sequence is then thresholded probabilistically according to the 3-sigma rule, frame differences are accumulated over multiple frames, and on the accumulated frame a doubly linked list is built whose depth is used to decide whether a trajectory is present. Experiments demonstrate the effectiveness of the method.
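A minimal NumPy sketch of the statistical part of this pipeline (per-pixel maximum-likelihood mean/variance estimation, 3-sigma thresholding, and multi-frame accumulation); the linked-list trajectory decision is omitted and all names are illustrative.

```python
import numpy as np

def train_background_model(training_frames):
    # ML estimates of the per-pixel mean and standard deviation over a
    # stationary training sequence (frames stacked along axis 0).
    stack = np.asarray(training_frames, dtype=np.float64)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def three_sigma_mask(frame, mean, std):
    # A pixel is flagged as a candidate target if it deviates from the trained
    # Gaussian model by more than three standard deviations.
    return np.abs(frame.astype(np.float64) - mean) > 3.0 * std

def accumulate_candidates(frames, mean, std):
    # Threshold each frame and accumulate the candidate masks; the accumulated
    # image is what the trajectory test (linked-list depth) would scan.
    acc = np.zeros(mean.shape, dtype=np.int32)
    for f in frames:
        acc += three_sigma_mask(f, mean, std).astype(np.int32)
    return acc
```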

8.
Detecting and tracking targets in infrared images is one of the difficult problems that urgently needs to be solved in precision-guided weapons. Targeting the characteristics of infrared imagery, a new processing method for detecting and tracking dim small targets in infrared image sequences is proposed. Image connectivity on a complete lattice is first discussed; the corresponding reconstruction filter simplifies the image while preserving object contours. The new method comprises intra-frame and inter-frame processing: intra-frame processing uses a grayscale-reconstruction top-hat filter to remove the background and simplify the image, and then combines the target's gray level, shape, and area to segment the image; inter-frame processing uses the target's spatial and temporal motion information to detect and track targets across the sequence. Simulation results show that the method is effective and robust for detecting and tracking dim small targets in infrared images.
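The intra-frame stage can be illustrated with a top-hat by reconstruction, sketched below using scikit-image's grayscale reconstruction; the structuring-element size and the threshold rule are assumptions, and the abstract's gray-level/shape/area tests would be applied to the resulting blobs.

```python
import numpy as np
from skimage.morphology import erosion, reconstruction

def tophat_by_reconstruction(ir_frame, size=15):
    """Top-hat by reconstruction for dim small-target enhancement (a sketch).

    The eroded frame is used as the marker and reconstructed by dilation under
    the original frame; structures larger than `size` are restored with their
    contours intact, so subtracting the reconstruction leaves small bright
    residues (candidate targets) on a suppressed background.
    """
    img = ir_frame.astype(np.float64)
    marker = erosion(img, np.ones((size, size)))
    background = reconstruction(marker, img, method='dilation')
    return img - background

def segment_candidates(residue, k=4.0):
    # Simple global threshold on the residue image; an assumed rule, not the
    # paper's segmentation criterion.
    return residue > residue.mean() + k * residue.std()
```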

9.
A video temporal-synchronization algorithm based on reconstructing the 3D trajectories of moving targets is proposed. The sequences to be synchronized are captured simultaneously by different cameras in the same scene, and no restrictive assumptions are placed on the scene or camera motion. Assuming the camera projection matrix of every frame is known, the 3D trajectories of the moving targets are first reconstructed over a discrete cosine transform basis. A rank constraint on the matrix of trajectory-basis coefficients is then proposed to measure the spatio-temporal alignment between sub-segments of different sequences. Finally, a cost matrix is constructed and a graph-based method performs nonlinear temporal synchronization between the videos. The method does not rely on known point correspondences; tracked points in different videos may even correspond to different 3D points, as long as the following assumption holds: the 3D point corresponding to a tracked point in the observed sequence can be described as a linear combination of a subset of the 3D points corresponding to all tracked points in the reference sequence, and this linear relation stays fixed over time. Unlike most existing methods, which require feature points to be tracked through the entire sequence, the method can use image point tracks of varying lengths. The robustness and performance of the proposed method are verified on simulated data and real data sets.

10.
Face contour extraction and tracking based on the level set method
A level-set-based algorithm for extracting and tracking face contours in image sequences is proposed. Inter-frame differencing first detects the motion region quickly, and the bounding rectangle of the face is determined from the projection-mapping rule of face images. With this rectangle as the initial curve, an improved level set model extracts the face contour accurately. Since the face keeps moving throughout the sequence, a first-order linear Kalman filter model is introduced to estimate the face motion, so that the moving face contour is tracked well. Experimental results show that the method is effective.
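A hedged sketch of this pipeline in Python, with scikit-image's morphological Chan-Vese model standing in for the paper's improved level set (an explicit substitution, not the authors' formulation): frame differencing supplies a bounding rectangle, which initializes the level set.

```python
import numpy as np
import cv2
from skimage.segmentation import morphological_chan_vese

def face_contour(prev_gray, gray, pad=10):
    # 1. Frame differencing to find the motion region and its bounding rectangle.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, gray.shape[1] - 1)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, gray.shape[0] - 1)

    # 2. Use the rectangle as the initial level set and evolve it on the frame.
    init_ls = np.zeros(gray.shape, dtype=np.int8)
    init_ls[y0:y1, x0:x1] = 1
    contour_mask = morphological_chan_vese(gray.astype(np.float64), 60,
                                           init_level_set=init_ls, smoothing=2)
    return contour_mask  # binary mask whose boundary approximates the face contour
```

Once tracking is under way, the rectangle predicted by a first-order Kalman filter (as in the abstract) could replace the differencing step for initializing the curve in later frames.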

11.
The purpose of this study is to investigate a variational formulation of the problem of three-dimensional (3D) interpretation of temporal image sequences based on the 3D brightness constraint and anisotropic regularization. The method allows movement of both the viewing system and objects and does not require the computation of image motion prior to 3D interpretation. Interpretation follows the minimization of a functional with two terms: a term of conformity of the 3D interpretation to the image sequence first-order spatio-temporal variations, and a term of regularization based on anisotropic diffusion to preserve the boundaries of interpretation. The Euler–Lagrange partial differential equations corresponding to the functional are solved efficiently via the half-quadratic algorithm. Results of several experiments on synthetic and real image sequences are given to demonstrate the validity of the method and its implementation.

12.
We present a method for a 3D snake model construction and terrestrial snake locomotion synthesis in 3D virtual environments using image sequences. The snake skeleton is extracted and partitioned into equal segments using a new iterative algorithm for solving the equipartition problem. This method is applied to 3D model construction and at the motion analysis stage. Concerning the snake motion, the snake orientation is controlled by a path planning method. An animation synthesis algorithm, based on a physical motion model and tracking data from image sequences, describes the snake’s velocity and skeleton shape transitions. Moreover, the proposed motion planning algorithm allows a large number of skeleton shapes, providing a general method for aperiodic motion sequences synthesis in any motion graph. Finally, the snake locomotion is adapted to the 3D local ground, while its behavior can be easily controlled by the model parameters yielding the appropriate realistic animations.

13.
A method for four-dimensional measurement of dynamically deforming surfaces in binocular vision using a random triangle texture is proposed. A random triangle texture is generated and transferred onto the surface to be measured (paper, cloth, etc.). Two calibrated, synchronized cameras record the dynamic deformation of the surface, yielding two synchronized image sequences. With the proposed method, the triangles in each image are detected; using the proposed triangle descriptor and a triangle epipolar-constraint method, the triangles in the first frame pair are matched, and from the matches the 3D information of the surface in the first frame is measured. From the measured data, a local 3D topological structure is generated for each triangle. The triangles are then tracked through the two synchronized image sequences to obtain the 4D measurement result, and the local topology is used to detect and repair errors. Simulated and real-data experiments verify the effectiveness and feasibility of the method.

14.
Most approaches for motion analysis and interpretation rely on restrictive parametric models and involve iterative methods which depend heavily on initial conditions and are subject to instability. Further difficulties are encountered in image regions where motion is not smooth, typically around motion boundaries. This work addresses the problem of visual motion analysis and interpretation by formulating it as an inference of motion layers from a noisy and possibly sparse point set in a 4D space. The core of the method is based on a layered 4D representation of data and a voting scheme for affinity propagation. The inherent problem caused by the ambiguity of 2D to 3D interpretation is usually handled by adding additional constraints, such as rigidity. However, enforcing such a global constraint has been problematic in the combined presence of noise and multiple independent motions. By decoupling the processes of matching, outlier rejection, segmentation, and interpretation, we extract accurate motion layers based on the smoothness of image motion, and then locally enforce rigidity for each layer in order to infer its 3D structure and motion. The proposed framework is noniterative and consistently handles both smooth moving regions and motion discontinuities without using any prior knowledge of the motion model.

15.
A method for spatio-temporally smooth and consistent estimation of cardiac motion from MR cine sequences is proposed. Myocardial motion is estimated within a four-dimensional (4D) registration framework, in which all three-dimensional (3D) images obtained at different cardiac phases are simultaneously registered. This facilitates spatio-temporally consistent estimation of motion as opposed to other registration-based algorithms which estimate the motion by sequentially registering one frame to another. To facilitate image matching, an attribute vector (AV) is constructed for each point in the image, and is intended to serve as a “morphological signature” of that point. The AV includes intensity, boundary, and geometric moment invariants (GMIs). Hierarchical registration of two image sequences is achieved by using the most distinctive points for initial registration of two sequences and gradually adding less-distinctive points to refine the registration. Experimental results on real data demonstrate good performance of the proposed method for cardiac image registration and motion estimation. The motion estimation is validated via comparisons with motion estimates obtained from MR images with myocardial tagging.

16.
Concerns the 3D interpretation of image sequences showing multiple objects in motion. Each object exhibits smooth motion except at certain time instants when a motion discontinuity may occur. The objects are assumed to contain point features which are detected as the images are acquired. Estimating feature trajectories in the first two frames amounts to feature matching. As more images are acquired, existing trajectories are extended. Both initial detection and extension of trajectories are done by enforcing pertinent constraints from among the following: similarity of the image plane arrangement of neighboring features, smoothness of the 3D motion and smoothness of the image plane motion. The constraints are incorporated into energy functions which are minimized using 2D Hopfield networks. Wrong matches that result from convergence to local minima are eliminated using a 1D Hopfield-like network. Experimental results on several image sequences are shown.

17.
3D reconstruction of objects from uncalibrated image sequences is an important technique and an active research topic, and it makes data acquisition very convenient. Point matching over the image sequence yields a point cloud, and on that basis a hybrid 3D reconstruction method is proposed: first, a digital shape model (DSM) of the object is built from its 3D points; second, the object's outlines are constructed by extracting contour lines, especially mutually parallel and perpendicular straight segments; third, by combining existing 3D data models, the 3D object is reconstructed in terms of display and data structure. In an experiment on a tea canister, with the results displayed in Java3D, good results were obtained.

18.
A new algorithm for 3D head tracking under partial occlusion from 2D monocular image sequences is proposed. The extended superquadric (ESQ) is used to generate a geometric 3D face model in order to reduce the shape ambiguity during tracking. Optical flow is then regularized by this model to estimate the 3D rigid motion. To deal with occlusion, a new motion segmentation algorithm using motion residual error analysis is developed. The occluded areas are successfully detected and discarded as noise. Furthermore, accumulation error is heavily reduced by a new post-regularization process based on edge flow. This makes the algorithm more stable over long image sequences. The algorithm is applied to both synthetic occlusion sequences and real image sequences. Comparisons with the ground truth indicate that our method is effective and is not sensitive to occlusion during head tracking.
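The motion-residual occlusion test can be illustrated with a simplified stand-in that uses unregularized dense optical flow instead of the paper's ESQ-regularized flow; the threshold and flow parameters below are assumptions.

```python
import cv2
import numpy as np

def occlusion_mask(prev_gray, gray, residual_thresh=20.0):
    """Flag pixels whose motion residual stays large as occluded/outlier areas.

    Dense flow is estimated from prev_gray to gray, the current frame is warped
    back with that flow, and pixels where the warped frame still disagrees with
    prev_gray by more than `residual_thresh` gray levels are marked occluded.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(gray, map_x, map_y, cv2.INTER_LINEAR)
    residual = cv2.absdiff(warped, prev_gray).astype(np.float64)
    return residual > residual_thresh
```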

19.
This study investigates a variational, active curve evolution method for dense three-dimensional (3D) segmentation and interpretation of optical flow in an image sequence of a scene containing moving rigid objects viewed by a possibly moving camera. This method jointly performs 3D motion segmentation, 3D interpretation (recovery of 3D structure and motion), and optical flow estimation. The objective functional contains two data terms for each segmentation region, one based on the motion-only equation which relates the essential parameters of 3D rigid body motion to optical flow, and the other on the Horn and Schunck optical flow constraint. It also contains two regularization terms for each region, one for optical flow, the other for the region boundary. The necessary conditions for a minimum of the functional result in concurrent 3D-motion segmentation, by active curve evolution via level sets, and linear estimation of each region's essential parameters and optical flow. Subsequently, the screw of 3D motion and regularized relative depth are recovered analytically for each region from the estimated essential parameters and optical flow. Examples are provided which verify the method and its implementation.
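The Horn and Schunck constraint that serves as one of the data terms can be illustrated with a plain implementation of the classical Horn-Schunck iteration (the original algorithm on its own, not the paper's joint region-based functional):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=15.0, n_iter=100):
    """Classical Horn-Schunck optical flow between two grayscale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Spatial and temporal derivatives (Horn & Schunck's averaged 2x2 stencils).
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Neighborhood-average kernel used by the smoothness (regularization) term.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v
```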

20.
We present a color and shape based 3D tracking system suited to a large class of vision sensors. The method is applicable, in principle, to any known calibrated projection model. The tracking architecture is based on particle filtering methods where each particle represents the 3D state of the object, rather than its state in the image, therefore overcoming the nonlinearity caused by the projection model. This allows the use of realistic 3D motion models and easy incorporation of self-motion measurements. All nonlinearities are concentrated in the observation model so that each particle projects a few tens of special points onto the image, on (and around) the 3D object’s surface. The likelihood of each state is then evaluated by comparing the color distributions inside and outside the object’s occluding contour. Since only pixel access operations are required, the method does not require the use of image processing routines like edge/feature extraction, color segmentation or 3D reconstruction, which can be sensitive to motion blur and optical distortions typical in applications of omnidirectional sensors to robotics. We show tracking applications considering different objects (balls, boxes), several projection models (catadioptric, dioptric, perspective) and several challenging scenarios (clutter, occlusion, illumination changes, motion and optical blur). We compare our methodology against a state-of-the-art alternative, both in realistic tracking sequences and with ground truth generated data.
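A much-simplified sketch of a color-likelihood particle filter follows; unlike the paper, the particle state here is a 2D image position rather than a full 3D pose with a calibrated projection model, and the likelihood compares a single color histogram around the particle to a reference histogram instead of the inside/outside distributions along the occluding contour. All parameters are illustrative.

```python
import numpy as np
import cv2

def color_hist(img_bgr, mask=None, bins=16):
    # Normalized hue-saturation histogram used as the appearance model.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], mask, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

def likelihood(frame_bgr, particle, ref_hist, radius=20, sigma=0.15):
    # Likelihood from the similarity of the color distribution around the
    # particle's position to the reference object histogram (Bhattacharyya).
    x, y = int(particle[0]), int(particle[1])
    h, w = frame_bgr.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, (x, y), radius, 255, -1)
    d = cv2.compareHist(ref_hist, color_hist(frame_bgr, mask),
                        cv2.HISTCMP_BHATTACHARYYA)
    return np.exp(-d**2 / (2 * sigma**2))

def step(particles, frame_bgr, ref_hist, motion_std=5.0):
    # One predict-weight-resample cycle with a random-walk motion model.
    rng = np.random.default_rng()
    particles = particles + rng.normal(0, motion_std, particles.shape)
    weights = np.array([likelihood(frame_bgr, p, ref_hist) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resampling
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```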
