Similar Documents
20 similar documents found.
1.
Design and Implementation of a Multi-Camera Cooperative Perception System   (Cited by: 1; self-citations: 0, other citations: 1)
To provide continuous real-time surveillance and improve area security, a cooperative perception system was designed that uses multiple active cameras to seamlessly detect and track multiple moving targets, presenting the user with a visualized 3D scene. A laboratory application example is given.

2.
Tracking multiple non-rigid targets against a complex background is a difficult problem in computer vision. Building on the geodesic active contour model, this paper proposes a method for tracking multiple non-rigid targets in complex backgrounds that applies force-field regularization and incorporates motion edge information. The method consists of two stages, motion detection and tracking: motion detection uses motion edge information to detect target motion and drive the contour curve close to the target boundary; tracking then corrects the detection result using static edge information in the current frame, and any bias introduced by this tracking step is corrected in the motion detection of the next frame. Experiments show that the method can effectively track multiple non-rigid moving targets in complex backgrounds.

3.
A Novel Intelligent Image Surveillance System   (Cited by: 9; self-citations: 0, other citations: 9)
To address the shortcomings of currently popular image surveillance systems, a novel intelligent image surveillance system was developed. The system performs automatic iris adjustment and automatic focusing for the monitored scene; detects and locates targets of a specified color in the scene in real time; and automatically detects and tracks moving targets, solving the image refresh-rate problem in moving-target detection and greatly improving the system's degree of intelligence and automation. Practical use has shown that the system performs well.

4.
A Moving-Target Detection and Tracking Method for Parking-Lot Scenes   (Cited by: 2; self-citations: 0, other citations: 2)
For parking-lot surveillance video captured by a fixed camera, in which moving targets may remain stationary for long periods, a moving-target detection method combining frame difference and running average is proposed; moving targets are then tracked with the aid of Kalman filtering together with target histograms and contour information. Experimental results show that the method detects and tracks moving targets in parking-lot scenes well while meeting real-time surveillance requirements.
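The frame-difference/running-average detection stage described in this abstract maps directly onto standard OpenCV primitives. Below is a minimal sketch, assuming Python with `cv2` and `numpy`; the video path, update rate `alpha`, and threshold `diff_thresh` are illustrative, and the paper's Kalman-filter tracking stage is omitted:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("parking_lot.mp4")   # hypothetical input video
ok, frame = cap.read()
gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = gray_prev.astype(np.float32)   # running-average background

alpha = 0.02       # background update rate (assumed)
diff_thresh = 25   # binarization threshold (assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame difference responds to recent motion; background difference
    # keeps vehicles that stop moving for a long time.
    frame_diff = cv2.absdiff(gray, gray_prev)
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    motion = cv2.bitwise_or(
        cv2.threshold(frame_diff, diff_thresh, 255, cv2.THRESH_BINARY)[1],
        cv2.threshold(bg_diff, diff_thresh, 255, cv2.THRESH_BINARY)[1])

    # Update the running average only where no motion was detected, so
    # parked vehicles are not absorbed into the background too quickly.
    cv2.accumulateWeighted(gray, background, alpha,
                           mask=cv2.bitwise_not(motion))
    gray_prev = gray
```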

5.
When a conventional camera captures targets of interest in a large, complex scene, the targets are easily lost or occluded. To address this, a cylindrical-unwrapping and real-time moving-target tracking algorithm based on a panoramic camera is proposed. An improved cylindrical unwrapping algorithm restores and unwraps the panoramic image captured by the 360° camera, correcting the imaging distortion of panoramic images, and a combination of CamShift and Kalman prediction is used to track moving targets. Experimental results show that the method achieves real-time, robust tracking of moving targets in complex panoramic environments even when targets are occluded, disappear briefly, or are disturbed by objects of similar color.
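A minimal sketch of the CamShift-plus-Kalman combination described above, assuming OpenCV and a pre-computed hue histogram `roi_hist` of the target region; the cylindrical unwrapping step is omitted, and the fallback to the Kalman prediction under occlusion is one plausible reading of the abstract, not the paper's exact rule:

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
kalman = cv2.KalmanFilter(4, 2)
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3

def track_step(frame, track_window, roi_hist):
    """One step: predict with the Kalman filter, refine with CamShift."""
    prediction = kalman.predict()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, new_window = cv2.CamShift(back_proj, track_window, crit)
    (cx, cy), (w, h), _ = rot_rect
    if w > 0 and h > 0:                      # CamShift found the target
        kalman.correct(np.array([[cx], [cy]], np.float32))
        return new_window
    # Target lost (occlusion, brief disappearance): keep the previous
    # window size centered on the Kalman prediction.
    x, y, ww, hh = track_window
    px, py = float(prediction[0, 0]), float(prediction[1, 0])
    return (int(px - ww / 2), int(py - hh / 2), ww, hh)
```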

6.
A Moving-Target Detection and Tracking Method Based on Active Vision   (Cited by: 1; self-citations: 0, other citations: 1)
卢瑾  方俊  张健 《计算机仿真》2012,29(7):278-281,291
An active-vision moving-target detection and tracking system is studied. Since image-based target tracking is largely a non-continuous dynamic process with poor accuracy, a background model is built using a Gaussian mixture method and background subtraction is applied, with the segmentation threshold determined by the maximum between-class variance (Otsu) algorithm to detect and segment moving targets. A bandwidth-adaptive mean-shift tracking algorithm combined with SURF is proposed for target tracking, and the camera motion is controlled by a parallel thread to keep the tracked target at a suitable size in the image sequence. Experiments show that the improved system accurately detects and stably tracks moving targets in the scene and meets real-time application requirements.
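The detection pipeline above (Gaussian mixture background model, background subtraction, Otsu thresholding) can be prototyped with standard OpenCV calls. A minimal sketch, where `min_area` is an assumed noise filter and the SURF-assisted bandwidth-adaptive mean-shift tracker is left out:

```python
import cv2

# Mixture-of-Gaussians background model; history length is an assumption.
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def detect_moving_objects(frame, min_area=200):
    fg = bg_model.apply(frame)   # background subtraction
    # Otsu's method (maximum between-class variance) picks the threshold.
    _, mask = cv2.threshold(fg, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes of sufficiently large moving regions.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```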

7.
For robot target tracking in large unknown environments, a method based on a heterogeneous information fusion architecture is proposed to achieve real-time target detection and tracking, built on the distributed smart network devices of an intelligent space and the robot's onboard 2D laser scanner. The system matches moving targets by color information and reconstructs them in 3D by least squares based on the triangulation principle; once a moving target is detected, the laser sensor scans the target person's legs and clusters nearest-neighbor points to obtain accurate depth information. An optimized iterated extended Kalman filter algorithm then fuses the heterogeneous sensor information to achieve robot localization and target tracking in the intelligent space. Experimental results verify the effectiveness of the method.

8.
程磊 《数字社区&智能家居》2010,6(7):1684-1685,1688
For scenes with dynamically changing backgrounds, a moving-target detection and tracking method based on omnidirectional vision is proposed. Fusing features such as the target's H value in HSV color space, the Euclidean distance between targets, and the intersection area of targets improves the robustness of target tracking. Experiments show that the method achieves real-time, accurate detection and tracking of moving targets.

9.
李旭  俞娜  李景文  姜建武 《计算机仿真》2021,38(11):162-167
To address target tracking and matching across multiple cameras with overlapping fields of view, a moving-target detection and tracking method based on field-of-view (FOV) boundary lines is proposed. Feature matching points are obtained automatically from images with overlapping views by combining the SIFT and Harris algorithms, and the matched point pairs are used to generate FOV boundary lines. When a target crosses a boundary line, a projection-invariant method based on color information identifies the moving target from the distance between the target's center and the FOV boundary line, handing the target over between cameras and continuing the tracking. In comparative experiments with published methods under the same video frame counts and scenes, the proposed algorithm achieved tracking accuracies of 76%, 87%, and 88%, each higher than the compared methods. The results show that the algorithm achieves continuous target tracking across multiple cameras: it tracks the same moving target in real time and avoids target loss when moving targets in the overlapping views come close together, effectively improving tracking accuracy and providing a new solution for multi-view moving-target tracking.

10.
To obtain the trajectories of target vehicles in highway traffic video, a video-based multi-target vehicle tracking and real-time trajectory distribution algorithm is proposed, providing vehicle traffic information for traffic management systems and decision making. First, the YOLOv4 algorithm detects target vehicle positions and confidences. Second, under different scene conditions, the proposed sparse-frame-detection tracking method, combined with the KCF tracking algorithm, associates vehicle data to obtain complete trajectories. Finally, the trajectories are displayed in a vehicle distribution map and a top-down view of the traffic scene for traffic management and analysis. Experimental results show that the proposed tracking method achieves high tracking accuracy, the sparse-frame-detection approach is also fast, and the real-time trajectory distribution correctly reflects the lane information and vehicle motion of the real scene.
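A minimal sketch of the sparse-frame-detection idea: run the expensive detector only every N-th frame and let cheap per-object KCF trackers bridge the gaps. Here `run_yolov4` stands in for any YOLOv4 inference call, `DETECT_EVERY` is an assumed interval, and the paper's detection-to-track association is simplified to restarting tracks at each detection frame:

```python
import cv2

DETECT_EVERY = 10   # assumed detection interval (in frames)

def track_vehicles(frames, run_yolov4):
    """frames: iterable of images; run_yolov4(frame) -> [(x, y, w, h), ...]"""
    trackers = []    # (KCF tracker, trajectory) pairs for live vehicles
    finished = []    # trajectories of vehicles no longer tracked
    for i, frame in enumerate(frames):
        if i % DETECT_EVERY == 0:
            # Detection frame: restart tracking from fresh detections.
            finished.extend(traj for _, traj in trackers)
            trackers = []
            for box in run_yolov4(frame):
                t = cv2.TrackerKCF_create()   # cv2.legacy.* in some builds
                t.init(frame, tuple(box))
                trackers.append((t, [box]))
        else:
            # Intermediate frame: KCF updates extend each trajectory.
            for t, traj in trackers:
                ok, box = t.update(frame)
                if ok:
                    traj.append(box)
    return finished + [traj for _, traj in trackers]
```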

11.
Vision systems are increasingly being deployed to perform complex surveillance tasks. While improved algorithms are being developed to perform these tasks, it is also important that data suitable for these algorithms be acquired – a non-trivial task in a dynamic and crowded scene viewed by multiple PTZ cameras. In this paper, we describe a real-time multi-camera system that collects images and videos of moving objects in such scenes, subject to task constraints. The system constructs “task visibility intervals” that contain information about what can be sensed in future time intervals. Constructing these intervals requires prediction of future object motion and consideration of several factors such as object occlusion and camera control parameters. Such intervals can also be combined to form multi-task intervals, during which a single camera can collect videos suitable for multiple tasks simultaneously. Experimental results are provided to illustrate the system capabilities in constructing such task visibility intervals, followed by scheduling them using a greedy algorithm.
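The greedy scheduling step can be illustrated with the classic earliest-finish-first rule. In this Python sketch the `(start, end, task)` interval structure is an assumption, and the system's motion prediction and multi-task interval merging are not reproduced:

```python
def greedy_schedule(intervals):
    """intervals: (start, end, task) triples; returns a conflict-free subset."""
    scheduled, busy_until = [], float("-inf")
    # Earliest-finishing intervals first, so each choice frees the camera
    # as soon as possible for later intervals.
    for start, end, task in sorted(intervals, key=lambda iv: iv[1]):
        if start >= busy_until:
            scheduled.append((start, end, task))
            busy_until = end
    return scheduled

# Three candidate intervals, two of which overlap:
print(greedy_schedule([(0, 4, "A"), (3, 5, "B"), (5, 9, "C")]))
# -> [(0, 4, 'A'), (5, 9, 'C')]
```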

12.
For augmented reality game applications, a real-time tracking algorithm based on natural features is introduced. In an unknown but richly textured environment, without any prior learning, it tracks the camera and several rigid bodies appearing in the camera's view in parallel, obtaining the pose and motion of the camera and each rigid body, and it can recover automatically when a rigid body moves out of view or when violent camera motion causes tracking to be lost. The algorithm extends and improves a framework for parallel camera tracking and 3D reconstruction in static environments by introducing moving rigid bodies for interaction. Results show that, with existing multi-core processing technology, parallel tracking of the camera and moving objects in an environment containing moving objects is feasible, offering a cheap and practical solution for simultaneously supporting display and interaction in augmented reality environments.

13.
Camera model and its calibration are required in many applications for coordinate conversions between the two-dimensional image and the real three-dimensional world. Self-calibration method is usually chosen for camera calibration in uncontrolled environments because the scene geometry could be unknown. However when no reliable feature correspondences can be established or when the camera is static in relation to the majority of the scene, self-calibration method fails to work. On the other hand, object-based calibration methods are more reliable than self-calibration methods due to the existence of the object with known geometry. However, most object-based calibration methods are unable to work in uncontrolled environments because they require the geometric knowledge on calibration objects. Though in the past few years the simplest geometry required for a calibration object has been reduced to a 1D object with at least three points, it is still not easy to find such an object in an uncontrolled environment, not to mention the additional metric/motion requirement in the existing methods. Meanwhile, it is very easy to find a 1D object with two end points in most scenes. Thus, it would be very worthwhile to investigate an object-based method based on such a simple object so that it would still be possible to calibrate a camera when both self-calibration and existing object-based calibration fail to work. We propose a new camera calibration method which requires only an object with two end points, the simplest geometry that can be extracted from many real-life objects. Through observations of such a 1D object at different positions/orientations on a plane which is fixed in relation to the camera, both intrinsic (focal length) and extrinsic (rotation angles and translations) camera parameters can be calibrated using the proposed method. The proposed method has been tested on simulated data and real data from both controlled and uncontrolled environments, including situations where no explicit 1D calibration objects are available, e.g. from a human walking sequence. Very accurate camera calibration results have been achieved using the proposed method.

14.
Robust camera pose and scene structure analysis for service robotics   (Cited by: 1; self-citations: 0, other citations: 1)
Successful path planning and object manipulation in service robotics applications rely both on a good estimation of the robot’s position and orientation (pose) in the environment, as well as on a reliable understanding of the visualized scene. In this paper a robust real-time camera pose and a scene structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of the so-called tracks. The tracks include key features from the imaged scene and geometric constraints which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. robot, the scene is analyzed via a robust depth segmentation and object classification approach. In order to reliably segment the object’s depth, a feedback control technique at an image processing level has been used with the purpose of improving the robustness of the robotic vision system with respect to external influences, such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system.

15.
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.

16.
Computer vision systems based on dynamic scene analysis, and especially the measurement of the motion parameters of moving targets, have become a research focus in computer vision. A target attitude measurement system based on binocular stereo vision was built: two cameras photograph the same target from different viewpoints, and the cameras are calibrated with the DLT method. The captured images are denoised and filtered, and stereo image matching finds the corresponding points of feature points in the two images; the attitude parameters are then solved by combining the DLT algorithm with the stereo matching algorithm, realizing measurement of the target's attitude parameters and yielding the target's 3D attitude information. Experimental results show that the measurement system has a simple structure, low computational cost, and high measurement accuracy.
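The DLT calibration mentioned above solves for the 3x4 projection matrix from known 3D-2D correspondences as a homogeneous least-squares problem. A minimal NumPy sketch, assuming at least six non-degenerate calibration points:

```python
import numpy as np

def dlt_calibrate(points_3d, points_2d):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences.

    points_3d: iterable of (X, Y, Z); points_2d: iterable of (u, v).
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence gives two rows of the homogeneous system A p = 0.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The least-squares solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)
```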

17.
Intelligent visual surveillance — A survey   (Cited by: 3; self-citations: 0, other citations: 3)
Detection, tracking, and understanding of moving objects of interest in dynamic scenes have been active research areas in computer vision over the past decades. Intelligent visual surveillance (IVS) refers to an automated visual monitoring process that involves analysis and interpretation of object behaviors, as well as object detection and tracking, to understand the visual events of the scene. Main tasks of IVS include scene interpretation and wide area surveillance control. Scene interpretation aims at detecting and tracking moving objects in an image sequence and understanding their behaviors. In wide area surveillance control task, multiple cameras or agents are controlled in a cooperative manner to monitor tagged objects in motion. This paper reviews recent advances and future research directions of these tasks. This article consists of two parts: The first part surveys image enhancement, moving object detection and tracking, and motion behavior understanding. The second part reviews wide-area surveillance techniques based on the fusion of multiple visual sensors, camera calibration and cooperative camera systems.

18.
Detecting and tracking moving objects within a scene is an essential step for high-level machine vision applications such as video content analysis. In this paper, we propose a fast and accurate method for tracking an object of interest in a dynamic environment (active camera model). First, we manually select the region of the object of interest and extract three statistical features, namely the mean, the variance and the range of intensity values of the feature points lying inside the selected region. Then, using the motion information of the background’s feature points and k-means clustering algorithm, we calculate camera motion transformation matrix. Based on this matrix, the previous frame is transformed to the current frame’s coordinate system to compensate the impact of camera motion. Afterwards, we detect the regions of moving objects within the scene using our introduced frame difference algorithm. Subsequently, utilizing DBSCAN clustering algorithm, we cluster the feature points of the extracted regions in order to find the distinct moving objects. Finally, we use the same statistical features (the mean, the variance and the range of intensity values) as a template to identify and track the moving object of interest among the detected moving objects. Our approach is simple and straightforward yet robust, accurate and time efficient. Experimental results on various videos show an acceptable performance of our tracker method compared to complex competitors.
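The camera-motion compensation step described above can be sketched with standard OpenCV calls. In this sketch, RANSAC inside `estimateAffinePartial2D` plays the role of the paper's k-means background/foreground split, and the DBSCAN clustering of the remaining feature points is omitted:

```python
import cv2

def compensated_difference(prev_gray, cur_gray):
    """Warp the previous frame by the estimated camera motion, then difference."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts_prev, None)
    good = status.ravel() == 1
    # RANSAC discards points that do not follow the dominant (camera)
    # motion, i.e. points on independently moving objects.
    M, _ = cv2.estimateAffinePartial2D(pts_prev[good], pts_cur[good],
                                       method=cv2.RANSAC)
    h, w = cur_gray.shape
    warped_prev = cv2.warpAffine(prev_gray, M, (w, h))
    return cv2.absdiff(cur_gray, warped_prev)   # residual motion = objects
```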

19.
For moving-target detection under a moving camera, where background modeling is complex and computationally expensive, a motion-saliency-based detection method is proposed that achieves accurate moving-target detection while avoiding complex background modeling. Imitating the attention mechanism of the human visual system, the method analyzes the motion characteristics of background and foreground during camera translation and computes the saliency of the video scene to detect moving targets in dynamic scenes. First, optical flow extracts the target's motion features and 2D Gaussian convolution suppresses the background's motion texture; then histogram statistics measure the global saliency of the motion features, and the color information of foreground and background is extracted from the resulting motion saliency map; finally, a Bayesian method processes the motion saliency map to obtain the salient moving targets. Experiments on videos from public datasets show that the method suppresses background motion noise while highlighting and accurately detecting the moving targets in the scene.
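A minimal sketch of the motion-saliency computation: dense optical flow as the motion feature, Gaussian smoothing to suppress background motion texture, and histogram rarity as the global saliency measure; the Bayesian refinement with color cues is omitted, and all parameter values are assumptions:

```python
import cv2
import numpy as np

def motion_saliency(prev_gray, cur_gray, bins=64):
    """Saliency map in [0, 1]: globally rare flow magnitudes score high."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag = cv2.GaussianBlur(mag, (9, 9), 0)   # suppress background texture
    hist, edges = np.histogram(mag, bins=bins)
    prob = hist / hist.sum()
    # Map every pixel's magnitude to its histogram bin; frequent (background)
    # magnitudes get low saliency, rare (foreground) ones get high saliency.
    idx = np.clip(np.digitize(mag, edges[1:-1]), 0, bins - 1)
    saliency = 1.0 - prob[idx]
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```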

20.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple sparse camera based free view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow selected objects by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets to generate novel perspective corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent textured mapped billboards are used to render the moving objects at their correct locations and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
