Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Moving vehicles are detected and tracked automatically in monocular image sequences from road traffic scenes recorded by a stationary camera. In order to exploit the a priori knowledge about the shape and motion of vehicles in traffic scenes, a parameterized vehicle model is used for an intraframe matching process, and a recursive estimator based on a motion model is used for motion estimation. An interpretation cycle supports the intraframe matching process with a state MAP-update step. Initial model hypotheses are generated by an image segmentation component which clusters coherently moving image features into candidate representations of a moving vehicle. The inclusion of an illumination model allows shadow edges of the vehicle to be taken into account during matching. Only such an elaborate combination of techniques has enabled us to track vehicles under complex illumination conditions and over long monocular image sequences (more than 400 frames). Results on various real-world road traffic scenes are presented, and open problems as well as future work are outlined.

2.
Video-based detection of traffic flow parameters using vehicle-type clustering   Cited by: 1 (self-citations: 0, others: 1)
吴聪, 李勃, 董蓉, 陈启美. 《自动化学报》, 2011, 37(5): 569-576
Monocular cameras lose depth information, and the scenes viewed by PTZ (pan/tilt/zoom) cameras vary widely, leading to large errors in extracted traffic flow parameters. A traffic-flow parameter detection method based on vehicle-type clustering is proposed. Within an improved camera self-calibration imaging model, a perspective-projection invariant, the "pseudo-shape feature", is extracted under varying PTZ parameters. Vehicle types are then clustered with a contribution-rate algorithm, and the mean height of each vehicle type is substituted for the actual height to obtain vehicle length and width, from which road space occupancy is computed and speed detection accuracy is improved. Tests show good real-time performance; the vehicle-type clustering adapts to different scenes, with an average accuracy of 96.9% and vehicle-length accuracy better than 90%.

3.
Vehicle type recognition in video images   Cited by: 2 (self-citations: 0, others: 2)
This paper presents a method for detecting vehicles in traffic image sequences captured by a fixed monocular camera. The processing consists of three steps: reconstructing the natural background without moving objects and segmenting the image; camera calibration; and tracking target regions and recognizing vehicle types. Experiments show that the method is feasible.

4.
5.
Template matching is used to register vehicle images acquired by homologous video sensors, and the registered images are then fused with a weighted-average fusion algorithm. This yields a more accurate, complete, and reliable description of the same scene and target, enabling precise extraction of vehicle image features. Matlab simulation experiments show that the method extracts vehicle image features more effectively and describes vehicle shape accurately.
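The register-then-fuse pipeline in this entry can be sketched in a few lines: brute-force normalized cross-correlation over a small shift range registers the two same-sensor images, and a weighted average fuses them. The shift range, weights, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def register_and_fuse(ref, img, w=0.5):
    """Find the integer shift of `img` against `ref` by exhaustive
    normalized cross-correlation over a small search window, then
    fuse the aligned images with a weighted average."""
    best, best_shift = -2.0, (0, 0)
    a = ref - ref.mean()
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            b = shifted - shifted.mean()
            ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if ncc > best:
                best, best_shift = ncc, (dy, dx)
    aligned = np.roll(np.roll(img, best_shift[0], axis=0), best_shift[1], axis=1)
    fused = w * ref + (1 - w) * aligned   # weighted-average fusion
    return best_shift, fused
```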

6.
A video speed-measurement algorithm based on the Radon transform   Cited by: 1 (self-citations: 0, others: 1)
To extract vehicle speed automatically from video surveillance images, a video speed-measurement algorithm based on the Radon transform is proposed. Exploiting the spatio-temporal character of vehicle trajectories, road traffic markings are used to establish a distance mapping between image and road, simplifying the conditions and process of on-site calibration. A temporal-stack imaging method builds the space-time representation of vehicle trajectories, and Radon-transform-based image processing then measures the speed of moving vehicles. The algorithm is robust and has much lower computational complexity than traditional methods.
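The temporal-stack idea can be sketched as follows: a vehicle leaves a bright streak in a time-space image (rows = frames, columns = road positions), and the streak's slope in pixels per frame is its speed. Scoring candidate slopes by summing intensity along lines is the discrete, slope-parameterized analogue of the Radon transform the entry uses. The synthetic stack and candidate-speed list are assumptions for illustration only.

```python
import numpy as np

def estimate_speed(stack, speeds):
    """Score each candidate speed by summing intensity along the line
    x = x0 + v*t for every starting column x0 (a discrete Radon-style
    projection) and return the speed whose best line is brightest."""
    T, X = stack.shape
    best_v, best_score = None, -1.0
    for v in speeds:
        score = 0.0
        for x0 in range(X):
            xs = (x0 + v * np.arange(T)).round().astype(int)
            valid = (xs >= 0) & (xs < X)
            s = stack[np.arange(T)[valid], xs[valid]].sum()
            score = max(score, s)          # brightest line at this slope
        if score > best_score:
            best_v, best_score = v, score
    return best_v
```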

7.
A new algorithm capable of estimating disparity gradients to produce accurate dense disparities is proposed. Such a disparity gradient plays a critical role in acquiring accurate disparities for scenes containing many different object shapes. The target is the road traffic scene, because it contains varied objects, including the road surface, vehicles, pedestrians, sidewalks, and walls. In this paper, we adopt several methods, such as initial matching cost computation, scanline optimization, left/right consistency checking, and cost aggregation; however, disparity accuracy is only slightly improved by a simple combination of these methods. Disparity quality depends decisively on the use of disparity gradients. Accordingly, in the proposed algorithm, cost aggregation is performed along the direction of the estimated disparity gradient in a disparity space image, which improves disparity quality significantly. Because this cost aggregation is time consuming, we designed a new 2D integral cost technique to reduce the time required. The robustness of the proposed algorithm is demonstrated through disparity maps obtained from standard benchmark images, indoor images, and outdoor images of various road traffic scenes.
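The "2D integral cost technique" this abstract mentions for accelerating cost aggregation is, in essence, a summed-area table: cumulative sums are precomputed once, after which any rectangular window sum costs four lookups. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def integral_image(cost):
    """Summed-area table with a zero top row and left column, so any
    rectangular sum needs only four lookups."""
    ii = np.zeros((cost.shape[0] + 1, cost.shape[1] + 1))
    ii[1:, 1:] = cost.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of cost[r0:r1, c0:c1] in O(1) via inclusion-exclusion."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

With the table in hand, aggregating a matching cost over any window no longer depends on the window's size, which is what makes large aggregation regions affordable.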

8.
廖威, 翁璐斌, 于俊伟, 田原. 《计算机应用》, 2011, 31(6): 1709-1712
For situations in which scene matching cannot be used for navigation and positioning and inertial attitude information cannot be exploited effectively, a method for estimating an aircraft's absolute attitude and position based on a terrain elevation model is proposed. The method first acquires real-time stereo image pairs from an onboard downward-looking camera system and flight-speed information from sensors, and reconstructs the terrain below the aircraft by modifying a two-view motion model. A rigid-body constraint on the 3D reconstruction then yields a method for matching the onboard terrain elevation model data, which is used to estimate the aircraft's absolute pose in the world coordinate frame. Simulation results show that the improved two-view motion model is more accurate and better suited to pose estimation in the world frame.

9.
Targeting the characteristics of traffic images, a method combining gray-level analysis with edge detection is proposed to obtain traffic parameters such as flow volume and speed. A Vanguard Genetic Algorithm is applied to image threshold segmentation: it searches for the globally optimal threshold, so that vehicles of different gray levels can be separated from the background fairly accurately. The characteristics of the wavelet transform are analyzed, and the wavelet transform is applied to detect vehicle edges in traffic images. Frame averaging is used to process the video stream, reducing errors caused by camera jitter or small background changes. Experimental results show that the method is simple and effective and achieves satisfactory detection results.
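The thresholding step can be illustrated with a plain genetic algorithm; the "vanguard" variant's specifics are not given in the abstract, so this sketch uses ordinary elitist selection, midpoint crossover, and jitter mutation, with Otsu's between-class variance as the fitness to maximize. All GA parameters are invented for illustration.

```python
import random

def between_class_variance(hist, t):
    """Otsu's criterion: weighted squared distance between class means."""
    total = sum(hist)
    w0 = sum(hist[:t]); w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * hist[i] for i in range(t)) / w0
    mu1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def ga_threshold(hist, pop=20, gens=40, seed=0):
    """Search for a globally good threshold with a simple GA."""
    rng = random.Random(seed)
    levels = len(hist)
    population = [rng.randrange(1, levels) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: -between_class_variance(hist, t))
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                  # midpoint crossover
            if rng.random() < 0.3:                # jitter mutation
                child = min(levels - 1, max(1, child + rng.randrange(-5, 6)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda t: between_class_variance(hist, t))
```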

10.
11.
《Real》2000,6(3):241-249
Real-time measurement and analysis of road traffic flow parameters such as volume, speed, and queue length are increasingly required for traffic control and management. Image processing is considered an attractive and flexible technique for automatic analysis of road traffic scenes and for the measurement and collection of road traffic parameters. In this paper, the authors describe a novel image-processing-based approach for the analysis of road traffic scenes. Combined background differencing and edge detection techniques are used to detect vehicles and measure traffic parameters such as vehicle count and queue length. A RISC-based multiprocessor system was designed to enable real-time execution of the algorithm; it has nine processing modules connected in a parallel pipeline fashion. Results show that the multiprocessor system is able to measure traffic parameters in real time. Results are presented from real tests of the system, analysing traffic scenes on the highways of Singapore.
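The background-differencing step that drives the vehicle count can be sketched as follows; the threshold, the 4-connected blob-counting rule, and the synthetic frames are illustrative assumptions, not the authors' nine-module pipeline.

```python
import numpy as np

def count_vehicles(frame, background, thresh=30):
    """Mark pixels deviating from the background model by more than
    `thresh` as foreground, then count 4-connected foreground blobs."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
    seen = np.zeros_like(fg, dtype=bool)
    h, w = fg.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if fg[r, c] and not seen[r, c]:
                count += 1
                stack = [(r, c)]
                while stack:                      # flood-fill one blob
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and fg[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count
```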

12.
Feature model extraction and tracking in aerial image sequences   Cited by: 1 (self-citations: 0, others: 1)
For aerial image sequences in which no prior knowledge of the model is available and the imaged scene is constantly refreshed, the image is first segmented into independent regions and the edges within each segmented region are extracted with an improved Canny operator. A region feature comparison factor is proposed, and on this basis effective feature-model extraction and tracking methods are given; it is further analyzed how the motion of the feature model through the aerial sequence can be used to estimate the flight trajectory of an unmanned aerial vehicle. Analysis of actually captured aerial image sequences shows that the method is effective and provides a sound basis for estimating UAV flight trajectories from aerial image sequences.

13.
This paper proposes an assistance system for fast recognition of road signs. Modeled on human visual and cognitive patterns, the system can recognize traffic signs fairly quickly from a moving vehicle. It acquires images from a camera mounted on the vehicle, uses subsampling to remove image ghosting and reduce the amount of data to be processed, then searches each frame for traffic signs based on their color and shape features, and finally localizes and recognizes the signs by template matching.

14.
Model-Based Localisation and Recognition of Road Vehicles   Cited by: 5 (self-citations: 2, others: 5)
Objects are often constrained to lie on a known plane. This paper concerns the pose determination and recognition of vehicles in traffic scenes, which under normal conditions stand on the ground plane. The ground-plane constraint reduces the localisation and recognition problem from 6 dof to 3 dof, and significantly reduces the pose redundancy of matches between 2D image lines and 3D model lines. A form of the generalised Hough transform is used in conjunction with explicit probability-based voting models to find consistent matches and identify approximate poses. The algorithms are applied to images of several outdoor traffic scenes, with successful results. The work reported in this paper illustrates the efficiency and robustness of context-based vision in a practical application of computer vision. Multiple cameras may be used to overcome the limitations of a single camera; data fusion in the proposed algorithms is shown to be simple and straightforward.
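Under the ground-plane constraint, pose reduces to (x, y, θ), and the Hough-style voting can be sketched for the orientation component alone: each pairing of a projected model line with an image line votes for the rotation θ that would align them, and the accumulator peak gives the vehicle's heading. This is a toy illustration with made-up angle lists and uniform votes, not the paper's probability-weighted voting model.

```python
import numpy as np

def hough_orientation(model_angles, image_angles, bins=36):
    """Accumulate votes for the planar rotation (degrees, mod 180) that
    maps each model line orientation onto each image line orientation;
    return the center of the winning accumulator bin."""
    acc = np.zeros(bins)
    for m in model_angles:
        for i in image_angles:
            theta = (i - m) % 180.0
            acc[int(theta / 180.0 * bins) % bins] += 1
    return (np.argmax(acc) + 0.5) * (180.0 / bins)
```

Correct model/image pairings all vote for the same θ, so the true heading accumulates a peak even though every incorrect pairing also casts one (scattered) vote.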

15.
Robust camera pose and scene structure analysis for service robotics   Cited by: 1 (self-citations: 0, others: 1)
Successful path planning and object manipulation in service robotics applications rely both on a good estimate of the robot's position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper, a robust real-time system for camera pose and scene structure estimation is proposed. First, the pose of the camera is estimated through the analysis of so-called tracks, which include key features from the imaged scene and geometric constraints used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. the robot, the scene is analyzed via a robust depth segmentation and object classification approach. To segment object depth reliably, a feedback control technique at the image-processing level is used to improve the robustness of the robotic vision system against external influences such as cluttered scenes and variable illumination. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. To control the robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated experimentally within a robotic collision avoidance system.

16.
Objective: In traffic scenes, vehicles that are too close together or occlude one another tend to merge during recognition, making accurate detection of target vehicles difficult; an effective and reliable mechanism for segmenting occluded vehicles is therefore needed. Method: Vehicle regions are first determined on the basis of image blocking, and multi-vehicle judgments are made from each region's aspect ratio and duty ratio. A "seven-grid" concave-region detection algorithm is then proposed to find the concave regions between vehicles, and the occlusion region is obtained by matching corresponding concave regions. Finally, the vehicle edge contours detected within the occlusion region serve as segmentation curves, separating the occluded vehicles. Results: Experiments show that the algorithm achieves a high recognition rate while meeting real-time requirements and can accurately segment multiple mutually occluding vehicles along their edge contours. Compared with other algorithms, it improves both segmentation success rate and segmentation precision, with recall and precision each reaching 90%. Conclusion: A new method for segmenting occluded vehicles is proposed; it effectively solves the problems of difficult and inaccurate segmentation and shows strong adaptability.

17.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.

18.
A video traffic data collection method based on multi-feature fusion   Cited by: 1 (self-citations: 0, others: 1)
A video traffic data collection method based on multi-feature fusion is proposed. The core idea is to place virtual loops in the image: a vehicle driving over a virtual loop causes pixel changes, and these changes are recognized to detect the vehicle and estimate its speed. Compared with existing techniques, the contributions of this paper are: 1) vehicles are detected by jointly exploiting foreground area, texture change, and pixel motion within the virtual loop, and an effective multi-feature fusion method is proposed that significantly improves detection accuracy; 2) vehicle speed is estimated from the pixel motion vectors within a single virtual loop, avoiding the mismatching problem of dual-loop speed measurement. Test results show that the algorithm detects vehicles and estimates speeds accurately under complex, varied traffic scenes and weather conditions. Based on this work, an embedded traffic video detector was developed and deployed at intersections for long-term traffic data collection, providing a decision basis for traffic signal control and traffic pattern analysis.
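The virtual-loop fusion idea can be sketched as a weighted combination of per-loop cues. The three features below mirror those named in the abstract (foreground area, texture change, pixel motion), but the weights, threshold, and normalizations are invented for illustration and are not the paper's fusion rule.

```python
import numpy as np

def loop_features(prev_roi, cur_roi, bg_roi, fg_thresh=25):
    """Three cues inside the virtual loop: foreground area ratio against
    the background model, texture change versus the previous frame, and
    mean pixel-motion magnitude (all scaled to roughly [0, 1])."""
    fg_ratio = np.mean(np.abs(cur_roi - bg_roi) > fg_thresh)
    texture = np.abs(cur_roi - prev_roi).std() / 255.0
    motion = np.abs(cur_roi - prev_roi).mean() / 255.0
    return fg_ratio, texture, motion

def vehicle_present(prev_roi, cur_roi, bg_roi,
                    weights=(0.6, 0.2, 0.2), thresh=0.3):
    """Fuse the cues with a weighted sum and threshold the score."""
    f = loop_features(prev_roi.astype(float), cur_roi.astype(float),
                      bg_roi.astype(float))
    return sum(w * x for w, x in zip(weights, f)) > thresh
```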

19.
A novel algorithm is presented for detecting vehicles' average velocity through automatic, dynamic camera calibration based on the dark channel in homogeneous fog. A camera fixed in the middle of the road is calibrated under homogeneous fog conditions and can then be used in any weather. Unlike other research on velocity calculation, our traffic model includes only the road plane and the vehicles in motion. Painted lane markings in the scene image are ignored, because sometimes there are none, especially in unstructured traffic scenes. Once the camera is calibrated, the scene distance is obtained and used to calculate vehicles' average velocity. The algorithm has three major steps. First, the current video frame is analysed to discriminate the current weather condition using an area search method (ASM): in homogeneous fog, the average pixel value from top to bottom of the selected area varies in the form of an edge spread function (ESF). Second, the road surface plane is found from an activity map created by computing the expected absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained via the dark channel prior, and the camera's intrinsic and extrinsic parameters are calculated from a calibration formula derived from the monocular model and the transmission image; several key points with particular transmission values on the road surface are selected to generate the necessary calibration equations. Vehicles' pixel coordinates are transformed to camera coordinates, the distance between each vehicle and the camera is calculated, and each vehicle's average velocity is obtained. Calibration results and velocity data for nine vehicles under different weather conditions are given; comparison with other algorithms verifies the effectiveness of the approach.
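The dark channel at the heart of the calibration can be computed directly: the per-pixel minimum over color channels, followed by a local minimum filter. This is a generic sketch of the prior itself (the patch size and naive filtering loop are assumptions), not the paper's full calibration chain.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over the color channels, then a local min filter
    over a (2*patch+1)-square window: in haze-free regions the result
    is near zero, while fog lifts it toward the airlight."""
    mins = img.min(axis=2)                       # min over channels
    h, w = mins.shape
    padded = np.pad(mins, patch, mode="edge")
    out = np.empty_like(mins)
    for r in range(h):
        for c in range(w):                       # local min filter
            out[r, c] = padded[r:r + 2 * patch + 1,
                               c:c + 2 * patch + 1].min()
    return out
```

The transmission image the abstract refers to is then derived from this dark channel, which is why a foggy frame (uniformly lifted dark channel) carries calibration information that a clear frame does not.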

20.
A critical challenge for autonomous underwater vehicles (AUVs) is the docking operation, needed for applications such as resting under the mother ship, recharging batteries, transferring data, and downloading new missions. The final stage of docking at a unidirectional docking station requires the AUV to approach while keeping the vehicle's pose (position and orientation) within an allowable range; achieving the appropriate pose therefore demands a sensor unit and control system with high accuracy and robustness against the disturbances present in real-world underwater environments. This paper presents a vision-based AUV docking system consisting of a 3D model-based matching method and a real-time multi-step genetic algorithm (GA) for real-time estimation of the robot's relative pose. Experiments using a remotely operated vehicle (ROV) with dual-eye cameras and a separate 3D marker were conducted in a small indoor pool. The results confirmed that the proposed system provides high homing accuracy and robustness against disturbances that affect not only the captured camera images but also the movement of the vehicle. A successful docking operation using stereo vision, novel in the underwater vehicle environment, was achieved, demonstrating the effectiveness of the proposed system for AUV docking.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号