Similar Articles
20 similar articles found.
1.
Real-Time Obstacle Detection from Laser Range-Finder Range Images: Algorithm Study and Error Analysis   Cited by: 7 (self-citations: 1, others: 6)
张奇  顾伟康 《机器人》1997,19(2):122-128,133
Building on a description of the coordinate-transform method for real-time obstacle detection from range images acquired by the laser range-imaging radar of an autonomous land vehicle, this paper studies in depth how errors in the radar's vertical scan center angle, errors in the vehicle's three attitude angles, and the ambiguity interval of the laser range-imaging radar affect the obstacle detection results.
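The abstract does not give the transform explicitly; as a hedged sketch of the coordinate-transform idea (not the paper's exact formulation), the snippet below maps one range reading with its scan angles into world coordinates through the vehicle's roll/pitch/yaw, which makes it easy to see how attitude-angle errors propagate into obstacle positions. All names and the Z-Y-X rotation convention are assumptions.

```python
import numpy as np

def rotation_rpy(roll, pitch, yaw):
    """Rotation from vehicle frame to world frame (Z-Y-X convention, an assumption)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def range_reading_to_world(r, az, el, attitude, sensor_offset):
    """Map one range reading (range r, azimuth az, elevation el, radians) into world
    coordinates, given the vehicle attitude (roll, pitch, yaw) and the sensor mount offset."""
    # Point in the sensor frame (spherical -> Cartesian).
    p_sensor = r * np.array([np.cos(el) * np.cos(az),
                             np.cos(el) * np.sin(az),
                             np.sin(el)])
    # Sensor frame assumed aligned with the vehicle frame up to a fixed offset.
    p_vehicle = p_sensor + np.asarray(sensor_offset)
    return rotation_rpy(*attitude) @ p_vehicle

# Example: a 0.1 rad pitch error shifts the computed obstacle position.
p_true = range_reading_to_world(20.0, 0.0, -0.05, (0.0, 0.0, 0.0), (0.0, 0.0, 1.5))
p_err  = range_reading_to_world(20.0, 0.0, -0.05, (0.0, 0.1, 0.0), (0.0, 0.0, 1.5))
print(p_true, p_err)
```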

2.
The automation of rotorcraft low-altitude flight presents challenging problems in control, computer vision, and image understanding. A critical element in this problem is the ability to detect and locate obstacles, using on-board sensors, and to modify the nominal trajectory. This requirement is also necessary for the safe landing of an autonomous lander on Mars. This paper examines some of the issues in the location of objects, using a sequence of images from a passive sensor, and describes a Kalman filter approach to estimate range to obstacles. The Kalman filter is also used to track features in the images, leading to a significant reduction of search effort in the feature-extraction step of the algorithm. The method can compute range for both straight-line and curvilinear motion of the sensor. An experiment is designed in the laboratory to acquire a sequence of images along with the sensor motion parameters under conditions similar to helicopter flight. The paper presents range estimation results using this imagery.
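The paper's filter equations are not reproduced in the abstract; the following is a minimal sketch of the general idea, a scalar Kalman filter that refines a range estimate from noisy per-frame observations. The state, noise levels, and measurement model are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def kalman_range(observations, r0=50.0, p0=100.0, q=0.5, r_meas=4.0):
    """Scalar Kalman filter: state = range to an obstacle (assumed slowly varying).
    observations: per-frame noisy range measurements (e.g. derived from feature motion)."""
    x, p = r0, p0                      # state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q                      # predict: range changes slowly, add process noise
        k = p / (p + r_meas)           # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: noisy observations of an obstacle at 42 m.
rng = np.random.default_rng(0)
z = 42.0 + rng.normal(0.0, 2.0, size=30)
print(kalman_range(z)[-1])
```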

3.
High-resolution terrain map from multiple sensor data   Cited by: 3 (self-citations: 0, others: 3)
The authors present 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain using multiple sensors. They have developed the locus method to model the rugged terrain. The locus method exploits sensor geometry to efficiently build a terrain representation from multiple sensor data. The locus method is used to estimate the vehicle position in the digital elevation map (DEM) by matching a sequence of range images with the DEM. Experimental results from large-scale real and synthetic terrains demonstrate the feasibility and power of the 3-D mapping techniques for rugged terrain. In real-world experiments, a composite terrain map was built by merging 125 real range images. Using synthetic range images, a composite map of 150 m was produced from 159 images. With the proposed system, mobile robots operating in rugged environments can build accurate terrain models from multiple sensor data.
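The locus method itself is not spelled out in the abstract, so the sketch below only illustrates the related DEM-matching step with a naive grid search over a synthetic elevation map; it is not the locus method, and all parameters are assumptions.

```python
import numpy as np

def match_profile_to_dem(dem, profile, stride=1):
    """Toy position estimation: slide a measured 1-D elevation profile over the DEM rows
    and return the (row, col) offset with the smallest sum of squared differences.
    Illustrative only; the locus method exploits sensor geometry instead."""
    rows, cols = dem.shape
    n = len(profile)
    best, best_pos = np.inf, (0, 0)
    for r in range(0, rows, stride):
        for c in range(0, cols - n + 1, stride):
            err = np.sum((dem[r, c:c + n] - profile) ** 2)
            if err < best:
                best, best_pos = err, (r, c)
    return best_pos, best

# Example on synthetic terrain.
rng = np.random.default_rng(1)
dem = rng.normal(0.0, 1.0, (50, 80)).cumsum(axis=1)     # smooth-ish synthetic elevations
true_r, true_c = 20, 30
profile = dem[true_r, true_c:true_c + 15] + rng.normal(0, 0.05, 15)
print(match_profile_to_dem(dem, profile))
```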

4.
This paper describes an on-board vision sensor system developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages over sensors such as radar, sonar, and laser range finders, including size, weight, and power consumption. A vision sensor is also uniquely suited for tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field-programmable gate array (FPGA)-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system utilizes hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user, and allowing user input such as specifying targets to track. Since this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications.
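As a hedged software reference for one of the named building blocks, the snippet below implements the rank transform (each pixel replaced by the count of darker neighbours); the paper implements this and the other stages in FPGA hardware, and the window size here is an assumption.

```python
import numpy as np

def rank_transform(img, radius=2):
    """Rank transform: each pixel is replaced by the number of neighbours in a
    (2*radius+1)^2 window whose intensity is below the centre pixel.
    Software reference for the idea; the paper realises it in FPGA hardware."""
    img = np.asarray(img, dtype=np.float32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint16)
    padded = np.pad(img, radius, mode='edge')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            out += (shifted < img).astype(np.uint16)
    return out

# Example on a random 8-bit image.
rng = np.random.default_rng(2)
print(rank_transform(rng.integers(0, 256, (16, 16)))[:3, :3])
```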

5.
The iterative closest point (ICP) algorithm represents an efficient method to establish an initial set of possible correspondences between two overlapping range images. An inherent limitation of the algorithm is the introduction of false matches, a problem that has been tackled by a variety of schemes mainly based on local invariants described in a single coordinate frame. In this paper we propose using global rigid motion constraints to deal with false matches. Such constraints are derived from geometric properties of correspondence vectors bridging the points described in different coordinate frames before and after a rigid motion. In order to accurately and efficiently estimate the parameters of interest, the Monte Carlo resampling technique is used and motion parameter candidates are then synthesised by a median filter. The proposed algorithm is validated based on both synthetic data and real range images. Experimental results show that the proposed algorithm has advantages over existing registration methods concerning robustness, accuracy, and efficiency.
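For context, here is a minimal sketch of a plain point-to-point ICP iteration (nearest-neighbour correspondences plus an SVD rigid fit); it does not include the paper's global rigid-motion constraints, Monte Carlo resampling, or median-filter synthesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Basic point-to-point ICP: repeatedly match each source point to its nearest
    destination point and solve the rigid motion by SVD (Kabsch)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                        # tentative correspondences
        matched = dst[idx]
        cs, cd = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        src = src @ R.T + t                             # apply the incremental motion
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: a small known rotation/translation that ICP should roughly recover.
rng = np.random.default_rng(3)
dst = rng.uniform(-1, 1, (200, 3))
ang = 0.1
R_true = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
t_true = np.array([0.05, -0.02, 0.01])
src = (dst - t_true) @ R_true                           # so that dst = R_true @ src + t_true
R_est, t_est = icp(src, dst)
print(np.round(R_est, 3), np.round(t_est, 3))
```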

6.
We present work on analyzing 3-D point clouds of a small utility vehicle for purposes of humanoid robot driving. The scope of this work is limited to a subset of ingress-related tasks including stepping up into the vehicle and grasping the steering wheel. First, we describe how partial point clouds are acquired from different perspectives using sensors, including a stereo camera and a tilting laser range finder. For finer detail and a larger model than one sensor view alone can capture, a Kinect Fusion (Izadi et al. in KinectFusion: realtime 3D reconstruction and interaction using a moving depth camera, 2011)-like algorithm is used to integrate the stereo point clouds as the sensor head is moved around the vehicle. Second, we discuss how individual sensor views can be registered to the overall vehicle model to provide context, and present methods to estimate both statically and dynamically several geometric parameters critical to motion planning: (1) the floor height and boundaries defined by the seat and the dashboard, and (2) the steering wheel pose and dimensions. Results are compared using the different sensors, and the usefulness of the estimated quantities for motion planning is also addressed.
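One of the listed quantities, the floor height, can be illustrated with a hedged RANSAC plane-fitting sketch on a point cloud; the thresholds, the "near-horizontal" test, and the function names are assumptions and need not match the paper's estimator.

```python
import numpy as np

def ransac_floor_height(points, iters=500, tol=0.02, seed=None):
    """Fit a dominant near-horizontal plane by RANSAC and report its mean height.
    points: (N, 3) array with z up. Thresholds are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue
        n = n / norm
        if abs(n[2]) < 0.9:                     # keep only roughly horizontal candidates
            continue
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return float(points[best_inliers][:, 2].mean()), best_inliers

# Example: synthetic floor at z = 0.35 m plus clutter.
rng = np.random.default_rng(4)
floor = np.c_[rng.uniform(0, 2, (800, 2)), np.full(800, 0.35) + rng.normal(0, 0.005, 800)]
clutter = rng.uniform(0, 2, (200, 3))
print(round(ransac_floor_height(np.vstack([floor, clutter]), seed=4)[0], 3))
```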

7.
Locating sensors in an indoor environment is a challenging problem due to insufficient distance measurements caused by the short range of ultrasound and incorrect distance measurements caused by the multipath effect of ultrasound. In this paper, we propose a virtual ruler approach, in which a vehicle equipped with multiple ultrasound beacons travels around the area to measure distances between pairs of sensors. The virtual ruler can not only obtain sufficient pairwise distances between sensors, but can also eliminate incorrect distances in the distance-measurement phase of sensor localization. We propose to measure the distance between pairs of sensors from multiple perspectives using the virtual ruler and to filter incorrect values through a statistical approach. By assigning measured distances confidence values, the localization algorithm can intelligently localize each sensor based on high-confidence distances, which greatly improves localization accuracy. Our performance evaluation shows that the proposed approach achieves better localization results than previous approaches in an indoor environment.
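A hedged sketch of the final localization step is given below: confidence-weighted least-squares multilateration from beacon positions and measured distances, using the standard linearisation that subtracts one reference equation. The weighting scheme is an assumption; the paper derives confidences from its statistical filtering.

```python
import numpy as np

def weighted_multilateration(beacons, dists, weights):
    """Estimate a 2-D position from distances to known beacon positions, weighting
    each measurement by a confidence value (higher = more trusted)."""
    beacons = np.asarray(beacons, float)
    dists = np.asarray(dists, float)
    w = np.asarray(weights, float)[1:]               # weights for the linearised rows
    x0, y0, d0 = beacons[0, 0], beacons[0, 1], dists[0]
    # (x - xi)^2 + (y - yi)^2 = di^2; subtract the i = 0 equation to make it linear.
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d0**2 - dists[1:]**2
         + beacons[1:, 0]**2 - x0**2
         + beacons[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return pos

# Example: four beacons, one distance corrupted by multipath but down-weighted.
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
true = np.array([2.0, 3.0])
d = [np.hypot(*(true - np.array(b))) for b in beacons]
d[3] += 1.5                                          # multipath-corrupted reading
print(weighted_multilateration(beacons, d, [1.0, 1.0, 1.0, 0.1]))
```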

8.
We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time. © 2006 Wiley Periodicals, Inc.

9.
The idea of exploiting the complementary information obtained from multiple sensors to overcome the limitations of a single sensor, and thereby improve overall system performance, has been widely applied in military, medical, satellite, and other fields. Fusing visible and infrared images can likewise improve target detection in vision applications, reduce false-alarm and missed-detection rates, and increase accuracy and operational efficiency. In registering infrared and visible images, the different imaging principles of the two sensors lead to large gray-level differences between the resulting images and make features hard to match. This can be addressed by using a feature common to both infrared and visible images, namely edge contours: the Canny edge detector extracts the most basic and stable image features, the SURF detector then extracts and matches feature points in the edge maps, and finally RANSAC refines the matches. Because edges are stable features in both infrared and visible images, and extracting features from the edge maps greatly reduces computation and raises the matching rate, a fairly accurate transformation between the infrared and visible images can ultimately be obtained.
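A hedged OpenCV sketch of the described pipeline follows. ORB is substituted for SURF because SURF lives in opencv-contrib and is patent-encumbered; the Canny thresholds and the RANSAC reprojection threshold are assumptions.

```python
import cv2
import numpy as np

def register_ir_visible(ir_img, vis_img):
    """Sketch of the pipeline: Canny edge maps from the infrared and visible images,
    feature detection/matching on the edge maps, RANSAC transform estimation.
    ORB is used here instead of SURF; thresholds are illustrative."""
    ir_edges = cv2.Canny(ir_img, 50, 150)
    vis_edges = cv2.Canny(vis_img, 50, 150)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(ir_edges, None)
    kp2, des2 = orb.detectAndCompute(vis_edges, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask

# Usage (paths are placeholders):
# ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
# vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)
# H, _ = register_ir_visible(ir, vis)
```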

10.
One of the most difficult problems facing an autonomous ground vehicle navigating off-road is terrain understanding: analyzing the perceived off-road terrain and producing a traversability assessment suitable for autonomous navigation. This paper introduces the concept of relative invariance for off-road elevation terrain and uses this property to extract terrain features that are relatively invariant over a certain range of scales, such as slope, relief, and roughness; the features are then combined with fuzzy rules to assess terrain traversability. Off-road navigation experiments with an autonomous vehicle show that the algorithm is stable and effective and meets the needs of off-road navigation for autonomous ground vehicles.
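A toy, hedged version of the final fuzzy combination step is shown below; the membership limits and the min-rule are illustrative assumptions rather than the paper's rule base.

```python
import numpy as np

def fuzzy_low(x, good, bad):
    """Membership in 'low enough': 1 below `good`, 0 above `bad`, linear in between."""
    return np.clip((bad - x) / (bad - good), 0.0, 1.0)

def traversability(slope_deg, relief_m, roughness_m):
    """Toy fuzzy combination of the three terrain features named in the abstract.
    Membership limits and the min-rule are assumptions."""
    mu_slope = fuzzy_low(slope_deg, good=10.0, bad=30.0)
    mu_relief = fuzzy_low(relief_m, good=0.2, bad=0.8)
    mu_rough = fuzzy_low(roughness_m, good=0.05, bad=0.25)
    # Conservative rule: a cell is only as traversable as its worst feature.
    return float(np.minimum(np.minimum(mu_slope, mu_relief), mu_rough))

print(traversability(8.0, 0.15, 0.03))    # flat, smooth cell -> close to 1
print(traversability(25.0, 0.6, 0.20))    # steep, rough cell -> close to 0
```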

11.
Modern naval vessels carry multiple sensors for detecting mission targets, so the detection biases in range, bearing, and elevation must be estimated. Most existing algorithms require additional information from the sensors, such as filter gains and association covariance matrices. This paper proposes a new algorithm based on 7th-order polynomial fitting and hypothesis testing, using the Kolmogorov-Smirnov test, the chi-square test, and the t-test to statistically estimate systematic sensor biases. By comparing track data from different sensors, the algorithm obtains the detection accuracy and bias of each sensor and localizes anomalous inter-sensor biases. Finally, the effectiveness of the proposed algorithm is validated with simulated data and UAV measurement data.
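A hedged sketch of the idea follows: fit 7th-order polynomials to two sensors' tracks of the same target, then apply a t-test to the smoothed difference and a two-sample K-S test to the residuals. The exact statistics, thresholds, and data layout in the paper may differ.

```python
import numpy as np
from scipy import stats

def sensor_bias_check(t, range_a, range_b, order=7, alpha=0.05):
    """Fit order-7 polynomials to two sensors' range tracks of the same target, then
    test the smoothed difference for a systematic offset (one-sample t-test) and the
    residuals for distributional mismatch (two-sample K-S test)."""
    tn = (t - np.mean(t)) / np.std(t)                 # rescale time for a stable fit
    fit_a = np.polyval(np.polyfit(tn, range_a, order), tn)
    fit_b = np.polyval(np.polyfit(tn, range_b, order), tn)
    diff = fit_a - fit_b                              # smoothed inter-sensor difference
    res_a, res_b = range_a - fit_a, range_b - fit_b
    _, p_bias = stats.ttest_1samp(diff, 0.0)          # H0: no systematic bias
    _, p_ks = stats.ks_2samp(res_a, res_b)            # H0: same residual distribution
    return {"bias_estimate": float(diff.mean()),
            "bias_significant": bool(p_bias < alpha),
            "residuals_differ": bool(p_ks < alpha)}

# Example: sensor B reads 40 m long on an otherwise identical track.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 300)
truth = 5000.0 - 20.0 * t + 0.05 * t**2
a = truth + rng.normal(0, 5, t.size)
b = truth + 40.0 + rng.normal(0, 5, t.size)
print(sensor_bias_check(t, a, b))
```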

12.
An experimental study is made on the alignment of three autonomous air-levitated vehicles with air-jet controls to achieve an equilateral-triangle formation. The vehicles are equipped with lasers, optical sensors, radio transceivers, and on-board power sources. The attitude and displacement of each vehicle are controlled by air-jets using simple rule-based controls activated by discrete optical sensors with binary outputs arranged in a special geometric configuration for formation alignment. The main objective is to determine the feasibility of using control rules derived from the sensor data alone for formation alignment. The sensors consist of discrete, binary optical detectors arranged in a certain geometric pattern. The vehicle design, including estimation of levitation lifetime, sensor design, and vehicle excursion due to an air-jet pulse, is discussed first. Then the control rules are described in detail. The effectiveness of the proposed sensor–control combination in formation alignment is determined both experimentally and via computer simulation. © 1998 John Wiley & Sons, Inc.

13.
This paper proposes a fast moving-object detection algorithm for a rotating camera. First, a rotation-parameter model is built for the global motion of the image; SIFT feature-point correspondences are then established between adjacent frames based on motion prediction, RANSAC removes the influence of outliers, and the global motion parameters are solved by least squares for motion compensation. A residual-image-based update strategy refreshes the feature-point set in real time to adapt to background changes, and finally frame differencing yields the moving objects. The algorithm retains the strengths of SIFT itself while greatly improving detection speed. Experimental results show that the algorithm detects moving objects accurately in real time.
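A hedged OpenCV sketch of the pipeline is given below. It estimates the global motion with a RANSAC homography, which is more general than the paper's rotation-parameter model, and omits the residual-image feature-set update; thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray, curr_gray, diff_thresh=25):
    """Sketch: match SIFT features between consecutive frames, estimate global motion
    with RANSAC (homography here, rotation-parameter model in the paper), warp the
    previous frame to compensate, then frame-difference."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.7 * n.distance]            # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = curr_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, prev_warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask

# Usage (frames are placeholders):
# mask = detect_moving_objects(frame_t0_gray, frame_t1_gray)
```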

14.
Road Obstacle Avoidance Based on Circular Trajectories for a Nonholonomic Autonomous Vehicle   Cited by: 2 (self-citations: 0, others: 2)
司秉玉  吕宗涛  徐心和 《机器人》2003,25(2):147-151
This paper proposes a circular-trajectory-based road obstacle avoidance strategy for a four-wheeled nonholonomic autonomous vehicle. Obstacles on the road are first grouped into layers according to their distance from the vehicle, so that all obstacles in one layer appear within the vehicle's field of view at once. A circular-trajectory avoidance algorithm is then given, in which the vehicle travels along the circular arc determined by its starting pose and a subgoal point. Before that, the kinematic model of the four-wheeled nonholonomic vehicle is derived as the basis for the avoidance strategy. Changes in the vehicle's motion state while driving should be kept as small as possible, and the circular-trajectory strategy satisfies this requirement well. Finally, a cost function is introduced to evaluate the method, demonstrating its advantages.
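The core geometric step, the circular arc determined by the start pose and a subgoal point, can be written down directly; the sketch below returns the signed radius, centre, and swept angle. It is only the geometry: the layered obstacle grouping and the cost function from the paper are not included.

```python
import numpy as np

def arc_to_subgoal(x0, y0, theta0, xg, yg):
    """Circular arc that leaves pose (x0, y0, heading theta0) tangentially and passes
    through the subgoal (xg, yg). Returns (signed radius, centre, swept angle); a
    radius of +inf means the subgoal lies straight ahead."""
    d = np.array([xg - x0, yg - y0])
    n = np.array([-np.sin(theta0), np.cos(theta0)])     # left-pointing normal to heading
    nd = float(n @ d)
    if abs(nd) < 1e-9:
        return np.inf, None, 0.0                        # degenerate: drive straight
    radius = float(d @ d) / (2.0 * nd)                  # signed: >0 turns left, <0 right
    centre = np.array([x0, y0]) + radius * n
    a_start = np.arctan2(y0 - centre[1], x0 - centre[0])
    a_goal = np.arctan2(yg - centre[1], xg - centre[0])
    if radius > 0:                                      # counter-clockwise sweep
        sweep = (a_goal - a_start) % (2.0 * np.pi)
    else:                                               # clockwise sweep
        sweep = (a_start - a_goal) % (2.0 * np.pi)
    return radius, centre, sweep

# Example: start at the origin heading along +x, subgoal ahead and to the left.
r, c, sweep = arc_to_subgoal(0.0, 0.0, 0.0, 4.0, 2.0)
print(r, c, np.degrees(sweep), abs(r) * sweep)          # radius 5 m, ~53 deg, arc ~4.6 m
```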

15.
A vision-based approach to unsupervised learning of the indoor environment for autonomous land vehicle (ALV) navigation is proposed. The ALV may, without human involvement, navigate systematically in an unexplored closed environment, collect information about the environment's features, and then build a top-view map of the environment for later planned navigation or other applications. The learning system consists of three subsystems: a feature location subsystem, a model management subsystem, and an environment exploration subsystem. The feature location subsystem processes input images and calculates the locations of the local features and the ALV by model-matching techniques. To facilitate feature collection, two laser markers are mounted on the vehicle, projecting laser light onto the corridor walls to form easily detectable line and corner features. The model management subsystem attaches the local model to a global one by merging matched corner pairs as well as line-segment pairs. The environment exploration subsystem guides the ALV to explore the entire navigation environment by using the information of the learned model and the current ALV location. The guidance scheme is based on a pushdown transducer derived from automata theory. A prototype learning system was implemented on a real vehicle, and simulations and experimental results in real environments show the feasibility of the proposed approach.

16.
Motion estimation via cluster matching   Cited by: 4 (self-citations: 0, others: 4)
A new method for estimating displacements in computer imagery through cluster matching is presented. Without reliance on any object model, the algorithm clusters two successive frames of an image sequence based on position and intensity. After clustering, displacement estimates are obtained by matching the cluster centers between the two frames using cluster features such as position, intensity, shape and average gray-scale difference. The performance of the algorithm was compared to that of a gradient method and a block matching method. The cluster matching approach showed the best performance over a broad range of motion, illumination change and object deformation.
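A hedged sketch of the idea follows: cluster each frame on (row, column, weighted intensity) with k-means and match cluster centres between frames by nearest neighbour in feature space. The feature weighting and the use of k-means are assumptions; the paper also matches on shape and average gray-scale difference.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def frame_clusters(frame, k=8, intensity_weight=0.5):
    """Cluster one grey-level frame on (row, col, weighted intensity) and return the
    cluster centres. The feature weighting is an illustrative assumption."""
    rows, cols = np.indices(frame.shape)
    feats = np.column_stack([rows.ravel(), cols.ravel(),
                             intensity_weight * frame.ravel()]).astype(float)
    centres, _ = kmeans2(feats, k, minit='points')
    return centres

def cluster_displacements(frame0, frame1, k=8):
    """Match cluster centres of two successive frames by nearest neighbour in the
    (position, intensity) feature space and report positional displacements."""
    c0, c1 = frame_clusters(frame0, k), frame_clusters(frame1, k)
    disp = []
    for c in c0:
        j = np.argmin(np.sum((c1 - c) ** 2, axis=1))    # nearest centre in frame 1
        disp.append(c1[j, :2] - c[:2])                   # displacement in (row, col)
    return np.array(disp)

# Example: a bright blob shifted by (3, 5) pixels between frames.
f0 = np.zeros((64, 64)); f0[20:30, 10:20] = 200.0
f1 = np.zeros((64, 64)); f1[23:33, 15:25] = 200.0
print(cluster_displacements(f0, f1, k=4).round(1))
```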

17.
The selection principles for the material, shape, and size of the magnetic nails in an outdoor mobile robot magnetic navigation system are described in detail. After analyzing the characteristics of various magnetic sensors, the HMC1022 magnetoresistive sensor is chosen as the detection element for the magnetic signal. Thirteen magnetoresistive sensors form a "magnetic ruler" for measuring the field of the nails. The ruler is mounted on the vehicle's front bumper, relatively high above the ground, which solves the low mounting-height problem encountered when Hall-effect sensors are used for magnetic navigation. An improved sequence algorithm is proposed that subdivides intervals by thresholding the ratio of adjacent sensor readings, yielding more precise magnetic localization results. Experimental results verify the precise measurement characteristics of the improved sequence algorithm.
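A hedged sketch of the lateral-position step is shown below: the peak sensor gives a coarse interval and the ratio of its two neighbours interpolates within it. This is a simplified stand-in for the paper's threshold-based interval subdivision; the spacing and the synthetic field model are assumptions.

```python
import numpy as np

def magnet_lateral_position(readings, spacing=0.05):
    """Estimate the lateral offset of a magnetic nail from a row of magnetoresistive
    sensors (spacing in metres). The peak sensor gives a coarse interval; the ratio of
    its two neighbours interpolates inside it (simplified stand-in for the paper's
    threshold-based subdivision)."""
    readings = np.asarray(readings, float)
    i = int(np.argmax(readings))
    centre_x = (i - (len(readings) - 1) / 2.0) * spacing
    if i == 0 or i == len(readings) - 1:
        return centre_x                                  # nail near the array edge
    left, right = readings[i - 1], readings[i + 1]
    ratio = (right - left) / (right + left + 1e-12)      # >0: nail lies to the right
    return centre_x + 0.5 * ratio * spacing

# Example: 13 sensors, synthetic field peaking between sensors 6 and 7.
x_sensors = (np.arange(13) - 6) * 0.05
true_x = 0.018
field = 1.0 / (1.0 + ((x_sensors - true_x) / 0.06) ** 2)
print(round(magnet_lateral_position(field), 3), true_x)
```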

18.
In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
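A hedged scikit-learn sketch of the classification stage follows, with synthetic stand-ins for the paper's edge-orientation, edge-configuration, and grey-level-clustering features; real feature vectors would be computed from sub-images as the abstract describes.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the paper's features; columns loosely play the roles of
# edge-orientation statistics, edge-configuration counts, and grey-level cluster spread.
rng = np.random.default_rng(0)
n = 600
buildings = np.column_stack([rng.normal(0.7, 0.10, n),
                             rng.normal(0.6, 0.10, n),
                             rng.normal(0.3, 0.10, n)])
nature = np.column_stack([rng.normal(0.3, 0.15, n),
                          rng.normal(0.2, 0.10, n),
                          rng.normal(0.6, 0.15, n)])
X = np.vstack([buildings, nature])
y = np.r_[np.ones(n), np.zeros(n)]          # 1 = building, 0 = non-building

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# Default weak learner is a depth-1 decision tree (a decision stump).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```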

19.
20.
We provide a sensor fusion framework for solving the problem of joint ego-motion and road geometry estimation. More specifically, we employ a sensor fusion framework to make systematic use of the measurements from a forward-looking radar and camera, a steering wheel angle sensor, wheel speed sensors, and inertial sensors to compute good estimates of the road geometry and the motion of the ego vehicle on this road. In order to solve this problem we derive dynamical models for the ego vehicle, the road, and the leading vehicles. The main difference to existing approaches is that we make use of a new dynamic model for the road. An extended Kalman filter is used to fuse data and to filter measurements from the camera in order to improve the road geometry estimate. The proposed solution has been tested and compared to existing algorithms for this problem, using measurements from authentic traffic environments on public roads in Sweden. The results clearly indicate that the proposed method provides better estimates.
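A hedged sketch of an extended Kalman filter cycle for a drastically reduced road state (lateral offset, heading error, curvature) is given below; the paper's ego-vehicle, road, and leading-vehicle models are considerably richer, and every matrix here is an illustrative assumption.

```python
import numpy as np

class SimpleRoadEKF:
    """Minimal EKF sketch for the reduced state [lateral offset, heading error, curvature].
    The process model uses vehicle speed and yaw rate; the measurement is a camera
    estimate of offset and curvature. All matrices are illustrative assumptions."""

    def __init__(self):
        self.x = np.zeros(3)                          # [y_off, psi, curvature]
        self.P = np.eye(3)
        self.Q = np.diag([0.05, 0.01, 1e-4])          # process noise (assumed)
        self.R = np.diag([0.2, 5e-4])                 # camera measurement noise (assumed)

    def predict(self, speed, yaw_rate, dt):
        y, psi, c = self.x
        # Offset grows with the heading error; heading error changes with the gap
        # between the road curvature and the vehicle's own turning.
        self.x = np.array([y + speed * np.sin(psi) * dt,
                           psi + (speed * c - yaw_rate) * dt,
                           c])
        F = np.array([[1.0, speed * np.cos(psi) * dt, 0.0],
                      [0.0, 1.0, speed * dt],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = np.array([[1.0, 0.0, 0.0],                # camera measures offset ...
                      [0.0, 0.0, 1.0]])               # ... and curvature
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P

# One predict/update cycle with made-up numbers.
ekf = SimpleRoadEKF()
ekf.predict(speed=20.0, yaw_rate=0.02, dt=0.05)
ekf.update([0.3, 1e-3])
print(ekf.x.round(4))
```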

