Similar Documents
20 similar documents found (search time: 31 ms)
1.
Localisation and mapping with an omnidirectional camera become more difficult as landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide the posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimate of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the method allows the severely distorted landmark appearances viewed with an omnidirectional camera to be used for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and that the location of the landmark appearances can be estimated within 5 pixels of the ground truth in the omnidirectional image at a fairly fast speed.
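As a rough illustration of the filtering step this abstract describes, the sketch below shows how a PPCA appearance likelihood could weight pose particles. The function `predict_patch` (the expected landmark appearance for a candidate pose) and all model parameters are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def ppca_loglik(obs, mean, W, noise_var):
    # Reconstruction-error log-likelihood of a flattened patch under a PPCA
    # subspace (W: k x d with orthonormal rows); an approximation of the
    # exact PPCA marginal.
    c = obs - mean
    z = W @ c                 # coordinates in the principal subspace
    r = c - W.T @ z           # residual outside the subspace
    return -0.5 * np.dot(r, r) / noise_var

def filter_step(poses, weights, observed_patch, W, noise_var, predict_patch,
                motion_noise=0.02):
    # One particle-filter step: diffuse poses with a simple motion model,
    # re-weight each pose by how well its predicted landmark appearance
    # explains the observation, then normalise the weights.
    poses = poses + np.random.normal(0.0, motion_noise, poses.shape)
    logw = np.array([ppca_loglik(observed_patch, predict_patch(p), W, noise_var)
                     for p in poses])
    weights = weights * np.exp(logw - logw.max())   # stabilised exponentiation
    return poses, weights / weights.sum()
```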

2.
The current work addresses the problem of 3D model tracking in the context of monocular and stereo omnidirectional vision in order to estimate the camera pose. To this end, we track 3D objects modeled by line segments, because the straight line feature is often used to model the environment. Indeed, we are interested in mobile robot navigation using omnidirectional vision in structured environments. In omnidirectional vision, 3D straight lines project as conics in omnidirectional images, and under certain conditions these conics may have singularities. This paper presents two contributions. First, we propose a new spherical formulation of the pose estimation that removes these singularities, using an object model composed of lines. The theoretical formulation and the validation on synthetic images show that the new formulation clearly outperforms the former image-plane one. The second contribution is the extension of the spherical representation to the stereovision case; we consider a sensor that combines a camera and four mirrors. Results in various situations show robustness to illumination changes and local mistracking. As a final result, the proposed stereo spherical formulation allows us to localize a robot online, indoors and outdoors, whereas the classical formulation fails.
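The spherical line constraint can be sketched as follows: a 3D model line under a candidate pose spans an interpretation plane through the sphere centre, and measured points on the unit sphere should lie on the corresponding great circle. This is a minimal illustration of the geometry only, not the paper's full estimator.

```python
import numpy as np

def great_circle_residuals(sphere_pts, line_pt, line_dir, R, t):
    # A 3D model line (point line_pt, direction line_dir), moved by the
    # candidate pose (R, t), spans an "interpretation plane" through the
    # sphere centre; on the unit sphere the line projects to the great
    # circle with unit normal n. The residual of each measured sphere point
    # is its dot product with n (zero when it lies exactly on the circle).
    p = R @ line_pt + t
    d = R @ line_dir
    n = np.cross(p, d)
    n = n / np.linalg.norm(n)
    return sphere_pts @ n     # sphere_pts: N x 3 unit vectors
```

A pose estimator would minimise the sum of squared residuals over (R, t) across all tracked lines; unlike the conic formulation in the image plane, this expression stays well defined for every line configuration.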

3.
This paper proposes a new technique for vision-based robot navigation. The basic framework is to localise the robot by comparing images taken at its current location with reference images stored in its memory. In this work, the only sensor mounted on the robot is an omnidirectional camera. The Fourier components of the omnidirectional image provide a signature for the views acquired by the robot and can be used to simplify the solution to the robot navigation problem. The proposed system can calculate the robot position with variable accuracy (‘hierarchical localisation’) saving computational time when the robot does not need a precise localisation (e.g. when it is travelling through a clear space). In addition, the system is able to self-organise its visual memory of the environment. The self-organisation of visual memory is essential to realise a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.
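A minimal sketch of the Fourier-signature idea: a robot rotation circularly shifts each row of the unwrapped panorama, which changes only the phase of the row's DFT, so the magnitudes of the first few coefficients form a compact, heading-invariant place descriptor. Function names are illustrative.

```python
import numpy as np

def fourier_signature(panorama, n_coeffs=15):
    # Rotation-invariant signature of an unwrapped panoramic image: keep the
    # magnitudes of the first n_coeffs DFT coefficients of each row.
    rows = np.asarray(panorama, dtype=float)
    spectra = np.fft.rfft(rows, axis=1)
    return np.abs(spectra[:, :n_coeffs])

def signature_distance(sig_a, sig_b):
    # L1 distance between signatures; a small distance suggests the same place.
    return np.abs(sig_a - sig_b).sum()
```

Hierarchical localisation follows naturally: compare against coarse reference signatures first, and only refine with more coefficients (or more reference images) when a precise position is actually needed.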

4.
Mobile manipulators (MMs) have been increasingly adopted for machining large and complex components. To ensure machining efficiency and quality, the MMs usually need to cooperate with each other. However, due to the low motion accuracy of the mobile platform, the relative pose accuracy between the coordinated MMs is difficult to guarantee, so an effective calibration method is needed to obtain the relative pose of the MMs online. For this purpose, a vision-based fast base frame calibration method is proposed in this paper, which can quickly and accurately obtain the relative pose between the coordinated MMs. The method requires only a camera and a marker to be added; a frame network of the calibration system is then generated by installing the marker at three different positions. Based on the Perspective-n-Points principle and the robot forward kinematics, the transformation matrices of the marker frame with respect to the camera frame and the robot base frame can be determined simply by obtaining images of the marker at the different positions together with the corresponding robot joint angles. The relative pose between the base frames of the coordinated MMs is then determined by a calibration equation established from a closed chain of frames. In addition, the calibration method is capable of real-time calculation by dividing the calibration process into off-line and on-line stages. Simulation and experimental results have verified the effectiveness of the proposed method.
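A sketch of the core Perspective-n-Points step using OpenCV, assuming a pinhole model with known intrinsics `K` and distortion `dist`; the frame-chain composition at the end is schematic, with hypothetical transform names.

```python
import numpy as np
import cv2

def marker_pose_in_camera(obj_pts, img_pts, K, dist):
    # Pose of the marker frame in the camera frame via Perspective-n-Points.
    # obj_pts: Nx3 marker corners in the marker frame; img_pts: Nx2 pixels.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    T[:3, 3] = tvec.ravel()
    return T  # homogeneous transform T_camera_marker

# Schematic closure of the frame chain (hypothetical transform names): with
# T_base1_camera from a prior hand-eye calibration and T_base2_marker from the
# second robot's forward kinematics, the relative base pose would be
#   T_base1_base2 = T_base1_camera @ T_camera_marker @ np.linalg.inv(T_base2_marker)
```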

5.
Huimin Lu, Xun Li, Hui Zhang, Advanced Robotics, 2013, 27(18): 1439-1453
Topological localization is especially suitable for human–robot interaction and a robot's high-level planning, and it can be realized by visual place recognition. In this paper, bag-of-features, a popular and successful approach in the pattern recognition community, is introduced to realize robot topological localization. By combining real-time local visual features that we proposed for omnidirectional vision with support vector machines, a robust and real-time visual place recognition algorithm based on omnidirectional vision is proposed. Panoramic images from the COLD database were used in experiments to determine the best algorithm parameters and the best training condition. The experimental results show that, using our algorithm, the robot can achieve robust topological localization with a high success rate in real time.
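A compact bag-of-features pipeline of the kind described, sketched with scikit-learn (k-means vocabulary plus a linear SVM). The paper's own real-time local features for omnidirectional vision are replaced here by generic local descriptors, so this is a generic baseline, not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, k=200):
    # Cluster all local descriptors from the training panoramas into k
    # visual words.
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(
        np.vstack(descriptor_sets))

def bof_histogram(vocab, descriptors):
    # Normalised bag-of-features histogram for one image.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Training and recognition (one descriptor set per panorama, one place label):
#   X = np.array([bof_histogram(vocab, d) for d in descriptor_sets])
#   clf = LinearSVC().fit(X, place_labels)
#   place = clf.predict(bof_histogram(vocab, new_descriptors)[None])[0]
```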

6.
Knowledge, 2006, 19(5): 324-332
We present a system for visual robotic docking that uses an omnidirectional camera coupled with the actor-critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick an object from it using the pan-tilt camera mounted on the robot. We use a staged approach to solve this problem, as there are distinct subtasks and different sensors used. The robot starts by wandering randomly until the table is located via a landmark; a network trained via reinforcement then allows the robot to turn towards and approach the table. Once at the table, the robot picks the object from it. We argue that our approach has considerable potential, as it allows robot control for navigation to be learned and removes the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark.
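The actor-critic update at the heart of such a system can be sketched with linear function approximation; this is a generic TD actor-critic step under assumed feature and weight representations, not the paper's exact network.

```python
import numpy as np

def actor_critic_step(s, a, reward, s_next, theta, w, phi,
                      alpha=0.01, beta=0.05, gamma=0.95):
    # One TD actor-critic update. phi(s) returns a feature vector for a
    # state; w are the critic (value) weights and theta[a] are per-action
    # actor preference weights.
    f, f_next = phi(s), phi(s_next)
    td_error = reward + gamma * np.dot(w, f_next) - np.dot(w, f)
    w = w + beta * td_error * f                  # critic: move value toward target
    theta[a] = theta[a] + alpha * td_error * f   # actor: reinforce or suppress a
    return theta, w, td_error
```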

7.
Advanced Robotics, 2013, 27(3-4): 441-460
This paper describes an omnidirectional vision-based ego-pose estimation method for an in-pipe mobile robot. The in-pipe mobile robot has been developed for inspecting the inner surface of various pipeline configurations, such as straight pipelines, elbows and multiple branches. Because the proposed robot has four individually driven wheels, it is capable of flexible motion in various pipelines. Ego-pose estimation is indispensable for the autonomous navigation of the proposed in-pipe robot. An omnidirectional camera and four laser modules mounted on the mobile robot are used for ego-pose estimation; the omnidirectional camera is also used for investigating the inner surface of the pipeline. The pose of the robot is estimated from the relationship between the robot pose and the pixel coordinates of the four intersection points where the light rays emerging from the four laser modules intersect the inside of the pipeline. This relationship is derived from a geometric analysis of the omnidirectional camera and the four laser modules. In experiments, the performance of the proposed method is evaluated by comparing the result of our algorithm with the measurements of a specifically designed gyroscope-like sensor.

8.
A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized as a series of consecutive omnidirectional images of the horizon while the robot moves to its goal. While the robot is navigating to the goal point, the input is matched against the memorized spatio-temporal route pattern using dual active contour models, and the exact robot position and orientation are estimated from the converged shape of the active contour models.

9.
This paper presents an efficient metric for computing the similarity among omnidirectional images (image matching). The representation of image appearance is based on feature vectors that include both the chromatic attributes of color sets and their mutual spatial relationships. The proposed metric is well suited to robot navigation with omnidirectional vision sensors because it has several important properties: it is reflexive, compositional, and invariant with respect to image scaling and rotation. The robustness of the metric was repeatedly tested using omnidirectional images for a robot localization task in a real indoor environment.

10.
To ensure that the target objects captured by an omnidirectional-wheeled robot during motion fully satisfy the ideal target conditions and that the motion of target nodes is tracked accurately, a target tracking control system for omnidirectional-wheeled mobile robots based on association rule mining is designed. Following the deployment scheme of a CAN master control framework, the core control circuit is connected with the I/O tracking module as required. Taking the steering controller and the speed controller as examples, the physical capabilities of the omnidirectional-wheel control structure are refined, completing the hardware design of the robot target tracking control system. On this basis, frequent itemsets are defined; according to the described association rule features, the execution capability of the mining program instructions is determined and accurate association dispersion indices are computed, realising association rule mining for the control system. Combined with the hardware, this completes the design of the target tracking control system based on association rule mining. Comparative experiments show that, with the proposed system, the target objects captured by the moving omnidirectional-wheeled robot fully contain the ideal target, the tracking results are accurate, and the robot can track the motion of target nodes more accurately, meeting practical application requirements.

11.
Mobile robotic devices hold great promise for a variety of applications in industry. A key step in the design of a mobile robot is to determine the navigation method for mobility control. The purpose of this paper is to describe a new algorithm for omnidirectional vision navigation. A prototype omnidirectional vision system and the implementation of the navigation techniques using this modern sensor and an advanced automatic image processor are described. The significance of this work lies in the development of a novel approach, dynamic omnidirectional vision, for mobile robots and autonomous guided vehicles.

12.
A reactive navigation system for an autonomous mobile robot in unstructured dynamic environments is presented. The motion of moving obstacles is estimated for robot motion planning and obstacle avoidance. A multisensor-based obstacle predictor is utilized to obtain obstacle-motion information: sensory data from a CCD camera and multiple ultrasonic range finders are combined to predict obstacle positions at the next sampling instant, and a neural network, trained off-line, provides the desired prediction on-line in real time. The predicted obstacle configuration is employed by the proposed virtual-force-based navigation method to prevent collisions with moving obstacles. Simulation results verify the effectiveness of the proposed navigation system in an environment with multiple mobile robots or moving objects. The system was also implemented and tested on an experimental mobile robot in our laboratory; navigation results in a real environment are presented and analyzed.
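The virtual-force idea can be illustrated with a standard potential-field rule in which obstacles repel from their *predicted* positions; the gains and influence radius below are illustrative, not the paper's values.

```python
import numpy as np

def virtual_force(robot, goal, predicted_obstacles, k_att=1.0, k_rep=0.5, d0=1.5):
    # Steering command from virtual forces: the goal attracts, and each
    # obstacle, taken at its predicted next position, repels within the
    # influence radius d0. robot, goal, obstacles: 2-D position vectors.
    force = k_att * (goal - robot)                  # attraction toward goal
    for obs in predicted_obstacles:
        diff = robot - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # Classic repulsive potential gradient, growing as d -> 0.
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force
```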

13.
Tracking trajectory nodes is difficult for a robot, causing its actual trajectory to deviate from the desired one, so a trajectory tracking control method for omnidirectional mobile robots based on visual images is designed. A kinematic model of the omnidirectional mobile robot is built and used to derive a mathematical model of its trajectory. Based on this model, the robot's motion images are segmented according to visual-image partition criteria, kinematic feature parameters are extracted by separating target nodes, and trajectory node tracking is completed. Using the node tracking results, kinematic inequalities and the error vector serve as constraints for trajectory tracking control, and a sliding-mode variable structure is used to build the tracking control model, realising trajectory tracking control of the omnidirectional mobile robot. Comparative experiments show that with the proposed method, the angular velocity and linear velocity curves of the omnidirectional mobile robot fit the desired trajectory curves to more than 90%, meeting the trajectory tracking control requirements.
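A minimal kinematic sliding-mode tracking law of the kind the abstract describes, assuming a world-frame model pose_dot = u for the omnidirectional platform; the gains, the tanh smoothing, and all variable names are illustrative, not the paper's exact controller.

```python
import numpy as np

def sliding_mode_control(pose, pose_d, vel_d, lam=2.0, k=0.8, eps=0.05):
    # Kinematic sliding-mode tracking for an omnidirectional robot.
    # pose, pose_d: [x, y, theta]; vel_d: desired world-frame velocity.
    # With s = pose_d - pose as the sliding variable, the command
    # u = vel_d + lam*s + k*tanh(s/eps) drives s to zero; tanh smooths the
    # switching term to limit chattering.
    s = pose_d - pose                                  # tracking error [ex, ey, eth]
    u_world = vel_d + lam * s + k * np.tanh(s / eps)   # reaching law
    c, sn = np.cos(pose[2]), np.sin(pose[2])
    world_to_body = np.array([[c, sn, 0.0],
                              [-sn, c, 0.0],
                              [0.0, 0.0, 1.0]])
    return world_to_body @ u_world    # body-frame command [vx, vy, omega]
```

An omnidirectional drive can realise the body-frame command directly, which is why a purely kinematic surface suffices here; a differential-drive robot would need a different error formulation.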

14.
We present a novel approach to estimating depth from single omnidirectional camera images by learning the relationship between visual features and range measurements available during a training phase. Our model not only yields the most likely distance to obstacles in all directions, but also the predictive uncertainties for these estimates. This information can be utilized by a mobile robot to build an occupancy grid map of the environment or to avoid obstacles during exploration—tasks that typically require dedicated proximity sensors such as laser range finders or sonars. We show in this paper how an omnidirectional camera can be used as an alternative to such range sensors. As the learning engine, we apply Gaussian processes, a nonparametric approach to function regression, as well as a recently developed extension for dealing with input-dependent noise. In practical experiments carried out in different indoor environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system is able to estimate range with an accuracy comparable to that of dedicated sensors based on sonar or infrared light.
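A baseline version of the learning step, sketched with scikit-learn's GP regressor; the paper's input-dependent-noise extension is not replicated here, and the kernel choice is an assumption.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_range_regressor(features, ranges):
    # GP mapping omnidirectional image features -> range, with predictive
    # uncertainty. WhiteKernel models (homoscedastic) observation noise.
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gp.fit(features, ranges)

# Prediction with uncertainty:
#   mean, std = gp.predict(new_features, return_std=True)
# std can gate map updates: skip cells whose predicted range is too uncertain.
```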

15.
Self-localization is the basis of autonomous capabilities such as motion planning and decision-making for mobile robots, and omnidirectional vision is one of the most important sensors for RoboCup Middle Size League (MSL) soccer robots. Given that RoboCup competition is highly dynamic and that current self-localization methods have deficiencies, a robust and real-time self-localization algorithm based on omnidirectional vision is proposed for MSL soccer robots. Monte Carlo localization and matching optimization localization, the two most popular approaches used in MSL, are combined in our algorithm, retaining the advantages of both approaches while avoiding their disadvantages. A camera-parameter auto-adjusting method based on image entropy is also integrated to adapt the output of the omnidirectional vision system to dynamic lighting conditions. The experimental results show that global localization can be realized effectively, that highly accurate localization is achieved in real time, and that robot self-localization is robust to the highly dynamic environment with occlusions and changing lighting conditions.
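The image-entropy criterion used for camera-parameter auto-adjustment is straightforward to sketch: the adjustment searches exposure/gain settings that keep the entropy of the grey-level histogram high, so the image stays informative under changing lighting.

```python
import numpy as np

def image_entropy(gray):
    # Shannon entropy (bits) of the grey-level histogram of an 8-bit image.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))
```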

16.
Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in many mobile robotic applications. A challenging problem is to use these sensors, mounted on mobile robotic platforms, as visual compasses (VCs) that estimate the rotational motion of the camera/robot from the omnidirectional video stream. Existing VC algorithms suffer from practical limitations, since they require precise knowledge either of the camera calibration parameters or of the 3-D geometry of the observed scene. In this paper we present a novel multiple-view geometry constraint for paracatadioptric views of lines in 3-D, which we use to design a VC algorithm that requires neither the camera calibration parameters nor the 3-D scene geometry. In addition, our algorithm runs in real time, since it relies on a closed-form estimate of the camera/robot rotation, and it can address the image-feature correspondence problem. Extensive simulations and experiments with real robots demonstrate the accuracy and robustness of the proposed method.
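For contrast with the paper's calibration-free line constraint, a common baseline visual compass (explicitly not the paper's method) estimates yaw from the circular shift of the unwrapped panorama, since a pure rotation circularly shifts its columns.

```python
import numpy as np

def compass_rotation(prev_row, curr_row):
    # Baseline visual compass: prev_row and curr_row are 1-D intensity
    # profiles (e.g. each panorama averaged over image height). A yaw
    # rotation circularly shifts the profile, so the lag maximising the
    # circular cross-correlation gives the rotation (sign convention
    # depends on the unwrapping direction).
    a = prev_row - prev_row.mean()
    b = curr_row - curr_row.mean()
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=a.size)
    shift = int(np.argmax(corr))
    return 2.0 * np.pi * shift / a.size   # rotation estimate in radians
```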

17.
Omnidirectional cameras that give a 360° panoramic view of the surroundings have recently been used in many applications, such as robotics, navigation, and surveillance. This paper describes the application of parametric ego-motion estimation for vehicle detection, performing surround analysis with an automobile-mounted camera. For this purpose, the parametric planar motion model is integrated with the transformations that compensate for distortion in omnidirectional images. The framework is used to detect objects with independent motion or with height above the road. Camera calibration, as well as the approximate vehicle speed obtained from a CAN bus, is integrated with the motion information from spatial and temporal gradients using a Bayesian approach. The approach is tested for various configurations of an automobile-mounted omnidirectional camera as well as a rectilinear camera. Successful detection and tracking of moving vehicles and generation of a surround map are demonstrated for application to intelligent driver support.

18.
Mobile robot localization, which allows a robot to identify its position, is one of the main challenges in the field of robotics. In this work, we provide an evaluation of established feature extraction and machine learning techniques applied to omnidirectional images, focusing on topological mapping and localization tasks. The main contributions of this work are a novel method for localization via classification with a reject option using omnidirectional images, as well as two novel omnidirectional image data sets. The localization system was analyzed in both virtual and real environments. Based on the experiments performed, the Minimal Learning Machine with Nearest Neighbors classifier combined with Local Binary Patterns feature extraction proved to be the best combination for mobile robot localization, with an accuracy of 96.7% and an F-score of 96.6%.
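A rough sketch of the best-performing combination, with a k-nearest-neighbours classifier standing in for the Minimal Learning Machine; the reject threshold is illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_descriptor(gray, P=8, R=1.0):
    # Uniform LBP histogram of an omnidirectional image (global descriptor).
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, density=True)
    return hist

def localize_with_reject(clf, descriptor, threshold=0.6):
    # Classify the place, but reject when the classifier is not confident:
    # a wrong place label is worse for navigation than answering "unknown".
    proba = clf.predict_proba(descriptor[None])[0]
    return clf.classes_[proba.argmax()] if proba.max() >= threshold else None

# Training (one descriptor per reference image, one label per place):
#   clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, place_labels)
```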

19.
Localization is a fundamental operation for the navigation of mobile robots. Standard localization algorithms fuse external measurements of the environment with the odometric evolution of the robot pose to obtain its optimal estimate. In this work, we present a different approach that determines the pose from angular measurements obtained discontinuously in time. The presented method is based on an Extended Kalman Filter (EKF) with a state vector composed of the external angular measurements. Between actual measurements, the algorithm keeps track of the angles using the robot's odometric information. This continuous angular estimate allows the consistent use of triangulation methods to determine the robot pose at any time during its motion. The article reports experimental results showing the localization accuracy obtained with the presented approach, compared with results from the EKF algorithm using the standard pose state vector. For the experiments, a robotic platform with omnidirectional wheels is used.
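A schematic sketch of an EKF whose state is the vector of bearing angles to known beacons. The prediction step assumes a current pose estimate is available (obtained by triangulating the stored angles), and all noise parameters and helper names are illustrative.

```python
import numpy as np

def ekf_angle_predict(P, pose, d_pose, beacons, Q):
    # Propagate the bearing state between (asynchronous) measurements using
    # odometry: recompute each stored angle to beacon i from the odometric
    # pose increment; the covariance grows by the process noise Q.
    x, y, th = pose + d_pose
    alpha_pred = np.array([np.arctan2(by - y, bx - x) - th
                           for bx, by in beacons])
    return alpha_pred, P + Q

def ekf_angle_update(alpha, P, z, i, r):
    # Fuse one new angular measurement z of beacon i (variance r).
    # H picks out the i-th angle, so the update is essentially scalar.
    H = np.zeros((1, alpha.size)); H[0, i] = 1.0
    S = (H @ P @ H.T)[0, 0] + r
    K = P @ H.T / S
    innov = (z - alpha[i] + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    alpha = alpha + K.ravel() * innov
    P = (np.eye(alpha.size) - K @ H) @ P
    return alpha, P
```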

20.
In robot teleoperation, a robot works as a physical agent for an operator at a remote site. Robot teleoperation using camera images involves two main tasks: environment recognition using visual information, and robot control according to that recognition. In this paper, we propose a gaze-direction-based vehicle teleoperation method with omnidirectional image stabilization and automatic body rotation control. The proposed method handles these two tasks, which are usually treated separately, in a unified manner. It is an intuitive vehicle teleoperation method in which the operator does not need to be concerned with the vehicle body orientation, and it absorbs differences between vehicle driving mechanisms. That is, the method frees the operator from the burden of controlling the vehicle, so the operator can concentrate on where he/she intends to go. The method mainly consists of two technologies: omnidirectional image stabilization and automatic body rotation control. The conducted experiments show the effectiveness of the proposed method.
