Similar Literature
1.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of off-line and on-line stages. During the off-line learning stage, the robot follows paths based on a motion model of its omnidirectional drive structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which handle the perceptual-aliasing problem during mapping. Each topological node provides a set of omnidirectional images characterized by geometric affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probabilistic global localization algorithm, and estimates its relative pose within the topological node using an effective solution to the classical five-point relative pose estimation problem. The robot is then driven by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results with a real robot in an indoor environment show the performance of the proposed method.

2.
《Advanced Robotics》2013,27(3-4):441-460
This paper describes an omnidirectional vision-based ego-pose estimation method for an in-pipe mobile robot. The in-pipe robot has been developed for inspecting the inner surface of various pipeline configurations, such as straight sections, elbows and multiple branches. Because it has four individually driven wheels, it can move flexibly through these pipelines. Ego-pose estimation is indispensable for the robot's autonomous navigation. An omnidirectional camera and four laser modules mounted on the robot are used for ego-pose estimation; the camera is also used for investigating the inner surface of the pipeline. The pose of the robot is estimated from the relationship between the robot's pose and the pixel coordinates of the four points where the rays emerging from the laser modules intersect the inside of the pipeline. This relationship is derived from a geometric analysis of the omnidirectional camera and the four laser modules. In experiments, the performance of the proposed method is evaluated by comparing the result of our algorithm with the measurements of a specially designed gyroscope-like sensor.

3.
Vision-based 3-D trajectory tracking for unknown environments
This paper describes a vision-based system for 3-D localization of a mobile robot in a natural environment. The system includes a mountable head with three on-board charge-coupled device cameras that can be installed on the robot. The main emphasis of this paper is on the ability to estimate the motion of the robot independently from any prior scene knowledge, landmark, or extra sensory devices. Distinctive scene features are identified using a novel algorithm, and their 3-D locations are estimated with high accuracy by a stereo algorithm. Using new two-stage feature tracking and iterative motion estimation in a symbiotic manner, precise motion vectors are obtained. The 3-D positions of scene features and the robot are refined by a Kalman filtering approach with a complete error-propagation modeling scheme. Experimental results show that good tracking and localization can be achieved using the proposed vision system.

4.
We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge of the 3-D scene geometry. The control scheme is divided into two steps. In the first, using an approximate input-output linearizing feedback, the epipoles are zeroed so as to align the robot with the goal. Feature points are then used in the second, translational step to reach the desired configuration. Asymptotic convergence to the desired configuration is proven in both the calibrated and partially calibrated cases. Simulation and experimental results show the effectiveness of the proposed control scheme.

5.
A new vision-processing technique is presented that lets a mobile robot equipped with an omnidirectional camera perform appearance-based global localization in real time. The technique is applied directly to the omnidirectional camera images, producing low-dimensional rotation-invariant feature vectors without any training or set-up phase. Using the feature vectors, particle filters can accurately estimate the location of a continuously moving real robot, processing 5000 simultaneous localization hypotheses on-line. Estimated body positions overlap the actual ones in over 95% of the time steps. The feature vectors degrade gracefully under increasing levels of simulated noise and occlusion.
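Rotation invariance of the kind described in this abstract can be obtained, for instance, from the magnitudes of a Fourier transform taken along a circular scan of the panoramic image: a robot rotation only circularly shifts the ring, and shift leaves the spectrum magnitudes unchanged. A minimal sketch of this idea, not necessarily the paper's exact transform (`ring_descriptor` is a hypothetical helper):

```python
import numpy as np

def ring_descriptor(panorama_ring, n_coeffs=16):
    """Rotation-invariant descriptor for one circular ring of an
    omnidirectional image. A robot rotation shifts the ring circularly;
    the magnitude of the DFT is invariant to circular shifts, so the
    first |FFT| coefficients form a low-dimensional invariant feature."""
    spectrum = np.fft.fft(panorama_ring)
    return np.abs(spectrum[:n_coeffs])

# A circular shift (simulated robot rotation) leaves the descriptor unchanged.
ring = np.random.default_rng(0).random(360)
assert np.allclose(ring_descriptor(ring), ring_descriptor(np.roll(ring, 90)))
```

No training phase is needed: the descriptor is computed directly from each incoming image.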

6.
The precise positioning of robotic systems is of great interest, particularly for mobile robots. In this context, omnidirectional vision provides many advantages thanks to its wide field of view. This paper presents an image-based visual control scheme that drives a mobile robot to a desired location specified by a previously acquired target image. It exploits the properties of omnidirectional images to preserve bearing information by using a 1D trifocal tensor. The main contribution of the paper is that the elements of the tensor enter the control law directly, and neither a priori knowledge of the scene nor any auxiliary image is required. Our approach can be applied with any visual sensor approximately obeying a central projection model, shows good robustness to image noise, and avoids the short-baseline problem by exploiting the information of three views. A sliding-mode control law in a square system ensures stability and robustness of the closed loop. The good performance of the control system is demonstrated in simulations and real-world experiments with a hypercatadioptric imaging system.

7.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction of a mobile robot, as well as of any independent motion present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect the motion patterns of near obstacles or independently moving objects in the scene. In this paper, we introduce the analysis of the intrinsic features of omnidirectional motion fields in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.

8.
The rotation matrix estimation problem is a key point for mobile robot localization, navigation, and control. Based on quaternion theory and epipolar geometry, an extended Kalman filter (EKF) algorithm is proposed to estimate the rotation matrix using a single-axis gyroscope and point correspondences from a monocular camera. The experimental results show that the precision of the mobile robot's yaw angle estimated by the proposed EKF algorithm is much better than the results given by the image-only and gyroscope-only methods, which demonstrates that our method is a preferable way to estimate rotation for autonomous mobile robot applications.
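In a filter of this kind, the gyroscope typically drives the prediction step: the attitude quaternion is propagated with the measured angular rate, and the camera correspondences then correct the accumulated drift. A sketch of just the standard quaternion propagation, not the paper's specific filter design:

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Propagate a unit quaternion q = [w, x, y, z] by the body angular
    rate omega (rad/s) over one step dt, using the quaternion kinematics
    q_dot = 0.5 * Omega(omega) * q, then renormalize."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)

# One second of constant yaw rate 0.1 rad/s in 100 small steps
# accumulates a yaw of ~0.1 rad.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, 0.1), 0.01)
yaw = 2 * np.arctan2(q[3], q[0])
```

Between such predictions, an EKF correction would fuse the yaw information implied by the epipolar constraint on the image point correspondences.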

9.
An autonomous mobile robot must be able to navigate in an unknown environment, an ability closely tied to the simultaneous localization and map building (SLAM) problem. Vision sensors are attractive for an autonomous mobile robot because they are information-rich and impose few restrictions across applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variations in illumination and from occlusion, because they mostly extract corner points for the feature map. Moreover, due to the narrow field of view of the pin-hole camera, they are not adequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method which uses vertical lines extracted from omnidirectional camera images and horizontal lines from range sensor data. Thanks to the large field of view of the omnidirectional camera, features remain in the image long enough to estimate the pose of the robot and the features more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as features, it reduces the effect of illumination and partial occlusion. We use not only the lines at wall corners but also many other vertical lines at doors, columns and information panels on the wall, which cannot be extracted by a range sensor. Finally, since we use the horizontal lines to estimate the positions of the vertical line features, no camera calibration is required. Experiments with MORIS, our mobile robot test bed, moving at a human's pace in a real indoor environment verify the efficacy of this approach.

10.
Mobile robots operating in real, populated environments usually execute tasks that require accurate knowledge of their position. Monte Carlo Localization (MCL) algorithms have been applied successfully with laser range finders. However, vision-based approaches suffer from occlusions, real-time constraints, and environment modifications. In this article, an omnivision-based MCL algorithm that overcomes these drawbacks is presented. The algorithm works with a variable number of particles through the use of the Kullback–Leibler divergence (KLD). The measurement model is based on an omnidirectional camera with a fish-eye lens; it uses a feature-based map of the environment, and its feature extraction process makes it robust to occlusions and changes in the environment. Moreover, the algorithm is scalable and runs in real time. Results on tracking, global localization and the kidnapped-robot problem show the excellent performance of the localization system in a real environment. In addition, experiments under severe and continuous occlusions demonstrate the algorithm's ability to localize the robot in crowded environments.
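KLD-sampling adapts the particle count so that, with probability 1−δ, the KL divergence between the particle approximation and the true posterior stays below a bound ε; the required count depends only on the number k of histogram bins currently occupied by particles. A sketch of the standard sample-size formula (the ε and z values below are illustrative defaults, not the paper's settings):

```python
from math import sqrt

def kld_sample_size(k, epsilon=0.05, z_delta=2.326):
    """Particle count such that the KL divergence between the sampled and
    true posterior stays below epsilon with probability 1-delta, using the
    Wilson-Hilferty chi-square approximation. k is the number of occupied
    histogram bins; z_delta is the standard-normal upper quantile
    (2.326 for delta = 0.01)."""
    if k <= 1:
        return 1
    x = 1.0 - 2.0 / (9 * (k - 1)) + sqrt(2.0 / (9 * (k - 1))) * z_delta
    return int((k - 1) / (2 * epsilon) * x ** 3)

# A spread-out belief (many occupied bins) demands more particles than a
# concentrated one, which is what makes the filter's cost adaptive.
assert kld_sample_size(2) < kld_sample_size(50) < kld_sample_size(500)
```

During global localization k is large and many particles are used; once the belief converges to a few bins, the same bound lets the filter shrink to a small particle set.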

11.
In this paper, we consider the problem of controlling a 6-DOF holonomic robot and a nonholonomic mobile robot from the projection of 3-D straight lines in the image plane of central catadioptric systems. A generic central catadioptric interaction matrix for the projection of 3-D straight lines is derived using a unifying imaging model valid for an entire class of cameras. This result is exploited to design an image-based control law that allows us to control the 6 DOF of a robotic arm. The projected lines are then exploited to control a nonholonomic robot. We show that, as with the robotic arm, the control objectives are expressed mainly in terms of catadioptric image features, and that local asymptotic convergence is guaranteed. Simulation results and real experiments with a 6-DOF eye-to-hand system and a mobile robot illustrate the control strategy.

12.
This paper presents a novel approach for image-based visual servoing of a robot manipulator with an eye-in-hand camera when the camera parameters are not calibrated and the 3-D coordinates of the features are not known. Both point and line features are considered. This paper extends the concept of the depth-independent interaction (or image Jacobian) matrix, developed in earlier work for visual servoing using point features and fixed cameras, to the problem using eye-in-hand cameras and point and line features. By using the depth-independent interaction matrix, it is possible to linearly parameterize, by the unknown camera parameters and the unknown coordinates of the features, the closed-loop dynamics of the system. A new algorithm is developed to estimate the unknown parameters online by combining the Slotine–Li method with the idea of structure from motion in computer vision. By minimizing the errors between the real and estimated projections of the features on multiple images captured during motion of the robot, this adaptive algorithm guarantees convergence of the estimated parameters to the real values up to a scale. On the basis of the nonlinear robot dynamics, we prove asymptotic convergence of the image errors using Lyapunov theory. Experiments have been conducted to demonstrate the performance of the proposed controller.

13.
14.
A novel image-mosaicking technique suitable for 3-D visualization of roadside buildings on websites or mobile systems is proposed. Our method was tested on a roadside building scene captured by a side-looking video camera, using a continuous set of vertically textured planar faces. A vertical-plane approximation of the scene geometry for each frame was calculated using sparsely distributed feature points that were assigned 3-D data through bundle adjustment. These vertical planes were concatenated to create an approximate model onto which the images could be backprojected as textures and blended together. Additionally, our method includes an expanded crossed-slits projection around far-range areas to reduce the "ghost effect," a phenomenon in which a particular object appears repeatedly in a created image mosaic. The final step produces seamless image mosaics using Dijkstra's algorithm to find the optimum seam line along which to blend overlapping images. We used our algorithm to create efficient image mosaics in 3-D space from a sequence of real images.
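The seam-finding step can be viewed as a shortest-path problem over a cost grid built from pixel differences in the overlap region: the seam is the cheapest top-to-bottom path. A generic Dijkstra sketch of that idea, not the paper's implementation (`min_cost_seam` and the down/diagonal move set are assumptions):

```python
import heapq

def min_cost_seam(cost):
    """Cheapest top-to-bottom path through a 2-D cost grid (Dijkstra),
    moving straight down or diagonally down one row at a time.
    Returns one column index per row."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    prev = [[None] * cols for _ in range(rows)]
    pq = []
    for c in range(cols):
        dist[0][c] = cost[0][c]
        heapq.heappush(pq, (dist[0][c], 0, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c] or r == rows - 1:
            continue  # stale queue entry, or already at the bottom row
        for nc in (c - 1, c, c + 1):
            if 0 <= nc < cols and d + cost[r + 1][nc] < dist[r + 1][nc]:
                dist[r + 1][nc] = d + cost[r + 1][nc]
                prev[r + 1][nc] = c
                heapq.heappush(pq, (dist[r + 1][nc], r + 1, nc))
    end = min(range(cols), key=lambda c: dist[rows - 1][c])
    seam = [end]
    for r in range(rows - 1, 0, -1):
        seam.append(prev[r][seam[-1]])
    return seam[::-1]
```

With the cost set to the per-pixel difference between the two overlapping images, the returned seam passes through the regions where they agree best, which is what makes the blend appear seamless.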

15.
In existing omnidirectional camera calibration algorithms, prior knowledge of the scene is often difficult to obtain accurately. A calibration algorithm for hyperboloidal catadioptric omnidirectional cameras based on HALCON is proposed. Calibration-board images at different positions are acquired by moving the camera or the board; corner coordinates are extracted after preprocessing with HALCON; and the camera's intrinsic and extrinsic parameters are obtained by solving, via least squares, for the expansion coefficients of the projection equations of the calibration points. The algorithm only requires the camera to satisfy the single-viewpoint constraint; it needs neither prior knowledge of the scene nor any special devices or equipment. Experimental results show that high calibration accuracy can be achieved, and accurate results were obtained when measuring sideline distances for soccer robots.

16.
To address the unknown wheel-slip disturbance in trajectory-tracking control of omnidirectional mobile robots, an active disturbance rejection backstepping controller is designed. First, a kinematic model of the omnidirectional mobile robot subject to wheel-slip disturbance is established. Then, active disturbance rejection control is combined with backstepping control to design a trajectory-tracking controller based on this kinematic model; the controller estimates and compensates the slip disturbance in real time in the longitudinal, lateral, and attitude channels. Finally, the stability of the closed-loop system is analyzed via the Lyapunov theorem, and simulations verify the effectiveness and robustness of the proposed control algorithm.
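The per-channel disturbance estimation in active disturbance rejection control is usually realized with an extended state observer (ESO), which treats the slip effect as an extra state to be estimated and then cancelled by the control law. A minimal linear-ESO sketch for one channel (the first-order channel model, gains, and step size below are illustrative assumptions, not the paper's design):

```python
def eso_step(z1, z2, y, u, b0, beta1, beta2, dt):
    """One Euler step of a linear extended state observer for a channel
    modeled as y_dot = f + b0*u: z1 tracks the measured output y, and z2
    estimates the total disturbance f (e.g. the wheel-slip effect), which
    the control law can then subtract out."""
    e = z1 - y
    z1 += dt * (z2 + b0 * u - beta1 * e)
    z2 += dt * (-beta2 * e)
    return z1, z2

# The observer recovers a constant slip disturbance f = -0.5 acting on
# y_dot = f + u with u = 0 (observer poles at -30, -30: beta1=60, beta2=900).
y, z1, z2 = 0.0, 0.0, 0.0
for _ in range(5000):
    y += 0.001 * (-0.5)  # true plant with the hidden disturbance
    z1, z2 = eso_step(z1, z2, y, 0.0, 1.0, 60.0, 900.0, 0.001)
```

In the full controller, the backstepping law would use −z2/b0 as a feedforward term in each of the three channels, which is what "real-time estimation and compensation" amounts to.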

17.
In this article, we propose a new approach to the map-building task: the implementation of the Spatial Semantic Hierarchy (SSH), proposed by B. Kuipers, on a real robot fitted with an omnidirectional camera. Kuipers' original formulation of the SSH was slightly modified in order to manage more efficiently the knowledge the real robot collects while moving through the environment. The sensory data experienced by the robot are transformed by the different levels of the SSH to obtain a compact representation of the environment. This knowledge is stored in the form of a topological map and, eventually, a metrical map. The aim of this article is to show that a catadioptric omnidirectional camera is a good sensor for the SSH and couples nicely with several of its elements. The panoramic view and rotational invariance of our omnidirectional camera make the identification and labelling of places a simple matter. A deeper insight is that tracking and identifying events in an omnidirectional image, such as occlusions and alignments, can be used to segment continuous sensory image data into the discrete topological and metric elements of a map. The proposed combination of the SSH and omnidirectional vision provides a powerful general framework for robot mapping and offers new insights into the concept of "place." Some preliminary experiments performed with a real robot in an unmodified office environment are presented.

18.
MonoSLAM: real-time single camera SLAM
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time yet drift-free performance inaccessible to structure-from-motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, and also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and to live augmented reality with a hand-held camera.

19.
《Knowledge》2006,19(5):324-332
We present a system for visual robotic docking using an omnidirectional camera coupled with the actor-critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick an object from it using the pan-tilt camera mounted on the robot. We use a staged approach to solve this problem, as there are distinct subtasks and different sensors involved. The robot starts by wandering randomly until the table is located via a landmark; a network trained via reinforcement learning then allows the robot to turn toward and approach the table. Once at the table, the robot picks the object from it. We argue that our approach has great potential, allowing robot control for navigation to be learned and removing the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark.
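An actor-critic step of the kind used here couples a critic (state values) and an actor (action preferences) through the temporal-difference error. A tabular sketch under assumed names (`V` and `prefs` are hypothetical structures; the paper trains a network rather than tables):

```python
def actor_critic_update(V, prefs, s, a, reward, s_next,
                        alpha=0.1, beta=0.1, gamma=0.95):
    """One TD actor-critic update: the critic's TD error delta evaluates
    the last action, adjusting both the value estimate for s and the
    actor's preference for taking a in s."""
    delta = reward + gamma * V[s_next] - V[s]   # TD error from the critic
    V[s] += alpha * delta                       # critic update
    prefs[(s, a)] += beta * delta               # actor update
    return delta

# A rewarded transition raises both the state's value and the preference
# for the action that produced it.
V = {"far": 0.0, "at_table": 0.0}
prefs = {("far", "forward"): 0.0}
actor_critic_update(V, prefs, "far", "forward", 1.0, "at_table")
```

Coupling motor actions to the landmark's image position amounts to using that position as the state `s` in updates like this.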

20.
A holonomic omnidirectional mobile robot with active dual-wheel caster assemblies is proposed as a robotic transport vehicle. Because sudden accelerations and other dynamic effects can occur during maneuvers, monitoring tip-over instability is very important to prevent unexpected injury and property damage. This work presents a method for preventing tip-over by estimating the tipping direction and stability metrics. The dynamic model of the omnidirectional mobile robot is derived to estimate the net force from the supporting reaction force at each wheel, which is caused by the inertial and external forces. The tipping direction and stability metric are estimated using a moment-based stability measure. The performance of tip-over prediction for an omnidirectional mobile robot with active dual-wheel assemblies is demonstrated through simulations.
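One simple way to estimate a tipping direction and margin is the zero-moment-point test: shift the ground projection of the center of mass by the inertial pseudo-force and check it against the support polygon formed by the wheel contacts. A quasi-static sketch of that generic check (this is not the paper's moment-based metric; `zmp_stable` and its arguments are assumptions):

```python
def zmp_stable(com_xy, accel_xy, com_height, support_xy):
    """Quasi-static tip-over check: shift the CoM's ground projection by
    the inertial pseudo-force (-m*a acting at height h, so the offset is
    -a*h/g per axis) and test whether the resulting zero-moment point lies
    inside the convex support polygon, given as wheel contact points in
    counter-clockwise order."""
    g = 9.81
    zmp = (com_xy[0] - accel_xy[0] * com_height / g,
           com_xy[1] - accel_xy[1] * com_height / g)
    n = len(support_xy)
    for i in range(n):
        x1, y1 = support_xy[i]
        x2, y2 = support_xy[(i + 1) % n]
        # The ZMP must lie to the left of every CCW edge; the first edge
        # it crosses indicates the tipping direction.
        if (x2 - x1) * (zmp[1] - y1) - (y2 - y1) * (zmp[0] - x1) < 0:
            return False
    return True

# A gentle forward acceleration keeps the ZMP inside a 0.4 m square wheel
# base; a harsh one pushes it over the rear edge.
square = [(-0.2, -0.2), (0.2, -0.2), (0.2, 0.2), (-0.2, 0.2)]
assert zmp_stable((0.0, 0.0), (1.0, 0.0), 0.3, square)
assert not zmp_stable((0.0, 0.0), (8.0, 0.0), 0.3, square)
```

A full dynamic treatment, as in the paper, would instead compute the moment about each support edge from the estimated wheel reaction forces.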
