Similar Documents
 20 similar documents found (search time: 46 ms)
1.
MonoSLAM: real-time single camera SLAM   Cited by 4 (self: 0, others: 4)
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time yet drift-free performance inaccessible to structure-from-motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work not only extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and to live augmented reality with a hand-held camera.
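The abstract's probabilistic framework is an extended Kalman filter over camera pose and landmarks with a "smooth motion" (constant-velocity) model. The following minimal sketch shows the predict/update cycle on a toy one-dimensional state; the state layout, noise values and 30 Hz step are illustrative stand-ins, not the paper's actual parameterization.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Constant-velocity prediction: x' = F x, P' = F P F^T + Q."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update with (linearized) measurement H."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy 1D example: state = [position, velocity]
dt = 1 / 30.0                         # 30 Hz frame rate, as in the abstract
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])            # observe position only
R = np.array([[1e-2]])

x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, F, Q)
x, P = ekf_update(x, P, np.array([0.5]), H, R)
```

After one cycle the position estimate moves most of the way toward the measurement and its variance shrinks, the basic mechanism MonoSLAM runs per frame over a much larger joint state.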

2.
In this paper, we propose a real-time vision-based localization approach for humanoid robots using a single camera as the only sensor. To obtain an accurate localization of the robot, we first build an accurate 3D map of the environment. In the map computation process, we use stereo visual SLAM techniques based on non-linear least squares optimization (bundle adjustment). Once we have computed a 3D reconstruction of the environment, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting all the geometric relationships between the camera poses and the 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is very efficient, easy to implement, and more robust and accurate than existing approaches. By means of visibility prediction we predict, for a query pose, only the highly visible 3D points, tremendously speeding up the data association between 3D map points and perceived 2D features in the image. In this way, we can solve the Perspective-n-Point (PnP) problem very efficiently, providing robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach through several vision-based localization experiments with the HRP-2 humanoid robot.
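The visibility-prediction idea above can be illustrated with a simplified, distance-based stand-in for the learned model: record which map points each keyframe observed, and for a query pose vote over the nearest keyframes so that only plausibly visible points enter the PnP data association. The function names, voting rule and toy map are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def predict_visible(query_pose, keyframe_poses, keyframe_points, k=3, min_votes=2):
    """Predict which map points are likely visible from query_pose by voting
    over the k nearest keyframes (a simplified visibility model)."""
    d = np.linalg.norm(keyframe_poses - query_pose, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for i in nearest:
        for pid in keyframe_points[i]:
            votes[pid] = votes.get(pid, 0) + 1
    return {pid for pid, v in votes.items() if v >= min_votes}

# toy map: 4 keyframes on a line, each seeing a few 3D-point IDs
kf_poses = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [10.0, 0, 0]])
kf_points = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {99}]

visible = predict_visible(np.array([1.2, 0, 0]), kf_poses, kf_points)
```

Only points co-seen by several nearby keyframes survive, which is what prunes the 3D-to-2D matching before PnP.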

3.
Design of a real-time motion-blur detector for humanoid robot visual navigation   Cited by 1 (self: 0, others: 1)
To address the problem that motion blur limits the robustness of humanoid robot visual navigation systems, a real-time anomaly detection method based on motion-blur features is proposed. The negative impact of motion blur on the visual navigation system is first analyzed quantitatively, and the characteristics of motion blur in images captured on a humanoid robot are studied. On this basis, a no-reference measure of an image's motion-blur features is computed. An unsupervised anomaly detection technique then performs cluster analysis on the motion-blur features of the image time series within the detection framework, recalling blur anomalies in the data stream in real time and thereby strengthening the robustness of the robot's visual navigation system against motion blur. Simulation experiments and experiments on a humanoid robot show that, on public standard datasets and a NAO humanoid robot dataset, the method achieves good real-time performance (0.1 s per detection) and effectiveness (recall 98.5%, precision 90.7%). The detection framework also generalizes well to ground mobile robots and is easy to integrate, so it can conveniently work alongside a visual navigation system.
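A no-reference blur measure plus an unsupervised outlier rule, in the spirit of the abstract, can be sketched as follows; the variance-of-Laplacian score and the median/MAD threshold are common generic choices and are assumptions, not the paper's actual feature or detector.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def blur_score(img):
    """No-reference sharpness score: variance of the Laplacian response.
    Low variance means few strong edges, i.e. a likely motion-blurred frame."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out.var()

def flag_blurred(scores, k=2.0):
    """Unsupervised outlier rule: flag frames whose sharpness falls more than
    k robust standard deviations below the median of the sequence."""
    s = np.asarray(scores, dtype=float)
    med = np.median(s)
    mad = np.median(np.abs(s - med)) + 1e-9
    return s < med - k * 1.4826 * mad

# toy sequence: 8 sharp checkerboard frames, then one flat "blurred" frame
checker = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
flat = np.full((16, 16), 0.5)
scores = [blur_score(checker)] * 8 + [blur_score(flat)]
flags = flag_blurred(scores)
```

The flat frame's near-zero edge energy makes it the single flagged anomaly in the stream.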

4.
Huimin Lu  Xun Li  Hui Zhang 《Advanced Robotics》2013,27(18):1439-1453
Topological localization is especially suitable for human–robot interaction and a robot's high-level planning, and it can be realized by visual place recognition. In this paper, bag-of-features, a popular and successful approach in the pattern recognition community, is introduced to realize robot topological localization. By combining the real-time local visual features we previously proposed for omnidirectional vision with support vector machines, a robust and real-time visual place recognition algorithm based on omnidirectional vision is proposed. The panoramic images from the COLD database were used in experiments to determine the best algorithm parameters and the best training condition. The experimental results show that with our algorithm the robot can achieve robust topological localization with a high success rate in real time.
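The bag-of-features representation described above reduces each image to a histogram of quantized local descriptors, which a classifier then maps to a place label. A minimal sketch, with a tiny hand-made vocabulary and nearest-word quantization standing in for the paper's omnidirectional features and SVM classifier:

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and return
    an L1-normalized word histogram (the bag-of-features vector)."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# toy vocabulary of 3 visual words in a 2D descriptor space
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# descriptors from "place A" cluster near word 0, "place B" near word 1
img_a = np.array([[0.1, 0.0], [0.0, 0.1], [0.9, 0.1]])
img_b = np.array([[0.9, 0.0], [1.1, 0.1], [0.1, 0.0]])

h_a = bof_histogram(img_a, vocab)
h_b = bof_histogram(img_b, vocab)
```

The two places yield clearly different histograms, which is the fixed-length input a place classifier (an SVM in the paper) would be trained on.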

5.
To address the shortcomings of traditional humanoid robots in control-system real-time performance and visual recognition, a humanoid robot control system with visual recognition capability was designed around the S3C6410 as the main control chip, and good target recognition results were obtained by improving and simplifying the video recognition algorithm. Experiments show that a humanoid robot built on this control system responds in real time, recognizes targets accurately, and can find a target quickly by adjusting its motion path.

6.
《Advanced Robotics》2013,27(1-2):207-232
In this paper, we provide the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data without any prior information about its dynamics model. Programming a humanoid robot to perform an action (such as walking) that takes into account the robot's complex dynamics is a challenging problem. Traditional approaches typically require highly accurate prior knowledge of the robot's dynamics and environment in order to devise complex (and often brittle) control algorithms for generating a stable dynamic motion. Training using human mocap is an intuitive and flexible approach to programming a robot, but direct usage of mocap data usually results in dynamically unstable motion. Furthermore, optimization using high-dimensional mocap data in the humanoid full-body joint space is typically intractable. We propose a new approach to tractable imitation-based learning in humanoids without a robot's dynamic model. We represent kinematic information from human mocap in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as closely as possible while generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking learned from mocap data using both a simulator and a real humanoid robot.
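The low-dimensional subspace step above is commonly realized with PCA; the sketch below projects synthetic "mocap" frames onto a learned 3-D subspace and reconstructs them. The use of plain PCA, the dimensions and the synthetic data are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np

def fit_subspace(X, k):
    """Fit a k-dimensional PCA subspace to high-dimensional pose data."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # mean pose and top-k principal directions

def encode(X, mean, W):
    return (X - mean) @ W.T      # full-body pose -> low-dim coordinates

def decode(Z, mean, W):
    return Z @ W + mean          # low-dim coordinates -> full-body pose

rng = np.random.default_rng(0)
# synthetic "mocap": 200 frames of 30 joint angles lying near a 3-D subspace
latent = rng.standard_normal((200, 3))
mix = rng.standard_normal((3, 30))
X = latent @ mix + 0.01 * rng.standard_normal((200, 30))

mean, W = fit_subspace(X, k=3)
Z = encode(X, mean, W)
err = np.abs(decode(Z, mean, W) - X).max()   # small: subspace captures X
```

Optimization then runs over the 3 coordinates of `Z` per frame instead of the 30-dimensional joint space, which is what makes the learning tractable.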

7.
In this paper, we present a real-time high-precision visual localization system for an autonomous vehicle which employs only low-cost stereo cameras to localize the vehicle with an a priori map built using a more expensive 3D LiDAR sensor. To this end, we construct two different visual maps: a sparse feature visual map for visual odometry (VO) based motion tracking, and a semidense visual map for registration with the prior LiDAR map. To register two point clouds sourced from different modalities (i.e., cameras and LiDAR), we leverage probabilistic weighted normal distributions transformation (ProW-NDT), particularly taking into account the uncertainty of the source point clouds. The registration results are then fused via pose graph optimization to correct the VO drift. Moreover, surfels extracted from the prior LiDAR map are used to refine the sparse 3D visual features, further improving VO-based motion estimation. The proposed system has been tested extensively in both simulated and real-world experiments, showing that robust, high-precision, real-time localization can be achieved.

8.
Wide-baseline stereo vision for terrain mapping   Cited by 3 (self: 0, others: 3)
Terrain mapping is important for mobile robots to perform localization and navigation. Stereo vision has been used extensively for this purpose in outdoor mapping tasks. However, conventional stereo does not scale well to distant terrain. This paper examines the use of wide-baseline stereo vision in the context of a mobile robot for terrain mapping, and we are particularly interested in the application of this technique to terrain mapping for Mars exploration. In wide-baseline stereo, the images are not captured simultaneously by two cameras, but by a single camera at different positions. The larger baseline allows more accurate depth estimation of distant terrain, but the robot motion between camera positions introduces two new problems. One issue is that the robot estimates the relative positions of the camera at the two locations imprecisely, unlike the precise calibration that is performed in conventional stereo. Furthermore, the wide baseline results in a larger change in viewpoint than in conventional stereo. Thus, the images are less similar and this makes the stereo matching process more difficult. Our methodology addresses these issues using robust motion estimation and feature matching. We give results using real images of terrain on Earth and Mars and discuss the successes and failures of the technique.
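The accuracy argument for a wide baseline follows from the stereo depth equation Z = fB/d: first-order error propagation gives ΔZ ≈ Z²Δd/(fB), so depth error shrinks linearly with baseline B. A small numeric check (the focal length, baselines and half-pixel disparity error below are illustrative, not the paper's values):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, depth_m, disp_err_px=0.5):
    """First-order depth uncertainty: dZ ~= Z^2 * dd / (f * B)."""
    return depth_m ** 2 * disp_err_px / (f_px * baseline_m)

f = 800.0            # focal length in pixels (illustrative)
z = 50.0             # distant terrain at 50 m
err_narrow = depth_error(f, 0.1, z)   # 10 cm fixed stereo rig
err_wide = depth_error(f, 2.0, z)     # 2 m baseline obtained by moving the robot
```

With these numbers the 2 m baseline cuts the 50 m depth uncertainty by a factor of 20, which is the payoff that justifies tackling the motion-estimation and matching problems the abstract describes.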

9.
This paper attempts to discover the invariant features in a whole-body dynamic task under perturbations. Our hypothesis is that the features are useful both for execution and recognition of a task, and have their origin in human embodiment.

For the sake of concreteness, we focus on a particular task named "Roll-and-Rise" motion, and carry out a multi-approach investigation. First, an analysis of motion capture data of human performance is presented to show its invariant features. Next, we show that such invariants emerge from the underlying physics of the task, using simulation data. These invariants are actually useful for generating robot motion, which has been successfully realized with an adult-size real humanoid robot. The experimental data are analyzed to confirm the temporal localization of invariant features. Lastly, we present a psychological experiment which confirms that these timings are actually important points where human observers extract crucial information about the task.


10.
Successful approaches to the robot localization problem include particle filters, which estimate non-parametric localization belief distributions. Particle filters are successful at tracking a robot's pose, although they fare poorly at determining the robot's global pose. The global localization problem has been addressed for robots that sense unambiguous visual landmarks with sensor resetting, by performing sensor-based resampling when the robot is lost. Unfortunately, for robots that make sparse, ambiguous and noisy observations, standard sensor resetting places new pose hypotheses across a wide region, in poses that may be inconsistent with previous observations. We introduce multi-observation sensor resetting (MOSR) to address the localization problem with sparse, ambiguous and noisy observations. MOSR merges observations from multiple frames to generate new hypotheses more effectively. We demonstrate experimentally on the NAO humanoid robots that MOSR converges more efficiently to the robot's true pose than standard sensor resetting, and is more robust to systematic vision errors.
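The MOSR idea, generating new pose hypotheses only where several recent observations agree, can be sketched in one dimension with ambiguous range-to-landmark observations; the consistency test, tolerance and toy world below are illustrative assumptions, not the paper's formulation.

```python
def mosr_hypotheses(landmarks, obs_history, odom_history, tol=0.3):
    """Multi-observation sensor resetting (sketch): propose poses from the
    latest ambiguous range observation, then keep only those consistent with
    the earlier observations after undoing the odometry between frames."""
    dist_now = obs_history[-1]
    candidates = []
    for lm in landmarks:                 # observation is ambiguous: any
        candidates += [lm - dist_now, lm + dist_now]   # landmark, either side
    kept = []
    for pose in candidates:
        past, ok = pose, True
        for dist_old, step in zip(obs_history[-2::-1], odom_history[::-1]):
            past -= step                 # robot pose at the older frame
            if min(abs(abs(past - lm) - dist_old) for lm in landmarks) > tol:
                ok = False
                break
        if ok:
            kept.append(pose)
    return kept

# two indistinguishable landmarks at 0 and 10; the robot actually walked
# 2 -> 3 -> 4, observing ranges 2, 3, 4 with +1 odometry between frames
kept = mosr_hypotheses([0.0, 10.0], [2.0, 3.0, 4.0], [1.0, 1.0])
```

Single-frame resetting would propose four poses (-4, 4, 6, 14); merging three observations prunes the two that contradict the history, leaving only the true pose 4 and its genuinely symmetric twin 14.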

11.
This paper introduces a mobile humanoid robot platform able to execute various services for humans in their everyday environments. For service in more intelligent and varied environments, the control system of a robot must operate efficiently to ensure a coordinated robot system. We enhanced the efficiency of the control system by developing a dual-network control system. The network system consists of two communication protocols: high-speed IEEE 1394, and a highly stable Controller Area Network (CAN). A service framework is also introduced for the coordinated task execution by a humanoid robot. To execute given tasks, various sub-systems of the robot were coordinated effectively by this system. Performance assessments of the presented framework and the proposed control system are experimentally conducted. MAHRU-M, as a platform for a mobile humanoid robot, recognizes the designated object. The object's pose is calculated by performing model-based object tracking using a particle filter with back projection-based sampling. A unique approach is used to solve the human-like arm inverse kinematics, allowing the control system to generate smooth trajectories for each joint of the humanoid robot. A mean-shift algorithm using bilateral filtering is also used for real-time and robust object tracking. The results of the experiment show that a robot can execute its services efficiently in human workspaces such as an office or a home.

12.
In this paper a humanoid robot simulator based on the multi-robot simulation framework (MuRoSimF) is presented. Among the unique features of this simulator is the scalability in the level of physical detail in both the robot's motion and sensing systems. It facilitates the development of control software for humanoid robots, which is demonstrated for several scenarios from the RoboCup Humanoid Robot League.

Different requirements exist for a humanoid robot simulator. E.g., testing of algorithms for motion control and postural stability requires high fidelity of physical motion properties, whereas testing of behavior control and role distribution for a robot team requires only a moderate level of detail for real-time simulation of multiple robots. To meet such very different requirements, different simulators are often used, which makes it necessary to model a robot multiple times and to integrate the different simulations with high-level robot control software.

MuRoSimF provides the capability of exchanging the simulation algorithms used for each robot transparently, thus allowing a trade-off between computational performance and fidelity of the simulation. It is therefore possible to choose simulation algorithms adequate for the needs of a given simulation experiment, for example, motion simulation of humanoid robots based on kinematic, simplified-dynamics or full multi-body system dynamics algorithms. In this paper the sensor simulation capabilities of MuRoSimF are also reviewed. The methods for motion simulation and collision detection and handling are presented in detail, including an algorithm which allows the real-time simulation of the full dynamics of a 21-DOF humanoid robot. Merits and drawbacks of the different algorithms are discussed in the light of different simulation purposes. The simulator performance is measured and illustrated in various examples, including comparison with experiments on a physical humanoid robot.

13.
An important aspect of present-day humanoid robot research is to make such robots look realistic and human-like, both in appearance and in motion and mannerism. In this paper, we focus our study on advanced control leading to realistic motion coordination for a humanoid robot's neck and eyes while tracking an object. The motivating application for such controls is conversational robotics, in which a robot head "actor" should be able to detect and make eye contact with a human subject. In such a scenario, the 3D position and orientation of an object of interest in space should be tracked by the redundant head–eye mechanism, partly through its neck and partly through its eyes. In this paper, we propose an optimization approach, combined with real-time visual feedback, to generate realistic robot motion and robustify it. We also offer experimental results showing that the neck–eye motion obtained from the proposed algorithm is realistic compared to the head–eye motion of humans.

14.
Humanoid robots introduce instabilities during bipedal walking that complicate the process of estimating their position and orientation over time. Tracking humanoid robots may be useful not only in typical applications such as navigation, but also in tasks that require benchmarking the multiple processes involved in registering measures of the humanoid's performance during walking. Small robots represent an additional challenge due to their size and mechanical limitations, which may generate unstable swinging while walking. This paper presents a strategy for the active localization of a humanoid robot in environments that are monitored by external devices. The problem is faced using a particle filter method over depth images captured by an RGB-D sensor in order to effectively track the position and orientation of the robot during its march. The tracking stage is coupled with a locomotion system controlling the stepping of the robot toward a given oriented target. We present an integral communication framework between the tracking and the locomotion control of the robot based on the Robot Operating System, which is capable of achieving real-time locomotion tasks using a NAO humanoid robot.

15.
李元, 王石荣, 于宁波 《智能系统学报》2018, 13(3): 445-451
Mobile robots need autonomous localization, mapping, path planning, and motion control capabilities to carry out various assistive tasks. This paper uses RGB-D data with the ORB-SLAM algorithm for autonomous localization, combines point-cloud data with the GMapping algorithm to build an occupancy-grid map of the environment, plans smooth, analytically expressible paths based on quadratic programming, and designs a nonlinear controller, realizing an autonomous mobile robot system composed of a mobile base, an RGB-D sensor, and a computing platform. Experiments verify that the system achieves real-time localization and mapping, autonomous movement, and obstacle avoidance in complex indoor environments. It thus provides a solution for the wider application of mobile robots that is simple in hardware, performs well, and is extensible, economical, and easy to develop and maintain.

16.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of on-line and off-line stages. During the off-line learning stage, the robot follows paths based on a motion model of its omnidirectional motion structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which handles the perceptual aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometric affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution global localization algorithm, and estimates its relative pose within the topological node with an effective solution to the classical five-point relative pose estimation problem. The robot is then controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experiments carried out with a real robot in an indoor environment show the performance of the proposed method.

17.
Sensing visual motion gives a creature valuable information about its interactions with the environment. Flies in particular use visual motion information to navigate through turbulent air, avoid obstacles, and land safely. Mobile robots are ideal candidates for using this sensory modality to enhance their performance, but so far have been limited by the computational expense of processing video. Also, the complex structure of natural visual scenes poses an algorithmic challenge for extracting useful information in a robust manner. We address both issues by creating a small, low-power visual sensor with integrated analog parallel processing to extract motion in real-time. Because our architecture is based on biological motion detectors, we gain the advantages of this highly evolved system: A design that robustly and continuously extracts relevant information from its visual environment. We show that this sensor is suitable for use in the real world, and demonstrate its ability to compensate for an imperfect motor system in the control of an autonomous robot. The sensor attenuates open-loop rotation by a factor of 31 with less than 1 mW power dissipation.

18.
《Advanced Robotics》2013,27(5):527-546
Prediction of dynamic features is an important task for determining the manipulation strategies of an object. This paper presents a technique for predicting the dynamics of objects relative to the robot's motion from visual images. During the training phase, the authors use the recurrent neural network with parametric bias (RNNPB) to self-organize the dynamics of objects manipulated by the robot into the PB space. The acquired PB values, static images of objects and robot motor values are input into a hierarchical neural network to link the images to dynamic features (PB values). The neural network extracts prominent features that each induce object dynamics. For prediction of the motion sequence of an unknown object, the static image of the object and the robot motor value are input into the neural network to calculate the PB values. By inputting the PB values into the closed-loop RNNPB, the predicted movements of the object relative to the robot motion are calculated recursively. Experiments were conducted with the humanoid robot Robovie-IIs pushing objects at different heights. The results of predicting the dynamics of target objects show that the technique is effective.

19.
Mainstream humanoid robots today adopt ZMP (zero moment point) theory as the criterion for stable walking. A necessary condition for stable walking is that the real-time ZMP falls within the support polygon formed by the contact between the supporting foot and the ground. For a humanoid robot to walk stably in complex real-world environments, its foot sensing system must therefore provide sufficiently rich information about the ground so that the shape of the support region can be obtained accurately for real-time ZMP-based stability control. This paper applies flexible array force sensors to the foot sensing system of a humanoid robot, proposes a method for obtaining the shape of the robot's support region, and verifies its feasibility experimentally.
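The stability criterion described, real-time ZMP inside the support polygon measured by the foot's sensor array, reduces to a center-of-pressure computation plus a point-in-convex-polygon test. A sketch with made-up cell positions and pressures (not the paper's sensor layout):

```python
import numpy as np

def zmp_from_pressure(cells, pressures):
    """ZMP (center of pressure): pressure-weighted mean of the positions of
    the responding cells in the foot's flexible force-sensor array."""
    p = np.asarray(pressures, dtype=float)
    return (np.asarray(cells, dtype=float) * p[:, None]).sum(axis=0) / p.sum()

def inside_convex(poly, pt):
    """True if pt lies inside (or on) the CCW convex support polygon,
    via the sign of the cross product along each edge."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        if (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax) < 0:
            return False
    return True

# flat-ground support region of one foot (metres), CCW, and three active cells
foot = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.0, 0.1)]
cells = [(0.05, 0.02), (0.15, 0.02), (0.10, 0.08)]
zmp = zmp_from_pressure(cells, [10.0, 10.0, 20.0])
stable = inside_convex(foot, zmp)
```

On uneven ground the array's richer contact information changes `foot` itself, which is why the paper focuses on recovering the support-region shape rather than assuming a fixed sole outline.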

20.
Addressing the motion requirements of high-speed dynamic manipulation with a humanoid robot's 7-DOF arm, this paper proposes a real-time balance control method for humanoid robots based on resolved momentum. It derives the formula for the robot's total momentum expressed in terms of the working arm's end-effector velocity vector, analyzes principles for selecting which momentum dimensions to control, and gives a real-time optimization method for computing the auxiliary arm's joint velocities under resolved momentum control. Simulation experiments show that the improved resolved momentum control method is feasible and real-time capable for balance control under high-speed arm motions: the resolved momentum of the controlled dimensions fully meets expectations, while the momentum of the other dimensions decreases or remains essentially unchanged; the generated auxiliary-arm motion is smooth with large margins in joint performance, and the robot exhibits excellent zero-moment point (ZMP) stability overall.
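The resolved-momentum step, solving for auxiliary-arm joint rates so that selected components of total momentum track a reference, is at its core a row-selected least-squares problem. A sketch with random matrices standing in for the robot's real momentum Jacobians (the selection indices, dimensions and zero reference are illustrative):

```python
import numpy as np

def aux_joint_rates(J_task, J_aux, qdot_task, h_ref, sel):
    """Resolve auxiliary-arm joint rates so the selected momentum rows track
    h_ref, given  h = J_task @ qdot_task + J_aux @ qdot_aux."""
    residual = h_ref[sel] - (J_task @ qdot_task)[sel]
    return np.linalg.pinv(J_aux[sel]) @ residual   # min-norm least squares

rng = np.random.default_rng(1)
J_task = rng.standard_normal((6, 7))   # momentum Jacobian of the working arm
J_aux = rng.standard_normal((6, 7))    # momentum Jacobian of the auxiliary arm
qdot_task = rng.standard_normal(7)     # prescribed high-speed task motion
h_ref = np.zeros(6)                    # drive selected momentum to zero
sel = [0, 1, 5]                        # controlled dimensions (illustrative)

qdot_aux = aux_joint_rates(J_task, J_aux, qdot_task, h_ref, sel)
h = J_task @ qdot_task + J_aux @ qdot_aux   # selected rows of h are ~0
```

Controlling only a subset of rows is what the abstract's "selecting which momentum dimensions to control" refers to: the pseudoinverse cancels the chosen components exactly while the minimum-norm solution keeps the auxiliary-arm motion smooth.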


Copyright©北京勤云科技发展有限公司  京ICP备09084417号