Similar Documents
20 similar documents found.
1.
Huimin Lu  Xun Li  Hui Zhang 《Advanced Robotics》2013,27(18):1439-1453
Topological localization is especially suitable for human–robot interaction and a robot's high-level planning, and it can be realized by visual place recognition. In this paper, bag-of-features, a popular and successful approach in the pattern recognition community, is introduced to realize robot topological localization. By combining real-time local visual features that we proposed for omnidirectional vision with support vector machines, a robust and real-time visual place recognition algorithm based on omnidirectional vision is proposed. The panoramic images from the COLD database were used to perform experiments to determine the best algorithm parameters and the best training condition. The experimental results show that, using our algorithm, the robot can achieve robust topological localization with a high success rate in real time.
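As an illustration of the bag-of-features place-recognition pipeline summarized above, the following Python sketch clusters local descriptors into a visual vocabulary, represents each image as a word histogram, and trains an SVM. It is not the authors' implementation: the descriptor extractor, vocabulary size, and SVM parameters are placeholder assumptions, and random vectors stand in for real omnidirectional image features.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for a real local-feature extractor on a panoramic image.
def extract_descriptors(image_id):
    return rng.normal(loc=image_id % 3, scale=1.0, size=(200, 64))

train_images = list(range(30))
train_labels = [i % 3 for i in train_images]        # three "places"

# 1. Build a visual vocabulary by clustering all training descriptors.
all_desc = np.vstack([extract_descriptors(i) for i in train_images])
vocab = KMeans(n_clusters=50, n_init=4, random_state=0).fit(all_desc)

# 2. Represent each image as a normalized histogram of visual words.
def bof_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

X = np.array([bof_histogram(extract_descriptors(i)) for i in train_images])

# 3. Train an SVM on the histograms and classify a new image.
clf = SVC(kernel="rbf", C=10.0).fit(X, train_labels)
print(clf.predict([bof_histogram(extract_descriptors(4))]))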

2.
3.
《Advanced Robotics》2013,27(4):399-410
Building environmental models with a vision-guided mobile robot is a key problem in robotics. This paper presents a new strategy for a vision-guided mobile robot to build models of an unknown environment by panoramic sensing. The mobile robot perceives with two types of panoramic sensing: one is for acquiring omnidirectional visual information at an observation point to find the outline structure of the local environment, and the other is for acquiring visual information along a route to build local environmental models. Before exploring the environment, the robot looks around and finds the outline structure of the local environment as a reference frame for acquiring the local models. Then the robot builds the local models while moving along the directions of the outline structure (the outline structure is represented by a simple convex polygon, each side of which has a direction). We have implemented the above-mentioned robot behaviors on a mobile robot that has multiple vision agents. The multiple vision agents can simultaneously execute the different vision tasks needed for panoramic sensing.

4.
This paper describes ongoing research on vision-based mobile robot navigation for wheelchairs. After a guided tour through a natural environment while taking images at regular time intervals, natural landmarks are extracted to automatically build a topological map. Later on, this map can be used for place recognition and navigation. We use visual servoing on the landmarks to steer the robot. In this paper, we investigate ways to improve the performance by incorporating inertial sensors. © 2004 Wiley Periodicals, Inc.

5.
Support vector machines (SVMs) are one of the most successful algorithms for classification. However, due to their space and time requirements, they are not suitable for on-line learning, that is, when presented with an endless stream of training observations. In this paper we propose a new on-line algorithm, called on-line independent support vector machines (OISVMs), which approximately converges to the standard SVM solution each time new observations are added; the approximation is controlled via a user-defined parameter. The method employs a set of linearly independent observations and tries to project every new observation onto the set obtained so far, dramatically reducing time and space requirements at the price of a negligible loss in accuracy. As opposed to similar algorithms, the size of the solution obtained by OISVMs is always bounded, implying a bounded testing time. These statements are supported by extensive experiments on standard benchmark databases as well as on two real-world applications, namely place recognition by a mobile robot in an indoor environment and human grasping posture classification.
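A rough Python sketch of the linear-independence test that underlies this bounded-size construction is shown below; the kernel, the tolerance eta, and the data are assumptions chosen for illustration, not the OISVM training procedure itself.

import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def update_basis(basis, x, eta=1e-3):
    """Add x to the basis only if phi(x) is not (almost) spanned by the basis."""
    if not basis:
        return basis + [x]
    K = np.array([[rbf(u, v) for v in basis] for u in basis])
    k_x = np.array([rbf(u, x) for u in basis])
    coeffs = np.linalg.solve(K + 1e-8 * np.eye(len(basis)), k_x)
    delta = rbf(x, x) - k_x @ coeffs        # projection residual in feature space
    return basis + [x] if delta > eta else basis

rng = np.random.default_rng(0)
basis = []
for _ in range(500):
    basis = update_basis(basis, rng.normal(size=2))
print("basis size after 500 observations:", len(basis))   # stays bounded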

6.
Mobile robotics has achieved notable progress; however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surroundings. In particular, identifying indoor scenes, such as an Office or a Kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as Doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object category classifiers to associate low-level visual features to objects, and contextual relations to associate objects to scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer vision based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus of attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot that is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.
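The object-to-scene step can be pictured as a small Bayesian inference over detected object labels, as in the hedged Python sketch below; the priors, conditional probabilities, and object list are invented for illustration and do not come from the paper.

import numpy as np

scenes = ["office", "kitchen"]
prior = {"office": 0.5, "kitchen": 0.5}
# Assumed conditional probabilities of observing each object in each scene.
p_obj_given_scene = {
    "office":  {"monitor": 0.8, "mug": 0.4, "stove": 0.01},
    "kitchen": {"monitor": 0.05, "mug": 0.6, "stove": 0.7},
}

def scene_posterior(detected_objects):
    log_post = {}
    for s in scenes:
        lp = np.log(prior[s])
        for obj in detected_objects:
            lp += np.log(p_obj_given_scene[s].get(obj, 1e-3))
        log_post[s] = lp
    z = np.logaddexp.reduce(list(log_post.values()))   # normalize in log space
    return {s: np.exp(lp - z) for s, lp in log_post.items()}

print(scene_posterior(["monitor", "mug"]))   # office favored
print(scene_posterior(["stove", "mug"]))     # kitchen favored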

7.
Mobile robots with autonomous navigation capability are used ever more widely in disaster relief, household service, and other areas of daily life. Monocular visual navigation, as one kind of robot visual navigation, has the advantages of low cost and unrestricted range, but still suffers from scale uncertainty and initialization problems. Based on a study of the motion characteristics of mobile robots, this survey gives a modular analysis of monocular visual navigation techniques from three aspects: obstacle detection, spatial localization, and path planning, and, taking the key techniques of monocular visual navigation algorithms as…

8.
9.
Imaging sensors are being increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modeling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images and to learn the visual appearance of the ground online. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to resolve radar ambiguities. This method leads to the following main advantages: (a) self-supervised training of the visual classifier across the portion of the environment where the radar overlaps with the camera field of view, which avoids time-consuming manual labeling and enables on-line implementation; (b) the ground model can be continuously updated during the operation of the vehicle, thus making the system feasible for long-range and long-duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
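One way to picture the radar-supervised training idea is the following Python sketch, in which pixels from radar-labelled ground patches fit a Gaussian appearance model that later classifies image pixels by Mahalanobis distance; the colour features, threshold, and data are assumptions, not the paper's actual algorithm.

import numpy as np

class GroundModel:
    def __init__(self):
        self.samples = []

    def add_radar_labelled_patch(self, patch_pixels):      # (N, 3) RGB array
        self.samples.append(patch_pixels)

    def fit(self):
        data = np.vstack(self.samples).astype(float)
        self.mean = data.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(data, rowvar=False) + 1e-6 * np.eye(3))

    def is_ground(self, pixels, thresh=9.0):                # assumed chi-square-like cut
        d = pixels - self.mean
        m2 = np.einsum("ij,jk,ik->i", d, self.cov_inv, d)   # squared Mahalanobis distance
        return m2 < thresh

rng = np.random.default_rng(1)
model = GroundModel()
model.add_radar_labelled_patch(rng.normal([90, 80, 60], 5, size=(500, 3)))   # "ground" pixels
model.fit()
test = np.vstack([rng.normal([90, 80, 60], 5, size=(5, 3)),    # ground-like pixels
                  rng.normal([30, 120, 200], 5, size=(5, 3))])  # sky-like pixels
print(model.is_ground(test))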

10.
Most localization algorithms are either range-based or vision-based, but the use of only one type of sensor often cannot ensure successful localization. This paper proposes a particle filter-based localization method that combines the range information obtained from a low-cost IR scanner with the SIFT-based visual information obtained from a monocular camera to robustly estimate the robot pose. The rough estimation of the robot pose by the range sensor can be compensated by the visual information given by the camera, and the slow visual object recognition can be overcome by the frequent updates of the range information. Although the bandwidths of the two sensors are different, they can be synchronized by using the encoder information of the mobile robot. Therefore, all data from both sensors are used to estimate the robot pose without time delay, and the samples used for estimating the robot pose converge faster than those from either range-based or vision-based localization. This paper also suggests a method for evaluating the state of localization based on the normalized probability of a vision sensor model. Various experiments show that the proposed algorithm can reliably estimate the robot pose in various indoor environments and can recover the robot pose upon incorrect localization.
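The fusion of frequent range updates with slower but more accurate visual updates can be sketched, in a deliberately simplified one-dimensional form, as below; all noise levels, update rates, and the world model are toy assumptions rather than the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
true_pose, particles = 0.0, rng.uniform(-5, 5, size=1000)

def resample(p, w):
    w = w / w.sum()
    return p[rng.choice(len(p), size=len(p), p=w)]

for step in range(50):
    true_pose += 0.1
    particles += 0.1 + rng.normal(0, 0.02, size=particles.size)   # encoder (odometry) update

    z_range = true_pose + rng.normal(0, 0.3)                      # fast but noisy range reading
    w = np.exp(-0.5 * ((particles - z_range) / 0.3) ** 2)

    if step % 10 == 0:                                            # slow, more accurate vision fix
        z_vision = true_pose + rng.normal(0, 0.05)
        w *= np.exp(-0.5 * ((particles - z_vision) / 0.05) ** 2)

    particles = resample(particles, w + 1e-12)

print("estimate:", particles.mean(), "truth:", true_pose)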

11.
With the recent proliferation of robust but computationally demanding robotic algorithms, there is now a need for a mobile robot platform equipped with powerful computing facilities. In this paper, we present the design and implementation of Beobot 2.0, an affordable research-level mobile robot equipped with a cluster of sixteen 2.2-GHz processing cores. Beobot 2.0 uses compact Computer on Module (COM) processors with modest power requirements, thus accommodating various robot design constraints while still satisfying the requirement for computationally intensive algorithms. We discuss issues involved in utilizing multiple COM Express modules on a mobile platform, such as interprocessor communication, power consumption, cooling, and protection from shocks, vibrations, and other environmental hazards such as dust and moisture. We have applied Beobot 2.0 to the following computationally demanding tasks: laser-based robot navigation, scale-invariant feature transform (SIFT) object recognition, finding objects in a cluttered scene using visual saliency, and vision-based localization, wherein the robot has to identify landmarks from a large database of images in a timely manner. For the last task, we tested the localization system in three large-scale outdoor environments, which provide 3,583, 6,006, and 8,823 test frames, respectively. The localization errors for the three environments were 1.26, 2.38, and 4.08 m, respectively. The per-frame processing times were 421.45, 794.31, and 884.74 ms, respectively, representing speedup factors of 2.80, 3.00, and 3.58 when compared to a single dual-core computer performing localization. © 2010 Wiley Periodicals, Inc.

12.
The control of a robot system using camera information is a challenging task in the presence of unpredictable conditions such as feature point mismatches and changing scene illumination. This paper presents a solution for the visual control of a nonholonomic mobile robot in demanding real-world circumstances based on machine learning techniques. A novel intelligent approach for mobile robots using neural networks (NNs), the learning from demonstration (LfD) framework, and the epipolar geometry between two views is proposed and evaluated in a series of experiments. A direct mapping from the image space to the actuator commands is conducted in two phases. In an offline phase, the NN–LfD approach is employed in order to relate the feature position in the image plane to the angular velocity for lateral motion correction. The online phase refers to a vision-based scheme that switches between the epipole-based linear velocity controller and the NN–LfD-based angular velocity controller, with the selection depending on the feature distance from a pre-defined interest area in the image. In total, 18 architectures and 6 learning algorithms are tested in order to find the optimal solution for robot control. The best training outcomes for each learning algorithm are then employed in real time so as to discover the optimal NN configuration for robot orientation correction. Experiments conducted on a nonholonomic mobile robot in a structured indoor environment confirm excellent performance with respect to system robustness and positioning accuracy at the desired location.
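A minimal Python sketch of the switching idea is given below: a regressor trained on toy demonstrations stands in for the paper's NN–LfD angular-velocity mapping, and a constant forward velocity stands in for the epipole-based linear-velocity controller; gains, thresholds, and data are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy demonstrations: image-plane feature error -> angular velocity command.
u_err = rng.uniform(-1, 1, size=(200, 1))
omega_demo = -1.5 * u_err.ravel() + rng.normal(0, 0.02, size=200)
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(u_err, omega_demo)

def control(u_error, interest_radius=0.2, v_nominal=0.3):
    if abs(u_error) > interest_radius:                   # far from interest area: correct heading
        return 0.0, float(nn.predict([[u_error]])[0])
    return v_nominal, 0.0                                # inside interest area: drive forward

print(control(0.6))    # mostly turning
print(control(0.05))   # mostly driving forward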

13.
This paper presents the design, implementation and evaluation of a trainable vision-guided mobile robot. The robot, CORGI, has a CCD camera as its only sensor, which it is trained to use for a variety of tasks. The techniques used for training and the choice of natural-light vision as the primary sensor make the methodology immediately applicable to tasks such as trash collection or fruit picking. For example, the robot is readily trained to perform a ball-finding task which involves avoiding obstacles and aligning with tennis balls. The robot is able to move at speeds up to 0.8 m/s while performing this task, and has never had a collision in the trained environment. It can process video and update the actuators at 11 Hz using a single $20 microprocessor to perform all computation. Further results are shown to evaluate the system for generalization across unseen domains, fault tolerance and dynamic environments.

14.
Wyeth  Gordon 《Machine Learning》1998,31(1-3):201-222
This paper presents the design, implementation and evaluation of a trainable vision-guided mobile robot. The robot, CORGI, has a CCD camera as its only sensor, which it is trained to use for a variety of tasks. The techniques used for training and the choice of natural-light vision as the primary sensor make the methodology immediately applicable to tasks such as trash collection or fruit picking. For example, the robot is readily trained to perform a ball-finding task which involves avoiding obstacles and aligning with tennis balls. The robot is able to move at speeds up to 0.8 m/s while performing this task, and has never had a collision in the trained environment. It can process video and update the actuators at 11 Hz using a single $20 microprocessor to perform all computation. Further results are shown to evaluate the system for generalization across unseen domains, fault tolerance and dynamic environments.

15.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on robotics and automation, pp 1794–1799, 2007) are used in order to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on advanced robotics, 2007), based on external control, is used in order to, first, guide the robot hand towards the grasp position and, second, perform the task taking into account external forces. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed in order to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

16.
Research on a Vision-Based Global Localization System for Mobile Robots   (cited 6 times: 1 self-citation, 5 by others)
魏芳  董再励  孙茂相  王晓蕾 《机器人》2001,23(5):400-403
This paper describes a vision-based global localization system for the autonomous navigation and localization of mobile robots. The localization system consists of LED active landmarks, an omnidirectional vision sensor, and a data processing system. The paper focuses on the methods applied to speed up panoramic image processing and to improve the reliability and accuracy of environmental beacon recognition, and presents experimental results. The experiments show that visual localization is a global navigation and localization technique with clear research value and promising applications.

17.
Autonomous mobile robot path planning is a common topic in robotics and computational geometry. Many important results have been found, but many issues remain open. This paper first describes the new problem of the symmetrically aligned robot-obstacle-goal (SAROG) configuration that arises when using potential field methods for mobile robot path planning. In addition, we consider a constant robot speed for practical use. The SAROG configuration and the constant speed involve two potential risks: robot-obstacle collision and the local minima trap. To deal with these two risks, we analyze the conditions for collision and for the local minima trap, and propose new potential functions and random-force-based algorithms. For algorithm verification, we use a WiRobot X80 with three ultrasonic range sensor modules.
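The interplay of the attractive and repulsive potentials, the symmetric-alignment trap, and a random escape force can be sketched in Python as follows; the gains, influence distance, trap test, and simulation setup are assumptions rather than the paper's actual formulation.

import numpy as np

rng = np.random.default_rng(0)
k_att, k_rep, d0 = 1.0, 0.5, 1.5            # assumed gains and obstacle influence distance

def net_force(q, goal, obstacle):
    f = k_att * (goal - q)                   # attractive force toward the goal
    diff = q - obstacle
    d = np.linalg.norm(diff)
    if d < d0:                               # repulsive force only inside the influence distance
        f += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return f

q = np.array([0.0, 0.0])
goal, obstacle = np.array([6.0, 0.0]), np.array([3.0, 0.0])   # symmetric robot-obstacle-goal alignment
speed, dt = 0.5, 0.1                         # constant robot speed
history = [q.copy()]
for step in range(600):
    f = net_force(q, goal, obstacle)
    # Trap detection: little net progress over the last 20 steps -> inject a random force.
    if len(history) > 20 and np.linalg.norm(q - history[-20]) < 0.5 * speed * dt:
        f = f + 5.0 * rng.normal(size=2)
    q = q + speed * dt * f / (np.linalg.norm(f) + 1e-9)
    history.append(q.copy())
    if np.linalg.norm(goal - q) < 0.2:
        break
print("reached goal:", np.linalg.norm(goal - q) < 0.2, "at", np.round(q, 2))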

18.
A Stereo-Vision-Based Navigation Algorithm for Mobile Robots   (cited 1 time: 0 self-citations, 1 by others)
A mobile robot's stereo vision system not only provides 3D terrain maps for obstacle avoidance and path planning; its results can also be used for visual navigation. Building on a mobile robot stereo vision system, this paper studies a visual measurement algorithm based on the stereo image pairs taken at two successive positions for the continuous navigation of the mobile robot, and discusses the factors affecting navigation accuracy and ways to improve it. It also studies a terrain matching algorithm based on local and global 3D terrain maps for periodically correcting the position error; the algorithm is simple to implement, and its localization accuracy depends on the accuracy of the terrain maps. Experimental results demonstrate the effectiveness of both methods, which together cover short-range as well as medium- and long-range navigation tasks.
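The frame-to-frame visual measurement between two robot positions can be illustrated with a least-squares rigid alignment of the two stereo point clouds (Kabsch/Procrustes), as in the Python sketch below; the simulated points, noise, and known correspondences are assumptions, and the paper's own algorithm may differ.

import numpy as np

rng = np.random.default_rng(0)

def rigid_transform(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i||^2 for corresponding (N, 3) point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Terrain points seen at position 1, and the same points seen after the robot
# moved 0.5 m along x and yawed 5 degrees (simulated with a little noise).
P1 = rng.uniform([-2, -2, 1], [2, 2, 5], size=(30, 3))
yaw = np.deg2rad(5.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, 0.0, 0.0])
P2 = (R_true @ P1.T).T + t_true + rng.normal(0, 0.01, P1.shape)

R_est, t_est = rigid_transform(P1, P2)
print("estimated translation:", np.round(t_est, 3))   # relative transform between the two views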

19.
In an environment where robots coexist with humans, mobile robots should be human-aware and comply with humans' behavioural norms so as not to disturb humans' personal space and activities. In this work, we propose an inverse reinforcement learning-based time-dependent A* planner for human-aware robot navigation with local vision. In this method, the planning process of time-dependent A* is regarded as a Markov decision process, and the cost function of the time-dependent A* is learned using inverse reinforcement learning from captured human demonstration trajectories. With this method, a robot can plan a path that complies with humans' behaviour patterns and the robot's kinematics. When constructing the feature vectors of the cost function, considering the local vision characteristics, we propose a visual coverage feature that enables robots to learn from how humans move within a limited visual field. The effectiveness of the proposed method has been validated by experiments in real-world scenarios: using this approach, robots can effectively mimic human motion patterns when avoiding pedestrians; furthermore, in a limited visual field, robots learn to choose a path that gives them larger visual coverage, which yields better navigation performance.
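A compact Python sketch of a time-dependent A* search whose per-step cost is a weighted sum of features appears below; the grid world, pedestrian trajectory, features, and weights are invented stand-ins for what the paper learns with inverse reinforcement learning.

import heapq
import numpy as np

W = np.array([1.0, 4.0])                       # stand-in "learned" feature weights
ped_traj = [(2, y) for y in range(10)]         # pedestrian walking up the column x = 2

def features(cell, t):
    ped = ped_traj[min(t, len(ped_traj) - 1)]
    prox = np.exp(-np.hypot(cell[0] - ped[0], cell[1] - ped[1]))
    return np.array([1.0, prox])               # [step cost, pedestrian proximity]

def td_astar(start, goal, size=6):
    h0 = abs(goal[0] - start[0]) + abs(goal[1] - start[1])
    openq = [(h0, 0.0, start, 0, [start])]     # entries: (f = g + h, g, cell, t, path)
    best = {}
    while openq:
        f, g, cell, t, path = heapq.heappop(openq)
        if cell == goal:
            return path
        if best.get((cell, t), float("inf")) <= g:
            continue
        best[(cell, t)] = g
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:   # (0, 0) = wait in place
            nxt = (cell[0] + dx, cell[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                ng = g + W @ features(nxt, t + 1)
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])    # admissible Manhattan heuristic
                heapq.heappush(openq, (ng + h, ng, nxt, t + 1, path + [nxt]))
    return None

print(td_astar((0, 0), (5, 5)))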

20.
In this paper, a method for visual attentional selection in mobile robots is proposed, based on amplification of the selected stimulus. Attention processing is performed on the vision sensor, which is integrated on a silicon chip and consists of a contrast-sensitive retina with the ability to change the local inhibitory strength between adjacent pixel elements. As a result, the sensitivity to visual contrast at a particular region of the retina can be adjusted. Since the local inhibitory strength can be regulated from outside the chip, a reconfigurable sensor is realized. This “attention-retina” was tested on an autonomous robot (MorphoII) given the task of selecting one of two alternative lines to follow. The robot develops a directional preference by associating its visual stimulus with a stimulus that provides electrical energy, in this case a solar cell.
