Similar Articles
1.
This paper is focused on endowing a mobile robot with topological spatial cognition. We propose an integrated model in which the concept of a ‘place’ is defined as a collection of appearances or locations sharing common perceptual signatures or physical boundaries. In this model, as the robot navigates, places are detected in a systematic manner by monitoring coherency in the incoming visual data while pruning out uninformative or scanty data. Detected places are then either recognized or learned, along with mapping as necessary. The novelties of the model are twofold: first, it explicitly incorporates a long-term spatial memory in which the knowledge of learned places and their spatial relations is retained in place and map memories, respectively; second, the processing modules operate together so that the robot is able to build its spatial memory in an organized, incremental and unsupervised manner. Thus, the robot’s long-term spatial memory evolves completely on its own, while learned knowledge is organized based on appearance-related similarities in a manner that is amenable to higher-level semantic reasoning. As such, the proposed model constitutes a step forward towards robots that are capable of interacting with their environments in an autonomous manner.
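The coherency-monitoring idea in this abstract can be sketched in a few lines: declare a place boundary whenever the incoming frame descriptor drifts too far from the running appearance of the current place. This is only an illustrative sketch under assumed thresholds, not the paper's actual detection pipeline; `segment_places` and its `threshold` parameter are hypothetical names.

```python
# Sketch of coherency-based place detection: a new place boundary is
# declared when the incoming frame descriptor drifts too far from the
# running mean appearance of the current place (hypothetical threshold).
import numpy as np

def segment_places(features, threshold=1.0):
    """Split a sequence of frame descriptors into coherent place segments.

    features: (T, D) array of per-frame descriptors.
    Returns the list of frame indices where a new place starts."""
    boundaries = [0]
    mean, count = features[0].astype(float), 1
    for i, f in enumerate(features[1:], start=1):
        if np.linalg.norm(f - mean) > threshold:   # coherency broken
            boundaries.append(i)                   # start a new place
            mean, count = f.astype(float), 1
        else:                                      # update running mean
            count += 1
            mean += (f - mean) / count
    return boundaries
```

A sequence whose appearance jumps (e.g. the robot passes through a doorway) is split at the jump, while noisy but coherent frames stay in one segment.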

2.
The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm capable of learning model representations incrementally while keeping memory requirements under control. We combine an incremental extension of SVMs [43] with a method that reduces the number of support vectors needed to build the decision function without any loss in performance [15], introducing a parameter which permits a user-set trade-off between performance and memory. The resulting algorithm achieves the same recognition results as the original incremental method while reducing memory growth. Our method is especially suited to autonomous systems in realistic settings. We present experiments on two common scenarios in this domain: adaptation in the presence of dynamic changes, and transfer of knowledge between two different autonomous agents, focusing in both cases on the problem of visual place recognition applied to mobile robot topological localization. Experiments in both scenarios clearly show the power of our approach.
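A minimal sketch of the memory-bounded incremental SVM idea: retrain on new batches while carrying over only (a budgeted number of) support vectors between batches. This is an illustrative approximation, not the exact algorithms of refs [43] and [15]; `incremental_fit` and `memory_budget` are names introduced here.

```python
# Sketch of memory-bounded incremental SVM training (illustrative only;
# not the exact incremental SVM [43] or support-vector reduction [15]).
import numpy as np
from sklearn.svm import SVC

def incremental_fit(batches, memory_budget=200, C=1.0):
    """Fit an SVM over a stream of (X, y) batches, retaining at most
    memory_budget support vectors between batches to bound memory growth."""
    clf = SVC(kernel="rbf", C=C, gamma="scale")
    n_features = batches[0][0].shape[1]
    X_mem = np.empty((0, n_features))
    y_mem = np.empty((0,))
    for X, y in batches:
        X_train = np.vstack([X_mem, X])
        y_train = np.concatenate([y_mem, y])
        clf.fit(X_train, y_train)
        keep = clf.support_[:memory_budget]   # budget = memory/performance knob
        X_mem, y_mem = X_train[keep], y_train[keep]
    return clf
```

The `memory_budget` parameter plays the role of the user-set trade-off: a smaller budget caps memory at the cost of some accuracy.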

3.
One of the most impressive characteristics of human perception is its domain adaptation capability. Humans can recognize objects and places simply by transferring knowledge from their past experience. Inspired by this, current research in robotics is addressing a great challenge: building robots able to sense and interpret the surrounding world by reusing information previously collected, gathered by other robots, or obtained from the web. But how can a robot automatically understand what is useful among a large amount of information and perform knowledge transfer? In this paper we address the domain adaptation problem in the context of visual place recognition. We consider the scenario where a robot equipped with a monocular camera explores a new environment. In this situation traditional approaches based on supervised learning perform poorly, as no annotated data are provided in the new environment and the models learned from data collected in other places are inappropriate due to the large variability of visual information. To overcome these problems we introduce a novel transfer learning approach. With our algorithm the robot is given only some training data (annotated images collected in different environments by other robots) and is able to decide whether, and how much, this knowledge is useful in the current scenario. At the core of our approach is a transfer risk measure which quantifies the similarity between the given and the new visual data. To improve performance, we also extend our framework to take into account multiple visual cues. Our experiments on three publicly available datasets demonstrate the effectiveness of the proposed approach.

4.
The development of robots that learn from experience is a relentless challenge confronting artificial intelligence today. This paper describes a robot learning method which enables a mobile robot to simultaneously acquire the ability to avoid objects, follow walls, seek goals and control its velocity as a result of interacting with the environment without human assistance. The robot acquires these behaviors by learning how fast it should move along predefined trajectories with respect to the current state of the input vector. This enables the robot to perform object avoidance, wall following and goal seeking behaviors by choosing to follow fast trajectories near the forward direction, the closest object, or the goal location, respectively. Learning trajectory velocities can be done relatively quickly because the required knowledge can be obtained from the robot's interactions with the environment without incurring the credit assignment problem. We provide experimental results to verify our robot learning method by using a mobile robot to simultaneously acquire all three behaviors.

5.
In this paper, we present a novel cognitive framework allowing a robot to form memories of relevant traits of its perceptions and to recall them when necessary. The framework is based on two main principles: on the one hand, we propose an architecture inspired by current knowledge of human memory organisation; on the other, we integrate this architecture with the notion of context, which is used to modulate the knowledge acquisition process when consolidating memories and forming new ones, and with the notion of familiarity, which is employed to retrieve the proper memories given relevant cues. Although much research has exploited Machine Learning approaches to provide robots with internal models of their environment (including the objects and events therein), we argue that such approaches may not be the right direction to follow if long-term, continuous knowledge acquisition is to be achieved. As a case study scenario, we focus on both robot–environment and human–robot interaction processes. In robot–environment interaction, the robot performs pick-and-place movements with the objects in its workspace while observing their displacement on a table in front of it, and progressively forms memories of relevant cues (e.g. colour, shape or relative position) in a context-aware fashion. In human–robot interaction, the robot can, upon request by humans, recall specific snapshots representing past events using both sensory information and contextual cues.

6.
To improve the accuracy of target-object prediction in instructions given to domestic service robots, a multimodal natural language processing (NLP) instruction classification method based on hybrid deep learning is proposed. The method draws on multiple modalities, namely linguistic features, visual features and relational features, and employs two deep learning approaches to encode the multimodal features separately. For language instructions, a multi-layer bidirectional long short-term memory (Bi-LSTM...

7.
In this article, we present a cognitive system based on artificial curiosity for high-level knowledge acquisition from visual patterns. Curiosity (both perceptual and epistemic) is realized by combining perceptual saliency detection with Machine Learning based approaches. Learning is accomplished through autonomous observation of visual patterns and through interaction with an expert (a human tutor) possessing semantic knowledge about the detected visual patterns. Experimental results validating the investigated system were obtained with a humanoid robot visually acquiring knowledge from its surrounding environment while interacting with a human tutor. We show that our cognitive system allows the humanoid robot to discover the surrounding world in which it evolves, to learn new knowledge about it, and to describe it using human-like (natural) utterances.

8.
To address the problem of target tracking by a mobile robot with monocular vision in an unknown environment, a method for simultaneous robot localization, mapping and target tracking is proposed that combines target state estimation with observability-aware robot control. For state estimation, a target tracking algorithm within an extended Kalman filter framework is designed on top of monocular simultaneous localization and mapping; for observability control, an optimal control method based on maximizing the update of the target covariance matrix is designed. The method enables the robot, under monocular vision, to simultaneously estimate its own state, the environment state and the target state while following the target. Simulation and prototype experiments verify the coupling between target state estimation and robot control and demonstrate the accuracy and effectiveness of the method. The results show that the robot produces spiral maneuvering trajectories, and that target tracking and robot localization accuracy improve in proportion to the robot's maneuvering capability.
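The estimation core described above can be illustrated with a much-simplified stand-in: a linear constant-velocity Kalman filter with a position measurement. The paper's actual filter is an EKF coupled with monocular SLAM and observability control; the model, noise values and function name below are assumptions for illustration only.

```python
# Minimal constant-velocity Kalman filter for target state estimation
# (a simplified linear stand-in for the paper's EKF-based tracker).
import numpy as np

def kf_step(x, P, z, dt=0.1, q=0.01, r=0.1):
    """One predict/update cycle. State x = [px, py, vx, vy]; z = [px, py]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                          # constant-velocity model
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])                # position-only measurement
    Q = q * np.eye(4)                               # process noise (assumed)
    R = r * np.eye(2)                               # measurement noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In the paper, the trace (or update) of the target covariance P is also what the observability-aware controller seeks to shape by choosing robot motions.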

9.
In this work a vision-based autonomous system capable of memorizing and recalling sensory-motor associations is presented. The robot's behaviors are based on learned associations between its sensory inputs and its motor actions. Perception is divided into two stages. The first is functional: algorithmic procedures extract visual features such as disparity and local orientation from the input images in real time. The second is mnemonic: the features produced by the different functional areas are integrated with motor information and memorized or recalled. An efficient memory organization and fast information retrieval enable the robot to learn to navigate and to avoid obstacles without the need for an internal metric reconstruction of the external environment. Received: 22 November 1996 / Accepted: 18 November 1997

10.
《Advanced Robotics》2013,27(12):1351-1367
Robot imitation is a useful and promising alternative to robot programming. Robot imitation involves two crucial issues. The first is how a robot can imitate a human whose physical structure and properties differ greatly from its own. The second is how the robot can generate various motions from finite programmable patterns (generalization). This paper describes a novel approach to robot imitation based on the robot's own physical experiences. We considered the target task of moving an object on a table. For imitation, we focused on an active sensing process in which the robot acquires the relation between the object's motion and its own arm motion. For generalization, we applied the RNNPB (recurrent neural network with parametric bias) model to enable recognition/generation of imitation motions. The robot infers the arm motion that reproduces the object motion demonstrated by a human operator. Experimental results demonstrated the generalization capability of our method, which enables the robot to imitate not only motions it has experienced, but also unknown motions through nonlinear combination of the experienced motions.

11.
Imitation has been receiving increasing attention, not simply as a means of generating new motions but also for the emergence of communication. This paper proposes a system through which a humanoid obtains new motions by learning the rules of interaction with a human partner, based on the assumption of a mirror system. First, the humanoid learns the correspondence between its own posture and the partner's using ISOMAP, under the assumption that the human partner imitates the robot's motions. Based on this correspondence, the robot can easily map the partner's observed gestures onto its own motions. This correspondence then enables the robot to acquire new motion primitives for the interaction. Furthermore, through this process the humanoid learns an interaction rule that controls gesture turn-taking. Preliminary results and future issues are given.

12.
To address the safety hazards posed by bare overhead conductors, an automatic insulation-wrapping robot for distribution lines with bare overhead conductors is proposed. The robot's mechanisms are compactly integrated and, assisted by multiple sensors, it can autonomously travel along bare conductors and wrap them. A CAN bus architecture is used for motor management, with a PID control algorithm achieving precise coordination among the motors; an IMU enables more efficient and stable operation in different environments; and machine vision provides real-time transmission and autonomous assessment of the wrapping results. Experiments show that the wrapping robot performs well and has broad application prospects.

13.
This paper studies the scaling up of behaviors in evolutionary robotics (ER), where complex behaviors are obtained from simple ones. Each behavior is supported by an artificial neural network (ANN)-based controller, or neurocontroller. A method for generating a hierarchy of neurocontrollers, resorting to the paradigm of Layered Evolution (LE), is developed and verified experimentally through computer simulations and tests on a Khepera® micro-robot. Several behavioral modules are initially evolved using specialized neurocontrollers based on different ANN paradigms. The results show that coordinating simple behaviors through LE is a feasible strategy that gives rise to emergent complex behaviors, and that these complex behaviors can solve real-world problems efficiently. From a pure evolutionary perspective, however, the methodology presented is overly dependent on the user's prior knowledge of the problem to be solved, and requires that evolution take place in a rigid, prescribed framework. Mobile robot navigation in an unknown environment is used as a test bed for the proposed scaling strategies.

14.
This paper proposes a new technique for vision-based robot navigation. The basic framework is to localise the robot by comparing images taken at its current location with reference images stored in its memory. In this work, the only sensor mounted on the robot is an omnidirectional camera. The Fourier components of the omnidirectional image provide a signature for the views acquired by the robot and can be used to simplify the solution to the robot navigation problem. The proposed system can calculate the robot position with variable accuracy (‘hierarchical localisation’) saving computational time when the robot does not need a precise localisation (e.g. when it is travelling through a clear space). In addition, the system is able to self-organise its visual memory of the environment. The self-organisation of visual memory is essential to realise a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.
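The Fourier-signature idea lends itself to a short sketch: the magnitude spectrum of each row of an unwrapped panoramic image is invariant to horizontal shifts, i.e. to the robot rotating in place, so it makes a compact, rotation-invariant view signature. The function names, component count and distance choice below are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a Fourier signature for panoramic images: per-row FFT
# magnitudes are invariant to horizontal (rotational) image shifts.
import numpy as np

def fourier_signature(panorama, n_components=15):
    """Return the first n_components FFT magnitudes of each image row."""
    spectra = np.abs(np.fft.fft(panorama, axis=1))
    return spectra[:, :n_components]

def signature_distance(sig_a, sig_b):
    """L1 distance between signatures; small values suggest the same place."""
    return np.abs(sig_a - sig_b).sum()
```

Because only magnitudes are kept, an image and a horizontally rolled copy of it (the same place seen at a different heading) yield identical signatures, while a different place yields a clearly larger distance.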

15.
A central goal of robotics and AI is to be able to deploy an agent to act autonomously in the real world over an extended period of time. To operate in the real world, autonomous robots rely on sensory information. Despite the potential richness of visual information from on-board cameras, many mobile robots continue to rely on non-visual sensors such as tactile sensors, sonar, and laser. This preference for relatively low-fidelity sensors can be attributed to, among other things, the characteristic requirement of real-time operation under limited computational resources. Illumination changes pose another big challenge. For true extended autonomy, an agent must be able to recognize for itself when to abandon its current model in favor of learning a new one; and how to learn in its current situation. We describe a self-contained vision system that works on-board a vision-based autonomous robot under varying illumination conditions. First, we present a baseline system capable of color segmentation and object recognition within the computational and memory constraints of the robot. This relies on manually labeled data and operates under constant and reasonably uniform illumination conditions. We then relax these limitations by introducing algorithms for (i) Autonomous planned color learning, where the robot uses the knowledge of its environment (position, size and shape of objects) to automatically generate a suitable motion sequence and learn the desired colors, and (ii) Illumination change detection and adaptation, where the robot recognizes for itself when the illumination conditions have changed sufficiently to warrant revising its knowledge of colors. Our algorithms are fully implemented and tested on the Sony ERS-7 Aibo robots.

16.
Optimal landmark selection for triangulation of robot position
A mobile robot can identify its own position relative to a global environment model by using triangulation based on three landmarks in the environment. It is shown that this procedure may be very sensitive to noise depending on spatial landmark configuration, and relative position between robot and landmarks. A general analysis is presented which permits prediction of the uncertainty in the triangulated position.

In addition, an algorithm is presented for automatic selection of optimal landmarks. This algorithm enables a robot to continuously base its position computation on the set of available landmarks that provides the least noise-sensitive position estimate. It is demonstrated that using this algorithm can reduce uncertainty by more than one order of magnitude.
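The basic triangulation step can be sketched as a least-squares intersection of bearing lines. Note the simplifying assumption here: the robot measures the absolute bearing to each known landmark, whereas the paper triangulates from the relative angles between three landmarks; the noise-sensitivity question it analyses applies in both formulations. `triangulate` is a name introduced for this sketch.

```python
# Sketch of position triangulation from landmark bearings, assuming
# absolute bearings to known landmarks (a simplification of the paper's
# three-landmark relative-angle triangulation).
import numpy as np

def triangulate(landmarks, bearings):
    """Least-squares intersection of bearing lines.

    landmarks: (N, 2) known positions; bearings: (N,) absolute angles
    measured from the robot to each landmark."""
    A, b = [], []
    for (lx, ly), theta in zip(landmarks, bearings):
        # The robot lies on the line through (lx, ly) with direction theta:
        # sin(theta) * x - cos(theta) * y = sin(theta) * lx - cos(theta) * ly
        A.append([np.sin(theta), -np.cos(theta)])
        b.append(np.sin(theta) * lx - np.cos(theta) * ly)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos
```

With noisy bearings, the residual of this least-squares fit grows with poor landmark geometry, which is exactly why landmark selection matters.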


17.
《Advanced Robotics》2012,26(17):1995-2020

In this paper, we propose a robot that acquires multimodal information, i.e. visual, auditory, and haptic information, fully autonomously using its embodiment. We also propose batch and online algorithms for multimodal categorization based on the acquired multimodal information and partial words given by human users. To obtain multimodal information, the robot detects an object on a flat surface. Then, the robot grasps and shakes it to obtain haptic and auditory information. For obtaining visual information, the robot uses a small hand-held observation table with an XBee wireless controller to control the viewpoints for observing the object. In this paper, for multimodal concept formation, multimodal latent Dirichlet allocation using Gibbs sampling is extended to an online version. This framework makes it possible for the robot to learn object concepts naturally in everyday operation in conjunction with a small amount of linguistic information from human users. The proposed algorithms are implemented on a real robot and tested using real everyday objects to show the validity of the proposed system.

18.
Huimin Lu  Xun Li  Hui Zhang 《Advanced Robotics》2013,27(18):1439-1453
Topological localization is especially suitable for human–robot interaction and a robot's high-level planning, and it can be realized through visual place recognition. In this paper, bag-of-features, a popular and successful approach in the pattern recognition community, is introduced to realize robot topological localization. By combining real-time local visual features that we propose for omnidirectional vision with support vector machines, a robust, real-time visual place recognition algorithm based on omnidirectional vision is obtained. Panoramic images from the COLD database were used in experiments to determine the best algorithm parameters and the best training conditions. The experimental results show that, using our algorithm, the robot can achieve robust topological localization in real time with a high success rate.
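The bag-of-features pipeline named above can be sketched generically: cluster local descriptors into a visual vocabulary, represent each image as a normalized word histogram, and classify histograms with an SVM. This sketch uses k-means and a linear SVM as stand-ins; the paper uses its own real-time omnidirectional features, and the function names here are assumptions.

```python
# Minimal bag-of-features place representation (illustrative stand-in for
# the paper's custom omnidirectional features + SVM pipeline).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(descriptor_sets, k=8, seed=0):
    """Cluster all local descriptors from the training images into k visual words."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_desc)

def bof_histogram(vocab, descriptors):
    """L1-normalised histogram of visual-word occurrences for one image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

Training then reduces to fitting an SVM (e.g. `LinearSVC`) on the histograms of labelled place images, and localization to predicting the place label of the current image's histogram.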

19.
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. At the finest level, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control Arbitrator ranks the results of each loop according to confidence indices derived solely from the sensory information. This architecture has clear advantages regarding the overall performance of the system, which is not limited by the "slowest link," and regarding fault tolerance, since a fault in one module does not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."

20.
Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability.
