Related Articles
A total of 20 related articles were found.
1.
We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism that enables active learning of inverse models in high-dimensional redundant robots. It allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture drives the robot to actively sample novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that solve it. For both learning and generalization, the system leverages regression techniques that infer the motor policy parameters corresponding to a given novel parameterized task from the previously learned correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three robotic setups: (1) learning inverse kinematics in a highly redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be much faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals that maximize competence progress creates developmental trajectories that drive the robot to progressively focus on tasks of increasing complexity, and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than several standard active motor babbling methods; and (3) the architecture allows the robot to actively discover which parts of its task space it can learn to reach and which parts it cannot.
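As a rough illustration of the goal-sampling idea (not the authors' implementation), the Python sketch below keeps a fixed partition of a 1-D task space into regions, scores each region by the absolute change of a recent-competence average, and samples goals epsilon-greedily from the most interesting region; the region bounds, window size, and toy competence signal are all invented for the example, and SAGG-RIAC additionally splits regions adaptively and runs a low-level policy learner.

```python
# Illustrative sketch only: fixed 1-D task-space regions, a scalar competence
# score, and epsilon-greedy selection by competence progress.
import random
from collections import deque

class GoalRegion:
    def __init__(self, low, high, window=20):
        self.low, self.high = low, high
        self.competences = deque(maxlen=window)  # recent competence values

    def interest(self):
        """Absolute competence progress: |recent mean - older mean|."""
        c = list(self.competences)
        if len(c) < 4:
            return float("inf")  # force initial sampling of every region
        half = len(c) // 2
        return abs(sum(c[half:]) / (len(c) - half) - sum(c[:half]) / half)

    def sample_goal(self):
        return random.uniform(self.low, self.high)

def select_goal(regions, epsilon=0.2):
    """Mostly exploit the region with the highest competence progress."""
    region = (random.choice(regions) if random.random() < epsilon
              else max(regions, key=lambda r: r.interest()))
    return region, region.sample_goal()

# Toy usage: competence is the negative reaching error of some low-level learner.
regions = [GoalRegion(i / 4, (i + 1) / 4) for i in range(4)]
for _ in range(200):
    region, goal = select_goal(regions)
    region.competences.append(-abs(goal - 0.3))  # stand-in competence signal
print([round(r.interest(), 3) for r in regions])
```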

2.
To address the poor learning initiative of two-wheeled self-balancing robots, and inspired by the psychological theory of intrinsic motivation, an autonomous developmental algorithm for intelligent robots based on intrinsic motivation is proposed. Within a reinforcement learning framework, the algorithm introduces an intrinsic motivation mechanism that models human curiosity as an internal drive, which acts on the whole learning process together with the external reward signal. A two-layer internal recurrent neural network is used to store and accumulate the learned knowledge, so that the robot gradually acquires autonomous balancing skills. Finally, to counter the effect of measurement noise on the two wheels' angular velocities in balance control, a Kalman filter is applied for compensation, speeding up convergence and reducing the system error. Simulation experiments show that the algorithm enables a two-wheeled robot to build cognition through interaction with its environment and successfully learn motion balance control skills.
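Two of the ingredients this abstract names, an intrinsic curiosity bonus added to the external reward and Kalman filtering of the noisy wheel angular velocity, can be sketched as below; all constants and the random-walk filter model are illustrative assumptions of ours, not the paper's design.

```python
# Rough illustration only (constants invented): a curiosity bonus added to the
# external reward, and a scalar Kalman filter smoothing a noisy wheel
# angular-velocity measurement.
import random

class ScalarKalman:
    def __init__(self, q=1e-3, r=1e-1):
        self.x, self.p = 0.0, 1.0  # state estimate and its variance
        self.q, self.r = q, r      # process / measurement noise variances

    def update(self, z):
        self.p += self.q                 # predict (random-walk model)
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

def total_reward(r_ext, prediction_error, beta=0.1):
    """External reward plus a curiosity term driven by prediction error."""
    return r_ext + beta * prediction_error

kf = ScalarKalman()
for _ in range(5):
    print(round(kf.update(2.0 + random.gauss(0, 0.3)), 3))  # true rate 2 rad/s
print(total_reward(r_ext=-0.1, prediction_error=0.4))
```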

3.
4.
MOSAIC model for sensorimotor learning and control
Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. We previously proposed a new modular architecture, the modular selection and identification for control (MOSAIC) model, for motor learning and control based on multiple pairs of forward (predictor) and inverse (controller) models. The architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the set of inverse models appropriate for a given environment. It combines both feedforward and feedback sensorimotor information so that the controllers can be selected both prior to movement and subsequently during movement. This article extends and evaluates the MOSAIC architecture in the following respects. First, learning in the architecture was implemented with both the original gradient-descent method and the expectation-maximization (EM) algorithm; unlike gradient descent, the newly derived EM algorithm is robust to the initial conditions and learning parameters. Second, simulations of an object manipulation task demonstrate that the architecture can learn to manipulate multiple objects and switch between them appropriately. Moreover, after learning, the model shows generalization to novel objects whose dynamics lie within the polyhedra of already learned dynamics. Finally, when each of the dynamics is associated with a particular object shape, the model is able to select the appropriate controller before movement execution. When presented with a novel shape-dynamics pairing, inappropriate activation of modules is observed, followed by on-line correction.
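The selection principle at the heart of MOSAIC can be illustrated with a toy sketch: softmax likelihoods of each forward model's prediction act as responsibilities that weight the inverse-model commands. Scalar states, the fixed sigma, and the example numbers below are simplifications of ours.

```python
# Toy sketch of MOSAIC-style module selection (the paper pairs learned
# forward/inverse models and trains them jointly; this only shows blending).
import math

def responsibilities(predictions, observed, sigma=1.0):
    """Softmax of prediction likelihoods across the forward models."""
    scores = [math.exp(-((p - observed) ** 2) / (2 * sigma ** 2))
              for p in predictions]
    z = sum(scores)
    return [s / z for s in scores]

def blended_command(inverse_commands, resp):
    """Responsibility-weighted mixture of the inverse-model outputs."""
    return sum(u * w for u, w in zip(inverse_commands, resp))

resp = responsibilities(predictions=[1.0, 2.0, 4.0], observed=2.2)
print([round(w, 3) for w in resp],
      round(blended_command([0.5, 1.0, 3.0], resp), 3))
```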

5.
In this paper we present a robot control architecture for learning by imitation which takes inspiration from recent discoveries in action observation/execution experiments with humans and other primates. The architecture implements two basic processing principles: (1) imitation is primarily directed toward reproducing the outcome of an observed action sequence rather than reproducing the exact action means, and (2) the required capacity to understand the motor intention of another agent is based on motor simulation. The control architecture is validated in a robot system imitating in a goal-directed manner a grasping and placing sequence displayed by a human model. During imitation, skill transfer occurs by learning and representing appropriate goal-directed sequences of motor primitives. The robustness of the goal-directed organization of the controller is tested in the presence of incomplete visual information and changes in environmental constraints.

6.
Many motor skills in humanoid robotics can be learned using parametrized motor primitives. While successful applications to date have been achieved with imitation learning, most of the interesting motor learning problems are high-dimensional reinforcement learning problems. These problems are often beyond the reach of current reinforcement learning methods. In this paper, we study parametrized policy search methods and apply them to benchmark problems of motor primitive learning in robotics. We show that many well-known parametrized policy search methods can be derived from a general, common framework. This framework yields both policy gradient methods and expectation-maximization (EM) inspired algorithms. We introduce a novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives. We compare this algorithm, both in simulation and on a real robot, to several well-known parametrized policy search methods such as episodic REINFORCE, "Vanilla" Policy Gradients with optimal baselines, episodic Natural Actor Critic, and episodic Reward-Weighted Regression. We show that the proposed method outperforms them on an empirical benchmark of learning dynamical system motor primitives, both in simulation and on a real robot. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM robot arm.
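A minimal sketch of the EM-style, reward-weighted update family this paper works in (e.g. Reward-Weighted Regression) is given below; real algorithms such as PoWER weight per-timestep exploration noise of dynamical-system motor primitives, whereas this toy version perturbs a whole parameter vector and uses an invented return.

```python
# Toy sketch of an EM-style, reward-weighted policy update (simplified).
import math, random

def rwr_update(theta, n_rollouts=50, sigma=0.1):
    """One EM iteration: perturb parameters, reweight by exponentiated return."""
    samples = []
    for _ in range(n_rollouts):
        eps = [random.gauss(0, sigma) for _ in theta]
        candidate = [t + e for t, e in zip(theta, eps)]
        ret = -sum((c - 1.0) ** 2 for c in candidate)  # toy return, optimum at 1
        samples.append((math.exp(ret), eps))
    z = sum(w for w, _ in samples)
    return [t + sum(w * e[i] for w, e in samples) / z
            for i, t in enumerate(theta)]

theta = [0.0, 0.0]
for _ in range(150):
    theta = rwr_update(theta)
print([round(t, 2) for t in theta])  # approaches the optimum [1.0, 1.0]
```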

7.
A Survey of Imitation Learning Based on Generative Adversarial Networks
Imitation learning studies how to learn from expert decision data so as to obtain a decision model close to expert level. Reinforcement learning, which also learns how to make decisions, typically learns only from the environment's evaluative feedback; by contrast, imitation learning obtains more direct feedback from the decision data. It can be divided into two families: behavioral cloning, and imitation learning based on inverse reinforcement learning. The latter decomposes imitation learning into two sub-processes, inverse reinforcement learning and reinforcement learning, which are iterated repeatedly: inverse reinforcement learning derives a reward function consistent with the expert's decision data, and reinforcement learning learns a policy under that reward function. Imitation learning methods based on generative adversarial networks evolved from this family; the earliest and most representative is Generative Adversarial Imitation Learning (GAIL). A generative adversarial network consists of two adversarial neural networks, a discriminator and a generator. GAIL's defining feature is casting imitation learning in the generative adversarial framework: training the discriminator is analogous to learning a reward function, and training the generator is analogous to learning a policy. Compared with traditional imitation learning methods, GAIL offers better robustness, representational power, and computational efficiency, so it can handle complex large-scale problems and be extended to practical applications. However, GAIL suffers from problems such as mode collapse and low sample efficiency of environment interaction. Recent work has addressed these problems with generative adversarial network and reinforcement learning techniques, and has extended GAIL in directions such as learning from observation and multi-agent systems. This article first introduces the main ideas of GAIL together with its strengths and weaknesses, then categorizes, analyzes, and compares its improved variants, and finally concludes with a discussion of possible future trends.
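The discriminator-as-reward, generator-as-policy analogy can be made concrete with a deliberately tiny example: a one-step, discrete-action problem where logistic "discriminator" scores are trained to separate expert from generated actions, and a softmax policy is updated by REINFORCE with log D(a) as surrogate reward. Everything here (tabular models, the single expert action) is a didactic stand-in for the neural networks GAIL actually uses.

```python
# Toy, self-contained sketch of the GAIL loop on a one-step discrete problem.
import math, random

ACTIONS = [0, 1, 2, 3]
expert_action = 2                 # "expert demonstrations" always pick action 2
d_logits = [0.0] * 4              # discriminator score per action
p_prefs = [0.0] * 4               # policy preferences (softmax)

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

sigmoid = lambda x: 1 / (1 + math.exp(-x))

for step in range(2000):
    # Generator: sample an action from the current policy.
    probs = softmax(p_prefs)
    a = random.choices(ACTIONS, probs)[0]
    # Discriminator update: expert sample -> label 1, generated -> label 0.
    lr_d = 0.05
    d_logits[expert_action] += lr_d * (1 - sigmoid(d_logits[expert_action]))
    d_logits[a] -= lr_d * sigmoid(d_logits[a])
    # Policy update: REINFORCE with surrogate reward log D(a).
    reward = math.log(sigmoid(d_logits[a]) + 1e-8)
    lr_p = 0.05
    for i in ACTIONS:
        grad = (1.0 if i == a else 0.0) - probs[i]
        p_prefs[i] += lr_p * reward * grad
print([round(p, 2) for p in softmax(p_prefs)])  # mass concentrates on action 2
```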

8.
Imitation learning combines reinforcement learning with supervised learning: the goal is to learn the expert's policy by observing expert demonstrations, thereby accelerating reinforcement learning. By introducing additional task-related information, imitation learning can optimize a policy faster than reinforcement learning alone, offering a remedy for low sample efficiency. It has become a popular framework for solving reinforcement learning problems, and many algorithms and techniques for improving learning performance have emerged. Combined with recent advances in graphics and computer vision, imitation learning has played an important role in game artificial intelligence (AI), robot control, autonomous driving, and other fields. Centered on the year's developments, this article discusses imitation learning in depth from the perspectives of behavioral cloning, inverse reinforcement learning, adversarial imitation learning, imitation from observation, and cross-domain imitation learning; it reviews the latest practical applications, compares the state of research at home and abroad, and looks ahead to future directions of the field. The aim is to give researchers and practitioners an up-to-date account of imitation learning as a reference and convenience for their work.

9.
Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
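The learning-progress signal of Intelligent Adaptive Curiosity can be sketched in a few lines: compare an older window of prediction errors with a recent one, and prefer situations where the difference is positive. The window size and the toy error streams below are illustrative assumptions.

```python
# Minimal sketch of a learning-progress meter: progress is the decrease of
# prediction error, so already-predictable and purely noisy situations both
# score near zero.
import random
from collections import deque

class ProgressMeter:
    """Track prediction errors; learning progress = older mean - recent mean."""
    def __init__(self, window=15):
        self.window = window
        self.errors = deque(maxlen=2 * window)

    def add(self, error):
        self.errors.append(error)

    def progress(self):
        e = list(self.errors)
        if len(e) < 2 * self.window:
            return 0.0
        old = sum(e[:self.window]) / self.window
        new = sum(e[self.window:]) / self.window
        return old - new  # positive while the predictor is still improving

# A learnable situation yields shrinking errors; pure noise yields no progress.
learnable, noisy = ProgressMeter(), ProgressMeter()
for t in range(30):
    learnable.add(1.0 / (t + 1))  # error decays as the model learns
    noisy.add(random.random())    # irreducible noise
print(round(learnable.progress(), 3), round(noisy.progress(), 3))
```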

10.
Active Learning for Vision-Based Robot Grasping
Salganicoff, Marcos; Ungar, Lyle H.; Bajcsy, Ruzena. Machine Learning, 1996, 23(2-3): 251-278.
Reliable vision-based grasping has proved elusive outside of controlled environments. One approach towards building more flexible and domain-independent robot grasping systems is to employ learning to adapt the robot's perceptual and motor system to the task. However, one pitfall in robot perceptual and motor learning is that the cost of gathering the learning set may be unacceptably high. Active learning algorithms address this shortcoming by intelligently selecting actions so as to decrease the number of examples necessary to achieve good performance, and also avoid separate training and execution phases, leading to higher autonomy. We describe the IE-ID3 algorithm, which extends the Interval Estimation (IE) active learning approach from discrete to real-valued learning domains by combining IE with a classification tree learning algorithm (ID3). We present a robot system which rapidly learns to select grasp approach directions using IE-ID3, given simplified superquadric shape approximations of objects. Initial results on a small set of objects show that a robot with a laser scanner system can rapidly learn to pick up new objects, and simulation studies show the superiority of the active learning approach for a simulated grasping task using larger sets of objects. Extensions of the approach and future areas of research incorporating more sophisticated perceptual and action representations are discussed.
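The Interval Estimation idea is easy to sketch for Bernoulli outcomes: choose the grasp direction whose success rate has the highest upper confidence bound, so untried or uncertain actions keep being explored. The action names and counts below are invented; IE-ID3 additionally generalizes these statistics across a learned classification tree.

```python
# Sketch of Interval Estimation action selection (Bernoulli outcomes only).
import math

def ucb_upper(successes, trials, z=1.96):
    """Normal-approximation upper bound on a Bernoulli success probability."""
    if trials == 0:
        return 1.0  # untried actions are maximally optimistic
    p = successes / trials
    return p + z * math.sqrt(p * (1 - p) / trials + 1e-9)

stats = {"grasp_top": (8, 10), "grasp_side": (3, 4), "grasp_front": (0, 0)}
best = max(stats, key=lambda a: ucb_upper(*stats[a]))
print(best)  # the untried direction wins until evidence accumulates
```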

11.
To address problems that intelligent mobile robots currently face when learning in unknown environments, such as poor learning initiative, poor real-time performance, and the inability to accumulate knowledge and experience online, and inspired by the psychological notion of intrinsic motivation, an intrinsically motivated online autonomous learning method for mobile robots in unknown environments is proposed, which alleviates some of these problems. Within a Q-learning framework, the method replaces the reward mechanism with a psychologically inspired intrinsic motivation signal, improving the robot's initiative in learning about unknown environments; at the same time, an incremental self-organizing neural network replaces the lookup table of classical Q-learning to realize the mapping from the input space to the output space, enabling the robot to learn an unknown environment online and incrementally. Experimental results show that, driven by intrinsic motivation, the mobile robot's learning initiative in unknown environments is improved and its degree of intelligence is clearly enhanced.

12.
In human-robot collaborative manufacturing, the operator's safety is the primary concern during manufacturing operations. This paper presents a deep reinforcement learning approach to real-time collision-free motion planning of an industrial robot for human-robot collaboration. First, the safe human-robot collaborative manufacturing problem is formulated as a Markov decision process, and a mathematical expression of the reward-function design problem is given. The goal is for the robot to autonomously learn a policy that reduces the accumulated risk while preserving task completion time during human-robot collaboration. To transform this optimization objective into a reward function that guides the robot towards the expected behaviour, a reward-function optimization approach based on the deterministic policy gradient is proposed to learn a parameterized intrinsic reward function. The reward function from which the agent learns its policy is the sum of the intrinsic and extrinsic reward functions. A deep reinforcement learning algorithm, Intrinsic Reward Deep Deterministic Policy Gradient (IRDDPG), which combines the DDPG algorithm with the reward-function optimization approach, is then proposed to learn the expected collision avoidance policy. Finally, the proposed algorithm is tested in a simulation environment, and the results show that the industrial robot can learn the expected policy and assure safety in industrial human-robot collaboration without missing the original target. Moreover, the reward-function optimization approach helps compensate for deficiencies of the hand-designed reward function and improves policy performance.
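A hedged sketch of the reward composition described (function names and constants are ours, not the paper's): the learning reward is the extrinsic task reward plus a parameterized intrinsic term, here a weighted penalty on proximity risk; in IRDDPG the intrinsic parameters are themselves optimized by a deterministic-policy-gradient outer loop.

```python
# Illustrative reward composition: extrinsic task reward + parameterized
# intrinsic safety term (the parameters would be learned in IRDDPG).
def extrinsic_reward(reached_goal, step_cost=0.01):
    return (1.0 if reached_goal else 0.0) - step_cost

def intrinsic_reward(distance_to_human, params):
    w, d_safe = params  # learnable weight and safety radius (assumed values)
    risk = max(0.0, d_safe - distance_to_human)
    return -w * risk    # penalize entering the risk zone around the operator

def learning_reward(reached_goal, distance_to_human, params=(2.0, 0.5)):
    return extrinsic_reward(reached_goal) + intrinsic_reward(distance_to_human, params)

print(learning_reward(False, 0.3))  # close to the operator: negative reward
print(learning_reward(True, 1.2))   # safe and at the goal: positive reward
```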

13.
This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent's decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.

14.
For the motion balance control problem of two-wheeled robots, an artificial sensorimotor system, TWR-SMS (Two-wheeled robot sensorimotor system), is built so that the robot can autonomously master motion balance skills through learning while interacting with its environment. The cognitive component of the sensorimotor system takes a learning automaton as its mathematical model and, by introducing the notions of curiosity and orientation, implements an intrinsic motivation mechanism for actively exploring and actively learning the environment. Experimental results show that introducing the intrinsic motivation mechanism not only improves the system's self-learning and self-organizing properties but also effectively avoids low-probability events, yielding high stability. Comparison experiments against a conventional linear quadratic regulator (LQR) controller show that the system is more robust.

15.
We present a system for robust robot skill acquisition from kinesthetic demonstrations. This system allows a robot to learn a simple goal-directed gesture and correctly reproduce it despite changes in the initial conditions and perturbations in the environment. It combines a dynamical system control approach with tools of statistical learning theory and provides a solution to the inverse kinematics problem when dealing with a redundant manipulator. The system is validated on two experiments involving a humanoid robot: putting an object into a box and reaching for and grasping an object.

16.
Successful planning and control of robots strongly depends on the quality of kinematic models, which define mappings between configuration space (e.g. joint angles) and task space (e.g. Cartesian coordinates of the end effector). Often these models are predefined, in which case, for example, unforeseen bodily changes may result in unpredictable behavior. We are interested in a learning approach that can adapt to such changes, whether due to motor or sensory failures or to the flexible extension of the robot body through, for example, the use of tools. We focus on learning locally linear forward velocity kinematics models by means of the neuro-evolution approach XCSF. The algorithm learns self-supervised, executing movements autonomously by means of goal-babbling. It preserves actuator redundancies, which can be exploited during movement execution to fulfill current task constraints. For detailed evaluation purposes, we study the performance of XCSF when learning to control an anthropomorphic seven degrees of freedom arm in simulation. We show that XCSF can learn large forward velocity kinematic mappings autonomously and rather independently of the task space representation provided. The resulting mapping is highly suitable to resolve redundancies on the fly during inverse, goal-directed control.
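One locally linear forward velocity model of the sort XCSF maintains per receptive field can be sketched as a 2x2 map dx ≈ J dq fitted by plain LMS during random motor babbling; XCSF evolves many such receptive fields and blends them, and the "true" Jacobian below is invented for the demonstration.

```python
# Illustrative sketch: fit one local linear velocity-kinematics map by LMS.
import random

J_true = [[0.8, -0.4], [0.3, 0.9]]  # unknown local Jacobian: dx = J dq
J_hat = [[0.0, 0.0], [0.0, 0.0]]    # estimate to be learned online

for _ in range(5000):
    dq = [random.gauss(0, 0.1), random.gauss(0, 0.1)]  # joint-velocity babble
    dx = [sum(J_true[i][j] * dq[j] for j in range(2)) for i in range(2)]
    for i in range(2):  # LMS step on the prediction error of each output
        err = dx[i] - sum(J_hat[i][j] * dq[j] for j in range(2))
        for j in range(2):
            J_hat[i][j] += 1.0 * err * dq[j]
print([[round(v, 2) for v in row] for row in J_hat])  # converges to J_true
```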

17.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.

18.
Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understanding world properties and developing social skills. We present a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects. We illustrate the benefits of the acquired knowledge in imitation games.
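Affordance learning as probabilistic parameter estimation can be sketched with plain interaction counts: with a fixed structure in which the effect depends on action and object shape, the conditional probability table is just smoothed frequencies. The trials below are fabricated for illustration; the paper learns the Bayesian network itself.

```python
# Sketch: affordance CPT P(effect | action, object) from interaction counts.
from collections import Counter

trials = [  # (action, object_shape, effect) observed during exploration
    ("tap", "ball", "rolls"), ("tap", "ball", "rolls"), ("tap", "box", "slides"),
    ("grasp", "ball", "lifted"), ("grasp", "box", "lifted"), ("tap", "box", "none"),
]
counts = Counter(trials)
margin = Counter((a, o) for a, o, _ in trials)

def p_effect(effect, action, obj, alpha=1.0, n_effects=4):
    """P(effect | action, object) with Laplace smoothing."""
    return (counts[(action, obj, effect)] + alpha) / (margin[(action, obj)] + alpha * n_effects)

print(round(p_effect("rolls", "tap", "ball"), 2))  # 0.5: balls roll when tapped
print(round(p_effect("rolls", "tap", "box"), 2))   # 0.17: boxes rarely roll
```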

19.
To address the poor adaptivity of mobile robots in obstacle avoidance, a method for mobile robots to learn obstacle-avoidance behavior based on an evolutionary operant behavior learning model (EOBLM) is proposed, combining the evolutionary ideas of genetic algorithms (GA) with adaptive heuristic critic (AHC) learning and operant conditioning (OC) theory. The method is a modified AHC learning scheme: the critic unit is implemented with a multilayer feedforward neural network whose weights are updated by the TD algorithm and gradient descent; learning at this stage generates orientation information that serves as the intrinsic motivation determining the direction of evolution. The action-selection unit optimizes operant behaviors to achieve the best mapping from states to actions. This optimization proceeds in two stages: in the first, the information entropy obtained by the operant-conditioning learning algorithm serves as the individual fitness, and a GA searches for the optimal individual; in the second, the OC learning algorithm selects the optimal operant behavior within that individual and obtains a new information-entropy value. Simulation experiments on mobile robot obstacle avoidance show that the designed EOBLM enables a robot to actively learn obstacle avoidance through continuous interaction with an unknown environment, and that its self-learning and adaptive abilities are stronger than those of conventional AHC methods.

20.
From infants to adults, each individual undergoes changes both physically and mentally through interaction with environments. These cognitive developments are usually staged, exhibited as behaviour changes, and supported by neural growth and shrinkage in the brain. The ultimate goal for an intelligent artificial system is to automatically build its skills in a similar way to the mental development of humans and animals, and to adapt to different environments. In this paper, we present an approach to constructing robot coordination skills by developmental learning, inspired by developmental psychology and neuroscience. In particular, we investigated the learning of two types of robot coordination skills: intra-modality mapping, such as sensor-motor coordination of the arm, and inter-modality mapping, such as eye/hand coordination. A self-organising radial basis function (RBF) network is used as the substrate to support the learning process. The RBF network grows or shrinks according to the novelty of the data obtained during the robot's interaction with the environment, and its parameters are adjusted by a simplified extended Kalman filter algorithm. The paper further reveals possible biological evidence to support the system architecture, the learning algorithm, and the adaptation of the system to bodily changes such as in tool-use. The learning error was regarded as intrinsic motivation to drive the system to actively explore the workspace and reduce the learning errors. We tested our algorithms on a laboratory robot system with two industrial-quality manipulator arms and a motorised pan/tilt head carrying a colour CCD camera. The experimental results demonstrated that the system can develop its intra-modal and inter-modal coordination skills by constructing mapping networks in a similar way to humans and animals during their early cognitive development. In order to adapt to different tool sizes, the system can quickly reuse and adjust its learned knowledge in terms of the number of neurons, the size of the receptive field of each neuron, and the contribution of each neuron in the network. The active learning approach greatly reduces the nonuniform distribution of the learning errors.
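A resource-allocating RBF mapping of the kind described can be sketched as follows: insert a unit when the input is novel (far from all centres) and the prediction error is large, otherwise adapt the output weights. The paper tunes parameters with a simplified extended Kalman filter; plain gradient descent and all thresholds below are substitutions of ours.

```python
# Hedged sketch of a growing RBF network on a toy 1-D sensorimotor mapping.
import math

centers, widths, weights = [], [], []

def predict(x):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * s ** 2))
               for w, c, s in zip(weights, centers, widths))

def train_step(x, y, dist_thresh=0.1, err_thresh=0.05, lr=0.3):
    err = y - predict(x)
    nearest = min((abs(x - c) for c in centers), default=float("inf"))
    if nearest > dist_thresh and abs(err) > err_thresh:   # novelty: grow a unit
        centers.append(x); widths.append(0.1); weights.append(err)
    else:                                                 # otherwise: adapt
        for i, (c, s) in enumerate(zip(centers, widths)):
            weights[i] += lr * err * math.exp(-((x - c) ** 2) / (2 * s ** 2))

for i in range(5000):
    x = (i % 100) / 100.0                     # sweep the workspace repeatedly
    train_step(x, math.sin(2 * math.pi * x))  # toy target mapping
print(len(centers), round(predict(0.25), 2))  # a compact net approximating sin
```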

