Similar Documents
20 similar documents found.
1.
Advanced Robotics, 2013, 27(10): 1215-1229
Reinforcement learning is a scheme for unsupervised learning in which robots are expected to acquire behavior skills through self-exploration based on reward signals. There are some difficulties, however, in applying conventional reinforcement learning algorithms to the motion control tasks of a robot, because most algorithms are concerned with discrete state spaces and are based on the assumption of complete observability of the state. Real-world environments often have partial observability; therefore, robots have to estimate the unobservable hidden states. This paper proposes a method to solve these two problems by combining a reinforcement learning algorithm with a learning algorithm for a continuous-time recurrent neural network (CTRNN). The CTRNN can learn spatio-temporal structures in a continuous time and space domain, and can preserve the contextual flow by self-organizing an appropriate internal memory structure. This enables the robot to deal with the hidden-state problem. We carried out an experiment on the pendulum swing-up task without rotational speed information. As a result, the task is accomplished in several hundred trials using the proposed algorithm. In addition, it is shown that the information about the rotational speed of the pendulum, which is treated as a hidden state, is estimated and encoded in the activation of a context neuron.
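As a rough illustration of how a CTRNN can carry a hidden state such as the pendulum's rotational speed, the following sketch shows one Euler-discretized update step of a small continuous-time recurrent network; the weight matrices, time constants, and inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code): a discretized CTRNN step.
# Hidden "context" units integrate inputs over time, which is how such a
# network can encode an unobserved quantity like rotational speed.

def ctrnn_step(u, x, W_rec, W_in, tau, dt=0.01):
    """One Euler-integration step of the CTRNN internal state u given input x."""
    y = np.tanh(u)                              # unit activations
    du = (-u + W_rec @ y + W_in @ x) / tau      # leaky integration
    return u + dt * du

# toy usage: 4 context units, 2 observable inputs (e.g. sin/cos of the angle)
rng = np.random.default_rng(0)
u = np.zeros(4)
W_rec = 0.1 * rng.standard_normal((4, 4))
W_in = 0.1 * rng.standard_normal((4, 2))
tau = np.full(4, 0.5)
for t in range(100):
    obs = np.array([np.sin(0.05 * t), np.cos(0.05 * t)])
    u = ctrnn_step(u, obs, W_rec, W_in, tau)
```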

2.
The term "reinforcement learning" comes from behavioral psychology, which views behavior learning as a trial-and-error process that maps environment states to corresponding actions. In designing intelligent robots, how can this behaviorist idea be realized so that actions are learned through interaction with the environment? This paper treats the actions a robot takes to avoid obstacles in an unknown environment as a behavior, and applies reinforcement learning to realize collision-avoidance behavior learning for an intelligent robot. To increase the robot's learning speed, quantization of the state space in the robot's local path planning is very important. This paper uses a self-organizing map network to quantize the space. Because of its inherent self-organizing property, the self-organizing map handles adaptability and flexibility well when quantizing the space. On the basis of this self-organized quantization of the state space, reinforcement learning is applied, solving the robot's collision-avoidance behavior learning problem and yielding satisfactory learning results.
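A minimal sketch of the scheme described in this abstract, assuming a toy sensor vector and four avoidance actions: a self-organizing map quantizes the continuous state, and the index of the best-matching unit serves as the discrete state for tabular Q-learning. All sizes and learning rates are illustrative.

```python
import numpy as np

class SOMQuantizer:
    """Quantize a continuous sensor vector with a (simplified) self-organizing map."""
    def __init__(self, n_units=25, dim=8, lr=0.2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, size=(n_units, dim))   # codebook vectors
        self.lr = lr

    def quantize(self, x, learn=True):
        """Return the index of the best-matching unit; optionally adapt it toward x."""
        bmu = int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
        if learn:
            self.w[bmu] += self.lr * (x - self.w[bmu])
        return bmu

# Q-learning over the SOM-quantized states (4 avoidance actions assumed)
som = SOMQuantizer()
Q = np.zeros((25, 4))
alpha, gamma = 0.1, 0.95

def q_update(x, a, reward, x_next):
    s = som.quantize(x)                         # adapts the winning unit toward x
    s_next = som.quantize(x_next, learn=False)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```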

3.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces that enable the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to higher-level imitation of the underlying intent extracted from the demonstrations. Focusing on this latter form, we study the problem of extracting an objective function that explains the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or the encountered situation. The robot then exploits this information for skill refinement in the policy parameter space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.
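The self-refinement step can be illustrated with a reward-weighted update in policy-parameter space. The sketch below, with an assumed parameter dimension and a stand-in return, follows the general reward-weighted-regression idea rather than the exact STIFF-FLOP implementation.

```python
import numpy as np

# Illustrative sketch: sampled parameter perturbations are averaged with weights
# given by an exponentiated return; the per-phase weighting of candidate reward
# functions is reduced to a single scalar return here.

rng = np.random.default_rng(1)

def reward_weighted_update(theta, rollout_returns, rollout_thetas, beta=5.0):
    """Return new policy parameters as a softmax-weighted average of samples."""
    w = np.exp(beta * (rollout_returns - rollout_returns.max()))
    w /= w.sum()
    return (w[:, None] * rollout_thetas).sum(axis=0)

theta = np.zeros(6)                                      # e.g. movement-primitive weights (assumed)
for iteration in range(20):
    thetas = theta + 0.1 * rng.standard_normal((10, 6))  # exploration samples
    returns = -np.sum((thetas - 1.0) ** 2, axis=1)       # stand-in for the task return
    theta = reward_weighted_update(theta, returns, thetas)
```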

4.
It was confirmed that a real mobile robot with a simple visual sensor could learn appropriate motions to reach a target object by direct-vision-based reinforcement learning (RL). In direct-vision-based RL, raw visual sensory signals are fed directly into a layered neural network, and the neural network is then trained using back-propagation, with the training signal generated by reinforcement learning. Because of the time delay in transmitting the visual sensory signals, the actor outputs are trained by the critic output from two time steps ahead. It was shown that a robot with a simple monochrome visual sensor can learn to reach a target object from scratch, without any prior knowledge of the task, by direct-vision-based RL. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.

5.
We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal-exploration mechanism that allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot actively sample novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow it to be solved. For both learning and generalization, the system leverages regression techniques that infer the motor policy parameters corresponding to a given novel parameterized task, based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics of a highly redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be much faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals that maximize competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity, and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than several standard active motor babbling methods; and (3) the architecture allows the robot to actively discover which parts of its task space it can learn to reach and which parts it cannot.
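The goal-selection rule can be sketched as sampling task-space regions in proportion to their recent competence progress. The code below is a simplified illustration with a fixed set of regions and assumed window sizes, not the SAGG-RIAC implementation.

```python
import numpy as np

class GoalSelector:
    """Sample task-space regions in proportion to recent competence progress."""
    def __init__(self, n_regions=10, window=20, seed=0):
        self.errors = [[] for _ in range(n_regions)]    # reaching errors per region
        self.window = window
        self.rng = np.random.default_rng(seed)

    def record(self, region, error):
        self.errors[region].append(error)

    def competence_progress(self, region):
        e = self.errors[region]
        if len(e) < 2 * self.window:
            return 1.0                                   # optimistic for unexplored regions
        old = np.mean(e[-2 * self.window:-self.window])
        new = np.mean(e[-self.window:])
        return abs(old - new)                            # magnitude of recent change in competence

    def sample_region(self):
        p = np.array([self.competence_progress(r) for r in range(len(self.errors))])
        p = p / p.sum() if p.sum() > 0 else np.full(len(p), 1.0 / len(p))
        return int(self.rng.choice(len(p), p=p))
```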

6.
An approach to learning mobile robot navigation
This paper describes an approach to learning an indoor robot navigation task through trial and error. A mobile robot, equipped with visual, ultrasonic and laser sensors, learns to servo to a designated target object. In less than ten minutes of operation time, the robot is able to navigate to a marked target object in an office environment. The central learning mechanism is the explanation-based neural network learning algorithm (EBNN). EBNN initially learns the function purely inductively using neural network representations. With increasing experience, EBNN employs domain knowledge to explain and analyze the training data in order to generalize in a more knowledgeable way. Here, EBNN is applied in the context of reinforcement learning, which allows the robot to learn control using dynamic programming.

7.
Asada, Minoru; Noda, Shoichi; Tawaratsumida, Sukoya; Hosoda, Koh. Machine Learning, 1996, 23(2-3): 279-303
This paper presents a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal. We discuss several issues in applying the reinforcement learning method to a real robot with a vision sensor, through which the robot can obtain information about changes in its environment. First, we construct a state space in terms of the size, position, and orientation of the ball and the goal in the image, and an action space is designed in terms of the action commands to be sent to the left and right motors of a mobile robot. This causes a state-action deviation problem when constructing state and action spaces that reflect the outputs of physical sensors and actuators, respectively. To deal with this issue, the action set is constructed so that one action consists of a series of the same action primitive, executed repeatedly until the current state changes. Next, to speed up learning, a mechanism of Learning from Easy Missions (LEM) is implemented. LEM reduces the learning time from exponential to almost linear order in the size of the state space. Results of computer simulations and real robot experiments are given.
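The action construction can be illustrated as follows: one learning-level action repeats a single motor primitive until the discretized state changes. The environment interface, the discretization, and the repeat budget below are assumptions for the sketch, not the authors' code.

```python
# Sketch: a learning-level "action" repeats the same motor primitive until the
# discrete state changes, which avoids the state-action deviation caused by a
# coarse state discretization.

class ToyEnv:
    """Stand-in environment: position advances a little with each 'forward' command."""
    def __init__(self):
        self.pos = 0.0
    def apply(self, primitive):
        self.pos += 0.03 if primitive == "forward" else -0.03
    def observe(self):
        return self.pos

def discretize(x, bin_size=0.1):
    return int(x // bin_size)                     # coarse discrete state

def execute_action(env, primitive, max_repeats=50):
    """Apply one motor primitive repeatedly until the discrete state changes."""
    s0 = discretize(env.observe())
    for _ in range(max_repeats):
        env.apply(primitive)                      # send the same motor command again
        s = discretize(env.observe())
        if s != s0:                               # state transition detected
            return s
    return s0                                     # no change within the repeat budget

env = ToyEnv()
print(execute_action(env, "forward"))             # returns the new discrete state
```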

8.
This article proposes a reinforcement learning procedure for mobile robot navigation using a latent-like learning schema. Latent learning refers to learning that occurs in the absence of reinforcement signals and is not apparent until reinforcement is introduced. This concept considers that part of a task can be learned before the agent receives any indication of how to perform it. In the proposed topological reinforcement learning agent (TRLA), a topological map is used to perform the latent learning. Propagating the reinforcement signal throughout the topological neighborhoods of the map permits the estimation of a value function that, on average, requires fewer trials and fewer updates per trial than six of the main temporal-difference reinforcement learning algorithms: Q-learning, SARSA, Q(λ)-learning, SARSA(λ), Dyna-Q and fast Q(λ)-learning. The RL agents were tested in four different environments designed with increasing levels of complexity in the navigation tasks. The tests suggest that the TRLA chooses shorter trajectories (in number of steps) and/or requires fewer value-function updates per trial than the other six reinforcement learning (RL) algorithms.
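The propagation of reinforcement over the topological map can be sketched as a breadth-first update with a decaying factor, as below; the graph, learning rate, and decay constant are illustrative and are not taken from the paper.

```python
import numpy as np

# Hedged sketch: when a reward is received at a node of a learned topological map,
# the update is propagated to its graph neighbors with a decaying factor, so value
# estimates spread without visiting every state.

def propagate_reward(values, neighbors, node, reward, alpha=0.5, decay=0.6, depth=3):
    """Update the value of `node` and spread a decayed update over the topological map."""
    frontier, seen, scale = {node}, set(), 1.0
    for _ in range(depth):
        next_frontier = set()
        for n in frontier:
            values[n] += alpha * scale * (reward - values[n])
            seen.add(n)
            next_frontier |= set(neighbors[n]) - seen
        frontier, scale = next_frontier, scale * decay
    return values

# toy map: a ring of 6 nodes
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
values = np.zeros(6)
values = propagate_reward(values, neighbors, node=0, reward=1.0)
```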

9.
We study spatial learning and navigation for autonomous agents. A state space representation is constructed by unsupervised Hebbian learning during exploration. As a result of learning, a representation of the continuous two-dimensional (2-D) manifold in the high-dimensional input space is found. The representation consists of a population of localized overlapping place fields covering the 2-D space densely and uniformly. This space coding is comparable to the representation provided by hippocampal place cells in rats. Place fields are learned by extracting spatio-temporal properties of the environment from sensory inputs. The visual scene is modeled using the responses of modified Gabor filters placed at the nodes of a sparse Log-polar graph. Visual sensory aliasing is eliminated by taking into account self-motion signals via path integration. This solves the hidden state problem and provides a suitable representation for applying reinforcement learning in continuous space for action selection. A temporal-difference prediction scheme is used to learn sensorimotor mappings to perform goal-oriented navigation. Population vector coding is employed to interpret ensemble neural activity. The model is validated on a mobile Khepera miniature robot.

10.
To enable a mobile robot to perform obstacle avoidance efficiently and politely in crowded, complex environments, this paper proposes a deep-reinforcement-learning-based obstacle avoidance algorithm for mobile robots in crowd environments. First, to address the limited learning capacity of the value-function network in deep reinforcement learning, the value network is improved based on crowd interaction: an angular pedestrian grid extracts the interaction information between pedestrians, and an attention mechanism extracts the temporal features of each pedestrian, learning the relative importance of the current state versus historical trajectory states and their joint influence on the robot's avoidance policy, thereby providing prior knowledge for the subsequent multilayer perceptron. Second, the reinforcement learning reward function is designed according to human spatial behavior, and states in which the robot's heading changes too sharply are penalized, achieving comfortable avoidance. Finally, simulation experiments verify the feasibility and effectiveness of the proposed algorithm in dense, complex crowd environments.
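A minimal sketch of the attention step, assuming fixed per-pedestrian feature vectors and a single scoring vector (the paper's network also uses temporal features and the angular grid, omitted here):

```python
import numpy as np

# Hedged sketch: per-pedestrian feature vectors are scored, softmax-normalized,
# and combined into a single crowd feature that a policy network can consume,
# so nearby pedestrians are weighted by their estimated relevance.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def crowd_attention(pedestrian_feats, w_score):
    """pedestrian_feats: (n_peds, d); w_score: (d,) scoring weights (learned in practice)."""
    scores = pedestrian_feats @ w_score            # one relevance score per pedestrian
    weights = softmax(scores)
    return weights @ pedestrian_feats              # weighted crowd feature, shape (d,)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))                # 5 pedestrians, 8-dim features (assumed)
crowd_feat = crowd_attention(feats, rng.standard_normal(8))
```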

11.
We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn as much as possible about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite-horizon planning problem, and we replan as the robot progresses through the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.
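The exploration/exploitation trade-off can be illustrated with a small Bayesian-optimization loop: a Gaussian-process surrogate over candidate policy parameters and an upper-confidence-bound acquisition. The kernel, noise level, and one-dimensional parameter space below are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """Posterior mean and variance of a tiny GP surrogate at the query points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_train
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.maximum(var, 1e-9)

def next_candidate(x_train, y_train, x_query, kappa=2.0):
    mu, var = gp_posterior(x_train, y_train, x_query)
    return x_query[np.argmax(mu + kappa * np.sqrt(var))]   # UCB acquisition

# toy usage: pick the next policy parameter to evaluate
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.array([0.2, 0.8, 0.3])          # observed utilities of past plans
x_query = np.linspace(0.0, 1.0, 101)
x_next = next_candidate(x_train, y_train, x_query)
```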

12.
Robots require a form of visual attention to perform a wide range of tasks effectively. Existing approaches specify in advance the image features and attention control scheme required for a given robot to perform a specific task. However, to cope with different tasks in a dynamic environment, a robot should be able to construct its own attentional mechanisms. This paper presents a method a robot can use to generate image features by learning a visuo-motor map. The robot constructs the visuo-motor map from training data, and the map constrains both the generation of image features and the estimation of state vectors. The resulting image features and state vectors are highly task-oriented. The learned mechanism is attentional in the sense that it determines what information to select from the image to perform a task. We evaluate the proposed method in robot experiments on indoor navigation and scoring soccer goals.

13.
Reinforcement learning (RL) is a biologically supported learning paradigm, which allows an agent to learn through experience acquired by interaction with its environment. Its potential to learn complex action sequences has been proven for a variety of problems, such as navigation tasks. However, the interactive randomized exploration of the state space, common in reinforcement learning, makes it difficult to use in real-world scenarios. In this work we describe a novel real-world reinforcement learning method. It uses a supervised reinforcement learning approach combined with Gaussian-distributed state activation. We successfully tested this method in two real scenarios of humanoid robot navigation: first, backward movements for docking at a charging station and, second, forward movements to prepare grasping. Our approach reduces the required learning steps by more than an order of magnitude, and it is robust and easy to integrate into conventional RL techniques.
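A hedged sketch of Gaussian-distributed state activation: the temporal-difference update is spread over all state prototypes with Gaussian weights instead of hitting a single discrete state, so each real-world experience generalizes to nearby states. Prototype layout and constants are assumed, not the authors' values.

```python
import numpy as np

def gaussian_activation(x, prototypes, sigma=0.3):
    """Soft activation of every state prototype for a continuous observation x."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    a = np.exp(-d2 / (2 * sigma ** 2))
    return a / a.sum()

def soft_q_update(Q, prototypes, x, action, reward, x_next, alpha=0.1, gamma=0.9):
    """TD update weighted by the Gaussian state activation, spreading the experience."""
    a_now = gaussian_activation(x, prototypes)
    a_next = gaussian_activation(x_next, prototypes)
    target = reward + gamma * np.max(a_next @ Q)           # value of the next soft state
    Q[:, action] += alpha * a_now * (target - a_now @ Q[:, action])
    return Q

prototypes = np.random.default_rng(0).uniform(0, 1, size=(16, 2))  # 16 state centers (assumed)
Q = np.zeros((16, 4))                                               # 4 actions (assumed)
Q = soft_q_update(Q, prototypes, np.array([0.2, 0.3]), 1, 0.5, np.array([0.25, 0.35]))
```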

14.
Application of reinforcement learning to basic action learning of soccer robots
This paper studies reinforcement learning algorithms and their application to learning technical actions in robot soccer. When the state and action spaces of reinforcement learning are too large, or the variables are continuous, learning is often too slow or even fails to converge. To address this problem, a reinforcement learning method based on a T-S-model fuzzy neural network is proposed, which effectively realizes the mapping from the reinforcement learning state space to the action space. In addition, the proposed method is used to design the technical actions of soccer robots, studying behavior learning without expert knowledge or an environment model. Finally, experiments demonstrate the effectiveness of the method, which can meet the needs of robot soccer competition.

15.
We aim to achieve interaction between a robot and multiple people. For this, robots should localize people, select an interaction partner, and act appropriately toward him or her. It is difficult to deal with all of these problems using only the sensors installed on the robots. We focus on the fact that people keep a rough interaction distance from one another. We divide this interaction area into different spaces based on both the interaction distances and the sensing abilities of the robots. Our robots localize people roughly within these divided spaces. To select an interaction partner, they map a friendliness value, which holds the interaction history, onto the divided spaces and integrate the sensor information. Furthermore, we developed a method for appropriately changing the motions, and their sizes and speeds, based on the distance. Our robots regard the divided spaces as Q-learning states and learn the motion parameters. Our robot interacted with 27 visitors. It localized a partner with an F-value of 0.76 through integration, which is higher than that of any single sensor. A factor analysis was performed on the questionnaire results. "Exciting" and "Friendly" were representative of the first and second factors, respectively. For both factors, a motion with friendliness received higher impression scores than one without.

16.
Multi-agent reinforcement learning and its application to role assignment in robot soccer
A robot soccer system is a typical multi-agent system: each robot player's choice of action depends not only on its own state but also on the other players. Therefore, implementing robot soccer decision policies with reinforcement learning requires joint states and joint actions. This paper studies a multi-agent reinforcement learning algorithm based on agent action prediction, using a naive Bayes classifier to predict the actions of the other agents. A policy-sharing mechanism is introduced to exchange the policies learned by the agents, which speeds up multi-agent reinforcement learning. Finally, the proposed method is applied to dynamic role assignment in robot soccer, realizing division of labor and cooperation among multiple robots.
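The action-prediction component can be sketched as a naive Bayes classifier over discretized state features, as below; the feature and action encodings, as well as the smoothing constant, are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: a naive Bayes classifier estimates P(action | discrete state features)
# of another agent from observed (state, action) pairs, and the most likely action can
# feed into the learner's own decision.

class NaiveBayesActionPredictor:
    def __init__(self, n_actions, n_features, n_values, alpha=1.0):
        self.action_counts = np.zeros(n_actions)
        # counts[f][a, v]: times feature f took value v when the agent chose action a
        self.counts = [np.zeros((n_actions, n_values)) for _ in range(n_features)]
        self.alpha = alpha                                    # Laplace smoothing

    def observe(self, features, action):
        self.action_counts[action] += 1
        for f, v in enumerate(features):
            self.counts[f][action, v] += 1

    def predict(self, features):
        log_p = np.log(self.action_counts + self.alpha)
        for f, v in enumerate(features):
            cond = (self.counts[f][:, v] + self.alpha) / (
                self.action_counts + self.alpha * self.counts[f].shape[1])
            log_p += np.log(cond)
        return int(np.argmax(log_p))                          # most likely next action

pred = NaiveBayesActionPredictor(n_actions=3, n_features=2, n_values=4)
pred.observe([0, 2], action=1)
pred.observe([0, 3], action=1)
print(pred.predict([0, 2]))
```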

17.
In recent robotics research, much attention has been focused on using reinforcement learning (RL) to design robot controllers, since the environments in which robots will be situated cannot be fully predicted by human designers in advance. However, some difficulties remain. One of them is the well-known 'curse of dimensionality'. Thus, to adopt RL for complicated systems, not only adaptability but also computational efficiency must be taken into account. This paper proposes an adaptive state-recruitment strategy for NGnet-based actor-critic RL. The strategy enables the learning system to rearrange and divide its state space gradually according to the task complexity and the progress of learning. Simulation results and real robot implementations show the validity of the method.

18.
19.
In this paper, we propose a multiple-metric learning algorithm that jointly learns a set of optimal homogeneous/heterogeneous metrics in order to fuse the data collected from multiple sensors for joint classification. The learned metrics have the potential to perform better than the conventional Euclidean metric for classification. Moreover, in the case of heterogeneous sensors, the learned metrics can be quite different, adapted to each type of sensor. By learning the multiple metrics jointly within a single unified optimization framework, we can learn better metrics for fusing the multi-sensor data for joint classification. Furthermore, we also exploit multi-metric learning in a kernel-induced feature space to capture the non-linearity of the original feature space via kernel mapping.

20.
To address the poor exploration ability and the sparse rewards over the state space that traditional deep reinforcement learning suffers from in mobile robot path planning in unknown indoor environments, an improved deep reinforcement learning algorithm based on depth image information is proposed. Depth images obtained directly from a Kinect vision sensor, together with the target position, are used as the network input, and the robot's linear and angular velocities are output as the next action command. An improved reward function is designed, which raises the algorithm's reward values and optimizes the state space, alleviating the reward sparsity problem to some extent. Simulation results show that the improved algorithm increases the robot's exploration ability, optimizes the path trajectory, and lets the robot effectively avoid obstacles and plan shorter paths: compared with the DQN algorithm, the average path length is shortened by 21.4% in a simple environment and by 11.3% in a complex environment.
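The reward design can be illustrated with a shaped reward of the general kind described: sparse goal and collision terms plus a dense progress term and an obstacle-proximity penalty based on the minimum depth reading. All constants and thresholds below are assumptions, not the paper's values.

```python
# Illustrative sketch of a shaped navigation reward (assumed constants): dense
# progress toward the goal densifies the otherwise sparse reward signal, and a
# penalty discourages getting too close to obstacles seen in the depth image.

def shaped_reward(dist_to_goal, prev_dist_to_goal, min_depth,
                  reached=False, collided=False):
    if reached:
        return 10.0                                  # sparse success reward
    if collided:
        return -10.0                                 # sparse collision penalty
    r = 2.0 * (prev_dist_to_goal - dist_to_goal)     # dense progress term
    if min_depth < 0.4:                              # too close to an obstacle (meters, assumed)
        r -= 0.5
    return r

# toy usage: the robot moved 0.05 m closer to the goal, nearest obstacle at 0.8 m
print(shaped_reward(dist_to_goal=2.35, prev_dist_to_goal=2.40, min_depth=0.8))
```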
