Similar Documents
20 similar documents found.
1.
Reinforcement Learning Based on a Hetero-Associative Memory Hopfield Network (cited: 2; self-citations: 0; citations by others: 2)
To address the dynamic obstacle-avoidance problem of autonomous robots, a new combined learning method, a hetero-associative memory neural network reinforcement learning method, is proposed. It draws on the ability of a hetero-associative Hopfield neural network to memorize sample patterns and on the strength of reinforcement learning in problem solving. Introducing memory into reinforcement learning enhances the learning method, allowing the autonomous robot to learn quickly and adaptively and thus to avoid obstacles dynamically. Simulation results demonstrate the effectiveness of the proposed obstacle-avoidance method. (An illustrative sketch of the hetero-associative memory idea follows.)
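The cited paper's implementation is not reproduced here. Purely as a rough illustration of the hetero-associative memory idea it builds on, the following sketch stores bipolar state patterns paired with action patterns using an outer-product rule and recalls an action from a state in one step; all patterns and sizes are invented for the example.

```python
import numpy as np

# Minimal hetero-associative memory sketch (illustrative only, not the paper's code).
# States and actions are bipolar (+1/-1) vectors; the weight matrix is built with an
# outer-product (Hebbian) rule, and recall is a single thresholded matrix product.
states  = np.array([[ 1,  1, -1, -1],    # hypothetical sensor pattern A
                    [ 1, -1,  1, -1]])   # hypothetical sensor pattern B
actions = np.array([[ 1,  1, -1],        # avoidance action stored for pattern A
                    [-1,  1,  1]])       # avoidance action stored for pattern B

W = actions.T @ states                   # hetero-associative weight matrix (3 x 4)

def recall(state):
    """Recall the action pattern associated with a stored state pattern."""
    return np.sign(W @ state)

print(recall(states[0]))                 # -> [ 1  1 -1], the action stored for A
print(recall(states[1]))                 # -> [-1  1  1], the action stored for B
```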

2.
Research and Implementation of a Dynamic Neural Network Navigation Algorithm for Robots (cited: 1; self-citations: 0; citations by others: 1)
For the Pioneer3-DX mobile robot, an autonomous navigation strategy based on reinforcement learning is proposed and a navigation algorithm based on a dynamic neural network is designed. The dynamic neural network adjusts its structure automatically according to the complexity of the robot's environment state and realizes the mapping from robot states to navigation actions in real time, effectively solving the dimensionality explosion of the state-variable table in reinforcement learning. Simulations and physical experiments on the Pioneer3-DX mobile robot demonstrate the effectiveness of the method, with navigation performance clearly better than that of the artificial potential field method.

3.
Autonomous navigation is a key technology for mobile robots. This paper combines reinforcement learning with fuzzy logic to achieve navigation control of an autonomous mobile robot in unknown environments. The principle of reinforcement learning is first introduced, and a navigation framework for robots in unknown environments is then designed. The framework consists of a collision-avoidance module, a goal-seeking module, and a behavior-selection module. For this framework, a learning and planning algorithm based on reinforcement learning and fuzzy logic is proposed: after the collision-avoidance and goal-seeking behaviors are learned independently, environmental information obtained from ultrasonic sensors is used for behavior selection, so that the robot reaches the goal while successfully avoiding collisions. Extensive simulation experiments verify the effectiveness of the algorithm.

4.
A Bionic Learning Algorithm Based on Skinner's Operant Conditioning and Robot Control (cited: 1; self-citations: 0; citations by others: 1)
For the motion-balance control problem of a two-wheeled self-balancing robot, a bionic autonomous learning algorithm combining a BP neural network with eligibility traces, based on Skinner's operant conditioning theory, is proposed as the robot's learning mechanism. Exploiting the ability of eligibility traces to handle delayed effects, speed up learning, and improve reliability, the algorithm combines them with a BP neural network into a composite learning algorithm that predicts the evaluation function of the robot's forthcoming behavior and, via a probabilistic orientation mechanism, selects with a certain probability the optimal action corresponding to the maximum evaluation value. Through interaction with an unknown environment, learning, and training, the robot acquires autonomous learning skills similar to those of humans or animals and achieves motion-balance control. Finally, the BP algorithm and the BP eligibility-trace composite algorithm, both based on Skinner's operant conditioning theory, are compared in simulations on the two-wheeled robot. The results show that the composite bionic autonomous learning algorithm gives the robot good dynamic performance and a faster learning speed, reflecting strong autonomous learning and balance-control capability. (A minimal eligibility-trace sketch follows.)
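The paper's BP-network controller is not shown here. As a hedged illustration of the eligibility-trace component alone, the sketch below applies TD(lambda) with a linear value approximator, so that a delayed reward also credits features visited earlier; the feature size and learning parameters are assumptions, not values from the paper.

```python
import numpy as np

# Hedged TD(lambda) sketch with eligibility traces over linear features
# (illustrative only; the paper combines traces with a BP neural network).
n_features, alpha, gamma, lam = 8, 0.1, 0.95, 0.8
w = np.zeros(n_features)                 # value-function weights
e = np.zeros(n_features)                 # eligibility trace vector

def td_lambda_step(phi, reward, phi_next):
    """One update: the trace lets a delayed reward credit earlier features."""
    global w, e
    delta = reward + gamma * w @ phi_next - w @ phi    # TD error
    e = gamma * lam * e + phi                          # decay traces, mark current features
    w = w + alpha * delta * e                          # update all recently visited features
    return delta

phi_t  = np.eye(n_features)[2]           # hypothetical feature vector at time t
phi_t1 = np.eye(n_features)[3]           # hypothetical feature vector at time t+1
print(td_lambda_step(phi_t, 1.0, phi_t1))
```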

5.
A Path Planning Method for Mobile Robots in Dynamic Environments (cited: 26; self-citations: 2; citations by others: 26)
朴松昊, 洪炳熔. 《机器人》(Robot), 2003, 25(1): 18-21
This paper proposes a path-planning method for mobile robots in dynamic environments, applicable to complex situations containing known and unknown, stationary and moving obstacles. The robot workspace is modeled with a link-graph method, and the system consists of a global path planner and a local path planner. The global planner uses a genetic algorithm to produce a preliminary globally optimized path. The local planner defines three basic behaviors, following the global path, collision avoidance, and goal seeking, and further optimizes the path with a behavior-based approach; the collision-avoidance behavior is obtained through reinforcement learning. Simulation and experimental results show that the method is simple and feasible and can meet the strong real-time requirements of mobile-robot navigation.

6.
段勇, 徐心和. 《控制与决策》(Control and Decision), 2007, 22(5): 525-529
A behavior-based control method for mobile robots is studied. A fuzzy neural network is combined with reinforcement learning to form a fuzzy reinforcement system, which can acquire both the consequent parts of fuzzy rules and the parameters of the fuzzy membership functions, and can handle reinforcement learning problems with continuous state and action spaces. The residual algorithm is used for training the neural network, guaranteeing fast and convergent function approximation. The learning result is used as the behavior controller of a reactive autonomous robot, effectively solving robot navigation in complex environments.

7.
Multi-robot systems are increasingly used in joint search and rescue, smart workshops, intelligent transportation, and other fields. At present, path planning and navigation with obstacle avoidance among multiple robots, and between robots and dynamic environments, still rely on accurate environment maps, which challenges the coordination and cooperation of multi-robot systems in unstructured environments. To address this, this paper proposes a distributed navigation and obstacle-avoidance method for heterogeneous multi-robot systems that does not depend on accurate maps, and establishes a multi-feature policy-gradient optimization algorithm based on deep reinforcement learning that also considers social norms in human-robot shared environments, so that distributed robots can learn optimal navigation and obstacle-avoidance policies through trial-and-error interaction with the environment. The optimal policy is trained in the Gazebo simulation environment, and the model is then transferred to several heterogeneous physical robots, whose control signals are decoded for real-world testing. Experimental results show that the proposed multi-feature policy-gradient optimization algorithm can obtain optimal navigation and obstacle-avoidance policies through self-learning, providing a technical reference for applying distributed heterogeneous multi-robot systems in dynamic environments. (A generic policy-gradient sketch follows.)
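The paper's multi-feature algorithm and deep network are not reproduced. As a generic stand-in, the following sketch shows a plain REINFORCE update for a softmax policy over hand-made features; every size, reward, and parameter below is a made-up placeholder.

```python
import numpy as np

# Generic REINFORCE policy-gradient sketch (not the paper's algorithm).
rng = np.random.default_rng(0)
n_features, n_actions, lr, gamma = 6, 3, 0.05, 0.99
theta = np.zeros((n_actions, n_features))          # softmax policy parameters

def policy(phi):
    logits = theta @ phi
    p = np.exp(logits - logits.max())
    return p / p.sum()

def update(trajectory):
    """trajectory: list of (features, action, reward); one gradient step per episode."""
    global theta
    G = 0.0
    for phi, a, r in reversed(trajectory):
        G = r + gamma * G                          # discounted return from this step on
        grad = -np.outer(policy(phi), phi)         # d log pi(a|phi) / d theta for softmax
        grad[a] += phi
        theta = theta + lr * G * grad              # ascend the expected return

episode = [(rng.normal(size=n_features), int(rng.integers(n_actions)), r)
           for r in (0.0, 0.0, 1.0)]               # one invented three-step episode
update(episode)
```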

8.
A Navigation Method for Autonomous Robots in Dynamic Unknown Environments (cited: 1; self-citations: 1; citations by others: 0)
A navigation method for autonomous robots in dynamic unknown environments is proposed that uses a small amount of human assistance to avoid tedious map description. The method has two phases: a user-guided phase and an autonomous navigation phase. In the user-guided phase, information from multiple sensors is fused to build a coarse polar-coordinate map of the local environment, from which a global map can be obtained; a method for eliminating sensor-data errors is also given. In the autonomous navigation phase, the robot moves through the dynamic environment using the map obtained during guidance, subject to motion-control constraints and a dynamic obstacle-avoidance method. With this approach the robot can handle unexpected obstacles and optimize its path; experimental results demonstrate its effectiveness. (A sketch of a polar local map follows.)
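The paper's sensor-fusion and error-elimination details are not given here. Purely as an illustration of a coarse polar local map, the sketch below bins hypothetical range readings into angular sectors and keeps the nearest obstacle per sector; the sector count, maximum range, and readings are all assumptions.

```python
import numpy as np

# Coarse polar local map sketch (illustrative; not the paper's fusion scheme).
n_sectors, max_range = 12, 4.0                       # 30-degree sectors, metres

def polar_map(bearings_rad, ranges_m):
    """Keep the nearest obstacle distance observed in each angular sector."""
    grid = np.full(n_sectors, max_range)
    for theta, r in zip(bearings_rad, ranges_m):
        k = int((theta % (2 * np.pi)) / (2 * np.pi) * n_sectors)
        grid[k] = min(grid[k], min(r, max_range))
    return grid

bearings = np.deg2rad([0, 15, 90, 180, 270])         # made-up sonar bearings
ranges   = [1.2, 0.8, 3.5, 2.0, 0.5]                 # made-up distances in metres
print(polar_map(bearings, ranges))
```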

9.
顾勇平, 周华平, 马宏绪. 《机器人》(Robot), 2002, 24(2): 140-143
Taking the six-legged walking robot developed by the robotics laboratory of the National University of Defense Technology as background, this paper analyzes the problems that may arise in multi-robot coordinated control and the corresponding solutions. Research on the control theory of this class of robots is closely tied to the concept of robot troops, and as reconnaissance and detection robots they also have broad application prospects.

10.
Application of a Potential-Field-Improved Limit-Cycle Navigation Method to Mobile Robots (cited: 7; self-citations: 0; citations by others: 7)
程拥强, 蒋平, 朱劲, 郭凤龙. 《机器人》(Robot), 2004, 26(2): 133-138
Combining the artificial potential field method, a new limit-cycle method is proposed that provides good real-time path planning for autonomous mobile robots in dynamically changing environments such as robot soccer. Obstacle velocities and strategies that are hard to express directly are converted into potential fields, the potential fields are then abstracted into virtual obstacles, and dynamic path planning is obtained by changing the limit-cycle radius of a nonlinear equation. The method lets the robot avoid fast-moving obstacles smoothly and reach its goal while satisfying complex path-planning requirements. Simulations and experiments demonstrate its value in robot soccer. (An illustrative limit-cycle sketch follows.)
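The authors' exact controller is not reproduced; the sketch below only illustrates a commonly used limit-cycle vector field around a virtual obstacle, whose radius can be varied to reshape the path, as the abstract describes. The obstacle position, radius, and step size are invented.

```python
import numpy as np

# Illustrative limit-cycle vector field around a virtual obstacle (a common
# formulation in robot-soccer navigation, not the authors' exact controller).
def limit_cycle_step(pos, obstacle, radius, clockwise=True, dt=0.05):
    """One integration step of the limit-cycle dynamics centred on the obstacle."""
    x, y = pos - obstacle                       # coordinates relative to the obstacle
    s = radius**2 - x**2 - y**2                 # >0 inside the circle, <0 outside
    sign = 1.0 if clockwise else -1.0
    dx = sign * y + x * s
    dy = -sign * x + y * s
    v = np.array([dx, dy])
    v /= np.linalg.norm(v) + 1e-9               # keep a constant step size
    return pos + dt * v

pos = np.array([1.0, 0.3])
for _ in range(5):                              # the trajectory converges onto the circle
    pos = limit_cycle_step(pos, obstacle=np.array([0.0, 0.0]), radius=0.5)
print(pos)
```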

11.
Reinforcement learning (RL) can provide a basic framework for autonomous robots to learn to control and maximize future cumulative rewards in complex environments. To achieve high performance, RL controllers must consider the complex external dynamics for movements and task (reward function) and optimize control commands. For example, a robot playing tennis and squash needs to cope with the different dynamics of a tennis or squash racket and such dynamic environmental factors as the wind. In addition, this robot has to tailor its tactics simultaneously under the rules of either game. This double complexity of the external dynamics and reward function sometimes becomes more complex when both the multiple dynamics and multiple reward functions switch implicitly, as in the situation of a real (multi-agent) game of tennis where one player cannot observe the intention of her opponents or her partner. The robot must consider its opponent's and its partner's unobservable behavioral goals (reward function). In this article, we address how an RL agent should be designed to handle such double complexity of dynamics and reward. We have previously proposed modular selection and identification for control (MOSAIC) to cope with nonstationary dynamics where appropriate controllers are selected and learned among many candidates based on the error of its paired dynamics predictor: the forward model. Here we extend this framework for RL and propose MOSAIC-MR architecture. It resembles MOSAIC in spirit and selects and learns an appropriate RL controller based on the RL controller's TD error using the errors of the dynamics (the forward model) and the reward predictors. Furthermore, unlike other MOSAIC variants for RL, RL controllers are not a priori paired with the fixed predictors of dynamics and rewards. The simulation results demonstrate that MOSAIC-MR outperforms other counterparts because of this flexible association ability among RL controllers, forward models, and reward predictors.
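A toy illustration of the module-selection principle behind MOSAIC-style architectures, which the abstract describes: modules whose predictors fit the current situation better receive larger responsibility. The error values and temperature below are hypothetical and not taken from the paper.

```python
import numpy as np

# Toy module-responsibility sketch: a softmax over (negative) prediction errors,
# combining dynamics (forward-model) and reward-predictor errors per module.
def responsibilities(prediction_errors, temperature=1.0):
    """Smaller prediction error -> larger responsibility for that module."""
    scores = -np.asarray(prediction_errors, dtype=float) / temperature
    scores -= scores.max()                       # numerical stability
    w = np.exp(scores)
    return w / w.sum()

forward_model_errors = [0.9, 0.1, 0.5]           # made-up per-module dynamics errors
reward_model_errors  = [0.2, 0.7, 0.1]           # made-up per-module reward errors
combined = np.add(forward_model_errors, reward_model_errors)
print(responsibilities(combined))                # module with the smallest total error wins
```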

12.
Reinforcement learning (RL) is a popular method for solving the path planning problem of autonomous mobile robots in unknown environments. However, the primary difficulty faced by learning robots using the RL method is that they learn too slowly in obstacle-dense environments. To more efficiently solve the path planning problem of autonomous mobile robots in such environments, this paper presents a novel approach in which the robot’s learning process is divided into two phases. The first one is to accelerate the learning process for obtaining an optimal policy by developing the well-known Dyna-Q algorithm that trains the robot in learning actions for avoiding obstacles when following the vector direction. In this phase, the robot’s position is represented as a uniform grid. At each time step, the robot performs an action to move to one of its eight adjacent cells, so the path obtained from the optimal policy may be longer than the true shortest path. The second one is to train the robot in learning a collision-free smooth path for decreasing the number of the heading changes of the robot. The simulation results show that the proposed approach is efficient for the path planning problem of autonomous mobile robots in unknown environments with dense obstacles.
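A minimal tabular Dyna-Q sketch of the first learning phase described in the abstract: a direct Q-learning update from real experience plus extra planning updates replayed from a learned model. The grid layout, rewards, and hyperparameters are placeholders rather than the paper's settings.

```python
import random
from collections import defaultdict

# Minimal tabular Dyna-Q sketch on an abstract grid (illustrative; the paper's
# environment and its second smoothing phase are not reproduced).
alpha, gamma, epsilon, planning_steps = 0.1, 0.95, 0.1, 20
Q = defaultdict(float)                    # Q[(state, action)]
model = {}                                # model[(state, action)] = (reward, next_state)

def choose(state, actions):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def dyna_q_update(s, a, r, s_next, actions):
    # direct RL update from real experience
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    model[(s, a)] = (r, s_next)           # learn a deterministic model of the move
    # planning: replay simulated experience drawn from the model
    for _ in range(planning_steps):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        best = max(Q[(ps_next, b)] for b in actions)
        Q[(ps, pa)] += alpha * (pr + gamma * best - Q[(ps, pa)])

actions = list(range(8))                  # eight moves to adjacent cells, as in the paper
dyna_q_update((0, 0), choose((0, 0), actions), 0.0, (0, 1), actions)
```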

13.
We propose an integrated technique of genetic programming (GP) and reinforcement learning (RL) to enable a real robot to adapt its actions to a real environment. Our technique does not require a precise simulator because learning is achieved through the real robot. In addition, our technique makes it possible for real robots to learn effective actions. Based on this proposed technique, we acquire common programs, using GP, which are applicable to various types of robots. Through this acquired program, we execute RL in a real robot. With our method, the robot can adapt to its own operational characteristics and learn effective actions. In this paper, we show experimental results from two different robots: a four-legged robot "AIBO" and a humanoid robot "HOAP-1." We present results showing that both effectively solved the box-moving task; the end result demonstrates that our proposed technique performs better than the traditional Q-learning method.

14.
The robot soccer game has been proposed as a benchmark problem for artificial intelligence and robotics research. The decision-making system is the most important part of the robot soccer system. As the environment is dynamic and complex, a reinforcement learning (RL) method named FNN-RL is employed to learn the decision-making strategy. The FNN-RL system consists of a fuzzy neural network (FNN) and RL. RL is used for structure identification and parameter tuning of the FNN; on the other hand, the curse-of-dimensionality problem of RL can be solved by the function-approximation characteristics of the FNN. Furthermore, the residual algorithm is used to calculate the gradient of the FNN-RL method in order to guarantee the convergence and rapidity of learning. The complex decision-making task is divided into multiple learning subtasks that include dynamic role assignment, action selection, and action implementation; these constitute a hierarchical learning system. We apply the proposed FNN-RL method to soccer agents that attempt to learn each subtask at the various layers. The effectiveness of the proposed method is demonstrated by simulation and real experiments.
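A sketch of the residual-gradient idea mentioned in the abstract, shown for a linear value approximator rather than the paper's fuzzy neural network: the update descends the squared Bellman residual with respect to both the current and the next value estimate. Features and parameters are assumed.

```python
import numpy as np

# Residual-gradient update sketch for a linear value approximator (stands in for
# the FNN in the paper; sizes and parameters are hypothetical).
n_features, alpha, gamma = 5, 0.05, 0.9
w = np.zeros(n_features)

def residual_gradient_step(phi, reward, phi_next):
    """Descend the squared Bellman residual with respect to BOTH value estimates."""
    global w
    delta = reward + gamma * w @ phi_next - w @ phi          # Bellman residual
    grad = gamma * phi_next - phi                            # d(residual)/dw
    w = w - alpha * delta * grad                             # full residual gradient step
    return delta

phi_t  = np.array([1, 0, 0, 1, 0], dtype=float)              # hypothetical features at t
phi_t1 = np.array([0, 1, 0, 0, 1], dtype=float)              # hypothetical features at t+1
print(residual_gradient_step(phi_t, 1.0, phi_t1))
```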

15.
Reinforcement learning (RL) is a biologically supported learning paradigm, which allows an agent to learn through experience acquired by interaction with its environment. Its potential to learn complex action sequences has been proven for a variety of problems, such as navigation tasks. However, the interactive randomized exploration of the state space, common in reinforcement learning, makes it difficult to be used in real-world scenarios. In this work we describe a novel real-world reinforcement learning method. It uses a supervised reinforcement learning approach combined with Gaussian distributed state activation. We successfully tested this method in two real scenarios of humanoid robot navigation: first, backward movements for docking at a charging station and second, forward movements to prepare grasping. Our approach reduces the required learning steps by more than an order of magnitude, and it is robust and easy to be integrated into conventional RL techniques.
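A small sketch of Gaussian distributed state activation under assumed centres and width: a continuous state is encoded as graded activations of several Gaussian-tuned units rather than a single discrete cell. This only illustrates the encoding, not the authors' network.

```python
import numpy as np

# Gaussian state-activation sketch (hypothetical 1-D centres and width).
centres = np.linspace(-1.0, 1.0, 5)               # assumed state-unit centres
sigma = 0.25

def gaussian_activation(x):
    """Distributed code for state x: nearby units respond strongly, distant ones weakly."""
    a = np.exp(-0.5 * ((x - centres) / sigma) ** 2)
    return a / a.sum()                             # normalise so activations sum to 1

print(gaussian_activation(0.1))                    # the unit centred at 0.0 dominates
```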

16.
Robust motion control is fundamental to autonomous mobile robots. In the past few years, reinforcement learning (RL) has attracted considerable attention in the feedback control of wheeled mobile robots. However, it is still difficult for RL to solve problems with large or continuous state spaces, which are common in robotics. To improve the generalization ability of RL, this paper presents a novel hierarchical RL approach for optimal path tracking of wheeled mobile robots. In the proposed approach, a graph Laplacian-based hierarchical approximate policy iteration (GHAPI) algorithm is developed, in which the basis functions are constructed automatically using the graph Laplacian operator. In GHAPI, the state space of a Markov decision process is divided into several subspaces and approximate policy iteration is carried out on each subspace. Then, a near-optimal path-tracking control strategy can be obtained by GHAPI combined with proportional-derivative (PD) control. The performance of the proposed approach is evaluated by using a P3-AT wheeled mobile robot. It is demonstrated that the GHAPI-based PD control can obtain better near-optimal control policies than previous approaches.
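A sketch of graph-Laplacian basis construction (proto-value functions) on a tiny five-state chain, assuming an undirected state-adjacency graph; the paper's subspace decomposition and PD coupling are omitted.

```python
import numpy as np

# Graph-Laplacian basis sketch: the smoothest Laplacian eigenvectors of the
# state-adjacency graph serve as feature vectors for approximate policy iteration.
A = np.zeros((5, 5))
for i in range(4):                     # 5-state chain: state i <-> state i+1
    A[i, i + 1] = A[i + 1, i] = 1.0

D = np.diag(A.sum(axis=1))             # degree matrix
L = D - A                              # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvectors sorted by increasing eigenvalue

k = 3
basis = eigvecs[:, :k]                 # k smoothest basis functions over the state graph

def features(state_index):
    return basis[state_index]          # feature vector for that state

print(features(2))
```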

17.
A Reinforcement Learning (RL) algorithm based on eXtended Classifier System (XCS) is used to navigate a spherical robot. Traditional motion planning strategies rely on pre-planned optimal trajectories and feedback control techniques. The proposed learning agent approach enjoys a direct model-free methodology that enables the robot to function in dynamic and/or partially observable environments. The agent uses a set of guard-action rules that determines the motion inputs at each step. Using a number of control inputs (actions) and the developed RL scheme, the agent learns to make near-optimal moves in response to the incoming position/orientation signals. The proposed method employs an improved variant of the XCS as its learning agent. Results of several simulated experiments for the spherical robot show that this approach is capable of planning a near-optimal path to a predefined target from any given position/orientation.

18.
Reinforcement Learning (RL) is learning through direct experimentation. It does not assume the existence of a teacher that provides examples upon which learning of a task takes place. Instead, in RL experience is the only teacher. With historical roots in the study of biological conditioned reflexes, RL attracts the interest of engineers and computer scientists because of its theoretical relevance and potential applications in fields as diverse as operational research and intelligent robotics. Computationally, RL is intended to operate in a learning environment composed of two subjects: the learner and a dynamic process. At successive time steps, the learner makes an observation of the process state, selects an action and applies it back to the process. Its goal is to find an action policy that controls the behavior of the dynamic process, guided by signals (reinforcements) that indicate how badly or well it has been performing the required task. These signals are usually associated with a dramatic condition, e.g., accomplishment of a subtask (reward) or complete failure (punishment), and the learner tries to optimize its behavior by using a performance measure (a function of the received reinforcements). The crucial point is that, in order to do that, the learner must evaluate the conditions (associations between observed states and chosen actions) that led to rewards or punishments. Starting from basic concepts, this tutorial presents the many flavors of RL algorithms, develops the corresponding mathematical tools, assesses their practical limitations and discusses alternatives that have been proposed for applying RL to realistic tasks.
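Since the tutorial covers the basic algorithms, here is a textbook one-step Q-learning update with epsilon-greedy exploration; the tiny task and parameter values are invented for illustration.

```python
import random
from collections import defaultdict

# Textbook tabular Q-learning sketch (illustrative task and parameters).
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = defaultdict(float)                           # Q[(state, action)]

def act(state, actions):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def learn(s, a, reward, s_next, actions):
    """Move Q(s, a) toward the reward plus the best value of the next state."""
    target = reward + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

actions = ["left", "right"]
learn("start", act("start", actions), 1.0, "goal", actions)
```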

19.
Advanced Robotics, 2013, 27(10): 1125-1142
This paper presents a novel approach for acquiring dynamic whole-body movements on humanoid robots focused on learning a control policy for the center of mass (CoM). In our approach, we combine both a model-based CoM controller and a model-free reinforcement learning (RL) method to acquire dynamic whole-body movements in humanoid robots. (i) To cope with high dimensionality, we use a model-based CoM controller as a basic controller that derives joint angular velocities from the desired CoM velocity. The balancing issue can also be considered in the controller. (ii) The RL method is used to acquire a controller that generates the desired CoM velocity based on the current state. To demonstrate the effectiveness of our approach, we apply it to a ball-punching task on a simulated humanoid robot model. The acquired whole-body punching movement was also demonstrated on Fujitsu's Hoap-2 humanoid robot.

20.
Reinforcement learning (RL) has been widely used as a mechanism for autonomous robots to learn state-action pairs by interacting with their environment. However, most RL methods usually suffer from slow convergence when deriving an optimum policy in practical applications. To solve this problem, a stochastic shortest path-based Q-learning (SSPQL) is proposed, combining a stochastic shortest-path-finding method with Q-learning, a well-known model-free RL method. The rationale is that, if a robot has an internal state-transition model which is incrementally learnt, then the robot can infer the local optimum policy by using a stochastic shortest-path-finding method. By increasing the values of the state-action pairs comprising these local optimum policies, a robot can reach a goal quickly and, as a result, this process can enhance convergence speed. To demonstrate the validity of this proposed learning approach, several experimental results are presented in this paper.
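A rough sketch of the SSPQL idea under simplifying assumptions: a deterministic learned model, breadth-first search standing in for a full stochastic shortest-path solver, and a fixed bonus added to Q-values along the found path. None of this is the authors' code.

```python
from collections import defaultdict, deque

# Simplified SSPQL-style sketch: search the learnt model for a path to the goal
# and raise the Q-values of the state-action pairs along that path.
Q = defaultdict(float)
model = defaultdict(dict)        # model[state][action] = next_state (learnt online)

def shortest_path_boost(start, goal, bonus=1.0):
    """Breadth-first search over the learnt model, then bias Q along the path."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            break
        for a, s2 in model[s].items():
            if s2 not in parent:
                parent[s2] = (s, a)
                queue.append(s2)
    s = goal
    while s in parent and parent[s] is not None:   # walk the found path backwards
        prev, a = parent[s]
        Q[(prev, a)] += bonus                      # bias Q-learning toward this path
        s = prev

model["A"]["right"] = "B"        # made-up transitions observed so far
model["B"]["right"] = "GOAL"
shortest_path_boost("A", "GOAL")
print(Q[("A", "right")], Q[("B", "right")])
```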
