Similar Documents
19 similar documents found.
1.
Improved reinforcement learning based on prior knowledge and its application in MAS (cited by 2: 1 self-citation, 1 other)
To address the low learning efficiency of agents in conventional multi-agent reinforcement learning, a function encoding experience-based knowledge is added to the standard algorithm. The concept of intrinsic motivation is introduced from psychology and used as a reinforcement signal that acts on the whole learning process together with the external reward signal. The algorithm is applied to RoboCup simulation, and the results show that its learning efficiency and convergence speed are clearly better than those of conventional reinforcement learning.
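A minimal sketch of how an intrinsic reward can be blended with the external reward inside a tabular Q-learning update. The count-based curiosity bonus, the weight `beta`, and the class name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import defaultdict

class IntrinsicQLearner:
    """Tabular Q-learning whose update uses r_ext + beta * r_int (sketch)."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, beta=0.5):
        self.Q = defaultdict(lambda: np.zeros(n_actions))
        self.visits = defaultdict(int)          # state-action visit counts
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def intrinsic_reward(self, state, action):
        # Count-based "curiosity" bonus: rarely tried state-actions feel more rewarding.
        self.visits[(state, action)] += 1
        return 1.0 / np.sqrt(self.visits[(state, action)])

    def update(self, s, a, r_ext, s_next):
        r = r_ext + self.beta * self.intrinsic_reward(s, a)   # combined signal
        td_target = r + self.gamma * self.Q[s_next].max()
        self.Q[s][a] += self.alpha * (td_target - self.Q[s][a])
```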

2.
To address the poor self-directedness of a two-wheeled self-balancing robot during learning and the low single-step learning efficiency of earlier reinforcement learning, an intrinsically motivated reinforcement learning algorithm is proposed, inspired by the theory of intrinsic motivation in psychology. The algorithm uses an intrinsic motivation signal as an internal reward that mimics human cognitive mechanisms and acts on the whole learning process together with the external signal, improving the agent's self-learning ability; a self-organizing neural network is used for training to keep the algorithm fast. Comparative simulations with and without disturbances verify that the intrinsically motivated algorithm enables the two-wheeled robot to reach balance in an unknown environment through autonomous learning, demonstrating its robustness and feasibility.

3.
For model-free nonlinear systems with continuous state spaces, a multi-step reinforcement learning control algorithm based on a radial basis function (RBF) neural network is proposed. First, a neural network is introduced into the reinforcement learning system: the function-approximation capability of the RBF network is used to represent the state-action value function, handling the continuous state space. Then, eligibility traces are incorporated to form a multi-step Sarsa algorithm, which records visited states to improve learning efficiency. Finally, the softmax policy is improved by decaying its temperature parameter, tuning the action-selection probabilities so as to balance exploration and exploitation. Simulations on the MountainCar task show that, after a small amount of training, the proposed algorithm effectively controls the continuous nonlinear system without a model; compared with the one-step algorithm, it needs fewer steps on average to converge and behaves more stably, indicating that combining nonlinear value-function approximation with multi-step methods also performs well in control tasks.
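As a concrete illustration of the temperature-decayed softmax policy described above, the sketch below draws actions from a Boltzmann distribution over Q-values and cools the temperature each episode; the schedule constants are assumptions for illustration, not the values used in the paper.

```python
import numpy as np

def softmax_policy(q_values, temperature):
    """Boltzmann action selection: higher temperature -> more exploration."""
    prefs = np.asarray(q_values) / temperature
    prefs -= prefs.max()                      # numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return np.random.choice(len(q_values), p=probs)

# Illustrative temperature schedule: start exploratory, cool toward greedy.
temperature, decay, t_min = 5.0, 0.995, 0.1
for episode in range(1000):
    temperature = max(t_min, temperature * decay)
    # a = softmax_policy(Q[s], temperature)   # called inside the Sarsa loop
```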

4.
Because learning an action policy with reinforcement learning is time-consuming, a heuristic reinforcement learning method based on state backtracking is proposed. Repeated states encountered during learning are analyzed and, by comparing the selection strategies for repeated actions during state backtracking, a cost function describing the importance of repeated actions is introduced. A new heuristic function is then defined by combining action reward and action cost: it emphasizes the importance of actions to speed up learning, while using the cost function to price action selection and prune unnecessary exploration, so that learning efficiency improves steadily. The cost-based action-selection policy is proven. Two simulation scenarios are built and the algorithm is applied to robot path planning. The results show that the state-backtracking heuristic method balances the reward obtained against the cost paid and effectively improves the convergence speed of Q-learning.
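A minimal sketch of an action-selection rule that combines the learned Q-value with a heuristic bonus and a cost for actions repeatedly retried in the same state, in the spirit of the heuristic function described above; the weights, the repeat-count cost, and the function names are assumptions, not the paper's definitions.

```python
import numpy as np
from collections import defaultdict

eta, rho = 0.3, 0.2                     # heuristic and cost weights (assumed)
repeat_count = defaultdict(int)         # how often (state, action) was retried

def cost(state, action):
    """Penalize actions that keep being repeated in the same state."""
    return repeat_count[(state, action)]

def select_action(Q, state, n_actions, heuristic):
    # Q: table mapping state -> array of action values.
    scores = [Q[state][a] + eta * heuristic(state, a) - rho * cost(state, a)
              for a in range(n_actions)]
    a = int(np.argmax(scores))
    repeat_count[(state, a)] += 1
    return a
```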

5.
Research on methods for improving the speed of reinforcement learning (cited by 4: 0 self-citations, 4 others)
The term reinforcement learning comes from behavioral psychology, which views learning as a trial-and-error process that maps environment states to actions. This characteristic inevitably makes intelligent systems harder to build and lengthens learning time. Reinforcement learning is slow because there is no explicit supervisory signal: when interacting with the environment, the learning system has to rely on trial and error and on external evaluative signals to adjust its behavior, so it necessarily goes through a long learning process. How to speed up reinforcement learning is therefore a central research question. This paper discusses methods for improving the speed of reinforcement learning from several perspectives.

6.
A novel multi-agent reinforcement learning method based on Q-learning, the ant colony algorithm, and roulette-wheel selection is proposed. In reinforcement learning, when the number of agents grows large enough, a combinatorial explosion of the action space appears and the learning speed drops sharply. Moreover, since agents choose their next action according to Q-values, action selection in the early stage of learning is heavily biased toward high Q-values. Combining the ant colony algorithm, roulette-wheel selection, and reinforcement learning is expected to address these problems. Theoretical analysis and experimental results both show that the improved Q-learning is feasible and effectively improves learning efficiency.
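As a sketch of the ant-colony/roulette-wheel idea for de-biasing early action selection, the function below treats shifted Q-values like pheromone weights and spins a roulette wheel over them instead of always taking the greedy action; the exponent and offset are assumptions for illustration.

```python
import numpy as np

def roulette_action(q_row, alpha=1.0, eps=1e-6):
    """Pick an action with probability proportional to its (shifted) Q-value,
    treating Q like ant-colony pheromone rather than taking the argmax."""
    pher = np.asarray(q_row, dtype=float)
    pher = (pher - pher.min() + eps) ** alpha   # make weights positive
    probs = pher / pher.sum()
    r, cum = np.random.rand(), 0.0              # spin the roulette wheel
    for a, p in enumerate(probs):
        cum += p
        if r <= cum:
            return a
    return len(probs) - 1
```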

7.
Research on and improvement of several problems in multi-agent Q-learning (cited by 1: 0 self-citations, 1 other)
A novel multi-agent reinforcement learning method based on Q-learning, the ant colony algorithm, and roulette-wheel selection is proposed. In reinforcement learning, when the number of agents grows large enough, the action space explodes: interaction becomes difficult and the learning speed drops sharply. In addition, because agents choose their next action according to Q-values, early action selection is heavily biased toward high Q-values. Here the ant colony algorithm, roulette-wheel selection, and reinforcement learning are combined to address these problems. Theoretical analysis and experimental results both show that the improved Q-learning is feasible and effectively improves learning efficiency.

8.
A reinforcement learning agent solves decision problems by learning an optimal policy that maps states to actions. In reinforcement learning, the agent improves its behavior through trial-and-error interaction with the environment. The Markov decision process (MDP) model is the standard formalism for reinforcement learning problems, and dynamic programming provides policy-dependent value-function learning algorithms for agents in Markov environments. However, because the agent must store the entire value function during learning, the memory requirement becomes enormous as the state space grows. This paper proposes a forgetting algorithm for dynamic-programming-based reinforcement learning: by bringing basic principles of forgetting from memory psychology into value-function learning, it derives an effective dynamic-programming method for reinforcement learning problems, the Forget-DP algorithm.
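To make the forgetting idea concrete, here is an illustrative value store in which estimates for long-unvisited states decay and are eventually evicted. This is a loose sketch of "forgetting" in value-function learning, not the paper's Forget-DP algorithm, and the decay and horizon constants are assumptions.

```python
class ForgettingValueTable:
    """Value-function store that decays and eventually evicts stale entries."""
    def __init__(self, decay=0.99, horizon=500):
        self.V, self.last_seen, self.step = {}, {}, 0
        self.decay, self.horizon = decay, horizon

    def update(self, state, value):
        self.step += 1
        self.V[state] = value
        self.last_seen[state] = self.step

    def get(self, state, default=0.0):
        if state not in self.V:
            return default
        age = self.step - self.last_seen[state]
        if age > self.horizon:                       # forget very old estimates
            del self.V[state], self.last_seen[state]
            return default
        return self.V[state] * (self.decay ** age)   # gradual forgetting
```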

9.
樊尚春  关铭  庄海涵 《传感技术学报》2007,20(11):2408-2411
To strengthen the output signal of a resonant tuning fork and thereby improve its excitation efficiency, the excitation efficiency of the fork under different excitation signals was measured, analyzed, and compared. For a double-ended fixed tuning fork, the different excitation signals were first analyzed theoretically. Then, on a resonant accelerometer test platform, the tuning fork was driven with the different excitation signals and the outputs were measured. The results show that, ranked from highest to lowest excitation efficiency, the signals are: square wave, sine wave, triangle wave, and sawtooth wave; and the larger the DC bias added to the excitation signal, the larger the amplitude of the output waveform.

10.
陈道琨  刘芳芳  杨超 《软件学报》2022,33(8):4452-4463
Many reinforcement learning methods pay little attention to the safety of the decisions they produce, yet both research and industrial applications require the agent's decisions to be safe. Traditional remedies, such as modifying the objective function or the agent's exploration process, ignore the damage and cost the agent incurs and therefore cannot effectively guarantee safety. Building on constrained Markov decision processes, safety constraints are added to the action space to design a safe Sarsa(λ) method and a safe Sarsa method: the agent is required not only to maximize the state-action value but also to satisfy the safety constraints, yielding a safe optimal policy. Because traditional reinforcement learning solvers no longer apply to the constrained safe Sarsa(λ) and safe Sarsa models, a solution model for safe reinforcement learning is proposed to obtain the globally optimal state-action value function under the constraints. Based on linearized multi-dimensional constraints and the method of Lagrange multipliers, and provided that the state-action value function and the constraint functions are differentiable, the safe reinforcement learning model is converted into a convex model, avoiding local optima during solving and improving the efficiency and accuracy of the algorithm. A feasibility proof of the algorithm is also given. Finally, experiments verify its effectiveness.
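A hedged sketch of the Lagrangian-relaxation idea behind such safe Sarsa variants: the safety constraint E[cost] ≤ budget is folded into the reward as r − λ·cost, and the multiplier λ is adjusted by dual ascent between episodes. The class name, budget, and learning rate are assumptions for illustration, not the paper's solver.

```python
class LagrangianSafety:
    """Dual-ascent handling of a single safety constraint E[cost] <= budget."""
    def __init__(self, budget=0.1, lr=0.01):
        self.lam, self.budget, self.lr = 0.0, budget, lr

    def shaped_reward(self, reward, cost):
        # Constraint folded into the return that Sarsa maximizes.
        return reward - self.lam * cost

    def dual_step(self, avg_episode_cost):
        # Increase lam when the constraint is violated, relax it otherwise.
        self.lam = max(0.0, self.lam + self.lr * (avg_episode_cost - self.budget))
```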

11.
To address the poor self-directedness of a two-wheeled self-balancing robot during learning, and inspired by the psychological theory of intrinsic motivation, an intrinsically motivated autonomous developmental algorithm for intelligent robots is proposed. Within the reinforcement learning framework, an intrinsic motivation term modelling human curiosity is introduced as an internal drive that acts on the whole learning process together with the external reward signal. A two-layer internal regression neural network is used to store and accumulate the learned knowledge, so that the robot gradually acquires the balancing skill on its own. Finally, to cope with measurement noise affecting the two wheels' angular velocities in balance control, Kalman filtering is used for compensation, which speeds up convergence and reduces system error. Simulations show that the algorithm enables the two-wheeled robot to build up cognition through interaction with the environment and successfully learn the motion balance control skill.
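The following is a minimal scalar Kalman-filter sketch of the kind of measurement-noise compensation mentioned for the wheels' angular velocities; the noise variances q and r and the constant-velocity example are assumptions, not the paper's tuned values.

```python
import numpy as np

def kalman_filter_1d(measurements, q=1e-4, r=0.05):
    """Scalar Kalman filter for a noisy angular-velocity signal.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    x, p = 0.0, 1.0                # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                     # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates)

# Example: smooth a noisy constant angular velocity of 2 rad/s.
noisy = 2.0 + 0.2 * np.random.randn(200)
smooth = kalman_filter_1d(noisy)
```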

12.
This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our algorithmic architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line, we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.

13.
While previous research has emphasised the importance of business skills for information systems (IS) developers in the process of IS development, few studies have investigated the determinants of IS developers’ behavioural intention to learn business skills. The current study explores the factors affecting IS developers’ intention to learn business skills based on previous theories and research. Data collected from 258 valid respondents are tested against the research model using the partial least-squares approach. The results indicate that both job involvement and career insight have significant positive effects on extrinsic and intrinsic motivations for learning business skills. Additionally, learning self-efficacy is not only found to have a significant influence on learning intention, but is also found to have a moderating effect on the positive relationship between intrinsic motivation and learning intention. The findings of this study provide several important theoretical and practical implications for IS developers’ behaviour of learning business skills.

14.
This paper proposes a reinforcement fuzzy adaptive learning control network (RFALCON), constructed by integrating two fuzzy adaptive learning control networks (FALCON), each of which has a feedforward multilayer network and is developed for the realization of a fuzzy controller. One FALCON performs as a critic network (fuzzy predictor), the other as an action network (fuzzy controller). Using temporal difference prediction, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network performs a stochastic exploratory algorithm to adapt itself according to the internal reinforcement signal. An ART-based reinforcement structure/parameter-learning algorithm is developed for constructing the RFALCON dynamically. During the learning process, structure and parameter learning are performed simultaneously. RFALCON can construct a fuzzy control system through a reward/penalty signal. It has two important features: it reduces the combinatorial demands of system adaptive linearization, and it is highly autonomous.
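A small sketch of the critic's role described above: the one-step TD error computed from the critic's value predictions serves as a dense internal reinforcement signal for the action network, available at every step even when the external reward is sparse. The linear toy critic and the discount factor are illustrative assumptions, not the FALCON/RFALCON implementation.

```python
import numpy as np

def internal_reinforcement(critic, s, r_ext, s_next, gamma=0.9):
    """One-step TD error from the critic's predictions, used as a dense
    internal reward for the action network even when r_ext is sparse."""
    return r_ext + gamma * critic(s_next) - critic(s)

# Toy critic: a linear value estimate over a state feature vector (assumed).
w = np.zeros(4)
critic = lambda s: float(w @ s)
r_int = internal_reinforcement(critic, np.ones(4), r_ext=0.0, s_next=np.zeros(4))
```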

15.
Online learning control by association and reinforcement (cited by 4: 0 self-citations, 4 others)
This paper focuses on a systematic treatment for developing a generic online learning control system based on the fundamental principle of reinforcement learning or, more specifically, neural dynamic programming. This online learning system improves its performance over time in two respects: 1) it learns from its own mistakes through the reinforcement signal from the external environment and tries to reinforce its actions to improve future performance; and 2) system states associated with positive reinforcement are memorized through a network learning process so that, in the future, similar states will be more strongly associated with a control action leading to positive reinforcement. A successful candidate online learning control design is introduced. Real-time learning algorithms are derived for the individual components of the learning system. Some analytical insight is provided to give guidelines on the learning process that takes place in each module of the online learning control system.

16.
This paper presents a reinforcement learning algorithm which allows a robot, with a single camera mounted on a pan-tilt platform, to learn simple skills such as watch and orientation, and to obtain the complex skill called approach by combining the previously learned ones. The reinforcement signal the robot receives is a real continuous value, so it is not necessary to estimate an expected reward. Skills are implemented with a generic structure that permits complex skills to be created from the sequencing, output addition, and data flow of available simple skills.

17.
Controlling chaos by GA-based reinforcement learning neural network (cited by 12: 0 self-citations, 12 others)
This paper proposes a TD (temporal difference) and GA (genetic algorithm) based reinforcement (TDGAR) neural learning scheme for controlling chaotic dynamical systems based on the technique of small perturbations. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to fulfil the reinforcement learning task. Structurally, the TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network for helping the learning of the other network, the action network, which determines the outputs (actions) of the TDGAR learning system. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. This can usually accelerate GA learning, since an external reinforcement signal may only become available long after a sequence of actions has occurred in reinforcement learning problems. By defining a simple external reinforcement signal, the TDGAR learning system can learn to produce a series of small perturbations that convert chaotic oscillations of a chaotic system into desired regular ones with periodic behavior. The proposed method is an adaptive search for the optimum control technique. Computer simulations on controlling two chaotic systems, i.e., the Henon map and the logistic map, have been conducted to illustrate the performance of the proposed method.

18.

A solution to the problem of model-free intelligent attitude control of aerospace launch vehicles is presented. Emphasis is placed on learning the proper action through reinforcement learning for problems that have no model or whose model is too complex. One approach to solving this class of problems draws motivation from the emotional learning mechanism in the mammalian brain. A simple but effective mathematical model of the emotional learning mechanism in the human brain is presented and developed to solve the closed-loop command tracking problem. The emotional learning mechanism needs a set of sensory inputs and a reinforcing signal to produce an action. Determination and tuning of the parameters of the reinforcing signal and sensory inputs are left to be solved through evolution by a genetic algorithm. Simulations show that the mathematical model of the emotional learning mechanism can be initialized as a powerful feedback control algorithm.

19.
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.
