Similar Documents
20 similar documents found.
1.
《Applied Soft Computing》2007,7(3):818-827
This paper proposes a reinforcement learning (RL)-based game-theoretic formulation for designing robust controllers for nonlinear systems affected by bounded external disturbances and parametric uncertainties. Based on the theory of Markov games, we consider a differential game in which a ‘disturbing’ agent tries to make the worst possible disturbance while a ‘control’ agent tries to make the best control input. The problem is formulated as finding a min–max solution of a value function. We propose an online procedure for learning the optimal value function and for calculating a robust control policy. The proposed game-theoretic paradigm has been tested on the control task of a highly nonlinear two-link robot system. We compare the performance of the proposed Markov game controller with a standard RL-based robust controller and an H∞ theory-based robust game controller. For the robot control task, the proposed controller achieved superior robustness to changes in payload mass and external disturbances over the other control schemes. The results also validate the effectiveness of neural networks in extending the Markov game framework to problems with continuous state–action spaces.
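To make the min–max formulation concrete, the following is a minimal tabular sketch of the kind of worst-case Q-update that underlies such Markov-game controllers. It uses a pure-strategy min–max over discrete control and disturbance actions, whereas the paper works in continuous spaces with neural-network approximation; all sizes and parameters below are illustrative assumptions.

```python
import numpy as np

def minimax_q_update(Q, s, a, d, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular update of a min-max (control vs. disturbance) Q-function.

    Q has shape (n_states, n_actions, n_disturbances). The controller
    maximizes over its own action while assuming the disturbing agent
    picks the worst-case disturbance (pure-strategy min-max).
    """
    # Worst-case value of the next state: best control action against
    # the most damaging disturbance.
    v_next = np.max(np.min(Q[s_next], axis=1))
    td_error = r + gamma * v_next - Q[s, a, d]
    Q[s, a, d] += alpha * td_error
    return Q

# Hypothetical sizes for illustration only.
Q = np.zeros((10, 4, 3))   # 10 states, 4 control actions, 3 disturbances
Q = minimax_q_update(Q, s=0, a=1, d=2, r=-1.0, s_next=3)
```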

2.
Imitation learning combines reinforcement learning with supervised learning: by observing expert demonstrations, an agent learns the expert's policy and thereby accelerates reinforcement learning. By introducing additional task-related information, imitation learning can optimize a policy faster than plain reinforcement learning and offers a remedy for low sample efficiency. It has become a popular framework for tackling reinforcement learning problems, and a variety of algorithms and techniques for improving learning performance have emerged. Combined with recent advances in graphics and image processing, imitation learning now plays an important role in game artificial intelligence (AI), robot control, and autonomous driving. This survey reviews the year's developments in imitation learning from the perspectives of behavioral cloning, inverse reinforcement learning, adversarial imitation learning, imitation learning from observation, and cross-domain imitation learning. It summarizes recent practical applications, compares the state of research in China and abroad, and outlines future directions, aiming to give researchers and practitioners an up-to-date overview of the field as a reference for their own work.
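As a concrete reference point for the behavioral-cloning branch of the survey, here is a minimal supervised imitation sketch: a small classifier is fit to expert (state, action) pairs. The network shape, data, and hyperparameters are illustrative assumptions, not drawn from any surveyed method.

```python
import torch
import torch.nn as nn

def behavior_cloning(states, actions, n_actions, epochs=50, lr=1e-3):
    """Fit a policy to expert (state, action) pairs by supervised learning.

    states:  float tensor of shape (N, state_dim)
    actions: long tensor of shape (N,) with discrete expert actions
    """
    policy = nn.Sequential(
        nn.Linear(states.shape[1], 64), nn.ReLU(),
        nn.Linear(64, n_actions),          # logits over actions
    )
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()        # imitate the expert's action choice
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)
        loss.backward()
        opt.step()
    return policy

# Toy expert data for illustration only.
states = torch.randn(256, 8)
actions = torch.randint(0, 4, (256,))
policy = behavior_cloning(states, actions, n_actions=4)
```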

3.
A regret-based reinforcement learning model for multi-agent conflict games
肖正  张世永 《软件学报》2008,19(11):2957-2967
For conflict games, this paper studies a rational and conservative action-selection method: minimizing the agent's worst-case regret. Under this method, the agent's current policy minimizes the loss it may incur in the future, and a Nash equilibrium mixed strategy can be obtained without any information about the other agents. Based on regret values, a reinforcement learning model and an algorithm for conflict games in complex multi-agent environments are proposed. The model introduces a cross-entropy distance to build the belief-update process, which further improves action selection in conflict games. Convergence of the algorithm is verified in a repeated Markov game model, and the relationship between beliefs and optimal policies is analyzed. Compared with the Q-learning extension under the MMDP (multi-agent Markov decision process) model, the algorithm greatly reduces the number of conflicts, improves the coordination of agent behaviors, raises system performance, and helps keep the system stable.
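The paper's own rule minimizes worst-case regret with a cross-entropy belief update; as a simpler, generic illustration of regret-driven action selection, the sketch below implements standard regret matching (play actions in proportion to positive cumulative regret). It is not the paper's algorithm.

```python
import numpy as np

def regret_matching_policy(cum_regret):
    """Mixed strategy from cumulative regrets (regret matching).

    Actions are chosen in proportion to their positive cumulative regret;
    if no action has positive regret, play uniformly.
    """
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full_like(cum_regret, 1.0 / len(cum_regret))

def update_regret(cum_regret, payoffs, played):
    """Accumulate regret: what each action would have earned minus what was earned."""
    cum_regret += payoffs - payoffs[played]
    return cum_regret

# Toy 3-action example for illustration only.
cum_regret = np.zeros(3)
payoffs = np.array([1.0, 0.2, -0.5])   # counterfactual payoffs this round
cum_regret = update_regret(cum_regret, payoffs, played=1)
print(regret_matching_policy(cum_regret))
```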

4.
Eliminating local hot spots is one of the key problems facing the data center industry. This paper uses active ventilation tiles (AVT) to suppress local rack hot spots, abstracts the data center AVT control problem as a Markov decision process, and designs an optimal AVT control policy based on deep reinforcement learning. The policy uses model-based deep reinforcement learning, overcoming the low sample efficiency of conventional model-free deep reinforcement learning. Extensive simulation results show that, compared with the classical model-free (PPO) method, the proposed method converges quickly to the optimal control policy and effectively suppresses rack hot spots.

5.
Fujita H  Ishii S 《Neural computation》2007,19(11):3051-3087
Games constitute a challenging domain of reinforcement learning (RL) for acquiring strategies because many of them include multiple players and many unobservable variables in a large state space. The difficulty of solving such realistic multiagent problems with partial observability arises mainly from the fact that the computational cost for the estimation and prediction in the whole state space, including unobservable variables, is too heavy. To overcome this intractability and enable an agent to learn in an unknown environment, an effective approximation method is required with explicit learning of the environmental model. We present a model-based RL scheme for large-scale multiagent problems with partial observability and apply it to a card game, hearts. This game is a well-defined example of an imperfect-information game and can be approximately formulated as a partially observable Markov decision process (POMDP) for a single learning agent. To reduce the computational cost, we use a sampling technique in which the heavy integration required for the estimation and prediction can be approximated by a plausible number of samples. Computer simulation results show that our method is effective in solving such a difficult, partially observable multiagent problem.

6.
This paper develops an adaptive fuzzy controller for robot manipulators using a Markov game formulation. The Markov game framework offers a promising platform for robust control of robot manipulators in the presence of bounded external disturbances and unknown parameter variations. We propose fuzzy Markov games as an adaptation of fuzzy Q-learning (FQL) to a continuous-action variation of Markov games, wherein the reinforcement signal is used to tune online the conclusion part of a fuzzy Markov game controller. The proposed Markov game-adaptive fuzzy controller uses a simple fuzzy inference system (FIS), is computationally efficient, generates a swift control, and requires no exact dynamics of the robot system. To illustrate the superiority of Markov game-adaptive fuzzy control, we compare the performance of the controller against a) the Markov game-based robust neural controller, b) the reinforcement learning (RL)-adaptive fuzzy controller, c) the FQL controller, d) the H∞ theory-based robust neural game controller, and e) a standard RL-based robust neural controller, on two highly nonlinear robot arm control problems of i) a standard two-link rigid robot arm and ii) a 2-DOF SCARA robot manipulator. The proposed Markov game-adaptive fuzzy controller outperformed the other controllers in terms of tracking errors and control torque requirements over different desired trajectories. The results also demonstrate the viability of FISs for accelerating learning in Markov games and extending Markov game-based control to continuous state-action space problems.

7.
陈鑫  魏海军  吴敏  曹卫华 《自动化学报》2013,39(12):2021-2031
Adaptability, generalization over continuous spaces, and dimensionality reduction are key requirements for applying multi-agent reinforcement learning (MARL) to continuous systems. To meet these needs, this paper proposes a model-based, teammate-tracking learning mechanism and algorithm for continuous multi-agent systems (MAS), called MAS MBRL-CPT. Starting from the idea that a learning agent should adapt to its teammates' policies, an individual expected immediate reward is defined that folds the agent's observations of teammate policies into the effect of interacting with the environment, and stochastic approximation is used to learn this expected immediate reward online. A reduced-dimension Q-function is defined, which lowers the dimensionality of the learning space while establishing the Markov decision process (MDP) for teammate-tracking learning in the MAS environment. With a state-transition probability model built by Gaussian regression, the Q-values of a generalized sample set are solved by online dynamic programming, and Gaussian regression over the discrete sample set is then used to build generalized models of the value function and the policy. Simulations on a continuous-space multi-cart-pole control system show that MAS MBRL-CPT enables the learning agent to acquire adaptive cooperative policies without knowing the system dynamics or the teammates' policies, with high learning efficiency and strong generalization.
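A small sketch of the Gaussian-regression transition model mentioned in the abstract, assuming scikit-learn's GaussianProcessRegressor with one independent GP per state dimension; the feature layout and kernel are assumptions, and the paper's reduced-dimension Q-learning on top of the model is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_transition_model(states, actions, next_states):
    """Fit a Gaussian-process model of the state transition s' = f(s, a).

    states:      array (N, state_dim)
    actions:     array (N, action_dim)
    next_states: array (N, state_dim)
    One independent GP per output dimension, RBF kernel.
    """
    X = np.hstack([states, actions])
    models = []
    for d in range(next_states.shape[1]):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
        gp.fit(X, next_states[:, d])
        models.append(gp)
    return models

def predict_next_state(models, state, action):
    x = np.hstack([state, action]).reshape(1, -1)
    return np.array([gp.predict(x)[0] for gp in models])

# Toy data for illustration only.
S = np.random.randn(50, 2); A = np.random.randn(50, 1); S2 = S + 0.1 * A
models = fit_transition_model(S, A, S2)
print(predict_next_state(models, S[0], A[0]))
```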

8.
李保罗  蔡明钰  阚震 《控制与决策》2023,38(7):1835-1844
To meet the need for robots executing complex tasks in dynamic, uncertain environments, this paper proposes a model-free safe reinforcement learning algorithm guided by linear temporal logic (LTL) that maximizes the probability of task completion while keeping the learning process safe. First, taking the uncertainties in the environment into account, a Markov decision process (MDP) is constructed; the agent's complex task is specified in LTL and converted into a transition-based limit-deterministic generalized Büchi automaton (tLDGBA) with multiple accepting sets, and an accepting frontier function is used to build a constrained tLDGBA (ctLDGBA) that records the accepting sets still to be visited. Second, a product MDP is constructed for reinforcement learning to search for the optimal policy. Finally, a safety game is built from the LTL safety specification and the MDP's observation function, and a safety shield designed from this game keeps the system safe during learning. A rigorous analysis proves that the proposed algorithm obtains an optimal policy that maximizes the probability of completing the LTL task. Simulation results verify the effectiveness of the LTL-guided safe reinforcement learning algorithm.

9.
This paper proposes model-free deep inverse reinforcement learning to find nonlinear reward function structures. We formulate inverse reinforcement learning as a problem of density ratio estimation, and show that the log of the ratio between an optimal state transition and a baseline one is given by a part of the reward and the difference of the value functions under the framework of linearly solvable Markov decision processes. The logarithm of the density ratio is efficiently calculated by binomial logistic regression, whose classifier is constructed from the reward and the state value function. The classifier tries to discriminate between samples drawn from the optimal state transition probability and those from the baseline one. Then, the estimated state value function is used to initialize part of the deep neural networks for forward reinforcement learning. The proposed deep forward and inverse reinforcement learning is applied to two benchmark games: Atari 2600 and Reversi. Simulation results show that our method reaches the best performance substantially faster than the standard combination of forward and inverse reinforcement learning as well as behavior cloning.
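The density-ratio trick described here can be illustrated with plain logistic regression: label optimal-transition samples 1 and baseline samples 0, and (for balanced classes) the classifier's logit approximates the log density ratio. The sketch below uses a generic linear classifier rather than the paper's classifier structured from the reward and value function.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_log_density_ratio(optimal_feats, baseline_feats):
    """Estimate log p_opt(x) / p_base(x) by logistic regression.

    Samples from the optimal transition distribution are labeled 1 and
    samples from the baseline are labeled 0; for balanced classes the
    classifier's logit approximates the log density ratio.
    """
    X = np.vstack([optimal_feats, baseline_feats])
    y = np.concatenate([np.ones(len(optimal_feats)), np.zeros(len(baseline_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return lambda feats: clf.decision_function(feats)   # logit = estimated log ratio

# Toy 1-D example for illustration only: two Gaussians with different means.
opt = np.random.normal(1.0, 1.0, size=(500, 1))
base = np.random.normal(0.0, 1.0, size=(500, 1))
log_ratio = estimate_log_density_ratio(opt, base)
print(log_ratio(np.array([[0.5]])))
```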

10.
Human-Robot Collaboration (HRC) presents an opportunity to improve the efficiency of manufacturing processes. However, the existing task planning approaches for HRC are still limited in many ways, e.g., co-robot encoding must rely on experts' knowledge, and real-time task scheduling is applicable only within small state-action spaces or simplified problem settings. In this paper, the HRC assembly working process is formulated as a novel chessboard setting, in which the selection of a chess piece move is used as an analogy for the decision making of both humans and robots in the HRC assembly working process. To optimize the completion time, a Markov game model is considered, which takes the task structure and the agent status as the state input and the overall completion time as the reward. Without experts' knowledge, this game model is capable of seeking a correlated equilibrium policy among agents, with convergence, in making real-time decisions facing a dynamic environment. To improve the efficiency of finding an optimal task-scheduling policy, a deep-Q-network (DQN) based multi-agent reinforcement learning (MARL) method is applied and compared with Nash-Q learning, dynamic programming, and the DQN-based single-agent reinforcement learning method. A height-adjustable desk assembly is used as a case study to demonstrate the effectiveness of the proposed algorithm with different numbers of tasks and agents.

11.
Monte-Carlo tree search for Bayesian reinforcement learning
Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP) to provide a principled framework for optimally balancing exploitation and exploration. Then, a POMDP solver can be used to solve the problem. If the prior distribution over the environment’s dynamics is a product of Dirichlet distributions, the POMDP’s optimal value function can be represented using a set of multivariate polynomials. Unfortunately, the size of the polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs, to solve the Bayesian reinforcement learning problem online. We will show that such an algorithm successfully searches for a near-optimal policy. In addition, we examine the use of a parameter tying method to keep the model search space small, and propose the use of nested mixture of tied models to increase robustness of the method when our prior information does not allow us to specify the structure of tied models exactly. Experiments show that the proposed methods substantially improve scalability of current Bayesian reinforcement learning methods.
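A minimal sketch of the Dirichlet dynamics posterior assumed by such Bayesian RL methods: counts are updated from observed transitions, and a complete transition model can be drawn from the posterior, as in root-sampling planners. The MCTS itself and the parameter-tying scheme are not shown; sizes are illustrative.

```python
import numpy as np

class DirichletDynamics:
    """Dirichlet posterior over transition probabilities, one per (s, a)."""

    def __init__(self, n_states, n_actions, prior=1.0):
        self.counts = np.full((n_states, n_actions, n_states), prior)

    def update(self, s, a, s_next):
        """Bayesian update: add one observed transition."""
        self.counts[s, a, s_next] += 1.0

    def sample_model(self):
        """Draw one complete transition model from the posterior
        (as used for root sampling in Bayes-adaptive planners)."""
        rng = np.random.default_rng()
        P = np.empty_like(self.counts)
        for s in range(self.counts.shape[0]):
            for a in range(self.counts.shape[1]):
                P[s, a] = rng.dirichlet(self.counts[s, a])
        return P

# Toy usage for illustration only.
dyn = DirichletDynamics(n_states=4, n_actions=2)
dyn.update(0, 1, 3)
P = dyn.sample_model()   # P[s, a] is a sampled next-state distribution
print(P[0, 1])
```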

12.
A reinforcement learning agent solves decision problems by learning an optimal policy that maps states to actions, improving its behavior through trial-and-error interaction with the environment. The Markov decision process (MDP) model is the standard framework for reinforcement learning problems, and dynamic programming provides policy-dependent value-function learning algorithms for agents in Markovian environments. However, because the agent must store the entire value function during learning, the memory requirement becomes enormous as the state space grows. This paper proposes a forgetting algorithm for reinforcement learning based on dynamic programming: by introducing basic principles of forgetting from memory psychology into value-function learning, it derives a practical dynamic-programming method for reinforcement learning problems, the Forget-DP algorithm.
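The Forget-DP mechanism itself is not specified in the abstract; the sketch below shows only the standard dynamic-programming baseline it builds on, plain value iteration for a finite MDP, with a toy two-state example.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Standard dynamic-programming value iteration for a finite MDP.

    P: transition probabilities, shape (n_states, n_actions, n_states)
    R: expected immediate reward,  shape (n_states, n_actions)
    Returns the optimal value function and a greedy policy.
    """
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_s' P(s' | s, a) V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q.argmax(axis=1)

# Tiny 2-state, 2-action MDP for illustration only.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, policy = value_iteration(P, R)
print(V, policy)
```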

13.
郭锐  彭军  吴敏 《计算机工程与应用》2005,41(13):36-38,146
Reinforcement learning is a branch of machine learning that improves a policy through interaction with the environment; its online and adaptive nature makes it a powerful tool for policy optimization. Multi-agent systems are an active research topic in artificial intelligence, and research on multi-agent learning must be grounded in a model of the system's environment. Because multiple agents influence one another, a multi-agent system is highly complex and its environment is a non-deterministic Markov model, so directly applying reinforcement learning techniques based on the single-agent Markov model is inappropriate. Based on independent learning among agents, this paper proposes an improved multi-agent Q-learning algorithm suited to non-deterministic Markov environments and studies its application to the multi-agent RoboCup system. Experiments demonstrate the effectiveness and generalization ability of the technique. Finally, directions for further research on multi-agent reinforcement learning are briefly outlined.

14.
This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.

15.
Taking resource allocation in UAV networks as the object of study, this paper investigates a reinforcement-learning-based dynamic time-slot allocation scheme for multi-UAV networks; allocating slot resources sensibly is important for improving resource utilization in such networks. For the dynamic slot-allocation problem, a multi-UAV network slot-allocation model is built from the scheduling constraints, and a slot-allocation scheme based on the proximal policy optimization (PPO) reinforcement learning algorithm is proposed. The environment is mapped for the reinforcement learning algorithm, and a Markov decision process (MDP) model is built to match the algorithm's interface. The model is trained in the gym simulation environment to validate the proposed slot-allocation scheme. Simulation results show that the PPO-based scheme allocates slots efficiently in a multi-UAV network and improves channel utilization, and that the training time can be shortened appropriately according to practical needs while still obtaining a near-optimal allocation.
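A toy sketch of how such a slot-allocation problem might be wrapped as a gym environment for PPO training; the state, reward, and dynamics here are invented for illustration, and the classic gym API is assumed (the newer gymnasium API changes the reset/step signatures). An off-the-shelf PPO implementation such as stable-baselines3 could then be trained on an environment of this kind.

```python
import numpy as np
import gym
from gym import spaces

class SlotAllocationEnv(gym.Env):
    """Toy multi-UAV time-slot allocation environment (illustrative only).

    State: per-UAV queue lengths. Action: which UAV gets the next slot.
    Reward: packets served, minus a penalty for leaving queues backed up.
    """

    def __init__(self, n_uavs=4, horizon=50):
        super().__init__()
        self.n_uavs, self.horizon = n_uavs, horizon
        self.action_space = spaces.Discrete(n_uavs)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(n_uavs,), dtype=np.float32)

    def reset(self):
        self.t = 0
        self.queues = np.random.randint(0, 5, self.n_uavs).astype(np.float32)
        return self.queues.copy()

    def step(self, action):
        served = min(float(self.queues[action]), 1.0)          # one packet per slot
        self.queues[action] -= served
        self.queues += np.random.poisson(0.3, self.n_uavs)     # new arrivals
        self.t += 1
        reward = served - 0.01 * float(self.queues.sum())
        done = self.t >= self.horizon
        return self.queues.copy(), reward, done, {}
```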

16.
A learning controller design method for dual-wheel-driven mobile robots
This paper proposes a reinforcement-learning-based path-following control method for dual-wheel-driven mobile robots. The design of the robot's motion controller is modeled as a Markov decision process, and the kernel-based least-squares policy iteration (KLSPI) algorithm is used to optimize the controller parameters through self-learning. Unlike traditional tabular and neural-network-based reinforcement learning methods, KLSPI applies kernel methods to feature selection and value-function approximation during policy evaluation, which improves generalization and learning efficiency. Simulation results show that the method obtains an optimized path-following control policy within a small number of iterations, which favors its adoption in practical applications.
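KLSPI's policy-evaluation step is a kernelized least-squares temporal-difference solve; the sketch below shows the plain linear-feature LSTD-Q core of LSPI, with the paper's kernel-based feature construction replaced by hand-written features purely as an assumption for illustration. Full LSPI then alternates this evaluation step with greedy policy improvement.

```python
import numpy as np

def lstdq(samples, featurize, policy, n_features, gamma=0.95, reg=1e-3):
    """Least-squares temporal-difference policy evaluation (the core of LSPI).

    samples:   list of (s, a, r, s_next) transitions
    featurize: maps (state, action) to a feature vector of length n_features
    policy:    maps a state to the action the current policy would take
    Solves A w = b for the Q-function weights of the given policy.
    """
    A = reg * np.eye(n_features)
    b = np.zeros(n_features)
    for s, a, r, s_next in samples:
        phi = featurize(s, a)
        phi_next = featurize(s_next, policy(s_next))
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * r
    return np.linalg.solve(A, b)

# Toy usage for illustration only: 1-D state, 2 actions, polynomial features.
def featurize(s, a):
    onehot = np.zeros(2); onehot[a] = 1.0
    return np.concatenate([onehot * s, onehot * s**2, onehot])

samples = [(0.5, 0, 1.0, 0.6), (0.6, 1, 0.0, 0.4)]
w = lstdq(samples, featurize, policy=lambda s: 0, n_features=6)
print(w)
```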

17.
This paper presents and compares three model-based reinforcement learning schemes for admission policy with handoff prioritization in mobile communication networks. The goal is to reduce handoff failures while making efficient use of the wireless network resources. A performance measure is formed as a weighted linear function of the blocking probability of new connection requests and the handoff failure probability. Then, the problem is formulated as a semi-Markov decision process with an average cost criterion, and a simulation-based learning algorithm is developed to approximate the optimal control policy. The proposed schemes are driven by a dynamic model estimated simultaneously while learning the control policy using samples generated from direct interactions with the network. Extensive simulations are provided to assess the effectiveness of the algorithm and to compare it with some well-known policies under a variety of traffic conditions.

18.
Conventional robot control schemes are basically model-based methods. However, exact modeling of robot dynamics poses considerable problems and faces various uncertainties in task execution. This paper proposes a reinforcement learning control approach for overcoming such drawbacks. An artificial neural network (ANN) serves as the learning structure, and a stochastic real-valued (SRV) unit is applied as the learning method. Initially, force tracking control of a two-link robot arm is simulated to verify the control design. The simulation results confirm that even without information related to the robot dynamic model and environment states, operation rules for simultaneously controlling force and velocity are achievable by repetitive exploration. However, acceptable performance has so far demanded many learning iterations, and the learning speed has proved too slow for practical applications. The approach herein, therefore, improves the tracking performance by combining a conventional controller with a reinforcement learning strategy. Experimental results demonstrate improved trajectory tracking performance of a two-link direct-drive robot manipulator using the proposed method.

19.
In this paper we consider the problem of reinforcement learning in a dynamically changing environment. In this context, we study the problem of adaptive control of finite-state Markov chains with a finite number of controls. The transition and payoff structures are unknown. The objective is to find an optimal policy which maximizes the expected total discounted payoff over the infinite horizon. A stochastic neural network model is suggested for the controller. The parameters of the neural net, which determine a random control strategy, are updated at each instant using a simple learning scheme. This learning scheme involves estimation of some relevant parameters using an adaptive critic. It is proved that the controller asymptotically chooses an optimal action in each state of the Markov chain with a high probability.

20.
Partially observable Markov decision processes (POMDPs) provide a mathematical framework for agent planning under stochastic and partially observable environments. The classic Bayesian optimal solution can be obtained by transforming the problem into a Markov decision process (MDP) using belief states. However, because the belief state space is continuous and multi-dimensional, the problem is highly intractable. Many practical heuristic-based methods have been proposed, but most of them require a complete POMDP model of the environment, which is not always practical. This article introduces a modified memory-based reinforcement learning algorithm called modified U-Tree that is capable of learning from raw sensor experiences with minimum prior knowledge. This article describes an enhancement of the original U-Tree’s state generation process to make the generated model more compact, and also proposes a modification of the statistical test for reward estimation, which allows the algorithm to be benchmarked against some traditional model-based algorithms with a set of well-known POMDP problems.
