Similar Documents
20 similar documents found (search time: 31 ms).
1.
Although the combination of deep learning and reinforcement learning has enabled artificial intelligence to achieve superhuman performance in board games and video games, the real-time strategy game StarCraft, with its enormous state and action spaces, remains a formidable challenge platform for AI researchers. In particular, the baseline agents that DeepMind trained on the StarCraft II mini-games with the classic deep reinforcement learning algorithm A3C still fall well short of ordinary amateur players. To narrow this gap, this paper proposes a state-attention A3C algorithm that adopts a simplified network structure and couples an attention mechanism with the reinforcement learning reward. On individual StarCraft mini-games, the resulting agent attains the top score while using fewer feature layers, exceeding DeepMind's baseline agent by 71 points.

2.
Research on Multi-Agent Reinforcement Learning Models and Algorithms Based on Markov Games
In an MDP, a single agent can find the optimal solution to a problem through reinforcement learning, but in multi-agent systems the MDP model no longer applies, and the minimax-Q algorithm can only handle MAS learning problems modeled as zero-sum games. This paper adopts non-zero-sum Markov games as the learning framework for multi-agent systems and proposes a meta-game reinforcement learning model together with a meta-game Q algorithm. It is proved that the meta-game Q algorithm converges to the meta-game optimal solution of non-zero-sum Markov games.
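For reference, the zero-sum minimax-Q step that this abstract contrasts against computes the maximin mixed policy of a one-state payoff matrix by linear programming. The sketch below is a generic illustration of that step, not the paper's meta-game Q algorithm; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q_s):
    """Solve max_pi min_o sum_a pi(a) * Q_s[a, o] by linear programming.

    Q_s: (n_actions, n_opponent_actions) Q-values for one state.
    Returns (value, maximin mixed policy). Illustrative sketch of the
    zero-sum minimax-Q value step, not the paper's meta-game algorithm.
    """
    n_a, n_o = Q_s.shape
    # Variables: [pi_1 .. pi_n, v]; maximize v  <=>  minimize -v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o: v - sum_a pi(a) * Q_s[a, o] <= 0.
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # Probabilities sum to one; v is unconstrained in the equality row.
    A_eq = np.ones((1, n_a + 1)); A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0, 1)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]
```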

3.
Learning to act in a multiagent environment is a difficult problem since the normal definition of an optimal policy no longer applies. The optimal policy at any moment depends on the policies of the other agents, creating a situation of learning a moving target. Previous learning algorithms have one of two shortcomings depending on their approach: they either converge to a policy that may not be optimal against the specific opponents' policies, or they may not converge at all. In this article we examine this learning problem in the framework of stochastic games. We look at a number of previous learning algorithms and show how each fails one of the above criteria. We then contribute a new reinforcement learning technique using a variable learning rate to overcome these shortcomings. Specifically, we introduce the WoLF principle, "Win or Learn Fast", for varying the learning rate. We examine this technique theoretically, proving convergence in self-play on a restricted class of iterated matrix games. We also present empirical results on a variety of more general stochastic games, in situations of self-play and otherwise, demonstrating the wide applicability of this method.
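A minimal single-state sketch in the spirit of the WoLF policy-hill-climbing variant from this line of work: the policy climbs toward the greedy action of Q, taking a small step when winning (the current policy outperforms the average policy) and a large step when losing. Parameter values and names below are illustrative.

```python
import numpy as np

def wolf_phc_step(Q, pi, pi_avg, n_updates, delta_win=0.01, delta_lose=0.04):
    """One WoLF policy-hill-climbing update for a single-state game (sketch).

    Q:      (n_actions,) action values learned by ordinary Q-learning.
    pi:     current mixed policy; pi_avg: running average policy.
    Winning = expected value of pi under Q exceeds that of pi_avg;
    win -> learn slowly, lose -> learn fast (the WoLF principle).
    """
    n = len(Q)
    n_updates += 1
    pi_avg += (pi - pi_avg) / n_updates           # update average policy
    delta = delta_win if pi @ Q > pi_avg @ Q else delta_lose
    best = int(np.argmax(Q))
    step = delta / (n - 1)
    moved = 0.0
    for a in range(n):                            # shift mass toward greedy action
        if a != best:
            d = min(pi[a], step)
            pi[a] -= d
            moved += d
    pi[best] += moved
    return pi, pi_avg, n_updates
```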

4.
Solving problems with continuous action spaces is a current research focus, and a difficulty, in reinforcement learning. Traditional reinforcement learning algorithms handle such problems by discretizing the continuous action space using prior information and then solving for the optimal policy; in many practical applications, however, such prior information is unavailable, so performance degrades or the algorithms fail outright. To address this, a least-squares actor-critic method (LSAC) is proposed: function approximators represent both the value function and the policy, the least-squares method solves online and dynamically for the approximate value-function and policy parameters, and the approximate value function serves as the critic that guides the solution of the policy parameters. LSAC was applied to two classic continuous-action benchmarks, the cart-pole balancing problem and the mountain car problem, and compared with the Cacla (continuous actor-critic learning automaton) and eNAC (episodic natural actor-critic) algorithms. The results show that LSAC solves continuous-action problems effectively and delivers superior performance.
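To illustrate the least-squares flavor of the critic: with linear features phi(s), the TD fixed point can be solved in closed form as a linear system rather than by stochastic gradient descent. The sketch below is generic LSTD under that assumption; the paper's exact LSAC updates and actor coupling are not reproduced.

```python
import numpy as np

def lstd_critic(transitions, phi, n_features, gamma=0.99, reg=1e-6):
    """Least-squares TD: solve for linear value-function weights in closed form.

    transitions: iterable of (s, r, s_next); phi: feature map s -> R^n.
    Solves A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T and
    b = sum phi(s) * r. Generic LSTD sketch, not the paper's exact LSAC.
    """
    A = reg * np.eye(n_features)      # small ridge term for invertibility
    b = np.zeros(n_features)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += f * r
    return np.linalg.solve(A, b)      # V(s) ~= w @ phi(s)
```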

5.
Research on Gradient Algorithms for Neural-Network-Based Reinforcement Learning
徐昕  贺汉根 《计算机学报》2003,26(2):227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses a multilayer feedforward neural network for value-function approximation. The algorithm employs a near-greedy, continuously differentiable Boltzmann action-selection policy and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a non-stationary behavior policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analyzed theoretically, and simulation studies on the Mountain-Car learning control problem further verify the algorithm's learning efficiency and generalization performance.
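A minimal numpy sketch of the two ingredients the abstract names: a differentiable Boltzmann (softmax) action-selection policy over Q-values, and a residual-gradient step on the squared Bellman residual. The paper uses a multilayer feedforward network; a linear-in-features Q is assumed here only to keep the sketch short, and all names are illustrative.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Near-greedy, continuously differentiable action selection."""
    z = q_values / temperature
    z -= z.max()                        # numerical stability
    p = np.exp(z)
    return p / p.sum()

def residual_gradient_step(w, phi_sa, r, phi_next_sa, gamma=0.99, lr=0.01):
    """One gradient step on the squared Bellman residual
    (r + gamma * Q(s',a') - Q(s,a))^2 with linear Q(s,a) = w @ phi(s,a).
    Residual-gradient form: differentiate through both Q terms."""
    delta = r + gamma * w @ phi_next_sa - w @ phi_sa
    grad = 2 * delta * (gamma * phi_next_sa - phi_sa)
    return w - lr * grad
```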

6.
Learning automata (LA) were recently shown to be valuable tools for designing multi-agent reinforcement learning algorithms and are able to control stochastic games. In this paper, the concepts of stigmergy and entropy are imported into learning-automata-based multi-agent systems with the purpose of providing a simple framework for interaction and coordination in multi-agent systems and speeding up the learning process. The multi-agent system considered in this paper is designed to find optimal policies in Markov games. We consider several dummy agents that walk around the states of the environment, activate the local learning automata, and carry information so that the learning automata involved can update their local state. The entropy of the probability vector of the next state's learning automata is used to determine reward or penalty for the actions of the learning automata. The experimental results show that, in terms of the speed of reaching the optimal policy, the proposed algorithm has better learning performance than other learning algorithms.

7.
In this paper we present an online adaptive control algorithm based on policy-iteration reinforcement learning techniques to solve the continuous-time (CT) multi-player non-zero-sum (NZS) game with infinite horizon for linear and nonlinear systems. NZS games allow players to have a cooperative team component and an individual selfish component of strategy. The adaptive algorithm learns online the solution of coupled Riccati equations and coupled Hamilton–Jacobi equations for linear and nonlinear systems, respectively. This adaptive control method finds real-time approximations of the optimal value and the NZS Nash equilibrium, while also guaranteeing closed-loop stability. The optimal-adaptive algorithm is implemented as a separate actor/critic parametric network approximator structure for every player and involves simultaneous continuous-time adaptation of the actor/critic networks. A persistence-of-excitation condition is shown to guarantee convergence of every critic to the actual optimal value function for that player. A detailed mathematical analysis is done for 2-player NZS games. Novel tuning algorithms are given for the actor/critic networks. The convergence to the Nash equilibrium is proven and stability of the system is also guaranteed. This provides optimal adaptive control solutions for both non-zero-sum games and their special case, the zero-sum games. Simulation examples show the effectiveness of the new algorithm.

8.
A Regret-Based Reinforcement Learning Model for Multi-Agent Conflict Games
肖正  张世永 《软件学报》2008,19(11):2957-2967
For conflict games, a rational and conservative action-selection method is studied: minimizing the agent's worst-case regret. Under this method, the agent's current action policy minimizes the loss it could incur in the future, and a Nash-equilibrium mixed strategy can be obtained without any information about the other agents. Based on regret values, a reinforcement learning model and an algorithmic implementation are proposed for conflict games in complex multi-agent environments. The model introduces a cross-entropy distance to build the belief-update process, further refining action selection during conflict games. The convergence of the algorithm is verified on a repeated Markov game model, and the relationship between beliefs and optimal policies is analyzed. Moreover, compared with a Q-learning extension for MMDPs (multi-agent Markov decision processes), the algorithm greatly reduces the number of conflicts, improves the coordination of the agents' actions, raises system performance, and helps keep the system stable.
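The paper's minimax-regret model and belief updates are more elaborate than can be shown here; as a standard illustration of regret-driven action selection, the sketch below implements classic regret matching, where each action is played with probability proportional to its positive cumulative regret. All names are illustrative.

```python
import numpy as np

class RegretMatcher:
    """Standard regret-matching action selection (illustrative only;
    the paper's minimax-regret model with cross-entropy belief updates
    is a different, more elaborate construction)."""

    def __init__(self, n_actions):
        self.regret = np.zeros(n_actions)

    def policy(self):
        """Play each action proportionally to its positive cumulative regret."""
        pos = np.maximum(self.regret, 0.0)
        if pos.sum() == 0.0:
            return np.full(len(self.regret), 1.0 / len(self.regret))
        return pos / pos.sum()

    def update(self, payoffs, action_played):
        """payoffs: counterfactual payoff of each action this round.
        Accumulate the regret of not having played each action instead."""
        self.regret += payoffs - payoffs[action_played]
```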

9.
In this paper, we treat optimization problems as a kind of reinforcement learning problem, regarding an optimization procedure for searching for an optimal solution as a reinforcement learning procedure for finding the best policy to maximize the expected rewards. This viewpoint motivated us to propose a Q-learning-based swarm optimization (QSO) algorithm. The proposed QSO algorithm is a population-based optimization algorithm which integrates the essential properties of Q-learning and particle swarm optimization. The optimization procedure of the QSO algorithm proceeds as each individual imitates the behavior of the global best one in the swarm. The best individual is chosen based on its accumulated performance instead of its momentary performance at each evaluation. Two data sets, a set of benchmark functions and a real-world problem (the economic dispatch (ED) problem for power systems), were used to test the performance of the proposed QSO algorithm. The simulation results on the benchmark functions show that the proposed QSO algorithm is comparable to or even outperforms several existing optimization algorithms. As for the ED problem, the proposed QSO algorithm has found solutions better than all previously found solutions.
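A schematic reading of the accumulated-performance idea, heavily hedged: each individual's quality is a running, Q-style average of its fitness, and the swarm imitates the individual with the best accumulated value rather than the best momentary one. The move rule and parameter names below are placeholders, not the paper's exact operators.

```python
import numpy as np

def qso_step(positions, q_values, fitness_fn, alpha=0.1, step=0.2, rng=None):
    """One schematic QSO-style iteration (illustrative placeholders only).

    q_values accumulate each individual's performance, Q <- Q + alpha*(f - Q),
    so the global best is chosen by accumulated rather than momentary fitness.
    """
    rng = rng or np.random.default_rng()
    f = np.array([fitness_fn(x) for x in positions])
    q_values += alpha * (f - q_values)            # accumulated performance
    gbest = positions[np.argmax(q_values)]        # best by accumulated value
    noise = rng.normal(scale=0.05, size=positions.shape)
    return positions + step * (gbest - positions) + noise, q_values
```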

10.
Because action-policy learning in reinforcement learning is time-consuming, a heuristic reinforcement learning method based on state backtracking is proposed. Repeated states in the reinforcement learning process are analyzed, and by comparing the selection policies for repeated actions during state backtracking, a cost function is introduced to describe the importance of repeated actions. Combining action reward with action cost, a new definition of the heuristic function is proposed. While emphasizing action importance to speed up learning, the heuristic function uses the cost function to price action selection and thereby cut unnecessary exploration, smoothly improving learning efficiency. The cost-function-based action-selection policy is proved correct. Two simulation scenarios are built and the algorithm is applied to simulated robot path planning. The results show that the state-backtracking heuristic method balances the reward obtained against the cost paid and effectively accelerates the convergence of Q-learning.

11.
陈浩  李嘉祥  黄健  王菖  刘权  张中杰 《控制与决策》2023,38(11):3209-3218
When facing complex tasks such as high-dimensional continuous state spaces or sparse rewards, learning an optimal policy from scratch with deep reinforcement learning alone is very difficult; how to represent existing knowledge in a form mutually intelligible to humans and learning agents, and to use it to accelerate policy convergence, remains an open problem. To address this, a deep reinforcement learning framework incorporating a cognitive behavior model is proposed: domain prior knowledge is modeled as a belief-desire-intention (BDI) cognitive behavior model that guides the agent's policy learning. On this framework, a deep Q-learning algorithm and a proximal policy optimization algorithm incorporating the cognitive behavior model are proposed, and the way the model guides the agent's policy updates is designed quantitatively. Finally, experiments in standard gym environments and an air-combat maneuver decision environment verify that the proposed algorithms can exploit the cognitive behavior model to accelerate policy learning efficiently and to mitigate the effects of huge state spaces and sparse environment rewards.

12.
We present a novel and uniform formulation of the problem of reinforcement learning against bounded-memory adaptive adversaries in repeated games, and the methodologies to accomplish learning in this novel framework. First we delineate a novel strategic definition of best response that optimises rewards over multiple steps, as opposed to the notion of tactical best response in game theory. We show that the problem of learning a strategic best response reduces to that of learning an optimal policy in a Markov Decision Process (MDP). We deal with both finite and infinite horizon versions of this problem. We adapt an existing Monte Carlo based algorithm for learning optimal policies in such MDPs over finite horizon, in polynomial time. We show that this new efficient algorithm can obtain higher average rewards than a previously known efficient algorithm against some opponents in the contract game. Though this improvement comes at the cost of increased domain knowledge, simple experiments in the Prisoner's Dilemma and coordination games show that even when no extra domain knowledge (besides a known upper bound on the opponent's memory size) is assumed, the error can still be small. We also experiment with a general infinite-horizon learner (using function approximation to tackle the complexity of history space) against a greedy bounded-memory opponent and show that while it can create and exploit opportunities of mutual cooperation in the Prisoner's Dilemma game, it is cautious enough to ensure minimax payoffs in the Rock–Scissors–Paper game.

13.
Repeated play in games by simple adaptive agents is investigated. The agents use Q-learning, a special form of reinforcement learning, to direct learning of behavioral strategies in a number of 2×2 games. The agents are able to effectively maximize the total wealth extracted, which often leads to Pareto-optimal outcomes. When the reward signals are sufficiently clear, Pareto-optimal outcomes will largely be achieved. The effect can select Pareto outcomes that are not Nash equilibria, and it can select Pareto-optimal outcomes among Nash equilibria. Acknowledgement: This material is based upon work supported by, or in part by, NSF grant number SES-9709548. We wish to thank an anonymous referee for a number of very helpful suggestions.
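A minimal sketch of this kind of setup: two independent Q-learners repeatedly playing Prisoner's Dilemma, each conditioning on the previous joint action. Depending on parameters and reward clarity, such agents can settle into the Pareto-optimal cooperate/cooperate cell even though it is not a Nash equilibrium of the one-shot game. The payoffs, state encoding, and parameter values here are illustrative, not the paper's.

```python
import numpy as np

# Prisoner's Dilemma payoffs; action 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

rng = np.random.default_rng(0)
n_states = 4                                 # state = previous joint action
Q = [np.zeros((n_states, 2)) for _ in range(2)]
state, eps, alpha, gamma = 0, 0.1, 0.1, 0.9

for t in range(50_000):
    acts = []
    for i in range(2):                       # epsilon-greedy action selection
        if rng.random() < eps:
            acts.append(int(rng.integers(2)))
        else:
            acts.append(int(np.argmax(Q[i][state])))
    rewards = PAYOFF[(acts[0], acts[1])]
    next_state = 2 * acts[0] + acts[1]
    for i in range(2):                       # independent Q-learning updates
        target = rewards[i] + gamma * Q[i][next_state].max()
        Q[i][state, acts[i]] += alpha * (target - Q[i][state, acts[i]])
    state = next_state
```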

14.
The multi-depot vehicle routing problem (MDVRP) is a widely applied problem model in today's supply chains. Existing algorithms are mostly heuristic: they are slow and cannot guarantee solution quality, so fast and effective solution algorithms have real academic and practical value. Taking the minimization of total vehicle route distance as the objective, a solution model based on multi-agent deep reinforcement learning is proposed. First, the multi-agent reinforcement learning formulation of the MDVRP is defined, including states, actions, rewards, and the state-transition function, so that the model can be trained with multi-agent reinforcement learning. Then, by defining node neighborhoods and a masking mechanism for the MDVRP, a policy network composed of multiple agent networks is designed on top of an attention mechanism and trained with a policy-gradient algorithm to obtain a model capable of fast solution. Next, a 2-opt local search strategy and a sampling search strategy are used to improve solution quality. Finally, simulations on problems of different sizes and comparisons with other algorithms verify that the proposed multi-agent deep reinforcement learning model, combined with the search strategies, quickly obtains high-quality solutions.

15.
Many different algorithms are currently applied to manipulator control, such as traditional adaptive PD control and fuzzy adaptive control, most of which require a mathematical model. There are also reinforcement-learning-based methods such as DQN (Deep Q Network) and Sarsa, but in continuous high-dimensional action spaces these algorithms suffer from low learning efficiency, difficulty in designing reward signals, and poor control performance. This paper studies a manipulator grasping application for arbitrary positions based on the PPO (proximal policy optimization) algorithm and compares the experimental data with the actor-critic algorithm, verifying that PPO achieves good control performance with high and stable learning efficiency.
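The core of PPO that distinguishes it from vanilla policy gradients is the clipped surrogate objective; a minimal numpy rendering is below. The robot-arm setup, networks, and advantage estimation are omitted, and function and parameter names are illustrative.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate (to be *maximized*; negate for gradient descent).

    ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    Clipping keeps each update close to the data-collecting policy,
    which is what gives PPO its stable learning behavior.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```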

16.
郭锐  彭军  吴敏 《计算机工程与应用》2005,41(13):36-38,146
Reinforcement learning is a branch of machine learning that improves a policy through interaction with the environment; its online and adaptive learning characteristics make it a powerful tool for policy-optimization problems. Multi-agent systems are a research focus in artificial intelligence, and research on multi-agent learning techniques must be built on a model of the system environment. Because multiple agents are present and influence one another, a multi-agent system is highly complex and its environment follows a non-deterministic Markov model, so directly importing reinforcement learning techniques based on the deterministic Markov model into multi-agent systems is inappropriate. Based on independent learning mechanisms between agents, this paper proposes an improved multi-agent Q-learning algorithm suited to non-deterministic Markov environments and studies its application in the multi-agent system RoboCup. Experiments demonstrate the effectiveness and generalization ability of the technique, and directions for further research on multi-agent reinforcement learning are briefly given.

17.
The split-delivery vehicle routing problem (SDVRP) arises in a wide range of logistics and distribution scenarios and is of substantial research value: efficient SDVRP optimization algorithms raise vehicle load factors and cut distribution costs. To improve SDVRP solution efficiency, this paper proposes a deep reinforcement learning algorithm (REINFORCE) based on a residual graph convolutional network (RGCN) and multi-head attention that constructs feasible solution sequences step by step. First, from the reinforcement learning perspective, a Markov decision model is established for the SDVRP, defining the environment state of the sequence-prediction process, the agent's action space, and the state-transition function. Second, an encoder-decoder model is built to learn the node-selection policy: an encoder based on a residual graph convolutional network reconstructs the features of the depot and customer nodes, correlating inter-node connectivity in the distribution network with node features to obtain strongly differentiated feature embeddings; an attention-network decoder then fuses dynamically changing information such as remaining vehicle capacity and customer demand on top of the reconstructed embeddings to perform decoding, producing multiple feasible solutions per instance in each iteration. Finally, a REINFORCE algorithm with an average baseline is proposed to update the model parameters. Tests on datasets of different problem sizes, standard SDVRP benchmarks, and real JD Logistics distribution tasks verify the effectiveness of the proposed algorithm.

18.
A New Multi-Agent Q-Learning Algorithm
郭锐  吴敏  彭军  彭姣  曹卫华 《自动化学报》2007,33(4):367-372
For multi-agent systems in non-deterministic Markov environments, a new multi-agent Q-learning algorithm is proposed. The algorithm learns the other agents' behavior policies from statistics over joint actions and uses the full probability distribution of the agents' policy vectors to guarantee selection of the jointly optimal action. The convergence and learning performance of the algorithm are analyzed, and its application in the multi-agent system RoboCup further demonstrates its effectiveness and generalization ability.
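A schematic of the two ingredients this abstract names: Q-values kept over joint actions, and an empirical frequency-count model of the other agent's behavior whose full probability distribution weights the Q-values during action selection. This is a generic joint-action-learner sketch, not the paper's exact algorithm; all names are illustrative.

```python
import numpy as np

class JointActionLearner:
    """Generic joint-action learner sketch: Q over (own, other) actions,
    plus action-frequency counts that estimate the other agent's policy."""

    def __init__(self, n_states, n_own, n_other, alpha=0.1, gamma=0.95):
        self.Q = np.zeros((n_states, n_own, n_other))
        self.counts = np.ones((n_states, n_other))    # Laplace-smoothed counts
        self.alpha, self.gamma = alpha, gamma

    def act(self, s):
        belief = self.counts[s] / self.counts[s].sum()
        expected_q = self.Q[s] @ belief               # full-distribution weighting
        return int(np.argmax(expected_q))

    def update(self, s, a_own, a_other, r, s_next):
        self.counts[s, a_other] += 1                  # joint-action statistics
        belief_next = self.counts[s_next] / self.counts[s_next].sum()
        v_next = (self.Q[s_next] @ belief_next).max()
        td = r + self.gamma * v_next - self.Q[s, a_own, a_other]
        self.Q[s, a_own, a_other] += self.alpha * td
```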

19.
Penetration testing, an important means of assessing the security of networked systems, simulates real network attacks from the attacker's point of view to find a system's weak points, and automated penetration testing uses intelligent methods to automate the process and greatly reduce its cost. Attack-path discovery is the key technology in automated penetration testing, and how to discover attack paths in a network quickly and effectively has long drawn wide attention in the research community. Existing automated penetration-testing methods mostly perform attack-path discovery within a reinforcement learning framework, but they still suffer from sparse rewards and low learning efficiency, which slow convergence and keep attack-path discovery from meeting the strict timeliness requirements of penetration testing. This paper therefore proposes a hierarchical reinforcement learning algorithm with a potential-based heuristic reward-shaping function (HRL-HRSF). Exploiting the characteristics of penetration testing, the algorithm first derives, from prior knowledge of network attacks, a heuristic based on deep lateral penetration and uses it to design a potential-based heuristic reward-shaping function that gives the agent positive feedback during early exploration, effectively mitigating reward sparsity. The shaping function is then combined with hierarchical reinforcement learning, which not only shrinks the environment's state and action spaces but also greatly increases the reward feedback the agent receives during attack-path discovery, speeding up learning. Experiments show that HRL-HRSF is faster and more effective than hierarchical reinforcement learning without reward shaping, DQN, and DQN variants, and that as the network size and number of host vulnerabilities grow, HRL-HRSF maintains better learning efficiency, with good robustness and generalization.
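The property that makes potential-based shaping safe is the form F(s, s') = gamma * Phi(s') - Phi(s), which provably leaves the optimal policy unchanged (Ng et al., 1999). Below is a minimal sketch with an illustrative potential that rewards lateral-penetration depth; the potential function is a placeholder, not the paper's.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma*phi(s') - phi(s).
    This form is policy-invariant, so the dense feedback it adds against
    sparse rewards cannot change which attack policy is optimal."""
    return r + gamma * phi(s_next) - phi(s)

# Illustrative potential (placeholder, not the paper's): deeper lateral
# penetration into the target network is assigned higher potential, giving
# the agent positive early feedback long before the final goal host.
def depth_potential(state):
    return float(state.get("penetration_depth", 0))
```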

20.
This article proposes three novel time-varying policy iteration algorithms for the finite-horizon optimal control problem of continuous-time affine nonlinear systems. We first propose a model-based time-varying policy iteration algorithm. The method considers time-varying solutions to the Hamilton–Jacobi–Bellman equation for finite-horizon optimal control. Based on this algorithm, value-function approximation is applied to the Bellman equation by establishing neural networks with time-varying weights. A novel update law for the time-varying weights is put forward based on the idea of iterative learning control, which obtains optimal solutions more efficiently than previous works. Considering that system models may be unknown in real applications, we propose a partially model-free time-varying policy iteration algorithm that applies integral reinforcement learning to acquire the time-varying value function. Moreover, analysis of convergence, stability, and optimality is provided for every algorithm. Finally, simulations for different cases are given to verify the convenience and effectiveness of the proposed algorithms.
