Similar Documents
19 similar documents retrieved
1.
The purpose of a reinforcement learning system is, in general, to learn an optimal policy. In two-player games such as Othello, however, it is also important to acquire a penalty-avoiding policy, one that avoids losing the game. The penalty-avoiding rational policy making algorithm (PARP) is known to learn such policies, but applying PARP to large-scale problems leads to an explosion in the number of states. In this article we focus on Othello, a game with a huge state space, and introduce several ideas and heuristics that adapt PARP to it. We show that our learning player beats the well-known Othello program KITTY. This work was presented, in part, at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
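PARP's internals are not given in the abstract; the following is a minimal sketch of the core idea only — tracking state-action pairs observed to lead to a penalty (a lost game) and excluding them during action selection. All names here are hypothetical illustrations, not the authors' implementation.

```python
import random

# Minimal sketch of a penalty-avoiding action filter in the spirit of PARP
# (hypothetical illustration; not the authors' algorithm).
class PenaltyAvoidingAgent:
    def __init__(self, legal_actions):
        self.legal_actions = legal_actions  # callable: state -> list of legal actions
        self.penalty_pairs = set()          # (state, action) pairs known to lead to a loss
        self.q = {}                         # ordinary action values for remaining pairs

    def choose_action(self, state):
        # Exclude actions already identified as penalty-prone in this state.
        safe = [a for a in self.legal_actions(state)
                if (state, a) not in self.penalty_pairs]
        if not safe:                        # every action is penalty-prone: fall back
            return random.choice(self.legal_actions(state))
        # Greedy choice among the remaining safe actions.
        return max(safe, key=lambda a: self.q.get((state, a), 0.0))

    def observe_penalty(self, state, action):
        # Mark a pair as penalty-prone once it is observed to lead to a loss.
        self.penalty_pairs.add((state, action))
```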

2.
As an important branch of machine learning and artificial intelligence, multi-agent hierarchical reinforcement learning combines, in a general form, the cooperative ability of multiple agents with the decision-making ability of reinforcement learning. By decomposing a complex reinforcement learning problem into several subproblems that are solved separately, it can effectively mitigate the curse of dimensionality of the state space, making it a potential approach to intelligent decision-making in large-scale, complex settings. This paper first describes the main techniques involved in multi-agent hierarchical reinforcement learning, including reinforcement learning, semi-Markov decision processes, and multi-agent reinforcement learning. It then surveys, from the hierarchical perspective, the algorithmic principles and research status of four classes of multi-agent hierarchical reinforcement learning methods: option-based, hierarchies-of-abstract-machines-based, value-function-decomposition-based, and end-to-end methods. Finally, it reviews applications of multi-agent hierarchical reinforcement learning in robot control, game decision-making, and task planning.

3.
Recent Advances in Hierarchical Reinforcement Learning  (Total citations: 22; self-citations: 0; other citations: 22)
Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
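The temporally-extended activities reviewed here are commonly formalized as options. As a rough illustration (textbook formalism, not code from the paper), an option bundles an initiation set, an internal policy, and a termination condition, and executing one yields a multi-step semi-Markov transition; the environment interface below is a hypothetical assumption.

```python
import random
from dataclasses import dataclass
from typing import Any, Callable, Set

# Sketch of the "option" formalism for temporal abstraction
# (illustrative; field names and the env API are hypothetical).
@dataclass
class Option:
    initiation_set: Set[Any]             # states where the option may start
    policy: Callable[[Any], Any]         # internal policy: state -> primitive action
    termination: Callable[[Any], float]  # probability of terminating in a state

def run_option(env, state, option, max_steps=100):
    """Execute an option until its termination condition fires (one SMDP step)."""
    total_reward, steps = 0.0, 0
    while steps < max_steps and random.random() >= option.termination(state):
        state, reward = env.step(option.policy(state))  # hypothetical env API
        total_reward += reward
        steps += 1
    return state, total_reward, steps   # multi-step outcome of the SMDP transition
```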

4.
One of the difficulties encountered in applying reinforcement learning methods to real-world problems is their limited ability to cope with large-scale or continuous spaces. To address the curse-of-dimensionality problem that arises when continuous state or action spaces are discretized, a new fuzzy Actor-Critic reinforcement learning network (FACRLN) based on a fuzzy radial basis function (FRBF) neural network is proposed. FACRLN is realized as a four-layer FRBF neural network that simultaneously approximates both the action value function of the Actor and the state value function of the Critic. The Actor and Critic networks share the input, rule, and normalized layers of the FRBF network, which reduces the learning system's storage requirements and avoids recomputing the outputs of the rule units. Moreover, the FRBF network adaptively adjusts its structure and parameters with a novel self-organizing approach, according to the complexity of the task and the progress of learning, which keeps the network economically sized. Experimental studies on cart-pole balancing control illustrate the performance and applicability of the proposed FACRLN.
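The paper's exact network equations are not reproduced in the abstract; the following is a minimal numpy sketch of the shared-feature idea only — one set of normalized radial-basis units feeding both an actor head and a critic head — with all dimensions and names chosen for illustration.

```python
import numpy as np

# Minimal sketch of an actor-critic with shared radial-basis features,
# in the spirit of FACRLN (illustrative; not the paper's exact network).
class SharedRBFActorCritic:
    def __init__(self, n_units=10, state_dim=4, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = rng.normal(size=(n_units, state_dim))  # RBF centers
        self.widths = np.ones(n_units)                        # RBF widths
        self.w_actor = np.zeros(n_units)                      # actor head weights
        self.w_critic = np.zeros(n_units)                     # critic head weights

    def features(self, s):
        # Shared layers: Gaussian activations, then normalization
        # (as in a fuzzy inference network's rule/normalized layers).
        phi = np.exp(-np.sum((self.centers - s) ** 2, axis=1) / (2 * self.widths ** 2))
        return phi / (phi.sum() + 1e-8)

    def forward(self, s):
        phi = self.features(s)                # computed once, used by both heads
        action = np.tanh(self.w_actor @ phi)  # actor output
        value = self.w_critic @ phi           # critic's state-value estimate
        return action, value
```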

5.
Reinforcement learning improves a policy through trial-and-error interaction with the environment; its self-learning and online-learning characteristics have made it an important branch of machine learning research. Reinforcement learning methods, however, have long been plagued by the curse of dimensionality. In recent years, hierarchical reinforcement learning methods have achieved notable results in combating the curse of dimensionality and have gradually been extended to multi-agent systems. This paper reviews and analyzes the current research progress in this area, and discusses pressing open problems and future development trends.

6.
Research on Applying Genetic Algorithms to Escape Maneuver Strategies  (Total citations: 1; self-citations: 0; other citations: 1)
周锐, 陈宗基. 控制与决策, 2001, 16(4): 465-467
This paper analyzes an automatic learning method for sequential decision rules based on reinforcement learning principles and genetic algorithms. The credit-assignment problem for rules is discussed from the perspectives of planned payoff and rule activation degree, solving the problems of searching large state spaces and delayed evaluation and providing an effective way to handle complex decision processes. An aircraft escape maneuver strategy is implemented on this basis, and simulation results demonstrate the method's effectiveness.

7.
Human-Robot Collaboration (HRC) presents an opportunity to improve the efficiency of manufacturing processes. However, existing task planning approaches for HRC are still limited in many ways; e.g., co-robot encoding must rely on experts' knowledge, and real-time task scheduling is applicable only within small state-action spaces or simplified problem settings. In this paper, the HRC assembly process is formulated as a novel chessboard setting, in which selecting a chess move serves as an analogy for the decision making of both humans and robots during HRC assembly. To optimize completion time, a Markov game model is adopted that takes the task structure and agent status as the state input and the overall completion time as the reward. Without experts' knowledge, this game model can seek a correlated equilibrium policy among agents, with convergence, when making real-time decisions in a dynamic environment. To improve the efficiency of finding an optimal task-scheduling policy, a deep-Q-network (DQN) based multi-agent reinforcement learning (MARL) method is applied and compared with Nash-Q learning, dynamic programming, and a DQN-based single-agent reinforcement learning method. A height-adjustable desk assembly is used as a case study to demonstrate the effectiveness of the proposed algorithm with different numbers of tasks and agents.
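As a rough sketch of the setup (the paper's state encoding and reward are only summarized in the abstract), the scheduling Markov game can be framed as an environment whose state bundles the task structure and agent status, and whose reward penalizes elapsed time so that minimizing it minimizes completion time. Every name below is a hypothetical illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical skeleton of the scheduling Markov game described above:
# state = (task structure, agent status), reward = negative elapsed time.
@dataclass
class SchedulingGame:
    remaining_tasks: List[str]                                 # unfinished assembly tasks
    busy_until: Dict[str, int] = field(default_factory=dict)   # agent -> time it frees up
    clock: int = 0

    def state(self) -> Tuple:
        return (tuple(self.remaining_tasks), tuple(sorted(self.busy_until.items())))

    def step(self, joint_action: Dict[str, str], durations: Dict[str, int]):
        # Each idle agent claims a remaining task; time advances one tick.
        for agent, task in joint_action.items():
            if task in self.remaining_tasks and self.busy_until.get(agent, 0) <= self.clock:
                self.remaining_tasks.remove(task)
                self.busy_until[agent] = self.clock + durations[task]
        self.clock += 1
        done = not self.remaining_tasks
        reward = -1   # per-tick penalty: maximizing return minimizes completion time
        return self.state(), reward, done
```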

8.
Reinforcement learning is an important branch of machine learning and artificial intelligence and has received wide attention from society and industry in recent years. The main problem reinforcement learning algorithms must solve is how an agent learns a policy through direct interaction with the environment. However, as the dimensionality of the state space grows, traditional reinforcement learning methods often face the curse of dimensionality and struggle to learn well. Hierarchical reinforcement learning seeks to decompose a complex reinforcement learning problem into several subproblems that are solved separately, which can yield better results than solving the whole problem directly. It is a potential route to solving large-scale reinforcement learning problems, yet it has received relatively little attention. This paper introduces and reviews the major classes of hierarchical reinforcement learning methods.

9.
Reinforcement learning is a research hotspot in machine learning. It studies how an agent, through interaction with its environment, makes sequential decisions, optimizes its policy, and maximizes cumulative return. Reinforcement learning has great research value and application potential and is a key step toward artificial general intelligence. This paper surveys the research progress and trends of reinforcement learning algorithms and applications. It first introduces the basic principles of reinforcement learning, including Markov decision processes, value functions, and the exploration-exploitation problem. It then reviews classic reinforcement learning algorithms, including value-function-based methods, policy-search-based methods, and methods combining value functions with policy search, and surveys frontier research, focusing on multi-agent reinforcement learning and meta reinforcement learning. Finally, it reviews successful applications of reinforcement learning in competitive games, robot control, urban transportation, and business, and concludes with a summary and outlook.
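For reference, the value-function machinery that such surveys build on can be summarized by the standard Bellman optimality equations for an MDP (textbook notation, not taken from this particular paper):

```latex
% Standard MDP Bellman optimality equations (textbook form):
V^{*}(s) = \max_{a} \, \mathbb{E}\!\left[ r_{t+1} + \gamma \, V^{*}(s_{t+1}) \,\middle|\, s_t = s,\ a_t = a \right]
\qquad
Q^{*}(s,a) = \mathbb{E}\!\left[ r_{t+1} + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \,\middle|\, s_t = s,\ a_t = a \right]
```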

10.
A primary challenge of agent-based policy learning in complex and uncertain environments is escalating computational complexity with the size of the task space (action choices and world states) and the number of agents. Nonetheless, there is ample evidence in the natural world that high-functioning social mammals learn to solve complex problems with ease, both individually and cooperatively. This ability to solve computationally intractable problems stems both from brain circuits for hierarchical representation of state and action spaces and learned policies, and from constraints imposed by social cognition. Using biologically derived mechanisms for state representation and mammalian social intelligence, we constrain state-action choices in reinforcement learning in order to improve learning efficiency. Analysis results bound the reduction in computational complexity due to state abstraction, hierarchical representation, and socially constrained action selection in agent-based learning problems that can be described as variants of Markov decision processes. Investigation of two task domains, single-robot herding and multirobot foraging, shows that the theoretical bounds hold and that acceptable policies emerge which reduce task completion time, computational cost, and/or memory resources compared to learning without hierarchical representations and with no social knowledge.

11.
This paper relieves the 'curse of dimensionality' problem, which makes reinforcement learning intractable when scaled to multi-agent systems: the problem is aggravated exponentially as the number of agents increases, resulting in large memory requirements and slow learning. For the cooperative systems that are widespread among multi-agent systems, this paper proposes a new multi-agent Q-learning algorithm that decomposes the learning of the joint state and joint action into two processes: learning individual actions, and approximately learning the maximum value of the joint state. The latter process takes other agents' actions into account to ensure that the joint action is optimal, and it supports the updating of the former. Simulation results illustrate that the proposed algorithm learns the optimal joint behavior with less memory and faster learning than friend-Q learning and independent learning.
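The abstract gives only the decomposition idea, not the update equations; a minimal tabular sketch of that idea — per-agent action values plus an approximate joint-state value that feeds their updates — might look as follows. The exact update form and all names are assumptions, not the paper's algorithm.

```python
from collections import defaultdict

# Tabular sketch of the decomposition: each agent i learns Q_i(s, a_i) for
# its own action, while a shared table V(s) approximates the maximum value
# of the joint state and supports each agent's update. Illustrative only.
class DecomposedMAQ:
    def __init__(self, n_agents, alpha=0.1, gamma=0.95):
        self.q = [defaultdict(float) for _ in range(n_agents)]  # Q_i(s, a_i)
        self.v = defaultdict(float)                             # approx. max joint value
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, joint_action, reward, s_next, legal_actions):
        # Approximate the next joint state's max value from individual tables.
        v_next = max(max(qi[(s_next, a)] for a in legal_actions) for qi in self.q)
        self.v[s_next] = v_next
        target = reward + self.gamma * self.v[s_next]
        # Each agent updates only its own action's value toward the shared target.
        for i, a_i in enumerate(joint_action):
            self.q[i][(s, a_i)] += self.alpha * (target - self.q[i][(s, a_i)])
```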

12.
To address the curse of dimensionality and the lack of a coordination mechanism when traditional reinforcement learning is applied to adaptive urban traffic-signal timing, a reinforcement learning algorithm with an interactive coordination mechanism is proposed. An independent Q-learning algorithm for urban traffic-signal timing decisions is designed with average vehicle delay as the performance index. On this basis, the independent algorithm is extended by introducing a direct interaction mechanism: traffic-signal control agents at adjacent intersections directly exchange their timing actions and interaction-point values. Simulation experiments show that the reinforcement learning algorithm with the interactive coordination mechanism clearly outperforms independent Q-learning, coordinates more effectively, converges well, and its interaction-point values tend to stabilize.
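The exact form of the exchanged interaction-point values is not specified in the abstract; one minimal way to sketch the direct-interaction idea is to let each intersection agent condition its Q-values on the timing actions its neighbors announced last step. All names below are hypothetical.

```python
import random
from collections import defaultdict

# Minimal sketch of Q-learning with direct neighbor interaction for signal
# timing (hypothetical illustration; not the paper's algorithm).
class IntersectionAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def augmented_state(self, local_state, neighbor_actions):
        # Exchanged neighbor actions become part of this agent's state.
        return (local_state, tuple(neighbor_actions))

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, delay, next_state):
        reward = -delay   # performance index: minimize average vehicle delay
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])
```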

13.
General Game Playing (GGP) aims to develop an artificial intelligence that can master a variety of games without prior game experience. Building on existing reinforcement learning algorithms, this paper proposes a simplified, experience-based learning method. By filtering game states and generalizing game experience, it reduces the amount of experience that decision making requires, improves decision efficiency, and can reach a specified game objective of winning, drawing, or losing. Game experiments against human players under three different rule sets show that the method achieves the expected results.

14.
Fujita H, Ishii S. Neural Computation, 2007, 19(11): 3051-3087
Games constitute a challenging domain of reinforcement learning (RL) for acquiring strategies because many of them include multiple players and many unobservable variables in a large state space. The difficulty of solving such realistic multiagent problems with partial observability arises mainly from the fact that the computational cost for the estimation and prediction in the whole state space, including unobservable variables, is too heavy. To overcome this intractability and enable an agent to learn in an unknown environment, an effective approximation method is required with explicit learning of the environmental model. We present a model-based RL scheme for large-scale multiagent problems with partial observability and apply it to a card game, hearts. This game is a well-defined example of an imperfect information game and can be approximately formulated as a partially observable Markov decision process (POMDP) for a single learning agent. To reduce the computational cost, we use a sampling technique in which the heavy integration required for the estimation and prediction can be approximated by a plausible number of samples. Computer simulation results show that our method is effective in solving such a difficult, partially observable multiagent problem.
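The abstract says only that heavy integrations over unobserved variables are replaced by sampling; a generic sketch of that idea (averaging over sampled completions of the hidden hands; every name here is hypothetical, not the authors' code) could look like:

```python
# Generic sketch of sampling-based belief estimation in a partially
# observable card game: integrate over hidden hands by averaging across
# sampled completions consistent with what has been observed.
# Hypothetical illustration; not the authors' implementation.
def estimate_value(observation, sample_hidden_state, value_fn, n_samples=100):
    """Approximate E[V | observation] with n_samples plausible hidden states."""
    total = 0.0
    for _ in range(n_samples):
        hidden = sample_hidden_state(observation)  # e.g., deal unseen cards at random
        total += value_fn(observation, hidden)     # model-based value of the full state
    return total / n_samples
```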

15.
Transfer of Learning by Composing Solutions of Elemental Sequential Tasks  (Total citations: 2; self-citations: 0; other citations: 2)
Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focused on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm.
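A compact way to picture the modular composition (illustrative only; the paper's gating and learning details are omitted here) is a composite policy that delegates to the solution of each elemental task in sequence, switching modules when an elemental subgoal is reached:

```python
# Illustrative sketch of composing elemental-task solutions into a policy
# for a composite task (module interfaces are assumptions, not the paper's).
def composite_policy(elemental_policies, subgoal_tests):
    """Return a policy that runs each elemental module until its subgoal holds."""
    stage = 0
    def policy(state):
        nonlocal stage
        while stage < len(subgoal_tests) and subgoal_tests[stage](state):
            stage += 1                                 # subgoal reached: advance module
        idx = min(stage, len(elemental_policies) - 1)
        return elemental_policies[idx](state)          # delegate to the active module
    return policy
```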

16.
A Survey of Multi-Agent Reinforcement Learning Methods in the Stochastic Game Framework  (Total citations: 4; self-citations: 0; other citations: 4)
宋梅萍, 顾国昌, 张国印. 控制与决策, 2005, 20(10): 1081-1090
Multi-agent learning studies, within the framework of stochastic games, how multiple agents master interaction skills through self-learning. The success of single-agent reinforcement learning research, the solid mathematical foundation of game theory, and broad application prospects in complex task environments have made multi-agent reinforcement learning an important topic in machine learning research. This paper first gives formal definitions of the basic concepts of stochastic games in multi-agent systems; it then reviews learning algorithms for stochastic games and repeated games, along with other related work; finally, drawing on developments in recent years, it surveys applications of multi-agent learning in e-commerce, robotics, and the military, and discusses remaining problems and future research directions.

17.
To address the non-Markovian environments and the curse of dimensionality encountered in multi-Agent reinforcement learning research, this paper proposes a semi-Markov game (SMG) model and a MAHRL (multi-agent hierarchical reinforcement learning) cooperation framework. The model weakens the system's requirements on the external environment and introduces the notions of random time steps and communication policies, better matching the realities of MAHRL research. In the cooperation framework, different subtasks are modeled with SMG and SMDP models respectively, making the coordination mechanism between agents explicit. Experiments demonstrate the effectiveness and advantages of the SMG model and the cooperation framework.

18.
A Correlated-Value Recursive Q-Learning Algorithm with Limited Samples and Its Convergence Proof  (Total citations: 5; self-citations: 0; other citations: 5)
A reinforcement learning agent solves problems by learning an optimal policy that maps states to actions. Optimal decisions can generally be obtained in two ways: by maximizing reward or by optimizing cost. Using the optimal-cost-function approach, this paper presents a new Q-learning algorithm. Q-learning is an effective reinforcement learning method for solving Markov decision problems with incomplete information. Watkins proposed the basic Q-learning algorithm and proved that, under certain conditions, its value-iteration formula converges; however, his algorithm does not account for the influence that the choice of initial state and initial action has on subsequent learning. The correlated-value recursive Q-learning algorithm proposed here therefore improves on the original Q-learning algorithm and has good convergence properties. Deriving the correlated-value recursive form of Q-learning from the optimal-cost-function perspective allows many results from dynamic programming (DP) to be applied directly to the study of Q-learning.
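For context, Watkins' basic value-iteration rule that the article builds on is the standard Q-learning update (textbook form, not the article's modified recursion):

```latex
% Watkins' Q-learning update (standard textbook form):
Q_{t+1}(s_t, a_t) = (1 - \alpha_t)\, Q_t(s_t, a_t)
  + \alpha_t \left[ r_{t+1} + \gamma \max_{a'} Q_t(s_{t+1}, a') \right]
```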

19.
For the dynamic flow shop scheduling problem (DFSP) with the objective of minimizing makespan, this paper proposes an adaptive deep reinforcement learning algorithm (ADRLA). First, the dynamic arrival of new jobs in the DFSP is modeled as a Poisson process, and the solution process is described as a Markov decision process (MDP), turning the DFSP into a sequential decision problem solvable by reinforcement learning. Then, based on the characteristics of the DFSP scheduling model, state-feature vectors with good discriminability and generalization are designed, five specific actions (i.e., dispatching rules) are defined for selecting the job to process next, and a problem-specific reward function is constructed to evaluate the effect of each action (i.e., the reward value), thereby determining the three basic elements of ADRLA. A double deep Q-network (DDQN) then serves as the agent in ADRLA for making scheduling decisions. After training on a data set built from a small number of small-scale DFSP instances (i.e., the three basic elements' data on different problems), the agent can accurately capture the nonlinear relationship between the state-feature vectors and the Q-value vectors (composed of each action's Q-value) for DFSPs of various scales, enabling adaptive real-time scheduling. Finally, simulation experiments on different test problems and comparisons with other algorithms verify the effectiveness and real-time performance of ADRLA for solving the DFSP.
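The distinguishing step of the DDQN agent used here is decoupling action selection from evaluation in the learning target; a minimal numpy sketch of that target (network and variable names are illustrative, not from the paper):

```python
import numpy as np

# Minimal sketch of the double-DQN learning target: the online network
# selects the next action, the target network evaluates it.
# Illustrative only; not the paper's training code.
def ddqn_target(reward, next_state, done, q_online, q_target, gamma=0.99):
    """Compute the DDQN target for one transition."""
    if done:
        return reward
    a_star = int(np.argmax(q_online(next_state)))          # select with online net
    return reward + gamma * q_target(next_state)[a_star]   # evaluate with target net

# Hypothetical usage: q_online / q_target map a state-feature vector to a
# Q-value vector with one entry per dispatching rule (five here).
q_net = lambda s: np.zeros(5)
y = ddqn_target(reward=-3.0, next_state=np.ones(8), done=False,
                q_online=q_net, q_target=q_net)
```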
