Similar Documents
20 similar documents found (search time: 31 ms)
1.
Reinforcement learning (RL) has been widely used to solve problems with little feedback from the environment. Q-learning can solve Markov decision processes (MDPs) quite well. For partially observable Markov decision processes (POMDPs), a recurrent neural network (RNN) can be used to approximate Q values. However, learning time for these problems is typically very long. We present a new combination of RL and RNN to find a good policy for POMDPs in a shorter learning time. This method contains two phases: first, the state space is divided into two groups (fully observable states and hidden states); second, a Q-value table is used to store values of fully observable states and an RNN is used to approximate values for hidden states. Results of experiments in two grid-world problems show that the proposed method enables an agent to acquire a policy with better learning performance compared to the method using only an RNN.
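A minimal sketch of this hybrid value store, assuming a grid world whose cells are pre-labelled as fully observable or hidden and an assumed local-observation size: lookups are routed to a Q-value table for observable cells and to a small GRU-based approximator for hidden cells. This is an illustration of the idea, not the authors' implementation.

```python
# Hedged sketch (not the paper's code): route value estimation between a Q-table
# for fully observable grid cells and a recurrent network for hidden (aliased) cells.
# The state labelling, observation size and action count are illustrative assumptions.
import torch
import torch.nn as nn
from collections import defaultdict

N_ACTIONS = 4
OBS_DIM = 8  # assumed local sensor reading used only for hidden cells

class RecurrentQ(nn.Module):
    """GRU-based Q approximator used only for hidden states."""
    def __init__(self, obs_dim=OBS_DIM, hidden=32, n_actions=N_ACTIONS):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, h):
        h = self.cell(obs, h)          # carry memory across aliased observations
        return self.head(h), h

q_table = defaultdict(lambda: torch.zeros(N_ACTIONS))  # fully observable states
rnn_q = RecurrentQ()

def q_values(state_id, obs, h, state_is_hidden):
    """Return action values and the (possibly updated) recurrent memory."""
    if state_is_hidden:
        return rnn_q(obs, h)
    return q_table[state_id], h        # table lookup; memory unchanged

# usage: greedy action for a hidden cell
h0 = torch.zeros(1, 32)
vals, h1 = q_values("cell_3_4", torch.randn(1, OBS_DIM), h0, state_is_hidden=True)
action = int(vals.argmax(dim=-1))
```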

2.
This work extends the agent Q-learning method and discusses utility-driven Markov reinforcement learning. In contrast to the single-absorbing-state case, the learning process is no longer state-driven but utility-driven: the agent's learning is no longer tied to a particular goal state, but instead maximizes the average expected reward per step, i.e., the total reward accumulated within a given number of steps, so the learning result is an optimal cycle with maximal average reward. The convergence of reinforcement learning with multiple absorbing states is proved, and by treating a raster image as a grid world with multiple absorbing states, the effectiveness of multi-absorbing-state Q-learning in a deterministic environment is tested.
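For illustration, the average-reward objective can be pursued with a standard R-learning style update, shown below as a hedged stand-in for the paper's utility-driven Q-learning; the grid size, step sizes and greedy-update rule are assumptions.

```python
# Hedged sketch: an R-learning style update for average-reward Q-learning, shown only
# to illustrate the "maximise average reward per step" objective; it is not claimed
# to be the paper's exact algorithm.
import numpy as np

n_states, n_actions = 25, 4          # assumed grid-world size
Q = np.zeros((n_states, n_actions))
rho = 0.0                            # running estimate of average reward per step
alpha, beta = 0.1, 0.01

def r_learning_update(s, a, r, s_next, greedy):
    global rho
    td = r - rho + Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td
    if greedy:                       # update the average-reward estimate only on greedy moves
        rho += beta * (r - rho + Q[s_next].max() - Q[s].max())
```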

3.
Reinforcement learning is one of the more prominent machine-learning technologies due to its unsupervised learning structure and ability to continually learn, even in a dynamic operating environment. Applying this learning to cooperative multi-agent systems not only allows each individual agent to learn from its own experience, but also offers the opportunity for the individual agents to learn from the other agents in the system, so the speed of learning can be accelerated. In the proposed learning algorithm, an agent adapts to comply with its peers by learning carefully when it obtains a positive reinforcement feedback signal, but learns more aggressively if a negative reward follows the action just taken. These two properties are used to develop the proposed cooperative learning method. This research presents a novel use of the Win or Learn Fast (WoLF) policy hill-climbing method with policy sharing. Results from the multi-agent cooperative domain illustrate that the proposed algorithms perform better than Q-learning alone in a piano-mover environment. It also demonstrates that agents can learn to accomplish a task together efficiently through repetitive trials.
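A compact sketch of the WoLF policy hill-climbing update at the heart of the approach, with a naive policy-sharing hook; the learning rates, the win test and the sharing rule are illustrative assumptions rather than the authors' exact settings.

```python
# Hedged sketch of WoLF policy hill-climbing (win-or-learn-fast) with a naive
# policy-sharing hook. Hyper-parameters and the sharing rule are assumptions.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)      # current policy
pi_avg = pi.copy()                                        # running average policy
counts = np.zeros(n_states)
alpha, gamma, d_win, d_lose = 0.2, 0.95, 0.01, 0.04       # learn faster when losing

def wolf_phc_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    counts[s] += 1
    pi_avg[s] += (pi[s] - pi_avg[s]) / counts[s]           # incremental average
    winning = (pi[s] @ Q[s]) > (pi_avg[s] @ Q[s])          # "win" test against average policy
    delta = d_win if winning else d_lose
    best = Q[s].argmax()
    pi[s] -= delta / (n_actions - 1)                       # shrink non-greedy actions
    pi[s, best] += delta + delta / (n_actions - 1)         # net +delta on the greedy action
    np.clip(pi[s], 0.0, None, out=pi[s])
    pi[s] /= pi[s].sum()                                   # renormalise to a distribution

def share_policy(to_pi, from_pi):
    """Naive policy sharing: the receiving agent copies a peer's policy table."""
    to_pi[:] = from_pi
```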

4.
The traditional Q-learning algorithm defines the robot's reward function rather loosely, which leads to low learning efficiency. To address this, a reward-detailed-classification Q (RDC-Q) learning algorithm is presented. By combining the readings of all of the robot's sensors, the robot's states are divided into 20 reward states and 15 penalty states according to its distance from obstacles, and the reward obtained at each time step is classified by the safety level of its state, so that the robot tends toward states with higher safety levels, helping it learn faster and better. Simulation experiments in an obstacle-dense environment show that the algorithm converges noticeably faster than the traditional-reward Q algorithm.
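A hedged sketch of a graded, distance-based reward in the spirit of RDC-Q; the thresholds and number of levels are assumptions, and the paper's exact 20 reward / 15 penalty classes are not reproduced.

```python
# Hedged sketch: map the reading of the closest obstacle sensor to a discrete safety
# level and scale the reward with that level, so the learner is pulled toward safer
# states. Thresholds, level count and the reward/penalty split are assumptions.
def rdc_reward(sensor_distances, safe_dist=1.0, danger_dist=0.2, n_levels=5):
    d = min(sensor_distances)                      # distance to the nearest obstacle
    if d >= safe_dist:
        return float(n_levels)                     # safest class: largest reward
    if d <= danger_dist:
        return -float(n_levels)                    # most dangerous class: largest penalty
    # discrete safety level between the two thresholds
    level = 1 + int((n_levels - 1) * (d - danger_dist) / (safe_dist - danger_dist))
    midpoint = (safe_dist + danger_dist) / 2.0
    # nearer half of the band is penalised, farther half rewarded, graded by level
    return float(level) if d >= midpoint else -float(n_levels - level)
```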

5.
To address the slow learning speed and low learning efficiency that arise in cooperative multi-robot learning, a cooperative Q-learning method for soccer robots based on a π-calculus mental model is proposed. The robots' motion model is described; mental states such as the current pitch situation, goals, intentions, behaviors, cooperation, requests, extended knowledge, capability judgment, and joint intentions are defined; and a joint reward function is constructed. Finally, experiments verify the effectiveness of the method.

6.
A Reinforcement Learning Model Based on Agent Teams and Its Application   Cited by: 22 (self-citations: 2, other citations: 20)
Multi-agent learning has attracted considerable attention in recent years. Building on the single-agent reinforcement learning algorithm Q-learning, a team-of-agents-based reinforcement learning model is proposed. Its most distinctive feature is the introduction of a leading agent as the protagonist of team learning, with the whole team learning through role switching of the leading agent. A concrete application model is designed for the simulated robot soccer domain, Q-learning is extended in several respects, and experiments are carried out; the successful application in simulated robot soccer demonstrates the effectiveness of the model.

7.
乔林, 罗杰. 《计算机科学》, 2012, 39(105): 235-237
Traditional Q-learning algorithms are based on a single reward/penalty criterion. Q-learning algorithms based on a single reward/penalty criterion often cannot adapt to multi-agent systems…

8.
The ability to analyze the effectiveness of agent reward structures is critical to the successful design of multiagent learning algorithms. Though final system performance is the best indicator of the suitability of a given reward structure, it is often preferable to analyze the reward properties that lead to good system behavior (i.e., properties promoting coordination among the agents and providing agents with strong signal-to-noise ratios). This step is particularly helpful in continuous, dynamic, stochastic domains ill-suited to the simple table backup schemes commonly used in TD(λ)/Q-learning, where the effectiveness of the reward structure is difficult to distinguish from the effectiveness of the chosen learning algorithm. In this paper, we present a new reward evaluation method that provides a visualization of the tradeoff between the level of coordination among the agents and the difficulty of the learning problem each agent faces. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We use this reward property visualization method to determine an effective reward without performing extensive simulations. We then test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and take noisy actions (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting good rewards, compared to running a full simulation. In addition, this method facilitates the design and analysis of new rewards tailored to the observational limitations of the domain, providing rewards that combine the best properties of traditional rewards.

9.
In deep reinforcement learning exploration, decisions must be made according to the external reward given by the environment; in sparse-reward environments, no information is obtained in the early stage of training, and in later stages it is difficult to dynamically adjust the exploration strategy using the information gathered so far. To alleviate this problem, a priority state estimation method is proposed: a priority value is assigned when a state is visited and stored in the experience replay buffer together with the external reward, guiding the direction of the exploration policy. Combining DDQN (Double Deep Q-Network) with prioritized experience replay, comparative experiments are conducted on the classic MountainCar control problem in OpenAI Gym and the Freeway game in Atari 2600; the results show that the method achieves better learning performance and higher average scores in sparse-reward environments.
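The Double-DQN target used in such a setup can be sketched as follows; the bonus term stands in for the priority value stored alongside the external reward, and the network interfaces and hyper-parameters are assumptions.

```python
# Hedged sketch of the Double-DQN target combined with a shaped reward: the online
# network selects the next action, the target network evaluates it, and the "bonus"
# stands in for the priority value the abstract stores with the sparse external reward.
import torch

def ddqn_target(online_net, target_net, reward, bonus, next_obs, done, gamma=0.99):
    """r + bonus + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    with torch.no_grad():
        next_actions = online_net(next_obs).argmax(dim=1, keepdim=True)        # selection
        next_values = target_net(next_obs).gather(1, next_actions).squeeze(1)  # evaluation
    shaped_reward = reward + bonus                   # external reward plus priority bonus
    return shaped_reward + gamma * next_values * (1.0 - done)
```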

10.
A Reinforcement Learning Model and Algorithm for Multi-Agent Cooperation   Cited by: 2 (self-citations: 0, other citations: 2)
The process of multi-agent cooperative learning is discussed in combination with reinforcement learning techniques, and a new multi-agent cooperative learning model is constructed. On the basis of this model, a multi-agent cooperative learning algorithm is proposed. The algorithm fully accounts for the characteristics of agents learning together, letting each agent predict its action policy from estimates of the long-term benefit of actions and make the corresponding decisions, thereby reaching an optimal joint action policy. Finally, simulation experiments on the hunter-prey pursuit problem verify the convergence of the algorithm and show that it is an efficient and fast learning method.
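One way to base joint decisions on long-term action values is a joint-action Q-table, sketched below under assumed grid and action sizes; this is an illustration of the idea, not the paper's exact model.

```python
# Hedged sketch: a joint-action Q-table for two cooperating hunters, where each update
# bootstraps from the value of the best joint action in the next state. Grid size and
# action sets are illustrative assumptions, not the paper's exact pursuit setup.
import numpy as np

n_states, n_actions = 36, 4                       # assumed small hunter-prey grid
Q = np.zeros((n_states, n_actions, n_actions))    # indexed by (state, a_hunter1, a_hunter2)
alpha, gamma = 0.1, 0.9

def joint_q_update(s, a1, a2, r, s_next):
    best_next = Q[s_next].max()                   # value of the best joint action
    Q[s, a1, a2] += alpha * (r + gamma * best_next - Q[s, a1, a2])

def greedy_joint_action(s):
    flat = Q[s].argmax()
    return np.unravel_index(flat, Q[s].shape)     # (a1, a2) maximising the joint value
```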

11.
Reinforcement learning is a learning scheme for finding the optimal policy to control a system, based on a scalar signal representing a reward or a punishment. If the observation of the system by the controller is sufficiently rich to represent the internal state of the system, the controller can achieve the optimal policy simply by learning reactive behavior. However, if the state of the controlled system cannot be assessed completely using current sensory observations, the controller must learn a dynamic behavior to achieve the optimal policy. In this paper, we propose a dynamic controller scheme which utilizes memory to uncover hidden states by using information about past system outputs, and makes control decisions using memory. This scheme integrates Q-learning, as proposed by Watkins, and recurrent neural networks of several types. It performs favorably in simulations which involve a task with hidden states. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.

12.
Markov games, as the generalization of Markov decision processes to the multi-agent case, have long been used for modeling multi-agent systems (MAS). The Markov game view of MAS is considered as a sequence of games having to be played by multiple players while each game belongs to a different state of the environment. In this paper, several learning automata based multi-agent system algorithms for finding optimal policies in Markov games are proposed. In all of the proposed algorithms, each agent residing in every state of the environment is equipped with a learning automaton. Every joint-action of the set of learning automata in each state corresponds to moving to one of the adjacent states. Each agent moves from one state to another and tries to reach the goal state. The actions taken by learning automata along the path traversed by the agent are then rewarded or penalized based on the comparison of the average reward received by the agent per move along the path with a dynamic threshold. In the second group of the proposed algorithms, the concept of entropy has been imported into learning automata based multi-agent systems to improve the performance of the algorithms. To evaluate the performance of the proposed algorithms, computer experiments have been conducted. The results of experiments have shown that the proposed algorithms perform better than the existing algorithms in terms of speed and accuracy of reaching the optimal policy. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
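A hedged sketch of a linear reward-penalty learning automaton together with the path-level feedback rule the abstract describes (rewarding the automata on a traversed path when the average reward per move beats a threshold); the step sizes and the threshold handling are assumptions.

```python
# Hedged sketch: a linear reward-penalty (L_R-P) automaton attached to a state, plus a
# path-level feedback rule that rewards or penalises all automata on the traversed path
# depending on whether the average reward per move beats a threshold. Step sizes and the
# threshold update policy are illustrative assumptions.
import numpy as np

class LearningAutomaton:
    def __init__(self, n_actions, a=0.1, b=0.02):
        self.p = np.full(n_actions, 1.0 / n_actions)   # action probability vector
        self.a, self.b = a, b                          # reward and penalty step sizes
        self.n = n_actions

    def choose(self, rng=np.random):
        return rng.choice(self.n, p=self.p)

    def update(self, action, rewarded):
        e = np.zeros(self.n)
        e[action] = 1.0
        if rewarded:
            self.p = self.p + self.a * (e - self.p)                    # pull toward the action
        else:
            self.p = (1 - self.b) * self.p + self.b / (self.n - 1) * (1 - e)  # push away

def path_feedback(automata_actions, total_reward, n_moves, threshold):
    """Reward the (automaton, action) pairs on a path if its average reward per move beats the threshold."""
    rewarded = (total_reward / max(n_moves, 1)) >= threshold
    for automaton, action in automata_actions:
        automaton.update(action, rewarded)
```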

13.
A Visual Navigation Method Based on Goal-Directed Behavior and Spatial Topological Memory   Cited by: 1 (self-citations: 0, other citations: 1)
For navigation in visually rich environments with dynamic factors, and inspired by the landmark-based way space is memorized, a visual navigation method is proposed that simultaneously learns goal-directed behavior and memorizes the spatial structure. First, to learn a control policy directly from raw input, deep reinforcement learning is used as the basic navigation framework, with collision prediction added as an auxiliary task for the model; then, while the agent learns to navigate, a temporal-correlation network is used to remove redundant observations and find navigation nodes, so that…

14.
An improved deep reinforcement learning algorithm (NDQN) is proposed to overcome the curse of dimensionality that the traditional Q-learning algorithm faces in path planning for mobile robots on complex terrain. Deep learning is integrated into the Q-learning framework, with the network output replacing the Q-value table. To counter the severe overestimation problem of the deep Q-network, a correction function is used to improve its evaluation function. The improved deep reinforcement learning algorithm is then…

15.
A Learning Agent Based on Reinforcement Learning   Cited by: 24 (self-citations: 2, other citations: 22)
Reinforcement learning learns the optimal behavior policy of a dynamic system by perceiving environment states and receiving uncertain reward values from the environment, and is one of the core techniques for building intelligent agents. Within the agent-oriented development environment AODE, the BDI model is extended with policy and capability mental components, and reinforcement learning is used to implement the policy construction function, yielding a learning agent based on reinforcement learning. The structure and operation of adaptive agents in AODE are studied so that intelligent agents gain online learning capability in dynamic environments and can effectively satisfy the agents' various mental requirements.

16.
A Hypertext Learning State Space Model and Learning Control   Cited by: 9 (self-citations: 1, other citations: 8)
Path control of hypertext teaching materials is an important problem in educational applications of hypertext. Based on knowledge space theory and a general relational mathematical model of hypertext, the concept of a hypertext learning state space is introduced; by defining a mathematical model of the learning state space and thresholds for learning-state transitions, the learning paths through the state space are controlled, keeping students from taking detours and improving the teaching effectiveness of hypertext. Under this form of control, students are free to browse the knowledge points within a learning state, while their browsing between learning states is reasonably controlled, thus achieving a balance of freedom and control.

17.
In reinforcement learning an agent may explore ineffectively when dealing with sparse reward tasks where finding a reward point is difficult. To solve the problem, we propose an algorithm called hierarchical deep reinforcement learning with automatic sub-goal identification via computer vision (HADS) which takes advantage of hierarchical reinforcement learning to alleviate the sparse reward problem and improve efficiency of exploration by utilizing a sub-goal mechanism. HADS uses a computer vision method to identify sub-goals automatically for hierarchical deep reinforcement learning. Due to the fact that not all sub-goal points are reachable, a mechanism is proposed to remove unreachable sub-goal points so as to further improve the performance of the algorithm. HADS involves contour recognition to identify sub-goals from the state image where some salient states in the state image may be recognized as sub-goals, while those that are not will be removed based on prior knowledge. Our experiments verified the effect of the algorithm.
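Sub-goal candidates can be extracted from a state image with standard contour detection, roughly as sketched below (assuming OpenCV 4); the thresholding choices and the reachability predicate standing in for the paper's prior knowledge are assumptions.

```python
# Hedged sketch of sub-goal extraction by contour recognition, in the spirit of HADS:
# salient blobs in the state image become candidate sub-goals, and candidates rejected
# by a prior-knowledge predicate (here a user-supplied callable) are dropped as
# unreachable. Threshold values and the reachability test are illustrative assumptions.
import cv2

def extract_subgoals(state_image, is_reachable, min_area=20):
    gray = cv2.cvtColor(state_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    subgoals = []
    for c in contours:
        if cv2.contourArea(c) < min_area:      # ignore tiny blobs / noise
            continue
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # contour centroid
        if is_reachable((cx, cy)):             # prior knowledge removes unreachable points
            subgoals.append((cx, cy))
    return subgoals
```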

18.
Imitation learning provides a framework in which an agent can learn how to make decisions from expert demonstrations. During learning, the agent needs neither to interact with the expert nor to rely on the environment's reward signal; it only needs a large number of expert demonstrations. Classic imitation learning methods require first-person expert demonstrations, consisting of a sequence of states and the corresponding sequence of expert actions. In real life, however, expert demonstrations usually exist as third-person videos. Compared with first-person demonstrations, the observation viewpoint of third-person demonstrations differs from the agent's, so there is no one-to-one correspondence between the two, and third-person demonstrations cannot be used directly for imitation learning. To address this problem, a data-efficient third-person imitation learning method is proposed. First, building on generative adversarial imitation learning, an image differencing method is introduced: exploiting the Markov property of the Markov decision process and the temporal continuity of its states, it removes domain features such as background and color so as to retain the parts of the observed images most relevant to the behavior policy, which are then used for imitation learning. Second, a variational discriminator bottleneck is introduced to constrain the discriminator and further weaken the influence of domain features on policy learning. To evaluate the algorithm, it is tested in three experimental environments on the MuJoCo platform and compared with existing algorithms. The results show that, compared with existing imitation learning methods, the proposed method performs better on third-person imitation learning tasks and does not require…
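The image-differencing step can be sketched in a few lines: subtracting consecutive observations suppresses static background and colour cues before the GAIL-style discriminator sees them. The tensor shapes and the clamping are assumptions.

```python
# Hedged sketch of the frame-differencing idea: the temporal difference of consecutive
# observations removes static background and colour content, keeping mostly
# motion-related information for the discriminator. Shapes are assumptions.
import torch

def difference_frames(frames):
    """frames: (T, C, H, W) sequence of observations -> (T-1, C, H, W) differences."""
    diffs = frames[1:] - frames[:-1]            # temporal difference removes static background
    return diffs.clamp(-1.0, 1.0)               # keep the discriminator input bounded
```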

19.
Quad-Q-learning     
Develops the theory of quad-Q-learning, a learning algorithm that evolved from Q-learning. Quad-Q-learning is applicable to problems that can be solved by "divide and conquer" techniques. Quad-Q-learning concerns an autonomous agent that learns without supervision to act optimally to achieve specified goals. The learning agent acts in an environment that can be characterized by a state. In the Q-learning environment, when an action is taken, a reward is received and a single new state results. The objective of Q-learning is to learn a policy function that maps states to actions so as to maximize a function of the rewards such as the sum of rewards. However, with respect to quad-Q-learning, when an action is taken from a state, either an immediate reward and no new state results, or no reward is received and four new states result from taking that action. The environment in which quad-Q-learning operates can thus be viewed as a hierarchy of states where lower level states are the children of higher level states. The hierarchical aspect of quad-Q-learning leads to a bottom-up view of learning that improves the efficiency of learning at higher levels in the hierarchy. The objective of quad-Q-learning is to maximize the sum of rewards obtained from each of the environments that result as actions are taken. Two versions of quad-Q-learning are discussed: discrete state, and mixed discrete and continuous state quad-Q-learning. The discrete state version is only applicable to problems with small numbers of states. Scaling up to problems with practical numbers of states requires a continuous state learning method. Continuous state learning can be accomplished using functional approximation methods. Application of quad-Q-learning to image compression is briefly described.
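A hedged reading of the quad-Q-learning value recursion: from a block, the "leaf" action yields an immediate reward and stops, while the "split" action yields no reward but four child blocks whose best values are summed. The block representation, reward callable and depth cap below are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of the value recursion implied by the abstract: choose between taking
# the immediate reward (leaf) and splitting into four child states whose best values
# are summed. Block representation, reward function and depth cap are assumptions.
def quad_value(block, reward_of_leaf, split, depth=0, max_depth=6):
    leaf_value = reward_of_leaf(block)                 # immediate reward, no new state
    if depth >= max_depth:
        return leaf_value
    children = split(block)                            # four new states, no reward
    split_value = sum(quad_value(c, reward_of_leaf, split, depth + 1, max_depth)
                      for c in children)
    return max(leaf_value, split_value)                # act optimally: keep the better option
```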

20.
史豪斌, 徐梦, 刘珈妤, 李继超. 《控制与决策》, 2019, 34(12): 2517-2526
Image-based visual servo control acquires image information through the robot's vision and then forms closed-loop feedback based on that image information to drive reasonable robot motion. In classic visual servoing, the servo gain is in most cases assigned manually, leading to poor robustness and slow convergence. To address this, an intelligent Dyna-Q based visual servo control method for rotor UAVs is proposed that adjusts the servo gain to improve adaptability. First, target feature points are extracted with an image feature extraction algorithm based on Freeman chain codes; then image-based visual servoing forms closed-loop control of the feature error; next, a decoupled visual servo control model is proposed for the strongly coupled, underactuated dynamics of the rotor UAV; finally, a reinforcement learning model that uses Dyna-Q learning to tune the servo gain is built, and through training the rotor UAV can choose the servo gain autonomously. Dyna-Q learning extends classic Q-learning by building an environment model to store experience; virtual samples generated by the environment model serve as learning samples for iterating the value function. Experimental results show that, compared with traditional PID control and classic image-based visual servoing, the proposed method converges faster and is more stable.
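Tabular Dyna-Q itself can be sketched compactly: real transitions update both the Q-values and a deterministic model, and the model then replays stored transitions for extra planning updates. Discretising the image-feature error into states and the candidate gains into actions is an assumption of this sketch, not the paper's exact formulation.

```python
# Hedged sketch of tabular Dyna-Q: a direct Q-learning step, a deterministic model
# update, and several planning steps that replay transitions drawn from the model.
# The state/action discretisation and hyper-parameters are illustrative assumptions.
import random
from collections import defaultdict

n_actions = 5                                   # assumed number of candidate servo gains
Q = defaultdict(lambda: [0.0] * n_actions)
model = {}                                      # (state, action) -> (reward, next_state)
alpha, gamma, planning_steps = 0.1, 0.95, 20

def dyna_q_update(s, a, r, s_next):
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])   # direct RL step
    model[(s, a)] = (r, s_next)                                 # learn the model
    for _ in range(planning_steps):                             # planning from stored experience
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        Q[ps][pa] += alpha * (pr + gamma * max(Q[ps_next]) - Q[ps][pa])
```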
