Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This paper presents a new adaptive segmentation of continuous state spaces based on a vector quantization algorithm such as Linde–Buzo–Gray (LBG) for high-dimensional continuous state spaces. The objective of adaptive state-space partitioning is to improve the efficiency of learning reward values by accumulating state-transition vectors in a single-agent environment. We constructed our single-agent model over a continuous state space and a discrete action space using the Q-learning function. Moreover, study of the resulting state-space partition reveals a Voronoi tessellation. The experimental results show that the proposed method not only partitions the continuous state space into appropriate Voronoi regions according to the number of actions, but also achieves good performance on reward-based learning tasks compared with other approaches such as a square lattice partition of a discrete state space.
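As a rough illustration of this kind of state aggregation, the sketch below maps a continuous state to its nearest codebook vector (its Voronoi region) and runs a tabular Q-learning update over the resulting indices. It is not the authors' exact LBG training procedure; the codebook size, step sizes, and dimensions are assumptions.

```python
# Illustrative sketch: nearest-codebook (Voronoi) discretisation + tabular Q-learning.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.uniform(-1.0, 1.0, size=(16, 2))   # 16 region centres in a 2-D state space
n_actions = 4
Q = np.zeros((len(codebook), n_actions))
alpha, gamma = 0.1, 0.95

def region(state):
    """Index of the Voronoi region (nearest codebook vector) containing `state`."""
    return int(np.argmin(np.linalg.norm(codebook - state, axis=1)))

def q_update(s, a, r, s_next):
    i, j = region(s), region(s_next)
    Q[i, a] += alpha * (r + gamma * Q[j].max() - Q[i, a])

# Example transition: continuous states, discrete action, scalar reward.
q_update(np.array([0.2, -0.4]), a=1, r=0.0, s_next=np.array([0.25, -0.35]))
```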

2.
Reinforcement learning algorithms rely on carefully designed extrinsic rewards, yet during the agent's interaction with the environment the extrinsic reward it receives is often very sparse or delayed, which prevents the agent from learning a good policy. To address this problem, an intrinsic reward is designed from two aspects, novelty and risk assessment, so that the agent can explore the environment sufficiently and account for uncertain actions in the environment. The method has two parts: first, novelty is described by the visit counts of the current state-action pair and the successor state, so that the specific action executed is taken into account; second, the risk level of an action is assessed from the variance of the cumulative reward, to judge whether the current action is risky or risk-free for the state. The method is evaluated in MuJoCo environments; experiments show that it achieves higher average rewards, and obtains good average rewards even when the extrinsic reward is delayed, demonstrating that the method effectively alleviates the problem of sparse extrinsic rewards.
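A minimal sketch of the two intrinsic-reward terms described above, assuming novelty is a count-based bonus over the (state, action, next-state) triple and risk is the variance of recent episode returns; the weights and names are illustrative, not taken from the paper.

```python
# Count-based novelty bonus plus variance-based risk penalty (illustrative).
from collections import defaultdict
import numpy as np

visit_counts = defaultdict(int)          # counts of (state, action, next_state)
recent_returns = []                      # cumulative rewards of recent episodes

def novelty_bonus(state, action, next_state):
    key = (state, action, next_state)
    visit_counts[key] += 1
    return 1.0 / np.sqrt(visit_counts[key])

def risk_penalty():
    if len(recent_returns) < 2:
        return 0.0
    return float(np.var(recent_returns))  # higher variance => riskier behaviour

def intrinsic_reward(state, action, next_state, beta=0.1, eta=0.01):
    return beta * novelty_bonus(state, action, next_state) - eta * risk_penalty()
```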

3.
This article demonstrates that Q-learning can be accelerated by appropriately specifying initial Q-values using a dynamic wave expansion neural network. In our method, the neural network has the same topography as the robot's workspace, and each neuron corresponds to a certain discrete state. Every neuron of the network reaches an equilibrium state according to the initial environment information; when the network is stable, the activity of a given neuron denotes the maximum cumulative reward obtainable by following the optimal policy from the corresponding state. The initial Q-values are then defined as the immediate reward plus the maximum cumulative reward obtained by following the optimal policy from the succeeding state. In this way, we create a neural-network-based mapping between the known environment information and the initial values of the Q-table, so prior knowledge can be incorporated into the learning system and give robots a better learning foundation. Results of experiments on a grid-world problem show that neural-network-based Q-learning enables a robot to acquire an optimal policy with better learning performance than conventional Q-learning and potential-field-based Q-learning.
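A hedged sketch of the initialization idea (without the wave-expansion network itself): a breadth-first "wave" from the goal gives each free cell its optimal distance, a value V(s) is derived from it, and the Q-table is seeded with the immediate reward plus the discounted value of the successor state. The grid, rewards, and discounting convention are assumptions.

```python
# Seed a Q-table from a distance-to-goal value map (illustrative, not the paper's network).
from collections import deque

GRID = ["....#",
        "..#..",
        "....G"]                              # '#': obstacle, 'G': goal
GAMMA, GOAL_REWARD, STEP_REWARD = 0.9, 1.0, 0.0
MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

def free(r, c):
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] != '#'

goal = next((r, c) for r, row in enumerate(GRID) for c, ch in enumerate(row) if ch == 'G')
dist, queue = {}, deque([(goal, 0)])          # breadth-first "wave" from the goal
while queue:
    (r, c), d = queue.popleft()
    if (r, c) in dist:
        continue
    dist[(r, c)] = d
    for dr, dc in MOVES.values():
        if free(r + dr, c + dc):
            queue.append(((r + dr, c + dc), d + 1))

# V(s): max discounted reward obtainable from s (goal treated as absorbing).
V = {s: 0.0 if d == 0 else GOAL_REWARD * GAMMA ** (d - 1) for s, d in dist.items()}

def init_q(s, a):
    dr, dc = MOVES[a]
    nxt = (s[0] + dr, s[1] + dc)
    if nxt not in V:                          # blocked or off-grid: stay in place
        nxt = s
    r = GOAL_REWARD if nxt == goal else STEP_REWARD
    return r + GAMMA * V[nxt]

Q0 = {(s, a): init_q(s, a) for s in V for a in MOVES}
print(Q0[((2, 3), 3)])                        # 'right' from the cell next to the goal -> 1.0
```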

4.
A Recursive Correlated-Value Q-Learning Algorithm with Limited Samples and Its Convergence Proof   (Total citations: 5; self-citations: 0; citations by others: 5)
A reinforcement learning agent solves problems by learning an optimal policy that maps states to actions. There are generally two routes to the optimal decision: maximizing reward, or minimizing cost. Using the optimal cost-function approach, this paper presents a new Q-learning algorithm. Q-learning is an effective reinforcement learning method for Markov decision problems with incomplete information. Watkins proposed the basic Q-learning algorithm and proved the convergence of its iterative Q-value update under certain conditions, but his algorithm does not consider how the choice of the initial state and initial action during iteration affects subsequent learning. The proposed correlated-value recursive Q-learning algorithm therefore improves on the original Q-learning algorithm and has better convergence properties. Starting from the optimal cost-function approach, a correlated-value recursive form of Q-learning is given, which allows many results from dynamic programming (DP) to be applied directly to the study of Q-learning.
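A small sketch of the optimal-cost view mentioned above, in which Q(s, a) stores an expected cost and the bootstrap therefore takes a minimum over the successor's actions; this is the generic cost-form update, not the paper's specific correlated-value recursion.

```python
# Cost-minimising Q-learning update (generic sketch).
import numpy as np

n_states, n_actions = 10, 2
Q_cost = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def cost_q_update(s, a, cost, s_next, terminal=False):
    target = cost if terminal else cost + gamma * Q_cost[s_next].min()
    Q_cost[s, a] += alpha * (target - Q_cost[s, a])

cost_q_update(s=3, a=1, cost=1.0, s_next=4)   # each step incurs cost 1 until the goal
```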

5.
This work extends the agent Q-reinforcement-learning method and discusses utility-driven Markov reinforcement learning. In contrast to the single-absorbing-state case, the learning process is no longer state-driven but utility-driven: the agent's learning is no longer tied to a particular goal state, but instead maximizes the average expected return per step, i.e., the total return over a given number of steps, so the learning result is an optimal cycle with maximum average return. The convergence of reinforcement learning with multiple absorbing states is proved, a raster image is treated as a grid world with multiple absorbing states, and the effectiveness of multi-absorbing-state Q-learning in a deterministic environment is tested.

6.
In order to improve the learning ability of robots, we present a reinforcement learning approach with a knowledge base for mapping natural language instructions to executable action sequences. A simulated platform with a physics engine is built as the interactive environment. Based on the knowledge base, a reward function with both immediate and delayed rewards is designed to handle sparse-reward problems. In addition, a list of object states is produced by querying the knowledge base and serves as a standard for judging the quality of action sequences. Experimental results demonstrate that our approach yields good accuracy in producing action sequences.
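As a rough illustration of a reward with both immediate and delayed components driven by a knowledge base, the sketch below scores progress toward a list of target object states and adds a delayed bonus when all targets are met; the knowledge-base schema, object names, and weights are assumptions, not the paper's.

```python
# Immediate progress reward plus delayed completion bonus (illustrative).
target_states = {"cup": "on_table", "door": "closed"}      # assumed knowledge-base query result

def step_reward(world_state, prev_matches):
    matches = sum(world_state.get(obj) == st for obj, st in target_states.items())
    immediate = 1.0 * (matches - prev_matches)              # progress since the last step
    delayed = 10.0 if matches == len(target_states) else 0.0
    return immediate + delayed, matches

r, m = step_reward({"cup": "on_table", "door": "open"}, prev_matches=0)
print(r, m)    # 1.0 progress reward, one of two target states satisfied
```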

7.
Mahadevan, Sridhar. Machine Learning, 1996, 22(1-3): 159-195
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
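For reference, a compact sketch of the standard R-learning update studied in the paper, which maintains relative action values together with a running estimate of the average reward; the step sizes and the greedy-action check are illustrative choices.

```python
# R-learning: relative values R(s, a) plus a running average-reward estimate rho.
import numpy as np

n_states, n_actions = 5, 2
R = np.zeros((n_states, n_actions))      # relative action values
rho = 0.0                                # running estimate of the average reward
alpha, beta = 0.1, 0.01                  # value and average-reward step sizes

def r_learning_update(s, a, reward, s_next):
    global rho
    greedy = R[s, a] == R[s].max()       # was the chosen action greedy?
    R[s, a] += alpha * (reward - rho + R[s_next].max() - R[s, a])
    if greedy:                           # update rho only on greedy steps
        rho += beta * (reward + R[s_next].max() - R[s].max() - rho)

r_learning_update(s=0, a=1, reward=1.0, s_next=2)
print(R[0, 1], rho)
```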

8.
An Optimal-Cost Correlated-Value Recursive Q-Learning Algorithm Based on Finite Samples   (Total citations: 4; self-citations: 2; citations by others: 4)
A reinforcement learning agent solves a decision problem by learning an optimal policy that maps states to actions. There are generally two routes to the optimal decision: maximizing reward, or minimizing cost. Using the optimal cost-function approach, this paper presents a new Q-learning algorithm. Q-learning is an effective reinforcement learning method for Markov decision problems with incomplete information. Starting from the optimal cost-function approach, a correlated-value recursive form of Q-learning is given, which allows many results from dynamic programming (DP) to be applied directly to the study of Q-learning.

9.
Owing to its strong capability for autonomous learning, reinforcement learning has gradually become a research focus for robot navigation, but complex unknown environments challenge the running efficiency and convergence speed of the algorithms. A new Q-learning algorithm for robot navigation is proposed: the environment state space is first defined by three discrete variables, and then a two-part reward function is designed that incorporates knowledge favorable to reaching the navigation goal in order to heuristically guide the robot's learning process. Experiments on the Simbad simulation platform show that the proposed algorithm completes the navigation task in unknown environments well and also has superior convergence performance.
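An illustrative two-part reward in the spirit of the abstract (the exact form is not specified there): one term rewards progress toward the goal, the other penalises proximity to obstacles; all weights and thresholds are assumptions.

```python
# Two-part navigation reward: goal-progress term + obstacle-proximity penalty (illustrative).
import math

def navigation_reward(prev_pos, pos, goal, nearest_obstacle_dist,
                      w_goal=1.0, w_obs=0.5, safe_dist=0.5):
    progress = math.dist(prev_pos, goal) - math.dist(pos, goal)   # >0 when moving closer
    r_goal = w_goal * progress
    r_obs = -w_obs * max(0.0, safe_dist - nearest_obstacle_dist)  # only inside the safety band
    return r_goal + r_obs

print(navigation_reward((0, 0), (0.3, 0.0), goal=(2.0, 0.0), nearest_obstacle_dist=0.2))
```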

10.
The ability to analyze the effectiveness of agent reward structures is critical to the successful design of multiagent learning algorithms. Though final system performance is the best indicator of the suitability of a given reward structure, it is often preferable to analyze the reward properties that lead to good system behavior (i.e., properties promoting coordination among the agents and providing agents with strong signal-to-noise ratios). This step is particularly helpful in continuous, dynamic, stochastic domains ill-suited to the simple table backup schemes commonly used in TD(λ)/Q-learning, where the effectiveness of the reward structure is difficult to distinguish from the effectiveness of the chosen learning algorithm. In this paper, we present a new reward evaluation method that provides a visualization of the tradeoff between the level of coordination among the agents and the difficulty of the learning problem each agent faces. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We use this reward property visualization method to determine an effective reward without performing extensive simulations. We then test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and take noisy actions (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting good rewards compared to running a full simulation. In addition, this method facilitates the design and analysis of new rewards tailored to the observational limitations of the domain, providing rewards that combine the best properties of traditional rewards.

11.
For reinforcement learning control problems in continuous spaces, a Q-learning method based on a self-organizing fuzzy RBF network is proposed. The network takes the state as input and outputs a continuous action together with its Q-value, realizing a "continuous state to continuous action" mapping. First, the continuous action space is discretized into a fixed number of discrete actions, and a fully greedy policy selects the discrete action with the maximum Q-value as the local winning action of each fuzzy rule. A command-fusion mechanism then weights the winning discrete actions by their utility values to obtain the continuous action actually applied to the system. In addition, to simplify the network structure and speed up learning, an improved RAN algorithm and gradient descent are used to adapt the network structure and parameters online. Simulation results on inverted-pendulum balance control verify the effectiveness of the proposed Q-learning method.
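A minimal sketch of the command-fusion step: each fuzzy rule contributes its winning discrete action, weighted by the rule's firing strength and utility, and the weighted average becomes the continuous control applied to the system. The RBF/RAN structure learning is omitted and the numbers are illustrative.

```python
# Command fusion: blend each rule's winning discrete action into one continuous action.
import numpy as np

discrete_actions = np.linspace(-10.0, 10.0, 5)      # discretised force commands (assumed)

def fuse(rule_strengths, winner_idx, winner_utilities):
    """Weighted average of the rules' winning discrete actions."""
    w = np.asarray(rule_strengths) * np.asarray(winner_utilities)
    return float(np.dot(w, discrete_actions[winner_idx]) / (w.sum() + 1e-8))

# Three fuzzy rules, their greedy (max-Q) discrete actions, and Q-values used as utilities.
u = fuse(rule_strengths=[0.6, 0.3, 0.1], winner_idx=[4, 3, 2], winner_utilities=[1.2, 0.8, 0.5])
print(u)
```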

12.
In the reinforcement learning system, the agent obtains a positive reward, such as 1, when it achieves its goal. Positive rewards are propagated around the goal area, and the agent gradually succeeds in reaching its goal. If you want the agent to avoid certain situations, such as dangerous places or poison, you might want to give it a negative reward. However, in conventional Q-learning, negative rewards are not propagated beyond a single state. In this article, we propose a new way to propagate negative rewards. It is a very simple and efficient technique for Q-learning. Finally, we show the results of computer simulations and the effectiveness of the proposed method.
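The limitation described above can be reproduced in a few lines: with Q initialised to zero, a negative terminal reward lowers Q only in the state adjacent to the penalty, because the max over the successor's actions stays at zero elsewhere. The sketch illustrates the problem, not the paper's remedy.

```python
# Why negative rewards fail to back up in plain Q-learning (illustrative chain world).
import numpy as np

Q = np.zeros((4, 2))          # 4-state chain; actions: 0 = left, 1 = right; state 3 is the penalty
alpha, gamma = 0.5, 0.9

def update(s, a, r, s_next, terminal=False):
    target = r if terminal else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

for _ in range(100):                     # repeatedly walk right into the penalty state
    update(0, 1, 0.0, 1)
    update(1, 1, 0.0, 2)
    update(2, 1, -1.0, 3, terminal=True)

print(Q)   # only Q[2, 1] becomes negative; Q[1, 1] and Q[0, 1] stay at 0
```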

13.
A problem related to the use of reinforcement learning (RL) algorithms in real robot applications is the difficulty of measuring the learning level reached after some experience. Among the different RL algorithms, Q-learning is the most widely used for accomplishing robotic tasks. The aim of this work is to evaluate a priori the optimal Q-values for problems where it is possible to compute the distance between the current state and the goal state of the system. Starting from the Q-learning update formula, the equations for the maximum Q-weights, for optimal and non-optimal actions, have been derived considering both delayed and immediate rewards. Deterministic and non-deterministic grid-world environments have also been considered to test the obtained equations in simulation. In addition, the convergence rates of the Q-learning algorithm have been compared for different learning-rate parameters.
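A hedged reconstruction of the idea: in a deterministic grid world with a single delayed reward R at the goal and discount γ, the optimal Q-value of an action whose successor lies d' steps from the goal is γ^d'·R. The paper's full equations (including the immediate-reward and non-deterministic cases) may differ in detail.

```python
# A priori upper bound on Q for a delayed-reward, deterministic grid world (illustrative).
GAMMA, R_GOAL = 0.9, 1.0

def q_star(d_next):
    """Optimal Q for an action whose successor is d_next steps from the goal."""
    return (GAMMA ** d_next) * R_GOAL

d = 5                               # current state is 5 steps from the goal
print(q_star(d - 1))                # optimal action: successor is 4 steps away
print(q_star(d + 1))                # non-optimal action moving away from the goal
```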

14.
A Reinforcement Learning Method Based on a Node-Growing k-Means Clustering Algorithm   (Total citations: 3; self-citations: 0; citations by others: 3)
There are two main classes of methods for continuous-state reinforcement learning: parameterized function approximation and adaptive discretization. After analyzing the advantages and disadvantages of existing methods for adaptively partitioning continuous state spaces, a partitioning method based on a node-growing k-means clustering algorithm is proposed, and the steps of the reinforcement learning method are given for both the discrete-action and continuous-action cases. Simulation experiments are carried out on the discrete-action Mountain Car problem and the continuous-action double-integrator problem. The results show that the method automatically adjusts the partitioning resolution according to the distribution of states in the continuous space, achieves adaptive partitioning of the continuous state space, and learns the optimal policy.
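A minimal sketch of a node-growing online k-means partitioner: a new cluster centre is spawned whenever an observed state lies farther than a threshold from every existing centre, otherwise the nearest centre is nudged toward the state. The threshold and learning rate are assumptions.

```python
# Node-growing online k-means for adaptive state-space partitioning (illustrative).
import numpy as np

class GrowingKMeans:
    def __init__(self, grow_threshold=0.5, lr=0.05):
        self.centres, self.grow_threshold, self.lr = [], grow_threshold, lr

    def assign(self, state):
        state = np.asarray(state, dtype=float)
        if not self.centres:
            self.centres.append(state.copy())
            return 0
        dists = [np.linalg.norm(state - c) for c in self.centres]
        idx = int(np.argmin(dists))
        if dists[idx] > self.grow_threshold:                          # grow a new node here
            self.centres.append(state.copy())
            return len(self.centres) - 1
        self.centres[idx] += self.lr * (state - self.centres[idx])    # refine the winner
        return idx

partitioner = GrowingKMeans()
for s in [(-1.0, 0.0), (-0.9, 0.1), (0.8, 0.0)]:      # e.g. (position, velocity) pairs
    print(partitioner.assign(s))                       # 0, 0, 1
```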

15.
The exploration-exploitation trade-off is one of the challenges of reinforcement learning. Exploration makes the agent try new actions to further improve its policy, while exploitation makes the agent use information from past experience to maximize the cumulative reward. Deep reinforcement learning commonly handles this trade-off with an ε-greedy policy, which ignores other factors influencing the agent's decisions and is therefore somewhat blind. To address this, an ε-greedy policy with an adaptively adjusted exploration factor is proposed, which uses the episode's cumulative reward obtained each time the agent completes a task to guide it toward reasonable exploration or exploitation. The larger the episode's cumulative reward, the more effective the agent's current actions are, so the exploration factor is decreased to exploit past experience more; conversely, the smaller the cumulative reward, the more room the current policy has for improvement, so the exploration factor is increased to explore more possible actions. Experimental results show that the improved policy achieves higher average rewards on Atari 2600 video games, indicating that it balances exploration and exploitation better.
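A small sketch of the adaptive exploration factor described above: ε shrinks when the episode's cumulative reward is high (exploit more) and grows when it is low (explore more). The exact adaptation rule and the comparison baseline are assumptions.

```python
# Adaptive epsilon-greedy exploration factor driven by episode returns (illustrative).
def adapt_epsilon(epsilon, episode_return, baseline_return,
                  step=0.05, eps_min=0.01, eps_max=1.0):
    if episode_return > baseline_return:      # doing well: rely more on experience
        epsilon -= step
    else:                                     # room to improve: explore more actions
        epsilon += step
    return min(eps_max, max(eps_min, epsilon))

eps, baseline = 0.5, 0.0
for ret in [10.0, 12.0, -3.0, 15.0]:          # returns of successive episodes
    eps = adapt_epsilon(eps, ret, baseline)
    baseline = 0.9 * baseline + 0.1 * ret     # running average as the comparison point
    print(round(eps, 2))
```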

16.
To address the difficulty of taxi dispatching at large transport hubs such as airports, a driver decision-making method based on improved deep reinforcement learning is proposed from the perspective of the taxi drivers' interests. The method first simulates the airport environment and the city in which the airport is located, and defines the driver's states, actions, the rewards obtained from interacting with the environment, and the state transitions. The driver's state parameters are then fed into a DQN, which fits the state-action value function (Q-function). Finally, the driver repeatedly makes decisions according to an ε-greedy policy, and the DQN parameters are updated based on the reward function. Experimental results show that in simulated large, medium, and small cities, drivers can use the model to quantitatively estimate the expected return of each candidate decision and make reasonable choices, thereby completing the taxi-dispatching process automatically.
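A minimal DQN skeleton of the decision loop described above, written with PyTorch; the driver-state features, action set, and reward are placeholders rather than the paper's environment model.

```python
# Epsilon-greedy action selection plus a single-transition DQN update (illustrative).
import random
import torch
import torch.nn as nn

state_dim, n_actions, gamma, epsilon = 4, 3, 0.99, 0.1   # e.g. actions: wait / return to city / roam
qnet = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def select_action(state):
    if random.random() < epsilon:                         # epsilon-greedy exploration
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(qnet(state).argmax())

def dqn_update(state, action, reward, next_state, done):
    q = qnet(state)[action]
    with torch.no_grad():
        target = reward + (0.0 if done else gamma * qnet(next_state).max())
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

s, s2 = torch.rand(state_dim), torch.rand(state_dim)     # placeholder driver-state features
a = select_action(s)
dqn_update(s, a, reward=5.0, next_state=s2, done=False)
```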

17.
This paper introduces an approach to off-policy Monte Carlo (MC) learning guided by behaviour patterns gleaned from approximation spaces and the rough set theory introduced by Zdzisław Pawlak in 1981. During reinforcement learning, an agent makes action selections in an effort to maximize a reward signal obtained from the environment. The problem considered in this paper is how to estimate the expected value of cumulative future discounted rewards when evaluating agent actions during reinforcement learning. The solution results from a form of weighted sampling that combines MC methods and approximation spaces to estimate the expected value of returns on actions. This is made possible by considering the behaviour patterns of an agent in the context of approximation spaces. The framework provided by an approximation space makes it possible to measure the degree to which agent behaviours are a part of ("covered by") a set of accepted agent behaviours that serves as a behaviour evaluation norm. Furthermore, this article introduces an adaptive action control strategy called run-and-twiddle (RT) (a form of adaptive learning introduced by Oliver Selfridge in 1984), where approximation spaces are constructed on a "need by need" basis. Finally, a monocular vision system has been selected to facilitate the evaluation of the reinforcement learning methods. The goal of the vision system is to track a moving object, and rewards are based on the proximity of the object to the centre of the camera's field of view. The contribution of this article is the introduction of an RT form of off-policy MC learning.

18.
Traditional Q-learning defines the robot's reward function rather loosely, which leads to low learning efficiency. To solve this problem, a Reward Detailed Classification Q-learning (RDC-Q) algorithm is presented. Combining the readings of all the robot's sensors, the robot's states are divided into 20 reward states and 15 penalty states according to the distance to obstacles, and the reward the robot receives at each time step is classified by the safety level of its state, so that the robot tends toward states with higher safety levels, which helps it learn faster and better. Simulation experiments in an obstacle-dense environment show that the algorithm converges significantly faster than Q-learning with a traditional reward.
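An illustrative sketch of grading the reward by the safety level of the state, based on the distance to the nearest obstacle; the paper's 20 reward states and 15 penalty states are not reproduced, and the class names and thresholds here are assumptions.

```python
# Classify the robot's state by safety level and grade the reward accordingly (illustrative).
def classify_and_reward(min_obstacle_dist, reached_goal=False):
    if reached_goal:
        return "goal", 10.0
    if min_obstacle_dist > 1.0:              # far from obstacles: safest band, small positive reward
        return "reward_state_safe", 0.5
    if min_obstacle_dist > 0.5:              # moderately safe band
        return "reward_state_near", 0.1
    if min_obstacle_dist > 0.2:              # danger band: mild penalty
        return "penalty_state_near", -0.5
    return "penalty_state_critical", -2.0    # about to collide

print(classify_and_reward(0.8))
print(classify_and_reward(0.15))
```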

19.
Design of Heuristic Reward Functions in Reinforcement Learning Algorithms and Analysis of Their Convergence   (Total citations: 3; self-citations: 0; citations by others: 3)
(Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016)

20.
To develop a nonverbal communication channel between an operator and a system, we built a tracking system called the Adaptive Visual Attentive Tracker (AVAT) to track and zoom in on the operator's behavioural sequence, which represents his or her intention. In our system, hidden Markov models (HMMs) first roughly model the gesture pattern. The state transition probabilities of the HMMs are then used as rewards in temporal difference (TD) learning, and TD learning is used to adjust the action model of the tracker for its situated behaviours in the tracking task. Identification of the hand-sign gesture context through wavelet analysis autonomously provides a reward value for optimizing AVAT's action patterns. Experimental results on tracking the operator's hand-sign action sequences during natural walking motion with higher accuracy demonstrate the effectiveness of the proposed HMM-based TD learning algorithm for AVAT. During the TD learning experiments, randomly chosen exploratory actions sometimes exceed the predefined state area and thus involuntarily enlarge the domain of states. We describe a method that uses HMMs with continuous observation distributions to detect whether a state should be split to create a new state. The generation of new states provides the ability to enlarge the predefined area of states.
