Similar Literature
20 similar documents found (search time: 46 ms)
1.
Reinforcement learning with the discounted-reward model is currently the mainstream of RL research, but the discount factor makes near-term expected rewards weigh more heavily than long-term ones, even though a policy with a larger long-term expected reward may sometimes be optimal; a more reasonable choice is therefore average-reward reinforcement learning. This paper introduces the two main average-reward reinforcement learning algorithms and their principal applications.
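For reference, the two optimality criteria contrasted above can be written as follows (standard textbook definitions, not taken from the paper itself):

```latex
% Discounted criterion: rewards t steps ahead are damped by \gamma^t,
% so near-term rewards dominate.
V^{\pi}_{\gamma}(s) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\middle|\, s_0 = s\right], \qquad 0 \le \gamma < 1

% Average-reward criterion: all time steps are weighted equally.
\rho^{\pi} = \lim_{N \to \infty} \frac{1}{N}\, \mathbb{E}\!\left[\sum_{t=0}^{N-1} r_{t}\right]
```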

2.
Research on Average-Reward Reinforcement Learning Algorithms   Cited by: 7 (self-citations: 0, other citations: 7)
高阳  周如益  王皓  曹志新 《计算机学报》2007,30(8):1372-1378
Sequential decision problems are commonly modeled as Markov decision processes (MDPs). When decision actions extend from discrete time points to continuous time, the classical MDP model generalizes to the semi-Markov decision process (SMDP). When the system parameters are unknown, reinforcement learning is used to learn the optimal policy. Based on performance-potential theory, this paper proves an approximation theorem for average-reward reinforcement learning. By approximating the performance-potential value function relative to a reference state, a new average-reward reinforcement learning algorithm, G-learning, is studied. G-learning applies to both MDPs and SMDPs. Unlike the classical R-learning algorithm, G-learning replaces the relative value function (relative to the average reward) with the performance-potential value function relative to a reference state. In customer admission control and production-inventory simulation experiments, G-learning outperforms both R-learning and SMART.
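As a rough illustration only (not the authors' published pseudocode), a tabular update in the spirit of G-learning might subtract a baseline derived from a fixed reference state `s_ref` where R-learning would subtract a running average-reward estimate; the names and exact rule below are assumptions:

```python
import numpy as np

def g_learning_step(G, s, a, r, s_next, s_ref, alpha=0.1):
    """One tabular update sketch: the potential of the fixed reference
    state s_ref plays the baseline role that R-learning fills with a
    separately estimated average reward."""
    rho_hat = np.max(G[s_ref])                      # baseline from reference state
    delta = r - rho_hat + np.max(G[s_next]) - G[s, a]
    G[s, a] += alpha * delta
    return G
```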

3.
李春贵 《计算机工程》2005,31(11):13-15
This paper studies prioritized-sweeping reinforcement learning. By defining a new kind of trace, multi-step truncated temporal-difference (TD) learning is applied to prioritized sweeping with integrated planning, and the sweep priority is defined by the multi-step truncated TD error. An improved prioritized-sweeping reinforcement learning algorithm is proposed and evaluated in simulation; the results show a clear gain in learning efficiency.
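For orientation, a minimal sketch of classic prioritized sweeping follows; note it uses the one-step TD error as the priority for brevity, whereas the paper defines priorities via multi-step truncated TD returns, and the `model`/`predecessors` structures are illustrative assumptions:

```python
import heapq
import numpy as np

def prioritized_sweeping(Q, model, predecessors, s, a, r, s_next,
                         alpha=0.1, gamma=0.95, theta=1e-4, n_sweeps=10):
    """One real step followed by a bounded number of model-based sweeps.
    model[(s, a)] -> (r, s_next); predecessors[s] -> set of (s_pre, a_pre)."""
    model[(s, a)] = (r, s_next)
    pq = []  # max-heap via negated priority
    p = abs(r + gamma * np.max(Q[s_next]) - Q[s, a])
    if p > theta:
        heapq.heappush(pq, (-p, s, a))
    for _ in range(n_sweeps):
        if not pq:
            break
        _, s_i, a_i = heapq.heappop(pq)
        r_i, s_n = model[(s_i, a_i)]
        Q[s_i, a_i] += alpha * (r_i + gamma * np.max(Q[s_n]) - Q[s_i, a_i])
        # Re-prioritize predecessors whose backed-up value just changed.
        for s_pre, a_pre in predecessors[s_i]:
            r_pre, _ = model[(s_pre, a_pre)]
            p = abs(r_pre + gamma * np.max(Q[s_i]) - Q[s_pre, a_pre])
            if p > theta:
                heapq.heappush(pq, (-p, s_pre, a_pre))
    return Q
```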

4.
Average-Reward Reinforcement Learning Scheduling for Re-entrant Production Systems   Cited by: 4 (self-citations: 0, other citations: 4)
An important problem in re-entrant production systems is optimizing the scheduling policy so as to raise the system's average output rate. This paper applies an average-reward reinforcement learning algorithm to the problem, starting directly from the system performance measure of interest and automatically obtaining an adaptive dynamic scheduling policy. Simulation results show that it outperforms two well-known priority scheduling rules.

5.
Average-reward reinforcement learning is an important undiscounted optimality framework within RL, and most existing work addresses discrete domains. This paper combines average-reward RL algorithms with function approximation to handle continuous state spaces, modifying the parameter-update conditions of R-learning and G-learning to suit the change of state domain. It further studies the performance of G-learning with function approximation and its sensitivity to the various parameters, and closes with experimental results and analysis. The experiments show that R-learning and G-learning solutions diverge easily when ε is small, and also demonstrate the effectiveness of the Tile Coding feature extractor, which can serve as a reference baseline for other feature-extraction methods.
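A minimal sketch of the Tile Coding feature extractor mentioned above, assuming uniform tilings offset by a fraction of a tile width (production code would use hashing, as in Sutton's tiles3; all names here are illustrative):

```python
import numpy as np

def tile_coding_features(x, lows, highs, n_tilings=8, n_tiles=8):
    """Return the indices of the active (1-valued) binary features for a
    continuous state x; one tile is active per tiling."""
    x = np.asarray(x, dtype=float)
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    scaled = (x - lows) / (highs - lows)           # normalise to [0, 1]
    dims = len(x)
    active = []
    for t in range(n_tilings):
        offset = t / n_tilings                     # per-tiling shift
        coords = np.floor((scaled * n_tiles + offset) % n_tiles).astype(int)
        idx = t                                    # keep tilings disjoint
        for d in range(dims):
            idx = idx * n_tiles + coords[d]
        active.append(idx)
    return active
```

A linear value estimate is then `w[active].sum()` over a weight vector of length `n_tilings * n_tiles**dims`.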

6.
Reinforcement Learning (RL), alongside supervised and unsupervised learning, is the third major learning paradigm in machine learning; it learns by interacting with the environment so as to maximize cumulative return. Common RL algorithms fall into model-based reinforcement learning (Model-based Reinforcement Learning) and model-free...

7.
8.
郑宇  罗四维  吕子昂 《计算机应用》2008,28(5):1328-1330
The number of exploration steps a reinforcement learning algorithm needs grows exponentially with the size of the state space, which makes RL hard to apply to the control of complex systems. To overcome this, a reinforcement learning algorithm for stable-state-space control is proposed. The algorithm takes finding the optimal control action within the stable space as its learning objective and concentrates exploration in the stable state space instead of exploring the system's full state space. Since the stable state space is usually only a tiny fraction of the full state space, the number of explorations no longer grows exponentially with the state space.
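The core idea can be sketched as an ε-greedy rule restricted to a stable subset (a sketch of the idea only; `stable_next[s]` is a hypothetical boolean mask of actions whose successors stay stable, assumed non-empty):

```python
import numpy as np

def stable_eps_greedy(Q, s, stable_next, epsilon, rng):
    """Epsilon-greedy action selection confined to the stable state space:
    exploration never leaves the admissible (stable) action set."""
    admissible = np.flatnonzero(stable_next[s])    # assumed non-empty
    if rng.random() < epsilon:
        return int(rng.choice(admissible))          # explore within stable set
    q = np.where(stable_next[s], Q[s], -np.inf)     # mask unstable actions
    return int(np.argmax(q))
```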

9.
To meet the comfort requirements of the car-following mode of an adaptive cruise control system while also taking vehicle safety and traffic efficiency into account, and to remedy the poor generalization and comfort of existing algorithms, a new multi-objective car-following decision algorithm is proposed based on the deep deterministic policy gradient (DDPG) algorithm. Based on the mutual longitudinal kinematics of the following and lead vehicles, the car-following process is modeled as a Marko...
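For context, one DDPG update step is sketched below in PyTorch; this is the generic algorithm, not the paper's design: the multi-objective comfort/safety/efficiency reward would enter only through `r` in the batch, and the actor/critic module signatures are assumptions:

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_tgt, critic_tgt,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG step: critic regression to a bootstrapped target, then a
    deterministic policy-gradient actor step, then Polyak averaging."""
    s, a, r, s2, done = batch                       # assumed tensor batch
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_tgt(s2, actor_tgt(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()        # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Slowly track the online networks with the targets.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```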

10.
Reinforcement learning (RL) has achieved great success in Go, video games, navigation, recommender systems, and other areas. However, many RL algorithms still cannot be transferred directly to real physical environments. In simulation an agent can interact with the environment by continual trial and error to learn an optimal policy, but safety considerations in many real-world applications require limiting the agent's random exploration. Safety has thus become a key challenge in moving RL from simulation to reality. In recent years much research has been devoted to safe reinforcement learning (SRL) algorithms that satisfy safety constraints while maintaining system performance. This paper comprehensively surveys existing SRL algorithms, grouping them into three classes: modifying the learning process, modifying the learning objective, and offline RL. It also introduces five benchmark platforms: Safety Gym, safe-control-gym, SafeRL-Kit, D4RL, and NeoRL. It concludes by summarizing applications of safe RL in autonomous driving, robot control, industrial process control, power system optimization, and healthcare, and gives conclusions and an outlook.

11.
Recent Advances in Hierarchical Reinforcement Learning   Cited by: 22 (self-citations: 0, other citations: 22)
Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.

12.
Kearns, Michael; Mansour, Yishay; Ng, Andrew Y. Machine Learning, 2002, 49(2-3): 193-208
A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter. Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).
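A compact sketch of the sparse-sampling recursion described above; `gen_model(s, a)` is assumed to return one sampled `(reward, next_state)` pair, and parameter choices (C, depth) are left to the caller:

```python
def sparse_sample_q(gen_model, s, a, depth, C, gamma, actions):
    """Estimate Q(s, a) by drawing C successor samples from the generative
    model and recursing to the given depth."""
    if depth == 0:
        return 0.0
    total = 0.0
    for _ in range(C):
        r, s2 = gen_model(s, a)
        v2 = max(sparse_sample_q(gen_model, s2, a2, depth - 1, C, gamma, actions)
                 for a2 in actions)
        total += r + gamma * v2
    return total / C

def sparse_sample_action(gen_model, s, depth, C, gamma, actions):
    # Near-optimal action at s; cost is (|A| * C)**depth, independent of |S|.
    return max(actions, key=lambda a: sparse_sample_q(
        gen_model, s, a, depth, C, gamma, actions))
```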

13.
A Survey of Reinforcement Learning Research   Cited by: 8 (self-citations: 2, other citations: 8)
The learning behavior of an agent in an unknown environment is a challenging and interesting problem. Reinforcement learning improves its policy through trial-and-error interaction with the environment; its learning and online-learning characteristics make it an important branch of machine learning research. This paper reviews the latest results in RL theory, algorithms, and applications. It first introduces the RL environment model and its basic elements; it then covers theoretical questions concerning the convergence and generalization of RL algorithms; next, drawing on results from recent years, it surveys both discounted-return and average-return RL algorithms; finally, it lists successful applications in nonlinear control, robot control, AI problem solving, multi-agent systems, and other areas, together with future research directions.

14.
Near-Optimal Reinforcement Learning in Polynomial Time   Cited by: 1 (self-citations: 0, other citations: 1)
Kearns, Michael; Singh, Satinder. Machine Learning, 2002, 49(2-3): 209-232
We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.
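The explicit exploration/exploitation handling rests on a "known state" device, sketched below as bookkeeping only (a sketch of the device, not the full E^3 algorithm; names and the threshold parameter are illustrative):

```python
from collections import defaultdict

class KnownStateCounts:
    """Visit counts behind the explicit exploration/exploitation split:
    exploit inside 'known' states, plan to escape to unknown ones otherwise."""
    def __init__(self, m_known):
        self.m = m_known
        self.n = defaultdict(int)            # visit counts per (s, a)

    def record(self, s, a):
        self.n[(s, a)] += 1

    def known(self, s, actions):
        # A state is 'known' once every action has been tried m times,
        # so its empirical transition model is trustworthy.
        return all(self.n[(s, a)] >= self.m for a in actions)
```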

15.
Incremental Multi-Step Q-Learning   Cited by: 23 (self-citations: 0, other citations: 23)
Peng, Jing; Williams, Ronald J. Machine Learning, 1996, 22(1-3): 283-290
This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming based reinforcement learning method, with the TD(λ) return estimation process, which is typically used in actor-critic learning, another well-known dynamic-programming based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations.
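A simplified tabular sketch of the combination: Q-learning credit spread along the trajectory by TD(λ)-style eligibility traces. Traces are kept across exploratory steps, in the spirit of Peng & Williams, but this is not the paper's exact incremental formulation, and the `env` interface is an assumption:

```python
import numpy as np

def q_lambda_episode(Q, env, alpha=0.1, gamma=0.99, lam=0.9, eps=0.1, rng=None):
    """Run one episode of tabular Q(lambda) and update Q in place."""
    rng = rng or np.random.default_rng()
    E = np.zeros_like(Q)                      # eligibility traces
    s, done = env.reset(), False
    while not done:
        a = (int(rng.integers(Q.shape[1])) if rng.random() < eps
             else int(np.argmax(Q[s])))
        s2, r, done = env.step(a)             # assumed env interface
        delta = r + gamma * np.max(Q[s2]) * (not done) - Q[s, a]
        E[s, a] += 1.0                        # accumulating trace
        Q += alpha * delta * E                # credit the whole traced history
        E *= gamma * lam                      # decay traces
        s = s2
    return Q
```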

16.
To address the heavy communication and computation costs in multi-agent reinforcement learning, this paper proposes an event-driven multi-agent RL algorithm, focusing on event triggering at the policy-learning level. During agent-environment interaction, the algorithm designs a trigger function based on the rate of change of each agent's observations, so that communication and learning during the learning process need not occur in real time or at fixed periods, reducing the number of data transmissions and computations within the same time span. The paper also analyzes the algorithm's computational resource consumption and proves its convergence. Finally, simulations show that the algorithm reduces the number of communications and policy traversals during learning, thereby easing the consumption of communication and computation resources.
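An illustrative threshold rule for such a trigger (the paper's exact trigger function is not reproduced here; the names and the norm-based rate are assumptions):

```python
import numpy as np

def should_communicate(obs, last_obs, last_time, t, threshold):
    """Fire an event when the observation's rate of change since the last
    event exceeds a threshold; agents communicate/update only on events."""
    rate = (np.linalg.norm(np.asarray(obs) - np.asarray(last_obs))
            / max(t - last_time, 1e-9))
    return rate > threshold
```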

17.
This paper presents two dynamic and distributed clustering algorithms for Wireless Sensor Networks (WSNs). Clustering approaches are used in WSNs to improve the network lifetime and scalability by balancing the workload among the clusters. Each cluster is managed by a cluster head (CH) node. The first algorithm requires the CH nodes to be mobile: by dynamically varying the CH node positions, the algorithm is proved to converge to a specific partition of the mission area, the generalised Voronoi tessellation, in which the loads of the CH nodes are balanced. Conversely, if the CH nodes are fixed, a weighted Voronoi clustering approach is proposed with the same load-balancing objective: a reinforcement learning approach is used to dynamically vary the mission space partition by controlling the weights of the Voronoi regions. Numerical simulations are provided to validate the approaches.
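A minimal sketch of the weighted Voronoi assignment step, assuming a multiplicative weighting rule (the paper's exact weighting scheme may differ; the RL controller would adapt `weights`):

```python
import numpy as np

def weighted_voronoi_assign(points, ch_positions, weights):
    """Assign each sensor node to the cluster head minimising
    distance(node, CH) / weight(CH); larger weights enlarge a CH's region."""
    d = np.linalg.norm(points[:, None, :] - ch_positions[None, :, :], axis=2)
    return np.argmin(d / weights[None, :], axis=1)   # CH index per node
```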

18.
Mahadevan, Sridhar. Machine Learning, 1996, 22(1-3): 159-195
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced, and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
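For reference, one tabular R-learning update (Schwartz's average-reward method studied in the paper) can be sketched as follows; the function name and the explicit `greedy` flag are presentation choices of this sketch:

```python
import numpy as np

def r_learning_step(R, rho, s, a, r, s2, greedy, alpha=0.1, beta=0.01):
    """R holds relative action values; rho is the running average-reward
    estimate, updated only when the executed action was greedy."""
    R[s, a] += alpha * (r - rho + np.max(R[s2]) - R[s, a])
    if greedy:
        rho += beta * (r + np.max(R[s2]) - np.max(R[s]) - rho)
    return R, rho
```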

19.
In this work, we combined model based reinforcement learning (MBRL) and model free reinforcement learning (MFRL) to stabilize a biped robot (NAO robot) on a rotating platform, where the angular velocity of the platform is unknown to the proposed learning algorithm and treated as an external disturbance. Nonparametric Gaussian processes normally require a large number of training data points to deal with the discontinuity of the estimated model. Although improved methods such as probabilistic inference for learning control (PILCO) do not require an explicit global model, since actions are obtained by directly searching the policy space, overfitting and lack of model complexity may still produce a large deviation between the prediction and the real system. Besides, none of these approaches consider the data error and measurement noise during the training and test processes, respectively. We propose a hierarchical Gaussian processes (GP) model, containing two layers of independent GPs, from which a physically continuous probability transition model of the robot is obtained. Owing to the physically continuous estimation, the algorithm overcomes the overfitting problem with a guaranteed model complexity, and the amount of training data is also reduced. The policy for any given initial state is generated automatically by minimizing the expected cost according to the predefined cost function and the obtained probability distribution of the state. Furthermore, a novel Q(λ)-based MFRL scheme is employed to improve the policy. Simulation results show that the proposed RL algorithm is able to balance the NAO robot on a rotating platform, and that it is capable of adapting to a platform with varying angular velocity.

20.
A new utility-clustering reinforcement learning algorithm, U-Clustering, is proposed. Unlike the U-Tree algorithm, it requires no generation and testing of fringe nodes: it first clusters instances according to the observation-action values along instance chains, then performs feature selection on each cluster, and finally compresses the features; the compressed new features become the new state-space tree nodes. Simulations on New York Driving [2,13] and experimental analysis show that U-Clustering is an effective algorithm for large partially observable environments.
