Similar Documents
10 similar documents found (search time: 46 ms)
1.
Long-Ji Lin 《Machine Learning》1992,8(3-4):293-321
To date, reinforcement learning has mostly been studied on simple learning tasks, and the reinforcement learning methods studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to each basic method for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of the different frameworks, a moderately complex, nondeterministic dynamic environment was used as a testbed. This paper describes the frameworks and algorithms in detail and presents an empirical evaluation of the frameworks.
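For orientation, the experience-replay extension described above can be sketched in a few lines of tabular Q-learning (the paper itself uses connectionist function approximation); the environment interface `env.step`, buffer size, and hyperparameters below are assumptions for illustration only.

```python
import random
from collections import deque

import numpy as np

# Tabular Q-learning with an experience-replay buffer (illustrative hyperparameters;
# `env.step(s, a) -> (next_state, reward)` is an assumed interface).
n_states, n_actions = 100, 4
Q = np.zeros((n_states, n_actions))
replay = deque(maxlen=10_000)          # memory of (s, a, r, s') transitions
alpha, gamma, eps = 0.1, 0.95, 0.1

def q_update(s, a, r, s2):
    # One-step Q-learning backup.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

def act(s):
    # Epsilon-greedy action selection.
    return random.randrange(n_actions) if random.random() < eps else int(Q[s].argmax())

def train_step(env, s, batch_size=32):
    a = act(s)
    s2, r = env.step(s, a)             # real experience
    replay.append((s, a, r, s2))
    q_update(s, a, r, s2)              # learn from the fresh transition
    # Experience replay: re-learn from a random batch of stored transitions.
    for s_, a_, r_, s2_ in random.sample(list(replay), min(batch_size, len(replay))):
        q_update(s_, a_, r_, s2_)
    return s2
```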

2.
Gradient Algorithms for Reinforcement Learning with Neural Networks   Cited by: 11 (self-citations: 1, other citations: 11)
徐昕  贺汉根 《计算机学报》2003,26(2):227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses a multilayer feedforward neural network for value-function approximation. The algorithm adopts a nearly greedy, continuously differentiable Boltzmann-distribution action-selection policy, and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under the non-stationary behavior policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analyzed theoretically, and simulation studies on the Mountain-Car learning control problem further verify the algorithm's learning efficiency and generalization performance.
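A minimal sketch of the general approach, not the authors' exact algorithm: a feed-forward Q-network, Boltzmann (softmax) action selection, and gradient descent on the squared Bellman residual. The network size, temperature `tau`, and environment interface are assumptions.

```python
import torch
import torch.nn as nn

# Feed-forward Q-network, Boltzmann action selection, and gradient descent on the
# squared Bellman residual (network size, temperature, and env interface are assumed).
state_dim, n_actions, tau, gamma = 2, 3, 0.5, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
opt = torch.optim.SGD(q_net.parameters(), lr=1e-3)

def boltzmann_action(state):
    # Near-greedy, continuously differentiable policy: P(a|s) is proportional to exp(Q(s,a)/tau).
    with torch.no_grad():
        probs = torch.softmax(q_net(state) / tau, dim=-1)
    return int(torch.multinomial(probs, 1))

def bellman_residual_step(s, a, r, s2):
    # Minimize the squared Bellman residual (r + gamma * max_a' Q(s',a') - Q(s,a))^2;
    # the gradient flows through both Q(s,a) and Q(s',.), i.e. a residual-gradient update.
    residual = r + gamma * q_net(s2).max() - q_net(s)[a]
    loss = residual.pow(2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```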

3.
邵杰  杜丽娟  杨静宇 《计算机科学》2013,40(8):249-251,292
The XCS classifier system has shown considerable power in robot reinforcement learning, but in the multi-robot domain it has been limited to MDP environments and can only handle learning problems with small state spaces. XCSG is proposed to address multi-robot reinforcement learning: it builds a low-dimensional approximate value function, and gradient-descent techniques use online knowledge to keep this approximation stable, so the Q-table remains in a stable, low-dimensional state. The approximate function Q not only requires less storage, but also allows robots to generalize acquired knowledge online. Simulation experiments show that XCSG effectively addresses the large learning space, slow learning speed, and uncertain learning outcomes of multi-robot learning.

4.
Mahadevan  Sridhar 《Machine Learning》1996,22(1-3):159-195
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
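For reference, the core R-learning backup discussed in the empirical study keeps a table of relative action values and a running estimate of the average reward; the following tabular sketch is one common formulation (step sizes and the environment interface are assumptions).

```python
import numpy as np

# Tabular R-learning sketch (average-reward RL); step sizes and env interface assumed.
n_states, n_actions = 50, 4
R = np.zeros((n_states, n_actions))   # relative (bias) action values
rho = 0.0                             # running estimate of the average reward
alpha, beta = 0.1, 0.01               # value and average-reward step sizes

def r_learning_update(s, a, r, s2):
    global rho
    greedy = R[s, a] == R[s].max()    # was the chosen action greedy in s?
    # Relative-value backup: (r - rho) plays the role of the discounted reward.
    R[s, a] += alpha * (r - rho + R[s2].max() - R[s, a])
    if greedy:
        # The average-reward estimate is adjusted only on greedy steps.
        rho += beta * (r + R[s2].max() - R[s].max() - rho)
```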

5.
This article introduces the reinforcement learning model and presents its four main algorithms — dynamic programming, Monte Carlo methods, temporal-difference learning, and Q-learning — pointing out the differences and connections among them. Finally, two applications of reinforcement learning and directions for future research are given.
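To make the contrast between the model-based and sample-based methods listed above concrete, the sketch below shows a dynamic-programming value-iteration sweep (which needs the transition model) next to a Monte Carlo return estimate (which needs only a sampled episode); the array shapes and names are illustrative.

```python
import numpy as np

# Model-based vs. sample-based backups. P is a list of per-action transition matrices
# P[a][s, s'], Rm[s, a] is the expected immediate reward; both names are illustrative.
def value_iteration_sweep(V, P, Rm, gamma=0.9):
    # Dynamic programming needs the full model to compute expected backups.
    q = Rm + gamma * np.stack([P[a] @ V for a in range(len(P))], axis=1)
    return q.max(axis=1)

def monte_carlo_return(rewards, gamma=0.9):
    # Monte Carlo needs only a sampled episode: the discounted sum of observed rewards.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```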

6.
To address the unreasonable allocation of computational resources between "planning" and "learning" in the existing Dyna reinforcement learning architecture, a phased Dyna architecture is proposed. As experiential knowledge accumulates, the learning process is divided into an exploration phase, a variable-weight learning phase, and an optimization phase, in which "planning" and "learning" are coordinated separately, greatly reducing wasted computation. Combined with the traditional Q-learning algorithm, a phased Dyna-Q reinforcement learning algorithm is designed to cope with tasks in dynamic, uncertain environments. On a standard reinforcement learning problem, the phased Dyna algorithm is shown to learn better than the basic Dyna algorithm.
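For context, the basic Dyna-Q loop that the phased architecture reallocates interleaves one real-experience learning step with several planning steps from a learned model; the sketch below shows only that basic loop (the paper's phase-dependent scheduling of the planning budget `n_planning` is not reproduced), with a hypothetical environment interface.

```python
import random
import numpy as np

# Basic Dyna-Q loop: one real step of learning plus n_planning simulated steps of
# planning from a learned model (env interface and the planning budget are assumed).
n_states, n_actions = 100, 4
Q = np.zeros((n_states, n_actions))
model = {}                               # (s, a) -> (r, s'), a deterministic model
alpha, gamma, eps = 0.1, 0.95, 0.1

def dyna_q_step(env, s, n_planning):
    a = random.randrange(n_actions) if random.random() < eps else int(Q[s].argmax())
    s2, r = env.step(s, a)                                   # "learning" from real experience
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    model[(s, a)] = (r, s2)                                  # update the learned model
    for _ in range(n_planning):                              # "planning" from simulated experience
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
    return s2
```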

7.
Research Advances in Stochastic Gradient Descent Algorithms   Cited by: 5 (self-citations: 1, other citations: 5)
In machine learning, gradient descent is the most important and fundamental method for solving optimization problems. As data sets keep growing, traditional gradient descent can no longer solve large-scale machine learning problems efficiently. Stochastic gradient descent (SGD) replaces the full gradient at each iteration with the gradient of one or a few randomly selected samples, reducing the computational cost. In recent years, SGD has become a focus of machine learning and especially deep learning research, and continued exploration of search directions and step sizes has produced many improved variants; this paper surveys the main progress on these algorithms. The improvement strategies are roughly grouped into four classes: momentum, variance reduction, incremental gradients, and adaptive learning rates. The first three mainly correct the gradient or search direction, while the fourth adapts the step size separately for each component of the parameter vector. The core ideas and principles of the SGD variants under each strategy are introduced, and the differences and connections among the algorithms are discussed. The main SGD algorithms are applied to machine learning tasks such as logistic regression and deep convolutional neural networks, and their practical performance is compared quantitatively. The paper concludes by summarizing this work and discussing future directions for SGD research.
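As a concrete illustration of two of the four strategy families, the sketch below contrasts plain SGD, a momentum update (search-direction correction), and an Adam-style adaptive-learning-rate update (per-coordinate step size) applied to a single minibatch gradient `g`; all names and default values are illustrative.

```python
import numpy as np

# Single-step parameter updates given a minibatch gradient g (same shape as w);
# all step sizes and decay rates are illustrative defaults.
def sgd(w, g, lr=0.01):
    return w - lr * g                                        # plain stochastic gradient step

def sgd_momentum(w, g, v, lr=0.01, mu=0.9):
    v = mu * v + g                                           # momentum corrects the search direction
    return w - lr * v, v

def adam(w, g, m, s, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g                                # first-moment (mean) estimate
    s = b2 * s + (1 - b2) * g * g                            # second-moment estimate
    m_hat, s_hat = m / (1 - b1 ** t), s / (1 - b2 ** t)      # bias correction at step t
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s     # per-coordinate adaptive step
```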

8.
This study presents a wavelet-based neuro-fuzzy network (WNFN). The proposed WNFN model combines the traditional Takagi–Sugeno–Kang (TSK) fuzzy model with wavelet neural networks (WNN), adopting non-orthogonal, compactly supported functions as the wavelet neural network bases. A novel supervised evolutionary learning scheme, called WNFN-S, is proposed to tune the adjustable parameters of the WNFN model. The WNFN-S learning scheme is based on dynamic symbiotic evolution (DSE), which uses the sequential-search-based dynamic evolutionary (SSDE) method. In some real-world applications, exact training data may be expensive or even impossible to obtain. To solve this problem, a reinforcement evolutionary learning scheme, called WNFN-R, is also proposed. Computer simulations have been conducted to illustrate the performance and applicability of the proposed WNFN-S and WNFN-R learning algorithms.

9.
Design of Heuristic Reward Functions in Reinforcement Learning Algorithms and Analysis of Their Convergence   Cited by: 3 (self-citations: 0, other citations: 3)
(Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016)

10.
唐亮贵  刘波  唐灿  程代杰 《计算机科学》2007,34(11):156-158
Based on an in-depth analysis of how state and action spaces shift and are constructed during agent decision making, an optimal action-selection policy for agents based on reinforcement learning is designed, along with a neural-network model and algorithm for agent reinforcement learning, and the convergence of the algorithm is proved. Simulation experiments predicting agents' bidding behavior in a multi-agent e-commerce system verify that the neural-network-based agent reinforcement learning algorithm has good performance and behavior-approximation capability.
