Similar documents
18 similar documents found (search time: 203 ms)
1.
In the value function approximation of policy iteration reinforcement learning, the choice of basis functions directly affects performance. To better capture the topology of the environment, the geodesic distance is used in place of the Euclidean distance in ordinary Gaussian functions, and a policy iteration reinforcement learning method based on geodesic Gaussian basis functions is proposed. First, a graph-theoretic description of the environment is built from samples drawn from the Markov decision process. Second, geodesic Gaussian basis functions are defined on the graph, and the geodesic distances are approximated by shortest paths computed with a fast shortest-path algorithm. Then, assuming that the state-action value function of the reinforcement learning system is a weighted combination of the given geodesic Gaussian basis functions, the weights are updated online and incrementally by recursive least squares. Finally, the policy is improved based on the estimated value function. Simulation results on 10×10 and 20×20 maze problems verify the effectiveness of the proposed policy iteration method.
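A minimal sketch (not the authors' code) of the geodesic Gaussian basis idea: sampled states are connected into a k-nearest-neighbour graph, geodesic distances are approximated by graph shortest paths, and each basis function is a Gaussian of that distance around a chosen centre state. The k-NN construction and all parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def geodesic_gaussian_basis(states, centers_idx, k=8, sigma=1.0):
    """states: (n, d) sampled states; centers_idx: indices of basis centres."""
    n = len(states)
    euclid = cdist(states, states)                  # pairwise Euclidean distances
    # keep only each state's k nearest neighbours to form the graph (zeros = no edge)
    graph = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(euclid[i])[1:k + 1]
        graph[i, nbrs] = euclid[i, nbrs]
    geodesic = shortest_path(graph, directed=False)  # approximate geodesic distances
    # each column j is one geodesic Gaussian basis function evaluated at all states
    return np.exp(-geodesic[:, centers_idx] ** 2 / (2.0 * sigma ** 2))
```

The resulting feature matrix would then be used as the regressor whose weighted combination approximates the state-action value function, with the weights fitted by recursive least squares as described above.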

2.
傅启明  刘全  伏玉琛  周谊成  于俊 《软件学报》2013,24(11):2676-2686
In large-scale or continuous state spaces, combining function approximation with reinforcement learning is a current research focus in machine learning, and balancing exploration and exploitation during learning is a long-standing difficulty for reinforcement learning. To address the exploration-exploitation trade-off in deterministic problems with large-scale or continuous state spaces, an approximate policy iteration algorithm based on Gaussian processes is proposed. The algorithm models the parameterized value function with a Gaussian process and, combined with a generative model, obtains the posterior distribution of the value function by Bayesian inference. During learning, the information value gain of each action is computed from the probability distribution of the value function and combined with the expected value to select actions. To a certain extent, this resolves the exploration-exploitation trade-off and speeds up convergence. Applied to the classic Mountain Car problem, experimental results show that the algorithm converges quickly and with good accuracy.
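A minimal sketch (not the paper's implementation) of the underlying idea: model the value function with a Gaussian process and use its posterior mean together with its posterior uncertainty to score candidate actions. The simple uncertainty bonus below stands in for the paper's information-value-gain term; kernel, noise level, and `beta` are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Standard GP regression posterior mean and variance of the value function."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

def select_action(candidate_sa, X_train, y_train, beta=1.0):
    """Score candidate state-action pairs by expected value plus an uncertainty
    bonus (a stand-in for the information value gain described above)."""
    mean, var = gp_posterior(X_train, y_train, candidate_sa)
    return int(np.argmax(mean + beta * np.sqrt(var)))
```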

3.
季挺  张华 《控制与决策》2017,32(12):2153-2161
To address the heavy computational cost of current approximate policy iteration reinforcement learning algorithms and the fact that their basis functions cannot be constructed fully automatically, a non-parametric approximate generalized policy iteration reinforcement learning algorithm based on state clustering (NPAGPI-SC) is proposed. The algorithm collects samples via a two-stage random sampling process, computes the initial approximator parameters with a trial-and-error procedure and an estimation method aiming at complete sample coverage, adapts the approximator during learning using the delta rule and the nearest-neighbor idea, and selects the action to execute with a greedy policy. Simulation results on the balance control of a single inverted pendulum verify the effectiveness and robustness of the proposed algorithm.

4.
Because learning an action policy with reinforcement learning is time-consuming, a heuristic reinforcement learning method based on state backtracking is proposed. Repeated states during reinforcement learning are analyzed and, by comparing the selection strategies of repeated actions during state backtracking, a cost function is introduced to describe the importance of repeated actions. Combining action reward with action cost, a new heuristic function is defined. While emphasizing action importance to speed up learning, the heuristic function uses the cost function to evaluate the cost of action selection and thus reduces unnecessary exploration, steadily improving learning efficiency. The action selection strategy based on the cost function is proved. Two simulation scenarios are built and the algorithm is applied to simulated robot path planning. Experimental results show that the heuristic reinforcement learning method based on state backtracking balances the reward obtained against the cost incurred and effectively improves the convergence speed of Q-learning.
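A minimal sketch (assuming a tabular Q-learning setting) of combining a reward-based heuristic with a cost for repeated actions, in the spirit of the state-backtracking heuristic described above. The concrete forms of the heuristic and cost terms here are illustrative, not the paper's.

```python
import numpy as np
from collections import defaultdict

class HeuristicQLearner:
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, eta=0.5):
        self.Q = defaultdict(lambda: np.zeros(n_actions))
        self.visits = defaultdict(lambda: np.zeros(n_actions))  # repeated-action counts
        self.alpha, self.gamma, self.eta = alpha, gamma, eta

    def heuristic(self, s):
        # reward term (current Q estimate) minus a cost term that grows with how
        # often each action was already tried in this state, curbing re-exploration
        cost = np.log1p(self.visits[s])
        return self.Q[s] - self.eta * cost

    def act(self, s, eps=0.1):
        if np.random.rand() < eps:
            return np.random.randint(len(self.Q[s]))
        return int(np.argmax(self.heuristic(s)))

    def update(self, s, a, r, s_next):
        self.visits[s][a] += 1
        target = r + self.gamma * np.max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])
```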

5.
A class of hybrid transfer algorithms for reinforcement learning based on spectral methods
In transfer tasks where the state space is scaled up proportionally, the proto-value function method can only effectively transfer the basis functions corresponding to smaller eigenvalues; when these are used for value function approximation in the target task, the value function of some states becomes incorrect. To address this problem, exploiting the property that the Laplacian eigenmap preserves the local topological structure of the state space, the hierarchical decomposition technique based on spectral graph theory is improved, and a hybrid transfer method combining basis functions with the optimal policies of subtasks is proposed. First, basis functions are computed in the source task with the spectral method and extended to the target task by linear interpolation. Then, the interpolated secondary basis function (the approximate Fiedler eigenvector of the target task) is used to decompose the task, and the improved hierarchical decomposition technique is used to obtain the optimal policies of the related subtasks. Finally, the extended basis functions and the obtained subtask policies are used together in learning the target task. The proposed hybrid transfer method can directly determine the optimal policy for part of the target task's state space, reduces the minimum number of basis functions needed for value function approximation, and lowers the number of policy iterations; it is suitable for transfer tasks with proportionally scaled-up state spaces and a hierarchical structure. Simulation results on grid-world tasks verify the effectiveness of the new method.
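A minimal sketch (not the paper's algorithm) of the spectral ingredients it builds on: proto-value functions as the smoothest eigenvectors of the source task's graph Laplacian, and a simple nearest-neighbour interpolation as a crude stand-in for the paper's linear interpolation that extends them to the target task's states. All numerical choices are assumptions.

```python
import numpy as np

def proto_value_functions(adjacency, k=5):
    """Return the k eigenvectors of the normalized graph Laplacian with the
    smallest eigenvalues (the smoothest basis functions on the source graph)."""
    deg = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]            # columns = basis functions on source states

def interpolate_basis(source_coords, target_coords, source_basis, n_nbrs=3):
    """Extend each source basis function to target states by inverse-distance
    weighting over the nearest source states."""
    out = np.zeros((len(target_coords), source_basis.shape[1]))
    for i, x in enumerate(target_coords):
        d = np.linalg.norm(source_coords - x, axis=1)
        nbrs = np.argsort(d)[:n_nbrs]
        w = 1.0 / (d[nbrs] + 1e-9)
        out[i] = (w @ source_basis[nbrs]) / w.sum()
    return out
```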

6.
To address the problem that traditional reinforcement learning methods discretize the state space and therefore cannot guarantee the trajectory accuracy of a UAV in complex application scenarios, the Least-Squares Policy Iteration (LSPI) algorithm is used to study trajectory planning in continuous state spaces. The algorithm approximates the action-value function with a parameterized linear function approximator, requiring no discretization of the space and improving trajectory accuracy; the policy is computed offline from sample data and is evaluated and improved directly. Comparative simulations against Q-learning show that the three-dimensional trajectories planned by LSPI are smoother and better suited to actual flight.
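A minimal sketch of the LSTD-Q evaluation step inside LSPI with a linear action-value approximator Q(s, a) ≈ w·phi(s, a), as used for the continuous-state planning above. The feature map `phi`, the action set, and the sample collection are placeholders to be supplied by the task.

```python
import numpy as np

def lstdq(samples, phi, policy, n_features, gamma=0.95, reg=1e-3):
    """samples: list of (s, a, r, s_next); policy(s) gives the greedy action
    under the current weights; returns the new weight vector."""
    A = reg * np.eye(n_features)
    b = np.zeros(n_features)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, n_features, n_iters=20, gamma=0.95):
    """Alternate policy evaluation (LSTD-Q) and greedy policy improvement."""
    w = np.zeros(n_features)
    for _ in range(n_iters):
        policy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)
        w = lstdq(samples, phi, policy, n_features, gamma)
    return w
```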

7.
For model-free nonlinear systems with continuous state spaces, a multi-step reinforcement learning control algorithm based on radial basis function (RBF) neural networks is proposed. First, a neural network is introduced into the reinforcement learning system, and the function approximation capability of the RBF network is used to represent the state-action value function, solving the representation problem for continuous state spaces. Then, eligibility traces are combined to form a multi-step Sarsa algorithm, which records visited states to improve learning efficiency. Finally, the softmax policy is improved by decaying the temperature parameter, optimizing action-selection probabilities and balancing exploration and exploitation. Simulations on the MountainCar task show that the proposed algorithm can effectively control a model-free continuous nonlinear system after only a small amount of training; compared with the single-step algorithm, it needs fewer steps on average to converge and is more stable, showing that nonlinear value function approximation combined with multi-step methods can also perform well in control tasks.
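A minimal sketch (MountainCar-style, with illustrative constants) of the three ingredients described above: RBF features over the continuous state, a Sarsa(λ) update with eligibility traces, and a softmax policy whose temperature is decayed by the caller across episodes.

```python
import numpy as np

class RBFSarsaLambda:
    def __init__(self, centers, n_actions, sigma=0.3, alpha=0.05,
                 gamma=0.99, lam=0.9):
        self.centers, self.sigma = centers, sigma
        self.w = np.zeros((n_actions, len(centers)))
        self.e = np.zeros_like(self.w)               # eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def phi(self, s):
        d2 = ((self.centers - s) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.sigma ** 2))   # RBF features of the state

    def q(self, s):
        return self.w @ self.phi(s)

    def act(self, s, temperature):
        prefs = self.q(s) / max(temperature, 1e-3)   # temperature decayed externally
        p = np.exp(prefs - prefs.max())
        return np.random.choice(len(p), p=p / p.sum())

    def update(self, s, a, r, s_next, a_next, done):
        f = self.phi(s)
        delta = r - self.q(s)[a]
        if not done:
            delta += self.gamma * self.q(s_next)[a_next]
        self.e *= self.gamma * self.lam              # decay all traces
        self.e[a] += f                               # accumulate trace for taken action
        self.w += self.alpha * delta * self.e
```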

8.
Q-learning based on cooperative least squares support vector machines
To address the slow convergence of reinforcement learning systems, a Q-learning method based on cooperative least squares support vector machines is proposed for continuous state spaces and discrete action spaces. The Q-learning system consists of a least squares support vector regression machine (LS-SVRM) and a least squares support vector classification machine (LS-SVCM). The LS-SVRM approximates the mapping from state-action pairs to the value function, while the LS-SVCM approximates the mapping from the continuous state space to the discrete action space and provides the LS-SVRM with real-time, dynamic knowledge or advice (suggested action values) to facilitate learning of the value function. Simulation results on the minimum-time mountain car control problem show that, compared with a Q-learning system based on a single LS-SVRM, the method speeds up convergence and has better learning performance.

9.
Applying function approximation to reinforcement learning is a new research focus in machine learning. To address the slow or failed convergence of the traditional Q(λ) learning algorithm based on lookup tables or function approximation in large-scale state spaces, an off-policy Q(λ) algorithm based on linear function approximation is proposed. By introducing an importance-weighting factor, the algorithm makes the on-policy and off-policy settings coincide as the number of iterations grows, which ensures convergence. Under the premise that the on-policy and off-policy sample data are consistent, the convergence of the algorithm is proved theoretically. The proposed algorithm is applied to Baird's counterexample, Mountain-Car, and Random Walk. Experimental results show that, compared with traditional off-policy algorithms based on function approximation, it has better convergence; compared with traditional lookup-table-based algorithms, it converges faster and is more robust to growth of the state space.
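A minimal sketch (not the paper's exact scheme) of off-policy TD(λ)-style learning with linear features and per-step importance ratios between a target policy and the behaviour policy that generated the data. The feature map and the two policy-probability functions are placeholders.

```python
import numpy as np

def offpolicy_q_lambda(episodes, phi, n_features, pi_prob, mu_prob,
                       gamma=0.99, lam=0.8, alpha=0.01):
    """episodes: list of trajectories [(s, a, r, s_next, a_next), ...];
    pi_prob(a, s) / mu_prob(a, s): action probabilities under target / behaviour."""
    w = np.zeros(n_features)
    for episode in episodes:
        e = np.zeros(n_features)                          # eligibility trace
        for s, a, r, s_next, a_next in episode:
            rho = pi_prob(a, s) / max(mu_prob(a, s), 1e-8)  # importance ratio
            f, f_next = phi(s, a), phi(s_next, a_next)
            delta = r + gamma * (w @ f_next) - (w @ f)    # TD error
            e = rho * (gamma * lam * e + f)               # importance-weighted trace
            w += alpha * delta * e
    return w
```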

10.
季挺  张华 《计算机应用》2018,38(5):1230-1238
To address the problem that the approximators of current approximate policy iteration reinforcement learning algorithms cannot be constructed fully automatically, a non-parametric approximate policy iteration reinforcement learning algorithm based on the Dyna framework (NPAPI-Dyna) is proposed. A two-stage random sampling process, designed around a sample buffer and a sampling change rate, collects samples; a trial-and-error procedure based on the silhouette index and K-means clustering generates the core state basis functions; an estimation method aiming at complete sample coverage generates the Q-value function approximator; the action selector uses a greedy policy; and the visit frequencies of the state basis functions describe the topological features of the environment and are used to build an environment model. Then, following the model-identification idea of the Dyna framework, learning and planning are combined to further speed up reinforcement learning. In simulations of single inverted pendulum balance control, when the reinforcement learning error rate is 0.01 the algorithm achieves a 100% learning success rate, the minimum number of trials needed for success is only 2, the average is 7.73, the mean absolute angle deviation is 3.0538°, and the mean angle oscillation range is 2.759°. When the error rate is 0.1, over 100 independent runs, Online-LSPI and BLSPI need more than 150 trials on average to learn a control policy, whereas NPAPI-Dyna generally succeeds within 50 trials. The analysis shows that NPAPI-Dyna can construct and adjust the reinforcement learning structure fully automatically, with high accuracy and fast convergence.
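A minimal sketch of the Dyna idea that NPAPI-Dyna builds on: direct reinforcement learning updates from real transitions plus extra planning updates replayed from a learned model. It is shown here as plain tabular Dyna-Q rather than the paper's non-parametric approximate policy iteration, purely to illustrate how learning and planning interleave.

```python
import numpy as np
import random
from collections import defaultdict

class DynaQ:
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, planning_steps=20):
        self.Q = defaultdict(lambda: np.zeros(n_actions))
        self.model = {}                               # (s, a) -> (r, s_next)
        self.alpha, self.gamma, self.n_plan = alpha, gamma, planning_steps

    def learn(self, s, a, r, s_next):
        # direct reinforcement learning from the real transition
        self._q_update(s, a, r, s_next)
        # model identification: remember the observed transition
        self.model[(s, a)] = (r, s_next)
        # planning: replay simulated transitions drawn from the learned model
        for _ in range(self.n_plan):
            (ps, pa), (pr, ps_next) = random.choice(list(self.model.items()))
            self._q_update(ps, pa, pr, ps_next)

    def _q_update(self, s, a, r, s_next):
        target = r + self.gamma * np.max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])
```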

11.
RRL is a relational reinforcement learning system based on Q-learning in relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. For relational reinforcement learning, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-value has to be very reliable, and it has to be able to handle the relational representation of state-action pairs. In this paper we investigate the use of Gaussian processes to approximate the Q-values of state-action pairs. In order to employ Gaussian processes in a relational setting we propose graph kernels as a covariance function between state-action pairs. The standard prediction mechanism for Gaussian processes requires a matrix inversion which can become unstable when the kernel matrix has low rank. These instabilities can be avoided by employing QR-factorization. This leads to better and more stable performance of the algorithm and a more efficient incremental update mechanism. Experiments conducted in the blocks world and with the Tetris game show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance based regression as a generalization algorithm for RRL.
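A minimal sketch of the numerical point made above: computing Gaussian-process predictions through a QR factorization of the (noise-regularized) kernel matrix instead of an explicit inverse, which is more stable when the kernel matrix is close to rank-deficient. The kernel matrices here are generic placeholders, not graph kernels over relational state-action pairs.

```python
import numpy as np

def gp_predict_qr(K_train, y_train, K_cross, noise=1e-3):
    """K_train: kernel matrix of training pairs; K_cross: kernel between test
    and training pairs; returns the posterior mean of Q-values at test pairs."""
    A = K_train + noise * np.eye(len(K_train))
    Q, R = np.linalg.qr(A)                        # A = Q R, R upper-triangular
    alpha = np.linalg.solve(R, Q.T @ y_train)     # solve R alpha = Q^T y, no inverse
    return K_cross @ alpha
```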

12.
The least-squares policy iteration approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular and useful choice as a basis function. However, it does not allow for discontinuity which typically arises in real-world reinforcement learning tasks. In this paper, we propose a new basis function based on geodesic Gaussian kernels, which exploits the non-linear manifold structure induced by the Markov decision processes. The usefulness of the proposed method is successfully demonstrated in simulated robot arm control and Khepera robot navigation.

13.
Control of nonlinear systems is challenging in real time. Decision making, performed many times per second, must ensure system safety. Designing input to perform a task often involves solving a nonlinear system of differential equations, which is a computationally intensive, if not intractable, problem. This article proposes sampling-based task learning for control-affine nonlinear systems through the combined learning of both state and action-value functions in a model-free approximate value iteration setting with continuous inputs. A quadratic negative definite state-value function implies the existence of a unique maximum of the action-value function at any state. This allows the replacement of the standard greedy policy with a computationally efficient policy approximation that guarantees progression to a goal state without knowledge of the system dynamics. The policy approximation is consistent, i.e., it does not depend on the action samples used to calculate it. This method is appropriate for mechanical systems with high-dimensional input spaces and unknown dynamics performing Constraint-Balancing Tasks. We verify it both in simulation and experimentally for an unmanned aerial vehicle (UAV) carrying a suspended load, and in simulation for the rendezvous of heterogeneous robots.

14.
Adaptive Radial Basis Decomposition by Learning Vector Quantization
A method for function approximation in reinforcement learning settings is proposed. The action-value function of the Q-learning method is approximated by a radial basis function neural network and learned by gradient descent. Those radial basis units that are unable to fit the local action-value function closely enough are decomposed into new units with smaller widths. The local temporal-difference error is modelled by a two-class learning vector quantization algorithm, which approximates the distributions of the positive and negative errors and provides the centers of the new units. This method is especially convenient for smooth value functions with large local variation in certain parts of the state space, where non-uniform placement of basis functions is required. In comparison with four related methods, it requires the fewest basis functions while achieving comparable accuracy.

15.
As an important approach to solving complex sequential decision problems, reinforcement learning (RL) has been widely studied in the artificial intelligence and machine learning communities. However, the generalization ability of RL is still an open problem, and it is difficult for existing RL algorithms to solve Markov decision problems (MDPs) with both continuous state and action spaces. In this paper, a novel RL approach with fast policy search and adaptive basis function selection, called Continuous-action Approximate Policy Iteration (CAPI), is proposed for RL in MDPs with both continuous state and action spaces. In CAPI, based on the value functions estimated by temporal-difference learning, a fast policy search technique is suggested to search for optimal actions in continuous spaces, which is computationally efficient and easy to implement. To improve the generalization ability and learning efficiency of CAPI, two adaptive basis function selection methods are developed so that sparse approximations of value functions can be obtained efficiently both for linear function approximators and kernel machines. Simulation results on benchmark learning control tasks with continuous state and action spaces show that the proposed approach not only converges to a near-optimal policy in a few iterations but also obtains comparable or even better performance than Sarsa-learning and previous approximate policy iteration methods such as LSPI and KLSPI.

16.
To address the high computational complexity and slow convergence of online approximate policy iteration reinforcement learning, the CMAC structure is introduced as the value function approximator, and a CMAC-based non-parametric approximate policy iteration reinforcement learning algorithm (NPAPI-CMAC) is proposed. The algorithm determines the CMAC generalization parameter by constructing a sample-collection process, determines the CMAC state partitioning through an initial partition and an extended partition, and defines the learning rate from sample-count sets built with a quantized coding structure, so that the reinforcement learning structure and parameters are constructed fully automatically. In addition, the algorithm adaptively adjusts the learning parameters during learning using the delta rule and the nearest-neighbor idea, and applies a greedy policy to the results produced by the action voter. Simulation results on single inverted pendulum balance control verify the effectiveness, robustness, and fast convergence of the algorithm.
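A minimal sketch (illustrative parameters, not NPAPI-CMAC itself) of a CMAC / tile-coding value function approximator: several overlapping, offset tilings of the continuous state space each contribute one weight, and Q(s, a) is the sum of the active weights.

```python
import numpy as np

class CMAC:
    def __init__(self, n_tilings, bins_per_dim, low, high, n_actions, alpha=0.1):
        self.n_tilings, self.bins = n_tilings, bins_per_dim
        self.low, self.high = np.asarray(low, float), np.asarray(high, float)
        self.w = np.zeros((n_actions, n_tilings) + (bins_per_dim,) * len(self.low))
        self.alpha = alpha / n_tilings               # step size shared across tilings

    def _tiles(self, s):
        scaled = (np.asarray(s, float) - self.low) / (self.high - self.low)
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.bins)      # shift each tiling slightly
            idx = np.clip(((scaled + offset) * self.bins).astype(int),
                          0, self.bins - 1)
            yield (t, *idx)

    def q(self, s, a):
        return sum(self.w[a][tile] for tile in self._tiles(s))

    def update(self, s, a, target):
        err = target - self.q(s, a)
        for tile in self._tiles(s):
            self.w[a][tile] += self.alpha * err
```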

17.
Kernel-based least squares policy iteration for reinforcement learning.
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for high-precision policy evaluation. The other is automatic feature selection using ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI consistently achieves better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
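A minimal sketch of the approximate linear dependency (ALD) test used for kernel sparsification: a new sample is added to the dictionary only if its feature-space image cannot be approximated well enough by a linear combination of the current dictionary elements. The Gaussian kernel and threshold are illustrative choices, not KLSPI's fixed settings.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

def ald_sparsify(samples, kernel=gaussian_kernel, threshold=1e-3):
    """Return the indices of the samples kept as the sparse kernel dictionary."""
    dictionary = [0]                                  # start with the first sample
    for t in range(1, len(samples)):
        K = np.array([[kernel(samples[i], samples[j]) for j in dictionary]
                      for i in dictionary])
        k_t = np.array([kernel(samples[i], samples[t]) for i in dictionary])
        ktt = kernel(samples[t], samples[t])
        c = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k_t)
        delta = ktt - k_t @ c                         # ALD residual
        if delta > threshold:                         # not well approximated: keep it
            dictionary.append(t)
    return dictionary
```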

18.
This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a preference-based approach to reinforcement learning is the observation that in many real-world domains, numerical feedback signals are not readily available, or are defined arbitrarily in order to satisfy the needs of conventional RL algorithms. Instead, we propose an alternative framework for reinforcement learning, in which qualitative reward signals can be directly used by the learner. The framework may be viewed as a generalization of the conventional RL framework in which only a partial order between policies is required instead of the total order induced by their respective expected long-term reward. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. As a proof of concept, we realize a first simple instantiation of this framework that defines preferences based on utilities observed for trajectories. To that end, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of preference-based approximate policy iteration are illustrated by means of two case studies.
