Similar Literature
 20 similar articles found (search time: 31 ms)
1.
Policy Iteration Reinforcement Learning Based on Geodesic Gaussian Bases over State-Action Graphs   Total citations: 3 (self-citations: 2, others: 1)
In policy-iteration reinforcement learning, basis-function construction is an important factor affecting the approximation accuracy of the action-value function. To provide suitable basis functions for action-value function approximation, a policy-iteration reinforcement learning method based on geodesic Gaussian bases over the state-action graph is proposed. First, a graph-theoretic state-action description of the Markov decision process is built using an off-policy method; then, geodesic Gaussian kernel functions are defined on the state-action graph, and a kernel sparsification method based on approximate linear dependence is used to automatically select geodesic Gaussian...
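Since the abstract is truncated, here is a minimal sketch of the core construction it names, under assumed details: a Gaussian kernel whose distance is the shortest-path (geodesic) distance on the state-action graph rather than Euclidean distance. The toy graph and bandwidth sigma are illustrative only.

```python
# Sketch: geodesic Gaussian kernel on a state-action graph (assumed setup).
# Nodes are state-action pairs; edges connect transitions observed under the
# behavior policy. The kernel replaces Euclidean with geodesic distance.
import networkx as nx
import numpy as np

def geodesic_gaussian_kernel(graph, u, v, sigma=1.0):
    """k(u, v) = exp(-d_G(u, v)^2 / (2 sigma^2)), d_G = shortest-path distance."""
    try:
        d = nx.shortest_path_length(graph, u, v)
    except nx.NetworkXNoPath:
        return 0.0  # disconnected pairs contribute nothing
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# Toy state-action graph: a chain of 5 state-action nodes.
G = nx.path_graph(5)
print(geodesic_gaussian_kernel(G, 0, 3, sigma=2.0))
```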

2.
Relational reinforcement learning (RRL) combines traditional reinforcement learning (RL) with a strong emphasis on a relational (rather than attribute-value) representation. Earlier work used RRL on a learning version of the classic Blocks World planning problem (a version where the learner does not know what the result of taking an action will be) and the Tetris game. Learning results based on the structure of training examples were obtained, such as learning in a mixed 3–5 block environment and then performing in a 3 or 10 block environment. Here, we instead take a function approximation approach to RL for the Blocks World problem. We obtain similar learning accuracies, with better running times, allowing us to consider much larger problem sizes. For instance, we can train on 15 blocks and then perform well on worlds with 100–800 blocks, using less running time than the relational method required to perform well on 3–10 blocks.

3.
Gaussian Process Classification Combined with a Semi-Supervised Kernel   Total citations: 1 (self-citations: 0, others: 1)
A semi-supervised algorithm for learning Gaussian process classifiers is proposed, which supplies unlabeled-data information to the classifier through a nonparametric semi-supervised kernel. The algorithm consists of the following steps: 1) a kernel matrix that combines the information of labeled and unlabeled data is obtained through the spectral decomposition of the graph Laplacian; 2) convex optimization is used to learn the optimal weights of the kernel matrix's eigenvectors, constructing a nonparametric semi-supervised kernel; 3) the semi-supervised kernel is integrated into the Gaussian process model to build the proposed semi-supervised learning algorithm. The main feature of the algorithm is that it applies a nonparametric semi-supervised kernel based on the whole data set to a Gaussian process model, which has an explicit probabilistic formulation, can conveniently model the uncertainty among data, and can handle complex inference problems. Experimental results show that the algorithm is more reliable than competing methods.
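A minimal sketch of step 1 under assumed details: the kernel matrix is built from the spectral decomposition of a graph Laplacian over all (labeled and unlabeled) points. The paper learns the eigenvector weights by convex optimization; here a simple decaying function of the eigenvalues stands in for those learned weights.

```python
# Sketch: a nonparametric semi-supervised kernel from the graph Laplacian.
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_kernel_matrix(X_all, bandwidth=1.0, decay=1.0):
    # Adjacency from an RBF similarity over labeled + unlabeled points.
    W = np.exp(-cdist(X_all, X_all, "sqeuclidean") / (2 * bandwidth ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)       # spectral decomposition
    # Smoother eigenvectors (small eigenvalues) get larger weight; the paper
    # learns these weights by convex optimization instead of this heuristic.
    mu = 1.0 / (decay + eigvals)
    return eigvecs @ np.diag(mu) @ eigvecs.T   # semi-supervised kernel matrix

X = np.random.RandomState(0).randn(20, 2)      # 20 points, some labeled
K = laplacian_kernel_matrix(X)
print(K.shape)  # (20, 20) -- usable as a GP prior covariance over all points
```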

4.
Q-Learning Based on Cooperative Least Squares Support Vector Machines   Total citations: 5 (self-citations: 0, others: 5)
To address the slow convergence of reinforcement learning systems, a Q-learning method based on cooperative least squares support vector machines is proposed for continuous state spaces and discrete action spaces. The Q-learning system consists of a least squares support vector regression machine (LS-SVRM) and a least squares support vector classification machine (LS-SVCM). The LS-SVRM approximates the mapping from state-action pairs to the value function, while the LS-SVCM approximates the mapping from the continuous state space to the discrete action space and provides the LS-SVRM with real-time, dynamic knowledge or advice (suggested action values) to speed up value-function learning. Simulation results on the minimum-time mountain-car control task show that, compared with a Q-learning system based on a single LS-SVRM, the method accelerates learning convergence and achieves better learning performance.
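A sketch of the regression half of this design, with stated substitutions: scikit-learn's KernelRidge stands in for the LS-SVRM (the two are closely related least-squares kernel machines), a batch fitted-Q loop replaces the paper's online scheme, and the advice-giving LS-SVCM is omitted. The action set and hyperparameters are assumptions.

```python
# Sketch: Q-function approximation with a least-squares kernel regressor.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

ACTIONS = [-1.0, 0.0, 1.0]                    # discrete action set (assumed)

def fit_q(transitions, gamma=0.99, iters=10):
    """Fitted Q-iteration over (s, a, r, s2) tuples; s is a 1-D feature vector."""
    X = np.array([np.append(s, a) for s, a, _, _ in transitions])
    model = KernelRidge(kernel="rbf", gamma=0.5).fit(X, np.zeros(len(X)))
    for _ in range(iters):
        targets = []
        for s, a, r, s2 in transitions:
            q_next = max(model.predict([np.append(s2, a2)])[0] for a2 in ACTIONS)
            targets.append(r + gamma * q_next)     # one-step Q-learning target
        model = KernelRidge(kernel="rbf", gamma=0.5).fit(X, np.array(targets))
    return model
```

In use, the tuples would be collected from the mountain-car simulator and the fitted model queried greedily over ACTIONS to act.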

5.
Kernel-Based Reinforcement Learning   Total citations: 5 (self-citations: 0, others: 5)
Ormoneit, Dirk; Sen, Śaunak. Machine Learning, 2002, 49(2-3): 161-178
We present a kernel-based approach to reinforcement learning that overcomes the stability problems of temporal-difference learning in continuous state-spaces. First, our algorithm converges to a unique solution of an approximate Bellman's equation regardless of its initialization values. Second, the method is consistent in the sense that the resulting policy converges asymptotically to the optimal policy. Parametric value function estimates such as neural networks do not possess this property. Our kernel-based approach also allows us to show that the limiting distribution of the value function estimate is a Gaussian process. This information is useful in studying the bias-variance tradeoff in reinforcement learning. We find that all reinforcement learning approaches to estimating the value function, parametric or non-parametric, are subject to a bias. This bias is typically larger in reinforcement learning than in a comparable regression problem.
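A sketch of the kernel-averaging Bellman operator this line of work analyzes, under assumed 1-D states and a Gaussian smoothing kernel: the value of a query state-action pair is a normalized kernel average of sampled one-step returns, so the fixed-point iteration contracts to a unique solution regardless of initialization.

```python
# Sketch: kernel-based approximate Bellman operator over sampled transitions.
import numpy as np

def kernel_bellman(transitions, gamma=0.9, b=0.5, sweeps=50):
    """transitions[a] = (states, rewards, next_states) arrays for action a.
    Returns a Q(s, a) function built by normalized kernel averaging."""
    actions = list(transitions)
    # Value of each stored next-state, initialized to zero.
    V = {a: np.zeros(len(transitions[a][0])) for a in actions}

    def q(s, a):
        states, rewards, _ = transitions[a]
        w = np.exp(-((states - s) ** 2) / (2 * b * b))  # Gaussian weights
        w /= w.sum()
        return float(w @ (rewards + gamma * V[a]))

    for _ in range(sweeps):                    # fixed-point iteration
        for a in actions:
            _, _, next_states = transitions[a]
            V[a] = np.array([max(q(s2, a2) for a2 in actions)
                             for s2 in next_states])
    return q
```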

6.
Džeroski, Sašo; De Raedt, Luc; Driessens, Kurt. Machine Learning, 2001, 43(1-2): 7-52
Relational reinforcement learning is presented, a learning technique that combines reinforcement learning with relational learning or inductive logic programming. Due to the use of a more expressive representation language to represent states, actions and Q-functions, relational reinforcement learning can be potentially applied to a new range of learning tasks. One such task that we investigate is planning in the blocks world, where it is assumed that the effects of the actions are unknown to the agent and the agent has to learn a policy. Within this simple domain we show that relational reinforcement learning solves some existing problems with reinforcement learning. In particular, relational reinforcement learning allows us to employ structural representations, to abstract from specific goals pursued and to exploit the results of previous learning phases when addressing new (more complex) situations.

7.
The present study aims to contribute to the state of the art in activity-based travel demand modelling by presenting a framework to simulate sequential data. To this end, the suitability of a reinforcement learning approach for reproducing sequential data is explored. Traditional reinforcement learning techniques cannot learn efficiently in large state and action spaces with respect to memory and computational time requirements, nor can they generalize from infrequent visits of all state-action pairs. The reinforcement learning technique as used in most applications is therefore enhanced by means of regression tree function approximation. Three reinforcement learning algorithms are implemented to validate their applicability: traditional Q-learning and Q-learning with bucket-brigade updating are tested against the improved reinforcement learning approach with a CART function approximator. These methods are applied to data from 26 diary days. The results are promising and show that the proposed techniques offer great potential for simulating sequential data. Moreover, the reinforcement learning approach improved by introducing a regression tree function approximator learns a better solution much faster than the two traditional Q-learning approaches.
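A minimal sketch of the generalization step that distinguishes the tree-enhanced variant, with illustrative data (scikit-learn's tree stands in for CART, and the diary-day state encoding is simplified to small integers): a tree trained on the Q-values of visited state-action pairs predicts Q for pairs never visited, which is what lets the method cope with large spaces and sparse visits.

```python
# Sketch: regression-tree generalization of a partially learned Q-function.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

visited_sa = np.array([[0, 0], [0, 1], [1, 0], [2, 1], [3, 0]])  # (state, action)
q_values   = np.array([0.1, 0.5, 0.2, 0.8, 0.4])                 # from Q-learning

tree = DecisionTreeRegressor(max_depth=3).fit(visited_sa, q_values)
print(tree.predict([[2, 0], [3, 1]]))   # Q estimates for unvisited pairs
```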

8.
陈鑫, 魏海军, 吴敏, 曹卫华. Acta Automatica Sinica (自动化学报), 2013, 39(12): 2021-2031
Improving adaptability, achieving generalization over continuous spaces, and reducing dimensionality are key to applying multi-agent reinforcement learning (MARL) to continuous systems. To meet these needs, this paper proposes a model-based companion-policy-tracking learning mechanism and algorithm for agents in continuous multi-agent system (MAS) environments (MAS MBRL-CPT). Starting from the learning agent's need to adapt to its companions' policies, an individual expected immediate reward is defined, the agent's observations of companion policies are merged into the effects of environment interaction, and stochastic approximation is used to learn the individual expected immediate reward online. A dimension-reduced Q-function is defined, lowering the dimensionality of the learning space while establishing the Markov decision process (MDP) for companion-policy tracking in the MAS environment. On the basis of a state-transition probability model built with Gaussian regression, the Q-value function over a generalized sample set is solved by online dynamic programming, and Gaussian regression over the discrete sample-set Q-function yields generalized models of the value function and the policy. Simulation experiments on a continuous-space multi-cart-pole control system show that the algorithm enables learning agents to learn adaptive cooperative policies when the system dynamics and companion policies are unknown, with high learning efficiency and strong generalization ability.
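A sketch of the model-based ingredient under stated simplifications: a single agent, a 1-D state, scikit-learn's Gaussian process regressor standing in for the paper's Gaussian regression, and a placeholder reward. The GP learns the state-transition model from data and supplies the next-state prediction used in a one-step value backup.

```python
# Sketch: GP state-transition model feeding a model-based value backup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Observed transitions: next_state = f(state, action) + noise (synthetic).
rng = np.random.RandomState(0)
SA = rng.uniform(-1, 1, size=(40, 2))               # (state, action) pairs
S2 = 0.9 * SA[:, 0] + 0.3 * SA[:, 1] + 0.01 * rng.randn(40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(SA, S2)                                      # learned transition model

def expected_backup(s, a, value_fn, gamma=0.95):
    """Model-based backup; the reward below is an assumed placeholder."""
    s2_mean, s2_std = gp.predict([[s, a]], return_std=True)  # std also available
    reward = -abs(s2_mean[0])                       # assumed cost-to-origin reward
    return reward + gamma * value_fn(s2_mean[0])

print(expected_backup(0.5, -0.2, value_fn=lambda s: -abs(s)))
```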

9.
Driessens, Kurt; Džeroski, Sašo. Machine Learning, 2004, 57(3): 271-304
Reinforcement learning, and Q-learning in particular, encounter two major problems when dealing with large state spaces. First, learning the Q-function in tabular form may be infeasible because of the excessive amount of memory needed to store the table, and because the Q-function only converges after each state has been visited multiple times. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. The first problem is often solved by learning a generalization of the encountered examples (e.g., using a neural net or decision tree). Relational reinforcement learning (RRL) is such an approach; it makes Q-learning feasible in structural domains by incorporating a relational learner into Q-learning. The problem of sparse rewards has not been addressed for RRL. This paper presents a solution based on the use of reasonable policies to provide guidance. Different types of policies and different strategies to supply guidance through these policies are discussed and evaluated experimentally in several relational domains to show the merits of the approach.

10.
The biggest problem reinforcement learning faces in multi-agent systems is the exponential growth of the state and action spaces as the number of agents increases, and the slow learning that results. A locally cooperative Q-learning method is adopted: joint actions are examined only when there is explicit cooperation between agents; otherwise, simple individual-agent Q-learning is performed, which greatly reduces the number of state-action pairs that must be examined during learning. Experimental results on the predator-prey pursuit problem and on RoboCup 2D soccer simulation show better performance than common multi-agent reinforcement learning techniques.
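A minimal sketch of the switching idea under assumed encodings (hashable states and actions, a caller-supplied coordination flag): a joint-action Q-table is consulted only when agents are flagged as cooperating; otherwise each agent updates its own small table.

```python
# Sketch: locally cooperative Q-learning with a joint/individual split.
from collections import defaultdict

class LocalCoopQ:
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q_solo = defaultdict(float)    # (agent, state, action) -> Q
        self.q_joint = defaultdict(float)   # (state, joint_action) -> Q
        self.alpha, self.gamma = alpha, gamma

    def update(self, agent, s, a, r, s2, actions, coordinating=False,
               joint_a=None, joint_actions=None):
        if coordinating:                    # explicit cooperation: joint table
            best = max(self.q_joint[(s2, ja)] for ja in joint_actions)
            key = (s, joint_a)
            self.q_joint[key] += self.alpha * (r + self.gamma * best
                                               - self.q_joint[key])
        else:                               # otherwise: cheap individual update
            best = max(self.q_solo[(agent, s2, a2)] for a2 in actions)
            key = (agent, s, a)
            self.q_solo[key] += self.alpha * (r + self.gamma * best
                                              - self.q_solo[key])
```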

11.
陈浩杰, 范江亭, 刘勇. Journal of Computer Applications (计算机应用), 2022, 42(4): 1194-1200
Designing a unified solution for combinatorial optimization problems that lack hand-crafted heuristics has become a research hotspot in machine learning. Mature techniques mainly target static combinatorial optimization problems, while combinatorial optimization problems with dynamic changes remain insufficiently solved. To address this, a lightweight model called Dy4TSP is proposed, which combines multi-head attention with hierarchical reinforcement learning to solve the traveling salesman problem on dynamic graphs. First, a prediction network based on multi-head attention processes node representation vectors from a graph convolutional neural network; then, distributed reinforcement learning is used during training to quickly estimate the probability that each node is output as part of the optimal solution, enabling the model to explore the solution space thoroughly across different possibilities; finally, the trained model generates, in real time, action decision sequences that satisfy the specified objective reward function. The model was evaluated on three combinatorial optimization problems. Experimental results show that on classic traveling salesman problems its solution quality is 0.15 to 0.37 units higher than the open-source solver LKH3, clearly outperforming recent algorithms such as the graph attention network with edge embedding (EGATE), and that on other dynamic traveling salesman problems it achieves optimal-tour gaps of 0.1 to 1.05, also slightly ahead.
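A sketch of the attention-based node-selection step at the heart of models in this family (not Dy4TSP's exact architecture; dimensions and embeddings are placeholders): a query built from the current partial-tour context attends over node embeddings, visited cities are masked out, and a softmax gives the probability of each city being chosen next.

```python
# Sketch: masked scaled dot-product attention for next-city selection in TSP.
import numpy as np

def select_next_node(query, node_emb, visited, temp=1.0):
    """query: (d,), node_emb: (n, d), visited: boolean mask of length n."""
    d = query.shape[0]
    scores = node_emb @ query / np.sqrt(d)        # scaled dot-product attention
    scores[visited] = -np.inf                     # never revisit a city
    z = temp * scores
    p = np.exp(z - np.max(z))                     # numerically stable softmax
    return p / p.sum()                            # distribution over next cities

rng = np.random.RandomState(0)
probs = select_next_node(rng.randn(8), rng.randn(5, 8),
                         visited=np.array([True, False, False, True, False]))
print(probs)
```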

12.
In the real world all events are connected. There is a hidden network of dependencies that governs the behavior of natural processes. Without much argument it can be said that, of all the known data structures, graphs are naturally suitable to model such information. But learning with graph data structures is a tedious job, as most operations on graphs are computationally expensive, so exploring fast machine learning techniques for graph data has been an active area of research, and a family of algorithms called kernel-based approaches has been popular among researchers of the machine learning domain. With the help of support vector machines, kernel-based methods work very well for learning with Gaussian processes. In this survey we explore various kernels that operate on graph representations. Starting from the basics of kernel-based learning, we travel through the history of graph kernels from their first appearance to a discussion of the current state-of-the-art techniques in practice.

13.
Embedding a Priori Knowledge in Reinforcement Learning   Total citations: 2 (self-citations: 0, others: 2)
In recent years, temporal-difference methods have been put forward as convenient tools for reinforcement learning. Techniques based on temporal differences, however, suffer from a serious drawback: as stochastic adaptive algorithms, they may need extensive exploration of the state-action space before convergence is achieved. Although the basic methods are now reasonably well understood, it is precisely the structural simplicity of the reinforcement learning principle (learning through experimentation) that causes these excessive demands on the learning agent. Additionally, one must consider that the agent is very rarely a tabula rasa: some rough knowledge about characteristics of the surrounding environment is often available. In this paper, I present methods for embedding a priori knowledge in a reinforcement learning technique in such a way that both the mathematical structure of the basic learning algorithm and the capacity to generalise experience across the state-action space are kept. Extensive experimental results show that the resulting variants may lead to good performance, provided a sensible balance between risky use of prior imprecise knowledge and cautious use of learning experience is adopted.
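One simple way to realize this idea while keeping the basic algorithm intact, sketched under assumed details (the heuristic and step-size schedule below are illustrative, not the paper's exact scheme): seed the Q-table with a rough domain heuristic and let accumulated experience override the prior through the learning rate.

```python
# Sketch: prior-knowledge-seeded Q-table with an experience-weighted TD update.
from collections import defaultdict

def seeded_q(prior, states, actions):
    """prior(s, a): rough domain knowledge, e.g. negative distance-to-goal."""
    q = defaultdict(float)
    for s in states:
        for a in actions:
            q[(s, a)] = prior(s, a)          # informed initialization
    return q

def td_update(q, counts, s, a, r, s2, actions, gamma=0.9):
    counts[(s, a)] += 1
    alpha = 1.0 / counts[(s, a)]             # trust experience over the prior
    best = max(q[(s2, a2)] for a2 in actions)
    q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])

# Usage: counts = defaultdict(int); q = seeded_q(my_heuristic, S, A); then
# call td_update on each observed transition.
```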

14.
A primary challenge of agent-based policy learning in complex and uncertain environments is escalating computational complexity with the size of the task space (action choices and world states) and the number of agents. Nonetheless, there is ample evidence in the natural world that high-functioning social mammals learn to solve complex problems with ease, both individually and cooperatively. This ability to solve computationally intractable problems stems from brain circuits for hierarchical representation of state and action spaces and learned policies, as well as from constraints imposed by social cognition. Using biologically derived mechanisms for state representation and mammalian social intelligence, we constrain state-action choices in reinforcement learning in order to improve learning efficiency. Analysis results bound the reduction in computational complexity due to state abstraction, hierarchical representation, and socially constrained action selection in agent-based learning problems that can be described as variants of Markov decision processes. Investigation of two task domains, single-robot herding and multirobot foraging, shows that the theoretical bounds hold and that acceptable policies emerge, which reduce task completion time, computational cost, and/or memory resources compared to learning without hierarchical representations and with no social knowledge.

15.
Learning to Trade via Direct Reinforcement   Total citations: 1 (self-citations: 0, others: 1)
We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting models is eliminated, and better trading performance is obtained. The direct reinforcement approach differs from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. We find that the RRL direct reinforcement framework enables a simpler problem representation, avoids Bellman's curse of dimensionality and offers compelling advantages in efficiency. We demonstrate how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs. In extensive simulation work using real financial data, we find that our approach based on RRL produces better trading strategies than systems utilizing Q-learning (a value function method). Real-world applications include an intra-daily currency trader and a monthly asset allocation system for the S&P 500 Stock Index and T-Bills.
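A sketch of the differential Sharpe ratio mentioned above, the online reward that direct reinforcement maximizes: A_t and B_t are exponential moving estimates of the first and second moments of the trading return R_t, and the instantaneous reward is the derivative of the Sharpe ratio with respect to the adaptation rate eta. The eta value is an assumed hyperparameter.

```python
# Sketch: differential Sharpe ratio as an incremental reward signal.
class DifferentialSharpe:
    def __init__(self, eta=0.01):
        self.eta, self.A, self.B = eta, 0.0, 0.0

    def step(self, R):
        dA, dB = R - self.A, R * R - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        # D_t = (B dA - 0.5 A dB) / (B - A^2)^(3/2), using pre-update moments.
        D = 0.0 if denom <= 0 else (self.B * dA - 0.5 * self.A * dB) / denom
        self.A += self.eta * dA               # update moment estimates
        self.B += self.eta * dB
        return D                              # instantaneous reward for RRL
```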

16.
An off-policy Bayesian nonparametric approximate reinforcement learning framework, termed GPQ, that employs a Gaussian process (GP) model of the value (Q) function is presented in both the batch and online settings. Sufficient conditions on GP hyperparameter selection are established to guarantee convergence of off-policy GPQ in the batch setting, and theoretical and practical extensions are provided for the online case. Empirical results demonstrate that GPQ has competitive learning speed in addition to its convergence guarantees and its ability to automatically choose its own basis locations.
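A sketch in the spirit of batch GPQ, with stated substitutions: scikit-learn's GP regressor stands in for the paper's implementation, the data are synthetic, and the kernel and hyperparameters are assumed. The GP is refit to one-step Bellman targets from off-policy data; its predictive standard deviation is the quantity such methods use to choose basis locations (here it is simply printed).

```python
# Sketch: Gaussian process Q-function fit to off-policy Bellman targets.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.RandomState(1)
S = rng.uniform(-1, 1, (30, 1))                 # states
A = rng.randint(0, 2, (30, 1))                  # binary actions
R = -np.abs(S[:, 0])                            # assumed reward: stay near 0
S2 = np.clip(S[:, 0] + 0.1 * (2 * A[:, 0] - 1), -1, 1)

X = np.hstack([S, A])
gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2).fit(X, R)  # Q0 = r
for _ in range(5):                              # off-policy Bellman regression
    q_next = np.maximum(*[gp.predict(np.column_stack([S2, np.full(30, a)]))
                          for a in (0, 1)])
    gp = GaussianProcessRegressor(kernel=RBF(0.5),
                                  alpha=1e-2).fit(X, R + 0.9 * q_next)

mean, std = gp.predict(X[:3], return_std=True)  # uncertainty guides basis choice
print(mean, std)
```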

17.
王斐, 齐欢, 周星群, 王建辉. Robot (机器人), 2018, 40(4): 551-559
To address the complexity of existing robot assembly learning and its high demands on programming skill, an implicit interaction method based on the fusion of forearm surface electromyography (sEMG) signals and inertial measurements is proposed to realize robot programming by demonstration. Building on the assembly experience of the demonstrator acquired through demonstration learning, and to improve adaptability to changes in assembly objects and the environment, a multi-task deep deterministic policy gradient (M-DDPG) algorithm is proposed to correct assembly parameters; on top of programming by demonstration, reinforcement learning ensures that the robot executes the task stably. In the demonstration-programming experiments, an improved parallel convolutional neural network (PCNN), called one-dimensional PCNN (1D-PCNN), is proposed, in which 1-D convolution and pooling automatically extract features from the inertial and electromyographic signals, improving the generalization and accuracy of gesture recognition. In the demonstration-reproduction experiments, a Gaussian mixture model (GMM) statistically encodes the demonstration data, and Gaussian mixture regression (GMR) reproduces the robot's motion trajectory while removing noise points. Finally, using a Primesense Carmine camera, a fusion tracking algorithm combining frame differencing with multi-feature-map kernelized correlation filters (MKCF) tracks environmental changes along the X and Y axes separately, and two identical network structures run deep reinforcement learning of the continuous process in parallel. When the relative position of shaft and hole changes, the manipulator automatically adjusts its end-effector position according to the generalized policy model obtained by reinforcement learning, realizing demonstration learning of shaft-hole assembly.
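A minimal sketch of the GMM encoding plus GMR reproduction step, under stated simplifications: 1-D time in, 1-D position out (the real system regresses multi-DOF trajectories), synthetic demonstration data, and an assumed component count.

```python
# Sketch: GMM encoding of a demonstration and GMR trajectory reproduction.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * t) + 0.05 * np.random.RandomState(0).randn(200)
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(np.column_stack([t, x]))

def gmr(t_query):
    """Conditional mean E[x | t] under the fitted joint GMM (the GMR step)."""
    num = den = 0.0
    for pi_k, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        h = pi_k * norm.pdf(t_query, mu[0], np.sqrt(cov[0, 0]))   # responsibility
        m = mu[1] + cov[1, 0] / cov[0, 0] * (t_query - mu[0])     # conditional mean
        num, den = num + h * m, den + h
    return num / den

print(gmr(0.25))   # smoothed position at t = 0.25; demonstration noise filtered
```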

18.
韩伟, 韩忠愿. Computer Engineering (计算机工程), 2007, 33(22): 42-44,4
The Q-learning algorithm requires the agent to traverse every state-action transition infinitely often, so in applications with very large state-action spaces convergence is very slow. With cooperative multi-agent learning, agents coordinate with each other through a blackboard-model mechanism via switch functions, which locates effective state-action transitions more quickly and avoids useless updates, thereby accelerating convergence of the Q-table at a small learning cost.

19.
New energy sources and energy storage in active distribution networks can effectively improve operational flexibility and reliability, but renewables and loads also bring dual uncertainty, making real-time optimal dispatch of active distribution networks a high-dimensional decision problem that is hard to model accurately. To address this, this paper proposes a graph reinforcement learning method that combines graph neural networks with reinforcement learning, avoiding precise modeling of the complex system. First, the real-time optimal dispatch problem is formulated as a Markov decision process and expressed as a dynamic sequential decision problem. Second, a graph representation based on physical connectivity is proposed to express the implicit correlations among state variables. Then, graph reinforcement learning is proposed to learn the optimal policy that maps the system state graph to decision outputs. Finally, graph reinforcement learning is extended to distributed graph reinforcement learning. Case-study results show that graph reinforcement learning achieves better optimality and efficiency.
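A sketch of the graph state representation this approach relies on, with an illustrative 4-bus feeder and assumed per-bus features: bus states are propagated over the physical adjacency by one graph-convolution step, H' = D^(-1/2)(A+I)D^(-1/2) H W, and the resulting embedding would feed the dispatch policy.

```python
# Sketch: one graph-convolution step over a toy feeder's physical topology.
import numpy as np

A = np.array([[0, 1, 0, 0],       # physical connectivity of a 4-bus feeder
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.array([[1.00, 0.2],        # per-bus state: [voltage p.u., net load]
              [0.98, 0.5],
              [0.97, 0.1],
              [1.01, -0.3]])
W = np.random.RandomState(0).randn(2, 2) * 0.1   # learnable layer weights

A_hat = A + np.eye(4)                            # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next)   # node embeddings fed to the dispatch policy head
```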

20.
Learning ability is a hallmark of higher-animal intelligence. To investigate how quadrupeds learn motor skills, this paper studies the gait-learning task of a quadruped robot and reproduces the rhythmic gait-learning process of quadrupeds. In recent years, proximal policy optimization (PPO), a representative deep reinforcement learning algorithm, has been widely used for quadruped gait learning, with good experimental results and few hyperparameters. However, in scenarios with multi-dimensional inputs and outputs it tends to converge to local optima, manifested as chaotic gait rhythm signals and severe oscillation of the robot's center of mass. To solve this problem, inspired by meta-learning's strength in capturing high-dimensional abstract representations of the learning process, this paper proposes a meta proximal policy optimization (MPPO) algorithm that fuses meta-learning with PPO, enabling the quadruped robot to evolve better gaits. Simulation results on the PyBullet platform show that the proposed algorithm enables the quadruped robot to learn walking skills, and comparative experiments with soft actor-critic (SAC) and PPO show that MPPO yields more regular gait rhythm signals and faster walking.
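For reference, a sketch of the clipped PPO surrogate that MPPO builds on (plain NumPy placeholders stand in for policy-network tensors): given probability ratios r = pi_new(a|s) / pi_old(a|s) and advantage estimates, the clip keeps each policy update conservative.

```python
# Sketch: the clipped PPO surrogate objective.
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """L = -E[min(r * A, clip(r, 1-eps, 1+eps) * A)]."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

ratio = np.array([0.8, 1.0, 1.4])        # pi_new / pi_old for three samples
adv   = np.array([1.0, -0.5, 2.0])       # advantage estimates
print(ppo_clip_loss(ratio, adv))         # loss the gait policy is trained on
```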

