Similar Articles
1.
Average-reward reinforcement learning is an important undiscounted-optimality framework within reinforcement learning, and most existing work has been confined to discrete domains. This paper combines average-reward reinforcement learning algorithms with function approximation to handle continuous state spaces, and modifies the parameter-update conditions of R-learning and G-learning accordingly as the state domain changes. It further examines the performance of G-learning with function approximation and its sensitivity to various parameters, and presents experimental results and analysis. The experiments show that the solutions of R-learning and G-learning tend to diverge when ε is small, and also demonstrate the effectiveness of the Tile Coding feature-extraction method, which can serve as a reference baseline for other feature-extraction methods.
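The abstract does not give the update rules; the following is only a minimal sketch of R-learning with linear function approximation over tile-coded features, with all names, constants, and the stand-in tile coder being illustrative assumptions rather than the paper's implementation:

import numpy as np

# Illustrative stand-in for tile coding: maps a continuous state to a sparse
# binary feature vector. A real tile coder hashes (state, tiling) pairs to indices.
def tile_features(state, n_features=64, n_active=8):
    rng = np.random.default_rng(abs(hash(tuple(np.round(state, 2)))) % (2**32))
    phi = np.zeros(n_features)
    phi[rng.choice(n_features, size=n_active, replace=False)] = 1.0
    return phi

n_actions, n_features = 3, 64
w = np.zeros((n_actions, n_features))   # linear Q(s, a) = w[a] . phi(s)
rho = 0.0                               # average-reward estimate
alpha, beta = 0.1, 0.01

def q(phi):
    return w @ phi

def r_learning_step(s, a, r, s_next):
    """One R-learning update with linear function approximation (sketch)."""
    global rho
    phi, phi_next = tile_features(s), tile_features(s_next)
    q_s = q(phi)
    delta = r - rho + np.max(q(phi_next)) - q_s[a]
    if q_s[a] == np.max(q_s):           # update rho only after a greedy action
        rho += beta * delta
    w[a] += alpha * delta * phi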

2.
A difficult research problem in reinforcement learning is balancing exploration and exploitation in large-scale or continuous spaces. To address it, a new Actor-Critic (AC) algorithm is proposed that combines function approximation with Gaussian processes. In the Actor, the temporal-difference error is used to construct the update rule for the policy parameters; in the Critic, a Gaussian process models the linear parameterized value function and, combined with a generative model, the posterior distribution of the value function is obtained through Bayesian inference. Applied to the pole-balancing task, the algorithm converges quickly, effectively balances exploration and exploitation in large-scale or continuous spaces, and achieves good performance.
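As a rough illustration of the Actor update described above (the TD error scaling the policy-parameter gradient), here is a minimal sketch with a softmax actor and a plain linear critic standing in for the Gaussian-process critic; all names and step sizes are assumptions for illustration only:

import numpy as np

n_features, n_actions = 8, 3
theta = np.zeros((n_actions, n_features))   # actor (policy) parameters
v = np.zeros(n_features)                    # critic weights, V(s) = v . phi(s)
alpha_actor, alpha_critic, gamma = 0.01, 0.1, 0.99

def softmax_policy(phi):
    prefs = theta @ phi
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

def ac_step(phi, a, r, phi_next, done):
    """One actor-critic update driven by the TD error (sketch)."""
    td_error = r + (0.0 if done else gamma * v @ phi_next) - v @ phi
    v[:] = v + alpha_critic * td_error * phi        # critic: TD(0) update
    pi = softmax_policy(phi)
    for b in range(n_actions):                      # actor: policy-gradient step
        grad = ((1.0 if b == a else 0.0) - pi[b]) * phi
        theta[b] += alpha_actor * td_error * grad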

3.
Existing policy-gradient-based deep reinforcement learning methods suffer from long training times and low learning efficiency when applied to robot navigation in complex indoor scenes such as offices and corridors. This paper proposes a deep reinforcement learning navigation algorithm that combines an advantage structure with minimized target Q-values. The advantage structure is introduced into the policy-gradient-based algorithm to distinguish between actions under the same state value and thus improve learning efficiency; in multi-goal navigation scenarios, the state value is estimated separately, and map information is used to provide more accurate value judgments. In addition, because the discrete-control techniques for alleviating target Q-value overestimation are hard to apply within the mainstream Actor-Critic framework, a minimum-target-Q method based on Gaussian smoothing is designed to reduce the impact of overestimation on training. Experimental results show that the algorithm effectively accelerates learning: in both single-goal and multi-goal continuous navigation training it converges faster than Soft Actor-Critic (SAC), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Deep Deterministic Policy Gradient (DDPG), keeps the mobile robot well clear of obstacles, and yields a navigation model with good generalization ability.
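A minimal sketch of the "minimum target Q with Gaussian smoothing" idea, in the spirit of TD3's clipped double Q and target-policy smoothing, is shown below; network sizes, noise scales, and names are assumptions, not the paper's settings:

import torch
import torch.nn as nn

state_dim, action_dim = 8, 2

def make_critic():
    return nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))

critic1_t, critic2_t = make_critic(), make_critic()   # two target critics
actor_t = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                        nn.Linear(64, action_dim), nn.Tanh())
gamma, noise_std, noise_clip = 0.99, 0.2, 0.5

@torch.no_grad()
def target_q(reward, next_state, done):
    """Gaussian-smoothed minimum target Q value (sketch)."""
    a_next = actor_t(next_state)
    noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip, noise_clip)
    a_next = (a_next + noise).clamp(-1.0, 1.0)         # smooth the target action
    sa = torch.cat([next_state, a_next], dim=-1)
    q_min = torch.min(critic1_t(sa), critic2_t(sa))    # take the smaller estimate
    return reward + gamma * (1.0 - done) * q_min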

4.
Many different algorithms are currently used for robotic-arm control, such as traditional adaptive PD control and fuzzy adaptive control, most of which rely on a mathematical model. There are also reinforcement-learning-based control methods such as DQN (Deep Q Network) and Sarsa, but in continuous, high-dimensional action spaces these algorithms suffer from low learning efficiency, difficulty in designing reward functions, and poor control performance. This paper studies grasping at arbitrary positions with a robotic arm based on the PPO (Proximal Policy Optimization) algorithm, and compares the experimental data with those of an Actor-Critic algorithm, verifying that PPO yields good control performance and high, stable learning efficiency.
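For reference, the core of PPO is the clipped surrogate objective; a minimal sketch (illustrative only, not the paper's code) is:

import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss, to be minimized (sketch)."""
    ratio = torch.exp(logp_new - logp_old)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()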

5.
Reinforcement learning algorithms often have to handle continuous state and continuous action spaces to achieve precise control. Combining the strengths of Actor-Critic methods in handling continuous action spaces with those of kernel methods in handling continuous state spaces, this paper proposes a kernel-based continuous-action Actor-Critic learning algorithm (KCACL). In this algorithm, the Actor updates action probabilities according to the reward-inaction principle, while the Critic learns the state-value function with a kernel-based online selective temporal-difference algorithm. Comparative experiments verify the effectiveness of the algorithm.
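The abstract does not give the exact update rules; the sketch below is only a rough illustration of a kernel-based (RBF) state-value critic updated by TD(0), with the online selection step approximated by a simple novelty test. All thresholds and names are assumptions:

import numpy as np

centers, weights = [], []                # online dictionary of kernel centers
sigma, nu, alpha, gamma = 0.5, 0.95, 0.1, 0.99

def k(x, c):
    return np.exp(-np.sum((np.asarray(x) - c) ** 2) / (2 * sigma ** 2))

def value(s):
    return sum(w * k(s, c) for w, c in zip(weights, centers))

def kernel_td_step(s, r, s_next, done):
    """Kernel TD(0) with a crude online dictionary-selection rule (sketch)."""
    # Add s as a new center only if existing centers represent it poorly.
    if not centers or max(k(s, c) for c in centers) < nu:
        centers.append(np.asarray(s, dtype=float))
        weights.append(0.0)
    td_error = r + (0.0 if done else gamma * value(s_next)) - value(s)
    for i, c in enumerate(centers):      # gradient step on all kernel weights
        weights[i] += alpha * td_error * k(s, c)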

6.
This paper proposes a method for learning autonomous-driving policy models based on deep recurrent reinforcement learning, validated by simulation in the TORCS virtual driving engine. To address overestimation and slow updates in the Actor-Critic framework, the clipped double DQN idea is incorporated: taking the minimum of the two estimates alleviates overestimation. To obtain multi-step state inputs that help the agent make better decisions, recurrent neural networks are incorporated, and an Actor policy network and a Critic evaluation network containing LSTM structures are designed. Simulation experiments on the TORCS platform show that, compared with the conventional DDPG algorithm, the proposed algorithm effectively improves training efficiency.
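As a sketch of feeding a multi-step state history into an LSTM-based actor (the abstract does not specify the architecture, so the layer sizes and names here are assumptions):

import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """Maps a sequence of recent states to a continuous action (illustrative)."""
    def __init__(self, state_dim=29, hidden_dim=64, action_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden_dim, action_dim), nn.Tanh())

    def forward(self, state_seq):                 # state_seq: (batch, time, state_dim)
        out, _ = self.lstm(state_seq)
        return self.head(out[:, -1])              # act on the last hidden state

actor = RecurrentActor()
history = torch.randn(1, 4, 29)                   # last 4 observations
action = actor(history)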

7.
To address the problems that deep reinforcement learning algorithms face in partially observable environments, such as insufficient information and the presence of random factors, this paper proposes a proximal policy optimization algorithm that fuses an attention mechanism with a recurrent neural network (ARPPO). The algorithm first extracts features with convolutional layers, then applies an attention mechanism to highlight the key information in the state, then uses an LSTM network to capture the temporal characteristics of the data, and finally performs policy learning and training with a PPO algorithm based on the Actor-Critic structure. Ablation and comparison experiments on two exploration tasks in the Gym-Minigrid environment show that ARPPO converges faster than the existing A2C, PPO, and RPPO algorithms, remains highly stable after convergence, and adapts better to unknown environments containing random factors.
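The abstract describes the network as convolution, then attention, then LSTM, then PPO actor and critic heads; below is a simplified sketch of such a forward pass, where the layer sizes and the particular attention variant are assumptions rather than the paper's design:

import torch
import torch.nn as nn

class ARPPONet(nn.Module):
    """Conv features -> self-attention -> LSTM -> actor & critic heads (sketch)."""
    def __init__(self, in_channels=3, n_actions=7, dim=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_channels, dim, 3, 2), nn.ReLU(),
                                  nn.Conv2d(dim, dim, 3, 2), nn.ReLU())
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.actor = nn.Linear(dim, n_actions)
        self.critic = nn.Linear(dim, 1)

    def forward(self, obs, hidden=None):                # obs: (batch, C, H, W)
        f = self.conv(obs).flatten(2).transpose(1, 2)   # (batch, positions, dim)
        f, _ = self.attn(f, f, f)                       # highlight key positions
        pooled = f.mean(dim=1, keepdim=True)            # (batch, 1, dim)
        out, hidden = self.lstm(pooled, hidden)         # temporal memory
        h = out[:, -1]
        return self.actor(h), self.critic(h), hidden

net = ARPPONet()
logits, value, hidden = net(torch.randn(1, 3, 56, 56))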

8.
To address the slow learning rate of inverse reinforcement learning in the early stage of training caused by sparse expert demonstrations, a maximum-entropy inverse reinforcement learning algorithm based on Generative Adversarial Networks (GAN) is proposed. During learning, a GAN is trained and optimized on the expert samples to generate virtual expert samples; non-expert samples are then generated with a random policy, and the two are combined into a mixed sample set. The reward function is modeled with the maximum-entropy probabilistic model and the optimal reward function is solved by gradient descent. Based on the solved reward function, forward reinforcement learning is used to obtain the optimal policy, from which further non-expert samples are generated, the mixed sample set is rebuilt, and the optimal reward function is solved iteratively. The proposed algorithm and MaxEnt IRL are applied to the classic Object World and Mountain Car problems; the experiments show that the algorithm recovers the reward function well when expert demonstrations are sparse and exhibits good convergence.
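The reward-function update described above follows the maximum-entropy IRL gradient (expert feature expectations minus the learner's expected feature counts). The sketch below shows one gradient step on a linear reward and ignores the GAN-generated samples; all variable names are assumptions for illustration:

import numpy as np

def maxent_irl_step(theta, expert_features, sample_features, sample_probs, lr=0.05):
    """One gradient-ascent step on linear reward weights theta (sketch).

    expert_features: (n_expert, d) feature counts of expert trajectories
    sample_features: (n_sample, d) feature counts of trajectories from the current policy
    sample_probs:    (n_sample,)   normalized weights of those sampled trajectories
    """
    expert_expectation = expert_features.mean(axis=0)
    learner_expectation = sample_probs @ sample_features
    grad = expert_expectation - learner_expectation     # maxent log-likelihood gradient
    return theta + lr * grad

theta = np.zeros(4)
theta = maxent_irl_step(theta,
                        expert_features=np.random.rand(10, 4),
                        sample_features=np.random.rand(50, 4),
                        sample_probs=np.full(50, 1 / 50))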

9.
In fields such as 3D printing and express logistics, parts or goods of various shapes must be placed within a limited space, which is known as irregular packing. Finding a placement scheme that fits as many polyhedra as possible into a given container, or that packs a batch of objects tightly so that the occupied volume is minimized, is called the irregular packing problem; it is NP-hard and difficult to solve efficiently. This work studies placing a given set of polyhedra inside a 3D container with one variable dimension so that the variable dimension after packing is minimized, and proposes a reinforcement-learning-based algorithm, AC-HAPE3D: the heuristic algorithm HAPE3D is used to model the problem as a Markov process, and the policy-based reinforcement learning method Actor-Critic is used for learning. Voxels represent the container and the polyhedra to simplify the state representation, and neural networks represent the value function and the policy function. To handle the variable length of the state information and the variable action space, masking is used to block part of the inputs and outputs, and an LSTM is introduced to process the variable-length state information. Experiments on five different datasets show that the algorithm achieves good results.
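One detail mentioned above, masking part of the outputs so that a policy network can cope with a variable action space, can be sketched generically as follows (an illustration, not the AC-HAPE3D code):

import torch

def masked_action_distribution(logits, valid_mask):
    """Zero out invalid actions before sampling (sketch).

    logits:     (batch, max_actions) raw policy outputs
    valid_mask: (batch, max_actions) 1 for currently legal placements, 0 otherwise
    """
    masked = logits.masked_fill(valid_mask == 0, float("-inf"))
    return torch.distributions.Categorical(logits=masked)

logits = torch.randn(1, 6)
mask = torch.tensor([[1, 1, 0, 1, 0, 0]])
action = masked_action_distribution(logits, mask).sample()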

10.
Actor-Critic is a reinforcement learning approach that learns a policy from samples collected through online trial-and-error interaction with the environment, and it is an effective tool for sequential perception-and-decision problems. However, this online, interactive, active-learning paradigm raises cost and safety issues when collecting samples in some complex real-world environments. Offline reinforcement learning, as a data-driven reinforcement learning paradigm, emphasizes learning policies from a static dataset without exploratory interaction with the environment, providing a feasible solution for real-world deployments in robotics, autonomous driving, healthcare, and similar domains; it has become a research hotspot in recent years. Current offline reinforcement learning methods face the challenge of distribution shift between the learned policy and the behavior policy. To deal with it, policy constraints or value-function regularization are usually adopted to restrict out-of-distribution (OOD) actions, which makes learning overly conservative and hinders both the generalization of the value network and the improvement of the learned policy. This paper therefore uses uncertainty estimation and OOD sampling to balance generalization and conservatism in value-function learning, and proposes an Offline Deterministic Actor-Critic method based on Uncertainty Estimation (ODACUE). First, for deterministic policies, an uncertainty estimation operator for the Q-value function is ...
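A common way to realize the kind of uncertainty estimate described above is a Q-ensemble lower confidence bound (the ensemble mean minus a multiple of the ensemble standard deviation); the sketch below shows that generic construction, not ODACUE itself, and the names and constants are assumptions:

import torch

def pessimistic_q(q_ensemble_values, beta=1.0):
    """Uncertainty-penalized Q estimate from an ensemble of critics (sketch).

    q_ensemble_values: (n_critics, batch) Q(s, a) from each critic in the ensemble
    beta: how strongly disagreement (uncertainty) is penalized
    """
    mean = q_ensemble_values.mean(dim=0)
    std = q_ensemble_values.std(dim=0)       # large on out-of-distribution actions
    return mean - beta * std                 # lower confidence bound

qs = torch.randn(5, 32)                      # 5 critics, batch of 32 state-action pairs
target = pessimistic_q(qs, beta=2.0)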

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from that class. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
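The idea of reusing stored value functions when reward functions are linear in shared reward features can be sketched as follows; the decomposition and names below are assumptions for illustration rather than the paper's exact formulation:

import numpy as np

# For each previously solved SMDP, store per-feature value components:
# value_components[s] is the expected accumulation of each reward feature
# under that task's optimal policy, so V_w(s) = value_components[s] . w.
prior_tasks = [
    {"reward_weights": np.array([1.0, 0.0]), "value_components": np.random.rand(10, 2)},
    {"reward_weights": np.array([0.2, 0.8]), "value_components": np.random.rand(10, 2)},
]

def initialize_value(new_reward_weights):
    """Initialize V for a new SMDP from the best-matching stored task (sketch)."""
    best_v, best_score = None, -np.inf
    for task in prior_tasks:
        # Value of following the stored task's policy under the *new* reward weights.
        v = task["value_components"] @ new_reward_weights
        if v.mean() > best_score:
            best_v, best_score = v, v.mean()
    return best_v

v_init = initialize_value(np.array([0.5, 0.5]))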

13.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

14.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

15.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

16.
刘晓, 毛宁. 《数据采集与处理》, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns to select the optimal action from an allowed action set through continual interaction with a random environment. In most traditional LA models the action set is taken to be finite, so for continuous-parameter learning problems the action space must be discretized, and the learning precision depends on the granularity of the discretization. This paper proposes a new continuous action set learning automaton (CALA) whose action set is a variable interval and which selects output actions according to a uniform distribution. The learning algorithm adaptively updates the endpoints of the action interval using binary feedback signals from the environment. A simulation experiment on a multimodal learning problem demonstrates the superiority of the new algorithm over three existing CALA algorithms.
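The abstract does not give the endpoint-update rule, so the sketch below is only a plausible illustration of the mechanism it describes: actions drawn uniformly from a variable interval whose endpoints adapt to binary feedback from the environment. The interval bounds, step sizes, and update rule are all assumptions:

import numpy as np

low, high = 0.0, 10.0            # current action interval (illustrative values)
shrink, grow = 0.05, 0.02        # adaptation step sizes (assumptions)
rng = np.random.default_rng(0)

def select_action():
    return rng.uniform(low, high)             # uniform choice over the interval

def update_interval(action, feedback):
    """Move the interval endpoints using binary feedback (illustrative rule)."""
    global low, high
    if feedback == 1:                         # favorable response: contract toward action
        low += shrink * (action - low)
        high -= shrink * (high - action)
    else:                                     # unfavorable response: widen the interval
        low -= grow * (high - low)
        high += grow * (high - low)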

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.

19.
20.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
