Similar Documents
Found 20 similar documents (search time: 171 ms)
1.
A Survey of Imitation Learning Based on Generative Adversarial Networks
Imitation learning studies how to learn from expert decision data in order to obtain a decision-making model close to expert level. Reinforcement learning, which likewise learns how to make decisions, usually learns only from the environment's evaluative feedback; by comparison, imitation learning obtains more direct feedback from the decision data. Imitation learning methods fall into two categories: behavioral cloning and imitation learning based on inverse reinforcement learning. The latter decomposes imitation learning into two sub-processes, inverse reinforcement learning and reinforcement learning, which are iterated repeatedly: inverse reinforcement learning derives a reward function consistent with the expert decision data, and reinforcement learning learns a policy based on that reward function. Imitation learning methods based on generative adversarial networks (GANs) evolved from inverse-RL-based imitation learning; the earliest and most representative is Generative Adversarial Imitation Learning (GAIL). A GAN consists of two adversarial neural networks, a discriminator and a generator. The defining feature of GAIL is that it solves the imitation learning problem within the GAN framework: training the discriminator is analogous to learning a reward function, and training the generator is analogous to learning a policy. Compared with traditional imitation learning methods, GAIL has better robustness, representational capacity, and computational efficiency, so it can handle complex, large-scale problems and be extended to real-world applications. However, GAIL suffers from problems such as mode collapse and low sample efficiency in environment interaction. Recent work has addressed these problems using GAN and reinforcement learning techniques and has extended GAIL in directions such as learning from observation and multi-agent systems. This article first introduces the main ideas of GAIL along with its advantages and disadvantages, then categorizes, analyzes, and compares improved GAIL algorithms, and finally concludes and discusses possible future trends.
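To make the adversarial setup concrete, the sketch below shows the two training signals described in this abstract: a discriminator trained to separate expert state-action pairs from policy-generated ones (its role is analogous to a learned reward), and a surrogate reward handed back to the policy (the generator), which is then optimized with any standard RL algorithm. This is an illustrative sketch, not code from the surveyed work; the network sizes and the -log(1 - D) reward form are assumptions.

```python
# Minimal GAIL-style discriminator update (illustrative sketch only).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # logit of "is expert"

def discriminator_step(disc, opt, expert_batch, policy_batch):
    """One adversarial update: push expert pairs toward label 1, policy pairs toward 0."""
    bce = nn.BCEWithLogitsLoss()
    exp_logits = disc(*expert_batch)
    pol_logits = disc(*policy_batch)
    loss = bce(exp_logits, torch.ones_like(exp_logits)) + \
           bce(pol_logits, torch.zeros_like(pol_logits))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def imitation_reward(disc, obs, act):
    """Surrogate reward for the policy (generator): high when the discriminator
    mistakes policy behavior for expert behavior."""
    with torch.no_grad():
        d = torch.sigmoid(disc(obs, act))
    return -torch.log(1.0 - d + 1e-8)  # fed to any RL algorithm in place of the env reward
```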

2.
Reinforcement learning is an important branch of machine learning and artificial intelligence that has attracted broad attention from society and industry in recent years. The central problem that reinforcement learning algorithms address is how an agent learns a policy by interacting directly with its environment. However, as the dimensionality of the state space grows, traditional reinforcement learning methods often face the curse of dimensionality and struggle to learn well. Hierarchical reinforcement learning decomposes a complex reinforcement learning problem into several subproblems and solves them separately, which can yield better results than solving the whole problem directly. Hierarchical reinforcement learning is a promising route to large-scale reinforcement learning problems, yet it has received relatively little attention. This article introduces and reviews the major families of hierarchical reinforcement learning methods.

3.
In this work we investigate the use of a reinforcement learning (RL) framework for the autonomous navigation of a group of mini-robots in a multi-agent collaborative environment. Each mini-robot is driven by inertial forces provided by two vibration motors that are controlled by a simple and efficient low-level speed controller. The action of the RL agent is the direction of each mini-robot, and it is based on the position of each mini-robot, the distance between them, and the sign of the distance gradient between each mini-robot and its nearest neighbor. Each mini-robot is considered a moving obstacle that must be avoided by the others. We propose a suitable state space and reward function that result in an efficient collaborative RL framework. The classical and the double Q-learning algorithms are employed, where the latter is used to learn the optimal policies of the mini-robots and offers a more stable and reliable learning process. A simulation environment that includes a group of four mini-robots is created using the ROS framework. The dynamic model of each mini-robot and of the vibration motors is also included. Several application scenarios are simulated and the results are presented to demonstrate the performance of the proposed approach.
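The double Q-learning update mentioned here maintains two value tables and decouples action selection from action evaluation, which reduces the overestimation bias of classical Q-learning. The tabular sketch below is illustrative only; the state/action encoding of the mini-robot task is not taken from the paper.

```python
# Tabular double Q-learning update (minimal sketch; hyperparameters are assumptions).
import random
from collections import defaultdict

alpha, gamma = 0.1, 0.99
QA = defaultdict(float)  # Q^A(s, a)
QB = defaultdict(float)  # Q^B(s, a)

def double_q_update(s, a, r, s_next, actions):
    """Randomly update one table, using it to pick the argmax action and the
    other table to evaluate that action."""
    if random.random() < 0.5:
        a_star = max(actions, key=lambda act: QA[(s_next, act)])
        target = r + gamma * QB[(s_next, a_star)]
        QA[(s, a)] += alpha * (target - QA[(s, a)])
    else:
        b_star = max(actions, key=lambda act: QB[(s_next, act)])
        target = r + gamma * QA[(s_next, b_star)]
        QB[(s, a)] += alpha * (target - QB[(s, a)])
```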

4.
Mahadevan, Sridhar. Machine Learning, 1996, 22(1-3): 159-195
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
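For reference, R-learning (the average-reward method studied empirically in this paper) is usually written as two coupled updates, one for the relative action values and one for the average-reward estimate; the form below is the standard textbook statement, not necessarily the paper's own notation.

```latex
% R-learning updates after taking action a in state s, observing reward r and next state s':
R(s,a) \leftarrow R(s,a) + \alpha \left[ r - \rho + \max_{a'} R(s',a') - R(s,a) \right]
% and, only when a was a greedy action in s:
\rho \leftarrow \rho + \beta \left[ r + \max_{a'} R(s',a') - \max_{a''} R(s,a'') - \rho \right]
```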

5.
Imitation learning combines reinforcement learning with supervised learning: by observing expert demonstrations, the agent learns the expert policy and thereby accelerates reinforcement learning. By introducing additional task-related information, imitation learning can optimize a policy faster than pure reinforcement learning, offering a remedy for low sample efficiency. It has become a popular framework for solving reinforcement learning problems, and many algorithms and techniques have emerged to improve learning performance. Combined with recent advances in graphics and image processing, imitation learning has played an important role in game artificial intelligence (AI), robot control, autonomous driving, and other fields. Focusing on the year's developments, this article examines imitation learning from the perspectives of behavioral cloning, inverse reinforcement learning, adversarial imitation learning, imitation learning from observation, and cross-domain imitation learning; it surveys the latest practical applications, compares the state of research in China and abroad, and looks ahead to future directions. The aim is to provide researchers and practitioners with the latest progress in imitation learning as a convenient reference for their work.
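Behavioral cloning, the first category listed above, reduces imitation to supervised learning on expert state-action pairs. The minimal sketch below illustrates the idea; the network architecture, loss, and optimizer are assumptions, not taken from any surveyed work.

```python
# Behavioral cloning in its simplest form: regress expert states onto expert actions.
import torch
import torch.nn as nn

def behavioral_cloning(expert_states, expert_actions, epochs=100, lr=1e-3):
    """expert_states: (N, obs_dim) tensor; expert_actions: (N, act_dim) tensor."""
    obs_dim, act_dim = expert_states.shape[1], expert_actions.shape[1]
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(expert_states), expert_actions)
        opt.zero_grad(); loss.backward(); opt.step()
    return policy  # no environment interaction and no reward signal are needed
```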

6.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
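The value-function reuse described here can be sketched as follows: because rewards are linear in a set of reward features, each solved SMDP can be stored as a per-feature value vector, and a new task with weight vector w_new is initialized from the best stored policy. This is a simplified illustration of the idea under those assumptions, not the authors' VRRL algorithm.

```python
# Initializing a new task's value function from a library of stored per-feature values.
import numpy as np

class ValueLibrary:
    def __init__(self):
        self.policies = []  # list of dicts: state -> feature-value vector H_pi(s)

    def add(self, feature_values):
        self.policies.append(feature_values)

    def initialize(self, w_new, states):
        """V0(s) = max over stored policies of w_new . H_pi(s); zero if nothing is stored."""
        V0 = {}
        for s in states:
            candidates = [np.dot(w_new, H[s]) for H in self.policies if s in H]
            V0[s] = max(candidates) if candidates else 0.0
        return V0
```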

7.
Imitation learning provides a framework in which an agent can learn how to make decisions from expert demonstrations. During learning, the agent neither interacts with the expert nor relies on the environment's reward signal; it only needs a large number of expert demonstrations. Classical imitation learning methods require first-person expert demonstrations, each consisting of a state sequence and the corresponding sequence of expert actions. In practice, however, expert demonstrations usually exist as third-person videos. Compared with first-person demonstrations, the viewpoint of third-person demonstrations differs from that of the agent, so there is no one-to-one correspondence between the two, and third-person demonstrations cannot be used directly for imitation learning. To address this problem, this paper proposes a data-efficient third-person imitation learning method. First, building on generative adversarial imitation learning, the method introduces image differencing: exploiting the Markov property of the Markov decision process and the temporal continuity of its states, it removes domain features such as background and color, retaining the part of the observed image most relevant to the behavior policy for imitation learning. Second, the method introduces a variational discriminator bottleneck to constrain the discriminator and further weaken the influence of domain features on policy learning. To evaluate the proposed algorithm, it was tested in three environments on the MuJoCo platform and compared with existing algorithms. The experimental results show that, compared with existing imitation learning methods, the proposed method performs better on third-person imitation learning tasks without requiring additional samples.
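The image-differencing step can be illustrated in a few lines: consecutive observations are subtracted so that static domain features such as background and color cancel out, leaving the motion-related content that is most relevant to the behavior policy. The sketch below is an assumption-laden illustration (in particular, the noise threshold is mine), not the paper's implementation.

```python
# Frame differencing as a domain-feature-removing preprocessing step.
import numpy as np

def frame_difference(prev_frame, frame, threshold=0.05):
    """Both frames are arrays of identical shape with values in [0, 1]."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    diff[diff < threshold] = 0.0  # suppress small sensor/illumination noise
    return diff                   # fed to the discriminator instead of raw frames
```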

8.
This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a preference-based approach to reinforcement learning is the observation that in many real-world domains, numerical feedback signals are not readily available, or are defined arbitrarily in order to satisfy the needs of conventional RL algorithms. Instead, we propose an alternative framework for reinforcement learning, in which qualitative reward signals can be directly used by the learner. The framework may be viewed as a generalization of the conventional RL framework in which only a partial order between policies is required instead of the total order induced by their respective expected long-term reward. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. As a proof of concept, we realize a first simple instantiation of this framework that defines preferences based on utilities observed for trajectories. To that end, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of preference-based approximate policy iteration are illustrated by means of two case studies.
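As a rough illustration of the proof-of-concept instantiation (preferences derived from roll-out utilities), the sketch below compares actions by roll-out return and ranks them by pairwise wins. The env.simulate interface and the simple win-count ranking are assumptions of this sketch, not the authors' exact procedure.

```python
# Ranking actions from roll-out-based pairwise preferences (illustrative sketch).
def rollout_return(env, state, first_action, policy, horizon=50):
    """Utility of taking `first_action` in `state` and then following `policy`.
    `env.simulate(state, action)` is assumed to return (next_state, reward, done)."""
    total, s, a = 0.0, state, first_action
    for _ in range(horizon):
        s, r, done = env.simulate(s, a)
        total += r
        if done:
            break
        a = policy(s)
    return total

def rank_actions(env, state, actions, policy, n_rollouts=5):
    """Derive pairwise preferences a_i > a_j from roll-out utilities and rank
    actions by how many comparisons they win."""
    utils = {a: sum(rollout_return(env, state, a, policy) for _ in range(n_rollouts))
             for a in actions}
    wins = {a: sum(utils[a] > utils[b] for b in actions if b != a) for a in actions}
    return sorted(actions, key=lambda a: wins[a], reverse=True)
```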

9.
Qian Yu, Yu Yang, Zhou Zhi-Hua. Journal of Software, 2013, 24(11): 2667-2675
Reinforcement learning enables an agent to make correct short-term decisions by learning from feedback on its past decisions, so as to maximize the cumulative reward it obtains. Previous studies have found that reward shaping, which replaces the true environment reward with a simple, easy-to-learn surrogate reward function (the reward shaping function), can effectively improve reinforcement learning performance. However, reward shaping functions are usually built from domain knowledge or from examples of optimal policies, both of which require costly expert involvement. This work studies whether an effective reward shaping function can be learned automatically during the reinforcement learning process. Reinforcement learning algorithms typically collect a large number of samples during learning; although many of these samples are failed attempts, they may still provide useful information for constructing a reward shaping function. The paper proposes a new policy-invariance condition for reward shaping and, on that basis, the RFPotential method, which learns reward shaping from self-generated samples. Experiments on multiple reinforcement learning algorithms and problems show that the method can accelerate the reinforcement learning process.
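The policy-invariant form of reward shaping this line of work builds on is potential-based shaping. The sketch below shows only the shaped reward itself; learning the potential from self-generated samples (the contribution of RFPotential) is not shown, and `potential` here is any user-supplied function of the state.

```python
# Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s).
def shaped_reward(r, s, s_next, potential, gamma=0.99, terminal=False):
    """With a terminal next state the potential is taken to be zero, so the
    optimal policy under the shaped reward is unchanged."""
    phi_next = 0.0 if terminal else potential(s_next)
    return r + gamma * phi_next - potential(s)
```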

10.
Bearings and tools are important parts of a machine tool, and automatically monitoring bearing faults and tool wear under different working conditions is a necessary capability of an intelligent manufacturing system. In this paper, a multi-label imitation learning (MLIL) framework is proposed to monitor tool wear and bearing faults under different working conditions. Specifically, the multi-label samples with multiple sublabels are transformed into imitation objects, and MLIL combines a discriminator with deep reinforcement learning (DRL) to imitate the features of the imitation objects. In detail, the DRL is implemented without setting a reward function, so as to enhance the feature extraction ability of the deep neural networks, while the discriminator distinguishes the generations of the DRL from the imitation objects. As a result, the MLIL framework can not only deal with the correlation between multiple working conditions, including different speeds and loads, but can also distinguish compound faults composed of simultaneous bearing faults and tool wear. Two case studies jointly demonstrate the imitation ability of the MLIL framework for monitoring tool wear and bearing faults under different working conditions.

11.
Reinforcement learning (RL) has been applied to constructing controllers for nonlinear systems in recent years. Since RL methods do not require an exact dynamics model of the controlled object, they have a higher flexibility and potential for adaptation to uncertain or nonstationary environments than methods based on traditional control theory. If the target system has a continuous state space whose dynamic characteristics are nonlinear, however, RL methods often suffer from unstable learning processes. For this reason, it is difficult to apply RL methods to control tasks in the real world. In order to overcome the disadvantage of RL methods, we propose an RL scheme combining multiple controllers, each of which is constructed based on traditional control theory. We then apply it to a swinging-up and stabilizing task of an acrobot with a limited torque, which is a typical but difficult task in the field of nonlinear control theory. Our simulation result showed that our method was able to realize stable learning and to achieve fairly good control. This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.

12.
Scheduling semiconductor wafer fabrication systems (SWFS) has been viewed as one of the most challenging optimization problems owing to the complicated constraints and dynamic system environment. This paper proposes a fuzzy hierarchical reinforcement learning (FHRL) approach to schedule an SWFS, which controls the cycle time (CT) of each wafer lot to improve on-time delivery by adjusting the priority of each wafer lot. To cope with the layer correlation and wafer correlation of CT due to the re-entrant process constraint, a hierarchical model is presented with a recurrent reinforcement learning (RL) unit in each layer to control the corresponding sub-CT of each integrated circuit layer. In each RL unit, a fuzzy reward calculator is designed to reduce the impact of the uncertainty in expected finishing time caused by the rematching of a lot to a delivery batch. The results demonstrate that the mean deviation (MD) between the actual and expected completion times of wafer lots under the FHRL approach is only about 30% of that of the compared methods over the whole SWFS.

13.
Imitation learning has long been a research hotspot in artificial intelligence. It is a method for reconstructing a desired policy from expert demonstrations. In recent years, combining it with methods such as reinforcement learning has produced important theoretical results, and in practical applications, especially in the complex environments faced by robots and other agents, imitation learning has achieved very good results. This article focuses on the research and application of imitation learning in robotics. It introduces the theoretical background of imitation learning, examines its two main families of methods, behavioral cloning and inverse reinforcement learning, summarizes successful applications, and finally presents current problems and challenges and looks ahead to future trends.

14.
The field of reinforcement learning (RL) has been energized in the past few decades by elegant theoretical results indicating under what conditions, and how quickly, certain algorithms are guaranteed to converge to optimal policies. However, in practical problems, these conditions are seldom met. When we cannot achieve optimality, the performance of RL algorithms must be measured empirically. Consequently, in order to meaningfully differentiate learning methods, it becomes necessary to characterize their performance on different problems, taking into account factors such as state estimation, exploration, function approximation, and constraints on computation and memory. To this end, we propose parameterized learning problems, in which such factors can be controlled systematically and their effects on learning methods characterized through targeted studies. Apart from providing very precise control of the parameters that affect learning, our parameterized learning problems enable benchmarking against optimal behavior; their relatively small sizes facilitate extensive experimentation. Based on a survey of existing RL applications, in this article, we focus our attention on two predominant, "first-order" factors: partial observability and function approximation. We design an appropriate parameterized learning problem, through which we compare two qualitatively distinct classes of algorithms: on-line value function-based methods and policy search methods. Empirical comparisons among various methods within each of these classes project Sarsa(λ) and Q-learning(λ) as winners among the former, and CMA-ES as the winner in the latter. Comparing Sarsa(λ) and CMA-ES further on relevant problem instances, our study highlights regions of the problem space favoring their contrasting approaches. Short run-times for our experiments allow for an extensive search procedure that provides additional insights on relationships between method-specific parameters (such as eligibility traces, initial weights, and population sizes) and problem instances.
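Since Sarsa(λ) emerges as one of the winners among the value-function-based methods, a tabular version with accumulating eligibility traces is sketched below for reference. The env and policy interfaces are assumptions of the sketch; this is not the benchmark code used in the study.

```python
# Tabular Sarsa(lambda) with accumulating eligibility traces (illustrative sketch).
from collections import defaultdict

def sarsa_lambda_episode(env, Q, policy, alpha=0.1, gamma=0.99, lam=0.9):
    """Run one episode, updating Q in place. Q is a defaultdict(float) over
    (state, action) pairs; `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done); `policy(Q, s)` picks an action."""
    E = defaultdict(float)            # eligibility traces
    s = env.reset()
    a = policy(Q, s)
    done = False
    while not done:
        s2, r, done = env.step(a)
        a2 = policy(Q, s2) if not done else None
        target = r + (gamma * Q[(s2, a2)] if not done else 0.0)
        delta = target - Q[(s, a)]
        E[(s, a)] += 1.0              # accumulate trace for the visited pair
        for key in list(E):           # propagate the TD error backwards along the trace
            Q[key] += alpha * delta * E[key]
            E[key] *= gamma * lam
        s, a = s2, a2
    return Q
```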

15.
In this article, we propose a new control method using reinforcement learning (RL) with the concept of sliding mode control (SMC). Remarkable characteristics of the SMC method are its good robustness and stability under deviations from the control conditions. On the other hand, RL may be applicable to complex systems that are difficult to model. However, applying reinforcement learning to a real system has a serious problem: many trials are required for learning. We intend to develop a new control method that combines the good characteristics of both methods. To realize this, we employ the actor-critic method, a kind of RL, and unite it with SMC. We verify the effectiveness of the proposed control method through a computer simulation of inverted pendulum control without using the inverted pendulum dynamics. In particular, it is shown that the proposed method enables the RL agent to learn in fewer trials than a plain reinforcement learning method. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
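The actor-critic component used here can be sketched in its classic tabular form, where a single TD error drives both the critic (state values) and the actor (action preferences). Combining it with the sliding mode controller, as the paper does, is not shown; the learning rates and the softmax action selection are assumptions.

```python
# Classic one-step actor-critic update (minimal tabular sketch).
import numpy as np

def actor_critic_step(V, theta, s, a, r, s_next, done,
                      alpha_v=0.1, alpha_pi=0.01, gamma=0.99):
    """V: dict state -> value (critic); theta: dict (state, action) -> preference (actor)."""
    target = r + (0.0 if done else gamma * V.get(s_next, 0.0))
    delta = target - V.get(s, 0.0)                              # TD error
    V[s] = V.get(s, 0.0) + alpha_v * delta                      # critic update
    theta[(s, a)] = theta.get((s, a), 0.0) + alpha_pi * delta   # raise/lower preference of the taken action
    return delta

def softmax_policy(theta, s, actions):
    prefs = np.array([theta.get((s, a), 0.0) for a in actions])
    p = np.exp(prefs - prefs.max()); p /= p.sum()
    return actions[np.random.choice(len(actions), p=p)]
```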

16.
This study analyses simultaneous ordering and pricing decisions for retailers working in a multi-retailer competitive environment over an infinite horizon. Retailers compete for the same market, where the market demand is uncertain. The customer selects the winning agent (retailer) in each period on the basis of random utility maximization, which depends primarily on retailer price and random error. The competition, the necessity for simultaneous decisions, and the uncertainty all increase the complexity of the problem, which is not conducive to examination using standard analytical methods. Therefore, we model the problem using reinforcement learning (RL), which is founded on stochastic dynamic programming and agent-based simulations. We analyse the effects of competitiveness and the performance of RL in three different scenarios: a monopolistic case where one retailer employing an RL agent maximizes its profit, a duopolistic case where one retailer employs RL and the other utilizes adaptive pricing and ordering policies, and a duopolistic case where both retailers employ RL.

17.
We address an unrelated parallel machine scheduling problem with R-learning, an average-reward reinforcement learning (RL) method. Different types of jobs dynamically arrive in independent Poisson processes. Thus the arrival time and the due date of each job are stochastic. We convert the scheduling problems into RL problems by constructing elaborate state features, actions, and the reward function. The state features and actions are defined fully utilizing prior domain knowledge. Minimizing the reward per decision time step is equivalent to minimizing the schedule objective, i.e. mean weighted tardiness. We apply an on-line R-learning algorithm with function approximation to solve the RL problems. Computational experiments demonstrate that R-learning learns an optimal or near-optimal policy in a dynamic environment from experience and outperforms four effective heuristic priority rules (i.e. WSPT, WMDD, ATC and WCOVERT) in all test problems.

18.
In this study, a new value-function-based reinforcement learning (RL) algorithm, Local Update Dynamic Policy Programming (LUDPP), is proposed. It exploits the nature of smooth policy updates using the Kullback–Leibler divergence to update its value function locally, which considerably reduces the computational complexity. We first investigated the learning performance of LUDPP and of other algorithms without smooth policy updates on pendulum swing-up and n-DOF manipulator reaching tasks in simulation. Only LUDPP could efficiently and stably learn good control policies in high-dimensional systems with a limited number of training samples. In a real-world application, we applied LUDPP to control pneumatic artificial muscle (PAM) driven robots without knowledge of the model, which is challenging for traditional methods due to the high nonlinearities of the PAM's air pressure dynamics and mechanical structure. LUDPP successfully achieved one-finger control of the Shadow Dexterous Hand, a PAM-driven humanoid robot hand, with far lower computational resources compared with other conventional value-function-based RL algorithms.

19.
Automatica, 2014, 50(12): 3038-3053
This paper introduces a new class of multi-agent discrete-time dynamic games, known in the literature as dynamic graphical games. For these games, a local performance index is defined for each agent that depends only on the information locally available to that agent. Nash equilibrium policies and best-response policies are given in terms of the solutions to the discrete-time coupled Hamilton–Jacobi equations. Since in these games the interactions between the agents are prescribed by a communication graph structure, a new notion of Nash equilibrium has to be introduced; it is proved that this notion holds if all agents are in Nash equilibrium and the graph is strongly connected. A novel reinforcement learning value iteration algorithm is given to solve the dynamic graphical games in an online manner, along with its proof of convergence. The policies of the agents form a Nash equilibrium when all the agents in the neighborhood update their policies, and a best-response outcome when the agents in the neighborhood are kept constant. The paper brings together discrete Hamiltonian mechanics, distributed multi-agent control, optimal control theory, and game theory to formulate and solve these multi-agent dynamic graphical games. A simulation example shows the effectiveness of the proposed approach in a leader-synchronization case along with optimality guarantees.

20.
Research on Optimality Criteria for Reinforcement Learning
A reinforcement learning agent solves sequential decision problems by learning and planning with respect to an optimal policy, so how to define the optimality criterion for a policy is one of the core questions in reinforcement learning research. This paper discusses a series of optimality criteria drawn from dynamic programming, uses examples to examine the applicability, strengths, and weaknesses of each criterion for reinforcement learning, and analyzes the need to design reinforcement learning algorithms around the various criteria.
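Two of the standard optimality criteria from dynamic programming that such a discussion typically covers are the discounted-return criterion and the average-reward (gain) criterion; the textbook forms are given below, and the notation is not necessarily the paper's own.

```latex
% Discounted-return criterion (0 <= gamma < 1):
V_{\gamma}^{\pi}(s) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t+1} \,\middle|\, s_0 = s \right]
% Average-reward (gain) criterion:
\rho^{\pi}(s) = \lim_{N \to \infty} \frac{1}{N}\, \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{N-1} r_{t+1} \,\middle|\, s_0 = s \right]
```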
