Similar Literature
20 similar records found
1.
李春贵 《计算机工程》2005,31(11):13-15
This paper studies prioritized-sweeping reinforcement learning. By defining a new eligibility trace, multi-step truncated temporal-difference (TD) learning is applied to prioritized-sweeping reinforcement learning with integrated planning, and the multi-step truncated TD error is used to define the sweeping priority. An improved prioritized-sweeping reinforcement learning algorithm is proposed and evaluated in simulation; the results show a clear improvement in learning efficiency.
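To make the mechanism concrete, the sketch below shows a tabular prioritized-sweeping update in which the priority of a state-action pair is set from a truncated multi-step TD error, as the abstract describes; the grid-world setting, the `ACTIONS` set, and all helper names are hypothetical and not taken from the paper.

```python
import heapq
from collections import defaultdict

GAMMA, ALPHA, THETA = 0.95, 0.1, 1e-3      # discount, step size, priority threshold
ACTIONS = range(4)                          # e.g. four grid-world moves (hypothetical)

Q = defaultdict(float)                      # Q[(state, action)]
model = {}                                  # model[(state, action)] = (reward, next_state)
predecessors = defaultdict(set)             # next_state -> {(state, action)}
pqueue = []                                 # max-priority queue (negated priorities)

def truncated_multistep_td_error(trajectory):
    """Truncated multi-step TD error for the first (s, a) of a short trajectory.
    trajectory: up to n transitions [(s, a, r, s_next), ...]."""
    s0, a0, _, _ = trajectory[0]
    g, discount = 0.0, 1.0
    for (_, _, r, s_next) in trajectory:
        g += discount * r
        discount *= GAMMA
    g += discount * max(Q[(s_next, b)] for b in ACTIONS)   # bootstrap at the truncation point
    return g - Q[(s0, a0)]

def record_and_prioritize(trajectory):
    """Store the one-step model and enqueue the pair with the multi-step error as its priority."""
    s, a, r, s2 = trajectory[0]
    model[(s, a)] = (r, s2)
    predecessors[s2].add((s, a))
    p = abs(truncated_multistep_td_error(trajectory))
    if p > THETA:
        heapq.heappush(pqueue, (-p, (s, a)))

def sweep(n_updates=10):
    """Prioritized sweeping: update the most urgent pairs first, then push their predecessors."""
    for _ in range(n_updates):
        if not pqueue:
            break
        _, (s, a) = heapq.heappop(pqueue)
        r, s2 = model[(s, a)]
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        for (ps, pa) in predecessors[s]:
            pr, _ = model[(ps, pa)]
            p = abs(pr + GAMMA * max(Q[(s, b)] for b in ACTIONS) - Q[(ps, pa)])
            if p > THETA:
                heapq.heappush(pqueue, (-p, (ps, pa)))
```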

2.
任燚  陈宗海 《控制与决策》2006,21(4):430-434
In multi-robot systems, conflicts grow exponentially as the number of robots increases, and deadlocks may even occur. This paper proposes a reinforcement learning algorithm based on process rewards and prioritized sweeping as a conflict-resolution strategy for multi-robot systems. For a typical multi-robot labeled group foraging task, a simulation study is carried out using the number of collected targets as the system performance metric and the number of learning episodes to convergence as the learning-speed metric, and the approach is compared with nine other algorithms, including Q-learning with a global reward. The results show that the proposed algorithm based on process rewards and prioritized sweeping significantly reduces conflicts, avoids deadlock, and improves overall system performance.
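As a rough illustration of the distinction between the global (result) reward and the process reward the authors advocate, the fragment below sketches how a foraging robot's reward signal might credit intermediate progress events rather than only the episode outcome; the event names and numerical values are invented for illustration and are not the paper's.

```python
def result_reward(event):
    """Traditional result/global reward: only the final outcome earns credit."""
    return 1.0 if event == "target_delivered_home" else 0.0

def process_reward(event):
    """Process reward: intermediate steps of the foraging process are also rewarded,
    so useful behaviour is reinforced even when conflicts make completions rare."""
    return {
        "target_detected":             0.1,
        "target_grasped":              0.3,
        "target_delivered_home":       1.0,
        "collision_with_other_robot": -0.2,   # discourages the conflicts the paper targets
    }.get(event, 0.0)
```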

3.
任燚  陈宗海 《计算机仿真》2005,22(10):183-186
For a typical foraging task, this paper uses computer simulation to show directly how noise affects the performance of a robot system. On that basis, it proposes replacing the traditional result reward with a process reward and combining it with a prioritized-sweeping reinforcement learning algorithm as a noise-mitigation strategy. Compared against four other algorithms, including result-reward Q-learning, the reinforcement learning algorithm based on process rewards and prioritized sweeping significantly reduces the impact of noise and improves overall system performance.

4.
This article discusses the use of repetitive control for output reference tracking in linear time-varying discrete time systems with both repetitive and non-repetitive noise components. The design of such controllers is formulated as a lifted linear stochastic output feedback problem on which the mature techniques of discrete linear control may be applied. In many modern applications, the large size of the system matrices in such a control problem inhibits the application of standard solvers and optimisation techniques. For linear quadratic Gaussian (LQG) problems, the matrices of the lifted feedback problem can be fitted into the recently developed sequentially semi-separable structure. Innovative numerical solutions are developed that have 𝒪(N) computational complexity (where N is the trial length) in both controller synthesis and implementation, comparable to that of many non-lifted and Fourier transform based learning control methods. Moreover, within this formulation, the system is allowed to vary over the learning cycle, closed-loop stability is guaranteed, and stochastic noise and disturbances are handled in an LQG sense.
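For readers unfamiliar with the lifted setting, the sketch below builds the dense lifted trial matrix G (so that the stacked outputs satisfy y = G u for zero initial state) for a linear time-varying system. This is only the generic lifting step, not the paper's sequentially semi-separable solver; the dense construction shown here is exactly the large object that the paper's structured 𝒪(N) methods avoid forming explicitly.

```python
import numpy as np

def lift_ltv_system(A, B, C, N):
    """Lifted trial matrix for x[k+1] = A[k] x[k] + B[k] u[k], y[k] = C[k] x[k] (D = 0).

    A, B, C are length-N lists of matrices; the returned G maps the stacked input
    vector of one trial to the stacked output vector, y = G @ u."""
    ny, nu = C[0].shape[0], B[0].shape[1]
    G = np.zeros((N * ny, N * nu))
    for j in range(N):                       # column block: effect of the input u[j]
        x_impulse = B[j]                     # state one step after the impulse
        for i in range(j + 1, N):            # strictly causal: u[j] influences y[i] for i > j
            G[i * ny:(i + 1) * ny, j * nu:(j + 1) * nu] = C[i] @ x_impulse
            x_impulse = A[i] @ x_impulse
    return G
```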

5.
Reinforcement learning is an optimisation technique for applications like control or scheduling problems. It is used in learning situations where success and failure of the system are the only training information. Unfortunately, we have to pay a price for this powerful ability: long training times and the instability of the learning process are not tolerable for industrial applications with large continuous state spaces. From our point of view, the integration of prior knowledge is a key mechanism for making autonomous learning practicable for industrial applications. The learning control architecture Fynesse provides a unified view of the integration of prior control knowledge in the reinforcement learning framework. In this way, other approaches in this area can be embedded into Fynesse. The key features of Fynesse are (1) the integration of prior control knowledge like linear controllers, control characteristics or fuzzy controllers, (2) autonomous learning of control strategies and (3) the interpretation of learned strategies in terms of fuzzy control rules. The benefits and problems of different methods for the integration of a priori knowledge are demonstrated in empirical studies. The research project Fynesse was supported by the Deutsche Forschungsgemeinschaft (DFG).

6.
王云鹏  郭戈 《自动化学报》2019,45(12):2366-2377
Existing tram signal priority control systems suffer from several problems, such as being unable to adapt to real-time traffic changes and requiring complex optimization. This paper proposes a tram signal priority control strategy based on deep reinforcement learning. Without relying on complex intersection traffic models, it takes real-time traffic information as input and continuously and dynamically adjusts the traffic signals throughout the tram's passage. It jointly considers the travel demands of trams and general vehicles, reducing the delay of general vehicles while keeping the tram from stopping as far as possible. A deep Q-network algorithm is used to solve the problem, with a dueling architecture, a double Q-network, and a weighted experience replay pool used to improve learning performance. Experiments based on SUMO show that the model effectively improves the traffic efficiency of trams and general vehicles together.
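The abstract credits part of the learning performance to a dueling architecture and a double Q-network. The PyTorch fragment below sketches those two ingredients in isolation; the state dimension, layer sizes and action count are placeholders, and the traffic-specific state encoding and the weighted replay pool are not shown.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim=32, n_actions=4, hidden=128):   # sizes are illustrative
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

def double_q_target(online, target, reward, next_state, gamma=0.99):
    """Double-Q target: select the next action with the online network,
    evaluate it with the target network."""
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)
        return reward + gamma * target(next_state).gather(1, best_action).squeeze(1)
```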

7.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
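A minimal sketch of the value-function initialization idea described above, assuming each stored policy's value function is kept as a per-state vector of reward-feature values, so that its value under new reward weights w is the dot product w·V_feat(s); the container layout and names are assumptions, not the paper's implementation.

```python
import numpy as np

def initialize_value_function(stored_feature_values, new_reward_weights, states):
    """Initialization for a new SMDP whose reward is linear in shared reward features.

    stored_feature_values: list of dicts, one per previously solved SMDP,
                           mapping state -> vector of reward-feature value functions.
    Returns V0 with V0[s] = max over stored policies of w . V_feat(s)."""
    return {
        s: max(float(np.dot(new_reward_weights, vf[s])) for vf in stored_feature_values)
        for s in states
    }
```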

8.
This paper describes an explanation-based learning (EBL) system based on a version of Newell, Shaw, and Simon's LOGIC-THEORIST (LT). Results of applying this system to propositional calculus problems from Principia Mathematica are compared with results of applying several other versions of the same performance element to these problems. The primary goal of this study is to characterize and analyze differences between non-learning, rote learning (LT's original learning method), and EBL. Another aim is to provide a characterization of the performance of a simple problem solver in the context of the Principia problems, in the hope that these problems can be used as a benchmark for testing improved learning methods, just as problems like chess and the eight puzzle have been used as benchmarks in research on search methods.

9.
We introduce a sensitivity-based view to the area of learning and optimization of stochastic dynamic systems. We show that this sensitivity-based view provides a unified framework for many different disciplines in this area, including perturbation analysis, Markov decision processes, reinforcement learning, identification and adaptive control, and singular stochastic control; and that this unified framework applies to both the discrete event dynamic systems and continuous-time continuous-state systems. Many results in these disciplines can be simply derived and intuitively explained by using two performance sensitivity formulas. In addition, we show that this sensitivity-based view leads to new results and opens up new directions for future research. For example, the nth bias optimality of Markov processes has been established and the event-based optimization may be developed; this approach has computational and other advantages over the state-based approaches.
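For context, the "two performance sensitivity formulas" in this line of work are usually the performance-difference and performance-derivative formulas; in the ergodic finite Markov chain notation (stationary distribution π, transition matrix P, reward vector f, performance potential g) they take roughly the following form, though the exact statement depends on the assumptions made in the paper.

```latex
% Performance-difference formula between two chains (P, f) and (P', f'):
\eta' - \eta \;=\; \pi' \big[ (P' - P)\, g + (f' - f) \big]

% Performance-derivative formula along the direction (P' - P,\; f' - f):
\left.\frac{d\eta_\delta}{d\delta}\right|_{\delta = 0} \;=\; \pi \big[ (P' - P)\, g + (f' - f) \big]
```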

10.
To address the slow convergence of table-based Dyna optimization algorithms in large state spaces, the difficulty of representing the environment model, and the lag in adapting to changing environments, a new heuristic Dyna optimization algorithm using approximate model representation (HDyna-AMR) is proposed. It approximates the Q-value function with a linear function and solves for the optimal value function by gradient descent. HDyna-AMR consists of a learning phase and a planning phase. In the learning phase, samples of the agent's interaction with the environment are used to approximate the environment model and to record feature occurrence frequencies; in the planning phase, the value function is learned by planning over the approximate model, with an extra reward assigned according to the feature frequencies recorded during model approximation. The convergence of HDyna-AMR is proved theoretically. The algorithm is applied to an extended Boyan chain problem and the Mountain Car problem. The experimental results show that HDyna-AMR learns the optimal policy in both discrete and continuous state spaces, and that, compared with Dyna-LAPS (Dyna-style planning with linear approximation and prioritized sweeping) and Sarsa(λ), it converges faster and corrects its approximate model promptly when the environment changes.
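The sketch below captures the two-phase structure described in the abstract, with a linear Q approximation updated by semi-gradient descent in the learning phase and an extra reward derived from recorded feature frequencies in the planning phase; the model interface, the feature counting, and the direction of the frequency-based bonus (favouring rarely seen features) are assumptions for illustration, not the authors' exact HDyna-AMR formulation.

```python
import numpy as np

class LinearDynaAgent:
    """Dyna-style agent with a linear Q approximation (illustrative sketch)."""

    def __init__(self, n_features, n_actions, alpha=0.05, gamma=0.99, kappa=0.01):
        self.w = np.zeros((n_actions, n_features))   # linear Q weights: Q(s, a) = w[a] . phi(s)
        self.counts = np.zeros(n_features)           # feature occurrence frequencies
        self.alpha, self.gamma, self.kappa = alpha, gamma, kappa

    def q(self, phi):
        return self.w @ phi

    def learn(self, phi, a, r, phi_next):
        """Learning phase: semi-gradient TD update and feature-frequency bookkeeping."""
        self.counts += (phi > 0)
        td = r + self.gamma * np.max(self.q(phi_next)) - self.w[a] @ phi
        self.w[a] += self.alpha * td * phi

    def plan(self, simulate, sample_start, n_steps=20):
        """Planning phase: replay simulated transitions from the approximate model,
        adding an extra reward based on the recorded feature frequencies
        (here an exploration-style bonus for rarely seen features -- an assumption)."""
        for _ in range(n_steps):
            phi, a = sample_start()                   # hypothetical sampler of (features, action)
            r, phi_next = simulate(phi, a)            # hypothetical approximate environment model
            bonus = self.kappa / np.sqrt(1.0 + self.counts @ (phi > 0))
            td = (r + bonus) + self.gamma * np.max(self.q(phi_next)) - self.w[a] @ phi
            self.w[a] += self.alpha * td * phi
```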

11.
Sequential decision making in uncertain environments is one of the main topics of reinforcement learning research, where the agent's goal is to maximize the cumulative reward obtained while interacting with the environment. Direct learning methods converge slowly when searching for an optimal policy, whereas the Dyna architecture, which integrates learning and planning in parallel, can improve convergence efficiency. To further improve the convergence speed and accuracy of the traditional Dyna architecture, the Dyna-PS algorithm is proposed and its convergence is proved theoretically. In the planning component of the Dyna architecture, the algorithm applies the idea of prioritized sweeping: states with high priority values are updated first, and the irrelevant and uninformative state updates of conventional value iteration and policy iteration are pruned, which improves planning efficiency and thus further improves the performance of Dyna-style algorithms. Applied to a series of classic planning problems, Dyna-PS shows faster convergence, higher accuracy, and strong robustness to growth of the state space.

12.
Locally weighted learning (LWL) is a class of techniques from nonparametric statistics that provides useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of robotic systems. This paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks. We discuss two major classes of LWL, memory-based LWL and purely incremental LWL that does not need to remember any data explicitly. In contrast to the traditional belief that LWL methods cannot work well in high-dimensional spaces, we provide new algorithms that have been tested on up to 90 dimensional learning problems. The applicability of our LWL algorithms is demonstrated in various robot learning examples, including the learning of devil-sticking, pole-balancing by a humanoid robot arm, and inverse-dynamics learning for a seven and a 30 degree-of-freedom robot. In all these examples, the application of our statistical neural networks techniques allowed either faster or more accurate acquisition of motor control than classical control engineering.
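As a point of reference for the memory-based class of LWL mentioned above, here is a compact locally weighted regression routine (Gaussian kernel, weighted least squares around the query point); it illustrates only the basic idea and is not the incremental, high-dimensional algorithm the paper develops.

```python
import numpy as np

def locally_weighted_regression(X, y, query, bandwidth=1.0, ridge=1e-6):
    """Memory-based locally weighted regression: fit a weighted linear model
    around the query point and predict there. X: (n, d), y: (n,), query: (d,)."""
    # Gaussian kernel weights centred at the query
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    # Weighted least squares with a bias term and a small ridge for numerical stability
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    A = Xb.T @ W @ Xb + ridge * np.eye(Xb.shape[1])
    beta = np.linalg.solve(A, Xb.T @ W @ y)
    return np.append(query, 1.0) @ beta

# Usage: noisy sine data, prediction at a single query point
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(locally_weighted_regression(X, y, query=np.array([1.0]), bandwidth=0.5))
```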

13.
14.
李凯文  张涛  王锐  覃伟健  贺惠晖  黄鸿 《自动化学报》2021,47(11):2521-2537
Combinatorial optimization problems arise widely in defense, transportation, industry, and everyday life. For decades, classical operations research methods have been the main tools for solving them, but as problem sizes in practical applications keep growing and real-time solution requirements become ever stricter, traditional operations research algorithms face heavy computational pressure and can hardly solve combinatorial optimization problems online. In recent years, with the rapid development of deep learning, the notable achievements of deep reinforcement learning in domains such as Go and robotics have demonstrated its strong learning and sequential decision-making abilities. Accordingly, many new methods that apply deep reinforcement learning to combinatorial optimization have emerged, offering fast solution times and strong model generalization, and providing a completely new way of approaching these problems. This paper therefore reviews recent theoretical methods and application studies on solving combinatorial optimization problems with deep reinforcement learning, summarizes the basic principles, related methods, and applications, and points out several open problems that the field urgently needs to address.

15.
16.
Tapani  Matti 《Neurocomputing》2009,72(16-18):3704
This paper studies identification and model predictive control in nonlinear hidden state-space models. Nonlinearities are modelled with neural networks and system identification is done with variational Bayesian learning. In addition to the robustness of control, the stochastic approach allows for various control schemes, including combinations of direct and indirect controls, as well as using probabilistic inference for control. We study the noise-robustness, speed, and accuracy of three different control schemes as well as the effect of changing horizon lengths and initialisation methods using a simulated cart–pole system. The simulations indicate that the proposed method is able to find a representation of the system state that makes control easier, especially under high noise.

17.
Munos  Rémi 《Machine Learning》2000,40(3):265-299
This paper proposes a study of Reinforcement Learning (RL) for continuous state-space and time control problems, based on the theoretical framework of viscosity solutions (VSs). We use the method of dynamic programming (DP) which introduces the value function (VF), expectation of the best future cumulative reinforcement. In the continuous case, the value function satisfies a non-linear first (or second) order (depending on the deterministic or stochastic aspect of the process) differential equation called the Hamilton-Jacobi-Bellman (HJB) equation. It is well known that there exists an infinity of generalized solutions (differentiable almost everywhere) to this equation, other than the VF. We show that gradient-descent methods may converge to one of these generalized solutions, thus failing to find the optimal control. In order to solve the HJB equation, we use the powerful framework of viscosity solutions and state that there exists a unique viscosity solution to the HJB equation, which is the value function. Then, we use another main result of VSs (their stability when passing to the limit) to prove the convergence of numerical approximation schemes based on finite difference (FD) and finite element (FE) methods. These methods discretize, at some resolution, the HJB equation into a DP equation of a Markov Decision Process (MDP), which can be solved by DP methods (thanks to a strong contraction property) if all the initial data (the state dynamics and the reinforcement function) were perfectly known. However, in the RL approach, as we consider a system in interaction with some a priori (at least partially) unknown environment, which learns from experience, the initial data are not perfectly known but have to be approximated during learning. The main contribution of this work is to derive a general convergence theorem for RL algorithms when one uses only approximations (in a sense of satisfying some weak contraction property) of the initial data. This result can be used for model-based or model-free RL algorithms, with off-line or on-line updating methods, for deterministic or stochastic state dynamics (though this latter case is not described here), and based on FE or FD discretization methods. It is illustrated with several RL algorithms and one numerical simulation for the Car on the Hill problem.
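For the deterministic, discounted case discussed in the paper, the first-order HJB equation satisfied by the value function takes, up to notational conventions, the following form (state dynamics ẋ = f(x, u), reinforcement r(x, u), discount factor γ ∈ (0, 1)).

```latex
V(x)\,\ln\frac{1}{\gamma} \;=\; \sup_{u \in U} \Big[\, r(x,u) + \nabla V(x) \cdot f(x,u) \,\Big]
```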

18.
Efficient and inefficient ant coverage methods
Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build. This makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Ant robots cannot use conventional planning methods due to their limited sensing and computational capabilities. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, nor plan complete paths. The ant robots do not even need to be localized, which completely eliminates solving difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing this (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising. We show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations even in simple terrains that correspond to (planar) undirected trees.
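The two real-time search methods compared in the article differ only in how the marking of the cell being left is updated. The sketch below shows two such update rules in their simplest grid form (a node-counting rule and an LRTA*-style value-update rule); movement, sensing, and tie-breaking details from the article are omitted, and the function and variable names are invented.

```python
def ant_step(pos, markings, neighbors, rule="value_update"):
    """One move of an ant robot that senses only local markings.

    markings: dict cell -> value (0 where unvisited); neighbors(pos): list of adjacent cells.
    Both rules move to a least-marked neighbor; they differ only in the update of
    the marking left behind at the current cell."""
    succ = min(neighbors(pos), key=lambda c: markings.get(c, 0))
    if rule == "node_counting":
        markings[pos] = markings.get(pos, 0) + 1        # count one more visit to this cell
    else:                                               # LRTA*-style value update
        markings[pos] = 1 + markings.get(succ, 0)       # one more than the chosen successor
    return succ
```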

19.
Elevator Group Control Using Multiple Reinforcement Learning Agents
Crites  Robert H.  Barto  Andrew G. 《Machine Learning》1998,33(2-3):235-262
Recent algorithmic and theoretical advances in reinforcement learning (RL) have attracted widespread interest. RL algorithms have appeared that approximate dynamic programming on an incremental basis. They can be trained on the basis of real or simulated experiences, focusing their computation on areas of state space that are actually visited during control, making them computationally tractable on very large problems. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this paper we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems. Elevator group control serves as our testbed. It is a difficult domain posing a combination of challenges not seen in most multi-agent learning research to date. We use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reward signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of multi-agent RL on a very large scale stochastic dynamic optimization problem of practical utility.

20.
Provided the state space satisfies certain structural conditions, the complex original MDP is hierarchically decomposed, by partitioning the dimensions of the state space, into a set of simpler MDP or SMDP subproblems, and the hierarchical structure is refined online. Embedding different reinforcement learning methods in the hierarchy yields different hierarchical learners. The proposed method retains the advantages of hierarchical reinforcement learning, such as fast learning and easy sharing, while reducing dependence on prior knowledge and alleviating the problem of sparse rewards early in learning.
