Similar Literature
20 similar documents found
1.
Design of Heuristic Reward Functions in Reinforcement Learning Algorithms and Analysis of Their Convergence    (Total citations: 3; self-citations: 0; citations by others: 3)
(Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016)

2.
In this paper, an improved PID-neural network (IPIDNN) structure is proposed and applied to the critic and action networks of direct heuristic dynamic programming (DHDP). As an online learning algorithm of approximate dynamic programming (ADP), DHDP has demonstrated its applicability to large state and control problems. Theoretically, the DHDP algorithm requires access to full state feedback in order to obtain solutions to the Bellman optimality equation. Unfortunately, it is not always possible to access all the states in a real system. This paper proposes a solution by suggesting an IPIDNN configuration to construct the critic and action networks to achieve output feedback control. Since this structure can estimate the integrals and derivatives of measurable outputs, more system states are utilized and thus better control performance is expected. Compared with the traditional PIDNN, this configuration is flexible and easy to expand. Based on this structure, a gradient descent algorithm for this IPIDNN-based DHDP is presented. Convergence issues are addressed within a single learning time step and for the entire learning process. Some important insights are provided to guide the implementation of the algorithm. The proposed learning controller has been applied to a cart-pole system to validate the effectiveness of the structure and the algorithm.
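As a rough orientation to the kind of update DHDP performs, the sketch below shows a generic online actor-critic (HDP-style) step with linear approximators standing in for the paper's IPIDNN critic and action networks; the state dimension, learning rates, and the surrogate action-update rule are illustrative assumptions, not the authors' algorithm.

```python
# Minimal HDP-style critic/action update sketch (illustrative only; the IPIDNN
# structure and the exact DHDP update rules from the paper are not reproduced).
import numpy as np

rng = np.random.default_rng(0)
n_state, gamma = 4, 0.95

Wc = rng.normal(scale=0.1, size=n_state)   # critic weights: J(x) ~ Wc . x
Wa = rng.normal(scale=0.1, size=n_state)   # action weights: u = tanh(Wa . x)

def critic(x):
    return float(Wc @ x)

def action(x):
    return float(np.tanh(Wa @ x))

def hdp_step(x, x_next, r, lr_c=0.01, lr_a=0.01):
    """One online update: the critic reduces its TD error, and the action
    weights are nudged to lower the predicted cost-to-go (a crude surrogate
    for the backpropagated critic signal used in DHDP)."""
    global Wc, Wa
    td_error = critic(x) - (r + gamma * critic(x_next))  # Bellman residual
    Wc -= lr_c * td_error * x                            # critic gradient step
    u = action(x)
    Wa -= lr_a * critic(x_next) * (1.0 - u * u) * x      # heuristic action update
    return td_error
```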

3.
To meet the ever-increasing requirements on control accuracy, disturbance rejection, and robustness of satellite attitude control systems, a fuzzy neural network combined with a reinforcement learning algorithm is applied to satellite attitude control. This both solves the problem of adjusting the network parameters online without an accurate mathematical model of the controlled satellite and enables online learning of the controller without training samples. Simulation results compared with traditional PID control show that the reinforcement-learning-based three-axis stabilized satellite attitude control system not only meets the control accuracy requirements of the attitude control task but also effectively rejects disturbances, thus achieving the goal of online learning.

4.
This paper proposes a reinforcement fuzzy adaptive learning control network (RFALCON), constructed by integrating two fuzzy adaptive learning control networks (FALCON), each of which has a feedforward multilayer network and is developed for the realization of a fuzzy controller. One FALCON performs as a critic network (fuzzy predictor), the other as an action network (fuzzy controller). Using temporal difference prediction, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network performs a stochastic exploratory algorithm to adapt itself according to the internal reinforcement signal. An ART-based reinforcement structure/parameter-learning algorithm is developed for constructing the RFALCON dynamically. During the learning process, structure and parameter learning are performed simultaneously. RFALCON can construct a fuzzy control system through a reward/penalty signal. It has two important features: it reduces the combinatorial demands of system adaptive linearization, and it is highly autonomous.

5.
Q-learning Based on Cooperative Least Squares Support Vector Machines    (Total citations: 5; self-citations: 0; citations by others: 5)
To address the slow convergence of reinforcement learning systems, a Q-learning method based on cooperative least squares support vector machines is proposed for continuous state spaces and discrete action spaces. The Q-learning system consists of a least squares support vector regression machine (LS-SVRM) and a least squares support vector classification machine (LS-SVCM). The LS-SVRM approximates the mapping from state-action pairs to the value function, while the LS-SVCM approximates the mapping from the continuous state space to the discrete action space and provides the LS-SVRM with real-time, dynamic knowledge or advice (suggested action values) to promote learning of the value function. Simulation results on minimum-time mountain-car control show that, compared with a Q-learning system based on a single LS-SVRM, the method accelerates learning convergence and achieves better learning performance.
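The cooperative split described above (a regressor for the value function, a classifier for state-to-action advice) can be sketched as follows; ordinary SVR/SVC from scikit-learn stand in for the least-squares SVM variants, and the action set, kernels, and refit scheme are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative cooperative Q-learning loop: SVR approximates Q(s, a), SVC learns
# the greedy action per state and later serves as advice for action selection.
import numpy as np
from sklearn.svm import SVC, SVR

ACTIONS = np.array([-1.0, 0.0, 1.0])   # discrete mountain-car style actions (assumed)
q_model = SVR(kernel="rbf")            # stands in for the LS-SVRM
advice_model = SVC(kernel="rbf")       # stands in for the LS-SVCM

def q_all_actions(states):
    """Q-values of every discrete action for a batch of states (shape (n, d))."""
    return np.stack([q_model.predict(np.column_stack([states, np.full(len(states), a)]))
                     for a in ACTIONS], axis=1)

def fit_from_transitions(transitions, gamma=0.99, bootstrap=False):
    """Refit both models from (s, a, r, s_next) tuples; bootstrap only once the
    regressor has already been fitted at least once."""
    S = np.array([t[0] for t in transitions])
    A = np.array([t[1] for t in transitions])
    R = np.array([t[2] for t in transitions])
    S2 = np.array([t[3] for t in transitions])
    targets = R + gamma * q_all_actions(S2).max(axis=1) if bootstrap else R
    q_model.fit(np.column_stack([S, A]), targets)
    advice_model.fit(S, ACTIONS[q_all_actions(S).argmax(axis=1)])

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy choice; the classifier's suggestion replaces the argmax
    over the regressor, which is the 'advice' role described in the abstract."""
    if np.random.rand() < epsilon:
        return float(np.random.choice(ACTIONS))
    return float(advice_model.predict(state[None, :])[0])
```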

6.
Intelligent control of traffic signals is a hot topic in intelligent transportation research. To coordinate traffic adaptively in a more timely and effective way, this paper proposes a traffic signal control model based on distributed deep reinforcement learning. It adopts a deep neural network framework and improves model performance with a target network, double Q-learning, and a value distribution. High-dimensional real-time traffic information at the intersection is discretized and combined with the waiting time, queue length, delay time, and phase information of the corresponding lanes as the state input. With appropriate definitions of the phase sequence, actions, and rewards, the traffic signal control policy is learned online to realize adaptive control by the traffic signal agent. To validate the proposed algorithm, it is compared with three typical deep reinforcement learning algorithms under the same settings in SUMO (Simulation of Urban Mobility). Experimental results show that the distributed deep reinforcement learning algorithm is more efficient and robust for controlling the traffic signal agent and performs better in terms of average vehicle delay, travel time, queue length, and waiting time at the intersection.
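Of the stabilizers mentioned above, the target network and double Q-learning can be illustrated with the target computation below; the function names and array shapes are assumptions made for the sketch, not taken from the paper.

```python
# Double-Q targets with a separate target network: the online network selects
# the greedy action, the target network evaluates it (shapes are illustrative).
import numpy as np

def double_q_targets(q_online, q_target, rewards, next_states, gamma=0.99):
    """q_online / q_target: callables mapping a batch of states to a
    (batch, n_actions) array of Q-values."""
    greedy_actions = np.argmax(q_online(next_states), axis=1)       # select with online net
    evaluated = q_target(next_states)[np.arange(len(rewards)), greedy_actions]  # evaluate with target net
    return rewards + gamma * evaluated
```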

7.
As a powerful tool for solving nonlinear complex system control problems, model-free reinforcement learning hardly guarantees system stability in the early stage of learning, especially when highly complex learning components are applied. In this paper, a reinforcement learning framework imitating many cognitive mechanisms of the brain, such as attention, competition, and integration, is proposed to realize sample-efficient, self-stabilized online learning control. Inspired by the generation of consciousness in the human brain, multiple actors that work either competitively for the best interaction results or cooperatively for more accurate modeling and predictions are applied. A deep reinforcement learning implementation for challenging control tasks and a real-time control implementation of the proposed framework are respectively given to demonstrate the high sample efficiency and the capability of maintaining system stability in the online learning process without requiring an initial admissible control.

8.
严家政  专祥涛   《智能系统学报》2022,17(2):341-347
When traditional PID control algorithms are applied to nonlinear time-delay systems, parameter tuning and performance optimization are tedious and the control results are often unsatisfactory. To address this problem, a reinforcement-learning-based algorithm for automatic tuning and optimization of controller parameters is proposed. The algorithm introduces dynamic performance indices of the system to compute the reward function; by learning from the experience data of periodic step responses, it can tune and optimize the controller parameters online without identifying a concrete model of the controlled plant. Taking a tank liquid-level control system as the experimental plant, comparative experiments on parameter tuning and optimization were carried out for different types of PID controllers. The results show that, compared with traditional tuning methods, the proposed algorithm eliminates the tedious manual tuning process, effectively optimizes the controller parameters, reduces the overshoot of the controlled variable, and improves the dynamic response of the controller.
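A reward of the kind the abstract describes, built from dynamic step-response indices, might look like the sketch below; the specific metrics (overshoot, a simplified settling time, integral of absolute error) and their weights are assumptions for illustration, not the weights used in the paper.

```python
# Illustrative reward computed from a recorded step response y sampled at dt.
import numpy as np

def step_response_reward(y, setpoint, dt, w_overshoot=1.0, w_settle=0.1, w_iae=0.01):
    """Higher (less negative) is better; setpoint is assumed nonzero."""
    y = np.asarray(y, dtype=float)
    overshoot = max(0.0, float(np.max(y)) - setpoint) / setpoint
    err = np.abs(y - setpoint)
    inside_band = np.where(err < 0.02 * setpoint)[0]
    # First sample inside the +/-2% band: a simplification of true settling time.
    settling_time = inside_band[0] * dt if len(inside_band) else len(y) * dt
    iae = float(np.sum(err)) * dt            # integral of absolute error
    return -(w_overshoot * overshoot + w_settle * settling_time + w_iae * iae)
```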

9.
This research treats a bargaining process as a Markov decision process, in which a bargaining agent's goal is to learn the optimal policy that maximizes the total rewards it receives over the process. Reinforcement learning is an effective method for agents to learn how to determine actions at any time step in a Markov decision process. Temporal-difference (TD) learning is a fundamental method for solving the reinforcement learning problem, and it can tackle the temporal credit assignment problem. This research designs agents that apply TD-based reinforcement learning to deal with online bilateral bargaining with incomplete information. This research further evaluates the agents' bargaining performance in terms of the average payoff and settlement rate. The results show that agents using TD-based reinforcement learning are able to achieve good bargaining performance. This learning approach is sufficiently robust and convenient, hence it is suitable for online automated bargaining in electronic commerce.
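The core temporal-difference rule such agents rely on is compact; the tabular Q-learning form below is a minimal sketch, with the bargaining-specific state and action encodings left abstract (they are hypothetical here).

```python
# Minimal tabular TD/Q-learning update of the kind used by a TD-based agent.
from collections import defaultdict

Q = defaultdict(float)  # maps (state, action) -> estimated value

def td_update(state, action, reward, next_state, next_actions, alpha=0.1, gamma=0.95):
    """One-step update toward the bootstrapped target r + gamma * max_a Q(s', a)."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    td_error = reward + gamma * best_next - Q[(state, action)]
    Q[(state, action)] += alpha * td_error
    return td_error
```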

10.
Controlling chaos by GA-based reinforcement learning neural network    (Total citations: 12; self-citations: 0; citations by others: 12)
This paper proposes a TD (temporal difference) and GA (genetic algorithm) based reinforcement (TDGAR) neural learning scheme for controlling chaotic dynamical systems based on the technique of small perturbations. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to fulfil the reinforcement learning task. Structurally, the TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network for helping the learning of the other network, the action network, which determines the outputs (actions) of the TDGAR learning system. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. This can usually accelerate GA learning, since an external reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. By defining a simple external reinforcement signal, the TDGAR learning system can learn to produce a series of small perturbations to convert chaotic oscillations of a chaotic system into desired regular ones with periodic behavior. The proposed method is an adaptive search for the optimum control technique. Computer simulations on controlling two chaotic systems, i.e., the Henon map and the logistic map, have been conducted to illustrate the performance of the proposed method.
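The central trick, using the critic's internal reinforcement signal as the GA fitness so that generations can proceed between external rewards, can be sketched as below; the population encoding, selection scheme, and mutation scale are illustrative assumptions rather than the TDGAR operators.

```python
# One GA generation in which fitness comes from the critic's internal
# reinforcement signal rather than from a delayed external reward.
import numpy as np

rng = np.random.default_rng(1)

def ga_generation(population, internal_reinforcement, mutation=0.05):
    """population: (N, d) array of action-network weight vectors (N assumed even).
    internal_reinforcement: callable returning the critic's predicted
    reinforcement for a single weight vector."""
    fitness = np.array([internal_reinforcement(ind) for ind in population])
    order = np.argsort(fitness)[::-1]                       # best first
    survivors = population[order[: len(population) // 2]]   # keep the better half
    children = survivors + rng.normal(scale=mutation, size=survivors.shape)
    return np.vstack([survivors, children])
```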

11.
To improve the control performance of reinforcement learning, a reinforcement learning algorithm based on an RBF neural network with fractional-order gradient descent is proposed. The reinforcement learning system is composed of a critic neural network and an action neural network; by exploiting the memory and association capabilities of the neural networks, it learns to control an inverted pendulum, improves control accuracy, and drives the error toward zero until learning succeeds, and the stability of the closed-loop system is proved. Physical experiments on the inverted pendulum show that when the fractional order is larger, the differential effect is more pronounced and the angular velocity and velocity are controlled better, so their mean squared errors and mean absolute errors are smaller; when the fractional order is smaller, the integral effect is more pronounced and the tilt angle and displacement are controlled better, so their mean squared errors and mean absolute errors are smaller. Simulation results show that the proposed algorithm has good dynamic response, small overshoot, short settling time, high accuracy, and good generalization. It outperforms reinforcement learning based on an ordinary RBF neural network as well as traditional reinforcement learning, and it effectively accelerates the convergence of gradient descent and improves its control performance. After appropriate disturbances are introduced, the proposed algorithm can quickly self-adjust and return to a stable state, and the robustness and dynamic performance of the controller meet practical requirements.

12.
To address the high computational complexity and slow convergence of online approximate policy iteration reinforcement learning, a CMAC structure is introduced as the value function approximator, and a nonparametric approximate policy iteration reinforcement learning algorithm based on CMAC (NPAPI-CMAC) is proposed. The algorithm determines the CMAC generalization parameters by constructing a sample-collection process, determines the CMAC state partitioning through initial and extended partitions, and defines the reinforcement learning rate from the sample-count sets built with a quantized coding structure, thereby achieving fully automatic construction of the reinforcement learning structure and parameters. In addition, the algorithm adaptively adjusts the reinforcement learning parameters during learning using the delta rule and the nearest-neighbor idea, and uses a greedy policy to select among the results produced by an action voter. Simulation results on balancing control of a single inverted pendulum verify the effectiveness, robustness, and fast convergence of the algorithm.
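A CMAC value-function approximator of the kind introduced above is essentially tile coding with several offset tilings; the one-dimensional version below is a minimal sketch (the number of tilings, tile widths, and learning rate are illustrative, and the NPAPI-CMAC partitioning procedure itself is not reproduced).

```python
# Minimal one-dimensional CMAC (tile coding) value approximator.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_tiles=10, low=-1.0, high=1.0, lr=0.1):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.low, self.width = low, (high - low) / n_tiles
        self.w = np.zeros((n_tilings, n_tiles + 1))   # one weight table per tiling

    def _indices(self, x):
        # Each tiling is shifted by a fraction of a tile width.
        offsets = np.arange(self.n_tilings) / self.n_tilings * self.width
        return np.clip(((x - self.low + offsets) / self.width).astype(int),
                       0, self.n_tiles)

    def value(self, x):
        return self.w[np.arange(self.n_tilings), self._indices(x)].sum()

    def update(self, x, target):
        idx = self._indices(x)
        err = target - self.value(x)
        self.w[np.arange(self.n_tilings), idx] += self.lr * err / self.n_tilings
```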

13.

Dyslexia is a learning disorder in which individuals have significant reading difficulties. Previous studies found that using machine learning techniques in content supplements is vital in adapting the course concepts to the learners' educational level. However, to the best of our knowledge, no research has objectively applied machine learning methods to adaptive content generation. This study introduces an adaptive reinforcement learning framework known as RALF, built on Cellular Learning Automata (CLA), to generate content automatically for students with dyslexia. First, RALF generates online alphabet models as a simplified font. The CLA structure learns each rule of character generation asynchronously through the reinforcement learning cycle. Second, Persian words are generated algorithmically. This process also considers each character's state to decide the cursiveness of the alphabet and the cells' response to the environment. Finally, RALF can generate long texts and sentences using the embedded word-formation algorithm. The spaces between words are handled through the CLA neighboring states. In addition, RALF provides word pronunciation and several exams and games to improve the learning performance of people with dyslexia. The proposed reinforcement learning tool enhances the learning rate of students with dyslexia by almost 27% compared with the face-to-face approach. The findings of this research show the applicability of this approach to dyslexia treatment during the COVID-19 lockdown.


14.
We propose a novel adaptive optimal control paradigm inspired by Hebbian covariance synaptic adaptation, a preeminent model of learning and memory as well as other malleable functions in the brain. The adaptation is driven by the spontaneous fluctuations in the system input and output, the covariance of which provides useful information about the changes in the system behavior. The control structure represents a novel form of associative reinforcement learning in which the reinforcement signal is implicitly given by the covariance of the input-output (I/O) signals. Theoretical foundations for the paradigm are derived using Lyapunov theory and are verified by means of computer simulations. The learning algorithm is applicable to a general class of nonlinear adaptive control problems. This on-line direct adaptive control method benefits from a computationally straightforward design, proof of convergence, no need for complete system identification, robustness to noise and uncertainties, and the ability to optimize a general performance criterion in terms of system states and control signals. These attractive properties of Hebbian feedback covariance learning control lend themselves to future investigations into the computational functions of synaptic plasticity in biological neurons.
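One way to picture the covariance-driven adaptation described above is a running estimate of the input-output covariance that directly drives a controller gain; the scalar form, forgetting factor, and learning rate below are illustrative assumptions, not the Hebbian feedback covariance learning law derived in the paper.

```python
# Sketch of a covariance-driven gain update: fluctuations in input u and output y
# are tracked, and their running covariance adapts the gain (constants assumed).
class CovarianceLearner:
    def __init__(self, eta=0.01, forget=0.99):
        self.gain, self.eta, self.forget = 0.0, eta, forget
        self.u_mean = self.y_mean = self.cov = 0.0

    def update(self, u, y):
        # Exponentially weighted means and input-output covariance of the fluctuations.
        self.u_mean = self.forget * self.u_mean + (1 - self.forget) * u
        self.y_mean = self.forget * self.y_mean + (1 - self.forget) * y
        self.cov = (self.forget * self.cov
                    + (1 - self.forget) * (u - self.u_mean) * (y - self.y_mean))
        # Hebbian-covariance-style rule: adapt the gain in proportion to the covariance.
        self.gain += self.eta * self.cov
        return self.gain
```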

15.
This article is related to the research effort of constructing an intelligent agent, i.e., a computer system that is able to sense its environment (world), reason utilizing its internal knowledge, and execute actions upon the world (act). The specific part of this effort presented in this article is reinforcement learning, i.e., the process of acquiring new knowledge based upon an evaluative feedback, called reinforcement, received by the agent through interactions with the world. This article has two objectives: (1) to give a compact overview of reinforcement learning, and (2) to show that the evolution of the reinforcement learning paradigm has been driven by the need for more efficient learning through the addition of more structure to the learning agent. Therefore, both the main ideas of reinforcement learning are introduced, and structural solutions to reinforcement learning are reviewed. Several architectural enhancements of the RL paradigm are discussed. These include incorporation of state information in the learning process, architectural solutions to learning with delayed reinforcement, dealing with structurally changing worlds through utilization of multiple models of the world, and focusing attention of the learning agent through active perception. The paper closes with an overview of directions for applications and for future research in this area. © 1993 John Wiley & Sons, Inc.

16.
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.

17.
In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent's reward function in order to lead the agent to large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time to improve the breadth of planning and build higher-quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function optimized by the agent during planning. This information provides hints of where the agent might find high future rewards beyond its planning horizon, and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful to agent metareasoning in POMDPs (reflecting on agent knowledge and/or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results in a range of classic benchmark POMDP planning problems.
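The mechanism underlying PBRS is a shaping term of the form F(b, b') = γΦ(b') − Φ(b), which leaves the optimal policy unchanged; the sketch below applies it to belief states, with a negative-entropy potential chosen purely as an illustrative stand-in for the paper's metareasoning-based potentials.

```python
# Potential-based reward shaping over belief states (potential choice is assumed).
import numpy as np

def belief_potential(belief):
    """Hypothetical potential: higher for low-entropy (more certain) beliefs."""
    p = np.asarray(belief, dtype=float)
    p = p[p > 0]
    return float(np.sum(p * np.log(p)))   # equals minus the belief entropy

def shaped_reward(reward, belief, next_belief, gamma=0.95):
    # F(b, b') = gamma * Phi(b') - Phi(b); this form preserves the optimal policy.
    return reward + gamma * belief_potential(next_belief) - belief_potential(belief)
```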

18.
In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed by using online approximators (OLAs) for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities, an action network that is designed to produce the optimal signal and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used both for the action and critic, whereas any OLAs, such as radial basis functions, splines, fuzzy logic, etc., can be utilized. For the output-feedback counterpart, an additional NN is designated as the observer to estimate the unavailable system states, and thus, the separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system.

19.
The continuous action reinforcement learning automaton (CARLA) is improved, and the improved CARLA is combined with particle swarm optimization (PSO) to optimize PID parameters. Based on CARLA, a combined optimization learning model, CARLA-PSO, is built; it contains a CARLA learning loop and a PSO learning loop, selects between the two loops through an optimization strategy selector, and obtains optimal control by interacting with the environment. Simulation experiments on mold level control in continuous casting show that CARLA-PSO has high optimization efficiency and strong global search capability when optimizing PID parameters and achieves the desired control performance, indicating good application prospects.

20.
This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular both for offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
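The min-max value described above can be made concrete in a temporal-difference form in which the disturbance norm enters an augmented reward; the discrete-time sketch below only illustrates that structure (the symbols, sign convention, and gain 1/γ_d² are assumptions, not the learning rules derived in the letter).

```python
# Illustrative robust TD error: the controller tries to increase the value, the
# disturber tries to decrease it, and the squared disturbance norm is added to
# the reward so that large disturbances are "charged" to the disturbing agent.
import numpy as np

def robust_td_error(V, state, next_state, reward, disturbance, gamma=0.98, gamma_d=2.0):
    """V: callable value-function approximator mapping a state to a scalar.
    gamma_d loosely plays the role of an H-infinity attenuation level (assumed)."""
    w = np.asarray(disturbance, dtype=float)
    augmented_reward = reward + (w @ w) / gamma_d**2
    return augmented_reward + gamma * V(next_state) - V(state)
```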
