Similar Documents
1.

To deepen reinforcement learning research and broaden its range of practical applications, this paper classifies reinforcement learning from the perspective of its theoretical foundation: the representation and use of knowledge. Classical stochastic reinforcement learning, fuzzy reinforcement learning, qualitative reinforcement learning, and grey reinforcement learning are examined and compared in detail. Finally, the development of reinforcement learning is discussed from the perspective of knowledge representation and use.


2.
Deep reinforcement learning is a hot topic in artificial intelligence research. As the field matures, its shortcomings have gradually become apparent: low data efficiency, weak generalization, difficult exploration, and a lack of reasoning and representation capabilities. These problems severely limit the application of deep reinforcement learning methods to real-world problems, and knowledge transfer is a highly effective way to address them. From the perspective of deep reinforcement learning, this paper discusses how knowledge transfer can accelerate agent training and cross-domain transfer, analyzes the forms in which knowledge exists in deep reinforcement learning and the ways it acts, classifies and summarizes knowledge transfer methods according to the basic components of reinforcement learning, and finally summarizes the open problems and development directions of knowledge transfer in deep reinforcement learning in terms of algorithms, theory, and applications.

3.
钱煜, 俞扬, 周志华. 《软件学报》, 2013, 24(11): 2667-2675
Reinforcement learning enables an agent to make correct short-term decisions by learning from past decision feedback, so as to maximize its cumulative reward. Previous studies have found that reward shaping, which replaces the true environment reward with a simple, easy-to-learn surrogate reward function (the reward shaping function), can effectively improve reinforcement learning performance. However, reward shaping functions are usually built from domain knowledge or demonstrations of an optimal policy, both of which require costly expert involvement. This work studies whether an effective reward shaping function can be learned automatically during the reinforcement learning process. Reinforcement learning algorithms typically collect a large number of samples during learning; although many of these samples are failed attempts, they may still provide useful information for constructing a shaping function. A new optimal-policy-invariance condition for reward shaping is proposed, and on this basis the RFPotential method is introduced to learn reward shaping from self-generated samples. Experiments on multiple reinforcement learning algorithms and problems show that the method can accelerate the reinforcement learning process.
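As a rough illustration of the mechanism this abstract builds on, the sketch below shows classical potential-based reward shaping, in which a shaping term gamma*Phi(s') - Phi(s) is added to the environment reward without changing the optimal policy. The grid-world potential and tabular Q-learning update are hypothetical stand-ins for illustration, not the RFPotential method itself.

```python
from collections import defaultdict
import numpy as np

GOAL = np.array([9, 9])

def potential(state):
    # Hypothetical potential function: negative distance to a goal cell.
    return -float(np.linalg.norm(np.asarray(state) - GOAL))

def shaped_reward(r, s, s_next, gamma=0.99):
    # Potential-based shaping F(s, s') = gamma * Phi(s') - Phi(s); adding F
    # to the environment reward is known to preserve the optimal policy.
    return r + gamma * potential(s_next) - potential(s)

Q = defaultdict(lambda: defaultdict(float))  # Q[state][action]

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Standard tabular Q-learning step, but on the shaped reward.
    target = shaped_reward(r, s, s_next, gamma) + \
             gamma * max(Q[s_next][b] for b in actions)
    Q[s][a] += alpha * (target - Q[s][a])

q_update((0, 0), "right", 0.0, (0, 1), ["up", "down", "left", "right"])
```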

4.
As a novel machine learning method, deep reinforcement learning combines deep learning with reinforcement learning, enabling an agent to perceive information in high-dimensional spaces and to use that information to train a model and make decisions. Because deep reinforcement learning algorithms are general and effective, they have been studied extensively and applied in many areas of everyday life. This paper first gives an overview of deep reinforcement learning research and introduces its theoretical foundations; it then presents value-based and policy-based deep reinforcement learning algorithms respectively and discusses their application prospects; finally, it summarizes related work and offers an outlook.
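To make the two families the abstract surveys concrete, the hedged sketch below contrasts a value-based update (regressing Q(s, a) toward a temporal-difference target) with a policy-based update (weighting log-probabilities by the observed return). The toy networks, dimensions, and return value are placeholders, not code from any surveyed system.

```python
import torch
import torch.nn as nn

gamma = 0.99
s, s_next = torch.randn(4), torch.randn(4)   # toy 4-dim states
a, r, done = 1, 1.0, False

# Value-based update (DQN-style): regress Q(s, a) toward the TD target.
q_net = nn.Linear(4, 2)                      # toy Q-network over 2 actions
td_target = r + gamma * q_net(s_next).max().detach() * (not done)
value_loss = (q_net(s)[a] - td_target) ** 2

# Policy-based update (REINFORCE-style): raise the log-probability of an
# action in proportion to the return G observed after taking it.
policy_net = nn.Linear(4, 2)
dist = torch.distributions.Categorical(logits=policy_net(s))
action = dist.sample()
G = 5.0                                      # hypothetical Monte Carlo return
policy_loss = -dist.log_prob(action) * G
```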

5.
Deep hierarchical reinforcement learning is an important research direction within deep reinforcement learning, focusing on problems that classical deep reinforcement learning struggles with, such as sparse rewards, sequential decision making, and weak transfer ability. Its core idea is to build a multi-level reinforcement learning policy according to the hierarchical principle: temporally abstract representations compose fine-grained low-level actions into coarse-grained, semantically meaningful high-level actions, decomposing a complex problem into several simpler subproblems. In recent years, deep hierarchical reinforcement learning has made substantial progress and has been applied in areas such as visual navigation, natural language processing, recommendation systems, and video captioning. This paper first introduces the theoretical foundations of hierarchical reinforcement learning; it then describes the core techniques of deep hierarchical reinforcement learning, including hierarchical abstraction and common experimental environments; it analyzes skill-based and subgoal-based deep hierarchical reinforcement learning frameworks in detail and compares the research status and development trends of each class of algorithms; it then surveys applications in several real-world domains; finally, it offers an outlook and summary.
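The temporal abstraction this abstract describes is commonly formalized as options (initiation set, intra-option policy, termination condition). A minimal sketch follows, assuming an environment whose step(action) returns (state, reward, done); the names and loop are illustrative, not code from any surveyed system.

```python
import random
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Option:
    # The options formalism models a temporally extended action as a triple:
    initiation: Callable[[Any], bool]     # I: states where the option may start
    policy: Callable[[Any], Any]          # pi: intra-option low-level policy
    termination: Callable[[Any], float]   # beta: stopping probability per state

def run_option(env, state, option, gamma=0.99):
    # Execute the low-level policy until the termination condition fires,
    # returning the discounted reward of the whole high-level action.
    total, discount = 0.0, 1.0
    while True:
        state, reward, done = env.step(option.policy(state))
        total += discount * reward
        discount *= gamma
        if done or random.random() < option.termination(state):
            return state, total
```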

6.
Inter-firm knowledge transfer is a complex evolutionary process. Existing research focuses mainly on theoretical analysis, and results that study this evolution through modeling and simulation remain scarce. Starting from task requirements, this paper proposes MASKT (multi-agent system knowledge transfer), a multi-agent model of inter-firm knowledge transfer. The model incorporates a reinforcement learning mechanism and a learning-partner selection mechanism, and evaluates the effect of knowledge transfer by the knowledge benefit. Simulation results show that reinforcement learning and partner selection drive firms' knowledge benefit toward its maximum, and that the relationship between the inter-firm knowledge gap and knowledge benefit has an inverted-U shape.
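The internals of MASKT are not reproduced here, but the reinforcement-learning and partner-selection mechanisms it describes can be pictured with a hypothetical bandit-style sketch in which one firm learns which partner yields the highest knowledge benefit. The inverted-U payoff in the knowledge gap is an assumption made to mirror the reported finding, not the paper's actual payoff function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms = 10
knowledge = rng.uniform(0, 1, n_firms)   # hypothetical knowledge stocks
value = np.zeros(n_firms)                # learned value of each partner

def knowledge_benefit(me, partner):
    # Hypothetical inverted-U payoff in the knowledge gap: transfer pays most
    # at a moderate gap, little when firms are too similar or too far apart.
    gap = abs(knowledge[partner] - knowledge[me])
    return gap * (1.0 - gap)

me, alpha, eps = 0, 0.1, 0.1
for _ in range(1000):
    if rng.random() < eps:                    # explore a random partner
        partner = int(rng.integers(n_firms))
    else:                                     # exploit the best-valued partner
        partner = int(np.argmax(value))
    reward = knowledge_benefit(me, partner)
    value[partner] += alpha * (reward - value[partner])  # bandit-style update
```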

7.
刘晓聪, 王华珍, 何霆, 缑锦, 陈坚. 《计算机应用研究》, 2021, 38(7): 1930-1936, 1946
To keep abreast of the state of research on medical text representation learning, this paper systematically and comprehensively reviews existing work. First, existing techniques are classified by technical paradigm into statistical learning-based methods, knowledge graph representation-based methods, and graph representation-based methods, and the mainstream methods and representative results in each class are summarized. Then, a quantitative and qualitative evaluation framework is proposed, and the quality evaluation methods for medical text representation learning are systematically organized and summarized. Finally, research trends in medical text representation learning are discussed.

8.
Using ASP.NET technology, with C# 2005 and JavaScript programming, and combining text, images, sound, and animation, a highly reusable B/S-architecture online quiz (level-passing) system was developed. It is an effective means of using the network to reinforce knowledge retention, and it represents a useful exploration of ways to exploit the learning functions of computer networks.

9.
Reinforcement learning based on causal modeling is increasingly popular in intelligent control. Causal techniques can uncover structural causal knowledge in control systems and provide an interpretable framework that allows humans to intervene in the system and analyze the feedback. Quantifying the effect of interventions lets an agent evaluate policy performance in complex situations (e.g., in the presence of confounders or in non-stationary environments) and improves the generalization of algorithms. This paper surveys recent progress in reinforcement learning control based on causal modeling (causal reinforcement learning for short) and clarifies its connections to the modules of a control system. It first introduces the basic concepts and classical algorithms of reinforcement learning, and discusses the shortcomings of reinforcement learning algorithms in explaining causal relationships among variables and in policy generalization under transfer. It then reviews the research directions of causal theory, mainly causal effect estimation and causal discovery, which offer feasible solutions to these shortcomings. Next, it explains how causal theory can improve control and decision making in reinforcement learning systems, summarizes four research directions of causal reinforcement learning and their progress, and collects practical application scenarios. Finally, it concludes the survey, points out the limitations and open problems of causal reinforcement learning, and looks ahead to future research directions.

10.
Research Status and Development Trends of Reinforcement Learning in Multi-Agent Systems
This paper surveys and discusses the research status, key techniques, problems, and development trends of reinforcement learning and its application in multi-agent systems, attempting to identify the current research focus and future directions. The main contents include: (1) the framework of reinforcement learning; (2) several representative reinforcement learning methods; (3) applications and problems of reinforcement learning in multi-agent systems. Finally, the challenges of applying reinforcement learning in multi-agent systems are discussed.

11.
AUTOMATIC COMPLEXITY REDUCTION IN REINFORCEMENT LEARNING
High dimensionality of the state representation is a major limitation for scale-up in reinforcement learning (RL). This work derives knowledge for complexity reduction from partial solutions and provides algorithms for automated dimension reduction in RL. We propose a cascading decomposition algorithm based on spectral analysis of a normalized graph Laplacian, which decomposes a problem into several subproblems, and then conduct parameter-relevance analysis on each subproblem to perform dynamic state abstraction. Eliminating irrelevant parameters projects the original state space onto a lower-dimensional one in which some subtasks are projected onto the same shared subtasks. The framework can identify irrelevant parameters from performed action sequences and thus relieve the problem of high dimensionality in the learning process. We evaluate the framework experimentally and show that the dimension-reduction approach can make otherwise infeasible problems learnable.
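A minimal sketch of the spectral step named in this abstract: build the normalized graph Laplacian of a state graph and split the vertices by the sign of the second eigenvector; a full cascading decomposition would recurse on each part. The toy graph is hypothetical.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_bipartition(A):
    # Split the graph by the sign of the second-smallest eigenvector (the
    # Fiedler vector); a cascading decomposition would recurse on each part.
    eigvals, eigvecs = np.linalg.eigh(normalized_laplacian(A))
    fiedler = eigvecs[:, 1]
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]

# Hypothetical state graph: two 3-cliques joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(spectral_bipartition(A))   # recovers the two clusters
```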

12.
As an important branch of machine learning and artificial intelligence, multi-agent hierarchical reinforcement learning combines, in a general form, the cooperative ability of multiple agents with the decision-making ability of reinforcement learning; by decomposing a complex reinforcement learning problem into several subproblems that are solved separately, it can effectively mitigate the curse of dimensionality in the state space. This makes multi-agent hierarchical reinforcement learning a potential route to intelligent decision making in large-scale, complex settings. This paper first describes the main underlying techniques, including reinforcement learning, semi-Markov decision processes, and multi-agent reinforcement learning; then, from the hierarchical perspective, it reviews the algorithmic principles and research status of four classes of multi-agent hierarchical reinforcement learning methods: option-based, hierarchy-of-abstract-machines-based, value-function-decomposition-based, and end-to-end; finally, it surveys applications in robot control, game decision making, and task planning.
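Of the four classes listed, value-function decomposition is the easiest to sketch. The VDN-style fragment below sums per-agent utilities into a joint action value, so each agent can act greedily on its own term; the dimensions and linear utility networks are placeholders, not code from the surveyed methods.

```python
import torch
import torch.nn as nn

# VDN-style decomposition: the joint action value is the sum of per-agent
# utilities, so each agent can act greedily on its own term.
n_agents, obs_dim, n_actions = 3, 8, 4
agent_nets = [nn.Linear(obs_dim, n_actions) for _ in range(n_agents)]

def q_total(observations, actions):
    # observations: one tensor per agent; actions: one int per agent.
    return sum(net(o)[a] for net, o, a in zip(agent_nets, observations, actions))

obs = [torch.randn(obs_dim) for _ in range(n_agents)]
greedy = [int(net(o).argmax()) for net, o in zip(agent_nets, obs)]
print(q_total(obs, greedy))
```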

13.
Reinforcement learning is a promising new approach for automatically developing effective policies for real-time self-management. RL can achieve performance superior to traditional methods while requiring less built-in domain knowledge. Several case studies from real and simulated systems-management applications demonstrate RL's promise and its challenges. These studies show that standard online RL can learn effective policies in feasible training times. Moreover, a Hybrid RL approach can profit from any knowledge contained in an existing policy by training on the policy's observable behavior, without needing to interface directly to such knowledge.
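The hybrid idea named here, learning from an existing policy's observable behavior without interfacing to its internals, can be hedged into a small sketch: fit value estimates offline from logged transitions generated while the legacy policy controls the system. The states, actions, and log below are hypothetical.

```python
from collections import defaultdict

# Sketch of the hybrid idea: learn Q from logged transitions
# (s, a, r, s', a') produced by an existing policy, never touching
# the policy's internals. States, actions, and the log are hypothetical.
Q = defaultdict(float)

def sarsa_from_log(log, alpha=0.1, gamma=0.95):
    for s, a, r, s_next, a_next in log:
        Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

log = [("low_load", "remove_server", 2.0, "ok", "noop"),
       ("ok", "noop", 1.0, "ok", "noop"),
       ("high_load", "add_server", -1.0, "ok", "noop")]
sarsa_from_log(log)
print(max(Q, key=Q.get))   # state-action pair the log rates highest
```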

14.
This paper describes an extension to reinforcement learning (RL) in which a standard RL algorithm is augmented with a mechanism for transferring experience gained in one problem to new but related problems. In this approach, named Progressive RL, an agent acquires experience of operating in a simple environment through experimentation, and then engages in a period of introspection, during which it rationalises the experience gained and formulates symbolic knowledge describing how to behave in that simple environment. When subsequently experimenting in a more complex but related environment, it is guided by this knowledge until it gains direct experience. A test domain with 15 maze environments, arranged in order of difficulty, is described. A range of experiments in this domain is presented, demonstrating the benefit of Progressive RL relative to a basic RL approach in which each puzzle is solved from scratch. The experiments also analyse the knowledge formed during introspection, illustrate how domain knowledge may be incorporated, and show that Progressive Reinforcement Learning may be used to solve complex puzzles more quickly.

15.
Online learning of complex control behaviour for autonomous mobile robots such as walking machines is an active research topic. In this article, a hybrid learning architecture for online adaptivity, based on reinforcement learning (RL) and self-organizing neural networks, is presented. The hybrid concept integrates different learning methods and task-oriented representations, as well as available domain knowledge. The proposed concept is used for RL of control strategies at different control levels of a walking machine.

16.
The rapid growth in the number of devices and their connectivity has enlarged the attack surface and made cyber systems more vulnerable. As attackers become increasingly sophisticated and resourceful, mere reliance on traditional cyber protection, such as intrusion detection, firewalls, and encryption, is insufficient to secure cyber systems. Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms. A Cyber-Resilient Mechanism (CRM) adapts to known or zero-day threats and uncertainties in real time and responds to them strategically, to maintain the critical functions of a cyber system in the event of successful attacks. Feedback architectures play a pivotal role in enabling the online sensing, reasoning, and actuation processes of a CRM. Reinforcement Learning (RL) is an important family of algorithms that epitomizes feedback architectures for cyber resilience: it allows a CRM to respond to attacks dynamically and sequentially with limited or no prior knowledge of the environment and the attacker. In this work, we review the literature on RL for cyber resilience and discuss cyber-resilient defenses against three major types of vulnerabilities: posture-related, information-related, and human-related. We introduce moving target defense, defensive cyber deception, and assistive human security technologies as three application domains of CRMs and elaborate on their designs. RL algorithms also have vulnerabilities of their own. We explain the major vulnerabilities of RL and present several attack models in which the attacker targets the information exchanged between the environment and the agent: the rewards, the state observations, and the action commands. We show that the attacker can trick an RL agent into learning a nefarious policy with minimal attacking effort. We also introduce several defense methods for securing RL-enabled systems against these attacks, although there is still a lack of work focusing on defensive mechanisms for RL-enabled systems. Last but not least, we discuss future challenges of RL for cyber security and resilience, and emerging applications of RL-based CRMs.
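One of the attack channels listed above, the reward signal, can be illustrated with a hypothetical poisoning sketch: a small additive perturbation of the rewards is enough to steer a naive learner toward the attacker's target action. All names and numbers here are assumptions for illustration, not an attack model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.zeros(3)          # victim's action-value estimates over 3 actions
TARGET = 2               # action the attacker wants the victim to learn

def poison(r, action, bonus=1.0):
    # Hypothetical reward-poisoning attack on the environment-agent channel:
    # a small additive perturbation steers learning toward TARGET.
    return r + bonus if action == TARGET else r - bonus

for _ in range(500):
    a = int(rng.integers(3)) if rng.random() < 0.1 else int(np.argmax(Q))
    r_true = rng.normal(loc=[1.0, 0.5, 0.0][a])   # true mean rewards
    Q[a] += 0.1 * (poison(r_true, a) - Q[a])      # victim sees poisoned reward

print(int(np.argmax(Q)))  # typically TARGET, despite its true mean being worst
```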

17.
Path-based relational reasoning over knowledge graphs has become increasingly popular due to a variety of downstream applications such as question answering in dialogue systems, fact prediction, and recommendation systems. In recent years, reinforcement learning (RL) based solutions for knowledge graphs have been demonstrated to be more interpretable and explainable than other deep learning models. However, current solutions still struggle with performance issues due to incomplete state representations and large action spaces for the RL agent. We address these problems by developing HRRL (Heterogeneous Relational reasoning with Reinforcement Learning), a type-enhanced RL agent that utilizes local heterogeneous neighborhood information for efficient path-based reasoning over knowledge graphs. HRRL improves the state representation by using a graph neural network (GNN) to encode the neighborhood information, and utilizes entity type information to prune the action space. Extensive experiments on real-world datasets show that HRRL outperforms state-of-the-art RL methods and discovers more novel paths during training, demonstrating the explorative power of our method.
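The type-based action pruning described here can be pictured as filtering a node's outgoing edges by the entity types admissible for the query relation. The toy knowledge graph and type table below are hypothetical, not HRRL's actual data structures.

```python
# Hypothetical sketch of type-based action pruning for path reasoning:
# keep only outgoing edges whose target entity type is admissible for the
# relation being answered, shrinking the RL agent's action space.
TYPE_OF = {"Paris": "city", "France": "country", "Seine": "river"}
ADMISSIBLE = {"capital_of": {"country"}}   # relation -> target types

def prune_actions(candidate_edges, query_relation):
    ok = ADMISSIBLE.get(query_relation, set())
    return [(rel, tgt) for rel, tgt in candidate_edges if TYPE_OF[tgt] in ok]

edges = [("located_on", "Seine"), ("capital_of", "France")]
print(prune_actions(edges, "capital_of"))   # [('capital_of', 'France')]
```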

18.
In this study, we consider a manufacturer that has strategically decided to outsource its company-specific reverse logistics (RL) activities to a third-party logistics (3PL) service provider. Given the locations of the collection centers and reprocessing facilities, the RL network design for the 3PL involves finding the number and locations of test centers under supply uncertainty associated with the quantity of returns. Hybrid simulation-analytical modeling, which iteratively combines mixed-integer programming models with simulation, is a suitable framework for handling the uncertainties in this stochastic RL network design problem. We present two hybrid simulation-analytical modeling approaches for the RL network design of the 3PL. The first is an adaptation of a problem-specific approach proposed in the literature for the distribution network design of a 3PL. The second develops a generic approach based on a recently proposed novel solution methodology: instead of exchanging problem-specific parameters between the analytical and simulation models, the interaction is governed by reflecting the impact of uncertainty, obtained via simulation, onto the objective function of the analytical model. The results obtained from the two approaches under different scenario and parameter settings are discussed.

19.
The literature shows that reinforcement learning (RL) and the well-known optimization algorithms derived from it have been applied to assembly sequence planning (ASP); however, the way this is done, as an offline process, produces optimization methods that do not exploit the full potential of RL. Today's assembly lines need to be adaptive to changes, resilient to errors, and attentive to operators' skills and needs. If all of these aspects are to evolve towards the new paradigm called Industry 4.0, the way RL is applied to ASP needs to change as well: the RL phase has to be part of the assembly execution phase and be optimized over time and over several repetitions of the process. This article presents an agile exploratory experiment in ASP to prove the effectiveness of RL techniques in executing ASP as an adaptive, online, and experience-driven optimization process, directly at assembly time. Human-assembly interaction is modelled through the inputs and outputs of an assembly guidance system built as an assembly digital twin. Experimental assemblies are executed without pre-established assembly sequence plans and are adapted to the operators' needs. The experiments show that precedence and transition matrices for an assembly can be generated from the statistical knowledge of several different assembly executions. When the frequency of a given subassembly reinforces its importance, the statistical results obtained from the experiments prove that online RL applications are not only possible but also effective for learning, teaching, executing, and improving assembly tasks at the same time. This article paves the way towards the application of online RL algorithms to ASP.
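The statistical step named at the end of this abstract, deriving precedence and transition matrices from repeated executions, amounts to counting over observed sequences; the sketch below shows one way to do it, with hypothetical parts and logged executions.

```python
import numpy as np

parts = ["base", "gear", "cover"]
idx = {p: i for i, p in enumerate(parts)}

# Hypothetical logs of several observed assembly executions.
executions = [["base", "gear", "cover"],
              ["base", "cover", "gear"],
              ["base", "gear", "cover"]]

n = len(parts)
transition = np.zeros((n, n))   # counts: part j immediately follows part i
precedence = np.zeros((n, n))   # counts: part i placed anywhere before part j

for seq in executions:
    for i, j in zip(seq, seq[1:]):
        transition[idx[i], idx[j]] += 1
    for k, i in enumerate(seq):
        for j in seq[k + 1:]:
            precedence[idx[i], idx[j]] += 1

# Row-normalize to get empirical transition probabilities for online use.
probs = transition / transition.sum(axis=1, keepdims=True)
print(probs)
```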
