Similar Literature
 19 similar documents found; search took 121 ms.
1.
Deep reinforcement learning is a hot topic in artificial intelligence research, but as the field has matured its shortcomings have become apparent: low data efficiency, weak generalization, difficult exploration, and a lack of reasoning and representation ability. These problems severely limit the application of deep reinforcement learning to real-world tasks. Knowledge transfer is a highly effective way to address them. From the perspective of deep reinforcement learning, this paper discusses how knowledge transfer can accelerate agent training and cross-domain transfer, analyzes the forms in which knowledge exists in deep reinforcement learning and the ways it acts, and classifies and summarizes knowledge-transfer methods according to the basic components of reinforcement learning. Finally, it summarizes the open problems and development directions of knowledge transfer in deep reinforcement learning with respect to algorithms, theory, and applications.

2.
Research on Reinforcement Learning Algorithms for Multi-Robot Dynamic Formation (cited 8 times: 0 self-citations, 8 by others)
In artificial intelligence, reinforcement learning theory has attracted wide attention for its self-learning and adaptive capabilities. With the continued development of multi-agent theory in distributed artificial intelligence, distributed reinforcement learning algorithms have gradually become a research focus. This paper first reviews the state of reinforcement learning research and then, taking dynamic multi-robot formation as the study model, describes how distributed reinforcement learning can be applied to multi-robot behavior control. A SOM (self-organizing map) network partitions the state space autonomously to speed up learning; a BP (back-propagation) network implements the reinforcement learning to improve generalization; and internal and external reinforcement signals are combined to balance each robot's individual interest against the group interest. To keep the control task explicit, the system uses blackboard communication for hierarchical control. Simulation experiments demonstrate the effectiveness of the method.
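Below is a minimal sketch of the mechanism described in this abstract, simplified to a tabular Q-update in place of the paper's BP-network learner; the class name, dimensions, and parameters are illustrative assumptions.

```python
import numpy as np

class SOMDiscretizer:
    """Self-organizing map used only to partition a continuous state space."""
    def __init__(self, n_nodes=16, state_dim=4, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_nodes, state_dim))  # SOM codebook vectors
        self.lr = lr

    def update_and_index(self, state):
        # Winner-take-all: the nearest codebook vector is the discrete state index.
        d = np.linalg.norm(self.w - state, axis=1)
        winner = int(np.argmin(d))
        self.w[winner] += self.lr * (state - self.w[winner])  # move winner toward input
        return winner

def q_update(Q, s, a, r_internal, r_external, s_next, alpha=0.1, gamma=0.95, beta=0.5):
    # One learning step with a blend of the individual and team reinforcement signals.
    r = beta * r_internal + (1.0 - beta) * r_external
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

# Illustrative usage with random placeholder states:
som = SOMDiscretizer()
Q = np.zeros((16, 5))                               # 16 SOM cells x 5 formation actions
s = som.update_and_index(np.random.rand(4))
s_next = som.update_and_index(np.random.rand(4))
Q = q_update(Q, s, a=2, r_internal=1.0, r_external=0.3, s_next=s_next)
```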

3.
Chen Jiapan, Zheng Minhua. 《机器人》 (Robot), 2022, 44(2): 236-256
By reviewing and summarizing previous research, this paper first introduces the basic theory and algorithms of deep learning and reinforcement learning, then surveys the mainstream deep reinforcement learning algorithms and their current applications in robotic manipulation. Finally, based on the existing problems and their solutions, it summarizes and looks ahead to the future development directions of deep reinforcement learning in robotic manipulation.

4.
Driving a multi-agent system with a fish-school model can produce excellent emergent motion characteristics, but the large differences between robots and real fish make it difficult to apply fish-school models directly to real robot systems. To address this, a transfer control method combining deep learning and reinforcement learning is proposed. First, a deep neural network (DNN) model is trained on fish-school motion data and serves as the basis for pairwise interaction between robots; then the deep deterministic policy gradient (DDPG) method from reinforcement learning is attached to correct the DNN model's output, and a maximum-visual-size rule is designed to select the key neighbor, extending the DNN+DDPG model to multi-agent motion control. Swarm-robot experiments show that the proposed method enables robots to form reliable, stable collective motion using only the information of a single neighbor. Compared with direct transfer control using the DNN alone, the proposed DNN+DDPG framework preserves the flexibility of the original fish-school motion while improving the safety and controllability of the robot system, giving the method considerable application potential in swarm-robot motion control.
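The sketch below illustrates two ingredients named in the abstract: picking the key neighbor by "maximum visual size" (interpreted here, as an assumption, as the largest apparent angular size, i.e., body size over distance) and adding a DDPG-learned correction to the DNN's pairwise command. The placeholder policies stand in for the trained networks.

```python
import numpy as np

def select_key_neighbor(self_pos, neighbors_pos, body_size=0.2):
    # "Maximum visual size": choose the neighbor whose apparent angular size
    # (body size / distance) is largest, i.e., the most visually salient one.
    d = np.linalg.norm(neighbors_pos - self_pos, axis=1)
    visual_size = body_size / np.maximum(d, 1e-6)
    return int(np.argmax(visual_size))

def control(self_state, neighbor_state, dnn_policy, ddpg_actor):
    # Base pairwise command from the fish-data-trained DNN plus the DDPG correction.
    return dnn_policy(self_state, neighbor_state) + ddpg_actor(self_state, neighbor_state)

# Illustrative usage; the lambdas are placeholders for the trained networks.
self_pos = np.array([0.0, 0.0])
neighbors = np.array([[1.0, 0.5], [0.3, -0.2], [2.0, 2.0]])
k = select_key_neighbor(self_pos, neighbors)
a = control(self_pos, neighbors[k],
            dnn_policy=lambda s, n: np.array([0.10, 0.00]),   # (speed, turn-rate)
            ddpg_actor=lambda s, n: np.array([0.00, 0.05]))   # learned correction
```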

5.
As a machine learning approach to sequential decision making, reinforcement learning learns an optimal policy through trial-and-error interaction, which matches the way humans make intelligent decisions. Curriculum-based deep reinforcement learning is a research hotspot in the field: to address the low learning efficiency and poor convergence of RL agents facing high-dimensional state and action spaces, it extracts common knowledge from one or more simple source tasks during training to accelerate or improve learning on a complex target task. This paper first introduces the fundamentals of curriculum learning, then surveys recent progress on curriculum learning in deep reinforcement learning from four angles: curriculum learning based on network optimization, based on multi-agent cooperation, based on competence assessment, and based on functional functions. It then analyzes the latest developments in curriculum reinforcement learning and summarizes its current problems and possible solutions. Finally, based on current applications of curriculum learning in deep reinforcement learning, it summarizes the development and research directions of curriculum reinforcement learning.

6.
7.
To accomplish robotic peg-in-hole assembly tasks in unstructured environments, a variable-parameter admittance control algorithm based on the deep deterministic policy gradient (DDPG) with a fuzzy reward mechanism is proposed to improve assembly efficiency in unknown environments. A mechanical model of the contact states during peg-in-hole assembly is established and the assembly mechanism is studied to guide the design of the robot's assembly strategy. Compliant assembly is realized with an admittance controller, the DDPG algorithm identifies the controller's optimal parameters online, and fuzzy rules are introduced into the reward function to avoid converging to a locally optimal assembly strategy and to improve the quality of the assembly operation. Assembly experiments on holes of five different diameters are carried out and compared with a fixed-parameter admittance model. The results show that the proposed algorithm clearly outperforms the fixed-parameter model and completes the assembly within 10 steps after the algorithm converges, which is expected to meet the needs of autonomous operation in unstructured environments.
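A minimal sketch, under a 1-DOF contact assumption, of a variable-parameter admittance step and a fuzzy-style shaped reward; in the paper the (M, B, K) parameters would be supplied online by the DDPG agent, and the actual fuzzy rule base is not reproduced here.

```python
import numpy as np

def admittance_step(x, dx, f_ext, params, dt=0.01):
    # One Euler step of the 1-DOF admittance law  M*ddx + B*dx + K*x = f_ext.
    M, B, K = params
    ddx = (f_ext - B * dx - K * x) / M
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx

def fuzzy_reward(force_error, depth_gain):
    # Toy fuzzy-style shaping: a triangular membership for "contact-force error is
    # small" plus a bonus for insertion progress (illustrative, not the paper's rules).
    small = max(0.0, 1.0 - abs(force_error) / 5.0)
    return small + 0.5 * depth_gain

# Illustrative rollout with fixed parameters; an RL agent would vary them per step.
x, dx = 0.0, 0.0
for _ in range(100):
    x, dx = admittance_step(x, dx, f_ext=2.0, params=(1.0, 20.0, 100.0))
r = fuzzy_reward(force_error=1.2, depth_gain=0.05)
```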

8.
Multi-Robot Cooperation Based on Reinforcement Learning (cited 3 times: 0 self-citations, 3 by others)
A reinforcement learning method is proposed for multiple robots to acquire cooperative behavior in a dynamic environment. Each robot learns individually with Q-learning based on instantaneous rewards, the idea of the artificial potential field method is used to determine the learning order of the different robots, and alternating learning then completes the multi-robot learning process. Experimental results demonstrate the feasibility and effectiveness of the proposed method.
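A rough sketch of the two ingredients mentioned above: an artificial-potential-field value used to order the robots' learning turns (assuming, as an interpretation, that the robot with the lowest potential learns first) and a plain Q-learning step driven by the instantaneous reward.

```python
import numpy as np

def potential(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    # Standard attractive-plus-repulsive artificial potential field value.
    u = 0.5 * k_att * np.sum((pos - goal) ** 2)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def learning_order(robot_positions, goal, obstacles):
    # Robots with lower potential learn first; the others learn alternately afterwards.
    return list(np.argsort([potential(p, goal, obstacles) for p in robot_positions]))

def q_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Q-learning update using the instantaneous reward r.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

# Illustrative usage:
robots = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
order = learning_order(robots, goal=np.array([4.0, 1.0]), obstacles=[np.array([2.0, 0.5])])
Q = q_step(np.zeros((50, 4)), s=3, a=1, r=0.5, s_next=7)
```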

9.
Ren Yi, Chen Zonghai. 《控制与决策》 (Control and Decision), 2006, 21(4): 430-434
In multi-robot systems, conflicts grow exponentially as the number of robots increases, and deadlocks can even occur. This paper proposes a reinforcement learning algorithm based on process rewards and prioritized sweeping as the conflict-resolution strategy for multi-robot systems. Taking the typical multi-robot identifiable-group foraging task as the testbed, a simulation study is conducted with the number of collected targets as the system performance metric and the number of learning episodes at convergence as the learning-speed metric, and the algorithm is compared with nine others, including global-reward-based methods and Q-learning. The results show that the proposed algorithm significantly reduces conflicts, avoids deadlocks, and improves overall system performance.
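For illustration, a generic tabular prioritized-sweeping update is sketched below, with r assumed to be a per-step "process" reward (e.g., for picking up a target) rather than a single global reward; this is a textbook-style sketch, not the paper's implementation.

```python
import heapq
import numpy as np

def prioritized_sweeping(Q, model, s, a, r, s_next, alpha=0.2, gamma=0.95,
                         theta=1e-3, n_sweeps=10):
    # After a real step, back up the most "surprising" state-action pairs first.
    model[(s, a)] = (r, s_next)                       # deterministic learned model
    pq = []
    p = abs(r + gamma * Q[s_next].max() - Q[s, a])
    if p > theta:
        heapq.heappush(pq, (-p, s, a))
    for _ in range(n_sweeps):
        if not pq:
            break
        _, s_, a_ = heapq.heappop(pq)
        r_, s2 = model[(s_, a_)]
        Q[s_, a_] += alpha * (r_ + gamma * Q[s2].max() - Q[s_, a_])
        # Re-queue predecessors of s_ whose value estimates may now be stale.
        for (ps, pa), (pr, pnext) in model.items():
            if pnext == s_:
                p = abs(pr + gamma * Q[s_].max() - Q[ps, pa])
                if p > theta:
                    heapq.heappush(pq, (-p, ps, pa))
    return Q

# Illustrative usage on a toy 25-state, 4-action gridworld:
Q, model = np.zeros((25, 4)), {}
Q = prioritized_sweeping(Q, model, s=0, a=1, r=1.0, s_next=5)
```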

10.
A Survey of Robot Manipulation Skill Learning Methods (cited 8 times: 3 self-citations, 8 by others)
Combining artificial intelligence and robotics to build robot manipulation skill learning systems with a degree of autonomous decision-making and learning ability has gradually become an important branch of robotics research. This paper introduces the main methods and the latest results in robot manipulation skill learning. According to how training data are used, the methods are divided into reinforcement-learning-based methods, learning-from-demonstration methods, and small-data learning methods; on this basis, research results from recent years are reviewed and analyzed, and future development directions of robot manipulation skill learning are listed.

11.
Zhang Lihua, Liu Quan, Huang Zhigang, Zhu Fei. 《软件学报》 (Journal of Software), 2023, 34(10): 4772-4803
Inverse reinforcement learning (IRL), also known as inverse optimal control (IOC), is an important research approach in reinforcement learning and imitation learning: it recovers a reward function from expert demonstrations and then solves for the optimal policy under that reward function so as to imitate the expert policy. In recent years, inverse reinforcement learning has produced rich results in imitation learning and has been widely applied to problems such as vehicle navigation, route recommendation, and optimal robot control. This paper first introduces the theoretical foundations of inverse reinforcement learning and then, starting from how the reward function is constructed, discusses and analyzes IRL algorithms based on linear and nonlinear reward functions, including maximum-margin IRL, maximum-entropy IRL, maximum-entropy deep IRL, and generative adversarial imitation learning. It then surveys frontier research directions in the field, comparing and analyzing representative algorithms, including IRL with incomplete state-action information, multi-agent IRL, IRL from suboptimal demonstrations, and guided IRL. Finally, it summarizes the key open problems and discusses future directions from both theoretical and applied perspectives.
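As one concrete instance of the algorithms surveyed here, the core update of maximum-entropy IRL with a linear reward r(s) = w·φ(s) can be sketched as matching feature expectations between expert and learner; the feature data below are random placeholders.

```python
import numpy as np

def maxent_irl_weight_update(w, expert_features, policy_features, lr=0.05):
    # Gradient step: move w so that features expected under the current policy
    # match the expert's empirical feature expectations (sketch of the core idea,
    # not a full MaxEnt IRL solver).
    grad = expert_features.mean(axis=0) - policy_features.mean(axis=0)
    return w + lr * grad

# Illustrative usage with random feature vectors (phi has 8 dimensions):
rng = np.random.default_rng(0)
w = np.zeros(8)
expert_phi = rng.random((100, 8))   # features along expert trajectories
policy_phi = rng.random((100, 8))   # features under the current learned policy
w = maxent_irl_weight_update(w, expert_phi, policy_phi)
```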

12.
Application of Reinforcement Learning to Learning the Basic Actions of Soccer Robots (cited 1 time: 0 self-citations, 1 by others)
This paper studies reinforcement learning algorithms and their application to learning technical actions in robot soccer. When the state and action spaces of reinforcement learning are too large or continuous, learning is often too slow or even fails to converge. To address this, a reinforcement learning method based on a Takagi-Sugeno (T-S) model fuzzy neural network is proposed, which effectively realizes the mapping from the state space to the action space. The proposed method is then used to design the soccer robot's technical actions, studying robot behavior learning without expert knowledge or an environment model. Finally, experiments verify the effectiveness of the method, which can meet the needs of robot soccer competition.
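A minimal sketch of zero-order Takagi-Sugeno fuzzy inference mapping a continuous state to a continuous action; in the paper the rule parameters would be tuned by the reinforcement signal, whereas the values here are purely illustrative.

```python
import numpy as np

def ts_fuzzy_action(state, centers, widths, consequents):
    # Gaussian rule firing strengths over the state are normalized and used to
    # blend the per-rule action consequents (zero-order T-S inference).
    act = np.exp(-np.sum(((state - centers) / widths) ** 2, axis=1))
    act = act / (act.sum() + 1e-12)
    return act @ consequents

# Example: 4 rules over a 2-D state producing a 2-D action (e.g., kick direction/power).
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
widths = np.full((4, 2), 0.5)
consequents = np.random.default_rng(0).random((4, 2))
a = ts_fuzzy_action(np.array([0.3, 0.7]), centers, widths, consequents)
```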

13.
In this paper, a data-driven conflict-aware safe reinforcement learning (CAS-RL) algorithm is presented for control of autonomous systems. Existing safe RL results with predefined performance functions and safe sets can only provide safety and performance guarantees for a single environment or circumstance. By contrast, the presented CAS-RL algorithm provides safety and performance guarantees across a variety of circumstances that the system might encounter. This is achieved by utilizing a bilevel learning control architecture: a higher metacognitive layer leverages a data-driven receding-horizon attentional controller (RHAC) to adapt relative attention to the system's different safety and performance requirements, and a lower-layer RL controller designs control actuation signals for the system. The presented RHAC makes its meta decisions based on the reaction curve of the lower-layer RL controller using a metamodel or knowledge. More specifically, it leverages a prediction meta-model (PMM) which spans the space of all future meta trajectories using a given finite number of past meta trajectories. RHAC will adapt the system's aspiration towards performance metrics (e.g., performance weights) as well as safety boundaries to resolve conflicts that arise as mission scenarios develop. This will guarantee safety and feasibility (i.e., performance boundedness) of the lower-layer RL-based control solution. It is shown that the interplay between the RHAC and the lower-layer RL controller is a bilevel optimization problem for which the leader (RHAC) operates at a lower rate than the follower (RL-based controller), and its solution guarantees feasibility and safety of the control solution. The effectiveness of the proposed framework is verified through a simulation example.

14.
Methods for solving combinatorial optimization problems (COPs) have permeated many fields such as artificial intelligence and operations research. As data scale keeps growing and problems change ever faster, traditional COP-solving methods face serious challenges in speed, accuracy, and generalization. In recent years, the broad application of reinforcement learning (RL) in areas such as autonomous driving and industrial automation has demonstrated strong decision-making and learning capabilities, so many researchers have attempted to use RL to solve COPs, ...

15.
Reinforcement learning (RL) has achieved great success in domains such as Go, video games, navigation, and recommender systems. However, many RL algorithms still cannot be transferred directly to real physical environments, because in simulation an agent can interact with the environment through continual trial and error to learn an optimal policy, whereas for safety reasons many real-world applications must restrict the agent's random exploration. Safety has therefore become a major challenge in moving reinforcement learning from simulation to reality. In recent years, much research has been devoted to safe reinforcement learning (SRL) algorithms that satisfy safety constraints while maintaining system performance. This paper comprehensively surveys existing safe RL algorithms, grouping them into three categories: modifying the learning process, modifying the learning objective, and offline reinforcement learning. It also introduces five benchmark platforms: Safety Gym, safe-control-gym, SafeRL-Kit, D4RL, and NeoRL. Finally, it summarizes applications of safe reinforcement learning in autonomous driving, robot control, industrial process control, power system optimization, and healthcare, and gives conclusions and an outlook.
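As a small illustration of the "modify the learning objective" category mentioned in this survey, a Lagrangian dual-ascent step on a safety-cost constraint can be sketched as follows; this is a generic sketch rather than any specific surveyed algorithm.

```python
def lagrangian_dual_update(lmbda, episode_cost, cost_limit, lr=0.01):
    # Increase the penalty weight when the safety constraint is violated,
    # decrease it (down to zero) otherwise; the policy is then trained on
    # reward - lmbda * cost instead of reward alone.
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))

lmbda = 0.0
lmbda = lagrangian_dual_update(lmbda, episode_cost=12.0, cost_limit=10.0)
```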

16.
The behavior of reinforcement learning (RL) algorithms is best understood in completely observable, discrete-time controlled Markov chains with finite state and action spaces. In contrast, robot-learning domains are inherently continuous both in time and space, and moreover are partially observable. Here we suggest a systematic approach to solve such problems in which the available qualitative and quantitative knowledge is used to reduce the complexity of the learning task. The steps of the design process are to: (i) decompose the task into subtasks using the qualitative knowledge at hand; (ii) design local controllers to solve the subtasks using the available quantitative knowledge, and (iii) learn a coordination of these controllers by means of reinforcement learning. It is argued that the approach enables fast, semi-automatic, but still high-quality robot control, as no fine-tuning of the local controllers is needed. The approach was verified on a non-trivial real-life robot task. Several RL algorithms were compared by ANOVA and it was found that the model-based approach worked significantly better than the model-free approach. The learnt switching strategy performed comparably to a handcrafted version. Moreover, the learnt strategy seemed to exploit certain properties of the environment which were not foreseen in advance, thus supporting the view that adaptive algorithms are advantageous to nonadaptive ones in complex environments.
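A minimal sketch of design step (iii), learning the coordination of hand-designed local controllers with tabular Q-learning over discretized observations; the dimensions and controller roles are illustrative, and the better-performing model-based variant reported in the paper is not shown.

```python
import numpy as np

def epsilon_greedy_switch(Q, obs_bin, eps=0.1, rng=np.random.default_rng(0)):
    # Choose which local controller to run in the current (discretized) situation.
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[obs_bin]))

def update_switching_Q(Q, obs_bin, controller, reward, next_obs_bin,
                       alpha=0.1, gamma=0.95):
    # Standard Q-update over (situation, controller) pairs: the "action" is the
    # choice of local controller, not a low-level actuation command.
    td = reward + gamma * Q[next_obs_bin].max() - Q[obs_bin, controller]
    Q[obs_bin, controller] += alpha * td
    return Q

# 20 qualitative situations x 3 local controllers (e.g., approach / align / push).
Q = np.zeros((20, 3))
c = epsilon_greedy_switch(Q, obs_bin=5)
Q = update_switching_Q(Q, obs_bin=5, controller=c, reward=1.0, next_obs_bin=6)
```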

17.

18.
Configuration tuning is essential to optimize the performance of systems (e.g., databases, key-value stores). High performance usually indicates high throughput and low latency. At present, most system tuning is performed manually (e.g., by database administrators), but it is hard for them to achieve high performance across different kinds of systems and environments. In recent years there have been some studies on tuning traditional database systems, but these methods all have limitations. In this article, we put forward a tuning system based on attention-based deep reinforcement learning, named WATuning, which can adapt to changes in workload characteristics and optimize system performance efficiently and effectively. Firstly, we design WATuning's core algorithm, ATT-Tune, to accomplish the tuning task: the algorithm uses workload characteristics to generate a weight matrix that acts on the system's internal metrics, and then uses the weighted internal metrics to select an appropriate configuration. Secondly, WATuning can generate multiple instance models according to changes in the workload, so that it can provide targeted recommendation services for different types of workloads. Finally, WATuning can also dynamically fine-tune itself according to the constantly changing workload in practical applications, so that it better fits the actual environment when making recommendations. Experimental results show that, compared with CDBTune, an existing state-of-the-art tuning method, WATuning improves throughput by 52.6% and reduces latency by 31%.
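A hedged sketch of the attention idea described in this abstract: workload features produce weights that re-scale the internal metrics before they reach the RL tuner. The projection matrix, dimensions, and the DDPG-style agent mentioned in the comment are illustrative assumptions, not WATuning's actual parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_metrics(workload_features, internal_metrics, W_attn):
    # Workload features -> one attention weight per internal metric, which then
    # re-scales the metrics that form the tuner's observation.
    weights = softmax(W_attn @ workload_features)
    return weights * internal_metrics

# Illustrative usage: 6 workload features, 10 internal metrics.
rng = np.random.default_rng(1)
W_attn = rng.normal(size=(10, 6))
state = weighted_metrics(rng.random(6), rng.random(10), W_attn)
# `state` would be fed to an RL agent (e.g., DDPG-style) whose actions are the
# configuration knob values (buffer sizes, cache sizes, etc.).
```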

19.
Conventional robot control schemes are basically model-based methods. However, exact modeling of robot dynamics poses considerable problems and faces various uncertainties in task execution. This paper proposes a reinforcement learning control approach for overcoming such drawbacks. An artificial neural network (ANN) serves as the learning structure, and a stochastic real-valued (SRV) unit is applied as the learning method. Initially, force tracking control of a two-link robot arm is simulated to verify the control design. The simulation results confirm that even without information about the robot dynamic model and environment states, operation rules for simultaneously controlling force and velocity are achievable by repetitive exploration. However, acceptable performance demanded many learning iterations, and the learning speed proved too slow for practical applications. The approach herein therefore improves tracking performance by combining a conventional controller with a reinforcement learning strategy. Experimental results demonstrate improved trajectory tracking performance of a two-link direct-drive robot manipulator using the proposed method.
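For illustration, an SRV-style unit can be sketched as Gaussian exploration around a linear network output with a reinforcement-comparison (REINFORCE-like) weight update; the exact unit and update used in the paper may differ.

```python
import numpy as np

def srv_action(mean, sigma, rng):
    # Explore by sampling a real-valued action around the network's mean output.
    return rng.normal(mean, sigma)

def srv_update(w, x, action, mean, sigma, reward, baseline, lr=0.01):
    # Push the mean output toward actions that earned more than the running baseline.
    return w + lr * (reward - baseline) * (action - mean) / (sigma ** 2) * x

# Illustrative usage on a single linear unit:
rng = np.random.default_rng(0)
x = np.array([0.2, -0.1, 1.0])      # e.g., (force error, velocity error, bias)
w = np.zeros(3)
mean = w @ x
a = srv_action(mean, sigma=0.1, rng=rng)
w = srv_update(w, x, a, mean, sigma=0.1, reward=0.8, baseline=0.5)
```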
