Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework
M. Mazouchi, S. Nageshrao, and H. Modares, “Conflict-aware safe reinforcement learning: A meta-cognitive learning framework,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 3, pp. 466–481, Mar. 2022. doi: 10.1109/JAS.2021.1004353
Authors: Majid Mazouchi, Subramanya Nageshrao, Hamidreza Modares
Affiliation: 1. Michigan State University, East Lansing, MI 48824 USA; 2. Ford Research and Innovation Center, Ford Motor Company, Palo Alto, CA 94304 USA
Abstract: In this paper, a data-driven conflict-aware safe reinforcement learning (CAS-RL) algorithm is presented for control of autonomous systems. Existing safe RL results with pre-defined performance functions and safe sets can only provide safety and performance guarantees for a single environment or circumstance. By contrast, the presented CAS-RL algorithm provides safety and performance guarantees across a variety of circumstances that the system might encounter. This is achieved by utilizing a bilevel learning control architecture: a higher meta-cognitive layer leverages a data-driven receding-horizon attentional controller (RHAC) to adapt the relative attention to the system's different safety and performance requirements, while a lower-layer RL controller designs the control actuation signals for the system. The presented RHAC makes its meta decisions based on the reaction curve of the lower-layer RL controller using a meta-model or knowledge. More specifically, it leverages a prediction meta-model (PMM) which spans the space of all future meta trajectories using a given finite number of past meta trajectories. RHAC will adapt the system's aspiration towards performance metrics (e.g., performance weights) as well as safety boundaries to resolve conflicts that arise as mission scenarios develop. This will guarantee safety and feasibility (i.e., performance boundedness) of the lower-layer RL-based control solution. It is shown that the interplay between the RHAC and the lower-layer RL controller is a bilevel optimization problem in which the leader (RHAC) operates at a lower rate than the follower (RL-based controller), and whose solution guarantees feasibility and safety of the control solution. The effectiveness of the proposed framework is verified through a simulation example.
Keywords: Optimal control; receding-horizon attentional controller (RHAC); reinforcement learning (RL)
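The abstract describes a bilevel architecture in which a meta-cognitive leader (RHAC) updates performance weights and safety boundaries at a slower rate than the lower-layer RL follower that computes control actions. The sketch below is only a minimal illustration of that leader/follower timing and role split; all names (low_level_policy, meta_update, the toy dynamics, and the thresholds) are hypothetical stand-ins and do not reproduce the paper's actual RHAC/PMM formulation or RL update rules.

```python
# Illustrative sketch of the bilevel interplay: a slow meta layer adjusts
# weights and a safety bound, while a fast lower-layer controller acts.
import numpy as np

META_PERIOD = 10   # leader (meta layer) acts once per META_PERIOD follower steps
HORIZON = 50       # total follower (lower-layer controller) steps in this sketch


def low_level_policy(state, weights, safety_bound):
    """Hypothetical lower-layer controller: weighted state feedback,
    clipped so the action respects the current safety boundary."""
    action = -np.dot(weights, state)
    return np.clip(action, -safety_bound, safety_bound)


def meta_update(recent_states, weights, safety_bound):
    """Hypothetical meta-cognitive update: if the recent trajectory drifts
    toward the safety boundary, increase the regulation weights and relax
    the aspiration (a stand-in for the receding-horizon RHAC/PMM decision)."""
    peak = np.max(np.abs(recent_states))
    if peak > 0.8 * safety_bound:
        weights = weights * 1.2            # prioritize regulation to resolve the conflict
        safety_bound = safety_bound * 1.05  # relax aspiration to keep the problem feasible
    return weights, safety_bound


state = np.array([1.0, -0.5])
weights = np.array([0.5, 0.5])
safety_bound = 1.0
history = []

for k in range(HORIZON):
    # Leader operates at a lower rate than the follower.
    if k % META_PERIOD == 0 and history:
        weights, safety_bound = meta_update(np.array(history[-META_PERIOD:]),
                                            weights, safety_bound)
    action = low_level_policy(state, weights, safety_bound)
    # Toy linear dynamics standing in for the controlled autonomous system.
    state = 0.95 * state + 0.1 * action
    history.append(state.copy())

print("final state:", state, "final weights:", weights)
```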
This article has been indexed by databases including 维普 (Weipu).