Digital twin-driven deep reinforcement learning for adaptive task allocation in robotic construction
Affiliation:
1. Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
2. Department of Production Engineering, KTH Royal Institute of Technology, Stockholm, Sweden

1. Kumoh National Institute of Technology, School of Industrial Engineering, South Korea
2. University of Tennessee at Chattanooga, Department of Engineering Management & Technology, Chattanooga, TN, USA
3. University of Texas at Arlington, Department of Industrial Engineering, Arlington, TX, USA

1. Department of Construction Science, Texas A&M University, 3137 TAMU, College Station, TX 77843, United States of America
2. Department of Construction Science, Texas A&M University, Francis Hall 329B, 3137 TAMU, College Station, TX 77843, United States of America

1. School of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
2. Engineering Research Center of Intelligent Control System and Intelligent Equipment, Ministry of Education, Qinhuangdao 066004, China
3. School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
4. Nantong Yuetong CNC Equipment Co. Ltd., Nantong 226000, China
Abstract: To accomplish diverse tasks successfully in a dynamic (i.e., changing over time) construction environment, robots should be able to prioritize assigned tasks to optimize their performance in a given state. Recently, deep reinforcement learning (DRL) has shown potential for addressing such adaptive task allocation. It remains unanswered, however, whether DRL can address adaptive task allocation problems in dynamic robotic construction environments. In this paper, we developed and tested a digital twin-driven DRL method to explore the potential of DRL for adaptive task allocation in robotic construction environments. Specifically, the digital twin synthesizes sensory data from physical assets and is used to simulate a variety of dynamic robotic construction site conditions with which a DRL agent can interact. As a result, the agent can learn an adaptive task allocation strategy that improves project performance. We tested this method on a case project in which a virtual robotic construction project (i.e., interlocking concrete bricks delivered and assembled by robots) was digitally twinned for DRL training and testing. Results indicated that the DRL model's task allocation approach reduced construction time by 36% in three dynamic testing environments compared with a rule-based imperative model. The proposed digital twin-driven DRL method promises to be an effective tool for adaptive task allocation in dynamic robotic construction environments. Such an adaptive task allocation method can help construction robots cope with uncertainties and can ultimately improve construction project performance by efficiently prioritizing assigned tasks.
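The abstract's core loop, a learning agent that discovers a task-ordering policy by interacting with a simulated construction site, can be sketched as below. Everything in this sketch is an illustrative assumption rather than the authors' implementation: the `FlowShopEnv` name, the two-robot flow-shop abstraction (a delivery robot feeding an assembly robot, so total construction time depends on task order), and the use of a simple REINFORCE policy-gradient update in place of the PPO agent named in the keywords.

```python
import numpy as np

class FlowShopEnv:
    """Toy stand-in for the simulated site: a delivery robot feeds an assembly
    robot, which installs bricks in delivery order. The episode is a two-stage
    flow shop, so the makespan (total construction time) depends on the order
    in which the agent schedules the bricks."""

    def __init__(self, n_bricks=4, seed=0):
        self.n = n_bricks
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # New random durations each episode mimic a dynamic site condition.
        self.deliver = self.rng.uniform(1.0, 5.0, self.n)
        self.assemble = self.rng.uniform(1.0, 5.0, self.n)
        self.order = []
        return self._state()

    def _state(self):
        remaining = np.ones(self.n)
        remaining[self.order] = 0.0   # 1 = brick not yet scheduled
        return remaining

    def step(self, brick):
        self.order.append(brick)
        done = len(self.order) == self.n
        # Reward arrives only at episode end: negative makespan.
        return self._state(), (-self._makespan() if done else 0.0), done

    def _makespan(self):
        d_end = a_end = 0.0
        for i in self.order:
            d_end += self.deliver[i]                      # delivery is serial
            a_end = max(a_end, d_end) + self.assemble[i]  # assembly waits for brick
        return a_end

def train(env, episodes=300, lr=0.01, seed=1):
    """REINFORCE with a linear softmax policy over per-brick features
    (deliver time, assemble time); a minimal stand-in for a PPO agent."""
    rng = np.random.default_rng(seed)
    w, baseline = np.zeros(2), 0.0
    for _ in range(episodes):
        env.reset()
        feats = np.stack([env.deliver, env.assemble], axis=1)
        grads, done, reward = [], False, 0.0
        while not done:
            remaining = np.flatnonzero(env._state())
            logits = feats[remaining] @ w
            p = np.exp(logits - logits.max())
            p /= p.sum()
            a = rng.choice(remaining, p=p)
            grads.append(feats[a] - p @ feats[remaining])  # grad of log pi(a|s)
            _, reward, done = env.step(int(a))
        baseline += 0.05 * (reward - baseline)             # running baseline
        w += lr * (reward - baseline) * np.sum(grads, axis=0)
    return w
```

Because the toy episode is a two-machine flow shop, the learned ordering can be sanity-checked against Johnson's rule, which is known to minimize makespan in that setting; the paper's actual environment and reward are of course richer than this sketch.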
| |
Keywords: Digital Twin; Proximal Policy Optimization (PPO); Deep Reinforcement Learning (DRL); Autonomous Robot; Adaptive Task Allocation
Indexed in ScienceDirect and other databases.
|