Minimizing mean weighted tardiness in unrelated parallel machine scheduling with reinforcement learning
Authors: Zhicong Zhang, Li Zheng, Na Li, Weiping Wang, Shouyan Zhong, Kaishun Hu
Affiliations:
a. Department of Industrial Engineering, School of Mechanical Engineering, Dongguan University of Technology, Songshan Lake District, Dongguan 523808, Guangdong Province, China
b. Department of Industrial Engineering, Tsinghua University, Beijing 100084, China
c. Department of Industrial Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
d. School of Mechanical Engineering, Dongguan University of Technology, China
Abstract: We address an unrelated parallel machine scheduling problem with R-learning, an average-reward reinforcement learning (RL) method. Jobs of different types arrive dynamically according to independent Poisson processes, so the arrival time and the due date of each job are stochastic. We convert the scheduling problem into an RL problem by constructing elaborate state features, actions, and a reward function; the state features and actions are defined by fully exploiting prior domain knowledge. Minimizing the average reward per decision time step is equivalent to minimizing the scheduling objective, i.e. mean weighted tardiness. We apply an on-line R-learning algorithm with function approximation to solve the RL problem. Computational experiments demonstrate that R-learning learns an optimal or near-optimal policy from experience in a dynamic environment and outperforms four effective heuristic priority rules (WSPT, WMDD, ATC and WCOVERT) on all test problems.
Keywords: Scheduling; Unrelated parallel machines; Reinforcement learning; Tardiness
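The abstract does not reproduce the update equations, so the following is a minimal sketch of how on-line R-learning with linear function approximation could drive such a dispatching policy. The feature length N_FEATURES, the use of the four priority rules as the action set, the class name RLearningAgent, and the reward definition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of on-line R-learning (average-reward RL) with linear function
# approximation for rule-based dispatching. All names and sizes below are
# assumptions made for illustration.

N_FEATURES = 8                                 # assumed length of the state-feature vector
ACTIONS = ["WSPT", "WMDD", "ATC", "WCOVERT"]   # assumed action set: choose a priority rule

class RLearningAgent:
    def __init__(self, alpha=0.1, beta=0.01, epsilon=0.1):
        self.alpha = alpha      # step size for the action-value weights
        self.beta = beta        # step size for the average-reward estimate
        self.epsilon = epsilon  # exploration probability
        self.rho = 0.0          # running estimate of the average reward per decision step
        self.w = np.zeros((len(ACTIONS), N_FEATURES))  # Q(s, a) = w[a] . phi(s)

    def q_values(self, phi):
        return self.w @ phi

    def select_action(self, phi):
        # epsilon-greedy over the approximate action values
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(self.q_values(phi)))

    def update(self, phi, action, reward, phi_next):
        """R-learning update for one transition (phi, action, reward, phi_next)."""
        q_all = self.q_values(phi)
        was_greedy = q_all[action] == q_all.max()
        td_error = reward - self.rho + self.q_values(phi_next).max() - q_all[action]
        self.w[action] += self.alpha * td_error * phi
        if was_greedy:
            # the average-reward estimate is adjusted only after greedy actions
            self.rho += self.beta * td_error
```

In such a setup, each decision epoch (a machine becoming idle with jobs waiting) would yield a feature vector phi describing the shop state, the selected rule would dispatch one job, and the reward could be defined as the negative weighted tardiness incurred since the last decision, so that maximizing the average reward corresponds to minimizing mean weighted tardiness.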
This article is indexed in ScienceDirect and other databases.