Deep Reinforcement Learning Solves Job-shop Scheduling Problems
Authors: Anjiang Cai, Yangfan Yu, Manman Zhao
Affiliation: School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Department of Automation Engineering, Wuxi Higher Vocational and Technical School of Mechanical and Electrical Engineering, Wuxi 214028, China
Abstract: To address the sparse-reward problem that arises when solving job-shop scheduling with deep reinforcement learning, a deep reinforcement learning framework that accounts for sparse rewards is proposed. The job-shop scheduling problem is formulated as a Markov decision process, and six state features are designed using a two-way scheduling method to improve the state representation: four features that distinguish the optimal action and two features related to the learning goal. GIN++, an extended variant of the graph isomorphism network, is used to encode the disjunctive graph, improving the model's performance and generalization ability. An iterated greedy algorithm generates a random policy as the initial policy and extends it by selecting the action with the maximum information gain, strengthening the exploration ability of the Actor-Critic algorithm. Validated on multiple public benchmark sets and compared with other advanced DRL methods and scheduling rules, the trained policy model reduces the minimum average gap by 3.49%, 5.31% and 4.16% relative to the priority-rule-based methods, and by 5.34%, 11.97% and 5.02% relative to the learning-based methods, effectively improving the accuracy of DRL approximate solutions for the JSSP minimum-makespan objective.
Keywords:job shop scheduling problems  deep reinforcement learning  state characteristics  policy network
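To make the MDP formulation described in the abstract concrete, the following is a minimal sketch of a job-shop scheduling environment: the state tracks machine and job ready times, an action dispatches the next operation of one job, and the reward is the negated increase in the makespan. The class and function names (`JobShopEnv`, `greedy_rollout`) and the dispatch rule shown are illustrative assumptions, not the paper's actual implementation, which additionally uses the six designed state features, a GIN++ graph encoder, and an Actor-Critic policy.

```python
# Illustrative sketch of a job-shop MDP (not the paper's implementation).
# State: per-machine and per-job ready times; action: dispatch a job's next
# operation; reward: negated makespan increase (a dense reward signal).
from typing import List, Tuple

Job = List[Tuple[int, int]]  # ordered operations: (machine_id, duration)

class JobShopEnv:
    def __init__(self, jobs: List[Job], n_machines: int):
        self.jobs = jobs
        self.machine_ready = [0] * n_machines   # time each machine is free
        self.job_ready = [0] * len(jobs)        # time each job's last op ends
        self.next_op = [0] * len(jobs)          # index of each job's next op

    def legal_actions(self) -> List[int]:
        # Jobs that still have operations left to schedule.
        return [j for j, k in enumerate(self.next_op) if k < len(self.jobs[j])]

    def step(self, j: int) -> int:
        # Dispatch job j's next operation as early as precedence allows.
        m, d = self.jobs[j][self.next_op[j]]
        before = max(self.machine_ready + self.job_ready)
        finish = max(self.machine_ready[m], self.job_ready[j]) + d
        self.machine_ready[m] = finish
        self.job_ready[j] = finish
        self.next_op[j] += 1
        after = max(self.machine_ready + self.job_ready)
        return -(after - before)  # reward: how much the makespan grew

    def makespan(self) -> int:
        return max(self.machine_ready)

def greedy_rollout(jobs: List[Job], n_machines: int) -> int:
    """Roll out a simple earliest-finish dispatch rule (a baseline policy)."""
    env = JobShopEnv(jobs, n_machines)
    while env.legal_actions():
        def finish_time(j: int) -> int:
            m, d = env.jobs[j][env.next_op[j]]
            return max(env.machine_ready[m], env.job_ready[j]) + d
        env.step(min(env.legal_actions(), key=finish_time))
    return env.makespan()
```

In the paper's framework, a learned policy network would replace the hand-coded `finish_time` rule, choosing among `legal_actions()` based on the graph-encoded state.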
The original abstract and full text are available from Foreign Electronic Measurement Technology (《国外电子测量技术》).