Reinforcement learning for dynamic environments: a classification of dynamic environments and a detection method of environmental changes
Authors: Masato Nagayoshi, Hajime Murao, H. Tamaki
Affiliations:
1. Niigata College of Nursing, 240 Shinnan, Joetsu 943-0147, Japan
2. Faculty of Cross-Cultural Studies, Kobe University, 1-2-1 Tsurukabuto, Nada-ku, Kobe 657-8501, Japan
3. Graduate School of Engineering, Kobe University, Rokko-dai, Nada-ku, Kobe 657-8501, Japan
Abstract: Engineers and researchers are paying increasing attention to reinforcement learning (RL) as a key technique for realizing computational intelligence, such as adaptive and autonomous decentralized systems. In general, however, RL is not easy to put into practical use. Our prior research mainly addressed the problem of designing state and action spaces, for which we proposed an adaptive co-construction method of state and action spaces. Designing state and action spaces is more difficult in dynamic environments than in static ones, so an adaptive co-construction method is even more effective there. In this paper, we focus on the problem of adaptation in dynamic environments. First, we classify tasks in dynamic environments and propose a method for detecting environmental changes in order to adapt to them. Next, we conduct computational experiments on a "path planning problem" with a slowly changing environment, in which aging of the system is assumed, and confirm the performance of a conventional RL method and of the proposed detection method.
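The abstract only names the proposed detection method without describing it, so the sketch below is purely illustrative and is not the authors' algorithm. It shows one generic way an RL agent might flag an environmental change, by comparing a short-term average reward against a longer-term baseline; the class name, window sizes, and threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch (not the paper's method): flag a possible environmental
# change when the recent average reward falls well below a long-term baseline.
from collections import deque


class RewardChangeDetector:
    """Suspect an environmental change from a sustained drop in reward."""

    def __init__(self, short_window=50, long_window=500, drop_ratio=0.5):
        self.short = deque(maxlen=short_window)   # recent rewards
        self.long = deque(maxlen=long_window)     # longer-term baseline
        self.drop_ratio = drop_ratio              # tolerated fraction of baseline

    def update(self, reward):
        """Record one reward; return True if a change is suspected."""
        self.short.append(reward)
        self.long.append(reward)
        if len(self.long) < self.long.maxlen:
            return False                          # not enough history yet
        recent = sum(self.short) / len(self.short)
        baseline = sum(self.long) / len(self.long)
        return recent < self.drop_ratio * baseline


if __name__ == "__main__":
    detector = RewardChangeDetector()
    # Simulated reward stream: stable at 1.0, then the environment "ages"
    # and rewards drop to 0.2 from step 600 onward.
    for step in range(1000):
        r = 1.0 if step < 600 else 0.2
        if detector.update(r):
            print(f"change suspected at step {step}")
            break
```

In a slowly changing (aging) environment such as the path planning task mentioned above, a threshold-based detector like this would fire only after the drift has accumulated; the paper's own method should be consulted for how detection is actually performed.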