Learning goal hierarchies from structured observations and expert annotations
Authors: Tolga Könik, John E. Laird
Affiliation: (1) Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, CA 94305, USA; (2) Artificial Intelligence Laboratory, Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109, USA
Abstract: We describe a relational learning by observation framework that automatically creates cognitive agent programs that model expert task performance in complex dynamic domains. Our framework uses observed behavior and goal annotations of an expert as the primary input, interprets them in the context of background knowledge, and returns an agent program that behaves similarly to the expert. We map the problem of creating an agent program onto multiple learning problems that can be represented in a “supervised concept learning” setting. The acquired procedural knowledge is partitioned into a hierarchy of goals and represented with first-order rules. Using an inductive logic programming (ILP) learning component allows our framework to naturally combine structured behavior observations, parametric and hierarchical goal annotations, and complex background knowledge. To deal with the large domains we consider, we have developed an efficient mechanism for storing and retrieving structured behavior data. We have tested our approach using artificially created examples and behavior observation traces generated by AI agents. We evaluate the learned rules by comparing them to hand-coded rules.
Editor: Rui Camacho
Keywords: Relational learning by observation; Relational learning; Inductive logic programming (ILP); Behavioral cloning; Cognitive agent architectures
This article is indexed in SpringerLink and other databases.
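
The mapping the abstract describes, from an annotated behavior trace to supervised concept-learning problems, can be pictured with a small Python sketch. This is purely illustrative and is not the authors' implementation: the data structures (Snapshot, GoalAnnotation) and the function goal_selection_examples are hypothetical names chosen for the example, and the actual framework operates over richer relational traces with an ILP learner rather than this toy positive/negative split.

# Illustrative sketch only (not the paper's code): turning observed behavior
# plus goal annotations into positive/negative examples for learning the
# selection condition of one goal, as a supervised concept-learning problem.
from dataclasses import dataclass
from typing import List, Tuple, Set

# A behavior trace is a sequence of time-stamped relational facts.
Fact = Tuple[str, Tuple[str, ...]]          # e.g. ("at", ("agent1", "room3"))

@dataclass
class Snapshot:
    time: int
    facts: Set[Fact]

@dataclass
class GoalAnnotation:
    goal: str                                # e.g. "secure-room"
    args: Tuple[str, ...]                    # goal parameters, e.g. ("room3",)
    start: int                               # time the expert adopted the goal
    end: int                                 # time the goal terminated

def goal_selection_examples(trace: List[Snapshot],
                            annotations: List[GoalAnnotation],
                            goal: str):
    """Snapshots at which the expert adopted the goal become positive
    examples; all other snapshots become negatives. An ILP component could
    then induce first-order rules covering the positives."""
    starts = {a.start for a in annotations if a.goal == goal}
    positives, negatives = [], []
    for snap in trace:
        (positives if snap.time in starts else negatives).append(snap)
    return positives, negatives

# Toy usage: a two-step trace with one annotated goal episode.
trace = [
    Snapshot(0, {("at", ("agent1", "room1")), ("door-open", ("room1", "room3"))}),
    Snapshot(1, {("at", ("agent1", "room3"))}),
]
annotations = [GoalAnnotation("secure-room", ("room3",), start=1, end=1)]
pos, neg = goal_selection_examples(trace, annotations, "secure-room")
print(len(pos), "positive and", len(neg), "negative examples")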