Assessing problem solving in expert systems using human benchmarking
Authors: Harold F. O'Neil Jr., Yujing Ni, Eva L. Baker, Merlin C. Wittrock
Abstract: The human benchmarking approach assesses problem solving in expert systems by measuring their performance against a range of human problem-solving performances. We established a correspondence between the functions of the expert system GATES and the human problem-solving skills required to perform a scheduling task. We then developed process and outcome measures and administered them to groups assumed to differ in problem-solving ability. The problem-solving ability, or "intelligence", of this expert system is extremely high in the narrow domain of scheduling planes to airport gates, as indicated by its superior performance compared with that of undergraduates, graduate students, and expert human schedulers (i.e., air traffic controllers). Overall, the study supports the feasibility of using human benchmarking methodology to evaluate the problem-solving ability of a specific expert system.
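For context, the task GATES solves, assigning flights to a limited number of airport gates so that no two flights occupy the same gate at the same time, can be framed as an interval-partitioning problem. The sketch below is a hypothetical greedy heuristic in Python, not the rule-based GATES system evaluated in the paper; the Flight class, assign_gates function, and example flight codes are illustrative assumptions.

```python
from dataclasses import dataclass
import heapq

# Hypothetical sketch of the gate-assignment task (not the GATES expert
# system itself): assign each flight, given its arrival and departure
# times, to a gate so that flights at the same gate never overlap.

@dataclass
class Flight:
    flight_id: str
    arrival: int    # minutes after midnight
    departure: int

def assign_gates(flights: list[Flight]) -> dict[str, int]:
    """Return a mapping flight_id -> gate index using a greedy heuristic."""
    assignments: dict[str, int] = {}
    free_at: list[tuple[int, int]] = []  # min-heap of (time gate frees up, gate index)
    next_gate = 0
    for f in sorted(flights, key=lambda f: f.arrival):
        if free_at and free_at[0][0] <= f.arrival:
            # Reuse the gate that becomes free earliest.
            _, gate = heapq.heappop(free_at)
        else:
            # All existing gates are occupied; open a new one.
            gate, next_gate = next_gate, next_gate + 1
        assignments[f.flight_id] = gate
        heapq.heappush(free_at, (f.departure, gate))
    return assignments

if __name__ == "__main__":
    demo = [Flight("AA10", 600, 645),
            Flight("BA22", 610, 700),
            Flight("CX88", 650, 730)]
    print(assign_gates(demo))  # AA10 and CX88 can share a gate; BA22 needs its own
```

The greedy rule (always reuse the earliest-freed gate) minimizes the number of gates needed for non-overlapping assignment; GATES, as described in the paper, instead applies expert-system rules and also handles the richer constraints human schedulers face.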
Keywords: Assessment; Problem solving; Human benchmarking; Expert systems; Technology
This article is indexed in ScienceDirect and other databases.