An easy-to-use evaluation framework for benchmarking entity recognition and disambiguation systems
Authors:Hui Chen  Bao-gang Wei  Yi-ming Li  Yong-huai Liu  Wen-hao Zhu
Affiliation:1. College of Computer Science and Technology, Zhejiang University, Hangzhou, China; 2. Department of Computer Science, Aberystwyth University, Ceredigion, UK; 3. School of Computer Engineering and Science, Shanghai University, Shanghai, China
Abstract:Entity recognition and disambiguation (ERD) is a crucial technique for knowledge base population and information extraction. In recent years, numerous papers have been published on this subject, and various ERD systems have been developed. However, there is still confusion in the ERD field about how to compare these systems fairly and completely. There is therefore growing interest in developing a unified evaluation framework. In this paper, we present an easy-to-use evaluation framework (EUEF), which aims at facilitating the evaluation process and giving a fair comparison of ERD systems. EUEF is well designed and released to the public as open source, and thus can be easily extended with novel ERD systems, datasets, and evaluation metrics. Based on EUEF, it is easy to discover the advantages and disadvantages of a specific ERD system and its components. We compare several popular and publicly available ERD systems using EUEF, and draw some interesting conclusions after a detailed analysis.
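To make the evaluation setting concrete, the following sketch shows one common way ERD output is scored against a gold standard: strict matching of (span, entity) annotations with micro-averaged precision, recall, and F1. This is an illustrative assumption for how such a framework might score systems, not EUEF's actual API, and the entity identifiers are hypothetical.

```python
# Illustrative sketch (not EUEF's actual API): strict-match scoring of
# entity recognition and disambiguation (ERD) output.
# An annotation is a (start, end, entity_id) triple; a prediction counts
# as a true positive only if all three fields match a gold annotation.

def erd_scores(gold, predicted):
    """Return (precision, recall, f1) over sets of (start, end, entity_id) triples."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)  # exact span + entity matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the second mention is linked to the wrong entity,
# so 2 of 3 predictions are correct.
gold = [(0, 5, "Q1"), (10, 16, "Q2"), (20, 25, "Q3")]
pred = [(0, 5, "Q1"), (10, 16, "Q9"), (20, 25, "Q3")]
print(erd_scores(gold, pred))
```

Stricter or looser variants (e.g. partial span overlap, or scoring recognition and disambiguation separately) change only the matching predicate, which is one reason a pluggable metric layer, as the abstract describes, is useful.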
Keywords:
This article has been indexed by SpringerLink and other databases.