Consolidation of Subtasks for Target Task in Pipelined NLP Model
Authors: Jeong‐Woo Son, Heegeun Yoon, Seong‐Bae Park, Keeseong Cho, Won Ryu
Affiliation: 1. Jeong‐Woo Son (corresponding author, jwson@etri.re.kr) is with the Broadcasting & Telecommunications Media Research Laboratory, ETRI, Daejeon, and also with the School of Computer Science, Kyungpook National University, Daegu, Rep. of Korea.; 2. Heegeun Yoon (hkyoon@sejong.knu.ac.kr) and Seong‐Bae Park (sbpark@sejong.knu.ac.kr) are with the College of IT Engineering, Kyungpook National University, Daegu, Rep. of Korea.; 3. Keeseong Cho (chokis@etri.re.kr) and Won Ryu (wlyu@etri.re.kr) are with the Broadcasting & Telecommunications Media Research Laboratory, ETRI, Daejeon, Rep. of Korea.
Abstract: Most natural language processing tasks depend on the outputs of other tasks and therefore incorporate them as subtasks. The main problem with this type of pipelined model is that subtasks trained on their own data are not guaranteed to be optimal for the final target task, since they are never optimized with respect to it. As a solution to this problem, this paper proposes a consolidation of subtasks for a target task (CST2). In CST2, all parameters of the target task and its subtasks are optimized to fulfill the objective of the target task, and these optimized parameters are found through a backpropagation algorithm. In experiments in which text chunking is the target task and part‐of‐speech tagging is its subtask, CST2 outperforms a traditional pipelined text chunker. The experimental results demonstrate the effectiveness of optimizing subtasks with respect to the target task.
Keywords: Pipelined NLP model; task consolidation; chained task learning
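
The abstract describes CST2 as end-to-end optimization of a pipelined model, with gradients from the target-task objective propagated back into the subtask. Below is a minimal, hypothetical PyTorch sketch of that idea for the part-of-speech-tagging → chunking pipeline used in the experiments; the module names, layer sizes, and the soft POS-distribution interface between the two components are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch: jointly optimizing a subtask (POS tagging) and a target task
# (text chunking) from the target-task loss alone, in the spirit of CST2.
# All names and sizes here are assumptions for illustration only.
import torch
import torch.nn as nn

VOCAB, N_POS, N_CHUNK, EMB, HID = 1000, 12, 5, 32, 64

class PosTagger(nn.Module):            # subtask: part-of-speech tagging
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_POS)

    def forward(self, tokens):         # tokens: (batch, seq)
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)             # POS logits: (batch, seq, N_POS)

class Chunker(nn.Module):              # target task: text chunking
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB + N_POS, HID, batch_first=True)
        self.out = nn.Linear(HID, N_CHUNK)

    def forward(self, tokens, pos_logits):
        # A soft POS distribution keeps the pipeline differentiable,
        # so gradients from the chunking loss reach the tagger.
        pos_dist = torch.softmax(pos_logits, dim=-1)
        h, _ = self.rnn(torch.cat([self.emb(tokens), pos_dist], dim=-1))
        return self.out(h)             # chunk logits: (batch, seq, N_CHUNK)

tagger, chunker = PosTagger(), Chunker()
# All parameters of both the subtask and the target task are updated
# from the single target-task objective.
opt = torch.optim.Adam(list(tagger.parameters()) + list(chunker.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch (random data, just to show one training step).
tokens = torch.randint(0, VOCAB, (8, 20))
chunk_labels = torch.randint(0, N_CHUNK, (8, 20))

pos_logits = tagger(tokens)                    # subtask output
chunk_logits = chunker(tokens, pos_logits)     # target-task output
loss = loss_fn(chunk_logits.view(-1, N_CHUNK), chunk_labels.view(-1))
loss.backward()                                # backpropagation also reaches the tagger
opt.step()
```

In a conventional pipeline, the tagger would be trained separately on POS-annotated data and frozen; the step above instead lets the chunking loss adjust the tagger's parameters as well, which is the kind of subtask consolidation the abstract describes.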