Funding: Strategic Priority Research Program (Class A) of the Chinese Academy of Sciences (XDA20080200); National Key Research and Development Program of China (2018YFB1005002)
Received: 2019-02-27
Revised: 2019-03-22

Memory Network with Multi-Head Attention for Chatbot
REN Jian-Long, YANG Li, KONG Wei-Yi, ZUO Chun. Memory Network with Multi-Head Attention for Chatbot [J]. Computer Systems & Applications, 2019, 28(9): 18-24.
Authors: REN Jian-Long, YANG Li, KONG Wei-Yi, ZUO Chun
Affiliations: Laboratory of Precise Computing, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; SinoSoft Co. Ltd., Beijing 100190, China
Abstract: Modeling and reasoning over multi-turn dialogue history is one of the main challenges in building an intelligent chatbot. Memory networks with recurrent or gated architectures have proven effective for conversation modeling. However, they suffer from two drawbacks: first, their complex recurrent structures make computation relatively inefficient; second, they rely on costly strong supervision or fixed prior knowledge, which hinders extension and transfer to new domains. To address these problems, this paper proposes an end-to-end memory network with multi-head attention. First, the model represents text input by combining word embeddings with position encoding. Second, it uses parallel multi-head attention to capture, in different subspaces, the key information in conversational interactions and thereby better model the dialogue history. Finally, multiple attention layers are stacked via shortcut connections to manage the information flow and enable repeated reasoning over the modeled history. Experiments on the bAbI-dialog dataset show that the network effectively models and reasons over multi-turn dialogues while achieving good time performance.
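The pipeline the abstract describes (word embeddings plus position encoding, parallel multi-head attention over subspaces, and shortcut-connected stacking of attention layers) can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation; all function names, dimensions, and the sinusoidal encoding choice are assumptions.

```python
import numpy as np

def position_encoding(seq_len, d_model):
    # Sinusoidal position encoding added to word embeddings so the
    # memory representation is order-aware (an assumed encoding scheme).
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(query, memory, n_heads, rng):
    # Split the model dimension into n_heads subspaces, attend over the
    # dialogue memory in each subspace in parallel, then concatenate,
    # letting each head capture a different interaction pattern.
    seq_len, d_model = memory.shape
    d_head = d_model // n_heads
    outs = []
    for _ in range(n_heads):
        Wq = rng.standard_normal((d_model, d_head)) * 0.1
        Wk = rng.standard_normal((d_model, d_head)) * 0.1
        Wv = rng.standard_normal((d_model, d_head)) * 0.1
        qh, kh, vh = query @ Wq, memory @ Wk, memory @ Wv
        scores = softmax(qh @ kh.T / np.sqrt(d_head))
        outs.append(scores @ vh)
    return np.concatenate(outs, axis=-1)

rng = np.random.default_rng(0)
seq_len, d_model, n_heads, n_layers = 6, 8, 2, 3
# Dialogue memory: (random stand-in) word embeddings + position encoding.
memory = rng.standard_normal((seq_len, d_model)) + position_encoding(seq_len, d_model)
query = rng.standard_normal((1, d_model))
for _ in range(n_layers):
    # Shortcut connection: add each attention layer's output back onto
    # the query, so stacked layers perform repeated reasoning steps.
    query = query + multi_head_attention(query, memory, n_heads, rng)
print(query.shape)  # (1, 8)
```

Here the shortcut (residual) addition is what lets several attention layers be stacked without the gated recurrent machinery the paper argues against; the random projection matrices stand in for learned parameters.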
Keywords:chatbot  multi-turn dialogue  multi-head attention  shortcut connections