New hierarchical interactive answer selection model based on Transformer and dual attention fusion
Cite this article: Zheng Qiaoyue, Duan Youxiang, Sun Qifeng. New hierarchical interactive answer selection model based on Transformer and dual attention fusion [J]. Application Research of Computers, 2022, 39(11).
Authors: Zheng Qiaoyue  Duan Youxiang  Sun Qifeng
Affiliation: China University of Petroleum (East China)
Funding: Major Science and Technology Project of CNPC (ZD2019-183-006); Fundamental Research Funds for the Central Universities (20CX05017A)
Abstract: Answer selection is a key component of question answering systems, and improving its accuracy is an important topic in question answering research. In recent years, deep learning techniques have been widely applied to answer selection with good results, but they still have limitations; in particular, low utilization of the question's semantic information, insufficient attention to local semantics, and poor perception of the interaction between sentences are especially prominent. To address these problems, this paper proposes NHITAS (new hierarchical interactive Transformer for answer selection), an answer selection model based on Transformer and dual attention fusion. First, in the information preprocessing stage, the model extracts question categories and keywords to pre-filter candidate answers, and introduces external knowledge as an additional semantic supplement for the sentences. Second, it models the semantic context of sentences with a hierarchically interactive Transformer, for which two structures are proposed to learn semantic features: the UP-Transformer (untied position Transformer) and the DA-Transformer (decay self-attention Transformer). Finally, dual attention fusion filters sentence noise and strengthens the semantic interaction between the question and the answer. Experiments on the WikiQA and TrecQA datasets show that NHITAS effectively improves answer selection performance compared with other models.
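The abstract names three mechanisms without giving formulas: an untied position score (UP-Transformer), a self-attention whose weights decay with token distance (DA-Transformer), and a dual attention fusion between question and answer. The following minimal NumPy sketch shows one plausible reading of each. It is an illustration only: the function names, the linear decay form, and the concatenation-based fusion are assumptions, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decay_self_attention(X, Wq, Wk, Wv, pos_bias, decay=0.1):
    # Self-attention with an additive position bias kept separate ("untied")
    # from the content scores, plus a distance-based penalty on the logits
    # so attention decays with token distance. The linear decay is assumed.
    n, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = (Q @ K.T) / np.sqrt(d) + pos_bias       # content + untied position
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    logits = logits - decay * dist                   # hypothetical decay term
    return softmax(logits) @ V

def dual_attention_fusion(Q_repr, A_repr):
    # Bidirectional question<->answer cross-attention, fused by concatenating
    # each side's original and attended representations. One plausible reading
    # of "dual attention fusion"; the details are assumed.
    d = Q_repr.shape[1]
    scores = (Q_repr @ A_repr.T) / np.sqrt(d)        # question-to-answer affinity
    q2a = softmax(scores, axis=1) @ A_repr           # answer context per question token
    a2q = softmax(scores.T, axis=1) @ Q_repr         # question context per answer token
    return (np.concatenate([Q_repr, q2a], axis=1),
            np.concatenate([A_repr, a2q], axis=1))

# toy run: 6 question tokens, 8 answer tokens, hidden size 16
d = 16
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Xq = rng.standard_normal((6, d))
Xa = rng.standard_normal((8, d))
bias_q = rng.standard_normal((6, 6)) * 0.01          # stand-in for a learned untied position bias
bias_a = rng.standard_normal((8, 8)) * 0.01
Hq = decay_self_attention(Xq, Wq, Wk, Wv, bias_q)
Ha = decay_self_attention(Xa, Wq, Wk, Wv, bias_a)
Fq, Fa = dual_attention_fusion(Hq, Ha)
print(Fq.shape, Fa.shape)                            # (6, 32) (8, 32)

The toy run prints (6, 32) and (8, 32): each token representation doubles in width after fusion, which is the kind of question-aware/answer-aware encoding a downstream similarity scorer would consume.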

关 键 词:答案选择    Transformer    双重注意力机制    问答系统    深度学习
Received: 2022-04-29
Revised: 2022-10-20
