
Reinforcement Learning Inference Techniques for Knowledge Graph Constrained Question Answering
Citation: BI Xin, NIE Hao-Jie, ZHAO Xiang-Guo, YUAN Ye, WANG Guo-Ren. Reinforcement learning inference techniques for knowledge graph constrained question answering[J]. Journal of Software, 2023, 34(10): 4565-4583.
Authors: BI Xin, NIE Hao-Jie, ZHAO Xiang-Guo, YUAN Ye, WANG Guo-Ren
Affiliation: Key Laboratory of Ministry of Education on Safe Mining of Deep Metal Mines, Northeastern University, Shenyang 110819, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang 110819, Liaoning, China; College of Software, Northeastern University, Shenyang 110819, Liaoning, China; School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Foundation item: National Natural Science Foundation of China (62072087, 61932004, 62002054, 61972077, U2001211)
Abstract: The knowledge graph question answering (KGQA) task analyzes a question, reasons over a knowledge graph, and returns the precise answer to the user; it is now widely used in intelligent information services such as intelligent search and personalized recommendation. Considering the high cost of manual annotation in relation-supervised learning methods, scholars have begun to adopt weakly supervised learning methods such as reinforcement learning to design KGQA models. However, for complex questions with constraints, existing methods face two major challenges: (1) multi-hop long-path reasoning leads to sparse and delayed rewards; (2) reasoning path branches introduced by constrained questions are difficult to handle. To address these challenges, this study designs a reward function that incorporates constraint information, resolving the sparse and delayed rewards faced by weakly supervised learning, and proposes COPAR, a constrained path reasoning model based on reinforcement learning, with an attention-based action selection strategy and a constraint-based entity selection strategy. COPAR selects relations and entities according to the constraint information of the question, reduces the reasoning search space, and thereby solves the reasoning path branching problem. In addition, an ambiguity constraint processing strategy is proposed to effectively resolve ambiguity among reasoning paths. The performance of COPAR is verified and compared on benchmark KGQA datasets. Experimental results show that, compared with existing state-of-the-art methods, COPAR achieves a relative performance improvement of 2%-7% on multi-hop datasets and outperforms all rival models on constrained datasets, with accuracy improved by more than 7.8%.

Keywords: knowledge graph; constrained path reasoning; constrained question answering; reinforcement learning
Received: 2022-07-05
Revised: 2022-12-14

Reinforcement Learning Inference Techniques for Knowledge Graph Constrained Question Answering
BI Xin, NIE Hao-Jie, ZHAO Xiang-Guo, YUAN Ye, WANG Guo-Ren. Reinforcement Learning Inference Techniques for Knowledge Graph Constrained Question Answering[J]. Journal of Software, 2023, 34(10): 4565-4583.
Authors: BI Xin, NIE Hao-Jie, ZHAO Xiang-Guo, YUAN Ye, WANG Guo-Ren
Affiliation:Key Laboratory of Ministry of Education on Safe Mining of Deep Metal Mines, School of Resources and Civil Engineering, Northeastern University, Shenyang, 110819, China;School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China;College of Software, Northeastern University, Shenyang, 110819, China;School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
Abstract: Knowledge graph based question answering (KGQA) analyzes natural language questions, performs reasoning over knowledge graphs, and ultimately returns accurate answers to users. It has been widely used in intelligent information services, such as modern search engines and personalized recommendation. Considering the high cost of manually labeling reasoning steps as supervision in relation-supervised learning methods, scholars began to explore weakly supervised learning methods, such as reinforcement learning, to design KGQA models. However, for complex questions with constraints, existing reinforcement learning based KGQA methods face two major challenges: 1) multi-hop long-path reasoning leads to sparse and delayed rewards; 2) existing methods cannot handle reasoning path branches introduced by constraint information. To address these challenges in constrained question answering tasks, a reward shaping strategy incorporating constraint information is designed to resolve the sparse and delayed rewards. In addition, a reinforcement learning based constrained path reasoning model named COPAR is proposed. COPAR consists of an action determination strategy based on an attention mechanism and an entity determination strategy based on constraint information. It selects the correct relations and entities according to the constraint information of the question, reduces the reasoning search space, and ultimately solves the reasoning path branching problem. Moreover, an ambiguity constraint processing strategy is proposed to effectively resolve ambiguity among reasoning paths. The performance of COPAR is verified and compared using benchmark datasets of the KGQA task.
The experimental results indicate that, compared with existing state-of-the-art methods, COPAR achieves a relative performance improvement of 2%-7% on multi-hop question datasets and outperforms all rival models on constrained question datasets, with accuracy improved by at least 7.8%.
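The reward shaping idea described above can be sketched as follows: instead of rewarding only a correct terminal answer (which is sparse and delayed over long multi-hop paths), the agent also earns partial credit for each question constraint its path satisfies. This is a minimal illustration of the general technique, not the paper's actual reward function; the 0.5/0.5 weighting and all argument names are hypothetical.

```python
def shaped_reward(predicted, gold_answers, constraints, satisfied):
    """Illustrative shaped reward for constrained path reasoning.

    predicted: entity the agent stopped at.
    gold_answers: set of correct answer entities.
    constraints: constraints extracted from the question.
    satisfied: constraints actually matched along the reasoning path.
    """
    base = 1.0 if predicted in gold_answers else 0.0
    if not constraints:
        # Unconstrained question: fall back to the sparse terminal reward.
        return base
    coverage = sum(1 for c in constraints if c in satisfied) / len(constraints)
    # Dense signal: even a wrong terminal answer earns partial credit for
    # satisfied constraints, mitigating sparse and delayed rewards.
    return 0.5 * base + 0.5 * coverage
```

A path that reaches a wrong entity while satisfying the question's temporal constraint, for example, still receives a nonzero reward, giving the agent a gradient toward constraint-consistent branches early in training.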
Keywords: knowledge graph; constrained path reasoning; constrained question answering; reinforcement learning