
Ontology-Instances Joint Learning Method Fusing Entity Type Information
Cite this article: YOU Leqi, PEI Zhongmin, LUO Zhangkai. Ontology-Instances Joint Learning Method Fusing Entity Type Information[J]. Computer Engineering, 2022, 48(7): 82-88. DOI: 10.19678/j.issn.1000-3428.0062113
Authors: YOU Leqi  PEI Zhongmin  LUO Zhangkai
Affiliation: Key Laboratory of Complex Electronic System Simulation, Space Engineering University, Beijing 101416, China
Funding: Basic Research Project of the National Defense Science and Technology Key Laboratory (DXZT-JC-ZZ-2018-007)
Abstract: Joint learning over the ontology graph and the instance graph that represent a knowledge graph improves the efficiency of embedding learning, but it cannot distinguish the different meanings an entity takes in different scenarios. By incorporating the relation type features of the entities in a triple during embedding, an ontology-instance joint learning method that fuses entity type information, JOIE-TKRL-CT, is proposed to represent polysemous entities in joint learning and improve the efficiency of knowledge graph embedding learning. For intra-view relation representation, entity type information is incorporated through a hierarchical entity type model, and representations are learned in two independent embedding spaces. For cross-view relation representation, the ontology and instances represented in the two independent spaces are linked across views by a non-linear mapping. Experimental results on the YAGO26K-906 and DB111K-174 datasets show that JOIE-TKRL-CT accurately captures the entity type information of the knowledge graph and improves the performance of the joint learning model; compared with baseline models such as TransE, HolE, and DistMult, it achieves the best performance on instance triple completion and entity classification tasks, demonstrating good knowledge learning effectiveness.
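To make the intra-view part concrete: the hierarchical entity type model follows the TKRL idea of projecting head and tail entities with type-specific matrices before applying a translation-based score. The sketch below is only a minimal illustration under that assumption; the weighted matrix combination, function names, and dimensions are hypothetical and not taken from the paper.

```python
import numpy as np

def type_projection(type_matrices, weights):
    # Weighted combination of per-type projection matrices,
    # in the spirit of a weighted hierarchical type encoder (assumed form).
    W = sum(w * M for w, M in zip(weights, type_matrices))
    return W / max(sum(weights), 1e-12)

def intra_view_score(h, r, t, M_h, M_t):
    # TransE-style translation score after projecting head and tail
    # into their type-specific subspaces: || M_h h + r - M_t t ||_2
    return np.linalg.norm(M_h @ h + r - M_t @ t)

# Toy usage with random 8-dimensional embeddings.
d = 8
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))
M_h = type_projection([rng.normal(size=(d, d)) for _ in range(2)], [0.5, 0.5])
M_t = type_projection([rng.normal(size=(d, d)) for _ in range(2)], [0.5, 0.5])
print(intra_view_score(h, r, t, M_h, M_t))
```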

Keywords: joint learning  entity polysemy  cross-view transformation  hierarchical type model  triple completion  entity classification
Received: 2021-07-19
Revised: 2021-09-07

Ontology-Instances Joint Learning Method Fusing Entity Type Information
YOU Leqi,PEI Zhongmin,LUO Zhangkai. Ontology-Instances Joint Learning Method Fusing Entity Type Information[J]. Computer Engineering, 2022, 48(7): 82-88. DOI: 10.19678/j.issn.1000-3428.0062113
Authors:YOU Leqi  PEI Zhongmin  LUO Zhangkai
Affiliation:Key Laboratory of Complex Electronic System Simulation, Space Engineering University, Beijing 101416, China
Abstract: Joint learning of the ontology graph and the instance graph that represent a knowledge graph can effectively improve the efficiency of embedding learning, but it cannot represent the different meanings that entities take in different scenarios. To address this, this paper incorporates the relation type features of the entities in a triple during embedding and proposes an ontology-instance joint learning method, JOIE-TKRL-CT, that fuses entity type information, with the goals of representing polysemous entities in joint learning and improving the efficiency of knowledge graph embedding learning. For intra-view relation representation, the method incorporates entity type information through a hierarchical entity type model and learns representations in two independent embedding spaces. For cross-view relation representation, the ontology and instances represented in the two independent spaces are linked across views by a non-linear mapping. Multiple comparison experiments on the YAGO26K-906 and DB111K-174 datasets show that JOIE-TKRL-CT accurately captures the hierarchical type information of the knowledge graph and improves the performance of the joint learning model. Compared with baseline models such as TransE, HolE, and DistMult, the proposed model achieves the best performance on instance triple completion and entity classification tasks and shows better knowledge learning effectiveness.
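The cross-view linking described above maps instance embeddings into the ontology space with a non-linear function and trains the mapping with a margin-based objective. A minimal sketch follows; the tanh activation, the hinge loss, and all names and dimensions are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def cross_view_map(e, W, b):
    # Non-linear mapping from the instance space to the ontology space,
    # here assumed to be f(e) = tanh(W e + b).
    return np.tanh(W @ e + b)

def cross_view_loss(c_pos, c_neg, e, W, b, margin=1.0):
    # Margin-based hinge loss: pull the mapped instance toward its true
    # concept c_pos and push it away from a corrupted concept c_neg.
    f_e = cross_view_map(e, W, b)
    return max(0.0, margin + np.linalg.norm(c_pos - f_e) - np.linalg.norm(c_neg - f_e))

# Toy usage: 8-dimensional instance space, 4-dimensional ontology space.
rng = np.random.default_rng(1)
e = rng.normal(size=8)
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)
c_pos, c_neg = rng.normal(size=(2, 4))
print(cross_view_loss(c_pos, c_neg, e, W, b))
```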
Keywords: joint learning  entity polysemy  cross-view transformation  hierarchical type model  triple completion  entity classification