Multi-teacher learning graph neural network based on feature and graph structure information augmentation
Cite this article: Zhang Jiajie, Guo Yi, Wang Jiahui. Multi-teacher learning graph neural network based on feature and graph structure information augmentation[J]. Application Research of Computers, 2023, 40(7).
Authors: Zhang Jiajie  Guo Yi  Wang Jiahui
Affiliation: School of Information Science and Engineering, East China University of Science and Technology; School of Information Science and Engineering, East China University of Science and Technology; School of Computer and Information Engineering, Shanghai Polytechnic University
Foundation item: Scientific Research Program of the Science and Technology Commission of Shanghai Municipality (22DZ1204903, 22511104800)
Abstract: In recent years, the powerful representation and modeling capabilities of graph neural networks on graph data have led to their wide application and major breakthroughs in many fields. However, existing models tend to optimize the graph convolution aggregation strategy and the network structure, while neglecting the prior knowledge contained in the graph data itself. To address this problem, this paper designed a multi-teacher learning graph neural network based on feature and structure information augmentation through knowledge distillation, breaking the limitation of existing models in extracting prior knowledge from the data. Exploiting the rich feature and structure information behind the graph data, it designed separate data augmentation schemes for node features and for edges. On this basis, it embedded knowledge from both the original data and the augmented data through a multi-teacher learning module, so that the student model could learn more prior knowledge about the data. On the Cora, Citeseer and PubMed datasets, node classification accuracy improved by 1%, 1.3% and 1.1%, respectively. Experimental results demonstrate that the proposed information-augmented multi-teacher learning model can effectively capture prior knowledge.
Keywords: graph neural network  knowledge distillation  data augmentation  node classification
Received: 2022/11/14
Revised: 2023/6/16
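
The abstract describes the pipeline only at a high level: augment node features and edges, train teachers on the original and augmented graphs, and distill their knowledge into a student. The sketch below is a minimal, hypothetical PyTorch illustration of that general scheme, not the authors' implementation; the two-layer GCN, the mask/drop rates, the temperature `tau`, the weighting `alpha`, and helper names such as `feature_augment` and `edge_augment` are all illustrative assumptions.

```python
# Minimal illustrative sketch (not the paper's released code), assuming dense
# adjacency matrices and plain PyTorch; all hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCN(nn.Module):
    """Two-layer GCN over a symmetrically normalized dense adjacency matrix."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)  # per-node class logits


def normalize_adj(adj):
    """D^{-1/2} (A + I) D^{-1/2} normalization."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


def feature_augment(x, mask_rate=0.2):
    """Node-feature augmentation: randomly zero out feature entries."""
    return x * (torch.rand_like(x) > mask_rate).float()


def edge_augment(adj, drop_rate=0.2):
    """Structure augmentation: randomly drop edges, kept symmetric."""
    keep = torch.triu((torch.rand_like(adj) > drop_rate).float(), diagonal=1)
    return adj * (keep + keep.t())


def distill_student(x, adj, labels, train_mask, teachers,
                    tau=2.0, alpha=0.5, epochs=200):
    """Train a student GCN against the averaged soft targets of the teachers."""
    adj_norm = normalize_adj(adj)
    student = GCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(student.parameters(), lr=0.01, weight_decay=5e-4)
    with torch.no_grad():
        soft = torch.stack([F.softmax(t(x, adj_norm) / tau, dim=1)
                            for t in teachers]).mean(dim=0)
    for _ in range(epochs):
        opt.zero_grad()
        logits = student(x, adj_norm)
        ce = F.cross_entropy(logits[train_mask], labels[train_mask])
        kd = F.kl_div(F.log_softmax(logits / tau, dim=1), soft,
                      reduction="batchmean") * tau * tau
        (alpha * ce + (1 - alpha) * kd).backward()
        opt.step()
    return student
```

In this sketch, the teachers would be ordinary GCNs trained separately on the original graph, on `(feature_augment(x), adj)`, and on `(x, edge_augment(adj))`, so that `distill_student` transfers both feature-level and structure-level prior knowledge to the student through the averaged softened predictions.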
