Visual question answering model of vision and scene text based on multi-modal reasoning graph neural network
Citation: Zhang Haitao, Guo Xinyu. Visual question answering model of vision and scene text based on multi-modal reasoning graph neural network[J]. Application Research of Computers, 2022, 39(1): 280-284+302.
Authors: Zhang Haitao, Guo Xinyu
Affiliation: School of Software, Liaoning Technical University, Huludao, Liaoning 125105, China
Funding: General Program of the Natural Science Foundation of Liaoning Province; Equipment Pre-research Fund of the PLA General Armaments Department.
Abstract: Poor text reading ability and inadequate visual reasoning are the main reasons for the weak performance of existing visual question answering (VQA) models. To address these problems, this paper designs a multi-modal reasoning graph neural network (MRGNN) model. It exploits multiple forms of information in the image to help understand the scene-text content: it preprocesses the scene-text image into a visual object graph and a text graph, and filters out redundant information with a question self-attention module. An attention-based aggregator then refines the node features across the subgraphs, fusing information from the different modalities; the updated nodes draw on cross-modal context to provide better features for the answering module. Experiments on the ST-VQA and TextVQA datasets verify the model's effectiveness, and the results show that MRGNN clearly outperforms several other models on this task.

Keywords: visual question answering  graph neural network  multi-modal reasoning  question self-attention
Received: 2021-06-01
Revised: 2021-12-18

Visual question answering model of vision and scene text based on multi-modal reasoning graph neural network
Zhang Haitao and Guo Xinyu. Visual question answering model of vision and scene text based on multi-modal reasoning graph neural network[J]. Application Research of Computers, 2022, 39(1): 280-284+302.
Authors: Zhang Haitao and Guo Xinyu
Affiliation:(School of Software,Liaoning Technical University,Huludao Liaoning 125105,China)
Abstract: When a VQA model deals with a vision-and-scene-text task, it needs to read the visual and textual content of the image and reason about the question to obtain the answer. Poor text reading ability and inadequate visual reasoning are the main reasons for the weak performance of existing visual question answering models. To solve these problems, this paper designs an MRGNN. It uses various forms of information in the image to help understand the scene-text content, preprocesses the scene-text image into a visual object graph and a text graph, and filters redundant information with a question self-attention module. It uses an attention-based aggregator to refine the node features across the subgraphs and fuse information from the different modalities. The updated nodes use the context information of the different modalities to provide better features for the answering module. This paper verifies the validity of the MRGNN model on the ST-VQA and TextVQA datasets. The experimental results show that the MRGNN model achieves good results compared with some classical models on this task.
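The core cross-modal step the abstract describes, an attention-based aggregator that lets each node in one subgraph (e.g. visual objects) attend over the nodes of the other subgraph (e.g. OCR tokens) and fold the attended context into its own feature, can be sketched as follows. This is a minimal, dependency-free illustration of the general technique, not the paper's implementation; the function name, the dot-product scoring, and the residual update are all assumptions.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_aggregate(query_nodes, context_nodes):
    """Hypothetical cross-subgraph aggregator: each query node attends
    over all context nodes (dot-product attention) and adds the
    attention-weighted context to its own feature (residual update)."""
    updated = []
    for q in query_nodes:
        weights = softmax([dot(q, c) for c in context_nodes])
        context = [sum(w * c[i] for w, c in zip(weights, context_nodes))
                   for i in range(len(q))]
        updated.append([qi + ci for qi, ci in zip(q, context)])
    return updated

# toy example: 2 visual-object nodes, 3 text (OCR-token) nodes, 2-d features
visual = [[1.0, 0.0], [0.0, 1.0]]
text = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
fused_visual = attention_aggregate(visual, text)
```

In a full model the scores would come from learned projections and the update would pass through a nonlinearity, but the shape of the computation, score, normalize, aggregate, update, is the same.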
Keywords:visual question answering  graph neural network  multi-modal reasoning  question self-attention
This article is indexed in databases including VIP and Wanfang Data.