Question-guided spatial relation graph reasoning model for visual question answering
Cite this article: Lan Hong, Zhang Pufen. Question-guided spatial relation graph reasoning model for visual question answering[J]. Journal of Image and Graphics, 2022, 27(7): 2274-2286.
Authors: Lan Hong, Zhang Pufen
Affiliation: School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
Foundation items: National Natural Science Foundation of China (61762046); Natural Science Foundation of Jiangxi Province (20161BAB212048)
Abstract: Objective Visual question answering (VQA) is often regarded as an AI-complete task: answering a free-form question about an image requires a full understanding of the visual scene, and in particular of the interactions among multiple objects. Existing VQA models are built mainly around attention mechanisms and multimodal fusion; they neither explicitly model the semantic links between objects in the scene nor give enough weight to the objects' spatial positions, so their spatial relation reasoning is weak. Targeting questions that require spatial relation reasoning, this paper structures the image through the inherent spatial relation attributes between visual objects and proposes a question-guided spatial relation graph reasoning (QG-SRGR) model. Method First, using saliency-based attention, the salient visual objects and their features are extracted with a faster region-based convolutional neural network (Faster R-CNN). Second, the visual objects and their spatial relations are structured as a spatial relation graph: the objects serve as the vertices, and the edges are constructed dynamically from the inherent spatial relations between object pairs. Third, question-guided focused attention carries out question-based spatial relation reasoning. Focused attention comprises node attention, which locates the visual objects most relevant to the question, and edge attention, which discovers the spatial relations most relevant to it. A gated graph reasoning network (GGRN) is then built from the node and edge attention weights; through its message-passing mechanism and its gated aggregation of neighbouring node features, the model captures deep interactions between nodes and learns spatially aware visual representations, thereby achieving question-based spatial relation reasoning. Finally, the spatial relation-aware image features are fused with the question features to predict the answer (illustrative code sketches of the graph construction and the reasoning step follow this record). Result QG-SRGR is trained, validated and tested on the VQA v2.0 dataset and clearly outperforms the Prior, Language-only, MCB (multimodal compact bilinear), ReasonNet and Bottom-Up models in every accuracy category. On the Test-dev set the overall accuracy is 66.43%, with 83.58% on yes/no questions, 45.61% on counting questions and 56.62% on the other question types; the corresponding Test-std accuracies are 66.65%, 83.86%, 45.36% and 56.93%. Relative to ReasonNet on the Test-std set, QG-SRGR improves the overall, yes/no, counting and other-question accuracies by 2.73%, 4.41%, 5.37% and 0.65%, respectively. Ablation experiments on the validation set confirm the effectiveness of each component. Conclusion The proposed QG-SRGR model matches the question text well with the image target regions and the spatial relations between objects, and it shows strong reasoning ability, especially on questions that require spatial relation reasoning.

Keywords: visual question answering (VQA); graph convolutional neural network (GCN); attention mechanism; spatial relation reasoning; multimodal learning
Received: 2020-11-03
Revised: 2021-04-20
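
To make the graph-construction step concrete, here is a minimal NumPy sketch of how a spatial relation graph might be built from Faster R-CNN bounding boxes. The edge labels used here (eight directional sectors plus an "inside" relation) are an illustrative assumption; the abstract does not specify the paper's exact relation taxonomy or edge-construction rule.

```python
import numpy as np

def build_spatial_relation_graph(boxes):
    """Toy spatial relation graph over detected objects.

    boxes: (N, 4) array of [x1, y1, x2, y2] Faster R-CNN detections.
    Returns an (N, N) integer adjacency matrix with hypothetical relation
    labels: 0 = self/no edge, 1..8 = direction of object j relative to
    object i (45-degree sectors), 9 = object j lies inside object i.
    """
    n = boxes.shape[0]
    adj = np.zeros((n, n), dtype=np.int64)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0   # box centres
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # containment takes priority over direction
            if (boxes[j, 0] >= boxes[i, 0] and boxes[j, 1] >= boxes[i, 1]
                    and boxes[j, 2] <= boxes[i, 2] and boxes[j, 3] <= boxes[i, 3]):
                adj[i, j] = 9
                continue
            # otherwise label the edge by the angle between box centres
            angle = np.arctan2(cy[j] - cy[i], cx[j] - cx[i])   # in [-pi, pi]
            adj[i, j] = int((angle + np.pi) // (np.pi / 4)) % 8 + 1
    return adj

# Example: three boxes, the third nested inside the first
boxes = np.array([[0, 0, 100, 100], [150, 0, 250, 100], [20, 20, 60, 60]], float)
print(build_spatial_relation_graph(boxes))
```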

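The PyTorch sketch below illustrates the reasoning step described in the abstract: question-guided node and edge attention over the spatial relation graph, gated aggregation of neighbour messages, and a simple fusion head for answer scoring. The module name, layer sizes, the single-step GRU update and the elementwise-product fusion are all assumptions made for illustration; the paper's actual GGRN may differ in these details.

```python
import torch
import torch.nn as nn

class GatedGraphReasoning(nn.Module):
    """Minimal sketch of question-guided graph reasoning
    (hypothetical layer layout, not the paper's exact GGRN)."""

    def __init__(self, d_v, d_q, d_e):
        super().__init__()
        self.node_att = nn.Linear(d_v + d_q, 1)  # node attention: object vs. question
        self.edge_att = nn.Linear(d_e + d_q, 1)  # edge attention: relation vs. question
        self.msg = nn.Linear(d_v + d_e, d_v)     # message from a neighbour and its edge
        self.gru = nn.GRUCell(d_v, d_v)          # gated update of node states

    def forward(self, v, e, q, mask):
        # v: (N, d_v) object features; e: (N, N, d_e) edge features
        # q: (d_q,) question embedding; mask: (N, N), 1.0 where an edge exists
        n = v.size(0)
        qn = q.unsqueeze(0).expand(n, -1)
        alpha = torch.sigmoid(self.node_att(torch.cat([v, qn], dim=-1)))           # (N, 1)
        qe = q.view(1, 1, -1).expand(n, n, -1)
        beta = torch.sigmoid(self.edge_att(torch.cat([e, qe], dim=-1))).squeeze(-1)
        beta = beta * mask                                                          # (N, N)
        # attention-weighted messages from every neighbour j to node i
        vj = v.unsqueeze(0).expand(n, -1, -1)                                       # (N, N, d_v)
        m = torch.tanh(self.msg(torch.cat([vj, e], dim=-1)))                        # (N, N, d_v)
        agg = (beta.unsqueeze(-1) * m).sum(dim=1) / (beta.sum(1, keepdim=True) + 1e-6)
        return alpha * self.gru(agg, v)    # spatially aware, question-weighted features

# Usage: pool the reasoned node states, fuse with the question, score answers
N, d_v, d_q, d_e, n_answers = 36, 512, 512, 64, 3000
reasoner = GatedGraphReasoning(d_v, d_q, d_e)
v, e, q = torch.randn(N, d_v), torch.randn(N, N, d_e), torch.randn(d_q)
mask = (torch.rand(N, N) > 0.5).float()
h = reasoner(v, e, q, mask).mean(dim=0)       # pooled graph representation
logits = nn.Linear(d_v, n_answers)(h * q)     # elementwise-product fusion head
```

The elementwise product stands in for whatever multimodal fusion the paper uses; the point of the sketch is that node and edge attention gate which objects and which spatial relations contribute to the aggregated representation.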