Similar Articles
 20 similar articles found (search time: 296 ms)
1.
Video question answering (Video QA) has received much attention in recent years: a model answers questions according to the visual content of a video clip. The task can be solved from the video data alone, but when a clip comes with relevant text, it can also be solved by fusing the video and text data. Two problems must then be addressed: how to select the useful region features from the video frames and the useful text features from the accompanying text, and how to fuse the video and text features. We therefore propose a forget memory network. The video-only variant solves Video QA from the video data alone: it selects the region features useful for the question and forgets the irrelevant ones. The video-and-text variant extracts the useful text features, forgets those irrelevant to the question, and fuses the video and text data to solve the task; the fused video and text features help improve experimental performance.
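The "select and forget" step above amounts to gating each region feature by its relevance to the question. A minimal numpy sketch of that idea follows; the single-output gate, the function names, and all shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forget_gate(region_feats, question_feat, W):
    """Gate each region feature by its relevance to the question.

    region_feats:  (num_regions, d) visual features from video frames
    question_feat: (d,) pooled question embedding
    W:             (2*d,) learned gate weights (illustrative single-output gate)
    """
    gates = []
    for r in region_feats:
        z = np.concatenate([r, question_feat]) @ W  # scalar relevance score
        gates.append(sigmoid(z))
    gates = np.array(gates)               # (num_regions,) values in (0, 1)
    return region_feats * gates[:, None]  # keep relevant regions, forget the rest

rng = np.random.default_rng(0)
kept = forget_gate(rng.normal(size=(5, 8)), rng.normal(size=8), rng.normal(size=16))
print(kept.shape)  # (5, 8)
```

A trained model would learn `W` end to end; here it is random, so the gating merely demonstrates the mechanism.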

2.
Cross-media question answering and reasoning over vision and language is a research hotspot in artificial intelligence: given visual content and a related question, a model must return the correct answer. With the rapid development of deep learning and its wide application in computer vision and natural language processing, the field has advanced quickly. This paper first systematically surveys current work, covering progress in image-based visual question answering and reasoning, video-based visual question answering and reasoning, and visual commonsense reasoning models and algorithms. Image-based visual question answering and reasoning is further divided into three categories (multimodal fusion, attention mechanisms, and reasoning), and visual commonsense reasoning into two (reasoning-based and pretraining-based). The paper then summarizes the commonly used question-answering and reasoning datasets and the results of representative models on them, and finally discusses future research directions for cross-media question answering and reasoning over vision and language.

3.
Commonsense question answering (CQA) requires understanding and reasoning over the QA context and related commonsense knowledge, such as a structured knowledge graph (KG). Existing studies combine language models and graph neural networks to model inference, but traditional knowledge graphs are mostly concept-based, ignoring the direct path evidence necessary for accurate reasoning. In this paper, we propose MRGNN (Meta-path Reasoning Graph Neural Network), a novel model that comprehensively captures sequential semantic information from both concepts and paths. In MRGNN, meta-paths are introduced as direct inference evidence, and a graph neural network aggregates features from concepts and paths simultaneously. Extensive experiments on the CommonsenseQA and OpenBookQA datasets show the effectiveness of MRGNN; further ablation experiments and a case study explain its reasoning behavior.

4.
Most video question answering (VideoQA) models embed the video and the question into the same space for answer reasoning, which makes multimodal interaction difficult and preserves video semantics poorly. To address these problems, a video captioning mechanism is proposed to obtain a textual representation of the video's semantic features, thereby avoiding multimodal interaction. The proposed method converts video features into a caption text through the captioning mechanism, lets the caption-text features interact with the question features in a reading-comprehension style, and finally infers the answer. Results on the MSVD-QA and MSRVTT-QA datasets show that the proposed model answers more accurately than existing models to varying degrees, indicating that the method better accomplishes the VideoQA task.

5.
Poor text-reading ability and weak visual reasoning are the main reasons why existing visual question answering (VQA) models underperform. To address these problems, a multi-modal reasoning graph neural network (MRGNN) model is designed. It uses multiple forms of information in the image to help understand scene-text content: the scene-text image is preprocessed into a visual-object graph and a text graph, and redundant information is filtered by a question self-attention module. An attention-equipped aggregator refines node features across the subgraphs, fusing information between modalities; the updated nodes use cross-modal context to provide better features for the answering module. Effectiveness is verified on the ST-VQA and TextVQA datasets, and the experimental results show that MRGNN clearly improves over other models for this task.

6.
Objective: Video grounding is an important and challenging task in video understanding: given a natural-language query, the described segment must be localized in an untrimmed video. Because the language and video modalities differ greatly in feature representation, constructing a suitable video-text multimodal representation and localizing the target segment accurately and efficiently are the key difficulties. This paper focuses on optimizing the video-text multimodal representation: motion information in the video is used to excite the motion semantics in the multimodal features, and grounding is performed in a proposal-free manner. Method: Multiple phrase features are extracted from the natural-language query with self-attention and fused with the video features across modalities, yielding multiple multimodal features that attend to different semantic phrases. The representation is then optimized along two axes: 1) on the temporal dimension, skip-connected 1D temporal convolutions model the local context of motion and align semantic phrases with video segments; 2) on the channel dimension, motion excitation computes the differences between temporally adjacent multimodal feature vectors to build a channel-weight distribution that responds to motion, thereby exciting the channels that encode motion. A non-local neural network models the dependencies between the multimodal features of different semantic phrases, and a temporal attention pooling module fuses them into a single feature vector from which the start and end times of the target segment are regressed. Results: The method is validated on multiple datasets. On Charades-STA and ActivityNet Captions, the mean intersection over union (mIoU) reaches 52.36% and 42.97%, and Recall@1 at IoU thresholds 0.3, 0.5, and 0.7 reaches 73.79%, 61.16%, and 52.36% on the former and 60.54%, 43.68%, and 25.43% on the latter, clearly outperforming methods such as LGI (local-global video-text interactions) and CPNet (contextual pyramid network). Conclusion: Using motion features to excite the video-text multimodal representation is proposed for video grounding; results on multiple datasets show that motion-excited features better represent the match between video segments and language queries.
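The motion-excitation step described above (adjacent-step feature differences turned into per-channel attention weights) can be sketched in a few lines. This is an illustrative numpy reduction of the idea, not the paper's implementation: the pooling, squashing, and absence of learned projections are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def motion_excitation(feats):
    """Excite the channels that respond to motion (illustrative sketch).

    feats: (T, C) multimodal features over T time steps.
    Differences between temporally adjacent features approximate motion;
    their magnitude, pooled over time, becomes a per-channel weight.
    """
    diff = feats[1:] - feats[:-1]       # (T-1, C) temporal differences
    motion = np.abs(diff).mean(axis=0)  # (C,) per-channel motion response
    weights = sigmoid(motion)           # squash to (0, 1)
    return feats * weights[None, :]     # re-weight channels by motion response

x = np.random.default_rng(1).normal(size=(10, 16))
y = motion_excitation(x)
print(y.shape)  # (10, 16)
```

Note that for a perfectly static input the differences vanish, so every channel receives the same neutral weight.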

7.
Commonsense question answering is an important natural language understanding task that aims to answer natural-language questions automatically using commonsense knowledge. It has broad application prospects in areas such as virtual assistants and social chatbots, and involves key scientific problems including knowledge mining and representation, language understanding and computation, and answer reasoning and generation, so it has attracted wide attention from both industry and academia. This survey first introduces the main datasets in the field; it then categorizes different commonsense knowledge sources by construction method, knowledge origin, and representation form, and analyzes and compares state-of-the-art commonsense QA models as well as characteristic methods for integrating commonsense knowledge. In particular, based on the commonalities and specifics of commonsense knowledge across QA scenarios, it establishes a taxonomy of six knowledge categories: attribute, semantic, causal, contextual, abstract, and intentional. On this basis, forward-looking studies are carried out on commonsense dataset construction, the collaboration between perceptual knowledge fusion and pretrained language models, and commonsense knowledge pre-classification, reporting how the above models perform under cross-dataset transfer and their potential contribution to commonsense answer reasoning. Overall, the survey both reviews existing data and state-of-the-art techniques and includes exploratory work toward cross-dataset knowledge construction, technique transfer, and generalization, providing a reference for further theoretical and technical development.

8.
Visual reasoning is a special kind of visual question answering that is essentially multi-step and compositional and requires intensive text-visual interaction. Its central problem is to design an effective and robust visual reasoning model, which requires overcoming two challenges: textual and visual information must be considered jointly to make accurate inferences, and existing deep learning-based works are often too specific to a particular task. To address these issues, we propose a knowledge memory embedding model with mutual modulation for visual reasoning. The approach learns knowledge-based embeddings derived from a key-value memory network to exploit textual and visual information jointly, and exploits prior knowledge through knowledge-based representation learning to generalize to other reasoning tasks. Experimental results on four benchmarks show that the proposed approach significantly outperforms other state-of-the-art methods while remaining robust. Most importantly, we apply our model to four reasoning tasks and show experimentally that it effectively supports relational reasoning and improves performance on several tasks and datasets.

9.
Visual question answering and visual dialog are important research tasks in artificial intelligence and representative problems at the intersection of computer vision and natural language processing. They require a machine to answer single-turn or multi-turn natural-language questions about given visual content, placing high demands on the machine's perception, cognition, and reasoning, and they have practical prospects in cross-modal human-computer interaction. This paper surveys recent progress on visual question answering and dialog, categorizes the datasets and algorithms, summarizes the research challenges and open problems, and finally discusses future development trends.

10.
毕鑫  聂豪杰  赵相国  袁野  王国仁 《软件学报》2023,34(10):4565-4583
Knowledge graph question answering (KGQA) returns the precise answer to a question through question analysis and knowledge-graph reasoning, and is widely used in intelligent services such as smart search and personalized recommendation. Given the high cost of manual annotation in relation-supervised learning, researchers have turned to weakly supervised methods such as reinforcement learning to design KGQA models. For complex questions with constraints, however, existing methods face two challenges: (1) multi-hop long-path reasoning leads to sparse and delayed rewards; (2) branching reasoning paths of constrained questions are hard to handle. To address these challenges, a reward function incorporating constraint information is designed to resolve the reward sparsity and delay of weakly supervised learning, and a reinforcement-learning-based constrained path-reasoning model, COPAR, is proposed. COPAR uses an attention-based action-selection strategy and a constraint-based entity-selection strategy to choose relations and entities according to the question's constraint information, shrinking the reasoning search space and resolving the path-branching problem. An ambiguity-constraint handling strategy is also proposed to effectively resolve ambiguous reasoning paths. COPAR is validated and compared on KGQA benchmark datasets: relative to state-of-the-art methods, performance improves by 2%-7% on multi-hop datasets, and on constrained datasets it outperforms all compared models, with accuracy gains above 7.8%.

11.
Video question answering is a research hotspot in deep learning, widely used in systems such as security surveillance and advertising. Within an attention framework, a prior-MASK attention model is built: a Faster R-CNN model extracts video keyframes and object labels from the video, which are attention-weighted in three ways against the question's text features, and a MASK screens out answers irrelevant to the question, improving the model's interpretability. Experimental results show that the model reaches 61% accuracy on the video QA task, with faster prediction and better results than video QA models such as VQA+ and SA+.
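Screening out question-irrelevant answers with a MASK reduces, in its simplest form, to a masked softmax over candidate scores. The sketch below is a generic illustration of that mechanism under assumed shapes, not the model's actual implementation.

```python
import numpy as np

def masked_attention(scores, mask):
    """Masked softmax: candidates with mask == 0 are screened out.

    scores: (n,) relevance scores of candidate answers
    mask:   (n,) 1 = related to the question, 0 = irrelevant
    """
    scores = np.where(mask > 0, scores, -1e9)  # suppress masked candidates
    e = np.exp(scores - scores.max())          # stable softmax
    return e / e.sum()

w = masked_attention(np.array([2.0, 1.0, 3.0]), np.array([1, 1, 0]))
print(w.round(3))
```

The highest-scoring candidate here is masked, so all probability mass is redistributed over the two remaining answers.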

12.
Aggregate question answering returns answers for given questions by obtaining query graphs with unique dependencies between values and their corresponding objects. Word-order dependency, the key to uniquely identifying a query graph's dependency, reflects the dependencies between the words in the question. However, owing to the semantic gap between questions encoded with word vectors and query graphs represented with logical formal elements, matching the correct query graph for a question is not trivial. Most existing approaches design more expressive query graphs for complex questions and rank them by directly calculating similarities, ignoring this gap. In this paper, we propose a novel Structure-sensitive Semantic Matching (SSM) approach that learns aligned representations of the dependencies in questions and query graphs to eliminate the gap. First, we propose a cross-structure matching module to bridge the gap between the two modalities (textual question and query graph). Second, we propose an entropy-based gated AQG filter to remove the structural noise caused by the uncertainty of dependencies. Finally, we present a two-channel query graph representation that explicitly fuses the semantics of the query graph's abstract structure and grounded content. Experimental results show that SSM learns aligned representations of questions and query graphs that eliminate the gaps between their dependencies, and improves F1 by up to 12% on the aggregation questions of two benchmark datasets.

13.
Shaoning Xiao, Yimeng Li, Yunan Ye, Long Chen, Shiliang Pu, Zhou Zhao, Jian Shao, Jun Xiao. Neural Processing Letters, 2020, 52(2): 993-1003

This work addresses video question answering (VideoQA) with a novel model and a new open-ended VideoQA dataset. VideoQA is a challenging field in visual information retrieval that aims to generate the answer according to the video content and the question; ultimately, it is a video understanding task, and efficiently combining multi-grained representations is the key to understanding a video. Existing works mostly focus on overall frame-level visual understanding, neglecting finer-grained and temporal information inside the video, or combine multi-grained representations simply by concatenation or addition. We therefore propose a multi-granularity temporal attention network that can search for the specific frames in a video that are holistically and locally related to the answer. We first learn mutual attention representations of the multi-grained visual content and the question; the mutually attended features are then combined hierarchically by a two-layer LSTM to generate the answer. We further evaluate several multi-grained fusion configurations to demonstrate the advantage of this hierarchical architecture. The effectiveness of our model is demonstrated on a large-scale VideoQA dataset built on the ActivityNet dataset.
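Mutual attention between visual content and question is commonly built from a single affinity matrix used in both directions. A minimal numpy sketch of that pattern follows; it is a generic co-attention illustration under assumed shapes, not the paper's exact network.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(video, question):
    """Mutually attend video (frames) and question (words) features.

    video: (F, d), question: (W, d). One affinity matrix serves both
    directions: question-attended video and video-attended question.
    """
    affinity = video @ question.T                 # (F, W) frame-word affinities
    v_att = softmax(affinity, axis=1) @ question  # each frame attends to words
    q_att = softmax(affinity.T, axis=1) @ video   # each word attends to frames
    return v_att, q_att

rng = np.random.default_rng(2)
v_att, q_att = mutual_attention(rng.normal(size=(6, 8)), rng.normal(size=(4, 8)))
print(v_att.shape, q_att.shape)  # (6, 8) (4, 8)
```

In the full model these mutually attended features would then be fused hierarchically (e.g., by the two-layer LSTM described above) rather than used directly.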


14.
Question answering (QA) over a knowledge base (KB) aims to provide a structured answer from a knowledge base to a natural language question. A key step in this task is representing and understanding the natural language query. In this paper, we propose to model natural language queries with tree-structured neural networks constructed from the constituency tree. We identify an interesting observation in the constituency tree: different constituents have their own semantic characteristics and may be suited to different subtasks in a QA system. Based on this, we incorporate the type information as an auxiliary supervision signal to improve QA performance; we call this approach type-aware QA. We jointly characterize both the answer and its answer type in a unified neural network model with an attention mechanism. Instead of simply using the root representation, we represent the query by combining the representations of different constituents with task-specific attention weights. Extensive experiments on public datasets demonstrate the effectiveness of the proposed model. More specifically, the learned attention weights are quite useful for understanding the query, and the representations produced for intermediate nodes can be used to analyze the effectiveness of components in a QA system.

15.
Remote sensing visual question answering (RSVQA) aims to extract scientific knowledge from remote sensing images. In recent years, many methods have emerged to bridge the semantic gap between remote sensing visual information and natural language. However, existing methods only consider the alignment and fusion of multimodal information: they neglect the deep mining of multi-scale features and their spatial position information in remote sensing image objects, and lack modeling and reasoning over scale features, leading to incomplete and inaccurate answer prediction. This paper proposes a multi-scale guided fusion inference network (MGFIN) to strengthen the visual-spatial reasoning ability of RSVQA systems. First, a Swin Transformer-based multi-scale visual representation module encodes multi-scale visual features embedded with spatial position information. Second, guided by language cues, a multi-scale relation-reasoning module takes scale space as a cue to learn higher-order intra-group object relations across scales and performs spatial hierarchical reasoning. Finally, a reasoning-based fusion module bridges the multimodal semantic gap: on top of cross-attention, training objectives such as self-supervised paradigms, contrastive learning methods, and image-text matching adaptively align and fuse the multimodal features and assist in predicting the final answer. Experimental results show that the proposed model has significant advantages on two public RSVQA datasets.

16.
包希港  周春来  肖克晶  覃飙 《软件学报》2021,32(8):2522-2544
Visual question answering (VQA) lies at the intersection of computer vision and natural language processing and has received wide attention in recent years. In a VQA task, an algorithm must answer questions about a specific image (or video). Since the first VQA dataset was released in 2014, several large-scale datasets have followed within five years, along with a large number of algorithms built on them. Existing surveys mainly summarize the development of the VQA task, but recent studies have found that VQA models depend heavily on language bias and dataset distributions; in particular, since the release of the VQA-CP dataset, the performance of many models has dropped sharply. This survey details the algorithms proposed and datasets released in recent years, with particular attention to research on strengthening robustness. It categorizes and summarizes VQA algorithms, describing their motivations, details, and limitations, and finally discusses the challenges and outlook of the VQA task.

17.
李冠彬  张锐斐  刘梦梦  刘劲  林倞 《软件学报》2023,34(12):5905-5920
Video captioning aims to automatically generate rich textual descriptions for videos and has attracted wide research interest in recent years. An accurate and fine-grained captioning method requires not only a global understanding of the video but also the local spatial and temporal features of specific salient objects; how to model a better video feature representation has long been a focus and difficulty of video captioning work. Moreover, most existing work treats a sentence as a chain structure and caption generation as a word-sequence generation process, ignoring the sentence's semantic structure, which makes it hard to handle and optimize complex descriptions and the logical errors that long sentences tend to introduce. To solve these problems, a novel language-structure-guided, interpretable video semantic captioning method is proposed, which fully considers local object information and the sentence's semantic structure through an attention-based structured tubelet localization mechanism. Combined with the sentence's syntactic parse tree, the method adaptively incorporates the spatiotemporal features corresponding to the textual content, further improving caption generation. Experimental results on the mainstream video captioning benchmarks MSVD and MSR-VTT show that the proposed method achieves state-of-the-art results on most evaluation metrics.

18.
姚暄  高君宇  徐常胜 《软件学报》2023,34(5):2083-2100
Video question answering is a cross-modal understanding task: given a video and a related question, the answer must be produced through the interaction of semantic information across modalities. In recent years, graph neural networks have made notable progress on this task thanks to their strength in cross-modal information fusion and reasoning. However, most existing graph-network methods suffer from inherent over-fitting or over-smoothing and from weak robustness and generalization, which keeps video QA performance from improving further. Given the effectiveness and robustness of self-supervised contrastive learning in pretraining, this work applies graph data augmentation to video QA and proposes GMC, a self-supervised contrastive-learning framework for graph networks. The framework uses two augmentation operations, on nodes and on edges, to generate dissimilar subsamples, and improves the accuracy and robustness of the video QA model by raising the consistency between the prediction distributions of the original samples and the generated subsamples. Experiments on public video QA datasets against state-of-the-art video QA models and different GMC variants verify the framework's effectiveness.
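The recipe above (perturb the graph, then pull the prediction distributions of the original and augmented samples together) can be sketched generically. The edge-dropping augmentation and symmetric-KL consistency below are common illustrative choices, not GMC's specific operations or loss.

```python
import numpy as np

def drop_edges(adj, rate, rng):
    """Graph data augmentation: randomly drop edges of an undirected graph."""
    keep = rng.random(adj.shape) >= rate
    keep = np.triu(keep, 1)   # decide per undirected edge...
    keep = keep | keep.T      # ...then mirror so the matrix stays symmetric
    return adj * keep

def consistency_loss(p, q, eps=1e-12):
    """Symmetric KL between the prediction distributions of the original
    and augmented graphs; minimizing it enforces consistency."""
    p, q = p + eps, q + eps
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)

rng = np.random.default_rng(3)
adj = (rng.random((5, 5)) > 0.5).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T        # undirected, no self-loops
aug = drop_edges(adj, rate=0.3, rng=rng)
print(consistency_loss(np.array([0.7, 0.3]), np.array([0.7, 0.3])))  # 0.0
```

In training, `p` and `q` would be the answer distributions produced by the QA model on the original and augmented graphs, and this loss would be added to the supervised objective.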

19.
20.
Image question answering based on the fusion of visual and textual features has become a hot research topic in automatic question answering. Most existing models mine the correlations between the image and the question through attention mechanisms, ignoring the correlations among image regions and among question words within the same modality and from different views. To address this, a multi-path semantic graph network model for image question answering (MSGN) is proposed, which mines the semantic correlations between image and question from multiple views. MSGN uses a graph neural network to mine fine-grained intra- and inter-modality correlations between image regions and question words, thereby improving answer prediction accuracy. Experimental results on public image QA datasets show that mining image-question semantic correlations from multiple views improves answer-prediction performance.

