Similar Literature
20 similar documents found (search time: 31 ms)
1.
Xiao Shaoning, Li Yimeng, Ye Yunan, Chen Long, Pu Shiliang, Zhao Zhou, Shao Jian, Xiao Jun. Neural Processing Letters, 2020, 52(2): 993-1003

This work addresses the problem of video question answering (VideoQA) with a novel model and a new open-ended VideoQA dataset. VideoQA is a challenging task in visual information retrieval that aims to generate an answer according to the video content and the question; ultimately, it is a video understanding task, and efficiently combining multi-grained representations is the key to understanding a video. Existing works mostly tackle the problem with overall frame-level visual understanding, which neglects the finer-grained and temporal information inside the video, or simply combine the multi-grained representations by concatenation or addition. We therefore propose a multi-granularity temporal attention network that enables searching for the specific frames in a video that are holistically and locally related to the answer. We first learn mutual attention representations of the multi-grained visual content and the question; the mutually attended features are then combined hierarchically using a double-layer LSTM to generate the answer. Furthermore, we compare several different multi-grained fusion configurations to demonstrate the advantage of this hierarchical architecture. The effectiveness of our model is demonstrated on a large-scale open-ended video question answering dataset built on the ActivityNet dataset.
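As a rough illustration of the mechanism this abstract describes, the following PyTorch sketch applies question-guided attention to frame-level and clip-level features and fuses the attended results hierarchically with a two-layer LSTM. Module names, dimensions, and the fusion order are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MutualAttentionFusion(nn.Module):
    """Question-guided attention over multi-grained video features, fused by a 2-layer LSTM."""

    def __init__(self, dim=512):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Hierarchical fusion: a two-layer LSTM reads the attended multi-grained sequence.
        self.fuse = nn.LSTM(dim, dim, num_layers=2, batch_first=True)

    def attend(self, visual, question):
        # visual: (B, T, D), question: (B, D) -> question-guided weights over time.
        scores = torch.bmm(self.v_proj(visual), self.q_proj(question).unsqueeze(2))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * visual).sum(dim=1)  # (B, D)

    def forward(self, frame_feats, clip_feats, question):
        frame_ctx = self.attend(frame_feats, question)  # fine-grained context
        clip_ctx = self.attend(clip_feats, question)    # coarser temporal context
        seq = torch.stack([frame_ctx, clip_ctx, question], dim=1)  # (B, 3, D)
        _, (h, _) = self.fuse(seq)
        return h[-1]  # joint representation used to decode the answer


# Usage with random tensors standing in for real features.
model = MutualAttentionFusion(dim=512)
out = model(torch.randn(2, 40, 512), torch.randn(2, 8, 512), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 512])
```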


2.
Large amounts of fragmented information with disordered structure and one-sided content are stored, highly dispersed, across different data sources in modalities such as text, images, videos, and web pages. Existing research builds visual question answering (VQA) systems to extract, represent, and understand such multimodal fragmented information. Given a question related to an image, the visual question answering task infers the corresponding answer. Against this background, and with the goal of designing a complete framework and algorithms for question answering over fragmented image information, this work focuses on models and algorithms for image feature extraction, question text feature extraction, multimodal feature fusion, and answer reasoning. Deep neural network models are built to extract features representing the image and the question, and an attention mechanism combined with variational inference is used to associate the two modalities and infer the answer. Experimental results show that the model can effectively extract and understand multimodal fragmented information and improves the accuracy of the visual question answering task.
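One possible reading of the attention-plus-variational-inference pipeline above is sketched below in PyTorch: the question attends over image region features, and a latent answer representation is sampled with the reparameterization trick. All module names, dimensions, and the answer vocabulary size are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn


class AttentiveVariationalVQA(nn.Module):
    """Question-guided attention over regions plus a variational latent for answer reasoning."""

    def __init__(self, dim=512, num_answers=1000):
        super().__init__()
        self.att = nn.Linear(dim * 2, 1)          # scores each (region, question) pair
        self.to_mu = nn.Linear(dim * 2, dim)      # posterior mean
        self.to_logvar = nn.Linear(dim * 2, dim)  # posterior log-variance
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, regions, question):
        # regions: (B, R, D) image region features, question: (B, D) question encoding.
        q = question.unsqueeze(1).expand_as(regions)
        weights = torch.softmax(self.att(torch.cat([regions, q], dim=-1)), dim=1)  # (B, R, 1)
        visual = (weights * regions).sum(dim=1)                                    # attended image vector
        fused = torch.cat([visual, question], dim=-1)
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer for training
        return self.classifier(z), kl


model = AttentiveVariationalVQA()
logits, kl = model(torch.randn(2, 36, 512), torch.randn(2, 512))
print(logits.shape, float(kl))  # torch.Size([2, 1000]) ...
```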

3.
Objective: Most existing visual question answering models adopt a top-down visual attention mechanism and process image content uniformly without weighting, so they cannot represent image information well; moreover, lacking a long-term memory module, they cannot store information over long periods, so useful information is lost during answer reasoning and wrong answers are predicted. To address this, a visual question answering model combining a bottom-up attention mechanism and a memory network is proposed, which improves accuracy by strengthening the representation and memorization of image content. Method: A pre-trained object detection model extracts the objects and salient regions of the image as image features, which are fed together with the question representation into a memory network; the memory network retrieves useful information from the image features according to the question and iteratively updates, combining the image information and the question representation, to generate a final information representation; finally, the memorized information and the question representation are fused to infer the correct answer. Results: Comparison and ablation experiments against mainstream algorithms on the public large-scale VQA (visual question answering) v2.0 dataset show that the proposed model significantly improves accuracy, reaching an overall accuracy of 64.0%. Compared with the MCB (multimodal compact bilinear) algorithm, overall accuracy improves by 1.7%; compared with the strong VQA machine algorithm, overall accuracy improves by 1%, with accuracy on yes/no, counting, and other question types improving by 1.1%, 3.4%, and 0.6%, respectively. The overall performance is better than that of the other compared algorithms, verifying the effectiveness of the proposed method. Conclusion: The proposed model, combining a bottom-up attention mechanism and a memory network, is more consistent with human visual attention, reduces information loss during answer reasoning, and effectively improves the accuracy of visual question answering.
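A minimal sketch of the iterative, question-driven memory retrieval described above, in PyTorch; the number of hops, dimensions, and module names are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MemoryHopVQA(nn.Module):
    """Multi-hop memory reading over detected-object features, with a question-updated query."""

    def __init__(self, dim=512, hops=3, num_answers=1000):
        super().__init__()
        self.hops = hops
        self.update = nn.Linear(dim * 2, dim)        # combines memory read-out with the current query
        self.classifier = nn.Linear(dim * 2, num_answers)

    def forward(self, objects, question):
        # objects: (B, N, D) features of detected objects/salient regions, question: (B, D).
        query = question
        for _ in range(self.hops):
            scores = torch.bmm(objects, query.unsqueeze(2))          # (B, N, 1) relevance to current query
            read = (torch.softmax(scores, dim=1) * objects).sum(1)   # memory read-out
            query = torch.relu(self.update(torch.cat([read, query], dim=-1)))  # iterative update
        return self.classifier(torch.cat([query, question], dim=-1))


model = MemoryHopVQA()
print(model(torch.randn(2, 36, 512), torch.randn(2, 512)).shape)  # torch.Size([2, 1000])
```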

4.
Video question answering (Video QA) involves a thorough understanding of video content and question language, as well as grounding the textual semantics in the visual content of videos. Thus, to answer questions more accurately, not only should each semantic entity be associated with a certain visual instance in the video frames, but the action or event in the question should also be localized to a corresponding temporal slot. This makes Video QA a more challenging task that requires reasoning over correlations between instances across temporal frames. In this paper, we propose an instance-sequence reasoning network for video question answering with instance grounding and temporal localization. In our model, both visual instances and textual representations are first embedded as graph nodes, which benefits intra- and inter-modality integration. Then, we propose graph causal convolution (GCC) on the graph-structured sequence with a large receptive field to capture more causal connections, which is vital for visual grounding and instance-sequence reasoning. Finally, we evaluate our model on the TVQA+ dataset, which contains ground truth for instance grounding and temporal localization, on three other Video QA datasets, and on three multimodal language processing datasets. Extensive experiments demonstrate the effectiveness and generalization of the proposed method; specifically, our method outperforms state-of-the-art methods on these benchmarks.

5.
Intelligent question answering systems in the food-safety domain aim to give quick and concise feedback to users' natural language questions about food safety. The main technical challenges lie in semantic analysis and answer sentence representation, in particular how to close the lexical gap between questions and answers to strengthen question-answer matching, and how to capture the right core words to strengthen sentence representation. Although phrase-level approaches and numerous attention models have achieved some performance gains, attention-based frameworks have not paid enough attention to positional information. To address these issues, a synonym-to-headword substitution mechanism (mapping ordinary words to core words) is proposed by combining the Cilin thesaurus with word2vec, normalizing the semantic representation. In addition, inspired by how positional context improves information retrieval, it is assumed that if a word from the question (called a question word) appears in the answer sentence, words near the question word should be given higher weight than words far from it. Based on this assumption, a position attention mechanism built on a bidirectional LSTM model (BLSTM-PA) is proposed, which gives higher attention to the text near question words in the answer sentence. Using a food-safety question answering system as the platform for semantic analysis verification and simulation, comparative experiments on a food-safety domain dataset (FS-QA) show that, in terms of MAP and MRR, BLSTM-PA improves over an RNN model with conventional attention by 5.96%, demonstrating its good performance; the food-safety question answering system that integrates the proposed QA model is also significantly improved.
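The position assumption above (answer-sentence words close to a question word get more weight) can be illustrated with a few lines of plain Python; the Gaussian decay and its bandwidth are illustrative choices, not the paper's exact formulation.

```python
import math


def position_weights(question_tokens, answer_tokens, sigma=2.0):
    """Weight each answer token by its distance to the nearest occurrence of a question word."""
    q_positions = [i for i, tok in enumerate(answer_tokens) if tok in set(question_tokens)]
    weights = []
    for i, _ in enumerate(answer_tokens):
        if not q_positions:
            weights.append(1.0)                      # no question word present: uniform weights
            continue
        d = min(abs(i - p) for p in q_positions)     # distance to the nearest question word
        weights.append(math.exp(-(d * d) / (2 * sigma * sigma)))
    total = sum(weights)
    return [w / total for w in weights]              # normalized attention-like distribution


question = ["is", "this", "milk", "safe"]
answer = ["the", "milk", "was", "tested", "and", "found", "safe", "for", "drinking"]
print([round(w, 3) for w in position_weights(question, answer)])
```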

6.
In this paper we propose a novel biased random sampling strategy for image representation in Bag-of-Words models. We evaluate its impact on feature properties and ranking quality for a set of semantic concepts, and show that it improves classifier performance in image annotation tasks and increases the correlation between kernels and labels. As a second contribution, we propose a method called Output Kernel Multi-Task Learning (MTL) to improve ranking performance by transferring information between classes. The main advantages of output kernel MTL are that it permits asymmetric information transfer between tasks and scales to training sets of several thousand images. We give a theoretical interpretation of the method and show that the learned contributions of source tasks to target tasks are semantically consistent. Both strategies are evaluated on the ImageCLEF PhotoAnnotation dataset. Our best visual result, which used the MTL method, was ranked first by mean Average Precision (mAP) among the purely visual submissions in the ImageCLEF 2011 PhotoAnnotation Challenge, and our multi-modal submission achieved the first rank by mAP among all submissions in the same competition.

7.
Question answering (QA) over a knowledge base (KB) aims to provide a structured answer from the knowledge base to a natural language question. A key step in this task is representing and understanding the natural language query. In this paper, we propose to use tree-structured neural networks built on the constituency tree to model natural language queries. We identify an interesting observation about the constituency tree: different constituents have their own semantic characteristics and may be suited to different subtasks in a QA system. Based on this observation, we incorporate type information as an auxiliary supervision signal to improve QA performance; we call our approach type-aware QA. We jointly characterize both the answer and its answer type in a unified neural network model with an attention mechanism. Instead of simply using the root representation, we represent the query by combining the representations of different constituents using task-specific attention weights. Extensive experiments on public datasets demonstrate the effectiveness of the proposed model. More specifically, the learned attention weights are quite useful for understanding the query, and the representations produced for intermediate nodes can be used to analyze the effectiveness of components in a QA system.
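A hedged sketch of the two ideas in this abstract: combining constituent representations with task-specific attention instead of using only the root, and adding answer-type prediction as an auxiliary supervision signal. Dimensions, the tree encoder, and the number of answer types are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TypeAwareQueryEncoder(nn.Module):
    """Attention-weighted pooling over constituent embeddings with an auxiliary answer-type head."""

    def __init__(self, dim=300, num_types=10):
        super().__init__()
        self.att = nn.Linear(dim, 1)                 # task-specific scoring of each constituent
        self.type_head = nn.Linear(dim, num_types)   # auxiliary supervision: predict the answer type

    def forward(self, constituents):
        # constituents: (B, C, D) embeddings of constituency-tree nodes (e.g., from a tree-structured encoder).
        weights = torch.softmax(self.att(constituents), dim=1)   # (B, C, 1)
        query = (weights * constituents).sum(dim=1)              # query representation, not just the root
        return query, self.type_head(query), weights.squeeze(-1)


enc = TypeAwareQueryEncoder()
query, type_logits, att = enc(torch.randn(4, 12, 300))
print(query.shape, type_logits.shape, att.shape)  # (4, 300) (4, 10) (4, 12)
```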

8.
To address the high training complexity and slow inference that current mainstream visual question answering (VQA) approaches face when using region features as the image representation, a composite vision-linguistic convnet (CVlCN) is proposed to represent images in the VQA task. The method represents the image features and the question semantics as composite vision-text features through composite learning, and then computes attention distributions over the composite features along the spatial and channel dimensions to selectively retain visual information relevant to the question semantics. Test results on the VQA-v2 dataset show that the method clearly improves accuracy on the VQA task, reaching an overall accuracy of 64.4%, with lower computational complexity and faster inference.
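A hedged sketch of "attention along the spatial and channel dimensions" over a fused image-question feature map, in the spirit of squeeze-excitation plus spatial gating; the fusion step and all dimensions are illustrative assumptions rather than the CVlCN design.

```python
import torch
import torch.nn as nn


class SpatialChannelGate(nn.Module):
    """Channel and spatial attention applied to a composite vision-text feature map."""

    def __init__(self, channels=256, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, fused):
        # fused: (B, C, H, W) composite feature map built from image features and question semantics.
        x = fused * self.channel_gate(fused)      # keep question-relevant channels
        return x * self.spatial_gate(x)           # keep question-relevant spatial positions


gate = SpatialChannelGate()
print(gate(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 256, 14, 14])
```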

9.
Objective: Existing research on visual question answering models mainly starts from attention mechanisms and multimodal fusion; it fails to explicitly model the semantic relations between objects in the image scene and rarely highlights the spatial positional relations of objects, resulting in weak spatial relation reasoning. For visual question answering problems that require spatial relation reasoning, this paper proposes to model the image in a structured way using the spatial relation attributes between visual objects, and builds a question-guided spatial relation graph reasoning model for visual question answering. Method: With saliency attention, Faster R-CNN (region-based convolutional neural network) is used to extract the salient visual objects and visual features in the image; the visual objects and their spatial relations are structured into a spatial relation graph; question-guided focused attention is then used for question-based spatial relation reasoning. The focused attention is divided into node attention and edge attention, used respectively to discover the visual objects and spatial relations relevant to the question; the node and edge attention weights are used to construct a gated graph reasoning network, which, through its message-passing mechanism and controlled aggregation of feature information, obtains deep interaction information between nodes and learns spatially aware visual feature representations, achieving question-based spatial relation reasoning; the spatially aware image features and the question features are then fused multimodally to predict the correct answer. Results: The model is evaluated on the VQA (visual question answering) v2...
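A compact sketch of the question-guided graph reasoning stage described above: node and edge attention scores gate the messages passed along the spatial relation graph, and a GRU cell updates the node states. Everything here (dimensions, a single reasoning step, the scoring functions) is a simplified assumption for illustration.

```python
import torch
import torch.nn as nn


class GatedSpatialGraphStep(nn.Module):
    """One question-guided message-passing step over a spatial relation graph."""

    def __init__(self, dim=512):
        super().__init__()
        self.node_att = nn.Linear(dim * 2, 1)   # relevance of each object node to the question
        self.edge_att = nn.Linear(dim * 2, 1)   # relevance of each spatial relation (edge) to the question
        self.gru = nn.GRUCell(dim, dim)         # gated update of node states

    def forward(self, nodes, edge_feats, adj, question):
        # nodes: (N, D), edge_feats: (N, N, D), adj: (N, N) 0/1 spatial-relation mask, question: (D,).
        q = question.unsqueeze(0)
        n_att = torch.sigmoid(self.node_att(torch.cat([nodes, q.expand_as(nodes)], dim=-1)))   # (N, 1)
        qe = question.view(1, 1, -1).expand_as(edge_feats)
        e_att = torch.sigmoid(self.edge_att(torch.cat([edge_feats, qe], dim=-1))).squeeze(-1)  # (N, N)
        gate = adj * e_att                       # keep only question-relevant relations
        messages = gate @ (n_att * nodes)        # aggregate gated neighbor information
        return self.gru(messages, nodes)         # spatially aware node states


step = GatedSpatialGraphStep()
N, D = 5, 512
out = step(torch.randn(N, D), torch.randn(N, N, D), (torch.rand(N, N) > 0.5).float(), torch.randn(D))
print(out.shape)  # torch.Size([5, 512])
```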

10.
In this paper, we consider the problem of buffer space allocation for a tandem production line with unreliable machines. This problem has various formulations, all aiming to answer the question: how much buffer storage should be allocated between the processing stations? Many authors use a knapsack-type formulation of this problem; we investigate the problem under a broader statement, in which the criterion depends on the average steady-state production rate of the line and the buffer equipment acquisition cost. We evaluate the black-box complexity of this problem and propose a hybrid optimization algorithm (HBBA) combining the genetic and branch-and-bound approaches. HBBA is highly efficient in terms of computational time and uses a Markov model aggregation technique to evaluate the objective function; nevertheless, it is more general and can be used with other production rate evaluation techniques.
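A toy sketch of the evolutionary half of such a hybrid: a genetic search over integer buffer sizes whose fitness trades production rate against acquisition cost. The throughput model below is a crude placeholder with diminishing returns per buffer slot, not the Markov aggregation technique used in the paper, and all constants are illustrative.

```python
import random

STATIONS = 5                 # number of buffers between six machines (illustrative)
MAX_BUFFER = 10
COST_PER_SLOT = 0.02


def production_rate(buffers):
    # Placeholder throughput model; a real implementation would plug in a Markov-chain
    # aggregation or a simulation of the unreliable line.
    return min(1.0 - 1.0 / (2.0 + b) for b in buffers)


def fitness(buffers):
    return production_rate(buffers) - COST_PER_SLOT * sum(buffers)


def genetic_buffer_allocation(pop_size=40, generations=100, mutation_rate=0.2):
    population = [[random.randint(0, MAX_BUFFER) for _ in range(STATIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, STATIONS - 1)
            child = a[:cut] + b[cut:]                           # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(STATIONS)] = random.randint(0, MAX_BUFFER)
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return best, fitness(best)


random.seed(0)
print(genetic_buffer_allocation())
```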

11.
This paper discusses how to use language knowledge resources to help machines perform semantic understanding and common-sense reasoning. It first points out that humans live in a world of common sense and meaning, so artificial intelligence must understand the meaning of natural language and be able to perform common-sense reasoning on that basis. It then briefly reviews the respective strengths and weaknesses of the knowledge-based and statistics-based approaches to natural language processing, and argues that statistical methods and deep learning that completely bypass knowledge cannot truly understand concepts and language. Through concrete cases, the paper shows that the《实词信息词典》(Content Word Information Dictionary) already provides the semantic role relations and syntactic configuration information of lexical items; adding this linguistic knowledge to knowledge graphs and content computing can give artificial intelligence understanding and explanation, yielding a form of explainable AI. Because qualia roles describe encyclopedic knowledge about the entities that nouns refer to, they can be used to answer common-sense questions about what a thing is (formal role), what parts it has (constitutive role), what it is made of (material), how it came into being (agentive role), and what it is used for (telic role).

12.
Are Edges Incomplete?
We address the problem of computing a general-purpose early visual representation that satisfies two criteria. 1) Explicitness: to be more useful than the original pixel array, the representation must take a significant step toward making important image structure explicit. 2) Completeness: to support a diverse set of high-level tasks, the representation must not discard information of potential perceptual relevance. The most prevalent representation in image processing and computer vision that satisfies the completeness criterion is the wavelet code. In this paper, we propose a very different code which represents the location of each edge and the magnitude and blur scale of the underlying intensity change. By making edge structure explicit, we argue that this representation better satisfies the first criterion than wavelet codes do. To address the second criterion, we study how much visual information is lost in the representation. We report a novel method for inverting the edge code to reconstruct a perceptually accurate estimate of the original image, and thus demonstrate that the proposed representation embodies virtually all of the perceptually relevant information contained in a natural image. This result bears on recent claims that edge representations do not contain all of the information needed for higher-level tasks.
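A rough NumPy sketch of building such an edge code from a grayscale image: the edge location and magnitude come from the local gradient, and the blur scale is taken as the Gaussian smoothing scale at which the scale-normalized gradient response peaks (a standard scale-space heuristic, not necessarily the authors' estimator). Thresholds and scales are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def edge_code(image, scales=(1.0, 2.0, 4.0), threshold=0.1):
    """Return (row, col, magnitude, blur_scale) records for edge pixels of a grayscale image."""
    image = image.astype(float)
    best_mag = np.zeros_like(image)
    best_scale = np.zeros_like(image)
    for s in scales:
        smoothed = gaussian_filter(image, sigma=s)
        gy, gx = np.gradient(smoothed)
        mag = s * np.hypot(gx, gy)              # scale-normalized gradient magnitude
        better = mag > best_mag
        best_mag[better] = mag[better]
        best_scale[better] = s
    rows, cols = np.nonzero(best_mag > threshold * best_mag.max())
    return [(int(r), int(c), float(best_mag[r, c]), float(best_scale[r, c])) for r, c in zip(rows, cols)]


# Synthetic blurred step edge as a quick sanity check.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img = gaussian_filter(img, sigma=2.0)
records = edge_code(img)
print(len(records), records[0])
```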

13.
包希港, 周春来, 肖克晶, 覃飙. 软件学报 (Journal of Software), 2021, 32(8): 2522-2544
Visual question answering lies at the intersection of computer vision and natural language processing and has received wide attention in recent years. In the visual question answering task, an algorithm must answer questions about a specific image (or video). Since the first visual question answering dataset was released in 2014, several large-scale datasets have been released over the past five years, and a large number of algorithms have been proposed on top of them. Existing surveys mainly summarize the development of the visual question answering task, but in recent years, studies have found...

14.
Fine-grained image classification generally focuses only on local visual information in the image, but in some problems the text appearing in local image regions directly helps the classification result, and extracting the semantic information of this text can further improve fine-grained classification. Considering both the visual information of the image and its local text information, we propose an end-to-end classification model for fine-grained image classification. A deep convolutional neural network is used to obtain the visual features of the image, while a proposed end-to-end text recognition network extracts the textual information; a correlation computation module then merges the visual and text features, which are fed into a classification network. The method is evaluated for fine-grained image classification on the public Con-Text dataset, and the end-to-end text recognition network is also validated on the SVT dataset; in both cases it achieves better results than previous methods.

15.
16.
Remote sensing visual question answering (RSVQA) aims to extract scientific knowledge from remote sensing images. In recent years, many methods have emerged to bridge the semantic gap between remote sensing visual information and natural language. However, current methods only consider the alignment and fusion of multimodal information; they neither deeply mine the multi-scale features of objects in remote sensing images and their spatial position information, nor study the modeling of and reasoning over scale features, so answer prediction is not comprehensive or accurate enough. To address these problems, this paper proposes a multi-scale guided fusion inference network (MGFIN) designed to strengthen the visual-spatial reasoning ability of RSVQA systems. First, a Swin Transformer-based multi-scale visual representation module is designed to encode multi-scale visual features embedded with spatial position information. Second, guided by language cues, a multi-scale relation reasoning module takes scale space as a cue to learn higher-order intra-group object relations across scales and performs spatial hierarchical reasoning. Finally, a reasoning-based fusion module is designed to bridge the multimodal semantic gap; on top of cross-attention, training objectives such as a self-supervised paradigm, contrastive learning, and an image-text matching mechanism are used to adaptively align and fuse the multimodal features and to assist in predicting the final answer. Experimental results show that the proposed model has a clear advantage on two public RSVQA datasets.
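A minimal PyTorch sketch of language-guided fusion over multi-scale visual tokens like that described above, using a single cross-attention layer; the backbone, dimensions, and number of scales are assumptions, and the paper's self-supervised, contrastive, and image-text-matching objectives are omitted.

```python
import torch
import torch.nn as nn


class LanguageGuidedMultiScaleFusion(nn.Module):
    """Question tokens attend over concatenated multi-scale visual tokens."""

    def __init__(self, dim=256, heads=8, num_answers=100):
        super().__init__()
        self.cross_att = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, scale_feats, question_tokens):
        # scale_feats: list of (B, N_i, D) visual tokens, one tensor per scale (e.g., backbone stages).
        # question_tokens: (B, L, D) encoded question.
        visual = torch.cat(scale_feats, dim=1)                       # tokens from all scales
        fused, _ = self.cross_att(question_tokens, visual, visual)   # language-guided cross-attention
        return self.classifier(fused.mean(dim=1))                    # pooled joint representation -> answer


model = LanguageGuidedMultiScaleFusion()
feats = [torch.randn(2, n, 256) for n in (64, 16, 4)]  # three scales with decreasing resolution
print(model(feats, torch.randn(2, 12, 256)).shape)     # torch.Size([2, 100])
```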

17.
As a cross-disciplinary research task, multimodal depression detection has attracted growing attention from researchers in natural language processing, computer vision, mental health analysis, and related fields. Existing work mainly focuses on detecting depression from user-generated social network data. However, because the volume of social network data is usually large, existing methods fall short in capturing long-range dependency information (i.e., global information). Therefore, how to obtain users' global information to...

18.
An Image Retrieval System Based on Deep Learning
The key technologies in a content-based image retrieval system are obtaining effective image features and the similarity matching strategy. In the past, content-based image retrieval systems mainly used low-level visual features and could not achieve satisfactory retrieval results, so despite considerable effort, content-based image retrieval remains a challenge in computer vision. The biggest problem in content-based image retrieval is the "semantic gap": the difference between the similarity a machine derives from low-level visual features and the similarity a human perceives from high-level semantics. Traditional content-based image retrieval systems learn image features only from low-level visual features and cannot effectively bridge this gap. In recent years, the rapid development of deep learning has offered hope. Deep learning originates from research on artificial neural networks; by composing low-level features into more abstract high-level representations of attribute categories or features, it discovers the distributional regularities of data in a way other algorithms cannot. Inspired by the great success of deep learning in computer vision, speech recognition, natural language processing, image and video analysis, multimedia, and many other fields, this paper applies deep learning to content-based image retrieval to address the "semantic gap" problem.
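The retrieval side of such a system is easy to illustrate once deep features are available: L2-normalize the descriptors and rank database images by cosine similarity to the query. The sketch below works on precomputed feature vectors; how they are produced (e.g., by the penultimate layer of a CNN) is left abstract, and the dimensions are illustrative.

```python
import numpy as np


def build_index(features):
    """L2-normalize database descriptors so that a dot product equals cosine similarity."""
    features = np.asarray(features, dtype=float)
    return features / np.linalg.norm(features, axis=1, keepdims=True)


def retrieve(index, query, top_k=5):
    """Rank database images by cosine similarity between deep features."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q
    order = np.argsort(-scores)[:top_k]
    return list(zip(order.tolist(), scores[order].tolist()))


# Demo with random vectors standing in for CNN features (e.g., 2048-d pooled activations).
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 2048))
index = build_index(db)
print(retrieve(index, db[42], top_k=3))  # image 42 should rank first with similarity 1.0
```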

19.

A question answering (QA) system for a reading comprehension task tries to answer a question by retrieving the needed phrase from the given content, and precise answering is its key requirement. Ambiguity arises when a negative question must be answered with a positive reply. Negation words change the polarity of a sentence, so their scope matters; this has paved the way for studying the role of negation in natural language processing (NLP) tasks, and handling these words is a major part of our proposed methodology. In this paper, we propose an algorithm to retrieve and replace the negation words present in the content and the query. A comparative study of word embeddings for these words is performed using various state-of-the-art methods. In earlier works, handling negation changed the semantics of the sentences; in this paper, we try to preserve the semantics through the proposed methodology. The updated content is embedded with a bidirectional long short-term memory (Bi-LSTM) network, which makes retrieving an answer for the question answering system easier. The proposed work is evaluated on the Stanford Negation and SQuAD datasets, achieving a precision of 96.2% in retrieving the answers given in the dataset.
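A toy version of the retrieve-and-replace step for negation words: scan the text for a negation cue followed by a word with a known antonym and substitute the antonym, so the polarity is expressed without the negation word. The cue list and antonym dictionary below are tiny illustrative stand-ins for the resources used in the paper, and the updated tokens would then be fed to the Bi-LSTM encoder.

```python
NEGATION_CUES = {"not", "no", "never", "n't"}
ANTONYMS = {"good": "bad", "safe": "unsafe", "present": "absent", "possible": "impossible"}  # toy lexicon


def replace_negations(tokens):
    """Replace 'not X' with the antonym of X when one is known, keeping the sentence polarity explicit."""
    out, i = [], 0
    while i < len(tokens):
        nxt = tokens[i + 1].lower() if i + 1 < len(tokens) else None
        if tokens[i].lower() in NEGATION_CUES and nxt in ANTONYMS:
            out.append(ANTONYMS[nxt])   # e.g., "not safe" -> "unsafe"
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out


print(replace_negations("the answer is not present in the passage".split()))
# ['the', 'answer', 'is', 'absent', 'in', 'the', 'passage']
```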


20.
Learning sparse feature representations is a useful instrument for solving unsupervised learning problems. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST, created by adding noise to the MNIST dataset, along with three labeled datasets formed by adding noise to the offline Bangla numeral database. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST, and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.
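One way to picture the quadtree side of this framework: recursively split an image region into four quadrants while the fraction of foreground pixels in the region stays above a threshold, and keep the leaves as a sparse description. The splitting rule and threshold below are a simplified guess at the idea, not the paper's exact probabilistic criterion.

```python
import numpy as np


def quadtree_leaves(image, threshold=0.1, min_size=4, r0=0, c0=0):
    """Recursively decompose a binary digit image; return leaf regions as (row, col, h, w, ink_fraction)."""
    h, w = image.shape
    ink = float(image.mean())                      # fraction of foreground pixels in this region
    if ink <= threshold or min(h, w) <= min_size:  # sparse enough or small enough: stop splitting
        return [(r0, c0, h, w, ink)]
    hh, hw = h // 2, w // 2
    leaves = []
    leaves += quadtree_leaves(image[:hh, :hw], threshold, min_size, r0, c0)
    leaves += quadtree_leaves(image[:hh, hw:], threshold, min_size, r0, c0 + hw)
    leaves += quadtree_leaves(image[hh:, :hw], threshold, min_size, r0 + hh, c0)
    leaves += quadtree_leaves(image[hh:, hw:], threshold, min_size, r0 + hh, c0 + hw)
    return leaves


# Synthetic 28x28 "digit": a filled rectangle, as in a binarized MNIST image.
img = np.zeros((28, 28))
img[8:20, 10:18] = 1.0
leaves = quadtree_leaves(img)
print(len(leaves), "leaf regions; sparse feature =", [round(l[4], 2) for l in leaves][:8])
```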
