Similar Documents
20 similar documents found (search time: 31 ms)
1.
The Transformer decoder (Transformer_decoder) model has been widely applied to image captioning, where the self-attention mechanism captures fine-grained features to achieve a deeper understanding of the image. This paper improves the self-attention mechanism in two ways: Vision-Boosted Attention (VBA) and Relative-Position Attention (RPA). VBA adds a VBA layer to the Transformer decoder and introduces visual features into the self-attention model as auxiliary information, guiding the decoder to generate descriptions that better match the image content. RPA builds on self-attention by introducing trainable relative-position parameters, adding word-to-word relative positional relationships to the input sequence. Experiments on COCO2014 show that both VBA and RPA improve image captioning, and a decoder that combines the two attention mechanisms produces better semantic descriptions.
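The abstract does not include code; the sketch below illustrates one common way trainable relative-position parameters can be added to self-attention (clipped relative offsets in the style of Shaw et al.). The class name, `max_rel_dist`, and the exact score combination are illustrative assumptions, not the RPA implementation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativePositionSelfAttention(nn.Module):
    """Single-head self-attention with trainable relative-position embeddings."""
    def __init__(self, d_model, max_rel_dist=16):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.max_rel_dist = max_rel_dist
        # One trainable vector per clipped relative offset in [-max_rel_dist, max_rel_dist].
        self.rel_emb = nn.Parameter(torch.randn(2 * max_rel_dist + 1, d_model) * 0.02)

    def forward(self, x):                              # x: (batch, seq_len, d_model)
        b, n, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Clipped relative offsets between every query/key position pair.
        pos = torch.arange(n, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist, self.max_rel_dist)
        rel_k = self.rel_emb[rel + self.max_rel_dist]  # (seq_len, seq_len, d_model)
        # Content-content score plus content-position score.
        scores = torch.einsum('bid,bjd->bij', q, k) + torch.einsum('bid,ijd->bij', q, rel_k)
        attn = F.softmax(scores / d ** 0.5, dim=-1)
        return torch.einsum('bij,bjd->bid', attn, v)
```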

2.
Thanks to advances in deep learning and the emergence of large-scale annotated image datasets, image caption generation, a task combining computer vision and natural language processing, has received wide attention. Inspired by neural machine translation, earlier work treats image captioning as a special translation task: an image is regarded as the source-side information and is translated, through an encoding-decoding process, into a natural-language sentence on the target side. Existing studies therefore adopt end-to-end neural network models and achieve good results. However, image captioning still faces many challenges, and one of the most notable is generating specific, concrete wording. A precise caption is usually tangible and specific, e.g., "Messi takes a penalty kick", whereas machine-generated captions tend to be shallow and generic, e.g., "a man is kicking a ball". To address this problem, this paper explores making captions more concrete, and in a pilot study focuses on recognizing and filling person-name entities in captions. Technically, the paper takes machine-generated captions as the object of processing, removes abstract person references (e.g., "a person", "a man", "he") or incorrect appellations, treats the resulting sentences with syntactic gaps as cloze questions, and applies reading-comprehension techniques targeted at "Who" questions. Specifically, the R-NET reading-comprehension model is used to extract and fill person-name entities in captions. In addition, person names are extracted from both the local information of the text accompanying the image and the global information of external links. Experimental results show that the method effectively improves caption quality, raising BLEU by 2.93%; the results also show that global information helps to find and fill the correct person-name entities.

3.
Image caption generation models describe the content of an image and the relations among its attributes in natural language. To address the low caption quality, insufficient extraction of features from the important parts of an image, and excessive complexity of existing models, this paper proposes an image captioning model based on the Convolutional Block Attention Module (CBAM). The model adopts an encoder-decoder structure: CBAM is inserted into the feature-extraction network Inception-v4, which serves as the encoder to extract the important feature information of the image; the features are then fed into a long short-term memory (LSTM) decoder to generate the caption. The model is trained and tested on the training and validation sets of MSCOCO2014, and its accuracy is evaluated with multiple metrics. Experimental results show that the improved model scores better than other models on the evaluation metrics, and the Model2 experiment extracts image features better and generates more accurate captions.
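CBAM is a published, widely reimplemented module; the sketch below shows a standard channel-then-spatial formulation of it, not the paper's exact code, and it omits how the module is wired into Inception-v4 before the LSTM decoder.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: a shared MLP applied to avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: a conv over the channel-wise avg and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, _, _ = x.shape
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                           self.mlp(x.amax(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca                                      # re-weight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa                                   # re-weight spatial positions
```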

4.
Li Yixin, Wu Xinran, Li Chen, Li Xiaoyan, Chen Haoyuan, Sun Changhao, Rahaman Md Mamunur, Yao Yudong, Zhang Yong, Jiang Tao. Applied Intelligence, 2022, 52(9): 9717-9738

Gastric Histopathology Image Classification (GHIC) tasks are usually weakly supervised learning tasks, and the images inevitably contain redundant information, so designing networks that can focus on discriminative features has become a popular research topic. In this paper, to perform GHIC tasks well and assist pathologists in clinical diagnosis, an intelligent Hierarchical Conditional Random Field based Attention Mechanism (HCRF-AM) model is proposed. The HCRF-AM model consists of an Attention Mechanism (AM) module and an Image Classification (IC) module. In the AM module, an HCRF model is built to extract attention regions. In the IC module, a Convolutional Neural Network (CNN) model is trained with the selected attention regions, and an algorithm called Classification Probability-based Ensemble Learning is then applied to obtain image-level results from the patch-level output of the CNN. In the experiments, a classification specificity of 96.67% is achieved on a gastric histopathology dataset with 700 images. The HCRF-AM model demonstrates high classification performance and shows its effectiveness and future potential in the GHIC field. In addition, the AM module and transfer learning allow the network to generalize well to image data other than histopathology, achieving accuracies of 95.5% and 95.8% on the IG02 and Oxford-IIIT Pet datasets.


5.
6.
Visual question answering (VQA) has attracted many researchers in recent years and could potentially be applied to remote consultation for COVID-19. Attention mechanisms provide an effective way of selectively using visual and question information in VQA. The attention methods of existing VQA models generally focus on the spatial dimension: attention is modeled as spatial probabilities that re-weight image region or word token features. However, feature-wise attention cannot be ignored, as image and question representations are organized in both spatial and feature-wise modes. Taking the question “What is the color of the woman’s hair” as an example, identifying the hair-color attribute feature is as important as focusing on the hair region. In this paper, we propose a novel neural network module named the multimodal feature-wise attention module (MulFA) to model feature-wise attention. Extensive experiments show that MulFA is capable of filtering representations for feature refinement and leads to improved performance. By introducing MulFA modules, we construct an effective union feature-wise and spatial co-attention network (UFSCAN) model for VQA. Our evaluation on two large-scale VQA datasets, VQA 1.0 and VQA 2.0, shows that UFSCAN achieves performance competitive with state-of-the-art models.
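As a rough illustration of feature-wise (as opposed to spatial) attention, the sketch below gates individual feature dimensions of one modality using a summary of the other. The module and parameter names are hypothetical, and the actual MulFA formulation in the paper may differ.

```python
import torch
import torch.nn as nn

class FeatureWiseGate(nn.Module):
    """Feature-wise attention: one modality produces a sigmoid gate over the
    feature dimensions of the other, complementing spatial attention."""
    def __init__(self, img_dim, ques_dim):
        super().__init__()
        self.img_gate = nn.Sequential(nn.Linear(ques_dim, img_dim), nn.Sigmoid())
        self.ques_gate = nn.Sequential(nn.Linear(img_dim, ques_dim), nn.Sigmoid())

    def forward(self, img_feats, ques_vec):
        # img_feats: (B, num_regions, img_dim); ques_vec: (B, ques_dim)
        gated_img = img_feats * self.img_gate(ques_vec).unsqueeze(1)
        gated_ques = ques_vec * self.ques_gate(img_feats.mean(dim=1))
        return gated_img, gated_ques
```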

7.
Image captioning aims to generate a description for a given image. To address the incomplete understanding of semantic information in existing methods, this paper proposes a multimodal Transformer model for image captioning. The model captures intra-modal and inter-modal interactions simultaneously in its attention modules, and further uses ELMo to obtain text features that contain contextual information, giving the model richer semantic input. The model can better understand and reason over complex multimodal information and generate more accurate natural-language descriptions. Extensive experiments on the Microsoft COCO dataset show clear gains over the baseline that uses bottom-up attention and an LSTM for captioning, with improvements of 0.7, 0.4, 0.9, 1.3, 0.6, and 4.9 percentage points on BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, and CIDEr-D, respectively.

8.
9.
Current research on image caption generation is largely limited to a single language (e.g., English), thanks to large-scale manually annotated corpora of images and their English captions. This paper explores generating Chinese image captions with English as the pivot language when no annotated Chinese resources are available. Specifically, with the help of neural machine translation, two approaches are proposed and compared: (1) a pipeline approach, which first generates an English caption for the image and then translates it into Chinese; (2) a pseudo-training-corpus approach, which first translates the English captions of the training images into Chinese to obtain a pseudo-annotated image-Chinese-caption corpus and then trains a Chinese caption generation model on it. For the second approach, word-based and character-based Chinese caption generation models are also compared. Experimental results show that the pseudo-training-corpus approach outperforms the pipeline approach, and the character-based model outperforms the word-based one, reaching a BLEU_4 score of 0.341.

10.
In recent years we have witnessed a surge of interest in subspace learning for image classification. However, previous methods lack high accuracy because they do not consider multiple features of the images. For instance, a color image can be represented by a set of visual features describing its color, texture, and shape. Building on the “Patch Alignment” framework, we developed a new subspace learning method, termed Semi-Supervised Multimodal Subspace Learning (SS-MMSL), in which different features from different modalities are encoded to build a meaningful subspace. In particular, the new method uses the discriminative information in the labeled data to construct local patches and aligns these patches to obtain the optimal low-dimensional subspace for each modality. For local patch construction, the data distribution revealed by unlabeled data is utilized to enhance the subspace learning. To find a low-dimensional subspace in which the distribution of each modality is sufficiently smooth, SS-MMSL adopts an alternating, iterative optimization algorithm to explore the complementary characteristics of different modalities. The iterative procedure reaches the global minimum of the criterion due to its strong convexity. Our experiments on image classification and cartoon retrieval demonstrate the validity of the proposed method.

11.
Most current image captioning models consist of an image encoder based on a Convolutional Neural Network (CNN) and a caption decoder based on a Recurrent Neural Network (RNN): the encoder extracts visual features, and the decoder generates the caption from those features through an attention mechanism. The problem with attention-based RNN decoders is that, although the decoder can model attention over the interaction between image features and the caption, it ignores self-attention over the interactions within the caption itself. This paper therefore proposes an image captioning model that combines the advantages of recurrent networks and self-attention networks: the self-attention model captures intra-modal and inter-modal interactions simultaneously within a unified attention region, while the inherent strengths of the recurrent network are retained. Experimental results on the MSCOCO dataset show that the CIDEr score improves from 1.135 to 1.166, demonstrating that the proposed method effectively improves image captioning performance.

12.
To address the early-saturation problem of the traditional long short-term memory (LSTM) network and generate more accurate descriptions for a given image, this paper proposes an arctangent-based LSTM (ITLSTM) model. First, a classical convolutional neural network extracts image features; then, the ITLSTM network generates the corresponding description; finally, the model is evaluated on the Flickr8K dataset and compared with several classical image captioning models such as Google NIC. Experimental results show that the proposed model effectively improves the accuracy of image caption generation.
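The key idea in this abstract is replacing the quickly saturating tanh with an arctangent squashing function. Below is a minimal sketch of an LSTM cell modified that way, assuming the substitution happens at the candidate and output squashing steps; it illustrates the idea, not the paper's ITLSTM code.

```python
import math
import torch
import torch.nn as nn

def arctan_squash(x):
    # Scaled arctangent: bounded in (-1, 1) like tanh but saturates more slowly.
    return (2.0 / math.pi) * torch.atan(x)

class ArctanLSTMCell(nn.Module):
    """LSTM cell with tanh replaced by a scaled arctangent squashing function."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.linear(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * arctan_squash(g)   # candidate value squashed with arctan
        h = o * arctan_squash(c)           # hidden output squashed with arctan
        return h, (h, c)
```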

13.
Most traditional video captioning models adopt an encoder-decoder framework: a convolutional neural network processes the video in the encoding stage, and a long short-term memory network generates the caption in the decoding stage. Based on the temporal correlation and multimodal nature of video, this paper proposes a hybrid model: a multimodal video captioning model with hard attention. In the encoding stage, different fusion models associate the video and audio modalities; in the decoding stage, a hard attention mechanism is added on top of the LSTM to generate the video description. On the MSR-VTT (Microsoft Research Video to Text) dataset, the hybrid model improves the machine-translation metrics by 0.2% to 3.8% over the baseline models. The experimental results indicate that the hard-attention multimodal hybrid model can generate accurate video captions.

14.
To address the poor performance of image caption generation across different scenes, a multi-scene attention image captioning algorithm that combines convolutional neural networks and prior knowledge is proposed. The algorithm generates visual semantic units with a convolutional neural network, recognizes and predicts the image scene with named entity recognition, uses the result to automatically adjust the key parameters of the self-attention mechanism and perform multi-scene attention computation, and finally inserts the resulting region encodings and semantic prior knowledge into the Transformer…

15.
Jia Xin, Wang Yunbo, Peng Yuxin, Chen Shengyong. Multimedia Tools and Applications, 2022, 81(15): 21349-21367

Transformer-based architectures have shown encouraging results in image captioning. They usually use self-attention based methods to establish the semantic association between objects in an image for caption prediction. However, when the appearance features of the candidate object and the query object show weak dependence, self-attention based methods struggle to capture the semantic association between them. In this paper, a Semantic Association Enhancement Transformer model is proposed to address this challenge. First, an Appearance-Geometry Multi-Head Attention is introduced to model a visual relationship by integrating the geometry and appearance features of the objects; the visual relationship characterizes the semantic association and relative position among the objects. Second, a Visual Relationship Improving module is presented to weigh the importance of the query object's appearance and geometry features to the modeled visual relationship. The visual relationship among different objects is then adaptively improved according to this importance, especially for objects with weak dependence on appearance features, thereby enhancing their semantic association. Extensive experiments on the MS COCO dataset demonstrate that the proposed method outperforms state-of-the-art methods.
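The sketch below shows one plausible, single-head reading of appearance-geometry attention: a content (appearance) score is combined with a score computed from pairwise bounding-box geometry, so objects with weak appearance dependence can still attend through geometry. The box layout (centre/width/height) and the log-ratio encoding are assumptions, and the paper's Visual Relationship Improving module is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceGeometryAttention(nn.Module):
    """Single-head attention whose scores combine appearance similarity with
    a learned score over pairwise bounding-box geometry."""
    def __init__(self, d_model, d_geo=64):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.geo_proj = nn.Sequential(nn.Linear(4, d_geo), nn.ReLU(), nn.Linear(d_geo, 1))

    def forward(self, feats, boxes):
        # feats: (B, N, d_model); boxes: (B, N, 4) as (cx, cy, w, h)
        q, k, v = self.q_proj(feats), self.k_proj(feats), self.v_proj(feats)
        app = torch.einsum('bid,bjd->bij', q, k) / feats.size(-1) ** 0.5
        cx, cy, w, h = boxes.unbind(-1)
        # Pairwise relative geometry: log-scaled offsets and size ratios.
        dx = torch.log(torch.abs(cx[:, :, None] - cx[:, None, :]) / w[:, :, None] + 1e-3)
        dy = torch.log(torch.abs(cy[:, :, None] - cy[:, None, :]) / h[:, :, None] + 1e-3)
        dw = torch.log(w[:, None, :] / w[:, :, None])
        dh = torch.log(h[:, None, :] / h[:, :, None])
        geo = self.geo_proj(torch.stack([dx, dy, dw, dh], dim=-1)).squeeze(-1)
        attn = F.softmax(app + geo, dim=-1)    # geometry acts as a learned additive bias
        return torch.einsum('bij,bjd->bid', attn, v)
```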


16.
Attention mechanisms have become a widely researched way to improve the performance of convolutional neural networks (CNNs). Most research focuses on designing channel-wise and spatial-wise attention modules but neglects the importance of the unique information carried by each feature, which is critical for deciding both “what” and “where” to focus. In this paper, a feature-wise attention module is proposed that assigns an attention weight to each feature of the input feature map. Specifically, the module is based on the well-known surround suppression from neuroscience and consists of two sub-modules: a Minus-Square-Add (MSA) operation and a group of learnable non-linear mapping functions. The MSA imitates surround suppression and defines an energy function that can be applied to each feature to measure its importance. The group of non-linear functions refines the energy calculated by the MSA into more reasonable values. With these two sub-modules, feature-wise attention is captured well. Meanwhile, owing to the simple structure and few parameters of the two sub-modules, the proposed module can easily be integrated into almost any CNN. To verify its performance and effectiveness, several experiments were conducted on the Cifar10, Cifar100, Cinic10, and Tiny-ImageNet datasets. The experimental results demonstrate that the proposed module is flexible and effective in improving the performance of CNNs.
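A hedged sketch of the Minus-Square-Add idea as described in the abstract: each feature's energy is obtained by subtracting a surrounding response (here, the spatial mean), squaring, and adding a small constant, and a learnable non-linear mapping turns that energy into a per-feature weight. The exact energy function and the family of mapping functions used in the paper may differ.

```python
import torch
import torch.nn as nn

class MSAFeatureAttention(nn.Module):
    """Feature-wise attention from a surround-suppression-style energy:
    minus (subtract surround mean), square, add (a small constant)."""
    def __init__(self):
        super().__init__()
        # Learnable non-linear mapping that rescales the raw energy.
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x, eps=1e-4):            # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3), keepdim=True)  # surround (spatial mean) response
        energy = (x - mu) ** 2 + eps           # minus, square, add
        weight = torch.sigmoid(self.alpha * energy + self.beta)
        return x * weight                      # per-feature re-weighting
```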

17.
To reduce network load, energy consumption, and latency in video sensor networks, a hierarchical image fusion method based on dynamic attention is proposed. The video surveillance area is logically partitioned through structured deployment of the network and region mapping between nodes. A dynamic attention model then optimizes the precision of locally important regions to achieve multi-quality image fusion. Experimental results demonstrate the effectiveness of the method.

18.
余娜, 刘彦, 魏雄炬, 万源. 《计算机应用》, 2022, 42(3): 844-853
To address the inability of existing RGB-D indoor scene semantic segmentation methods to effectively fuse multimodal features, an RGB-D indoor scene semantic segmentation network based on attention mechanisms and pyramid fusion, APFNet, is proposed, together with two new modules: an attention fusion module and a pyramid fusion module. The attention fusion module extracts attention weights for the RGB features and the depth features separately, making full use of the complementarity of the two kinds of features, so that the network…
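To illustrate the attention-fusion idea (each modality contributing complementary features through attention weights), here is a minimal channel-attention fusion sketch under assumed names; APFNet's actual attention fusion and pyramid fusion modules are not reproduced.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Each modality predicts channel-attention weights for its own features,
    and the re-weighted RGB and depth features are summed."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        def gate():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid())
        self.rgb_gate, self.depth_gate = gate(), gate()

    def forward(self, rgb_feat, depth_feat):   # both: (B, C, H, W)
        return rgb_feat * self.rgb_gate(rgb_feat) + \
               depth_feat * self.depth_gate(depth_feat)
```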

19.
Knowledge graph representation learning aims to map entities and relations into a low-dimensional dense vector space. Most existing models focus on learning the structural features of triples and ignore both the semantic information of entity relations within triples and the entity-description information outside triples, so their representational capacity is limited. To address this, a knowledge representation learning model that fuses multi-source information, BAGAT, is proposed. First, target entity nodes and neighbor nodes of triples are constructed from knowledge graph features, and a graph attention network (GAT) aggregates the semantic representation of the triple structure. Then, the BERT word-vector model is used to embed the entity description information. Finally, the two representations are mapped into the same vector space for joint knowledge representation learning. Experimental results show that BAGAT clearly outperforms other models: on the Hits@1 and Hits@10 metrics of the link prediction task on the public dataset FB15K-237, it improves over the translation model TransE by 25.9 and 22.0 percentage points, respectively, and over the graph neural network model KBGAT by 1.8 and 3.5 percentage points. This shows that a multi-source representation that fuses entity descriptions with the structural semantics of triples achieves stronger representation learning.
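A minimal sketch of the joint-representation step described above, assuming the structural (GAT-aggregated) embedding and the BERT description embedding are already computed: both are projected into the same vector space and combined. The gated fusion shown here is an illustrative assumption, not BAGAT's exact formulation.

```python
import torch
import torch.nn as nn

class JointEntityRepresentation(nn.Module):
    """Project a structural embedding and a description embedding into the
    same space and fuse them into one entity representation."""
    def __init__(self, struct_dim, desc_dim, joint_dim):
        super().__init__()
        self.struct_proj = nn.Linear(struct_dim, joint_dim)
        self.desc_proj = nn.Linear(desc_dim, joint_dim)
        self.gate = nn.Linear(2 * joint_dim, joint_dim)

    def forward(self, struct_emb, desc_emb):
        s = self.struct_proj(struct_emb)        # structural (GAT-style) view
        d = self.desc_proj(desc_emb)            # BERT description view
        g = torch.sigmoid(self.gate(torch.cat([s, d], dim=-1)))
        return g * s + (1 - g) * d              # gated joint representation
```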

20.