Similar Documents
 20 similar documents found (search time: 62 ms)
1.
A text summary should contain all the important information of the source text, yet summaries produced by traditional encoder-decoder summarization models tend to be of low accuracy. Exploiting the correlation between text classification and text summarization, this work proposes a multi-task learning summarization model. Abstract-level information learned from an auxiliary text-classification task is used to improve summary quality: the K-means clustering algorithm is used to construct Cluster-2, Cluster-10, and Cluster-20 text-classification datasets for training the classifier, the effect of training with different classification datasets on summarization performance is studied, and a discriminant method based on statistical distributions is used to comprehensively evaluate summary accuracy. Experimental results on the CNNDM test set show that, compared with a strong baseline, the model improves ROUGE-1, ROUGE-2, and ROUGE-L by 0.23, 0.17, and 0.31 percentage points respectively, producing more accurate summaries.
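A minimal sketch of the clustering step described above, assuming plain-text source documents and TF-IDF features (the abstract does not state which document representation is clustered); scikit-learn's KMeans stands in for the paper's K-means implementation.

```python
# Sketch: build Cluster-k pseudo-label classification datasets with K-means as an
# auxiliary task for summarization. TF-IDF document vectors are an assumption here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def build_cluster_dataset(documents, k):
    """Assign each document a pseudo class label by clustering into k groups."""
    vectorizer = TfidfVectorizer(max_features=50_000)
    features = vectorizer.fit_transform(documents)
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(features)
    return list(zip(documents, labels))  # (document, pseudo-label) pairs

docs = ["first news article ...", "second news article ...", "third news article ..."]
for k in (2, 10, 20):            # Cluster-2 / Cluster-10 / Cluster-20
    if k <= len(docs):           # k cannot exceed the number of documents
        dataset = build_cluster_dataset(docs, k)
```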

2.
胡天翔  谢睿  叶蔚  张世琨 《软件学报》2023,34(4):1695-1710
Code summarization helps developers understand code and reduces documentation effort by generating short natural-language descriptions of source code snippets. Recent work on code summarization mostly adopts deep learning models, most of which are trained on large datasets of independent code-summary pairs. Despite good results, these works generally ignore the project-level context of code snippets and summaries, on which developers rely heavily when writing documentation. To address this issue, this work studies project-level code summarization, a scenario more consistent with developer behavior and with how code summarization tools are actually deployed. A dataset for project-specific code summarization is created, containing 800k method-summary pairs together with their lifecycle information, which is used to build an accurate project context at any given moment. A novel deep learning approach is proposed that characterizes contextual semantics using highly relevant code snippets and their corresponding summaries, and integrates common knowledge learned from a large-scale cross-project dataset via transfer learning. Experimental results show that the project-level-context model not only achieves a significant performance improvement over general-purpose code summarization models, but also generates summaries that are more consistent within a given project.

3.
Source code comment generation aims to produce accurate natural-language comments for source code, helping developers understand and maintain it. Traditional approaches use information retrieval to generate code summaries, selecting terms from the input code or reusing the summaries of similar code fragments; recent approaches instead treat the task as machine translation and use encoder-decoder neural models to generate summaries of code fragments. Existing comment generation methods suffer from two main problems: on the one hand, neural methods favor high-frequency tokens in code fragments and tend to handle low-frequency tokens poorly; on the other hand, programming languages are highly structured, so code cannot simply be treated as sequential text without losing contextual structural information. To address the low-frequency-token problem, this paper proposes a retrieval-based neural machine translation approach that augments the neural model with similar code fragments retrieved from the training set; to learn structured semantic information of code fragments, it proposes a structure-guided Transformer that encodes code structural information through the attention mechanism. Experiments show that the model significantly outperforms state-of-the-art deep learning models for code comment generation in handling low-frequency tokens and structured semantics.
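A minimal sketch of the retrieval step, assuming TF-IDF vectors over code tokens and cosine similarity (the abstract does not specify the retrieval metric); the snippets and summaries below are illustrative placeholders.

```python
# Sketch: fetch the training snippet most similar to the query code, to augment the
# neural model. TF-IDF + cosine similarity are assumptions, not the paper's exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_code = [
    "public int add(int a, int b) { return a + b; }",
    "public int max(int a, int b) { return a > b ? a : b; }",
]
train_summaries = ["adds two integers", "returns the larger of two integers"]

vectorizer = TfidfVectorizer(token_pattern=r"\w+")
train_vecs = vectorizer.fit_transform(train_code)

def retrieve_similar(query_code):
    """Return the training snippet and summary most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query_code]), train_vecs)[0]
    best = sims.argmax()
    return train_code[best], train_summaries[best]

snippet, summary = retrieve_similar("public int sum(int x, int y) { return x + y; }")
```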

4.
Abstractive neural networks have made great progress in text summarization, with impressive results. However, because abstractive summarization is so flexible, the generated summaries can easily be unfaithful and even deviate from the semantic gist of the source document. To address this, this paper proposes two methods to improve summary faithfulness. (1) Since entities play an important role in summaries and usually come from the original document, the model is allowed to copy entities from the source document, ensuring that generated entities match those in the source and helping prevent inconsistent entities. (2) To better prevent the generated summary from semantically deviating from the source, key entities and key tokens are used during generation as guidance signals of two different granularities. Performance is evaluated with ROUGE on two widely used summarization datasets, CNNDM and XSum; the experimental results show that both methods yield significant improvements. Further experiments show that the entity copy mechanism can, with the help of the guidance information, correct the semantic noise it introduces to some extent.

5.
An urgent problem in text summarization is how to accurately capture the core content of a text. The dominant approach uses an encoder-decoder architecture and obtains the needed contextual semantics via soft attention during decoding. However, because the encoder sometimes encodes too much information, the generated summary does not always capture the core content of the source text. To this end, this paper proposes a text summarization model based on a dual-attention pointer network. First, the model uses a dual-attention...

6.
Ma  Yun  Li  Qing 《World Wide Web》2019,22(4):1401-1425

The popularity of social media sites provides new ways for people to share their experiences and convey their opinions, leading to an explosive growth of user-generated content. Text data, owing to the amazing expressiveness of natural language, is of great value for people to explore various kinds of knowledge. However, a considerable amount of user-generated text content is longer than what a reader expects, making automatic document summarization a necessity to facilitate knowledge digestion. In this paper, we focus on review-like, sentiment-oriented textual data. We propose the concept of Sentiment-preserving Document Summarization (SDS), aiming at summarizing a long textual document to a shorter version while preserving its main sentiments and not sacrificing readability. To tackle this problem, using deep neural network-based models, we devise an end-to-end weakly-supervised extractive framework, consisting of a hierarchical document encoder, a sentence extractor, a sentiment classifier, and a discriminator to distinguish the extracted summaries from natural short reviews. The framework is weakly-supervised in that no ground-truth summaries are used for training, while the sentiment labels are available to supervise the generated summary to preserve the sentiments of the original document. In particular, the sentence extractor is trained to generate summaries that i) make the sentiment classifier predict the same sentiment category as the original longer documents, and ii) fool the discriminator into recognizing them as human-written short reviews. Experimental results on two public datasets validate the effectiveness of our framework.


7.
Automatic text summarization (ATS) has recently achieved impressive performance thanks to advances in deep learning and the availability of large-scale corpora. However, there is still no guarantee that generated summaries are grammatical, concise, and convey all the salient information of the original documents. To make summarization results more faithful, this paper presents an unsupervised approach that combines rhetorical structure theory, deep neural models, and domain knowledge for ATS. The architecture contains three main components: domain knowledge base construction based on representation learning, an attentional encoder–decoder model for rhetorical parsing, and a subroutine-based model for text summarization. Domain knowledge can be used effectively for unsupervised rhetorical parsing, so that a rhetorical structure tree can be derived for each document. In the unsupervised rhetorical parsing module, the idea of translation was adopted to alleviate the problem of data scarcity. The subroutine-based summarization model depends purely on the derived rhetorical structure trees and can generate content-balanced results. To evaluate summaries without a gold standard, we proposed an unsupervised evaluation metric whose hyper-parameters were tuned by supervised learning. Experimental results show that, on a large-scale Chinese dataset, the proposed approach obtains performance comparable to existing methods.

8.
Four modern automatic text summarization systems are tested against a model vocabulary compiled by human subjects. The distribution of vocabulary terms in the source text is compared with their distribution in summaries of different lengths generated by the systems. Principles for evaluating the efficiency of current automatic text summarization systems are described.

9.

Saliency prediction models provide a probabilistic map of relative likelihood of an image or video region to attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to want to design visual attention models for 3D content. Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, as well as high-level cues such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, size and compactness of the salient regions, and emphasizing only a few salient objects in a scene. A new fovea-based model of spatial distance between the image regions is adopted for considering local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method to one single saliency map that is highly correlated with the eye-fixation data, a random forest based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment, which involved 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database as well as the eye-tracking data are publicly available along with this paper. Experiment results show that the proposed saliency prediction method achieves competitive performance compared to the state-of-the-art approaches.
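A rough sketch of the random-forest fusion step described above, under the assumption that each pixel's conspicuity values form the feature vector and eye-fixation density is the regression target; the map sizes and feature count are illustrative, not the paper's.

```python
# Sketch: a random forest maps per-pixel conspicuity values (brightness, color, motion,
# depth, ...) to a single saliency score, trained against eye-fixation density.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

h, w, n_maps = 36, 64, 6                       # low-resolution conspicuity maps (assumed)
conspicuity = np.random.rand(h, w, n_maps)     # stand-in for real feature maps
fixation_density = np.random.rand(h, w)        # stand-in for eye-tracking ground truth

X = conspicuity.reshape(-1, n_maps)            # one training sample per pixel
y = fixation_density.reshape(-1)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
saliency_map = forest.predict(X).reshape(h, w) # fused saliency prediction
```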


10.
Image steganalysis is difficult, and existing detection models struggle to focus on the regions of an image where data is actually embedded. To address this, an image steganalysis method based on saliency detection is proposed, which uses saliency detection to guide the steganalysis model toward features of the embedding regions. First, a saliency detection module generates the salient regions of the image; second, a region-filtering module selects the saliency maps that overlap strongly with the embedding regions and fuses them with the original image using image fusion; finally, incorrectly detected images are replaced by the corresponding saliency-fused images, improving the quality of the training set and thereby the model's training effectiveness and detection ability. Experiments on the BOSSbase 1.01 dataset, using adaptive steganography algorithms in both the spatial and JPEG domains, show that the method effectively improves the detection accuracy of existing deep-learning-based steganalysis models, by up to 3 percentage points. To further verify its generalization ability and practical value, a model-mismatch test was conducted on the IStego100K dataset; the results show that the method also generalizes reasonably well.

11.
Although the goal of traditional text summarization is to generate summaries with diverse information, most of those applications have no explicit definition of the information structure. Thus, it is difficult to generate truly structure-aware summaries because the information structure that should guide summarization is unclear. In this paper, we present a novel framework to generate guided summaries for product reviews. A guided summary has an explicitly defined structure derived from the important aspects of products. The proposed framework attempts to maximize expected aspect satisfaction during summary generation. The importance of an aspect to a generated summary is modeled using Labeled Latent Dirichlet Allocation. Empirical experimental results on consumer reviews of cars show the effectiveness of our method.

12.
Ziyi Zhou  Huiqun Yu  Guisheng Fan 《Software》2020,50(12):2313-2336
Natural language summaries of source code are important during software development and maintenance. Recently, deep learning based models that encode the token sequence or abstract syntax tree (AST) of code with neural networks have achieved good performance on automatic code summarization. However, there has been little work on efficiently combining the lexical and syntactic information of code for better summarization quality. In this paper, we propose two general and effective approaches to leveraging both types of information: a convolutional neural network that better extracts vector representations of AST nodes for downstream models, and a Switch Network that learns an adaptive weight vector to combine different code representations for summary generation. We integrate these approaches into a comprehensive code summarization model, which includes a sequential encoder for the token sequence of code and a tree-based encoder for its AST. We evaluate our model on a large Java dataset. The experimental results show that our model outperforms several state-of-the-art models on various metrics, and that the proposed approaches contribute substantially to the improvements.
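A minimal sketch of a Switch-Network-style gate that adaptively combines the token-sequence and AST representations; the sigmoid gating form and layer sizes are assumptions, as the abstract does not give the exact formulation.

```python
# Sketch: an element-wise gate weighs the sequence encoding against the AST encoding.
import torch
import torch.nn as nn

class SwitchNetwork(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)  # produces an element-wise weight vector

    def forward(self, seq_repr, ast_repr):
        # seq_repr, ast_repr: (batch, dim) context vectors from the two encoders
        weight = torch.sigmoid(self.gate(torch.cat([seq_repr, ast_repr], dim=-1)))
        return weight * seq_repr + (1.0 - weight) * ast_repr  # combined representation

combiner = SwitchNetwork(dim=256)
fused = combiner(torch.randn(4, 256), torch.randn(4, 256))  # (4, 256)
```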

13.
The entity linking task consists in automatically identifying and linking the entities mentioned in a text to their uniform resource identifiers in a given knowledge base. This task is very challenging due to natural language ambiguity. However, not all the entities mentioned in a document are equally useful for understanding the topics being discussed. Thus, the related problem of identifying the most relevant entities present in the document, also known as salient entities (SE), is attracting increasing interest. In this paper, we propose salient entity linking, a novel supervised 2-step algorithm that comprehensively addresses both entity linking and saliency detection. The first step identifies a set of candidate entities that are likely to be mentioned in the document. The second step, besides detecting linked entities, also scores them according to their saliency. Experiments conducted on 2 different data sets show that the proposed algorithm outperforms state-of-the-art competitors and is able to detect SE with high accuracy. Furthermore, we used salient entity linking for extractive text summarization. We found that entity saliency can be incorporated into text summarizers to extract salient sentences from text. The resulting summarizers outperform well-known summarization systems, proving the importance of using the SE information.

14.
李伯涵  李红莲 《计算机应用研究》2021,38(11):3289-3292,3358
To address the problems that abstractive summarization models have an insufficient grasp of text semantics and that generated summaries lack key information, a keyword-enhanced Chinese summarization model, KBPM (Key-BERT-Pen model), is proposed. First, keywords are extracted from the text with TextRank; the extracted keywords are then fed, together with the original text, through a pretrained BERT model to obtain more precise contextual representations; finally, the resulting word vectors are input to a pointer model with a dual attention mechanism, which generates the final summary by selecting words from the vocabulary or copying them from the source text. Experimental results show that KBPM generates summaries that are more readable and achieve higher ROUGE scores. Comparative analysis also confirms that KBPM effectively addresses the lack of key information in generated summaries.
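A minimal sketch of the keyword step, assuming jieba's TextRank implementation and simple concatenation of the keywords with the source text (KBPM's exact input format is not given in the abstract).

```python
# Sketch: extract TextRank keywords and prepend them to the source text before encoding.
# The separator and concatenation scheme are illustrative assumptions.
import jieba.analyse

source = "今日多地出现强降雨天气，气象部门发布暴雨预警，提醒市民减少外出并注意安全。"

keywords = jieba.analyse.textrank(source, topK=5, withWeight=False)
augmented_input = "；".join(keywords) + " [SEP] " + source  # keywords + original text

# `augmented_input` would then be tokenized and encoded by the pretrained BERT model.
```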

15.
Sequence-to-sequence neural models based on recurrent neural networks and attention have played an important role in information extraction and automatic summarization. However, such methods do not fully exploit the linguistic features of the text, and the generated results suffer from out-of-vocabulary (OOV) words, which hurts the accuracy and readability of summaries. To address this, linguistic features of the text are used to enrich the input, and a copy mechanism is introduced to alleviate the OOV problem during summary generation. On this basis, a new sequence-to-sequence method, the Copy-Generator model, is proposed to improve summarization quality. Experiments on the Chinese summarization dataset LCSTS show that the proposed method effectively improves the accuracy of generated summaries and can be applied to automatic text summarization tasks.
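A minimal sketch of a copy mechanism of the kind described above: the final output distribution mixes the decoder's vocabulary distribution with the attention distribution over source tokens, weighted by a generation probability p_gen. Shapes and extended-vocabulary handling are simplified assumptions, not the paper's exact design.

```python
# Sketch: final distribution = p_gen * P_vocab + (1 - p_gen) * attention over source tokens.
import torch

def copy_distribution(p_gen, vocab_dist, attn_dist, src_ids, extended_vocab_size):
    """
    p_gen:       (batch, 1)        probability of generating from the fixed vocabulary
    vocab_dist:  (batch, vocab)    softmax over the fixed vocabulary
    attn_dist:   (batch, src_len)  attention weights over source positions
    src_ids:     (batch, src_len)  source token ids in the extended vocabulary
    """
    batch, vocab = vocab_dist.shape
    final = torch.zeros(batch, extended_vocab_size)
    final[:, :vocab] = p_gen * vocab_dist                      # generation part
    final.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_dist)  # copying part
    return final                                               # (batch, extended_vocab)

dist = copy_distribution(
    p_gen=torch.full((2, 1), 0.7),
    vocab_dist=torch.softmax(torch.randn(2, 100), dim=-1),
    attn_dist=torch.softmax(torch.randn(2, 8), dim=-1),
    src_ids=torch.randint(0, 110, (2, 8)),
    extended_vocab_size=110,
)
```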

16.
As information on the internet is available in abundance for every topic, condensing the important information in the form of a summary would benefit a number of users. Hence, there is growing interest among the research community in developing new approaches to automatically summarize text. An automatic text summarization system generates a summary, i.e., a short text that includes all the important information of the document. Since the advent of text summarization in the 1950s, researchers have been trying to improve techniques for generating summaries so that machine-generated summaries match human-made ones. Summaries can be generated through extractive as well as abstractive methods. Abstractive methods are highly complex as they need extensive natural language processing; therefore, the research community is focusing more on extractive summaries, trying to achieve more coherent and meaningful results. Over the past decade, several extractive approaches implementing a number of machine learning and optimization techniques have been developed for automatic summary generation. This paper presents a comprehensive survey of recent extractive text summarization approaches developed in the last decade. Their needs are identified, and their advantages and disadvantages are listed in a comparative manner. A few abstractive and multilingual text summarization approaches are also covered. Summary evaluation is another challenging issue in this research field; therefore, both intrinsic and extrinsic methods of summary evaluation are described in detail, along with text summarization evaluation conferences and workshops. Furthermore, evaluation results of extractive summarization approaches are presented on some shared DUC datasets. Finally, this paper concludes with a discussion of useful future directions that can help researchers identify areas where further research is needed.

17.
With the volume of text information growing rapidly, and in order to improve reading efficiency, a deep-learning-based multi-document automatic summarization model is proposed. Building on a traditional summarization model, a Siamese LSTM network is applied to text similarity computation, using the Manhattan distance to characterize similarity, and the network is refined by removing stop words to improve computational efficiency. Experimental results show that, compared with traditional methods such as cosine similarity, using the Siamese LSTM yields summaries that are semantically closer to the topic and of higher quality, and the overall efficiency of the summarization system also improves significantly.
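A minimal sketch of Siamese-LSTM similarity with the Manhattan distance: both texts are encoded by the same LSTM and scored with exp(-||h1 - h2||_1). The embedding and hidden sizes are illustrative assumptions.

```python
# Sketch: shared LSTM encoder + Manhattan-distance similarity, as commonly used with Siamese LSTMs.
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):                      # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]                                 # final hidden state: (batch, hidden)

    def forward(self, ids_a, ids_b):
        h_a, h_b = self.encode(ids_a), self.encode(ids_b)
        manhattan = torch.sum(torch.abs(h_a - h_b), dim=1)
        return torch.exp(-manhattan)                   # similarity in (0, 1]

model = SiameseLSTM()
sim = model(torch.randint(0, 10_000, (2, 20)), torch.randint(0, 10_000, (2, 20)))
```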

18.
Huang  Wei  Zeng  Jing  Zhang  Peng  Chen  Guang  Ding  Huijun 《Multimedia Tools and Applications》2018,77(21):28539-28565

Foreground target localization in video sequences has received much attention in computer vision over the past few years, and its study is closely tied to machine learning techniques. Driven by the recent popularity of deep learning, many contemporary localization studies are equipped with deep learning methods, and their performance has benefited greatly from the prominent generalization capability of these methods. In this study, inspired by deep metric learning, a new trend in deep learning, a novel single-target localization method is proposed. The method consists of two steps. First, an offline deep-ranked metric learning step is carried out, and its gradient in the end-to-end learning procedure of the whole deep learning model is derived for the conventional stochastic gradient algorithm; an alternative proximal gradient algorithm is also introduced to boost efficiency. Second, an online model-updating step is employed, using both consecutive and incremental updating, to make the offline learned model more adaptive as the video sequence progresses, where challenging circumstances such as sudden illumination changes, obstacles, shape transformation, and complex backgrounds are likely to occur. The new single-target localization method has been compared with several shallow-learning-based and deep-learning-based localization methods on a large video database. Both qualitative and quantitative analyses have been conducted comprehensively to reveal the superiority of the new single-target localization method from a statistical point of view.


19.

Computational modeling of visual saliency has become an important research problem in recent years, with applications in video quality estimation, video compression, object tracking, retargeting, summarization, and so on. While most visual saliency models for dynamic scenes operate on raw video, several models have been developed for use with compressed-domain information such as motion vectors and transform coefficients. This paper presents a comparative study of eleven such models as well as two high-performing pixel-domain saliency models on two eye-tracking datasets using several comparison metrics. The results indicate that highly accurate saliency estimation is possible based only on a partially decoded video bitstream. The strategies that have shown success in compressed-domain saliency modeling are highlighted, and certain challenges are identified as potential avenues for further improvement.


20.
Current deep-learning-based salient object detection algorithms cannot accurately preserve object edge regions, so the detected salient objects have blurred edges and low accuracy. To address this, a salient object detection algorithm based on a multi-task deep learning model is proposed. First, a multi-task model based on a deep convolutional neural network (CNN) is trained to learn features of salient object regions and of their edges separately; then, the detected edges are used to generate a large number of candidate regions, which are ranked and weighted using the salient-region detection results; finally, the complete saliency map is extracted. Experimental results on three common benchmark datasets show that the proposed method achieves higher accuracy: the F-measure improves by 1.9% on average over deep-learning-based algorithms, while the mean absolute error (MAE) decreases by 12.6% on average.
