Similar Documents
20 similar documents found (search time: 156 ms)
1.
Although extractive methods dominate automatic summarization and have made substantial progress, the summaries they produce often lack proper cross-sentence reference and discourse structure, which hurts coherence and thus readability. To improve readability, this paper applies discourse rhetorical-structure information to Chinese automatic summarization. First, summaries are extracted based on Chinese discourse rhetorical structure; then an LSTM-based model of text coherence is built and used to evaluate the coherence of the resulting summaries. Experimental results show that, for extraction, rhetorical-structure-based summarization achieves better ROUGE scores than traditional extractive methods, and that under the LSTM coherence model, discourse-structure information both distills the gist of an article well and yields more coherent summaries.

2.
Product review summarization extracts an ordered set of sentences representative of broad opinion from all reviews of a product to serve as its overall review. Discourse hierarchical-structure analysis examines the hierarchical structure and semantic relations among the semantic units of a text, which helps judge each unit's meaning and importance more accurately and thus aids the extraction of important content. This paper therefore proposes a product review summarization method based on discourse hierarchical structure. An extractive summarization model is built on an LSTM (Long Short-Term Memory) network, and an attention mechanism injects discourse hierarchical-structure information as a reference for judging the importance of discourse units, so that the important content of reviews is extracted more accurately and overall task performance improves. Experiments on the Yelp 2013 dataset, evaluated with ROUGE, show a ROUGE-1 score of 0.3608 after adding discourse-structure information, a 1.57% improvement over a standard LSTM baseline that considers sentence information only, indicating that discourse hierarchical structure effectively improves product review summarization.

3.
A semantics-based algorithm for single-document automatic summarization
章芝青 《计算机应用》2010,30(6):1673-1675
Single-document automatic summarization aims to provide a concise, comprehensive abstract by extracting and distilling the main information of the original text. Mainstream approaches extract sentences directly using statistical and machine-learning techniques, but a single document's limited length makes purely statistical methods ineffective. To address this, a semantics-based single-document summarization method is proposed. The method first splits the document into sentences and computes the semantic similarity of every sentence pair; an improved K-Medoids clustering algorithm then groups similar sentences, the most representative sentence is selected from each cluster, and the selected sentences are assembled into the summary. Experimental results show that incorporating semantic information improves summary quality.
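The clustering step above can be sketched with plain K-Medoids over a precomputed sentence-distance matrix; this is a minimal stand-in for the paper's improved variant, and all names here are illustrative:

```python
# Minimal K-Medoids sketch over a precomputed n x n distance matrix.
# A simplified stand-in for the improved variant described in the paper.
import random

def k_medoids(dist, k, iters=100, seed=0):
    """dist: symmetric n x n distance matrix; returns medoid indices and labels."""
    rng = random.Random(seed)
    n = len(dist)
    medoids = rng.sample(range(n), k)
    for _ in range(iters):
        # assign each point to its nearest medoid
        labels = [min(medoids, key=lambda m: dist[i][m]) for i in range(n)]
        new_medoids = []
        for m in medoids:
            members = [i for i in range(n) if labels[i] == m]
            # new medoid = member minimising total distance to its cluster
            best = min(members, key=lambda c: sum(dist[c][j] for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    labels = [min(medoids, key=lambda m: dist[i][m]) for i in range(n)]
    return medoids, labels
```

In the paper's setting, `dist[i][j]` would be one minus the semantic similarity of sentences i and j; the most representative sentence of each cluster is simply its medoid.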

4.
A multi-document summarization method based on clustering an event-term semantic graph
Event-based extractive summarization generally first extracts sentences describing important events, then recombines them into a summary. This paper defines an event as an event term together with its associated named entities, and focuses on event-term semantic relations obtained from external semantic resources. It first builds an event-term semantic-relation graph and clusters the event terms with an improved DBSCAN algorithm; it then selects a representative event term for each cluster (or a cluster of event terms) to represent the topic of the document set; finally, it extracts from the documents the most important sentences containing the representative terms to form the summary. The experimental results demonstrate that considering event-term semantic relations in multi-document summarization is both necessary and feasible.

5.
Information explosion is a pervasive problem of the information age. To extract valuable information quickly from massive text data, automatic summarization has become a research focus in natural language processing (NLP). Multi-document summarization distills the important content from a set of documents on the same topic, helping users obtain key information quickly. To address the incompleteness and high redundancy of current multi-document summaries, an extractive method based on multi-granularity semantic interaction is proposed, combining a multi-granularity semantic-interaction network with maximal marginal relevance (MMR). Sentence representations are trained through semantic interaction at different granularities so as to capture key information at each granularity, ensuring comprehensive summaries; an improved MMR keeps redundancy low; and learning-to-rank scores each sentence of the input documents to extract the summary sentences. Experimental results on the Multi-News dataset show that the multi-granularity semantic-interaction model outperforms baselines such as LexRank and TextRank.
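The MMR criterion used above, which trades relevance against redundancy when picking each next sentence, can be sketched minimally with bag-of-words cosine similarity (function names and the similarity choice are illustrative, not from the paper):

```python
# Minimal MMR sketch: greedily select sentences that are relevant to the
# query/topic but dissimilar to sentences already selected.
from collections import Counter
import math

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_select(sentences, query, k=2, lam=0.5):
    """Pick k sentences maximising lam*relevance - (1-lam)*redundancy."""
    tokenized = [s.lower().split() for s in sentences]
    q = query.lower().split()
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            rel = cosine(tokenized[i], q)
            red = max((cosine(tokenized[i], tokenized[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```

A duplicate of an already-selected sentence scores high on relevance but is penalised by its redundancy term, so MMR skips it in favour of novel content.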

6.
Update summarization must meet the two requirements of traditional topic-focused multi-document summarization, topic relevance and information diversity, and must additionally satisfy users' need for novel information. This paper proposes HeatSum, an extractive update-summarization algorithm based on a heat-conduction model. The method naturally exploits the relations between sentences and the topic, between new and old sentences, and between already-selected and candidate sentences, and finds sentences for the update summary that are topic-relevant, informationally diverse, and novel. Experimental results show that HeatSum performs on par with the best-performing extractive system in the TAC09 evaluation and outperforms the other baselines.

7.
Current dynamic summarization methods are almost all based on batch processing of documents and cannot handle the practical setting where documents arrive as an unstable data stream and the summary must be updated in real time. To address this, a dynamic summarization method is proposed that models sentences with a K-nearest-neighbour scheme and partitions sub-topics by incrementally clustering sentences. A two-layer sentence graph is built from the KNN structure and processed with incremental graph clustering, while a time factor raises sentence novelty when extracting the dynamic summary. The method extracts summaries incrementally over the document stream, keeping summary content up to date in real time. Tests on the TAC2008 and TAC2009 Update Summarization datasets demonstrate the method's effectiveness for dynamic summary extraction.

8.
This paper analyses several discourse-structure features and explores a discourse-structure-based automatic summarization method. Making full use of the information the discourse structure provides, a dynamic clustering algorithm partitions the article into sub-topics; each sub-topic is summarized separately, and sentence-relatedness computation merges overlapping content across the partial summaries; the condensed partial summaries are then output in order to form the document summary. The method weights globally and extracts locally, improving summary quality in both coverage and accuracy.

9.
邓箴  包宏 《计算机与应用化学》2012,29(11):1384-1386
A multi-document summarization method is proposed based on lexical-chain extraction and grammatical analysis to extract terms representative of a text. Lexical chains are built by computing word-sense similarity; representative terms are then selected by combining word-frequency and position features; sentences carrying high term weights are clustered into a set of candidate summary sentences, from which centroid sentences are extracted and ordered to produce the multi-document summary. The method considers both the semantic relations between words and how representative each term is of the text, improving summary-sentence extraction. Experimental results show considerable gains in both recall and precision over a purely keyword-based summarization method.

10.
Generating abstracts for scientific papers automatically, which can help authors write abstracts faster, is one task within automatic summarization. Compared with ordinary news documents, scientific papers are strongly structured with explicit logical relations. Mainstream encoder-decoder abstractive models mainly exploit a document's sequential information and rarely examine its discourse structure in depth. Targeting the characteristics of scientific papers, this paper proposes an abstractive summarization model based on a word-section-document hierarchy, using the associations between words and sections to strengthen the hierarchy of the text representation and the interaction between levels, thereby filtering out the key information of a paper. The model further adds a context gating unit that updates and refines the context vector so as to capture contextual information more completely. Experimental results show that the proposed model effectively improves the ROUGE scores of the generated abstracts.

11.
Sentence extraction is a widely adopted text summarization technique in which the most important sentences are extracted from document(s) and presented as a summary. The first step towards sentence extraction is to rank sentences in order of their importance for the summary. This paper proposes a novel graph-based ranking method, iSpreadRank, to perform this task. iSpreadRank models a set of topic-related documents as a sentence similarity network. Based on this network model, iSpreadRank exploits spreading activation theory to formulate a general concept from social network analysis: the importance of a node in a network (i.e., a sentence in this paper) is determined not only by the number of nodes to which it connects, but also by the importance of those connected nodes. The algorithm recursively re-weights the importance of sentences by spreading their sentence-specific feature scores throughout the network to adjust the importance of other sentences, yielding a ranking that reflects the relative importance of the sentences. The paper also develops an approach to produce a generic extractive summary from the inferred sentence ranking. The proposed summarization method is evaluated on the DUC 2004 data set and found to perform well. Experimental results show that the proposed method obtains a ROUGE-1 score of 0.38068, a difference of only 0.00156 from the best participant in the DUC 2004 evaluation.
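The spreading idea, in which seed feature scores diffuse over the similarity graph so that a sentence inherits importance from its neighbours, can be sketched as below; the damping-style update is an illustrative simplification, not iSpreadRank's exact formulation:

```python
# Hedged sketch of spreading activation for sentence ranking: start from
# per-sentence feature scores and iteratively diffuse them over the
# sentence-similarity graph.
def spread_rank(sim, seeds, alpha=0.85, iters=50):
    """sim: n x n similarity matrix; seeds: initial feature score per sentence."""
    n = len(seeds)
    scores = list(seeds)
    # row-normalise similarities so each sentence spreads a unit of mass
    norm = [sum(row) or 1.0 for row in sim]
    for _ in range(iters):
        new = []
        for i in range(n):
            spread = sum(scores[j] * sim[j][i] / norm[j] for j in range(n))
            new.append((1 - alpha) * seeds[i] + alpha * spread)
        scores = new
    return scores
```

With equal seeds, a sentence connected to many others accumulates more spread mass and therefore ranks higher, which is the behaviour the network model is designed to capture.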

12.
Text summarization is the process of automatically creating a shorter version of one or more text documents. It is an important way of finding relevant information in large text libraries or on the Internet. Text summarization techniques are essentially classified as Extractive or Abstractive. Extractive techniques perform summarization by selecting sentences of documents according to some criteria. Abstractive summaries attempt to improve coherence among sentences by eliminating redundancies and clarifying the context of sentences. Sentence scoring is the most widely used technique for extractive text summarization. This paper describes and performs a quantitative and qualitative assessment of 15 sentence-scoring algorithms available in the literature, evaluated on three different datasets (News, Blogs and Article contexts). In addition, directions for improving the sentence extraction results obtained are suggested.
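One of the classic scoring families such surveys cover is word-frequency scoring; a minimal version, with an illustrative stopword list, can be sketched as:

```python
# Word-frequency sentence scoring: a sentence scores by the summed corpus
# frequency of its content words, normalised by its content-word count.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "are", "and", "to", "by"}

def word_freq_scores(sentences):
    words = [w for s in sentences for w in s.lower().split()
             if w not in STOPWORDS]
    freq = Counter(words)
    scores = []
    for s in sentences:
        content = [w for w in s.lower().split() if w not in STOPWORDS]
        scores.append(sum(freq[w] for w in content) / len(content)
                      if content else 0.0)
    return scores
```

Sentences built from frequent, on-topic vocabulary outscore off-topic ones, which is the intuition behind this scoring family; the surveyed algorithms refine it with position, cue phrases, TF-IDF and similar signals.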

13.
Text summarization condenses the information in a text, extracting its main content to form an abstract. Existing summarization models usually treat content selection and summary generation independently; while this can improve sentence compression and fusion, part of the textual information is lost during extraction, reducing accuracy. Building on a pre-trained model and a Transformer-based document-level sentence encoder, a two-stage model combining content extraction with summary generation is proposed. BERT is used for self-supervised learning over a large corpus to obtain word representations rich in semantic information. On top of the Transformer structure, a fully connected classifier assigns each sentence one of three labels, extracting for each summary sentence the corresponding set of source sentences. A pointer-generator network then compresses each sentence set into a single summary sentence, shortening both the output and input sequences. Experimental results show that, compared with generating the full summary directly, the model raises the average F1 of ROUGE-1, ROUGE-2 and ROUGE-L on generated sentences by 1.69 percentage points, effectively improving the accuracy of the generated sentences.

14.
With the number of documents describing real-world events and event-oriented information needs rapidly growing on a daily basis, the need for efficient retrieval and concise presentation of event-related information is becoming apparent. Nonetheless, the majority of information retrieval and text summarization methods rely on shallow document representations that do not account for the semantics of events. In this article, we present event graphs, a novel event-based document representation model that filters and structures the information about events described in text. To construct the event graphs, we combine machine learning and rule-based models to extract sentence-level event mentions and determine the temporal relations between them. Building on event graphs, we present novel models for information retrieval and multi-document summarization. The information retrieval model measures the similarity between queries and documents by computing graph kernels over event graphs. The extractive multi-document summarization model selects sentences based on the relevance of the individual event mentions and the temporal structure of events. Experimental evaluation shows that our retrieval model significantly outperforms well-established retrieval models on event-oriented test collections, while the summarization model outperforms competitive models from shared multi-document summarization tasks.

15.
Automatic text summarization aims to condense a given text into a short summary that effectively reflects its core content. Abstractive summarization, which can paraphrase the original with a more flexible and richer vocabulary, has become a research hotspot in the field. However, because existing abstractive models recombine existing words and introduce new ones when producing summary sentences, the output is prone to incoherence and low readability. Moreover, conventional supervised training on labeled data...

16.
Automatic text summarization is a field situated at the intersection of natural language processing and information retrieval. Its main objective is to automatically produce a condensed representative form of documents. This paper presents ArA*summarizer, an automatic system for Arabic single-document summarization. The system is based on an unsupervised hybrid approach that combines statistical, cluster-based, and graph-based techniques. The main idea is to divide the text into subtopics and then select the most relevant sentences in the most relevant subtopics. The selection process is done by an A* algorithm executed on a graph representing the different lexical-semantic relationships between sentences. Experimentation is conducted on the Essex Arabic summaries corpus using recall-oriented understudy for gisting evaluation, automatic summarization engineering, merged model graphs, and n-gram graph powered evaluation via regression evaluation metrics. The evaluation results showed the good performance of our system compared with existing works.

17.
The goal of abstractive summarization of multi-documents is to automatically produce a condensed version of the document text while maintaining the significant information. Most graph-based extractive methods represent sentences as bags of words and use content similarity measures, which may fail to detect semantically equivalent redundant sentences. On the other hand, graph-based abstractive methods depend on a domain expert to build a semantic graph from a manually created ontology, which requires time and effort. This work presents a semantic graph approach with an improved ranking algorithm for abstractive summarization of multi-documents. The semantic graph is built from the source documents so that the graph nodes denote predicate argument structures (PASs), the semantic structure of sentences, identified automatically by semantic role labeling, while graph edges carry similarity weights computed from PAS semantic similarity. To reflect the impact of both the document and the document set on PASs, the edges of the semantic graph are further augmented with PAS-to-document and PAS-to-document-set relationships. The important graph nodes (PASs) are ranked using the improved graph ranking algorithm. Redundant PASs are reduced by using maximal marginal relevance to re-rank the PASs, and finally summary sentences are generated from the top-ranked PASs using language generation. Experiments are conducted on DUC-2002, a standard dataset for document summarization. The experimental findings indicate that the proposed approach outperforms other summarization approaches.

18.
Most existing automatic text summarization algorithms target multi-document collections of relatively short texts and are therefore difficult to apply directly to novels, which are long and loosely structured. In this paper, aiming at novel documents, we propose a topic-modeling-based approach to extractive automatic summarization that achieves a good balance among compression ratio, summarization quality and machine readability. First, based on topic modeling, we extract the candidate sentences associated with topic words from a preprocessed novel document. Second, with compression ratio and topic diversity as goals, we design an importance evaluation function to select the most important sentences from the candidates and thus generate an initial novel summary. Finally, we smooth the initial summary to overcome the semantic confusion caused by ambiguous or synonymous words, improving the summary's readability. We evaluate our approach experimentally on a real novel dataset. The results show that, compared with candidate algorithms, each automatic summary generated by our approach has not only a higher compression ratio but also better summarization quality.

19.
A document-sensitive graph model for multi-document summarization
In recent years, graph-based models and ranking algorithms have drawn considerable attention from the extractive document summarization community. Most existing approaches take into account sentence-level relations (e.g. sentence similarity) but neglect the differences among documents and the influence of documents on sentences. In this paper, we present a novel document-sensitive graph model that emphasizes the influence of global document-set information on local sentence evaluation. By exploiting document-document and document-sentence relations, we distinguish intra-document sentence relations from inter-document sentence relations. In this way, we move towards the goal of truly summarizing multiple documents rather than a single combined document. Based on this model, we develop an iterative sentence ranking algorithm, namely DsR (Document-Sensitive Ranking). Automatic ROUGE evaluations on the DUC data sets show that DsR outperforms previous graph-based models in both generic and query-oriented summarization tasks.

20.
In this paper, we propose an unsupervised text summarization model that generates a summary by extracting salient sentences from the given document(s). In particular, we model text summarization as an integer linear programming problem. One advantage of this model is that it can directly discover the key sentences in the given document(s) and cover their main content. The model also guarantees that the summary contains no two sentences conveying the same information. The proposed model is quite general and can be used for both single- and multi-document summarization. We implemented our model on the multi-document summarization task. Experimental results on the DUC2005 and DUC2007 datasets showed that our proposed approach outperforms the baseline systems.
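The ILP formulation above, maximising content coverage subject to a length budget and a no-redundancy constraint, can be illustrated with a toy exact version that enumerates subsets instead of calling a solver; the coverage objective and thresholds here are illustrative assumptions, not the paper's exact model:

```python
# Toy exact stand-in for ILP-based summarization: enumerate sentence subsets
# under a character budget, forbid near-duplicate pairs, maximise the number
# of unique words covered. Real systems pass the same objective and
# constraints to an ILP solver rather than enumerating.
from itertools import combinations

def ilp_summary(sentences, budget, max_overlap=0.8):
    toks = [set(s.lower().split()) for s in sentences]
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    best, best_cov = [], -1
    for r in range(1, len(sentences) + 1):
        for idx in combinations(range(len(sentences)), r):
            if sum(len(sentences[i]) for i in idx) > budget:
                continue  # length constraint
            if any(jaccard(toks[i], toks[j]) > max_overlap
                   for i, j in combinations(idx, 2)):
                continue  # redundancy constraint: no near-identical pair
            cov = len(set().union(*(toks[i] for i in idx)))  # unique words covered
            if cov > best_cov:
                best, best_cov = list(idx), cov
    return [sentences[i] for i in best]
```

Enumeration is exponential in the number of sentences, which is precisely why the paper casts the problem as an ILP: solvers exploit the linear structure to find the same optimum at practical scale.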


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号