Similar Documents
20 similar documents found.
1.
Extractive automatic summarization selects the sentences that best represent the core content of a document as its summary, while discourse nuclearity analysis identifies the primary and secondary content of a text from the perspective of discourse structure. The two tasks are therefore closely related, and discourse nuclearity can guide sentence extraction. This paper proposes a single-document extractive summarization method based on discourse nuclearity, in which a neural network jointly learns discourse nuclearity analysis and text summarization. The model considers structural information such as nuclearity relations in addition to semantic information at the word and phrase level, and extracts the sentences that best represent the document's core content through a global optimization over the document. Experimental results show that, compared with current mainstream single-document extractive methods, the approach achieves a significant improvement on ROUGE metrics.
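The paper does not include code; the following PyTorch sketch only illustrates the joint-learning idea: a shared sentence encoder feeds a nuclearity head and an extraction head, and the two losses are summed. The input representation, layer sizes, and the loss-balancing weight `alpha` are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' model): joint learning of discourse
# nuclearity and extractive summarization with a shared sentence encoder.
import torch
import torch.nn as nn

class JointSummarizer(nn.Module):
    def __init__(self, sent_dim=300, hidden=128):
        super().__init__()
        # Shared encoder over precomputed sentence vectors (e.g. averaged embeddings).
        self.encoder = nn.Sequential(nn.Linear(sent_dim, hidden), nn.Tanh())
        self.nuclearity_head = nn.Linear(hidden, 1)  # primary (1) vs. secondary (0)
        self.extract_head = nn.Linear(hidden, 1)     # in-summary (1) vs. not (0)

    def forward(self, sent_vecs):                    # sent_vecs: (n_sents, sent_dim)
        h = self.encoder(sent_vecs)
        return self.nuclearity_head(h).squeeze(-1), self.extract_head(h).squeeze(-1)

def joint_loss(model, sent_vecs, nuc_labels, ext_labels, alpha=0.5):
    """Sum of the two task losses; alpha balances them (an assumption)."""
    nuc_logits, ext_logits = model(sent_vecs)
    bce = nn.BCEWithLogitsLoss()
    return alpha * bce(nuc_logits, nuc_labels) + (1 - alpha) * bce(ext_logits, ext_labels)

# Toy usage with random tensors standing in for a real corpus.
model = JointSummarizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sents = torch.randn(12, 300)              # 12 sentences in one document
nuc = torch.randint(0, 2, (12,)).float()  # nuclearity labels
ext = torch.randint(0, 2, (12,)).float()  # extraction labels
loss = joint_loss(model, sents, nuc, ext)
loss.backward(); opt.step()
```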

2.
To address the problem that existing multi-document extraction methods make poor use of sentence topic and semantic information, a multi-document summarization method based on a multi-information sentence graph model is proposed. First, a sentence graph is constructed with sentences as nodes. Then, the sentence topic distributions obtained from a sentence-level Bayesian topic model and the semantic similarities obtained from a word-vector model are fused into a final sentence relatedness score, which combines topic and semantic information as the edge weights of the sentence graph. Finally, the multi-document summary is produced with a minimum-dominating-set method over the sentence graph. By fusing multiple sources of information into the sentence graph, the method combines topic, semantic, and relational information between sentences. Experimental results show that it effectively improves the overall performance of extractive summarization.
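A minimal sketch of the dominating-set step, assuming the topic-based and embedding-based sentence similarities have already been computed; the fusion weight and edge threshold are illustrative values, not those of the paper.

```python
# Sketch of the dominating-set idea (simplified): fuse two sentence-similarity
# signals into edge weights, threshold them into a graph, then greedily pick a
# small set of sentences that "covers" (dominates) all others.
import numpy as np

def greedy_dominating_set(adj):
    """Greedy approximation of a minimum dominating set of an undirected graph."""
    n = adj.shape[0]
    uncovered, chosen = set(range(n)), []
    while uncovered:
        # Pick the node covering the most uncovered nodes (itself plus neighbours).
        best = max(range(n),
                   key=lambda v: len(uncovered & ({v} | set(np.flatnonzero(adj[v])))))
        chosen.append(best)
        uncovered -= {best} | set(np.flatnonzero(adj[best]))
    return chosen

def summarize(topic_sim, embed_sim, fuse=0.5, edge_thresh=0.3):
    # Fused relatedness as edge weight; the fusion weight is an assumption.
    sim = fuse * topic_sim + (1 - fuse) * embed_sim
    np.fill_diagonal(sim, 0.0)
    adj = (sim >= edge_thresh).astype(int)
    return greedy_dominating_set(adj)

# Toy usage: 5 sentences with made-up topic and embedding similarities.
rng = np.random.default_rng(0)
topic_sim = rng.random((5, 5)); topic_sim = (topic_sim + topic_sim.T) / 2
embed_sim = rng.random((5, 5)); embed_sim = (embed_sim + embed_sim.T) / 2
print(summarize(topic_sim, embed_sim))   # indices of selected summary sentences
```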

3.
Automatic summarization is a key technology for addressing information overload on the web. Based on an analysis of sentence features and the semantic distance between sentences, an automatic text summarization algorithm using sentence features and semantic distance is proposed. First, the weight of each sentence feature is computed and used to determine an initial sentence weight; then the semantic distances between sentences are computed to adjust these weights, the sentences are ranked, and the highest-weighted ones are taken as the document's topic sentences; finally, the extracted sentences are smoothed to produce a fluent summary. Experiments show that the summaries generated by the algorithm at different compression ratios are close to human-written summaries and that the algorithm performs well.
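A rough sketch of the two-stage scoring described above, with hypothetical feature weights: an initial score from surface features, then an adjustment from the semantic distances between sentences.

```python
# Simplified sketch of the two-stage scoring: an initial weight from surface
# features, then an adjustment from semantic distance to the rest of the text.
# Feature choices and combination weights are assumptions, not the paper's.
import numpy as np

def feature_weight(position, length, title_overlap, n_sents):
    pos_score = 1.0 - position / max(n_sents - 1, 1)   # earlier sentences score higher
    len_score = min(length / 25.0, 1.0)                # prefer reasonably long sentences
    return 0.4 * pos_score + 0.3 * len_score + 0.3 * title_overlap

def adjust_by_semantic_distance(weights, sent_vecs):
    # Sentences close (small distance) to many others get their weight boosted.
    dists = np.linalg.norm(sent_vecs[:, None, :] - sent_vecs[None, :, :], axis=-1)
    centrality = 1.0 / (1.0 + dists.mean(axis=1))
    return weights * centrality

# Toy usage: 4 sentences with random embeddings.
rng = np.random.default_rng(1)
vecs = rng.normal(size=(4, 50))
w = np.array([feature_weight(i, 20, 0.2, 4) for i in range(4)])
ranked = np.argsort(-adjust_by_semantic_distance(w, vecs))
print(ranked[:2])   # top-2 sentences as the "topic sentences"
```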

4.
Existing Chinese automatic summarization methods mainly exploit information in the text itself and therefore fail to make full use of information such as the semantic relatedness between words. To address this, an improved Chinese text summarization method is proposed. It incorporates external corpus information into the TextRank algorithm in the form of word vectors: by combining TextRank with word2vec, each word in a sentence is mapped into a high-dimensional vector space to form a sentence vector. Factors such as inter-sentence similarity, keyword coverage, and similarity between a sentence and the title are taken into account when computing the influence weights between sentences, and the top-ranked sentences are reordered to form the summary. Experimental results show that the method performs well on the dataset used in the paper and extracts Chinese summaries better than the original method.
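A minimal sketch of the TextRank-plus-word2vec pipeline, assuming `word_vecs` is a dictionary of pretrained embeddings; the keyword-coverage and title-similarity factors from the paper are omitted for brevity.

```python
# Sketch: average word vectors into sentence vectors, use cosine similarity as
# edge weights, run PageRank by power iteration, return top sentences in order.
import numpy as np

def sentence_vector(tokens, word_vecs, dim=100):
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def textrank(sent_vecs, d=0.85, iters=50):
    n = len(sent_vecs)
    sims = np.array([[max(cosine(a, b), 0.0) for b in sent_vecs] for a in sent_vecs])
    np.fill_diagonal(sims, 0.0)
    row_sums = sims.sum(axis=1, keepdims=True)
    trans = np.where(row_sums > 0, sims / np.maximum(row_sums, 1e-12), 1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                        # power iteration
        scores = (1 - d) / n + d * trans.T @ scores
    return scores

def summarize(token_lists, word_vecs, k=2):
    vecs = [sentence_vector(t, word_vecs) for t in token_lists]
    top = np.argsort(-textrank(vecs))[:k]
    return sorted(int(i) for i in top)            # restore original document order

# Toy usage with random vectors standing in for pretrained word2vec embeddings.
rng = np.random.default_rng(2)
word_vecs = {w: rng.normal(size=100) for w in "摘要 句子 算法 文本 方法 模型".split()}
doc = [["摘要", "句子"], ["算法", "文本"], ["方法", "模型", "句子"]]
print(summarize(doc, word_vecs))
```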

5.
Automatic text summarization is an important step in information and knowledge acquisition following information retrieval, and producing high-quality document summaries matters greatly. This paper takes the sentence as the basic extraction unit, uses position and title keywords as weighting features, clusters sentences by latent semantics, and proposes a summarization method based on the resulting semantic structure. A relatively objective and effective summary evaluation method is also given. Experiments demonstrate the effectiveness of the approach.
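A rough scikit-learn sketch of latent semantic clustering for extraction: TF-IDF, truncated SVD, and KMeans, with position and title-keyword boosts. The feature weights are assumptions, not the paper's.

```python
# Sketch: TF-IDF -> LSA (truncated SVD) -> KMeans, then one representative
# sentence per cluster, boosted by position and title similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def lsa_cluster_summary(sentences, title, n_clusters=2):
    tfidf = TfidfVectorizer().fit_transform(sentences + [title])
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    sent_lsa, title_lsa = lsa[:-1], lsa[-1]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(sent_lsa)
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        centre = sent_lsa[idx].mean(axis=0)
        # Score = closeness to cluster centre + position bonus + title similarity.
        score = (-np.linalg.norm(sent_lsa[idx] - centre, axis=1)
                 + 0.1 * (1.0 - idx / len(sentences))
                 + 0.1 * (sent_lsa[idx] @ title_lsa))
        picked.append(int(idx[np.argmax(score)]))
    return [sentences[i] for i in sorted(picked)]

sents = ["Automatic summarization compresses documents.",
         "Clustering groups semantically similar sentences.",
         "Latent semantic analysis reduces term noise.",
         "Summaries keep the most important sentences."]
print(lsa_cluster_summary(sents, "latent semantic summarization", n_clusters=2))
```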

6.
When summarizing Chinese text, the traditional TextRank algorithm considers only the similarity between nodes and ignores other important information in the text. For single Chinese documents, building on existing work, this method uses TextRank while considering inter-sentence similarity on the one hand and, on the other, combining TextRank with the overall structure of the text and the contextual information of sentences, such as the physical position of sentences or paragraphs, feature sentences, and core sentences whose weights may deserve a boost, to generate a set of candidate summary sentences. Redundancy processing is then applied to the candidate set to remove highly similar sentences and obtain the final summary. Experiments verify that the algorithm improves the accuracy of the generated summaries, demonstrating its effectiveness.

7.
A semantic-loss evaluation method for summarization attacks on text watermarking
To control the loss of semantic information caused by automatic-summarization attacks on text watermarks, and building on existing summary evaluation methods, an algorithm is proposed for assessing the semantic loss of a text under such attacks. The method quantifies sentence semantics to compute the semantic loss caused by a summarization attack in a principled way, and analyzes the main factors behind that loss as well as the mathematical relationship between these factors and the amount of semantic loss. It can evaluate the distortion introduced by summarization attacks more realistically, from the perspective of semantic information loss, and the evaluation is fully automated.
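A hypothetical sketch of the evaluation idea: quantify each sentence's semantic contribution and report the fraction removed by a summarization attack. The weighting scheme is an assumption; the paper's exact formulation may differ.

```python
# Sketch: sentence contribution = similarity to the document centroid,
# normalised; semantic loss = total contribution of the sentences the attack removed.
import numpy as np

def sentence_semantics(sent_vecs):
    doc = np.mean(sent_vecs, axis=0)
    sims = sent_vecs @ doc / (np.linalg.norm(sent_vecs, axis=1)
                              * np.linalg.norm(doc) + 1e-12)
    return sims / sims.sum()

def semantic_loss(sent_vecs, kept_indices):
    """Fraction of the document's semantic weight removed by the attack."""
    weights = sentence_semantics(sent_vecs)
    kept = np.zeros(len(weights), dtype=bool)
    kept[list(kept_indices)] = True
    return float(weights[~kept].sum())

rng = np.random.default_rng(3)
vecs = np.abs(rng.normal(size=(6, 50)))            # 6 sentences of the watermarked text
print(semantic_loss(vecs, kept_indices=[0, 2, 3])) # loss from dropping 3 sentences
```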

8.
To deal with the heterogeneous content and sparse information of microblogs, this work studies traditional automatic summarization techniques in depth and, taking the characteristics of microblog data into account, proposes a hybrid summarization method based on statistics and semantic understanding built on top of microblog event extraction. An initial statistics-based summary is first obtained from textual features such as word frequency and sentence position; a semantic dictionary is then used to compute sentence similarity and identify event subjects for readability-oriented processing based on semantic understanding, making the final summary more readable; finally, the resulting summary is assessed with a reasonable evaluation method. Experimental results show that the method produces summaries of stable quality and good readability at different compression ratios.

9.
Current extractive summarization methods for Tibetan text mainly score sentences using features of the text itself and cannot mine the deeper semantic information in sentences. This paper proposes an improved extractive summarization method for Tibetan. External corpus information is incorporated into the TextRank algorithm in the form of word vectors: by combining TextRank with word vectors, each word in a sentence is mapped into a high-dimensional vector space to form a sentence vector, sentences are scored iteratively, and the highest-scoring sentences are reordered to form the summary. Experimental results show that the method effectively improves summary quality. Building on the traditional ROUGE evaluation, the paper also proposes a summary evaluation method based on sentence semantic similarity.
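A small sketch of the proposed evaluation alternative to ROUGE, scoring a candidate summary by sentence-level semantic similarity to a reference; the recall-oriented aggregation is an assumption.

```python
# Sketch: for each reference sentence, take its best-matching candidate
# sentence by cosine similarity and average the matches.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def semantic_eval(candidate_vecs, reference_vecs):
    return float(np.mean([max(cosine(r, c) for c in candidate_vecs)
                          for r in reference_vecs]))

rng = np.random.default_rng(4)
cand = rng.normal(size=(3, 100))   # candidate summary: 3 sentence vectors
ref = rng.normal(size=(2, 100))    # reference summary: 2 sentence vectors
print(round(semantic_eval(cand, ref), 3))
```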

10.
When extracting automatic summaries, the classical TextRank algorithm usually considers only the similarity between sentence nodes and ignores the document's discourse structure and the contextual information of sentences. To address this, and drawing on the structural characteristics of Chinese text, an improved algorithm, iTextRank, is proposed. It introduces information such as the title, paragraphs, special sentences, and sentence position and length into the construction of the TextRank graph, gives an improved sentence similarity computation and a weight adjustment factor, applies them to automatic summarization of Chinese text, and analyzes the algorithm's time complexity. Experiments show that iTextRank achieves higher precision but lower recall than classical TextRank.
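An illustrative sketch of an iTextRank-style weight adjustment factor combining title similarity, position, and length; the specific factor forms and constants are invented for illustration and are not the paper's.

```python
# Sketch: scale the content similarity between two sentences by structural
# factors before using it as a TextRank edge weight.
import numpy as np

def adjustment_factor(pos_i, pos_j, len_i, len_j, title_sim_i, title_sim_j, ideal_len=20):
    position = 1.0 + 0.5 * (int(pos_i == 0) + int(pos_j == 0))   # boost lead sentences
    length = (np.exp(-abs(len_i - ideal_len) / ideal_len)
              * np.exp(-abs(len_j - ideal_len) / ideal_len))     # penalise extreme lengths
    title = 1.0 + 0.5 * (title_sim_i + title_sim_j)              # boost title-like sentences
    return position * length * title

def adjusted_edge(content_sim, **kwargs):
    return content_sim * adjustment_factor(**kwargs)

print(adjusted_edge(0.42, pos_i=0, pos_j=5, len_i=18, len_j=40,
                    title_sim_i=0.6, title_sim_j=0.1))
```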

11.
Automatic summarization is an important research topic in natural language processing, and machine-learning-based summarization is one of its active areas. However, the data distribution in summarization exhibits a notable property: summary sentences are vastly outnumbered by non-summary sentences, which degrades the performance of conventional machine learning algorithms. To address this severe class imbalance, this paper designs two effective classification methods for imbalanced summarization data based on support vector machines. The first replaces the balanced margin between positive and negative classes in the standard SVM with an unbalanced margin; the second partitions the dataset and builds an SVM ensemble learning algorithm. Experiments on the DUC 2001 dataset show that both single-document summarization methods based on imbalanced-data classification significantly outperform summarization based on conventional classifiers.
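A scikit-learn sketch of the two strategies under stated assumptions: class weights as a stand-in for the unbalanced margin, and an ensemble built by partitioning the majority (non-summary) class; the parameters are illustrative.

```python
# Sketch: (1) asymmetric class penalty, (2) SVM ensemble where the majority
# class is split into chunks, each paired with all minority examples.
import numpy as np
from sklearn.svm import SVC

def weighted_svm(X, y):
    # Heavier penalty on the rare summary class (label 1).
    return SVC(kernel="linear", class_weight={0: 1.0, 1: 10.0}).fit(X, y)

def partition_ensemble(X, y, n_parts=3):
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    models = []
    for chunk in np.array_split(np.random.permutation(neg), n_parts):
        idx = np.concatenate([pos, chunk])
        models.append(SVC(kernel="linear", probability=True).fit(X[idx], y[idx]))
    return models

def ensemble_predict(models, X):
    # Average the members' probabilities for the summary class.
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0) > 0.5

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))
y = (rng.random(200) < 0.1).astype(int)        # roughly 10% summary sentences
print(weighted_svm(X, y).predict(X[:5]))
print(ensemble_predict(partition_ensemble(X, y), X[:5]))
```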

12.
The task of automatic document summarization aims at generating short summaries for originally long documents. A good summary should cover the most important information of the original document or a cluster of documents, while being coherent, non-redundant and grammatically readable. Numerous approaches for automatic summarization have been developed to date. In this paper we give a self-contained, broad overview of recent progress made for document summarization within the last 5 years. Specifically, we emphasize significant contributions made in recent years that represent the state-of-the-art of document summarization, including progress on modern sentence extraction approaches that improve concept coverage, information diversity and content coherence, as well as summarization frameworks that integrate sentence compression, and more abstractive systems that are able to produce completely new sentences. In addition, we review progress made for document summarization in domains, genres and applications that are different from traditional settings. We also point out some of the latest trends and highlight a few possible future directions.

13.
王俊丽, 魏绍臣, 管敏. 《计算机科学》, 2015, 42(12): 1-7, 39
The rapid development of Internet technology has brought the collection and dissemination of information to unprecedented speed, and the resulting flood of data makes it increasingly difficult to obtain valuable information. Automatic summarization can extract from massive amounts of information a concise passage that represents the important content of the original text; heavily compressing documents is an effective way to tackle information overload, so research on automatic summarization has attracted growing attention. Techniques such as statistical analysis, machine learning, and linguistic knowledge have all been applied in existing summarization systems. This paper surveys research on automatic summarization based on graph ranking algorithms: it first covers the basics of automatic summarization and graph ranking, then systematically reviews the state of the art from three aspects (graph construction, graph ranking, and sentence selection), and finally, based on an analysis of existing systems, discusses future directions for graph-ranking-based summarization.

14.
The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is the process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization aims to produce a summary delivering the majority of information content from a set of documents about an explicit or implicit main topic. In our study we focus on sentence-based extractive document summarization and propose a generic document summarization method based on sentence clustering. The proposed approach continues the sentence-clustering-based extractive summarization methods proposed in Alguliev [Alguliev, R. M., Aliguliyev, R. M., & Bagirov, A. M. (2005). Global optimization in the summarization of text documents. Automatic Control and Computer Sciences, 39, 42–47], Aliguliyev [Aliguliyev, R. M. (2006). A novel partitioning-based clustering method and generic document summarization. In Proceedings of the 2006 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology (WI–IAT 2006 Workshops) (WI–IATW'06), 18–22 December (pp. 626–629), Hong Kong, China], Alguliev and Alyguliev [Alguliev, R. M., & Alyguliev, R. M. (2007). Summarization of text-based documents with a determination of latent topical sections and information-rich sentences. Automatic Control and Computer Sciences, 41, 132–140], and Aliguliyev [Aliguliyev, R. M. (2007). Automatic document summarization by sentence extraction. Journal of Computational Technologies, 12, 5–15]. The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure. Experimental results on the open benchmark datasets DUC01 and DUC02 show that our proposed approach can improve performance compared to state-of-the-art summarization approaches.

15.
Automatic document summarization aims to create a compressed summary that preserves the main content of the original documents. It is a well-recognized fact that a document set often covers a number of topic themes, with each theme represented by a cluster of highly related sentences. More importantly, topic themes are not equally important. The sentences in an important theme cluster are generally deemed more salient than the sentences in a trivial theme cluster. Existing clustering-based summarization approaches integrate clustering and ranking in sequence, which unavoidably ignores the interaction between them. In this paper, we propose a novel approach based on spectral analysis that performs clustering and ranking of sentences simultaneously. Experimental results on the DUC generic summarization datasets demonstrate the improvement of the proposed approach over the other existing clustering-based approaches.
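A loose sketch that only approximates the flavour of the approach (the paper couples clustering and ranking more tightly): a spectral embedding of the sentence-similarity graph is used for clustering, and the dominant eigenvector of the similarity matrix for ranking, with one pick per theme cluster.

```python
# Loose sketch (not the paper's coupled model): spectral embedding + KMeans
# for theme clusters, dominant-eigenvector salience for ranking within clusters.
import numpy as np
from sklearn.cluster import KMeans

def spectral_summary(sim, n_clusters=2):
    sim = (sim + sim.T) / 2
    np.fill_diagonal(sim, 0.0)
    deg = np.diag(sim.sum(axis=1))
    # Unnormalised graph Laplacian; its smallest eigenvectors give the embedding.
    eigvals, eigvecs = np.linalg.eigh(deg - sim)
    embedding = eigvecs[:, 1:n_clusters + 1]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)
    # Salience = entry in the dominant eigenvector of the similarity matrix.
    salience = np.abs(np.linalg.eigh(sim)[1][:, -1])
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        picked.append(int(idx[np.argmax(salience[idx])]))
    return sorted(picked)

rng = np.random.default_rng(6)
S = rng.random((6, 6))                      # toy sentence-similarity matrix
print(spectral_summary(S, n_clusters=2))    # one top-ranked sentence per theme
```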

16.
In recent years, algebraic methods, more precisely matrix decomposition approaches, have become a key tool for tackling document summarization problem. Typical algebraic methods used in multi-document summarization (MDS) vary from soft and hard clustering approaches to low-rank approximations. In this paper, we present a novel summarization method AASum which employs the archetypal analysis for generic MDS. Archetypal analysis (AA) is a promising unsupervised learning tool able to completely assemble the advantages of clustering and the flexibility of matrix factorization. In document summarization, given a content-graph data matrix representation of a set of documents, positively and/or negatively salient sentences are values on the data set boundary. These extreme values, archetypes, can be computed using AA. While each sentence in a data set is estimated as a mixture of archetypal sentences, the archetypes themselves are restricted to being sparse mixtures, i.e., convex combinations of the original sentences. Since AA in this way readily offers soft clustering, we suggest to consider it as a method for simultaneous sentence clustering and ranking. Another important argument in favor of using AA in MDS is that in contrast to other factorization methods, which extract prototypical, characteristic, even basic sentences, AA selects distinct (archetypal) sentences and thus induces variability and diversity in produced summaries. Experimental results on the DUC generic summarization data sets evidence the improvement of the proposed approach over the other closely related methods.

17.
As information is available in abundance for every topic on the internet, condensing the important information in the form of a summary would benefit a number of users. Hence, there is growing interest among the research community in developing new approaches to automatically summarize text. An automatic text summarization system generates a summary, i.e. a short text that includes all the important information of the document. Since the advent of text summarization in the 1950s, researchers have been trying to improve techniques for generating summaries so that machine-generated summaries match human-made ones. Summaries can be generated through extractive as well as abstractive methods. Abstractive methods are highly complex as they need extensive natural language processing. Therefore, the research community is focusing more on extractive methods, trying to achieve more coherent and meaningful summaries. Over the past decade, several extractive approaches have been developed for automatic summary generation that implement a number of machine learning and optimization techniques. This paper presents a comprehensive survey of extractive text summarization approaches developed in the last decade. Their needs are identified and their advantages and disadvantages are listed in a comparative manner. A few abstractive and multilingual text summarization approaches are also covered. Summary evaluation is another challenging issue in this research field. Therefore, both intrinsic and extrinsic methods of summary evaluation are described in detail, along with text summarization evaluation conferences and workshops. Furthermore, evaluation results of extractive summarization approaches are presented on some shared DUC datasets. Finally, the paper concludes with a discussion of useful future directions that can help researchers identify areas where further research is needed.

18.
The goal of abstractive summarization of multi-documents is to automatically produce a condensed version of the document text while maintaining the significant information. Most graph-based extractive methods represent sentences as bags of words and use content similarity measures, which may fail to detect semantically equivalent redundant sentences. On the other hand, graph-based abstractive methods depend on a domain expert to build a semantic graph from a manually created ontology, which requires time and effort. This work presents a semantic graph approach with an improved ranking algorithm for abstractive summarization of multi-documents. The semantic graph is built from the source documents so that the graph nodes denote predicate argument structures (PASs), the semantic structure of a sentence, identified automatically by semantic role labeling, while graph edges carry similarity weights computed from the semantic similarity between PASs. To reflect the impact of both the document and the document set on the PASs, the edges of the semantic graph are further augmented with PAS-to-document and PAS-to-document-set relationships. The important graph nodes (PASs) are ranked using the improved graph ranking algorithm. Redundant PASs are reduced by re-ranking them with maximal marginal relevance, and summary sentences are finally generated from the top-ranked PASs using language generation. The experiments in this research use DUC-2002, a standard dataset for document summarization. The findings indicate that the proposed approach performs better than other summarization approaches.
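A sketch of the redundancy-reduction step only: maximal marginal relevance (MMR) re-ranking over already-scored units (PASs in the paper, plain vectors here); semantic role labeling itself is not shown.

```python
# Sketch: MMR trades off salience against redundancy with already-selected units.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr(unit_vecs, salience, k=3, lam=0.7):
    selected, candidates = [], list(range(len(unit_vecs)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            redundancy = max((cosine(unit_vecs[i], unit_vecs[j]) for j in selected),
                             default=0.0)
            return lam * salience[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(7)
vecs = rng.normal(size=(8, 64))          # 8 candidate units
scores = rng.random(8)                   # salience from the graph-ranking stage
print(mmr(vecs, scores, k=3))
```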

19.
Generating a high-quality document summary requires describing the document concisely without losing information, which is a major challenge for automatic summarization. This paper argues that a high-quality summary must cover as much of the original document's information as possible while staying as compact as possible. From this perspective, two groups of features, entropy and relevance, are extracted from the document to trade off the summary's information coverage against its compactness. A regression-based supervised summarization technique is used to weigh the extracted features, and systematic experiments are conducted on both single-document and multi-document summarization. The results show that balancing entropy and relevance effectively improves summary quality in both settings.
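A minimal sketch of the two feature groups and the regression scorer, under the assumption that entropy is computed over a sentence's word distribution and relevance as similarity to the document centroid; real regression targets would be sentence-level quality scores (e.g. ROUGE) rather than the random values used here.

```python
# Sketch: entropy (coverage proxy) + relevance (compactness proxy), weighted
# by a fitted linear regression.
import numpy as np
from collections import Counter
from sklearn.linear_model import LinearRegression

def entropy(tokens):
    counts = np.array(list(Counter(tokens).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def relevance(sent_vec, doc_vec):
    return float(sent_vec @ doc_vec /
                 (np.linalg.norm(sent_vec) * np.linalg.norm(doc_vec) + 1e-12))

def features(token_lists, sent_vecs):
    doc_vec = np.mean(sent_vecs, axis=0)
    return np.array([[entropy(t), relevance(v, doc_vec)]
                     for t, v in zip(token_lists, sent_vecs)])

rng = np.random.default_rng(8)
tokens = [["a", "b", "b"], ["c", "d"], ["a", "c", "d", "e"]]
vecs = rng.normal(size=(3, 30))
X, y = features(tokens, vecs), rng.random(3)   # y stands in for supervision signals
reg = LinearRegression().fit(X, y)
print(reg.predict(X))    # learned trade-off between entropy and relevance
```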

20.
Nowadays, it is necessary that users have access to information in a concise form without losing any critical information. Document summarization is the automatic process of generating a short form of a document. In itemset-based document summarization, the weights of all terms are considered the same. In this paper, a new approach is proposed for multi-document summarization based on weighted patterns and term association measures. In the present study, the term weights are not equal across the corpus and are computed with weighted frequent itemset mining. Indeed, the proposed method enriches frequent itemset mining by weighting the terms in the corpus. In addition, the relationships among the terms in the corpus are considered using term association measures. Also, statistical features such as sentence length and sentence position are modified and matched to generate a summary with a greedy method. Based on ROUGE results on the DUC 2002 and DUC 2004 datasets, the proposed approach significantly outperforms state-of-the-art approaches.
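A much-simplified stand-in for weighted frequent itemset mining: frequent term pairs are scored with assumed term weights, and sentences are chosen greedily by the pair weight they cover; the thresholds and weights are illustrative, not the paper's.

```python
# Sketch: mine weighted term pairs above a support threshold, then greedily
# pick sentences that cover the largest remaining pair weight.
from itertools import combinations
from collections import defaultdict

def weighted_pairs(sentences, term_weight, min_support=1.0):
    support = defaultdict(float)
    for tokens in sentences:
        for a, b in combinations(sorted(set(tokens)), 2):
            support[(a, b)] += term_weight.get(a, 0.0) + term_weight.get(b, 0.0)
    return {pair: s for pair, s in support.items() if s >= min_support}

def greedy_summary(sentences, pairs, k=2):
    covered, chosen = set(), []
    for _ in range(k):
        def gain(i):
            toks = set(sentences[i])
            return sum(w for p, w in pairs.items()
                       if p not in covered and set(p) <= toks)
        best = max((i for i in range(len(sentences)) if i not in chosen), key=gain)
        chosen.append(best)
        covered |= {p for p in pairs if set(p) <= set(sentences[best])}
    return sorted(chosen)

sents = [["mining", "weighted", "itemset"],
         ["summary", "sentence", "position"],
         ["weighted", "itemset", "summary"]]
weights = {"weighted": 0.9, "itemset": 0.8, "summary": 0.7, "mining": 0.5,
           "sentence": 0.4, "position": 0.3}
print(greedy_summary(sents, weighted_pairs(sents, weights), k=2))
```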
