Similar Documents
20 similar documents found
1.
In this paper, we propose an unsupervised text summarization model which generates a summary by extracting salient sentences from the given document(s). In particular, we model text summarization as an integer linear programming problem. One of the advantages of this model is that it can directly discover key sentences in the given document(s) and cover the main content of the original document(s). The model also guarantees that the summary does not contain multiple sentences that convey the same information. The proposed model is quite general and can be used for both single- and multi-document summarization. We implemented our model on the multi-document summarization task. Experimental results on the DUC2005 and DUC2007 datasets show that the proposed approach outperforms the baseline systems.
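The abstract does not give the exact ILP formulation, so the following is only a minimal sketch of how such an extractive model can be posed as an integer linear program with PuLP, assuming a simple salience objective, a word budget, and pairwise redundancy constraints; the variable names, the 0.7 similarity threshold, and the 100-word budget are illustrative, not taken from the paper.

```python
# A minimal ILP sketch for extractive summarization (illustrative, not the paper's exact model).
# Binary variable x_i = 1 if sentence i is selected; the objective rewards salient sentences,
# a length budget bounds the summary, and highly similar sentence pairs are mutually excluded.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

def ilp_summarize(salience, lengths, similarity, budget=100, sim_threshold=0.7):
    n = len(salience)
    prob = LpProblem("extractive_summary", LpMaximize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(n)]

    # Objective: total salience of the selected sentences.
    prob += lpSum(salience[i] * x[i] for i in range(n))

    # Length budget (e.g., in words).
    prob += lpSum(lengths[i] * x[i] for i in range(n)) <= budget

    # Redundancy: never pick two sentences that are near-duplicates.
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i][j] >= sim_threshold:
                prob += x[i] + x[j] <= 1

    prob.solve(PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() == 1]
```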

2.
The goal of multi-document summarization is, given a query and multiple documents (a document set), to create a concise summary that expresses the key content of these documents while remaining relevant to the given query. A given document set usually contains several topics, each represented by a class of sentences, and a good summary should cover the most important topics. Most existing methods build a model to score sentences and then select the highest-scoring sentences to form the summary. Unlike these methods, we focus on the topics of the documents rather than on sentences, and treat summary generation as a problem of topic discovery, ranking and representation. We introduce dominant sets clustering (DSC) for the first time to discover topics, then build a model to evaluate the importance of the topics, and finally select sentences from the topics to compose the summary, balancing representativeness against redundancy. Experiments on the standard DUC2005, DUC2006 and DUC2007 datasets demonstrate the effectiveness of the proposed method.
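Dominant sets are commonly extracted with replicator dynamics on the sentence similarity matrix; the sketch below is a loose illustration of that idea rather than the paper's implementation, peeling off one dominant set (topic) at a time from a nonnegative similarity matrix with zero diagonal. The convergence tolerance and the membership cutoff are assumed values.

```python
# Illustrative dominant-sets clustering via replicator dynamics (not the paper's exact code).
import numpy as np

def dominant_sets(A, tol=1e-6, cutoff=1e-4, max_iter=1000):
    """A: symmetric nonnegative similarity matrix with zero diagonal."""
    n = A.shape[0]
    remaining = list(range(n))
    clusters = []
    while len(remaining) > 1:
        S = A[np.ix_(remaining, remaining)]
        x = np.full(len(remaining), 1.0 / len(remaining))
        for _ in range(max_iter):
            x_new = x * (S @ x)              # replicator dynamics update
            total = x_new.sum()
            if total == 0:
                break
            x_new /= total
            if np.linalg.norm(x_new - x, 1) < tol:
                x = x_new
                break
            x = x_new
        members = [remaining[i] for i, w in enumerate(x) if w > cutoff]
        if not members:                      # no coherent cluster left
            break
        clusters.append(members)
        remaining = [i for i in remaining if i not in set(members)]
    if remaining:
        clusters.append(remaining)
    return clusters
```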

3.
Multi-document summarization via submodularity
Multi-document summarization is becoming an important issue in the Information Retrieval community. It aims to distill the most important information from a set of documents to generate a compressed summary. Given a set of documents as input, most existing multi-document summarization approaches utilize different sentence selection techniques to extract a set of sentences from the document set as the summary. The submodularity hidden in term coverage and textual-unit similarity motivates us to incorporate this property into our solution to multi-document summarization tasks. In this paper, we propose a new principled and versatile framework for different multi-document summarization tasks using submodular functions (Nemhauser et al. in Math. Prog. 14(1):265–294, 1978) based on term coverage and textual-unit similarity, which can be efficiently optimized through an improved greedy algorithm. We show that four known summarization tasks, including generic, query-focused, update, and comparative summarization, can be modeled as different variations derived from the proposed framework. Experiments on benchmark summarization data sets (e.g., the DUC04-06, TAC08 and TDT2 corpora) demonstrate the efficacy and effectiveness of the proposed framework for general multi-document summarization tasks.
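The classical greedy algorithm for monotone submodular maximization under a budget gives a constant-factor approximation; the sketch below shows a generic cost-scaled greedy selection in the spirit of such frameworks, assuming the caller supplies the submodular coverage objective. The scaling exponent r and the budget are placeholders, not the paper's tuned settings.

```python
# Generic budgeted greedy for a monotone submodular objective (a sketch, not the authors' system).
# At each step it picks the sentence with the best marginal gain per (scaled) cost.
def greedy_submodular(sentences, costs, objective, budget, r=1.0):
    selected, total_cost = [], 0.0
    current = objective(selected)
    candidates = set(range(len(sentences)))
    while candidates:
        best, best_ratio, best_value = None, -1.0, current
        for i in candidates:
            if total_cost + costs[i] > budget:
                continue
            value = objective(selected + [i])
            ratio = (value - current) / (costs[i] ** r)
            if ratio > best_ratio:
                best, best_ratio, best_value = i, ratio, value
        if best is None:          # nothing else fits in the budget
            break
        selected.append(best)
        total_cost += costs[best]
        current = best_value
        candidates.remove(best)
    return selected
```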

4.
This paper proposes an optimization-based model for generic document summarization. The model generates a summary by extracting salient sentences from documents. The approach uses sentence-to-document-collection, summary-to-document-collection and sentence-to-sentence relations to select salient sentences from the given document collection and reduce redundancy in the summary. An improved differential evolution algorithm has been created to solve the optimization problem. The algorithm can adjust its crossover rate adaptively according to the fitness of individuals. We implemented the proposed model on the multi-document summarization task. Experiments have been performed on the DUC2002 and DUC2004 data sets. The experimental results provide strong evidence that the proposed optimization-based approach is a viable method for document summarization.

5.
This paper proposes a constraint-driven document summarization approach emphasizing the following two requirements: (1) diversity in summarization, which seeks to reduce redundancy among sentences in the summary, and (2) sufficient coverage, which focuses on avoiding the loss of the document’s main information when generating the summary. By tuning the constraint parameters, the constraint-driven document summarization models can balance content coverage and diversity in a summary. The models are formulated as a quadratic integer programming (QIP) problem. To solve the QIP problem we used a discrete PSO algorithm. The models are implemented on the multi-document summarization task. The comparative results show that the proposed models outperform other methods on the DUC2005 and DUC2007 datasets.

6.
With the rapid growth of information on the Internet and in electronic government, automatic multi-document summarization has become an important task. Multi-document summarization is an optimization problem requiring simultaneous optimization of more than one objective function. In this study, when building summaries from multiple documents, we attempt to balance two objectives, content coverage and redundancy. Our goal is to investigate three fundamental aspects of the problem, i.e. designing an optimization model, solving the optimization problem and finding the best summary. We model multi-document summarization as a Quadratic Boolean Programming (QBP) problem where the objective function is a weighted combination of the content coverage and redundancy objectives. The objective function measures possible summaries based on the identified salient sentences and the overlap information between selected sentences. An innovative aspect of our model lies in its ability to remove redundancy while selecting representative sentences. The QBP problem has been solved using a binary differential evolution algorithm. Evaluation of the model has been performed on the DUC2002, DUC2004 and DUC2006 data sets. We evaluated our model automatically using the ROUGE toolkit and report the significance of our results through 95% confidence intervals. The experimental results show that the optimization-based approach to document summarization is a truly promising research direction.
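As a rough illustration of the kind of weighted quadratic Boolean objective described in this abstract, the function below scores a 0/1 selection vector by combining per-sentence salience (coverage) with a penalty on pairwise overlap between the selected sentences. The weight lambda, the zero-diagonal overlap matrix, and the overall form are assumptions, not the paper's exact formulation.

```python
# Sketch of a quadratic Boolean objective trading off coverage against redundancy
# (illustrative only; the paper's exact weighting and constraints may differ).
import numpy as np

def qbp_objective(x, salience, overlap, lam=0.5):
    """x: 0/1 selection vector, salience: per-sentence scores,
    overlap: pairwise sentence-overlap matrix (zero diagonal), lam: redundancy weight."""
    x = np.asarray(x, dtype=float)
    coverage = salience @ x                 # linear coverage term
    redundancy = 0.5 * x @ overlap @ x      # quadratic redundancy term
    return coverage - lam * redundancy
```

A binary metaheuristic such as the binary differential evolution mentioned above would then search over selection vectors x to maximize this score subject to a length constraint.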

7.
Automatic summarization aims to compress lengthy documents into a few short paragraphs, presenting the information to users comprehensively and concisely and improving the efficiency and accuracy with which users obtain information. The proposed method builds on LDA (Latent Dirichlet Allocation), using Gibbs sampling to estimate the probability distributions of topics over words and of sentences over topics, and combines the LDA parameters with a spectral clustering algorithm to extract multi-document summaries. The method uses a linear formula to integrate sentence weights and extracts multi-document summaries of 400 characters. Summary quality was evaluated on the DUC2002 dataset using the ROUGE automatic evaluation toolkit, and the results show that the method effectively improves summary quality.
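As a rough sketch of the LDA side of such a pipeline, the snippet below uses gensim to learn topic distributions and scores each sentence from its topic mixture; note that gensim's LdaModel uses variational inference rather than the Gibbs sampling described here, and the simple concentration-based score stands in for the paper's linear weighting formula.

```python
# Sketch: topic-based sentence scoring with gensim LDA (variational, not Gibbs as in the paper).
from gensim import corpora, models

def lda_sentence_scores(tokenized_sentences, num_topics=10):
    dictionary = corpora.Dictionary(tokenized_sentences)
    corpus = [dictionary.doc2bow(s) for s in tokenized_sentences]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary, passes=10)

    scores = []
    for bow in corpus:
        # P(topic | sentence); score each sentence by the concentration of its topic
        # distribution (sum of squared topic probabilities) as a simple stand-in
        # for a learned linear combination of topic weights.
        topic_dist = dict(lda.get_document_topics(bow, minimum_probability=0.0))
        scores.append(sum(prob * prob for prob in topic_dist.values()))
    return scores
```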

8.
Multi-document summarization is the process of extracting salient information from a set of source texts and presenting that information to the user in a condensed form. In this paper, we propose a multi-document summarization system which generates an extractive generic summary with maximum relevance and minimum redundancy by representing each sentence of the input document as a vector of words in the Proper Noun, Noun, Verb and Adjective sets. Five features associated with the sentences, namely TF_ISF, Aggregate Cross-Sentence Similarity, Title Similarity, Proper Noun and Sentence Length, are extracted, and scores are assigned to sentences based on these features. The weights assigned to different features may vary depending upon the nature of the document, and it is hard to discover the most appropriate weight for each feature, which makes generating a good summary a very tough task without human intelligence. The multi-document summarization problem has a large number of decision parameters and possible solutions from which the most optimal summary is to be generated, and the generated summary may not guarantee the essential quality and may be far from the ideal human-generated summary. To address this issue, we propose a population-based multicriteria optimization method with multiple objective functions. Three objective functions are selected to determine an optimal summary, with maximum relevance, diversity and novelty, from a global population of summaries by considering both the statistical and semantic aspects of the documents. Semantic aspects are captured by Latent Semantic Analysis (LSA) and Non-negative Matrix Factorization (NMF) techniques. Experiments have been performed on the DUC 2002, DUC 2004 and DUC 2006 datasets using the ROUGE toolkit. Experimental results show that our system outperforms state-of-the-art works in terms of Recall and Precision.
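For the semantic side of this kind of method, the following sketch shows one common way to obtain LSA and NMF sentence representations from a TF-IDF sentence-term matrix with scikit-learn; the component count and preprocessing are illustrative, not the authors' configuration.

```python
# Sketch: LSA (TruncatedSVD) and NMF sentence representations with scikit-learn
# (illustrative of the semantic features mentioned, not the paper's exact setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, NMF

def semantic_sentence_features(sentences, n_components=5):
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(sentences)              # sentence-term matrix

    lsa = TruncatedSVD(n_components=n_components, random_state=0)
    lsa_repr = lsa.fit_transform(X)                 # latent semantic space

    nmf = NMF(n_components=n_components, init="nndsvd", random_state=0, max_iter=400)
    nmf_repr = nmf.fit_transform(X)                 # nonnegative topic-like factors

    return lsa_repr, nmf_repr
```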

9.
For graph-based multi-document summarization, this paper proposes a method that incorporates Wikipedia entity information into graph ranking to improve summary quality. First, the Wikipedia entries of high-frequency entities in the document set are extracted as background knowledge for that document set. Then the PageRank algorithm is used to rank the sentences of the document set, after which an improved DivRank algorithm ranks the sentences of the document set and the background knowledge together. Finally, the two rankings are linearly combined to determine the final ranking of the document sentences, from which summary sentences are selected. Evaluation on the DUC2005 dataset shows that the method can effectively exploit Wikipedia knowledge to improve summary quality.
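A minimal sketch of the first ranking stage, PageRank over a sentence similarity graph, using networkx; the cosine-similarity edge weighting and the 0.1 edge threshold are assumptions, and the improved DivRank stage and the linear combination of the two rankings are not reproduced here.

```python
# Sketch: PageRank over a sentence similarity graph with networkx
# (only the first ranking stage; the improved DivRank step is not shown).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pagerank_sentence_scores(sentences, threshold=0.1):
    X = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(X)

    G = nx.Graph()
    G.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > threshold:
                G.add_edge(i, j, weight=float(sim[i, j]))

    return nx.pagerank(G, weight="weight")   # dict: sentence index -> score
```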

10.
This work proposes an approach that uses statistical tools to improve content selection in multi-document automatic text summarization. The method uses a trainable summarizer, which takes into account several features: the similarity of words among sentences, the similarity of words among paragraphs, the text format, cue-phrases, a score related to the frequency of terms in the whole document, the title, sentence location and the occurrence of non-essential information. The effect of each of these sentence features on the summarization task is investigated. These features are then used in combination to construct text summarizer models based on a maximum entropy model, a naive-Bayes classifier, and a support vector machine. To produce the final summary, the three models are combined into a hybrid model that ranks the sentences in order of importance. The performance of this new method has been tested using the DUC 2002 data corpus. The effectiveness of this technique is measured using the ROUGE score, and the results are promising when compared with some existing techniques.
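One plausible reading of the hybrid ranking step is averaging the positive-class probabilities of the three trained classifiers; the sketch below uses scikit-learn stand-ins (LogisticRegression in the role of the maximum-entropy model) and assumes dense numeric feature vectors and labeled training sentences are available.

```python
# Sketch of a hybrid sentence ranker averaging three classifiers' probabilities
# (scikit-learn stand-ins; LogisticRegression plays the role of the maximum-entropy model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def train_hybrid_ranker(X_train, y_train):
    models = [
        LogisticRegression(max_iter=1000),
        GaussianNB(),
        SVC(probability=True),
    ]
    for m in models:
        m.fit(X_train, y_train)       # y: 1 = sentence belongs in the summary
    return models

def rank_sentences(models, X_sentences):
    # Average P(summary-worthy) across the three models and rank descending.
    probs = np.mean([m.predict_proba(X_sentences)[:, 1] for m in models], axis=0)
    return np.argsort(-probs)
```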

11.
In text summarization, relevance and coverage are two main criteria that decide the quality of a summary. In this paper, we propose a new multi-document summarization approach, SumCR, via sentence extraction. A novel feature called Exemplar is introduced to help deal with these two concerns simultaneously during sentence ranking. Unlike conventional approaches, where the relevance value of each sentence is calculated based on the whole collection of sentences, the Exemplar value of each sentence in SumCR is obtained within a subset of similar sentences. A fuzzy medoid-based clustering approach is used to produce sentence clusters, or subsets, each of which corresponds to a subtopic of the related topic. Such a subtopic-based feature captures the relevance of each sentence within different subtopics and thus enhances the chance of SumCR producing a summary with wider coverage and less redundancy. Another feature we incorporate in SumCR is Position, i.e., the position at which each sentence appears in the corresponding document. The final score of each sentence is a combination of the subtopic-level feature Exemplar and the document-level feature Position. Experimental studies on DUC benchmark data show the good performance of SumCR and its potential in summarization tasks.

12.
Sentence-based multi-document summarization is the task of generating a succinct summary of a document collection, which consists of the most salient document sentences. In recent years, the increasing availability of semantics-based models (e.g., ontologies and taxonomies) has prompted researchers to investigate their usefulness for improving summarizer performance. However, semantics-based document analysis is often applied as a preprocessing step, rather than integrating the discovered knowledge into the summarization process. This paper proposes a novel summarizer, namely Yago-based Summarizer, that relies on an ontology-based evaluation and selection of the document sentences. To capture the actual meaning and context of the document sentences and generate sound document summaries, an established entity recognition and disambiguation step based on the Yago ontology is integrated into the summarization process. The experimental results, which were achieved on the DUC’04 benchmark collections, demonstrate the effectiveness of the proposed approach compared to a large number of competitors as well as the qualitative soundness of the generated summaries.

13.
A New Approach for Multi-Document Update Summarization


14.
Most existing research on applying matrix factorization approaches to query-focused multi-document summarization (Q-MDS) explores either soft/hard clustering or low-rank approximation methods. We employ a different kind of matrix factorization method, namely weighted archetypal analysis (wAA), for Q-MDS. In query-focused summarization, given a graph representation of a set of sentences weighted by similarity to the given query, positively and/or negatively salient sentences are values on the boundary of the weighted data set. We choose to use wAA to compute these extreme values, archetypes, and hence to estimate the importance of sentences in the target document set. We investigate the impact of using the multi-element graph model for query-focused summarization via wAA. We conducted experiments on the data of the Document Understanding Conference (DUC) 2005 and 2006. Experimental results show the improvement of the proposed approach over other closely related methods and many state-of-the-art systems.

15.
To address the problem that existing multi-document extraction methods do not make good use of sentence topic information and semantic information, a multi-document summarization method based on a sentence graph model fusing multiple sources of information is proposed. First, a sentence graph model is built with sentences as nodes. Then, the sentence topic probability distributions obtained from a sentence-level Bayesian topic model and the sentence semantic similarities obtained from a word-vector model are fused to obtain the final relevance between sentences; this combination of topic information and semantic information serves as the edge weights of the sentence graph model. Finally, a summarization method based on the minimum dominating set of the sentence graph is used to produce the multi-document summary. By fusing multiple sources of information in the sentence graph model, the method combines the topic, semantic and relational information between sentences. Experimental results show that the method effectively improves the overall performance of extractive summarization.
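For the final step, networkx offers an approximation algorithm for the minimum (weighted) dominating set; the sketch below builds an unweighted sentence graph from a fused similarity matrix with an assumed 0.2 edge threshold and is only a loose illustration of the approach, not the paper's implementation.

```python
# Sketch: summary sentences as an approximate minimum dominating set of the sentence graph
# (illustrative; the fused topic/semantic edge weights and the threshold are assumptions).
import networkx as nx
from networkx.algorithms.approximation import min_weighted_dominating_set

def dominating_set_summary(fused_similarity, threshold=0.2):
    n = len(fused_similarity)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if fused_similarity[i][j] >= threshold:
                G.add_edge(i, j)
    # With no node weights, this approximates the minimum dominating set of the sentence graph.
    return sorted(min_weighted_dominating_set(G))
```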

16.
With the number of documents describing real-world events and event-oriented information needs rapidly growing on a daily basis, the need for efficient retrieval and concise presentation of event-related information is becoming apparent. Nonetheless, the majority of information retrieval and text summarization methods rely on shallow document representations that do not account for the semantics of events. In this article, we present event graphs, a novel event-based document representation model that filters and structures the information about events described in text. To construct the event graphs, we combine machine learning and rule-based models to extract sentence-level event mentions and determine the temporal relations between them. Building on event graphs, we present novel models for information retrieval and multi-document summarization. The information retrieval model measures the similarity between queries and documents by computing graph kernels over event graphs. The extractive multi-document summarization model selects sentences based on the relevance of the individual event mentions and the temporal structure of events. Experimental evaluation shows that our retrieval model significantly outperforms well-established retrieval models on event-oriented test collections, while the summarization model outperforms competitive models from shared multi-document summarization tasks.

17.
Sentence extraction is a widely adopted text summarization technique where the most important sentences are extracted from document(s) and presented as a summary. The first step towards sentence extraction is to rank sentences in order of their importance to the summary. This paper proposes a novel graph-based ranking method, iSpreadRank, to perform this task. iSpreadRank models a set of topic-related documents as a sentence similarity network. Based on such a network model, iSpreadRank exploits spreading activation theory to formulate a general concept from social network analysis: the importance of a node in a network (i.e., a sentence in this paper) is determined not only by the number of nodes to which it connects, but also by the importance of its connected nodes. The algorithm recursively re-weights the importance of sentences by spreading their sentence-specific feature scores throughout the network to adjust the importance of other sentences. Consequently, a ranking of sentences indicating their relative importance is inferred. This paper also develops an approach to produce a generic extractive summary according to the inferred sentence ranking. The proposed summarization method is evaluated using the DUC 2004 data set and is found to perform well. Experimental results show that the proposed method obtains a ROUGE-1 score of 0.38068, a difference of only 0.00156 compared with the best participant in the DUC 2004 evaluation.
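A simplified spreading-activation iteration in the spirit described here: initial feature-based sentence scores are repeatedly propagated over a row-normalized sentence similarity matrix and blended with the original scores. The damping factor and convergence tolerance are assumed, and this is not the exact iSpreadRank update rule.

```python
# Simplified spreading-activation ranking over a sentence similarity network
# (in the spirit of iSpreadRank; not the paper's exact update rule).
import numpy as np

def spread_rank(similarity, initial_scores, alpha=0.85, tol=1e-6, max_iter=100):
    W = np.asarray(similarity, dtype=float)
    np.fill_diagonal(W, 0.0)
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    W = W / row_sums                      # row-normalized propagation matrix

    s0 = np.asarray(initial_scores, dtype=float)
    s = s0.copy()
    for _ in range(max_iter):
        s_new = alpha * (W.T @ s) + (1 - alpha) * s0   # spread, then blend with seeds
        if np.abs(s_new - s).sum() < tol:
            return s_new
        s = s_new
    return s
```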

18.
We propose a novel text summarization model based on a 0–1 non-linear programming problem. The proposed model covers the main content of the given document(s) through sentence assignment. We implemented our model on the multi-document summarization task. When comparing our method to several existing summarization methods on the open DUC2001 and DUC2002 datasets, we found that the proposed method improves the summarization results significantly. The methods were evaluated using the ROUGE-1, ROUGE-2 and ROUGE-W metrics.

19.
A Multi-Document Summarization Method Based on Event-Term Semantic Graph Clustering
Event-based extractive summarization methods generally first extract the sentences that describe important events and then reorganize them to generate a summary. This paper defines an event as an event term together with its associated named entities, and focuses on event-term semantic relations obtained from external semantic resources. First, an event-term semantic relation graph is built from these relations and the event terms are clustered using an improved DBSCAN algorithm; next, a representative event term is selected for each cluster, or a whole cluster of event terms is selected, to represent the topics of the document set; finally, the most important sentences containing the representative terms are extracted from the documents to generate the summary. The experimental results demonstrate that considering event-term semantic relations in multi-document summarization is both necessary and feasible.
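The clustering step can be illustrated with scikit-learn's plain DBSCAN on a precomputed distance matrix derived from the event-term semantic relations; the eps and min_samples values are placeholders, and the improved DBSCAN variant described in the paper is not reproduced.

```python
# Sketch: clustering event terms with DBSCAN on a precomputed semantic-distance matrix
# (plain scikit-learn DBSCAN, not the improved variant described in the paper).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_event_terms(semantic_similarity, eps=0.5, min_samples=2):
    sim = np.asarray(semantic_similarity, dtype=float)
    distance = 1.0 - sim                       # turn similarity into a distance
    np.fill_diagonal(distance, 0.0)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(distance)
    return labels                              # -1 marks noise terms
```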

20.
The SBGA system treats multi-document summarization as a combinatorial optimization process of extracting sentences from the source document set, and uses an evolutionary algorithm to find a near-optimal solution. Compared with clustering-based sentence extraction, sentence extraction based on an evolutionary algorithm is oriented towards the summary as a whole and can therefore obtain a better near-optimal summary. The fitness function of the evolutionary algorithm considers four criteria for a good summary: a length that meets the user's requirement, high information coverage, preservation of the important information conveyed by the original text, and no redundancy. In addition, to improve the precision of term-frequency computation, SBGA adopts an improved term-frequency calculation method, TFS, which adds the weighted frequencies of a word's synonyms to the word's own frequency. Experimental results on the DUC2004 test set show that sentence extraction based on an evolutionary algorithm performs well: its ROUGE-1 score is only 0.55% lower than that of the best participating system at DUC2004. The improved term-frequency method TFS also contributes to improving summary quality.
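A rough sketch of a TFS-style count, adding down-weighted synonym frequencies to each word's own frequency; the use of WordNet (via NLTK) as the synonym source and the 0.5 synonym weight are assumptions, since the abstract does not specify the synonym resource or weights.

```python
# Sketch of a TFS-style term frequency: each word's count plus weighted counts of its synonyms
# (WordNet via NLTK as an assumed synonym source; the weight 0.5 is illustrative).
from collections import Counter
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def tfs_counts(tokens, synonym_weight=0.5):
    tf = Counter(tokens)
    tfs = {}
    for word, count in tf.items():
        synonyms = {lemma.name().lower()
                    for syn in wordnet.synsets(word)
                    for lemma in syn.lemmas()} - {word}
        syn_freq = sum(tf.get(s, 0) for s in synonyms)
        tfs[word] = count + synonym_weight * syn_freq
    return tfs
```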
