Similar Documents
20 similar documents found (search time: 125 ms)
1.
To further improve the speed and accuracy of web-page relevance judgment, a new sentence-weighting method for query-focused abstracts is proposed. Based on the result set returned for a query, phrases in the input query are identified by computing the mutual information between keywords; the page structure is analyzed using the tags in the page text; and the keyword-phrase and page-structure information is then incorporated into the sentence-weight computation. Experimental results show that query summaries generated by this algorithm outperform existing methods in both the speed and the accuracy of relevance judgment.

2.
This paper proposes a cloud-model-based method for selecting summary units, which accounts for both the randomness and the fuzziness of summary units to improve the performance of query-focused multi-document summarization. First, the relevance between each summary unit and the query is computed: the relevance of a unit to each query term is treated as a cloud drop, and by measuring the cloud's uncertainty, the units that are genuinely relevant to the query are identified. The query-relevant results are then adjusted by document-set importance: the similarity between a candidate sentence and every other candidate is treated as a cloud drop, sentence importance is computed from the cloud's numerical characteristics, and sentences that cover as much of the document set as possible are selected, avoiding summaries that answer the query from only one aspect. To demonstrate the method's effectiveness, experiments were conducted on a large-scale public English corpus, and the system achieved good results in an international open evaluation of automatic summarization.

3.
As online information grows, text summarization is becoming increasingly important. Most existing summarization methods generate summaries independently of the query. This paper proposes a summarization technique that incorporates query-conditioned sentence weighting into the computation of sentence importance. Experimental results show that the resulting summaries help users find information faster and more accurately.

4.
For query-focused multi-document summarization, this paper mixes the query sentence into the sentences of the multi-document collection and clusters all sentences with the efficient soft clustering algorithm SSC. Summary sentences are then extracted in round-robin fashion to produce the final summary. The method performed well on the DUC 2005 corpus.
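The round-robin extraction step can be sketched as follows; the cluster contents and budget below are invented placeholders, and a real system would first rank the sentences within each cluster:

```python
# Toy clusters of sentence identifiers (invented); assume each list is
# already ordered by within-cluster sentence importance.
clusters = [
    ["c0-s0", "c0-s1"],
    ["c1-s0"],
    ["c2-s0", "c2-s1", "c2-s2"],
]

def round_robin(clusters, budget):
    """Take one sentence per cluster in turn until the budget is met."""
    summary, i = [], 0
    queues = [list(c) for c in clusters]
    while len(summary) < budget and any(queues):
        q = queues[i % len(queues)]
        if q:
            summary.append(q.pop(0))  # pop the top-ranked remaining sentence
        i += 1
    return summary

print(round_robin(clusters, budget=4))
```

Cycling over clusters ensures every topic contributes before any topic contributes twice.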

5.
Timestamp-Based Multi-Document Summarization
News topics on websites often contain a large number of pages, and multi-document summarization helps readers grasp the main information quickly. This paper proposes using timestamps to improve both the extraction quality and the ordering of summary sentences, and describes the sentence-extraction method, sentence-importance computation, and redundancy-reduction method. Experiments show that the resulting summaries perform well and can be applied in practical systems.

6.
English Automatic Summarization Based on Concept Statistics and Semantic Hierarchy Analysis
Traditional summarization methods extract summary sentences from word statistics without semantic analysis of the text, which limits summary accuracy. To overcome this, the paper proposes a topic-concept-based summarization method and implements an English summarization system built on concept statistics and hierarchy analysis. Using WordNet, the system replaces traditional term-frequency statistics with concept statistics, builds a vector space model over topic concepts, and computes sentence importance from it. It also analyzes text structure according to the distribution of topic concepts in a concept hierarchy tree, partitions the text into meaning blocks, and extracts summary sentences per block, providing an initial solution to the unbalanced summary structure of multi-topic articles. The paper describes the construction of the concept hierarchy tree, the steps for extracting topic concepts, the computation of topic-concept-based sentence importance, and the meaning-block partitioning algorithm. Tests show that with concept statistics and semantic hierarchy analysis, the resulting vector space model is more suitable, the generated summaries are more accurate, and they reflect the main content of the original text more comprehensively.

7.
Chinese Automatic Summarization Based on Latent Semantic Indexing and Sentence Clustering
Automatic summarization is an important research topic in natural language processing. This paper proposes a Chinese summarization method based on latent semantic indexing and sentence clustering. Its distinctive features are that sentence similarity is computed with latent semantic indexing, and that hierarchical clustering is combined with K-medoids clustering for sentence clustering. This improves the accuracy of both sentence-similarity computation and topic partitioning, helping the generated summary cover the document's topics comprehensively while reducing its own redundancy. Experimental results confirm the method's effectiveness: compared with traditional clustering-based summarization, the quality of the generated summaries improves significantly.
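Latent semantic indexing compares sentences in a low-rank space obtained from a truncated SVD of the term-sentence matrix. A minimal sketch on an invented toy matrix (not the paper's data or term weighting):

```python
import numpy as np

# Toy term-sentence matrix: rows = terms, columns = sentences.
# In practice the entries would be TF-IDF weights.
A = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 3.0, 1.0],
])

# Truncated SVD: keep k latent semantic dimensions.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
sent_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one latent vector per sentence

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Sentences 0 and 1 share terms, so their latent similarity is high;
# sentences 0 and 2 share none, so it is low.
print(cos_sim(sent_vecs[0], sent_vecs[1]) > cos_sim(sent_vecs[0], sent_vecs[2]))
```

Clustering would then run on `sent_vecs` instead of the raw term vectors, so that sentences with different words but similar meaning can land in one cluster.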

8.
An Automatic Summarization Method Based on a Topic-Word Set
This paper proposes an automatic text summarization method based on a topic-word set, used to extract document summaries. From the extracted topic-word set, each sentence containing topic words is scored by the weighted sum of those words' weights, yielding a total weight for every sentence covered by the set. The highest-weighted sentences are then selected according to the target compression ratio and output in their original order. Experiments were conducted on the single-document summarization corpus of the HIT Information Retrieval Laboratory, using internal automatic eva…
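The topic-word-set scoring described above can be illustrated roughly as follows; the topic words, weights, and compression ratio are made up for illustration:

```python
# Hypothetical topic-word weights (in the paper these come from an
# extracted topic-word set; the words and weights here are invented).
topic_weights = {"summarization": 3.0, "sentence": 2.0, "weight": 1.5}

sentences = [
    "automatic summarization selects the best sentence",
    "the weather is nice today",
    "each sentence receives a weight for summarization",
]

def score(sentence):
    # Sum the weights of every topic word that occurs in the sentence.
    return sum(w for word, w in topic_weights.items() if word in sentence.split())

ranked = sorted(sentences, key=score, reverse=True)
# Keep the top sentences up to a compression ratio, then restore source order.
ratio = 0.5
n = max(1, int(len(sentences) * ratio))
summary = [s for s in sentences if s in ranked[:n]]
```

Outputting in source order (the final list comprehension) matches the paper's step of emitting selected sentences in their original sequence.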

9.
Multi-Document Summarization with the Topic Model LDA
Representing the multi-document summarization problem with probabilistic topic models has attracted researchers' attention in recent years. LDA (latent Dirichlet allocation) is one of the representative probabilistic generative topic models. This paper proposes an LDA-based summarization method: the number of topics in the LDA model is chosen by perplexity; Gibbs sampling yields the topic probability distribution of each sentence and the word probability distribution of each topic; the importance of each topic is determined by summing its weights over sentences; and two different sentence-weighting models are derived from the topic and sentence distributions of the LDA model. Using the ROUGE evaluation standard, the method is compared on the generic multi-document summarization test set DUC 2002 against the state-of-the-art SumBasic method and two other LDA-based multi-document summarization methods. The results show that the proposed method outperforms SumBasic on all ROUGE metrics and also compares favorably with the other LDA-based summarizers.
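One of the sentence-weighting ideas, scoring a sentence by its LDA topic mixture weighted by topic importance, can be sketched as below. The `theta` matrix stands in for the output of Gibbs sampling; all numbers are invented:

```python
import numpy as np

# Hypothetical output of LDA inference (e.g. Gibbs sampling):
# theta[i, t] = probability of topic t in sentence i (rows sum to 1).
theta = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

# Topic importance: sum each topic's weight over all sentences, then
# normalize -- one of the weighting ideas described in the abstract.
topic_importance = theta.sum(axis=0)
topic_importance /= topic_importance.sum()

# Sentence weight: expected importance of the sentence's topic mixture.
sentence_weights = theta @ topic_importance
best = int(np.argmax(sentence_weights))
```

Here sentence 1 scores highest because it concentrates its mass on the globally dominant topic.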

10.
郭红建  黄兵 《计算机应用研究》2013,30(11):3299-3301
To address two difficulties in multi-document summary generation, topic interruption and semantic incoherence between summary sentences, this paper applies a latent-semantic-analysis clustering algorithm to sentence ordering in order to improve summary quality. Summary sentences are first clustered with the LSA clustering algorithm to form topic sets, which resolves topic interruption. By computing each document's summary presentation power, the document with the greatest such power is selected as a template, and the summary sentences are then ordered in two passes against this template. Experimental results show that the proposed algorithm is effective and improves the readability of summaries.

11.
The goal of abstractive multi-document summarization is to automatically produce a condensed version of the document text while maintaining the significant information. Most graph-based extractive methods represent a sentence as a bag of words and rely on a content-similarity measure, which may fail to detect semantically equivalent redundant sentences. On the other hand, graph-based abstractive methods depend on a domain expert to build a semantic graph from a manually created ontology, which requires time and effort. This work presents a semantic-graph approach with an improved ranking algorithm for abstractive multi-document summarization. The semantic graph is built from the source documents so that the graph nodes denote predicate-argument structures (PASs), the semantic structure of a sentence, identified automatically via semantic role labeling, while graph edges carry similarity weights computed from PAS semantic similarity. To reflect the impact of both the document and the document set on PASs, the edges are further augmented with PAS-to-document and PAS-to-document-set relationships. The important graph nodes (PASs) are ranked with the improved graph ranking algorithm, redundant PASs are reduced by re-ranking with maximal marginal relevance, and summary sentences are finally generated from the top-ranked PASs using language generation. Experiments were carried out on DUC-2002, a standard dataset for document summarization. The findings show that the proposed approach performs better than other summarization approaches.
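Maximal marginal relevance re-ranking, used above to reduce redundant PASs, greedily trades relevance against similarity to already-selected items. A generic sketch with invented scores (not the paper's PAS similarities):

```python
import numpy as np

# Hypothetical relevance scores and pairwise similarities for 4 units
# (PASs in the paper; plain numbers here for illustration).
relevance = np.array([0.9, 0.85, 0.3, 0.5])
sim = np.array([
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.1, 0.2],
    [0.1, 0.1, 1.0, 0.3],
    [0.2, 0.2, 0.3, 1.0],
])

def mmr(relevance, sim, k, lam=0.5):
    """Greedy maximal marginal relevance selection of k items."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def marginal(i):
            # Penalize similarity to the closest already-selected item.
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=marginal)
        selected.append(best)
        candidates.remove(best)
    return selected

print(mmr(relevance, sim, k=2))  # item 1 is skipped as a near-duplicate of 0
```

Although item 1 has the second-highest relevance, its 0.9 similarity to the first pick pushes the more novel item 3 ahead of it.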

12.
Users increasingly need access to information in a concise form that loses no critical content. Document summarization is an automatic process of generating a short form of a document. In itemset-based document summarization, the weights of all terms are usually considered equal. In this paper, a new approach is proposed for multi-document summarization based on weighted patterns and term association measures. In the present study, the term weights are not equal across the context; they are computed with weighted frequent itemset mining. Indeed, the proposed method enriches frequent itemset mining by weighting the terms in the corpus. In addition, the relationships among the terms in the corpus are captured with term association measures. Statistical features such as sentence length and sentence position are also modified and matched to generate a summary with a greedy method. Based on results on the DUC 2002 and DUC 2004 datasets obtained with the ROUGE toolkit, the proposed approach significantly outperforms the state-of-the-art approaches.

13.
Traditional summarization methods use only the internal information of a Web document while ignoring its social information, such as tweets from Twitter, which can offer readers a perspective on an event. This paper proposes a framework named SoRTESum that exploits social information, such as reflections of document content, to extract summary sentences and social messages. Summarization is formulated in two steps: scoring and ranking. In the scoring step, the score of a sentence or social message is computed from intra-relations and inter-relations, which integrate the support of local and social information in a mutually reinforcing form; 16 features are proposed to calculate these relations. After scoring, the summary is generated by selecting the top m ranked sentences and social messages. SoRTESum was extensively evaluated on two datasets. Promising results show that (i) SoRTESum obtains significant ROUGE-score improvements over state-of-the-art baselines and results competitive with a learning-to-rank approach trained by RankBoost, and (ii) combining intra-relations and inter-relations benefits single-document summarization.

14.
Latent Dirichlet allocation (LDA) has been widely applied in recent years to text clustering, classification, paragraph segmentation, and related tasks, and has also been used for unsupervised query-focused multi-document summarization. It is regarded as a good shallow semantic model of text. Building on previous work, this paper proposes an LDA-based conditional random field (CRF) summarization method (LCAS), studies the role of LDA in supervised single-document summarization, adds the topics extracted by LDA as features to CRF training, and analyzes how the summarization results vary with the number of topics. Experimental results show that adding LDA features effectively improves the quality of a CRF summarization system whose input is traditional features alone.

15.
Graph models have been widely applied to document summarization, with sentences as graph nodes and inter-sentence similarity as edges. This paper presents a novel graph model for document summarization that exploits not only sentence-level relevance but also the relevance of the phrases contained in sentences. In short, we construct a phrase-sentence two-layer graph model (PSG) to summarize documents, and use it for both generic document summarization and query-focused summarization. Experimental results show that our model greatly outperforms existing work.

16.
Producing a high-quality summary requires describing the document concisely without losing information, a major challenge for automatic summarization. This paper argues that a high-quality summary must cover as much of the original document's information as possible while remaining as compact as possible. From this perspective, two groups of features, entropy and relevance, are extracted from documents to balance a summary's information coverage against its compactness. A regression-based supervised summarization technique weighs the extracted features, and systematic experiments are conducted on both single-document and multi-document summarization. The results show that balancing entropy and relevance effectively improves summary quality in both settings.

17.
The growth of information technology, exemplified by the Internet, has made information unprecedentedly easy to obtain, while also posing the challenge of using it effectively. Automatic summarization can greatly improve the efficiency of information use by automatically selecting representative sentences from a document. In recent years, summarization for English and Chinese has received wide attention and made substantial progress, while research on minority languages such as Uyghur remains insufficient. This paper builds an automatic summarization system for Uyghur. Documents are first preprocessed with Uyghur linguistic knowledge, keywords are then extracted, and extractive summaries are produced from those keywords. A comparison of TF-IDF-based and TextRank-based keyword extraction shows that TextRank keywords are better suited to summarization. The study demonstrates that, when Uyghur linguistic information is fully taken into account, keyword-based summarization can achieve satisfactory results.
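TextRank keyword extraction runs PageRank over a word co-occurrence graph. A minimal sketch on an invented English token list (a real Uyghur system would use language-specific preprocessing, a larger co-occurrence window, and typically weighted edges):

```python
from collections import defaultdict

# Toy token sequence after stop-word removal (invented); TextRank links
# words that co-occur within a sliding window.
tokens = ["summarization", "automatic", "summarization",
          "document", "summarization", "keyword"]
window = 1  # adjacent tokens only, for brevity

graph = defaultdict(set)
for i in range(len(tokens)):
    for j in range(i + 1, min(i + window + 1, len(tokens))):
        if tokens[i] != tokens[j]:
            graph[tokens[i]].add(tokens[j])
            graph[tokens[j]].add(tokens[i])

# Plain PageRank iteration over the undirected co-occurrence graph.
d = 0.85
scores = {w: 1.0 for w in graph}
for _ in range(30):
    scores = {w: (1 - d) + d * sum(scores[v] / len(graph[v]) for v in graph[w])
              for w in graph}

top = max(scores, key=scores.get)
```

The hub word accumulates score from all its neighbors, which is why frequent, well-connected terms surface as keywords.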

18.
In recent years, algebraic methods, more precisely matrix-decomposition approaches, have become a key tool for tackling the document summarization problem. Typical algebraic methods used in multi-document summarization (MDS) range from soft and hard clustering approaches to low-rank approximations. In this paper, we present a novel summarization method, AASum, which employs archetypal analysis for generic MDS. Archetypal analysis (AA) is a promising unsupervised learning tool that combines the advantages of clustering with the flexibility of matrix factorization. In document summarization, given a content-graph data-matrix representation of a set of documents, positively and/or negatively salient sentences are values on the data-set boundary. These extreme values, archetypes, can be computed using AA. While each sentence in a data set is estimated as a mixture of archetypal sentences, the archetypes themselves are restricted to being sparse mixtures, i.e., convex combinations of the original sentences. Since AA thus readily offers soft clustering, we suggest treating it as a method for simultaneous sentence clustering and ranking. Another important argument for using AA in MDS is that, in contrast to other factorization methods, which extract prototypical, characteristic, even basic sentences, AA selects distinct (archetypal) sentences and thus induces variability and diversity in the produced summaries. Experimental results on the DUC generic summarization data sets show the improvement of the proposed approach over other closely related methods.

19.
The technology of automatic document summarization is maturing and may provide a solution to the information-overload problem. Nowadays, document summarization plays an important role in information retrieval: with a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired ones. Document summarization is the process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization produces a summary delivering the majority of the information content of a set of documents about an explicit or implicit main topic. Our study focuses on sentence-based extractive document summarization, and we propose a generic summarization method based on sentence clustering. The proposed approach continues the sentence-clustering-based extractive summarization methods of Alguliev, Aliguliyev, and Bagirov (2005, Global optimization in the summarization of text documents, Automatic Control and Computer Sciences 39, 42–47), Aliguliyev (2006, A novel partitioning-based clustering method and generic document summarization, in Proceedings of WI–IATW'06, 18–22 December, pp. 626–629, Hong Kong, China), Alguliev and Alyguliev (2007, Summarization of text-based documents with a determination of latent topical sections and information-rich sentences, Automatic Control and Computer Sciences 41, 132–140), and Aliguliyev (2007, Automatic document summarization by sentence extraction, Journal of Computational Technologies 12, 5–15). The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure.
The experimental results on the open benchmark datasets DUC01 and DUC02 show that our proposed approach can improve performance compared to state-of-the-art summarization approaches.

20.
We propose a novel approach based on predictive quantization (PQ) for the online summarization of multiple time-varying data streams. A synopsis over a sliding window of the most recent entries is computed in one pass and dynamically updated in constant time. The correlation between consecutive data elements is effectively taken into account without any need for preprocessing. We extend PQ to multiple streams and propose structures for real-time summarization and querying of a massive number of streams; queries on any subsequence of a sliding window over multiple streams are processed in real time. We examine the two components of the proposed approach, prediction and quantization, separately, and investigate the space-accuracy trade-off of synopsis generation. Complementing the theoretical optimality of PQ-based approaches, we show that the proposed technique, even for very short prediction windows, significantly outperforms current techniques for a wide variety of query types on both synthetic and real data sets.
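A predictive quantizer in its simplest form predicts each element from the previous reconstructed value and quantizes only the residual; the predictor, step size, and stream below are illustrative assumptions, not the paper's design:

```python
# Minimal predictive-quantization sketch: last-value prediction plus a
# uniform residual quantizer with step size `step`.
def pq_encode(stream, step=0.5):
    codes, prev = [], 0.0
    for x in stream:
        residual = x - prev            # prediction error
        code = round(residual / step)  # uniform quantizer
        codes.append(code)
        prev = prev + code * step      # track the decoder-side reconstruction
    return codes

def pq_decode(codes, step=0.5):
    values, prev = [], 0.0
    for code in codes:
        prev = prev + code * step
        values.append(prev)
    return values

stream = [0.0, 0.4, 1.1, 1.2, 3.0]
codes = pq_encode(stream)
approx = pq_decode(codes)
# Per-element reconstruction error is bounded by half the quantization step.
print(max(abs(a - b) for a, b in zip(stream, approx)) <= 0.25)
```

Because the encoder quantizes residuals against its own reconstruction rather than the raw signal, the error stays bounded instead of drifting, which is what makes a one-pass constant-time synopsis update possible.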
