Similar Documents
20 similar documents found.
1.
温嘉宝  杨敏 《集成技术》2024,13(1):62-71
The goal of automatic summarization of judicial documents is to let computers automatically select, extract, and compress the important information in legal texts, thereby reducing the workload of legal practitioners. Most current summarization algorithms based on pre-trained language models impose a limit on input length and therefore cannot summarize long documents effectively. To address this, this paper proposes a new extractive summarization algorithm that uses a pre-trained language model to generate sentence vectors and then fuses sentence vectors, sentence positions, and sentence lengths with a Transformer encoder structure to select summary sentences. Experimental results show that the algorithm handles long-document summarization effectively. Furthermore, on the summarization dataset of the 2020 China AI and Law challenge (CAIL), the model significantly outperforms the baseline models on ROUGE-1, ROUGE-2, and ROUGE-L.
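The fusion step described in this abstract can be illustrated with a minimal PyTorch sketch (not the authors' released code): pre-computed sentence vectors are combined with learned position and length embeddings, passed through a Transformer encoder, and scored with a sigmoid. All layer sizes and hyper-parameters below are illustrative assumptions.

```python
# A minimal sketch of sentence-level extractive scoring with a Transformer
# encoder (illustrative only; hyper-parameters are assumptions).
import torch
import torch.nn as nn

class ExtractiveScorer(nn.Module):
    def __init__(self, sent_dim=768, max_sents=512, max_len_bucket=100, d_model=768):
        super().__init__()
        self.pos_emb = nn.Embedding(max_sents, d_model)        # sentence position
        self.len_emb = nn.Embedding(max_len_bucket, d_model)   # bucketed sentence length
        self.proj = nn.Linear(sent_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)

    def forward(self, sent_vecs, positions, length_buckets):
        # sent_vecs: (batch, n_sents, sent_dim) produced by a pre-trained LM
        x = self.proj(sent_vecs) + self.pos_emb(positions) + self.len_emb(length_buckets)
        h = self.encoder(x)                                # fuse document-level context
        return torch.sigmoid(self.score(h)).squeeze(-1)    # per-sentence probability

# Usage: take the top-k sentences by predicted probability as the summary.
```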

2.
Extractive summarization selects the sentences that best represent the core content of a document as its summary, while discourse nuclearity analysis identifies the primary and secondary content of a text from the perspective of discourse structure. The two tasks are therefore closely related, and nuclearity information can guide sentence extraction. This paper proposes a single-document extractive summarization method based on discourse nuclearity, building a neural model that jointly learns nuclearity analysis and summarization. The model considers semantic information at the word and phrase level together with structural information such as nuclearity, and extracts the sentences that best represent the document's core content through a global optimization over the whole text. Experimental results show that the method significantly improves ROUGE scores compared with mainstream single-document extractive summarization methods.

3.
Text summarization is either extractive or abstractive. Extractive summarization selects the most salient pieces of information (words, phrases, and/or sentences) from a source document without adding any external information. Abstractive summarization allows an internal representation of the source document so as to produce a faithful summary of the source. In this case, external text can be inserted into the generated summary. Because of the complexity of the abstractive approach, the vast majority of work in text summarization has adopted an extractive approach. In this work, we focus on concept fusion and generalization, i.e. where different concepts appearing in a sentence can be replaced by one concept which covers the meanings of all of them. This is one operation that can be used as part of an abstractive text summarization system. The main goal of this contribution is to enrich the research efforts on abstractive text summarization with a novel approach that allows the generalization of sentences using semantic resources. This work should be useful in intelligent systems more generally since it introduces a means to shorten sentences by producing more general (hence abstractions of the) sentences. It could be used, for instance, to display shorter texts in applications for mobile devices. It should also improve the quality of the generated text summaries by mentioning key (general) concepts. One can think of using the approach in reasoning systems where different concepts appearing in the same context are related to one another with the aim of finding a more general representation of the concepts. This could be in the context of Goal Formulation, expert systems, scenario recognition, and cognitive reasoning more generally. We present our methodology for the generalization and fusion of concepts that appear in sentences. This is achieved through (1) the detection and extraction of what we define as generalizable sentences and (2) the generation and reduction of the space of generalization versions. We introduce two approaches we have designed to select the best sentences from the space of generalization versions. Using four NLTK corpora, the first approach estimates the “acceptability” of a given generalization version. The second approach is Machine Learning-based and uses contextual and specific features. The recall, precision and F1-score measures resulting from the evaluation of the concept generalization and fusion approach are presented.
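A small illustration of the kind of concept generalization described here, using WordNet through NLTK to replace two co-occurring concepts with their lowest common hypernym. This is a simplified sketch of the general idea, not the authors' full pipeline; the function name is hypothetical.

```python
# Sketch: fuse two concepts into a shared, more general concept via WordNet.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def generalize(word_a, word_b):
    """Return the name of the lowest common hypernym of two nouns, if any."""
    syns_a = wn.synsets(word_a, pos=wn.NOUN)
    syns_b = wn.synsets(word_b, pos=wn.NOUN)
    if not syns_a or not syns_b:
        return None
    common = syns_a[0].lowest_common_hypernyms(syns_b[0])
    return common[0].lemma_names()[0] if common else None

# e.g. "apples and pears" could be shortened to a sentence mentioning the more
# general concept returned here (typically something like 'edible_fruit').
print(generalize("apple", "pear"))
```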

4.
The entity linking task consists in automatically identifying and linking the entities mentioned in a text to their uniform resource identifiers in a given knowledge base. This task is very challenging due to its natural language ambiguity. However, not all the entities mentioned in the document have the same utility in understanding the topics being discussed. Thus, the related problem of identifying the most relevant entities present in the document, also known as salient entities (SE), is attracting increasing interest. In this paper, we propose salient entity linking, a novel supervised 2-step algorithm comprehensively addressing both entity linking and saliency detection. The first step is aimed at identifying a set of candidate entities that are likely to be mentioned in the document. The second step, besides detecting linked entities, also scores them according to their saliency. Experiments conducted on 2 different data sets show that the proposed algorithm outperforms state-of-the-art competitors and is able to detect SE with high accuracy. Furthermore, we used salient entity linking for extractive text summarization. We found that entity saliency can be incorporated into text summarizers to extract salient sentences from text. The resulting summarizers outperform well-known summarization systems, proving the importance of using the SE information.

5.
Text summarization is the process of automatically creating a shorter version of one or more text documents. It is an important way of finding relevant information in large text libraries or on the Internet. Essentially, text summarization techniques are classified as Extractive and Abstractive. Extractive techniques perform text summarization by selecting sentences of documents according to some criteria. Abstractive summaries attempt to improve the coherence among sentences by eliminating redundancies and clarifying the context of sentences. Sentence scoring is the technique most commonly used for extractive text summarization. This paper describes and performs a quantitative and qualitative assessment of 15 algorithms for sentence scoring available in the literature. Three different datasets (News, Blogs and Article contexts) were evaluated. In addition, directions for improving the sentence-extraction results obtained are suggested.
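As one concrete example of the sentence-scoring family surveyed here, a minimal Luhn-style word-frequency scorer; this is an illustrative sketch, not one of the 15 algorithms exactly as implemented in the paper, and the stop-word list is an assumption.

```python
# Sketch: score sentences by the normalized frequency of their content words.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "for", "on"}

def score_sentences(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    top = max(freq.values()) if freq else 1
    scores = []
    for s in sentences:
        toks = [w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOPWORDS]
        score = sum(freq[w] / top for w in toks) / (len(toks) or 1)
        scores.append((score, s))
    return sorted(scores, reverse=True)   # highest-scoring sentences first

# The summary is then the top-n sentences, re-ordered by their original position.
```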

6.
The idea of automatic summarization dates back to 1958, when Luhn invented the “auto abstract” (Luhn, 1958). Since then, many diverse automatic summarization approaches have been proposed, but no single technique has solved the increasingly urgent need for automatic summarization. Rather than proposing one more such technique, we suggest that the best solution is likely a system able to combine multiple summarization techniques, as required by the type of documents being summarized. Thus, this paper presents HAUSS: a framework to quickly build specialized summarizers, integrating several base techniques into a single approach. To recognize relevant text fragments, rules are created that combine frequency, centrality, citation and linguistic information in a context-dependent way. An incremental knowledge acquisition framework strongly supports the creation of these rules, using a training corpus to guide rule acquisition, and produce a powerful knowledge base specific to the domain. Using HAUSS, we created a knowledge base for catchphrase extraction in legal text. The system outperforms existing state-of-the-art general-purpose summarizers and machine learning approaches. Legal experts rated the extracted summaries similar to the original catchphrases given by the court. Our investigation of knowledge acquisition methods for summarization therefore demonstrates that it is possible to quickly create effective special-purpose summarizers, which combine multiple techniques into a single context-aware approach.

7.
Information explosion is a pervasive problem of the information age. To extract valuable information from massive text data quickly, automatic summarization has become a research focus in natural language processing (NLP). Multi-document summarization aims to distill the important content from a set of documents on the same topic so that users can obtain the key information quickly. To address the problems of incomplete coverage and high redundancy in current multi-document summarization, this paper proposes an extractive method based on multi-granularity semantic interaction, which combines a multi-granularity semantic interaction network with maximal marginal relevance (MMR). Sentence representations are trained through semantic interaction at different granularities so that key information at each granularity is captured, ensuring comprehensive coverage of the summary, while an improved MMR keeps redundancy low. A learning-to-rank step scores each sentence in the input documents and extracts the summary sentences. Experiments on the Multi-News dataset show that the multi-granularity semantic interaction model outperforms baseline models such as LexRank and TextRank.
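The MMR component mentioned above selects sentences that balance relevance against redundancy. Below is a generic sketch of standard MMR selection over arbitrary sentence embeddings; the multi-granularity interaction network and the paper's improved MMR variant are not reproduced, and the parameter values are assumptions.

```python
# Sketch: maximal marginal relevance (MMR) sentence selection.
import numpy as np

def mmr_select(sent_vecs, doc_vec, k=5, lam=0.7):
    """sent_vecs: (n, d) sentence embeddings; doc_vec: (d,) document embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    relevance = [cos(v, doc_vec) for v in sent_vecs]
    selected, candidates = [], list(range(len(sent_vecs)))
    while candidates and len(selected) < k:
        def mmr(i):
            # redundancy = similarity to the most similar already-selected sentence
            redundancy = max((cos(sent_vecs[i], sent_vecs[j]) for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected   # indices of the chosen summary sentences
```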

8.
Text summarization aims to obtain key information from massive text data quickly and accurately. To explore novel features of summary sentences, this paper embeds the keywords of each sentence into a knowledge network for modelling, maps the sentence onto that network for representation, and proposes a keyword-construction penetration-degree feature model: the width and depth with which a sentence's keyword groups penetrate the network are introduced as features for summary-sentence discrimination. Combined with maximum-entropy classification, the influence coefficients of the different features are modelled on a domain corpus, achieving effective supervised classification and automatic extraction of summary sentences. The experimental results are good, showing the effectiveness of the new feature model and its stability on the domain corpus; the features are also simple to compute, giving the method good overall practicality.

9.
Automatic text summarization is an essential tool in this era of information overloading. In this paper we present an automatic extractive Arabic text summarization system where the user can cap the size of the final summary. It is a direct system where no machine learning is involved. We use a two pass algorithm where in pass one, we produce a primary summary using Rhetorical Structure Theory (RST); this is followed by the second pass where we assign a score to each of the sentences in the primary summary. These scores will help us in generating the final summary. For the final output, sentences are selected with an objective of maximizing the overall score of the summary whose size should not exceed the user selected limit. We used ROUGE to evaluate our system generated summaries of various lengths against those done by a (human) news editorial professional. Experiments on sample texts show our system to outperform some of the existing Arabic summarization systems including those that require machine learning.
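The final selection step, maximizing the total score of the summary without exceeding a user-set size, can be approximated greedily. A simple sketch under that assumption follows; the paper's RST-based scoring itself is not reproduced, and the function name is hypothetical.

```python
# Sketch: greedily pick the highest-scoring sentences while respecting a length cap.
def select_under_cap(scored_sentences, max_words):
    """scored_sentences: list of (score, sentence) pairs from the second pass."""
    chosen, used = [], 0
    for score, sent in sorted(scored_sentences, key=lambda p: p[0], reverse=True):
        n = len(sent.split())
        if used + n <= max_words:      # keep the sentence only if it fits the budget
            chosen.append(sent)
            used += n
    return chosen   # optionally re-order the chosen sentences by source position
```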

10.
The classic TextRank algorithm considers only the similarity between sentence nodes when extracting summaries and ignores the discourse structure of the document and the contextual information of each sentence. To address these problems, and taking the structural characteristics of Chinese text into account, this paper proposes an improved algorithm, iTextRank. It incorporates titles, paragraphs, special sentences, sentence position, and sentence length into the construction of the TextRank graph, gives an improved sentence-similarity measure and weight-adjustment factors, applies them to automatic summarization of Chinese text, and analyses the time complexity of the algorithm. Experiments show that iTextRank achieves higher precision but lower recall than the classic TextRank method.
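The idea of folding title, position, and length information into the TextRank graph can be sketched with networkx's PageRank over a sentence-similarity graph with heuristic weight boosts. The boosts below are illustrative assumptions and are not the iTextRank formulas from the paper.

```python
# Sketch: TextRank over a sentence-similarity graph with heuristic weight boosts
# for title overlap and sentence position (illustrative, not the iTextRank formula).
import itertools
import networkx as nx

def summarize(sentences, title, top_k=3):
    def overlap(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / (len(wa | wb) or 1)

    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = overlap(sentences[i], sentences[j])
        # boost edges touching sentences similar to the title or near the start
        boost = 1.0 + overlap(sentences[i], title) + overlap(sentences[j], title)
        boost += 0.5 if min(i, j) == 0 else 0.0
        if w > 0:
            g.add_edge(i, j, weight=w * boost)
    ranks = nx.pagerank(g, weight="weight")
    return sorted(ranks, key=ranks.get, reverse=True)[:top_k]   # top sentence indices
```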

11.
Analysing the relevance between news reports and legal cases is an important step in public-opinion analysis in the legal domain and can be cast as computing the similarity between a news text and a case text. Siamese networks are an effective way to compute text similarity and learn well from balanced samples, but news-case relevance computation faces unbalanced texts and redundant news content. This paper therefore proposes a news-case relevance computation method based on an asymmetric siamese network. By computing the similarity between each sentence and the news title, the sentences most relevant to the title are selected to represent the document and redundant sentences are removed; an asymmetric siamese network then models the pair. Since case elements carry the key semantics of a case, they are incorporated into the asymmetric siamese network as supervision when encoding the news document and the case description, alleviating the structural and semantic imbalance between news and cases, and the model finally judges whether the news is relevant to the case. Experiments show that the model improves accuracy by 2.52% over the baseline model.

12.
Text summarization models help biomedical clinicians and researchers acquire informative data from enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles is always challenging. This study aims to develop a dual-mode biomedical text summarization model to achieve enhanced coverage and information. The research also includes checking the fitment of appropriate graph ranking techniques for improved performance of the summarization model. The input biomedical text is mapped to a graph in which meaningful sentences are treated as central nodes connected by the critical associations between them. The proposed framework combines a top-k similarity technique with UMLS and a sampled probability-based clustering method, which helps uncover relevant meanings of the biomedical domain-specific word vectors and find the best possible associations between crucial sentences. The quality of the framework is assessed via different parameters like information retention, coverage, readability, cohesion, and ROUGE scores in clustering and non-clustering modes. The significant benefits of the suggested technique are capturing crucial biomedical information with increased coverage and reasonable memory consumption. The configurable settings of combined parameters reduce execution time, enhance memory utilization, and extract relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is checked against similar biomedical text summarizers.

13.
庞超  尹传环 《计算机科学》2018,45(1):144-147, 178
Automatic text summarization is an important research topic in natural language processing. Depending on how summaries are produced, methods are either extractive or abstractive; abstractive summarization re-expresses the central content and concepts of the original document in a different form, and the words of the generated summary need not appear in the source. This paper proposes a classification-based abstractive summarization model. The model combines a recurrent-neural-network encoder-decoder structure with a classification structure and makes full use of supervision to obtain richer summary features; by using an attention mechanism in the encoder-decoder, the model captures the central content of the source more precisely. Both parts of the model can be trained and optimized jointly on large datasets, and the training process is simple and effective. The proposed model shows excellent automatic summarization performance.
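The attention mechanism inside such an encoder-decoder can be illustrated with a minimal additive (Bahdanau-style) attention module in PyTorch; this is a generic sketch under assumed dimensions, not the authors' exact architecture.

```python
# Sketch: additive (Bahdanau-style) attention for an RNN encoder-decoder.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim=256):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_states) +
                                   self.w_dec(dec_state).unsqueeze(1))).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)            # attention over source positions
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights   # the context vector feeds the next decoding step
```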

14.
We present methods of extractive query-oriented single-document summarization using a deep auto-encoder (AE) to compute a feature space from the term-frequency (tf) input. Our experiments explore both local and global vocabularies. We investigate the effect of adding small random noise to local tf as the input representation of AE, and propose an ensemble of such noisy AEs which we call the Ensemble Noisy Auto-Encoder (ENAE). ENAE is a stochastic version of an AE that adds noise to the input text and selects the top sentences from an ensemble of noisy runs. In each individual experiment of the ensemble, a different randomly generated noise is added to the input representation. This architecture changes the application of the AE from a deterministic feed-forward network to a stochastic runtime model. Experiments show that the AE using local vocabularies clearly provides a more discriminative feature space and improves recall by 11.2% on average. The ENAE can make further improvements, particularly in selecting informative sentences. To cover a wide range of topics and structures, we perform experiments on two different publicly available email corpora that are specifically designed for text summarization. We used ROUGE as a fully automatic metric in text summarization and we presented the average ROUGE-2 recall for all experiments.
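The core of the ENAE idea, adding random noise to the tf input of an auto-encoder and aggregating rankings over several noisy runs, can be sketched roughly as follows. Dimensions, noise scale, and the ranking criterion are assumptions, not the paper's settings, and the model would of course be trained before use.

```python
# Sketch: a denoising-style auto-encoder over term-frequency vectors; an ensemble
# of noisy runs ranks sentences by similarity between sentence and query codes.
import torch
import torch.nn as nn

class TfAutoEncoder(nn.Module):
    def __init__(self, vocab_size, hidden=140):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(hidden, vocab_size), nn.Sigmoid())

    def forward(self, tf, noise_std=0.01):
        noisy = tf + noise_std * torch.randn_like(tf)   # small random input noise
        code = self.enc(noisy)
        return self.dec(code), code

def ensemble_rank(model, sent_tf, query_tf, runs=5):
    """Average cosine similarity between sentence codes and the query code."""
    sims = torch.zeros(sent_tf.size(0))
    for _ in range(runs):                               # each run uses fresh noise
        _, s_code = model(sent_tf)
        _, q_code = model(query_tf.unsqueeze(0))
        sims += torch.cosine_similarity(s_code, q_code, dim=-1)
    return (sims / runs).argsort(descending=True)       # best sentences first
```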

15.
With the number of documents describing real-world events and event-oriented information needs rapidly growing on a daily basis, the need for efficient retrieval and concise presentation of event-related information is becoming apparent. Nonetheless, the majority of information retrieval and text summarization methods rely on shallow document representations that do not account for the semantics of events. In this article, we present event graphs, a novel event-based document representation model that filters and structures the information about events described in text. To construct the event graphs, we combine machine learning and rule-based models to extract sentence-level event mentions and determine the temporal relations between them. Building on event graphs, we present novel models for information retrieval and multi-document summarization. The information retrieval model measures the similarity between queries and documents by computing graph kernels over event graphs. The extractive multi-document summarization model selects sentences based on the relevance of the individual event mentions and the temporal structure of events. Experimental evaluation shows that our retrieval model significantly outperforms well-established retrieval models on event-oriented test collections, while the summarization model outperforms competitive models from shared multi-document summarization tasks.

16.
Extractive methods select sentences from the source text and can introduce redundant information; abstractive methods can generate words not present in the source but may produce ungrammatical, unnatural text. BERT, a bidirectional Transformer model, has shown excellent performance on natural language understanding tasks, but its application to text generation remains to be explored. To address these problems, this paper proposes a three-stage pre-training-based composite summarization model (TSPT) that combines extractive and abstractive methods: the bidirectional contextual word vectors produced by pre-training on the source text are passed through a sigmoid function to score sentences and extract key sentences, and in the generation stage the key sentences are rewritten as a cloze-style task to produce the final summary. Experimental results show that the model performs well on the CNN/Daily Mail dataset.
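A rough sketch of the extraction stage described above, using Hugging Face transformers to obtain contextual sentence representations and a sigmoid-scored linear layer to pick key sentences. The model name, layer sizes, and thresholding are illustrative assumptions, the linear scorer would need to be trained on labelled summary sentences, and the cloze-style rewriting stage is omitted.

```python
# Sketch: score sentences with a pre-trained encoder + sigmoid (extraction stage only).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # illustrative choice
encoder = AutoModel.from_pretrained("bert-base-uncased")
scorer = nn.Linear(encoder.config.hidden_size, 1)                # would be trained

def sentence_scores(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    cls_vecs = out.last_hidden_state[:, 0]          # [CLS] vector per sentence
    return torch.sigmoid(scorer(cls_vecs)).squeeze(-1)

# Sentences whose score exceeds a threshold are kept as key sentences and would
# then be rewritten (cloze-style) to produce the final abstractive summary.
```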

17.
This paper studies automatic summarization and, by combining statistics with a text relationship graph and a community-detection algorithm from complex-network research, proposes a summary-extraction method for multi-topic documents. Highly weighted sentences are extracted from the text, a text relationship graph is built from sentence-similarity computations, and the community-detection algorithm is used to partition the sub-topics. Experimental results show that the method extracts good summaries for multi-topic texts and recovers a larger number of sub-topics.
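The sub-topic partitioning step can be illustrated with networkx: build a sentence-similarity graph, run a modularity-based community detection, and take one representative sentence per community. This is a simplified sketch of the approach, not the paper's exact algorithm; the similarity callback and threshold are assumptions.

```python
# Sketch: partition a sentence-similarity graph into sub-topic communities.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def subtopic_summary(sentences, sim, threshold=0.2):
    """sim(i, j) -> similarity in [0, 1]; returns one representative sentence per community."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = sim(i, j)
        if w >= threshold:
            g.add_edge(i, j, weight=w)
    summary = []
    for community in greedy_modularity_communities(g, weight="weight"):
        # pick the sentence with the highest total similarity inside its community
        best = max(community, key=lambda n: g.degree(n, weight="weight"))
        summary.append(sentences[best])
    return summary
```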

18.
Text Segmentation Based on the LDA Model (cited 9 times in total: 0 self-citations, 9 citations by others)
Text segmentation has extremely important applications in information extraction, automatic summarization, language modelling, anaphora resolution, and many other areas. LDA-based text segmentation models the corpus and the texts with LDA, uses Gibbs sampling (an MCMC method) for inference to estimate the model parameters indirectly and obtain the probability distributions of words, thereby connecting the different latent topics within segments to the surface words of the text. The experiments take complete Chinese sentences as the basic blocks and try several similarity measures and boundary-estimation strategies; the best results show that an appropriate combination of the two yields a segment-boundary error rate far below that of other comparable algorithms.
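The general flavour of LDA-based segmentation, comparing the topic distributions of adjacent blocks and placing boundaries where the similarity drops, can be sketched with gensim. Note that gensim's LdaModel uses online variational Bayes rather than the Gibbs sampling used in the paper, so everything below is an illustrative approximation with assumed parameters.

```python
# Sketch: topic similarity between adjacent sentence blocks as a boundary signal.
# gensim's LdaModel (variational Bayes) stands in for the paper's Gibbs-sampled LDA.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

def boundary_scores(blocks, num_topics=20):
    """blocks: list of token lists (e.g. whole sentences); returns adjacent similarities."""
    dictionary = corpora.Dictionary(blocks)
    bows = [dictionary.doc2bow(b) for b in blocks]
    lda = LdaModel(bows, id2word=dictionary, num_topics=num_topics, passes=10)

    def topic_vec(bow):
        v = np.zeros(num_topics)
        for t, p in lda.get_document_topics(bow, minimum_probability=0.0):
            v[t] = p
        return v

    vecs = [topic_vec(b) for b in bows]
    sims = [float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
            for a, b in zip(vecs, vecs[1:])]
    return sims   # low similarity between block i and i+1 suggests a segment boundary
```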

19.
As information is available in abundance for every topic on the internet, condensing the important information in the form of a summary would benefit a number of users. Hence, there is growing interest among the research community in developing new approaches to automatically summarize text. An automatic text summarization system generates a summary, i.e. a short text that includes all the important information of the document. Since the advent of text summarization in the 1950s, researchers have been trying to improve techniques for generating summaries so that the machine-generated summary matches the human-made summary. Summaries can be generated through extractive as well as abstractive methods. Abstractive methods are highly complex as they need extensive natural language processing. Therefore, the research community is focusing more on extractive summaries, trying to achieve more coherent and meaningful summaries. Over the last decade, several extractive approaches have been developed for automatic summary generation that implement a number of machine learning and optimization techniques. This paper presents a comprehensive survey of recent extractive text summarization approaches developed in the last decade. Their needs are identified and their advantages and disadvantages are listed in a comparative manner. A few abstractive and multilingual text summarization approaches are also covered. Summary evaluation is another challenging issue in this research field. Therefore, both intrinsic and extrinsic methods of summary evaluation are described in detail along with text summarization evaluation conferences and workshops. Furthermore, evaluation results of extractive summarization approaches are presented on some shared DUC datasets. Finally, this paper concludes with a discussion of useful future directions that can help researchers to identify areas where further research is needed.

20.
Knowledge is information that has been contextualised in a certain domain, to be used or applied. It represents the basic core of our Cultural Heritage, and Natural Language provides us with a prime, versatile means of construing experience at multiple levels of organization. The natural language generation field consists in the creation of texts providing information contained in other kinds of sources (numerical data, graphics, taxonomies and ontologies, or even other texts), with the aim of making such texts indistinguishable, as far as possible, from those created by humans. On the other hand, knowledge extraction, based on text mining and text analysis tasks (examples of the many applications born from computational linguistics), provides summarization, categorization, and topic extraction from textual resources using linguistic concepts, which deal with the imprecision and ambiguity of human language. This paper presents a research activity focused on exploring and scientifically describing the knowledge structure and organization involved in the generation of textual resources. Thus, a novel multidimensional model for the representation of conceptual knowledge is proposed. Furthermore, a real case study in the Cultural Heritage domain is described to demonstrate the effectiveness and feasibility of the proposed model and approach.
