Similar Documents
20 similar documents found (search time: 0 ms)
1.
Automatically generating text of high quality in tasks such as translation, summarization, and narrative writing is difficult, as these tasks require creativity that only humans currently exhibit. However, crowdsourcing such tasks is still a challenge, as they are tedious for humans and can require expert knowledge. We thus explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We consider effectiveness through the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can be either simultaneous or sequential; workforce organization, independent or collaborative; and work style, human-only or a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API, and experiment with them in different input settings such as text length, number of sources, and topic popularity. We report our findings regarding the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.

2.
An Automatic Summarization Method for English Text (total citations: 1; self-citations: 0; citations by others: 1)
With the exponential growth of online web pages, automatic summarization has attracted increasing attention. Extractive summarization rarely performs semantic analysis of the text, so the extracted sentences may drift from the main topic. To address these shortcomings, and drawing on the characteristics of single-document summarization, an English automatic summarization method called TLETS (TF-ISF and LexRank based English Text Summarization) is proposed. The method uses WordNet to map the feature terms of the vector space model to concepts, computes the TF-ISF value of each concept word as its weight, then computes the LexRank weight of each sentence and extracts the highest-weighted sentences as the summary. Experimental results show that TLETS produces good summaries.
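The abstract names two concrete ingredients, TF-ISF term weighting and LexRank sentence ranking. The sketch below (not taken from the paper) illustrates both in plain Python with numpy; the WordNet concept-mapping step is omitted, and the tokenizer, similarity threshold, and damping factor are illustrative assumptions rather than the paper's settings.

```python
# Sketch: TF-ISF term weights + LexRank sentence ranking (illustrative settings).
import math
import re
from collections import Counter

import numpy as np


def tf_isf(sentences):
    """TF-ISF weight per term per sentence: tf(t, s) * log(N / sf(t))."""
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(tokenized)
    sent_freq = Counter(t for toks in tokenized for t in set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: tf[t] * math.log(n / sent_freq[t]) for t in tf})
    return weights


def lexrank(sentences, threshold=0.1, damping=0.85, iters=50):
    """Rank sentences by PageRank over a cosine-similarity graph."""
    w = tf_isf(sentences)
    vocab = sorted({t for d in w for t in d})
    mat = np.array([[d.get(t, 0.0) for t in vocab] for d in w])
    norms = np.linalg.norm(mat, axis=1, keepdims=True) + 1e-12
    sim = (mat / norms) @ (mat / norms).T
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    row_sums = adj.sum(axis=1, keepdims=True)
    # Rows with no neighbours fall back to a uniform distribution.
    trans = np.divide(adj, row_sums,
                      out=np.full_like(adj, 1.0 / len(sentences)),
                      where=row_sums > 0)
    scores = np.full(len(sentences), 1.0 / len(sentences))
    for _ in range(iters):
        scores = (1 - damping) / len(sentences) + damping * trans.T @ scores
    return scores


if __name__ == "__main__":
    docs = ["Automatic summarization selects salient sentences.",
            "Sentence salience can be estimated with TF-ISF weights.",
            "Graph-based ranking such as LexRank scores sentences globally.",
            "The top-ranked sentences form the extractive summary."]
    ranked = sorted(zip(lexrank(docs), docs), reverse=True)
    print(ranked[0][1])  # highest-scoring sentence
```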

3.
Extractive summarization aims to automatically produce a short summary of a document by concatenating several sentences taken exactly from the original material. Due to their simplicity and ease of use, extractive methods have become the dominant paradigm in text summarization. In this paper, we address the sentence scoring technique, a key step of extractive summarization. Specifically, we propose a novel word-sentence co-ranking model named CoRank, which combines the word-sentence relationship with a graph-based unsupervised ranking model. CoRank is quite concise when viewed as matrix operations, and its convergence can be theoretically guaranteed. Moreover, a redundancy elimination technique is presented as a supplement to CoRank, so that the quality of automatic summarization can be further enhanced. As a result, CoRank can serve as an important building block of intelligent summarization systems. Experimental results on two real-life datasets including nearly 600 documents demonstrate the effectiveness of the proposed methods.
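As a rough illustration of word-sentence co-ranking expressed as matrix operations, the sketch below iterates mutual reinforcement between word and sentence scores through a word-sentence incidence matrix. The update rule and normalization are illustrative assumptions, not CoRank's exact formulation.

```python
# Sketch: mutual-reinforcement co-ranking between words and sentences.
import re

import numpy as np


def co_rank(sentences, iters=100, tol=1e-6):
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    vocab = sorted({t for toks in tokenized for t in toks})
    idx = {t: i for i, t in enumerate(vocab)}
    # Word-sentence incidence matrix W[i, j] = count of word i in sentence j.
    W = np.zeros((len(vocab), len(sentences)))
    for j, toks in enumerate(tokenized):
        for t in toks:
            W[idx[t], j] += 1.0
    word = np.full(len(vocab), 1.0 / len(vocab))
    sent = np.full(len(sentences), 1.0 / len(sentences))
    for _ in range(iters):
        new_sent = W.T @ word   # a sentence matters if it contains important words
        new_word = W @ sent     # a word matters if it occurs in important sentences
        new_sent /= new_sent.sum()
        new_word /= new_word.sum()
        if np.abs(new_sent - sent).sum() < tol:
            break
        word, sent = new_word, new_sent
    return sent  # sentence scores; highest values go into the summary
```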

4.
Automatic text summarization (ATS) has recently achieved impressive performance thanks to advances in deep learning and the availability of large-scale corpora. However, there is still no guarantee that the generated summaries are grammatical, concise, and convey all the salient information of the original documents. To make the summarization results more faithful, this paper presents an unsupervised approach that combines rhetorical structure theory, deep neural models, and domain knowledge for ATS. The architecture contains three main components: domain knowledge base construction based on representation learning, an attentional encoder-decoder model for rhetorical parsing, and a subroutine-based model for text summarization. Domain knowledge can be used effectively for unsupervised rhetorical parsing, so that a rhetorical structure tree can be derived for each document. In the unsupervised rhetorical parsing module, the idea of translation is adopted to alleviate the problem of data scarcity. The subroutine-based summarization model depends purely on the derived rhetorical structure trees and can generate content-balanced results. To evaluate summaries without a gold standard, we propose an unsupervised evaluation metric whose hyper-parameters are tuned by supervised learning. Experimental results show that, on a large-scale Chinese dataset, the proposed approach obtains performance comparable to existing methods.

5.

In multi-domain text generation scenarios, data from different domains usually differ, and introducing a new domain also brings the problem of data scarcity. Traditional supervised methods require large amounts of labeled data in the target domain to train a deep neural text generation model, and the trained model generalizes poorly to new domains. To address the data discrepancy and data scarcity problems in multi-domain settings, and inspired by transfer learning, a comprehensive transfer-based text generation method is designed. It reduces the differences between text data from different domains and, by exploiting the semantic relatedness between text in existing domains and the new domain, helps the deep neural text generation model generalize to the new domain. Experiments on public datasets verify the effectiveness of the proposed method for domain transfer in multi-domain scenarios: the model performs well when generating text in a new domain and improves on all text generation evaluation metrics compared with existing transfer-based text generation methods.


6.
Recent studies have applied different approaches to summarizing software artifacts, yet very few efforts have been made to summarize the source code fragments available on the web. This paper investigates the feasibility of generating code fragment summaries using supervised learning algorithms. We hire a crowd of ten individuals from the same workplace to extract source code features from a corpus of 127 code fragments retrieved from the Eclipse and NetBeans official frequently asked questions (FAQs). Human annotators suggest summary lines. Our machine learning algorithms produce better results, with a precision of 82%, and perform statistically better than existing code fragment classifiers. Evaluation of the algorithms on several statistical measures supports this result. The result is promising and suggests that mechanisms such as data-driven crowd enlistment can improve the efficacy of existing code fragment classifiers.

7.
Text summarization is the process of automatically creating a shorter version of one or more text documents. It is an important way of finding relevant information in large text libraries or on the Internet. Text summarization techniques are essentially classified as extractive or abstractive. Extractive techniques perform summarization by selecting sentences of documents according to some criteria. Abstractive summaries attempt to improve the coherence among sentences by eliminating redundancies and clarifying the context of sentences. Sentence scoring is the technique most commonly used for extractive text summarization. This paper describes and performs a quantitative and qualitative assessment of 15 sentence-scoring algorithms available in the literature. Three different datasets (news, blogs, and article contexts) were evaluated. In addition, directions for improving the sentence extraction results are suggested.
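For readers unfamiliar with sentence-scoring heuristics of the kind surveyed here, the sketch below (not from the paper) combines two classic ones, a word-frequency score and a position score, with illustrative weights.

```python
# Sketch: two classic sentence-scoring heuristics and a simple linear combination.
import re
from collections import Counter


def score_sentences(sentences, w_freq=0.7, w_pos=0.3):
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    freq = Counter(t for toks in tokenized for t in toks)
    max_freq = max(freq.values(), default=1)
    scores = []
    for i, toks in enumerate(tokenized):
        # Word-frequency score: average normalized frequency of the sentence's words.
        f = sum(freq[t] / max_freq for t in toks) / max(len(toks), 1)
        # Position score: earlier sentences score higher (news-style lead bias).
        p = 1.0 - i / len(sentences)
        scores.append(w_freq * f + w_pos * p)
    return scores
```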

8.
黄丽雯  钱微 《计算机应用》2006,26(11):2626-2627,2630
A new method that improves the HITS algorithm is proposed. It combines document content with heuristic cues such as phrases, sentence length, and first-sentence preference to discover sub-topics across multiple documents, and converts document sub-topic features into graph nodes for ranking. Experiments on the DUC 2004 data show that the method is an effective approach to multi-document summarization.

9.
Extensible Markup Language (XML) is a simple, flexible text format derived from SGML, originally designed to support large-scale electronic publishing. Nowadays XML plays a fundamental role in the exchange of a wide variety of data on the Web. Because XML allows designers to create their own customized tags and enables the definition, transmission, validation, and interpretation of data between applications, devices, and organizations, many works in soft computing employ XML to take control of and responsibility for the information, such as the fuzzy markup language, and accordingly there is a large amount of XML-based data and documents. However, most mobile and interactive ubiquitous multimedia devices have limited hardware such as CPU, memory, and display screens, so it is essential to compress an XML document/element collection into a brief summary, according to the user's information need, before it is delivered. Query-oriented XML text summarization aims to provide users with a brief and readable substitute for the originally retrieved documents/elements according to the user's query, which can effectively relieve the reading burden. We propose a query-oriented XML summarization system, QXMLSum, which extracts sentences and combines them into a summary based on three kinds of features: the user's query, the content of the XML documents/elements, and the structure of the XML documents/elements. Experiments on the IEEE-CS datasets used in the Initiative for the Evaluation of XML Retrieval show that the query-oriented XML summaries generated by QXMLSum are competitive.

10.
An improved automatic summarization system based on lexical chain analysis is proposed and implemented. The system uses syntactic analysis to compute the context window of candidate words, improving the quality of lexical chains built with a greedy strategy, and it alleviates the relatively fixed summary length produced by traditional lexical chain analysis. Experiments show that, compared with an automatic summarization system that builds lexical chains with the original greedy strategy, the system produces summaries of noticeably higher quality.
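A toy sketch of greedy lexical chaining with a context window is given below; it is not the system described above, and it uses word repetition as a stand-in for the semantic relations a real lexical-chain analyzer would obtain from a thesaurus.

```python
# Sketch: toy greedy lexical-chain builder with a sentence-level context window.
import re


def build_chains(text, window=3):
    """Greedy chaining: attach each candidate word to an existing chain if the
    same word appeared within `window` sentences; otherwise start a new chain."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chains = []  # each chain: {"word": str, "positions": [sentence indices]}
    for i, sent in enumerate(sentences):
        for word in re.findall(r"[a-z]{4,}", sent.lower()):
            for chain in chains:
                if chain["word"] == word and i - chain["positions"][-1] <= window:
                    chain["positions"].append(i)
                    break
            else:
                chains.append({"word": word, "positions": [i]})
    # Strong chains (most members) indicate sentences worth extracting.
    return sorted(chains, key=lambda c: len(c["positions"]), reverse=True)
```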

11.
This paper surveys the state of research on automatic summarization of XML text, analyzes and evaluates existing XML summarization techniques, and discusses open problems in this research direction as well as future development trends.

12.
Four modern systems of automatic text summarization are tested on the basis of a model vocabulary compiled by human subjects. The distribution of the vocabulary's terms in the source text is compared with their distribution in summaries of different lengths generated by the systems. Principles for evaluating the efficiency of current automatic text summarization systems are described.
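The evaluation idea can be illustrated with a small sketch (an assumption about how such a comparison might be coded, not the authors' tool): compute the distribution of model-vocabulary terms in a text and the fraction of the vocabulary a summary retains. `vocabulary` is assumed to be a set of lowercase terms.

```python
# Sketch: comparing model-vocabulary term distributions in a source text and a summary.
import re
from collections import Counter


def term_distribution(text, vocabulary):
    """Relative frequency of each model-vocabulary term within the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in vocabulary)
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in vocabulary}


def coverage(summary, vocabulary):
    """Fraction of model-vocabulary terms that survive in the summary."""
    present = set(re.findall(r"[a-z]+", summary.lower()))
    return len(present & vocabulary) / len(vocabulary)
```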

13.
14.
This paper describes an experimental method for automatic text genre recognition based on 45 statistical, lexical, syntactic, positional, and discursive parameters. The suggested method includes: (1) the development of software permitting heterogeneous parameters to be normalized and clustered using the k-means algorithm; (2) the verification of parameters; (3) the selection of the parameters that are the most significant for scientific, newspaper, and artistic texts using two-factor analysis algorithms. Adaptive summarization algorithms have been developed based on these parameters.

15.
Generating a short, refined summary from a single document can effectively ease the reading burden caused by the information explosion. In recent years, sequence-to-sequence (Seq2Seq) models have been widely applied to text generation tasks, and Seq2Seq with an attention mechanism has become the basic framework for abstractive summarization. To generate summaries that reflect a specific writing style, this work builds on a Seq2Seq model with attention and coverage mechanisms and, at the decoding stage, uses a variational auto-encoder (VAE) to model summary style features and guide summary generation; finally, a pointer-generator network is used to alleviate the out-of-vocabulary problem. Experimental results on the Sina Weibo LCSTS dataset show that the method effectively models summary style features, alleviates the out-of-vocabulary and repeated-generation problems, and produces more accurate summaries than the baseline models.

16.
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing one piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest-ranked sentences across the 14 systems, as in a voting scheme. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts therefore appears suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features.
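As an illustration of the simplest network concept mentioned, node degree, the sketch below (not the CN-Summ implementation) links sentences that share a content word, approximating the paper's "meaningful nouns" with any sufficiently long word, and extracts the highest-degree sentences.

```python
# Sketch: degree-based sentence network summarizer (content-word overlap as edges).
import re


def degree_summary(sentences, top_k=2):
    word_sets = [set(re.findall(r"[a-z]{4,}", s.lower())) for s in sentences]
    degree = [0] * len(sentences)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if word_sets[i] & word_sets[j]:  # shared content word -> edge
                degree[i] += 1
                degree[j] += 1
    ranked = sorted(range(len(sentences)), key=lambda i: degree[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:top_k])]  # keep original order
```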

17.
Automatic summarization should obtain similarity values that are as accurate as possible to determine the weights of sentences or paragraphs, yet the commonly used vector space model computations ignore the order of words within sentences, paragraphs, and texts. A new similarity measure based on groups of adjacent words in order is proposed and applied to automatic summarization: a clustering-based method produces vector representations of the word-order groups, which are then used to characterize sentences, paragraphs, and texts, and the similarity results from word-order groups of different lengths are combined by linear interpolation. A new weight indicator for sentences or paragraphs, based on the accumulated importance of the word-order groups they contain, is also proposed. Experiments show that using word-order information effectively improves summary quality.
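A minimal sketch of the interpolation idea, combining similarities computed from adjacent word groups of several lengths with fixed weights, is shown below; the clustering-based vector representation of word-order groups used in the paper is not reproduced, and the weights are illustrative.

```python
# Sketch: similarity from adjacent word groups (n-grams) of several lengths,
# combined by linear interpolation with illustrative weights.
import re
from collections import Counter


def ngram_counts(text, n):
    tokens = re.findall(r"\w+", text.lower())
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def interpolated_similarity(a, b, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of n-gram cosine similarities for n = 1, 2, 3."""
    total = 0.0
    for n, w in enumerate(weights, start=1):
        ca, cb = ngram_counts(a, n), ngram_counts(b, n)
        dot = sum(ca[g] * cb[g] for g in ca)
        norm = (sum(v * v for v in ca.values()) ** 0.5) * \
               (sum(v * v for v in cb.values()) ** 0.5)
        total += w * (dot / norm if norm else 0.0)
    return total
```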

18.
In this paper, state-of-the-art methods of automatic text summarization that build summaries in the form of generic extracts are considered. The original text is represented as a numerical matrix whose columns correspond to sentences, each sentence being represented as a vector in the term space. Latent semantic analysis is then applied to this matrix to construct a representation of the sentences in a topic space whose dimensionality is much lower than that of the initial term space. The most important sentences are chosen on the basis of their representation in the topic space, and the number of selected sentences is defined by the required summary length. The paper also presents a new generic text summarization method that uses non-negative matrix factorization to estimate sentence relevance. The proposed sentence relevance estimation normalizes the topic space and then weights each topic using the sentence representations in that space. The proposed method shows better summarization quality and performance than state-of-the-art methods on the DUC 2001 and DUC 2002 standard datasets.
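The two topic-space approaches mentioned, LSA and non-negative matrix factorization, can be sketched as follows; the topic count, the multiplicative NMF updates, and the weighting scheme are illustrative assumptions, not the paper's tuned settings.

```python
# Sketch: topic-space sentence relevance via SVD (LSA) and via NMF.
import re

import numpy as np


def term_sentence_matrix(sentences):
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    vocab = sorted({t for toks in tokenized for t in toks})
    idx = {t: i for i, t in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))
    for j, toks in enumerate(tokenized):
        for t in toks:
            A[idx[t], j] += 1.0
    return A


def lsa_scores(sentences, topics=2):
    A = term_sentence_matrix(sentences)
    _, s, vt = np.linalg.svd(A, full_matrices=False)
    k = min(topics, len(s))
    # Sentence relevance: singular-value-weighted length in the topic space.
    return np.sqrt((s[:k, None] ** 2 * vt[:k, :] ** 2).sum(axis=0))


def nmf_scores(sentences, topics=2, iters=200):
    A = term_sentence_matrix(sentences)
    rng = np.random.default_rng(0)
    W = rng.random((A.shape[0], topics)) + 1e-6
    H = rng.random((topics, A.shape[1])) + 1e-6
    for _ in range(iters):  # multiplicative updates for the Frobenius objective
        H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    # Normalize each topic over sentences, then weight topics by overall strength.
    H_norm = H / (H.sum(axis=1, keepdims=True) + 1e-9)
    topic_weight = H.sum(axis=1)
    return (topic_weight[:, None] * H_norm).sum(axis=0)
```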

19.
In text summarization, relevance and coverage are two main criteria that decide the quality of a summary. In this paper, we propose a new multi-document summarization approach, SumCR, based on sentence extraction. A novel feature called Exemplar is introduced to deal with these two concerns simultaneously during sentence ranking. Unlike conventional approaches, where the relevance value of each sentence is calculated over the whole collection of sentences, the Exemplar value of each sentence in SumCR is obtained within a subset of similar sentences. A fuzzy medoid-based clustering approach is used to produce sentence clusters, or subsets, each of which corresponds to a subtopic of the related topic. This subtopic-based feature captures the relevance of each sentence within different subtopics and thus increases the chance that SumCR produces a summary with wider coverage and less redundancy. Another feature we incorporate in SumCR is Position, i.e., the position of each sentence in the corresponding document. The final score of each sentence is a combination of the subtopic-level feature Exemplar and the document-level feature Position. Experimental studies on DUC benchmark data show the good performance of SumCR and its potential in summarization tasks.

20.
In traditional generative models, gradients tend to vanish or explode after propagating through many steps, and language understanding remains insufficient; to address this, a generative automatic text summarization method (BiGRUAtten-LSTM) is proposed. On the encoder side, the original text is fed into the encoder and a bidirectional gated recurrent unit produces a fixed-length semantic vector, while an attention mechanism assigns a weight to each input word to reduce the loss of detail from the input sequence. On the decoder side, an LSTM...

