20 similar documents retrieved; search time: 0 ms
1.
Automatically generating text of high quality in tasks such as translation, summarization, and narrative writing is difficult as these tasks require creativity, which only humans currently exhibit. However, crowdsourcing such tasks is still a challenge as they are tedious for humans and can require expert knowledge. We thus explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We consider effectiveness through the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can either be simultaneous or sequential, workforce organization independent or collaborative, and work style either by humans only or by using a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API and experiment with them in different input settings such as text length, number of sources, and topic popularity. We report our findings regarding the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.
2.
Extractive summarization aims to automatically produce a short summary of a document by concatenating several sentences taken exactly from the original material. Owing to their simplicity and ease of use, extractive summarization methods have become the dominant paradigm in the realm of text summarization. In this paper, we address the sentence scoring technique, a key step of extractive summarization. Specifically, we propose a novel word-sentence co-ranking model named CoRank, which combines the word-sentence relationship with a graph-based unsupervised ranking model. CoRank is quite concise in terms of matrix operations, and its convergence can be theoretically guaranteed. Moreover, a redundancy elimination technique is presented as a supplement to CoRank, so that the quality of automatic summarization can be further enhanced. As a result, CoRank can serve as an important building block of intelligent summarization systems. Experimental results on two real-life datasets including nearly 600 documents demonstrate the effectiveness of the proposed methods.
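The word-sentence co-ranking idea can be illustrated with a minimal mutual-reinforcement sketch in Python. This is a hypothetical simplification: CoRank's exact update rules, damping scheme, and redundancy elimination step are not reproduced here.

```python
import math

def co_rank(sentences, iterations=50):
    # Build the word-sentence incidence structure.
    tokenized = [s.lower().split() for s in sentences]
    words = sorted({w for toks in tokenized for w in toks})
    w_index = {w: i for i, w in enumerate(words)}

    s_score = [1.0] * len(sentences)
    w_score = [1.0] * len(words)
    for _ in range(iterations):
        # A word is important if important sentences contain it ...
        new_w = [0.0] * len(words)
        for si, toks in enumerate(tokenized):
            for w in toks:
                new_w[w_index[w]] += s_score[si]
        norm = math.sqrt(sum(x * x for x in new_w)) or 1.0
        w_score = [x / norm for x in new_w]
        # ... and a sentence is important if it contains important words.
        new_s = [sum(w_score[w_index[w]] for w in toks) for toks in tokenized]
        norm = math.sqrt(sum(x * x for x in new_s)) or 1.0
        s_score = [x / norm for x in new_s]
    return s_score

docs = ["the cat sat on the mat", "the dog barked", "a cat and a dog met"]
scores = co_rank(docs)
```

Each iteration propagates scores from sentences to the words they contain and back, normalizing at every step; the returned vector ranks sentences for extraction.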
3.
Text summarization is the process of automatically creating a shorter version of one or more text documents. It is an important way of finding relevant information in large text libraries or on the Internet. Essentially, text summarization techniques are classified as extractive and abstractive. Extractive techniques perform text summarization by selecting sentences of documents according to some criteria. Abstractive summaries attempt to improve the coherence among sentences by eliminating redundancies and clarifying the context of sentences. Sentence scoring is the technique most used for extractive text summarization. This paper describes and performs a quantitative and qualitative assessment of 15 algorithms for sentence scoring available in the literature. Three different datasets (News, Blogs and Article contexts) were evaluated. In addition, directions to improve the sentence extraction results obtained are suggested.
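One classic scoring criterion of the kind surveyed in such assessments (word frequency, in the style of Luhn) can be sketched as follows; none of the 15 evaluated algorithms is reproduced exactly here.

```python
from collections import Counter

def score_by_word_frequency(sentences):
    tokens = [s.lower().split() for s in sentences]
    freq = Counter(w for toks in tokens for w in toks)
    # Score each sentence by the average corpus frequency of its words.
    return [sum(freq[w] for w in toks) / max(len(toks), 1) for toks in tokens]

def extract_summary(sentences, k=1):
    scores = score_by_word_frequency(sentences)
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    chosen = sorted(ranked[:k])          # keep original document order
    return [sentences[i] for i in chosen]

doc = ["apples are red", "apples are sweet", "the sky is blue"]
summary = extract_summary(doc, k=1)
```

The extractive pipeline is always the same shape: score every sentence, take the top k, and emit them in their original order.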
4.
A new method that improves the HITS algorithm is proposed. The method combines document content with heuristic cues such as phrases, sentence length, and first-sentence priority to discover subtopics across multiple documents, and converts document subtopic features into graph nodes for ranking. Experiments on the DUC 2004 data show that the method is an effective approach to multi-document summarization.
5.
Extensible Markup Language (XML) is a simple, flexible text format derived from SGML, originally designed to support large-scale electronic publishing. Nowadays XML plays a fundamental role in the exchange of a wide variety of data on the Web. As XML allows designers to create their own customized tags and enables the definition, transmission, validation, and interpretation of data between applications, devices, and organizations, many works in soft computing employ XML to take control of and responsibility for information, such as the fuzzy markup language, and accordingly there are many XML-based data and documents. However, most mobile and interactive ubiquitous multimedia devices have restricted hardware such as CPU, memory, and display screen. It is therefore essential to compress an XML document/element collection into a brief summary before it is delivered to the user according to his/her information need. Query-oriented XML text summarization aims to provide users a brief and readable substitute for the original retrieved documents/elements according to the user’s query, which can relieve users’ reading burden effectively. We propose a query-oriented XML summarization system, QXMLSum, which extracts sentences and combines them into a summary based on three kinds of features: the user’s query, the content of XML documents/elements, and the structure of XML documents/elements. Experiments on the IEEE-CS datasets used in the Initiative for the Evaluation of XML Retrieval show that the query-oriented XML summary generated by QXMLSum is competitive.
6.
Four modern systems of automatic text summarization are tested on the basis of a model vocabulary composed by subjects. Distribution of terms of the vocabulary in the source text is compared with their distribution in summaries of different length generated by the systems. Principles for evaluation of the efficiency of the current systems of automatic text summarization are described.
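The underlying evaluation idea, measuring how well a generated summary covers a subject-built model vocabulary, can be sketched as below. This is a simplification: the paper compares full term distributions, not just set coverage.

```python
def vocabulary_coverage(summary_text, model_vocabulary):
    """Fraction of the model vocabulary that appears in the summary."""
    summary_terms = set(summary_text.lower().split())
    hits = sum(1 for term in model_vocabulary if term in summary_terms)
    return hits / len(model_vocabulary)

# Hypothetical model vocabulary collected from human subjects.
vocab = {"summarization", "evaluation", "vocabulary"}
cov = vocabulary_coverage("automatic summarization and its evaluation", vocab)
```

A distribution-based comparison would additionally weight each term by its frequency in the source text rather than counting it once.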
7.
This paper describes an experimental method for automatic text genre recognition based on 45 statistical, lexical, syntactic, positional, and discursive parameters. The suggested method includes: (1) the development of software permitting heterogeneous parameters to be normalized and clustered using the k-means algorithm; (2) the verification of parameters; (3) the selection of the parameters that are the most significant for scientific, newspaper, and artistic texts using two-factor analysis algorithms. Adaptive summarization algorithms have been developed based on these parameters.
8.
In this paper, state-of-the-art methods of automatic text summarization that build summaries in the form of generic extracts are considered. The original text is represented as a numerical matrix: matrix columns correspond to text sentences, and each sentence is represented as a vector in the term space. Latent semantic analysis is then applied to this matrix to construct a representation of the sentences in the topic space, whose dimensionality is much lower than that of the initial term space. The most important sentences are chosen on the basis of their representation in the topic space, and the number of important sentences is defined by the required summary length. This paper also presents a new generic text summarization method that uses nonnegative matrix factorization to estimate sentence relevance. The proposed relevance estimate is based on normalization of the topic space and subsequent weighting of each topic using the sentence representations in the topic space. The proposed method shows better summarization quality and performance than state-of-the-art methods on the DUC 2001 and DUC 2002 standard data sets.
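The relevance-estimation step can be sketched as follows, assuming the sentence-by-topic weight matrix W has already been produced by nonnegative matrix factorization; here W is hard-coded for illustration, and the weighting scheme is a plausible stand-in rather than the paper's exact formula.

```python
import math

# W[i][t] = weight of topic t in sentence i (hypothetical NMF output).
W = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.5, 0.5],
]

# Weight each topic by its total mass across all sentences.
topic_mass = [sum(row[t] for row in W) for t in range(len(W[0]))]
total = sum(topic_mass)
topic_weight = [m / total for m in topic_mass]

def relevance(row):
    # Normalize the sentence's topic vector, then take the
    # topic-weighted sum of its components.
    norm = math.sqrt(sum(x * x for x in row)) or 1.0
    return sum(topic_weight[t] * (row[t] / norm) for t in range(len(row)))

scores = [relevance(row) for row in W]
top = max(range(len(W)), key=lambda i: scores[i])
```

With these numbers the sentence spread evenly across both topics scores highest, which illustrates why normalizing the topic space matters: raw topic magnitudes alone would not reward balanced coverage.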
9.
Automatic summarization techniques should obtain similarity values that are as accurate as possible in order to determine the weights of sentences or paragraphs, but the commonly used computation based on the vector space model ignores the order of words within sentences, paragraphs, and texts. This paper proposes a new similarity measure based on adjacent word-order groups and applies it to automatic text summarization. A clustering-based method is used to build vector representations of word-order groups, which in turn characterize sentences, paragraphs, and texts; similarity results based on word-order groups of different lengths are combined by linear interpolation. In addition, a new weighting index for sentences and paragraphs is proposed, based on the accumulated importance of the word-order groups they contain. Experiments show that using word-order information can effectively improve the quality of automatic summaries.
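The adjacent word-order-group similarity with linear interpolation can be sketched with plain n-gram overlap. This is a simplification: the clustering-based vector representation of word-order groups is not reproduced.

```python
def ngram_overlap(a_tokens, b_tokens, n):
    # Jaccard overlap of the sets of adjacent n-word groups.
    a = {tuple(a_tokens[i:i + n]) for i in range(len(a_tokens) - n + 1)}
    b = {tuple(b_tokens[i:i + n]) for i in range(len(b_tokens) - n + 1)}
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def interpolated_similarity(s1, s2, weights=(0.5, 0.3, 0.2)):
    t1, t2 = s1.lower().split(), s2.lower().split()
    # Linearly interpolate unigram, bigram, and trigram overlap.
    return sum(w * ngram_overlap(t1, t2, n + 1) for n, w in enumerate(weights))

sim_same = interpolated_similarity("the cat sat", "the cat sat")
sim_reordered = interpolated_similarity("the cat sat", "sat cat the")
sim_diff = interpolated_similarity("the cat sat", "dogs bark loudly")
```

The reordered pair shares all its unigrams but no bigrams or trigrams, so its score drops below that of the identical pair: exactly the word-order sensitivity a bag-of-words model lacks.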
11.
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing one piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest ranked sentences in the 14 systems, as in a voting system. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts appears therefore as suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features.
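The simplest network variant, ranking sentences by node degree, can be sketched as follows. Shared words stand in for the shared meaningful nouns of the original, which would require a part-of-speech tagger.

```python
def degree_rank(sentences):
    """Build a sentence graph and return each sentence's node degree."""
    token_sets = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if token_sets[i] & token_sets[j]:   # shared word => edge
                degree[i] += 1
                degree[j] += 1
    return degree

doc = ["cats chase mice", "mice eat cheese", "cheese smells strong", "birds fly"]
deg = degree_rank(doc)
```

The other CN-Summ variants replace degree with metrics such as shortest-path length or k-core membership computed on the same graph.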
12.
In text summarization, relevance and coverage are two main criteria that decide the quality of a summary. In this paper, we propose a new multi-document summarization approach, SumCR, via sentence extraction. A novel feature called Exemplar is introduced to deal with these two concerns simultaneously during sentence ranking. Unlike conventional approaches where the relevance value of each sentence is calculated over the whole collection of sentences, the Exemplar value of each sentence in SumCR is obtained within a subset of similar sentences. A fuzzy medoid-based clustering approach is used to produce sentence clusters or subsets, each of which corresponds to a subtopic of the related topic. This subtopic-based feature captures the relevance of each sentence within different subtopics and thus enhances the chance of SumCR producing a summary with wider coverage and less redundancy. Another feature we incorporate in SumCR is Position, i.e., the position at which each sentence appears in the corresponding document. The final score of each sentence is a combination of the subtopic-level feature Exemplar and the document-level feature Position. Experimental studies on DUC benchmark data show the good performance of SumCR and its potential in summarization tasks.
13.
Abstractive Text Summarization (ATS) is the task of constructing summary sentences by merging facts from different source sentences and condensing them into a shorter representation while preserving information content and overall meaning. It is very difficult and time-consuming for human beings to manually summarize large documents of text. In this paper, we propose an LSTM-CNN based ATS framework (ATSDL) that can construct new sentences by exploring fragments more fine-grained than sentences, namely, semantic phrases. Different from existing abstraction-based approaches, ATSDL is composed of two main stages: the first extracts phrases from source sentences, and the second generates text summaries using deep learning. Experimental results on the CNN and DailyMail datasets show that our ATSDL framework outperforms the state-of-the-art models in terms of both semantics and syntactic structure, and achieves competitive results on manual linguistic quality evaluation.
14.
In traditional generative models, gradients tend to vanish or explode after repeated propagation, and language understanding remains insufficient. To address these shortcomings, a generative (abstractive) automatic text summarization method (BiGRUAtten-LSTM) is proposed. On the encoder side, the original text is fed into the encoder and combined with bidirectional gated recurrent units to produce a fixed-length semantic vector, and an attention mechanism assigns a weight to each input word to reduce the loss of detail from the input sequence. On the decoder side, an LST...
15.
Automatic summarization is a topic of common concern in computational linguistics and information science, since a computer system for text summarization is considered an effective means of processing information resources. A method of text summarization based on latent semantic indexing (LSI), which uses semantic indexing to calculate sentence similarity, is proposed in this article. It improves the accuracy of sentence similarity calculations and subject delineation, and helps the generated abstracts cover the documents comprehensively while reducing redundancy. The effectiveness of the method is demonstrated by the experimental results: compared with the traditional keyword-based vector space model method of automatic text summarization, the quality of the generated abstracts is significantly improved.
16.
Text summarization presents several challenges, such as accounting for semantic relationships among words and dealing with redundancy and information diversity issues. Seeking to overcome these problems, we propose in this paper a new graph-based Arabic summarization system that combines statistical and semantic analysis. The proposed approach utilizes the hierarchical structure and relations of an ontology to provide a more accurate similarity measurement between terms in order to improve the quality of the summary. The proposed method is based on a two-dimensional graph model that makes use of statistical and semantic similarities. The statistical similarity is based on the content overlap between two sentences, while the semantic similarity is computed from information extracted from a lexical database, whose use enables our system to reason by measuring the semantic distance between real human concepts. The weighted ranking algorithm PageRank is run on the graph to produce a significance score for every document sentence, and the score of each sentence is then refined by adding other statistical features. In addition, we address redundancy and information diversity issues by using an adapted version of the Maximal Marginal Relevance method. Experimental results on the EASC and our own datasets showed the effectiveness of our proposed approach over existing summarization systems.
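The Maximal Marginal Relevance step for redundancy control can be sketched generically as below. The relevance scores and the Jaccard similarity here are simple stand-ins, not the paper's PageRank-plus-features scores or its adapted MMR variant.

```python
def mmr_select(candidates, relevance, similarity, k=2, lam=0.7):
    """Greedily pick k items balancing relevance against redundancy."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def mmr(i):
            redundancy = max((similarity(candidates[i], candidates[j])
                              for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected

def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

sents = ["a b c", "a b d", "x y z"]
rel = [1.0, 0.95, 0.8]
picked = mmr_select(sents, rel, jaccard, k=2)
```

Although the second sentence is more relevant than the third, its overlap with the first already-selected sentence lets the dissimilar third sentence win, which is exactly the diversity effect MMR is designed to produce.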
17.
This article focuses on the problems of applying artificial intelligence to represent legal knowledge. The volume of legal knowledge used in practice is unusually large, and therefore an ontological knowledge representation is proposed for semantic analysis, for the presentation and use of a common vocabulary, and for knowledge integration of the problem domain. Certain features of legal knowledge representation in Ukraine have been taken into account. A software package has been developed to work with the ontology. The main features of the program complex, which has a Web-based interface and supports multi-user filling of the knowledge base, are described. The crowdsourcing method is to be used for filling the knowledge base of legal information; the success of this method is explained by the principle of self-organization of information. However, such collective work produces a number of errors, which are distributed throughout the structure of the ontology. The results of applying this program complex are discussed at the end of the article, and ways of improving the considered technique are outlined.
18.
Artificial Intelligence Review - Nowadays a huge amount of information is available from both online and offline sources. For a single topic, hundreds of articles are available, containing...
19.
The paper classifies systems of automatic text summarization and considers in detail the architecture and algorithms of surface-level systems. It formulates a universal criterion for the selection of linguistic units during information search, which is an interpretation of Zipf's law. The problems and prospects in the development of this object domain are considered.
20.
Buyers in online auctions write feedback comments to the sellers from whom they have bought the items. Other bidders read them to determine which item to bid for. In this research, we aim at helping bidders by summarizing the feedback comments. First, we examine feedback comments in online auctions. From the results of the examination, we propose a method called social summarization method. It uses social relationships in online auctions for summarizing feedback comments. This method extracts feedback comments which the buyers seemed to have written from their heart. We implement a system based on our method and evaluate its effectiveness. The results are that our method deleted 80.8% of courteous comments (comments with almost no information). We also found that there are two types of comments in the summaries: comments that are generally infrequent and seem to have been written with real feeling and comments that are generally frequent and seem to have been written with real feeling. Finally, we propose an interactive presentation method of the summaries which identifies the types of the comments. The user experiment indicates that this presentation helps users judge which seller to bid for.