Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper presents a literature review of summarization of software artifacts, focusing on bug reports, source code, mailing lists, and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools were proposed to meet the ongoing demand for improving software performance and quality and for helping developers understand the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data, researchers have applied different machine learning and data mining techniques to generate summaries. This paper therefore first provides a general perspective on the state of the art, describing the types of artifacts, the approaches to summarization, and the portions of experimental procedure common across these artifacts. We then discuss the applications of summarization, i.e., which practical tasks have been accomplished through summarization. Next, the paper presents tools that were built for, or employed in, summarization tasks. In addition, we present the summary evaluation methods employed in the selected studies, as well as other important factors used to evaluate generated summaries, such as adequacy and quality. We also briefly cover modern communication channels and the complementarities and commonalities among different software artifacts. Finally, we discuss challenges that apply to the existing studies in general, along with future research directions. This survey gives future researchers broad, useful background knowledge on the main aspects of this research field.

2.
Microblog Timeline Summarization Based on a Sliding Window
Timeline summarization condenses the content of texts along the time dimension and generates an overview. Traditional timeline summarization has mainly targeted long texts such as news articles, whereas this paper studies timeline summarization of short microblog posts. Because the content features of short microblog texts are limited, a summary cannot be generated from textual content alone. This paper therefore evaluates timeline summaries with three criteria: content coverage, temporal distribution, and propagation influence, and proposes a sliding-window algorithm for microblog timeline summarization (MTSW). The algorithm first identifies representative terms using term strength and entropy; it then builds a combined evaluation measure from the three criteria above; finally, it slides a window over the time-ordered sequence of microblog messages to generate the timeline summary. Experiments on a real microblog dataset show that the timeline summaries generated by MTSW effectively reflect how trending events unfold and evolve.
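A minimal sketch of the windowed-selection idea in Python. All names here are illustrative, and the scoring is a plain term-frequency stand-in: the paper's MTSW algorithm combines coverage with temporal distribution and propagation influence, which this sketch omits.

```python
from collections import Counter

def timeline_summary(posts, window=3, per_window=1):
    """Walk fixed-size windows over time-ordered (timestamp, text) posts
    and keep the highest-scoring post(s) in each window.  Scoring is a
    stand-in: coverage of frequent collection terms only."""
    vocab = Counter(w for _, text in posts for w in text.split())
    def score(text):
        return sum(vocab[w] for w in set(text.split()))
    posts = sorted(posts, key=lambda p: p[0])  # order by timestamp
    summary = []
    for start in range(0, len(posts), window):
        chunk = posts[start:start + window]
        chunk = sorted(chunk, key=lambda p: score(p[1]), reverse=True)
        summary.extend(chunk[:per_window])
    return sorted(summary, key=lambda p: p[0])
```

The output preserves timeline order, so the selected posts read as a chronological digest of the event.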

3.
We present an optimization-based unsupervised approach to automatic document summarization. In the proposed approach, text summarization is modeled as a Boolean programming problem. The model optimizes three properties: (1) relevance: the summary should contain informative textual units that are relevant to the user; (2) redundancy: the summary should not contain multiple textual units that convey the same information; and (3) length: the summary is bounded in length. The approach applies to both single- and multi-document summarization. In both tasks, documents are split into sentences during preprocessing, salient sentences are selected from the document(s), and the summary is generated by threading the selected sentences in the order in which they appear in the original document(s). We implemented our model on the multi-document summarization task. Comparing our method with several existing summarization methods on the open DUC2005 and DUC2007 data sets, we found that it improves the summarization results significantly. This is because, first, when extracting summary sentences, the method considers not only the relevance of a sentence to the whole sentence collection but also how representative it is of the topic; second, when generating a summary, it also addresses repetition of information. The methods were evaluated using the ROUGE-1, ROUGE-2, and ROUGE-SU4 metrics. We also demonstrate that the summarization result depends on the similarity measure: the experiments showed that combining symmetric and asymmetric similarity measures yields better results than using either separately.
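The relevance/redundancy/length trade-off above can be sketched as a greedy approximation of a 0-1 selection model. The scoring functions below (document-frequency relevance, Jaccard redundancy) are illustrative stand-ins, not the authors' exact Boolean programming formulation:

```python
from collections import Counter

def greedy_select(sentences, max_words, lam=1.0):
    """Greedily pick sentences maximising relevance minus a redundancy
    penalty against already-chosen sentences, under a word budget."""
    df = Counter(w for s in sentences for w in set(s.split()))
    words = [set(s.split()) for s in sentences]
    rel = [sum(df[w] for w in ws) for ws in words]     # relevance score
    def redundancy(i, chosen):
        return max((len(words[i] & words[j]) / len(words[i] | words[j])
                    for j in chosen), default=0.0)     # max Jaccard overlap
    chosen, used, pool = [], 0, set(range(len(sentences)))
    while pool:
        i = max(pool, key=lambda i: rel[i] * (1 - lam * redundancy(i, chosen)))
        pool.remove(i)
        if used + len(sentences[i].split()) <= max_words:  # length bound
            chosen.append(i)
            used += len(sentences[i].split())
    return [sentences[i] for i in sorted(chosen)]      # original order
```

Exact solvers for the Boolean program are possible for small inputs; the greedy pass is the usual cheap approximation.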

4.
In this paper we address extractive summarization of long threads in online discussion fora. We present an elaborate user evaluation study to determine human preferences in forum summarization and to create a reference data set. We showed long threads to ten different raters and asked them to create a summary by selecting the posts that they considered to be the most important for the thread. We study the agreement between human raters on the summarization task, and we show how multiple reference summaries can be combined to develop a successful model for automatic summarization. We found that although the inter-rater agreement for the summarization task was slight to fair, the automatic summarizer obtained reasonable results in terms of precision, recall, and ROUGE. Moreover, when human raters were asked to choose between the summary created by another human and the summary created by our model in a blind side-by-side comparison, they judged the model’s summary equal to or better than the human summary in over half of the cases. This shows that even for a summarization task with low inter-rater agreement, a model can be trained that generates sensible summaries. In addition, we investigated the potential for personalized summarization. However, the results for the three raters involved in this experiment were inconclusive. We release the reference summaries as a publicly available dataset.
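One plausible way to combine several raters' post selections into a single reference, and to score a prediction against it, can be sketched as follows. The vote threshold and the set-based precision/recall are assumptions for illustration; the paper's own aggregation may differ:

```python
from collections import Counter

def combine_references(selections, min_votes=2):
    """Merge raters' selected-post sets into one reference: a post is
    'important' if at least min_votes raters selected it."""
    votes = Counter(p for sel in selections for p in sel)
    return {p for p, v in votes.items() if v >= min_votes}

def precision_recall(predicted, reference):
    """Set-based precision and recall of a predicted post selection."""
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall
```

Raising `min_votes` yields a stricter reference, which is one way to probe how low inter-rater agreement affects the reported scores.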

5.
Summaries of source code help software developers understand code quickly and help maintainers complete maintenance tasks faster. Writing summaries by hand, however, is costly and inefficient, so researchers have tried to use computers to generate source code summaries automatically. In recent years, neural-network-based code summarization has become the mainstream approach to automatic source code summarization and a research hotspot in software engineering. This paper first explains the concept of code summarization and the definition of automatic code summarization, then reviews automatic code summarization techniques…

6.
Data summarization allows analysts to explore datasets that may be too complex or too large to visualize in detail. Designers face a number of design and implementation choices when using summarization in visual analytics systems. While these choices influence the utility of the resulting system, there are no clear guidelines for the use of these summarization techniques. In this paper, we codify summarization use in existing systems to identify key factors in the design of summary visualizations. We use quantitative content analysis to systematically survey examples of visual analytics systems and enumerate the use of these design factors in data summarization. Through this analysis, we expose the relationship between design considerations, strategies for data summarization in visualization systems, and how different summarization methods influence the analyses supported by systems. We use these results to synthesize common patterns in real‐world use of summary visualizations and highlight open challenges and opportunities that these patterns offer for designing effective systems. This work provides a more principled understanding of design practices for summary visualization and offers insight into underutilized approaches.

7.
Summarization is an important intermediate step in expediting knowledge discovery tasks such as anomaly detection. In the context of anomaly detection from a data stream, the summary needs to represent both anomalous and normal data. But streaming data has distinct characteristics, such as the one-pass constraint, that make data mining operations difficult, and existing stream summarization techniques are unable to create a summary that represents both normal and anomalous instances. To address this problem, this paper designs and develops a number of hybrid summarization techniques, based on the concept of a reservoir, for anomaly detection from network traffic. Experimental results on thirteen benchmark data streams show that the summaries produced using pairwise distance (PSSR) and template matching (TMSSR) retain more anomalies than those of existing stream summarization techniques, and that an anomaly detection technique can then identify the anomalies with a high true positive and low false positive rate.
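The base mechanism behind reservoir-style stream summaries is classic Algorithm R: keep a uniform sample of k items from a one-pass stream in O(k) memory. The paper's hybrid PSSR/TMSSR techniques build on top of such a reservoir; this sketch shows only the underlying sampler:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Algorithm R: after seeing i items, each item is in the reservoir
    with probability k/i, using a single pass and O(k) space."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill phase
        else:
            j = rng.randint(0, i)           # inclusive bounds
            if j < k:
                reservoir[j] = item         # replace uniformly at random
    return reservoir
```

A hybrid variant would bias which reservoir slot is replaced, e.g. by pairwise distance to existing entries, so rare (anomalous) instances survive longer in the summary.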

8.
Video summarization aims to condense a video while preserving its main content, greatly reducing the time people spend browsing videos. A key step in video summarization is evaluating the quality of the generated summary, and most existing methods evaluate it against the entire video. Evaluating over the whole video sequence, however, is computationally expensive, especially for long videos, and it tends to ignore the temporal relations inherent in video data, so the generated summaries lack a coherent storyline. This paper therefore proposes a video summarization network that attends to local information, called the self-attention and local reward video summarization network (ALRSN). Specifically, the model uses a self-attention mechanism to predict importance scores for video frames and generates the summary from those scores. To evaluate the generated summary, a local reward function is further designed that jointly considers the local diversity and local representativeness of the summary. The function maps the generated summary back to the original video and evaluates it within a local range, so the summary preserves the temporal structure of the original video. By earning higher reward scores within local ranges, the model generates more diverse and more representative video summaries. Comprehensive experiments show that ALRSN outperforms existing methods on the two benchmark datasets SumMe and TvSum.

9.
A text summary should contain all the important information in the source text, but summaries generated by traditional encoder-decoder models are often inaccurate. Exploiting the correlation between text classification and text summarization, this paper proposes a multi-task learning summarization model. It learns abstract information from an auxiliary text classification task to improve the quality of generated summaries. The K-means clustering algorithm is used to build Cluster-2, Cluster-10, and Cluster-20 text classification datasets for training the classifier, and the effect on the summarization model of training with the different classification datasets is studied; a discriminative method based on statistical distributions is also used to comprehensively assess summary accuracy. Experimental results on the CNNDM test set show that the model improves over a strong baseline by 0.23, 0.17, and 0.31 percentage points on ROUGE-1, ROUGE-2, and ROUGE-L respectively, generating more accurate summaries.

10.
VISON: VIdeo Summarization for ONline applications
Recent advances in technology have increased the availability of video data, creating a strong need for efficient systems to manage those materials. Making efficient use of video information requires that the data be accessed in a user-friendly way, which has been the goal of a quickly evolving research area known as video summarization. Most existing techniques for summarizing a video sequence operate in the uncompressed domain. However, decoding and analyzing a video sequence are both extremely time-consuming, so video summaries are usually produced off-line, precluding user interaction. This lack of customization is critical, as users often have different demands and resources, and since video data are usually available in compressed form, it is desirable to process video material directly without decoding. In this paper, we present VISON, a novel approach to video summarization that works in the compressed domain and allows user interaction. The proposed method exploits visual features extracted from the video stream and uses a simple, fast algorithm to summarize the video content. Results from a rigorous empirical comparison with a subjective evaluation show that our technique produces video summaries of high quality relative to state-of-the-art solutions, in a computational time that makes it suitable for online usage.

11.
Source code documentation often contains summaries of the code written by its authors. Recently, automatic source code summarization tools have emerged that generate summaries without author intervention. These summaries are designed to help readers understand the high-level concepts of the source code. Unfortunately, there is no agreed-upon understanding of what makes a “good summary.” This paper presents an empirical study examining summaries of source code written by authors, readers, and automatic source code summarization tools. The study examines the textual similarity between source code and its summaries using Short Text Semantic Similarity metrics. We found that readers use source code in their summaries more than authors do. Additionally, the study finds that the accuracy of a human-written summary can be estimated from the textual similarity of that summary to the source code.
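A much simpler stand-in for the Short Text Semantic Similarity metrics used in the study is bag-of-words cosine similarity over identifier-style tokens; it captures the same basic signal (how much vocabulary a summary shares with the code), though real STSS metrics also use semantic resources:

```python
import math
import re
from collections import Counter

def token_cosine(a, b):
    """Cosine similarity between token-count vectors of two strings,
    tokenised as identifier-like words (letters, digits, underscores)."""
    tok = lambda s: Counter(re.findall(r"[A-Za-z_]\w*", s.lower()))
    ca, cb = tok(a), tok(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Comparing `token_cosine(code, summary)` for author-written versus reader-written summaries would reproduce, in miniature, the kind of measurement the study performs.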

12.
13.
In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that uses the priority curve to create a summary, of any desired length, of any video. The resulting summary exhibits good continuity and avoids repetition. We implemented the priority curve algorithm (PriCA) and compared it with other summarization algorithms in the literature with respect to both performance and output quality. The quality of the summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We show that PriCA is faster than existing algorithms and also produces better-quality summaries. We also briefly describe a soccer video summarization system built on the PriCA architecture and various classical image processing algorithms.
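A toy reading of "use the priority curve to cut a summary of a desired length" is a running-window maximisation over per-frame priorities: pick the contiguous block with the highest total priority, which trivially guarantees continuity and no repetition. PriCA itself is more elaborate; this only illustrates the priority-curve idea:

```python
def summarize_by_priority(priorities, budget):
    """Return the (start, end) index range of length `budget` whose
    total priority is highest, found in one pass with a sliding sum."""
    window = sum(priorities[:budget])
    best_sum, best_start = window, 0
    for i in range(budget, len(priorities)):
        window += priorities[i] - priorities[i - budget]  # slide by one
        if window > best_sum:
            best_sum, best_start = window, i - budget + 1
    return best_start, best_start + budget
```

A fuller variant would select several disjoint high-priority blocks to cover multiple peaks of the curve.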

14.
As information is available in abundance on every topic on the internet, condensing the important information into a summary would benefit many users, and there is growing interest in the research community in developing new approaches to automatically summarize text. An automatic text summarization system generates a summary, i.e., a short text that includes all the important information of the document. Since the advent of text summarization in the 1950s, researchers have been trying to improve summary generation so that machine-generated summaries match human-made ones. Summaries can be generated by extractive as well as abstractive methods. Abstractive methods are highly complex, as they need extensive natural language processing; the research community has therefore focused more on extractive summarization, trying to achieve more coherent and meaningful summaries. Over the last decade, many extractive approaches implementing a range of machine learning and optimization techniques have been developed for automatic summary generation. This paper presents a comprehensive survey of the extractive text summarization approaches developed in that decade: their requirements are identified, and their advantages and disadvantages are compared. A few abstractive and multilingual text summarization approaches are also covered. Summary evaluation is another challenging issue in this research field, so both intrinsic and extrinsic methods of summary evaluation are described in detail, along with text summarization evaluation conferences and workshops, and evaluation results of extractive summarization approaches on shared DUC datasets are presented. Finally, this paper concludes with a discussion of useful future directions that can help researchers identify areas where further work is needed.

15.
Evaluation of automatic text summarization is challenging because of the difficulty of computing the similarity of two texts. In this paper, we define a new dissimilarity measure, compression dissimilarity, to compute the dissimilarity between documents, and we propose a new automatic evaluation method based on it. The proposed method is completely “black box” and needs no preprocessing steps. Experiments show that compression dissimilarity clearly separates automatic summaries from human summaries. The measure can evaluate an automatic summary by comparison with high-quality human summaries, or by comparison with its original document. The evaluation results are highly correlated with human assessments, and the correlation between the compression dissimilarity of summaries and the compression dissimilarity of documents can serve as a meaningful measure of the consistency of an automatic text summarization system.
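One concrete member of the compression-based dissimilarity family is the well-known normalized compression distance (NCD), computable with any off-the-shelf compressor. The paper defines its own compression dissimilarity, so treat this as an illustration of the idea, not the authors' exact measure:

```python
import zlib

def compression_dissimilarity(x, y):
    """Normalized compression distance between two strings using zlib:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).  Texts that share
    structure compress well together, giving a small distance."""
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Because the compressor supplies the notion of shared information, the measure needs no tokenisation, stemming, or other preprocessing, matching the "black box" property claimed above.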

16.
Managing large-scale time series databases has recently attracted significant attention in the database community. Related fundamental problems such as dimensionality reduction, transformation, pattern mining, and similarity search have been studied extensively. Although time series data are dynamic by nature, as in data streams, current solutions to these fundamental problems have mostly targeted static time series databases. In this paper, we first propose a framework for online summary generation over large-scale, dynamic time series data such as data streams. We then propose online transform-based summarization techniques over data streams that can be updated in constant time and space. We present both exact and approximate versions of the proposed techniques and provide error bounds for the approximate case. One of our main contributions is an extensive performance analysis: our experiments carefully evaluate the quality of the online summaries for point, range, and knn queries using real-life dynamic data sets of substantial size.
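The "updated in constant time and space" property can be illustrated with a piecewise-aggregate summary maintained online: each fixed-width segment of the stream is reduced to a few aggregates, and every new value touches only the current segment. This is a simple stand-in for the paper's transform-based summaries, not their technique:

```python
class OnlineSegmentSummary:
    """Piecewise-aggregate stream summary: each width-w segment keeps
    (count, sum, min, max), updated in O(1) time per arriving value."""
    def __init__(self, width):
        self.width = width
        self.n = 0
        self.segments = []  # one [count, sum, min, max] per segment
    def update(self, value):
        if self.n % self.width == 0:
            self.segments.append([0, 0.0, value, value])  # open segment
        seg = self.segments[-1]
        seg[0] += 1
        seg[1] += value
        seg[2] = min(seg[2], value)
        seg[3] = max(seg[3], value)
        self.n += 1
    def means(self):
        return [s / c for c, s, _, _ in self.segments]
```

Point and range queries can then be answered approximately from the per-segment aggregates, with error bounded by the within-segment min/max spread.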

17.
The amount of data generated during the execution of a business process is growing, and as a consequence it is increasingly hard to extract useful information from it. Linguistic summarization helps point business analysts toward useful information by verbalizing interesting patterns that exist in the data. In previous work we showed how linguistic summarization can be used to automatically generate diagnostic statements about event logs, such as ‘for most cases that contained the sequence ABC, the throughput time was long’. However, we also showed that our technique produced too many of these statements to be useful in a practical setting. This paper therefore presents a novel technique for linguistic summarization of event logs that generates summaries concise enough to be used in a practical setting, while at the same time enriching them by also enabling conjunctive statements. The improved technique is based on pruning and clustering of linguistic summaries, and we show that it reduces the number of summary statements by 80–100% compared to previous work. In a survey among 51 practitioners, we found that practitioners consider linguistic summarization useful and easy to use, and that they intend to use it if it were commercially available.
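The core of a quantified statement like "for most cases with sequence ABC, the throughput time was long" is a degree of truth for the quantifier "most". A minimal sketch, assuming crisp predicates and a piecewise-linear membership for "most" (the paper's technique uses fuzzy predicates plus pruning and clustering on top of statements like this):

```python
def truth_of_most(cases, antecedent, consequent):
    """Degree of truth of 'for most cases satisfying antecedent, the
    consequent holds', with 'most' as a piecewise-linear membership:
    0 below a 0.3 ratio, 1 above 0.8, linear in between (assumed)."""
    matching = [c for c in cases if antecedent(c)]
    if not matching:
        return 0.0
    ratio = sum(1 for c in matching if consequent(c)) / len(matching)
    return min(1.0, max(0.0, (ratio - 0.3) / 0.5))
```

Pruning then keeps only statements whose degree of truth exceeds a threshold, and clustering merges statements that verbalize near-identical patterns.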

18.
A summarization technique creates a concise version of a large amount of data, which reduces the computational cost of analysis and decision-making. Some interesting data patterns, such as rare anomalies, occur far less frequently than other data instances; in a smart healthcare environment, for example, the proportion of infrequent patterns in the underlying cyber-physical system (CPS) is very low. Existing summarization techniques overlook the issue of representing such interesting infrequent patterns in a summary. In this paper, a novel clustering-based technique is proposed that uses an information-theoretic measure to identify infrequent patterns for inclusion in a summary. Experiments conducted on seven benchmark CPS datasets show substantially better results in terms of including infrequent patterns in summaries than existing techniques.
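The information-theoretic idea can be sketched with self-information: score each pattern by -log2 p(pattern), so rare patterns score high and a fixed-size summary that keeps top-scoring instances retains the anomalies that plain frequency-based summaries drop. The paper's exact measure differs; this only shows the principle:

```python
import math
from collections import Counter

def rarity_scores(patterns):
    """Map each distinct pattern to its self-information -log2 p(x),
    estimated from empirical frequencies: rarer means higher score."""
    counts = Counter(patterns)
    total = len(patterns)
    return {p: -math.log2(c / total) for p, c in counts.items()}
```

A clustering-based variant would score whole clusters instead of single patterns, reserving part of the summary budget for low-probability clusters.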

19.
Keyframe-based video summarization using Delaunay clustering
Recent advances in technology have made tremendous amounts of multimedia information available to the general population. An efficient way of dealing with this development is to build browsing tools that distill multimedia data into information-oriented summaries. Such an approach not only suits resource-poor environments such as wireless and mobile, but also enhances browsing on the wired side for applications like digital libraries and repositories. Automatic summarization and indexing techniques give users an opportunity to browse and select multimedia documents of their choice for complete viewing later. In this paper, we present a technique for automatically gathering the frames of interest in a video for purposes of summarization. We represent the frame contents as multi-dimensional point data and cluster them using Delaunay triangulation. We propose a novel video summarization technique that uses the resulting Delaunay clusters to generate good-quality summaries with fewer frames and less redundancy than other schemes. In contrast to many other clustering techniques, the Delaunay clustering algorithm is fully automatic, with no user-specified parameters, and is well suited for batch processing. We demonstrate these and other desirable properties of the proposed algorithm by testing it on a collection of videos from the Open Video Project. We provide a meaningful comparison between the results of the proposed summarization technique, the Open Video storyboard, and K-means clustering, and evaluate the results in terms of metrics that measure the content-representational value of the proposed technique.

20.
Zhou Kai, Li Fang. 《计算机应用与软件》 (Computer Applications and Software), 2009, 26(6): 231-232, 255
This paper studies event summarization methods in depth and proposes a mechanism for summarizing Chinese breaking-news events based on sentence features and fuzzy inference. The mechanism generates a summary for each news article by jointly considering the importance of sentence features and their intrinsic relevance to the user's needs, then clusters, ranks, and extracts sentences from the summaries of all news articles about an event to produce a multi-topic event summary. Experiments on a corpus of Chinese breaking-news events show that the mechanism can effectively generate summaries for such events.
