Similar Literature
20 similar documents found (search time: 515 ms)
1.
2.
Keyframe-based video summarization using Delaunay clustering
Recent advances in technology have made tremendous amounts of multimedia information available to the general population. An efficient way of dealing with this development is to build browsing tools that distill multimedia data into information-oriented summaries. Such an approach not only suits resource-poor environments such as wireless and mobile, but also enhances browsing on the wired side for applications like digital libraries and repositories. Automatic summarization and indexing techniques give users an opportunity to browse and select multimedia documents of their choice for complete viewing later. In this paper, we present a technique for automatically gathering the frames of interest in a video for summarization. Our proposed technique is based on Delaunay triangulation for clustering the frames in videos. We represent the frame contents as multi-dimensional point data and use Delaunay triangulation to cluster them. The proposed summarization technique, using Delaunay clusters, generates good-quality summaries with fewer frames and less redundancy than other schemes. In contrast to many other clustering techniques, the Delaunay clustering algorithm is fully automatic, with no user-specified parameters, and is well suited for batch processing. We demonstrate these and other desirable properties of the proposed algorithm by testing it on a collection of videos from the Open Video Project. We provide a meaningful comparison between the results of the proposed summarization technique and the Open Video storyboards and k-means clustering, and evaluate the results in terms of metrics that measure the content-representational value of the proposed technique.
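The clustering step described above can be sketched as follows: frames become points in feature space, a Delaunay triangulation is built over them, unusually long edges are cut, and the remaining connected components form the clusters. Below is a minimal Python illustration using `scipy.spatial.Delaunay`; the mean-plus-one-standard-deviation edge cutoff is a common separating-edge heuristic, not necessarily the exact criterion used in the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points):
    """Cluster points by building a Delaunay triangulation and cutting
    edges longer than mean + one standard deviation of all edge lengths."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = sorted((int(simplex[i]), int(simplex[j])))
                edges.add((a, b))
    lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
    vals = np.array(list(lengths.values()))
    cutoff = vals.mean() + vals.std()
    # union-find over the short ("intra-cluster") edges
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), length in lengths.items():
        if length <= cutoff:
            parent[find(a)] = find(b)
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

With two well-separated blobs of frame features, the long inter-blob edges are cut and each blob becomes one cluster; a keyframe can then be picked per cluster (e.g. the point nearest the centroid).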

3.
VISON: VIdeo Summarization for ONline applications
Recent advances in technology have increased the availability of video data, creating a strong requirement for efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. This has been the goal of a quickly evolving research area known as video summarization. Most existing techniques for summarizing a video sequence have focused on the uncompressed domain. However, decoding and analyzing a video sequence are two extremely time-consuming tasks. Thus, video summaries are usually produced off-line, penalizing any user interaction. The lack of customization is critical, as users often have different demands and resources. Since video data are usually available in compressed form, it is desirable to process video material directly, without decoding. In this paper, we present VISON, a novel approach to video summarization that works in the compressed domain and allows user interaction. The proposed method exploits visual features extracted from the video stream and uses a simple and fast algorithm to summarize the video content. Results from a rigorous empirical comparison with a subjective evaluation show that our technique produces video summaries of high quality relative to state-of-the-art solutions, in a computational time that makes it suitable for online usage.

4.
白晨, 范涛, 王文静, 王国中. 《计算机应用研究》 (Application Research of Computers), 2023, 40(11): 3276-3281, 3288
To address the problems that traditional video summarization algorithms fail to fully exploit a video's multimodal information and struggle to guarantee the temporal consistency of summary segments, this paper proposes MTNet, a video summarization algorithm that fuses multimodal features with temporal-segment detection. First, feature representations of video frames and audio are extracted with pretrained GoogLeNet and VGGish models, and a dimension-smoothing operation is designed to align the two modalities, giving the model comprehensive representational capacity. Second, since a generated summary should be globally representative, long-range temporal features of the visual and audio streams are extracted with single- and double-layer self-attention mechanisms combined with residual connections, yielding a single vector representation over the temporal range. Finally, a decoupled segment-detection scheme with weight sharing predicts the summary boundary and importance of each temporal segment, and non-maximum suppression selects the key segments that form the summary. Experimental results on the two standard datasets SumMe and TVSum show that MTNet has stronger representational capacity and robustness; its F1 score improves on both datasets over DSNet-AF, an anchor-free video summarization algorithm, and VASNet, a summarization algorithm based on shot-importance prediction.
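The final selection step relies on non-maximum suppression over scored temporal segments. Below is a minimal sketch of 1-D temporal NMS in Python; the `(start, end, score)` segment format and the IoU threshold are illustrative assumptions, not MTNet's actual interface.

```python
def temporal_nms(segments, iou_threshold=0.5):
    """1-D non-maximum suppression: segments are (start, end, score) tuples;
    keep the highest-scoring segment, drop those overlapping it, repeat."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0
    kept = []
    for seg in sorted(segments, key=lambda s: s[2], reverse=True):
        if all(iou(seg, k) < iou_threshold for k in kept):
            kept.append(seg)
    return kept
```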

5.
Mining semantically related video content based on a vector space model
Mining and analyzing the semantically related content hidden in massive video databases is a difficult problem for video summary generation. This paper proposes a vector-space-model-based method for mining semantically related video content: news video is preprocessed and converted into a data set of vectors; a topic-keyframe extraction algorithm mines the clustered video content, retaining the keyframes that carry scene-specific information and removing redundant content; and these topic keyframes, arranged in their original temporal order, form the video summary. Experimental results show that news video summaries generated with this content-mining algorithm achieve good compression and content-coverage rates.
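Once frames are represented as vectors, redundancy removal reduces to comparing cosine similarities and keeping only frames that differ enough from those already retained. A toy sketch under that assumption (the paper's actual topic-keyframe algorithm is more involved):

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def topic_keyframes(frame_vectors, sim_threshold=0.9):
    """Keep a frame only if it is not too similar to any frame already
    kept; retained indices stay in their original temporal order."""
    kept = []
    for i, v in enumerate(frame_vectors):
        if all(cosine(v, frame_vectors[j]) < sim_threshold for j in kept):
            kept.append(i)
    return kept
```

The near-duplicate second frame is dropped while the visually distinct third frame survives, matching the "remove redundant content, keep temporal order" behavior described above.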

6.
Summaries are an essential component of video retrieval and browsing systems. Most research in video summarization has focused on content analysis to obtain compact yet comprehensive representations of video items. However, important aspects such as how summaries can be effectively integrated in mobile interfaces, and how to predict their quality and usability, have not been investigated. Conventional summaries are limited to a single instance with a certain length (i.e. a single scale). In contrast, scalable summaries target representations with multiple scales, that is, a set of summaries of increasing length in which longer summaries include more information about the video. Thus, scalability provides high flexibility that can be exploited in devices such as smartphones or tablets to provide versions of the summary adapted to the limited visualization area. In this paper, we explore the application of scalable storyboards to summary adaptation and zoomable video navigation in handheld devices. By introducing a new adaptation dimension related to the summarization scale, we can formulate navigation and adaptation in a two-dimensional adaptation space, where different navigation actions modify the trajectory in that space. We also describe the challenges of evaluating scalable summaries and some usability issues that arise from having multiple scales, proposing objective metrics that can provide useful insight into their potential quality and usability without requiring very costly user studies. Experimental results show a reasonable agreement with the trends shown in subjective evaluations. Experiments also show that content-based scalable storyboards are less redundant and more useful than the content-blind baselines.
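One simple way to obtain the nesting property of scalable summaries, where the scale-k summary is contained in the scale-(k+1) summary, is greedy farthest-point selection over frame features. This is an illustrative construction under that assumption, not the storyboard method evaluated in the paper; the seeding with the first frame is also an arbitrary choice.

```python
import math

def nested_storyboard(frames, max_scale):
    """Greedy farthest-point ordering: the first k selected frames form
    the scale-k summary, so each longer summary extends the shorter one."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    order = [0]                      # seed with the first frame (assumption)
    while len(order) < min(max_scale, len(frames)):
        best, best_d = None, -1.0
        for i in range(len(frames)):
            if i in order:
                continue
            d = min(dist(frames[i], frames[j]) for j in order)
            if d > best_d:
                best, best_d = i, d
        order.append(best)
    # each scale is a prefix of the selection order, shown in temporal order
    return [sorted(order[:k + 1]) for k in range(len(order))]
```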

7.
Exploring video content structure for hierarchical summarization
In this paper, we propose a hierarchical video summarization strategy that explores video content structure to provide users with a scalable, multilevel video summary. First, video-shot-segmentation and keyframe-extraction algorithms are applied to parse video sequences into physical shots and discrete keyframes. Next, an affinity (self-correlation) matrix is constructed to merge visually similar shots into clusters (supergroups). Since video shots with high similarities do not necessarily belong to the same story unit, temporal information is adopted by merging temporally adjacent shots (within a specified distance) from the supergroup into each video group. A video-scene-detection algorithm is then proposed to merge temporally or spatially correlated video groups into scenario units. This is followed by a scene-clustering algorithm that eliminates visual redundancy among the units. A hierarchical video content structure with increasing granularity is constructed from the clustered scenes, video scenes, and video groups down to keyframes. Finally, we introduce a hierarchical video summarization scheme that executes various approaches at different levels of the video content hierarchy to statically or dynamically construct the video summary. Extensive experiments based on real-world videos have been performed to validate the effectiveness of the proposed approach.
Published online: 15 September 2004. Correspondence to: Xingquan Zhu. This research was supported by the NSF under grants 9972883-EIA, 9974255-IIS, 9983248-EIA, and 0209120-IIS, a grant from the State of Indiana 21st Century Fund, and by the U.S. Army Research Laboratory and the U.S. Army Research Office under grant DAAD19-02-1-0178.

8.
The fast evolution of digital video has brought many new multimedia applications and, as a consequence, has increased the amount of research into new technologies that aim at improving the effectiveness and efficiency of video acquisition, archiving, cataloging and indexing, as well as increasing the usability of stored videos. Among possible research areas, video summarization is an important topic that potentially enables faster browsing of large video collections as well as more efficient content indexing and access. Essentially, this research area consists of automatically generating a short summary of a video, which can be either a static summary or a dynamic summary. In this paper, we present VSUMM, a methodology for the production of static video summaries. The method is based on color feature extraction from video frames and the k-means clustering algorithm. As an additional contribution, we also develop a novel approach for the evaluation of static video summaries. In this evaluation methodology, video summaries are manually created by users; then these user-created summaries are compared both to our approach and to a number of different techniques in the literature. Experimental results show, with a confidence level of 98%, that the proposed solution produced static video summaries of superior quality relative to the approaches to which it was compared.
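The core of the pipeline, clustering per-frame color features with k-means and taking the frame nearest each centroid as a keyframe, can be sketched with a small NumPy implementation. The feature vectors and initialization are illustrative; VSUMM's actual preprocessing and evaluation steps are omitted.

```python
import numpy as np

def kmeans_keyframes(features, k, iters=50, seed=0):
    """Toy k-means over per-frame feature vectors (e.g. color histograms);
    the summary is the index of the frame closest to each cluster centroid."""
    rng = np.random.default_rng(seed)
    X = np.asarray(features, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return sorted(set(int(d[:, c].argmin()) for c in range(k)))
```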

9.
10.
A sliding-window-based algorithm for microblog timeline summarization
Timeline summarization is a technique that condenses text content and generates an overview along the time dimension. Traditional timeline summarization has mainly studied long texts such as news, whereas this paper addresses timeline summarization of short microblog texts. Because the content features of short microblog texts are limited, a summary cannot be generated from text content alone; this paper therefore evaluates timeline summaries with three measures, content coverage, temporal distribution, and propagation influence, and proposes MTSW (Microblog timeline summarization based on sliding window). The algorithm first uses term strength and entropy to determine representative terms; it then builds a combined evaluation metric for timeline summaries from the three measures above; finally, it traverses the sequence of microblog messages along the timeline with a sliding window to generate the timeline summary. Experiments on a real microblog data set show that the timeline summaries generated by MTSW effectively reflect how hot events develop and evolve.
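The sliding-window traversal can be illustrated with a toy version that keeps only one score per post (labeled influence here) instead of MTSW's combined three-part metric; the `(timestamp, influence, text)` post format is an assumption for the sketch.

```python
def timeline_summary(posts, window, step):
    """posts: list of (timestamp, influence, text). Slide a window along
    the timeline and keep the most influential post in each window."""
    if not posts:
        return []
    posts = sorted(posts)
    t0, t_end = posts[0][0], posts[-1][0]
    summary, t = [], t0
    while t <= t_end:
        in_win = [p for p in posts if t <= p[0] < t + window]
        if in_win:
            best = max(in_win, key=lambda p: p[1])
            if not summary or summary[-1] != best:
                summary.append(best)
        t += step
    return [p[2] for p in summary]
```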

11.
Dynamic video summarization using two-level redundancy detection
The mushrooming growth of video information necessitates progress in content-based video analysis techniques. Video summarization, which aims to provide a short summary of an original video document, has drawn much attention in recent years. In this paper, we propose an algorithm for video summarization with a two-level redundancy detection procedure. Through video segmentation and cast indexing, the algorithm first constructs storyboards to let users know the main scenes and cast (when the video has a cast). It then removes redundant video content using hierarchical agglomerative clustering at the key-frame level. Impact factors of scenes and key frames are defined, and a subset of key frames is selected to generate the initial video summary. Finally, a repetitive-frame-segment detection procedure removes redundant information from the initial video summary. Results of experimental applications to TV series, movies and cartoons illustrate the proposed algorithm.
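The key-frame-level redundancy removal uses hierarchical agglomerative clustering. Below is a minimal single-linkage sketch in plain Python, merging clusters until the closest pair exceeds a stopping distance; the paper's linkage criterion and stopping rule may differ.

```python
import math

def agglomerative_merge(frames, stop_dist):
    """Single-link hierarchical agglomerative clustering over key-frame
    feature vectors: repeatedly merge the two closest clusters until the
    closest remaining pair is farther apart than stop_dist."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    clusters = [[i] for i in range(len(frames))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(frames[a], frames[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > stop_dist:
            break
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```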

12.
Video summarization and retrieval using singular value decomposition
In this paper, we propose novel video summarization and retrieval systems based on unique properties of singular value decomposition (SVD). Through mathematical analysis, we derive SVD properties that capture both the temporal and spatial characteristics of the input video in the singular-vector space. Using these SVD properties, we are able to summarize a video by outputting a motion video summary of a user-specified length. The motion video summary aims to eliminate visual redundancies while assigning equal show time to equal amounts of visual content in the original video program. The same SVD properties can also be used to categorize and retrieve video shots based on their temporal and spatial characteristics. As an extended application of the derived SVD properties, we propose a system that retrieves video shots according to their degree of visual change, color-distribution uniformity, and visual similarity.
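The singular-vector-space representation can be illustrated as follows: stack per-frame feature vectors into a matrix, compute its SVD, and use the norm of each frame's projection in the truncated singular space as a rough proxy for its amount of visual content. This is a sketch of the general idea only, not the specific properties derived in the paper.

```python
import numpy as np

def svd_shot_energy(frame_features, rank=2):
    """Project frames into a truncated singular-vector space; the norm of
    a frame's projection serves as a proxy for its visual content."""
    A = np.asarray(frame_features, dtype=float)   # rows = frames
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    coords = U[:, :rank] * S[:rank]               # frames in singular space
    return np.linalg.norm(coords, axis=1)
```

With `rank` equal to the full rank, the projection norms reduce to the original row norms, which makes the mapping easy to sanity-check.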

13.
Video shot boundary detection (SBD) is a fundamental step in automatic video content analysis toward video indexing, summarization and retrieval. Despite valuable previous work in the literature, reliable detection of video shots remains a challenging issue with many unsolved problems. In this paper, we focus on the problem of hard cut detection and propose an automatic algorithm to accurately detect abrupt transitions in video. We suggest a fuzzy rule-based scene-cut identification approach in which a set of fuzzy rules is evaluated to detect cuts. The main advantage of the proposed method is that we incorporate spatial and temporal features to describe video frames, and model cut situations according to the temporal dependency of video frames as a set of fuzzy rules. Also, while existing cut-detection algorithms are mainly threshold-dependent, our method identifies cut transitions using fuzzy logic, which is more flexible. The proposed algorithm is evaluated on a variety of video sequences from different genres. Experimental results, in comparison with standard cut-detection algorithms, confirm that our method is more robust to object and camera movements as well as illumination changes.
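A toy version of the fuzzy rule "a hard cut needs a high frame difference flanked by low neighboring differences" can be written with simple ramp membership functions; the membership breakpoints and firing threshold below are illustrative assumptions, not the paper's rule base.

```python
def high(x, lo=0.3, hi=0.7):
    """Ramp membership for 'large inter-frame difference'."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def detect_cuts(diffs, fire=0.5):
    """Toy fuzzy rule: a hard cut at i requires a HIGH difference at i
    and NOT-HIGH differences at its temporal neighbours (min = fuzzy AND)."""
    cuts = []
    for i in range(1, len(diffs) - 1):
        strength = min(high(diffs[i]),
                       1.0 - high(diffs[i - 1]),
                       1.0 - high(diffs[i + 1]))
        if strength >= fire:
            cuts.append(i)
    return cuts
```

A sustained run of high differences (e.g. a gradual transition or camera flash sequence) fires no rule, which is the flexibility the fuzzy formulation buys over a single hard threshold.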

14.
Source-code summaries help software developers understand code quickly and help maintainers complete maintenance tasks faster. However, writing summaries by hand is costly and inefficient, so researchers have tried to make computers generate summaries for source code automatically. In recent years, neural-network-based code summarization has become the mainstream technique for automatic source-code summarization and a research hotspot in software engineering. This paper first explains the concept of a code summary and the definition of automatic code summarization, then reviews automatic code summarization techniques…

15.
16.
Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.

17.
In this paper we address extractive summarization of long threads in online discussion fora. We present an elaborate user evaluation study to determine human preferences in forum summarization and to create a reference data set. We showed long threads to ten different raters and asked them to create a summary by selecting the posts that they considered to be the most important for the thread. We study the agreement between human raters on the summarization task, and we show how multiple reference summaries can be combined to develop a successful model for automatic summarization. We found that although the inter-rater agreement for the summarization task was slight to fair, the automatic summarizer obtained reasonable results in terms of precision, recall, and ROUGE. Moreover, when human raters were asked to choose between the summary created by another human and the summary created by our model in a blind side-by-side comparison, they judged the model’s summary equal to or better than the human summary in over half of the cases. This shows that even for a summarization task with low inter-rater agreement, a model can be trained that generates sensible summaries. In addition, we investigated the potential for personalized summarization. However, the results for the three raters involved in this experiment were inconclusive. We release the reference summaries as a publicly available dataset.
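The ROUGE scores mentioned above reduce, in the unigram case, to token-overlap precision and recall against a reference summary. A minimal ROUGE-1 sketch follows; real ROUGE implementations add stemming, stopword handling, and multiple-reference support.

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram overlap: precision, recall and F1 of candidate tokens
    against a single reference summary."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())          # clipped unigram matches
    p = overlap / max(sum(c.values()), 1)
    rec = overlap / max(sum(r.values()), 1)
    f1 = 2 * p * rec / (p + rec) if p + rec else 0.0
    return p, rec, f1
```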

18.
Automatic text summarization is an essential tool in this era of information overload. In this paper we present an automatic extractive Arabic text summarization system in which the user can cap the size of the final summary. It is a direct system involving no machine learning. We use a two-pass algorithm: in the first pass, we produce a primary summary using Rhetorical Structure Theory (RST); in the second pass, we assign a score to each sentence in the primary summary. These scores help us generate the final summary. For the final output, sentences are selected so as to maximize the overall score of the summary, whose size must not exceed the user-selected limit. We used ROUGE to evaluate our system-generated summaries of various lengths against those produced by a (human) news editorial professional. Experiments on sample texts show our system outperforming some existing Arabic summarization systems, including those that require machine learning.
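The final selection step, maximizing total sentence score without exceeding the user-selected size cap, is a knapsack-style problem. A simple greedy sketch under that framing (the paper's exact optimization may differ); the word-count size measure is an assumption:

```python
def select_sentences(scored, cap):
    """scored: list of (score, sentence). Greedily add the highest-scoring
    sentences while the total word count stays within the cap, then emit
    the chosen sentences in their original order."""
    chosen, used = [], 0
    for idx, (score, sent) in sorted(enumerate(scored),
                                     key=lambda t: t[1][0], reverse=True):
        n = len(sent.split())
        if used + n <= cap:
            chosen.append(idx)
            used += n
    return [scored[i][1] for i in sorted(chosen)]
```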

19.
Video summarization is a technique for condensing video content and is essential for quickly understanding a video. How to evaluate the quality of a video summary is a question worth studying. This paper builds a video summary evaluation model based on the analytic hierarchy process (AHP): overall summary quality is the evaluation goal; content reasonableness and structural reasonableness are the criteria; and content completeness, special importance, overall fluency, and similar measures form the indicator layer, yielding an evaluation index system for video summaries. Finally, the proposed evaluation method is validated experimentally on three summarization algorithms, random generation, generation based on close-up faces, and generation fusing visual and audio features, showing that the method can effectively reflect the quality of a video summary.
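In the analytic hierarchy process, the criterion weights come from the principal eigenvector of a pairwise-comparison matrix. A minimal power-iteration sketch (the comparison values themselves would come from expert judgments, which are not given in the abstract):

```python
def ahp_weights(M, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix M by power iteration; the normalized eigenvector gives the
    criterion weights used in an AHP evaluation model."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

For a consistent matrix (here: criterion 1 judged twice as important as criterion 2), the weights recover the stated ratio exactly.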

20.
This paper presents a state-of-the-art review of feature extraction for soccer video summarization research. All existing approaches to event detection, video summarization based on the video stream, and the application of text sources in event detection are surveyed. Regarding the current challenges of automatic, real-time provision of summary videos, different computer vision approaches are discussed and compared. Audio and video feature extraction methods, and their combination with textual methods, are investigated. Available commercial products are presented to better clarify the boundaries of this domain, and future directions for the improvement of existing systems are suggested.
