Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
Keyframe-based video summarization using Delaunay clustering (Cited by 1: 0 self, 1 other)
Recent advances in technology have made tremendous amounts of multimedia information available to the general population. An efficient way of dealing with this development is to build browsing tools that distill multimedia data into information-oriented summaries. Such an approach not only suits resource-poor environments such as wireless and mobile, but also enhances browsing on the wired side for applications like digital libraries and repositories. Automatic summarization and indexing techniques give users an opportunity to browse and select multimedia documents of their choice for complete viewing later. In this paper, we present a technique for automatically gathering the frames of interest in a video for summarization. The proposed technique uses Delaunay triangulation to cluster the frames of a video: we represent frame contents as multi-dimensional point data and cluster them via the triangulation. The resulting Delaunay clusters yield good-quality summaries with fewer frames and less redundancy than competing schemes. In contrast to many other clustering techniques, the Delaunay clustering algorithm is fully automatic, with no user-specified parameters, and is well suited for batch processing. We demonstrate these and other desirable properties of the proposed algorithm by testing it on a collection of videos from the Open Video Project, provide a meaningful comparison between the results of the proposed summarization technique, the Open Video storyboard, and k-means clustering, and evaluate the results with metrics that measure the content-representational value of the technique.
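As a rough illustration of the Delaunay-clustering idea in this abstract, the sketch below (not the authors' implementation) treats frames as 2-D feature points, builds a Delaunay triangulation with SciPy, and cuts edges longer than the global mean plus one standard deviation before taking connected components as clusters. The cut rule and the toy feature points are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_cluster(points):
    """Cluster points by building a Delaunay triangulation and removing
    edges that are unusually long (longer than mean + one std)."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # collect unique edges
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add(tuple(sorted((simplex[i], simplex[j]))))
    edges = list(edges)
    lengths = np.array([np.linalg.norm(points[a] - points[b]) for a, b in edges])
    threshold = lengths.mean() + lengths.std()

    # union-find over the short ("intra-cluster") edges
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), d in zip(edges, lengths):
        if d <= threshold:
            parent[find(a)] = find(b)
    return [find(i) for i in range(len(points))]

# two well-separated groups of "frame feature" points
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5],
                [10, 10], [11, 10], [10, 11], [11, 11], [10.5, 10.5]], float)
labels = delaunay_cluster(pts)
```

With no user-chosen parameters beyond the edge-length rule, the two groups of points fall out as two clusters, mirroring the "fully automatic" property the abstract emphasizes.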

3.
Image and video retrieval techniques (Cited by 10: 0 self, 10 other)
With the development of network technology, multimedia data will become the main content of network services, so research on multimedia data management has become a hot topic in recent years. Because media information is represented in a fundamentally different way, traditional relational-database retrieval methods are no longer suitable for images and video, and content-based retrieval must be adopted instead. This article comprehensively surveys content-based image and video retrieval techniques at different levels, covering techniques based on basic features such as color, texture, shape, and spatial relationships; video scene segmentation and keyframe extraction; and retrieval based on audio and text. It also discusses the advantages and disadvantages of each method, the current state of the art, and future directions.

4.
5.
Chen Zhuoyi, Computer Science, 2007, 34(4): 119-120
Keyframe extraction is an important component of content-based video retrieval, and the effectiveness of the extracted keyframes directly affects the retrieval results. This paper proposes a keyframe extraction method based on nonparametric density-estimation clustering. First, color and motion features are extracted from the frames; mean shift clustering is then applied to the feature space that fuses the color and motion information. The method determines the number of clusters automatically and has strict convergence properties, which greatly reduces the amount of computation and improves speed. Experiments show that the extraction results agree well with human subjective visual perception.
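The mean-shift step described above might look like the following minimal flat-kernel sketch. The fused color/motion feature vectors, the bandwidth, and the mode-merging rule are all simplifying assumptions for illustration, not the paper's actual settings.

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50):
    """Flat-kernel mean shift: move each point to the mean of its
    neighbours until convergence, then merge nearby modes into labels."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            nbrs = points[np.linalg.norm(points - m, axis=1) <= bandwidth]
            modes[i] = nbrs.mean(axis=0)
    # merge modes that landed close together into cluster labels
    labels, centers = [], []
    for m in modes:
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels.append(k)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return np.array(labels), centers

# toy fused colour/motion feature vectors for six frames
feats = np.array([[0, 0], [0.2, 0], [0, 0.2],
                  [5, 5], [5.2, 5], [5, 5.2]], float)
labels, centers = mean_shift(feats, bandwidth=1.0)
# one keyframe per cluster: the frame closest to its cluster mode
keyframes = [int(np.argmin(np.linalg.norm(feats - c, axis=1))) for c in centers]
```

Note that, as the abstract stresses, the number of clusters is not specified in advance; it emerges from where the shifted points converge.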

6.
Video annotation is an important issue in video content management systems. The rapid growth of digital video data has created a need for efficient mechanisms that ease the annotation process. In this paper, we propose a novel hierarchical-clustering-based system for video annotation. The proposed system generates a top-down hierarchy of the video streams using hierarchical k-means clustering: a tree-based structure is produced by dividing the video recursively into sub-groups, each consisting of similar content. Based on the visual features, each node of the tree is partitioned into its children using k-means clustering. Each sub-group is then represented by its key frame, selected as the frame closest to the centroid of the corresponding cluster, which can be displayed at the higher level of the hierarchy. Experiments show that a very good hierarchical view of the video sequences can be created for efficient annotation.
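A minimal sketch of the top-down hierarchy described above, assuming frames are already reduced to feature vectors. The tiny hand-rolled k-means with deterministic initialisation (first k points) and the stopping rule are assumptions made so the example is reproducible, not the paper's settings.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Tiny Lloyd's k-means with deterministic init (first k points)."""
    centroids = points[:k].copy()
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

def build_hierarchy(points, ids, k=2, min_size=2):
    """Recursively split frames into a tree; each node keeps the id of
    the frame closest to its centroid as its key frame."""
    centroid = points.mean(axis=0)
    key = ids[int(np.argmin(np.linalg.norm(points - centroid, axis=1)))]
    node = {"keyframe": key, "children": []}
    if len(points) > min_size * k:          # stop splitting small groups
        labels, _ = kmeans(points, k)
        for j in range(k):
            mask = labels == j
            if mask.sum() > 0:
                node["children"].append(
                    build_hierarchy(points[mask], ids[mask], k, min_size))
    return node

# six toy frame feature vectors forming two visually similar groups
feats = np.array([[0, 0], [0.1, 0], [0, 0.1],
                  [5, 5], [5.1, 5], [5, 5.1]], float)
tree = build_hierarchy(feats, np.arange(len(feats)))
```

Each node's key frame can then be displayed at its level of the hierarchy, giving the browsable tree view the abstract describes.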

7.
Exploring video content structure for hierarchical summarization (Cited by 4: 0 self, 4 other)
In this paper, we propose a hierarchical video summarization strategy that explores video content structure to provide users with a scalable, multilevel video summary. First, video-shot-segmentation and keyframe-extraction algorithms are applied to parse video sequences into physical shots and discrete keyframes. Next, an affinity (self-correlation) matrix is constructed to merge visually similar shots into clusters (supergroups). Since high similarity between video shots does not necessarily imply that they belong to the same story unit, temporal information is adopted by merging temporally adjacent shots (within a specified distance) from the supergroup into each video group. A video-scene-detection algorithm then merges temporally or spatially correlated video groups into scenario units, followed by a scene-clustering algorithm that eliminates visual redundancy among the units. A hierarchical video content structure with increasing granularity is constructed from the clustered scenes, video scenes, and video groups down to keyframes. Finally, we introduce a hierarchical video summarization scheme that executes various approaches at different levels of the video content hierarchy to statically or dynamically construct the video summary. Extensive experiments based on real-world videos validate the effectiveness of the proposed approach. Published online: 15 September 2004. Correspondence to: Xingquan Zhu. This research has been supported by the NSF under grants 9972883-EIA, 9974255-IIS, 9983248-EIA, and 0209120-IIS, a grant from the state of Indiana 21st Century Fund, and by the U.S. Army Research Laboratory and the U.S. Army Research Office under grant DAAD19-02-1-0178.

8.
9.
VISON: VIdeo Summarization for ONline applications (Cited by 1: 0 self, 1 other)
Recent advances in technology have increased the availability of video data, creating a strong requirement for efficient systems to manage those materials. Making efficient use of video information requires that data be accessed in a user-friendly way. This has been the goal of a quickly evolving research area known as video summarization. Most existing techniques for summarizing a video sequence have focused on the uncompressed domain; however, decoding and analyzing a video sequence are two extremely time-consuming tasks. Thus, video summaries are usually produced off-line, penalizing any user interaction. The lack of customization is critical, as users often have different demands and resources. Since video data are usually available in compressed form, it is desirable to process video material directly, without decoding. In this paper, we present VISON, a novel approach for video summarization that works in the compressed domain and allows user interaction. The proposed method exploits visual features extracted from the video stream and uses a simple, fast algorithm to summarize the video content. Results from a rigorous empirical comparison with a subjective evaluation show that our technique produces video summaries of high quality relative to state-of-the-art solutions, and in a computational time that makes it suitable for online usage.

10.
Keyframe extraction is an important technique in content-based video summarization. This paper is the first to apply affinity propagation (AP) clustering to video keyframe extraction. The method combines the color-histogram intersection of consecutive frames and clusters the data points automatically through message passing. It is compared with keyframe extraction methods based on k-means and SVC (support vector clustering). Experimental results show that AP clustering extracts keyframes quickly and accurately, and the generated video summaries have good compression ratios and content coverage.
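The color-histogram intersection used as the similarity signal in this abstract can be sketched as below; the resulting matrix is what one would feed (for example as a precomputed affinity) to an AP clustering routine. The bin count and the toy "frames" are assumptions for the example.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Similarity of two normalised histograms: sum of bin-wise minima
    (1.0 = identical distributions, 0.0 = disjoint)."""
    return float(np.minimum(h1, h2).sum())

def similarity_matrix(frames, bins=8):
    """Pairwise histogram-intersection similarities between frames,
    usable as a precomputed affinity input for AP-style clustering."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    n = len(hists)
    return np.array([[hist_intersection(hists[i], hists[j])
                      for j in range(n)] for i in range(n)])

# three toy greyscale "frames": two dark, one bright
dark1 = np.full((4, 4), 10)
dark2 = np.full((4, 4), 12)
bright = np.full((4, 4), 250)
S = similarity_matrix([dark1, dark2, bright])
```

The two dark frames score 1.0 against each other and 0.0 against the bright frame, so message passing over this matrix would naturally group them.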

11.
Video summarization has great potential to enable rapid browsing and efficient video indexing in many applications. In this study, we propose a novel compact yet rich key-frame creation method for compressed-video summarization. First, we directly extract the DC coefficients of I-frames from a compressed video stream, and DC-based mutual information is computed to segment the long video into shots. Then, we select shots with a static background and a moving object according to the intensity and range of the motion vectors in the video stream. After detecting moving-object outliers in each selected shot, the optimal object set is selected by importance ranking and by solving an optimization problem. Finally, we apply an improved KNN matting approach to the optimal object outliers to automatically and seamlessly splice them into the final key frame serving as the video summary. Previous video summarization methods typically select one or more frames from the original video as the summary; however, such key-frame representations eliminate the time axis and lose the dynamic aspect of the video scene. The proposed summarization preserves compactness while retaining considerably richer information than previous video summaries. Experimental results indicate that the proposed key-frame representation not only includes abundant semantics but also looks natural, satisfying user preferences.
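The DC-based mutual information used above for shot segmentation can be illustrated as follows. This sketch computes MI between the intensity distributions of two images (standing in for DC-coefficient images); a sharp drop in MI between consecutive frames suggests a shot boundary. Bin count and synthetic frames are assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (in bits) between the intensity distributions
    of two images, computed from their joint histogram. High MI means
    similar frames; a sharp drop suggests a shot boundary."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(32, 32))
same_shot = np.clip(frame + 2, 0, 255)         # nearly identical next frame
new_shot = rng.integers(0, 256, size=(32, 32))  # unrelated frame (a cut)

mi_same = mutual_information(frame, same_shot)
mi_cut = mutual_information(frame, new_shot)
```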

12.
Zhang Yuanyuan, Huang Yijun, Wang Yuefei, Journal of Computer Applications, 2018, 38(12): 3409-3413
Detection, tracking, and information editing of key objects in indoor-scene video are currently handled mostly by manual processing, which is inefficient and imprecise. To address this, a texture-based learning method for semantic annotation of indoor scenes is proposed. First, optical flow is used to obtain motion information between video frames, and keyframe annotations together with the inter-frame motion information initialize the annotation of non-key frames. Then an energy function is constructed from the texture-information constraints of the non-key frames and their initial annotations. Finally, graph cuts are used to optimize the energy function, and its solution is the semantic annotation of the non-key frames. Experimental results on annotation accuracy and visual quality show that, compared with a motion-estimation method and a model-based learning method, the proposed texture-based method performs better. It can serve as a reference for low-latency decision systems such as service robots, smart homes, and emergency response.

13.
The paper presents an automatic video summarization technique based on graph-theoretic methodology and the dominant-sets clustering algorithm. The large size of the video data set is handled by exploiting the connectivity information of prototype frames extracted from a down-sampled version of the original video sequence. The connectivity information for the prototypes, obtained from the whole data set, improves video representation and reveals its structure. Automatic selection of the optimal number of clusters, and thereafter of keyframes, is accomplished in a subsequent step through the dominant-sets clustering algorithm. The method is free of user-specified modeling parameters and is evaluated in terms of several metrics that quantify its content-representational ability. A comparison of the proposed summarization technique with the Open Video storyboard, the adaptive clustering algorithm, and the Delaunay clustering approach is provided.
D. Besiris
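The dominant-sets clustering mentioned above is typically computed with replicator dynamics; the sketch below is a generic illustration under that assumption, with a hand-made affinity matrix rather than real frame similarities, and is not the authors' code.

```python
import numpy as np

def dominant_set(A, iters=200):
    """Extract one dominant set from affinity matrix A via replicator
    dynamics: x <- x * (A x) / (x' A x). The support of the limit x
    (its non-zero entries) is the cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return x

# affinity matrix: frames 0-2 strongly similar, frame 3 an outlier
A = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.9, 0.1],
              [0.8, 0.9, 0.0, 0.1],
              [0.1, 0.1, 0.1, 0.0]])
x = dominant_set(A)
members = np.where(x > 1e-4)[0]      # frames participating in the dominant set
```

The weight of the weakly connected frame decays to zero, so the dominant set consists of the three mutually similar frames; repeating on the remaining frames would peel off further clusters.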

14.
To support effective multimedia information retrieval, video annotation has become an important topic in video content analysis. Existing video annotation methods focus on either the analysis of low-level features or simple semantic concepts, and they cannot reduce the gap between low-level features and high-level concepts. In this paper, we propose an innovative method for semantic video annotation through integrated mining of visual features, speech features, and frequent semantic patterns in the video. The proposed method consists of two main phases: 1) construction of four kinds of predictive annotation models, namely speech-association, visual-association, visual-sequential, and statistical models, from annotated videos; and 2) fusion of these models to annotate un-annotated videos automatically. The main advantage of the proposed method lies in considering visual features, speech features, and semantic patterns simultaneously. Moreover, the use of high-level rules can effectively complement the insufficiency of statistics-based methods in dealing with complex and broad keyword identification in video annotation. Through empirical evaluation on NIST TRECVID video datasets, the proposed approach is shown to enhance annotation performance substantially in terms of precision, recall, and F-measure.

15.
Video keyframe extraction is an important component of video summarization, and the quality of the extracted keyframes directly affects how people understand a video. Most traditional keyframe extraction algorithms are purely visual: they extract low-level features and compute similarity from them alone, ignoring semantic correlation, which easily introduces errors and also causes some redundancy. This paper therefore proposes a semantics-based video keyframe extraction algorithm. The algorithm first uses hierarchical clustering for a preliminary extraction of keyframes; it then applies a semantic-correlation algorithm to compare histograms of the preliminary keyframes and remove redundant frames, yielding the final keyframes. Compared with other algorithms, the keyframes extracted by the proposed algorithm show relatively little redundancy.

16.
An MPEG-7 video semantic summarization system (Cited by 1: 0 self, 1 other)
Xie Bo, Shen Ruimin, Jiang Ji, Computer Simulation, 2004, 21(2): 132-135
This paper designs and implements a video semantic summarization system for the E-Learning domain. It comprises an MPEG-7-compliant annotation subsystem, a search-engine subsystem, and a content-integration subsystem. Focused on E-Learning, the system can not only add descriptions during multimedia post-processing but also record in real time, generate automatic real-time descriptions, and supplement them with speech-based reference descriptions.

17.
18.
To improve the organization and management of Web animation material, this paper proposes an annotation algorithm for Web animation material that fuses textual and visual features. First, automatically extracted context information for the Web animation material is combined with attributes such as its name, page title, URL, and ALT text to form a feature set, from which textual keywords are extracted. Then, using the correlation between visual content and annotation words, the automatically extracted annotation words are filtered, achieving automatic annotation of Web animation material. Experiments show that the proposed fusion-based algorithm can be applied effectively to the automatic annotation of Web animation material.

19.
With the advance of digital video recording and playback systems, the need to efficiently manage recorded TV video programs is evident, so that users can readily locate and browse their favorite programs. In this paper, we propose a multimodal scheme to segment and represent TV video streams. The scheme aims to recover the temporal and structural characteristics of TV programs from visual, auditory, and textual information. For visual cues, we develop a novel concept named program-oriented informative images (POIM) to identify candidate points correlated with the boundaries of individual programs. For audio cues, a multiscale Kullback-Leibler (K-L) distance is proposed to locate audio scene changes (ASC), and ASC is then aligned with video scene changes to represent candidate program boundaries. In addition, latent semantic analysis (LSA) is adopted to calculate the textual content similarity (TCS) between shots, modeling inter-program similarity and intra-program dissimilarity in terms of speech content. Finally, we fuse the multimodal features of POIM, ASC, and TCS to detect program boundaries, including individual commercials (spots). Toward an effective program guide and attractive content browsing, we propose a multimodal representation of individual programs that uses POIM images, key frames, and textual keywords in a summarization manner. Extensive experiments are carried out on the open benchmark TRECVID 2005 corpus, and promising results have been achieved. Compared with the electronic program guide (EPG), our solution provides a more generic approach to determining the exact boundaries of diverse TV programs, even including dramatic spots.
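The multiscale K-L distance used above for audio scene change detection can be illustrated, at a single scale, by the symmetric K-L divergence between Gaussian models fitted to adjacent feature windows. The Gaussian model, the 1-D features, and the synthetic data are simplifications for the example.

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """KL divergence KL(N1 || N2) between two 1-D Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1)

def symmetric_kl(a, b):
    """Symmetric K-L distance between the Gaussian models of two
    feature windows (e.g. frame energies of adjacent audio segments)."""
    m1, v1, m2, v2 = a.mean(), a.var(), b.mean(), b.var()
    return kl_gauss(m1, v1, m2, v2) + kl_gauss(m2, v2, m1, v1)

rng = np.random.default_rng(1)
speech = rng.normal(0.0, 1.0, 500)    # features from one acoustic scene
music = rng.normal(3.0, 2.0, 500)     # features from a different scene
speech2 = rng.normal(0.0, 1.0, 500)   # another window of the first scene

d_change = symmetric_kl(speech, music)   # large: a scene change
d_same = symmetric_kl(speech, speech2)   # small: same scene
```

A peak in this distance between successive windows marks a candidate audio scene change, which the scheme then aligns with video scene changes.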

20.
Automatic video annotation aims to bridge the semantic gap and facilitate concept-based video retrieval by detecting high-level concepts from video data. Recently, utilizing context information has emerged as an important direction in this domain. In this paper, we present a novel video annotation refinement approach that utilizes extrinsic semantic context extracted from video subtitles and intrinsic context among candidate annotation concepts. The extrinsic semantic context is formed by identifying a set of key terms from video subtitles. The semantic similarity between those key terms and the candidate annotation concepts is then exploited to refine initial annotation results, whereas most existing approaches use textual information only heuristically. Similarity measures including Google distance and WordNet distance have been investigated for this refinement, which differs from approaches that derive semantic relationships among concepts from given training datasets. Visualness is also utilized to discriminate individual terms for further refinement. In addition, the Random Walk with Restarts (RWR) technique is employed to perform final refinement of the annotation results by exploring the inter-relationships among annotation concepts. Comprehensive experiments on the TRECVID 2005 dataset demonstrate the effectiveness of the proposed annotation approach and investigate the impact of various factors.
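The Random Walk with Restarts step mentioned above can be sketched as follows: annotation scores are propagated over a concept-affinity graph while repeatedly restarting from the initial detector scores. The toy concept graph, the restart probability, and the initial scores are illustrative assumptions.

```python
import numpy as np

def rwr(W, restart, c=0.3, iters=100):
    """Random Walk with Restarts: iterate r = (1-c) * W_norm @ r + c * restart
    to a steady state. W is a concept-affinity matrix and `restart` holds
    the initial (e.g. detector) annotation scores."""
    Wn = W / W.sum(axis=0, keepdims=True)   # column-normalise to a transition matrix
    r = restart.copy()
    for _ in range(iters):
        r = (1 - c) * (Wn @ r) + c * restart
    return r

# toy concept graph: "car" and "road" strongly related, "fish" isolated
W = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
init = np.array([0.9, 0.0, 0.1])    # initial detection scores per concept
scores = rwr(W, init)
```

After refinement the related concept ("road") inherits score from its strongly detected neighbour ("car"), while the unrelated concept ("fish") keeps only its own weak evidence, which is the kind of context-driven correction the abstract describes.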
