14 similar documents found; search took 125 ms.
1.
Adaptive Key-Frame Extraction Based on Affinity Propagation Clustering (total citations: 3, self: 0, others: 3)
Key-frame extraction is an important component of content-based video retrieval. To extract key frames effectively from different types of video, an adaptive key-frame extraction algorithm based on affinity propagation clustering is proposed. A similarity matrix over the frames of a video shot is first built from the images' color features; key frames are then extracted adaptively by affinity propagation clustering. Because the algorithm starts from the distribution of the video's own information, it adaptively finds the optimal key frames and runs quickly. Experiments show that the algorithm extracts the optimal key frames effectively and is fast and robust.
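The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic frames, negative L1 distance between normalized color histograms as the similarity, and scikit-learn's AffinityPropagation on the precomputed matrix; the exemplar indices serve as key frames.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def color_histograms(frames, bins=8):
    """Per-frame 3-D color histogram (frames: N x H x W x 3, values in [0, 256))."""
    return np.stack([
        np.histogramdd(f.reshape(-1, 3), bins=(bins,) * 3,
                       range=[(0, 256)] * 3)[0].ravel()
        for f in frames
    ])

def ap_key_frames(frames, bins=8):
    """Cluster frames by histogram similarity; AP exemplars are the key frames."""
    hists = color_histograms(frames, bins)
    hists /= hists.sum(axis=1, keepdims=True)              # normalize to unit mass
    # Similarity = negative L1 distance between normalized histograms.
    sim = -np.abs(hists[:, None, :] - hists[None, :, :]).sum(axis=2)
    # A low preference favors few exemplars; AP still picks their number itself.
    ap = AffinityPropagation(affinity="precomputed", preference=sim.min(),
                             random_state=0).fit(sim)
    return sorted(set(int(i) for i in ap.cluster_centers_indices_))

# Two synthetic "shots": dark frames followed by bright frames.
rng = np.random.default_rng(0)
frames = np.concatenate([
    rng.integers(0, 60, size=(5, 16, 16, 3)),
    rng.integers(180, 250, size=(5, 16, 16, 3)),
])
keys = ap_key_frames(frames)   # one exemplar frame per shot
```

Using the precomputed-affinity mode keeps the clustering step independent of the feature choice, so the color histograms could be swapped for any other frame descriptor.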
2.
A Key-Frame Extraction Method Combining Mutual Information and Fuzzy Clustering (total citations: 1, self: 0, others: 1)
A key frame is an image frame that characterizes a shot; it usually reflects the shot's main content, so key-frame extraction underpins video analysis and content-based video retrieval. A key-frame extraction method combining mutual information and fuzzy clustering is proposed. On one hand, shot detection on video segments via a mutual-information measure preserves the video's temporal order and dynamic information; on the other, fuzzy clustering makes the key frames within each shot reflect the shot's main content well. Finally, a key-frame extraction system for MPEG-4 video was built; experiments show that the key frames it extracts represent the video content well and facilitate video analysis and retrieval.
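The mutual-information shot-detection step can be sketched roughly as below, under simplifying assumptions: grayscale frames, MI estimated from the joint histogram of co-located pixels in consecutive frames, and a hand-picked threshold (the paper's exact estimator and threshold policy are not specified here). A cut between two frames makes their pixel values nearly independent, so MI drops sharply.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """MI (in bits) between the gray-level distributions of two equally sized
    frames, estimated from the joint histogram of co-located pixels."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of frame a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of frame b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def shot_boundaries(frames, threshold):
    """Indices i where the transition frame i -> i+1 has MI below threshold."""
    return [i for i in range(len(frames) - 1)
            if mutual_information(frames[i], frames[i + 1]) < threshold]

# Two synthetic shots: each shot is a fixed base image plus small noise,
# so consecutive frames within a shot are highly dependent.
rng = np.random.default_rng(1)
base_a = rng.uniform(0, 255, (32, 32))
base_b = rng.uniform(0, 255, (32, 32))
frames = np.stack([np.clip(b + rng.normal(0, 3, b.shape), 0, 255)
                   for b in [base_a] * 4 + [base_b] * 4])
cuts = shot_boundaries(frames, threshold=1.0)   # expect a cut after frame 3
```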
3.
Key-Frame Extraction Combining an Improved Ant Colony Algorithm with Agglomerative Clustering (total citations: 1, self: 0, others: 1)
Key-frame extraction plays an important role in content-based video retrieval. To extract key frames effectively from different types of video, a key-frame extraction algorithm combining an improved ant colony algorithm with agglomerative clustering is proposed. The method extracts color and edge feature vectors from every frame, clusters them in a self-organizing manner with the improved ant colony algorithm to obtain initial clusters, and then refines those into final clusters with an agglomerative algorithm. From each final cluster, the frame whose vector lies closest to the cluster center is taken as a key frame. Experimental results show that the extracted key frames not only express the video's main content fully but also vary in number appropriately with changes in the video content.
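A sketch of the second half of this method: agglomerative clustering of per-frame feature vectors, then the frame nearest each cluster mean as the key frame. The ant-colony initialization is omitted here (scikit-learn's plain agglomerative clustering stands in for the whole clustering stage), and the features are synthetic.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def key_frames_by_agglomeration(features, n_clusters):
    """Agglomerative clustering of per-frame feature vectors; within each
    cluster, the frame nearest the cluster mean is taken as the key frame."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
    keys = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        center = features[idx].mean(axis=0)
        keys.append(int(idx[np.argmin(
            np.linalg.norm(features[idx] - center, axis=1))]))
    return sorted(keys)

# Two tight groups of synthetic frame features (e.g. color + edge vectors).
rng = np.random.default_rng(2)
features = np.vstack([rng.normal(0, 0.1, (6, 8)),
                      rng.normal(5, 0.1, (6, 8))])
keys = key_frames_by_agglomeration(features, n_clusters=2)
```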
4.
5.
6.
To address the shortcomings of key-frame extraction via k-means clustering, a key-frame extraction algorithm that optimizes the initial cluster centers is proposed. The initial centers are determined by the distribution of the video data itself, which stabilizes the results, and the number of clusters k is no longer fixed by a given value but adapts to the video content to find its best value. Experiments show the algorithm adapts well and that the resulting key frames represent the video content effectively.
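One way to realize both ideas is sketched below; it is an illustration, not the paper's algorithm. Data-driven initialization is approximated with k-means++ seeding, and the adaptive choice of k with a silhouette-score sweep (the paper's actual criterion is not given in the abstract).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def adaptive_kmeans_key_frames(features, k_max=6):
    """Pick k by silhouette score (a stand-in for the paper's criterion) and
    use k-means++ seeding so initial centers follow the data distribution."""
    best = None
    for k in range(2, min(k_max, len(features) - 1) + 1):
        km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit(features)
        score = silhouette_score(features, km.labels_)
        if best is None or score > best[0]:
            best = (score, km)
    km = best[1]
    # Key frame per cluster: the frame closest to its own center.
    d = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
    return sorted(int(np.flatnonzero(km.labels_ == c)[np.argmin(d[km.labels_ == c])])
                  for c in range(km.n_clusters))

# Three well-separated groups of synthetic frame features: k should adapt to 3.
rng = np.random.default_rng(3)
features = np.vstack([rng.normal(m, 0.2, (8, 4)) for m in (0, 4, 8)])
keys = adaptive_kmeans_key_frames(features)
```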
7.
8.
9.
Key-frame extraction is an important technique in content-based video summarization. Affinity propagation clustering is introduced here, for the first time, to extract video key frames. The method combines the color-histogram intersection of consecutive frame pairs and clusters the data points automatically via message passing. It is compared with key-frame extraction based on k-means and SVC (support vector clustering). Experimental results show that AP (affinity propagation) clustering extracts key frames quickly and accurately, and the resulting video summaries have a good compression ratio and content coverage.
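The histogram-intersection similarity this abstract relies on is simple to state: the sum of bin-wise minima of two histograms, which equals 1 for identical unit-mass histograms and shrinks as their color distributions diverge. A minimal sketch, with made-up 3-bin histograms:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima, in [0, 1] for
    histograms normalized to unit mass."""
    return float(np.minimum(h1, h2).sum())

def frame_similarity_matrix(hists):
    """Pairwise histogram-intersection similarities, usable e.g. as the
    precomputed input to an affinity propagation clusterer."""
    n = len(hists)
    sim = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = hist_intersection(hists[i], hists[j])
    return sim

# Three toy normalized color histograms: frames 0 and 1 similar, frame 2 not.
h = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.0, 0.1, 0.9]])
sim = frame_similarity_matrix(h)
```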
10.
Liu Xiaonan (刘晓楠). 《计算机与数字工程》 (Computer & Digital Engineering), 2010, 38(7): 26-29
A key-frame extraction algorithm based on two-pass content clustering is proposed. When computing inter-frame similarity, the algorithm partitions each image into blocks and assigns different weights to different blocks, emphasizing the semantically important regions of the image. An adaptive threshold then drives a first clustering pass over the video; inter-cluster distances are computed, and a second clustering pass yields the final clusters, from each of which the frame closest to the cluster center is selected as a key frame. The second pass overcomes the redundancy that a single clustering pass can produce. Experiments show that the key frames this algorithm extracts reflect the original video content more completely and accurately.
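The block-weighted similarity step can be sketched as below. This is a rough illustration under assumed specifics the abstract does not give: grayscale frames, a 3x3 block grid, per-block histogram intersection, and a weighting that simply doubles the center block's weight as a stand-in for "semantically important" regions.

```python
import numpy as np

def block_weighted_similarity(f1, f2, grid=3, bins=8):
    """Split two frames into grid x grid blocks, compare per-block gray
    histograms by intersection, and combine with higher weight on the center
    block (the weighting scheme here is an illustrative assumption)."""
    h, w = f1.shape
    weights = np.ones((grid, grid))
    weights[grid // 2, grid // 2] = 2.0        # emphasize the center block
    weights /= weights.sum()
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            rows = slice(i * h // grid, (i + 1) * h // grid)
            cols = slice(j * w // grid, (j + 1) * w // grid)
            h1, _ = np.histogram(f1[rows, cols], bins=bins, range=(0, 256))
            h2, _ = np.histogram(f2[rows, cols], bins=bins, range=(0, 256))
            total += weights[i, j] * np.minimum(h1 / h1.sum(),
                                                h2 / h2.sum()).sum()
    return total

a = np.full((30, 30), 100.0)
b = np.full((30, 30), 100.0)
sim_same = block_weighted_similarity(a, b)   # identical frames -> 1.0
```

Two frames that differ only in the center block lose exactly that block's weight, so a mismatch in the emphasized region costs more than one at the border.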
11.
Key-frame extraction is an important component of content-based video retrieval, and the effectiveness of the extracted key frames directly affects the retrieval results. A key-frame extraction method based on clustering by nonparametric density estimation is proposed. Color and motion features are first extracted from the images, and mean-shift clustering is then applied to the feature space that fuses the color and motion information. The method determines the number of clusters automatically and converges strictly, which greatly reduces the computation and increases speed. Experiments show that the extraction results agree well with human subjective visual perception.
12.
The purpose of video segmentation is to segment a video sequence into shots, where each shot represents a sequence of frames having the same content, and then to select key frames from each shot for indexing. Existing video segmentation methods can be classified into two groups: the shot change detection (SCD) approach, for which thresholds have to be pre-assigned, and the clustering approach, for which prior knowledge of the number of clusters is required. In this paper, we propose a video segmentation method using a histogram-based fuzzy c-means (HBFCM) clustering algorithm. This algorithm is a hybrid of the two aforementioned approaches and is designed to overcome the drawbacks of both. The HBFCM clustering algorithm is composed of three phases: the feature extraction phase, the clustering phase, and the key-frame selection phase. In the first phase, differences between color histograms are extracted as features. In the second phase, fuzzy c-means (FCM) is used to group features into three clusters: the shot change (SC) cluster, the suspected shot change (SSC) cluster, and the no shot change (NSC) cluster. In the last phase, shot-change frames are identified from the SC and SSC clusters and then used to segment video sequences into shots. Finally, key frames are selected from each shot. Simulation results indicate that the HBFCM clustering algorithm is robust and applicable to various types of video sequences.
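The clustering phase can be illustrated with a plain fuzzy c-means implementation, not the paper's HBFCM: 1-D histogram-difference features are partitioned softly into three clusters, which play the roles of the NSC, SSC, and SC groups. The data below is synthetic.

```python
import numpy as np

def fcm(data, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (n_samples x c). Plain FCM, not the paper's HBFCM variant."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Synthetic 1-D histogram-difference features:
# low = no change (NSC), middle = suspected change (SSC), high = cut (SC).
data = np.array([[.02], [.03], [.04], [.45], [.50], [.55], [.90], [.95], [1.0]])
centers, u = fcm(data, c=3)
labels = u.argmax(axis=1)   # hard assignment by maximum membership
```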
13.
Suet-Peng Yong, Jeremiah D. Deng, Martin K. Purvis. Multimedia Tools and Applications, 2013, 62(2): 359-376
There is growing evidence that visual saliency can be better modeled using top-down mechanisms that incorporate object semantics. This suggests a new direction for image and video analysis, where semantics extraction can be effectively utilized to improve video summarization, indexing, and retrieval. This paper presents a framework that models semantic contexts for key-frame extraction. The semantic context of video frames is extracted and its sequential changes are monitored, so that significant novelties are located using a one-class classifier. Working with wildlife video frames, the framework performs image segmentation, feature extraction, and matching of image blocks, and then a co-occurrence matrix of semantic labels is constructed to represent the semantic context within the scene. Experiments show that our approach using high-level semantic modeling achieves better key-frame extraction than its counterparts using low-level features.
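The co-occurrence representation can be sketched as follows, under assumptions the abstract does not spell out: each frame is a small grid of semantic block labels (the label set here is hypothetical), and co-occurrence is counted symmetrically over horizontally and vertically adjacent blocks.

```python
import numpy as np

def label_cooccurrence(label_grid, n_labels):
    """Symmetric co-occurrence matrix of semantic block labels over
    horizontally and vertically adjacent blocks; serves as a frame's
    semantic-context signature."""
    m = np.zeros((n_labels, n_labels), dtype=int)
    g = np.asarray(label_grid)
    # Horizontal neighbor pairs, then vertical neighbor pairs.
    for a, b in [(g[:, :-1], g[:, 1:]), (g[:-1, :], g[1:, :])]:
        for x, y in zip(a.ravel(), b.ravel()):
            m[x, y] += 1
            m[y, x] += 1
    return m

# Hypothetical labels for a 3x3 block grid: 0 = sky, 1 = grass, 2 = animal.
grid = [[0, 0, 0],
        [1, 2, 1],
        [1, 1, 1]]
m = label_cooccurrence(grid, 3)
```

Comparing such matrices between consecutive frames (e.g. by a matrix norm of their difference) is one plausible way to monitor sequential changes in semantic context.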
14.
InsightVideo: toward hierarchical video content organization for efficient browsing, summarization and retrieval (total citations: 2, self: 0, others: 2)
Xingquan Zhu, A. K. Elmagarmid, Xiangyang Xue, Lide Wu, A. C. Catlin. IEEE Transactions on Multimedia, 2005, 7(4): 648-666
Hierarchical video browsing and feature-based video retrieval are two standard methods for accessing video content. Very little research, however, has addressed the benefits of integrating these two methods for more effective and efficient video content access. In this paper, we introduce InsightVideo, a video analysis and retrieval system that joins video content hierarchy, hierarchical browsing, and retrieval for efficient video access. We propose several video processing techniques to organize the content hierarchy of the video. We first apply a camera motion classification and key-frame extraction strategy that operates in the compressed domain to extract video features. Then, shot grouping, scene detection, and pairwise scene clustering strategies are applied to construct the video content hierarchy. We introduce a video similarity evaluation scheme at different levels (key-frame, shot, group, scene, and video). By integrating the video content hierarchy and the video similarity evaluation scheme, hierarchical video browsing and retrieval are seamlessly combined for efficient content access. We construct a progressive video retrieval scheme to refine user queries through the interactions of browsing and retrieval. Experimental results and comparisons for camera motion classification, key-frame extraction, scene detection, and video retrieval are presented to validate the effectiveness and efficiency of the proposed algorithms and the performance of the system.