Similar Documents
Found 19 similar documents.
1.
A video summarization method based on mutual information is proposed. The method first performs shot detection using mutual information, then extracts candidate key frames for each shot by clustering the frames of the detected shot. Shot key frames are selected from the candidates by comparing the mutual information between adjacent frames, and finally the shot key frames are arranged in temporal order to form the video summary. Experiments show that the key frame extraction algorithm is effective and that the resulting summaries reflect the content of the original video well.
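As a rough illustration of the mutual-information measure this method is built on, the sketch below estimates MI from the joint gray-level histogram of two frames. The function name, bin count, and range are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Estimate MI (in bits) between the gray levels of two frames
    from their joint histogram."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of frame_a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of frame_b
    nz = pxy > 0                             # skip zero cells: 0*log0 = 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
mi_same = mutual_information(a, a)   # identical frames: high MI
mi_diff = mutual_information(a, b)   # unrelated frames: near zero
```

Within a shot, consecutive frames keep MI high; a sharp drop between successive frames is then a natural shot-boundary cue.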

2.
马慧芳, 刘芳, 夏琴, 郝占军. 《电子学报》(Acta Electronica Sinica), 2018, 46(6): 1410-1414
For keyword extraction from short scientific-literature titles, existing natural-language-processing algorithms struggle to model a publication's recency and authority, and the small number of words in short texts leads to high-dimensional, sparse representations. This paper proposes a keyword extraction algorithm that combines recency and authority to make recommendations to researchers. The method treats each title as a hyperedge and the distinct terms in the title as hypervertices to build a hypergraph, and weights both the hyperedges and the hypervertices; a keyword extraction algorithm based on a random walk over the weighted hypergraph then extracts terms from the titles. Hyperedges are weighted by modeling the literature source, publication year, and citation count; hypervertices are weighted by the relatedness between nodes and the co-occurrence distance of each node pair within a given title. Finally, node importance is computed via a random walk on the hypergraph to determine the recommended keywords. Experiments show that, compared with three baseline short-text keyword extraction algorithms, the proposed algorithm improves both precision and recall.
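Under strong simplifying assumptions, the hypergraph walk can be approximated by a damped random walk on a weighted adjacency matrix; the sketch below ranks terms this way. The hyperedge/hypervertex weighting of the paper is collapsed into plain pairwise weights here, so this is only a stand-in for the actual model:

```python
import numpy as np

def random_walk_scores(W, damping=0.85, iters=100):
    """Importance scores from a damped random walk on a weighted
    adjacency matrix W (hypergraph weighting collapsed to pairwise)."""
    n = W.shape[0]
    col_sums = W.sum(axis=0)
    P = W / np.where(col_sums == 0, 1, col_sums)  # column-stochastic
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P @ r)
    return r / r.sum()

# toy term graph: term 0 co-occurs most strongly with terms 1 and 2
W = np.array([[0, 3, 2, 0],
              [3, 0, 1, 0],
              [2, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = random_walk_scores(W)
top_term = int(np.argmax(scores))   # highest-scoring term is recommended
```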

3.
To automatically obtain video summaries that capture the main content with little redundancy, this paper proposes the LLE-adaptive-FCM and LLE-adaptive-threshold-FCM algorithms. Both methods first use the manifold-learning algorithm locally linear embedding (LLE) to extract feature vectors from the video frames, then feed these vectors into adaptive FCM or adaptive-threshold FCM to obtain the clustering result and cluster centers. Adaptive FCM determines the number of clusters via a cluster-validity function, whereas adaptive-threshold FCM determines it by automatically varying a threshold. Finally, the frame closest to each cluster center is taken as part of the video summary. Experimental results show that, without manual intervention, the extracted summaries reflect the main content of the video with little redundancy.
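A minimal sketch of the FCM core that both variants build on. The adaptive selection of the cluster count is omitted (c is fixed), and the deterministic initialization is an illustrative choice, not the paper's:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50):
    """Plain FCM with deterministic initialization; the adaptive variants
    in the text additionally choose c automatically."""
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))   # unnormalized fuzzy memberships
        U /= U.sum(axis=0)                 # each point's memberships sum to 1
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated "frame feature" clusters
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centers, U = fuzzy_c_means(X, c=2)
```

The frame whose feature vector lies closest to each center would then be kept as a summary frame.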

4.
To address the shortcomings of existing video enhancement algorithms, an enhancement algorithm based on the fusion of multiple regions of interest is proposed. The algorithm first fits a Gaussian mixture model to each region of interest in a video frame, then constructs a mapping function for each region and fuses them into a single global mapping function used to enhance the frame. Finally, the temporal correlation between adjacent frames is exploited for temporal fusion, ensuring that the enhancement is consistent from frame to frame. The algorithm adapts flexibly to diverse video content. Experimental results show that it effectively enhances video sequences containing multiple regions of interest with complex textures, and that it is highly robust.

5.
Video steganalysis based on inter-frame collusion
Based on the temporal characteristics of video sequences, a video steganalysis algorithm using an inter-frame collusion strategy is proposed. The interference of local motion with steganalysis detection accuracy is analyzed, and the hidden message and local motion are modeled jointly as bimodal noise; on this basis, a feature extraction strategy based on the block-wise correlation between video frames is proposed. Using the extracted features, the steganalysis algorithm classifies video frames with a GRNN classifier to identify suspicious frames carrying hidden messages, reducing the influence of local motion to some extent. Experimental results show that the algorithm is practical for video steganalysis.
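The block-correlation feature can be sketched as follows, assuming grayscale frames and a fixed block size; the function name and the handling of flat blocks are illustrative assumptions:

```python
import numpy as np

def block_correlations(f1, f2, block=4):
    """Correlation coefficient between co-located blocks of two frames.
    In the steganalysis setting, low inter-frame block correlation can
    flag frames disturbed by embedding noise or local motion."""
    h, w = f1.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = f1[y:y + block, x:x + block].ravel().astype(float)
            b = f2[y:y + block, x:x + block].ravel().astype(float)
            if a.std() == 0 or b.std() == 0:
                corrs.append(1.0 if np.allclose(a, b) else 0.0)
            else:
                corrs.append(float(np.corrcoef(a, b)[0, 1]))
    return corrs

frame = np.arange(64.0).reshape(8, 8)
noisy = frame + np.random.default_rng(0).normal(0, 40, (8, 8))
clean_corrs = block_correlations(frame, frame)   # identical: all ~1.0
noisy_corrs = block_correlations(frame, noisy)   # disturbed: lower
```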

6.
Automatic brightness adjustment of images and videos
Brightness adjustment of underexposed images and videos has both theoretical and practical value. This paper proposes an automatic brightness adjustment algorithm for images and videos based on gradient-domain operations. For a still image, the algorithm first segments the image into regions of different brightness, then computes a brightness-adjustment operator for each region, and finally obtains the result by solving a gradient-constrained equation. We then extend the algorithm to video: several key frames are selected and processed with the image algorithm; the remaining frames are segmented, and optical flow establishes the correspondence between their regions and those of the neighboring key frames; finally, these correspondences, together with the key-frame adjustment operators and the adjusted brightness, guide the brightness adjustment of each region in the non-key frames and produce the output video sequence. The algorithm effectively handles images and videos that are underexposed or unevenly exposed in space and time, while preserving detail and texture; experimental results demonstrate its effectiveness.

7.
HGHD: a hypergraph-based clustering algorithm for high-dimensional data
Traditional clustering algorithms cannot effectively handle the high-dimensional data common in the real world. To address this, a hypergraph-based clustering algorithm for high-dimensional data, HGHD, is proposed. A hypergraph model is built from the items in the dataset and the relationships among them, and clustering is performed via hypergraph partitioning, converting the high-dimensional clustering problem into a hypergraph-partition optimization problem. The method follows a bottom-up hierarchical strategy; its main advantage over traditional methods is that no dimensionality reduction is needed. Describing the relationships among the original data directly with the hypergraph model yields high-quality clustering results.
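A toy sketch of turning hyperedges into clusters: hyperedges are clique-expanded into a weighted graph, which is then split into connected components. Actual HGHD applies a hypergraph partitioner directly rather than clique expansion; this only makes the data structures concrete:

```python
import numpy as np

def clique_expansion(n, hyperedges):
    """Expand a hypergraph into a weighted graph: every pair of
    vertices sharing a hyperedge gets (or strengthens) an edge."""
    A = np.zeros((n, n))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    A[i, j] += 1
    return A

def connected_components(A):
    """Cluster labels via connected components of the expanded graph."""
    n = len(A)
    labels = [-1] * n
    cur = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], cur
        while stack:
            u = stack.pop()
            for v in range(n):
                if A[u, v] > 0 and labels[v] == -1:
                    labels[v] = cur
                    stack.append(v)
        cur += 1
    return labels

edges = [(0, 1, 2), (1, 2), (3, 4)]   # two separate vertex groups
labels = connected_components(clique_expansion(5, edges))
```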

8.
To improve the accuracy of key frame extraction and the quality of video summaries, a key frame extraction method in the HEVC compressed domain is proposed. First, the video sequence is encoded and decoded, and during decoding the number of luma intra-prediction modes of the PU blocks in each HEVC intra-coded frame is counted. The mode counts are then assembled into a mode feature vector, which serves as the texture feature of the frame for key frame extraction. Finally, the mode feature vectors are clustered with an adaptive clustering algorithm incorporating the Iterative Self-Organizing Data Analysis technique (ISODATA); the frame corresponding to the median vector of each cluster is selected as a candidate key frame, and the candidates are filtered again by similarity to remove redundant frames, yielding the final key frames. Extensive experiments on the Open Video Project dataset show that the method extracts key frames with a precision of 79.9%, a recall of 93.6%, and an F-score of 86.2%, effectively improving the quality of the video summaries.

9.
An image registration algorithm based on sparse orientation hypergraph matching
陈华杰. 《光电子·激光》(Journal of Optoelectronics·Laser), 2010, (12): 1865-1870
To raise the correct-match rate of hypergraph matching while lowering its computational complexity, an image registration algorithm based on sparse orientation hypergraph matching is proposed. Structural feature points of the image are extracted as graph nodes; a minimum-spanning-tree algorithm captures the principal connections between nodes; hyperedges are defined as triples comprising neighboring nodes and edges, and the orientation angles of the hyperedges are computed to build a sparse orientation hypergraph. An affinity matrix is constructed from the orientation information, and matching is carried out with a globally optimal matching method. Experiments show that, for the registration of real images, the algorithm achieves both low computational complexity and good matching performance.

10.
胡春筠, 胡斌杰. 《电子学报》(Acta Electronica Sinica), 2016, 44(6): 1490-1495
An encoder-side rate-control algorithm for distributed video residual coding based on pseudo-random scrambling is proposed. A pseudo-random code scrambles the pixels of each residual frame, evening out the difference between the source image and its side-information image so that the rate can be estimated at the frame level, i.e., every frame is sent at the same rate. If decoding fails at the receiver, a proposed quantization-index estimation algorithm significantly raises the decoding success rate and solves the rate-underestimation problem. Moreover, the characteristics of the residual frames at the encoder approximate the correlation between the signals at the two ends, so the encoder need not generate predicted side information. Simulation results show that the algorithm has low encoder complexity, a high decoding success rate, small system delay, and good rate-distortion performance.
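A seeded permutation is one simple way to realize invertible pseudo-random scrambling of a frame's pixels; the sketch below is illustrative and not the paper's specific code construction:

```python
import numpy as np

def scramble(frame, seed):
    """Scramble pixels with a seeded pseudo-random permutation."""
    flat = frame.ravel()
    perm = np.random.default_rng(seed).permutation(flat.size)
    return flat[perm].reshape(frame.shape), perm

def unscramble(scrambled, perm):
    """Invert the scrambling given the same permutation."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
mixed, perm = scramble(frame, seed=42)
restored = unscramble(mixed, perm)   # recovers the original frame
```

Because both ends derive the same permutation from a shared seed, the receiver can invert the scrambling without extra signaling.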

11.
Key frame based video summarization has emerged as an important area of research for the multimedia community. Video key frames enable a user to access any video in a friendly and meaningful way. In this paper, we propose an automated method of video key frame extraction using dynamic Delaunay graph clustering via an iterative edge pruning strategy. A structural constraint in the form of a lower limit on the deviation ratio of the graph vertices further improves the video summary. We also employ an information-theoretic pre-sampling in which significant valleys in the mutual information profile of the successive frames of a video are used to capture more informative frames. Various video key frame visualization techniques for efficient video browsing and navigation are incorporated. A comprehensive evaluation on 100 videos from the Open Video and YouTube databases, using both objective and subjective measures, demonstrates the superiority of our key frame extraction method.

12.
Video summarization is a technique for reducing an original raw video to a short video summary. It automates the task of acquiring key frames or segments from the video and combining them to generate a summary. This paper provides a framework for summarization based on different criteria and also compares different works in the video summarization literature. The framework deals with formulating a video summarization model based on these criteria: a model-generating framework is proposed based on target audience/viewership, the number of videos, the intended type of output, the type of video summary, and the summarization factor. The paper examines significant research in the area of video summarization to present a comprehensive review against the framework. Different techniques, perspectives, and modalities are considered to preserve the diversity of the survey. Important mathematical formulations are also examined to provide meaningful insights for building video summarization models.

13.
Multiview video summarization plays a crucial role in abstracting essential information from multiple videos of the same location and time. In this paper, we propose a new approach to multiview summarization. The proposed approach uses the BIRCH clustering algorithm, for the first time, on the initial set of frames to discard static and redundant frames. The work presents a new approach to shot boundary detection using the Jaccard and Dice frame-similarity measures. The algorithm then merges keyframes from all camera views in an effectively synchronized manner to obtain the final summary. Extensive experimentation on various datasets suggests that the proposed approach significantly outperforms most existing video summarization approaches. For example, on the lobby dataset we observe a 1.5% improvement in video length reduction, a 24.28% improvement in compression ratio, and a 6.4% improvement in quality assessment ratio.
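The Jaccard and Dice measures on, say, binarized frame signatures can be sketched as follows (the boolean-signature representation is an illustrative assumption):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean masks: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def dice(a, b):
    """Dice similarity of two boolean masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2 * inter / total if total else 1.0

# two frames summarized as presence/absence of gray-level bins
f1 = np.array([1, 1, 0, 1, 0], dtype=bool)
f2 = np.array([1, 0, 0, 1, 1], dtype=bool)
j, d = jaccard(f1, f2), dice(f1, f2)
```

A low similarity between consecutive frames marks a candidate shot boundary.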

14.
Video summarization refers to an important set of abstraction techniques aimed at providing a compact representation of a video, essential for effectively browsing and retrieving video content from multimedia repositories. Most video summarization techniques, such as image storyboards, video skims, and fast previews, are based on selecting frames or segments. H.264/AVC has become a widely accepted coding standard, and much content is expected to be available in this format soon. This paper proposes a generic model of video summarization especially suitable for generating summaries of H.264/AVC bitstreams in a highly efficient manner, using the concept of temporal scalability via hierarchical prediction structures. Along with the model, specific examples of summarization techniques are given to prove its utility.
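The dyadic hierarchical-B structure underlying temporal scalability can be sketched as follows. The layer assignment assumes a power-of-two GOP and is an illustrative simplification; real H.264/AVC streams may use other prediction structures:

```python
def temporal_layer(frame_idx, gop_size=8):
    """Temporal layer of a frame in a dyadic hierarchical GOP:
    layer 0 holds the GOP boundaries, deeper layers fill in between."""
    pos = frame_idx % gop_size
    if pos == 0:
        return 0
    layer, step = 1, gop_size // 2
    while pos % step != 0:
        step //= 2
        layer += 1
    return layer

def summary_frames(n_frames, max_layer, gop_size=8):
    """Fast-preview summary: keep only frames whose layer <= max_layer,
    which are decodable without the discarded higher layers."""
    return [i for i in range(n_frames)
            if temporal_layer(i, gop_size) <= max_layer]
```

Dropping the upper temporal layers yields progressively sparser, still-decodable previews, which is what makes this summarization model efficient.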

15.
Video summarization is a method to reduce redundancy and generate a succinct representation of the video data. One mechanism for generating video summaries is to extract key frames that represent the most important content of the video. In this paper, a new technique for key frame extraction is presented. The scheme uses an aggregation mechanism to combine visual features extracted from the correlation of RGB color channels, the color histogram, and moments of inertia to extract key frames from the video. An adaptive formula is then used to combine the results of the current iteration with those of the previous one. The use of the adaptive formula generates a smooth output function and also reduces redundancy. The results are compared with other techniques based on objective criteria. The experimental results show that the proposed technique generates summaries that are closer to those created by humans.
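The adaptive combination of current and previous iteration results resembles exponential smoothing; the sketch below uses that as a hedged stand-in for the paper's adaptive formula (the fixed alpha is an assumption):

```python
def smooth_scores(scores, alpha=0.3):
    """Combine each frame score with the previous smoothed value,
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}, yielding a smooth
    output function with damped spikes."""
    out, prev = [], scores[0]
    for x in scores:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

raw = [0.0, 1.0, 0.0, 0.0]           # an isolated spike in frame scores
smoothed = smooth_scores(raw, alpha=0.5)
```

The spike is attenuated and spread over following frames, so a single noisy score is less likely to be selected as a key frame.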

16.
Video Super-Resolution (SR) reconstruction produces video sequences with High Resolution (HR) via the fusion of several Low-Resolution (LR) video frames. Traditional methods rely on the accurate estimation of subpixel motion, which constrains their applicability to video sequences with relatively simple motions such as global translation. We propose an efficient iterative spatio-temporal adaptive SR reconstruction model based on Zernike Moment (ZM), which is effective for spatial video sequences with arbitrary motion. The model uses region correlation judgment and self-adaptive threshold strategies to improve the effect and time efficiency of the ZM-based SR method. This leads to better mining of non-local self-similarity and local structural regularity, and is robust to noise and rotation. An efficient iterative curvature-based interpolation scheme is introduced to obtain the initial HR estimation of each LR video frame. Experimental results on both spatial and standard video sequences demonstrate that the proposed method outperforms existing methods in both subjective visual and objective quantitative evaluations, and greatly improves time efficiency.

17.
毋立芳, 赵宽, 简萌, 王向东. 《信号处理》(Journal of Signal Processing), 2019, 35(11): 1871-1879
Key frame detection is a crucial step in effective video content analysis. Common handcrafted-feature methods run fast but struggle to represent key frames well, so their accuracy is poor; deep-feature methods suffer from low efficiency because of their complex network structures. In sports videos, the key frame is usually the last frame before a shot change in the broadcast. However, besides game footage, broadcast video contains many other shot types, such as half-time breaks and gradual transitions, so simply taking the last frame of each shot captures much game-irrelevant content. To address this, this paper proposes a key frame detection method that combines handcrafted and deep features. First, shot boundary detection based on color-histogram features yields the last frame of each shot. A clustering-like method based on histogram similarity then produces candidate key frames. Finally, a deep neural network classifies the candidates to obtain the true key frames. Comparative experiments on curling and basketball videos show that, relative to traditional background subtraction and optical-flow methods, the proposed method extracts key frames quickly and reliably.
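The first stage, histogram-based shot boundary detection, can be sketched with a total-variation distance between consecutive frame histograms; the threshold and bin count are illustrative assumptions:

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Indices where the gray-level-histogram distance between
    consecutive frames exceeds a threshold (candidate boundaries)."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        dist = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # total variation
        if dist > threshold:
            cuts.append(i)
    return cuts

dark = [np.full((8, 8), 10) for _ in range(3)]     # one dark shot
bright = [np.full((8, 8), 200) for _ in range(3)]  # one bright shot
cuts = shot_boundaries(dark + bright)              # boundary at the join
```

The frame just before each detected cut would then be passed on as a candidate key frame for the deep classifier.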

18.
Because videos contain many frames, stitching video sequences tends to incur large stitching errors and long run times. To solve this, a video-sequence stitching method based on adaptive key frames is proposed. Frames sampled at a fixed interval serve as key frames, and feature points are extracted from them; feature matching combined with the RANSAC robust estimation algorithm yields the homography between key frames, from which their overlap region is computed. According to the overlap ratio, a binary-search procedure repositions the key frame; this key frame then becomes the reference, and the sampling, overlap computation, and repositioning steps are repeated until all key frames are extracted. Finally, cascaded homographies and weighted blending achieve seamless stitching of the video sequence. Experiments verify the effectiveness of the method.
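The RANSAC step can be illustrated with a reduced model: a pure 2-D translation instead of a full homography, so a single correspondence forms the minimal sample. All names and parameters are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """RANSAC estimate of a 2-D translation between matched point sets
    (a reduced stand-in for homography estimation)."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))             # minimal sample: one match
        t = dst[i] - src[i]
        inliers = int(np.sum(np.linalg.norm(src + t - dst, axis=1) < tol))
        if inliers > best_inliers:
            best_inliers, best_t = inliers, t
    return best_t, best_inliers

rng = np.random.default_rng(1)
src = rng.random((30, 2)) * 100.0
dst = src + np.array([5.0, -3.0])              # true shift of (5, -3)
dst[:5] += 100.0 + rng.random((5, 2)) * 100.0  # 5 gross mismatches
t, n_in = ransac_translation(src, dst)
```

Mismatched feature pairs are voted down because a model fit to them explains almost no other correspondences.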

19.
Video object segmentation is an important component of content-based video processing. A level-set-based segmentation algorithm for moving video objects is proposed and implemented. The algorithm extracts an initial contour from the luminance difference between video frames, takes this contour as the initial zero level set, and evolves the curve with the narrow-band level-set method to obtain the final segmentation. Experiments show that the algorithm is simple and efficient, and achieves good segmentation results.
