Similar Literature
 16 similar articles found
1.
A detection algorithm operating in the MPEG compressed domain is proposed. DC images are first extracted from the compressed video and the reference frames are reconstructed; after global motion compensation, shot-boundary detection based on the change rate of edge objects is performed, and the result is finally combined with a DC-image histogram difference method to form a joint detection algorithm. The algorithm accurately detects both abrupt cuts and gradual transitions, and can also determine the transition type. Experiments on various shot-transition clips generated with Adobe Premiere 5.1 verify its effectiveness.
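The DC-image histogram difference step of the joint detector above can be illustrated with a minimal sketch, assuming hypothetical 8-bit DC images and an illustrative L1 threshold (both assumptions, not the paper's exact parameters):

```python
import numpy as np

def dc_histogram_diff(dc_prev, dc_curr, bins=16):
    # L1 distance between normalized luminance histograms of two DC images
    h1, _ = np.histogram(dc_prev, bins=bins, range=(0, 256))
    h2, _ = np.histogram(dc_curr, bins=bins, range=(0, 256))
    return np.abs(h1 / h1.sum() - h2 / h2.sum()).sum()

def detect_cuts(dc_images, threshold=0.5):
    # flag the index of every DC image whose histogram jumps past the threshold
    return [i for i in range(1, len(dc_images))
            if dc_histogram_diff(dc_images[i - 1], dc_images[i]) > threshold]

# two similar dark frames followed by a bright one simulate a hard cut
frames = [np.full((9, 12), 30), np.full((9, 12), 31), np.full((9, 12), 200)]
print(detect_cuts(frames))  # [2]
```

In the full algorithm this histogram test would be fused with the edge-object change-rate criterion rather than used alone.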

2.
A GOP-Level Video Scene Change Detection Algorithm in the H.264/AVC Compressed Domain   (Cited: 1; self-citations: 0; by others: 1)
高宇  卓力  王素玉  沈兰荪 《电子学报》2010,38(2):382-386
This paper proposes a GOP (Group of Pictures)-level video scene change detection algorithm in the H.264/AVC compressed domain. Using information available in an H.264/AVC baseline-profile bitstream, such as intra-prediction modes, motion vectors, and macroblock coding types, three decision criteria are defined: sub-block chroma-mode difference, accumulated motion value, and accumulated intra-macroblock count. These three criteria are then combined into a GOP-level scene change detection algorithm. Experimental results show that, compared with an existing GOP-level scene detection algorithm, the proposed method achieves better detection performance.

3.
A Shot Segmentation Algorithm in the MPEG Compressed Domain for Video Containing Complex Motion   (Cited: 4; self-citations: 0; by others: 4)
This paper presents a compressed-domain shot segmentation algorithm for MPEG video streams containing complex motion. The method mainly uses the variation of the DC luminance-coefficient histogram difference, the density distribution of I-pictures, the change in the ratio of bidirectional motion vectors in B-pictures, and the oscillation frequency of the corresponding curves to segment shots, and can further detect cuts, gradual transitions, and complex motion. The algorithm has been implemented on a Pentium II PC and compared with traditional methods, with satisfactory results. Experiments show that it is well suited to shot segmentation of video containing complex motion: it works not only on news video but also on streams with large motion and many special effects.

4.
An adaptive shot-boundary detection algorithm based on luminance frame differences is proposed. Shot transitions are detected adaptively from the luminance differences between video frames, supplemented by a few simple but effective decision rules. Compared with traditional methods, it improves both the recall and the precision of shot detection, and experiments show that it can perform fast, accurate, real-time shot segmentation.
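A minimal sketch of adaptive luminance frame differencing along these lines; the mean-plus-k-sigma threshold is an assumption standing in for the paper's adaptive rule:

```python
import numpy as np

def adaptive_cut_detect(frames, k=2.0):
    # mean absolute luminance difference between consecutive frames
    diffs = np.array([np.abs(frames[i].astype(float) - frames[i - 1]).mean()
                      for i in range(1, len(frames))])
    # adapt the threshold to the statistics of this particular video
    thresh = diffs.mean() + k * diffs.std()
    return [i + 1 for i, d in enumerate(diffs) if d > thresh]

# near-identical frames with one large luminance jump at index 4
frames = [np.full((4, 4), v, dtype=float) for v in (50, 51, 50, 51, 200, 201, 200)]
print(adaptive_cut_detect(frames))  # [4]
```

Because the threshold is derived from each video's own difference statistics, no fixed global threshold has to be tuned.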

5.
Traditional spatial-domain video shot detection algorithms require decompression and suffer from heavy computation and low efficiency. To address this, a compressed-domain shot detection method is proposed. Following the MPEG compression standard, the method first extracts the 8 low-frequency DCT coefficients of each I-frame and computes a region-partitioned, weighted second-order frame difference on them to identify the GoP in which a shot change occurs; within that GoP, the shot boundary is then located precisely by computing the ratios of the different macroblock types in the P- and B-frames. Experimental results show that the method greatly reduces computation time and data volume while maintaining good detection performance.
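The GoP-localisation step might look like the sketch below, where the region grid, equal weights, and the ratio test are all assumptions standing in for the paper's partitioned weighted second-order frame difference:

```python
import numpy as np

def weighted_region_diff(dct_a, dct_b, grid=(2, 2), weights=None):
    # split the absolute coefficient difference into regions and sum
    # the weighted per-region means
    rows = np.array_split(np.abs(dct_a - dct_b), grid[0], axis=0)
    parts = [p for r in rows for p in np.array_split(r, grid[1], axis=1)]
    w = weights or [1.0] * len(parts)
    return sum(wi * p.mean() for wi, p in zip(w, parts))

def locate_shot_gop(i_frames, ratio=3.0):
    # a GoP is suspect when its I-frame difference dwarfs the average
    d = [weighted_region_diff(i_frames[k - 1], i_frames[k])
         for k in range(1, len(i_frames))]
    avg = np.mean(d)
    return [k + 1 for k, v in enumerate(d) if v > ratio * avg]

# low-frequency DCT coefficient maps of consecutive I-frames
maps = [np.full((4, 4), 10.0)] * 3 + [np.full((4, 4), 100.0)] * 2
print(locate_shot_gop(maps))  # [3]
```

The macroblock-type ratio test inside the flagged GoP would then refine this coarse localisation to a single frame.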

6.
Accurate and effective extraction of shot-boundary information is required for video structuring and content analysis. To this end, an algorithm is proposed that detects shot boundaries by training a support vector machine (SVM) on compressed-domain features; these features can be obtained from MPEG-1/2 and similar compressed video streams with only simple parsing rather than full decoding. Evaluated on the TRECVID 2005 shot-boundary detection set, the algorithm achieves satisfactory results while maintaining both recall and precision.

7.
A Shot Boundary Detection Algorithm Based on Support Vector Machines in the Compressed Domain   (Cited: 1; self-citations: 0; by others: 1)
曹建荣  蔡安妮 《电子学报》2008,36(1):203-208
To further improve the accuracy of shot boundary detection, this paper proposes a detection algorithm based on a support vector machine (SVM). Using compressed-domain features such as macroblock types, the DC-coefficient differences of corresponding macroblocks between frames, and frame types, the algorithm classifies video frames into three categories: frames at a cut, frames within a gradual transition, and frames with no shot change, thereby achieving shot segmentation. Experimental results show that the algorithm is robust to camera motion and the entry of large objects, and avoids the threshold-selection difficulty of most algorithms. Compared with the best result of the TREC 2001 evaluation, it scores about 8% higher on the F1 measure, which combines recall and precision.
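As an illustration only, a three-class SVM on toy compressed-domain features; the feature values and class clusters below are fabricated for the sketch and are not the paper's data or feature set:

```python
import numpy as np
from sklearn.svm import SVC

# per-frame features: [intra-macroblock ratio, mean DC-coefficient difference]
# labels: 0 = no shot change, 1 = cut, 2 = gradual transition
X = np.array([[0.05, 2], [0.08, 3], [0.10, 4],      # no change
              [0.95, 90], [0.90, 80], [0.85, 85],   # cut
              [0.40, 30], [0.45, 35], [0.50, 40]])  # gradual
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict([[0.92, 88], [0.06, 2]])
print(pred)  # expect classes [1, 0]
```

Framing the decision as classification is what lets the method sidestep hand-tuned thresholds: the SVM learns the decision surface from labelled frames.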

8.
An Action Scene Detection Method for Martial-Arts Films   (Cited: 4; self-citations: 0; by others: 4)
程文刚  柳长安  须德 《电子学报》2006,34(5):915-920
This paper presents a simple and effective method for detecting action scenes in martial-arts films. First, exploiting the tempo characteristics of action scenes, a shot pacing function is defined at the film level from shot length and the MPEG-7 motion activity descriptor to measure tempo; fast-tempo regions are located from it, giving the approximate positions of action scenes. Then, following how the content of an action scene develops, the shots in and around each fast-tempo region are analyzed at the shot level, and the scene boundary points are determined from visual features. The full use of information at both levels (film and shot) keeps the method simple and practical, and processing the compressed video directly improves its speed. Experimental results demonstrate the effectiveness of the detection method.
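A toy pacing function in this spirit; the inverse-shot-length term, the weighting, and the threshold are assumptions, and the MPEG-7 motion activity descriptor is reduced to a single number in [0, 1]:

```python
def shot_pace(length_frames, motion_activity, alpha=0.5):
    # short shots and high motion activity both raise the pace score
    return alpha * (1.0 / length_frames) + (1 - alpha) * motion_activity

def fast_tempo_regions(shots, threshold=0.4):
    # shots: list of (length in frames, motion activity in [0, 1])
    return [i for i, (n, a) in enumerate(shots) if shot_pace(n, a) > threshold]

shots = [(120, 0.1), (15, 0.9), (10, 0.8), (200, 0.2)]
print(fast_tempo_regions(shots))  # [1, 2]
```

The flagged region (two short, high-activity shots) is the kind of candidate the shot-level analysis would then refine into exact scene boundaries.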

9.
Vehicle Detection in Day-Night Transition Scenes   (Cited: 2; self-citations: 0; by others: 2)
刘勃  周荷琴 《信号处理》2006,22(3):390-394
In urban traffic-flow video detection systems, the transition between day and night must be handled: during the transition period, simply applying either the daytime or the nighttime algorithm gives poor results. This paper proposes a vehicle detection algorithm for day-night transition scenes. The algorithm first extracts a background image and, to cope with the dim and rapidly changing lighting of such scenes, builds a background update model that can quickly track background changes; moving vehicles are then detected by background differencing. Experiments show that the algorithm detects moving vehicles well in day-night transition scenes.
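A minimal background-differencing sketch; the relatively large update rate alpha mimics a model that tracks fast dawn/dusk illumination change, and both alpha and the threshold are illustrative assumptions:

```python
import numpy as np

def update_background(bg, frame, alpha=0.2):
    # running-average background model; larger alpha adapts faster to
    # the changing illumination of a day-night transition
    return (1 - alpha) * bg + alpha * frame.astype(float)

def detect_vehicles(bg, frame, thresh=40):
    # foreground mask by background differencing
    return np.abs(frame.astype(float) - bg) > thresh

bg = np.full((6, 6), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 220           # a bright "vehicle" region
mask = detect_vehicles(bg, frame)
print(mask.sum())               # 4 foreground pixels
bg = update_background(bg, frame)
```

In a real deployment the update would be suppressed inside detected vehicle regions so that slow-moving cars are not absorbed into the background.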

10.
彭程 《电视技术》2021,45(3):18-20
Video-conferencing scenarios place high demands on the real-time performance of video enhancement. To address the long running time of existing video enhancement algorithms such as BM3D, a video enhancement technique based on motion-region detection is proposed. The temporal frame data are converted to a different color space, the video scene is quickly divided into static and moving regions, and temporal denoising and enhancement are then applied to the static regions. Experimental results on sets of video frames show that the algorithm can significantly enhance image texture...

11.
There is an urgent need to extract key information from video automatically for the purposes of indexing, fast retrieval, and scene analysis. To support this, reliable scene change detection algorithms must be developed. Several algorithms have been proposed for both sudden and gradual scene change detection in uncompressed and compressed video. In this paper, some common algorithms proposed for scene change detection are first reviewed. A novel algorithm for sudden scene change detection in MPEG-2 compressed video is then presented, which uses the number of interpolated macroblocks in B-frames to identify sudden scene changes. A gradual scene change detection algorithm based on statistical features is also presented.
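The B-frame criterion can be sketched as a one-line ratio test; the 10% threshold and the macroblock count are illustrative assumptions:

```python
def is_sudden_cut(interpolated_mb, total_mb, max_ratio=0.1):
    # a B-frame with almost no bidirectionally interpolated macroblocks
    # suggests its two reference frames lie on opposite sides of a cut
    return interpolated_mb / total_mb < max_ratio

print(is_sudden_cut(5, 396))    # True: almost no interpolation succeeded
print(is_sudden_cut(250, 396))  # False: both references still match
```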

12.
Manually detecting and locating desired information in a huge amount of video data is very cumbersome, which necessitates segmenting a long video into shots and finding the boundaries between them. Shot boundary detection, however, still falls short of satisfactory performance on video sequences containing flashlight and complex object/camera motion. The proposed method recognises abrupt boundaries between shots automatically, in the presence of motion and illumination change. Typically, a scene change detection algorithm incorporates temporal separation into a shot-resemblance metric. In this communication, the absolute sum of gradient-orientation feature differences is compared against an automatically generated threshold to sense a cut. An experimental study on the TRECVid 2001 data set and other publicly available data sets certifies the potential of the proposed scheme, which identifies scene boundaries efficiently in a complex environment while preserving a good trade-off between recall and precision.

13.
Detection of gradual transitions and the elimination of disturbances caused by illumination change or fast object and camera motion are the major challenges for current shot boundary detection techniques. These disturbances are often mistaken for shot boundaries, so it is a challenging task to develop a method that is not only insensitive to various disturbances but also sensitive enough to capture a shot change. To address these challenges, we propose an algorithm for shot boundary detection in the presence of illumination change, fast object motion, and fast camera motion. This is important for accurate and robust detection of shot boundaries, and in turn critical for high-level content-based analysis of video. First, the proposed algorithm extracts structure features from each video frame using the dual-tree complex wavelet transform. Then, spatial-domain structure similarity is computed between adjacent frames. Shot boundaries are declared based on carefully chosen thresholds. An experimental study is performed on a number of videos that include significant illumination change and fast motion of camera and objects. A performance comparison of the proposed algorithm with other existing techniques validates its effectiveness in terms of better Recall, Precision, and F1 score.
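A rough sketch of why structure features resist illumination change, with plain gradient orientations standing in for the dual-tree complex wavelet features (that substitution, and the epsilon, are assumptions of this sketch):

```python
import numpy as np

def structure_feature(frame):
    # gradient-orientation map: a crude stand-in for the paper's
    # dual-tree complex wavelet structure features
    gy, gx = np.gradient(frame.astype(float))
    return np.arctan2(gy, gx + 1e-9)

def structure_similarity(f1, f2):
    # mean cosine agreement of orientations; scaling the illumination
    # scales both gradients and leaves the orientations unchanged
    return float(np.cos(structure_feature(f1) - structure_feature(f2)).mean())

ramp = np.outer(np.arange(8.0), np.ones(8))   # vertical luminance ramp
print(structure_similarity(ramp, 2 * ramp))   # ~1.0: robust to brightness scaling
print(structure_similarity(ramp, ramp.T))     # ~0.0: structure rotated 90 degrees
```

A similarity that stays high under a global brightness change but collapses when the structure itself changes is exactly the property a cut detector needs here.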

14.
In many surveillance systems the video is stored in wavelet compressed form. In this paper, an algorithm is developed for moving object and region detection in video compressed with a wavelet transform (WT). The algorithm estimates the WT of the background scene from the WTs of the past image frames of the video. The WT of the current image is compared with the WT of the background, and the moving objects are determined from the difference. The algorithm performs no inverse WT to obtain the actual pixels of either the current image or the estimated background, which makes the method computationally efficient compared with existing motion estimation methods.
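A toy version of the idea, with a single-level 2-D Haar transform standing in for the surveillance codec's wavelet transform; comparing only the low-low band, and the threshold value, are assumptions of this sketch:

```python
import numpy as np

def haar2d_ll(img):
    # low-low band of one level of a 2-D Haar transform:
    # average over 2x2 pixel blocks
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2       # horizontal averaging
    return (lo[0::2] + lo[1::2]) / 2         # vertical averaging

def moving_region_mask(bg_ll, frame_ll, thresh=20):
    # compare WT low bands directly: no inverse transform is needed
    return np.abs(frame_ll - bg_ll) > thresh

bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[0:4, 0:4] = 200                        # a bright moving region
mask = moving_region_mask(haar2d_ll(bg), haar2d_ll(frame))
print(mask.sum())  # 4: the region shows up at quarter resolution
```

Working on the transform coefficients at reduced resolution is where the computational saving over pixel-domain motion estimation comes from.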

15.
Saliency detection is widely used to pick out the relevant parts of a scene as visual attention regions for various image/video applications. Since video is increasingly captured, moved, and stored in compressed form, there is a need to detect video saliency directly in the compressed domain. In this study, a compressed-domain video saliency detection algorithm is proposed based on discrete cosine transform (DCT) coefficients and motion information within a visual window. First, DCT coefficients and motion information are extracted from the H.264 video bitstream without full decoding. Because of high quantization parameter settings in the encoder, skip/intra is easily chosen as the best prediction mode, leaving a large number of blocks with zero motion vectors and no residual in the bitstream; to address this, the motion vectors of skip/intra-coded blocks are calculated by interpolating their surroundings. In addition, a visual window is constructed to enhance the contrast of features and to avoid encoder-dependent effects. Second, after the spatial and temporal saliency maps are generated from the normalized entropy, a motion importance factor is imposed to refine the temporal saliency map. Finally, a variance-like fusion method is proposed to dynamically combine these maps into the final video saliency map. Experimental results show that the proposed approach significantly outperforms other state-of-the-art video saliency detection models.
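The normalized-entropy step can be sketched on raw intensities; the paper computes it on DCT coefficients within a visual window, so the intensity stand-in, block size, and bin count here are all assumptions:

```python
import numpy as np

def block_entropy_saliency(frame, block=4, bins=8):
    # normalized intensity entropy per block: flat blocks score 0,
    # maximally mixed blocks score 1
    h, w = frame.shape
    sal = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            p, _ = np.histogram(frame[i:i + block, j:j + block],
                                bins=bins, range=(0, 256))
            p = p / p.sum()
            nz = p[p > 0]
            sal[i // block, j // block] = -(nz * np.log2(nz)).sum() / np.log2(bins)
    return sal

frame = np.zeros((8, 8))
frame[0:4, 0:4] = np.arange(16).reshape(4, 4) * 16   # one textured block
print(block_entropy_saliency(frame))  # only the textured block is salient
```

The temporal map would be built the same way from motion features, with the two maps then fused by the variance-like rule.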

16.