16 similar documents found; search took 156 ms.
1.
2.
A GOP-Level Video Scene Change Detection Algorithm in the H.264/AVC Compressed Domain (Cited: 1; self-citations: 0; others: 1)
This paper proposes a GOP (Group of Pictures)-level video scene change detection algorithm for the H.264/AVC compressed domain. Using information available in H.264/AVC Baseline profile bitstreams, such as intra prediction modes, motion vectors, and macroblock coding types, the algorithm defines three decision criteria: the sub-block-based chroma mode difference, the accumulated motion value, and the accumulated intra-macroblock count. These three criteria are then combined into a GOP-level scene change detection algorithm. Experimental results show that, compared with an existing GOP-level scene detection algorithm, the proposed algorithm achieves better detection performance.
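The combination step described above can be sketched as a simple vote over the three criteria. This is a hypothetical illustration only: the threshold values and the two-of-three voting rule are assumptions, not taken from the paper.

```python
# Hypothetical sketch of combining the three GOP-level criteria named in
# the abstract; thresholds and the two-of-three vote are illustrative.

def gop_scene_change(chroma_mode_diff, accum_motion, accum_intra_mbs,
                     t_chroma=0.5, t_motion=1000.0, t_intra=50):
    """Flag a scene change inside a GOP when at least two of the three
    compressed-domain criteria exceed their thresholds."""
    votes = sum([
        chroma_mode_diff > t_chroma,   # sub-block chroma mode difference
        accum_motion > t_motion,       # accumulated motion value
        accum_intra_mbs > t_intra,     # accumulated intra-macroblock count
    ])
    return votes >= 2
```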
3.
A Shot Segmentation Algorithm for MPEG Compressed-Domain Video Containing Complex Motion (Cited: 4; self-citations: 0; others: 4)
This paper presents a compressed-domain shot segmentation algorithm for MPEG video streams containing complex motion. The method mainly uses changes in the DC luminance coefficient histogram difference, the density distribution of I-pictures, the ratio of bidirectional motion vectors in B-pictures, and the oscillation frequency of the corresponding curves to segment shots, and can further detect cuts, gradual transitions, and complex motion. The algorithm was implemented on a Pentium II PC and compared with traditional methods, with satisfactory results. Experiments show that the method is well suited to shot segmentation of video streams containing complex motion: it applies not only to news video but also achieves good shot segmentation for video streams with large motion and many special effects.
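One of the cues above, the DC luminance histogram difference between consecutive I-pictures, can be sketched as follows. The bin count and value range are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the DC-luminance-histogram-difference cue; bin count
# and value range are illustrative assumptions.

def dc_histogram(dc_values, bins=16, lo=0.0, hi=256.0):
    """Histogram of per-macroblock DC luminance coefficients."""
    hist = [0] * bins
    width = (hi - lo) / bins
    for v in dc_values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    return hist

def histogram_diff(h1, h2):
    """L1 distance between two DC histograms, normalised by block count."""
    return sum(abs(a - b) for a, b in zip(h1, h2)) / sum(h1)
```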
4.
An adaptive shot boundary detection algorithm based on luminance frame difference is proposed. By computing the luminance difference between video frames, shot changes are detected adaptively, supplemented by several simple but effective decision rules. Compared with traditional methods, the approach effectively improves the recall and precision of shot detection; experiments show it can perform fast, accurate, real-time shot segmentation.
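The adaptive idea above can be sketched as thresholding each luminance difference against statistics of the recent differences. The window size, the multiplier `k`, and the small noise floor are illustrative assumptions.

```python
# Sketch of adaptive cut detection on per-frame mean luminance values;
# window, k, and the noise floor are illustrative assumptions.

def detect_cuts(frame_means, k=3.0, window=5):
    """Flag frame indices where the luminance difference to the previous
    frame exceeds an adaptive threshold (mean + k*std of recent diffs)."""
    diffs = [abs(b - a) for a, b in zip(frame_means, frame_means[1:])]
    cuts = []
    for i, d in enumerate(diffs):
        recent = diffs[max(0, i - window):i]
        if len(recent) < 2:
            continue
        mu = sum(recent) / len(recent)
        sd = (sum((x - mu) ** 2 for x in recent) / len(recent)) ** 0.5
        if d > mu + k * sd and d > 10:   # floor avoids flat-scene noise
            cuts.append(i + 1)           # cut occurs at the later frame
    return cuts
```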
5.
6.
7.
A Shot Boundary Detection Algorithm Based on Support Vector Machines in the Compressed Domain (Cited: 1; self-citations: 0; others: 1)
To further improve shot boundary detection accuracy, this paper proposes a shot boundary detection algorithm based on support vector machines (SVM). Using compressed-domain features such as macroblock type, the DC-coefficient difference between corresponding macroblocks of adjacent frames, and frame type, the algorithm classifies video frames into three classes: cut frames, gradual-transition frames, and non-transition frames, thereby achieving shot segmentation. Experimental results show the algorithm is robust to camera motion and the entry of large objects, and avoids the threshold-selection difficulty of most algorithms. Compared against the best result of the TREC 2001 evaluation, our algorithm scores about 8% higher on F1, a combined measure of recall and precision.
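The three-class framing can be sketched as below. Note this is only an illustration of the decision structure: a toy rule-based scorer stands in for the trained SVM, and the feature names (intra-macroblock ratio, DC-coefficient difference) are simplified from the abstract.

```python
# Toy stand-in for the trained SVM; rules and thresholds are illustrative.

def classify_frame(intra_mb_ratio, dc_diff):
    """Return 'cut', 'gradual', or 'none' for a frame, mimicking the
    three-class decision an SVM would make on compressed-domain features."""
    if intra_mb_ratio > 0.8 and dc_diff > 50:
        return "cut"       # abrupt change: mostly intra-coded, big DC jump
    if intra_mb_ratio > 0.3 or dc_diff > 20:
        return "gradual"   # partial change spread over several frames
    return "none"
```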
8.
9.
Vehicle Detection in Day-Night Transition Scenes (Cited: 2; self-citations: 0; others: 2)
In urban traffic-flow video detection systems, the day-night transition is an unavoidable problem: during the transition between day and night, simply applying a daytime or nighttime algorithm yields poor detection. This paper proposes a vehicle detection algorithm for day-night transition scenes. The algorithm first extracts a background image; to handle the dim, rapidly changing lighting of such scenes, it builds a background update model that can quickly track background changes, and then detects moving vehicles by background subtraction. Experiments show the algorithm detects moving vehicles well in day-night transition scenes.
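The two stages above can be sketched as a running-average background model with a fast learning rate followed by background subtraction. The learning rate `alpha` and the difference threshold are illustrative assumptions.

```python
# Sketch of fast background update plus background subtraction;
# alpha and thresh are illustrative values, not from the paper.

def update_background(bg, frame, alpha=0.2):
    """Blend the current frame into the background so the model can
    quickly track the rapid lighting changes of a day-night transition."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels whose difference from the background exceeds thresh."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```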
10.
Video-conferencing scenarios place high demands on the real-time performance of video enhancement. To address the long running time of existing video enhancement algorithms (such as BM3D), a video enhancement technique based on motion-region detection is proposed. Colour-space conversion is applied to temporally adjacent frames to quickly partition the scene into static and moving regions, after which temporal denoising and enhancement are applied to the static regions. Experimental results on a collection of video frames show that the algorithm can significantly enhance image texture...
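The core idea can be sketched per pixel: classify it as moving or static by its temporal difference, then denoise only the static region by temporal averaging. The motion threshold is an illustrative assumption.

```python
# Sketch of motion-region split plus temporal denoise of static pixels;
# motion_thresh is an illustrative value.

def enhance_static_regions(prev, curr, motion_thresh=15):
    """Temporal denoise for static pixels; moving pixels pass through."""
    out = []
    for p, c in zip(prev, curr):
        if abs(c - p) > motion_thresh:
            out.append(c)            # moving region: keep the current pixel
        else:
            out.append((p + c) / 2)  # static region: average over time
    return out
```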
11.
《Electronics & Communication Engineering Journal》2001,13(3):117-126
There is an urgent need to extract key information from video automatically for the purposes of indexing, fast retrieval, and scene analysis. To support this vision, reliable scene change detection algorithms must be developed. Several algorithms have been proposed for both sudden and gradual scene change detection in uncompressed and compressed video. In this paper some common algorithms that have been proposed for scene change detection are reviewed. A novel algorithm for sudden scene change detection in MPEG-2 compressed video is then presented. It uses the number of interpolated macroblocks in B-frames to identify sudden scene changes. A gradual scene change detection algorithm based on statistical features is also presented.
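The B-frame cue can be sketched as follows: when few macroblocks in a B-frame are bidirectionally interpolated, its two reference frames likely straddle a scene change. The macroblock-type labels and the ratio threshold are illustrative assumptions.

```python
# Sketch of the interpolated-macroblock cue for sudden scene changes;
# type labels and min_interp_ratio are illustrative.

def is_sudden_change(b_frame_mb_types, min_interp_ratio=0.1):
    """Flag a sudden scene change when the fraction of interpolated
    macroblocks in a B-frame falls below min_interp_ratio."""
    interp = sum(1 for t in b_frame_mb_types if t == "interp")
    return interp / len(b_frame_mb_types) < min_interp_ratio
```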
12.
Detecting and locating desired information in a huge amount of video data by manual procedures is very cumbersome. This necessitates segmenting a long video into shots and finding the boundaries between them. However, shot boundary detection struggles to achieve satisfactory performance on video sequences containing flashlights and complex object/camera motion. The proposed method automatically recognises abrupt boundaries between shots in the presence of motion and illumination change. Typically, a scene change detection algorithm incorporates temporal separation into a shot-resemblance metric. In this communication, the absolute sum of gradient-orientation feature differences is matched against an automatically generated threshold to sense a cut. An experimental study on the TRECVid 2001 data set and other publicly available data sets confirms the potential of the proposed scheme, which identifies scene boundaries efficiently in a complex environment while preserving a good trade-off between recall and precision.
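The feature itself can be sketched as below: per-pixel gradient orientations via forward differences, and the absolute sum of their frame-to-frame difference. The real method pairs this with an automatically generated threshold, which is not reproduced here.

```python
import math

# Sketch of the gradient-orientation feature and its frame-to-frame
# absolute-sum difference; forward differences are a simplification.

def gradient_orientations(img):
    """Gradient orientation (radians) at each interior pixel."""
    out = []
    for y in range(len(img) - 1):
        row = []
        for x in range(len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            row.append(math.atan2(gy, gx))
        out.append(row)
    return out

def orientation_diff(a, b):
    """Absolute sum of gradient-orientation differences between frames."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))
```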
13.
Detection of gradual transitions and the elimination of disturbances caused by illumination change or fast object and camera motion are the major challenges for current shot boundary detection techniques. These disturbances are often mistaken for shot boundaries. It is therefore a challenging task to develop a method that is not only insensitive to various disturbances but also sensitive enough to capture a shot change. To address these challenges, we propose an algorithm for shot boundary detection in the presence of illumination change, fast object motion, and fast camera motion. This is important for accurate and robust detection of shot boundaries and, in turn, critical for high-level content-based analysis of video. First, the proposed algorithm extracts structure features from each video frame using the dual-tree complex wavelet transform. Then, spatial-domain structure similarity is computed between adjacent frames. Shot boundaries are declared based on carefully chosen thresholds. An experimental study is performed on a number of videos that include significant illumination change and fast motion of camera and objects. The performance comparison of the proposed algorithm with other existing techniques validates its effectiveness in terms of better recall, precision, and F1 score.
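The similarity step can be sketched as a normalized correlation between the two frames' structure-feature vectors. This is a minimal stand-in only: the actual features come from a dual-tree complex wavelet transform, which is not reproduced here.

```python
# Stand-in for the structure-similarity step; the wavelet features
# themselves are assumed to be precomputed.

def structure_similarity(f1, f2):
    """Pearson-style correlation in [-1, 1]; near 1 within a shot."""
    n = len(f1)
    m1, m2 = sum(f1) / n, sum(f2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(f1, f2))
    d1 = sum((a - m1) ** 2 for a in f1) ** 0.5
    d2 = sum((b - m2) ** 2 for b in f2) ** 0.5
    return num / (d1 * d2) if d1 and d2 else 1.0
```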
14.
《Signal Processing: Image Communication》2005,20(3):255-264
In many surveillance systems the video is stored in wavelet compressed form. In this paper, an algorithm is developed for moving object and region detection in video that has been compressed using a wavelet transform (WT). The algorithm estimates the WT of the background scene from the WTs of past image frames of the video. The WT of the current image is compared with the WT of the background, and moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of either the current image or the estimated background, which makes the method computationally efficient compared with existing motion estimation methods.
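The key point above, that the comparison happens entirely in the wavelet domain with no inverse transform, can be sketched with a one-level 1-D Haar transform standing in for the paper's WT. The threshold is an illustrative assumption.

```python
# Sketch of wavelet-domain background comparison; a 1-D Haar transform
# stands in for the paper's WT, and thresh is illustrative.

def haar_1d(x):
    """One-level Haar transform: pairwise averages, then differences."""
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx + detail

def moving_coeffs(bg_wt, frame_wt, thresh=10):
    """Coefficients that differ strongly from the background model;
    no inverse transform is ever applied."""
    return [abs(f - b) > thresh for b, f in zip(bg_wt, frame_wt)]
```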
15.
Saliency detection is widely used to pick out relevant parts of a scene as visual attention regions for various image/video applications. Since video is increasingly captured, moved, and stored in compressed form, there is a need to detect video saliency directly in the compressed domain. In this study, a compressed-video saliency detection algorithm is proposed based on discrete cosine transform (DCT) coefficients and motion information within a visual window. Firstly, DCT coefficients and motion information are extracted from the H.264 video bitstream without full decoding. Due to the high quantization parameter setting in the encoder, skip/intra is often chosen as the best prediction mode, resulting in a large number of blocks with zero motion vectors and no residual in the video bitstream. To address this, the motion vectors of skip/intra-coded blocks are calculated by interpolating from their surroundings. In addition, a visual window is constructed to enhance feature contrast and to avoid being affected by the encoder. Secondly, after spatial and temporal saliency maps are generated from the normalized entropy, a motion importance factor is imposed to refine the temporal saliency map. Finally, a variance-like fusion method is proposed to dynamically combine these maps into the final video saliency map. Experimental results show that the proposed approach significantly outperforms other state-of-the-art video saliency detection models.
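One hypothetical reading of the "variance-like" fusion is to weight each saliency map by its variance so the more discriminative map dominates. The weighting and normalisation below are illustrative assumptions, not the paper's formula.

```python
# Hypothetical variance-weighted fusion of two saliency maps;
# weights and normalisation are illustrative assumptions.

def fuse_saliency(spatial, temporal):
    """Variance-weighted combination of spatial and temporal maps."""
    def var(m):
        mu = sum(m) / len(m)
        return sum((v - mu) ** 2 for v in m) / len(m)
    ws, wt = var(spatial), var(temporal)
    total = ws + wt
    if total == 0:
        return [(s + t) / 2 for s, t in zip(spatial, temporal)]
    ws, wt = ws / total, wt / total
    return [ws * s + wt * t for s, t in zip(spatial, temporal)]
```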