Similar Literature
20 similar documents found (search time: 15 ms)
1.
A video watermarking algorithm based on motion-region localization is proposed. Independent component analysis (ICA) is applied to two adjacent frames of the original video to extract a motion-component frame that carries their relative-motion information. From this motion-component frame, the region with the most intense relative motion is located; mapped back to the earlier of the two adjacent frames, this region serves as the motion region in which the watermark is embedded. The watermark is embedded with a quantization index modulation (QIM) algorithm based on the Watson visual model to ensure robustness. Experimental results show that the algorithm preserves good visual quality while remaining robust against additive white Gaussian noise, MPEG-2 compression, frame dropping, and frame cropping.
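As a rough illustration of the embedding step, the sketch below implements plain quantization index modulation with a fixed quantization step; the paper adapts the step per coefficient via the Watson visual model, and the ICA-based motion-region localization is assumed to have already selected the coefficients to modify (`delta` and the lattice offsets here are illustrative).

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient with quantization index modulation (QIM).

    Each bit selects one of two interleaved quantization lattices
    (offset 0 for bit 0, offset delta/2 for bit 1).
    """
    coeffs = np.asarray(coeffs, dtype=np.float64)
    offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return delta * np.round((coeffs - offsets) / delta) + offsets

def qim_extract(coeffs, delta=8.0):
    """Recover bits by checking which lattice each coefficient lies closer to."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    d1 = np.abs(coeffs - (delta * np.round((coeffs - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)
```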

2.
This paper presents an infrared single-pixel video imaging scheme for sea-surface surveillance. Exploiting the temporal redundancy of surveillance video, a two-step scheme comprising low-scale detection and high-scale detection is proposed. For each frame, low-scale detection performs low-resolution single-pixel imaging to obtain a "preview" image of the scene, in which moving targets can be located. These targets are then refined in the high-scale detection step, where high-resolution single-pixel imaging is focused on them. The frame is reconstructed by merging the two levels of images. Simulated experiments show that, for a video with 128 × 128 pixels and 150 frames, the sampling rate of the scheme is about 17.8%, and the reconstructed video has good visual quality.

3.
When soccer video is played on mobile terminals, the small screen size and low resolution tend to cause loss of detail and degrade visual quality. This paper proposes a new adaptive display method for soccer video on mobile terminals. During ball detection, a two-dimensional hash table and block-based image detection are used to track the ball quickly and accurately. In addition, the position and size of the region-of-interest window are adjusted dynamically according to the window size and the players' positions. Simulation results show that the algorithm locates the ball region fairly accurately while balancing processing speed against video quality, making it suitable for producing video programs for mobile terminals and for playing soccer matches on mobile devices efficiently, reliably, and in real time.

4.
For sports videos captured in the same scene, a sports-video compositing method based on global motion compensation and moving-foreground region information is proposed. First, for each video to be composited, global motion estimation and compensation spatially align the neighboring frames to the current frame, and frame differencing then yields the moving-foreground regions in the current frame. Next, the global motion parameters are computed and refined according to the background similarity between the two videos, establishing the spatial correspondence between the frames to be composited. Finally, composite frames are generated from the previously obtained foreground regions. Experimental results show that the method can automatically composite sports videos with similar dynamic backgrounds captured in the same scene, preserving the sharpness of both foreground and background and clearly revealing differences between the athletes' movements.
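A minimal sketch of the align-then-difference step, assuming OpenCV: sparse features track the background between frames, a RANSAC-fitted similarity transform stands in for the paper's global motion model, and a simple threshold on the compensated frame difference exposes the moving foreground. Thresholds and feature-detector parameters are illustrative.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, cur_gray, diff_thresh=25):
    """Align prev_gray to cur_gray with a global affine model, then take the
    frame difference to expose the moving foreground."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    # Robustly fit a global (camera) motion model between the two frames.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    h, w = cur_gray.shape
    aligned = cv2.warpAffine(prev_gray, M, (w, h))
    diff = cv2.absdiff(cur_gray, aligned)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Small opening removes isolated noise pixels in the change mask.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```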

5.
庞希愚, 高胜法, 王祥. 《计算机应用》, 2007, 27(5): 1164-1166
To overcome the effects of noise, complex motion, and uncovered background that arise when video objects are segmented by change detection, a new video object segmentation method is proposed. Instead of two consecutive frames, the method computes frame differences between frames that are k frames apart, takes the intersection of three frame-difference edge maps, and connects the broken contour points of the moving object. Finally, the video object is segmented through region filling and mathematical morphology. Experimental results show that the algorithm can automatically and precisely locate the outer contour of the moving object.
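One plausible reading of the k-interval, three-difference idea, sketched with OpenCV (the 4.x `findContours` signature is assumed): edges of three k-spaced frame differences are intersected, morphological closing stands in for connecting broken contour points, and contour filling yields the object mask. The interval `k` and Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def change_edge(f1, f2, canny_lo=50, canny_hi=150):
    """Edge map of the absolute difference between two grayscale frames."""
    return cv2.Canny(cv2.absdiff(f1, f2), canny_lo, canny_hi)

def object_mask(frames, t, k=3):
    """Intersect three k-spaced difference edges around frame t, then close
    and fill to obtain a rough moving-object mask."""
    e1 = change_edge(frames[t - k], frames[t])
    e2 = change_edge(frames[t], frames[t + k])
    e3 = change_edge(frames[t - k], frames[t + k])
    edges = cv2.bitwise_and(cv2.bitwise_and(e1, e2), e3)
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # bridge contour gaps
    # Fill the interiors of the closed contours to get a solid mask.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(closed)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    return mask
```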

6.
Video frame rate largely determines video quality: the higher the frame rate, the smoother the motion, the clearer the conveyed information, and the better the viewing experience. Video frame interpolation aims to increase the frame rate by generating a new frame from the information shared by two consecutive frames, and is an essential task in computer vision. Traditional motion-compensated interpolation produces holes and overlaps in the reconstructed frame and is easily degraded by poor optical flow. This paper therefore proposes a video frame interpolation method based on optical flow estimation with image inpainting. First, the optical flow between the input frames is estimated with the combined local and global total variation (CLG-TV) optical flow model. Then, intermediate frames are synthesized under the guidance of the flow. Finally, the non-local self-similarity between video frames is used to solve an optimization problem that repairs the pixel-loss areas in the interpolated frame. Quantitative and qualitative experiments show that the method effectively improves the quality of optical flow estimation, generates realistic and smooth video frames, and effectively increases the video frame rate.
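A compact sketch of flow-guided mid-frame synthesis: OpenCV's Farneback flow is used here as a stand-in for the CLG-TV model, the midpoint warp is the usual backward-warp approximation, and holes/occlusions are simply averaged rather than repaired by the paper's non-local inpainting step.

```python
import cv2
import numpy as np

def interpolate_midframe(f0, f1):
    """Synthesize a frame halfway between f0 and f1 by warping both inputs
    toward the temporal midpoint along the estimated flow."""
    g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp approximation: I_0.5(x) ~ f0(x - 0.5*F(x)) ~ f1(x + 0.5*F(x)).
    map0_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map0_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    map1_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map1_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    warp0 = cv2.remap(f0, map0_x, map0_y, cv2.INTER_LINEAR)
    warp1 = cv2.remap(f1, map1_x, map1_y, cv2.INTER_LINEAR)
    return cv2.addWeighted(warp0, 0.5, warp1, 0.5, 0)
```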

7.
Temporal segmentation of successive actions in a long-term video sequence has been a long-standing problem in computer vision. In this paper, we exploit a novel learning-based framework. Given a video sequence, only a few characteristic frames are selected by the proposed selection algorithm; the likelihood with respect to the trained models is then computed in a pair-wise way, and segmentation is finally obtained as the model sequence that maximizes this likelihood. The average frame-level accuracy on the IXMAS dataset reaches 80.5%, using only 16.5% of all frames, with a computation time of 1.57 s per video (1160 frames on average).

8.
In the conventional motion-compensated temporal filtering (MCTF) based wavelet coding scheme, where the group-of-pictures (GOP) structure and low-pass frame position are fixed, variations in the motion activity of video sequences are not considered. In this paper, we propose an adaptive GOP structure selection scheme in which the GOP size and low-pass frame position are selected based on mutual information. Furthermore, the temporal decomposition process is determined adaptively according to the selected GOP structure. Extensive experiments compare the compression performance of the proposed method with the conventional MCTF encoding scheme and with the adaptive GOP structure in the standard scalable video coding model. The proposed low-pass frame selection improves compression quality by about 0.3–0.5 dB over the conventional scheme on video sequences with high motion activity. In scenes with uneven variation of motion activity, e.g. frequent shot cuts, the proposed adaptive GOP size achieves better compression than the conventional scheme. Compared with the adaptive GOP in the standard scalable video coding model, the proposed GOP structure scheme yields improvements of about 0.2–0.8 dB on sequences with high motion activity or shot cuts.
Zhao-Guang Liu
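The GOP decisions hinge on an inter-frame mutual information measure; a minimal histogram-based version for grayscale frames is sketched below (the bin count is an assumption, and how the score maps to GOP size and low-pass frame position is not reproduced here).

```python
import numpy as np

def frame_mutual_information(f1, f2, bins=32):
    """Mutual information between two grayscale frames, estimated from
    their joint intensity histogram."""
    hist, _, _ = np.histogram2d(f1.ravel(), f2.ravel(), bins=bins,
                                range=[[0, 256], [0, 256]])
    pxy = hist / hist.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of frame 1
    py = pxy.sum(axis=0, keepdims=True)      # marginal of frame 2
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```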

9.
Multi-view video is currently one of the hot topics in video coding research. To improve the compression ratio and visual quality of multi-view video, this paper proposes a coding scheme that adaptively selects B frames as reference frames. By comparing the residual between two adjacent P or I frames against two predefined thresholds, the scheme adaptively decides whether a B frame serves as a reference, thereby achieving better compression efficiency and visual quality. The scheme was implemented in the H.264/AVC reference encoder JM, and the test results met expectations.
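A toy sketch of the two-threshold decision as described: the mean absolute residual between adjacent anchor (I/P) frames is compared against two predefined thresholds to decide how the in-between B frame is treated. The threshold values and the three-way outcome labels are assumptions, not the paper's exact rule.

```python
import numpy as np

def pick_b_reference(prev_anchor, next_anchor, t_low=4.0, t_high=12.0):
    """Decide how to treat the B frame between two anchor frames based on
    the mean absolute residual between the anchors (illustrative rule)."""
    residual = np.mean(np.abs(prev_anchor.astype(np.float64) -
                              next_anchor.astype(np.float64)))
    if residual < t_low:
        return "skip"          # nearly static: B frame adds little as a reference
    elif residual < t_high:
        return "reference"     # moderate motion: using B as reference helps
    return "non-reference"     # strong motion: keep the B frame non-reference
```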

10.
This paper addresses the problem of ensuring the integrity of a digital video and presents a scalable signature scheme for video authentication based on cryptographic secret sharing. The proposed method detects spatial cropping and temporal jittering in a video, yet is robust against frame dropping in the streaming video scenario. In our scheme, the authentication signature is compact and independent of the size of the video. Given a video, we identify the key frames based on differential energy between the frames. Considering video frames as shares, we compute the corresponding secret at three hierarchical levels. The master secret is used as digital signature to authenticate the video. The proposed signature scheme is scalable to three hierarchical levels of signature computation based on the needs of different scenarios. We provide extensive experimental results to show the utility of our technique in three different scenarios—streaming video, video identification and face tampering.
Mohan S. Kankanhalli
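The key-frame identification step can be sketched as thresholding the differential energy between consecutive frames; the threshold heuristic below is an assumption, and the subsequent secret-sharing computation over the selected frames is not shown.

```python
import numpy as np

def select_key_frames(frames, energy_thresh=None):
    """Pick key frames where the differential energy (sum of squared pixel
    differences) between consecutive frames exceeds a threshold."""
    frames = [f.astype(np.float64) for f in frames]
    energy = np.array([np.sum((frames[i + 1] - frames[i]) ** 2)
                       for i in range(len(frames) - 1)])
    if energy_thresh is None:
        energy_thresh = energy.mean() + energy.std()  # assumed heuristic
    # Frame i+1 is a key frame when it differs strongly from frame i.
    return [i + 1 for i, e in enumerate(energy) if e > energy_thresh]
```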

11.
Compared with the previously dominant H.264 video compression standard, HEVC can reduce the bit rate by nearly 50% at the same reconstructed video quality, saving transmission bandwidth. Even so, given certain network bandwidth constraints, further improving HEVC coding performance and video compression efficiency remains an active research topic. This paper proposes a new video compression algorithm that combines standard HEVC coding with frame-rate conversion. At the encoder, an adaptive frame-dropping method lowers the frame rate of the original video to reduce the amount of data to be transmitted, and the resulting low-frame-rate video is encoded and decoded. At the decoder, the motion information of the dropped frames is estimated from the motion information extracted from the HEVC bitstream together with the block-partition mode information of the relevant coded frames. Finally, the video is reconstructed with the improved block-overlapped bidirectional motion-compensated frame interpolation method proposed in this paper. Experimental results confirm the effectiveness of the proposed algorithm.

12.
Obtaining specific action frames from sports videos is an important step toward intelligent sports instruction. To extract such frames for further video analysis, a method based on pose estimation and clustering is proposed. The HRNet pose estimation model is taken as the baseline; since it is accurate but too large for practical use, it is lightened and combined with DARK encoding to form a Small-HRNet model, which reduces the parameter count by 82.0% while essentially preserving accuracy. Small-HRNet then extracts human joint points from the video; the skeleton features of each frame serve as clustering samples, and the skeleton features of standard action frames serve as cluster centers, so that clustering the whole video yields its specific action frames. Experiments on a martial-arts dataset show an extraction accuracy of 87.5% for martial-arts action frames, demonstrating that the method extracts them effectively.
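A simplified sketch of the clustering stage, assuming joint coordinates have already been extracted (e.g. by Small-HRNet) and normalized: each frame's skeleton vector is assigned to the nearest standard-action skeleton, and frames within a distance threshold are kept as the specific action frames. The distance metric and threshold are illustrative.

```python
import numpy as np

def find_action_frames(frame_skeletons, standard_skeletons, dist_thresh=0.15):
    """Assign each frame's normalized joint vector to the nearest standard
    action skeleton (the cluster centers) and keep frames close to a center.

    frame_skeletons: iterable of (2*J,) vectors, one per video frame.
    standard_skeletons: (C, 2*J) array of standard action-frame skeletons.
    """
    centers = np.asarray(standard_skeletons, dtype=np.float64)
    hits = {}
    for idx, skel in enumerate(frame_skeletons):
        skel = np.asarray(skel, dtype=np.float64)
        dists = np.linalg.norm(centers - skel, axis=1)
        c = int(np.argmin(dists))
        if dists[c] < dist_thresh:
            hits.setdefault(c, []).append(idx)
    return hits  # {action id: [frame indices]}
```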

13.
张艳, 王涛, 孙雷, 徐青. 《计算机仿真》, 2007, 24(4): 193-197
A hybrid projection-onto-convex-sets algorithm, HPOCS, is proposed for super-resolution reconstruction of video images, exploiting the complementary information between consecutive frames to generate higher-resolution video. The algorithm first estimates the motion displacement between frames by image matching; it then performs APEX-based blind deconvolution to estimate the point spread function and the ideal video image; finally, the image is reconstructed within the projection-onto-convex-sets framework. Experiments show that after HPOCS reconstruction, video resolution is markedly improved compared with the original images, bilinear interpolation, and standard POCS reconstruction, with sharper edges and more prominent detail.

14.
石念峰, 侯小静, 张平. 《计算机应用》, 2017, 37(9): 2605-2609
To improve the motion expressiveness and compression ratio of key frames in sports videos, a key-frame extraction technique combining flexible pose estimation and spatio-temporal feature embedding is proposed. First, exploiting the temporal continuity of human motion, a temporally constrained flexible mixture-of-parts articulated human body (ST-FMP) model is built, and with motion-continuity constraints on the uncertain body parts, the N-best algorithm estimates the human pose parameters in each frame. Next, human motion is described by the relative positions and motion directions of body parts, and the Laplacian score is used for dimensionality reduction to obtain discriminative motion feature vectors with strong local topological expressiveness. Finally, the Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm dynamically determines the key frames. In key-frame extraction experiments on aerobics videos, the ST-FMP model improved the recognition accuracy of uncertain body parts by about 15 percentage points over the flexible mixture-of-parts (FMP) model and achieved a key-frame extraction accuracy of 81%, outperforming the KFE and motion-block-based key-frame algorithms. The proposed algorithm is sensitive to human motion features and pose and is suitable for annotating and reviewing sports videos.

15.
16.
Kalman-filter video object tracking based on motion estimation
An algorithm is proposed that tracks video objects by predicting the centroid of each moving target with a Kalman filter. Video objects are first segmented and their centroids computed. Using the centroids and motion-vector information from two consecutive frames, the Kalman filter then predicts each target's centroid position in the next frame, allowing multiple targets to be tracked automatically, quickly, and effectively. Experimental results show that the algorithm is robust to the appearance and disappearance of moving targets as well as to scale changes and deformation of non-rigid objects.
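A self-contained numpy sketch of the centroid predictor: a constant-velocity Kalman filter whose state is the centroid position and velocity, with a predict/update step per frame. The noise covariances are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def make_kalman(dt=1.0):
    """Constant-velocity Kalman filter over the object centroid
    (state = [x, y, vx, vy], measurement = [x, y])."""
    return {
        "x": np.zeros(4),
        "P": np.eye(4) * 100.0,
        "F": np.array([[1, 0, dt, 0],
                       [0, 1, 0, dt],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]], dtype=float),
        "H": np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0]], dtype=float),
        "Q": np.eye(4) * 0.01,   # process noise (assumed)
        "R": np.eye(2) * 1.0,    # measurement noise (assumed)
    }

def kalman_step(kf, centroid):
    """Predict the centroid for the next frame, then correct with the
    centroid measured from the segmented object."""
    # Predict
    kf["x"] = kf["F"] @ kf["x"]
    kf["P"] = kf["F"] @ kf["P"] @ kf["F"].T + kf["Q"]
    # Update
    z = np.asarray(centroid, dtype=float)
    y = z - kf["H"] @ kf["x"]
    S = kf["H"] @ kf["P"] @ kf["H"].T + kf["R"]
    K = kf["P"] @ kf["H"].T @ np.linalg.inv(S)
    kf["x"] = kf["x"] + K @ y
    kf["P"] = (np.eye(4) - K @ kf["H"]) @ kf["P"]
    return kf["x"][:2]  # filtered centroid estimate
```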

17.
In video coding, a well-designed rate control scheme should be concerned with both objective and subjective quality. However, existing H.264 rate control algorithms mainly aim at improving objective quality without considering the human visual system. In this paper, we propose a novel rate control algorithm that takes visual attention into account. Within a group of pictures, the bits allocated to each frame are related to its local motion attention, and more bits are allocated to frames with strong local motion attention. Similarly, within each frame, more bits are assigned to visually significant macroblocks (MBs) and fewer to visually insignificant MBs. Experimental results show that the proposed algorithm improves coding quality in frames with strong local motion and reduces PSNR fluctuation across frames by up to 22.15%. In addition, PSNR in visually important regions is increased by up to 1.45 dB compared with the standard H.264 rate control scheme, improving subjective quality. The added computational complexity of the proposed algorithm is less than 4%, which is negligible.
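A hedged sketch of attention-weighted bit allocation at the frame level: part of the GOP budget is spread evenly and the rest is distributed in proportion to each frame's local-motion attention. The split rule and `base_share` are assumptions, not the paper's exact allocation model.

```python
def allocate_frame_bits(gop_budget, motion_attention, base_share=0.5):
    """Split a GOP's bit budget so that frames with stronger local-motion
    attention receive proportionally more bits; base_share of the budget is
    spread evenly so low-attention frames still get a floor."""
    n = len(motion_attention)
    total_attention = sum(motion_attention) or 1.0
    even = gop_budget * base_share / n
    extra = gop_budget * (1.0 - base_share)
    return [even + extra * a / total_attention for a in motion_attention]
```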

18.
Video summarization is an integral component of video archiving systems. It provides small versions of videos that are suitable for enhancing browsing and navigation. A popular way to generate summaries is to extract a set of key frames that conveys the overall message of the video. This paper introduces a novel feature-aggregation-based visual saliency detection mechanism and its use for extracting key frames. Saliency maps are computed from the aggregated features and from motion intensity, and a non-linear weighted fusion mechanism combines the two maps. On the resulting map, a Gaussian weighting scheme assigns more weight to pixels close to the center of the frame. Based on the final attention value of each frame, key frames are extracted adaptively. Experimental results, based on different evaluation standards, demonstrate that the proposed scheme extracts semantically significant key frames.
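A rough sketch of the fusion and center-weighting steps, assuming the static (feature-aggregation) and motion-intensity saliency maps are already available; the particular non-linear fusion rule, `alpha`, and the Gaussian width are illustrative choices rather than the paper's.

```python
import numpy as np

def fuse_saliency(static_sal, motion_sal, alpha=0.6, sigma_frac=0.3):
    """Non-linearly fuse a static saliency map with a motion-intensity map,
    then emphasize pixels near the frame center with a Gaussian weight.
    Returns the weighted map and the frame's overall attention value."""
    s = static_sal / (static_sal.max() + 1e-8)
    m = motion_sal / (motion_sal.max() + 1e-8)
    fused = (s ** alpha) * (m ** (1.0 - alpha))      # one simple non-linear fusion
    h, w = fused.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma2 = (sigma_frac * min(h, w)) ** 2
    center_w = np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma2)))
    weighted = fused * center_w
    return weighted, float(weighted.mean())
```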

19.

Tele-training in surgical education has not been effectively implemented. High-quality real-time video transmission demands a high transmission rate, reliability, and throughput with low distortion. This work proposes a system that improves video quality during real-time surgical tele-training by minimizing the total distortion of the video frames, ensuring better flow-rate allocation, and enhancing the reliability of the frames. The system is built around a proposed algorithm for Enhancing Video Quality, Distortion Minimization, Bandwidth efficiency, and Reliability Maximization (EVQDMBRM). The algorithm reduces the total distortion of the video frames and enhances video quality in a real-time network by dynamically allocating the flow rate at the video source and maximizing the transmission reliability of the frames. Results show that the proposed EVQDMBRM algorithm improves video quality with minimized total distortion: the average Peak Signal to Noise Ratio (PSNR) reaches 51.13 dB versus 47.28 dB for existing systems, the average frame processing time drops to 58.2 milliseconds (ms) from 76.1 ms, and the average end-to-end delay drops to 114.57 ms from 133.58 ms compared with traditional methods. The proposed system thus concentrates on minimizing video distortion and improving surgical video transmission quality through the EVQDMBRM algorithm: it dynamically allocates the video rate at the source and minimizes the packet-loss ratio and the probing status used to estimate the available bandwidth.


20.
庄燕滨, 桂源, 肖贤建. 《计算机应用》, 2013, 33(9): 2577-2579
To address the blurring produced when traditional video compressive sensing reconstructs each frame independently, compressive sensing theory is combined with techniques from MPEG-standard video coding, and a video compressive sensing method based on motion estimation and motion compensation is proposed to remove the spatial and temporal redundancy of the video signal. Taking full account of the temporal correlation of the video sequence, the method first performs forward, backward, and bidirectional prediction and compensation on the video frames, then reconstructs the motion-prediction residual with the backtracking-based adaptive orthogonal matching pursuit (BAOMP) algorithm, and finally reconstructs the current frame. Experimental results show that the method substantially improves video quality over frame-by-frame reconstruction and achieves a higher peak signal-to-noise ratio.
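For orientation, a plain orthogonal matching pursuit routine is sketched below as a stand-in for BAOMP (the backtracking and adaptive stopping of BAOMP are omitted); `A` is the sensing matrix, `y` the compressive measurements of the motion-prediction residual, and `sparsity` an assumed sparsity level.

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain orthogonal matching pursuit: recover a sparse coefficient
    vector x with A @ x ~= y."""
    residual = y.astype(np.float64).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```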
