Similar Documents (19 results)
1.
A blind video watermarking method based on the Spatial Transformer Network (STN) is proposed and implemented. To address the problem that current video watermarks cannot simultaneously resist signal-processing attacks such as noise and compression and geometric attacks such as scaling and cropping, a sub-block watermark embedding mechanism is designed in which the STN determines the embedding sub-block regions and the spatial transform coefficients. Mid-frequency transform-domain coefficients are chosen as the embedding channel, and the spatial transform coefficients are used to rectify rotated and scaled frames in order to obtain a robust video watermarking method. Experimental results show that the STN-based blind video watermarking method offers good visual imperceptibility together with strong stability and robustness, while the embedding capacity exceeds 256 bits.
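The abstract gives no implementation detail beyond the embedding channel, so the following is only a minimal sketch of blind embedding by quantizing a mid-frequency DCT coefficient of an 8×8 block; the block size, the coefficient position (3, 4), and the quantization step are illustrative assumptions, and the STN-driven block selection is omitted.

```python
# Hedged sketch: quantization-index-modulation embedding in a mid-frequency
# DCT coefficient. Position and step size are assumptions, not the paper's.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs.T, norm='ortho').T, norm='ortho')

def embed_bit(block, bit, pos=(3, 4), step=12.0):
    """Embed one bit into a mid-frequency coefficient of an 8x8 block."""
    c = dct2(block.astype(np.float64))
    q = np.round(c[pos] / step)
    if int(q) % 2 != bit:       # force the quantizer parity to encode the bit
        q += 1
    c[pos] = q * step
    return idct2(c)

def extract_bit(block, pos=(3, 4), step=12.0):
    c = dct2(block.astype(np.float64))
    return int(np.round(c[pos] / step)) % 2

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
assert extract_bit(embed_bit(block, 1)) == 1
```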

2.
An HVS-Based Adaptive Robust Video Watermarking Algorithm
陈光喜, 成彦. 《计算机科学》, 2008, 35(11): 214-216
Unlike still images, video contains motion and therefore varies over time. Exploiting the fact that human visual sensitivity to fast-moving objects is reduced, together with a DCT-domain visual masking model, an adaptive robust video watermarking algorithm is proposed. The algorithm first compares the DC DCT coefficients of co-located regions in an I-frame and its neighboring frames to obtain a set of fast-motion image blocks; it then computes the mid-frequency DCT coefficients of the blocks in this set according to the visual masking model and adaptively selects regions for embedding. Simulations show that the algorithm not only resists MPEG compression well, but also balances imperceptibility and robustness and withstands a degree of geometric attack.
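As a minimal sketch of the motion-block selection step described above: since the DC coefficient of an orthonormal 8×8 DCT is proportional to the block mean, blocks whose co-located means differ strongly between frames can be flagged as fast-motion blocks. The threshold value is an assumption.

```python
import numpy as np

def fast_motion_blocks(i_frame, next_frame, block=8, thresh=20.0):
    """Flag blocks whose DC DCT coefficient (= 8 * block mean for an
    orthonormal 8x8 DCT) changes strongly between two frames."""
    h, w = i_frame.shape
    coords = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            dc_a = i_frame[r:r+block, c:c+block].mean() * block
            dc_b = next_frame[r:r+block, c:c+block].mean() * block
            if abs(dc_a - dc_b) > thresh:   # large DC change -> fast motion
                coords.append((r, c))
    return coords
```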

3.
刘立冬, 田翔. 《计算机应用》, 2013, 33(7): 1866-1869
To protect the copyright of H.264 video, a blind watermarking algorithm in the discrete cosine transform (DCT) domain of instantaneous decoding refresh (IDR) frames is proposed. The algorithm first analyzes the texture of the IDR frame and uses a sliding rectangular window over the image gradient to extract richly textured regions; it then computes the energy of the sixteen 4×4 sub-blocks of each macroblock in those regions to find the highest-energy sub-block; finally, the watermark is embedded by adaptively modifying the magnitude of one AC coefficient of that sub-block. Experimental results on CIF-resolution test sequences show that embedding lowers the peak signal-to-noise ratio (PSNR) by 0.15 dB on average and raises the bit rate by 0.49% on average, the watermark detection accuracy exceeds 91%, and the algorithm effectively resists re-encoding attacks with different quantization parameters (QP).
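A hedged sketch of the sub-block selection step: split a 16×16 macroblock into sixteen 4×4 sub-blocks and pick the one with the highest energy in the DCT domain. Treating "energy" as AC energy is an assumption, and the paper's exact AC-coefficient modulation rule is not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(b):
    return dct(dct(b.T, norm='ortho').T, norm='ortho')

def best_subblock(macroblock):
    """Return the (row, col) origin of the highest-energy 4x4 sub-block
    of a 16x16 macroblock (energy taken as AC energy, DC excluded)."""
    best, best_e = None, -1.0
    for r in range(0, 16, 4):
        for c in range(0, 16, 4):
            coeffs = dct2(macroblock[r:r+4, c:c+4].astype(np.float64))
            e = (coeffs ** 2).sum() - coeffs[0, 0] ** 2   # AC energy
            if e > best_e:
                best, best_e = (r, c), e
    return best
```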

4.
To design a more robust digital watermarking algorithm, a new scheme for adaptive embedding and adaptive extraction of grayscale image watermarks is proposed, based on an analysis of the embedding process, the extraction process, and attack resistance. Building on the multiresolution property of the wavelet transform, the scheme exploits human visual system characteristics such as luminance perception and contrast sensitivity thresholds, combined with the luminance and texture features of each local wavelet sub-block: during embedding, an adaptive embedding-strength factor is computed dynamically for each local block. During extraction, the best watermark estimate is computed adaptively from the watermark information recovered from different regions, which greatly improves the performance of the watermarking system from the extraction side. Experimental results show strong robustness against a variety of attacks, especially smoothing, filtering, JPEG compression, JPEG2000 compression, and bit-plane removal.
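A minimal sketch of how a per-block adaptive strength factor might be computed from local luminance and texture; the weights, the base strength, and the use of standard deviation as the texture proxy are all assumptions, not the paper's formula.

```python
import numpy as np

def strength_factor(block, base=4.0, w_lum=0.5, w_tex=0.5):
    """Hypothetical adaptive embedding strength: brighter and more textured
    blocks mask distortion better, so they tolerate a larger strength."""
    lum = block.mean() / 255.0             # normalized luminance
    tex = min(block.std() / 64.0, 1.0)     # normalized texture proxy
    return base * (1.0 + w_lum * lum + w_tex * tex)
```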

5.
A Morphological-Gradient-Based Adaptive Watermarking Algorithm in the DWT Domain
An adaptive wavelet-domain watermarking algorithm based on the morphological gradient is proposed. The morphological gradient is used to analyze the texture distribution and strength of each sub-block after wavelet packet decomposition, and the sub-blocks are ranked by texture. A threshold is determined adaptively for each sub-block from its own texture features, and positions whose coefficients exceed the threshold are selected as significant coefficients for embedding the watermark. The embedding strength at each position is then modulated adaptively by a noise visibility function (NVF), so that the optimal strength is applied while preserving good visual masking, better balancing robustness and imperceptibility. Extensive experiments show good robustness against common image-processing operations.
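The noise visibility function is commonly written as NVF = 1 / (1 + θ·σ²), where σ² is the local variance; the embedding strength is then raised where NVF is small (textured, well-masked regions). A minimal sketch under that standard form, with the window size and strength constants as assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nvf(image, win=7, theta_scale=100.0):
    """Noise visibility function: near 1 in flat regions (noise visible),
    near 0 in textured regions (noise masked)."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, win)
    var = np.maximum(uniform_filter(img ** 2, win) - mean ** 2, 0.0)
    theta = theta_scale / max(var.max(), 1e-9)   # common normalization
    return 1.0 / (1.0 + theta * var)

def embedding_strength(image, s_flat=1.0, s_tex=6.0):
    v = nvf(image)
    return s_flat * v + s_tex * (1.0 - v)   # stronger where texture masks
```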

6.
An adaptive video watermarking algorithm based on subjective experiments is presented. Subjective tests determine the just-noticeable embedding-strength threshold for pixels of different activity levels in an image, yielding a masking function that controls the embedding strength of the watermark. This keeps image quality high while fully exploiting the available visual margin to raise the embedding strength. Experiments show that, compared with a non-adaptive spatial-domain algorithm and an adaptive algorithm based on a simple visual model, the subjectively driven adaptive algorithm achieves a higher watermark detection rate while maintaining high subjective image quality.

7.
陈淑琴, 李智, 程欣宇, 高奇. 《计算机应用》, 2017, 37(7): 1936-1942
To address the vulnerability of video watermarks to geometric attacks and the trade-off between robustness and transparency, a dual video watermarking algorithm resistant to geometric attacks is proposed, combining human visual characteristics with the scale-invariant feature transform (SIFT). First, the visual masking threshold of the video sequence is obtained and used as the maximum embedding strength. Next, each frame undergoes a discrete wavelet transform (DWT): for the mid- and high-frequency sub-bands, an adaptive watermarking algorithm based on video motion information is proposed; for the low-frequency sub-band, a geometric-attack-resistant algorithm based on the statistics of the low-frequency wavelet coefficients is proposed. Finally, SIFT serves as a trigger to decide whether a frame has suffered a geometric attack; attacked frames are rectified using SIFT's scale and orientation invariance before the watermark is extracted, while unattacked frames use the mid/high-frequency extraction algorithm directly. Compared with VW-HDWT, a real-time video watermarking algorithm based on wavelet-domain histograms, the proposed algorithm improves the peak signal-to-noise ratio (PSNR) by 7.5%; compared with a feature-region-based algorithm, it increases embedding capacity about tenfold. Experimental results show strong robustness against common geometric attacks while keeping good watermark transparency.
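A rough sketch of the SIFT-based rectification step, assuming OpenCV: match SIFT keypoints between a reference frame and a suspect frame, estimate a similarity transform (rotation plus scale) with RANSAC, and warp the suspect frame back before extraction. The ratio-test threshold and RANSAC settings are assumptions.

```python
import cv2
import numpy as np

def rectify_frame(ref_gray, attacked_gray):
    """Estimate rotation/scale from SIFT matches and undo them."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(attacked_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return attacked_gray                 # not enough evidence to correct
    src = np.float32([kp2[m.queryIdx].pt for m in good])
    dst = np.float32([kp1[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return attacked_gray
    h, w = ref_gray.shape
    return cv2.warpAffine(attacked_gray, M, (w, h))
```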

8.
An Adaptive Robust Watermarking Algorithm Based on SVD and Wavelet Packet Decomposition
To resolve the conflict between watermark robustness and transparency, an adaptive robust watermarking algorithm based on singular value decomposition (SVD) and wavelet packet decomposition is proposed. The binary watermark image is first scrambled with an Arnold transform to improve the security of the watermark information. During embedding, the original host image is divided into 8×8 sub-blocks, each of which undergoes wavelet packet decomposition. The optimal quantization step is determined from human visual characteristics and each block's own luminance and texture, and the watermark is embedded adaptively into the singular values of the corresponding high- and low-frequency regions by quantization modulation. Experimental results show good transparency together with strong robustness against common attacks such as JPEG compression, added noise, filtering, and geometric attacks.
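A minimal sketch of quantization modulation on singular values; the wavelet-packet stage and the HVS-derived quantization step are omitted, and a fixed step is assumed.

```python
import numpy as np

def embed_bit_svd(block, bit, step=10.0):
    """Embed one bit by quantizing the largest singular value of a block.
    Assumes s[0] - s[1] > step so the ordering of singular values holds."""
    U, s, Vt = np.linalg.svd(block.astype(np.float64), full_matrices=False)
    q = np.floor(s[0] / step)
    if int(q) % 2 != bit:          # pick the cell whose parity encodes the bit
        q += 1
    s[0] = q * step + step / 2.0   # centre of the chosen quantization cell
    return U @ np.diag(s) @ Vt

def extract_bit_svd(block, step=10.0):
    s = np.linalg.svd(block.astype(np.float64), compute_uv=False)
    return int(np.floor(s[0] / step)) % 2
```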

9.
An Adaptive Digital Watermark Based on Image Segmentation and HVS
李道远, 常敏, 袁春风. 《计算机工程》, 2003, 29(20): 130-131, 172
An adaptive digital watermarking algorithm based on image segmentation and the HVS is proposed. Image segmentation extracts the semantically important regions of the image, called regions of interest, which serve as the main embedding sites. Each region of interest is assigned a semantic weight according to its semantic importance, and each of its sub-blocks is assigned a visual masking weight according to HVS characteristics; the weighted sum of the two determines the embedding strength for each sub-block. The watermark is embedded into the low-frequency part of the DCT domain of each sub-block. Experimental results show that the watermark has good imperceptibility and robustness.

10.
A New Semi-Fragile Video Watermarking Scheme
Semi-fragile video watermarking is an important technique for authenticating the integrity of video content. To obtain watermark information with stronger authentication capability, a new scheme is proposed: the watermark is formed by a dual-feature extraction method that combines DCT block-group energy relations with block mean-gray-level features, the watermark information is Turbo-encoded, and it is then embedded with an improved DEW algorithm. Dual-feature extraction overcomes the incompleteness of single-feature extraction and strengthens tamper detection and localization, while Turbo coding improves watermark robustness and lowers the authentication false-alarm rate. Experimental results show that, without degrading visual quality, the algorithm authenticates common tampering operations completely with a low false-alarm probability.
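A hedged sketch of one of the two features, the block-group energy relation: compare the AC energies of two groups of DCT blocks and emit one authentication bit. The grouping of blocks is an assumption; the paper's exact relation and the mean-gray-level feature are not reproduced.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(b):
    return dct(dct(b.T, norm='ortho').T, norm='ortho')

def energy_relation_bit(blocks_a, blocks_b):
    """One bit from the AC-energy relation between two block groups."""
    def ac_energy(blocks):
        total = 0.0
        for b in blocks:
            c = dct2(b.astype(np.float64))
            total += (c ** 2).sum() - c[0, 0] ** 2   # exclude DC
        return total
    return int(ac_energy(blocks_a) > ac_energy(blocks_b))
```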

11.
Objective video quality assessment is of great importance in a variety of video processing applications. Most existing video quality metrics either focus primarily on capturing spatial artifacts in the video signal, or are designed to assess only grayscale video, thereby ignoring important chrominance information. In this paper, on the basis of a top-down visual analysis of cognitive understanding and video features, we propose and develop a novel full-reference perceptual video assessment technique that accepts visual information inputs in the form of a quaternion consisting of contour, color, and temporal information. Because chrominance information plays the more important role in the “border-to-surface” mechanism at early stages of cognitive visual processing, our new metric takes into account chrominance information rather than the luminance information used in conventional video quality assessment. Our perceptual quaternion model employs singular value decomposition (SVD) and uses human visual psychological features for SVD block weighting to better reflect perceptual focus and interest. Our major contributions include: a new perceptual quaternion that takes chrominance as one spatial feature and temporal information to model motion or changes across adjacent frames; a three-level video quality measure that reflects visual psychology; and two weighting methods based on entropy and frame correlation. Experimental validation on the Video Quality Experts Group (VQEG) Phase I FR-TV test dataset demonstrates that our new assessment metric outperforms PSNR, SSIM, and PVQM (P8) and correlates highly with perceived video quality.
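The SVD ingredient of such metrics is often a per-block distance between the singular values of reference and distorted blocks; the sketch below shows that ingredient only, not the paper's full quaternion model, and the block size is an assumption.

```python
import numpy as np

def svd_block_distortion(ref, dist, block=8):
    """Per-block distance between singular values of co-located blocks;
    larger values indicate more structural distortion."""
    h, w = ref.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            s_r = np.linalg.svd(ref[r:r+block, c:c+block].astype(float),
                                compute_uv=False)
            s_d = np.linalg.svd(dist[r:r+block, c:c+block].astype(float),
                                compute_uv=False)
            scores.append(np.linalg.norm(s_r - s_d))
    return np.array(scores)        # pool (e.g. mean) for a global score
```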

12.
Video Tampering Detection and Multi-Granularity Localization Based on Video Perceptual Hashing
To detect and localize tampering in video accurately and quickly, a multi-level, multi-granularity algorithm is proposed that incorporates a computable model of human vision. Random block sampling extracts structural perceptual features of the video and temporal perceptual features of the video frames; the one-way digest property of hashing is used to quantize the perceptual features into a video digest hash. Tampered locations are then detected and localized at multiple granularities and levels with a similarity matrix. Experimental results show that the similarity-fit curve reflects both the strength and the location of tampering attacks, and the algorithm achieves better tamper-detection accuracy and localization precision.
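A minimal sketch of the random-block-sampling hash idea: sample seeded random blocks per frame (so every frame samples the same positions), quantize each block mean against the frame mean into one bit, and compare hashes by normalized Hamming distance. Block count and size are assumptions.

```python
import numpy as np

def frame_hash(frame, n_blocks=64, block=16, seed=7):
    """Perceptual hash: one bit per randomly sampled block."""
    rng = np.random.default_rng(seed)    # fixed seed -> same sample positions
    h, w = frame.shape
    rows = rng.integers(0, h - block, n_blocks)
    cols = rng.integers(0, w - block, n_blocks)
    global_mean = frame.mean()
    bits = [int(frame[r:r+block, c:c+block].mean() > global_mean)
            for r, c in zip(rows, cols)]
    return np.array(bits, dtype=np.uint8)

def hash_similarity(h1, h2):
    return 1.0 - np.mean(h1 != h2)   # 1.0 = identical, ~0.5 = unrelated
```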

13.
To address video distortion caused by compression and other processing, a no-reference video quality assessment method that mines joint spatial- and frequency-domain features is proposed, based on an analysis of quality-aware video features. The method extracts joint spatial/frequency perceptual features, including the gray-gradient co-occurrence matrix, spatial entropy, spectral entropy, correlation entropy, and natural-exponent features. When extracting features, the variance of the per-frame features is used to represent the whole video, which separates videos with different distortion types better than the per-frame mean used in traditional methods. Finally, a support vector regression model is built to map the perceptual features to video quality. Experiments on the LIVE and IVP video databases show that the proposed method outperforms methods reported in the current literature.
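A hedged sketch of two of the listed features, spatial entropy and spectral entropy, computed per frame and pooled by variance across frames as the abstract describes; the histogram bin count is an assumption.

```python
import numpy as np
from scipy.fftpack import dct

def spatial_entropy(frame, bins=256):
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spectral_entropy(frame):
    c = dct(dct(frame.astype(np.float64).T, norm='ortho').T, norm='ortho')
    p = c ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def video_feature(frames):
    """Variance pooling across frames, as described, instead of the mean."""
    se = [spatial_entropy(f) for f in frames]
    fe = [spectral_entropy(f) for f in frames]
    return np.array([np.var(se), np.var(fe)])
```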

14.
Appropriate organization of video databases is essential for pertinent indexing and retrieval of visual information. This paper proposes a new feature, called the block intensity comparison code (BICC), for video classification and retrieval. The BICC represents the average intensity differences between blocks of a frame. The extracted feature is further processed with principal component analysis (PCA) to reduce redundancy while exploiting the correlations between feature elements. The temporal nature of video is modeled by a hidden Markov model (HMM) with BICC as the feature. It is found that BICC outperforms other visual features commonly used for video classification, such as edge, motion, and histogram features.
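Reading "average block intensity difference between blocks" literally, a minimal sketch of BICC might look like the following; the grid size and the use of all unordered block pairs are assumptions.

```python
import numpy as np

def bicc(frame, grid=4):
    """Block intensity comparison code: differences between the mean
    intensities of the grid x grid blocks of a frame."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = np.array([frame[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
                      for r in range(grid) for c in range(grid)])
    # feature vector: difference for every unordered pair of blocks
    return (means[:, None] - means[None, :])[np.triu_indices(grid*grid, k=1)]
```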

15.
A coherent computational approach to model bottom-up visual attention
Visual attention is a mechanism that filters out redundant visual information and detects the most relevant parts of our visual field. Automatic determination of the most visually relevant areas would be useful in many applications, such as image and video coding, watermarking, video browsing, and quality assessment. Many research groups are currently investigating computational modeling of the visual attention system. The first published computational models were based on some basic and well-understood human visual system (HVS) properties. These models feature a single perceptual layer that simulates only one aspect of the visual system. More recent models integrate complex features of the HVS and simulate hierarchical perceptual representation of the visual input. The bottom-up mechanism is the most common feature of modern models; it refers to involuntary attention, i.e., salient spatial visual features that effortlessly or involuntarily attract our attention. This paper presents a coherent computational approach to modeling bottom-up visual attention, based mainly on the current understanding of HVS behavior. Contrast sensitivity functions, perceptual decomposition, visual masking, and center-surround interactions are some of the features implemented in this model. The performance of the algorithm is assessed using natural images and experimental measurements from an eye-tracking system. Two well-known metrics, the correlation coefficient and the Kullback-Leibler divergence, are used to validate the model, and a further metric is also defined. The results from this model are finally compared to those from a reference bottom-up model.
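A small sketch of the two validation metrics named above, computed between a model saliency map and an eye-tracking fixation density map (both assumed to be nonnegative arrays of the same size):

```python
import numpy as np

def correlation_coefficient(sal, fix):
    return np.corrcoef(sal.ravel(), fix.ravel())[0, 1]

def kl_divergence(sal, fix, eps=1e-12):
    """KL(fix || sal), treating both maps as probability distributions."""
    p = fix.ravel() + eps
    q = sal.ravel() + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))
```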

16.
Building on the traditional bag-of-words model and the correlation between key frames of adjacent shots, a model for video scene classification is proposed. The video is segmented into shots, key frames are extracted and normalized, and the key frames are tiled as image blocks in temporal order into a new composite image. SIFT features and HSV color features are extracted from the composite image and mapped into a Hilbert space. Multiple kernel learning then selects a suitable set of kernel functions to train on each image, producing a classification model. Experiments on a variety of videos show that the method achieves good results in video scene classification.
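A small sketch of the key-frame compositing step, assuming OpenCV: resize the key frames and tile them in temporal order into one image (tile size and column count are assumptions); SIFT/HSV extraction and the multiple-kernel training are omitted.

```python
import numpy as np
import cv2

def composite_keyframes(keyframes, tile=(128, 128), cols=4):
    """Tile normalized (resized) grayscale key frames in temporal order."""
    resized = [cv2.resize(f, tile) for f in keyframes]    # tile = (w, h)
    rows = -(-len(resized) // cols)                       # ceiling division
    canvas = np.zeros((rows * tile[1], cols * tile[0]), dtype=resized[0].dtype)
    for i, f in enumerate(resized):
        r, c = divmod(i, cols)
        canvas[r*tile[1]:(r+1)*tile[1], c*tile[0]:(c+1)*tile[0]] = f
    return canvas
```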

17.
18.
Similarity Analysis of Video Sequences Using an Artificial Neural Network
Comparison of video sequences is an important operation in many multimedia information systems. The similarity measure for comparison is typically based on some measure of correlation with the perceptual similarity (or difference) among the video sequences, or with the similarity (or difference) in some measure of semantics associated with them. In content-based similarity analysis, the video data are expressed in terms of different features, and similarity matching is performed by quantifying the feature relationships between the target video and query video shots, with either an individual feature or a feature combination. In this study, two approaches are proposed for the similarity analysis of video shots. In the first, mosaic images are created from video shots, and similarity is determined by comparing the mosaic images. In the second, key frames are extracted for each video shot, and similarity is determined by comparing the key frames of the video shots. The extracted features include image histograms, slopes, edges, and wavelets. Both individual features and feature combinations are used in similarity matching with an artificial neural network. The similarity rank of the query video shots is determined from the values of the coefficient of determination and the mean absolute error. The study shows that mosaic-based similarity analysis can be expected to yield more reliable results, whereas key frame-based similarity analysis could potentially be applied to a wider range of applications. A weighted non-linear feature combination is shown to yield better results than a single feature for video similarity analysis, and the coefficient of determination is shown to be a better criterion than the mean absolute error in similarity matching.
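The two ranking criteria named at the end are standard; a minimal sketch of both:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between observed and predicted values."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))
```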

19.
Feiniu Yuan. Pattern Recognition, 2012, 45(12): 4326-4336
Traditional methods for video smoke detection can easily achieve very low training error, but their generalization performance is poor owing to the arbitrary shapes of smoke, intra-class variations, occlusions, and clutter. To overcome these problems, a double-mapping framework is proposed to extract partition-based features with AdaBoost. The first mapping is from an original image to block features. A feature vector is formed by concatenating histograms of edge orientation, edge magnitude, and the Local Binary Pattern (LBP) bit, together with densities of edge magnitude, LBP bit, color intensity, and saturation. Each component of the feature vector produces a feature image. To obtain shape-invariant features, a detection window is partitioned into a set of small blocks called a partition, and many multi-scale partitions are generated by varying block sizes and partition schemes. The sum of each feature image within each block of each partition is computed to generate block features. The second mapping is from block features to statistical features: the mean, variance, skewness, kurtosis, and Hu moments of the block features are computed over all partitions to form a feature pool. AdaBoost is then used to select discriminative shape-invariant features from the feature pool. Experiments show that the proposed method has better generalization performance and is less sensitive to geometric transforms than traditional methods.
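A rough sketch of the second mapping and the boosting stage, assuming scikit-learn: compute the mean, variance, skewness, and kurtosis of each block-feature vector (Hu moments omitted) and train an AdaBoost classifier. The data here are random placeholders.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import AdaBoostClassifier

def statistical_features(block_features):
    """Second mapping: block features -> statistical moments."""
    x = np.asarray(block_features, dtype=np.float64)
    return np.array([x.mean(), x.var(), skew(x), kurtosis(x)])

# Training sketch with placeholder data: rows of X are statistical features,
# y marks smoke / not-smoke.
X = np.vstack([statistical_features(np.random.rand(64)) for _ in range(200)])
y = np.random.randint(0, 2, 200)
clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
```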
