Similar Documents
20 similar documents found (search time: 15 ms).
1.
In this paper, building on the theory and methods of Watson's perceptual model and rational dither modulation (RDM), a hybrid quantization-based watermarking scheme in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains is studied. In quantization-based watermarking, the quantization step size plays an important role in many algorithms; RDM adopts a gain-invariant adaptive quantization step size at both the embedder and the decoder. We therefore investigate combining a modified Watson's perceptual model with RDM. The improved robustness is due to embedding in the high-entropy region of the low-frequency sub-band image and to adaptive control of the quantization step size. A Euclidean-distance decoder is used to extract the watermark data. The performance of the proposed scheme is analytically calculated and verified by simulation. Experimental results confirm the imperceptibility of the proposed watermarking and its higher robustness against attacks compared with alternative watermarking methods in the literature.
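The gain-invariance idea behind RDM can be sketched roughly as follows (a minimal illustration, not the authors' implementation; the function names and the window-mean gain estimate are illustrative choices). The quantization step at each sample is scaled by a statistic of previously watermarked samples, so a constant amplitude scaling of the signal scales the step identically at embedder and decoder:

```python
import numpy as np

def rdm_embed(x, bits, delta0=2.0, window=50):
    """Embed one bit per sample with dither modulation whose step is
    scaled by the mean magnitude of the previous watermarked samples
    (the 'rational' part: the decoder sees the same samples, so an
    amplitude scaling of the whole signal cancels out)."""
    y = np.zeros(len(x), dtype=float)
    for i, (xi, b) in enumerate(zip(x, bits)):
        past = y[max(0, i - window):i]
        gain = float(np.mean(np.abs(past))) if past.size else 1.0
        step = delta0 * max(gain, 1e-6)
        dither = b * step / 2.0            # binary dither: lattice shift of 0 or step/2
        y[i] = step * np.round((xi - dither) / step) + dither
    return y

def rdm_decode(y, delta0=2.0, window=50):
    """Minimum-distance decoding against the two shifted lattices."""
    bits = []
    for i, yi in enumerate(y):
        past = y[max(0, i - window):i]
        gain = float(np.mean(np.abs(past))) if past.size else 1.0
        step = delta0 * max(gain, 1e-6)
        errs = []
        for b in (0, 1):
            d = b * step / 2.0
            q = step * np.round((yi - d) / step) + d
            errs.append(abs(yi - q))
        bits.append(int(np.argmin(errs)))
    return bits
```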

2.
Adaptive image coding with perceptual distortion control
This paper presents a discrete cosine transform (DCT)-based locally adaptive perceptual image coder, which discriminates between image components based on their perceptual relevance for achieving increased performance in terms of quality and bit rate. The new coder uses a locally adaptive perceptual quantization scheme based on a tractable perceptual distortion metric. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion. The derived masking thresholds are used in controlling the quantization stage by adapting the quantizer reconstruction levels in order to meet the desired target perceptual distortion. The proposed coding scheme is flexible in that it can be easily extended to work with any subband-based decomposition in addition to block-based transform methods. Compared to existing perceptual coding methods, the proposed perceptual coding method exhibits superior performance in terms of bit rate and distortion control. Coding results are presented to illustrate the performance of the presented coding scheme.
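The core mechanism, quantizer steps driven by local masking thresholds, can be illustrated with a simplified sketch. It assumes a visual model has already produced one threshold per coefficient; the `thresholds` input and the step-equals-twice-threshold rule are illustrative, not the paper's exact adaptation:

```python
import numpy as np

def perceptual_quantize(coeffs, thresholds):
    """Quantize each transform coefficient with a step equal to twice its
    local masking threshold, so the reconstruction error per coefficient
    stays at or below the just-noticeable level (error <= step/2)."""
    steps = 2.0 * np.asarray(thresholds, dtype=float)
    indices = np.round(np.asarray(coeffs, dtype=float) / steps)
    return indices.astype(int), indices * steps   # (symbols, reconstruction)
```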

3.
Locally adaptive perceptual image coding
Most existing efforts in image and video compression have focused on minimizing mathematically tractable, easy-to-measure distortion metrics rather than perceptual ones. While nonperceptual distortion measures are reasonably reliable at higher bit rates (high-quality applications), they do not correlate well with perceived quality at lower bit rates, and they fail to guarantee preservation of important perceptual qualities in the reconstructed images even when the signal-to-noise ratio (SNR) is good. This paper presents a perceptual image coder that discriminates between image components based on their perceptual relevance to achieve increased performance in terms of quality and bit rate. The new coder is based on a locally adaptive perceptual quantization scheme for compressing the visual data. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion based on a subband decomposition. The derived masking thresholds are used to control the quantization stage by adapting the quantizer reconstruction levels to the local amount of masking present at each subband transform coefficient. Compared with existing non-locally adaptive perceptual quantization methods, the new locally adaptive algorithm exhibits superior performance and does not require additional side information. This is accomplished by estimating the amount of available masking from the already quantized data and linearly predicting the coefficient under consideration. By virtue of the local adaptation, the proposed quantization scheme removes a large amount of perceptually redundant information. Since the algorithm requires no additional side information, it yields a low-entropy representation of the image and is well suited for perceptually lossless image compression.

4.
In this paper, two modified spread transform dither modulation (MSTDM) algorithms are proposed based on a combination of the discrete wavelet transform and the discrete cosine transform (abbreviated DD). They are MSTDM-CO-DD, which exploits the correlation between adjacent blocks, and MSTDM-PCO-DD, which exploits that correlation after a pretreatment of the adjacent sub-blocks before embedding. In both algorithms, we first apply a wavelet decomposition to the image and divide the low-frequency sub-band into blocks. MSTDM-CO-DD obtains the projection vectors and quantization steps for each sub-block from the previous sub-block, exploiting the correlation between adjacent blocks; this resolves the mismatch of projection vectors and quantization steps between embedding and detection. MSTDM-PCO-DD additionally exchanges the coefficients of the previous and the following sub-block before embedding, which makes it more robust. In both algorithms the quantization step varies adaptively with the brightness of the carrier image, based on a modified Watson model. Numerical simulations show that the proposed algorithms are robust to many attacks. Moreover, their performance has been analyzed within the framework of quantized projection methods.

5.
Universal trellis coded quantization
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.
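To make the trellis-coded quantization machinery concrete, here is a minimal 4-state TCQ encoder over uniform subset quantizers. This is a generic textbook-style sketch, not the UTCQ codec itself; the trellis labeling and `delta` are illustrative:

```python
import numpy as np

# From each state, the two branches (for input bit 0/1) give
# (next_state, subset_index). Subsets partition a uniform lattice:
# D_j = {delta * (4k + j) : k integer}.
TRELLIS = {
    0: [(0, 0), (1, 2)],
    1: [(2, 1), (3, 3)],
    2: [(0, 2), (1, 0)],
    3: [(2, 3), (3, 1)],
}

def nearest_in_subset(x, j, delta):
    """Nearest point of subset D_j = {delta*(4k + j)} to x."""
    k = np.round((x / delta - j) / 4.0)
    return delta * (4 * k + j)

def tcq_encode(samples, delta=1.0):
    """Viterbi search over the trellis for the minimum-MSE sequence of
    subset points; returns the quantized reconstruction values."""
    INF = float("inf")
    cost = {0: 0.0}        # start in state 0
    paths = {0: []}
    for x in samples:
        new_cost, new_paths = {}, {}
        for s, c in cost.items():
            for ns, j in TRELLIS[s]:
                q = nearest_in_subset(x, j, delta)
                cc = c + (x - q) ** 2
                if cc < new_cost.get(ns, INF):
                    new_cost[ns] = cc
                    new_paths[ns] = paths[s] + [q]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)
    return paths[best]
```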

6.
In the present work, a novel image watermarking algorithm using a vector quantization (VQ) approach is presented for digital image authentication. Watermarks are embedded in two successive stages for image integrity verification and authentication. In the first stage, a key-based approach embeds a robust zero-level watermark using properties of the indices of the vector-quantized image. In the second stage, a semifragile watermark is embedded using the modified index key based (MIKB) method. Random keys are used to improve the integrity and security of the designed system. Further, to classify an attack quantitatively as acceptable or malicious, a pixel-neighbourhood clustering approach is introduced. The proposed approach is evaluated on 250 standard test images using performance measures such as peak signal-to-noise ratio (PSNR) and normalized Hamming similarity (NHS). The experimental results show that the proposed approach achieves an average false positive rate of 0.00024 and an average false negative rate of 0.0012. Further, the average PSNR and tamper detection/localization accuracy of the watermarked image are 42 dB and 99.8%, respectively, while the tamper localization sensitivity is very high. The proposed model is found to be robust to common content-preserving attacks while fragile to content-altering attacks.

7.
A hierarchical image coding algorithm based on sub-band coding and adaptive block-size multistage vector quantization (VQ) is proposed, and its coding performance is examined for super high definition (SHD) images. First, the concept of SHD images is briefly described. Next, the signal power spectrum is evaluated, and the sub-band analysis pattern is determined from its characteristics. Several quadrature mirror filters are examined from the viewpoints of reconstruction accuracy, coding gain, and low-pass signal quality, and an optimum filter is selected for the sub-band analysis. A two-stage VQ using adaptive bit allocation is also introduced to control quantization accuracy and to achieve high-quality image reproduction. Coding performance and hierarchical image reconstruction are demonstrated using SNR figures and photographs.

8.
Watermarking in the Joint Photographic Experts Group (JPEG)2000 coding pipeline is investigated in this paper. A joint quantization and watermarking method based on trellis-coded quantization (TCQ) is proposed to reliably embed data during the quantization stage of the JPEG2000 part 2 codec. The central contribution of this work is the use of a single quantization module to jointly perform quantization and watermark embedding at the same time. The TCQ-based watermarking technique allows embedding the watermark in the detail sub-bands of one or more resolution levels except the first one. Watermark recovery is performed after image decompression. The performance of this joint scheme in terms of image quality and robustness against common image attacks was estimated on real images.

9.
In visual perception, humans perceive only a discrete set of quality levels over a wide range of coding bitrates; that is, videos compressed with a series of quantization parameters (QPs) exhibit only a limited number of perceived quality levels. In this paper, perceptual quantization is cast as the problem of determining the just-perceived QP for each quality level, and a just noticeable coding distortion (JNCD) based perceptual quantization scheme is proposed. Specifically, multiple visual masking effects are analyzed, and a linear regression (LR) based JNCD model is first proposed to predict the JNCD thresholds for all quality levels. From this prediction model, frame-level perceptual QPs for all quality levels are derived on the premise that coding distortions approach the predicted JNCD thresholds as closely as possible. Based on the predicted frame-level perceptual QPs, the perceived QPs of all quality levels for each coding unit (CU) are finally determined by a perceptual modulation function. Experimental results show that the proposed quality-wise perceptual quantization scheme significantly outperforms existing perceptual video coding algorithms, i.e., it saves more bitrate at better quality.
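One way to read the QP-derivation step: predict a JNCD threshold from frame features with a linear model, then pick the largest QP whose predicted distortion stays below it. The sketch below assumes hypothetical `features`, learned `weights`, and a caller-supplied `distortion_of_qp` model; none of these names come from the paper:

```python
import numpy as np

def perceptual_qp(features, weights, distortion_of_qp, qps=range(0, 52)):
    """Pick the largest QP whose predicted coding distortion does not
    exceed the LR-predicted JNCD threshold, i.e. the largest QP at
    which the distortion should still be imperceptible."""
    jncd = float(np.dot(weights, features))   # linear JNCD prediction
    best = min(qps)
    for qp in sorted(qps):
        if distortion_of_qp(qp) <= jncd:
            best = qp                         # still below the threshold
    return best
```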

10.
Design of an adaptive quantizer for MPEG-2 video coding
Building on a study of the adaptive quantization strategy recommended in MPEG-2 TM5, this paper designs a new adaptive quantizer. The local visual activity of a macroblock is analyzed on a per-block basis, and the adaptive visual quantization factor is finally determined by jointly evaluating the visual activity of each block within the macroblock. Experimental results show that the proposed adaptive quantizer distributes subjective coding distortion evenly across the image and improves image quality, in particular reducing blocking artifacts in flat regions and the distortion of strong edges in flat regions.
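For reference, TM5's own activity-based modulation, which this paper refines with per-block evaluation, can be sketched as follows. Here `avg_act` stands for the running average activity over the previous frame, and the final formula maps activity into a factor in [0.5, 2] that scales the base quantizer:

```python
import numpy as np

def tm5_activity_factor(macroblock, avg_act):
    """TM5-style spatial activity: 1 + the minimum variance over the four
    8x8 luminance blocks of a 16x16 macroblock, normalized against the
    frame-average activity and mapped into [0.5, 2.0]."""
    blocks = (macroblock[i:i + 8, j:j + 8] for i in (0, 8) for j in (0, 8))
    act = 1.0 + min(float(np.var(b)) for b in blocks)
    return (2.0 * act + avg_act) / (act + 2.0 * avg_act)

# mquant = base_quantizer * tm5_activity_factor(mb, avg_act)
```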

11.
We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.
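The probabilistic quantization step can be illustrated with a simple randomized rounding rule. This is a generic sketch, not the paper's exact quantizer; the key-seeded RNG stands in for the secret randomness:

```python
import numpy as np

def probabilistic_quantize(values, delta, key):
    """Randomized rounding: a value at fractional position p inside its
    quantization cell rounds up with probability p, so the quantized
    output is key-dependent and hard for an adversary to anticipate."""
    rng = np.random.default_rng(key)          # secret key seeds the RNG
    v = np.asarray(values, dtype=float) / delta
    lo = np.floor(v)
    return (lo + (rng.random(v.shape) < (v - lo))).astype(int)
```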

12.
Adaptive quantization proves to be an effective tool for improving coding performance. In this paper, we propose an adaptive spatiotemporal perception-aware quantization algorithm to increase subjective coding performance. To measure spatiotemporal perceptual redundancy, perceptual complexity models are first established from spatial and temporal characteristics, respectively. With the help of these models, adaptive spatial and temporal quantization parameter (QP) offsets are then calculated for each coding tree unit (CTU). Finally, the perceptually optimal Lagrange multiplier of each CTU is determined from the spatial-temporal QP offset. Experimental results show that the proposed algorithm reduces the Bjontegaard-Delta Rate (BD-Rate), measured with the Structural Similarity Index Metric (SSIM), by 8.6% and 8.4% on average over the second-generation Audio Video coding Standard (AVS2) reference software RD17.0 in the Low-Delay-P (LDP) and Random-Access (RA) configurations, respectively. Subjective assessment confirms that the proposed algorithm significantly reduces bitrate at the same subjective quality.

13.

Communication fields are growing rapidly in the recent era, so transmitting multimedia content through an open channel has become a challenging task. Multimedia content transmitted through such a channel is highly prone to vulnerabilities and attacks, so secure and efficient data communication is a major concern in multimedia communication systems, and considerable research effort has gone into safeguarding the originality of each image. In a conventional system, secure image communication is achieved by compressing the content first and then encrypting the compressed data. Although this meets the required security and compression ratio, some applications may require the reverse order, in which encryption is performed prior to compression to improve the privacy of user data; the initial emphasis is then on content privacy rather than on size reduction. This paper proposes such a reversed system, which uses a block-based perceptual encryption algorithm for encryption and vector quantization (VQ) with a hybrid Linde-Buzo-Gray (LBG)-Adaptive Deer Hunting Optimization (ADHO) algorithm (VQ-LBG-ADHO) for compression, thereby improving content secrecy. The adaptive optimization enhances the performance of VQ in the compression stage. Because the method concentrates on secure communication, the reverse process is followed; it not only improves image secrecy but also enhances image quality. The performance of this secure communication process is compared with state-of-the-art algorithms, and the results reveal that the proposed method outperforms the existing methods.

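For context, plain LBG codebook training, the VQ baseline that the ADHO step then tunes, looks roughly like this. The split factor and iteration count are illustrative, `size` is assumed to be a power of two, and the deer-hunting optimization itself is not reproduced:

```python
import numpy as np

def lbg_codebook(vectors, size, iters=10, eps=1e-2):
    """Generalized Lloyd / LBG training: start from the global centroid,
    repeatedly split every codeword, then refine with nearest-neighbour
    assignment and centroid updates."""
    vectors = np.asarray(vectors, dtype=float)
    cb = np.mean(vectors, axis=0, keepdims=True)
    while cb.shape[0] < size:
        cb = np.concatenate([cb * (1.0 + eps), cb * (1.0 - eps)])  # split
        for _ in range(iters):
            # assign each training vector to its nearest codeword
            d = np.linalg.norm(vectors[:, None, :] - cb[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(cb.shape[0]):
                members = vectors[nearest == k]
                if members.size:                   # skip empty cells
                    cb[k] = members.mean(axis=0)
    return cb
```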

14.
In this paper, an adaptive in-loop quantization technique is proposed for quantizing inner product coefficients in matching pursuit (MP). For each MP stage, a different quantizer is used based on the probability distribution of the MP coefficients at that stage, and the quantizers are optimized for a given rate-budget constraint. Additionally, the proposed adaptive quantization scheme finds the optimal quantizer for each stage based on the already encoded inner product coefficients. Experimental results show that the proposed adaptive quantization scheme outperforms existing quantization methods used in matching pursuit image coding.
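The in-loop idea, quantizing each inner product as soon as it is selected and subtracting the quantized atom so later stages see the same residual the decoder will reconstruct, can be sketched as below. Dictionary rows are assumed unit-norm, and `quantizers` is a hypothetical list of per-stage scalar quantizers:

```python
import numpy as np

def mp_inloop(signal, dictionary, quantizers):
    """Matching pursuit with in-loop quantization: the *quantized*
    coefficient is fed back into the residual, keeping encoder and
    decoder synchronized. `dictionary` rows must be unit-norm atoms."""
    residual = np.asarray(signal, dtype=float).copy()
    selection = []
    for quantize in quantizers:              # one quantizer per MP stage
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        coeff = quantize(corr[k])            # stage-specific scalar quantizer
        residual -= coeff * dictionary[k]
        selection.append((k, coeff))
    return selection, residual
```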

15.
Previous adaptive JPEG steganography algorithms mostly calculate image distortion before embedding the secret information, so they cannot dynamically adjust distortion costs. Considering the mutual impacts between modifications during embedding, an adaptive JPEG steganography algorithm based on distortion cost updating is proposed. First, three factors that affect embedding fluctuations are analyzed: the quantization step, the absolute value of the quantized DCT coefficient to be modified, and the perturbation error. Then an embedding update strategy (EUS) is proposed that dynamically updates the distortion costs, and an adaptive JPEG steganography algorithm is implemented combining this strategy. The experimental results illustrate that the algorithm significantly improves the security performance of JPEG steganography.

16.
In this paper, a wavelet-based watermarking scheme for color images is proposed. The scheme is built on a color visual model obtained by modifying a perceptual model used in the coding of gray-scale images. The model estimates the noise detection threshold of each wavelet coefficient in the luminance and chrominance components of a color image, in order to satisfy the transparency and robustness required of color image watermarking. The noise detection thresholds of coefficients in each color component are derived in a locally adaptive fashion based on the wavelet decomposition; from these, perceptually significant coefficients are selected and a perceptually lossless quantization matrix is constructed for embedding watermarks. Robustness and transparency are obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme is more robust than the existing scheme while retaining watermark transparency.

17.
In order to reduce the blocking artifact in the Joint Photographic Experts Group (JPEG)-compressed images, a new noniterative postprocessing algorithm is proposed. The algorithm consists of a two-step operation: low-pass filtering and then predicting. Predicting the original image from the low-pass filtered image is performed by using the predictors, which are constructed based on a broken line regression model. The constructed predictor is a generalized version of the projector onto the quantization constraint set, or the narrow quantization constraint set. We employed different predictors depending on the frequency components in the discrete cosine transform (DCT) domain since each component has different statistical properties. Further, by using a simple classifier, we adaptively applied the predictors depending on the local variance of the DCT block. This adaptation enables an appropriate blurring depending on the smooth or detail region, and shows improved performance in terms of the average distortion and the perceptual view. For the major-edge DCT blocks, which usually suffer from the ringing artifact, the quality of fit to the regression model is usually not good. By making a modification of the regression model for such DCT blocks, we can also obtain a good perceptual view. The proposed algorithm does not employ any sophisticated edge-oriented classifiers and nonlinear filters. Compared to the previously proposed algorithms, the proposed algorithm provides comparable or better results with less computational complexity.

18.
In order to reduce the time complexity and improve the reconstruction performance of the traditional method for computing Radial Harmonic Fourier Moments (RHFMs), we introduce a fast and precise FFT-based method, and based on it this paper proposes a novel image watermarking algorithm that is robust to geometric attacks. We first compute the RHFMs of the original image using the proposed method and select the robust RHFMs that are suitable for watermark embedding. The watermark is then embedded by quantization-based modification of the RHFM magnitudes. In the decoder, the watermark can be extracted from the RHFM magnitudes directly, without the original image. Simulation results show that the proposed algorithm provides excellent watermark invisibility and is effectively resilient to geometric attacks and common image-processing attacks.

19.
This paper first analyzes the block adaptive quantization (BAQ) algorithm commonly used for synthetic aperture radar (SAR) raw data, and then discusses in detail two transform-domain coding algorithms based on block adaptive quantization: FFT-based block adaptive quantization (FFT-BAQ) and wavelet-transform-based block adaptive quantization (WT-BAQ). Images reconstructed from data compressed with these two algorithms are analyzed and compared against images obtained with plain BAQ; the results show that transform-domain coding improves the compression performance for SAR raw data.
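Plain BAQ, the baseline against which the FFT-BAQ and WT-BAQ variants are compared, reduces to per-block gain estimation plus a fixed Gaussian Lloyd-Max quantizer. A minimal 2-bit sketch (the level/threshold values are the standard 4-level Gaussian Lloyd-Max constants):

```python
import numpy as np

# 2-bit Lloyd-Max quantizer for a unit-variance Gaussian source
LEVELS = np.array([-1.510, -0.4528, 0.4528, 1.510])
EDGES = np.array([-0.9816, 0.0, 0.9816])

def baq_2bit(block):
    """Block adaptive quantization: normalize the block by its estimated
    standard deviation, then quantize with the fixed Gaussian codebook.
    Only the 2-bit indices plus one sigma per block need transmitting."""
    sigma = float(block.std()) + 1e-12
    idx = np.digitize(block.ravel() / sigma, EDGES)
    recon = (LEVELS[idx] * sigma).reshape(block.shape)
    return idx, sigma, recon
```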

20.
Low-density parity-check (LDPC) decoders for the DVB-S2 standard face pressing demands for low complexity, high flexibility, and wide applicability in deep-space communication. Through a study of the quantization structure in LDPC decoding algorithms, a design method for a dynamically adaptive quantization structure is proposed. Building on conventional uniform hardware quantization, the method introduces a dynamically adaptive quantization structure for message initialization and iterative decoding in the modified Min-Sum decoding algorithm, which resolves the complexity imbalance between check-node and variable-node operations when decoding DVB-S2 LDPC codes and thereby improves decoder performance. Experiments show that, taking the DVB-S2 LDPC code with code length 16 200 and rate 1/2 as an example, the dynamically adaptive quantization structure saves 4% of hardware resources compared with the conventional uniform quantization structure. Furthermore, the dynamically adaptive structure supports dynamic configurability, which guarantees the flexibility and general applicability of the DVB-S2 LDPC decoder.
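For orientation, the check-node half of the modified (normalized) Min-Sum update, whose magnitudes are what the adaptive quantization structure must represent, is sketched below; the scaling factor 0.75 is a typical choice, not the paper's:

```python
import numpy as np

def check_node_update(msgs, alpha=0.75):
    """Normalized Min-Sum check-node update: each outgoing message is the
    product of the signs of the other incoming messages, times alpha,
    times the minimum of their magnitudes."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(msgs.size):
        others = np.delete(msgs, i)   # all incoming messages except msg i
        out[i] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out
```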
