Similar Literature (20 results)
1.
This paper presents a new lossy image compression technique which uses singular value decomposition (SVD) and wavelet difference reduction (WDR). The two techniques are combined so that the SVD stage boosts the performance of the WDR stage: SVD compression offers very high image quality but low compression ratios, whereas WDR offers high compression ratios. In the proposed technique, an input image is first compressed using SVD and then compressed again using WDR; the WDR stage is also used to reach the required overall compression ratio. The proposed technique was tested on several test images and the results compared with those of WDR and JPEG2000. The quantitative and visual results show the superiority of the proposed technique over both. At a compression ratio of 80:1 on Goldhill, the proposed technique achieves a PSNR of 33.37 dB, which is 5.68 dB and 5.65 dB higher than the JPEG2000 and WDR techniques respectively.
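As a minimal illustration of the SVD stage alone (not the authors' full SVD+WDR pipeline), the sketch below keeps only the top k singular values of a grayscale image; img and k are assumed inputs.

    import numpy as np

    def svd_compress(img, k):
        # Rank-k approximation of a grayscale image via truncated SVD.
        U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
        # Keep only the k largest singular values and their vectors.
        approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
        return np.clip(approx, 0, 255)

    def psnr(ref, test):
        mse = np.mean((ref.astype(np.float64) - test) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

Storing U[:, :k], s[:k] and Vt[:k, :] costs k(m+n+1) values instead of m*n, which is why SVD alone gives modest ratios and why the paper feeds this rank-reduced image to WDR for further compression.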

2.
A reversible data hiding algorithm in encrypted images based on adaptive Huffman coding
With the development of cloud storage and privacy protection, reversible data hiding in encrypted images, a technique that can embed secret information into ciphertext, guarantee error-free extraction of the embedded information, and losslessly recover the original plaintext image, has attracted increasing attention. This paper proposes a reversible data hiding algorithm for encrypted images based on adaptive Huffman coding, which applies different Huffman codewords to different images to vacate room for embedding secret information. First, the correlation between adjacent pixels of natural images is used to predict the pixel values of the original plaintext image, and the bits on which the original and predicted pixel values agree, from the most significant bit down to the least significant bit, are marked with adaptive Huffman codes. The original plaintext image is then encrypted with a stream cipher. Finally, the secret information is adaptively embedded in the vacated room by bit substitution. Because Huffman encoding and decoding are reversible, a legitimate receiver can separately and losslessly recover the original plaintext image and extract the secret information. Experimental results show that, compared with several existing methods, the proposed method offers better security and a higher embedding rate: on the BOSSBase, BOWS-2 and UCID datasets the average embedding rate exceeds that of the MPHC algorithm by 0.09 bpp, 0.062 bpp and 0.06 bpp respectively, by as much as 0.958 bpp, 0.797 bpp and 0.320 bpp in the best case, and still by 0.01 bpp, 0.039 bpp and 0.061 bpp in the worst case.
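As a rough sketch of the room-vacating idea (the paper's predictor and adaptive code assignment are more elaborate), the snippet below counts, for each pixel, how many leading bits agree with a simple left-neighbor prediction; those bits are recoverable from the predictor, so their positions can carry payload, and the per-pixel label (0 to 8) is what gets Huffman-coded.

    import numpy as np

    def matching_msb_count(pixel, pred):
        # Number of identical leading bits (MSB first) of two 8-bit values;
        # these bits need not be stored: the decoder can re-predict them.
        diff = pixel ^ pred
        return 8 if diff == 0 else 8 - diff.bit_length()

    row = np.array([100, 102, 101, 130], dtype=np.uint8)   # toy pixel row
    labels = [matching_msb_count(int(row[i]), int(row[i - 1]))
              for i in range(1, len(row))]
    print(labels)   # [6, 6, 0] -> labels to be adaptively Huffman-coded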

3.
Considering the characteristics of satellite images and the problems they currently pose for transmission and storage, a two-stage lossless compression algorithm for satellite images based on sparse representation is proposed. The first stage of compression is completed by transmitting the sparse coefficients obtained from sparse representation instead of the image itself. The nonzero sparse coefficients are preprocessed and then clustered, and the positions of the original nonzero coefficients are sorted according to the cluster indices. Finally, the processed nonzero coefficients and the position data are partitioned into blocks; the coefficient blocks are encoded with an improved adaptive Huffman algorithm, and the position blocks are encoded with differential coding combined with the improved adaptive Huffman algorithm, completing the second stage of compression. Experimental results show that the proposed algorithm has clear advantages over traditional algorithms: its compression rate is one third to one half that of traditional algorithms, and it achieves high-ratio lossless compression and high-resolution reconstruction of satellite images at the same time.
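A small sketch of the position-coding step, on assumed toy data: after the nonzero-coefficient positions are sorted, differential coding turns large indices into small, repetitive gaps that the improved adaptive Huffman coder can compress well.

    import numpy as np

    positions = np.array([3, 17, 18, 42, 43, 44, 90])  # sorted nonzero indices
    gaps = np.diff(positions, prepend=0)               # differential (delta) code
    print(gaps)                 # [ 3 14  1 24  1  1 46] -- small symbols
    restored = np.cumsum(gaps)  # lossless inverse
    assert np.array_equal(restored, positions)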

4.
To address the small embedding capacity of current reversible data hiding algorithms in the encrypted domain, a high-capacity separable reversible data hiding algorithm for encrypted images based on dual coding of prediction errors is proposed. First, to reserve room for the secret information, the image owner preprocesses the image with prediction-error-based Huffman coding and extended run-length coding, and then encrypts it. The data hider embeds the secret information into the encrypted image. The receiver can extract the secret information without error using the data hiding key and can losslessly recover the image using the decryption key; the two operations can be performed in either order. Experimental results show that the dual coding of prediction errors effectively increases the embedding capacity.

5.
A. M., Sabah M. Digital Signal Processing, 2003, 13(4): 604-622
This paper describes a new algorithm for electrocardiogram (ECG) compression. The main goal of the algorithm is to reduce the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level. It is based on compressing the linearly predicted residuals of the wavelet coefficients of the signal: the input signal is divided into blocks, each block goes through a discrete wavelet transform, and the resulting wavelet coefficients are linearly predicted, yielding a set of uncorrelated transform-domain signals. These signals are compressed using various coding methods, including modified run-length and Huffman coding techniques. The error between the wavelet coefficients and the predicted coefficients is minimized in order to obtain the best predictor. The method is assessed using the percent root-mean-square difference (PRD) and visual inspection. The compression method achieves small PRD and high compression ratios with low implementation complexity. Finally, the performance of the ECG compression algorithm is evaluated and compared on data from the MIT-BIH database.
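A compact sketch of the transform-plus-prediction idea using PyWavelets; the wavelet family, block length and one-tap predictor are assumptions for illustration, not the paper's exact choices.

    import numpy as np
    import pywt

    def predicted_residuals(block, wavelet="db4", level=4):
        # DWT the block, then linearly predict each coefficient subband
        # and return the decorrelated residuals for entropy coding.
        residuals = []
        for c in pywt.wavedec(block, wavelet, level=level):
            past, cur = c[:-1], c[1:]
            # Least-squares one-tap predictor a, minimizing ||cur - a*past||^2.
            a = np.dot(past, cur) / max(np.dot(past, past), 1e-12)
            residuals.append(np.concatenate(([c[0]], cur - a * past)))
        return residuals  # run-length/Huffman-code these

    block = np.random.randn(1024)   # placeholder for a real ECG block
    res = predicted_residuals(block)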

6.
Digital media are often handled in compressed and encrypted form in Digital Asset Management Systems, and watermarking of the compressed-encrypted media items in the compressed-encrypted domain itself is sometimes required for copyright violation detection or other purposes. In this paper, we propose a robust image watermarking technique for partially compressed-encrypted JPEG images. Arbitrarily embedding a watermark in a partially compressed-encrypted image can drastically degrade quality, since the underlying change may produce random decrypted values; in addition, the encryption can make compression efficiency very low. The challenge is therefore to design a watermarking technique that provides good watermarked image quality while maintaining good compression efficiency. While the proposed technique embeds the watermark in the partially compressed-encrypted domain, the watermark can be extracted in either the encrypted or the decrypted domain. Experiments show that the watermarked image quality is good and the reduction in compression efficiency is low. The proposed technique is robust to common signal processing attacks, and its watermark detection performance is better than that of existing encrypted-domain watermarking techniques.

7.
In spite of great advancements in multimedia data storage and communication technologies, compression of medical data remains challenging. This paper presents a novel method for the compression of medical images. The proposed method uses the Ripplet transform to represent singularities along arbitrarily shaped curves, and the Set Partitioning in Hierarchical Trees encoder to encode the significant coefficients. The main objective is to provide high-quality compressed images by representing images at different scales and directions, and to achieve a high compression ratio. Experimental results on a set of medical images demonstrate that, besides providing multiresolution and high directionality, the proposed method attains a high peak signal-to-noise ratio and a significant compression ratio compared with conventional and state-of-the-art compression methods.

8.
A face feature extraction and recognition algorithm based on block-wise wavelet transform and singular value threshold compression is proposed. The method first applies a block-wise wavelet transform to the face image and selects different frequency components according to the position of each image block, then applies singular value threshold compression and feature fusion to those components. Finally, a nearest-neighbor classifier is used on the ORL face database to classify the resulting features, verifying the effectiveness of the algorithm.

9.
To obtain a satisfactory image compression ratio together with a clear compressed image, singular value decomposition (SVD) is used as the principle for compressing the data matrix. The principle of SVD and of SVD-based image compression is analyzed in detail, and two methods for choosing the number of retained eigenvalues are proposed: a threshold on the proportion of the number of eigenvalues, and a threshold on the proportion of the eigenvalue sum. Experiments show that with a count-proportion threshold of 0.1 the image is clear and the compression ratio reaches 5.99; with a sum-proportion threshold of 0.85 the image...
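The sum-proportion rule in the abstract translates directly into code: the sketch below picks the smallest k whose singular values account for at least the threshold fraction (0.85 in the reported case) of the singular-value sum.

    import numpy as np

    def rank_for_energy(img, ratio=0.85):
        # Smallest k such that the top-k singular values reach the
        # requested fraction of the total singular-value sum.
        s = np.linalg.svd(img.astype(np.float64), compute_uv=False)
        cum = np.cumsum(s) / s.sum()
        return int(np.searchsorted(cum, ratio) + 1)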

10.

Images comprise not only photographic content but also graphics and text; such compound images are found in magazines, brochures and websites. Compound images (for instance, computer-generated images and scanned documents) are difficult to segment and compress, and existing segmentation and compression techniques do not provide a complete solution. To address this, we segment compound images using an optimization-based K-means clustering technique together with an AC (alternating current) coefficient method for dynamic segmentation, and then compress each segment individually. The AC-coefficient-based segmentation separates smooth (background) areas from non-smooth (text, image and overlapping) areas; the non-smooth part is then further segmented by the optimization-based K-means clustering. The segmented objects are compressed with different strategies, such as Huffman, arithmetic and JPEG coders. The entire proposed architecture is implemented in MATLAB and its performance is measured and compared with existing approaches. Our proposed system achieves a better compression ratio (21.16) and improved image quality index (0.931574), PSNR (peak signal-to-noise ratio, 34.91338), RMSE (root mean square error, 0.931574), SSIM (structural similarity, 0.546882) and SDME (second derivative-like measure of enhancement, 44.91293) than the available CS K-means algorithm.
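A rough sketch of the smooth/non-smooth split, under the assumption of 8x8 block DCTs and a hand-picked threshold: blocks with low AC energy are treated as smooth background, and only the remaining blocks would go on to the optimization-based K-means stage.

    import numpy as np
    from scipy.fftpack import dct

    def block_is_smooth(block, thresh=50.0):
        # Classify an 8x8 block by the energy of its AC (non-DC) coefficients.
        d = dct(dct(block.astype(np.float64), axis=0, norm="ortho"),
                axis=1, norm="ortho")
        ac_energy = np.abs(d).sum() - abs(d[0, 0])
        return ac_energy < thresh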


11.
A HDWT-based reversible data hiding method
This paper presents a reversible data hiding method that provides a high payload and high stego-image quality. The proposed method transforms a spatial-domain cover image into the frequency domain using the Haar digital wavelet transform (HDWT), compresses the coefficients of the high-frequency band with Huffman (or arithmetic) coding, and then embeds the compressed data and the secret data in the high-frequency band. Since the high-frequency band carries less energy than the other bands of an image, it can be exploited to carry secret data. Furthermore, the Huffman (or arithmetic) coding allows the cover image to be recovered without any distortion. The proposed method is simple, and the experimental results show that it achieves a high hiding capacity with high stego-image quality.
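Reversibility hinges on an integer-to-integer Haar transform; below is a minimal one-level lifting sketch (the S-transform) whose inverse restores pixels exactly, which is what lets the cover image be recovered without distortion.

    import numpy as np

    def haar_forward(x):
        # One-level integer Haar (S-transform): exact integer inverse exists.
        a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        low = (a + b) >> 1          # floor of the pairwise average
        high = a - b                # difference band carries little energy
        return low, high

    def haar_inverse(low, high):
        a = low + ((high + 1) >> 1)
        b = a - high
        x = np.empty(a.size + b.size, dtype=np.int64)
        x[0::2], x[1::2] = a, b
        return x

    x = np.array([100, 103, 97, 97, 255, 0])
    lo, hi = haar_forward(x)
    assert np.array_equal(haar_inverse(lo, hi), x)   # lossless round trip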

12.
Medical image compression based on fast sparse representation
As the volume of digital medical image data grows, image compression techniques are needed for compressed storage. A medical image compression method based on fast sparse representation is therefore proposed. An overcomplete dictionary for medical images is constructed with the K-SVD algorithm, and sparse coding is performed with the Batch Orthogonal Matching Pursuit (Batch-OMP) algorithm. The method only needs to store the coefficients and positions of the nonzero entries of the sparse code; the original medical image can then be reconstructed from the overcomplete dictionary. Experimental results show that the method speeds up sparse coding by about 40% compared with Orthogonal Matching Pursuit (OMP), and its reconstruction quality is better than that of the Joint Photographic Experts Group (JPEG) and Set Partitioning in Hierarchical Trees (SPIHT) compression algorithms, with an average peak signal-to-noise ratio 18% higher than JPEG and 50% higher than SPIHT.
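A hedged sketch of the coding step, with scikit-learn's OMP solver standing in for Batch-OMP and a random dictionary standing in for the K-SVD-trained one; only the nonzero coefficients and their positions would be stored.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))       # stand-in for a K-SVD dictionary
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    patch = rng.standard_normal(64)          # stand-in for an 8x8 image patch

    code = orthogonal_mp(D, patch, n_nonzero_coefs=8)
    idx = np.flatnonzero(code)               # positions to store
    vals = code[idx]                         # coefficients to store
    recon = D @ code                         # decoder-side reconstruction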

13.
Ling. Digital Signal Processing, 2007, 17(6): 1065-1070
In this paper, an adaptive learning technique based on the classification of an activity measure is proposed for reducing compression artifacts. All coding artifacts are treated as digital noise and removed in a unified framework. Pixels are explicitly classified as object detail or as one of various coding artifacts using a combination of local entropy and dynamic range. A least-mean-square optimization is applied to a training set composed of the original images and their compressed versions. The optimal filter coefficients for each class are obtained by statistically minimizing the mean square error (MSE) between the original pixels and the filtered outputs of the compressed apertures. Evaluation results show that the proposed algorithm outperforms more expensive state-of-the-art structure-based methods.
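A small sketch of the two activity measures named above, computed over a local 8-bit grayscale window; the 256-bin histogram and the window size are assumptions.

    import numpy as np

    def activity_measures(win):
        # Local entropy and dynamic range of an 8-bit grayscale window.
        hist = np.bincount(win.ravel(), minlength=256)
        p = hist[hist > 0] / win.size
        entropy = -np.sum(p * np.log2(p))
        dyn_range = int(win.max()) - int(win.min())
        return entropy, dyn_range  # threshold these to choose a filter class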

14.
Database machines are special-purpose backend architectures designed to support database management system operations efficiently. An important problem in their development has been increasing their performance. Earlier research on the performance evaluation of database machines indicated that I/O operations constitute a principal performance bottleneck, increasingly so with advances in multiprocessing and growth in the volume of data handled by a database machine. One possible strategy to improve the performance of a system handling huge volumes of data is to store the data in compressed form. This can be achieved by introducing VLSI chips for data compression so that data can be compressed and decompressed "on-the-fly". A set of hardware algorithms for data compression based on the Huffman coding scheme proposed in an earlier work is described. The main focus of this paper is the authors' investigation of the effect of incorporating such hardware in a special-purpose backend relational database machine. Detailed analytical models of a relational database machine are presented, along with analytical results that quantify the performance improvement due to the compression hardware.

15.
Because of the ever-growing use of memory, data compression is an evergreen research topic. Recognizing the constant demand for compression algorithms, this article presents a compression algorithm for analysing digital VLSI circuits under constraints such as test data volume, switching power, chip area overhead and testing speed. It proposes a new power-transition X-filling based selective Huffman encoding technique, which achieves better data compression, switching power reduction, chip area reduction and testing speed. The performance of the proposed work is examined on the ISCAS benchmark circuits. First, the don't-care bits of the test set are replaced using the power-transition X-filling technique; the filled test set is then encoded with the selective Huffman encoding technique. The experimental results show that the proposed technique gives effective results compared with related data compression techniques, with minimal time and memory consumption.
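As a simple sketch of the filling step (one common transition-minimizing rule; the article's exact rule may differ), each don't-care bit is copied from the previous filled bit so that no extra 0-to-1 or 1-to-0 transitions are introduced during scan-in.

    def fill_x_min_transitions(cube):
        # Replace 'X' bits so the filled vector adds no new transitions.
        out, last = [], "0"          # assume the scan chain starts at 0
        for bit in cube:
            bit = last if bit == "X" else bit
            out.append(bit)
            last = bit
        return "".join(out)

    print(fill_x_min_transitions("1XX0XX1X"))   # -> 11100011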

16.
A quasi-lossless compression algorithm for civilian GPS data
To improve the compression ratio and speed for positioning data within civilian GPS accuracy, the performance of Huffman coding and arithmetic coding is analyzed and compared, and predictive coding is combined with Huffman coding to produce a quasi-lossless compression algorithm for positioning information within civilian GPS accuracy. The algorithm removes redundant information through compression preprocessing and secondary quantization, and uses predictive coding to improve coding efficiency, achieving an overall compression efficiency of up to 87%. The algorithm was tested on an MSP430 microcontroller: for 668 KB of data, the compression ratio was 87.1% and the processing time 31.4 s, in good agreement with the simulation results. The experimental results show that, after optimization, the algorithm has low hardware requirements, improves compression ratio and speed, saves storage resources, and reduces communication costs during data transmission.
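A toy sketch of the prediction step on fixed-point coordinates; the 1e-5 degree quantization grid (roughly civilian GPS accuracy, about 1 m) is an assumption used for illustration.

    import numpy as np

    lats = np.array([39.90421, 39.90428, 39.90436, 39.90447])   # sample track
    q = np.round(lats / 1e-5).astype(np.int64)   # secondary quantization, ~1 m grid
    residuals = np.diff(q, prepend=0)            # predictive (delta) coding
    print(residuals)   # [3990421  7  8  11] -- small residuals, cheap to Huffman-code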

17.
This work proposes a novel lossy compression scheme for encrypted gray images. In the encryption phase, the original image is decomposed into a sub-image and several layers of prediction errors; the sub-image is encrypted with an exclusive-or operation and the prediction errors with a pseudo-random permutation. Although a channel provider knows neither the cryptographic key nor the original content, he can still effectively reduce the amount of encrypted data by quantizing the permuted prediction errors on the various layers, and an optimization method with a rate-distortion criterion can be employed to select the quantization steps. At the receiver side, with knowledge of the cryptographic key, a decoder integrating dequantization, decryption and image reconstruction retrieves the principal content of the original image from the compressed data. Experimental results show that the rate-distortion performance of the proposed scheme is significantly better than that of the previous technique.
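A minimal sketch of the provider-side step, assuming a plain uniform quantizer with a fixed step size; the paper's rate-distortion optimal per-layer step selection is not reproduced here.

    import numpy as np

    def quantize(errors, step):
        # Uniform quantization of the permuted prediction errors.
        return np.round(errors / step).astype(np.int32)

    def dequantize(q, step):
        # Receiver-side estimate used during image reconstruction.
        return q.astype(np.float64) * step

    errs = np.array([3.2, -1.1, 0.4, 7.9])
    print(dequantize(quantize(errs, 2.0), 2.0))   # [ 4. -2.  0.  8.]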

18.
In this paper a novel multiresolution image coding scheme based on the human visual system and on statistics is presented. It decorrelates the input image into a number of subbands using a lifting-based wavelet transform. The codec employs a novel statistical encoding algorithm to code the coefficients in the detail subbands. Perceptual weights are applied to regulate the threshold value of each detail subband required in the statistical encoding process, while the baseband coefficients are coded losslessly. An extension of the codec to progressive transmission of images is also developed. To evaluate the coding scheme, it was applied to a number of test images and its performance evaluated with and without perceptual weights. The results indicate significant improvement in both subjective and objective quality of the reconstructed images when perceptual weights are employed. Compared with JPEG and JPEG2000, the proposed scheme outperforms both coding standards at low compression ratios, while offering satisfactory performance at higher compression ratios.
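A brief sketch of weight-regulated thresholding with PyWavelets (a stand-in for the paper's lifting-based transform); the weight values below are placeholders, not measured perceptual weights.

    import numpy as np
    import pywt

    img = np.random.rand(64, 64)                     # placeholder image
    coeffs = pywt.wavedec2(img, "bior2.2", level=2)  # 2-level biorthogonal DWT
    base_t, weights = 0.05, [1.0, 0.7]               # placeholder weight per level
    kept = [coeffs[0]]                               # baseband kept losslessly
    for w, detail in zip(weights, coeffs[1:]):
        # A larger perceptual weight lowers the threshold, keeping more detail.
        kept.append(tuple(np.where(np.abs(d) >= base_t / w, d, 0.0)
                          for d in detail))
    rec = pywt.waverec2(kept, "bior2.2")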

19.
An efficient Huffman coding algorithm without building a Huffman tree
Huffman coding, an efficient variable-length coding technique, is increasingly widely used in text, image and video compression, communications, cryptography and other fields. To use memory more efficiently and simplify the coding steps and related operations, this paper first studies the information needed to rebuild a Huffman tree and proposes obtaining that information through operations on a class of one-dimensional structure arrays. Using this information, together with the coding properties of the canonical Huffman tree introduced here, the Huffman codes can be obtained directly. Compared with the traditional Huffman algorithm and with the improved algorithms proposed in the recent literature, the method does not construct a Huffman tree, so memory requirements are greatly reduced and the coding steps and related operations are simpler, making the method easier to implement and port. More importantly, this approach offers a new direction for research on and development of the Huffman algorithm.
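The key trick, assigning codes directly from code lengths without a tree, looks roughly like this sketch of canonical Huffman assignment (the symbol order and code lengths are assumed inputs obtained elsewhere, e.g. from a pass over the paper's one-dimensional arrays).

    def canonical_codes(lengths):
        # Assign canonical Huffman codewords from code lengths alone:
        # sort by (length, symbol), then count upward in binary.
        code, prev_len, out = 0, 0, {}
        for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
            code <<= (ln - prev_len)       # move to the next code length
            out[sym] = format(code, f"0{ln}b")
            code += 1
            prev_len = ln
        return out

    print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
    # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}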

20.
Securing medical data during transmission on the network is required because the data are sensitive and life-critical. Many methods are used for protection, such as steganography, digital signatures, cryptography, and watermarking. This paper introduces a novel robust algorithm that combines discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) digital image watermarking. In the embedding process, the host image is decomposed using a two-dimensional DWT (2D-DWT) to obtain the approximation (low-frequency) sub-bands. The low-high (LH) sub-band is then decomposed by 2D-DWT into four new sub-bands, and the resulting low-high (LH1) sub-band is decomposed again into four sub-bands. Two frequency bands, high-high (HH2) and high-low (HL2), are transformed by DCT, and SVD is applied to the DCT coefficients. The strongest modified singular values (SVs) vary very little under most attacks, which is an important property of SVD watermarking. The two watermark images are encrypted using two layers of encryption, circular and chaotic techniques, to increase security. The first encrypted watermark is embedded in the S component of the DCT coefficients of HL2; the second encrypted watermark is embedded in the S component of the DCT coefficients of HH2. The suggested technique has been tested against various attacks and proven to provide excellent stability and imperceptibility.
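A stripped-down, non-blind sketch of the SVD embedding step alone: additive embedding in the singular values of a sub-band, with the DWT/DCT decomposition and the two encryption layers omitted; alpha is an assumed strength parameter.

    import numpy as np

    def embed_in_svs(band, wm, alpha=0.05):
        # Add a watermark vector to the singular values of a sub-band.
        # Assumes alpha is small enough to preserve singular-value ordering.
        U, s, Vt = np.linalg.svd(band, full_matrices=False)
        s_w = s + alpha * wm[: s.size]
        return (U * s_w) @ Vt, s          # keep original s for extraction

    def extract_from_svs(marked, s_orig, alpha=0.05):
        s_w = np.linalg.svd(marked, compute_uv=False)
        return (s_w - s_orig) / alpha     # non-blind watermark estimate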
