Similar Articles
20 similar articles found (search time: 265 ms)
1.
When processing massive data, software implementations of the Zstandard (Zstd) lossless compression algorithm struggle to meet the compression-speed requirements of certain application domains. Hardware acceleration of Zstd is an effective solution to this problem, in particular acceleration of its finite state entropy (FSE) coding. This paper therefore proposes a hardware architecture for FSE compression and decompression in Zstd. A fixed compression table is used to achieve optimal hardware acceleration; a hardware module for sequence mapping is added to reduce storage requirements and increase transfer speed; and a hardware/software co-design is adopted, with the hardware organized as a seven-stage pipeline. The design was verified on a co-verification platform combining Visual Studio and ModelSim. Experimental results show that, in a TSMC 55 nm process, the system reaches a maximum frequency of 750 MHz. Compared with the software implementation, overall compression speed improves by more than 9x and overall decompression speed by roughly 100x.
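For context, the sketch below shows the software (CPU) Zstd path that such an accelerator is meant to outperform, using the third-party `zstandard` Python binding; the payload, compression level, and timing harness are illustrative assumptions, not details from the paper.

```python
# Minimal software Zstd baseline sketch (assumes the `zstandard` package is installed).
import time
import zstandard as zstd

data = b"sensor_reading,12.5,ok\n" * 500_000   # stand-in payload; real workloads are domain data

cctx = zstd.ZstdCompressor(level=3)            # Zstd's default compression level
dctx = zstd.ZstdDecompressor()

t0 = time.perf_counter()
blob = cctx.compress(data)
t1 = time.perf_counter()
restored = dctx.decompress(blob)
t2 = time.perf_counter()

assert restored == data                        # lossless round trip
print(f"ratio={len(data) / len(blob):.2f}  "
      f"compress={len(data) / (t1 - t0) / 1e6:.0f} MB/s  "
      f"decompress={len(data) / (t2 - t1) / 1e6:.0f} MB/s")
```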

2.
Jan Wassenberg 《Software》2012,42(9):1095-1106
This report introduces a new lossless asymmetric single instruction multiple data (SIMD) codec designed for extremely efficient decompression of large satellite images. A throughput in excess of 3 GB/s allows decompression to proceed in parallel with asynchronous transfers from fast block devices such as disk arrays. This is made possible by a simple and fast SIMD entropy coder that removes leading null bits. Our main contribution is a new approach for vectorized prediction and encoding. Unlike previous approaches that treat the entropy coder as a black box, we account for its properties in the design of the predictor. The resulting compressed stream is 1.2 to 1.5 times as large as JPEG‐2000, but can be decompressed 100 times as quickly – even faster than copying uncompressed data in memory. Applications include streaming decompression for out-of-core visualization. To the best of our knowledge, this is the first entirely vectorized algorithm for lossless compression. Copyright © 2011 John Wiley & Sons, Ltd.
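As a toy illustration of the underlying idea (not the paper's vectorized implementation), the sketch below removes leading null bits from a block of small prediction residuals by storing one shared bit width per block.

```python
# Toy scalar sketch of leading-null-bit removal: each block stores only as many bits per
# sample as its largest value needs, plus a one-off bit-width header.

def encode_block(values):
    width = max(v.bit_length() for v in values) or 1   # bits needed by the largest residual
    packed = 0
    for v in values:
        packed = (packed << width) | v
    return width, len(values), packed

def decode_block(width, count, packed):
    mask = (1 << width) - 1
    return [(packed >> (width * (count - 1 - i))) & mask for i in range(count)]

residuals = [3, 0, 7, 1, 2, 5, 0, 4]            # small non-negative residuals after prediction
w, n, blob = encode_block(residuals)
assert decode_block(w, n, blob) == residuals
print(f"{n} samples stored at {w} bits each instead of a fixed 16")
```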

3.
Digital image processing (DIP) has great application value in many fields, especially in remote sensing, where it covers the acquisition, enhancement, analysis, encoding, transmission, and storage of remote sensing images. With the development of chip technology and parallel computing, various digital image processing techniques have been successfully applied on satellites to help researchers extract reliable information from remote sensing images. However, the huge volume of images generated by ultra-high-resolution optical remote sensing satellites puts great pressure on existing transmission, storage, and processing technologies. Therefore, this paper proposes a spatio-temporal compression pipeline for remote sensing images based on lossy compression methods with ultra-high compression ratios, reducing the overhead of transmitting and storing remote sensing images while maintaining the quality of the compressed images. Experimental results show that the proposed method outperforms classical image compression methods such as JPEG-2000.

4.
In the past decade, the number of mobile devices has increased significantly. These devices are in turn showing more computational capabilities. It is therefore possible to envision a near future where client applications may be deployed on these devices. There are, however, constraints that hinder this deployment, especially the limited communication bandwidth and storage space available. This paper describes the Efficient XML Data Exchange Manager (EXEM) that combines context-dependent lossy and lossless compression mechanisms used to support lightweight exchange of objects in XML format between server and client applications. The lossy compression mechanism reduces the size of XML messages by using known information about the application. The lossless compression mechanism decouples data and metadata (compression dictionary) content. We illustrate the use of EXEM with a prototype implementation of the lossless compression mechanism that shows the optimization of the available resources on the server and the mobile client. These experimental results demonstrate the efficiency of the EXEM approach for XML data exchange in the context of mobile application development.
Serhan Dagtas

5.
A Survey of Digital Image Recompression Detection
With the widespread use of digital image processing technology, image-editing software has brought convenience to work and daily life, but the social problems caused by maliciously tampered images urgently need to be addressed, which makes digital image forensics, the task of judging the authenticity and integrity of an image, particularly important. Tampering with an image inevitably involves recompression, so digital image recompression detection provides strong supporting evidence for image forensics. This paper systematically reviews research on digital image recompression detection and proposes a technical framework for it, describing in detail the forensic algorithms and ideas for detecting the compression history of losslessly compressed images, double compression of lossy-compressed images, multiple compression of lossy-compressed images, and recompression in other formats, together with a performance analysis and evaluation of existing algorithms. It then summarizes applications of recompression detection. Finally, it analyzes the open problems in digital image recompression detection and discusses future research directions.

6.
The high-energy synchrotron radiation light source now under construction is expected to produce massive amounts of raw data, of which image data from the hard X-ray experimental beamlines account for the largest share and are characterized by high resolution and high frame rates. Effective lossless compression is urgently needed to relieve storage and transmission pressure, yet existing general-purpose lossless compressors perform poorly on this type of image, while deep-learning-based lossless compressors are time-consuming. Exploiting the characteristics of synchrotron radiation images, this paper proposes a parallelizable, intelligent lossless image compression method that preserves the compression ratio. A parameter-adaptive reversible partition quantization step greatly narrows the distribution of pixel values after temporal differencing and saves more than 20% of storage space. A CNN-based spatio-temporal learning network, C-Zip, serves as the probability predictor, and overfitting the model on a per-dataset basis further improves the compression ratio. To address the time-consuming arithmetic-coding stage, arithmetic coding is replaced by quantization of probability distances, combined with deep learning for lossless coding, which increases the parallelism of the encoding process. Experimental results show that the method improves the compression ratio by 0.23 to 0.58 over traditional lossless image compressors such as PNG and FLIF, giving better compression for synchrotron radiation light source images.
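A hedged sketch of the temporal-differencing front end alone (the reversible partition quantization, C-Zip predictor, and probability-distance coding described above are not reproduced here):

```python
import numpy as np

def temporal_residuals(frames):
    """frames: (T, H, W) integer stack. Keep the first frame, store frame-to-frame
    differences for the rest; the transform is exactly invertible."""
    frames = frames.astype(np.int32)
    residuals = np.empty_like(frames)
    residuals[0] = frames[0]
    residuals[1:] = frames[1:] - frames[:-1]
    return residuals

def reconstruct(residuals):
    return np.cumsum(residuals, axis=0)

rng = np.random.default_rng(0)
stack = rng.integers(1000, 1100, size=(8, 64, 64), dtype=np.uint16)  # slowly varying toy frames
res = temporal_residuals(stack)
assert np.array_equal(reconstruct(res), stack.astype(np.int32))
print("residual range:", res[1:].min(), "to", res[1:].max())  # far narrower than the raw pixel range
```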

7.
Compression is widely used in data storage and transmission, yet because of their inherently serial nature, most existing dictionary-based compression and decompression algorithms are designed to run serially on a CPU. To explore the potential performance gains of using a graphics processing unit (GPU) for compression and decompression, this paper studies two parallel schemes on the CUDA (compute unified device architecture) platform, combining coalesced memory access with parallel assembly: dictionary-based stateless compression and dictionary-based LZW compression. Experimental results show that, compared with a traditional single-core implementation, the proposed methods significantly improve the performance of existing serial dictionary-based compression and decompression algorithms.
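For reference, a minimal serial LZW implementation of the kind the paper parallelizes; the CUDA kernels, coalesced memory access, and parallel assembly are not shown.

```python
# Serial LZW baseline: compress to integer codes with a dynamically grown dictionary.

def lzw_compress(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code      # grow the dictionary with the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

def lzw_decompress(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else w + w[:1]  # KwKwK special case
        out.append(entry)
        dictionary[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decompress(lzw_compress(sample)) == sample
```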

8.
To transmit image data at high quality and make it easier to retrieve, analyze, process, and store, a JPEG-standard still-image compression coding system was designed. The original image is compression-coded, and the high processing speed of a DSP chip is exploited to run the core algorithm so that the system meets its real-time requirements. Repeated system tests show that the compression coding system successfully converts a 31 KB BMP image into a 3 KB JPEG image, achieving 10:1 JPEG-standard still-image compression. Compression coding of the original image is therefore a reasonable way to resolve the conflict between the volume of image data and the limited storage capacity and transmission bandwidth.
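A hedged host-side illustration of the same BMP-to-JPEG step using Pillow (the paper runs the codec on a DSP; the file names and quality setting here are placeholders, not values from the paper):

```python
import os
from PIL import Image

src, dst = "input.bmp", "output.jpg"          # hypothetical paths
img = Image.open(src).convert("RGB")
img.save(dst, format="JPEG", quality=75)      # quality trades file size against fidelity

ratio = os.path.getsize(src) / os.path.getsize(dst)
print(f"compression ratio ≈ {ratio:.1f}:1")
```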

9.
Images of the same scene acquired by a multispectral sensor over several narrow spectral bands are called multispectral remote sensing images. Such images, with high spatial and spectral resolution, involve large data volumes that are difficult to store and transmit, so effective compression of this massive data has become one of the pressing problems in the application of remote sensing data. To avoid losing useful information that would affect further processing and application of the images, lossless compression is one effective solution. This paper analyzes the spatial and inter-band correlation of multispectral images, surveys the main lossless compression methods for multispectral images in terms of preprocessing, prediction-based, and transform-based techniques, and compares their characteristics.
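A hedged sketch of one of the prediction-based ideas such surveys cover, namely inter-band (spectral) prediction: each band is predicted from the previous one with a least-squares gain and offset, and only the integer residuals (plus the two coefficients) need to be entropy coded.

```python
import numpy as np

def interband_residuals(cube):
    """cube: (bands, H, W) integer array. Returns residuals and per-band (gain, offset)."""
    cube = cube.astype(np.int64)
    residuals = [cube[0]]                               # first band is stored as-is
    coeffs = []
    for b in range(1, cube.shape[0]):
        ref, cur = cube[b - 1].ravel(), cube[b].ravel()
        gain, offset = np.polyfit(ref, cur, 1)          # least-squares linear predictor
        pred = np.rint(gain * cube[b - 1] + offset).astype(np.int64)
        residuals.append(cube[b] - pred)                # integer residual -> lossless
        coeffs.append((gain, offset))
    return residuals, coeffs

rng = np.random.default_rng(1)
base = rng.integers(0, 1024, size=(64, 64))
cube = np.stack([base, 2 * base + 5, 3 * base - 7])     # strongly correlated toy bands
res, _ = interband_residuals(cube)
print([int(np.abs(r).max()) for r in res])              # residuals of later bands are tiny
```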

10.
In the digital prepress workflow, images are represented in the CMYK colour space. Lossy image compression alleviates the need for high storage and bandwidth capacities, resulting from the high spatial and tonal resolution. After the image has been printed on paper, the introduced visual quality loss should not be noticeable to a human observer. Since visual image quality depends on the compression algorithm both quantitatively and qualitatively, and since no visual image quality models incorporating the end-to-end image reproduction process are satisfactory, an experimental comparison is the only viable way to quantify subjective image quality. This paper presents the results from an intensive psychovisual study based on a two-alternative forced-choice approach involving 164 people, with expert and non-expert observers distinguished. The primary goal is to evaluate two previously published adaptations of JPEG to CMYK images, and to determine a visually lossless compression ratio threshold for typical printing applications. The improvements are based on tonal decorrelation and overlapping block transforms. Results on three typical prepress test images indicate that the proposed adaptations are useful and that for the investigated printing configuration, compression ratios up to 20 can be used safely.

11.
Task-Oriented Medical Image Compression
Modern medical imaging produces large numbers of digital medical images, whose storage and transmission pose serious problems. Traditionally, lossless coding has been used to improve storage and transmission efficiency, but reaching higher compression ratios requires lossy compression, which introduces distortion and must therefore be applied with care. A medical image usually consists of two kinds of regions: one contains important diagnostic information, and because the cost of misrepresenting it is very high, it must be compressed with high fidelity; the other contains less important information, for which the goal is the highest possible compression ratio. To guarantee the reconstruction quality of the region of interest while still achieving a high overall compression ratio, a task-oriented medical image compression algorithm is proposed that unifies lossless and lossy compression within a wavelet-transform framework: the region of interest is compressed losslessly, while the remaining regions are compressed lossily. Experiments show that the method achieves good performance in both compression ratio and reconstructed image quality.

12.
The LeGall (5,3) lifting wavelet constructs the wavelet directly in the spatial domain; the algorithm is simple, well suited to parallel processing, and can be used for both lossless and lossy image compression. To meet the specific requirements of still medical image compression in a telemedicine system, this paper discusses the lifting-based LeGall (5,3) wavelet transform algorithm in detail and implements it on a DSP. Experimental results show that the scheme achieves good results in processing speed, compression ratio, and reconstructed image quality.
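A minimal sketch of the reversible integer LeGall 5/3 lifting step on a 1-D signal (the DSP implementation and 2-D extension discussed in the paper are not reproduced):

```python
def legall53_forward(x):
    """x: list of ints, even length. Returns (lowpass s, highpass d)."""
    n = len(x)
    # predict step: odd samples minus the rounded average of their even neighbours
    d = [x[2 * i + 1] - ((x[2 * i] + x[min(2 * i + 2, n - 2)]) >> 1) for i in range(n // 2)]
    # update step: even samples plus a quarter of the neighbouring details
    s = [x[2 * i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n // 2)]
    return s, d

def legall53_inverse(s, d):
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                  # undo the update step
        x[2 * i] = s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
    for i in range(len(d)):                  # undo the predict step
        x[2 * i + 1] = d[i] + ((x[2 * i] + x[min(2 * i + 2, n - 2)]) >> 1)
    return x

sig = [12, 15, 14, 13, 10, 8, 9, 11]
lo, hi = legall53_forward(sig)
assert legall53_inverse(lo, hi) == sig       # perfect (lossless) reconstruction
```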

13.
Making full use of the fact that successive subtraction images in a digital subtraction angiography (DSA) sequence remain similar in the temporal domain, a coding scheme based on second-order differential PCM (D²PCM) is proposed. The method forms a second-order difference over the spatial and temporal coordinates, which effectively reduces both spatial and temporal redundancy, achieving lossless compression of DSA sequences and reducing storage space and transmission time. Practical application shows that the method is stable and practical.
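A hedged sketch of the second-order difference idea, spatial DPCM within each frame followed by temporal DPCM across frames; the entropy-coding back end is omitted.

```python
import numpy as np

def d2pcm_forward(seq):
    """seq: (T, H, W) integer image sequence. Horizontal first difference within each frame,
    then first difference across frames; both steps are exactly invertible."""
    seq = seq.astype(np.int32)
    spatial = np.diff(seq, axis=2, prepend=0)      # spatial DPCM along rows
    return np.diff(spatial, axis=0, prepend=0)     # temporal DPCM between consecutive frames

def d2pcm_inverse(res):
    spatial = np.cumsum(res, axis=0)               # undo the temporal difference
    return np.cumsum(spatial, axis=2)              # undo the spatial difference

rng = np.random.default_rng(2)
frames = rng.integers(100, 120, size=(4, 32, 32))  # toy stand-in for a DSA sequence
res = d2pcm_forward(frames)
assert np.array_equal(d2pcm_inverse(res), frames)
```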

14.
This paper presents image lossless compression and information-hiding schemes based on the same methodology. The methodology presented here is based on the known SCAN formal language for data accessing and processing. In particular, the compression part produces a lossless compression ratio of 1.88 for the standard Lenna image, while the hiding part is able to embed digital information amounting to 12.5% of the size of the original image. Results from various images are also provided.

15.
This paper proposes an optimized lossless compression algorithm for ARGB data and its FPGA implementation. To avoid decompressing and recompressing an entire file, the image is compressed and decompressed block by block using the Deflate algorithm, which greatly improves memory access efficiency. Deflate is applied to small blocks, exploiting its LZ77 and Huffman coding stages to optimize the compression. The algorithm was implemented as an FPGA circuit with Vivado HLS and tested in practice on multiple images, confirming its effectiveness; its power consumption and timing were also analyzed.
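A hedged host-side sketch of the block-wise Deflate idea (Python's zlib implements Deflate); the Vivado HLS / FPGA circuit is not reproduced, and the tile size is an arbitrary choice for illustration.

```python
import zlib
import numpy as np

TILE = 64  # pixels per side, chosen arbitrarily for this sketch

def compress_tiles(argb):
    """argb: (H, W, 4) uint8 image. Compress each TILE x TILE block independently so a
    single block can later be decompressed without touching the rest of the file."""
    h, w, _ = argb.shape
    tiles = {}
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            block = np.ascontiguousarray(argb[y:y + TILE, x:x + TILE])
            tiles[(y, x)] = (block.shape, zlib.compress(block.tobytes(), 6))
    return tiles

def decompress_tile(tiles, y, x):
    shape, blob = tiles[(y, x)]
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)

img = np.random.default_rng(3).integers(0, 256, size=(256, 256, 4), dtype=np.uint8)
tiles = compress_tiles(img)
assert np.array_equal(decompress_tile(tiles, 64, 128), img[64:128, 128:192])
```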

16.
周成兵  宋余庆  卢佳 《计算机工程与设计》2005,26(10):2660-2661,2694
How to compress medical images efficiently, so as to reduce storage space and transmission time, has become a pressing problem. Based on an analysis of existing image compression methods and the characteristics of medical images, and addressing the respective shortcomings of lossy and lossless compression for medical images, this paper studies a lossless medical image compression method based on a region of interest (ROI).

17.
In this paper, we present a novel method for fast lossy or lossless compression and decompression of regular height fields. The method is suitable for SIMD parallel implementation and thus inherently suitable for modern GPU architectures. Lossy compression is achieved by approximating the height field with a set of quadratic Bezier surfaces. In addition, lossless compression is achieved by superimposing the residuals over the lossy approximation. We validated the method’s efficiency through a CUDA implementation of compression and decompression algorithms. The method allows independent decompression of individual data points, as well as progressive decompression. Even in the case of lossy decompression, the decompressed surface is inherently seamless. In comparison with the GPU-oriented state-of-the-art method, the proposed method, combined with a widely available lossless compression method (such as DEFLATE), achieves comparable compression ratios. The method’s efficiency slightly outperforms the state-of-the-art method for very high workloads and considerably for lower workloads.

18.
To address the loss of real-time performance when large volumes of industrial floating-point data are transmitted over a GPRS network, a lossless compression method, IFPC, is proposed that combines the FPC double-precision floating-point compression algorithm from scientific computing with range coding to compress industrial floating-point data for transmission and then decompress it. First, FPC is experimentally compared with general-purpose lossless compression algorithms on the floating-point portion of the data; the results show that FPC achieves a better compression ratio and shorter compression and decompression times. Experiments compressing and decompressing the whole data stream with IFPC then show that, compared with general-purpose lossless compression algorithms, the compression ratio improves by at least 7.6%, compression time is reduced by at least 49.1%, and overall transmission time is reduced by 21.3%, improving real-time performance.
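A simplified, FPC-flavoured sketch: each double is XORed with a prediction (here just the previous value rather than FPC's FCM/DFCM predictors) and leading zero bytes are dropped; the range-coding stage of IFPC is omitted.

```python
import struct

def encode(values):
    prev = 0
    out = bytearray()
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        xored = bits ^ prev                              # small when the prediction is close
        prev = bits
        nonzero = (xored.bit_length() + 7) // 8          # bytes actually needed
        out.append(nonzero)                              # 1-byte header: payload length
        out += xored.to_bytes(nonzero, "little")
    return bytes(out)

def decode(blob):
    prev, i, values = 0, 0, []
    while i < len(blob):
        nonzero = blob[i]
        xored = int.from_bytes(blob[i + 1:i + 1 + nonzero], "little")
        prev ^= xored
        values.append(struct.unpack("<d", struct.pack("<Q", prev))[0])
        i += 1 + nonzero
    return values

readings = [20.0, 20.0, 20.1, 20.1, 20.2, 20.25]         # slowly varying sensor data
blob = encode(readings)
assert decode(blob) == readings
print(f"{len(readings) * 8} raw bytes -> {len(blob)} encoded bytes")
```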

19.
A Lossless Compression Method for Hyperspectral Images Based on Trellis-Coded Quantization
Because the volume of remote sensing data is enormous, it puts great pressure on limited storage space and transmission bandwidth. Hyperspectral images are expensive to acquire, have a wide range of applications, and generally must not lose any information when compressed, i.e., lossless compression is required; without effective compression methods, the widespread use of hyperspectral images will be severely limited. Trellis-coded quantization (TCQ) borrows the ideas of signal-set expansion, signal-set partitioning, and trellis state transitions from trellis-coded modulation (TCM); it offers good mean-squared-error (MSE) performance with moderate computational complexity, but so far it has mainly been applied to lossy image compression. To compress hyperspectral images losslessly and effectively, this paper introduces TCQ into lossless hyperspectral image compression and, taking the characteristics of hyperspectral images into account, proposes a lossless compression method based on the wavelet transform and TCQ. Experimental results show that, compared with the lossless modes of JPEG2000 and JPEG-LS, the proposed algorithm achieves better compression performance on hyperspectral images.

20.
《Real》2001,7(2):203-217
This paper presents a VLSI architecture that implements the forward and inverse two-dimensional Discrete Wavelet Transform (DWT) to compress medical images for storage and retrieval. Lossless compression is usually required in the medical imaging field, and the word length it requires makes the area cost of the architectures reported in the literature too expensive; there is thus a clear need for a cost-effective architecture for lossless compression of medical images using the DWT. The data-path word length has been selected to satisfy the lossless accuracy criteria, leading to a high-speed implementation with a small chip area. The pyramid algorithm is reorganized and its locality improved in order to obtain an efficient hardware implementation. The result is a pipelined architecture that supports single-chip implementation in VLSI technology. The implementation employs only one multiplier and 352 memory elements to compute all scales, which results in a considerably smaller chip area (45 mm²) than former implementations. The hardware design has been captured in VHDL and simulated on data taken from random images. Implemented in a 0.7 μm technology, it can compute both the forward and inverse DWT at a rate of 3.5 512×512 12-bit images per second, corresponding to a clock speed of 33 MHz. This chip is the core of a PCI board that will speed up DWT computation on desktop computers.
