Similar Documents (20 matches)
1.
In recent years, with advances in technology and the rise of the Internet, image data have come to be used widely online. Image compression not only saves the memory occupied by image data but also speeds up transmission, so compression techniques are now applied extensively in medicine, mobile phones, data transmission, multimedia audio/video, and the Internet. This work focuses on lossless image compression: the original image is first converted to 256-color GIF format, and an index matrix is then built whose elements correspond to the primary RGB color information. An index-matrix sorting method combined with a codebook is used together with different compression algorithms, such as LZW, CALIC, JPEG2000, and JPEG-LS, to compare compression performance.
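The LZW stage compared in the abstract above is a greedy dictionary coder: it grows a phrase dictionary as it scans the symbol stream. A minimal Python sketch over a byte (or palette-index) stream, assuming a 256-entry initial dictionary; this is an illustration, not the paper's implementation:

```python
def lzw_encode(data):
    """Greedy LZW: emit a code for the longest known phrase, then extend it."""
    dictionary = {bytes([i]): i for i in range(256)}  # one entry per byte value
    next_code = 256
    phrase = b""
    codes = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = next_code
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

def lzw_decode(codes):
    """Rebuilds the same dictionary on the fly, so no table is transmitted."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = dictionary[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:                         # the "phrase not yet defined" corner case
            entry = prev + prev[:1]
        out += entry
        dictionary[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return bytes(out)
```

The decoder's corner case fires when a code refers to the phrase being defined by the immediately preceding step (the classic "cScSc" pattern).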

2.
In lossless image compression, many prediction methods have been proposed to achieve a better compression-performance/complexity trade-off. In this paper, we concentrate on some well-known and widely used low-complexity algorithms employed in many modern compression systems, including MED, GAP, Graham, Ljpeg, DARC, and GBSW. We propose a new gradient-based tracking and adapting technique that outperforms these existing methods, aiming at an efficient, highly adaptive predictor that can be incorporated into the modeling step of image compression systems. This claim is supported by testing the proposed method on a wide variety of images with different characteristics: six special sets of images, covering face, sport, texture, sea, text, and medical content, constitute our dataset.
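Of the low-complexity predictors named above, MED (median edge detector, the predictor of JPEG-LS/LOCO-I) is the simplest to state: it picks min, max, or a planar estimate of the left (a), above (b), and above-left (c) neighbours depending on whether an edge is detected. A straightforward (unoptimized) Python sketch, assuming neighbours outside the image are treated as 0:

```python
import numpy as np

def med_predict(img):
    """Return the MED prediction for every pixel of a 2-D grayscale array.

    a = left neighbour, b = above, c = above-left (0 outside the image).
    """
    img = img.astype(np.int32)
    h, w = img.shape
    pred = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            if c >= max(a, b):          # edge above-left: take the smaller edge side
                pred[y, x] = min(a, b)
            elif c <= min(a, b):        # edge the other way: take the larger side
                pred[y, x] = max(a, b)
            else:                       # smooth region: planar prediction
                pred[y, x] = a + b - c
    return pred
```

On smooth or flat regions the residual `img - med_predict(img)` is near zero everywhere except the first pixel, which is what makes the residuals cheap to entropy-code.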

3.
Compression algorithms are widely used in medical imaging systems for efficient image storage, transmission, and display. Important factors in the acceptance of lossy compression algorithms in the clinical environment are the assessment of 'visually lossless' compression thresholds, as well as the development of assessment methods requiring fewer data and less time than observer-performance studies. In this study, a set of quantitative measurements related to medical image quality parameters is proposed for compression assessment. Measurements were carried out using region-of-interest (ROI) operations on computer-generated test images with characteristics similar to radiographic images. As a paradigm, the assessment of the lossy Joint Photographic Experts Group (JPEG) algorithm, available in a telematics application for healthcare, is presented. A compression ratio of 15 was found to be the visually lossless threshold for the lossy JPEG algorithm, in agreement with previous observer-performance studies. Up to this ratio, low-contrast discrimination is not affected, the image noise level is decreased, high-contrast line-pair amplitude is decreased by less than 3%, and input/output gray-level differences are minor (less than 1%). This type of assessment provides information regarding the type of loss and offers cost and time benefits, in parallel with the advantage that the test images can be adapted to the requirements of a given imaging modality and clinical study.

4.
Task-Oriented Medical Image Compression
Modern medical imaging produces large volumes of digital images whose storage and transmission pose serious problems. Traditionally, lossless coding has been used to improve storage and transmission efficiency, but achieving high compression ratios requires lossy compression, which introduces distortion and must be used with care. Medical images typically consist of two kinds of regions. One contains important diagnostic information; since the cost of misrepresenting it is very high, a high-fidelity compression method is essential there. The other carries less important information, and the goal for it is the highest possible compression ratio. To guarantee the reconstruction quality of regions of interest while still achieving a high compression ratio, a task-oriented medical image compression algorithm is proposed. The method unifies lossless and lossy compression within a wavelet-transform framework, compressing regions of interest losslessly and the remaining regions lossily. Experiments show that the method achieves good performance in both compression ratio and reconstructed image quality.

5.
A Lossless Compression Method for Hyperspectral Images Based on Trellis-Coded Quantization
Remote-sensing images are extremely large, placing great pressure on limited storage space and transmission bandwidth. At the same time, hyperspectral images are expensive to acquire and have wide application, and compression generally must not lose any information, i.e., lossless compression is required; without effective compression methods, the widespread application of hyperspectral imagery is severely limited. Trellis-coded quantization (TCQ) borrows the ideas of signal-set expansion, signal-set partitioning, and trellis state transitions from trellis-coded modulation (TCM); it offers good mean-squared-error (MSE) performance at moderate computational complexity, but so far TCQ has mainly been applied to lossy image compression. To compress hyperspectral images losslessly and effectively, TCQ is introduced into lossless hyperspectral compression, and a lossless compression method based on the wavelet transform and TCQ is proposed that exploits the characteristics of hyperspectral images. Experimental results show that, compared with the lossless algorithms in JPEG2000 and JPEG-LS, the proposed algorithm achieves better compression performance on hyperspectral images.

6.
A novel compression algorithm suited to medical images, AR-EWC (embedded wavelet coding of arbitrary ROI), is proposed. The algorithm guarantees a smooth transition between region-of-interest boundaries and the background, supports distortion-free coding of arbitrarily shaped ROIs, generates an embedded lossy-to-lossless quality-progressive bitstream, and allows independent random access to each ROI. Tailored to the characteristics of medical images, regions containing important clinical diagnostic information are compressed completely losslessly, while the background is compressed lossily at a high ratio; this preserves the high quality required of medical images while greatly improving the overall compression ratio relative to traditional lossless schemes. Experiments on a clinical head MR image dataset show that, while guaranteeing lossless ROI compression, the algorithm achieves compression ratios comparable to classical lossy algorithms.

7.
《Real》2001,7(2):203-217
This paper presents a VLSI architecture implementing the forward and inverse two-dimensional Discrete Wavelet Transform (DWT) to compress medical images for storage and retrieval. Lossless compression is usually required in the medical imaging field, but the word length required for lossless operation makes the area cost of the architectures in the literature too expensive; there is thus a clear need for a cost-effective architecture implementing lossless DWT compression of medical images. The data-path word length has been selected to satisfy the lossless accuracy criteria, leading to a high-speed implementation with a small chip area. The pyramid algorithm is reorganized and its locality improved in order to obtain an efficient hardware implementation. The result is a pipelined architecture that supports single-chip implementation in VLSI technology. The implementation employs only one multiplier and 352 memory elements to compute all scales, which results in a considerably smaller chip area (45 mm²) than former implementations. The hardware design has been captured in VHDL and simulated on data taken from random images. Implemented in a 0.7 μm technology, it can compute both the forward and inverse DWT at a rate of 3.5 512×512 12-bit images/s, corresponding to a clock speed of 33 MHz. This chip is the core of a PCI board that will speed up DWT computation on desktop computers.

8.
Hyperspectral sensors acquire images in many, very narrow, contiguous spectral bands throughout the visible, near-infrared (IR), mid-IR and thermal IR portions of the spectrum, thus requiring large data storage on board the satellite and high bandwidth of the downlink transmission channel to ground stations. Image compression techniques are required to compensate for the limitations in terms of on-board storage and communication link bandwidth. In most remote-sensing applications, preservation of the original information is important and urges studies on lossless compression techniques for on-board implementation. This article first reviews hyperspectral spaceborne missions and compression techniques for hyperspectral images used on board satellites. The rest of the article investigates the suitability of the integer Karhunen–Loève transform (KLT) for lossless inter-band compression in spaceborne hyperspectral imaging payloads. Clustering and tiling strategies are employed to reduce the computational complexity of the algorithm. The integer KLT performance is evaluated through a comprehensive numerical experimentation using four airborne and four spaceborne hyperspectral datasets. In addition, an implementation of the integer KLT algorithm is ported to an embedded platform including a digital signal processor (DSP). The DSP performance results are reported and compared with the desktop implementation. The effects of clustering and tiling techniques on the compression ratio and latency are assessed for both desktop and the DSP implementation.
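The KLT decorrelates the spectral bands by projecting onto the eigenvectors of the inter-band covariance. The floating-point version below is a sketch of that idea only; it is not itself lossless. The integer KLT the article studies factorises this matrix into reversible integer lifting steps, which is what makes it usable for lossless coding:

```python
import numpy as np

def klt_decorrelate(cube):
    """Spectral KLT sketch. cube: (bands, pixels) array.

    Returns the transform matrix, the decorrelated coefficients, and the
    per-band mean. Float arithmetic -- illustrative, NOT the reversible
    integer KLT of the article.
    """
    mean = cube.mean(axis=1, keepdims=True)
    centred = cube - mean
    cov = centred @ centred.T / centred.shape[1]    # inter-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]               # strongest component first
    kl = eigvecs[:, order].T
    return kl, kl @ centred, mean
```

After the transform the coefficient covariance is diagonal, so each output band can be entropy-coded independently with little loss of efficiency; clustering bands before applying the KLT, as in the article, trades a little decorrelation for a much smaller covariance problem.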

9.
The S+P transform, proposed by Amir Said [7], is a multiresolution representation that maps integers to integers and has therefore been applied successfully to lossless image compression, outperforming the linear-prediction-based JPEG standard. To further improve compression performance, this paper proposes a lossless image compression algorithm based on DPCM and the S+P transform. First, linear prediction is applied to the original image to obtain a residual image; next, the S+P transform is applied to the residual image; finally, the transform coefficients are entropy coded. Exploiting inter-pixel correlation, the new algorithm provides a new way of transforming an image with a one-dimensional S+P transform, reducing the data expansion introduced by the transform and effectively handling the boundary problem. Experimental results and performance comparisons show that the new algorithm is effective and outperforms other well-known lossless image coders based on multiresolution decomposition [7,10,12].
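The "S" part of S+P is the integer S transform, a lossless Haar-like split into pairwise rounded means (lowpass) and differences (highpass); the "+P" part, not shown here, adds a prediction step on the highpass band. A minimal sketch of the S transform alone:

```python
def s_transform(x):
    """Integer S transform: pairwise truncated means l and differences h."""
    half = len(x) // 2
    l = [(x[2*i] + x[2*i + 1]) // 2 for i in range(half)]
    h = [x[2*i] - x[2*i + 1] for i in range(half)]
    return l, h

def s_inverse(l, h):
    """Exact inverse: the truncation in l is recovered from the parity of h."""
    x = []
    for li, hi in zip(l, h):
        a = li + (hi + 1) // 2      # ceil(h/2) restores the rounded-away bit
        x += [a, a - hi]
    return x
```

Although `//` discards a bit in each mean, the difference `h` carries that bit's parity, so the pair (l, h) loses nothing; that is the sense in which the transform is integer-to-integer and reversible.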

10.

An increasing amount of image data is produced nowadays, which creates a big challenge for storing and transmitting it. For fields requiring high fidelity, lossless image compression is significant because it can reduce the size of image data without quality loss. To address the difficulty of improving the lossless image compression ratio, we propose an improved lossless image compression algorithm that theoretically provides approximately quadruple compression, combining linear prediction, the integer wavelet transform (IWT) with output-coefficient processing, and Huffman coding. A new hybrid transform exploiting a new prediction template and a coefficient processing of the IWT is the main contribution of this algorithm. Experimental results on three different image sets show that the proposed algorithm outperforms state-of-the-art algorithms: the compression ratios are improved by at least 6.22% and by up to 72.36%. Our algorithm is most suitable for compressing images with complex texture and higher resolution at an acceptable compression speed.
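The Huffman stage of the pipeline above assigns shorter bit strings to more frequent coefficient symbols. A compact sketch that builds the code table from symbol frequencies with a heap (illustrative; not the paper's exact coder):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build an optimal prefix code from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak id, {symbol: partial code}).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]
```

Because the two rarest subtrees are always merged first, no symbol's code is a prefix of another's, and the expected code length is minimal for the given frequencies.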


11.
The paper presents a new lossless ECG compression scheme. The short-term predictor and the coder use conditioning on a small number of contexts. The long-term prediction is based on an algorithm for R-R interval estimation. Several QRS detection algorithms are investigated to select a low complexity and reliable detection algorithm. The coding of prediction residuals uses primarily the Golomb-Rice (GR) codes, but, to improve the coding results, escape codes GR-ESC are used in some contexts for a limited number of samples. Experimental results indicate the good overall performance of the lossless ECG compression algorithms (reducing the storage needs from 12 to about 3-4 bits per sample). The scheme consistently outperforms other waveform or general purpose coding algorithms.
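A Golomb-Rice code with parameter k writes a residual as a unary quotient, a stop bit, and k remainder bits; signed prediction residuals are first folded onto non-negative integers. A minimal sketch of that coding step (illustrative; the paper's context modelling and GR-ESC escape codes are not shown):

```python
def zigzag(residual):
    """Fold signed residuals to non-negative ints: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * residual if residual >= 0 else -2 * residual - 1

def rice_encode(value, k):
    """Rice code with divisor 2**k: unary quotient, '0' stop bit, k LSBs."""
    q = value >> k
    bits = "1" * q + "0"
    if k:
        bits += format(value & ((1 << k) - 1), "0{}b".format(k))
    return bits

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":               # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

Small residuals, which dominate after good prediction, cost only k+1 bits; choosing k per context to match the local residual magnitude is what the context conditioning in the scheme above buys.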

12.
To address the characteristics of satellite images and the current problems of transmitting and storing them, a two-level lossless compression algorithm for satellite images based on sparse representation is proposed. The first level of compression is achieved by transmitting the sparse coefficients obtained from the sparse representation instead of the image itself. The nonzero sparse coefficients are then preprocessed and clustered, and the positions of the original nonzero coefficients are sorted according to the cluster index. Finally, the processed nonzero coefficients and the position data are divided into blocks: the coefficient blocks are encoded with an improved adaptive Huffman algorithm, and the position blocks with differential coding combined with the improved adaptive Huffman algorithm, completing the second level of compression. Experimental results show a clear advantage over the traditional algorithm: the compressed output of the improved algorithm is one third to one half the size produced by the traditional algorithm, and high-ratio lossless compression and high-resolution reconstruction of satellite images are achieved simultaneously.

13.
Application of Combined Compression in a Storage Test System
In some special test environments, a storage test system must offer both large-capacity data storage and a very small physical volume. To resolve this conflict, based on a study of run-length coding and the LZW algorithm, a lossless combined compression scheme with an FPGA as its implementation core is proposed. The dictionary is built in the FPGA's on-chip RAM, and the compression algorithm is realized with VHDL and a state machine. Simulation and synthesis show that the combined algorithm implemented on the FPGA achieves a significant compression effect, …
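The run-length half of the combined scheme is simple enough to sketch directly; here runs are capped at 255 so each pair fits in two bytes, a cap chosen for illustration (the LZW half follows the standard dictionary coder):

```python
def rle_encode(data):
    """Byte-wise run-length coding: (count, value) pairs, runs capped at 255."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

def rle_decode(pairs):
    """Expand each (count, value) pair back into a run of bytes."""
    return bytes(b for run, v in pairs for b in [v] * run)
```

Chaining run-length coding before a dictionary coder pays off when the raw data contains long constant stretches (idle channels, zero padding), which is typical of stored test-system captures.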

14.
李畅 《现代计算机》2014,(12):61-64
With the massive storage and transmission of image information, research on image compression techniques has deepened. This paper analyzes existing lossless image compression techniques: methods based on statistical probability, dictionary-based coding, and predictive coding. It introduces the ordered binary decision diagram (OBDD) in detail together with a new and effective encoding for it, analyzes lossy image compression techniques as well, and compares lossless and lossy compression algorithms.

15.
To solve the storage difficulties caused by the large data volumes of real-time databases, a classification-based data compression algorithm is proposed that compresses real-time database data losslessly and efficiently. The data are first divided into three parts: values, timestamps, and quality codes. Then, according to the characteristic form of each kind of data, the LZ78 and LZW compression algorithms are fused and a corresponding compression algorithm is designed for each part. Experimental results show that the algorithm increases the effective storage capacity of the database while also improving its real-time performance.

16.
FPGA Implementation of Lossless Compression of Lidar Data
To improve the storage and transmission efficiency of the massive echo data of a ranging lidar, dictionary-based Lempel-Ziv-Welch (LZW) lossless compression of the echo data was implemented in a lidar data acquisition system built around an FPGA. Dictionary management is simplified, the FPGA's on-chip RAM stores the dictionary, and logic circuits execute the compression algorithm, whose core is a finite state machine described in Verilog. Simulation, verification, and synthesis show that the FPGA implementation of the algorithm achieves a compression ratio of about 30%, with a compression speed that meets system requirements.

17.
The latest advancements in capture and display technologies demand better compression techniques for the storage and transmission of still images and video. High Efficiency Video Coding (HEVC) is the latest video compression standard, developed with this objective by the Joint Collaborative Team on Video Coding (JCT-VC). Although the main design goal of HEVC is the compression of high-resolution video, its performance in still image compression is on par with state-of-the-art still image compression standards. This work explores the possibility of incorporating the efficient intra-prediction techniques employed in HEVC into the compression of high-resolution still images. In the lossless coding mode of HEVC, sample-based angular intra prediction (SAP) methods have shown better prediction accuracy than conventional block-based prediction (BP). In this paper, we propose an improved sample-based angular intra prediction (ISAP), which enhances the accuracy of the highly crucial intra prediction within HEVC. The experimental results show that ISAP in lossless compression of still images outclasses archival tools, state-of-the-art image compression standards, and other HEVC-based lossless image compression codecs.

18.
仇杰  梁久祯  吴秦  王培斌 《计算机应用》2015,35(11):3232-3237
To reduce the transmission delay of large volumes of industrial remote-monitoring data over General Packet Radio Service (GPRS) networks, a lossless compression method for industrial remote-monitoring data based on an improved scientific-computing floating-point compression (FPC) algorithm is proposed. First, the predictor structure of the original FPC algorithm is improved according to the characteristics of the floating-point portion of industrial monitoring data, and this improved algorithm compresses that portion; it is then combined with range coding to compress the entire data field. Experiments on the floating-point portion before and after the improvement show that the modified FPC raises the predictor's accuracy and improves the compression ratio while maintaining high compression efficiency. Compared with general-purpose lossless compression algorithms, the proposed method improves the average compression ratio by more than 12%, reduces the average compression time by more than 38.5%, and lowers the transmission time by more than 23.7%, greatly improving the real-time performance of monitoring when the data volume is large and the transmission rate is low.
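FPC-style floating-point compression predicts each double's 64-bit pattern and stores only the XOR of the prediction and the actual value; a good prediction leaves many leading zero bytes, which FPC records compactly in a short header. The sketch below uses a plain last-value predictor purely for illustration; the real FPC (and the improved predictor in the abstract above) uses FCM/DFCM hash-table predictors:

```python
import struct

def xor_residuals(values):
    """XOR each double's bit pattern with a last-value prediction.

    Illustrative stand-in for FPC's FCM/DFCM predictors: accurate
    predictions give residuals with many leading zero bits.
    """
    prev = 0
    res = []
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        res.append(bits ^ prev)
        prev = bits
    return res

def leading_zero_bytes(r):
    """Count whole zero bytes at the top of a 64-bit residual -- the
    quantity an FPC-style coder stores before the nonzero tail."""
    for n in range(8):
        if (r >> (8 * (7 - n))) & 0xFF:
            return n
    return 8
```

Slowly varying sensor readings, the common case in industrial monitoring, make consecutive bit patterns nearly identical, so most residuals shrink to a few tail bytes plus a small header.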

19.
The high-energy synchrotron radiation light source now under construction is expected to produce massive raw data, of which image data from the hard X-ray beamline stations account for the largest share and feature high resolution and high frame rates; effective lossless compression is urgently needed to relieve the storage and transmission pressure. However, existing general-purpose lossless compression methods perform poorly on this class of images, and deep-learning-based lossless methods are time-consuming. Exploiting the characteristics of synchrotron light-source images, a parallelizable, intelligent lossless image compression method is proposed that preserves the image compression ratio. A parameter-adaptive reversible partition quantization greatly narrows the distribution of pixel values after temporal differencing, saving more than 20% of storage space. A CNN-based spatiotemporal learning network, C-Zip, serves as the probability predictor, and the model is overfitted per dataset to further optimize the compression ratio. To address the time-consuming arithmetic-coding stage of compression, probability-distance quantization replaces arithmetic coding and is combined with deep learning for lossless coding, increasing the parallelism of the encoding process. Experimental results show that the method improves the image compression ratio by 0.23 to 0.58 over traditional lossless image formats such as PNG and FLIF, giving better compression for synchrotron light-source images.

20.
In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture, the ADM-XRC board from Alpha Data Parallel Systems Ltd. The algorithm is lossless and is applied to "bi-level" images of large size (A0 format). It ensures a minimum compression rate for the images we are considering, and it aims to decrease the storage requirements and transfer times that are critical for wide-format printing systems. In the wide-format document industry, raster data are most of the time processed in an uncompressed format, in order to apply processing (P) before printing (p). An example copy chain is composed of a scanner, a set of processing operations, storage, a link, and a printer. We propose to use a compressed format as the new data-flow representation to improve the performance of the printing system. For example, compression (C) is applied as soon as the data are produced by the scanner, and decompression (D) is performed at the last stage, before printing; the set of processing operations is applied to the compressed images. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes. We implemented the proposed compression and decompression architecture on an FPGA Xilinx Virtex XCV400.
