Similar Documents
20 similar documents found.
1.
ECG data compression techniques-a unified approach   Cited by: 25 (self: 0, others: 25)
A broad spectrum of techniques for electrocardiogram (ECG) data compression has been proposed over the last three decades. Such techniques have been vital in reducing the digital ECG data volume for storage and transmission, and they are essential to a wide variety of applications ranging from diagnostic to ambulatory ECGs. Because of the diverse procedures that have been employed, comparison of ECG compression methods is a major problem, and present evaluation methods preclude any direct comparison among existing techniques. The main purpose of this paper is to address this issue and to establish a unified view of ECG compression techniques. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation and entropy coding, the AZTEC, turning-point, CORTES, Fan, and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods briefly presented include the Fourier, Walsh, and K-L transforms. The theoretical basis behind the direct ECG data compression schemes is presented and classified into three categories: tolerance-comparison compression, differential pulse code modulation (DPCM), and entropy coding methods. The paper concludes with a framework for evaluation and comparison of ECG compression schemes.
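Several of the direct methods listed are simple enough to sketch. Below is a minimal Python rendition of the turning-point idea, 2:1 compression that keeps local extrema, as a generic textbook formulation rather than any one paper's exact algorithm.

```python
import numpy as np

def turning_point_compress(x):
    """2:1 downsampling that keeps local extrema ("turning points").

    For each pair (x1, x2) after the reference sample x0, retain x1 if the
    slope changes sign at x1 (a turning point), otherwise retain x2.
    Generic textbook formulation; assumes an odd-length input for simplicity.
    """
    out = [x[0]]
    x0 = x[0]
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        s1 = np.sign(x1 - x0)
        s2 = np.sign(x2 - x1)
        kept = x1 if s1 * s2 <= 0 else x2  # slope sign change => keep the turning point
        out.append(kept)
        x0 = kept
    return np.asarray(out)

ecg = np.sin(np.linspace(0, 6 * np.pi, 201)) + 0.05 * np.random.randn(201)
print(len(ecg), "->", len(turning_point_compress(ecg)))  # roughly 2:1
```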

2.
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) compressing the significance map with a variable-length code based on run-length encoding and 2) representing the significant coefficients in direct binary form. The ability of the coding algorithm to compress ECG signals is investigated by compressing and decompressing test signals. The proposed algorithm is compared with direct and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root-mean-square difference as low as 1.08%.
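A rough sketch of the thresholding and significance-map stages follows; the preprocessing, the three-group split, and the exact variable-length code are omitted, and the energy-packing target `epe` is a placeholder value.

```python
import numpy as np
import pywt

def compress_dwt(signal, wavelet="db4", level=4, epe=0.999):
    """Threshold DWT coefficients to a desired energy packing efficiency (EPE)
    and build a binary significance map. A minimal sketch only: the paper's
    preprocessing, per-group thresholds, and variable-length coding of the
    map are not reproduced."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # Smallest magnitude threshold that retains `epe` of the total energy.
    mags = np.sort(np.abs(flat))[::-1]
    cum = np.cumsum(mags ** 2)
    k = int(np.searchsorted(cum, epe * cum[-1])) + 1
    thr = mags[min(k, len(mags)) - 1]
    sig_map = (np.abs(flat) >= thr).astype(np.uint8)  # 1 = significant
    significant = flat[sig_map == 1]
    return sig_map, significant, thr

def run_lengths(bits):
    """Run-length encode the significance map as (bit, run) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((int(bits[i]), j - i))
        i = j
    return runs
```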

3.
Fixed percentage of wavelet coefficients to be zeroed for ECG compression   Cited by: 4 (self: 0, others: 4)
A new method for ECG compression is presented. After the pyramidal wavelet decomposition, the resulting coefficients are iteratively thresholded until a fixed target percentage of wavelet coefficients has been zeroed. Lossless Huffman coding is then used to increase the compression ratio. Good quality preservation at high compression ratios is reported.
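The fixed-percentage target admits a very small sketch. The paper iterates a threshold until the target is met; taking a percentile of the coefficient magnitudes reaches the same fixed fraction in one step (Huffman stage omitted):

```python
import numpy as np
import pywt

def zero_fixed_percentage(signal, pct_zeroed=90.0, wavelet="db4", level=5):
    """Zero a fixed percentage of the pyramidal wavelet coefficients, then
    reconstruct. The percentile threshold is a one-step equivalent of the
    paper's iterative search for the same fixed-fraction target."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thr = np.percentile(np.abs(flat), pct_zeroed)
    thresholded = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
    return pywt.waverec(thresholded, wavelet)
```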

4.
This paper introduces a new methodology for compressing ECG signals automatically while guaranteeing signal interpretation quality. The approach is based on estimating the noise in the ECG signal, which is then used as the compression threshold in the coding stage. The Set Partitioning in Hierarchical Trees (SPIHT) algorithm codes the signal in the wavelet domain. Forty ECG records from two databases commonly used in ECG compression were considered to validate the approach, and three cardiologists rated signal quality in a clinical trial using mean opinion score tests. Results showed that the approach not only achieves very good ECG reconstruction quality but also enhances the visual quality of the ECG signal.
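The abstract does not spell out the noise estimator; a common wavelet-domain choice (a stand-in here, not necessarily the authors') is Donoho's median-absolute-deviation estimate on the finest detail coefficients, sketched below with PyWavelets. The SPIHT coding stage itself is not reproduced.

```python
import numpy as np
import pywt

def noise_sigma(signal, wavelet="db4"):
    """Donoho-style noise estimate: MAD of the finest-scale detail
    coefficients divided by 0.6745. Used here as a stand-in for the paper's
    noise estimator; the resulting sigma would serve as the compression
    threshold in the coding stage."""
    _, d1 = pywt.dwt(signal, wavelet)  # single level: (approximation, detail)
    return np.median(np.abs(d1 - np.median(d1))) / 0.6745
```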

5.
Embedded image compression algorithm based on adaptive wavelet transform   Cited by: 3 (self: 1, others: 2)
To address the rich, complex textures and weak local correlation of images such as remote-sensing, fingerprint, and seismic data, this paper proposes an efficient image compression coding algorithm that applies an adaptive wavelet transform, a carefully chosen coefficient scanning order, and classified quantization of the wavelet coefficients. Simulation results show that, at the same compression ratio, the reconstruction quality of the proposed algorithm is clearly superior to that of SPIHT, especially for texture images such as the standard Barbara image.
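As a rough illustration of classified (subband-dependent) quantization, one ingredient above, the sketch below quantizes each subband of a standard wavelet decomposition with its own step size. The step schedule and filter choice are placeholders; the paper's adaptive transform and scan-order design are not reproduced.

```python
import numpy as np
import pywt

def classified_quantize(image, wavelet="bior4.4", level=3, base_step=4.0):
    """Quantize each subband with its own step size, coarser steps for finer
    detail subbands. A generic stand-in for classified quantization; the
    step schedule below is a hypothetical choice, not the paper's."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [np.round(coeffs[0] / base_step)]        # LL band: finest step
    # coeffs[1:] runs from the coarsest to the finest detail level.
    for lvl, details in enumerate(coeffs[1:], start=1):
        step = base_step * 2.0 ** lvl              # finer level -> larger step
        out.append(tuple(np.round(band / step) for band in details))
    return out
```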

6.
This paper proposes a 3D point cloud compression algorithm based on an improved graph transform. The algorithm first connects all subgraphs within each block into a single graph via the improved graph transform, reducing the number of DC coefficients at the source; the mean of all points in each block serves as the DC coefficient to lower the DC magnitude, and the graph transform is applied to the mean-removed color values. Because zero coefficients dominate the quantized AC coefficients, a run-level coding method is used to encode the nonzero AC coefficients. For the DC coefficients, a predictive coding method is designed for efficient encoding. Finally, the coded AC coefficients and the prediction residuals are both entropy coded with a Huffman coder. Experimental results show that the proposed algorithm achieves higher compression efficiency than several existing 3D point cloud compression algorithms.
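For readers unfamiliar with graph transforms, the sketch below shows the generic construction: a graph Laplacian over the points of one block, its eigenbasis as the transform, the block mean as the DC value, and the transform applied to mean-removed colors. The radius-graph construction and weights are assumptions, and the paper's specific subgraph-merging improvement is not reproduced.

```python
import numpy as np

def graph_transform(points, colors, radius=1.5):
    """Generic graph transform for one point-cloud block: build a
    distance-weighted adjacency over the points, take the eigenbasis of the
    graph Laplacian, and project the mean-removed colors onto it."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.where((d > 0) & (d <= radius), 1.0 / d, 0.0)  # adjacency weights
    lap = np.diag(w.sum(axis=1)) - w                     # combinatorial Laplacian
    _, basis = np.linalg.eigh(lap)                       # columns = transform basis
    dc = colors.mean(axis=0)                             # block mean as the DC value
    ac = basis.T @ (colors - dc)                         # AC coefficients
    return dc, ac

pts = np.random.rand(32, 3)
cols = np.random.rand(32, 3)
dc, ac = graph_transform(pts, cols)
```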

7.
This paper evaluates the compression performance and characteristics of two wavelet coding schemes for electrocardiogram (ECG) signals suitable for real-time telemedical applications. The two proposed methods, the optimal zonal wavelet coding (OZWC) method and the wavelet transform higher-order-statistics-based coding (WHOSC) method, are used to assess ECG compression issues. The WHOSC method employs higher order statistics (HOS) and uses multirate processing with an autoregressive HOS model to make the coding scheme more robust. The OZWC algorithm is based on the optimal wavelet-based zonal coding method developed for the class of discrete "Lipschitizian" signals. Both methodologies were evaluated using the normalized rms error (NRMSE), the average compression ratio (CR), and bits-per-sample criteria, applied to abnormal clinical ECG data samples selected from the MIT-BIH database and the Creighton University Cardiac Center database. Simulation results illustrate that both methods can enhance medical data compression in a hybrid mobile telemedical system that integrates these algorithms for real-time ECG transmission with high CRs and low NRMSE ratios, especially in low-bandwidth mobile systems.
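The evaluation criteria above are easy to pin down; a minimal sketch follows, with one common NRMSE normalization, which may differ from the paper's.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size."""
    return original_bits / compressed_bits

def nrmse(x, y):
    """Normalized RMS error between original x and reconstruction y.
    Normalization by the signal's variance about its mean is one common
    convention; the paper may normalize differently."""
    return np.sqrt(np.sum((x - y) ** 2) / np.sum((x - np.mean(x)) ** 2))
```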

8.
Modified JPEG Huffman coding   Cited by: 3 (self: 0, others: 3)
It is a well-observed characteristic that when a DCT block is traversed in zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods; the average code reduction relative to total image code size for one of our methods is 4%. Our methods can also be used for progressive image transmission, so experimental results are also given comparing them with two-, three-, and four-band implementations of the JPEG spectral selection method.
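A sketch of the mechanics being exploited: generate the JPEG zigzag order and split the scanned coefficients into bands, each of which would get its own Huffman table with the end-of-block marker doubling as a band separator. The band boundaries below are illustrative, not the paper's.

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG zigzag order: walk the
    anti-diagonals, alternating direction on odd/even diagonals."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def split_bands(block, boundaries=(1, 6, 15, 28)):
    """Split the zigzag-scanned coefficients into bands; each band would be
    coded with its own table. Boundary positions here are placeholders."""
    scan = np.array([block[r, c] for r, c in zigzag_indices(len(block))])
    edges = list(boundaries) + [len(scan)]
    return [scan[a:b] for a, b in zip([0] + list(boundaries), edges)]

bands = split_bands(np.arange(64).reshape(8, 8))
```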

9.
黄博强  陈建华  汪源源 《电子学报》2008,36(9):1810-1813
A two-dimensional ECG compression scheme based on context modeling is proposed. R-wave features are identified via modulus-maxima detection and cyclic matching, an ECG image is constructed automatically, and a coding data map is built from the cardiac-cycle information. The ECG image then undergoes a one-dimensional discrete wavelet transform and uniform quantization with a dead zone; the quantized coefficients are decomposed into a significance map, a sign stream, a most-significant-bit position stream, and a residual bit stream, which are finally compressed by context-based adaptive arithmetic coding combined with the coding data map. Experiments compress two data sets from the MIT-BIH Arrhythmia Database. At a compression ratio of 20, the percent root-mean-square differences of the new scheme are 2.93% and 4.31%, lower than the 3.26% and 4.8% of JPEG2000 compression. The results show that the new scheme outperforms other ECG compression algorithms.
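The core data layout is easy to sketch: beats delimited by detected R peaks are stacked as rows of a 2-D "ECG image" so that 2-D coding can exploit beat-to-beat correlation. R-peak detection (modulus maxima plus cyclic matching in the paper) is assumed already done, and zero-padding stands in for whatever length normalization the authors use.

```python
import numpy as np

def ecg_to_image(signal, r_peaks, width=None):
    """Stack heartbeats into rows of a 2-D array, aligned at the detected
    R waves. Short beats are zero-padded and long beats truncated; this is
    a simplification of the paper's construction."""
    beats = [signal[a:b] for a, b in zip(r_peaks[:-1], r_peaks[1:])]
    width = width or max(len(b) for b in beats)
    image = np.zeros((len(beats), width))
    for row, beat in enumerate(beats):
        image[row, :min(len(beat), width)] = beat[:width]
    return image
```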

10.
An efficient image coding scheme with strong error resilience   Cited by: 3 (self: 2, others: 1)
A new image coding scheme is proposed. The wavelet-transformed coefficients are coded with variable-coefficient fixed-length codes; a dictionary is generated from the symbol probability table, and symbol strings are mapped to dictionary codewords by a method similar to arithmetic coding. A series of error-correction measures further strengthens error resilience. Simulation results show that, compared with other schemes, this one achieves a higher compression ratio and stronger error resilience.

11.
In prior work, a wavelet-based vector quantization (VQ) approach was proposed for lossy compression of electrocardiogram (ECG) signals. In this paper, we investigate and fix its coding inefficiency in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook updating mechanism, originally designed for lossy compression, is modified to support lossless compression as well. In addition, a new, cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical trees (SPIHT) at the less significant bits of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T databases are selected as test data. In terms of coding efficiency for lossless compression, experimental results show that the proposed codec improves on the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
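The 5/3 integer filter named above has a standard reversible lifting implementation (the JPEG2000 lossless filter pair). A single-level 1-D sketch, assuming even-length input and symmetric boundary extension:

```python
import numpy as np

def lift_53_forward(x):
    """One level of the reversible 5/3 integer lifting wavelet transform.
    Integer-to-integer, hence exactly invertible."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])      # symmetric extension
    d = odd - ((even + right) >> 1)
    # Update: approx = even + floor((d_left + d_right + 2) / 4)
    left = np.insert(d[:-1], 0, d[0])          # symmetric extension
    s = even + ((left + d + 2) >> 2)
    return s, d

def lift_53_inverse(s, d):
    """Undo the lifting steps in reverse order to recover the samples."""
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    out = np.empty(len(s) + len(d), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.randint(0, 4096, size=256)
s, d = lift_53_forward(x)
assert np.array_equal(lift_53_inverse(s, d), x)  # perfect reconstruction
```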

12.
In this study, a new compression algorithm for ECG signals is proposed based on selecting important subbands of the wavelet packet transform (WPT) and applying a subband-dependent quantization algorithm. To this end, the WPT is first applied to the ECG signal, and the more important subbands are then selected according to their Shannon entropy. In the next step, content-based quantization and denoising methods are applied to the coefficients of the selected subbands. Finally, arithmetic coding is employed to produce the compressed data. The performance of the proposed method is evaluated on the MIT-BIH Arrhythmia database using the compression rate (CR), the percentage root-mean-square difference (PRD) as a signal distortion measure, and the wavelet energy-based diagnostic distortion (WEDD) as a diagnostic distortion measure. The average CR of the proposed method is 29.1, its average PRD is below 2.9%, and its WEDD is below 3.2%. These results demonstrate that the proposed method performs well compared with state-of-the-art compression algorithms.
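A sketch of the subband-selection stage using PyWavelets' wavelet packet API; the ranking rule (keep the highest-entropy subbands) and the `keep` budget are assumptions, and the quantization, denoising, and arithmetic-coding stages are omitted.

```python
import numpy as np
import pywt

def shannon_entropy(c):
    """Shannon entropy of a coefficient vector via normalized energies."""
    p = c ** 2 / np.sum(c ** 2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_subbands(signal, wavelet="db4", level=4, keep=6):
    """Decompose with a wavelet packet transform and keep the subbands
    ranked most important by Shannon entropy."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    ranked = sorted(nodes, key=lambda n: shannon_entropy(n.data), reverse=True)
    return {n.path: n.data for n in ranked[:keep]}
```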

13.
Digital video compression-an overview   Cited by: 1 (self: 0, others: 1)
Some of the more commonly used video compression schemes are reviewed: differential pulse code modulation (DPCM), Huffman coding, transform coding, vector quantization (VQ), and subband coding. The use of motion compensation to improve compression is also discussed.
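Of the schemes listed, DPCM is the simplest to sketch: transmit quantized differences from the running reconstruction so that encoder and decoder stay synchronized. A generic first-order version, not tied to any particular codec:

```python
import numpy as np

def dpcm_encode(x, step=4):
    """First-order DPCM with a uniform quantizer. The encoder predicts each
    sample from its own reconstruction of the previous one, so quantization
    error cannot accumulate between encoder and decoder."""
    residuals, pred = [], 0
    for sample in x:
        q = int(round((sample - pred) / step))  # quantized prediction error
        residuals.append(q)
        pred = pred + q * step                  # decoder-side reconstruction
    return residuals

def dpcm_decode(residuals, step=4):
    out, pred = [], 0
    for q in residuals:
        pred = pred + q * step
        out.append(pred)
    return np.asarray(out)
```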

14.
A low memory zerotree coding for arbitrarily shaped objects   Cited by: 2 (self: 0, others: 2)
The set partitioning in hierarchical trees (SPIHT) algorithm is a computationally simple and efficient zerotree coding technique for image compression. However, its high working-memory requirement is the main drawback for hardware realization. We present a low memory zerotree coder (LMZC) that requires much less working memory than SPIHT. The LMZC coding algorithm abandons the use of lists, defines a different tree structure, and merges the sorting pass and the refinement pass. The main techniques of LMZC are recursive programming and a top-bit scheme (TBS). In TBS, the top bits of the transformed coefficients store the coding status of coefficients in place of the lists used in SPIHT. To achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects, and a compact placement of the transformed coefficients is proposed to further reduce working memory. The LMZC carefully treats "don't care" nodes in the wavelet tree and spends no bits coding such nodes. Comparison of LMZC with SPIHT shows that for coding a 768 × 512 color image, LMZC saves at least 5.3 MB of memory while only slightly increasing execution time and marginally reducing peak signal-to-noise ratio (PSNR), making it highly promising for memory-limited applications.

15.
Progressive UTCQ coding based on rate-distortion optimization   Cited by: 1 (self: 0, others: 1)
This paper proposes a progressive still-image wavelet coding algorithm based on a UTCQ quantizer. Universal trellis-coded quantization (UTCQ) quantizes the wavelet coefficients with very good quantization performance. The UTCQ superset index values form coefficient bit planes, and rate-distortion optimization selects coefficient bits from the bit planes in order of decreasing rate-distortion slope: the bits coded first have the largest slope, so each coded bit yields the greatest distortion reduction. Computing the rate-distortion slope is just a table lookup into the probability state estimation table of the MQ adaptive arithmetic coder, which further compresses the coefficient bits selected by the rate-distortion optimization. A rate-distortion threshold method codes faster than searching for the largest slope. The algorithm offers fast coding and good compression performance.

16.
Signal compression is an important problem encountered in many applications, and various techniques have been proposed over the years to address it. In this paper, we present a time-domain algorithm based on coding the line segments used to approximate the signal; these segments are fit in a way that is optimal in the rate-distortion sense. Although the approach is applicable to any type of signal, we focus here on the compression of electrocardiogram (ECG) signals. ECG signal compression has traditionally been tackled by heuristic approaches, but it has been demonstrated [1] that exact optimization algorithms outperform them by a wide margin with respect to reconstruction error. By formulating the compression problem as a graph theory problem, known optimization theory can be applied to yield optimal compression. We present an algorithm that guarantees the smallest possible distortion among all methods applying linear interpolation, given an upper bound on the available number of bits. Extensive coding experiments on a varied signal test set compare the results of our method to traditional time-domain ECG compression methods as well as to more recently developed frequency-domain methods. Evaluation is based on both the percentage root-mean-square difference (PRD) performance measure and visual inspection of the reconstructed signals. The results demonstrate that the exact optimization methods outperform both traditional ECG compression methods and the frequency-domain methods.
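The optimization flavor described above can be sketched with dynamic programming: choose the breakpoints that minimize total squared error under linear interpolation, here with a fixed segment budget standing in for the paper's bit budget and graph formulation.

```python
import numpy as np

def seg_error(x, i, j):
    """Squared error of linearly interpolating x between samples i and j."""
    t = np.arange(i, j + 1)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - line) ** 2))

def optimal_linear_fit(x, n_segments):
    """Exact DP over breakpoints: cost[k][j] is the best total error for k
    segments ending at sample j. Cubic in record length, fine as a sketch
    but slow for long records; the paper's rate-distortion weighting is
    replaced here by a fixed segment count."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * n for _ in range(n_segments + 1)]
    back = [[0] * n for _ in range(n_segments + 1)]
    cost[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n):
            for i in range(k - 1, j):
                if cost[k - 1][i] == INF:
                    continue
                c = cost[k - 1][i] + seg_error(x, i, j)
                if c < cost[k][j]:
                    cost[k][j], back[k][j] = c, i
    # Trace the optimal breakpoints back from the last sample.
    bp, j = [n - 1], n - 1
    for k in range(n_segments, 0, -1):
        j = back[k][j]
        bp.append(j)
    return bp[::-1], cost[n_segments][n - 1]
```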

17.
The wavelet transform decomposes an image into multiresolution subbands in which correlation exists. A novel image coding technique that exploits this correlation is presented. It is based on predictive edge detection: edges found in the LL band of the lowest resolution level predict where edges will appear in the LH, HL, and HH bands of the next-higher resolution level. If a coefficient is predicted to lie on an edge, it is preserved; otherwise it is discarded. The decoder can locate the preserved coefficients exactly as the encoder did, so no overhead is needed. Instead of the complex vector quantization commonly used in subband image coding for high compression ratios, simple scalar quantization codes the remaining coefficients and achieves very good results.
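A rough sketch of the prediction idea: detect edges in the LL band and keep only the higher-band coefficients at the predicted locations. The gradient detector and threshold below are stand-ins for whatever edge predictor the authors use; the key property, that the decoder can repeat the same prediction on its decoded LL band, carries over.

```python
import numpy as np
import pywt

def edge_predicted_coeffs(image, wavelet="haar", thresh=10.0):
    """Keep LH/HL/HH coefficients only where the LL band predicts an edge.
    Both encoder and decoder can compute the same edge map from LL, so no
    coefficient-location overhead needs to be transmitted."""
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    gy, gx = np.gradient(ll)
    edge = np.hypot(gx, gy) > thresh  # predicted edge map from the LL band
    return ll, tuple(np.where(edge, band, 0.0) for band in (lh, hl, hh))
```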

18.
An effective new test-data compression code, PTIDR coding, is proposed. The method combines Huffman coding with prefix coding. Theoretical analysis and experimental results show that when the probability p of zeros in the test set satisfies p ≥ 0.7610, it achieves a higher compression ratio than FDR coding, lowering chip test cost. Its decoder is also simpler and easier to implement than the FDR decoder, saving hardware overhead and chip area and thus reducing chip manufacturing cost.

19.
Hierarchical partition priority wavelet image compression   Cited by: 3 (self: 0, others: 3)
Image compression methods for progressive transmission using optimal hierarchical decomposition, partition priority coding (PPC), and multiple distribution entropy coding (MDEC) are presented. In the proposed coder, a hierarchical subband/wavelet decomposition transforms the original image, with the analysis filter banks selected to maximize reproduction fidelity at each stage of progressive transmission. An efficient triple-state differential pulse code modulation (DPCM) method is applied to the smoothed subband coefficients, and the corresponding prediction error is Lloyd-Max quantized. Such a quantizer is also designed to fit the characteristics of the detail transform coefficients in each subband, which are then coded using novel hierarchical PPC (HPPC) and predictive HPPC (PHPPC) algorithms: given a suitable partitioning of their absolute range, the quantized detail coefficients are ordered by both decomposition level and partition and coded along with the corresponding address map. Space-filling scanning further reduces the coding cost by providing a highly spatially correlated address map of the coefficients in each PPC partition. Finally, adaptive MDEC is applied to both the DPCM and HPPC/PHPPC outputs by dividing the source (quantized coefficients) into multiple subsources and applying adaptive arithmetic coding based on their corresponding histograms. Experimental results demonstrate the strong performance of the proposed compression methods.

20.
An integer approximation of the principal component transform (IPCT), implemented by factoring the transform matrix into triangular elementary reversible matrices (TERMs), is studied. To bound the error and improve computational efficiency, the pivot selection method in the TERM factorization is improved. Combining the reversible integer PCT with three-dimensional Tarp coding, a new lossless compression algorithm for hyperspectral images is proposed. After a spatial wavelet transform, the improved IPCT removes inter-band correlation; in the coding stage, a novel three-dimensional Tarp coder uses five simple recursive filters for probability estimation to drive a non-adaptive arithmetic coder, entropy coding the significance map and refinement information of the transform coefficients. The algorithm has low complexity, produces an embedded bitstream, and achieves higher compression ratios than existing algorithms.
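For orientation, the ordinary floating-point principal component transform across spectral bands looks as follows; the paper's contribution, an integer-reversible approximation obtained by factoring this same transform matrix into TERMs, is not reproduced here.

```python
import numpy as np

def spectral_pct(cube):
    """Principal component transform across the spectral bands of a
    hyperspectral cube shaped (bands, rows, cols). This is the standard
    floating-point PCT that the paper's integer TERM factorization
    approximates reversibly."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x - mean)                 # band-by-band covariance
    _, vecs = np.linalg.eigh(cov)
    pct = vecs.T[::-1]                     # rows ordered by decreasing variance
    return (pct @ (x - mean)).reshape(bands, rows, cols), pct, mean
```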
