Similar Literature
Found 20 similar records.
1.
A. M. Sabah M. Digital Signal Processing, 2003, 13(4): 604-622
This paper describes a new algorithm for electrocardiogram (ECG) compression. The main goal of the algorithm is to reduce the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level. It is based on compressing the linearly predicted residuals of the wavelet coefficients of the signal. The input signal is divided into blocks, each block goes through a discrete wavelet transform, and the resulting wavelet coefficients are linearly predicted, yielding a set of uncorrelated transform-domain signals. These signals are compressed using various coding methods, including modified run-length and Huffman coding techniques. The error between the wavelet coefficients and the predicted coefficients is minimized in order to obtain the best predictor. The method is assessed through the percent root-mean-square difference (PRD) and visual inspection. This compression method achieves small PRD and a high compression ratio with low implementation complexity. Finally, the performance of the ECG compression algorithm is evaluated on data from the MIT-BIH database.
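A minimal sketch of the transform-and-predict front end this abstract describes, assuming a one-level Haar DWT in place of the paper's unspecified wavelet, and a fixed first-order predictor where the paper fits the predictor by minimizing the residual; the block length and coefficient a are illustrative:

```python
import numpy as np

def haar_dwt(block):
    # One-level Haar transform: split into approximation and detail bands.
    even, odd = block[0::2], block[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def lp_residual(coeffs, a=0.95):
    # Residual of a first-order predictor c[n] ~ a * c[n-1]; the paper
    # chooses the predictor that minimizes this residual.
    res = coeffs.copy()
    res[1:] = coeffs[1:] - a * coeffs[:-1]
    return res

block = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in for one ECG block
approx, detail = haar_dwt(block)
# Residuals are much smaller than the raw coefficients, which is what
# makes the run-length/Huffman stage effective.
print(np.abs(lp_residual(approx)).mean(), np.abs(approx).mean())
```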

2.
A Lossless Image Compression Algorithm Based on 3-Parameter Variable-Length Coding
高健, 饶珺, 孙瑞鹏. 《自动化学报》2013, 39(8): 1289-1294
Through a study and analysis of the Huffman coding method, a lossless image compression algorithm based on 3-parameter variable-length coding (3-PVLC) is proposed. After the image data are converted into hybrid difference data, 3-PVLC is applied to the difference data as a first-stage encoding, and an adaptive run-length reduction method then re-encodes the resulting binary bitstream in a second stage. The encoding/decoding scheme is flexible: depending on requirements, one may perform only the first-stage 3-PVLC encoding or add the second-stage encoding on top of it, and a relatively high compression ratio is achieved.

3.
To reduce test data storage volume, an effective new test data compression code, PTIDR coding, is proposed, and a compression/decompression scheme built on it is constructed. PTIDR coding achieves higher compression ratios than the FDR, EFDR, and Alternating FDR codes; its decoder is also simple, easy to implement, and effective at reducing hardware overhead. Compared with Selective Huffman and CDCR coding, PTIDR coding obtains a higher ratio of compression rate to area overhead. In particular, when the probability of 0s in the difference test set satisfies p ≥ 0.7610, PTIDR coding achieves a higher compression ratio than FDR coding, thereby lowering chip test cost.

4.
A lossless compression method for medical images based on DPCM and Hilbert curves is proposed. Differential pulse-code modulation (DPCM) first predicts the image, yielding a difference image; a Hilbert curve then scans the image pixels into a one-dimensional sequence; finally, the one-dimensional data are compressed with Huffman coding, run-length coding, and dictionary coding, respectively. Experimental results show that the Hilbert scan increases the correlation between pixels and contributes to a higher compression ratio.
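A sketch of the scan-and-predict stage of such a method: d2xy is the standard Hilbert index-to-coordinate conversion, and the DPCM step takes first-order differences along the scan order; the entropy coders are omitted, and a square image with power-of-two side is assumed.

```python
import numpy as np

def d2xy(n, d):
    # Standard conversion of Hilbert index d to (x, y) on an n x n grid
    # (n must be a power of two).
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_dpcm(img):
    n = img.shape[0]
    scan = np.array([img[y, x] for x, y in (d2xy(n, d) for d in range(n * n))],
                    dtype=np.int16)
    residual = scan.copy()
    residual[1:] = scan[1:] - scan[:-1]  # DPCM: neighbor prediction along the curve
    return residual
```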

5.
In this paper, we present a lossy compression scheme based on the 3D fast wavelet transform for coding medical video. This type of video has special features, such as its gray-scale representation, its very small interframe variation, and the quality requirements on the reconstructed images. These characteristics, as well as the social impact of the intended applications, demand coding schemes designed and implemented specifically to exploit them. We analyze different parameters of the codification process, such as the choice of wavelet function, the number of decomposition levels, the way the thresholds are chosen, and the methods selected for quantization and entropy encoding. To enhance our original encoder, we propose several improvements in the entropy encoder: 3D-conscious run-length coding, hexadecimal coding, and the use of arithmetic coding instead of Huffman coding. Our coder achieves a good trade-off between compression ratio and quality of the reconstructed video. We have also compared our scheme with MPEG-2 and EZW, obtaining compression ratios up to 119% and 46% better, respectively, at the same PSNR.

6.
We present a new technique for automatic data reduction and pattern recognition of time-domain signals such as electrocardiogram (ECG) waveforms. Data reduction is important because only a few significant features of each heart beat are of interest in pattern analysis, while the patient data collection system acquires an enormous number of data samples. We present a significant point extraction algorithm, based on the analysis of curvature, that identifies the data samples carrying clinically significant information in the ECG waveform. Data reduction rates of up to 1:10 are possible without significantly distorting the appearance of the waveform. The method is unique in that the same procedures support both data reduction and pattern recognition. Part II of this work deals specifically with pattern analysis of normal and abnormal heart beats.
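A hedged sketch of curvature-based significant point selection: discrete curvature is approximated from first and second differences, and the highest-curvature fraction of samples is kept. The paper's actual criterion and thresholds may differ; keep_frac is an illustrative parameter.

```python
import numpy as np

def significant_points(signal, keep_frac=0.1):
    y = np.asarray(signal, dtype=float)
    dy = np.gradient(y)
    d2y = np.gradient(dy)
    # Curvature of the graph of y: |y''| / (1 + y'^2)^(3/2).
    curvature = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5
    n_keep = max(2, int(keep_frac * len(y)))   # ~1:10 reduction by default
    idx = np.sort(np.argsort(curvature)[-n_keep:])
    return idx, y[idx]
```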

7.
This paper describes the application of text compression methods to machine-readable files of nucleic acid and protein sequence data. Two main methods are used to reduce the storage requirements of such files: n-gram coding and run-length coding. A Pascal program combining both techniques achieved a compression figure of 74.6% for the GenBank database, and a program using only n-gram coding achieved 42.8% for the Protein Identification Resource database.
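A sketch, under stated assumptions, of the two techniques named: n-gram coding here packs 4-grams of nucleotides at 2 bits per base, and run-length coding collapses repeated bytes. These are generic illustrations, not the paper's Pascal implementation.

```python
BASE2BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_bases(seq):
    # 4 bases per output byte; a trailing partial group is ignored here.
    out = bytearray()
    for i in range(0, len(seq) - len(seq) % 4, 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASE2BITS[ch]
        out.append(b)
    return bytes(out)

def run_length(data):
    # (value, run-length) pairs, runs capped at 255 to fit one byte each.
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

print(run_length(pack_bases("AAAAAAAAGGGGTTTT")))  # [(0, 2), (170, 1), (255, 1)]
```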

8.
Exploring Data Compression with Optimized Huffman Coding
Data compression is a highly active topic in computer science today. Huffman coding, the most widely used variable-length lossless compression method, plays a very important role in data compression programs. Through a discussion of traditional static Huffman coding and a comparison with dynamic (adaptive) Huffman coding, this article studies an improved data compression algorithm and implements it as a program.

9.
Through a study of the Huffman coding method, this paper proposes a multi-parameter lossless data compression algorithm. Based on counts of the elements in the original data set, the data set is merged repeatedly until the merged data set satisfies the optimality condition of Huffman coding, which yields a small data-merging correspondence table, and each code is split into a unary instantaneous code (prefix) and a distinguishing code (suffix). The different starting points of the successive merges form the multiple parameters of the method: combined with the code prefixes and suffixes, they uniquely represent the original data, so the code table is eliminated. Decoding restores the original data without bit-by-bit matching. Compared with traditional methods, this multi-parameter lossless compression scheme has a simple code structure, low computational overhead, and relatively high encoding and decoding efficiency.

10.
Image compression is an important technique in digital image processing. This paper studies two entropy-coding image compression methods based on statistical properties, Shannon coding and Huffman coding, implements both in C#, and compares them experimentally. The experiments show that Huffman coding is far more efficient than Shannon coding: Shannon codes occupy more storage and carry less information per code bit, while Huffman codes save storage and carry more information per code bit.
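To make the comparison concrete, here is a minimal Shannon coder (codeword i is the first ceil(-log2 p_i) bits of the binary expansion of the cumulative probability F_i) together with its average length against the source entropy; the symbol probabilities are illustrative, and a Huffman coder on the same source would approach the entropy more closely:

```python
from math import ceil, log2

def shannon_code(probs):
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    codes, F = {}, 0.0
    for sym, p in items:
        frac, bits = F, []
        for _ in range(ceil(-log2(p))):      # l_i = ceil(-log2 p_i) bits
            frac *= 2
            bit = int(frac)
            frac -= bit
            bits.append(str(bit))
        codes[sym] = "".join(bits)
        F += p
    return codes

probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
codes = shannon_code(probs)                  # {'a': '00', 'b': '01', 'c': '101', 'd': '1110'}
avg = sum(probs[s] * len(c) for s, c in codes.items())
H = -sum(p * log2(p) for p in probs.values())
print(f"avg {avg:.2f} bits/symbol vs entropy {H:.2f}")   # 2.40 vs 1.85
```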

11.
With today's massive volumes of image data, image compression coding is more important than ever. Image compression methods can be classified in many ways; lossy (rate-distortion-constrained) coding is widely used because of its high compression ratio. Common lossy coders rely on the cosine transform, the Karhunen-Loève (K-L) transform, or the wavelet transform. This paper takes a different route, applying the fast Fourier transform together with Huffman coding to lossy compression of the standard Lena image, and obtains good compression results.
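A minimal sketch of the FFT-based lossy stage, assuming the compression comes from discarding small-magnitude spectral coefficients before entropy coding; keep_frac is an illustrative parameter, not a value from the paper:

```python
import numpy as np

def fft_compress(img, keep_frac=0.05):
    F = np.fft.fft2(img.astype(float))
    mags = np.abs(F).ravel()
    # Keep only the top keep_frac fraction of coefficients by magnitude;
    # the survivors would then be quantized and Huffman-coded.
    thresh = np.sort(mags)[int((1 - keep_frac) * mags.size)]
    F_sparse = np.where(np.abs(F) >= thresh, F, 0)
    return np.real(np.fft.ifft2(F_sparse))
```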

12.
By analyzing the temporal, parallel, and causal relationships among source symbols, a unified model based on a correlation framework is constructed for the correlation structure of source symbols. Analysis within this framework shows that run-length coding corresponds to a parallel source-symbol structure, dictionary coding to a causal structure, and adaptive Huffman coding to a temporal structure. This not only provides a unified way of describing these lossless compression methods but also suggests new directions for improving correlation-based coding.

13.
An efficient entropy-coding method that applies generally to mesh topology compression is proposed. Unlike previous work, which compresses the topology stream produced by mesh traversal with arithmetic coding or Huffman coding alone, this method first computes the Huffman code of each symbol in the stream and then encodes the Huffman values with context-based arithmetic coding, using the second-to-last symbol of the already encoded sequence as the context, thereby compressing the mesh topology information effectively. Experimental results show that the entropy-coding method applies to the topology streams produced by all kinds of mesh topology compression methods, and its results generally beat the entropy of the topology stream sequence, i.e., the best compression ratio that most individual topology compression algorithms achieve.

14.
This paper improves the Huffman coding algorithm, specifically the sorting method used while building the Huffman tree, proposing a new approach based on heap sort. Using heap sort to find the minimum-weight nodes during Huffman coding greatly reduces the number of memory reads and writes, improving response speed and thus Huffman coding efficiency. Based on an analysis of JPEG's Huffman compression algorithm and simulation experiments on four JPG files comparing the improved and traditional algorithms, the improved algorithm outperforms the classical Huffman algorithm in both compression ratio and compression time.
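A compact heap-based Huffman coder in the spirit of this abstract: Python's heapq (a binary min-heap) delivers the two lowest-frequency nodes in O(log n) per merge instead of re-sorting the node list each round. The input is illustrative.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, subtree); the tie-breaker
    # prevents Python from ever comparing subtrees directly.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two minimum-weight nodes,
        f2, _, right = heapq.heappop(heap)   # found via the heap
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # degenerate one-symbol source
    walk(heap[0][2])
    return codes

print(huffman_codes(b"abracadabra"))
```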

15.
We explore the effect of using a bagged decision tree (BDT) as an ensemble learning method, together with the proposed time-domain feature extraction methods, on electrocardiogram (ECG) arrhythmia beat classification, comparing it with a single decision tree (DT) classifier. The RR interval is the main property that defines an irregular heart rhythm; its ratio to the previous value and its difference from the mean value are used as morphological features. The form factor, its ratio to the previous value, and its difference from the mean value are used to express ECG waveform complexity. In addition, skewness and second-order linear predictive coding coefficients are added to the feature vectors of 56,569 ECG heart beats obtained from the MIT-BIH arrhythmia database as time-domain features. A quarter of the ECG heart beat samples is used as test data for DT and BDT. The performance of both classifiers is evaluated using metrics such as accuracy, sensitivity, specificity, and the Kappa coefficient, and the performance of the BDT classifier is examined for up to 75 base learners. By these measures, BDT yields better predictive performance than DT. BDT with 69 base learners achieves 99.51 % accuracy, 97.50 % sensitivity, 99.80 % specificity, and a Kappa coefficient of 0.989, while DT gives 98.78 %, 96.05 %, 99.57 %, and 0.975, respectively. These metrics show that the suggested BDT increases the number of successfully identified arrhythmia beats. Moreover, BDT with at least three base learners has higher distinguishing capability than DT.
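A hedged sketch of this classifier setup using scikit-learn; the synthetic features below merely stand in for the paper's RR-interval, form-factor, skewness, and LPC features, and n_estimators=69 mirrors the best-performing ensemble size reported:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Stand-in for the extracted time-domain feature vectors and beat labels.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
# A quarter of the beats serve as test data, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

bdt = BaggingClassifier(DecisionTreeClassifier(), n_estimators=69)
bdt.fit(X_tr, y_tr)
pred = bdt.predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```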

16.
A Quasi-Lossless Compression Algorithm for Civilian GPS Data
To improve the compression ratio and speed for positioning data within civilian GPS accuracy, and building on a comparative analysis of Huffman coding and arithmetic coding, this paper combines predictive coding with Huffman coding into a quasi-lossless compression algorithm for positioning information within the accuracy range of civilian GPS. The algorithm removes redundant information through compression preprocessing and secondary quantization and uses predictive coding to raise coding efficiency, reaching an overall compression ratio of 87%. Tested on an MSP430 microcontroller with 668 KB of data, it achieved a compression ratio of 87.1% in a processing time of 31.4 s, closely matching the simulation results. The experiments show that, after optimization, the algorithm places low demands on hardware, improves compression ratio and speed, saves storage resources, and reduces communication costs during data transmission.
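A sketch of the predictive-coding front end under stated assumptions: fixes are quantized to a 1e-5 degree grid (roughly 1 m, within civilian GPS accuracy; the paper's quantization step is not given here), and each fix is predicted from the previous one so that only small integer deltas reach the Huffman coder.

```python
def delta_encode(fixes, step=1e-5):
    # fixes: list of (lat, lon) in degrees.
    q = [(round(lat / step), round(lon / step)) for lat, lon in fixes]
    deltas = [q[0]]                      # first fix stored whole
    for (la0, lo0), (la1, lo1) in zip(q, q[1:]):
        deltas.append((la1 - la0, lo1 - lo0))
    return deltas                        # small integers, cheap to Huffman-code

track = [(39.90421, 116.40739), (39.90424, 116.40747), (39.90430, 116.40752)]
print(delta_encode(track))               # [(3990421, 11640739), (3, 8), (6, 5)]
```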

17.
The ever-increasing cost of integrated circuit testing is becoming harder and harder to accept, so a simple and effective solution is proposed. The scheme applies a cyclic-shift technique to test data compression, exploiting the don't-care bits in the test set more effectively than ordinary shifting. Combined with XOR operations, the scheme accumulates don't-care bits, further improving the compatibility and inverse compatibility between each test vector and its reference vector. During encoding, the possible shift states are tallied, a Huffman tree is built, and the optimal code assignment is identified, which increases the use of short codewords and reduces the frequency of long ones. The analysis and experiments presented show that, at very low additional hardware cost, the scheme both raises the test data compression ratio and shortens test time, outperforming published run-length coding schemes and other compression codes of the same type.

18.
Image compression is an important technique in digital image processing. This paper studies three entropy-coding image compression methods based on statistical properties: Shannon coding, Shannon-Fano coding, and Huffman coding. The three methods are implemented and compared experimentally in C#, and the characteristics of each algorithm are analyzed from the results. The experiments show that Huffman coding saves the most storage and carries the most information per code bit; Shannon-Fano coding takes slightly more storage than Huffman coding and carries slightly less information per code bit; and Shannon coding takes the most storage and carries the least information per code bit.

19.
After discussing the strengths and weaknesses of the static and adaptive Huffman data compression algorithms, this paper introduces two parameters and a node-symbol frequency table to propose an adaptive Huffman compression algorithm that groups symbols of equal frequency, reducing the number of levels in the Huffman tree. Experiments monitoring the temperature and humidity of golf course turf show that the algorithm's compression ratio clearly improves on adaptive Huffman coding; the algorithm is simple to encode and fast, making it suitable for sensor nodes in energy-constrained wireless sensor networks.

20.
To address the limited embedding capacity of current reversible data hiding algorithms in the encrypted domain, a high-capacity, reversible, separable information hiding algorithm based on dual coding of prediction errors is proposed. To reserve room for the secret information, the image owner first preprocesses the image with prediction-error-based Huffman coding and extended run-length coding and then encrypts it; the data hider embeds the secret information into the encrypted image; and the receiver can extract the secret information exactly with the data-hiding key and recover the image losslessly with the decryption key, in either order. Experimental results show that the dual coding of prediction errors effectively increases the embedding capacity.
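A sketch of the prediction-error stage with a deliberately simple left-neighbor predictor (the paper's predictor may differ): the errors cluster sharply around zero, which is what lets Huffman and extended run-length coding free up embedding room.

```python
import numpy as np

def prediction_errors(img):
    p = img.astype(np.int16)
    err = p.copy()
    err[:, 1:] = p[:, 1:] - p[:, :-1]    # predict each pixel from its left neighbor
    return err

demo = np.array([[100, 101, 103, 103],
                 [ 99, 100, 102, 103]], dtype=np.uint8)
print(prediction_errors(demo))           # first column kept; the rest are small errors
```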

