Similar articles (20 results)
1.
This paper evaluates the compression performance and characteristics of two wavelet coding schemes for electrocardiogram (ECG) signals suitable for real-time telemedical applications. The two proposed methods, namely the optimal zonal wavelet coding (OZWC) method and the wavelet transform higher-order-statistics-based coding (WHOSC) method, are used to assess ECG compression issues. The WHOSC method employs higher order statistics (HOS) and uses multirate processing with an autoregressive HOS model to increase the robustness of the coding scheme. The OZWC algorithm is based on the optimal wavelet-based zonal coding method developed for the class of discrete "Lipschitzian" signals. Both methodologies were evaluated using the normalized rms error (NRMSE), the average compression ratio (CR), and bits-per-sample criteria, applied to abnormal clinical ECG data samples selected from the MIT-BIH database and the Creighton University Cardiac Center database. Simulation results illustrate that both methods can enhance medical data compression in a hybrid mobile telemedical system that integrates these algorithmic approaches for real-time ECG data transmission, achieving high CRs and low NRMSE, especially in low-bandwidth mobile systems.

2.
This paper introduces a new methodology for compressing ECG signals in an automatic way while guaranteeing signal interpretation quality. The approach is based on an estimate of the noise in the ECG signal, which is used as a compression threshold in the coding stage. The Set Partitioning in Hierarchical Trees algorithm is used to code the signal in the wavelet domain. Forty ECG records from two databases commonly used in ECG compression were considered to validate the approach. Three cardiologists participated in the clinical trial, using mean opinion score tests to rate signal quality. Results showed that the approach not only achieves very good ECG reconstruction quality but also enhances the visual quality of the ECG signal.
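The core idea, estimate the noise level and let it set the compression threshold, can be sketched with the robust median-absolute-deviation noise estimator applied to the finest-scale wavelet details. The estimator, the Haar transform, and the synthetic signal are stand-ins; the paper's actual pipeline couples its noise estimate with SPIHT coding.

```python
# Hypothetical sketch of "noise level drives the compression threshold":
# the noise standard deviation is estimated from the finest-scale Haar
# detail coefficients via the MAD rule (median(|d|) / 0.6745 for Gaussian
# noise), and that estimate could then serve as a keep/discard threshold.
import math, random, statistics

def haar_detail(x):
    s = 1 / math.sqrt(2)
    return [(a - b) * s for a, b in zip(x[0::2], x[1::2])]

def estimate_noise_sigma(x):
    """MAD estimator on finest-scale details; robust to the smooth signal part."""
    d = haar_detail(x)
    return statistics.median(abs(v) for v in d) / 0.6745

random.seed(1)
clean = [math.sin(2 * math.pi * k / 500) for k in range(1000)]
noisy = [v + random.gauss(0, 0.05) for v in clean]
sigma = estimate_noise_sigma(noisy)
print(round(sigma, 3))   # close to the injected 0.05 noise level
```

Because a slowly varying signal contributes almost nothing to the finest details, the estimate tracks the additive noise rather than the ECG waveform itself, which is what makes it usable as a quality-preserving threshold.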

3.
A multilead electrocardiography (ECG) data compression method is presented. First, a linear transform is applied to the standard ECG lead signals, which are highly correlated with each other. In this way a set of uncorrelated transform domain signals is obtained. Then, the resulting transform domain signals are compressed using various coding methods, including multirate signal processing and transform domain coding techniques.
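The decorrelation step can be illustrated with the simplest possible linear transform: mapping two equal-power correlated "leads" to their scaled sum and difference. This is an assumption-laden stand-in (the paper adapts its transform to the lead covariance, and sum/difference only decorrelates leads of roughly equal power), but it shows the effect.

```python
# Hypothetical sketch: two synthetic leads sharing a common component are
# mapped to (scaled) sum and difference channels, which drives their
# cross-correlation toward zero. Signal model and noise levels are made up.
import math, random

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

random.seed(0)
common = [math.sin(2 * math.pi * k / 80) for k in range(800)]
lead1 = [c + random.gauss(0, 0.1) for c in common]
lead2 = [c + random.gauss(0, 0.1) for c in common]

s = 1 / math.sqrt(2)
t1 = [s * (a + b) for a, b in zip(lead1, lead2)]   # "sum" channel
t2 = [s * (a - b) for a, b in zip(lead1, lead2)]   # "difference" channel

print(round(correlation(lead1, lead2), 2))  # strongly correlated leads
print(round(correlation(t1, t2), 2))        # much smaller after the transform
```

After the transform, nearly all signal energy sits in the sum channel, so the difference channel can be coded with far fewer bits.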

4.
A genetic segmentation of ECG signals

5.
Mean-shape vector quantizer for ECG signal compression
A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the mean values of short ECG signal segments are quantized as scalars and their waveshapes are coded through a vector quantizer; this contrasts with compressing the single-lead ECG by average beat subtraction and residual differencing. An entropy encoder is applied to both mean and vector codes to further increase compression without degrading the quality of the reconstructed signals. In this paper, the fundamentals of MSVQ are discussed, along with various parameter specifications such as the duration of signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, and its computational complexity is analyzed with real-time implementation in mind. MSVQ has been found to be an efficient compression method, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 bps. This compression factor makes the technique especially attractive for ambulatory monitoring.
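A minimal sketch of the mean-shape split, assuming a tiny hand-made codebook and an illustrative scalar quantizer step (neither is from the paper): each segment is separated into a scalar mean and a zero-mean shape vector, and the shape is replaced by its nearest codeword.

```python
# Hypothetical toy MSVQ: quantize the segment mean as a scalar, code the
# zero-mean shape by nearest-codeword search, reconstruct as mean + codeword.
def quantize_mean(m, step=0.25):
    """Uniform scalar quantizer for the segment mean (step is illustrative)."""
    return round(m / step) * step

def nearest_codeword(shape, codebook):
    """Minimum squared-error codeword search."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(shape, c)))

def msvq_encode_decode(segment, codebook):
    mean = sum(segment) / len(segment)
    shape = [v - mean for v in segment]          # zero-mean waveshape
    qmean = quantize_mean(mean)
    code = nearest_codeword(shape, codebook)
    return [qmean + v for v in code]             # reconstructed segment

codebook = [
    [0.0, 0.0, 0.0, 0.0],
    [-0.5, 0.5, 0.5, -0.5],
    [-0.75, -0.25, 0.25, 0.75],   # rising "ramp" shape
    [0.75, 0.25, -0.25, -0.75],
]
segment = [1.2, 1.7, 2.2, 2.7]   # ramp riding on a mean near 1.95
print(msvq_encode_decode(segment, codebook))   # [1.25, 1.75, 2.25, 2.75]
```

Only the mean index and codeword index need to be stored per segment, which is where the compression comes from; entropy coding of those two indices, as in the paper, would shrink them further.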

6.
ECG signal compression using analysis by synthesis coding
In this paper, an electrocardiogram (ECG) compression algorithm, called the analysis by synthesis ECG compressor (ASEC), is introduced. The ASEC algorithm is based on analysis by synthesis coding and consists of a beat codebook, long- and short-term predictors, and an adaptive residual quantizer. The compression algorithm uses a defined distortion measure to efficiently encode every heartbeat with minimum bit rate while maintaining a predetermined distortion level. The algorithm was implemented and tested with both the percentage rms difference (PRD) measure and the recently introduced weighted diagnostic distortion (WDD) measure, and was evaluated on the MIT-BIH Arrhythmia Database. A mean compression rate of approximately 100 bits/s (compression ratio of about 30:1) has been achieved with good reconstructed signal quality (WDD below 4% and PRD below 8%). The ASEC was compared with several well-known ECG compression algorithms and was found to be superior at all tested bit rates. A mean opinion score (MOS) test was also applied, with three independent expert cardiologists as testers. As in the quantitative test, the proposed compression algorithm was found to be superior to the other tested compression algorithms.

7.
We propose a novel scheme for signal compression based on the discrete wavelet packet transform (DWPT) decomposition. The mother wavelet and the basis of wavelet packets were optimized, and the wavelet coefficients were encoded with a modified version of the embedded zerotree algorithm. This signal-dependent compression scheme was designed by a two-step process. The first step (internal optimization) was best basis selection, performed for a given mother wavelet; for this purpose, three additive cost functions were applied and compared. The second step (external optimization) was the selection of the mother wavelet based on the minimal distortion of the decoded signal at a fixed compression ratio. The mother wavelet was parameterized in the multiresolution analysis framework by the scaling filter, which is sufficient to define the entire decomposition in the orthogonal case. The method was tested on two sets of ten electromyographic (EMG) and ten electrocardiographic (ECG) signals that were compressed with compression ratios in the range of 50%-90%. For 90% compression of EMG (ECG) signals, the percent residual difference after compression decreased from (mean +/- SD) 48.6 +/- 9.9% (21.5 +/- 8.4%) with the discrete wavelet transform (DWT) using the worst-performing wavelet to 28.4 +/- 3.0% (6.7 +/- 1.9%) with DWPT using optimal basis selection and wavelet optimization. In conclusion, best basis selection and optimization of the mother wavelet through parameterization led to substantial improvement in signal compression performance with respect to DWT and random selection of the mother wavelet. The method provides an adaptive approach to optimal signal representation for compression and can thus be applied to any type of biomedical signal.

8.
For the compression of medical signals such as the electrocardiogram (ECG), excellent reconstruction quality of a highly compressed signal can be obtained using a wavelet-based approach. The most widely used objective quality criterion for the compressed ECG is the percent root-mean-square difference (PRD). In this paper, given a user-specified PRD, an algorithm is proposed to meet the PRD demand by searching for an appropriate bit rate in an automatic, smooth, and fast manner for wavelet-based compression. The bit-rate search is modeled as a root-finding problem for a one-dimensional function, where an unknown rate-distortion curve represents the function and the desired rate is the root to be sought. A solution derived from root-finding methods in numerical analysis is proposed and incorporated into a well-known wavelet-based coding strategy, set partitioning in hierarchical trees. ECG signals taken from the MIT/BIH database are tested, and excellent results in terms of convergence speed, quality variation, and coding performance are obtained.
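The root-finding formulation can be sketched in a few lines: since distortion falls monotonically with bit rate, the rate that hits a target PRD is the root of a monotone 1-D function. A toy analytic curve stands in for the real coder, and bisection stands in for the paper's root-finding iteration.

```python
# Hypothetical sketch of the rate search: find the bit rate at which a
# monotone rate-distortion curve crosses the user-specified PRD.
def prd_at_rate(rate):
    """Toy monotone rate-distortion curve: PRD (%) falls as rate rises."""
    return 40.0 / (1.0 + rate)

def find_rate(target_prd, lo=0.0, hi=64.0, tol=1e-6):
    """Bisection on the monotone curve; a real coder would be probed instead."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prd_at_rate(mid) > target_prd:
            lo = mid          # too much distortion: spend more bits
        else:
            hi = mid
    return (lo + hi) / 2

rate = find_rate(target_prd=4.0)
print(round(rate, 3))   # 40/(1+r) = 4  =>  r = 9
```

In the actual setting each evaluation of `prd_at_rate` means running the coder at that rate, so faster-converging root finders pay off, which is exactly the paper's motivation.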

9.
10.
The application of advanced motion compensation techniques, namely control grid interpolation (CGI) and overlapped block motion compensation (OBMC), to video coding systems provides significant performance advantages, in terms of compression ratio and visual quality, over traditional block-matching motion compensation. However, the two-dimensional (2-D) interdependence among motion vectors introduced by these compensation frameworks makes the problem of finding rate-distortion-optimal motion vectors computationally prohibitive. Thus, iterative optimization techniques are often used to achieve good compensation performance. While most reported optimization algorithms use a block-matching algorithm to obtain an initial estimate and then successively optimize each motion vector, the over-relaxed motion-vector dependency relations often result in considerable performance degradation. In view of this problem, we present a new optimization scheme for dependent motion-vector optimization problems, based on dynamic programming. Our approach efficiently decomposes 2-D dependency problems into a series of one-dimensional (1-D) dependency problems. We show that a reliable initial estimate of motion vectors can be obtained efficiently by considering only the dependency in the rate term. We also show that, at the iterative optimization stage, an effective logarithmic search strategy can be used with dynamic programming to reduce the complexity of the distortion computation. Compared to conventional iterative approaches, our experimental results demonstrate that our algorithm provides superior rate and distortion performance while maintaining reasonable complexity.

11.
This paper addresses the problem of electrocardiogram (ECG) signal compression, with the goal of providing a simple compression method that outperforms previously proposed methods. Starting from a study of the nature of the ECG signal, a way was found to optimize the rate-quality trade-off by means of differential pulse code modulation (DPCM) and subframe-by-subframe processing. In particular, the proposed method includes two kinds of adaptation: short-time and long-time. The switched quantization, i.e. the short-time adaptation of the DPCM quantizer range, is performed according to the statistics of the ECG signal within particular subframes. This short-time adaptation enables fine-grained compression control as well as constant ECG signal quality in segments of both low and high amplitude dynamics. In addition, by grouping the subframes of a particular frame into two groups according to their dynamics and performing long-time DPCM quantizer range adaptation based on the statistics of the groups, an important quality gain is achieved with an insignificant rate increase. The two iterative approaches proposed in the paper differ mainly in whether the long-time range adaptations of the DPCM quantizers are performed according to the maximum amplitudes or the average powers of the signal difference determined over all subframes within a given group. The benefits of both approaches are shown and discussed in the paper.
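The short-time (per-subframe) range adaptation can be sketched as follows: first differences are quantized with a uniform quantizer whose range is reset from each subframe's own peak difference. Subframe length, step count, and the peak-based range rule are illustrative assumptions, not the paper's exact parameters.

```python
# Hypothetical sketch of DPCM with switched (per-subframe) quantizer range:
# each subframe's differences are quantized with a uniform quantizer whose
# step is derived from that subframe's peak difference.
import math

def dpcm_subframe(x, prev, levels=16):
    """Encode/decode one subframe; `prev` is the last reconstructed sample."""
    diffs = [x[0] - prev] + [b - a for a, b in zip(x, x[1:])]
    peak = max(abs(d) for d in diffs) or 1.0
    step = 2 * peak / levels                 # short-time range adaptation
    rec, last = [], prev
    for d in diffs:
        q = round(d / step) * step           # quantized difference
        last += q
        rec.append(last)
    return rec

def dpcm_encode_decode(x, subframe_len=8, levels=16):
    out, prev = [], 0.0
    for i in range(0, len(x), subframe_len):
        rec = dpcm_subframe(x[i:i + subframe_len], prev, levels)
        out += rec
        prev = rec[-1]                       # closed loop at subframe borders
    return out

signal = [math.sin(2 * math.pi * k / 40) for k in range(80)]
rec = dpcm_encode_decode(signal)
err = max(abs(a - b) for a, b in zip(signal, rec))
print(round(err, 3))   # small, since each subframe's range fits its dynamics
```

Because the quantizer range shrinks in low-dynamics subframes, the quantization step (and hence the error) shrinks with it; that is the mechanism behind the "constant quality across low- and high-amplitude segments" claim.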

12.
In this paper, we use the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals. The MMP is based on approximate multiscale pattern matching, encoding segments of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using concatenated and displaced versions of previously encoded segments, so MMP builds its own dictionary while the input data is being encoded. The MMP can easily be adapted to compress signals of any number of dimensions and has been successfully applied to two-dimensional (2-D) image data. The quasi-periodic nature of ECG signals makes them suitable for compression using recurrent patterns, as MMP does. However, for MMP to compress ECG signals efficiently, several adaptations had to be made, such as the use of a continuity criterion among segments and the adoption of a prune-join strategy for segmentation. The rate-distortion performance achieved was very good. We show simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of high computational complexity.

13.
In this study, a new compression algorithm for ECG signals is proposed, based on selecting important subbands of the wavelet packet transform (WPT) and applying a subband-dependent quantization algorithm. To this end, the WPT is first applied to the ECG signal, and the more important subbands are selected according to their Shannon entropy. In the next step, content-based quantization and denoising are applied to the coefficients of the selected subbands. Finally, arithmetic coding is employed to produce the compressed data. The performance of the proposed method is evaluated on the MIT-BIH Arrhythmia database using the compression rate (CR), the percentage root-mean-square difference (PRD) as a signal distortion measure, and the wavelet energy-based diagnostic distortion (WEDD) as a diagnostic distortion measure. The average CR of the proposed method is 29.1, its average PRD is below 2.9%, and its WEDD is below 3.2%. These results demonstrate that the proposed method performs well compared to state-of-the-art compression algorithms.
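The subband-selection step can be sketched with a two-level Haar wavelet packet: subbands judged unimportant are zeroed before reconstruction. As a simplification, importance is scored here by each subband's share of total energy, a stand-in for the paper's Shannon-entropy criterion; the depth, threshold, and transform are all illustrative.

```python
# Hypothetical sketch of important-subband selection on a full two-level
# Haar wavelet packet: low-energy subbands are discarded, and the
# reconstruction error stays small because they carried little energy.
import math

def haar_split(x):
    s = 1 / math.sqrt(2)
    return ([(a + b) * s for a, b in zip(x[0::2], x[1::2])],
            [(a - b) * s for a, b in zip(x[0::2], x[1::2])])

def haar_merge(lo, hi):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(lo, hi):
        out += [(a + d) * s, (a - d) * s]
    return out

def wpt2(x):
    """Full two-level wavelet packet: four subbands."""
    lo, hi = haar_split(x)
    return [*haar_split(lo), *haar_split(hi)]

def iwpt2(bands):
    return haar_merge(haar_merge(bands[0], bands[1]),
                      haar_merge(bands[2], bands[3]))

x = [math.sin(2 * math.pi * k / 64) for k in range(256)]
bands = wpt2(x)
total = sum(c * c for b in bands for c in b)
kept = [b if sum(c * c for c in b) / total > 0.01 else [0.0] * len(b)
        for b in bands]                       # drop "unimportant" subbands
rec = iwpt2(kept)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, rec)) /
                sum(a * a for a in x))
print(round(err, 3))   # discarded subbands held almost no energy
```

Only the coefficients of the kept subbands (plus a subband mask) need to be quantized and entropy coded, which is where the compression gain comes from.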

14.
This paper presents efficient denoising and lossy compression schemes for electrocardiogram (ECG) signals based on a modified extended Kalman filter (EKF) structure. We have taken a previously introduced two-dimensional EKF structure and modified its governing equations to extend it to a 17-dimensional case. The new EKF structure is used not only for denoising but also for compression, since it provides estimates for each of the 15 new model parameters; using these parameters, the signal is reconstructed from the dynamical equations of the model. The performance of the proposed method is evaluated using standard denoising and compression efficiency measures. For denoising, the SNR improvement criterion is used, while for compression we consider the compression ratio (CR), the percentage area difference (PAD), and the weighted diagnostic distortion (WDD) measure. Several Massachusetts Institute of Technology–Beth Israel Deaconess Medical Center (MIT–BIH) ECG databases are used for performance evaluation. Simulation results illustrate that both applications can enhance clinical ECG data denoising and compression performance. For denoising, an average SNR improvement of 10.16 dB was achieved, 1.8 dB more than the next-best benchmark methods such as MABWT or EKF2. For compression, the algorithm was extended to include more than five Gaussian kernels. Results show a typical average CR of 11.37:1 with WDD below 1.73%. Consequently, the proposed framework is suitable for a hybrid system that integrates these algorithmic approaches for clean ECG data storage or transmission scenarios with high output SNRs, high CRs, and low distortion.

15.
An efficient compression method is proposed by encoding the sequence index of atoms based on the matching pursuit (MP) algorithm with an over-complete Gabor dictionary, which has the merit of adjusting the compres...

16.
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression, designed to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves several steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm; we choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. The proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root-mean-square difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods; both irregular ECG signals and QRS false-detection cases can thus be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000: it can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance compression performance on irregular ECG data.
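One of the preprocessing steps, length equalization, can be sketched directly: beats of unequal length are resampled onto a common grid by linear interpolation so they can be stacked into a 2-D array, one beat per row. The beat data and target length are made up, and QRS detection and period sorting are omitted.

```python
# Hypothetical sketch of the length-equalization step: resample each
# detected beat to a common length, then stack the beats as image rows.
def resample(beat, new_len):
    """Linear interpolation of `beat` onto `new_len` evenly spaced points."""
    n = len(beat)
    out = []
    for i in range(new_len):
        pos = i * (n - 1) / (new_len - 1)
        j = int(pos)
        frac = pos - j
        nxt = beat[min(j + 1, n - 1)]
        out.append(beat[j] * (1 - frac) + nxt * frac)
    return out

beats = [[0, 1, 0], [0, 0.5, 1, 0.5, 0], [0, 1, 2, 1, 0, 0]]  # unequal lengths
image = [resample(b, 5) for b in beats]                        # 3 x 5 "image"
for row in image:
    print([round(v, 2) for v in row])
```

After alignment and sorting, successive rows of such an image are highly similar, which is exactly the vertical correlation a 2-D codec like JPEG2000 exploits.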

17.
Image compression using binary space partitioning trees
For low bit-rate compression applications, segmentation-based coding methods generally provide high compression ratios compared with traditional (e.g., transform and subband) coding approaches. In this paper, we present a new segmentation-based image coding method that divides the desired image using binary space partitioning (BSP). The BSP approach partitions the image recursively by arbitrarily oriented lines in a hierarchical manner. This recursive partitioning generates a binary tree, referred to as the BSP-tree representation of the image. The most critical aspect of the BSP-tree method is the criterion used to select the partitioning lines of the BSP-tree representation. In previous works, we developed novel methods for selecting the BSP-tree lines and showed that the BSP approach provides efficient segmentation of images. In this paper, we describe a hierarchical approach for coding the partitioning lines of the BSP-tree representation. We also show that the image signal within the different regions (resulting from the recursive partitioning) can be represented using low-order polynomials. Furthermore, we employ an optimal pruning algorithm to minimize the bit rate of the BSP-tree representation for a given budget constraint while minimizing distortion. Simulation results and comparisons with other compression methods are also presented.

18.
In view of the shortcomings of conventional electrocardiogram (ECG) compression algorithms, such as high computational complexity and distortion of the reconstructed signal, a new ECG compression-encoding algorithm based on Set Partitioning In Hierarchical Trees (SPIHT) is proposed after a detailed study of the integer lifting scheme wavelet transform. The proposed algorithm modifies the zero-tree structure of SPIHT, establishes a one-dimensional wavelet coefficient tree for ECG signals, and enhances the efficiency of SPIHT encoding by allocating bits rationally, improving the zero-tree set, and refining the classification method. The improved algorithm requires no floating-point computation or storage and is easy to implement in hardware and software. Experimental results show that the new algorithm has low complexity, high speed, and good signal-reconstruction performance; a high compression ratio is obtained together with high signal fidelity.
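The "no floating-point computation" property rests on the integer lifting scheme, which the simplest case illustrates well: the Haar S-transform maps integer samples to integer approximation and detail coefficients and is exactly invertible. This is a generic sketch of that building block, not the paper's full codec, which would feed the coefficients to its modified SPIHT coder.

```python
# Hypothetical sketch of integer lifting: the Haar S-transform is an
# integer-to-integer wavelet step with an exact inverse, so no
# floating-point storage is ever needed.
def s_transform(x):
    detail = [b - a for a, b in zip(x[0::2], x[1::2])]
    approx = [a + (d >> 1) for a, d in zip(x[0::2], detail)]  # integer average
    return approx, detail

def inverse_s_transform(approx, detail):
    x = []
    for s, d in zip(approx, detail):
        a = s - (d >> 1)       # undo the lifting update exactly
        x += [a, a + d]
    return x

samples = [100, 102, 98, 97, 105, 110, 90, 91]
approx, detail = s_transform(samples)
print(approx, detail)
print(inverse_s_transform(approx, detail) == samples)   # lossless round trip
```

Because both the update `d >> 1` and its inverse use identical integer arithmetic, the round trip is bit-exact regardless of the platform's floating-point behavior.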

19.
We compare ECG data compression algorithms based on signal entropy for a given mean-square-error (MSE) compression distortion. By defining the distortion in terms of the MSE and assuming the ECG signal to be a Gaussian process, we are able to estimate theoretical rate distortion bounds from average ECG power spectra. These rate distortion bounds give estimates of the minimum bits per second (bps) required for storage of ECG data at a given MSE, regardless of compression method. From average power spectra of the MIT/BIH arrhythmia database we have estimated rate distortion bounds for ambulatory ECG data, both before and after average beat subtraction. These estimates indicate that, regardless of distortion, average beat subtraction reduces the theoretical minimum data rate required for ECG storage by approximately 100 bps. Our estimates also indicate that practical ambulatory recording requires a compression distortion on the order of 11 microV rms. We have compared the performance of common ECG compression algorithms on data from the MIT/BIH database, sampled and quantized to give distortion levels of 2, 5, 8, 11, and 14 microV rms. These results indicate that, when sample rates and quantization levels are chosen for optimal rate distortion performance, minimum data rates can be achieved by average beat subtraction followed by first differencing of the residual signal. Achievable data rates approximate our theoretical estimates at low distortion levels and are within 60 bps at higher distortion levels.
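The rate-distortion bound for a Gaussian source with a known power spectrum is computed by reverse water-filling: choose a "water level" theta so the total distortion equals the target MSE, then spend 0.5*log2(S/theta) bits on each spectral band above the level. The flat toy spectrum below is an illustrative assumption (the paper uses average ECG spectra from MIT/BIH), chosen because it has a closed-form answer to check against.

```python
# Hypothetical sketch of the Gaussian rate-distortion bound via reverse
# water-filling: bisect the water level theta until the distortion budget
# is met, then sum the per-band rates.
import math

def rate_for_distortion(spectrum, target_mse):
    """Minimum rate (bits/sample) for a Gaussian source with this spectrum."""
    lo, hi = 0.0, max(spectrum)
    for _ in range(200):
        theta = (lo + hi) / 2
        mse = sum(min(theta, s) for s in spectrum) / len(spectrum)
        if mse < target_mse:
            lo = theta        # distortion budget not used up: raise the level
        else:
            hi = theta
    theta = (lo + hi) / 2
    # Rate: 0.5*log2(S/theta) on bands with S > theta, zero elsewhere.
    return sum(0.5 * math.log2(s / theta)
               for s in spectrum if s > theta) / len(spectrum)

spectrum = [4.0] * 8          # flat spectrum: closed form is 0.5*log2(S/D)
rate = rate_for_distortion(spectrum, target_mse=1.0)
print(round(rate, 3))   # 0.5*log2(4/1) = 1 bit per sample
```

Multiplying the bits-per-sample figure by the sampling rate gives the bps bounds the paper reports; subtracting the average beat before computing the spectrum lowers the spectral levels and hence the bound.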

20.
A new scheme for speech coding and transmission in ATM networks
YANG Zhen, BI Houjie. Journal on Communications (《通信学报》), 2000, 21(5): 23-29
For the emerging ATM mode of communication, this paper proposes a new scheme for variable-rate speech coding and variable-delay transmission. To combine the characteristics of the speech source and of human auditory perception with the statistical multiplexing of ATM networks, and thereby achieve rate-compressed, low-delay speech transmission, the scheme couples the ATM network environment with the selection of the optimal signal-analysis interval and the determination of the coding-system parameters. Signal characteristics are judged using a newly defined distribution entropy, and different processing subsystems are applied to the input signal accordingly. The coder itself consists of wavelet-transform subband decomposition and multiband binary-tree VQ, with an adjustable output rate...
