Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression that better compresses irregular ECG signals by exploiting their inter- and intra-beat correlations. To reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves several steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then compressed by an appropriate image compression algorithm; we choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. The proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving a high compression ratio (CR), low percent root-mean-square difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods; both irregular ECG signals and QRS false-detection cases can therefore be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000: it can be combined with other 2-D preprocessing methods or appropriate codecs to further enhance compression performance on irregular ECG signals.
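The beat-to-image preprocessing described above (QRS segmentation, length equalization, period sorting) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the fixed row width and linear interpolation are assumptions.

```python
import numpy as np

def beats_to_image(ecg, qrs_idx, width=64):
    """Cut an ECG into beats at the given QRS indices, stretch every beat
    to a common width, and sort rows by period so adjacent rows are
    similar -- yielding a smoother "image" for a 2-D codec such as
    JPEG2000.  Helper names and the fixed width are illustrative."""
    beats, periods = [], []
    for a, b in zip(qrs_idx[:-1], qrs_idx[1:]):
        beat = np.asarray(ecg[a:b], dtype=float)
        x_old = np.linspace(0.0, 1.0, len(beat))
        x_new = np.linspace(0.0, 1.0, width)
        beats.append(np.interp(x_new, x_old, beat))   # length equalization
        periods.append(b - a)
    order = np.argsort(periods)                       # period sorting
    return np.vstack([beats[i] for i in order])

# toy signal: three "beats" of different lengths
sig = np.concatenate([np.sin(np.linspace(0, np.pi, n)) for n in (90, 110, 100)])
img = beats_to_image(sig, [0, 90, 200, 300], width=64)
print(img.shape)  # (3, 64)
```

Sorting the rows by period places beats of similar length next to each other, which is why the resulting image stays smooth even when the underlying rhythm is irregular.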

2.
This paper presents a neural signal acquisition method based on compressive sensing, addressing the data-volume and power-consumption constraints faced by conventional wireless neural recording systems. Given the sparsity requirement of compressive sensing, the method can also be applied to the acquisition and compression of other biopotential signals. Simulations verify a compression ratio of up to 10x with little information loss in the reconstructed signal. Moreover, whereas previous neural recording systems capture only part of each action potential and therefore lose information, the proposed method enables continuous signal acquisition.
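The compressive-sensing pipeline the abstract relies on, i.e. taking a few random measurements of a sparse signal and recovering it at the receiver, can be sketched with a toy orthogonal matching pursuit (OMP) recovery. The sizes here (2x undersampling, spike-sparse signal) are illustrative, not the 10x ratio or the recovery algorithm reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse spikes
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random sensing matrix
y = Phi @ x                              # compressed acquisition (m < n)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        A = Phi[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
    xhat = np.zeros(Phi.shape[1])
    xhat[support] = coef
    return xhat

xhat = omp(Phi, y, k)
err = float(np.linalg.norm(x - xhat))
```

For a signal that is exactly sparse, OMP with a Gaussian sensing matrix typically recovers the support exactly, after which the least-squares fit is exact up to floating-point precision.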

3.
Mean-shape vector quantizer for ECG signal compression
A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the mean values of short ECG signal segments are quantized as scalars, while their mean-removed waveshapes are coded through a vector quantizer. An entropy encoder is applied to both the mean and vector codes to further increase compression without degrading the quality of the reconstructed signals. The paper discusses the fundamentals of MSVQ along with various parameter specifications such as the duration of the signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, and its computational complexity is analyzed with real-time implementation in mind. MSVQ is found to be an efficient compression method, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 bps. This compression factor makes the technique especially attractive for ambulatory monitoring.
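The encode/decode split described above (scalar-quantized segment means, vector-quantized shapes) can be sketched as follows. The codebook, segment length, and mean quantization step are illustrative placeholders, not the paper's trained values.

```python
import numpy as np

def msvq_encode(sig, codebook, seg=8, mean_step=0.05):
    """Mean-shape VQ sketch: each segment's mean is scalar-quantized and
    its mean-removed shape is mapped to the nearest codebook row."""
    segs = sig[: len(sig) // seg * seg].reshape(-1, seg)
    means = segs.mean(axis=1)
    mcodes = np.round(means / mean_step).astype(int)      # scalar mean code
    shapes = segs - means[:, None]                        # mean removal
    # nearest-codeword search for each mean-removed shape
    d = ((shapes[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    vcodes = d.argmin(axis=1)
    return mcodes, vcodes

def msvq_decode(mcodes, vcodes, codebook, mean_step=0.05):
    return (codebook[vcodes] + mean_step * mcodes[:, None]).ravel()

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 8)) * 0.1             # toy 16-word codebook
sig = np.sin(np.linspace(0, 4 * np.pi, 64))
m, v = msvq_encode(sig, codebook)
rec = msvq_decode(m, v, codebook)
prd = 100 * np.linalg.norm(sig - rec) / np.linalg.norm(sig)
```

In the real scheme the codebook would be trained (and the two code streams entropy-coded), which is what drives the distortion down to the reported levels; with a random toy codebook the PRD is naturally much worse.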

4.
The compressed sensing (CS) theory has been successfully applied to image compression in recent years, as most image signals are sparse in some domain. In this paper, we focus on improving the sampling efficiency of CS-based image compression through a proposed adaptive sampling mechanism for block-based CS (BCS), especially its reweighted variant. Two solutions are developed, one at the sampling side and one at the reconstruction side. The proposed sampling mechanism allocates CS measurements to image blocks according to each block's statistical information, so as to sample the image more efficiently. A generic allocation algorithm is developed to assign CS measurements, and several allocation factors derived in the transform domain control the overall allocation in both solutions. Experimental results demonstrate that our adaptive sampling scheme offers significant quality improvements compared with traditional non-adaptive schemes.
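The core idea of adaptive measurement allocation can be sketched with a simple budget split proportional to per-block variance; using variance as the allocation factor is a stand-in for the paper's transform-domain statistics, and the minimum-per-block rule is an assumption.

```python
import numpy as np

def allocate_measurements(blocks, total_m, m_min=1):
    """Split a CS measurement budget across image blocks in proportion
    to each block's variance (a proxy for its information content);
    every block keeps at least m_min measurements."""
    var = np.array([b.var() for b in blocks], dtype=float)
    weights = var / var.sum() if var.sum() > 0 else np.full(len(blocks), 1.0 / len(blocks))
    m = np.maximum(m_min, np.floor(weights * total_m)).astype(int)
    # hand out any remaining budget to the most under-served blocks
    while m.sum() < total_m:
        m[np.argmax(weights - m / total_m)] += 1
    return m

rng = np.random.default_rng(2)
blocks = [np.zeros((8, 8)),                    # flat block: few measurements
          0.1 * rng.standard_normal((8, 8)),   # mildly textured
          rng.standard_normal((8, 8))]         # busy block: most measurements
m = allocate_measurements(blocks, total_m=100)
```

Smooth blocks compress well from very few random projections, so shifting budget toward high-activity blocks raises overall reconstruction quality at a fixed total sampling rate.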

5.
Video Coding for Wireless Multimedia Sensor Networks: A Survey
Resource-constrained wireless multimedia sensor networks must process and transmit large volumes of video data. To make full use of the limited resources, a video coding scheme is needed that balances energy consumption against compression ratio and image quality. This paper discusses the current state, challenges, and design goals of video coding techniques for wireless multimedia sensor networks, and reviews existing solutions and theoretical results …

6.
In resource-limited wireless sensor networks (WSNs), links with poor quality seriously hinder large-scale application. Exploiting the inherent sparsity of signals in WSNs, a sparse signal transmission framework based on a double process of compressive sensing (CS) is proposed, offering a new way to achieve real-time, accurate, and energy-efficient sparse signal transmission. First, random packet loss over lossy wireless links is modeled as a linear dimension-reducing measurement process of CS (a passive CS process). Then, since large packets are often adopted in WSNs for higher transmission efficiency, a random linear dimension-reducing projection (a simple source coding operation) is applied at the sender node (an active CS process) to prevent block data loss, so that the raw signal can be recovered from the lossy data at the receiver node using CS reconstruction algorithms. Further, from the theory of CS reconstruction and the packet-reception-rate formula of wireless communication, the minimum compression ratio and the maximum allowed packet length are derived. Extensive simulations demonstrate that the proposed method greatly improves the reliability and accuracy of data transmission while reducing data volume, transmission delay, and energy consumption.

7.
Wireless body-area sensor networks (WBSNs) are key components of e-health solutions. Wearable wireless sensors can monitor and collect many different physiological parameters accurately, economically, and efficiently. In this work we focus on WBSNs for fall detection, where real-time data streaming and timely alarm generation are of critical importance and system lifetime must be maximized; the throughput and energy efficiency of the communication protocol must therefore be carefully optimized. In this article we investigate ZigBee's ability to meet WBSN requirements, showing higher communication efficiency and lower power consumption than a Bluetooth serial port profile (SPP) based solution. As a case study we implemented an accelerometer-based fall detection algorithm able to detect eight different fall types by means of a single sensor worn on the subject's waist. The algorithm has low computational complexity and can run on an embedded platform. Fall simulations were performed by three volunteers. Preliminary results are promising, showing excellent values for both sensitivity and specificity, and demonstrate how a ZigBee-based network can serve high-throughput WBSN scenarios.
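A minimal accelerometer fall detector of the kind described, an impact spike followed by a motionless "lying" period, can be sketched as below. The thresholds, window length, and simulated trace are illustrative assumptions, not the paper's eight-type algorithm.

```python
import numpy as np

def detect_fall(acc_mag, fs=50, impact_g=2.5, still_g=0.15, window_s=1.0):
    """Toy impact-then-stillness detector on the acceleration magnitude
    (in g).  A fall is flagged when a sample exceeds impact_g and the
    following window stays within still_g of 1 g (lying motionless).
    Returns the sample index of the impact, or -1 if no fall is found."""
    w = int(window_s * fs)
    for i, a in enumerate(acc_mag[:-w]):
        if a > impact_g and np.all(np.abs(acc_mag[i + 1:i + 1 + w] - 1.0) < still_g):
            return i
    return -1

fs = 50
quiet = np.ones(fs)                 # standing: magnitude ~1 g
impact = np.array([3.2])            # brief spike at the impact
lying = np.ones(2 * fs)             # motionless afterwards
sim = np.concatenate([quiet, impact, lying])
print(detect_fall(sim, fs))  # 50
```

Requiring both the spike and the post-impact stillness is what keeps false positives from everyday jolts (sitting down, jumping) low, which matters for the sensitivity/specificity figures the study reports.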

8.
The emerging compressive sensing (CS) theory points to a promising way of developing novel, efficient data compression techniques, even though it was originally proposed to achieve dimension-reduced sampling and thereby save sampling cost. However, the non-adaptive projection representation of natural images in the conventional CS (CCS) framework can lead to inefficient compression compared with classical image compression standards such as JPEG and JPEG 2000. In this paper, two simple methods are investigated for block CS (BCS) with a discrete cosine transform (DCT) based image representation for compression applications: coefficient random permutation (CRP) and adaptive sampling (AS). CRP balances the sparsity of the sampled vectors in the DCT domain of the image, improving CS sampling efficiency. AS designs an adaptive measurement matrix based on the energy distribution characteristics of the image in the DCT domain, further enhancing CS performance. Experimental results demonstrate that the proposed methods are effective in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality. The proposed BCS-based image representation scheme could thus be an efficient alternative for encrypted and/or robust image compression.

9.
In this study, a new compression algorithm for ECG signals is proposed, based on selecting the important subbands of the wavelet packet transform (WPT) and applying a subband-dependent quantization algorithm. First, the WPT is applied to the ECG signal and the more important subbands are selected according to their Shannon entropy. Next, content-based quantization and denoising are applied to the coefficients of the selected subbands. Finally, arithmetic coding produces the compressed data. The performance of the proposed compression method is evaluated on the MIT-BIH Arrhythmia database using the compression ratio (CR), the percentage root-mean-square difference (PRD) as a signal distortion measure, and the wavelet energy-based diagnostic distortion (WEDD) as a diagnostic distortion measure. The average CR of the proposed method is 29.1, its average PRD is below 2.9%, and its WEDD is below 3.2%. These results demonstrate that the proposed method performs well compared with state-of-the-art compression algorithms.
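The entropy-based subband selection step can be sketched as below. Scoring subbands by the Shannon entropy of their normalized coefficient energies follows the abstract; the tie to a specific wavelet packet tree is left out of this sketch.

```python
import numpy as np

def shannon_entropy(c):
    """Shannon entropy (bits) of a coefficient vector's normalized energies."""
    p = np.asarray(c, dtype=float) ** 2
    p = p[p > 0] / p.sum()               # drop zero terms, normalize energy
    return float(-(p * np.log2(p)).sum())

def select_subbands(subbands, keep):
    """Rank subbands by entropy and keep the `keep` most informative ones."""
    scores = [shannon_entropy(c) for c in subbands]
    order = np.argsort(scores)[::-1]     # highest entropy first
    return sorted(order[:keep].tolist())

flat = np.ones(8)                        # energy spread out: entropy = 3 bits
spike = np.zeros(8); spike[0] = 1.0      # all energy in one coefficient: 0 bits
picked = select_subbands([flat, spike], keep=1)
print(picked)  # [0]
```

A subband whose energy is spread over many coefficients carries more structure than one dominated by a single coefficient, which is the rationale for keeping the high-entropy subbands.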

10.
The delay performance of compression algorithms is particularly important when time-critical data transmission is required. In this paper, we propose a wavelet-based electrocardiogram (ECG) compression algorithm with low delay for instantaneous, continuous ECG transmission, suited to telecardiology applications over a wireless network. The proposed algorithm reduces the frame size as much as possible to achieve low delay while maintaining reconstructed signal quality. To attain both low delay and high quality, it employs waveform partitioning, adaptive frame size adjustment, wavelet compression, flexible bit allocation, and header compression. The performance of the proposed algorithm in terms of reconstructed signal quality, processing delay, and error resilience was evaluated using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) and Creighton University Ventricular Tachyarrhythmia (CU) databases and a code division multiple access-based simulation model with mobile channel noise.

11.
Compressive sensing (CS) is well known for its unique combination of sensing, compression, and security (i.e., the equal importance of all CS measurements). However, there is a tradeoff: improving sensing and compression efficiency with prior signal information tends to favour particular measurements, thus decreasing security. This work aims to improve sensing and compression efficiency without compromising security, using a novel sampling matrix named the Restricted Structural Random Matrix (RSRM). RSRM unifies the advantages of frame-based and block-based sensing together with a global smoothness prior (i.e., low-resolution signals are highly correlated). RSRM acquires compressive measurements as random projections of multiple randomly sub-sampled signals restricted to low-resolution signals (equal in energy), so that its measurements remain equally important. RSRM is proven to satisfy the Restricted Isometry Property and shows reconstruction performance comparable to recent state-of-the-art compressive sensing and deep-learning-based methods.

12.
Future healthcare systems are shifting toward long-term patient monitoring using embedded ultra-low-power devices. In this paper, the strengths of both rakeness-based compressive sensing (CS) and block sparse Bayesian learning (BSBL) are exploited for efficient electroencephalogram (EEG) transmission and reception over wireless body area networks. A binary sensing matrix based on the rakeness concept is used to find the most energetic signal directions, striking a balance between collecting energy and enforcing the restricted isometry property needed to capture the underlying signal structure. Correct representation of the EEG oscillatory activity, wave shape, and main signal characteristics is provided by discrete cosine transform-based BSBL, which models the intra-block correlation. The IEEE 802.15.4 wireless communication technology (ZigBee) is employed, since it targets low-data-rate communications in an energy-efficient manner. To alleviate noise and channel multipath effects, a recursive least squares (RLS) based equalizer is used, with an adaptation algorithm that continually updates the filter weights from successive input samples. For the same compression ratio (CR), results indicate that the proposed system attains higher reconstruction quality than the standard CS algorithm. For higher CRs, lower-dimensional projections are allowed while still guaranteeing correct reconstruction. Thus, low-complexity, high-quality data compression and reconstruction are achieved with minimal energy expenditure at the sensor nodes.
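The RLS adaptation mentioned above follows the standard textbook recursion; the sketch below identifies an unknown FIR channel with it. Filter order, forgetting factor, and the noiseless channel are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rls(d, x, order=4, lam=0.99, delta=100.0):
    """Recursive least squares: adapt weights w so that w . u tracks the
    desired signal d, where u holds the most recent input samples."""
    w = np.zeros(order)
    P = np.eye(order) * delta                # inverse correlation estimate
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order + 1:n + 1][::-1]     # most recent sample first
        k = P @ u / (lam + u @ P @ u)        # gain vector
        y[n] = w @ u
        e = d[n] - y[n]                      # a-priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w, y

rng = np.random.default_rng(3)
h = np.array([1.0, 0.5, -0.2, 0.1])          # unknown channel to identify
x = rng.standard_normal(2000)
d = np.convolve(x, h)[: len(x)]              # noiseless channel output
w, _ = rls(d, x)
```

With noiseless data the RLS weights converge to the true channel taps; in the equalization setting of the paper, the same recursion runs on noisy received samples to undo the channel instead of imitating it.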

13.
杨颖, 刘军. 《电子技术》, 2011, 38(6): 23-24
Time synchronization is an enabling technology for wireless body sensor networks (WBSNs). Addressing the limited energy budget of WBSNs, this paper proposes an improved time synchronization algorithm that combines one-way broadcasts from a reference node with pairwise two-way message exchanges. While maintaining adequate synchronization accuracy, the algorithm reduces the number of message transfers and hence the communication overhead, meeting the low-energy requirement. Simulations verify the algorithm's performance.
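The pairwise two-way exchange the algorithm builds on is the classic four-timestamp estimate (as in TPSN/NTP); under a symmetric-link assumption the offset and delay fall out directly. The numbers below are a made-up illustration.

```python
def offset_two_way(t1, t2, t3, t4):
    """Two-way message exchange: node A sends at t1 (A's clock), B
    receives at t2 and replies at t3 (B's clock), A receives at t4.
    Assuming a symmetric link, return B's clock offset relative to A
    and the one-way propagation delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# B's clock runs 5 time units ahead; the true one-way delay is 2 units
t1 = 100.0
t2 = t1 + 2.0 + 5.0      # arrival, read on B's (offset) clock
t3 = t2 + 1.0            # B's processing time before replying
t4 = (t3 - 5.0) + 2.0    # arrival back, read on A's clock
print(offset_two_way(t1, t2, t3, t4))  # (5.0, 2.0)
```

One-way reference broadcasts avoid the reply message entirely (halving traffic) but cannot separate offset from delay, which is why the hybrid scheme trades a few two-way exchanges for accuracy where it is needed.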

14.
This paper introduces a new methodology for compressing ECG signals automatically while guaranteeing signal interpretation quality. The approach is based on estimating the noise in the ECG signal, which is then used as the compression threshold in the coding stage. The Set Partitioning in Hierarchical Trees (SPIHT) algorithm codes the signal in the wavelet domain. Forty ECG records from two databases commonly used in ECG compression were considered to validate the approach, and three cardiologists rated signal quality in a clinical trial using mean opinion score tests. Results show that the approach not only achieves very good ECG reconstruction quality but also enhances the visual quality of the ECG signal.

15.
In view of the shortcomings of conventional electrocardiogram (ECG) compression algorithms, such as high computational complexity and distortion of the reconstructed signal, a new ECG compression encoding algorithm based on Set Partitioning In Hierarchical Trees (SPIHT) is proposed after a detailed study of the integer lifting scheme wavelet transform. The proposed algorithm modifies the zero-tree structure of SPIHT, establishes a one-dimensional wavelet coefficient tree for ECG signals, and improves SPIHT encoding efficiency by allocating bits rationally, improving the zero-tree set, and refining the classification method. The improved algorithm requires no floating-point computation or storage and is easy to implement in both hardware and software. Experimental results show that the new algorithm combines low complexity, high speed, and good signal reconstruction, achieving high compression ratios with high signal fidelity.
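The integer lifting idea, computing an exactly invertible wavelet transform with integer arithmetic only, can be shown with the simplest case, the Haar/S-transform; the paper builds SPIHT on a more elaborate lifting wavelet, so this is a minimal stand-in.

```python
import numpy as np

def haar_lift_fwd(x):
    """Integer-to-integer Haar transform via lifting: predict then
    update, no floats, exactly invertible."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even                       # predict step (detail)
    s = even + (d >> 1)                  # update step (approximation)
    return s, d

def haar_lift_inv(s, d):
    even = s - (d >> 1)                  # undo update
    odd = d + even                       # undo predict
    out = np.empty(2 * len(s), dtype=s.dtype)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([7, 3, -2, 9, 0, 0, 5, 1], dtype=np.int64)
s, d = haar_lift_fwd(x)
print(np.array_equal(haar_lift_inv(s, d), x))  # True
```

Because every lifting step is undone exactly (the arithmetic shift is the same floor division in both directions), reconstruction is lossless regardless of the input, which is what makes the scheme hardware-friendly.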

16.
A method for the compression of ECG data is presented, based on high-degree polynomial expansions. Data rates of about 350 bits per second are achievable at acceptable signal quality. The high compression is obtained by a carefully selected subdivision of the ECG signal into intervals, one per ECG period, that makes optimal use of the special properties of the polynomial basis functions. The method is compared to the discrete cosine transform and is found to yield significantly higher data compression for a given signal quality (quantified by mean squared error and peak error).

17.
Chronic cardiovascular diseases such as arrhythmia seriously affect human health. Automatic arrhythmia classification from electrocardiogram (ECG) signals can improve diagnostic efficiency for such diseases and reduce labor cost. This paper proposes an improved long short-term memory (LSTM) method for automatic arrhythmia classification from one-dimensional ECG signals. First, a deep convolutional neural network (CNN) encodes the ECG signal to extract morphological features. An LSTM classification network then performs automatic arrhythmia classification on these features. Experiments on the MIT-BIH arrhythmia database show that the method significantly shortens classification time and achieves over 99.2% classification accuracy, with sensitivity and other evaluation metrics improved to varying degrees, meeting the real-time and efficiency requirements of automatic ECG classification.

18.
Cloud data centers have become overwhelmed with data-intensive applications due to the limited computational capabilities of mobile terminals. Mobile edge computing is emerging as a potential paradigm that hosts application execution at the edge of the network to reduce transmission delays. Compute nodes are usually distributed in edge environments, making efficient task scheduling among those nodes crucial to reducing processing time; it is also imperative to conserve edge server energy to extend server lifetimes. To this end, this paper proposes a novel task scheduling algorithm named Energy-aware Double-fitness Particle Swarm Optimization (EA-DFPSO), based on an improved particle swarm optimization algorithm, that achieves energy efficiency in an edge computing environment along with minimal task execution time. EA-DFPSO applies a dual fitness function to search for an optimal task-scheduling scheme that saves edge server energy while maintaining service quality. Extensive experimentation demonstrates that EA-DFPSO outperforms traditional scheduling algorithms, reducing task completion time and conserving energy in an edge computing environment.
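The particle swarm optimization that EA-DFPSO builds on can be sketched in its plain global-best form, minimizing a toy objective; this is not the paper's dual-fitness variant, and all parameter values are conventional defaults.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO: each particle is pulled toward its own
    best position and the swarm-wide best position."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fs = np.array([f(p) for p in pos])
        better = fs < pbest_f
        pbest[better], pbest_f[better] = pos[better], fs[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

best, val = pso(lambda p: float((p ** 2).sum()))  # sphere test function
```

In the scheduling setting, a particle would instead encode a task-to-node assignment and the fitness would combine completion time and energy, which is where the paper's dual fitness function comes in.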

19.
Peak transform for efficient image representation and coding
In this work, we introduce a nonlinear geometric transform, called the peak transform (PT), for efficient image representation and coding. The proposed PT converts high-frequency signals into low-frequency ones, making them much easier to compress. Coupled with wavelet transform and subband decomposition, the PT significantly reduces signal energy in high-frequency subbands and achieves a significant transform coding gain, with important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on the PT, we design an image encoder, called the PT encoder, for efficient image compression. Our extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% when a PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency content. Our experiments also show that the proposed PT efficiently captures and preserves high-frequency image features (e.g., edges) and yields significantly improved visual quality. We believe the concept explored in this work, designing a nonlinear transform to convert hard-to-compress signals into easy ones, is broadly useful, and we hope it will motivate further research in this direction.

20.
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using a direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated by compressing and decompressing the test signals. The proposed algorithm is compared with direct and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root-mean-square difference as low as 1.08%.
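The thresholding and significance-map steps above can be sketched directly: pick the smallest threshold that keeps a target fraction of the coefficient energy, then emit a binary map and run-length-code its zero runs. The toy coefficient vector is illustrative; the paper applies this per DWT coefficient group.

```python
import numpy as np

def epe_threshold(c, epe=0.999):
    """Smallest magnitude threshold that keeps a fraction `epe` of the
    total coefficient energy (energy packing efficiency)."""
    mag = np.sort(np.abs(c))[::-1]
    cum = np.cumsum(mag ** 2) / (mag ** 2).sum()
    keep = int(np.searchsorted(cum, epe)) + 1
    return mag[min(keep, len(mag)) - 1]

def significance_map_rle(c, thr):
    """Binary significance map plus a run-length code of its zero runs,
    mirroring the two-part code (map + significant values)."""
    sig = (np.abs(c) >= thr).astype(int)
    runs, count = [], 0
    for bit in sig:
        if bit:
            runs.append(count)           # zeros preceding this significant one
            count = 0
        else:
            count += 1
    return sig, runs, c[sig.astype(bool)]

c = np.array([5.0, 0.01, 0.02, 4.0, 0.0, 0.0, 3.0])
thr = epe_threshold(c, epe=0.99)
sig, runs, vals = significance_map_rle(c, thr)
print(thr, sig.tolist(), runs)  # 3.0 [1, 0, 0, 1, 0, 0, 1] [0, 2, 2]
```

Long zero runs dominate thresholded wavelet data, so run-length coding the map while storing only the few significant values is where the compression comes from.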


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号