Similar Documents
20 similar documents found.
1.
The application of the wavelet transform to ECG signal processing has attracted considerable attention from researchers. This paper studies an FPGA implementation of a five-level 5/3 lifting wavelet transform and its inverse and applies it to ECG signal compression, obtaining a large compression ratio while keeping the mean square error within a controllable range; the designed hardware core is also used to reconstruct the signal.
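As a software-side illustration of the transform named above, the sketch below implements one level of the reversible LeGall 5/3 lifting step and its exact inverse in Python; the paper's five-level decomposition, quantization, and FPGA hard core are not reproduced, and the synthetic test signal is an assumption.

```python
import numpy as np

def lift53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1D, even-length input)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    even_r = np.append(even[1:], even[-1])      # even[n+1], edge mirrored
    d = odd - ((even + even_r) >> 1)            # predict step
    d_l = np.insert(d[:-1], 0, d[0])            # d[n-1], edge mirrored
    s = even + ((d_l + d + 2) >> 2)             # update step
    return s, d

def lift53_inverse(s, d):
    """Exact inverse of lift53_forward (lossless by construction)."""
    d_l = np.insert(d[:-1], 0, d[0])
    even = s - ((d_l + d + 2) >> 2)             # undo update
    even_r = np.append(even[1:], even[-1])
    odd = d + ((even + even_r) >> 1)            # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Round-trip check on a short synthetic ECG-like segment (assumed test data)
sig = np.round(1000 * np.sin(np.linspace(0, 3 * np.pi, 64))).astype(np.int64)
s, d = lift53_forward(sig)
assert np.array_equal(lift53_inverse(s, d), sig)
```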

2.
A software-based lossless ECG compression algorithm is developed here. The algorithm is written in C and has been applied to ECG data from all 12 leads taken from the PTB diagnostic ECG database (PTB-DB). A difference array is generated from the input ECG data and multiplied by a large number so that the values become integers. These integers are then grouped in both the forward and reverse directions, with a few treated differently. Grouping is done in such a way that every grouped number maps to a valid ASCII value. All the grouped numbers, together with the sign bit and other necessary information, are then converted into their corresponding ASCII characters. A reconstruction algorithm has also been developed using the reverse logic, and the data are reconstructed with an almost negligible difference from the original (PRD 0.023%).
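A rough sketch of the differencing and integer-scaling front end described in the abstract follows; the ASCII grouping and packing rules are not detailed here and are omitted, and the scale factor of 1000 is an assumption rather than the paper's value.

```python
import numpy as np

SCALE = 1000  # assumed "large number"; the paper's exact choice is not given here

def encode_differences(ecg):
    """First sample plus scaled integer first differences (the abstract's front end)."""
    ecg = np.asarray(ecg, dtype=float)
    diffs = np.diff(ecg)                              # difference array
    ints = np.rint(diffs * SCALE).astype(np.int64)    # scale so differences become integers
    return ecg[0], ints                               # (grouping into ASCII symbols omitted)

def decode_differences(first_sample, ints):
    """Reverse logic: rescale, cumulatively sum, and restore the first sample."""
    diffs = ints.astype(float) / SCALE
    return np.concatenate(([first_sample], first_sample + np.cumsum(diffs)))

ecg = 1.2 * np.sin(np.linspace(0, 2 * np.pi, 500))    # stand-in for a PTB-DB lead
first, code = encode_differences(ecg)
rec = decode_differences(first, code)
print("max abs error:", np.max(np.abs(rec - ecg)))    # small residual from rounding
```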

3.
As data-acquisition capabilities and sampling frequencies continue to increase, sampling according to the traditional Nyquist theorem produces massive amounts of data, which poses a great challenge for signal storage and transmission. A signal compression method based on the sparse fast Fourier transform is proposed: it exploits the sparsity of the signal in the frequency domain to reduce the storage required for the compressed signal and reconstructs the original signal with high probability while keeping the error rate sufficiently small.
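The frequency-domain sparsification idea can be illustrated with the minimal sketch below; it uses NumPy's dense FFT rather than a true sublinear sparse-FFT algorithm, and the retained coefficient count k and the test signal are assumptions.

```python
import numpy as np

def compress_topk_fft(x, k):
    """Keep only the k largest-magnitude Fourier coefficients (indices + values)."""
    X = np.fft.fft(x)
    idx = np.argsort(np.abs(X))[-k:]            # positions of the k strongest coefficients
    return idx, X[idx], len(x)

def reconstruct(idx, vals, n):
    X = np.zeros(n, dtype=complex)
    X[idx] = vals
    return np.fft.ifft(X).real

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)   # sparse in frequency
idx, vals, n = compress_topk_fft(x, k=4)        # 4 coefficients (two conjugate pairs)
err = np.linalg.norm(reconstruct(idx, vals, n) - x) / np.linalg.norm(x)
print(f"stored {len(idx)}/{n} coefficients, relative error {err:.2e}")
```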

4.
A recursive construction is used to build an orthogonal transform, called the U transform, whose basis includes piecewise-constant, piecewise-linear, and piecewise-quadratic polynomial vectors; it generalizes the Walsh and slant transforms. A fast algorithm follows from the recursive construction. Using the properties of the shift-and-copy operator and the Kronecker product, a Kronecker-product-based fast algorithm and a direct factorization of the orthogonal U transform are derived. The transform is applied to image compression with a quantization table designed around the human visual system. Experimental results show that the image compression performance of the orthogonal U transform is clearly better than that of the slant transform and comparable to that of the DCT, providing a new option for image compression.

5.
Building on the Contourlet transform and the idea of the Wavelet-Based Contourlet Transform (WBCT), a new improved Contourlet-type transform, the Complex Contourlet Transform (CCT), is proposed. The dual-tree complex wavelet transform (DT-CWT) has been applied successfully to image compression because of its approximate shift invariance and directional selectivity, yet it cannot fully overcome the inherently poor directional selectivity of the wavelet transform. To capture more directional information from an image, a directional filter bank (DFB) is combined with the DT-CWT to construct the CCT. Experimental results show that, compared with WBCT, images reconstructed with the CCT exhibit clearer and richer texture and better visual quality.

6.
With the rapid development of the Internet, large volumes of image data are produced. To reduce storage while maintaining image quality, a lossy image compression algorithm combining singular value decomposition (SVD) and the Contourlet transform is proposed. The algorithm first performs an SVD of the image and, based on each singular value's contribution to the image signal, retains an appropriate subset of singular values as a first stage of compression; the image is then Contourlet-transformed and quantized as a second stage. Experiments comparing the algorithm with direct SVD compression and Contourlet-transform compression show that it performs better than either, achieving a higher peak signal-to-noise ratio and SSIM at the same compression ratio.
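A minimal sketch of the first (SVD) stage follows; the Contourlet stage and quantization are omitted because they depend on a Contourlet implementation, and the rank k and the synthetic test image are assumptions.

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation: keep only the k largest singular values/vectors."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256)
img = 127.5 * (1 + np.outer(np.sin(3 * x), np.cos(5 * x))) + rng.normal(0, 5, (256, 256))
approx = svd_compress(img, k=50)
print("PSNR at rank 50:", round(psnr(img, approx), 2), "dB")
```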

7.
8.
Computers & Chemistry, 1994, 18(1): 13-20
Data compression using the Fourier transform method was applied to data generated from an ultraviolet-visible photodiode array spectrophotometer. The spectroscopic data were converted into the Fourier domain by using the fast Fourier transform technique. Four different methods were utilized to reduce the number of Fourier coefficients required for accurate reproductions of the original spectra. The performance of these methods in data compression was evaluated by using synthetic spectra as well as experimental absorption spectra. It was found that the storage space of the spectral data can be reduced by more than 90% by using three of the suggested methods.

9.
A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic grayscale image compression algorithm, referred to in this paper as a typical static fractal transform based algorithm. Using this algorithm, an image can be described compactly as a fractal transform operator, a combination of a set of fractal mappings. When the fractal transform operator is applied iteratively to any initial image, a unique attractor (the reconstructed image) is obtained. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies at each decoder iteration and thus differs from static transform operators. The new transform improves coding efficiency and shows better convergence in the decoder.
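The key property stated above, that iterating the contractive fractal transform operator from any initial image converges to a unique attractor, can be illustrated with the toy 1D operator below; this is only a minimal Banach-fixed-point demonstration, not Jacquin's block-based scheme, and the operator, scale, and offsets are assumptions.

```python
import numpy as np

def toy_fractal_operator(x, offset, scale=0.5):
    """A contractive affine operator T(x) = scale * shrink(x) + offset (|scale| < 1)."""
    half = 0.5 * (x[0::2] + x[1::2])             # "domain" pooling: shrink by averaging pairs
    expanded = np.repeat(half, 2)                # map the shrunken signal back onto the full grid
    return scale * expanded + offset

n = 256
rng = np.random.default_rng(1)
offset = np.sin(np.linspace(0, 4 * np.pi, n))    # stands in for the stored block offsets

x_a = rng.normal(size=n)                         # two very different initial "images"
x_b = np.zeros(n)
for _ in range(30):                              # decoder iterations
    x_a = toy_fractal_operator(x_a, offset)
    x_b = toy_fractal_operator(x_b, offset)

print("distance between attractors:", np.max(np.abs(x_a - x_b)))  # ~0: unique attractor
```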

10.
The purpose of this paper is to provide an overview of the works dealing with the application and utilization of Walsh function series and transforms in a variety of systems and control applications. The works reviewed are classified into the following groups: 1. Walsh signal definition and generation; 2. Walsh transform computation; 3. System analysis using Walsh functions; 4. System identification via Walsh functions; 5. Optimal control via Walsh functions; 6. Block-pulse functions; 7. Miscellaneous properties of Walsh functions; 8. Walsh-to-Fourier transform conversion; 9. Walsh transform applications.

11.
Volume data are large, strongly correlated, and contain many line and surface structures, so effective compression coding methods are needed. The ridgelet transform, a relatively new time-frequency analysis tool, is well suited to handling singularities along lines and surfaces. After introducing the theory of the ridgelet transform, this paper applies it to the compression coding of volume data. Two compression strategies are proposed: strategy 1 first partitions the volume into groups of slices, applies a 2D ridgelet transform to each slice, and then quantizes and entropy-codes the result; strategy 2 applies a transform analogous to a 3D ridgelet transform directly to the volume, followed by quantization and entropy coding. Strategy 1 is simpler to implement, while strategy 2 achieves a higher compression ratio. Both strategies are robust and support embedded coding. The method has been applied to the compression coding of real industrial CT volume data and can also be used for other types of volume data.

12.
It is shown here that the Hough transform may be used for encoding of line curves and waveforms that consist of the concatenation of curves from an underlying set of families of curves. Several properties of the transform are given in this context.
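A small sketch of encoding a straight line by its Hough-space parameters (rho, theta) is given below; the accumulator resolution and the synthetic point set are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Standard (rho, theta) Hough accumulator for a set of edge points."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1       # one vote per angle for each point
    return acc, thetas, diag

# Points on the line y = x, i.e. x*cos(135 deg) + y*sin(135 deg) = 0
pts = [(i, i) for i in range(50)]
acc, thetas, diag = hough_lines(pts, shape=(64, 64))
r, t = np.unravel_index(np.argmax(acc), acc.shape)
print("encoded line: rho =", r - diag, ", theta =", round(np.rad2deg(thetas[t]), 1), "deg")
```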

13.
In spite of great advancements in multimedia data storage and communication technologies, compression of medical data remains challenging. This paper presents a novel method for the compression of medical images. The proposed method uses the Ripplet transform to represent singularities along arbitrarily shaped curves and the Set Partitioning in Hierarchical Trees (SPIHT) encoder to encode the significant coefficients. The main objective of the proposed method is to provide high-quality compressed images by representing images at different scales and directions and to achieve a high compression ratio. Experimental results obtained on a set of medical images demonstrate that, besides providing multiresolution and high directionality, the proposed method attains a high peak signal-to-noise ratio and a significant compression ratio compared with conventional and state-of-the-art compression methods.

14.
A new method for focus measure computation is proposed to reconstruct 3D shape using image sequence acquired under varying focus plane. Adaptive histogram equalization is applied to enhance varying contrast across different image regions for better detection of sharp intensity variations. Fast discrete curvelet transform (FDCT) is employed for enhanced representation of singularities along curves in an input image followed by noise removal using bivariate shrinkage scheme based on locally estimated variance. The FDCT coefficients with high activity are exploited to detect high frequency variations of pixel intensities in a sequence of images. Finally, focus measure is computed utilizing neighborhood support of these coefficients to reconstruct the shape and a well-focused image of the scene being probed.
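A simplified shape-from-focus sketch is given below, using a plain local-variance focus measure in place of the paper's FDCT-coefficient measure; the adaptive histogram equalization and bivariate shrinkage steps are omitted, and the synthetic focus stack is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def focus_measure(img, win=9):
    """Local variance as a generic focus measure (stand-in for FDCT coefficient energy)."""
    mean = uniform_filter(img, win)
    return uniform_filter(img * img, win) - mean * mean

def shape_from_focus(stack):
    """Depth index = frame with the strongest focus response at each pixel."""
    measures = np.stack([focus_measure(f.astype(float)) for f in stack])
    depth = np.argmax(measures, axis=0)                      # per-pixel best-focused frame
    best = np.take_along_axis(np.stack(stack).astype(float),
                              depth[None, ...], axis=0)[0]   # all-in-focus composite
    return depth, best

# Synthetic stack: frame 0 is sharp everywhere, later frames are progressively defocused
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
stack = [gaussian_filter(sharp, s) for s in (0.0, 1.0, 2.0)]
depth, composite = shape_from_focus(stack)
print("most frequent depth index:", np.bincount(depth.ravel()).argmax())  # expect 0
```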

15.
The binary wavelet transform (BWT) has several distinct advantages over the real wavelet transform (RWT), such as conservation of the alphabet size of the wavelet coefficients, no quantization introduced during the transform, and the simple Boolean operations involved. As a result, fewer coding passes are needed and no sign bits are required when compressing the transformed coefficients. However, the use of the BWT for embedded grayscale image compression is not well established. This paper proposes a novel Context-based Binary Wavelet Transform Coding approach (CBWTC) that combines the BWT with a high-order context-based arithmetic coding scheme for embedded compression of grayscale images. In the CBWTC algorithm, the BWT is applied to decorrelate the linear correlations among image coefficients without expanding the alphabet size of the symbols. To match the CBWTC algorithm, the gray code representation (GCR) is employed to remove the statistical dependencies among the bi-level bitplane images, and a combined arithmetic coding scheme is developed in which three highpass BWT coefficients at the same location are combined into an octave symbol and encoded with a ternary arithmetic coder. This improves the compression performance of the CBWTC algorithm: it not only alleviates the degradation of predictability caused by the BWT but also eliminates the correlation among BWT coefficients in same-level subbands. The conditional context of the CBWTC is modeled by exploiting the characteristics of the BWT and taking advantage of non-causal adaptive context modeling. Experimental results show that the average coding performance of the CBWTC is superior to that of state-of-the-art grayscale image coders, and it consistently outperforms the JBIG2 algorithm and other BWT-based binary coding techniques on a set of test images with different characteristics and resolutions.
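The gray code representation (GCR) step mentioned above can be sketched as follows: 8-bit pixels are converted to gray code before bitplane decomposition so that adjacent bitplanes become less statistically dependent. The BWT and the combined arithmetic coder are not reproduced here, and the random test image is an assumption.

```python
import numpy as np

def to_gray(x):
    """Binary-reflected gray code of unsigned 8-bit values."""
    return x ^ (x >> 1)

def from_gray(g):
    """Inverse gray code (prefix XOR across the bitplanes, valid for 8-bit words)."""
    x = g.copy()
    for shift in (1, 2, 4):
        x ^= x >> shift
    return x

def bitplanes(x, bits=8):
    """Split an 8-bit image into bi-level bitplane images, MSB first."""
    return [((x >> b) & 1).astype(np.uint8) for b in reversed(range(bits))]

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
gcr = to_gray(img)
planes = bitplanes(gcr)                       # these bi-level planes would feed the coder
assert np.array_equal(from_gray(gcr), img)    # GCR is exactly invertible
```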

16.
According to recent World Health Organization reports, the next two decades will see dramatic changes in the health needs of the world's populations, with chronic diseases as the leading causes of disability. Increases in the number of seniors confined to their homes are also expected, producing a steep rise in the need for long-term monitoring and home care services. Regardless of their particular features and architectures, long-term monitoring systems usually produce a large amount of data to be analyzed and inspected by practitioners, in particular by cardiologists dealing with ECG recordings; this problem is well known and also affects traditional Holter-based practice. In this paper we present a program for discovering patterns in ECG recordings, intended as a support for medical decision-making. The computational methods are based on a QRS detector designed especially for noisy applications, followed by a parameter-space reduction performed by a KL transform modified on a "user-fit" basis. Event characterization is based on a recently introduced clustering method, KHM (K-harmonic means). The most representative beat families and the corresponding prototypes (physiological and pathological) are then presented to the user through appropriate graphics to facilitate easy and fast interpretation. We tested the QRS detection algorithm on the MIT-BIH arrhythmia database: the method produced 565 false positive and 379 false negative beats, a total detection failure of 0.85% over all 109,809 annotated beats in the database. While clinical experimentation with the program is under way, we used the VALE database to perform a preliminary evaluation of the data exploration methods (PCA, KHM); over the entire database, pathological clusters were identified in 97% of the cases.
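A rough sketch of the parameter-space reduction step (the Karhunen-Loève / principal component projection of detected beats) is shown below; the QRS detector, the "user-fit" modification, and the K-harmonic-means clustering are not reproduced, and the synthetic beat matrix and component count are assumptions. The reduced feature vectors would then be handed to the clustering stage.

```python
import numpy as np

def kl_reduce(beats, n_components=5):
    """Project fixed-length beat windows onto their leading principal components."""
    X = np.asarray(beats, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the sample covariance gives the discrete KL transform basis
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    basis = eigvecs[:, order]
    return Xc @ basis, basis, mean              # reduced features + basis for reconstruction

# beats: one row per detected QRS complex (e.g. 200 samples centred on the R peak)
rng = np.random.default_rng(0)
beats = rng.normal(size=(500, 200)) + np.sin(np.linspace(0, np.pi, 200))
features, basis, mean = kl_reduce(beats)
print(features.shape)                           # (500, 5): inputs to the clustering stage
```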

17.
Digital filtering of ERS-2 SAR data using the fast Fourier transform (FFT) has been applied over the Singhbhum shear zone (SSZ) and its surroundings to extract linear and anomalous patterns. The results show that numerous lineaments and drainage patterns can be identified and demarcated by the FFT digital filtering method. Major as well as several minor drainage patterns, which are structurally controlled and not observed in the original map, are easily detectable in the filtered image. The present interpretation has been compared with the existing geological map and earlier interpretations of the study area. The technique was found to be more effective for identifying lineaments from ERS SAR data than from Landsat imagery over the study area. The study reveals that most lineaments north of the SSZ trend NNE, NNW, and NW, while most lineaments south of the SSZ trend NE, ENE, WNW, and NW. The demarcated geological structures may be of great significance for locating hidden ore/mineral occurrences. The existence of various mines along the shear zone, such as Baharagora, Mosaboni, Surda, Narwa, Bhatin, Jadugoda, Rakha, and Tatanagar, correlates well with the interpreted results.
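A minimal sketch of FFT-domain filtering for enhancing linear features in a raster band is given below; the simple isotropic high-pass mask, the cutoff, and the synthetic test image are assumptions and do not reproduce the paper's filter design.

```python
import numpy as np

def fft_highpass(img, cutoff=0.05):
    """Suppress low spatial frequencies so edge- and lineament-like features stand out."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)   # normalised spatial frequency
    F[radius < cutoff] = 0                            # remove the low-frequency background
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# img would be one band of the SAR scene; here a slow trend plus a diagonal "lineament"
y, x = np.mgrid[0:128, 0:128]
img = 0.01 * x + (np.abs(x - y) < 1) * 1.0
filtered = fft_highpass(img)                          # the linear feature dominates here
```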

18.
In this paper, we present a new approach for fingerprint classification based on the discrete Fourier transform (DFT) and nonlinear discriminant analysis. Utilizing the DFT and directional filters, a reliable and efficient directional image is constructed from each fingerprint image; nonlinear discriminant analysis is then applied to the constructed directional images, dramatically reducing the dimension and extracting the discriminant features. The proposed method explores the capability of the DFT and directional filtering in dealing with low-quality images and the effectiveness of nonlinear feature extraction in fingerprint classification. Experimental results demonstrate competitive performance compared with other published results.

19.
The recovery of process information from noisy data (de-noising) is studied, first by investigating the classical solution of the estimation problem. Next, the effectiveness of wavelet-based algorithms for data recovery is considered. A novel method based on coefficient de-noising according to the WienerShrink method of wavelet thresholding is proposed. Simulation results are presented, highlighting the advantages of the de-noising method over classical approaches based on the mean square error criterion.
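As a generic illustration of wavelet-based de-noising, the sketch below uses PyWavelets with soft thresholding and the universal (VisuShrink-style) threshold; the WienerShrink shrinkage proposed in the paper is different and is not reproduced here, and the wavelet, level, and test signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients with the universal threshold sigma*sqrt(2*log n)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = wavelet_denoise(noisy)
print("noise reduced:", np.std(noisy - clean) > np.std(denoised - clean))
```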

20.
A fast algorithm for finding the diagonal elements of the covariance matrix of the two-dimensional Walsh-Hadamard (WH) transform of data is described. Its usefulness and other interesting properties of the WH transform are discussed. The performance of the WH transform is compared with the Karhunen-Loeve transform for a first-order stationary Markov process.
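For reference alongside the covariance discussion above, a small sketch of the fast Walsh-Hadamard transform butterfly (natural/Hadamard ordering, O(N log N) additions) follows; the covariance computation itself is not reproduced, and the test vector is an assumption.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard ordering, length must be a power of 2)."""
    a = np.array(x, dtype=float)
    n = a.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            left, right = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h] = left + right             # butterfly: sum
            a[i + h:i + 2 * h] = left - right     # butterfly: difference
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
X = fwht(x)
assert np.allclose(fwht(X) / len(x), x)           # the WH transform is (up to 1/N) its own inverse
```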
