Similar Documents
20 similar documents found
1.
Voice packetization and compression in broadband ATM networks
Some methods of supporting voice in broadband ISDN (B-ISDN) asynchronous transfer mode (ATM), including voice compression, are examined. Techniques for voice compression with a variable-length packet format at the DS1 transmission rate, e.g., wideband packet technology (WPT), have been successfully implemented utilizing embedded adaptive differential pulse code modulation (ADPCM) coding, digital speech interpolation (DSI), and block-dropping schemes. For supporting voice in B-ISDN, voice compression techniques are considered that are similar to those used in WPT but with different packetization and congestion control methods designed for the fixed-length ATM protocol at high speeds. Possible approaches for packetization and implementation of variable-bit-rate voice coding schemes are described. ADPCM and DSI for voice coding and compression, and cell discarding (CD) for congestion control, are considered. The advantages of voice compression and CD in broadband ATM networks are demonstrated in terms of transmission bandwidth savings and resiliency of the network during congestion.

2.
The performances of a number of block-based, reversible compression algorithms suitable for very-large-format images (4096×4096 pixels or more) are compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using discrete Fourier, discrete cosine, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. The performances of these coding techniques on a few mammograms and chest radiographs digitized to sizes up to 4096×4096 pixels at 10 bits/pixel are discussed. Compression from 10 bits to 2.5-3.0 bits/pixel on these images has been achieved without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation.
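Since the comparison centers on causal linear prediction, a minimal runnable sketch of the idea may help (this uses the generic planar predictor, not the paper's Burg-derived multichannel coefficients): each pixel is predicted from already-decoded neighbors, and only the integer residual, whose entropy is much lower, would be passed to an entropy coder such as Huffman or arithmetic coding.

```python
import numpy as np

def lpc2d_residual(img):
    """Causal 2D prediction with the fixed planar predictor W + N - NW.
    The mapping is exactly invertible, so the scheme is reversible."""
    img = img.astype(np.int64)
    pad = np.pad(img, ((1, 0), (1, 0)))          # zero border on top/left
    pred = pad[1:, :-1] + pad[:-1, 1:] - pad[:-1, :-1]   # W + N - NW
    return img - pred

def lpc2d_reconstruct(res):
    """Invert the prediction by scanning in raster order."""
    h, w = res.shape
    out = np.zeros((h + 1, w + 1), dtype=np.int64)
    for i in range(1, h + 1):
        for j in range(1, w + 1):
            out[i, j] = (res[i - 1, j - 1] + out[i, j - 1]
                         + out[i - 1, j] - out[i - 1, j - 1])
    return out[1:, 1:]

img = np.random.randint(0, 1024, (64, 64))       # stand-in 10-bit image
assert np.array_equal(lpc2d_reconstruct(lpc2d_residual(img)), img)
```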

3.
Binary tree predictive coding (BTPC) is an efficient general-purpose still-image compression scheme, competitive with JPEG for natural image coding and with GIF for graphics. We report the extension of BTPC to video compression using motion estimation and compensation techniques which are simple, efficient, nonlinear and predictive. The new methods, binary tree recursive motion estimation coding (BTRMEC) and binary tree residue coding (BTRC), exploit the hierarchical structure of BTPC, in the first case giving progressively refined motion estimates for increasing numbers of pels and in the second case providing efficient residue coding. Compression results for BTRMEC and BTRC are compared against conventional block-based motion-compensated coding as provided by MPEG. They show that both BTRMEC and BTRC are efficient methods for coding video sequences.
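As a reference point for the block-based motion compensation that BTRMEC and BTRC are compared against, a minimal full-search block-matching sketch follows; the block size, search radius, and function names are illustrative, not MPEG's actual parameters.

```python
import numpy as np

def full_search_mv(cur, ref, by, bx, block=16, radius=7):
    """Exhaustive block matching: find the displacement (dy, dx) into `ref`
    that minimizes the sum of absolute differences (SAD) for the `cur`
    block whose top-left corner is (by, bx)."""
    h, w = cur.shape
    target = cur[by:by + block, bx:bx + block].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                          # candidate leaves the frame
            cand = ref[y:y + block, x:x + block].astype(np.int64)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The encoder transmits the motion vector plus the coded prediction residue; BTRMEC's contribution is to refine such estimates progressively within the binary-tree hierarchy rather than once per fixed-size block.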

4.
We study lossy-to-lossless compression of medical volumetric data using three-dimensional (3-D) integer wavelet transforms. To achieve good lossy coding performance, it is important to have transforms that are unitary. In addition to the lifting approach, we first introduce a general 3-D integer wavelet packet transform structure that allows implicit bit shifting of wavelet coefficients to approximate a 3-D unitary transformation. We then focus on context modeling for efficient arithmetic coding of wavelet coefficients. Two state-of-the-art 3-D wavelet video coding techniques, namely, 3-D set partitioning in hierarchical trees (Kim et al., 2000) and 3-D embedded subband coding with optimal truncation (Xu et al., 2001), are modified and applied to compression of medical volumetric data, achieving the best performance published so far in the literature, in terms of both lossy and lossless compression.
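A one-dimensional sketch of the reversible 5/3 integer wavelet transform via lifting; applying the same step along each of the three axes in turn gives a 3-D transform. The edge handling below (index clamping, i.e., sample replication) is a simplification of the usual symmetric extension.

```python
def lwt53_forward(x):
    """One level of the reversible 5/3 integer wavelet transform (lifting).
    `x` is a list of ints with even length; returns (lowpass, highpass)."""
    n = len(x) // 2
    even, odd = x[0::2], x[1::2]
    # Predict step: d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    # Update step: s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def lwt53_inverse(s, d):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    n = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    odd = [d[i] + (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

import random
x = [random.randint(0, 255) for _ in range(16)]
assert lwt53_inverse(*lwt53_forward(x)) == x     # lossless by construction
```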

5.
In embedded system design, memory is one of the most restricted resources, posing serious constraints on program size. Code compression has been used as a solution to reduce the code size for embedded systems. Lossless data compression techniques are used to compress instructions, which are then decompressed on-the-fly during execution. Previous work used fixed-to-variable coding algorithms that translate fixed-length bit sequences into variable-length bit sequences. In this paper, we present a class of code compression techniques called variable-to-fixed code compression (V2FCC), which uses variable-to-fixed coding schemes based on either Tunstall coding or arithmetic coding. Though the techniques are suitable for both reduced instruction set computer (RISC) and very long instruction word (VLIW) architectures, they favor VLIW architectures, which require a high-bandwidth instruction prefetch mechanism to supply multiple operations per cycle and for which fast decompression is critical to overcoming the communication bottleneck between memory and CPU. Experimental results for the VLIW embedded processor TMS320C6x show that the compression ratios using memoryless V2FCC and Markov V2FCC are around 82.5% and 70%, respectively. Decompression unit designs for memoryless V2FCC and Markov V2FCC are implemented in TSMC 0.25-μm technology.
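To make the variable-to-fixed idea concrete, here is a toy memoryless Tunstall codebook builder (the alphabet, probabilities, and codeword length are illustrative, unrelated to the paper's TMS320C6x experiments): the most probable leaf of the parse tree is split repeatedly until no further split fits within 2^L codewords.

```python
import heapq

def tunstall(probs, codeword_bits):
    """Map variable-length source strings to fixed-length codewords,
    assuming a memoryless source with symbol probabilities `probs`."""
    heap = [(-p, sym) for sym, p in probs.items()]   # max-heap via negation
    heapq.heapify(heap)
    # Each split removes one leaf and adds len(probs) children.
    while len(heap) + len(probs) - 1 <= 2 ** codeword_bits:
        p, s = heapq.heappop(heap)                   # most probable leaf
        for sym, q in probs.items():
            heapq.heappush(heap, (p * q, s + sym))
    leaves = sorted(s for _, s in heap)
    return {s: format(i, f"0{codeword_bits}b") for i, s in enumerate(leaves)}

book = tunstall({"a": 0.7, "b": 0.2, "c": 0.1}, codeword_bits=3)
# e.g. a long run of 'a' compresses to one 3-bit codeword per dictionary hit
```

The fixed-length output is what favors a VLIW prefetch path: the decompressor can locate and decode any codeword without bit-serial parsing.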

6.
Test data coding compression is an important, classical class of test resource partitioning (TRP) methods. This paper proposes a generalized alternating code of which the FDR code and the alternating code are both special cases. It also extends the two-step compression method: the original test set is partitioned into multiple groups, and each group is alternately encoded with its own ratio. The scheme combines the respective strengths of alternating codes and two-step coding, remedies the weaknesses of the FDR and alternating codes on certain circuit test sets, and obtains a better compression ratio. Experimental results show that, compared with coding compression methods of the same type, the scheme achieves a higher test data compression ratio and better overall test performance.
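For orientation, a sketch of the classic FDR (frequency-directed run-length) code that the generalized alternating code subsumes: a run of k zeros terminated by a single one falls in group g when 2^g - 2 <= k <= 2^(g+1) - 3, and is encoded as a prefix of (g-1) ones plus a zero, followed by a g-bit tail.

```python
def fdr_encode_run(k):
    """FDR codeword for a run of k zeros terminated by a single one."""
    g = 1
    while k > 2 ** (g + 1) - 3:      # find the group containing k
        g += 1
    prefix = "1" * (g - 1) + "0"
    tail = format(k - (2 ** g - 2), f"0{g}b")
    return prefix + tail

def fdr_encode(bits):
    """Encode a binary test vector (assumed to end in '1') as FDR codewords."""
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            out.append(fdr_encode_run(run))
            run = 0
    return "".join(out)

assert fdr_encode("0000001") == "110000"   # run of 6 zeros falls in group 3
```

Alternating codes apply the same machinery to runs of zeros and ones in turn, which is where the per-group ratio selection of the generalized scheme comes in.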

7.
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show that C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, achieving lossless compression ratios greater than 22 for binary layout image data and greater than 14 for gray-pixel image data.
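A toy sketch of the combinatorial (enumerative) coding principle: a binary block of n bits containing k ones is identified by k together with the lexicographic rank of its pattern among C(n, k) possibilities, approaching the entropy bound with no arithmetic-coder state to renormalize. This is the textbook enumerative construction, not C4's actual implementation.

```python
from math import comb

def enum_rank(bits):
    """Rank a binary block among all blocks of equal length and weight."""
    n, k, rank = len(bits), sum(bits), 0
    ones_left = k
    for i, b in enumerate(bits):
        if b:
            # All blocks placing a 0 here instead come earlier in lex order.
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return k, rank

def enum_unrank(n, k, rank):
    """Rebuild the block from (length, weight, rank)."""
    bits, ones_left = [], k
    for i in range(n):
        zeros_first = comb(n - i - 1, ones_left)
        if ones_left and rank >= zeros_first:
            bits.append(1)
            rank -= zeros_first
            ones_left -= 1
        else:
            bits.append(0)
    return bits

blk = [1, 0, 1, 1, 0, 0, 0, 1]
k, r = enum_rank(blk)
assert enum_unrank(len(blk), k, r) == blk
```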

8.
Android-based VoIP (Voice over IP) communication is characterized by large volumes of voice data. This paper analyzes several existing speech coding schemes and implements an Android-based VoIP system to measure each scheme's speech compression ratio, coding rate, and MOS (Mean Opinion Score). The results show that, when multiple factors are weighed together, Speex (an audio compression format designed around code-excited linear predictive coding) holds a relative advantage for Android-based VoIP communication systems.

9.
谢慧, 王娇, 许磊. 《电子科技》, 2010, 23(8): 15-17
JPEG2000, the new-generation still-image compression standard, offers high compression ratios and multi-resolution support, among other advantages. Its coding pipeline adopts embedded block coding with optimized truncation (EBCOT), which uses MQ arithmetic coding in its coding passes. This paper analyzes the standard's shortcomings when encoding and decoding images with uniform content and little information, and proposes an improved algorithm for the MQ arithmetic coder's flow to address them. The algorithm raises JPEG2000's PSNR on simple images and makes the decoded images visibly clearer.

10.
In a prior work, a wavelet-based vector quantization (VQ) approach was proposed to perform lossy compression of electrocardiogram (ECG) signals. In this paper, we investigate and fix its coding inefficiency problem in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook updating mechanism, originally designed for lossy compression, is modified to allow lossless compression as well. In addition, a new and cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical trees (SPIHT) at the less significant bit representation of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T Databases are selected as test data. In terms of coding efficiency for lossless compression, experimental results show that the proposed codec improves on the direct SPIHT approach and the prior work by about 33% and 26%, respectively.

11.
In recent decades, digital video and audio coding technologies have helped revolutionize the ways we create, deliver, and consume audiovisual content. This is exemplified by digital television (DTV), which is emerging as a captivating new program and data broadcasting service. This paper provides an overview of the video and audio coding subsystems of the Advanced Television Systems Committee (ATSC) DTV standard. We first review the motivation for data compression in digital broadcasting. The MPEG-2 video and AC-3 audio compression algorithms are described, with emphasis on basic concepts, system features, and coding performance. Next-generation video and audio codecs currently under consideration for advanced services are also presented.

12.
Image data compression: A review
With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state-of-the-art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models are specialized for visual images, the techniques discussed here can easily be adapted for multidimensional data compression more generally. Our emphasis is on the fundamentals of the various techniques. A comprehensive bibliography with comments is included for readers interested in further details of the theoretical and experimental results discussed here.
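Since DPCM and predictive coding anchor the review, a compact closed-loop DPCM sketch follows (the first-order predictor and uniform step size are illustrative): the encoder predicts from its own reconstruction, so encoder and decoder never drift apart.

```python
def dpcm_encode(samples, step=8):
    """First-order closed-loop DPCM with a uniform quantizer.
    Returns quantizer indices for subsequent entropy coding."""
    idx, recon = [], 0
    for s in samples:
        e = s - recon                    # prediction error
        q = round(e / step)              # quantize the error
        idx.append(q)
        recon += q * step                # track the decoder's reconstruction
    return idx

def dpcm_decode(idx, step=8):
    out, recon = [], 0
    for q in idx:
        recon += q * step
        out.append(recon)
    return out

codes = dpcm_encode([100, 104, 112, 130, 160, 150])
print(dpcm_decode(codes))                # approximates the input
```

With step=1 the loop degenerates to exact differencing, the lossless predictive-coding case.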

13.
IEEE Spectrum, 1993, 30(8): 36-39
The use of data compression to reduce bandwidth and storage requirements is discussed. The merits of lossless versus lossy compression techniques, the latter offering far greater compression ratios, are considered. The limits of lossless compression are discussed, and a simple method for lossless compression, run-length encoding, is described, as are the more sophisticated Huffman codes, arithmetic coding, and the trie-based codes invented by A. Lempel and J. Ziv (1977, 1978). WAN applications, as well as throughput and latency, are briefly considered.
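A minimal sketch of the run-length encoding mentioned above: consecutive repeats collapse into (symbol, count) pairs, which pays off only when long runs dominate the data.

```python
from itertools import groupby

def rle_encode(data):
    """Collapse runs into (symbol, count) pairs."""
    return [(sym, len(list(run))) for sym, run in groupby(data)]

def rle_decode(pairs):
    return [sym for sym, n in pairs for _ in range(n)]

assert rle_decode(rle_encode("aaabccccd")) == list("aaabccccd")
```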

14.
ECG data compression techniques: a unified approach
A broad spectrum of techniques for electrocardiogram (ECG) data compression have been proposed during the last three decades. Such techniques have been vital in reducing the digital ECG data volume for storage and transmission. These techniques are essential to a wide variety of applications ranging from diagnostic to ambulatory ECGs. Due to the diverse procedures that have been employed, comparison of ECG compression methods is a major problem. Present evaluation methods preclude any direct comparison among existing ECG compression techniques. The main purpose of this paper is to address this issue and to establish a unified view of ECG compression techniques. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are: ECG differential pulse code modulation and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods briefly presented include the Fourier, Walsh, and K-L transforms. The theoretical basis behind the direct ECG data compression schemes is presented and classified into three categories: tolerance-comparison compression, differential pulse code modulation (DPCM), and entropy coding methods. The paper concludes with the presentation of a framework for evaluation and comparison of ECG compression schemes.
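As one concrete instance from the direct-data-compression group, a sketch of the Turning-point algorithm: it halves the sampling rate by examining samples in pairs and retaining, from each pair, the sample that preserves the local slope change.

```python
def sign(v):
    return (v > 0) - (v < 0)

def turning_point(x):
    """Turning-point ECG compression: fixed 2:1 sample reduction keeping,
    from each pair, the sample that preserves a slope-sign change."""
    out, x0, i = [x[0]], x[0], 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        x0 = x1 if s1 * s2 < 0 else x2   # keep the turning point if any
        out.append(x0)
        i += 2
    return out

print(turning_point([0, 2, 5, 4, 3, 6, 8, 7, 7]))   # -> [0, 5, 3, 8, 7]
```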

15.
An encoding technique called multilevel block truncation coding that preserves the spatial details in digital images while achieving a reasonable compression ratio is described. An adaptive quantizer-level allocation scheme which minimizes the maximum quantization error in each block and substantially reduces the computational complexity in the allocation of optimal quantization levels is introduced. A 3.2:1 compression can be achieved by the multilevel block truncation coding itself. The truncated, or requantized, data are further compressed in a second pass using combined predictive coding, entropy coding, and vector quantization. The second pass compression can be lossless or lossy. The total compression ratios are about 4.1:1 for lossless second-pass compression, and 6.2:1 for lossy second-pass compression. The subjective results of the coding algorithm are quite satisfactory, with no perceived visual degradation.
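For background, a sketch of the basic two-level, moment-preserving block truncation coding that the multilevel scheme generalizes (these are the classic BTC quantizer formulas, not the paper's adaptive multilevel allocation): each block is reduced to a bitmap plus two levels chosen to preserve the block mean and variance.

```python
import numpy as np

def btc_block(block):
    """Two-level moment-preserving BTC for one image block.
    Returns (bitmap, low_level, high_level)."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q, m = int(bitmap.sum()), block.size      # pixels above mean / total
    if q in (0, m):                           # flat block: one level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return bitmap, low, high

def btc_reconstruct(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[12, 200], [14, 198]], dtype=float)
print(btc_reconstruct(*btc_block(block)))
```

Storing one bit per pixel plus two levels per block gives BTC's familiar ratio on 8-bit images; the multilevel variant spends extra levels where the maximum quantization error would otherwise be large.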

16.
Wireless sensor networks utilize image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. During compression, discrete cosine transform (DCT)-based JPEG suffers blocking artifacts at low bit-rates. This effect is reduced by the discrete wavelet transform (DWT)-based JPEG2000 and SPIHT algorithms, but these possess high computational complexity. This paper proposes an efficient lapped biorthogonal transform (LBT)-based low-complexity zerotree codec (LZC), an entropy coder for image coding, to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT and wavelet transforms. The simulation results reveal that the proposed algorithm reduces blocking artifacts and achieves high compression. Besides, it is analyzed for noise resilience.

17.
Image compression techniques and their applications
钟声. 《电子学报》, 1995, 23(10): 117-123
This paper surveys current research on, and applications of, image data compression. For still images it focuses on compression techniques guided by the perceptual characteristics of the human visual system; for moving images it concentrates on MPEG-2, on the goals and features of the still-evolving MPEG-4, and on model-based coding methods. The paper also briefly reviews applications of image compression technology and related product development.

18.
The Moving Pictures Expert Group (MPEG) within the International Organization for Standardization (ISO) has developed a series of audio-visual standards known as MPEG-1 and MPEG-2. These audio-coding standards are the first international standards in the field of high-quality digital audio compression. MPEG-1 covers coding of stereophonic audio signals at high sampling rates aiming at transparent quality, whereas MPEG-2 also offers stereophonic audio coding at lower sampling rates. In addition, MPEG-2 introduces multichannel coding, with and without backwards compatibility to MPEG-1, to provide an improved acoustical image for audio-only applications and for enhanced television and video-conferencing systems. MPEG-2 audio coding without backwards compatibility, called MPEG-2 Advanced Audio Coding (AAC), offers the highest compression rates. Typical application areas for MPEG-based digital audio are in the fields of audio production, program distribution and exchange, digital sound broadcasting, digital storage, and various multimedia applications. We describe in some detail the key technologies and main features of MPEG-1 and MPEG-2 audio coders. We also present the MPEG-4 standard and discuss some of the typical applications for MPEG audio compression.

19.
A picture data compression method consisting of a hybrid cascade of four processing stages is presented. The processing stages are: predictive ordering technique (POT), feedback transform coding (FTC), vertical subtraction of quantized coefficients (VSQC), and predictive coding refinements in the signal space consisting of either overshoot suppression (OS) as a first variant or hybrid block truncation coding (HBTC) as a second one. Each of these stages is described, and reconstructed pictures are presented with their coding fidelity performance (mean-square quantization error, mean absolute error, and signal-to-noise ratio), using a portrait and a LANDSAT image as test pictures. It is shown that good-quality images at low bit rates (0.55 to 1.1 bits/pixel) can be obtained.

20.
This article presents a coding method for the lossless compression of color video. In the proposed method, a four-dimensional matrix Walsh transform (4D-M-Walsh-T) is used for color video coding. The whole run of n frames of a color video sequence is divided into '3D-blocks' whose components are the image width (row component), the image height (column component), the image width (vertical component) in a color video sequence, and the adjacency (depth component) of the n frames (Y, U, or V) of the video sequence. Similar to the 2D-Walsh transform, 4D-M-Walsh-T consists of 4D sub-matrices, and the size of each sub-matrix is n. The method can fully utilize correlations, such as those between adjacent pixels within one frame or across different frames at the same time, to encode for lossless compression and reduce the redundancy of color video. Experimental results show that the proposed method achieves a higher lossless compression ratio (CR) for color video sequences.
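A minimal sketch of the Walsh-Hadamard building block that such separable multi-dimensional schemes stack along each axis (illustrating the principle only, not the paper's specific 4D sub-matrix layout): the transform is integer-exact and self-inverse up to a factor of the length, which suits lossless coding.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform; len(a) must be a power of 2.
    Output is in Hadamard (natural) order; Walsh sequency order is a
    permutation of it. Applying fwht twice and dividing by len(a)
    recovers the input exactly."""
    a = np.asarray(a, dtype=np.int64).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def walsh_nd(vol):
    """Separable n-D Walsh transform: run the 1-D transform along
    every axis of the array in turn."""
    out = np.asarray(vol, dtype=np.int64)
    for axis in range(out.ndim):
        out = np.apply_along_axis(fwht, axis, out)
    return out

vol = np.random.randint(0, 256, (4, 4, 4, 4))    # toy 4-D block
coeffs = walsh_nd(vol)
assert np.array_equal(walsh_nd(coeffs) // (4 ** 4), vol)   # self-inverse
```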
