Similar Documents (20 results)
1.
This paper surveys the literature on information-theoretic problems of statistical inference under multiterminal data compression with rate constraints. Particular emphasis is placed on three problems: (1) multiterminal hypothesis testing, (2) multiterminal parameter estimation, and (3) multiterminal pattern classification, in both the positive-rate and zero-rate cases. In addition, the paper includes three new results: converse theorems for multiterminal hypothesis testing, parameter estimation, and pattern classification at zero rate.

2.
The asymptotically optimal hypothesis testing problem, with general sources as the null and alternative hypotheses, is studied under exponential-type constraints on the probability of error of the first kind. Our fundamental philosophy is to convert all of the hypothesis testing problems into the pertinent computation problems of large-deviation probability theory. This methodologically new approach enables us to establish compact general formulas for the optimal exponents of the second-kind error and correct testing probabilities for general sources, including all nonstationary and/or nonergodic sources with arbitrary abstract alphabets (countable or uncountable). These general formulas are presented from the information-spectrum point of view.

3.
Hypothesis testing and information theory
The testing of binary hypotheses is developed from an information-theoretic point of view, and the asymptotic performance of optimum hypothesis testers is developed in exact analogy to the asymptotic performance of optimum channel codes. The discrimination, introduced by Kullback, is developed in a role analogous to that of mutual information in channel coding theory. Based on the discrimination, an error-exponent function e(r) is defined. This function is found to describe the behavior of optimum hypothesis testers asymptotically with block length. Next, mutual information is introduced as a minimum of a set of discriminations. This approach has later coding significance. The channel reliability-rate function E(R) is defined in terms of discrimination, and a number of its mathematical properties are developed. Sphere-packing-like bounds are developed in a relatively straightforward and intuitive manner by relating e(r) and E(R). This ties together the aforementioned developments and gives a lower bound in terms of a hypothesis testing model. The result is valid for discrete or continuous probability distributions. The discrimination function is also used to define a source-code reliability-rate function. This function allows a simpler proof of the source coding theorem and also bounds the code performance as a function of block length, thereby providing the source coding analog of E(R).
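The discrimination at the heart of this development is Kullback's D(p||q). A minimal sketch of computing it for two finite distributions (the distributions and sample sizes here are illustrative, not taken from the paper):

```python
import math

def discrimination(p, q):
    """Kullback's discrimination D(p || q), in nats, for two
    probability vectors over the same finite alphabet."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # null hypothesis
q = [0.9, 0.1]   # alternative hypothesis
d = discrimination(p, q)
# By Stein's lemma, the type-II error of the optimal test decays
# roughly as exp(-n * d) over n independent observations.
```

Here d is about 0.511 nats, so doubling the number of observations roughly squares the achievable type-II error probability.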

4.
The multiterminal gyrator is a multiterminal generalization of the familiar three-terminal device. Whereas the simple gyrator can replace a single inductor, the multigyrator can simulate whole inductive networks; mutual inductors and transformers are not excluded.

5.
Let Y be a mean-zero complex stationary Gaussian signal process depending on a vector parameter θ′ = (θ1, θ2, θ3) whose components represent parameters of the covariance function R(τ) of Y. These parameters are chosen as θ1 = R(0), θ2 = |R(τ)|/R(0), θ3 = phase of R(τ), and they are simply related to the parameters of the spectral density of Y. This paper is concerned with the determination of most powerful (MP) tests that distinguish between random signals having different covariance functions. The tests are based upon N correlated pairs of independent observations on Y. Although the MP test that distinguishes between θ = θ0 and the alternative hypothesis θ = θ1 has been solved previously [11], the problem of identifying the random signals is often complicated by the fact that the signal power θ1 = R(0) is not a distinguishing feature of either hypothesis. This paper determines the MP invariant test that delineates between the composite hypothesis λ ≡ R(τ)/R(0) = λ0 and the composite alternative λ = λ1. In addition, the uniformly MP invariant test that distinguishes between the composite hypotheses θ2 ≤ |λ0| and θ2 > |λ0| has also been found. In all cases, exact probability distributions have been obtained.

6.
ECG data compression with time-warped polynomials
Presents a new adaptive compression method for ECGs. The method represents each R-R interval by an optimally time-warped polynomial and achieves a high-quality approximation at less than 250 bits/s. The author shows that the corresponding rates for other transform-based schemes (the DCT and the DLT) are always higher. The new method is also less sensitive to errors in QRS detection and removes more (white) noise from the signal. The reconstruction errors are distributed more uniformly in the new scheme, and the peak error is usually lower. The reconstruction method is also useful for adaptive filtering of noisy ECG signals.
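The paper's optimal time-warping is not reproduced here, but the underlying step, fitting a low-order polynomial to a sampled interval by least squares, can be sketched as follows (plain normal equations; all data illustrative):

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination (fine for small degrees)."""
    n = degree + 1
    # Normal-equation system A^T A c = A^T y for the monomial basis.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back-substitution.
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs  # coeffs[i] multiplies x**i

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Exactly quadratic samples should be recovered up to rounding.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1 + 2 * x - 3 * x * x for x in xs]
c = polyfit_ls(xs, ys, 2)
```

A compressor in this spirit would transmit only the few coefficients per interval instead of every sample, which is where the bit-rate saving comes from.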

7.
The hypothesis-testing problem has recently been studied under a finite-memory constraint. However, most work has been concerned with the large-sample theory. Here we study the small-sample theory for binary-valued observations.

8.
The optimal data compression problem is posed in terms of an alphabet constraint rather than an entropy constraint. Solving the optimal alphabet-constrained data compression problem yields explicit source encoder/decoder designs, in sharp contrast to other approaches. The alphabet-constrained approach is shown to have the additional advantages that (1) classical waveform encoding schemes, such as pulse code modulation (PCM), differential pulse code modulation (DPCM), and delta modulation (DM), as well as rate-distortion-theory-motivated tree/trellis coders, fit within this theory; (2) the concept of preposterior analysis in data compression is introduced, yielding a rich new class of coders; and (3) it provides a conceptual framework for the design of joint source/channel coders for noisy-channel applications. Examples are presented of single-path differential encoding, delayed (or tree) encoding, preposterior analysis, and source coding over noisy channels.

9.
Data compression techniques are classified into four categories which describe the effect a compression method has on the form of the signal transmitted. Each category is defined and examples of techniques in each category are given. Compression methods which have received previous investigation, such as the geometric aperture methods, as well as techniques which have not received much attention, such as Fourier filter, optimum discrete filter, and variable-sampling-rate compression, are described. Results of computer simulations with real data are presented for each method in terms of rms and peak errors versus compression ratio. It is shown that, in general, the geometric aperture techniques give results comparable to or better than the more "exotic" methods and are more economical to implement at the present state of the art. In addition, the aperture compression methods provide bounded peak error, which is not readily obtainable with other methods. A general system design is given for a stored-logic data compression system with adaptive buffer control to prevent loss of data and to provide efficient transmission of multiplexed channels with compressed data. An adaptive buffer design is described which is shown to be practical, based on computer simulations with five different types of representative data.

10.
Group testing for image compression
This paper presents group testing for wavelets (GTW), a novel embedded wavelet-based image compression algorithm built on the concept of group testing. We explain how group testing generalizes the zerotree coding technique for wavelet-transformed images. We also show that Golomb coding is equivalent to Hwang's group testing algorithm (Du and Hwang 1993). GTW is similar to SPIHT (Said and Pearlman 1996) but replaces SPIHT's significance pass with a new group-testing-based method. Although no arithmetic coding is implemented, GTW performs competitively with SPIHT's arithmetic-coding variant in terms of rate-distortion performance.
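Since the abstract highlights the equivalence of Golomb coding with Hwang's group testing algorithm, a minimal Golomb encoder may help fix ideas (a generic sketch of the code itself, not of GTW):

```python
def golomb_encode(n, m):
    """Golomb codeword for non-negative integer n with parameter m >= 1,
    returned as a string of '0'/'1' bits: a unary-coded quotient
    followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                  # unary quotient, terminated by 0
    if m == 1:
        return bits                       # degenerates to pure unary
    k = (m - 1).bit_length()              # ceil(log2(m))
    cutoff = (1 << k) - m                 # truncated-binary threshold
    if r < cutoff:
        return bits + format(r, "b").zfill(k - 1)
    return bits + format(r + cutoff, "b").zfill(k)
```

With m a power of two this reduces to a Rice code, e.g. golomb_encode(9, 4) gives "11001": two 1s for the quotient 2, a terminating 0, then the remainder 1 in two bits.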

11.
Interconnections are quickly becoming a dominant factor in the design of computer chips. Techniques to estimate interconnection lengths a priori (very early in the design flow) therefore gain attention and will become important for making the right design decisions when one still has the freedom to do so. However, at that time, one also knows least about the possible results of subsequent design steps. Conventional models for a priori estimation of wire lengths in computer chips use Rent's rule to estimate the number of terminals needed for communication between sets of gates. The number of interconnections then follows by taking into account that most nets are point-to-point connections. In this paper, we apply our previously introduced model for multiterminal nets to show that such nets have a fundamentally different influence on the wire length estimations than point-to-point nets. We then estimate the wire length distribution of Steiner tree lengths for applications related to routing resource estimation. Experiments show that the new estimated Steiner-length distributions capture the multiterminal effects much better than the previous point-to-point length distributions. The accuracy of the estimated values is still too low, as for the conventional point-to-point models, because we are still lacking a good model for placement optimization. However, the new results are a step closer to the application of wire length estimation techniques in real-world situations.

12.
Gaussian multiterminal source coding
In this paper, we consider the problem of separate coding for two correlated memoryless Gaussian sources. We determine the rate-distortion region in the special case when one source provides partial side information to the other source. We also show that the previously obtained inner region of the rate-distortion region is partially tight. A rigorous proof of the direct coding theorem is also given.

13.
Experiments with wavelets for compression of SAR data
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performances of the filters are correlated with mathematical properties such as regularity and number of vanishing moments. Excellent quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.

14.
The amount of fixed side information required for lossless data compression is discussed. Nonasymptotic coding and converse theorems are derived for data-compression algorithms with fixed statistical side information ("training sequence") that is not large enough to yield the ultimate compression, namely, the entropy of the source.
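The "ultimate compression" referred to is the entropy of the source. For a memoryless model it can be estimated from data as the zeroth-order empirical entropy (a generic illustration, not the paper's construction):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Zeroth-order empirical entropy in bits per symbol -- the
    memoryless lower bound on lossless compression of this data."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h = empirical_entropy("aab")
# For "aab": -(2/3)log2(2/3) - (1/3)log2(1/3), about 0.918 bits/symbol.
```

No lossless coder can beat this rate on average for an i.i.d. source with these symbol probabilities, which is exactly the benchmark the converse theorems compare against.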

15.
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). These encoding schemes, together with the nine-coded compression technique, enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to recover the original data. Experimental results on the ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes, respectively.
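As a rough illustration of the run-length stage such schemes build on (not the exact AFDER/RLHC encoders), a bit string can be reduced to (symbol, run-length) pairs, which a downstream Huffman coder then compresses:

```python
from itertools import groupby

def run_lengths(bits):
    """Collapse a bit string into (symbol, run length) pairs.
    A Huffman stage would then give short codewords to the
    most frequent run lengths."""
    return [(sym, len(list(grp))) for sym, grp in groupby(bits)]

runs = run_lengths("0001100000")
```

Test vectors with many don't-cares can be filled to create long uniform runs, which is why this representation becomes very compact in exactly the situation the abstract describes.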

16.
Nonasymptotic coding and converse theorems are derived for universal data compression algorithms in cases where the training sequence ("history") available to the encoder consists of the most recent segment of the input data string that has been processed, but is not large enough to yield the ultimate compression, namely, the entropy of the source.

17.
This correspondence presents entropy analyses for dithered and undithered quantized sources. Two methods are discussed that reduce the increase in entropy caused by the dither. The first method supplies the dither to the lossless encoding-decoding scheme; it is argued that this increases the complexity of the scheme, and a method to reduce this complexity is proposed. The second method is the use of a dead-zone quantizer. A procedure for determining the optimal dead-zone width in the mean-square sense is given.

18.
Conventional digital audio, as used for instance on compact discs, requires a large bandwidth for transmission and enormous amounts of storage space. Developments in high-speed digital signal processing chips have made it practical to reduce these requirements by employing sophisticated data compression techniques which reduce redundant and irrelevant information in the source signal. The paper reviews the types of digital audio data compression coders that are now available, their applications and the principles involved.

19.
EEG data compression techniques
Electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits one to achieve significant reduction in the space required to store signals and in transmission time. The Huffman coding technique, in conjunction with derivative computation, reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. By exploiting this result, a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, discrete cosine transform (DCT), and repetition-count compression methods. Finally, it is shown that the adoption of a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low-cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEGs in a compressed format.
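The derivative-plus-Huffman idea can be sketched as follows: first differences of a slowly varying signal cluster near zero, and a Huffman code then assigns those frequent values short codewords (toy data; a generic sketch, not the paper's Holter pipeline):

```python
import heapq
from collections import Counter
from itertools import count

def first_difference(samples):
    """Derivative stage: keep the first sample, then successive
    differences, which cluster near zero for smooth signals."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    """Build a Huffman codebook {symbol: bit string} from a sample."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    tick = count()                            # unique tie-breaker for the heap
    heap = [(f, next(tick), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

samples = [10, 11, 11, 12, 12, 12, 11]
diffs = first_difference(samples)            # mostly -1, 0, and 1
book = huffman_code(diffs)
encoded = "".join(book[d] for d in diffs)    # prefix-free bit stream
```

The decoder rebuilds the tree from the codebook, recovers the differences, and integrates them back into samples, so the reconstruction is exact, matching the lossless requirement in the abstract.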

20.
In matched design, if the unit cost from one comparison group is higher than the unit cost from the other group, one can consider matching each unit randomly selected from the former with more than one unit from the latter to increase the power of the test. This paper extends the discussion on testing equality between exponential distributions from one-to-one paired design to K-to-one matched design, where K can be any finite positive integer. The paper considers an asymptotic test procedure using the central limit and Fieller's theorems (CLFT), an asymptotic test procedure using the marginal likelihood ratio test (MLRT), and an exact parametric test (EXPT), and applies Monte Carlo simulation to evaluate the performance of these procedures. When the number of matched sets, n, is as small as 10, the estimated type-I error for the two asymptotic procedures can still agree well with the nominal level. When the number of matched units, K, exceeds 4, the effect of a further increase in K on power generally becomes minimal. When the intra-class correlation between failure times within matched sets is small, the CLFT generally has larger power than either the MLRT or the EXPT in one-to-one paired design. On the other hand, when the intra-class correlation is large, the power of the MLRT is higher than that of both the CLFT and the EXPT in almost all the situations considered. Hence the author recommends the MLRT.
