Similar Documents
20 similar documents found (search time: 15 ms)
1.
Driessen, P.F. Electronics Letters, 1989, 25(15): 960-961
The probability of successful frame synchronisation in the presence of jamming or interference may be increased by using bit erasure information without also increasing the total probability of simulated alignment, P_{t,sim}, in an incorrect position in the overlap region. An expression for P_{t,sim} is obtained as a function of the bit error and erasure probabilities, the specified frame alignment pattern, the amount of overlap, and the number of bit errors and erasures tolerated in the frame sync word.
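As a rough illustration of the acceptance rule whose statistics the paper analyses, the following Python sketch accepts a candidate alignment position when the received word matches the sync pattern within given error and erasure budgets. The pattern, budgets, and the use of None to mark an erased bit are illustrative assumptions, not from the paper:

```python
# Hypothetical sketch: accept a candidate frame-sync position if the received
# word matches the sync pattern within e_max bit errors and s_max erasures.
# 0/1 are hard decisions; None marks an erased bit (a convention assumed here).

SYNC_WORD = [1, 0, 1, 1, 0, 0, 0, 1]  # example alignment pattern (assumed)

def sync_match(received, e_max=1, s_max=2):
    """Return True if `received` matches SYNC_WORD with at most
    e_max bit errors and s_max erasures tolerated."""
    errors = sum(1 for r, s in zip(received, SYNC_WORD)
                 if r is not None and r != s)
    erasures = sum(1 for r in received if r is None)
    return errors <= e_max and erasures <= s_max

# One error and one erasure: still accepted as frame alignment.
print(sync_match([1, 0, 1, 1, 0, None, 0, 0]))  # True
```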

2.
A topology for high-precision noise-shaping converters that can be integrated on a standard digital IC process is presented. This topology uses a multibit noise-shaping coder and a novel form of dynamic element matching to achieve high accuracy and long-term stability without requiring precision matching of components. A fourth-order noise-shaping D/A (digital-to-analog) conversion system using a 3-b quantizer and a dynamic element-matching internal D/A converter, fabricated in a standard double-metal 3-μm CMOS process, achieved 16-bit dynamic range and harmonic distortion below -90 dB. This multibit noise-shaping D/A conversion system achieved performance comparable to that of a 1-bit noise-shaping D/A conversion system that operated at nearly four times its clock rate.
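The abstract does not specify the paper's novel dynamic element matching scheme, so the sketch below shows only the general idea, using a rotating element pointer (data-weighted-averaging style) as a stand-in: because every unit element is used equally often over time, static mismatches are averaged out rather than requiring precision matching.

```python
# Illustrative stand-in, not the paper's scheme: DEM via a rotating pointer.
# Each conversion switches on `code` unit elements starting at the pointer,
# then advances the pointer, so all mismatched elements are used equally.

import random

NUM_ELEMENTS = 7  # unit elements of a 3-b (8-level) internal DAC
# Mismatched unit-element weights (nominally 1.0 each).
weights = [1.0 + random.uniform(-0.01, 0.01) for _ in range(NUM_ELEMENTS)]

pointer = 0

def dac(code):
    """Convert a digital code (0..NUM_ELEMENTS) to an analog level."""
    global pointer
    total = 0.0
    for i in range(code):
        total += weights[(pointer + i) % NUM_ELEMENTS]
    pointer = (pointer + code) % NUM_ELEMENTS  # rotate: equal long-term usage
    return total

print([round(dac(3), 4) for _ in range(5)])  # same code, rotating mismatch error
```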

3.
For the CCITT recommendations on frame synchronization (as enhanced in Jones and Al-Subbagh, 1985), the performance in the presence of jamming, interference or other disturbance is improved by using bit erasure information. The acquisition and misalignment detection times are reduced and the false exit time is increased, with only a minor reduction in the already large false entry time.

4.
A new bit allocation algorithm is proposed and applied in a stereo transform coder. This algorithm is a two-stage approach based on the minimisation of perceptual distortion as measured by artificial ears. The first stage uses a minimum frequency-weighted reconstruction error criterion, in which the weighting factors are related to the masking threshold and the absolute hearing threshold. The bit assignment obtained at the first stage is then fine-tuned at the second stage by a greedy algorithm that trades perceptual distortion against bit rate. A joint stereo perceptual transform coder incorporating this two-stage bit allocation algorithm is implemented. Listening tests show that this coder performs better than MPEG layer 2 at bit rates below 128 kbps and matches the quality of MPEG layer 2 at higher bit rates.
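A minimal sketch of the second-stage greedy idea follows, assuming the standard high-rate distortion model D_k = w_k · var_k · 2^(−2b_k), where w_k is a perceptual weight derived from the masking threshold. The weights, variances, and bit budget below are made-up illustration values, not from the paper:

```python
# Greedy bit allocation: repeatedly give the next bit to the band whose
# weighted distortion drops the most, under D_k = w_k * var_k * 2**(-2*b_k).

def greedy_allocate(variances, weights, total_bits):
    bits = [0] * len(variances)
    def distortion(k, b):
        return weights[k] * variances[k] * 2.0 ** (-2 * b)
    for _ in range(total_bits):
        gains = [distortion(k, bits[k]) - distortion(k, bits[k] + 1)
                 for k in range(len(bits))]
        best = max(range(len(bits)), key=lambda k: gains[k])
        bits[best] += 1
    return bits

print(greedy_allocate([10.0, 4.0, 1.0], [1.0, 0.5, 2.0], 12))
```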

5.
Recursive rectilinear tessellations such as the quadtree are widely used in image coding, but a regular tessellation, despite its simple geometry, may not suit image compression because it is too rigid to reflect the scene structure of an image. The paper presents a new image pyramid formed by adaptive, tree-structured segmentation as the framework of a predictive multiresolution image coder. Subjectively appealing compression results are obtained at different resolutions by scene-adaptive, tree-structured segmentation and by exploiting the statistical dependency between the layers of the image pyramid. The adaptive segmentation-based image coder is constructed by recursive, least-squares piecewise functional approximation. The seemingly expensive encoding process can be made efficient by an incremental least-squares computation technique. The decoding is simple and can be done in real time if assisted by existing hardware technology.
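The following sketch illustrates the recursive least-squares segmentation idea with the simplest piecewise model, a constant fit per block (the paper allows richer functions); the split threshold and toy image are illustrative assumptions:

```python
# Tree-structured segmentation by recursive LS approximation: split a block
# whenever its best constant fit leaves too much residual error.

def segment(img, x, y, w, h, thresh, leaves):
    block = [img[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    mean = sum(block) / len(block)                 # LS-optimal constant fit
    sse = sum((v - mean) ** 2 for v in block)      # residual error
    if sse <= thresh or w < 2 or h < 2:
        leaves.append((x, y, w, h, mean))          # accept leaf approximation
    else:                                          # recurse into quadrants
        hw, hh = w // 2, h // 2
        for dx, dy in [(0, 0), (hw, 0), (0, hh), (hw, hh)]:
            segment(img, x + dx, y + dy, hw, hh, thresh, leaves)

img = [[(i > 3) * 100 for i in range(8)] for _ in range(8)]  # toy edge image
leaves = []
segment(img, 0, 0, 8, 8, thresh=10.0, leaves=leaves)
print(len(leaves), "leaf blocks")  # the vertical edge forces exactly one split
```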

6.
Low-density parity-check codes over Gaussian channels with erasures (cited 2 times: 0 self-citations, 2 by others)
We consider low-density parity-check code (LDPCC) design for additive white Gaussian noise (AWGN) channels with erasures. This model, for example, represents a common situation in magnetic and optical recording where defects or thermal asperities in the system are detected and presented to the decoder as erasures. We give thresholds of regular and irregular LDPCCs and discuss practical code design over the mixed Gaussian/erasure channel. The analysis is an extension of the Gaussian approximation work of Chung et al. In the two limiting cases of no erasures and large signal-to-noise ratio (SNR), the analysis tends to the results of Chung et al. (see ibid., vol. 47, p. 657-670, Feb. 2001) and Luby et al. (1997), respectively, giving a general tool for a class of mixed channels. We derive a steady-state equation which gives a graphical interpretation of decoder convergence. This allows one to estimate the maximum erasure capability of the mixed channel or, conversely, to estimate the additional signal power required to compensate for the loss due to erasures. We see that a good (capacity-approaching) LDPCC over an AWGN channel is also good over the mixed channel up to a moderate erasure probability. We also investigate practical issues such as the maximum number of iterations of message-passing decoders, the coded block length, and types of erasure patterns (random/block erasures). Finally, we design an optimized LDPCC for the mixed channel, which shows better performance when the erasure probability is larger than a certain value (0.1 in our simulation) at the expense of performance degradation in the unerased (AWGN channel) and lower erasure probability regions (less than 0.1 in our simulation).
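A minimal sketch of how the mixed channel presents itself at the decoder input follows, assuming the usual BPSK mapping (0 → +1, 1 → −1) and the standard AWGN log-likelihood ratio LLR = 2y/σ²: an erased position carries no channel information, so its LLR is exactly zero, which is how message-passing decoding naturally absorbs erasures:

```python
# Mixed Gaussian/erasure channel front end: erased symbols enter the
# message-passing decoder as zero LLRs; the rest get the AWGN LLR 2y/sigma^2.

import random

def channel_llrs(bits, sigma, erasure_prob):
    llrs = []
    for b in bits:
        if random.random() < erasure_prob:
            llrs.append(0.0)                          # erasure: zero evidence
        else:
            y = (1 - 2 * b) + random.gauss(0, sigma)  # BPSK over AWGN
            llrs.append(2 * y / sigma ** 2)           # channel LLR
    return llrs

random.seed(1)
print([round(l, 2) for l in channel_llrs([0, 1, 0, 1, 0], 0.8, 0.2)])
```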

7.
In this paper, a highly efficient decoding algorithm is developed to correct both erasures and errors for Reed-Solomon (RS) codes, based on the Euclidean algorithm together with the Berlekamp-Massey (BM) algorithm. The new decoding algorithm computes the errata locator polynomial and the errata evaluator polynomial simultaneously without performing polynomial divisions, and there is no need for the computation of the discrepancies and the field element inversions. Also, the separate computation of the Forney syndrome needed in the decoder is completely avoided. As a consequence, the complexity of this new decoding algorithm is dramatically reduced. Finally, the new algorithm has been verified through a software simulation in C++. An illustrative example with the (255,239) RS code shows that the decoding process is approximately three times faster than that of the inverse-free Berlekamp-Massey algorithm.

8.
The use of block coding and errors-and-erasures decoding can enhance performance in frequency-hop (FH) communication systems, provided that a good scheme is employed to determine which symbols to erase. The problem of making erasure decisions from collections of receiver outputs is investigated in this paper. Methods to determine which received symbols to erase are derived from Bayesian decision theory. Decision rules are developed for a system with M-ary orthogonal signaling, noncoherent demodulation, and frequency-selective fading. One result is a class of Bayesian schemes in which erasure decisions are made independently from symbol to symbol. Within this class is a rule that uses signal amplitude estimates for improved performance. A second result is a Bayesian technique in which erasure decisions are mutually dependent and are made collectively for each codeword. These techniques are analyzed and compared with the performance of receivers that use erasure techniques requiring multiple applications of bounded-distance decoding. The performance of the Bayesian technique for dependent erasures is also compared with that of receivers that do not permit erasures. It is found that each of the Bayesian techniques offers substantial performance gains over errors-only decoding, and the dependent erasure scheme provides the best performance among the techniques of lower complexity.
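In the spirit of the per-symbol decision class, the sketch below shows one simple stand-in rule (not the paper's Bayesian statistic, which depends on the fading and jamming model): erase when the largest of the M noncoherent envelope detector outputs does not dominate the runner-up by a threshold ratio.

```python
# Per-symbol erase-or-decide rule on M noncoherent envelope outputs.
# `theta` is an illustrative threshold, not derived from the paper.

def decide_or_erase(envelopes, theta=1.5):
    """Return the symbol index, or None (erase) if the decision is unreliable."""
    order = sorted(range(len(envelopes)), key=lambda k: envelopes[k],
                   reverse=True)
    best, second = envelopes[order[0]], envelopes[order[1]]
    if best < theta * second:   # ambiguous: jamming/fading suspected
        return None             # declare an erasure
    return order[0]

print(decide_or_erase([0.2, 3.1, 0.4, 0.3]))  # clear winner -> symbol 1
print(decide_or_erase([2.0, 2.2, 0.1, 0.3]))  # ambiguous -> None (erased)
```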

9.
A very low bit rate video coder based on vector quantization (cited 1 time: 0 self-citations, 1 by others)
Describes a video coder based on a hybrid DPCM-vector quantization algorithm that is suited for bit rates ranging from 8 to 16 kb/s. The proposed approach involves segmenting difference images into variable-size and variable-shape blocks and performing segmentation and motion compensation simultaneously. The purpose of obtaining motion vectors for variable-size and variable-shape blocks is to improve the quality of motion estimation, particularly in those areas where the edges of moving objects are situated. For the larger blocks, decimation takes place in order to simplify vector quantization. For very active blocks, which are always of small dimension, a specific vector quantizer, the fuzzy classified vector quantizer (FCVQ), has been applied. The coding algorithm described displays good performance in the compression of test sequences at the rates of 8 and 16 kb/s; the signal-to-noise ratios obtained are good in both cases. The complexity of the coder implementation is comparable to that of conventional hybrid coders, while the decoder in this proposal is much simpler.

10.
A new algorithm is presented for solving the key equation that simultaneously computes the error locator polynomial and the errata evaluator polynomial. The algorithm is similar to the Berlekamp algorithm but is more symmetrical in its treatment of the iterated pairs of polynomials, making it particularly well suited to a highly parallel hardware implementation.

11.
Previously, the authors proposed an inverse-free Berlekamp-Massey (1968, 1969) algorithm to simplify the decoding of Reed-Solomon (RS) codes. This modified RS decoding method is the best known technique for finding the error locator polynomial. The inverse-free method is generalized here to find both errors and erasures. The basic idea of the new procedure is the replacement of the initial condition of the BM algorithm by the Forney (1965) syndromes. With this improved technique, the complexity of time-domain RS decoders for correcting both errors and erasures is reduced substantially compared with previous approaches.

12.
A method is presented for decoding erasures and errors in Reed-Solomon (RS) codes over GF(q). It uses fewer operations when the code is of medium or low rate, when the number of erasures is relatively large, and when q-1 is prime. This method can be used in conjunction with the customary method of decoding RS codes and can decrease the maximum number of operations needed to decode certain codes. This procedure is also applicable to generalized RS codes of length q over GF(q).

13.
A fast and efficient hybrid fractal-wavelet image coder (cited 1 time: 0 self-citations, 1 by others)
Despite the excellent visual quality and compression rate of fractal image coding, its applications have been limited by the exhaustive encoding time inherent in the method. This paper presents a new fast and efficient image coder that combines the speed of the wavelet transform with the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding to the remaining coefficients. Furthermore, image details and wavelet progressive transmission characteristics are maintained, no blocking effects from fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme achieves an average 94% reduction in encoding-decoding time compared with pure accelerated fractal coding. The simulations also compare the results with SPIHT wavelet coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium, and low bitrates.

14.
A vector enhancement of Said and Pearlman's set partitioning in hierarchical trees (SPIHT) methodology, named VSPIHT, has been proposed for embedded wavelet image compression. A major advantage of vector-based embedded coding with fixed-length VQs over scalar embedded coding is its superior robustness to noise. We show that vector set partitioning can effectively alter the balance of bits in the bit stream so that significantly fewer critical bits carrying significant information are transmitted, thereby improving inherent noise resilience. For low-noise channels, the critical bits are protected, while the degradation in reconstruction quality caused by errors in noncritical quantization information can be reduced by appropriate VQ indexing or by designing channel-optimized VQs for the successive refinement systems. For very noisy channels, unequal error protection of the critical and noncritical bits with either block codes or convolutional codes is used. Additionally, the error protection for the critical bits is changed from pass to pass. A buffering mechanism is used to produce an unequally protected bit stream without sacrificing the embedding property. Extensive simulation results are presented for noisy channels, including bursty channels.

15.
The embedded zero-tree wavelet (EZW) coding algorithm is a very effective technique for low-bitrate still image compression. In this paper, an improved EZW algorithm is proposed to achieve high compression performance in terms of PSNR and bitrate for lossy and lossless image compression, respectively. To reduce the number of zerotrees as well as the scanning and symbol redundancy of the existing EZW, the proposed method is based on a new significant-symbol map that is represented in a more efficient way. Furthermore, we develop new EZW-based schemes for achieving scalable colour image coding by efficiently exploiting the interdependency of the colour planes. Numerical results demonstrate a significant superiority of our scheme over the conventional EZW and other improved EZW schemes with respect to both objective and subjective criteria for lossy and lossless compression of greyscale and colour images.

16.
An embedded still image coder with rate-distortion optimization (cited 12 times: 0 self-citations, 12 by others)
It is well known that the fixed-rate coder achieves optimality when all coefficients are coded with the same rate-distortion (R-D) slope. In this paper, we show that the performance of the embedded coder can be optimized in a rate-distortion sense by coding the coefficients in order of decreasing R-D slope. We denote such a coding strategy rate-distortion optimized embedding (RDE). RDE allocates the available coding bits first to the coefficient with the steepest R-D slope, i.e., the largest distortion decrease per coding bit. The resultant coding bitstream can be truncated at any point and still maintain an optimal R-D performance. To avoid the overhead of transmitting the coding order, we use the expected R-D slope, which can be calculated from the already coded bits and is available in both the encoder and the decoder. With the probability estimation table of the QM-coder, the calculation of the R-D slope reduces to a lookup table operation. Experimental results show that the rate-distortion optimization significantly improves the coding efficiency over a wide bit rate range.
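A toy sketch of the RDE ordering principle follows: always spend the next coding bits on the coefficient with the steepest R-D slope. The coefficients and the model that each refinement bit halves the quantization step (and thus quarters the squared error) are illustrative assumptions, not the QM-coder machinery of the paper:

```python
# Greedy R-D-slope ordering: the coefficient whose next refinement bit buys
# the largest distortion drop is coded first; a max-heap keyed on slope.

import heapq

def rde_order(coeffs, passes=6):
    # slope = distortion drop per refinement bit = d - d/4 = 0.75*d
    heap = [(-(c * c) * 0.75, k, c * c) for k, c in enumerate(coeffs)]
    heapq.heapify(heap)
    order = []
    for _ in range(passes):
        neg_slope, k, d = heapq.heappop(heap)
        order.append(k)                      # code one refinement bit of k
        d /= 4.0                             # refinement quarters the error
        heapq.heappush(heap, (-d * 0.75, k, d))
    return order

print(rde_order([9.0, 2.0, 5.0]))  # -> [0, 2, 0, 2, 0, 1]
```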

17.
With the development of modern imaging techniques, every medical examination can result in a huge volume of image data. Analysis, storage and/or transmission of these data demand high compression without any loss of diagnostically significant information. Although various 3-D compression techniques have been proposed, they have not been able to meet current requirements. This paper proposes a novel method to compress 3-D medical images based on a human vision model that removes visually insignificant information. A block matching algorithm applied to exploit anatomical symmetry removes the spatial redundancies. The results obtained are compared with those of lossless compression techniques and show better compression without any degradation in visual quality. The rate-distortion performance of the proposed coders is compared with that of state-of-the-art lossy coders. A subjective evaluation performed by medical experts confirms that the visual quality of the reconstructed images is excellent.

18.
In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, owing to the applicable Digital Imaging and Communications in Medicine (DICOM) requirements, is limited to the standard discrete cosine transform-based JPEG (Joint Photographic Experts Group) scheme. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main issue of concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting a gain factor. This goal is achieved by considering the region of interest, optimum bit allocation, human visual system constraints, and an optimum coding technique. These parameters are jointly optimized to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance. The proposed approach exhibits objectively a 2.16-dB improvement in peak signal-to-noise ratio and subjectively a 25% improvement over the most widely used compression techniques.
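The gain-factor mechanism can be sketched as follows: one Q table is tuned for the image class and then scaled linearly to trade rate for quality. The 4x4 table and coefficients below are made-up illustration values (JPEG uses 8x8 tables, and the paper's optimized echocardiac tables are not reproduced in the abstract):

```python
# Scale a base quantization table by a gain factor: larger gain -> coarser
# quantization -> higher compression. Entries clamped to the range [1, 255].

BASE_Q = [
    [16, 11, 10, 16],
    [12, 12, 14, 19],
    [14, 13, 16, 24],
    [14, 17, 22, 29],
]

def scaled_q(gain):
    return [[min(255, max(1, round(q * gain))) for q in row] for row in BASE_Q]

def quantize(coeffs, qtable):
    """Quantize a block of DCT coefficients with the given Q table."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]

coeffs = [[96, 44, 20, 8], [36, 24, 12, 4], [12, 8, 4, 2], [4, 2, 2, 1]]
print(quantize(coeffs, scaled_q(2.0))[0])  # coarser table at gain 2.0
```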

19.
Erasure selection for errors-and-erasures decoding algorithms is investigated and analyzed, where declaring erasures provides additional gain over hard-decision decoding. To derive an analytical criterion for optimal erasure selection, we first investigate the effects of erasures on decoding performance, from which a metric function for erasure selection is derived. From the derived metric, a suboptimal threshold is defined, which can be used for errors-and-erasures decoding of channel codes with large code distance. Moreover, application to M-ary (M>2) PAM constellations is also described. To verify the proposed erasure threshold, simulation results with popular channel codes are included.
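The flavour of a threshold-based erasure rule for M-ary PAM can be sketched as follows; the paper derives its threshold analytically from the metric function, whereas the fixed `delta` below is purely illustrative: erase a symbol whenever the received sample falls so close to a decision boundary that a hard decision is unreliable.

```python
# Threshold erasure rule for M-PAM hard decisions: samples within `delta`
# of a decision boundary are erased instead of decided.

def pam_decide_or_erase(y, levels, delta=0.3):
    best = min(levels, key=lambda s: abs(y - s))
    # Decision boundaries lie midway between adjacent constellation levels.
    boundaries = [(a + b) / 2 for a, b in zip(levels, levels[1:])]
    if any(abs(y - t) < delta for t in boundaries):
        return None          # too close to a boundary: erase
    return best

levels = [-3, -1, 1, 3]      # 4-PAM constellation
print(pam_decide_or_erase(0.9, levels))    # confident -> 1
print(pam_decide_or_erase(0.05, levels))   # near boundary 0 -> None (erased)
```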

20.
Gameiro, A. Electronics Letters, 1998, 34(21): 2000-2002
A bit synchronisation algorithm for channels with data-dependent noise which operates with one sample per symbol is presented. The algorithm uses the same information as the Mueller and Muller (M&M) algorithm, and is optimised for operation with data-dependent noise. The performance is derived and it is shown that significant improvements over the M&M algorithm can be obtained in practical optical channels.
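For reference, the classical Mueller and Muller timing error detector that the paper improves upon also uses only one sample per symbol; its standard form is e[k] = a[k-1]·x[k] - a[k]·x[k-1], shown below with illustrative sample values (the sign convention for early/late varies by reference):

```python
# Classical M&M timing error detector: combines the current and previous
# one-per-symbol samples x with the corresponding symbol decisions a.

def mm_timing_error(samples, decisions):
    """e[k] = a[k-1]*x[k] - a[k]*x[k-1] for k = 1..N-1."""
    errors = []
    for k in range(1, len(samples)):
        e = decisions[k - 1] * samples[k] - decisions[k] * samples[k - 1]
        errors.append(e)
    return errors

x = [0.9, -1.1, 1.05, 0.95]   # one sample per symbol (illustrative)
a = [1, -1, 1, 1]             # corresponding symbol decisions
print(mm_timing_error(x, a))
```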
