Similar Documents
20 similar documents found.
1.
The problem of decoding binary linear block codes has received much attention; the two extremes are optimal, high-complexity soft-decision (or maximum-likelihood) decoding and lower-performance, much lower complexity hard-decision (or algebraic) decoding. This article considers a class of decoders that first perform hard-decision decoding; second, test whether that is enough, i.e., whether its result matches the result of soft-decision decoding; and third, continue to search if a match is not found. The advantage of such a testing procedure is that if the hard-decision decoding result is found to be enough (called a success for the test), then the computational effort expended by the decoder is low. The performance, as measured by the probability of a success, of a variety of simple tests of the hard-decision codeword is analyzed.

2.
Low-Complexity Complex Sphere Decoding of Space-Frequency Block Codes under PSK Modulation
张翼  张灿  丁赤飚  凃国防 《电子学报》2008,36(4):819-823
For space-frequency block codes under PSK modulation, complex sphere decoding has lower complexity than real sphere decoding. However, when the initial radius of the complex sphere decoder tends to infinity, the complexity of ordering the candidate symbols is high. For PSK symbols, this paper proposes ordering the symbols of each layer in a zigzag fashion along the phase (polar-angle) dimension, centered on the ordering center point. The ordered symbol sequence can be obtained quickly by table look-up; compared with the generic complex sphere decoding algorithm, the table-look-up ordered sphere decoding algorithm saves about 61% of the complexity under 16-PSK modulation at an average bit SNR of 14 dB.
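As an illustration of the zigzag ordering described in this abstract, the sketch below (Python; the function name, interface, and indexing convention are assumptions, and it does not reproduce the authors' table-look-up implementation) enumerates the M-PSK points of one layer by increasing angular distance from the point nearest the ordering center, alternating sides:

    import numpy as np

    def zigzag_psk_order(center, M):
        """Enumerate M-PSK symbols by zigzagging in phase around the symbol
        closest to `center`: nearest first, then alternating one step
        counter-clockwise / clockwise."""
        k0 = int(np.round(np.angle(center) / (2 * np.pi / M))) % M  # nearest PSK index
        order = [k0]
        for step in range(1, M // 2 + 1):
            order.append((k0 + step) % M)
            if len(order) < M:
                order.append((k0 - step) % M)
        return [np.exp(2j * np.pi * k / M) for k in order]

    # e.g. zigzag_psk_order(0.3 + 0.9j, 16) lists all 16-PSK points, closest in angle first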

3.
A novel noncoherent block coding scheme, called noncoherent block-coded MPSK (NBC-MPSK), was proposed recently. In this paper, we present further research results on NBC-MPSK. We first focus on the rotational invariance (RI) of NBC-MPSK. Based on the RI property of NBC-MPSK with multistage decoding, a noncoherent near-optimal linear-complexity multistage decoder for NBC-MPSK is proposed. Then we investigate a tree-search ML decoding algorithm for NBC-MPSK. The derived algorithm is shown to have low complexity and excellent error performance. In this paper, we also utilize the idea of NBC-MPSK to design noncoherent space-time block codes, called noncoherent space-time block-coded MPSK (NSTBC-MPSK). For two transmit antennas, we propose a signal set with set partitioning and derive the minimum noncoherent distance of NSTBC-MPSK with this signal set. For the decoding of NSTBC-MPSK, we modify the ML decoding algorithm of NBC-MPSK and propose an iterative hard-decision decoding algorithm. Compared with training codes and unitary space-time modulation, NBC-MPSK and NSTBC-MPSK have larger minimum noncoherent distance and thus better error performance under the noncoherent ML decoder.

4.
In this letter, formulas for the total number of decodable and undecodable vectors are derived for a general (n,k) q-ary linear block code. These formulas are used to find the probabilities of decoder error and decoder failure for Reed-Solomon codes under errors-and-erasures decoding. The resulting analytical expressions are computationally efficient and allow accurate calculation of very small values of decoder failure probabilities. The formulas are used to analyze the performance of type-I hybrid automatic-repeat-request (HARQ) protocols with two decoding diameters.
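For orientation only (not the letter's exact decodable/undecodable-vector enumeration): under errors-only bounded-distance decoding, the probability that the decoder does not return the transmitted codeword, given a symbol error probability ps on a q-ary symmetric channel, has the standard binomial form sketched below; the function name and the (255, 239) example values are assumptions.

    from math import comb

    def rs_not_corrected_prob(n, k, ps):
        """Probability that a bounded-distance errors-only decoder fails to
        return the transmitted codeword (decoder error + failure combined)."""
        t = (n - k) // 2                       # error-correction radius
        p_ok = sum(comb(n, i) * ps**i * (1 - ps)**(n - i) for i in range(t + 1))
        return 1.0 - p_ok

    print(rs_not_corrected_prob(255, 239, 1e-3))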

5.
We present constructions of space–time (ST) codes based on lattice coset coding. First, we focus on ST code constructions for the short block-length case, i.e., when the block length is equal to or slightly larger than the number of transmit antennas. We present constructions based on dense lattice packings and nested lattice (Voronoi) shaping. Our codes achieve the optimal diversity–multiplexing tradeoff (DMT) of quasi-static multiple-input multiple-output (MIMO) fading channels for any fading statistics, and perform very well also at practical, moderate values of signal-to-noise ratios (SNR). Then, we extend the construction to the case of large block lengths, by using trellis coset coding. We provide constructions of trellis coded modulation (TCM) schemes that are endowed with good packing and shaping properties. Both short-block and trellis constructions allow for a reduced complexity decoding algorithm based on minimum mean-squared error generalized decision feedback equalizer (MMSE-GDFE) lattice decoding and a combination of this with a Viterbi TCM decoder for the TCM case. Beyond the interesting algebraic structure, we exhibit codes whose performance is among the state-of-the art considering codes with similar encoding/decoding complexity.

6.
A method for estimating the performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms on binary symmetric channels (BSCs) is proposed. Based on the enumeration of the smallest weight error patterns that cannot be all corrected by the decoder, this method estimates both the frame error rate (FER) and the bit error rate (BER) of a given LDPC code with very good precision for all crossover probabilities of practical interest. Through a number of examples, we show that the proposed method can be effectively applied to both regular and irregular LDPC codes and to a variety of hard-decision iterative decoding algorithms. Compared with the conventional Monte Carlo simulation, the proposed method has a much smaller computational complexity, particularly for lower error rates.
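A minimal sketch of the estimation principle, assuming the smallest-weight error patterns that the iterative decoder cannot correct have already been enumerated (the counts below are hypothetical); higher-weight terms are neglected, which is the small-crossover-probability regime the method targets:

    def fer_estimate(n, uncorrectable, p):
        """FER estimate on a BSC with crossover probability p, where
        uncorrectable = {weight w: number of weight-w patterns the decoder
        fails to correct}."""
        return sum(cnt * p**w * (1 - p)**(n - w)
                   for w, cnt in uncorrectable.items())

    counts = {3: 12, 4: 310}          # hypothetical enumeration results
    print(fer_estimate(1008, counts, 5e-3))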

7.
Hard sphere decoding (HSD) has well-appreciated merits for near-optimal demodulation of multiuser, block single-antenna or multi-antenna transmissions over multi-input multi-output (MIMO) channels. At increased complexity, a soft version of sphere decoding (SD), so-termed list SD (LSD), has been recently applied to coded layered space-time (LST) systems enabling them to approach the capacity of MIMO channels. By introducing a novel bit-level multi-stream coded LST transmitter along with a soft-to-hard conversion at the decoder, we show how to achieve the near-capacity performance of LSD, and even outperform it as the size of the block to be decoded (M) increases. Specifically, for binary real LST codes, we develop exact max-log-based SD schemes with M + 1 HSD steps, and an approximate alternative with only one HSD step to trade off performance for average complexity. These schemes apply directly to the real and imaginary parts of quaternary phase-shift keying signaling, and also to quadrature amplitude modulation signaling after incorporating an appropriate interference estimation and cancellation module. We corroborate our near-optimal soft detection (SoD) algorithms based on HSD (SoD-HSD) with simulations.

8.
A Bidirectional Efficient Algorithm for Searching code Trees (BEAST) is proposed for efficient soft-output decoding of block codes and concatenated block codes. BEAST operates on trees corresponding to the minimal trellis of a block code and finds a list of the most probable codewords. The complexity of the BEAST search is significantly lower than the complexity of trellis-based algorithms, such as the Viterbi algorithm and its list generalizations. The outputs of BEAST, a list of best codewords and their metrics, are used to obtain approximate a posteriori probabilities (APPs) of the transmitted symbols, yielding a soft-input soft-output (SISO) symbol decoder referred to as the BEAST-APP decoder. This decoder is employed as a component decoder in iterative schemes for decoding of product and incomplete product codes. Its performance and convergence behavior are investigated using extrinsic information transfer (EXIT) charts and compared to existing decoding schemes. It is shown that the BEAST-APP decoder achieves performances close to the Bahl–Cocke–Jelinek–Raviv (BCJR) decoder with a substantially lower computational complexity.

9.
We consider receiver design for coded transmission over linear Gaussian channels. We restrict ourselves to the class of lattice codes and formulate the joint detection and decoding problem as a closest lattice point search (CLPS). Here, a tree search framework for solving the CLPS is adopted. In our framework, the CLPS algorithm is decomposed into the preprocessing and tree search stages. The role of the preprocessing stage is to expose the tree structure in a form matched to the search stage. We argue that the forward and feedback (matrix) filters of the minimum mean-square error decision feedback equalizer (MMSE-DFE) are instrumental for solving the joint detection and decoding problem in a single search stage. It is further shown that MMSE-DFE filtering allows for solving underdetermined linear systems and using lattice reduction methods to diminish complexity, at the expense of a marginal performance loss. For the search stage, we present a generic method, based on the branch and bound (BB) algorithm, and show that it encompasses all existing sphere decoders as special cases. The proposed generic algorithm further allows for an interesting classification of tree search decoders, sheds more light on the structural properties of all known sphere decoders, and inspires the design of more efficient decoders. In particular, an efficient decoding algorithm that resembles the well-known Fano sequential decoder is identified. The excellent performance-complexity tradeoff achieved by the proposed MMSE-DFE Fano decoder is established via simulation results and analytical arguments in several multiple-input multiple-output (MIMO) and intersymbol interference (ISI) scenarios.
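To picture the search stage in isolation, the sketch below is a plain depth-first closest-point search over an upper-triangular system (as delivered by a preprocessing stage such as QR or MMSE-DFE filtering); it is a generic pruned enumeration under assumed interfaces, not the paper's branch-and-bound framework or the MMSE-DFE Fano decoder.

    import numpy as np

    def sphere_decode(R, y, symbols):
        """Depth-first search for argmin_x ||y - R x||^2 over a finite symbol
        set, with R upper triangular; branches are pruned once the partial
        distance exceeds the best complete candidate found so far."""
        n = R.shape[0]
        best = {"dist": np.inf, "x": None}

        def dfs(level, x, dist):
            if dist >= best["dist"]:
                return                                    # prune this branch
            if level < 0:
                best["dist"], best["x"] = dist, x.copy()  # new best leaf
                return
            for s in symbols:
                x[level] = s
                r = y[level] - R[level, level:] @ x[level:]
                dfs(level - 1, x, dist + abs(r) ** 2)

        dfs(n - 1, np.zeros(n, dtype=complex), 0.0)
        return best["x"]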

10.
Simplified receiver design for STBC binary continuous phase modulation
Existing space-time codes have focused on multiple-antenna systems with linear modulation schemes such as phase-shift keying and quadrature amplitude modulation. Continuous phase modulation (CPM) is an attractive scheme for digital transmission because of its constant envelope, which is needed for power-efficient transmitters. Recent research has shown that space-time coded CPM can achieve transmit diversity to improve performance while maintaining the compact spectrum of CPM signals. However, these efforts mainly combine space-time coding (STC) with CPM to achieve spatial diversity at the cost of a high decoding complexity. In this paper, we design space-time block codes (STBC) for binary CPM with modulation index h = 1/2 and derive low-complexity receivers for these systems. The proposed scheme has a much lower decoding complexity than STC CPM with the Viterbi decoder and still achieves near-optimum error performance.

11.
In this paper, we propose a low-complexity decoder architecture for low-density parity-check (LDPC) codes using a variable quantization scheme as well as an efficient highly-parallel decoding scheme. In finite-precision implementations of the sum-product algorithm for decoding LDPC codes, there is an important tradeoff between decoding performance and hardware complexity, caused by two dominant area-consuming factors: the memory for storing updated messages and the look-up table (LUT) implementing the nonlinear function Ψ(x). The proposed variable quantization schemes offer a large reduction in the hardware complexity of both the LUT and the memory. In addition, an efficient highly-parallel decoder architecture for quasi-cyclic (QC) LDPC codes can be implemented with reduced hardware complexity by using the partially block-overlapped decoding scheme, and with minimized power consumption by reducing the total number of memory accesses for updated messages. For (3, 6) QC LDPC codes, the proposed schemes reduce the implementation area of the highly-parallel decoder by 33% for the memory and by approximately 28% for the check-node and variable-node computation units, without significant performance degradation. The number of memory accesses is also reduced by 20%.
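For context, the nonlinear function in the sum-product check-node update is commonly Ψ(x) = −ln tanh(x/2); a uniformly quantized LUT for it might be built as sketched below. The bit-width and clipping range are assumptions, and the paper's variable quantization would allocate levels non-uniformly rather than as in this sketch.

    import numpy as np

    def psi(x):
        """Psi(x) = -ln(tanh(x/2)), defined for x > 0."""
        return -np.log(np.tanh(x / 2.0))

    def build_psi_lut(n_bits=6, x_max=8.0):
        """Uniformly quantized look-up table for Psi over (0, x_max]."""
        levels = 2 ** n_bits
        xs = (np.arange(levels) + 0.5) * (x_max / levels)  # bin centres, x > 0
        return psi(xs)

    lut = build_psi_lut()   # 64-entry table; index = quantized input magnitude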

12.
New multilevel block codes for Rayleigh-fading channels are presented. At high signal-to-noise ratios (SNRs), the proposed block codes achieve better bit-error performance than TCM codes optimized for fading channels, with comparable decoder complexity and bandwidth efficiency. The code construction is based on variable-length binary component block codes. As component codes for the 8-PSK multilevel block construction, the authors propose two modified forms of Reed-Muller codes giving a good trade-off between decoder complexity and effective code rate. Code design criteria are derived from the error-performance analysis. Multistage decoding shows very slight degradation of bit-error performance relative to the maximum-likelihood algorithm.

13.
We consider coded modulation schemes for the block-fading channel. In the setting where a codeword spans a finite number N of fading degrees of freedom, we show that coded modulations of rate R bits per complex dimension, over a finite signal set χ ⊆ ℂ of size 2^M, achieve the optimal rate-diversity tradeoff given by the Singleton bound δ(N, M, R) = 1 + ⌊N(1 − R/M)⌋, for R ∈ (0, M]. Furthermore, we show also that the popular bit-interleaved coded modulation achieves the same optimal rate-diversity tradeoff. We present a novel coded modulation construction based on blockwise concatenation that systematically yields Singleton-bound-achieving turbo-like codes defined over an arbitrary signal set χ ⊂ ℂ. The proposed blockwise concatenation significantly outperforms conventional serial and parallel turbo codes in the block-fading channel. We analyze the ensemble average performance under maximum-likelihood (ML) decoding of the proposed codes by means of upper bounds and tight approximations. We show that, differently from the additive white Gaussian noise (AWGN) and fully interleaved fading cases, belief-propagation iterative decoding performs very close to ML on the block-fading channel for any signal-to-noise ratio (SNR) and even for relatively short block lengths. We also show that, at constant decoding complexity per information bit, the proposed codes perform close to the information outage probability for any block length, while standard block codes (e.g., obtained by trellis termination of convolutional codes) have a gap from outage that increases with the block length: this is a different and more subtle manifestation of the so-called "interleaving gain" of turbo codes.
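A quick numerical check of the Singleton bound quoted above (illustrative only): with N = 4 fading blocks, a 16-point signal set (M = 4) and rate R = 2 bits per complex dimension, the optimal diversity is δ = 1 + ⌊4(1 − 2/4)⌋ = 3.

    from math import floor

    def singleton_diversity(N, M, R):
        """Optimal rate-diversity tradeoff delta(N, M, R) for a block-fading
        channel with N fading blocks and a 2**M-point signal set, R in (0, M]."""
        return 1 + floor(N * (1 - R / M))

    print(singleton_diversity(4, 4, 2))   # -> 3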

14.
Generalized minimum distance (GMD) decoding of Reed–Solomon (RS) codes can correct more errors than conventional hard-decision decoding by running error-and-erasure decoding multiple times for different erasure patterns. The latency of the GMD decoding can be reduced by the Kötter’s one-pass decoding scheme. This scheme first carries out an error-only hard-decision decoding. Then all pairs of error-erasure locators and evaluators are derived iteratively in one run based on the result of the error-only decoding. In this paper, a more efficient interpolation-based one-pass GMD decoding scheme is studied. Applying the re-encoding and coordinate transformation, the result of erasure-only decoding can be directly derived. Then the locator and evaluator pairs for other erasure patterns are generated iteratively by applying interpolation. A simplified polynomial selection scheme is proposed to pass only one pair of locator and evaluator to successive decoding steps and a low-complexity parallel Chien search architecture is developed to implement this selection scheme. With the proposed polynomial selection architecture, the interpolation can run at the full speed to greatly increase the throughput. After efficient architectures and effective optimizations are employed, a generalized hardware complexity analysis is provided for the proposed interpolation-based decoder. For a (255, 239) RS code, the high-speed interpolation-based one-pass GMD decoder can achieve 53% higher throughput than the Kötter’s decoder with slightly more hardware requirement. In terms of speed-over-area ratio, our design is 51% more efficient. In addition, compared to other soft-decision decoders, the high-speed interpolation-based GMD decoder can achieve better performance-complexity tradeoff.
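For background, the classic GMD outer loop that the Kötter and interpolation-based schemes accelerate can be sketched as below; the errors-and-erasures decoder and the per-symbol reliabilities are assumed inputs, and the candidate-selection metric is simplified (true GMD selects by a generalized-distance/correlation criterion against the soft values).

    import numpy as np

    def gmd_decode(r_hard, reliabilities, ee_decode, d_min):
        """Run errors-and-erasures decoding for erasure counts 0, 2, 4, ...
        up to d_min - 1, erasing the least reliable positions each time, and
        keep the candidate closest to the hard-decision word."""
        order = np.argsort(reliabilities)          # least reliable first
        best, best_dist = None, np.inf
        for n_erase in range(0, d_min, 2):
            cand = ee_decode(r_hard, order[:n_erase])
            if cand is None:                       # this erasure pattern failed
                continue
            dist = np.count_nonzero(cand != r_hard)
            if dist < best_dist:
                best, best_dist = cand, dist
        return best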

15.
We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the tradeoff between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's Local Lemma (LLL) and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood (ML) decoding.

16.
A probabilistic hard-decision decoding algorithm based on the weight distribution of binary block codes and the random error distribution in the channel is briefly described. It reduces the number of look-up iterations performed in the conventional exhaustive search table look-up minimum distance decoder.
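The idea of searching error patterns in order of decreasing channel probability (i.e., increasing Hamming weight, for a BSC with p < 0.5) and stopping at the first pattern that explains the syndrome can be sketched generically as below; this is an illustration of the early-stopping principle, not the specific weight-distribution-based algorithm of the paper.

    from itertools import combinations
    import numpy as np

    def syndrome_search_decode(H, r):
        """Hard-decision decoding of a binary block code with parity-check
        matrix H: try error patterns of weight 1, 2, ... and stop at the
        first one matching the syndrome. Exponential in general; for
        illustration only."""
        n = H.shape[1]
        s = H @ r % 2
        if not s.any():
            return r                               # already a codeword
        for w in range(1, n + 1):
            for pos in combinations(range(n), w):
                e = np.zeros(n, dtype=int)
                e[list(pos)] = 1
                if np.array_equal(H @ e % 2, s):
                    return (r + e) % 2             # most likely explanation
        return r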

17.
We consider the decoding problem for low-density parity-check codes, and apply nonlinear programming methods. This extends previous work using linear programming (LP) to decode linear block codes. First, a multistage LP decoder based on the branch-and-bound method is proposed. This decoder makes use of the maximum-likelihood-certificate property of the LP decoder to refine the results when an error is reported. Second, we transform the original LP decoding formulation into a box-constrained quadratic programming form. Efficient linear-time parallel and serial decoding algorithms are proposed and their convergence properties are investigated. Extensive simulation studies are performed to assess the performance of the proposed decoders. It is seen that the proposed multistage LP decoder outperforms the conventional sum-product (SP) decoder considerably for low-density parity-check (LDPC) codes with short to medium block length. The proposed box-constrained quadratic programming decoder has less complexity than the SP decoder and yields much better performance for LDPC codes with regular structure.
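As a generic illustration of the box-constrained quadratic programming form (not the paper's objective, which comes from the LP decoding relaxation, nor its linear-time algorithms), a projected-gradient iteration on min 0.5 x'Qx + c'x subject to lo <= x <= hi looks like this; the step size and box limits are assumptions.

    import numpy as np

    def box_qp_projected_gradient(Q, c, lo=0.0, hi=1.0, step=0.1, iters=200):
        """Projected gradient descent for a box-constrained quadratic program:
        take a gradient step, then clip back into the box."""
        x = np.clip(np.zeros_like(c, dtype=float), lo, hi)
        for _ in range(iters):
            grad = Q @ x + c
            x = np.clip(x - step * grad, lo, hi)
        return x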

18.
We study the design of optimal signals for bandwidth-efficient linear coded modulation. Previous results show that for linear channels with intersymbol interference (ISI), reduced-search decoding algorithms have near-maximum-likelihood error performance, but with much smaller complexity than the Viterbi decoder. Consequently, the controlled ISI introduced by a lowpass filter can be practically used for bandwidth reduction. Such spectrum shaping filters comprise an explicit coded modulation, for which we seek the optimal design. We simultaneously constrain the bandwidth and maximize the minimum Euclidean distance between signals. We show that under quite general assumptions the problem can be formulated as a linear program, and solved with well-known efficient optimization techniques. Numerical results are presented, and the performance of the optimal signals, measured by their combined bandwidth and noise immunity, is analyzed. The new codes are comparable to set-partition (TCM) trellis codes. Tests of an M-algorithm decoder confirm this and show that the performance occurs at small detection complexity.

19.
We present an importance sampling (IS) technique for evaluating the word-error rate (WER) and bit-error rate (BER) performance of binary linear block codes under hard-decision decoding. This IS technique takes advantage of the invariance of the decoding outcome to the transition probability of the binary symmetric channel given a received error pattern, and is equivalent to the method of stratification for variance reduction. A thorough analysis of the accuracy of the proposed signal-to-noise-ratio-invariant IS (IIS) estimator based on computing its relative bias and standard deviation is provided. Under certain conditions, which may be achieved fairly easily for certain code and decoder combinations, we demonstrate that it is possible to use the proposed IIS technique to accurately evaluate the WER and BER to arbitrarily low values. Further, in all cases, the probability estimates obtained via IIS always serve as a lower bound on the true probability values.
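The invariance exploited here can be sketched as stratification over the number of channel errors: P(word error | exactly w errors) does not depend on the BSC crossover probability, so it can be estimated once by simulation and then combined analytically for any p. The decoder callback and trial count below are assumed interfaces, and strata missing from the dictionary are treated as error-free (hence a lower bound, as in the abstract).

    from math import comb
    import random

    def estimate_conditional_wer(n, w, decode_ok, trials=1000):
        """Monte-Carlo estimate of P(word error | exactly w errors);
        decode_ok(error_positions) returns True if the decoder recovers
        the transmitted codeword."""
        fails = sum(not decode_ok(random.sample(range(n), w))
                    for _ in range(trials))
        return fails / trials

    def wer(n, p, cond_wer):
        """Combine p-independent strata estimates cond_wer = {w: P(err | w)}
        into the word-error rate at crossover probability p."""
        return sum(comb(n, w) * p**w * (1 - p)**(n - w) * q
                   for w, q in cond_wer.items())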

20.
A symbol-by-symbol maximum a posteriori (MAP) decoding algorithm for high-rate convolutional codes applying reciprocal dual convolutional codes is presented. The advantage of this approach is a reduction of the computational complexity since the number of codewords to consider is decreased. All requirements for iterative decoding schemes are fulfilled. Since tail-biting convolutional codes are equivalent to quasi-cyclic block codes, the decoding algorithm for truncated or terminated convolutional codes is modified to obtain a soft-in/soft-out decoder for high-rate quasi-cyclic block codes which also uses the dual code because of complexity reasons. Additionally, quasi-cyclic block codes are investigated as component codes for parallel concatenation. Simulation results obtained by iterative decoding are compared with union bounds for maximum likelihood decoding. The results of a search for high-rate quasi-cyclic block codes are given in the appendix.
