Similar Documents
20 similar documents found (search time: 15 ms)
1.
Improved parallel weighted bit-flipping decoding algorithm for LDPC codes
《Communications, IET》2009,3(1):91-99
Aiming at a low-complexity decoder with fast decoding convergence for short- and medium-length low-density parity-check (LDPC) codes, an improved parallel weighted bit-flipping (IPWBF) algorithm, which can be applied flexibly to two classes of codes, is presented here. For LDPC codes with low column weight in their parity-check matrix, both the bootstrapping and loop-detection procedures described in the existing literature are included in IPWBF. Furthermore, a novel delay-handling procedure is introduced to prevent codeword bits of high reliability from being flipped too hastily. For large-column-weight finite-geometry LDPC codes, only the delay-handling procedure is included in IPWBF, to show its effectiveness. Extensive simulation results demonstrate that the proposed algorithm achieves a good tradeoff between performance and complexity.
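The abstract above builds on the classic weighted bit-flipping (WBF) idea. As a hedged sketch (not the paper's IPWBF itself), one WBF iteration might look like the following; the data layout and names are illustrative assumptions:

```python
# Hedged sketch of one iteration of the weighted bit-flipping (WBF) idea
# underlying IPWBF. H is a list of parity-check rows given as bit-index
# lists, y holds received soft values (BPSK), hard holds current hard
# decisions in {0, 1}. All names and structure are illustrative.

def wbf_iteration(H, y, hard):
    # syndrome: 1 for each unsatisfied parity check
    syndrome = [sum(hard[n] for n in row) % 2 for row in H]
    if not any(syndrome):
        return hard, True                      # valid codeword reached
    # check reliability = smallest |y| among the bits in that check
    w = [min(abs(y[n]) for n in row) for row in H]
    # inversion metric: unsatisfied checks vote +w, satisfied checks -w
    E = [sum((2 * syndrome[m] - 1) * w[m]
             for m, row in enumerate(H) if n in row)
         for n in range(len(y))]
    flip = max(range(len(y)), key=lambda n: E[n])
    hard = hard[:]
    hard[flip] ^= 1                            # flip the most suspect bit
    return hard, False
```

The paper's contributions (bootstrapping, loop detection, delay handling) would sit on top of a loop over such iterations.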

2.
Multiple-symbol pruning automatic sphere decoding for differential unitary space-time cooperative systems
Given the high computational complexity of multiple-symbol differential detection (MSDD) in differential cooperative systems, a pruning automatic sphere detection (PASD) algorithm is introduced to reduce the detection complexity of differential unitary space-time cooperative systems. The PASD algorithm builds on sphere decoding, combining the ideas of automatic sphere decoding and pruned sphere decoding. The performance of the decoding algorithm is analyzed in terms of both the complexity consumed in node expansion and the system's bit error rate. Simulations show that, in a differential unitary space-time cooperative system, the complexity curve of the PASD algorithm converges faster than that of the sphere detection (SD) algorithm while the error performance remains almost unchanged. With block length N = 5 and SNR = 15 dB, the average numbers of expanded nodes for PASD and SD are 14.1386 and 78.9505 respectively, greatly reducing system complexity; moreover, when SNR > 16 dB, adding cooperative nodes improves the system's error performance without significant loss in complexity.

3.
Polar codes have become a new research focus in channel coding owing to their high reliability, practical linear encoding/decoding complexity, and their status as the only codes theoretically proven to achieve the Shannon limit. Research on their encoding and decoding methods has been extended to many channel types and application areas, but theoretical proofs and applied studies for underwater acoustic channels remain relatively scarce and lagging. For the underwater acoustic channel, with its complex characteristics of pronounced multipath, Doppler spread and limited bandwidth, this article proposes a matching polar-code...

4.
We present a novel algorithm for imposing the maximum-transition-run (MTR) code constraint in the decoding of low-density parity-check (LDPC) codes over a partial response channel. This algorithm provides a gain of about 0.2 dB. We also develop log and max-log versions of the MTR enforcer, similar to the well-known "log-MAP" (maximum a posteriori) and "max-log-MAP" variants of the LDPC decoder, that have performance equivalent to that of the original version.

5.
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.

6.
Novel soft-input soft-output (SISO) decoding algorithms suitable for turbo codes are proposed with a good compromise between complexity and bit error rate (BER) performance. The algorithms are based on the application of the max/max* (i.e. Jacobian logarithm) operations at different levels when computing the decoder soft-output value. It is observed that some decoding schemes from the authors' previously published work fall into the family of methods described here. The effect is to provide a range of possibilities allowing system designers to make their own choices for turbo code BER performance against complexity.

7.
Zhai  F. Xin  Y. Fair  I.J. 《Communications, IET》2007,1(6):1170-1178
Trellis-based error-control (EC) codes, such as convolutional or turbo codes, are integrated with guided scrambling (GS) multimode coding to generate DC-free GS-convolutional/turbo codes. On the basis of the generators of the convolutional/turbo code, we employ puncturing or flipping to ensure that the EC-coded sequences are DC-free. At the receiver, convolutional/turbo decoding is performed before GS decoding to circumvent the performance degradation that can occur when GS decoding is performed prior to EC decoding. Performance of the new DC-free GS-convolutional/turbo codes is evaluated in terms of both spectral suppression and bit error rate (BER). It is shown that the new codes can provide superior BER performance and approximately the same suppression of low frequencies as the conventional concatenation of convolutional/turbo codes and DC-free GS codes.

8.
Mission-critical Machine-type Communication (mcMTC), also referred to as Ultra-reliable Low Latency Communication (URLLC), has become a research hotspot. It is primarily characterized by communication that provides ultra-high reliability and very low latency to concurrently transmit short commands to a massive number of connected devices. While the reduction in physical (PHY) layer overhead and improvements in channel coding techniques are pivotal in reducing latency and improving reliability, the current wireless standards dedicated to supporting mcMTC rely heavily on adopting the bottom layers of general-purpose wireless standards and customizing only the upper layers. mcMTC has a significant technical impact on the design of all layers of the communication protocol stack. In this paper, an innovative bottom-up approach is proposed for mcMTC applications at the PHY layer, targeted at improving transmission reliability by implementing an ultra-reliable channel coding scheme in the PHY layer of the IEEE 802.11a standard with short-packet transmission in mind. To achieve this aim, we analyzed and compared the channel coding performance of convolutional codes (CCs), low-density parity-check (LDPC) codes, and polar codes in a wireless network under short data packet transmission. The Viterbi decoding algorithm (VA), the logarithmic belief propagation (Log-BP) algorithm, and the cyclic redundancy check (CRC)-aided successive cancellation list (CRC-SCL) decoding algorithm were adopted for the CCs, LDPC codes, and polar codes, respectively. Consequently, a new PHY layer for mcMTC is proposed. The reliability of the proposed approach has been validated by simulation in terms of bit error rate (BER) and packet error rate (PER) versus signal-to-noise ratio (SNR). The simulation results demonstrate that the reliability of the IEEE 802.11a standard is significantly improved, to PER = 10−5 or even better, with the implementation of polar codes. The results also show that general-purpose wireless networks are promising for providing short-packet mcMTC with the needed modifications.

9.
Powerful rate-compatible codes are essential for achieving high throughput in hybrid automatic repeat request (ARQ) systems for networks utilising packet data transmission. The paper focuses on the construction of efficient rate-compatible low-density parity-check (RC-LDPC) codes over a wide range of rates. Two LDPC code families are considered: regular LDPC codes, which are known for good performance and a low error floor, and semi-random LDPC codes, which offer performance similar to regular LDPC codes with the additional property of linear-time encoding. An algorithm for the design of punctured regular RC-LDPC codes that have a low error floor is presented. Furthermore, systematic algorithms for the construction of semi-random RC-LDPC codes are proposed based on puncturing and extending. The performance of a type-II hybrid ARQ system employing the proposed RC-LDPC codes is investigated. Compared with existing hybrid ARQ systems based on regular LDPC codes, the proposed ARQ system based on semi-random LDPC codes offers the advantages of linear-time encoding and higher throughput.

10.
The current study proposes decoding algorithms for low-density parity-check (LDPC) codes which offer competitive performance-complexity trade-offs relative to some of the most efficient existing decoding techniques. Unlike existing low-complexity algorithms, which are essentially reduced-complexity variations of the classical belief propagation algorithm, the starting point of the developed algorithms is the gradient projections (GP) decoding technique proposed by Kasparis and Evans (2007). The first part of this paper is concerned with the GP algorithm itself, and specifically with determining bounds on the step-size parameter over which convergence is guaranteed. The GP algorithm is then reformulated as a message-passing routine on a Tanner graph, and this new formulation allows the development of new low-complexity decoding routines. Simulation evaluations, performed mainly for geometry-based LDPC constructions, show that the new variations achieve performances and per-iteration complexities similar to those of the state-of-the-art algorithms. However, the developed algorithms offer the implementation advantages that the memory-storage requirement is significantly reduced, and that performance and convergence speed can be finely traded off by tuning the step-size parameter.

11.
List decoding is a novel method for decoding Reed-Solomon (RS) codes that generates a list of candidate transmitted messages instead of the one unique message of conventional algebraic decoding, making it possible to correct more errors. The Guruswami-Sudan (GS) algorithm is the most efficient list decoding algorithm for RS codes. Until recently only a few papers in the literature suggested practical methods to implement the key steps (interpolation and factorisation) of the GS algorithm that make the list decoding of RS codes feasible. However, the algorithm's high decoding complexity remained unsolved, and a novel complexity-reduced modification to improve its efficiency is presented. A detailed explanation of the GS algorithm with the complexity-reduced modification is given, with simulation results of RS codes for different list decoding parameters over the AWGN and Rayleigh fading channels. A complexity analysis comparing the GS algorithm with our modified GS algorithm is presented, showing that the modification can reduce complexity significantly in low-error-weight situations. Simulation results using the modified GS algorithm show larger coding gains for RS codes with lower code rates, with more significant gains being achieved over the Rayleigh fading channels.

12.
Esmaeili  M. Gholami  M. 《Communications, IET》2008,2(10):1251-1262
A class of maximum-girth geometrically structured regular (n, 2, k ⩾ 5) (column-weight 2 and row-weight k) quasi-cyclic low-density parity-check (LDPC) codes is presented. The method is based on cylinder graphs and the slope concept. It is shown that the maximum girth achieved by these codes is 12. A low-complexity algorithm producing all such maximum-girth LDPC codes is given. The shortest constructed code has a length of 105. The minimum length n of a regular (2, k) LDPC code with girth g ⩾ 12, as determined by the Gallager bound, has been achieved by the constructed codes. In terms of performance, these codes outperform the column-weight-2 LDPC codes constructed by previously reported methods. These codes can be encoded using an erasure decoding process.

13.
Kashani  Z.H. Shiva  M. 《Communications, IET》2007,1(6):1256-1262
The energy consumption of low-density parity-check (LDPC) codes in different implementations is evaluated. The decoder's complexity is reduced by finite-precision representation of messages, that is, a quantised LDPC decoder, and by the replacement of function blocks with look-up tables. It is shown that the decoder's energy consumption increases exponentially with the number of quantisation bits. For the sake of low power consumption, a 3-bit magnitude plus 1 sign bit representation of messages is used in the decoder. It is concluded that high-rate Gallager codes are as energy efficient as Reed-Solomon codes, which until now have been the first choice for wireless sensor networks (WSNs). Finally, it is shown that using LDPC codes in WSNs can be justified even further by applying the idea of trading transmitter power against decoder energy consumption. By exploiting the trade-off inherent in iterative decoding, the network lifetime is increased up to four times with the (3,6)-regular LDPC code. Hence, it is inferred that LDPC codes are more efficient than block and convolutional codes.
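The 3-bit-magnitude-plus-sign format mentioned above can be illustrated with a minimal quantiser sketch; the uniform step size `delta` is an assumption for illustration, not a value from the paper:

```python
# Hedged sketch of a finite-precision LLR message format: one sign bit
# plus a 3-bit magnitude (levels 0..7), uniform step `delta` (assumed).

def quantize_llr(llr, delta=0.5):
    mag = min(round(abs(llr) / delta), 7)     # 3-bit magnitude, saturating
    return (1 if llr >= 0 else -1) * mag * delta
```

Messages outside the representable range saturate at the top level, which is one source of the energy/performance trade-off the abstract discusses.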

14.
Improvements in SOVA-based decoding for turbo-coded storage channels
In this paper, we propose a novel and simple approach for dealing with the exaggerated extrinsic information produced by the soft-output Viterbi algorithm (SOVA). We first identify the reason behind these exaggerated values and then propose a simple remedy for it. We argue that what leads to this optimistic extrinsic information is the inherent strong correlation between the intrinsic information (input to the SOVA) and extrinsic information (output of the SOVA). Our proposed remedy is based on mathematical analysis, and it involves using two attenuators, one applied to the immediate output of the SOVA and another applied to the extrinsic information before it is passed to the other decoder (assuming iterative decoding). We examine the modified SOVA (MSOVA) on idealized partial response (PR) channels and the Lorentzian channel equalized to a PR target. We consider both parallel concatenated codes (PCCs) and serial concatenated codes (SCCs). We show that the MSOVA provides substantial performance improvements over both channels. For example, it provides improvements of about 0.8 to 1.6 dB at P_b = 10^-5. Finally, we note that the proposed modifications, while they provide considerable performance improvements, add only two multipliers to the complexity of the SOVA algorithm, which is remarkable.
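The two-attenuator remedy described above can be sketched in a few lines; the attenuation factors here are illustrative placeholders, not the values derived in the paper:

```python
# Hedged sketch of the two-attenuator MSOVA idea: scale the raw SOVA
# soft output, subtract the correlated intrinsic part, then scale the
# extrinsic value before passing it to the other decoder. The factors
# a1 and a2 are illustrative placeholders, not values from the paper.

def attenuated_extrinsic(soft_out, intrinsic, a1=0.7, a2=0.9):
    scaled = a1 * soft_out            # first attenuator on SOVA output
    extrinsic = scaled - intrinsic    # remove correlated intrinsic part
    return a2 * extrinsic             # second attenuator before exchange
```

In an iterative decoder this would be applied per bit on each half-iteration, adding only the two multiplications the abstract mentions.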

15.
Nagaraj  S. Bell  M. 《Communications, IET》2008,2(7):895-902
A novel technique for improving coding and diversity gains in orthogonal frequency division multiplexing (OFDM) systems is proposed. Multidimensional symbol design based on complex-field codes with interleaving across frequency has been known for some time. However, such symbols cannot be concatenated with convolutional codes owing to the prohibitive complexity of decoding. A novel way of designing multidimensional symbols that allows them to be concatenated with convolutional codes while maintaining low decoding complexity is shown. The proposed multidimensional symbols are based on tailbiting convolutional codes, and code designs with desirable properties are discussed. The design of bit-interleaved coded-modulation-type and trellis-coded-modulation-type codes over these multidimensional symbols is also shown. Simulations show that the proposed coding scheme provides significant performance and/or complexity improvements over existing alternatives and also provides more degrees of freedom for channel-based link adaptation.

16.
A simple approach for the soft-decision decoding of maximum-transition-run (MTR) codes is presented in this paper. A low-complexity hardware realization of the soft-decision decoder is proposed using basic AND, OR and XOR logic circuits. The suggested approach is explored over a two-track, two-head E2PR4 partial-response magnetic recording system. An overall two-track channel detection complexity reduction of 41.9% is obtained in a simulation scheme encoded by a low-density parity-check (LDPC) code serially concatenated with an inner MTR code. A coding gain of 1.9 dB is obtained compared with the uncoded channel, assuming the presence of AWGN noise.

17.
Free-space optical (FSO) communication is of supreme importance for designing next-generation networks. Over the past decades, the radio frequency (RF) spectrum has been the main topic of interest for wireless technology. The RF spectrum is becoming denser and more heavily used, limiting the availability of additional channels. Optical communication, used for messages or signals since historical times, is now gaining renewed attention in combination with error-correcting codes (ECC) to mitigate the effects of fading caused by atmospheric turbulence. This paper considers a free-space communication system (FSCS) whose hybrid technology is based on FSO and RF; the FSCS is a capable solution for overcoming the downsides of current schemes and enhancing overall link reliability and availability. The proposed FSCS with regular low-density parity-check (LDPC) coding is described and evaluated in terms of signal-to-noise ratio (SNR). The extrinsic information transfer (EXIT) methodology is a powerful technique employed to investigate the sum-product decoding algorithm of LDPC codes, and the EXIT chart is optimized by applying curve fitting. In this work, we also analyze the behavior of the EXIT chart of regular/irregular LDPC codes for the FSCS and investigate the error performance of the LDPC code for the proposed FSCS.

18.
We describe a low-complexity noniterative detector for magnetic and optical multitrack high-density data storage. The detector is based on the M-algorithm architecture. It performs limited breadth-first detection on the equivalent one-dimensional (1-D) channel obtained by column-by-column helical unwinding of the two-dimensional (2-D) channel. The detection performance is optimized by the use of a specific 2-D minimum-phase factorization of the channel impulse response by the equalizer. An optimized path selection scheme maintains the complexity close to that of a practical 1-D Viterbi detector. This scheme is based on an approximate path-metric parallel sort network, taking advantage of the metrics' residual ordering from previous M-algorithm iterations. Such an architecture approaches maximum-likelihood performance on a high-areal-density uncoded channel for a practical number of retained paths M and a bit error rate (BER) below 10^-4. The performance of the system is evaluated when the channel is encoded with a multi-parity check (MPC) block inner code and an outer interleaved Reed-Solomon code. The inner code enhances the minimum error distance of the equalized channel and reduces the correct-path losses of the M-algorithm path buffer. The decoding is performed noniteratively. Here, we compare the performance of the system to the soft iterative joint decoding of the read channels for data pages encoded with low-density parity-check (LDPC) codes of comparable rates and block length. We provide an approximation of the 2-D channel capacity to further assess the performance of the system.

19.
Encoding and decoding algorithms for LDPC codes based on bit reliability
An encoding algorithm for low-density parity-check (LDPC) codes based on bit reliability and a weighted belief-propagation (BP) decoding algorithm are proposed. The encoding algorithm first obtains the error probability of each bit node of the LDPC code via Monte Carlo simulation, and then replaces the error-prone bit nodes with known information during encoding. The decoding algorithm assigns each bit node a weight according to the differences in bit-node reliability, adjusting each node's influence on the decoding process. Simulations show that the new encoding and decoding algorithms greatly improve system performance and accelerate the convergence of the decoding iterations.
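The per-bit weighting step in this entry's decoding algorithm can be sketched minimally; the weight formula below (scaling each channel LLR by the bit's simulated reliability) is an illustrative assumption, not the paper's exact rule:

```python
# Hedged sketch of reliability-based weighting of channel LLRs before BP
# decoding. bit_error_probs come from an offline Monte Carlo simulation;
# the linear weight formula here is an illustrative assumption.

def weight_channel_llrs(llrs, bit_error_probs):
    # less reliable bits (higher simulated error probability) get a
    # smaller weight, reducing their influence on decoding
    return [(1.0 - p) * llr for llr, p in zip(llrs, bit_error_probs)]
```

The weighted LLRs would then be fed to an ordinary BP decoder in place of the raw channel values.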

20.
A bound on the minimum distance of the Tanner codes / expander codes of Sipser and Spielman is obtained. Furthermore, a generalization of a decoding algorithm of Zémor to Tanner codes is presented. The algorithm can be implemented with the same complexity as that of Zémor and has a similar error-correction capability. Explicit families of Tanner codes are presented for which the decoding algorithm is applicable. Received: March 2, 2001; revised version: November 28, 2001. Key words: Tanner codes, Expander codes, LDPC codes, Decoding, Minimum distance, Expander graphs, Ramanujan graphs, N-gons, Multi-partite graphs.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号