Similar documents
20 similar documents found (search time: 15 ms).
1.
2.
Design methods for irregular repeat-accumulate codes   (total citations: 3; self-citations: 0; citations by others: 3)
We optimize the random-like ensemble of irregular repeat-accumulate (IRA) codes for binary-input symmetric channels in the large block-length limit. Our optimization technique is based on approximating the density evolution (DE) of the messages exchanged by the belief-propagation (BP) message-passing decoder by a one-dimensional dynamical system. In this way, the code ensemble optimization can be solved by linear programming. We propose four such DE approximation methods, and compare the performance of the obtained code ensembles over the binary symmetric channel (BSC) and the binary-antipodal input additive white Gaussian noise channel (BIAWGNC). Our results clearly identify the best among the proposed methods and show that the IRA codes obtained by these methods are competitive with respect to the best known irregular low-density parity-check (LDPC) codes. In view of this and the very simple encoding structure of IRA codes, they emerge as attractive design choices.
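As a rough illustration of why a one-dimensional DE approximation turns ensemble optimization into a linear program, the sketch below optimizes a variable-node degree distribution over the BEC for a plain LDPC ensemble rather than the IRA construction of the paper; the erasure probability, check degree, and maximum variable degree are assumed values chosen only for the demo.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical illustration: optimize the variable-node degree distribution
# lambda(x) of an LDPC ensemble for the BEC with erasure probability eps,
# given a fixed check-regular rho(x).  The DE success condition
#   eps * lambda(1 - rho(1 - x)) <= x   for all x in (0, eps]
# is linear in the coefficients of lambda, so maximizing the design rate
# becomes a linear program (analogous in spirit to the IRA optimization above).

eps = 0.45                                   # channel erasure probability (assumed)
rho_deg = 6                                  # check-regular: rho(x) = x^(rho_deg-1)
max_dv = 20                                  # largest allowed variable degree (assumed)
degrees = np.arange(2, max_dv + 1)

xs = np.linspace(1e-3, eps, 200)             # grid on which the DE inequality is imposed
y = 1.0 - (1.0 - xs) ** (rho_deg - 1)        # 1 - rho(1 - x)

A_ub = eps * np.power(y[:, None], degrees - 1)   # eps * sum_i lambda_i * y^(i-1) <= x
b_ub = xs

c = -1.0 / degrees                           # maximize sum_i lambda_i / i (minimize -sum)
A_eq = np.ones((1, len(degrees)))            # the lambda coefficients sum to 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(degrees))
assert res.success, "LP infeasible at this erasure probability"
lam = res.x
rate = 1.0 - (1.0 / rho_deg) / np.sum(lam / degrees)
print("design rate ~= %.3f (BEC capacity at eps=%.2f is %.2f)" % (rate, eps, 1 - eps))
```

The design rate 1 - (sum_j rho_j/j)/(sum_i lambda_i/i) is maximized because the only term that depends on the optimization variables is sum_i lambda_i/i, which appears linearly in the objective.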

3.
We prove that, for any given R between 0 and 1, the best threshold value for a regular LDPC code of rate R with common variable degree v and common check degree c occurs when v is at least 3 and is minimal subject to the condition R = 1 - v/c.
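As a quick numerical illustration of this statement (over the BEC, which is an assumption here since the abstract does not fix a channel), the sketch below estimates the iterative-decoding threshold of several (v, c)-regular ensembles of the same rate by bisection on the one-dimensional DE recursion.

```python
# Bisection on the BEC density-evolution recursion x -> eps*(1-(1-x)^(c-1))^(v-1)
# to estimate the iterative-decoding threshold of a (v, c)-regular ensemble.

def de_converges(eps, v, c, iters=2000, tol=1e-9):
    """Return True if DE drives the erasure fraction below tol at this eps."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (c - 1)) ** (v - 1)
        if x < tol:
            return True
    return False

def threshold(v, c, lo=0.0, hi=1.0):
    for _ in range(40):                       # bisection on the erasure probability
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, v, c) else (lo, mid)
    return lo

# Rate-1/2 regular ensembles: (3,6), (4,8), (5,10) all satisfy R = 1 - v/c = 1/2.
for v, c in [(3, 6), (4, 8), (5, 10)]:
    print(v, c, round(threshold(v, c), 4))
# The smallest admissible v (here v = 3) gives the largest threshold,
# consistent with the statement above.
```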

4.
This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel and derive bounds on the performance of ML decoding on the BEC. We then present an improved decoding algorithm. The proposed algorithm has almost the same complexity as standard iterative decoding, yet it performs better: simulations show that the error rate can be decreased by several orders of magnitude. We also provide some graph-theoretic properties of different decoding algorithms of LDPC codes over the BEC, which we believe are useful for better understanding LDPC decoding methods, in particular for finite-length codes.
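For reference, here is a minimal sketch of the standard iterative (peeling) decoder over the BEC that serves as the baseline in this line of work; the parity-check matrix, codeword, and erasure pattern are hypothetical toy values, and this is not the improved algorithm of the paper.

```python
import numpy as np

# Peeling decoder for the BEC: repeatedly solve parity checks that involve
# exactly one erased bit; whatever remains unresolved is a stopping set.

def peel(H, received, erased):
    H = np.asarray(H)
    word = received.copy()
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            unknown = [j for j in np.nonzero(row)[0] if j in erased]
            if len(unknown) == 1:                      # check with a single erasure
                j = unknown[0]
                known = [k for k in np.nonzero(row)[0] if k != j]
                word[j] = np.bitwise_xor.reduce(word[known]) if known else 0
                erased.discard(j)
                progress = True
    return word, erased                                # leftover erasures = stopping set

# Toy example (hypothetical small parity-check matrix and codeword)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
tx = np.array([1, 0, 1, 1, 0, 1, 0])                  # satisfies all three checks
rx = tx.copy()
erased = [1, 4]
rx[erased] = 0                                         # placeholder values for erasures
decoded, leftover = peel(H, rx, erased)
print(decoded, leftover)                               # recovers tx, no leftover erasures
```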

5.
We propose an efficient maximum-likelihood (ML) decoding algorithm for low-density parity-check (LDPC) codes over the binary erasure channel (BEC), and analyze its computational complexity.
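On the BEC, ML decoding amounts to solving a linear system over GF(2) for the erased positions. The sketch below does this by plain Gauss-Jordan elimination as a generic baseline, not the efficient algorithm proposed in the paper; the parity-check matrix and codeword are hypothetical toy values.

```python
import numpy as np

# ML erasure decoding as GF(2) linear algebra: restrict H to the erased columns,
# use the XOR of the known columns as the right-hand side, and eliminate.

def ml_erasure_decode(H, word, erased):
    H = np.asarray(H) % 2
    erased = list(erased)
    known = [j for j in range(H.shape[1]) if j not in erased]
    A = H[:, erased].copy()                            # columns of the unknowns
    b = (H[:, known] @ word[known]) % 2                # syndrome contributed by known bits
    r = 0
    for col in range(A.shape[1]):                      # Gauss-Jordan over GF(2)
        piv = next((i for i in range(r, A.shape[0]) if A[i, col]), None)
        if piv is None:
            return None                                # rank deficient: ML decoding is ambiguous
        A[[r, piv]] = A[[piv, r]]
        b[[r, piv]] = b[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, col]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        r += 1
    out = word.copy()
    out[erased] = b[:len(erased)]                      # pivot rows hold the solved erasures
    return out

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
word = np.array([1, 0, 1, 1, 0, 1, 0])                 # hypothetical valid codeword
print(ml_erasure_decode(H, word, erased=[1, 4]))       # fills in the erased bits
```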

6.
We derive upper bounds on the maximum achievable rate of low-density parity-check (LDPC) codes used over the binary erasure channel (BEC) under Gallager's decoding algorithm, given their right-degree distribution. We demonstrate the bounds on the ensemble of right-regular LDPC codes and compare them with an explicit left-degree distribution constructed from the given right degree.

7.
8.
A method for determining an upper bound for the size of a code for a two-access binary erasure channel is presented. For uniquely decodable codes, this bound gives a combinatorial proof of a result by Liao. Examples of the bound are given for codes with minimum distance 4.

9.
The sensitivity of the iterative decoder for repeat-accumulate (RA) codes to carrier phase and channel signal-to-noise ratio estimation errors is investigated, and efficient algorithms to estimate and correct these errors are developed. The behavior of RA codes with imperfect channel estimation is different from that of turbo codes, and correction algorithms specific to RA codes must be formulated. The proposed algorithms use the soft information generated within the iterative decoder, and thus, are not only hardware-efficient, but also offer excellent performance.

10.
In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.
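As a toy stand-in for such exact finite-length analysis, the brute-force sketch below computes the exact block erasure probability of a single small (hypothetical) code under peeling decoding by enumerating all erasure patterns; the contribution of the paper is the combinatorial, ensemble-averaged version of this kind of computation.

```python
from itertools import product

# Exact block erasure probability of one tiny code under iterative (peeling)
# decoding, obtained by enumerating all 2^n erasure patterns.  H is a
# hypothetical 3 x 7 parity-check matrix; eps is the channel erasure probability.

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n = 7

def peels_to_empty(erased):
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            unknown = [j for j in range(n) if row[j] and j in erased]
            if len(unknown) == 1:
                erased.discard(unknown[0])
                progress = True
    return not erased

# coeff[w] = number of weight-w erasure patterns on which peeling fails
coeff = [0] * (n + 1)
for pattern in product([0, 1], repeat=n):
    erased = [j for j in range(n) if pattern[j]]
    if erased and not peels_to_empty(erased):
        coeff[len(erased)] += 1

# P_block(eps) = sum_w coeff[w] * eps^w * (1 - eps)^(n - w)
eps = 0.1
print(sum(c * eps**w * (1 - eps)**(n - w) for w, c in enumerate(coeff)))
```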

11.
The ability to estimate the finite-length performance of sparse-graph code ensembles offers two opportunities: to compare different codes of the same length in a setting very close to practical applications, and to optimize parameters for a given code length [2]. This requires a finite-length approximation that is valid for any code ensemble. The scaling approach seems general enough to provide such an approximation. However, the analytical derivation of the parameters of the scaling approximation has so far succeeded only for LDPC codes [1]; despite several attempts [25], [20], no such result has been proposed for other code ensembles. In this paper, we focus on the finite-length performance of turbo-like codes by applying the scaling approach to this case. In particular, assuming transmission over the binary erasure channel, we conjecture the scaling law and derive its scaling parameter. As examples, we present performance estimates for repeat-accumulate codes [11], parallel turbo codes [8], and TLDPC codes [5], in all cases matching the numerical results well.
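For orientation, the basic form such a finite-length scaling law takes over the BEC, as established for LDPC ensembles in [1] (the conjecture and scaling parameter derived here for turbo-like ensembles may refine this form, so the expression below is only indicative), is

\[ P_B(n,\varepsilon) \;\approx\; Q\!\left(\frac{\sqrt{n}\,\bigl(\varepsilon^{*}-\varepsilon\bigr)}{\alpha}\right), \]

where n is the block length, ε* the iterative-decoding threshold of the ensemble, α its scaling parameter, and Q(·) the Gaussian tail function.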

12.
In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland), consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent-code-oriented decoding using maximum a posteriori erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance h_min, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance d_min, whose growth rate coefficient increases with the number of accumulate codes, the h_min growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic h_min does not grow linearly with the block length, contrary to the asymptotic d_min, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare their convergence properties.
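A stopping set, as used above, is a set S of variable nodes such that every check node with a neighbor in S has at least two neighbors in S. The brute-force sketch below finds the stopping distance h_min of a small hypothetical parity-check matrix; the paper works with ensemble-average asymptotics instead.

```python
from itertools import combinations

# Brute-force the stopping distance h_min (size of the smallest nonempty
# stopping set) of a tiny, hypothetical parity-check matrix.  A set S of
# columns is a stopping set if no row touches S in exactly one position.

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n = 7

def is_stopping_set(S):
    for row in H:
        if sum(row[j] for j in S) == 1:    # a check with one neighbor in S can peel it
            return False
    return True

h_min = next(size for size in range(1, n + 1)
             if any(is_stopping_set(S) for S in combinations(range(n), size)))
print("h_min =", h_min)
```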

13.
We present two sequences of ensembles of nonsystematic irregular repeat-accumulate (IRA) codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded complexity result is achieved by puncturing bits, and allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric (MBIOS) channel and is refined for the binary erasure channel.

14.
15.
16.
This paper considers designing and applying punctured irregular repeat-accumulate (IRA) codes for scalable image and video transmission over binary symmetric channels. IRA codes of different rates are obtained by puncturing the parity bits of a mother IRA code, which uses a systematic encoder. One of the main ideas presented here is the design of the mother code such that the entire set of higher-rate codes obtained by puncturing is good. To find a good unequal error protection for embedded bit streams, we employ the fast joint source-channel coding algorithm of Hamzaoui et al. to minimize the expected end-to-end distortion. We test with two scalable image coders (SPIHT and JPEG-2000) and two scalable video coders (3-D SPIHT and H.26L-based PFGS). Simulations show better results with IRA codes than those reported by Banister et al. with JPEG-2000 and turbo codes. The IRA codes proposed here also have lower decoding complexity than the turbo codes used by Banister et al.

17.
Consider the transmission of a finitely interleaved, rate-1/n convolutionally encoded message over a channel with memory having two internal states Ξ_0 and Ξ_1: in state Ξ_0 the channel resembles a noiseless binary symmetric channel (BSC), whereas in state Ξ_1 the channel is totally blocked and is well approximated by a binary-input single-output channel. Assume that the channel's internal state is drawn at random once every h channel uses, and then remains constant for the following h channel uses. Further assume that the message is short in comparison to h, and that due to delay constraints the message must be decoded within Nh channel uses, where N need not be large in comparison to the code's constraint length. Such a model is appropriate for describing a convolutionally coded slow frequency hopping system with non-ideal interleaving, in which every frequency is either totally erased or else noiseless. The probability of a message error, the normalized expected number of bits in error, and the bit error rate (BER) are computed analytically for the periodic N×h bit and word interleavers, where a bit refers to a binary code symbol and a word refers to an n-tuple of consecutive bits. An analytic expression for the BER is also given for pseudo-random word and bit interleavers and for the corresponding limiting cases of infinite interleaving, i.e., N→∞. As an example of the use of our methods, we analyze the performance of the GSM system with various interleaving depths and methods. We introduce the notion of “matched” code and interleaver pairs, and argue that this is a desirable property. Several exhaustive searches are carried out for matched codes and interleavers.
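To make the interleaver model concrete, here is a minimal sketch of a periodic N×h bit interleaver of the kind described (the sizes are hypothetical): code bits are written column-wise into an N×h array and each of the N rows is transmitted during one hop of h channel uses, so a fully erased hop turns into erasures spaced N positions apart in the code stream.

```python
# Periodic N x h block bit interleaver: hop r carries code bits r, N+r, 2N+r, ...

def interleave(code, N, h):
    assert len(code) == N * h
    return [code[c * N + r] for r in range(N) for c in range(h)]   # row-by-row transmission

def deinterleave(rx, N, h):
    out = [None] * (N * h)
    i = 0
    for r in range(N):
        for c in range(h):
            out[c * N + r] = rx[i]
            i += 1
    return out

N, h = 4, 6
tx = interleave(list(range(N * h)), N, h)
# Erase one hop (the second block of h consecutive channel uses):
rx = [None if h <= i < 2 * h else b for i, b in enumerate(tx)]
print(deinterleave(rx, N, h))
# The erasures land every N-th position of the code stream, so the
# convolutional decoder sees isolated rather than bursty losses.
```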

18.
Research and Progress on Erasure Codes   (total citations: 2; self-citations: 1; citations by others: 2)
This paper briefly describes the erasure-correction principles of several classes of erasure codes, and systematically analyzes their advantages, disadvantages, mutual differences, and relationships. It proves that if an MDS (Maximum Distance Separable) code is chosen as the erasure code, the receiver can recover the original source data as long as it receives a number of symbols equal to the number of source symbols. The analysis shows that 复损 codes and Tornado codes can not only be encoded and successfully decoded in linear time, but can also transmit at rates arbitrarily close to the capacity of the erasure channel. Finally, some open problems in current research on 复损 codes are pointed out. These analyses and conclusions provide a theoretical basis and new ideas for further research on erasure codes.
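To make the MDS claim concrete, here is a small numerical illustration (the parameters are hypothetical): a Reed-Solomon-style code over the prime field GF(257) encodes k data symbols as evaluations of a degree-(k-1) polynomial, and any k received symbols recover the data by Lagrange interpolation.

```python
# MDS property demo: systematic Reed-Solomon-style code over GF(p).
# Symbols 0..k-1 of the codeword are the data; symbols k..n-1 are parity
# evaluations of the interpolating polynomial.  Any k survivors suffice.

p = 257                                   # prime field size (assumed)
k, n = 4, 7
data = [12, 200, 7, 99]                   # k data symbols in GF(p)

def lagrange_eval(pts, x):
    """Evaluate at x the unique degree-<len(pts) polynomial through pts (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

pts = list(zip(range(k), data))
codeword = data + [lagrange_eval(pts, x) for x in range(k, n)]

# Erase any n-k = 3 symbols; the remaining k positions recover the data.
survivors = [(x, codeword[x]) for x in (1, 3, 5, 6)]
recovered = [lagrange_eval(survivors, x) for x in range(k)]
print(recovered == data)                  # True: any k of the n symbols suffice
```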

19.
Microelectronics Reliability, 2015, 55(11): 2453–2467
Erasure codes are widely used in data storage systems to protect against disk failures. Employing erasure codes in an array of Solid-State Drives (SSDs) requires designers to revisit characteristics that differ from those of Hard Disk Drives (HDDs), owing to the non-mechanical nature of SSDs. One of the most important characteristics of SSDs is their limited number of Program/Erase (P/E) cycles. Taking the characteristics of SSDs into account, this paper presents a comprehensive analysis of the effects of three well-known erasure codes on the endurance and performance of SSD-based disk subsystems. The three erasure codes, i.e., Reed–Solomon, EVENODD, and RDP, are implemented on the SSD extension of the DiskSim simulator. The results show that the endurance and performance of Reed–Solomon are on average 90% and 60% higher, respectively, than those of the other erasure codes. Additionally, the three erasure codes are compared in terms of different stripe unit sizes, numbers of disks, and various request sizes. The results show that configuring a disk array with a 4 KB stripe unit size improves the endurance and performance of EVENODD by 1.8× and 2.9×, respectively, compared to a 128 KB stripe unit size.
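As a back-of-the-envelope illustration of why stripe unit size matters for endurance, here is a toy model of partial-stripe parity updates under a simple RAID-5-style XOR code, not the DiskSim-based methodology of the paper and not the dual-parity EVENODD/RDP constructions it evaluates; the request size and disk count are assumed values.

```python
# Toy model: count how many stripe units (data and parity) one stripe-aligned
# write request touches.  Each touched stripe needs one parity-unit update,
# so small requests over large stripe units inflate the number of device
# writes per host write -- a rough proxy for extra P/E-cycle consumption.

def writes_per_request(request_bytes, stripe_unit_bytes, data_disks):
    """Return (data-unit writes, parity-unit writes) for one aligned request."""
    units_touched = -(-request_bytes // stripe_unit_bytes)       # ceiling division
    full_stripes, partial_units = divmod(units_touched, data_disks)
    parity_writes = full_stripes + (1 if partial_units else 0)   # one parity unit per touched stripe
    return units_touched, parity_writes

for unit_kb in (4, 16, 64, 128):
    d, q = writes_per_request(16 * 1024, unit_kb * 1024, data_disks=4)
    print(f"stripe unit {unit_kb:3d} KB: {d} data-unit writes + {q} parity-unit writes")
```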

20.
We derive lower bounds on the density of parity-check matrices of binary linear codes used over memoryless binary-input output-symmetric (MBIOS) channels. The bounds are expressed in terms of the gap between the rate of these codes for which reliable communication is achievable and the channel capacity; they are valid for every sequence of binary linear block codes if there exists a decoding algorithm under which the average bit-error probability vanishes. For every MBIOS channel, we construct a sequence of ensembles of regular low-density parity-check (LDPC) codes such that an upper bound on the asymptotic density of their parity-check matrices scales similarly to the lower bound. The tightness of the lower bound is demonstrated for the binary erasure channel by analyzing a sequence of ensembles of right-regular LDPC codes which was introduced by Shokrollahi, and which is known to achieve the capacity of this channel. Under iterative message-passing decoding, we show that this sequence of ensembles is asymptotically optimal (in a sense defined in this paper), strengthening a result of Shokrollahi. Finally, we derive lower bounds on the bit-error probability and on the gap to capacity for binary linear block codes which are represented by bipartite graphs, and study their performance limitations over MBIOS channels. The latter bounds provide a quantitative measure of the number of cycles of bipartite graphs which represent good error-correction codes.
