Similar Documents
20 similar documents found (search time: 221 ms)
1.
Design of Rate-Compatible Irregular Repeat Accumulate Codes   (Total citations: 1; self-citations: 0; cited by others: 1)
We consider the design of efficient rate-compatible (RC) irregular repeat accumulate (IRA) codes over a wide code rate range. The goal is to provide a family of RC codes that achieves high throughput in hybrid automatic repeat request (ARQ) schemes for high-speed data packet wireless systems. As a subclass of low-density parity-check codes, IRA codes have an extremely simple encoder and a low-complexity decoder while providing capacity-approaching performance. We focus on a hybrid design method which employs both puncturing and extending. We propose a simple puncturing method based on minimizing the maximal recoverable step of the punctured nodes. We also propose a new extending scheme for IRA codes by introducing degree-1 parity bits for the lower rate codes and obtaining the optimal proportions of extended nodes through density evolution analysis. The throughput performance of the designed RC-IRA codes in hybrid ARQ is evaluated for both AWGN and block fading channels. Simulation results demonstrate that our designed RC codes offer good error correction performance over a wide rate range and provide high throughput, especially in the high and low signal-to-noise ratio regions.
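
As a hedged illustration of the rate-compatibility idea above, the sketch below punctures the parity bits of a systematic mother codeword in a fixed nested order, so every higher-rate code reuses the punctured set of the previous one. The puncturing order is a placeholder, not the paper's criterion of minimizing the maximal recoverable step.

```python
# Minimal sketch of rate-compatible puncturing for a systematic (n, k) mother code.
# The nested order `punct_order` is a stand-in for the paper's criterion; here it is
# just an illustrative fixed ordering over the parity positions.

def rc_puncture(codeword, k, punct_order, target_rate):
    """Puncture parity bits of a systematic codeword until `target_rate` is reached.

    codeword     -- list of bits, first k are systematic, the rest are parity
    punct_order  -- parity positions (indices >= k) in removal order; because every
                    higher rate reuses a prefix of this list, the family is RC
    target_rate  -- desired rate k / n_target, must be >= k / len(codeword)
    """
    n = len(codeword)
    n_target = round(k / target_rate)
    punctured = set(punct_order[:n - n_target])
    return [b for i, b in enumerate(codeword) if i not in punctured]

# Example: a rate-1/3 mother code (k=4, n=12) punctured up to rate 1/2.
mother = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]
order = [11, 9, 7, 5, 10, 8, 6, 4]          # hypothetical nested puncturing order
tx = rc_puncture(mother, k=4, punct_order=order, target_rate=0.5)
print(len(tx))                               # 8 transmitted bits -> rate 4/8 = 1/2
```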

2.
The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.
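
To make the cost function concrete, here is a minimal sketch of the design objective the abstract describes: the probability that the first decoding error falls on bit i is weighted by the distortion D(i) that such an error causes in an embedded bitstream, and this expected distortion replaces the average error probability. Both the distortion profile D and the first-error probabilities below are hypothetical placeholders.

```python
# Sketch of the expected-distortion objective: instead of the average bit error
# probability, weight the probability that the FIRST channel-decoding error occurs
# at bit i by the distortion D(i) caused in an embedded (e.g., SPIHT) bitstream.

def expected_distortion(first_error_prob, distortion_at):
    """E[D] ~= sum_i Pr(first error at bit i) * D(i)  (no-error term omitted)."""
    return sum(p * distortion_at(i) for i, p in enumerate(first_error_prob))

# Toy numbers: earlier bits of an embedded stream hurt much more when lost.
D = lambda i: 1.0 / (1 + i)            # hypothetical decaying distortion profile
p_first = [0.001] * 1000               # hypothetical per-position first-error profile
print(expected_distortion(p_first, D))
```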

3.
This paper considers designing and applying punctured irregular repeat-accumulate (IRA) codes for scalable image and video transmission over binary symmetric channels. IRA codes of different rates are obtained by puncturing the parity bits of a mother IRA code, which uses a systematic encoder. One of the main ideas presented here is to design the mother code such that all of the higher-rate codes obtained by puncturing are good. To find a good unequal error protection for embedded bit streams, we employ the fast joint source-channel coding algorithm in Hamzaoui et al. to minimize the expected end-to-end distortion. We test with two scalable image coders (SPIHT and JPEG-2000) and two scalable video coders (3-D SPIHT and H.26L-based PFGS). Simulations show better results with IRA codes than those reported in Banister et al. with JPEG-2000 and turbo codes. The IRA codes proposed here also have lower decoding complexity than the turbo codes used by Banister et al.

4.
Two convolutional-code construction schemes that utilize block codes are given. In the first method the generators of a self-orthogonal convolutional code (SOCC) are expanded. The generators of a block code whose block length is longer than that of the SOCC replace the nonzero blocks of the convolutional code. The zero blocks are extended to the longer block length. The result is a convolutional code whose blocks are self-orthogonal and which has a lower transmission rate. In the second scheme the parity constraints of an SOCC are expanded. The parity constraints of a block code replace some of the individual nonzero elements of the SOCC parity-check matrix, so that the convolutional code rate is greater than the block code rate. The resulting codes retain the SOCC advantages of simple implementation and limited error propagation. Both the encoding and the decoding can be based on the underlying block code. If the underlying block code is majority decodable, then the resulting "hybrid" codes are majority decodable. Optimum majority-decodable block codes with up to five information bits per block are given, and from these codes several majority-decodable convolutional codes that are "optimum" with respect to the proposed construction are obtained.

5.
The conventional list Viterbi algorithm (LVA) produces a list of the L best output sequences over a certain block length in decoding a terminated convolutional code. We show in this paper that the LVA with a sufficiently long list is an optimum maximum-likelihood decoder for the concatenated pair of a convolutional code and a cyclic redundancy check (CRC) block code with error detection. The CRC is used to select the output. New LVAs for continuous transmission are proposed and evaluated, where no termination bits are required for the convolutional code for every CRC block. We also present optimum and suboptimum LVAs for tailbiting convolutional codes. Convolutional codes with Viterbi decoding were proposed for so-called hybrid in-band on-channel (hybrid IBOC) systems for digital audio broadcasting compatible with the frequency modulation band. For high-quality audio signals, it is beneficial to use error concealment/error mitigation techniques to avoid the worst type of channel errors. This requires a reliable error flag mechanism (error detection feature) in the channel decoder. A CRC on a block of audio information bits provides this mechanism. We demonstrate how the LVA can significantly reduce the flag rate compared to the regular Viterbi algorithm (VA) for the same transmission parameters. At the expense of complexity, a receiver-optional LVA can reduce the flag rate by more than an order of magnitude. The difference in audio quality is dramatic. The LVA is backward compatible with the VA.
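
A minimal sketch of the list-plus-CRC selection described above: the list decoder's candidates are assumed to arrive ordered by likelihood, and the receiver keeps the first one whose CRC checks, otherwise it raises the error flag used for concealment. The CRC-4 polynomial and the 4-bit CRC length are illustrative assumptions, not parameters from the paper.

```python
def crc4(bits, poly=(1, 0, 0, 1, 1)):
    """Remainder of bits(x) * x^4 modulo g(x) = x^4 + x + 1 (hypothetical CRC-4)."""
    reg = list(bits) + [0, 0, 0, 0]
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p           # binary polynomial long division
    return reg[-4:]

def select_from_list(candidates):
    """Pick the first likelihood-ordered candidate whose CRC checks.

    Each candidate is info_bits + 4 CRC bits. Returns (info_bits, error_flag);
    the flag drives the error concealment mentioned in the abstract."""
    for cand in candidates:               # candidates sorted, best first
        info, crc = cand[:-4], cand[-4:]
        if crc4(info) == crc:
            return info, False
    return candidates[0][:-4], True       # no candidate passed: flag the block

info = [1, 0, 1, 1, 0, 0, 1]
rx_list = [info + crc4(info)]             # toy single-candidate list that passes
print(select_from_list(rx_list))
```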

6.
黄胜  敖翔  庞晓磊  张睿 《电视技术》2016,40(5):36-39
To avoid the delay introduced by an interleaver, an irregular repeat accumulate (IRA) code is constructed using an improved progressive edge growth (PEG) algorithm and the cyclic Chinese remainder theorem. Compared with conventional IRA codes, the proposed codes have a semi-random, semi-structured form, require no interleaver design, and allow more flexible choice of code length. Simulation results show that at a code rate of 1/2 and a bit error rate of 10^-6, the constructed IRA(1000, 500) code achieves net coding gains of about 0.2 dB and 0.1 dB over the PEG-IRA(1000, 500) code and the IRA(1000, 500) code based on residue-class pairs, respectively, under the same conditions; at a code rate of 3/4, the constructed IRA(16200, 11880) code improves the net coding gain by about 0.1 dB over the DVB-S2 standard LDPC code of the same length and rate.

7.
A new block code is introduced which is capable of correcting multiple insertion, deletion, and substitution errors. The code consists of nonlinear inner codes, which we call "watermark" codes, concatenated with low-density parity-check codes over nonbinary fields. The inner code allows probabilistic resynchronization and provides soft outputs for the outer decoder, which then completes decoding. We present codes of rate 0.7 and transmitted length 5000 bits that can correct 30 insertion/deletion errors per block. We also present codes of rate 3/14 and length 4600 bits that can correct 450 insertion/deletion errors per block.

8.
As error-correcting codes, polar codes offer good encoding and decoding performance and have been adopted as the standard coding scheme for the 5G short-code control channel. At short code lengths, however, their performance is not outstanding. As a new type of concatenated polar code, the concatenation of parity-check codes with polar codes improves finite-length performance, but its decoding algorithm has high complexity. To address this problem, this paper proposes a partial successive cancellation list decoding algorithm for parity-check-concatenated polar codes (PC-PSCL); the algorithm…

9.
To address the problem that quasi-cyclic low-density parity-check (QC-LDPC) codes may exhibit an error floor in the high signal-to-noise ratio (SNR) region, a new construction method for QC-LDPC codes with a low error floor is proposed. The base matrix is built with the progressive edge growth (PEG) algorithm combined with an improved eliminate-elementary-trapping-sets (EETS) algorithm, so that elementary trapping sets are removed from the base matrix; the Zig-Zag method is then used to construct the cyclic-shift matrices that expand the base matrix into the parity-check matrix. The method not only lowers the error floor in the high-SNR region but also allows flexible choice of code length and code rate. Simulation results show that at a bit error rate of 10^-6, the rate-0.5 PEG-trapping-Zig-Zag (PTZZ)-QC-LDPC(3024,1512) code achieves net coding gains of 0.1 dB and 0.16 dB over the PEG-Zig-Zag (PZZ)-QC-LDPC(3024,1512) code and the PEG-QC-LDPC(3024,1512) code, respectively. The gap between the bit-error-rate curves widens as the SNR increases. In addition, the PTZZ-QC-LDPC(3024,1512) code shows no error floor above an SNR of 2.2 dB.
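
For illustration, the sketch below shows only the generic circulant-expansion step that QC-LDPC constructions such as this one rely on: each base-matrix entry becomes a Z x Z cyclically shifted identity, or a zero block for -1. The base matrix and shift values are made-up examples; the paper's PEG/EETS base-matrix design and Zig-Zag shift assignment are not reproduced here.

```python
import numpy as np

def expand_base_matrix(base, Z):
    """Lift a base matrix of shift values (-1 for a zero block) into H."""
    I = np.eye(Z, dtype=int)
    blocks = [[np.zeros((Z, Z), dtype=int) if s < 0 else np.roll(I, s, axis=1)
               for s in row] for row in base]
    return np.block(blocks)

base = [[0, 1, -1, 3],
        [2, -1, 0, 1]]            # hypothetical 2 x 4 base matrix, lifting factor Z = 4
H = expand_base_matrix(base, Z=4)
print(H.shape)                     # (8, 16): parity-check matrix of a length-16 QC code
```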

10.
This paper implements an encoding and decoding algorithm for low-density parity-check (LDPC) codes on a field-programmable gate array (FPGA). Using a Q-matrix-based LDPC code construction, an encoder with linear complexity is designed. Based on soft-decision decoding rules, a fully parallel decoder for a quasi-regular LDPC code with rate 1/2 and code length 40 bits is implemented and verified by simulation. The decoder's complexity grows linearly with the code length; compared with Turbo codes it is easier to implement in hardware and can reach a higher transmission rate.

11.
Convolutional block codes, which are commonly used as constituent codes in turbo code configurations, accept a block of information bits as input rather than a continuous stream of bits. In this paper, we propose a technique for the calculation of the transfer function of convolutional block codes, both punctured and nonpunctured. The novelty of our approach lies in the augmentation of the conventional state diagram, which allows the enumeration of all codeword sequences of a convolutional block code. In the case of a turbo code, we can readily calculate an upper bound to its bit error rate performance if the transfer function of each constituent convolutional block code has been obtained. The bound gives an accurate estimate of the error floor of the turbo code and, consequently, our method provides a useful analytical tool for determining constituent codes or identifying puncturing patterns that improve the bit error rate performance of a turbo code, at high signal-to-noise ratios.
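
As a rough illustration of how a transfer function feeds a bit-error-rate bound, the sketch below evaluates a standard union bound over a distance spectrum for BPSK on an AWGN channel. The weight spectrum, code dimension, and rate are hypothetical numbers, and the bound shown is the textbook form rather than the exact expression derived in the paper.

```python
import math

def q_func(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_union_bound(weight_spectrum, k, rate, ebn0_db):
    """P_b <= sum_d (w_d / k) * Q(sqrt(2 * d * R * Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum((w_d / k) * q_func(math.sqrt(2.0 * d * rate * ebn0))
               for d, w_d in weight_spectrum.items())

# Hypothetical {codeword weight: total information-bit multiplicity} spectrum.
spectrum = {10: 3.0, 12: 12.0, 14: 40.0}
print(ber_union_bound(spectrum, k=1000, rate=0.5, ebn0_db=3.0))
```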

12.
The error-correcting performance of the parallel concatenation of two convolutional codes is very promising. Inspired by this construction, whose error-correction performance is near the Shannon limit, we consider a further development of concatenated codes. In this construction, systematic convolutional codes and rate-1/2 systematic block codes are linked together such that only the systematic part of the convolutional codes is encoded with the block encoders. The bits of each information vector of the convolutional codes are scrambled by a given interleaving before entering the block encoders. Then, unlike Turbo codes, in which the information symbols and the redundancy from the constituent codes are transmitted [1], we transmit only the redundancy from the convolutional and block codes. We call this construction convolutional coupled codes; code coupling is the new scheme of code concatenation. The structural properties of the generator matrix of a convolutional coupled code are investigated. Its minimum distance is lower- and upper-bounded, and we introduce the notion of effective free distance. Simulation results show that convolutional coupled codes have the potential to be a realistic alternative to Turbo codes.

13.
Codes with full information rate (optimal codes), for example Hamming codes, provide the highest possible code rate R (R = k/n, where k and n are the code dimension and length, respectively), which is an important property for a block code. Recently, Systematic Distance-4 (SD-4) codes have been proposed that allow generating all optimal Hamming-distance-4 binary linear block codes. Continuous Phase Frequency Shift Keying (CPFSK) provides low spectral occupancy and is suitable for power- and bandwidth-limited channels such as satellite communication channels. The MIMO technique is essential for modern wireless communication systems. In this article, we evaluate the error performance of SD-4 codes using CPFSK modulation over MIMO Rician and Rayleigh channels via computer simulations and obtain outstanding results in terms of coding gain.

14.
In order to reduce the number of redundant candidate codewords generated by the fast successive cancellation list (FSCL) decoding algorithm for polar codes, a simplified FSCL decoding algorithm based on critical sets (CS-FSCL) is proposed. The algorithm uses the number of information bits belonging to the critical set within special nodes, such as Rate-1, repetition (REP), and single-parity-check (SPC) nodes, to constrain the number of path splits and avoid generating unnecessary candidate codewords, thereby reducing latency and computational complexity. In addition, the algorithm only flips the bits with the smaller log-likelihood ratio (LLR) magnitudes to generate the sub-maximum-likelihood (sub-ML) candidate codewords, which preserves the decoding performance. Simulation results show that for polar codes with a code length of 1024 and code rates of 1/4, 1/2, and 3/4, the proposed CS-FSCL algorithm achieves the same decoding performance as the conventional FSCL decoding algorithm while reducing latency and computational complexity for different list sizes. Specifically, with a list size of L = 8 and code rates of R = 1/2 and R = 1/4, the latency is reduced by 33% and 13%, and the computational complexity by 55% and 50%, respectively.
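
The sketch below illustrates only the general flip-least-reliable-bits idea mentioned above: starting from the hard decision of a Rate-1 (or SPC) node, extra candidates are formed by flipping the positions with the smallest LLR magnitudes. The flip budget t is a stand-in for the constraint the paper derives from the critical set, and the LLR sign convention is an assumption.

```python
import numpy as np

def rate1_candidates(llr, t=2):
    """Return the hard decision plus candidates with up to t least-reliable flips."""
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)          # assumes negative LLR favours bit value 1
    order = np.argsort(np.abs(llr))       # least reliable positions first
    cands = [hard.copy()]
    for i in range(1, t + 1):
        c = hard.copy()
        c[order[:i]] ^= 1                 # flip the i least reliable bits
        cands.append(c)
    return cands

print(rate1_candidates([2.1, -0.3, 0.1, 1.7], t=2))
```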

15.
We present a new class of irregular low-density parity-check (LDPC) codes for moderate block lengths (up to a few thousand bits) that are well-suited for rate-compatible puncturing. The proposed codes show good performance under puncturing over a wide range of rates and are suitable for use in incremental-redundancy hybrid automatic repeat request (ARQ) systems. In addition, these codes are linear-time encodable with simple shift-register circuits. For a block length of 1200 bits the codes outperform optimized irregular LDPC codes and extended irregular repeat-accumulate (eIRA) codes for all puncturing rates from 0.6 to 0.9 (base code performance is almost the same) and are particularly good at high puncturing rates, where good puncturing performance has previously been difficult to achieve.

16.
Lei  Jing  Li  Erbao  Li  Wenwen 《Wireless Personal Communications》2017,97(2):2269-2282

In this paper, we propose a novel extending algorithm to design rate-compatible (RC) QC-IRA-d codes through Tanner graph extension. The structure of QC-IRA-d codes provides high efficiency and low complexity for graph extension. The proposed method therefore has three features: (1) a close relation between the extended parts and the original parity-check matrix; (2) a larger length of the introduced cycles; (3) a balanced check-node degree. All of them guarantee good performance of the designed RC code family. Simulation results show that the graph extension method outperforms existing algorithms by 0.05 to 0.47 dB for codes of different rates.


17.
In this paper, we present a high-rate M-ary quadrature amplitude modulation (M-QAM) space-time labeling diversity (STLD) system that retains the robust error performance of the conventional STLD system. The high-rate STLD is realised by expanding the conventional STLD via a unitary matrix transformation. Robust error performance of the high-rate STLD is achieved by incorporating trellis coding into the mapping of additional bits to high-rate codes. The comparison of spectral efficiency between the proposed trellis code-aided high-rate STLD (TC-STLD) and the conventional STLD shows that TC-STLD with 16-QAM and 64-QAM respectively achieves a 12.5% and 8.3% increase in spectral efficiency for each additional bit sent with the transmitted high-rate codeword. Moreover, we derive an analytical bound to predict the average bit error probability performance of TC-STLD over Rayleigh frequency-flat fading channels. The analytical results are verified by Monte Carlo simulation results, which show that the derived analytical bounds closely predict the average bit error probability performance at high signal-to-noise ratios (SNR). Simulation results also show that TC-STLD with 1 additional bit achieves an insignificant SNR gain of approximately 0.05 dB over the conventional STLD, while TC-STLD with 2 additional bits achieves an SNR gain of approximately 0.12 dB.

18.
The binary extended Golay code has a two-level structure, which can be used in decoding the code. However, such structure is not limited to the Golay code; in fact, several binary linear codes can be constructed by a projective method related to this structure. In this correspondence, binary (4n, n+2k, ≥ min(8, n, 2d)) linear codes are derived from quaternary (n, k, d) linear block codes. Based on this structure, efficient maximum-likelihood decoding algorithms are presented for the derived codes.

19.
Templates are constructed to extend arbitrary additive error correcting or constrained codes, i.e., additional redundant bits are added in selected positions to balance the moment of the codeword. The original codes may have error correcting capabilities or constrained output symbols as predetermined by the usual communication system considerations, which are retained after extending the code. Using some number-theoretic constructions in the literature, insertion/deletion correction can then be achieved. If the template is carefully designed, the number of additional redundant bits for the insertion/deletion correction can be kept small, in some cases of the same order as the number of parity bits in a Hamming code of comparable length.
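
To make the moment balancing concrete, here is a minimal sketch under stated assumptions: a Varshamov-Tenengolts-type constraint requires the moment sum_i i*x_i to land on a fixed residue modulo n+1, which is what enables single insertion/deletion correction, and redundant bits placed at hypothetical template positions are set greedily to reach that residue. The greedy assignment and the chosen positions are purely illustrative, not the paper's template construction.

```python
def moment(bits):
    """Codeword moment sum_i i * x_i with 1-based positions."""
    return sum(i * b for i, b in enumerate(bits, start=1))

def balance(bits, template_positions, residue=0):
    """Greedily set redundant bits at `template_positions` (1-based, initially 0)
    so that moment(bits) % (len(bits) + 1) == residue, if such an assignment exists."""
    word = list(bits)
    n = len(word)
    for pos in sorted(template_positions, reverse=True):   # largest weight first
        need = (residue - moment(word)) % (n + 1)
        if 0 < pos <= need:
            word[pos - 1] = 1
    return word, moment(word) % (n + 1) == residue

# Toy example: information pattern with three template positions reserved at the end.
word, ok = balance([1, 0, 1, 1, 0, 0, 0, 0], template_positions=[6, 7, 8], residue=7)
print(word, ok)
```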

20.
胡延平  张天骐  白杨柳  周琳 《信号处理》2021,37(11):2207-2215
Abstract: Since complete recursive systematic convolutional (RSC) codewords cannot be obtained, conventional blind identification methods are not applicable to punctured Turbo codes. The proposed algorithm therefore improves the construction of the identification sequence: because the codeword bits at the punctured positions differ from those of the corresponding RSC code, these bits are treated as errors in which '0' and '1' occur with equal probability, so the punctured positions are set to zero and suitable truncated sequences are selected for the matching-degree computation; the parameters of the component encoders of the punctured Turbo code are then identified from the overall distribution of the final matching degrees. Simulation results show that for a punctured Turbo code with code length 256 and code rate 1/2, the correct identification rate stays above 80% when the maximum bit error rate does not exceed 0.033.
