Similar Articles
20 similar articles found (search time: 31 ms)
1.
We propose a new ordered decoding scheme for a product code for mobile data communications. The ordered decoding scheme determines the order of decoding for both row and column component codewords according to the probability of decoding the component codeword correctly. Component codewords are decoded independently. To randomize burst errors in both row and column codewords, a diagonal interleaving scheme is used for code symbols in the codeword. It is shown that the ordered decoding scheme combined with diagonal interleaving improves the performance of a product code with reasonably long code length for mobile data communications.
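The interleaving idea above can be sketched in a few lines; this is an illustrative reading of diagonal interleaving for an n_r x n_c product-code array (the anti-diagonal read-out order and the function names are our assumptions, not details from the paper):

```python
def diagonal_interleave(array):
    """Read an n_r x n_c symbol array out along its anti-diagonals."""
    n_r, n_c = len(array), len(array[0])
    out = []
    for d in range(n_r + n_c - 1):        # each anti-diagonal index
        for i in range(n_r):
            j = d - i
            if 0 <= j < n_c:
                out.append(array[i][j])
    return out

def diagonal_deinterleave(stream, n_r, n_c):
    """Invert diagonal_interleave back into an n_r x n_c array."""
    array = [[None] * n_c for _ in range(n_r)]
    it = iter(stream)
    for d in range(n_r + n_c - 1):
        for i in range(n_r):
            j = d - i
            if 0 <= j < n_c:
                array[i][j] = next(it)
    return array
```

A channel burst of adjacent stream symbols then lands in distinct rows and columns of the array, so each row and column component decoder sees the burst as scattered single errors.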

2.
Meteor burst communication is a technique that transmits data by reflecting radio waves off the ionized trails left by meteors. One performance metric of a meteor burst communication system is the probability of correct packet transmission under the coding scheme employed. The commonly used scheme is fixed-rate coding, which yields a relatively low probability of correct packet transmission and thus poor system performance. Variable-rate coding exploits the characteristics of the meteor burst channel: by varying the code rate of the individual codewords within a packet, it raises the probability of correct packet transmission and improves system performance. Theoretical analysis and simulation results show that variable-rate coding improves system performance.

3.
The performance of channel block codes for a general channel is studied by examining the relationship between the rate of a code, the joint composition of pairs of codewords, and the probability of decoding error. At fixed rate, lower bounds and upper bounds, both on minimum Bhattacharyya distance between codewords and on minimum equivocation distance between codewords, are derived. These bounds resemble, respectively, the Gilbert and the Elias bounds on the minimum Hamming distance between codewords. For a certain large class of channels, a lower bound on probability of decoding error for low-rate channel codes is derived as a consequence of the upper bound on Bhattacharyya distance. This bound is always asymptotically tight at zero rate. Further, for some channels, it is asymptotically tighter than the straight-line bound at low rates. Also studied is the relationship between the bounds on codeword composition for arbitrary alphabets and the expurgated bound for arbitrary channels having zero-error capacity equal to zero. In particular, it is shown that the expurgated reliability-rate function for blocks of letters is achieved by a product distribution whenever it is achieved by a block probability distribution with strictly positive components.

4.
This work introduces a novel approach to increase the performance of block turbo codes (BTCs). The idea is based on using a Hamming threshold to limit the search for the maximum-likelihood (ML) codeword to only those codewords that lie within this threshold. The proposed iterative decoding approach is shown to offer both significant coding gain and complexity reduction over the standard iterative decoding methods.
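A minimal sketch of the thresholding idea, assuming hard decisions, a precomputed list of candidate codewords, and a correlation metric for the soft-decision comparison (all names illustrative; the authors' iterative BTC decoder is more elaborate):

```python
def hamming_distance(a, b):
    """Number of positions in which two bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def thresholded_ml(candidates, hard_decision, soft_values, t):
    """Keep only candidates within Hamming distance t of the hard
    decisions, then pick the survivor with the best correlation metric.
    soft_values[i] > 0 favours bit 1. Returns None if no survivor."""
    survivors = [c for c in candidates
                 if hamming_distance(c, hard_decision) <= t]
    if not survivors:
        return None
    # Correlation metric: larger is more likely under an AWGN model.
    return max(survivors,
               key=lambda c: sum(v if bit else -v
                                 for bit, v in zip(c, soft_values)))
```

The Hamming filter is what cuts complexity: distant candidates are discarded before any soft metric is computed.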

5.
The paper presents a computationally efficient hybrid reliability-based decoding algorithm for Reed-Solomon (RS) codes. This hybrid decoding algorithm consists of two major components: a re-encoding process and a successive erasure-and-error decoding process at both the bit and symbol levels. The re-encoding process generates a sequence of candidate codewords based on the information provided by the codeword decoded by an algebraic decoder and a set of test error patterns. Two test criteria are used in the decoding process to reduce its computational complexity. The first criterion reduces the number of re-encoding operations by eliminating unlikely error patterns. The second criterion tests the optimality of a generated candidate codeword. Numerical results show that the proposed decoding algorithm can achieve either a near-optimum or an asymptotically optimum error performance.

6.
A new maximum likelihood decoding (MLD) algorithm for linear block codes is proposed. The new algorithm uses an algebraic decoder to generate the set of candidate codewords. It uses the exact probability of each codeword as a new likelihood metric, together with a method for generating the appropriate set of codewords similar to the Kaneko et al. and Tanaka-Kakigahara algorithms. The performance of the proposed algorithm is the same as that of MLD, as is proved theoretically and verified by simulation results. Comparison with these similar algorithms shows that the new one always requires less average decoding complexity. Finally, we compare the algorithms for terrestrial and satellite channels.

7.
王婷  陈为刚 《信号处理》2020,36(5):655-665
Considering the symbol-oriented nature of non-binary LDPC codes, and based on an analysis of their residual errors and erasures, this paper constructs a non-binary product code using a non-binary LDPC code as the inner code and a high-rate RS code over the same Galois field as the outer code, and proposes a low-complexity iterative decoding scheme to reduce the various kinds of transmission errors. In each iteration, decoding is performed only on those codewords that failed in the previous iteration, and the initial bit probabilities corresponding to correctly decoded codewords are revised, improving the accuracy of the a priori symbol information for the next round of non-binary LDPC decoding. This reduces post-decoding decision errors of the inner code and makes full use of the error-correction capability of the outer code. Simulation results show that the non-binary product code achieves a considerable coding gain over a binary LDPC product code, that iteration further improves performance, and that random errors and burst erasures in the channel are corrected efficiently. For a Gaussian channel with 2% burst erasures, one iteration yields a gain of about 0.4 dB at a bit error rate of 10^-6.

8.
This paper presents generalized expressions for the probabilities of correct decoding and decoder error for Reed-Solomon (RS) codes. In these expressions, the symbol error and erasure probabilities are different in each coordinate of a codeword. The above expressions are used to derive expressions for reliability and delay for Type-I hybrid ARQ (HARQ-I) systems when each symbol in a packet (multiple codewords per packet) has unique symbol error and erasure probabilities. Applications of the above results are demonstrated by analyzing a bursty-correlative channel in which the symbols and codewords within the packet are correlated.
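The flavor of such per-coordinate expressions can be illustrated with a small dynamic program (our sketch, not the paper's closed form): bounded-distance errors-and-erasures decoding succeeds when 2*(errors) + (erasures) <= d_min - 1, with each coordinate contributing its own error and erasure probability:

```python
def prob_correct_decoding(p_err, p_ers, d_min):
    """Probability that 2*errors + erasures <= d_min - 1 when coordinate
    i has symbol-error probability p_err[i] and erasure probability
    p_ers[i]. DP over the decoding 'score' 2e + s; scores past the
    limit are lumped into the final bucket."""
    limit = d_min - 1
    dist = [0.0] * (limit + 2)
    dist[0] = 1.0
    for pe, ps in zip(p_err, p_ers):
        new = [0.0] * (limit + 2)
        for s, pr in enumerate(dist):
            if pr == 0.0:
                continue
            if s > limit:                      # already past the limit
                new[-1] += pr
                continue
            new[s] += pr * (1.0 - pe - ps)     # symbol correct: +0
            new[min(s + 1, limit + 1)] += pr * ps   # erasure: +1
            new[min(s + 2, limit + 1)] += pr * pe   # error:   +2
        dist = new
    return sum(dist[:limit + 1])
```

With identical probabilities in every coordinate this reduces to the familiar binomial-style sum; the DP is what accommodates coordinate-dependent probabilities.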

9.
A list decoder generates a list of more than one codeword candidate, and decoding is erroneous if the transmitted codeword is not included in the list. This decoding strategy can be implemented in a system that employs an inner error-correcting code and an outer error-detecting code used to choose the correct codeword from the list. Probability-of-codeword-error analysis for a linear block code with list decoding is typically based on the "worst case" lower bound on the effective weights of codewords for list decoding, evaluated from the weight enumerating function of the code. In this paper, the concepts of the generalized pairwise error event and the effective weight enumerating function are proposed for evaluating the probability of codeword error of linear block codes with list decoding. Geometrical analysis shows that the effective Euclidean distances are not necessarily as low as those predicted by the lower bound. An approach to evaluating the effective weight enumerating function of a particular code with list decoding is proposed. The effective Euclidean distances for decisions in each pairwise error event are evaluated taking into consideration the actual Hamming distance relationships between codewords, which relaxes the pessimistic assumptions upon which the traditional lower bound analysis is based. Using the effective weight enumerating function, a more accurate approximation is achieved for the probability of codeword error of the code with list decoding. The proposed approach is applied to codes of practical interest, including terminated convolutional codes and turbo codes with the parallel concatenation structure.

10.
A double serially concatenated code with two interleavers consists of the cascade of an outer encoder, an interleaver permuting the outer codeword bits, a middle encoder, another interleaver permuting the middle codeword bits, and an inner encoder whose input words are the permuted middle codewords. The construction can be generalized to h cascaded encoders separated by h-1 interleavers, where h>3. We obtain upper bounds on the average maximum-likelihood bit-error probability of double serially concatenated block and convolutional coding schemes. Then, we derive design guidelines for the outer, middle, and inner codes that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a low-complexity iterative decoding algorithm. Comparisons with parallel concatenated convolutional codes, known as "turbo codes", and with the proposed serially concatenated convolutional codes are also presented, showing that in some cases the new schemes offer better performance.

11.
Two codeword families and the corresponding encoder/decoder schemes are presented for spatial/frequency optical code-division multiple-access communications. These 2-D codewords have multiple weights per row and can be encoded/decoded with compact hardware. With the proposed decoding mechanism, the intended user rejects interfering users, and multiple-access interference is fully eliminated. In addition, the power of the same wavelength contributed by all interfering codewords is split and detected by distinct photodiodes in the decoder. Thus the performance degradation due to the beat noise arising in the photodetection process is reduced, as compared with the traditional 1-D coding scheme, and a larger number of active users is supported under a given bit-error rate.

12.
This correspondence presents a performance analysis of symbol-level soft-decision decoding of q-ary maximum-distance-separable (MDS) codes based on the ordered statistics algorithm. The method we present is inspired by the one recently proposed by Agrawal and Vardy (2000), who approximately evaluate the performance of generalized minimum-distance decoding. The correspondence shows that in our context, the method allows us to compute the exact value of the probability that the transmitted codeword is not one of the candidate codewords. This leads to a close upper bound on the performance of the decoding algorithm. Application of the ordered statistics algorithm to MDS codes is not new; nevertheless, its advantages seem not to have been fully explored. We show an example where the decoding algorithm is applied to singly extended 16-ary Reed-Solomon (RS) codes in a 128-dimensional multilevel coded-modulation scheme that approaches the sphere lower bound within 0.5 dB at a word error probability of 10^-4 with manageable decoding complexity.

13.
A new hybrid automatic repeat request (ARQ) scheme is proposed for data transmission in a power-controlled direct-sequence (DS) code-division multiple-access (CDMA) cellular system. The data frame is composed of interleaved Reed-Solomon codewords. The depth of interleaving is determined by the power-control interval. After decoding each codeword with algebraic decoding, a post-decoding processor decides whether to accept the codeword or to discard it, using channel state information from the power-control processor. The proposed hybrid ARQ scheme significantly reduces the probability of undetected error among accepted codewords without significantly reducing the throughput.

14.
Efficient code-search maximum-likelihood decoding algorithms, based on reliability information, are presented for binary linear block codes. The codewords examined are obtained via encoding. The information set utilized for encoding comprises the positions of those columns of a generator matrix G of the code which, for a given received sequence, constitute the most reliable basis for the column space of G. Substantially reduced decoding complexity is achieved by exploiting the ordering of the positions within this information set. The search procedures do not require memory; the codeword to be examined is constructed from the previously examined codeword according to a fixed rule. Consequently, the search algorithms are applicable to codes of relatively large size. They are also conveniently modifiable to achieve efficient, nearly optimum decoding of particularly large codes.

15.
Multiple-symbol parallel decoding for variable length codes
In this paper, a multiple-symbol parallel variable length decoding (VLD) scheme is introduced. The scheme is capable of decoding all the codewords in an N-bit block of the encoded input data stream. The proposed method partially breaks the recursive dependency inherent in VLD. First, all possible codewords in the block are detected in parallel and their lengths are returned. This procedure yields a redundant set of codeword lengths, from which incorrect values are removed by recursive selection. Next, the index for each symbol corresponding to the detected codeword is generated, with the codeword length determining the page and the partial codeword defining the offset in the symbol table; the symbol lookup can then be performed independently from the symbol table. Finally, the sum of the valid codeword lengths is provided to an external shifter that aligns the encoded input stream for a new decoding cycle. In order to prove feasibility and determine the limiting factors of our proposal, the variable length decoder has been implemented in field-programmable gate-array (FPGA) technology. When applied to MPEG-2 standard benchmark scenes, on average 4.8 codewords are decoded per cycle, resulting in a throughput of 106 million symbols per second.
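A toy software analogue of the detect-then-select steps, assuming a small prefix-free code table of our own invention (in hardware all offsets are matched in parallel; here a loop stands in for that):

```python
# Hypothetical prefix-free code table, not from the paper.
CODE = {'0': 'a', '10': 'b', '110': 'c', '111': 'd'}

def lengths_at_all_offsets(bits):
    """For every bit offset, the length of the codeword starting there
    (0 if none) -- the 'parallel detection' step."""
    lens = []
    for i in range(len(bits)):
        for cw in CODE:
            if bits.startswith(cw, i):
                lens.append(len(cw))
                break
        else:
            lens.append(0)
    return lens

def select_valid(bits, lens):
    """Walk from offset 0, keeping only lengths at valid codeword
    boundaries -- the 'recursive selection' step."""
    symbols, i = [], 0
    while i < len(bits) and lens[i]:
        symbols.append(CODE[bits[i:i + lens[i]]])
        i += lens[i]
    return symbols, i            # decoded symbols, bits consumed
```

The final offset `i` plays the role of the summed valid lengths handed to the external shifter for the next block.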

16.
The performance of ARQ systems can be improved by combining current and prior transmissions at the receiver. Two techniques for combining outputs in a packet-based communication system are presented. In both techniques the fundamental unit of retransmission is a packet, and the fundamental unit of combining is a codeword. The techniques are analyzed for a bursty channel and a system that employs Reed–Solomon coding and bounded-distance errors-and-erasures decoding. Performance results show that the packet-combining schemes provide significant gains in throughput and reductions in error probability when compared with a system that does not employ combining. This revised version was published online in June 2006 with corrections to the Cover Date.

17.
The performance of Reed-Solomon (RS) coded direct-sequence code-division multiple-access (DS-CDMA) systems using noncoherent M-ary orthogonal modulation is investigated over multipath Rayleigh fading channels. Diversity reception techniques with equal gain combining (EGC) or selection combining (SC) are invoked, and the related performance is evaluated for both uncoded and coded DS-CDMA systems. "Errors-and-erasures" decoding is considered, where the erasures are based on Viterbi's (1982) so-called ratio threshold test (RTT). The probability density functions (PDFs) of the ratio associated with the RTT, conditioned on both correct and erroneous detection of the M-ary signals, are derived. These PDFs are then used for computing the codeword decoding error probability of the RS-coded DS-CDMA system using "errors-and-erasures" decoding. Furthermore, the performance of "errors-and-erasures" decoding employing the RTT is compared to that of "error-correction-only" decoding, which refrains from using side information, over multipath Rayleigh fading channels. As expected, the numerical results show that with "errors-and-erasures" decoding, RS codes of a given code rate can achieve a higher coding gain than without erasure information.
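The RTT erasure rule can be sketched as follows (our illustration of the idea): erase a symbol whenever the largest noncoherent correlator output fails to exceed the second largest by a chosen threshold factor, since a small ratio signals an unreliable decision:

```python
def rtt_decide(correlator_outputs, lam):
    """Ratio threshold test: return the index of the winning M-ary
    symbol, or None (erasure) if the max-to-second-max ratio of the
    correlator outputs falls below the threshold factor lam."""
    ranked = sorted(correlator_outputs, reverse=True)
    best, second = ranked[0], ranked[1]
    if best < lam * second:
        return None                      # unreliable: erase the symbol
    return correlator_outputs.index(best)
```

Raising `lam` trades more erasures (each costing one unit of RS decoding budget) for fewer undetected symbol errors (each costing two).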

18.
The performance of M-ary orthogonal signaling schemes employing Reed-Solomon (RS) codes and redundant residue number system (RRNS) codes is investigated over frequency-selective Rayleigh fading channels. "Errors-and-erasures" decoding is considered, where erasures are judged based on two low-complexity, low-delay erasure insertion schemes: Viterbi's ratio threshold test (RTT) and the proposed output threshold test (OTT). The probability density functions (PDFs) of the ratio associated with the RTT and of the demodulation output in the OTT, conditioned on both correct and erroneous detection of the M-ary signals, are derived, and the characteristics of the RTT and OTT are investigated. Furthermore, expressions are derived for computing the codeword decoding error probability of RS or RRNS codes based on the above PDFs. The OTT technique is compared to Viterbi's RTT, and both are compared to receivers using "error-correction-only" decoding over frequency-selective Rayleigh fading channels. The numerical results show that by using "errors-and-erasures" decoding, RS or RRNS codes of a given code rate can achieve a higher coding gain than without erasure information, and that the OTT technique outperforms the RTT, provided that both schemes are operated at their optimum decision thresholds.

19.
Let a q-ary linear (n,k)-code be used over a memoryless channel. We design a soft-decision decoding algorithm that tries to locate a few most probable error patterns on a shorter length s ∈ [k,n]. First, we take s cyclically consecutive positions starting from any initial point. Then we cut the subinterval of length s into two parts and examine the T most plausible error patterns on either part. To obtain codewords of a punctured (s,k)-code, we try to match the syndromes of both parts. Finally, the designed codewords of the (s,k)-code are re-encoded to find the most probable codeword on the full length n. For any long linear code, the decoding error probability of this algorithm can be made arbitrarily close to the probability of its maximum-likelihood (ML) decoding given sufficiently large T. By optimizing s, we prove that this near-ML decoding can be achieved by using only T ≈ q^{(n-k)k/(n+k)} error patterns. For most long linear codes, this optimization also gives about T re-encoded codewords. As a result, we obtain the lowest complexity order of q^{(n-k)k/(n+k)} known to date for near-ML decoding. For codes of rate 1/2, the new bound grows as a cubic root of the general trellis complexity q^{min(n-k,k)}. For short blocks of length 63, the algorithm reduces the complexity of the trellis design by a few decimal orders.
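As a quick sanity check on the quoted complexity order (our own arithmetic, not from the paper): for a rate-1/2 code, k = n/2 gives exponent (n-k)k/(n+k) = n/6, one third of the trellis exponent min(n-k,k) = n/2, which is the source of the cubic-root remark:

```python
def near_ml_exponent(n, k):
    """Exponent of q in the quoted complexity order q^{(n-k)k/(n+k)}."""
    return (n - k) * k / (n + k)

def trellis_exponent(n, k):
    """Exponent of q in the general trellis complexity q^{min(n-k,k)}."""
    return min(n - k, k)
```

For example, at n = 60, k = 30 the near-ML exponent is 10 against a trellis exponent of 30, so the complexity bound is the cube root of the trellis bound.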

20.
Polar codes offer excellent error-correction performance, but their construction makes the code length inflexible, so puncturing is needed to obtain polar codes of variable length. This paper introduces a matrix polarization rate to measure the impact of puncturing on polar code performance, and selects the pattern maximizing the matrix polarization rate as the optimal puncturing pattern. The polar codeword is partitioned into segments, which effectively reduces the search complexity for the optimal puncturing pattern. Since the first code bit of each segment is always punctured, and since single-bit errors dominate in successive cancellation decoding, a parity-check code is concatenated at the head of each segment as an early-termination flag for decoding: it detects decoding errors in the preceding segment and triggers re-decoding. Simulations of the proposed method under successive cancellation decoding show that, compared with conventional puncturing, it achieves a coding gain of about 0.7 dB at a bit error rate of 10^-3, effectively improving the decoding performance of punctured polar codes.
