Similar Literature
1.
The definition of good codes for error detection is given. It is proved that an (n, k) linear block code over GF(q) is a good error-detecting code if and only if its dual code is. A series of new results about good error-detecting codes is derived. New lower bounds on the undetected error probability are obtained that depend only on n and k, not on the weight structure of the code.
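As background for the bounds discussed above: when a code's weight distribution {A_i} is known, the undetected error probability on a binary symmetric channel with crossover probability p is P_ud(p) = Σ_{i≥1} A_i p^i (1−p)^(n−i). A minimal sketch, using the (7, 4) Hamming code's weight distribution as a stand-in example (it is not a code from the abstract):

```python
# Undetected-error probability of an (n, k) binary code on a binary
# symmetric channel with crossover probability p, from the code's
# weight distribution:  P_ud(p) = sum_{i>=1} A_i * p**i * (1-p)**(n-i)
def undetected_error_prob(weights, n, p):
    """weights maps Hamming weight i -> number of codewords A_i."""
    return sum(a * p**i * (1 - p)**(n - i) for i, a in weights.items() if i > 0)

# Weight distribution of the (7, 4) Hamming code: A_0=1, A_3=A_4=7, A_7=1.
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}
print(undetected_error_prob(hamming74, 7, 0.01))  # ~6.8e-06
```

An undetected error occurs exactly when the channel error pattern is itself a nonzero codeword, which is why only the weight distribution enters the formula.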

2.
Runlength-limited (RLL) codes are used in magnetic recording. The error patterns that occur in peak-detection magnetic recording systems using a runlength-limited code consist of both symmetric errors and shift errors, which we refer to collectively as mixed-type errors. A method of providing error control for mixed-type errors in a runlength-limited code composed of (d, k) constrained sequences is examined. The coding scheme chooses parity blocks to insert into the constrained information sequence; the parity blocks are chosen to satisfy the constraints and to provide some error control. The cases of single error detection and single error correction are investigated, where the single error may be a shift error or a symmetric error. Bounds on the possible lengths of the parity blocks are discussed. It is shown that the single-error-detection codes are the best possible in terms of the length of the parity blocks.
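By way of illustration, a (d, k) constraint requires every run of 0s between consecutive 1s to have length at least d and at most k. A small sketch of a constraint checker (conventions for the leading and trailing runs vary in the literature; here only interior runs are held to the lower bound d):

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) runlength constraint on a 0/1 sequence: every run
    of 0s between two 1s must have length in [d, k]; runs before the first
    1 and after the last 1 are held only to the upper bound k."""
    runs = [len(r) for r in "".join(map(str, bits)).split("1")]
    if any(length > k for length in runs):
        return False
    return all(length >= d for length in runs[1:-1])  # interior runs only

print(satisfies_dk([0, 1, 0, 0, 1, 0, 0, 0, 1], 2, 7))  # True
print(satisfies_dk([1, 1, 0, 0, 1], 2, 7))              # False: adjacent 1s
```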

3.
In this paper we investigate a generalization of Gallager's (1963) low-density (LD) parity-check codes in which single-error-correcting Hamming codes are used as component codes instead of single-error-detecting parity-check codes. It is proved that there exist such generalized low-density (GLD) codes whose minimum distance grows linearly with the block length, and a lower bound on the minimum distance is given. We also study iterative decoding of GLD codes for communication over an additive white Gaussian noise channel. The bit error rate performance, obtained by computer simulation, is presented for GLD codes of different lengths.

4.
Turbo product codes (TPCs) have become a research focus in error-correction coding because of their near-Shannon-limit decoding performance and a parallel structure well suited to high-speed decoding. The component codes of a Turbo product code are usually constructed from extended Hamming codes, which keeps the hardware implementation of encoding and decoding simple. When extended Hamming codes are used as the component codes, the influence of the minimum codeword weight on the frame error rate grows as the signal-to-noise ratio increases. This paper improves the TPC encoding structure so that, at the cost of only a small increase in encoding/decoding complexity and delay, the minimum codeword weight is raised and the fraction of minimum-weight codewords in the code space is reduced. Simulation and analysis compare the proposed code with conventional TPCs in frame error rate performance, minimum-weight distribution, and estimated minimum distance.

5.
We present hardware performance analyses of Hamming product codes combined with type-II hybrid automatic repeat request (HARQ) for on-chip interconnects. Input flit width and the number of rows in the product code message are investigated for their impact on the number of wires in the link, codec delay, reliability, and energy consumption. Analytical models are presented to estimate codec delay and residual flit error rate. The analyses are validated by comparison with simulation results. In a case study using an H.264 video encoder in a network-on-chip environment, the method of combining Hamming product codes with type-II HARQ achieves several orders of magnitude improvement in residual flit error rate. For a given residual flit error rate requirement (e.g., 10^-20), this method yields up to 50% energy improvement over other error control methods in high-noise conditions.

6.
In this letter, we propose tight performance upper bounds for convolutional codes terminated with an input sequence of finite length. To obtain the upper bounds, a weight enumerator is defined to represent the relation between the Hamming distance of the coded output and the Hamming distance of the input bits of the code. The upper bounds on frame error rate (FER) and average bit error rate (BER) are obtained from the weight enumerator. A simple method is presented to compute the weight enumerator of a terminated convolutional code based on a modified trellis diagram.

7.
Distributed classification fusion using error-correcting codes (DCFECC) has recently been proposed for wireless sensor networks operating in harsh environments. It has been shown to have a considerably better capability against unexpected sensor faults than optimal likelihood fusion. In this paper, we analyze the performance of a DCFECC code with minimum Hamming distance fusion. No assumption of identically distributed local observations, nor of a common marginal distribution for the additive noises of the wireless links, is made. In addition, sensors are allowed to employ their own local classification rules. Upper bounds on the probability of error that are valid for any finite number of sensors are derived based on a large deviations technique. A necessary and sufficient condition under which the minimum Hamming distance fusion error vanishes as the number of sensors tends to infinity is also established. With this condition and the upper error bounds, the relation between the fault-tolerance capability of a DCFECC code and its pairwise Hamming distances is characterized, and can be used together with any code search criterion in finding a code with the desired fault-tolerance capability. Based on the above results, we further propose a code search criterion of much lower complexity than the minimum Hamming distance fusion error criterion adopted earlier by the authors. This makes code construction with acceptable fault-tolerance capability practical for a network with over a hundred sensors. Simulation results show that the code determined by the new, much simpler criterion performs almost identically to the best code that minimizes the minimum Hamming distance fusion error. Also simulated and discussed are the performance trends of the codes found with the new criterion with respect to the network size and the number of hypotheses.
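The minimum Hamming distance fusion rule itself is compact: each hypothesis is assigned a row of the code matrix, and the fusion center decides for the row closest to the vector of received local decisions. A toy sketch with a hypothetical 3-hypothesis, 5-sensor code (the matrix is made up for illustration, not taken from the paper):

```python
def min_hamming_fusion(code_matrix, received):
    """DCFECC-style fusion: each row of code_matrix is the codeword
    assigned to one hypothesis; decide the hypothesis whose row is
    closest in Hamming distance to the received sensor bits."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(range(len(code_matrix)),
               key=lambda j: hamming(code_matrix[j], received))

C = [(0, 0, 0, 0, 0),   # hypothesis 0
     (1, 1, 1, 0, 0),   # hypothesis 1
     (0, 0, 1, 1, 1)]   # hypothesis 2
print(min_hamming_fusion(C, (1, 1, 0, 0, 0)))  # 1 (one sensor fault away from row 1)
```

The fault tolerance comes from the pairwise row distances: a decision survives as long as fewer sensors are wrong than half the distance between the true row and any other row.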

8.
A cumulative decision scheme that can be employed in a feedback communication system is treated. Any decision to decode depends upon all messages that have been received prior to the decision. The average error over all transmissions as well as the forward error may be controlled. Performance is compared to an error-detection repeat-request noncumulative scheme utilizing group codes. Optimal performance of the cumulative scheme with respect to the noncumulative scheme is given for short-length best group codes. It is also demonstrated that the use of feedback does not necessarily increase appreciably the amount of hardware necessary to effect error control.

9.
To prevent soft errors from causing data corruption, memories are commonly protected with Error Correction Codes (ECCs). To minimize the impact of the ECC on memory complexity, simple codes are commonly used; for example, Single Error Correction (SEC) codes such as Hamming codes are widely deployed. Power consumption can be reduced by first checking whether the word has errors and performing the rest of the decoding only when it does. This greatly reduces average power consumption, as most words have no errors. In this paper an efficient error detection scheme for Double Error Correction (DEC) Bose–Chaudhuri–Hocquenghem (BCH) codes is presented. The scheme reduces the dynamic power consumption so that it matches that of error detection in a SEC Hamming code.
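The detect-first idea can be sketched for the simpler SEC case: compute the syndrome and invoke the (more expensive) correction logic only when it is nonzero. A minimal illustration with the standard (7, 4) Hamming parity-check matrix (the DEC BCH detection circuit of the paper is not reproduced here):

```python
# Parity-check matrix of the (7, 4) Hamming code; column i is the
# binary representation of i+1 (standard construction).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    return tuple(sum(h * x for h, x in zip(row, word)) % 2 for row in H)

def has_errors(word):
    """Cheap detection step: a zero syndrome means the word is a codeword,
    so the full correction logic can be skipped (the common case)."""
    return syndrome(word) != (0, 0, 0)

codeword = (1, 1, 1, 0, 0, 0, 0)    # valid: columns 1, 2, 3 of H sum to zero
corrupted = (1, 1, 1, 0, 1, 0, 0)   # single bit flipped at position 5
print(has_errors(codeword), has_errors(corrupted))  # False True
```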

10.
Theoretical and simulation results of using Hamming codes with the two-dimensional discrete cosine transform (2D-DCT) at a transmitted data rate of 1 bit/pixel over a binary symmetric channel (BSC) are presented. The design bit error rate (BER) of interest is 10^-2. The (7, 4), (15, 11), and (31, 26) Hamming codes are used to protect the most important bits in each 16 by 16 transformed block, where the most important bits are determined by calculating the mean squared reconstruction error (MSE) contributed by a channel error in each individual bit. A theoretical expression is given that allows the number of protected bits achieving minimum MSE for each code rate to be computed. By comparing these minima, the best code and bit allocation can be found. Objective and subjective performance results indicate that using the (7, 4) Hamming code to protect the most important 2D-DCT coefficients can substantially improve reconstructed image quality at a BER of 10^-2. Furthermore, the allocation of 33 out of the 256 bits per block to channel coding does not noticeably degrade reconstructed image quality in the absence of channel errors.

11.
Burst-error channels have been used to model a large class of modern communication media, and the problem of communicating reliably through such media has received much study [1]-[9]. Existing techniques include two-way communication schemes that involve error detection and retransmission, and schemes that utilize error correcting codes in code interleaving. The error-detection and retransmission scheme is simple, but its applicability has been restricted to limited environments. On the other hand, the concept of code interleaving has proved to be versatile and effective. Code interleaving distributes the error detection and correction burden among the component codes and thus lowers the overall redundancy requirement. However, the memory characteristics of the burst-error channel have not been used. This omission has prompted the investigation presented in this paper to utilize the inherent information embedded in the code interleaving scheme when used with burst-error channels. The concept of erasure decoding is introduced, leading to some useful coding and decoding strategies. Theoretical formulations are devised to predict code performance, and their validity is verified with computer simulations.
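A plain block interleaver illustrates how the error-control burden gets spread over component codewords: symbols are written row by row and transmitted column by column, so a burst of up to `rows` consecutive channel errors hits each row codeword at most once. A minimal sketch of the generic textbook construction (not the specific erasure-decoding scheme of the paper):

```python
def interleave(data, rows, cols):
    """Write the sequence row by row into a rows x cols array, read it
    out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse of interleave: undo the column-major readout."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

seq = list(range(12))
tx = interleave(seq, 3, 4)
print(tx)                              # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(tx, 3, 4) == seq)   # True
```

In `tx`, any 3 consecutive positions come from 3 different rows, so after deinterleaving a burst of length 3 appears as at most one error per row codeword.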

12.
Block cyclic redundancy check (CRC) codes represent a popular and powerful class of error detection techniques used almost exclusively in modern data communication systems. Though efficient, CRCs can detect errors only after an entire block of data has been received and processed. In this work, we exploit the "continuous" nature of error detection that results from using arithmetic codes, which provides a novel tradeoff between the amount of added redundancy and the time needed to detect an error once it occurs. We demonstrate how this continuous error detection (CED) framework improves the overall performance of communication systems, and show that considerable performance gains can be attained. We focus on several important scenarios: 1) automatic repeat request (ARQ) based transmission; 2) forward error correction (FEC) frameworks based on (serially) concatenated coding systems involving an inner error-correction code and an outer error-detection code; and 3) reduced state sequence estimation (RSSE) for channels with memory. We demonstrate that the proposed CED framework improves the throughput of ARQ systems by up to 15% and reduces the computational/storage complexity of FEC and RSSE by a factor of two in the comparisons that we make against state-of-the-art systems.

13.
A Markovian technique is described to calculate the exact performance of the Viterbi algorithm used as either a channel decoder or a source encoder for a convolutional code. The probability of information bit error and the expected Hamming distortion are computed for codes of various rates and constraint lengths. The concept of tie-breaking rules is introduced and its influence on decoder performance is examined. Computer simulation is used to verify the accuracy of the results. Finally, we discuss the issue of when a coded system outperforms an uncoded system in light of the new results.

14.
Computation of the undetected error probability for error-detecting codes over the Z-channel is an important issue, explored only in part in previous literature. In this paper, Varshamov–Tenengol'ts (VT) codes are considered. First, an exact formula for the probability of undetected errors is given. It can be explicitly computed for small code lengths (up to approximately 25). Next, some lower bounds that can be explicitly computed up to almost twice this length are studied. A comparison to Hamming codes is given. It is further shown that heuristic arguments give a very good approximation that can easily be computed even for large lengths. Finally, Monte Carlo methods are used to estimate performance for long code lengths.
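For reference, VT_a(n) is the set of binary words x_1…x_n with Σ i·x_i ≡ a (mod n+1). A brute-force enumeration for small lengths (the exact formula and bounds of the paper are not reproduced here):

```python
from itertools import product

def vt_codewords(n, a=0):
    """Enumerate the Varshamov-Tenengol'ts code VT_a(n): binary words
    x_1..x_n with sum(i * x_i) == a (mod n+1).  Exponential in n, so
    this is only for small toy lengths."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

code = vt_codewords(5)
print(len(code))   # number of codewords in VT_0(5)
for x in code:
    print(x)
```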

15.
We define a distance measure for block codes used over memoryless channels and show that it is related to upper and lower bounds on the low-rate error probability in the same way as Hamming distance is for binary block codes used over the binary symmetric channel. We then prove general Gilbert bounds for block codes using this distance measure. Some new relationships between coding theory and rate-distortion theory are presented.

16.
Research on a Turbo Code Encoder Based on Short-Frame Interleaving
Li Jianping, Wang Hongyuan. Acta Electronica Sinica (《电子学报》), 2003, 31(3): 444-447
This paper analyzes the structure of the Turbo code encoder and its output codewords. Guided by the principles of maximizing the minimum Hamming weight of the output codewords and minimizing the probability that minimum-weight codewords occur, and combining the advantages of structured and random interleavers, a pseudo-random structured inverse-block interleaver is proposed, and a Turbo encoder scheme employing two such pseudo-random inverse-block interleavers is further presented. The scheme effectively increases the minimum Hamming distance of the Turbo code output while avoiding edge effects and retaining pseudo-randomness, and thus improves the error-correction performance of the system. Simulation results show that the scheme achieves the best overall performance under short-frame transmission; in particular, at high signal-to-noise ratios it offers a clear reliability advantage over Turbo code schemes using other interleavers.

17.
In this paper we propose memory protection architectures based on nonlinear single-error-correcting, double-error-detecting (SEC-DED) codes. Linear SEC-DED codes widely used for the design of reliable memories cannot detect, and can miscorrect, many errors with large Hamming weights. This may be a serious disadvantage for many modern technologies in which error distributions are hard to estimate and multi-bit errors are highly probable. The proposed protection architectures have fewer undetectable errors, and fewer errors that are miscorrected by all codewords, than architectures based on linear codes of the same dimension, at the cost of a small increase in latency, area overhead, and power consumption. The nonlinear SEC-DED codes are generalized from existing perfect nonlinear codes (Vasil'ev codes, Probl Kibern 8:375–378, 1962; Phelps codes, SIAM J Algebr Discrete Methods 4:398–403, 1983; and codes based on one-switching constructions, Etzion and Vardy, IEEE Trans Inf Theory 40:754–763, 1994). We present the error correcting algorithms, investigate and compare the error detection and correction capabilities of the proposed nonlinear SEC-DED codes to linear extended Hamming codes, and show that replacing linear extended Hamming codes by the proposed nonlinear SEC-DED codes results in a drastic improvement in the reliability of memory systems in the case of repeating errors or a high multi-bit error rate. The proposed approach can be applied to RAM, ROM, FLASH, and disk memories.
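The linear baseline, extended Hamming SEC-DED decoding, can be sketched compactly: an overall parity bit makes every codeword even-weight, and the decoder separates single errors (syndrome nonzero, overall parity violated) from double errors (syndrome nonzero, overall parity satisfied). A minimal illustration with the (8, 4) extended Hamming code (the paper's nonlinear constructions are not reproduced):

```python
# Extended (8, 4) Hamming SEC-DED: 7 Hamming bits plus an overall
# parity bit chosen so every codeword has even weight.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def classify(word):
    """word = 7 Hamming bits followed by the overall parity bit."""
    s = tuple(sum(h * x for h, x in zip(row, word[:7])) % 2 for row in H)
    parity_ok = sum(word) % 2 == 0
    if s == (0, 0, 0):
        return "no error" if parity_ok else "error in parity bit"
    return "double error (detect only)" if parity_ok else "single error (correctable)"

cw = (1, 1, 1, 0, 0, 0, 0, 1)              # a valid even-weight codeword
print(classify(cw))                         # no error
print(classify((1, 1, 1, 0, 1, 0, 0, 1)))   # single error (correctable)
print(classify((1, 1, 1, 0, 1, 1, 0, 1)))   # double error (detect only)
```

Triple and heavier error patterns alias onto these four cases, which is exactly the miscorrection weakness for high-weight errors that the nonlinear codes above are designed to mitigate.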

18.
Several authors have addressed the problem of designing good linear unequal error protection (LUEP) codes. However, very little is known about good nonbinary LUEP codes. The authors present a class of optimal nonbinary LUEP codes for two different sets of messages. By combining t-error-correcting Reed-Solomon (RS) codes and shortened nonbinary Hamming codes, they obtain nonbinary LUEP codes that protect one set of messages against any t or fewer symbol errors and the remaining set of messages against any single symbol error. For t⩾2, they show that these codes are optimal in the sense of achieving the Hamming lower bound on the number of redundant symbols of a nonbinary LUEP code with the same parameters.

19.
Block cyclic redundancy check (CRC) codes are typically used to perform error detection in automatic repeat request (ARQ) protocols for data communications. Although efficient, CRCs can detect errors only after an entire block of data has been received and processed. We propose a new "continuous" error detection scheme using arithmetic coding that provides a novel tradeoff between the amount of added redundancy and the time needed to detect an error once it occurs. This method of error detection, first introduced by Bell, Witten, and Cleary (1990), is achieved through the use of an arithmetic codec, and has the attractive feature that it can be combined physically with arithmetic source coding, which is widely used in state-of-the-art image coders. We analytically optimize the tradeoff between added redundancy and error-detection time, achieving significant gains in bit rate throughput over conventional ARQ schemes for binary symmetric channel models for all probabilities of error.
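The block-CRC baseline that this scheme improves on can be sketched in a few lines; note that verification can only run once the whole block (message plus appended check byte) has arrived. A minimal bitwise CRC-8 (generator x^8 + x^2 + x + 1; the parameter choices here are illustrative, not from the paper):

```python
def crc8(data, poly=0x07, init=0x00):
    """Bitwise (MSB-first) CRC-8 over a byte string.  The whole block
    must be received before the check can run -- the 'detect only after
    a full block' property that continuous detection avoids."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

msg = b"hello, channel"
tag = crc8(msg)
# With init=0 and no output XOR, a block with its CRC appended leaves a
# zero remainder; any burst of up to 8 bit errors is guaranteed detected.
print(crc8(msg + bytes([tag])))                          # 0
print(crc8(b"hellp, channel" + bytes([tag])) != 0)       # True
```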

20.
The generalized Hamming weights of a linear code are fundamental code parameters related to the minimal overlap structures of its subcodes. They were introduced by V.K. Wei (1991) and shown to characterize the performance of the linear code in certain cryptographic applications. Results are presented on the generalized Hamming weights of several classes of binary cyclic codes, including primitive double-error-correcting and triple-error-correcting BCH codes, certain reversible cyclic codes, and some extended binary Goppa codes. In particular, the second generalized Hamming weight of primitive double-error-correcting BCH codes is determined, and upper and lower bounds are obtained for the generalized Hamming weights of the codes studied. These bounds are compared to results from other methods.
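For small codes the generalized Hamming weights can be found by brute force: d_r is the minimum support size over r-dimensional subcodes, and over GF(2) the support of span{c1, c2} is simply supp(c1) ∪ supp(c2). A sketch computing d_1 and d_2 of the (7, 4) Hamming code (a toy stand-in, not one of the cyclic codes studied in the paper):

```python
from itertools import product

# Parity-check matrix of the (7, 4) Hamming code.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def codewords(H, n=7):
    """All length-n words with zero syndrome (brute force)."""
    return [w for w in product((0, 1), repeat=n)
            if all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)]

def weight_hierarchy_2(cws):
    """d_1 and d_2 by exhaustive search.  Distinct nonzero binary vectors
    are automatically independent over GF(2), so every unordered pair of
    distinct nonzero codewords spans a 2-dimensional subcode."""
    nz = [c for c in cws if any(c)]
    d1 = min(sum(c) for c in nz)
    d2 = min(sum(a | b for a, b in zip(c1, c2))
             for i, c1 in enumerate(nz) for c2 in nz[i + 1:])
    return d1, d2

print(weight_hierarchy_2(codewords(H)))  # (3, 5)
```

The exhaustive search is exponential in both length and dimension, which is why closed-form results and bounds like those in the paper matter for the BCH and Goppa codes it studies.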

