Similar Documents (20 results found)
1.
Memory Reliability Improvement Based on Maximized Error-Correcting Codes
Error-correcting codes (ECC) offer an efficient way to improve the reliability and yield of memory subsystems. ECC-based protection is usually provided on a memory-word basis, such that the number of data bits in a codeword corresponds to the amount of information that can be transferred during a single memory access operation. Consequently, the codeword length is not the maximum allowed by a certain check-bit number, since the number of data bits is constrained by the width of the memory data interface. This work investigates the additional error correction opportunities offered by the absence of a perfect match between the numbers of data bits and check bits in some widespread ECCs. Methods are proposed for the selection of multi-bit errors that can be additionally corrected with minimal impact on ECC decoder latency. These methods were applied to single-bit error correction (SEC) codes and double-bit error correction (DEC) codes. Reliability improvements are evaluated for memories in which all errors affecting the same number of bits in a codeword are independent and identically distributed. It is shown that applying the proposed methods to conventional DEC codes can improve the mean time to failure (MTTF) of memories by up to 30%. Maximized versions of the DEC codes are also proposed in which all adjacent triple-bit errors become correctable without affecting the maximum number of triple-bit errors that can be made correctable.
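As a rough, back-of-the-envelope illustration of the mismatch this abstract exploits, the sketch below counts the unused syndromes of an assumed (39, 32) shortened SEC code; the parameters and the counting are illustrative only, not the paper's codes or its error-selection method.

```python
# Syndrome budget of an assumed (39, 32) shortened SEC code (illustrative only,
# not the paper's construction or its multi-bit error selection algorithm).

def syndrome_budget(data_bits: int, check_bits: int) -> dict:
    n = data_bits + check_bits          # codeword length
    total = 2 ** check_bits             # number of distinct syndrome values
    used = 1 + n                        # all-zero syndrome + one per single-bit error
    return {"codeword_length": n, "syndromes": total,
            "used_for_SEC": used, "spare": total - used}

print(syndrome_budget(32, 7))
# -> 128 syndromes, 40 used, 88 spare. The spare syndromes are the slack that
# can be assigned to selected multi-bit error patterns without adding check
# bits, which is the opportunity the abstract describes.
```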

2.
In this paper we propose memory protection architectures based on nonlinear single-error-correcting, double-error-detecting (SEC-DED) codes. Linear SEC-DED codes widely used for the design of reliable memories cannot detect, and may miscorrect, many errors of large Hamming weight. This can be a serious disadvantage for many modern technologies, where error distributions are hard to estimate and multi-bit errors are highly probable. The proposed protection architectures have fewer undetectable errors and fewer errors that are miscorrected by all codewords than architectures based on linear codes of the same dimension, at the cost of a small increase in latency penalty, area overhead and power consumption. The nonlinear SEC-DED codes are generalized from existing perfect nonlinear codes (Vasil’ev codes, Probl Kibern 8:375–378, 1962; Phelps codes, SIAM J Algebr Discrete Methods 4:398–403, 1983; and the codes based on one-switching constructions, Etzion and Vardy, IEEE Trans Inf Theory 40:754–763, 1994). We present the error-correcting algorithms, investigate and compare the error detection and correction capabilities of the proposed nonlinear SEC-DED codes to linear extended Hamming codes, and show that replacing linear extended Hamming codes with the proposed nonlinear SEC-DED codes results in a drastic improvement in the reliability of memory systems in the case of repeating errors or a high multi-bit error rate. The proposed approach can be applied to RAM, ROM, FLASH and disk memories.
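As a small, self-contained illustration of the miscorrection problem described above, the sketch below uses the standard (8, 4) extended Hamming code as a stand-in for linear SEC-DED codes; it is not the paper's nonlinear construction.

```python
# Why linear SEC-DED codes miscorrect many multi-bit errors, shown on the
# (8, 4) extended Hamming code (a stand-in, not the paper's nonlinear codes).
from itertools import combinations

# Parity-check matrix: Hamming(7,4) columns = binary of 1..7, plus an
# overall-parity row and an extra parity-bit column.
H = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]
n = 8

def syndrome(err):
    return tuple(sum(H[r][i] * err[i] for i in range(n)) % 2 for r in range(4))

# Syndromes the SEC-DED decoder interprets as correctable single-bit errors.
single_syndromes = {syndrome(tuple(1 if i == j else 0 for i in range(n))): j
                    for j in range(n)}

miscorrected = detected = 0
for bits in combinations(range(n), 3):          # every triple-bit error pattern
    err = tuple(1 if i in bits else 0 for i in range(n))
    if syndrome(err) in single_syndromes:       # decoder 'fixes' the wrong bit
        miscorrected += 1
    else:
        detected += 1

print(f"triple-bit patterns: {miscorrected} miscorrected, {detected} detected")
# Every weight-3 pattern has odd overall parity, so the decoder always assumes
# a single error: all 56 triples are miscorrected, never just detected.
```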

3.
Linear block codes are studied for improving the reliability of message storage in computer memory with stuck-at defects and noise. The case when side information about the state of the defects is available to the decoder or to the encoder is considered. In the former case, stuck-at cells act as erasures, so that techniques for decoding linear block codes for erasures and errors can be directly applied. We concentrate on the complementary problem of incorporating stuck-at information in the encoding of linear block codes. An algebraic model for stuck-at defects and additive errors is presented. The notion of a "partitioned" linear block code is introduced to mask defects known at the encoder and to correct random errors at the decoder. The defect- and error-correction capability of partitioned linear block codes is characterized in terms of minimum distances. A class of partitioned cyclic codes is introduced. A BCH-type bound for these cyclic codes is derived and employed to construct partitioned linear block codes with specified bounds on the minimum distances. Finally, a probabilistic model for the generation of stuck-at cells is presented. It is shown that partitioned linear block codes achieve the Shannon capacity of a computer memory with symmetric defects and errors.
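The simplest special case of masking a defect known only to the encoder can be sketched in a few lines; this toy example (one stuck-at data cell, one flag bit, and the assumption that the flag cell itself is healthy) is only meant to convey the masking idea and is not the partitioned linear block codes constructed in the paper.

```python
# Toy defect-masking sketch (not the paper's partitioned codes): with one extra
# flag bit, the encoder can match any single stuck-at data cell by storing
# either the message or its complement. Assumes the flag cell is not defective.

def encode(msg, stuck_pos, stuck_val):
    """msg: list of bits; the data cell `stuck_pos` is stuck at `stuck_val`."""
    flip = int(msg[stuck_pos] != stuck_val)      # complement iff the message disagrees
    return [b ^ flip for b in msg] + [flip]      # the stuck cell now agrees by construction

def decode(word):
    flag = word[-1]
    return [b ^ flag for b in word[:-1]]

msg = [1, 0, 1, 1]
stored = encode(msg, stuck_pos=1, stuck_val=1)   # cell 1 of this word is stuck at 1
assert stored[1] == 1 and decode(stored) == msg
print(stored, "->", decode(stored))
```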

4.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the phenomenon of more than one error in the memory cells has become more serious in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It is based on the fact that soft errors in the encoder have a similar effect to soft errors in a memory word, and is achieved by using logic-sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data can be ensured because those errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented and the results show that it requires less circuit area and power than encoders protected by existing methods.
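The fault-containment argument can be sketched as follows; the parity equations and shared sub-expressions below are invented for the example (not the paper's SEC-DAEC code), but they show why sharing logic only between adjacent parity bits bounds the damage of one encoder soft error.

```python
# Sketch of the encoder fault-containment idea (illustrative parity equations,
# not the paper's code): a logic term shared only by two *adjacent* parity bits
# means one soft error in that term corrupts at most two adjacent parity bits,
# an error pattern the SEC-DAEC code can correct on readback.

def parity_bits(d, fault_in_shared01=False):
    shared01 = d[0] ^ d[1]          # sub-expression reused by p0 and p1 only
    shared23 = d[2] ^ d[3]          # sub-expression reused by p2 and p3 only
    if fault_in_shared01:           # model a soft error inside the shared gate
        shared01 ^= 1
    p0 = shared01 ^ d[2]
    p1 = shared01 ^ d[3]
    p2 = shared23 ^ d[0]
    p3 = shared23 ^ d[1]
    return [p0, p1, p2, p3]

data = [1, 0, 1, 1]
good = parity_bits(data)
faulty = parity_bits(data, fault_in_shared01=True)
diff = [i for i in range(4) if good[i] != faulty[i]]
print("parity bits corrupted by one encoder soft error:", diff)   # -> [0, 1] (adjacent)
```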

5.
刘小汇, 张鑫, 陈华明. 信号处理 (Journal of Signal Processing), 2012, 28(7): 1014-1020
As technology advances and core voltages decrease, memories have become more susceptible to transient errors (soft errors), which are now a major cause of reliability problems in spaceborne devices. Error detection and correction (EDAC) codes, also called error-correcting codes, are commonly used to correct transient errors in SRAM-type memories, but multiple-bit upsets (SEMU) caused by a single high-energy particle cannot be handled by ordinary single-error-correcting, double-error-detecting (SEC-DED) codes. This paper proposes a (26,16) interleaved code with an interleaving degree of 2, composed of two (13,8) systematic codes each capable of correcting a single random error or a two-bit burst error; the (26,16) interleaved code can correct up to two random errors and bursts of up to four bits within a single codeword (DEC-QAEC). Theoretical analysis and hardware-platform experiments show that, with comparable storage overhead and real-time performance, the interleaved code is more reliable than a SEC-DED code of the same length and effectively improves the resilience of SRAM-type memories against multiple-bit upsets.
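The effect of the degree-2 interleaving described above can be sketched without reproducing the (13,8) component code itself; the snippet below only shows how a 4-bit burst in the stored word is split into a 2-bit adjacent burst per component codeword, with all-zero stand-ins for the component codewords.

```python
# Degree-2 interleaving sketch: a 4-bit burst in the stored word becomes a
# 2-bit adjacent burst in each component codeword. The (13, 8) component code
# itself is not reproduced here; all-zero words are used as stand-ins so the
# error positions are directly visible.

def interleave(a, b):
    """Bit-interleave two equal-length codewords: a0 b0 a1 b1 ..."""
    out = []
    for x, y in zip(a, b):
        out += [x, y]
    return out

def deinterleave(w):
    return w[0::2], w[1::2]

cw_a, cw_b = [0] * 13, [0] * 13
stored = interleave(cw_a, cw_b)          # 26-bit memory word

for i in range(5, 9):                    # a 4-bit burst upset (SEMU)
    stored[i] ^= 1

ra, rb = deinterleave(stored)
print("errors in codeword A:", [i for i, v in enumerate(ra) if v])   # -> [3, 4]
print("errors in codeword B:", [i for i, v in enumerate(rb) if v])   # -> [2, 3]
# Each (13, 8) component code only has to correct a 2-bit adjacent burst,
# which is within its stated correction capability.
```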

6.
As CMOS technology scales down, multiple cell upsets (MCUs) caused by a single radiation particle have become one of the most challenging reliability issues for memories used in space applications. Error correction codes (ECCs) are commonly used to protect memories against errors. Single Error Correction-Double Error Detection (SEC-DED) codes are the simplest and most typical ones, but they can only correct single errors. The advanced ECCs that can provide enough protection for memories cost more overhead due to their complex decoders. Orthogonal Latin square (OLS) codes are one type of one-step majority logic decodable (OS-MLD) codes that can be decoded with low complexity and delay. However, there are no OLS codes that directly fit 32-bit data, which is a typical data size in memories. In this paper, (55, 32) and (68, 32) codes derived from (45, 25) and (55, 25) OLS codes are proposed in order to improve the protection that OLS codes offer for 32-bit data. The proposed codes maintain the correction capability of OLS codes and can be decoded with low delay and complexity. The evaluation of the implementations of these codes is presented and compared with that of the shortened (60, 32) and (76, 32) OLS codes. The results show that the area and power of a radiation-hardened SRAM immune to 2-bit MCUs and protected by the proposed codes are reduced by 7.76% and 6.34%, respectively. In the case of 3-bit MCU immunity, the area and power of the whole circuit are reduced by 8.82% and 4.56% when the proposed codes are used.
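One-step majority-logic decoding, the property that keeps OLS decoders simple, can be illustrated with the degenerate 2x2 row/column case below; this toy single-error-correcting example is not the proposed (55, 32) or (68, 32) construction.

```python
# Toy one-step majority-logic decoding (OS-MLD) example in the spirit of OLS
# codes: 4 data bits in a 2x2 array, each covered by two orthogonal checks
# (its row parity and its column parity). A data bit is flipped when all of
# its checks fail. This is the degenerate single-error-correcting case, not
# the proposed (55, 32) / (68, 32) codes.

DATA = 4
check_sets = [{0, 1}, {2, 3}, {0, 2}, {1, 3}]    # row and column parity groups

def encode(d):
    return d + [sum(d[i] for i in s) % 2 for s in check_sets]

def decode(word):
    d, p = word[:DATA], word[DATA:]
    fails = [(sum(d[i] for i in s) + p[j]) % 2 for j, s in enumerate(check_sets)]
    for i in range(DATA):
        votes = [fails[j] for j, s in enumerate(check_sets) if i in s]
        if all(votes):                           # both orthogonal checks fail -> flip
            d[i] ^= 1
    return d

msg = [1, 0, 1, 1]
word = encode(msg)
word[2] ^= 1                                     # single cell upset in a data bit
assert decode(word) == msg
print("decoded:", decode(word))
```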

7.
Three new techniques are proposed for constructing a class of codes that extends the protection provided by previous single error correcting (SEC)-double error detecting (DED)-single byte error detecting (SBD) codes. The proposed codes are systematic odd-weight-column SEC-DED-SBD codes that also provide correction of any odd number of erroneous bits per byte, where a byte represents a cluster of b bits of the codeword that are fed by the same memory chip or card. These codes are useful in practical applications for enhancing the reliability and data integrity of byte-organized computer memory systems against transient, intermittent, and permanent failures. In particular, they represent a good tradeoff between the overhead in terms of additional check bits and the reliability improvement, owing to their capability to correct at least 50% of the multiple errors per byte.

8.
A modified error-correcting code that can correct up to two soft errors in each row (word line) of a dynamic random-access memory (DRAM) chip is proposed. Double-bit soft errors frequently occur in DRAM cells with trench capacitors when charged alpha particles impinge on the intervening space between two vertical capacitors, causing plasma shorts between them. Conventional on-chip error-correcting codes (ECCs) cannot correct such double-bit word-line soft errors, which significantly increase the uncorrectable error rate (UER). An ECC circuit that uses an augmented rectangular product code to detect and correct double-bit soft errors is presented. The proposed circuit automatically corrects the addressed bit if it is faulty and then quickly locates the other faulty bit. A comprehensive study is made to estimate the improvements in soft error rate (SER) and mean time to failure (MTTF). The ability of the circuit to correct soft errors in the presence of multiple-bit errors has also been analyzed by combinatorial enumeration.
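The basic rectangular product-code idea that the paper augments can be sketched as follows: row and column parities locate a single flipped bit at their intersection. The augmentation for double-bit word-line errors (built around the addressed-bit check) is not reproduced here.

```python
# Toy rectangular parity (product code) sketch: row and column parities locate
# a single flipped bit at their intersection. The paper augments this basic
# scheme so that double-bit word-line soft errors can also be handled; that
# augmentation is not reproduced here.

def parities(bits):
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(c) % 2 for c in zip(*bits)]
    return rows, cols

data = [[1, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 0]]
row_p, col_p = parities(data)            # check bits stored alongside the cell array

data[1][2] ^= 1                          # inject a single-bit soft error

rows_now, cols_now = parities(data)
bad_row = [a ^ b for a, b in zip(row_p, rows_now)].index(1)
bad_col = [a ^ b for a, b in zip(col_p, cols_now)].index(1)
data[bad_row][bad_col] ^= 1              # correct at the intersection
print("corrected bit at", (bad_row, bad_col))    # -> (1, 2)
```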

9.
Single Error Correction Double Error Detection (SEC-DED) codes are widely used to protect memories from soft errors due to their simple implementation. Their limitation, however, is that double-bit errors can only be detected, not corrected. In this paper, a method to recover some of the bits when a double error occurs is presented. This can be of interest for applications in which some bits store important information, for example control flags or the more significant bits of a value. For those, the proposed technique can in some cases determine whether those bits have been affected by the double error and, when they have not, safely recover the correct values. However, the percentage of cases in which the bits can be recovered is small. The proposed scheme is therefore extended to increase this percentage by duplicating or triplicating the critical bits inside the word when there are spare bits. No modification to the decoding circuitry is needed, as the scheme can be implemented in the exception handler that is executed when a double error occurs. This facilitates the use of the proposed scheme in existing designs. Another option is to implement part of the scheme in hardware, which can be done at low cost.

10.
Discrete Fourier transform (DFT) codes have been used as joint source-channel codes in order to provide robustness against channel errors and erasures. This paper presents a frame-theoretic analysis of lowpass DFT codes with erasures. First, it is shown that such codes constitute a special class of frames. Then, the message reconstructions obtained by frame-theoretic signal-space projection and by coding-theoretic syndrome decoding of erasures are analyzed, and their equivalence is proved. The error performance of lowpass DFT codes is analyzed using frame theory. The analysis helps in classifying the subframes of a DFT frame into several classes. This classification leads to the development of packetization criteria in order to guarantee minimum mean-square reconstruction error and thus to minimize the effect of the quantization error. Simulation results substantiate the presented theory and the proposed packetization schemes.
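A small numerical sketch of the frame view of lowpass DFT codes may help; the (n, k) = (8, 4) parameters and the least-squares recovery below are illustrative assumptions, not the paper's analysis or its packetization criteria.

```python
# Erasure recovery with a lowpass DFT code viewed as a frame (illustrative
# parameters and a plain least-squares solve; not the paper's algorithms).
import numpy as np

n, k = 8, 4
t = np.arange(n).reshape(-1, 1)
f = np.arange(k).reshape(1, -1)
F = np.exp(2j * np.pi * t * f / n) / np.sqrt(n)   # n x k lowpass IDFT synthesis matrix

rng = np.random.default_rng(0)
x = rng.standard_normal(k) + 1j * rng.standard_normal(k)   # message (low DFT bins)
y = F @ x                                                  # codeword (time-domain samples)

erased = [1, 5, 6]                                         # up to n-k erasures are recoverable
kept = [i for i in range(n) if i not in erased]
x_hat, *_ = np.linalg.lstsq(F[kept, :], y[kept], rcond=None)

print("max reconstruction error:", np.max(np.abs(x_hat - x)))
# Any k surviving samples suffice: every choice of k rows of F forms a
# Vandermonde matrix with distinct nodes, hence an invertible system -- the
# frame-theoretic reading of the code's erasure-correcting capability.
```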

11.
In this paper, the results of comparing the performance of various forward error correction techniques and several modulation formats when used over a nonselective Rayleigh fading channel in the presence of a pulse-burst jammer are presented. Both binary and nonbinary codes are considered, as well as concatenated codes consisting of either a block or a convolutional inner code and a Reed-Solomon outer code. Finally, the use of side information to allow the decoding of both erasures and errors is also analyzed.

12.
Reliability of scrubbing recovery-techniques for memory systems
The authors analyze the problem of transient-error recovery in fault-tolerant memory systems using a scrubbing technique. This technique is based on single-error-correction and double-error-detection (SEC-DED) codes. When a single error is detected in a memory word, the error is corrected and the word is rewritten in its original location. Two models are discussed: (1) exponentially distributed scrubbing, where a memory word is assumed to be checked in an exponentially distributed time period, and (2) deterministic scrubbing, where a memory word is checked periodically. Reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results of the scrubbing techniques are compared with those of memory systems without redundancy and with only SEC-DED codes. A major contribution of the analysis is a set of easy-to-use expressions for the MTTF of memories. The authors derive the reliability functions and mean time to failure of four different memory systems subject to transient errors with exponentially distributed arrival times.
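The deterministic-scrubbing model described above lends itself to a quick Monte Carlo illustration; the error rate, scrub period and failure criterion below are assumed for the example, and the paper's closed-form reliability and MTTF expressions are not reproduced.

```python
# Illustrative Monte Carlo sketch of the deterministic-scrubbing model: a
# SEC-DED-protected word fails when a second transient error arrives before a
# periodic scrub has cleaned up the first one. The rates and scrub period are
# arbitrary assumptions, not the paper's parameters or closed-form results.
import random

def time_to_word_failure(error_rate, scrub_period, rng):
    t, pending = 0.0, 0
    next_scrub = scrub_period if scrub_period else float("inf")
    while True:
        t += rng.expovariate(error_rate)     # arrival of the next transient error
        while next_scrub < t:                # periodic scrub rewrites corrected words
            pending = 0
            next_scrub += scrub_period
        pending += 1
        if pending == 2:                     # second error before a scrub: uncorrectable
            return t

def mttf(error_rate, scrub_period=None, trials=5000):
    rng = random.Random(1)
    return sum(time_to_word_failure(error_rate, scrub_period, rng)
               for _ in range(trials)) / trials

rate = 1e-2                                  # transient errors per hour per word (assumed)
print("MTTF, no scrubbing   : %.0f hours" % mttf(rate))
print("MTTF, daily scrubbing: %.0f hours" % mttf(rate, scrub_period=24.0))
```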

13.
14.
On-chip error correction for random-access memories is not very popular because of the high overhead necessary. This paper presents a technique that performs a single-bit correction and a double-bit detection on clocked memories where all column data is internally available, with an area penalty of less than 20 percent. The timing overhead for on-chip implementation is less than the time required to generate a parity bit. The detection and correction operation is transparent to the user and does not require different cycle times for the detection and the correction.

15.
王婷, 陈为刚. 信号处理 (Journal of Signal Processing), 2020, 36(5): 655-665
Considering the symbol-level characteristics of non-binary LDPC codes, together with an analysis of their residual errors and erasures, this paper constructs a non-binary product code that uses a non-binary LDPC code as the inner code and a high-rate RS code over the same Galois field as the outer code, and proposes a low-complexity iterative decoding scheme to reduce the various types of transmission errors. During decoding, only the codewords that failed to decode in the previous iteration are decoded again, and the initial bit probabilities associated with correctly decoded codewords are updated, which improves the accuracy of the symbol prior information for the next iteration of non-binary LDPC decoding, reduces decision errors after inner decoding, and thus fully exploits the error-correcting capability of the outer code. Simulation results show that the non-binary product code provides a considerable coding gain over a binary LDPC product code and that the iterations further improve performance, efficiently correcting random errors and burst erasures on the channel. For a Gaussian channel with 2% burst erasures, one iteration yields a gain of about 0.4 dB at a bit error rate of 10⁻⁶.

16.
As technology scales down, shrinking geometry and layout dimensions, on-chip interconnects are exposed to different noise sources such as crosstalk coupling, supply voltage fluctuation and temperature variation, which cause random and burst errors. These errors affect the reliability of the on-chip interconnects. Hence, error correction codes integrated with noise reduction techniques are incorporated to make the on-chip interconnects robust against errors. The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), and a parity bit is added to it to enhance the error correction capability. The proposed error correction code corrects all one-bit and two-bit error patterns. The proposed code also corrects 7 out of the 10 possible three-bit error patterns and detects burst errors of length three. A Hybrid Automatic Repeat Request (HARQ) system is employed when a burst error of length three occurs. The performance of the proposed codec is evaluated in terms of residual flit error rate, codec area, power, delay, average flit latency and link energy consumption. The proposed codec achieves a residual flit error rate that is four orders of magnitude lower and a link energy reduction of over 53% compared to other existing error correction schemes. Besides the low residual flit error rate and link energy reduction, the proposed codec also achieves up to 4.2% less area and up to 6% less codec power consumption compared to other error correction codes. The smaller codec area, lower codec power consumption, lower link energy and low residual flit error rate make the proposed code appropriate for on-chip interconnection links.

17.
Linear Network Error Correction Codes in Packet Networks
In this paper, we study basic properties of linear network error correction codes, their construction and their error correction capability for various kinds of errors. Our discussion is restricted to the single-source multicast case. We define the minimum distance of a network error correction code, which plays the same role as it does in classical coding theory. We construct codes that can correct errors up to the full error correction capability specified by the Singleton bound for network error correction codes recently established by Cai and Yeung. We propose a decoding principle for network error correction codes, based on which we introduce two decoding algorithms and analyze their performance. We formulate the global kernel error correction problem and characterize the error correction capability of codes for this kind of error.

18.
A class of array codes correcting multiple column erasures
A family of binary array codes of size (p-1)×n, with p a prime, correcting multiple column erasures is proposed. The codes coincide with a subclass of shortened Reed-Solomon codes and achieve the maximum possible correcting capability. The complexity of encoding and decoding is proportional to rnp, where r is the number of correctable erasures; i.e., it is simpler than the Forney decoding algorithm. The length n of the codes is at most 2p-1, that is, twice the length of the Blaum-Roth codes, which have comparable decoding complexity.

19.
Some important aspects of a series of concatenated codes subjected to matrix type-B codes are investigated. Concatenated matrix codes, and concatenated quadratic residue codes in particular, are emphasized. An analysis of the error patterns that can be corrected with the matrix coding is also given. These codes are suitable for compound channels with memory (i.e., channels on which burst, cluster, and random errors occur). Explicit formulas are given for the number of burst, cluster, and random errors that can be corrected with these codes. Decoding schemes and techniques for studying error propagation in the proposed codes are given. In particular, a new decoding algorithm for a concatenated matrix code is given. The performance of the coding and decoding schemes of the various types of concatenated codes can be tested in practice.

20.
To prevent soft errors from causing data corruption, memories are commonly protected with Error Correction Codes (ECCs). To minimize the impact of the ECC on memory complexity, simple codes are commonly used. For example, Single Error Correction (SEC) codes, such as Hamming codes, are widely used. Power consumption can be reduced by first checking whether the word has errors and then performing the rest of the decoding only when there are errors. This greatly reduces the average power consumption, as most words will have no errors. In this paper, an efficient error detection scheme for Double Error Correction (DEC) Bose–Chaudhuri–Hocquenghem (BCH) codes is presented. The scheme reduces the dynamic power consumption so that it is the same as that for error detection in a SEC Hamming code.
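The detect-first power-saving principle is easy to sketch; the example below gates a plain (7, 4) Hamming decoder rather than the DEC BCH decoder the paper targets, and the error rate is an arbitrary assumption.

```python
# Sketch of the detect-first principle: always run the cheap syndrome check,
# and run the full correction logic only when the syndrome is non-zero. A
# (7, 4) Hamming code is used for illustration; the paper applies the same
# gating idea to double-error-correcting BCH decoders.
import random

# Hamming(7,4) parity-check matrix; column i is the binary representation of i+1.
H = [[(i + 1) >> r & 1 for i in range(7)] for r in range(3)]

def syndrome(word):
    return [sum(H[r][i] & word[i] for i in range(7)) % 2 for r in range(3)]

def decode(word):
    s = syndrome(word)                       # error-detection stage (always runs)
    if not any(s):                           # no error: skip the expensive stage
        return word, False
    pos = s[0] + 2 * s[1] + 4 * s[2] - 1     # error-correction stage (rarely runs)
    word = word[:]
    word[pos] ^= 1
    return word, True

rng = random.Random(0)
corrections = 0
for _ in range(10000):
    word = [0] * 7                           # all-zero codeword for simplicity
    if rng.random() < 0.01:                  # assumed low error rate per word read
        word[rng.randrange(7)] ^= 1
    _, ran_correction = decode(word)
    corrections += ran_correction
print("full correction stage ran for", corrections, "of 10000 reads")
```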

