Similar Documents
Found 20 similar documents.
1.
刘小汇  张鑫  陈华明 《信号处理》2012,28(7):1014-1020
As feature sizes shrink and core voltages drop, memories become increasingly susceptible to transient errors (soft errors), which have become a leading reliability concern for spaceborne devices. Error detection and correction (EDAC) codes, also known as error-correcting codes, are commonly used to correct transient errors in SRAM-type memories, but the multiple-bit upsets (SEMU) caused by a single high-energy particle are beyond what an ordinary single-error-correcting, double-error-detecting (SEC-DED) code can handle. This paper proposes a (26,16) interleaved code with interleaving degree 2, built from two (13,8) systematic codes, each able to correct a single random error or a two-bit burst error; the resulting (26,16) code can correct up to two random errors or a burst error of up to four bits within a single codeword (double error correction-quadruple adjacent error correction, DEC-QAEC). Theoretical analysis and hardware-platform experiments show that, at comparable storage overhead and latency, the interleaved code is more reliable than a SEC-DED code of the same length and effectively strengthens SRAM-type memories against multiple-bit upsets.
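As a minimal Python sketch of the degree-2 interleaving idea (the (13,8) component encoder and decoder are omitted, and the burst position is illustrative), the following shows how a burst of up to four adjacent flips in the stored 26-bit word lands as at most a two-bit burst in each component codeword, which each component code can then correct:

    # Degree-2 bit interleaving: codewords a and b are stored as
    # a0,b0,a1,b1,... so adjacent physical upsets spread over both words.

    def interleave(cw_a, cw_b):
        """Merge two 13-bit codewords into one 26-bit stored word."""
        out = []
        for a, b in zip(cw_a, cw_b):
            out += [a, b]
        return out

    def deinterleave(word):
        """Recover the two component codewords from the stored word."""
        return word[0::2], word[1::2]

    # A 4-bit burst in the stored word becomes a 2-bit burst per component.
    stored = interleave([0] * 13, [0] * 13)
    for i in range(6, 10):        # flip physical bits 6..9
        stored[i] ^= 1
    a, b = deinterleave(stored)
    print(a, b)                   # each component sees flips only at
                                  # positions 3 and 4: a 2-bit burst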

2.
Two error correction schemes are proposed for word-oriented binary memories that can be affected by erasures, i.e. errors with known location but unknown value. The erasures considered here are due to the drifting of the electrical parameter used to encode information outside the normal ranges associated with a logic 0 or a logic 1 value. For example, a dielectric breakdown in a magnetic memory cell may reduce its electrical resistance considerably below the levels corresponding to logic 0 and logic 1 values stored in healthy memory cells. Such deviations can be sensed during memory read operations, and the acquired information can be used to boost the correction capability of an error-correcting code (ECC). The proposed schemes enable the correction of double-bit errors by combining erasure information with single-bit error correction and double-bit error detection (SEC-DED) codes or shortened single-bit error correction (SEC) codes. The correction of single-bit errors is always guaranteed. Ways to increase the number of double-bit and triple-bit errors that can be detected by shortened SEC and SEC-DED codes are considered in order to augment the error correction capability of the proposed solutions.
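The trial-decoding flavor of erasure-assisted correction can be sketched in a few lines of Python; the extended (8,4) Hamming code below is a textbook stand-in for the codes used in the paper, not the paper's own construction. Both values of the erased position are tried, and the trial that the SEC-DED decoder does not reject as a double error is kept:

    def secded_decode(bits):
        """Extended (8,4) Hamming decode: bits[0] is the overall parity,
        bits[1..7] are Hamming positions 1..7. Returns (status, word)."""
        s = 0
        for pos in range(1, 8):
            if bits[pos]:
                s ^= pos                       # Hamming syndrome
        p = sum(bits) % 2                      # overall parity check
        out = bits[:]
        if s == 0 and p == 0:
            return "ok", out
        if p == 1:                             # odd weight: single error
            out[s] ^= 1                        # s == 0 -> parity bit itself
            return "corrected", out
        return "double", out                   # even weight, s != 0

    def decode_with_erasure(bits, erased_pos):
        """Try both values at the known-unreliable (erased) position."""
        for guess in (0, 1):
            trial = bits[:]
            trial[erased_pos] = guess
            status, out = secded_decode(trial)
            if status != "double":             # a wrong guess leaves two
                return out                     # errors, which DED detects
        raise ValueError("uncorrectable")

    rx = [0] * 8                 # the all-zero word is a valid codeword
    rx[6] ^= 1                   # one unknown random error...
    print(decode_with_erasure(rx, erased_pos=3))   # ...plus an erasure at 3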

3.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the phenomenon of more than one error in the memory cells has become more serious in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It is based on the observation that soft errors in the encoder have a similar effect to soft errors in a memory word, and it is achieved by using logic-sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because those errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.

4.
Reliability of scrubbing recovery-techniques for memory systems
The authors analyze the problem of transient-error recovery in fault-tolerant memory systems using a scrubbing technique based on single-error-correction, double-error-detection (SEC-DED) codes. When a single error is detected in a memory word, the error is corrected and the word is rewritten in its original location. Two models are discussed: (1) exponentially distributed scrubbing, where a memory word is assumed to be checked in an exponentially distributed time period, and (2) deterministic scrubbing, where a memory word is checked periodically. Reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results of the scrubbing techniques are compared with those of memory systems without redundancy and with SEC-DED codes only. A major contribution of the analysis is easy-to-use expressions for the MTTF of memories. The authors derive reliability functions and the mean time to failure of four different memory systems subject to transient errors with exponentially distributed arrival times.
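To make the deterministic-scrubbing model concrete, here is a small Monte Carlo sketch under simplifying assumptions: a word fails only when it collects a second error within one scrub period, every scrub restores all words, and the error rate and period are illustrative values, not taken from the paper.

    import random, math

    def mttf_deterministic(words=64, lam=1e-2, period=10.0, runs=2000):
        """Estimate MTTF under periodic scrubbing: each scrub rewrites
        every word, so a word fails only if >= 2 errors arrive (Poisson)
        within one scrub period."""
        m = lam * period                            # mean errors/word/period
        p_word = 1.0 - math.exp(-m) * (1.0 + m)     # P(>= 2 errors in a word)
        p_fail = 1.0 - (1.0 - p_word) ** words      # P(any word fails/period)
        total = 0.0
        for _ in range(runs):
            intervals = 1
            while random.random() > p_fail:         # geometric waiting time
                intervals += 1
            total += intervals * period
        return total / runs

    print("estimated MTTF:", mttf_deterministic())  # ~ period / p_fail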

5.
Memory Reliability Improvement Based on Maximized Error-Correcting Codes
Error-correcting codes (ECC) offer an efficient way to improve the reliability and yield of memory subsystems. ECC-based protection is usually provided on a memory-word basis, such that the number of data bits in a codeword corresponds to the amount of information that can be transferred during a single memory access operation. Consequently, the codeword length is not the maximum allowed by a given check-bit number, since the number of data bits is constrained by the width of the memory data interface. This work investigates the additional error correction opportunities offered by the absence of a perfect match between the numbers of data bits and check bits in some widespread ECCs. A method is proposed for selecting multi-bit errors that can be additionally corrected with minimal impact on ECC decoder latency. These methods were applied to single-bit error correction (SEC) codes and double-bit error correction (DEC) codes. Reliability improvements are evaluated for memories in which all errors affecting the same number of bits in a codeword are independent and identically distributed. It is shown that applying the proposed methods to conventional DEC codes can improve the mean time to failure (MTTF) of memories by up to 30%. Maximized versions of the DEC codes are also proposed in which all adjacent triple-bit errors become correctable without affecting the maximum number of triple-bit errors that can be made correctable.

6.
Single Error Correction Double Error Detection (SEC-DED) codes are widely used to protect memories from soft errors owing to their simple implementation. Their limitation is that double-bit errors can only be detected, not corrected. In this paper, a method to recover some of the bits when a double error occurs is presented. This is of interest for applications in which some bits carry important information, for example control flags or the more significant bits of a value. For those, the proposed technique can in some cases determine whether the bits have been affected by the double error and, when they have not, safely recover the correct values. However, the percentage of cases in which the bits can be recovered is small, so the scheme is extended to increase this percentage by duplicating or triplicating the critical bits inside the word when spare bits are available. No modification to the decoding circuitry is needed, as the scheme can be implemented in the exception handler that is executed when a double error occurs; this eases its adoption in existing designs. Part of the scheme can also be implemented in hardware at low cost.
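A minimal sketch of the duplication variant, with illustrative names and bit layout (the paper's exact mapping and decision procedure are not reproduced here): once the SEC-DED decoder raises a double-error exception, the handler compares a critical bit with its spare-bit copy, and a triplicated variant falls back to a majority vote.

    def recover_critical(word, bit_pos, copy_pos):
        """Run from the double-error exception handler: trust the critical
        bit only when it agrees with its spare copy. Agreement is wrong
        only if both errors of the double error hit exactly this pair, so
        recovery remains probabilistic, as the paper notes."""
        if word[bit_pos] == word[copy_pos]:
            return word[bit_pos]
        return None                  # one copy flipped; cannot tell which

    def recover_critical_tmr(word, p0, p1, p2):
        """Triplicated variant: majority vote, wrong only when both
        errors of the double error land on two of the three copies."""
        return 1 if word[p0] + word[p1] + word[p2] >= 2 else 0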

7.
Multibit error correction codes (MECCs) are an effective approach to mitigating multiple-bit upsets (MBUs) in memories. As technology scales, combinational circuits have become more susceptible to radiation-induced single-event transients (SETs), so transient faults in encoding and decoding circuits are more frequent than before. This paper first proposes a new MECC, called the Mix code, to mitigate MBUs in fault-secure memories. Exploiting the structural characteristics of MECCs, Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes and Hamming codes are combined in the proposed Mix code to protect memories against MBUs with low redundancy overhead. A fault-secure scheme is then presented that can tolerate transient faults in both the storage cells and the encoding and decoding circuits, with remarkably lower redundancy overhead than existing fault-secure schemes. Furthermore, the proposed scheme suits ordinary accessed data widths (e.g., 2^n bits) between the system bus and memory. Finally, the proposed scheme has been implemented in Verilog and validated through a wide set of simulations. The experimental results show that the scheme effectively mitigates multiple errors in whole memory systems: it not only reduces the redundancy overhead of the storage array but also improves the performance of the MECC circuits in fault-secure memory systems.

8.
To prevent soft errors from causing data corruption, memories are commonly protected with Error Correction Codes (ECCs). To minimize the impact of the ECC on memory complexity, simple codes are commonly used; for example, Single Error Correction (SEC) codes like Hamming codes are widespread. Power consumption can be reduced by first checking whether the word has errors and performing the rest of the decoding only when it does; since most words are error-free, this greatly reduces the average power consumption. In this paper an efficient error detection scheme for Double Error Correction (DEC) Bose-Chaudhuri-Hocquenghem (BCH) codes is presented. The scheme reduces the dynamic power consumption so that it matches that of error detection in a SEC Hamming code.
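The detect-first strategy itself is simple enough to sketch in Python; the parity-check structure below is generic (a stand-in for the paper's DEC BCH code), and `full_decoder` is a hypothetical callback for the expensive correction stage:

    def read_word(stored, H, full_decoder):
        """stored: received bits; H: one list of bit positions per check.
        The expensive correction logic runs only when a check fails."""
        syndrome = [sum(stored[j] for j in row) % 2 for row in H]
        if not any(syndrome):
            return stored                      # common case: error-free,
                                               # correction stage stays idle
        return full_decoder(stored, syndrome)  # rare case: full BCH decode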

9.
Cooperative diversity using distributed space-time codes has recently been proposed to form virtual antennas in order to achieve diversity gain. In this paper, we consider a multi-relay network operating in amplify-and-forward (AAF) mode. Motivated by Protocol III of Nabar et al. (2004), we propose a cooperative diversity protocol implementing space-time coding for an arbitrary number of relay nodes when the source-destination link contributes in the second phase. We consider the use of real-orthogonal and quasi-orthogonal space-time code designs, as they give better performance than random linear-dispersion codes. The pairwise error probability (PEP) is derived, and the theoretical analysis demonstrates that the proposed protocol achieves a diversity of order N + 1, where N is the number of relay nodes. No instantaneous channel state information is required at the relay nodes. The optimum power allocation that minimizes the PEP is obtained through numerical and theoretical analysis, with the aggregate system power constraint considered in the optimization. Simulation results demonstrate an improvement over the existing orthogonal protocols for different source-destination channel conditions, and also show that the proposed scheme is robust to channel estimation errors.

10.
As CMOS technology scales down, multiple cell upsets (MCUs) caused by a single radiation particle have become one of the most challenging reliability issues for memories used in space applications. Error correction codes (ECCs) are commonly used to protect memories against errors. Single error correction-double error detection (SEC-DED) codes are the simplest and most typical, but they can only correct single errors. More advanced ECCs can provide sufficient protection but incur higher overhead due to their complex decoders. Orthogonal Latin square (OLS) codes are one type of one-step majority-logic decodable (OS-MLD) codes, which can be decoded with low complexity and delay. However, no OLS codes directly fit 32-bit data, a typical word size in memories. In this paper, (55, 32) and (68, 32) codes derived from (45, 25) and (55, 25) OLS codes are proposed to extend OLS protection to 32-bit data. The proposed codes maintain the correction capability of OLS codes and can be decoded with low delay and complexity. Implementations of these codes are evaluated and compared with the shortened (60, 32) and (76, 32) OLS codes. The results show that, for a radiation-hardened SRAM immune to 2-bit MCUs, the proposed codes reduce area and power by 7.76% and 6.34%, respectively; for 3-bit MCU immunity, the area and power of the whole circuit are reduced by 8.82% and 4.56%.
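The one-step majority-logic decoding property that makes OLS codes attractive can be sketched as follows; the check structure `checks_for_bit` is illustrative, not the paper's (55, 32) or (68, 32) construction:

    def ols_decode(word, checks_for_bit):
        """One-step majority-logic decoding sketch. checks_for_bit[i]
        lists, for data bit i, the parity checks covering it; each check
        is the set of word positions XORed together. In an OLS code these
        checks are orthogonal: they share no position other than bit i."""
        corrected = word[:]
        for i, checks in enumerate(checks_for_bit):
            failed = sum(sum(word[p] for p in chk) % 2 for chk in checks)
            if 2 * failed > len(checks):       # majority of checks fail
                corrected[i] ^= 1              # -> flip data bit i
        return corrected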

11.
Several authors have addressed the problem of designing good linear unequal error protection (LUEP) codes. However, very little is known about good nonbinary LUEP codes. The authors present a class of optimal nonbinary LUEP codes for two different sets of messages. By combining t-error-correcting Reed-Solomon (RS) codes and shortened nonbinary Hamming codes, they obtain nonbinary LUEP codes that protect one set of messages against any t or fewer symbol errors and the remaining set against any single symbol error. For t⩾2, they show that these codes are optimal in the sense of achieving the Hamming lower bound on the number of redundant symbols of a nonbinary LUEP code with the same parameters.

12.
李路路  何春  李磊 《通信技术》2010,43(11):42-44
The space radiation environment contains various cosmic rays and high-energy particles, among which the single-event upset (SEU) effect is a major cause of memory soft errors; it degrades the reliability of data transmission and has therefore become one of the key topics in radiation-hardened integrated circuit design. A standard error-correcting code (ECC) design would require redundancy exceeding 50% of the storage capacity. Based on the principle of shortened Hamming codes, this design implements SEC-DED hardening for a 32-bit memory using only 7 redundant bits, optimizing resource usage. The theoretical basis of the reliability is also analyzed from a probabilistic standpoint, showing that the encoding improves reliability by 3 to 6 orders of magnitude.
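A rough Python sketch of such a 7-redundant-bit SEC-DED code for 32 data bits (a shortened Hamming code plus overall parity, i.e. (39,32)); the textbook bit layout below is an assumption, not necessarily the layout used in the paper:

    def encode(data32):
        """39-bit word: index 0 = overall parity, 1..38 = Hamming
        positions; the 6 power-of-two positions hold Hamming check bits."""
        word, it = [0] * 39, iter(data32)
        for pos in range(1, 39):
            if pos & (pos - 1):                # not a power of two: data
                word[pos] = next(it)
        for p in (1, 2, 4, 8, 16, 32):         # 6 Hamming check bits
            word[p] = sum(word[i] for i in range(1, 39) if i & p) % 2
        word[0] = sum(word) % 2                # overall parity (7th bit)
        return word

    def decode(word):
        s = 0
        for pos in range(1, 39):
            if word[pos]:
                s ^= pos                       # syndrome = error position
        p = sum(word) % 2
        if s == 0 and p == 0:
            return "ok", word
        if p == 1 and s < 39:                  # odd weight: single error
            fixed = word[:]
            fixed[s] ^= 1                      # s == 0 -> parity bit itself
            return "corrected", fixed
        return "double error detected", word   # even weight, s != 0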

13.
This paper presents central finite-dimensional H∞ filters for linear systems with state or measurement delay that are suboptimal for a given threshold γ with respect to a modified Bolza-Meyer quadratic criterion including an attenuation control term with opposite sign. In contrast to the results previously obtained for linear time-delay systems, the paper reduces the original H∞ filtering problems to H2 (optimal mean-square) filtering problems, using the technique proposed in Doyle et al. (IEEE Trans. Automat. Contr. AC-34:831–847, 1989). The paper first presents a central suboptimal H∞ filter for linear systems with state delay, based on the optimal H2 filter from Basin et al. (IEEE Trans. Automat. Contr. AC-50:684–690, 2005), which contains a finite number of filtering equations for any fixed filtering horizon, but this number grows unboundedly as time goes to infinity. To overcome that difficulty, an alternative central suboptimal H∞ filter is designed for linear systems with state delay, based on the alternative optimal H2 filter from Basin et al. (Int. J. Adapt. Control Signal Process. 20(10):509–517, 2006). Then, the paper presents a central suboptimal H∞ filter for linear systems with measurement delay, based on the optimal H2 filter from Basin and Martinez-Zuniga (Int. J. Robust Nonlinear Control 14(8):685–696, 2004). Numerical simulations are conducted to verify the performance of the three designed central suboptimal filters for linear systems with state or measurement delay against the central suboptimal H∞ filter available for linear systems without delays. The authors thank the London Royal Society (RS) and the Mexican National Science and Technology Council (CONACyT) for financial support under an RS International Incoming Short Visits 2006/R4 Grant and CONACyT Grants 55584 and 52953.

14.
This paper proposes a novel core-growing (CG) clustering method based on scoring k-nearest neighbors (CG-KNN). First, an initial core for each cluster is obtained, and then a tree-like structure is constructed by sequentially absorbing data points into the existing cores according to the KNN linkage score. CG-KNN can handle arbitrary cluster shapes via the KNN linkage strategy. On the other hand, it allows the membership of a previously assigned training pattern to be changed to a more suitable cluster, which is intended to enhance robustness. Experimental results on four UCI real-data benchmarks and the Leukemia data set indicate that the proposed CG-KNN algorithm outperforms several popular clustering algorithms, such as Fuzzy C-means (FCM) (Xu and Wunsch IEEE Transactions on Neural Networks 16:645–678, 2005), Hierarchical Clustering (HC) (Xu and Wunsch IEEE Transactions on Neural Networks 16:645–678, 2005), Self-Organizing Maps (SOM) (Golub et al. Science 286:531–537, 1999; Tamayo et al. Proceedings of the National Academy of Science USA 96:2907, 1999), and Non-Euclidean Norm FCM (NEFCM) (Karayiannis and Randolph-Gips IEEE Transactions on Neural Networks 16, 2005).

15.
Reliability is a critical factor for systems operating in radiation environments. Among the components of a system, memories are one of the parts most sensitive to soft errors due to their relatively large area. Because of their high cost, traditional techniques like Triple Modular Redundancy are not used to protect memories; a typical approach is to apply Error Correction Codes that correct single errors and detect double errors. Codes of this type, for example those based on Hamming, provide an initial level of protection. Detected single errors are usually corrected using scrubbing, by which memory positions are periodically re-written after a fixed (deterministic scrubbing) or variable (probabilistic scrubbing) period. These traditional models usually give good results when calculating the reliability of memories (e.g., through the Mean Time To Failure). However, to the best of our knowledge, some particularities are not captured by these approaches. One of them is how double errors are handled: in a traditional model, two errors in the same word always produce a system failure (only single errors can be corrected), but if the two (or more) errors affect the same bit, the second either reinforces the first (leaving just a single error) or cancels it. In both scenarios the resulting situation does not trigger a system failure, which has a direct impact on the reliability of the memory. In this paper, traditional reliability models are refined to handle these scenarios, producing a more precise calculation of the mean time to failure for memory systems.
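The same-bit cancellation effect is easy to quantify by simulation; the word length below is an illustrative choice:

    import random

    def benign_double_fraction(n_bits=39, trials=100_000):
        """Fraction of double upsets leaving <= 1 net flipped bit, i.e.
        still within what a single-error-correcting code handles."""
        benign = 0
        for _ in range(trials):
            word = [0] * n_bits
            for _ in range(2):                       # two independent upsets
                word[random.randrange(n_bits)] ^= 1  # same bit cancels out
            if sum(word) <= 1:
                benign += 1
        return benign / trials

    print(benign_double_fraction())   # about 1/39 ~ 2.6% for a 39-bit word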

16.
This paper studies the convergence analysis of the least mean M-estimate (LMM) and normalized least mean M-estimate (NLMM) algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noise. These algorithms are based on the M-estimate cost function and employ an error nonlinearity to achieve improved robustness in impulsive noise environments over their conventional LMS and NLMS counterparts. Using Price's theorem and an extension of the method proposed in Bershad (IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-34(4), 793–806, 1986; IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(5), 636–644, 1987), we first derive new expressions for the decoupled difference equations that describe the mean and mean-square convergence behavior of these algorithms for Gaussian inputs and additive Gaussian noise. These new expressions, given in terms of generalized Abelian integral functions, closely resemble those for the LMS algorithm and allow us to interpret the convergence performance and determine the step-size stability bound of the studied algorithms. Next, using an extension of Price's theorem for Gaussian mixtures, similar results are obtained for the additive contaminated Gaussian noise case. The theoretical analysis and the practical advantages of the LMM/NLMM algorithms are verified through computer simulations.
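A minimal sketch of an NLMM-style recursion under the assumptions above: an NLMS update whose error is gated by an M-estimate weight so that impulsive samples barely move the filter. The hard threshold `xi` and step size `mu` are illustrative choices, not the exact robust statistic analyzed in the paper:

    import numpy as np

    def nlmm(x, d, taps=8, mu=0.5, xi=2.0, eps=1e-6):
        """x: input samples, d: desired samples; returns filter weights."""
        x = np.asarray(x, float)
        w = np.zeros(taps)
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]              # regressor, newest first
            e = d[n] - w @ u                     # a-priori estimation error
            q = 1.0 if abs(e) <= xi else 0.0     # M-estimate weight:
                                                 # suppress impulsive errors
            w += mu * q * e * u / (eps + u @ u)  # normalized (NLMS) step
        return w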

17.
This paper presents the first aggressively calibrated 14-b 32 MS/s pipelined ADC. The design uses a comprehensive digital background calibration engine that compensates for linear and nonlinear errors as well as capacitor mismatch in the multi-bit DAC. Background calibration techniques that estimate errors by correlating the ADC output with the calibration signal have a very slow convergence rate. This paper therefore also presents a fully digital technique to speed up convergence of the error estimation procedure: by digitally filtering the input signal during error estimation, the convergence rate of the calibration is improved significantly. Implemented in TSMC 0.25 μm technology, the pipelined ADC consumes 75 mA from 2.5 V and occupies 2.8 mm² of active area. Measurement results show that calibration significantly improved the dynamic (SNDR, SFDR) as well as static (DNL, INL) performance of the ADC.

18.
Random linear network coding is an efficient technique for disseminating information in networks, but it is highly susceptible to errors. Kötter-Kschischang (KK) codes and Mahdavifar-Vardy (MV) codes are two important families of subspace codes that provide error control in noncoherent random linear network coding. List decoding has been used to decode MV codes beyond half the minimum distance. Existing hardware implementations of the rank-metric decoder for KK codes suffer from limited throughput, long latency, and high area complexity. The interpolation-based list-decoding algorithm for MV codes still has high computational complexity, and its feasibility for hardware implementation had not been investigated. In this paper we propose efficient decoder architectures for both KK and MV codes and present their hardware implementations. Two serial architectures are proposed, for KK and MV codes respectively, and an unfolded decoder architecture offering high throughput is also proposed for KK codes. Synthesis results show that the proposed architectures for KK codes are much more efficient than previous rank-metric decoder architectures, and demonstrate that the proposed decoder architecture for MV codes is affordable.

19.
A scheme for storing information in a memory system with defective memory cells using "additive" codes was proposed by Kuznetsov and Tsybakov. When a source message is to be stored in a memory with defective cells, a code vector x masking the defect pattern of the memory is formed by adding a vector defined by the message and the defect pattern to the encoded message, and then x is stored. The decoding process does not require the defect information. Considerably better bounds on the information rate of codes of this type, capable of masking multiple defects and correcting multiple temporary errors, are presented. The difference between the upper and lower bounds approaches the difference between the best known upper and lower bounds for random-error-correcting linear codes as the word length becomes large. Examples of efficient codes for masking double or fewer defects and correcting multiple temporary errors are presented.
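A toy instance of the masking idea, far simpler than the Kuznetsov-Tsybakov construction itself (which draws the mask from a coset of a linear code): to mask any single stuck-at cell, store either the message with a 0 flag bit or its bitwise complement with a 1 flag bit, whichever agrees with the stuck cell; the reader needs no defect information.

    def write(msg, stuck_pos, stuck_val):
        """Pick the candidate that matches the stuck cell; the two
        candidates differ in every position, so one always fits."""
        for flag in (0, 1):
            word = [b ^ flag for b in msg] + [flag]
            if word[stuck_pos] == stuck_val:
                return word

    def read(word):
        """Decoding needs no knowledge of which cell was defective."""
        flag = word[-1]
        return [b ^ flag for b in word[:-1]]

    msg = [1, 0, 1, 1]
    stored = write(msg, stuck_pos=2, stuck_val=0)  # cell 2 stuck at 0
    assert stored[2] == 0 and read(stored) == msg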

20.
High-speed architectures for Reed-Solomon decoders
New high-speed VLSI architectures for decoding Reed-Solomon codes with the Berlekamp-Massey algorithm are presented in this paper. The speed bottleneck in the Berlekamp-Massey algorithm is the iterative computation of discrepancies followed by the updating of the error-locator polynomial. This bottleneck is eliminated via a series of algorithmic transformations that result in a fully systolic architecture in which a single array of processors computes both the error-locator and the error-evaluator polynomials. In contrast to conventional Berlekamp-Massey architectures, in which the critical path passes through two multipliers and 1 + ⌈log2(t+1)⌉ adders, the critical path in the proposed architecture passes through only one multiplier and one adder, which is comparable to the critical path in architectures based on the extended Euclidean algorithm. More interestingly, the proposed architecture requires approximately 25% fewer multipliers and a simpler control structure than architectures based on the popular extended Euclidean algorithm. For block-interleaved Reed-Solomon codes, embedding the interleaver memory into the decoder further reduces the critical-path delay to just one XOR gate and one multiplexer, leading to speed-ups of as much as an order of magnitude over conventional architectures.
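For reference, the discrepancy-then-update iteration that forms this bottleneck can be sketched over GF(2); an RS decoder runs the same recursion over GF(2^m) on its syndromes, so this binary version is only an illustration:

    def berlekamp_massey_gf2(s):
        """Return (C, L): the shortest LFSR (connection polynomial C,
        coefficients lowest-degree first, and length L) generating s."""
        C, B = [1], [1]            # current / previous connection poly
        L, m = 0, 1                # LFSR length / steps since last update
        for n in range(len(s)):
            d = s[n]               # discrepancy: does C predict s[n]?
            for i in range(1, L + 1):          # the serial loop the
                d ^= C[i] & s[n - i]           # paper's architecture speeds up
            if d == 0:
                m += 1
                continue
            T = C[:]
            C += [0] * (len(B) + m - len(C))   # pad, then C(x) += x^m * B(x)
            for i, b in enumerate(B):
                C[i + m] ^= b
            if 2 * L <= n:                     # length change: save old C
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
        return C, L

    print(berlekamp_massey_gf2([0, 0, 1, 1, 0, 1]))  # ([1, 1, 1, 0], 3)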
