Similar Literature
20 similar documents were found (search time: 187 ms).
1.
Software-implemented EDAC protection against SEUs
In many computer systems, the contents of memory are protected by an error detection and correction (EDAC) code. Bit-flips caused by single event upsets (SEUs) are a well-known problem in memory chips, and EDAC codes have been an effective solution to it. These codes are usually implemented in hardware using extra memory bits and encoding/decoding circuitry. In systems where EDAC hardware is not available, reliability can be improved by providing the protection in software. Codes and techniques suitable for software implementation of EDAC are discussed and compared, implementation requirements and issues are examined, and some solutions are presented. The paper discusses in detail how system-level and chip-level structures relate to multiple error correction, and a simple solution is presented to make the EDAC scheme independent of these structures. The technique in this paper was implemented and used effectively in an actual space experiment. We have observed that SEUs corrupt the operating system or programs of a computer system that lacks any EDAC for memory, forcing the system to be reset frequently. Protecting the entire memory (code and data) might not be practical in software; however, this paper demonstrates that software-implemented EDAC is a low-cost solution that protects code segments and can appreciably enhance system availability in a low-radiation space environment.
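The software-EDAC idea above can be sketched with a minimal Hamming(7,4) encoder/decoder in Python. This is an illustrative stand-in only, not the paper's space-qualified implementation, which uses longer codes and accounts for chip-level structure:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Parity bits sit at 1-based positions 1, 2 and 4."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(cw):
    """Correct a single bit-flip in software and return the 4 data bits."""
    c = list(cw)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the upset in place
    return [c[2], c[4], c[5], c[6]]
```

A memory scrubber built on this would periodically re-read each protected word, decode it, and write back the corrected value.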

2.
Nowadays, multibit error correction codes (MECCs) are effective approaches to mitigating multiple bit upsets (MBUs) in memories. As technology scales, combinational circuits have become more susceptible to radiation-induced single event transients (SETs), so transient faults in encoding and decoding circuits are more frequent than before. This paper first proposes a new MECC, called the Mix code, to mitigate MBUs in fault-secure memories. Considering the structural characteristics of MECCs, Euclidean Geometry Low Density Parity Check (EG-LDPC) codes and Hamming codes are combined in the proposed Mix codes to protect memories against MBUs with low redundancy overheads. A fault-secure scheme is then presented that can tolerate transient faults both in the storage cells and in the encoding and decoding circuits, with remarkably lower redundancy overheads than existing fault-secure schemes. Furthermore, the proposed scheme suits ordinary accessed data widths (e.g., 2^n bits) between the system bus and memory. Finally, the scheme has been implemented in Verilog and validated through a wide set of simulations. The experimental results reveal that it can effectively mitigate multiple errors in whole memory systems, not only reducing the redundancy overheads of the storage array but also improving the performance of MECC circuits in fault-secure memory systems.

3.
Liu Xiaohui, Zhang Xin, Chen Huaming. Journal of Signal Processing, 2012, 28(7): 1014-1020
As technology advances and core voltages decrease, memories have become more susceptible to transient errors (soft errors), which are now a leading threat to the reliability of spaceborne devices. Error detection and correction (EDAC) codes, also known as error-correcting codes, are commonly used to correct transient errors in SRAM-type memories, but the multiple-bit upsets (SEMUs) caused by a single high-energy particle cannot be handled by ordinary single-error-correction, double-error-detection (SEC-DED) codes. This paper proposes a (26,16) interleaved code with an interleaving degree of 2, composed of two (13,8) systematic codes, each capable of correcting one random error or a two-bit burst error. The (26,16) interleaved code can correct up to two random errors and up to four-bit burst errors within a single codeword (DEC-QAEC). Theoretical analysis and hardware-platform experiments show that, with comparable storage overhead and latency, the interleaved code is more reliable than a SEC-DED code of the same length and can effectively improve the resilience of SRAM-type memories against multiple-bit upsets.
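The degree-2 interleaving used above can be sketched in a few lines: bits of two codewords are stored alternately, so a burst of two adjacent upsets lands in two different codewords, each of which then sees only a single (correctable) error. The word contents below are arbitrary illustration values:

```python
def interleave(words):
    """Bit-interleave codewords: adjacent stored bits come from different
    codewords, so an n-bit adjacent burst (n <= number of words) hits each
    codeword at most once."""
    out = []
    for i in range(len(words[0])):
        for w in words:
            out.append(w[i])
    return out

def deinterleave(bits, degree=2):
    """Recover the original codewords from the interleaved bit stream."""
    return [bits[k::degree] for k in range(degree)]
```

After deinterleaving, each component (13,8) code in the paper's scheme only has to correct the single error it received.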

4.
Blind estimation of LDPC code parameters under bit-error conditions
To address the blind identification of LDPC (Low-Density Parity-Check) codes in non-cooperative signal processing, an open-set identification algorithm with strong error tolerance is proposed. The algorithm applies Gauss-Jordan elimination to the codeword matrix to find "correlated columns" of low Hamming weight, derives parity-check vectors of the LDPC code from the constraint relations contained in those columns, and then discards the erroneous codewords corresponding to the "1" positions in the correlated columns. If iterating between Gauss-Jordan elimination for parity-check vectors and erroneous-codeword removal yields no further parity-check vectors, the vectors obtained so far are sparsified and used for error-correcting decoding. Finally, parity-check vector recovery, erroneous-codeword removal, sparsification, and LDPC decoding are iterated jointly to reconstruct the LDPC parity-check matrix effectively. Simulation results show that for the (576,288) LDPC code of the IEEE 802.16e standard, the algorithm still achieves good identification at a bit error rate of 0.0022.
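The core step of such an identification algorithm, recovering parity-check vectors from a stack of received codewords by Gauss-Jordan elimination over GF(2), can be sketched as below. The sketch assumes error-free codewords; the paper's contribution lies precisely in making this step robust to bit errors:

```python
def gf2_nullspace(codewords):
    """Basis of the GF(2) null space of the codeword matrix, found by
    Gauss-Jordan elimination: each returned vector v satisfies
    c . v = 0 (mod 2) for every codeword c, i.e. v is a parity check."""
    A = [list(row) for row in codewords]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for col in range(n):
        pr = next((i for i in range(r, m) if A[i][col]), None)
        if pr is None:
            continue                      # free column, no pivot here
        A[r], A[pr] = A[pr], A[r]         # swap pivot row into place
        for i in range(m):
            if i != r and A[i][col]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
        pivots.append(col)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                        # one null-space vector per free column
        v = [0] * n
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = A[i][f]
        basis.append(v)
    return basis
```

With enough clean codewords, the recovered basis spans the row space of the parity-check matrix H.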

5.
As microelectronics technology advances to deep submicron scales, Multiple Cell Upsets (MCUs) induced by radiation in memory devices become more likely. Implementing a robust Error Correction Code (ECC) is a suitable solution; however, the more complex an ECC, the greater its delay, area usage, and energy consumption. An ECC with an appropriate balance between error coverage and computational cost is essential for applications where fault tolerance is heavily needed and energy resources are scarce. This paper describes the conception, implementation, and evaluation of the Column-Line-Code (CLC), a novel algorithm for the detection and correction of MCUs in memory devices that combines extended Hamming codes and parity bits. The paper also evaluates variations of the 2D CLC scheme and proposes an additional operation, called extended mode, to correct more MCU patterns. We compared the implementation cost, reliability level, detection/correction rate, and mean time to failure among the CLC versions and other correction codes, showing that the CLCs achieve high MCU correction efficacy with reduced area, power, and delay costs.
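The column/line idea behind CLC can be illustrated with a stripped-down analog: plain even parity per row and per column of a bit matrix. A single upset is then located at the intersection of the failing row check and the failing column check (CLC itself uses extended Hamming on one dimension and handles multi-cell patterns; this sketch only shows the 2D-localization principle):

```python
def rowcol_parity(matrix):
    """Per-row and per-column even-parity check bits for a bit matrix."""
    rows = [sum(r) % 2 for r in matrix]
    cols = [sum(c) % 2 for c in zip(*matrix)]
    return rows, cols

def locate_single_upset(matrix, rows, cols):
    """Return (row, col) of a single flipped cell, or None if all checks hold."""
    bad_r = [i for i, r in enumerate(matrix) if sum(r) % 2 != rows[i]]
    bad_c = [j for j, c in enumerate(zip(*matrix)) if sum(c) % 2 != cols[j]]
    if bad_r and bad_c:
        return bad_r[0], bad_c[0]
    return None
```

Once located, the cell is corrected by flipping it back; richer per-row codes (as in CLC) extend this to multi-cell upset patterns.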

6.
Among popular multi-transmit and multi-receive antenna techniques, the VBLAST (Vertical Bell Laboratories Layered Space-Time) architecture has been shown to be a good solution for wireless communications applications that require high-rate data transmission. Recently, applying efficient error correction coding schemes such as low-density parity-check (LDPC) codes to systems with multiple transmit and receive antennas has been shown to significantly improve bit error rate performance. Although unstructured irregular LDPC codes are quite popular, owing to the ease of constructing their parity check matrices and their very good error rate performance, their encoder complexity is high, and simple implementation of both encoder and decoder is an asset in wireless communications applications. In this paper, we study the application of Euclidean geometry LDPC codes to the VBLAST system. We assess system performance for different code parameters and numbers of antennas via Monte-Carlo simulation and show that combining Euclidean geometry LDPC codes with VBLAST can significantly improve bit error rate performance. We also show that interleaving the data is necessary to improve LDPC performance when a larger number of antennas is used, in order to mitigate the effect of error propagation. The simplicity of implementing both encoder and decoder makes Euclidean geometry LDPC codes with the VBLAST system attractive and suitable for practical applications.

7.
Shortened Hamming codes and their improved variants are widely used in the error detection and correction circuits of space-grade, high-reliability memories. Although shortened Hamming codes are a mature single-error-correcting scheme, little work has examined how multiple bit flips within a single byte cause them to fail. This paper analyzes the failure of shortened Hamming codes under multiple-bit upsets in one byte, enumerates the possible erroneous output patterns, and derives theoretical formulas for their probabilities. Computer simulations in Matlab show that the theoretical results agree well with the experimental ones. Finally, the paper analyzes a scheme used by ISSI in its radiation-hardened SRAM designs, in which a longer information word is split into two equal halves, each encoded and decoded with its own shortened Hamming code. The analysis shows that this scheme reduces the probability of a 3-bit erroneous output in the failure case.
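The failure mode analyzed above, multiple flips in one word defeating a single-error-correcting code, is easy to reproduce exhaustively for the textbook Hamming(7,4) code (a stand-in; the paper treats shortened codes and byte-confined flips). Every double-bit upset yields a nonzero syndrome pointing at a third, innocent bit, so the decoder "corrects" the word into wrong data:

```python
import itertools

def encode(d):
    """Hamming(7,4): parity bits at 1-based positions 1, 2 and 4."""
    return [d[0]^d[1]^d[3], d[0]^d[2]^d[3], d[0], d[1]^d[2]^d[3], d[1], d[2], d[3]]

def decode(cw):
    """Single-error correction: the syndrome is the 1-based flip position."""
    c = list(cw)
    s = (c[0]^c[2]^c[4]^c[6]) + 2*(c[1]^c[2]^c[5]^c[6]) + 4*(c[3]^c[4]^c[5]^c[6])
    if s:
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def count_double_error_miscorrections():
    """Try every codeword and every double-bit upset; count wrong decodes."""
    bad = total = 0
    for d in itertools.product([0, 1], repeat=4):
        cw = encode(list(d))
        for i, j in itertools.combinations(range(7), 2):
            corrupted = list(cw)
            corrupted[i] ^= 1
            corrupted[j] ^= 1
            total += 1
            if decode(corrupted) != list(d):
                bad += 1
    return bad, total
```

For this code every one of the 16 x 21 = 336 double upsets is silently miscorrected, which is exactly why the paper's byte-level failure analysis matters.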

8.
Hao Li, Yu Lixin, Peng Heping, Zhuang Wei. Journal of Semiconductors, 2015, 36(11): 115005-5
An optimization method for error detection and correction (EDAC) circuit design is proposed. The method involves selecting or constructing EDAC codes with low hardware cost, together with operation scheduling based on a 2-input XOR gate structure and two actions for reducing hardware cells, which can effectively reduce the delay penalties and area costs of the EDAC circuit. A 32-bit EDAC circuit was implemented as a prototype in a 180 nm process, and its delay penalties and area costs were evaluated. Results show that, for EDAC codes with the same correction and detection capability, the time penalty and area cost of the circuitry depend on the parity-check matrix and the hardware implementation chosen. The method can serve as a guide for low-cost radiation-hardened microprocessor EDAC circuit design and for more advanced technologies.

9.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, more than one error in the memory cells has become a more serious phenomenon in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It is based on the fact that soft errors in the encoder have a similar effect to soft errors in a memory word, and it is achieved by using logic sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because such errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.

10.
This paper presents a quasi-random approach to space–time (ST) codes. The basic principle is to transmit randomly interleaved versions of forward error correction (FEC)-coded sequences simultaneously from all antennas in a multilayer structure. This is conceptually simple, yet still very effective. It is also flexible regarding the transmission rate, antenna numbers, and channel conditions (e.g., with intersymbol interference). It provides a unified solution to various applications where the traditional ST codes may encounter difficulties. We outline turbo-type iterative joint detection and equalization algorithms with complexity (per FEC-coded bit) growing linearly with the transmit antenna number and independently of the layer number. We develop a signal-to-noise-ratio (SNR) evolution technique and a bounding technique to assess the performance of the proposed code in fixed and quasi-static fading channels, respectively. These performance assessment techniques are very simple and reasonably accurate. Using these techniques as a searching tool, efficient power allocation strategies are examined, which can greatly enhance the system performance. Simulation results show that the proposed code can achieve near-capacity performance with both low and high rates at low decoding complexity.

11.
As technology scales down, shrinking geometry and layout dimensions, on-chip interconnects are exposed to noise sources such as crosstalk coupling, supply voltage fluctuation, and temperature variation, which cause random and burst errors that affect interconnect reliability. Hence, error correction codes integrated with noise reduction techniques are incorporated to make on-chip interconnects robust against errors. The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), with a parity bit added to enhance the error correction capability. The proposed code corrects all one-bit and two-bit error patterns, corrects 7 of the 10 possible three-bit error patterns, and detects three-bit burst errors; a Hybrid Automatic Repeat Request (HARQ) system is employed when a three-bit burst error occurs. The performance of the proposed codec is evaluated for residual flit error rate, codec area, power, delay, average flit latency, and link energy consumption. The proposed codec achieves a four-order-of-magnitude reduction in residual flit error rate and over 53% link energy minimization compared with other existing error correction schemes, along with up to 4.2% less area and up to 6% less codec power consumption than other error correction codes. The low codec area, codec power consumption, link energy, and residual flit error rate make the proposed code appropriate for on-chip interconnection links.

12.
In spread-spectrum reconnaissance and countermeasures, spreading codes estimated at low signal-to-noise ratios contain severe bit errors, which degrade despreading and demodulation quality. Based on the basic properties of Gold codes and their underlying preferred pairs of m-sequences, combined with the characteristics of the cross-correlation function, a strict classification of Gold codes is proposed, leading to a classification-search-based error correction algorithm: by comparing the three-valued cross-correlation distribution between a Gold code under test and sample Gold codes of each class, the correct Gold code can be located quickly and accurately, fully correcting the bit errors. When the bit error rate of the Gold code does not exceed 11%, the algorithm corrects all errors and effectively lowers the SNR threshold for blind processing of spread-spectrum signals.
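The three-valued cross-correlation property that this classification relies on can be illustrated with a small Gold-code example. The sketch below generates the two m-sequences of a length-31 preferred pair (characteristic polynomials x^5+x^2+1 and x^5+x^4+x^3+x^2+1, a textbook pair chosen for illustration, not the codes used in the paper) and checks that their periodic cross-correlation takes only the three values {-1, -9, 7}:

```python
def mseq(taps, n, seed):
    """One period (2^n - 1 bits) of the LFSR sequence a[k] = XOR of a[k-t]
    for t in taps; the taps must correspond to a primitive polynomial."""
    a = list(seed)
    for k in range(n, 2 ** n - 1):
        bit = 0
        for t in taps:
            bit ^= a[k - t]
        a.append(bit)
    return a

def cross_correlation_values(a, b):
    """Set of periodic cross-correlation values of two equal-length binary
    sequences, computed over the +/-1 alphabet."""
    N = len(a)
    sa = [1 - 2 * x for x in a]
    sb = [1 - 2 * x for x in b]
    return {sum(sa[k] * sb[(k + tau) % N] for k in range(N))
            for tau in range(N)}
```

A classifier in the spirit of the paper would compare the empirical correlation histogram of a received code against these reference three-valued distributions.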

13.
A new code construction algorithm for incoherent Multi-Dimensional Optical Code Division Multiple Access (MD-OCDMA) for asynchronous fiber optic communication is proposed. Multi-dimensionality here refers to two-dimensional (2D) wavelength-time or space-time domains and three-dimensional (3D) space-wavelength-time domains. The application of the algorithm to constructing 2D multiple-pulses-per-row codes and 3D multiple-pulses-per-plane codes is given, and the performance of the codes is discussed. In the applications discussed, the construction ensures a maximum cross-correlation of 1 between any two codes, and the proposed codes have complete 1D code allocation, which increases cardinality. The performance of some codes in the literature is compared with the proposed codes; the analyzed performance measure is bit error rate due to multiple access interference for different numbers of active users. The analysis shows that the proposed 2D construction offers a very low bit error rate at lower spectral efficiency compared with other 2D constructions, and the proposed 3D construction shows a lower bit error rate than existing 3D constructions for equivalent code dimension. New integrated optic designs for generating OCDMA codes using titanium-indiffused lithium niobate technology are explored, which can enable compact encoders and decoders for computer communications.

14.
As an error-correcting code, polar codes offer good encoding and decoding performance and have been adopted as the standard coding scheme for the 5G short-code control channel. At short code lengths, however, their performance is not outstanding. This paper proposes a new coding and decoding method based on polar codes concatenated with enhanced parity-check codes: enhanced check bits are placed after the original parity bits to doubly check the information bits of low channel reliability in the check equations, helping the parity-check code prune paths during decoding and improving the reliability of path selection. Simulation results show that, over the same channel and at the same code rate and code length, the proposed method achieves better bit error performance than polar codes concatenated with cyclic redundancy check (CRC) codes or with parity check (PC) codes. Over a Gaussian channel, at a code length of 128, a code rate of 1/2, and a bit error rate of 10^-3, the proposed enhanced-PC-concatenated polar code achieves a gain of about 0.3 dB over PC-concatenated polar codes and about 0.4 dB over CRC-aided polar codes.

15.
Caches, which comprise much of a CPU chip's area and transistor count, are reasonable targets for transient single and multiple faults induced by energetic particles. This paper presents: (1) a new fault detection scheme, GParity, for the tag arrays of cache memories, and (2) a cache architecture that improves performance as well as dependability. In this architecture, the cache space is divided into sets of different sizes with different tag lengths. Using the proposed GParity scheme, when single or multiple errors are detected in a word, the word is rewritten with its correct data from memory and its GParity code is recomputed. The error detection scheme and the cache architecture have been evaluated using trace-driven simulation with soft error injection and SPEC 2000 applications. Moreover, reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results for the GParity code are compared with those of other protection codes and of memory systems without redundancy or with single parity. The results show an error detection improvement of between 66% and 96% compared with the single parity already available in microprocessors.

16.
This paper proposes a pure software technique, "error detection by duplicated instructions" (EDDI), for detecting errors during normal system operation. Compared with other error-detection techniques that use hardware redundancy, EDDI requires no hardware modifications to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. Especially for faults in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods; these formulas use program statistics collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. The estimates were then verified by simulation, in which a fault injector forced bit-flips in the code segment of the executable machine code. The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in the eight benchmark programs with EDDI, while on average 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault coverage without any extra error-detection hardware. This pure software technique is especially useful when designers cannot change the hardware but need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions added for error detection so that instruction-level parallelism (ILP) is maximized; performance overhead can be reduced by increasing ILP within a single super-scalar processor, and the execution-time overhead in a 4-way super-scalar processor is less than in processors that can issue two instructions per cycle.
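EDDI itself works at the machine-instruction level inside the compiler, duplicating instructions into spare registers and inserting compare-and-branch checks. As a source-level analogy only, the duplicate-and-compare principle can be sketched like this:

```python
import copy

def duplicated_execution(fn, *args):
    """EDDI-style software check (a source-level analogy, not the compiler
    technique itself): run `fn` twice on independent copies of the inputs
    and compare the results; a mismatch signals a detected transient fault."""
    first = fn(*copy.deepcopy(args))
    second = fn(*copy.deepcopy(args))
    if first != second:
        raise RuntimeError("error detected: duplicated executions disagree")
    return first
```

As in EDDI, the duplicate uses separate storage (the deep copies play the role of the shadow registers), so a fault corrupting one copy cannot mask itself in the other.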

17.
The studies performed while designing error correction coding elements in sub-100-nm memory and microprocessor microcircuits confirm that the greatest improvement in the upset tolerance of commercial RHBD memory microcircuits is achieved by combining modern circuit solutions for memory elements with algorithmic data encoding and protection methods. Among the circuit methods, the following are most relevant: the use of DICE memory cells for checking (reference) data files; the introduction of additional columns and multiplexers to replace any column with a spare one if a multiple uncorrectable upset arises in that column; and the implementation of data interleaving with a degree of no more than 8 to minimize adjacent upsets in the code word. Algorithmic encoding approaches of the SEC-DED-DAEC class (single-error correction, double-error detection, double-adjacent-error correction) are efficient for ensuring the upset tolerance of sub-100-nm very-large-scale integration (VLSI) circuits under the external action of single nuclear particles. An encoding algorithm based on these recommendations demonstrated up to 27% better correction of nonadjacent double errors, at a slightly slower speed of operation and larger occupied on-chip area, compared with the Datta and Choi codes, allowing different implementation versions of upset-tolerant VLSI circuits depending on the problem being solved.

18.
In this paper, a method to mitigate silent data corruptions (SDCs) is proposed. The paper first shows and characterizes instruction result locality based on several simulation results, and then proposes an architecture called the instruction value history table (VHT) to detect SDCs. When a fault is suspected, an extra redundant execution of the instruction is used to confirm it: if the outcome of the redundant execution differs from the previous one, a fault has occurred; otherwise the first execution is correct. To correct a detected fault, a third redundant execution of the instruction is performed; having three values from three redundant executions makes correction of the fault feasible. The main advantage of this method is that it detects errors that are not detectable by traditional protection codes such as parity and SEC-DED; in other words, it detects SDCs and multiple faults that such codes miss. Various soft error injections have been applied to an Alpha processor for several PARSEC benchmarks. Experimental results show that the method can detect up to 70% of injected SDCs.
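The three-execution correction step described above can be sketched as a majority vote over redundant runs. This is a deliberate simplification: the paper triggers the second and third executions only when the value-history prediction disagrees, whereas the sketch always runs three times:

```python
from collections import Counter

def triple_execution_vote(fn, *args):
    """Execute `fn` three times and return the majority result; with at
    most one faulty execution, the vote recovers the correct value."""
    results = [fn(*args) for _ in range(3)]
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None   # None: no majority, unrecoverable
```

In hardware, the three values come from the original execution plus two redundant re-executions of the same instruction, and the voter selects the value that appears at least twice.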

19.
To address the error floor that quasi-cyclic low-density parity-check (QC-LDPC) codes may exhibit in the high signal-to-noise-ratio (SNR) region, a new construction method for QC-LDPC codes with a low error floor is proposed. The basic matrix of the method is built with the progressive edge growth (PEG) algorithm and an improved eliminate-elementary-trapping-sets (EETS) algorithm, so as to remove elementary trapping sets from the basic matrix; the Zig-Zag method is then used to construct the cyclic shift matrix that extends the basic matrix into the parity check matrix. The method not only improves the error floor in the high-SNR region but also allows flexible design of the code length and code rate. Simulation results show that, at a bit error rate of 10^-6, the PEG-trapping-Zig-Zag (PTZZ)-QC-LDPC(3024,1512) code with code rate 0.5 achieves net coding gains of 0.1 dB and 0.16 dB over the PEG-Zig-Zag (PZZ)-QC-LDPC(3024,1512) and PEG-QC-LDPC(3024,1512) codes, respectively, and the gap between the bit-error-rate curves widens as the SNR increases. In addition, the PTZZ-QC-LDPC(3024,1512) code shows no error floor above an SNR of 2.2 dB.

20.
Two error correction schemes are proposed for word-oriented binary memories that can be affected by erasures, i.e., errors with known location but unknown value. The erasures considered here are due to the drifting of the electrical parameter used to encode information outside the normal ranges associated with a logic 0 or a logic 1. For example, a dielectric breakdown in a magnetic memory cell may reduce its electrical resistance appreciably below the levels corresponding to logic 0 and logic 1 values stored in healthy memory cells. Such deviations can be sensed during memory read operations, and the acquired information can be used to boost the correction capability of an error-correcting code (ECC). The proposed schemes enable the correction of double-bit errors by combining erasure information with single-bit error correction and double-bit error detection (SEC-DED) codes or shortened SEC codes; the correction of single-bit errors is always guaranteed. Ways to increase the number of double-bit and triple-bit errors detectable by shortened SEC and SEC-DED codes are also considered, in order to augment the error correction capability of the proposed solutions.
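The erasure-assisted idea can be sketched for a distance-3 Hamming(7,4) code: because erasure positions are known, trying every filling of the erased bits and keeping the one that satisfies all parity checks recovers up to two erased bits, which a plain SEC decoder could not do for two unknown-location errors. This sketches only the principle; the paper builds complete schemes around SEC-DED and shortened SEC codes:

```python
import itertools

def hamming74_encode(d):
    """Hamming(7,4) with parity bits at 1-based positions 1, 2 and 4."""
    return [d[0]^d[1]^d[3], d[0]^d[2]^d[3], d[0], d[1]^d[2]^d[3], d[1], d[2], d[3]]

def syndrome(c):
    return (c[0]^c[2]^c[4]^c[6], c[1]^c[2]^c[5]^c[6], c[3]^c[4]^c[5]^c[6])

def correct_erasures(received):
    """`received` holds None at the known erasure positions. With minimum
    distance 3 and at most two erasures (and no other errors), exactly one
    filling of the erased bits is a valid codeword; return it."""
    erased = [i for i, b in enumerate(received) if b is None]
    for bits in itertools.product([0, 1], repeat=len(erased)):
        trial = list(received)
        for pos, b in zip(erased, bits):
            trial[pos] = b
        if syndrome(trial) == (0, 0, 0):
            return trial
    return None
```

Uniqueness follows from the distance argument: two codewords agreeing outside two erased positions would differ in at most two bits, which distance 3 forbids.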
