Related Articles
1.
The complex and changeable atmospheric environment readily causes burst errors on satellite-to-ground optical links, and the strong burst-error resistance of interleaved RS codes can effectively correct burst errors in the information sequence. Considering delay, interleaving depth, and code rate together, this paper optimizes the design of the interleaved RS code and computes the optimal code rate for a fixed interleaving depth. Simulations show that interleaved RS codes clearly improve the performance of the communication system over fading channels, and that the burst-error resistance is greatest when the code rate is 1/2.
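As a rough illustration of the trade-off discussed above, the sketch below (not from the paper; the RS parameters and interleaving depth are assumptions) computes the guaranteed burst-correction budget of a block-interleaved RS code: each RS(n, k) codeword corrects t = (n − k)/2 symbol errors, so interleaving `depth` codewords guarantees correction of any burst of up to depth · t consecutive symbols.

```python
# A minimal sketch of the burst-correction budget of a block-interleaved RS code.
# The RS parameters and interleaving depth are illustrative assumptions; the
# abstract does not specify them.

def rs_burst_capability(n: int, k: int, depth: int) -> int:
    """Maximum burst length (in symbols) guaranteed correctable when `depth`
    RS(n, k) codewords are block-interleaved: each codeword corrects
    t = (n - k) // 2 symbol errors, and a burst of at most depth * t
    consecutive symbols leaves no codeword with more than t errors."""
    t = (n - k) // 2
    return depth * t

if __name__ == "__main__":
    depth = 8                              # fixed interleaving depth (assumed)
    n = 255                                # RS codeword length over GF(2^8) (assumed)
    for k in (223, 127):                   # code rate ~0.87 vs ~0.5
        print(f"RS({n},{k}), rate {k/n:.2f}: correctable burst <= "
              f"{rs_burst_capability(n, k, depth)} symbols")
```

Lowering the code rate raises t for a fixed n, so this simple budget grows monotonically; the rate-1/2 optimum reported in the abstract additionally reflects the delay and interleaver-size constraints that this sketch does not model.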

2.
Shortened Hamming codes and their variants are widely used in the error detection and correction circuits of space-grade, high-reliability memories. Although they are a mature single-error-correcting code, little work has examined how multiple bit flips within a single byte cause a shortened Hamming code to fail. This paper analyzes the failure of shortened Hamming codes under multi-bit upsets within one byte, enumerates the possible erroneous output patterns, and derives theoretical formulas for their probabilities. Computer simulations in Matlab show that the theoretical results agree well with the experimental results. Finally, the paper analyzes a scheme used by ISSI in its radiation-hardened SRAM design, in which a long information word is split into two equal halves that are encoded and decoded with separate shortened Hamming codes; the analysis shows that this scheme reduces the probability of a 3-bit erroneous output in failure cases.

3.
刘小汇  伍微  欧钢 《信号处理》2011,27(8):1140-1146
Error detection and correction (EDAC) based on information redundancy is a common system-level fault-tolerance technique against single event upsets (SEU). Software-implemented EDAC is an alternative to hardware EDAC: through software, error-correcting codes (ECC) are added to existing memory segments to detect and correct memory errors. This paper analyzes how the correction capability and code rate of the ECC, the scrubbing interval, and the amount of code to be protected affect the reliability of a software EDAC scheme. Analysis and simulation show that, for random memory errors caused by single particles, raising the correction capability and code rate of an individual codeword or enlarging the scrubbing interval has little effect on reliability, whereas increasing the scrubbing interval by shortening the task's execution code, and compressing the total amount of code to be protected, improve reliability considerably. These conclusions can guide the engineering trade-off among implementation resources, real-time performance, and reliability.

4.
李路路  何春  李磊 《通信技术》2010,43(11):42-44
The space radiation environment contains various cosmic rays and high-energy particles, among which the single event upset (SEU) effect is a major cause of soft errors in memories; it degrades the reliability of data transfer and has therefore become a focus of current radiation-hardened integrated circuit design. A standard error-correcting code (ECC) design can consume more than 50% of the storage in redundancy. Based on the shortened Hamming code, this design implements an SEC-DED hardening scheme that protects a 32-bit memory word with only 7 redundant bits, optimizing resource usage. The theoretical reliability is analyzed from a probabilistic viewpoint, showing that the coding improves reliability by 3 to 6 orders of magnitude.
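As an illustration of the organization described above (32 data bits protected by 7 redundant bits), the following sketch implements a (39,32) SEC-DED code in software: a shortened Hamming code with 6 check bits plus one overall parity bit. This is a generic textbook construction, not the paper's hardened circuit.

```python
# A minimal sketch (not the paper's design) of a (39,32) SEC-DED code:
# a shortened Hamming code with 6 check bits plus one overall parity bit,
# i.e. 7 redundant bits for a 32-bit word.

R = 6                       # Hamming check bits at positions 1, 2, 4, ..., 32
N = 38                      # Hamming codeword length for 32 data bits

def encode(data: int) -> list[int]:
    """Return 39 bits: w[1..38] is the Hamming codeword, w[0] the overall parity."""
    w = [0] * (N + 1)
    bits = iter((data >> i) & 1 for i in range(32))
    for pos in range(1, N + 1):
        if pos & (pos - 1):                 # non power-of-two position => data bit
            w[pos] = next(bits)
    for i in range(R):                      # check bit p covers positions with bit i set
        p = 1 << i
        w[p] = 0
        for pos in range(1, N + 1):
            if pos != p and (pos & p):
                w[p] ^= w[pos]
    for pos in range(1, N + 1):             # overall parity over the 38-bit codeword
        w[0] ^= w[pos]
    return w

def decode(w: list[int]) -> str:
    syndrome = 0
    for pos in range(1, N + 1):
        if w[pos]:
            syndrome ^= pos                 # syndrome value = error position (if single)
    parity = 0
    for b in w:
        parity ^= b
    if syndrome == 0 and parity == 0:
        return "no error"
    if parity == 1:                         # odd number of flips: assume a single error
        if 1 <= syndrome <= N:
            w[syndrome] ^= 1                # flip the erroneous bit back
        elif syndrome == 0:
            w[0] ^= 1                       # the overall parity bit itself flipped
        return "single error corrected"
    return "double error detected"          # even number of flips, nonzero syndrome

if __name__ == "__main__":
    w = encode(0xDEADBEEF)
    w[7] ^= 1                               # inject a single-bit upset
    print(decode(w))                        # -> single error corrected
    w[12] ^= 1
    w[20] ^= 1                              # inject a double-bit upset
    print(decode(w))                        # -> double error detected
```

The overall parity bit is what upgrades the SEC Hamming code to SEC-DED: an even number of bit flips leaves the overall parity consistent but produces a nonzero syndrome, which the decoder reports as a detected, uncorrectable double error.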

5.
An improved clock-edge-based self-checking and self-correcting circuit structure is presented to correct data errors caused by single event upsets (SEU). Existing SEU detection and correction circuits are briefly reviewed, and an improved structure is proposed on that basis to monitor flip-flops and memories such as SRAM in real time and promptly correct SEU-induced data errors. Built-in commands are used to inject errors and emulate the impact of single event upsets on the circuit. Compared with the original circuit, the improved circuit trades a small increase in area and a modest amount of resources for a higher error correction rate.

6.
Research and Design of an SRAM Hardened Against Single Event Upsets
In space applications and nuclear radiation environments, the single event upset (SEU) effect severely degrades SRAM reliability. A radiation-hardened SRAM chip was studied and designed using error detection and correction (EDAC) together with layout-level hardening techniques to improve its tolerance to single event upsets. The built-in EDAC module not only provides single-error correction and double-error detection for the stored data, its additional stored-data error flag bits also simplify the SRAM test plan. The EDAC circuit was verified through fabrication and testing of an SRAM prototype chip. Compared with triple modular redundancy, the designed radiation-hardened SRAM offers smaller area, higher integration density, and lower power consumption.

7.
任磊  李磊  武书肖 《微电子学》2017,47(3):412-415
In radiation-hardened static memory (SRAM) design, a single-error-correcting (SEC) code combined with interleaving is an effective way to handle multiple-bit upsets (MBU) in space SRAMs. The interleaving distance (ID) should be chosen so that an MBU is distributed across different physical words; the larger the ID, the more complex the circuit implementation and the higher the power and area cost. Based on the MBU error patterns and multiplicities produced in space SRAMs, an existing ID selection model is corrected and a more accurate evaluation model is proposed.
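A small sketch of the interleaving idea referred to above (the word width, interleaving distance, and MBU span are illustrative assumptions; this is not the paper's evaluation model): with bit interleaving, bit i of logical word w is placed at physical column i·ID + w, so a cluster of at most ID physically adjacent upset cells touches at most one bit of any word, which a SEC code can then correct.

```python
# A minimal sketch of why bit interleaving lets a single-error-correcting (SEC)
# code survive a multiple-bit upset (MBU): logically adjacent bits of one word
# are placed ID physical columns apart.  Parameters below are assumptions.

def physical_column(word_index: int, bit_index: int, interleave_distance: int) -> int:
    """Column of bit `bit_index` of logical word `word_index` in one SRAM row."""
    return bit_index * interleave_distance + word_index

def worst_hits_per_word(word_width: int, interleave_distance: int, mbu_span: int) -> int:
    """Worst-case number of bits of any single word inside an MBU that spans
    `mbu_span` physically adjacent columns (maximized over burst positions)."""
    worst = 0
    row_width = word_width * interleave_distance
    for start in range(row_width - mbu_span + 1):
        hit = set(range(start, start + mbu_span))
        for w in range(interleave_distance):          # ID logical words share a row
            cols = {physical_column(w, b, interleave_distance) for b in range(word_width)}
            worst = max(worst, len(cols & hit))
    return worst

if __name__ == "__main__":
    for ID in (1, 2, 4):
        print(f"ID={ID}: worst bits per word for a 4-column MBU =",
              worst_hits_per_word(word_width=8, interleave_distance=ID, mbu_span=4))
    # ID >= 4 limits every word to a single upset bit, which SEC can correct
```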

8.
《无线电工程》2019,(12):1094-1098
FPGA self-scrubbing is an effective measure for mitigating single event upsets in FPGAs and helps improve their reliability in space applications. Taking Xilinx Virtex-7 FPGAs as an example, this paper studies key techniques such as configuration memory readback and scrubbing through the Internal Configuration Access Port (ICAP), and error detection and correction of the configuration memory based on the Frame Error Correction Code (FRAME_ECC), so that an FPGA in space can autonomously detect and correct configuration-memory bit flips caused by single event upsets, achieving correction of 1-bit flip errors and detection of 2-bit flip errors in each readback configuration frame.

9.
A concatenated code structure for deep-space communication is implemented on a Field Programmable Gate Array (FPGA). The concatenated code consists of an inner code and an outer code: the inner code uses a fully parallel soft-decision Viterbi decoder to correct random errors; de-interleaving is then completed with the aid of frame synchronization, and the outer Reed-Solomon (RS) code is decoded to correct burst errors, completing the concatenated decoding. Hardware tests show that, while meeting the system bit-error-rate requirement, this concatenated decoder allows a lower transmit power or a smaller antenna, which is important for reducing system cost and improving system performance.

10.
王玲 《今日电子》2001,(12):17-18,16
Interleaving and de-interleaving are an important part of combined channel error-correction systems, and interleavers and de-interleavers can be implemented in several ways. Using the Quartus software platform and simulation environment developed by Altera, this article designs a single FPGA circuit implementation of an interleaver and a de-interleaver and analyzes the characteristics of that implementation. Basic principle of outer interleaving: errors in real channels are often burst errors, or burst errors mixed with random errors. If the burst errors are first dispersed into random errors and the random errors are then corrected, the anti-interference performance of the system improves further. The role of the interleaver is to disperse long burst errors, or multiple burst errors, into random errors, i.e., to randomize the errors. By interleaving scheme, interleavers can be divided into fixed-depth interleavers (such as block interleavers and convolutional interleavers) and random interleavers with varying depth; by the unit being interleaved, they can be divided into symbol interleavers and segment interleavers.
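A minimal block-interleaver sketch in the spirit of the principle described above (the sizes are illustrative; this is not the article's FPGA implementation): symbols are written into a depth × width array by rows and read out by columns, so a channel burst no longer than the depth is dispersed such that each codeword sees at most one of its symbols hit.

```python
# A minimal sketch of a block interleaver / de-interleaver: write rows, read
# columns, so a burst of channel errors is spread across many codewords after
# de-interleaving.  depth and width below are illustrative assumptions.

def interleave(symbols, depth, width):
    assert len(symbols) == depth * width
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]   # read by columns

def deinterleave(symbols, depth, width):
    assert len(symbols) == depth * width
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]   # read by rows

if __name__ == "__main__":
    depth, width = 4, 8                      # 4 codewords of 8 symbols each
    data = list(range(depth * width))
    tx = interleave(data, depth, width)
    tx[10:14] = ["X"] * 4                    # burst of 4 consecutive channel errors
    rx = deinterleave(tx, depth, width)
    # after de-interleaving, each 8-symbol codeword contains at most one error
    for r in range(depth):
        print(rx[r * width:(r + 1) * width])
```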

11.
Three new techniques are proposed for constructing a class of codes that extends the protection provided by previous single error correcting (SEC)-double error detecting (DED)-single byte error detecting (SBD) codes. The proposed codes are systematic odd-weight-column SEC-DED-SBD codes that also correct any odd number of erroneous bits per byte, where a byte represents a cluster of b bits of the codeword that are fed by the same memory chip or card. These codes are useful in practical applications to enhance the reliability and data integrity of byte-organized computer memory systems against transient, intermittent, and permanent failures. In particular, they represent a good tradeoff between the overhead in terms of additional check bits and the reliability improvement, owing to the capability to correct at least 50% of the multiple errors per byte.

12.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the occurrence of more than one error in the memory cells has become more serious in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It is based on the fact that soft errors in the encoder have an effect similar to soft errors in a memory word, and it is achieved by using logic sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because those errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.

13.
This paper presents a high-level error detection and correction method, called the HVD code, to tolerate multiple bit upsets (MBUs) occurring in memory cells. The proposed method uses parity codes in four directions over the data part to assure the reliability of memories. The method is very powerful in error detection, while its error correction coverage is also acceptable considering its low computing latency. The HVD code is useful for applications in which high error detection coverage is very important, such as memory systems, and it can of course be combined with other protection codes that have high correction coverage but low detection coverage. The proposed method is evaluated using more than one billion multiple-fault-injection experiments: multiple bit flips were randomly injected into different segments of a memory system and the fault detection and correction coverages were calculated. The results show that 100% of the injected faults can be detected, and we prove that the method can correct up to three bit upsets. Some hardware implementation issues are investigated to show the tradeoffs between different implementation parameters of the HVD method.
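As an illustration of the parity-per-direction idea, the sketch below implements only the horizontal and vertical parities (two of the four directions used by HVD), which already suffice to locate and correct a single bit upset at the intersection of a failing row parity and a failing column parity. The data block and its size are assumptions.

```python
# A minimal sketch of the horizontal/vertical part of a multi-directional
# parity scheme: a single flipped bit is located where a failing row parity
# and a failing column parity intersect, and is corrected by flipping it back.

def parities(block):
    rows = [sum(r) % 2 for r in block]                       # horizontal parities
    cols = [sum(c) % 2 for c in zip(*block)]                 # vertical parities
    return rows, cols

def correct_single(block, rows0, cols0):
    rows1, cols1 = parities(block)
    bad_r = [i for i, (a, b) in enumerate(zip(rows0, rows1)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols0, cols1)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:                  # single bit flip located
        i, j = bad_r[0], bad_c[0]
        block[i][j] ^= 1                                     # flip it back
    return block

if __name__ == "__main__":
    data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
    ref_rows, ref_cols = parities(data)                      # stored check bits
    data[2][3] ^= 1                                          # inject an upset
    correct_single(data, ref_rows, ref_cols)
    print(data)                                              # original block restored
```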

14.
Nowadays, multibit error correction codes (MECCs) are effective approaches to mitigating multiple bit upsets (MBUs) in memories. As technology scales, combinational circuits have become more susceptible to radiation-induced single event transients (SETs), so transient faults in encoding and decoding circuits are more frequent than before. This paper first proposes a new MECC, called the Mix code, to mitigate MBUs in fault-secure memories. Considering the structural characteristics of MECCs, Euclidean Geometry Low Density Parity Check (EG-LDPC) codes and Hamming codes are combined in the proposed Mix code to protect memories against MBUs with low redundancy overheads. A fault-secure scheme is then presented that can tolerate transient faults in both the storage cells and the encoding and decoding circuits. The proposed fault-secure scheme has remarkably lower redundancy overheads than existing fault-secure schemes, and it is suitable for commonly used access data widths (e.g., 2^n bits) between the system bus and memory. Finally, the proposed scheme has been implemented in Verilog and validated through a wide set of simulations. The experimental results reveal that the proposed scheme can effectively mitigate multiple errors in whole memory systems: it not only reduces the redundancy overhead of the storage array but also improves the performance of the MECC circuits in fault-secure memory systems.

15.
Memory Reliability Improvement Based on Maximized Error-Correcting Codes
Error-correcting codes (ECC) offer an efficient way to improve the reliability and yield of memory subsystems. ECC-based protection is usually provided on a memory-word basis, such that the number of data bits in a codeword corresponds to the amount of information that can be transferred during a single memory access operation. Consequently, the codeword length is not the maximum allowed by a given number of check bits, since the number of data bits is constrained by the width of the memory data interface. This work investigates the additional error correction opportunities offered by the absence of a perfect match between the numbers of data bits and check bits in some widespread ECCs. A method is proposed for the selection of multi-bit errors that can be additionally corrected with a minimal impact on ECC decoder latency. These methods were applied to single-bit error correction (SEC) codes and double-bit error correction (DEC) codes. Reliability improvements are evaluated for memories in which all errors affecting the same number of bits in a codeword are independent and identically distributed. It is shown that applying the proposed methods to conventional DEC codes can improve the mean time to failure (MTTF) of memories by up to 30%. Maximized versions of the DEC codes are also proposed in which all adjacent triple-bit errors become correctable without affecting the maximum number of triple-bit errors that can be made correctable.

16.
In certain memory systems the most common error is a single error, and the next most common is two errors in positions that are stored physically adjacent in the memory. In this correspondence we present optimal codes for recovering from such errors: we correct single errors and detect double adjacent errors. For detecting adjacent errors we consider codes that are byte-organized. In the binary case, it is clear that the length of the code is at most 2^r − r − 1, where r is the redundancy of the code; we summarize the known results and some new ones for this case. For the nonbinary case we show an upper bound, called "the pairs bound," on the length of such codes. Over GF(3), codes with bytes of size 2 that attain the bound exist if and only if perfect codes with minimum Hamming distance 5 over GF(3) exist. Over GF(4), codes that attain the bound with byte size 2 exist for all redundancies. For most other parameters we prove the nonexistence of codes that attain the bound.

17.
Caches, which account for much of a CPU chip's area and transistor count, are likely targets for transient single and multiple faults induced by energetic particles. This paper presents: (1) a new fault detection scheme, called GParity, for the tag arrays of cache memories, and (2) a cache architecture that improves performance as well as dependability. In this architecture, the cache space is divided into sets of different sizes with different tag lengths. Using the proposed fault detection scheme, when single or multiple errors are detected in a word, the word is rewritten with its correct data from memory and its GParity code is recomputed. The error detection scheme and the cache architecture have been evaluated using trace-driven simulation with soft error injection and the SPEC 2000 applications. Moreover, reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results for the GParity code are compared with those of other protection codes and with memory systems without redundancy or with single parity codes. The results show that the error detection improvement varies between 66% and 96% compared with the single parity already available in microprocessors.

18.
In this paper we propose memory protection architectures based on nonlinear single-error-correcting, double-error-detecting (SEC-DED) codes. Linear SEC-DED codes, widely used for the design of reliable memories, cannot detect and may miscorrect many errors with large Hamming weights. This can be a serious disadvantage for many modern technologies in which error distributions are hard to estimate and multi-bit errors are highly probable. The proposed protection architectures have fewer undetectable errors, and fewer errors that are miscorrected by all codewords, than architectures based on linear codes of the same dimension, at the cost of a small increase in latency penalty, area overhead, and power consumption. The nonlinear SEC-DED codes are generalized from existing perfect nonlinear codes (Vasil’ev codes, Probl Kibern 8:375–378, 1962; Phelps codes, SIAM J Algebr Discrete Methods 4:398–403, 1983; and codes based on one-switching constructions, Etzion and Vardy, IEEE Trans Inf Theory 40:754–763, 1994). We present the error-correcting algorithms, investigate and compare the error detection and correction capabilities of the proposed nonlinear SEC-DED codes with those of linear extended Hamming codes, and show that replacing linear extended Hamming codes by the proposed nonlinear SEC-DED codes results in a drastic improvement in the reliability of memory systems in the case of repeating errors or high multi-bit error rates. The proposed approach can be applied to RAM, ROM, FLASH, and disk memories.

19.
This paper proposes a methodology, implemented in a tool, to automatically generate the main classes of error control codes (ECCs) widely applied in computer memory systems to increase reliability and data integrity. New code construction techniques that extend the features of previous single error correcting (SEC)-double error detecting (DED)-single byte error detecting (SBD) codes have been integrated into the tool. The proposed techniques construct systematic odd-weight-column SEC-DED-SBD codes with odd-bit-per-byte error correcting (OBC) capabilities to enhance reliability in high-speed memory systems organized as multiple bits per chip or per card.

20.
Two error correction schemes are proposed for word-oriented binary memories that can be affected by erasures, i.e., errors with a known location but an unknown value. The erasures considered here are due to the drifting of the electrical parameter used to encode information outside the normal ranges associated with a logic 0 or a logic 1 value. For example, a dielectric breakdown in a magnetic memory cell may reduce its electrical resistance appreciably below the levels that correspond to logic 0 and logic 1 values stored in healthy memory cells. Such deviations can be sensed during memory read operations, and the acquired information can be used to boost the correction capability of an error-correcting code (ECC). The proposed schemes enable the correction of double-bit errors based on the combination of erasure information with single-bit error correction and double-bit error detection (SEC-DED) codes or shortened SEC codes. The correction of single-bit errors is always guaranteed. Ways to increase the number of double-bit and triple-bit errors that can be detected by shortened SEC and SEC-DED codes are considered in order to augment the error correction capability of the proposed solutions.
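A small sketch of erasure-assisted SEC-DED decoding in the spirit of the scheme described above (the (8,4) extended Hamming code and the brute-force decoder are illustrative choices, not the paper's construction): since the erased position is known, both possible values are tried, and only the trial that lands within Hamming distance 1 of a codeword is accepted, which corrects one erasure plus one random bit error.

```python
# A minimal sketch of erasure-assisted decoding with a SEC-DED code (d_min = 4):
# trying both values of the erased bit and accepting the trial within Hamming
# distance 1 of a codeword corrects one erasure plus one random bit error.
from itertools import product

# parity-check matrix of an extended Hamming (8,4) SEC-DED code
H = [
    [1, 1, 1, 1, 1, 1, 1, 1],   # overall parity row
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

CODEWORDS = [list(w) for w in product((0, 1), repeat=8)
             if syndrome(w) == (0, 0, 0, 0)]           # the 16 valid codewords

def decode_with_erasure(word, erased_pos):
    """Correct one erasure (known position) plus at most one random bit error."""
    for guess in (0, 1):
        trial = list(word)
        trial[erased_pos] = guess
        for cw in CODEWORDS:
            if sum(a != b for a, b in zip(trial, cw)) <= 1:
                return cw                               # within SEC range: accept
    return None                                         # uncorrectable

if __name__ == "__main__":
    sent = CODEWORDS[9]
    rx = list(sent)
    rx[2] ^= 1                                          # random single-bit error
    rx[5] = None                                        # erased cell (value unknown)
    print(decode_with_erasure(rx, 5) == sent)           # -> True
```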

