Similar Documents
20 similar documents found (search time: 31 ms)
1.
Soft errors are an important issue for circuit reliability. To mitigate their effects on system functionality, different techniques are used. In many cases, Error Correcting Codes (ECCs) are used to protect circuits. Single Error Correction (SEC) codes are commonly used in memories and can effectively remove errors as long as there is only one error per word. However, soft errors may also affect the circuits that implement the ECC: the encoder and the decoder. In this paper, the protection against soft errors in the ECC encoder is studied and an efficient fault-tolerant implementation is proposed.
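As a generic illustration of the SEC codes discussed in these abstracts (a minimal sketch, not any paper's actual implementation), a Hamming(7,4) encoder/decoder corrects any single-bit error per word:

```python
# Illustrative Hamming(7,4) SEC sketch: 4 data bits, 3 parity bits,
# any single flipped bit in the 7-bit word is located and corrected.

def hamming74_encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # parity covering positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                     # parity covering positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                     # parity covering positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_decode(c):
    """Correct any single-bit error and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # recheck positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # recheck positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # recheck positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3       # nonzero syndrome = error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1              # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                              # inject a single-bit soft error
assert hamming74_decode(code) == word
```

A soft error inside the encoder itself, by contrast, can produce a wrong codeword that the decoder later "corrects" into wrong data, which is exactly the failure mode the paper addresses.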

2.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the phenomenon of more than one error in the memory cells has become more serious in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It is based on the fact that soft errors in the encoder have a similar effect to soft errors in a memory word, and is achieved by using logic-sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data can be ensured because those errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.

3.
Network on Chip (NoC) is an enabling methodology of integrating a very high number of intellectual property (IP) blocks in a single System on Chip (SoC). A major challenge that NoC design is expected to face is the intrinsic unreliability of the interconnect infrastructure under technology limitations. Research must address the combination of new device-level defects or error-prone technologies within systems that must deliver high levels of reliability and dependability while satisfying other hard constraints such as low energy consumption. By incorporating novel error correcting codes it is possible to protect the NoC communication fabric against transient errors and at the same time lower the energy dissipation. We propose a novel, simple coding scheme called Crosstalk Avoiding Double Error Correction Code (CADEC). Detailed analysis followed by simulations with three commonly used NoC architectures show that CADEC provides significant energy savings compared to previously proposed crosstalk avoiding single error correcting codes and error-detection/retransmission schemes.

4.
In this paper we propose memory protection architectures based on nonlinear single-error-correcting, double-error-detecting (SEC-DED) codes. Linear SEC-DED codes widely used for the design of reliable memories cannot detect, and may miscorrect, many errors of large Hamming weight. This can be a serious disadvantage for many modern technologies, where error distributions are hard to estimate and multi-bit errors are highly probable. The proposed protection architectures have fewer undetectable errors and fewer errors that are miscorrected by all codewords than architectures based on linear codes of the same dimension, at the cost of a small increase in latency, area overhead and power consumption. The nonlinear SEC-DED codes are generalized from existing perfect nonlinear codes (Vasil’ev codes, Probl Kibern 8:375–378, 1962; Phelps codes, SIAM J Algebr Discrete Methods 4:398–403, 1983; and the codes based on one-switching constructions, Etzion and Vardy, IEEE Trans Inf Theory 40:754–763, 1994). We present the error-correcting algorithms, investigate and compare the error detection and correction capabilities of the proposed nonlinear SEC-DED codes to linear extended Hamming codes, and show that replacing linear extended Hamming codes by the proposed nonlinear SEC-DED codes results in a drastic improvement in the reliability of memory systems in the case of repeating errors or high multi-bit error rates. The proposed approach can be applied to RAM, ROM, FLASH and disk memories.

5.
Single Error Correction Double Error Detection (SEC-DED) codes are widely used to protect memories from soft errors due to their simple implementation. Their limitation, however, is that double-bit errors can only be detected, not corrected. In this paper, a method to recover some of the bits when a double error occurs is presented. This can be of interest for applications in which some bits store important information, for example control flags or the more significant bits of a value. For those, the proposed technique can in some cases determine whether those bits have been affected by the double error and, when not, safely recover the correct values. However, the percentage of times that the bits can be recovered is small. The proposed scheme is therefore extended to increase this percentage by duplicating or triplicating the critical bits inside the word when there are spare bits. No modification to the decoding circuitry is needed, as the scheme can be implemented in the exception handler that is executed when a double error occurs. This facilitates the use of the proposed scheme in existing designs. Another option is to implement part of the scheme in hardware, which can be done at low cost.

6.
沈云付 (Shen Yunfu), 潘磊 (Pan Lei). 《电子学报》 (Acta Electronica Sinica), 2013, 41(8): 1615-1621
Building on previous work on single-error detection and correction for ternary Hamming codes, this paper further studies error detection and correction methods for ternary Hamming codes. The form of the extended ternary Hamming code is given. By analyzing the errors of the extended ternary Hamming code, the principles of single-error correction and double-error detection are obtained, and an error-correction code table for the extended ternary Hamming code is derived, from which a single-error correction method is proposed. A conceptual block diagram and the functional units of an extended ternary Hamming code error detection and correction system based on a ternary optical computer are given, providing an approach for the optical design of such a system.

7.
As microelectronics technology continuously advances to deep submicron scales, Multiple Cell Upsets (MCUs) induced by radiation in memory devices become more likely. The implementation of a robust Error Correction Code (ECC) is a suitable solution. However, the more complex an ECC, the greater its delay, area usage and energy consumption. An ECC with an appropriate balance between error coverage and computational cost is essential for applications where fault tolerance is heavily needed and energy resources are scarce. This paper describes the conception, implementation, and evaluation of the Column-Line-Code (CLC), a novel algorithm for the detection and correction of MCUs in memory devices, which combines extended Hamming code and parity bits. In addition, this paper evaluates variations of the 2D CLC scheme and proposes an additional operation, called extended mode, to correct more MCU patterns. We compared the implementation cost, reliability level, detection/correction rate and mean time to failure among the CLC versions and other correction codes, showing that the CLCs have high MCU correction efficacy with reduced area, power and delay costs.

8.
Many applications in Wireless Sensor Networks (WSNs) require all data to be transmitted with minimal or no loss, which implies that reliability is an important characteristic. In any WSN, there are two basic approaches to recover erroneous packets. One is to use Automatic Repeat reQuest (ARQ), and the other is Forward Error Correction (FEC). Error-control systems based on ARQ use error detection coupled with retransmission requests to maximize reliability at some cost to throughput. Error detection is generally provided by the lower protocol layers, which use checksums (e.g. Cyclic Redundancy Checks (CRCs)) to discard corrupted packets and trigger retransmission requests. In these solutions, even a single erroneous bit can render a packet useless to the end user. Bearing in mind that in WSNs power is scarce and is primarily consumed by wireless transmission and reception, we propose to use FEC rather than ARQ. FEC corrects packets by transmitting additional information bits with the aim of reducing the frequency of retransmission requests. Data bytes are optionally encoded, after being fragmented, with an Error Correcting Code (ECC) to recover data bits in the case of a small number of bit errors. Various FEC encoding schemes, such as erasure- and Hamming-based codes, are available. The choice of encoding scheme depends on the application and the error characteristics (error models/patterns) of the wireless channel. Erasure encoding is preferable when the error pattern is burst-dominated, while Hamming encoding is preferable when noise causes random errors. Our observations show that most bit errors are single-bit or double-bit errors, and that burst errors are present but rare. In this work, an efficient Hamming-based FEC encoding scheme of relatively low complexity, called Two Dimensional-Single Error Correcting-Double Error Detecting (2D-SEC-DED), intended to minimize packet retransmissions and save energy, has been developed. This FEC scheme can correct all single-bit and 99.99% of double/multiple-bit errors. Since the radio block is the dominant energy consumer within a Sensor Node (SN), we focus on the question of which metric to use, and under what conditions, to accurately characterize the quality of communication with respect to reliable data transfer at minimal energy consumption. To this end, we first show that, when the bit error rate is not high and most errors are single-bit, the 2D-SEC-DED encoding scheme is more energy efficient than erasure encoding. Second, the advantages of 2D-SEC-DED over CRC-only (NO-FEC) encoding, in terms of decreasing the energy consumption and increasing the reliability of the radio block, are derived through the implementation of two versions of the Rendezvous Protocol for Long Living (RPLL), referred to as Modified-RPLL (M-RPLL, FEC-based) and Ordinary-RPLL (O-RPLL, NO-FEC), respectively.
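The two-dimensional parity idea underlying schemes of this kind can be sketched as follows (an illustrative toy, not the paper's 2D-SEC-DED construction): row and column parities locate a single flipped bit at the intersection of the failing checks.

```python
# Toy 2D parity sketch: store one parity bit per row and per column of a
# bit matrix; a single flipped bit fails exactly one row check and one
# column check, so it can be located and corrected.

def parities(matrix):
    rows = [sum(r) % 2 for r in matrix]
    cols = [sum(c) % 2 for c in zip(*matrix)]
    return rows, cols

def correct_single(matrix, rows, cols):
    """Correct one flipped bit using previously stored row/column parities."""
    new_rows, new_cols = parities(matrix)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, new_rows)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, new_cols)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:       # single-bit error located
        matrix[bad_r[0]][bad_c[0]] ^= 1
    return matrix

data = [[1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0]]
r, c = parities(data)
data[1][2] ^= 1                                   # single-bit soft error
assert correct_single(data, r, c) == [[1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0]]
```

Two flips in the same row (or column) leave two failing column (or row) checks with no unique intersection, which is why such patterns are detectable but not always correctable in this simple form.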

9.
Error correction and error detection techniques are often used in wireless transmission systems. The Asynchronous Transfer Mode (ATM) employs Header Error Control (HEC). Since ATM specifications were developed for high-quality optical fiber transmission systems, HEC has single-bit error correction and multiple-bit error detection capabilities. When HEC detects a multiple-bit error, the cell is discarded. However, wireless ATM requires a more powerful Forward Error Correction (FEC) scheme to improve the Bit Error Rate (BER) performance, resulting in a reduction in transmission power and antenna size. The concatenation of wireless FEC and ATM HEC may affect cell loss performance. This paper proposes error correction and error detection techniques suitable for wireless ATM and analyzes the performance of the proposed schemes. This revised version was published online in June 2006 with corrections to the cover date.

10.
The conditional probability (fraction) of the successful decoding of erasure patterns of high (greater than the code distance) weights is investigated for linear codes with the partially known or unknown weight spectra of code words. The estimated conditional probabilities and the methods used to calculate them refer to arbitrary binary linear codes and binary Hamming, Panchenko, and Bose–Chaudhuri–Hocquenghem (BCH) codes, including their extended and shortened forms. Error detection probabilities are estimated under erasure-correction conditions. The product-code decoding algorithms involving the correction of high weight erasures by means of component Hamming, Panchenko, and BCH codes are proposed, and the upper estimate of decoding failure probability is presented.
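The key property of erasures exploited here, that a known error position is much cheaper to repair than an unknown one, can be seen in a minimal sketch (my own example, not the paper's algorithm): a single parity check cannot correct any unknown-position error, yet it fills in one erasure exactly.

```python
# Toy erasure-filling sketch: in an even-parity codeword, one erased bit
# (known position, unknown value) is recovered by restoring overall parity.

def fill_erasure(bits, erased_index):
    """Recover one erased bit of an even-parity codeword in place."""
    known = [b for i, b in enumerate(bits) if i != erased_index]
    bits[erased_index] = sum(known) % 2   # value that makes total parity even
    return bits

cw = [1, 0, 1, 1, 1]                      # even-parity codeword
cw_err = cw[:]
cw_err[2] = None                          # erasure at a known position
assert fill_erasure(cw_err, 2) == cw
```

In general a code of minimum distance d corrects d-1 erasures but only (d-1)//2 unknown-position errors, which is what makes erasure decoding beyond the code distance interesting.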

11.
Hash tables are one of the most commonly used data structures in computing applications. They are used, for example, to organize a data set so that searches can be performed efficiently. The data stored in a hash table is commonly held in memory and can suffer errors. To ensure that data stored in a memory is not corrupted when it suffers errors, Error Correction Codes (ECCs) are commonly used. In this research note, a scheme to efficiently implement ECCs for the entries stored in hash tables is presented. The main idea is to use an ECC as the hash function that is used to construct the table. This eliminates the need to store the parity bits for the entries in memory, as they are implicit in the hash table construction, thus reducing the implementation cost.
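A toy version of this idea (an illustrative sketch with made-up parity masks, not the note's exact design): the bucket index is the key's ECC check bits, so the check bits never need to be stored.

```python
# Sketch: hash keys into buckets using their ECC check (parity) bits.
# The bucket index itself then acts as the stored parity of every key in it.

def parity_bits(key, masks):
    """ECC check bits of an integer key: one parity bit per check mask."""
    return tuple(bin(key & m).count("1") % 2 for m in masks)

# hypothetical parity-check masks for small integer keys (my own choice)
MASKS = (0b1011, 0b1101, 0b1110)

table = {}

def insert(key, value):
    # the check bits ARE the bucket index, so they are stored implicitly
    table.setdefault(parity_bits(key, MASKS), []).append((key, value))

def lookup(key):
    for k, v in table.get(parity_bits(key, MASKS), []):
        if k == key:
            return v
    return None

insert(0b0110, "entry")
assert lookup(0b0110) == "entry"
```

On a read, recomputing the check bits of a stored key and comparing them with the index of the bucket it sits in detects (and, with a suitable code, can correct) bit errors in the entry without any extra stored parity.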

12.
On-chip interconnects in very deep submicrometer technology are becoming more sensitive and prone to errors caused by power supply noise, crosstalk, delay variations and transient faults. Error-correcting codes (ECCs) can be employed to provide signal transmission with the necessary data integrity. In this paper, the impact on bus power consumption of using ECCs to encode the information on a very deep submicrometer bus is analyzed. To this purpose, both the bus wires (with mutual capacitances, drivers, repeaters and receivers) and the encoding-decoding circuitry are accounted for. A detailed analysis of power dissipation in deep submicrometer fault-tolerant busses using single-error-correcting Hamming codes shows that no power saving is possible by choosing among different Hamming codes. A novel scheme, called dual rail, is then proposed. It is shown that dual rail, combined with a proper bus layout, can provide a reduction in energy consumption. In particular, it is shown how the passive elements of the bus (bottom and mutual wire capacitances), the active elements of the bus (buffers) and the error-correcting circuits contribute to power consumption, and how different tradeoffs can be achieved. The analysis presented in this paper has been performed considering a realistic bus structure, implemented in a standard 0.13-μm CMOS technology.

13.
An introduction to redundancy encoding as used in digital data communications is presented. The need for redundancy is first addressed, followed by a discussion of the binary symmetric channel, burst noise channels, and the use of interleaving to randomize burst errors. The concept of redundancy is presented next, showing how it is used to supply the highest possible degree of error detection or how it can be applied to provide for the detection and correction of a lesser number of errors. The use of some codes to correct some errors and also to detect, but not correct, additional errors is discussed. The properties of block codes are developed, beginning with repetition codes, then covering single-parity-check codes, Hamming (single-error-correcting) codes, and Bose-Chaudhuri-Hocquenghem (BCH) codes. The basic properties and structures of these codes are emphasized, with examples of implementation procedures for both encoding and decoding.
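The progression described above starts from the repetition code, which can be sketched in a few lines (a generic illustration): each bit is sent three times and decoded by majority vote, correcting one error per group.

```python
# Minimal 3x repetition code: the simplest block code, with rate 1/3.

def rep3_encode(bits):
    """Transmit each bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(coded), 3):
        out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
channel = rep3_encode(msg)
channel[4] ^= 1                  # one flipped bit per 3-bit group is tolerable
assert rep3_decode(channel) == msg
```

Hamming and BCH codes achieve the same single-error correction with far less redundancy, which is why the development then moves on to them.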

14.
Reliability is a critical factor for systems operating in radiation environments. Among the different components in a system, memories are one of the parts most sensitive to soft errors due to their relatively large area. Because of their large cost, traditional techniques like Triple Modular Redundancy are not used to protect memories. A typical approach is to apply Error Correction Codes to correct single errors and detect double errors. This type of code, for example those based on Hamming, provides an initial level of protection. Detected single errors are usually corrected using scrubbing, by which the memory positions are periodically re-written after a fixed (deterministic scrubbing) or variable period (probabilistic scrubbing). These traditional models usually offer good results when calculating the reliability of memories (e.g. through the Mean Time To Failure). However, there are some particularities that, to the best of our knowledge, are not captured by these approaches. One of them is how double errors are handled. In a traditional model, two errors in the same word always produce a system failure (only single errors can be corrected). However, if the two (or more) errors affect the same bit, the second one either reinforces the first one (leaving just a single error) or cancels it. In both scenarios, the resulting situation does not trigger a system failure, which has a direct impact on the reliability of the memory. In this paper, traditional reliability models are refined to handle these scenarios, which produces a more precise analysis in the calculation of the mean time to failure for memory systems.
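The scenario that the refined model captures can be shown with a toy example (my own illustration, not the paper's model): two upsets on the same bit cancel, leaving no residual error, whereas upsets on distinct bits form a true double error.

```python
# Bit-flip toy model: an upset is an XOR with a single-bit mask.

word = 0b10110100

# two upsets that hit the same bit cancel out: a SEC code sees no error at all
same_bit = word ^ (1 << 3) ^ (1 << 3)
assert same_bit == word

# upsets on distinct bits form a genuine double error (beyond SEC correction)
distinct = word ^ (1 << 3) ^ (1 << 5)
assert bin(distinct ^ word).count("1") == 2
```

A traditional model counts both cases as failures, so it underestimates memory reliability whenever repeated upsets on the same cell are non-negligible.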

15.
As CMOS technology scales down, multiple cell upsets (MCUs) caused by a single radiation particle have become one of the most challenging reliability issues for memories used in space applications. Error correction codes (ECCs) are commonly used to protect memories against errors. Single error correction-double error detection (SEC-DED) codes are the simplest and most typical ones, but they can only correct single errors. More advanced ECCs, which can provide enough protection for memories, cost more overhead due to their complex decoders. Orthogonal Latin square (OLS) codes are one type of one-step majority logic decodable (OS-MLD) codes that can be decoded with low complexity and delay. However, there are no OLS codes that directly fit 32-bit data, a typical data size in memories. In this paper, (55, 32) and (68, 32) codes derived from (45, 25) and (55, 25) OLS codes are proposed in order to improve the protection OLS codes offer for 32-bit data. The proposed codes maintain the correction capability of OLS codes and can be decoded with low delay and complexity. The evaluation of the implementations of these codes is presented and compared with that of the shortened (60, 32) and (76, 32) OLS codes. The results show that the area and power of a radiation-hardened SRAM immune to 2-bit MCUs and protected by the proposed codes are reduced by 7.76% and 6.34%, respectively. In the 3-bit MCU-immune case, the area and power of the whole circuit are reduced by 8.82% and 4.56% when the proposed codes are used.

16.
In this paper, the effects of digital transmission errors on H.263 codecs are analyzed and the transmission of H.263 coded video over a TDMA radio link is investigated. The impact of channel coding and interleaving on video transmission quality is simulated for different channel conditions. Fading on radio channels causes significant transmission errors, and H.263 coded bit streams are very vulnerable to errors. Powerful Forward Error Correction (FEC) codes are therefore necessary to protect the data so that it can be successfully transmitted at acceptable signal power levels. FEC, however, imposes a high bandwidth overhead. In order to make the best use of the available channel bandwidth and to alleviate the overall impact of errors on the video sequence, a two-layer data partitioning and unequal error protection scheme based on H.263 is also studied. The scheme can tolerate more transmission errors and leads to more graceful degradation in quality when the channel SNR decreases. In lossy environments, it can improve the video transmission quality at no extra bandwidth cost. Part of this paper was presented at the IS&T/SPIE Symposium on Electronic Imaging, San Jose, CA, USA, January 1996.

17.
Two error correction schemes are proposed for word-oriented binary memories that can be affected by erasures, i.e. errors with a known location but unknown value. The erasures considered here are due to the drifting of the electrical parameter used to encode information outside the normal ranges associated with a logic 0 or a logic 1 value. For example, a dielectric breakdown in a magnetic memory cell may reduce its electrical resistance well below the levels which correspond to logic 0 and logic 1 values stored in healthy memory cells. Such deviations can be sensed during memory read operations, and the acquired information can be used to boost the correction capability of an error-correcting code (ECC). The proposed schemes enable the correction of double-bit errors by combining erasure information with single-bit error correction, double-bit error detection (SEC-DED) codes or shortened SEC codes. The correction of single-bit errors is always guaranteed. Ways to increase the number of double-bit and triple-bit errors that can be detected by shortened SEC and SEC-DED codes are considered in order to augment the error correction capability of the proposed solutions.

18.
Unequal two-segment error protection coding for LZW-compressed data
唐红 (Tang Hong), 许鸿川 (Xu Hongchuan). 《电讯技术》 (Telecommunication Engineering), 2002, 42(1): 86-90
The LZW data compression method proposed by T.A.Welch has been widely applied to text compression. However, because bit errors can do great damage to compressed data, error-protection coding of the compressed data is important and indispensable. This paper analyzes the effect of bit errors on LZW-compressed data and points out that the same bit error does far more damage to the earlier part of the compressed data, which is used to rebuild the dictionary, than to the later data. An unequal two-segment error-protection coding method is proposed that gives stronger protection to the earlier, dictionary-rebuilding part of the compressed data. Computer simulations show that this method is more effective than conventional error-protection methods.

19.
Wireless channels are highly affected by unpredictable factors such as cochannel interference, adjacent channel interference, propagation path loss, shadowing, and multipath fading. The unreliability of media seriously degrades the transmission quality. Automatic Repeat reQuest (ARQ) and Forward Error Correction (FEC) schemes are frequently used in wireless environments to reduce the high bit error rate of the channel. In this paper, we propose an adaptive error‐control scheme for wireless networks on the basis of dynamic variation of error‐control strategy as a function of the channel bit error rate, desired QoS, and number of receivers. Reed–Solomon codes are used throughout this study because of their appropriate characteristics in terms of powerful coding and implementation simplicity. Simulation results show that our adaptive error‐control protocol decreases the waste of bandwidth due to retransmissions or extra coding overheads while satisfying the QoS requirements of the receivers. Copyright © 2002 John Wiley & Sons, Ltd.

20.
Coding for system-on-chip networks: a unified framework
Global buses in deep-submicron (DSM) system-on-chip designs consume significant amounts of power, have large propagation delays, and are susceptible to errors due to DSM noise. Coding schemes exist that tackle these problems individually. In this paper, we present a coding framework derived from a communication-theoretic view of a DSM bus to jointly address power, delay, and reliability. In this framework, the data is first passed through a nonlinear source coder that reduces self and coupling transition activity and imposes a constraint on the peak coupling transitions on the bus. Next, a linear error control coder adds redundancy to enable error detection and correction. The framework is employed to efficiently combine existing codes and to derive novel codes that span a wide range of tradeoffs between bus delay, codec latency, power, area, and reliability. Using simulation results in 0.13-μm CMOS technology, we show that coding is a better alternative to repeater insertion for delay reduction as it reduces power dissipation at the same time. For a 10-mm 4-bit bus, we show that a bus employing the proposed codes achieves up to 2.17× speed-up and 33% energy savings over a bus employing Hamming code. For a 10-mm 32-bit bus, we show that 1.7× speed-up and 27% reduction in energy are achievable over an uncoded bus by employing low-swing signaling without any loss in reliability.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号