Similar Literature
20 similar documents found.
1.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the occurrence of more than one error in the memory cells has become more common in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a dominant multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It exploits the fact that soft errors in the encoder have an effect similar to soft errors in a memory word, and it is implemented by using shared logic blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because those errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.
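To make the encoder structure concrete, here is a minimal sketch of how an ECC encoder of this kind computes each parity bit as an XOR sum over data bits selected by a row of a parity-check matrix. The 4×8 matrix is a toy example, not the paper's actual SEC-DAEC code; the point is that any logic shared between parity computations lets a single fault corrupt several parity bits, which is exactly what the proposed sharing constraint limits to two adjacent ones.

```python
# Sketch: generic parity computation in a SEC-DAEC-style encoder.
# H_DATA is a toy 4x8 selector matrix, NOT the code from the paper.
import numpy as np

H_DATA = np.array([  # row r selects the data bits that feed parity bit r
    [1, 1, 0, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0, 1, 1, 0],
], dtype=np.uint8)

def encode(data_bits: np.ndarray) -> np.ndarray:
    """Each parity bit is the XOR (mod-2 sum) of its selected data bits."""
    return (H_DATA @ data_bits) % 2

data = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
print(encode(data))
```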

2.
This paper presents a high-level error detection and correction method, called the HVD code, to tolerate multiple bit upsets (MBUs) occurring in memory cells. The proposed method uses parity codes in four directions over the data part to assure the reliability of memories. The method is very powerful in error detection, while its error correction coverage is also acceptable given its low computing latency. The HVD code is useful for applications in which high error detection coverage is very important, such as memory systems. It can also be used in combination with other protection codes that have high correction coverage but low detection coverage. The proposed method is evaluated using more than one billion multiple-fault injection experiments: multiple bit flips were randomly injected into different segments of a memory system, and the fault detection and correction coverages were calculated. Results show that 100% of the injected faults are detected, and we prove that the method can correct up to three bit upsets. Some hardware implementation issues are investigated to show trade-offs between different implementation parameters of the HVD method.
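As an illustration, here is a minimal sketch of HVD-style parity computation over a square bit block. The four parity sets (horizontal, vertical, diagonal, anti-diagonal) follow the description above; the paper's exact segment layout may differ.

```python
import numpy as np

def hvd_parities(block: np.ndarray):
    """Compute four directional parity sets for an n x n bit block:
    horizontal (per row), vertical (per column), diagonal, anti-diagonal."""
    n = block.shape[0]
    h = block.sum(axis=1) % 2
    v = block.sum(axis=0) % 2
    d = np.array([np.diag(block, k).sum() % 2 for k in range(-(n - 1), n)])
    ad = np.array([np.diag(np.fliplr(block), k).sum() % 2 for k in range(-(n - 1), n)])
    return h, v, d, ad

block = np.random.randint(0, 2, (4, 4))
stored = hvd_parities(block)
block[1, 2] ^= 1                      # inject a single bit upset
detected = any((a != b).any() for a, b in zip(hvd_parities(block), stored))
print(detected)                       # True: the flip violates all four directions
```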

3.
Runlength-limited (RLL) codes are used in magnetic recording. The error patterns that occur in peak-detection magnetic recording systems using a runlength-limited code consist of both symmetric errors and shift errors, referred to collectively as mixed-type errors. A method of providing error control for mixed-type errors in a runlength-limited code composed of (d, k)-constrained sequences is examined. The coding scheme is to insert parity blocks into the constrained information sequence; the parity blocks are chosen to satisfy the constraints and to provide some error control. The cases of single-error detection and single-error correction are investigated, where the single error may be a shift error or a symmetric error. Bounds on the possible lengths of the parity blocks are discussed, and it is shown that the single-error-detection codes are the best possible in terms of parity-block length.
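For reference, here is a small sketch of the (d, k) constraint itself: every run of zeros between consecutive ones must have length between d and k. Boundary conventions for the leading and trailing runs vary between systems, so this checker should be read as one plausible variant.

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) runlength constraint: at least d and at most k zeros
    between consecutive ones (the k limit is also applied here to the
    leading/trailing zero runs; conventions vary)."""
    run = 0
    seen_one = False
    for b in bits:
        if b == 1:
            if seen_one and not (d <= run <= k):
                return False
            seen_one = True
            run = 0
        else:
            run += 1
            if run > k:
                return False
    return True

print(satisfies_dk([0, 1, 0, 0, 1, 0, 0, 0, 1], d=1, k=3))  # True
print(satisfies_dk([1, 1, 0, 1], d=1, k=3))                  # False: adjacent ones
```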

4.
Software implementations of error detection codes are considered slow compared to other parts of a communication system, especially for powerful codes such as CRC. However, we have found that powerful error detection codes can run surprisingly fast in software. We discuss techniques for, and measure the performance of, fast software implementations of the cyclic redundancy check (CRC), weighted sum codes (WSC), the one's-complement checksum, the Fletcher (1982) checksum, the CXOR checksum, and the block parity code. Instruction count alone does not determine the fastest error detection code; our results show that the computer memory hierarchy also affects performance. Although our experiments were performed on a Sun SPARCstation LX, many of the techniques and conclusions apply to other processors and error detection codes. Given the performance of various error detection codes, a protocol designer can choose a code with the speed and error detection power appropriate for the network and application.
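The classic speed technique in this space is byte-at-a-time table lookup, which trades a 1 KB table for an 8x reduction in loop iterations; the table's cache behavior is exactly where the memory hierarchy enters. A minimal sketch for the reflected CRC-32 polynomial, checked against zlib:

```python
def make_crc32_table(poly=0xEDB88320):
    """Precompute the per-byte remainders for the reflected CRC-32 polynomial."""
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

TABLE = make_crc32_table()

def crc32(data: bytes, crc=0xFFFFFFFF) -> int:
    """Byte-at-a-time CRC-32: one table lookup per input byte."""
    for b in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

import zlib
assert crc32(b"hello") == zlib.crc32(b"hello")   # matches the standard CRC-32
```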

5.
To prevent soft errors from causing data corruption, memories are commonly protected with Error Correction Codes (ECCs). To minimize the impact of the ECC on memory complexity, simple codes are commonly used; for example, Single Error Correction (SEC) codes such as Hamming codes are widespread. Power consumption can be reduced by first checking whether the word has errors and then performing the rest of the decoding only when it does. This greatly reduces the average power consumption, as most words have no errors. In this paper, an efficient error detection scheme for Double Error Correction (DEC) Bose–Chaudhuri–Hocquenghem (BCH) codes is presented. The scheme reduces the dynamic power consumption so that it equals that of error detection in a SEC Hamming code.
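The detect-first idea is easiest to see on a SEC Hamming code, so the following is only a sketch of the gating principle; the paper applies the same principle to DEC BCH codes, whose correction step is far more expensive and therefore all the more worth skipping when the syndrome is zero.

```python
import numpy as np

# Hamming(7,4): the columns of H are 1..7 in binary, so a nonzero
# syndrome directly names the erroneous bit position.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)], dtype=np.uint8)

def decode(received: np.ndarray) -> np.ndarray:
    syndrome = (H @ received) % 2
    if not syndrome.any():                 # common case: no error, stop early
        return received                    # correction logic never activates
    pos = int(syndrome @ [1, 2, 4]) - 1    # syndrome encodes the bit index
    corrected = received.copy()
    corrected[pos] ^= 1
    return corrected

c = np.zeros(7, dtype=np.uint8)            # the all-zero word is a codeword
r = c.copy(); r[4] ^= 1                    # single bit error
print(decode(r))                           # corrected back to all zeros
print(decode(c))                           # clean word takes the cheap path
```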

6.
Shortened Hamming codes and their improved variants are widely used in the error detection and correction circuits of aerospace-grade high-reliability memories. Although they are a mature single-error-correcting code, little research has addressed how multiple bit flips within a single byte cause shortened Hamming codes to fail. This paper analyzes the failure of shortened Hamming codes under single-byte multi-bit upsets, enumerates the possible erroneous output patterns, and derives theoretical formulas for their probabilities. Computer simulations in Matlab show that the theoretical results agree well with the experimental ones. Finally, the paper analyzes a scheme adopted by ISSI in its radiation-hardened SRAM design, in which a longer information word is split into two equal halves that are encoded and decoded separately with shortened Hamming codes. The analysis shows that this scheme reduces the probability of a 3-bit-flip output in the failure case.
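An exhaustive toy check of the failure mechanism described above, using an unshortened Hamming(7,4) stand-in (the study concerns shortened codes, where some double-flip syndromes fall outside the column set and the output-pattern probabilities become nontrivial): a SEC decoder miscorrects every double flip into a triple error.

```python
import itertools
import numpy as np

# Hamming(7,4) stand-in: columns of H are 1..7 in binary.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)], dtype=np.uint8)

def sec_decode_error(err: np.ndarray) -> np.ndarray:
    """Apply SEC decoding to an error pattern (by linearity, data is irrelevant)."""
    s = (H @ err) % 2
    out = err.copy()
    if s.any():
        out[int(s @ [1, 2, 4]) - 1] ^= 1   # flip the bit the syndrome names
    return out

for i, j in itertools.combinations(range(7), 2):
    e = np.zeros(7, dtype=np.uint8); e[i] = e[j] = 1
    residual = sec_decode_error(e)
    assert residual.sum() == 3             # every double flip -> 3-bit output error
print("all double flips miscorrected into triple errors")
```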

7.
Low-density parity-check (LDPC) codes are very powerful error-correction codes, with capabilities approaching Shannon's limits. In evaluating the error performance of an LDPC code, computer simulation time becomes a primary concern when tens of millions of noise-corrupted codewords must be decoded, particularly for very long codes. In this paper, we propose modeling the parity-check matrix of an LDPC code with compressed parity-check matrices in the check-node domain (CND) and the bit-node domain (BND). Based on the compressed parity-check matrices, we create two message matrices, one in the CND and one in the BND, and two domain-conversion matrices, one from CND to BND and one from BND to CND. With the proposed message matrices, the data used in the iterative LDPC decoding algorithm can be closely packed and stored within a small memory footprint. Consequently, most of the data can reside in the cache memory, reducing the central processing unit's accesses to main memory and improving simulation time significantly. Furthermore, the messages in one domain can easily be converted to the other domain using the conversion matrices, facilitating message access and update.
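The two compressed representations are essentially the sparse adjacency lists of the Tanner graph, one indexed per check node and one per bit node. A minimal sketch (the matrix and layout here are illustrative, not the paper's exact data structures):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

# Check-node domain (CND): for each check, the bit indices it involves.
cnd = [np.flatnonzero(row).tolist() for row in H]
# Bit-node domain (BND): for each bit, the checks it participates in.
bnd = [np.flatnonzero(col).tolist() for col in H.T]

print(cnd)   # [[0, 1, 3], [1, 2, 4], [0, 2, 5]]
print(bnd)   # [[0, 2], [0, 1], [1, 2], [0], [1], [2]]

# Messages stored in the same compact shape stay small enough to remain
# cache-resident, which is the source of the reported simulation speed-up.
```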

8.
A scheme for storing information in a memory system with defective memory cells using "additive" codes was proposed by Kuznetsov and Tsybakov. When a source message is to be stored in a memory with defective cells, a code vector x masking the defect pattern of the memory is formed by adding a vector, determined by the message and the defect pattern, to the encoded message; then x is stored. The decoding process does not require the defect information. Considerably better bounds on the information rate of codes of this type, capable of masking multiple defects and correcting multiple temporary errors, are presented. The difference between the upper and lower bounds approaches the difference between the best known upper and lower bounds for random-error-correcting linear codes as the word length becomes large. Examples of efficient codes for masking double or fewer defects and correcting multiple temporary errors are presented.

9.
A new coding technique for single error correction and double error detection in computer memory systems is proposed. The number of 1s in the parity check matrix of the proposed code is smaller than in all currently available codes for this purpose, except in two cases where it is almost equal to that of the Hsiao code. This results in simplified encoding and decoding circuitry for error detection and correction.

10.
Gaitanis, N. Electronics Letters, 1984, 20(15): 638-640.
We present cyclic AN arithmetic codes capable of single error correction and multiple unidirectional error detection. These codes can be used throughout a fault-tolerant computer, eliminating the need for separate encoding/decoding and code-translation circuits. We give criteria for determining the unidirectional error detection capability of a given AN code, and we present a new error correction/detection scheme.
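The defining property of AN codes is easy to sketch: a value N is represented as the product A·N, arithmetic on codewords stays inside the code, and any error that changes a word by an amount not divisible by A is detected. The modulus below is an arbitrary example; correction and unidirectional-error detection, as in the paper, require A to be chosen with specific number-theoretic properties.

```python
A = 29  # example check modulus; errors that change a word by a non-multiple
        # of A leave it outside the code and are detected

def encode(n: int) -> int:
    return A * n

def check(word: int) -> bool:
    """A valid AN codeword is an exact multiple of A."""
    return word % A == 0

def decode(word: int) -> int:
    assert check(word), "arithmetic error detected"
    return word // A

# AN codes survive arithmetic directly: A*x + A*y == A*(x + y)
assert decode(encode(5) + encode(7)) == 12
```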

11.
Software-implemented EDAC protection against SEUs
In many computer systems, the contents of memory are protected by an error detection and correction (EDAC) code. Bit flips caused by single event upsets (SEUs) are a well-known problem in memory chips, and EDAC codes have been an effective solution. These codes are usually implemented in hardware using extra memory bits and encoding/decoding circuitry. In systems where EDAC hardware is not available, reliability can be improved by providing the protection in software. Codes and techniques suitable for software implementation of EDAC are discussed and compared, implementation requirements and issues are examined, and some solutions are presented. The paper discusses in detail how system-level and chip-level structures relate to multiple error correction, and a simple solution is presented to make the EDAC scheme independent of these structures. The technique was implemented and used effectively in an actual space experiment. We have observed that SEUs corrupt the operating system or programs of a computer system that has no memory EDAC, forcing frequent resets. Protecting the entire memory (code and data) may not be practical in software; however, this paper demonstrates that software-implemented EDAC is a low-cost solution that protects code segments and can appreciably enhance system availability in a low-radiation space environment.
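A minimal sketch of the software-scrubbing idea, using duplication-plus-parity as a deliberately simple stand-in for the Hamming-type EDAC codes the paper actually compares: a periodic scrub pass identifies a corrupted copy by its parity and restores it from the clean one before a second upset can accumulate.

```python
# Assumption: duplication + parity as a simple software EDAC stand-in.
# Tolerates one upset per cell per scrub interval.

def parity(word: int) -> int:
    return bin(word).count("1") & 1

class ProtectedCell:
    def __init__(self, value: int):
        self.copies = [(value, parity(value)), (value, parity(value))]

    def scrub(self):
        """Periodic scrub: restore any copy whose stored parity mismatches."""
        good = [v for v, p in self.copies if parity(v) == p]
        if good:
            v = good[0]
            self.copies = [(v, parity(v)), (v, parity(v))]

    def read(self) -> int:
        self.scrub()
        return self.copies[0][0]

cell = ProtectedCell(0b1011)
cell.copies[0] = (0b1111, cell.copies[0][1])   # simulate an SEU in copy 0
print(cell.read())                              # 11: restored from copy 1
```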

12.
A generalized low-density parity-check (GLDPC) code is a low-density parity-check code in which the constraint nodes of the code graph are block codes rather than single parity checks. In this paper, we study GLDPC codes whose subcodes are BCH or Reed-Solomon codes under bounded-distance decoding (BDD). The performance of the proposed scheme is investigated in the limit case of an infinite-length (cycle-free) code over a binary erasure channel (BEC), and the corresponding thresholds for iterative decoding are derived. The performance for finite code lengths over a BEC is investigated as well: the structures responsible for decoding failures are defined, and a theoretical analysis over the ensemble of GLDPC codes is derived that yields exact bit and block error rates of the ensemble average. Unfortunately, this study shows that GLDPC codes do not compare favorably with their LDPC counterparts over the BEC. Fortunately, it is also shown that, under certain conditions, the objects identified in the BEC analysis and the corresponding theoretical results remain useful for deriving tight lower bounds on the performance of GLDPC codes over a binary symmetric channel (BSC). Simulation results show that the proposed method yields competitive performance with a good decoding-complexity trade-off on the BSC.
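On the BEC the iterative decoder has a particularly clean form, the peeling decoder: any constraint with exactly one erased bit fills it in, and decoding fails exactly when the remaining erasures form a stopping structure of the kind the analysis above enumerates. A sketch for plain SPC constraints (a GLDPC constraint node would instead run bounded-distance erasure decoding of its BCH/RS subcode):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],     # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def peel(word):
    """word: list of 0/1/None (None = erasure). Resolve erasures iteratively:
    a check with exactly one erased bit recovers it as the XOR of the rest."""
    word = list(word)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i in np.flatnonzero(row) if word[i] is None]
            if len(erased) == 1:
                known = [word[i] for i in np.flatnonzero(row) if word[i] is not None]
                word[erased[0]] = sum(known) % 2
                progress = True
    return word

# [1,1,0,0,1,1] is a codeword of H; erase two positions and recover them.
print(peel([1, None, 0, 0, None, 1]))   # -> [1, 1, 0, 0, 1, 1]
```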

13.
The paper deals with context-oriented codes for concurrent error detection. We consider a fault model in which, in the presence of a fault, the values on the circuit's outputs are arbitrary. This model allows one to design an error detection code without analyzing sensitive parts or error cones in the synthesized circuit. Conventional coding schemes are based on a one-to-one mapping between an original output vector (information word) and a codeword. In this paper, we introduce a different approach, which we call one-to-many coding: each codeword comprises a predefined set of words, and the functional unit acts as an encoder that may map an information word to a different member of its set on each activation. This flexible mapping results in a lower implementation cost for the functional unit and its checker.

14.
A real-time degradable four-way set-associative cache memory control (CMC) LSI is described. Three kinds of errors, address parity errors, comparator errors, and multi-hit errors, trigger graceful degradation by disabling the associative unit in which the fault is detected. The parity generator and the duplicated comparator add no delay to the timing-sensitive path because the circuits are configured in parallel. The multi-hit detector accounts for about 16% of the propagation delay of the critical path, from the external address input to the hit/miss output.

15.
In this paper, both performance and complexity aspects of two-dimensional single-parity-check turbo product codes (I-SPC-TPC) are investigated. Based on the proposed I-SPC-TPC coding scheme, a parallel decoding structure is developed to increase decoding throughput with minor performance degradation compared with the serial structure. For both decoding architectures, a new helical interleaver is constructed to further improve the coding gain. In terms of the decoding algorithm, an extremely simple Sign-Min decoding is derived that needs only three additions to compute each bit's extrinsic information. For performance evaluation, the (16, 14, 2)² single-parity-check turbo product code with code rate 0.766 over an AWGN channel with QPSK modulation is considered. Simulation results using Sign-Min decoding show that it achieves a bit error rate of 10⁻⁵ at a signal-to-noise ratio of 3.8 dB with 8 iterations. Compared to a turbo product code of the same rate and codeword length composed of extended Hamming codes, the considered scheme achieves similar performance with much less complexity. Important implementation issues such as finite-precision analysis, efficient sorting-circuit design, and interleaver memory management are also presented.
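Functionally, the Sign-Min (min-sum) extrinsic for a single-parity-check row is: each bit's extrinsic sign is the product of the other bits' signs, and its magnitude is the minimum of the other bits' magnitudes, so only the two smallest magnitudes are ever needed (hence the sorting-circuit design mentioned above). A sketch assuming nonzero LLRs:

```python
import numpy as np

def spc_extrinsic(llrs: np.ndarray) -> np.ndarray:
    """Min-sum ("Sign-Min") extrinsic update for one single-parity-check row.
    Assumes all LLRs are nonzero."""
    signs = np.sign(llrs)
    total_sign = np.prod(signs)          # sign excluding bit i = total * sign_i
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    ext = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    return total_sign * signs * ext

print(spc_extrinsic(np.array([2.0, -0.5, 1.5, -3.0])))
# -> [ 0.5 -1.5  0.5 -0.5]
```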

16.
In this paper, in order to improve bit error performance and bandwidth efficiency and to reduce complexity compared with related schemes such as turbo codes, we combine low-density parity-check (LDPC) codes with continuous phase frequency shift keying (CPFSK) modulation and introduce a new scheme called 'low-density parity-check coded continuous phase frequency shift keying (LDPCC-CPFSK)'. Since LDPC codes have very large Euclidean distance and use iterative decoding algorithms, they have high error-correcting capacity and perform very close to the Shannon limit. In all communication systems, phase discontinuities of modulated signals require extra bandwidth; continuous phase modulation (CPM) is a powerful solution to this problem. Besides providing good bandwidth efficiency, CPM also improves bit error performance through its memory unit. In our proposed scheme, LDPC and CPFSK, a special type of CPM, are combined to improve both error performance and bandwidth efficiency. We obtain error performance curves of LDPCC-CPFSK via computer simulations for both regular and irregular LDPC codes. Simulation results are given for 4-ary, 8-ary, and 16-ary CPFSK over AWGN, Rician, and Rayleigh fading channels for a maximum of 100 iterations, with the frame size chosen as 504.

17.
This paper proposes a general and systematic code-design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes, the normal constrained (NC) code and the parity-related constrained (PRC) code, both designed on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error event or error-event combination of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively; designing in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Several newly designed codes are illustrated as examples. Simulation results on Blu-ray disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate-2/3 code without parity, at both nominal and high density.

18.
The use of error-correcting codes as an important technique to increase computer system reliability is introduced. The codes used in the central processing unit (CPU) are described first: since the CPU usually contains the data path, logic, and arithmetic units, the codes used in this area are error-detecting codes such as parity check codes and residue codes. The codes used or suggested for the memory system are then discussed, with emphasis on parity check codes, two-dimensional codes, Hamming codes, and other recently developed codes. The various codes used in the input/output system are also presented. The input/output area of the computer system is relatively unreliable compared with the CPU or memory; the error-correcting codes used there are therefore usually much more powerful than single parity check codes, and include codes for magnetic tape, disk, and drum units. Error-coding techniques are compared with other techniques for increasing computer system reliability, and the future trend of using error-correcting codes in computer systems is discussed.
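Of the CPU-side codes mentioned, residue codes are the easiest to sketch: residues are preserved by addition and multiplication, so a small residue channel computed alongside the main datapath flags arithmetic faults. Mod-3 is the classic low-cost modulus; the fragment below is illustrative only.

```python
# Residue-code sketch: a mod-M residue travels with each operand, and an
# independent residue computation checks the (possibly faulty) main datapath.
M = 3  # classic low-cost check modulus

def checked_add(a: int, ra: int, b: int, rb: int):
    s = a + b                       # main datapath
    rs = (ra + rb) % M              # independent residue datapath
    if s % M != rs:                 # mismatch => arithmetic fault
        raise RuntimeError("arithmetic fault detected by residue check")
    return s, rs

print(checked_add(41, 41 % M, 17, 17 % M))   # (58, 1)
```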

19.
To detect and separate the information of two colliding radio frequency identification (RFID) tags, a detection method is proposed that exploits the inherent memory of the FM0 tag encoding in the Gen2 standard. By analyzing the characteristics of FM0 bit encoding and the memoryless detection of colliding tag signals, the conditional error probability of memoryless detection over a single bit duration and the bit error rate of single-tag detection are obtained. Then, exploiting the 'memory' property that encoding a single FM0 bit depends on the preceding bit, a pair of measurements corresponding to the preceding bit and a pair corresponding to the following bit are obtained, yielding the conditional error probability and bit error rate of 1-bit memory-assisted detection of the colliding tags. The reduction in total access delay for a population of N tags under a framed-Aloha medium access scheme using the proposed method is also analyzed. Simulation results show that the proposed 1-bit memory-assisted detection achieves a better bit error rate than memoryless detection and reduces the total delay of tag population access.
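The 'memory' being exploited is visible in the encoder itself: FM0 always inverts the signal level at a bit boundary, and a data 0 inverts it again mid-bit, so the waveform of each bit depends on how the previous one ended. A minimal sketch (real Gen2 FM0 also adds a preamble and a terminating dummy 1, omitted here):

```python
def fm0_encode(bits, start_level=1):
    """FM0 baseband encoder: the level inverts at every bit boundary
    (memory of the previous bit's ending level), and a data 0 inverts
    again mid-bit. Each bit maps to two half-symbol levels."""
    level = start_level
    out = []
    for b in bits:
        level ^= 1                  # mandatory transition at the bit boundary
        first = level
        if b == 0:
            level ^= 1              # extra mid-bit transition encodes a 0
        out.append((first, level))
    return out

print(fm0_encode([1, 0, 1, 1, 0]))
```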

20.
Layered approximately regular (LAR) low-density parity-check (LDPC) codes are proposed, with which a single encoder/decoder pair supports various code lengths and code rates. The parity check matrices of LAR-LDPC codes have a "layer-block-cell" structure with some additional constraints. An encoder architecture is designed for LAR-LDPC codes by making two improvements to the Richardson-Urbanke approach: the forward-substitution operation is entirely removed, and the dense matrix-vector multiplication is handled using feedback shift registers. A partially parallel decoder architecture is also designed, in which a layered modified min-sum decoding algorithm trades off complexity, speed, and performance. More importantly, the interconnection network, which is inevitable in partially parallel decoders, has much lower hardware complexity than that needed for general LDPC codes. Both the encoder and decoder architectures are highly flexible in code length and code rate.
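A skeleton of layered min-sum scheduling, the kind of algorithm such a decoder architecture implements (a generic sketch: the layer structure, the 0.75 normalization factor, and the data layout are illustrative, not the LAR-LDPC specifics). Each layer updates the posterior LLRs in place, so later layers within the same iteration already see the improvement, which is what gives layered schedules their convergence advantage.

```python
import numpy as np

def check_update(v: np.ndarray) -> np.ndarray:
    """Normalized min-sum extrinsic for one check's input LLRs (all nonzero)."""
    s = np.prod(np.sign(v))
    m = np.abs(v)
    i0 = np.argmin(m)
    m2 = np.partition(m, 1)[1]                     # second-smallest magnitude
    ext = np.where(np.arange(len(v)) == i0, m2, m[i0])
    return 0.75 * s * np.sign(v) * ext             # 0.75 = normalization factor

def layered_minsum(llr, layers, n_iter=10):
    """layers: list of layers; each layer is a list of index arrays (one per check)."""
    msgs = {(l, c): np.zeros(len(bits))
            for l, layer in enumerate(layers) for c, bits in enumerate(layer)}
    post = llr.copy()
    for _ in range(n_iter):
        for l, layer in enumerate(layers):
            for c, bits in enumerate(layer):
                old = msgs[(l, c)]
                new = check_update(post[bits] - old)   # strip this check's old message
                post[bits] += new - old                # immediate posterior update
                msgs[(l, c)] = new
    return post

llr = np.array([1.2, -0.8, 0.4, 2.0, -1.5, 0.6])
layers = [[np.array([0, 1, 3])], [np.array([1, 2, 4])], [np.array([0, 2, 5])]]
print(layered_minsum(llr, layers, n_iter=5))
```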
