Similar Literature
20 similar records found
1.
Variable-Length Error-Detecting Codes
This paper introduces the concepts of a variable-length error model and variable-length error-detecting codes, presents two concrete classes of variable-length error-detecting codes, analyzes their capability to detect variable-length errors, and describes their practical application in computer virus prevention.

2.
Hardware implementations of cryptographic algorithms are vulnerable to fault analysis attacks. Methods based on traditional fault-tolerant architectures are not suited for protection against these attacks. To detect these attacks we propose an architecture based on robust nonlinear systematic error-detecting codes. These nonlinear codes are capable of providing uniform error-detecting coverage independently of the error distributions. They make no assumptions about what faults or errors will be injected by an attacker. Architectures based on these robust constructions have fewer undetectable errors than linear codes with the same (n, k). We present the general properties and construction methods of these codes as well as their application for the protection of cryptographic devices implementing the Advanced Encryption Standard.
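Below is a minimal Python sketch of the kind of robust nonlinear systematic code the abstract describes, using a cube check symbol over GF(2^8); the field, the cubing map, and all names are illustrative assumptions rather than the paper's exact construction.

```python
# Hedged sketch: robust nonlinear systematic code with check symbol x^3 in
# GF(2^8). Illustrative only; not the paper's exact parameters.

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8), reduced by AES_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def cube(x: int) -> int:
    return gf_mul(x, gf_mul(x, x))

def encode(x: int) -> tuple[int, int]:
    """Systematic codeword: information x plus nonlinear check cube(x)."""
    return x, cube(x)

def check(x: int, w: int) -> bool:
    return cube(x) == w

# For a LINEAR code a fixed injected error is either always missed or always
# caught; here a fixed fault (e_x, e_w) slips through for only a handful of
# information words, so no single fault works for every internal state.
e_x, e_w = 0x01, 0x00
masked = sum(1 for x in range(256) if check(x ^ e_x, cube(x) ^ e_w))
print(f"fault masked for {masked} of 256 information words")
```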

3.
4.
Inspired by unidirectional error detecting codes that are used in situations where only one kind of bit error is possible (e.g., it is possible to change a bit "0" into a bit "1", but not the contrary), we propose integrity codes (I-codes) for a radio communication channel, which enable integrity protection of messages exchanged between entities that do not hold any mutual authentication material (i.e., public keys or shared secret keys). The construction of I-codes enables a sender to encode any message such that if its integrity is violated in transmission over a radio channel, the receiver is able to detect it. In order to achieve this, we rely on the physical properties of the radio channel and on unidirectional error detecting codes. We analyze in detail the use of I-codes on a radio communication channel and we present their implementation on a wireless platform as a "proof of concept". We further introduce a novel concept called "authentication through presence", whose broad applications include broadcast authentication, key establishment and navigation signal protection. We perform a detailed analysis of the security of our coding scheme and we show that it is secure within a realistic attacker model.
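As a rough illustration of the unidirectional-error principle, here is a hedged Python sketch that uses a Manchester-style chip encoding as a stand-in for the paper's I-code construction; the channel assumption (an attacker can add signal energy but cannot remove it) comes from the abstract, everything else is illustrative.

```python
# Hedged sketch: each bit becomes the chip pair "10" or "01", so every legal
# pair has exactly one "1". On the assumed channel an attacker can only turn
# "0" chips into "1" chips, so any tampering produces a "11" pair.

def i_encode(bits: str) -> str:
    return "".join("10" if b == "1" else "01" for b in bits)

def i_decode(chips: str):
    """Return the decoded message, or None if integrity was violated."""
    out = []
    for i in range(0, len(chips), 2):
        pair = chips[i:i + 2]
        if pair == "10":
            out.append("1")
        elif pair == "01":
            out.append("0")
        else:                    # "11" cannot arise from legal transmission
            return None
    return "".join(out)

msg = "1011"
tx = i_encode(msg)                    # "10011010"
assert i_decode(tx) == msg
tampered = tx[:1] + "1" + tx[2:]      # attacker can only set chips to "1"
assert i_decode(tampered) is None
print("tampering detected")
```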

5.
A convolutional code can be used to detect or correct infinite sequences of errors or to correct infinite sequences of erasures. First, erasure correction is shown to be related to error detection, and error detection in turn to error correction. Next, the active burst distance is exploited, and various bounds on erasure correction, error detection, and error correction are obtained for convolutional codes. These bounds are illustrated by examples.

6.
F. Barsi  P. Maestrini 《Calcolo》1974,11(2):219-242
The problems of detecting overflow and single or multiple residue digit errors in Redundant Residue Number Systems are considered through a unified approach. It is shown that a single intermodular procedure allows concurrent detection of additive overflow and single residue digit errors, even in the case where the error affects a number in overflow. In addition, it is shown that codes of adequate redundancy may allow detection of additive overflow and single bit errors, provided that the residue digits are appropriately encoded. The discussion concerns both separate residue codes (i.e., the codes referred to as RRNS, where one or more redundant residues are added) and product codes.
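The following Python sketch illustrates the basic RRNS detection mechanism the abstract builds on: with a redundant modulus, a corrupted residue digit (or an additive overflow) reconstructs to a value outside the legitimate range. The moduli are illustrative assumptions; the paper's treatment is far more general.

```python
# Hedged sketch: RRNS with information moduli (3, 5, 7) and one redundant
# modulus 11. Legitimate numbers lie below M_LEGIT = 105; reconstructing over
# ALL moduli maps a corrupted residue or an overflow above that range.
from math import prod

MODULI = (3, 5, 7, 11)        # last modulus is redundant
M_LEGIT = prod(MODULI[:-1])   # 105: legitimate dynamic range

def to_rns(x):
    return [x % m for m in MODULI]

def crt(residues):
    """Chinese Remainder Theorem reconstruction over all moduli."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def check(residues) -> bool:
    return crt(residues) < M_LEGIT

r = to_rns(42)
assert check(r)
r[1] = (r[1] + 1) % MODULI[1]   # corrupt one residue digit
assert not check(r)             # reconstruction lands outside [0, 105)
print("single residue digit error detected")
```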

7.
Embedded control networks commonly use checksums to detect data transmission errors. However, design decisions about which checksum to use are difficult because of a lack of information about the relative effectiveness of available options. We study the error detection effectiveness of the following commonly used checksum computations: exclusive or (XOR), two's complement addition, one's complement addition, Fletcher checksum, Adler checksum, and cyclic redundancy codes (CRC). A study of error detection capabilities for random independent bit errors and burst errors reveals that XOR, two's complement addition, and Adler checksums are suboptimal for typical network use. Instead, one's complement addition should be used for networks willing to sacrifice error detection effectiveness to reduce compute cost, Fletcher checksum for networks looking for a balance of error detection and compute cost, and CRCs for networks willing to pay a higher compute cost for significantly improved error detection.
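For concreteness, here are hedged Python reference implementations of the checksum families compared; the block size and CRC polynomial are illustrative choices, not necessarily the study's configurations.

```python
# Hedged sketch of the compared checksum families over a small byte buffer.
import zlib  # Adler-32 is available in the standard library

def xor_checksum(words):                  # XOR: misses reordered words entirely
    out = 0
    for w in words:
        out ^= w
    return out

def twos_complement_sum(words):           # plain 16-bit modular addition
    return sum(words) & 0xFFFF

def ones_complement_sum(words):           # end-around-carry (Internet-style) addition
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)
    return s

def fletcher16(data):                     # two running sums: order now matters
    a = b = 0
    for byte in data:
        a = (a + byte) % 255
        b = (b + a) % 255
    return (b << 8) | a

def crc16_ccitt(data, poly=0x1021):       # strongest per-bit error coverage
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

data = bytes(range(16))
words = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
print(f"XOR={xor_checksum(words):#06x} add2={twos_complement_sum(words):#06x} "
      f"add1={ones_complement_sum(words):#06x} fletcher={fletcher16(data):#06x} "
      f"adler={zlib.adler32(data):#010x} crc16={crc16_ccitt(data):#06x}")
```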

8.
In this paper a statistical approach to error location and correction for data stored in secondary memories is developed. The approach is based on the observation that data records in secondary storage have some inherent redundancy of information. This redundancy cannot be predicted precisely, as the artificial redundancy of a typical error-correction scheme can; it can nevertheless be exploited to provide error correction with some degree of confidence. We use simple and weighted checksum schemes for error detection and present algorithms for single and multiple error correction using statistical error location and correction (SELAC). An implementation of SELAC is described, along with an elaborate study of its error-correction capabilities. A conspicuous aspect of SELAC is that, unlike the classical schemes using single error correcting-double error detecting (SEC-DED) and double error correcting-triple error detecting (DEC-TED) codes, it costs no processor time or storage overhead until an error is encountered.
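The arithmetic ingredient behind such schemes can be sketched in Python as follows: a simple checksum plus an index-weighted checksum suffice to locate and correct a single corrupted word. This is a generic illustration, not SELAC itself; the prime modulus and all names are assumptions.

```python
# Hedged sketch: single-error location/correction from two checksums.
MOD = 65537  # prime, so any nonzero difference is invertible

def make_checks(words):
    s1 = sum(words) % MOD                                    # simple checksum
    s2 = sum(i * w for i, w in enumerate(words, 1)) % MOD    # weighted checksum
    return s1, s2

def correct_single(words, s1, s2):
    """Locate and fix one corrupted word from the checksum discrepancies."""
    d1 = (sum(words) - s1) % MOD                             # = error magnitude e
    d2 = (sum(i * w for i, w in enumerate(words, 1)) - s2) % MOD  # = position * e
    if d1 == 0 and d2 == 0:
        return list(words)                                   # nothing detected
    pos = d2 * pow(d1, -1, MOD) % MOD                        # position = d2 / d1
    fixed = list(words)
    fixed[pos - 1] = (fixed[pos - 1] - d1) % MOD
    return fixed

data = [10, 20, 30, 40]
s1, s2 = make_checks(data)
data[2] = 999                    # one stored word gets corrupted
assert correct_single(data, s1, s2) == [10, 20, 30, 40]
print("single error located and corrected")
```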

9.
We consider the Mollard construction from the point of view of its efficiency for detecting multiple bit errors. We propose a generalization of the classical extended Mollard code to arbitrary code lengths. We show partial robustness of this construction: such codes have fewer undetected and miscorrected errors than linear codes. We prove that, for certain code parameters, the generalization of the Mollard construction can ensure better error protection than a generalization of Vasil’ev codes.

10.
This paper proposes an improvement to the t-error-correcting/d-unidirectional-error-detecting (t-EC/d-UED) codes introduced in [7]; in most cases the improved codes have stronger error-detecting capability.

11.
Daily numerical data entry is subject to human errors, and errors in numerical data can cause serious losses in health care, safety and finance. The difficulty human operators have in detecting errors in numerical data entry necessitates an early error detection/prediction mechanism to proactively prevent severe accidents. To explore the possibility of using multi-channel electroencephalography (EEG) collected before movements/reactions to detect/predict human errors, a linear discriminant analysis (LDA) classifier was utilised to predict numerical typing errors before their occurrence. Single-trial EEG data were collected from seven participants during numerical hear-and-type tasks, and three temporal features were extracted from six EEG sites in a 150-ms time window. The sensitivity of the LDA classifier was revealed by adjusting the critical ratio of two Mahalanobis distances used as the classification criterion. On average, the LDA classifier was able to detect 74.34% of numerical typing errors in advance with only 34.46% false alarms, resulting in a sensitivity of 1.05. A cost analysis also showed that using the LDA classifier would be beneficial as long as the penalty is at least 15 times the cost of inspection when the error rate is 5%. LDA demonstrated realistic potential for detecting/predicting relatively few errors in numerical data without heavy pre-processing. This is one step towards predicting and preventing human errors in perceptual-motor tasks before their occurrence.
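A hedged Python sketch of the classification rule described (comparing the ratio of two Mahalanobis distances against an adjustable critical ratio) might look like the following; the feature values are synthetic stand-ins for the study's EEG features.

```python
# Hedged sketch: Mahalanobis-distance-ratio classification with a pooled
# covariance, in the spirit of the LDA criterion described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
correct = rng.normal(0.0, 1.0, size=(200, 3))   # stand-ins for temporal EEG features
error = rng.normal(1.0, 1.0, size=(40, 3))

cov_inv = np.linalg.inv(np.cov(np.vstack([correct, error]).T))  # pooled covariance
mu_c, mu_e = correct.mean(axis=0), error.mean(axis=0)

def mahalanobis(x, mean):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def predict_error(x, critical_ratio=1.0):
    """Flag an upcoming typing error when x is close enough to the error
    centroid; sweeping critical_ratio trades misses against false alarms,
    which is how the classifier's sensitivity can be explored."""
    return mahalanobis(x, mu_e) <= critical_ratio * mahalanobis(x, mu_c)

print(predict_error(np.array([0.9, 1.1, 0.8])))
```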

12.
This article presents research on error detection and prediction algorithms in robotics. Errors, defined as either agent errors or Co-net errors, are analyzed and compared. Three new error detection and prediction algorithms (EDPAs) are then developed and validated by detecting and predicting errors in typical pick-and-place motions of an Adept Cobra 800 robot. A laser Doppler displacement meter (LDDM™) MCV-500 is used to measure the position of the robot gripper in 105 experiment runs. Results show that combined EDPAs are preferred for detecting and predicting displacement errors in sequential robot motions.

13.
We describe the application of problem-solving, knowledge-based methods to creating process plans in manufacturing. The planner presented - called TOLTEC - is designed for experiential domains and bases its operation on the use of cases in a dynamic memory environment. We describe the way TOLTEC creates process plans by utilizing previous experiences, dynamic clustering of its memories and dynamic constraint generation, and by shifting its focus of attention to different features of the workpiece using importance values. We also present how TOLTEC learns by modifying its memories according to new experiences and how it helps bridge some of the gap between design and manufacturing by detecting design errors. The emphasis in this paper is on the application aspects of our system, and the examples presented demonstrate the abilities of TOLTEC to design process plans, detect design errors, predict manufacturing errors, recover from planning errors, handle multiple branching solutions and improve its performance by utilizing learning techniques.

14.
Because a huge number of Global Positioning System (GPS) data points were measured along the 1142-km Qinghai-Tibet Railway (QTR), some measuring errors inevitably occurred under the various measurement conditions. It is therefore important to develop a method that automatically detects possible errors in all data points so that they can be corrected or re-measured, improving the reliability of the GPS data. Four error patterns, including redundant measurement, sparse measurement, back-and-forth measurement, and big angle change, were identified based on expert knowledge, and four algorithms were developed to detect the corresponding possible errors in data points. To eliminate errors reported repeatedly by different algorithms and to display the possible errors effectively, an integrated error-detecting method was developed by reasonably assembling the four algorithms. After four performance indices were defined to evaluate the error-detecting method, six GPS track data sets between seven railway stations on the QTR were used to validate it. Thirty-eight segments of sequential points that were possibly wrong were found by the method, and fourteen of them were confirmed by measurement experts. The detection rate of the method was 100%, and the detecting process took less than half an hour, compared with 94 h of manual work. The validation results show that the method is effective not only in decreasing workload, but also in ensuring correctness by integrating domain expert knowledge into the final decision.
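A rule-based detector for the four error patterns named above might be sketched in Python as follows; the geometry checks are generic and the thresholds are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: flag the four track-error patterns with simple geometric rules.
import math

def _dist(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def _turn_angle(a, b, c):
    """Heading change (degrees) at point b along the track a -> b -> c."""
    h1 = math.atan2(b[1] - a[1], b[0] - a[0])
    h2 = math.atan2(c[1] - b[1], c[0] - b[0])
    return abs(math.degrees((h2 - h1 + math.pi) % (2 * math.pi) - math.pi))

def detect_errors(pts, d_min=0.5, d_max=50.0, a_max=30.0):
    flags = []
    for i in range(1, len(pts)):
        d = _dist(pts[i - 1], pts[i])
        if d < d_min:
            flags.append((i, "redundant measurement"))   # points nearly coincide
        elif d > d_max:
            flags.append((i, "sparse measurement"))      # gap too wide
    for i in range(1, len(pts) - 1):
        a = _turn_angle(pts[i - 1], pts[i], pts[i + 1])
        if a > 150.0:
            flags.append((i, "back-and-forth measurement"))  # near reversal
        elif a > a_max:
            flags.append((i, "big angle change"))
    return flags

track = [(0, 0), (10, 0), (20, 0), (20.1, 0), (30, 0), (25, 0.2), (40, 0)]
print(detect_errors(track))
```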

15.
This paper describes the results of a general theory of matrix codes correcting a set of given types of multiple errors. A detailed study has been made of certain matrix classes of these systematic binary error-correcting codes that will correct typical errors of some digital channels. The codes published by Elias,(2,3) Hobbs,(5) and Voukalis(11) are accounted for by this theory, and other new families of binary systematic matrix codes of arbitrary size, correcting random errors, bursts and clusters of errors, are given here. Also presented are the basic ideas of each of these codes, for which practical decoding algorithms are easily found. The calculation of the parity check equations that the information matrix codebook has to satisfy is also shown. Further on we deal with the optimum construction of these codes, showing their use in certain applications. We answer questions such as: “What is the optimum size of the code?” “What is the best structure of the code?” “What is the probability of error correction and the mean error-correction performance?” Consequently, in this paper we also describe the results of an extensive search for optimum matrix codes designed to correct a given set of multiple errors, as well as their implementation.

16.
Recent studies suggest that the soft-error rate in microprocessor logic is likely to become a serious reliability concern by 2010. Detecting soft errors in the processor's core logic presents a new challenge beyond what error detecting and correcting codes can handle. Commercial microprocessor systems that require an assurance of reliability employ an error-detection scheme based on dual modular redundancy (DMR) in some form - from replicated pipelines within the same die to mirroring of complete processors. To detect errors across a distributed DMR pair, we develop fingerprinting, a technique that summarizes a processor's execution history into a cryptographic signature, or "fingerprint". More specifically, a fingerprint is a hash value computed on the changes to a processor's architectural state resulting from a program's execution. The processors in a dual modular redundant pair periodically exchange and compare fingerprints to corroborate each other's correctness. Relative to other techniques, fingerprinting offers superior error coverage and significantly reduces the error-detection latency and bandwidth.
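The core mechanism can be sketched in a few lines of Python: each replica folds its architectural state updates into a running hash, and the replicas compare only the digests. The trace format and names below are assumptions, not the paper's interface.

```python
# Hedged sketch: execution fingerprinting for a DMR pair via a running hash
# over architectural register writebacks.
import hashlib

class Fingerprint:
    def __init__(self):
        self._h = hashlib.sha256()

    def update(self, reg: str, value: int):
        """Fold one architectural state update into the fingerprint."""
        self._h.update(f"{reg}={value};".encode())

    def digest(self) -> bytes:
        return self._h.digest()

def run(replica_trace):
    fp = Fingerprint()
    for reg, val in replica_trace:
        fp.update(reg, val)
    return fp.digest()

trace = [("r1", 7), ("r2", 42), ("pc", 0x4004)]
faulty = [("r1", 7), ("r2", 42 ^ 0x10), ("pc", 0x4004)]  # soft error flips a bit
# Exchanging only the 32-byte digest (not the whole trace) keeps bandwidth low.
print(run(trace) == run(trace))    # True: replicas agree
print(run(trace) == run(faulty))   # False: divergence caught at the next compare
```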

17.
Memory access violations are a leading source of unreliability in C programs. As evidence of this problem, a variety of methods exist that retrofit C with software checks to detect memory errors at runtime. However, these methods generally suffer from one or more drawbacks including the inability to detect all errors, the use of incompatible metadata, the need for manual code modifications, and high runtime overheads. This paper presents MemSafe, a compiler analysis and transformation for ensuring the memory safety of C. MemSafe makes several novel contributions that improve upon previous work and lower the cost of safety. These include (i) a method for modeling temporal errors as spatial errors, (ii) a metadata representation that combines features of both object-based and pointer-based approaches, and (iii) a dataflow representation that simplifies optimizations for removing unneeded checks. MemSafe is capable of detecting real errors with lower overheads than previous efforts. Experimental results show that MemSafe detects all memory errors in six programs with known violations as well as in two large and widely used open source applications. Finally, MemSafe ensures complete safety with an average overhead of 88% on 30 programs commonly used for evaluating the performance of error detection tools.
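Two of the ideas above, pointer bounds metadata and modeling temporal errors as spatial ones, can be illustrated with a hedged Python sketch; the class and field names are invented for illustration and are not MemSafe's actual representation.

```python
# Hedged sketch: a pointer carrying base/bound metadata for spatial checks; a
# temporal error (use-after-free) is modeled spatially by emptying the bounds.

class CheckedPtr:
    def __init__(self, heap, base, size):
        self.heap, self.base, self.bound = heap, base, base + size

    def free(self):
        self.bound = self.base          # empty range: every later access fails

    def load(self, offset):
        addr = self.base + offset
        if not (self.base <= addr < self.bound):   # the runtime safety check
            raise MemoryError(f"unsafe access at offset {offset}")
        return self.heap[addr]

heap = bytearray(64)
p = CheckedPtr(heap, base=16, size=8)
p.load(3)        # in bounds: fine
p.free()
try:
    p.load(3)    # temporal error caught as a spatial violation
except MemoryError as e:
    print(e)
```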

18.
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical ‘health checks’ on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
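The CESTAC method underlying CADNA can be caricatured in a few lines of Python: run the same computation several times under randomly perturbed rounding and estimate how many significant digits the runs still share. Perturbing values by one ulp with math.nextafter is a simplification of true random rounding, adopted here only for illustration.

```python
# Hedged sketch of the CESTAC idea behind CADNA (not the library itself).
import math
import random

def perturb(x: float) -> float:
    """Nudge x by one ulp in a random direction, mimicking random rounding."""
    return math.nextafter(x, math.inf if random.random() < 0.5 else -math.inf)

def unstable() -> float:
    # Catastrophic cancellation: mostly round-off survives the subtraction.
    a = perturb(1.0e16) + perturb(1.0)
    return perturb(a) - 1.0e16

samples = [unstable() for _ in range(3)]     # CESTAC classically uses 3 runs
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
if spread == 0.0:
    print(samples, "-> runs agree to full precision")
else:
    digits = max(0.0, math.log10(abs(mean) / spread)) if mean else 0.0
    print(samples, f"-> about {digits:.1f} trustworthy significant digits")
```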

19.
马丽丽  吕涛  李华伟  张金巍  段永颢 《计算机工程》2011,37(12):279-281,284
To detect common potential errors in integrated circuit designs quickly and effectively, an error-detection method based on static analysis is proposed. The method automatically extracts behavioral information from the register-transfer-level (RTL) design under test and detects common design errors such as state-machine deadlock and pin configuration errors. Experimental results show that, compared with other verification methods, static checking offers a high degree of automation, fast detection, high accuracy, and reusable checking code, and can find design errors before simulation.
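One of the named checks, state-machine deadlock detection, reduces to a simple graph condition once the FSM has been extracted. A hedged Python sketch, with a hand-written FSM standing in for what the tool would extract automatically:

```python
# Hedged sketch: find deadlock states in an FSM transition graph.

def deadlock_states(transitions):
    """A state is a deadlock if it has no outgoing edge to a different state."""
    states = set(transitions) | {t for succ in transitions.values() for t in succ}
    return {s for s in states if not (transitions.get(s, set()) - {s})}

fsm = {
    "IDLE": {"RUN"},
    "RUN":  {"DONE", "RUN"},
    "DONE": {"DONE"},        # no way out: a bug a simulation run might miss
}
print(deadlock_states(fsm))  # {'DONE'}
```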

20.
A control flow checking scheme capable of detecting control flow errors of programs resulting from software coding errors, hardware malfunctions, or memory mutilation during the execution of the program is presented. In this approach, the program is partitioned into loop-free intervals and a database containing the path information in each of the loop-free intervals is derived from the detailed design. The path in each loop-free interval actually traversed at run time is recorded and then checked against the information provided in the database, and any discrepancy indicates an error. This approach is general and can detect all uncompensated illegal branches. Any uncompensated error that occurs during the execution of a loop-free interval and manifests itself as a wrong branch within the loop-free interval or right after the completion of execution of the loop-free interval is also detectable. The approach can also be used to check the control flow in the testing phase of program development. The capabilities, limitations, implementation, and the overhead of using this approach are discussed.
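A minimal Python sketch of the scheme: each loop-free interval carries a database of legal paths derived from the detailed design, the path actually taken is recorded at run time, and any path outside the database signals a control flow error. The interval and basic-block names are illustrative assumptions.

```python
# Hedged sketch: run-time path checking against a legal-path database.

LEGAL_PATHS = {
    "interval_0": {("B0", "B1", "B3"), ("B0", "B2", "B3")},  # from detailed design
}

class PathChecker:
    def __init__(self, interval):
        self.interval, self.trace = interval, []

    def visit(self, block):            # instrumented at each basic block entry
        self.trace.append(block)

    def check(self):                   # called when the interval exits
        if tuple(self.trace) not in LEGAL_PATHS[self.interval]:
            raise RuntimeError(f"illegal path {self.trace} in {self.interval}")

pc = PathChecker("interval_0")
for block in ("B0", "B1", "B3"):
    pc.visit(block)
pc.check()                             # legal path: passes silently

pc = PathChecker("interval_0")
for block in ("B0", "B3"):             # a wrong branch skipped B1/B2
    pc.visit(block)
try:
    pc.check()
except RuntimeError as e:
    print(e)                           # uncompensated illegal branch detected
```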

