Similar Documents (20 results)
1.
Software validation is treated as the problem of detecting errors that programmers make during the software development process. This includes fault detection, in which the focus is on techniques for detecting the occurrence of local errors that result in well-defined classes of program statement faults. It also includes detecting other kinds of errors, such as decomposition errors. The main focus of the work is on a decomposition-error analysis technique called comments analysis, in which errors are detected by analyzing special classes of program comments. Comments analysis has been applied to a variety of systems, including a data-processing program and an avionics real-time program. The use of comments analysis for sequential and concurrent systems is discussed, and the basic features of comments analysis tools are summarized. The relationship of comments analysis to other techniques, such as event sequence analysis, is discussed, and the differences between it and earlier work are explained.

2.
Anomalies such as redundant, contradictory, or deficient knowledge in a knowledge base indicate possible errors. Various methods for detecting such anomalies have been introduced, analyzed, and applied in recent years, but they usually deal with rule-based systems. So far, little attention has been paid to the verification and validation of more complex representations, such as nonmonotonic knowledge bases, although there are good reasons to expect that these technologies will be increasingly used in practical applications. This article takes a step toward the verification of knowledge bases that include defaults by providing a theoretical foundation of correctness concepts and a classification of possible anomalies. It also points out how existing verification methods may be applied to detect some anomalies in nonmonotonic knowledge bases, and discusses methods of avoiding potential inconsistencies (in the context of default reasoning, inconsistency means the nonexistence of extensions). © 1997 John Wiley & Sons, Inc.

3.
马丽丽  吕涛  李华伟  张金巍  段永颢 《计算机工程》2011,37(12):279-281,284
To detect common latent errors in integrated-circuit designs quickly and effectively, an error detection method based on static analysis is proposed. The method automatically extracts behavioral information from the register-transfer-level (RTL) design under test and detects common design errors such as state-machine deadlock and pin misconfiguration. Experimental results show that, compared with other verification methods, static checking is highly automated, fast, and accurate, its checking code is reusable, and it can find design errors before simulation.

4.
SpaceWire is a high-speed communication bus protocol used in aerospace, where extremely high demands are placed on the correctness and reliability of SpaceWire designs. Because traditional verification methods suffer from incompleteness, rigorous verification of SpaceWire has long been a matter of concern. Model checking has attracted designers' attention for the completeness of its verification. This paper proposes verifying the error-detection mechanism of a SpaceWire system by linear temporal logic (LTL) model checking. For the error-detection module, compared with verification using computation tree logic (CTL), the method significantly reduces the number of allocated BDDs and states, improving verification efficiency, and it also verifies error priority. The occurrence of the five kinds of errors handled by the error-detection module is verified, and all results are correct. The method achieves a complete verification of the error-detection mechanism.
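The flavor of the LTL properties involved can be illustrated on finite traces. The sketch below checks a response property of the form G(error → F detected) over simulated event traces; the event names and traces are invented for illustration and do not come from the SpaceWire verification itself, which uses a model checker rather than trace testing:

```python
# Hypothetical sketch: checking an LTL-style response property,
# G(error -> F detected), over a finite trace of an error-detection module.
# Event names and traces are illustrative, not from the paper.

def holds_response(trace, trigger="error", response="detected"):
    """True if every occurrence of `trigger` is eventually followed by `response`."""
    pending = False
    for event in trace:
        if event == trigger:
            pending = True          # an obligation is now open
        if event == response:
            pending = False         # the obligation is discharged
    return not pending

ok_trace  = ["idle", "error", "retry", "detected", "idle"]
bad_trace = ["idle", "error", "retry", "idle"]

print(holds_response(ok_trace))   # True: the error is eventually detected
print(holds_response(bad_trace))  # False: the error is never detected
```

A model checker verifies such a property over all behaviors of the model, not just sampled traces, which is the completeness advantage the abstract refers to.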

5.
There is a dichotomy of opinion on the use of software testing versus formal verification in software development. Testing has been the accepted method for detecting and removing errors and has played a significant error removal role. Formal verification has only recently matured into accepted practice but shows the potential for playing an even more significant error prevention role. The Cleanroom software development process, developed by the IBM Federal Systems Division, combines both ideas into an effective development tool. Software engineering methods based on functional verification support the production of software with sufficient quality to forgo traditional unit or structural testing. Statistical methods are introduced that define objective and formal strategies for product or functional testing. The synergism between the two ideas results in software with fewer errors, which are both easier to find and to fix, and in products with exceptional operating characteristics. Error prevention, not removal, is the key and the only viable approach to any sustained software quality growth. The Cleanroom development method and its impact on the error prevention and removal processes are covered in this paper. The results from its use for software development are also discussed.

6.
Operating systems provide critical low-level support for software systems in many safety-critical domains, and even a tiny error or vulnerability in an operating system can cause a major failure of the entire software system, resulting in huge economic losses or endangering human life. To reduce such accidents, verifying the correctness of operating systems is essential. Traditional testing cannot exhaust all potential errors in a system, so operating-system verification requires formal methods with a rigorous mathematical foundation. In an operating system, mutexes coordinate multi-task access to resources and are a commonly used task-synchronization mechanism, and their functional correctness is key to guaranteeing the correctness of multi-task applications. Based on theorem proving, this paper builds a code-level formal model of the mutex module of a preemptive microkernel operating system in the interactive theorem prover Coq, gives formal specifications of its interface functions, and verifies the functional correctness of these interface functions.

7.
Variable-Length Error-Detecting Codes
This paper proposes the concepts of a variable-length error model and variable-length error-detecting codes, presents two specific classes of variable-length error-detecting codes, analyzes their ability to detect variable-length errors, and describes their practical application in computer-virus prevention.
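The paper's two code constructions are not reproduced here, but the general idea of guarding against variable-length errors can be sketched: encoding the message length alongside a checksum lets a decoder detect insertions and deletions as well as substitutions. The framing scheme below is an invented illustration, not the paper's codes:

```python
# Illustrative sketch (not the paper's construction): prefix the payload with
# its length and a checksum, so both content changes and length changes
# (insertions/deletions, i.e. variable-length errors) are detectable.

def encode(data: bytes) -> bytes:
    """Frame a short payload (< 256 bytes) as [length, checksum, data...]."""
    checksum = sum(data) % 256
    return bytes([len(data), checksum]) + data

def check(codeword: bytes) -> bool:
    """True if the frame is consistent with its length and checksum fields."""
    if len(codeword) < 2:
        return False
    length, checksum = codeword[0], codeword[1]
    data = codeword[2:]
    return len(data) == length and sum(data) % 256 == checksum

cw = encode(b"hello")
print(check(cw))        # True: intact frame
print(check(cw[:-1]))   # False: a deleted byte changed the length
```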

8.
A Survey of Outlier Detection Methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors, removing their contaminating effect on the data set and thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
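As a concrete instance of one classical family the survey covers, the statistical z-score rule flags points that deviate from the sample mean by more than k standard deviations. The data and threshold below are illustrative:

```python
# Minimal sketch of a classical statistical outlier test: flag points more
# than k standard deviations from the mean. Small samples need a modest k,
# since a large outlier inflates the standard deviation it is tested against.

def zscore_outliers(data, k=2.0):
    n = len(data)
    mean = sum(data) / n
    std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
    return [x for x in data if abs(x - mean) > k * std]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]  # 42.0 mimics an instrument error
print(zscore_outliers(readings))  # [42.0]
```

Robust variants (median and MAD instead of mean and standard deviation) avoid the masking effect that a large outlier has on its own test statistic.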

9.
Verification of non-monotonic knowledge bases
Non-monotonic Knowledge-Based Systems (KBSs) must undergo quality assurance procedures for the following two reasons: (i) belief revision (if such is provided) cannot always guarantee the structural correctness of the knowledge base, and in certain cases may introduce new semantic errors in the revised theory; (ii) non-monotonic theories may have multiple extensions, and some types of functional errors which do not violate structural properties of a given extension are hard to detect without testing the overall performance of the KBS. This paper presents an extension of the distributed verification method, which is meant to reveal structural and functional anomalies in non-monotonic KBSs. Two classes of anomalies are considered: (i) structural anomalies which manifest themselves within a given extension (such as logical inconsistencies, structural incompleteness, and intractabilities caused by circular rule chains), and (ii) functional anomalies related to the overall performance of the KBS (such as the existence of complementary rules and some types of rule subsumptions). The corresponding verification tests are presented and illustrated on an extended example.
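One of the structural anomalies mentioned above, intractability caused by circular rule chains, reduces to cycle detection over a rule dependency graph. The sketch below uses an invented rule-graph representation and plain depth-first search, not the paper's distributed verification method:

```python
# Hedged sketch: detect circular rule chains by DFS cycle detection over a
# dependency graph mapping each rule to the rules its conclusion feeds into.
# The graph encoding and rule names are invented for illustration.

def find_cycle(deps):
    """True if the rule dependency graph `deps` contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {r: WHITE for r in deps}

    def dfs(r):
        color[r] = GRAY
        for nxt in deps.get(r, []):
            if color.get(nxt, WHITE) == GRAY:
                return True               # back edge: circular chain found
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[r] = BLACK
        return False

    return any(color[r] == WHITE and dfs(r) for r in deps)

rules = {"r1": ["r2"], "r2": ["r3"], "r3": ["r1"]}   # r1 -> r2 -> r3 -> r1
print(find_cycle(rules))  # True
```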

10.
Software verification and validation is a domain covered by many dynamic test, static analysis, and formal verification techniques. This presents practitioners with the problem of selecting those techniques that can be used successfully. The basic idea of the methodology presented here is to select test techniques that fit the software under test. A dynamic test technique that requires certain program elements to be covered will be sensitive to errors associated with those elements, because executing an error location is a precondition for revealing the error. Furthermore, the probability of errors is likely to increase with complexity. Complexity can be characterized in terms of several properties, which can be used to suggest various testing strategies, and the complexity of these properties can be measured using appropriate complexity metrics. Properties with unusually high complexity measures should be tested very thoroughly. The approach described in this paper permits the selection of test techniques based on the values of the metrics for a particular software product.
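The selection idea can be sketched in a few lines: measure a complexity property per module and single out the unusually complex ones for more thorough testing. The mean-plus-one-standard-deviation threshold is an illustrative choice, not taken from the paper:

```python
# Sketch of metric-driven test selection: modules whose complexity is
# unusually high (here, more than one standard deviation above the mean)
# are flagged for more thorough testing. Threshold and data are illustrative.

def flag_for_thorough_testing(metrics):
    """metrics: {module_name: complexity_value}; returns flagged module names."""
    values = list(metrics.values())
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return sorted(m for m, v in metrics.items() if v > mean + std)

cyclomatic = {"parser": 28, "logger": 3, "scheduler": 9, "ui": 5}
print(flag_for_thorough_testing(cyclomatic))  # ['parser']
```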

11.
Expert system verification and validation: a survey and tutorial
Assuring the quality of an expert system is critical. A poor quality system may make costly errors resulting in considerable damage to the user or owner of the system, such as financial loss or human suffering. Hence verification and validation, methods and techniques aimed at ensuring quality, are fundamentally important. This paper surveys the issues, methods and techniques for verifying and validating expert systems. Approaches to defining the quality of a system are discussed, drawing upon work in both computing and the model building disciplines, which leads to definitions of verification and validation and the associated concepts of credibility, assessment and evaluation. An approach to verification based upon the detection of anomalies is presented, and related to the concepts of consistency, completeness, correctness and redundancy. Automated tools for expert system verification are reviewed. Considerable attention is then given to the issues in structuring the validation process, particularly the establishment of the criteria by which the system is judged, the need to maintain objectivity, and the concept of reliability. This is followed by a review of validation methods for validating both the components of a system and the system as a whole, and includes examples of some useful statistical methods. Management of the verification and validation process is then considered, and it is seen that the location of methods for verification and validation in the development life-cycle is of prime importance.
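One anomaly check in the redundancy category mentioned above can be sketched directly: a rule is subsumed when another rule with strictly fewer conditions draws the same conclusion. The rule representation below is invented for illustration and is not from any specific verification tool in the survey:

```python
# Illustrative redundancy check: rule i is subsumed if some other rule with a
# strict subset of its conditions reaches the same conclusion. Rules are
# modeled as (condition set, conclusion) pairs; this encoding is invented.

def subsumed_rules(rules):
    """rules: list of (conditions: frozenset, conclusion). Returns subsumed indices."""
    out = []
    for i, (cond_i, concl_i) in enumerate(rules):
        for j, (cond_j, concl_j) in enumerate(rules):
            if i != j and concl_i == concl_j and cond_j < cond_i:
                out.append(i)     # a more general rule already covers rule i
                break
    return out

rules = [
    (frozenset({"fever", "cough"}), "flu"),
    (frozenset({"fever"}), "flu"),   # more general: subsumes the rule above
]
print(subsumed_rules(rules))  # [0]
```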

12.
A prototype specification support system, Escort, is described that incorporates novel validation, verification, and simplification methods for telecommunications software specifications. Unix was adapted as the operating system for Escort, and many of Escort's tools were designed and implemented by making full use of the Unix facilities. Escort identifies three kinds of specification errors: errors in the grammar of the specification language, called syntax errors; those that degrade consistency and completeness, called logical errors; and those that degrade correctness, called semantic errors. It detects these errors using syntax analysis, validation, and verification, respectively.

13.
This article presents research on error detection and prediction algorithms in robotics. Errors, defined as either agent errors or Co-net errors, are analyzed and compared. Three new error detection and prediction algorithms (EDPAs) are then developed and validated by detecting and predicting errors in typical pick-and-place motions of an Adept Cobra 800 robot. A laser Doppler displacement meter (LDDM™) MCV-500 is used to measure the position of the robot gripper in 105 experiment runs. Results show that combined EDPAs are preferred for detecting and predicting displacement errors in sequential robot motions.

14.
15.
Fujiwara  E. Pradhan  D.K. 《Computer》1990,23(7):63-72
In this article, intended for readers with a basic knowledge of coding, the codes used in actual systems are surveyed. Error control in high-speed memories is examined, including bit-error-correcting/detecting codes, byte-error-correcting/detecting codes, and codes that detect single-byte errors as well as correct single-bit errors and detect double-bit errors. Tape and disk memory codes for error control in mass memories are discussed. Processor error control and unidirectional error-control codes are covered, including the application of the latter to masking asymmetric line faults.
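As a minimal example from the bit-error-correcting family surveyed above, a toy Hamming(7,4) code corrects any single flipped bit. This is the textbook construction (parity bits at positions 1, 2, and 4), not a code taken from the article:

```python
# Textbook Hamming(7,4) sketch: 4 data bits, 3 parity bits, corrects any
# single bit error. Bit positions are 1..7 with parity at 1, 2, and 4.

def hamming74_encode(d):
    """d: list of 4 data bits -> list of 7 code bits."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """c: list of 7 code bits with at most one flip -> recovered 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                             # flip one bit "in transit"
print(hamming74_correct(code))           # [1, 0, 1, 1]: error corrected
```

SEC-DED codes used in memories add one more overall parity bit so that double-bit errors are detected (though not corrected).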

16.
17.
Dynamic verification is a new approach to formal verification, applicable to generic algorithms such as those found in the Standard Template Library (STL, part of the Draft ANSI/ISO C++ Standard Library). Using behavioral abstraction and symbolic execution techniques, verifications are carried out at an abstract level such that the results can be used in a variety of instances of the generic algorithms without repeating the proofs. This is achieved by substituting for type parameters of generic algorithms special data types that model generic concepts by accepting symbolic inputs and deducing outputs using inference methods. By itself, this symbolic execution technique supports testing of programs with symbolic values at an abstract level. For formal verification one also needs to generate multiple program execution paths and use assertions (to handle while loops, for example), but the authors show how this can be achieved via directives to a conventional debugger program and an analysis database. The assertions must still be supplied, but they can be packaged separately and evaluated as needed by appropriate transfers of control orchestrated via the debugger. Unlike all previous verification methods, the dynamic verification method thus works without having to transform source code or process it with special interpreters. They include an example of the formal verification of an STL generic algorithm.

18.
赵炎  张文  万浩  赵会欣  王旭  王平 《传感技术学报》2012,25(11):1473-1478
For a system that detects heavy-metal elements in aquatic environments, an intelligent real-time detection system design method is proposed. The system introduces several intelligent techniques into its initialization and detection processes, such as system reliability checking, automatic compensation of systematic error, and auto-ranging sampling. Laboratory experiments on standard solutions of four heavy-metal ions (zinc, cadmium, lead, and copper) show that these techniques improve the system's detection reliability and accuracy, demonstrating that the system can effectively overcome the inability of traditional detection systems to automatically eliminate systematic error and their low accuracy in multi-metal detection, and providing a more complete solution for real-time monitoring of heavy-metal elements in aquatic environments.

19.
The construction of ultra-high-rise and long-span structures imposes higher requirements on the integrity detection of piles. Acoustic signal detection has been verified as an efficient and accurate nondestructive testing method. The integrity of piles is closely related to the onset time of signals, and the accuracy of the onset time directly affects the integrity evaluation of a pile. To achieve high-precision onset detection, continuous wavelet transform (CWT) preprocessing and machine learning algorithms were integrated into the software of high-sampling-rate testing equipment. The distortion of waveforms, which could interfere with detection accuracy, was eliminated by CWT preprocessing. To make full use of the collected waveform data, three types of machine learning algorithms were used to classify whether data points are ambient or ultrasonic signals. The models involve a commonly used classifier (ELM), an individual classification tree model (DTC), an ensemble tree model (RFC), and a deep learning model (DBN). The classification accuracy of these models on ambient and ultrasonic signals was compared by 5-fold validation. Results indicate that RFC performs better than DBN and DTC after training and is more suitable for classifying points in waveforms. A detection method for onset time based on the classification results was therefore proposed to minimize the interference of classification errors on detection. In addition to the three data-mining methods, the autocorrelation function method was selected as a control to compare the proposed data-mining-based methods with the traditional one. Accuracy and error analysis of 300 waveforms proved the feasibility and stability of the proposed method. The RFC-based detection method is recommended because it has the highest accuracy, the lowest errors, and the most favorable error distribution among the four onset detection methods. Successful applications demonstrate that it can provide a new way of ensuring accurate testing of pile-foundation integrity.
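The idea of an onset pick that tolerates scattered classification errors can be sketched as follows: rather than taking the first sample labeled as signal, take the first index that begins a run of consecutive signal labels. The labels and run length below are invented for illustration and do not reproduce the paper's RFC-based method:

```python
# Sketch of a classification-based onset pick robust to isolated
# misclassifications: the onset is the start of the first run of `min_run`
# consecutive "signal" labels. Labels and run length are illustrative.

def onset_from_labels(labels, min_run=3):
    """labels: per-sample 0 (ambient) / 1 (ultrasonic signal); returns onset index."""
    run = 0
    for i, lab in enumerate(labels):
        run = run + 1 if lab == 1 else 0
        if run == min_run:
            return i - min_run + 1       # index where the stable run began
    return None                          # no onset found

labels = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]  # the lone 1 at index 2 is a misclassification
print(onset_from_labels(labels))  # 5
```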

20.