Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
Fault injection is an effective technique for BIT (built-in test) software testing. Addressing problems encountered in board-level BIT software testing, this paper presents a processor-fault simulation method built on the open-source emulator QEMU. Multiple processor fault types are modeled, and QEMU is extended with a fault-behavior simulation module and a fault-injection module to build BitVaSim, a system-level emulator with processor fault-injection capability. First, processor functional fault modes are analyzed, the key attributes of each fault are extracted as key-value pairs, and faults are defined with an XML Schema and used for fault modeling. Second, the QEMU code base is extended to simulate processor fault behavior. Then, a configurable fault-injection interface provides runtime fault-pattern matching and condition-triggered injection. Finally, experimental cases are used to observe the emulator's fault behavior and evaluate this emulator-based fault-injection technique. The experiments show the method to be effective and practical.
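The key-value fault model and condition-triggered injection described above can be sketched as follows. This is an illustrative sketch, not BitVaSim's actual code: the CPU state, fault descriptor fields, and trigger condition are all hypothetical stand-ins for what the XML Schema definitions would carry.

```python
# Illustrative sketch (not BitVaSim's actual code): a processor fault is
# described by key-value pairs, and injection fires when a trigger
# condition on the simulated CPU state matches.

class CpuState:
    def __init__(self):
        self.pc = 0
        self.regs = {"r0": 0, "r1": 0}

# Hypothetical fault descriptor mirroring the key-value pairs an XML
# Schema definition might carry: target register, bit to flip, trigger PC.
FAULT = {"target": "r1", "bit": 3, "trigger_pc": 0x1004}

def maybe_inject(state, fault):
    """Flip one bit of a register when the trigger condition matches."""
    if state.pc == fault["trigger_pc"]:
        state.regs[fault["target"]] ^= 1 << fault["bit"]
        return True
    return False

cpu = CpuState()
cpu.regs["r1"] = 0b0001
cpu.pc = 0x1000
untouched = maybe_inject(cpu, FAULT)   # trigger not yet reached
cpu.pc = 0x1004
injected = maybe_inject(cpu, FAULT)    # trigger matches: bit 3 flipped
```

In the real system the trigger predicate would be evaluated inside the emulator's execution loop rather than called explicitly.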

2.
The authors present a fault injection and monitoring environment (FINE) as a tool to study fault propagation in the UNIX kernel. FINE injects hardware-induced software errors and software faults into the UNIX kernel and traces the execution flow and key variables of the kernel. FINE consists of a fault injector, a software monitor, a workload generator, a controller, and several analysis utilities. Experiments on SunOS 4.1.2 are conducted by applying FINE to investigate fault propagation and to evaluate the impact of various types of faults. Fault propagation models are built for both hardware and software faults. Transient Markov reward analysis is performed to evaluate the loss of performance due to an injected fault. Experimental results show that memory and software faults usually have a very long latency, while bus and CPU faults tend to crash the system immediately. About half of the detected errors are data faults, which are detected when the system tries to access an unauthorized memory location. Only about 8% of faults propagate to other UNIX subsystems. Markov reward analysis shows that the performance loss incurred by bus faults and CPU faults is much higher than that incurred by software and memory faults. Among software faults, the impact of pointer faults is higher than that of nonpointer faults.
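The transient Markov reward analysis mentioned above can be sketched with a toy model. The states, rewards, and transition probabilities below are invented for illustration and are not FINE's fitted model: each state carries a per-step performance reward, and the expected accumulated reward is compared for a fault-free start versus a start in a degraded (post-fault) state.

```python
# Toy transient Markov reward analysis (illustrative numbers, not FINE's
# model): states OK, DEGRADED, CRASHED with per-step performance rewards;
# we propagate the state distribution and accumulate expected reward.

STATES = ["OK", "DEGRADED", "CRASHED"]
REWARD = {"OK": 1.0, "DEGRADED": 0.5, "CRASHED": 0.0}

# Row-stochastic transition matrix P[i][j]: probability of moving from
# state i to state j in one step (hypothetical values).
P = [
    [0.90, 0.08, 0.02],   # OK
    [0.00, 0.80, 0.20],   # DEGRADED
    [0.00, 0.00, 1.00],   # CRASHED (absorbing)
]

def expected_reward(dist, steps):
    """Expected accumulated reward over `steps` steps from `dist`."""
    total = 0.0
    for _ in range(steps):
        total += sum(p * REWARD[s] for p, s in zip(dist, STATES))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total

fault_free = expected_reward([1.0, 0.0, 0.0], 10)   # start in OK
after_fault = expected_reward([0.0, 1.0, 0.0], 10)  # fault left us DEGRADED
loss = fault_free - after_fault                     # performance loss
```

The difference `loss` is the kind of quantity the paper uses to compare the severity of bus/CPU faults against software/memory faults.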

3.
System reliability has become a main concern during the computer-based system design process. It is one of the most important characteristics of system quality. The continuous increase of system complexity makes reliability evaluation extremely costly, so there is a need to develop new methods requiring less cost and effort. Furthermore, the system is vulnerable to both software and hardware faults. While software faults are usually introduced by the programmer at either the design or the implementation stage, hardware faults are caused by physical phenomena affecting the hardware components, such as environmental perturbations, manufacturing defects, and aging-related phenomena. Software faults can only impact the software components, but hardware faults can propagate through the different system layers and affect both the hardware and the software. This paper discusses the differences between software testing and the software fault-injection techniques used for reliability evaluation. We describe mutation analysis as a method mainly used in software testing. Then, we detail fault injection as a technique to evaluate system reliability. Finally, we discuss how to use software mutation analysis in order to evaluate, at software level, the system reliability against hardware faults. The main advantage of this technique is its usability at an early design stage of the system, when the instruction set architecture is not yet available. Experimental results for faults occurring in the memory show that the proposed approach significantly reduces the complexity of system reliability evaluation in terms of time and cost.
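The core idea above, emulating a hardware fault at software level via mutation, can be sketched as follows. The component under evaluation and the single-bit-flip operator are illustrative assumptions, not the paper's actual mutation operators.

```python
# Sketch of the idea: emulate a hardware memory fault at software level
# by mutating a stored value with a single bit-flip, then check whether
# the program's output (and thus a test oracle) can observe the fault.

def checksum(values):
    # Original software component under evaluation (illustrative).
    return sum(values) & 0xFF

def flip_bit(value, bit):
    # Software-level mutation standing in for a memory bit-flip.
    return value ^ (1 << bit)

data = [10, 20, 30]
golden = checksum(data)

mutated = list(data)
mutated[1] = flip_bit(mutated[1], 4)   # emulate a fault in one memory cell
faulty = checksum(mutated)

detected = faulty != golden   # would a checksum-based oracle catch it?
```

Repeating this over many locations and bits yields the kind of coverage statistics a hardware-fault reliability evaluation needs, without requiring the instruction set architecture.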

4.
In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of their impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics predict low severity faults in fault-prone classes better than high severity faults.
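A logistic fault-proneness model over Chidamber-Kemerer metrics, as used above, can be sketched minimally. The metric subset and the coefficients below are hypothetical illustrations, not values fitted on the NASA data set.

```python
import math

# Minimal sketch of a logistic fault-proneness model over a subset of the
# Chidamber-Kemerer metrics (WMC, CBO, RFC). The coefficients below are
# hypothetical illustrations, not fitted values from the NASA data set.
COEF = {"intercept": -4.0, "wmc": 0.10, "cbo": 0.25, "rfc": 0.02}

def fault_proneness(wmc, cbo, rfc):
    """Probability that a class is fault-prone under the toy model."""
    z = (COEF["intercept"] + COEF["wmc"] * wmc
         + COEF["cbo"] * cbo + COEF["rfc"] * rfc)
    return 1.0 / (1.0 + math.exp(-z))

simple_class = fault_proneness(wmc=5, cbo=2, rfc=10)
complex_class = fault_proneness(wmc=40, cbo=12, rfc=80)
```

A severity-aware study would fit one such model per severity level and compare their predictive accuracy, which is where the paper's second finding comes from.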

5.

Context

Mutation testing is a fault-injection-based technique to help testers generate test cases for detecting specific and predetermined types of faults.

Objective

Before mutation testing can be effectively applied to embedded systems, traditional mutation testing needs to be modified. To inject a fault into an embedded system without causing any system failure or hardware damage is a challenging task as it requires some knowledge of the underlying layers such as the kernel and the corresponding hardware.

Method

We propose a set of mutation operators for embedded systems using kernel-based software and hardware fault simulation. These operators are designed for software developers so that they can use the mutation technique to test the entire system after the software is integrated with the kernel and hardware devices.

Results

A case study on a programmable logic controller for a digital reactor protection system in a nuclear power plant is conducted. Our results suggest that the proposed mutation operators are useful for fault-injection and this is evidenced by the fact that faults not injected by us were discovered in the subject software as a result of the case study.

Conclusion

We conclude that our mutation operators are useful for integration testing of an embedded system.
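One such mutation operator, forcing a hardware-access call at the kernel interface to fail, can be sketched as follows. The `read_sensor` API, the error code, and the application logic are hypothetical; the point is that the mutant exercises the integrated system's error-handling path without damaging hardware.

```python
# Sketch of one embedded-system mutation operator: force a hardware-access
# call to fail, simulating a device fault at the kernel interface. The
# `read_sensor` API and error code below are hypothetical.

def read_sensor():
    return 42          # nominal reading from a (simulated) device

def mutate_return_error(func, error_code=-1):
    """Mutation operator: replace a call's result with an error code."""
    def mutant():
        func()          # the real access still happens harmlessly
        return error_code
    return mutant

def application_logic(read):
    # Application layer under test: must handle device errors gracefully.
    value = read()
    if value < 0:
        return "fallback"
    return "reading=" + str(value)

normal = application_logic(read_sensor)
mutated = application_logic(mutate_return_error(read_sensor))
```

Test cases that fail to distinguish `normal` from `mutated` behavior indicate error-handling paths the test suite does not cover.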

6.
Software fault prediction using different techniques has been done by various researchers previously. It is observed that the performance of these techniques varies from dataset to dataset, which makes them unreliable for fault prediction in an unknown software project. On the other hand, the use of ensemble methods for software fault prediction can be very effective, as an ensemble takes advantage of different techniques for the given dataset to produce better prediction results than any individual technique. Many works are available on binary-class software fault prediction (faulty or non-faulty) using ensemble methods, but the use of ensemble methods for predicting the number of faults has not been explored so far. The objective of this work is to present a system using an ensemble of various learning techniques for predicting the number of faults in given software modules. We present a heterogeneous ensemble method for the prediction of the number of faults and use approaches based on a linear combination rule and a non-linear combination rule for the ensemble. The study is designed and conducted for different software fault datasets accumulated from publicly available data repositories. The results indicate that the presented system predicted the number of faults with higher accuracy, and the results are consistent across all the datasets. We also use prediction at level l (Pred(l)) and a measure of completeness to evaluate the results. Pred(l) gives the number of modules in a dataset for which the average relative error is less than or equal to a threshold value l. The Pred(l) analysis and the measure-of-completeness analysis also confirm the effectiveness of the presented system for predicting the number of faults. Compared to single fault prediction techniques, the ensemble methods produced improved performance for the prediction of the number of software faults.
The main impact of this work is better utilization of testing resources, helping to identify most of the faults in the software system early and quickly.
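The linear combination rule and the Pred(l) measure described above can be sketched as follows. The exact definition of relative error (in particular the guard against zero actual faults) is an assumption here, as is the illustrative data.

```python
# Sketch of two ideas from the abstract: a linear combination rule that
# averages base learners' fault-count predictions, and Pred(l), the
# fraction of modules whose absolute relative error is at most l.
# The ARE definition (guarding against zero actual faults) is an assumption.

def linear_combination(predictions, weights=None):
    """Weighted average of per-technique predictions for one module."""
    if weights is None:
        weights = [1.0 / len(predictions)] * len(predictions)
    return sum(w * p for w, p in zip(weights, predictions))

def pred_at_level(actual, predicted, level):
    """Fraction of modules with absolute relative error <= level."""
    hits = 0
    for y, y_hat in zip(actual, predicted):
        are = abs(y - y_hat) / max(y, 1)
        if are <= level:
            hits += 1
    return hits / len(actual)

# Three hypothetical base techniques' predictions for four modules:
base = [[2.0, 4.0, 3.0], [0.0, 1.0, 2.0], [10.0, 8.0, 9.0], [1.0, 1.0, 1.0]]
ensemble = [linear_combination(p) for p in base]
actual = [3, 1, 9, 1]
score = pred_at_level(actual, ensemble, level=0.3)
```

A non-linear combination rule would replace the weighted average with, for example, a learned meta-model over the base predictions.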

7.
It has become well established that software will never be bug free, which has spurred research in mechanisms to contain faults and recover from them. Since such mechanisms deal with faults, fault injection is necessary to evaluate their effectiveness. However, little thought has been put into the question of whether fault injection experiments faithfully represent the fault model designed by the user. Correspondence with the fault model is crucial to be able to draw strong and general conclusions from experimental results. The aim of this paper is twofold: to make a case for carefully evaluating whether activated faults match the fault model, and to gain a better understanding of which parameters affect the deviation of the activated faults from the fault model. To achieve the latter, we instrumented a number of programs with our LLVM-based fault injection framework. We investigated the biases introduced by limited coverage, by parts of the program being executed more often than others, and by the nature of the workload. We evaluated the key factors that cause activated faults to deviate from the model and, from these results, provide recommendations on how to reduce such deviations.
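The deviation the paper studies can be sketched numerically: an intended fault-model distribution is skewed by execution frequency, since a fault at rarely executed (or dead) code is rarely (or never) activated. The locations, counts, and the use of total variation distance below are illustrative assumptions.

```python
# Sketch of the paper's concern: faults are injected according to a fault
# model, but whether a fault is *activated* depends on how often its code
# location executes. We compare the intended model distribution with the
# activation-weighted one using total variation distance.

# Hypothetical fault model: uniform over three fault locations.
model = {"loc_a": 1 / 3, "loc_b": 1 / 3, "loc_c": 1 / 3}

# Hypothetical execution counts under the chosen workload; loc_c is dead
# code for this workload, so faults there are never activated.
exec_counts = {"loc_a": 900, "loc_b": 100, "loc_c": 0}

def activation_distribution(model, counts):
    weighted = {k: model[k] * counts[k] for k in model}
    total = sum(weighted.values())
    return {k: v / total for k, v in weighted.items()}

def total_variation(p, q):
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

activated = activation_distribution(model, exec_counts)
deviation = total_variation(model, activated)
```

A large `deviation` means conclusions drawn from the activated faults say little about the intended fault model, which is the paper's central warning.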

8.
Research on Flight Control Software Testing Techniques Based on Coverage and Fault Injection (cited by 3: 3 self-citations, 0 others)
UAV flight control software is a typical real-time embedded software system, and testing and evaluating its reliability and safety is a key difficulty in military software assurance and in the development of UAV technology. Targeting the characteristics of flight control software, this paper introduces a testing method based on coverage and fault injection, analyzes the key techniques in its testing and reliability evaluation, and briefly describes the structure and modeling method of MEADEP, a software reliability modeling tool applied in the test-data analysis process. Practice shows that rigorous testing and evaluation of safety-critical software can greatly reduce the number of latent faults, avoiding unnecessary economic losses and catastrophic events.

9.
Xiao Bing, Hu Jing, Luo Fei, Li Peng. Computer Simulation (《计算机仿真》), 2007, 24(7): 272-275
The rapid development of the automotive industry places higher demands on engine technicians, and current automotive engine training methods can hardly meet actual needs. This paper introduces a real-time automotive engine simulation trainer designed and developed to meet the requirements of modern automotive technical training. It describes the trainer's system architecture and its real-time engine simulation and fault-simulation electronic unit (REFEU), focusing on the simulation of intake-manifold vacuum and engine speed and on the implementation of fault simulation. The simulator dynamically reproduces the engine's operating process and the variation of related signals, and instructors can configure the kinds of faults encountered in real engine operation according to teaching needs. The simulator has been put into practical use and has achieved good teaching results.

10.
A state-based approach to integration testing based on UML models (cited by 3: 0 self-citations, 3 others)
Correct functioning of object-oriented software depends upon the successful integration of classes. While individual classes may function correctly, several new faults can arise when these classes are integrated together. In this paper, we present a technique to enhance testing of interactions among modal classes. The technique combines UML collaboration diagrams and statecharts to automatically generate an intermediate test model, called SCOTEM (State COllaboration TEst Model). The SCOTEM is then used to generate valid test paths. We also define various coverage criteria to generate test paths from the SCOTEM model. In order to assess our technique, we have developed a tool and applied it to a case study to investigate its fault detection capability. The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.

11.
FPGA Radiation-Tolerance Simulation Based on Dynamic Partial Reconfiguration (cited by 2: 1 self-citation, 1 other)
This paper proposes a weight-based fault-injection model, independent of any specific hardware structure, for accurately simulating the radiation tolerance of SRAM-based field-programmable gate arrays. A fault-injection simulation platform based on JTAG boundary scan and dynamic partial reconfiguration is also presented. Experimental results show that the fault-injection system composed of this software model and hardware platform is highly general, simulates radiation effects more accurately and efficiently, and has relatively low cost.
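The weight-based injection model described above can be sketched as weighted random selection of injection targets. The region names and weights below are hypothetical, not tied to a specific FPGA family or to the paper's actual weighting scheme.

```python
import random

# Sketch of a weight-based fault-injection model: each configuration-bit
# region gets a sensitivity weight, and injection targets are drawn with
# probability proportional to weight. Region names and weights are
# hypothetical, not tied to a specific FPGA family.

REGIONS = ["routing", "lut", "bram", "iob"]
WEIGHTS = [0.5, 0.3, 0.15, 0.05]

def pick_targets(n, seed=0):
    rng = random.Random(seed)   # seeded for reproducible campaigns
    return rng.choices(REGIONS, weights=WEIGHTS, k=n)

targets = pick_targets(1000)
routing_share = targets.count("routing") / len(targets)
```

Each drawn target would then be injected via partial reconfiguration of the corresponding configuration frame.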

12.
Du Yusong, Wang Daxing, Shen Jing. Computer Engineering (《计算机工程》), 2006, 32(23): 174-176
This paper describes the principle of a differential fault analysis of AES-128, gives the attack algorithm, and analyzes its probability of success. The algorithm recovers the intermediate encryption result M9; from M9 and a correct ciphertext, the last round key of AES-128 can be derived, and hence the initial key recovered. Software simulation shows that, where the physical techniques permit, if 140 random bit faults can be repeatedly induced into M9, the probability of finding the initial key exceeds 90%. Finally, it is pointed out that AES operating in cipher block chaining mode can resist the above attack.
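The algebraic relation this attack exploits can be sketched directly: the AES last round has no MixColumns, so C = ShiftRows(SubBytes(M9)) XOR K10, and therefore K10 = C XOR ShiftRows(SubBytes(M9)). The sketch below uses a toy bijective S-box in place of the real AES S-box to stay short; the structure of the recovery is the same.

```python
# Sketch of the relation the attack exploits: in the AES last round there
# is no MixColumns, so C = ShiftRows(SubBytes(M9)) XOR K10, hence
# K10 = C XOR ShiftRows(SubBytes(M9)). A toy bijective S-box stands in
# for the real AES S-box; 7 is odd, so x -> 7x+3 mod 256 is a bijection.

TOY_SBOX = [(7 * x + 3) % 256 for x in range(256)]

def sub_bytes(state):
    return [TOY_SBOX[b] for b in state]

def shift_rows(state):
    # AES state is column-major: byte i sits at row i % 4, column i // 4,
    # and row r is rotated left by r columns.
    out = [0] * 16
    for i in range(16):
        r, c = i % 4, i // 4
        out[r + 4 * ((c - r) % 4)] = state[i]
    return out

def last_round(m9, k10):
    return [a ^ b for a, b in zip(shift_rows(sub_bytes(m9)), k10)]

def recover_key(m9, ciphertext):
    return [a ^ b for a, b in zip(shift_rows(sub_bytes(m9)), ciphertext)]

m9 = list(range(16))            # stand-in intermediate state
k10 = [0xA5] * 8 + [0x5A] * 8   # stand-in last round key
c = last_round(m9, k10)
recovered = recover_key(m9, c)
```

The hard part of the attack, which the fault injection into M9 solves, is learning M9 in the first place; once it is known, the key recovery above is a single pass of XORs.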

13.
This paper proposes a strategy for automatically fixing faults in a program by combining the ideas of mutation and fault localization. Statements ranked in order of their likelihood of containing faults are mutated in the same order to produce potential fixes for the faulty program. The proposed strategy is evaluated using 8 mutant operators against 19 programs each with multiple faulty versions. Our results indicate that 20.70% of the faults are fixed using selected mutant operators, suggesting that the strategy holds merit for automatically fixing faults. The impact of fault localization on efficiency of the overall fault-fixing process is investigated by experimenting with two different techniques, Tarantula and Ochiai, the latter of which has been reported to be better at fault localization than Tarantula, and also proves to be better in the context of fault-fixing using our proposed strategy. Further experiments are also presented to evaluate stopping criteria with respect to the mutant examination process and reveal that a significant fraction of the (fixable) faults can be fixed by examining a small percentage of the program code. We also report on the relative fault-fixing capabilities of mutant operators used and present discussions on future work.
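The localize-then-mutate pipeline above hinges on a suspiciousness ranking. The Ochiai metric it uses can be sketched on hypothetical coverage data: for statement s, suspiciousness is ef / sqrt(total_failed * (ef + ep)), where ef and ep count failed and passed tests that execute s.

```python
import math

# Sketch of the localize-then-mutate pipeline: Ochiai suspiciousness
# ranks statements, and candidate fixes (mutants) would be tried in that
# order. The coverage matrix and pass/fail outcomes below are hypothetical.

# coverage[t] = set of statements executed by test t; passed[t] = outcome.
coverage = {"t1": {1, 2, 3}, "t2": {1, 3}, "t3": {1, 2, 4}}
passed = {"t1": False, "t2": True, "t3": False}

def ochiai(stmt):
    ef = sum(1 for t in coverage if stmt in coverage[t] and not passed[t])
    ep = sum(1 for t in coverage if stmt in coverage[t] and passed[t])
    total_failed = sum(1 for t in passed if not passed[t])
    denom = math.sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

statements = sorted({s for sts in coverage.values() for s in sts})
ranking = sorted(statements, key=ochiai, reverse=True)
```

Statement 2, executed by every failed test and no passed test, ranks first; the repair strategy would generate and test mutants of it before moving down the ranking.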

14.
High-assurance and complex mission-critical software systems are heavily dependent on the reliability of their underlying software applications. Early software fault prediction is a proven technique for achieving high software reliability. Prediction models based on software metrics can predict the number of faults in software modules. Timely predictions from such models can be used to direct cost-effective quality enhancement efforts to modules that are likely to have a high number of faults. We evaluate the predictive performance of six commonly used fault prediction techniques: CART-LS (least squares), CART-LAD (least absolute deviation), S-PLUS, multiple linear regression, artificial neural networks, and case-based reasoning. The case study consists of software metrics collected over four releases of a very large telecommunications system. Performance metrics, average absolute and average relative errors, are utilized to gauge the accuracy of the different prediction models. Models were built using both the original software metrics (RAW) and their principal components (PCA). Two-way ANOVA randomized-complete block design models with two blocking variables are designed with average absolute and average relative errors as response variables. System release and the model type (RAW or PCA) form the blocking variables, and the prediction technique is treated as a factor. Using multiple pairwise comparisons, the performance order of the prediction models is determined. We observe that for both average absolute and average relative errors, the CART-LAD model performs the best while the S-PLUS model is ranked sixth.

15.
An important step in the development of dependable systems is the validation of their fault tolerance properties. Fault injection has been widely used for this purpose; however, with the rapid increase in processor complexity, traditional techniques are increasingly difficult to apply. This paper presents a new software-implemented fault injection and monitoring environment, called Xception, which is targeted at modern and complex processors. Xception uses the advanced debugging and performance monitoring features existing in most modern processors to inject quite realistic faults by software, and to monitor the activation of the faults and their impact on the target system behavior in detail. Faults are injected with minimum interference with the target application. The target application is not modified, no software traps are inserted, and it is not necessary to execute the target application in a special trace mode (the application is executed at full speed). Xception provides a comprehensive set of fault triggers, including spatial and temporal fault triggers, and triggers related to the manipulation of data in memory. Faults injected by Xception can affect any process running on the target system (including the kernel), and it is possible to inject faults in applications for which the source code is not available. Experimental results are presented to demonstrate the accuracy and potential of Xception in the evaluation of the dependability properties of the complex computer systems available nowadays.
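The spatial and temporal fault triggers mentioned above can be sketched as predicates over a simulated execution trace. This is illustrative pseudologic, not Xception's actual API, which operates on hardware debug registers rather than software traces.

```python
# Illustrative sketch (not Xception's actual API) of two trigger kinds:
# a spatial trigger fires when execution reaches a given address, a
# temporal trigger fires once a time budget has elapsed.

def spatial_trigger(address):
    return lambda pc, t: pc == address

def temporal_trigger(deadline):
    return lambda pc, t: t >= deadline

def first_firing(trigger, trace):
    """Return the index in the (pc, time) trace where the trigger fires."""
    for i, (pc, t) in enumerate(trace):
        if trigger(pc, t):
            return i
    return None

# Hypothetical (program counter, elapsed time) trace:
trace = [(0x100, 0.0), (0x104, 1.5), (0x2AC, 3.0), (0x104, 4.5)]
hit_addr = first_firing(spatial_trigger(0x2AC), trace)
hit_time = first_firing(temporal_trigger(4.0), trace)
```

In the real tool, these conditions map onto hardware breakpoint and timer features so the application runs at full speed until the trigger fires.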

16.
Based on extensive field failure data for Tandem's GUARDIAN operating system, the paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

17.
A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software. Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools to focus validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique, MARS (multivariate adaptive regression splines), to build such fault-proneness models, whose functional form is a priori unknown. The results indicate that a model built on one system can be accurately used to rank classes within another system according to their fault proneness. The downside, however, is that, because of system differences, the predicted fault probabilities are not representative of the system predicted. Nevertheless, our cost-benefit model demonstrates that the MARS fault-proneness model is potentially viable from an economic standpoint. The linear model is not nearly as good, suggesting a more complex model is required.

18.
Fault-based testing focuses on detecting faults in software. Test data is typically generated considering the presence of a single fault at a time, under the coupling hypothesis, which fault-based approaches, and in particular mutation testing, assume to hold: a test set that can detect the presence of single faults in an implementation is also likely to detect the presence of multiple faults. In this paper it is formally shown that the hypothesis is guaranteed to hold for a large number of logical fault classes.
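The coupling hypothesis can be checked by brute force on one small logical example (an illustration of the claim, not the paper's proof): a test set chosen to kill each single-fault mutant of a predicate also kills the double-fault mutant. The predicate and faults below are invented for this sketch.

```python
# Brute-force illustration (one instance, not a proof) of the coupling
# hypothesis for logical faults: a test set that kills each single-fault
# mutant of a predicate also kills the double-fault mutant.

def original(a, b, c, d):
    return (a and b) or (c and d)

def mutant1(a, b, c, d):       # fault 1: first 'and' replaced by 'or'
    return (a or b) or (c and d)

def mutant2(a, b, c, d):       # fault 2: second 'and' replaced by 'or'
    return (a and b) or (c or d)

def mutant12(a, b, c, d):      # both faults present together
    return (a or b) or (c or d)

def kills(tests, m):
    """A test set kills a mutant if some test distinguishes it."""
    return any(original(*t) != m(*t) for t in tests)

# A test set chosen to kill each single-fault mutant:
tests = [(True, False, False, False), (False, False, True, False)]
single1 = kills(tests, mutant1)
single2 = kills(tests, mutant2)
double = kills(tests, mutant12)
```

Here the first test exposes fault 1, the second exposes fault 2, and either test alone already distinguishes the combined mutant, which is exactly the behavior the hypothesis predicts.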

19.
System virtualization and emulation technologies have an important impact on today's computer science research and related industries. Integrating virtualized and emulated environments, so that an operating system running in a virtual machine can obtain more important services, is a challenging and meaningful task. The live-migration capability provided by system virtualization, once suitably extended, can integrate these two computing environments. This paper proposes Roam, a framework supporting live migration of virtual machines between virtualized and emulated environments. The Roam prototype system implements live migration of a Linux virtual machine between Xen and pure QEMU. Performance tests show that Roam is a feasible live-migration solution, with both VM downtime and total migration time within an acceptable range.

20.
Laprie J.-C., Arlat J., Beounes C., Kanoun K. Computer, 1990, 23(7): 39-51
A structured definition of hardware- and software-fault-tolerant architectures is presented. Software-fault-tolerance methods are discussed, resulting in definitions for soft and solid faults. A soft software fault has a negligible likelihood of recurrence and is recoverable, whereas a solid software fault is recurrent under normal operations or cannot be recovered. A set of hardware- and software-fault-tolerant architectures is presented, and three of them are analyzed and evaluated. Architectures tolerating a single fault and architectures tolerating two consecutive faults are discussed separately. A sidebar addresses the cost issues related to software fault tolerance. The approach taken throughout is as general as possible, dealing with specific classes of faults or techniques only when necessary.
