Similar Literature
20 similar documents found.
1.
ABSTRACT

In general, software testing is a complicated and uncertain process. New faults can be introduced into the software during each fault removal; this process is called imperfect debugging. For simplicity, fault introduction rates are generally assumed to be constant. However, software debugging can be affected by many factors, such as subjective and objective influences, the difficulty and complexity of fault removal, the dependent relationships among faults, the changes in different phases of software testing, and the test schedules. Thus, the rate of fault introduction is not a constant but an irregularly fluctuating variable during software debugging. In this article, we propose an imperfect-debugging model that considers the irregular changes in fault introduction rates during software debugging. Experimental results reveal that our proposed model has good fitting capability and considerably stronger forecasting performance than the compared models, and that the proposed model assumptions are close to the actual software debugging situation. Moreover, research on the irregular fluctuation of the fault introduction rate in software debugging provides a useful reference for testing software-intensive products, for instance, in cloud computing.
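As a rough illustration of the idea, the sketch below simulates an imperfect-debugging process in which the fault introduction rate fluctuates irregularly over testing time. The sinusoid-plus-noise rate, all parameter values, and the discretization are our own assumptions; the abstract does not give the paper's actual rate function.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_imperfect_debugging(a0=100.0, b=0.05, T=200.0, dt=0.1,
                                 base=0.05, fluct=0.03, period=30.0):
    """Mean-path simulation: faults are detected in proportion to the
    residual fault content, and each removal introduces new faults at an
    irregularly fluctuating rate (hypothetical sinusoid plus noise)."""
    remaining, detected = a0, 0.0
    history = []
    for t in np.arange(0.0, T, dt):
        # irregular fluctuation, clipped so the rate stays non-negative
        intro_rate = max(0.0, base + fluct * np.sin(2 * np.pi * t / period)
                         + rng.normal(0.0, 0.01))
        d = b * remaining * dt              # faults detected in [t, t + dt)
        detected += d
        remaining += intro_rate * d - d     # imperfect debugging: re-introduction
        history.append(detected)
    return np.array(history)

m = simulate_imperfect_debugging()
print(f"expected cumulative detections at end of testing: {m[-1]:.1f}")
```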

2.
The software reliability literature contains various change-point-based software reliability growth models and related release-time problems. The primary assumption of the existing models is that a change-point exists before the software release time only. This is not realistic, because by release time the testing team has become more proficient at detecting faults through its continuous involvement in development, so the fault detection rates in the pre- and postrelease phases differ. To capture this change in the fault detection rate across the pre- and postrelease testing phases, we propose a new software reliability modeling framework with two change-points during the software lifecycle: one change-point before the release time, and the release time itself as a second change-point. Further, over the last decade software firms have changed their strategy of stopping testing at release, continuing to test after release to remove more faults and provide a better user experience. This phenomenon has attracted academics to theoretical as well as empirical studies of postrelease testing and the formulation of the related release-time problem. In this paper, we propose a software cost model to determine the optimal release time and testing-stop time under the two-change-point assumption described above. The proposed model is validated on a real-life data set.
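A minimal sketch of how such a cost model could be set up and optimized, assuming a Goel-Okumoto-style exponential mean value function with a piecewise-constant detection rate; the cost coefficients, rates, and grid search are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def cum_rate(t, b1, b2, b3, tau1, T_rel):
    """Integrated detection rate for a piecewise-constant rate b(t):
    b1 before the first change-point tau1, b2 up to release T_rel, b3 after."""
    return (b1 * np.minimum(t, tau1)
            + b2 * np.clip(t - tau1, 0.0, T_rel - tau1)
            + b3 * np.maximum(t - T_rel, 0.0))

def mean_faults(t, a, b1, b2, b3, tau1, T_rel):
    # expected cumulative faults detected by time t
    return a * (1.0 - np.exp(-cum_rate(t, b1, b2, b3, tau1, T_rel)))

def total_cost(T_rel, T_stop, a=150, b1=0.04, b2=0.07, b3=0.05, tau1=10,
               c1=1.0, c2=3.0, c3=0.5, c4=8.0):
    """Illustrative cost: c1/fault fixed pre-release, c2/fault post-release,
    c3 per unit of testing time, c4 per fault still latent at testing stop."""
    m_rel = mean_faults(T_rel, a, b1, b2, b3, tau1, T_rel)
    m_stop = mean_faults(T_stop, a, b1, b2, b3, tau1, T_rel)
    return c1 * m_rel + c2 * (m_stop - m_rel) + c3 * T_stop + c4 * (a - m_stop)

# joint grid search for the optimal release time and testing-stop time
grid = np.linspace(11, 120, 150)
best = min((total_cost(Tr, Ts), Tr, Ts) for Tr in grid for Ts in grid if Ts >= Tr)
print(f"minimum cost {best[0]:.1f} at release T={best[1]:.1f}, stop T={best[2]:.1f}")
```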

3.
Component-based software development is rapidly introducing numerous new paradigms and possibilities for delivering highly customized software in a distributed environment. Among the communication, teamwork, and coordination problems in global software development, the detection of faults is seen as the key challenge, so there is a need to ensure the reliability of component-based applications at the requirements level. Existing distributed fault-detection approaches track components drawn from various sources but fail to keep track of the large number of components spread over different locations. In this study, we propose an approach for detecting faults from the requirements of component-based systems using fuzzy logic and historical information during acceptance testing. The approach identifies error-prone components for test-case extraction and prioritizes test cases to validate components in acceptance testing. In an empirical evaluation, the proposed approach significantly outperformed the conventional procedures, i.e., the requirement-criteria and communication-coverage-criteria procedures, in component selection and acceptance testing, without irrelevancy and redundancy. The F-measures of the proposed approach indicate accurate component selection, and the rate of fault identification in components was higher with the proposed approach (more than 80 percent) than with the requirement-criteria and code-coverage-criteria procedures (less than 80 percent). Similarly, the fault detection rate increased to 92.80 percent with the proposed approach, compared with less than 80 percent for existing methods. The proposed approach provides a comprehensive guideline and roadmap for practitioners and researchers.
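To make the fuzzy-logic idea concrete, here is a minimal sketch that ranks components as error-prone from historical metrics using triangular memberships and a single Mamdani-style rule. The metrics, membership breakpoints, and component names are hypothetical, since the abstract does not specify the actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def error_proneness(past_faults, change_freq):
    # fuzzy sets for "many past faults" and "frequently changed" (illustrative)
    high_faults = tri(past_faults, 2, 8, 15)
    high_churn = tri(change_freq, 5, 20, 40)
    # simple rule: error-prone if faults are high AND churn is high
    return min(high_faults, high_churn)

# hypothetical historical data: (past fault count, change frequency)
components = {"Auth": (6, 18), "Billing": (1, 3), "Search": (9, 25)}
ranked = sorted(components, key=lambda c: -error_proneness(*components[c]))
print("test-case prioritization order:", ranked)
```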

4.
Because software is released in multiple versions, code changes accumulate in the software. Owing to this added complexity, the testing team may be unable to correct a fault upon detection, leaving the actual fault to reside in the software, which is termed imperfect debugging; alternatively, the original fault may be replaced by another fault, leading to error generation. Many other factors affect the testing phase, such as the testing strategy, test cases, and the skill, efficiency, and learning of the testing team. These factors cannot be kept stable during the whole testing process; they may change at any moment, causing the underlying processes to change, which is known as a change-point. Keeping all these critical testing-environment factors under consideration, a new software reliability growth model is proposed, derived from a nonhomogeneous Poisson process (NHPP)-based unified scheme for multi-release, two-stage fault detection/observation and correction/removal software reliability models. The developed model is numerically illustrated on the Tandem data set for four releases.
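As a hedged illustration of a two-stage (detection-then-correction) NHPP model: when both stages are exponential with a common rate b, the expected number of corrected faults takes the delayed S-shaped form m(t) = a(1 − (1 + bt)e^{−bt}). The weekly counts below are invented for illustration; they are not the Tandem data.

```python
import numpy as np
from scipy.optimize import curve_fit

def m_corrected(t, a, b):
    # delayed S-shaped mean value function of a two-stage NHPP model
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# illustrative weekly cumulative corrected-fault counts for one release
weeks = np.arange(1, 13, dtype=float)
faults = np.array([2, 6, 13, 21, 30, 38, 45, 50, 54, 56, 58, 59], dtype=float)

(a_hat, b_hat), _ = curve_fit(m_corrected, weeks, faults, p0=(70, 0.2))
print(f"estimated total fault content a={a_hat:.1f}, rate b={b_hat:.3f}")
```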

5.
In practice, debugging operations during the testing phase of software development are not always performed perfectly. In other words, not all detected software faults are corrected and removed; this is generally called imperfect debugging. In this paper, we discuss a software reliability growth model that considers imperfect debugging. Defining a random variable representing the cumulative number of faults corrected up to a specified testing time, we describe the model by a semi-Markov process. Several quantitative measures are then derived for software reliability assessment in an imperfect debugging environment. The application of this model to optimal software release problems is also discussed. Finally, numerical illustrations for software reliability measurement and optimal software release policies are presented.
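A minimal sketch of the underlying idea, assuming exponentially distributed times between debugging attempts and a fixed success probability p; the process counting corrected faults is then a simple Markov jump process. All numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_faults=50, p=0.8, rate=1.0, horizon=100.0):
    """Each debugging attempt perfectly corrects a fault with probability p;
    otherwise the fix fails and the fault remains (imperfect debugging)."""
    t, corrected = 0.0, 0
    times, counts = [0.0], [0]
    while corrected < n_faults and t < horizon:
        t += rng.exponential(1.0 / rate)   # time to the next debugging attempt
        if rng.random() < p:               # successful correction
            corrected += 1
        times.append(t)
        counts.append(corrected)
    return np.array(times), np.array(counts)

times, counts = simulate()
print(f"faults corrected by t={times[-1]:.1f}: {counts[-1]}")
```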

6.
7.
Research on a fault diagnosis system for the docking and joint commissioning of a missile weapon system
This paper briefly introduces several fault diagnosis techniques, analyzes the faults that may occur during the docking and joint commissioning of a certain missile weapon system, proposes ideas and principles for building a fault diagnosis system, presents the hardware and software design of the diagnosis system, and points out its advantages and directions for future development.

8.
Many software reliability growth models (SRGMs) based on a non-homogeneous Poisson process (NHPP) have been developed under the assumption of a constant fault detection rate (FDR) and a fault detection process dependent only on the residual fault content. In this paper we develop an NHPP-based SRGM using a different approach to model development, in which the fault detection process depends not only on the residual fault content but also on the testing time. It incorporates a realistic situation encountered in software development, where the fault detection rate is not constant over the entire testing process but changes due to variations in resource allocation, defect density, running environment, and testing strategy (the change-point). Here, the FDR is defined as a function of testing time. The proposed model also incorporates testing effort together with the change-point concept, which is useful in solving the problems of runaway software projects and provides a testing-effort control technique and the flexibility for project managers to obtain a desired reliability level. Failure data collected from software development projects are used to show the model's applicability and effectiveness. The Statistical Package for the Social Sciences (SPSS), based on the least-squares method, has been used to estimate the unknown parameters. The mean squared error (MSE), relative predictive error (RPE), average mean squared error (AMSE), and average relative predictive error (ARPE) have been used to validate the model. The results show that the proposed model is accurate, highly predictive, and reflective of industrial software project concepts.
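A hedged sketch of fitting a change-point SRGM by least squares (using SciPy here rather than SPSS): the detection rate switches from b1 to b2 at a change-point tau. The mean value function and the weekly failure counts below are illustrative assumptions, not the paper's exact model or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def m(t, a, b1, b2, tau):
    """NHPP mean value function with a piecewise-constant FDR: b1 before
    the change-point tau, b2 after it."""
    B = np.where(t < tau, b1 * t, b1 * tau + b2 * (t - tau))
    return a * (1.0 - np.exp(-B))

# illustrative cumulative failure counts over 20 weeks of testing
t = np.arange(1, 21, dtype=float)
y = np.array([5, 11, 16, 22, 27, 31, 35, 38, 44, 51,
              58, 64, 69, 73, 77, 80, 82, 84, 85, 86], dtype=float)

popt, _ = curve_fit(m, t, y, p0=(100, 0.05, 0.1, 8), maxfev=10000)
mse = np.mean((y - m(t, *popt)) ** 2)     # the MSE criterion from the paper
print(f"a={popt[0]:.1f} b1={popt[1]:.3f} b2={popt[2]:.3f} "
      f"tau={popt[3]:.1f} MSE={mse:.2f}")
```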

9.
Typical software fault tolerance techniques are modeled on successful hardware fault tolerance techniques. Software fault tolerance techniques rely on design redundancy to tolerate residual design faults in the software; hardware fault tolerance techniques rely on component redundancy to tolerate physical degradation in the hardware. Investigations of design-redundant software have revealed difficulties in adapting the hardware strategy to software. We survey three categories of issues: (1) practical issues in the implementation of design-redundant software, (2) economic considerations for the development and maintenance of multiple software implementations, and (3) assessment difficulties in measuring and predicting the performance of design-redundant software. All of these issues should be considered by would-be developers of design-redundant software to justify use of the technique.

10.
Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete; automated testing techniques are therefore needed to keep up with such development speed. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It has been observed that test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches is still manual. Automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach for MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the classification of input domains, which was done manually in the existing approach. The results showed that the proposed approach detected 6.62 percent more faults than conventional MB-TCG but at the same time generated 43 more tests. The proposed approach effectively detects faults, but further treatment of the generated tests, such as test-case prioritization, is needed to increase the effectiveness and efficiency of testing.
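A small sketch of the combinatorial step: derive test inputs from an EFSM transition guard using boundary value analysis, then combine the domains. The guard, variable ranges, and helper names are illustrative, not from the paper.

```python
from itertools import product

def boundary_values(lo, hi):
    """Classic BVA: values at and around each boundary of the domain."""
    return sorted({lo, lo + 1, (lo + hi) // 2, hi - 1, hi})

# EFSM transition guard: "1 <= pin_tries <= 3 and 0 <= amount <= 500"
domains = {
    "pin_tries": boundary_values(1, 3),
    "amount": boundary_values(0, 500),
}

# exhaustive product here; a covering array would shrink this for many variables
tests = [dict(zip(domains, combo)) for combo in product(*domains.values())]
print(f"{len(tests)} test inputs generated, e.g. {tests[0]}")
```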

11.
When specifying requirements for software controlling hybrid systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. In this paper, we propose Causal Requirements Safety Analysis (CRSA) as a technique to qualitatively evaluate the causal relationship between software faults and physical hazards. This technique extends the qualitative formal method process and utilizes information captured in the state trajectory, providing specific guidelines on how to identify failure modes and the relationships among them. Using a simplified electrical power system as an example, we describe the step-by-step procedure for conducting CRSA. Our experience of applying CRSA to perform fault tree analysis on the requirements for the Wolsong nuclear power plant shutdown system indicates that CRSA is an effective technique for assisting safety engineers.

12.
Integration issues in component-based systems tend to be targeted at the later phases of software development, mostly after components have been assembled to form an executable system. However, errors discovered at these phases are typically hard to localise and expensive to fix. To address this problem, the authors introduce assume-guarantee testing, a technique that establishes key properties of a component-based system before component assembly, when the cost of fixing errors is smaller. Assume-guarantee testing is based on the (automated) decomposition of system-level requirements into local component requirements at design time. The local requirements take the form of assumptions and guarantees that each component makes on, or provides to, the system, respectively. Checking the requirements is performed during testing of individual components (i.e. unit testing) and may uncover system-level violations prior to system testing. Furthermore, assume-guarantee testing may detect such violations with a higher probability than traditional testing. The authors also discuss an alternative technique, namely predictive testing, that uses the local component assumptions and guarantees to test assembled systems: given a non-violating system run, this technique can predict violations by alternative system runs without constructing those runs. The authors demonstrate the proposed approach and its benefits by means of two NASA case studies: a safety-critical protocol for autonomous rendezvous and docking, and the executive subsystem of the planetary rover controller K9.
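A toy sketch of the idea in unit-test form: a hypothetical arbiter component states an assumption on its environment and a local guarantee (mutual exclusion) that is checked during unit testing, before assembly. The component, names, and property are our own illustration, not from the NASA case studies.

```python
class Arbiter:
    """Hypothetical component: grants a shared resource to one client."""
    def __init__(self):
        self.granted = set()

    def request(self, client: str) -> bool:
        # assumption on the environment: client ids are non-empty strings
        assert isinstance(client, str) and client, "assumption violated"
        if not self.granted:
            self.granted.add(client)
            return True
        return False

    def release(self, client: str) -> None:
        self.granted.discard(client)

def test_guarantee_mutual_exclusion():
    """Local guarantee: at most one client holds the grant at any time."""
    arb = Arbiter()
    assert arb.request("a") and not arb.request("b")
    assert len(arb.granted) <= 1       # the guarantee, checked at unit level
    arb.release("a")
    assert arb.request("b")

test_guarantee_mutual_exclusion()
print("local assume-guarantee checks passed")
```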

13.
In this article, we define a model for fault detection during the beta testing phase of a software design project. First, given sampled data, we illustrate how to estimate the failure rate and the number of faults in the software using Bayesian statistical methods with various prior distributions. Second, given a suitable cost function, we show how to optimize the duration of a further test period for each of the prior distribution structures considered. Michael Wiper acknowledges assistance from the Spanish Ministry of Science and Technology via project BEC2000-0167 and support from projects SEJ2004-03303 and 06/HSE/0181/2004.
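A hedged sketch of the Bayesian estimation step for one prior choice: with exponential inter-failure times (constant rate) and a conjugate Gamma prior, the posterior is again Gamma. The prior parameters and data below are invented; the paper considers several priors beyond this one.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated beta-test data: inter-failure times from a constant-rate process
true_rate = 0.4
inter_failure_times = rng.exponential(1.0 / true_rate, size=25)

# Gamma(alpha0, beta0) prior on the rate; with n observed failures and total
# time sum(t), the posterior is Gamma(alpha0 + n, beta0 + sum(t))
alpha0, beta0 = 2.0, 4.0
n = len(inter_failure_times)
alpha_post = alpha0 + n
beta_post = beta0 + inter_failure_times.sum()

post_mean = alpha_post / beta_post
draws = rng.gamma(alpha_post, 1.0 / beta_post, 100_000)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean rate={post_mean:.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
```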

14.
We discuss optimal software release problems that consider both present value and a warranty period (in the operational phase) during which the developer has to pay the cost of fixing any faults detected. From the viewpoint of software development management, it is very important to determine the optimal software testing time by integrating the total expected testing cost and the reliability requirement. We apply a nonhomogeneous Poisson process model to the formulation of a software cost model and analyze three typical cases of the cost model. Moreover, we derive several optimal release policies. Finally, numerical examples are shown to illustrate the results of the optimal policies.
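A minimal sketch of one such cost model, assuming a Goel-Okumoto mean value function m(t) = a(1 − e^{−bt}), a warranty of length w, and continuous discounting of the post-release (warranty) outlay. All coefficients are illustrative assumptions, not the paper's exact formulation or its three cases.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 100.0, 0.05          # expected total faults, detection rate
c1, c2, c3 = 1.0, 10.0, 0.8 # fix cost in testing, fix cost under warranty, test cost/time
r, w = 0.01, 50.0           # continuous discount rate, warranty length

def m(t):
    # Goel-Okumoto mean value function
    return a * (1.0 - np.exp(-b * t))

def expected_cost(T):
    testing = c1 * m(T) + c3 * T
    warranty = c2 * (m(T + w) - m(T))      # faults surfacing during warranty
    return testing + np.exp(-r * T) * warranty   # discount post-release outlay

res = minimize_scalar(expected_cost, bounds=(1.0, 300.0), method="bounded")
print(f"optimal release time T*={res.x:.1f}, expected cost={res.fun:.1f}")
```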

15.
毛磊  唐华 《中国测试技术》2007,33(5):109-113
With the development of fault diagnosis technology, the use of professional simulation tools for the testability analysis of real circuits has become increasingly common. LASAR (Logic Automated Stimulus And Response) is an excellent simulation software system for digital circuit test development and logic analysis. This paper introduces a method for digital circuit testability analysis using LASAR fault simulation and, through the simulation of a real circuit, illustrates the method's application in practical engineering.

16.
党静  余臻  刘宇 《计测技术》2023,(5):91-96
This paper presents a theoretical analysis of a typical fault, abnormal voltage output, observed in a certain single-axis pendulous accelerometer during use. Applying the comprehensive fault tree analysis method and tracing the fault layer by layer, it expresses the internal relationships behind the abnormal output voltage, directly indicates the logical relationship between unit faults and the overall fault, and summarizes the abnormal-output-voltage problems actually encountered in engineering applications of this accelerometer type. The fault cause is then located precisely and in detail: a fracture of the gold bonding wire at the anode of diode D4 in the oscillator circuit is identified as the root cause of this accelerometer model's fault. An analysis of the gold-wire fracture mechanism in the circuit is incorporated, and the accuracy of the fault analysis is verified through testing and corrective measures, improving the reliability of the accelerometer to a certain extent. The findings provide a useful reference for the reliable use of accelerometers.

17.
Test limitations of parametric faults in analog circuits
This paper investigates the detectability of parameter faults in linear, time-invariant, analog circuits and sheds new light on a number of very important test attributes. We show that there are inherent limitations on analog fault detectability: many parameter faults are undetectable irrespective of which test methodology is used to catch them. It is also shown that, in many cases, the minimum-size detectable parameter fault is considerably larger than the normal parameter drift, sometimes two to five times the drift. We show that one of the fault-masking conditions in analog circuits, commonly believed to be true, is in fact untrue, and illustrate this with a simple counterexample. We also show that, in analog circuits, it is possible for a fault-free parameter to mask an otherwise detectable parametric fault. We define the small-size parameter fault coverage and describe ways to calculate or estimate it; this figure of merit is especially suitable for characterizing test efficiency in the presence of small-size parameter faults. We further show that circuit specification requirements may be translated into parameter tolerance requirements, so that a test for parametric faults can indirectly address circuit specification compliance. The test limitations of parametric faults in analog circuits are illustrated using numerous examples.
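A small sketch of why the minimum detectable fault can exceed normal drift, using a hypothetical resistive divider: a fault on R2 is only guaranteed detectable once the output leaves the band produced by normal ±5% drift of the fault-free parameters. The circuit, tolerances, and numbers are our own illustration, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(7)

Vin, R1n, R2n, tol = 1.0, 10e3, 10e3, 0.05   # nominal divider, +/-5% drift

def vout(r1, r2):
    # divider output: Vout = Vin * R2 / (R1 + R2)
    return Vin * r2 / (r1 + r2)

# Monte Carlo the fault-free output band under normal parameter drift
r1 = R1n * (1 + rng.uniform(-tol, tol, 100_000))
r2 = R2n * (1 + rng.uniform(-tol, tol, 100_000))
band_hi = vout(r1, r2).max()

# smallest positive R2 fault guaranteed to push Vout above the band,
# even when R1 drifts in the least favourable direction
for dev in np.arange(0.005, 1.0, 0.005):
    if vout(R1n * (1 + tol), R2n * (1 + dev)) > band_hi:
        print(f"drift is +/-{tol:.0%}, yet the smallest always-detectable "
              f"R2 fault is about +{dev:.1%}")
        break
```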

18.
This paper aims to describe a new methodology specifically designed for testing measurement and diagnostic software. A black-box procedure allows the user to verify whether the functional requirements of a software module under test are fulfilled. The robust experimental design techniques and statistical theories implemented to generate the software input test sets are described in detail. The reliability of the testing methodology is estimated by applying it to diagnostic software in wide use throughout the automotive industry. The results of validation tests carried out on data from a car engine measurement system are reported and analyzed.

19.
A new approach for the protection of parallel transmission lines is presented, using a time-frequency transform known as the S-transform, which generates the S-matrix during fault conditions. The S-transform is an extension of the wavelet transform and provides excellent time localisation of voltage and current signals during fault conditions. The change in energy is calculated from the S-matrix of the current signal using signal samples over a period of one cycle. The change in energy in any of the phases of the two lines can be used to identify the faulty phase based on a threshold value. Once the faulty phase is identified, the differences in magnitude and phase are utilised to identify the faulty line. For similar types of simultaneous faults on both lines and for external faults beyond the protected zone, where phasor comparison does not work, the impedance to the fault point is calculated from the estimated phasors. The computed phasors are then used to trip the circuit breakers in both lines. The proposed method for transmission-line protection covers all 11 types of shunt fault on one line as well as simultaneous faults on both lines. The robustness of the proposed algorithm is tested by adding significant noise to the simulated voltage and current waveforms of a parallel transmission line. A laboratory power network simulator is used to test the efficacy of the algorithm in a more realistic manner.
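A hedged sketch of the core computation: a discrete Stockwell (S-) transform via the standard frequency-domain algorithm, and a one-cycle "change in energy" index computed from the resulting S-matrix. The sampling rate, signals, and fault model are invented for illustration; the paper's thresholds and phasor logic are not reproduced.

```python
import numpy as np

def stockwell(x):
    """Discrete S-transform: for each voice k, IFFT of the shifted spectrum
    multiplied by a frequency-scaled Gaussian window."""
    N = len(x)
    X = np.fft.fft(x)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                           # zero-frequency voice
    m = np.arange(N)
    for k in range(1, N // 2 + 1):
        gauss = np.exp(-2 * np.pi**2 * np.minimum(m, N - m)**2 / k**2)
        S[k] = np.fft.ifft(np.roll(X, -k) * gauss)
    return S

fs, f0 = 3200, 50                               # sampling rate, power frequency
t = np.arange(0, 0.08, 1 / fs)                  # four 50 Hz cycles
i_healthy = np.sin(2 * np.pi * f0 * t)
# hypothetical fault at t = 0.04 s: amplitude jump plus a 350 Hz component
i_fault = np.where(t < 0.04, i_healthy,
                   6 * np.sin(2 * np.pi * f0 * t) + 0.8 * np.sin(2 * np.pi * 350 * t))

n_cycle = fs // f0                              # samples in one cycle
energy = lambda x: np.sum(np.abs(stockwell(x))**2)
delta_E = energy(i_fault[-n_cycle:]) - energy(i_healthy[:n_cycle])
print(f"change in S-matrix energy over one cycle: {delta_E:.1f} (trip if above threshold)")
```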

20.
This paper presents a methodology, based on Virtual Reality (VR), for representing a manufacturing system in order to help with requirements analysis (RA) in CIM system development, suitable for SMEs. The methodology can reduce the cost and time involved at this stage by producing precise and accurate plans, specification requirements, and a design for CIM information systems, which are essential for small and medium scale manufacturing enterprises. Virtual Reality is computer-based and has better visualization effects for representing manufacturing systems than any other graphical user interface, which helps users to collect information and decision needs quickly and correctly. A VR-RA tool is designed and developed as a software system to realize the features outlined in each phase of the methodology. A set of rules and a knowledge base are appended to the methodology to remove any inconsistency that could arise between the material and information flows during the requirements analysis. A novel environment for matching the physical and information model domains is suggested to delineate the requirements.
