Similar Literature
20 similar documents found.
1.
"Software test-coverage measures" quantify the degree of thoroughness of testing. Tools are now available that measure test-coverage in terms of blocks, branches, computation-uses, predicate-uses, etc. that are covered. This paper models the relations among testing time, coverage, and reliability. An LE (logarithmic-exponential) model is presented that relates testing effort to test coverage (block, branch, computation-use, or predicate-use). The model is based on the hypothesis that the enumerable elements (like branches or blocks) for any coverage measure have various probabilities of being exercised; just like defects have various probabilities of being encountered. This model allows relating a test-coverage measure directly with defect-coverage. The model is fitted to 4 data-sets for programs with real defects. In the model, defect coverage can predict the time to next failure. The LE model can eliminate variables like test-application strategy from consideration. It is suitable for high reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested. The data-sets used suggest the potential of the proposed model. The model is simple and easily explained, and thus can be suitable for industrial use. The LE model is based on the time-based logarithmic software-reliability growth model. It considers that: at 100% coverage for a given enumerable, all defects might not yet have been found.  相似文献   

2.
Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experience shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault-removal process lags behind the fault-detection process. In this paper, we first review fault detection & correction processes in software reliability modeling. We then illustrate, with several examples, the fact that detected faults cannot always be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general, covering a variety of known SRGM as special cases under different conditions. Numerical examples are presented, and the results show that the proposed framework incorporating both fault dependency and debugging time lag has better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.
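A minimal sketch of the detection/correction lag idea, assuming (this is an illustration, not the paper's general framework) a Goel-Okumoto detection process and a constant debugging delay:

```python
import numpy as np

def m_detect(t, a=100.0, b=0.1):
    """Goel-Okumoto mean value function: expected faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

def m_correct(t, lag=5.0, **kw):
    """Expected faults corrected by time t: detection shifted by a fixed
    debugging lag (dependent faults wait for their leading faults)."""
    return m_detect(np.maximum(t - lag, 0.0), **kw)

t = np.linspace(0.0, 60.0, 7)
print(m_detect(t) - m_correct(t))  # backlog of detected-but-uncorrected faults
```

The paper's models generalize this by letting the lag depend on fault dependency structure rather than being a constant.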

3.
Past research in software reliability has concentrated on the reliability growth of one-version software. This work proposes models for describing the dependency of N-version software. The models are illustrated via a logarithmic Poisson execution-time model by postulating assumptions of dependency among the nonhomogeneous Poisson processes (NHPPs) of debugging behavior. Two-version, three-version, and general N-version models are proposed. The redundancy techniques discussed serve as a basis for fault-tolerant software design. The system reliability, related performance measures, and estimation of model parameters when N=2 are presented. Based on the assumption of linear dependency among the NHPPs, two types of models are developed. The analytical models are useful primarily in estimating and monitoring the reliability of fault-tolerant software. Without considering the dependency of failures, the estimation of reliability would not be conservative.

4.
Several research studies have shown a strong relationship between program complexity, as measured by the structural properties of a program, and its error properties, as measured by the number and types of errors and by error detection and correction times. This research applies to: a) setting threshold values of complexity in software production in order to avoid undue difficulty with program debugging; b) using complexity as an index for allocating resources during the test phase of software development; c) using complexity for developing test strategies and selecting test data. Application c) uses the directed-graph representation of a program and its complexity measures to decompose the program into its basic constructs. Identifying the constructs serves to identify a) the components of the program which must be tested, and b) the test data which are needed to exercise these components. Directed-graph properties which apply to program development and testing are defined; examples of the application of graph properties to program development and testing are given; the results of program complexity and error measurements are presented; and a procedure for complexity measurement and its use in programming and testing is summarized.
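One widely used complexity measure over the directed-graph representation is McCabe's cyclomatic number, V(G) = E - N + 2P. A small sketch of using it as a debugging-difficulty threshold (the graph and the classic cutoff of 10 are illustrative choices, not the study's data):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe cyclomatic complexity V(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph of one module.
nodes = {"entry", "a", "b", "c", "exit"}
edges = [("entry", "a"), ("a", "b"), ("a", "c"),
         ("b", "exit"), ("c", "a"), ("c", "exit")]

v = cyclomatic_complexity(edges, nodes)
if v > 10:
    print(f"V(G)={v}: consider decomposing before the test phase")
else:
    print(f"V(G)={v}: within threshold")
```

V(G) also bounds the number of basis paths, which is what ties the measure to test-data selection in application c).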

5.
Addressing buffer overflows that arise during embedded operating-system software development, this paper proposes a buffer-overflow detection method based on boundary checking, gives its theoretical basis, and describes the experimental steps and process. The method allocates a contiguous memory region for the data buffer to be monitored and a check variable; whether the buffer has overflowed is detected directly from whether the check variable has changed, after which the corresponding alarm and remedial measures are executed.
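The paper targets embedded C code; the following Python/ctypes sketch only illustrates the guard-variable layout it describes: the buffer and a check variable are placed in contiguous memory, and a changed check value signals an overflow. The structure, magic value, and write are all hypothetical.

```python
import ctypes

MAGIC = 0xDEADBEEF

class GuardedBuffer(ctypes.Structure):
    _fields_ = [("buf", ctypes.c_char * 8),   # monitored data buffer
                ("guard", ctypes.c_uint32)]   # contiguous check variable

rec = GuardedBuffer()
rec.guard = MAGIC

data = b"x" * 12                              # 12 bytes into an 8-byte buffer
ctypes.memmove(rec.buf, data, len(data))      # simulated unchecked write

if rec.guard != MAGIC:
    print("buffer overflow detected: raise alarm and take remedial action")
```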

6.
Feasible images have been defined as those images that could have generated the original data by the statistical process that governs the measurement. In the case of emission tomography, the statistical process of emission is Poisson and it is known that feasible images resulting from the maximum likelihood estimator (MLE) and Bayesian methods with entropy priors can be of high quality. Tests for feasibility have been described that are based on one critical assumption: the image that is being tested is independent of the data, even though the reconstruction algorithm has used those data in order to obtain the image. This fact could render the procedure unacceptable unless it is shown that its effects on the results of the tests are negligible. Experimental evidence is presented showing that images reconstructed by the MLE and stopped before convergence do indeed behave as if independent of the data. The results justify the use of hypothesis testing in practice, although they leave the problem of analytical proof still open.
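A rough sketch of the setting: an MLEM iteration for Poisson emission data, stopped well before convergence, followed by a crude feasibility statistic. The published feasibility tests are more refined; this only conveys the "could this image have generated the data?" idea, and the system matrix and counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 16))   # hypothetical system matrix
x_true = rng.uniform(1.0, 5.0, size=16)
y = rng.poisson(A @ x_true)                # measured Poisson counts

x = np.ones(16)                            # MLEM start: flat image
sens = A.sum(axis=0)                       # sensitivity (column sums)
for _ in range(20):                        # stop before convergence
    proj = A @ x
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

# Crude feasibility check: for Poisson data, Var = mean, so normalized
# residuals should average about 1 for a feasible image.
proj = A @ x
chi2 = np.mean((y - proj) ** 2 / np.maximum(proj, 1e-12))
print("feasible-looking" if abs(chi2 - 1.0) < 0.3 else "not feasible")
```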

7.
An algorithmic procedure is developed for determining the release time of a software system with multiple modules, where the underlying module structure is explicitly incorporated. Depending on how much a module is used during execution, the impact of software bugs from one module is distinguished from the impact of software bugs from another module. It is assumed that software bugs in one module have i.i.d. lifetimes, but lifetime distributions can vary from one module to another. For the two cases of exponential and Weibull lifetimes, statistical procedures are developed for estimating distribution parameters based on failure data during the test period for individual modules. In the exponential case, the number of software bugs can also be estimated following H. Joe and N. Reid (J. Amer. Statist. Assoc., vol. 80, pp. 222-6, 1985). These estimates enable one to evaluate the average cost due to undetected software bugs. By introducing an objective function incorporating this average cost as well as the time-dependent value of the software system and the cumulative running cost of software testing, a decision criterion is given for determining whether the software system should be released or the test should be continued for a further period Δ. The validity of this procedure is examined through extensive Monte Carlo simulation.

8.
Reliability demonstration testing for software
Reliability demonstration testing, conducted to demonstrate whether a specified reliability has been achieved in a software product, is studied. The software-reliability demonstration test is based on two risks. The procedure determines a test time and a maximum number of software failures during the test under prespecified producer and consumer risks. A procedure for zero-failure reliability demonstration testing is proposed, where the software product is accepted only if no failures occur in a specified testing time. The procedure based on the two types of risk is identical to that for hardware items, and the formulation of zero-failure reliability demonstration testing for software is likewise identical to that for hardware items.
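A worked sketch of the zero-failure case under the usual constant-failure-rate (exponential) assumption: choose the test time T so that accepting after zero failures carries exactly the consumer risk, i.e. exp(-T/MTBF_target) = beta. The target MTBF and risk below are example values.

```python
import math

def zero_failure_test_time(mtbf_target, consumer_risk):
    """Test time T with P(0 failures | true MTBF = target) = consumer_risk,
    i.e. exp(-T / mtbf_target) = beta  =>  T = mtbf_target * ln(1 / beta)."""
    return mtbf_target * math.log(1.0 / consumer_risk)

# Demonstrate MTBF >= 1000 h at 10% consumer risk:
T = zero_failure_test_time(1000.0, 0.10)
print(f"run {T:.0f} h with zero failures to accept")  # about 2303 h
```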

9.
刘涛  卢希  冯飞  王月波 《电讯技术》2022,62(3):317-322
To address the conflict between the testing and development environments of airborne software, the poor controllability and generality of test environments, and the difficulty of non-intrusive testing, this paper analyzes the advantages and disadvantages of fully physical test environments and hardware-in-the-loop simulation test environments, studies full digital simulation testing technology, and designs and implements a full digital simulation test system for airborne software. The system consists of a simulation core platform, simulation tool components, co-simulation components, and human-machine interaction components, and provides reusable libraries for airborne processors, memory, peripherals, and more. Key techniques are proposed, including dynamic binary translation based on LLVM, time synchronization for co-simulation, and data-communication mechanisms, achieving full digital, high-speed, closed-loop simulation of airborne software. Engineering practice shows that the system reduces dependence on hardware devices, simplifies the setup of test environments, and improves testing efficiency by about 42%.

10.
Effect of code coverage on software reliability measurement
Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimation exists because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues into its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted by the models. This paper presents a technique intended to solve this problem, using both time and code-coverage measures to predict software failures in operation. Coverage information collected during testing is used to consider only the effective portion of the test data. Execution time between test cases which neither increases code coverage nor causes a failure is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied both to the raw data and to the data adjusted using this technique. The results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has the potential not only to achieve more accurate applications of software reliability models, but also to reveal effective ways of conducting software testing.
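A minimal sketch of the time-adjustment step described above: the execution time of a test case that neither increases coverage nor causes a failure is compressed by a parameterized factor before any growth model is fitted. The factor value and test log are hypothetical.

```python
def adjust_times(test_log, factor=0.1):
    """test_log: list of (exec_time, coverage_gain, failed) per test case.
    Returns the cumulative adjusted time after each test case."""
    adjusted, total = [], 0.0
    for exec_time, coverage_gain, failed in test_log:
        effective = exec_time if (coverage_gain > 0 or failed) else factor * exec_time
        total += effective
        adjusted.append(total)
    return adjusted

log = [(10.0, 5, False), (12.0, 0, False), (8.0, 0, True), (15.0, 0, False)]
print(adjust_times(log))  # saturated, fault-free runs contribute little time
```

The adjusted cumulative times would then feed a model such as Goel-Okumoto in place of the raw clock times.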

11.
Generally, in a complex software system there may be errors whose removal depends on previously removed errors, and this may slow down the removal process for a period of time. This dependency can be described in different ways. In this paper, we develop an SRGM which accounts for the underlying error dependency in a software system. The SRGM has built-in flexibility and has been tested on real software error data to show its applicability.

12.
This study applies canonical correlation analysis to investigate the relationships between source-code (SC) complexity and fault-correction (FC) activity. Product and process measures collected during the development of a commercial real-time product provide the data for this analysis. Sets of variables represent SC complexity and FC activity, and a canonical model represents the relationships between these sets. s-significant canonical correlations along 2 dimensions support the hypothesis that SC complexity exerted a causal influence on FC activity during the system-test phase of the real-time product. Interpretation of the s-significant canonical correlations suggests that two subsets of product measures had different relationships with process activity: one is related to design-change activity that resulted in faults, and the other is related directly to faults. Further, faults having less impact on the system-test process were associated with design-change activity that occurred during the system-test phase, while those having more impact were associated with SC complexity at entry to the system-test phase. The study demonstrates canonical correlation analysis as a useful exploratory tool for understanding influences that affected past development efforts. However, generalization of the canonical relationships to all software development efforts is untenable, since the model does not represent many important influences on the modeled latent variables, e.g., schedule pressure, testing effort, product domain, and level of engineering expertise.
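A minimal sketch of the analysis in the spirit of the study (the variable sets, sample size, and data below are illustrative, not the study's measures):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 60
X = rng.normal(size=(n, 3))                           # e.g. size, V(G), fan-out
Y = 0.8 * X[:, :2] + 0.3 * rng.normal(size=(n, 2))    # e.g. faults, fix effort

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)                      # canonical variate pairs
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")
```

Each canonical correlation measures how strongly one linear combination of complexity measures tracks one linear combination of fault-correction measures, which is what the study's two s-significant dimensions capture.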

13.
Four implementations of fault-tolerant software techniques are evaluated with respect to hardware and design faults. Project participants were divided into four groups, each of which developed fault-tolerant software based on a common specification. Each group applied one of the following techniques: N-version programming, recovery block, concurrent error-detection, and algorithm-based fault tolerance. Independent testing and modeling groups analyzed the software. The testing group subjected it to simulated design and hardware faults. The data were then mapped into a discrete-time Markov model developed by the modeling group. The effectiveness of each technique with respect to availability, correctness, and time to failure given an error, as shown by the model, is contrasted with measured data. The model is analyzed with respect to additional figures of merit identified during the modeling process, and the techniques are ranked using an application taxonomy
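A sketch of the discrete-time Markov view used here, with hypothetical per-step transition probabilities among "correct", "recovering" (error detected), and "failed" states. The numbers are illustrative, not the study's measured data.

```python
import numpy as np

P = np.array([
    [0.97, 0.02, 0.01],   # correct -> correct / recovering / failed
    [0.90, 0.05, 0.05],   # recovery (e.g. a recovery block) usually succeeds
    [0.00, 0.00, 1.00],   # failure is absorbing
])

pi = np.array([1.0, 0.0, 0.0])    # start in the correct state
for _ in range(50):               # transient analysis over 50 steps
    pi = pi @ P
print(f"P(correct after 50 steps) = {pi[0]:.3f}, P(failed) = {pi[2]:.3f}")
```

Comparing such state probabilities across the four techniques is one way the model-based and measured rankings can be contrasted.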

14.
Research and implementation of a path-coverage method for business processes
王磊  罗省贤 《电子测试》2009,(1):15-19,52
This paper proposes a business-process path-coverage method oriented toward industry applications. Business flow graphs generated through business analysis are managed in a unified, coordinated way, and a path-coverage and automatic test-generation method based on the business flow graph is sought, aiming to solve problems such as the low degree of test automation in large business systems and the lack of effective management of automated test scripts. Building on mature structured testing techniques, the paper establishes a path-coverage and automatic generation method based on business flow graphs. Practice shows that the method turns software testing from blind into orderly work, with clearer purpose and higher efficiency, effectively shortening the development cycle of software projects. An automated test-execution control platform supported by this method has been applied in several large financial systems with good results.
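A minimal sketch of the path-coverage step: enumerate all entry-to-exit paths of a business flow graph by depth-first search, each path becoming the skeleton of one automated test script. The graph and node names are hypothetical.

```python
def all_paths(graph, node, goal, path=None):
    """Yield every simple path from node to goal in a directed flow graph."""
    path = (path or []) + [node]
    if node == goal:
        yield path
        return
    for nxt in graph.get(node, []):
        if nxt not in path:              # guard against cycles
            yield from all_paths(graph, nxt, goal, path)

flow = {
    "start": ["check_account"],
    "check_account": ["approve", "reject"],
    "approve": ["notify", "end"],
    "reject": ["end"],
    "notify": ["end"],
}
for p in all_paths(flow, "start", "end"):
    print(" -> ".join(p))                # one test script per printed path
```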

15.
Traditional Vickers hardness measurement requires manual intervention in the test process, with a low degree of automation and poor repeatability. This paper therefore designs a micro-displacement automatic force-loading control system for Vickers microhardness testing, with an 89S52 single-chip microcontroller as the control core and a PZT actuator as the force-generating device. The paper mainly covers the hardware and software design of the data-acquisition and PZT control-signal output sections, gives the block diagram and control-software flowchart, and presents the relevant experimental data together with their analysis. The results show that the design is feasible and the loading precision is high, with an average loading precision within 0.43%.

16.
Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure-intensity function to development- and debugging-error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on these errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz., as early as possible in the software development life-cycle. The model parameters were derived from data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested; this procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults, and to calculate the consequent failure-intensity value, for a small software product developed using a waterfall software development process.
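A sketch of the SLIM-style adjustment mentioned above: a success likelihood index (SLI) is formed as a weighted sum of influencing-factor ratings, and the logarithm of the error probability is assumed linear in the SLI, calibrated on two anchor tasks with known probabilities. All weights, ratings, and anchor values below are illustrative.

```python
import math

def sli(ratings, weights):
    """Success likelihood index: weighted mean of factor ratings in [0, 1]."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Calibration anchors: (SLI, known error probability).
(s1, p1), (s2, p2) = (0.2, 1e-1), (0.9, 1e-3)
a = (math.log10(p2) - math.log10(p1)) / (s2 - s1)
b = math.log10(p1) - a * s1

# Influencing factors: e.g. experience, schedule pressure, tool support.
ratings = [0.7, 0.4, 0.8]
weights = [2.0, 1.0, 1.0]
s = sli(ratings, weights)
print(f"SLI={s:.2f}, estimated error probability = {10 ** (a * s + b):.2e}")
```

The resulting error probabilities would then perturb the stochastic model's fault-introduction parameters, which is how influencing factors propagate to failure intensity.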

17.
Printed circuit board (PCB) interconnect testing and reliability are addressed. The boundary-scan test methodology proposed in IEEE Standard 1149.1 is reviewed, and its limitations are critically analyzed. On this basis, a technique is proposed to automate the interconnect wiring test, which is performed as part of the power-on self-test. Essential to the idea are the use of a walking sequence as the test stimulus, and response compression by multiple-input signature registers (MISRs). The salient feature is its simplicity of operation. Unlike the existing boundary-scan methodology, interconnects are tested via on-site test generation; faults are detected by comparing the content of the MISR with the anticipated response. Formal analysis shows that the technique has high test coverage for the most common defects. As a result, PCB interconnect testing can be accomplished as part of the power-on self-test without external test fixtures
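A sketch of the walking-sequence-plus-MISR idea: each pattern drives exactly one net high, received responses feed a multiple-input signature register, and the final signature is compared with the fault-free one. The register width, feedback taps, net count, and injected short are all illustrative.

```python
def misr_step(sig, data, width=8, taps=(7, 3, 2, 1)):
    """One MISR clock: shift, XOR the feedback taps, XOR the parallel input."""
    fb = 0
    for t in taps:
        fb ^= (sig >> t) & 1
    return (((sig << 1) | fb) & ((1 << width) - 1)) ^ data

def signature(responses):
    sig = 0
    for r in responses:
        sig = misr_step(sig, r)
    return sig

nets = 8
patterns = [1 << i for i in range(nets)]          # walking-one stimulus
golden = signature(patterns)                      # fault-free: nets echo stimulus

# Wired-OR short between nets 0 and 1: driving either reads back both high.
shorted = [p | 0b11 if p & 0b11 else p for p in patterns]
print(hex(golden), hex(signature(shorted)), golden == signature(shorted))
```

The comparison prints False for the shorted board: the corrupted responses drive the MISR to a different signature, which is exactly how the power-on self-test flags the defect.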

18.
Based on the road-test data requirements of the China Mobile Multimedia Broadcasting (CMMB) system, a development method for a CMMB road-test analysis system based on GE (Google Earth) maps and KML scripts is proposed. The paper introduces the component modules of the road-test analysis system, the key techniques of GE maps and KML scripting, and the concrete design steps, and subjects the system to practical engineering tests. The system can accurately analyze and display CMMB road-test result parameters.
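A minimal sketch of the KML-script side: turning road-test samples into KML placemarks for display on the GE map. The sample fields, color coding, and signal threshold are hypothetical.

```python
def to_kml(samples):
    """samples: list of (longitude, latitude, rssi_dbm) road-test points."""
    placemarks = []
    for lon, lat, rssi in samples:
        # KML colors are aabbggrr: green for good signal, red otherwise.
        color = "ff00ff00" if rssi > -85 else "ff0000ff"
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{rssi} dBm</name>\n"
            f"    <Style><IconStyle><color>{color}</color></IconStyle></Style>\n"
            f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
            "  </Placemark>")
    body = "\n".join(placemarks)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            f"{body}\n</Document></kml>")

print(to_kml([(116.397, 39.909, -72), (116.410, 39.915, -95)]))
```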

19.
On account of the various difficulties encountered in establishing proofs of correctness for software systems, program testing seems to be the only sure way to prevent malfunctions from occurring and thus to improve software reliability. Whereas program proving is a reductive process, program testing is an affirmative process, since everything done in testing can potentially contribute information about the quality of the program being tested. Testing is the process of executing programs with representative input data or conditions, for which the correct results are known, in order to determine whether incorrect results occur. This paper attempts to provide a cross-section of current program-testing technology, ranging from philosophical issues to research and development concepts, to the extent that the known literature permits.

20.
Software testing is the process of running or examining a system by manual and automated means; its purpose is to verify that the system satisfies the specified requirements, or to clarify the difference between expected and actual results. Put simply, given a piece of code, inspecting that code and finding its errors is software testing. A software-testing course mainly teaches the theory of software testing and automated testing tools. Data from recruitment websites show that demand for software testers in IT companies is high; these companies span foreign-funded enterprises, joint ventures, state-owned enterprises, large private enterprises, and many small and medium-sized companies. Given the current severe employment situation, software testing is all the more important. This paper analyzes the concept and role of software testing as well as the requirements of software-testing positions, concludes that offering a software-testing course in computer-science programs is important, and makes reasonable suggestions for teaching such a course.
