Query returned 20 similar documents; search took 265 ms.
1.
An NHPP-Class Software Reliability Growth Model Considering Multiple Debugging Delays (cited by 5: 0 self-citations, 5 by others)
Software reliability growth models usually assume that the software's test environment is identical to its actual field environment, expecting the failure data obtained during testing to characterize the software's failure behavior in the field. Most NHPP-class software reliability growth models further assume that a fault is removed immediately upon detection, an assumption that is hard to satisfy in either the test or the field environment. Based on how faults affect the testing process, debugging times can be divided into several classes. This paper proposes a software reliability growth model that accounts for multiple kinds of debugging delay, discusses the fault-removal efficiency function based on this model, and points out that repeated failures must be considered when discussing software reliability from the user's perspective.
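The abstract does not give the model's functional form. As an illustration only, the classic NHPP that captures a delay between fault detection and removal is Yamada's delayed S-shaped model, m(t) = a(1 − (1 + bt)e^(−bt)); a minimal sketch with hypothetical parameter values:

```python
import math

def delayed_s_shaped_mean(t, a, b):
    """Expected cumulative failures by time t under Yamada's
    delayed S-shaped NHPP (detection followed by delayed removal)."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def delayed_s_shaped_intensity(t, a, b):
    """Failure intensity lambda(t) = dm/dt = a * b**2 * t * exp(-b*t)."""
    return a * b * b * t * math.exp(-b * t)

# Illustrative (hypothetical) parameters: a = 100 total faults, b = 0.05 per hour.
a, b = 100.0, 0.05
for t in (10, 50, 200):
    print(t, round(delayed_s_shaped_mean(t, a, b), 1))
```

The delay is what produces the S shape: m(t) starts out flat (faults are detected but not yet removed) before rising toward the total fault content a.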
4.
A Software Reliability Model Accounting for the Deviations of Individual Failure Processes (cited by 3: 0 self-citations, 3 by others)
Software reliability analysis predicts and evaluates software reliability by building appropriate models from failure data and related information. Existing stochastic-process-based reliability models generally describe software failure data through a mean-value process; in essence, however, modeling failure data should treat them as one sample path of some stochastic process. This paper builds a software reliability model that accounts for the deviation of each individual failure process: an NHPP represents the trend of the mean-value function, while an ARMA process represents the sequence of deviations of the actual failure process from the mean process. Experiments on two publicly available real data sets show that, compared with several widely used NHPP software reliability models, the new model clearly improves goodness of fit and applicability while retaining good predictive power.
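A minimal sketch of the trend-plus-deviation idea, under simplifying assumptions not taken from the paper (an exponential NHPP mean with known parameters, and an AR(1) in place of a general ARMA process for the deviations), run on synthetic data:

```python
import math, random

def nhpp_mean(t, a, b):
    """Exponential NHPP mean-value function m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def ar1_coefficient(resid):
    """Least-squares AR(1) coefficient for a residual series."""
    num = sum(resid[k] * resid[k - 1] for k in range(1, len(resid)))
    den = sum(r * r for r in resid[:-1])
    return num / den if den else 0.0

# Synthetic cumulative failure counts: NHPP trend plus AR(1) deviations.
random.seed(0)
a, b, phi = 120.0, 0.04, 0.7
dev, y = 0.0, []
for t in range(1, 61):
    dev = phi * dev + random.gauss(0, 1.5)
    y.append(nhpp_mean(t, a, b) + dev)

# Residuals from the (here assumed known) trend, then a one-step forecast
# that corrects the trend by the predicted deviation.
resid = [y[t - 1] - nhpp_mean(t, a, b) for t in range(1, 61)]
phi_hat = ar1_coefficient(resid)
forecast = nhpp_mean(61, a, b) + phi_hat * resid[-1]
print(round(phi_hat, 2), round(forecast, 1))
```

In real use the trend parameters would themselves be estimated from the data; the point of the sketch is only that the deviation series carries forecastable structure that a pure mean-value model throws away.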
5.
Software reliability has been studied for more than forty years, and nearly a hundred software reliability models have been built. Yet the reliability data on which such modeling rests are rarely reported in the literature, domestic or foreign. Reliability data are the foundation of all software reliability research, and their importance is self-evident. A batch of reliability data was collected during the operation of an ERP project; analysis of these data shows that the various kinds of software errors arise in a definite order.
7.
To evaluate a software product's reliability quantitatively, this entry introduces the process and methods of assessing software reliability with software reliability models. For the failure trend of a particular spaceborne embedded software system, and following the model-selection principles and methods together with a comparison of the candidate models' predictive quality, the exponential model was finally chosen as the reliability assessment model. A reliability assessment of the software's in-orbit operation was then carried out on its reliability test data, and the result gives the software's reliability level.
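The entry names only "the exponential model". Assuming this refers to the Goel-Okumoto exponential NHPP, m(t) = a(1 − e^(−bt)), its maximum-likelihood fit from a set of failure times can be sketched as follows (the failure data here are synthetic, not the satellite data):

```python
import math, random

def go_mle(times, T):
    """Maximum-likelihood fit of the Goel-Okumoto mean function
    m(t) = a*(1 - exp(-b*t)) from failure times observed on (0, T].
    The profile equation in b is solved by bisection; a then follows."""
    n, s = len(times), sum(times)

    def g(b):
        return n / b - s - n * T * math.exp(-b * T) / (1.0 - math.exp(-b * T))

    lo, hi = 1e-8, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2.0
    a = n / (1.0 - math.exp(-b * T))
    return a, b

# Synthetic data: conditioned on n failures in (0, T], Goel-Okumoto failure
# times are i.i.d. truncated-exponential; true parameters b = 0.02, T = 100.
random.seed(3)
true_b, T, n = 0.02, 100.0, 86
times = []
for _ in range(n):
    u = random.random()
    times.append(-math.log(1.0 - u * (1.0 - math.exp(-true_b * T))) / true_b)

a_hat, b_hat = go_mle(times, T)
print(round(a_hat, 1), round(b_hat, 4))
```

The fitted a estimates the total fault content and b the per-fault detection rate; predictive quality would then be judged by comparing m(t) against held-out failure counts.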
8.
Software Reliability Modeling Based on Unascertained Theory (cited by 19: 0 self-citations, 19 by others)
Unascertained theory is applied to software reliability modeling: the software fault process is analyzed with it, the software's failure characteristics are described with unascertained mathematics to compute reliability parameters, and on this basis a software reliability model grounded in unascertained mathematics is constructed. The new model departs from the traditional modeling approach, escaping the statistical-distribution assumptions about failure-intensity change that constrain conventional software reliability modeling; it is broadly applicable and alleviates the inconsistency problems seen when such models are applied.
12.
Traditional reliability assessment methods are all based on failures observed while the system software runs. For weapon-system software, operational trials are extremely expensive and lengthy, so the system cannot be exercised extensively and high-quality failure data are hard to collect. This paper proposes a Bayesian software reliability assessment method based on system-state verification coverage: Bayesian reliability models serve as the assessment criterion, state coverage guarantees test adequacy, state-by-state test verification guarantees reliability, and trustworthiness and reliability are made to grow in parallel.
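The paper's exact Bayes model is not given here. One standard demand-based sketch: with a uniform prior on the per-demand success probability p and s failure-free state verifications, the posterior is Beta(s+1, 1), so P(p > r) = 1 − r^(s+1). This yields, for example, the number of failure-free verifications needed for 95% confidence that per-demand reliability exceeds 0.99:

```python
def prob_success_prob_exceeds(r, s):
    """With a uniform Beta(1,1) prior and s failure-free demands, the
    posterior of the per-demand success probability p is Beta(s+1, 1),
    so P(p > r) = 1 - r**(s+1)."""
    return 1.0 - r ** (s + 1)

# How many failure-free state verifications give 95% confidence
# that the per-demand reliability exceeds 0.99?
s = 0
while prob_success_prob_exceeds(0.99, s) < 0.95:
    s += 1
print(s)  # -> 298
```

The appeal for data-starved weapon-system software is that the prior (here flat, in practice informative) and the coverage argument carry part of the burden that massive operational testing would otherwise bear.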
13.
Software reliability growth models attempt to forecast the future reliability of a software system, based on observations of the historical occurrences of failures. This allows management to estimate the failure rate of the system in field use, and to set release criteria based on these forecasts. However, the current software reliability growth models have never proven accurate enough for widespread industry use. One possible reason is that the model forms themselves may not accurately capture the underlying process of fault injection in software; it has been suggested that fault injection is better modeled as a chaotic process rather than a random one. This possibility, while intriguing, has not yet been evaluated in large-scale, modern software reliability growth datasets. We report on an analysis of four software reliability growth datasets, including ones drawn from the Android and Mozilla open-source software communities. These are the four largest software reliability growth datasets we are aware of in the public domain, ranging from 1200 to over 86,000 observations. We employ the methods of nonlinear time series analysis to test for chaotic behavior in these time series; we find that three of the four do show evidence of such behavior (specifically, a multifractal attractor). Finally, we compare a deterministic time series forecasting algorithm against a statistical one on these datasets, to evaluate whether exploiting the apparent chaotic behavior might lead to more accurate reliability forecasts.
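A crude illustration of testing a time series for chaos (a Rosenstein-style estimate of the largest Lyapunov exponent, one standard nonlinear time-series diagnostic; the paper's multifractal analysis is not reproduced). Applied to the logistic map at r = 4, a textbook chaotic system with exponent ln 2 ≈ 0.693, the estimate should come out clearly positive:

```python
import math

def largest_lyapunov(series, emb=2, lag=1, min_sep=10, horizon=5):
    """Crude Rosenstein-style estimate of the largest Lyapunov exponent:
    delay-embed the series, pair each point with its nearest temporally
    separated neighbor, and average the log rate of divergence."""
    pts = [tuple(series[i + j * lag] for j in range(emb))
           for i in range(len(series) - (emb - 1) * lag)]
    n = len(pts)
    logs = []
    for i in range(n - horizon):
        best, bestd = None, float("inf")
        for j in range(n - horizon):
            if abs(i - j) < min_sep:          # skip temporal neighbors
                continue
            d = math.dist(pts[i], pts[j])
            if 0 < d < bestd:
                bestd, best = d, j
        if best is None:
            continue
        dk = math.dist(pts[i + horizon], pts[best + horizon])
        if dk > 0:
            logs.append(math.log(dk / bestd) / horizon)
    return sum(logs) / len(logs)

# Logistic map at r = 4: chaotic, with known Lyapunov exponent ln 2.
x, xs = 0.4, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    xs.append(x)
lam = largest_lyapunov(xs)
print(round(lam, 2))
```

A positive estimate indicates exponential divergence of nearby trajectories; on a genuinely random series the same statistic hovers near zero or comes out unstable, which is the crux of the chaos-versus-noise question the paper asks of failure data.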
14.
Bedir Tekinerdogan, Hasan Sozer 《Journal of Systems and Software》2008,81(4):558-575
With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components and usually assume the availability of a running system. Software architecture analysis methods aim to analyze the quality of software-intensive systems early, at the software architecture design level, before a system is implemented. We propose a Software Architecture Reliability Analysis Approach (SARAH) that benefits from mature reliability engineering techniques and scenario-based software architecture analysis to provide an early software reliability analysis at the architecture design level. SARAH defines the notion of a failure scenario model based on the Failure Modes and Effects Analysis (FMEA) method from the reliability engineering domain. The failure scenario model is applied to represent so-called failure scenarios, which are utilized to derive fault tree sets (FTS). Fault tree sets in turn provide a severity analysis for the overall software architecture and the individual architectural elements. Unlike conventional reliability analysis techniques, which prioritize failures based on criteria such as safety concerns, SARAH prioritizes failure scenarios based on severity from the end-user perspective. SARAH results in a failure analysis report that can be utilized to identify architectural tactics for improving the reliability of the software architecture. The approach is illustrated using an industrial case: analyzing the reliability of the software architecture of the next release of a digital TV.
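A toy FMEA-style sketch of the severity-ranking idea (the scenario data and the probability-times-impact scoring are hypothetical illustrations, not SARAH's actual formulas):

```python
# Hypothetical failure scenarios for a digital-TV architecture:
# (architectural element, occurrence probability, user-perceived impact 1-10).
scenarios = [
    ("Tuner",   0.02, 8),
    ("EPG",     0.10, 4),
    ("Decoder", 0.05, 9),
    ("UI",      0.20, 1),
]

# Severity of each element: sum of probability-weighted user impact,
# in the spirit of an FMEA risk priority number.
severity = {}
for element, p, impact in scenarios:
    severity[element] = severity.get(element, 0.0) + p * impact

ranked = sorted(severity.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

The ranking tells the architect where reliability tactics (redundancy, restart, graceful degradation) buy the most from the end-user's point of view, which is exactly the prioritization shift SARAH argues for.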
15.
Safety and reliability have become important software quality characteristics in the development of safety-critical software systems. However, there are so far no quantitative methods for assessing a safety-critical software system in terms of its safety/reliability characteristics. The metric of software safety is defined as the probability that conditions that can lead to hazards do not occur. In this paper, we propose two stochastic models for software safety/reliability assessment: the data-domain dependent safety assessment model and the availability-related safety assessment model. These models focus on describing the time- or execution-dependent behavior of the software faults which can lead to unsafe states when they cause software failures. The application of one of these models to optimal software release problems is also discussed. Finally, numerical examples are illustrated for quantitative software safety assessment and optimal software release policies. This revised version was published online in June 2006 with corrections to the Cover Date.
17.
Traditional parametric software reliability growth models (SRGMs) are based on particular assumptions or distributions, and no single such model produces accurate prediction results in all circumstances. Non-parametric models, such as those based on artificial neural networks (ANNs), can predict software reliability from fault history data alone, without any such assumptions. In this paper, we first propose a robust feedforward neural network (FFNN) based dynamic weighted combination model (PFFNNDWCM) for software reliability prediction. Four well-known traditional SRGMs are combined according to weights evaluated dynamically by the learning algorithm of the proposed FFNN. Building on this FFNN architecture, we also propose a robust recurrent neural network (RNN) based dynamic weighted combination model (PRNNDWCM) to predict software reliability more soundly. A real-coded genetic algorithm (GA) is proposed to train the ANNs. The predictability of the proposed models is compared with that of existing ANN-based software reliability models on three real software failure data sets. We also compare the proposed models against variants that combine only two or three of the four SRGMs. The comparative studies demonstrate that PFFNNDWCM and PRNNDWCM offer more accurate fitting and prediction than the other existing ANN-based models. Numerical and graphical results show that PRNNDWCM is particularly promising for software reliability prediction, since its fitting and prediction errors are much lower than those of PFFNNDWCM.
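The weighted-combination idea, reduced to its core: learn softmax weights over pre-fitted component SRGMs so that the mixture fits the observed cumulative failures. Everything below is a simplified stand-in for the paper's ANN (two components instead of four, finite-difference gradient descent instead of the proposed GA), with synthetic observations generated from a known 0.3/0.7 mixture:

```python
import math

def go(t, a, b):
    """Goel-Okumoto exponential SRGM mean function."""
    return a * (1.0 - math.exp(-b * t))

def dss(t, a, b):
    """Delayed S-shaped SRGM mean function."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Two pre-fitted component SRGMs (hypothetical parameters) and observed
# cumulative failure counts that a 0.3/0.7 mixture explains exactly.
models = [lambda t: go(t, 90.0, 0.06), lambda t: dss(t, 110.0, 0.05)]
obs = [(t, 0.3 * models[0](t) + 0.7 * models[1](t)) for t in range(1, 41)]

def loss(z):
    w = softmax(z)
    return sum((sum(wi * m(t) for wi, m in zip(w, models)) - y) ** 2
               for t, y in obs)

# Learn the combination weights by finite-difference gradient descent
# on the softmax logits (the learning rule, stripped to its essentials).
z = [0.0, 0.0]
for _ in range(500):
    grad = []
    for k in range(len(z)):
        zp = list(z)
        zp[k] += 1e-6
        grad.append((loss(zp) - loss(z)) / 1e-6)
    z = [zk - 1e-4 * gk for zk, gk in zip(z, grad)]

weights = softmax(z)
print([round(w, 2) for w in weights])
```

The softmax keeps the weights positive and summing to one, so the combined model is itself a valid mean-value function; the dynamic aspect in the paper comes from letting the network re-evaluate these weights as new failure data arrive.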
18.
Campodonico S., Singpurwalla N.D. 《IEEE Transactions on Pattern Analysis and Machine Intelligence》1994,20(9):677-683
We propose a Bayesian approach for predicting the number of failures in a piece of software, using the logarithmic-Poisson model, a nonhomogeneous Poisson process (NHPP) commonly used for describing software failures. A similar approach can be applied to other forms of the NHPP. The key feature of the approach is that we are now able to use, in a formal manner, expert knowledge on software testing, such as published information on the empirical experiences of other researchers. This is accomplished by treating such information as expert opinion in the construction of a likelihood function, which leads us to a joint distribution. The procedure is computationally intensive, but for the case of the logarithmic-Poisson model it has been codified for use on a personal computer. We illustrate the working of the approach on real-life software testing data. The aim is not to propose another model for software reliability assessment; rather, we present a methodology that can be invoked with existing software reliability models.
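A sketch of inference for the logarithmic-Poisson (Musa-Okumoto) model, whose mean function is m(t) = ln(λ0·θ·t + 1)/θ: simulate an NHPP by time-rescaling, then form a grid posterior over (λ0, θ). The flat prior here is only a stand-in for the elicited expert opinion that is the paper's actual contribution, and all parameter values are synthetic:

```python
import math, random

def mo_mean(t, lam0, theta):
    """Musa-Okumoto logarithmic-Poisson mean: m(t) = ln(lam0*theta*t + 1)/theta."""
    return math.log(lam0 * theta * t + 1.0) / theta

def log_lik(times, T, lam0, theta):
    """NHPP log-likelihood with intensity lam(t) = lam0 / (lam0*theta*t + 1)."""
    ll = -mo_mean(T, lam0, theta)
    for t in times:
        ll += math.log(lam0 / (lam0 * theta * t + 1.0))
    return ll

# Simulate failure times by time-rescaling a unit-rate Poisson process:
# if m(t_k) = s_k with s_k a unit-Poisson arrival, then t_k solves it below.
random.seed(1)
true_lam0, true_theta, T = 2.0, 0.05, 200.0
times, s = [], 0.0
while True:
    s += random.expovariate(1.0)
    t = (math.exp(true_theta * s) - 1.0) / (true_lam0 * true_theta)
    if t > T:
        break
    times.append(t)

# Grid posterior with a flat prior (stand-in for the expert prior).
grid = [(l, th) for l in [0.5 + 0.25 * i for i in range(20)]
                for th in [0.01 + 0.01 * j for j in range(20)]]
lls = [log_lik(times, T, l, th) for l, th in grid]
mx = max(lls)
ws = [math.exp(v - mx) for v in lls]
Z = sum(ws)
post_lam0 = sum(w * l for w, (l, th) in zip(ws, grid)) / Z
print(len(times), round(post_lam0, 2))
```

With an informative prior, the weights simply pick up a prior factor per grid point; the normalized grid is a cheap substitute for the paper's more careful numerical integration.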
19.
Frankl P.G., Hamlet R.G., Littlewood B., Strigini L. 《IEEE Transactions on Pattern Analysis and Machine Intelligence》1998,24(8):586-601
There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the resulting reliability. Operational methods, on the other hand, provide accurate assessment but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects, as in debug testing, or to assess reliability directly, as in operational testing? Testing methods are compared in a model where program failures are detected and the software is changed to eliminate them. The "better" method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
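A toy Monte Carlo in the spirit of the paper's question (not its actual model; every number is hypothetical): compare the delivered failure rate after operational-profile testing, which preferentially trips high-rate defects, against an idealized debug method that removes a fixed number of defects chosen without regard to the profile:

```python
import random

random.seed(7)

# Hypothetical program: 50 defects whose per-run failure probabilities
# under the operational profile are highly skewed.
rates = [0.05 * (0.8 ** i) for i in range(50)]

def operational_testing(rates, runs):
    """Random testing from the profile: each run trips defect i with
    probability rates[i]; tripped defects are removed.
    Returns the delivered (residual) failure rate."""
    remaining = set(range(len(rates)))
    for _ in range(runs):
        for i in list(remaining):
            if random.random() < rates[i]:
                remaining.discard(i)
    return sum(rates[i] for i in remaining)

def debug_testing(rates, fixes):
    """Idealized debug testing: removes a fixed number of defects chosen
    uniformly at random, ignoring the operational profile."""
    removed = set(random.sample(range(len(rates)), fixes))
    return sum(r for i, r in enumerate(rates) if i not in removed)

op = operational_testing(rates, 500)
dbg = debug_testing(rates, 10)
print(round(op, 4), round(dbg, 4))
```

Under this skewed profile, operational testing wins on delivered failure rate, illustrating one side of the paper's comparison; the paper's "special cases" point is that a debug method aimed at rare-condition defects can be superior under other assumptions, which this toy deliberately does not capture.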