Similar Documents
20 similar documents found.
1.
Two problems of great interest in software reliability are the prediction of future times to failure and the calculation of the optimal release time. An important assumption in software reliability analysis is that reliability grows whenever bugs are found and removed. In this paper we present a model for software reliability analysis using the Bayesian statistical approach, in order to incorporate prior assumptions such as the (decreasing) ordering of the assumed constant failure rates of prescribed intervals. As the prior we use a product of gamma distributions, one for each pair of subsequent interval failure rates, with the failure rate of the following interval serving as the location parameter for the first. In this way we encode the failure rate ordering information. Applying this approach sequentially, we predict the time to the next failure using the information obtained so far. Using the resulting predictive distributions, we also calculate the optimal release time for two different requirements of interest: (a) the probability of an in-service failure in a prescribed time t; (b) the cost associated with one or more failures in a prescribed time t. Finally a numerical example is presented. Copyright © 2000 John Wiley & Sons, Ltd.
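A minimal Python sketch of the sequential scheme, with a single conjugate gamma prior standing in for the paper's product-of-gammas construction and the decreasing-rate ordering imposed by simple rejection; all interval data and hyperparameters below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical data: (exposure time, failures observed) per test interval.
    intervals = [(100.0, 12), (100.0, 7), (100.0, 4)]

    a0, b0 = 1.0, 10.0            # gamma prior for the first interval's rate
    prev = None
    for T, k in intervals:
        m = 50_000 if prev is None else len(prev)
        post = rng.gamma(a0 + k, 1.0 / (b0 + T), m)   # conjugate update
        if prev is not None:
            # keep only draws respecting the decreasing failure-rate ordering
            post = post[post <= prev]
        prev = post

    # Predictive quantities under the current (last) interval's rate:
    t_next = rng.exponential(1.0 / prev)
    print(f"median predicted time to next failure: {np.median(t_next):.1f}")
    print(f"P(no in-service failure within t=10):  "
          f"{np.mean(np.exp(-prev * 10)):.3f}")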

2.
In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man–machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; it depends on the failure probability, 1−R, a representation that captures the memory characteristic of this second failure cause. Man–machine interference may have a positive effect on the failure rate, due to learning and correction, or a negative one, as a consequence of inappropriate human habits in system operation; it is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is then found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples involving different types of failure mechanisms are used to validate the proposed model, with satisfactory results in the comparisons.
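A sketch of one plausible additive form of h(R), with made-up coefficients and functional terms assumed from the four mechanisms named above (the paper fits its own parameters to data). Since h(t) = −R′(t)/R(t), integrating R′ = −h(R)·R recovers the reliability curve and exhibits the bathtub behaviour:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical coefficients, one additive term per mechanism:
    c_rand, c_dam, c_mmi, c_adapt, p = 0.002, 0.010, 0.004, 0.050, 8.0

    def h(R):
        return (c_rand                 # (1) random failures: constant
                + c_dam * (1 - R)      # (2) cumulative damage: grows with 1-R
                + c_mmi * R * (1 - R)  # (3) man-machine interference
                + c_adapt * R**p)      # (4) adaptation: decays fast as R drops

    # h(t) = -R'(t)/R(t), hence R'(t) = -h(R) * R
    sol = solve_ivp(lambda t, R: -h(R) * R, (0.0, 400.0), [1.0],
                    dense_output=True)
    for t in (0, 25, 100, 400):
        R = float(sol.sol(t)[0])
        print(f"t={t:4d}  R={R:.4f}  h(R)={h(R):.5f}")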

3.
Software reliability models can provide quantitative measures of the reliability of software systems, which are of growing importance today. Most of the models are parametric ones which rely on modelling the software failure process as a Markov or non-homogeneous Poisson process. It has been noticed that many of them do not give very accurate predictions of future software failures, as the focus is on fitting past data. In this paper we study the use of the double exponential smoothing technique to predict software failures. The proposed approach is non-parametric and can provide more accurate predictions than traditional parametric models because it gives a higher weight to the most recent failure data. The method is very easy to use and requires a very limited amount of data storage and computational effort; it can be updated instantly without much calculation, and hence is a tool that deserves more common use in practice. Numerical examples are shown to highlight its applicability, and comparisons with other commonly used software reliability growth models are also presented. © 1997 John Wiley & Sons, Ltd.
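Double exponential (Holt's) smoothing applied to times between failures is only a few lines; the data below are hypothetical:

    import numpy as np

    def holt(x, alpha=0.3, beta=0.1):
        """Holt's double exponential smoothing; returns a one-step forecast."""
        level, trend = x[0], x[1] - x[0]
        for obs in x[1:]:
            prev_level = level
            level = alpha * obs + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + trend

    # Hypothetical times between failures (an increasing trend indicates
    # reliability growth):
    tbf = np.array([12., 15., 14., 21., 25., 24., 33., 41., 40., 52.])
    print(f"predicted next time between failures: {holt(tbf):.1f}")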

4.
No single software reliability growth model has ever been shown to work well in all circumstances. This paper presents our evaluation results for two case studies in which the Akaike information criterion (AIC) was used. The AIC not only selects the best model among several reliability models, but also possesses properties that practitioners like to see in their software reliability modelling practice: simplicity, accuracy and ease of application. We therefore propose using the AIC to select the best model for each software system.
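A sketch of AIC-based selection between two common NHPP growth models, with hypothetical failure times; parameters are fit by maximum likelihood and AIC = 2k − 2 ln L picks the lower score:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical cumulative failure times (hours) from one test campaign:
    t = np.array([8., 21., 33., 50., 62., 81., 99., 120., 145., 180.,
                  210., 250., 300., 360., 430.])
    T = 450.0

    def negll(params, lam, m):
        """Negative NHPP log-likelihood: -(sum log lam(t_i) - m(T))."""
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf
        return -(np.sum(np.log(lam(t, a, b))) - m(T, a, b))

    models = {
        "Goel-Okumoto":     (lambda t, a, b: a * b * np.exp(-b * t),
                             lambda t, a, b: a * (1 - np.exp(-b * t))),
        "Delayed S-shaped": (lambda t, a, b: a * b * b * t * np.exp(-b * t),
                             lambda t, a, b: a * (1 - (1 + b * t) * np.exp(-b * t))),
    }
    for name, (lam, m) in models.items():
        res = minimize(negll, x0=[20.0, 0.005], args=(lam, m),
                       method="Nelder-Mead")
        aic = 2 * 2 + 2 * res.fun        # AIC = 2k - 2 ln L, k = 2 parameters
        print(f"{name:18s} AIC = {aic:7.2f}")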

5.
This paper discusses Bayesian reliability analysis for an exponential failure model on the basis of some ordered observations, when the first p observations may represent 'early failures' or 'outliers'. The Bayes estimators of the mean life and reliability are obtained for the underlying parametric model, referred to as the SB(p) model, under the assumptions of a squared error loss function, an inverted gamma prior for the scale parameter, and a generalized uniform prior for the nuisance parameter.
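The conjugate core of such an analysis (leaving out the SB(p) early-failure layer) is short; all numbers below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(1)

    x = np.array([52., 64., 70., 81., 95., 110., 130.])  # hypothetical lifetimes
    alpha, beta = 3.0, 200.0        # inverted gamma prior on the mean life

    a_post, b_post = alpha + len(x), beta + x.sum()
    theta_hat = b_post / (a_post - 1)     # Bayes estimate (squared error loss)

    # Posterior reliability at t0, averaged over inverted gamma draws:
    t0 = 60.0
    theta = b_post / rng.gamma(a_post, 1.0, 100_000)
    print(f"Bayes estimate of mean life: {theta_hat:.1f}")
    print(f"Bayes estimate of R({t0:.0f}): {np.mean(np.exp(-t0 / theta)):.4f}")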

6.
This paper presents a new and alternative computational tool for predicting the failure probability of structural/mechanical systems subject to random loads, material properties, and geometry, based on high-dimensional model representation (HDMR) generated from low-order function components. HDMR is a general set of quantitative model assessment and analysis tools for capturing the high-dimensional relationships between sets of input and output model variables. It is a very efficient formulation of the system response if higher-order variable correlations are weak, allowing the physical model to be captured by the lower-order terms and facilitating lower-dimensional approximation of the original high-dimensional implicit limit state/performance function. When a first-order HDMR approximation of the original high-dimensional implicit limit state/performance function is not adequate to provide the desired accuracy in the predicted failure probability, this paper presents an enhanced HDMR (eHDMR) method that represents the higher-order terms of the HDMR expansion by expressions similar to the lower-order ones with monomial multipliers. The accuracy of the HDMR expansion can be significantly improved using preconditioning with a minimal number of additional input–output samples, without directly invoking the determination of second- and higher-order terms. The mathematical foundation of eHDMR is presented along with its applicability to approximating the original high-dimensional implicit limit state/performance function for subsequent reliability analysis, given that conventional methods for reliability analysis are computationally demanding when applied in conjunction with complex finite element models. This study aims to assess how accurately and efficiently the eHDMR approximation technique can capture complex model output uncertainty. The limit state/performance function surrogate is constructed by a moving least-squares interpolation formula from the component functions of the eHDMR expansion. Once the approximate form of the implicit response function is defined, the failure probability can be obtained by statistical simulation. Results of five numerical examples involving elementary mathematical functions and structural/solid-mechanics problems indicate that the failure probability obtained using the eHDMR approximation of the implicit limit state/performance function is highly accurate compared with the conventional Monte Carlo method, while requiring far fewer original model simulations. Copyright © 2008 John Wiley & Sons, Ltd.
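A toy first-order cut-HDMR on an explicit two-variable limit state (standing in for an implicit finite-element response; the eHDMR higher-order correction and moving least-squares step are not sketched). The surrogate is built from 19 model calls and then evaluated by Monte Carlo:

    import numpy as np
    from scipy.interpolate import interp1d

    def g(x):                    # hypothetical limit state; failure when g <= 0
        x1, x2 = x
        return 4.0 - 0.3 * x1**2 - np.exp(0.4 * x2) + 0.05 * x1 * x2

    c = np.zeros(2)              # cut point at the variable means
    g_c = g(c)
    nodes = np.linspace(-5.0, 5.0, 9)

    # First-order cut-HDMR: g(x) ~ g(c) + sum_i [ g_i(x_i) - g(c) ],
    # where g_i varies only the i-th input along the cut point.
    comps = []
    for i in range(2):
        vals = []
        for v in nodes:
            xi = c.copy()
            xi[i] = v
            vals.append(g(xi))
        comps.append(interp1d(nodes, vals, kind="cubic",
                              bounds_error=False, fill_value="extrapolate"))

    def g_hdmr(x):
        return g_c + sum(comps[i](x[i]) - g_c for i in range(2))

    rng = np.random.default_rng(7)
    X = rng.standard_normal((200_000, 2)).T      # standard normal inputs
    print(f"Pf (surrogate, 19 model calls): {np.mean(g_hdmr(X) <= 0):.4f}")
    print(f"Pf (direct Monte Carlo):        {np.mean(g(X) <= 0):.4f}")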

7.
Over the years, several tools have been developed to estimate the reliability of hardware and software components; typically, such tools handle either hardware or software. This paper presents the Software Tool for Reliability Estimation (STORE), which can be used for systems containing hardware and/or software components. For software components, exponential, Weibull, gamma, power, geometric, and inverse-linear models were implemented, with goodness-of-fit statistics provided for each model so that the user can select the most appropriate model for a given system configuration and failure data. The STORE program can analyze series, parallel, and complex systems; a tieset and cutset algorithm is used to determine the reliability of a complex system. The paper presents several examples to demonstrate the tool.
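The tieset (minimal path set) calculation for a complex system reduces to inclusion–exclusion over unions of path sets; a sketch for the classic five-component bridge network, with hypothetical component reliabilities:

    from itertools import combinations

    # Bridge network: minimal path (tie) sets and component reliabilities.
    paths = [{1, 2}, {3, 4}, {1, 5, 4}, {3, 5, 2}]
    r = {1: 0.90, 2: 0.90, 3: 0.85, 4: 0.85, 5: 0.95}

    def series(comps):
        """Probability that every component in the set works."""
        p = 1.0
        for c in comps:
            p *= r[c]
        return p

    # Inclusion-exclusion over unions of minimal path sets:
    R = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            union = set().union(*combo)
            R += (-1) ** (k + 1) * series(union)
    print(f"system reliability: {R:.5f}")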

8.
The present paper focuses on reliability prediction of composite structures under hygro-thermo-mechanical loading, based on the Tsai–Wu failure criterion, where the Monte Carlo method is used to estimate the failure probability (Pf). The model was developed in two steps: first, a deterministic model based on an analytical and numerical approach, and then a probabilistic computation. Using the hoop stress in each ply, a sensitivity analysis of the reliability of a composite cylindrical structure was performed with respect to random design variables such as material properties, geometry, manufacturing, and loading. The probabilistic results show a very large increase in failure probability when all parameters are treated as random.
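A sketch of the Monte Carlo step for a single ply under plane stress, using the standard Tsai–Wu interaction coefficients; all strength and stress statistics below are invented:

    import numpy as np

    rng = np.random.default_rng(3)
    N = 200_000

    # Hypothetical ply strengths (MPa) with lognormal scatter:
    Xt = rng.lognormal(np.log(1500), 0.08, N)   # longitudinal tension
    Xc = rng.lognormal(np.log(1200), 0.08, N)   # longitudinal compression
    Yt = rng.lognormal(np.log(50),   0.10, N)   # transverse tension
    Yc = rng.lognormal(np.log(200),  0.10, N)   # transverse compression
    S  = rng.lognormal(np.log(70),   0.10, N)   # in-plane shear

    # Hypothetical ply stresses (MPa) with load scatter:
    s1, s2, t12 = (rng.normal(800, 80, N), rng.normal(25, 5, N),
                   rng.normal(30, 6, N))

    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * np.sqrt(F11 * F22)

    tw = (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2
          + 2*F12*s1*s2)
    print(f"estimated Pf = {np.mean(tw >= 1.0):.4f}")  # index >= 1 => failure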

9.
For the last three decades, reliability growth has been studied to predict software reliability in the testing/debugging phase. Most of the models developed are based on the non-homogeneous Poisson process (NHPP), and either S-shaped or exponential-shaped behavior is usually assumed. Unfortunately, such models may be suitable only for particular software failure data, which narrows their scope of application. From the perspective of learning effects that can influence the process of software reliability growth, we therefore consider that efficiency in testing/debugging depends not only on the ability of the testing staff but also on the learning effect that comes from inspecting the testing/debugging code. The proposed approach can describe the S-shaped and exponential-shaped types of behavior simultaneously, and the experimental results show a good fit. A comparative analysis evaluating the effectiveness of the proposed model against other software failure models is also performed. Finally, an optimal software release policy is suggested.
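One standard way to encode such a learning effect (not necessarily this paper's exact formulation) is the inflection S-shaped NHPP, whose shape parameter ψ moves the mean value function continuously between exponential-shaped and S-shaped growth:

    import numpy as np

    # Inflection S-shaped NHPP (Ohba): psi = 0 reduces to the exponential
    # (Goel-Okumoto) shape, psi > 0 yields S-shaped growth; a, b hypothetical.
    def m(t, a, b, psi):
        return a * (1 - np.exp(-b * t)) / (1 + psi * np.exp(-b * t))

    t = np.linspace(0, 100, 6)
    for psi in (0.0, 5.0):
        print(f"psi={psi:3.1f}:", np.round(m(t, a=100.0, b=0.05, psi=psi), 1))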

10.
Software reliability assessment models in use today treat software as a monolithic block; an aversion towards 'atomic' models seems to exist, since these models appear to add complexity to the modeling and the data collection and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and it is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed; the term element of the architecture is used here in its broadest sense, to mean anything from a single failure mode to a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development, and then identifies the mechanisms for incorporating these sources of relevant data into the FASRE model.
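A sketch of the quantification step: beta posteriors for individual failure-mode probabilities (counts are hypothetical) propagated through a toy two-gate fault tree by Monte Carlo:

    import numpy as np

    rng = np.random.default_rng(5)
    N = 100_000

    # Beta posteriors for three failure-mode probabilities from hypothetical
    # (failures f, demands n) counts and a Jeffreys Beta(0.5, 0.5) prior:
    def post(f, n):
        return rng.beta(0.5 + f, 0.5 + n - f, N)

    p1, p2, p3 = post(1, 200), post(0, 150), post(3, 400)

    # Toy fault tree: TOP = FM1 OR (FM2 AND FM3), assuming independence.
    p_top = 1 - (1 - p1) * (1 - p2 * p3)

    print(f"posterior mean of P(system failure): {p_top.mean():.5f}")
    print("90% credible interval:", np.round(np.percentile(p_top, [5, 95]), 5))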

11.
Research on reliability modeling based on reliability block diagrams
To address the cumbersome modeling and difficult programming of traditional reliability simulation models, a reliability modeling workflow based on ExtendSim was designed, taking reliability block diagrams as the starting point. Using a two-unit parallel repairable system as an example, simulation models under the 'repair on critical failure' and 'repair upon failure' maintenance policies were built and validated against an analytical model. The simulation results show that the reliability models are credible; moreover, the modeling method is easy for engineers to master and has value for wider adoption.
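The 'repair upon failure' case can be cross-checked without any simulation package: a Monte Carlo sketch of a two-unit parallel repairable system against the analytic steady-state availability, assuming independent repairs; the rates are hypothetical:

    import numpy as np

    rng = np.random.default_rng(11)
    lam, mu, T = 0.01, 0.1, 1e6     # failure rate, repair rate, horizon (h)

    def down_intervals():
        """One unit's repair windows under 'repair upon failure'."""
        t, out = 0.0, []
        while True:
            t += rng.exponential(1 / lam)          # uptime until failure
            if t >= T:
                break
            d = rng.exponential(1 / mu)            # repair duration
            out.append((t, min(t + d, T)))
            t += d
        return out

    # The parallel system is down only where both units are down:
    a, b = down_intervals(), down_intervals()
    both, i, j = 0.0, 0, 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if hi > lo:
            both += hi - lo
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1

    A_unit = mu / (lam + mu)                 # analytic unit availability
    print(f"simulated system availability: {1 - both / T:.5f}")
    print(f"analytic 1-(1-A)^2:            {1 - (1 - A_unit)**2:.5f}")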

12.
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability index for the original problem. This method has application to many civil engineering problems that involve random fields of material properties or loads. An application to reliability analysis of foundation piles illustrates the proposed method.  相似文献   
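Plain FORM via the Hasofer–Lind–Rackwitz–Fiessler iteration is the inner loop of MCFM (the correction factor that pushes the idealized surface onto the actual one is not sketched); the limit state below is hypothetical and already expressed in standard normal space:

    import numpy as np
    from scipy.stats import norm

    def grad(g, u, h=1e-6):
        """Central-difference gradient of g at u."""
        return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(len(u))])

    def form_beta(g, u0, tol=1e-8, itmax=50):
        """Hasofer-Lind-Rackwitz-Fiessler iteration; returns beta."""
        u = np.asarray(u0, float)
        for _ in range(itmax):
            gv, gr = g(u), grad(g, u)
            u_new = (gr @ u - gv) * gr / (gr @ gr)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        return np.linalg.norm(u)

    g = lambda u: 3.0 - u[0] + 0.2 * u[0]**2 - u[1]   # hypothetical limit state
    beta = form_beta(g, [0.0, 0.0])
    print(f"beta = {beta:.3f},  Pf ~ {norm.cdf(-beta):.4e}")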

13.
Small sample reliability growth modeling using a grey systems model
When performing system-level developmental testing, time and expense generally dictate a small sample size for failure data. Upon failure discovery, redesigns and/or corrective actions can be implemented to improve system reliability. Current methods for estimating reliability growth, namely the Crow (AMSAA) growth model, yield parameter estimates with a great level of uncertainty when dealing with small sample sizes. To handle limited failure data, we propose the use of a modified GM(1,1) model to predict system reliability growth parameters, and investigate how parameter estimates are affected by systems whose failures follow a poly-Weibull distribution. Monte Carlo simulation is used to map the response surface of system reliability, and the results are used to compare the accuracy of the modified GM(1,1) model to that of the AMSAA growth model. It is shown that with small sample sizes and multiple failure modes, the modified GM(1,1) model is more accurate than the AMSAA model for prediction of growth model parameters.
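The classic (unmodified) GM(1,1) fits a first-order grey differential equation to the accumulated series and forecasts by inverse accumulation; the times between failures below are hypothetical:

    import numpy as np

    def gm11(x0, n_ahead=3):
        """Classic GM(1,1) grey forecast (the paper uses a modified variant)."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)                       # accumulated generating operation
        z = 0.5 * (x1[1:] + x1[:-1])             # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=0.0)      # back to the original series

    # Hypothetical times between failures from a small test program:
    tbf = [110., 135., 160., 205., 240.]
    print(np.round(gm11(tbf), 1))                # fit + 3-step-ahead forecast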

14.
When dealing with practical problems of stress–strength reliability, one can work with fatigue life data and make use of the well-known relation between stress and cycles to failure. For some materials, this kind of data can involve extremely large values. In this context, this paper discusses the problem of estimating the reliability index R = P(Y < X) for stress–strength reliability, where stress Y and strength X are independent q-exponential random variables. This choice is based on the q-exponential distribution's capability to model data with extremely large values. We develop the maximum likelihood estimator for the index R and analyze its behavior by means of simulated experiments. Moreover, confidence intervals are developed based on parametric and nonparametric bootstrap. The proposed approach is applied to two case studies involving experimental data: the first is related to the analysis of high-cycle fatigue of ductile cast iron, whereas the second evaluates specimen size effects on the gigacycle fatigue properties of high-strength steel. The adequacy of the q-exponential distribution for both case studies, together with point and interval estimates of the index R based on the maximum likelihood estimator, are provided. A comparison with both the Weibull and exponential distributions shows that the q-exponential distribution gives better results for fitting both the stress and strength experimental data as well as for the estimated R index. Copyright © 2016 John Wiley & Sons, Ltd.
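Given a sampler for the q-exponential (inverse-CDF form, valid for 1 < q < 2), the index R = P(Y < X) and a nonparametric bootstrap interval can be estimated directly; all parameters below are hypothetical rather than the paper's fitted values:

    import numpy as np

    rng = np.random.default_rng(9)
    n = 20_000

    def rq_exponential(q, lam, size):
        """Inverse-CDF sampling of a q-exponential (valid for 1 < q < 2)."""
        u = rng.uniform(size=size)
        return (1.0 - u ** ((1 - q) / (2 - q))) / ((1 - q) * lam)

    X = rq_exponential(q=1.2, lam=1 / 900.0, size=n)   # strength
    Y = rq_exponential(q=1.1, lam=1 / 300.0, size=n)   # stress

    R = np.mean(Y < X)                                 # R = P(Y < X)
    # nonparametric bootstrap interval for R:
    bx = rng.integers(0, n, (200, n))
    by = rng.integers(0, n, (200, n))
    boot = np.mean(Y[by] < X[bx], axis=1)
    print(f"R ~ {R:.4f}, 95% CI {np.percentile(boot, [2.5, 97.5]).round(4)}")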

15.
This paper presents the similarities and differences between hardware, software and system reliability. Relative contributions to system failures are shown for software and hardware, and failure and recovery propensities are also discussed. Reliability, availability and maintainability (RAM) concepts have been more broadly developed for software reliability than for hardware reliability; extending these software concepts to hardware and system reliability helps in examining the reliability of complex systems. The paper concludes with assurance techniques for defending against faults. Most of the techniques discussed originate in software reliability but apply to all aspects of a system. The effects of redundancy on overall system availability are also shown.

16.
The assembly process is a critical stage in the formation of product quality and reliability, but consideration of the produced product's reliability and accident risk has not received due attention in most assembly quality analysis frameworks. To this end, this paper enhances risk analysis in assembly process quality control, as advocated by ISO 9001:2015, and presents a risk-oriented assembly quality analysis approach that considers the effects of assembly variations on the produced product's reliability degradation and accident risk. First, a conceptual QRR chain is presented to illustrate the relationship among assembly process quality (Q), product reliability (R), and failure accident risk (R). Second, a risk-oriented and bidirectional framework for the analysis of assembly process quality is established based on the QRR chain, aiming to quantitatively identify the risk sources in the assembly process and reduce the risk of failure accidents. Third, an assembly process quality risk model with key-function reliability at its core is presented to establish the quantitative relationship between assembly variation and product failure accident risk. Finally, the presented approach is verified through a case study of assembly quality risk analysis for an acid-resistant grinder.

17.
Many software reliability growth models (SRGMs) based on a non-homogeneous Poisson process (NHPP) have been developed under the assumptions of a constant fault detection rate (FDR) and a fault detection process dependent only on the residual fault content. In this paper we develop an SRGM based on an NHPP using a different approach to model development: the fault detection process depends not only on the residual fault content but also on the testing time. It incorporates a realistic situation encountered in software development, where the fault detection rate is not constant over the entire testing process but changes due to variations in resource allocation, defect density, running environment and testing strategy (the change-point); here, the FDR is defined as a function of testing time. The proposed model also incorporates testing effort together with the change-point concept, which is useful in addressing runaway software projects and gives project managers a testing-effort control technique and the flexibility to reach a desired reliability level. Failure data collected from software development projects are used to show the model's applicability and effectiveness. The Statistical Package for the Social Sciences (SPSS), based on the least-squares method, has been used to estimate the unknown parameters, and the mean squared error (MSE), relative predictive error (RPE), average mean squared error (AMSE) and average relative predictive error (ARPE) have been used to validate the model. The results show that the proposed model is accurate, highly predictive and reflects industrial software project practice.
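A minimal mean value function with a step change in the FDR at a change-point τ (the paper's testing-effort dependence is omitted); all parameter values are invented:

    import numpy as np

    # NHPP mean value function whose FDR steps from b1 to b2 at the
    # change-point tau:
    def m(t, a=120.0, b1=0.01, b2=0.03, tau=40.0):
        t = np.asarray(t, float)
        B = np.where(t < tau, b1 * t, b1 * tau + b2 * (t - tau))  # integral of FDR
        return a * (1.0 - np.exp(-B))

    for t in (20, 40, 60, 100):
        print(f"t = {t:3d}  expected cumulative faults detected: {float(m(t)):.1f}")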

18.
In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability and also to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed.
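A compact GA sketch with a generation-dependent (dynamic) penalty: integer redundancy levels per subsystem, a delta-method variance for the system reliability estimate, and a budget constraint. All numbers are hypothetical, and version independence is assumed, so the failure correlation the paper models is not captured here:

    import numpy as np

    rng = np.random.default_rng(2024)

    # Three subsystems: reliability estimate (mean, variance) and unit cost.
    r_hat = np.array([0.90, 0.85, 0.95])
    r_var = np.array([4e-4, 9e-4, 1e-4])
    cost = np.array([3.0, 2.0, 5.0])
    budget = 25.0

    def objective(n, gen):
        sub = 1 - (1 - r_hat) ** n                     # parallel redundancy
        sys_mean = np.prod(sub)
        dsub = n * (1 - r_hat) ** (n - 1)              # delta method
        sys_var = np.sum(((sys_mean / sub) * dsub) ** 2 * r_var)
        fitness = sys_mean - 50.0 * sys_var            # penalize variance
        over = max(0.0, n @ cost - budget)
        return fitness - 0.1 * (1 + gen) * over        # dynamic penalty

    pop = rng.integers(1, 4, (40, 3))                  # redundancy in {1,2,3}
    for gen in range(60):
        fit = np.array([objective(ind, gen) for ind in pop])
        i, j = rng.integers(0, 40, (2, 40))            # tournament selection
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        cut = rng.integers(1, 3, 40)                   # one-point crossover
        children = parents.copy()
        mates = parents[rng.permutation(40)]
        for k in range(40):
            children[k, cut[k]:] = mates[k, cut[k]:]
        mut = rng.random((40, 3)) < 0.1                # mutation
        children[mut] = rng.integers(1, 4, mut.sum())
        pop = children

    best = pop[np.argmax([objective(ind, 60) for ind in pop])]
    print("redundancy levels:", best, " cost:", best @ cost)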

19.
Reliability Centred Maintenance (RCM) is a procedure carried out as part of the logistic support analysis (LSA) process and is described in the US Department of Defense military standard Mil Std 2173. RCM gives logisticians the opportunity to determine the best maintenance policy for each component within a system. However, the only data available to carry out RCM using Mil Std 2173 are MTBF values, which implies that all the necessary mathematical models must be based on the exponential distribution. This is a serious drawback to the whole concept of RCM, as the exponential distribution cannot be used to model items that fail due to wear, or any other mode related to their age. In this paper, a new approach to RCM is proposed using the concepts of soft life and hard life to optimise the total maintenance cost. For simplicity, only one mode of failure is considered for each component; however, the model can readily be applied to multiple failure modes. The proposed model is applied to find optimal maintenance policies for military aero-engines using Monte Carlo simulation. The case study shows a potential benefit from setting soft lives on relatively cheap components that can cause expensive, unplanned engine rejections.
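The soft-life trade-off reduces to an age-replacement calculation: replace preventively at the soft life or at unplanned failure, whichever comes first, and compare long-run cost rates via the renewal-reward theorem. Weibull parameters and costs below are invented:

    import numpy as np

    rng = np.random.default_rng(6)

    beta_w, eta = 3.0, 4000.0              # Weibull shape/scale (flight hours)
    c_planned, c_unplanned = 10.0, 120.0   # hypothetical cost units

    def cost_rate(soft_life, n=200_000):
        life = eta * rng.weibull(beta_w, n)
        fails = life < soft_life                       # unplanned rejection?
        cycle_len = np.where(fails, life, soft_life)
        cycle_cost = np.where(fails, c_unplanned, c_planned)
        return cycle_cost.mean() / cycle_len.mean()    # renewal-reward theorem

    for sl in (1500., 2500., 3500., 1e9):              # 1e9 ~ run to failure
        print(f"soft life {sl:>7.0f} h -> cost rate {cost_rate(sl):.4f} /h")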

20.
Motivated by real-world applications of satellites and wireless sensor networks, this paper models and evaluates a dynamic k-out-of-n phase-AND mission system (k/n-PAMS). The mission task conducted by a k/n-PAMS involves multiple consecutive phases; the mission is successful as long as the task is successful in any of the phases. Due to factors such as scheduled maintenance, location changes in task execution during different phases, and resource sharing with other tasks, the total number of available components n for the considered mission task and the required number of working components k may change from phase to phase. In addition, due to varying loads and working environments, component failure time distributions are also phase dependent. This paper proposes an analytical modeling approach based on multivalued decision diagrams (MDDs) for assessing the reliability of the considered k/n-PAMS. The approach encompasses a new and fast MDD model generation algorithm that considers the behaviors of all mission phases simultaneously based on node labeling. As demonstrated through empirical studies on k/n-PAMSs of different sizes (different numbers of phases and system components), the proposed algorithm is more efficient than the traditional phase-by-phase model generation method.
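A Monte Carlo cross-check of the k/n-PAMS semantics described above (the task must succeed in at least one phase; k, n and failure rates are phase dependent; failures persist across phases). The MDD approach computes this analytically; everything below is hypothetical, and idle components are assumed not to fail:

    import numpy as np

    rng = np.random.default_rng(8)

    # (duration, required k, available n, per-component failure rate) per phase:
    phases = [(10.0, 3, 5, 0.010),
              (20.0, 2, 4, 0.004),
              (15.0, 4, 6, 0.008)]
    n_total, N = 6, 200_000

    alive = np.ones((N, n_total), bool)
    success = np.zeros(N, bool)
    for dur, k, n_avail, lam in phases:
        use = alive[:, :n_avail]          # components assigned this phase
        dies = rng.random(use.shape) < 1 - np.exp(-lam * dur)
        survive = use & ~dies
        # mission succeeds if the task succeeds in at least one phase:
        success |= survive.sum(axis=1) >= k
        alive[:, :n_avail] &= ~dies       # failed components stay failed

    print(f"mission reliability ~ {success.mean():.4f}")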
