Similar Documents
20 similar documents found
1.
Fault tree analysis is a well-established method in system safety and reliability assessment. We transferred the principles of this technique to an assembler code analysis, regarding any incorrect output of the software as the undesired top-level event. Starting from the instructions that provide the outputs and tracing back to all instructions contributing to these outputs, a hierarchical system of references is generated that can be represented graphically as a fault tree. To cope with the large number of relations in the code, a tool suite has been developed that automatically creates these references and checks for unfulfilled preconditions of instructions. The tool was applied to the operational software of an inertial measurement unit, which provides safety-critical signals for artificial stabilization of an aircraft. The method and its implementation as a software tool are presented, and the benefits, surprising results, and limitations we have experienced are discussed.

2.
A thorough requirements analysis is indispensable for developing and implementing safety-critical software systems such as nuclear power plant (NPP) software systems, because a single error in the requirements can generate serious software faults. However, it is very difficult to analyze system requirements completely. In this paper, an effective technique for software requirements analysis is suggested. For requirements verification and validation (V&V) tasks, our technique uses software inspection, requirements traceability, and formal specification with structural decomposition. Software inspection and requirements traceability analysis are widely considered the most effective software V&V methods. Although formal methods are also considered an effective V&V activity, they are difficult to use properly in the nuclear field, as in other fields, because of their mathematical nature. In this work, we propose an integrated environment (IE) approach for requirements, which enables easy inspection by combining requirements traceability with effective use of a formal method. The paper also introduces computer-aided tools for supporting the IE approach. Called the nuclear software inspection support and requirements traceability tool (NuSISRT), the tool incorporates software inspection, requirements traceability, and formal specification capabilities. We designed NuSISRT to partially automate software inspection and the analysis of requirements traceability. In addition, for formal specification and analysis, we used the formal requirements specification and analysis tool for nuclear engineering (NuSRS).

3.
To effectively analyze the influence of input-variable uncertainty on the failure probability of a structural system, the computation of the moment-independent importance measure based on failure probability is studied. Building on the single-loop Monte Carlo method and the density-weight method, a single-loop density-weight method is established that computes the failure-probability-based moment-independent importance measure accurately and efficiently. An engineering example shows that, compared with existing methods, the proposed method obtains sufficiently accurate results with far fewer model evaluations, greatly improving computational efficiency, and is well suited to engineering applications.
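The failure-probability-based moment-independent measure above can be illustrated with a brute-force double-loop Monte Carlo sketch (not the efficient single-loop density-weight method the paper develops). The toy limit state Y = X1 + 0.1·X2 > 2 and all sample sizes are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

def indicator_fail(x1, x2, threshold=2.0):
    # Failure when the response exceeds the threshold; X1 dominates the response.
    return 1.0 if (x1 + 0.1 * x2) > threshold else 0.0

def failure_prob(fixed=None, n=20000):
    # Plain Monte Carlo estimate of P_f; `fixed=(i, value)` pins input i.
    hits = 0.0
    for _ in range(n):
        x1 = fixed[1] if (fixed and fixed[0] == 1) else random.gauss(0, 1)
        x2 = fixed[1] if (fixed and fixed[0] == 2) else random.gauss(0, 1)
        hits += indicator_fail(x1, x2)
    return hits / n

pf = failure_prob()

def importance(i, n_outer=50):
    # S_i = E_{X_i} | P_f - P_f|X_i |, estimated by an outer loop over X_i
    # and an inner conditional failure-probability estimate.
    devs = []
    for _ in range(n_outer):
        xi = random.gauss(0, 1)
        devs.append(abs(pf - failure_prob(fixed=(i, xi), n=2000)))
    return statistics.mean(devs)

s1, s2 = importance(1), importance(2)
```

Because X1 drives the limit state, its measure should come out far larger than that of X2; the single-loop density-weight method of the paper obtains the same ranking with a single sample set.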

4.
R K Shyamasundar. Sadhana, 1994, 19(6): 941-969
In this paper, we provide an overview of the use of formal methods in the development of safety-critical systems and the notion of safety in this context. We attempt to draw lessons from the various research efforts that have gone into the development of robust and reliable software for safety-critical systems. In the context of India leaping into hi-tech areas, we argue for the need for a thrust in the development of quality software and also discuss the steps to be initiated towards such a goal. "If only we could learn the right lessons from the successes of the past, we would not need to learn from our failures" (C.A.R. Hoare). An earlier version was presented as an invited paper at the ISRO Conference on Software Engineering, VSSC, Trivandrum, 29-30 July 1994.

5.
M. B. Anoop, K. Balaji Rao. Sadhana, 2008, 33(6): 753-765
A fundamental component of safety assessment is the appropriate representation and incorporation of uncertainty. A procedure for handling hybrid uncertainties in stochastic mechanics problems is presented. The procedure can be used for determining the bounds on failure probability for cases where failure probability is a monotonic function of the fuzzy variables. The procedure is illustrated through an example problem of safety assessment of a nuclear power plant piping component against stress corrosion cracking, considering the stochastic evolution of stress corrosion cracks with time. It is found that the bounds obtained enclose the values of failure probability obtained from probabilistic analyses.

6.
Weak link (WL)/strong link (SL) systems constitute important parts of the overall operational design of high-consequence systems, with the SL system designed to permit operation of the system only under intended conditions and the WL system designed to prevent the unintended operation of the system under accident conditions. Degradation of the system under accident conditions into a state in which the WLs have not deactivated the system and the SLs have failed in the sense that they are in a configuration that could permit operation of the system is referred to as loss of assured safety. The probability of such degradation conditional on a specific set of accident conditions is referred to as probability of loss of assured safety (PLOAS). Previous work has developed computational procedures for the calculation of PLOAS under fire conditions for a system involving multiple WLs and SLs and with the assumption that a link fails instantly when it reaches its failure temperature. Extensions of these procedures are obtained for systems in which there is a temperature-dependent delay between the time at which a link reaches its failure temperature and the time at which that link actually fails.

7.
VVS Sarma, D Vijay Rao. Sadhana, 1997, 22(1): 121-132
In today’s competitive environment for software products, quality is an important characteristic. The development of large-scale software products is a complex and expensive process. Testing plays a very important role in ensuring product quality. Improving the software development process leads to improved product quality. We propose a queueing model based on re-entrant lines to depict the process of software modules undergoing testing/debugging, inspections and code reviews, verification and validation, and quality assurance tests before being accepted for use. Using the re-entrant line model for software testing, bounds on test times are obtained by considering the state transitions for a general class of modules and solving a linear programming model. Scheduling of software modules for tests at each process step yields the constraints for the linear program. The methodology presented is applied to the development of a software system and bounds on test times are obtained. These bounds are used to allocate time for the testing phase of the project and to estimate the release times of software.

8.
An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg–Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.

9.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes, and failure modes. The model focuses on the evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense, to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

10.
Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative 'pre-prior', and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior are an accurate reflection of the data that are later observed, the ML approach yields the minimum variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data, because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data.
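As a sketch of one of the five formulation methods compared above, the method of moments can fit a Beta prior to past failure-probability estimates and then be updated conjugately with newly observed demand data. The past estimates and the observed counts below are hypothetical:

```python
import statistics

# Hypothetical past estimates of a component failure probability.
past = [0.02, 0.035, 0.025, 0.03, 0.04]
m = statistics.mean(past)
v = statistics.variance(past)

# Method-of-moments Beta(a, b) fit: match E = a/(a+b) and
# Var = ab / ((a+b)^2 (a+b+1)) to the sample mean and variance.
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Conjugate Bayesian update with newly observed data: k failures in n demands.
k, n = 1, 40
a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)
```

The posterior mean is a convex combination of the prior mean and the observed rate k/n, which is what makes the amount of information encoded in (a, b) so influential, as the abstract notes.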

11.
Structural reliability methods aim at computing the probability of failure of systems with respect to prescribed limit state functions. A common practice is to evaluate these limit state functions using Monte Carlo simulations. The main drawback of this approach is the computational cost, because it requires computing a large number of deterministic finite element solutions. Surrogate models, which are built from a limited number of runs of the original model, have been developed as substitutes for the original model to reduce the computational cost. However, while these surrogate models decrease the computational cost drastically, they may fail to compute an accurate failure probability. In this paper, we focus on controlling the error introduced by a reduced basis surrogate model in the computation of the failure probability obtained by a Monte Carlo simulation. We propose a technique to determine bounds on this failure probability, as well as a strategy for enriching the reduced basis based on limiting the bounds of the error of the failure probability, for a multi-material elastic structure.

12.
Asymptotic approximations for probability integrals
This paper considers the asymptotic evaluation of probability integrals. The usual methods require that all random variables are transformed into standard normal variables. The method described here does not use such transformations. Asymptotic approximations are derived in the original space of the random variables. In this way it is also possible to obtain simple formulas for the sensitivity of the failure probability to changes in the distribution parameters.
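The flavor of such asymptotic evaluation can be illustrated in one dimension, where the leading-order (Mills-ratio) approximation of a normal tail probability needs no variable transformation. This is a generic illustration, not the paper's multivariate derivation:

```python
import math

def exact_tail(t):
    # Exact standard normal tail probability via the complementary error function.
    return 0.5 * math.erfc(t / math.sqrt(2))

def asymptotic_tail(t):
    # Leading-order asymptotic approximation: P(X > t) ~ phi(t) / t,
    # which sharpens as t grows (relative error of order 1/t^2).
    return math.exp(-t * t / 2) / (t * math.sqrt(2 * math.pi))

rel_err_3 = abs(asymptotic_tail(3) - exact_tail(3)) / exact_tail(3)
rel_err_5 = abs(asymptotic_tail(5) - exact_tail(5)) / exact_tail(5)
```

The approximation improves in the far tail, which is exactly the regime of small failure probabilities where such asymptotics are used.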

13.
Embedded software testing for electronic energy meters depends on the target machine and on external interacting devices, and matching the runtime environment to the test environment is a key technical difficulty in simulation-based testing of embedded software. Building on a study of embedded-software simulation techniques, a semi-simulation testing method for electronic energy meter embedded software is proposed, and a simulation test environment is constructed to enable dynamic testing of the embedded software under test. The computer simulation system that emulates the external interacting devices and the working model of the simulation test environment are also analyzed and explained. Tests on electronic energy meters verify that the method satisfies the testing needs of their embedded software.

14.
A quasi-linear system is one whose properties are linear but which is subjected to multiplicative random excitations that also appear in the linear terms. It is known that exact solutions for the stationary moments can be obtained analytically for such a quasi-linear system if the excitations are Gaussian white noises. However, the exact response probability, which is non-Gaussian, is not obtainable analytically. In this paper, a neural network approach is proposed to evaluate the stationary response probability for quasi-linear systems under both additive and multiplicative excitations of Gaussian white noises, based on the exact statistical moments obtained. Numerical examples show that the procedure yields accurate results if an appropriate form is assumed for the probability density function. The accuracy of the results is substantiated by comparing them with those obtained from Monte Carlo simulations.

15.
Various models which may be used for quantitative assessment of hardware, software, and human reliability are compared in this paper. Important comparison criteria are the system life cycle phase in which the model is intended to be used, the failure category and reliability means considered in the model, the model purpose, and model characteristics such as the model construction approach, model output, and model input. The main objective is to present limitations in the use of current models for reliability assessment of computer-based safety shutdown systems in the process industry and to provide recommendations on further model development. Main attention is given to presenting the overall concept of various models from a user's point of view rather than the technical details of specific models. A new failure classification scheme is proposed which shows how hardware and software failures may be modelled in a common framework.

16.
This paper quantitatively presents the results of a case study examining a fault tree analysis framework for the safety of digital systems. The case study is performed for the digital reactor protection system of nuclear power plants. The broader usage of digital equipment in nuclear power plants gives rise to the need for assessing safety and reliability, because such assessment plays an important role in proving the safety of a designed system in the nuclear industry. We quantitatively explain the relationship between the important characteristics of digital systems and the PSA result using mathematical expressions. We also demonstrate the effect of critical factors on system safety by a sensitivity study; the result, quantified using the fault tree method, shows that some factors markedly affect system safety: the common cause failure, the coverage of fault-tolerant mechanisms, and the software failure probability.
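The interplay of common cause failure, fault-tolerance coverage, and channel failure probability identified above can be sketched with a simple beta-factor model for a 1-out-of-2 digital protection system. The model form and all numbers are illustrative assumptions, not the paper's PSA model:

```python
# Channel failure probability, split into common-cause and independent parts
# via a beta-factor model; `coverage` is the fraction of faults that the
# fault-tolerant mechanisms detect and mask. All values are illustrative.
p_channel = 1e-3      # per-demand failure probability of one channel
beta = 0.05           # common cause fraction
coverage = 0.9        # fault-tolerance coverage

p_uncovered = p_channel * (1 - coverage)   # only uncovered faults matter
p_ccf = beta * p_uncovered                 # fails both channels at once
p_ind = (1 - beta) * p_uncovered           # independent channel failure

# Top event of the 1-out-of-2 system: CCF, or both channels failing
# independently (the squared term).
p_top = p_ccf + (1 - p_ccf) * p_ind ** 2
```

Even in this toy model the common cause term dominates the independent double failure by orders of magnitude, which is consistent with the abstract's finding that CCF and coverage drive the system result.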

17.
This paper presents a simulation technique for reliability analysis of linear dynamical systems. It is based on simple additive rules of probability (in contrast to other probabilistic approaches such as importance sampling). It is shown that the proposed approach is identical to a newly developed approach, Importance Sampling using Elementary Events (ISEE) [Au SK, Beck JL. First excursion probabilities for linear systems by very efficient importance sampling. Probab Eng Mech 2001;16(3):193-208]. A simple formula for the coefficient of variation of the estimator of the failure probability using the samples is also given. A 10-story building model with nonstationary excitation is utilized to demonstrate the accuracy and efficiency of the proposed method.
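The coefficient of variation of a failure-probability estimator, mentioned above, has a standard closed form for direct Monte Carlo, sketched below. The scalar exceedance problem replaces the paper's linear dynamical system and is purely illustrative:

```python
import math
import random

random.seed(1)

# Direct Monte Carlo estimate of a small exceedance probability
# P(X > threshold) for a standard normal X (illustrative stand-in
# for a first-excursion problem).
N = 200000
threshold = 3.0
hits = sum(1 for _ in range(N) if random.gauss(0, 1) > threshold)
p_hat = hits / N

# Coefficient of variation of the direct Monte Carlo estimator:
# c.o.v. = sqrt((1 - p) / (N p)), which blows up as p shrinks.
cov = math.sqrt((1 - p_hat) / (N * p_hat))
```

The formula makes the motivation for efficient schemes such as ISEE explicit: for rare events, N must grow roughly as 1/p to keep the c.o.v. acceptable.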

18.
Dynamic response analysis of nonlinear structures involving random parameters has long been an important and challenging problem. In recent years, the probability density evolution method, which is capable of capturing the instantaneous probability density function (PDF) of the dynamic response and its evolution, has been proposed and developed for nonlinear stochastic dynamical systems. In the probability density evolution method, the strategy for selecting representative points is of critical importance to efficiency, especially when the number of random parameters is large. Inspired by Cantor's set theory, a strategy of dimension reduction via mapping is proposed in the present paper. In this strategy, a two-dimensional domain is first considered and discretized such that the grid points are assigned probabilities associated with the joint PDF. These points are then sorted and set on a virtual line according to a certain principle. By partitioning the sorted points on the virtual line into a certain number of intervals and selecting one single point in each interval, the two random variables can be transformed into a single comprehensive random variable. The associated probability of each point is transformed accordingly. In the case of multiple random parameters, the above dimension-reduction procedure from two to one can be applied recursively, so that the random vector is finally transformed into one single comprehensive random variable. Numerical examples are investigated, showing that the proposed method is of high efficiency and fair accuracy.
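The sort-and-partition mapping from two random variables to one comprehensive variable can be sketched on a coarse grid. The abstract does not fix the ordering principle, so sorting by descending probability below is an assumption for illustration, as are the grid and interval count:

```python
import math

def phi(x):
    # Standard normal density, used to weight the grid points.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Discretize a 2-D domain and attach a probability to every grid point.
grid = [(-2 + i, -2 + j) for i in range(5) for j in range(5)]
probs = [phi(x) * phi(y) for (x, y) in grid]
total = sum(probs)
probs = [p / total for p in probs]  # renormalize on the truncated grid

# Set the points on a virtual line by sorting them (one possible principle:
# descending probability), then partition the line into intervals.
points = sorted(zip(grid, probs), key=lambda t: -t[1])
n_intervals = 5
size = len(points) // n_intervals

representatives = []
for k in range(n_intervals):
    lo = k * size
    hi = (k + 1) * size if k < n_intervals - 1 else len(points)
    chunk = points[lo:hi]
    rep_point = chunk[0][0]              # one point stands for the interval
    rep_prob = sum(p for _, p in chunk)  # its probability aggregates the chunk
    representatives.append((rep_point, rep_prob))
```

The 25 weighted grid points collapse to 5 representative points whose probabilities still sum to one, which is the property the recursive reduction relies on.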

19.
In this paper we consider systems made of components with time-dependent failure rates. A proper analysis of the time-dependent failure behaviour is very important for considerations of life extension of safety-critical systems such as nuclear power plants. This problem is tackled by Monte Carlo simulation, which does not suffer from the additional complexity introduced by the time inhomogeneity of the model parameters. The high reliability of the systems typically encountered in practice entails resorting to biasing techniques to favour the events of interest. In this work, we investigate the possibility of biasing the system failures to be distributed in time according to exponential laws. The drawbacks encountered in this procedure have driven us towards the adoption of biasing schemes relying on uniform distributions, which distribute failures more evenly over the system life.
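Uniform biasing of failure times can be sketched with a one-component importance-sampling estimate: failure times are drawn uniformly over the system life and reweighted by the likelihood ratio. The exponential component model, failure rate, and mission time are illustrative assumptions:

```python
import math
import random

random.seed(2)

lam = 1e-6     # component failure rate per hour (highly reliable)
T = 1000.0     # system life considered
exact = 1 - math.exp(-lam * T)   # exact P(failure before T)

# Importance sampling with a uniform biasing density g(t) = 1/T on [0, T]:
# failures are spread evenly over the system life, and each sample carries
# the likelihood ratio w = f(t)/g(t) with f(t) = lam * exp(-lam t).
N = 5000
weights = []
for _ in range(N):
    t = random.uniform(0, T)
    w = lam * math.exp(-lam * t) * T
    weights.append(w)
p_hat = sum(weights) / N
```

Direct simulation would see roughly one failure per thousand histories here, while every biased sample contributes information, which is the point of forcing failures over the system life.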

20.
Four verification test problems are presented for checking the conceptual development and computational implementation of calculations to determine the probability of loss of assured safety (PLOAS) in temperature-dependent systems with multiple weak links (WLs) and strong links (SLs). The problems are designed to test results obtained with the following definitions of loss of assured safety: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The test problems are based on assuming the same failure properties for all links, which results in problems that have the desirable properties of fully exercising the numerical integration procedures required in the evaluation of PLOAS and also possessing simple algebraic representations for PLOAS that can be used for verification of the analysis.
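When all links have identical, independent, continuously distributed failure times, definition (i) admits the kind of simple algebraic check the abstract describes: by symmetry, the probability that the SL failure times occupy the lowest ranks is 1/C(n_SL + n_WL, n_SL). A minimal Monte Carlo sketch (the exponential failure times and sample size are illustrative assumptions):

```python
import random
from math import comb

random.seed(3)

n_wl, n_sl = 2, 2
N = 100000
count = 0
for _ in range(N):
    wl_times = [random.expovariate(1.0) for _ in range(n_wl)]
    sl_times = [random.expovariate(1.0) for _ in range(n_sl)]
    # Definition (i): all SLs fail before any WL fails.
    if max(sl_times) < min(wl_times):
        count += 1
ploas = count / N

# With identical links, PLOAS reduces to a combinatorial value:
# the n_sl smallest of the n_wl + n_sl failure times must all be SLs.
exact = 1 / comb(n_wl + n_sl, n_sl)
```

For 2 WLs and 2 SLs this gives 1/6, so the simulation provides exactly the sort of verification target the four test problems are built around.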
