Similar Articles
20 similar articles found (search time: 0 ms)
1.
Fault tree analysis is a well-established method in system safety and reliability assessment. We transferred the principles of this technique to assembler code analysis, regarding any incorrect output of the software as the undesired top-level event. Starting from the instructions that provide the outputs and tracking back to all instructions contributing to them, a hierarchical system of references is generated that can be represented graphically as a fault tree. To cope with the large number of relations in the code, a tool suite has been developed that automatically creates these references and checks for unfulfilled preconditions of instructions. The tool was applied to the operational software of an inertial measurement unit, which provides safety-critical signals for the artificial stabilization of an aircraft. The method and its implementation as a software tool are presented, and the benefits, surprising results, and limitations we have experienced are discussed.

2.
A thorough requirements analysis is indispensable for developing and implementing safety-critical software systems such as nuclear power plant (NPP) software systems, because a single error in the requirements can generate serious software faults. However, it is very difficult to completely analyze system requirements. In this paper, an effective technique for software requirements analysis is suggested. For requirements verification and validation (V&V) tasks, our technique uses software inspection, requirements traceability, and formal specification with structural decomposition. Software inspection and requirements traceability analysis are widely considered the most effective software V&V methods. Although formal methods are also considered an effective V&V activity, they are difficult to use properly in the nuclear field, as in other fields, because of their mathematical nature. In this work, we propose an integrated environment (IE) approach for requirements, which enables easy inspection by combining requirements traceability with effective use of a formal method. The paper also introduces computer-aided tools for supporting the IE approach. Called the nuclear software inspection support and requirements traceability (NuSISRT) tool, it incorporates software inspection, requirements traceability, and formal specification capabilities. We designed NuSISRT to partially automate software inspection and the analysis of requirements traceability. In addition, for formal specification and analysis, we used the formal requirements specification and analysis tool for nuclear engineering (NuSRS).

3.
This paper proposes an efficient metamodeling approach for uncertainty quantification of complex systems based on the Gaussian process model (GPM). The proposed GPM-based method can efficiently and accurately calculate the mean and variance of model outputs with uncertain parameters specified by arbitrary probability distributions. Because of the use of the GPM, closed-form expressions for the mean and variance can be derived by decomposing high-dimensional integrals into one-dimensional integrals. This paper details how to compute these one-dimensional integrals efficiently: when the parameters are uniformly or normally distributed, the integrals can be evaluated analytically, while for other distributions the effective Gaussian quadrature technique is adopted for fast computation. As a result, the developed GPM method calculates the mean and variance of model outputs efficiently, independent of the parameter distributions. The proposed method is applied to a collection of examples, and its accuracy and efficiency are compared with Monte Carlo simulation, which is used as the benchmark solution. Results show that the proposed GPM method is feasible and reliable for efficient uncertainty quantification of complex systems in terms of computational accuracy and efficiency. Copyright © 2016 John Wiley & Sons, Ltd.
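The abstract above does not reproduce the paper's closed-form GPM expressions; as a minimal sketch of the one-dimensional quadrature step it mentions, the following evaluates the mean and variance of a model output under a normal input with a hard-coded 3-point Gauss-Hermite rule (the rule size and the toy model f(x) = x² are illustrative assumptions, not the paper's formulas):

```python
from math import sqrt, pi

# 3-point Gauss-Hermite rule for the weight exp(-t^2):
# nodes 0, +/- sqrt(3/2); weights 2*sqrt(pi)/3 and sqrt(pi)/6.
GH_NODES = [-sqrt(1.5), 0.0, sqrt(1.5)]
GH_WEIGHTS = [sqrt(pi) / 6, 2 * sqrt(pi) / 3, sqrt(pi) / 6]

def normal_mean_var(f, mu, sigma):
    """Mean and variance of f(X) for X ~ N(mu, sigma^2), via a
    one-dimensional Gauss-Hermite quadrature (exact for f up to
    polynomial degree 5 with this 3-point rule)."""
    vals = [f(mu + sqrt(2) * sigma * t) for t in GH_NODES]
    mean = sum(w * v for w, v in zip(GH_WEIGHTS, vals)) / sqrt(pi)
    second = sum(w * v * v for w, v in zip(GH_WEIGHTS, vals)) / sqrt(pi)
    return mean, second - mean ** 2

m, v = normal_mean_var(lambda x: x * x, 0.0, 1.0)
# for X ~ N(0, 1): E[X^2] = 1 and Var[X^2] = 2, recovered exactly
```

In the paper's setting the same one-dimensional rule would be applied per input dimension after the GPM decomposition; here it is shown directly on a scalar function.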

4.
To effectively analyze how the uncertainty of the input variables of a structural system affects its failure probability, the computation of the moment-independent importance measure based on failure probability is studied. Building on the single-loop Monte Carlo method and the density-weight method, a single-loop density-weight method is established for the accurate and efficient computation of the moment-independent importance measure of the failure probability. Engineering examples show that, compared with existing methods, this method obtains sufficiently accurate results with far fewer model evaluations, greatly improving computational efficiency, and that it has good engineering applicability.

5.
R. K. Shyamasundar, Sadhana, 1994, 19(6): 941–969
In this paper, we provide an overview of the use of formal methods in the development of safety-critical systems and of the notion of safety in this context. We attempt to draw lessons from the various research efforts directed towards the development of robust, reliable software for safety-critical systems. In the context of India's leap into high-tech areas, we argue for a thrust in the development of quality software and discuss the steps to be initiated towards such a goal. "If only we could learn the right lessons from the successes of the past, we would not need to learn from our failures" (C.A.R. Hoare). An earlier version was presented as an invited paper at the ISRO Conference on Software Engineering, VSSC, Trivandrum, 29–30 July 1994.

6.
M. B. Anoop, K. Balaji Rao, Sadhana, 2008, 33(6): 753–765
A fundamental component of safety assessment is the appropriate representation and incorporation of uncertainty. A procedure for handling hybrid uncertainties in stochastic mechanics problems is presented. The procedure can be used to determine bounds on the failure probability for cases where the failure probability is a monotonic function of the fuzzy variables. It is illustrated through an example problem: the safety assessment of a nuclear power plant piping component against stress corrosion cracking, considering the stochastic evolution of stress corrosion cracks with time. It is found that the bounds obtained enclose the values of failure probability obtained from probabilistic analyses.
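The interval bounds described above can be sketched with alpha-cuts: for a failure probability that is monotonically increasing in a fuzzy variable, the bounds at each membership level are simply the values at the alpha-cut endpoints. The triangular fuzzy number and the pf model below are hypothetical placeholders, not taken from the paper:

```python
from math import exp

def tri_alpha_cut(lo, mode, hi, alpha):
    """Alpha-cut interval of a triangular fuzzy number (lo, mode, hi)."""
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

def pf_bounds(pf, lo, mode, hi, alpha):
    """Bounds on a failure probability that increases monotonically
    in the fuzzy variable: evaluate pf at the alpha-cut endpoints."""
    a, b = tri_alpha_cut(lo, mode, hi, alpha)
    return pf(a), pf(b)

pf = lambda x: 1.0 - exp(-x / 10.0)   # hypothetical monotone pf model
low, up = pf_bounds(pf, 1.0, 2.0, 3.0, 0.5)
```

At alpha = 1 the interval collapses to the mode and the two bounds coincide, recovering a single probabilistic estimate.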

7.
Weak link (WL)/strong link (SL) systems constitute important parts of the overall operational design of high-consequence systems: the SL system is designed to permit operation of the system only under intended conditions, and the WL system is designed to prevent unintended operation under accident conditions. Degradation of the system under accident conditions into a state in which the WLs have not deactivated the system and the SLs have failed, in the sense that they are in a configuration that could permit operation, is referred to as loss of assured safety. The probability of such degradation conditional on a specific set of accident conditions is referred to as the probability of loss of assured safety (PLOAS). Previous work developed computational procedures for calculating PLOAS under fire conditions for a system involving multiple WLs and SLs, under the assumption that a link fails instantly when it reaches its failure temperature. These procedures are extended here to systems in which there is a temperature-dependent delay between the time at which a link reaches its failure temperature and the time at which it actually fails.

8.
A novel method that combines the active learning Kriging (ALK) model with importance sampling is proposed in this paper. Its main aim is to solve problems with very small failure probabilities and multiple failure regions. A surrogate limit state surface (LSS) that strikes a balance between the Kriging mean and variance is proposed. In each iteration, importance samples of the surrogate LSS are generated, optimal training points are chosen, the Kriging model is updated, and the surrogate LSS is refined. After several iterations, the surrogate LSS converges to the true LSS. To obtain all the local and global most probable points (MPPs) on the surrogate LSS in each iteration, a recently proposed evolutionary algorithm from the field of multimodal optimization is introduced. In this way, none of the potential failure regions is missed and the unbiasedness of the proposed method is guaranteed. The contribution factor of each MPP is defined and a weighted multimodal instrumental sampling density is formulated, so that more attention is paid to the more important failure regions and further training points are saved. The performance of the proposed method is verified on six case studies.
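A common selection/stopping rule in active learning Kriging schemes of this kind is the U learning function, U = |mu|/sigma, which measures how uncertain the sign of the predicted limit state is. The fragment below sketches only that selection step, assuming the Kriging predictions (means and standard deviations at candidate points) are already given; the threshold U >= 2 and the candidate values are illustrative assumptions, not the paper's exact criterion:

```python
def select_training_point(means, sigmas, u_stop=2.0):
    """Pick the candidate with the smallest U = |mu|/sigma, i.e. the
    point whose limit-state sign is most ambiguous under the current
    Kriging model; return None once min U >= u_stop (sign
    misclassification probability below about Phi(-2) ~ 2.3%)."""
    best, best_u = None, float("inf")
    for i, (m, s) in enumerate(zip(means, sigmas)):
        u = abs(m) / s if s > 0 else float("inf")
        if u < best_u:
            best, best_u = i, u
    return best if best_u < u_stop else None

idx = select_training_point([1.5, -0.1, 4.0], [1.0, 0.5, 0.8])
# candidate 1 has U = 0.2, the most ambiguous sign, so it is selected
```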

9.
V. V. S. Sarma, D. Vijay Rao, Sadhana, 1997, 22(1): 121–132
In today's competitive environment for software products, quality is an important characteristic. The development of large-scale software products is a complex and expensive process, and testing plays a very important role in ensuring product quality. Improving the software development process leads to improved product quality. We propose a queueing model based on re-entrant lines to depict the process of software modules undergoing testing/debugging, inspections and code reviews, verification and validation, and quality assurance tests before being accepted for use. Using the re-entrant line model, bounds on test times are obtained by considering the state transitions for a general class of modules and solving a linear programming model; scheduling the software modules for tests at each process step yields the constraints of the linear program. The methodology is applied to the development of a software system, and the resulting bounds are used to allocate time for the testing phase of the project and to estimate the release times of the software.

10.
An efficient strategy to approximate the failure probability function in structural reliability problems is proposed. The failure probability function (FPF) is the failure probability of the structure expressed as a function of the design parameters, which in this study are distribution parameters of random variables representing uncertain model quantities. Determining the FPF is usually numerically demanding, since repeated reliability analyses are required. The proposed strategy is based on the concept of augmented reliability analysis, which requires only a single run of a simulation-based reliability method. This paper introduces a new sample regeneration algorithm that allows the required failure samples of the design parameters to be generated without any additional evaluation of the structural response. In this way, efficiency is further improved while high accuracy in the estimation of the FPF is ensured. To illustrate the efficiency and effectiveness of the method, case studies involving a turbine disk and an aircraft inner flap are included.
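Augmented reliability analysis of the kind described rests on reusing one set of failure samples for every value of the design parameter via density reweighting. The sketch below illustrates that idea on a one-dimensional toy problem; the limit state x > 3, the widened N(0, 1.5²) sampling density, and the choice of the mean as the design parameter are all assumptions for illustration, and the paper's sample regeneration algorithm is not reproduced:

```python
import random
from statistics import NormalDist

random.seed(1)
N = 200_000
SIGMA_AUG = 1.5                              # widened sampling density
samples = [random.gauss(0.0, SIGMA_AUG) for _ in range(N)]
fails = [x for x in samples if x > 3.0]      # hypothetical limit state

def pf_at(theta):
    """Failure probability for X ~ N(theta, 1), estimated by
    importance-reweighting the single augmented sample set:
    pf(theta) ~ (1/N) * sum over failure samples of
    f(x; theta) / f_aug(x)."""
    target, aug = NormalDist(theta, 1.0), NormalDist(0.0, SIGMA_AUG)
    return sum(target.pdf(x) / aug.pdf(x) for x in fails) / N

# pf_at can now be evaluated for any theta without new model runs;
# pf_at(0.0) should approach Phi(-3) ~ 1.35e-3
```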

11.
The European Robotic Arm (ERA) is a seven-degrees-of-freedom relocatable anthropomorphic robotic manipulator system, to be used in manned space operations on the International Space Station, supporting the assembly and external servicing of the Russian segment. The safety design concept and implementation of the ERA are described, in particular with respect to the central computer's software design. A top-down analysis and specification process is used to flow down the safety aspects of the ERA system to the subsystems, which are produced by a consortium of companies in many countries. The user requirements documents and the critical function list are the key documents in this process. Bottom-up analysis (FMECA) and testing, at both subsystem and system level, are the basis for safety verification. A number of examples show the use of the approach and methods.

12.
An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network. A modification of the Levenberg–Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared on real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.

13.
Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative ‘pre-prior’, and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior is an accurate reflection of the data that is later observed, the ML approach yields the minimum variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data.
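Two of the listed methods can be made concrete for a Beta prior on a failure probability: the method of moments fits Beta(a, b) to the mean and variance of the past data, and the conjugate update then yields the posterior directly. A minimal sketch (all numbers are hypothetical):

```python
def beta_prior_mom(mean, var):
    """Method-of-moments Beta(a, b) prior for a failure probability,
    fitted to the mean and variance of past failure-rate data
    (requires var < mean * (1 - mean))."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def update(a, b, failures, trials):
    """Conjugate Beta posterior after observing binomial data."""
    return a + failures, b + (trials - failures)

a, b = beta_prior_mom(0.1, 0.0025)        # -> Beta(3.5, 31.5)
a2, b2 = update(a, b, failures=2, trials=10)
post_mean = a2 / (a2 + b2)                # posterior mean ~ 0.122
```

The same machinery shows the paper's point about assumed information: a smaller prior variance yields larger a + b, so the observed data shift the posterior less.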

14.
Software reliability assessment models in use today treat software as a monolithic block; an aversion towards 'atomic' models seems to exist. Such models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes, and failure modes. The model focuses on evaluating failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional as well as the system level. In order to quantify the probability of failure (or success) of a specific element of the architecture, data are needed. The term element of the architecture is used here in its broadest sense, meaning anything from a single failure mode to a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development, and then identifies the mechanisms for incorporating these sources of relevant data into the FASRE model.

15.
Structural reliability methods aim at computing the probability of failure of systems with respect to prescribed limit state functions. A common practice is to evaluate these limit state functions using Monte Carlo simulation. The main drawback of this approach is its computational cost, because it requires a large number of deterministic finite element solutions. Surrogate models, built from a limited number of runs of the original model, have been developed as substitutes for the original model to reduce the computational cost. However, while decreasing the cost drastically, surrogate models may fail to compute an accurate failure probability. In this paper, we focus on controlling the error introduced by a reduced basis surrogate model in the computation of the failure probability obtained by Monte Carlo simulation. We propose a technique to determine bounds on this failure probability, as well as a strategy for enriching the reduced basis based on limiting the bounds of the error of the failure probability, for a multi-material elastic structure. Copyright © 2017 John Wiley & Sons, Ltd.
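The bounding idea can be sketched as follows: if the surrogate carries a certified error bound, each Monte Carlo sample is classified as certainly failed, certainly safe, or unresolved, which yields lower and upper bounds on the failure probability. The linear surrogate, the error bound ERR, and the sample size below are illustrative assumptions, not the paper's reduced-basis model:

```python
import random

random.seed(0)

def g_surrogate(x):
    return 3.0 - x          # hypothetical cheap approximation of g_true

ERR = 0.2                   # assumed certified bound on |g_true - g_surrogate|
N = 100_000

n_fail, n_uncertain = 0, 0
for _ in range(N):
    g = g_surrogate(random.gauss(0.0, 1.0))
    if g < -ERR:            # fails even in the worst case of the error
        n_fail += 1
    elif g < ERR:           # sign of the true limit state is unresolved
        n_uncertain += 1

pf_lower = n_fail / N                     # certain failures only
pf_upper = (n_fail + n_uncertain) / N     # failures plus unresolved
```

Enriching the reduced basis shrinks ERR, which empties the unresolved band and closes the gap between the two bounds.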

16.
The liquefaction potential index (LPI) has been widely used to develop fragility functions for predicting liquefaction-induced ground failure. As the fragility function tends to vary from one region to another, it is best developed from region-specific data; when the amount of region-specific data is limited, developing a region-specific fragility curve is a challenging problem. In this study, a Hierarchical Bayesian Model (HBM) is suggested for developing region-specific fragility functions based on the LPI, which can systematically consider the amount and characteristics of the local data as well as data from other regions. The suggested method is illustrated with an example. It is shown that the HBM outperforms the lumped parameter model (LPM), which does not consider the inter-region variability of the fragility curves. When the amount of region-specific data is large, the fragility function developed with the HBM is very close to that developed with the independent parameter model (IPM), which constructs a region-specific fragility function using only the region-specific data. When the region-specific data are not sufficient, the HBM also outperforms the IPM by borrowing information from other regions.

17.
Asymptotic approximations for probability integrals
This paper considers the asymptotic evaluation of probability integrals. The usual methods require that all random variables are transformed into standard normal variables. The method described here does not use such transformations. Asymptotic approximations are derived in the original space of the random variables. In this way it is also possible to obtain simple formulas for the sensitivity of the failure probability to changes in the distribution parameters.

18.
A method of estimating the probability density function and cumulative distribution function when only the ordinary or central moments of the distribution are known is examined. The technique is used in conjunction with previous work which yields the ordinary moments of time to first passage failure to obtain accurate estimates of the failure probability for two representative oscillators. The results are then compared to those obtained by a nearly exact numerical scheme.

19.
In this paper we discuss the accuracy of sensitivity analysis of the probability of failure with sampling-based schemes. Three approaches commonly employed in the literature are discussed: the Weak sensitivity analysis, the direct employment of finite difference schemes, and the Common Random Variable approach. Theoretical estimates of the bias, the coefficient of variation, and the mean square error for each approach are presented. The results hold for a single random variable; the extension to more general situations should be pursued in future work. These results lead to the conclusion that the Common Random Variable approach is superior to the Direct approach from the theoretical point of view. The Weak approach, on the other hand, is equivalent to the Common Random Variable approach with a central finite difference formula, so the choice between these two approaches is a matter of computational efficiency. The results of this work should contribute to the further development of efficient algorithms for the problem under study.
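The Common Random Variable approach can be sketched for a single random variable: the same underlying uniforms are reused for both perturbed parameter values, so the central finite difference of the two failure-probability estimates is not swamped by independent sampling noise. The toy limit state (failure when X > 3 for X ~ N(theta, 1)) and the step size are assumptions for illustration:

```python
import random
from statistics import NormalDist

random.seed(42)
N, DELTA, THRESH = 200_000, 0.1, 3.0
inv = NormalDist().inv_cdf

# common random numbers: one fixed set of standard normal draws,
# reused for every value of the parameter theta
z = [inv(random.random()) for _ in range(N)]

def pf(theta):
    """P(X > THRESH) for X ~ N(theta, 1), estimated on the SAME
    underlying samples for every theta."""
    return sum(1 for zi in z if theta + zi > THRESH) / N

# central finite difference of the failure probability w.r.t. theta;
# the analytic sensitivity is phi(THRESH) ~ 4.43e-3
dpf = (pf(DELTA) - pf(-DELTA)) / (2 * DELTA)
```

With independent samples per theta, the difference of two O(1e-3) estimates would be dominated by Monte Carlo noise; with common draws, only samples near the limit state change their indicator.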

20.
To address the low efficiency of structural reliability analysis under random-interval mixed uncertainties (RIMU), this paper establishes a line sampling (LS) method under RIMU. The proposed LS divides the reliability analysis under RIMU into two stages: Markov chain simulation is used to efficiently search for the design point in the first stage, and the upper and lower bounds of the failure probability are then estimated by LS in the second stage. To improve computational efficiency, a Kriging model is employed to reduce the number of model evaluations in both stages. For efficiently searching for the design point, the Kriging model is constructed and adaptively updated in the first stage to accurately recognize the Markov chain candidate states, and it is then sequentially updated by an improved U learning function in the second stage to accurately estimate the failure probability bounds. The proposed LS under RIMU with the Kriging model not only reduces the number of model evaluations but also decreases the size of the candidate sample pool for constructing the Kriging model in the two stages. The presented examples demonstrate the superior computational efficiency and accuracy of the proposed method in comparison with some existing methods.
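Plain line sampling (without the interval variables or the Kriging model described above) can be sketched in standard normal space: each sample fixes the coordinates perpendicular to an important direction, the distance c to the limit state along that direction is found by root-finding, and each line contributes Phi(-c) to the failure probability estimate. The linear limit state below, aligned with the important direction, is an illustrative assumption for which the result is exact:

```python
import random
from statistics import NormalDist

random.seed(7)
Phi = NormalDist().cdf
BETA = 3.0

def g(u1, u2):
    """Hypothetical linear limit state in standard normal space;
    the important direction is the u1 axis."""
    return BETA - u1

def line_root(u2, lo=0.0, hi=10.0, tol=1e-10):
    """Distance along the important direction at which g = 0,
    found by bisection on the line through (., u2)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid, u2) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# each line contributes Phi(-c); averaging over lines gives pf
N = 50
pf = sum(Phi(-line_root(random.gauss(0.0, 1.0))) for _ in range(N)) / N
# for this linear g every line gives c = 3, so pf = Phi(-3) ~ 1.35e-3
```

For a nonlinear limit state the per-line roots differ and the average over lines carries the Monte Carlo variance; here the estimator is exact regardless of N.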
