Similar Documents (20 results)
1.
Safety-critical systems are designed to prevent catastrophic consequences of failure, such as injury or death to humans and damage to the environment, and must therefore be engineered to meet stringent reliability requirements. The purpose of this paper is to survey the models used in the reliability analysis of safety-critical systems. To achieve this goal, we conducted a systematic review of 40 shortlisted studies, classified according to the techniques they apply to safety-critical systems. The paper summarizes the literature on the reliability of safety-critical systems and clearly sets out its limitations. International safety standards and applications of safety-critical systems are discussed systematically. Finally, the paper highlights research trends, open challenges and insights for future research in the area of safety-critical systems.

2.
Software plays an increasingly important role in modern safety-critical systems. Although research has been done to integrate software into the classical probabilistic risk assessment (PRA) framework, current PRA practice overwhelmingly neglects the contribution of software to system risk. Dynamic probabilistic risk assessment (DPRA) is considered the next generation of PRA techniques: a set of methods in which simulation models representing the behavior of the elements of a system are exercised in order to identify the system's risks and vulnerabilities. Modeling software for use in the DPRA framework, however, is also quite complex, and very little has been done to address the question directly and comprehensively. This paper develops a methodology for integrating software contributions in the DPRA environment. The framework includes a software representation based on multi-level objects and an approach for incorporating that representation into the DPRA environment SimPRA; the paper also proposes a scheme for simulating the multi-level objects in the simulation-based DPRA environment. This is a new way to address the state-explosion problem in the DPRA environment, and the study is the first systematic effort to integrate software risk contributions into DPRA environments.

3.
In the realm of safety-related systems, a growing number of functions are realized by software, ranging from firmware to autonomous decision-taking software. To support (political) real-world decision makers, quantitative risk assessment methodology quantifies the reliability of systems. The optimal choice of safety measures within an available budget, for example under the UK 'as low as reasonably practicable' (ALARP) approach, requires such quantification. If a system contains software, some accepted methods for quantifying software reliability exist, but none of them is generally applicable, as we will show. We propose a model that brings software into the quantitative risk assessment domain by introducing failures of software modules (with their probabilities) as basic events in a fault tree. The method is known as TOPAAS (Task-Oriented Probability of Abnormalities Analysis for Software). TOPAAS is a factor model allowing the quantification of the basic 'software' events in fault tree analyses. In this paper, we argue that this is the best approach currently available to industry. TOPAAS is a practical model by design and is currently undergoing field testing in risk assessments of programmable electronic safety-related systems in tunnels and in the control systems of movable storm surge barriers in the Netherlands. The model is constructed to incorporate detailed fields of knowledge and to provide focus toward reliability quantification in the form of a probability measure of mission failure. Our development also provides context for further in-depth research. Copyright © 2013 John Wiley & Sons, Ltd.
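As an illustration of how a factor model can turn qualitative scores into a software basic event in a fault tree, consider the minimal Python sketch below. The base probability, factor names and weights are invented for the example; they are not TOPAAS's actual factors or calibration.

    # Hypothetical factor model for a software basic event (not TOPAAS's real
    # factors or calibration): a base probability of failure on demand is
    # adjusted multiplicatively by factor scores, then combined in a fault tree.
    base_pfd = 1e-3                              # assumed base failure probability
    factor_scores = {"development process": 0.5, # hypothetical scores: <1 mitigates,
                     "complexity": 2.0,          # >1 aggravates the base probability
                     "test coverage": 0.7}
    software_pfd = base_pfd
    for score in factor_scores.values():
        software_pfd *= score
    hardware_pfd = 5e-4                          # assumed second basic event
    # OR gate at the top of the (two-event) fault tree, independence assumed:
    top_event_pfd = software_pfd + hardware_pfd - software_pfd * hardware_pfd
    print(f"software basic event: {software_pfd:.2e}, top event: {top_event_pfd:.2e}")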

4.
Software reliability assessment models in use today treat software as a monolithic block; an aversion towards 'atomic' models seems to exist. Such models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, capturing both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. It focuses on the evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. The model can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as the system level. In order to quantify the probability of failure (or success) of a specific element of the architecture, data are needed; 'element of the architecture' is used here in its broadest sense, from a single failure mode up to a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development, and then identifies the mechanisms for incorporating these sources of relevant data into the FASRE model.
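A minimal sketch of the kind of Bayesian quantification the abstract describes, assuming a conjugate Beta prior on one failure-mode probability updated with developers' test results and propagated through a simple OR gate; all numbers are invented, and FASRE's actual framework is richer than this.

    # Beta-binomial update of one failure-mode probability (illustrative only).
    a, b = 1.0, 99.0                 # assumed prior Beta(1, 99): mean 0.01
    tests, failures = 500, 2         # assumed evidence from developers' testing
    a_post, b_post = a + failures, b + tests - failures
    p_mode1 = a_post / (a_post + b_post)     # posterior mean, failure mode 1
    p_mode2 = 0.002                          # assumed second failure mode
    # propagate to the function level through an OR gate (independence assumed):
    p_function = 1 - (1 - p_mode1) * (1 - p_mode2)
    print(f"P(mode 1) = {p_mode1:.4f}, P(function fails) = {p_function:.4f}")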

5.
In this article, the authors present a general methodology for age-dependent reliability analysis of degrading or ageing components, structures and systems. The methodology is based on Bayesian methods and inference, in particular their ability to incorporate prior information, and on the idea that ageing can be treated as an age-dependent change of beliefs about reliability parameters (mainly the failure rate): beliefs change not only as new failure data or other information become available, but also continuously with the flow of time itself. The main objective is to show clearly how practitioners can apply Bayesian methods to risk and reliability analysis in the presence of ageing phenomena. The methodology describes step-by-step failure rate analysis of ageing components, from building the Bayesian model to verifying and generalizing it with Bayesian model averaging, which, as the authors suggest, can serve as an alternative to various goodness-of-fit assessment tools and as a universal means of coping with multiple sources of uncertainty. The proposed methodology can deal with sparse and rare failure events, as is the case for electrical components, piping systems and other highly reliable systems. In a case study of electrical instrumentation and control components, the methodology was applied to analyse age-dependent failure rates together with the treatment of uncertainty due to age-dependent model selection. Copyright © 2013 John Wiley & Sons, Ltd.
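To make the step-by-step idea concrete, here is a minimal sketch with a conjugate Gamma-Poisson failure-rate update followed by a two-model Bayesian model average; the prior, the data and the model weights are all assumed for illustration, not taken from the article.

    # Gamma-Poisson update of a failure rate, then Bayesian model averaging
    # over two candidate ageing hypotheses (all numbers assumed).
    alpha, beta = 2.0, 10_000.0          # prior Gamma(alpha, beta) on lambda [1/h]
    failures, exposure = 3, 87_600.0     # observed failures over component-hours
    alpha_p, beta_p = alpha + failures, beta + exposure
    lam_const = alpha_p / beta_p         # posterior mean, constant-rate model
    lam_ageing = 1.4 * lam_const         # hypothetical estimate from an ageing model
    w_const, w_ageing = 0.6, 0.4         # assumed posterior model weights
    lam_bma = w_const * lam_const + w_ageing * lam_ageing
    print(f"model-averaged failure rate: {lam_bma:.2e} per hour")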

6.
While the event-tree (ET)/fault-tree (FT) methodology is the most popular approach to probabilistic risk assessment (PRA), concerns have been raised in the literature regarding its potential limitations in the reliability modeling of dynamic systems. Markov reliability models have the ability to capture the statistical dependencies between failure events that can arise in complex dynamic systems. A methodology is presented that combines Markov modeling with the cell-to-cell mapping technique (CCMT) to construct dynamic ETs/FTs, addressing the concerns with the traditional ET/FT methodology. The approach is demonstrated using a simple water level control system. It is also shown how the generated ETs/FTs can be incorporated into an existing PRA, so that only the (sub)systems requiring dynamic methods need to be analyzed with this approach while the static model of the rest of the system is still leveraged.
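A toy version of the cell-to-cell mapping idea: the controlled variable is discretized into five cells and a probability vector is evolved through an assumed transition matrix. The paper's water level control example is far more detailed; every value here is invented.

    import numpy as np

    # Cell-to-cell mapping sketch: discretize the process variable into cells
    # and step a probability vector with a transition matrix (values assumed).
    n_cells = 5
    T = np.eye(n_cells) * 0.8            # probability of staying in the same cell
    for i in range(n_cells):
        j = i + 1 if i < n_cells // 2 else i - 1   # drift toward the middle cell
        T[i, j] += 0.2
    p = np.zeros(n_cells)
    p[0] = 1.0                           # start with the level in the lowest cell
    for _ in range(10):                  # ten CCMT time steps
        p = p @ T
    print(p)                             # cell occupation probabilities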

7.
Various models which may be used for quantitative assessment of hardware, software and human reliability are compared in this paper. Important comparison criteria are the system life cycle phase in which the model is intended to be used, the failure category and reliability means considered in the model, the model's purpose, and model characteristics such as construction approach, output and input. The main objective is to present limitations in the use of current models for reliability assessment of computer-based safety shutdown systems in the process industry and to provide recommendations for further model development. Attention is given mainly to the overall concept of the various models from a user's point of view rather than to the technical details of specific models. A new failure classification scheme is proposed which shows how hardware and software failures may be modelled in a common framework.

8.
Digital instrumentation and control (I&C) systems can provide important benefits in many safety-critical applications, but they can also introduce potential new failure modes that can affect safety. Unlike electro-mechanical systems, whose failure modes are fairly well understood and which can often be built to fail in a particular way, software fails in very unpredictable ways; there is virtually no nontrivial software that will function as expected under all conditions. Consequently, there is a great deal of concern about whether there is a sufficient basis on which to resolve questions about safety. In this paper, an approach for validating the safety requirements of digital I&C systems is developed which uses the Dynamic Flowgraph Methodology to conduct automated hazard analyses. The prime implicants of these analyses can be used to identify unknown system hazards, prioritize the disposition of known system hazards, and guide lower-level design decisions to either eliminate or mitigate known hazards. In a case study involving a space-based reactor control system, the method succeeded in identifying an unknown failure mechanism.
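The prime implicants the abstract refers to can be illustrated on a toy Boolean hazard condition. The brute-force search below stands in for the Dynamic Flowgraph Methodology's far richer multi-valued, time-dependent analysis; the hazard function is invented.

    from itertools import product

    # Toy prime-implicant search over three binary variables; None = "don't care".
    def hazard(a, b, c):                       # hypothetical top event
        return bool((a and b) or (a and not c))

    def is_implicant(cube):                    # cube forces the hazard to occur
        free = [i for i, v in enumerate(cube) if v is None]
        for bits in product([0, 1], repeat=len(free)):
            full = list(cube)
            for i, bit in zip(free, bits):
                full[i] = bit
            if not hazard(*full):
                return False
        return True

    implicants = [c for c in product([0, 1, None], repeat=3) if is_implicant(c)]
    primes = [c for c in implicants            # keep only the most general cubes
              if not any(d != c and all(dv is None or dv == cv
                                        for dv, cv in zip(d, c))
                         for d in implicants)]
    print(primes)   # [(1, 1, None), (1, None, 0)]: a AND b, or a AND NOT c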

9.
The fault coverage of digital systems in nuclear power plants is evaluated using a simulated fault injection method. Digital systems have numerous advantages, such as the sharing of hardware elements and the replication of hardware across the needed number of independent channels. However, the application of digital systems to safety-critical functions in nuclear power plants has been limited due to reliability concerns, among which fault coverage is one of the most important factors. In this study, we propose a method for evaluating the fault coverage of safety-critical digital systems in nuclear power plants. The system under assessment is a local coincidence logic processor for the digital plant protection system at Ulchin nuclear power plant units 5 and 6. The assessed system is simplified, and a simulated fault injection method is then applied to evaluate the fault coverage of two fault detection mechanisms. From the fault injection experiment, the fault detection coverage of the watchdog timer is 44.2% and that of the read-only memory (ROM) checksum is 50.5%. Our experiments show that the fault coverage of a safety-critical digital system can be effectively quantified using the simulated fault injection method.
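A minimal simulated-fault-injection sketch (our construction, not the paper's processor model): random single-bit flips are injected into a state image, and the coverage of a checksum-style detector is the fraction of faults it catches. As in the paper, faults outside the detector's scope go unnoticed; the memory layout and sizes are assumed.

    import random

    # Coverage estimation by simulated fault injection (all details assumed).
    random.seed(0)
    rom = [0xAB] * 256                     # region guarded by a ROM checksum
    ram = [0x00] * 256                     # region this detector cannot see
    reference = sum(rom) & 0xFF
    trials, detected = 2000, 0
    for _ in range(trials):
        r, m = rom.copy(), ram.copy()
        k = random.randrange(len(r) + len(m))          # pick a random fault site
        if k < len(r):
            r[k] ^= 1 << random.randrange(8)           # single bit flip in ROM
        else:
            m[k - len(r)] ^= 1 << random.randrange(8)  # flip in RAM: undetectable here
        if (sum(r) & 0xFF) != reference:               # does the checksum fire?
            detected += 1
    print(f"estimated fault coverage: {detected / trials:.1%}")   # about 50%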

10.
Software reliability models can provide quantitative measures of the reliability of software systems, which are of growing importance today. Most of the models are parametric ones which rely on modelling the software failure process as a Markov or non-homogeneous Poisson process. It has been noticed that many of them do not give very accurate predictions of future software failures because their focus is on fitting past data. In this paper we study the use of the double exponential smoothing technique to predict software failures. The proposed approach is non-parametric and can provide more accurate predictions than traditional parametric models because it gives a higher weight to the most recent failure data. The method is very easy to use, requires a very limited amount of data storage and computational effort, and can be updated instantly without much calculation; hence it is a tool that deserves wider use in practice. Numerical examples are shown to highlight its applicability, and comparisons with other commonly used software reliability growth models are also presented. © 1997 John Wiley & Sons, Ltd.
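Double exponential smoothing is simple enough to show in full. Below is a sketch using Holt's two-equation form; the smoothing constants and the inter-failure data are assumed for illustration, not taken from the paper.

    # Holt's double exponential smoothing of inter-failure times; the most recent
    # observations carry the most weight in the one-step-ahead forecast.
    def holt_forecast(series, alpha=0.3, beta=0.1):
        level, trend = series[0], series[1] - series[0]
        for x in series[1:]:
            prev_level = level
            level = alpha * x + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + trend                 # predicted next inter-failure time

    inter_failure_hours = [12, 15, 21, 19, 30, 34, 41, 52]   # made-up data
    print(f"predicted time to next failure: {holt_forecast(inter_failure_hours):.1f} h")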

11.
A methodology is proposed which can be used to design real-time expert systems for on-line process disturbance management. This methodology encompasses the diverse functional aspects required for effective process disturbance management: (1) intelligent process monitoring and alarming, (2) on-line sensor data validation and sensor failure diagnosis, (3) on-line diagnosis of hardware failures (beyond sensors), and (4) real-time corrective measure synthesis. These functions are accomplished through the integrated application of several models: goal-tree success-tree, process monitor tree, sensor failure diagnosis, and hardware failure diagnosis models. This paper presents and discusses these models along with the overall algorithm of the methodology. The application of the methodology to a target process, a typical main feedwater system of a nuclear power plant employing a complex control mechanism, will be presented in a companion paper.

12.
Software reliability growth models based on nonhomogeneous Poisson processes are widely adopted tools for describing the stochastic failure behavior and measuring the reliability growth of software systems. The faults in such systems, which eventually cause the failures, are usually connected with each other in complicated ways. Considering a group of networked faults, we propose a new model to examine the reliability of software systems and assess the model's performance on real-world data sets. Our numerical studies show that the new model, which captures networking effects among faults, fits the failure data well. We also formally study the optimal software release policy using multi-attribute utility theory (MAUT), considering both a reliability attribute and a cost attribute. We find that if the software testing team ignores the networking effects among different layers of faults, the utility-maximizing time to release the software package to the market comes out much later than when those effects are modeled. A sensitivity analysis is also provided.
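For flavor, here is the release-time trade-off on a basic Goel-Okumoto NHPP; the paper's networked-fault model and MAUT formulation are richer, and all parameters below are invented.

    import math

    # Basic NHPP mean value function m(t) = a(1 - exp(-bt)) and a crude
    # cost-based stand-in for the paper's multi-attribute utility.
    a, b = 100.0, 0.05                    # assumed total faults and detection rate
    def expected_found(t):
        return a * (1 - math.exp(-b * t))
    def utility(t, cost_per_day=1.0, cost_per_escaped_fault=50.0):
        escaped = a - expected_found(t)   # faults remaining at release
        return -(cost_per_day * t + cost_per_escaped_fault * escaped)
    best_day = max(range(1, 366), key=utility)
    print(f"release on day {best_day}")   # about day 110 with these parameters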

13.
The developers of safety-critical instrumentation and control systems must qualify the design of the components used, including the software in the embedded computer systems, in order to ensure that each component can be trusted to perform its safety function under the full range of operating conditions. There are well-known ways to qualify analog systems, relying on the facts that: (1) they are built from standard modules with known properties; (2) design documents are available and written in a well-understood language; (3) the performance of the component is constrained by physics; and (4) physics models exist to predict that performance. These properties are not generally available for qualifying software, and one must fall back on extensive testing and on qualification of the design process; neither of these is completely satisfactory. The research reported here explores an alternative approach intended to permit qualification for an important subset of instrumentation software. The research goal is to determine whether a combination of static analysis and limited testing can be used to qualify a class of simple but practical computer-based instrumentation components for safety applications; these components are of roughly the complexity of a motion detector alarm controller. This goal is accomplished by identifying design constraints that enable meaningful analysis and testing. Once such design constraints are identified, digital systems can be designed to allow for analysis and testing, or existing systems may be tested for conformance to the design constraints as a first step in a qualification process. This will considerably reduce the cost and monetary risk involved in qualifying commercial components for safety-critical service.

14.
Hardware-software co-design systems abound in diverse modern application areas such as automobile control, telecommunications, big data processing, and cloud computing. Existing work on reliability modeling of co-design systems has mostly assumed that the hardware and software subsystems behave independently of each other, yet in practice these two subsystems may interact significantly. In this paper, an analytical approach based on paths and integrals is proposed to analyze the reliability of nonrepairable hardware-software co-design systems, considering interactions between hardware and software during the system's performance degradation and failure process. The proposed approach is verified against the Markov-based method. As demonstrated by case studies on systems without and with warm standby sparing, the approach is applicable to arbitrary types of time-to-failure or degradation distributions. The effects of different transition and fault detection/recovery parameters on system performance are also investigated through examples.
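A toy Monte Carlo showing the kind of hardware-software interaction the paper models (our construction, not the paths-and-integrals method itself): the software hazard rate doubles once the hardware has degraded, so the two subsystems are not independent. All rates and the mission time are assumed.

    import random

    # Dependent HW/SW failure behavior via a state-dependent software hazard.
    random.seed(1)
    HW_DEGRADE_MTTF = 5000.0      # assumed mean time to hardware degradation [h]
    SW_MTTF_HEALTHY = 8000.0      # assumed software MTTF while hardware is healthy

    def system_life():
        t_hw = random.expovariate(1 / HW_DEGRADE_MTTF)
        t_sw = random.expovariate(1 / SW_MTTF_HEALTHY)
        if t_sw <= t_hw:
            return t_sw           # software fails before hardware degrades
        # after degradation the software hazard doubles (memoryless resample):
        return t_hw + random.expovariate(2 / SW_MTTF_HEALTHY)

    lives = [system_life() for _ in range(100_000)]
    mission = 4000.0
    print(f"R({mission:.0f} h) ~ {sum(t > mission for t in lives) / len(lives):.3f}")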

15.
This paper describes the underlying theory and a practical process for establishing time-dependent reliability models for components in a realistic and complex flood defence system. Though time-dependent reliability models have been applied frequently in the offshore, structural safety and nuclear industries, for example, application in the safety-critical field of flood defence has to date been limited. The modelling methodology involves identifying relevant variables and processes, characterising those processes in appropriate mathematical terms, numerical implementation, parameter estimation and prediction. A combination of stochastic, hierarchical and parametric processes is employed. The approach is demonstrated for selected deterioration mechanisms in the context of a flood defence system. The paper demonstrates that this structured methodology enables the definition of credible statistical models for the time-dependence of flood defences in data-scarce situations. In applying those models, one of the main findings is that the time variability in the deterioration process tends to be governed by the time-dependence of one or a small number of critical attributes. It is demonstrated how the need for further data collection depends upon the relevance of the time-dependence to the performance of the flood defence system.
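One way to make such a model concrete is a gamma-process deterioration sketch with failure on crossing a resistance threshold. The process choice and every parameter below are assumptions for illustration, not the paper's calibrated models.

    import random

    # Gamma-process deterioration of a flood defence attribute; failure occurs
    # when cumulative deterioration exceeds an assumed resistance threshold.
    random.seed(2)
    def life_years(threshold=60.0, shape=0.5, scale=2.0, horizon=300):
        level = 0.0
        for year in range(1, horizon + 1):
            level += random.gammavariate(shape, scale)   # one year's increment
            if level >= threshold:
                return year
        return horizon
    lives = [life_years() for _ in range(10_000)]
    print(f"P(survives 50 years) ~ {sum(t > 50 for t in lives) / len(lives):.3f}")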

16.
Computer-based safety-related control and instrumentation (C&I) systems are being employed in Indian Nuclear Power Plants (NPPs). These systems are designed around a standardized family of microcomputer-based circuit modules, which are qualified to the stringent requirements of the nuclear industry. Reliability analysis of the standardized microcomputer circuit modules used in the safety-related C&I systems was carried out using an analysis package based on the methodology and database of MIL-HDBK-217F. The circuit modules are the main building blocks of the safety-related C&I systems in the forthcoming Indian NPPs. The article presents reliability analysis results for the microcomputer and related circuit modules and for a representative safety C&I system, the Programmable Digital Comparator System (PDCS). Reliability values are compared for the prototype PDCS, built with commercial-grade components, and the upgraded PDCS, built with MIL-grade or equivalently screened components. The estimated failure rates of the standardized microcomputer circuit modules will be useful for the reliability assessment of other safety-related C&I systems developed around these modules for ongoing and future Indian NPPs.
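The parts-count style of calculation such an analysis package implements is easy to sketch. The module composition and base failure rates below are placeholders, not MIL-HDBK-217F values.

    # Parts-count reliability prediction: sum the component failure rates,
    # scaled by an assumed environment factor (all numbers illustrative).
    parts = {                             # name: (failures per 1e6 h, quantity)
        "microprocessor": (0.8, 1),
        "sram":           (0.5, 2),
        "eprom":          (0.4, 1),
        "opto_isolator":  (0.3, 8),
    }
    pi_e = 1.0                            # assumed environment factor
    lam = sum(rate * qty for rate, qty in parts.values()) * pi_e
    print(f"module failure rate: {lam:.2f} per 1e6 h, MTBF ~ {1e6 / lam:,.0f} h")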

17.
In this paper we present three models for the behavior of software failures. By applying these models, an attempt has been made to predict reliability growth by predicting failure rates and the mean time to next failure of software with Weibull inter-failure times at different stages. The changes in the performance of the software resulting from error removal are described by a Bayes empirical-Bayes prediction in Model I. Model II considers a fully Bayesian analysis with non-informative priors on the Weibull parameters; an approximation due to Lindley is used in this model because the expressions do not appear in closed form. The maximum likelihood (ML) approach is used in Model III. Finally, we apply the three models to actual failure data and compare their predictive performance, also in terms of the ratio of likelihoods of the observed values based on their predictive distributions.

Among these three models, Model I appears to be the most reasonable, as it shows higher reliability growth at all stages. This model may be useful for measuring the current reliability at any particular stage of the testing process, viewed as a measure of software quality.
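As a sketch of the Model III idea, here is a maximum likelihood fit of Weibull inter-failure times; the data and the simple damped iteration are ours, not the paper's.

    import math

    # Weibull ML fit: solve the profile score equation for the shape k, then
    # recover the scale; data are made-up inter-failure times.
    data = [9.0, 12.0, 11.0, 21.0, 32.0, 28.0, 41.0, 57.0]
    def weibull_mle(x, iters=200):
        k = 1.0
        for _ in range(iters):            # damped fixed-point iteration on k
            s0 = sum(v**k for v in x)
            s1 = sum(v**k * math.log(v) for v in x)
            score = 1 / k + sum(math.log(v) for v in x) / len(x) - s1 / s0
            k += 0.5 * score
        scale = (sum(v**k for v in x) / len(x)) ** (1 / k)
        return k, scale
    k, scale = weibull_mle(data)
    mttf = scale * math.gamma(1 + 1 / k)  # mean time to next failure
    print(f"shape {k:.2f}, scale {scale:.1f}, predicted MTTF {mttf:.1f}")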


18.
Over the years, several tools have been developed to estimate the reliability of hardware and software components; typically, such tools address either hardware or software, but not both. This paper presents the Software Tool for Reliability Estimation (STORE), which can be used for systems containing hardware and/or software components. For software components, exponential, Weibull, gamma, power, geometric, and inverse-linear models were implemented, and goodness-of-fit statistics are provided for each model so that the user can select the most appropriate one for a given system configuration and failure data. The STORE program can analyze series, parallel, and complex systems; a tieset and cutset algorithm is used to determine the reliability of a complex system. The paper presents several examples to demonstrate the tool.
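The cutset step of such a tool can be sketched directly: system unreliability from minimal cut sets by inclusion-exclusion. The component probabilities and cut sets below are invented, and independent component failures are assumed.

    from itertools import combinations

    # Unreliability of a complex system from its minimal cut sets,
    # assuming independent component failures (all values illustrative).
    p = {"A": 0.01, "B": 0.02, "C": 0.05}
    cutsets = [{"A", "B"}, {"A", "C"}]            # hypothetical minimal cut sets

    def union_probability(cuts):
        total = 0.0
        for r in range(1, len(cuts) + 1):         # inclusion-exclusion terms
            for combo in combinations(cuts, r):
                term = 1.0
                for comp in set().union(*combo):
                    term *= p[comp]
                total += (-1) ** (r + 1) * term
        return total

    print(f"system unreliability: {union_probability(cutsets):.2e}")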

19.
Approaches to software failure probability estimation are mainly based on the results of testing, where test cases represent the inputs encountered in actual use. For a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant, the relevant test inputs are those that cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter, and the input profile must be determined with these characteristics in mind for effective software failure probability quantification. Another important characteristic of software testing is that a test need not be repeated for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means of quantifying the software failure probability based on the input profile and system dynamics.
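The statistics behind such quantification can be illustrated with the standard zero-failure bound: after n failure-free tests drawn from the operational input profile, the per-demand failure probability is bounded as below. The test count is invented for the example.

    # Exact binomial upper bound on the failure probability after n
    # failure-free tests sampled from the input profile.
    n = 10_000                      # assumed failure-free test cases
    confidence = 0.95
    p_upper = 1 - (1 - confidence) ** (1 / n)
    print(f"p < {p_upper:.2e} per demand at {confidence:.0%} confidence")  # ~ 3/n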

20.
In this paper, a quantitative methodology for safety-critical microprocessor applications is proposed, and some important aspects that must be considered in safety analysis work are discussed. We show how to evaluate the dangerous detectable and undetectable system failure rates of a single microprocessor board and the mean time to unsafe failure (MTTUF) of a critical system. The methodology is then applied to a practical system which employs a triple modular redundancy (TMR) architecture. The results obtained are highly relevant, especially regarding the impact of the computational blocks on the final safety integrity level (SIL) of a critical system. We also consider how software can influence the evaluation of the fault coverage factor, another important aspect of safety analysis work.
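The TMR part of the analysis rests on standard formulas that are easy to sketch. The board failure rate below is a placeholder, and the integral gives a plain mean life of the voted system rather than the paper's MTTUF, which additionally separates safe from unsafe failures.

    import math

    # 2-out-of-3 majority-vote reliability with a perfect voter, and a numeric
    # integral of R(t) as a crude mean life of the voted system.
    lam = 1e-5                                     # assumed board failure rate [1/h]
    def r_tmr(t):
        r = math.exp(-lam * t)                     # single-board reliability
        return 3 * r**2 - 2 * r**3                 # any 2 of 3 boards working
    mean_life = sum(r_tmr(t) * 100.0 for t in range(0, 2_000_000, 100))
    print(f"R(1 yr) = {r_tmr(8760):.6f}, mean life ~ {mean_life:,.0f} h")  # 5/(6*lam)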
