Similar Documents
20 similar documents found.
1.
Phased missions consist of consecutive operational phases where the system logic and failure parameters can change between phases. A component can have different roles in different phases, and the reliability function may have discontinuities at phase boundaries. An earlier method required NOT-gates and negations of events when calculating importance measures for such missions with non-repairable components. This paper suggests an exact method that uses standard fault tree techniques and Boolean algebra without any NOT-gates or negations. The criticalities and other importance measures can be obtained for events and components relevant to a single phase, to a transition between phases, or over the whole mission. The method and importance measures are extended to phased missions with repairable components. Quantification of the reliability, availability, failure intensity and total number of failures is described. New importance indicators defined for repairable systems measure component contributions to the total integrated unavailability, to the mission failure intensity and to the total number of mission failures.

2.
Aggregation is a way to reduce the size of Markovian reliability and availability problems. Since exact aggregation is only possible in exceptional cases, we introduce a canonical partition of the Markovian states for systems with 2-state components that allows a systematic study of approximate aggregation. Restriction and prolongation operators are introduced to define aggregation. Four methods are proposed to define approximate systems of birth-and-death type. Examples are given for k-out-of-4 systems and for a 7-component system.
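As a toy illustration of this kind of aggregation (a sketch under assumed rates, not the paper's construction), the snippet below collapses the 2^n states of n identical, independent 2-state components into a birth-and-death chain indexed by the number of failed components and reads off the steady-state availability of a k-out-of-4 system.

```python
import numpy as np

# Toy sketch: aggregate the 2**n component states of n identical, independent
# 2-state components (failure rate lam, repair rate mu) into a birth-and-death
# chain whose state j counts the failed components, then return the
# steady-state availability of a k-out-of-n system. Rates are illustrative.
def k_out_of_n_availability(n=4, k=2, lam=0.01, mu=0.1):
    pi = np.zeros(n + 1)
    pi[0] = 1.0
    # Detailed balance: birth rate in state j-1 is (n - j + 1) * lam,
    # death rate in state j is j * mu.
    for j in range(1, n + 1):
        pi[j] = pi[j - 1] * (n - j + 1) * lam / (j * mu)
    pi /= pi.sum()
    # The system is up while at most n - k components are failed.
    return pi[: n - k + 1].sum()

print(k_out_of_n_availability())   # steady-state availability of a 2-out-of-4 system
```

For identical, independent components this lumping happens to be exact; the methods proposed in the paper target the general case, where it is only approximate.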

3.
A typical flexible manufacturing system, Westland Helicopters' sheet metal detail manufacturing complex, has been analysed for reliability. The techniques of fault tree analysis and event tree analysis are presented and their applicability to this study investigated. Event tree analysis has been found to be the more effective method for analysing manufacturing systems. The failure states of the system have been identified from the construction of an event tree which considers the random hardware faults that influence production. Failure rate data have been used to quantify the critical production failure states in terms of machine failures. Estimates are made of the system's MTTF and percentage availability using typical MTTR figures. The probability that a selected production route fails to complete the manufacture of a set of parts is also evaluated. A dependency of system reliability on production demand has been identified, and a possible method for modelling and assessing the reliability of systems capable of producing several products is proposed.
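As a back-of-the-envelope companion to the quantification step (with invented machine names, failure rates and MTTR, not Westland's data), the critical production failure states can be treated as a series combination of machine failures to estimate the MTTF and percentage availability.

```python
# Back-of-the-envelope sketch with invented figures (not Westland's data):
# treat the critical production failure states as a series combination of
# machine failure rates, estimate MTTF as the reciprocal of the summed rates,
# and convert to a percentage availability using a typical MTTR.
failure_rates = {            # failures per hour, illustrative only
    "guillotine": 1.0e-4,
    "cnc_router": 2.0e-4,
    "press_brake": 1.5e-4,
    "handling_robot": 3.0e-4,
}
mttr_hours = 8.0             # assumed typical mean time to repair

total_rate = sum(failure_rates.values())
mttf_hours = 1.0 / total_rate
availability = 100.0 * mttf_hours / (mttf_hours + mttr_hours)
print(f"MTTF ≈ {mttf_hours:.0f} h, availability ≈ {availability:.2f}%")
```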

4.
In a system subject to both repairable and catastrophic (i.e., nonrepairable) failures, ‘mission success’ can be defined as operating for a specified time without a catastrophic failure. We examine the effect of a burn-in process of duration τ on the mission time x, and also on the probability of mission success, by introducing several functions and surfaces on the (τ,x)-plane whose extrema represent suitable choices for the best burn-in time, and the best burn-in time for a desired mission time. The corresponding curvature functions and surfaces provide information about probabilities and expectations related to these burn-in and mission times. Theoretical considerations are illustrated with both parametric and, separating the failures by failure mode, nonparametric analyses of a data set, and graphical visualization of results.
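To make the (τ,x)-plane idea concrete, here is a minimal parametric sketch with an assumed Weibull catastrophic-failure mode (the parameters are invented, not estimated from the paper's data set): the conditional probability of completing a mission of length x after surviving a burn-in of length τ is evaluated on a grid of burn-in times.

```python
import numpy as np

# Minimal parametric sketch with invented Weibull parameters (not the paper's
# data): probability that a unit which survived a burn-in of length tau also
# survives a mission of length x without a catastrophic failure.
def mission_success_prob(tau, x, shape=0.7, scale=1000.0):
    survival = lambda t: np.exp(-(t / scale) ** shape)
    return survival(tau + x) / survival(tau)

taus = np.linspace(0.0, 300.0, 301)
probs = mission_success_prob(taus, x=500.0)
best_tau = taus[np.argmax(probs)]      # burn-in duration maximising success for x = 500
print(best_tau, probs.max())
```

With a decreasing hazard (shape below 1), a longer burn-in keeps improving the conditional success probability, so in practice cost or schedule constraints also bound the choice of τ.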

5.
Predictive maintenance (PdM) is an effective means to eliminate potential failures, ensure stable equipment operation and improve the mission reliability of manufacturing systems and the quality of products, which is a premise of intelligent manufacturing. Therefore, an integrated PdM strategy considering product quality level and mission reliability state is proposed in line with the intelligent manufacturing philosophy of ‘prediction and manufacturing’. First, the key process variables are identified and integrated into the evaluation of the equipment degradation state. Second, a quality deviation index is defined to describe product quality quantitatively according to the joint effect of manufacturing system component reliability and product quality in the quality–reliability chain. Third, to accommodate changing production task demands, mission reliability is defined to characterise the equipment production states comprehensively. The optimal integrated PdM strategy, which combines quality control and mission reliability analysis, is obtained by minimising the total cost. Finally, a case study on decision-making with the integrated PdM strategy for a cylinder head manufacturing system is presented to validate the effectiveness of the proposed method. The final results show that the proposed method achieves cost improvements of approximately 26.02% and 20.54% over periodic preventive maintenance and conventional condition-based maintenance, respectively.

6.
Reliability, availability and maintainability (RAM) analysis of a system helps identify the design modifications, if any, required to minimize failures or to increase the mean time between failures (MTBF), and thus to plan maintainability requirements, optimize reliability and maximize equipment availability. To this effect, the paper presents the application of RAM analysis in a process industry. A Markovian approach is used to model the system behavior. Transition diagrams for the various subsystems are drawn and the differential equations associated with them are formulated. After obtaining the steady-state solution, the corresponding values of reliability and maintainability are estimated at different mission times. The computed results were presented to plant personnel and proved helpful for analyzing the system behavior and thereby improving system performance considerably by adopting suitable maintenance policies and strategies.
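As a minimal sketch of the Markovian machinery (a single repairable subsystem with assumed failure and repair rates, not the plant's actual transition diagrams), the code below integrates the Chapman-Kolmogorov differential equations and compares the transient availability with its steady-state value.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch, not the plant model from the paper: one repairable subsystem
# with assumed failure rate lam and repair rate mu. State 0 = up, state 1 = down.
lam, mu = 0.02, 0.5                      # illustrative rates per hour
Q = np.array([[-lam, lam],
              [mu, -mu]])                # Markov generator matrix

def kolmogorov(t, p):
    return p @ Q                         # dp/dt = p Q

sol = solve_ivp(kolmogorov, (0.0, 200.0), [1.0, 0.0],
                t_eval=np.linspace(0.0, 200.0, 5))
print(sol.y[0])                          # transient availability A(t)
print(mu / (lam + mu))                   # steady-state availability
```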

7.
It is well known that for complex repairable systems (with as few as four components), regardless of the time-to-failure (TTF) distribution of each component, the time between failures (TBF) tends toward the exponential. This is a long-term or ‘steady-state’ property. Aware of this property, many of those modeling such systems base spares provisioning, maintenance personnel availability and other decisions on an exponential TBF distribution. Such a policy may have serious drawbacks. A non-homogeneous Poisson process (NHPP) accounts for these intervals for some time prior to ‘steady state’. Using computer simulation, the nature of transient TBF behavior is examined. Of particular interest is the number of system failures after which the exponential TBF assumption becomes valid. We show, using a number of system configurations and failure and repair distributions, that the transient behavior quickly drives the TBF distribution to the exponential. We are comfortable treating the TBF as exponential after 30 system failures. This number may be smaller for configurations with more components; for now, however, we recommend 30 system failures as the threshold for using the exponential assumption. Copyright © 2002 John Wiley & Sons, Ltd.
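The transient-versus-steady-state point can be reproduced in miniature with a simulation along these lines (an invented 4-component series configuration with Weibull lifetimes, not one of the authors' experiments): after discarding the first 30 system failures, the coefficient of variation of the times between failures is compared with the value of 1 expected for an exponential distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented 4-component series system, not the authors' configurations:
# Weibull times to failure, fixed repair time, repaired component is as good as new.
def simulate_tbf(n_failures=2000, n_comp=4, shape=2.0, scale=100.0, repair=5.0):
    next_fail = rng.weibull(shape, n_comp) * scale   # each component's next failure time
    tbfs, last = [], 0.0
    for _ in range(n_failures):
        idx = int(np.argmin(next_fail))              # earliest component failure fails the series system
        t = next_fail[idx]
        tbfs.append(t - last)
        last = t
        next_fail[idx] = t + repair + rng.weibull(shape) * scale   # repair, then a fresh lifetime
    return np.array(tbfs)

tbf = simulate_tbf()[30:]            # drop the transient: the first 30 system failures
print(tbf.std() / tbf.mean())        # a coefficient of variation near 1 is consistent with an exponential TBF
```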

8.
In this paper, we introduce a new methodology for reasoning about the functional failures during early design of complex systems. The proposed approach is based on the notion that a failure happens when a functional element in the system does not perform its intended task. Accordingly, a functional criticality is defined depending on the role of functionality in accomplishing designed tasks. A simulation-based failure analysis tool is then used to analyze functional failures and reason about their impact on overall system functionality. The analysis results are then integrated into an early stage system architecture analysis framework that analyzes the impact of functional failures and their propagation to guide system-level architectural design decisions. With this method, a multitude of failure scenarios can be quickly analyzed to determine the effects of architectural design decisions on overall system functionality. Using this framework, design teams can systematically explore risks and vulnerabilities during the early (functional design) stage of system development prior to the selection of specific components. Application of the presented method to the design of a representative aerospace electrical power system (EPS) testbed demonstrates these capabilities.

9.
Based on an analysis of its system characteristics and mission process, a space tracking, telemetry and command (TT&C) system can be viewed as a phased-mission system (PMS). A general methodology using discrete event system simulation is proposed to quantitatively assess the mission reliability of a space TT&C system, because traditional methods struggle to solve such a complex problem. By dividing the time sequence of the TT&C mission profile into several phases, a fault tree model of the PMS is built to represent the system logical structure in each phase. In order to build simulation models efficiently, a Unified Modeling Language static class diagram is used to describe the simulation model architecture. Extensible Markup Language is adopted to represent the mission reliability model in a standard format for simulation input. By randomly generating the failure and repair events of the system components, the changes of the system state are simulated. The logic structure function of the fault tree and the observed data on system state changes jointly determine the mission reliability. A case study is given to illustrate the approach and validate its effectiveness. Copyright © 2013 John Wiley & Sons, Ltd.
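A stripped-down Monte Carlo version of the idea (a generic two-phase mission with assumed exponential lifetimes and simple series/parallel phase logic, not the paper's TT&C fault-tree models or its UML/XML tooling) looks like this:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stripped-down sketch, not the paper's TT&C models: a two-phase mission with
# three non-repairable components A, B, C and assumed exponential lifetimes.
# Phase 1 (0..t1) needs A AND B; phase 2 (t1..t2) needs A AND (B OR C).
def mission_reliability(n_runs=100_000, rates=(1e-3, 2e-3, 1e-3), t1=100.0, t2=250.0):
    ttf = rng.exponential(1.0 / np.asarray(rates), size=(n_runs, 3))  # lifetimes of A, B, C
    a, b, c = ttf[:, 0], ttf[:, 1], ttf[:, 2]
    phase1_ok = (a > t1) & (b > t1)                  # A and B survive all of phase 1
    phase2_ok = (a > t2) & ((b > t2) | (c > t2))     # A and (B or C) survive to the end of the mission
    return float(np.mean(phase1_ok & phase2_ok))

print(mission_reliability())
```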

10.
A common scenario in engineering is that of a system which operates throughout several sequential and distinct periods of time, during which the modes and consequences of failure differ from one another. This type of operation is known as a phased mission, and for the mission to be a success the system must successfully operate throughout all of the phases. Examples include a rocket launch and an aeroplane flight. Component or sub-system failures may occur at any time during the mission, yet not affect the system performance until the phase in which their condition is critical. This may mean that the transition from one phase to the next is a critical event that leads to phase and mission failure, with the root cause being a component failure in a previous phase. A series of phased missions with no maintenance may be considered as a maintenance-free operating period (MFOP). This paper describes the use of a Petri net (PN) to model the reliability of the MFOP and phased missions scenario. The model uses Monte-Carlo simulation to obtain its results, and due to the modelling power of PNs, can consider complexities such as component failure rate interdependencies and mission abandonment. The model operates three different types of PN which interact to provide the overall system reliability modelling. The model is demonstrated and validated by considering two simple examples that can be solved analytically.

11.
The concept of degraded availability is introduced, and the required definitions and assumptions are presented. Appropriate metrics are formulated for the quantification of degraded availability at function, mission and system level. This degraded availability model is an extension to the model for availability of multifunctional systems (Sols, A., Availability of continuously operated, coherent, multifunctional systems. Master's thesis, Virginia Polytechnic Institute and State University, 1992).

12.
Availability is one of the metrics often used in the evaluation of system effectiveness. Its use as an effectiveness metric is often dictated by the nature of the system under consideration. While some systems operate continuously, many others operate on an intermittent basis where each operational period may often involve a different set of missions. This is the most likely scenario for complex multi-functional systems, where each specific system mission may require the availability of a different combination of system elements. Similarly, for these systems, not only is it important to know whether a mission can be initiated, it is just as important to know whether the system is capable of completing such a mission. Thus, for these systems, additional measures become relevant to provide a more holistic assessment of system effectiveness. This paper presents techniques for the evaluation of both full and degraded mission reliability and mission dependability for coherent, intermittently operated multi-functional systems. These metrics complement previously developed availability and degraded availability measures of multi-functional systems, in the comprehensive assessment of system effectiveness.

13.
The results from reliability modeling and analysis are key contributors to design and tuning activities for computer-based systems. Each architecture style, however, poses different challenges for which analytical approaches must be developed or modified. The challenge we address in this paper is the reliability analysis of hierarchical computer-based systems (HS) with common-cause failures (CCF). The dependencies among components introduced by CCF complicate the reliability analysis of HS, especially when components affected by a common cause exist on different hierarchical levels. We propose an efficient decomposition and aggregation (EDA) approach for incorporating CCF into the reliability evaluation of HS. Our approach is to decompose an original HS reliability analysis problem with CCF into a number of reduced reliability problems freed from the CCF concerns. The approach is represented in a dynamic fault tree by a proposed CCF gate modeled after the functional dependency gate. We present the basics of the EDA approach by working through a hypothetical analysis of an HS subject to CCF and show how it can be extended to the analysis of a hierarchical phased-mission system subject to different CCF depending on the mission phase.
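The flavour of conditioning away the common cause can be shown with a generic beta-factor example (a textbook CCF model, not the paper's EDA/CCF-gate construction): the unreliability of a 1-out-of-2 redundant pair is obtained by conditioning on the common-cause event, which leaves a reduced subproblem free of the CCF dependency.

```python
import numpy as np

# Generic beta-factor CCF illustration (not the paper's EDA/CCF-gate approach):
# a fraction beta of the total failure rate lam is common cause; conditioning on
# the common-cause event leaves two independent units.
def pair_unreliability(t=1000.0, lam=1e-4, beta=0.1):
    q_cc = 1.0 - np.exp(-beta * lam * t)             # common-cause failure takes out both units
    q_ind = 1.0 - np.exp(-(1.0 - beta) * lam * t)    # independent failure of a single unit
    return q_cc + (1.0 - q_cc) * q_ind ** 2          # 1-out-of-2 pair fails if both units are down

print(pair_unreliability())
```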

14.
The Rainbow net simulation technique is applied to modelling the impact of system load and fault handling on the availability of a fault-tolerant multiprocessor architecture. Rainbow nets are described along with the motivation for creating this modelling technique. A Rainbow net fault-handling model is created for the fault-tolerant multiprocessor architecture, and the topology is shown to remain constant in size, independent of the number of processor, memory and I/O elements configured in the system. Simulation is performed with a varying load in terms of the number of active jobs the system must support. Results are given showing how the fault-tolerant capability varies with load. Two new metrics for evaluating fault tolerance are introduced, namely full fault-tolerability and partial fault-tolerability. They are based on simple observations in the model.

15.
Power generators are critical assets in wastewater treatment plants (WWTPs) in Australia and many other countries. Better managing asset lifetime, minimising failures, improving reliability and availability, and reducing the operating and maintenance costs of power generation assets remain challenging topics for water utilities. This case study develops reliability and availability models of a power generation system, considering redundancy, to minimise operation and maintenance costs. The two-parameter Weibull model was used to assess the reliability and availability of power generation engines in WWTPs. The Kaplan-Meier method (a time-driven estimation technique) and the log beta-Weibull model (which is suitable for modelling censored and uncensored data) were used to analyse and validate the modelling results. The shape and scale parameters of the Weibull models were estimated by maximising the log-likelihood function using non-linear optimisation. Hazard and reliability functions were calculated using the Weibull model. The results from the two-parameter Weibull, Kaplan-Meier and log beta-Weibull models all show low reliability and a high hazard rate over time, which was associated with spark plug failures caused by a suboptimal start-and-stop operation strategy.
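A minimal version of the estimation step (with invented, uncensored failure times rather than the WWTP records) fits the two-parameter Weibull by maximising the log-likelihood with a non-linear optimiser and then evaluates the reliability and hazard functions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch with invented, uncensored failure times in hours (not the WWTP
# data): fit a two-parameter Weibull by maximising the log-likelihood, then
# evaluate the reliability and hazard functions at 1000 h.
times = np.array([350.0, 820.0, 1100.0, 1480.0, 1900.0, 2300.0, 2750.0, 3100.0])

def neg_log_likelihood(theta):
    shape, scale = np.exp(theta)                     # log-parametrisation keeps both positive
    z = times / scale
    return -np.sum(np.log(shape / scale) + (shape - 1.0) * np.log(z) - z ** shape)

res = minimize(neg_log_likelihood, x0=np.log([1.0, times.mean()]), method="Nelder-Mead")
shape, scale = np.exp(res.x)

t = 1000.0
reliability = np.exp(-(t / scale) ** shape)                 # R(t)
hazard = (shape / scale) * (t / scale) ** (shape - 1.0)     # h(t)
print(shape, scale, reliability, hazard)
```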

16.
A mathematical model, based on stochastic branching processes, is proposed for predicting failures of systems designed to protect dangerous industrial facilities (damage control systems). A system of linear ordinary differential equations for determining the probabilistic reliability characteristics of such a technical system is formulated and solved using the apparatus of generating functions.

17.
Component failure probability estimates from the analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field, which resembles the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of the tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and impractical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage, and the component coverage of the test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of the covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user-defined structure. A case study illustrates an application in test case prioritization.
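The 'equivalent failures and tests' idea can be illustrated with a toy EM sketch (two components, two hypothetical test categories and invented counts; a simplification of the paper's bipartite coverage model): a passed test exonerates every covered component, while each failed test contributes fractional failures according to the current estimate of the probability that a given covered component failed in it.

```python
import numpy as np

# Toy EM sketch with invented categories and counts (a simplification of the
# paper's bipartite coverage model). Each test category covers a subset of
# components; a test fails iff at least one covered component fails, and the
# identity of the failed component(s) is masked.
categories = {
    # name: (covered component indices, number of tests, number of failed tests)
    "A": ((0,),   200, 12),
    "B": ((0, 1), 300, 35),
}
n_comp = 2
p = np.full(n_comp, 0.05)                 # initial guess of component failure probabilities

for _ in range(200):                      # EM iterations
    eq_failures = np.zeros(n_comp)        # expected ("equivalent") failures per component
    eq_tests = np.zeros(n_comp)           # tests in which each component was exercised
    for cover, n_tests, n_fail in categories.values():
        cover = np.array(cover)
        p_fail = 1.0 - np.prod(1.0 - p[cover])          # P(a test of this category fails)
        # E-step: a passed test means every covered component worked; a failed
        # test is attributed fractionally via P(component failed | test failed).
        eq_failures[cover] += n_fail * p[cover] / p_fail
        eq_tests[cover] += n_tests
    p = eq_failures / eq_tests            # M-step: closed-form Bernoulli MLE
print(p)                                  # estimated component failure probabilities
```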

18.
This paper presents the segregated failures model (SFM) of the availability of fault-tolerant computer systems with several recovery procedures. This model is compared with a Markov chain model and its advantages are explained. The basic model is then extended to the situation where the coverage factor is unknown and failure escalation rates must be used instead. A simple, practical analytical approach to availability evaluation is provided and illustrated in detail by estimating the availability of two versions of a reliable clustered computing architecture. For these examples, numeric values of the availability indexes are computed and the contribution of each recovery procedure to total system availability is analysed. Copyright © 2008 John Wiley & Sons, Ltd.

19.
An integrated code system, SECOM-2, developed at the Japan Atomic Energy Research Institute (JAERI), provides the following functions for systems reliability analysis in seismic probabilistic safety assessments (PSAs): (1) calculation of component failure probability, (2) extraction of minimal cut sets (MCSs) from a given fault tree (FT), (3) calculation of the frequencies of accident sequences and core damage, (4) importance analysis with several measures that take into account parameters unique to seismic PSAs, (5) sensitivity analysis, and (6) uncertainty analysis. This paper summarizes the special features of SECOM-2 for performing these analyses. At JAERI, using an integrated FT that represents seismically induced core damage due to all initiating events as the system model for calculating the core damage frequency of a nuclear power plant, SECOM-2 can calculate conditional point-estimate probabilities of system failures, losses of safety functions, and core damage as functions of the earthquake motion. The point estimate is computed by a method that gives an exact numerical solution using the Boolean arithmetic model method. To address the correlation of component failures, which has been an important issue in seismic PSAs, a new technique based on direct FT quantification by Monte Carlo simulation is being added to SECOM-2. With this technique, the core damage frequency can be calculated not only with the upper-bound approximation based on MCSs but also as a near-exact solution that takes into account the correlation among all components. This paper also presents the preliminary results of a seismic PSA of a generic BWR plant in Japan performed at JAERI to demonstrate the functions of the SECOM-2 code.

20.
Fault-tolerant multiple-phased systems (FTMPS) are systems whose critical components are independently replicated and whose operational life can be partitioned into a set of disjoint periods, called “phases”. Because they are deployed in critical applications, their mission reliability analysis is a task of primary relevance for validating the designs. This paper focuses on the reliability analysis of FTMPS with random phase durations, non-exponentially distributed repair activities and different repair policies. For self-repairable FTMPS with a component-level reconfiguration architecture, we derive several efficient formulations from the underlying structural characteristics for their intraphase behavior analysis. We also present a uniform solution framework for the mission reliability of FTMPS with generally distributed phase durations. Compared with existing methods based on deterministic and stochastic Petri nets or Markov regenerative stochastic Petri nets, our approach is simpler in concept and more powerful in computation. Two examples of FTMPS are analyzed to illustrate the advantages of our approach.
