Similar Documents
20 similar documents retrieved.
1.
The Bayesian network (BN) is an efficient tool for probabilistic modeling and causal inference, and it has gained considerable attention in the field of reliability assessment. Common cause failure (CCF) is the simultaneous failure of multiple elements in a system due to a shared cause, and it is a common phenomenon in engineering systems with dependent elements. Several models and methods have been proposed for modeling and assessing complex systems with CCF. In this paper, a new reliability assessment method is proposed for systems suffering from CCF in a dynamic environment. The CCF behavior among components is characterized by a BN, which allows for bidirectional reasoning. A proportional hazards model is applied to capture the dynamic working environment of the components, and the reliability function of the system is then obtained. The proposed method is validated through an illustrative example, and some comparative studies are also presented.
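As context for the proportional-hazards ingredient mentioned above, the sketch below shows how a component's reliability can be scaled by environmental covariates; the baseline Weibull hazard and the covariate values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def ph_reliability(t, beta, z, shape=1.5, scale=1000.0):
    """Reliability under a proportional hazards model.

    Baseline hazard is Weibull(shape, scale); covariates z scale the
    cumulative hazard by exp(beta . z).  All numbers are illustrative.
    """
    h0_cum = (t / scale) ** shape          # baseline cumulative hazard
    return np.exp(-h0_cum * np.exp(np.dot(beta, z)))

# Example: one environmental covariate; a harsher environment (z=1) lowers reliability.
beta = np.array([0.8])
print(ph_reliability(500.0, beta, np.array([0.0])))  # nominal environment
print(ph_reliability(500.0, beta, np.array([1.0])))  # harsh environment
```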

2.
Rotor components of an aircraft engine in service are usually subjected to combined high and low cycle fatigue (CCF) loadings. In this work, combined with the CCF load spectrum, a modified damage accumulation model for CCF life prediction of turbine blades is first put forward to account for the effects of load sequence and load interaction caused by high-cycle fatigue (HCF) and low-cycle fatigue (LCF) loads under CCF loading conditions. The predicted results demonstrate that the proposed model achieves higher prediction accuracy than the Miner and Manson-Halford models. Moreover, to evaluate the fatigue reliability of rotor components, a reliability model for the CCF failure mode is proposed on the basis of the stress-strength interference method, taking strength degradation into account, and the results show that the reliability model with CCF is more suitable for aero-engine components than one based on a single fatigue failure mode.
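For reference, the classical Miner linear damage accumulation rule, against which the modified model is compared, can be sketched as below; the cycle counts and S-N lives are made-up illustrative values, not data from the paper.

```python
def miner_damage(blocks):
    """Linear (Miner) damage accumulation.

    blocks: iterable of (applied_cycles, cycles_to_failure) pairs, one per
    load level in the spectrum.  Failure is predicted when damage >= 1.
    """
    return sum(n / N for n, N in blocks)

# Illustrative combined spectrum: one LCF level and one HCF level per mission.
spectrum_per_mission = [(1, 2.0e4),      # LCF: 1 major cycle, life 20,000 cycles
                        (3.0e3, 1.0e7)]  # HCF: 3,000 vibratory cycles, life 1e7
d_per_mission = miner_damage(spectrum_per_mission)
print("damage per mission:", d_per_mission)
print("predicted missions to failure:", 1.0 / d_per_mission)
```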

3.
Functional dependence (FDEP) exists in many real-world systems, where the failure of one component (trigger) causes other components (dependent components) within the same system to become isolated (inaccessible or unusable). The FDEP behavior complicates the system reliability analysis because it can cause competing failure effects in the time domain. Existing works have assumed noncascading FDEP, where each system component can be a trigger or a dependent component, but not both. However, in practical systems with hierarchical configurations, cascading FDEP takes place where a system component can play a dual role as both a trigger and a dependent component simultaneously. Such a component causes correlations among different FDEP groups, further complicating the system reliability analysis. Moreover, the existing works mostly assume that any failure propagation originating from a system component instantaneously takes effect, which is often not true in practical scenarios. In this work, we propose a new combinatorial method for the reliability analysis of competing systems subject to cascading FDEP and random failure propagation time. The method is hierarchical and flexible without limitations on the type of time-to-failure distributions for system components. A detailed case study is performed on a sensor system used in smart home applications to illustrate the proposed methodology.

4.
With the increasing complexity of engineering systems, reliability analysis and evaluation with traditional methods cannot meet practical engineering requirements. Owing to limited experimental conditions, lack of data, complex structural models, insufficient knowledge, and many other issues, numerous uncertain factors have to be considered in system reliability research. In addition, common cause failure (CCF) has become an important contributor to system failure. In this paper, a discrete-time Bayesian network (DTBN) for an eight-rotor unmanned aerial vehicle (UAV) system is presented to address the above problems. In this approach, the system is assumed to be a two-state system. Interval analysis theory is then employed to deal with uncertainty. The four sets of auxiliary propellers in the auxiliary power group are treated as a 3-out-of-8 voting system, and the β-factor model is used to handle CCF in the auxiliary power group. The results demonstrate the validity of applying interval analysis theory to uncertain problems and show that it is necessary to reduce or avoid CCFs in the system.
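A minimal sketch of the β-factor treatment of CCF in a k-out-of-n voting group, assuming exponential component lifetimes; the failure rate, β value, and the 3-out-of-8 configuration are illustrative inputs, not the paper's data.

```python
import math

def k_out_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent components survive."""
    return sum(math.comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def voting_group_reliability(t, k, n, lam, beta):
    """Beta-factor model: each component fails independently at rate (1-beta)*lam,
    and a common cause at rate beta*lam fails the whole group at once."""
    r_indep = math.exp(-(1 - beta) * lam * t)
    r_ccf = math.exp(-beta * lam * t)          # no common-cause event by time t
    return r_ccf * k_out_of_n_reliability(k, n, r_indep)

# Illustrative 3-out-of-8 auxiliary propeller group.
print(voting_group_reliability(t=100.0, k=3, n=8, lam=1e-3, beta=0.0))   # no CCF
print(voting_group_reliability(t=100.0, k=3, n=8, lam=1e-3, beta=0.1))   # with CCF
```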

5.
The issue of information loss in the process of system reliability modeling through conventional load-strength interference analysis is discussed first, and the reason why it is impossible to construct a dependent-system reliability model simply from component reliability indices is demonstrated. Then, an approach to modeling the reliability of a dependent system with common cause failure (CCF) is presented. The approach is based on system-level load-strength interference analysis and on the concept of the 'conditional failure probability of a component'. Taking the view that load randomness is the direct cause of failure dependence, a discrete-type system reliability model is developed via the conditional component failure probability concept. Finally, the model's capability to estimate system reliability with CCF effects and to predict high-multiplicity failure probability from low-multiplicity failure event data is demonstrated.
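A minimal illustration of the load-strength interference idea behind the model: conditioning on a shared random load is what induces dependence between otherwise independent components, so the independence assumption underestimates joint failure. The distributions and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Shared random load (one draw per demand) and independent component strengths.
load = rng.normal(100.0, 20.0, size=n)     # common load L
s1 = rng.normal(150.0, 15.0, size=n)       # strength of component 1
s2 = rng.normal(150.0, 15.0, size=n)       # strength of component 2

f1, f2 = load > s1, load > s2
print("P(component fails)      :", f1.mean())
print("P(both fail), actual    :", (f1 & f2).mean())
print("P(both fail), if indep. :", f1.mean() * f2.mean())  # underestimates the CCF effect
```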

6.
Fault-tolerant multiple-phased systems (FTMPS) are defined as systems whose critical components are independently replicated and whose operational life can be partitioned into a set of disjoint periods, called “phases”. Because of their deployment in critical applications, their mission reliability analysis is a task of primary relevance to validate the designs. This paper is focused on the reliability analysis of FTMPS with random phase durations, non-exponentially distributed repair activities and different repair policies. For self-repairable FTMPS with a component-level reconfiguration architecture, we derive several efficient formulations from the underlying structural characteristics for their intraphase behavior analysis. We also present a uniform solution framework for the mission reliability of FTMPS with generally distributed phase durations. Compared with existing methods based on deterministic and stochastic Petri nets or Markov regenerative stochastic Petri nets, our approach is simpler in concept and more powerful in computation. Two examples of FTMPS are analyzed to illustrate the advantages of our approach.

7.
Life data from systems of components are often analysed to estimate the reliability of the individual components. These estimates are useful since they reflect the reliability of the components under actual operating conditions. However, owing to the cost or time involved with failure analysis, the exact component causing system failure may be unknown or ‘masked’. That is, the cause may only be isolated to some subset of the system's components. We present an iterative approach for obtaining component reliability estimates from such data for series systems. The approach is analogous to traditional probability plotting. That is, it involves the fitting of a parametric reliability function to a set of nonparametric reliability estimates (plotting points). We present a numerical example assuming Weibull component life distributions and a two-component series system. In this example we find estimates with only 4 per cent of the computation time required to find comparable MLEs.
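To illustrate the probability-plotting flavour of the approach, the sketch below fits a Weibull distribution to (unmasked) failure times by median-rank regression; the data are synthetic, and the paper's treatment of masked causes is not reproduced here.

```python
import numpy as np

def weibull_plot_fit(failure_times):
    """Median-rank regression: fit Weibull shape/scale from plotting points."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    ranks = np.arange(1, n + 1)
    f_hat = (ranks - 0.3) / (n + 0.4)         # Benard's median-rank estimate
    x = np.log(t)
    y = np.log(-np.log(1.0 - f_hat))          # Weibull probability-plot axes
    shape, intercept = np.polyfit(x, y, 1)    # slope = shape parameter
    scale = np.exp(-intercept / shape)
    return shape, scale

rng = np.random.default_rng(1)
data = 500.0 * rng.weibull(2.0, size=30)      # true shape 2.0, scale 500
print(weibull_plot_fit(data))                 # estimates should be close to (2.0, 500)
```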

8.
Common-cause failures (CCF) are one of the more critical and challenging issues for system reliability and risk analyses. Academic interest in modeling CCF, and more broadly in modeling dependent failures, has steadily grown over the years in the number of publications as well as in the sophistication of the analytical tools used. In the past few years, several influential articles have shed doubts on the relevance of redundancy arguing that “redundancy backfires” through common-cause failures, and that the latter dominate unreliability, thus defeating the purpose of redundancy. In this work, we take issue with some of the results of these publications. In their stead, we provide a nuanced perspective on the (contingent) value of redundancy subject to common-cause failures. First, we review the incremental reliability and MTTF provided by redundancy subject to common-cause failures. Second, we introduce the concept and develop the analytics of the “redundancy–relevance boundary”: we propose this redundancy–relevance boundary as a design-aid tool that provides an answer to the following question: what level of redundancy is relevant or advantageous given a varying prevalence of common-cause failures? We investigate the conditions under which different levels of redundancy provide an incremental MTTF over that of the single component in the face of common-cause failures. Recognizing that redundancy comes at a cost, we also conduct a cost–benefit analysis of redundancy subject to common-cause failures, and demonstrate how this analysis modifies the redundancy–relevance boundary. We show how the value of redundancy is contingent on the prevalence of common-cause failures, the redundancy level considered, and the monadic cost–benefit ratio. Finally we argue that general unqualified criticism of redundancy is misguided, and efforts are better spent for example on understanding and mitigating the potential sources of common-cause failures rather than deriding the concept of redundancy in system design.
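The kind of incremental-MTTF comparison discussed above can be sketched for the simplest case: a two-unit parallel (1-out-of-2) system under a β-factor CCF model with exponential lifetimes. The closed form below follows from that standard model; the rates and β values are illustrative and the paper's boundary analytics are not reproduced.

```python
def mttf_single(lam):
    return 1.0 / lam

def mttf_parallel_ccf(lam, beta):
    """1-out-of-2 parallel system, beta-factor CCF, exponential lifetimes.

    R(t) = exp(-beta*lam*t) * [2*exp(-(1-beta)*lam*t) - exp(-2*(1-beta)*lam*t)]
         = 2*exp(-lam*t) - exp(-(2-beta)*lam*t),
    so MTTF = 2/lam - 1/((2-beta)*lam).
    """
    return 2.0 / lam - 1.0 / ((2.0 - beta) * lam)

lam = 1e-3
for beta in (0.0, 0.05, 0.2, 1.0):
    gain = mttf_parallel_ccf(lam, beta) / mttf_single(lam)
    print(f"beta={beta:4.2f}  MTTF gain over single unit: {gain:.3f}x")
```

With β = 0 the familiar 1.5x gain is recovered, and with β = 1 redundancy adds nothing, which is the contingent behavior the abstract discusses.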

9.
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.

10.
Traditional fault tree (FT) analysis is widely used for reliability and safety assessment of complex and critical engineering systems. The behavior of components of complex systems and their interactions, such as sequence- and functional-dependent failures, spares and dynamic redundancy management, and priority of failure events, cannot be adequately captured by traditional FTs. Dynamic fault trees (DFT) extend traditional FTs by defining additional gates, called dynamic gates, to model these complex interactions. Markov models are used to solve dynamic gates. However, the state space becomes too large for calculation with Markov models when the number of gate inputs increases. In addition, Markov models are applicable only to exponential failure and repair distributions. Modeling test and maintenance information on spare components is also very difficult. To address these difficulties, a Monte Carlo simulation-based approach is used in this work to solve dynamic gates. The approach is first applied to a problem available in the literature that has non-repairable components. The obtained results are in good agreement with those in the literature. The approach is then applied to a simplified scheme of the electrical power supply system of a nuclear power plant (NPP), which is a complex repairable system with tested and maintained spares. The results obtained using this approach are in good agreement with those obtained using an analytical approach. In addition to point estimates of reliability measures, failure-time and repair-time distributions are also obtained from the simulation. Finally, a case study on the reactor regulation system (RRS) of an NPP is carried out to demonstrate the application of the simulation-based DFT approach to large-scale problems.
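A minimal Monte Carlo sketch of one dynamic gate (a cold spare gate with a primary unit and one cold spare), assuming exponential lifetimes, perfect switching, and non-repairable components; the rates and mission time are illustrative, and the plant models of the paper are not reproduced.

```python
import numpy as np

def csp_gate_unreliability(t_mission, lam_primary, lam_spare, n_samples=200_000, seed=2):
    """Cold spare (CSP) gate: the spare cannot fail while dormant and is
    switched in when the primary fails.  The gate fails once both the
    primary and the activated spare have failed."""
    rng = np.random.default_rng(seed)
    t_primary = rng.exponential(1.0 / lam_primary, n_samples)
    t_spare_active = rng.exponential(1.0 / lam_spare, n_samples)
    t_gate = t_primary + t_spare_active        # spare starts aging at switch-over
    return np.mean(t_gate <= t_mission)

print(csp_gate_unreliability(t_mission=1000.0, lam_primary=1e-3, lam_spare=1e-3))

# Analytical check for equal rates: F(t) = 1 - exp(-lam*t) * (1 + lam*t)
lam, t = 1e-3, 1000.0
print(1 - np.exp(-lam * t) * (1 + lam * t))
```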

11.
The problem addressed is how to combine event experience data from multiple source plants to estimate common cause failure (CCF) rates for a target plant. Alternative models are considered for transforming CCF parameters from systems with different numbers of similar components to obtain CCF rates for a specific group of components. Two sets of rules are reviewed and compared for transforming rates and assessment uncertainties from larger to smaller systems, i.e. mapping down. Mapping-down equations are also presented for the alpha factors and for the variances of CCF rates. Consistent rules are developed for mapping up CCF rates and uncertainties from smaller to larger systems. These mapping-up rules are not limited to a binomial CCF model. It is shown how consistency requirements set certain limits on possible parameter values. Empirical alpha factors are used to estimate robust mapping parameters, and mapping-up equations are derived for alpha factors as well. An assessment uncertainty procedure is presented for treating incomplete or vague information when estimating CCF rates. Numerical studies illustrate the mapping rules and procedures. Recommendations are made for practical applications.
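For orientation, the sketch below estimates empirical alpha factors from CCF event counts and applies one commonly cited mapping-down rule for basic-parameter CCF probabilities; the exact rules derived in the paper may differ, and the event counts and probabilities are invented.

```python
import numpy as np

def alpha_factors(event_counts):
    """Empirical alpha factors: alpha_k = n_k / sum_j n_j, where n_k is the
    number of events failing exactly k components in a group of size m."""
    n = np.asarray(event_counts, dtype=float)
    return n / n.sum()

def map_down(q):
    """One commonly used mapping-down step (assumption: q[k-1] holds Q_k^(m)):
        Q_k^(m-1) = ((m - k) / m) * Q_k^(m) + ((k + 1) / m) * Q_{k+1}^(m)
    """
    m = len(q)
    q = np.asarray(q, dtype=float)
    out = np.zeros(m - 1)
    for k in range(1, m):                      # k = 1 .. m-1
        out[k - 1] = (m - k) / m * q[k - 1] + (k + 1) / m * q[k]
    return out

print(alpha_factors([40, 6, 2, 1]))            # group of 4, illustrative event counts
print(map_down([1e-3, 2e-4, 5e-5, 1e-5]))      # Q_k^(4) mapped down to Q_k^(3)
```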

12.
Reliability improvement of CMOS VLSI circuits depends on a thorough understanding of the technology, failure mechanisms, and resulting failure modes involved. Failure analysis has identified open circuits, short circuits and MOSFET degradations as the prominent failure modes. Classical methods of fault simulation and test generation are based on the gate level stuck-at fault model. This model has proved inadequate to model all realistic CMOS failure modes. An approach, which will complement available VLSI design packages, to aid reliability improvement and assurance of CMOS VLSI is outlined. A ‘two-step’ methodology is adopted. Step one, described in this paper, involves accurate circuit level fault simulation of CMOS cells used in a hierarchical design process. The simulation is achieved using SPICE and pre-SPICE insertion of faults (PSIF). PSIF is an additional module to SPICE that has been developed and is outlined in detail. Failure modes effects analysis (FMEA) is executed on the SPICE results and FMEA tables are generated. The second step of the methodology uses the FMEA tables to produce a knowledge base. Step two is essential when reliability studies of larger and VLSI circuits are required and will be the subject of a future paper. The knowledge base has the potential to generate fault trees, fault simulate and fault diagnose automatically.

13.
This paper considers software systems consisting of fault-tolerant components. These components are built from functionally equivalent but independently developed versions characterized by different reliability and execution time. Because of hardware resource constraints, the number of versions that can run simultaneously is limited. The expected system execution time and its reliability (defined as the probability of obtaining the correct output within a specified time) strictly depend on the parameters of the software versions and the sequence of their execution. A system structure optimization problem is formulated in which one has to choose software versions for each component and find the sequence of their execution in order to achieve the greatest system reliability subject to cost constraints. The versions are to be chosen from a list of available products. Each version is characterized by its reliability, execution time and cost. The suggested optimization procedure is based on an algorithm for determining the system execution time distribution that uses the moment generating function approach, and on the genetic algorithm. Both N-version programming and the recovery block scheme are considered within a universal model. An illustrative example is presented.
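As a simplified illustration of the quantity being optimized, the sketch below computes, by direct enumeration, the probability that one component built from sequentially executed versions (recovery-block style) delivers a correct output within a deadline. The version parameters, the deterministic execution times, and the sequential policy are assumptions for illustration and do not reproduce the paper's MGF-based procedure or its genetic algorithm.

```python
def reliability_within_deadline(versions, deadline):
    """versions: list of (success_prob, exec_time), executed in the given order
    until one succeeds.  Returns P(correct output produced within deadline)."""
    prob, elapsed, fail_so_far = 0.0, 0.0, 1.0
    for p, t in versions:
        elapsed += t
        if elapsed > deadline:
            break
        prob += fail_so_far * p        # all previous failed, this one succeeds in time
        fail_so_far *= (1.0 - p)
    return prob

versions = [(0.90, 2.0), (0.85, 3.0), (0.95, 5.0)]    # (reliability, execution time)
print(reliability_within_deadline(versions, deadline=6.0))    # only first two fit
print(reliability_within_deadline(versions, deadline=12.0))   # all three fit
```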

14.
A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, and then demonstrating how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean functions provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis.

15.
Reliability of a system may differ greatly when operating under different environments. However, existing works have either neglected the environment factor in system reliability analysis or considered this factor only for binary systems or systems subject to a single environment (parameter). In this paper, we model a multi-state system operating under hybrid dynamic environments affected by multiple environmental parameters. Different Markov chains with finite states are used to represent the random system behavior and the dynamic environments, leading to an aggregated Markov process that models the overall system behavior. An effective approach based on state partitions and aggregations is suggested for assessing the system reliability indexes, including reliability, availability, multi-point availability, and environment-based reliability. A high-pressure homogenizer system is analyzed to demonstrate the proposed model and to compare the reliability of the system under fixed and dynamic environments.
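A minimal sketch of the aggregated-Markov idea: the joint process over (environment, system state) is built as a single generator matrix and its transient availability is obtained from the matrix exponential. The two-state environment, two-state system, and all rates are illustrative assumptions, not the homogenizer model of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Environment: 0 = mild, 1 = harsh, switching at rates nu01, nu10.
# System: 0 = up, 1 = down; failure rate depends on the environment, repair rate mu.
nu01, nu10 = 0.01, 0.05
lam = {0: 1e-3, 1: 5e-3}                 # environment-dependent failure rates
mu = 0.1

# Joint states ordered (env, sys): (0,0), (0,1), (1,0), (1,1).
Q = np.zeros((4, 4))
Q[0, 1], Q[0, 2] = lam[0], nu01          # fail in mild env; env turns harsh
Q[1, 0], Q[1, 3] = mu, nu01              # repair; env turns harsh while down
Q[2, 3], Q[2, 0] = lam[1], nu10          # fail in harsh env; env turns mild
Q[3, 2], Q[3, 1] = mu, nu10              # repair; env turns mild while down
np.fill_diagonal(Q, -Q.sum(axis=1))      # generator rows sum to zero

p0 = np.array([1.0, 0.0, 0.0, 0.0])      # start: mild environment, system up
pt = p0 @ expm(Q * 500.0)                # state probabilities at t = 500
print("availability at t=500:", pt[0] + pt[2])   # probability of being in an 'up' state
```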

16.
Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need not only to capture the complex dynamic behavior of the system components, but they must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems, and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.

17.
The paper considers grid computing systems in which the resource management system (RMS) can divide service tasks into execution blocks (EBs) and send these blocks to different resources. In order to provide a desired level of service reliability, the RMS can assign the same blocks to several independent resources for parallel execution. Data security is a crucial issue in distributed computing that affects the execution policy. By the optimal partition of the service task into EBs and their distribution among resources, one can achieve the greatest possible service reliability and/or expected performance subject to data security constraints. The paper suggests an algorithm for solving this optimization problem. The algorithm is based on the universal generating function technique and on the evolutionary optimization approach. Illustrative examples are presented.

18.
Gao H, Wang G, Yang M, Tan L, Yu J. Nanotechnology, 2012, 23(1): 015607.
A novel hierarchical Ni-Co hydroxide assembled from two-wheeled units was successfully synthesized via a simple hydrothermal method through the reaction of nickel salt, cobalt salt and sodium hydroxide, with a chelating agent (EDA) used to control the precipitation rate. The as-synthesized materials were characterized by x-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), Fourier transform infrared spectroscopy (FTIR), atomic absorption spectrophotometry (AAS) and thermogravimetric analysis (TG). The Ni(2+)/Co(2+) molar ratio R in the initial solution plays an important role in controlling the morphology of the hierarchical Ni-Co hydroxide. The influence of the EDA concentration, reaction temperature and NaOH concentration on the formation of the hierarchical Ni-Co hydroxide was also investigated. A formation mechanism for the hierarchical Ni-Co hydroxide assembled from two-wheeled units was proposed. A Ni-Co oxide with a similar structure was obtained by calcination of the as-prepared Ni-Co hydroxide.

19.
High reliability of railway power systems is one of the essential criteria to ensure the quality and cost-effectiveness of railway services. Evaluation of reliability at the system level is essential not only for scheduling maintenance activities, but also for identifying reliability-critical components. Various methods to compute reliability for individual components or regularly structured systems have been developed and proven to be effective. However, they are not adequate for evaluating complicated systems with numerous interconnected components, such as railway power systems, or for locating the reliability-critical components. Fault tree analysis (FTA) integrates the reliability of individual components into the overall system reliability through quantitative evaluation and identifies the critical components by minimum cut sets and sensitivity analysis. The paper presents the reliability evaluation of railway power systems by FTA and investigates the impact of maintenance activities on overall reliability. The applicability of the proposed methods is illustrated by case studies on AC railways.
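The FTA quantification step can be sketched with minimal cut sets: the example below computes the top-event probability from component failure probabilities using the rare-event approximation and ranks the cut sets, which is one simple way to flag reliability-critical contributors. The cut sets, component names, and probabilities are invented for illustration, not taken from a railway system.

```python
# Minimal cut sets of a hypothetical fault tree (component labels are invented).
cut_sets = [("transformer",), ("rectifier", "backup_feeder"), ("breaker", "busbar")]
p_fail = {"transformer": 1e-4, "rectifier": 5e-3, "backup_feeder": 2e-2,
          "breaker": 1e-3, "busbar": 4e-4}

def cut_set_probability(cs):
    """Probability of one minimal cut set, assuming independent basic events."""
    prob = 1.0
    for comp in cs:
        prob *= p_fail[comp]
    return prob

contributions = sorted(((cut_set_probability(cs), cs) for cs in cut_sets), reverse=True)
top_event = sum(p for p, _ in contributions)       # rare-event approximation
print("top-event probability ~", top_event)
for p, cs in contributions:                        # importance ranking of cut sets
    print(f"{cs}: {p:.2e} ({p / top_event:.1%} of total)")
```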

20.
In a Bayesian reliability analysis of a system with dependent components, an aggregate analysis (i.e. system-level analysis) or a simplified disaggregate analysis with independence assumptions may be preferable if the estimations obtained from employing these two approaches do not deviate substantially from those derived through a disaggregate analysis, which is generally considered the most accurate method. This study was conducted to identify the key factors, and their ranges of values, that lead to estimation errors of great magnitude. In particular, a copula-based Bayesian reliability model was developed to formulate the dependence structure for a products-of-probabilities model of a simple parallel system. Monte Carlo simulation, regionalised sensitivity analysis and classification tree learning were employed to investigate the key factors. The resulting classification tree achieved favourable predictive accuracy. Several decision rules suggesting the optimal approach under different combinations of conditions were also extracted. This study makes a methodological contribution in laying the groundwork for investigating systems with dependent components using copula-based Bayesian reliability models. With regard to practical implications, it also derives useful guidelines for selecting the most appropriate analysis approach under different scenarios with different magnitudes of dependence.
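To make the copula idea concrete, the sketch below uses a Gaussian copula to couple two exponential component lifetimes and compares a two-unit parallel system's failure probability with and without the independence assumption; the marginals, the copula family, and the correlation value are illustrative choices, not the paper's model.

```python
import numpy as np
from scipy import stats

def parallel_failure_prob(t, lam, rho, n=200_000, seed=3):
    """P(both components failed by t) when the lifetimes have exponential(lam)
    marginals coupled through a Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                        # uniform marginals, dependence kept
    lifetimes = stats.expon(scale=1.0 / lam).ppf(u)
    return np.mean(np.all(lifetimes <= t, axis=1))

lam, t = 1e-3, 1000.0
p_single = 1.0 - np.exp(-lam * t)
print("independent product      :", p_single**2)
print("copula, rho = 0.0 (check):", parallel_failure_prob(t, lam, 0.0))
print("copula, rho = 0.7        :", parallel_failure_prob(t, lam, 0.7))
```

As the correlation grows, the joint failure probability rises well above the independence product, which is why a simplified disaggregate analysis can deviate substantially from the full dependent analysis.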
