Similar Articles
20 similar articles found.
1.
The redundancy allocation problem is formulated with the objective of minimizing design cost when the system exhibits multi-state reliability behavior, given system-level performance constraints. When the multi-state nature of the system is considered, traditional solution methodologies are no longer valid. This study considers a multi-state series-parallel system (MSPS) with capacitated binary components that can provide different multi-state system performance levels. The different demand levels that must be supplied during the system-operating period give the system its multi-state nature. The new solution methodology offers several distinct benefits compared to traditional formulations of the MSPS redundancy allocation problem. For some systems it is critical to recognize that different component versions yield different system performance, so that the overall system reliability estimate and the associated design reflect the true system reliability behavior more realistically. The MSPS design problem solved in this study has previously been analyzed using genetic algorithms (GAs) and the universal generating function. The specific problem addressed is one where there are multiple component choices, but once a component selection is made, only the same component type can be used to provide redundancy. This is the first time that the MSPS design problem has been addressed without using GAs. The heuristic offers a more efficient and straightforward analysis. Solutions to three different problem types are obtained, illustrating the simplicity and ease of application of the heuristic without compromising the intended optimization needs.
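The universal generating function (UGF) technique mentioned above composes component capacity distributions with series and parallel structure operators. The sketch below is a minimal illustration of that idea; the capacities, probabilities and demand level are invented for the example.

```python
from itertools import product

def ugf(p, g):
    """UGF of a capacitated binary component: capacity g with probability p,
    capacity 0 with probability 1 - p, stored as {capacity: probability}."""
    return {0: 1.0 - p, g: p}

def compose(u1, u2, op):
    """Combine two UGFs with a structure operator: sum of capacities for a
    parallel pair, min of capacities for a series pair."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

def demand_reliability(u, demand):
    """P(system capacity >= demand)."""
    return sum(p for g, p in u.items() if g >= demand)

# Two 60-unit components in parallel, in series with one 100-unit component.
sub = compose(ugf(0.9, 60), ugf(0.9, 60), lambda a, b: a + b)
sys_ugf = compose(sub, ugf(0.95, 100), min)
print(round(demand_reliability(sys_ugf, 100), 4))
```

The same two operators extend to any series-parallel arrangement by repeated composition.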

2.
Optimization leads to specialized structures that are not robust to disturbance events such as unanticipated abnormal loading or human error. Typical reliability-based and robust optimization mainly addresses objective aleatory uncertainties. To date, the impact of subjective epistemic uncertainties on optimal design has not been comprehensively investigated. In this paper, we use an independent parameter, the latent failure probability, to investigate the effects of epistemic uncertainties on optimal design. Reliability-based and risk-based truss topology optimization are addressed. It is shown that optimal risk-based designs can be divided into three groups: (A) when epistemic uncertainty is small (in comparison to aleatory uncertainty), the optimal design is indifferent to it and yields isostatic structures; (B) when aleatory and epistemic uncertainties are both relevant, the optimal design is controlled by epistemic uncertainty and yields hyperstatic but non-redundant structures, for which the expected costs of direct collapse are controlled; (C) when epistemic uncertainty becomes too large, the optimal design becomes redundant, as a way to control increasing expected costs of collapse. The three regions are divided by hyperstatic and redundancy thresholds. The redundancy threshold is the point at which the structure must become redundant so that its reliability exceeds the latent reliability of the simplest isostatic system. Simple truss topology optimization is considered herein, but the conclusions have immediate relevance to the optimal design of realistic structures subject to aleatory and epistemic uncertainties.

3.
Optimization of system reliability in the presence of common cause failures
The redundancy allocation problem is formulated with the objective of maximizing system reliability in the presence of common cause failures. These failures are events that lead to the simultaneous failure of multiple components due to a shared cause. When common cause failures are considered, component failure times are not independent. This new problem formulation offers several distinct benefits compared to traditional formulations of the redundancy allocation problem. For some systems, recognition of common cause failure events is critical so that the overall system reliability estimate and the associated design reflect the true system reliability behavior. Since common cause failure events may vary from one system to another, three different interpretations of the reliability estimation problem are presented. This is the first time that the mixing of components, together with the inclusion of common cause failure events, has been addressed in the redundancy allocation problem. Three non-linear optimization models are presented, and solutions to three different problem types are obtained. They support the position that consideration of common cause failures leads to different and preferred “optimal” design strategies.
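The abstract does not give its three model formulations; a standard way to capture common cause failures in a redundant group is the beta-factor model, sketched here under that assumption (the rates, mission time and beta value are illustrative).

```python
import math

def parallel_reliability(lam, t, n, beta):
    """Beta-factor common cause model for n identical exponential components
    in parallel with total failure rate lam: a fraction beta of the rate is a
    shared failure mode that takes down every component at once; the rest
    fails components independently."""
    r_ind = math.exp(-(1.0 - beta) * lam * t)   # survives its independent mode
    r_ccf = math.exp(-beta * lam * t)           # no common cause event by t
    return r_ccf * (1.0 - (1.0 - r_ind) ** n)

# With beta = 0 redundancy keeps paying off; with beta > 0 the gain saturates.
for n in (1, 2, 3):
    print(n, round(parallel_reliability(0.5, 1.0, n, 0.0), 4),
             round(parallel_reliability(0.5, 1.0, n, 0.1), 4))
```

The saturation is why optimal designs that ignore common cause failures tend to over-invest in identical redundancy.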

4.
This paper presents the similarities and differences between hardware, software and system reliability. The relative contributions of software and hardware to system failures are shown, and failure and recovery propensities are discussed. Reliability, availability and maintainability (RAM) concepts have been more broadly developed for software reliability than for hardware reliability. Extending these software concepts to hardware and system reliability helps in examining the reliability of complex systems. The paper concludes with assurance techniques for defending against faults. Most of the techniques discussed originate in software reliability but apply to all aspects of a system. The effects of redundancy on overall system availability are also shown.
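The effect of redundancy on overall availability mentioned at the end can be sketched for n independent repairable units in parallel; the MTTF and MTTR values below are illustrative.

```python
def availability(mttf, mttr):
    """Steady-state availability of a single repairable unit."""
    return mttf / (mttf + mttr)

def redundant_availability(a, n):
    """Availability of n identical units in parallel (system up while any
    unit is up), assuming independent failures and repairs."""
    return 1.0 - (1.0 - a) ** n

a = availability(1000.0, 10.0)   # roughly two-nines for a single unit
print([round(redundant_availability(a, n), 6) for n in (1, 2, 3)])
```

Each added unit multiplies the unavailability by the single-unit unavailability, which is why duplication buys so much for repairable systems.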

5.
Ran Cao, Wei Hou, Yanying Gao, Engineering Optimization 2018, 50(9): 1453-1469
This article presents a three-stage approach for solving multi-objective system reliability optimization problems under uncertainty. The reliability of each component enters the formulation as a component reliability estimate in the form of an interval value and discrete values. Component reliability may vary owing to variations in the usage scenarios, so uncertainty is described by defining a set of usage scenarios. In the first stage, an entropy-based approach to the redundancy allocation problem is proposed to identify the deterministic reliability of each component. In the second stage, a multi-objective evolutionary algorithm (MOEA) is applied to produce a Pareto-optimal solution set. In the third stage, a hybrid algorithm based on k-means clustering and silhouettes is applied to select representative solutions. Finally, a numerical example is presented to illustrate the performance of the proposed approach.

6.
Accelerated life testing (ALT) design is usually performed based on assumptions about life distributions, the stress-life relationship, and empirical reliability models. Time-dependent reliability analysis, on the other hand, seeks to predict product and system life distributions based on physics-informed simulation models. This paper proposes an ALT design framework that takes advantage of both types of analysis. For a given testing plan, the corresponding life distributions under different stress levels are estimated based on time-dependent reliability analysis. Because both aleatory and epistemic uncertainty sources are involved in the reliability analysis, ALT data is used to update the epistemic uncertainty using Bayesian statistics. The variance of the reliability estimate at the nominal stress level is then estimated based on the updated time-dependent reliability analysis model. A design optimization model is formulated to minimize the overall expected testing cost with a constraint on the confidence in the variance of the reliability estimate. The computational effort for solving the optimization model is reduced in three ways: (i) an efficient time-dependent reliability analysis method is used; (ii) a surrogate model is constructed for time-dependent reliability under different stress levels; and (iii) the ALT design optimization model is decoupled into a deterministic design optimization model and a probabilistic analysis model. A cantilever beam and a helicopter rotor hub are used to demonstrate the proposed method. The results show the effectiveness of the proposed ALT design optimization model. Copyright © 2015 John Wiley & Sons, Ltd.

7.
This paper considers software systems consisting of fault-tolerant components. These components are built from functionally equivalent but independently developed versions characterized by different reliability and execution time. Because of hardware resource constraints, the number of versions that can run simultaneously is limited. The expected system execution time and the system reliability (defined as the probability of obtaining a correct output within a specified time) depend strictly on the parameters of the software versions and the sequence of their execution. A system structure optimization problem is formulated in which one has to choose software versions for each component and find the sequence of their execution in order to achieve the greatest system reliability subject to cost constraints. The versions are chosen from a list of available products, each characterized by its reliability, execution time and cost. The suggested optimization procedure is based on an algorithm for determining the system execution time distribution, which uses the moment generating function approach, and on a genetic algorithm. Both N-version programming and the recovery block scheme are considered within a universal model. An illustrative example is presented.
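As a much simplified sketch of the sequencing idea: the paper models random execution times via moment generating functions, whereas here version execution times are taken as deterministic, failures as independent, and all numbers are invented for illustration.

```python
def sequence_reliability(versions, deadline):
    """Probability of a correct output within `deadline` when the versions,
    given as (reliability, exec_time) pairs, run one after another until one
    succeeds; a failed version is assumed detected at the end of its run."""
    prob_reach = 1.0      # probability execution reaches the current version
    elapsed = 0.0
    total = 0.0
    for r, t in versions:
        elapsed += t
        if elapsed > deadline:
            break         # this version cannot finish before the deadline
        total += prob_reach * r
        prob_reach *= 1.0 - r
    return total

# A tight deadline cuts off the second version entirely.
print(sequence_reliability([(0.9, 3.0), (0.8, 5.0)], 10.0),
      sequence_reliability([(0.9, 3.0), (0.8, 5.0)], 5.0))
```

Even in this toy form, reordering the versions changes the result, which is the core of the sequencing optimization.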

8.
This paper presents the design optimization of a safety-instrumented system by a multi-objective genetic algorithm based on RAMS+C measures, i.e., optimization of safety and reliability measures plus lifecycle cost. Diverse redundancy is implemented as an option for redundancy allocation, and special attention is paid to its effect on common cause failure and on the overall system objectives. The requirements for safety integrity established by the standard IEC 61508 are addressed, as well as the modelling detail required for this purpose. The problem is one of reliability and redundancy allocation with diversity for a series-parallel system. The objectives to optimize are the average probability of failure on demand, which represents the system safety integrity, the spurious trip rate, and the lifecycle cost. The overall method is illustrated with a practical example from the chemical industry: a safety function against high pressure and temperature for a chemical reactor. In order to implement diversity, each subsystem is given the option of three different technologies, each with different reliability and diagnostic coverage characteristics. Finally, the optimization with diversity is compared against optimization without diversity.

9.
Estimating the reliability of components in series and parallel systems from masked system testing data has been studied previously. In this paper we take into account a second type of uncertainty, censored lifetimes, when system components have constant failure rates. To efficiently estimate the failure rates of system components in the presence of this combined uncertainty, we propose a useful concept for components: equivalent failure and equivalent lifetime. For a component in a system with known status and lifetime, its equivalent failure is defined as its conditional failure probability and its equivalent lifetime is the expectation of its lifetime. For various uncertainty scenarios, we derive equivalent failures and test times for individual components in both series and parallel systems. An efficient EM algorithm is formulated to estimate component failure rates. Two numerical examples are presented to illustrate the application of the algorithm.
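A minimal sketch of the equivalent-failure idea for a series system with masked causes and constant failure rates: the E-step splits each masked failure among the candidate components in proportion to their current rates (the equivalent failure), and the M-step re-estimates each rate as equivalent failures over exposure. The data set and function name are invented for illustration.

```python
def em_masked_exponential(observations, m, iters=200):
    """EM estimate of constant failure rates in an m-component series system.
    Each observation is (time, cause_set): cause_set holds the component
    indices that could have caused the failure ('masked' data), or None for
    a censored test. In a series system every component accrues the full
    test time as exposure."""
    rates = [1.0] * m
    for _ in range(iters):
        failures = [0.0] * m
        exposure = [0.0] * m
        for t, cause in observations:
            for i in range(m):
                exposure[i] += t
            if cause is not None:
                total = sum(rates[i] for i in cause)
                for i in cause:
                    failures[i] += rates[i] / total   # equivalent failure
        rates = [failures[i] / exposure[i] for i in range(m)]
    return rates

# Two components; the third failure is masked to {0, 1}; the last test is censored.
obs = [(10.0, {0}), (10.0, {0}), (5.0, {0, 1}), (8.0, {1}), (12.0, None)]
rates = em_masked_exponential(obs, 2)
print([round(r, 5) for r in rates])
```

For this data the fixed point can be checked by hand: with 45 hours of exposure per component, the masked failure splits 2/3 to component 0 and 1/3 to component 1.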

10.
In a multi-component system, the failure of one component can reduce system reliability in two ways: through the loss of the reliability contribution of the failed component, and through the reconfiguration of the system, e.g., the redistribution of the system loading. System reconfiguration can be triggered by component failures as well as by adding redundancies. Hence, dependency is essential to the design of a multi-component system. In this paper, we study the design of a redundant system with consideration of a specific kind of failure dependency, the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, redundant dependencies are further classified as independence, weak, linear, and strong dependencies. This classification is useful in that it facilitates the resolution of the system design optimization. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. The paper thus conveys the significance of failure dependencies in the reliability optimization of systems.

11.
Life data from systems of components are often analysed to estimate the reliability of the individual components. These estimates are useful since they reflect the reliability of the components under actual operating conditions. However, owing to the cost or time involved with failure analysis, the exact component causing system failure may be unknown or ‘masked’. That is, the cause may only be isolated to some subset of the system's components. We present an iterative approach for obtaining component reliability estimates from such data for series systems. The approach is analogous to traditional probability plotting. That is, it involves the fitting of a parametric reliability function to a set of nonparametric reliability estimates (plotting points). We present a numerical example assuming Weibull component life distributions and a two-component series system. In this example we find estimates with only 4 per cent of the computation time required to find comparable MLEs.

12.
Any structure or component can be made to fail if it is subjected to loadings in excess of its strength. Structural integrity is achieved by ensuring that there is an adequate safety margin or reserve factor between strength and loading effects. The basic principles of ‘allowable stress’ and ‘limit state’ design methods to avoid failure in structural and pressure vessel components are summarised. The use of risk as a means of defining adequate safety is introduced where risk is defined as the product of probability of failure multiplied by consequences of failure. The concept of acceptable ‘target’ levels of risk is discussed. The use of structural reliability theory to determine estimates of probability of failure and the use of the reliability index β are described. The need to consider the effects of uncertainties in loading information, calculation of stresses, input data and material properties is emphasised. The way in which the effect of different levels of uncertainty can be dealt with by use of partial safety factors in limit state design is explained. The need to consider all potential modes of failure, including the unexpected, is emphasised and an outline given of safety factor treatments for crack tip dependent and time dependent modes. The relationship between safety factors appropriate for the design stage and for assessment of structural integrity at a later stage is considered. The effects of redundancy and system behaviour on appropriate levels of safety factors are discussed.
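The reliability index β described above has a closed form in the simplest case of independent, normally distributed strength and load effect; the means and standard deviations below are illustrative.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    """Cornell reliability index for independent normal strength R and load
    effect S: beta = E[R - S] / sd(R - S); failure probability Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / sqrt(sd_r ** 2 + sd_s ** 2)
    return beta, phi(-beta)

beta, pf = reliability_index(mu_r=500.0, sd_r=40.0, mu_s=350.0, sd_s=30.0)
print(round(beta, 3), f"{pf:.2e}")
```

Target β values in codes are set exactly this way: a required β fixes an acceptable probability of failure, which partial safety factors then deliver at the design stage.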

13.
Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by means of the introduction of a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations.
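The variance signature mentioned above can be illustrated with a simple two-state environment mixture (all probabilities here are invented): the total variance of the failure count exceeds the binomial variance implied by the average failure probability, and that overdispersion is the footprint of the shared environment.

```python
def mixture_failure_moments(n, envs):
    """Mean and variance of the number of failures among n conditionally
    independent components when the per-component failure probability depends
    on a random environment; envs lists (env_probability, failure_probability)."""
    mean = sum(w * n * p for w, p in envs)
    within = sum(w * n * p * (1.0 - p) for w, p in envs)      # E[Var | env]
    between = sum(w * (n * p - mean) ** 2 for w, p in envs)   # Var[E | env]
    return mean, within + between

n = 4
envs = [(0.9, 0.01), (0.1, 0.30)]       # mostly benign, occasionally harsh
mean, var = mixture_failure_moments(n, envs)
p_bar = mean / n
binom_var = n * p_bar * (1.0 - p_bar)
print(round(var, 4), round(binom_var, 4))   # mixture variance exceeds binomial
```

The `between` term is exactly the extra variance induced by the environment; it vanishes when every environment state has the same failure probability.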

14.
Reliability sensitivity, considered an essential component of engineering design under uncertainty, is often of critical importance for understanding the physical systems underlying failure and for modifying the design to mitigate and manage risk. This paper presents a new computational tool for predicting the reliability (failure probability) and reliability sensitivity of mechanical or structural systems subject to random uncertainties in loads, material properties, and geometry. The dimension reduction method is applied to compute response moments and their sensitivities with respect to the distribution parameters (e.g., shape and scale parameters, mean, and standard deviation) of the basic random variables. Saddlepoint approximations with truncated cumulant generating functions are employed to estimate the failure probability, probability density functions, and cumulative distribution functions. A rigorous analytic derivation of the sensitivities of the failure probability with respect to the distribution parameters of the basic random variables is given. Results of six numerical examples involving hypothetical mathematical functions and solid mechanics problems indicate that the proposed approach provides accurate, convergent, and computationally efficient estimates of the failure probability and reliability sensitivity. Copyright © 2012 John Wiley & Sons, Ltd.

15.
New insights on multi-state component criticality and importance
In this paper, new importance measures for multi-state systems with multi-state components are introduced and evaluated. These new measures complement and enhance current work in the area of multi-state reliability. In general, importance measures are used to evaluate and rank the criticality of components or component states with respect to system reliability. The focus of the study is to provide intuitive and clear importance measures that can be used to enhance system reliability from two perspectives: (1) how a specific component affects multi-state system reliability and (2) how a particular component state or set of states affects multi-state system reliability. The first measure, the unsatisfied demand index, provides insight into a component's or component state's contribution to unsatisfied demand. The second measure, the multi-state failure frequency index, quantifies the contribution of a particular component or component state to system failure. Finally, the multi-state redundancy importance identifies where to allocate component redundancy so as to improve system reliability. The findings of this study indicate that the two perspectives complement each other and serve as an effective tool to assess component criticality. Examples illustrate and compare the proposed measures with previous multi-state importance measures.

16.
The objective of this paper is to present an efficient computational methodology for obtaining the optimal system structure of electronic devices using either a single-objective or a multiobjective optimization approach, while considering constraints on reliability and cost. Uncertainty in the component failure rates is taken into consideration and is modeled with two alternative probability distribution functions. The Latin hypercube sampling method is used to simulate the probability distributions. An optimization approach was developed using the simulated annealing algorithm because of its flexibility for application to various system types with several constraints and its computational efficiency. This optimization approach can efficiently handle either the single-objective or the multiobjective formulation of the system design. The developed methodology was applied to a power electronic device, and the results were compared with the results of a complete enumeration of the solution space. The stochastic nature of the best solutions for the single-objective formulation was sampled extensively, and the robustness of the developed optimization approach was demonstrated.
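A minimal pure-Python sketch of Latin hypercube sampling as used for simulating the failure-rate distributions; the sample size and seed are arbitrary.

```python
import random

def latin_hypercube(n, dims, rng=random.Random(42)):
    """One Latin hypercube sample of size n in [0, 1)^dims: each dimension is
    split into n equal strata, each stratum is hit exactly once, and strata
    are paired across dimensions by independent random permutations."""
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n   # uniform within stratum
    return samples

pts = latin_hypercube(5, 2)
# Each dimension gets exactly one point per stratum [k/5, (k+1)/5).
print(sorted(int(x * 5) for x, _ in pts))
```

Mapping each coordinate through an inverse CDF then yields stratified draws from any target failure-rate distribution.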

17.
On the Effect of Redundancy for Systems with Dependent Components
Parallel redundancy is a common approach to increase system reliability and mean time to failure. When studying systems with redundant components, it is usually assumed that the components are independent; however, this assumption is seldom valid in practice. In the case of dependent components, the effectiveness of adding a component may be quite different from the case of independent components. In this paper we investigate how the degree of correlation affects the increase in the mean lifetime for parallel redundancy when the two components are positively quadrant dependent. A number of bivariate distributions that can be used in the modeling of dependent components are compared. Various bounds are also derived. The results are useful in reliability analysis as well as for designers who are required to take into account the possible dependence among the components.
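One bivariate model of the kind compared in such studies is the Marshall-Olkin common-shock model, for which the parallel-system MTTF has a closed form. The sketch below (rates are illustrative) shows that for the same marginal failure rates, positive dependence shrinks the redundancy benefit.

```python
def parallel_mttf_marshall_olkin(l1, l2, l12):
    """MTTF of a two-unit parallel system under the Marshall-Olkin model:
    units fail individually at rates l1 and l2 and jointly from a common
    shock at rate l12 (l12 > 0 gives positive dependence).
    E[max(T1,T2)] = E[T1] + E[T2] - E[min(T1,T2)], where unit i is
    exponential at rate li + l12 and min(T1,T2) at rate l1 + l2 + l12."""
    return 1.0 / (l1 + l12) + 1.0 / (l2 + l12) - 1.0 / (l1 + l2 + l12)

indep = parallel_mttf_marshall_olkin(1.0, 1.0, 0.0)   # independent units
dep = parallel_mttf_marshall_olkin(0.5, 0.5, 0.5)     # same marginal rates (1.0)
print(indep, dep)   # dependence erodes the redundancy gain
```

In the limit where all failures come from the shock, the second unit adds nothing, which is the designer's warning in the abstract.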

18.
In reliability modelling it is conventional to build sophisticated models of the probabilistic behaviour of the component lifetimes in a system in order to deduce information about the probabilistic behaviour of the system lifetime. Decision modelling of the reliability programme therefore requires, a priori, an even more sophisticated set of models in order to capture the evidence the decision maker believes may be obtained from different types of data acquisition. Bayes linear analysis is a methodology that uses expectation rather than probability as the fundamental expression of uncertainty. By working only with expected values, a simpler level of modelling is needed compared to full probability models. In this paper we consider the Bayes linear approach to the estimation of the mean time to failure (MTTF) of a component. The model takes account of the variance in our estimate of the MTTF, based on a variety of sources of information.
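A minimal sketch of a Bayes linear adjustment for an MTTF, assuming only first- and second-order prior specifications are given (the numbers are invented): no distributions are needed, just means, variances and a covariance.

```python
def bayes_linear_adjust(ex, var_x, ed, var_d, cov_xd, d_obs):
    """Bayes linear adjusted expectation and variance of X given one observed
    quantity D, using only second-order belief specifications:
      E_D(X)   = E(X) + Cov(X, D) / Var(D) * (d - E(D))
      Var_D(X) = Var(X) - Cov(X, D)**2 / Var(D)
    """
    gain = cov_xd / var_d
    return ex + gain * (d_obs - ed), var_x - gain * cov_xd

# Prior beliefs about an MTTF (hours), adjusted by a noisy test-based estimate
# D with E(D) = E(X), Var(D) = Var(X) + measurement variance, Cov(X, D) = Var(X).
ex, vx, vnoise = 1000.0, 200.0 ** 2, 100.0 ** 2
mean, var = bayes_linear_adjust(ex, vx, ex, vx + vnoise, vx, d_obs=1150.0)
print(round(mean, 1), round(var, 1))
```

The adjusted variance never exceeds the prior variance, quantifying how much the data source is expected to sharpen the MTTF estimate before it is even collected.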

19.
Traditionally, reliability-based design optimization (RBDO) is formulated as a nested optimization problem in which the objective is to minimize a cost function while satisfying reliability constraints. The reliability constraints are usually formulated as constraints on the probability of failure corresponding to each of the failure modes, or as a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. The difficulty in evaluating reliability constraints comes from the fact that modern reliability analysis methods are themselves formulated as optimization problems. Solving such nested optimization problems is extremely expensive for large-scale multidisciplinary systems, which are themselves computationally intensive. In this research, a framework for performing reliability-based multidisciplinary design optimization using approximations is developed. Response surface approximations (RSAs) of the limit state functions are used to estimate the probability of failure, and an outer loop is incorporated to ensure that the approximate RBDO converges to the actual most probable point of failure. The framework is compared with the exact RBDO procedure. In the proposed methodology, RSAs significantly reduce the computational expense associated with traditional RBDO. The approach is applied to multidisciplinary test problems, and the computational savings and benefits are discussed.

20.
A new methodology for the reliability optimization of a k-dissimilar-unit nonrepairable cold-standby redundant system is introduced in this paper. Each unit is composed of a number of independent components, with generalized Erlang lifetime distributions, arranged in a series-parallel configuration. We also propose an approximate technique to extend the model to general types of nonconstant hazard functions. To evaluate the system reliability, we apply the shortest path technique in stochastic networks. The purchase cost of each component is assumed to be an increasing function of its expected lifetime, and for each component of the system there are multiple component choices with different distribution parameters available. The objective of the reliability optimization problem is to select the best components from the set of available components to be placed in the standby system, so as to minimize the initial purchase cost of the system, maximize the system mean time to failure, minimize the variance of the system time to failure, and maximize the system reliability at the mission time. The goal attainment method is used to solve a discrete-time approximation of the original problem.
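With perfect switching, a cold-standby system's lifetime is the sum of the unit lifetimes, so the MTTF and variance objectives above follow directly from the unit moments. The unit values below are illustrative; the paper's generalized Erlang units would supply these moments.

```python
def cold_standby_moments(units):
    """Mean and variance of the system time to failure for units in cold
    standby with perfect switching: units run one after another, so the
    system lifetime is the sum of the independent unit lifetimes and the
    moments simply add."""
    mttf = sum(m for m, v in units)
    var = sum(v for m, v in units)
    return mttf, var

# Unit lifetimes given as (mean, variance); an Erlang(k, lam) unit would
# contribute mean k/lam and variance k/lam**2.
units = [(2.0, 1.0), (3.0, 1.5), (2.5, 2.0)]
print(cold_standby_moments(units))   # -> (7.5, 4.5)
```

The purchase-cost and mission-reliability objectives need the full lifetime distributions, which is where the paper's stochastic shortest-path evaluation comes in.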

