Similar Documents
20 similar documents found (search time: 0 ms)
1.
In this paper, an adaptive directional importance sampling (ADIS) method is presented. The algorithm is based on a directional simulation scheme in which the most important directions are sampled exactly and the others by means of a response surface approach. These most important directions are determined by a β-sphere enclosing the most important part(s) of the limit state. The β-sphere and response surface are constantly updated during sampling with information that becomes available from the exact evaluations, making the scheme adaptive. Various widely used test problems, representing a broad range of complex limit states that can occur in practice, several of which pose potential problems to stochastic methods in general, demonstrate the high efficiency, accuracy and robustness of the method. As such, the ADIS method is of particular interest in applications with a low probability of failure and a medium number (up to about 40) of stochastic variables, for instance in the aircraft and nuclear industries.
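To illustrate the plain directional simulation scheme that underlies ADIS (not the authors' adaptive β-sphere/response-surface algorithm itself), here is a minimal sketch in 2-D standard normal space with an assumed linear limit state g(u) = β − (u₁ + u₂)/√2; the limit-state choice, β value and sample count are illustrative assumptions:

```python
import math
import random

def directional_simulation(beta=3.0, n_dirs=200_000, seed=0):
    """Directional-simulation estimate of P(g(U) <= 0) for the linear
    limit state g(u) = beta - (u1 + u2)/sqrt(2), U 2-D standard normal.

    For each random direction a, find the radius r at which g = 0 and
    accumulate the conditional exceedance probability
    P(chi-square_2 > r^2) = exp(-r^2 / 2).
    """
    rng = random.Random(seed)
    e = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))  # unit normal of g
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        a = (math.cos(theta), math.sin(theta))
        proj = a[0] * e[0] + a[1] * e[1]          # a . e
        if proj > 0.0:                            # ray crosses the limit state
            r = beta / proj                       # root of g along direction a
            total += math.exp(-r * r / 2.0)       # chi-square(2 dof) tail
    return total / n_dirs

p_f = directional_simulation()   # unbiased estimate of Phi(-3), about 1.35e-3
```

ADIS replaces most of the exact root searches above with cheap response-surface evaluations, sampling exactly only in the directions flagged by the β-sphere.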

2.
In this paper the problem of calculating the probability of failure of linear dynamical systems subjected to random excitations is considered. The failure probability can be described as a union of failure events, each of which is described by a linear limit state function. While the failure probability due to a union of non-interacting limit state functions can be evaluated without difficulty, the interaction among the limit state functions makes the calculation of the failure probability a difficult and challenging task. A novel robust reliability methodology, referred to as the Wedge-Simulation-Method, is proposed to calculate the probability that the response of a linear system subjected to Gaussian random excitation exceeds specified target thresholds. A numerical example is given to demonstrate the efficiency of the proposed method, which is found to be far more efficient than Monte Carlo simulation.
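For reference, the baseline the paper improves on — crude Monte Carlo for the probability of a union of interacting linear limit states in standard normal space — can be sketched as follows (the two half-plane limit states and sample count are illustrative assumptions, not the paper's example):

```python
import random

def union_failure_probability(limit_states, n_samples=100_000, seed=1):
    """Crude Monte Carlo estimate of P(union_k {a_k . u >= b_k}) for
    linear limit states in independent standard normal space.

    limit_states: list of (a, b) pairs with a a coefficient tuple and
    b a scalar threshold.
    """
    rng = random.Random(seed)
    dim = len(limit_states[0][0])
    hits = 0
    for _ in range(n_samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        if any(sum(ai * ui for ai, ui in zip(a, u)) >= b
               for a, b in limit_states):
            hits += 1
    return hits / n_samples

# Two interacting half-planes; each alone has probability Phi(-2) ~ 0.0228,
# and the exact union probability is 2*Phi(-2) - Phi(-2)^2 ~ 0.045.
p = union_failure_probability([((1.0, 0.0), 2.0), ((0.0, 1.0), 2.0)])
```

The cost of this estimator grows as the target probability shrinks, which is exactly the regime where a dedicated method such as the Wedge-Simulation-Method pays off.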

3.
4.
A new method for power system reliability analysis using the fault tree analysis approach is developed. The method is based on fault trees generated for each load point of the power system. The fault trees are related to disruption of energy delivery from generators to the specific load points. Quantitative evaluation of the fault trees, which provides the basis for assessing the reliability of power delivery, enables identification of the most important elements in the power system. The algorithm of the computer code, which facilitates application of the method, has been applied to the IEEE test system. The power system reliability was assessed and the main contributors to it were identified, both qualitatively and quantitatively.
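The quantitative evaluation step can be sketched generically: given a load-point fault tree of AND/OR gates over independent basic events, the top-event probability follows by recursion. The tree shape and probabilities below are illustrative assumptions, not the IEEE test system:

```python
def gate_probability(node, basic_probs):
    """Top-event probability of a fault tree, assuming independent
    basic events.

    node: either a basic-event name (str), or a tuple
    ('AND' | 'OR', [child nodes]).
    """
    if isinstance(node, str):
        return basic_probs[node]
    kind, children = node
    ps = [gate_probability(c, basic_probs) for c in children]
    if kind == 'AND':                 # all inputs must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    # OR gate: at least one input fails -> 1 - prod(1 - p_i)
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Loss of supply at a load point: either both generators fail,
# or the single transmission line to the load point fails.
tree = ('OR', [('AND', ['G1', 'G2']), 'LINE'])
probs = {'G1': 0.05, 'G2': 0.05, 'LINE': 0.01}
top = gate_probability(tree, probs)   # 1 - (1 - 0.0025) * (1 - 0.01)
```

Ranking basic events by their contribution to `top` (e.g. by recomputing with each event set to 0) identifies the most important elements, as the abstract describes.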

5.
This paper evaluates and implements composite importance measures (CIM) for multi-state systems with multi-state components (MSMC). Importance measures are frequently used as a means to evaluate and rank the impact and criticality of individual components within a system, yet they are less often used as a guide to prioritize system reliability improvements. For multi-state systems, previously developed measures are sometimes inappropriate and do not meet all user needs. This study has two inter-related goals: first, to distinguish between two types of importance measures that can be used for evaluating the criticality of components in MSMC with respect to multi-state system reliability, and second, based on the CIM, to develop a component allocation heuristic to maximize system reliability improvements. The heuristic uses Monte Carlo simulation together with the max-flow min-cut algorithm to compute component CIM. These measures are then transformed into a cost-based composite metric that guides the allocation of redundant elements into the existing system. Experimental results for different system complexities show that these new CIM can effectively estimate the criticality of components with respect to multi-state system reliability. Similarly, these results show that the CIM-based heuristic can be used as a fast and effective technique to guide system reliability improvements.

6.
This paper starts from the main objections regarding MIL-HDBK-217 and the BELLCORE method for reliability prediction, objections asserting that these methods are approximate, complicated and unconvincing. To support these assertions, and by applying techniques specific to reliability theory, the author has developed a reliability model that is plausible for certain elements of technical systems. The existence of such a model, which in practice is useless because the failure rate expression is too complicated, clearly demonstrates the inefficiency of the classical methods.

7.
Metamodel-based methods are an attractive reliability analysis technique because they substitute a metamodel for the actual limit state function at a predefined accuracy. Adaptive Kriging (AK) is a well-known metamodel in reliability analysis, valued for its flexibility and efficiency. AK combined with importance sampling (IS), abbreviated AK–IS, can greatly reduce the size of the candidate sampling pool in the updating process of the Kriging model, which makes AK-based reliability methods more suitable for estimating small failure probabilities. In this paper, an error-based stopping criterion for updating the Kriging model in the AK–IS method is constructed, and two methods are derived for estimating the maximum relative error between the failure probability estimated by the current Kriging model and that of the true limit state function. By controlling the maximum relative error, the accuracy of the estimate can be adjusted flexibly. Results in three case studies show that the AK–IS method with the error-based stopping criterion achieves the predefined accuracy level while enhancing the efficiency of updating the Kriging model.

8.
A generic method for estimating system reliability using Bayesian networks (cited 2 times: 0 self-citations, 2 by others)
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. A BN is a probabilistic approach used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) in which nodes represent system components and arcs represent relationships among them. Although recent studies have used BNs for estimating system reliability, they assume that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts must learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As no existing studies eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and an evaluation of the approach with case examples from the literature.
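Once a BN structure and its conditional probability tables are in hand (whether built by K2 or by hand), system reliability follows by marginalizing the "system failed" node. A minimal enumeration sketch for a small, hypothetical 3-node network (the structure and numbers are illustrative, not from the paper):

```python
from itertools import product

def joint_probability(assignment, parents, cpt):
    """P(assignment) for a discrete (binary-node) Bayesian network.
    assignment: dict node -> 0/1; parents: node -> tuple of parent names;
    cpt: node -> dict mapping parent-value tuple -> P(node = 1)."""
    p = 1.0
    for node, val in assignment.items():
        pv = tuple(assignment[q] for q in parents[node])
        p1 = cpt[node][pv]
        p *= p1 if val == 1 else 1.0 - p1
    return p

def marginal(target, parents, cpt):
    """P(target = 1) by full enumeration (fine for small BNs)."""
    nodes = list(parents)
    total = 0.0
    for values in product([0, 1], repeat=len(nodes)):
        a = dict(zip(nodes, values))
        if a[target] == 1:
            total += joint_probability(a, parents, cpt)
    return total

# Hypothetical BN: two component-failure nodes feed a 'system failed'
# node whose CPT behaves like an OR gate.
parents = {'C1': (), 'C2': (), 'SYS': ('C1', 'C2')}
cpt = {
    'C1': {(): 0.1},
    'C2': {(): 0.2},
    'SYS': {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0},
}
p_fail = marginal('SYS', parents, cpt)    # 1 - 0.9 * 0.8 = 0.28
```

The paper's contribution is upstream of this step: learning `parents` automatically from historical data with K2 rather than eliciting it from experts.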

9.
This paper proposes a double-loop relevance vector machine (RVM) model for system reliability analysis. To reduce the computational load, an adaptive RVM is constructed, built from a small number of initial samples and K-fold clustering. The candidate sample pool constructed by this rough adaptive RVM model improves the computational efficiency. Based on the idea of active learning, another adaptive RVM is established. By combining the two adaptive RVMs, the proposed model, called DLRVM, gains the advantages of both active learning and importance sampling. In this model, the failure probability is expressed as the product of an augmented failure probability and a correction factor. Owing to the characteristics of the RVM, this model under the Bayesian framework has significant generalization ability, which avoids the limitations of many machine learning models. Its accuracy and high efficiency are verified via four academic examples and an implicit engineering problem. The results also indicate that the RVM is well suited to system reliability analysis.

10.
Various schemes have been created for verifying that reliability is not degraded during production. These include the periodic performance of reliability tests during production, three versions of an all-equipment reliability test plan, and Bayesian approaches. Each method has its drawbacks. The purpose of all of these is to verify that the production process continues to produce products of acceptable reliability, a task for which the long-established tools of statistical process control are directly applicable and advantageous. A method of verifying production reliability based on a control chart for the failure rate is proposed as a better alternative to the current standards and the other approaches discussed in this paper.
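A failure-rate control chart of the kind proposed can be sketched with Poisson-based (u-chart style) three-sigma limits; the lot data below and the function name are illustrative assumptions, not the paper's scheme in detail:

```python
import math

def failure_rate_control_limits(lots, z=3.0):
    """Control limits for a failure-rate chart (Poisson-based,
    u-chart style).

    lots: list of (failures, test_hours) per production lot.
    Returns (center_line, per-lot upper limits, per-lot lower limits),
    with rates in failures per hour.
    """
    total_fail = sum(f for f, _ in lots)
    total_hours = sum(t for _, t in lots)
    lam_bar = total_fail / total_hours            # center line
    ucl = [lam_bar + z * math.sqrt(lam_bar / t) for _, t in lots]
    lcl = [max(0.0, lam_bar - z * math.sqrt(lam_bar / t)) for _, t in lots]
    return lam_bar, ucl, lcl

# Hypothetical production lots: (observed failures, test hours).
lots = [(4, 2000.0), (6, 2500.0), (3, 1800.0)]
center, ucl, lcl = failure_rate_control_limits(lots)

# A lot signals possible reliability degradation if its observed
# failure rate exceeds its upper control limit.
flagged = [i for i, (f, t) in enumerate(lots) if f / t > ucl[i]]
```

An out-of-control signal triggers investigation of the process, which is exactly the statistical-process-control logic the abstract argues for over periodic requalification tests.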

11.
The presented method extends the classical reliability block diagram method to a repairable multi-state system. It is well suited to engineering applications since the procedure is well formalized and based on the natural decomposition of the entire multi-state system (the system is represented as a collection of its elements). Until now, the classical block diagram method did not provide reliability assessment for repairable multi-state systems. Straightforward stochastic process methods are very difficult to apply in engineering practice in such cases due to the "curse of dimensionality": the huge number of system states. The suggested method is based on combined random processes and the universal generating function technique, and drastically reduces the number of states in the multi-state model.
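The universal generating function (u-function) composition at the heart of the method can be sketched for the static case: each element's capacity distribution is a dict, parallel elements compose with `+`, series elements with `min`. The capacities and probabilities below are illustrative assumptions:

```python
def combine(u1, u2, op):
    """Compose two u-functions (dicts mapping capacity -> probability)
    under the given structure operator (sum for parallel, min for series)."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

# Two multi-state generators in parallel (capacities add), feeding one
# transmission element in series (system capacity is the minimum).
gen1 = {0: 0.1, 50: 0.3, 100: 0.6}
gen2 = {0: 0.05, 80: 0.95}
line = {0: 0.02, 120: 0.98}

supply = combine(gen1, gen2, lambda a, b: a + b)   # parallel composition
system = combine(supply, line, min)                # series composition

demand = 100
availability = sum(p for g, p in system.items() if g >= demand)
```

Note how `combine` merges equal-capacity terms, so the state count grows with the number of distinct capacity levels rather than with the product of element state counts; the paper pairs this with random-process models to cover the repairable (time-dependent) case.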

12.
13.
Matrix-based system reliability method and applications to bridge networks (cited 1 time: 0 self-citations, 1 by others)
Using the matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods, whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even when one has incomplete information on the component probabilities and/or their statistical dependence, the matrix-based framework enables estimation of the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of the bridges are estimated using predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. To quantify the relative importance of the bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that at least one city is disconnected from the critical facility. Bounds on the probability of disconnection are also obtained for cases with incomplete information.
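The core matrix identity — the system-event probability is the inner product of a 0/1 event vector c with the probability vector p over all component-state combinations — can be sketched for independent binary components. The 3-bridge topology and failure probabilities are illustrative assumptions, not the paper's network:

```python
from itertools import product

def msr_probability(comp_fail_probs, system_fails):
    """Matrix-based system reliability for independent binary components:
    form the probability vector over all 2^n component states and take
    its inner product with the 0/1 system-event vector.

    system_fails: function mapping a tuple of component states
    (1 = failed) to True if the system event occurs.
    """
    prob = 0.0
    n = len(comp_fail_probs)
    for states in product([0, 1], repeat=n):
        p = 1.0
        for s, q in zip(states, comp_fail_probs):
            p *= q if s == 1 else 1.0 - q        # p[i], by independence
        if system_fails(states):
            prob += p                            # c[i] * p[i], c[i] = 1
    return prob

# Hypothetical route to a critical facility: the city is disconnected if
# bridge 0 fails, or if bridges 1 and 2 (two parallel detours) both fail.
q = [0.1, 0.2, 0.3]
p_disconnect = msr_probability(
    q, lambda s: s[0] == 1 or (s[1] == 1 and s[2] == 1))
```

The convenience claimed in the abstract is that only the event vector changes from one system event to another (disconnection, number of failed bridges, conditional importance), while p is reused.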

14.
In this paper, marginal parts are equated with low quality and low reliability. Marginal parts can be shown to cause errors in some products during tests, and they are also a cause of field failures in these products. Although failures caused by marginal parts still have a random failure-time component, they exhibit much lower variation than the traditional failure cause, hidden flaws. I give marginal parts a measurable definition. If marginal effects can be established for a product, this knowledge can be used to improve reliability. Some examples of products where I believe this marginal effect holds are discussed in this paper. Such marginal effects on reliability are gaining ever more importance in systems of increasing complexity. A strong point of the marginal parts theory framed in this paper is that it can readily be subjected to statistical testing to see whether it holds for any particular product.

15.
This paper presents a multi-state Markov model for a coal power generating unit. The paper proposes a technique for estimating the transition intensities (rates) between the various generating capacity levels of the unit based on field observation. The technique can be applied to units whose output generating capacity is uniformly distributed. In order to estimate the transition intensities, a special Markov chain embedded in the observed capacity process was defined. Using this technique, all transition intensities can be estimated from the observed realization of the unit's generating capacity stochastic process. The proposed multi-state Markov model was used to calculate important reliability indices such as the Forced Outage Rate (FOR) and the Expected Energy Not Supplied (EENS) to consumers. These indices were computed for short time periods (about 100 h) and were shown to differ noticeably from those calculated for the long-term range. Such Markov models can be very useful for power system security analysis and short-term operating decisions.
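The short-term-versus-long-term contrast can be illustrated with the simplest special case, a two-state (up/down) Markov unit: the transient unavailability approaches the steady-state FOR only after several mean cycle times. The rates and step size below are illustrative assumptions, not the paper's multi-state model:

```python
def unavailability(lam, mu, t_end, dt=0.01):
    """Transient unavailability Q(t) of a two-state (up/down) Markov
    unit, integrated with explicit Euler steps of the forward equation
    dQ/dt = lam * (1 - Q) - mu * Q, starting in the 'up' state."""
    q = 0.0
    for _ in range(int(t_end / dt)):
        q += dt * (lam * (1.0 - q) - mu * q)
    return q

lam, mu = 0.01, 0.1                        # assumed per-hour failure/repair rates
q_100h = unavailability(lam, mu, 100.0)    # short-term index (about 100 h)
for_steady = lam / (lam + mu)              # long-run Forced Outage Rate
```

For these rates the 100 h value already sits close to the long-run FOR, but with slower repair (smaller μ) the gap at 100 h is large, which is the paper's point about using short-term indices for operating decisions.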

16.
In this paper we adopt a geometric perspective to highlight the challenges associated with solving high-dimensional reliability problems. From this geometric point of view we highlight and explain a range of results concerning the performance of several well-known reliability methods.

We start by investigating geometric properties of the N-dimensional Gaussian space and the distribution of samples in such a space or in a subspace corresponding to a failure domain. Next, we discuss Importance Sampling (IS) in high dimensions. We provide a geometric understanding of why IS generally does not work in high dimensions [Au SK, Beck JL. Importance sampling in high dimensions. Structural Safety 2003;25(2):139–63]. We furthermore challenge the significance of the "design point" when dealing with strongly nonlinear problems. We conclude by showing that for general high-dimensional nonlinear reliability problems the selection of an appropriate fixed IS density is practically impossible.

Next, we discuss the simulation of samples using Markov Chain Monte Carlo (MCMC) methods. First, we provide a geometric explanation of why the standard Metropolis–Hastings (MH) algorithm does not work in high dimensions. We then explain why the modified Metropolis–Hastings (MMH) algorithm introduced by Au and Beck [Au SK, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics 2001;16(4):263–77] overcomes this problem. A study of the correlation of samples obtained using MMH as a function of different parameters follows, leading to recommendations for fine-tuning the MMH algorithm. Finally, the MMH algorithm is compared with the MCMC algorithm proposed by Katafygiotis and Cheung [Katafygiotis LS, Cheung SH. Application of spherical subset simulation method and auxiliary domain method on a benchmark reliability study. Structural Safety 2006 (in press)] in terms of the correlation of the samples they generate.
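The MMH idea referenced above — component-wise 1-D proposals accepted against the standard normal density, followed by a single global check against the failure domain — can be sketched as follows; the failure domain, proposal spread and chain length are illustrative assumptions:

```python
import math
import random

def mmh_step(u, in_failure_domain, spread=1.0, rng=random):
    """One modified Metropolis-Hastings step (in the spirit of Au & Beck)
    for sampling conditional on a failure domain in standard normal space.

    Each coordinate gets its own symmetric 1-D proposal, accepted or
    rejected against the standard normal density; the pre-candidate is
    then kept only if it lies inside the failure domain.
    """
    def phi(x):  # standard normal pdf
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    candidate = []
    for ui in u:
        xi = ui + rng.uniform(-spread, spread)
        if rng.random() < min(1.0, phi(xi) / phi(ui)):
            candidate.append(xi)
        else:
            candidate.append(ui)          # coordinate-wise rejection
    # Global check: reject the whole move if it leaves the failure domain.
    return candidate if in_failure_domain(candidate) else list(u)

# Sample conditional on the half-plane {u1 + u2 >= 2}, starting inside it.
rng = random.Random(42)
u = [1.5, 1.5]
chain = []
for _ in range(5000):
    u = mmh_step(u, lambda v: v[0] + v[1] >= 2.0, rng=rng)
    chain.append(u)
```

Because each coordinate's acceptance ratio involves only a 1-D density, the acceptance probability does not collapse as the dimension grows, which is the geometric point the paper makes against standard MH.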


17.
Reliability improvement of CMOS VLSI circuits depends on a thorough understanding of the technology, failure mechanisms, and resulting failure modes involved. Failure analysis has identified open circuits, short circuits and MOSFET degradation as the prominent failure modes. Classical methods of fault simulation and test generation are based on the gate-level stuck-at fault model, which has proved inadequate for modeling all realistic CMOS failure modes. An approach to aid reliability improvement and assurance of CMOS VLSI, intended to complement available VLSI design packages, is outlined. A two-step methodology is adopted. Step one, described in this paper, involves accurate circuit-level fault simulation of CMOS cells used in a hierarchical design process. The simulation is achieved using SPICE and pre-SPICE insertion of faults (PSIF). PSIF is an additional module to SPICE that has been developed and is described in detail. Failure modes and effects analysis (FMEA) is executed on the SPICE results and FMEA tables are generated. The second step of the methodology uses the FMEA tables to produce a knowledge base; it is essential for reliability studies of larger and VLSI circuits and will be the subject of a future paper. The knowledge base has the potential to generate fault trees, fault simulate and fault diagnose automatically.

18.
Comparison of finite element reliability methods (cited 7 times: 0 self-citations, 7 by others)
The spectral stochastic finite element method (SSFEM) aims at constructing a probabilistic representation of the response of a mechanical system whose material properties are random fields. The response quantities, e.g. the nodal displacements, are represented by a polynomial series expansion in terms of standard normal random variables. This expansion is usually post-processed to obtain the second-order statistical moments of the response quantities. However, the SSFEM has also been suggested in the literature as a method for reliability analysis, a potential that has not yet been carefully examined. In this paper, the SSFEM is considered in conjunction with the first-order reliability method (FORM) and with importance sampling for finite element reliability analysis. This approach is compared with the direct coupling of a FORM reliability code and a finite element code. The two procedures are applied to the reliability analysis of the settlement of a foundation lying on a randomly heterogeneous soil layer. The results are used to make a comprehensive comparison of the two methods in terms of their relative accuracies and efficiencies.
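The FORM building block shared by both procedures — the Hasofer-Lind/Rackwitz-Fiessler search for the design point and reliability index β in standard normal space — can be sketched generically; the linear limit state used for the check is an illustrative assumption, not the foundation-settlement problem:

```python
import math

def form_hlrf(g, grad, dim, tol=1e-8, max_iter=100):
    """First-order reliability method via the Hasofer-Lind /
    Rackwitz-Fiessler iteration in standard normal space.
    Returns (beta, design_point) for the limit state g(u) = 0."""
    u = [0.0] * dim
    for _ in range(max_iter):
        gv = g(u)
        dg = grad(u)
        norm2 = sum(d * d for d in dg)
        # HL-RF update: u_new = ((dg . u - g(u)) / |dg|^2) * dg
        coef = (sum(d * ui for d, ui in zip(dg, u)) - gv) / norm2
        u_new = [coef * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))
    return beta, u

# Check on a linear limit state g(u) = 3 - (u1 + u2)/sqrt(2): beta = 3.
g = lambda u: 3.0 - (u[0] + u[1]) / math.sqrt(2.0)
grad = lambda u: [-1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]
beta, u_star = form_hlrf(g, grad, 2)
```

In the SSFEM variant, `g` is evaluated on the cheap polynomial surrogate of the response; in the direct-coupling variant, each call to `g` and `grad` runs the finite element code, which is exactly the cost trade-off the paper compares.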

19.
Many studies have regarded a power transmission network as a binary-state network, constructed from arcs and vertices, to evaluate network reliability. In practice, the power transmission network should be treated as stochastic because each arc (a transmission line combining several physical lines) is multistate. Network reliability is the probability that the network can transmit d units of electric power from a power plant (source) to a high-voltage substation in a specific area (sink). This study focuses on searching for the optimal transmission line assignment to the power transmission network such that network reliability is maximized. A genetic algorithm based method integrating minimal paths and the Recursive Sum of Disjoint Products is developed to solve this assignment problem. A real power transmission network is adopted to demonstrate the computational efficiency of the proposed method in comparison with a random solution generation approach.
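For the binary-state special case, the reliability evaluation that sits inside such an optimization loop can be sketched with inclusion-exclusion over minimal paths (the Recursive Sum of Disjoint Products used in the paper is a more efficient alternative with the same result). The 4-arc network below is an illustrative assumption:

```python
from itertools import combinations

def reliability_from_minimal_paths(min_paths, arc_rel):
    """Two-terminal reliability of a binary-state network by
    inclusion-exclusion over its minimal paths, assuming independent arcs.

    min_paths: list of sets of arc names; arc_rel: arc -> P(arc works).
    """
    total = 0.0
    n = len(min_paths)
    for k in range(1, n + 1):
        sign = 1.0 if k % 2 == 1 else -1.0
        for subset in combinations(min_paths, k):
            arcs = set().union(*subset)      # union of the chosen paths
            p = 1.0
            for a in arcs:
                p *= arc_rel[a]              # all arcs in the union must work
            total += sign * p
    return total

# Hypothetical 4-arc network with two minimal source-to-sink paths.
paths = [{'a1', 'a2'}, {'a3', 'a4'}]
rel = {'a1': 0.9, 'a2': 0.9, 'a3': 0.8, 'a4': 0.8}
R = reliability_from_minimal_paths(paths, rel)   # 0.81 + 0.64 - 0.81*0.64
```

A genetic algorithm would call such an evaluation (extended to multistate capacities and demand d) as the fitness function for each candidate line assignment.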

20.