Similar Documents (20 results)
1.
Over the past decade, the civil engineering community has increasingly recognized the importance and potential of reliability-based design optimization (RBDO). Several advanced stochastic simulation algorithms for computing the small failure probabilities encountered in the reliability analysis of engineering systems have since been developed: Subset Simulation (Au and Beck (2001) [2]), Line Sampling (Schuëller et al. (2004) [3]), the Auxiliary Domain Method (Katafygiotis et al. (2007) [4]), ALIS (Katafygiotis and Zuev (2007) [5]), etc. In this paper we propose a novel advanced stochastic simulation algorithm for solving high-dimensional reliability problems, called Horseracing Simulation (HRS). The key idea behind HRS is as follows. Although the reliability problem itself is high-dimensional, the limit-state function maps this high-dimensional parameter space onto a one-dimensional real line. This mapping transforms a high-dimensional random parameter vector, which may represent the stochastic input load as well as any uncertain structural parameters, into a random variable with unknown distribution, which represents the uncertain structural response. It turns out that the corresponding cumulative distribution function (CDF) of this random variable of interest can be accurately approximated by empirical CDFs constructed from specially designed samples. The generation of samples is governed by a process of “racing” towards the failure domain, hence the name of the algorithm. The accuracy and efficiency of the new method are demonstrated with a real-life wind engineering example.
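The collapse of the high-dimensional problem onto a scalar response lends itself to a quick illustration. The following minimal Python sketch is not the HRS racing scheme itself, only the underlying observation: the empirical CDF of Y = g(X), evaluated at the failure threshold, is already a (crude Monte Carlo) failure probability estimate. The limit-state function g and all constants below are hypothetical.

```python
import numpy as np

def empirical_cdf_failure_prob(g, dim, n_samples=100_000, threshold=0.0, seed=0):
    """Approximate P[g(X) < threshold] from the empirical CDF of the scalar
    response Y = g(X), where X is a standard normal vector: the
    high-dimensional input space is collapsed onto the real line by g."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, dim))
    Y = np.sort(np.apply_along_axis(g, 1, X))          # sorted scalar responses
    return np.searchsorted(Y, threshold) / n_samples   # empirical CDF at threshold

# Hypothetical linear limit state in 1000 dimensions (illustration only):
g = lambda x: 3.0 - x.sum() / np.sqrt(x.size)
print(empirical_cdf_failure_prob(g, dim=1000))         # ~ Phi(-3) ≈ 1.35e-3
```

HRS improves on this crude estimate by designing the samples so that they "race" toward the failure domain rather than being drawn blindly.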

2.
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by exploiting the monotonicity that many models exhibit. For models with monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which are updated dynamically during a coupled Monte Carlo simulation, yielding a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than with an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with a relative error smaller than 5%. At higher accuracy levels this factor increases, although the effect is expected to diminish with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
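As a hedged illustration of how monotonicity can spare model evaluations, the sketch below implements a dynamic-bounds-style Monte Carlo for a limit-state function that is monotonically decreasing in every coordinate (failure when g(x) < 0): any sample that componentwise dominates a known failure point must fail, and any sample dominated by a known safe point must be safe, so g is called only when neither bound classifies the sample. The example g and sampler are hypothetical, and the bookkeeping is deliberately naive.

```python
import numpy as np

def monotone_mc(g, sampler, n=10_000, seed=0):
    """Monte Carlo with dynamic bounds for g monotonically DECREASING in
    every coordinate (failure when g(x) < 0).  Previously classified
    points are reused to decide new samples without evaluating the model."""
    rng = np.random.default_rng(seed)
    failed, safe = [], []
    n_fail = n_calls = 0
    for _ in range(n):
        x = sampler(rng)
        if any(np.all(x >= f) for f in failed):     # dominates a failure point
            n_fail += 1
        elif any(np.all(x <= s) for s in safe):     # dominated by a safe point
            pass
        else:                                       # bounds are silent: call g
            n_calls += 1
            if g(x) < 0:
                n_fail += 1
                failed.append(x)
            else:
                safe.append(x)
    return n_fail / n, n_calls

g = lambda x: 5.0 - x[0] - x[1]                     # decreasing in both inputs
pf, calls = monotone_mc(g, lambda rng: rng.standard_normal(2))
print(pf, "model calls:", calls)
```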

3.
The development of an efficient MCMC strategy for sampling from complex distributions is a difficult task that must be solved in order to calculate the small failure probabilities encountered in the high-dimensional reliability analysis of engineering systems. Usually, different variations of the Metropolis-Hastings (MH) algorithm are used. However, the standard MH algorithm generally does not work in high dimensions, since it produces very frequent repeated samples. To overcome this deficiency one can use the Modified Metropolis-Hastings algorithm (MMH) proposed in Au and Beck (2001) [1]. Another variation of the MH algorithm, the Metropolis-Hastings algorithm with delayed rejection (MHDR), was proposed by Tierney and Mira (1999) [7]. The key idea behind the MHDR algorithm is to reduce the correlation between states of the Markov chain. In this paper we combine the ideas of MMH and MHDR and propose a novel modification of the MH algorithm, called the Modified Metropolis-Hastings algorithm with delayed rejection (MMHDR). The efficiency of the new algorithm is demonstrated with a numerical example in which MMHDR is used together with Subset Simulation for computing small failure probabilities in high dimensions.
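The delayed-rejection mechanism is easy to state in code. The sketch below implements one stage of delayed rejection for a generic log-target with Gaussian random-walk proposals, i.e. the MHDR ingredient only, not the component-wise MMH part of MMHDR; the target, scales and step counts are hypothetical.

```python
import numpy as np

def dr_metropolis(log_pi, x0, n_steps=5000, s1=1.0, s2=0.3, seed=0):
    """Metropolis with one stage of delayed rejection (Tierney & Mira, 1999).
    When a wide first proposal is rejected, a narrower second proposal is
    tried, which lowers the chance of a repeated sample and hence the
    chain's autocorrelation."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    chain = np.empty((n_steps, d))

    def q1_logpdf(a, b):            # log N(a; b, s1^2 I), constants dropped
        return -0.5 * np.sum((a - b) ** 2) / s1**2

    for i in range(n_steps):
        y1 = x + s1 * rng.standard_normal(d)             # stage 1
        a1 = min(1.0, np.exp(log_pi(y1) - log_pi(x)))
        if rng.random() < a1:
            x = y1
        else:                                            # stage 2: delayed rejection
            y2 = x + s2 * rng.standard_normal(d)
            a1_rev = min(1.0, np.exp(log_pi(y1) - log_pi(y2)))
            num = np.exp(log_pi(y2) + q1_logpdf(y1, y2)) * (1.0 - a1_rev)
            den = np.exp(log_pi(x) + q1_logpdf(y1, x)) * (1.0 - a1)
            if den > 0 and rng.random() < min(1.0, num / den):
                x = y2
        chain[i] = x
    return chain

# Example: sample a 2-D standard normal target
chain = dr_metropolis(lambda z: -0.5 * np.sum(z**2), x0=np.zeros(2))
print(chain.mean(axis=0), chain.std(axis=0))
```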

4.
The present paper is concerned with the estimation of structural reliability when a large number of random variables is present. A sampling technique that uses lines to probe the failure domain is presented. It is employed in conjunction with a stepwise procedure based on Markov chains. The resulting algorithm exhibits accelerated convergence.
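For readers unfamiliar with the technique, a minimal Line Sampling sketch may help: each line parallel to an important direction contributes the exact one-dimensional probability Φ(−c) of its crossing point with the limit state. The important direction is assumed known, roots are searched only on [0, c_max], the limit state is hypothetical, and the paper's Markov chain stepwise procedure is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def line_sampling(g, alpha, dim, n_lines=100, c_max=8.0, seed=0):
    """Line Sampling sketch: probe the failure domain along lines parallel
    to a unit important direction `alpha` in standard normal space."""
    rng = np.random.default_rng(seed)
    alpha = alpha / np.linalg.norm(alpha)
    p = []
    for _ in range(n_lines):
        z = rng.standard_normal(dim)
        x_perp = z - (z @ alpha) * alpha            # component orthogonal to alpha
        f = lambda c: g(x_perp + c * alpha)
        if f(0.0) * f(c_max) < 0:                   # limit state crossed on the line
            c_i = brentq(f, 0.0, c_max)
            p.append(norm.cdf(-c_i))
        else:                                       # crude fallback for this sketch:
            p.append(0.0 if f(0.0) > 0 else 1.0)    # whole segment safe / failed
    return float(np.mean(p))

# Hypothetical linear limit state: exact Pf = Phi(-3) ≈ 1.35e-3
dim = 100
alpha = np.ones(dim)
g = lambda x: 3.0 - x @ alpha / np.linalg.norm(alpha)
print(line_sampling(g, alpha, dim))
```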

5.
In this study, a Reliability-Based Optimization (RBO) methodology that uses Monte Carlo Simulation techniques is presented. Typically, the First Order Reliability Method (FORM) is used in RBO for failure probability calculation, and this is accurate enough for most practical cases. However, for highly nonlinear problems it can produce extremely inaccurate results and may lead to unreliable designs. Monte Carlo Simulation (MCS) is usually more accurate than FORM but very computationally intensive. In the RBO methodology presented in this paper, limit state approximations are used in conjunction with MCS techniques in an approximate MCS-based RBO that facilitates the efficient calculation of the probabilities of failure. A FORM-based RBO is first performed to obtain the initial limit state approximations. A Symmetric Rank-1 (SR1) variable metric algorithm is used to construct and update the quadratic limit state approximations. The approximate MCS-based RBO uses a conditional-expectation-based MCS, which was chosen over indicator-based MCS because of the smoothness of its probability of failure estimates and the availability of analytic sensitivities. The RBO methodology was implemented for an analytic test problem and a higher-dimensional, control-augmented-structure test problem. The results indicate that the SR1 algorithm provides accurate limit state approximations (and therefore accurate estimates of the probabilities of failure) for these test problems. It was also observed that the RBO methodology required two orders of magnitude fewer analysis calls than an approach that used exact limit state evaluations for both test problems.
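The conditional-expectation idea that motivates this choice can be shown in a few lines. In the sketch below, for a hypothetical limit state g(x) = c − x1 − x2 with independent standard normal inputs, conditioning on x2 turns the 0/1 failure indicator into the smooth quantity Φ(x2 − c), whose sample average is the failure probability and whose derivative with respect to c is available analytically. This is not the paper's actual test problem.

```python
import numpy as np
from scipy.stats import norm

def cond_expect_mcs(c, n=10_000, seed=0):
    """Conditional-expectation Monte Carlo for g(x) = c - x1 - x2 with
    independent standard normal x1, x2.  Conditioning on x2 gives
    P(fail | x2) = Phi(x2 - c): a smooth estimator with an analytic
    sensitivity, unlike the 0/1 indicator of ordinary MCS."""
    rng = np.random.default_rng(seed)
    x2 = rng.standard_normal(n)
    pf = norm.cdf(x2 - c).mean()          # smooth probability estimate
    dpf_dc = -norm.pdf(x2 - c).mean()     # analytic sensitivity dPf/dc
    return pf, dpf_dc

pf, dpf = cond_expect_mcs(c=4.0)
print(pf, dpf)    # exact Pf = Phi(-4/sqrt(2)) ≈ 2.34e-3
```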

6.
Safety assessment in industrial plants with ‘major hazards’ requires a rigorous combination of both qualitative and quantitative RAMS techniques. Quantitative assessment can be carried out with static or dynamic dependability tools, but while the former are not sufficient to model time-dependent activities exhaustively, the latter are still too complex to be used successfully by practitioners in the industrial field.

In this paper we present a review of the procedures that can be used to solve quite general dynamic fault trees (DFT) that combine the following characteristics: time dependencies, repeated events, and generalized failure probabilities.

The theoretical foundations of DFT theory are discussed and the limits of the best-known DFT tools are presented. Introducing the concept of weak and strong hierarchy, the well-known modular approach is adapted to study a more generic class of DFT. In order to quantify the approximations introduced, an ad hoc simulation environment is used as a benchmark.

Finally, a DFT of an accident scenario is analyzed with both analytical and simulation approaches. The final results are in good agreement and show that it is possible to implement a suitable Monte Carlo simulation with the features of a spreadsheet environment, able to overcome the limits of the analytical tools, thus encouraging further research in this direction.

7.
This second part describes the application of the methodology for assessing the relative importance of uncertain structural parameters. The emphasis is on demonstrating that the proposed method can indeed handle large-scale problems relevant to industrial users. Four examples are included, of increasing complexity and difficulty. While the first two are quite simple tutorial-type problems, the remaining two deal with a large-scale application from aerospace engineering. The results demonstrate the remarkable efficiency of the method, even for problems with extremely high numbers of uncertain parameters.

8.
A novel procedure for estimating the relative importance of uncertain parameters of complex FE models is presented. The method is specifically directed toward problems involving high-dimensional input parameter spaces, as encountered in the uncertainty analysis of large-scale, refined FE models. In these cases one is commonly faced with thousands of uncertain parameters, and traditional techniques, e.g. finite difference or direct differentiation methods, become expensive. In contrast, the presented method quickly filters out the most influential variables. Hence, the main objective is not to compute the sensitivity but to identify those parameters whose random variations have the greatest influence on the response. This is achieved by generating a set of samples with direct Monte Carlo simulation, scattered closely around the point at which the relative importance measures are sought. From these samples, estimators of the relative importance are synthesized, and the most important parameters are then refined with a method of choice. In this paper, the underlying theory as well as the resulting algorithm is presented.
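A hedged sketch of this kind of screening (not necessarily the authors' estimator): scatter a modest number of Monte Carlo samples around the nominal point, fit a linear surrogate by least squares, and rank the parameters by scaled coefficient magnitude. The model and dimensions below are hypothetical.

```python
import numpy as np

def screen_importance(model, x0, sigma, n_samples=200, seed=0):
    """Screen many uncertain parameters by direct Monte Carlo: scatter
    samples closely around the nominal point x0, fit a linear surrogate
    by least squares, and rank parameters by |coefficient| * sigma,
    i.e. by how much each parameter's random variation moves the response."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = np.array([model(x) for x in X])
    A = np.column_stack([np.ones(n_samples), X - x0])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    importance = np.abs(coef[1:]) * sigma        # scaled regression coefficients
    return np.argsort(importance)[::-1], importance

# Hypothetical model in which only parameters 3 and 7 matter:
model = lambda x: 5.0 * x[3] - 2.0 * x[7] + 0.01 * x.sum()
rank, imp = screen_importance(model, x0=np.zeros(50), sigma=0.1)
print(rank[:5])    # parameters 3 and 7 should lead the ranking
```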

9.
In recent years, the need for more accurate dependability modelling (encompassing reliability, availability, maintenance, and safety) has favoured the emergence of novel dynamic dependability techniques able to account for the temporal and stochastic dependencies of a system. One of the most successful and widely used methods is the Dynamic Fault Tree which, with the introduction of dynamic gates, enables the analysis of systems with dynamic failure logic, such as fault-tolerant or reconfigurable systems. Among the dynamic gates, Priority-AND (PAND) is one of the most frequently used for the specification and analysis of event sequences. Despite the numerous modelling contributions addressing the resolution of the PAND gate, its failure logic and its consequences for the coherence behaviour of the system need to be examined to understand its effects in engineering decision-making scenarios, including design optimization and sensitivity analysis. Accordingly, the aim of this short communication is to analyse the coherence region of the PAND gate so as to determine the coherence bounds and improve the efficacy of the dynamic dependability modelling process.
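The PAND failure logic itself is compact enough to state as a Monte Carlo sketch: with two exponentially distributed basic events, the gate fails only if A fails before B and both fail within the mission time. The rates and mission time below are hypothetical; the closed form is included as a check.

```python
import numpy as np

def pand_failure_prob(lam_a, lam_b, t, n=1_000_000, seed=0):
    """Monte Carlo estimate of a two-input Priority-AND (PAND) gate:
    the gate fails only if A fails BEFORE B and both fail by time t."""
    rng = np.random.default_rng(seed)
    ta = rng.exponential(1.0 / lam_a, n)
    tb = rng.exponential(1.0 / lam_b, n)
    return np.mean((ta < tb) & (tb <= t))

la, lb, t = 0.5, 0.2, 2.0
exact = (1 - np.exp(-lb * t)) - lb / (la + lb) * (1 - np.exp(-(la + lb) * t))
print(pand_failure_prob(la, lb, t), exact)   # the two numbers should agree
```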

10.
Steam generators in nuclear power plants have experienced varying degrees of under-deposit pitting corrosion. A probabilistic model that accurately predicts pitting damage is necessary for effective life-cycle management of steam generators. This paper presents an advanced probabilistic model of pitting corrosion characterizing the inherent randomness of the pitting process and the measurement uncertainties of in-service inspection (ISI) data obtained from eddy current (EC) inspections. A Markov chain Monte Carlo simulation-based Bayesian method, enhanced by a data augmentation technique, is developed for estimating the model parameters. The proposed model is able to predict the actual pit number and the actual pit depth as well as the maximum pit depth, which is the main quantity of interest in pitting corrosion modeling. The study also reveals the significance of inspection uncertainties when modeling pitting flaws from ISI data: without considering probability-of-detection issues and measurement errors, the leakage risk resulting from pitting corrosion would be underestimated, even though the actual pit depth would usually be overestimated.
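The data-augmentation flavour of such a sampler can be illustrated on a toy sub-problem (the actual pit number only, ignoring the paper's treatment of pit depths and measurement error; all numbers are hypothetical): if actual pits are Poisson and each is detected with a known probability-of-detection, Poisson thinning gives a simple two-step Gibbs sampler.

```python
import numpy as np

def pit_count_gibbs(n_obs, pod, a=1.0, b=0.1, n_iter=5000, seed=0):
    """Data-augmentation Gibbs sampler for the actual pit number.
    Model: actual pits N ~ Poisson(mu), each detected with probability
    `pod`; by Poisson thinning, the missed pits are Poisson(mu*(1-pod))
    given mu, and mu has a conjugate Gamma(a, b) prior."""
    rng = np.random.default_rng(seed)
    mu = n_obs / pod                              # crude starting value
    mu_draws, N_draws = [], []
    for _ in range(n_iter):
        missed = rng.poisson(mu * (1.0 - pod))    # augment the unobserved pits
        N = n_obs + missed
        mu = rng.gamma(a + N, 1.0 / (b + 1.0))    # conjugate Gamma update
        mu_draws.append(mu); N_draws.append(N)
    return np.array(mu_draws[500:]), np.array(N_draws[500:])   # drop burn-in

mu_post, N_post = pit_count_gibbs(n_obs=40, pod=0.8)
print(N_post.mean())   # posterior mean actual pit count (> 40, since POD < 1)
```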

11.
The analysis of natural γ-ray spectra measured in boreholes must take into account borehole parameters such as the presence of casings and the borehole diameter. For large, high-efficiency γ-ray detectors, such as BGO-based systems, which employ full-spectrum data analysis, the corresponding corrections had not previously been determined. In a joint project of the Nuclear Geophysics Division of the Kernfysisch Versneller Instituut (NGD/KVI), Groningen, Medusa Explorations B.V. and the Dutch Institute for Applied Geosciences (TNO-NITG), a catalogue of corrections was constructed. Using the Monte Carlo code MCNP, the influence of steel casings, borehole diameter, central-axis probe position and γ-ray detector diameter on the γ-ray spectra was investigated for nearly 20 geometries. The calculated γ-ray spectra are compared qualitatively and quantitatively. In a case study, γ-ray spectra from a borehole measured in cased and uncased configurations are analyzed with simulated spectra. When no corrections are used, the activity concentrations deviate by as much as 50% between the two measurements. Taking the specific measurement geometry into account, the activity concentrations were found to be identical within the statistical and systematic uncertainties of the experiment for the same borehole, with and without casing. These results illustrate the need for borehole-specific corrections, and this study demonstrates that Monte Carlo methods are a fast and reliable way to calibrate well-logging tools for a wide variety of configurations.

12.
A case study for quantifying system reliability and uncertainty
The ability to estimate system reliability with an appropriate measure of the associated uncertainty is important for understanding a system's expected performance over time. Frequently, obtaining full-system data is prohibitively expensive, impractical, or not permissible. Hence, methodology that allows different types of data to be combined at the component or subsystem level can improve estimation at the system level. We apply methodologies for aggregating uncertainty from component-level data to estimate system reliability and quantify its overall uncertainty. This paper provides a proof of concept that uncertainty quantification methods based on Bayesian methodology can be constructed and applied to system reliability problems for a system with both series and parallel structures.
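A proof-of-concept sketch in the same spirit, with made-up test counts, a hypothetical structure function, and a simple uniform prior (not necessarily the paper's model): give each component a Beta posterior from its pass/fail data and push posterior draws through the series/parallel logic to obtain a system-level reliability posterior.

```python
import numpy as np

def system_reliability_posterior(tests, structure, n_draws=20_000, seed=0):
    """Combine component-level pass/fail data into a system-level
    reliability posterior: each component gets an independent
    Beta(1 + successes, 1 + failures) posterior (uniform prior), and
    posterior draws are propagated through the system structure."""
    rng = np.random.default_rng(seed)
    draws = {name: rng.beta(1 + s, 1 + f, n_draws) for name, (s, f) in tests.items()}
    return structure(draws)

# Hypothetical system: components A and B in series, in parallel with C
tests = {"A": (48, 2), "B": (19, 1), "C": (95, 5)}   # (successes, failures)
structure = lambda r: 1 - (1 - r["A"] * r["B"]) * (1 - r["C"])
post = system_reliability_posterior(tests, structure)
print(post.mean(), np.percentile(post, [5, 95]))     # mean and 90% interval
```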

13.
A novel subset simulation algorithm, called parallel subset simulation, is proposed to estimate the small failure probabilities of multiple limit states with only a single subset simulation analysis. As is well known, crude Monte Carlo simulation is inefficient at estimating small probabilities but applicable to multiple limit states, while ordinary subset simulation is efficient at estimating small probabilities but can handle only a single limit state. The proposed stochastic simulation approach combines the advantages of the two methods: it is not only efficient in estimating small probabilities but also applicable to multiple limit states. The key idea is to introduce a “principal variable” that is correlated with all performance functions. The failure probabilities of all limit states can therefore be evaluated simultaneously as the subset simulation algorithm generates samples of the principal variable. The statistical properties of the failure probability estimators are also derived. Two examples are presented to demonstrate the effectiveness of the new approach and to compare it with crude Monte Carlo and ordinary subset simulation.
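Since parallel subset simulation builds on the ordinary algorithm, a compact sketch of standard Subset Simulation (Au & Beck, 2001) with a component-wise Modified Metropolis sampler is given below as a baseline; per the abstract, the parallel variant would additionally evaluate every limit state on the principal-variable samples at each level. The limit state and all parameters below are hypothetical.

```python
import numpy as np

def mmh_step(x, gx, g, b, sigma, rng):
    """One Modified Metropolis step (Au & Beck): coordinate-wise accept
    w.r.t. the 1-D standard normal marginals, then a single check of the
    intermediate event g <= b for the whole candidate vector."""
    xi = x + sigma * rng.standard_normal(x.size)
    keep = np.log(rng.random(x.size)) < 0.5 * (x**2 - xi**2)
    y = np.where(keep, xi, x)
    if np.any(keep):
        gy = g(y)
        if gy <= b:
            return y, gy
    return x, gx

def subset_simulation(g, dim, n=1000, p0=0.1, sigma=0.8, max_levels=10, seed=0):
    """Express P[g(X) < 0] as a product of conditional probabilities of
    progressively rarer intermediate events g(X) <= b_j."""
    rng = np.random.default_rng(seed)
    ns = int(p0 * n)
    X = rng.standard_normal((n, dim))
    G = np.array([g(x) for x in X])
    pf = 1.0
    for _ in range(max_levels):
        order = np.argsort(G)
        b = G[order[ns - 1]]                 # intermediate threshold
        if b < 0:                            # failure level reached
            return pf * np.mean(G < 0)
        pf *= p0
        seeds = [(X[i].copy(), G[i]) for i in order[:ns]]
        X_new, G_new = [], []
        for x, gx in seeds:                  # grow one chain per seed
            for _ in range(n // ns):
                x, gx = mmh_step(x, gx, g, b, sigma, rng)
                X_new.append(x); G_new.append(gx)
        X, G = np.array(X_new), np.array(G_new)
    return pf * np.mean(G < 0)

g = lambda x: 3.5 - x.sum() / np.sqrt(x.size)   # exact Pf = Phi(-3.5) ≈ 2.3e-4
print(subset_simulation(g, dim=100))
```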

14.
A Decision Tree (DT) approach to building empirical models for use in Monte Carlo reliability evaluation is presented. The main idea is to develop an estimation algorithm by training a model on a restricted data set and replacing the Evaluation Function (EF) with a simpler calculation that provides reasonably accurate model outputs. The proposed approach is illustrated with two systems of different size, represented by their equivalent networks. The robustness of the DT approach as an approximate method to replace the EF is also analysed. Excellent system reliability results are obtained by training a DT with a small amount of information.
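A minimal sketch of the idea using scikit-learn (the paper's DT implementation and network systems are not reproduced; the 5-component series system below is a hypothetical stand-in for the expensive EF):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dt_reliability(evaluate, sampler, n_train=500, n_mc=200_000, seed=0):
    """Replace an expensive Evaluation Function (EF) by a Decision Tree
    trained on a small labelled sample, then run a large Monte Carlo
    using the cheap DT predictions instead of the EF."""
    rng = np.random.default_rng(seed)
    X_train = sampler(rng, n_train)
    y_train = np.array([evaluate(x) for x in X_train])   # expensive EF calls
    tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
    X_mc = sampler(rng, n_mc)
    return tree.predict(X_mc).mean()                     # estimated unreliability

# Hypothetical 5-component series system, component failure prob 0.05:
sampler = lambda rng, n: (rng.random((n, 5)) < 0.05).astype(int)  # 1 = failed
evaluate = lambda state: int(state.any())                          # system fails
print(dt_reliability(evaluate, sampler))    # exact: 1 - 0.95**5 ≈ 0.226
```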

15.
A hybrid Subset Simulation approach is proposed for reliability estimation of general dynamical systems subject to stochastic excitation. This new stochastic simulation approach combines the advantages of the two previously proposed Subset Simulation methods: Subset Simulation with the Markov Chain Monte Carlo (MCMC) algorithm and Subset Simulation with splitting. The new method employs the MCMC algorithm before an intermediate failure level is reached and splitting afterwards, to exploit the causality of dynamical systems. The statistical properties of the failure probability estimators are derived. Two examples are presented to demonstrate the effectiveness of the new approach and to compare it with the two previous Subset Simulation methods. The results show that the new method is robust to the choice of proposal distribution for the MCMC algorithm and to the intermediate failure events selected for Subset Simulation.

16.
Scale-up of hot-wire CVD reactors for commercial production of a-Si:H based solar cells requires understanding of the large-area deposition process. The process was therefore simulated using the Direct Simulation Monte Carlo method (G.A. Bird, Clarendon Press, Oxford (1994)), considering reactions at the filaments, in the gas phase and at the substrate, and in particular modeling large-area deposition through the gas shower and the filament grid, which were found to determine the uniformity and quality of the a-Si:H films (Thin Solid Films 395 (2001) 61; Solar Energy Mater. Solar Cells 73 (2002) 321). The distance between the filament grid and the substrate (dfil–S) and the distance between the filaments (dfil) were systematically varied, and the simulation results were compared to experimental results obtained in our large-area deposition system (Thin Solid Films 395 (2001) 61; Solar Energy Mater. Solar Cells 73 (2002) 321). The experimentally obtained optimum filament-to-substrate distance was supported by an optimum in the simulated Si2H4 concentration. For other species the existence of an optimum was confirmed, but a definite value for the optimum dfil–S could not be concluded. The simulations also confirmed the influence of the filament geometry on the uniformity observed in the experiments.

17.
Efficient maintenance policies are of fundamental importance in system engineering because of their impact on the safety and economics of plant operation. When the condition of a system, such as its degradation level, can be continuously monitored, a Condition-Based Maintenance (CBM) policy can be implemented, according to which the decision to maintain the system is taken dynamically on the basis of its observed condition.

In this paper, we consider a continuously monitored multi-component system and use a Genetic Algorithm (GA) to determine the optimal degradation level beyond which preventive maintenance has to be performed. The problem is framed as a multi-objective search aiming at simultaneously optimizing two typical objectives of interest, profit and availability. For closer adherence to reality, the predictive model describing the evolution of the degrading system is based on Monte Carlo (MC) simulation. More precisely, the flexibility offered by the simulation scheme is exploited to model the dynamics of a stress-dependent degradation process in load-sharing components and to account for limitations in the number of available maintenance technicians. The coupled (GA + MC) approach is rendered particularly efficient by the ‘drop-by-drop’ technique, previously introduced by some of the authors, which effectively drives the combinatorial search towards the most promising solutions.
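A much-simplified, single-objective sketch of the GA + MC coupling (the paper optimizes profit and availability for a multi-component load-sharing system; here a scalar cost rate for a single hypothetical degrading unit stands in, and the ‘drop-by-drop’ device is omitted):

```python
import numpy as np

D_FAIL, C_PM, C_FAIL, T_PM, T_FAIL = 10.0, 1.0, 10.0, 1.0, 5.0  # hypothetical

def cost_rate(d_pm, n_cycles=1000, seed=0):
    """MC estimate of the long-run cost rate under a condition-based policy:
    degradation accrues in random jumps, observed once per time step; repair
    is preventive if the level is caught in [d_pm, D_FAIL), corrective if a
    jump overshoots D_FAIL first."""
    rng = np.random.default_rng(seed)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        level, t = 0.0, 0
        while level < d_pm:
            level += rng.exponential(1.0)        # random degradation jump
            t += 1
        if level >= D_FAIL:
            total_cost += C_FAIL; total_time += t + T_FAIL
        else:
            total_cost += C_PM; total_time += t + T_PM
    return total_cost / total_time

def evolve(n_pop=20, n_gen=20, seed=1):
    """Minimal evolutionary search (truncation selection + Gaussian
    mutation) over the preventive threshold; fitness is the MC cost rate."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(1.0, D_FAIL, n_pop)
    for gen in range(n_gen):
        fit = np.array([cost_rate(d, seed=gen) for d in pop])
        parents = pop[np.argsort(fit)[: n_pop // 2]]        # keep the best half
        children = parents + 0.3 * rng.standard_normal(parents.size)
        pop = np.clip(np.concatenate([parents, children]), 0.5, D_FAIL)
    return pop[np.argmin([cost_rate(d) for d in pop])]

print("best PM threshold ≈", evolve())
```

Too low a threshold wastes money on needless maintenance, too high a threshold risks expensive run-to-failure repairs, so the search settles on an interior optimum.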

18.
19.
Based on type-2 censored samples, the maximum likelihood, uniformly minimum variance unbiased, Bayes and empirical Bayes estimators of one of the two shape parameters (k) and of the reliability function R(t) of the Burr type XII failure model are computed and compared. Computations show that when the censoring size is r = 10, the EBEs of k and R(t), t = 0.9, are better than the corresponding UMVUEs, in the sense of having smaller estimated risks, for as few as m* = 7 past samples for k and m* = 11 past samples for R(0.9), when the gamma conjugate prior is used.
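For reference, a hedged recap of the quantities involved, under the common assumption that the other shape parameter c is known: since Y = ln(1 + T^c) is exponential with rate k for a Burr XII lifetime T, the type-2 censored MLE of k takes the usual total-time-on-test form.

```latex
\[
  R(t) = \left(1 + t^{c}\right)^{-k}, \qquad
  \hat{k}_{\mathrm{MLE}}
  = \frac{r}{\sum_{i=1}^{r} \ln\!\bigl(1 + t_{(i)}^{c}\bigr)
          + (n - r)\,\ln\!\bigl(1 + t_{(r)}^{c}\bigr)}
\]
```

where t_(1) ≤ … ≤ t_(r) are the r observed failure times out of n units on test.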

20.
In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or ‘wear out’ failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values <1, 1, and >1 respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes.

We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1–2% can be obtained using typically 1000 simulations.

The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic lifetime η.
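A minimal sketch of this pipeline (hypothetical physics-of-failure model; Weibull fit by median-rank regression rather than any particular method the authors may have used):

```python
import numpy as np

def weibull_fit(times):
    """Fit a 2-parameter Weibull to times-to-failure by median-rank
    regression: ln(-ln(1-F)) is linear in ln(t) with slope beta and
    intercept -beta*ln(eta)."""
    t = np.sort(np.asarray(times))
    n = t.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    y, x = np.log(-np.log(1.0 - F)), np.log(t)
    beta, c = np.polyfit(x, y, 1)
    eta = np.exp(-c / beta)
    return beta, eta

# Hypothetical physics-of-failure model: life = A / stress^m, with the
# uncertain parameters A (lognormal) and stress (normal) sampled by MC.
rng = np.random.default_rng(0)
A = rng.lognormal(mean=8.0, sigma=0.5, size=1000)
stress = rng.normal(50.0, 5.0, size=1000)
life = A / stress**1.5
print(weibull_fit(life))    # (beta, eta)
```

Narrowing the input dispersions sharpens the lifetime distribution and drives the fitted β upward, which is exactly the sensitivity discussed in the examples below.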

Examples are given of the results obtained using three different models: (1) the Eyring–Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β>1, characteristic of wear-out behaviour, to β<1, characteristic of early-life failure, depending on the degree of dispersion of the uncertain parameters. If there is no uncertainty, a single, sharp value of the component lifetime is predicted, corresponding to the limit β=∞. In contrast, the shock-loading model is inherently random, and its predictions correspond closely to those of a constant hazard rate model, characterized by a value of β close to 1 for all finite degrees of parameter uncertainty.

The results are discussed in the context of traditional methods for reliability analysis and conventional views on the nature of early-life failures.

