Similar Literature (20 results)
1.
In RBDO, input uncertainty models such as marginal and joint cumulative distribution functions (CDFs) need to be used. However, only limited data are available in industrial applications, so identification of the input uncertainty model is challenging, especially when the input variables are correlated. Since input random variables such as fatigue material properties are correlated in many industrial problems, the joint CDF of the correlated input variables needs to be correctly identified from the given data. In this paper, a Bayesian method is proposed to identify the marginal and joint CDFs from given data, where a copula, which requires only marginal CDFs and correlation parameters, is used to model the joint CDF of the input variables. Using simulated data sets, the performance of the Bayesian method is tested for different numbers of samples and is compared with the goodness-of-fit (GOF) test. Two examples demonstrate how the Bayesian method is used to identify the correct marginal CDFs and copula.
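A minimal sketch of the two-step identification this abstract describes, fitting marginal CDFs by maximum likelihood and a Gaussian-copula correlation from Kendall's tau. The data, candidate distribution families, and SciPy-based implementation are illustrative assumptions, not the paper's Bayesian procedure, which instead compares candidate marginals and copulas through their posterior probabilities given the data:

```python
import numpy as np
from scipy import stats

# Illustrative paired samples of two correlated inputs (e.g., fatigue properties).
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=200)
x1 = stats.lognorm(s=0.2, scale=np.exp(5.0)).ppf(stats.norm.cdf(z[:, 0]))
x2 = stats.weibull_min(c=2.0, scale=3.0).ppf(stats.norm.cdf(z[:, 1]))

# Step 1: fit candidate marginal CDFs by maximum likelihood.
m1 = stats.lognorm(*stats.lognorm.fit(x1, floc=0))
m2 = stats.weibull_min(*stats.weibull_min.fit(x2, floc=0))

# Step 2: estimate the Gaussian-copula correlation from Kendall's tau
# (rho = sin(pi * tau / 2) for the Gaussian copula).
tau, _ = stats.kendalltau(x1, x2)
rho = np.sin(np.pi * tau / 2.0)

# Joint CDF of the fitted model at an illustrative point (x1, x2).
point = (150.0, 3.0)
u_pt = stats.norm.ppf([m1.cdf(point[0]), m2.cdf(point[1])])
joint_cdf = stats.multivariate_normal([0, 0], [[1, rho], [rho, 1]]).cdf(u_pt)
print(f"copula correlation: {rho:.3f}, joint CDF at {point}: {joint_cdf:.3f}")
```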

2.
Zhao Lin-Feng, Zhou Zai-Fa, Song Yi-Qun, Meng Mu-Zi, Huang Qing-An. Microsystem Technologies, 2020, 26(5): 1689–1696

Concern about process deviations is growing because the performance uncertainty they cause is amplified by the miniaturization and increasing complexity of microelectromechanical system (MEMS) devices. The Monte Carlo method is widely used to predict the statistical behavior of devices, but it suffers from low efficiency. The recently developed generalized polynomial chaos expansion method, though highly efficient, cannot handle uncertainty quantification problems with correlated deviations, which are common in MEMS applications. In this paper, a polynomial chaos method based on a Gaussian mixture model (GMM) and the Nataf transformation is proposed. The distribution of the correlated process deviations is estimated using a GMM, and a modified Nataf transformation is applied to convert the correlated random vectors of the GMM into mutually independent ones. Polynomial chaos expansion and stochastic collocation can then be applied. The effectiveness of the proposed method is demonstrated by simulation results for a V-beam thermal actuator; its computation is faster than the Monte Carlo technique without loss of accuracy. The method can serve as an efficient analysis technique for MEMS devices that are sensitive to correlated process deviations.
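A rough sketch of the first two steps under simplifying assumptions: scikit-learn's GaussianMixture for the distribution estimate, and only the single-Gaussian case of the Nataf transformation, for which it reduces to Cholesky decorrelation. The paper's modified Nataf transformation across mixture components is not reproduced here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative correlated process deviations (e.g., beam width and thickness errors).
rng = np.random.default_rng(1)
cov = np.array([[0.04, 0.018], [0.018, 0.02]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=500)

# Step 1: estimate the joint distribution of the deviations with a GMM.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(samples)

# Step 2 (simplified): for a single Gaussian component, the Nataf transformation
# reduces to decorrelation by the inverse Cholesky factor, giving (approximately)
# independent standard normal variables on which a polynomial chaos expansion
# with stochastic collocation can be built.
mean_k, cov_k = gmm.means_[0], gmm.covariances_[0]
L = np.linalg.cholesky(cov_k)
u = np.linalg.solve(L, (samples - mean_k).T).T

print("correlation after transformation:\n", np.corrcoef(u.T))
```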


3.
For obtaining a correct reliability-based optimum design, the input statistical model, which includes marginal and joint distributions of input random variables, needs to be accurately estimated. However, in most engineering applications, only limited data on input variables are available due to expensive testing costs. The input statistical model estimated from the insufficient data will be inaccurate, which leads to an unreliable optimum design. In this paper, reliability-based design optimization (RBDO) with the confidence level for input normal random variables is proposed to offset the inaccurate estimation of the input statistical model by using adjusted standard deviation and correlation coefficient that include the effect of inaccurate estimation of mean, standard deviation, and correlation coefficient.

4.
Reliability-Based Design Optimization (RBDO) algorithms, such as Reliability Index Approach (RIA) and Performance Measure Approach (PMA), have been developed to solve engineering optimization problems under design uncertainties. In some existing methods, the random design space is transformed to standard normal design space and the reliability assessment, such as reliability index from RIA or performance measure from PMA, is estimated in order to evaluate the failure probability. When the random variable is arbitrarily distributed and cannot be properly fitted to any known form of probability density function, the existing RBDO methods cannot perform reliability analysis in the original design space. This paper proposes a novel Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) to evaluate the failure probability of any arbitrarily distributed random variables in the original design space. The arbitrary distribution of the random variable is approximated by a merger of multiple Gaussian kernel functions in a single-variate coordinate that is directed toward the gradient of the constraint function. The failure probability is then estimated using the ensemble of each kernel reliability analysis. This paper further derives a linearly approximated probabilistic constraint at the design point with allowable reliability level in the original design space using the aforementioned fundamentals and techniques. Numerical examples with generated random distributions show that existing RBDO algorithms can improperly approximate the uncertainties as Gaussian distributions and provide solutions with poor assessments of reliabilities. On the other hand, the numerical results show EGTRA is capable of efficiently solving the RBDO problems with arbitrarily distributed uncertainties.

5.
Nonlinear transformation is one of the major obstacles to analyzing the properties of multilayer perceptrons. In this letter, we prove that the correlation coefficient between two jointly Gaussian random variables decreases when each of them is transformed under continuous nonlinear transformations, which can be approximated by piecewise linear functions. When the inputs or the weights of a multilayer perceptron are perturbed randomly, the weighted sums to the hidden neurons are asymptotically jointly Gaussian random variables. Since sigmoidal transformation can be approximated piecewise linearly, the correlations among the weighted sums decrease under sigmoidal transformations. Based on this result, we can say that sigmoidal transformation used as the transfer function of the multilayer perceptron reduces redundancy in the information contents of the hidden neurons.
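A quick numerical illustration of the stated result, using tanh as the piecewise-linearly approximable sigmoidal transformation; the sample size and correlation value are arbitrary:

```python
import numpy as np

# Numerical check: the correlation of jointly Gaussian variables shrinks in
# magnitude after a sigmoidal (tanh) transformation of each variable.
rng = np.random.default_rng(2)
rho = 0.8
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)

rho_before = np.corrcoef(z[:, 0], z[:, 1])[0, 1]
rho_after = np.corrcoef(np.tanh(z[:, 0]), np.tanh(z[:, 1]))[0, 1]
print(f"before: {rho_before:.3f}, after tanh: {rho_after:.3f}")  # after < before
```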

6.
This study presents a methodology to convert an RBDO problem requiring very high reliability to an RBDO problem requiring relatively low reliability by appropriately increasing the input standard deviations for efficient computation in sampling-based RBDO. First, for linear performance functions with independent normal random inputs, an exact probability of failure is derived in terms of the ratio of the input standard deviation, which is denoted by $\boldsymbol{\delta}$. Then, the probability of failure estimation is generalized for other types of random inputs and performance functions. For the generalization of the probability of failure estimation, two types of coefficients need to be determined by equating the probability of failure and its sensitivities with respect to the input standard deviation at the given design point. The sensitivities of the probability of failure with respect to the standard deviation are obtained using the first-order score function for the standard deviation. To apply the proposed method to an RBDO problem, a concept of an equivalent target probability of failure, which is an increased target probability of failure corresponding to the increased input standard deviations, is also introduced. Numerical results indicate that the proposed method can estimate the probability of failure accurately as a function of the input standard deviation compared to the Monte Carlo simulation results. As anticipated, the sampling-based RBDO using equivalent target probability of failure helps find the optimum design very efficiently while yielding reasonably accurate optimum design, which is close to the one obtained using the original target probability of failure.
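For the linear, independent-normal case mentioned first, the exact probability of failure as a function of the standard-deviation ratio can be written down directly. A hedged sketch, assuming failure is defined as g(x) < 0; the sign convention and all numbers are illustrative:

```python
import numpy as np
from scipy import stats

# Linear performance function g(x) = a @ x + b with independent normal inputs;
# failure is taken here as g(x) < 0 (an assumption, not the paper's convention).
a, b = np.array([2.0, -1.0]), 3.0
mu, sigma = np.array([1.0, 2.0]), np.array([0.3, 0.4])

def pf_exact(delta):
    # Scaling every input standard deviation by delta scales the reliability
    # index beta by 1/delta, so Pf(delta) = Phi(-beta / delta).
    beta = (a @ mu + b) / np.sqrt(np.sum((a * sigma) ** 2))
    return stats.norm.cdf(-beta / delta)

def pf_mc(delta, n=2_000_000, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, delta * sigma, size=(n, 2))
    return np.mean(x @ a + b < 0.0)

for delta in (1.0, 1.5, 2.0):
    print(f"delta={delta}: exact={pf_exact(delta):.2e}, MC={pf_mc(delta):.2e}")
```

Increasing delta moves the target probability of failure up accordingly, which is the idea behind the equivalent target probability of failure.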

7.
Reliability-based design optimization (RBDO) requires evaluation of sensitivities of probabilistic constraints. To develop RBDO utilizing the recently proposed novel second-order reliability method (SORM) that improves conventional SORM approaches in terms of accuracy, the sensitivities of the probabilistic constraints at the most probable point (MPP) are required. Thus, this study presents sensitivity analysis of the novel SORM at MPP for more accurate RBDO. During analytic derivation in this study, it is assumed that the Hessian matrix does not change due to the small change of design variables. The calculation of the sensitivity based on the analytic derivation requires evaluation of the probability density function (PDF) of a linear combination of non-central chi-square variables, which is obtained by utilizing the general chi-squared distribution. In terms of accuracy, the proposed probabilistic sensitivity analysis is compared with the finite difference method (FDM) using the Monte Carlo simulation (MCS) through numerical examples. The numerical examples demonstrate that the analytic sensitivity of the novel SORM agrees very well with the sensitivity obtained by FDM using MCS when a performance function is quadratic in U-space and input variables are normally distributed. It is further shown that the proposed sensitivity is accurate enough compared with FDM results even for a higher order performance function.

8.
In most industrial applications, only limited statistical information is available to describe the input uncertainty model due to expensive experimental testing costs. It would be unreliable to use the estimated input uncertainty model obtained from insufficient data for the design optimization. Furthermore, when input variables are correlated, we would obtain non-optimum design if we assume that they are independent. In this paper, two methods for problems with a lack of input statistical information—possibility-based design optimization (PBDO) and reliability-based design optimization (RBDO) with confidence level on the input model—are compared using mathematical examples and an Abrams M1A1 tank roadarm example. The comparison study shows that PBDO could provide an unreliable optimum design when the number of samples is very small. In addition, PBDO provides an optimum design that is too conservative when the number of samples is relatively large. Furthermore, the obtained PBDO designs do not converge to the optimum design obtained using the true input distribution as the number of samples increases. On the other hand, RBDO with confidence level on the input model provides a conservative and reliable optimum design in a stable manner. The obtained RBDO designs converge to the optimum design obtained using the true input distribution as the number of samples increases.

9.
This paper presents a sampling-based RBDO method using surrogate models. The Dynamic Kriging (D-Kriging) method is used for surrogate models, and a stochastic sensitivity analysis is introduced to compute the sensitivities of probabilistic constraints with respect to independent or correlated random variables. For the sampling-based RBDO, which requires Monte Carlo simulation (MCS) to evaluate the probabilistic constraints and stochastic sensitivities, this paper proposes new efficiency and accuracy strategies such as a hyper-spherical local window for surrogate model generation, sample reuse, local window enlargement, filtering of constraints, and an adaptive initial point for the pattern search. To further improve computational efficiency of the sampling-based RBDO method for large-scale engineering problems, parallel computing is proposed as well. Once the D-Kriging accurately approximates the responses, there is no further approximation in the estimation of the probabilistic constraints and stochastic sensitivities, and thus the sampling-based RBDO can yield very accurate optimum design. In addition, newly proposed efficiency strategies as well as parallel computing help find the optimum design very efficiently. Numerical examples verify that the proposed sampling-based RBDO can find the optimum design more accurately than some existing methods. Also, the proposed method can find the optimum design more efficiently than some existing methods for low dimensional problems, and as efficient as some existing methods for high dimensional problems when the parallel computing is utilized.
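A simplified sketch of the sampling-based evaluation of one probabilistic constraint: an ordinary Kriging surrogate (scikit-learn's GaussianProcessRegressor, not the paper's Dynamic Kriging) stands in for the performance function, and MCS on the surrogate estimates the probability of failure. The performance function, input model, and failure convention are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g_true(x):                                    # illustrative performance function
    return 6.0 - x[:, 0] ** 3 - x[:, 1]

rng = np.random.default_rng(4)
x_train = rng.uniform(0.0, 3.0, size=(40, 2))     # design-of-experiment samples
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(x_train, g_true(x_train))                  # Kriging surrogate of g

x_mcs = rng.normal([1.2, 1.0], [0.3, 0.3], size=(100_000, 2))   # input model
pf = np.mean(gp.predict(x_mcs) < 0.0)             # failure taken as g < 0
print(f"estimated probability of failure: {pf:.4f}")
```

Once the surrogate is accurate, the MCS step introduces no further approximation beyond sampling noise, which is the point made in the abstract.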

10.
Interactive multidimensional visualisation based on parallel coordinates has been studied previously as a tool for process historical data analysis. Here attention is given to improvement of the technique by the introduction of dimension reduction and upper and lower limits for separating abnormal data to the plots of coordinates. Dimension reduction using independent component analysis transforms the original variables to a smaller number of latent variables which are statistically independent to each other. This enables the visualisation technique to handle a large number of variables more effectively, particularly when the original variables have recycling and interacting correlations and dependencies. Statistical independence between the parallel coordinates also makes it possible to calculate upper and lower limits (UL and LL) for each coordinate separating abnormal data from normal. Calculation of the UL and LL limits requires each coordinate to satisfy Gaussian distribution. In this work a method called the Box–Cox transformation is proposed to transform the non-Gaussian coordinate to a Gaussian distribution before the UL and LL limits are calculated.
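A compact sketch of the pipeline described here, assuming scikit-learn's FastICA and SciPy's Box–Cox transform, with 3-sigma limits standing in for the UL/LL calculation; the data, shift constant, and limit choice are illustrative:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FastICA

# Latent non-Gaussian sources mixed into correlated observed process variables.
rng = np.random.default_rng(5)
sources = rng.exponential(1.0, size=(2000, 3))
x = sources @ rng.uniform(0.5, 1.5, size=(3, 6))

# Dimension reduction to statistically independent coordinates.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(x)

# Gaussianize each coordinate with Box-Cox, then set UL/LL control limits.
for j in range(components.shape[1]):
    c = components[:, j]
    c_bc, lam = stats.boxcox(c - c.min() + 1e-3)   # Box-Cox needs positive data
    ul, ll = c_bc.mean() + 3 * c_bc.std(), c_bc.mean() - 3 * c_bc.std()
    print(f"coordinate {j}: lambda={lam:.2f}, LL={ll:.2f}, UL={ul:.2f}")
```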

11.
Hau-San, Kent K.T., Horace H.S. Pattern Recognition, 2004, 37(12): 2307–2322
Classification of 3D head models based on their shape attributes for subsequent indexing and retrieval is important in many applications, as in hierarchical content-based retrieval of these head models for virtual scene composition, and the automatic annotation of these characters in such scenes. While simple feature representations are preferred for more efficient classification operations, these features may not be adequate for distinguishing between the subtly different head model classes. In view of this, we propose an optimization approach based on a genetic algorithm (GA) where the original model representation is transformed in such a way that the classification rate is significantly enhanced while retaining the efficiency and simplicity of the original representation. Specifically, based on the Extended Gaussian Image (EGI) representation for 3D models which summarizes the surface normal orientation statistics, we consider these orientations as random variables, and proceed to search for an optimal transformation for these variables based on genetic optimization. The resulting transformed distributions for these random variables are then used as the modified classifier inputs. Experiments have shown that the optimized transformation results in a significant improvement in classification results for a large variety of class structures. More importantly, the transformation can be indirectly realized by bin removal and bin count merging in the original histogram, thus retaining the advantage of the original EGI representation.

12.
Arguments have been advanced to support the role of principal components (e.g., Karhunen-Loève, eigenvector) and independent components transformations in early sensory processing, particularly for color and spatial vision. Although the concept of redundancy reduction has been used to justify a principal components transformation, these transformations per se do not necessarily confer benefits with respect to information transmission in information channels with additive independent identically distributed Gaussian noise. Here, it is shown that when a more realistic source of multiplicative neural noise is present in the information channel, there are quantitative benefits to a principal components or independent components representation for Gaussian and non-Gaussian inputs, respectively. Such a representation can convey a larger quantity of information despite the use of fewer spikes. The nature and extent of this benefit depend primarily on the probability distribution of the inputs and the relative power of the inputs. In the case of Gaussian input, the greater the disparity in power between dimensions, the greater the advantage of a principal components representation. For non-Gaussian input distributions with a kurtosis that is super-Gaussian, an independent components representation is similarly advantageous. This advantage holds even for input distributions with equal power since the resulting density is still rotationally asymmetric. However, sub-Gaussian input distributions can lead to situations where maximally correlated inputs are the most advantageous with respect to transmitting the greatest quantity of information with the fewest number of spikes.

13.
Based on Random Set Theory, procedures are presented for bracketing the results of Monte Carlo simulations in two notable cases: (i) the calculation of the entire distribution of the dependent variable; (ii) the calculation of the CDF of a particular value of the dependent variable (e.g. reliability analyses). The presented procedures are not intrusive in that they can be equally applied when the functional relationship between the dependent variable and independent variables is known analytically and when it is a complex computer model (black box). Also, the proposed procedures can handle probabilistic (with any type of input joint PDF), interval-valued, set-valued, and random set-valued input information, as well as any combination thereof.

When exact or outer bounds on the function image can be calculated, the bounds on the CDF of the dependent variable guarantee 100% confidence, and allow for an explicit and exact evaluation of the error involved in the calculation of the CDF. These bounds are often enough to make decisions, and require a minimal amount of functional evaluations. A procedure for effectively approximating the CDF of the dependent variable is also proposed.

An example shows that, compared to Monte Carlo simulations, the number of functional evaluations is reduced by orders of magnitude and that the convergence rate increases tenfold.

14.
To estimate and track the positions of targets on the ground, this paper proposes a theory for modeling the relationship between a camera's image-plane view and position estimates on the ground plane. First, it analyzes how the random variables (position estimates on the camera image plane and on the ground) transform under a projective transformation, and shows that when certain geometric conditions are satisfied, the projective transformation maps a normal distribution to a normal distribution. Second, the unscented transform is used to compute the moments of the transformed random variables. Finally, using the modeled correlations, a minimum-variance estimator is designed for combining position estimates from multiple cameras and is applied to tracking multiple targets in a dynamic ground environment. Experimental results show that the proposed model not only produces good combined position estimates, but also that the minimum-variance estimator obtained from it can effectively represent and track ground targets.
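A sketch of the moment-propagation step via the unscented transform: sigma points of a Gaussian image-plane estimate are pushed through an assumed homography to the ground plane, and the transformed mean and covariance are recovered. The homography, the basic sigma-point weighting with parameter kappa, and the measurement statistics are illustrative assumptions:

```python
import numpy as np

# Illustrative image-to-ground homography (projective transformation).
H = np.array([[0.9, 0.1, 5.0],
              [0.05, 1.1, 2.0],
              [1e-3, 2e-3, 1.0]])

def project(p):                                   # image point -> ground point
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def unscented_transform(mean, cov, kappa=1.0):
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([project(p) for p in pts])       # push sigma points through H
    mean_y = w @ y
    cov_y = (w[:, None] * (y - mean_y)).T @ (y - mean_y)
    return mean_y, cov_y

mu_img = np.array([320.0, 240.0])                 # image-plane position estimate
cov_img = np.diag([4.0, 4.0])                     # its uncertainty (pixels^2)
print(unscented_transform(mu_img, cov_img))
```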

15.
The inverse Gaussian distribution is a useful distribution with important applications, but sampling from it has received relatively little attention in the literature. The method given in [Atkinson, A.C., 1982. The simulation of generalized inverse Gaussian and hyperbolic random variables. SIAM Journal on Scientific and Statistical Computing 3(4), 502-515] is a rejection method in which some (uniform) random numbers from the sample are discarded. This feature makes it difficult to take advantage of low-discrepancy sequences, which have important applications. In [Michael, J., Schucany, W., Haas, R., 1976. Generating random variates using transformations with multiple roots. The American Statistician 30(2), 88-90], Michael et al. give a method to generate random variables with the inverse Gaussian distribution. In their method, two pseudorandom numbers uniformly distributed on (0, 1) are needed to generate one inverse Gaussian random variable. In this short paper, we present a new method, based on direct approximate inversion, to generate inverse Gaussian random variables. In this method, only one pseudorandom number is needed to generate one inverse Gaussian random variate. This enables us to exploit the better convergence of low-discrepancy sequences compared with pseudorandom sequences. Numerical results show the superiority of the low-discrepancy sequence over the pseudorandom sequence in simulating the mean of the inverse Gaussian distribution using our sampling method. Further application of this method to exotic option pricing under the normal inverse Gaussian model is under investigation.
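For reference, a sketch of the two-uniform generator of Michael, Schucany and Haas (1976) that the abstract contrasts against; the paper's own one-uniform direct approximate inversion is not reproduced here, and the parameter values below are illustrative:

```python
import numpy as np

# Michael-Schucany-Haas generator for IG(mu, lambda): one uniform is consumed
# through the normal draw and a second one selects the root.
def inverse_gaussian(mu, lam, size, rng):
    nu = rng.standard_normal(size)
    y = nu ** 2
    x = mu + mu ** 2 * y / (2 * lam) - (mu / (2 * lam)) * np.sqrt(4 * mu * lam * y + mu ** 2 * y ** 2)
    u = rng.uniform(size=size)
    return np.where(u <= mu / (mu + x), x, mu ** 2 / x)

rng = np.random.default_rng(6)
samples = inverse_gaussian(mu=2.0, lam=3.0, size=1_000_000, rng=rng)
print(samples.mean())   # should be close to mu = 2.0
```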

16.
王丽芳, 曾建潮, 洪毅. 控制与决策 (Control and Decision), 2011, 26(9): 1333–1337
Copula theory is introduced into the study of estimation of distribution algorithms, and the probabilistic model is estimated in two steps: (1) estimate the marginal distribution function of each variable; (2) construct an empirical copula or a normal (Gaussian) copula. Sampling is then performed from the copula function together with the marginal distributions, which reduces the computational complexity of model estimation while fully capturing the relationships among the variables. Simulation experiments verify the feasibility and effectiveness of the algorithm.
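A minimal sketch of the estimate-then-sample step of such a copula-based estimation of distribution algorithm, using empirical marginals and a Gaussian (normal) copula; the function and variable names are illustrative:

```python
import numpy as np
from scipy import stats

def sample_from_copula_model(elite, n_new, rng):
    """Fit empirical marginals + Gaussian copula to elite individuals, then sample."""
    n, d = elite.shape
    # Marginals: empirical CDF values of the elite population.
    u = (stats.rankdata(elite, axis=0) - 0.5) / n
    # Gaussian copula: correlation of the normal scores.
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    # Sample correlated normal scores, map back through the empirical quantiles.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack([np.quantile(elite[:, j], u_new[:, j]) for j in range(d)])

rng = np.random.default_rng(7)
elite = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=50)
offspring = sample_from_copula_model(elite, n_new=100, rng=rng)
print(offspring.shape)   # (100, 2)
```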

17.
This study developed a reliability-based design optimization (RBDO) algorithm focusing on the ability of solving problems with nonlinear constraints or system reliability. In this case, a sampling technique is often adopted to evaluate the reliability analyses. However, simulation with an insufficient sample size often possesses statistical randomness resulting in an inaccurate sensitivity calculation. This may cause an unstable RBDO solution. The proposed approach used a set of deterministic variables, called auxiliary design points, to replace the random parameters. Thus, an RBDO is converted into a deterministic optimization (DO, α-problem). The DO and the analysis of finding the auxiliary design points (β-problem) are conducted iteratively until the solution converges. To maintain the stability of the RBDO solution with less computational cost, the proposed approach calculated the sensitivity of reliability (in the β-problem) with respect to the mean value of the pseudo-random parameters rather than the design variables. The stability of the proposed method was compared to that of the double-loop approach, and many factors, such as sample size, starting point and the parameters used in the optimization, were considered. The accuracy of the proposed method was confirmed using Monte Carlo simulation (MCS) with several linear and nonlinear numerical problems.

18.
Whereas optimal prediction of Gaussian sequences requires only a linear filter with consistently identifiable parameters driven by Gaussian white noise, the optimal predictor of non-Gaussian sequences is a nonlinear filter driven by an independent noise input. Since the latter cannot be identified directly without prior knowledge of the nonlinearity, the optimal linear predictor is usually identified instead, with a non-Gaussian white noise input; it is fully optimal only when that input turns out to be independent in all moments. However, if the non-Gaussian sequence is the outcome of a Gaussian sequence passed through a zero-memory nonlinearity or through nonlinear measurement elements, the non-Gaussian sequence can be transformed into a Gaussian one, so that optimal nonlinear prediction may be approximated to any required degree, as the analysis of the present work shows. Furthermore, the parameters of that predictor may be consistently identified in the absence of any parameter information.

19.
If the statistical data for the input uncertainties are sufficient to construct the distribution function, the input uncertainties can be treated as random variables to use the reliability-based design optimization (RBDO) method; otherwise, the input uncertainties can be treated as fuzzy variables to use the possibility-based design optimization (PBDO) method. However, many structural design problems include both input uncertainties with sufficient and insufficient data. This paper proposes a new mixed-variable design optimization (MVDO) method using the performance measure approach (PMA) for such design problems. For the inverse analysis, this paper proposes a new most probable/possible point (MPPP) search method called maximal failure search (MFS), which is an integration of the enhanced hybrid mean value method (HMV+) and maximal possibility search (MPS) method. This paper also improves the HMV+ method using an angle-based interpolation. Mathematical and physical examples are used to demonstrate the proposed inverse analysis method and MVDO method.

20.
Reliability analysis and reliability-based design optimization (RBDO) require an exact input probabilistic model to obtain accurate probability of failure (PoF) and RBDO optimum design. However, often only limited input data is available to generate the input probabilistic model in practical engineering problems. The insufficient input data induces uncertainty in the input probabilistic model, and this uncertainty forces the PoF to be uncertain. Therefore, it is necessary to consider the PoF to follow a probability distribution. In this paper, the probability of the PoF is obtained with consecutive conditional probabilities of input distribution types and parameters using the Bayesian approach. The approximate conditional probabilities are obtained under reasonable assumptions, and Monte Carlo simulation is applied to calculate the probability of the PoF. The probability of the PoF at a user-specified target PoF is defined as the conservativeness level of the PoF. The conservativeness level, in addition to the target PoF, will be used as a probabilistic constraint in an RBDO process to obtain a conservative optimum design, for limited input data. Thus, the design sensitivity of the conservativeness level is derived to support an efficient optimization process. Using numerical examples, it is demonstrated that the conservativeness level should be involved in RBDO when input data is limited. The accuracy and efficiency of the proposed design sensitivity method is verified. Finally, conservative RBDO optimum designs are obtained using the developed methods for limited input data problems.
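A toy illustration of why the PoF becomes a random quantity under limited data: an outer loop samples plausible input-model parameters (here by simple bootstrap, standing in for the paper's Bayesian conditional probabilities over distribution types and parameters) and an inner loop computes the PoF for each; an upper quantile of the resulting PoF distribution gives a conservative estimate at a chosen conservativeness level. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(10.0, 1.0, size=20)             # limited input data (n = 20)

def pof(mu, sigma, n_mc=100_000):
    x = rng.normal(mu, sigma, size=n_mc)
    return np.mean(x > 13.0)                      # failure: x exceeds a threshold

pofs = []
for _ in range(200):                              # outer loop: input-model uncertainty
    resample = rng.choice(data, size=data.size, replace=True)
    pofs.append(pof(resample.mean(), resample.std(ddof=1)))
pofs = np.array(pofs)

# Reporting the 90th percentile corresponds to a 90% conservativeness level.
print(f"median PoF {np.median(pofs):.4f}, 90% conservative PoF {np.quantile(pofs, 0.9):.4f}")
```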
