Similar Literature
20 similar references found.
1.
Non-probabilistic convex models require only the variation bounds of the parameters rather than their exact probability distributions; thus, such models can be applied to the uncertainty analysis of complex structures when experimental information is lacking. The interval model and the ellipsoidal model are the two most commonly used approaches in non-probabilistic convex modeling. However, the former can only handle independent variables, while the latter can only handle dependent variables. This paper presents a more general non-probabilistic convex model, the multidimensional parallelepiped model. This model accommodates independent and dependent uncertain variables in a unified framework and can effectively deal with complex ‘multi-source uncertainty’ problems in which dependent and independent variables coexist. For any two parameters, the concepts of the correlation angle and the correlation coefficient are defined. From the marginal intervals of all the parameters and their correlation coefficients, a multidimensional parallelepiped can easily be built as the uncertainty domain for the parameters. By introducing affine coordinates, the parallelepiped model in the original parameter space is converted into an interval model in the affine space, greatly facilitating subsequent structural uncertainty analysis. The parallelepiped model is applied to structural uncertainty propagation analysis, and the response interval of the structure is obtained for uncertain initial parameters. Finally, the method is demonstrated on several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
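As a rough illustration of the affine-coordinate idea, the sketch below builds a two-dimensional parallelepiped domain from marginal intervals and an assumed correlation coefficient. The shear-matrix construction and all numerical values are illustrative, not the authors' exact formulation.

```python
import numpy as np

# Marginal intervals [lower, upper] for two uncertain parameters,
# plus an assumed correlation coefficient between them.
intervals = np.array([[1.0, 3.0],     # parameter x1
                      [10.0, 20.0]])  # parameter x2
rho = 0.5                             # correlation coefficient in (-1, 1)

mid = intervals.mean(axis=1)                      # interval midpoints
rad = (intervals[:, 1] - intervals[:, 0]) / 2.0   # interval radii

# Shear matrix: maps the unit box [-1, 1]^2 in affine coordinates
# onto a parallelepiped whose skew encodes the correlation.
T = np.array([[1.0, rho],
              [rho, 1.0]])

# Vertices of the unit box in affine coordinates.
box = np.array([[sx, sy] for sx in (-1, 1) for sy in (-1, 1)])

# Map each affine vertex to the original parameter space:
# x = mid + diag(rad) @ T @ u
vertices = mid + (np.diag(rad) @ T @ box.T).T
print(vertices)  # corners of the parallelepiped uncertainty domain
```

Propagation can then be carried out over the unit box in affine coordinates exactly as for an ordinary interval model; with rho = 0 the domain degenerates to the interval model.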

2.
This paper proposes a low‐cost method for predicting the probabilistic high‐cycle fatigue life of Al 2024‐T3 based on continuum damage mechanics and non‐intrusive polynomial chaos (NIPC). To randomize Lemaitre's two‐scale fatigue damage model, the parameters S and s are regarded as random variables. Based on a small sample of test lives, an inverse analysis is performed to obtain samples of the two parameters. The statistical characteristics of the two parameters are then calculated analytically from the NIPC coefficients. Fatigue tests of aluminum alloy 2024‐T3 standard coupons and a plate with a hole under different spectrum loadings show that the proposed method is effective.

3.
An interval random model is introduced for the response analysis of structural‐acoustic systems that lack sufficient information to construct precise probability distributions of the uncertain parameters. In the interval random model, the uncertain parameters are treated as random variables, whereas some distribution parameters of the random variables with limited information are expressed as interval variables instead of precise values. On the basis of the interval random model, the interval random structural‐acoustic finite element equation is constructed, and an interval random perturbation method for solving this equation is proposed. In the proposed method, the interval random matrix and vector are expanded by a first‐order Taylor series, and the response vector of the structural‐acoustic system is calculated by the matrix perturbation method. According to the linear monotonicity of the response vector, its lower and upper bounds are calculated by the vertex method. From these bounds, the intervals of the expectation and variance of the response vector are obtained by the random interval moment method. Numerical results for a shell structural‐acoustic model and an automobile passenger compartment with a flexible front panel demonstrate the effectiveness and efficiency of the proposed method. Copyright © 2013 John Wiley & Sons, Ltd.
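A minimal sketch of the vertex step, assuming a response that is monotonic (here, linear) in the interval-valued distribution parameters; the function and bounds are illustrative, not from the paper.

```python
import numpy as np
from itertools import product

def expected_response(mu):
    """Illustrative first-order approximation of the expected response:
    linear (hence monotonic) in the interval distribution parameters."""
    a = np.array([2.0, -1.5, 0.5])   # assumed sensitivity coefficients
    return 10.0 + a @ mu

# Interval bounds on the distribution parameters (e.g., uncertain means).
lower = np.array([0.9, 1.8, 4.5])
upper = np.array([1.1, 2.2, 5.5])

# Vertex method: for a response monotonic in each interval parameter,
# the extrema are attained at corners of the interval box.
values = [expected_response(np.where(mask, upper, lower))
          for mask in product([False, True], repeat=3)]
print(min(values), max(values))   # interval of the expected response
```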

4.
In order to arrive at realistic results in statistical analysis, it is often advisable to model the uncertainties involved as random variables. An important aspect in this context is evaluating the importance of parameter uncertainty. Because of the complexity of computational models, the point estimate method is usually adopted as an easy‐to‐run approach for approximating the statistical moments of a system or model in a reliability analysis. The efficiency of this method depends strongly on the correlation coefficient. However, parameters in computational problems often exhibit nonlinear dependence. This paper develops an original and efficient point estimate method based on the copula approach for reliability engineering problems. The paper discusses the use of copula theory in the point estimate method for computing the statistical moments of a function of random variables. Two engineering applications demonstrate the benefits of this approach. The proposed method can significantly improve the quality of point estimate results when a nonlinear relationship exists between the parameters. Copyright © 2015 John Wiley & Sons, Ltd.
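A sketch of the general idea, combining a Gaussian copula with a deterministic point set for moment estimation; the paper's own point estimate scheme may differ, and the example function, marginals, and correlation are assumptions.

```python
import numpy as np
from scipy import stats

def pem_copula_moments(g, marginals, R, n_pts=3):
    """Approximate mean and variance of g(X) for dependent inputs:
    Gauss-Hermite points in standard-normal space, correlated through
    a Gaussian copula, then mapped to the marginals by inverse CDF."""
    d = len(marginals)
    z, w = np.polynomial.hermite_e.hermegauss(n_pts)  # probabilists' rule
    w = w / w.sum()                                   # normalized weights
    L = np.linalg.cholesky(R)                         # copula correlation

    mean = second = 0.0
    for idx in np.ndindex(*(n_pts,) * d):             # tensor point grid
        zc = L @ z[list(idx)]                         # correlated normals
        u = stats.norm.cdf(zc)
        x = np.array([m.ppf(ui) for m, ui in zip(marginals, u)])
        wi = np.prod(w[list(idx)])
        gi = g(x)
        mean += wi * gi
        second += wi * gi**2
    return mean, second - mean**2

# Example: product of a lognormal and a normal input, correlation 0.6.
marginals = [stats.lognorm(0.25, scale=1.0), stats.norm(5.0, 0.5)]
R = np.array([[1.0, 0.6], [0.6, 1.0]])
print(pem_copula_moments(lambda x: x[0] * x[1], marginals, R))
```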

5.
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables—regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables; in the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations or numerically intensive methods must be used.
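A Monte Carlo sketch of the sequential split via the law of total variance, on a toy model that is nonlinear in the regressive variable but linear in the parameters; the model and all distributions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, theta):
    """Illustrative model: nonlinear in the regressive variable x,
    linear in the parameters theta (the analytically tractable case)."""
    return theta[0] + theta[1] * np.sin(x) + theta[2] * x**2

n = 20_000
x = rng.normal(1.0, 0.2, n)                       # regressive variable
nominal = np.array([1.0, 2.0, 0.5])               # nominal parameters
theta = rng.normal(nominal, [0.05, 0.10, 0.02], (n, 3))

# Step 1: variance from the regressive variable alone (parameters frozen).
var_regressive = model(x, nominal).var()

# Step 2: total variance once parameter uncertainty is included.
var_total = model(x, theta.T).var()
print(var_regressive, var_total - var_regressive, var_total)
```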

6.
The well-known phenomenological model of small strain rate-independent plasticity is reformulated in this paper. The main difference from the classical expositions concerns the absence of the plastic strain from the list of state variables. We show that with the proposed choice of state variables, including the total and the elastic strains and strain-like variables which control hardening, we recover all the ingredients of the classical model from a minimum number of hypotheses: instantaneous elastic response and the principle of maximum plastic dissipation. We also show that using a regularized, penalty-like form of the principle of maximum plastic dissipation, we can recover the classical viscoplasticity model. As opposed to the previous schemes used for the finite element implementation of this model (e.g. B-bar method), we propose an approach in which the basic set of equations need not be modified. The operator split method is used to simplify the details of the numerical implementation concerning both the computation of state variables and the incompatible mode based finite element approximations. The latter proves to be indispensable for accommodating the near-incompressible deformation patterns arising in the classical plasticity. An extensive set of numerical simulations is used to illustrate the proposed formulation. © 1998 John Wiley & Sons, Ltd.

7.
This work proposes a method for statistical effect screening to identify design parameters of a numerical simulation that are influential to performance while simultaneously being robust to epistemic uncertainty introduced by calibration variables. Design parameters are controlled by the analyst, but the optimal design is often uncertain, while calibration variables are introduced by modeling choices. We argue that uncertainty introduced by design parameters and calibration variables should be treated differently, despite potential interactions between the two sets. Herein, a robustness criterion is embedded in our effect screening to guarantee the influence of design parameters, irrespective of values used for calibration variables. The Morris screening method is utilized to explore the design space, while robustness to uncertainty is quantified in the context of info‐gap decision theory. The proposed method is applied to the National Aeronautics and Space Administration Multidisciplinary Uncertainty Quantification Challenge Problem, which is a black‐box code for aeronautic flight guidance that requires 35 input parameters. The application demonstrates that a large number of variables can be handled without formulating simplifying assumptions about the potential coupling between calibration variables and design parameters. Because of the computational efficiency of the Morris screening method, we conclude that the analysis can be applied to even larger‐dimensional problems. (Approved for unlimited, public release on October 9, 2013, LA‐UR‐13‐27839, Unclassified.) Copyright © 2015 John Wiley & Sons, Ltd.
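To make the screening step concrete, here is a minimal hand-rolled sketch of Morris elementary-effects screening on a toy four-input function; it is not the challenge-problem code, and the trajectory scheme is simplified.

```python
import numpy as np

rng = np.random.default_rng(1)

def morris_screening(f, dim, n_traj=20, delta=0.2):
    """One-at-a-time Morris screening: mean of |elementary effects|
    (influence) and their std (nonlinearity/interaction) per input."""
    effects = [[] for _ in range(dim)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, dim)    # random base point
        fx = f(x)
        for i in rng.permutation(dim):        # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            fx_new = f(x_new)
            effects[i].append((fx_new - fx) / delta)
            x, fx = x_new, fx_new
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# Illustrative black box with four inputs; the last one is inert.
f = lambda x: x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]
print(morris_screening(f, dim=4))
```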

8.
The goal of this paper is twofold. First, it introduces a general parametric lifetime model for the high‐cycle fatigue regime derived from physical, statistical, engineering and dimensional analysis considerations. The proposed model has two threshold parameters and three Weibull distribution parameters. A two‐step procedure is presented to estimate the parameters. In the first step, the two threshold parameters are estimated by minimizing a least squares regression function. In the second step, the Weibull parameters are estimated by the maximum likelihood method after pooling together the data from different stress levels. Since parameter estimation should always be accompanied by a sensitivity analysis of the fitted model, the second goal of this paper is to propose a method of sensitivity analysis for fatigue models. We show that the proposed sensitivity analysis methods are general and can be applied to any fatigue or lifetime model, not just the one proposed here. Although several fatigue models have been proposed in the literature, to our knowledge this is the first attempt to produce sensitivity analysis methods for fatigue models. The proposed method makes use of the well‐known duality property of mathematical programming, which states that the partial derivatives of the primal objective function with respect to the right‐hand‐side parameters of the constraints are the negatives of the optimal values of the dual variables. To make the parameters or data for which sensitivities are sought appear on the right‐hand side, they are converted into artificial variables fixed at their actual values, thus obtaining the desired constraints. Both the estimation and sensitivity analysis methods are illustrated by two examples, one using real fatigue data and the other using simulated data. In addition, the proposed sensitivity method is also applied to an alternative fatigue model. Finally, some specific conclusions and recommendations are given.

9.
To improve the efficiency of finite element model updating while maintaining accuracy, a finite element model updating method based on Gaussian-white-noise-perturbed particle swarm optimization (GMPSO) is proposed. The standard particle swarm optimization (PSO) method and the improved GMPSO method are introduced, and their global optimization capability and efficiency are compared on benchmark test functions. An efficient GMPSO-based finite element model updating method is then formulated; its workflow is described, and the correspondence between each parameter and the actual physical quantities is clarified. The GMPSO-based method is applied to update a high-dimensional damaged simply supported beam model (with 10 design variables), and the results are compared with those of a genetic algorithm (GA)-based updating method. The GMPSO-based method is further applied to update the model of an in-service bridge structure (with 13 design variables) to verify its feasibility. The results show that the locally improved GMPSO method offers significantly better optimization capability than the original PSO method; for the high-dimensional damaged simply supported beam, GMPSO-based updating achieves good accuracy with markedly higher efficiency than the GA-based method; and for the in-service bridge, GMPSO-based updating effectively reduces the error between the computed and measured frequencies of the main girder, indicating that the proposed method is applicable to model updating of relatively complex engineering structures.
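A sketch of the GMPSO idea — standard PSO with a Gaussian white noise perturbation of the global best — on a simple test function; this is not the paper's exact update rule, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def gmpso(f, dim, n_particles=30, iters=200,
          w=0.7, c1=1.5, c2=1.5, noise=0.1):
    """PSO with a Gaussian white noise perturbation of the global best."""
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
        # Gaussian perturbation of the global best to escape local optima.
        g_trial = g + rng.normal(0.0, noise, dim)
        if f(g_trial) < pbest_f.min():
            g = g_trial
    return g, f(g)

# Sphere test function; in model updating, f would measure the discrepancy
# between computed and measured modal frequencies of the structure.
print(gmpso(lambda p: float(np.sum(p ** 2)), dim=10))
```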

10.
In this paper, an approach based on the U statistic is first proposed to eliminate the effect of between‐profile autocorrelation of the error terms in Phase‐II monitoring of general linear profiles. Then, a control chart based on the adjusted parameter estimates is designed to monitor the parameters of the model. The performance of the proposed method is compared with that of some existing methods in terms of average run length for weak, moderate, and strong autocorrelation coefficients under different shift scenarios. The results show that the proposed method provides significantly better results than the competing methods in detecting shifts in the regression parameters, while the competing methods perform better in detecting shifts in the standard deviation. Finally, the applicability of the proposed method is illustrated by an example. Copyright © 2015 John Wiley & Sons, Ltd.

11.
In this paper, we consider prediction interval estimation in the original units of observation after fitting a linear model to an appropriately transformed response variable. We assume that the residuals obtained from fitting the linear model in the transformed space are iid zero‐mean normal random variables, at least approximately. We discuss the bias in the retransformed mean and derive a reduced‐bias estimator for the kth moment of the original response, given settings of the design variables. This is then used to compute reduced‐bias estimates of the mean and variance of the untransformed response at various locations in the design space. We then exploit a well‐known probability inequality, along with our proposed moment estimator, to construct an approximate 100(1 − α)% prediction interval on the original response, given settings of the design factors. We used Monte Carlo simulation to evaluate the performance of the proposed prediction interval estimator relative to two commonly used alternatives. Our results suggest the proposed method is often the better alternative when the sample size is small and/or when the underlying model is misspecified. We illustrate the application of our new method by applying it to a real experimental data set from the literature, in which machine tool life was studied as a function of various machining parameters.
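A sketch of the overall recipe under strong assumptions: a log transform, lognormal moment formulas for the bias correction, and Chebyshev's inequality standing in for the "well-known probability inequality". The paper's reduced-bias moment estimator is more refined than the plug-in used here, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: tool life y is lognormal in a design variable x,
# so a linear model is fitted to log(y).
x = rng.uniform(0.0, 1.0, 40)
y = np.exp(1.0 + 2.0 * x + rng.normal(0.0, 0.3, 40))

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])   # residual variance

# Moments of the untransformed response at a new design point x0.
# Naive retransformation exp(x0 @ beta) is biased low; the lognormal
# moment formulas supply the correction.
x0 = np.array([1.0, 0.5])
m1 = np.exp(x0 @ beta + s2 / 2.0)            # E[y | x0]
m2 = np.exp(2.0 * (x0 @ beta) + 2.0 * s2)    # E[y^2 | x0]
var = m2 - m1**2

# Chebyshev's inequality yields a conservative 100(1 - alpha)% interval.
alpha = 0.05
k = 1.0 / np.sqrt(alpha)
print(max(m1 - k * np.sqrt(var), 0.0), m1 + k * np.sqrt(var))
```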

12.
In multivariate errors-in-variables models, one wishes to retrieve a linear relationship of the form y = βᵗx + α, where both x and y can be multivariate. The variables y and x are not directly measurable, but are observed with measurement error. The classical approach to estimating the multivariate errors-in-variables model is based on an eigenvector analysis of the joint covariance matrix of the observations. In this paper, a projection-pursuit approach is proposed to estimate the unknown parameters. The focus is on projection indices based on half-samples. These lead to robust estimators that can be computed using fast algorithms. Fisher consistency of the procedure is shown, without the need to make distributional assumptions on the x-variables. A simulation study gives insight into the robustness and efficiency of the procedure.

13.
In many fields, there is a need to monitor quality characteristics defined as the ratio of two random variables. The design and implementation of control charts that directly monitor the stability of the ratio are required for the continuous surveillance of such quality characteristics. In this paper, we propose two one‐sided exponentially weighted moving average (EWMA) charts with subgroups of sample size n > 1 to monitor the ratio of two normal random variables. The optimal EWMA smoothing constants, control limits, and ARLs have been computed for different values of the in‐control ratio and of the correlation between the variables, and are presented in several figures and tables to illustrate the statistical performance of the proposed one‐sided EWMA charts. Both deterministic and random shift sizes have been considered to test the sensitivity of the two one‐sided EWMA charts. The results show that the proposed one‐sided EWMA control charts are more sensitive to process shifts than other charts already proposed in the literature. The practical application of the proposed control schemes is discussed with an illustrative example. Copyright © 2015 John Wiley & Sons, Ltd.
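A minimal sketch of the upper one-sided EWMA recursion on the subgroup ratio statistic; the smoothing constant, control limit, in-control means, covariance, injected shift, and reflecting-barrier rule are all assumptions for illustration, not the paper's optimal design.

```python
import numpy as np

rng = np.random.default_rng(4)

mu_x, mu_y = 10.0, 5.0        # in-control means; target ratio z0 = 2.0
z0 = mu_x / mu_y
lam, UCL = 0.2, 2.10          # smoothing constant and limit (assumed)
z = z0                        # EWMA statistic starts at the target
shift = 1.0                   # sustained shift in the mean of x

for t in range(1, 201):
    # Subgroup of n = 5 correlated normal observations per variable.
    xy = rng.multivariate_normal([mu_x + shift, mu_y],
                                 [[1.0, 0.4], [0.4, 0.25]], size=5)
    ratio = xy[:, 0].mean() / xy[:, 1].mean()   # subgroup ratio statistic
    z = max(lam * ratio + (1.0 - lam) * z, z0)  # upper one-sided EWMA,
                                                # reflected at the target
    if z > UCL:
        print(f"upper one-sided EWMA signals at subgroup {t}")
        break
```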

14.
A model to assess the failure rate of equipment under use conditions is proposed. This model links noncontrolled variables to a piecewise failure rate combined with a proportional hazards model. Two influential variables are considered: one is the temperature characterizing the outdoor climate, and the other is moisture, an intrinsic variable. The maximum likelihood estimates of the model parameters are obtained. The efficiency of the method is evaluated on simulated data, and results on field data are provided. Copyright © 2012 John Wiley & Sons, Ltd.

15.
Defining an effective relaxation time that depends on the root mean square (rms) surface roughness and on the angle of incidence of electrons, and then using the Boltzmann transport equation, general expressions have been derived for the Hall coefficient and conductivity in thin metal films subjected to a transverse magnetic field. In the weak- and strong-field limits, simple analytical equations have been proposed that reveal slight size effects in the Hall coefficient and in the magnetoresistance, as well as a weak field dependence of these transport parameters, in agreement with previous experiments. The theoretical predictions of the present model have been compared with those of the mean free path (mfp) method, which constitutes an extension of the Coney model. In conclusion, a correlation between the respective size parameters, A, in the present model and, μ, in the mfp method is proposed.

16.
In the performance evaluation of structures under disastrous actions, for example, earthquakes, it is important to take into account the randomness of structural parameters. Generally, these random parameters are treated as either independent or perfectly dependent, but in practice they are partly dependent. This article aims at developing a point selection strategy for uncertainty quantification of nonlinear structures involving probabilistically dependent random parameters characterized by a copula function. For this purpose, the point selection strategy for structures involving independent basic variables is first revisited. As an improvement, an iterative screening algorithm oriented at diminishing the generalized F-discrepancy is proposed. Then, combining it with the conditional sampling method, a conditional point set rearrangement method and a conditional iterative screening-rearrangement method are proposed for probabilistically dependent variables. These new point selection strategies are readily incorporated into the probability density evolution method for uncertainty quantification of nonlinear structures involving dependent random parameters characterized by a copula function. The proposed methods are illustrated by two examples: a shear frame with hysteretic restoring forces, and a reinforced concrete frame structure with a damage constitutive model for the concrete, where the material parameters are probabilistically dependent. The results demonstrate the effectiveness of the proposed method, and open problems for further study are discussed.

17.
We present a sweeping window method in elastodynamics for the detection of multiple flaws embedded in a large structure. The key idea is to measure the elastic wave propagation generated by a dynamic load within a smaller substructural detecting window domain, given a sufficient number of sensors. Hence, rather than solving the full structure, one solves a set of smaller dynamic problems quickly and efficiently. To this end, an explicit dynamic extended FEM with circular/elliptical void enrichments is implemented to model the propagation of elastic waves in the detecting window domain. To avoid wave reflections, we consider the window as an unbounded domain with the option of full‐infinite/semi‐infinite/quarter‐infinite domains and employ a simple multi‐dimensional absorbing boundary layer technique. A spatially varying Rayleigh damping is proposed to eliminate spurious wave reflections at the artificial model boundaries. In the process of flaw detection, two phases are proposed: (i) pre‐analysis—identification of rough damage regions through a data‐driven approach, and (ii) post‐analysis—identification of the true flaw parameters by a two‐stage optimization technique. The ‘pre‐analysis’ phase considers the information contained in the ‘pseudo’ healthy structure and the scattered wave signals, providing an admissible initial guess for the optimization process. A two‐stage optimization approach (the simplex method and a damped Gauss–Newton algorithm) is then carried out in the ‘post‐analysis’ phase for convergence to the true flaw parameters. A weighted least‐squares sum of the residuals between the measured and simulated waves is used to construct the objective function for optimization. Several benchmark examples are numerically illustrated to test the performance of the proposed sweeping methodology for the detection of multiple flaws in an unbounded elastic domain. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques capable of providing considerably more insight than traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling–partial rank correlation coefficient (LHS-PRCC) and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but each offered specific insights. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design.
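As a concrete instance of one of the surveyed methods, the sketch below computes LHS-PRCC for a toy three-parameter model; the model, parameter ranges, and sample size are illustrative, not from the surveyed transmission models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def lhs(n, dim):
    """Basic Latin hypercube sample on [0, 1]^dim."""
    return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                            for _ in range(dim)])

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([stats.rankdata(c) for c in X.T])
    ry = stats.rankdata(y)
    out = []
    for i in range(X.shape[1]):
        # Regress out the other ranked inputs, then correlate residuals.
        A = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Toy transmission-style model: output responds strongly to the first
# parameter, moderately to the second, and weakly to the third.
X = lhs(500, 3)
y = X[:, 0] / (X[:, 1] + 0.5) + 0.05 * X[:, 2]
print(prcc(X, y))
```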

19.
This work presents a data‐driven stochastic collocation approach to include the effect of uncertain design parameters during complex multi‐physics simulation of Micro‐ElectroMechanical Systems (MEMS). The proposed framework comprises two key steps: first, probabilistic characterization of the uncertain input parameters based on available experimental information, and second, propagation of these uncertainties through the predictive model to the relevant quantities of interest. The uncertain input parameters are modeled as independent random variables, whose distributions are estimated from available experimental observations using a nonparametric diffusion‐mixing‐based estimator, Botev (Nonparametric density estimation via diffusion mixing. Technical Report, 2007). The diffusion‐based estimator derives from the analogy between the kernel density estimation (KDE) procedure and the heat dissipation equation, and constructs density estimates that are smooth and asymptotically consistent. The diffusion model allows for the incorporation of a prior density and leads to an improved density estimate in comparison with the standard KDE approach, as demonstrated through several numerical examples. Following the characterization step, the uncertainties are propagated to the output variables using the stochastic collocation approach based on sparse grid interpolation, Smolyak (Soviet Math. Dokl. 1963; 4:240–243). The developed framework is used to study the effect of variations in Young's modulus, induced by variations in manufacturing process parameters or heterogeneous measurements, on the performance of a MEMS switch. Copyright © 2010 John Wiley & Sons, Ltd.

20.
This paper develops an economic design of variable sampling interval (VSI) X̄ control charts, in which the next sample is taken sooner than usual if there is an indication that the process is off‐target. When designing VSI X̄ control charts, the underlying assumption is that the measurements within a sample are independent. However, many practical situations violate this hypothesis. Accordingly, a cost model combining the multivariate normal distribution model given by Yang and Hancock with Bai and Lee's cost model is proposed to develop the design of VSI charts for correlated data. An evolutionary search method for finding the optimal design parameters of this model is presented. We also compare VSI and traditional X̄ charts with respect to the expected cost per unit time, using hypothetical cost and process parameters as well as various correlation coefficients. The results indicate that VSI control charts outperform traditional control charts for larger mean shifts when correlation is present. In addition, the design parameters of VSI charts differ depending on whether correlation is present or absent. Copyright © 2005 John Wiley & Sons, Ltd.
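A minimal sketch of the VSI sampling rule itself, separate from the economic design: unlike the paper, this toy uses independent observations, and the warning/control limits, interval lengths, and injected shift are all assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# VSI X-bar rule: a sample mean near target is followed by a long
# sampling interval, a mean in the warning region by a short one.
mu0, sigma, n = 0.0, 1.0, 4
warning, control = 1.0, 3.0      # limits in sigma/sqrt(n) units (assumed)
h_long, h_short = 2.0, 0.25      # sampling intervals in hours (assumed)
shift = 0.75 * sigma             # sustained mean shift to be detected

t = 0.0
for _ in range(10_000):
    xbar = rng.normal(mu0 + shift, sigma / np.sqrt(n))
    z = abs(xbar - mu0) / (sigma / np.sqrt(n))
    if z > control:
        print(f"out-of-control signal at t = {t:.2f} h")
        break
    t += h_long if z <= warning else h_short   # the VSI rule
```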
