20 similar documents found (search time: 15 ms)
1.
Multivariate approach to the thermal challenge problem   Cited by: 1 (self-citations: 0; other citations: 1)
Richard G. Hills, Kevin J. Dowding 《Computer Methods in Applied Mechanics and Engineering》2008,197(29-32):2442
This paper presents an engineering approach to the thermal challenge problem defined by Dowding et al. (this issue). This approach to model validation is based on a multivariate validation metric that accounts for model parameter uncertainty and correlation between multiple measurement/prediction differences. The effect of model parameter uncertainty is accounted for through first-order sensitivity analysis for the ensemble/validation tests, and through first-order sensitivity analysis and Monte-Carlo analysis for the regulatory prediction. While sensitivity-based approaches are less computationally expensive than Monte-Carlo approaches, they are less likely to capture the far-tail behavior of even mildly nonlinear models. The application of the sensitivity-based validation metric provided strong evidence that the tested model was not consistent with the experimental data. The use of a temperature-dependent effective conductivity with the linear model resulted in model predictions that were consistent with the data. The correlation structure of the model was used to pool the prediction/measurement differences to evaluate the corresponding cumulative distribution function (CDF). Both the experimental CDF and the predicted CDFs indicated that the regulatory criterion was not met.
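A minimal sketch of the kind of multivariate metric this abstract describes, assuming a Mahalanobis-style distance whose covariance combines first-order parameter sensitivity with measurement noise; the sensitivity matrix, covariances, and data below are invented for illustration, not the paper's values.

```python
# Sketch of a multivariate validation metric: pooled measurement/prediction
# differences tested against a chi-square reference. Illustrative only.
import numpy as np
from scipy.stats import chi2

def multivariate_validation_metric(measured, predicted, cov):
    """Chi-square-type metric on the vector of prediction errors.

    cov: covariance of the differences, e.g. S @ Cp @ S.T + Ce, where S is
    the first-order sensitivity of predictions to model parameters, Cp the
    parameter covariance, and Ce the measurement-error covariance.
    """
    d = np.asarray(measured) - np.asarray(predicted)
    r2 = d @ np.linalg.solve(cov, d)      # squared Mahalanobis distance
    p_value = chi2.sf(r2, df=d.size)      # small p => model inconsistent
    return r2, p_value

# Toy usage: 3 validation measurements with correlated differences.
S  = np.array([[1.0, 0.2], [0.9, 0.3], [0.8, 0.4]])   # sensitivities
Cp = np.diag([0.05, 0.02])                             # parameter covariance
Ce = 0.1 * np.eye(3)                                   # measurement noise
cov = S @ Cp @ S.T + Ce
r2, p = multivariate_validation_metric([10.2, 11.1, 12.3],
                                       [10.0, 11.0, 12.0], cov)
print(f"metric={r2:.3f}, p-value={p:.3f}")
```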
2.
G. Betta, D. Capriglione, M.D. Migliore 《Computer Standards & Interfaces》2011,33(2):201-205
Accurate knowledge of the antenna factor is a fundamental requirement for reliable electromagnetic compatibility (EMC) measurements in emissions, immunity and human-exposure tests. According to international standards, the calibration of EMC antennas requires close-to-ideal test sites (or calibration test sites), characterized by very large sizes of the ground plane and of the empty-space volume above it (free-space behaviour). On the other hand, a great number of EMC test sites are available and designed for measurements at a 3 m distance, so it would be very convenient to calibrate antennas in such facilities at the cost of an acceptable loss of accuracy. In this paper, the authors investigate the suitability of compact semi-anechoic chambers (standard chambers, compliant for measurements at 3 m distance from the equipment under test) for reliable antenna factor calibrations. As an application, the calibration of a common broadband biconical antenna in the 200-1000 MHz frequency range is considered and analysed. A detailed experimental analysis is offered for estimating all the relevant uncertainty contributions.
3.
Model validation using analysis of variance   Cited by: 6 (self-citations: 0; other citations: 6)
Computer simulation is being applied ever more widely in many fields, which places higher demands on research into model validation methods. As is well known, analysis of variance (ANOVA) is a commonly used statistical method: it tests whether the differences between experimental results obtained at different levels of a factor are significant by comparing the corresponding experimental indices. If the simulation model and the actual system are treated as two levels of an experimental factor, running experiments under otherwise identical conditions yields two groups of experimental indices; analysing these data with ANOVA then tests whether their difference is significant. If the difference is not significant, the simulation model is consistent with the actual system at the given significance level. The paper concludes with a concrete application of the method to specific data; the conclusions agree with those obtained by other validation methods, showing that model validation by analysis of variance is feasible.
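A hedged illustration of the idea in this abstract: treat the simulation model and the real system as two levels of one factor and test the difference with a one-way ANOVA. The data below are made up.

```python
# ANOVA-based model validation: two "levels" (model runs vs. system
# observations), one response. A non-significant F test supports
# consistency at the chosen significance level.
import numpy as np
from scipy.stats import f_oneway

sim_outputs  = np.array([10.1, 9.8, 10.3, 10.0, 9.9])   # simulation runs
real_outputs = np.array([10.2, 10.0, 9.7, 10.4, 10.1])  # system observations

F, p = f_oneway(sim_outputs, real_outputs)  # one-way ANOVA, two levels
alpha = 0.05
if p > alpha:
    print(f"F={F:.3f}, p={p:.3f}: no significant difference -> "
          "model consistent with the system at this significance level")
else:
    print(f"F={F:.3f}, p={p:.3f}: difference is significant")
```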
4.
A top-down approach to calibration, validation, uncertainty quantification and predictive accuracy assessment   Cited by: 1 (self-citations: 0; other citations: 1)
Timothy Hasselman, George Lloyd 《Computer Methods in Applied Mechanics and Engineering》2008,197(29-32):2596
This paper describes a “top-down” uncertainty quantification (UQ) approach for calibration, validation and predictive accuracy assessment of the SNL Validation Workshop Structural Dynamics Challenge Problem. The top-down UQ approach differs from the more conventional (“bottom-up”) approach in that correlated statistical analysis is performed directly with the modal characteristics (frequencies, mode shapes and damping ratios) rather than using the modal characteristics to derive the statistics of physical model parameters (springs, masses and viscous damping elements in the present application). In this application, a stochastic subsystem model is coupled with a deterministic subsystem model to analyze stochastic system response to stochastic forcing functions. The weak nonlinearity of the stochastic subsystem was characterized by testing it at three different input levels, low, medium and high. The calibrated subsystem models were validated with additional test data using published NASA and Air Force validation criteria. The validated subsystem models were first installed in the accreditation test bed where system response simulations involving stochastic shock-type force inputs were conducted. The validated stochastic subsystem model was then installed in the target application and simulations involving limited duration segments of stationary random vibration excitation were conducted.
5.
A probabilistic construction of model validation   Cited by: 1 (self-citations: 0; other citations: 1)
Roger G. Ghanem, Alireza Doostan, John Red-Horse 《Computer Methods in Applied Mechanics and Engineering》2008,197(29-32):2585
We describe a procedure to assess the predictive accuracy of process models subject to approximation error and uncertainty. The proposed approach is a functional-analysis-based probabilistic approach in which random quantities are represented using polynomial chaos expansions (PCEs). The approach permits the formulation of the uncertainty assessment in validation, a significant component of the process, as a problem of approximation theory. It has two essential parts. First, a statistical procedure is implemented to calibrate uncertain parameters of the candidate model from experimental or model-based measurements. This calibration technique employs PCEs to represent the inherent uncertainty of the model parameters. Based on the asymptotic behavior of the statistical parameter estimator, the associated PCE coefficients are then characterized as independent random quantities to represent epistemic uncertainty due to lack of information. Second, a simple hypothesis test is implemented to explore the validity of the computational model assumed for the physics of the problem. The validation path is implemented for the dynamical-system validation challenge exercise.
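The abstract's central device is the polynomial chaos expansion. Purely as an illustration (not the authors' code), the sketch below projects an assumed lognormal parameter onto probabilists' Hermite polynomials in a standard normal germ, using Gauss-Hermite quadrature; the target function and truncation order are invented.

```python
# One-dimensional Hermite PCE of a random parameter f(xi), xi ~ N(0, 1):
# c_k = E[f(xi) He_k(xi)] / k!, computed by Gauss-Hermite projection.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, order, quad_pts=40):
    x, w = He.hermegauss(quad_pts)          # nodes/weights for exp(-x^2/2)
    w = w / math.sqrt(2.0 * math.pi)        # normalize to the N(0,1) density
    return [np.sum(w * f(x) * He.hermeval(x, [0]*k + [1])) / math.factorial(k)
            for k in range(order + 1)]

f = lambda xi: np.exp(0.3 * xi)             # assumed lognormal parameter
c = pce_coefficients(f, order=4)

# Sanity check: evaluate the truncated PCE against the exact function.
xi = np.linspace(-3, 3, 7)
approx = He.hermeval(xi, c)
print(np.max(np.abs(approx - f(xi))))       # small truncation error
```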
6.
This paper examines how considerations of model uncertainty can affect policy design. Without such considerations one may expect that the choice of policy control rules for a macroeconomic model would depend on some welfare criterion based on the model as given. However, if there is uncertainty in the structure of the model or in the values of particular model parameters, then it is argued that the choice of policy should take this into account. We introduce and define some measures of robustness which describe how well a particular control rule performs when the model is uncertain. These can only be evaluated using Monte-Carlo simulations; in that sense they are ex post. Then we define a number of indicators which may be of use in predicting robustness, and which do not require simulations to calculate. In that sense they are ex ante. Lastly, we evaluate the ex ante indicators on a small macromodel by comparing their predictions with the actual robustness outturn for the range of possible control rules. We find that use of the indicators in choosing rules yields some improvement on the ordinary welfare criterion, especially when the shocks hitting the system are unknown.
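The ex post robustness measures described here can only be evaluated by simulation. A toy Monte-Carlo sketch, assuming a scalar linear model and a proportional control rule (both invented, not the paper's macromodel): a fixed rule is evaluated on many draws of an uncertain model parameter and the welfare spread is summarized.

```python
# Ex post robustness of a fixed policy rule under model uncertainty,
# estimated by Monte Carlo. Model, rule, and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def welfare(a, k, T=200, lam=0.5):
    """Quadratic loss of rule u_t = -k x_t in x_{t+1} = a x_t + u_t + e_t."""
    x, loss = 1.0, 0.0
    for _ in range(T):
        u = -k * x
        loss += x**2 + lam * u**2
        x = a * x + u + rng.normal(scale=0.1)
    return -loss / T

k = 0.6                                          # candidate control rule
draws = rng.normal(0.9, 0.05, size=500)          # uncertain model parameter a
W = np.array([welfare(a, k) for a in draws])
print(f"mean welfare {W.mean():.3f}, worst 5% {np.quantile(W, 0.05):.3f}")
```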
7.
Model predictive control (MPC) has become one of the most popular control techniques in the process industry, mainly because of its ability to deal with multiple-input-multiple-output plants and with constraints. However, in the presence of model uncertainties and disturbances its performance can deteriorate. The development of robust MPC techniques has therefore been widely discussed in recent years, but they have rarely, if ever, been applied in practice due to the conservativeness or the computational complexity of the approaches. In this paper, we present multi-stage NMPC as a promising robust, non-conservative nonlinear model predictive control scheme. The approach is based on representing the evolution of the uncertainty by a scenario tree, and it leads to a non-conservative robust control of the uncertain plant because the adaptation of future inputs to new information is taken into account. Simulation results show that multi-stage NMPC outperforms standard and min-max NMPC under the presence of uncertainties for a semi-batch polymerization benchmark problem. In addition, the advantages of the approach are illustrated for the case where only noisy measurements are available and the unmeasured states and the uncertainties have to be estimated using an observer. It is shown that better performance can be achieved than by estimating the unknown parameters online and adapting the plant model.
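A small sketch of the scenario-tree representation that multi-stage NMPC builds on, with assumed uncertainty levels and robust horizon; it only enumerates the tree and its non-anticipativity structure, omitting the NMPC optimization itself.

```python
# Scenario tree for multi-stage (scenario-based) robust NMPC: the
# uncertainty branches at each stage up to a "robust horizon", and inputs
# must coincide for scenarios sharing a node (non-anticipativity).
from itertools import product

uncertainty_levels = [-0.1, 0.0, 0.1]   # assumed parameter deviations
robust_horizon = 2                       # branch only for the first 2 stages

scenarios = list(product(uncertainty_levels, repeat=robust_horizon))
print(f"{len(scenarios)} scenarios")     # 3^2 = 9 branches

# Scenarios sharing the first t realizations share the input at stage t.
for t in range(robust_horizon):
    nodes = {s[:t] for s in scenarios}
    print(f"stage {t}: {len(nodes)} distinct node(s) -> "
          f"{len(nodes)} free input(s) instead of {len(scenarios)}")
```

This tree structure is what makes the scheme non-conservative: future inputs are allowed to react to which branch the uncertainty actually takes.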
8.
9.
The thermal problem defined for the validation challenge workshop involves a simple one-dimensional slab geometry with a defined heat flux at the front face, adiabatic conditions at the rear face, and a provided baseline predictive simulation model to be used to simulate the time-dependent heatup of the slab. This paper discusses a clustering methodology using a surrogate heat transfer algorithm that allows propagation of the uncertainties in the model parameters using a very limited series of full simulations. This clustering methodology can be used when the predictive model to be run is very expensive and only a few simulation runs are possible. A series of time-dependent statistical comparisons designed to validate the model against experimental data provided in the problem formulation is also presented, and limitations of the approach are discussed. The purpose of this paper is to present methods for propagating uncertainty with limited computer runs, validating with uncertain data, and decision-making under uncertainty. The final results of the analysis indicate that there is approximately 95% confidence that the regulatory criteria under consideration would not be met, given the high level of physical data provided.
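A hedged sketch of the clustering idea, with a stand-in surrogate and "expensive" model (both invented): many parameter samples are propagated through the cheap surrogate, clustered on its output, and the few affordable full runs are spent on one representative per cluster, weighted by cluster size.

```python
# Cluster-based uncertainty propagation with a limited budget of full runs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
params = rng.normal(size=(5000, 2))            # uncertain parameters, say

surrogate = lambda p: 1.0 + 0.8 * p[:, 0] - 0.3 * p[:, 1]      # cheap proxy
expensive = lambda p: 1.0 + 0.8 * p[0] - 0.3 * p[1] + 0.05 * p[0] * p[1]

n_runs = 8                                     # only 8 full simulations
km = KMeans(n_clusters=n_runs, n_init=10, random_state=0)
labels = km.fit_predict(surrogate(params).reshape(-1, 1))

# Run the full model once per cluster, at the sample nearest each center.
weights, outputs = [], []
for c in range(n_runs):
    members = params[labels == c]
    rep = members[np.argmin(
        np.abs(surrogate(members) - km.cluster_centers_[c, 0]))]
    weights.append(len(members) / len(params))
    outputs.append(expensive(rep))

mean_estimate = np.dot(weights, outputs)       # weighted propagation
print(f"estimated mean response: {mean_estimate:.4f}")
```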
10.
This paper presents a probabilistic model validation methodology for nonlinear systems in the time domain. The proposed formulation is simple and intuitive, and it accounts for both deterministic and stochastic nonlinear systems with parametric and nonparametric uncertainties. In contrast to the hard invalidation methods available in the literature, a relaxed notion of validation in probability is introduced. To guarantee provably correct inference, an algorithm for constructing a probabilistically robust validation certificate is given, along with its computational complexity. Several examples are worked out to illustrate its use.
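One standard way to build such a probabilistically robust certificate is Monte Carlo with a Chernoff/Hoeffding sample-size bound; the sketch below assumes this construction (the paper's own algorithm may differ), with an invented system and tolerance.

```python
# Randomized "validation in probability": estimate P(|model - plant| <= tol)
# with sample size N chosen so the estimate is within eps of the truth with
# confidence 1 - delta (additive Hoeffding bound).
import math
import numpy as np

eps, delta = 0.02, 1e-3
N = math.ceil(math.log(2.0 / delta) / (2.0 * eps**2))   # Hoeffding bound
print(f"{N} samples for +/-{eps} accuracy at confidence {1 - delta}")

rng = np.random.default_rng(2)

def model(x):                 # nominal model
    return x**2

def plant(x):                 # "true" system with parametric uncertainty
    a = rng.normal(1.0, 0.05)
    return a * x**2 + rng.normal(scale=0.01)

tol = 0.1
x = rng.uniform(-1, 1, size=N)
ok = np.fromiter((abs(model(xi) - plant(xi)) <= tol for xi in x), bool)
print(f"estimated validation probability: {ok.mean():.4f}")
```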
11.
I. Babuška, F. Nobile, R. Tempone 《Computer Methods in Applied Mechanics and Engineering》2008,197(29-32):2517
This work describes a solution to the validation challenge problem posed at the Sandia Validation Challenge Workshop (May 21-23, 2006, NM), presenting and applying a general methodology. The solution entails several standard steps, namely selecting and fitting several models to the available prior information and then sequentially rejecting those which do not perform satisfactorily in the validation and accreditation experiments. The rejection procedures are based on Bayesian updates, where the prior density is related to the current candidate model and the posterior density is obtained by conditioning on the validation and accreditation experiments. The result of the analysis is the computation of the failure probability as well as a quantification of the confidence in the computation, depending on the amount of available experimental data.
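A minimal conjugate (normal-normal) sketch of the update-and-reject step described here; the prior reflects the current candidate model, the posterior conditions on validation data, and the model is rejected if an accreditation measurement is too unlikely under the posterior predictive. Priors, data, and the threshold are invented.

```python
# Bayesian update followed by a posterior-predictive rejection check.
import numpy as np
from scipy.stats import norm

mu0, tau0 = 10.0, 1.0          # prior on the parameter (candidate model)
sigma = 0.5                    # known measurement noise

y_valid = np.array([10.6, 10.9, 10.7])         # validation experiments
n = y_valid.size
tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)  # posterior variance
mu_n = tau_n2 * (mu0 / tau0**2 + y_valid.sum() / sigma**2)

# Posterior predictive for a new accreditation measurement.
pred_sd = np.sqrt(tau_n2 + sigma**2)
y_accred = 12.4
p = 2 * norm.sf(abs(y_accred - mu_n) / pred_sd)   # two-sided tail probability
print(f"posterior mean {mu_n:.2f}, predictive p-value {p:.4f}")
print("reject model" if p < 0.05 else "retain model")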
12.
We study a new robust formulation for strategic location and capacity planning considering potential company acquisitions under uncertainty. Long-term logistics network planning is among the most difficult decisions for supply-chain managers. While costs, demands, etc. may be known or estimated well for the short term, their future development is uncertain and difficult to predict. A new model formulation for the robust capacitated facility location problem is presented to cope with uncertainty in planning. The objective is to minimize the expectation of the relative regrets across scenarios over multiple periods. This is achieved by dynamically assigning multi-level production allocations, locations and capacity adjustments as uncertain parameters develop over time. Considering acquisitions for profit maximization and their supply-chain impact is new, as is the simultaneous decision of capacity adjustment and facility location over time. The solution of the novel robust formulation provides a single setup with which good results can be achieved for any realized scenario. Hence, the solution may not be optimal for one particular scenario but is good, i.e. offers the highest expected profit, for any highly probable future realization. We show in exhaustive computational tests that the robust mixed-integer linear programming model achieves superior results to the deterministic configurations. This dynamic robust formulation allows the supply chain to adapt favorably to acquisitions and to uncertain developments of revenue, demand and costs, and hence reduces the potential negative impacts of uncertainty on supply-chain operations.
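The objective named here, expected relative regret across scenarios, reduces to a simple computation once scenario profits are known; the sketch below shows only that evaluation step, with placeholder numbers (in the paper it sits inside a mixed-integer program).

```python
# Expected relative regret of one candidate network setup across scenarios.
import numpy as np

# profit[s] of the candidate setup and best_profit[s] of the
# scenario-optimal setup, for scenarios s with probabilities prob[s].
profit      = np.array([ 90.0, 120.0, 75.0])
best_profit = np.array([100.0, 125.0, 95.0])
prob        = np.array([ 0.5,   0.3,  0.2])

relative_regret = (best_profit - profit) / best_profit
expected_regret = float(prob @ relative_regret)   # objective to minimize
print(f"expected relative regret: {expected_regret:.4f}")
```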
13.
We compare the use of price-based policies (taxes) and quantity-based policies (quotas) for controlling emissions in a dynamic setup in which the regulator faces two sources of uncertainty: (i) market-related uncertainty and (ii) ecological uncertainty. We assume that the regulator is a rational Bayesian learner and that the regulator and firms have asymmetric information. In our model the structure of Bayesian learning is general. Our results suggest that the expected level of emissions is the same under taxes and quotas. However, comparing the total benefits of these policies suggests that taxes dominate quotas, that is, they provide a higher social welfare. Even though taxes have some benefits over quotas, neither learning nor ecological uncertainty affects the choice of policy; the only factor having such an impact is uncertainty in the instantaneous net emissions benefits (market-related uncertainty). Moreover, the more volatile this uncertainty, the greater the advantage of taxes over quotas. Ecological uncertainty leads to a difference between the emissions rules under the informed and the rational-learning assumptions; however, the direction of this difference depends on the bias of the beliefs regarding ecological uncertainty. We also find that a change in the regulator's beliefs toward more optimistic views will increase emissions.
14.
One of the most challenging issues for the semiconductor testing industry is how to deal with capacity planning and resource allocation simultaneously under demand and technology uncertainty. In addition, capacity planners require a tradeoff among the costs of resources with different processing technologies, while simultaneously considering which resources manufacture which products. The need to explore better solutions further increases the complexity of the problem. This study focuses on the decisions pertaining to (i) the simultaneous resource portfolio/investment and allocation plan, accounting for the hedging tradeoff between expected profit and risk; (ii) selecting the most profitable orders from those pending in each time bucket under demand and technology uncertainty; and (iii) an algorithm to efficiently solve the resulting stochastic mixed-integer programming problem. Due to the high computational complexity of the problem, this study develops a constraint-satisfaction-based genetic algorithm, in conjunction with a chromosome-repair mechanism and a sampling procedure, to resolve the above issues simultaneously. The experimental results indicate that the proposed mathematical model can accurately represent the resource portfolio planning problem of the semiconductor testing industry, and that the solution algorithm can solve the problem efficiently.
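A compact, heavily simplified sketch in the spirit of the described algorithm: binary chromosomes select resource investments, a repair step restores budget feasibility after crossover and mutation, and fitness is expected profit over sampled demand scenarios. All numbers and the simple repair rule are illustrative assumptions, not the paper's formulation.

```python
# Constraint-satisfaction GA with a repair mechanism and scenario sampling.
import numpy as np

rng = np.random.default_rng(3)
n_res, budget = 10, 25.0
cost   = rng.uniform(2, 8, n_res)
margin = rng.uniform(1, 5, n_res)

def repair(ch):
    """Drop the least profitable selected resources until within budget."""
    while cost[ch == 1].sum() > budget:
        sel = np.flatnonzero(ch)
        ch[sel[np.argmin(margin[sel])]] = 0
    return ch

def fitness(ch, n_samples=200):
    demand = rng.uniform(0.5, 1.5, (n_samples, n_res))   # demand scenarios
    return (demand * margin * ch).sum(axis=1).mean()     # expected profit

pop = np.array([repair(rng.integers(0, 2, n_res)) for _ in range(30)])
for _ in range(50):                                      # generations
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-10:]]                 # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_res) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(n_res) < 0.05                # bit-flip mutation
        children.append(repair(child))
    pop = np.array(children)

best = pop[np.argmax([fitness(c) for c in pop])]
print("best portfolio:", best, "cost:", cost[best == 1].sum().round(2))
```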
15.
Many of the problems addressed through engineering analysis include a set of regulatory (or other) probabilistic requirements that must be demonstrated with some degree of confidence through the analysis. Problems cast in this environment can pose new challenges for computational analyses in both model validation and model-based prediction. The “regulatory problems” given for the “Sandia challenge problems exercise”, while relatively simple, provide an opportunity to demonstrate methods that address these challenges. This paper describes and illustrates methods that can be useful in analysis of the regulatory problem. Specifically, we discuss:
- (1) an approach for quantifying variability and uncertainty separately to assess the regulatory requirements and provide a statement of confidence; and
- (2) a general validation metric to focus the validation process on a specific range of the predictive distribution (the predictions near the regulatory threshold); see the sketch below.
Keywords: Regulatory problem; Calibration; Model validation; Model-based prediction
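A hedged sketch of point (2): an area-metric-style discrepancy between predicted and observed empirical CDFs, integrated only over a window around the regulatory threshold so that the validation effort concentrates where the regulatory decision is made. The window, data, and threshold are invented.

```python
# Validation metric focused on the part of the predictive distribution
# near a regulatory threshold.
import numpy as np

def ecdf(samples, x):
    samples = np.sort(samples)
    return np.searchsorted(samples, x, side="right") / samples.size

pred = np.random.default_rng(4).normal(900, 40, 2000)   # predicted responses
obs  = np.random.default_rng(5).normal(880, 35, 25)     # measured responses

threshold = 900.0                                       # regulatory limit
x = np.linspace(threshold - 50, threshold + 50, 501)    # focus window
gap = np.abs(ecdf(pred, x) - ecdf(obs, x))
metric = float(np.sum(0.5 * (gap[:-1] + gap[1:]) * np.diff(x)))  # trapezoid
print(f"focused area metric: {metric:.2f}")
```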
16.
Assuming a general linear model with unknown and possibly unequal normal error variances, the aim is to develop a one-sample procedure for hypothesis testing on all, some, or a subset of the linear functions of the regression parameters. The sampling procedure splits each sample of size n_i at a controllable regressor's data point into two portions: the first consists of the n_i − 1 observations used for initial estimation, and the second is the remaining observation, used in the final estimation to define a weighted sample mean based on all sample observations at each data point. The weighted sample mean then serves as a basis for parameter estimates and test statistics for a general linear regression model. It is found that the distributions of the test statistics based on the weighted sample means are completely independent of the unknown variances. The method can be applied to analysis of variance under various experimental designs with unequal variances.
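The mechanics of the sample-splitting device can be sketched as follows. The paper's actual weights are not reproduced here, so the size-proportional weight and the data below are assumptions made purely for illustration.

```python
# Split each sample at a regressor data point into n_i - 1 "initial"
# observations and one held-out observation, then form a weighted mean.
import numpy as np

def weighted_sample_mean(y, w=None):
    """Combine the first n-1 observations with the held-out last one."""
    y = np.asarray(y, dtype=float)
    initial, held_out = y[:-1], y[-1]
    if w is None:
        w = (y.size - 1) / y.size        # assumed size-proportional weight
    return w * initial.mean() + (1 - w) * held_out

samples_at_x = [np.array([2.1, 1.9, 2.3, 2.0]),   # replicates at each
                np.array([3.2, 3.0, 3.1])]        # regressor data point
print([round(weighted_sample_mean(y), 3) for y in samples_at_x])
```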
17.
Tong Zhou, Ling Wang, Zhengshun Sun 《Automatica》2002,38(9):1449-1461
This paper deals with probabilistic model set validation. It is assumed that the dynamics of a multi-input multi-output (MIMO) plant is described by a model set with unstructured uncertainties, and identification experiments are performed in closed loop. A necessary and sufficient condition has been derived for the consistency of the model set with both the stabilizing controller and closed-loop frequency domain experimental data (FDED). In this condition, only the Euclidean norm of a complex vector is involved, and this complex vector depends linearly on both the disturbances and the measurement errors. Based on this condition, an analytic formula has been derived for the sample unfalsified probability (SUP) of the model set. Some of the asymptotic statistical properties of the SUP have also been briefly discussed. A numerical example is included to illustrate the efficiency of the suggested method in model set quality evaluation.
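The paper derives an analytic formula for the SUP; purely to illustrate the quantity, the sketch below estimates it by Monte Carlo for an invented linear consistency test (the matrices, bound, and noise statistics are assumptions, not the paper's construction).

```python
# Monte-Carlo estimate of a sample unfalsified probability: the fraction of
# disturbance/noise realizations for which a Euclidean-norm consistency
# condition holds.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(4, 3))      # assumed linear map from noise to the
b = rng.normal(size=4)           # test vector at one frequency sample
gamma = 3.0                      # assumed consistency bound

N = 100_000
noise = rng.normal(scale=0.5, size=(N, 3))
consistent = np.linalg.norm(noise @ A.T + b, axis=1) <= gamma
print(f"sample unfalsified probability: {consistent.mean():.4f}")
```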
18.
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, Poissonian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework.
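TRolke itself is a C++/ROOT class; as a language-neutral illustration only, the sketch below re-implements in Python the basic profile-likelihood interval for a Poisson count with a Gaussian background uncertainty, one of the model combinations the package covers. All numbers are assumptions; real analyses should use TRolke via ROOT.

```python
# Profile-likelihood confidence interval for a Poisson signal mu with a
# Gaussian-constrained nuisance background b: profile out b, then keep the
# mu values within the chi-square threshold of the likelihood-ratio test.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2, norm, poisson

x_obs, b_est, b_sigma = 10, 3.0, 1.0     # observed count, background estimate

def nll(mu, b):                          # negative log-likelihood
    return -(poisson.logpmf(x_obs, mu + b) + norm.logpdf(b_est, b, b_sigma))

def profile_nll(mu):                     # minimize over the nuisance b
    res = minimize_scalar(lambda b: nll(mu, b),
                          bounds=(1e-9, 20), method="bounded")
    return res.fun

mu_grid = np.linspace(1e-6, 25, 800)
pnll = np.array([profile_nll(m) for m in mu_grid])
# 90% CL interval: points where 2*(pnll - min) <= chi2_{1, 0.90}
inside = 2 * (pnll - pnll.min()) <= chi2.ppf(0.90, df=1)
print(f"90% CL interval for the signal: "
      f"[{mu_grid[inside].min():.2f}, {mu_grid[inside].max():.2f}]")
```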
Program summary
Program title: TRolke version 2.0
Catalogue identifier: AEFT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: MIT license
No. of lines in distributed program, including test data, etc.: 3431
No. of bytes in distributed program, including test data, etc.: 21 789
Distribution format: tar.gz
Programming language: ISO C++.
Computer: Unix, GNU/Linux, Mac.
Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8).
RAM: ~20 MB
Classification: 14.13.
External routines: ROOT (http://root.cern.ch/drupal/)
Nature of problem: Calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background.
Solution method: Profile likelihood method, analytical.
Running time: < 10^-4 seconds per extracted limit.
19.
20.
The paper presents necessary and sufficient conditions that verify the relevance of an assumed linear stochastic system model for problems in which the probabilistic characteristics of the plant are not known exactly. The approach is to establish the existence of an admissible probability space on which the output of the candidate stochastic system model is consistent (in a stochastic sense) with the noisy output of the plant.