Similar Literature
Found 20 similar documents (search time: 15 ms)
1.
We present a heuristic technique for solving a parameter estimation problem that arises in modeling the thermal behavior of electronic chip packages. Compact Thermal Models (CTMs) are network models of steady state thermal behavior, which show promise in augmenting the use of more detailed and computationally expensive models. The CTM parameter optimization problem that we examine is a nonconvex optimization problem in which we seek a set of CTM parameters that best predicts, under general conditions, the thermal response of a particular chip package geometry that has been tested under a small number of conditions. We begin by developing a nonlinear programming formulation for this parameter optimization problem, and then develop an algorithm that uses special characteristics of the optimization problem to quickly generate heuristic solutions. Our algorithm descends along a series of solutions to one-dimensional nonconvex optimization problems, obtaining a locally optimal set of model parameters at modest computational cost. Finally, we provide some experimental results and recommendations for extending this research. The authors are indebted to four anonymous referees for their help in improving the contribution and presentation of this paper.
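The descent along a series of one-dimensional subproblems lends itself to a compact illustration. The sketch below is not the authors' algorithm, only a generic cyclic coordinate descent in Python over an invented least-squares CTM fitting objective; the resistance network, bounds, and "measured" temperatures are all hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_ctm(loss, theta0, bounds, sweeps=20, tol=1e-8):
        # Cycle through the parameters; for each one, solve a bounded
        # one-dimensional (possibly nonconvex) problem with the others
        # held fixed, and stop when a full sweep no longer improves.
        theta = np.asarray(theta0, dtype=float)
        best = loss(theta)
        for _ in range(sweeps):
            prev = best
            for i in range(theta.size):
                def f1d(x, i=i):
                    t = theta.copy(); t[i] = x
                    return loss(t)
                res = minimize_scalar(f1d, bounds=bounds[i], method="bounded")
                if res.fun < best:
                    theta[i], best = res.x, res.fun
            if prev - best < tol:
                break
        return theta, best

    # Toy objective: match three "measured" junction temperatures with
    # a three-resistance network (all values invented for illustration).
    measured = np.array([41.0, 55.0, 63.0])
    def loss(theta):
        pred = np.array([theta[0] + theta[1],
                         theta[0] + 2 * theta[1] + theta[2],
                         2 * theta[0] + theta[2]])
        return float(np.sum((pred - measured) ** 2))

    theta, err = fit_ctm(loss, [1.0, 1.0, 1.0], bounds=[(0.0, 100.0)] * 3)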

2.
This paper presents two techniques, the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate the Bayesian inference approach for parameter estimation problems associated with partial differential equations. POD is a model reduction technique that derives reduced‐order models using an optimal problem‐adapted basis to effect significant reduction of the problem size and hence computational cost. SCM is an uncertainty propagation technique that approximates the parameterized solution and reduces subsequent forward solves to function evaluations. The utility of the techniques is assessed on the non‐linear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements arising in the quenching process and non‐destructive evaluation. A hierarchical Bayesian model that flexibly handles the regularization parameter and the noise level is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing accuracy. Copyright © 2008 John Wiley & Sons, Ltd.
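The POD basis-extraction step can be shown in a few lines. The following sketch is an illustration under generic assumptions, not the paper's code: it builds a POD basis from a snapshot matrix via the SVD and truncates by an energy criterion.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        # Columns of `snapshots` are full-order solutions collected at
        # sampled parameters/times; keep the leading left singular
        # vectors capturing the requested fraction of the energy.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1
        return U[:, :r]

    # Toy usage: 500-dimensional states sampled at 40 parameter values.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))
    V = pod_basis(X)            # tall orthonormal basis, here rank ~5
    x_reduced = V.T @ X[:, 0]   # forward solves now act on a short vector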

3.
Deterministic simulation is a popular tool used to numerically solve complex mathematical models in engineering applications. These models often involve parameters in the form of numerical values that can be calibrated when real‐life observations are available. This paper presents a systematic approach to parameter calibration using Response Surface Methodology (RSM). Additional modeling that considers correlation in the error structure is suggested to compensate for the inadequacy of the computer model and improve prediction at untried points. A Computational Fluid Dynamics (CFD) model for manure storage ventilation is used for illustration. A simulation study shows that, in comparison to likelihood‐based parameter calibration, the proposed parameter calibration method performs better in accuracy and consistency of the calibrated parameter value. Results from a sensitivity analysis lead to a guideline for setting the factorial distance in relation to initial parameter values. The proposed calibration method extends RSM beyond its conventional use for process yield improvement and can also be applied widely to calibrate other types of models when real‐life observations are available. Moreover, the proposed inadequacy modeling is useful for improving the accuracy of simulation output, especially when a computer model is too expensive to run at its finest level of detail. Copyright © 2011 John Wiley and Sons Ltd.
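A minimal sketch of the response-surface calibration idea, with invented trial values standing in for CFD runs (this is the generic RSM pattern, not the authors' error-correlation extension): fit a second-order surface to simulator outputs at a few trial parameter values, then pick the parameter minimizing the discrepancy from the observation.

    import numpy as np

    # Hypothetical trial parameter values spread a "factorial distance"
    # around an initial guess, with made-up simulator outputs.
    t_trials = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
    y_sim = np.array([4.1, 3.2, 2.8, 3.0, 3.9])   # stand-in CFD outputs
    y_obs = 2.9                                   # real-life observation

    # Second-order response surface y(t) ~ b2 t^2 + b1 t + b0.
    coeffs = np.polyfit(t_trials, y_sim, deg=2)

    # Calibrated parameter: minimize squared discrepancy on a fine grid.
    grid = np.linspace(t_trials.min(), t_trials.max(), 1001)
    t_cal = grid[np.argmin((np.polyval(coeffs, grid) - y_obs) ** 2)]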

4.
In this paper, we consider the problem of constructing reduced‐order models of a class of time‐dependent randomly parametrized linear partial differential equations. Our objective is to efficiently construct a reduced basis approximation of the solution as a function of the spatial coordinates, parameter space, and time. The proposed approach involves decomposing the solution in terms of undetermined spatial and parametrized temporal basis functions. The unknown basis functions in the decomposition are estimated using an alternating iterative Galerkin projection scheme. Numerical studies on the time‐dependent randomly parametrized diffusion equation are presented to demonstrate that the proposed approach provides good accuracy at significantly lower computational cost compared with polynomial chaos‐based Galerkin projection schemes. Comparison studies are also made against Nouy's generalized spectral decomposition scheme to demonstrate that the proposed approach provides a number of computational advantages.
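The alternating update between spatial and temporal modes can be illustrated with a plain alternating least-squares separated representation; this is a sketch of the alternation idea on a discrete solution matrix, not the paper's Galerkin scheme, and all dimensions below are arbitrary.

    import numpy as np

    def separated_approximation(U, rank=3, iters=50):
        # Greedy rank-one corrections U ~ sum_k a_k b_k^T: each spatial
        # mode a_k and temporal mode b_k pair is found by alternating
        # fixed-point updates, then deflated from the residual.
        R = U.copy()
        spatial, temporal = [], []
        for _ in range(rank):
            b = np.ones(U.shape[1])
            for _ in range(iters):
                a = R @ b / (b @ b)        # best spatial mode given b
                b = R.T @ a / (a @ a)      # best temporal mode given a
            spatial.append(a); temporal.append(b)
            R = R - np.outer(a, b)
        return np.array(spatial).T, np.array(temporal).T

    rng = np.random.default_rng(1)
    U = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 100))
    A, B = separated_approximation(U, rank=4)
    print(np.linalg.norm(U - A @ B.T) / np.linalg.norm(U))  # near zero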

5.
The deformation and failure of spot‐welded joints have been successfully modelled using a cohesive‐zone model for fracture. This has been accomplished by implementing a user‐defined, three‐dimensional, cohesive‐zone element within a commercial finite‐element package. The model requires two material parameters for each mode of deformation. Results show that the material parameters from this type of approach are transferable for identical spot welds in different geometries where a single parameter (such as maximum stress) is not. The approach has been demonstrated using a model system consisting of spot‐welded joints made from 5754 aluminium sheets. The techniques for determining the cohesive fracture parameters for both nugget fracture and nugget pullout are described in this paper. It has been demonstrated that once the appropriate cohesive parameters for a weld are determined, quantitative predictions can be developed for the strengths, deformations and failure mechanisms of different geometries with nominally identical welds.
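The two-parameter-per-mode idea can be illustrated with a standard bilinear traction-separation law (a common cohesive-zone form; the paper does not specify this exact shape, and all numbers below are illustrative, not the paper's fitted values).

    import numpy as np

    def bilinear_traction(delta, sigma_max, delta_f):
        # Linear rise to the peak stress sigma_max, then linear
        # softening to zero traction at the critical separation
        # delta_f; the area under the curve is the cohesive fracture
        # energy Gamma = 0.5 * sigma_max * delta_f.
        delta_0 = 0.05 * delta_f          # assumed initiation separation
        up = sigma_max * delta / delta_0
        down = sigma_max * (delta_f - delta) / (delta_f - delta_0)
        return np.where(delta < delta_0, up, np.clip(down, 0.0, None))

    # Hypothetical mode-I parameters for a weld nugget.
    d = np.linspace(0.0, 0.12, 200)                          # separation, mm
    t = bilinear_traction(d, sigma_max=300.0, delta_f=0.1)   # traction, MPa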

6.
We consider the problem of constructing metamodels for computationally expensive simulation codes; that is, we construct interpolators/predictors of function values (responses) from a finite collection of evaluations (observations). We use Gaussian process (GP) modeling and kriging, and combine a Bayesian approach, based on a finite set of GP models, with the use of localized covariances indexed by the point where the prediction is made. Our approach is not based on postulating a generative model for the unknown function, but by letting the covariance functions depend on the prediction site, it provides enough flexibility to accommodate arbitrary nonstationary observations. Contrary to kriging prediction with plug-in parameter estimates, the resulting Bayesian predictor is constructed explicitly, without requiring any numerical optimization, and locally adjusts the weights given to the different models according to the data variability in each neighborhood. The predictor inherits the smoothness properties of the covariance functions that are used, and its superiority over plug-in kriging, sometimes also called the empirical-best-linear-unbiased predictor, is illustrated on various examples, including the reconstruction of an oceanographic field over a large region from a small number of observations. Supplementary materials for this article are available online.
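A rough Python sketch of the local-weighting flavor of this construction, assuming a 1-D input, a squared-exponential covariance family, and softmax weights from a local marginal likelihood; this is an illustration of the idea, not the authors' exact Bayesian predictor.

    import numpy as np

    def kriging_predict(Xtr, ytr, xnew, ell, noise=1e-8):
        # Plain zero-mean GP/kriging prediction in 1-D with a
        # squared-exponential covariance of length-scale ell.
        def k(A, B):
            return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
        K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
        return float((k(np.array([xnew]), Xtr) @ np.linalg.solve(K, ytr))[0])

    def local_mixture_predict(Xtr, ytr, xnew, ells, m=8):
        # Score each candidate covariance on the m training points
        # nearest the prediction site (local log marginal likelihood),
        # then combine the kriging predictions with softmax weights.
        idx = np.argsort(np.abs(Xtr - xnew))[:m]
        Xl, yl = Xtr[idx], ytr[idx]
        scores, preds = [], []
        for ell in ells:
            K = (np.exp(-0.5 * (Xl[:, None] - Xl[None, :]) ** 2 / ell**2)
                 + 1e-8 * np.eye(m))
            nll = (0.5 * yl @ np.linalg.solve(K, yl)
                   + 0.5 * np.linalg.slogdet(K)[1])
            scores.append(-nll)
            preds.append(kriging_predict(Xtr, ytr, xnew, ell))
        s = np.array(scores)
        w = np.exp(s - s.max())
        return float(np.dot(w / w.sum(), preds))

    rng = np.random.default_rng(0)
    Xtr = np.sort(rng.uniform(0.0, 10.0, 40))
    ytr = np.sin(Xtr) + 0.05 * rng.standard_normal(40)
    print(local_mixture_predict(Xtr, ytr, 5.0, ells=[0.2, 1.0, 5.0]))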

7.
Regression methods are widely used to estimate the spectral reflectance of object surfaces from camera responses. These methods typically share the same problem setting: an estimation function is built for each sampled wavelength separately, which means that the accuracy of the spectral estimation is reduced when the training set is small. To improve the spectral estimation accuracy, we propose a novel estimation approach based on the support vector regression method. The proposed approach utilizes a composite modeling scheme, which formulates the RGB values and the sampled wavelength together as the input term, to make the most of the information in the training samples. Experimental results show that the proposed method can improve the recovery accuracy when the training set is small.
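The composite input scheme (camera response plus wavelength fed to a single regressor) is easy to mock up with scikit-learn's SVR. Everything below, including the data shapes, kernel, and hyperparameters, is a placeholder sketch rather than the paper's configuration.

    import numpy as np
    from sklearn.svm import SVR

    # Synthetic stand-in data: camera responses and reflectances.
    rng = np.random.default_rng(0)
    n_samples = 30
    wavelengths = np.arange(400, 701, 10)            # sampled, in nm
    rgb = rng.random((n_samples, 3))
    reflectance = rng.random((n_samples, wavelengths.size))

    # Composite scheme: one regressor whose input stacks the RGB values
    # with the sampled wavelength, instead of one model per wavelength.
    # (The wavelength is scaled so it does not dwarf the RGB features.)
    X = np.array([[*rgb[i], wl / 700.0]
                  for i in range(n_samples)
                  for wl in wavelengths])
    y = reflectance.ravel()

    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    est = model.predict([[0.3, 0.5, 0.2, 550 / 700.0]])   # R at 550 nm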

8.
In this paper, we present a new scheme called the maximum log‐likelihood sum (MLSUM) algorithm to simultaneously determine the number of closely‐spaced sources and their locations using uniform linear sensor arrays. Based on the principle of the maximum likelihood (ML) estimator and a newly proposed orthogonal‐projection decomposition technique, the multivariate log‐likelihood maximization problem is transformed into a multistage one‐dimensional log‐likelihood‐sum maximization problem. The global‐optimum solution of the approximated ML localization is obtained by simply maximizing a single one‐dimensional log‐likelihood function at each stage. The algorithm is applicable to coherent as well as incoherent sources. Computer simulations show that the MLSUM algorithm is clearly superior to MUSIC when the element SNR is low and/or the number of snapshots is small.
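To make the one-dimensional maximization concrete, here is a heavily simplified single-source, single-stage version of such a scan for a uniform linear array; the concentrated-likelihood criterion and all signal parameters are assumptions for illustration, and the paper's multistage projection steps are omitted.

    import numpy as np

    def steering(theta_deg, n_sensors, spacing=0.5):
        # ULA steering vector; spacing in wavelengths.
        k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
        return np.exp(1j * k * np.arange(n_sensors))

    def ml_scan(R, n_sensors, grid=np.arange(-90.0, 90.25, 0.25)):
        # One stage of the one-dimensional maximization: scan angles
        # and keep the one maximizing a concentrated likelihood
        # a^H R a / n. Later stages would project out found sources.
        def score(th):
            a = steering(th, n_sensors)
            return np.real(a.conj() @ R @ a) / n_sensors
        return max(grid, key=score)

    # Synthetic sample covariance: one source at 20 degrees, low SNR.
    n, snaps = 8, 50
    rng = np.random.default_rng(0)
    a = steering(20.0, n)
    s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
    X = np.outer(a, s) + 0.7 * (rng.standard_normal((n, snaps))
                                + 1j * rng.standard_normal((n, snaps)))
    R = X @ X.conj().T / snaps
    print(ml_scan(R, n))   # close to 20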

9.
Conceptual modeling is an important initial stage in the life cycle of engineered systems. It is also highly instrumental in studying existing unfamiliar systems—the focus of scientific inquiry. Conceptual modeling methodologies convey key qualitative system aspects, often at the expense of suppressing quantitative ones. We present and assess two approaches for solving this computational simplification problem by combining Object-Process Methodology (OPM), the new ISO/PAS 19450 standard, with MATLAB or Simulink without compromising the holism and simplicity of the OPM conceptual model. The first approach, AUTOMATLAB, expands the OPM model to a full-fledged MATLAB-based simulation. In the second approach, OPM computational subcontractor, computation-enhanced functions replace low-level processes of the OPM model with MATLAB or Simulink models. We demonstrate the OPM computational subcontractor on a radar system computation. Experimenting with students on a model of an online shopping system with and without AUTOMATLAB has indicated important benefits of employing this computation layer on top of the native conceptual OPM model.

10.
Reliability-based robust design optimization (RBRDO) is a crucial tool for life-cycle quality improvement. The Gaussian process (GP) model is an effective modeling technique widely used in robust parameter design, yet few studies address reliability-based design problems using GP models. This article proposes a novel life-cycle RBRDO approach concerning response uncertainty under the framework of the GP modeling technique. First, the hyperparameters of the GP model are estimated using a Gibbs sampling procedure. Second, the expected partial derivative expression is derived based on the GP modeling technique, and a novel failure risk cost function is constructed to assess life-cycle reliability. Then, the quality loss function and confidence interval are constructed from simulated outputs to evaluate the robustness of optimal settings and the response uncertainty, respectively. Finally, an optimization model integrating the failure risk cost function, the quality loss function, and the confidence interval analysis approach is constructed to find reasonable optimal input settings. Two case studies illustrate the performance of the proposed approach. The results show that the proposed approach makes better trade-offs between quality characteristics and reliability requirements by considering response uncertainty.
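Two of the building blocks named here, a quadratic quality loss and a confidence interval computed from simulated outputs, can be sketched generically. The draws below are placeholders for GP posterior samples at candidate settings; this is not the paper's full RBRDO model.

    import numpy as np

    def quality_loss(samples, target, cost=1.0):
        # Expected quadratic quality loss at one candidate setting,
        # estimated from simulated outputs: penalizes both bias from
        # the target and variance, so response uncertainty enters the
        # objective directly.
        mu, var = np.mean(samples), np.var(samples)
        return cost * ((mu - target) ** 2 + var)

    def confidence_interval(samples, level=0.95):
        a = 100.0 * (1.0 - level) / 2.0
        return np.percentile(samples, [a, 100.0 - a])

    # Two hypothetical input settings, each with simulated outputs.
    rng = np.random.default_rng(0)
    y_a = rng.normal(10.2, 0.4, 2000)   # slight bias, low spread
    y_b = rng.normal(10.0, 1.5, 2000)   # on target, high spread
    for name, y in [("A", y_a), ("B", y_b)]:
        print(name, quality_loss(y, target=10.0), confidence_interval(y))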

11.
In this paper, an extension to allow the presence of non-informative left- or right-censored observations in log-symmetric regression models is addressed. Under such models, the log-lifetime distribution belongs to the symmetric class and its location and scale parameters are described by semi-parametric functions of explanatory variables, whose nonparametric components are approximated using natural cubic splines or P-splines. An iterative process of parameter estimation by the maximum penalized likelihood method is presented. The large sample properties of the maximum penalized likelihood estimators are studied analytically and by simulation experiments. Diagnostic methods such as deviance-type residuals and local influence measures are derived. The package ssym, which includes an implementation in the computational environment R of the methodology addressed in this paper, is also discussed. The proposed methodology is illustrated by the analysis of a real data set.

12.
From an abstract point of view, a numerical simulation implements a mathematical function that produces some output from some given input. Derivatives (or sensitivities) of the function's output with respect to its input can be obtained—free from truncation error—by using a technique called automatic differentiation. Given a computer code in a high‐level programming language like Fortran, C, or C++, automatic differentiation generates another code capable of computing not only the original function but also its derivatives. Thus, the application of automatic differentiation significantly extends the functionality of a simulation package. For instance, automatic differentiation enables, in a completely mechanical fashion, the usage of derivative‐based optimization algorithms where the evaluation of the objective function comprises some given large‐scale engineering simulation. In this paper, the automatic differentiation tool ADIFOR is used to transform the general‐purpose finite element package SEPRAN. In doing so, we automatically transform the given 400,000 lines of Fortran 77 into a new program consisting of 600,000 lines of Fortran 77. We compare our approach with a traditional approach based on numerical differentiation and quantify its advantages in terms of accuracy and computational efficiency for a standard fluid flow problem. Copyright © 2003 John Wiley & Sons, Ltd.
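The truncation-error-free property is easy to demonstrate with a toy forward-mode automatic differentiator built on dual numbers (this mimics what tools like ADIFOR do by source transformation, but is of course not ADIFOR itself).

    class Dual:
        # Minimal forward-mode AD: each value carries its derivative
        # along, so the computed derivative is exact to machine
        # precision, with no finite-difference truncation error.
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def f(x):                 # stand-in for a "simulation" code path
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)        # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)       # 17.0 and the exact derivative 14.0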

13.
This work addresses computational modeling challenges associated with structures subjected to sharp, local heating, where complex temperature gradients in the materials cause three‐dimensional, localized, intense stress and strain variation. Because of the nature of the applied loadings, multiphysics analysis is necessary to accurately predict thermal and mechanical responses. Moreover, bridging spatial scales between localized heating and global responses of the structure is nontrivial. A large global structural model may be necessary to represent detailed geometry alone, and to capture local effects, the traditional approach of pre‐designing a mesh requires careful manual effort. These issues often lead to cumbersome and expensive global models for this class of problems. To address them, the authors introduce a generalized FEM (GFEM) approach for analyzing three‐dimensional solid, coupled physics problems exhibiting localized heating and corresponding thermomechanical effects. The capabilities of traditional hp‐adaptive FEM or GFEM as well as the GFEM with global–local enrichment functions are extended to one‐way coupled thermo‐structural problems, providing meshing flexibility at local and global scales while remaining competitive with traditional approaches. The methods are demonstrated on several example problems with localized thermal and mechanical solution features, and accuracy and (parallel) computational efficiency relative to traditional direct modeling approaches are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

14.
Traditional reliability modeling of numerical control (NC) machine tools does not account for differences in working conditions. To address this, a reliability evaluation method based on a mixture variable parameter power law model (MVPPLM) is proposed in this study. First, the scale parameter of the PLM is obtained from a multi‐dimensional exponential distribution. Second, a proportional relation of the failure rate function between each working condition and a reference working condition is established; the proportionality coefficient is solved using the partial likelihood function, and working condition factors with a significant influence on reliability levels are selected through the chi‐squared test. Third, reliability evaluation models under different working condition levels are established through mixture distributions. The mixture weight coefficient is calculated from the standard deviation of the working condition covariates, and the maximum likelihood estimation method is used to estimate parameters. Finally, results of a case analysis based on field-tracking data from NC machine tools in user service show that the MVPPLM has higher precision than the traditional method. Therefore, reliability evaluation that considers working condition differences is valuable for engineering applications.
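As background for the baseline model being extended, the standard power law model (intensity lambda(t) = (beta/theta) * (t/theta)**(beta-1)) has closed-form maximum likelihood estimates for time-truncated data. The sketch below covers only this single-condition baseline with invented failure times, not the mixture/covariate extension the paper proposes.

    import numpy as np

    def plm_mle(failure_times, T):
        # MLEs for the power law model from one machine observed on
        # [0, T] (time-truncated data):
        #   beta_hat  = n / sum(log(T / t_i))
        #   theta_hat = T / n**(1 / beta_hat)
        t = np.asarray(failure_times, dtype=float)
        n = t.size
        beta = n / np.sum(np.log(T / t))
        theta = T / n ** (1.0 / beta)
        return beta, theta

    # Made-up cumulative failure times (hours) for one NC machine tool.
    times = [320.0, 810.0, 1400.0, 2150.0, 2600.0]
    beta, theta = plm_mle(times, T=3000.0)
    # Instantaneous MTBF at T = 1 / lambda(T).
    mtbf_now = 1.0 / ((beta / theta) * (3000.0 / theta) ** (beta - 1))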

15.
A new generalized probabilistic approach to uncertainties is proposed for computational models in structural linear dynamics and can be extended without difficulty to computational linear vibroacoustics and to computational non‐linear structural dynamics. This method allows the prior probability model of each type of uncertainty (model‐parameter uncertainties and modeling errors) to be separately constructed and identified. The modeling errors are not taken into account with the usual output‐prediction‐error method, but with the nonparametric probabilistic approach of modeling errors recently introduced and based on the use of random matrix theory. The theory, an identification procedure, and a numerical validation are presented. Then a chaos decomposition with random coefficients is proposed to represent the prior probabilistic model of random responses. The random germ is related to the prior probability model of model‐parameter uncertainties. The random coefficients are related to the prior probability model of modeling errors and thus depend on the random matrices introduced by the nonparametric probabilistic approach. A validation is presented. Finally, a future perspective is outlined for the case in which experimental data are available: the prior probability model of the random coefficients can be improved by constructing a posterior probability model using the Bayesian approach. Copyright © 2009 John Wiley & Sons, Ltd.

16.
We present a new method for analysing stochastic epidemic models under minimal assumptions. The method, dubbed dynamic survival analysis (DSA), is based on a simple yet powerful observation, namely that population-level mean-field trajectories described by a system of partial differential equations may also approximate individual-level times of infection and recovery. This idea gives rise to a certain non-Markovian agent-based model and provides an agent-level likelihood function for a random sample of infection and/or recovery times. Extensive numerical analyses on both synthetic and real epidemic data from foot-and-mouth disease in the UK (2001) and COVID-19 in India (2020) show good accuracy and confirm the method’s versatility in likelihood-based parameter estimation. The accompanying software package gives prospective users a practical tool for modelling, analysing and interpreting epidemic data with the help of the DSA approach.
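A DSA-flavored sketch of the core observation, assuming a mean-field SIR trajectory and treating the normalized incidence -dS/dt as the density of an individual's infection time; the model choice, parameters, and data below are all assumptions for illustration, not the paper's package.

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir_infection_loglik(infection_times, beta, gamma, s0=0.99,
                             i0=0.01, T=100.0):
        # Solve the mean-field SIR system, then evaluate the
        # log-likelihood of observed infection times under the density
        # proportional to -dS/dt = beta*S*I on [0, T], conditioned on
        # infection occurring by time T.
        rhs = lambda t, y: [-beta * y[0] * y[1],
                            beta * y[0] * y[1] - gamma * y[1]]
        sol = solve_ivp(rhs, (0.0, T), [s0, i0], dense_output=True)
        def density(t):
            s, i = sol.sol(t)
            return beta * s * i
        mass = s0 - sol.sol(T)[0]          # P(infected by time T)
        ts = np.asarray(infection_times)
        return float(np.sum(np.log(density(ts) / mass)))

    # Toy usage: score two candidate (beta, gamma) pairs on fake data.
    data = [8.0, 12.5, 14.0, 20.0, 31.0]
    print(sir_infection_loglik(data, beta=0.3, gamma=0.1),
          sir_infection_loglik(data, beta=0.5, gamma=0.3))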

17.
A nonparametric probabilistic approach for modeling uncertainties in projection‐based, nonlinear, reduced‐order models is presented. When experimental data are available, this approach can also quantify uncertainties in the associated high‐dimensional models. The main underlying idea is twofold: first, to substitute the deterministic reduced‐order basis (ROB) with a stochastic counterpart; and second, to construct the probability measure of the stochastic reduced‐order basis (SROB) on a subset of a compact Stiefel manifold in order to preserve some important properties of a ROB. The stochastic modeling is performed so that the probability distribution of the constructed SROB depends on a small number of hyperparameters. These are determined by solving a reduced‐order statistical inverse problem. The mathematical properties of this novel approach for quantifying model uncertainties are analyzed through theoretical developments and numerical simulations. Its potential is demonstrated through several example problems from computational structural dynamics. Copyright © 2016 John Wiley & Sons, Ltd.
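A loose sketch of the "random basis that stays on the Stiefel manifold" idea: perturb a deterministic ROB with a random term governed by one hyperparameter, then re-orthonormalize by QR so every sample has orthonormal columns. This illustrates the geometric constraint only; it is not the paper's probability measure or hyperparameter identification.

    import numpy as np

    def sample_srob(V, sigma, rng):
        # V is an n x m ROB with orthonormal columns; add a Gaussian
        # perturbation scaled by the hyperparameter sigma and map back
        # to the Stiefel manifold with a QR factorization.
        n, m = V.shape
        Q, _ = np.linalg.qr(V + sigma * rng.standard_normal((n, m)))
        return Q

    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((1000, 10)))  # deterministic ROB
    samples = [sample_srob(V, sigma=0.05, rng=rng) for _ in range(50)]
    # Each sample keeps orthonormal columns: ||Q^T Q - I|| ~ 1e-15.
    print(np.linalg.norm(samples[0].T @ samples[0] - np.eye(10)))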

18.
Excitations of disordered systems such as glasses are of fundamental and practical interest but computationally very expensive to solve. Here, we introduce a technique for modeling these excitations in an infinite disordered medium at reasonable computational cost. The technique relies on a discrete atomic model to simulate the low‐energy behavior of an atomic lattice with molecular impurities. The interaction between different atoms is approximated using a spring‐like interaction based on the Lennard‐Jones potential, but the method can be easily adapted to other potentials. The technique makes it possible to solve a statistically representative number of samples with low computational expense and uses a Monte Carlo approach to reach a state corresponding to any given temperature. This technique has already been applied successfully to a problem of interest in condensed matter physics: the solid solution of N2 in Ar. Copyright © 2015 John Wiley & Sons, Ltd.
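The Monte Carlo ingredient can be illustrated with a textbook Metropolis scheme on a small Lennard-Jones lattice (a generic sketch in reduced units on a 2-D toy lattice, far simpler than the paper's atomic model with molecular impurities).

    import numpy as np

    def lj(r, eps=1.0, sigma=1.0):
        # Lennard-Jones pair potential in reduced units.
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6**2 - sr6)

    def pair_energy(pos, i):
        # Energy of atom i against all other atoms.
        r = np.linalg.norm(np.delete(pos, i, axis=0) - pos[i], axis=1)
        return lj(r).sum()

    def metropolis_step(pos, kT, step, rng):
        # Displace one random atom; accept with probability
        # min(1, exp(-dE/kT)), driving the lattice toward thermal
        # equilibrium at temperature kT.
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] += step * rng.standard_normal(pos.shape[1])
        dE = pair_energy(trial, i) - pair_energy(pos, i)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            return trial
        return pos

    rng = np.random.default_rng(0)
    # 4 x 4 toy lattice in 2-D, spacing 1.1 sigma.
    pos = np.stack(np.meshgrid(*[np.arange(4.0)] * 2), -1).reshape(-1, 2) * 1.1
    for _ in range(5000):
        pos = metropolis_step(pos, kT=0.2, step=0.05, rng=rng)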

19.
Applied research on computer simulation of cushion packaging systems    (Total citations: 11; self-citations: 7; citations by others: 4)
This paper studies the computer simulation workflow for cushion packaging systems and the theory and methods for building the corresponding simulation models. Cushion packaging systems are simulated in the Windows operating environment using the VB language and the MATLAB/Simulink simulation tools, which avoids unnecessary destructive tests, reduces testing costs, and allows model parameters to be revised repeatedly to optimize the structure and dimensions of the cushion packaging.
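A hedged sketch of the kind of model such a simulation typically exercises: the packaged product as a mass on a nonlinear cushion spring with damping, dropped from a given height, with the peak deceleration (in g) as the usual cushioning metric. All parameter values below are illustrative, not from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, c, h, g = 2.0, 8.0e3, 40.0, 0.6, 9.81   # kg, N/m, N*s/m, m, m/s^2
    v0 = np.sqrt(2 * g * h)                        # impact velocity

    def rhs(t, y):
        # x: downward cushion compression; v: downward velocity.
        # A hardening cushion resists only while compressed (x > 0).
        x, v = y
        f_cushion = (k * x * (1 + 50.0 * x**2) + c * v) if x > 0 else 0.0
        return [v, g - f_cushion / m]

    sol = solve_ivp(rhs, (0.0, 0.2), [0.0, v0], max_step=1e-4)
    accel = np.gradient(sol.y[1], sol.t)           # deceleration history
    print("peak deceleration:", np.max(np.abs(accel)) / g, "g")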

20.
This article compares genetic algorithms (GA) and genetic programming (GP) for system modeling in metal forming. As an example, the radial stress distribution in a cold-formed specimen (steel X6Cr13) was predicted by GA and GP. First, cylindrical workpieces were forward extruded and analyzed by the visioplasticity method. After each extrusion, the values of the independent variables (radial position of the measured stress node, axial position of the measured stress node, and coefficient of friction) were collected. These variables influence the value of the dependent variable, the radial stress. On the basis of training data, different prediction models for the radial stress distribution were developed independently by GA and GP. The obtained models were then tested on the testing data. The research has shown that both approaches are suitable for system modeling; however, when the relations between input and output variables are complex, the models developed by the GP approach are much more accurate.
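To illustrate the GA side of the comparison, here is a tiny real-coded GA that searches the coefficients of a fixed model form (unlike GP, which would also evolve the form itself). The model form, GA operators, and synthetic data are all assumptions for the sketch, not the article's setup.

    import numpy as np

    def ga_fit(X, y, pop=60, gens=150, seed=0):
        # Fit sigma = c0 + c1*r + c2*z + c3*mu + c4*r*z by minimizing
        # mean squared error with tournament selection, blend
        # crossover, and Gaussian mutation.
        rng = np.random.default_rng(seed)
        feats = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                 X[:, 2], X[:, 0] * X[:, 1]])
        def fitness(c):
            return -np.mean((feats @ c - y) ** 2)
        P = rng.normal(0.0, 1.0, (pop, 5))
        for _ in range(gens):
            scores = np.array([fitness(c) for c in P])
            children = []
            for _ in range(pop):
                i, j, k, l = rng.integers(pop, size=4)
                pa = P[i] if scores[i] > scores[j] else P[j]  # tournament
                pb = P[k] if scores[k] > scores[l] else P[l]
                w = rng.random()
                child = w * pa + (1 - w) * pb                 # blend crossover
                child += rng.normal(0.0, 0.05, 5)             # mutation
                children.append(child)
            P = np.array(children)
        return P[np.argmax([fitness(c) for c in P])]

    # Synthetic data: (radial pos, axial pos, friction) -> radial stress.
    rng = np.random.default_rng(1)
    X = rng.random((40, 3))
    y = 5 - 3 * X[:, 0] + 2 * X[:, 1] + 4 * X[:, 2] + rng.normal(0, 0.1, 40)
    coeffs = ga_fit(X, y)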
