20 similar documents retrieved (search time: 0 ms)
1.
Andrzej Banaszuk, Vladimir A. Fonoberov, Thomas A. Frewen, Marin Kobilarov, George Mathew, Igor Mezic, Alessandro Pinto, Tuhin Sahai, Harshad Sane, Alberto Speranzon, Amit Surana. Annual Reviews in Control, 2011, 35(1): 77-98
Development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainty, the necessity to operate in hostile, cluttered urban environments, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which combine continuous vehicle dynamics with discrete models of computation and communication, and because of the size of the problem. We overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with emphasis on efficient uncertainty quantification and robust design, using case studies of missions including model-based target tracking and search, and trajectory planning in an uncertain urban environment. To show that the methodology applies generally to uncertain dynamical systems, we also show examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings and to stability assessment of interconnected power networks.
2.
This paper provides a review of various non-traditional uncertainty models for engineering computation and responds to the criticism of those models. This criticism imputes inappropriateness in representing uncertain quantities and an absence of numerically efficient algorithms to solve industry-sized problems. Non-traditional uncertainty models, however, run counter to this criticism by enabling the solution of problems that defy an appropriate treatment with traditional probabilistic computations due to non-frequentative characteristics, a lack of available information, or subjective influences. The usefulness of such models becomes evident in many cases within engineering practice. Examples include: numerical investigations in the early design stage, the consideration of exceptional environmental conditions and socio-economic changes, and the prediction of the behavior of novel materials based on limited test data. Non-traditional uncertainty models thus represent a beneficial supplement to the traditional probabilistic model and a sound basis for decision-making. In this paper non-probabilistic uncertainty modeling is discussed by means of interval modeling and fuzzy methods. Mixed, probabilistic/non-probabilistic uncertainty modeling is dealt with in the framework of imprecise probabilities possessing the selected components of evidence theory, interval probabilities, and fuzzy randomness. The capabilities of the approaches selected are addressed in view of realistic modeling and processing of uncertain quantities in engineering. Associated numerical methods for the processing of uncertainty through structural computations are elucidated and considered from a numerical efficiency perspective. The benefit of these particular developments is emphasized in conjunction with the meaning of the uncertain results and in view of engineering applications.
3.
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
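As a point of reference for MOAT, the screening method the study found most sample-efficient, the sketch below implements Morris elementary-effects screening in plain NumPy. The toy model, trajectory count, and grid settings are illustrative assumptions, not the SAC-SMA configuration used in the paper.

```python
# Minimal Morris One-At-a-Time (MOAT) elementary-effects screening; the model f,
# trajectory count, and 4-level grid are placeholders, not the SAC-SMA setup.
import numpy as np

def morris_screening(f, n_params, n_trajectories=40, n_levels=4, seed=0):
    rng = np.random.default_rng(seed)
    delta = n_levels / (2.0 * (n_levels - 1))       # standard MOAT step size on [0, 1]
    grid = np.arange(n_levels) / (n_levels - 1)     # p-level grid
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = rng.choice(grid, size=n_params)         # random base point on the grid
        for i in rng.permutation(n_params):         # perturb one factor at a time
            step = delta if x[i] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[i] += step
            effects[i].append((f(x_new) - f(x)) / step)
            x = x_new                               # continue the trajectory from the new point
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])   # mean |EE|: overall importance
    sigma = np.array([np.std(e) for e in effects])               # spread: interactions/nonlinearity
    return mu_star, sigma

# toy 5-parameter model standing in for a hydrological simulator
mu_star, sigma = morris_screening(lambda x: x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2], n_params=5)
print(mu_star.round(3), sigma.round(3))
```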
4.
H.-R. Bae, R. V. Grandhi, R. A. Canfield. Structural and Multidisciplinary Optimization, 2006, 31(4): 270-279
Sensitivity analysis for the quantified uncertainty in evidence theory is developed. In reliability quantification, classical probabilistic analysis has been a popular approach in many engineering disciplines. However, when we cannot obtain sufficient data to construct probability distributions in a large, complex system, the classical probability methodology may not be appropriate to quantify the uncertainty. Evidence theory, also called Dempster–Shafer theory, has the potential to quantify aleatory (random) and epistemic (subjective) uncertainties because it can directly handle insufficient data and incomplete knowledge situations. In this paper, interval information is assumed for the best representation of imprecise information, and the sensitivity analysis of plausibility in evidence theory is analytically derived with respect to expert opinions and structural parameters. The results from the sensitivity analysis are expected to be very useful in finding the major contributors to quantified uncertainty and also in redesigning the structural system for risk minimization.
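To make the interval-evidence setting concrete, the following sketch computes belief and plausibility from interval-valued focal elements; the focal intervals, masses, and query interval are hypothetical, not data from the paper.

```python
# Minimal belief/plausibility computation in Dempster-Shafer evidence theory for
# interval-valued expert information; all numbers below are made-up examples.
def belief_plausibility(focal_elements, query):
    """focal_elements: list of ((lo, hi), mass); query: interval (lo, hi)."""
    q_lo, q_hi = query
    bel = sum(m for (lo, hi), m in focal_elements if q_lo <= lo and hi <= q_hi)   # focal sets inside the query
    pl = sum(m for (lo, hi), m in focal_elements if hi >= q_lo and lo <= q_hi)    # focal sets intersecting it
    return bel, pl

# two experts, each assigning mass to an interval estimate of a structural parameter
evidence = [((2.0, 4.0), 0.6), ((3.0, 6.0), 0.4)]
print(belief_plausibility(evidence, (2.0, 5.0)))   # Bel <= Pl brackets the imprecise probability
```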
5.
Efficient sampling strategies that scale with the size of the problem, computational budget, and users’ needs are essential for various sampling-based analyses, such as sensitivity and uncertainty analysis. In this study, we propose a new strategy, called Progressive Latin Hypercube Sampling (PLHS), which sequentially generates sample points while progressively preserving the distributional properties of interest (Latin hypercube properties, space-filling, etc.), as the sample size grows. Unlike Latin hypercube sampling, PLHS generates a series of smaller sub-sets (slices) such that (1) the first slice is Latin hypercube, (2) the progressive union of slices remains Latin hypercube and achieves maximum stratification in any one-dimensional projection, and as such (3) the entire sample set is Latin hypercube. The performance of PLHS is compared with benchmark sampling strategies across multiple case studies for Monte Carlo simulation, sensitivity and uncertainty analysis. Our results indicate that PLHS leads to improved efficiency, convergence, and robustness of sampling-based analyses.
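For contrast with PLHS, here is a sketch of ordinary one-shot Latin hypercube sampling, the baseline design that PLHS extends by growing the sample in slices that preserve the Latin hypercube property; the sample size and dimension are arbitrary.

```python
# Ordinary (one-shot) Latin hypercube sampling on the unit hypercube.
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    rng = np.random.default_rng(seed)
    # one stratum per sample in each dimension, permuted independently, jittered within the stratum
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_dims, 1)), axis=1).T
    return (strata + rng.random((n_samples, n_dims))) / n_samples

x = latin_hypercube(10, 3, seed=0)
# each column hits every interval [k/10, (k+1)/10) exactly once
print(np.sort((x * 10).astype(int), axis=0).T)
```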
6.
Air pollution in the atmosphere derives from complex non-linear relationships involving anthropogenic and biogenic precursor emissions. Due to this complexity, Decision Support Systems (DSSs) are important tools to help environmental authorities control and improve air quality, reducing pollution impacts on humans and ecosystems. DSSs implementing cost-effective or multi-objective methodologies require fast air quality models able to properly describe the relations between emissions and air quality indexes. These surrogate models (SMs) are identified by processing deterministic model simulation data. In this work, the Lazy Learning technique has been applied to reproduce the relations linking precursor emissions and pollutant concentrations. Since computational time has to be minimized without losing precision and accuracy, tests aimed at reducing the amount of input data have been performed on a case study over the Lombardia Region in Northern Italy.
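A lazy-learning surrogate of this kind can be sketched as a locally weighted nearest-neighbour regression from precursor emissions to a pollutant concentration index; the synthetic data and the scikit-learn estimator below are illustrative stand-ins for the deterministic-model simulations of the Lombardia case study.

```python
# Lazy-learning surrogate sketch: a distance-weighted k-NN regression that defers all work
# to query time. The emission-concentration data are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
emissions = rng.uniform(0, 1, size=(500, 2))             # e.g. NOx and VOC reduction fractions
concentration = (60 - 25 * emissions[:, 0] - 15 * emissions[:, 1]
                 + 10 * emissions[:, 0] * emissions[:, 1] + rng.normal(0, 1, 500))

surrogate = KNeighborsRegressor(n_neighbors=15, weights="distance")
surrogate.fit(emissions, concentration)                   # "training" is just storing the data
print(surrogate.predict([[0.3, 0.5]]))                    # the query is answered locally at prediction time
```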
7.
Parametrized surrogate models are used in alloy modeling to quickly obtain otherwise expensive properties such as quantum mechanical energies, and thereafter used to optimize, or simply compute, some alloy quantity of interest, e.g., a phase transition, subject to given constraints. Once learned on a data set, the surrogate can compute alloy properties fast, but with an increased uncertainty compared to the computer code. This uncertainty propagates to the quantity of interest and in this work we seek to quantify it. Furthermore, since the alloy property is expensive to compute, we only have available a limited amount of data from which the surrogate is to be learned. Thus, limited data further increases the uncertainties in the quantity of interest, and we show how to capture this as well. We cannot, and should not, trust the surrogate before we quantify the uncertainties in the application at hand. Therefore, in this work we develop a fully Bayesian framework for quantifying the uncertainties in alloy quantities of interest, originating from replacing the expensive computer code with the fast surrogate, and from limited data. We consider a particular surrogate popular in alloy modeling, the cluster expansion, and aim to quantify how well it captures quantum mechanical energies. Our framework is applicable to other surrogates and alloy properties.
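One hedged way to picture the setting is Bayesian linear regression over cluster-like basis functions: a posterior over expansion coefficients, learned from limited data, gives a predictive mean and variance for the energy of a new configuration. The correlation functions, noise level, and prior width below are illustrative assumptions, not the paper's cluster expansion or framework.

```python
# Bayesian linear regression sketch for a cluster-expansion-like surrogate: the posterior over
# coefficients propagates surrogate and data-scarcity uncertainty to a new configuration.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_clusters = 30, 8                                # limited "DFT-like" data, few basis functions
X = rng.choice([-1.0, 1.0], size=(n_data, n_clusters))    # toy correlation functions
true_ecis = rng.normal(0, 0.1, n_clusters)
y = X @ true_ecis + rng.normal(0, 0.01, n_data)           # toy "quantum mechanical" energies

sigma2, tau2 = 0.01 ** 2, 0.1 ** 2                        # noise variance, prior variance on coefficients
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(n_clusters) / tau2)
post_mean = post_cov @ X.T @ y / sigma2

x_new = rng.choice([-1.0, 1.0], size=n_clusters)          # a new alloy configuration
pred_mean = x_new @ post_mean
pred_std = np.sqrt(x_new @ post_cov @ x_new + sigma2)     # surrogate + limited-data uncertainty
print(f"E = {pred_mean:.4f} +/- {pred_std:.4f}")
```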
8.
P. Baraldi, M. Librizzi, E. Zio, L. Podofillini, V. N. Dang. Expert Systems with Applications, 2009, 36(10): 12461-12471
Problems characterized by qualitative uncertainty described by expert judgments can be addressed by the fuzzy logic modeling paradigm, structured within a so-called fuzzy expert system (FES) to handle and propagate the qualitative, linguistic assessments made by the experts. Once constructed, the FES model should be verified to make sure that it correctly represents the experts’ knowledge. For FES verification, there is typically not enough data to directly support and compare the expert- and FES-inferred solutions. Thus, indirect methods must be developed for determining whether the expert system model provides a proper representation of the expert knowledge. A possible way to proceed is to examine the importance of the different input factors in determining the output of the FES model and to verify whether it agrees with the experts’ conceptualization of the model. In this view, two sensitivity and uncertainty analysis techniques applicable to generic FES models are proposed in this paper, with the objective of providing appropriate verification tools to support the experts in the FES design phase. To analyze the insights gained by using the proposed techniques, a case study concerning an FES developed in the field of human reliability analysis is considered.
9.
In this article we present our recent efforts in designing a comprehensive, consistent scientific workflow, nicknamed Wolf2Pack, for force-field optimization in the field of computational chemistry. Atomistic force fields represent a multiscale bridge that connects high-resolution quantum mechanics knowledge to coarser molecular mechanics-based models. Force-field optimization has so far been a time-consuming and error-prone process, and it is a topic where the use of a scientific workflow provides great benefits. As a case study we generate a gas-phase force field for methanol using Wolf2Pack, with special attention given to deriving partial atomic charges.
10.
Optimal Latin Hypercubes (OLH) created in a constrained design space might produce Design of Experiments (DoE) containing infeasible points if the underlying formulation disregards the constraints. Simply omitting these infeasible points leads to a DoE with fewer experiments than desired and to a set of points that is not optimally distributed. By using the same number of points a better mapping of the feasible space can be achieved. This paper describes the development of a procedure that creates OLHs for constrained design spaces. An existing formulation is extended to meet this requirement. Here, the OLH is found by minimizing the Audze-Eglais potential energy of the points using a permutation genetic algorithm. Examples validate the procedure and demonstrate its capabilities in finding space-filling Latin Hypercubes in arbitrarily shaped design spaces.
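The sketch below illustrates the ingredients named in the abstract on a toy problem: the Audze-Eglais potential energy of a point set and a crude swap-based search over column permutations inside a hypothetical linear constraint. It is a stand-in for, not a reproduction of, the permutation genetic algorithm of the paper.

```python
# Audze-Eglais potential of a Latin hypercube and a naive swap search restricted to a
# hypothetical constrained design space (x1 + x2 <= 1.5); all settings are illustrative.
import numpy as np

def audze_eglais(points):
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.sum(1.0 / d2[iu])                        # sum of 1/d^2 over all point pairs (minimize)

def feasible(points):
    return bool(np.all(points.sum(axis=1) <= 1.5))     # hypothetical linear design-space constraint

rng = np.random.default_rng(0)
n, d = 12, 2

def random_lh():
    perm = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (perm + 0.5) / n                            # centered Latin hypercube points

best = random_lh()
while not feasible(best):                              # restart until the initial LH is feasible
    best = random_lh()

for _ in range(2000):                                  # swap two levels in one column, keep improvements
    cand = best.copy()
    j = rng.integers(d)
    a, b = rng.choice(n, size=2, replace=False)
    cand[[a, b], j] = cand[[b, a], j]
    if feasible(cand) and audze_eglais(cand) < audze_eglais(best):
        best = cand
print(round(audze_eglais(best), 3))
```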
11.
Alexandros Kiparissides, Michalis Koutinas, Cleo Kontoravdi, Athanasios Mantalaris, Efstratios N. Pistikopoulos. Automatica, 2011, (6): 1147-1155
This work presents a holistic ‘closed loop’ approach for the development of models of biological systems. The ever-increasing availability of experimental information necessitates the advancement of a systematic methodology to organise and utilise these data. Herein, we present a biological model building framework that maps the treatment of the information from the initial conception of the model, through its experimental validation and finally to its application in model-based optimisation studies. We highlight and discuss current issues associated with the development of mathematical models of biological systems and share our perspective towards a holistic ‘closed loop’ approach that will facilitate the control of the in vitro through the in silico.
12.
The variational method is applied to the state equations in order to derive the costate equations and their boundary conditions. Thereafter, analyses of the eigenvalues of the state and costate equations are performed. It is shown that the eigenvalues of the Jacobian matrices of the state equations and the transposed Jacobian matrices of the costate equations are analytically and numerically the same. Based on the eigenvalue analysis, the costate equations with their boundary conditions are numerically integrated. Numerical results for the eigenvalue problems of the state and costate equations and for a maximization problem are finally presented.
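For reference, the standard state/costate relations underlying this kind of analysis can be written, for a generic optimal-control problem with dynamics f and running cost L (not taken from the paper), as:

```latex
\begin{align}
  \dot{x} &= f(x, u, t), \qquad
  H(x, \lambda, u, t) = L(x, u, t) + \lambda^{\mathsf{T}} f(x, u, t), \\
  \dot{\lambda} &= -\frac{\partial H}{\partial x}
    = -\frac{\partial L}{\partial x}
      - \left(\frac{\partial f}{\partial x}\right)^{\mathsf{T}} \lambda .
\end{align}
```

The Jacobian of the costate dynamics with respect to λ is therefore the negative transpose of the state Jacobian ∂f/∂x, which is what links the two eigenvalue problems discussed in the abstract.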
13.
巩曰泰. 自动化与仪器仪表 (Automation & Instrumentation), 2009, (6): 100-102
In accordance with the metrological technical specification JJF1059 (Evaluation and Expression of Uncertainty in Measurement), issued and implemented by the State Bureau of Quality and Technical Supervision, our organization carried out a new uncertainty evaluation for secondary temperature-measuring instruments used with thermal resistance sensors. The evaluation was performed according to the new standard and has passed the provincial bureau's re-examination.
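A generic GUM-style evaluation of the kind JJF1059 prescribes combines a Type A component from repeated readings with a Type B component from instrument resolution; the readings and resolution below are made-up numbers, not the thermal-resistance instrument budget evaluated in the paper.

```python
# Generic uncertainty evaluation: Type A (repeatability) + Type B (resolution), combined
# and expanded with coverage factor k = 2. All numbers are illustrative placeholders.
import math

readings = [100.02, 100.05, 100.03, 100.04, 100.02, 100.06]   # repeated indications, degC
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))
u_a = s / math.sqrt(n)                      # Type A: standard uncertainty of the mean

resolution = 0.01                           # instrument resolution, degC
u_b = resolution / (2 * math.sqrt(3))       # Type B: rectangular distribution over +/- resolution/2

u_c = math.sqrt(u_a ** 2 + u_b ** 2)        # combined standard uncertainty
U = 2 * u_c                                 # expanded uncertainty, k = 2
print(f"result = {mean:.3f} degC, U(k=2) = {U:.4f} degC")
```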
14.
Malcolm McPhee, Jim Oltjen, James Fadel, David Mayer, Roberto Sainz. Mathematics and Computers in Simulation, 2009
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle, to help beef producers meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for the energy needs of maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100 days, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to assist the beef industry in meeting market specifications.
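The two generic equation forms named in the abstract, logistic growth for DNA and a first-order differential equation for a fat depot, can be sketched as below; the rate constants, asymptotes, and initial values are illustrative placeholders rather than fitted Davis Growth Model coefficients.

```python
# Generic logistic-growth and first-order ODE forms integrated over days on feed;
# all parameter values are placeholders, not Davis Growth Model estimates.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k_dna=0.03, dna_max=1.0, k_fat=0.02, fat_eq=80.0):
    dna, fat = y
    d_dna = k_dna * dna * (1.0 - dna / dna_max)      # logistic DNA accretion
    d_fat = k_fat * (fat_eq - fat)                   # first-order approach to an equilibrium depot size
    return [d_dna, d_fat]

sol = solve_ivp(rhs, (0.0, 300.0), y0=[0.1, 5.0], t_eval=np.linspace(0, 300, 7))
print(sol.t)
print(sol.y.round(2))                                # DNA (relative) and fat depot (kg) over days on feed
```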
15.
This work focuses on the fast computation of the moment-independent importance measure δi. We first analyse why δi is associated with a possible computational complexity problem. One reason is the use of two-loop Monte Carlo simulation, whose rate of convergence is O(N^(−1/4)); another is the computation of the norm of the difference between a density and a conditional density. We find that these difficulties are not essential and propose associated improvements. A kernel estimate is introduced to avoid the use of two-loop Monte Carlo simulation, and a moment expansion of the associated norm, which is not simply obtained by using the Edgeworth series, is proposed to avoid the density estimation. A fast computational method for δi is then introduced. In our method, all δi can be obtained using a single sample set. From the comparison of the numerical error analyses, we believe that the proposed method clearly helps to improve computational efficiency.
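A common single-sample-set estimator of δi in the same spirit (kernel density estimates plus conditioning by binning Xi) is sketched below on a toy model; it is not necessarily the exact scheme proposed in the paper.

```python
# Single-loop estimate of the moment-independent measure delta_i: Gaussian KDEs for the
# unconditional and conditional output densities, conditioning by binning X_i.
# The three-input toy model is a placeholder.
import numpy as np
from scipy.stats import gaussian_kde

def delta_i(x, y, i, n_bins=20, grid_size=200):
    grid = np.linspace(y.min(), y.max(), grid_size)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)                          # unconditional density of Y
    edges = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x[:, i] >= lo) & (x[:, i] <= hi)
        if mask.sum() < 5:
            continue
        f_cond = gaussian_kde(y[mask])(grid)             # density of Y given X_i in this bin
        shift = np.sum(np.abs(f_y - f_cond)) * dy        # L1 distance between the densities
        total += (mask.sum() / len(y)) * shift           # weight by the probability of the bin
    return 0.5 * total

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
y = x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * rng.normal(size=5000)   # toy model
print([round(delta_i(x, y, i), 3) for i in range(3)])
```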
16.
The US carpet industry is striving to reach a 40% diversion rate from landfills by 2012, according to a memorandum of understanding signed by industry and government officials in 2002. As a result, they are interested in methods of setting up a reverse logistics (RL) system which will allow them to manage the highly variable return flows. In this paper, we simulate such a carpet RL supply chain and use a designed experiment to analyze the impact of the system design factors as well as environmental factors impacting the operational performance of the RL system. First, we identify the relative importance of various network design parameters. We then show that even with the design of an efficient RL system, the use of better recycling technologies, and optimistic growth in recycling rates, the return flows cannot meet demand for nearly a decade. We conclude by discussing possible management options for the carpet industry to address this problem, including legal responses to require return flows and the use of market incentives for recycling.
17.
Decision support tools are increasingly used in operations where key decision inputs such as demand, quality, or costs are uncertain. Often such uncertainties are modeled with probability distributions, but very little attention is given to the shape of the distributions. For example, state-of-the-art planning systems have weak, if any, capabilities to account for the distribution shape. We consider demand uncertainties of different shapes and show that the shape can considerably change the optimal decision recommendations of decision models. Inspired by discussions with a leading consumer electronics manufacturer, we analyze how four plausible demand distributions affect three representative decision models that can be employed in support of inventory management, supply contract selection and capacity planning decisions. It is found, for example, that in supply contracts, flexibility is valued much more when demand is negatively skewed (i.e., has downside potential) than when it is positively skewed. We then analyze the value of distributional information in the light of these models to find out how the scope of improvement actions that aim to decrease demand uncertainty varies depending on the decision to be made. Based on the results, we present guidelines for effective utilization of probability distributions in decision models for operations management.
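As one hedged illustration of how distribution shape moves a decision, the sketch below evaluates a newsvendor order quantity (a representative inventory-management model, not necessarily one of the three models used in the paper) under a symmetric and a right-skewed demand with the same mean; the costs and distribution parameters are hypothetical.

```python
# Newsvendor order quantity under two demand shapes with the same mean; costs are hypothetical.
from scipy import stats

underage, overage = 8.0, 2.0                       # cost per unit of shortage vs. excess
critical_ratio = underage / (underage + overage)   # optimal service level = 0.8

mean, std = 100.0, 30.0
s = 0.5                                            # lognormal shape (controls right skew)
scale = mean / stats.lognorm(s).mean()             # rescale so the lognormal mean is also 100

demands = {
    "symmetric (normal)": stats.norm(mean, std),
    "right-skewed (lognormal)": stats.lognorm(s, scale=scale),
}
for name, dist in demands.items():
    q = dist.ppf(critical_ratio)                   # quantile solution of the newsvendor problem
    print(f"{name:>26}: order {q:.1f} units")
```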
18.
LAI retrieval and uncertainty evaluations for typical row-planted crops at different growth stages
Leaf area index (LAI) is a basic quantity indicating crop growth and plays a significant role in agricultural, ecological and meteorological models at local, regional and global scales. A common approach is to invert LAI from canopy reflectance models using optimization methods. Radiative transfer models for continuous vegetation canopies, such as the SAIL model, are widely used for crop LAI inversion. However, crops in China are mostly planted in rows and do not fit the assumptions of continuous vegetation, especially at earlier growth stages. What kind of model should be used to invert LAI for typical row-planted crops at different growth stages? Taking corn as an example, the factors that influence row-planted crop LAI estimation are investigated in this paper. Using computer-simulated BRDF data sets, different models for LAI inversion at different growth stages are evaluated based on parameter sensitivity analysis. Bayes theory is used to introduce a priori knowledge into the inversion process. In 2005, a field campaign was carried out to validate LAI inversion accuracy during corn's growth stages in Huailai, Hebei Province, China. LAI values inverted from both the measured Canopy Reflectance (CR) data and Moderate Resolution Imaging Spectroradiometer (MODIS) data are very promising. The results show that at least two kinds of models should be adopted for a corn canopy at different growth stages, i.e., a row-structure model for the early growth stage (before elongation) and a homogeneous canopy model for the later growth stage (after elongation).
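The inversion step itself can be sketched, in heavily simplified form, as minimization of a cost function that balances misfit to an observed reflectance against a priori knowledge of LAI; the toy gap-fraction reflectance model, observation, and prior below are illustrative and stand in for the SAIL or row-structure models discussed in the abstract.

```python
# LAI inversion as regularized cost-function minimization; the "canopy reflectance model"
# is a toy Beer-law gap-fraction relation, and all numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

def toy_canopy_reflectance(lai, r_soil=0.15, r_leaf=0.45, k=0.5):
    gap = np.exp(-k * lai)                        # Beer-law gap fraction
    return r_leaf * (1.0 - gap) + r_soil * gap    # mixed soil/vegetation reflectance

observed = 0.38                                    # synthetic "measured" reflectance
lai_prior, prior_sd, obs_sd = 2.0, 1.0, 0.02       # a priori knowledge and error levels

cost = lambda lai: ((toy_canopy_reflectance(lai) - observed) / obs_sd) ** 2 \
                   + ((lai - lai_prior) / prior_sd) ** 2
result = minimize_scalar(cost, bounds=(0.0, 8.0), method="bounded")
print(f"retrieved LAI = {result.x:.2f}")
```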
19.
In recent years, both parameter estimation and fractional calculus have attracted considerable interest. Parameter estimation for fractional dynamical models is a new topic. In this paper, we consider novel techniques for parameter estimation of fractional nonlinear dynamical models in systems biology. First, a computationally effective fractional Predictor-Corrector method is proposed for simulating fractional complex dynamical models. Second, we convert the parameter estimation of fractional complex dynamical models into a minimization problem over the unknown parameters. Third, a modified hybrid simplex search (MHSS) combined with particle swarm optimization (PSO) is proposed. Finally, these techniques are applied to a dynamical model of competence induction in a cell with measurement error and noisy data. Numerical results are given that demonstrate the effectiveness of the theoretical analysis.
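The optimization half of such a scheme can be sketched with a plain global-best particle swarm minimizing a least-squares misfit; this is basic PSO applied to a toy exponential-decay model, not the MHSS-PSO hybrid or a fractional dynamical model from the paper.

```python
# Basic global-best PSO fitting two parameters of a toy model to noisy data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 40)
data = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.02, t.size)        # noisy "measurements"

def cost(p):                                                        # p = (amplitude, rate)
    return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

n_particles, n_iter, dim = 30, 200, 2
pos = rng.uniform([0.0, 0.0], [5.0, 3.0], size=(n_particles, dim))  # initial swarm in the search box
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)   # inertia + cognitive + social
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated (amplitude, rate):", gbest.round(3))
```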
20.
This paper introduces a model validation method for addressing the credibility of complex-system simulations. The method first decomposes the complex system into relatively simple subsystems, benchmark systems, and units, yielding a hierarchical model tree. The models in the tree are then ranked and validation experiments are scheduled. Next, an information-difference method is used to perform single-level validation of the lower-level models for which experimental data are available. Finally, sensitivity analysis propagates the validation results of the lower-level models to their parent models, producing a validation result for the full-system model. The proposed method is suitable for complex systems that can be decomposed hierarchically and for which experimental data are scarce.