Similar Literature
20 similar documents found.
1.
Traffic safety on urban arterials is influenced by several key variables, including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS-derived speed data, hereafter called Floating Car Data (FCD), was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments of 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data were obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed that the HPLN-ML models had the best goodness-of-fit and efficiency, and that models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features, including average arterial segment length, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies.
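The generative structure of a Poisson-lognormal crash-frequency model can be sketched as a small simulation. This is an illustrative sketch only: the coefficients, traffic volumes, and speeds below are hypothetical, not the paper's estimates.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_pln_counts(volumes, speeds, beta, sigma, rng):
    """Simulate segment crash counts from a Poisson-lognormal model:
    log(lambda_i) = b0 + b1*log(volume_i) + b2*speed_i + eps_i, eps ~ N(0, sigma^2)."""
    b0, b1, b2 = beta
    counts = []
    for v, s in zip(volumes, speeds):
        eps = rng.gauss(0.0, sigma)  # lognormal heterogeneity term
        lam = math.exp(b0 + b1 * math.log(v) + b2 * s + eps)
        counts.append(poisson_sample(lam, rng))
    return counts

rng = random.Random(1)
volumes = [5000, 12000, 20000]  # hypothetical daily traffic per segment
speeds = [30.0, 42.0, 55.0]     # hypothetical average speeds (km/h)
counts = simulate_pln_counts(volumes, speeds, beta=(-6.0, 0.8, 0.02), sigma=0.3, rng=rng)
```

The hierarchical (HPLN) variants add a further grouping level (segments within arterials), which this sketch omits.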

2.
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data become scarcer as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, the computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.

3.
This paper surveys issues associated with the statistical calibration of physics-based computer simulators. Even in solidly physics-based models there are usually a number of parameters that are suitable targets for calibration. Statistical calibration means refining the prior distributions of such uncertain parameters based on matching some simulation outputs with data, as opposed to the practice of “tuning” or point estimation that is commonly called calibration in non-statistical contexts. Older methods for statistical calibration are reviewed before turning to recent work in which the calibration problem is embedded in a Gaussian process model. In procedures of this type, parameter estimation is carried out simultaneously with the estimation of the relationship between the calibrated simulator and truth.
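As a minimal sketch of what "refining the prior distributions of such uncertain parameters based on matching some simulation outputs with data" means, the following computes a grid-based posterior over one uncertain parameter of a toy simulator. The exponential-decay "simulator", noise level, and grid are hypothetical; the Gaussian process embedding of the recent work surveyed is not attempted here.

```python
import math

def simulator(theta, x):
    # Toy "physics" simulator: exponential decay with uncertain rate theta.
    return math.exp(-theta * x)

def calibrate(xs, ys, theta_grid, prior, noise_sd=0.05):
    """Grid-based Bayesian calibration: posterior weight at each candidate theta
    is prior x Gaussian likelihood of the data given the simulator output."""
    post = []
    for theta, p in zip(theta_grid, prior):
        ll = 0.0
        for x, y in zip(xs, ys):
            r = y - simulator(theta, x)
            ll += -0.5 * (r / noise_sd) ** 2
        post.append(p * math.exp(ll))
    z = sum(post)
    return [w / z for w in post]

xs = [0.5, 1.0, 1.5, 2.0]
true_theta = 0.7
ys = [math.exp(-true_theta * x) for x in xs]   # noise-free synthetic data
theta_grid = [0.1 * i for i in range(1, 21)]   # candidate rates 0.1 .. 2.0
prior = [1.0 / len(theta_grid)] * len(theta_grid)  # flat prior
posterior = calibrate(xs, ys, theta_grid, prior)
best = theta_grid[posterior.index(max(posterior))]
```

With noise-free data the posterior mass concentrates on the grid point nearest the true rate; with real data the whole posterior, not a point estimate, is the calibration result.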

4.
The first motivation of this work is to take model uncertainty into account in sensitivity analysis (SA). We present, with some examples, a methodology to treat uncertainty due to a mutation (change of form) of the studied model. Development of this methodology has highlighted an important problem frequently encountered in SA: how should sensitivity indices be interpreted when the random inputs are not independent? This paper suggests a strategy for the SA of models with non-independent random inputs. We propose a new application of the multidimensional generalization of classical sensitivity indices, resulting from group sensitivities (the sensitivity of the model output to a group of inputs), and describe an estimation method based on Monte Carlo simulation. Practical and theoretical applications illustrate the value of this method.
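A Monte Carlo estimator of a group (closed) sensitivity index can be sketched with a pick-freeze scheme. The toy model, sample size, and independent U(0,1) inputs are assumptions for illustration; the paper's interest is precisely the non-independent case, which this plain estimator does not address.

```python
import random

def model(x1, x2, x3):
    # Toy model: output driven by x1 and by the interaction x2*x3.
    return x1 + 2.0 * x2 * x3

def group_sensitivity(group_idx, n=100000, seed=0):
    """Pick-freeze Monte Carlo estimate of the closed sensitivity index of a
    group u of inputs: S_u = Cov(f(X), f(X_u, X'_~u)) / Var(f(X)),
    with independent U(0,1) inputs."""
    rng = random.Random(seed)
    cross = ya_sum = ya_sq = 0.0
    for _ in range(n):
        xa = [rng.random() for _ in range(3)]
        xb = [rng.random() for _ in range(3)]
        # freeze the group's coordinates, resample the rest
        xc = [xa[i] if i in group_idx else xb[i] for i in range(3)]
        ya, yc = model(*xa), model(*xc)
        cross += ya * yc
        ya_sum += ya
        ya_sq += ya * ya
    mean = ya_sum / n
    var = ya_sq / n - mean * mean
    return (cross / n - mean * mean) / var

s1 = group_sensitivity({0})       # analytic value 0.3 for this toy model
s23 = group_sensitivity({1, 2})   # analytic value 0.7
```

For this additive-plus-interaction model the two closed indices sum to one; in general the gap between an index and its group complement measures interaction strength.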

5.
Bayesian uncertainty analysis with applications to turbulence modeling
In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoIs) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI in a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue predictions with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their predictions of the QoIs. The model posterior probability represents the relative plausibility of a model class given the data; thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of the decision makers who use the model's results. We show that by using both the model plausibility and the predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.
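Comparing model classes by posterior probability reduces, under equal prior model probabilities, to comparing marginal likelihoods (evidences). A hypothetical two-class sketch with grid-integrated evidences (the toy model classes, data, and noise level are invented, not the turbulence models of the paper):

```python
import math

def gauss_loglik(data, pred, sd=0.1):
    # Gaussian log-likelihood of observations given model predictions.
    return sum(-0.5 * ((y - p) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
               for y, p in zip(data, pred))

def evidence(model, data, xs, theta_grid):
    """Marginal likelihood by grid integration over a flat prior on theta."""
    total = 0.0
    for theta in theta_grid:
        pred = [model(theta, x) for x in xs]
        total += math.exp(gauss_loglik(data, pred))
    return total / len(theta_grid)

# Two competing model classes for the same data
linear = lambda a, x: a * x
quadratic = lambda a, x: a * x * x

xs = [0.2 * i for i in range(1, 6)]
data = [1.3 * x for x in xs]             # data actually follow the linear law
grid = [0.05 * i for i in range(1, 61)]  # flat prior over a in (0, 3]

ev = {"linear": evidence(linear, data, xs, grid),
      "quadratic": evidence(quadratic, data, xs, grid)}
z = ev["linear"] + ev["quadratic"]
post = {k: v / z for k, v in ev.items()}  # equal prior model probabilities
```

The evidence automatically penalizes a class that must stretch its parameters to fit, which is why the mismatched quadratic class receives negligible posterior probability here.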

6.
This paper proposes a different likelihood formulation within the Bayesian paradigm for parameter estimation of reliability models. Moreover, the assessment of the uncertainties associated with parameters, the goodness of fit, and the model prediction of reliability are included in a systematic framework for better aiding the model selection procedure. Two case studies are appraised to highlight the contributions of the proposed method and demonstrate the differences between the proposed Bayesian formulation and an existing Bayesian formulation.

7.
Mathematical models described by multivariable functions f(x), where x=(x1,…,xn), are investigated. If the model output is influenced mainly by low-order combinations of the input variables x1,…,xn, an attempt can be made to construct a low-order approximation to the model using values of f(x) only.
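The kind of low-order approximation alluded to can be illustrated with a first-order cut-HDMR expansion, which is built from values of f only. The anchor (cut) point and toy model below are hypothetical:

```python
def cut_hdmr_first_order(f, cut, x):
    """First-order cut-HDMR approximation using values of f only:
    f(x) ~= f(c) + sum_i [ f(c with i-th coordinate set to x_i) - f(c) ],
    where c is the chosen cut (anchor) point."""
    base = f(cut)
    total = base
    for i, xi in enumerate(x):
        pt = list(cut)
        pt[i] = xi          # vary one coordinate at a time from the cut point
        total += f(pt) - base
    return total

# An additive toy model is reproduced exactly by the first-order expansion.
f_add = lambda x: 3.0 * x[0] - 2.0 * x[1] + x[2] ** 2
cut = [0.5, 0.5, 0.5]
x = [0.1, 0.9, 0.4]
approx = cut_hdmr_first_order(f_add, cut, x)
exact = f_add(x)
```

For models with significant interactions, second-order terms (pairs of coordinates varied together) would be added; the cost grows only polynomially with n as long as the order stays low.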

8.
A scenario in a risk analysis can be defined as the propagation of a specific initiating event toward a wide range of undesirable consequences. Taking many such scenarios into consideration makes the risk analysis considerably more complex. Many risk analyses have been performed to estimate a risk profile under uncertain future states of both hazard sources and undesirable scenarios. Unfortunately, for systems such as a radioactive waste disposal facility, the behaviour of future scenarios can hardly be predicted without a special reasoning process, so their risk cannot be estimated with a traditional risk analysis methodology alone. Moreover, we believe that the sources of uncertainty about future states can be reduced by setting up dependency relationships that interrelate the geological, hydrological, and ecological aspects of the site with all the scenarios. The current methodology for uncertainty analysis of waste disposal facilities therefore needs to be revisited in this light.

In order to consider the effects predicted from an evolution of the environmental conditions of waste disposal facilities, this paper proposes a quantitative assessment framework that integrates Bayesian network inference into traditional probabilistic risk analysis. We developed and verified an approximate probabilistic inference program for the specific Bayesian network using a bounded-variance likelihood weighting algorithm. Specific models, including a model for the uncertainty propagation of relevant parameters, were developed, with a comparison of variable-specific effects due to the occurrence of diverse altered evolution scenarios (AESs). After providing supporting information to obtain quantitative expectations about the dependency relationships between domain variables and AESs, we could connect the results of probabilistic inference from the Bayesian network with the consequence evaluation model. We obtained a number of practical results that improve the current knowledge base for prioritizing risk-dominant variables at an actual site.
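Plain likelihood weighting, the baseline that bounded-variance variants refine, can be sketched on a two-node hypothetical network (the node names and probabilities below are invented for illustration, not taken from the paper's facility model):

```python
import random

# Tiny hypothetical network: S = altered scenario occurs, F = barrier failure.
P_S = 0.3
P_F_GIVEN_S = {True: 0.6, False: 0.1}

def likelihood_weighting(evidence_f, n=200000, seed=7):
    """Estimate P(S | F = evidence_f) by likelihood weighting: sample the
    non-evidence node from its prior, weight each sample by the likelihood
    of the observed evidence under that sample."""
    rng = random.Random(seed)
    w_total = 0.0
    w_s = 0.0
    for _ in range(n):
        s = rng.random() < P_S
        w = P_F_GIVEN_S[s] if evidence_f else 1.0 - P_F_GIVEN_S[s]
        w_total += w
        if s:
            w_s += w
    return w_s / w_total

p = likelihood_weighting(True)   # exact answer is 0.18 / 0.25 = 0.72
```

The exact posterior follows from Bayes' rule: P(S|F) = 0.3*0.6 / (0.3*0.6 + 0.7*0.1) = 0.72; the weighted estimate converges to it as n grows.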

9.
Advanced Powder Technology, 2021, 32(8): 2962–2977
The limitations in the numerical treatment of the solids phase in conventional methods such as the Discrete Element Model and the Two-Fluid Model have motivated the development of alternative techniques such as Particle-In-Cell (PIC). However, a number of empirical parameters are involved in PIC. In this work, a global sensitivity analysis of PIC model parameters is performed under three distinct operating regimes common in chemical engineering applications, viz. a settling bed, a bubbling fluidized bed, and a circulating fluidized bed. Simulations were performed using the PIC method in Multiphase Flow with Interphase eXchanges (MFiX), developed by the National Energy Technology Laboratory (NETL). A non-intrusive uncertainty quantification (UQ) approach is applied using Nodeworks to first construct an adequate surrogate model and then identify the most influential parameters in each case. This knowledge will aid in developing an effective design of experiments and in determining optimal parameters through techniques such as deterministic or statistical calibration.
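A much-reduced sketch of "construct a surrogate, then rank parameter influence": fit per-parameter linear slopes to sampled outputs of a stand-in simulator. The toy simulator and its coefficients are hypothetical; this is not MFiX, Nodeworks, or the paper's surrogate.

```python
import random

def toy_simulator(x):
    # Hypothetical stand-in for a PIC simulation output (e.g., bed height):
    # strongly driven by x[0], weakly by x[1], negligibly by x[2].
    return 5.0 * x[0] + 0.5 * x[1] + 0.01 * x[2]

def screen_parameters(f, dim, n=5000, seed=3):
    """Linear-surrogate screening: with independent U(0,1) samples, the slope
    Cov(x_i, y)/Var(x_i) estimates each parameter's linear effect on y."""
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n)]
    ys = [f(x) for x in xs]
    ymean = sum(ys) / n
    slopes = []
    for i in range(dim):
        xm = sum(x[i] for x in xs) / n
        cov = sum((x[i] - xm) * (y - ymean) for x, y in zip(xs, ys)) / n
        var = sum((x[i] - xm) ** 2 for x in xs) / n
        slopes.append(cov / var)
    return slopes

slopes = screen_parameters(toy_simulator, 3)
ranking = sorted(range(3), key=lambda i: -abs(slopes[i]))  # most influential first
```

Real surrogate-based UQ would fit a richer model (polynomial chaos, Gaussian process) and compute variance-based indices from it; the slope screening above only captures linear main effects.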

10.
A multiphase model for Ti–6Al–4V is proposed. This material is widely used in industrial applications and therefore requires accurate behaviour modeling. Tests have been performed in the temperature range from 25 °C to 1020 °C and at strain rates between 10−3 s−1 and 1 s−1. This allowed the identification of a multiphase mechanical model coupled with a metallurgical model. The behaviour of each phase is calibrated by solving an inverse problem that includes a phase transformation model and a mechanical model to simulate tests under thermomechanical loadings. A scale transition rule (β-rule) is proposed in order to represent the redistribution of local stresses linked to the heterogeneity of plastic strain. Finally, this model is applied to two laser-assisted processes: direct laser fabrication and laser welding.

11.
Dynamic models are often used to predict the effects of farmers' practices on crop yield, crop quality, and the environment. These models usually include many parameters that must be estimated from experimental data before practical use. Parameter estimation is a difficult problem, especially when some of the parameters vary across genotypes. These genetic parameters may be estimated from plant breeding experiments, but this is very costly and requires a lot of experimental work. Moreover, some of the genetic parameters may account for only a very small part of the output variance and so do not warrant accurate determination. This paper shows how methods of global sensitivity analysis can be used to evaluate the contributions of the genetic parameters to the variance of model predictions. Two methods are applied to a complex crop model to estimate the sensitivity indices associated with 13 genetic parameters. The results show that only five genetic parameters have a significant effect on crop yield and grain quality.

12.
Footpaths provide an integral component of our urban environments and have the potential to act as safe places for people and the focus for community life. Despite this, the approach to designing footpaths that are safe while providing this sense of place often occurs in silos. There is often very little consideration given to how designing for sense of place impacts safety and vice versa. The aim of this study was to use a systems analysis and design framework to develop a design template for an ‘ideal’ footpath system that embodies both safety and sense of place. This was achieved through using the first phase of the Cognitive Work Analysis framework, Work Domain Analysis, to specify a model of footpaths as safe places for pedestrians. This model was subsequently used to assess two existing footpath environments to determine the extent to which they meet the design requirements specified. The findings show instances where the existing footpaths both meet and fail to meet the design requirements specified. Through utilising a systems approach for footpaths, this paper has provided a novel design template that can inform new footpath design efforts or be used to evaluate the extent to which existing footpaths achieve their safety and sense of place requirements.

13.
Neuronal activities, including calcium and sodium currents, ligand currents, and synaptic transmembrane currents, create electromagnetic fields. Here, an analytic method is suggested to obtain the electromagnetic fields and potential signals resulting from the function of nerve cells inside the brain. The modeling simulates the behavior of cells three-dimensionally. The proposed method employs the electric scalar potential and the magnetic vector potential to solve the time-domain three-dimensional equations using the partial differential method. All ion flows are treated as electrical current densities. In this method, the brain and the cells of interest are meshed so that the problem can be solved numerically. As an example, the electric fields, magnetic fields, and signals generated by cingulum nerve fibers are illustrated and compared at the Cz, Fz, and T3 electrode positions. A direct analysis method based on the actual mechanisms and biophysics of the nervous system is proposed. Employing this direct method leads not only to a better understanding of neuronal activity but also to a more accurate view of the accuracy or inaccuracy of experimental and inverse methods. The analysis of these data provides insights into brain function processes.

14.
A design sensitivity analysis of high‐frequency structural–acoustic problems is formulated and presented. The energy finite element method (EFEM) is used to predict structural–acoustic responses in the high frequency range, where the coupling between structural junctions and the structural–acoustic interface are modelled using power transfer coefficients. The continuum design sensitivity formulation is derived from the governing equation of EFEM and the discrete method is applied in the variation of the structural–structural and structural–acoustic coupling matrices. The direct differentiation and adjoint variable method are both developed for the sensitivity analysis, where the difficulty of the adjoint variable method is overcome by solving a transposed system equation. Parametric design variables such as panel thickness and material damping are considered for sensitivity analysis, and numerical sensitivity results show excellent agreement as compared to analytical finite difference results. Copyright © 2004 John Wiley & Sons, Ltd.
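The validation step used above, checking an analytical design sensitivity against a finite difference, can be sketched generically. The smooth "response" function below is a hypothetical stand-in for an EFEM energy response, not the paper's model:

```python
import math

def response(t):
    # Hypothetical smooth response of a panel-like model to thickness t.
    return 1.0 / (t ** 3) + 0.2 * math.log(t)

def d_response(t):
    # Analytical design sensitivity (derivative) of the response above.
    return -3.0 / (t ** 4) + 0.2 / t

def central_diff(f, t, h=1e-5):
    # Second-order central finite difference approximation of df/dt.
    return (f(t + h) - f(t - h)) / (2.0 * h)

t = 0.8
analytic = d_response(t)
numeric = central_diff(response, t)
rel_err = abs(analytic - numeric) / abs(analytic)
```

A small relative error confirms the analytical sensitivity; in practice the finite-difference route requires one extra full solve per design variable, which is what the adjoint method avoids.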

15.
In this paper, the elastic constants of 2.5D (three-dimensional angle-interlock woven) continuous carbon fiber reinforced silicon carbide (C/SiC) composites are predicted by means of a theoretical model and numerical simulation. A semi-analytical method expressing the elastic constants in terms of microstructure geometrical parameters and constituent properties is proposed. First, the geometrical model of the 2.5D composite and the representative volume elements (RVEs) at both the micro- and meso-scale are constructed. Second, the effective elastic properties of the RVE in 2.5D C/SiC composites are obtained using finite element method (FEM) simulation based on the energy equivalence principle. Finally, the remedied spatial stiffness average (RSSA) method is proposed to obtain more accurate elastic constants using nine correction factor functions determined by FEM simulations, and the effects of geometrical variables on the mechanical properties of 2.5D C/SiC composites are analyzed. These results will play an important role in designing advanced C/SiC composites.
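Before any FEM homogenization, the simplest analytical estimates of an effective modulus are the Voigt (iso-strain) and Reuss (iso-stress) rules of mixtures, which bracket the homogenized result for a two-constituent composite. The moduli and fiber volume fraction below are hypothetical, not the paper's values:

```python
def voigt_modulus(E_f, E_m, vf):
    # Upper (iso-strain) bound on the effective modulus: volume-weighted mean.
    return vf * E_f + (1.0 - vf) * E_m

def reuss_modulus(E_f, E_m, vf):
    # Lower (iso-stress) bound: volume-weighted harmonic mean.
    return 1.0 / (vf / E_f + (1.0 - vf) / E_m)

E_fiber, E_matrix, vf = 230.0, 350.0, 0.4  # GPa; hypothetical fiber/matrix values
upper = voigt_modulus(E_fiber, E_matrix, vf)
lower = reuss_modulus(E_fiber, E_matrix, vf)
```

Any credible homogenized constant (from the RVE simulations or a stiffness-averaging scheme) should fall between these bounds; woven architectures like 2.5D interlocks additionally introduce directional effects that simple mixtures cannot capture.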

16.
Problems involving reaction and species diffusion give rise to field and flux jumps at a moving reaction front. In multi‐scale problems such as carbon fiber composite oxidation, these effects need to be tracked at the microscopic scale of individual carbon fibers. A multi‐scale model is derived in this paper for predicting species distribution in such problems using a fully coupled multi‐scale homogenization approach. The homogenized fluxes from the micro‐scale are derived using Hill's macro‐homogeneity condition, accounting for both flux jumps and species density field jumps at the reacting interface in the micro‐scale unit cell. At the macro‐scale, the competition between the transport of the reacting species (oxygen) and the reaction product (carbon dioxide) is modeled using homogenized mass conservation equations. The moving reaction front in carbon fibers at the micro‐scale is tracked using a level set method and an adaptive meshing strategy. The macroscopic weight loss of the composite when exposed to oxygen is simulated as a function of time using a coupled finite element methodology at various locations in a validated macroscopic model. Copyright © 2010 John Wiley & Sons, Ltd.

17.
The Cognitive Reliability and Error Analysis Method (CREAM) is a common second‐generation Human Reliability Analysis (HRA) method. In this paper, to improve the capabilities of CREAM, we propose a probabilistic method based on a Bayesian Network (BN) to determine the control mode and quantify the Human Error Probability (HEP). The BN development process is described in a four‐phase methodology comprising (i) definition of the nodes and their states; (ii) building the graphical structure; (iii) quantification of the BN through assessment of the Conditional Probability Table (CPT) values; and (iv) model validation. The intractability of knowledge acquisition for large CPTs is the most significant limitation of the existing BN model of CREAM. Thus, the main contribution of this paper lies in its application of the Recursive Noisy‐OR (RN‐OR) gate to treat large CPT assessment and ease knowledge acquisition. RN‐OR allows the combination of dependent Common Performance Conditions (CPCs). Finally, a quantitative HEP analysis is applied to enable more precise estimation of the HEP through a probabilistic approach. Copyright © 2016 John Wiley & Sons, Ltd.
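The standard Noisy-OR gate that RN-OR generalizes shows how one parameter per cause replaces an exponentially large CPT. The extension to dependent causes (the RN-OR contribution) is not reproduced here, and the link probabilities below are hypothetical:

```python
def noisy_or(p_causes, active):
    """Noisy-OR gate: P(effect | active causes) = 1 - product of (1 - p_i)
    over the active causes, assuming the causes act independently."""
    prob_none = 1.0
    for p, on in zip(p_causes, active):
        if on:
            prob_none *= 1.0 - p
    return 1.0 - prob_none

def full_cpt(p_causes):
    """Expand the whole 2^n-row CPT from just one parameter per cause."""
    n = len(p_causes)
    table = {}
    for mask in range(2 ** n):
        active = tuple((mask >> i) & 1 == 1 for i in range(n))
        table[active] = noisy_or(p_causes, active)
    return table

# Hypothetical link probabilities for three performance conditions
p = [0.9, 0.5, 0.2]
cpt = full_cpt(p)
```

With n causes, an expert assesses n numbers instead of 2^n conditional probabilities; this is exactly the knowledge-acquisition saving that motivates using (recursive) Noisy-OR gates in the CREAM BN.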

18.
On the basis of the relaxation element method, the stress field of a configuration of three pores in a plane under uniaxial tensile loading was calculated. A model of the development of sites of localised plastic deformation in a polycrystal with a perfectly rigid inclusion was presented. The results obtained by the relaxation element method were compared with those obtained by the finite element method.

19.
A computational framework is presented to evaluate the shape as well as non‐shape (parameter) sensitivity of finite thermo‐inelastic deformations using the continuum sensitivity method (CSM). Weak sensitivity equations are developed for the large thermo‐mechanical deformation of hyperelastic thermo‐viscoplastic materials that are consistent with the kinematic, constitutive, contact and thermal analyses used in the solution of the direct deformation problem. The sensitivities are defined in a rigorous sense and the sensitivity analysis is performed in an infinite‐dimensional continuum framework. The effects of perturbation in the preform, die surface, or other process parameters are carefully considered in the CSM development for the computation of the die temperature sensitivity fields. The direct deformation and sensitivity deformation problems are solved using the finite element method. The results of the continuum sensitivity analysis are validated extensively by a comparison with those obtained by finite difference approximations (i.e. using the solution of a deformation problem with perturbed design variables). The effectiveness of the method is demonstrated with a number of applications in the design optimization of metal forming processes. Copyright © 2002 John Wiley & Sons, Ltd.

20.
Interactions between deformable composite structures and compressible multiphase flow are common in many marine/submarine problems. Recently, there has been increased interest in the application of composite structures in the marine industry (e.g. propulsion systems, ship hulls, marine platforms, marine turbines, etc.) to take advantage of their high stiffness‐to‐weight and strength‐to‐weight ratios and their high impact/shock resistance. It is therefore important to evaluate the performance of composite structures subject to dynamic loads. In this paper, a coupled Eulerian–Lagrangian numerical method is proposed to model the two‐dimensional (2D) or axisymmetric response of deformable composite structures subject to shock and blast loads. The method couples an Eulerian compressible multiphase fluid solver with a general Lagrangian solid solver using an interface capturing method, and is validated using analytical, numerical, and experimental results. A 2D case study is shown for an underwater explosion beneath a three‐layered composite structure with clamped ends. The importance of 2D fluid–structure interaction effects on the transient response between composite structures and compressible multiphase flow is discussed. Copyright © 2009 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号