Similar Articles
20 similar articles found.
1.
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on multi-objective auto-calibration (AMALGAM, a multialgorithm, genetically adaptive multi-objective method) and a Bayesian approach based on a simplified Markov chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria were set for the likelihood formulation, the number of simulations, and the measure of the uncertainty bounds. Moreover, all the uncertainty techniques were applied to the same case study, using the same stormwater quantity and quality model and the same dataset. For a well-posed rainfall/runoff model, the four methods provide similar probability distributions of the model parameters and similar model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider, and the paper discusses the specific advantages and disadvantages of each method. In terms of computational efficiency (i.e. the number of iterations required to generate the probability distribution of parameters), SCEM-UA and AMALGAM produce results more quickly than GLUE in terms of the required number of simulations. However, GLUE requires the least modelling skill and is easy to implement. The non-Bayesian methods have problems with the way behavioural parameter sets are accepted: GLUE, SCEM-UA and AMALGAM rely on subjective acceptance thresholds, while MICA typically has problems with its assumption of normally distributed residuals. It is concluded that modellers should select the method that is most suitable for the system they are modelling (e.g. the complexity of the model structure, including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study.
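To make the GLUE procedure compared here concrete, the minimal sketch below applies it to a toy linear-reservoir rainfall/runoff model: Monte Carlo sampling of the parameter, an informal Nash-Sutcliffe likelihood, a subjective behavioural threshold, and likelihood-weighted 5-95% prediction bounds. The model, threshold and synthetic data are illustrative assumptions, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_reservoir(k, rain):
    """Toy conceptual runoff model: storage drained at rate k per time step."""
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

# Synthetic "observations" from an assumed true parameter plus noise (illustrative only)
rain = rng.gamma(2.0, 2.0, size=100)
q_obs = linear_reservoir(0.3, rain) + rng.normal(0.0, 0.2, size=100)

# 1. Monte Carlo sampling of the parameter from its prior range
k_samples = rng.uniform(0.05, 0.95, size=5000)

# 2. Informal likelihood (Nash-Sutcliffe efficiency) for each sampled parameter set
def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

sims = np.array([linear_reservoir(k, rain) for k in k_samples])
L = np.array([nse(s, q_obs) for s in sims])

# 3. Keep "behavioural" sets above a subjective acceptance threshold
behavioural = L > 0.7
sims_b = sims[behavioural]
weights = L[behavioural] / L[behavioural].sum()

# 4. Likelihood-weighted 5-95% prediction bounds at every time step
lower, upper = [], []
for t in range(len(q_obs)):
    idx = np.argsort(sims_b[:, t])
    q_sorted = sims_b[idx, t]
    w_sorted = np.cumsum(weights[idx])
    lower.append(np.interp(0.05, w_sorted, q_sorted))
    upper.append(np.interp(0.95, w_sorted, q_sorted))

print(f"behavioural sets: {behavioural.sum()} / {len(k_samples)}")
print(f"coverage of observations: {np.mean((q_obs >= lower) & (q_obs <= upper)):.2f}")
```

The hard-coded acceptance threshold in step 3 is exactly the subjective choice the abstract flags as a drawback of the non-Bayesian methods.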

2.
CPT-Based Liquefaction Evaluation Using Artificial Neural Networks
This article presents various artificial neural network (ANN) models for evaluating the liquefaction resistance and liquefaction potential of sandy soils. Various issues concerning ANN modeling, such as data preprocessing, training algorithms, and implementation, are discussed. The ANNs are trained and tested with a large historical database of liquefaction performance at sites where cone penetration test (CPT) measurements are available. The ANN models are found to be effective in predicting liquefaction resistance and potential. The developed ANN models are ported to a spreadsheet for ease of use. A simple procedure for conducting uncertainty analysis to address parameter and model uncertainties is also presented using the ANN-based spreadsheet model. This uncertainty analysis is carried out using @Risk, an add-in macro that works with popular spreadsheet programs such as Microsoft Excel and Lotus 1-2-3. The results of the present study show that the developed ANN model has potential as a practical design tool for assessing the liquefaction resistance of sandy soils.
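A minimal sketch of the workflow described above: train an ANN classifier on CPT-style records, then propagate input uncertainty through it by Monte Carlo sampling (a stand-in for the @Risk spreadsheet analysis). The synthetic data, labelling rule and parameter distributions are illustrative assumptions, not the historical database used in the article.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic CPT-style records: [qc1N (normalized tip resistance), CSR (cyclic stress ratio)]
n = 2000
qc1n = rng.uniform(20, 250, n)
csr = rng.uniform(0.05, 0.5, n)
# Illustrative labelling rule standing in for observed field performance
liquefied = (csr > 0.008 * qc1n ** 0.8 + rng.normal(0, 0.03, n)).astype(int)

X = np.column_stack([qc1n, csr])
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1))
model.fit(X, liquefied)

# Monte Carlo uncertainty analysis: treat the design-point inputs as random variables
n_mc = 10_000
qc1n_mc = rng.normal(100, 15, n_mc)   # assumed site mean and scatter
csr_mc = rng.normal(0.22, 0.03, n_mc)
p_liq = model.predict_proba(np.column_stack([qc1n_mc, csr_mc]))[:, 1]

print(f"mean probability of liquefaction: {p_liq.mean():.2f}")
print(f"90% interval of predicted probability: "
      f"[{np.percentile(p_liq, 5):.2f}, {np.percentile(p_liq, 95):.2f}]")
```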

3.
Spatial information on the rockhead is crucial for the design and construction of tunnels and underground excavations. Although conventional site investigation methods (i.e. borehole drilling) provide local engineering geological information, accurately predicting the rockhead position from limited borehole data remains challenging because of its spatial variation and the great uncertainties involved. With the development of computer science, machine learning (ML) has proved to be a promising way to avoid subjective human judgment and to establish complex relationships from large datasets automatically. However, few studies have been reported on the adoption of ML models for predicting the rockhead position. In this paper, we propose a robust probabilistic ML model for predicting the rockhead distribution from spatial geographic information. The natural gradient boosting (NGBoost) framework, with extreme gradient boosting (XGBoost) as the base learner, is adopted. The XGBoost model was also compared with other ML models such as the gradient boosting regression tree (GBRT), the light gradient boosting machine (LightGBM), multivariate linear regression (MLR), the artificial neural network (ANN), and the support vector machine (SVM). The results demonstrate that the XGBoost algorithm, the core algorithm of the probabilistic N-XGBoost model, outperformed the other conventional ML models, with a coefficient of determination (R2) of 0.89 and a root mean squared error (RMSE) of 5.8 m for the prediction of rockhead position based on limited borehole data. The probabilistic N-XGBoost model not only achieved a higher prediction accuracy but also provided a predictive estimate of the uncertainty. The proposed N-XGBoost probabilistic model therefore has the potential to serve as a reliable and effective ML algorithm for predicting the rockhead position in rock and geotechnical engineering.
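A rough sketch of the kind of probabilistic boosted prediction described above. The paper's N-XGBoost model is not reproduced here; as a plainly named stand-in, the sketch uses scikit-learn gradient boosting with a squared-error model for the point estimate and quantile-loss models for the uncertainty band. The synthetic borehole coordinates and rockhead depths are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic "boreholes": easting/northing (m) -> rockhead depth (m), illustrative only
n = 400
xy = rng.uniform(0, 1000, size=(n, 2))
depth = 10 + 0.01 * xy[:, 0] + 8 * np.sin(xy[:, 1] / 150) + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(xy, depth, test_size=0.25, random_state=2)

# Point prediction (analogue of the deterministic boosted model)
point = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=2).fit(X_tr, y_tr)
pred = point.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} m")

# Predictive uncertainty via quantile-loss boosting (5% and 95% bounds)
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05,
                               n_estimators=300, max_depth=3, random_state=2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95,
                               n_estimators=300, max_depth=3, random_state=2).fit(X_tr, y_tr)
cover = np.mean((y_te >= lo.predict(X_te)) & (y_te <= hi.predict(X_te)))
print(f"empirical coverage of the 90% band: {cover:.2f}")
```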

4.
Hongping P, Yong W. Water Research, 2003, 37(2): 416-428
Models such as the eutrophication ecosystem model (EEM) of West Lake, Hangzhou, are routinely used to support policy decisions for eutrophication management. It is therefore important to quantify the uncertainty in model predictions arising from the combined uncertainty in the full set of input variables, and to identify the individual input parameters whose variations have the greatest effect on the variations in model predictions. In this study, randomized methods based on the Monte Carlo technique were developed and applied to the EEM. The technique consists of parameter sensitivity analysis, random sampling from the underlying probability distributions, and multivariate regression analysis. With this technique, model uncertainties are characterized and their propagation evaluated. The results show that, among the five input parameters selected for uncertainty analysis, the algal settling rate (SVS) and the water temperature (TEM) contribute most to the prediction uncertainty of the model outputs (PC, PS and PHYT).
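A compact sketch of the technique outlined above: sample the uncertain parameters, propagate them through the model by Monte Carlo simulation, and rank their contributions with standardized regression coefficients from a multivariate regression. The toy algal-biomass response and parameter distributions are illustrative assumptions, not the EEM.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Illustrative distributions for five uncertain inputs (stand-ins for SVS, TEM, ...)
params = {
    "SVS":  rng.lognormal(np.log(0.2), 0.3, n),   # algal settling rate (1/d)
    "TEM":  rng.normal(22.0, 2.0, n),             # water temperature (deg C)
    "KP":   rng.lognormal(np.log(0.02), 0.2, n),  # half-saturation constant
    "GMAX": rng.normal(2.0, 0.15, n),             # maximum growth rate (1/d)
    "KD":   rng.lognormal(np.log(0.1), 0.2, n),   # decay rate (1/d)
}
X = np.column_stack(list(params.values()))

# Toy steady-state biomass response standing in for the EEM output (PHYT)
phyt = params["GMAX"] * 1.07 ** (params["TEM"] - 20) / (
    (params["SVS"] + params["KD"]) * (1 + params["KP"] * 50))

# Standardized regression coefficients: regress standardized output on standardized inputs
Z = (X - X.mean(axis=0)) / X.std(axis=0)
y = (phyt - phyt.mean()) / phyt.std()
src, *_ = np.linalg.lstsq(Z, y, rcond=None)

for name, coef in sorted(zip(params, src), key=lambda p: -abs(p[1])):
    print(f"{name:5s} SRC = {coef:+.2f}")
```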

5.
Urban Water Journal, 2013, 10(6): 407-415
The potential carcinogenicity of trihalomethanes (THMs) has led to increasingly strict regulation of drinking water supplies and, in turn, to the need to better manage the balance between chemical and microbiological risks in chlorinated supplies. Using empirical equations to predict THM concentrations in water quality models is challenging and expensive because of the numerous temporally and spatially dependent uncertainties involved. In this paper, the benefits of a simple predictive method using a THM productivity parameter, based on the chlorine consumed by bulk free-chlorine reactions, are explored using extensive field data from a water distribution system in the Midlands region of the UK. It is concluded that the productivity parameter provides an appropriate, relatively robust, yet straightforward alternative to empirical equations based on regression analyses for predicting THM concentrations in distribution, and that the method has the potential to help calibrate distribution system water quality models.
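A short worked example of the kind of productivity-based calculation the paper explores; the functional form shown here (THM formed proportional to chlorine consumed) and all numbers are illustrative assumptions, not the field-calibrated values from the study.

```python
# Predict THM at a network node from chlorine consumed by bulk reactions, assuming
# a constant productivity parameter P (ug THM formed per mg Cl2 consumed).
thm_at_works = 25.0   # ug/L leaving the treatment works (assumed)
cl2_works = 1.0       # mg/L free chlorine leaving the works (assumed)
cl2_at_node = 0.4     # mg/L free chlorine at the network node (assumed)
productivity = 35.0   # ug THM per mg Cl2 consumed (assumed; site-calibrated in practice)

cl2_consumed = cl2_works - cl2_at_node
thm_at_node = thm_at_works + productivity * cl2_consumed
print(f"predicted THM at node: {thm_at_node:.1f} ug/L")   # 25 + 35 * 0.6 = 46.0
```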

6.
Water resource management decisions often depend on mechanistic or empirical models to predict water quality conditions under future pollutant loading scenarios. These decisions, such as whether or not to restrict public access to a water resource area, may therefore vary depending on how models reflect process, observation, and analytical uncertainty and variability. Nonetheless, few probabilistic modeling tools have been developed which explicitly propagate fecal indicator bacteria (FIB) analysis uncertainty into predictive bacterial water quality model parameters and response variables. Here, we compare three approaches to modeling variability in two different FIB water quality models. We first calibrate a well-known first-order bacterial decay model using approaches ranging from ordinary least squares (OLS) linear regression to Bayesian Markov chain Monte Carlo (MCMC) procedures. We then calibrate a less frequently used empirical bacterial die-off model using the same range of procedures (and the same data). Finally, we propose an innovative approach to evaluating the predictive performance of each calibrated model using a leave-one-out cross-validation procedure and assessing the probability distributions of the resulting Bayesian posterior predictive p-values. Our results suggest that different approaches to acknowledging uncertainty can lead to discrepancies between parameter mean and variance estimates and predictive performance for the same FIB water quality model. Our results also suggest that models without a bacterial kinetics parameter related to the rate of decay may more appropriately reflect FIB fate and transport processes, regardless of how variability and uncertainty are acknowledged.
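To make the calibration comparison concrete, the sketch below fits the first-order decay model C(t) = C0 e^(-kt) to synthetic die-off data, first by OLS on the log-transformed concentrations and then by a short Metropolis MCMC. The data, prior, error model and tuning constants are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic FIB die-off data (illustrative): log-normal observation error
t = np.linspace(0, 5, 20)                      # days
true_k, true_c0 = 0.8, 1.0e4                   # 1/day, CFU/100 mL
c_obs = true_c0 * np.exp(-true_k * t) * rng.lognormal(0.0, 0.25, t.size)

# --- OLS calibration on log-concentrations: ln C = ln C0 - k t
slope, intercept = np.polyfit(t, np.log(c_obs), 1)
print(f"OLS:  k = {-slope:.2f} 1/d, C0 = {np.exp(intercept):.0f}")

# --- Metropolis MCMC calibration with a Gaussian likelihood on ln C
def log_post(k, ln_c0, sigma=0.25):
    if k <= 0:
        return -np.inf                         # flat prior, k > 0
    resid = np.log(c_obs) - (ln_c0 - k * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

chain, current = [], np.array([0.5, np.log(5e3)])
lp = log_post(*current)
for _ in range(20000):
    prop = current + rng.normal(0, [0.05, 0.1])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        current, lp = prop, lp_prop
    chain.append(current.copy())
chain = np.array(chain[5000:])                 # discard burn-in
k_post, c0_post = chain[:, 0], np.exp(chain[:, 1])
print(f"MCMC: k = {k_post.mean():.2f} +/- {k_post.std():.2f} 1/d, C0 = {c0_post.mean():.0f}")
```

The MCMC run yields a full posterior for k and C0, which is what allows parameter variance and posterior predictive checks of the kind the abstract describes; the OLS fit gives only point estimates.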

7.
Radionuclides in fruit systems: model-model intercomparison study
Modeling is widely used to predict radionuclide distribution following accidental radionuclide releases. Modeling is crucial in emergency response planning and risk communication, and understanding model uncertainty is important not only in conducting analysis consistent with current regulatory guidance, but also in gaining stakeholder and decision-maker trust in the process and confidence in the results. However, while methods for dealing with parameter uncertainty are fairly well developed, an adequate representation of uncertainties associated with models remains rare. This paper addresses uncertainty about a model's structure (i.e., the relevance of simplifying assumptions and mathematical equations) that is seldom addressed in practical applications of environmental modeling. The use of several alternative models to derive a range of model outputs or risks is probably the only available technique to assess consistency in model prediction. Since each independent model requires significant resources for development and calibration, multiple models are not generally applied to the same problem. This study uses results from one such model intercomparison conducted by the Fruits Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS (BIOsphere Modelling and ASSessment) Program. Model-model intercomparisons presented in this study were conducted by the working group for two different scenarios (acute or continuous deposition), one radionuclide (137Cs), and three fruit-bearing crops (strawberries, apples, and blackcurrants). The differences between models were as great as five orders of magnitude for short-term predictions following acute radionuclide deposition. For long-term predictions and for the continuous deposition scenario, the differences between models were about two orders of magnitude. The difference between strawberry, apple, and blackcurrant contamination predicted by one model is far less than the difference in prediction of contamination for a single plant species given by different models. This study illustrates the importance of problem formulation and implementation of an analytic-deliberative process in risk characterization.

8.
Because of the uncertainties involved, modelling the spatial distribution of depth to bedrock (DTB) is an important and challenging concern in many geo-engineering applications. Since DTB affects both the safety and the economy of designed structures, more precise predictive models are of vital interest. In the present study, the challenge of building an optimally predictive three-dimensional (3D) spatial DTB model for an area in Stockholm, Sweden was addressed using an automated intelligent computing design procedure. The process was developed and programmed in both C++ and Python to track their performance in the specified tasks and to cover a wide variety of internal characteristics and libraries. In comparison with the ordinary kriging (OK) geostatistical tool, the superiority of the developed automated intelligence system was demonstrated through the analysis of confusion matrices and the ranked accuracies of different statistical errors. The results showed that, in the absence of measured data, the intelligent models offer a flexible and efficient alternative that accounts for the associated uncertainties, producing more accurate 3D spatial models and appropriate predictions at any point in the subsurface of the study area.

9.
Deterioration models for the condition and reliability prediction of civil infrastructure facilities involve numerous assumptions and simplifications, and their input parameters are fraught with uncertainties. A Bayesian methodology has been developed by the authors which uses information obtained through health monitoring to improve the quality of prediction. The sensitivity of the prior and posterior predicted performance to the different input parameters of the deterioration models, and the effect of instrument and measurement uncertainty, are investigated in this paper. The results quantify the influence of these uncertainties and highlight the efficacy of the updating methodology based on integrating monitoring data. The probabilistic posterior performance predictions are found to be significantly less sensitive to most of the input uncertainties. Furthermore, updating the performance distribution based on 'event' outcomes is likely to be more beneficial than monitoring and updating the input parameters on an individual basis.
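A minimal sketch of the kind of Bayesian updating described here: a prior on a deterioration-rate parameter is updated with a noisy monitored condition measurement on a parameter grid, and the prior and posterior predictions are compared. The linear deterioration model, prior and noise level are illustrative assumptions, not the authors' methodology.

```python
import numpy as np

# Deterioration model: condition(t) = 100 - a * t  (illustrative)
t_obs, c_obs, sigma_meas = 15.0, 74.0, 3.0   # monitoring year, condition index, sensor std (assumed)
t_pred = 30.0                                # prediction horizon (years)

# Prior on the deterioration rate a (assumed) evaluated on a parameter grid
a = np.linspace(0.5, 3.5, 601)
prior = np.exp(-0.5 * ((a - 2.0) / 0.5) ** 2)
prior /= np.trapz(prior, a)

# Likelihood of the monitored condition given each candidate rate
like = np.exp(-0.5 * ((c_obs - (100 - a * t_obs)) / sigma_meas) ** 2)
post = prior * like
post /= np.trapz(post, a)

for label, w in [("prior", prior), ("posterior", post)]:
    mean_a = np.trapz(w * a, a)
    cond_30 = 100 - a * t_pred
    print(f"{label:9s} E[a] = {mean_a:.2f}, "
          f"E[condition at year 30] = {np.trapz(w * cond_30, a):.1f}")
```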

10.
The paper presents a general agent-based system identification framework as a potential solution for data-driven models of building systems that can be developed and integrated with improved efficiency, flexibility and scalability compared to centralized approaches. The proposed method introduces building sub-system agents, which are optimized independently by locally solving a maximum likelihood estimation problem. Several models are considered for the sub-system agents, and a systematic selection approach is established considering the root mean square error, the sensitivity of the parameters to the output trajectory, and the parameter correlation. The final model is assembled from the selected model for each agent. Two different approaches are developed for the integration: the negotiated-shared parameter model, which is a distributed method, and the free-shared parameter model, which is based on a decentralized method. The results from a case study of a high-performance building indicate that the prediction accuracy of the new approach is fairly good for implementation in predictive control.

11.
This article proposes a methodology for predicting the time to onset of corrosion of reinforcing steel in concrete bridge decks while incorporating parameter uncertainty. It is based on the integration of artificial neural networks (ANN), case-based reasoning (CBR), a mechanistic model, and Monte Carlo simulation (MCS). A probabilistic mechanistic model is used to generate the distribution of the time to corrosion initiation based on statistical models of the governing parameters obtained from field data. The proposed ANN and CBR models act as universal functional mapping tools that approximate the relationship between the input and output of the mechanistic model. These tools are integrated with the MCS technique to generate the distribution of the corrosion initiation time from the distributions of the governing parameters. The proposed methodology is applied to predict the time to corrosion initiation of the top reinforcing steel in the concrete deck of the Dickson Bridge in Montreal. This study demonstrates the feasibility, adequate reliability, and computational efficiency of the proposed integrated ANN-MCS and CBR-MCS approaches for preliminary project-level as well as network-level analyses.
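The sketch below illustrates the ANN-MCS idea in miniature: a Fick's-second-law solution gives the chloride-induced corrosion initiation time, an MLP is trained as a surrogate for that mechanistic model, and Monte Carlo sampling of the governing parameters through the surrogate yields the distribution of initiation time. The parameter ranges and distributions are illustrative assumptions, not those calibrated for the Dickson Bridge.

```python
import numpy as np
from scipy.special import erfinv
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

def t_init(cover_mm, D_mm2_per_yr, c_s, c_cr):
    """Fick's-law corrosion initiation time from C(x,t) = Cs [1 - erf(x / (2 sqrt(D t)))]."""
    return cover_mm ** 2 / (4.0 * D_mm2_per_yr * erfinv(1.0 - c_cr / c_s) ** 2)

# Train an ANN surrogate of the mechanistic model on random parameter samples
n = 4000
X = np.column_stack([
    rng.uniform(40, 90, n),      # cover depth (mm)
    rng.uniform(10, 60, n),      # chloride diffusion coefficient (mm^2/yr)
    rng.uniform(0.2, 0.8, n),    # surface chloride concentration
    rng.uniform(0.03, 0.10, n),  # critical chloride threshold
])
y = t_init(*X.T)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=5))
ann.fit(X, y)

# Monte Carlo simulation through the surrogate with assumed parameter distributions
n_mc = 50_000
X_mc = np.column_stack([
    rng.normal(60, 8, n_mc),
    rng.lognormal(np.log(25), 0.3, n_mc),
    rng.normal(0.45, 0.08, n_mc),
    rng.normal(0.05, 0.01, n_mc),
])
X_mc = np.clip(X_mc, [40, 10, 0.2, 0.03], [90, 60, 0.8, 0.10])  # stay inside training range
ti = ann.predict(X_mc)
print(f"corrosion initiation time: median = {np.median(ti):.0f} yr, "
      f"90% interval = [{np.percentile(ti, 5):.0f}, {np.percentile(ti, 95):.0f}] yr")
```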

12.
Lateral displacement due to liquefaction (DH) is the most destructive effect of earthquakes in saturated loose or semi-loose sandy soils. Among earthquake parameters, the standardized cumulative absolute velocity (CAV5) exhibits the strongest correlation with increasing pore water pressure and liquefaction. Furthermore, the complex effect of fines content (FC) at different values has been studied and demonstrated. Nevertheless, neither factor has been incorporated into empirical or semi-empirical models for predicting DH. This study bridges this gap by adding CAV5 to the data set and developing two artificial neural network (ANN) models. The first model is based on the entire range of the parameters, whereas the second model is based on samples with FC values below the critical value of 28%. The results demonstrate the higher accuracy of the second model, even though it was developed with less data. Additionally, to account for the uncertainties in the geotechnical and earthquake parameters, a sensitivity analysis was performed via Monte Carlo simulation (MCS) using the more accurate second ANN model. The results demonstrated the significant influence of the uncertainties in the earthquake parameters on the prediction of DH.

13.
Sun F, Chen J, Tong Q, Zeng S. Water Research, 2008, 42(1-2): 229-237
To understand the dual impact of deteriorating water resources and stringent water quality regulations on the performance of conventional waterworks at a nationwide level, a risk-based screening methodology is developed and applied to evaluate the natural organic matter (NOM) regulation in the new drinking water quality standards. Because of the large number of drinking water sources and conventional waterworks, and the lack of detailed field observations in China, the analysis is based entirely on a validated conceptual model. The risk that conventional waterworks fail to comply with the new regulation is estimated within a risk assessment framework through Monte Carlo simulation, to account for the uncertainties associated with model parameters, source water quality and operating conditions across different waterworks. A screening analysis is performed simultaneously, using a task-based Hornberger-Spear-Young algorithm, to identify the critical operating parameters that determine the performance risk; on this basis, potential strategies to manage the performance risk are proposed and evaluated. The effects of the model parameter uncertainties on the simulation results are also discussed.
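A brief sketch of the Monte Carlo screening step described above: runs are split into compliant and non-compliant sets, and the Kolmogorov-Smirnov distance between the two parameter samples indicates which operating parameters control the performance risk (the essence of a Hornberger-Spear-Young regionalized sensitivity analysis). The toy removal model, parameter ranges and compliance limit are illustrative assumptions, not the validated waterworks model.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
n = 20_000

# Uncertain source water quality and operating parameters (illustrative ranges)
raw_toc = rng.uniform(2.0, 8.0, n)       # raw water TOC (mg/L)
coag_dose = rng.uniform(10.0, 60.0, n)   # coagulant dose (mg/L)
ph = rng.uniform(5.5, 8.0, n)            # coagulation pH
settling_t = rng.uniform(1.0, 4.0, n)    # settling time (h)

# Toy conceptual NOM-removal model standing in for the conceptual waterworks model
removal = 0.15 + 0.006 * coag_dose - 0.05 * (ph - 6.0) + 0.02 * settling_t
removal = np.clip(removal + rng.normal(0, 0.03, n), 0.05, 0.9)
finished_toc = raw_toc * (1.0 - removal)

# Performance risk: probability of exceeding an assumed 3 mg/L finished-water limit
fail = finished_toc > 3.0
print(f"performance risk = {fail.mean():.2f}")

# HSY-style screening: KS distance between behavioural / non-behavioural parameter sets
for name, p in [("raw TOC", raw_toc), ("coagulant dose", coag_dose),
                ("coagulation pH", ph), ("settling time", settling_t)]:
    d = ks_2samp(p[fail], p[~fail]).statistic
    print(f"{name:15s} KS distance = {d:.2f}")
```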

14.
Numerous experimental studies have shown that the type and gradation of coarse aggregates affect the mechanical properties of concrete. However, the type and gradation of coarse aggregates have not been taken into account in the available machine learning prediction models. In this study, two-dimensional concrete microscopic images were generated using a random aggregate model (RAM), in which the coarse aggregate and the other concrete ingredients are represented by polygons and trichromatic chromaticity values. The RAM image set was created by applying this method to 1110 different concrete mixes. Then, based on a Bayesian optimization algorithm and the image set, a compressive strength prediction model that accounts for the effect of coarse aggregate type and gradation was developed using a convolutional neural network (CNN). Meanwhile, an artificial neural network (ANN) compressive strength prediction model was developed using the 1110 sets of mix proportion data. The results show that the proposed RAM image generation method can represent the different concrete mix proportions collected in this study. The prediction performance of the CNN compressive strength model, which considers aggregate type and gradation, is better than that of the ANN model. The method provides a new perspective for predicting other mechanical properties of concrete and offers technical support for performance-based intelligent concrete mix design.

15.
In this study, the infrastructure leakage index (ILI), an indicator frequently preferred by water utilities with sufficient data for assessing the performance of water distribution systems, is modeled for the first time through three different methodologies using different input data. In addition to the variables used in the literature for the classical ILI calculation, an age parameter is included in the models. In the first step, ILI values were estimated via multiple linear regression (MLR) using the water supply quantity, water accrual quantity, network length, service connection length, number of service connections, and pressure. Secondly, an artificial neural network (ANN) approach was applied to the raw data to improve the ILI prediction performance. Finally, the data set was standardized with the Z-score method to increase the learning power of the ANN models, and the ANN predictions were then made on data transformed by principal component analysis (PCA) to reduce the dimensionality and complexity of the data set. The model predictions were evaluated using the mean square error, G-value, mean absolute error, mean bias error, and adjusted R2 performance measures. When the model outputs are evaluated against the classical ILI calculations, the PC-ANN method is seen to yield successful ILI predictions with only three or four variables, including the age parameter, rather than six. Water utilities lacking the physical and operational data required for the classical ILI calculation can therefore evaluate network performance reliably and accurately by predicting the ILI with the models suggested in this study.
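For reference, the classical ILI calculation that these models are benchmarked against follows the widely used IWA definition ILI = CARL / UARL. The sketch below works through it with illustrative network figures; the numbers are assumptions, not the study's data.

```python
# Classical infrastructure leakage index (ILI): ILI = CARL / UARL,
# with UARL from the commonly cited IWA formula (litres/day):
#   UARL = (18 * Lm + 0.8 * Nc + 25 * Lp) * P
Lm = 250.0        # mains length (km) - illustrative
Nc = 12_000       # number of service connections - illustrative
Lp = 60.0         # length of private pipe, property line to meter (km) - illustrative
P = 45.0          # average operating pressure (m) - illustrative
carl = 1_500_000  # current annual real losses (litres/day) - illustrative

uarl = (18 * Lm + 0.8 * Nc + 25 * Lp) * P
ili = carl / uarl
print(f"UARL = {uarl:,.0f} L/day, ILI = {ili:.2f}")
# (18*250 + 0.8*12000 + 25*60) * 45 = 702,000 L/day  ->  ILI ~ 2.14
```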

16.
The forecast performance of alternative artificial neural network (ANN) models was studied to compare their forecast accuracy with that of a fractionally integrated autoregressive moving average model, using monthly rainfall data for the Canadian province of Prince Edward Island (PEI). A multilayer feed-forward back-propagation ANN algorithm is implemented to evaluate the forecast accuracy and to analyse the statistical characteristics of the ANN model for the original data and for data pre-processed with moving average and exponential smoothing transformations. The prediction performance of these models is compared to that of a seasonal autoregressive fractionally integrated moving average time series model. The statistical results show that the ANN model with exponential smoothing of the data has the smallest root mean square error and the highest correlation coefficient, and thus outperforms the alternative models investigated in this study.
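The sketch below mirrors the best-performing configuration described above: simple exponential smoothing of the series followed by a feed-forward ANN trained on lagged values, evaluated by RMSE and correlation on a hold-out period. The smoothing constant, lag structure and synthetic seasonal series are illustrative assumptions, not the PEI data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# Synthetic monthly rainfall series with seasonality (illustrative stand-in for the PEI record)
months = np.arange(360)
rain = 90 + 30 * np.sin(2 * np.pi * months / 12) + rng.gamma(2.0, 10.0, months.size)

# Simple exponential smoothing pre-processing (alpha assumed)
alpha, sm = 0.3, [rain[0]]
for r in rain[1:]:
    sm.append(alpha * r + (1 - alpha) * sm[-1])
sm = np.array(sm)

# Feed-forward ANN trained on the previous 12 smoothed values
lags = 12
X = np.array([sm[i - lags:i] for i in range(lags, len(sm))])
y = rain[lags:]
split = len(y) - 60                          # last 5 years held out for forecasting
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=7)
ann.fit(X[:split], y[:split])

pred = ann.predict(X[split:])
rmse = mean_squared_error(y[split:], pred) ** 0.5
corr = np.corrcoef(y[split:], pred)[0, 1]
print(f"hold-out RMSE = {rmse:.1f} mm, correlation = {corr:.2f}")
```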

17.
One of the best approaches to date for obtaining overall binding constants (Ko) for Al and dissolved organic matter (DOM) in acidic soil solutions is to collect 'free' Al data with diffusive gradients in thin films (DGT) and to infer the Ko values by fitting a continuous distribution model based on Scatchard plots. Although there is well-established literature demonstrating the usefulness of the Scatchard approach, relatively little attention has been given to a realistic assessment of the uncertainties associated with the final fitted Ko values. In this study we present an uncertainty analysis of the fitted Ko values using a synthetic dataset with different levels of random noise and a real dataset of DGT measurements from an acidic soil solution. The parameters of the continuous distribution model and their corresponding upper and lower 95% uncertainty bounds were determined using the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Although reasonable fits of the distribution model to the experimental data were obtained in all cases, an appreciable uncertainty in the resulting Ko values was found, for three main reasons. Firstly, obtaining 'free' Al data, even with the DGT method, is relatively difficult, leading to uncertainty in the data. Secondly, before Scatchard plots can be constructed, the maximum binding capacity (MBC) must be estimated, and any uncertainty in this MBC propagates into the final plots. Thirdly, as the final fitted Ko values are largely based on extrapolation, a small uncertainty in the fit of the binding data results in an appreciable uncertainty in the obtained Ko. Therefore, while trends in Ko for Al and DOM can easily be discerned and compared, the uncertainty in the Ko values hinders their application in quantitative speciation calculations. More comprehensive speciation models that avoid the use of Ko seem better suited to this purpose.

18.
Data-driven phenomenological models based on deformation measurements have been widely used to predict the slope failure time (SFT). Observational and model uncertainties can cause the SFT predicted by these phenomenological models to deviate from the actual SFT, yet very few studies have examined how to evaluate the effect of such uncertainties on SFT prediction. In this paper, a comprehensive slope failure database was compiled, and a Bayesian machine learning (BML)-based method was developed to learn the model and observational uncertainties involved in SFT prediction, from which the probabilistic distribution of the SFT can be obtained. The method is illustrated in detail with an example. Verification studies show that the BML-based method is superior to the traditional inverse velocity method (INVM) and the maximum likelihood method for predicting SFT. The proposed method provides an effective tool for SFT prediction.
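Since the proposed BML approach is benchmarked against the inverse velocity method (INVM), the short sketch below shows that baseline: fit a straight line to the inverse of the measured displacement rate and extrapolate it to zero to estimate the failure time. The synthetic monitoring record is an illustrative assumption; the BML method itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic accelerating-creep monitoring record (illustrative): v = A / (t_f - t)
t_f_true, A = 100.0, 50.0                     # failure day, mm*day
t = np.arange(40.0, 95.0, 1.0)                # observation days
v = A / (t_f_true - t) * rng.lognormal(0.0, 0.1, t.size)   # measured velocity (mm/day)

# Inverse velocity method: 1/v decreases roughly linearly and reaches zero at failure
inv_v = 1.0 / v
slope, intercept = np.polyfit(t, inv_v, 1)
t_f_pred = -intercept / slope                 # where the fitted line crosses 1/v = 0
print(f"predicted failure day: {t_f_pred:.1f} (true value in this example: {t_f_true})")
```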

19.
This paper describes the development and application of a method for estimating uncertainty in the prediction of sewer flow quantity and quality, and how this may impact the prediction of water quality failures in integrated catchment modelling (ICM) studies. The method is generic and readily adaptable for use with the different flow and quality prediction models used in ICM studies. Use is made of the elicitation concept, whereby expert knowledge combined with a limited amount of data is translated into probability distributions describing the level of uncertainty of the various input and model variables. This type of approach can be used even if little or no site-specific data are available. Integrated catchment modelling studies often use complex deterministic models, so to apply the results of elicitation in a case study, a computational reduction method has been developed to determine the levels of uncertainty in model outputs with a reasonably practical level of computational effort. This approach was applied to determine the level of uncertainty in the number of water quality failures predicted by an ICM study due to uncertainty associated with the input and model parameters of the urban drainage model component of the ICM. For a small case study catchment in the UK, it was shown that the predicted number of water quality failures in the receiving water could vary by around 45% of the number predicted without consideration of model uncertainty for dissolved oxygen, and by around 32% for unionised ammonia. It was concluded that the potential overall levels of uncertainty in the ICM outputs could be significant; any solutions designed using modelling approaches that do not consider the uncertainty associated with model inputs and model parameters may be significantly over- or under-dimensioned. With changing external inputs, such as rainfall and river flows due to climate change, better accounting for uncertainty is required.

20.
Knowledge acquisition is perhaps the most important phase in the development of knowledge-based systems (KBSs). Problems associated with knowledge acquisition include creating an explicit model for handling uncertainty when solving problems in a complex domain. This article illustrates how knowledge modeling facilitates the acquisition of knowledge that is vague and uncertain. A hierarchical model is adopted for knowledge acquisition. The problem domain, i.e., damage assessment and vulnerability analysis of structures subjected to cyclones, is characterized by the presence of uncertainties in various forms. A KBS based on the hierarchical knowledge model has been developed that has the flexibility to handle these uncertainties using probabilistic and fuzzy set approaches, depending on the nature of the uncertainty. The hierarchical model for handling complexity and uncertainty in knowledge, the knowledge acquisition strategy, the inference mechanism, and the representation used are described. Two typical sessions, one for damage assessment and another for vulnerability analysis, are presented to demonstrate the working of the KBS and its efficacy in handling uncertain information.
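To illustrate the two ways of handling uncertainty mentioned above, the tiny sketch below combines a fuzzy membership function (for vague linguistic evidence about wind intensity) with probabilistic damage knowledge. The membership shape, fusion rule and probabilities are illustrative assumptions, not rules from the authors' KBS.

```python
def severe_wind_membership(speed_kmh):
    """Trapezoidal fuzzy membership for the linguistic term 'severe wind' (assumed shape)."""
    if speed_kmh <= 90:
        return 0.0
    if speed_kmh >= 150:
        return 1.0
    return (speed_kmh - 90) / (150 - 90)

# Probabilistic knowledge (assumed): P(heavy roof damage | severe wind) and | not severe
p_damage_given_severe = 0.60
p_damage_given_not = 0.05

wind = 120.0  # km/h, observed or elicited
mu = severe_wind_membership(wind)
# Simple fusion rule: weight the conditional probabilities by the fuzzy membership
p_damage = mu * p_damage_given_severe + (1 - mu) * p_damage_given_not
print(f"membership('severe wind') = {mu:.2f}, assessed P(heavy roof damage) = {p_damage:.2f}")
```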

