Similar Literature
20 similar records found.
1.
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. Quantifying the uncertainty associated with these models is a must, although it is rarely practised. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been developing a framework for defining and assessing uncertainties in the field of urban drainage modelling. Part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on multi-objective auto-calibration (AMALGAM, a multialgorithm, genetically adaptive multi-objective method), and a Bayesian approach based on a simplified Markov chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the techniques, common criteria were set for the likelihood formulation, the number of simulations, and the measure of uncertainty bounds. Moreover, all the techniques were applied to the same case study, using the same stormwater quantity and quality model and the same dataset. For a well-posed rainfall/runoff model, the four methods provided similar probability distributions of model parameters and similar model prediction intervals. For an ill-posed water quality model, the differences between the results were much wider, and the paper discusses the specific advantages and disadvantages of each method. In terms of computational efficiency (i.e. the number of iterations required to generate the probability distribution of the parameters), SCEM-UA and AMALGAM produced results with fewer simulations than GLUE. However, GLUE requires the least modelling skill and is easy to implement. Each method has drawbacks in how it accepts behavioural parameter sets: the non-Bayesian GLUE, SCEM-UA and AMALGAM rely on subjective acceptance thresholds, while MICA often struggles with its assumption of normally distributed residuals. It is concluded that modellers should select the method most suitable for the system they are modelling (e.g. the complexity of the model structure, including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study.
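Of the four techniques compared, GLUE is the simplest to implement, consistent with the paper's remark that it demands the least modelling skill. The sketch below illustrates the bare GLUE recipe on a toy one-parameter runoff model; the model function, parameter range, Nash-Sutcliffe likelihood, and the 0.8 acceptance threshold are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed runoff series (a stand-in for the paper's dataset).
t = np.linspace(0, 10, 50)
observed = np.exp(-0.7 * t) + rng.normal(0, 0.02, t.size)

def model(k):
    """Toy linear-reservoir response; a stand-in for the stormwater model."""
    return np.exp(-k * t)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 1. Monte Carlo sampling from a uniform prior over the parameter range.
k_samples = rng.uniform(0.1, 2.0, 10_000)
scores = np.array([nse(model(k), observed) for k in k_samples])

# 2. Keep 'behavioural' sets above a subjective threshold -- the acceptance
#    step the paper flags as a weakness of the non-Bayesian methods.
behavioural = scores > 0.8
sims = np.array([model(k) for k in k_samples[behavioural]])

# 3. Prediction bounds from the behavioural ensemble (full GLUE would weight
#    each simulation by its likelihood; plain percentiles keep the sketch short).
lower, upper = np.percentile(sims, [2.5, 97.5], axis=0)
print(f"{behavioural.sum()} behavioural sets; "
      f"mean 95% band width {np.mean(upper - lower):.3f}")
```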

2.
CPT-Based Liquefaction Evaluation Using Artificial Neural Networks
This article presents various artificial neural network (ANN) models for evaluating the liquefaction resistance and potential of sandy soils. Various issues concerning ANN modeling, such as data preprocessing, training algorithms, and implementation, are discussed. The ANN is trained and tested with a large historical database of liquefaction performance at sites where cone penetration test (CPT) measurements are available. The ANN models are found to be effective in predicting liquefaction resistance and potential, and the developed models are ported to a spreadsheet for ease of use. A simple procedure for conducting uncertainty analysis to address parameter and model uncertainties is also presented using the ANN-based spreadsheet model. This uncertainty analysis is carried out using @Risk, an add-in that works with popular spreadsheet programs such as Microsoft Excel and Lotus 1-2-3. The results show that the developed ANN model has potential as a practical design tool for assessing the liquefaction resistance of sandy soils.

3.
The spatial information of rockhead is crucial for the design and construction of tunnels and underground excavations. Although conventional site investigation methods (i.e. borehole drilling) provide local engineering geological information, accurately predicting the rockhead position from limited borehole data remains challenging because of its spatial variation and the great uncertainties involved. With the development of computer science, machine learning (ML) has proven to be a promising way to avoid subjective human judgment and to establish complex relationships from large datasets automatically. However, few studies have reported the adoption of ML models for predicting the rockhead position. In this paper, we propose a robust probabilistic ML model for predicting the rockhead distribution from spatial geographic information. The framework of the natural gradient boosting (NGBoost) algorithm is combined with extreme gradient boosting (XGBoost) as the base learner. The XGBoost model was also compared with other ML models, including the gradient boosting regression tree (GBRT), the light gradient boosting machine (LightGBM), multivariate linear regression (MLR), the artificial neural network (ANN), and the support vector machine (SVM). The results demonstrate that the XGBoost algorithm, the core of the probabilistic N-XGBoost model, outperformed the other conventional ML models, with a coefficient of determination (R2) of 0.89 and a root mean squared error (RMSE) of 5.8 m for the prediction of rockhead position from limited borehole data. The probabilistic N-XGBoost model not only achieved higher prediction accuracy but also provided a predictive estimate of the uncertainty. The proposed N-XGBoost probabilistic model thus has the potential to serve as a reliable and effective ML algorithm for predicting rockhead position in rock and geotechnical engineering.
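The flavour of this pairing can be reproduced with the open-source ngboost and xgboost packages, as sketched below: NGBoost supplies a per-location predictive distribution, while a plain XGBoost regressor serves as the deterministic point-prediction baseline. The synthetic borehole features, hyperparameters, and this particular coupling are assumptions for illustration; the paper's exact N-XGBoost configuration may differ.

```python
# pip install ngboost xgboost scikit-learn
import numpy as np
from ngboost import NGBRegressor
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for borehole data: (easting, northing, ground elevation).
X = rng.uniform(0, 1000, size=(300, 3))
rockhead = 0.02 * X[:, 0] - 0.01 * X[:, 1] + 5 * np.sin(X[:, 2] / 100) \
           + rng.normal(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, rockhead, test_size=0.25,
                                          random_state=0)

# Probabilistic model: NGBoost predicts a full Normal distribution per point.
ngb = NGBRegressor(n_estimators=500, learning_rate=0.01, verbose=False)
ngb.fit(X_tr, y_tr)
dist = ngb.pred_dist(X_te)          # distribution objects
mean, std = dist.loc, dist.scale    # predictive mean and uncertainty

# Deterministic baseline: plain XGBoost point predictions.
xgb = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
xgb.fit(X_tr, y_tr)
y_hat = xgb.predict(X_te)

print("NGBoost  R2 = %.2f, RMSE = %.2f m"
      % (r2_score(y_te, mean), mean_squared_error(y_te, mean) ** 0.5))
print("XGBoost  R2 = %.2f, RMSE = %.2f m"
      % (r2_score(y_te, y_hat), mean_squared_error(y_te, y_hat) ** 0.5))
```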

4.
Hongping P, Yong W. Water Research, 2003, 37(2): 416-428
Models such as the eutrophication ecosystem model (EEM) of West Lake, Hangzhou, are routinely used to support policy decisions for eutrophication management. It is therefore important to know the uncertainty in model predictions arising from the combined uncertainty in the full set of input variables, and to identify the individual input parameters whose variations have the greatest effect on prediction variability. In this study, randomized methods based on the Monte Carlo technique were developed and applied to the EEM. The technique consists of parameter sensitivity analysis, random sampling from underlying probability distributions, and multivariate regression analysis. With this technique, model uncertainties are clarified and their propagation evaluated. Results show that among the five input parameters selected for uncertainty analysis, the algal settling rate (SVS) and water temperature (TEM) contribute most to the prediction uncertainty of the model outputs (PC, PS and PHYT).
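The core of the technique (sample the uncertain inputs, push them through the model, then regress the standardized output on the standardized inputs to rank parameter influence) can be sketched in a few lines. The toy model and the parameter distributions below are placeholders, not the EEM's actual equations or calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)

def eem(svs, tem, p3, p4, p5):
    """Toy stand-in for the West Lake eutrophication model's PHYT output."""
    return 50 / (1 + svs) * np.exp(0.06 * (tem - 20)) \
           + 0.1 * p3 + 0.05 * p4 + 0.02 * p5

# 1. Sample each parameter from an assumed distribution.
n = 5000
params = np.column_stack([
    rng.normal(0.3, 0.05, n),   # SVS: algal settling rate (m/d)
    rng.normal(22.0, 2.0, n),   # TEM: water temperature (deg C)
    rng.normal(1.0, 0.2, n),    # three further illustrative parameters
    rng.normal(1.0, 0.2, n),
    rng.normal(1.0, 0.2, n),
])

# 2. Propagate through the model.
phyt = eem(*params.T)
print(f"PHYT 95% interval: [{np.percentile(phyt, 2.5):.1f}, "
      f"{np.percentile(phyt, 97.5):.1f}]")

# 3. Standardised regression coefficients: regress the z-scored output
#    on the z-scored inputs to rank each parameter's contribution.
Z = (params - params.mean(0)) / params.std(0)
y = (phyt - phyt.mean()) / phyt.std()
src, *_ = np.linalg.lstsq(Z, y, rcond=None)
for name, s in zip(["SVS", "TEM", "P3", "P4", "P5"], src):
    print(f"{name}: SRC = {s:+.2f}")
```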

5.
Urban Water Journal, 2013, 10(6): 407-415
The potential carcinogenicity of trihalomethanes (THMs) has led to increasingly strict regulation of drinking water supplies, and hence to the need to better manage the balance between chemical and microbiological risk in chlorinated supplies. Using empirical equations to predict THM concentrations in water quality models is challenging and expensive owing to the numerous temporally and spatially dependent uncertainties involved. In this paper, the benefits of a simple predictive method using a THM productivity parameter, based on the chlorine consumed by bulk free-chlorine reactions, are explored using extensive field data from a water distribution system in the Midlands region of the UK. It is concluded that the productivity parameter provides an appropriate, relatively robust, yet straightforward alternative to regression-based empirical equations for predicting THM concentrations in distribution, and that the method has the potential to assist distribution system water quality model calibration.
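The productivity-parameter idea reduces to a single proportionality: THM formed is taken as a fitted yield times the chlorine consumed by bulk reactions, with the consumption itself given by the usual first-order bulk decay. A minimal sketch follows; the decay constant, productivity value, and water ages are illustrative numbers, not the fitted UK Midlands values.

```python
import numpy as np

def thm_formed(c0_cl, k_b, travel_time_h, productivity):
    """THM formation estimated from chlorine consumed by bulk decay.

    c0_cl         : free chlorine entering the system (mg/L)
    k_b           : bulk decay constant (1/h), assumed first-order
    travel_time_h : water age along the path (h)
    productivity  : ug THM formed per mg chlorine consumed (fitted from field data)
    """
    cl_remaining = c0_cl * np.exp(-k_b * travel_time_h)
    cl_consumed = c0_cl - cl_remaining
    return productivity * cl_consumed  # ug/L THM

# Illustrative numbers only (not the paper's fitted values):
ages = np.array([2.0, 12.0, 24.0, 48.0])  # hours
print(thm_formed(c0_cl=1.0, k_b=0.05, travel_time_h=ages, productivity=40.0))
```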

6.
Water resource management decisions often depend on mechanistic or empirical models to predict water quality conditions under future pollutant loading scenarios. These decisions, such as whether or not to restrict public access to a water resource area, may therefore vary depending on how models reflect process, observation, and analytical uncertainty and variability. Nonetheless, few probabilistic modeling tools have been developed which explicitly propagate fecal indicator bacteria (FIB) analysis uncertainty into predictive bacterial water quality model parameters and response variables. Here, we compare three approaches to modeling variability in two different FIB water quality models. We first calibrate a well-known first-order bacterial decay model using approaches ranging from ordinary least squares (OLS) linear regression to Bayesian Markov chain Monte Carlo (MCMC) procedures. We then calibrate a less frequently used empirical bacterial die-off model using the same range of procedures (and the same data). Finally, we propose an innovative approach to evaluating the predictive performance of each calibrated model using a leave-one-out cross-validation procedure and assessing the probability distributions of the resulting Bayesian posterior predictive p-values. Our results suggest that different approaches to acknowledging uncertainty can lead to discrepancies in parameter mean and variance estimates and in predictive performance for the same FIB water quality model. Our results also suggest that models without a bacterial kinetics parameter related to the rate of decay may more appropriately reflect FIB fate and transport processes, regardless of how variability and uncertainty are acknowledged.
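The simplest calibration approach in this spectrum, OLS regression on the log-linearised first-order decay model ln C(t) = ln C0 - kt, can be sketched directly. The FIB counts below are synthetic, and the confidence interval comes from the standard OLS slope error, not the paper's Bayesian machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic FIB time series (CFU/100 mL): first-order decay plus lognormal noise.
t = np.array([0., 6., 12., 24., 48., 72.])   # hours
true_k, c0 = 0.05, 1e4
counts = c0 * np.exp(-true_k * t) * rng.lognormal(0, 0.15, t.size)

# OLS on the log-linearised model: ln C(t) = ln C0 - k * t
fit = stats.linregress(t, np.log(counts))
k_hat, c0_hat = -fit.slope, np.exp(fit.intercept)

# 95% confidence interval on k from the slope's standard error.
t_crit = stats.t.ppf(0.975, df=t.size - 2)
print(f"k = {k_hat:.3f} 1/h (95% CI +/- {t_crit * fit.stderr:.3f}), "
      f"C0 = {c0_hat:.0f}")
```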

7.
Radionuclides in fruit systems: model-model intercomparison study
Modeling is widely used to predict radionuclide distribution following accidental radionuclide releases. Modeling is crucial in emergency response planning and risk communication, and understanding model uncertainty is important not only in conducting analysis consistent with current regulatory guidance, but also in gaining stakeholder and decision-maker trust in the process and confidence in the results. However, while methods for dealing with parameter uncertainty are fairly well developed, an adequate representation of uncertainties associated with models remains rare. This paper addresses uncertainty about a model's structure (i.e., the relevance of simplifying assumptions and mathematical equations) that is seldom addressed in practical applications of environmental modeling. The use of several alternative models to derive a range of model outputs or risks is probably the only available technique to assess consistency in model prediction. Since each independent model requires significant resources for development and calibration, multiple models are not generally applied to the same problem. This study uses results from one such model intercomparison conducted by the Fruits Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS (BIOsphere Modelling and ASSessment) Program. Model-model intercomparisons presented in this study were conducted by the working group for two different scenarios (acute or continuous deposition), one radionuclide (¹³⁷Cs), and three fruit-bearing crops (strawberries, apples, and blackcurrants). The differences between models were as great as five orders of magnitude for short-term predictions following acute radionuclide deposition. For long-term predictions and for the continuous deposition scenario, the differences between models were about two orders of magnitude. The difference between strawberry, apple, and blackcurrant contamination predicted by one model is far less than the difference in prediction of contamination for a single plant species given by different models. This study illustrates the importance of problem formulation and implementation of an analytic-deliberative process in risk characterization.

8.
Owing to the uncertainties involved, modelling the spatial distribution of depth to bedrock (DTB) is an important and challenging concern in many geo-engineering applications. The link between DTB and the safety and economy of designed structures implies that more precise predictive models are of vital interest. In the present study, the challenge of building an optimally predictive three-dimensional (3D) spatial DTB model for an area in Stockholm, Sweden was addressed using an automated intelligent computing design procedure. The process was developed and programmed in both C++ and Python to track their performance on specified tasks and to cover a wide variety of internal characteristics and libraries. In comparison with the ordinary Kriging (OK) geostatistical tool, the superiority of the developed automated intelligence system was demonstrated through the analysis of confusion matrices and the ranked accuracies of different statistical errors. The results showed that, in the absence of measured data, the intelligence models offer a flexible and efficient alternative that can account for the associated uncertainties, creating more accurate spatial 3D models and providing an appropriate prediction at any point in the subsurface of the study area.
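The ordinary Kriging baseline used for comparison is readily reproduced with the pykrige package, which also returns the kriging variance as a built-in uncertainty surface to set against the intelligent models' error spread. The borehole coordinates, variogram choice, and grid below are illustrative assumptions, not the Stockholm dataset.

```python
# pip install pykrige
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(3)

# Synthetic borehole records: (x, y) in metres and measured depth to bedrock.
x, y = rng.uniform(0, 500, 60), rng.uniform(0, 500, 60)
dtb = 10 + 0.02 * x + 0.01 * y + rng.normal(0, 1.0, 60)

# Ordinary Kriging with a spherical variogram (a common default choice).
ok = OrdinaryKriging(x, y, dtb, variogram_model="spherical")

# Predict on a regular grid; ss is the kriging variance, i.e. a built-in
# uncertainty estimate at every unsampled location.
gx = gy = np.linspace(0, 500, 51)
z_pred, ss = ok.execute("grid", gx, gy)
print(z_pred.shape, float(ss.max()))
```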

9.
Deterioration models for the condition and reliability prediction of civil infrastructure facilities involve numerous assumptions and simplifications, and their input parameters are fraught with uncertainties. A Bayesian methodology has been developed by the authors which uses information obtained through health monitoring to improve the quality of prediction. The sensitivity of prior and posterior predicted performance to different input parameters of the deterioration models, and the effect of instrument and measurement uncertainty, are investigated in this paper. The results quantify the influence of these uncertainties and highlight the efficacy of the updating methodology based on integrating monitoring data. The probabilistic posterior performance predictions are found to be significantly less sensitive to most of the input uncertainties. Furthermore, updating the performance distribution based on 'event' outcomes is likely to be more beneficial than monitoring and updating the input parameters individually.
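The central effect reported here, posterior predictions becoming much less sensitive to prior input uncertainty once monitoring data are integrated, is visible even in the simplest conjugate Bayesian update. The sketch below uses a normal-normal update of a single deterioration-rate parameter; the prior, observation noise, and units are invented for illustration and are not the authors' model.

```python
import numpy as np

def update_rate(prior_mu, prior_sd, observations, obs_sd):
    """Conjugate normal-normal update of a deterioration-rate parameter.

    prior_mu, prior_sd : prior belief about the rate (e.g. mm/yr section loss)
    observations       : rates inferred from health-monitoring data
    obs_sd             : instrument/measurement standard deviation
    """
    n = len(observations)
    post_var = 1 / (1 / prior_sd**2 + n / obs_sd**2)
    post_mu = post_var * (prior_mu / prior_sd**2
                          + np.sum(observations) / obs_sd**2)
    return post_mu, post_var**0.5

# Prior from the deterioration model; monitoring suggests slower corrosion.
mu, sd = update_rate(prior_mu=0.10, prior_sd=0.04,
                     observations=[0.06, 0.07, 0.05], obs_sd=0.02)
print(f"posterior rate: {mu:.3f} +/- {sd:.3f} mm/yr")  # sd shrinks vs prior
```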

10.
Lateral displacement due to liquefaction (DH) is the most destructive effect of earthquakes in saturated loose or semi-loose sandy soil. Among earthquake parameters, the standardized cumulative absolute velocity (CAV5) exhibits the strongest correlation with rising pore water pressure and liquefaction. Furthermore, the complex effect of fines content (FC) at different values has been studied and demonstrated. Nevertheless, these two factors have not been incorporated into empirical and semi-empirical models for predicting DH. This study bridges that gap by adding CAV5 to the dataset and developing two artificial neural network (ANN) models. The first model is based on the entire range of the parameters, whereas the second is based on samples with FC values below the 28% critical value. The results demonstrate the higher accuracy of the second model, even though it was developed with less data. Additionally, given the uncertainties in the geotechnical and earthquake parameters, sensitivity analysis was performed via Monte Carlo simulation (MCS) using the second, more accurate ANN model. The results demonstrate the significant influence of the uncertainties in the earthquake parameters on predicting DH.

11.
One of the best approaches to date for obtaining overall binding constants (Ko) for Al and dissolved organic matter (DOM) in acidic soil solutions is to collect 'free' Al data with diffusive gradients in thin films (DGT) and to infer the Ko values by fitting a continuous distribution model based on Scatchard plots. Although established literature clearly demonstrates the usefulness of the Scatchard approach, relatively little attention has been given to a realistic assessment of the uncertainties associated with the final fitted Ko values. In this study we present an uncertainty analysis of the fitted Ko values using a synthetic dataset with different levels of random noise and a real dataset using DGT data from an acidic soil solution. The parameters of the continuous distribution model and their corresponding upper and lower 95% uncertainty bounds were determined using the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Although reasonable fits of the distribution model to the experimental data were obtained in all cases, appreciable uncertainty was found in the resulting Ko values, for three main reasons. Firstly, obtaining 'free' Al data, even with the DGT method, is relatively difficult, leading to uncertainty in the data. Secondly, before Scatchard plots can be constructed, the maximum binding capacity (MBC) must be estimated, and any uncertainty in the MBC propagates into the final plots. Thirdly, as the final fitted Ko values are largely based on extrapolation, a small uncertainty in the fit of the binding data results in an appreciable uncertainty in the obtained Ko. Therefore, while trends in Ko for Al and DOM can easily be discerned and compared, the uncertainty in the Ko values hinders their application in quantitative speciation calculations. More comprehensive speciation models that avoid the use of Ko seem better suited to this purpose.

12.
The forecast performance of alternative artificial neural network (ANN) models was studied by comparing their accuracy with that of a fractionally integrated autoregressive moving average model, using monthly rainfall data for the Canadian province of Prince Edward Island (PEI). A multilayer feed-forward back-propagation ANN algorithm was implemented to evaluate forecast accuracy and to analyse the statistical characteristics of the ANN model for the original data and for data pre-processed with moving average and exponential smoothing transformations. The prediction performance of these models was compared with that of a seasonal autoregressive fractionally integrated moving average time series model. The statistical results show that the ANN model with exponential smoothing of the data has the smallest root mean square error and the highest correlation coefficient, and thus outperforms the alternative models investigated in this study.
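A minimal version of the winning configuration, exponential smoothing as pre-processing followed by a feed-forward network on lagged inputs, is sketched below. The synthetic rainfall series, smoothing constant, lag depth, and network size are assumptions; the study's PEI data and exact architecture are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)

# Synthetic monthly rainfall stand-in for the PEI series (mm).
n = 360
months = np.arange(n)
rain = 90 + 30 * np.sin(2 * np.pi * months / 12) + rng.gamma(2.0, 10.0, n)

# Exponential smoothing pre-processing (the transformation the study favoured).
alpha, sm = 0.3, np.empty(n)
sm[0] = rain[0]
for i in range(1, n):
    sm[i] = alpha * rain[i] + (1 - alpha) * sm[i - 1]

# Build 12 lagged inputs -> next-month target from the smoothed series.
lags = 12
X = np.array([sm[i:i + lags] for i in range(n - lags)])
y = sm[lags:]
split = int(0.8 * len(X))

# Feed-forward network (L-BFGS used here for stable small-data training).
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
pred = ann.predict(X[split:])

rmse = mean_squared_error(y[split:], pred) ** 0.5
corr = np.corrcoef(y[split:], pred)[0, 1]
print(f"RMSE = {rmse:.1f} mm, r = {corr:.2f}")
```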

13.
This paper presents a general agent-based system identification framework as a potential solution for data-driven models of building systems that can be developed and integrated with greater efficiency, flexibility and scalability than centralized approaches. The proposed method introduces building sub-system agents, each optimized independently by locally solving a maximum likelihood estimation problem. Several models are considered for the sub-system agents, and a systematic selection approach is established based on the root mean square error, the sensitivity of the parameters to the output trajectory, and the parameter correlation. The final model is integrated from the selected models for each agent. Two different integration approaches are developed: the negotiated-shared parameter model, a distributed method, and the free-shared parameter model, based on a decentralized method. Results from a case study of a high-performance building indicate that the prediction accuracy of the new approach is sufficient for implementation in predictive control.

14.
Zhang Z, Deng Z, Rusch KA. Water Research, 2012, 46(2): 465-474
The US EPA BEACH Act requires beach managers to issue swimming advisories when water quality standards are exceeded. While a number of methods and models have been proposed to meet this requirement, no systematic comparisons of their relative performance against the same data series are available. This study presents and compares three models for nowcasting and forecasting enterococci levels at Gulf Coast beaches in Louisiana, USA. One was developed using the artificial neural network (ANN) toolbox in MATLAB and the other two were based on the US EPA Virtual Beach (VB) program. A total of 944 sets of environmental and bacteriological data were utilized. The data were collected and analyzed weekly during the swimming season (May-October) at six sites of Holly Beach by the Louisiana Beach Monitoring Program over the six-year period May 2005-October 2010. The ANN model includes 15 readily available environmental variables, such as salinity, water temperature, wind speed and direction, tide level and type, weather type, and various combinations of antecedent rainfall. The ANN model was trained, validated, and tested using 308, 103, and 103 data sets (collected in 2007, 2008, and 2009), with an average linear correlation coefficient (LCC) of 0.857 and a root mean square error (RMSE) of 0.336. The two VB models, one based on a linear transformation and the other on a nonlinear transformation, were constructed using the same data sets. The linear VB model with 6 input variables achieved an LCC of 0.230 and an RMSE of 1.302, while the nonlinear VB model with 5 input variables produced an LCC of 0.337 and an RMSE of 1.205. To assess the predictive performance of the ANN and VB models, hindcasting was conducted using 430 sets of independent environmental and bacteriological data collected at the six Holly Beach sites in 2005, 2006, and 2010. The hindcasting results show that the ANN model is capable of predicting enterococci levels at the Holly Beach sites with an adjusted RMSE of 0.803 and an LCC of 0.320, while the corresponding values are 1.815 and 0.354 for the linear VB model and 1.961 and 0.521 for the nonlinear VB model. The results indicate that the ANN model with 15 parameters performs better than the VB models with 6 or 5 parameters in terms of RMSE, while the VB models perform better in terms of LCC. The predictive models developed in this study (especially the ANN and nonlinear VB models), combined with readily available real-time environmental and weather forecast data, can be used to nowcast and forecast beach water quality, greatly reducing the potential risk of contaminated beach waters to human health and improving beach management. While the models were developed specifically for Holly Beach, Louisiana, the methods used in this paper are generally applicable to other coastal beaches.

15.
This article proposes a methodology for predicting the time to onset of corrosion of reinforcing steel in concrete bridge decks while incorporating parameter uncertainty. It is based on the integration of artificial neural networks (ANN), case-based reasoning (CBR), a mechanistic model, and Monte Carlo simulation (MCS). A probabilistic mechanistic model is used to generate the distribution of the time to corrosion initiation based on statistical models of the governing parameters obtained from field data. The proposed ANN and CBR models act as universal functional mapping tools to approximate the relationship between the input and output of the mechanistic model. These tools are integrated with the MCS technique to generate the distribution of the corrosion initiation time from the distributions of the governing parameters. The proposed methodology is applied to predict the time to corrosion initiation of the top reinforcing steel in the concrete deck of the Dickson Bridge in Montreal. The study demonstrates the feasibility, adequate reliability, and computational efficiency of the proposed integrated ANN-MCS and CBR-MCS approaches for preliminary project-level and network-level analyses.
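The ANN-MCS integration can be sketched generically: train a cheap ANN surrogate on runs of a mechanistic model, then push a large Monte Carlo sample of the governing parameters through the surrogate to obtain the initiation-time distribution. The Fick's-law initiation-time formula and all parameter distributions below are textbook stand-ins, not the Dickson Bridge inputs.

```python
import numpy as np
from scipy.special import erfinv
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)

def initiation_time(cover_mm, d_m2_yr, cs, cth):
    """Fick's-law time to corrosion initiation (yr); mechanistic stand-in."""
    x = cover_mm / 1000.0
    return x**2 / (4 * d_m2_yr) / erfinv(1 - cth / cs) ** 2

def sample(n):
    """Draw governing parameters from assumed field-data distributions."""
    return np.column_stack([
        rng.normal(60, 8, n),                            # cover (mm)
        rng.lognormal(np.log(2e-12), 0.3, n) * 3.156e7,  # diffusivity (m^2/yr)
        rng.normal(3.5, 0.5, n),                         # surface Cl (kg/m^3)
        rng.normal(0.9, 0.1, n),                         # threshold Cl (kg/m^3)
    ])

# 1. Train the ANN surrogate on mechanistic-model runs.
X = sample(4000)
y = initiation_time(*X.T)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
)
surrogate.fit(X, y)

# 2. Monte Carlo simulation through the cheap surrogate.
t_mc = surrogate.predict(sample(100_000))
print(f"median {np.median(t_mc):.0f} yr, "
      f"5th percentile {np.percentile(t_mc, 5):.0f} yr")
```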

16.
Sun F, Chen J, Tong Q, Zeng S. Water Research, 2008, 42(1-2): 229-237
To understand the dual impact of deteriorating water resources and stringent water quality regulations on the performance of conventional waterworks at a nationwide level, a risk-based screening methodology is developed and applied to evaluate the natural organic matter (NOM) regulation in China's new drinking water quality standards. Because of the large number of drinking water sources and conventional waterworks, and the lack of detailed field observations in China, the analysis is based entirely on a validated conceptual model. The performance risk of conventional waterworks in complying with the new regulation is estimated within a risk assessment framework through Monte Carlo simulation, accounting for the uncertainties associated with model parameters, source water quality, and operating conditions across different waterworks. A screening analysis is simultaneously performed using a task-based Hornberger-Spear-Young algorithm to identify the critical operating parameters that determine the performance risk, based on which potential risk management strategies are proposed and evaluated. The effects of model parameter uncertainties on the simulation results are also discussed.

17.
Wall decay coefficients vary between pipes and must be determined indirectly from field-measured concentration data. A general calibration model for identifying these parameters is formulated here. The problem is solved using the shuffled frog leaping optimisation algorithm, coupled with hydraulic and water quality simulation models through the EPANET Toolkit. The methodology is applied to two application networks to examine the robustness of the parameter estimation algorithm and to study the effects of network flow conditions, data availability, model simplification, and measurement errors. To that end, different field conditions are considered, including a network with or without tanks, altered disinfectant injection policies, changed measurement locations, and varying numbers of wall decay coefficients. Results for conditions with exact data show that the solution approach is robust and consistently finds the true parameter values. However, when the number of decay coefficients is increased, the results suggest that the distribution and number of meter locations affect model parameter identifiability. It is also noted that the parameter sensitivity is relatively small and related to the velocities in the network. Finally, isolated tracer data can supplement information from normal operating conditions to improve decay coefficient calibration, but if sufficient data are already available the incremental improvement may not be significant. To confirm this result, model calibration must be extended to parameter and model prediction uncertainty.
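The structure of the calibration problem, adjusting per-pipe wall coefficients until simulated downstream concentrations match field measurements, can be shown with a plug-flow stand-in for the EPANET simulation and a standard least-squares solver in place of the shuffled frog leaping algorithm. Pipe travel times, the bulk coefficient, and the 'measurements' below are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(13)

# Plug-flow stand-in for the EPANET simulation: chlorine decays through a
# series of pipes with a shared bulk coefficient and pipe-specific wall terms.
travel_t = np.array([1.0, 2.5, 1.5, 3.0])   # residence time per pipe (h)
k_bulk = 0.08                               # known bulk decay (1/h)
c_in = 1.2                                  # source chlorine (mg/L)

def downstream_conc(k_wall):
    c, out = c_in, []
    for kw, tt in zip(k_wall, travel_t):
        c *= np.exp(-(k_bulk + kw) * tt)    # first-order loss in each pipe
        out.append(c)
    return np.array(out)

# "Field" measurements at the end of each pipe, generated from true values.
true_kw = np.array([0.05, 0.12, 0.02, 0.08])
measured = downstream_conc(true_kw) * (1 + rng.normal(0, 0.01, 4))

# Calibrate wall coefficients by minimising the measurement residuals.
res = least_squares(lambda kw: downstream_conc(kw) - measured,
                    x0=np.full(4, 0.05), bounds=(0, 1))
print("estimated k_wall:", np.round(res.x, 3))
```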

18.
Numerous experimental studies have shown that the type and gradation of coarse aggregates affect the mechanical properties of concrete, yet these factors have not been taken into account in the available machine learning prediction models. In this study, two-dimensional concrete microscopic images were generated using a random aggregate model (RAM), with the coarse aggregate and other concrete ingredients represented by polygons and trichromatic chromaticity values in the RAM images. An RAM image set was created by applying this method to 1110 sets of different concrete mixes. Based on the Bayesian optimization algorithm and the image set, a compressive strength prediction model accounting for coarse aggregate types and gradations was then developed using a convolutional neural network (CNN). For comparison, an artificial neural network (ANN) compressive strength prediction model was developed using the same 1110 sets of mix ratio data. The results show that the proposed RAM image generation method is capable of representing the different concrete mix ratios collected in this study, and that the CNN model, which considers aggregate types and gradations, predicts compressive strength better than the ANN model. The method offers a new perspective for predicting other concrete mechanical properties and technically supports performance-based intelligent concrete mix design.
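A compact CNN regressor of the kind described, convolutional feature extraction over the three-channel RAM image followed by a scalar strength head, might look like the PyTorch sketch below. The layer sizes and image resolution are illustrative, not the Bayesian-optimised architecture from the paper.

```python
import torch
import torch.nn as nn

class StrengthCNN(nn.Module):
    """Minimal CNN regressor for RAM-style concrete images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted compressive strength (MPa)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = StrengthCNN()
batch = torch.rand(8, 3, 128, 128)   # 8 RGB RAM images, 128x128 px
print(model(batch).shape)            # torch.Size([8, 1])
```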

19.
In this study, the infrastructure leakage index (ILI), an indicator frequently preferred by water utilities with sufficient data to assess the performance of water distribution systems, is modeled for the first time through three different methodologies using different input data. In addition to the variables used in classical ILI calculations in the literature, an age parameter is also included in the models. In the first step, ILI values are estimated via multiple linear regression (MLR) using water supply quantity, water accrual quantity, network length, service connection length, number of service connections, and pressure variables. Secondly, an artificial neural network (ANN) approach is applied to the raw data to improve ILI prediction performance. Finally, the data set is standardized with the Z-score method to increase the learning power of the ANN models, and ANN predictions are made after transforming the data through principal component analysis (PCA) to minimize complexity by reducing the dimensionality of the data set. The model predictions are evaluated via mean square error, G-value, mean absolute error, mean bias error, and adjusted R2. When the model outputs are evaluated together with the classical ILI calculations, successful ILI predictions are obtained through the PC-ANN method using three or four variables, including the age parameter, rather than six. Water utilities with insufficient physical and operational data for calculating the ILI indicator can thus evaluate network performance by predicting the ILI with high accuracy and reliability through the models suggested in this study.
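The PC-ANN chain maps directly onto a scikit-learn pipeline: Z-score standardization, PCA reduction, then a feed-forward network. The synthetic utility records, component count, and network size below are placeholders for the study's actual data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(17)

# Synthetic utility records: supply, accrual, network length, connection
# length, number of connections, pressure, age -- placeholders for real data.
n = 400
X = rng.normal(size=(n, 7)) * [5e6, 4e6, 300, 50, 2e4, 15, 10] + \
    [2e7, 1.6e7, 1200, 200, 8e4, 45, 30]
ili = 1.5 + 3e-8 * (X[:, 0] - X[:, 1]) + 0.02 * X[:, 6] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ili, random_state=0)

# Z-score -> PCA (reduced dimensionality) -> feed-forward ANN, as in PC-ANN.
pc_ann = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
pc_ann.fit(X_tr, y_tr)
pred = pc_ann.predict(X_te)
print(f"MAE = {mean_absolute_error(y_te, pred):.2f}, "
      f"R2 = {r2_score(y_te, pred):.2f}")
```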

20.
Hydraulic impact hammers are mechanical excavators that can be used economically in tunneling projects under geologic conditions suitable for rock breakage by indentation. However, relatively little has been published on predicting the performance of this equipment from rock properties and machine parameters. In tunnel excavation projects there is often a need for accurate prediction of the performance of such machinery, since poor prediction of machine performance can lead to very costly contractual claims. This study demonstrates the application of two soft computing methods, the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), to predict the net breaking rate of an impact hammer. The prediction capabilities of ANN and ANFIS are shown using field data obtained from a metro tunnel project in Istanbul, Turkey. Two prediction models based on ANN and ANFIS were developed, and their results were compared with multiple regression-based predictions using various statistical performance indexes. The results suggest that the proposed ANFIS-based model outperforms both the ANN model and the classical multiple regression model, and thus can produce a more accurate and reliable estimate of impact hammer performance from Schmidt hammer rebound hardness (SHRH) and rock quality designation (RQD) values obtained from field tests.
