Similar Literature
20 similar documents found
1.
The Operational Street Pollution Model (OSPM®) is a widely used air quality model for urban street canyons. It is a parametric model, simulating the contribution from traffic emissions on a single street at receptor points on the buildings' facades. The OSPM contains a number of empirical parameters accounting for processes such as emission factors or the dispersion of pollutants. The values of these parameters are based on empirical assumptions and might not be optimal for a specific street. In this work, we allow these parameters to vary within a meaningful range. We implemented two different parameter estimation schemes: a dynamic estimation scheme (using an ensemble Kalman filter) that allowed parameter values to vary during the simulation, and a static estimation scheme (using a least-squares algorithm) that kept parameter values fixed. We ran year-long simulations for five different streets in Danish cities and evaluated performance by comparing forecast concentrations of NOx, NO2, O3 and CO with observations. Overall, parameter estimation substantially improved the forecasting performance of the model, especially for NO2 and CO. However, it led to slightly more bias in the modelled daily maximum concentrations, suggesting that the parameter estimation fits the bulk of the data rather than the extremes. Estimated parameter values varied substantially in time and between sites, making it difficult to generalise parameter estimates to other locations. Modelled concentrations from the OSPM were, on average, notably more accurate in simulations using measured urban background concentrations and meteorological parameters than in simulations using modelled data for these inputs. However, this is only applicable when observations from nearby meteorological and urban background monitoring sites are available. We conclude that although dynamic parameter estimation has limited applicability to real-time air quality forecasting, it can give useful feedback about the quality of model parameterisations and model inputs. Static parameter estimation is a simpler method that is often as effective as dynamic parameter estimation.
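As a sketch of the dynamic scheme's core idea, the following toy example updates an ensemble of parameter sets with an ensemble Kalman filter as observations arrive; the two-parameter model, parameter ranges and observation error are hypothetical stand-ins, not the OSPM's.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, forcing):
    """Hypothetical stand-in for a parametric street-canyon model:
    concentration = emission_factor * traffic_forcing / dispersion_scale."""
    return theta[0] * forcing / theta[1]

n_ens, n_par = 50, 2
lo, hi = np.array([0.5, 1.0]), np.array([2.0, 5.0])   # assumed meaningful ranges
theta = rng.uniform(lo, hi, size=(n_ens, n_par))      # prior parameter ensemble
obs_err = 2.0                                         # assumed observation error std

for forcing, c_obs in [(10.0, 6.0), (12.0, 7.5), (8.0, 4.8)]:   # synthetic hourly data
    c_ens = np.array([model(t, forcing) for t in theta])
    c_pert = c_obs + obs_err * rng.standard_normal(n_ens)       # perturbed observations
    cov_ty = np.cov(theta.T, c_ens)[:n_par, -1]       # parameter-output cross-covariance
    gain = cov_ty / (c_ens.var(ddof=1) + obs_err**2)  # Kalman gain for a scalar obs
    theta = np.clip(theta + np.outer(c_pert - c_ens, gain), lo, hi)

print("posterior parameter means:", theta.mean(axis=0).round(2))
```

The static scheme would instead fit one fixed parameter vector to all observations at once, e.g. with a least-squares solver.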

2.
Parameter uncertainty and sensitivity for a watershed-scale simulation model in Portugal were explored to identify the model parameters most critical to calibration and prediction. The research is intended to provide guidance on allocating limited data collection and model parameterization resources for modelers working in any data- and resource-limited environment. The watershed-scale hydrology and water quality simulation model Hydrologic Simulation Program – FORTRAN (HSPF) was used to predict the hydrology of the Lis River basin in Portugal. The model was calibrated for a 5-year period (1985–1989) and validated for a 4-year period (2003–2006). Agreement between simulated and observed streamflow data was satisfactory according to performance measures such as Nash–Sutcliffe efficiency (E), deviation of runoff (Dv) and the coefficient of determination (R2). The Generalized Likelihood Uncertainty Estimation (GLUE) method was used to establish uncertainty bounds for the simulated flow, using the Nash–Sutcliffe coefficient as the performance likelihood measure. Sensitivity analysis results indicate that runoff estimates are most sensitive to parameters related to climate conditions, soil and land use. These results suggest that even though climate conditions are generally most significant in water balance modeling, attention should also be paid to land use characteristics. Specifically with respect to HSPF, the two most sensitive parameters, INFILT and LZSN, are both directly dependent on soil and land use characteristics.
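A minimal sketch of the GLUE procedure described here, using a toy rainfall-runoff surrogate in place of HSPF (which is driven through its own input files, not a Python call); the parameter names INFILT and LZSN are borrowed from the abstract, but the model equation, parameter ranges and the 0.5 behavioural threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(infilt, lzsn, rain):
    """Toy surrogate for a rainfall-runoff model; INFILT and LZSN are named
    after the two most sensitive HSPF parameters, but the equation is
    illustrative only."""
    return rain * np.exp(-infilt) / (1.0 + lzsn)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, the GLUE likelihood measure used here."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = rng.gamma(2.0, 5.0, size=365)
q_obs = run_model(0.8, 6.0, rain) * rng.lognormal(0.0, 0.1, size=365)  # synthetic "observations"

samples = rng.uniform([0.1, 2.0], [2.0, 15.0], size=(5000, 2))  # Monte Carlo parameter sets
sims = np.array([run_model(i, l, rain) for i, l in samples])
likelihood = np.array([nse(s, q_obs) for s in sims])

behavioural = likelihood > 0.5   # likelihood threshold separating behavioural sets
lower, upper = np.percentile(sims[behavioural], [5, 95], axis=0)  # uncertainty bounds
coverage = np.mean((q_obs >= lower) & (q_obs <= upper))
print(f"{behavioural.sum()} behavioural sets; {coverage:.0%} of observations inside the bounds")
```

Many distinct parameter pairs reach a high likelihood here, which is exactly the equifinality that GLUE is designed to expose.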

3.
4.
The identification and representation of uncertainty is recognized as an essential component of model applications. One important approach to identifying uncertainty is sensitivity analysis, which evaluates how variation in the model output can be apportioned to variations in model parameters. One of the most popular sensitivity analysis techniques is the Fourier amplitude sensitivity test (FAST). The main mechanism of FAST is to assign each parameter a distinct integer frequency (its characteristic frequency) through a periodic sampling function; for a specific parameter, the variance contribution can then be singled out of the model output at the characteristic frequency via a Fourier transformation. One limitation of FAST is that it can only be applied to models with independent parameters, yet in many cases the parameters are correlated with one another. In this study, we propose to extend FAST to models with correlated parameters. The extension is based on reordering the independent sample of the traditional FAST. We apply the improved FAST to linear, nonlinear, nonmonotonic and real application models. The results show that the sensitivity indices derived by FAST are in good agreement with those from the correlation ratio sensitivity method, a nonparametric method for models with correlated parameters.
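The following is a compact sketch of the traditional (independent-parameter) FAST that the abstract builds on: each parameter is sampled along a periodic search curve at its own characteristic frequency, and its first-order variance share is read off the Fourier spectrum of the output. The correlated-parameter extension (reordering the independent sample) is omitted, and the test model and frequencies are arbitrary choices.

```python
import numpy as np

def fast_first_order(model, freqs, n=10001, harmonics=4):
    """First-order FAST sensitivity indices for independent inputs on [0, 1].
    Each input gets a distinct integer (characteristic) frequency; its
    variance share is extracted from the Fourier spectrum of the output."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Periodic search curve mapping s to [0, 1] for each parameter
    x = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
    y = model(x)
    total_var = y.var()
    indices = []
    for w in freqs:
        var_i = 0.0
        for h in range(1, harmonics + 1):     # harmonics of the characteristic frequency
            a = 2.0 * np.mean(y * np.cos(h * w * s))
            b = 2.0 * np.mean(y * np.sin(h * w * s))
            var_i += 0.5 * (a**2 + b**2)
        indices.append(var_i / total_var)
    return np.array(indices)

# Linear test model with unequal coefficients; frequencies chosen interference-free
model = lambda x: 1.0 * x[0] + 2.0 * x[1] + 3.0 * x[2]
print(fast_first_order(model, freqs=np.array([11, 21, 29])))
```

For this linear model the indices come out roughly proportional to the squared coefficients (about 0.07, 0.29 and 0.64), as expected for independent uniform inputs.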

5.
Sensitivity analysis (SA) is a commonly used approach for identifying the important parameters that dominate model behavior. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, seven qualitative and three quantitative. All SA methods are tested using a variety of sampling techniques to screen the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration, with the South Branch Potomac River basin near Springfield, West Virginia, U.S. as the study area. The key findings are: (1) Among qualitative SA methods, the Correlation Analysis (CA), Regression Analysis (RA) and Gaussian Process (GP) screening methods were not effective in this example. Morris One-At-a-Time (MOAT) screening was the most efficient, needing only 280 samples to identify the most important parameters, but it was the least robust method. The Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods needed about 400–600 samples for the same purpose; Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) Among quantitative SA methods, the Fourier Amplitude Sensitivity Test (FAST) needed at least 2777 samples to identify parameter main effects. The McKay method needed about 360 samples to evaluate main effects and more than 1000 samples to assess two-way interaction effects; OALH and LPτ (LPTAU) sampling techniques are more appropriate for it. The Sobol' method needed a minimum of 1050 samples to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
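As an illustration of why MOAT is cheap, here is a minimal Morris One-At-a-Time screening sketch on the unit hypercube: r trajectories over k parameters cost r × (k + 1) runs, so 20 trajectories over SAC-SMA's 13 parameters give exactly the 280 samples quoted above. The toy model and the level/trajectory settings are assumptions.

```python
import numpy as np

def morris_screening(model, n_par, n_traj=20, levels=4, seed=0):
    """Morris One-At-a-Time elementary effects on the unit hypercube.
    mu* (mean absolute effect) ranks importance; a large sigma flags
    nonlinearity or interactions. Cost: n_traj * (n_par + 1) model runs."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = np.zeros((n_traj, n_par))
    for t in range(n_traj):
        x = rng.integers(0, levels - 1, n_par) / (levels - 1)   # random grid base point
        y0 = model(x)
        for j in rng.permutation(n_par):                        # move one factor at a time
            step = delta if x[j] + delta <= 1.0 else -delta
            x[j] += step
            y1 = model(x)
            effects[t, j] = (y1 - y0) / step
            y0 = y1
    return np.abs(effects).mean(axis=0), effects.std(axis=0)    # mu*, sigma

mu_star, sigma = morris_screening(lambda x: x[0] + 10.0 * x[1] * x[2], n_par=3)
print("mu*:", mu_star.round(2), " sigma:", sigma.round(2))
```

In the toy output, the interacting pair shows both large mu* and large sigma, while the additive factor has sigma near zero.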

6.
This work introduces a heuristic index (the "tolerance distance") to define the "closeness" of two variable categories in multiple correspondence analysis (MCA). The index is a weighted Euclidean distance whose weights are based on the "importance" of each MCA axis; variable categories are considered associated when their distance falls below the tolerance distance. The approach was applied to renal transplantation data, where the analysed variables were allograft survival and 13 of its putative predictors, and a bootstrap-based stability analysis was employed to assess the reliability of the results. The method identified associations previously detected within the database, such as that between the race of donors and recipients, and that between HLA match and Cyclosporine use. A hierarchical clustering algorithm applied to the same data allowed interpretations similar to those based on MCA. The defined tolerance distance could thus be used as an index of "closeness" in MCA, decreasing the subjectivity of interpreting MCA results.
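A minimal sketch of the index's logic, assuming axis weights proportional to each axis's share of inertia and a low quantile of the pairwise distances as the tolerance cutoff; the paper's exact weighting and tolerance definition may differ, and the category coordinates below are invented.

```python
import numpy as np

def tolerance_closeness(coords, inertia, labels, quantile=0.1):
    """Weighted Euclidean distances between MCA category points, each axis
    weighted by its share of total inertia ('importance'); pairs closer than
    the tolerance distance (here a low quantile of all pairwise distances,
    a placeholder choice) are flagged as associated."""
    w = inertia / inertia.sum()
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((w * diff**2).sum(axis=-1))
    iu = np.triu_indices(len(coords), k=1)
    tol = np.quantile(dist[iu], quantile)      # heuristic tolerance distance
    return [(labels[i], labels[j], round(dist[i, j], 3))
            for i, j in zip(*iu) if dist[i, j] <= tol]

# Hypothetical category coordinates on the first two MCA axes
coords = np.array([[0.9, 0.1], [0.85, 0.15], [-1.2, 0.4], [-1.1, -0.5]])
inertia = np.array([0.55, 0.20])               # assumed axis inertias
labels = ["HLA:match", "CsA:yes", "race:mismatch", "graft:fail"]
print(tolerance_closeness(coords, inertia, labels))
```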

7.
We have developed pyEMU, a Python framework for Environmental Modeling Uncertainty analyses: an open-source tool that is non-intrusive, easy to use, computationally efficient, and scalable to highly parameterized inverse problems. The framework implements several types of linear (first-order, second-moment, or FOSM) and non-linear uncertainty analyses. The FOSM-based analyses can be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example Jupyter notebooks available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of the parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insight into the results, with the goal of educating users not only in how to apply pyEMU but also in the underlying theory of applied uncertainty quantification.
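For orientation, the following shows the linear algebra behind a FOSM analysis (the Schur-complement update that pyEMU's linear analyses are built around), written in plain numpy rather than through the pyEMU API; the Jacobian and covariances are hypothetical. Because only a Jacobian is needed, this kind of analysis can indeed be run before any parameter estimation.

```python
import numpy as np

def fosm_posterior(jacobian, prior_cov, obs_cov):
    """First-order, second-moment (FOSM) posterior parameter covariance under
    a linearized model (Schur complement of the prior):
        C_post = C_prior - C_prior J^T (J C_prior J^T + C_obs)^-1 J C_prior"""
    J, Cp, Co = jacobian, prior_cov, obs_cov
    gain = Cp @ J.T @ np.linalg.inv(J @ Cp @ J.T + Co)
    return Cp - gain @ J @ Cp

# Hypothetical 3-parameter, 2-observation problem
J = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.1]])      # sensitivities d(obs)/d(par)
Cp = np.diag([1.0, 1.0, 1.0])        # prior parameter uncertainty
Co = np.diag([0.1, 0.1])             # observation noise
Cpost = fosm_posterior(J, Cp, Co)
print("prior variances:    ", np.diag(Cp))
print("posterior variances:", np.diag(Cpost).round(3))  # uncertainty reduced by the data
```

A forecast's variance then follows as y.T @ C @ y for a forecast sensitivity vector y, before (prior) and after (posterior) notional data collection, which is the basis of data worth analysis.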

8.
Land change modelers often create maps of future land use from a reference land use map. However, future land use maps may mislead decision-makers, who are often unaware of the sensitivity and uncertainty in land use maps caused by data error. Since most metrics that communicate uncertainty require reference land use data to calculate accuracy, assessing uncertainty becomes challenging when no reference map for the future is available. This study develops a new conceptual framework for sensitivity analysis and uncertainty assessment (FSAUA) which compares multiple maps under various data-error scenarios. FSAUA performs sensitivity analyses of land use maps using a reference map and assesses uncertainty in predicted maps. It was applied using three well-known land change models (ANN, CART and MARS) in Delhi, India, and was found to be a practical tool for communicating uncertainty to end-users who must develop reliable planning decisions.
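A toy sketch of the kind of data-error experiment FSAUA formalizes: perturb a reference categorical map under several error-rate scenarios and track how map agreement degrades. The map, classes and error rates are invented, and FSAUA's actual metrics and workflow are richer than this.

```python
import numpy as np

rng = np.random.default_rng(3)

def perturb(land_map, error_rate, classes):
    """Inject random classification error into a categorical land-use map."""
    noisy = land_map.copy()
    flip = rng.random(land_map.shape) < error_rate
    noisy[flip] = rng.choice(classes, flip.sum())
    return noisy

reference = rng.choice([0, 1, 2], size=(100, 100))   # hypothetical 3-class map
for err in (0.05, 0.10, 0.20):                       # data-error scenarios
    agreement = [np.mean(perturb(reference, err, [0, 1, 2]) == reference)
                 for _ in range(20)]
    print(f"error {err:.0%}: mean per-pixel agreement {np.mean(agreement):.3f}")
```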

9.
We present a study of the Hydro-Informatic Modelling System (HIMS) rainfall-runoff model for a semiarid region. The model includes nine parameters in need of calibration. A master–slave swarms shuffling evolution algorithm based on self-adaptive dynamic particle swarm optimization (MSSE-SDPSO) is proposed to derive the model parameters. In comparison with the SCE-UA, PSO, MSSE-PSO and MSSE-SPSO algorithms, MSSE-SDPSO shows faster convergence and more stable performance. The model is used to simulate discharge in the Luanhe River basin, a semiarid region. Compared with the SimHyd and SMAR models, the HIMS model has the highest Nash–Sutcliffe efficiency (NSE) and smallest relative error (RE) of volumetric fitness for both the calibration and verification periods. In addition, the study indicates that the HIMS model with all-gauge data improves runoff prediction compared with single-gauge data, and a distributed HIMS model performs better than a lumped one. Finally, the Morris method is used to analyze the sensitivity of the model parameters with respect to the objective functions NSE and RE.
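As a sketch of the calibration machinery, here is a minimal global-best PSO, the building block that MSSE-SDPSO elaborates (the master-slave shuffling and self-adaptive dynamics of the paper's algorithm are omitted); the toy model and the PSO coefficients are assumptions.

```python
import numpy as np

def pso_calibrate(loss, lb, ub, n_particles=30, n_iter=200, seed=0):
    """Minimal global-best particle swarm optimization: each particle is
    pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n_particles, len(lb)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy calibration: recover the parameters of a hypothetical 2-parameter model
theta_true = np.array([2.0, 0.5])
obs = theta_true[0] * np.arange(1, 50) ** theta_true[1]
loss = lambda t: np.sum((t[0] * np.arange(1, 50) ** t[1] - obs) ** 2)
print(pso_calibrate(loss, lb=[0.1, 0.1], ub=[5.0, 2.0]))
```

In practice the loss would be built from the paper's objective functions, e.g. 1 − NSE or |RE| computed against observed discharge.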

10.
We present a novel algorithm for joint state-parameter estimation using sequential three-dimensional variational data assimilation (3D-Var) and demonstrate its application in the context of morphodynamic modelling using an idealised two-parameter 1D sediment transport model. The new scheme combines a static representation of the state background error covariances with a flow-dependent approximation of the state-parameter cross-covariances. For the case presented here, this involves calculating a local finite-difference approximation of the gradient of the model with respect to the parameters. The new method is easy to implement and computationally inexpensive to run. Experimental results are positive, with the scheme able to recover the model parameters to a high level of accuracy. We expect that there is potential for successful application of this new methodology to larger, more realistic models with more complex parameterisations.
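A minimal sketch of the scheme under stated assumptions: the augmented background covariance keeps a static state block, the state-parameter cross-covariances come from a finite-difference model gradient, and the analysis is written in its equivalent Kalman-gain form. The idealised "model", grid, observation pattern and covariance magnitudes are all invented.

```python
import numpy as np

def analysis(pb, model, H, Bx, Bp, R, y, eps=1e-4):
    """One 3D-Var analysis of the augmented vector [state, parameters].
    The state block Bx is static; the state-parameter cross-covariances use
    the flow-dependent approximation G @ Bp, with G = dM/dp obtained from
    local finite differences."""
    xb, m = model(pb), len(pb)
    G = np.column_stack([(model(pb + eps * np.eye(m)[j]) - xb) / eps
                         for j in range(m)])
    B = np.block([[Bx, G @ Bp], [Bp @ G.T, Bp]])
    Ha = np.hstack([H, np.zeros((H.shape[0], m))])    # only the state is observed
    K = B @ Ha.T @ np.linalg.inv(Ha @ B @ Ha.T + R)
    z = np.concatenate([xb, pb]) + K @ (y - H @ xb)
    return z[:len(xb)], z[len(xb):]

# Idealised 1D "bed profile" controlled by two parameters
grid = np.linspace(0.0, 1.0, 20)
model = lambda p: p[0] * np.sin(2 * np.pi * grid) + p[1] * grid
H = np.eye(20)[::4]                            # observe every 4th grid point
y = H @ model(np.array([1.0, 0.4]))            # synthetic observations, truth = (1.0, 0.4)

pb = np.array([0.6, 0.1])                      # first-guess parameters
for _ in range(10):                            # sequential assimilation cycles
    _, pb = analysis(pb, model, H, 0.1 * np.eye(20), 0.05 * np.eye(2),
                     0.01 * np.eye(5), y)
print("recovered parameters:", pb.round(3))    # converges toward [1.0, 0.4]
```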

11.
Huang Lei, Huang Diming. 《计算机应用》 (Journal of Computer Applications), 2008, 28(2): 307–310.
A novel artificial immune network model, TSIN, is proposed. By applying immune operators including clonal selection, cooperation-based mutation and antibody suppression, the antibody population gradually differentiates and proliferates from a single individual into effective clusters. These clusters both accurately represent the distribution of the original data set in shape space and fit its local distribution well, providing a sound basis for the analysis of high-dimensional data. The paper describes the overall framework of the TSIN learning algorithm and analyses its key steps in detail. Simulation experiments show that TSIN has good data analysis capability and reflects the topological relations and distribution characteristics hidden in the data better than traditional self-organizing neural network methods.
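A minimal aiNet-style sketch of the clonal-selection/suppression loop the abstract describes, assuming Gaussian mutation of clones and a simple Euclidean suppression threshold; TSIN's cooperation-based mutation operator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def immune_step(antibodies, antigens, n_clones=5, beta=0.3, sup=0.2):
    """One aiNet-style iteration: clonal selection of the best-matching
    antibody for each antigen, affinity-proportional mutation of its clones,
    retention of the best clone, then suppression of near-duplicates."""
    for ag in antigens:
        d = np.linalg.norm(antibodies - ag, axis=1)
        best = antibodies[d.argmin()]
        clones = best + beta * d.min() * rng.standard_normal((n_clones, ag.size))
        winner = clones[np.linalg.norm(clones - ag, axis=1).argmin()]
        antibodies = np.vstack([antibodies, winner])
    kept = []                                  # antibody suppression
    for ab in antibodies:
        if all(np.linalg.norm(ab - k) > sup for k in kept):
            kept.append(ab)
    return np.array(kept)

# Two Gaussian blobs of antigens; the network grows from a single antibody
antigens = np.vstack([rng.normal([0, 0], 0.1, (30, 2)),
                      rng.normal([2, 2], 0.1, (30, 2))])
antibodies = rng.normal(1.0, 0.1, (1, 2))
for _ in range(5):
    antibodies = immune_step(antibodies, antigens)
print(f"{len(antibodies)} antibodies now cover the two clusters")
```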

12.
One of the concerns in Data Envelopment Analysis (DEA) is the sensitivity and stability analysis of the specific Decision Making Unit (DMU) under evaluation. From an economic point of view, the stability region in input–output space within which an efficient DMU maintains its efficiency score is important. In this paper, a new sensitivity analysis approach based on the Banker, Charnes and Cooper (BCC) model, modified by facet analysis, is developed. An extended stability region is determined, especially for DMUs placed on the intersection of the efficient and weakly efficient frontiers. The results are illustrated by numerical examples.
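For context, the following solves the standard input-oriented BCC envelopment model with scipy's linear-programming routine; the facet-analysis modification and the stability-region computation of the paper are omitted, and the three-DMU data set is invented. A sensitivity analysis would perturb the inputs and outputs of the evaluated DMU and re-solve.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, k):
    """Input-oriented BCC (variable returns to scale) efficiency of DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); LP variables are [theta, lambdas]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                                  # minimize theta
    A_ub = np.vstack([np.c_[-X[k][:, None], X.T],                # X @ lam <= theta * x_k
                      np.c_[np.zeros((Y.shape[1], 1)), -Y.T]])   # Y @ lam >= y_k
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)][None, :]                       # sum(lam) = 1  (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0, 3.0], [4.0, 2.0], [5.0, 6.0]])   # inputs of three hypothetical DMUs
Y = np.array([[1.0], [1.0], [1.0]])                  # a single common output
for k in range(3):
    print(f"DMU {k}: BCC efficiency = {bcc_efficiency(X, Y, k):.3f}")
```

In this toy data set the first two DMUs sit on the efficient frontier (score 1), while the third is dominated and scores well below 1.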

13.
Sustainable management of groundwater-dependent vegetation (GDV) requires the accurate identification of GDVs, characterisation of their water use dynamics and an understanding of the associated errors. This paper presents sensitivity and uncertainty analyses of one GDV mapping method, which uses temperature differences between time series of modelled and observed land surface temperature (LST) to detect groundwater use by vegetation in a subtropical woodland. Uncertainty in modelled LST was quantified using the Jacobian method with error variances obtained from the literature. Groundwater use was inferred where modelled and observed LST were significantly different according to a Student's t-test. Modelled LST was most sensitive to low-range wind speeds (<1.5 m s−1), low-range vegetation height (≤0.5 m) and low-range leaf area index (≤0.5 m2 m−2), limiting the detectability of groundwater use by vegetation under such conditions. The model-data approach was well suited to detecting GDV because model-data errors were lowest for climatic conditions conducive to groundwater use.
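A minimal sketch of the two ingredients described above: first-order (Jacobian) propagation of input error variances to modelled-LST variance, and a significance test between modelled and observed LST series (Welch's t-test standing in for the paper's Student's t-test). The sensitivities, error variances and LST series are invented.

```python
import numpy as np
from scipy import stats

def lst_variance(jacobian, input_var):
    """First-order (Jacobian) propagation of input error variances to the
    modelled-LST variance: var(LST) ~ J diag(var) J^T."""
    return float(jacobian @ np.diag(input_var) @ jacobian)

def groundwater_use(lst_model, lst_obs, alpha=0.05):
    """Flag groundwater use where modelled (groundwater-free) and observed
    LST differ significantly and the observed canopy is the cooler one."""
    t, p = stats.ttest_ind(lst_model, lst_obs, equal_var=False)
    return p < alpha and lst_obs.mean() < lst_model.mean()

# Hypothetical sensitivities of LST to [wind speed, vegetation height, LAI]
J = np.array([-2.5, -1.0, -0.8])                 # K per unit of each input
print("propagated LST variance:", lst_variance(J, np.array([0.3, 0.1, 0.2]) ** 2))

rng = np.random.default_rng(2)
lst_mod = 305.0 + rng.normal(0, 1.0, 24)         # modelled, assuming no groundwater
lst_obs = 303.0 + rng.normal(0, 1.0, 24)         # observed: cooler, transpiring canopy
print("groundwater use detected:", groundwater_use(lst_mod, lst_obs))
```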

14.
With the rapid development of the economy and the frequent occurrence of air pollution incidents, air pollution has become a topic of broad public concern. Air quality big data are generally characterized by multi-source heterogeneity, dynamic mutability and spatial–temporal correlation, and are usually analysed with big data technology after data fusion. In recent years, various models and algorithms using big data techniques have been proposed. To summarize these methodologies, this paper first classifies air quality monitoring by big data techniques into three categories: spatial models, temporal models and spatial–temporal models. Second, it summarizes the typical big data methods needed for air quality forecasting into three groups, namely statistical forecasting models, deep neural network models and hybrid models, and presents representative scenarios for some of them. Third, it analyzes and compares representative air pollution traceability methods in detail, classifying them into two categories: traditional models combined with big data techniques, and data-driven models. Finally, it provides an outlook on the future of air quality analysis, with some promising and challenging ideas.

15.
In this study, an interval-parameter fuzzy mixed-integer programming method (IFMIP) is designed to support the planning of energy systems management (ESM) and air pollution mitigation under multiple uncertainties. The IFMIP-ESM model integrates interval-parameter programming (IPP), fuzzy programming (FP) and mixed-integer programming (MIP), and can reflect multiple uncertainties presented as both interval values and fuzzy numbers. Moreover, it can identify the dynamics of capacity-expansion schemes, reflect dual dynamics in terms of interval membership functions, and analyze various emission-mitigation scenarios by incorporating energy and environmental policies. The model is applied to a case of energy systems management in Tangshan City, China, and the results indicate that the solutions obtained would help decision makers to (a) adjust the allocation of energy resources and transform the patterns of energy consumption and economic development, (b) facilitate the implementation of the air pollution control action plan, and (c) analyze the dynamic interactions among system cost, energy-supply security and environmental requirements.
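As a sketch of the interval-parameter ingredient only: when cost coefficients are known only as intervals, solving the two bounding submodels brackets the optimal system cost. The coefficients are hypothetical, not Tangshan data, and the fuzzy and mixed-integer parts of IFMIP are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize total supply cost c @ x subject to meeting a demand of 100 units,
# where the unit costs c are known only as intervals [c_lo, c_hi].
c_lo, c_hi = np.array([2.0, 5.0]), np.array([3.0, 6.0])   # assumed cost intervals
A_ub, b_ub = -np.ones((1, 2)), [-100.0]                   # x1 + x2 >= 100

f_lo = linprog(c_lo, A_ub=A_ub, b_ub=b_ub).fun            # optimistic submodel
f_hi = linprog(c_hi, A_ub=A_ub, b_ub=b_ub).fun            # pessimistic submodel
print(f"system cost interval: [{f_lo:.0f}, {f_hi:.0f}]")  # -> [200, 300]
```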

16.
17.
A Web Service Model for Urban Air Quality Data Processing and Analysis
This paper examines the problems faced by current air quality data processing and analysis techniques when applied in large-scale, distributed, heterogeneous environments, and proposes a Web-service-based data processing and analysis model. The model integrates real-time data acquisition, preprocessing, storage, publication and analysis into a single framework, satisfying requirements for openness, extensibility and ease of maintenance.

18.
Manual calibration of distributed models with many unknown parameters can result in problems of equifinality and high uncertainty. In this study, the Generalized Likelihood Uncertainty Estimation (GLUE) technique was used to address these issues through uncertainty and sensitivity analysis of a distributed watershed-scale model (SAHYSMOD) for predicting changes in the groundwater levels of the Rechna Doab basin, Pakistan. The study proposes and describes a stepwise methodology for SAHYSMOD uncertainty analysis that had not been explored before. One thousand input data files created through Monte Carlo simulations were classified as behavioral and non-behavioral sets using threshold likelihood values. The model was calibrated (1983–1988) and validated (1998–2003), with satisfactory agreement between simulated and observed data and acceptable values of the statistical performance indices. Approximately 70% of the observed groundwater level values fell within the uncertainty bounds. Groundwater pumping (Gw) and hydraulic conductivity (Kaq) were found to be highly sensitive parameters affecting groundwater recharge.

19.
Data envelopment analysis (DEA) uses extreme observations to identify superior performance, making it vulnerable to outliers. This paper develops a unified model to identify both efficient and inefficient outliers in DEA. Finding both types is important since many post-efficiency analyses depend on the entire distribution of efficiency estimates, so outliers distinguished by poor performance can significantly alter the results. Besides allowing the identification of outliers, the method described is consistent with a relaxed set of DEA axioms. Several examples demonstrate the need for identifying both efficient and inefficient outliers and the effectiveness of the proposed method. Applications of the model reveal that observations with low efficiency estimates are not necessarily outliers. In addition, a strategy to accelerate the computation is proposed that also applies to the detection of influential observations.

20.
The issues of data integration and interoperability pose significant challenges in scientific hydrological and environmental studies, due largely to the inherent semantic and structural heterogeneity of massive datasets and non-uniform, autonomous data sources. To address these challenges, we propose a unified data integration framework called the Hydrological Integrated Data Environment (HIDE). HIDE is based on a labeled-tree data integration model referred to as the DataNode tree. Using this framework, characteristics of datasets gathered from diverse data sources - with different logical and access organizations - can be extracted, classified as Time-Space-Attribute (TSA) labels, and arranged in a DataNode tree. The uniqueness of our approach is that it effectively combines the semantic aspects of the scientific domain with diverse datasets having different logical organizations to form a unified view. We also adopt a metadata-based approach to specifying the TSA-DataNode tree in order to achieve flexibility and extensibility. The search engine of our HIDE prototype system evaluates a simple user query systematically on the TSA-DataNode tree, presenting integrated results in a standardized format that facilitates both effective and efficient data integration.
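A minimal sketch of what a labeled DataNode tree with TSA labels might look like, with a depth-first search standing in for simple user-query evaluation; the field names and label syntax are illustrative assumptions, not HIDE's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataNode:
    """One node of a labeled DataNode tree carrying a Time-Space-Attribute
    (TSA) label extracted from a source dataset."""
    label: str                       # e.g. "time:2006", "space:basin-A", "attr:discharge"
    source: Optional[str] = None     # originating data source, for leaf nodes
    children: list = field(default_factory=list)

    def find(self, label: str) -> Optional["DataNode"]:
        """Depth-first search, standing in for simple query evaluation."""
        if self.label == label:
            return self
        for child in self.children:
            hit = child.find(label)
            if hit is not None:
                return hit
        return None

root = DataNode("hydrology")
basin = DataNode("space:basin-A")
basin.children += [DataNode("attr:discharge", source="gauge-db"),
                   DataNode("attr:precip", source="netcdf-archive")]
root.children.append(basin)
print(root.find("attr:discharge").source)   # -> gauge-db
```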
