Similar Articles
20 similar articles found.
1.
This article analyses the flow of cadmium through the Australian economy during the one-year period 1998–1999, using material flow analysis (MFA), or substance flow analysis (SFA), as a framework. MFA/SFA can provide a holistic picture of resource use and loss through a geographic region in a specific year, allowing all material/substance inflows, outflows, and stocks in each sub-compartment of the economy to be examined. The results of the study were visualised and presented in diagrams, including an aggregate diagram of the economic system. Existing data from a large variety of sources were used to quantify all cadmium flows within the Australian economy. Some assumptions and judgements were made in order to determine the cadmium flows at each operation and application stage. Australian cadmium sources are linked to the resources of zinc, lead, copper, iron, limestone and gypsum. A large accumulation of cadmium can result from on-site waste treatment at industrial facilities and from household-waste landfills. Atmospheric deposition, phosphate fertilisers and animal manure have been identified as other significant inputs to agricultural soils, especially in polluted areas near industrial facilities. The measurement, analysis and control of cadmium flows in Australia are therefore considered on the basis of these abundant resources, certain commodities and agricultural inputs. The SFA presented is a useful tool in the development of a cadmium management policy suited to the Australian economy and the receiving environment.
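The bookkeeping at the heart of MFA/SFA is a mass balance per compartment: net accumulation (stock change) equals inflows minus outflows. A minimal sketch, with made-up compartments and flow values rather than the article's actual Australian data:

```python
# Minimal substance-flow-analysis (SFA) bookkeeping sketch.
# Compartments and flow values are illustrative placeholders, not the
# article's actual Australian cadmium figures.

flows = {  # (source, destination): cadmium flow, tonnes/year
    ("mining", "smelting"): 100.0,
    ("smelting", "products"): 60.0,
    ("smelting", "onsite_waste"): 40.0,
    ("products", "landfill"): 25.0,
}

def stock_change(compartment, flows):
    """Net accumulation in a compartment: inflows minus outflows."""
    inflow = sum(v for (src, dst), v in flows.items() if dst == compartment)
    outflow = sum(v for (src, dst), v in flows.items() if src == compartment)
    return inflow - outflow

# Smelting is balanced (100 in, 100 out); products accumulate 35 t/yr.
```

Iterating `stock_change` over every compartment and checking which balances close is a common way to locate gaps in the collected data.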

2.
One general goal of sensitivity or uncertainty analysis of a computer model is the determination of which inputs most influence the outputs of interest. Simple methodologies based on randomly sampled input values are attractive because they require few assumptions about the nature of the model. However, when the number of inputs is large and the computational effort required per model evaluation is significant, methods based on more complex assumptions, analysis techniques, and/or sampling plans may be preferable. This paper will review some approaches that have been proposed for input screening, with an emphasis on the balance between assumptions and the number of model evaluations required.
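One of the simplest screening approaches of the kind the review covers is random sampling followed by a ranking of input–output correlations; it needs no assumptions beyond the ability to run the model. A rough sketch (the uniform input distributions and the toy model are assumptions for illustration):

```python
import random

def screen_inputs(model, n_inputs, n_samples=500, seed=0):
    """Rank inputs by absolute sample correlation with the model output.
    A crude screening sketch; assumes inputs are i.i.d. uniform(0, 1)."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    y = [model(x) for x in X]
    ybar = sum(y) / n_samples
    scores = []
    for j in range(n_inputs):
        xj = [row[j] for row in X]
        xbar = sum(xj) / n_samples
        cov = sum((a - xbar) * (b - ybar) for a, b in zip(xj, y))
        vx = sum((a - xbar) ** 2 for a in xj)
        vy = sum((b - ybar) ** 2 for b in y)
        scores.append(abs(cov) / (vx * vy) ** 0.5 if vx * vy > 0 else 0.0)
    return scores

# Toy model: x0 dominates, x2 is inert.
scores = screen_inputs(lambda x: 10 * x[0] + x[1], 3)
```

The cost is one model run per sample, which is exactly the trade-off the paper discusses: cheap and assumption-light, but wasteful when each evaluation is expensive.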

3.
This article describes a finite element-based formulation for the statistical analysis of the response of stochastic structural composite systems whose material properties are described by random fields. A first-order technique is used to obtain the second-order statistics of the structural response, considering means and variances of the displacement and stress fields of plate or shell composite structures. Propagation of uncertainties depends on sensitivities, which measure the effects of parameter variations. The adjoint variable method is used to obtain the sensitivity matrix; this method is appropriate for composite structures because of the large number of random input parameters. Dominant effects on the stochastic characteristics are studied by analysing the influence of different random parameters. In particular, a study of the influence of anisotropy on uncertainty propagation in angle-ply composites is carried out based on the proposed approach.
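For a scalar response, the first-order step described above reduces to propagating the input covariance through the sensitivity vector: Var[u] ≈ sᵀCs, with s evaluated at the mean inputs. A small sketch with illustrative numbers (the sensitivities and covariance are not from the article):

```python
# First-order second-moment (FOSM) propagation sketch: with sensitivities
# s = du/dtheta evaluated at the input means, Var[u] ~= s^T C s for input
# covariance matrix C. Numbers are illustrative, not from the article.

def first_order_stats(u_mean, sens, cov):
    """Return (mean, variance) of a scalar response to first order."""
    n = len(sens)
    var = sum(sens[i] * cov[i][j] * sens[j]
              for i in range(n) for j in range(n))
    return u_mean, var

sens = [2.0, -1.0]                   # response sensitivities du/dtheta_i
cov = [[0.04, 0.0], [0.0, 0.01]]     # independent inputs
mean, var = first_order_stats(5.0, sens, cov)
# var = 4 * 0.04 + 1 * 0.01 = 0.17
```

In the article's setting the sensitivity vector comes from the adjoint method, which is why a large number of random inputs adds little extra cost.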

4.
This paper evaluates the thermal behavior of a magnetic-Brayton-based parallel plate reciprocating active magnetic regenerator (AMR). A time-dependent, 2D model of the fluid flow and the coupled heat transfer between the working fluid and the solid refrigerant (gadolinium) is proposed. A hybrid calculation method, consisting of an analytical solution for the flow and a numerical solution for the thermal field, has been adopted. Results for the cooling capacity as a function of the temperature span and mass flow rate agree well with trends observed in experimental data and in other theoretical models available in the literature. The volume of fluid displaced through the channels during the isofield processes significantly influences AMR performance. For a cycle frequency of 1 Hz, the cycle-averaged cooling capacity reaches a maximum when the utilization factor is 0.1 and the displaced fluid volume equals 62% of the fluid volume of the AMR.

5.
For a risk assessment model, the uncertainty in input parameters is propagated through the model and leads to uncertainty in the model output. The study of how the uncertainty in the output of a model can be apportioned to the uncertainty in the model inputs is the task of sensitivity analysis. Saltelli [Sensitivity analysis for importance assessment. Risk Analysis 2002;22(3):579-90] pointed out that a good sensitivity indicator should be global, quantitative and model free. Borgonovo [A new uncertainty importance measure. Reliability Engineering and System Safety 2007;92(6):771-84] extended these three requirements with a fourth feature, moment independence, and proposed a new sensitivity measure, δi. It evaluates the influence of the input uncertainty on the entire output distribution without reference to any specific moment of the model output. In this paper, a new computational method for δi is proposed. It is conceptually simple and easier to implement. The feasibility of the new method is demonstrated by applying it to two examples.
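Borgonovo's δi is half the expected L1 distance between the unconditional output density and the output density conditional on an input. A crude histogram-based estimator in that spirit (the binning choices, sample size, and toy model are all illustrative assumptions, not the paper's proposed method):

```python
import random

def delta_moment_independent(model, n=2000, xbins=10, ybins=20, seed=1):
    """Crude histogram estimator of Borgonovo's delta for each input of a
    two-input model with i.i.d. uniform(0, 1) inputs. Illustrative only."""
    rng = random.Random(seed)
    X = [(rng.random(), rng.random()) for _ in range(n)]
    Y = [model(x) for x in X]
    lo, hi = min(Y), max(Y)
    width = (hi - lo) / ybins or 1.0

    def hist(ys):
        h = [0] * ybins
        for y in ys:
            h[min(int((y - lo) / width), ybins - 1)] += 1
        return [c / len(ys) for c in h]

    p_y = hist(Y)  # unconditional output histogram
    deltas = []
    for i in range(2):
        shift = 0.0
        for b in range(xbins):
            ys = [y for x, y in zip(X, Y) if int(x[i] * xbins) == b]
            if ys:  # L1 distance between conditional and unconditional
                shift += sum(abs(a - c) for a, c in zip(p_y, hist(ys)))
        deltas.append(0.5 * shift / xbins)
    return deltas

# x0 drives the output; delta for x0 should exceed delta for x1.
d = delta_moment_independent(lambda x: x[0] ** 2 + 0.1 * x[1])
```

Because δi compares whole distributions, it stays meaningful even when variance-based measures miss higher-moment effects, which is the motivation for moment independence.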

6.

This study applies material flow analysis (MFA) of cadmium to evaluate municipal solid waste (MSW) management policy in Taiwan.

In 2002, the Cd flow in Taiwan was approximately 441.2 tons, contributed mainly by the Cd in nickel-cadmium batteries (60.15%) and plastics (33.45%). Of this, 415.6 tons of Cd entered the MSW treatment system from consumers. However, aside from the Cd emitted into the atmosphere (0.4 tons) and the Cd in incinerator ash (15.1 tons), only 5.2 tons of Cd were recycled, a recycling rate of 1.2%. Moreover, instead of being used effectively, the recycled Cd is often casually deposited in the environment. Taiwan's Cd MFA data indicate that MSW treatment is performed mainly by incineration, which does not conform to the main principles of sustainable development. To achieve a more sustainable policy, recycling and/or restriction of nickel-cadmium batteries and plastics are therefore key issues.
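The quoted recycling rate is simply the recovered mass divided by the mass entering the MSW stream, using the figures in the abstract:

```python
# Recycling rate from the abstract's figures: recovered Cd divided by
# the Cd entering MSW treatment from consumers.
cd_to_msw = 415.6       # tonnes of Cd entering the MSW system
cd_recycled = 5.2       # tonnes of Cd recovered
recycling_rate = cd_recycled / cd_to_msw   # ~0.0125, i.e. about 1.2%
```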

7.
8.
Evaluation of the measurement uncertainty of cadmium content in rice
The component uncertainties in the determination of cadmium in rice by graphite furnace atomic absorption spectrometry were analysed and the measurement uncertainty was evaluated. The results show that, for a cadmium content of 0.18 mg/kg in a rice sample, the expanded uncertainty is 0.01 mg/kg (k = 2). The uncertainty arises mainly from the least-squares fitting of the standard working curve used to obtain the sample concentration and from random effects in the measurement process.
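In GUM-style evaluations such as this one, the component standard uncertainties are combined in quadrature and the expanded uncertainty is the combined value times the coverage factor k. A sketch with illustrative component values (the article's actual uncertainty budget is not reproduced here):

```python
# Sketch of combining uncertainty components and expanding with k = 2,
# as in GUM-style evaluations. Component values are illustrative, not
# the article's actual budget.

def expanded_uncertainty(components, k=2):
    """Root-sum-square of standard-uncertainty components, times k."""
    u_c = sum(u ** 2 for u in components) ** 0.5
    return k * u_c

U = expanded_uncertainty([0.004, 0.003])  # mg/kg components (assumed)
# u_c = 0.005 mg/kg, so U = 0.010 mg/kg at k = 2
```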

9.
Three applications of sampling-based sensitivity analysis in conjunction with evidence theory representations for epistemic uncertainty in model inputs are described: (i) an initial exploratory analysis to assess model behavior and provide insights for additional analysis; (ii) a stepwise analysis showing the incremental effects of uncertain variables on complementary cumulative belief functions and complementary cumulative plausibility functions; and (iii) a summary analysis showing a spectrum of variance-based sensitivity analysis results that derive from probability spaces that are consistent with the evidence space under consideration.

10.
The protonated Sargassum muticum seaweed was studied as a possible biosorbent for cadmium removal in a fixed-bed column. The experiments were designed to determine the effect of flow rate (0.42, 5, 10 and 20 mL min⁻¹) and bed height (0.6 and 15.3 cm for the lowest flow rate; 7.4, 13 and 16.6 cm for the others) on breakthrough curve behaviour. The breakthrough and exhaustion times increased as the flow rate decreased and the bed height increased. The maximum cadmium uptake capacity, obtained from the area below the adsorbed cadmium concentration versus time curves, remained practically constant with bed depth and flow rate. The bed depth service time (BDST) model was applied to analyse the experimental data and determine the characteristic process parameters. The optimal (lowest) sorbent usage rate was evaluated at 2 min contact time, and the minimum bed heights necessary to prevent the effluent concentration from exceeding 0.02 mg L⁻¹ at zero time were 5.3, 6.9 and 7.5 cm for flow rates of 5, 10 and 20 mL min⁻¹, respectively. Several empirical models proposed in the literature (the Bohart–Adams, Yan, Belter and Chu models) were investigated to obtain the best fit of the column data, describing the breakthrough curves in a simple manner. A correlation between model parameters and the process variables was attempted.
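The BDST model predicts that the service time t at a given breakthrough concentration is linear in bed depth Z, t = aZ + b, so the minimum usable bed height is the depth where the fitted line crosses t = 0. A least-squares sketch with made-up data points (not the study's measurements):

```python
# Bed depth service time (BDST) sketch: service time t is linear in bed
# depth Z, t = a*Z + b; the minimum usable bed height is the depth where
# t = 0, i.e. Z0 = -b/a. Data points below are illustrative, not the
# study's measurements.

def fit_line(zs, ts):
    """Ordinary least-squares slope and intercept."""
    n = len(zs)
    zbar, tbar = sum(zs) / n, sum(ts) / n
    a = (sum((z - zbar) * (t - tbar) for z, t in zip(zs, ts))
         / sum((z - zbar) ** 2 for z in zs))
    return a, tbar - a * zbar

zs = [7.4, 13.0, 16.6]        # bed heights, cm (assumed)
ts = [20.0, 120.0, 185.0]     # breakthrough service times, min (assumed)
a, b = fit_line(zs, ts)
z_min = -b / a                # depth giving zero service time
```

The slope a is proportional to the sorption capacity per unit bed volume, which is how the characteristic process parameters are extracted from the fit.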

11.
Catalyst emissions from fluidised catalytic cracking units can significantly affect the environmental compliance of oil refineries. Traditionally it has been assumed that gas velocity and fine particles significantly affect emission levels. Using a simple fluidised bed model, a sensitivity analysis was conducted to identify the key operating parameters that influence emission rates. It was found that, in addition to velocity, density and mid-sized particles are the most influential factors for emission rates. Further work is needed to identify how these parameters can be altered during normal operations to reduce catalyst emissions.

12.
A cumulative distribution function (CDF)-based method has been used to perform sensitivity analysis on a computer model that conducts total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, and to identify the most influential input parameters affecting the output of the model. The performance assessment computer model, referred to as the TPA code, was recently developed by the US Nuclear Regulatory Commission (NRC) and the Center for Nuclear Waste Regulatory Analyses (CNWRA) to evaluate the performance assessments conducted by the US Department of Energy (DOE) in support of their license application. The model uses a probabilistic framework implemented through Monte Carlo or Latin hypercube sampling (LHS) to permit the propagation of uncertainties associated with model parameters, conceptual models, and future system states. The problem involves more than 246 uncertain parameters (also referred to as random variables), of which those that have significant influence on the response, or on the uncertainty of the response, must be identified and ranked. The CDF-based approach identifies and ranks important parameters based on the sensitivity of the response CDF to the input parameter distributions. Based on a reliability sensitivity concept [AIAA Journal 32 (1994) 1717], the response CDF is defined as the integral of the joint probability density function of the input parameters, with a domain of integration defined by a subset of the samples. The sensitivity analysis does not require explicit knowledge of any specific relationship between the response and the input parameters, and the sensitivity depends on the magnitude of the response. The method allows sensitivity to be calculated over a wide range of the response and is not limited to the mean value.
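One way to picture a CDF-based sensitivity is to compare each input's sample distribution against its distribution restricted to the subset of runs where the response falls in a region of interest. The sketch below (a max-CDF-gap statistic over i.i.d. uniform inputs) is an illustration in that spirit, not the TPA code's actual algorithm:

```python
import random

def cdf_sensitivity(model, n_inputs, threshold, n=1000, seed=2):
    """Rank inputs by how much their empirical distribution differs in the
    subset of samples with response below `threshold` (max-CDF-gap).
    Monte Carlo sketch in the spirit of CDF-based sensitivity analysis,
    not the TPA implementation."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    keep = [x for x in X if model(x) < threshold]
    scores = []
    for j in range(n_inputs):
        all_j = sorted(row[j] for row in X)
        sub_j = sorted(row[j] for row in keep)
        grid = [i / 20 for i in range(21)]

        def ecdf(vals, t):
            return sum(v <= t for v in vals) / len(vals)

        # max gap between the full and conditional empirical CDFs
        scores.append(max(abs(ecdf(all_j, t) - ecdf(sub_j, t)) for t in grid))
    return scores

# x0 controls the response; x1 is inert, so its score stays near zero.
s = cdf_sensitivity(lambda x: x[0], 2, threshold=0.5)
```

An influential input shows a large gap (its distribution shifts inside the subset); an inert one shows only sampling noise, so the scores provide a ranking without assuming any functional form.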

13.
This paper focuses on sensitivity analysis of results from computer models in which both epistemic and aleatory uncertainties are present. Sensitivity is defined in the sense of “uncertainty importance” in order to identify and to rank the principal sources of epistemic uncertainty. A natural and consistent way to arrive at sensitivity results in such cases would be a two-dimensional or double-loop nested Monte Carlo sampling strategy in which the epistemic parameters are sampled in the outer loop and the aleatory variables are sampled in the nested inner loop. However, the computational effort of this procedure may be prohibitive for complex and time-demanding codes. This paper therefore suggests an approximate method for sensitivity analysis based on particular one-dimensional or single-loop sampling procedures, which require substantially less computational effort. From the results of such sampling one can obtain approximate estimates of several standard uncertainty importance measures for the aleatory probability distributions and related probabilistic quantities of the model outcomes of interest. The reliability of the approximate sensitivity results depends on the effect of all epistemic uncertainties on the total joint epistemic and aleatory uncertainty of the outcome. The magnitude of this effect can be expressed quantitatively and estimated from the same single-loop samples. The higher it is, the more accurate the approximate sensitivity results will be. A case study, which shows that the results from the proposed approximate method are comparable to those obtained with the full two-dimensional approach, is provided.
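The full two-dimensional scheme the paper takes as its reference point can be sketched directly: sample epistemic parameters in an outer loop and, for each draw, estimate an aleatory statistic in an inner loop, at a cost of n_outer × n_inner model runs. A toy sketch (the model and distributions are illustrative placeholders):

```python
import random

def double_loop(model, n_epistemic=50, n_aleatory=200, seed=3):
    """Two-dimensional nested Monte Carlo sketch: for each outer draw of
    an epistemic parameter, estimate the inner aleatory mean of the model.
    Returns one conditional mean per epistemic draw. The model and the
    uniform/normal distributions are illustrative placeholders."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_epistemic):
        theta = rng.uniform(0.5, 1.5)               # epistemic parameter
        inner = [model(theta, rng.gauss(0.0, 1.0))  # aleatory variable
                 for _ in range(n_aleatory)]
        means.append(sum(inner) / n_aleatory)
    return means

# Toy model: output = theta + 0.1 * aleatory noise. The spread of the
# conditional means then reflects the epistemic uncertainty in theta.
means = double_loop(lambda th, a: th + 0.1 * a)
```

The 50 × 200 = 10,000 model runs here illustrate why the paper's single-loop approximation, which needs only one pass of samples, is attractive for expensive codes.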

14.
Risk analysis is a tool for investigating and reducing uncertainty related to outcomes of future activities. Probabilities are key elements in risk analysis, but confusion about interpretation and use of probabilities often weakens the message from the analyses. Under the predictive, epistemic approach to risk analysis, probabilities are used to express uncertainty related to future values of observable quantities like the number of fatalities or monetary loss in a period of time. The procedure for quantifying this uncertainty in terms of probabilities is, however, not obvious. Examples of topics from the literature relevant in this discussion are use of expert judgement, the effect of so-called heuristics and biases, application of historical data, dependency and updating of probabilities. The purpose of this paper is to discuss and give guidelines on how to quantify uncertainty in the perspective of these topics. Emphasis is on the use of models and assessment of uncertainties of similar quantities.

15.
This paper presents an integrated life cycle methodology for mapping the flows of pollutants in the urban environment, following the pollutants from their sources through the environment to receptors. The sources of pollution that can be considered by this methodology include products, processes and human activities. Life cycle assessment (LCA), substance flow analysis (SFA), fate and transport modelling (F&TM) and geographical information systems (GIS) have been used as tools for these purposes. A mathematical framework has been developed to enable linking and integration of LCA and SFA. The main feature of the framework is a distinction between the foreground and background systems, where the foreground system includes pollution sources of primary interest in the urban environment and the background comprises all other supporting activities occurring elsewhere in the life cycle. Applying the foreground–background approach, SFA is used to track specific pollutants in the urban environment (foreground) from different sources. LCA is applied to quantify emissions of a number of different pollutants and their impacts in both the urban (foreground) and in the wider environment (background). Therefore, two “pollution vectors” are generated: one each by LCA and SFA. The former comprises all environmental burdens or impacts generated by a source of interest on a life cycle basis and the latter is defined by the flows of a particular burden (substance or pollutant) generated by different sources in the foreground. The vectors are related to the “unit of analysis”, which represents a modified functional unit used in LCA and defines the level of activity of the pollution source of interest. A further methodological development has also included integration of LCA and SFA with F&TM and GIS. A four-step methodology is proposed to enable spatial mapping of pollution from sources through the environment to receptors.
The approach involves the use of GIS to map sources of pollution, and application of the LCA–SFA approach to define sources of interest and quantify environmental burdens and impacts on a life-cycle basis. This is followed by F&TM to track pollution through the environment and by the quantification of site-specific impacts on human health and the environment. The application of the integrated methodology and the mathematical framework is illustrated by a hypothetical example involving four pollution sources in a city: incineration of MSW, manufacture of PVC, car travel and truck freight.

16.
In this paper we propose and test a generalisation of the method originally proposed by Sobol’, and recently extended by Saltelli, to estimate the first-order and total effect sensitivity indices. Exploiting the symmetries and the dualities of the formulas, we obtain additional estimates of first-order and total indices at no extra computational cost. We test the technique on a case study involving the construction of a composite indicator of e-business readiness, which is part of the initiative “e-Readiness of European enterprises” of the European Commission “e-Europe 2005” action plan. The method is used to assess the contribution of uncertainties in (a) the weights of the component indicators and (b) the imputation of missing data on the composite indicator values for several European countries.
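The baseline Sobol'–Saltelli scheme being generalised here estimates first-order and total indices from paired sample matrices A, B and the hybrids A_B^(i). A sketch of that standard scheme (not the paper's extended symmetric estimators), with i.i.d. uniform inputs and a toy additive model whose analytic indices are S = T = (0.8, 0.2):

```python
import random

def sobol_indices(model, n_inputs, n=8192, seed=4):
    """Saltelli-style estimators of first-order (S_i) and total (T_i)
    Sobol' indices from A, B and A_B^i sample matrices, inputs i.i.d.
    uniform(0, 1). A sketch of the standard scheme, not the paper's
    extended symmetric estimators."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mu = sum(fA + fB) / (2 * n)
    var = sum((v - mu) ** 2 for v in fA + fB) / (2 * n)
    S, T = [], []
    for i in range(n_inputs):
        # A_B^i: matrix A with column i replaced by column i of B
        AB = [A[j][:i] + [B[j][i]] + A[j][i + 1:] for j in range(n)]
        fAB = [model(x) for x in AB]
        S.append(sum(b * (ab - a)
                     for a, b, ab in zip(fA, fB, fAB)) / n / var)
        T.append(sum((a - ab) ** 2
                     for a, ab in zip(fA, fAB)) / (2 * n) / var)
    return S, T

# Additive toy model 2*x0 + x1: analytically S = T = (0.8, 0.2).
S, T = sobol_indices(lambda x: 2 * x[0] + x[1], 2)
```

The total cost is n(k + 2) model runs for k inputs; the paper's contribution is to squeeze extra index estimates out of the same runs via the symmetries of the formulas.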

17.
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. This presentation discusses and illustrates the conceptual and computational basis of QMU in analyses that use computational models to predict the behavior of complex systems. The following topics are considered: (i) the role of aleatory and epistemic uncertainty in QMU, (ii) the representation of uncertainty with probability, (iii) the probabilistic representation of uncertainty in QMU analyses involving only epistemic uncertainty, and (iv) the probabilistic representation of uncertainty in QMU analyses involving aleatory and epistemic uncertainty.

18.
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
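Two of the techniques named above are easy to sketch: Latin hypercube sampling stratifies each input into n equiprobable cells, and rank transformations replace values by their ranks before computing correlations, which makes the measures robust to monotone nonlinearity. A self-contained sketch (the toy response is illustrative, not the WIPP model):

```python
import random

def lhs(n, k, seed=5):
    """Latin hypercube sample of n points in k dimensions on (0, 1):
    each dimension is split into n equiprobable cells, one point per cell,
    with cells paired across dimensions by random shuffling."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        cells = list(range(n))
        rng.shuffle(cells)
        cols.append([(c + rng.random()) / n for c in cells])
    return [[cols[j][i] for j in range(k)] for i in range(n)]

def rank_corr(x, y):
    """Spearman rank correlation (assumes no ties, as for continuous data)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # same for any permutation
    return cov / var

X = lhs(200, 2)
y = [x[0] ** 3 + 0.05 * x[1] for x in X]   # monotone toy response
r0 = rank_corr([x[0] for x in X], y)       # near 1 despite the cubing
```

A plain Pearson correlation would understate the strength of the cubic relationship; the rank transform recovers it, which is why rank-based regression and partial correlations are standard in these assessments.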

19.
Generic simulation models, denoted AGM-I and AGM-II, for performance and control analysis of variable refrigerant flow (VRF) systems were developed. Firstly, simulation models from the component level to the whole VRF system were addressed. The simulation models were then validated using experimental data reported in the open literature. The average percentage errors in predicting system cooling capacity, energy consumption and COP are 4.69%, 4.64% and 1.19%, respectively. Finally, tests were carried out. Results show that the developed models are computationally fast and independent of the number of evaporators. From the computation speed point of view, AGM-I is more suitable for multi-evaporator VRF systems, while AGM-II is more suitable for single-evaporator VRF systems. Test results also show that the models respond well to varying conditions, including evaporator inlet air temperature, outdoor air temperature, EEV opening and compressor speed, all of which are important variables for control analysis.

20.
Compensation of flow maldistribution in multi-channel fin-and-tube evaporators for residential air-conditioning is investigated by numerical modeling. The considered sources of maldistribution are the distribution of the liquid and vapor phases in the distributor and non-uniform airflow distribution. Fin-and-tube heat exchangers usually have predefined circuitry; however, the evaporator model is simplified to straight tubes in order to perform a generic investigation. The compensation of flow maldistribution is performed by controlling the superheat in the individual channels. Furthermore, the effect of combinations of individual maldistribution sources is investigated for different evaporator sizes and outdoor temperatures. It is shown that a decrease in cooling capacity and coefficient of performance caused by flow maldistribution can be compensated by the control of individual channel superheat. Alternatively, a larger evaporator may be used.
