Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
The method used by archaeologists for excavation and recording of stratigraphic evidence, within trenches with or without archaeological remains, can potentially be useful to contaminated land consultants (CLCs). The implementation of archaeological practice in contaminated land assessments (CLAs) is not meant to be an exercise in data overkill; neither should it increase costs. Rather, we suggest that if the excavation and recording of the stratigraphy by a trained archaeologist is followed by in-situ chemical characterisation, then much of the uncertainty associated with current field sampling practices may be removed. This is because built into the chemical stratigraphy is the temporal and spatial relationship between different parts of the site, reflecting the logic behind the distribution of contamination. An archaeological recording with chemical stratigraphy approach to sampling may provide ‘one method fits all’ for potentially contaminated land sites (CLSs), just as archaeological characterisation of the stratigraphic record provides ‘one method fits all’ for all archaeological sites irrespective of period (prehistoric to modern) or type (rural, urban or industrial). We also suggest that there may be practical and financial benefits to be gained by pulling together expertise and resources from different disciplines, not simply at the assessment phase but also in subsequent phases of contaminated land improvement.

2.
Real-time PCR absolute quantification applications are becoming more common in the recreational and drinking water quality industries. Many methods rely on the use of standard curves to estimate DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, generating a standard curve for each qPCR experiment set-up can be expensive and time consuming, especially for studies with large numbers of unknown samples. As a result, many researchers have adopted a master calibration strategy in which a single curve is derived from DNA standard measurements generated across multiple instrument runs. However, a master curve can inflate the uncertainty associated with intercept and slope parameters and decrease the accuracy of unknown sample DNA target concentration estimates. Here we report two alternative strategies, termed ‘pooled’ and ‘mixed’, for generating calibration equations from absolute standard curves, which can help reduce the cost and time of laboratory testing as well as the uncertainty in calibration model parameter estimates. In this study, four different strategies for generating calibration models were compared based on a series of repeated experiments for two different qPCR assays using a Markov chain Monte Carlo method. The hierarchical Bayesian approach allowed for the comparison of uncertainty in intercept and slope model parameters and the optimization of experiment design. The data suggest that the ‘pooled’ model can reduce uncertainty in both slope and intercept parameter estimates compared with the traditional single-curve approach. In addition, the ‘mixed’ model achieved uncertainty estimates similar to the ‘single’ model while increasing the number of available reaction wells per instrument run.
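The ‘pooled’ strategy can be illustrated with a minimal sketch: standards from several instrument runs are combined into one least-squares calibration of Cq against log10(copies), which is then inverted to quantify unknowns. The run layout and Cq values below are invented for illustration, not data from the study.

```python
# Sketch of a 'pooled' qPCR calibration: standards from three instrument
# runs are combined into a single least-squares fit of Cq vs log10(copies).
def fit_line(xs, ys):
    """Ordinary least squares: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical standards: (log10 copies, measured Cq), three runs.
runs = [
    [(1, 33.1), (3, 26.4), (5, 19.8)],
    [(1, 33.4), (3, 26.7), (5, 20.1)],
    [(1, 32.9), (3, 26.2), (5, 19.6)],
]

# Pooled model: one curve fitted to all standard wells at once.
xs = [x for run in runs for x, _ in run]
ys = [y for run in runs for _, y in run]
intercept, slope = fit_line(xs, ys)

def quantify(cq):
    """Estimate log10(copies) for an unknown sample from its Cq."""
    return (cq - intercept) / slope

print(round(quantify(23.0), 2))
```

With more standard wells behind a single pair of parameters, the pooled fit tightens the interval around the slope and intercept relative to any single-run curve.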

3.
The Build-Operate-Transfer (BOT) project delivery method increases the probability that public construction works will commence by drawing on private investment. Public construction works worldwide that adopt the BOT model as their project delivery method are gradually increasing. Although many BOT projects have been implemented, some encounter major obstacles at various stages. This study attempts to identify the causes of delay in the various stages of BOT projects. Opinions of BOT participants were solicited using two questionnaire surveys, and the outcomes were analyzed using traditional statistical methods and structural equation modeling. Results reveal that ‘negotiation and signing of the concession agreement’ is the most critical stage, in which ‘improper contract planning’, ‘debt problems’ and ‘uncertainty on political issues and government-finished items’ are the most significant causes of delay. The identified causes of delay can be used to prevent the postponement of future BOT projects.

4.
This paper reports a study of subjective preference for the daylit indoor environment of a residential room using conjoint analysis, a well-established method for analyzing the mutual relationships among different attributes. Seven influential attributes were selected from the viewpoint of daylight performance assessment: ‘general brightness’, ‘desktop brightness’, ‘perceived glare’, ‘sunlight penetration’, ‘quality of view’, ‘user friendliness of shading control’ and ‘impact on energy’. Each has two levels. A total of eight combinations (profiles) of attributes at various levels were established using a fractional factorial design. Subjects were asked to rank-order the eight profiles according to their preference for the daylit environment of a residential room. The study aims to establish the relative impact of the seven selected attributes on overall daylight performance and to develop an organized assessment method for a residential daylit environment. Conjoint analysis found that the seven attributes rank in importance in the order ‘quality of view’, ‘general brightness’, ‘impact on energy’, ‘user friendliness of shading control’, ‘perceived glare’, ‘desktop brightness’ and ‘sunlight penetration’.
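For a two-level design like this one, conjoint importance is often derived from part-worth ranges: the gap in mean preference between an attribute's two levels, normalized across attributes. The sketch below uses three of the seven attributes and invented preference scores, purely to show the mechanics.

```python
# Toy conjoint importance calculation for a two-level fractional factorial:
# each attribute's range = mean preference at high level - mean at low level;
# relative importance = that range as a share of the sum of all ranges.
profiles = [
    # (attribute levels, mean preference score); columns: view, brightness,
    # energy (1 = high level, 0 = low level). All scores are invented.
    ((1, 1, 1), 8.0),
    ((1, 1, 0), 7.0),
    ((1, 0, 1), 6.5),
    ((1, 0, 0), 5.5),
    ((0, 1, 1), 5.0),
    ((0, 1, 0), 4.0),
    ((0, 0, 1), 3.5),
    ((0, 0, 0), 2.5),
]
names = ["quality of view", "general brightness", "impact on energy"]

ranges = []
for i, name in enumerate(names):
    hi = [s for levels, s in profiles if levels[i] == 1]
    lo = [s for levels, s in profiles if levels[i] == 0]
    ranges.append(sum(hi) / len(hi) - sum(lo) / len(lo))

total = sum(ranges)
importance = {n: r / total for n, r in zip(names, ranges)}
print(importance)
```

In this invented data set ‘quality of view’ dominates, mirroring the ordering reported in the abstract.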

5.
Many Dutch ecosystems, whether terrestrial, aquatic or sediment-based, are diffusely polluted by mixtures of contaminants whose concentrations often exceed regulatory Safe Values or other generic quality criteria. This situation has unclear consequences, especially when local authorities are confronted with such pollution. Water managers are frequently in doubt whether their water systems satisfy the criteria for ‘Good Ecological Status’ as defined in the EU's Water Framework Directive. In the case of soils, soil users may wonder whether the soil is ‘fit for use’. In the case of nature conservation, the problem is that protected species might suffer from toxic stress. Official regulations in these cases call for appropriate action, but it is unclear whether the diffuse exposure causes adverse effects, and what the action should be. This paper proposes and discusses a site-oriented approach to the risk assessment of diffusely contaminated sites that can be used in addition to the compound-oriented policies from which the abovementioned generic quality criteria were derived. The site-oriented approach can help reduce site-specific risks of diffuse contamination. Reflecting on the results of a large Dutch research effort on systems-oriented ecotoxicological effects, the conclusion is drawn that the exposure and effects of diffuse pollution are site-specific in kind and magnitude, determined by the local combination of source-pathway-receptor issues, and often not clearly detectable (though often present). To assist in risk management, higher-tier methods can address various aspects, such as local mixture composition, bioavailability, and the sensitivity of local species groups. Higher-tier risk assessment methods have as yet been developed mainly for cases of serious contamination, such as pesticide management and Risk-Based Land Management.
For diffuse pollution, site-specific information can also be used to obtain site-specific exposure and impact information, while practical and ecology-based approaches can be introduced to obtain an integrated overview of the significance of site contamination and to derive options for managing and reducing the local risks. These issues are discussed against the background of current major policy shifts, in the Netherlands and elsewhere, from a pollutant-oriented assessment towards an additional ecological and site-oriented assessment. The latter is most clearly represented in the Good Ecological Status aim of the EU Water Framework Directive. The paper assesses, integrates and discusses the results of the Dutch research effort in this policy context.

6.
A new method is presented to investigate whether the laboratory test data used in the development of vehicle emission models adequately reflect emission distributions, and in particular the influence of high-emitting vehicles. The method includes the computation of a ‘high-emitter’ or ‘emission distribution’ correction factor for use in emission inventories. To make a valid comparison we control for a number of factors such as vehicle technology, measurement technique and driving conditions, and use a variable called the ‘Pollution Index’ (g/kg). Our investigation into one vehicle class has shown that laboratory and remote sensing data are substantially different for CO, HC and NOx emissions, both in terms of their distributions and in their mean and 99th-percentile values. Given that the remote sensing data have larger mean values for these pollutants, the analysis suggests that high-emitting vehicles may not be adequately captured in the laboratory test data. The paper presents two different methods for the computation of weighted correction factors for use in emission inventories based on laboratory test data: one using mean values for six ‘power bins’ and one using multivariate regression functions. The computed correction factors are substantial, leading to an increase in laboratory-based emission factors by a factor of 1.7-1.9 for CO, 1.3-1.6 for HC and 1.4-1.7 for NOx (the actual value depending on the method). However, it is also clear that there are points that require further examination before these correction factors should be applied. One important step will be to include a comparison with other types of validation studies, such as tunnel studies and near-road air quality assessments, to examine whether these correction factors are confirmed. If so, we would recommend using the correction factors in emission inventories for motor vehicles.
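The first of the two computation methods, mean values per ‘power bin’, can be sketched as a weighted sum of per-bin ratios of remote-sensing to laboratory mean Pollution Index. The bin shares and PI values below are invented; only the structure of the calculation follows the description above.

```python
# Sketch of a power-bin correction factor: for each engine power bin,
# compare mean remote-sensing and laboratory Pollution Indices (g/kg),
# then weight the per-bin ratios by the bin's share of driving time.
# All numbers are illustrative, not from the study.
bins = [
    # (time share, mean lab PI g/kg, mean remote-sensing PI g/kg)
    (0.30, 2.0, 3.2),
    (0.25, 3.0, 5.1),
    (0.20, 4.5, 7.6),
    (0.15, 6.0, 10.3),
    (0.06, 8.0, 14.5),
    (0.04, 11.0, 20.0),
]

# Weighted correction factor applied to laboratory-based emission factors.
correction = sum(w * rs / lab for w, lab, rs in bins)
print(round(correction, 2))
```

A value above 1 indicates that the laboratory data understate real-world emissions, consistent with the 1.3-1.9 range reported in the abstract.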

7.
The Unified Bioaccessibility Method (UBM), which simulates the fluids of the human gastrointestinal tract, was used to assess the oral bioaccessibility of Cr in 27 Glasgow soils. These included several contaminated with Cr(VI), the most toxic form of Cr, from the past disposal of chromite ore processing residue (COPR). The extraction was employed in conjunction with the subsequent determination of bioaccessible Cr by ICP-OES and Cr(VI) by the diphenylcarbazide complexation colorimetric procedure. In addition, Cr(III)-containing species were determined by (i) HPLC-ICP-MS and (ii) ICP-OES analysis of gel-electrophoretically separated components of colloidal and dissolved fractions from centrifugal ultrafiltration of extracts. Similar analytical procedures were applied to the determination of Cr and its species in extracts of the < 10 μm fraction of soils subjected to a simulated lung fluid test to assess the inhalation bioaccessibility of Cr. The oral bioaccessibility of Cr was typically greater by a factor of 1.5 in the ‘stomach’ (pH ~ 1.2) compared with the ‘stomach + intestine’ (pH ~ 6.3) simulation. On average, excluding two COPR-contaminated soil samples, the oral bioaccessibility (‘stomach’) was 5% of total soil Cr and, overall, similar to the soil Cr(VI) concentration. Chromium(VI) was not detected in the extracts, a consequence of pH- and soil organic matter-mediated reduction in the ‘stomach’ to Cr(III)-containing species, identified as predominantly Cr(III)-humic complexes. Insertion of oral bioaccessible fraction data into the SNIFFER human health risk assessment model identified site-specific assessment criteria (for residential land without plant uptake) that were exceeded by the soil total Cr (3680 mg kg-1) and Cr(VI) (1485 mg kg-1) concentrations at only the most COPR-Cr(VI)-contaminated location. However, the presence of measurable Cr(VI) in the < 10 μm fraction of the two most highly Cr(VI)-contaminated soils demonstrated that inhalation of Cr(VI)-containing dust remains the most potentially harmful exposure route.
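The oral bioaccessible fraction underlying these results is a simple ratio. A minimal sketch, with invented concentrations chosen to mirror the reported ~5% bioaccessibility and ~1.5 stomach-to-intestine factor:

```python
# Bioaccessible fraction (BAF): UBM-extractable concentration divided by
# soil total concentration, expressed in percent. Values are illustrative,
# not measurements from the study.
def baf_percent(extractable_mg_kg, total_mg_kg):
    return 100.0 * extractable_mg_kg / total_mg_kg

total_cr = 400.0           # total soil Cr, mg/kg
stomach = 20.0             # 'stomach' phase extractable Cr, mg/kg
stomach_intestine = 13.0   # 'stomach + intestine' phase, mg/kg

print(baf_percent(stomach, total_cr))        # stomach-phase BAF in percent
print(round(stomach / stomach_intestine, 2)) # stomach : stomach+intestine ratio
```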

8.
Ideas and thinking about sustainability and sustainable development have permeated most disciplines and sectors over the last decades. The area of urban studies is no exception and has generated an impressive body of literature which aims to marry ‘sustainability’ and ‘urban development’ by grounding the many interpretations of sustainability in an urban setting. This has taken many forms and inspired a range of initiatives across the world, including ‘healthy cities’, ‘urban villages’, ‘millennium communities’ and the ‘mixed communities’ movement. Moreover, urban regeneration has come under considerable scrutiny as one of the core mechanisms for delivering sustainable urban development. At the most basic level, it can be argued that all urban regeneration contributes to a certain extent to sustainable development through the recycling of derelict land and buildings, reducing demand for peripheral development and facilitating the development of more compact cities. Yet whether urban regeneration has an effect on urban sustainability is an under-researched area. In addition, little is known about these impacts at the local level. This paper aims to extend our understanding in these areas of research. We do so by taking a closer look at three neighbourhoods in Salford, Newcastle and Merseyside. These neighbourhoods underwent urban regeneration under the Housing Market Renewal Programme (2003–2011), which aimed to ‘create sustainable urban areas and communities’ in the Midlands and North of England. Approximately 130 residents from the three areas were interviewed and a further 60 regeneration officials and local stakeholders consulted. The paper looks at the impact of urban regeneration on urban sustainability by examining whether interventions under the Housing Market Renewal Programme have helped urban areas and communities to become more sustainable. It also discusses impacts at the local level by probing into some of Housing Market Renewal's grounded ‘sustainability stories’ and looking at how change is perceived by local residents. Furthermore, it re-opens a window onto the Housing Market Renewal Programme and documents the three neighbourhoods within the wider context of scale and intervention across the whole programme.

9.
10.
The management of project risk is considered a key discipline by most organisations involved in projects. Best-practice project risk management processes are claimed to be self-evidently correct. However, project risk management involves a choice between which information is utilized and which is deemed to be irrelevant and hence excluded. Little research has been carried out to ascertain the manifestation of barriers to optimal project risk management such as ‘irrelevance’, the deliberate inattention of risk actors to risk. This paper presents the results of a qualitative study of IT project managers, investigating their reasons for deeming certain known risks to be irrelevant. The results both confirm and expand on Smithson’s [Smithson, M., 1989. Ignorance and Uncertainty. Springer-Verlag, New York] taxonomy of ignorance and uncertainty and, in particular, offer further context-related insights into the phenomenon of ‘irrelevance’ in project risk management. We suggest that coping with ‘irrelevance’ requires defence mechanisms, the effective management of relevance, and the setting of, and sticking to, priorities.

11.
The availability of innumerable intelligent building (IB) products, and the current dearth of inclusive building component selection methods, suggest that decision makers may be confronted with the quandary of forming a particular combination of components to suit the needs of a specific IB project. Despite this problem, few empirical studies have so far been undertaken to analyse the selection of IB systems and to identify key selection criteria for major IB systems. This study is designed to fill these research gaps. Two surveys, a general survey and an analytic hierarchy process (AHP) survey, were conducted to achieve these objectives. The general survey collected views from IB experts and practitioners to identify the perceived critical selection criteria, while the AHP survey was conducted to prioritize and assign importance weightings to the criteria identified in the general survey. Results generally suggest that each IB system is determined by a disparate set of selection criteria with different weightings. ‘Work efficiency’ is perceived to be the most important core selection criterion for the various IB systems, while ‘user comfort’, ‘safety’ and ‘cost effectiveness’ are also considered significant. Two sub-criteria, ‘reliability’ and ‘operating and maintenance costs’, are regarded as prime factors to be considered in selecting IB systems. The current study contributes to the industry and to IB research in at least two respects. First, it widens the understanding of the selection criteria, as well as their degree of importance, for IB systems. Second, it adopts a multi-criteria AHP approach, which is a new method for analysing and selecting building systems in IB. Further research could investigate the inter-relationships amongst the selection criteria.
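The AHP weighting step can be sketched as follows: pairwise Saaty-scale judgments are collected into a comparison matrix, and priority weights are approximated by normalized row geometric means (a standard approximation of the principal eigenvector). The 3x3 matrix below is a made-up example, not the survey data.

```python
# AHP priority weights via the row geometric-mean approximation.
import math

# Saaty-scale judgments: how much more important the row criterion is
# than the column criterion. Values are invented for illustration.
criteria = ["work efficiency", "user comfort", "cost effectiveness"]
A = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# Geometric mean of each row, then normalize to obtain weights.
geo = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo) for g in geo]
print({c: round(w, 3) for c, w in zip(criteria, weights)})
```

A full AHP study would also check the consistency ratio of each judgment matrix before accepting the weights.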

12.
This analysis explores the pattern of variation of the desired thermal sensation on the ASHRAE scale, applying the method of direct enquiry. Data are from studies of thermal comfort in university lectures and in selected dwellings. Respondents reported both their thermal sensation and the sensation they would have desired at that time. The data contain 868 comparisons of the actual and the desired sensation. On 57% of occasions the desired sensation was other than ‘neutral’. The respondents did not always desire the same sensation, and the mean desired sensation differed systematically among the respondents. The mean desired sensation depended to some extent on the actual sensation, there being a positive correlation in the region from ‘neutral’ to ‘warm’ and a negative correlation outside this region. Sensations on the ASHRAE scale are shown to have more than one meaning. Adjusting the ASHRAE scale to allow for the desired sensation yields different distributions of thermal comfort and different group-optimum temperatures. The adjustment should therefore be applied whenever the ASHRAE scale is used. The implications for thermal simulation and for energy use in buildings are considered.

13.
For want of appropriate and effectual methodological approaches or analytical instruments, the analysis of socio-economic phenomena can sometimes be problematic. As Frances et al. have aptly noted, any analysis of a social nature begs the question: ‘with what theoretical tools do we approach the analysis of events and processes?’ (p. 1). In land matters, for example, Malpezzi (1999b,c) has keenly observed that most studies on land reform have focused on the rural and agricultural sectors, at the expense of the urban sector. Arguably, the paucity of land policy/reform studies on the urban sector could be explained, to a large measure, by the fact that most of the studies carried out in this area apply theoretical frameworks or models that are highly abstract (e.g. neoclassical economic models) or too simplistic in approach. Such models or methodological frameworks may not be readily applicable or efficacious in explaining convoluted urban land market realities. The narrow range of alternative analytical tools for studying land market ‘events and processes’ (to use Frances et al.’s (1991) terminology) may well have hindered land policy/reform research in the urban sector. Against that background, this paper advances an eclectic, property-rights-based approach that is robust and versatile enough to have wide application. An empirical study conducted under the framework attests to the relevance of such an approach to land use policy and urban land market analysis.

14.
Installation of temporary or long-term monitoring sites is expensive, so it is important to rationally identify potential locations that will meet the requirements of regional air quality management strategies. A simple but effective numerical approach to selecting ambient particulate matter (PM) monitoring site locations has therefore been developed using the MM5-CAMx4 air pollution dispersion modelling system. A new measure, ‘site efficiency’, was developed to assess the ability of any monitoring site to record peak ambient air pollution concentrations that are representative of the urban area. ‘Site efficiency’ varies from 0 to 100%, with the latter representing the most representative site location for monitoring peak PM concentrations. Four heavy pollution episodes in Christchurch (New Zealand) during winter 2005, representing four different aerosol dispersion patterns, were used to develop and test this site assessment technique. Evaluation of the efficiency of monitoring sites was undertaken for night and morning aerosol peaks for the four different PM spatial patterns. The results demonstrate that the existing long-term monitoring site at Coles Place is quite well located, with a site efficiency value of 57.8%. A temporary ambient PM monitoring site (operating during winter 2006) showed a lower ability to capture night and morning peak aerosol concentrations. Evaluation of multiple site locations used during an extensive field campaign in Christchurch in 2000 indicated that the maximum efficiency achieved by any site in the city would be 60-65%, while the efficiency of a virtual background site was calculated to be about 7%. This method of assessing the appropriateness of any potential monitoring site can be used to optimize monitoring site locations for any air pollution measurement programme.
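The abstract does not give the exact formula for ‘site efficiency’, so the sketch below assumes one plausible form: the modelled concentration at the candidate site divided by the domain-wide peak, averaged over episodes and expressed as a percentage. The PM grids are invented.

```python
# Assumed 'site efficiency' sketch: how close a candidate site's modelled
# PM concentration comes to the urban-area peak, averaged over episodes.
episodes = [
    # modelled PM grids (ug/m3), one per episode; candidate site is cell [1][1]
    [[40, 62, 38], [48, 50, 41], [35, 44, 30]],
    [[70, 90, 60], [75, 80, 55], [50, 65, 45]],
]

def site_efficiency(grids, row, col):
    scores = []
    for g in grids:
        peak = max(v for line in g for v in line)  # domain-wide episode peak
        scores.append(g[row][col] / peak)
    return 100.0 * sum(scores) / len(scores)

print(round(site_efficiency(episodes, 1, 1), 1))
```

Under this assumed definition, a site scoring near 100% sits at or near the modelled concentration maximum in every episode.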

15.
This paper proposes that projects and programmes can be empirically distinguished by the way in which they are associated with expectations and evaluations of success and failure. Support for the proposition is grounded in analysis of over sixteen hundred occurrences of the terms ‘project’ and ‘programme’ with ‘success’ and ‘failure’ derived from the Oxford English Corpus (OEC). The OEC is a structured and coded database of over two billion words of naturally occurring English collected from the World Wide Web. The analysis highlights that ‘project’ and ‘programme’ are each modified by the terms ‘success’ and ‘failure’ in significantly different ways, indicating that they are conceptually distinct phenomena. These findings imply that academics must be cautious in their use of language in investigations of project and programme evaluations, and that practitioners should consider the implications of treating programmes as ‘scaled-up’ projects, given their propensity for different evaluation outcomes.
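A toy version of the corpus comparison: count sentences in which each noun co-occurs with ‘success’ or ‘failure’ and compare the failure share. The six sentences are invented stand-ins for OEC concordance lines.

```python
# Minimal collocation comparison: failure share of 'project' vs 'programme'
# among sentences that also mention 'success' or 'failure'. Invented data.
sentences = [
    "the project was a complete failure",
    "the project ended in failure",
    "a project success story",
    "the programme was judged a success",
    "the programme was a notable success",
    "the programme avoided failure",
]

def failure_share(noun):
    hits = [s for s in sentences
            if noun in s and ("success" in s or "failure" in s)]
    fails = [s for s in hits if "failure" in s]
    return len(fails) / len(hits)

print(failure_share("project"))
print(failure_share("programme"))
```

A real corpus study would of course use lemmatized tokens and statistical significance tests rather than raw substring matching.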

16.
The lines of ‘damage-begin’ and ‘specimen-break’ for dynamic loading of a geogrid were determined in a series of laboratory tests. The cyclic load ratio was set to R = 0.5, with loading frequencies of f = 10 Hz and f = 3 Hz. The test results show clearly that the chosen procedure for determining and analysing the beginning of damage and break is reproducible and allows for safe extrapolation to lower load levels. Furthermore, the chosen method markedly decreases the required testing time. The assumption of linear damage accumulation was examined in two-step trials. The numbers of load cycles to ‘break’ evaluated in ‘one-step tests’ and in ‘two-step loading’ are practically the same. The existence of ‘damage-lines’ for the examined geogrid under a dynamic pulsating load of 10 Hz and 3 Hz and an R-value of 0.5 could be verified. Damage to the specimens occurs only for load cycles lying between the ‘damage-line’ and the ‘stress-cycle diagram’ (‘Woehler curve’). When it comes to dimensioning against ‘damage-begin’ or ‘break’, higher loading frequencies present the critical case.
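The two-step trials test linear damage accumulation, i.e. Miner's rule: damage D is the sum over load levels of n_i/N_i, with break expected at D close to 1 regardless of step order. A minimal sketch with invented cycle counts:

```python
# Miner's rule (linear damage accumulation): D = sum(n_i / N_i), where n_i
# is the number of cycles applied at level i and N_i the cycles-to-break
# at that level. Cycle numbers below are illustrative, not geogrid data.
def miner_damage(steps):
    """steps: list of (applied cycles n_i, cycles-to-break N_i)."""
    return sum(n / capacity for n, capacity in steps)

# One-step test: run at a single load level until break (D = 1).
one_step = [(200_000, 200_000)]
# Two-step test: half the life consumed at a high level, then a quarter
# at a lower level; D = 0.75, so a quarter of the life remains.
two_step = [(25_000, 50_000), (50_000, 200_000)]

print(miner_damage(one_step))
print(miner_damage(two_step))
```

If the one-step and two-step cycle counts to break agree in the laboratory, as reported above, the linear accumulation assumption holds for the tested geogrid.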

17.
The overall aim of this study is to identify factors that influence architects' demotivation in design firms. After a review of the extant literature in design management, project management and organisational behaviour, a list of 43 demotivating criteria was produced and used in a questionnaire survey. Analyses included reliability analysis, Mann–Whitney U and Kruskal–Wallis tests, demotivation severity index (DSI) computation and exploratory factor analysis. Results show an underlying structure of seven demotivating factors: ‘organisational injustice’, ‘project-induced stress’, ‘dysfunctional design team’, ‘poor interpersonal relationships’, ‘perceived career decline’, ‘negative leadership behaviours’ and ‘poor organisational culture’. Comparing these demotivational factors with motivational factors identified in previous related research, this study confirms that demotivation and motivation are on the same pole, and that what causes motivation or demotivation is a function of the individual frame of reference. This implies that the presence or absence of a factor might cause motivation or demotivation depending on an individual's frame of reference. Positive attention to the identified factors, in relation to individual personality differences, therefore helps to remove impediments that could affect employees' well-being, such as being downcast, dispirited, depressed and despondent. The study should help directors and managers of design firms to develop a healthy workforce through recognition and eradication of the identified demotivating factors using some of the suggested solutions.

18.
This paper deals with an investigation of the phenomenon of Helmholtz resonance under oblique wind flow, and an examination of the applicability of the quasi-steady approach to internal pressures in buildings with a dominant opening. Studies on a 1:50 scale model of the Texas Tech University (TTU) test building in a boundary layer simulation show that Helmholtz resonance under oblique wind flow produces an extremely strong response in internal pressure fluctuations in comparison with that obtained under normal onset flow. It is verified that ‘eddy dynamics over the opening’ rather than ‘freestream turbulence’ is responsible for the intense excitation at oblique flow angles, implying that even if the Helmholtz resonance frequency were to be in the tail of the freestream turbulence spectrum, severe excitation would still be possible. Experimental measurements of internal pressures for a range of opening situations also reveal that the quasi-steady approach is inapplicable for the prediction of peak internal pressures. Furthermore, it is demonstrated that while the provisions of the Australian/New Zealand wind loading code AS/NZS1170.2:2002, which is based upon the quasi-steady method, are adequate as far as mean internal pressures are concerned, they underpredict peak internal pressures in some situations. In particular, for the range of situations studied, measurements indicated that peak pressures were up to 25% higher than the AS/NZS1170.2:2002 provisions in the case of openings in the positive pressure and sidewall regions. It is also shown that for openings located in the sidewall region, peak internal pressures can be just as extreme when positive as when negative. It is suggested that for the calculation of internal pressures, AS/NZS1170.2:2002 should provide for the use of local pressure factors Kl, which are at present applied only to external pressure calculations. Secondly, the code should provide for internal pressure coefficients to be both negative and positive when openings are located in sidewall regions. Finally, in order to account for the effects of additional fluctuations arising from Helmholtz resonance oscillations, the possibility of using an internal pressure factor Ki should be explored.
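The Helmholtz resonance frequency of a building volume with a single dominant opening is commonly estimated as f_H = (c / 2*pi) * sqrt(A / (V * Le)), where c is the speed of sound, A the opening area, V the internal volume, and Le an effective opening depth including an end correction. The geometry and the 0.9*sqrt(A) end-correction coefficient below are rough assumed values, not parameters from the paper.

```python
# Estimate of a Helmholtz resonance frequency for a single dominant
# opening. All dimensions are assumed stand-ins, not measured TTU values.
import math

c = 340.0    # speed of sound, m/s
A = 4.0      # opening area, m^2
V = 470.0    # internal volume, m^3 (order of magnitude of a small building)
Le = 0.1 + 0.9 * math.sqrt(A)  # wall thickness + assumed end correction, m

f_H = (c / (2 * math.pi)) * math.sqrt(A / (V * Le))
print(round(f_H, 2))  # a few Hz for this geometry
```

At model scale the frequency shifts with the scaling of A, V and Le, which is why matching Helmholtz behaviour is a design consideration in boundary layer wind tunnel studies.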

19.
The authors’ group has been conducting full-scale measurements of wind velocities with Doppler sodars. It is very important to accurately assess the profiles of mean wind speeds and turbulence intensities in relation to terrain roughness. In this study, the profiles were evaluated for all data measured over a long period at a seashore site and two inland sites. It is confirmed that for strong winds the profiles can be approximated by a single power law at altitudes between 50 and 340 m. The power law exponents of the mean wind speed profiles are approximately 0.1 for wind from the sea and 0.2-0.3 for wind blown over land. Those of the turbulence intensity profiles are approximately 0 and −0.2 to 0.4, respectively.
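The single power law reported above has the standard form U(z) = U_ref * (z / z_ref)^alpha. A minimal sketch comparing a sea fetch (alpha = 0.1) with a land fetch (here alpha = 0.25, within the reported 0.2-0.3 band); the reference speed is an arbitrary example value.

```python
# Power-law mean wind speed profile: U(z) = U_ref * (z / z_ref) ** alpha.
# Exponents follow the ranges in the abstract; U_ref is invented.
def wind_speed(z, u_ref, z_ref=50.0, alpha=0.1):
    """Mean wind speed (m/s) at height z (m) from a reference measurement."""
    return u_ref * (z / z_ref) ** alpha

# Speed at 340 m given 20 m/s at 50 m, for sea and land exponents.
print(round(wind_speed(340.0, 20.0, alpha=0.10), 1))
print(round(wind_speed(340.0, 20.0, alpha=0.25), 1))
```

The larger land exponent produces a noticeably stronger increase of speed with height, reflecting the rougher upwind terrain.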

20.
There are various definitions of ‘zero energy’ and ‘net-zero energy’ building. In most cases the definitions refer only to the energy used in the operation of the building, ignoring the energy use related to the construction and delivery of the building and its components. On the other hand, the concept of ‘net energy’ as used in the field of ecological economics, which does take into account the energy used during the production of a commodity, is widely applied in fields such as renewable energy assessment. In this paper the concept of ‘net energy’ is introduced and applied within the built environment, based on a methodology that accounts for the embodied energy of building components together with energy use in operation. A definition of the life cycle zero energy building (LC-ZEB) is proposed, as well as the use of the net energy ratio (NER) as a factor to aid building design with a life cycle perspective.
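A sketch of the life-cycle bookkeeping behind the LC-ZEB idea: a building is life-cycle zero energy when its annualized embodied energy plus operational use is fully offset by on-site generation. The NER is assumed here to be generation divided by life-cycle input; the formulation and all figures are illustrative, not the paper's exact definitions.

```python
# Life-cycle net energy sketch: annualize embodied energy over the service
# life, add operational use, and compare with on-site generation. All
# figures are invented; the NER form is an assumption for illustration.
embodied = 5_000.0    # GJ embodied in materials and construction
lifetime = 50         # years of service life
operation = 80.0      # GJ/year of purchased operational energy
generation = 180.0    # GJ/year of on-site renewable generation

annual_input = embodied / lifetime + operation  # life-cycle input per year
ner = generation / annual_input                 # assumed net energy ratio

print(ner)  # 1.0 here: generation exactly offsets life-cycle input
```

An operational-only ‘net-zero’ accounting would ignore the embodied term and so declare this building better than break-even, which is precisely the gap the LC-ZEB definition closes.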


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)