Similar Articles
20 similar articles found.
1.
For the development of effective air pollution control strategies, it is crucial to identify the sources that are the principal contributors to air pollution and to estimate how much each source contributes. Multivariate receptor modeling addresses these problems by decomposing ambient concentrations of multiple air pollutants into components associated with different source types. With the expanded monitoring efforts established over the past several decades, extensive multivariate air pollution data obtained from multiple monitoring sites (multisite multipollutant data) are now available. Although considerable research has been conducted on modeling multivariate space-time data in other contexts, there has been little research on spatial multivariate receptor models for multisite multipollutant data. We present a Bayesian spatial multivariate receptor modeling (BSMRM) approach that can incorporate spatial correlations in multisite multipollutant data into the estimation of source composition profiles and contributions, based on discrete process convolution models for multivariate spatial processes. The new BSMRM approach enables prediction of source contributions at unmonitored sites while simultaneously dealing with model uncertainty caused by the unknown number of sources and identifiability conditions. The new approach can also provide uncertainty estimates for the predicted source contributions at any location, which was not possible in previous multivariate receptor modeling approaches. The proposed approach is applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, between 2003 and 2005. Supplementary materials for this article, including real data and MATLAB code for implementing BSMRM, are available online on the journal web site.
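At its core, a multivariate receptor model factorizes the n × p matrix of ambient concentrations X into nonnegative source contributions G (n × q) and source profiles F (q × p), X ≈ GF + E. The sketch below illustrates only this basic decomposition using plain multiplicative-update NMF on synthetic data; it is not the paper's Bayesian BSMRM method, and the function name `nmf_receptor` is an illustrative assumption.

```python
import numpy as np

def nmf_receptor(X, q, n_iter=2000, seed=0):
    """Factor X (samples x pollutants) into nonnegative contributions
    G (samples x q) and profiles F (q x pollutants) via Lee-Seung
    multiplicative updates. Illustrative, not the paper's BSMRM."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    G = rng.random((n, q)) + 0.1
    F = rng.random((q, p)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Synthetic 2-source example: the factorization should reproduce X closely.
rng = np.random.default_rng(1)
G_true = rng.random((50, 2))
F_true = rng.random((2, 6))
X = G_true @ F_true
G, F = nmf_receptor(X, q=2)
print(np.linalg.norm(X - G @ F) / np.linalg.norm(X))  # small relative residual
```

Note that without extra identifiability constraints the recovered G and F are determined only up to scaling and rotation — exactly the model-uncertainty issue the paper addresses.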

2.
Multivariate receptor models and model uncertainty
Estimation of the number of major pollution sources, the source composition profiles, and the source contributions are the main interests in multivariate receptor modeling. Due to the lack of identifiability of the receptor model, however, the estimation cannot be done without additional assumptions.

A common approach to this problem is to estimate the number of sources, q, at the first stage, and then estimate source profiles and contributions at the second stage, given additional constraints (identifiability conditions) to prevent source rotation/transformation and the assumption that the q-source model is correct. These assumptions on the parameters (the number of sources and identifiability conditions) are the main source of model uncertainty in multivariate receptor modeling.

In this paper, we suggest a Bayesian approach to deal with model uncertainties in multivariate receptor models by using Markov chain Monte Carlo (MCMC) schemes. Specifically, we suggest a method that can simultaneously estimate parameters (compositions and contributions), parameter uncertainties, and model uncertainties (number of sources and identifiability conditions). Simulation results and an application to air pollution data are presented.


3.
The use of receptor modeling is now a widely accepted approach to model air pollution data. The resulting estimates of pollution source profiles have error, and frequently the uncertainties are obtained under an assumption of independence. In addition, traditional bootstrap approaches are very computationally intensive. We present an intuitive jackknife alternative that is much less computationally intensive; in simulation examples and on actual data, it appears to provide wider confidence intervals and larger standard errors for receptor model profile estimates than does the bootstrap carried out under the assumption of independence.
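A generic leave-one-out jackknife standard error, illustrating the kind of resampling the abstract refers to (not the authors' exact procedure for receptor model profiles; the function name `jackknife_se` is illustrative). For the sample mean the jackknife SE reduces exactly to s/√n, which provides a convenient correctness check.

```python
import numpy as np

def jackknife_se(data, stat):
    """Leave-one-out jackknife standard error of statistic `stat`."""
    n = len(data)
    # Recompute the statistic n times, each time deleting one observation.
    reps = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

x = np.array([2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.0, 2.4])
se = jackknife_se(x, np.mean)
print(se)  # equals x.std(ddof=1) / sqrt(n) for the sample mean
```

The computational appeal is visible in the code: only n re-evaluations of the statistic are needed, versus hundreds or thousands of resamples for a bootstrap.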

4.
This paper deals with model‐order reduction of parametric partial differential equations (PPDEs). More specifically, we consider the problem of finding a good approximation subspace of the solution manifold of the PPDE when only partial information on the latter is available. We assume that two sources of information are available: (a) a "rough" prior knowledge taking the form of a manifold containing the target solution manifold and (b) partial linear measurements of the solutions of the PPDE (the term partial refers to the fact that observation operators cannot be inverted). We provide and study several tools to derive good approximation subspaces from these two sources of information. We first identify the best worst‐case performance achievable in this setup and propose simple procedures to approximate the corresponding optimal approximation subspace. We then provide, in a simplified setup, a theoretical analysis relating the achievable reduction performance to the choice of the observation operator and the prior knowledge available on the solution manifold.

5.
The development of multivariate statistical approaches to receptor models has focused on factor analysis. Target transformation factor analysis (TTFA) offers the possibility of determining the number of sources and their elemental composition as well as their mass contributions. In the current work, a new approach is presented for calculating the mass contributions of each source to each sample. In addition, an approach to estimating the uncertainties in the analysis is introduced. The method is applied to a subset of the Regional Air Pollution Study (RAPS) particulate composition data set for site 203 in July and August 1976. The data are divided into subsets covering the daylight (6 AM to 6 PM) or night (6 PM to 6 AM) samples. Similar source profiles are obtained for these subsets.

6.
We present preliminary results of our joint investigations to monitor and mitigate environmental pollution, a leading contributor to chronic and deadly health disorders and diseases affecting millions of people each year. Using nanotechnology-based gas sensors, pollution is monitored at several ground stations. The sensor unit is portable, provides instantaneous ground pollution concentrations accurately, and can be readily deployed to disseminate real-time pollution data to a web server providing a topological overview of monitored locations. We are also employing remote sensing technologies with high spatial and spectral resolution to model urban pollution using satellite images and image processing. One of the objectives of this investigation is to develop a unique capability to acquire, display, and assimilate these valuable sources of data to accurately assess urban pollution by real-time monitoring using commercial sensors fabricated with nanofabrication technologies and satellite imagery. This integrated tool will support prediction processes, public awareness, and the establishment of policy priorities for air quality in polluted areas. The complex nature of environmental pollution data mining requires computing technologies that integrate multiple sources and repositories of data over multiple networking systems and platforms, and that must be accurate, secure, and reliable. An evaluation of information security risks and strategies within an environmental information system is presented. In addition to air pollution, we explore the efficacy of nanostructured materials in the detection and remediation of water pollution. We present sorption results for advanced nanomaterial-based sorbents that have been found effective in the removal of cadmium and arsenic from water streams.

7.
In China, daily respirable suspended particulate (RSP, particles with aerodynamic diameters less than 10 µm) concentrations exceeding 420 µg/m³ are considered "hazardous" to health. These can lead to the premature onset of certain diseases and premature death of sick and elderly people; even healthy people are warned to avoid outdoor activity when RSP concentrations are high. Such high pollution levels are defined as extreme RSP pollution events. Recent epidemiological studies have shown that a distinct difference exists between the health effects caused by natural sources and anthropogenic sources, mandating knowledge of the source of extreme RSP pollution. Twenty-six extreme RSP pollution events were recorded in Beijing from January 2003 to December 2006. The HYSPLIT4 (Hybrid Single Particle Lagrangian Integrated Trajectory, Version 4) model was used to discriminate the sources of these extreme RSP pollution events. The model showed that twelve events were caused by natural sources (dust storms), nine events by anthropogenic sources (e.g., vehicles and industrial activities) under quasi-quiescent weather, and five events by mixed causes. Identifying such events will be valuable in epidemiological studies on air pollution in Beijing.

8.
With the rapid rise in social media, alternative news sources, and blogs, ordinary citizens have become information producers as much as information consumers. Highly charged prose, images, and videos spread virally, and stoke the embers of social unrest by alerting fellow citizens to relevant happenings and spurring them into action. We are interested in using Big Data approaches to generate forecasts of civil unrest from open source indicators. The heterogeneous nature of data coupled with the rich and diverse origins of civil unrest call for a multi-model approach to such forecasting. We present a modular approach wherein a collection of models use overlapping sources of data to independently forecast protests. Fusion of alerts into one single alert stream becomes a key system informatics problem, and we present a statistical framework to accomplish such fusion. Given an alert from one of the numerous models, the decision space for fusion has two possibilities: (i) release the alert or (ii) suppress the alert. Using a Bayesian decision theoretical framework, we present a fusion approach for releasing or suppressing alerts. The resulting system enables real-time decisions and, more importantly, tuning of precision and recall. Supplementary materials for this article are available online.
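Under a simple two-action loss, the release/suppress decision described above reduces to a probability threshold: release an alert when the expected loss of releasing is below that of suppressing. A minimal sketch, in which the cost values and the function name `release_alert` are illustrative assumptions, not taken from the paper:

```python
def release_alert(p_event, cost_fp=1.0, cost_fn=4.0):
    """Bayesian two-action decision rule: release the alert iff the
    expected loss of releasing, (1 - p) * cost_fp, is below the
    expected loss of suppressing, p * cost_fn."""
    return p_event * cost_fn > (1.0 - p_event) * cost_fp

# With cost_fn = 4 * cost_fp, the release threshold is p = 0.2.
print(release_alert(0.35))  # above threshold: release
print(release_alert(0.10))  # below threshold: suppress
```

Raising `cost_fp` relative to `cost_fn` raises the threshold, trading recall for precision — the tuning knob the abstract mentions.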

9.
The effectiveness of a scatter correction approach based on decoupling absorption and scattering effects through the use of the radiative transfer theory to invert a suitable set of measurements is studied by considering a model multicomponent suspension. The method was used in conjunction with partial least-squares regression to build calibration models for estimating the concentration of two types of analytes: an absorbing (nonscattering) species and a particulate (absorbing and scattering) species. The performances of the models built by this approach were compared with those obtained by applying empirical scatter correction approaches to diffuse reflectance, diffuse transmittance, and collimated transmittance measurements. It was found that the method provided appreciable improvement in model performance for the prediction of both types of analytes. The study indicates that, as long as the bulk absorption spectra are accurately extracted, no further empirical preprocessing to remove light scattering effects is required.

10.
Liu HB  Yang JC  Yi WJ  Wang JQ  Yang JK  Li XJ  Tan JC 《Applied optics》2012,51(16):3590-3598
In most spacecraft, there is a need to know the craft's angular rate. Approaches with least squares and an adaptive Kalman filter are proposed for estimating the angular rate directly from the star tracker measurements. In these approaches, only knowledge of the vector measurements and sampling interval is required. The designed adaptive Kalman filter can filter out noise without information on the dynamic model and inertia dyadic. To verify the proposed estimation approaches, simulations based on the orbit data of the challenging minisatellite payload (CHAMP) satellite and experimental tests with night-sky observation are performed. Both the simulation and experimental results demonstrate that the proposed approach performs well in terms of accuracy, robustness, and performance.

11.
Hyperspectral imaging (HSI) is a spectroscopic method that uses densely sampled measurements along the electromagnetic spectrum to identify the unique molecular composition of an object. Traditionally HSI has been associated with remote sensing-type applications, but it has recently found increased use in biomedicine, from investigations at the cellular to the tissue level. One of the main challenges in the analysis of HSI is estimating the proportions, also called abundance fractions, of each of the molecular signatures. While there is great promise for HSI in the area of biomedicine, large variability in the measurements and artifacts related to the instrumentation have slowed its adoption into more widespread practice. In this article, we propose a novel regularization and variable selection method called the spatial LASSO (SPLASSO). The SPLASSO incorporates spatial information via a graph Laplacian-based penalty to help improve the model estimation process for multivariate response data. We show the strong performance of this approach on a benchmark HSI dataset with considerable improvement in predictive accuracy over the standard LASSO. Supplementary materials for this article are available online.
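A graph-Laplacian penalty encourages coefficients of neighboring variables to agree. The sketch below shows the idea with an L2 (ridge-type) Laplacian penalty on a chain graph, which admits a closed-form solution; the full SPLASSO combines such spatial smoothing with L1 sparsity, which this simplified example omits, and the function names are illustrative.

```python
import numpy as np

def chain_laplacian(p):
    """Graph Laplacian of a path graph over p coefficients."""
    L = np.zeros((p, p))
    for i in range(p - 1):
        L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
    return L

def laplacian_ridge(X, y, lam):
    """Closed-form solution of min ||y - Xb||^2 + lam * b' L b,
    which shrinks neighboring coefficients toward each other."""
    L = chain_laplacian(X.shape[1])
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
b_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # spatially smooth truth
y = X @ b_true + 0.1 * rng.standard_normal(100)
b = laplacian_ridge(X, y, lam=1.0)
print(np.round(b, 2))
```

Because the penalty is quadratic, b'Lb equals the sum of squared differences between adjacent coefficients, so estimates for neighboring wavelengths are pulled together — the spatial analogue of what SPLASSO does across pixels.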

12.
Leblanc T  McDermid IS 《Applied optics》2008,47(30):5592-5603
A Raman lidar calibration method adapted to the long-term monitoring of atmospheric water vapor is proposed. The accuracy of Raman lidar water vapor profiles is limited by that of the calibration process. Typically, calibration using in situ balloon-borne measurements suffers from the nonsimultaneity and noncollocation of the lidar and in situ measurements, while calibration from passive remote sensors suffers from the lower accuracy of the retrievals and incomplete sampling of the water vapor column observed by lidar. We propose a new hybrid calibration method using a combination of absolute calibration from radiosonde campaigns and routine-basis (off-campaign) partial calibration using a standard lamp. This new method takes advantage of the stability of traceable calibrated lamps as reliable sources of known spectral irradiance combined with the best available in situ measurements. An integrated approach is formulated, which can be used for the future long-term monitoring of water vapor by Raman lidars within the international Network for the Detection of Atmospheric Composition Change and other networks.

13.
Actuarial practitioners now have access to multiple sources of insurance data corresponding to various situations: multiple business lines, umbrella coverage, multiple hazards, and so on. Despite the wide use and simple nature of single-target approaches, modeling these types of data may benefit from an approach performing variable selection jointly across the sources. We propose a unified algorithm to perform sparse learning of such fused insurance data under the Tweedie (compound Poisson) model. By integrating ideas from multitask sparse learning and sparse Tweedie modeling, our algorithm produces flexible regularization that balances predictor sparsity and between-sources sparsity. When applied to simulated and real data, our approach clearly outperforms single-target modeling in both prediction and selection accuracy, notably when the sources do not have exactly the same set of predictors. An efficient implementation of the proposed algorithm is provided in our R package MStweedie, which is available at https://github.com/fontaine618/MStweedie. Supplementary materials for this article are available online.

14.
The purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy–O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy–O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large. Supplementary materials for this article are available online.

15.
A very important problem in industrial applications of PCA and PLS models, such as process modelling or monitoring, is the estimation of scores when the observation vector has missing measurements. The alternative of suspending the application until all measurements are available is usually unacceptable. The problem treated in this work is that of estimating scores from an existing PCA or PLS model when new observation vectors are incomplete. Building the model with incomplete observations is not treated here, although the analysis given in this paper provides considerable insight into this problem. Several methods for estimating scores from data with missing measurements are presented and analysed: a method, termed single component projection, derived from the NIPALS algorithm for model building with missing data; a method of projection to the model plane; and data replacement by the conditional mean. Expressions are developed for the error in the scores calculated by each method. The error analysis is illustrated using simulated data sets designed to highlight problem situations. A larger industrial data set is also used to compare the approaches. In general, all the methods perform reasonably well with moderate amounts of missing data (up to 20% of the measurements). However, in extreme cases where critical combinations of measurements are missing, the conditional mean replacement method is generally superior to the other approaches.
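The projection-to-the-model-plane method mentioned above estimates the scores of an incomplete observation by regressing its observed elements on the corresponding rows of the loading matrix. A minimal sketch with a noiseless two-component model; the function name `scores_with_missing` is illustrative, and real data would add noise that this idealized example omits.

```python
import numpy as np

def scores_with_missing(x, P):
    """Projection to the model plane: least-squares regression of the
    observed elements of x onto the corresponding rows of loadings P."""
    obs = ~np.isnan(x)
    Po = P[obs, :]
    return np.linalg.solve(Po.T @ Po, Po.T @ x[obs])

# Build a 2-component model with orthonormal loadings, then score an
# observation that has one missing measurement.
rng = np.random.default_rng(0)
T = rng.standard_normal((200, 2))
P_full, _ = np.linalg.qr(rng.standard_normal((6, 2)))  # orthonormal columns
X = T @ P_full.T                  # data lying exactly in the model plane
x_new = X[0].copy()
t_full = P_full.T @ x_new         # scores using all 6 variables
x_new[4] = np.nan                 # one missing measurement
t_est = scores_with_missing(x_new, P_full)
print(np.round(t_est - t_full, 6))
```

For data lying exactly in the model plane, dropping one variable leaves the scores fully recoverable; the method degrades (as the abstract notes) only when the missing pattern makes the reduced loading matrix nearly rank-deficient.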

16.
The practical difficulties encountered in analyzing the kinetics of new reactions are considered from the viewpoint of the capabilities of state-of-the-art high-throughput systems. There are three problems. The first problem is that of model selection, i.e., choosing the correct reaction rate law. The second problem is how to obtain good estimates of the reaction parameters using only a small number of samples once a kinetic model is selected. The third problem is how to perform both functions using just one small set of measurements. To solve the first problem, we present an optimal sampling protocol to choose the correct kinetic model for a given reaction, based on T-optimal design. This protocol is then tested for the case of second-order and pseudo-first-order reactions using both experiments and computer simulations. To solve the second problem, we derive the information function for second-order reactions and use this function to find the optimal sampling points for estimating the kinetic constants. The third problem is further complicated by the fact that the optimal measurement times for determining the correct kinetic model differ from those needed to obtain good estimates of the kinetic constants. To solve this problem, we propose a Pareto optimal approach that can be tuned to give the set of best possible solutions for the two criteria. One important advantage of this approach is that it enables the integration of a priori knowledge into the workflow.
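As a concrete instance of the second problem, the rate constant of a second-order reaction dC/dt = −kC² can be estimated from a handful of samples because 1/C(t) = 1/C₀ + kt is linear in t. The sketch below uses an ordinary least-squares fit on noise-free synthetic data; it illustrates the kinetics only, not the paper's information-function-based choice of sampling points, and the sampling times shown are arbitrary assumptions.

```python
import numpy as np

# Second-order kinetics with a single reactant: dC/dt = -k C^2
# integrates to 1/C(t) = 1/C0 + k t, so a linear fit of 1/C against t
# recovers the rate constant k and the initial concentration C0.
k_true, C0 = 0.5, 2.0
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # arbitrary sampling times
C = 1.0 / (1.0 / C0 + k_true * t)             # exact concentration profile
slope, intercept = np.polyfit(t, 1.0 / C, 1)  # slope ~ k, intercept ~ 1/C0
print(slope, intercept)
```

With noisy measurements, where each sample is placed strongly affects the variance of the estimated k, which is precisely what the paper's information function quantifies.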

17.
The authors propose a robust model for characterizing the statistical nature of signals obtained from ultrasonic backscatter processes. The model can accommodate frequency-dependent attenuation, spatially varying media statistics, arbitrary beam geometries, and arbitrary pulse shapes. On the basis of this model, statistical schemes are proposed for estimating the scatterer number density (SND) of tissues. The algorithm for estimating the scatterer number incorporates measurements of both the statistical moments of the backscattered signals and the point spread function of the acoustic system. The number density algorithm has been applied to waveforms obtained from ultrasonic phantoms with known number densities and in vitro mammalian tissues. There is excellent agreement among theoretical, histological, and experimental results. The application of this technique for noninvasive clinical tissue characterization is discussed.

18.
Study on baseline smoke emissions from diesel vehicles in highway tunnels
A model for calculating diesel engine smoke emissions was established based on theoretical analysis of engine combustion. On-road tests were conducted to study the speed characteristics of diesel smoke emissions; engine bench tests simulating the polluted tunnel environment were used to study the effect of contaminated intake air on smoke emissions; and bench simulation tests were also used to examine the influence of longitudinal road gradient inside tunnels on diesel vehicle smoke emissions. The results provide guidance for determining baseline vehicle pollutant emissions in the design of highway tunnel ventilation schemes, and can also serve as a reference for calculating vehicle pollutant source strength when evaluating the contribution of vehicle emissions to urban pollution and to environmental pollution along highways.

19.
Shimizu A  Sakuda K 《Applied optics》1997,36(23):5769-5774
To measure the diffraction efficiency of a grating as a function of wavelength, quasi-monochromatic light sources of various wavelengths are normally required. We propose a simple and useful method for measuring the wavelength dependence of grating diffraction efficiency using a single quasi-monochromatic light source. The diffraction efficiency of the grating is first measured as a function of the incident-beam angle of monochromatic light; from these data, a mathematical conversion then yields the diffraction efficiencies at various wavelengths for the same incident angle. The converted results for two laminated gratings with differently slanted angles and the same volume grating period are in good agreement with the experimental ones.

20.
This paper presents an integrated life cycle methodology for mapping the flows of pollutants in the urban environment, following the pollutants from their sources through the environment to receptors. The sources of pollution that can be considered by this methodology include products, processes and human activities. Life cycle assessment (LCA), substance flow analysis (SFA), fate and transport modelling (F&TM) and geographical information systems (GIS) have been used as tools for these purposes. A mathematical framework has been developed to enable linking and integration of LCA and SFA. The main feature of the framework is a distinction between the foreground and background systems, where the foreground system includes pollution sources of primary interest in the urban environment and the background comprises all other supporting activities occurring elsewhere in the life cycle. Applying the foreground–background approach, SFA is used to track specific pollutants in the urban environment (foreground) from different sources. LCA is applied to quantify emissions of a number of different pollutants and their impacts in both the urban (foreground) and the wider environment (background). Therefore, two "pollution vectors" are generated: one each by LCA and SFA. The former comprises all environmental burdens or impacts generated by a source of interest on a life cycle basis, and the latter is defined by the flows of a particular burden (substance or pollutant) generated by different sources in the foreground. The vectors are related to the "unit of analysis", which represents a modified functional unit used in LCA and defines the level of activity of the pollution source of interest. A further methodological development has also included integration of LCA and SFA with F&TM and GIS. A four-step methodology is proposed to enable spatial mapping of pollution from sources through the environment to receptors.
The approach involves using GIS to map pollution sources and applying the LCA–SFA approach to define sources of interest and quantify environmental burdens and impacts on a life-cycle basis. This is followed by F&TM to track pollution through the environment and by the quantification of site-specific impacts on human health and the environment. The application of the integrated methodology and the mathematical framework is illustrated by a hypothetical example involving four pollution sources in a city: incineration of municipal solid waste (MSW), manufacture of PVC, car travel, and truck freight.
