Similar Documents
20 similar documents found (search time: 31 ms)
1.
An up-to-date and accurate aviation emission inventory is a prerequisite for any detailed analysis of the impact of aviation emissions on greenhouse gases and local air quality around airports. In this paper we present an aviation emission inventory built from real-time air traffic trajectory data. The inventory takes the form of a 4D database with a resolution of 1° × 1° × 1000 ft for temporal and spatial emission analysis, covering an ongoing period of six months of Australian airspace starting from October 2008. In this study we show 6 months of data, comprising 492,936 flights (inbound, outbound and overflying). These flights used about 2515.83 kt of fuel and emitted 114.59 kt of HC, 200.95 kt of CO, 45.92 kt of NOx, 7929.89 kt of CO2, and 2.11 kt of SOx. The spatial analysis shows that CO2 concentrations in some parts of Australia, especially around major cities, are much higher than elsewhere. The results also indicate that aviation NOx emissions may have a significant impact on ozone in the upper troposphere, but not in the stratosphere. With the availability of this real-time aviation emission database, environmental analysts and aviation experts will have an indispensable source of information for making timely decisions on expanding runways, building new airports, applying route charges based on environmentally congested airways, and restructuring air traffic flow to achieve sustainable air traffic growth.
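The accumulation of trajectory data into a 4D grid can be sketched as follows. This is a minimal illustration, not the paper's system: the cell layout (1° × 1° × 1000 ft × 1 h) follows the abstract, but the sample format and the emission indices (kg pollutant per kg fuel) are assumptions for demonstration only.

```python
from collections import defaultdict

def grid_key(lat, lon, alt_ft, hour):
    """Map a trajectory sample to a 1° x 1° x 1000 ft x 1 h grid cell."""
    return (int(lat // 1), int(lon // 1), int(alt_ft // 1000), hour)

def build_inventory(samples):
    """samples: iterable of (lat, lon, alt_ft, hour, fuel_kg).
    Emissions are scaled from fuel burn with illustrative emission
    indices -- not the values used in the paper."""
    EI = {"CO2": 3.16, "NOx": 0.015}  # assumed kg-per-kg-fuel indices
    inv = defaultdict(lambda: defaultdict(float))
    for lat, lon, alt, hour, fuel in samples:
        cell = grid_key(lat, lon, alt, hour)
        for species, ei in EI.items():
            inv[cell][species] += fuel * ei
    return inv

inv = build_inventory([
    (-33.9, 151.2, 3500, 10, 120.0),   # two samples in the same cell
    (-33.5, 151.8, 3900, 10, 80.0),
    (-37.8, 144.9, 35500, 11, 200.0),  # a cruise-level sample
])
```

Summing per-cell totals over the grid then reproduces the kind of spatial emission analysis described above.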

2.
《Parallel Computing》2013,39(10):615-637
A key issue for the efficient use of large grid systems is the discovery of resources, a task that becomes more complicated as the system grows. Large amounts of information on the available resources must be stored and kept up to date across the system so that users can query it to find resources meeting specific requirements (e.g. a given operating system or amount of available memory). Three tasks must therefore be performed: (1) information on resources must be gathered and processed, (2) the processed information must be disseminated over the system, and (3) upon users' requests, the system must be able to discover resources meeting given requirements using the processed information. This paper presents a new technique for the discovery of resources in grids which supports multi-attribute queries (e.g. {OS = Linux & memory = 4 GB}) and range queries (e.g. {50 GB < disk-space < 100 GB}). The technique relies on content summarisation to perform the first task, and tackles the main drawback of existing summarisation-based proposals, namely scalability, by means of Peer-to-Peer (P2P) techniques, specifically Routing Indices (RIs), for the second and third tasks. Another contribution of this work is a performance evaluation, conducted through simulations of the EU DataGRID Testbed, which shows the usefulness of this approach compared to other proposals from the literature: the presented technique improves scalability and yields good performance. In addition, the parameters involved in summary creation have been tuned and the most suitable values for the presented test case identified.
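The third task, matching user requirements against stored resource information, can be sketched in a few lines. This is a generic illustration of multi-attribute and range matching, not the paper's RI-based routing; the attribute names and resource records are hypothetical.

```python
def matches(resource, query):
    """query maps attribute -> exact value, or (low, high) for a range."""
    for attr, cond in query.items():
        val = resource.get(attr)
        if val is None:
            return False
        if isinstance(cond, tuple):        # range query, e.g. 50 < disk < 100
            low, high = cond
            if not (low < val < high):
                return False
        elif val != cond:                  # exact multi-attribute match
            return False
    return True

resources = [
    {"os": "Linux", "memory_gb": 4, "disk_gb": 80},
    {"os": "Linux", "memory_gb": 2, "disk_gb": 120},
    {"os": "Windows", "memory_gb": 4, "disk_gb": 60},
]
# {OS = Linux & 50 GB < disk-space < 100 GB}
hits = [r for r in resources
        if matches(r, {"os": "Linux", "disk_gb": (50, 100)})]
```

In the paper's setting, such predicates would be evaluated against content summaries rather than full resource records, so that a query can be routed only toward nodes whose summaries can possibly match.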

3.
There is a need to develop operational land degradation indicators for large regions to prevent losses of biological and economic productivity. Disturbance events push ecosystems beyond their resilience and modify the associated hydrological and surface energy balance. New indicators for water-limited ecosystems can therefore be based on the partitioning of surface energy into latent (λE) and sensible (H) heat flux. In this study, a new methodology for monitoring land degradation risk at regional scale is evaluated in a semiarid area of SE Spain. Input data include ASTER surface temperature and reflectance products, and other ancillary data. The methodology employs two land degradation indicators, one related to ecosystem water use derived from the non-evaporative fraction (NEF = H / (λE + H)), and another related to vegetation greenness derived from the NDVI. The surface energy modelling approach used to estimate the NEF showed errors within the range of similar studies (R2 = 0.88; RMSE = 0.18 (22%)). To create quantitative indicators suitable for regional analysis, the NEF and NDVI were standardized between two possible extremes of ecosystem status, extremely disturbed and undisturbed, in each climatic region, defining the NEFS (NEF Standardized) and NDVIS (NDVI Standardized). The procedure was successful, as it statistically identified ecosystem status extremes for both indicators without supervision. Evaluation of the indicators at disturbed and undisturbed (control) sites, together with intermediate surface variables such as albedo and surface temperature, provided insights into the main controls on surface energy status following disturbance events. These results suggest that ecosystem functional indicators such as the NEFS can provide information related to the surface water deficit, including the role of soil properties.
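The two defining formulas, the non-evaporative fraction and its standardization between regional extremes, are simple enough to sketch directly. The flux values and the regional extremes below are hypothetical numbers for illustration, not data from the study.

```python
def nef(H, lE):
    """Non-evaporative fraction: NEF = H / (lE + H)."""
    return H / (lE + H)

def standardize(x, x_undisturbed, x_disturbed):
    """Scale an indicator between the undisturbed (0) and extremely
    disturbed (1) extremes of its climatic region, as done to define
    NEFS and NDVIS."""
    return (x - x_undisturbed) / (x_disturbed - x_undisturbed)

# Hypothetical fluxes (W m-2) for one pixel, and assumed region extremes
pixel_nef = nef(H=300.0, lE=100.0)         # water-stressed pixel: 0.75
nefs = standardize(pixel_nef, 0.45, 0.95)  # 0 = undisturbed, 1 = disturbed
```

A high NEFS indicates that most available energy leaves as sensible rather than latent heat, i.e. a large surface water deficit relative to undisturbed conditions in the same climatic region.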

4.
This study investigated the effect of upstream stations' flow records on the performance of artificial neural network (ANN) models for predicting daily watershed runoff. For comparison, a multiple linear regression (MLR) analysis was also examined using various statistical indices. Five streamflow gauging stations on the Cahaba River, Alabama, were selected as case studies. Two ANN models, a multi-layer feed-forward network trained with the Levenberg–Marquardt algorithm (LMFF) and a radial basis function network (RBF), were used to forecast streamflows one day ahead. Correlation analysis was applied to determine the input variables of each ANN model. Several statistical criteria (RMSE, MAE and coefficient of correlation) were used to check model accuracy against the observed data by means of k-fold cross-validation, and residual analysis was applied to the model results. The comparison revealed that using upstream records significantly increased the accuracy of the ANN and MLR models in predicting daily streamflows (by around 30%). Both ANN approaches were more accurate than the MLR in predicting streamflow dynamics: the LMFF model improved the average root mean square error (RMSEave) and average mean absolute percentage error (MAPEave) of the multiple linear regression forecasts by about 18% and 21%, respectively. Although the RBF model performed better for the highest flows (flood events, RMSEave/RBF = 26.8 m3/s vs. RMSEave/LMFF = 40.2 m3/s), overall the LMFF method was somewhat superior to the RBF method in predicting watershed runoff (RMSE/LMFF = 18.8 m3/s vs. RMSE/RBF = 19.2 m3/s). Finally, statistical differences between measured and predicted medians were evaluated using the Mann–Whitney test, and differences in variances using Levene's test.
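The role of the upstream record as an extra predictor is easy to demonstrate with the MLR baseline. The sketch below fits a one-day-ahead regression on synthetic flows generated from an assumed routing relation (the data, lag structure and coefficients are invented for illustration; the study used observed Cahaba River records).

```python
import numpy as np

rng = np.random.default_rng(0)
up = rng.uniform(10, 100, 200)              # synthetic upstream flow record
down = np.empty(200)
down[0] = 50.0
for t in range(1, 200):                     # assumed routing relation
    down[t] = 0.6 * down[t - 1] + 0.4 * up[t - 1]

# Design matrix: downstream lag-1 plus upstream lag-1 (the additional
# input the study found to improve forecast accuracy)
X = np.column_stack([down[:-1], up[:-1]])
y = down[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because the synthetic relation is exactly linear, the regression recovers the routing coefficients and the RMSE is essentially zero; with real records the upstream lag simply reduces, rather than eliminates, the forecast error.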

5.
Dicumyl peroxide (DCPO), produced by the cumene hydroperoxide (CHP) process, is used as an initiator for polymerization, a prevailing source of free radicals, a hardener, and a cross-linking agent. Because of its unstable reactive nature, DCPO has caused several thermal explosion and runaway reaction accidents in reaction and storage zones in Taiwan. Differential scanning calorimetry (DSC) was used to determine thermokinetic parameters, including a heat of decomposition (ΔHd) of 700 J g–1, an exothermic onset temperature (T0) of 110 °C, and an activation energy (Ea) of 130 kJ mol–1, and to analyze the runaway behavior of DCPO in reaction and storage zones. To evaluate the thermal explosion of DCPO in storage equipment, the solid thermal explosion (STE) and liquid thermal explosion (LTE) modules of thermal safety software (TSS) were applied to simulate storage tanks under various environmental temperatures (Te); a Te exceeding the T0 of DCPO corresponds to a liquid thermal explosion situation. DCPO should be stored at room temperature away from sunlight and must not exceed its self-accelerating decomposition temperature (SADT), determined to be 67 °C for a tank (radius = 1 m, height = 2 m) and 60 °C for a box (width, length and height = 1 m each). The TSS was employed to simulate the fundamental thermal explosion behavior in a large tank or a drum. Curve fitting demonstrated that, even at the early stage of the reaction, ambient temperature could elicit exothermic reactions of DCPO. To curtail the risk, relevant hazard information is essential and must be provided in the manufacturing process.

6.
Context: An important task in civil engineering is the detection of collisions between a 3D model and a representation of its environment. Existing methods based on the structure gauge provide an insufficient measure when the model rotates or when the trajectory makes tight turns through narrow passages, as in automotive assembly lines or narrow train tunnels.
Objective: Given two point clouds, one of the environment and one of a model, and a six-degrees-of-freedom trajectory along which the model moves through the environment, find all points of the environment that collide with the model within a certain clearance radius.
Method: This paper presents two collision detection (CD) methods, kd-CD and kd-CD-simple, and two penetration depth (PD) calculation methods, kd-PD and kd-PD-fast. All four methods are based on searches in a k-d tree representation of the environment. The creation of the k-d tree, its search methods and other features are explained in the scope of their use to detect collisions and calculate penetration depths.
Results: The algorithms are benchmarked by moving the point cloud of a train wagon with 2.5 million points along a 1144 m long train track through a narrow tunnel with 18.92 million points in total. Points where the wagon collides with the tunnel wall are visually highlighted with their penetration depth. With a safety margin of 5 cm, kd-CD-simple finds all colliding points on the trajectory, sampled into 19,392 positions, in 77 s on a standard 1.6 GHz desktop machine.
Conclusion: The presented methods for collision detection and penetration depth calculation solve problems for which the structure gauge is an insufficient measure. The underlying k-d tree proves an effective data structure for the required look-up operations.
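The k-d tree look-up underlying these methods can be illustrated with a minimal pure-Python sketch (not the paper's implementation): build a 3-d tree over the environment cloud and report every environment point within a clearance radius of a query point on the model.

```python
def build_kd(points, depth=0):
    """Minimal 3-d k-d tree: node = (point, left, right, axis)."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kd(points[:mid], depth + 1),
            build_kd(points[mid + 1:], depth + 1),
            axis)

def radius_search(node, q, r, out):
    """Collect all environment points within clearance radius r of q."""
    if node is None:
        return
    p, left, right, axis = node
    if sum((a - b) ** 2 for a, b in zip(p, q)) <= r * r:
        out.append(p)
    d = q[axis] - p[axis]
    # Always descend the near side; descend the far side only if the
    # splitting plane is closer than r (standard k-d pruning)
    near, far = (left, right) if d < 0 else (right, left)
    radius_search(near, q, r, out)
    if abs(d) <= r:
        radius_search(far, q, r, out)

env = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (0.3, 0.4, 0.0)]   # toy environment
tree = build_kd(env)
hits = []
radius_search(tree, (0, 0, 0), 1.0, hits)   # model point, 1 m clearance
```

Repeating this query for every model point at every sampled trajectory position gives the basic collision test; the penetration depth variants additionally record how far each hit lies inside the clearance.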

7.
This paper presents two solution representations and the corresponding decoding methods for solving the capacitated vehicle routing problem (CVRP) using particle swarm optimization (PSO). The first solution representation (SR-1) is an (n + 2m)-dimensional particle for a CVRP with n customers and m vehicles. Its decoding method starts by transforming the particle into a priority list of customers to enter the routes and a priority matrix of vehicles to serve each customer; the vehicle routes are then constructed from the customer priority list and vehicle priority matrix. The second representation (SR-2) is a 3m-dimensional particle. Its decoding method starts by transforming the particle into vehicle orientation points and vehicle coverage radii, from which the routes are constructed. The proposed representations are applied using GLNPSO, a PSO algorithm with multiple social learning structures, and tested on benchmark problems. Computational results show that representation SR-2 is better than SR-1 and competitive with other methods for solving the CVRP.
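The priority-list idea behind SR-1 can be sketched in simplified form: particle values induce an ordering of customers, and customers enter routes in that order subject to capacity. This is a reduced illustration; the paper's full SR-1 decoding also uses a vehicle priority matrix, which is omitted here (the first free vehicle is used instead), and all numbers are invented.

```python
def decode_priority(particle, demands, capacity, m):
    """Simplified SR-1-style decoding: the first n particle dimensions
    give customer priorities (lower value = higher priority); customers
    enter routes in priority order and go to the first vehicle with
    remaining capacity."""
    n = len(demands)
    order = sorted(range(n), key=lambda c: particle[c])  # priority list
    routes = [[] for _ in range(m)]
    load = [0.0] * m
    for c in order:
        for v in range(m):
            if load[v] + demands[c] <= capacity:
                routes[v].append(c)
                load[v] += demands[c]
                break
    return routes

routes = decode_priority(
    particle=[0.9, 0.1, 0.5, 0.3],   # hypothetical particle position
    demands=[4, 3, 5, 2], capacity=9, m=2)
```

In a PSO loop, each particle would be decoded this way, the resulting routes evaluated for total distance, and the particle velocities updated toward the best-known decodings.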

8.
In this paper, the dynamic behavior of a non-linear eight-degrees-of-freedom vehicle model with active suspensions and a passenger seat controlled by a neural network (NN) controller is examined. A robust NN structure is established using principal design data from the Matlab diagrams of the system functions, and the classic back-propagation algorithm (CBA) is employed. The user supplies a set of inputs x1–x16, while the NN outputs the non-linear functions f1–f16. The permanent magnet synchronous motor (PMSM) controller is also determined using the same NN structure. Various tests of the NN structure demonstrate that the model gives highly sensitive outputs for the vibration condition, even with a more restricted input data set. The non-linearity arises from dry friction in the dampers. The vehicle body and the PMSM-actuated passenger seat are fully controlled at the same time. The time responses of the non-linear vehicle model to road disturbance and the frequency responses are obtained. Finally, the uncontrolled and controlled cases are compared, showing that seat vibrations of the non-linear full vehicle model are effectively suppressed by the NN-based system.

9.
The passenger's perception of an airport's level of service (LOS) may have a significant impact on promoting or discouraging future tourism and business activities. In this study we examine this problem, but unlike traditional statistical analysis we apply a new method, the dominance-based rough set approach (DRSA), to an airport service survey. A set of "if … then …" decision rules is used in the preference model, with passengers indicating their perception of airport LOS by rating a set of criteria/attributes. The proposed method provides practical information that should help airport planners, designers, operators, and managers develop LOS improvement strategies. The model was implemented using survey data from a large sample of customers of an international airport in Taiwan.

10.
Context: Defect prediction research mostly focuses on optimizing the performance of models constructed for isolated projects (i.e. within project (WP)) through retrospective analyses. Recent studies try to utilize data across projects (i.e. cross project (CP)) for building defect prediction models for new projects, but there are no cases where a combination of within and cross project (i.e. mixed) data is used.
Objective: Our goal is to investigate the merits of using mixed project data for binary defect prediction. Specifically, we want to check whether it is feasible, in terms of defect detection performance, to use data from other projects (i) when there is an existing within project history and (ii) when within project data are limited.
Method: We use data from 73 versions of 41 publicly available projects. We simulate the two above-mentioned cases and compare the performances of naive Bayes classifiers trained on within project data vs. mixed project data.
Results: For the first case, we find that the performance of mixed project predictors significantly improves over full within project predictors (p-value < 0.001), but the effect size is small (Hedges' g = 0.25). For the second case, we find that mixed project predictors are comparable to full within project predictors while using only 10% of the available within project data (p-value = 0.002, g = 0.17).
Conclusion: We conclude that the extra effort of collecting data from other projects is not justified by practical performance improvement when there is already an established within project defect predictor using the full project history. However, when project history is limited, e.g. in early phases of development, mixed project predictions are justifiable as they perform as well as full within project models.
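The core experimental step, training a naive Bayes classifier on within-project rows augmented with cross-project rows, can be sketched with a tiny Gaussian naive Bayes (the classifier family used in the study; the implementation and the one-feature toy data below are illustrative, not the study's setup).

```python
import math

def fit_gnb(X, y):
    """Tiny Gaussian naive Bayes: per class, store prior and
    per-feature (mean, variance)."""
    model = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        prior = len(rows) / len(X)
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            stats.append((mu, var))
        model[label] = (prior, stats)
    return model

def predict(model, x):
    def log_like(label):
        prior, stats = model[label]
        s = math.log(prior)
        for v, (mu, var) in zip(x, stats):
            s += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        return s
    return max(model, key=log_like)

# Within-project (WP) rows augmented with cross-project (CP) rows
# (hypothetical single-metric data; 1 = defective):
wp_X, wp_y = [[1.0], [1.2], [5.0], [5.2]], [0, 0, 1, 1]
cp_X, cp_y = [[0.8], [5.1]], [0, 1]
mixed = fit_gnb(wp_X + cp_X, wp_y + cp_y)
```

The study's comparison amounts to evaluating `fit_gnb(wp_X, wp_y)` against `fit_gnb(wp_X + cp_X, wp_y + cp_y)` on held-out within-project versions.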

11.
We present a validation strategy for enhancing an unstructured industrial finite-volume solver, designed for steady RANS problems, towards large-eddy-type simulation with near-wall modelling of incompressible high Reynolds number flow. Different parts of the projection-based discretisation are investigated to ensure the LES capability of the numerical method. Turbulence model parameters are calibrated by minimising least-squares functionals for first- and second-order statistics of the basic benchmark problems of decaying homogeneous turbulence and turbulent channel flow. The method is then applied to the flow over a backward-facing step at Reh = 37,500, with special attention to the role of spatial and temporal discretisation errors for low-order schemes. For wall-bounded flows, the present results confirm existing best-practice guidelines for mesh design. For free shear layers, a sensor quantifying the resolution quality of the LES, based on the resolved turbulent kinetic energy, is presented and applied to the same backward-facing step flow.

12.
Accurate assessment of phytoplankton chlorophyll-a (chla) concentrations in turbid waters by means of remote sensing is challenging due to the optical complexity of case 2 waters. We have applied a recently developed model of the form [Rrs−1(λ1) − Rrs−1(λ2)] × Rrs(λ3), where Rrs(λi) is the remote-sensing reflectance at wavelength λi, for the estimation of chla concentrations in turbid waters. The objectives of this paper are (a) to validate the three-band model as well as its special case, the two-band model Rrs−1(λ1) × Rrs(λ3), using datasets collected over a considerable range of optical properties, trophic status, and geographical locations in turbid lakes, reservoirs, estuaries, and coastal waters, and (b) to evaluate the extent to which the three-band model could be applied to the Medium Resolution Imaging Spectrometer (MERIS) and the two-band model to the Moderate Resolution Imaging Spectroradiometer (MODIS) to estimate chla in turbid waters. The three-band model was calibrated and validated using three MERIS spectral bands (660–670 nm, 703.75–713.75 nm, and 750–757.5 nm), and the two-band model was tested using two MODIS spectral bands (λ1 = 662–672 nm, λ3 = 743–753 nm). We assessed the accuracy of chla prediction in four independent datasets without re-parameterization (adjustment of the coefficients) after the initial calibration elsewhere. Although the validation data set contained widely variable chla (1.2 to 236 mg m−3), Secchi disk depth (0.18 to 4.1 m), and turbidity (1.3 to 78 NTU), chla predicted by the three-band algorithm was strongly correlated with observed chla (r2 > 0.96), with a precision of 32% and average bias across data sets of −4.9% to 11%. Chla predicted by the two-band algorithm was also closely correlated with observed chla (r2 > 0.92); however, the precision declined to 57%, and the average bias across the data sets was 18% to 50.3%. These findings imply that, provided an atmospheric correction scheme for the red and NIR bands is available, the extensive database of MERIS and MODIS imagery could be used for quantitative monitoring of chla in turbid waters.
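The two band-ratio indices are simple algebraic combinations of reflectances and can be sketched directly; the Rrs values below are hypothetical, and the final step from index to chla (a linear calibration fitted to match-up data) is only noted in a comment.

```python
def three_band_index(rrs1, rrs2, rrs3):
    """[Rrs^-1(l1) - Rrs^-1(l2)] * Rrs(l3); chla is then obtained from a
    linear calibration chla = a * index + b fitted elsewhere."""
    return (1.0 / rrs1 - 1.0 / rrs2) * rrs3

def two_band_index(rrs1, rrs3):
    """Special case Rrs^-1(l1) * Rrs(l3), i.e. the NIR/red band ratio."""
    return rrs3 / rrs1

# Hypothetical reflectances (sr^-1) in the red, red-edge and NIR bands
idx3 = three_band_index(0.004, 0.010, 0.008)
idx2 = two_band_index(0.004, 0.008)
```

The red band (λ1) is maximally sensitive to chla absorption, the red-edge band (λ2) corrects for absorption by other constituents, and the NIR band (λ3) normalises for backscattering, which is why the three-band form is more robust in turbid water.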

13.
14.
15.
It is widely recognized that effective ranking methods for relational data (e.g., tuples) enable users to overcome the limitations of the traditional Boolean retrieval model and the hardness of structured query writing. To determine the rank of a tuple, term frequency-based methods, such as tf × idf (term frequency × inverse document frequency) schemes, have been commonly adopted in the literature by simply considering a tuple as a single document. However, we have noted that, in many cases, tf × idf schemes may not produce effective rankings or specific orderings for relational data with categorical attributes, which are pervasive today. To support fundamental aspects of relational data, we apply the notions of correlation analysis to estimate the extent of relationships between queries and data. This paper proposes a probabilistic ranking model to exploit statistical relationships that exist in relational data with categorical attributes. Given a set of query terms, information on attribute values correlated with the query terms is used to estimate the relevance of a tuple to the query. To quantify this information, we compute the extent of the dependency between correlated attribute values on a Bayesian network. Moreover, we avoid the prohibitive cost of computing insignificant ranking features through a limited assumption of node independence. Our probabilistic ranking model is domain-independent and leverages only data statistics, without any prior knowledge such as user query logs. Experimental results show that our work improves the effectiveness of rankings for real-world datasets and has reasonable query processing efficiency compared to related work.
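The basic ingredient of such a model, a value-to-value dependency estimated from table statistics, can be sketched as a conditional frequency. This is a simplified stand-in for the paper's Bayesian-network computation; the table and attribute names are invented.

```python
from collections import Counter

def cooccurrence_score(tuples, query_attr, query_val, target_attr):
    """Estimate P(target value | query value) from table counts -- the
    kind of value-level dependency a Bayesian-network ranker exploits."""
    matching = [t for t in tuples if t[query_attr] == query_val]
    counts = Counter(t[target_attr] for t in matching)
    total = len(matching)
    return {v: c / total for v, c in counts.items()}

cars = [
    {"make": "Toyota", "type": "hybrid"},
    {"make": "Toyota", "type": "hybrid"},
    {"make": "Toyota", "type": "sedan"},
    {"make": "Ford", "type": "truck"},
]
probs = cooccurrence_score(cars, "make", "Toyota", "type")
```

A tuple whose attribute values are strongly correlated with the query terms (here, `type = hybrid` given the query term `Toyota`) would receive a higher relevance score than one with weakly correlated values, even when plain tf × idf cannot distinguish them.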

16.
Multi-temporal C-band SAR data (C-HH and C-VV), collected by the ERS-2 and ENVISAT satellite systems, are compared with field observations of hydrology (i.e., inundation and soil moisture) and National Wetland Inventory maps (U.S. Fish and Wildlife Service) of a large forested wetland complex adjacent to the Patuxent and Middle Patuxent Rivers, tributaries of the Chesapeake Bay. Multi-temporal C-band SAR data were shown to be capable of mapping forested wetlands and monitoring hydroperiod (i.e., temporal fluctuations in inundation and soil moisture) at the study site, and the discrimination of wetland from upland was improved with 10 m digital elevation data. Principal component analysis was used to summarize the multi-temporal SAR data sets and to isolate the dominant temporal trend in inundation and soil moisture (i.e., relative hydroperiod). Significant positive, linear correlations were found between the first principal component and percent area flooded and soil moisture. The correlation (r2) between the first principal component (PC1) of the multi-temporal C-HH SAR data and average soil moisture was 0.88 (p < .0001) during the leaf-off season and 0.87 (p < .0001) during the leaf-on season, while the correlation between PC1 and average percent area inundated was 0.82 (p < .0001) and 0.47 (p = .0016) during the leaf-off and leaf-on seasons, respectively. When compared to field data, the SAR forested wetland maps identified areas that were flooded 25% of the time with 63–96% agreement, and areas flooded 5% of the time with 44–89% agreement, depending on polarization and time of year. The results are encouraging and justify further studies to quantify the relative SAR-derived hydroperiod classes in terms of physical variables and to test the application of SAR data to more diverse landscapes at a broader scale. The present evidence suggests that SAR data will significantly improve routine wooded wetland mapping.

17.
《Displays》2006,27(3):108-111
In this paper, the relationship between the exciton recombination zone and applied voltage in organic light-emitting diodes (OLEDs) ITO/NPB (40 nm)/Alq3 (w nm)/rubrene (3 nm)/Alq3 ((50−w) nm)/Al, in which a 3 nm rubrene sensing layer is inserted into the Alq3 layer at different depths, is studied. By comparing the electroluminescence (EL) spectra of devices driven at different applied voltages, we conclude that the recombination zone shifts logarithmically with increasing applied voltage.

18.
Urbanization-related alterations to the surface energy balance impact urban warming ('heat islands'), the growth of the boundary layer, and many other biophysical processes. Traditionally, in situ heat flux measurements have been used to quantify such processes, but these typically represent only a small local-scale area within the heterogeneous urban environment. Remote sensing approaches are therefore very attractive for eliciting more spatially representative information. Here we use hyperspectral imagery from a new airborne sensor, the Operative Modular Imaging Spectrometer (OMIS), along with a survey map and meteorological data, to derive the land cover information and surface parameters required to map spatial variations in turbulent sensible heat flux (QH). The results from two spatially explicit flux retrieval methods, which use contrasting approaches and, to a large degree, different input data, are compared for a central urban area of Shanghai, China: (1) the Local-scale Urban Meteorological Parameterization Scheme (LUMPS) and (2) an Aerodynamic Resistance Method (ARM). Sensible heat fluxes are determined at the full 6 m spatial resolution of the OMIS sensor, and at lower resolutions via pixel aggregation and spatial averaging. At the 6 m resolution, the sensible heat flux of rooftop-dominated pixels exceeds that of roads, water and vegetated areas, peaking at ~350 W m−2, whilst the storage heat flux is greatest for road-dominated pixels (peaking at around 420 W m−2). We investigate the use of both OMIS-derived land surface temperatures obtained with a Temperature–Emissivity Separation (TES) approach and land surface temperatures estimated from air temperature measurements. Sensible heat flux differences between the two approaches over the entire 2 × 2 km study area are less than 30 W m−2, suggesting that methods employing either strategy may be practical when operated with low spatial resolution (e.g. 1 km) data. Due to the differing methodologies, direct comparisons between the LUMPS and ARM results are most sensibly made at reduced spatial scales. At 30 m spatial resolution, both approaches produce similar results, with a difference of less than 15 W m−2 in mean QH averaged over the entire study area. This is encouraging given the differing architecture and data requirements of the LUMPS and ARM methods. Furthermore, in terms of mean study-area QH, the results obtained by averaging the original 6 m LUMPS-derived QH values to 30 and 90 m spatial resolution are within ~5 W m−2 of those derived by averaging the original surface parameter maps prior to input into LUMPS, suggesting that the use of much lower spatial resolution spaceborne imagery, for example from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), is likely to be a practical solution for heat flux determination in urban areas.
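The aerodynamic-resistance idea behind the ARM can be sketched with the generic bulk-transfer form of the sensible heat flux; this is a textbook simplification, not the paper's full scheme, and all input values are assumed typical numbers rather than study data.

```python
def sensible_heat_flux(rho, cp, ts, ta, ra):
    """Bulk-transfer (aerodynamic resistance) form of the sensible heat
    flux: QH = rho * cp * (Ts - Ta) / ra, in W m-2."""
    return rho * cp * (ts - ta) / ra

# Assumed mid-day urban values: air density 1.2 kg m-3, specific heat
# 1004 J kg-1 K-1, a 5 K surface-air temperature difference (e.g. from
# a TES-derived surface temperature), aerodynamic resistance 20 s m-1
qh = sensible_heat_flux(1.2, 1004.0, 305.0, 300.0, 20.0)
```

The sensitivity to the surface-air temperature difference is what makes the choice between TES-derived and air-temperature-estimated surface temperatures matter, and the resistance term is where land cover information (roughness, displacement height) enters.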

19.
Strong ties play a crucial role in transmitting sensitive information in social networks, especially in the criminal justice domain. However, large social networks containing many entities and relations may also contain a large amount of noisy data, so identifying strong ties accurately and efficiently within such a network poses a major challenge. This paper presents a novel approach to address the noise problem. We transform the original social network graph into a relation context-oriented edge-dual graph by adding new nodes to the original graph based on abstracting the relation contexts from the original edges (relations). We then compute the local k-connectivity between two given nodes, which yields a measure of the robustness of the relations. To evaluate the correctness and efficiency of this measure, we implemented a system integrating a total of 450 GB of data from several different data sources. The discovered social network contains 4,906,460 nodes (individuals) and 211,403,212 edges. Our experiments are based on 700 co-offenders involved in robbery crimes. The experimental results show that most strong ties are formed with k ≥ 2.
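The edge-dual transformation itself is straightforward to sketch: every original edge becomes a node, and two edge-nodes are joined when the underlying edges share an endpoint. This is a minimal structural sketch only; the paper's version additionally attaches relation-context nodes, which are omitted here, and the toy network is invented.

```python
from itertools import combinations

def edge_dual(edges):
    """Build a basic edge-dual graph: each original edge becomes a node;
    edge-nodes are adjacent iff the original edges share an endpoint."""
    nodes = list(edges)
    adj = {e: set() for e in nodes}
    for e1, e2 in combinations(nodes, 2):
        if set(e1) & set(e2):          # shared endpoint in the original graph
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

# A tiny co-offender chain: a-b, b-c, c-d
dual = edge_dual([("a", "b"), ("b", "c"), ("c", "d")])
```

Local k-connectivity is then computed between nodes of this transformed graph; relations that remain connected after removing k − 1 intermediate nodes are the robust (strong) ties.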

20.
