Similar Documents
20 similar documents found (search time: 78 ms)
1.
Computers & Geosciences, 2006, 32(4): 512–526
Volcanic hazard assessment is of paramount importance for safeguarding the resources exposed to volcanic hazards. In this paper we present ELFM, a lava flow simulation model for evaluating the lava flow hazard on Mount Etna (Sicily, Italy), the most important active volcano in Europe. The major contributions of the paper are: (a) a detailed specification of the lava flow simulation model and of an algorithm implementing it; (b) the definition of a methodological framework for applying the model to this specific volcano. Regarding the former, we propose an extended version of an existing stochastic model that has so far been applied only to volcanic hazard assessment on Lanzarote and Tenerife (Canary Islands). Regarding the methodological framework, we argue that model validation is essential for assessing the effectiveness of the lava flow simulation model. To that end, a strategy has been devised for generating simulation experiments and evaluating their outcomes.
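Stochastic lava flow models of the kind referenced above choose each new cell of the flow path probabilistically, weighted by the topographic drop. A minimal, illustrative Python sketch of such a maximum-slope random walk on a DEM (the function name, 4-neighbour stencil, and drop-proportional weighting are assumptions for illustration, not ELFM's actual specification):

```python
import random

def simulate_lava_path(dem, start, steps=100, seed=0):
    """Random-walk lava path on a DEM (list of lists of heights):
    at each step, move to a lower 4-neighbour chosen with probability
    proportional to the height drop; stop at a local minimum."""
    rng = random.Random(seed)
    r, c = start
    path = [start]
    for _ in range(steps):
        nbrs = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < len(dem) and 0 <= c + dc < len(dem[0])]
        lower = [(dem[r][c] - dem[nr][nc], (nr, nc))
                 for nr, nc in nbrs if dem[nr][nc] < dem[r][c]]
        if not lower:
            break  # no downhill neighbour: the flow front stops
        total = sum(drop for drop, _ in lower)
        pick = rng.uniform(0, total)
        acc = 0.0
        for drop, cell in lower:
            acc += drop
            if pick <= acc:
                r, c = cell
                break
        path.append((r, c))
    return path
```

Running many such walks from the same vent and accumulating the visited cells yields an inundation probability map, which is the kind of output a hazard assessment needs.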

2.
Mount Erebus (Antarctica) is a remote and inhospitable volcano, where field campaigns are possible only during the austral summer. In addition to continuously monitoring seismic instruments and video cameras, data from scanners flown aboard polar-orbiting spacecraft, such as the Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR), can contribute to continuous, year-round monitoring of this volcano. Together these data allow measurement of the temperature of, thermal and gas flux from, and mass flux to a persistently active lava lake at Erebus' summit. The monitoring potential of such polar-orbiting instruments is enhanced by the poleward convergence of sub-spacecraft ground-tracks at the Erebus latitudes, permitting more frequent return periods than at the equator. Ground-based observations show that the Erebus lava lake was active with an area of ~2800 m² and a sulphur dioxide (SO₂) flux of 230 ± 90 t d⁻¹ prior to September 1984. AVHRR-based lake area and SO₂ flux estimates are in good agreement with these measurements, giving 2320 ± 1200 m² and 190 ± 100 t d⁻¹, respectively, during 1980. However, during late 1984 the lava lake became buried, with TM data showing re-establishment of the lake, with a TM-derived surface temperature of 578–903 °C, by January 1985. Following these events, ground-based measurements show that the lake area and SO₂ flux were lower (180–630 m² and 9–91 t d⁻¹, respectively). This is matched by a decline in the AVHRR- and TM-derived rate of magma supply to the lake, from 330 ± 167 kg s⁻¹ prior to 1984 to 30–76 kg s⁻¹ thereafter. Clearly, a reduction in magma supply to, and activity at, the lava lake occurred during 1984.
We look forward to using data from future polar-orbiting sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+) and Advanced Along-Track Scanning Radiometer (AATSR) to contribute to high temporal resolution (once a day) measurement and monitoring of activity at this volcano. Such analyses will in turn contribute to a more complete understanding of how this volcano works.

3.
Thermal data are directly available from the Geostationary Operational Environmental Satellites (GOES) every 15 minutes at existing or inexpensively installed receiving stations. This data stream is ideal for monitoring high-temperature features such as active lava flows and fires. To provide a near-real-time hot spot monitoring tool, we have developed, tested and installed software to analyse GOES data on reception and then make the results available in a timely fashion via the web. Our software automatically: (1) produces hot spot images and movies; (2) uses a thresholding procedure to generate a hot spot map; (3) updates hot spot radiance and cloud index time series; and (4) issues threshold-based e-mail alerts. Results are added to http://volcano1.pgd.hawaii.edu/goes/ within ~12 minutes of image acquisition and are updated every 15 minutes. Analysis of GOES data acquired for effusive activity at Kilauea volcano (Hawai'i) during 1997–98 shows that short (<1 hour long) events producing 100 m long (10² to 10³ m²) lava flows are detectable. This means that time constraints can be placed on sudden, rapidly evolving effusive events with an accuracy of 7.5 minutes. Changes in activity style and extent can also be documented using hot spot size, intensity and shape. From radiance time series we distinguish (1) tube-fed activity (low radiance, <10 MW m⁻² μm⁻¹); (2) activity pauses (no radiance); (3) lava lake activity (low radiance, <5 MW m⁻² μm⁻¹); (4) short (<3 km long) flow extension (moderate radiance, 10–20 MW m⁻² μm⁻¹); and (5) 12 km long flow extension (high radiance, 15–30 MW m⁻² μm⁻¹). The ability of GOES to detect short-lived effusive events, coupled with the speed with which GOES-based hot spot information can be processed and disseminated, means that GOES offers a valuable additional volcano monitoring tool.
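The thresholding step in (2) can be sketched as a simple scene-relative cutoff. The mean-plus-n-sigma statistic below is an illustrative assumption, not the operational GOES hot spot algorithm:

```python
import statistics

def hot_spot_mask(radiance, n_sigma=2.0):
    """Flag pixels whose radiance exceeds the scene mean by n_sigma
    population standard deviations -- a toy stand-in for the
    thresholding that turns a GOES image into a hot spot map."""
    flat = [v for row in radiance for v in row]
    mu = statistics.mean(flat)
    sigma = statistics.pstdev(flat)
    cutoff = mu + n_sigma * sigma
    return [[v > cutoff for v in row] for row in radiance]
```

In practice a real detector would also mask clouds and use band differences rather than a single band, but the scene-relative cutoff conveys the idea.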

4.
A study of the remote Dubbi volcano, located in the northeastern part of the Afar triangle, Eritrea, was carried out using JERS-1 Synthetic Aperture Radar (SAR) and Landsat Thematic Mapper (TM) imagery. It investigated the last known eruption of Dubbi volcano, in 1861, the only volcano in Afar for which historical reports indicate a major explosive eruption. Various image processing techniques were tested and compared in order to map different volcanic units, including effusive and explosive products. Principal component analysis and optical-SAR fusion were found to be useful for determining the extent of the 1861 pumice deposits surrounding the volcano. SAR imagery revealed old lava flows buried below tephra deposits, emphasizing the ground-penetrating property of the L-band (HH polarization). The interpretation obtained from satellite imagery was cross-checked with sparse historical testimonies and available ground-truth data. Two scenarios are proposed for the 1861 eruptive sequences in order to estimate the volumes of lava flows erupted and the timing of explosive and effusive activity. Identified as a bimodal basaltic-trachytic eruption, with a minimum volume of 1.2 km³ of hawaiite lava and a minimum area of 70 km² of trachytic pumice, it represents the largest known historic eruption in the Afar triangle. This paper raises the issue of the potential volcanic hazards posed by Dubbi, which concern both the local population and the maritime traffic using the strategic route of the Red Sea.

5.
The Internet now harbours vast amounts of cheap and potentially useful remote sensing data. Advanced Very High Resolution Radiometer (AVHRR) data are being increasingly used for volcano surveillance, and the provision of AVHRR Global Area Coverage (GAC) imagery at no cost over the Internet offers the possibility of cheap volcano monitoring on a global scale. Herein we use an extensive, 690-scene AVHRR GAC dataset to observe volcanic activity in the Indonesian island arc between January 1996 and November 1997. Indonesia contains over 70 active volcanoes, with styles of activity during the observation period including active lava domes, lava flows, pyroclastic flows and hot crater lakes, many in close proximity to major centres of population. The detection potential of these and other phenomena in GAC data is assessed. Thermal anomalies were identified at ~18 volcanoes during the observation period, including lava flows at Anak Krakatau, persistent open-vent activity at Semeru and a previously unreported eruption at Sangeang Api volcano. Using these results, a classification scheme for night-time Indonesian GAC data is presented. Routine use of freely available high temporal resolution data such as AVHRR GAC could help elucidate cyclic activity at active volcanoes, which would contribute significantly to hazard mitigation in affected areas. Browse images of higher resolution data (e.g. SPOT) from the daily updated archives of the Centre for Remote Imaging, Sensing and Processing (CRISP) in Singapore also show potential as an aid to volcano monitoring in the region.

6.
In recent years, the GPU (graphics processing unit) has evolved into an extremely powerful and flexible processor, and it now represents an attractive platform for general-purpose computation. Moreover, changes to the design and programmability of GPUs provide the opportunity to perform general-purpose computation on a GPU (GPGPU). Even though many programming languages, software tools, and libraries have been proposed to facilitate GPGPU programming, the unusual and specific programming model of the GPU remains a significant barrier to writing GPGPU programs. In this paper, we introduce a novel compiler-based approach to GPGPU programming. Compiler directives are used to label code fragments that are to be executed on the GPU. Our GPGPU compiler, Guru, converts the labeled code fragments into ISO-compliant C code that contains the appropriate OpenGL and Cg API calls. A native C compiler can then be used to compile this into executable code for the GPU. Our compiler is implemented on top of the Open64 compiler infrastructure. Preliminary experimental results from selected benchmarks show that our compiler produces significant performance improvements for programs that exhibit a high degree of data parallelism.
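The first step such a directive-based compiler performs is locating the labeled fragments in the source. A minimal sketch of that scan, in Python for readability; the `GPU_BEGIN`/`GPU_END` marker names are illustrative, not Guru's actual directive syntax:

```python
def extract_gpu_fragments(source):
    """Collect code fragments delimited by GPU_BEGIN / GPU_END
    directive comments -- the labelling pass a directive-based GPGPU
    compiler runs before generating GPU code for each fragment."""
    fragments, current = [], None
    for line in source.splitlines():
        stripped = line.strip()
        if stripped == "// GPU_BEGIN":
            current = []            # start recording a fragment
        elif stripped == "// GPU_END":
            fragments.append("\n".join(current))
            current = None          # fragment complete
        elif current is not None:
            current.append(line)
    return fragments
```

Each recovered fragment would then be handed to the code generator, which in Guru's case emits ISO C with OpenGL/Cg calls.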

7.
LiDAR (Light Detection and Ranging) is a novel and very useful active remote sensing technique that can directly identify geomorphological features as well as the properties of materials on the ground surface. In this work, LiDAR data were applied to the study of the Stromboli volcano in Italy. LiDAR data points, collected during a survey in October 2005, were used to generate a Digital Elevation Model (DEM) and a calibrated intensity map of the ground surface. The DEM, derived maps and topographic cross-sections were used to complete a geomorphological analysis of Stromboli, which led to the identification of four main geomorphological domains linked to major volcanic cycles. Moreover, we investigated and documented the potential of LiDAR intensity data for distinguishing and characterizing different volcanic products, such as fallout deposits, epiclastic sediments and lava flows.

8.
In the field of wildfire risk management, so-called burn probability maps (BPMs) are increasingly used to estimate the probability of each point of a landscape being burned under certain environmental conditions. Such BPMs are usually computed through the explicit simulation of thousands of fires using fast and accurate models. However, even with the most optimized algorithms, building simulation-based BPMs for large areas is a highly intensive computational process that makes the use of high-performance computing mandatory. In this paper, General-Purpose Computation on Graphics Processing Units (GPGPU) is applied, in conjunction with a wildfire simulation model based on the Cellular Automata approach, to the process of BPM building. Using three different GPGPU devices, the paper illustrates several implementation strategies to speed up the overall mapping process and discusses some numerical results obtained on a real landscape.
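The simulation-based BPM construction described above can be sketched as follows, with a toy uniform-probability spread rule standing in for the paper's cellular-automata wildfire model (all names and parameters are illustrative assumptions):

```python
import random

def burn_probability_map(rows, cols, n_fires=1000, spread=0.25, seed=1):
    """Monte-Carlo burn-probability map: ignite a random cell, spread
    the fire to 4-neighbours with a fixed probability until the front
    dies out, and record, for every cell, the fraction of simulated
    fires in which it burned."""
    rng = random.Random(seed)
    counts = [[0] * cols for _ in range(rows)]
    for _ in range(n_fires):
        frontier = [(rng.randrange(rows), rng.randrange(cols))]
        burned = set(frontier)
        while frontier:
            nxt = []
            for r, c in frontier:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in burned
                            and rng.random() < spread):
                        burned.add((nr, nc))
                        nxt.append((nr, nc))
            frontier = nxt
        for r, c in burned:
            counts[r][c] += 1
    return [[counts[r][c] / n_fires for c in range(cols)]
            for r in range(rows)]
```

Because the thousands of fires are mutually independent, the outer loop is embarrassingly parallel, which is exactly what makes the GPGPU mapping in the paper attractive.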

9.
The MAGFLOW cellular automata (CA) model reproduced the timing of lava flow advance during the 2006 Etna eruption fairly accurately, leading to very plausible flow predictions. MAGFLOW is intended for use in emergency response situations during an eruption, to quickly forecast the lava flow path over time intervals ranging from the immediate future to a long-term forecast. Major discrepancies between the observed and simulated paths occurred in the early phase of the 2006 eruption, due to an underestimation of the initial flow rate, and when the flow overlapped the 2004–2005 lava flow. Very good representations of the areas likely to be inundated by lava flows were obtained when we adopted a time-varying effusion rate and included the 2004–2005 lava flow field in the Digital Elevation Model (DEM) of the topography.

10.
General-purpose computing on graphics processing units (GPGPU) is becoming increasingly popular due to its high computational throughput for data-parallel applications. Modern GPU architectures have limited capability for error detection and fault tolerance, since they were originally designed for graphics processing. However, rigorous execution correctness is required for general-purpose applications, which makes reliability a growing concern in GPGPU architecture design. With CMOS processing technologies continuously scaling down to the nano-scale, the on-chip soft error rate (SER) has been predicted to increase exponentially. GPGPUs, with hundreds of cores integrated into a single chip, are prone to high SER. This paper takes a first step toward modeling and characterizing GPGPU reliability in light of soft errors. We develop GPGPU-SODA (GPGPU SOftware Dependability Analysis), a framework to estimate the soft-error vulnerability of GPGPU microarchitecture. Using GPGPU-SODA, we observe that several microarchitecture structures in GPGPUs exhibit high soft-error susceptibility, and that structure vulnerability is sensitive to workload characteristics (e.g. branch divergence, memory access pattern). We further investigate the impact of several architectural optimizations on GPU soft-error robustness. For example, we find that increasing the number of threads supported by the GPU significantly affects GPGPU soft-error robustness, whereas changing the warp scheduling policy has little impact on structure vulnerability. The observations made in this study provide designers with useful guidance for building resilient GPGPUs: a comprehensive resiliency solution for GPGPUs should consider the entire GPGPU design instead of focusing solely on a particular structure.

11.
General-purpose computing on graphics processing units (GPGPU) has been adopted to accelerate applications with long execution times in various problem domains. Tabu Search, a meta-heuristic optimization method, is used to find suboptimal solutions to NP-hard problems within a more reasonable time. In this paper, we investigate how to improve the performance of the Tabu Search algorithm on a GPGPU, taking the permutation flow shop scheduling problem (PFSP) as our case study. In a previous approach, proposed recently for solving the PFSP by Tabu Search on a GPU, all job permutations are stored in global memory to eliminate occurrences of branch divergence. However, that algorithm requires a large amount of global memory space, and the resulting volume of global memory accesses degrades system performance. We propose a new approach to address this problem. The main contribution of this paper is an efficient multiple-loop structure that generates most of each permutation on the fly, which decreases the size of the permutation table and significantly reduces the amount of global memory access. Computational experiments on problems from a standard PFSP benchmark suite reveal that the best performance improvement of our approach is about 100% compared with the previous work.
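The idea of generating permutations on the fly rather than materialising a table can be sketched as a nested-loop generator over a Tabu Search neighbourhood (shown in Python for readability; the paper targets GPU kernels, and the choice of the insertion neighbourhood here is an illustrative assumption):

```python
def insertion_neighbours(perm):
    """Yield the insertion neighbourhood of a job permutation one move
    at a time: remove the job at position i and re-insert it at
    position j. Generating moves lazily from the two loop indices
    avoids storing the whole permutation table in memory."""
    n = len(perm)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p = list(perm)
            job = p.pop(i)      # remove job at position i
            p.insert(j, job)    # re-insert it at position j
            yield tuple(p)
```

On a GPU, each thread would reconstruct its own (i, j) move from its thread index in exactly this way, trading a little recomputation for far fewer global memory accesses.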

12.
Detecting and localizing abnormal events in crowded scenes remains a challenging task in the computer vision community. An unsupervised framework is proposed in this paper to address the problem. Low-level features and optical flows (OF) of video sequences are extracted to represent motion information in the temporal domain. Moreover, abnormal events usually occur in local regions and are closely linked to their surrounding areas in the spatial domain. To extract high-level information from local regions and model the relationships in the spatial domain, the first step is to calculate optical flow maps and divide them into a set of non-overlapping sub-maps. Next, corresponding PCANet models are trained using the sub-maps at the same spatial location in the optical flow maps. Based on the block-wise histograms extracted by the PCANet models, a set of one-class classifiers is trained to predict the anomaly scores of test frames. The framework is completely unsupervised because it utilizes only normal videos. Experiments were carried out on the UCSD Ped2 and UMN datasets, and the results show competitive performance of this framework compared with other state-of-the-art methods.
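The region-splitting step described above (dividing an optical flow map into non-overlapping sub-maps, one per spatial location) can be sketched as follows; the function name and dict layout are illustrative, and the map dimensions are assumed divisible by the block size:

```python
def split_into_submaps(flow_map, block):
    """Divide a 2-D optical-flow magnitude map into non-overlapping
    block x block sub-maps, keyed by the sub-map's grid position.
    One per-location model (e.g. a PCANet) can then be trained on the
    sub-maps sharing the same key across frames."""
    rows, cols = len(flow_map), len(flow_map[0])
    submaps = {}
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            submaps[(r0 // block, c0 // block)] = [
                row[c0:c0 + block] for row in flow_map[r0:r0 + block]
            ]
    return submaps
```

Keying by grid position is what lets the framework model each local region with its own classifier while keeping the regions spatially aligned across frames.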

13.
In this work, we parallelize the single-level Fast Multipole Method (FMM) for solving acoustic-scattering problems (using the Helmholtz equation) on distributed-memory GPGPU systems. With the aim of enlarging the scope of feasible simulations, the presented solution combines the techniques developed for our distributed-memory CPU solver with those of our shared-memory GPGPU solver. The performance of the developed solution is demonstrated using two different GPGPU clusters: the first consists of two workstations with NVIDIA GTX 480 GPUs linked by a Gigabit Ethernet network, and the second comprises four nodes with NVIDIA Tesla M2090 GPUs linked by an InfiniBand network.

14.
New parallel objective function determination methods for the job shop scheduling problem are proposed in this paper, considering the makespan and total job execution time criteria; the proposed methods can, however, also be applied to other popular objective functions such as job tardiness or flow time. The Parallel Random Access Machine (PRAM) model is applied for the theoretical analysis of algorithm efficiency. The methods require fine-grained parallelization, so the proposed approach is especially suited to parallel computing systems with fast shared memory (e.g. GPGPU, General-Purpose computing on Graphics Processing Units).
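The makespan criterion mentioned above can be illustrated with the standard completion-time recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][job]. The sketch below uses the simpler permutation flow shop variant of this recurrence for clarity (the paper itself treats the more general job shop):

```python
def flowshop_makespan(p, perm):
    """Makespan of a permutation flow shop schedule.
    p[i][k] is the processing time of job k on machine i; perm is the
    job order. C[i][j] is the completion time of the j-th scheduled
    job on machine i: it can start only after the same job finishes
    on the previous machine and the previous job frees this machine."""
    m, n = len(p), len(perm)
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j, job in enumerate(perm):
            up = C[i - 1][j] if i > 0 else 0.0    # job done on machine i-1
            left = C[i][j - 1] if j > 0 else 0.0  # machine i freed
            C[i][j] = max(up, left) + p[i][job]
    return C[-1][-1]
```

The anti-diagonals of C are mutually independent, which is the kind of fine-grained parallelism the PRAM analysis in the paper exploits.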

15.
Cellular automata simulation of urban dynamics through GPGPU
In recent years, urban models based on Cellular Automata (CA) have become increasingly sophisticated and are being applied to real-world problems covering large geographical areas. As a result, they often require extended computing times. However, in spite of the improved availability of parallel computing facilities, applications in the field of urban and regional dynamics are almost always based on sequential algorithms. This paper makes a contribution toward wider use of high-performance computing techniques based on General-Purpose computing on Graphics Processing Units (GPGPU) in the field of geosimulation. In particular, we investigate the parallel speedup achieved by applying GPGPU to a popular constrained urban CA model. The major contribution of this work is the specific modeling we propose to achieve significant gains in computing time, while maintaining the most relevant features of the traditional sequential model.

16.
The 2D Cellular Automata model MAGFLOW simulates lava flows, with an algorithm based on the Monte Carlo approach solving the anisotropic flow direction problem. The model was applied to reproduce a lava flow formed during the 2001 Etna eruption. This eruption provided the opportunity to verify the ability of MAGFLOW to simulate the path of lava flows, made possible by the availability of the necessary data for both modeling and subsequent validation. MAGFLOW reproduced the spread of the flow quite accurately. Good agreement was found between the simulated and observed lengths on steep slopes, whereas the area covered by the lava flow tends to be overestimated. The major inconsistencies found in the comparison between the simulated and observed lava flow were due to neglecting the effects of ephemeral vent formation.

17.
18.
In this paper we optimize mean-reverting portfolios subject to cardinality constraints. First, the parameters of the corresponding Ornstein–Uhlenbeck (OU) process are estimated by auto-regressive Hidden Markov Models (AR-HMM), in order to capture the underlying characteristics of the financial time series. Portfolio optimization is then performed by maximizing the return achieved with a predefined probability, instead of optimizing the predictability parameter, which yields more profitable portfolios. The selection of the optimal portfolio according to the goal function is carried out by stochastic search algorithms. The presented solutions satisfy the cardinality constraint by providing sparse portfolios, which minimize transaction costs (and, as a result, maximize the interpretability of the results). In order to use the method for high-frequency trading (HFT), we utilize a massively parallel GPGPU architecture. Both the portfolio optimization and the model identification algorithms are successfully tailored to run on a GPGPU to meet the challenges of efficient software implementation and fast execution time. The performance of the new method has been extensively tested on historical daily and intraday FOREX data and on artificially generated data series. The results demonstrate that a good average return can be achieved by the proposed trading algorithm in realistic scenarios. Speed profiling has proven that the GPGPU is capable of HFT, achieving high-throughput real-time performance.

19.
The general-purpose graphics processing unit (GPGPU) is a popular accelerator for general applications such as scientific computing, because such applications are massively parallel and can exploit the significant parallel computing power inherited from GPUs. However, distributing the workload among the large number of cores, i.e. choosing the execution configuration of a GPGPU, is currently still a manual trial-and-error process: programmers try out some configurations by hand and might settle for a sub-optimal one, leading to poor performance and/or high power consumption. This paper presents an auto-tuning approach for GPGPU applications based on performance and power models. First, a model-based analytic approach for estimating the performance and power consumption of kernels is proposed. Second, an auto-tuning framework is proposed for automatically obtaining a near-optimal configuration for a kernel computation. In this work, we formulate finding an optimal configuration as a constrained optimization problem and solve it using either simulated annealing (SA) or a genetic algorithm (GA). Experimental results show that the fidelity of the proposed models for performance and energy consumption is 0.86 and 0.89, respectively. Further, the optimization algorithms result in a normalized optimality offset of 0.94% and 0.79% for SA and GA, respectively.
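The optimization view of configuration tuning can be sketched with a toy simulated-annealing search over a discrete set of launch configurations. The cost function, configuration list, and linear cooling schedule below are illustrative assumptions, not the paper's performance or power models:

```python
import math
import random

def anneal_config(cost, configs, steps=500, t0=5.0, seed=0):
    """Simulated-annealing search over a discrete list of candidate
    launch configurations, minimising a user-supplied cost model.
    Worse candidates are accepted with probability exp(-delta/T),
    where T cools linearly; the best configuration seen is returned."""
    rng = random.Random(seed)
    cur = rng.randrange(len(configs))
    best = cur
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # linear cooling
        cand = rng.randrange(len(configs))        # propose a neighbour
        delta = cost(configs[cand]) - cost(configs[cur])
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand                            # accept the move
        if cost(configs[cur]) < cost(configs[best]):
            best = cur                            # track the best seen
    return configs[best]
```

In the paper's setting, `cost` would be the analytic performance/power model evaluated for a candidate thread-block configuration, so no on-device measurement is needed per candidate.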

20.
Graphics processing unit (GPU) virtualization technology enables a single GPU to be shared among multiple virtual machines (VMs), thereby allowing multiple VMs to perform GPU operations simultaneously on a single GPU. Because GPUs exhibit lower resource scalability than central processing units (CPUs), memory, and storage, many VMs encounter resource shortages while running GPU operations concurrently, meaning that a VM performing a GPU operation must wait to use the GPU. In this paper, we propose a partial migration technique for general-purpose graphics processing unit (GPGPU) tasks to prevent GPU resource shortages in a remote procedure call-based GPU virtualization environment. The proposed method allows a GPGPU task to be migrated to another physical server's GPU based on the available resources of the target GPU device, thereby reducing the time the VM waits to use the GPU. With this approach, we prevent resource shortages and minimize performance degradation for GPGPU operations running on multiple VMs. Our proposed method can prevent GPU memory shortages, improve GPGPU task performance by up to 14%, and improve GPU computational performance by up to 82%. In addition, experiments show that the migration of GPGPU tasks minimizes the impact on other VMs.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号