Similar Literature
20 similar documents retrieved (search time: 15 ms).
1.
This article describes the coupling of three site-specific, large-scale finite element groundwater models. Each independent groundwater model represents one geological unit with boundaries adjacent to the others. To preserve consistency of the simulation results, the exchange rates at the shared borders were previously calculated independently, set as boundary conditions for the adjacent model unit, and adjusted in an iterative process. To overcome this time-consuming procedure of generating consistent boundary-condition values for the exchange rates, the adjacent boundary nodes of the three models are linked to form a unified model. As a first result, it is shown how the quality of simulation results in the vicinity of the shared boundaries increases with the proposed solution. With the friendly permission of the North Rhine-Westphalia State Environment Agency (Landesumweltamt Nordrhein-Westfalen).
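A minimal sketch (not taken from the article) of the node-linking idea: boundary nodes of separate finite element meshes that coincide geometrically are mapped to one unified node index, so the individual models can be assembled as a single system. The data layout and tolerance are illustrative assumptions.

```python
# Sketch: merge shared boundary nodes of several FE meshes into one unified index space.
# Assumption: each mesh is a list of (x, y) node coordinates; nodes closer than `tol`
# across meshes are treated as the same physical boundary node.

def unify_nodes(meshes, tol=1e-6):
    unified = []          # unified node coordinates
    index_maps = []       # per mesh: local node id -> unified node id
    for mesh in meshes:
        local_map = {}
        for i, (x, y) in enumerate(mesh):
            # look for an existing unified node at (almost) the same position
            match = next((j for j, (ux, uy) in enumerate(unified)
                          if abs(ux - x) < tol and abs(uy - y) < tol), None)
            if match is None:
                unified.append((x, y))
                match = len(unified) - 1
            local_map[i] = match
        index_maps.append(local_map)
    return unified, index_maps

# Example: two model units sharing the boundary node (1.0, 0.0)
mesh_a = [(0.0, 0.0), (1.0, 0.0)]
mesh_b = [(1.0, 0.0), (2.0, 0.0)]
nodes, maps = unify_nodes([mesh_a, mesh_b])
print(len(nodes), maps)   # 3 unified nodes; both meshes map their copy of (1.0, 0.0) to id 1
```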

2.
Usually, modeling of evacuations is done during the planning and authorization process of office buildings or large-scale facilities, where computing time is not an issue at all. The collaborative Hermes project [1] aims at improving the safety of mass events by constructing an evacuation assistant, a decision support system for heads of operation during an actual evacuation. For this, the status (occupancy and available egress routes) of a facility is constantly monitored with automatic person counters, door sensors, smoke sensors, and manual input from security staff. Starting from this status, egress is simulated faster than real time, and the result is visualized in a suitable fashion to show what is likely to happen in the next 15 min. The test case for this evacuation assistant is the clearing of the ESPRIT Arena in Düsseldorf, which holds 50,000–65,000 persons depending on the event type. The on-site prediction requires the ability to simulate the egress in ≈2 min, a task that requires the combination of a fast algorithm and a parallel computer. The paper describes the details of the evacuation problem, the architecture of the evacuation assistant, the pedestrian motion model employed, and the optimization and parallelization of the code.

3.
Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. Fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid-approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, as well as passage times which capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, global passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate upper and lower approximations for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Additionally, we show that, for passage-time measures involving individual components, the cumulative distribution function can be directly approximated by fluid techniques. Finally, using the GPA tool, we take advantage of the rapid fluid computation of passage times to show how a multi-class client-server system can be optimised to satisfy multiple service level agreements.
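A minimal, hypothetical sketch of the fluid idea for a client-server population (not the GPA tool or the paper's model): the discrete populations are replaced by deterministic fluid variables, and a global passage time is read off as the instant the fluid trajectory crosses a target level. All rates and populations below are invented for illustration.

```python
# Sketch: deterministic fluid approximation of a client/server population model.
# Clients cycle between "thinking" and "waiting"; servers serve waiting clients.
# Rates, populations and the passage target are illustrative assumptions.

def fluid_passage_time(n_clients=1000, n_servers=100, think_rate=0.5,
                       service_rate=2.0, target_fraction=0.9, dt=1e-3, t_max=50.0):
    thinking, waiting = float(n_clients), 0.0
    servers = float(n_servers)
    target = target_fraction * n_clients      # passage: 90% of clients served once
    served, t = 0.0, 0.0
    while t < t_max:
        # fluid flows: thinking -> waiting (requests), waiting -> thinking (completions)
        arrive = think_rate * thinking
        serve = service_rate * min(waiting, servers)
        thinking += dt * (serve - arrive)
        waiting += dt * (arrive - serve)
        served += dt * serve
        t += dt
        if served >= target:
            return t                          # deterministic passage-time approximation
    return None

print(fluid_passage_time())
```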

4.
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding the resources available to a single machine. In this work we efficiently distribute the computation of knn graphs across clusters of processors using message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over 100 processors and indicate that similar speedup can be obtained with several hundred processors.
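A minimal mpi4py sketch of one possible partitioning, not the authors' framework: every rank holds the full point set, answers the knn queries for its own slice of points by brute force, and the root gathers the partial adjacency lists. Data size, dimensionality and k are illustrative assumptions.

```python
# Sketch: brute-force knn graph construction distributed over MPI ranks.
# Run with e.g. `mpirun -n 4 python knn_graph.py`; requires mpi4py and numpy.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

k = 5
rng = np.random.default_rng(0)                 # same seed -> identical data on all ranks
points = rng.random((10_000, 16))              # 10k points in 16 dimensions (illustrative)

# Each rank answers the knn queries for its own contiguous slice of the points.
my_ids = np.array_split(np.arange(len(points)), size)[rank]
edges = {}
for i in my_ids:
    d = np.linalg.norm(points - points[i], axis=1)
    d[i] = np.inf                              # exclude the point itself
    edges[int(i)] = np.argpartition(d, k)[:k].tolist()

# The root rank assembles the full knn graph from the partial results.
parts = comm.gather(edges, root=0)
if rank == 0:
    knn_graph = {i: nbrs for part in parts for i, nbrs in part.items()}
    print(len(knn_graph), "vertices,", k, "edges each")
```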

5.
A two-level parallel implementation of the cyclic box method for wall-distance computation was developed by combining MPI multi-process and OpenMP multi-thread parallelism, and its preprocessing algorithm was improved. On the Tianhe-2 system at the National Supercomputing Center in Guangzhou, the method was tested on a scramjet combustor case with a grid of order 100 million cells. The analysis shows that the process-box and boundary-box methods avoid the problem of choosing the number of box partitions, and that the boundary-box method achieves a better speedup than the other algorithms, significantly improving the efficiency of wall-distance computation.
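A simplified 2D sketch of box-bucketed wall-distance search, a loose stand-in for the box methods mentioned above (not the paper's algorithm): wall points are bucketed into square boxes, and for each cell the search expands over rings of boxes until no closer wall point can exist. In the paper's setting the outer loop over cells would be split across MPI ranks and OpenMP threads; geometry and box size here are illustrative assumptions.

```python
# Sketch: box-bucketed minimum wall distance in 2D.

import math
from collections import defaultdict

def build_boxes(wall_points, box_size):
    """Bucket wall points into square boxes indexed by integer coordinates."""
    boxes = defaultdict(list)
    for x, y in wall_points:
        boxes[(int(x // box_size), int(y // box_size))].append((x, y))
    return boxes

def wall_distance(cell, boxes, box_size):
    """Expand rings of boxes around the cell until no closer wall point can exist."""
    if not boxes:
        return math.inf
    cx, cy = int(cell[0] // box_size), int(cell[1] // box_size)
    best, ring = math.inf, 0
    while (ring - 1) * box_size <= best:       # points in ring r lie >= (r-1)*box_size away
        for bx in range(cx - ring, cx + ring + 1):
            for by in range(cy - ring, cy + ring + 1):
                if max(abs(bx - cx), abs(by - cy)) != ring:
                    continue                   # only visit the boxes on the current ring
                for x, y in boxes.get((bx, by), []):
                    best = min(best, math.hypot(x - cell[0], y - cell[1]))
        ring += 1
    return best

wall = [(x * 0.1, 0.0) for x in range(1000)]   # a straight wall along y = 0
boxes = build_boxes(wall, box_size=1.0)
cells = [(25.0, 3.7), (60.3, 12.2)]            # in practice, cells are split across ranks/threads
print([round(wall_distance(c, boxes, 1.0), 3) for c in cells])
```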

6.
This paper evaluates a recently created Soil and Water Assessment Tool (SWAT) calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested on six model scenarios constructed for three watersheds of increasing size, each for 2-year and 10-year simulation durations. Results show significant speedup in calibration time and, for up to 64 cores, minimal losses in speedup for all watershed sizes and simulation durations. An empirical relationship is presented for estimating the time needed to calibrate a SWAT model using the cloud calibration tool as a function of the number of Hydrologic Response Units (HRUs), time steps, and cores used for the calibration.
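A compact, serial sketch of the core Dynamically Dimensioned Search logic (the Azure tool parallelises a variant of this; it is not reproduced here): as the evaluation budget is consumed, fewer decision variables are perturbed, so the search narrows from global to local. The objective below is a stand-in for a SWAT model run; bounds, r and the budget are illustrative assumptions.

```python
# Sketch of Dynamically Dimensioned Search (DDS) for minimising a calibration objective.

import math, random

def dds(objective, lo, hi, max_evals=500, r=0.2, seed=1):
    rng = random.Random(seed)
    dim = len(lo)
    best = [rng.uniform(lo[j], hi[j]) for j in range(dim)]
    best_f = objective(best)
    for i in range(1, max_evals):
        p = 1.0 - math.log(i) / math.log(max_evals)      # fewer dims perturbed over time
        dims = [j for j in range(dim) if rng.random() < p] or [rng.randrange(dim)]
        cand = best[:]
        for j in dims:
            cand[j] += r * (hi[j] - lo[j]) * rng.gauss(0.0, 1.0)
            # reflect candidate values back inside the bounds
            if cand[j] < lo[j]:
                cand[j] = min(hi[j], 2 * lo[j] - cand[j])
            if cand[j] > hi[j]:
                cand[j] = max(lo[j], 2 * hi[j] - cand[j])
        f = objective(cand)
        if f < best_f:                                   # greedy acceptance
            best, best_f = cand, f
    return best, best_f

# Toy "calibration": fit two parameters of a quadratic stand-in objective.
print(dds(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 1.2) ** 2, lo=[-5, -5], hi=[5, 5]))
```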

7.
Collaborative filtering (CF) is an effective technique for addressing the information-overload problem, where each user is associated with a set of rating scores on a set of items. For a chosen target user, conventional CF algorithms measure similarity between this user and other users by utilizing pairs of rating scores on commonly rated items, discarding scores rated by only one of them. We call these comparative scores dual ratings and the non-comparative scores singular ratings. Our experiments show that only about 10% of ratings are dual ones that can be used for similarity evaluation, while the other 90% are singular ones. In this paper, we propose the SingCF approach, which incorporates multiple singular ratings, in addition to dual ratings, into collaborative filtering, aiming to improve recommendation accuracy. We first estimate the unrated scores for singular ratings and transform them into dual ones. Then we perform a CF process to discover neighborhood users and make predictions for each target user. Furthermore, we provide a MapReduce-based distributed framework on Hadoop for significant improvements in efficiency. Experiments in comparison with state-of-the-art methods demonstrate the performance gains of our approaches.
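A toy sketch of the two steps described above, under an explicit simplifying assumption about the estimator (the paper's estimator is not reproduced): for a singular rating, the user who did not rate the item is assigned their own mean rating, after which plain Pearson-correlation similarity is computed over the completed pairs. The rating matrix is invented for illustration.

```python
# Sketch: turning singular ratings into dual ratings before user-user CF.

import math

ratings = {                       # user -> {item: score}, a tiny illustrative matrix
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "c": 5, "d": 2},
    "u3": {"b": 2, "d": 5},
}

def mean(u):
    return sum(ratings[u].values()) / len(ratings[u])

def dual_pairs(u, v):
    """Score pairs over the union of rated items; singular ratings get an estimate."""
    items = set(ratings[u]) | set(ratings[v])
    return [(ratings[u].get(i, mean(u)), ratings[v].get(i, mean(v))) for i in items]

def pearson(pairs):
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

print(pearson(dual_pairs("u1", "u2")), pearson(dual_pairs("u1", "u3")))
```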

8.
To increase the rendering frame rate of large-scale scene models, an optimization algorithm based on the texture atlas technique is proposed. Using a dynamic space-allocation algorithm, the textures of a model are merged into a few large textures at minimal space cost. On this basis, a "redundant storage" scheme resolves the problem that sub-textures cannot be displayed correctly under repeated (tiled) texture patterns, and the texture coordinates of the model's nodes are updated accordingly. Experimental results show that the algorithm is effective and feasible: the number of texture state switches for the optimized model is greatly reduced, texture space is saved to the greatest possible extent, and the rendering frame rate is noticeably improved.
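A minimal shelf-packing sketch of the atlas-building step (the paper's dynamic allocation and its "redundant storage" handling of tiled textures are not reproduced): sub-textures are placed left to right in shelves, and each placement yields an offset from which node texture coordinates could be remapped. Atlas width and the texture sizes are illustrative assumptions.

```python
# Sketch: pack sub-textures into one atlas with a simple shelf algorithm and return
# the offset of each sub-texture so node texture coordinates can be remapped.

def pack_atlas(textures, atlas_width=1024):
    """textures: dict name -> (w, h). Returns name -> (x, y) offsets and the atlas height."""
    placements = {}
    x = y = shelf_h = 0
    for name, (w, h) in sorted(textures.items(), key=lambda t: -t[1][1]):  # tallest first
        if x + w > atlas_width:          # start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        placements[name] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

def remap_uv(u, v, offset, size, atlas_size):
    """Map a [0,1] UV of one sub-texture into atlas UV space (non-tiled case only)."""
    (ox, oy), (w, h), (aw, ah) = offset, size, atlas_size
    return (ox + u * w) / aw, (oy + v * h) / ah

tex = {"wall": (256, 256), "roof": (512, 128), "door": (128, 256)}
places, height = pack_atlas(tex)
print(places, height)
print(remap_uv(0.5, 0.5, places["wall"], tex["wall"], (1024, height)))
```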

9.
The creation of a routing overlay network on the Internet requires the identification of detour paths between end hosts that are shorter than the default paths available. These detour paths are typically the edges forming a Triangle Inequality Violation (TIV), an artifact of the Internet delay space where the sum of latencies across an intermediate hop is less than the direct latency between the pair of end hosts. These violations are caused mainly by interdomain routing policies between Autonomous Systems (ASes) and AS peering through Internet eXchange Points (IXPs). Identifying detours for a global overlay network requires large amounts of computational capability due to the sheer number of possible paths linking source and destination ASes. In this work, we use parallel programming paradigms to analyze the large network measurement datasets made available to the network research community by CAIDA. We study Internet routes traversing IXPs and measure potential TIVs created by these paths. Large-scale analysis of the dataset is carried out by implementing an efficient parallel solution on the CPU and then on the general-purpose graphics processing unit (GPGPU) as well. Both multicore CPU and GPGPU implementations can be carried out with ease in desktop environments with readily available software. We find that both parallel solutions yield high improvements in speedup (2-35x) in comparison to the serial methodologies, thereby opening up the possibility of harnessing the power of parallel programming with readily available hardware. The large amount of data analyzed and studied helps the networking research community draw various inferences for building future scalable Internet routing overlays with greater routing efficiency.
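A small numpy sketch of the core check (not the CAIDA pipeline or the GPGPU kernel): given an end-host latency matrix, find, for every pair, whether some intermediate hop gives a shorter two-leg path, i.e. a triangle inequality violation that makes a detour attractive. The latency matrix below is random illustrative data, not measurement data.

```python
# Sketch: count triangle inequality violations (TIVs) in a latency matrix with numpy.

import numpy as np

rng = np.random.default_rng(0)
n = 200
D = rng.uniform(10, 200, size=(n, n))
D = (D + D.T) / 2                      # make latencies symmetric
np.fill_diagonal(D, 0)

# best two-leg detour latency for every pair: min over k of D[i, k] + D[k, j]
detour = (D[:, :, None] + D[None, :, :]).min(axis=1)
tiv = detour < D                       # a violation exists where some detour is strictly shorter
np.fill_diagonal(tiv, False)

print("pairs with a TIV:", int(tiv.sum()) // 2, "of", n * (n - 1) // 2)
print("largest saving (ms):", float((D - detour).max()))
```

The vectorised form above already parallelises well across cores via the BLAS-style array operations; a GPU variant would evaluate the same min-plus reduction per pair.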

10.
This paper presents a fast algorithm to compute the global clear-sky irradiation, appropriate for extended high-resolution Digital Elevation Models (DEMs). The latest equations published in the European Solar Radiation Atlas (ESRA) have been used as a starting point for the proposed model and solved using a numerical method. A new calculation reordering has been performed to (1) substantially diminish the computational requirements, and (2) reduce the dependence on both the DEM size and the simulated period, i.e., the period during which the irradiation is calculated. All relevant parameters related to shadowing, atmospheric, and climatological factors have been considered. The computational results demonstrate that the obtained implementation is faster by many orders of magnitude than existing advanced irradiation models while maintaining accuracy. Although this paper focuses on clear-sky irradiation, the developed software also computes the global irradiation by applying a filter that considers the clear-sky index.

11.
In this paper, we describe Triana, a distributed problem-solving environment that makes use of the Grid to enable a user to compose applications from a set of components, select resources on which the composed application can be distributed, and then execute the application on those resources. We describe Triana's current pluggable architecture, which can support many different modes of operation through the use of flexible writers for many popular Web service choreography languages. We further show that the Triana architecture is middleware-independent through the use of the Grid Application Toolkit (GAT) API and demonstrate this through a GAT binding to JXTA. We describe how other bindings being developed for Grid infrastructures, such as OGSA, can be seamlessly integrated within the current prototype by using the switching capability of the GAT. Finally, we outline an experiment we conducted using this prototype and discuss its current status. Copyright © 2005 John Wiley & Sons, Ltd.

12.
For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. During the last 15 years, Petri nets have attracted more and more attention as a means to help solve this key problem. Judging from the published papers, hybrid functional Petri nets appear to be an adequate method for modeling complex biological networks. Today, a Petri net model of a biological network is built manually by drawing places, transitions and arcs with mouse events. Therefore, biological data integration based on relevant molecular databases and information systems is an essential step in constructing biological networks. In this paper, we motivate the application of Petri nets for the modeling and simulation of biological networks. Furthermore, we present a form of access to relevant metabolic databases such as KEGG, BRENDA, etc. Based on this integration process, the system supports semi-automatic generation of the correlated hybrid Petri net model. A case study of a cardio-disease-related gene-regulated biological network is also presented. MoVisPP is available online.
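A tiny discrete Petri net token game (not MoVisPP, and ignoring the hybrid/functional extensions) just to fix the vocabulary used above: places hold tokens, and a transition fires when all of its input places carry enough tokens. The two-step "reaction chain" is an illustrative assumption, not a network drawn from KEGG.

```python
# Sketch: a minimal discrete Petri net simulator (places, transitions, arcs).

marking = {"A": 2, "B": 1, "C": 0, "D": 0}

transitions = {
    "t1": {"in": {"A": 1, "B": 1}, "out": {"C": 1}},   # A + B -> C
    "t2": {"in": {"C": 1}, "out": {"D": 1}},           # C -> D
}

def enabled(t):
    return all(marking[p] >= w for p, w in transitions[t]["in"].items())

def fire(t):
    for p, w in transitions[t]["in"].items():
        marking[p] -= w
    for p, w in transitions[t]["out"].items():
        marking[p] = marking.get(p, 0) + w

step = 0
while any(enabled(t) for t in transitions) and step < 10:
    t = next(t for t in transitions if enabled(t))     # simple deterministic firing choice
    fire(t)
    step += 1
    print(step, t, marking)
```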

13.
In this paper we describe the successful application of the ProB tool for data validation in several industrial applications. The initial case study centred on the San Juan metro system installed by Siemens. The control software was developed and formally proven with B. However, the development contains certain assumptions about the actual rail network topology which have to be validated separately in order to ensure safe operation. For this task, Siemens has developed custom proof rules for Atelier B. Atelier B, however, was unable to deal with about 80 properties of the deployment (running out of memory). These properties thus had to be validated by hand at great expense, and they need to be revalidated whenever the rail network infrastructure changes. In this paper we show how we were able to use ProB to validate all of the approximately 300 properties of the San Juan deployment, automatically detecting in a few minutes exactly the same faults that had been uncovered manually in about one man-month. We have repeated this task for three ongoing projects at Siemens, notably the ongoing automatisation of line 1 of the Paris Métro. Here again, about a man-month of effort has been replaced by a few minutes of computation. This achievement required the extension of the ProB kernel for large sets as well as an improved constraint propagation algorithm. We also outline some of the effort and features that were required in moving from a tool capable of dealing with medium-sized examples towards a tool able to deal with actual industrial specifications. We also describe the issue of validating ProB itself, so that it can be integrated into the SIL4 development chain at Siemens.

14.
In large-scale systems, optimization based on global information generally yields better performance than optimization based on local information, but globally optimal algorithms are often infeasible due to the complexity of such systems, so distributed algorithms are commonly used instead. In distributed algorithms, better system performance usually requires exchanging as much information as possible, which in turn increases the load on the communication network. This paper introduces a communication cost into the performance index of predictive control and proposes a method for switching the communication network topology as the system state changes. Simulation results on a dynamic model of a water supply network demonstrate the feasibility of the proposed method.
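A schematic sketch of the trade-off being formalised (the coupled scalar subsystems, horizon and weights are invented placeholders, not the water-network model or the paper's controller): each candidate communication topology is scored by predicted control cost plus a penalty proportional to the number of active links, and the controller switches to the cheapest one as the state changes.

```python
# Sketch: choosing a communication topology by trading predicted control cost
# against a communication penalty.

def predict_cost(x, links, horizon=10, k=0.5, coupling=0.3):
    """Simulate simple proportional control; a subsystem only cancels the coupling
    from its neighbour if the corresponding link is active."""
    x = list(x)
    cost = 0.0
    for _ in range(horizon):
        u = []
        for i in (0, 1):
            j = 1 - i
            u_i = -k * x[i]
            if (i, j) in links or (j, i) in links:
                u_i -= coupling * x[j]          # neighbour's state is communicated
            u.append(u_i)
        x = [x[i] + u[i] + coupling * x[1 - i] for i in (0, 1)]
        cost += sum(v * v for v in x)
    return cost

def best_topology(x, lam=2.0):
    candidates = [set(), {(0, 1)}]              # no link vs. full communication
    return min(candidates,
               key=lambda links: predict_cost(x, links) + lam * len(links))

for state in ([0.1, 0.1], [5.0, 4.0]):
    links = best_topology(state)
    print(state, "->", "communicate" if links else "no communication")
```

For small deviations the communication penalty dominates and the link is dropped; for large deviations the predicted tracking cost dominates and the link is switched on, which is the qualitative behaviour the state-dependent topology switching aims for.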

15.
This paper explains the concept of grid computing and its differences from traditional distributed computing. A distributed association rule mining algorithm is introduced, several improvements to it are made, and the algorithm is then implemented as a grid service. Experimental results show that the grid service can combine the computing power of several machines to reduce the algorithm's running time.

16.
吕品  陈年生  董武世 《微机发展》2006,16(10):14-16
Meta-learning uses ensemble learning to generate a final global prediction model; its basic idea is to learn again from knowledge that has already been obtained, thereby deriving the final data patterns. The grid can effectively provide a high-performance, distributed infrastructure for meta-learning. Based on the concept of the knowledge grid and building on the Globus Toolkit, this paper analyzes the architecture of the knowledge grid and its main components. Following the general process of distributed meta-learning, meta-learning tasks are then designed under the knowledge grid architecture.

17.
《Automatica》1987,23(4):523-533
In this paper, a general class of nonquadratic convex Nash games is studied from the points of view of existence, stability and iterative computation of noncooperative equilibria. Conditions for contraction of general nonlinear operators are obtained, which are then used in the stability study of such games. These lead to existence and uniqueness conditions for stable Nash equilibrium solutions, under both global and local analysis. Also, convergence of an algorithm which employs inaccurate search techniques is verified. It is shown in the context of a fish-war example that the algorithm given is in some respects superior to various algorithms found in the literature, and is furthermore more meaningful for real-world implementation.
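A hedged sketch of an iterative (best-response) computation of a Nash equilibrium, applied to a made-up two-player convex game rather than the fish-war example; scipy's scalar minimiser stands in for the (possibly inaccurate) searches discussed in the paper, and convergence here would follow from contraction-type conditions, not from the code itself.

```python
# Sketch: best-response iteration for a two-player convex (nonquadratic) game.

from scipy.optimize import minimize_scalar

# Player costs: convex in the player's own decision, coupled through the other's.
J1 = lambda x1, x2: (x1 - 1.0) ** 4 + 0.5 * x1 * x2
J2 = lambda x1, x2: (x2 + 0.5) ** 2 + 0.3 * x1 * x2

x1, x2 = 0.0, 0.0
for it in range(50):
    # each player minimises its own cost with the opponent's decision held fixed
    x1_new = minimize_scalar(lambda u: J1(u, x2), bounds=(-5, 5), method="bounded").x
    x2_new = minimize_scalar(lambda u: J2(x1, u), bounds=(-5, 5), method="bounded").x
    if abs(x1_new - x1) + abs(x2_new - x2) < 1e-8:
        break
    x1, x2 = x1_new, x2_new

print("approximate Nash equilibrium:", round(x1, 4), round(x2, 4), "after", it + 1, "iterations")
```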

18.
A programming method that facilitates inter-workstation communications on a local area network (LAN) of microcomputers has been developed. Communications are managed using a set of common-access status and data files, which are written to and read from the file-server hard disk. Use of this programming method permits the workload associated with large computational problems to be distributed to various workstations connected to a LAN for concurrent processing, and has resulted in substantial savings in solution time for the problems that have been run. Test problems have been programmed in IBM Compiled BASIC [1] and are continuing with further programs in BASIC and IBM Professional FORTRAN [2]. Applications to actual computational engineering problems are presently being investigated and are briefly discussed. This paper describes the basic principles underlying the distributed processing technique that was developed and presents several example problems that were run to test the technique and develop benchmark results for a particular LAN configuration.
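A rough modern analogue of the common-access status/data file idea (Python rather than the paper's BASIC/FORTRAN, and a shared directory standing in for the file-server disk): a coordinator drops task files, workers claim them by renaming, and results come back as files. Paths and the task format are illustrative assumptions.

```python
# Sketch: file-based work distribution over a shared directory.

import json, os, time

SHARED = "shared_dir"            # assume this directory sits on the shared file server

def post_tasks(values):
    os.makedirs(SHARED, exist_ok=True)
    for i, v in enumerate(values):
        with open(os.path.join(SHARED, f"task_{i}.json"), "w") as f:
            json.dump({"id": i, "value": v}, f)

def worker(worker_id):
    """Claim tasks by renaming them (atomic on most file systems), then write results."""
    while True:
        pending = [f for f in os.listdir(SHARED) if f.startswith("task_")]
        if not pending:
            return
        name = pending[0]
        claimed = os.path.join(SHARED, f"claimed_{worker_id}_{name}")
        try:
            os.rename(os.path.join(SHARED, name), claimed)
        except OSError:
            continue                         # another workstation claimed it first
        task = json.load(open(claimed))
        result = task["value"] ** 2          # the "computation" (placeholder)
        with open(os.path.join(SHARED, f"result_{task['id']}.json"), "w") as f:
            json.dump({"id": task["id"], "result": result}, f)
        time.sleep(0.01)

post_tasks(range(5))
worker("ws1")                                # in practice, one worker per workstation
print(sorted(os.listdir(SHARED)))
```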

19.
20.
胡蓉  肖基毅 《微机发展》2007,17(10):99-101
Scientific, industrial, and commercial applications need to analyze massive amounts of data distributed across heterogeneous sites, which requires suitable distributed parallel systems to store and manage the data. The grid provides effective computational support for distributed data mining and knowledge discovery. Based on a discussion of the knowledge grid architecture, this paper implements a grid-based distributed data mining process using the visual grid application environment VEGA.
