Similar Documents
10 similar documents found (search time: 140 ms).
1.
Simulation represents a powerful technique for the analysis of dependability and performance aspects of distributed systems. For large-scale critical systems, simulation demands complex experimentation environments and the integration of different tools, in turn requiring sophisticated modeling skills. Moreover, the criticality of the involved systems implies the set-up of expensive testbeds on private infrastructures. This paper presents a middleware for performing hybrid simulation of large-scale critical systems. The services offered by the middleware allow the integration and interoperability of simulated and emulated subsystems, compliant with the reference interoperability standards, which can provide greater realism in the scenario under test. The hybrid simulation of complex critical systems is a research challenge, owing both to the interoperability issues between emulated and simulated subsystems and to the cost of setting up scenarios that involve a large number of entities and expensive, long-running simulations. Therefore, a multi-objective optimization approach is proposed to optimize the allocation of simulation tasks on a private cloud.
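As a rough illustration of the kind of multi-objective allocation the abstract proposes (a sketch only: the two objectives, the brute-force solver, and every name below are assumptions, not the paper's method), one can enumerate assignments of simulation tasks to private-cloud nodes and keep the Pareto-optimal trade-offs between monetary cost and makespan:

    # Illustrative sketch: Pareto-optimal allocation of simulation tasks
    # to cloud nodes under two objectives (cost, makespan). All names
    # and cost models are invented for illustration.
    from itertools import product

    def dominates(a, b):
        """a dominates b: no worse on both objectives, better on at least one."""
        return (a["cost"] <= b["cost"] and a["makespan"] <= b["makespan"]
                and (a["cost"] < b["cost"] or a["makespan"] < b["makespan"]))

    def pareto_front(candidates):
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates)]

    def enumerate_allocations(tasks, nodes):
        """Brute-force every task -> node mapping (feasible for small scenarios only)."""
        for assignment in product(range(len(nodes)), repeat=len(tasks)):
            cost, load = 0.0, [0.0] * len(nodes)
            for task, i in zip(tasks, assignment):
                cost += nodes[i]["price"] * task["cpu_hours"]
                load[i] += task["cpu_hours"] / nodes[i]["speed"]
            # Makespan: the most loaded node bounds the scenario's running time.
            yield {"map": assignment, "cost": cost, "makespan": max(load)}

    tasks = [{"cpu_hours": 4}, {"cpu_hours": 9}, {"cpu_hours": 2}]
    nodes = [{"name": "n1", "price": 1.0, "speed": 1.0},
             {"name": "n2", "price": 2.5, "speed": 2.0}]
    for alloc in pareto_front(list(enumerate_allocations(tasks, nodes))):
        print(alloc)

A real system would replace the exhaustive enumeration with a heuristic or evolutionary multi-objective solver, since the search space grows exponentially with the number of tasks.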

2.
The complexity of developing and deploying context-aware pervasive-computing applications calls for distributed software infrastructures that assist applications to collect, aggregate, and disseminate contextual data. In this paper, we motivate a data-centric design for such an infrastructure to support context-aware applications. Our middleware system, Solar, treats contextual data sources as stream publishers. The core of Solar is a scalable and self-organizing peer-to-peer overlay to support data-driven services. We describe how different services can be systematically integrated on top of the Solar overlay and evaluate the resource discovery and data-dissemination services. We also discuss our experience and lessons learned when using Solar to support several implemented scenarios. We conclude that a data-centric infrastructure is necessary to facilitate both the development and deployment of context-aware pervasive-computing applications.
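The abstract does not show Solar's actual API; the following toy publish/subscribe sketch (all names invented) only illustrates the data-centric idea of treating context sources as stream publishers that push events to subscribing applications:

    # Toy publish/subscribe sketch of the data-centric idea: context
    # sources publish on named streams; applications subscribe to have
    # context pushed to them. Invented names, not Solar's real overlay.
    from collections import defaultdict

    class StreamBus:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, stream, callback):
            self.subscribers[stream].append(callback)

        def publish(self, stream, event):
            # A context source pushes an event; every subscriber is notified.
            for cb in self.subscribers[stream]:
                cb(event)

    bus = StreamBus()
    # An application consumes location updates from a sensor stream.
    bus.subscribe("badge/location", lambda e: print("user seen in", e["room"]))
    bus.publish("badge/location", {"user": "alice", "room": "lab-115"})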

3.
I/O- and data-intensive workloads, as represented by the Grand Challenge problems, multimedia applications, cosmology simulations, climate modeling, and large collaborative visualizations, to name a few, call for innovative approaches to alleviating the I/O performance bottlenecks (in both bandwidth and data access). The advent of low-cost hardware platforms, such as Beowulf clusters, has opened up numerous possibilities in mass data storage, scalable architectures, and large-scale simulations. The objective of this Special Issue is to discuss problems and solutions, to identify new issues, and to help shape future research and development directions in these areas. From these perspectives, the Special Issue addresses the problems encountered at the hardware, middleware, and application levels, providing conceptual as well as empirical treatments.

4.
Evaluating new ideas for job scheduling or data transfer algorithms in large-scale grid systems is notoriously challenging. Existing grid simulators expect to receive a realistic workload as an input, and such input is difficult to obtain in the absence of an in-depth study of representative grid workloads. In this work, we analyze the workload of the ATLAS experiment at CERN's LHC, processed on the resources of the Nordic Data Grid Facility. ATLAS is one of the biggest grid technology users, with extreme demands for CPU power, data volume, and bandwidth. The analysis is based on a data sample with ∼1.6 million jobs, 3029 TB of data transfer, and 873 years of processor time. Our additional contributions are (a) scalable workload models that can be used to generate a synthetic workload for a given number of jobs, (b) an open-source workload generator integrated with existing grid simulators, and (c) suggestions for grid system designers based on the insights of our analysis.
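As a minimal sketch of the workload-generator idea in (a) and (b) (the exponential and lognormal parameters below are placeholders, not the distributions the authors actually fitted to the ATLAS sample), a generator can emit a synthetic trace of any requested size:

    # Minimal sketch of a synthetic grid-workload generator: draw job
    # attributes from assumed distributions and emit a trace of the
    # requested size. Parameters are illustrative placeholders.
    import random

    def synthetic_workload(n_jobs, seed=42):
        rng = random.Random(seed)          # seeded for reproducible traces
        jobs, t = [], 0.0
        for i in range(n_jobs):
            t += rng.expovariate(1 / 30.0)  # inter-arrival time, mean 30 s
            jobs.append({
                "id": i,
                "submit_time": t,
                "cpu_seconds": rng.lognormvariate(8.0, 1.5),  # heavy-tailed runtimes
                "input_mb": rng.lognormvariate(5.0, 2.0),     # heavy-tailed data sizes
            })
        return jobs

    for job in synthetic_workload(3):
        print(job)

A trace produced this way can then be fed to a grid simulator in place of a recorded production workload.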

5.
Programming distributed computer systems is difficult because of complexities in addressing remote entities, message handling, and program coupling. As systems grow, scalability becomes critical, as bottlenecks can serialize portions of the system. When these distributed system aspects are exposed to programmers, code size and complexity grow, as does the fragility of the system. This paper describes a distributed software architecture and middleware implementation that combines object-based blackboard-style communications with data-driven and periodic application scheduling to greatly simplify distributed programming while achieving scalable performance. Data-Activated Replication Object Communications (DAROC) allows programmers to treat shared objects as local variables while providing implicit communications.
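A toy sketch of the blackboard-style idea follows: writing a shared object looks like a local assignment, but the write is implicitly replicated to peers and wakes data-driven tasks. The class and method names are invented for illustration and are not DAROC's API:

    # Toy blackboard with data-activated replication: a write is
    # propagated to replica blackboards and triggers registered
    # data-driven tasks. Invented names, not the DAROC API.
    class Blackboard:
        def __init__(self):
            self._data, self._watchers, self._replicas = {}, {}, []

        def attach_replica(self, other):
            self._replicas.append(other)

        def on_write(self, key, task):
            self._watchers.setdefault(key, []).append(task)

        def write(self, key, value, _from_peer=False):
            self._data[key] = value
            if not _from_peer:                        # implicit communication
                for peer in self._replicas:
                    peer.write(key, value, _from_peer=True)
            for task in self._watchers.get(key, []):  # data-driven scheduling
                task(value)

    local, remote = Blackboard(), Blackboard()
    local.attach_replica(remote)
    remote.on_write("track", lambda v: print("remote task sees", v))
    local.write("track", {"x": 1.0, "y": 2.0})  # reads like a local update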

6.
To harness the computing power of multiple kinds of high-performance computing resources, this paper designs a scalable computation-acceleration middleware for desktop problem-solving environments. It adopts a three-layer structure of application layer, middle layer, and computation layer to reduce the complexity of the system design, and it supports multiple parallel back-ends, distributed scaling, and plug-and-play of parallel back-ends. LU decomposition experiments in Matlab using this acceleration middleware demonstrate its effectiveness.
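A minimal Python sketch of the three-layer, plug-and-play idea (the paper's system targets Matlab; the registry, the routing function, and the SciPy-based back-end below are illustrative assumptions, not the paper's implementation):

    # Sketch of the three layers: the application calls lu(); the middle
    # layer routes the call to whichever back-end is registered
    # (plug-and-play); the compute layer does the factorization.
    import numpy as np
    from scipy.linalg import lu as scipy_lu

    BACKENDS = {}

    def register_backend(name, fn):
        BACKENDS[name] = fn                 # back-ends can be added at run time

    def lu(matrix, backend="local"):
        return BACKENDS[backend](matrix)    # middle layer: dispatch the request

    register_backend("local", scipy_lu)     # compute layer: serial SciPy LU

    A = np.random.rand(4, 4)
    P, L, U = lu(A)
    assert np.allclose(P @ L @ U, A)        # verify the factorization

A parallel back-end (e.g., one running on a cluster) could be registered under another name and selected per call without changing the application layer.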

7.
F-MPJ: scalable Java message-passing communications on parallel systems
This paper presents F-MPJ (Fast MPJ), a scalable and efficient Message-Passing in Java (MPJ) communication middleware for parallel computing. The increasing interest in Java as the programming language of the multi-core era demands scalable performance on hybrid architectures (with both shared and distributed memory spaces). However, current Java communication middleware lacks efficient communication support. F-MPJ improves this situation by: (1) providing efficient non-blocking communication, which allows communication overlapping and thus scalable performance; (2) taking advantage of shared memory systems and high-performance networks through the use of our high-performance Java sockets implementation (named JFS, Java Fast Sockets); (3) avoiding the use of communication buffers; and (4) optimizing MPJ collective primitives. Thus, F-MPJ significantly improves the scalability of current MPJ implementations. A performance evaluation on an InfiniBand multi-core cluster has shown that F-MPJ communication primitives outperform those of representative MPJ libraries by up to 60 times. Furthermore, the use of F-MPJ in communication-intensive MPJ codes has increased their performance by up to seven times.
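F-MPJ itself is Java middleware; to keep the examples on this page in one language, the communication-overlap idea behind point (1) is sketched below with mpi4py (Python's MPI bindings) as an analogy, not F-MPJ's API:

    # Non-blocking overlap sketch (mpi4py analogy, not F-MPJ's Java API):
    # start the send, compute while the message is in flight, then wait.
    # Run with: mpiexec -n 2 python overlap.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = np.arange(1_000_000, dtype="d")

    if rank == 0:
        req = comm.Isend(buf, dest=1)    # non-blocking send
        local = np.square(buf).sum()     # overlap: compute meanwhile
        req.Wait()                       # complete the communication
    elif rank == 1:
        recv = np.empty_like(buf)
        req = comm.Irecv(recv, source=0)
        req.Wait()

    print(f"rank {rank} done")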

8.
A Beowulf-type cluster can: (1) mitigate many issues associated with the analysis of large, complex remotely sensed data sets; (2) shorten the response time of operational agencies to crisis-management situations; and (3) expedite the reanalysis of large archives of satellite data. Whereas most Beowulf-type designs support modeling applications, the Parallel Image Processing Environment (PIPE) addresses the unique requirements of remote sensing applications. PIPE has four hierarchical layers: hardware, operating system, middleware and applications. Rocks, a middleware sublayer, manages the cluster. DIAL-developed interprocess communication and control daemons form the second middleware sublayer. They encapsulate user-defined applications and thereby support automated, user-transparent parallelization of satellite data analyses, implemented in the applications layer using generalized constructs. The daemons also monitor resource (computational and I/O) utilization on a node/thread basis, a feature not supported by other generally available monitoring utilities. The application support libraries are fully extensible, facilitate the reuse of modular and commonly used software functions in new applications and thereby reduce both the cost and time to implement new applications. Two applications (signal analysis, image classification) show PIPE's versatility and performance characteristics. PIPE is intrinsically scalable, reliable and can be incrementally implemented. A comparison with other embarrassingly parallel systems is also provided.
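As a generic sketch of the user-transparent parallelization idea (Python multiprocessing standing in for PIPE's control daemons; the tiling scheme and all names are invented for illustration):

    # Generic sketch: the application supplies a per-tile function; the
    # framework splits the scene into tiles and farms them out to worker
    # processes, reassembling the result. multiprocessing stands in for
    # PIPE's interprocess communication and control daemons.
    from multiprocessing import Pool
    import numpy as np

    def classify_tile(tile):
        """User-defined application code: a trivial threshold classifier."""
        return (tile > tile.mean()).astype(np.uint8)

    def parallel_apply(scene, fn, n_tiles=4, workers=4):
        tiles = np.array_split(scene, n_tiles, axis=0)  # split by rows
        with Pool(workers) as pool:
            return np.concatenate(pool.map(fn, tiles), axis=0)

    if __name__ == "__main__":
        scene = np.random.rand(1024, 1024).astype(np.float32)
        labels = parallel_apply(scene, classify_tile)
        print(labels.shape, labels.dtype)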

9.
The Logistical Computing and Internetworking (LoCI) project reflects the way that next-generation internetworking fundamentally changes our definition of high-performance wide-area computing. A key to achieving this aim is the development of middleware that can provide reliable, flexible, scalable, and cost-effective delivery of data with quality-of-service guarantees to support high-performance applications of all types. The LoCI effort attacks this problem with a simple but innovative strategy: at the base of the LoCI project is a richer view of the use of storage in communication and information sharing.

10.
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impact models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impact studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances the reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
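As a small sketch of aggregation step e) (the yield grid, the region mask, and the weighting scheme below are invented stand-ins, not pSIMS datatypes), gridded output can be collapsed onto arbitrary regions given a grid-cell-to-region mapping:

    # Sketch of spatial aggregation: area-weighted mean of a gridded
    # model output per region, given a grid-cell -> region-id mask.
    # All data here are invented stand-ins for real pSIMS datatypes.
    import numpy as np

    def aggregate(grid, region_mask, weights=None):
        """Weighted mean of `grid` for each region id in `region_mask`."""
        if weights is None:
            weights = np.ones_like(grid)
        out = {}
        for region in np.unique(region_mask):
            sel = region_mask == region
            out[int(region)] = float((grid[sel] * weights[sel]).sum()
                                     / weights[sel].sum())
        return out

    yield_grid = np.random.rand(3, 4)         # e.g., simulated crop yield
    districts = np.array([[0, 0, 1, 1],
                          [0, 0, 1, 1],
                          [2, 2, 2, 2]])      # administrative demarcation
    print(aggregate(yield_grid, districts))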
