Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
A scientific problem-solving environment (PSE), built on a set of efficient scientific computing tools, provides a convenient, easy-to-use platform for solving scientific problems. Combining the traditional PSE with the sharing and collaboration features of the grid creates new opportunities to meet two needs that pervade scientific problem solving: knowledge reuse and large-scale computation. This paper reviews the history of scientific problem-solving environments and their open problems, and on that basis proposes a solution: a grid-based scientific problem-solving environment (Grid-PSE).

2.
Scientists and engineers need computational power to satisfy the increasingly resource-intensive nature of their simulations. For example, running a Parameter Sweep Experiment (PSE) involves processing many independent jobs, given by multiple initial configurations (input parameter values) run against the same program code. Hence, paradigms like Grid Computing and Cloud Computing are employed to gain scalability. However, job scheduling in Grid and Cloud environments is a difficult problem, since it is essentially NP-complete. Thus, many variants based on approximation techniques, especially those from Swarm Intelligence (SI), have been proposed; these techniques can search for problem solutions very efficiently. This paper surveys SI-based job scheduling algorithms for bag-of-tasks applications (such as PSEs) on distributed computing environments and compares them uniformly within a derived comparison framework. We also discuss open problems and future research in the area.
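The SI techniques surveyed here can be illustrated with a toy Ant Colony Optimisation scheduler for a bag of independent jobs. This is a minimal sketch under invented assumptions (job lengths, machine speeds, and parameter values are all illustrative), not any specific published algorithm:

```python
import random

def aco_schedule(job_lengths, machine_speeds, n_ants=10, n_iters=30,
                 evaporation=0.5, seed=0):
    """Illustrative ACO for bag-of-tasks scheduling.

    Returns (assignment, makespan) where assignment[j] is the machine
    chosen for job j.
    """
    rng = random.Random(seed)
    n_jobs, n_machines = len(job_lengths), len(machine_speeds)
    # Expected time to compute job j on machine m.
    etc = [[job_lengths[j] / machine_speeds[m] for m in range(n_machines)]
           for j in range(n_jobs)]
    pheromone = [[1.0] * n_machines for _ in range(n_jobs)]
    best_assign, best_makespan = None, float("inf")

    for _ in range(n_iters):
        for _ in range(n_ants):
            ready = [0.0] * n_machines   # per-machine completion times
            assign = []
            for j in range(n_jobs):
                # Desirability: pheromone times inverse expected finish time.
                weights = [pheromone[j][m] / (ready[m] + etc[j][m])
                           for m in range(n_machines)]
                m = rng.choices(range(n_machines), weights=weights)[0]
                ready[m] += etc[j][m]
                assign.append(m)
            makespan = max(ready)
            if makespan < best_makespan:
                best_assign, best_makespan = assign, makespan
        # Evaporate, then reinforce the best-so-far schedule.
        for j in range(n_jobs):
            for m in range(n_machines):
                pheromone[j][m] *= (1.0 - evaporation)
            pheromone[j][best_assign[j]] += 1.0 / best_makespan
    return best_assign, best_makespan
```

Pheromone concentrates on job-machine pairings that appeared in good schedules, so later ants are biased toward low-makespan assignments.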

3.
徐顺, 王武, 张鉴, 姜金荣, 金钟, 迟学斌. Journal of Software, 2021, 32(8): 2365-2376
Developing high-performance computing (HPC) algorithms and software adapted to China's domestic heterogeneous computing environments is an important research topic, and it matters greatly for keeping domestic HPC software development apace with the rapid advancement of HPC hardware. This paper first surveys the status, trends, and challenges of HPC application software, and analyzes the parallel-algorithm characteristics of several typical HPC applications spanning multiple problems, scales, and fields, including cosmological N-body simulation, Earth system models, phase-field dynamics for computational materials, molecular dynamics, quantum computational chemistry, and lattice quantum chromodynamics. It then discusses strategies for domestic heterogeneous computing systems, distilling common issues across these typical algorithms and software, covering core algorithms, algorithm evolution, and optimization strategies. Finally, it summarizes HPC algorithms and software with respect to heterogeneous computing architectures.

4.
5.
Studies of computational scientists developing software for high-performance computing systems indicate that these scientists face unique software engineering issues. Previous failed attempts to transfer SE technologies to this domain haven't always taken these issues into account. To support scientific-software development, the SE community can disseminate appropriate practices and processes, develop educational materials specifically for computational scientists, and investigate the large-scale reuse of development frameworks.

6.
High-performance computing (HPC) problems are usually characterized by parallelizable subtasks and consume substantial computing resources during execution. Traditional virtual-machine-based cloud computing has been shown to handle such problems, but managing the distributed environment and designing distributed solutions make processing more complex. Function computing is a new serverless cloud computing paradigm; its automatic scaling and considerable computing resources combine well with HPC problems. However, cold-start delay is an unavoidable problem on public-cloud function computing platforms, and in HPC workloads with highly concurrent jobs this delay is further magnified. In this paper, we first analyze the completion time of a simple HPC task under cold-start and warm-start conditions and identify the causes of the additional delay. Based on these analyses, we combine time-series analysis tools with the platform's automatic scaling mechanism to propose an effective pre-warming method that can significantly reduce the cold-start delay of HPC tasks on function computing platforms.
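The pre-warming idea can be sketched with a naive time-series predictor. The abstract does not give the paper's actual forecasting method, so the moving-average forecast and the function names below are assumptions for illustration only:

```python
import math

def forecast_next(history, window=3):
    """Moving-average forecast of the next interval's concurrent job count."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def containers_to_prewarm(history, warm_now, window=3):
    """How many containers to warm up in advance so that the predicted
    burst of HPC jobs does not hit cold starts."""
    predicted = math.ceil(forecast_next(history, window))
    return max(0, predicted - warm_now)
```

With a rising arrival history the sketch requests extra warm containers; with ample warm capacity it requests none.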

7.
is a comprehensive set of tools for creating customized graphical user interfaces (GUIs). It draws on the concept of computing portals, here seen as interfaces to application-specific computing services for user communities. While it was originally designed for use in computational grids, it can be used in client/server environments as well. Compared to other GUI generators, it is more versatile and more portable: it can be employed in many different application domains and on different target platforms, and application experts (rather than computer scientists) are able to create their own individually tailored GUIs.

8.
This paper describes the functionality and software architecture of a generic problem‐solving environment (PSE) for collaborative computational science and engineering. A PSE is designed to provide transparent access to heterogeneous distributed computing resources, and is intended to enhance research productivity by making it easier to construct, run, and analyze the results of computer simulations. Although implementation details are not discussed in depth, the role of software technologies such as CORBA, Java, and XML is outlined. An XML‐based component model is presented. The main features of a Visual Component Composition Environment for software development, and an Intelligent Resource Management System for scheduling components, are described. Some prototype implementations of PSE applications are also presented. Copyright © 2000 John Wiley & Sons, Ltd.
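An XML-based component model of this kind can be pictured as a descriptor that the composition environment parses before scheduling a component. The element and attribute names below are invented for illustration; they are not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical component descriptor in the spirit of an XML-based
# component model; all element/attribute names are illustrative.
DESCRIPTOR = """
<component name="FlowSolver">
  <inport name="mesh" type="Grid"/>
  <outport name="field" type="Array"/>
  <resource cpus="16" memoryMB="4096"/>
</component>
"""

def parse_component(xml_text):
    """Extract the ports and resource needs a scheduler would consume."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "inports": [(p.get("name"), p.get("type")) for p in root.findall("inport")],
        "outports": [(p.get("name"), p.get("type")) for p in root.findall("outport")],
        "cpus": int(root.find("resource").get("cpus")),
    }
```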

9.
Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model for accelerator-based clusters enhanced over the state of the art. The framework can transparently utilize heterogeneous accelerators to derive high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that the framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the resource capabilities, and performs 26.9% better on average than a static execution approach.
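A capabilities-aware framework of this kind must, at minimum, divide map work in proportion to each accelerator's measured throughput. A minimal sketch of that partitioning step (the function name and the simple proportional rule are assumptions, not the paper's actual scheduler):

```python
def split_by_capability(n_items, throughputs):
    """Split n work items among devices in proportion to measured
    throughput (items/s), so faster accelerators get larger shares."""
    total = sum(throughputs)
    shares = [int(n_items * t / total) for t in throughputs]
    # Hand any rounding remainder to the fastest device.
    fastest = max(range(len(throughputs)), key=lambda i: throughputs[i])
    shares[fastest] += n_items - sum(shares)
    return shares
```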

10.
Adapting scientific computing problems to clouds using MapReduce (total citations: 1, self-citations: 0, other citations: 1)
Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-hungry scientific computing problems. To study this, we established the scientific computing cloud (SciCloud) project and environment on our internal clusters. The main goal of the project is to study the feasibility of establishing private clouds at universities, with which students and researchers can efficiently use the existing resources of university computer networks to solve computationally intensive scientific, mathematical, and academic problems. However, to run scientific computing applications on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, such as the MapReduce framework. This paper summarizes the challenges associated with reducing iterative algorithms to the MapReduce model. Algorithms used in scientific computing are divided into classes by how they can be adapted to the MapReduce model; examples from each class are reduced to the model and their performance is measured and analyzed. The study focuses mainly on the Hadoop MapReduce framework but also compares it to an alternative MapReduce framework called Twister, which is specifically designed for iterative algorithms. The analysis shows that Hadoop MapReduce has significant trouble with iterative problems while it suits embarrassingly parallel problems well, and that Twister can handle iterative problems much more efficiently. This work shows how to adapt algorithms from each class to the MapReduce model, what affects the efficiency and scalability of algorithms in each class, and, by mapping the advantages and disadvantages of the two frameworks, lets us judge which framework is more efficient for each. This study matters for scientific computing, which often relies on complex iterative methods to solve critical problems, because adapting such methods to cloud computing frameworks is not a trivial task.
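The iterative-algorithm problem can be made concrete with a tiny local MapReduce emulator running 1-D k-means: every iteration is a complete map-shuffle-reduce round, which is exactly the per-iteration job-restart overhead that hurts Hadoop and that Twister avoids. This is purely an illustrative sketch; nothing below comes from the SciCloud code:

```python
from collections import defaultdict

def map_reduce(map_f, reduce_f, records):
    """One MapReduce round executed locally: map, shuffle by key, reduce."""
    groups = defaultdict(list)
    for rec in records:
        for key, val in map_f(rec):
            groups[key].append(val)
    return {key: reduce_f(key, vals) for key, vals in groups.items()}

def kmeans(points, centroids, n_rounds=10):
    """1-D k-means where every iteration is a full MapReduce round,
    mimicking how Hadoop restarts a job per iteration."""
    for _ in range(n_rounds):
        def map_f(p):
            # Assign each point to its nearest centroid.
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            return [(nearest, p)]
        def reduce_f(key, vals):
            return sum(vals) / len(vals)   # new centroid = cluster mean
        new = map_reduce(map_f, reduce_f, points)
        centroids = [new.get(i, centroids[i]) for i in range(len(centroids))]
    return centroids
```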

11.
Computational biology research now faces a burgeoning volume of genome data, whose rigorous postprocessing requires an increased role for high-performance computing (HPC). Because developing HPC applications for computational biology problems is much more complex than developing the corresponding sequential applications, traditional programming techniques have proven inadequate. Many high-level programming techniques, such as skeleton- and pattern-based programming, have therefore been designed to give users new ways to build HPC applications without much effort; however, most remain absent from mainstream computational biology practice. In this paper, we present a new parallel pattern-based system prototype for computational biology. The underlying programming techniques are based on generic programming, a technique suited to the generic representation of abstract concepts. This allows the system to be built generically at the application level and thus provides good extensibility and flexibility. We show how the system can be used to develop HPC applications for popular computational biology algorithms and lead to significant runtime savings on distributed-memory architectures.
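Pattern-based systems of this kind typically expose generic skeletons such as a task farm. The sketch below shows the generic-programming flavour with a toy GC-content worker; the names and the sequential execution are illustrative assumptions, since a real system would distribute the tasks across nodes:

```python
from typing import Callable, Iterable, List, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def farm(worker: Callable[[T], R], tasks: Iterable[T]) -> List[R]:
    """Generic 'farm' skeleton: apply an arbitrary worker to independent
    tasks.  The sequential loop here only illustrates the interface."""
    return [worker(t) for t in tasks]

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq) / len(seq)
```

Because `farm` is generic in both task and result types, the same skeleton serves alignment scoring, motif search, or any other independent-task workload without modification.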

12.
Component-based programming with Babel (total citations: 1, self-citations: 1, other citations: 0)
To address the development difficulty, long development cycles, and high demands on developers in high-performance scientific computing programming, software component technology has been introduced into this field. The Common Component Architecture (CCA), jointly proposed by the US Department of Energy, the University of Utah, Indiana University, and others, is one of the projects studying component technology for high-performance scientific computing. This paper introduces CCA and Babel, the language-interoperability tool in the CCA framework, describes the use of Babel in detail through the NPB benchmark program IS, and analyzes the impact of Babel-based programming on program performance. Preliminary experiments show that Babel effectively solves the language-interoperability problem and can play a key role in component programming environments for scientific computing.
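Babel generates glue code from SIDL interface descriptions so that Fortran, C, C++, Python, and Java components can call one another. As a rough analogy only (this is not Babel itself), Python's ctypes shows what hand-written language interoperability looks like for a single C function:

```python
import ctypes
import ctypes.util

# Load the C math library and declare cos()'s signature so the foreign
# call is marshalled correctly from Python.
_libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(_libm_path)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

def c_cos(x: float) -> float:
    """Call the C library's cos() from Python."""
    return libm.cos(x)
```

Tools like Babel automate exactly this kind of signature declaration and argument marshalling, but across many languages and for whole component interfaces at once.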

13.
A reference architecture for scientific virtual laboratories (total citations: 3, self-citations: 0, other citations: 3)
H., E. C., A., C., L. O. Future Generation Computer Systems, 2001, 17(8): 999-1008
Recent advances in IT can be applied to support certain complex requirements in the scientific and engineering domains. In the experimental sciences, for instance, researchers need assistance in conducting their complex scientific experimentation and in collaborating with other scientists. The main requirements identified in such domains include, among others, the management of large data sets, distributed collaboration support, and high performance. The virtual laboratory project initiated at the University of Amsterdam aims to develop a hardware and software reference architecture and an open, flexible, and configurable laboratory framework that enable scientists and engineers to work on their experimentation problems while making optimal use of modern information technology. This paper describes the current stage of design of a reference architecture for this scientific virtual laboratory, focuses on the cooperative information management component of this architecture, and exemplifies its application to the experimentation domain of biology.

14.
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

15.
Identity authentication and key agreement are the first security issues to consider when accessing the Internet of Things. Traditional IoT identity authentication is based on a "cloud center - terminal device" architecture. With the introduction of edge computing, the architecture shifts to "edge device - terminal device", and traditional authentication methods no longer apply. In addition, the IoT contains multiple communication domains, and devices in different domains need cross-domain authentication and key agreement. To address these problems, this paper designs, for the edge computing environment, ...

16.
As an alternative to traditional computing architectures, cloud computing is now growing rapidly, though it is generally based on models such as cluster computing. Supercomputers are becoming ever more powerful, helping scientists gain a more in-depth understanding of the world. At the same time, clusters of commodity servers have become mainstream in the IT industry, powering not only large Internet services but also a growing number of data-intensive scientific applications, such as MPI-based deep learning. To reduce energy costs, increasing effort is being devoted to improving the energy consumption of HPC systems. Because I/O accesses account for a large portion of the execution time of data-intensive applications, designing energy-aware parallel I/O functions is critical to addressing HPC energy-efficiency challenges. As the de facto standard for designing parallel applications in cluster environments, the Message Passing Interface (MPI) is widely used in high-performance computing, so obtaining the energy consumption of MPI applications is critical for improving the energy efficiency of HPC systems. In this work we first present our energy measurement tool, a software framework that eases energy data collection in cluster environments, and then present an approach that optimizes the energy efficiency of parallel I/O operations. The energy scheduling algorithm is evaluated on a cluster.
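The measurement side can be sketched as a context manager that charges the wall-clock time of an I/O region against a power model. The fixed-wattage model is a loud simplification made for illustration: an actual framework would sample hardware energy counters (e.g. RAPL) rather than assume constant power.

```python
import time
from contextlib import contextmanager

class EnergyMeter:
    """Toy energy meter: integrates an assumed constant node power over
    the wall-clock time of a code region."""

    def __init__(self, assumed_watts=150.0):
        self.assumed_watts = assumed_watts
        self.joules = 0.0

    @contextmanager
    def measure(self):
        start = time.perf_counter()
        try:
            yield
        finally:
            # Energy = power x elapsed time under the constant-power model.
            self.joules += (time.perf_counter() - start) * self.assumed_watts
```

Wrapping each collective I/O call in `meter.measure()` would yield per-operation energy estimates that a scheduler could then try to minimize.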

17.
International Journal of Computer Mathematics, 2012, 89(15): 2047-2060
The large spatial scale associated with modelling strong ground motion in three dimensions requires enormous computational resources; for this reason, simulating soil shaking requires high-performance computing. The aim of this work is to present a new parallel approach to this kind of problem based on a domain decomposition technique. The main idea is to subdivide the original problem into local ones, which makes it possible to investigate large-scale problems that cannot be solved by a serial code. The performance of our parallel algorithm has been examined by analysing computational times, speed-up, and efficiency. Results of this approach are shown and discussed.
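The domain decomposition idea can be demonstrated on a 1-D Jacobi update: each subdomain updates only its own cells, reading one-cell halos from its neighbours, and the result is identical to the undecomposed update. This is a deliberately small sequential sketch of what each MPI rank would compute; it is not the paper's 3-D ground-motion code.

```python
def jacobi_step(u):
    """One Jacobi smoothing step on a 1-D grid with fixed endpoints."""
    return [u[0]] + [(u[i-1] + u[i+1]) / 2 for i in range(1, len(u)-1)] + [u[-1]]

def decomposed_jacobi_step(u, n_domains=2):
    """The same step computed per subdomain with one-cell halo reads,
    mimicking how each rank would update only its local block."""
    n = len(u)
    chunk = n // n_domains
    out = list(u)
    for d in range(n_domains):
        lo = d * chunk
        hi = n if d == n_domains - 1 else lo + chunk
        # Interior cells only; u[lo-1] and u[hi] act as halo values.
        for i in range(max(lo, 1), min(hi, n - 1)):
            out[i] = (u[i-1] + u[i+1]) / 2
    return out
```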

18.
This paper proposes a constraint programming framework to support solving the various constraint satisfaction problems that arise in whole-life-cycle equipment support. The framework comprises four mutually orthogonal facets: problem specification, business domain, life cycle, and solving strategy, which respectively organize the problem's objective function and constraints, the support content of the business domain, the task breakdown across life-cycle stages, and the problem-solving strategies and algorithms. By orthogonally combining the problem-specification, business-domain, and life-cycle facets, users can conveniently define, compose, and refine problem specifications. The framework also provides a set of heuristic rules to help users quickly identify an effective algorithm in the solving facet and apply it to a concrete problem specification.
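The solving-strategy facet ultimately dispatches to concrete constraint-satisfaction algorithms. As an illustrative sketch (the abstract does not disclose the framework's actual algorithms, so everything below is an invented example), here is a minimal backtracking solver with a most-constrained-variable heuristic:

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking CSP solver.  Constraints are predicates over
    a (possibly partial) assignment and must return True when they
    cannot yet be evaluated."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    # Most-constrained-variable heuristic: smallest domain first.
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

def different(a, b):
    """Binary constraint: a and b take different values (vacuously true
    while either variable is still unassigned)."""
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
```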

19.
Building structure design based on a parallel cloud computing model (total citations: 3, self-citations: 2, other citations: 1)
刘晓群, 邹欣, 范虹. Application of Electronic Technique, 2011, (10): 123-125, 130
Building on the combination of parallel computing for building structures and cloud computing, this paper proposes a hardware/software architecture and application method that integrates the two computing technologies. Combining cloud computing with parallel computing makes high-efficiency computation feasible for super-tall, super-long, and large-span complex building engineering problems. Theoretical analysis shows that this method outperforms traditional parallel computing techniques and offers a new approach to achieving high-efficiency computation for building structures.

20.
Load balancing is a very important and complex problem in computational grids. A computational grid differs from traditional high-performance computing systems in the heterogeneity of its computing nodes and communication links, as well as in the background workloads that may be present on the computing nodes. There is a need for algorithms that capture this complexity yet can be easily implemented and used to solve a wide range of load balancing scenarios. Artificial-life techniques have recently been used to solve a wide range of complex problems; their power stems from their ability to search very efficiently the large search spaces that arise in many combinatorial optimization problems. This paper studies several well-known artificial-life techniques to gauge their suitability for solving grid load balancing problems. Owing to their popularity and robustness, a genetic algorithm (GA) and tabu search (TS) are used to solve the grid load balancing problem. The effectiveness of each algorithm is shown on a number of test problems, especially when prediction information is not fully accurate. Performance comparisons with Min-min, Max-min, and Sufferage are also discussed.
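The genetic-algorithm approach can be sketched for the heterogeneous-node case: a chromosome assigns each job to a node, and fitness is the schedule makespan. The parameter values and operators below are illustrative defaults, not the paper's tuned configuration:

```python
import random

def ga_balance(job_lengths, node_speeds, pop_size=20, n_gens=40,
               mutation_rate=0.1, seed=0):
    """Toy GA for grid load balancing: chromosome[j] = node for job j."""
    rng = random.Random(seed)
    n_jobs, n_nodes = len(job_lengths), len(node_speeds)

    def makespan(chrom):
        load = [0.0] * n_nodes
        for job, node in enumerate(chrom):
            load[node] += job_lengths[job] / node_speeds[node]
        return max(load)

    pop = [[rng.randrange(n_nodes) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=makespan)
        survivors = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_jobs)         # one-point crossover
            child = a[:cut] + b[cut:]
            for j in range(n_jobs):                # random-reset mutation
                if rng.random() < mutation_rate:
                    child[j] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=makespan)
    return best, makespan(best)
```

Tabu search would replace the population with a single solution plus a memory of recently visited moves; both metaheuristics share the same makespan fitness function.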

