Similar Documents
19 similar documents found.
1.
The Gordon Bell Prize is the highest academic award in the field of high-performance computing (HPC) applications. Whereas the TOP500 list emphasizes measuring the performance of HPC systems, this prize focuses on innovations in HPC technology used to solve important scientific problems, and is internationally recognized as a key benchmark for the state of the art in HPC applications. This paper analyzes the prize-winning research of recent years, in particular the characteristics and scientific significance of the work that received the peak-performance and special awards. On this basis, it summarizes common patterns and offers some thoughts on how to advance research on HPC applications, as a reference for colleagues in China engaged in supercomputing application research.

2.
Small and medium-sized enterprises (SMEs) are an important pillar of China's economic development, and high-performance computing plays an important role in driving technological innovation, so combining the two is of great significance. However, the technical barrier to entry for HPC is relatively high, which to some extent prevents SMEs from applying the technology effectively. Building on an HPC environment, we construct an easy-to-use HPC application community. The goal is to explore a service mechanism for a numerical simulation and computing service community for SMEs based on the HPC environment, to lower the barrier for SMEs to use HPC technology, and to establish a set of service specifications oriented to SME needs.

3.
Parallel Processing in Java for Scientific Computing
This paper discusses the role of Java and Web technologies in scientific and engineering computing. Using three computation-intensive problems, it investigates the feasibility of Java as a language for high-performance parallel and distributed computing, and argues that Java is well positioned to become a leading language in the scientific and engineering domains.

4.
A Brief Analysis of the Needs and Development of High-Performance Computing Applications
Supported by HPC technology, HPC applications have made great contributions to scientific and technological innovation, and the two continue to develop in a mutually reinforcing way. Since 2004, the Supercomputing Center of the Computer Network Information Center, Chinese Academy of Sciences (CAS) has conducted several academy-wide surveys of HPC needs for the 11th Five-Year Plan period, producing a fairly comprehensive picture of the overall demand and its distribution across application fields; the results provide a useful reference for the construction of the CAS HPC environment and the development of HPC applications during that period. This paper first reviews the state of HPC applications at home and abroad, then, in light of the construction of the CAS HPC environment and the development of its HPC applications, analyzes the HPC application requirements of CAS during the 11th Five-Year Plan period, and finally looks ahead to the prospects for HPC applications in China.

5.
High-performance computing is becoming the third mode of scientific research, after theory and experiment. Compared with developed countries such as the United States and Japan, Chinese universities began experimenting with HPC technology relatively late, and their HPC applications are still at an early stage. What, then, is the rationale for doing research with HPC? Where do China's practical HPC applications lag behind international peers? Which construction model is better at turning an HPC center into real productivity for a university? What bottlenecks limit the improvement of HPC application capability at Chinese universities? The story of building the HPC platform at Xiangtan University helps answer these questions.

6.
When using NetSolve in the application environment of the 自强2000 (Ziqiang 2000) high-performance cluster, several problems with the system were found. To address them, this paper proposes a grid system for high-performance computing that incorporates Web Service technology and discusses its architecture. The system effectively combines HPC resources with grid technology and has achieved good results in practice.

7.
Wide-Area High-Performance Metacomputing Technology for the 21st Century
1. Introduction. High-performance parallel and distributed computing is a core high technology bearing on national strategic interests, an important foundation of the national economy, and a major basic research problem. With ever faster microprocessors, the rapid development of network communication technology, the growth of networks in scale and speed, and especially the rise and wide adoption of Internet technology, networks now crisscross the entire world. At the same time, application demands are moving toward high performance, diversity, and multiple functions, and many large-scale scientific...

8.
A Service-Oriented Strategy for High-Performance Computing on Grids
The development of grid technology and Web services has given rise to service computing. This paper revisits high-performance computing on traditional computational grids under a service-oriented architecture. First, considering the characteristics of HPC applications together with the service-oriented approach, it proposes a hierarchical resource-management architecture. Second, it analyzes the program structure of HPC applications suited to grid environments and represents it as a directed acyclic graph (DAG). Third, based on this resource-management architecture and application model, it proposes an improved dynamic-priority scheduling algorithm. Finally, simulation experiments analyze the performance of the proposed algorithm; the results show that it is well suited to grid environments, which validates the effectiveness of the proposed service-oriented strategy for grid HPC.
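The abstract names two concrete ingredients, a DAG model of the application and an improved dynamic-priority scheduler, without giving their details. The Java sketch below is only a generic illustration of that combination: a toy task DAG dispatched by an upward-rank priority. The `Task` class, the cost field, and the priority rule are hypothetical and are not taken from the paper.

```java
import java.util.*;

/** Minimal DAG task model: a node with a hypothetical cost and successor edges. */
class Task {
    final String name;
    final int cost;                     // assumed execution cost in abstract units
    final List<Task> successors = new ArrayList<>();
    int unfinishedPredecessors = 0;     // in-degree still to be satisfied

    Task(String name, int cost) { this.name = name; this.cost = cost; }

    void addSuccessor(Task t) { successors.add(t); t.unfinishedPredecessors++; }

    /** A simple priority: longest remaining path from this task (upward rank). */
    int priority() {
        int best = 0;
        for (Task s : successors) best = Math.max(best, s.priority());
        return cost + best;
    }
}

public class DagListScheduler {
    public static void main(String[] args) {
        Task a = new Task("A", 3), b = new Task("B", 2), c = new Task("C", 4), d = new Task("D", 1);
        a.addSuccessor(b); a.addSuccessor(c); b.addSuccessor(d); c.addSuccessor(d);

        // Ready queue ordered by upward-rank priority, highest first.
        PriorityQueue<Task> ready =
                new PriorityQueue<>(Comparator.comparingInt(Task::priority).reversed());
        ready.add(a);

        while (!ready.isEmpty()) {
            Task t = ready.poll();          // dispatch the highest-priority ready task
            System.out.println("run " + t.name + " (priority " + t.priority() + ")");
            for (Task s : t.successors)
                if (--s.unfinishedPredecessors == 0) ready.add(s);
        }
    }
}
```

For this toy graph the dispatch order is A, C, B, D, favouring the longer branch. The paper's algorithm additionally adapts priorities to the grid's resource state, which this sketch does not model.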

9.
孙俊 《福建电脑》2007,(2):210-210,184
Based on the current state and future trends of high-performance computing at home and abroad, and on its practical application domains, this paper systematically analyzes Fujian Province's current demand for HPC and the likely direction of its development. The aim is to show that accelerating HPC application and research in the province is a sensible move in step with global progress.

10.
吕士颖 《福建电脑》2007,(4):24-24,26
High-performance computing has developed rapidly in recent years, is being applied ever more widely, and has become an important indicator of a country's overall strength. China has reached an internationally advanced level in HPC hardware, but still lags far behind in HPC software and applications. In view of this situation, China should adopt a series of measures in response.

11.
As demand for high-performance computing from scientific research and commercial applications keeps growing, the performance and scale of HPC systems have expanded rapidly. However, sharply rising power consumption severely constrains the design and operation of HPC systems, making low-power techniques a key technology in the field. As a core component of the whole system, the job scheduler maps user-submitted jobs onto limited system resources, so its energy efficiency plays a crucial role in controlling and regulating the energy consumption of the entire HPC system. This paper first introduces the main energy-efficiency techniques and commonly used job-scheduling policies, then analyzes the energy efficiency of current HPC job scheduling, and finally discusses the challenges it faces and future research directions.

12.
With the development of virtualization and cloud computing, more and more HPC applications run on cloud resources. In a virtualization-based HPC cloud, an HPC application runs in multiple virtual machines that may be placed on different physical nodes. If the virtual machines of several communication-intensive jobs are placed on the same physical node, they compete for that node's network I/O resources; when their combined demand exceeds the node's network I/O bandwidth, the performance of communication-intensive jobs degrades severely. To address this contention, this paper proposes NLPA, a virtual machine placement algorithm based on network I/O load balancing, which uses a load-balancing strategy to reduce contention for network I/O resources. Experiments show that, compared with a greedy algorithm on the same set of HPC job benchmarks, NLPA performs better in job completion time, network I/O throughput, and network I/O load balance.
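The abstract compares NLPA with a greedy placement but does not spell out either algorithm, so the sketch below only contrasts the two ideas at the level the abstract describes: first-fit placement versus picking the feasible node with the lowest accumulated network I/O load. Node capacities, VM demands, and all identifiers are hypothetical; the real NLPA algorithm is not reproduced.

```java
import java.util.*;

/** Physical node with a hypothetical network I/O bandwidth budget (MB/s). */
class Node {
    final String name;
    final double ioCapacity;
    double ioLoad = 0;                      // sum of I/O demand of VMs placed here
    Node(String name, double ioCapacity) { this.name = name; this.ioCapacity = ioCapacity; }
}

public class IoAwarePlacement {
    /** Greedy baseline: first node with enough spare I/O bandwidth. */
    static Node placeGreedy(List<Node> nodes, double vmIoDemand) {
        for (Node n : nodes)
            if (n.ioLoad + vmIoDemand <= n.ioCapacity) return n;
        return null;                        // no feasible node
    }

    /** Load-balancing placement: feasible node with the lowest current I/O load. */
    static Node placeBalanced(List<Node> nodes, double vmIoDemand) {
        return nodes.stream()
                .filter(n -> n.ioLoad + vmIoDemand <= n.ioCapacity)
                .min(Comparator.comparingDouble((Node n) -> n.ioLoad))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<Node> nodes = Arrays.asList(new Node("n1", 1000), new Node("n2", 1000));
        double[] vmDemands = {400, 300, 300, 200};   // hypothetical per-VM I/O demand
        for (double d : vmDemands) {
            Node n = placeBalanced(nodes, d);        // swap in placeGreedy to compare
            if (n != null) { n.ioLoad += d; System.out.println("VM(" + d + ") -> " + n.name); }
        }
    }
}
```

With the demands shown, the balanced variant ends with both nodes at 600 MB/s, whereas first-fit would pack 400+300+300 onto n1 and drive it to its 1000 MB/s limit, which is exactly the kind of contention the paper targets.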

13.
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
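For readers unfamiliar with MPJ Express: it implements mpiJava 1.2 style bindings, an `mpi` package with `MPI.Init`, `MPI.COMM_WORLD`, and `Rank`/`Size`/`Send`/`Recv` methods, so a minimal parallel Java program has roughly the shape sketched below. This is a generic two-process ping, not code from the Gadget-2 or FDTD ports evaluated in the paper.

```java
import mpi.*;   // MPJ Express bindings, assumed to be on the classpath

public class PingExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();

        int[] buf = new int[1];
        if (rank == 0 && size > 1) {
            buf[0] = 42;
            // Blocking send of one int to rank 1 with tag 0.
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 0);
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0);
            System.out.println("rank 1 received " + buf[0]);
        }

        MPI.Finalize();
    }
}
```

Under a typical MPJ Express installation this would be compiled against the distribution's jar and launched with its runtime, e.g. `mpjrun.sh -np 2 PingExample`; the exact launcher name may vary by version and platform.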

14.
The field of scientific workflow management systems has grown significantly as applications start using them successfully. In 2007, several active researchers in scientific workflow development presented the challenges for the state of the art in workflow technologies at that time. Many issues have been addressed, but one of them, named 'dynamic workflows and user steering', still has many open problems despite the contributions presented in recent years. This article surveys the early and current efforts in this topic and proposes a taxonomy to identify the main concepts related to addressing issues in dynamic steering of high performance computing (HPC) in scientific workflows. The main concepts are related to putting the human in the loop of the workflow lifecycle, involving user support in real-time monitoring, notification, analysis and interference by adapting the workflow execution at runtime.

15.
Today, various Science Gateways created in close collaboration with scientific communities provide access to remote and distributed HPC, Grid and Cloud computing resources and large-scale storage facilities. However, as we have observed, there are still many entry barriers for new users and various limitations for active scientists. In this paper we present our latest achievements and software solutions that significantly simplify the use of large-scale and distributed computing. We describe several Science Gateways that have been successfully created with the help of our application tools and the QCG (Quality in Cloud and Grid) middleware, in particular Vine Toolkit, QCG-Portal and QCG-Now, and make the use of HPC, Grid and Cloud more straightforward and transparent. Additionally, we share the best practices and lessons learned after creating, jointly with user communities, many domain-specific Science Gateways, e.g. dedicated to physicists, medical scientists, chemists, engineers and external communities performing multi-scale simulations. As our deployed software solutions have recently reached a critical mass of active users in the PLGrid e-infrastructure in Poland, we also discuss in this paper how changing technologies, visual design and user experience could impact the way we should re-design Science Gateways or even develop new attractive tools, e.g. desktop or mobile-based applications in the future. Finally, we present information and statistics regarding the behaviour of users to help readers understand how new capabilities and functionalities may influence the growth of user interest in Science Gateways and HPC technologies.

16.
The energy consumption of High Performance Computing (HPC) systems, which are the key technology for many modern computation-intensive applications, is rapidly increasing in parallel with their performance improvements. This increase leads HPC data centers to focus on three major challenges: the reduction of overall environmental impacts, which is driven by policy makers; the reduction of operating costs, which are increasing due to rising system density and electrical energy costs; and the 20 MW power consumption boundary for Exascale computing systems, which represent the next thousandfold increase in computing capability beyond the currently existing petascale systems. Energy efficiency improvements will play a major part in addressing these challenges. This paper presents a toolset, called Power Data Aggregation Monitor (PowerDAM), which collects and evaluates data from all aspects of the HPC data center (e.g. environmental information, site infrastructure, information technology systems, resource management systems, and applications). The aim of PowerDAM is not to improve the HPC data center's energy efficiency itself, but to collect energy-relevant data for analysis, without which energy-efficiency improvements would be non-trivial and incomplete. Thus, PowerDAM represents a first step towards a truly unified energy efficiency evaluation toolset needed for improving the overall energy efficiency of HPC data centers.

17.
18.
As an alternative to the traditional computing architecture, cloud computing is now growing rapidly, although it is generally built on models such as cluster computing. Supercomputers are becoming ever more powerful, helping scientists gain a more in-depth understanding of the world. At the same time, clusters of commodity servers have become mainstream in the IT industry, powering not only large Internet services but also a growing number of data-intensive scientific applications, such as MPI-based deep learning applications. To reduce energy costs, more and more effort is being made to improve the energy consumption of HPC systems. Because I/O accesses account for a large portion of the execution time of data-intensive applications, it is critical to design energy-aware parallel I/O functions to address the challenges of HPC energy efficiency. As the de facto standard for designing parallel applications in cluster environments, the Message Passing Interface (MPI) has been widely used in high performance computing, so obtaining the energy consumption of MPI applications is critical for improving the energy efficiency of HPC systems. In this work we first present our energy measurement tool, a software framework that eases energy collection in a cluster environment, and then present an approach that optimises the energy efficiency of parallel I/O operations. The energy scheduling algorithm is evaluated on a cluster.

19.
With the rise in the complexity of parallel applications, the need for computational power is continually growing. Recent trends in High-Performance Computing (HPC) have shown that improvements in single-core performance will not be sufficient to face the challenges of an exascale machine: we expect an enormous growth in the number of cores as well as a multiplication of the data volume exchanged across compute nodes. To scale applications up to exascale, the communication layer has to minimize the time spent waiting for network messages. This paper presents a message progression mechanism based on Collaborative Polling, which allows efficient, auto-adaptive overlapping of communication phases with computation. The approach is new in that it increases an application's overlap potential without introducing the overheads of threaded message progression. We designed our approach for InfiniBand within MPC, a thread-based MPI runtime. We evaluate the gain from Collaborative Polling on the NAS Parallel Benchmarks and three scientific applications, where we show significant improvements in communication times, up to a factor of 2.
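Collaborative Polling lives inside the MPC runtime rather than in application code, so it cannot be shown directly here. The sketch below, kept in Java with the same mpiJava-style API as the earlier example purely for consistency (MPC itself targets C/Fortran MPI codes), only illustrates the generic pattern whose overlap such progression mechanisms try to realize: post non-blocking communication, compute, then wait.

```java
import mpi.*;

public class OverlapSketch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                       // sketch assumes exactly two ranks
        int rank = MPI.COMM_WORLD.Rank();
        int peer = (rank == 0) ? 1 : 0;

        double[] sendBuf = new double[1 << 16];
        double[] recvBuf = new double[1 << 16];

        // Post non-blocking communication first ...
        Request rs = MPI.COMM_WORLD.Isend(sendBuf, 0, sendBuf.length, MPI.DOUBLE, peer, 0);
        Request rr = MPI.COMM_WORLD.Irecv(recvBuf, 0, recvBuf.length, MPI.DOUBLE, peer, 0);

        // ... then perform computation that does not depend on the message,
        // so the communication can progress in the background.
        double local = 0;
        for (int i = 0; i < sendBuf.length; i++) local += Math.sin(i);

        // Only block once the overlapping work is done.
        rs.Wait();
        rr.Wait();

        System.out.println("rank " + rank + " done, local = " + local);
        MPI.Finalize();
    }
}
```

Without progression inside the runtime, the posted Isend/Irecv may make little headway until the Wait calls; as the abstract describes, Collaborative Polling is a way for the runtime to advance such pending messages while the application keeps computing.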
