20 similar documents found (search time: 484 ms)
1.
Research on an MPICH2-Based High-Performance Computing Cluster System (total citations: 4; self: 0, other: 4)
Universities and research institutes currently have a strong demand for high-performance computing, but commercial supercomputers, though powerful, are expensive, while these institutions already own large numbers of ordinary PCs and network equipment. To obtain high-performance computing capability from existing hardware, this paper studies how to build an MPICH2-based high-performance computing cluster from PCs running Linux; a 16-node system was assembled and benchmarked with the High-Performance Linpack (HPL) suite. The results show that this approach to building an HPC cluster is practical and a good low-cost route to high-performance computing capability.
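The abstract reports an HPL benchmark but not the measured numbers. The sketch below only formalizes the standard arithmetic behind such a test: theoretical peak (Rpeak) from the hardware specification, and efficiency as the ratio of measured Rmax to Rpeak. All hardware figures here are hypothetical examples, not values from the paper.

```python
# Theoretical peak (Rpeak) and Linpack efficiency for a small cluster.
# Node count, clock, and FLOPs/cycle below are hypothetical illustrations.

def rpeak_gflops(nodes, cores_per_node, ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS: nodes * cores * clock * FLOPs per cycle."""
    return nodes * cores_per_node * ghz * flops_per_cycle

def hpl_efficiency(rmax, rpeak):
    """Fraction of the theoretical peak actually achieved by the HPL run."""
    return rmax / rpeak

peak = rpeak_gflops(nodes=16, cores_per_node=1, ghz=2.4, flops_per_cycle=2)
print(peak)                        # theoretical peak in GFLOPS
print(hpl_efficiency(50.0, peak))  # efficiency for a hypothetical Rmax of 50
```

In practice HPL efficiency on commodity gigabit clusters of that era was well below 1.0, which is why the ratio, not just Rmax, is the interesting number.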
4.
The HTCondor and SLURM clusters of the high-energy physics computing platform provide data-processing services for several high-energy physics experiments; however, HTCondor schedules parallel jobs inefficiently, SLURM copes poorly with large volumes of serial jobs, and the platform's overall resource-management and scheduling policies are overly simple. To meet the demands of a heavily loaded high-energy physics cluster, a job-management layer is added on top of the traditional schedulers, forming a two-tier job-scheduling system that schedules serial and parallel jobs efficiently while keeping resource usage fair across experiment groups, and gives users fine-grained control over their jobs. Tests show that the two-tier system supports fast submission of large batches of high-energy physics jobs, makes full use of the platform's aggregate resources, and delivers good scheduling performance.
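The abstract does not spell out how the management layer enforces inter-group fairness. The sketch below shows one common realization of that idea, not the paper's actual design: always dispatch the next job from whichever group has consumed the smallest fraction of its resource share. Class and group names are invented for illustration.

```python
from collections import deque

# Hypothetical sketch of a fair-share dispatch rule for a job-management
# layer: pick the most under-served group (lowest used/share ratio).

class FairShareDispatcher:
    def __init__(self, shares):
        self.shares = shares                       # group -> share weight
        self.used = {g: 0 for g in shares}         # group -> cores dispatched
        self.queues = {g: deque() for g in shares}

    def submit(self, group, job):
        self.queues[group].append(job)

    def next_job(self):
        # Among groups with pending jobs, pick the most under-served one.
        pending = [g for g in self.queues if self.queues[g]]
        if not pending:
            return None
        g = min(pending, key=lambda g: self.used[g] / self.shares[g])
        job = self.queues[g].popleft()
        self.used[g] += job["cores"]
        return job

d = FairShareDispatcher({"atlas": 2, "cms": 1})
d.submit("atlas", {"name": "a1", "cores": 8})
d.submit("cms", {"name": "c1", "cores": 8})
d.submit("atlas", {"name": "a2", "cores": 8})
print(d.next_job()["name"])  # a1: both ratios start at 0, first group wins
```

After "a1" is dispatched, the atlas group's used/share ratio rises, so the next pick alternates to cms even though atlas has more jobs queued — the essence of share-based fairness.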
5.
This paper reviews the architecture and main advantages of clusters and the emergence of cluster-based high-performance computing systems, then analyzes how such systems are structured and built: network deployment, the storage system, compute nodes, management nodes, and login nodes. On this basis, a Linux-based cluster high-performance computing system was constructed.
7.
In a cluster system, the design of the scheduling module is critical to overall performance. For clusters running Linux, this paper proposes an implementation of a co-scheduling module that achieves parallel-job-level scheduling without modifying the Linux kernel, greatly improving cluster performance and the coordinated utilization of resources.
9.
To cope with massive seismic data and ever-larger parallel clusters, a multi-dimensional imaging-space decomposition algorithm is proposed. Exploiting the multiple levels of parallelism in large cluster systems, the imaging space is first decomposed along the offset direction, then further split along the in-line direction until each piece fits within a compute node's physical memory, and finally decomposed into bins on the 2-D surface. In the implementation, common-offset imaging spaces are mapped onto groups of compute nodes, and within a node the bins are divided evenly among CPU cores in round-robin fashion. Without increasing the volume of data communication, the parallel algorithm lowers memory requirements, reduces communication overhead and synchronization time, and improves data locality. Tests on field data show that it outperforms traditional output-parallel and input-parallel algorithms in both performance and scalability, still achieving good speedup on jobs scheduled across as many as 497 nodes and 7,552 threads.
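The innermost level of the decomposition — dividing surface bins evenly among CPU cores round-robin — can be sketched directly. Only the round-robin rule comes from the abstract; the function name and counts are illustrative.

```python
# Round-robin assignment of imaging-space bins to CPU cores, as described
# for the intra-node level of the decomposition. Counts are examples.

def round_robin(bins, n_cores):
    """Assign bin i to core i % n_cores; returns one bin list per core."""
    assignment = [[] for _ in range(n_cores)]
    for i, b in enumerate(bins):
        assignment[i % n_cores].append(b)
    return assignment

cores = round_robin(list(range(10)), 4)
print(cores)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

The per-core bin counts differ by at most one, which is what keeps the intra-node load balanced without any communication.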
11.
This paper describes an object-oriented parallel programming approach to particle simulation in Java and evaluates its performance on a PC cluster of 16 Pentium III 1.6 GHz CPUs. To improve computational performance, it also presents mixed Java/Fortran programming via JNI, replacing the computation-heavy parts of the program with Fortran. The results show that mixed Java/Fortran programming is an effective approach to scientific computing.
12.
Parallel Computation of Complex Molecular Dynamics on an MPICH-Based Distributed Cluster (total citations: 1; self: 1, other: 0)
On a LAN cluster built with MPICH, a large-scale parallel computing platform was assembled from the molecular-dynamics parallel code Protomol and the 3-D molecular visualization package VMD, and several representative complex molecular-dynamics simulations were run. The results show that parallel computing makes sustained, effective use of existing computer resources while greatly improving computational efficiency, achieving speedups above 3x on the existing parallel cluster, and provides a feasible approach for further study of complex molecular dynamics.
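A reported speedup can be put in context with Amdahl's law, which bounds the achievable speedup by the serial fraction of the work. The serial-fraction value below is a hypothetical illustration, not a figure from the paper.

```python
# Amdahl's-law sketch: the upper bound on speedup when a fixed fraction
# of the work is inherently serial. The 10% figure is illustrative only.

def amdahl_speedup(serial_fraction, n_procs):
    """Ideal speedup on n_procs given the serial fraction of the workload."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With 10% serial work, even 8 processes give at most about 4.7x:
print(round(amdahl_speedup(0.10, 8), 2))
```

This is why a speedup "above 3x" on a small LAN cluster is a respectable result: communication over commodity Ethernet effectively enlarges the serial fraction.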
13.
To perform FBP and OR-OSEM fan-beam reconstruction of very large images (2048×2048), the authors used parallel processing on a PC cluster. The reconstruction algorithms were rewritten in parallel form, with the computation divided evenly among the CPUs by projection angle. The parallel results show that reconstruction speed scales almost linearly with the number of CPUs, reaching a speedup of nearly 25 with 25 CPUs. Online reconstruction of very large images can thus be realized at high speed on a CPU array, a technique of importance for the development of high-precision CT.
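The abstract says the work was split evenly by projection angle; the helper below sketches one such split into contiguous, near-even index ranges. Only the split-by-angle idea comes from the abstract; the angle count and function name are illustrative.

```python
# Even split of projection-angle indices across CPUs for parallel
# FBP / OR-OSEM reconstruction. Counts below are examples.

def split_angles(n_angles, n_cpus):
    """Contiguous, near-even ranges of angle indices, one per CPU."""
    base, extra = divmod(n_angles, n_cpus)
    ranges, start = [], 0
    for cpu in range(n_cpus):
        size = base + (1 if cpu < extra else 0)
        ranges.append(range(start, start + size))
        start += size
    return ranges

# e.g. 1000 projection angles over 25 CPUs -> 25 chunks of 40 angles each
chunks = split_angles(1000, 25)
print(len(chunks), len(chunks[0]))
```

Because each angle's back-projection is independent, such a split has no inter-CPU data dependency, which is consistent with the near-linear scaling the paper reports.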
14.
Under Red Hat Linux 9.0, a dual-CPU server was set up and MPICH was used to realize parallel computation across the two CPUs. The computational efficiency of the dual-CPU parallel system was analyzed with the molecular-dynamics package Amber 7.0; the results show that the system makes effective use of existing computing resources while raising computational efficiency considerably. On this system, molecular-dynamics simulations of complexes of nucleic-acid macromolecules with small drug molecules were carried out, providing a fairly detailed and clear picture of structural changes at the molecular level.
16.
Previously, large-scale fluid dynamics problems required supercomputers, such as the Cray, and took a long time to solve. Clustering technology has changed the world of the supercomputer and fluid dynamics: affordable cluster computers have replaced the huge and expensive supercomputers in the computational fluid dynamics (CFD) field in recent years, and even supercomputers are now designed as clusters of high-performance servers. This paper describes the configuration of an affordable PC-hardware cluster and the parallel computing performance of a commercial CFD code on it. A multi-core cluster running the Linux operating system was built from affordable PC hardware and low-cost high-speed gigabit network switches instead of Myrinet or InfiniBand. The cluster consists of 52 cores and is easily expandable to 96 cores in the current configuration. For operating software, the Rocks cluster package was installed on the master node to minimize maintenance. The cluster was designed to solve large fluid dynamics and heat transfer problems in parallel. Using a commercial CFD package, performance was evaluated by varying the number of CPU cores involved in the computation: a forced-convection problem around a linear cascade was solved with the CFX program, and the heat transfer coefficient along the surface of the turbine cascade was simulated. The mesh of the model problem has 1.5 million nodes, and the steady computation was run for 2,000 time-integrations. The results were compared with previously published heat-transfer experimental data to check their reliability, and simulation and experiment showed good agreement. The performance of the PC cluster increased with the number of cores up to 16; the elapsed computation time with 16 cores was approximately one third of that with 4 cores.
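The reported 3x gain from 4 to 16 cores corresponds to a strong-scaling efficiency of 75% over that range. The helper below just formalizes that arithmetic; the absolute elapsed times are invented to match the paper's ratio, not raw measurements.

```python
# Strong-scaling speedup and efficiency between two core counts.
# Only the 3x ratio (16 vs 4 cores) comes from the abstract; the
# elapsed-time values are made up to reproduce that ratio.

def scaling(t_ref, cores_ref, t_new, cores_new):
    """Speedup relative to the reference run, and parallel efficiency."""
    speedup = t_ref / t_new
    efficiency = speedup / (cores_new / cores_ref)
    return speedup, efficiency

s, e = scaling(t_ref=300.0, cores_ref=4, t_new=100.0, cores_new=16)
print(s, e)  # 3.0 0.75
```

An efficiency of 0.75 over a 4x core increase is typical for gigabit-Ethernet clusters, where network latency starts to dominate as per-core work shrinks.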
17.
Rajkumar Buyya, Software: Practice and Experience, 2000, 30(7): 723-739
Workstation/PC clusters have become a cost-effective solution for high performance computing. C-DAC's PARAM 10000 (or OpenFrame, its internal code name) is a large cluster of high-performance workstations interconnected through low-latency, high-bandwidth networks. The management and control of such a huge system is a tedious and challenging task, since workstations/PCs are typically designed to work as standalone systems rather than as parts of a cluster. We have designed and developed a tool called PARMON that allows effective monitoring and control of large clusters. It supports the monitoring of critical system resource activities and their utilization at three levels: entire system, node, and component. It also allows the monitoring of multiple instances of the same component, for instance multiple processors in SMP-type cluster nodes. PARMON is a portable, flexible, interactive, scalable, location-transparent, and comprehensive environment based on client-server technology. Its major components are the parmon-server, which provides information on system resource activities and utilization, and the parmon-client, a GUI-based client that interacts with the parmon-server and users to gather data in real time and present it graphically for visualization. The client is developed as a Java application, while the server is developed as a multithreaded server using C and POSIX/Solaris threads, since Java does not support interfaces to access system internals. PARMON is regularly used to monitor the PARAM 10000 supercomputer, a cluster of 48+ Ultra-4 workstations running the Solaris operating system. The recent popularity of Beowulf-class clusters (dedicated Linux clusters) in terms of price-performance ratio motivated us to port PARMON to Linux (accomplished by porting the system-dependent portions of the parmon-server). This enables management/monitoring of both Solaris- and Linux-based clusters (federated clusters) through a single user interface.
Copyright © 2000 John Wiley & Sons, Ltd.
19.
Implementation and Application of MPICH Large-Scale Parallel Computing on a PC Cluster (total citations: 5; self: 2, other: 5)
Under the Windows 2000 Server operating system, a parallel cluster was built on a PC LAN using MPICH, and three parallel-computing examples were completed by calling the MPI message-passing library from VC 6.0. An MPICH-compliant PC parallel cluster is simple to configure, stable, user-friendly, and cost-effective, and can make sustained use of existing computer resources while greatly improving computational efficiency.
20.
As hardware is upgraded, the computing power of the nodes in a cluster becomes uneven, and this heterogeneity leaves the cluster's computing resources unbalanced. The Spark big-data platform currently ignores cluster heterogeneity and per-node resource utilization when scheduling tasks, limiting system performance. This paper builds an evaluation-index system for cluster nodes and proposes representing a node's computing power by a priority. The proposed node-priority adjustment algorithm dynamically updates each node's priority according to its state during task execution, and the node-priority-based Spark Dynamic Adaptive Scheduling Algorithm (SDASA) assigns tasks according to the real-time priority values. Experiments show that SDASA shortens task execution time in the cluster and thus improves overall computing performance.
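The abstract names neither the evaluation indices nor the SDASA formula, so the score below is a generic illustration of the underlying idea: turn a node's static capability and current idleness into a single priority number. All metrics, weights, and normalization ceilings are hypothetical.

```python
# Hypothetical node-priority score for heterogeneity-aware scheduling:
# a weighted sum of normalized capability and current idle resources.
# The actual index system and SDASA formula are not given in the abstract.

def node_priority(cpu_ghz, cores, free_mem_frac, load_avg,
                  weights=(0.4, 0.3, 0.2, 0.1)):
    w_speed, w_cores, w_mem, w_load = weights
    return (w_speed * cpu_ghz / 4.0        # clock, normalized to a 4 GHz ceiling
            + w_cores * cores / 64.0       # core count, normalized to 64
            + w_mem * free_mem_frac        # fraction of memory currently free
            + w_load * max(0.0, 1.0 - load_avg / cores))  # idleness

fast_idle = node_priority(3.5, 32, 0.8, 2.0)
old_busy = node_priority(2.0, 8, 0.3, 7.5)
print(fast_idle > old_busy)  # the faster, idler node ranks higher
```

A scheduler would then sort candidate nodes by this score and recompute it as load metrics change, which is the dynamic-adjustment behavior the abstract describes.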