Similar Literature
20 similar records found.
1.
Computers & Fluids, 2006, 35(8-9): 910-919
This report presents a comprehensive survey of the effect of different data layouts on the single processor performance characteristics for the lattice Boltzmann method, both for commodity “off-the-shelf” (COTS) architectures and tailored HPC systems, such as vector computers. We cover modern 64-bit processors ranging from IA32 compatible (Intel Xeon/Nocona, AMD Opteron), superscalar RISC (IBM Power4), IA64 (Intel Itanium 2) to classical vector (NEC SX6+) and novel vector (Cray X1) architectures. Combining different data layouts with architecture-dependent optimization strategies, we demonstrate that the optimal implementation strongly depends on the architecture used. In particular, the correct choice of the data layout could supersede complex cache-blocking techniques in our kernels. Furthermore, our results demonstrate that vector systems can outperform COTS architectures by one order of magnitude.
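The data-layout question the survey revolves around can be made concrete with a small sketch (generic code under assumed conventions, not the authors' kernels): for a D3Q19 lattice Boltzmann model, the particle distribution functions can be stored either with all 19 populations of a cell adjacent in memory (array-of-structures) or with each velocity direction as its own contiguous stream (structure-of-arrays). Cache-based COTS processors and vector machines tend to favour different choices, which is exactly the effect the paper measures.

#include <stddef.h>

#define Q 19   /* discrete velocities of a D3Q19 lattice */

/* Array-of-structures: all Q populations of one cell are adjacent in memory. */
double density_aos(const double *f, size_t cell) {
    double rho = 0.0;
    for (int q = 0; q < Q; q++)
        rho += f[cell * Q + q];               /* unit stride within one cell */
    return rho;
}

/* Structure-of-arrays: population q of every cell forms one contiguous stream. */
double density_soa(const double *f, size_t cell, size_t ncells) {
    double rho = 0.0;
    for (int q = 0; q < Q; q++)
        rho += f[(size_t)q * ncells + cell];  /* stride of ncells between populations */
    return rho;
}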

2.
Cheng, H. Computer, 1989, 22(9)
Vector pipelining and chaining are clarified through the use of timing and pipeline diagrams of the instruction execution process. A technique for evaluating the performance of the concurrent vector operations of vector processors is demonstrated by testing two of the most widely used computers with vector facilities: the IBM 3090 and the Cray X-MP. On the basis of the testing results analyzed at the assembler level, suggestions are given for machine users and designers about vectorization on these two machines. The ideas presented can be applied to other vector processors. The actual implementations, however, may differ, depending on individual machine architecture.
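As a first-order model of what chaining buys (a textbook approximation, not the paper's measured figures): a vector instruction of length n on a pipeline with startup time s completes in roughly s + n cycles. Two dependent vector operations therefore take about (s1 + n) + (s2 + n) = s1 + s2 + 2n cycles when run back to back, but only about s1 + s2 + n cycles when the second pipeline is chained to the first and consumes results as they emerge; for long vectors the dependent pair approaches the cost of a single operation.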

3.
The subset-sum problem is a well-known NP-complete combinatorial problem that is solvable in pseudo-polynomial time, that is, time proportional to the number of input objects multiplied by the sum of their sizes. This product defines the size of the dynamic programming table used to solve the problem. We show how this problem can be parallelized on three contemporary architectures, that is, a 128-processor Cray Extreme Multithreading (XMT) massively multithreaded machine, a 16-processor IBM x3755 shared memory machine, and a 240-core NVIDIA FX 5800 graphics processing unit (GPU). We show that it is straightforward to parallelize this algorithm on the Cray XMT primarily because of the word-level locking that is available on this architecture. For the other two machines, we present an alternating word algorithm that can implement an efficient solution. Our results show that the GPU performs well for problems whose tables fit within the device memory. Because GPUs typically have memories on the order of 10 GB, such architectures are best for small problem sizes that have tables of size approximately 10^10. The IBM x3755 performs very well on medium-sized problems that fit within its 64-GB memory but has poor scalability as the number of processors increases and is unable to sustain performance as the problem size increases. This machine tends to saturate for problem sizes of 10^11 bits. The Cray XMT shows very good scaling for large problems and demonstrates sustained performance as the problem size increases. However, this machine has poor scaling for small problem sizes; it performs best for problem sizes of 10^12 bits or more. The results in this paper illustrate that the subset-sum problem can be parallelized well on all three architectures, albeit for different ranges of problem sizes. The performance of these three machines under varying problem sizes shows the strengths and weaknesses of the three architectures. Copyright © 2012 John Wiley & Sons, Ltd.
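To make the pseudo-polynomial table concrete, here is a minimal serial sketch of the underlying dynamic program (an illustration only, not the paper's parallel alternating-word algorithm); the parallel variants described above operate on the machine words of such table rows, which is where word-level locking and memory capacity come into play.

#include <stdlib.h>

/* Returns 1 if some subset of w[0..n-1] sums exactly to target, else 0.
   The table has target+1 entries, so work and memory grow with the sum of
   the input sizes -- the pseudo-polynomial behaviour discussed above. */
int subset_sum(const int *w, int n, int target) {
    char *reach = calloc(target + 1, 1);   /* reach[s] == 1 iff sum s is attainable */
    if (!reach) return -1;
    reach[0] = 1;
    for (int i = 0; i < n; i++)
        for (int s = target; s >= w[i]; s--)   /* descend so each item is used at most once */
            if (reach[s - w[i]])
                reach[s] = 1;
    int found = reach[target];
    free(reach);
    return found;
}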

4.
The resolution of combinatorial optimization problems can greatly benefit from the parallel and distributed processing that is characteristic of neural network paradigms. Nevertheless, the fine-grain parallelism of the usual neural models cannot be implemented in an entirely efficient way either in general-purpose multicomputers or in networks of computers, which are nowadays the most common parallel computer architectures. Therefore, we present a parallel implementation of a modified Boltzmann machine where the neurons are distributed among the processors of the multicomputer, which asynchronously compute the evolution of their subset of neurons using values for the other neurons that might not be updated, thus reducing the communication requirements. Several alternatives for allowing the processors to work cooperatively are analyzed and their performance detailed. Among the proposed schemes, we have identified one that allows the corresponding Boltzmann machine to converge to high-quality solutions while providing a high acceleration over the execution of the Boltzmann machine on uniprocessor computers.
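A compact sketch of what one such asynchronous sweep could look like (a generic stochastic Boltzmann-machine update under common textbook conventions, not the authors' exact scheme): each processor repeatedly updates only its own slice of neurons while reading remote neuron states that may be stale.

#include <math.h>
#include <stdlib.h>

/* One stochastic update sweep over this processor's neuron slice [lo, hi).
   Remote entries of s[] may be outdated; tolerating this is what cuts the
   communication requirements described above. */
void update_local_neurons(int lo, int hi, int n,
                          const double *w /* n*n weights */, const double *bias,
                          int *s /* global 0/1 state vector */, double T) {
    for (int i = lo; i < hi; i++) {
        double net = bias[i];
        for (int j = 0; j < n; j++)
            net += w[i * n + j] * s[j];          /* uses possibly stale remote values */
        double p = 1.0 / (1.0 + exp(-net / T));  /* acceptance probability at temperature T */
        s[i] = ((double)rand() / RAND_MAX) < p;  /* stochastic binary update */
    }
}

Because the stale reads are tolerated, the processors only need to exchange their slices occasionally rather than after every neuron update.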

5.
Multicore processors form the basis of most traditional high performance parallel processing architectures. Early experiences with these computers showed significant performance problems, both with regard to computation and inter-process communication. The transition from Purple, an IBM POWER5-based machine, to Cielo, a Cray XE6, as the main capability computing platform for the United States Department of Energy's Advanced Simulation and Computing campaign provides an opportunity to reexamine these issues after experiences with a few generations of multicore-based machines. Experiences with Purple identified some important characteristics that led to strong performance of complex scientific application programs at very large scales. Herein, we compare the performance of some Advanced Simulation and Computing mission critical applications at capability scale across this transition to multicore processors. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Computers & Structures, 1987, 26(4): 551-559
The development of general-purpose finite element computer software systems has provided the capability to analyze a wide range of linear and non-linear structural problems. However, these software systems are severely limited for non-linear response calculations because of the available speed on current sequential computers. Recent and projected advances in parallel multiple instruction multiple data (MIMD) computers provide an opportunity for significant gains in computing speed and for broadening the range of structural problems which may be solved. The key to these gains is the effective selection and implementation of algorithms which exploit parallel computing. This paper documents experiences solving transient response calculations on an experimental MIMD computer, termed the Finite Element Machine. The paper describes the algorithm used, its implementation for parallel computations, and results for representative one- and two-dimensional dynamic response test problems. The results show computation speedups of up to 7.83 for eight processors, and indicate that significant speedups of solution time are possible for non-linear dynamic response calculations through the use of many processors and appropriate parallel integration algorithms. The results are extremely encouraging and suggest that significant speedups in structural computations can be achieved through advances in parallel computers.
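As a quick check on those figures (assuming the usual definitions, not additional data from the paper): a speedup of 7.83 on eight processors corresponds to a parallel efficiency of 7.83 / 8 ≈ 98%, and under Amdahl's law, 7.83 = 1 / (f + (1 - f)/8), the implied serial fraction f is only about 0.3%, which is why the authors see room for further gains with more processors.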

7.
Karp, A. H.; Babb, R. B., II. IEEE Software, 1988, 5(5): 52-67
A simple program that approximates π by numerical quadrature is rewritten to run on nine commercially available processors to illustrate the complications that arise in parallel programming in FORTRAN. The machines used are the Alliant FX/8, BBN Butterfly, Cray X-MP/48, ELXSI 6400, Encore Multimax, Flex/32, IBM 3090/VF, Intel iPSC, and Sequent Balance. Some general impediments to using parallel processors to do production work are identified.
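A standard way to approximate π by quadrature (quite possibly the one used here, though the abstract does not say) is midpoint-rule integration of 4/(1+x^2) over [0,1]. The paper's nine FORTRAN versions are necessarily machine-specific; purely as a point of reference, a present-day shared-memory rendering of the same computation in C with OpenMP might look like this:

#include <stdio.h>

int main(void) {
    const long n = 10000000;          /* number of midpoint-rule intervals */
    const double h = 1.0 / n;
    double sum = 0.0;

    /* the integral of 4/(1+x^2) over [0,1] equals pi */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;     /* midpoint of interval i */
        sum += 4.0 / (1.0 + x * x);
    }
    printf("pi ~= %.12f\n", sum * h);
    return 0;
}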

8.
In this paper we present two algorithms for the parallel solution of first-order linear recurrences. We show that the algorithms can be used to efficiently solve both scalar and blocked versions of the problem on vector and SIMD architectures. The first algorithm is a parallel approach whose resulting code can be explicitly vectorized, making it suitable for efficient execution on vector architectures such as the Cray 2. The second algorithm is a modified recursive approach designed to reduce the communication overhead encountered in SIMD architectures such as the Connection Machine 2 (CM-2). We present the performance exhibited by the parallel algorithm implementations on the Cray 2 and CM-2 for both scalar and blocked versions of the recurrence problem.
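For readers unfamiliar with why a first-order linear recurrence x[i] = a[i]*x[i-1] + b[i] parallelizes at all: each segment of the index range can be collapsed into a single affine map, after which only the segment boundaries need a serial pass. The sketch below shows that standard block-reduction structure in C (a generic illustration under these assumptions, not either of the paper's two algorithms):

#include <stdlib.h>

/* Solve x[i] = a[i]*x[i-1] + b[i] for i = 1..n-1 with x[0] = b[0], using nb blocks.
   Phases 1 and 3 are independent per block (vectorizable / parallel); phase 2 is a
   short serial scan over nb boundary values. */
void recurrence_blocked(const double *a, const double *b, double *x, int n, int nb) {
    double *A = malloc(nb * sizeof *A);
    double *B = malloc(nb * sizeof *B);
    int len = (n - 1 + nb - 1) / nb;                 /* block length over indices 1..n-1 */

    for (int k = 0; k < nb; k++) {                   /* phase 1: reduce block k to x_out = A*x_in + B */
        int lo = 1 + k * len, hi = lo + len < n ? lo + len : n;
        double Ak = 1.0, Bk = 0.0;
        for (int i = lo; i < hi; i++) { Bk = a[i] * Bk + b[i]; Ak *= a[i]; }
        A[k] = Ak; B[k] = Bk;
    }

    x[0] = b[0];
    double xin = x[0];
    for (int k = 0; k < nb; k++) {                   /* phase 2: propagate block boundary values */
        int lo = 1 + k * len, hi = lo + len < n ? lo + len : n;
        if (lo >= n) break;
        xin = A[k] * xin + B[k];
        x[hi - 1] = xin;
    }

    for (int k = 0; k < nb; k++) {                   /* phase 3: fill block interiors independently */
        int lo = 1 + k * len, hi = lo + len < n ? lo + len : n;
        if (lo >= n) break;
        double xv = x[lo - 1];
        for (int i = lo; i < hi; i++) { xv = a[i] * xv + b[i]; x[i] = xv; }
    }
    free(A); free(B);
}

The same affine-composition idea underlies most parallel recurrence solvers; the paper's two algorithms differ in how they organize these phases for vector versus SIMD hardware.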

9.
We describe an implementation of a vector quantization codebook design algorithm based on the frequency-sensitive competitive learning artificial neural network. The implementation, designed for use on high-performance computers, employs both multitasking and vectorization techniques. A C version of the algorithm tested on a CRAY Y-MP8/864 is discussed. We show how the implementation can be used to perform vector quantization, and demonstrate its use in compressing digital video image data. Two images are used, with various size codebooks, to test the performance of the implementation. The results show that the supercomputer techniques employed have significantly decreased the total execution time without affecting vector quantization performance. This work was supported by a Cray University Research Award and by NASA Lewis research grant number NAG3-1164.
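For orientation, frequency-sensitive competitive learning differs from plain competitive learning only in how the winning codeword is chosen: each codeword's distortion is weighted by how often it has already won, so rarely used codewords eventually capture samples. A minimal serial sketch of one such update (a textbook-style formulation under assumed conventions, not the authors' vectorized CRAY code):

#include <float.h>

/* One frequency-sensitive competitive-learning step: pick the codeword with the
   smallest fairness-weighted distortion, move it toward the sample, and record
   the win.  codebook holds k codewords of dimension dim, row-major. */
void fscl_step(double *codebook, int k, int dim, unsigned *wins,
               const double *x, double lr) {
    int best = 0;
    double best_cost = DBL_MAX;
    for (int c = 0; c < k; c++) {
        double d2 = 0.0;
        for (int j = 0; j < dim; j++) {
            double diff = x[j] - codebook[c * dim + j];
            d2 += diff * diff;
        }
        double cost = (double)(wins[c] + 1) * d2;   /* usage count scales the distortion */
        if (cost < best_cost) { best_cost = cost; best = c; }
    }
    wins[best]++;
    for (int j = 0; j < dim; j++)                   /* move the winner toward the sample */
        codebook[best * dim + j] += lr * (x[j] - codebook[best * dim + j]);
}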

10.
The matrix–vector product kernel can represent most of the computation in a gradient iterative solver. Thus, an efficient solver requires that the matrix–vector product kernel be fast. We show that standard approaches with Fortran or C may not deliver good performance and present a strategy involving managing the cache to improve the performance. As an example, using this approach we demonstrate that it is possible to achieve 2.5 times better performance than a Fortran implementation with an assembly-coded kernel on an Intel i860. These issues are of general interest for all computer architectures but are particularly important for users of MIMD computers seeking to achieve a useful fraction of the advertised peak performance of these machines.
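One common way to "manage the cache" in a matrix–vector product is to block the column loop so that a strip of the input vector stays resident while many rows consume it. The sketch below shows that generic idea in C (an illustration of the principle, not the paper's i860 assembly kernel; the block size bs would be tuned to the cache):

#include <stddef.h>

/* y = A*x with A stored row-major.  Blocking the column loop keeps x[jb..jend)
   in cache while it is reused by every row. */
void matvec_blocked(const double *A, const double *x, double *y, int n, int bs) {
    for (int i = 0; i < n; i++) y[i] = 0.0;
    for (int jb = 0; jb < n; jb += bs) {             /* column blocks sized to fit cache */
        int jend = jb + bs < n ? jb + bs : n;
        for (int i = 0; i < n; i++) {
            double sum = y[i];
            for (int j = jb; j < jend; j++)
                sum += A[(size_t)i * n + j] * x[j];  /* reuses the cached strip of x */
            y[i] = sum;
        }
    }
}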

11.
In this paper, a comparative study of one- and two-dimensional Slant transform algorithms using vector and parallel computers is presented. A new factorization for the Slant matrix is discussed. General methods of vectorizing short data vectors are included. These methods are suitable for applications where long data vectors may not be available to take advantage of a vector computer. Microtasking with four processors is implemented to improve the speed performance of two-dimensional transforms. Simulation results on the Cray Y-MP and Cray X-MP using single processors and multiprocessors are also included.

12.
This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical software library that extends the widely used LAPACK library to run efficiently on scalable concurrent computers. To ensure good scalability and performance, the ScaLAPACK routines are based on block-partitioned algorithms that reduce the frequency of data movement between different levels of the memory hierarchy, and particularly between processors. The block cyclic data distribution, which is used in all three factorization algorithms, is described. An outline of the sequential and parallel block-partitioned algorithms is given. Approximate models of the algorithms' performance are presented to indicate which factors in the design of the algorithm have an impact upon scalability. These models are compared with timing results on a 128-node Intel iPSC/860 hypercube. It is shown that the routines are highly scalable on this machine for problems that occupy more than about 25% of the memory on each processor, and that the measured timings are consistent with the performance model. The contribution of this paper goes beyond reporting our experience: our implementations are available in the public domain.
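The block cyclic distribution mentioned above deals out contiguous blocks of rows (or columns) round-robin over the processes, which keeps the load balanced as a factorization's active region shrinks. A small index-mapping sketch for the one-dimensional case (ScaLAPACK applies the same mapping independently to the rows and columns of a 2-D process grid; the function below is an illustration, not ScaLAPACK's own routine):

/* 1-D block-cyclic mapping: global index g, block size nb, P processes.
   Returns the owning process and writes the index inside that process's
   local storage to *local. */
int block_cyclic_owner(int g, int nb, int P, int *local) {
    int block = g / nb;                    /* which block g falls in            */
    int owner = block % P;                 /* blocks are dealt out round-robin  */
    *local = (block / P) * nb + g % nb;    /* offset within the owner's storage */
    return owner;
}

For example, with nb = 64 and P = 8, global row 1000 lies in block 15, so it belongs to process 7 at local offset 1*64 + 40 = 104.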

13.
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
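The memory figure is easy to reproduce (assuming the usual full state-vector representation, which the abstract does not spell out): an n-qubit state has 2^n complex amplitudes, and at 16 bytes per double-precision complex number a 36-qubit state needs 2^36 × 16 B = 2^40 B = 1 TiB, matching the 1 TB quoted. Every additional qubit doubles the memory, which is why 36 qubits was the practical ceiling on those machines.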

14.
We present a new scheme for evaluating the performance of multithreaded computers and demonstrate its application to the Cray MTA-2 and XMT supercomputers. Our scheme is based on the concept of clock cycles per element, C, plotted against both problem size and the number of processors. This scheme clearly shows if an implementation has achieved its asymptotic efficiency and is more general than (but includes) the commonly used speedup metric. It permits the discovery of any imperfections in both the software as well as the hardware, and is expected to permit a unified comparison of many different parallel architectures. Measurements on a number of well-known parallel algorithms, ranging from matrix multiply to quicksort, are presented for the MTA-2 and XMT and highlight some interesting differences between these machines. The performance of sequence alignment using dynamic programming is evaluated on the MTA-2, XMT, IBM x3755 and SGI Altix 350 and provides a useful comparison of the capabilities of the Cray machines with more conventional shared memory architectures. Copyright © 2009 John Wiley & Sons, Ltd.
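The abstract does not define the metric precisely; one natural reading (an assumption here, not the authors' exact formula) is C = f · T · p / N, where f is the clock frequency, T the measured wall-clock time, p the number of processors and N the number of data elements processed. Under that reading, an implementation that scales perfectly keeps C flat both as p grows for a fixed problem and as the problem size grows for fixed p, which is why plotting C against both axes exposes deviations that a single speedup number hides.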

15.
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
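The core of such a scheme is that a neural network's error and gradient decompose into sums over training samples, so each worker can process its own slice of the data and the partial results are summed. A shared-memory stand-in for that pattern is sketched below (OpenMP is used here purely for brevity; the paper uses PVM message passing, and sample_grad is a hypothetical placeholder for whatever backpropagation routine accumulates one sample's gradient):

#include <stdlib.h>
#include <string.h>

/* Accumulate the full gradient over nsamples training samples.
   sample_grad(s, g) is assumed to add sample s's gradient contribution into g. */
void accumulate_gradient(int nsamples, int nparams,
                         void (*sample_grad)(int s, double *g),
                         double *grad) {
    memset(grad, 0, (size_t)nparams * sizeof *grad);
    #pragma omp parallel
    {
        double *part = calloc(nparams, sizeof *part);  /* this worker's partial gradient */
        #pragma omp for
        for (int s = 0; s < nsamples; s++)
            sample_grad(s, part);                       /* evaluate on this worker's slice */
        #pragma omp critical                            /* reduce partials into the result */
        for (int j = 0; j < nparams; j++)
            grad[j] += part[j];
        free(part);
    }
}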

16.
In this paper we benchmark the performance of the Cray T3D, IBM 9076 SP/1 and Intel Paragon XP/S parallel computers, using implementations of parallel algorithms for the computation of the vector outer-product A = uv^T operation. The vector outer-product operation, although very simple in nature, requires the computation of a large number of floating-point operations and its parallelization induces a great level of communication between the processors. It is thus suited to measure the relative speed of the processor, memory subsystem and network capabilities of a parallel computer. It should not be considered a ‘toy problem’, since it arises in numerical methods in the context of the solution of systems of non-linear equations – still a difficult problem to solve. We present algorithms for both the explicit shared-memory and message-passing programming models together with theoretical computation models for those algorithms. Actual experiments were run on those computers, using Fortran 77 implementations of the algorithms. The results obtained with these experiments show that due to the high degree of communication between the processors one needs a parallel computer with fast communications and carefully implemented data exchange routines. The theoretical computation model allows prediction of the speed-up to be obtained for some problem size on a given number of processors. © 1997 John Wiley & Sons, Ltd.
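The kernel itself is tiny, which is exactly why it exposes memory and network behaviour rather than arithmetic: every element of A is written once after a single multiply. A row-partitioned sketch in C (an illustration of the kind of distribution discussed above, not the paper's Fortran 77 codes):

#include <stddef.h>

/* Fill rows [row_lo, row_hi) of A = u v^T, where A has n columns.  Each worker
   owns a band of rows, needs its part of u plus all of v, and performs one
   multiply and one store per element -- the kernel is memory- and
   communication-bound rather than compute-bound. */
void outer_product_rows(const double *u, const double *v, double *A,
                        size_t n, size_t row_lo, size_t row_hi) {
    for (size_t i = row_lo; i < row_hi; i++)
        for (size_t j = 0; j < n; j++)
            A[i * n + j] = u[i] * v[j];
}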

17.
This paper discusses the system architecture, hardware features, and software configuration of the VPP300/500 vector-parallel MPP supercomputers introduced by Fujitsu. It also describes some of the key technologies adopted in the VPP300/500 systems and how they are implemented, and offers a prediction of future trends in MPP development.

18.
This paper introduces queuing network models for the performance analysis of SPMD applications executed on general-purpose parallel architectures such as MIMD machines and clusters of workstations. The models are based on the pattern of computation, communication, and I/O operations of typical parallel applications. Analysis of the models leads to the definition of speedup surfaces which capture the relative influence of processor and I/O parallelism and show the effects of different hardware and software components on the performance. Since the parameters of the models correspond to measurable program and hardware characteristics, the models can be used to anticipate the performance behavior of a parallel application as a function of the target architecture (i.e., number of processors, number of disks, I/O topology, etc.).

19.
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs and to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower than linear speedups are achieved with optimized (machine-dependent library) routines. This slower than linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52–56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a real-world simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. The author is an employee of Computer Sciences Corporation. This work was funded through NASA Contract NAS 2-12961.
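The quoted figures give a feel for the scaling (a back-of-the-envelope reading of the abstract's own numbers, not additional data): eight SP1 nodes match a Cray Y/MP while 32 nodes run 2.9 times faster than it, so quadrupling the node count buys a factor of about 2.9, i.e. roughly 2.9/4 ≈ 73% efficiency over that range, consistent with the FFT-dominated, slower-than-linear speedup the authors describe.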

20.
This paper presents the results of parallelizing a three-dimensional Navier-Stokes solver on a 32K-processor Thinking Machines CM-2, a 128-node Intel iPSC/860, and an 8-processor CRAY Y-MP. The main objective of this work is to study the performance of the flow solver, INS3D-LU code, on two distributed-memory machines, a massively parallel SIMD machine (CM-2) and a moderately parallel MIMD machine (iPSC/860), and compare it with its performance on a shared-memory MIMD machine with a small number of processors (Y-MP). The code is based on a Lower-Upper Symmetric-Gauss-Seidel implicit scheme for the pseudocompressibility formulation of the three-dimensional incompressible Navier-Stokes equations. The code was rewritten in CMFORTRAN with shift operations and run on the CM-2 using the slicewise model. The code was also rewritten with distributed data and Intel message-passing calls and run on the iPSC/860. The timing results for two grid sizes are presented and analyzed using both 32-bit and 64-bit arithmetic. Also, the impact of communication and load balancing on the performance of the code is outlined. The results show that reasonable performance can be achieved on these parallel machines. However, the CRAY Y-MP outperforms the CM-2 and iPSC/860 for this particular algorithm. The author is an employee of Computer Sciences Corporation. This work was funded through NASA Contract NAS 2-12961.
