Similar Documents
20 similar documents found (search time: 187 ms)
1.
Two different numerical solutions of the two-component kinetic collection equation were implemented on parallel computers. The parallelization approach combined domain decomposition with MPI communication. Four different parallel codes were tested. A dynamic decomposition based on an occupancy function provided the best balance between time performance and flexibility for any number of processors; the occupancy function was defined by the number of calculations required at each grid point in the domain. Speed-up depended strongly on the parallel code used, and in some cases very good results were obtained for up to 32 processors.
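The abstract does not give the occupancy function itself, so the sketch below is only illustrative: a C/MPI fragment in which every rank scans an assumed per-point cost function and cuts the grid into contiguous blocks of roughly equal total occupancy. The function `occupancy()` and the grid size are invented placeholders.

```c
#include <stdio.h>
#include <mpi.h>

/* Hypothetical occupancy function: the cost (number of kernel
   evaluations) at grid point i; stands in for the paper's definition. */
static double occupancy(int i) { return 1.0 + (i % 7); }

int main(int argc, char **argv) {
    int rank, nprocs, npoints = 1 << 16;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Every rank scans the same occupancy profile, so all ranks agree
       on the cut points without any communication. */
    double total = 0.0;
    for (int i = 0; i < npoints; i++) total += occupancy(i);
    double share = total / nprocs;

    /* This rank owns the contiguous block [lo, hi) whose cumulative
       occupancy falls between rank*share and (rank+1)*share. */
    int lo = 0, hi = npoints;
    double prefix = 0.0;
    for (int i = 0; i < npoints; i++) {
        if (prefix < rank * share)       lo = i + 1;
        if (prefix < (rank + 1) * share) hi = i + 1;
        prefix += occupancy(i);
    }
    printf("rank %d owns grid points [%d, %d)\n", rank, lo, hi);

    /* ... solve the collection equation on [lo, hi), exchanging
       boundary data with MPI point-to-point calls ... */
    MPI_Finalize();
    return 0;
}
```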

2.
The Garey–Johnson algorithm is a well-known polynomial-time algorithm that constructs an optimal schedule for the maximum lateness problem with unit-execution-time tasks, two parallel identical processors, precedence constraints, and release times. This paper presents a worst-case analysis of a generalization of the Garey–Johnson algorithm to an arbitrary number of processors. In contrast to other algorithms for the maximum lateness problem, the tight performance guarantee for an even number of processors differs from that for an odd number of processors.

3.
A previously proposed algorithm for parallelizing a computer code that solves the two-dimensional stationary Navier-Stokes equations with an implicit iterative scheme is extended to nonequilibrium gas-mixture flows. The parallelization algorithm decomposes the computational region into as many parts as there are processors, while preserving the implicit character of the difference scheme in each subregion. The parallelization efficiency is analyzed for the computation of the flow past a re-entry vehicle moving through the Earth’s atmosphere at hypersonic velocity. The algorithm demonstrated good scalability for N ≤ 15 processors.

4.
Approaches to solving linear programming problems with absolute precision, attained through rational computation without roundoff in simplex-method algorithms, are considered. An implementation of the modified simplex method using the inverse matrix is shown to have the least space complexity: the main memory sufficient to solve a linear programming problem with roundoff-free rational computations is at most $4lm^4 + O(lm^3)$, where m is the minimal dimension of the problem and l is the number of bits sufficient to represent one element of the source data matrix. The efficiency of parallelization, that is, the ratio of speedup to the number of processors, is shown to be asymptotically close to 100%. Computing experiments on practical problems with sparse matrices corroborated the high efficiency of parallelization and demonstrated the advantage of the parallel inverse-matrix method.
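The paper's simplex algorithm is not reproduced here; the following C fragment only illustrates the underlying idea of roundoff-free rational computation, using GMP's `mpq_t` rationals to perform one exact pivot step on a small tableau. The matrix contents are arbitrary placeholders.

```c
#include <stdio.h>
#include <gmp.h>

#define N 3

/* One exact pivot step on an N x N rational tableau at row r, column c.
   No rounding ever occurs, so the simplex iteration is exact; the price
   is that numerators and denominators grow, which is what drives the
   4*l*m^4 + O(l*m^3) memory bound quoted in the abstract. */
static void pivot(mpq_t a[N][N], int r, int c) {
    mpq_t factor, tmp;
    mpq_init(factor); mpq_init(tmp);
    for (int i = 0; i < N; i++) {
        if (i == r) continue;
        mpq_div(factor, a[i][c], a[r][c]);   /* factor = a[i][c]/a[r][c] */
        for (int j = 0; j < N; j++) {
            mpq_mul(tmp, factor, a[r][j]);
            mpq_sub(a[i][j], a[i][j], tmp);  /* a[i][j] -= factor*a[r][j] */
        }
    }
    mpq_clear(factor); mpq_clear(tmp);
}

int main(void) {
    mpq_t a[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            mpq_init(a[i][j]);
            mpq_set_si(a[i][j], i + j + 1, (unsigned long)(j + 2));
            mpq_canonicalize(a[i][j]);       /* reduce to lowest terms */
        }
    pivot(a, 0, 0);
    gmp_printf("a[1][1] after pivot = %Qd\n", a[1][1]);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) mpq_clear(a[i][j]);
    return 0;
}
```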

5.
6.
In our study, we investigate a packet-level protocol parallelization approach that multithreads protocol execution so that packets within and among connections are processed in parallel by distinct processors/threads. The major advantage of this approach is its high scalability: with proper scheduling, more protocol connections, and hence more requests, can be supported by adding threads. In this paper, we present results of detailed simulations on the NS-2 platform for reliably transferring video stream data in a client-server system. Performance is measured across several parameters, including the available network bandwidth, the number of TCP connections, and different video sources. Our results show that the parallel approach can significantly enhance playback quality.
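As a rough illustration of packet-level parallelism (the paper itself evaluates the idea in NS-2 simulation rather than with code like this), the C/pthreads sketch below has worker threads claim packets from a shared cursor so that packets from any connection are processed concurrently. The packet representation and `process_packet()` are stand-ins.

```c
#include <stdio.h>
#include <pthread.h>

#define NPACKETS 1024
#define NTHREADS 4

static int next_packet = 0;                 /* shared work cursor */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for real protocol work (checksums, reordering, delivery). */
static void process_packet(int id) { (void)id; }

/* Each worker repeatedly claims the next unprocessed packet, so
   packets within and among connections proceed in parallel. */
static void *worker(void *arg) {
    long *count = arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int id = next_packet < NPACKETS ? next_packet++ : -1;
        pthread_mutex_unlock(&lock);
        if (id < 0) break;                  /* no packets left */
        process_packet(id);
        (*count)++;
    }
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    long counts[NTHREADS] = {0};
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, &counts[i]);
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        printf("thread %d processed %ld packets\n", i, counts[i]);
    }
    return 0;
}
```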

7.
In this paper, a parallel implementation of the Iterative Alternating Direction Explicit method of D’Yakonov (IADE-DY) for solving the 2-D telegraph equation on a distributed system is presented, using both the Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM). The program is parallelized through a domain decomposition strategy under a Single Program Multiple Data (SPMD) model, and the implementation is discussed in terms of parallel performance strategies and analysis. The method overlaps communication with computation to avoid unnecessary synchronization and hence yields significant speedup; the observed speedup improves by roughly 5–10% as the mesh is refined. These results are documented in the tables and figures of our experiments. We present analyses that help explain the speedup and efficiency, and conclude that the efficiency depends strongly on the grid size, the number of blocks, and the number of processors, for both MPI and PVM. Different strategies to improve computational efficiency are proposed.
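The abstract highlights overlapping communication with computation. A generic way to obtain that overlap in MPI, not specific to the IADE-DY scheme, is to post non-blocking ghost-cell exchanges, update the interior points that need no neighbor data, and only then wait for the ghosts. The 1-D sketch below shows the pattern; the stencil update is a placeholder, not D’Yakonov's method.

```c
#include <mpi.h>

#define NLOC 1000  /* interior points owned by this rank (1-D strip) */

int main(int argc, char **argv) {
    int rank, nprocs;
    double u[NLOC + 2];   /* two extra ghost cells at u[0], u[NLOC+1] */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    for (int i = 0; i < NLOC + 2; i++) u[i] = rank;

    int left  = rank > 0          ? rank - 1 : MPI_PROC_NULL;
    int right = rank < nprocs - 1 ? rank + 1 : MPI_PROC_NULL;

    MPI_Request req[4];
    /* Post the ghost-cell exchange first ... */
    MPI_Irecv(&u[0],        1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&u[NLOC + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[1],        1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[NLOC],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

    /* ... update interior points that need no neighbor data, overlapping
       this computation with the communication in flight (in-place sweep
       for brevity; a Jacobi update would use a second buffer) ... */
    for (int i = 2; i <= NLOC - 1; i++) u[i] = 0.5 * (u[i - 1] + u[i + 1]);

    /* ... and finish the two boundary points once the ghosts arrive. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    u[1]    = 0.5 * (u[0] + u[2]);
    u[NLOC] = 0.5 * (u[NLOC - 1] + u[NLOC + 1]);

    MPI_Finalize();
    return 0;
}
```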

8.
We introduce a new parallelization framework for scientific computing based on BDSC, an efficient automatic scheduling algorithm for parallel programs under resource constraints on the number of processors and their local memory sizes. BDSC extends Yang and Gerasoulis’s Dominant Sequence Clustering (DSC) algorithm; it uses more sophisticated cost models and addresses both shared- and distributed-memory parallel architectures. We describe BDSC, its integration within the PIPS compiler infrastructure, and its application to the parallelization of four well-known scientific applications: Harris, ABF, equake, and IS. Our experiments suggest that BDSC’s focus on efficient resource management leads to significant parallelization speedups on both shared- and distributed-memory systems, improving upon DSC results, as shown by comparing the sequential and parallelized versions of these four applications running on the OpenMP and MPI frameworks.

9.
In this paper we present a parallel implementation of a well-known heuristic optimisation algorithm, the downhill simplex algorithm developed by Nelder and Mead in 1965, which is well suited to unconstrained optimisation. We present both the sequential algorithm and the parallel algorithm used to generate numerical results. These include experiments on neural networks and a test suite of functions, which demonstrate the parallel algorithm's increased robustness and convergence rate for high-dimensional problems compared with the sequential algorithm. © 1998 John Wiley & Sons, Ltd.

10.
In this paper we benchmark the performance of the Cray T3D, IBM 9076 SP/1, and Intel Paragon XP/S parallel computers using implementations of parallel algorithms for the vector outer-product operation $A = uv^T$. Although very simple in nature, the vector outer product requires a large number of floating-point operations, and its parallelization induces a great deal of communication between the processors. It is thus well suited to measuring the relative speed of the processor, memory subsystem, and network of a parallel computer. It should not be considered a ‘toy problem’, since it arises in numerical methods for solving systems of non-linear equations, still a difficult problem. We present algorithms for both the explicit shared-memory and message-passing programming models, together with theoretical computation models for those algorithms. Experiments were run on those computers using Fortran 77 implementations of the algorithms. The results show that, owing to the high degree of communication between processors, one needs a parallel computer with fast communications and carefully implemented data-exchange routines. The theoretical computation model allows prediction of the speed-up for a given problem size and number of processors. © 1997 John Wiley & Sons, Ltd.
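A present-day C/MPI rendering of the message-passing variant, under the assumption of a row-block distribution (the paper's exact data layout may differ): each rank holds a slice of u and all of v, so a single broadcast of v is the only communication before the embarrassingly parallel multiply.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, nprocs, n = 4096;          /* assumes nprocs divides n */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Row-block distribution: rank owns rows [lo, lo+rows) of A = u v^T,
       so it needs only its slice of u but all of v. */
    int rows = n / nprocs, lo = rank * rows;
    double *u_loc = malloc(rows * sizeof *u_loc);
    double *v     = malloc(n * sizeof *v);
    double *A_loc = malloc((size_t)rows * n * sizeof *A_loc);

    for (int i = 0; i < rows; i++) u_loc[i] = lo + i + 1.0;
    if (rank == 0)
        for (int j = 0; j < n; j++) v[j] = 1.0 / (j + 1);
    /* v must be replicated on every rank: one broadcast. */
    MPI_Bcast(v, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Purely local computation of this rank's block of A. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < n; j++)
            A_loc[(size_t)i * n + j] = u_loc[i] * v[j];

    printf("rank %d computed rows [%d, %d)\n", rank, lo, lo + rows);
    free(u_loc); free(v); free(A_loc);
    MPI_Finalize();
    return 0;
}
```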

11.
A parallel FFT on an MIMD machine (total citations: 5; self-citations: 0; citations by others: 5)
In this paper we present a parallelization of the Cooley–Tukey FFT algorithm implemented on a shared-memory MIMD (non-vector) machine built in the Department of Computer Science, Tel Aviv University. A parallel algorithm for the one-dimensional Fourier transform is presented together with a performance analysis. For a large array of complex numbers, an almost linear speed-up is demonstrated. The algorithm can be executed by any number of processors, though in general that number is much smaller than the length of the input data.
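For concreteness, here is a small shared-memory sketch of the stage-parallel structure described above, using OpenMP rather than the paper's custom MIMD machine: within each Cooley–Tukey stage the butterflies over different blocks are independent and can be split across processors, while the stages themselves must run in order. Input is assumed already bit-reversed (a no-op for the all-ones test vector used here).

```c
#include <stdio.h>
#include <math.h>
#include <complex.h>

/* In-place iterative radix-2 Cooley-Tukey on bit-reversed input.
   Within each of the log2(n) stages the butterflies over different
   blocks are independent, so the block loop can be split across
   processors; the implicit barrier at the end of each parallel loop
   orders the stages.  Parallelism shrinks in the last stages, where a
   real implementation would also split the inner loop. */
static void fft_stages(double complex *a, int n) {
    for (int len = 2; len <= n; len <<= 1) {
        double complex wlen = cexp(-2.0 * M_PI * I / len);
        #pragma omp parallel for
        for (int blk = 0; blk < n; blk += len) {
            double complex w = 1.0;
            for (int j = 0; j < len / 2; j++) {
                double complex t = w * a[blk + j + len / 2];
                a[blk + j + len / 2] = a[blk + j] - t;
                a[blk + j] += t;
                w *= wlen;
            }
        }
    }
}

int main(void) {
    enum { N = 8 };
    double complex a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;  /* all-ones input */
    fft_stages(a, N);
    printf("X[0] = %.1f (expect 8.0)\n", creal(a[0]));
    return 0;
}
```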

12.
A portable parallelization of the Cooley–Tukey FFT algorithm for MIMD multiprocessors is presented. The implementation uses the Virtual Machine for MultiProcessors (VMMP) and PVM portable software packages. Since VMMP provides the same set of services on all target machines, a single version of the parallel FFT code was used on shared-memory (25-processor Sequent Symmetry), shared-bus (MOS, running distributed UNIX), and distributed-memory multiprocessors (a transputer network and a 64-processor IBM SP2). It is accompanied by a detailed performance analysis of the implementations. The algorithm achieved high efficiencies on all target machines, and the analysis indicates that most overheads are caused by the target architecture rather than by VMMP or PVM inefficiencies. The portability analysis of the FFT provides several important insights. On message-passing architectures, the parallel FFT algorithm obtains linearly increasing speedup with the number of processors, with only a moderate increase in problem size. The parallel FFT can be executed by any number of processors, but generally that number is much smaller than the length of the input data. The results indicate that the parallel FFT is portable: it achieves very good speedups on either a shared-memory multiprocessor with high memory bandwidth or a message-passing multiprocessor, without any change to the programs. © 1998 John Wiley & Sons, Ltd.

13.
We have parallelized the FASTA algorithm for biological sequence comparison using Linda, a machine-independent parallel programming language; the resulting parallel program runs on a variety of parallel machines. A straightforward parallelization strategy works well when the amount of computation is relatively large. When the amount of computation is reduced, however, disk I/O becomes a bottleneck that may prevent additional speed-up as the number of processors increases. The paper describes the parallelization of FASTA and uses it to illustrate the I/O bottleneck that may arise in parallel database search with a fast sequence-comparison algorithm. It also describes several program design strategies that help with this problem, and discusses how this bottleneck exemplifies a general problem that may occur when parallelizing, or otherwise speeding up, a time-consuming computation.

14.
The problem of mapping the parallel bottom-up execution of Datalog programs onto an interconnected network of processors is studied. Parallelization is achieved with hash functions that partition the set of instantiations of the rules. We first examine this problem in an environment where the number of processors and the interconnection topology are known and communication between program segments residing at non-adjacent processors is not permitted; an algorithm is presented that decides whether a given Datalog program can be mapped onto such an architecture. We then relax the constraint on the architecture by allowing program segments residing at non-adjacent processors to communicate. A theory of approximate mappings is developed, and an algorithm to obtain the closest approximate mapping of a given Datalog program onto a given architecture is presented.
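A toy illustration of the hash-partitioning idea, not the paper's mapping algorithm: instantiations of a rule over `path` facts are sent to the processor selected by a hash of the partitioning argument, so tuples that must join on that argument land on the same processor. The facts and the hash function are invented.

```c
#include <stdio.h>

#define NPROCS 4

/* A ground fact path(x, y).  Hashing on the first argument partitions
   the rule instantiations: every tuple with the same x is evaluated on
   the same processor, so joins on x never cross processor boundaries. */
struct fact { int x, y; };

static int owner(struct fact f) { return f.x % NPROCS; }

int main(void) {
    struct fact facts[] = { {1, 2}, {2, 3}, {5, 1}, {9, 4}, {4, 4} };
    int n = sizeof facts / sizeof facts[0];
    for (int i = 0; i < n; i++)
        printf("path(%d,%d) -> processor %d\n",
               facts[i].x, facts[i].y, owner(facts[i]));
    return 0;
}
```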

15.
The advent of multicores presents a promising opportunity for speeding up sequential programs through parallelization. In this paper we present a novel solution for efficiently supporting software-based speculative parallelization of sequential loops on multicore processors. The execution model is based on state separation, an approach that maintains the speculative state of parallel threads separately from the non-speculative state of the computation. If speculation succeeds, the results produced by parallel threads in speculative state are committed by copying them into the computation’s non-speculative state; if misspeculation is detected, no costly state-recovery mechanism is needed, as the speculative state can simply be discarded. Techniques are proposed to reduce the cost of copying data between non-speculative and speculative state and to carry out misspeculation detection efficiently. Applying this approach to speculative parallelization of loops in several sequential programs yields significant speedups on a Dell PowerEdge 1900 server with two Intel Xeon quad-core processors.
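State separation can be shown in a few lines of C; the sketch below is a single-threaded caricature, not the paper's runtime: speculative work happens in a private copy, a successful run commits with one copy-out, and a misspeculation simply drops the copy, leaving the non-speculative state untouched. The `misspeculated` flag stands in for real read/write-set checks.

```c
#include <stdio.h>
#include <string.h>

#define N 8

int main(void) {
    int nonspec[N] = {0};    /* the computation's committed state */
    int spec[N];             /* one thread's speculative state    */

    memcpy(spec, nonspec, sizeof spec);            /* copy-in  */
    for (int i = 0; i < N; i++) spec[i] = i * i;   /* speculative work */

    /* In the real system this comes from comparing the read set of the
       speculative thread against earlier writes; hard-wired here. */
    int misspeculated = 0;

    if (!misspeculated)
        memcpy(nonspec, spec, sizeof nonspec);     /* commit (copy-out) */
    /* else: discard spec -- nonspec was never touched, so no rollback
       log or state-recovery machinery is needed */

    printf("nonspec[3] = %d\n", nonspec[3]);
    return 0;
}
```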

16.
In this paper, we propose a new algorithm, Parallel Multipass with Inverted Hashing and Pruning (PMIHP), for mining association rules between words in text databases. The characteristics of text databases differ considerably from those of retail transaction databases, and existing mining algorithms cannot handle text databases efficiently because of the large number of itemsets (i.e., sets of words) that must be counted. PMIHP is a parallel version of our Multipass with Inverted Hashing and Pruning (MIHP) algorithm (Holt, Chung, in: Proc. of the 14th IEEE Int’l Conf. on Tools with Artificial Intelligence, 2002, pp. 49–56), which was shown to be considerably more efficient than other existing algorithms for mining text databases. PMIHP reduces the communication overhead between miners running on different processors because they mine local databases asynchronously and prune the global candidates using the Inverted Hashing and Pruning technique. Compared with the well-known Count Distribution algorithm (Agrawal, Shafer, IEEE Trans Knowl Data Eng 8(6):962–969, 1996), PMIHP demonstrates superior performance for mining association rules in large text databases, and when the minimum support level is low, its speedup is superlinear as the number of processors increases. The experiments were performed on a cluster of Linux workstations using a collection of Wall Street Journal articles. This research was supported in part by the Ohio Board of Regents, LexisNexis, and AFRL/Wright Brothers Institute (WBI).

17.
Parallelizing the Data Cube (total citations: 1; self-citations: 0; citations by others: 1)
This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both strategies assign subcubes to individual processors so that the loads assigned to the processors are balanced. Our methods reduce interprocessor communication overhead by partitioning the load in advance instead of computing each individual group-by in parallel, and they create a small number of coarse tasks, which allows sharing of prefixes and sort orders between different group-by computations. They also enable code reuse by permitting existing sequential (external-memory) data cube algorithms to be used for the subcube computations on each processor, supporting the transfer of optimized sequential data cube code to a parallel setting. The bottom-up partitioning strategy balances the number of single-attribute external-memory sorts made by each processor. The top-down strategy partitions a weighted tree in which the weights reflect algorithm-specific cost measures, such as estimated group-by sizes. Both partitioning approaches can be implemented on any shared-disk parallel machine composed of p processors connected via an interconnection fabric and with access to a shared parallel disk array. We have implemented our parallel top-down data cube construction method in C++ with the MPI message-passing library for communication and the LEDA library for the required graph algorithms. We tested our code on an eight-processor cluster using a variety of data sets with a range of sizes, dimensions, densities, and skews; comparison tests were performed on a SunFire 6800. The tests show that our partitioning strategies generate a close-to-optimal load balance between processors, and the observed run times show an optimal speedup of p.
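The load-balancing idea can be caricatured with a greedy assignment, shown below in C. This is much simpler than the paper's tree-partitioning and sort-balancing strategies, and the estimated group-by sizes are invented, but it shows how subcube tasks are weighted and balanced across processors before any parallel work starts.

```c
#include <stdio.h>

#define NGROUPBYS 8
#define NPROCS 3

/* Greedy pre-partitioning: assign each group-by (weighted by an
   estimated size) to the currently least-loaded processor, so the cube
   tasks are balanced in advance rather than scheduled dynamically. */
int main(void) {
    double est[NGROUPBYS] = {40, 35, 30, 22, 18, 12, 8, 5};
    double load[NPROCS] = {0};
    for (int g = 0; g < NGROUPBYS; g++) {
        int best = 0;
        for (int p = 1; p < NPROCS; p++)
            if (load[p] < load[best]) best = p;
        load[best] += est[g];
        printf("group-by %d (est. size %.0f) -> processor %d\n",
               g, est[g], best);
    }
    for (int p = 0; p < NPROCS; p++)
        printf("processor %d total load %.0f\n", p, load[p]);
    return 0;
}
```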

18.
Based on the key factors that affect the performance of parallel ant colony algorithms, this paper proposes an adaptive parallel ant colony algorithm. Two information-exchange strategies, one based on fitness and one based on distance, are first proposed so that each processor adaptively selects the processors with which it exchanges information; pheromones are then updated using an adaptive update strategy. To strengthen the algorithm's search capability, a method is also given for adaptively adjusting the information-exchange period between processors according to the diversity of solutions. Experimental results on TSP instances on the DeepComp 1800 MPP show that the algorithm achieves good convergence while maintaining effective speedup.

19.
A hybrid message-passing and shared-memory parallelization technique is presented for improving the scalability of the adaptive integral method (AIM), an FFT-based algorithm, on clusters of identical multi-core processors. The proposed hybrid MPI/OpenMP parallelization scheme is based on a nested one-dimensional (1-D) slab decomposition of the 3-D auxiliary regular grid and the associated AIM calculations: if there are M processors and T cores per processor, the scheme (i) divides the regular grid into M slabs and MT sub-slabs, (ii) assigns each slab/sub-slab and the associated operations to one of the processors/cores, and (iii) uses MPI for inter-processor data communication and OpenMP for intra-processor data exchange. The MPI/OpenMP parallel AIM is used to accelerate the solution of the combined-field integral equation arising in the analysis of time-harmonic electromagnetic scattering from perfectly conducting surfaces. The scalability of the scheme is investigated theoretically and verified on a state-of-the-art multi-core cluster for benchmark scattering problems. Timing and speedup results on up to 1024 quad-core processors show that the hybrid MPI/OpenMP parallelization of AIM exhibits better strong scalability (fixed-problem-size speedup) than pure MPI parallelization when multiple cores are used on each processor.
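A skeletal C version of the nested decomposition described in steps (i)-(iii), with the AIM-specific grid operations elided: MPI splits the planes of the auxiliary grid into one slab per process, and OpenMP splits each slab into one sub-slab per core. The grid size and the divisibility assumptions are placeholders.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define NZ 256   /* grid planes along the slab dimension (assumed) */

int main(int argc, char **argv) {
    int provided, rank, nprocs;
    /* FUNNELED: only the master thread of each process makes MPI calls,
       matching "MPI between processors, OpenMP within a processor". */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int slab = NZ / nprocs;              /* assumes nprocs divides NZ */
    int z0 = rank * slab;

    #pragma omp parallel
    {
        int t = omp_get_thread_num(), nt = omp_get_num_threads();
        int sub = slab / nt;             /* assumes nt divides slab */
        int s0 = z0 + t * sub, s1 = s0 + sub;
        /* ... per-core AIM work on planes [s0, s1) would go here ... */
        printf("rank %d thread %d: planes [%d, %d)\n", rank, t, s0, s1);
    }
    /* Inter-process data motion (e.g. for the distributed FFT) would
       use MPI from the master thread only. */
    MPI_Finalize();
    return 0;
}
```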

20.
The Nelder-Mead simplex method is an optimization routine that works well with irregular objective functions. For a function of $n$ parameters, it compares the objective function at the $n+1$ vertices of a simplex and updates the worst vertex through simplex search steps. A standard serial implementation, however, can be prohibitively expensive for optimizations over a large number of parameters. We describe a distributed-memory parallel implementation of the Nelder-Mead method. With $p$ processors, each processor is assigned $(n+1)/p$ vertices at each iteration; each processor then updates its worst local vertices and communicates the results, and a new simplex is formed from the vertices of all processors. We also describe how the algorithm can be implemented with only two MPI commands. In simulations, our implementation exhibits large speedups and scales to large problem sizes.
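A hedged C/MPI sketch of the vertex-distribution step only: the abstract does not say which two MPI commands the authors used, so the `MPI_Allgather` below is an assumption, and the "update" is a schematic perturbation rather than the real reflection/expansion/contraction logic. Each rank modifies its own vertices, and one collective rebuilds the full simplex on every rank.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NPARAM 7   /* n parameters; the simplex has n+1 vertices */

static double objective(const double *x) {       /* toy objective */
    double s = 0.0;
    for (int i = 0; i < NPARAM; i++) s += x[i] * x[i];
    return s;
}

int main(int argc, char **argv) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int local = (NPARAM + 1) / p;   /* vertices per rank; assumes p | n+1 */
    double *mine    = malloc((size_t)local * NPARAM * sizeof *mine);
    double *simplex = malloc((size_t)local * p * NPARAM * sizeof *simplex);

    /* Each rank updates its own worst vertices (shown schematically
       as a perturbation, not the full Nelder-Mead step) ... */
    for (int v = 0; v < local; v++)
        for (int i = 0; i < NPARAM; i++)
            mine[v * NPARAM + i] = rank + 0.1 * v;

    /* ... then one collective rebuilds the whole simplex everywhere. */
    MPI_Allgather(mine, local * NPARAM, MPI_DOUBLE,
                  simplex, local * NPARAM, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        printf("f(vertex 0) = %g\n", objective(simplex));
    free(mine); free(simplex);
    MPI_Finalize();
    return 0;
}
```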
