20 similar documents retrieved.
1.
The radiosity method is very demanding in terms of computing and memory resources. To cope with these problems, parallel solutions have been proposed in the literature. A new parallel solution, based on the use of a shared virtual memory (SVM), is proposed. It will be shown that the SVM concept greatly simplifies the implementation of a parallel algorithm, since distributed data are managed by the operating system. This new parallel radiosity algorithm has been implemented on an iPSC/2 hypercube using the SVM. The first results obtained with this algorithm are encouraging, since the calculated efficiency curve is nearly linear.
2.
Although data parallelism has been widely adopted, some applications, such as tree-structured algorithms, still do not fit the parallel model of data-parallel languages. Combining data parallelism with task parallelism can solve these problems well. This paper focuses on the problems encountered when task parallelism is introduced into data parallelism, including shared variables, code generation, and processor allocation, and it compares and analyzes compiler-based, language-based, and coordination-library-based approaches.
3.
Sebastian Danicic Richard W. Barraclough Mark Harman John D. Howroyd Ákos Kiss Michael R. Laurence 《Theoretical computer science》2011,412(49):6809-6842
There are several similar, but not identical, definitions of control dependence in the literature. These definitions are given in terms of control flow graphs which have had extra restrictions imposed (for example, end-reachability). We define two new generalisations of non-termination insensitive and non-termination sensitive control dependence called weak and strong control-closure. These are defined for all finite directed graphs, not just control flow graphs, and hence allow control dependence to be applied to a wider class of program structures than before. We investigate all previous forms of control dependence in the literature and prove that, for the restricted graphs for which each is defined, vertex sets are closed under each if and only if they are either weakly or strongly control-closed. Low polynomial-time algorithms for producing minimal weakly and strongly control-closed sets over generalised control flow graphs are given. This paper is the first to define an underlying semantics for control dependence: we define two relations between graphs called weak and strong projections, and prove that the graph induced by a set of vertices is a weak/strong projection of the original if and only if the set is weakly/strongly control-closed. Thus, all previous forms of control dependence also satisfy our semantics. Weak and strong projections, therefore, precisely capture the essence of control dependence in our generalisations and all the previous, more restricted forms. More fundamentally, these semantics can be thought of as correctness criteria for future definitions of control dependence.
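As context for the abstract above, a minimal sketch of the classical notion of control dependence on a control flow graph (my own illustrative example, not the paper's weak/strong control-closure construction):

```c
/* Hedged illustration of classical control dependence (not the paper's
 * control-closure construction): a statement is control dependent on a
 * predicate when the outcome at that predicate decides whether the
 * statement executes. */
int control_dependence_example(int p, int x) {
    if (p) {
        x = x + 1;   /* executes only when p is true: control dependent on p */
    }
    return x * 2;    /* executes on every path from the branch: not control
                        dependent on p */
}
```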
4.
5.
Precise value-based data dependence analysis for scalars is useful for advanced compiler optimizations. The new method presented
here for flow and output dependence uses Factored Use and Def chains (FUD chains), our interpretation and extension of Static
Single Assignment. It is precise with respect to conditional control flow and dependence vectors. Our method detects dependences
which are independent with respect to arbitrary loop nesting, as well as loop-carried dependences. A loop-carried dependence
is further classified as being carried from the previous iteration, with distance 1, or from any previous iteration, with
direction <. This precision cannot be achieved by traditional analysis, such as dominator information or reaching definitions.
To compute anti- and input dependence, we use Factored Redef-Use chains, which are related to FUD chains. We are not aware
of any prior work which explicitly deals with scalar data dependence utilizing a sparse graph representation.
A preliminary version of this paper appeared in the Seventh Annual Workshop on Languages and Compilers for Parallel Computing, August 1994.
Supported in part by NSF Grant CCR-9113885 and a grant from Intel Corporation and the Oregon Advanced Computing Institute.
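A minimal sketch (my own example, not the FUD-chain algorithm) of the dependence classes this abstract distinguishes: carried from the previous iteration with distance 1, versus carried from any previous iteration with direction <.

```c
/* Hedged illustration of the scalar/loop dependence classes mentioned above
 * (not the FUD-chain algorithm itself). */
void dependence_classes(int n, int *a, const int *c) {
    int t = 0;
    for (int i = 1; i < n; i++) {
        a[i] = a[i - 1] + 1;   /* flow dependence carried from the previous
                                  iteration: distance 1 */
    }
    for (int i = 0; i < n; i++) {
        if (c[i])
            t = i;             /* conditional definition of t */
        a[i] = t;              /* the reaching definition of t may come from the
                                  same iteration or from any earlier one, so the
                                  carried part has direction '<' rather than a
                                  fixed distance */
    }
}
```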
6.
Research on Data Fusion Methods and Applications
This paper introduces the definition of data fusion, its fusion levels, fusion methods, and application areas, in particular its applications in the military, in robot control, and in image processing. On this basis, the research hotspots and development trends of data fusion are analyzed. Data fusion is a widely applicable digital signal and information processing technique: by processing and refining large amounts of data, it produces a set of intuitive and effective data that provides an accurate basis for further processing, judgment, and control.
7.
8.
Johan Montagnat Tristan Glatard Isabel Campos Plasencia Francisco Castejón Xavier Pennec Giuliano Taffoni Vladimir Voznesensky Claudio Vuerli 《Journal of Grid Computing》2008,6(4):369-383
Setting up and deploying complex applications on a Grid infrastructure is still challenging and the programming models are
rapidly evolving. Efficiently exploiting Grid parallelism is often not straightforward. In this paper, we report on the techniques
used for deploying applications on the EGEE production Grid through four experiments coming from completely different scientific
areas: nuclear fusion, astrophysics and medical imaging. These applications have in common the need for manipulating huge
amounts of data and all are computationally intensive. All the cases studied show that the deployment of data-intensive applications requires the development of more or less elaborate application-level workload management systems on top of the gLite middleware
to efficiently exploit the EGEE Grid resources. In particular, the adoption of high level workflow management systems eases
the integration of large scale applications while exploiting Grid parallelism transparently. Different approaches for scientific
workflow management are discussed. The strategy of the MOTEUR workflow manager for efficiently dealing with complex data flows is detailed in particular. Without requiring specific application development, it leads to very significant speed-ups.
9.
Leana Golubchik Samir Khuller Yoo-Ah Kim Svetlana Shargorodskaya Yung-Chun Wan 《Algorithmica》2006,45(1):137-158
Our work is motivated by the problem of managing data on storage devices, typically a set of disks. Such storage servers are
used as web servers or multimedia servers for handling high demand for data. While the system is running, it needs to respond dynamically to changes in demand for different data items in order to maintain good performance. There are known algorithms for mapping demand to a layout. When the demand changes, a new layout can be computed. In this work we study the data migration problem, which arises when we need to change one layout to another quickly. This problem has been studied earlier in a setting where a new layout is prescribed for each disk. However, to apply these algorithms effectively, we identify another problem that we
refer to as the correspondence problem, whose solution has a significant impact on the overall solution for the data migration
problem. We study algorithms for the data migration problem in more detail and identify variations of the basic algorithm
that seem to improve performance in practice, even though some of the variations have poor worst-case behavior.
This research was supported by the NSF Awards CCR-0113192 and EIA-0091474 as well as the Okawa Research Award. This work made
use of Integrated Media Systems Center Shared Facilities supported by the National Science Foundation under Cooperative Agreement
No. EEC-9529152; any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s)
and do not necessarily reflect those of the National Science Foundation. This work was done while Svetlana Shargorodskaya
was at the University of Maryland.
10.
J. L. Davidson 《Journal of Mathematical Imaging and Vision》1992,1(2):169-192
This paper introduces several decomposition results for a class of nonlinear transforms called lattice transforms. A lattice transform has a matrix representation in the context of minimax algebra, a matrix algebraic structure developed for operations research. A general matrix decomposition method is presented and is then extended to provide necessary and sufficient conditions for mapping a lattice transform to a limited-connection parallel architecture. An additional result, necessary and sufficient conditions for finding a decomposition of a block Toeplitz matrix with Toeplitz blocks, is also given. Prior to these results, no minimax matrix decompositions had been developed.
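The abstract gives no formulas; as a hedged sketch, the matrix product of minimax algebra that underlies such matrix representations replaces multiplication by addition and summation by a maximum (this assumes the standard max-plus product; the function below is my own illustration, not code from the paper):

```c
#include <stdio.h>

#define N 3

/* Hedged sketch of the max-plus matrix product used in minimax algebra
 * (my reading, not code from the paper): multiplication is replaced by
 * addition and summation by maximum, i.e. C[i][j] = max_k (A[i][k] + B[k][j]). */
static void maxplus_product(const double A[N][N], const double B[N][N],
                            double C[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            double best = A[i][0] + B[0][j];
            for (int k = 1; k < N; k++) {
                double v = A[i][k] + B[k][j];
                if (v > best)
                    best = v;
            }
            C[i][j] = best;
        }
    }
}

int main(void) {
    double A[N][N] = {{0, 1, 2}, {3, 0, 1}, {2, 2, 0}};
    double B[N][N] = {{0, 2, 1}, {1, 0, 3}, {2, 1, 0}};
    double C[N][N];
    maxplus_product(A, B, C);
    for (int i = 0; i < N; i++)
        printf("%g %g %g\n", C[i][0], C[i][1], C[i][2]);
    return 0;
}
```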
11.
12.
Tu Huy Phan Enrico Pontelli Tran Cao Son Son Thanh To 《Concurrency and Computation》2009,21(15):1928-1960
The goal of this paper is to investigate the application of parallel programming techniques to boost the performance of heuristic search‐based planning systems in various aspects. It shows that an appropriate parallelization of a sequential planning system often brings gains in performance and/or scalability. We start by describing general schemes for parallelizing the construction of a plan. We then discuss the applications of these techniques to two domain‐independent heuristic search‐based planners—a competitive conformant planner (CPA) and a state‐of‐the‐art classical planner (FF). We present experimental results—on both shared memory and distributed memory platforms—which show that performance improvements and scalability are obtained in both cases. Finally, we discuss the issues that should be taken into consideration when designing a parallel planning system and relate our work to the existing literature. Copyright © 2009 John Wiley & Sons, Ltd.
13.
《Data Processing》1986,28(8):405-409
Parallel processing can reduce the cost of computing, improve reliability and provide greater throughput. Equally important is the likelihood that real-life problems contain parallelism that should be exploited in the computer architecture if natural solutions are to be found. Many parallel machines are now on the market, both single instruction, multiple data (SIMD) and multiple instruction, multiple data (MIMD), but software engineering techniques have yet to make their complete impact on these architectures.
14.
Data Dependence and Separation of Anomalous Data: Applications
During data transmission, two kinds of phenomena frequently occur: some data elements of the transmitted data are lost in transit, and some unknown data elements intrude into the transmitted data. These two phenomena make the transmitted data "anomalous". Using a new mathematical model, a theoretical study of both phenomena and their applications is given. This new mathematical model is the P-set (packet set): a P-set is a set pair composed of an internal packet set X^F̄ and an outer packet set X^F; in other words, (X^F̄, X^F) is a P-set. The concepts and properties of F-dependence and F̄-dependence of data are given, a data dependence theorem is proposed, and an application to separating anomalous data is presented. Data dependence is one of the many application characteristics of P-sets. P-sets are a new theory and method for studying dynamic data systems.
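The superscripts in the notation above were lost in the original listing; as a hedged sketch of the set pair being described (assuming the standard P-set notation, with X an ordinary set):

```latex
% Minimal sketch, assuming standard P-set notation: X^{\bar F} is the internal
% packet set (elements lost from X in transit) and X^{F} is the outer packet
% set (unknown elements intruding into X).
\[
  X^{\bar F} \subseteq X \subseteq X^{F},
  \qquad
  \bigl(X^{\bar F},\, X^{F}\bigr) \text{ is the P-set generated from the ordinary set } X.
\]
```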
15.
Dependence Analysis and Vectorization of Sequential Programs
This paper presents two new data dependence analysis methods: the coefficient discriminant method and the real-analysis method. The coefficient discriminant method, built on the GCD method, gives the precise dependence relation between array terms and directly gives the direction of the dependence. The real-analysis method does not have the restriction, required by other current analysis methods, that subscripts be linear functions of the loop control variables, and it naturally handles the analysis of coupled subscripts and implicit relations. In addition, the paper discusses methods for breaking data dependences and the problem of vectorization. All of the algorithms in the paper have been implemented under UNIX.
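The abstract does not state the test itself; as a hedged illustration of the classic GCD test that the coefficient discriminant method is said to build on (function and variable names are my own), a dependence between references A[a*i+b] and A[c*j+d] is possible only if gcd(a, c) divides d - b:

```c
#include <stdio.h>
#include <stdlib.h>

/* Greatest common divisor of two non-negative integers. */
static int gcd(int x, int y) {
    while (y != 0) {
        int t = x % y;
        x = y;
        y = t;
    }
    return x;
}

/* Classic GCD dependence test: for references A[a*i + b] and A[c*j + d]
 * inside a loop nest, the equation a*i + b = c*j + d has an integer
 * solution only if gcd(a, c) divides d - b.
 * Returns 1 if a dependence is possible, 0 if it is disproved. */
static int gcd_test(int a, int b, int c, int d) {
    int g = gcd(abs(a), abs(c));
    if (g == 0)                 /* both coefficients zero: only b, d matter */
        return b == d;
    return (d - b) % g == 0;
}

int main(void) {
    /* A[2*i] vs A[2*j + 1]: gcd(2,2)=2 does not divide 1 -> independent.   */
    printf("%d\n", gcd_test(2, 0, 2, 1));   /* prints 0 */
    /* A[2*i] vs A[4*j + 6]: gcd(2,4)=2 divides 6 -> dependence possible.   */
    printf("%d\n", gcd_test(2, 0, 4, 6));   /* prints 1 */
    return 0;
}
```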
16.
Elastic Data Dependence and Software Pipelining
The worst-case path is a major obstacle to software pipelining of loops with branches. In a loop with branches, some data dependences (called elastic dependences) may or may not produce instances during the dynamic execution of the loop. Based on this, elastic dependences that severely limit parallelism can be replaced by looser fictitious dependences before software pipelining is applied. If the resulting schedule violates an original elastic dependence, it is repaired by a push-down transformation. This relieves, or completely removes, the worst-case-path restriction. The method is complementary to classical control speculation; its characteristic is to allow the schedule to contain errors and then to correct them.
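A minimal sketch (my own example, not the paper's transformation) of an elastic dependence whose instances may or may not arise at run time:

```c
/* Hedged illustration of an "elastic" dependence: whether the flow dependence
 * from the write of t to the read of t has a run-time instance depends on
 * which branches are taken, so it may or may not constrain software
 * pipelining of a given dynamic execution. */
void elastic_example(int n, const int *c, const int *a, int *b) {
    int t = 0;
    for (int i = 0; i < n; i++) {
        if (c[i])
            t = a[i];        /* t is defined only on this path */
        else
            b[i] = t + 1;    /* t is read only on this path: the dependence on
                                the write above exists only for iteration pairs
                                in which both paths actually execute */
    }
}
```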
17.
Dror E. Maydan John L. Hennessy Monica S. Lam 《International journal of parallel programming》1995,23(1):63-81
Data dependence testing is the basic step in detecting loop level parallelism in numerical programs. The problem is undecidable
in the general case. Therefore, work has been concentrated on a simplified problem, affine memory disambiguation. In this
simpler domain, array references and loop bounds are assumed to be linear integer functions of loop variables. Dataflow information is ignored. For this domain, we have shown that in practice the problem can be solved accurately and efficiently.(1) This paper studies empirically the effectiveness of this domain restriction: how many real references are affine and flow-insensitive. We use Larus's llpp system(2) to find all the data dependences dynamically. We compare these to the results given by our affine memory disambiguation system.
This system is exact for all the cases we see in practice. We show that while the affine approximation is reasonable, memory
disambiguation is not a sufficient approximation for data dependence analysis. We propose extensions to improve the analysis.
This research was supported in part by a fellowship from AT & T Bell Laboratories and by DARPA contract N00014-87-K-0828.
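As a hedged illustration (my own example) of the abstract's conclusion that address-based memory disambiguation over-approximates value-based data dependence:

```c
/* Hedged illustration (not from the paper): an address-based test sees that
 * both statements access a[i] and reports a dependence, but the first write
 * is killed before any read, so no value flows from it. A value-based,
 * flow-sensitive analysis can disprove the dependence. */
void kill_example(int n, int *a, const int *x, const int *y) {
    for (int i = 0; i < n; i++) {
        a[i] = x[i];   /* dead write: overwritten below in the same iteration */
        a[i] = y[i];   /* only this value can reach later reads of a[i]       */
    }
}
```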
18.
Graphical processing units (GPUs) have recently attracted attention for scientific applications such as particle simulations. This is partially driven by low commodity pricing of GPUs but also by recent toolkit and library developments that make them more accessible to scientific programmers. We discuss the application of GPU programming to two significantly different paradigms—regular mesh field equations with unusual boundary conditions and graph analysis algorithms. The differing optimization techniques required for these two paradigms cover many of the challenges faced when developing GPU applications. We discuss the relevance of these application paradigms to simulation engines and games. GPUs were aimed primarily at the accelerated graphics market but since this is often closely coupled to advanced game products it is interesting to speculate about the future of fully integrated accelerator hardware for both visualization and simulation combined. As well as reporting the speed‐up performance on selected simulation paradigms, we discuss suitable data‐parallel algorithms and present code examples for exploiting GPU features like large numbers of threads and localized texture memory. We find a surprising variation in the performance that can be achieved on GPUs for our applications and discuss how these findings relate to past known effects in parallel computing such as memory speed‐related super‐linear speed up. Copyright © 2009 John Wiley & Sons, Ltd.
19.
Software pipelining is an important method for loop scheduling, but pipelining loops with branches remains difficult. Existing algorithms fall into four classes: loop linearization, path separation, global scheduling, and path selection. None of them harmoniously resolves two conflicting problems: minimizing transition time and the worst-case constraint problem. A software pipelining framework based on path grouping and data dependence relaxation is proposed, which attempts to resolve these problems without contradiction. Its main ideas are: (1) path grouping, i.e., grouping paths according to their execution probabilities and transition probabilities so as to minimize transition time; (2) data dependence relaxation, which tries to avoid worst-case constraints: when a loop has multiple paths, some dependences do not necessarily have instances during execution, and the ideal strategy is to respect such a dependence only when it has an instance. Preliminary experiments and qualitative analysis show that this …
20.
Instruction-level parallel processing: History, overview, and perspective
Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by causing individual machine operations to execute in parallel. Although ILP has appeared in the highest performance uniprocessors for the past 30 years, the 1980s saw it become a much more significant force in computer design. Several systems were built and sold commercially, which pushed ILP far beyond where it had been before, both in terms of the amount of ILP offered and in the central role ILP played in the design of the system. By the end of the decade, advanced microprocessor design at all major CPU manufacturers had incorporated ILP, and new techniques for ILP had become a popular topic at academic conferences. This article provides an overview and historical perspective of the field of ILP and its development over the past three decades.