Similar Documents
 20 similar documents found
1.
A heuristic algorithm for testing the absolute irreducibility of multivariate polynomials over arbitrary fields using Newton polytopes was proposed by Gao and Lauder (Discrete Comput. Geom. 26:89–104, 2001). A preliminary implementation by Gao and Lauder (2003) established a wide range of families of low-degree and sparse polynomials for which the algorithm works efficiently and with a high success rate. In this paper, we develop a BSP variant of absolute irreducibility testing via polytopes, with the aim of producing a more memory- and run-time-efficient method that can provide a wider range of applicability, specifically in terms of the degrees of the input polynomials. In the bivariate case, we describe a balanced load scheme and a corresponding data distribution leading to a parallel algorithm whose efficiency can be established under reasonably realistic conditions. This is later incorporated into a doubly parallel algorithm in the multivariate case that achieves similar scalable performance. Both parallel models are analyzed for efficiency, and the theoretical analysis is compared to the performance of our experiments. In the empirical results we report, we achieve absolute irreducibility testing for bivariate and trivariate polynomials of degrees up to 30,000, and for low-degree multivariate polynomials with more than 3,000 variables. To the best of our knowledge, this sets a world record in establishing the absolute irreducibility of multivariate polynomials.

2.
Permuting a vector is a fundamental primitive which arises in many applications. In particular, rational permutations, which are defined by permutations of the bits of the binary representations of the vector indices, are widely used. Matrix transposition and bit-reversal are notable examples of rational permutations. In this paper we contribute a number of results regarding the execution of these permutations in cache hierarchies, with particular emphasis on the cache-oblivious setting. We first bound from below the work needed to execute a rational permutation with an optimal cache complexity. Then, we develop a cache-oblivious algorithm to perform any rational permutation, which exhibits optimal work and cache complexities under the tall cache assumption. We finally show that for certain families of rational permutations (including matrix transposition and bit reversal) no cache-oblivious algorithm can exhibit optimal cache complexity for all values of the cache parameters. This latter result specializes the one proved by Brodal and Fagerberg for general permutations to the case of rational permutations, and provides further evidence that the tall cache assumption is often necessary to attain cache optimality in the context of cache-oblivious algorithms.
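The bit-reversal permutation named above is easy to state concretely. The following minimal C++ sketch applies it by naive index swapping; it is illustrative only and makes no attempt at the optimal work and cache complexities the paper establishes (the function names are ours).

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Reverse the lowest `bits` bits of x, e.g. reverse_bits(0b001, 3) == 0b100.
std::uint32_t reverse_bits(std::uint32_t x, unsigned bits) {
    std::uint32_t r = 0;
    for (unsigned i = 0; i < bits; ++i) {
        r = (r << 1) | (x & 1u);  // shift the lowest bit of x into r
        x >>= 1;
    }
    return r;
}

// Apply the bit-reversal permutation in place; v.size() must equal 2^bits.
// Each pair (i, j) with j = reverse_bits(i) is swapped exactly once (i < j).
template <typename T>
void bit_reverse_permute(std::vector<T>& v, unsigned bits) {
    for (std::uint32_t i = 0; i < v.size(); ++i) {
        std::uint32_t j = reverse_bits(i, bits);
        if (i < j) std::swap(v[i], v[j]);
    }
}
```

Note how the access pattern jumps across the whole vector, which is exactly why such permutations interact badly with block-fetching caches.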

3.
The workload of multimedia applications has a strong impact on cache memory performance, since the locality of memory references embedded in multimedia programs differs from that of traditional programs. In many cases, standard cache memory organization achieves poorer performance when used for multimedia. A widely explored approach to improve cache performance is hardware prefetching, which allows data to be pre-loaded into the cache before they are referenced. However, existing hardware prefetching approaches fail to realize this potential performance improvement, since they are not tailored to multimedia locality. In this paper we propose novel, effective approaches to hardware prefetching for image processing programs in multimedia. Experimental results are reported for a suite of multimedia image processing programs including MPEG-2 decoding and encoding, convolution, thresholding, and edge chain coding.

4.
Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g. for comparing different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious mesh layouts in terms of performance, applicability, and accuracy.
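As a rough illustration of what a layout-quality metric can look like, the C++ sketch below scores a vertex layout by the average index distance across mesh edges. This is a simplified stand-in of our own devising, not the cache-aware or cache-oblivious metrics derived in the paper.

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Score a vertex layout by the average distance, in layout order, between
// the endpoints of each mesh edge: vertices accessed together should sit
// close together in memory. A crude proxy for cache-miss behavior.
double edge_span(const std::vector<std::pair<int, int>>& edges,
                 const std::vector<int>& position) {  // vertex id -> layout index
    if (edges.empty()) return 0.0;
    double total = 0.0;
    for (const auto& e : edges)
        total += std::abs(position[e.first] - position[e.second]);
    return total / static_cast<double>(edges.size());
}
```

A layout optimizer would then search for a `position` assignment that lowers such a score; the paper's metrics additionally account for block size and cache parameters.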

5.
In a multiprogrammed system, when the operating system switches contexts, in addition to the cost of handling the processes being swapped out and in, the cache performance of processors can also be affected. If frequent context switching replaces the data loaded into cache memory before they are completely reused, the programs suffer from cache misses due to the damage to cache locality. In particular, for programs with good cache locality, such as blocked programs, a scheduling mechanism that preserves cache locality across context switches is essential to achieve good processor utilization. To meet this requirement, we propose a preemption-safe policy to exploit the cache locality of blocked programs in a multiprogrammed system. The proposed policy delays context switching until a block is fully reused, but also compensates for the monopolized processor time in the processor scheduling mechanism. Our simulation results show that in a situation where blocked programs are run on multiprogrammed shared-memory multiprocessors, the proposed policy improves the performance of these programs due to a decrease in cache misses. In such situations, it also has a beneficial impact on overall system performance due to the enhanced processor utilization.

6.
Multithreading is a well-known technique to hide latency in a non-blocking cache architecture. By switching execution from one thread to another, the CPU can perform useful work while waiting for pending requests to be processed by the main memory. In this paper we examine the effects of varying the associativity and block size on cache performance in an environment with reduced locality of reference due to multithreading. We find that when the associativity equals the number of threads, the cache produces a very low miss rate even at small sizes. Also, by taking into account the increase in cycle time due to larger cache size or associativity, we find that the optimum cache configuration for best processor performance is 16 Kbytes, direct mapped. Finally, with a constant main memory bandwidth, increasing the block size beyond 32 bytes reduces the miss rate but degrades processor performance.

7.
We develop a cache-oblivious data structure for storing a set S of N axis-aligned rectangles in the plane, such that all rectangles in S intersecting a query rectangle or point can be found efficiently. Our structure is an axis-aligned bounding-box hierarchy and as such it is the first cache-oblivious R-tree with provable performance guarantees. If no point in the plane is contained in more than a constant number of rectangles in S, we can construct, for any constant ε, a structure that answers a rectangle query using \(O(\sqrt{N/B}+T/B)\) memory transfers and a point query using \(O((N/B)^{\varepsilon})\) memory transfers, where T is the number of reported rectangles and B is the block size of memory transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure.

8.
Cache optimization for inter-loop data reuse in multiprocessor systems
Caches alleviate the wide speed gap between the CPU and main memory; at the same time, the cache hit rate has become an important factor in the overall performance of multiprocessor systems. How to strengthen data locality and raise the cache hit rate, so that multiprocessor systems realize more of their potential performance, has been actively explored. Past work, however, has concentrated on strengthening data locality within parallel loops and on reducing or even eliminating the cache thrashing caused by true and false sharing of cache lines within parallel loops; the exploitation of inter-loop data reuse in multiprocessor systems has received little attention. This paper analyzes and discusses how to exploit such inter-loop data reuse, and proposes several practical, easy-to-implement methods. These methods…
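Since the abstract breaks off before the methods are described, the C++ sketch below illustrates only the general idea of exploiting data reuse across loops, here via loop fusion; it is our own example, not necessarily one of the paper's methods.

```cpp
#include <cstddef>
#include <vector>

// Run separately, the second loop re-reads a[] after it has likely been
// evicted from cache, so the inter-loop reuse of a[] is wasted.
void separate(std::vector<double>& a, std::vector<double>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) a[i] *= 2.0;
    for (std::size_t i = 0; i < a.size(); ++i) b[i] = a[i] + 1.0;
}

// Fusing the loops reuses each a[i] while it is still in cache (indeed,
// still in a register), turning inter-loop reuse into immediate reuse.
void fused(std::vector<double>& a, std::vector<double>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        a[i] *= 2.0;
        b[i] = a[i] + 1.0;
    }
}
```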

9.
Making pointer-based data structures cache conscious
Computer, 2000, 33(12): 67–74
To narrow the widening gap between processor and memory performance, the authors propose improving the cache locality of pointer-manipulating programs and bolstering performance through careful placement of structure elements. Considering past trends and future technology, it seems clear that the processor-memory performance gap will continue to increase and that software will continue to grow larger and more complex. Although cache-conscious algorithms and data structures are the first and perhaps best place to attack this performance problem, the complexity of software design and an increasing tendency to build large software systems by assembling smaller components do not favor a focused, integrated approach. We propose another, more incremental approach, cache-conscious data layout, which uses techniques such as clustering, coloring, and compression to enhance data locality by placing structure elements more carefully in the cache.

10.
We develop algorithmic optimizations to improve the cache performance of four fundamental graph algorithms. We present a cache-oblivious implementation of the Floyd-Warshall algorithm for the fundamental graph problem of all-pairs shortest paths by relaxing some dependencies in the iterative version. We show that this implementation achieves the lower bound on processor-memory traffic of \(\Omega(N^{3}/\sqrt{C})\), where N and C are the problem size and cache size, respectively. Experimental results show that this cache-oblivious implementation achieves more than a six-fold improvement in real execution time over the iterative implementation with the usual row-major data layout, on three state-of-the-art architectures. Second, we address Dijkstra's algorithm for the single-source shortest paths problem and Prim's algorithm for the minimum spanning tree problem. For these algorithms, we demonstrate up to a two-fold improvement in real execution time by using a simple cache-friendly graph representation, namely adjacency arrays. Finally, we address the matching algorithm for bipartite graphs. We show performance improvements of two to three times in real execution time by making the algorithm initially work on subproblems to generate a suboptimal solution and then solving the whole problem using the suboptimal solution as a starting point. Experimental results are shown for the Pentium III, UltraSPARC III, Alpha 21264, and MIPS R12000 machines.
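The adjacency-array representation credited above with up to two-fold speedups is essentially the compressed sparse row (CSR) form. Below is a minimal C++ sketch of that representation; the field names and layout details are our assumptions.

```cpp
#include <cstddef>
#include <vector>

// Adjacency arrays (compressed sparse row form): every neighbor list is
// packed contiguously, so scanning a vertex's neighbors is a sequential,
// cache-friendly sweep. offset has num_vertices + 1 entries.
struct AdjacencyArrays {
    std::vector<std::size_t> offset;   // offset[v] .. offset[v+1] index into:
    std::vector<int> neighbor;         // packed neighbor ids
    std::vector<double> weight;        // edge weights, parallel to neighbor
};

// Example traversal: sum the weights of edges incident to v with one
// sequential scan over the packed arrays.
double sum_incident_weights(const AdjacencyArrays& g, int v) {
    double s = 0.0;
    for (std::size_t e = g.offset[v]; e < g.offset[v + 1]; ++e)
        s += g.weight[e];              // the neighbor id would be g.neighbor[e]
    return s;
}
```

Compared with pointer-based adjacency lists, this layout avoids pointer chasing entirely, which is where the cache benefit comes from.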

11.
Data deduplication has been widely utilized in large-scale storage systems, particularly backup systems. Data deduplication systems typically divide data streams into chunks and identify redundant chunks by comparing chunk fingerprints. Maintaining all fingerprints in memory is not cost-effective because fingerprint indexes are typically very large. Many data deduplication systems maintain a fingerprint cache in memory and exploit fingerprint prefetching to accelerate the deduplication process. Although fingerprint prefetching can improve the performance of data deduplication systems by leveraging the locality of workloads, inaccurately prefetched fingerprints may pollute the cache by evicting useful fingerprints. We observed that most of the prefetched fingerprints in a wide variety of applications are never used or used only once, which severely limits the performance of data deduplication systems. We introduce a prefetch-aware fingerprint cache management scheme for data deduplication systems (PreCache) to alleviate prefetch-related cache pollution. We propose three prefetch-aware fingerprint cache replacement policies (PreCache-UNU, PreCache-UOO, and PreCache-MIX) to handle different types of cache pollution. Additionally, we propose an adaptive policy selector to select suitable policies for prefetch requests. We implement PreCache on two representative data deduplication systems (Block Locality Caching and SiLo) and evaluate its performance utilizing three real-world workloads (Kernel, MacOS, and Homes). The experimental results reveal that PreCache improves deduplication throughput by up to 32.22% based on a reduction of on-disk fingerprint index lookups and improvement of the deduplication ratio by mitigating prefetch-related fingerprint cache pollution.
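To make prefetch-aware replacement concrete, here is a simplified C++ sketch of a fingerprint cache that flags prefetched-but-never-used entries and evicts them first. It only gestures at the general idea; the paper's actual policies (PreCache-UNU, PreCache-UOO, PreCache-MIX) and its adaptive selector are more elaborate, and all names here are ours.

```cpp
#include <cstddef>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

// LRU cache of fingerprints where eviction prefers entries that entered via
// prefetch and were never hit, since those are the likeliest pollution.
class PrefetchAwareCache {
    struct Entry { std::string fp; bool prefetched_unused; };
    std::list<Entry> lru_;  // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
    std::size_t capacity_;

    void evict() {
        // Prefer the least-recent prefetched-but-never-used entry.
        for (auto it = lru_.rbegin(); it != lru_.rend(); ++it)
            if (it->prefetched_unused) {
                index_.erase(it->fp);
                lru_.erase(std::next(it).base());  // erase the flagged entry
                return;
            }
        index_.erase(lru_.back().fp);  // none flagged: fall back to plain LRU
        lru_.pop_back();
    }

public:
    explicit PrefetchAwareCache(std::size_t cap) : capacity_(cap) {}

    bool lookup(const std::string& fp) {
        auto it = index_.find(fp);
        if (it == index_.end()) return false;
        it->second->prefetched_unused = false;        // it has now been used
        lru_.splice(lru_.begin(), lru_, it->second);  // move to front
        return true;
    }

    void insert(const std::string& fp, bool via_prefetch) {
        if (index_.count(fp)) return;
        if (lru_.size() >= capacity_) evict();
        lru_.push_front({fp, via_prefetch});
        index_[fp] = lru_.begin();
    }
};
```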

12.
Cache performance is strongly influenced by the type of locality embodied in programs. In particular, multimedia programs handling images and videos are characterized by a bidimensional spatial locality, which is not adequately exploited by standard caches. In this paper we propose novel cache prefetching techniques for image data, called neighbor prefetching, able to better exploit bidimensional spatial locality. A performance comparison is provided against other established prefetching techniques on a multimedia workload (with MPEG-2 and MPEG-4 decoding, image processing, and visual object segmentation), including a detailed evaluation of both the miss rate and the memory access time. Results show that neighbor prefetching achieves a significant reduction in the time due to delayed memory cycles (more than 97% on MPEG-4, versus 75% for the second-best technique). This reduction leads to a substantial speedup in overall memory access time (up to 140% for MPEG-4). Performance has been measured with the PRIMA trace-driven simulator, specifically devised to support cache prefetching.
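A minimal C++ sketch of the bidimensional idea behind neighbor prefetching: on an access to pixel (x, y), compute the block-aligned addresses of the four adjacent pixels as prefetch candidates. The row-major layout and address arithmetic are our assumptions, not the paper's exact hardware scheme.

```cpp
#include <cstddef>
#include <vector>

// Return the block-aligned addresses holding the left, right, upper, and
// lower neighbors of pixel (x, y) in a row-major image, as candidate
// prefetch targets covering the 2D neighborhood, not just the next block.
std::vector<std::size_t> neighbor_blocks(std::size_t x, std::size_t y,
                                         std::size_t width, std::size_t height,
                                         std::size_t bytes_per_pixel,
                                         std::size_t block_size) {
    std::vector<std::size_t> blocks;
    const long dx[] = {-1, 1, 0, 0};
    const long dy[] = {0, 0, -1, 1};
    for (int k = 0; k < 4; ++k) {
        const long nx = static_cast<long>(x) + dx[k];
        const long ny = static_cast<long>(y) + dy[k];
        if (nx < 0 || ny < 0 || nx >= static_cast<long>(width) ||
            ny >= static_cast<long>(height))
            continue;  // neighbor lies outside the image
        const std::size_t addr =
            (static_cast<std::size_t>(ny) * width + static_cast<std::size_t>(nx))
            * bytes_per_pixel;
        blocks.push_back(addr / block_size * block_size);  // align to block start
    }
    return blocks;
}
```

A sequential prefetcher would only ever fetch the block after `addr`; the vertical neighbors, a whole image row away, are what a bidimensional scheme adds.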

13.
Parallel Computing, 2014, 40(5–6): 59–69
We present a cache-aware method for accelerating texture-based volume rendering on a graphics processing unit (GPU). Because a GPU has a hierarchical architecture in terms of processing and memory units, cache optimization is important to maximize performance for memory-intensive applications. Our method localizes texture memory references according to the location of the viewpoint and dynamically selects the width and height of thread blocks (TBs) so that each warp, a series of 32 threads processed simultaneously, can minimize memory access strides. We also incorporate transposed indexing of threads to perform TB-level cache optimization for specific viewpoints. Furthermore, we maximize TB size to exploit spatial locality with fewer resident TBs. For viewpoints with relatively large strides, we synchronize threads of the same TB at regular intervals to realize synchronous ray propagation. Experimental results indicate that our cache-aware method doubles the worst-case rendering performance compared to that achieved with the CUDA and OpenCL software development kits.

14.
Hardware and software cache optimizations are active fields of research that have yielded powerful but occasionally complex designs and algorithms. The purpose of this paper is to investigate the performance obtained by combining simple software and hardware optimizations. Because current caches provide little flexibility for exploiting temporal and spatial locality, two hardware modifications are proposed to support these two kinds of locality. Spatial locality is exploited by using large virtual cache lines, which do not exhibit the performance flaws of large physical cache lines. Temporal locality is exploited by minimizing cache pollution with a bypass mechanism that still allows spatial locality to be exploited. Subsequently, it is shown that simple software information on the spatial/temporal locality of array references, as provided by current data locality optimization algorithms, can be used to increase cache performance significantly. The performance and design tradeoffs of the proposed mechanisms are discussed. Software-assisted caches are also shown to provide very convenient support for further enhancement of data locality optimizations.

15.
肖玄基, 张云泉, 李玉成, 袁良. 《软件学报》, 2013, 24(S2): 118–126
MAGMA is the first open-source linear algebra package targeting next-generation architectures (multicore CPUs and GPUs). It employs many optimizations aimed at heterogeneous platforms, including hybrid synchronization, communication avoidance, and dynamic task scheduling. It resembles LAPACK in functionality, data storage, and interfaces, and can harness the enormous computing power of GPUs for numerical computation. This paper tests and analyzes MAGMA. We first analyze its matrix factorization algorithms; we then use the test results to analyze MAGMA's effective optimization and parallelization methods, offering useful advice for using and tuning MAGMA; finally, we propose an adaptive tuning method for the matrix blocking algorithm which, in our tests, achieves a speedup of 1.09 for the SGEQRF routine on square matrices and 1.8 for the CGEQRF routine on tall-and-skinny matrices.
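Adaptive block-size tuning can be pictured as a timed sweep over candidate block sizes. The generic C++ sketch below reflects that idea under our own assumptions; the paper's actual tuning procedure for SGEQRF/CGEQRF is not detailed in the abstract, and `run_kernel` is a hypothetical callback.

```cpp
#include <chrono>
#include <functional>
#include <vector>

// Time a blocked kernel at each candidate block size and keep the fastest.
// run_kernel(nb) should execute one factorization with block size nb.
int pick_block_size(const std::vector<int>& candidates,
                    const std::function<void(int)>& run_kernel) {
    int best = candidates.front();
    double best_ms = 1e300;
    for (int nb : candidates) {
        const auto t0 = std::chrono::steady_clock::now();
        run_kernel(nb);
        const auto t1 = std::chrono::steady_clock::now();
        const double ms =
            std::chrono::duration<double, std::milli>(t1 - t0).count();
        if (ms < best_ms) { best_ms = ms; best = nb; }
    }
    return best;
}
```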

16.
With the exponential growth in size of geometric data, it is becoming increasingly important to make effective use of multilevel caches, limited disk storage, and bandwidth. As a result, recent work in the visualization community has focused either on designing sequential access compression schemes or on producing cache-coherent layouts of (uncompressed) meshes for random access. Unfortunately combining these two strategies is challenging as they fundamentally assume conflicting modes of data access. In this paper, we propose a novel order-preserving compression method that supports transparent random access to compressed triangle meshes. Our decompression method selectively fetches from disk, decodes, and caches in memory requested parts of a mesh. We also provide a general mesh access API for seamless mesh traversal and incidence queries. While the method imposes no particular mesh layout, it is especially suitable for cache-oblivious layouts, which minimize the number of decompression I/O requests and provide high cache utilization during access to decompressed, in-memory portions of the mesh. Moreover, the transparency of our scheme enables improved performance without the need for application code changes. We achieve compression rates on the order of 20:1 and significantly improved I/O performance due to reduced data transfer. To demonstrate the benefits of our method, we implement two common applications as benchmarks. By using cache-oblivious layouts for the input models, we observe 2–6 times overall speedup compared to using uncompressed meshes.

17.
Technological advancements in the silicon industry, as predicted by Moore’s law, have resulted in an increasing number of processor cores on a single chip, giving rise to multicore, and subsequently many-core architectures. This work focuses on identifying key architecture and software optimizations to attain high performance from tiled many-core architectures (TMAs)—an architectural innovation in the multicore technology. Although embedded systems design is traditionally power-centric, there has been a recent shift toward high-performance embedded computing due to the proliferation of compute-intensive embedded applications. The TMAs are suitable for these embedded applications due to low-power design features in many of these TMAs. We discuss the performance optimizations on a single tile (processor core) as well as parallel performance optimizations, such as application decomposition, cache locality, tile locality, memory balancing, and horizontal communication for TMAs. We elaborate compiler-based optimizations that are applicable to TMAs, such as function inlining, loop unrolling, and feedback-based optimizations. We present a case study with optimized dense matrix multiplication algorithms for Tilera’s TILEPro64 to experimentally demonstrate the performance and performance per watt optimizations on TMAs. Our results quantify the effectiveness of algorithmic choices, cache blocking, compiler optimizations, and horizontal communication in attaining high performance and performance per watt on TMAs.
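Cache blocking, one of the optimizations the case study quantifies, is illustrated by the classic blocked matrix multiply below. This is a single-core C++ sketch of the general technique; the paper's TILEPro64 implementation, parallel decomposition, and horizontal communication are not reproduced here.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Cache-blocked dense matrix multiply, C += A * B, all n x n and row-major.
// The block size nb should be chosen so that three nb x nb blocks fit in
// cache; each pair of blocks is then fully reused while cache-resident.
void matmul_blocked(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, std::size_t n, std::size_t nb) {
    for (std::size_t ii = 0; ii < n; ii += nb)
        for (std::size_t kk = 0; kk < n; kk += nb)
            for (std::size_t jj = 0; jj < n; jj += nb)
                // Multiply one pair of blocks while they stay in cache.
                for (std::size_t i = ii; i < std::min(ii + nb, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + nb, n); ++k) {
                        const double a = A[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + nb, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```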

18.
Extending a microprocessor with a vector instruction set is a feasible way to improve modern microprocessor performance. However, traditional vector instructions exhibit poor locality in their accesses to the memory system, and therefore integrate poorly with the caches that are widely used in modern microprocessor designs. Targeting cache performance, this paper reworks the traditional vector instruction set and proposes COV (Cache Optimized Vector Instruction Set). The instruction set is implemented and evaluated on the OpenRISC1200 platform, achieving a performance speedup of roughly four times.

19.
As an effective strategy for improving the efficiency with which the CPU reads and stores data, and for bridging the speed gap with main memory, the CPU cache exploits the locality of data usage by temporarily holding recently or most frequently used data, and it plays a decisive role in CPU performance. The cache microarchitecture is the key factor that determines cache performance. However, modern CPU caches have extremely complex structures, with designs spanning multiple policies, multiple hardware algorithms, and multiple levels; designing and validating them directly in hardware is both time-consuming and costly. A cache microarchitecture simulator models and simulates the hardware microarchitecture in software. Designing a well-structured cache and evaluating different microarchitectures is therefore work of lasting significance. Starting from the hardware structure, this paper designs and implements CMFSim (Cache microarchitecture functional simulator), a multi-level, highly configurable, and highly extensible functional simulator of cache microarchitectures. It implements common cache policies and hardware algorithms and can simulate cache behavior under a given configuration, enabling analysis of the relationship between configuration parameters and cache performance.
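As a toy counterpart to what such a functional simulator models, here is a minimal single-level, set-associative cache model with LRU replacement in C++. It is our illustration of the concept only; CMFSim itself is multi-level, highly configurable, and implements many more policies.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Single-level, set-associative cache model with LRU replacement: feed it
// an address trace and read off the hit and miss counts afterwards.
class CacheModel {
    struct Line {
        std::uint64_t tag = 0;
        bool valid = false;
        std::uint64_t last_use = 0;
    };
    std::vector<std::vector<Line>> sets_;
    std::uint64_t block_size_;
    std::uint64_t tick_ = 0;

public:
    std::uint64_t hits = 0, misses = 0;

    CacheModel(std::size_t num_sets, std::size_t ways, std::uint64_t block_size)
        : sets_(num_sets, std::vector<Line>(ways)), block_size_(block_size) {}

    void access(std::uint64_t addr) {
        const std::uint64_t block = addr / block_size_;
        const std::uint64_t tag = block / sets_.size();
        auto& set = sets_[block % sets_.size()];
        ++tick_;
        for (auto& line : set)
            if (line.valid && line.tag == tag) {  // hit: refresh LRU stamp
                line.last_use = tick_;
                ++hits;
                return;
            }
        ++misses;  // miss: fill an invalid way, otherwise evict the LRU way
        Line* victim = &set[0];
        for (auto& line : set) {
            if (!line.valid) { victim = &line; break; }
            if (line.last_use < victim->last_use) victim = &line;
        }
        *victim = Line{tag, true, tick_};
    }
};
```

For example, `CacheModel c(64, 4, 64);` models a 16 KB, 4-way cache with 64-byte blocks; replaying a trace through `c.access(addr)` and comparing hit rates across configurations is exactly the kind of parameter study the paper describes.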

20.
We present new deterministic and probabilistic algorithms that reduce the factorization of dense polynomials from several variables to one variable. The deterministic algorithm runs in sub-quadratic time in the dense size of the input polynomial, and the probabilistic algorithm is softly optimal when the number of variables is at least three. We also investigate the reduction from several to two variables and improve the quantitative version of Bertini’s irreducibility theorem.
