Similar Documents (20 results)
1.
This paper presents Atomic RMI, a distributed transactional memory framework that supports the control flow model of execution. Atomic RMI extends Java RMI with distributed transactions that can run on many Java virtual machines located on different network nodes. Our system employs SVA, a fully pessimistic concurrency control algorithm that provides exclusive access to shared objects and supports rollback and fault tolerance. SVA is capable of achieving a relatively high level of parallelism by interweaving transactions that access the same objects and by making transactions that do not share objects independent of one another. It also allows any operations within transactions, including irrevocable ones such as system calls, and provides an unobtrusive API. Our evaluation shows that in most cases Atomic RMI performs better than fine-grained mutual-exclusion and read/write locking mechanisms. Atomic RMI also performs better than an optimistic transactional memory in environments with high contention and a high ratio of write operations, while being competitive otherwise.
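To make the flavor of SVA's pessimistic, version-based access control concrete, the following Java sketch shows one way exclusive, in-order access to a shared object can be granted: a transaction registers for the object, receives a private version number, and may access the object only when that version becomes current. This is a simplified illustration under assumed semantics; the class and method names (VersionedObject, register, acquire, release) are hypothetical and not Atomic RMI's actual API.

// Simplified sketch of version-based pessimistic access control in the spirit of SVA.
// Names (VersionedObject, register, acquire, release) are hypothetical, not Atomic RMI's API.
import java.util.concurrent.atomic.AtomicLong;

class VersionedObject<T> {
    private final T object;
    private final AtomicLong nextVersion = new AtomicLong(1); // version handed to the next transaction
    private volatile long currentVersion = 1;                 // version currently allowed to access

    VersionedObject(T object) { this.object = object; }

    // Called when a transaction starts and declares it will use this object.
    long register() { return nextVersion.getAndIncrement(); }

    // Blocks until it is this transaction's turn, then returns the shared object.
    T acquire(long myVersion) throws InterruptedException {
        synchronized (this) {
            while (currentVersion != myVersion) wait();
        }
        return object;
    }

    // Called on commit (or after rollback has restored the object's state).
    synchronized void release(long myVersion) {
        currentVersion = myVersion + 1;
        notifyAll();
    }
}

Because access is granted strictly in registration order and released only on commit or after rollback, transactions that touch the same object are interwoven rather than aborted, which is why irrevocable operations such as system calls remain safe inside transactions.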

2.
3.
Transactional memory is being advanced as an alternative to traditional lock-based synchronization for concurrent programming. Transactional memory simplifies the programming model and maximizes concurrency. At the same time, transactions can suffer from interference that causes them to often abort, from heavy overheads for memory accesses, and from expressiveness limitations (e.g., for I/O operations). In this paper we propose an adaptive locking technique that dynamically observes whether a critical section would be best executed transactionally or while holding a mutex lock. The critical new elements of our approach include the adaptivity logic and cost–benefit analysis, a low-overhead implementation of statistics collection and adaptive locking in a full C compiler, and an exposition of the effects on the programming model. In experiments with both micro and macrobenchmarks we found adaptive locks to consistently match or outperform the better of the two component mechanisms (mutexes or transactions). Compared to either mechanism alone, adaptive locks often provide 3-to-10x speedups. Additionally, adaptive locks simplify the programming model by reducing the need for fine-grained locking: with adaptive locks, the programmer can specify coarse-grained locking annotations and often achieve fine-grained locking performance due to the transactional memory mechanisms.
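As a rough illustration of the adaptivity logic described above, the sketch below keeps running cost estimates for executing a critical section transactionally versus under a mutex and routes each execution to the currently cheaper mode. It is a hedged sketch, not the paper's C-compiler implementation: the Transaction interface is a stand-in (Java has no built-in transactional memory), the statistics are simple exponentially weighted averages, and a real implementation would also periodically re-sample the mode it is not currently using.

// Hedged sketch of an adaptive lock: route each critical-section execution to whichever
// of the two modes (transaction or mutex) currently has the lower observed cost.
import java.util.concurrent.locks.ReentrantLock;

class AdaptiveLock {
    interface CriticalSection { void run(); }
    // Stand-in for a transactional execution facility; returns false if the attempt aborted.
    interface Transaction { boolean tryAtomically(CriticalSection s); }

    private final ReentrantLock mutex = new ReentrantLock();
    private final Transaction tm;
    private static final double ALPHA = 0.05;   // smoothing factor for the cost estimates
    private double txCostEwma = 1.0;            // avg nanoseconds per transactional execution
    private double lockCostEwma = 1.0;          // avg nanoseconds per lock-based execution

    AdaptiveLock(Transaction tm) { this.tm = tm; }

    // Statistics updates are racy and purely heuristic; a real implementation would also
    // re-sample the unused mode periodically instead of committing to one mode forever.
    void execute(CriticalSection section) {
        if (txCostEwma <= lockCostEwma) {
            long start = System.nanoTime();
            while (!tm.tryAtomically(section)) { /* retry; aborted attempts count toward the cost */ }
            txCostEwma = ewma(txCostEwma, System.nanoTime() - start);
        } else {
            long start = System.nanoTime();
            mutex.lock();
            try { section.run(); } finally { mutex.unlock(); }
            lockCostEwma = ewma(lockCostEwma, System.nanoTime() - start);
        }
    }

    private static double ewma(double old, double sample) { return (1 - ALPHA) * old + ALPHA * sample; }
}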

4.
Transactional memory is regarded as a promising approach to parallel programming on multicore processors, but it suffers from excessive overhead. Using a Bloom filter to detect conflicts between transactions on shared variables can reduce this overhead effectively, but Bloom filter false positives cause unnecessary transaction aborts and should therefore be minimized. This paper briefly introduces Bloom filters and transactional memory, and proposes ACDA, an adaptive conflict detection algorithm for transactional memory that adjusts the size of the Bloom filter's bit string according to the size of a transaction's read and write sets, keeping the false-positive rate from increasing while incurring low overhead. The characteristics of implementing ACDA in software transactional memory are analyzed, and a preliminary implementation is compared with RSTM, a mainstream software transactional memory system. On the STAMP transactional memory benchmarks, with acceptable overhead, ACDA reduces the number of transactions aborted due to false positives by up to 93%. Ideas for further optimizing ACDA's hash functions are also given.
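The adaptive sizing that ACDA performs can be related to the standard Bloom filter formulas m = -n ln p / (ln 2)^2 and k = (m/n) ln 2, which choose the bit-string length m and the hash count k for an expected set size n and a target false-positive rate p. The Java sketch below sizes a per-transaction filter this way; it is only an illustration of the idea, not the ACDA algorithm or its hash functions.

// Sizing a per-transaction Bloom filter from the expected read/write-set size so that the
// false-positive rate stays near a target p. A sketch in the spirit of ACDA, not the paper's code.
import java.util.BitSet;

class AdaptiveBloomFilter {
    private final BitSet bits;
    private final int m;       // number of bits
    private final int k;       // number of hash functions

    // expectedElements should be >= 1 (the expected read/write-set size of the transaction).
    AdaptiveBloomFilter(int expectedElements, double targetFpRate) {
        this.m = (int) Math.ceil(-expectedElements * Math.log(targetFpRate) / (Math.log(2) * Math.log(2)));
        this.k = Math.max(1, (int) Math.round((double) m / expectedElements * Math.log(2)));
        this.bits = new BitSet(m);
    }

    void add(long address) {
        for (int i = 0; i < k; i++) bits.set(index(address, i));
    }

    boolean mayContain(long address) {
        for (int i = 0; i < k; i++) if (!bits.get(index(address, i))) return false;
        return true; // "true" may still be a false positive
    }

    // Two hashes combined (double hashing); the constants are arbitrary mixing constants.
    private int index(long address, int i) {
        long h1 = address * 0x9E3779B97F4A7C15L;
        long h2 = Long.rotateLeft(address, 31) * 0xC2B2AE3D27D4EB4FL | 1L;
        return (int) Math.floorMod(h1 + (long) i * h2, (long) m);
    }
}

A transaction-to-transaction conflict check is then a membership test of one transaction's accessed addresses against the other's filter: a negative answer is always correct, while a positive answer may still be a false positive, which the adaptive sizing keeps rare.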

5.
Thread-level speculation (TLS) has been studied as a way to automatically parallelize portions of serial programs, and transactional memory (TM) has been studied as a promising, simpler alternative to locks for parallel programming. Both TLS and TM require similar underlying support. In this paper we present SeTM (sequential transactional memory), a hardware-enhanced TM system that supports TLS at minor extra cost. Signatures are an effective way to buffer speculative state in TM and TLS, but their false positives in conflict detection cripple TM and TLS performance, especially for conflict-intensive TLS. SeTM uses R/W bits and signatures together to mitigate this effect. Additionally, SeTM introduces a fast rollback mechanism that provides fast abort recovery for eager log-based HTM and TLS. The most important contribution of SeTM is its conflict-tolerant mechanism, which tolerates some ambiguous data conflicts in TLS. Finally, to execute these unordered transactions efficiently, we add an ordering mechanism to SeTM; with it, TM transactions can also benefit from the conflict-tolerant mechanism. Our evaluation covers TM and TLS separately. For TLS, six representative benchmarks were used to evaluate the model; the experimental results show that our scheme improves the execution performance of most tested codes at a modest hardware cost, reaching a highest speedup of 6.5 on 15 cores for a set of important scientific loops, and they also show good scalability of the SeTM system. For TM, compared with LogTM-SE, the STAMP benchmarks also gain significant performance improvements.
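The combination of signatures with exact R/W bits can be pictured as a two-level conflict test: the signature gives a fast, possibly false-positive answer, and the exact per-line state confirms it, so signature false positives do not force aborts. The Java sketch below is a software illustration of that idea under assumed semantics; all names are hypothetical and it does not model SeTM's hardware design.

// Two-level conflict detection sketch: a small bit signature gives a fast "maybe overlapping"
// answer, and exact per-line state (R/W bits in hardware, a set here) confirms it, filtering
// out signature false positives. All names are illustrative.
import java.util.HashSet;
import java.util.Set;

class TwoLevelConflictCheck {
    private long signature;                               // 64-bit write signature (coarse)
    private final Set<Long> writeSet = new HashSet<>();   // exact written cache-line addresses

    void recordWrite(long line) {
        signature |= 1L << (Long.hashCode(line * 0x9E3779B97F4A7C15L) & 63);
        writeSet.add(line);
    }

    // Does another transaction's read of `line` really conflict with our writes?
    boolean conflictsWithRead(long line) {
        long bit = 1L << (Long.hashCode(line * 0x9E3779B97F4A7C15L) & 63);
        if ((signature & bit) == 0) return false; // signature says "definitely not"
        return writeSet.contains(line);           // exact check removes false positives
    }
}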

6.
Transactional memory was proposed in 1993 to improve the performance of read and write access to shared memory in parallel programs. Because it directly controls reads and writes of memory data, it has also attracted considerable attention from systems security researchers. In 2013, Intel began supporting the TSX (Transactional Synchronization Extensions) feature, bringing transactional memory support to widely used commodity hardware for the first time. Exploiting transactional memory's memory-access tracking, memory-access-triggered signaling, and rollback of memory operations, together with Intel TSX's user-level abort handling, execution of all operations within the cache, and hardware efficiency, researchers have produced a variety of systems security results, including authorization policy enforcement, virtual machine introspection, key protection, control-flow integrity, error recovery, and side-channel attack and defense. This paper first surveys these transactional-memory-based research results, and then analyzes the relationship between existing systems security work and the properties of transactional memory from three perspectives: control over memory accesses, transaction rollback handling, and executing all operations within the cache. We decompose the technical approaches of prior work along these three perspectives and, by comparison with earlier solutions that do not use transactional memory, explain the advantages that transactional memory brings. Finally, we summarize and look ahead to future research, including improvements to hardware transactional memory implementations and the potential of transactional memory (especially hardware transactional memory) in systems security research.

7.
Parallel I/O systems support multiple access modes, each with its own access characteristics and range of applicability. To measure system performance under these different modes, parallel I/O testing usually requires combining several micro-benchmarks, which demands that users both understand parallel I/O in depth and be familiar with the inputs and outputs of each micro-benchmark. This paper proposes and implements Jetter, a parallel I/O benchmark that classifies parallel I/O interfaces by interface type, access mode, and process-to-file relationship; it can measure I/O system performance under all of these modes while simplifying the testing work. Practical use of Jetter shows that parallel I/O systems support different modes with very different efficiency, with gaps of more than two orders of magnitude in the extreme, and these findings help users develop high-quality parallel programs.

8.
Parallel Algorithms for Time-Dependent Monte Carlo Transport Problems
This paper presents parallel algorithms for time-dependent Monte Carlo (MC) transport problems and discusses and optimizes the loading and execution modes of the parallel program. A parallel random number generator algorithm is designed for MC parallel computation that, in the ideal case, requires no communication. Time-dependent MC transport involves a large amount of I/O; in particular, reading the residual-particle data files takes substantial I/O time. To address this, three parallel I/O algorithms are proposed. Finally, performance results of the parallel algorithms are reported: compared with the serial computation time, the parallel computation time on 64 processors is reduced by a factor of 30.
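A common way to obtain a communication-free parallel random number stream is to derive each process's stream deterministically from a global seed and its rank, so that no generator state is exchanged at run time. The Java sketch below illustrates this with java.util.SplittableRandom; it is only an analogue of the idea, not the generator designed in the paper, and the seed-mixing constant is an arbitrary choice.

// Communication-free per-process random streams: each rank derives its own generator from
// a global seed and its rank, so no generator state is ever exchanged between processes.
import java.util.SplittableRandom;

class PerRankStream {
    static SplittableRandom forRank(long globalSeed, int rank) {
        // Mix the rank into the seed so each rank gets a distinct, reproducible stream.
        long mixed = globalSeed ^ (0x9E3779B97F4A7C15L * (rank + 1));
        return new SplittableRandom(mixed);
    }

    public static void main(String[] args) {
        SplittableRandom rng = forRank(42L, 3);  // e.g. rank 3 of the parallel job
        double x = rng.nextDouble();             // sample a particle's next free path, etc.
        System.out.println(x);
    }
}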

9.
Transactional memory for multicore processors has been a major research area over the past several years, and many transactional memory systems have been proposed to solve the synchronization problem of multicore processors. Hardware transactional memory is one of the key approaches to speeding up communication in a multicore environment. In this paper, we review current hardware transactional memory systems for multicore processors. We take a top-down approach to characterizing and classifying hardware transactional design issues and present a taxonomy of hardware transactional memory systems built around five fundamental design issues: version management, conflict detection, contention management, virtualization, and nesting. Finally, we discuss an active research challenge: the relationship between transactional memory and input/output operations and system calls.

10.
Publishing transactional data about individuals in an anonymous form is increasingly required by organizations. Recent approaches ensure that potentially identifying information cannot be used to link published transactions to individuals’ identities. However, these approaches are inadequate to anonymize data that is both protected and practically useful in applications because they incorporate coarse privacy requirements, do not integrate utility requirements, and tend to explore a small portion of the solution space. In this paper, we propose the first approach for anonymizing transactional data under application-specific privacy and utility requirements. We model such requirements as constraints, investigate how these constraints can be specified, and propose COnstraint-based Anonymization of Transactions, an algorithm that anonymizes transactions using a flexible anonymization scheme to meet the specified constraints. Experiments with benchmark datasets verify that COAT significantly outperforms the current state-of-the-art algorithm in terms of data utility, while being comparable in terms of efficiency. Our approach is also shown to be effective in preserving both privacy and utility in a real-world scenario that requires disseminating patients’ information.
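As a small illustration of what an application-specific privacy constraint can look like when checked against transactional data, the sketch below tests whether a given combination of potentially identifying items is supported by at least k transactions (or by none at all); if not, further generalization or suppression would be needed. This shows only one hypothetical constraint-checking step and is not the COAT algorithm.

// Hypothetical privacy-constraint check for transactional data: a specified combination of
// potentially identifying items must appear in at least k transactions, or not at all.
import java.util.List;
import java.util.Set;

class PrivacyConstraintCheck {
    // transactions: each transaction is the set of items it contains.
    static boolean satisfied(List<Set<String>> transactions, Set<String> identifyingCombo, int k) {
        long support = transactions.stream()
                .filter(t -> t.containsAll(identifyingCombo))
                .count();
        return support == 0 || support >= k;
    }
}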

11.
We consider the load-balanced multiplication of a large sparse matrix with a large sequence of vectors on parallel computers. We propose a method that combines fast load balancing with efficient message-passing techniques to alleviate computational and inter-node communication challenges. The performance of the proposed method was evaluated on benchmark as well as synthetically generated matrices and compared with existing work. It is shown that our approach yields a tangible improvement over prior work, particularly for very sparse and skewed matrices. Moreover, it is also shown that the I/O overhead for this problem can be efficiently amortized through I/O latency hiding and overall load balancing.
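The two ingredients referred to above, load balancing and the local multiply, can be sketched as follows for a matrix in CSR form: rows are split so that each worker receives roughly the same number of nonzeros (which matters for skewed matrices), and each worker then multiplies its rows by the vector. The greedy prefix split below is an illustrative heuristic, not the paper's partitioning method.

// Sketch: (1) partition CSR rows so each worker gets roughly equal nonzeros, and
// (2) multiply a worker's rows by a vector. Greedy prefix split, illustrative only.
class CsrSpmv {
    // CSR arrays: rowPtr has n+1 entries, colIdx/values have nnz entries.
    static int[] partitionRowsByNnz(int[] rowPtr, int numWorkers) {
        int n = rowPtr.length - 1, nnz = rowPtr[n];
        int[] start = new int[numWorkers + 1];
        start[numWorkers] = n;
        int worker = 1;
        for (int row = 0; row < n && worker < numWorkers; row++) {
            if (rowPtr[row] * (long) numWorkers >= (long) nnz * worker) {
                start[worker++] = row;   // worker begins where ~worker/numWorkers of nnz is reached
            }
        }
        while (worker < numWorkers) start[worker++] = n; // degenerate case: fewer rows than workers
        return start;                                    // worker w owns rows [start[w], start[w+1])
    }

    static void multiplyRows(int[] rowPtr, int[] colIdx, double[] values,
                             double[] x, double[] y, int rowBegin, int rowEnd) {
        for (int i = rowBegin; i < rowEnd; i++) {
            double sum = 0.0;
            for (int j = rowPtr[i]; j < rowPtr[i + 1]; j++) sum += values[j] * x[colIdx[j]];
            y[i] = sum;
        }
    }
}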

12.
Transactional memory is a high-level abstract parallel programming model intended to simplify the development of parallel programs. Its contention management module resolves conflicts between transactions, but traditional contention management policies only arbitrate between two conflicting transactions at a time. This paper proposes converting a set of transactions and their conflict relations into an undirected graph and, based on the global conflict situation, using graph vertex coloring to find a maximum independent set of that graph. Transactions in the maximum independent set do not conflict with one another, so the contention manager can let them execute concurrently, maximizing system concurrency.
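The effect of such a contention manager can be sketched as selecting, from the global conflict graph, a set of pairwise non-conflicting transactions to run concurrently. Since maximum independent set is NP-hard in general, the Java sketch below uses a simple greedy lowest-degree-first heuristic to build an independent set; the paper itself obtains its solution through graph vertex coloring, so this is an illustration of the policy rather than the proposed algorithm.

// Contention-management sketch: pick a set of pairwise non-conflicting transactions from the
// global conflict graph and let them run concurrently. Greedy lowest-degree-first heuristic.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class ConcurrentBatchSelector {
    // conflict[i][j] == true means transactions i and j conflict.
    static List<Integer> independentSet(boolean[][] conflict) {
        int n = conflict.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Consider low-degree (rarely conflicting) transactions first.
        java.util.Arrays.sort(order, Comparator.comparingInt(v -> degree(conflict, v)));

        List<Integer> chosen = new ArrayList<>();
        boolean[] blocked = new boolean[n];
        for (int v : order) {
            if (blocked[v]) continue;
            chosen.add(v);
            for (int u = 0; u < n; u++) if (conflict[v][u]) blocked[u] = true;
        }
        return chosen; // these transactions are mutually conflict-free and can run concurrently
    }

    private static int degree(boolean[][] conflict, int v) {
        int d = 0;
        for (boolean c : conflict[v]) if (c) d++;
        return d;
    }
}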

13.
This paper proposes a Restricted Admission Control (RAC) scheme for View-Oriented Transactional Memory. The scheme can control the number of threads concurrently accessing a view in order to reduce the number of transaction aborts. The RAC scheme has the merits of both the locking mechanism and transactional memory. A theoretical model is proposed to analyze the performance of the RAC scheme and to provide guidance for dynamic adjustment of the number of concurrent threads accessing the same view. Experimental results demonstrate that the theoretical RAC model mostly provides correct guidance for transactional concurrency control. Our RAC implementation shows that RAC can optimize concurrency control of transactions and performs much better than conventional transactional memory systems such as TinySTM that have no dynamic admission control.
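One way to picture restricted admission control is a per-view permit counter: at most a bounded number of threads may run transactions against the view at once, and the bound is adjusted from the observed commit/abort ratio. The sketch below uses a Java Semaphore for the permits; the names and the adjustment rule are illustrative assumptions, not the paper's model or API.

// Sketch of restricted admission control for a view: a bounded number of concurrent
// transactions, with the bound adapted from the observed commit/abort ratio.
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;

class ViewAdmissionControl {
    // Semaphore whose permit count can also be shrunk (reducePermits is protected in Semaphore).
    private static final class AdjustableSemaphore extends Semaphore {
        AdjustableSemaphore(int permits) { super(permits); }
        void reduce(int n) { super.reducePermits(n); }
    }

    private final AdjustableSemaphore permits;
    private final AtomicLong commits = new AtomicLong(), aborts = new AtomicLong();
    private int limit;

    ViewAdmissionControl(int initialLimit) {
        this.limit = initialLimit;
        this.permits = new AdjustableSemaphore(initialLimit);
    }

    void enter() throws InterruptedException { permits.acquire(); }
    void exitCommitted() { commits.incrementAndGet(); permits.release(); }
    void exitAborted()   { aborts.incrementAndGet();  permits.release(); }

    // Called periodically: shrink concurrency when aborts dominate, grow it when they are rare.
    synchronized void adjust() {
        long c = commits.getAndSet(0), a = aborts.getAndSet(0);
        if (a > c && limit > 1) { permits.reduce(1); limit--; }   // heavy contention on this view
        else if (a * 4 < c)     { permits.release(); limit++; }   // few aborts: admit one more thread
    }
}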

14.
Parallel Algorithms for Discovery of Association Rules
Discovery of association rules is an important data mining task. Several parallel and sequential algorithms have been proposed in the literature to solve this problem. Almost all of these algorithms make repeated passes over the database to determine the set of frequent itemsets (a subset of database items), thus incurring high I/O overhead. In the parallel case, most algorithms perform a sum-reduction at the end of each pass to construct the global counts, also incurring high synchronization cost. In this paper we describe new parallel association mining algorithms. The algorithms use novel itemset clustering techniques to approximate the set of potentially maximal frequent itemsets. Once this set has been identified, the algorithms make use of efficient traversal techniques to generate the frequent itemsets contained in each cluster. We propose two clustering schemes based on equivalence classes and maximal hypergraph cliques, and study two lattice traversal techniques based on bottom-up and hybrid search. We use a vertical database layout to cluster related transactions together. The database is also selectively replicated so that the portion of the database needed for the computation of associations is local to each processor. After the initial set-up phase, the algorithms do not need any further communication or synchronization. The algorithms minimize I/O overheads by scanning the local database portion only twice: once in the set-up phase, and once when processing the itemset clusters. Unlike previous parallel approaches, the algorithms use simple intersection operations to compute frequent itemsets and do not have to maintain or search complex hash structures. Our experimental testbed is a 32-processor DEC Alpha cluster interconnected by the Memory Channel network. We present results on the performance of our algorithms on various databases, and compare them against a well-known parallel algorithm. The best new algorithm outperforms it by an order of magnitude.
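With a vertical database layout, the support of an itemset is simply the size of the intersection of its items' tid-lists (sorted lists of the transaction identifiers containing each item), which is what lets these algorithms count candidates with simple merges instead of rescanning the database. The sketch below shows that core intersection step; it is a minimal illustration, not the clustering or lattice-traversal machinery of the paper.

// Vertical-layout support counting: support({A,B}) = |tids(A) ∩ tids(B)|, computed by a
// linear merge of two sorted tid-lists.
class TidListIntersection {
    // Both inputs must be sorted ascending; returns the intersected tid-list.
    static int[] intersect(int[] tidsA, int[] tidsB) {
        int[] out = new int[Math.min(tidsA.length, tidsB.length)];
        int i = 0, j = 0, k = 0;
        while (i < tidsA.length && j < tidsB.length) {
            if (tidsA[i] < tidsB[j]) i++;
            else if (tidsA[i] > tidsB[j]) j++;
            else { out[k++] = tidsA[i]; i++; j++; }
        }
        return java.util.Arrays.copyOf(out, k);
    }

    public static void main(String[] args) {
        int[] tidsA = {1, 3, 4, 7, 9};   // transactions containing item A
        int[] tidsB = {3, 4, 8, 9};      // transactions containing item B
        System.out.println("support = " + intersect(tidsA, tidsB).length); // prints 3
    }
}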

15.
In this paper, we present a framework for synthesizing I/O-efficient out-of-core programs for block recursive algorithms, such as the fast Fourier transform (FFT) and block matrix transposition. Our framework uses an algebraic representation based on tensor products and other matrix operations. The programs are optimized for Vitter and Shriver's striped two-level memory model, in which data can be distributed using various cyclic(B) distributions, in contrast to the normally used physical track distribution cyclic(Bd), where Bd is the physical disk block size. We first introduce tensor bases to capture the semantics of block-cyclic distributions of out-of-core data as well as access patterns to out-of-core data. We then present program generation techniques for tensor products and matrix transposition, and accurately express the number of parallel I/O operations required by the synthesized programs as a function of the tensor bases and data distributions. We introduce an algorithm to determine the data distribution that optimizes the performance of the synthesized programs, and we formalize the synthesis of efficient out-of-core programs for tensor product formulas with various block-cyclic distributions as a dynamic programming problem. We demonstrate the effectiveness of our approach through several examples, showing that the choice of an appropriate data distribution can reduce the number of passes over out-of-core data by as much as a factor of eight for a single tensor product, and that the dynamic programming approach can greatly reduce the number of passes for complete tensor product formulas.

16.
Transactional coherence and consistency (TCC) simplifies parallel hardware and software design by eliminating the need for conventional cache coherence and consistency models and letting programmers parallelize a wide range of applications with a simple, lock-free transactional model. TCC eases both parallel programming and parallel architecture design by relying on programmer-defined transactions as the basic unit of parallel work, communication, memory coherence, and memory consistency.

17.
Commercial transaction processing applications are an important workload running on symmetric multiprocessor systems (SMPs). They differ dramatically from scientific, numeric-intensive, and engineering applications because they are I/O bound and contain more system software activity. Most SMP servers available in the market have been designed and optimized for scientific and engineering workloads. A major challenge of studying architectural effects on the performance of a commercial workload is the lack of easy access to large-scale and complex database engines running on a multiprocessor system with powerful I/O facilities, and experiments involving case studies have been shown to be highly time-consuming and expensive. In this paper, we investigate the feasibility of using queueing network models, supported by simulation, to study SMP architectural impacts on the performance of commercial workloads. We use the commercial benchmark TPC-C as the workload and a bus-based SMP machine as the target platform. Queueing network modeling is employed to characterize the TPC-C workload on the SMP: system components such as processors, memory, the memory bus, I/O buses, and disks are modeled as service centers, and their effects on performance are analyzed. Simulations are conducted as well to collect the workload-specific parameters (model parameterization) and to verify the accuracy of the model. Our studies find that among the disk-related parameters, disk rotation latency affects TPC-C performance most significantly, and that between the number of I/O buses and the number of disks, the number of I/O buses has the greater impact on performance. This study also demonstrates that our modeling approach is feasible, cost-effective, and accurate for evaluating the performance of commercial workloads on SMPs, and that it is complementary to measurement-based experimental approaches.
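For readers unfamiliar with closed queueing network models, the standard exact Mean Value Analysis (MVA) recursion below computes throughput for a closed network of service centers, which is the general kind of model the paper builds (processors, memory bus, I/O buses, and disks as service centers, with TPC-C transactions as the circulating customers). The service demands in the example are made-up placeholders, not the paper's measured parameters.

// Textbook exact Mean Value Analysis for a closed queueing network of FCFS service centers.
class MeanValueAnalysis {
    // demands[k] = average total service demand of one transaction at center k (seconds).
    // Returns system throughput (transactions/second) with n customers and think time z.
    static double throughput(double[] demands, int n, double z) {
        int k = demands.length;
        double[] queueLen = new double[k];          // Q_k(0) = 0
        double x = 0.0;
        for (int customers = 1; customers <= n; customers++) {
            double totalResponse = 0.0;
            double[] response = new double[k];
            for (int c = 0; c < k; c++) {
                response[c] = demands[c] * (1.0 + queueLen[c]);  // R_k(n) = D_k * (1 + Q_k(n-1))
                totalResponse += response[c];
            }
            x = customers / (z + totalResponse);                 // X(n) = n / (Z + sum_k R_k(n))
            for (int c = 0; c < k; c++) queueLen[c] = x * response[c];  // Q_k(n) = X(n) * R_k(n)
        }
        return x;
    }

    public static void main(String[] args) {
        // Hypothetical per-transaction demands: CPU, memory bus, one I/O bus, disk (seconds).
        double[] demands = {0.010, 0.004, 0.006, 0.020};
        System.out.printf("X(50) = %.1f tx/s%n", throughput(demands, 50, 0.5));
    }
}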

18.
Heuristics for scheduling I/O operations
The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits. We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
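A concrete instance of the kind of approximate scheduling algorithm discussed here: treat each I/O operation as occupying one processor and one disk for a unit-time transfer, forbid two operations that share a device from running in the same step, and greedily place every operation in the earliest step where both of its devices are free (the exact problem can be viewed as edge coloring of a bipartite transfer graph). The sketch follows that generic heuristic under assumed unit transfer times and is not the paper's specific algorithm.

// Greedy batch I/O scheduling sketch: place each operation in the earliest unit-time step
// where both its processor and its disk are free.
import java.util.ArrayList;
import java.util.List;

class GreedyIoScheduler {
    // ops[i] = {processorId, diskId}; returns the time step assigned to each operation.
    static int[] schedule(int[][] ops, int numProcessors, int numDisks) {
        List<boolean[]> procBusy = new ArrayList<>(); // procBusy.get(t)[p]: processor p busy at step t
        List<boolean[]> diskBusy = new ArrayList<>();
        int[] step = new int[ops.length];
        for (int i = 0; i < ops.length; i++) {
            int p = ops[i][0], d = ops[i][1];
            int t = 0;
            while (t < procBusy.size() && (procBusy.get(t)[p] || diskBusy.get(t)[d])) t++;
            if (t == procBusy.size()) {               // need a new time step
                procBusy.add(new boolean[numProcessors]);
                diskBusy.add(new boolean[numDisks]);
            }
            procBusy.get(t)[p] = true;
            diskBusy.get(t)[d] = true;
            step[i] = t;
        }
        return step;                                  // makespan = max(step) + 1 unit-time steps
    }
}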

19.
Research on Dependence-Graph-Based Hardware Transactional Memory
Transactional memory simplifies the control of access to shared resources in parallel programs and is an active research topic. Most current hardware transactional memory systems adopt a concurrency control protocol based on conflict detection and resolution: when a conflict between two transactions is detected, one of them is aborted. A closer analysis of inter-transaction "conflicts" shows, however, that some of them do not necessarily require a rollback; such conflicts are called "weak conflicts". A dependence-graph-based hardware transactional memory technique can avoid the unnecessary rollbacks caused by weak conflicts. Simulation experiments show that the dependence-graph-based transactional memory system has a clear performance advantage over conflict-resolution-based systems.
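The policy can be illustrated in software as follows: each detected conflict is recorded as a directed dependence edge between the two transactions, and a transaction is aborted only if adding the edge would close a cycle, i.e., only if the execution could no longer be serialized; acyclic ("weak") conflicts proceed without rollback. The Java sketch below shows that bookkeeping and is an illustration of the idea, not the paper's hardware design.

// Dependence-graph bookkeeping sketch: record "from must commit before to" edges and
// abort only when a new edge would create a cycle (loss of serializability).
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class DependenceGraph {
    private final Map<Integer, Set<Integer>> edges = new HashMap<>(); // txId -> txIds it must precede

    // Try to record "from must commit before to". Returns false if that would create a
    // cycle, meaning one of the two transactions has to be aborted.
    boolean addDependence(int from, int to) {
        if (from == to || reachable(to, from)) return false;  // would close a cycle
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        return true;
    }

    private boolean reachable(int src, int dst) {             // DFS over recorded edges
        Set<Integer> seen = new HashSet<>();
        java.util.Deque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(src);
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (v == dst) return true;
            if (!seen.add(v)) continue;
            for (int u : edges.getOrDefault(v, Set.of())) stack.push(u);
        }
        return false;
    }
}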

20.

Due to the growing scale and complexity of computer systems, reducing the overhead of fault tolerance techniques has become important in recent years. One such technique is checkpointing, which saves a snapshot of the information computed up to a specific moment; it suspends the execution of the application and consumes I/O resources and network bandwidth. Characterizing the files generated when checkpointing a parallel application is useful for determining the resources consumed and their impact on the I/O system. It is also important to characterize the application being checkpointed, and one of these characteristics is whether the application itself performs I/O. In this paper, we present a model of checkpoint behavior for parallel applications that perform I/O; the behavior depends on the application and on other factors such as the number of processes, the mapping of processes, and the type of I/O used. These characteristics also influence scalability, the resources consumed, and their impact on the I/O system. Our model describes the behavior of the checkpoint size based on the characteristics of the system and the type (or model) of I/O used, such as the number of I/O aggregator processes, the buffer size used by the two-phase I/O optimization technique, and the components of collective file I/O operations. The BT benchmark and FLASH I/O are analyzed under different configurations of aggregator processes and buffer sizes to explain our approach. The model can be useful when selecting which checkpoint configuration is most appropriate given the applications’ characteristics and the resources available; the user can thus know how much storage space the checkpoint consumes and how much the application consumes, in order to establish policies that help improve the distribution of resources.
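As a back-of-the-envelope illustration of how the two-phase I/O parameters mentioned above enter such a model, the sketch below estimates how many exchange/write rounds each aggregator performs when draining its share of a checkpoint through a fixed-size collective buffer. The quantities and the simple ceiling formula are illustrative assumptions, not the paper's actual model.

// Illustrative two-phase I/O estimate: each aggregator writes its share of the checkpoint
// through a fixed-size buffer, so rounds per aggregator = ceil(perAggregatorBytes / bufferBytes).
class CheckpointEstimate {
    static long writeRoundsPerAggregator(long totalCheckpointBytes, int numAggregators, long bufferBytes) {
        long perAggregator = (totalCheckpointBytes + numAggregators - 1) / numAggregators; // ceil
        return (perAggregator + bufferBytes - 1) / bufferBytes;                            // ceil
    }

    public static void main(String[] args) {
        long total = 64L * 1024 * 1024 * 1024;   // 64 GiB of checkpoint data (hypothetical)
        int aggregators = 16;                    // e.g. one aggregator per node (hypothetical)
        long buffer = 16L * 1024 * 1024;         // 16 MiB two-phase I/O buffer (hypothetical)
        System.out.println("write rounds per aggregator = "
                + writeRoundsPerAggregator(total, aggregators, buffer)); // 4 GiB / 16 MiB = 256
    }
}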

