Similar Literature
 20 similar documents retrieved.
1.
Software transactional memory (STM) is a new programming technique that has emerged to simplify parallel programming. To reduce the frequency of transaction conflicts in an STM system and thereby improve overall performance, a new contention management policy based on dynamic control and queue scheduling is proposed. The concept of contention intensity and an overall system framework are defined, and on this basis a method is given for dynamically adjusting the contention intensity using runtime feedback. The design of transaction serialization and the issues to watch for in its implementation are also presented: transactions with a high conflict probability are serialized so that the same conflict cannot occur again. The model and algorithms were evaluated on commonly used benchmark data structures, and the results demonstrate their correctness and effectiveness.
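As a rough illustration of the serialization idea described above (a minimal C++ sketch under assumed names and thresholds, not the paper's implementation), a contention manager can count a transaction's aborts as runtime feedback and divert it to a global queue once it has proven conflict-prone:

```cpp
// Hypothetical sketch, not code from the paper: a contention manager that
// records conflicts as runtime feedback and, once a transaction has aborted
// more than a threshold number of times, pushes it through a global
// serialization queue so the same conflict cannot keep recurring.
#include <atomic>
#include <functional>
#include <mutex>
#include <thread>

class ContentionManager {
    std::mutex serial_queue_;        // conflict-prone transactions run here, one at a time
    std::atomic<long> intensity_{0}; // global conflict counter (runtime feedback);
                                     // a fuller design would use it to adapt the threshold
public:
    static constexpr int kMaxOptimisticRetries = 3;  // illustrative threshold

    // 'tx' returns true on commit, false when it aborted due to a conflict.
    void run(const std::function<bool()>& tx) {
        for (int aborts = 0; aborts < kMaxOptimisticRetries; ++aborts) {
            if (tx()) return;                                   // committed optimistically
            intensity_.fetch_add(1, std::memory_order_relaxed); // record feedback
            std::this_thread::yield();
        }
        // Conflict probability is high: serialize with other hot transactions.
        std::lock_guard<std::mutex> lock(serial_queue_);
        while (!tx()) { /* retried serially; conflicts should now be rare */ }
    }
};
```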

2.
The Internet of Things (IoT) is an emerging technology paradigm where millions of sensors and actuators help monitor and manage physical, environmental, and human systems in real time. The inherent closed‐loop responsiveness and decision making of IoT applications make them ideal candidates for low‐latency, scalable stream processing platforms. Distributed stream processing systems (DSPS) hosted in cloud data centers are becoming the vital engine for real‐time data processing and analytics in any IoT software architecture. But the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT applications and data streams. Here, we propose RIoTBench, a real‐time IoT benchmark suite, along with performance metrics, to evaluate DSPS for streaming IoT applications. The benchmark includes 27 common IoT tasks classified across various functional categories and implemented as modular microbenchmarks. Further, we define four IoT application benchmarks composed from these tasks based on common patterns of data preprocessing, statistical summarization, and predictive analytics that are intrinsic to the closed‐loop IoT decision‐making life cycle. These are coupled with four stream workloads sourced from real IoT observations on smart cities and smart health, with peak stream rates that range from 500 to 10 000 messages/second from up to 3 million sensors. We validate the RIoTBench suite for the popular Apache Storm DSPS on the Microsoft Azure public cloud and present empirical observations. This suite can be used by DSPS researchers for performance analysis and resource scheduling, by IoT practitioners to evaluate DSPS platforms, and even reused within IoT solutions.
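For context only (an assumption-laden standalone C++ sketch, not part of RIoTBench, which is validated on Apache Storm): the statistical-summarization tasks such a suite measures are typically small per-message kernels, for example a sliding-window average over a sensor stream:

```cpp
// Illustrative stand-in for a statistical-summarization microbenchmark task:
// maintain a sliding-window average over an incoming stream of readings.
// All names and the window size are assumptions.
#include <cstddef>
#include <deque>
#include <iostream>

class SlidingAverage {
    std::deque<double> window_;
    std::size_t capacity_;
    double sum_ = 0.0;
public:
    explicit SlidingAverage(std::size_t capacity) : capacity_(capacity) {}

    // Process one message from the stream and return the current average.
    double push(double value) {
        window_.push_back(value);
        sum_ += value;
        if (window_.size() > capacity_) {
            sum_ -= window_.front();
            window_.pop_front();
        }
        return sum_ / window_.size();
    }
};

int main() {
    SlidingAverage avg(100);            // 100-message window
    for (int i = 0; i < 1000; ++i)      // stand-in for an incoming sensor stream
        avg.push(static_cast<double>(i % 50));
    std::cout << avg.push(42.0) << "\n";
}
```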

3.
To match the characteristics of cloud computing data centers built from clusters of computers, a distributed programming framework based on transactional memory is proposed. The framework encapsulates cloud computing tasks as transactions and automatically handles the scheduling and execution, load balancing, and failure recovery of all transactions; it encapsulates the data center's distributed data as transactional objects, guaranteeing the ACID properties when transactions access those objects. Compared with related work, it frees users from managing the program's parallelism and is simple to use. The framework has been implemented in a simulation environment, and experimental results show that it offers good scalability and fault tolerance.

4.
This paper introduces and tackles a special performance hazard in Hardware Transactional Memory (HTM): false abortion. False abortion causes many unnecessary transaction aborts in HTM and can greatly impact performance, making HTM far less useful when it is adopted as a fast path for Software Transactional Memory. By introducing a new memory allocator design, we are able to put objects that are likely to be accessed together from different threads into different cache lines, and thus avoid conflicts between hardware transactions in different threads. Experiments show that our method can reduce transaction aborts by 47% and achieve a speedup of up to 1.67× (22% on average), while consuming only 14% more memory, showing great potential to enhance current HTM technology.
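The core idea can be sketched in a few lines (assumed names and cache-line size; this is not the paper's allocator): if every separately allocated object is padded and aligned to its own cache line, objects used by different threads can never share a line, so hardware transactions in different threads cannot conflict on one:

```cpp
// A minimal sketch of the idea (assumed names, not the paper's allocator):
// give every separately allocated object its own cache line, so objects
// handed to different threads can never share a line and therefore cannot
// make hardware transactions in different threads conflict on that line.
// Requires C++17 for over-aligned 'new'.
#include <cstddef>
#include <utility>

constexpr std::size_t kCacheLine = 64;   // assumed cache-line size

template <typename T>
struct alignas(kCacheLine) LinePadded {
    T value;
    // alignas also rounds sizeof(LinePadded<T>) up to a multiple of kCacheLine,
    // so two distinct LinePadded objects never occupy the same cache line.
};

// Allocate an object that occupies its own cache line(s).
template <typename T, typename... Args>
LinePadded<T>* allocate_isolated(Args&&... args) {
    return new LinePadded<T>{ T(std::forward<Args>(args)...) };
}

// Example use: two counters updated by two different threads inside hardware
// transactions will no longer abort each other spuriously.
// auto* a = allocate_isolated<long>(0);
// auto* b = allocate_isolated<long>(0);
```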

5.
Research on transactional memory   (total citations: 1; self-citations: 0; citations by others: 1)
To study parallel programming on multi-core processor systems, research was carried out on the transactional memory model. Transactional memory is explained, implementation approaches for transactional memory systems are introduced, and the implementation of transactional memory is illustrated in detail through four transactional memory systems. Six key technologies that shape the development of transactional memory are discussed in depth, including the implementation approach, data-structure organization, concurrency control, conflict detection, and contention management. Finally, it is argued that transactional memory will develop toward combined hardware/software designs, higher performance, improved correctness, and meeting the needs of multi-core applications.

6.
Transactional memory is an alternative to locks for handling concurrency in multi-threaded environments. Instead of providing critical regions that only one thread can enter at a time, transactional memory records sufficient information to detect and correct conflicts if they occur. This paper surveys the range of options for implementing software transactional memory in Scala. Where possible, we provide references to implementations that instantiate each technique. As part of this survey, we document for the first time several techniques developed in the implementation of Manchester University Transactions for Scala. We order the implementation techniques on a scale moving from the least to the most invasive in terms of modifications to the compilation and runtime environment. This shows that, while the less invasive options are easier to implement and more common, they are more verbose and invasive in the code that uses them, often requiring changes to the syntax and program structure throughout the code.

7.
We investigate how transactional memory can be adapted for embedded systems. We consider energy consumption and complexity to be driving concerns in the design of these systems and therefore adapt simple hardware transactional memory (HTM) schemes in our architectural design. We propose several different cache structures and contention management schemes to support HTM and evaluate them in terms of energy, performance, and complexity. We find that ignoring energy considerations can lead to poor design choices, particularly for resource-constrained embedded platforms. We conclude that with the right balance of energy efficiency and simplicity, HTM will become an attractive choice for future embedded system designs.

8.
OpenMP is an emerging industry standard for shared memory architectures. While OpenMP has advantages in ease of use and incremental programming, message passing is still the most widely used programming model for distributed memory architectures today. How to effectively extend OpenMP to distributed memory architectures has been an active research topic. This paper proposes an OpenMP system, called KLCoMP, for distributed memory architectures. Based on the partially replicating shared arrays memory model, we propose ...
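For readers unfamiliar with the shared-memory starting point (plain standard OpenMP, not KLCoMP code), the incremental style referred to above looks like this: the loop remains valid sequential code if the pragma is ignored:

```cpp
// Plain OpenMP on shared memory, for context only (this is not KLCoMP code):
// the same loop stays sequential if the pragma is ignored, which is the
// "incremental programming" property the abstract refers to.
// Compile with: g++ -fopenmp example.cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double sum = 0.0;

    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];              // dot product, parallelized across threads

    std::printf("%f\n", sum);            // 2 * 2^20
}
```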

9.
Thread-level speculation (TLS) has been studied as a way to automatically parallelize portions of serial programs, and transactional memory (TM) has been studied as a promising alternative to locks for parallel programming owing to its simplicity. Both TLS and TM require similar underlying support. In this paper, we present SeTM (sequential transactional memory), a hardware-enhanced TM system that supports TLS at minor extra cost. Signatures are an effective way to buffer speculative state in TM and TLS, but their false positives in conflict detection cripple TM and TLS performance, especially for conflict-intensive TLS. SeTM uses R/W bits and signatures together to mitigate this effect. Additionally, SeTM introduces a fast rollback mechanism, which provides fast abort recovery for eager log-based HTM and TLS. The most important contribution of SeTM is its conflict-tolerant mechanism, which tolerates some ambiguous data conflicts in TLS. Finally, to execute these unordered transactions efficiently, we add an extra ordering mechanism to SeTM; with it, TM transactions also benefit from the conflict-tolerant mechanism. Our evaluation covers TM and TLS separately. For TLS, six representative benchmarks were used to evaluate the model. The experimental results show that our scheme improves the execution performance of most tested codes at a modest hardware cost. For a set of important scientific loops, we report a best speedup of 6.5 on 15 cores, and the results also show good scalability of the SeTM system. For TM, the STAMP benchmarks also gain notable performance improvements with respect to LogTM-SE.
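The false-positive problem with signatures can be seen in a small sketch (assumed signature width and hash functions; this is not SeTM's hardware): a hash-based signature can only answer "possibly accessed", so two transactions' signatures may appear to conflict on addresses neither one touched:

```cpp
// Sketch of a hash-based address signature of the kind the abstract refers to
// (not SeTM's actual hardware): membership tests can return "maybe present"
// for addresses never inserted, which is exactly the false-positive conflict
// detection problem described above. Sizes and hash functions are assumptions.
#include <bitset>
#include <cstdint>

class AddressSignature {
    std::bitset<1024> bits_;                       // assumed signature width
    static std::size_t h1(std::uintptr_t a) { return (a >> 3) % 1024; }
    static std::size_t h2(std::uintptr_t a) { return (a * 2654435761u) % 1024; }
public:
    void insert(const void* p) {
        auto a = reinterpret_cast<std::uintptr_t>(p);
        bits_.set(h1(a));
        bits_.set(h2(a));
    }
    // True means "possibly accessed"; it may be a false positive.
    bool maybe_contains(const void* p) const {
        auto a = reinterpret_cast<std::uintptr_t>(p);
        return bits_.test(h1(a)) && bits_.test(h2(a));
    }
    // Conflict check between two transactions' signatures (bitwise AND).
    bool may_conflict_with(const AddressSignature& other) const {
        return (bits_ & other.bits_).any();
    }
};
```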

10.
On-chip distributed memory has emerged as a promising memory organization for future many-core systems, since it efficiently exploits memory-level parallelism and can lighten the load on each memory module by providing a number of memory interfaces comparable to the number of on-chip cores. The packet-based memory access model (PDMA) provides a scalable and flexible solution for distributed memory management, but suffers from complicated and costly on-chip network protocol translation and massive interference among packets, which leads to unpredictable performance. In this paper we propose a direct distributed memory access (DDMA) model, in which remote memory can be directly accessed by local cores via remote-to-local virtualization, without network protocol translation. From the perspective of local cores, remote memory controllers (MCs) can be manipulated directly by accessing the local agent MC, which is responsible for accessing remote memory through high-performance inter-tile communication. We further discuss detailed architectural support for the DDMA model, including the memory interface design, the workflow, and the protocols involved. Simulation results from executing the PARSEC benchmarks show that our DDMA architecture outperforms PDMA in terms of both average memory access latency and IPC, by 17.8% and 16.6% respectively on average. Besides, DDMA can better manage congested memory traffic: a reduction of bandwidth when running memory-intensive SPEC2006 workloads incurs only an 18.9% performance penalty, compared with 38.3% for PDMA.

11.
It was shown in the paper of Solchenbach and Trottenberg (in this special issue) that grid algorithms are inherently parallel and that parallel grid algorithms for regular grids can be efficiently implemented on dm-mp systems using the concept of grid partitioning.

In this paper, we demonstrate that grid applications can be implemented quite easily on dm-mp systems if a hardware-independent process system exists and convenient tools (such as the SUPRENUM mapping and communications library) are available.

The evaluation of parallel grid algorithms shows that the multiprocessor speedup and efficiency for single-grid applications depend on the communication/calculation performance ratio of the hardware, on the communication/calculation ratio of the algorithms, and on the process size. The efficiency of parallel multigrid algorithms additionally depends on the number of nodes.
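As a rough rule of thumb (a standard back-of-the-envelope model, not a result from the paper), if each process spends t_calc per step on computation and t_comm on communication, the parallel efficiency is approximately

    E ≈ t_calc / (t_calc + t_comm)

so efficiency falls as the algorithm's communication/calculation ratio grows or as the hardware's communication performance lags behind its calculation performance, and it rises with larger per-process grid partitions, which increase t_calc relative to t_comm.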


12.
There are two distinct types of MIMD (Multiple Instruction, Multiple Data) computers: the shared memory machine, e.g. Butterfly, and the distributed memory machine, e.g. Hypercubes, Transputer arrays. Typically these utilize different programming models: the shared memory machine has monitors, semaphores and fetch-and-add; whereas the distributed memory machine uses message passing. Moreover there are two popular types of operating systems: a multi-tasking, asynchronous operating system and a crystalline, loosely synchronous operating system.

In this paper I first describe the Butterfly, Hypercube and Transputer array MIMD computers, and review monitors, semaphores, fetch-and-add and message passing; then I explain the two types of operating systems and give examples of how they are implemented on these MIMD computers. Next I discuss the advantages and disadvantages of shared memory machines with monitors, semaphores and fetch-and-add, compared to distributed memory machines using message passing, answering questions such as “is one model ‘easier’ to program than the other?” and “which is ‘more efficient’?”. One may think that a shared memory machine with monitors, semaphores and fetch-and-add is simpler to program and runs faster than a distributed memory machine using message passing, but we shall see that this is not necessarily the case. Finally I briefly discuss which type of operating system to use and on which type of computer. This of course depends on the algorithm one wishes to compute.
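As a concrete reference point for one of the primitives discussed above (an illustrative modern C++ equivalent, not code from the paper, which predates C++11 atomics), fetch-and-add lets many threads claim distinct work items from shared memory without locks:

```cpp
// Modern shared-memory equivalent of the fetch-and-add primitive discussed
// above (illustrative only). Each thread claims a unique index from a shared
// counter without locks, so no two threads receive the same slot.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> next_index{0};
    std::vector<int> claimed(8, -1);

    auto worker = [&](int tid) {
        int mine = next_index.fetch_add(1);   // atomic fetch-and-add
        claimed[mine] = tid;                  // each 'mine' is distinct
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t) threads.emplace_back(worker, t);
    for (auto& th : threads) th.join();

    for (int v : claimed) std::printf("%d ", v);
    std::printf("\n");
}
```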


13.
14.
Emerging byte-addressable non-volatile memory (NVM) technologies offer higher density and lower cost than DRAM, at the expense of lower performance and limited write endurance. There have been many studies on hybrid NVM/DRAM memory management in a single physical server. However, how to manage hybrid memories efficiently in a distributed environment remains an open problem. This paper proposes Alloy, a memory resource abstraction and data placement strategy for an RDMA-enabled distributed hybrid memory pool (DHMP). Alloy provides simple APIs for applications to utilize DRAM or NVM resources in the DHMP, without being aware of the hardware details of the DHMP. We propose a hotness-aware data placement scheme, which combines hot-data migration, data replication and write merging to improve application performance and reduce the cost of DRAM. We evaluate Alloy with several micro-benchmark workloads and public benchmark workloads. Experimental results show that Alloy can significantly reduce DRAM usage in the DHMP by up to 95%, while reducing the total memory access time by up to 57% compared with the state-of-the-art approaches.
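The hotness-aware placement idea can be sketched as follows (hypothetical names, thresholds and API; this is not Alloy's interface): objects start in NVM, accesses are counted, and objects that cross a hotness threshold are promoted to the scarcer DRAM tier:

```cpp
// Hypothetical sketch of hotness-aware placement (not Alloy's API): objects
// default to the dense, cheap NVM tier; per-object access counts are tracked;
// objects whose recent access count crosses a threshold are promoted to DRAM.
// Tier names, the threshold and the decay policy are assumptions.
#include <cstdint>
#include <unordered_map>

enum class Tier { NVM, DRAM };

struct Placement {
    Tier tier = Tier::NVM;     // new data defaults to NVM
    std::uint64_t accesses = 0;
};

class HotnessTracker {
    std::unordered_map<std::uint64_t, Placement> table_;  // object id -> placement
    std::uint64_t hot_threshold_;
public:
    explicit HotnessTracker(std::uint64_t hot_threshold) : hot_threshold_(hot_threshold) {}

    // Called on every access; returns the tier the object should live in now.
    Tier on_access(std::uint64_t object_id) {
        Placement& p = table_[object_id];
        ++p.accesses;
        if (p.tier == Tier::NVM && p.accesses >= hot_threshold_) {
            p.tier = Tier::DRAM;   // a real system would migrate the bytes here
        }
        return p.tier;
    }

    // Periodic decay so "hotness" reflects recent accesses, not all history.
    void decay() {
        for (auto& kv : table_) kv.second.accesses /= 2;
    }
};
```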

15.
Abstract. Often information systems (IS) are classified in three groups: (a) transactional, used mainly for co-ordination and resource allocation purposes at the operational level of a company; (b) tactical, often employed to support the resource procurement activities typical of middle management; and (c) information systems for strategic decision making, designed to help in the planning and strategy design processes which are the direct responsibility of top management. In general, the amount of care and management attention that companies give to these different types of systems is proportional to their position in this hierarchy: little attention is devoted to the mundane transaction-pushing systems and exquisite care is put into developing the sophisticated decision making aid for the CEO and his/her staff.
The IS/IT literature has quite commonly reported cases in which companies have attained or lost great competitive advantages by way of their transactional information systems [for example, Emery Worldwide, the Baxter Healthcare ASAP system, and Frontier Airlines]. The aim of this paper is to identify actions that companies can take to realize the potential benefits of their IS, in particular from their low-level, transactional IS.
Among other actions, we will conclude that companies would be better off if they: (a) have the IS department at the right place in the organization, staffed with people knowledgeable about the basic nature of the business in which the company is engaged; (b) are attentive to what can be called 'strategic maintenance' of systems; (c) set up a formal procedure for IS planning to ensure coherence between IS plans and business plans, derived, in turn, from business strategy; and (d) keep abreast of the relevant technology.
Several examples taken from European companies are used to illustrate these conclusions.

16.
This paper presents a query processing algorithm, formulated and developed in support of the prototype architecture of the Distributed Access View Integrated Database (DAVID), which is a heterogeneous distributed database management system. The objective of the proposed query processing algorithm is to produce an inexpensive strategy for a given query. The inexpensive query strategy is obtained primarily by computing the most profitable semi-joins and by determining the best sequence of join operations per processing site. The latter is obtained by applying a zero-one integer linear program that uses a non-parametric statistical estimation technique to compute the sizes of the temporary clusters. A cluster is a subset of the Cartesian product of a list of atomic and non-atomic domains and is the structure that can represent, in a uniform way, data stored in relational, hierarchical and network databases. Following some background information on the development of the DAVID prototype, this paper introduces the schema architecture, which describes the mechanism by which the component heterogeneous database schemata are mapped into the uniform global schema. This is followed by the formulation of the query processing algorithm, its implementation, and an illustration of its use in the context of NASA's Astrophysics Data System.

17.
The current trend in the development of parallel programming models is to combine different well-established models into a single programming model in order to support efficient implementation of a wide range of real-world applications. The dataflow model in particular has managed to recapture the interest of the research community due to its ability to express parallelism efficiently. Thus, a number of recently proposed hybrid parallel programming models combine dataflow and traditional shared memory models. Their findings have influenced the introduction of task dependencies in the OpenMP 4.0 standard. This article presents DaSH – the first comprehensive benchmark suite for hybrid dataflow and shared memory programming models. DaSH features 11 benchmarks, each representing one of the Berkeley dwarfs that capture patterns of communication and computation common to a wide range of emerging applications. DaSH also includes sequential and shared-memory implementations based on OpenMP and Intel TBB to facilitate easy comparison between hybrid dataflow implementations and traditional shared memory implementations based on work-sharing and/or tasks. Finally, we use DaSH to evaluate three different hybrid dataflow models, identify their advantages and shortcomings, and motivate further research on their characteristics.
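The OpenMP 4.0 task dependencies mentioned above look like this in a minimal example (standard OpenMP syntax, not DaSH code): the runtime derives a small dataflow graph from the depend clauses, so the consumer task runs only after both producers have finished:

```cpp
// Minimal use of OpenMP 4.0 task dependencies (standard OpenMP, not DaSH code):
// the runtime builds a small dataflow graph in which the consumer task only
// starts after both producer tasks have written x and y.
// Compile with: g++ -fopenmp tasks.cpp
#include <cstdio>

int main() {
    int x = 0, y = 0, z = 0;
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(out: x)
        x = 1;                                  // producer of x

        #pragma omp task depend(out: y)
        y = 2;                                  // producer of y

        #pragma omp task depend(in: x, y) depend(out: z)
        z = x + y;                              // consumer: runs after both producers

        #pragma omp taskwait
    }
    std::printf("%d\n", z);                     // prints 3
}
```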

18.
Paul Ashton, John Penny. Software, 1995, 25(10): 1117-1140
The paper describes in detail the implementation of INMON, a prototype monitor for recording and displaying interaction networks. The interaction network is designed to show the interrelationships between significant events that occur during the processing of an interaction on a loosely-coupled distributed system. By being directed at analysing what happens in an interaction, this approach is fundamentally different from other graphical representations that show what happens during the execution of a single program on a distributed system. Examples are given of interaction networks recorded by INMON. The approach is based on very general models of a distributed system and of an interaction, and could be widely applied. We conclude by summarizing what is needed to provide facilities within any operating system for recording interaction networks.

19.
Transactional Memory (TM) is a programmer-friendly alternative to traditional lock-based concurrency. Although it intends to simplify concurrent programming, the performance of applications still relies on how frequently they synchronize and the way they access shared data. These aspects must be taken into consideration if one intends to exploit the full potential of modern multicore platforms. Since these platforms feature complex memory hierarchies composed of different levels of cache, applications may suffer from memory latency and bandwidth problems if threads are not properly placed on cores. An interesting approach to efficiently exploiting the memory hierarchy is thread mapping. However, a single fixed thread mapping cannot deliver the best performance across a large range of transactional workloads, TM systems and platforms. In this article, we propose and implement in a TM system a set of adaptive thread mapping strategies for TM applications to tackle this problem. They range from simple strategies that do not require any prior knowledge to strategies based on Machine Learning techniques. Taking the Linux default strategy as the baseline, we achieved performance improvements of up to 64.4% on a set of synthetic applications and an overall performance improvement of up to 16.5% on the standard STAMP benchmark suite.
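The mechanism that any such thread-mapping strategy ultimately relies on under Linux is pinning threads to cores; a minimal sketch follows (only the pinning step, with an assumed fixed mapping; choosing the mapping is the article's actual contribution):

```cpp
// The low-level mechanism a thread-mapping strategy uses on Linux: pinning
// each thread to a chosen core with pthread_setaffinity_np on the
// std::thread's native handle. Only the pinning step is shown; the strategy
// that picks the mapping (scatter, compact, learned, ...) is not.
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

static void pin_to_core(std::thread& t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    if (pthread_setaffinity_np(t.native_handle(), sizeof(set), &set) != 0)
        std::perror("pthread_setaffinity_np");
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([] { /* transactional workload would run here */ });

    // Example of a fixed "compact" mapping: thread i -> core i.
    for (int i = 0; i < 4; ++i)
        pin_to_core(workers[i], i);

    for (auto& t : workers) t.join();
}
```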

20.
Shared memory is a simple yet powerful paradigm for structuring systems. Recently, there has been an interest in extending this paradigm to non-shared memory architectures as well. For example, the virtual address spaces for all objects in a distributed object-based system could be viewed as constituting a global distributed shared memory. We propose a set of primitives for managing distributed shared memory. We present an implementation of these primitives in the context of an object-based operating system as well as on top of Unix.
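One Unix building block that a user-level implementation of such primitives can rest on is a named shared segment created with shm_open and mapped with mmap; a hedged sketch with an assumed segment name and size (this is not the paper's primitive set):

```cpp
// Hedged illustration of a Unix foundation for distributed-shared-memory
// primitives: a named segment created with shm_open and mapped with mmap, so
// cooperating processes that map the same name see the same bytes. The
// segment name and size are assumptions; this is not the paper's API.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <cstring>

int main() {
    const char* name = "/dsm_demo_segment";   // hypothetical segment name
    const std::size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    if (ftruncate(fd, size) != 0) { std::perror("ftruncate"); return 1; }

    void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }

    // Any process that maps the same name sees this write.
    std::strcpy(static_cast<char*>(mem), "hello from the shared segment");

    munmap(mem, size);
    close(fd);
    shm_unlink(name);                          // remove the name when done
    return 0;
}
```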
