Similar Articles
 20 similar articles found (search time: 21 ms)
1.
On multicore processors, transactional memory is a promising synchronization mechanism that may replace locks. Software transactional memory requires no additional hardware support and can fully exploit the multithreading capability of current commercial multicore processors. This paper proposes VectorSTM, a software transactional memory algorithm that requires no atomic operations. VectorSTM uses distributed vector clocks to track the transaction execution of each thread, providing higher concurrency. Tests on the STAMP transactional memory benchmark suite show that VectorSTM has advantages in performance or semantics over the software transactional memory algorithms TL2 and RingSTM.
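The abstract above gives no algorithmic details, so the following is only a generic sketch of the vector-clock idea that VectorSTM builds on: each thread increments only its own clock component, so progress tracking requires no atomic read-modify-write operations. All names and the data layout here are illustrative, not taken from the paper.

```python
# Conceptual sketch of per-thread vector clocks for ordering transactions.
# Illustrates the general vector-clock technique only; VectorSTM's actual
# algorithm is described in the paper, not here.

class VectorClock:
    def __init__(self, n_threads):
        self.v = [0] * n_threads

    def tick(self, tid):
        # A thread advances only its own slot, so no atomic RMW is needed.
        self.v[tid] += 1

    def merge(self, other):
        # On observing another thread's state, take the element-wise maximum.
        self.v = [max(a, b) for a, b in zip(self.v, other.v)]

    def happened_before(self, other):
        # self -> other iff every component is <= and at least one is strictly <.
        return all(a <= b for a, b in zip(self.v, other.v)) and self.v != other.v

a, b = VectorClock(2), VectorClock(2)
a.tick(0)   # a = [1, 0]
b.merge(a)  # b observes a's update: [1, 0]
b.tick(1)   # b = [1, 1]
assert a.happened_before(b)
```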

2.
The development of distributed real-time embedded systems presents a significant practical challenge, both because of the complexity of distributed computation and because of the need to rapidly assess a wide variety of design alternatives in early stages, when requirements are often volatile. Formal methods can address some of these challenges but are often thought to require a greater initial investment and longer development cycles than is desirable for the development of noncritical systems in highly competitive markets. In this paper we propose an approach that takes advantage of formal modelling and analysis technology in a lightweight way, making significant use of readily available tools. We describe an incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and distribution. The approach is illustrated using a model of a home automation system. The models are expressed using the Vienna Development Method (VDM) and are validated primarily by scenario-based tests.

3.
This paper presents an extensive performance study of the implementation of Hardware Transactional Memory (HTM) in the Haswell generation of Intel x86 core processors. It evaluates the strengths and weaknesses of this new architecture by exploring several dimensions in the space of Transactional Memory (TM) application characteristics using the Eigenbench (Hong et al., 2010 [1]) and CLOMP-TM (Schindewolf et al., 2012 [2]) benchmarks. This paper also introduces a new tool, called htm-pBuilder, that tailors fallback policies and allows independent exploration of their parameters. This detailed performance study provides insights into the constraints imposed by Intel's Transactional Synchronization Extensions (TSX) and introduces a simple but efficient policy for guaranteeing forward progress on top of Intel's best-effort HTM, which was critical to achieving performance. The evaluation also shows that there are a number of potential improvements available to designers of TM applications and software systems that use Intel's TM, and provides recommendations for extracting maximum benefit from the current TM support available in Haswell.

4.
The Transactional Memory (TM) paradigm aims at simplifying the development of concurrent applications by means of the familiar abstraction of the atomic transaction. After a decade of intense research, hardware implementations of TM have recently entered the domain of mainstream computing thanks to Intel's decision to integrate TM support, codenamed RTM (Restricted Transactional Memory), in its latest generation of processors. In this work we shed light on a relevant issue with great impact on the performance of Intel's RTM: the correct tuning of the logic that regulates how to cope with failed hardware transactions. We show that the optimal tuning of this policy is strongly workload-dependent, and that the relative difference in performance among the various possible configurations can be remarkable (up to 10× slowdowns). We address this issue by introducing a simple and effective approach that identifies the optimal RTM configuration at run-time via lightweight reinforcement learning techniques. The proposed technique requires no off-line sampling of the application, and can be applied whether a single global lock or a software TM implementation is used as the fall-back synchronization mechanism. We propose and evaluate different designs for the proposed self-tuning mechanisms, which we integrated with GCC in order to achieve full transparency for programmers. Our experimental study, based on standard TM benchmarks, demonstrates average gains of 60% over any static approach while remaining within 5% of the performance of manually identified optimal configurations.
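As a rough illustration of run-time tuning of the fallback logic (not the paper's actual mechanism, which is integrated with GCC), an epsilon-greedy bandit over candidate retry budgets might look like the sketch below; the class name, budget values, and reward scheme are all assumptions.

```python
import random

# Hypothetical epsilon-greedy tuner for the "hardware retries before
# falling back to a lock" parameter. Rewards hardware commits; a fallback
# to the lock earns nothing, so budgets that commit in hardware win out.

class RetryTuner:
    def __init__(self, budgets=(1, 2, 4, 8), eps=0.1):
        self.budgets = budgets
        self.eps = eps
        self.reward = {b: 0.0 for b in budgets}  # running mean reward
        self.count = {b: 0 for b in budgets}

    def choose(self):
        # Explore with probability eps, otherwise exploit the best budget.
        if random.random() < self.eps:
            return random.choice(self.budgets)
        return max(self.budgets, key=lambda b: self.reward[b])

    def update(self, budget, committed_in_hw):
        # Incremental mean update of the chosen budget's reward.
        self.count[budget] += 1
        r = 1.0 if committed_in_hw else 0.0
        self.reward[budget] += (r - self.reward[budget]) / self.count[budget]

tuner = RetryTuner()
b = tuner.choose()
tuner.update(b, committed_in_hw=True)
```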

5.
This paper describes compiler techniques that can translate standard OpenMP applications into code for distributed computer systems. OpenMP has emerged as an important model and language extension for shared-memory parallel programming. However, despite OpenMP's success on these platforms, it is not currently used on distributed systems. The long-term goal of our project is to quantify the degree to which such use is possible and to develop supporting compiler techniques. Our present compiler techniques translate OpenMP programs into a form suitable for execution on a Software DSM system. We have implemented a compiler that performs this basic translation, and we have studied a number of hand optimizations that improve the baseline performance. Our approach complements related efforts that have proposed language extensions for efficient execution of OpenMP programs on distributed systems. Our results show that, while kernel benchmarks can show high efficiency of OpenMP programs on distributed systems, full applications need careful consideration of shared data access patterns. A naive translation (similar to OpenMP compilers for SMPs) leads to acceptable performance in very few applications. However, additional optimizations, including access privatization, selective touch, and dynamic scheduling, result in a 31% average improvement on our benchmarks.

6.
7.
The application of the tolerance paradigm to security - intrusion tolerance - has been attracting considerable attention in the dependability and security communities. In this paper we present a novel approach to intrusion tolerance. The idea is to use privileged components - generically designated wormholes - to support the execution of intrusion-tolerant protocols, often called Byzantine-resilient in the literature. The paper introduces the design of wormhole-aware intrusion-tolerant protocols using a classical distributed systems problem: consensus. The system where the consensus protocol runs is mostly asynchronous and can fail in an arbitrary way, except for the wormhole, which is secure and synchronous. Using the wormhole to execute a few critical steps, the protocol manages to have a low time complexity: in the best case, it runs in two rounds, even if some processes are malicious. The protocol also shows how partial synchrony assumptions, often considered purely theoretical, can be substantiated in practical distributed systems. The paper shows the significance of the TTCB as an engineering paradigm, since the protocol is simple compared with other protocols in the literature.
Published online: 29 October 2004. This work was partially supported by the EC, through project IST-1999-11583 (MAFTIA), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LASIGE) and projects POSI/1999/CHS/33996 (DEFEATS) and POSI/CHS/39815/2001 (COPE).

8.
The development of distributed systems is an intricate task due to the inherent characteristics of such systems. In this paper these characteristics are categorised into software engineering, concurrency, distribution and non-functional criteria. Popular classes of distributed systems are classified with respect to these challenges, and it is deduced that modern technological trends lead to the inception of new application classes with increased demands regarding challenges from more than one area. One recent example is the class of ubiquitous computing, which assumes dynamic scenarios in which devices come and go at any time. Furthermore, it is analysed to what extent today's prevailing software development paradigms – object, component, service and agent orientation – are conceptually capable of supporting the challenges. This comparison reveals that each of the paradigms has its own strengths and weaknesses and none addresses all of the challenges. The new active component approach is proposed, aiming at a conceptual integration of the existing paradigms in order to tackle all challenges in an intuitive and unified way. The structure, behaviour and composition of active components are explained, and an infrastructure for active components is introduced. To underline the usefulness of the approach, real-world applications are presented and an evaluation according to the challenges is given.

9.
Discrete event simulations are a powerful technique for modeling stochastic systems with multiple components where interactions between these components are governed by the probability distribution functions associated with them. Complex discrete event simulations are often computationally intensive with long completion times. This paper describes our solution to the problem of orchestrating the execution of a stochastic, discrete event simulation where computational hot spots evolve spatially over time. Our performance benchmarks report on our ability to balance computational loads in these settings. Copyright © 2013 John Wiley & Sons, Ltd.
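The core of any discrete event simulation is a timestamp-ordered event queue: events are popped in time order and may schedule further events. The following minimal sketch (a toy arrival process, not the paper's spatially evolving simulator) shows the basic loop; all names and distributions are illustrative.

```python
import heapq
import random

def simulate(horizon, seed=42):
    # A toy process with exponential inter-arrival times; events live in a
    # priority queue ordered by timestamp, as in any discrete event engine.
    rng = random.Random(seed)
    events = [(0.0, 'arrival')]
    served = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == 'arrival':
            served += 1
            # Handling an event schedules future events; a real simulator
            # would also push departures and component interactions here.
            heapq.heappush(events, (t + rng.expovariate(1.0), 'arrival'))
    return served

assert simulate(10.0) >= 1
```

Because the random stream is seeded, repeated runs are reproducible, which is the usual practice when benchmarking such simulators.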

10.
Recently, an ever-increasing number of e-Commerce tools have become available that help customers by generating targeted recommendations. Many of them are centralized, so they face problems of efficiency and scalability. A few are distributed, but in that case the complexity of the e-Commerce process imposes computation overhead on the client side, which is often unsuitable when customers use mobile devices. In this paper, we study how the software distribution in recommender systems affects their performance, depending on the characteristics of the e-Commerce population. To this end, we present a distributed testbed architecture for e-Commerce recommender systems that uses a multi-tiered agent-based approach to generate effective recommendations without requiring an onerous amount of computation per client. We use this testbed to study the main advantages and limitations associated with distributing the computation of recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

11.
An early understanding of the trust concerns that arise when composing a distributed system from independently developed software services saves time and effort. It also allows the developer of such distributed systems to reason about their trust-related properties. Although there are prevalent approaches for evaluating the trust of such systems, it is not clear which approach, if any, is the most comprehensive and best suited for a given situation. Moreover, there is no agreement on a unified approach for quantifying trust that can be applied to the entire software life-cycle of distributed systems. This article first motivates the need for such a quantification of trust via a case study from the domain of indoor tracking. It then provides a comprehensive survey of current approaches that define trust in general domains, and then focuses on the relevant approaches from the domain of software-oriented distributed systems. These prevalent efforts are categorized into groups using existing clustering tools and are further analyzed for their comprehensiveness. The analysis shows that: (1) many trust-related efforts and associated models have their own constrained views of trust; (2) different trust models focus on different aspects of trust and life-cycle details; and (3) it is difficult to interoperate across different trust models. Hence, the paper identifies a set of principles that can assist in quantifying and evaluating trust throughout the software life-cycle of distributed systems. These principles are then applied to the aforementioned case study to provide an outline of how trustworthy distributed systems can be composed from independent software services. Copyright © 2015 John Wiley & Sons, Ltd.

12.
The aims of this paper are twofold. First, it identifies the general requirements of future large-scale distributed virtual reality (VR) systems based on an analysis of current VR systems, of more general distributed systems platforms, and of the key issues of scale and heterogeneity. These requirements subsequently inform the development of a general VR reference architecture and a framework which identifies the key software components that will comprise future distributed VR systems. Second, it uses this reference architecture as a vehicle for conducting a broad review of current distributed VR products and research prototypes. The review covers twelve well-known VR systems and is intended as a general resource for researchers entering the field. These systems are: AVIARY, BrickNet, DIVE, dVS, MASSIVE, the MR Toolkit, NPSNET, Superscape, VEOS, VUE, WAVES and World Toolkit. The review also identifies relevant standards in the areas of computer graphics and distributed systems. The paper finishes by drawing out a number of more general conclusions from the review, including the urgent need for research into security and resource discovery for distributed VR systems.

13.
Building Large, Complex, Distributed Safety-Critical Operating Systems
Wedde, Horst F.; Lind, Jon A. Real-Time Systems, 1997, 13(3): 277-302
Safety-critical systems typically operate in unpredictable environments. Requirements for safety and reliability conflict with those for real-time responsiveness, and because environmental needs are unpredictable, there is no static trade-off between measures that accommodate the conflicting objectives. Instead, every feature or operating system service has to be adaptive. Finally, for any design problem there cannot be a closed-form (formal) approach that simultaneously handles (external) time constraints or deadlines and the synchronization requirements of distributed design, because these two aspects are causally independent. In this situation we worked out a heuristic, experimental, performance-driven and performance-based methodology that allows one to start, in an informed way, with a coarse system model and accurate logical expectations regarding its behavior. Through experiments these expectations are validated. If they stand the tests, extended expectations and model features are generated for refining the previous design as well as its performance criteria. The refinement is done in such a way that the previous experimental configurations are extreme model cases or data profiles which, both logically and experimentally, should reproduce the behavior of the previous modeling step. Novel performance aspects or tendencies can then be unambiguously attributed to the influence of the refined model features. We termed this methodology Incremental Experimentation. As a general methodology it relies on a principle of comparative performance studies rather than on realistic data for narrow application ranges. The paper describes how we applied a 5-step design and refinement procedure for developing, analyzing, and evaluating our distributed operating system MELODY, which exhibits novel services for supporting real-time and safety-critical applications in unpredictable environments. Experimental set-ups and theme-related findings are discussed in particular.

14.
Network management systems are large-scale distributed software systems, and scalability and flexibility are two major challenges facing today's network management technology. Software agents offer an excellent solution to these problems. This paper first identifies the problems in current network management and, on this basis, proposes a multi-agent-based network management architecture. To demonstrate its technical feasibility, an agent-based network traffic management prototype system was built. Evaluation of this system shows that software agents are well suited to building distributed network management systems, offering high real-time responsiveness, good extensibility, and strong autonomy; moreover, compared with traditional systems, such systems are more economical to build, modify, and upgrade.

15.
This paper proposes a partially distributed functional observer scheme for a class of interconnected linear systems with very strong, non-instantaneous subsystem interactions and with time delays in the local states and in the transmission of output information from the remote subsystems. A set of easily verifiable existence conditions is established, and upon its satisfaction, simple distributed observers are designed using a straightforward design procedure. Simulation results for a numerical example are given to substantiate the feasibility of the approach.
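For reference, the standard delay-free linear functional observer structure, which the paper extends with local state delays and delayed remote outputs, can be written as follows (symbols are the conventional ones, not the paper's notation):

```latex
% Plant, observer dynamics, and functional estimate (delay-free form).
\begin{aligned}
\dot{x}(t) &= A x(t) + B u(t), \qquad y(t) = C x(t),\\
\dot{z}(t) &= N z(t) + J y(t) + H u(t),\\
\hat{w}(t) &= D z(t) + E y(t),
\end{aligned}
```

where the estimate $\hat{w}(t)$ is required to converge asymptotically to a linear function $w(t) = F x(t)$ of the state; the existence conditions constrain $N$ to be Hurwitz and couple $N, J, H, D, E$ to $A, B, C, F$.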

16.
How should distributed systems preserve consistency in the presence of concurrency and failures? For systems designed as assemblies of independently developed components, concurrent access to data or data structures would normally arise within individual programs, and be controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems-often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony to interact with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

17.
Sustained growth is a fundamental characteristic of large-scale distributed systems. After studying the characteristics and patterns of large-scale distributed systems, this paper proposes a new concept, the growable distributed system, arguing that large-scale distributed systems grow continually in response to changes in requirements, environment, technology, investment, and other factors. The paper analyzes the strengths and weaknesses of distributed object technology in supporting the growability of distributed systems, then proposes an agent-oriented system framework, argues that agent-oriented methods are well suited to developing and maintaining growable distributed systems, and outlines the main directions for future research.

18.
This article presents a distributed adaptive protocol for consensus and synchronization in multiagent systems on directed communication networks. Agents are modeled as general linear time-invariant systems. The proposed protocol introduces a novel adaptation scheme that allows control coupling gains to decay to their reference values. This approach improves upon existing adaptive consensus protocols, which may produce overly large or even unbounded coupling gains. The protocol design does not rely on any centralized information and hence is distributed; the price to pay for this is the need to estimate those reference values. Convergence of the overall network dynamics is guaranteed for correctly estimated references; otherwise, the trajectory of the system is only uniformly ultimately bounded. Two estimation algorithms are proposed: one based on the interval-halving method and the other on a distributed estimation of Laplacian eigenvalues. Numerical simulations validate the proposed approach.
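To make the consensus objective concrete, here is a minimal fixed-gain, discrete-time consensus iteration for scalar agents on a directed ring. The article's setting (general linear agents, adaptive coupling gains, gain estimation) is substantially richer; the gain value and graph below are illustrative assumptions only.

```python
# Discrete-time consensus sketch for single-integrator agents on a
# directed graph: each agent moves toward the states it can observe.

def consensus_step(x, neighbors, gain=0.1):
    return [xi + gain * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Directed ring: each agent observes exactly one neighbor.
neighbors = {0: [1], 1: [2], 2: [0]}
x = [0.0, 5.0, 10.0]
for _ in range(500):
    x = consensus_step(x, neighbors)

# States agree to within numerical tolerance after enough iterations.
assert max(x) - min(x) < 1e-3
```

Because this particular update matrix is doubly stochastic, the agents also converge to the average of the initial states (5.0 here); on general directed graphs the consensus value depends on the graph's left eigenvector.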

19.
In this paper, a distributed output regulation approach is presented for the cooperative control of linear multi-agent systems in the presence of communication delays. Both dynamic state and output feedback control laws are designed for achieving the property of output regulation. Sufficient conditions for the existence of these control laws are provided in terms of linear matrix inequalities. Simulation results are given to support the efficiency of the proposed distributed output regulation approach.

20.
Transactional Memory (TM) is a programmer-friendly alternative to traditional lock-based concurrency. Although it intends to simplify concurrent programming, the performance of applications still depends on how frequently they synchronize and on the way they access shared data. These aspects must be taken into consideration to exploit the full potential of modern multicore platforms. Since these platforms feature complex memory hierarchies composed of different levels of cache, applications may suffer from memory latency and bandwidth problems if threads are not properly placed on cores. An interesting approach to exploiting the memory hierarchy efficiently is thread mapping. However, no single fixed thread mapping can deliver the best performance across a large range of transactional workloads, TM systems and platforms. In this article, we propose and implement in a TM system a set of adaptive thread mapping strategies for TM applications to tackle this problem. They range from simple strategies that require no prior knowledge to strategies based on Machine Learning techniques. Taking the Linux default strategy as baseline, we achieved performance improvements of up to 64.4% on a set of synthetic applications and an overall performance improvement of up to 16.5% on the standard STAMP benchmark suite.
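A minimal caricature of measurement-driven strategy selection (run a short probe under each candidate mapping and keep the fastest) is sketched below; the strategy names and the probe workload are illustrative assumptions, not taken from the article's TM system, whose adaptive and ML-based strategies are far more sophisticated.

```python
import time

def probe(strategy):
    # Stand-in for a short transactional workload executed under
    # `strategy`; a real system would pin threads to cores accordingly
    # (e.g. compact vs. scattered placement) before timing the run.
    t0 = time.perf_counter()
    sum(i * i for i in range(10_000))
    return time.perf_counter() - t0

# Candidate mappings; pick whichever probes fastest on this machine.
candidates = ['compact', 'scatter', 'round-robin']
best = min(candidates, key=probe)
```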
