Similar Documents
20 similar documents found.
1.
This paper presents a comparative study of concurrency control algorithms for distributed databases on computer clusters with high-availability and high-performance requirements. For this purpose, we have analyzed concurrency control algorithms used in commercial DBMSs: the pessimistic locking algorithm, which checks for transaction conflicts early, during the execution phase, and the optimistic algorithm, which checks for conflicts only after the execution phase. A new algorithm is proposed and implemented in a simulation program. The three algorithms were tested using different configurations. Simulation results showed that the locking algorithm performed better than the optimistic method in the presence of conflicts between transactions, while the optimistic algorithm provided better results in the absence of conflicts. Furthermore, in a distributed database with a non-negligible probability of conflicts, the locking algorithm can be used to guarantee strong consistency and an acceptable level of performance; if this probability is negligible, system performance can be improved by using the optimistic algorithm. The proposed algorithm offers improved performance in numerous cases. As a result, it can be used in a distributed database to guarantee a satisfactory level of performance in the presence of conflicts.
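To make the two verification points concrete, here is a minimal Python sketch (our own illustration, not the paper's algorithm or its simulation program): the pessimistic scheme blocks on a per-item lock during execution, while the optimistic scheme runs lock-free and validates its read set at commit time. All names (`LockManager`, `optimistic_validate`) are hypothetical.

```python
import threading

class LockManager:
    """Pessimistic scheme: a transaction acquires a lock on each item
    before accessing it, so conflicts are detected during execution."""
    def __init__(self):
        self._locks = {}            # item -> threading.Lock
        self._guard = threading.Lock()

    def acquire(self, item):
        with self._guard:
            lock = self._locks.setdefault(item, threading.Lock())
        lock.acquire()              # blocks if a conflicting transaction holds it

    def release(self, item):
        self._locks[item].release()

def optimistic_validate(read_set, snapshot_versions, current_versions):
    """Optimistic scheme: run without locks, then check at commit time
    that no item read by the transaction was modified concurrently."""
    return all(snapshot_versions[i] == current_versions[i] for i in read_set)
```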

2.
The computing power provided by high-performance, low-cost PC-based clusters with Grid platforms is attractive, equaling or exceeding that of supercomputers and mainframes. In this paper, we present the implementation and design rationale of the Visuel toolkit for MPI parallel program performance measurement and analysis in cluster and grid environments. Most performance visualization tools available today for high-performance platforms show only system performance data (e.g., CPU load, memory usage, network bandwidth, server average load) and are thus suited to visualizing computing-system activity. The Visuel toolkit (from the French word visuel) is a web-based interface designed to show the performance activity of all computing nodes of a distributed environment involved in the execution of an MPI parallel program, such as the CPU load level and memory usage of each computing node. In addition, the toolkit can display comparative performance charts of the MPI parallel applications and multiple executions under investigation. Experience with the toolkit shows that it eases the process of investigating parallel applications.
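As an illustration of the kind of per-node sampling such a toolkit performs, the following hedged sketch uses `mpi4py` and `psutil` as stand-ins for the paper's own instrumentation; Visuel's actual data collection and web interface are not shown.

```python
from mpi4py import MPI
import psutil

comm = MPI.COMM_WORLD

def sample_node():
    """Collect CPU load and memory usage on the local computing node."""
    return {
        "node": MPI.Get_processor_name(),
        "rank": comm.Get_rank(),
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "mem_percent": psutil.virtual_memory().percent,
    }

# Rank 0 gathers one sample per process; a web front end could poll these.
samples = comm.gather(sample_node(), root=0)
if comm.Get_rank() == 0:
    for s in samples:
        print(s)
```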

3.
High-performance storage systems are evolving towards decentralized commodity clusters that can scale in capacity, processing power, and network throughput. Building such systems requires: (a) sharing physical resources among applications; (b) sharing data among applications; and (c) allowing customized data views. Current solutions typically satisfy the first two requirements through a cluster file system, resulting in monolithic, hard-to-manage systems. In this paper we present a storage system that addresses all three requirements by extending the block layer below the file system. First, we discuss how our system provides customized (virtualized) storage views within a single node. Then, we discuss how it scales in clustered setups. To achieve efficient resource and data sharing, we support block-level allocation and locking as in-band mechanisms. We implement a prototype under Linux and use it to build a shared cluster file system. Our evaluation on a 24-node cluster shows that our approach offers flexibility, scalability, and reduced effort to implement new functionality.
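The following toy sketch (our construction; the paper's system is a Linux block-layer extension, not Python) illustrates the two in-band mechanisms named above, block-level allocation and block-level locking, as they might look behind a single node's API.

```python
import threading

class BlockLayer:
    """Toy model of block-level allocation and locking, the two mechanisms
    the paper pushes below the file system (all names here are ours)."""
    def __init__(self, num_blocks):
        self.free = set(range(num_blocks))
        self.locks = {}                      # block -> owner node id
        self.meta = threading.Lock()

    def allocate(self, n):
        """Allocate n blocks; in the real system this happens in-band,
        as part of the I/O path, rather than via a central server."""
        with self.meta:
            if len(self.free) < n:
                raise RuntimeError("out of space")
            return [self.free.pop() for _ in range(n)]

    def lock(self, block, node):
        """Grant the block lock if free or already held by this node."""
        with self.meta:
            if self.locks.get(block, node) != node:
                return False                 # held by another node
            self.locks[block] = node
            return True
```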

4.
High electricity consumption, associated with running Internet-scale server farms, not only contributes to a data center's greenhouse gas emissions but also increases the cost of running the data center itself. In this paper, we consider the problem of maximizing the revenue of service providers running large-scale data centers, subject to setup costs, by reducing their electricity bill, while accounting for the fact that clients consuming the offered services have finite, non-deterministic patience. As a solution, we present and evaluate the performance of allocation policies which, in the context of both one- and two-tiered systems, dynamically switch servers on and off according to changes in user demand. The algorithms we present aim at maximizing the users' experience while minimizing the amount of electricity required to run the IT infrastructure despite non-stationary traffic that cannot be predicted with absolute accuracy. The results of several experiments are presented, showing that the proposed schemes perform well under different traffic conditions.
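A minimal sketch of a dynamic switch-on/switch-off rule in the spirit of the proposed policies, assuming a utilization-based sizing rule and hysteresis to account for setup cost; the function names and parameters are ours, not the paper's.

```python
import math

def servers_needed(arrival_rate, service_rate, target_util=0.7):
    """Keep just enough servers on to hold utilization near a target;
    the rest become candidates for switch-off."""
    return max(math.ceil(arrival_rate / (service_rate * target_util)), 1)

def adjust(active, demand, mu, down_hysteresis=2):
    """Switch on immediately when demand grows; switch off slowly
    (hysteresis) so setup costs are not paid on every fluctuation."""
    need = servers_needed(demand, mu)
    if need > active:
        return need                      # scale up at once
    if active - need >= down_hysteresis:
        return active - 1                # scale down one server at a time
    return active
```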

5.
In a typical distributed computing system (DCS), nodes consist of processing elements, memory units, shared resources, data files, and programs. For a distributed application, programs and data files are distributed among many processing elements that may exchange data and control information via communication links. The reliability of a DCS can be expressed through the analysis of distributed program reliability (DPR) and distributed system reliability (DSR). In this paper, two reliability measures, Markov-chain distributed program reliability (MDPR) and Markov-chain distributed system reliability (MDSR), are introduced to accurately model the reliability of a DCS. A discrete-time Markov chain with one absorbing state is constructed for this problem. The transition probability matrix represents the probability of moving from one state to another in one time unit. In addition to a mathematical method to evaluate MDPR and MDSR, simulation results are presented to verify their correctness.
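The absorbing-chain machinery behind MDPR/MDSR can be illustrated with a standard result: for the transient submatrix Q of the transition matrix, the fundamental matrix N = (I − Q)⁻¹ gives expected times to absorption. The numbers below are illustrative, not the paper's.

```python
import numpy as np

# States 0..2 are working configurations; state 3 is the absorbing state
# (program/system has failed or finished). Rows sum to 1.
P = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.00, 0.85, 0.10, 0.05],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

Q = P[:3, :3]                       # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
expected_steps = N.sum(axis=1)      # mean time units until absorption
print(expected_steps)               # one value per starting state
```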

6.
In the computer science community there is growing interest in the field of Ambient Intelligent Systems. These systems surround their human users with computing and networking technology unobtrusively embedded in their environment. This technology aims to provide users with useful information and to take action to make the environment more convenient for them. As the number of users increases, the resources that make Ambient Intelligence possible can easily become saturated, making the system unstable and projecting an image of poor QoS to the users. The main goal of this paper is to provide the means for Ambient Intelligent Systems to monitor themselves and take corrective action automatically if performance starts to drop. Our approach uses a Performance Ontology that structures the knowledge about Software Performance Engineering, and a reasoning engine that acts like an expert system with the Performance Ontology as its foundation. The case study at the end shows the applicability of the developed techniques.
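As a loose, hedged illustration of the reasoning-engine idea (the paper uses a Performance Ontology and an expert-system engine, which the flat rule list below greatly simplifies), facts about the running system are matched against performance rules, and matching rules propose corrective actions. All names are ours.

```python
# Current observations about the ambient system (hypothetical metrics).
FACTS = {"cpu_load": 0.95, "response_time_ms": 900, "replicas": 2}

# Each rule pairs a condition over the facts with a corrective action.
RULES = [
    (lambda f: f["cpu_load"] > 0.9 and f["response_time_ms"] > 500,
     "scale out: add a replica to absorb ambient-service load"),
    (lambda f: f["cpu_load"] < 0.2 and f["replicas"] > 1,
     "scale in: release an under-used node"),
]

def reason(facts):
    """Return the actions whose conditions hold for the given facts."""
    return [action for cond, action in RULES if cond(facts)]

print(reason(FACTS))
```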

7.
Data acquisition systems are mission-critical components in high-energy physics experiments. They are embedded in an environment of custom electronics and are frequently characterized by high performance requirements. With the advent of powerful mainstream computing platforms and affordable high-speed networking equipment, system cost and time to completion can be significantly reduced. Still, considerable custom software development is required to build these systems and make them run efficiently. We therefore strive for a software architecture flexible and robust enough to be usable in different system configurations and deployment cases. The software should cover the largest possible application domain and provide a practical balance between efficiency and flexibility. This article pinpoints the requirements imposed on such an on-line software infrastructure and sheds light on a viable design approach. As such, it aims at laying the foundations for a broader understanding of the importance of fostering a homogeneous architecture for high-energy physics data acquisition.

8.
This paper describes an approach to carrying out performance analysis of parallel embedded applications. The approach is based on measurement but, in addition, introduces the idea of driving the measurement process (application instrumentation and monitoring) with a behavioral model. Using this model, highly comprehensible performance information can be collected. The whole approach rests on this behavioral model, one instrumentation method, and two tools: one for monitoring and the other for visualization and analysis. Each of these is briefly described, and the steps to carry out performance analysis using them are clearly defined and explained by means of a case study. Finally, a method to evaluate the intrusiveness of the monitoring approach is proposed, and the intrusiveness results for the case study are presented.
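A hedged sketch of model-driven instrumentation: the behavioral model names the events worth measuring, and only those functions receive probes, which keeps intrusiveness low. The decorator and event names are ours, not the paper's instrumentation method.

```python
import functools
import time

BEHAVIORAL_MODEL = {"fft_stage", "exchange_halo"}   # events of interest
TRACE = []                                          # collected measurements

def probe(event):
    """Attach a timing probe only if the behavioral model names the event."""
    def wrap(fn):
        if event not in BEHAVIORAL_MODEL:
            return fn                   # not in the model: zero overhead
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            out = fn(*args, **kwargs)
            TRACE.append((event, time.perf_counter() - t0))
            return out
        return inner
    return wrap
```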

9.
Distributed mutual exclusion algorithms have mainly been compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests; indeed, they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Shortest-Job-First, Head-Of-Line, and Shortest-Remaining-Job-First, can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines are inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free, and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages than Maekawa-type algorithms per critical section execution.

Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 and the Ph.D. degree in Computer Science in 1994, both from the University of Rome "La Sapienza". Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols, and real-time multimedia applications.

Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome "La Sapienza". From 1983 to 1991 he was a researcher at the University of Rome "Tor Vergata". He is currently a full professor in Computer Science at the University of Rome "La Sapienza". His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Transactions on Computers, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering, and IEEE Transactions on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press.

This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
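Returning to the gated batch mechanism described in the abstract above, here is a minimal single-process sketch (our construction; the real algorithm is distributed and needs inter-process synchronization): requests arriving while a batch is being served wait behind the gate, so a priority discipline such as Shortest-Job-First can be applied inside a batch without starving long jobs across batches.

```python
import heapq

class GatedBatchQueue:
    """Sketch of the gated-batch idea for starvation-free priorities."""
    def __init__(self):
        self.waiting = []                 # requests arrived after the gate

    def submit(self, job_len, req_id):
        """Register a critical-section request with its (estimated) length."""
        self.waiting.append((job_len, req_id))

    def open_gate(self):
        """Close the gate on the current arrivals and serve them by SJF;
        later arrivals must wait for the next batch, so none starve."""
        batch, self.waiting = self.waiting, []
        heapq.heapify(batch)              # orders the batch by job length
        while batch:
            _, req_id = heapq.heappop(batch)
            yield req_id                  # grant the critical section
```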

10.
11.
Sheng Wan, Automatica, 2002, 38(1):33-46
The proper measure of closed-loop performance variation in the presence of model-plant mismatch is discussed in this paper. A generalized closed-loop error transfer function, which is a special representation of the dual Youla parameter and has a close relationship with the pointwise ν-gap metric, is proposed as a suitable means of representing closed-loop performance variation under plant perturbation, and the closed-loop performance variation measure is accordingly defined as its maximum singular value, frequency by frequency. It is shown that this measure is essential and informative in characterizing closed-loop performance variation. The measure is also shown to be readily applicable to on-line closed-loop performance assessment or monitoring, even without an explicit model of the plant. Its variant, the η-function, which captures the relative performance variation as well as the generalized stability margin variation with respect to the nominal plant, is also discussed.
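Numerically, such a measure can be evaluated as below, assuming samples of the generalized closed-loop error transfer function on a frequency grid; the random data stands in for a real identification experiment.

```python
import numpy as np

def variation_measure(E_freq):
    """Given samples E(e^{jw_k}) of the closed-loop error transfer
    function as an array of matrices, return the maximum singular
    value frequency by frequency (the performance-variation measure)."""
    return np.array([np.linalg.svd(E, compute_uv=False)[0] for E in E_freq])

# Example: a 2x2 error transfer function evaluated on a 64-point grid.
rng = np.random.default_rng(0)
E_samples = (rng.standard_normal((64, 2, 2))
             + 1j * rng.standard_normal((64, 2, 2)))
print(variation_measure(E_samples).max())   # worst case over frequency
```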

12.
This paper describes an accurate and efficient method to model and predict the performance of distributed/parallel systems. Various performance measures, such as the expected user response time, the system throughput, and the average server utilization, can be easily estimated using this method. The methodology is based on known product-form queueing network methods, with some additional approximations. The method is illustrated by evaluating the performance of a multi-client, multi-server distributed system. A system model is constructed and mapped to a probabilistic queueing network model, which is used to predict its behavior. The effects of user think time and various design parameters on the performance of the system are investigated by both the analytical method and computer simulation, and the accuracy of the former is verified. The methodology is applied to identify the bottleneck server and to establish a proper balance between clients and servers in distributed/parallel systems.
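A standard exact Mean Value Analysis loop illustrates the product-form machinery such a methodology builds on (the parameters are illustrative, and the paper adds its own approximations on top of this class of model):

```python
def mva(visit, service, num_users, think_time):
    """Exact MVA for a closed product-form network with queueing stations.
    visit[k]: visit ratio of station k; service[k]: service time per visit."""
    K = len(visit)
    q = [0.0] * K                                  # mean queue lengths
    for n in range(1, num_users + 1):
        r = [visit[k] * service[k] * (1 + q[k]) for k in range(K)]
        R = sum(r)                                 # response time per cycle
        X = n / (think_time + R)                   # system throughput
        q = [X * r[k] for k in range(K)]
    return X, R

X, R = mva(visit=[1.0, 2.0], service=[0.05, 0.02],
           num_users=50, think_time=5.0)
print(f"throughput={X:.2f}/s, response time={R:.3f}s")
```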

13.
Three-dimensional simulation of discrete particle systems is performed by discrete element method (DEM) software on the gLite-based BalticGrid infrastructure. The performance of a parallel algorithm in which particles are exchanged between processors is investigated using a number of benchmarks. Polydispersed particle systems are visualized by a novel grid e-service, VizLitG, designed for convenient access and interactive visualization of remote data files located on the grid. Partial dataset transfer from the storage element is implemented in the visualization e-service. Efficiency tests of VizLitG are performed on datasets of different sizes. Two granular problems, associated with triaxial compaction and hopper discharge, are solved.

14.
With Moore's law supplying billions of transistors on-chip, embedded systems are undergoing a transition from single-core to multi-core to exploit this high transistor density for high performance. However, the optimal layout of these multiple cores along with the memory subsystem (caches and main memory) to satisfy power, area, and stringent real-time constraints is a challenging design endeavor. The short time-to-market constraint of embedded systems exacerbates this design challenge and necessitates architectural modeling to reduce time-to-market by expediting the mapping of target applications to devices/architectures. In this paper, we present a queueing theoretic approach for modeling multi-core embedded systems that provides quick and inexpensive performance evaluation, in terms of both time and resources, as compared to developing multi-core simulators and running benchmarks on them. We verify our queueing theoretic modeling approach by running SPLASH-2 benchmarks on the SuperESCalar simulator (SESC). Results reveal that our queueing theoretic model evaluates multi-core architectures accurately, with an average difference of 5.6% compared to the architectures' evaluations from the SESC simulator. Our modeling approach can be used for performance-per-watt and performance-per-unit-area characterizations of multi-core embedded architectures, with varying numbers of processor cores and cache configurations, to provide a comparative analysis.
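As a hedged, much-simplified illustration of this modeling style (the paper's queueing model of cores and the memory subsystem is richer), one can treat a core as an M/M/1 queue and attach a linear power model to obtain a performance-per-watt figure; all numbers and names below are ours.

```python
def core_response_time(lam, mu):
    """M/M/1 sketch of one core: lam = request arrival rate,
    mu = service rate; returns the mean response time."""
    assert lam < mu, "core saturated"
    return 1.0 / (mu - lam)

def perf_per_watt(throughput, idle_w, busy_w, util):
    """Toy performance-per-watt figure for one configuration;
    the linear-in-utilization power model is our assumption."""
    power = idle_w + (busy_w - idle_w) * util
    return throughput / power

lam, mu, cores = 800.0, 1000.0, 4
per_core = lam / cores
print(core_response_time(per_core, mu),
      perf_per_watt(lam, idle_w=10.0, busy_w=35.0, util=per_core / mu))
```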

15.
16.
In this paper, computation and communication performance is evaluated for single- and multi-transputer arrays. Performance models are proposed for Occam program execution under the Transputer Development System TDS2. The performance features of normalised arithmetic, concurrent floating-point and integer arithmetic, logarithmic array indexing, and on-chip/off-chip RAM are studied. The startup time, byte transfer rate, asymptotic link bandwidth, and half-performance message length are estimated for simultaneous operation of one, two, three, and four links at 10/20 MHz clock rates in unidirectional/bidirectional modes. The impact of various performance maximisation techniques on execution time is also addressed.

The matrix factorisation algorithms for dense linear systems are chosen as the focus for this study. The implementations include the LUD, Householder, Gauss-Jordan, Choleski, and Givens methods. The floating-point operation count alone is inadequate to estimate computation time; many other factors, such as array indexing, load/store overhead, and loop overhead, play a significant role in transputer performance for dense linear systems. The reduction in array-indexing overheads in multi-transputer arrays may result in superlinear speedups.
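The link metrics named above fit the classic two-parameter model t(n) = t0 + n/r_inf, from which the half-performance message length n_1/2 = t0 * r_inf follows (the effective bandwidth n/t(n) reaches half of r_inf at that length); the sketch below uses illustrative, not measured, values.

```python
def transfer_time(n_bytes, startup_s, rate_bytes_per_s):
    """Two-parameter link model: t(n) = t0 + n / r_inf."""
    return startup_s + n_bytes / rate_bytes_per_s

def half_performance_length(startup_s, rate_bytes_per_s):
    """Message length at which half the asymptotic bandwidth is reached."""
    return startup_s * rate_bytes_per_s      # n_1/2 = t0 * r_inf

# Illustrative numbers for a single link (not measured values):
t0, r = 5e-6, 1.7e6
print(transfer_time(1024, t0, r), half_performance_length(t0, r))
```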


17.
The goal of the Network Weather Service is to provide accurate forecasts of dynamically changing performance characteristics from a distributed set of metacomputing resources. Providing a ubiquitous service that can both track dynamic performance changes and remain stable in spite of them requires adaptive programming techniques, an architectural design that supports extensibility, and internal abstractions that can be implemented efficiently and portably. In this paper, we describe the current implementation of the NWS for Unix and TCP/IP sockets and provide examples of its performance monitoring and forecasting capabilities.
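A minimal sketch of an adaptive strategy in this spirit: run several cheap predictors side by side and forecast with whichever has accumulated the lowest error so far. The particular predictor set below is ours, not necessarily the one NWS uses.

```python
import statistics

PREDICTORS = {
    "last":   lambda h: h[-1],
    "mean":   lambda h: statistics.fmean(h),
    "median": lambda h: statistics.median(h),
}

def forecast(history, errors):
    """Forecast with the predictor that has the lowest cumulative error."""
    best = min(PREDICTORS, key=lambda name: errors[name])
    return PREDICTORS[best](history), best

def update_errors(history, errors):
    """Score each predictor on how well it would have predicted the
    newest measurement (history must hold at least two samples)."""
    *past, actual = history
    for name, predict in PREDICTORS.items():
        errors[name] += abs(predict(past) - actual)
```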

18.
ISO 11898 (CAN) networks suffer from serious limitations on both the maximum network extension and the allowable bit rate because of the particular arbitration technique they adopt.

In this paper, a modification of the basic ISO 11898 protocol is described which combines content compression and data gathering techniques to provide high-performance services for periodic data exchanges and message transmissions.

The new protocol yields, on average, a tenfold performance improvement over conventional CAN without worsening either the responsiveness or the bus length, and it also features a satisfactory degree of compatibility with the original standard protocol.
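The data-gathering side of the idea can be sketched as follows: several short periodic values that conventional CAN would carry in separate frames are packed into one 8-byte payload, amortizing arbitration and header overhead. The field layout is our assumption, not the protocol's specification.

```python
import struct

def gather_frame(signals):
    """Pack four periodic 16-bit values into one 8-byte CAN payload,
    instead of sending four separate frames (layout is hypothetical)."""
    assert len(signals) == 4 and all(0 <= s < 2**16 for s in signals)
    return struct.pack(">4H", *signals)       # exactly 8 bytes, one frame

payload = gather_frame([1200, 850, 3, 40000])
print(len(payload))                           # 8: fits a single CAN frame
```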


19.
In transactional systems, quality-of-service objectives are often specified by Service Level Objectives (SLOs) that stipulate a response time to be achieved for a given percentile of the transactions. Usually, there are different client classes with different SLOs. In this paper, we extend a technique that enforces the fulfilment of the SLOs using admission control. The admission control of new user sessions is based on a response-time model. The technique proposed in this paper dynamically adapts the model to changes in workload characteristics and system configuration, so that the system can work autonomically, without human intervention. The technique requires no knowledge about the internals of the system; thus, it is easy to use and can be applied to many systems. Its utility is demonstrated by a set of experiments on a system that implements the TPC-App benchmark. The experiments show that the model adaptation works correctly in very different situations, including large and small changes in response times, increasing and decreasing response times, and different patterns of workload injection. In all these scenarios, the technique updates the model progressively until it adjusts to the new situation, and in intermediate situations the model never exhibits abnormal behaviour that could lead to a failure in the admission-control component.
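A hedged sketch of model-based admission control in this setting: a new session is admitted only if the response-time model predicts that the class percentile still meets its SLO afterwards. The toy static model below stands in for the paper's dynamically adapted one; all names are ours.

```python
def admit_session(cls, active_sessions, model, slo):
    """Admit a new session of class `cls` only if the model predicts the
    class's percentile response time stays within its SLO afterwards."""
    predicted = model(cls, active_sessions + 1)
    return predicted <= slo[cls]

# Toy linear response-time model; the real technique recalibrates it
# online as workload and configuration change.
model = lambda cls, n: {"gold": 0.004, "silver": 0.006}[cls] * n  # seconds
slo = {"gold": 0.5, "silver": 1.0}     # percentile response-time targets
print(admit_session("gold", 100, model, slo))
```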

20.
Computer systems managers make decisions about hardware and software selection, performance evaluation, capacity planning, and other resource variables on the basis of factual data, accounting data, subjective judgements, and assumptions about the resource consumption of the jobs being run. The importance of computer resource planning calls for effective support methods. A Knowledge-Based DSS (KBDSS) can assist managers in making these policy decisions by utilizing knowledge of the existing configuration and its capabilities, the organizational computing environment, available external resources, and their suppliers. Combining procedural and declarative methods, such a KBDSS may provide early warning of possible bottlenecks, forecast growth in hardware usage, and employ knowledge-based inferencing to suggest suitable remedial actions to the systems manager. This paper presents a KBDSS for supporting computer resource planning decisions using a procedural/declarative framework, and illustrates how the system is used.
