20 similar documents found; search time: 15 ms
1.
2.
John A. Chandy 《The Journal of Supercomputing》2008,46(2):108-123
RAID has long been established as an effective way to provide highly reliable, high-performance disk subsystems. However, reliability in RAID systems comes at the cost of extra disks. In this paper, we describe a mechanism, which we have termed RAID0.5, that enables striped disks with very high data reliability at low disk cost. We take advantage of the fact that most disk systems use offline backup systems for disaster recovery. With these offline backup systems in place, the disk system needs to replicate only the data written since the last backup, thus drastically reducing the storage space requirement. Though RAID0.5 has the same data loss characteristics as traditional mirroring, the lower storage cost comes at the price of lower availability. RAID0.5 is thus a tradeoff between disk cost and availability while still preserving very high data reliability. We present analytical reliability models and experimental results that demonstrate the enhanced reliability and performance of the proposed RAID0.5 system.
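The core idea above, mirroring only the writes made since the last offline backup, can be sketched as a toy model. All class and method names here are illustrative, not taken from the paper:

```python
class Raid05Volume:
    """Toy sketch of the RAID0.5 idea: data is striped normally, but
    only blocks written since the last offline backup are replicated,
    so the replica space stays small."""

    def __init__(self):
        self.primary = {}       # block -> data (the striped array)
        self.delta_mirror = {}  # replicas of post-backup writes only

    def write(self, block, data):
        self.primary[block] = data
        self.delta_mirror[block] = data   # replicate just the delta

    def backup_complete(self):
        # The offline backup now holds everything written so far,
        # so the delta mirror can be discarded, reclaiming space.
        self.delta_mirror.clear()

    def recover(self, block, backup):
        # On primary loss: prefer replicated post-backup writes,
        # fall back to the offline backup for older data.
        return self.delta_mirror.get(block, backup.get(block))
```

A recovery then combines the last backup with the replicated delta, which is why data-loss behavior matches mirroring while replica storage stays proportional to the inter-backup write volume.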
3.
4.
Web services are gaining high popularity and importance on mobile devices. Connected to ad-hoc networks, they make it possible to spontaneously establish even complex service-based workflows and architectures. However, these architectures are usually only as stable and reliable as the underlying network infrastructure. Since the topologies of mobile ad-hoc networks behave unpredictably, dependability within them can only be achieved with a dynamic replication mechanism. In this paper we present a highly flexible solution for the replication and synchronization of stateful Web services and discuss the behavior of the implemented prototype in large-scale simulations.
5.
has been shown to be the weakest realistic failure detector class needed to solve the consensus problem in an asynchronous message-passing distributed system prone to f &lt; n process crashes. However, the only protocol known to meet this bound is built from three layers of protocol construction and is therefore not efficient. This paper presents a surprisingly simple and very efficient direct message-passing implementation of a consensus protocol based on this failure detector class, and proves its correctness.
6.
A novel n-SiCN/p-SiCN homojunction was developed on a Si substrate for low-cost, high-temperature ultraviolet (UV) detection applications. The current ratios of the junction under −5 V bias, with and without irradiation by 254 nm UV light, are 1940 and 96.3 at room temperature and 175 °C, respectively. Compared to reported UV detectors based on 4H-SiC or β-SiC, the developed n-SiCN/p-SiCN homojunction has a better current ratio at both room and elevated temperatures.
7.
Communication-Induced Checkpointing (CIC) protocols are classified into two categories in the literature: index-based and model-based. In this paper, we discuss two data structures used in these two kinds of CIC protocols and their different roles in helping the checkpointing algorithms enforce the Z-cycle-free (ZCF) property. We then present our Fully Informed aNd Efficient (FINE) communication-induced checkpointing algorithm, which has not only lower checkpointing overhead than the well-known Fully Informed (FI) CIC protocol proposed by Helary et al., but also lower message overhead. Performance evaluation indicates that our protocol performs better than many other existing CIC protocols.
8.
Current Medium Access Control (MAC) protocols for data collection scenarios with a large number of nodes generating bursty traffic are based on Low-Power Listening (LPL) for network synchronization and Frame Slotted ALOHA (FSA) as the channel access mechanism. However, FSA has an efficiency bounded by 36.8% due to contention effects, which reduces packet throughput and increases energy consumption. In this paper, we target such scenarios by presenting Low-Power Distributed Queuing (LPDQ), a highly efficient and low-power MAC protocol. LPDQ is able to self-schedule data transmissions, acting as an FSA MAC under light traffic and seamlessly converging to a Time Division Multiple Access (TDMA) MAC under congestion. The paper presents the design principles and implementation details of LPDQ using low-power commercial radio transceivers. Experiments demonstrate an efficiency close to 99% that is independent of the number of nodes and is fair in terms of resource allocation.
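The 36.8% bound cited above is the classical slotted-ALOHA contention limit: with Poisson-distributed attempts, the fraction of slots carrying exactly one transmission is G·e^(−G), which peaks at 1/e when the offered load is one attempt per slot. A minimal check (function name is ours, not from the paper):

```python
import math

def fsa_throughput(g: float) -> float:
    """Expected fraction of slots with exactly one (i.e. successful)
    transmission in Frame Slotted ALOHA, for offered load g
    (mean attempts per slot, Poisson arrivals): S = g * e^(-g)."""
    return g * math.exp(-g)

# Throughput peaks at g = 1 attempt/slot: 1/e ~= 0.368,
# the 36.8% contention bound the abstract refers to.
peak = fsa_throughput(1.0)
```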
9.
This paper examines the use of speculations, a form of distributed transactions, for improving the reliability and fault tolerance of distributed systems. A speculation is defined as a computation based on an assumption that is not validated before the computation is started. If the assumption is later found to be false, the computation is aborted and the state of the program is rolled back; if the assumption is found to be true, the results of the computation are committed. The primary difference between a speculation and a transaction is that a speculation is not isolated: for example, a speculative computation may send and receive messages, and it may modify shared objects. As a result, processes that share those objects may be absorbed into a speculation. We present a syntax and an operational semantics in two forms. The first is a speculative model, which takes full advantage of the speculative features. The second is a nonspeculative, nondeterministic model, where aborts are treated as failures. We prove the equivalence of the two models, demonstrating that speculative execution is equivalent to failure-free computation.
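The commit/rollback behavior described above can be illustrated with a tiny single-process sketch: snapshot the state, run the computation before the assumption is validated, then either keep the result or restore the snapshot. This is only a toy model of the concept; the names and structure are ours, not the paper's syntax:

```python
import copy

def speculate(state: dict, assumption, computation, validate):
    """Run `computation` speculatively on `state` before `assumption`
    is validated. Commit the new state if validation succeeds,
    otherwise roll back to the pre-speculation snapshot."""
    snapshot = copy.deepcopy(state)  # checkpoint for rollback
    computation(state)               # runs before validation
    if validate(assumption):
        return state                 # commit speculative results
    return snapshot                  # abort: roll state back
```

The real system must additionally track which other processes saw speculative messages or shared objects, absorbing them into the same speculation so a rollback stays consistent.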
10.
We propose Chunks and Tasks, a parallel programming model built on abstractions for both data and work. The application programmer specifies how data and work can be split into smaller pieces, chunks and tasks, respectively. The Chunks and Tasks library maps the chunks and tasks to physical resources. In this way we seek to combine user friendliness with high performance. An application programmer can express a parallel algorithm using a few simple building blocks, defining data and work objects and their relationships. No explicit communication calls are needed; the distribution of both work and data is handled by the Chunks and Tasks library. This simplifies the efficient implementation of complex applications that require dynamic distribution of work and data. At the same time, Chunks and Tasks imposes restrictions on data access and task dependencies that facilitate the development of high-performance parallel back ends. We discuss the fundamental abstractions underlying the programming model, as well as performance, determinism, and fault resilience considerations. We also present a pilot C++ library implementation for clusters of multicore machines and demonstrate its performance for irregular block-sparse matrix–matrix multiplication.
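The division of responsibilities described above, where the programmer only specifies how work splits and how results combine while the runtime handles placement, can be sketched with a toy divide-and-conquer task. Plain recursion stands in for the library's scheduler here; nothing below is the actual Chunks and Tasks API:

```python
def run_task(data, threshold=2):
    """Toy illustration of the chunk/task split: the programmer
    declares how a chunk of data divides into child tasks and how
    child results combine; a runtime (here, simple recursion) decides
    where and when each task executes."""
    if len(data) <= threshold:      # small enough: a leaf chunk
        return sum(data)
    mid = len(data) // 2            # split the chunk...
    left = run_task(data[:mid], threshold)   # ...into child tasks
    right = run_task(data[mid:], threshold)
    return left + right             # combine child results
```

Because the splitting and combining logic is free of communication calls, the same program could be mapped to one core or to a cluster without change, which is the portability argument the abstract makes.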
11.
Understanding how the architecture of neuronal populations contributes to brain function requires three-dimensional representations and analyses. Neuroanatomical techniques are available to locate neurons in animal brains. Repeating an experiment in different individuals yields a collection of point patterns from which common organization principles are generally difficult to extract. We recently addressed the problem of generating statistical density maps to integrate replicated point-pattern data into meaningful, interpretable representations. Applications to different neuroanatomical systems illustrated the ability of our method to reveal organization rules that cannot be perceived directly in raw data. To make the method practicable for further applications, the aim of the present paper is to establish general guidelines for appropriate parameter tuning, valid interpretation of results, and efficient implementation. Accordingly, we characterize the method by analyzing the role of its main parameter, by reporting results on its statistical properties, and by demonstrating its robustness, using both simulated and real neuroanatomical data.
12.
Feng-Renn Juang, Yean-Kuen Fang, Yen-Ting Chiang, Tse-Heng Chou, Cheng-I. Lin, Cheng-Wei Lin 《Sensors and Actuators B: Chemical》2011,156(1):338-342
The Au/SnO2/n-LTPS MOS Schottky diode prepared on a glass substrate for carbon monoxide (CO) sensing applications is studied. The n-LTPS (n-type low-temperature polysilicon) is prepared by excimer laser annealing and PH3 plasma treatment of an amorphous Si thin film on a glass substrate. The developed Schottky diode exhibits a high relative response ratio of ∼546% to a 100 ppm CO ambient at 200 °C under −3 V bias. This response ratio is better than those of reported SnO2-based resistive CO sensors (100% and 37% on poly-alumina and glass substrates, respectively) and comparable to the 390% of a Pt-AlGaN/GaN Schottky diode CO sensor. Thus, the Au/SnO2/n-LTPS Schottky diode is a promising candidate for a low-cost, high-performance CO sensor.
13.
This paper describes a network-based video capture and processing peripheral, called the Vidboard, for a distributed multimedia system centered around a 1-Gbit/s asynchronous transfer mode (ATM) network. The Vidboard is capable of generating full-motion video streams with a range of presentation (picture size, color space, etc.) and network (traffic, transport, etc.) characteristics. The board is also capable of decoupling video from the real-time constraints of the television world, which allows easier integration of video into the software environment of computer systems. A suite of ATM-based protocols has been developed for transmitting video from the Vidboard to a workstation, and a series of experiments is presented in which video is transmitted to a workstation for display.
14.
In this paper, a sensor data validation/reconstruction methodology applicable to water networks, and its implementation by means of a software tool, are presented. The aim is to guarantee that the sensor data are reliable and complete even when sensor faults occur. The availability of such a dataset is of paramount importance for successfully using the sensor data in further tasks, e.g., water billing, network efficiency assessment, leak localization, and real-time operational control. The methodology presented here is based on a sequence of tests and on the combined use of spatial models (SM) and time series models (TSM) applied to the sensors used for real-time monitoring and control of the water network. Spatial models take advantage of the physical relations between different system variables (e.g., flow and level sensors in hydraulic systems), while time series models take advantage of the temporal redundancy of the measured variables (here by means of a Holt–Winters (HW) time series model). First, the data validation approach, based on several tests of different complexity, is described to detect potentially invalid or missing data. Then, the reconstruction process uses a set of spatial and time series models to reconstruct the missing/invalid data with the model estimation providing the best fit. A software tool implementing the proposed data validation and reconstruction methodology is also described. Finally, results obtained by applying the proposed methodology to a real case study, the Catalonia regional water network, illustrate its performance.
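The reconstruction step described above replaces samples flagged as invalid with a model forecast. The paper uses a full Holt–Winters model; as a minimal stand-in, simple exponential smoothing is enough to show the mechanic (function and parameter names are ours):

```python
def reconstruct(series, valid, alpha=0.5):
    """Replace samples flagged invalid with a one-step-ahead smoothed
    forecast. `series` is the list of measurements and `valid` a
    parallel list of booleans from the validation tests. A full
    Holt-Winters model would add trend and seasonal terms."""
    level = series[0]
    out = []
    for y, ok in zip(series, valid):
        if not ok:
            y = level                     # reconstruct from forecast
        out.append(y)
        level = alpha * y + (1 - alpha) * level  # update the level
    return out
```

In the actual methodology, this time-series estimate would compete with spatial-model estimates (e.g., from neighboring flow and level sensors), and the best-fitting reconstruction would be kept.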
15.
The three basic structural elements of a data processing system are shown to be files, flows, and processes. A metric for the size of a data processing system is introduced as a function of the number of files, flows, and processes in the system. The validity and reliability of this metric are demonstrated. It is shown how the metric may be used to estimate the cost of developing a data processing system at an early stage in the development process. Furthermore, it is demonstrated how the metric may be used to determine the efficiency of data processing system development.
16.
17.
Ahmad Alzghoul, Björn Backe, Magnus Löfstrand, Arne Byström, Bengt Liljedahl 《Computers in Industry》2014
The field of fault detection and diagnosis has been the subject of considerable interest in industry. Fault detection may increase the availability of products, thereby improving their quality. Fault detection and diagnosis methods can be classified into three categories: data-driven, analytically based, and knowledge-based methods.
18.
Daniel Barbará-Millá, Hector Garcia-Molina 《The VLDB Journal: The International Journal on Very Large Data Bases》1994,3(3):325-353
Traditional protocols for distributed database management have a high message overhead; they restrain or lock access to resources during protocol execution and may become impractical for some scenarios, such as real-time systems and very large distributed databases. In this article, we present the demarcation protocol; it overcomes these problems by using explicit consistency constraints as the correctness criteria. The method establishes safe limits, as lines drawn in the sand, for updates, and makes it possible to change these limits dynamically while enforcing the constraints at all times. We show how this technique can be applied to linear arithmetic, existential, key, and approximate copy constraints.
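The "lines in the sand" idea above can be sketched for a linear arithmetic constraint such as a + b ≥ 0 split across two sites: each site may update its local value without any messages as long as it respects its local limit, and limits move only through explicit transfers of slack. All names below are illustrative, not the paper's notation:

```python
class DemarcationAccount:
    """Toy demarcation-protocol sketch for one site's share of a
    global constraint (e.g. a + b >= 0). Updates that stay above the
    local limit need no coordination with the other site; the limit
    itself moves only via explicit slack transfers."""

    def __init__(self, value, limit):
        self.value = value
        self.limit = limit   # the local "line in the sand"

    def update(self, delta):
        if self.value + delta >= self.limit:
            self.value += delta   # safe: no messages required
            return True
        return False              # would cross the local limit

    def lend_slack(self, amount):
        # Tighten our own limit so the peer site may loosen theirs
        # by `amount` without risking a global constraint violation.
        self.limit += amount
```

Because a violation of the global constraint would require some site to cross its own limit, every message-free local update is provably safe, which is how the protocol avoids distributed locking.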
19.
Yunfeng Gu, Azzedine Boukerche 《Journal of Parallel and Distributed Computing》2011,71(8):1111-1124
There are two basic concerns in supporting multi-dimensional range query in P2P overlay networks. The first is preserving data locality in the process of data space partitioning; the second is maintaining data locality among data ranges that expand and extend at an exponential rate. The first problem has been well addressed by recursive decomposition schemes such as the Quad-tree, K-d tree, Z-order, and Hilbert curve. The second problem has recently been identified by our novel data structure, the HD Tree. In this paper, we explore how data locality can be easily maintained and how range query can be efficiently supported in the HD Tree. This is done by introducing two basic routing strategies: hierarchical routing and distributed routing. Although hierarchical routing can be applied to any two nodes in the P2P system, it generates high-volume traffic toward nodes near the root and has very limited options for coping with node failure. Distributed routing, on the other hand, concerns only source and destination pairs at the same depth, but traffic load is bound to some nodes at two neighboring depths, and multiple options can be found to redirect a routing request. Because the HD Tree supports multiple routes between any two nodes in the P2P system, routing in the HD Tree is very flexible; it can be designed for many purposes, such as fault tolerance or dynamic load balancing. The Distributed Routing Oriented Combined Routing (DROCR) algorithm is one such routing strategy implemented so far. It is a hybrid algorithm combining advantages of both hierarchical routing and distributed routing. The experimental results show that the DROCR algorithm achieves considerable performance gain over the equivalent tree routing at the highest depth examined. For supporting multi-dimensional range query, the experimental results indicate that the exponential expanding and extending rates are effectively controlled and minimized by the HD Tree overlay structure and DROCR routing.
20.
A constructive solution for stabilization via immersion and invariance: The cart and pendulum system
J.Á. Acosta, R. Ortega, I. Sarras 《Automatica》2008,44(9):2352-2357
Immersion and Invariance (I&I) is a method for designing asymptotically stabilizing control laws for nonlinear systems, proposed in [Astolfi, A., & Ortega, R. (2003). Immersion and invariance: A new tool for stabilization and adaptive control of nonlinear systems. IEEE Transactions on Automatic Control, 48, 590-606]. The key steps of I&I are (i) the definition of a target dynamics whose order is strictly smaller than the order of the system to be controlled; (ii) the construction of an invariant manifold such that the restriction of the system dynamics to this manifold coincides with the target dynamics; and (iii) the design of a control law that renders the manifold attractive and ensures that all signals are bounded. The second step requires the solution of a partial differential equation (PDE) that may be difficult to obtain. In this short note we use the classical cart and pendulum system to show that, by interlacing the first and second steps and invoking physical considerations, it is possible to obviate the solution of the PDE. To underscore the generality of the proposed variation of I&I, we show that it is also applicable to a class of n-dimensional systems that contains, as a particular case, the cart and pendulum system.