Similar Documents
20 similar documents found (search time: 335 ms).
1.
This paper presents a paradigm for remote file access called Smart File Objects (SFOs). The SFO realizes the ELFS (Extensible File Systems) concept of files as typed objects, but applies it to wide-area networks (J. Karpovich et al., in “Proceedings of the 9th OOPSLA,” 1994). The SFO is an object-oriented, application-specific file access paradigm designed to address the bottleneck imposed by high-latency, low-bandwidth, unpredictable, and unreliable networks such as the current Internet. Newly emerging network applications such as multimedia, metacomputing, and collaboratories have different sensitivities to these network “features” and require a more flexible file access mechanism than conventional distributed file systems provide. The SFO uses application and network information to adaptively prefetch and cache needed data in parallel with the execution of the application, mitigating the impact of the network. Preliminary results indicate that the SFO can provide substantial performance gains for network applications.
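To illustrate the core idea behind SFO-style adaptive prefetching — overlapping high-latency network reads with application computation — here is a minimal Python sketch. It is our illustration, not the SFO API; `read_block` stands in for whatever remote fetch the file object performs, and the queue depth stands in for the adaptivity the SFO derives from application and network information.

```python
import threading
import queue

def prefetching_reader(read_block, num_blocks, depth=4):
    """Overlap network reads with computation: a background thread fetches
    blocks ahead of the consumer, as an SFO-style prefetcher would."""
    buf = queue.Queue(maxsize=depth)   # bounded buffer: depth controls read-ahead

    def producer():
        for i in range(num_blocks):
            buf.put(read_block(i))     # high-latency network read happens here
        buf.put(None)                  # end-of-file sentinel

    threading.Thread(target=producer, daemon=True).start()
    while True:
        block = buf.get()
        if block is None:
            break
        yield block                    # consumer computes while producer fetches ahead

# Usage sketch: for block in prefetching_reader(fetch, 100): process(block)
```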

2.
Mobile code makes it easier to maintain, debug, update, and customize a system. Active networks are one of the more interesting applications of mobile code: code is injected into the nodes of a network to customize the network's functionality, such as routing, and to add new features, such as special‐purpose congestion control and filtering algorithms. The challenge is to develop a communication‐oriented platform for such systems. We refer to mobile code targeted at low‐level, communication‐oriented systems like active networks as liquid software, the key distinction being that liquid software is focused on the efficient transfer of data, not high‐performance computation. To this end, we have designed and implemented Joust, which consists of a complete re‐implementation of the Java virtual machine (including both the runtime system and a just‐in‐time compiler), running on the Scout operating system (a configurable, communication‐oriented OS). The result is a configurable, high‐performance platform for running liquid software. We present the results of implementing two different applications of liquid software on Joust, including a prototype architecture for active networks. Copyright © 2000 John Wiley & Sons, Ltd.

3.
Data Grids enable the sharing, selection, and connection of a wide variety of geographically distributed computational and storage resources for content needed by large‐scale data‐intensive applications such as high‐energy physics, bioinformatics, and virtual astrophysical observatories. In Data Grids, co‐allocation architectures were developed to enable parallel downloads of data sets from selected replica servers. Since the Internet is usually the underlying network of a grid, network bandwidth is the main factor affecting file transfers between clients and servers. In this paradigm, several challenges remain to be solved: reducing the differences in finish times among selected replica servers, avoiding the traffic congestion that results from transferring the same blocks over different links between servers and clients, and managing network performance variations among parallel transfers. In this paper, we propose the Anticipative Recursively Adjusting Mechanism (ARAM) scheme to adjust the workloads on selected replica servers and handle unpredictable variations in network performance by those servers. Our algorithm uses the finish rates of previously assigned transfers to anticipate the bandwidth status for the next section, adjusting workloads to reduce file transfer times in grid environments. Our approach is useful in grid environments with unstable network links. It not only reduces the idle time wasted waiting for the slowest server, but also decreases file transfer completion times. Copyright © 2010 John Wiley & Sons, Ltd.
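The anticipative adjustment at the heart of ARAM can be sketched in a few lines. The following is an illustrative simplification (not the paper's pseudocode): the finish rate each replica server achieved on the previous section of the transfer is used to split the next section, so faster servers receive larger shares and no one waits on the slowest.

```python
def next_section_shares(bytes_done, elapsed, section_size):
    """Split the next section of a co-allocated transfer in proportion to
    each server's observed finish rate (bytes/second) so far."""
    rates = [b / elapsed for b in bytes_done]          # anticipated bandwidth per server
    total = sum(rates)
    return [section_size * r / total for r in rates]   # faster servers get larger shares

# Example: server B finished twice as many bytes, so it gets 2/3 of the next section.
print(next_section_shares([50_000_000, 100_000_000], elapsed=10.0,
                          section_size=30_000_000))
```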

4.
The Internet of Things (IoT) is an emerging technology paradigm where millions of sensors and actuators help monitor and manage physical, environmental, and human systems in real time. The inherent closed‐loop responsiveness and decision making of IoT applications make them ideal candidates for using low‐latency and scalable stream processing platforms. Distributed stream processing systems (DSPS) hosted in cloud data centers are becoming the vital engine for real‐time data processing and analytics in any IoT software architecture. But the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT applications and data streams. Here, we propose RIoTBench, a real‐time IoT benchmark suite, along with performance metrics, to evaluate DSPS for streaming IoT applications. The benchmark includes 27 common IoT tasks classified across various functional categories and implemented as modular microbenchmarks. Further, we define four IoT application benchmarks composed from these tasks based on common patterns of data preprocessing, statistical summarization, and predictive analytics that are intrinsic to the closed‐loop IoT decision‐making life cycle. These are coupled with four stream workloads sourced from real IoT observations on smart cities and smart health, with peak stream rates that range from 500 to 10 000 messages/second from up to 3 million sensors. We validate the RIoTBench suite for the popular Apache Storm DSPS on the Microsoft Azure public cloud and present empirical observations. This suite can be used by DSPS researchers for performance analysis and resource scheduling, by IoT practitioners to evaluate DSPS platforms, and even reused within IoT solutions.

5.
An emerging approach to distributed systems exploits the self-organization, autonomy, and robustness of biological epidemics. In this article, we propose a novel bio-inspired protocol, EraMobile (Epidemic-based Reliable and Adaptive Multicast for Mobile ad hoc networks), and present extensive performance analysis results for it. EraMobile supports group applications that require high reliability. The protocol aims to deliver multicast data reliably with minimal network overhead, even under adverse network conditions. With an epidemic-based multicast method, it copes with dynamic and unpredictable topology changes due to mobility. Our epidemic mechanism does not require maintaining any tree- or mesh-like structure for multicasting. It requires neither a global nor a partial view of the network, nor information about neighboring nodes and group members. In addition, it substantially lowers overhead by eliminating redundant data transmissions. Another distinguishing feature is its ability to adapt to varying node densities, which lets it deliver data reliably in both sparse networks (where connectivity is prone to interruptions) and dense networks (where congestion is likely). We describe the working principles of the protocol and study its performance through extensive comparative simulations in the ns-2 network simulator.
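The epidemic principle EraMobile builds on can be demonstrated with a toy push-gossip loop. This sketch is ours and omits EraMobile's reliability and density-adaptation machinery; it only shows why no tree, mesh, or membership view is needed — each holder forwards to a few random peers until everyone has the message.

```python
import random

def gossip_round(nodes, infected, fanout=3):
    """One round of epidemic dissemination: every node that already holds the
    message forwards it to a few randomly chosen peers (no tree/mesh state)."""
    newly = set()
    for n in infected:
        for peer in random.sample(nodes, min(fanout, len(nodes))):
            if peer not in infected:
                newly.add(peer)
    return infected | newly

nodes = list(range(100))
have_msg = {0}                       # the multicast source
rounds = 0
while len(have_msg) < len(nodes):
    have_msg = gossip_round(nodes, have_msg)
    rounds += 1
print(f"delivered to all {len(nodes)} nodes in {rounds} rounds")
```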

6.
Heterogeneous performance prediction models are valuable tools for accurately predicting application runtime, allowing efficient design space exploration and application mapping. Existing performance models require intricate system architecture knowledge, making the modeling task difficult. In this research, we propose a regression‐based performance prediction framework for general‐purpose graphics processing unit (GPGPU) clusters that statistically abstracts the system architecture characteristics, enabling performance prediction without detailed system architecture knowledge. The regression‐based framework targets deterministic synchronous iterative algorithms using our synchronous iterative GPGPU execution model and is broken into two components: a computation component that models the GPGPU device and host computations, and a communication component that models the network‐level communications. The computation component's regression models use algorithm characteristics such as the number of floating‐point operations and total bytes as predictor variables and are trained using several small, instrumented executions of synchronous iterative algorithms that span a range of floating‐point-operations-to-byte requirements. The regression models for network‐level communications are developed using micro‐benchmarks and employ data transfer size and processor count as predictor variables. Our performance prediction framework achieves prediction accuracy over 90% compared with the actual implementations for several tested GPGPU cluster configurations. The end goal of this research is to offer the scientific computing community an accurate and easy‐to‐use performance prediction framework that empowers users to utilize heterogeneous resources optimally. Copyright © 2013 John Wiley & Sons, Ltd.
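A hedged sketch of the computation-component idea: fit a least-squares model with floating-point operations and total bytes as predictor variables, then predict the runtime of a larger, unseen run. All numbers below are hypothetical placeholders for the small instrumented training executions, and the two-predictor linear form is an illustrative simplification.

```python
import numpy as np

# Hypothetical training data from small instrumented runs:
# predictors = [floating-point ops, total bytes moved], target = runtime (s).
X = np.array([[1e9, 4e8], [2e9, 6e8], [4e9, 1e9], [8e9, 3e9]])
y = np.array([0.9, 1.6, 3.1, 7.8])

# Fit runtime ≈ a*flops + b*bytes + c by ordinary least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the computation time of a larger problem size.
flops, nbytes = 1.6e10, 5e9
pred = coef @ [flops, nbytes, 1.0]
print(f"predicted kernel time: {pred:.2f} s")
```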

7.
Distributed Java virtual machine (dJVM) systems enable concurrent Java applications to run transparently on clusters of commodity computers. This is achieved by supporting Java's shared‐memory model over multiple JVMs distributed across the cluster's nodes. In this work, we describe and evaluate selective dynamic diffing and lazy home allocation, two new runtime techniques that enable dJVMs to support memory sharing across the cluster efficiently. Specifically, the two proposed techniques, in isolation or in combination, can reduce the overheads that such dJVM systems incur for their memory‐coherence protocol: message traffic, extra memory space, and the high latency of remote memory accesses. To evaluate the performance benefits of dynamic diffing and lazy home allocation, we implemented both techniques in Cooperative JVM (CoJVM), a basic dJVM system we developed in previous work. We then carried out performance comparisons between the basic CoJVM and versions modified with our proposed techniques for five representative concurrent Java applications (matrix multiply, LU, Radix, fast Fourier transform, and SOR). Our experimental results show that dynamic diffing and lazy home allocation significantly reduce memory sharing overheads. The reduction resulted in considerable gains in the CoJVM system's performance, ranging from 9% up to 20% in four out of the five applications, with speedups varying from 6.5 up to 8.1 on an 8‐node cluster. Copyright © 2007 John Wiley & Sons, Ltd.
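The general idea behind diff-based coherence — common to software distributed shared memory and the spirit of selective dynamic diffing — can be sketched as follows. This is our illustration, not CoJVM's protocol: a "twin" copy of the shared object lets the sender ship only the bytes that changed, cutting message traffic.

```python
def diff_object(twin, current):
    """Send only the bytes of a shared object that changed since the last
    coherence action (twin and current are equal-sized byte regions)."""
    return [(i, b) for i, (a, b) in enumerate(zip(twin, current)) if a != b]

def apply_diff(twin, changes):
    data = bytearray(twin)
    for i, b in changes:        # patch (offset, new_byte) pairs onto the twin
        data[i] = b
    return bytes(data)

old = b"hello shared object state"
new = b"hello SHARED object state"
d = diff_object(old, new)
assert apply_diff(old, d) == new
print(f"sent {len(d)} byte updates instead of {len(new)} bytes")
```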

8.
Current smart spaces require increasingly sophisticated sensors able to acquire the state of the environment in order to provide advanced and customized services. Among the most important environmental variables, the locations of users and their identities are a primary concern for smart home applications. Despite some years of research in indoor positioning, few systems are designed as components that can be plugged into complex home automation platforms. We present People Localization and Tracking for HomE Automation (PLaTHEA), a vision‐based indoor localization system specifically tailored for Ambient Assisted Living applications. PLaTHEA features a novel technique to acquire a stereo video stream from a pair of independent (unsynchronized) network‐attached cameras, which eases physical deployment. The input stream is processed by integrating well‐known techniques with a novel tracking approach targeted at indoor spaces. The system has a modular architecture with clear interfaces exposed as Web services, and it runs on cheap, off‐the‐shelf hardware (both sensing devices and computing units). We evaluated PLaTHEA in real usage conditions and report the measured precision and accuracy. Low light and crowded or large monitored environments may slightly degrade the system's performance; nevertheless, the results presented here show that it is well suited to typical day‐to‐day domestic settings. Copyright © 2014 John Wiley & Sons, Ltd.

9.
This paper presents a new stability and L2‐gain analysis of linear Networked Control Systems (NCS). The new method is inspired by the discontinuous Lyapunov functions introduced by Naghshtabrizi et al. (Syst. Control Lett. 2008; 57:378–385; Proceedings 26th American Control Conference, New York, U.S.A., July 2007) in the framework of impulsive system representation. Most existing work on the stability of NCS (in the framework of the time delay approach) reduces to Lyapunov‐based analysis of systems with uncertain and bounded time‐varying delays. Such analysis via time‐independent Lyapunov functionals does not take advantage of the sawtooth evolution of the delays induced by sample‐and‐hold. The latter drawback was removed by Fridman (Automatica 2010; 46:421–427), where time‐dependent Lyapunov functionals for sampled‐data systems were introduced, leading to essentially less conservative results. The objective of the present paper is to extend the time‐dependent Lyapunov functional approach to NCS, where variable sampling intervals, data packet dropouts, and variable network‐induced delays are taken into account. The Lyapunov functionals in this paper depend on time and on the upper bound of the network‐induced delay, and they do not grow along the input update times. The new analysis is applied to the state‐feedback and to a novel network‐based static output‐feedback H∞ control problem. Numerical examples show that the novel discontinuous terms in the Lyapunov functionals essentially improve the results. Copyright © 2011 John Wiley & Sons, Ltd.
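For readers unfamiliar with the construction, one representative time-dependent term has the following shape (illustrative notation after Fridman's sampled-data functionals, not necessarily the exact functional used in this paper):

```latex
% With sawtooth delay \tau(t) = t - t_k on [t_k, t_{k+1}), the weighted term
% vanishes as \tau(t) \to \tau_M and is reset at each input update instant,
% so the functional does not grow along the update times:
V(t) = x^{\top}(t) P x(t)
     + (\tau_M - \tau(t)) \int_{t-\tau(t)}^{t} \dot{x}^{\top}(s)\, U\, \dot{x}(s)\, \mathrm{d}s,
\qquad P \succ 0,\ U \succ 0.
```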

10.
Buğra Gedik. Software, 2014, 44(9):1105–1128
Stream processing applications process high‐volume, continuous feeds from live data sources, employ data‐in‐motion analytics to analyze these feeds, and produce near real‐time insights with low latency. One of the fundamental characteristics of such applications is the on‐the‐fly nature of the computation, which does not require access to disk‐resident data. Stream processing applications store the most recent history of streams in memory and use it to perform the necessary modeling and analysis tasks. This recent history is often managed using windows, and all data stream management systems provide some form of windowing functionality. Windowing makes it possible to implement streaming versions of the traditionally blocking relational operators, such as streaming aggregations, joins, and sorts, as well as any other analytic operator that keeps the most recent tuples as state, such as time series analysis and signal processing operators. In this paper, we provide a categorization of the different window types and policies employed in stream processing applications and give detailed operational semantics for various window configurations. We describe an extensibility mechanism that makes it possible to integrate windowing support into user‐defined operators, enabling consistent syntax and semantics across system‐provided and third‐party toolkits of streaming operators. We describe the design and implementation of a runtime windowing library that significantly simplifies the construction of window‐based operators by decoupling the handling of window policies from the operator logic. We present our experience using the windowing library to implement a relational operators toolkit and compare the efficacy of the solution to an earlier implementation that did not employ a common windowing library. Copyright © 2013 John Wiley & Sons, Ltd.
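The decoupling of window policy from operator logic can be conveyed with a small sketch. This is our Python illustration, not the library's actual interface: a count-based sliding window manages eviction and triggering, while the operator supplies only a callback, so a streaming aggregation reduces to one line.

```python
from collections import deque

class SlidingWindow:
    """The window owns eviction and trigger policies; the operator supplies
    only an on_trigger callback over the current window contents."""
    def __init__(self, size, slide, on_trigger):
        self.size, self.slide, self.on_trigger = size, slide, on_trigger
        self.buf, self.since_trigger = deque(), 0

    def insert(self, tuple_):
        self.buf.append(tuple_)
        if len(self.buf) > self.size:          # count-based eviction policy
            self.buf.popleft()
        self.since_trigger += 1
        if self.since_trigger == self.slide:   # count-based trigger policy
            self.since_trigger = 0
            self.on_trigger(list(self.buf))

# A streaming aggregation becomes trivial to express:
w = SlidingWindow(size=4, slide=2,
                  on_trigger=lambda win: print("avg =", sum(win) / len(win)))
for x in [1, 2, 3, 4, 5, 6, 7, 8]:
    w.insert(x)
```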

11.
An efficient peer-to-peer indexing tree structure for multidimensional data
As one of the most important technologies for implementing large-scale distributed systems, peer-to-peer (P2P) computing has attracted much attention in both the research and industrial communities, for advantages such as high availability, high performance, and high flexibility under network dynamics. However, multidimensional data indexing remains a big challenge for P2P computing, because the complicated existing index structures make search and network maintenance inefficient, which greatly limits the scalability of applications and the dimensionality of the data to be indexed. We propose SDI (Swift tree structure for multidimensional Data Indexing), a swift index scheme with a simple tree structure for multidimensional data indexing in large-scale distributed systems. While keeping query efficiency at O(log N) routing hops, SDI has extremely low maintenance costs, which is proven through theoretical analysis. Furthermore, SDI overcomes the root-bottleneck problem present in most other tree-based distributed indexing systems. An extensive empirical study verifies the superiority of SDI in both query and maintenance performance.

12.
Distributed data stream processing applications are often characterized by data flow graphs consisting of a large number of built‐in and user‐defined operators connected via streams. These flow graphs are typically deployed on a large set of nodes, and the data processing is carried out on‐the‐fly, as tuples arrive at possibly very high rates, with minimum latency. It is well known that developing and debugging distributed, multi‐threaded, and asynchronous applications, such as stream processing applications, can be challenging; without domain‐specific debugging support, developers struggle. In this paper, we describe tools and language support for debugging distributed stream processing applications. Our key insight is to view debugging of stream processing applications from four different, but related, perspectives. First, debugging the semantics of the application involves verifying the operator‐level composition and inspecting the flows at the logical level. Second, debugging the user‐defined operators involves traditional source‐code debugging, but strongly tied to the stream‐level interactions. Third, debugging the deployment details of the application requires understanding the runtime physical layout and configuration of the application. Fourth, debugging the performance of the application requires inspecting various performance metrics (such as communication rates, CPU utilization, etc.) associated with streams, operators, and nodes in the system. In light of this characterization, we developed several tools, such as a debugger‐aware compiler and an associated stream debugger, composition and deployment visualizers, and performance visualizers, as well as language support, such as configuration knobs for logging and tracing, deployment configurations such as operator‐to‐process and process‐to‐node mappings, monitoring directives to inspect streams, and special sink adapters to intercept and dump streaming data to files and sockets, to name a few. We describe these tools in the context of Spade, a language for creating distributed stream processing applications, and System S, a distributed stream processing middleware under development at the IBM Watson Research Center. Published in 2009 by John Wiley & Sons, Ltd.

13.
Despite using multiple concurrent processors, a typical high‐performance parallel application is long‐running, taking hours, even days, to arrive at a solution. To modify a running high‐performance parallel application, the programmer has to stop the computation, change the code, redeploy, and enqueue the updated version to be scheduled to run, wasting not only the programmer's time but also expensive computing resources. To address these inefficiencies, this article describes how dynamic software updates (DSU) can be used to modify a parallel application on the fly, saving the programmer's time and using expensive computing resources more productively. The net effect of updating parallel applications dynamically is a reduction in the total time that elapses between posing a problem and arriving at a solution, otherwise known as time‐to‐discovery. To explore the benefits of dynamic updates for high‐performance applications, this article takes a two‐pronged approach. First, we describe our experiences building and evaluating a system for dynamically updating applications running on a parallel cluster. We then review a large body of literature on the existing state of the art in DSU and point out how this research can be applied to high‐performance applications. Our experimental results indicate that DSU have the potential to become a powerful tool in reducing time‐to‐discovery for high‐performance parallel applications. Copyright © 2010 John Wiley & Sons, Ltd.
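A minimal sketch of the dynamic-update idea for an iterative code, assuming a hypothetical user module `solver.py` exposing `step(state)`; the article's actual DSU system operates on compiled parallel applications, not Python modules. The update is applied at a quiescent point between iterations, so the long-running computation never restarts.

```python
import importlib
import os
import solver   # hypothetical module providing step(state); assumed to exist

# A long-running loop that polls for an updated implementation at a
# quiescent point between iterations -- the simplest form of DSU.
state = 0.0
for it in range(1_000_000):
    state = solver.step(state)
    if os.path.exists("UPDATE"):       # user signals an update by touching a file
        importlib.reload(solver)       # swap in the new step() without restarting
        os.remove("UPDATE")
        print(f"hot-swapped solver at iteration {it}")
```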

14.
Data compression techniques have long assisted in making effective use of disk, network, and other resources. Most compression utilities require explicit user action to compress and decompress file data. However, in some systems compression and decompression of file data is done transparently by the operating system. A compressed file requires fewer sectors for storage on the disk; hence, incorporating data compression techniques into a file system yields a larger effective disk space. At the same time, the additional time needed for compression and decompression is largely offset by the time gained from fewer disk accesses. In this paper we describe the design and implementation of a file system for the Linux kernel with on‐the‐fly data compression and decompression that is transparent to the user. We also present experimental results showing that the performance of our file system is comparable to that of Ext2fs, the native file system for Linux. Copyright © 1999 John Wiley & Sons, Ltd.
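The on-the-fly scheme can be illustrated in user space with per-block compression; the paper's file system does this inside the kernel, below the file abstraction, and the block size and framing here are our choices, not the paper's format.

```python
import zlib

BLOCK = 64 * 1024  # compress file data in fixed-size blocks

def write_compressed(path, data):
    """Store each block deflated, with a 4-byte length header per block."""
    with open(path, "wb") as f:
        for i in range(0, len(data), BLOCK):
            chunk = zlib.compress(data[i:i + BLOCK])
            f.write(len(chunk).to_bytes(4, "little"))
            f.write(chunk)

def read_compressed(path):
    """Reassemble the original bytes by inflating block after block."""
    out = bytearray()
    with open(path, "rb") as f:
        while header := f.read(4):
            n = int.from_bytes(header, "little")
            out += zlib.decompress(f.read(n))
    return bytes(out)

data = b"example " * 100_000
write_compressed("demo.z", data)
assert read_compressed("demo.z") == data   # round-trip is lossless
```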

15.
A new delay fraction technique is proposed to investigate the H∞ control of uncertain systems with time delay in the state or input. First, the variation interval of the time-varying delay is divided into several subintervals of equal length. By checking the variation of the derivative of the Lyapunov functional in every subinterval, new criteria for H∞ performance analysis of the systems are derived in terms of the convexity properties of a matrix inequality and other new analysis techniques. Then, criteria for H∞ control design are presented, which follow from the H∞ performance analysis results. As applications of the derived results, H∞ performance analysis and H∞ control design are carried out for several systems, including two numerical systems, a linearised truck-trailer system, and a traffic network system. Discussions show that the proposed method is effective for these systems and yields much less conservative results than those given in the existing references.
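In symbols, the delay fraction step reads as follows (our notation, not necessarily the paper's): the variation interval of the delay is split into N equal subintervals, and the Lyapunov analysis is carried out on each.

```latex
% Delay fraction of the variation interval into N equal parts:
\tau(t) \in [\tau_m, \tau_M], \qquad
\Delta = \frac{\tau_M - \tau_m}{N}, \qquad
[\tau_m, \tau_M] = \bigcup_{i=1}^{N}
  \bigl[\tau_m + (i-1)\Delta,\ \tau_m + i\Delta\bigr],
% with \dot{V} examined separately on each subinterval.
```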

16.
This paper addresses the finite‐time H∞ bumpless transfer control problem for switched systems. The main idea lies in designing a state‐feedback controller with amplitude limitation and a state‐dependent switching law to reduce the control bumps caused by switching. First, a local bumpless transfer condition is proposed to limit the amplitude of switching controllers at switching points. Second, by introducing a state‐dependent switching law, a prescribed finite‐time H∞ bumpless transfer control performance is attained even if it does not hold for each subsystem or when the system state remains on a switching surface. Third, a sufficient condition verifying the solvability of the finite‐time H∞ bumpless transfer control problem is established by resorting to the multiple Lyapunov function method. Finally, the effectiveness of the developed method is illustrated by a numerical example.

17.
The reliability and scalability of large-scale network storage systems face big challenges, which call for a reliable, scalable, and efficient data placement algorithm; previous techniques satisfy these requirements only partially. In this work, we develop an effective hybrid approach, RSEDP, which combines reliable replication data placement (RRDP) with scalable and efficient data placement (SEDP) to meet the requirements mentioned above. RRDP distributes replicated data over large-scale heterogeneous network storage systems such that replicas of the same data are placed on different, non-consecutive devices, achieving a high redundancy degree and failure resilience. SEDP assigns data evenly among devices according to their weight and scales well to expansions or curtailments of the system. To take advantage of both RRDP and SEDP, RSEDP integrates them by categorizing data into hot and cold data based on access frequency, placing hot data with RRDP and distributing the remainder with SEDP. Theoretical analysis and an experimental study show that the combined RSEDP increases redundancy degree and failure resilience, and has good scalability and time efficiency with small memory overhead.
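A toy sketch of the hybrid policy, with illustrative names and a simplified weighted-hashing rule (not the paper's exact RRDP/SEDP algorithms): hot objects are replicated across distinct devices, while cold objects receive a single weight-proportional placement.

```python
import hashlib

def place(obj_id, devices, hot, replicas=3):
    """Hot data: spread replicas over distinct devices (RRDP-like).
    Cold data: one device chosen by weighted rendezvous-style hashing
    (SEDP-like), so placement adapts to device weights and membership."""
    def score(dev):
        h = hashlib.md5(f"{obj_id}:{dev['name']}".encode()).hexdigest()
        return dev["weight"] * int(h, 16)      # crude weight-biased ranking
    ranked = sorted(devices, key=score, reverse=True)
    return ([d["name"] for d in ranked[:replicas]] if hot
            else [ranked[0]["name"]])

devs = [{"name": f"d{i}", "weight": w} for i, w in enumerate([1, 1, 2, 2, 4])]
print(place("block-42", devs, hot=True))    # three distinct devices
print(place("block-99", devs, hot=False))   # single weighted choice
```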

18.
Parallel simulation codes often suffer from performance bottlenecks due to network congestion, leaving millions of dollars of investment underutilized. Given a network topology, it is critical to understand how different applications, job placements, routing schemes, etc., are affected by and contribute to network congestion, especially for large and complex networks. Understanding and optimizing communication on large‐scale networks is an active area of research, and domain experts often use exploratory tools to develop both intuitive and formal metrics for network health and performance. This paper presents TreeScope, an interactive, web‐based visualization tool for exploring network traffic on large‐scale fat‐tree networks. TreeScope encodes the network topology using a tailored matrix‐based representation and provides detailed visualization of all traffic in the network. We report on the design process of TreeScope, which has been received positively by network researchers as well as system administrators. Through case studies of real and simulated data, we demonstrate how TreeScope's visual design and interactive support for complex queries on network traffic can provide experts with new insights into the occurrences and causes of congestion in the network.

19.
This paper is concerned with H∞ performance analysis for networked control systems with transmission delays and successive packet dropouts under stochastic sampling. The parameter uncertainties are time‐varying norm‐bounded and appear in both the state and input matrices. If packet loss is treated as time delay, then when the networked control system with successive packet dropouts and delays is modeled as an ordinary linear system via the input‐delay approach, the stochastic sampling period makes the delay caused by packet losses a stochastic variable, which leads to difficulties in the stability analysis of the considered system. The problem can be solved, however, by transforming the system with stochastic delay into a continuous system with a stochastic parameter. In this paper, by making an assumption about the network packet loss rate and employing the probabilistic distribution of the time delays, the stochastically sampled system is transformed into a continuous‐time model with a stochastic variable that satisfies a Bernoulli distribution. By a linear matrix inequality approach, sufficient conditions are obtained that guarantee the robust mean‐square exponential stability of the system with an H∞ performance. Moreover, an H∞ controller design procedure is proposed, and a less conservative result is obtained by taking the probability into consideration. Finally, a numerical simulation example shows the effectiveness of the obtained results. Copyright © 2016 John Wiley & Sons, Ltd.
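In commonly used notation (ours, not necessarily the paper's), a Bernoulli packet-arrival model of this kind looks as follows:

```latex
% \alpha_k \in \{0,1\}: 1 if the packet sampled at t_k arrives, 0 if it is
% lost, in which case the controller holds the last received sample:
\Pr\{\alpha_k = 1\} = \bar{\alpha}, \qquad \Pr\{\alpha_k = 0\} = 1 - \bar{\alpha},
u(t) = K\bigl(\alpha_k\, x(t_k) + (1 - \alpha_k)\, x(t_{k-1})\bigr),
\qquad t \in [t_k, t_{k+1}).
```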

20.
Nowadays, there is a strong trend towards rendering to higher‐resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever‐increasing amount of computational power, the speed gain is easily counteracted by increasingly complex and sophisticated shading computations. For real‐time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to performance constraints (e.g. although full HD is possible, the PS3 and Xbox often render at lower resolutions). In order to achieve high‐quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this survey, we investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit TC for performance optimization. These methods not only allow incorporating more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high‐end graphics applications to lower‐spec consumer‐level hardware. To this end, we first introduce the notion and main concepts of TC, including an overview of historical methods. We then describe a general approach, image‐space reprojection, with several implementation algorithms that facilitate reusing shading information across adjacent frames. We also discuss data‐reuse quality and performance related to reprojection techniques. Finally, in the second half of this survey, we demonstrate various applications that exploit TC in real‐time rendering.
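As a taste of image-space reprojection, here is a hedged NumPy sketch of the reverse-reprojection test: map a current-frame surface point into the previous frame and reuse the cached shading only if the depth test says it was visible there. Array layouts, the NDC convention, and the depth tolerance are illustrative choices, not the survey's specification.

```python
import numpy as np

def reproject(p_world, prev_view_proj, cache, cache_depth, eps=1e-3):
    """Return the previous frame's cached shading for a current-frame surface
    point, or None if the point must be reshaded (off screen or occluded)."""
    q = prev_view_proj @ np.append(p_world, 1.0)   # into previous clip space
    ndc = q[:3] / q[3]                             # perspective divide -> NDC
    if np.any(np.abs(ndc[:2]) > 1.0):
        return None                                # off screen last frame: reshade
    h, w = cache.shape[:2]
    x = int((ndc[0] * 0.5 + 0.5) * (w - 1))        # NDC -> cache pixel coords
    y = int((ndc[1] * 0.5 + 0.5) * (h - 1))
    if abs(cache_depth[y, x] - ndc[2]) > eps:
        return None                                # disocclusion: reshade
    return cache[y, x]                             # cache hit: reuse shading
```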

