Found 20 similar documents (search time: 15 ms)
1.
In a large-scale information system such as a digital library or the Web, a set of distributed caches can improve their effectiveness by coordinating their data placement decisions. Using simulation, we examine three practical cooperative placement algorithms, including one that is provably close to optimal, and we compare these algorithms to the optimal placement algorithm and several cooperative and noncooperative replacement algorithms. We draw five conclusions from these experiments: 1) cooperative placement can significantly improve performance compared to local replacement algorithms, particularly when the size of individual caches is limited compared to the universe of objects; 2) although the amortizing placement algorithm is only guaranteed to be within 14 times the optimal, in practice it seems to provide an excellent approximation of the optimal; 3) in a cooperative caching scenario, the recent greedy-dual local replacement algorithm performs much better than the other local replacement algorithms; 4) our hierarchical-greedy-dual replacement algorithm yields further improvements over the greedy-dual algorithm especially when there are idle caches in the system; and 5) a key challenge to coordinated placement algorithms is generating good predictions of access patterns based on past accesses.
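As a concrete illustration of the greedy-dual family of local replacement algorithms mentioned above, here is a minimal Python sketch of GreedyDual-Size, which assigns each cached object a value H = L + cost/size and inflates the floor L whenever an object is evicted. The class, its parameters, and the access interface are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of GreedyDual-Size cache replacement (illustrative only).
class GreedyDualSizeCache:
    def __init__(self, capacity):
        self.capacity = capacity   # total size budget
        self.used = 0
        self.h = {}                # object -> current H value
        self.meta = {}             # object -> (cost, size)
        self.inflation = 0.0       # L: rises as objects are evicted

    def access(self, obj, cost, size):
        """Return True on hit; on miss, evict low-H objects and insert."""
        if obj in self.h:                            # hit: restore full H value
            self.h[obj] = self.inflation + cost / size
            return True
        while self.used + size > self.capacity and self.h:
            victim = min(self.h, key=self.h.get)     # evict smallest H
            self.inflation = self.h.pop(victim)      # L := H of evicted object
            self.used -= self.meta.pop(victim)[1]
        if size <= self.capacity:                    # insert with H = L + cost/size
            self.h[obj] = self.inflation + cost / size
            self.meta[obj] = (cost, size)
            self.used += size
        return False
```

Because recently touched objects have their H restored above the inflation floor, long-untouched objects gradually become the cheapest to evict, blending recency with retrieval cost and size.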
2.
Streaming media from the Internet is a successful application for end users. With the rise of mobile devices and home networking environments, cooperation among users will become more important in the future. Explicit middleware standards have been defined to achieve such cooperation. Internet conferencing applications, on the other hand, do not handle collaborative streaming sessions with individual control for each user. We propose a new concept for cooperation, exemplified by collaborative media streaming, that uses IETF multimedia session control protocols together with a proxy architecture. This concept enables both synchronization among clients and flexible control for individual users.
3.
Scheduling algorithms are used in content delivery systems to control the resource allocation rate. They not only improve system efficiency but also increase user satisfaction. A lower renege rate and less waiting time for users are the main goals of a scheduling algorithm. Among existing algorithms, the On-Demand strategy does not perform well, while rate-control channel allocation policies perform much better. Pure-Rate-Control (PRC) and Multiple-Service-Class (MSC) belong to the rate-control algorithms. MSC performs well, but a drawback is its reliance on the Hot Index, which is hard to determine and has a significant effect on performance. To solve this problem and improve overall system performance, two new algorithms, Modified MSC (MMSC) and Adaptive Algorithm (AA), are proposed in this paper. Both solve the problem of MSC and improve overall performance. For example, the renege rate of AA is about 5.4% less than that of MMSC and about 9.8% less than that of MSC.
Yun Zhang
4.
5.
High-resolution display environments built on networked, multi-tile displays have emerged as an enabling tool for collaborative, distributed visualization work. They provide a means to present, compare, and correlate data in a broad range of formats and coming from a multitude of different sources. Visualization of these distributed data resources may be achieved from a variety of clustered processing and display resources for local rendering and may be streamed on demand and in real time from remotely rendered content. The latter is particularly important when multiple users want to concurrently share content from their personal devices to further augment the shared workspace. This paper presents a high-quality video streaming technique allowing remotely generated content to be acquired and streamed to multi-tile display environments from a range of sources and over a heterogeneous wide area network. The presented technique uses video compression to reduce the entropy, and therefore the required bandwidth, of the video stream. Compressed video delivery poses a series of challenges for display on tiled video walls, which are addressed in this paper. These include delivery to the display wall from a variety of devices and localities with synchronized playback, seamless mobility as users move and resize the video streams across the tiled display wall, and the low-latency video encoding, decoding, and display necessary for interactive applications. The presented technique is able to deliver 1080p-resolution, multimedia-rich content with bandwidth requirements below 10 Mbps and latency low enough for constant interactivity. A case study is provided, comparing uncompressed and compressed streaming techniques, with performance evaluations for bandwidth use, total latency, maximum frame rate, and visual quality.
6.
Effective and efficient dimensionality reduction for large-scale and streaming data preprocessing
Jun Yan, Benyu Zhang, Ning Liu, Shuicheng Yan, Qiansheng Cheng, Fan W., Qiang Yang, Xi W., Zheng Chen. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(3):320-333
Dimensionality reduction is an essential data preprocessing technique for large-scale and streaming data classification tasks. It can be used to improve both the efficiency and the effectiveness of classifiers. Traditional dimensionality reduction approaches fall into two categories: feature extraction and feature selection. Techniques in the feature extraction category are typically more effective than those in the feature selection category. However, they may break down when processing large-scale data sets or data streams due to their high computational complexity. The feature selection approaches, for their part, mostly rely on greedy strategies and hence are not guaranteed to be optimal with respect to the optimization criteria. In this paper, we give an overview of the popularly used feature extraction and selection algorithms under a unified framework. Moreover, we propose two novel dimensionality reduction algorithms based on the orthogonal centroid algorithm (OC). The first is an incremental OC (IOC) algorithm for feature extraction. The second is an orthogonal centroid feature selection (OCFS) method which can provide optimal solutions according to the OC criterion. Both are designed under the same optimization criterion. Experiments on the Reuters Corpus Volume-1 data set and some public large-scale text data sets indicate that the two algorithms are favorable in terms of their effectiveness and efficiency when compared with other state-of-the-art algorithms.
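To make the centroid-based idea concrete, here is a hedged sketch in the spirit of centroid feature selection: score each feature by how far the class centroids spread from the global centroid along that feature, then keep the top-scoring features. The weighting and normalization here are illustrative assumptions, not the paper's exact OC criterion.

```python
# Illustrative centroid-based feature scoring (not the paper's exact OCFS).
import numpy as np

def centroid_feature_scores(X, y):
    """X: (n_samples, n_features) array; y: class labels. Returns per-feature scores."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    global_centroid = X.mean(axis=0)
    n = len(y)
    scores = np.zeros(X.shape[1])
    for cls in np.unique(y):
        mask = y == cls
        class_centroid = X[mask].mean(axis=0)
        # class-size-weighted squared deviation from the global centroid
        scores += (mask.sum() / n) * (class_centroid - global_centroid) ** 2
    return scores

def select_features(X, y, k):
    """Indices of the k features whose class centroids are most spread out."""
    return np.argsort(centroid_feature_scores(X, y))[::-1][:k]
```

Unlike greedy wrapper methods, each feature is scored independently in a single pass over the class centroids, which is what makes this family of methods attractive for large-scale and streaming data.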
7.
As manufacturing environments become more distributed and grow in size, the related virtual environments are becoming larger and more closely networked. This trend has led to a new paradigm: the large-scale virtual manufacturing environment (LSVME). It supports networked and distributed virtual manufacturing to meet manufacturing system requirements. Since it contains a large number of virtual components, an effective data structure and a collaborative construction methodology are needed. A metaearth architecture is proposed as the data structure for representing an LSVME. This architecture consists of a virtual space layer, a mapping layer, a library layer, and an ontology layer; it describes interactions among virtual components and can analyze the characteristics of the virtual environment. In addition, it increases the reusability of virtual components and supports self-reconfiguration for manufacturing simulation. A heuristic construction method based on graph theory is proposed using this architecture. It prevents redundant design of virtual components and contributes to an effective construction scheduling technique for collaborative designers.
8.
Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g. for comparing different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious mesh layouts in terms of performance, applicability, and accuracy.
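The intuition behind a layout metric that correlates with cache misses can be sketched very simply: given a vertex ordering and an access sequence, count how often consecutive accesses cross a cache-block boundary. The block size is an assumed parameter and the paper's actual metrics are considerably more refined; this is only a toy illustration of the idea.

```python
# Toy layout-quality metric: count block transitions in an access sequence.
def block_transition_count(layout, accesses, block_size):
    """layout: ordering of vertices in memory; accesses: sequence of vertex visits."""
    pos = {v: i for i, v in enumerate(layout)}   # vertex -> layout index
    transitions = 0
    prev_block = None
    for v in accesses:
        block = pos[v] // block_size             # which cache block holds v
        if block != prev_block:                  # crossing a block boundary
            transitions += 1                     # counts as a potential fetch
        prev_block = block
    return transitions
```

A layout that places vertices accessed together into the same block scores fewer transitions, which is exactly the property cache-aware and cache-oblivious layouts try to optimize.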
9.
In large-scale peer-to-peer (P2P) video-on-demand (VoD) streaming applications, a fundamental challenge is to quickly locate new supplying peers whenever a VCR command is issued, in order to achieve smooth viewing experiences. For many existing commercial systems, which use tracker servers for neighbor discovery, the increasing scale of P2P VoD systems has overloaded the dedicated servers to the point where they cannot accurately identify the suppliers with the desired content and bandwidth. To avoid overloading the servers and achieve instant neighbor discovery over the self-organizing P2P overlay, we design InstantLeap, a novel method of organizing the peers watching a video. The method features a lightweight indexing architecture that supports efficient streaming and fast neighbor discovery at the same time. InstantLeap separates the neighbors at each peer into a streaming neighbor list and a shortcut neighbor list, for streaming and neighbor discovery respectively; both are maintained loosely but effectively through random neighbor-list exchanges. Our analysis shows that InstantLeap achieves O(1) neighbor discovery upon any playback “leap” across the media stream in streaming overlays of any size, with low messaging costs for overlay maintenance upon peer join, departure, and VCR operations. We also verify our design with large-scale simulation studies of dynamic P2P VoD systems based on real-world settings.
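The two-list idea can be sketched as follows: each peer keeps streaming neighbors near its own playback position plus a gossiped shortcut table mapping distant segments to known peers, so a VCR leap is a single table lookup. This is an illustrative sketch of the concept, not InstantLeap's actual protocol or data structures.

```python
# Illustrative sketch of per-peer streaming/shortcut neighbor lists.
import random

class Peer:
    def __init__(self, pid, segment):
        self.pid = pid
        self.segment = segment          # current playback position (segment index)
        self.streaming = set()          # peers near our playback position
        self.shortcuts = {}             # segment -> a known peer at that segment

    def exchange(self, other):
        """Random gossip: pull a sample of the other peer's shortcut entries."""
        sample = dict(random.sample(sorted(other.shortcuts.items()),
                                    min(3, len(other.shortcuts))))
        self.shortcuts.update(sample)
        other.shortcuts[self.segment] = self.pid   # advertise our own position

    def leap(self, target_segment):
        """On a VCR leap, jump straight to a known peer at the target, if any."""
        return self.shortcuts.get(target_segment)
```

Because shortcut entries spread through repeated random exchanges rather than tracker queries, discovery cost after a leap stays constant regardless of overlay size, which is the property the analysis above establishes.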
10.
Direct-mapped caches are defined, and it is shown that trends toward larger cache sizes and faster hit times favor their use. The arguments are restricted initially to single-level caches in uniprocessors. They are then extended to two-level cache hierarchies. How and when these arguments for caches in uniprocessors apply to caches in multiprocessors is also discussed.
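The defining property of a direct-mapped cache is that each memory block maps to exactly one line, so a lookup needs only a single tag comparison, which is what makes fast hit times possible. The sketch below illustrates the address-to-line mapping; the block size and line count are assumed example values.

```python
# Illustrative direct-mapped cache lookup: one candidate line per block.
BLOCK_SIZE = 64      # bytes per cache line (assumed)
NUM_LINES = 1024     # lines in the cache (assumed)

def lookup(tags, address):
    """Return True on hit; on miss, install the block, replacing any occupant."""
    block = address // BLOCK_SIZE
    index = block % NUM_LINES        # the single line this block may occupy
    tag = block // NUM_LINES         # disambiguates blocks that share a line
    if tags.get(index) == tag:
        return True
    tags[index] = tag                # no victim choice: the mapping is fixed
    return False
```

The flip side of this simplicity is conflict misses: two blocks whose addresses differ by a multiple of `BLOCK_SIZE * NUM_LINES` evict each other even when the rest of the cache is empty, which is why the trade-off against set-associative designs depends on cache size and hit-time targets.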
11.
Hardware and software cache optimizations are active fields of research that have yielded powerful but occasionally complex designs and algorithms. The purpose of this paper is to investigate the performance obtained by combining simple software and hardware optimizations. Because current caches provide little flexibility for exploiting temporal and spatial locality, two hardware modifications are proposed to support these two kinds of locality. Spatial locality is exploited by using large virtual cache lines, which do not exhibit the performance flaws of large physical cache lines. Temporal locality is exploited by minimizing cache pollution with a bypass mechanism that still allows spatial locality to be exploited. Subsequently, it is shown that simple software information on the spatial/temporal locality of array references, as provided by current data-locality optimization algorithms, can be used to increase cache performance significantly. The performance and design tradeoffs of the proposed mechanisms are discussed. Software-assisted caches are also shown to provide very convenient support for further enhancement of data-locality optimizations.
12.
13.
14.
15.
Big data computation involves different modes such as stream computing, in-memory computing, batch computing, and graph computing, each with distinct memory-access, communication, and resource-utilization characteristics. GPU heterogeneous clusters are widely used in big data analytics, yet computational models for GPU heterogeneous clusters in big data analysis are lacking. When multi-core CPUs and GPUs compute cooperatively, the density of computing resources increases, and the communication complexity both within and between nodes rises. To study the GPU and multi-core CPU cooperative computation problem theoretically, a multi-stage cooperative computation model (p-DCOT) is established for multiple computation modes. p-DCOT takes the BSP (bulk synchronous parallel) model as its core, divides the cooperative computation process into three layers (data, computation, and communication), and follows the DOT model in using matrices to formally describe computation and communication behaviors. By extending the p-DOT model to describe intra-node and inter-node cooperative computation, the load-balancing parameters are refined and the time cost function is proven. Finally, typical computation jobs are used to verify the effectiveness of the model and its parameter analysis. This cooperative computation model can serve as a tool for revealing cooperative computation behaviors in big data analytics.
16.
If ISPs pool unused cache capacity, they are able to handle more traffic at lower prices. This article introduces the capacity provision networks (CPN) concept and explains why this model of cooperation is a good match for the emerging ISP connectivity paradigm. We also discuss the architectural and economic implications of CPNs and examine some potential opportunities and challenges ahead.
17.
Neural Computing and Applications
18.
With the development of big data and cloud computing, real-time collaborative editing systems face new challenges. How to support string-wise operations for smart and large-scale collaboration is one of the key issues in the next generation of collaborative editing systems; it is both a core topic of the collaborative computing area and fundamental to many collaborative systems in science and engineering. However, string-wise operations have troubled existing collaborative editing algorithms, including Operational Transformation (OT) and Commutative Replicated Data Types (CRDT), for many years. This paper proposes a novel and efficient CRDT algorithm that integrates string-wise operations for smart and massive-scale collaboration. First, the proposed algorithm ensures convergence and maintains the operation intentions of collaborative users under an integrated string-wise framework. Second, formal proofs are provided for both the correctness of the proposed algorithm and the intention preservation of string-wise operations. Third, the time complexity of the proposed algorithm is shown in theory to be lower than that of the state-of-the-art OT and CRDT algorithms. Fourth, experimental evaluations show that the proposed algorithm outperforms the state-of-the-art OT and CRDT algorithms.
19.
Computer Networks, 2002, 38(6):795-808
Web content caches are often placed between end users and origin servers as a means to reduce server load, network usage, and, ultimately, user-perceived latency. Cached objects typically have associated expiration times, after which they are considered stale and must be validated with a remote server (the origin or another cache) before they can be sent to a client. A considerable fraction of cache “hits” involve stale copies that turn out to be current. These validations of current objects have small message sizes but nonetheless often induce latency comparable to full-fledged cache misses. Thus, the functionality of caches as a latency-reducing mechanism depends not only on content availability but also on its freshness. We propose policies for caches to proactively validate selected objects as they become stale, allowing more client requests to be processed locally. Our policies operate within the existing protocols and exploit natural properties of request patterns such as frequency and recency. We evaluated and compared different policies using trace-based simulations.
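A policy that exploits frequency and recency can be sketched very simply: when an object expires, revalidate it eagerly only if it has been requested often enough within a recent window. The threshold and window length below are assumed parameters for illustration, not the paper's policies.

```python
# Illustrative proactive-validation policy based on request frequency/recency.
import time

class RefreshPolicy:
    def __init__(self, min_hits=2, window=300.0):
        self.min_hits = min_hits    # requests needed to justify prevalidation
        self.window = window        # recency window in seconds
        self.history = {}           # url -> timestamps of recent requests

    def record_request(self, url, now=None):
        now = time.time() if now is None else now
        hits = self.history.setdefault(url, [])
        hits.append(now)
        # keep only requests that fall inside the recency window
        self.history[url] = [t for t in hits if now - t <= self.window]

    def should_prevalidate(self, url, now=None):
        """Called when url's cached copy becomes stale."""
        now = time.time() if now is None else now
        hits = [t for t in self.history.get(url, []) if now - t <= self.window]
        return len(hits) >= self.min_hits
```

Objects that pass the check are validated in the background as they expire, so the next client request can be answered locally instead of paying a round trip for a validation that would very likely return "not modified".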
20.
PSoup: a system for streaming queries over streaming data
Recent work on querying data streams has focused on systems where newly arriving data is processed and continuously streamed to the user in real time. In many emerging applications, however, ad hoc queries and/or intermittent connectivity also require the processing of data that arrives prior to query submission or during a period of disconnection. For such applications, we have developed PSoup, a system that combines the processing of ad hoc and continuous queries by treating data and queries symmetrically, allowing new queries to be applied to old data and new data to be applied to old queries. PSoup also supports intermittent connectivity by separating the computation of query results from the delivery of those results. PSoup builds on adaptive query-processing techniques developed in the Telegraph project at UC Berkeley. In this paper, we describe PSoup and present experiments that demonstrate the effectiveness of our approach.
Received: 17 September 2002 / Revised: 18 February 2003 / Published online: 10 July 2003. Edited by R. Ramakrishnan. This work has been supported in part by the National Science Foundation under ITR grants IIS0086057 and SI0122599, and by IBM, Microsoft, Siemens, and the UC MICRO program.