20 similar documents found.
1.
Data caching is a popular technique that improves data accessibility in wired or wireless networks. However, in mobile ad hoc networks, improvements in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). In this paper, an improved cooperative caching scheme called group-based cooperative caching (GCC) is proposed to generalize and enhance the performance of most group-based caching schemes. GCC allows MHs and their neighbors to form a group and to exchange a bitmap data directory periodically; the directory is used by the proposed algorithms for data discovery, cache placement, and cache replacement. The goal is to reduce the access latency of data requests and to use the available caching space among MH groups efficiently. Two optimization techniques are also developed for GCC to reduce computation and communication overheads. The first technique compresses the directories using an aggregate bitmap. The second employs multi-point relays to develop a forwarding node selection scheme that reduces the number of broadcast messages inside the group. Our simulation results show that the optimized GCC yields better results than existing cooperative caching schemes in terms of cache hit ratio, access latency, and average hop count.
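The bitmap-directory exchange at the core of this idea can be illustrated with a short sketch. The sketch below is only an illustration, not the authors' implementation; the class and method names (GroupMember, directory, lookup_in_group) are hypothetical, and it assumes each data item is identified by an integer index into a fixed-size bitmap.

```python
# Simplified sketch of GCC-style bitmap directories (hypothetical names, not the paper's code).

class GroupMember:
    def __init__(self, node_id):
        self.node_id = node_id
        self.cache = {}                      # item_id -> data cached locally
        self.neighbor_dirs = {}              # neighbor_id -> bitmap (int) of their cached items

    def directory(self):
        """Encode the local cache contents as a bitmap (bit i set => item i cached)."""
        bitmap = 0
        for item_id in self.cache:
            bitmap |= 1 << item_id
        return bitmap

    def receive_directory(self, neighbor_id, bitmap):
        """Store a neighbor's periodically broadcast directory."""
        self.neighbor_dirs[neighbor_id] = bitmap

    def lookup_in_group(self, item_id):
        """Data discovery: local cache first, then group members' directories."""
        if item_id in self.cache:
            return ("local", self.node_id)
        for neighbor_id, bitmap in self.neighbor_dirs.items():
            if bitmap & (1 << item_id):
                return ("group", neighbor_id)
        return ("remote", None)              # fall back to the data server


# Tiny usage example
a, b = GroupMember("A"), GroupMember("B")
b.cache[5] = "item-5 payload"
a.receive_directory("B", b.directory())
print(a.lookup_in_group(5))   # ('group', 'B')
print(a.lookup_in_group(7))   # ('remote', None)
```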
2.
We study approximate algorithms for placing a set of documents into M distributed Web servers in this paper. We define the load of a server to be the summation of loads induced by all documents stored. The size of a server is defined in a similar manner. We propose five algorithms. Algorithm 1 balances the loads and sizes of the servers by limiting the loads to k_l and the sizes to k_s times their optimal values, where 1/(k_l − 1) + 1/(k_s − 1) ≤ 1. This result improves the bounds on load and size of servers in (L.C. Chen et al., 2001). Algorithm 2 further reduces the load bound on each server by using partial document replication, and Algorithm 3 by sorting. Algorithm 4 employs both partial replication and sorting. Last, without using sorting or replication, we give Algorithm 5 for dynamic placement at the cost of a factor Θ(log M) in the time complexity.
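As a rough illustration of the load/size tradeoff (not the paper's Algorithms 1–5), the following greedy sketch places each document on the server that currently minimizes a weighted combination of normalized load and size. The weighting parameter alpha and the heavy-documents-first ordering are assumptions made only for this illustration.

```python
# Greedy illustration of balancing load and size across M servers
# (a sketch only; the paper's algorithms and their bounds are more refined).

def place_documents(docs, M, alpha=0.5):
    """docs: list of (load, size) pairs; returns (placement, loads, sizes)."""
    total_load = sum(l for l, _ in docs) or 1.0
    total_size = sum(s for _, s in docs) or 1.0
    loads = [0.0] * M
    sizes = [0.0] * M
    placement = [[] for _ in range(M)]
    # Place heavy documents first so they get spread out.
    for idx in sorted(range(len(docs)), key=lambda i: -(docs[i][0] + docs[i][1])):
        load, size = docs[idx]
        def cost(server):
            return (alpha * (loads[server] + load) / total_load
                    + (1 - alpha) * (sizes[server] + size) / total_size)
        best = min(range(M), key=cost)
        loads[best] += load
        sizes[best] += size
        placement[best].append(idx)
    return placement, loads, sizes

placement, loads, sizes = place_documents([(10, 3), (7, 8), (4, 4), (9, 1), (2, 6)], M=2)
print(placement, loads, sizes)
```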
3.
4.
To cope with the sharp increase in data throughput in fifth-generation wireless networks, mobile edge caching has become an effective solution: by storing network content on edge devices, it relieves the burden on backhaul links and the core network and shortens service latency. To date, most edge caching research has focused on optimizing cooperative content caching while ignoring the efficiency of content delivery. This work studies cooperative edge caching of content together with the allocation of wireless bandwidth resources in ultra-dense networks. The total similarity between base stations is computed from cosine similarity and Gaussian similarity, and the small base stations in the network are grouped according to this total similarity. The caching and wireless bandwidth allocation problem is formulated as a long-term mixed-integer nonlinear programming problem (LT-MINLP) and then transformed into a constrained Markov decision process. Using the deep deterministic policy gradient (DDPG) model, a deep-reinforcement-learning-based algorithm for cooperative content caching and bandwidth allocation, CBDDPG, is proposed. The proposed base-station grouping scheme increases the opportunities for file sharing among base stations, and the caching scheme of CBDDPG exploits the dual-network mechanism of DDPG to better capture users' request patterns and optimize cache placement. CBDDPG is compared experimentally with three baseline algorithms (RBDDPG, LCCS, and CB-TS); the results show that the proposed scheme effectively improves the content cache hit ratio, reduces content delivery latency, and improves the user experience.
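The grouping step can be sketched as follows; this is only an illustration of combining cosine and Gaussian similarities into a total similarity and thresholding it into groups. The feature vectors, the weight w, the sigma parameter, and the threshold are assumptions, not values from the paper.

```python
# Sketch: total similarity between small base stations from cosine + Gaussian similarity
# (illustrative only; parameters and the grouping rule are assumptions).
import math

def cosine_similarity(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0

def gaussian_similarity(px, py, sigma=1.0):
    """Similarity derived from the physical distance between base stations px and py."""
    d2 = sum((a - b) ** 2 for a, b in zip(px, py))
    return math.exp(-d2 / (2 * sigma ** 2))

def total_similarity(req_x, req_y, pos_x, pos_y, w=0.5, sigma=1.0):
    """Weighted combination of request-pattern similarity and location similarity."""
    return w * cosine_similarity(req_x, req_y) + (1 - w) * gaussian_similarity(pos_x, pos_y, sigma)

def group_stations(requests, positions, threshold=0.7):
    """Greedy grouping: join the first group whose seed station is similar enough."""
    groups = []
    for i in range(len(requests)):
        for g in groups:
            seed = g[0]
            if total_similarity(requests[i], requests[seed], positions[i], positions[seed]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

requests = [[5, 1, 0], [4, 2, 0], [0, 1, 6]]      # per-station content request counts
positions = [(0.0, 0.0), (0.3, 0.1), (3.0, 2.5)]  # station coordinates
print(group_stations(requests, positions))        # [[0, 1], [2]]
```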
5.
An improved vertex caching scheme for 3D mesh rendering (total citations: 1; self: 0; others: 1)
Modern graphics cards are equipped with a vertex cache to reduce the amount of data that needs to be transmitted to the graphics pipeline during rendering. To make effective use of the cache and facilitate rendering, it is key to represent a mesh in a manner that maximizes the cache hit rate. In this paper, we propose a simple yet effective algorithm for generating a sequence for efficient rendering of 3D polygonal meshes based on greedy optimization. The algorithm outperforms the current state-of-the-art algorithms in terms of rendering efficiency of the resultant sequence. We also adapt it for the rendering of progressive meshes. For any simplified version of the original mesh, the rendering sequence is generated by adaptively updating the reordered sequence at full resolution. The resultant rendering sequence is cheap to compute and has reasonably good rendering performance, which is desirable in many complex rendering environments involving continuous rendering of meshes at various levels of detail. Experimental results on a collection of 3D meshes are provided.
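A greedy, cache-aware reordering can be sketched as below. This is an illustration of the general idea (pick next the triangle with the most vertices already in a FIFO vertex cache), not the paper's algorithm; the cache size and scoring rule are assumptions.

```python
# Sketch of greedy, vertex-cache-aware triangle reordering
# (illustrative only; cache size, FIFO model, and scoring are assumptions).
from collections import deque

def reorder_triangles(triangles, cache_size=16):
    """Greedily pick the next triangle with the most vertices already in a FIFO vertex cache."""
    remaining = set(range(len(triangles)))
    cache = deque(maxlen=cache_size)
    order = []
    while remaining:
        def hits(t):
            return sum(1 for v in triangles[t] if v in cache)
        best = max(remaining, key=hits)
        remaining.remove(best)
        order.append(best)
        for v in triangles[best]:
            if v not in cache:
                cache.append(v)          # FIFO eviction handled by maxlen
    return order

def cache_misses(triangles, order, cache_size=16):
    """Count vertex-cache misses for a given rendering order."""
    cache, misses = deque(maxlen=cache_size), 0
    for t in order:
        for v in triangles[t]:
            if v not in cache:
                misses += 1
                cache.append(v)
    return misses

tris = [(0, 1, 2), (2, 1, 3), (4, 5, 6), (3, 2, 4)]
order = reorder_triangles(tris, cache_size=4)
print(order, cache_misses(tris, order, cache_size=4))
```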
6.
Shi-Jinn Horng, Didi Rosiyadi, Pingzhi Fan, Xian Wang, Muhammad Khurram Khan. Multimedia Tools and Applications, 2014, 72(3):3085-3103
This paper proposes an adaptive watermarking scheme for e-government document images. The adaptive scheme combines the discrete cosine transform (DCT) and the singular value decomposition (SVD) using luminance masking. As a core masking model in the human visual system (HVS), luminance masking is implemented to improve noise sensitivity. A genetic algorithm (GA) is subsequently employed to optimize the scaling factor of the masking. The proposed scheme begins by calculating the mask of the host image using luminance masking and then transforms the mask of each area into the frequency domain. The watermark image is embedded by modifying the singular values of the DCT-transformed host image with the singular values of the mask coefficients of the host image and the control parameter of the DCT-transformed watermark image, which is tuned using the GA. The singular values and the control parameter are used not only to improve the sensitivity of the watermark performance but also to avoid the false-positive problem. The watermark image is afterwards extracted from the distorted images. The experimental results show that the proposed adaptive scheme is resistant to several types of attacks in comparison with previous schemes; its adaptive performance comes from the adaptive parameter of the luminance masking, which improves the robustness of the image against attacks.
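A bare-bones DCT + SVD embedding step is sketched below. It omits the paper's luminance masking and GA optimization entirely, and the fixed scaling factor alpha is an assumption (the paper tunes this factor with a GA); it only shows the mechanics of perturbing singular values of DCT coefficients.

```python
# Bare-bones DCT + SVD embedding sketch (omits the paper's luminance masking and GA steps;
# the scaling factor alpha is a fixed assumption used only for illustration).
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, watermark, alpha=0.05):
    """Embed a watermark by perturbing the singular values of the host's DCT coefficients."""
    C = dctn(host, norm="ortho")                 # 2-D DCT of the host image
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    S_marked = S + alpha * Sw                    # modify the singular values
    C_marked = (U * S_marked) @ Vt               # equivalent to U @ diag(S_marked) @ Vt
    return idctn(C_marked, norm="ortho")

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, (64, 64))
mark = rng.integers(0, 2, (64, 64)).astype(float)
marked = embed(host, mark)
print(np.max(np.abs(marked - host)))             # small perturbation of the host image
```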
7.
Many geographically distributed proxies are increasingly used for collaborative Web caching to improve performance. In hashing-based collaborative Web caching, the response times can be negatively impacted for those URL requests hashed into geographically distant or overloaded proxies. In this paper, we present and evaluate a latency-sensitive hashing scheme for collaborative Web caching. It takes into account latency delays due to both geographical distances and dynamic load conditions. Each URL request is first hashed into an anchor hash bucket, with each bucket mapping to one of the proxies. Second, a number of nearby hash buckets are examined to select the proxy with the smallest latency delay to the browser. Trace-driven simulations are conducted to evaluate the performance of this new latency-sensitive hashing. The results show that (1) with the presence of load imbalance due to skew in request origination or hot-spot references, latency-sensitive hashing effectively balances the load by hashing into geographically distributed proxies for collaborative Web caching, and (2) when the overall system is lightly loaded, latency-sensitive hashing effectively reduces latency delays by directing requests to geographically closer proxies.
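The two-step selection described above can be sketched as follows. The hash function, window size, and latency values are assumptions made for illustration, not details from the paper.

```python
# Sketch of latency-sensitive hashing: hash a URL to an anchor bucket, then pick the proxy
# with the smallest estimated latency among a small window of nearby buckets.
# (Illustrative only: the hash function, window size, and latency values are assumptions.)
import hashlib

def anchor_bucket(url, num_buckets):
    digest = hashlib.md5(url.encode()).hexdigest()
    return int(digest, 16) % num_buckets

def select_proxy(url, bucket_to_proxy, latency, window=3):
    """latency maps proxy -> estimated delay to the requesting browser (distance + load)."""
    n = len(bucket_to_proxy)
    anchor = anchor_bucket(url, n)
    candidates = {bucket_to_proxy[(anchor + i) % n] for i in range(window)}
    return min(candidates, key=lambda proxy: latency[proxy])

bucket_to_proxy = ["p-east", "p-west", "p-east", "p-eu", "p-west", "p-eu"]
latency = {"p-east": 20.0, "p-west": 95.0, "p-eu": 140.0}   # milliseconds
print(select_proxy("http://example.com/index.html", bucket_to_proxy, latency))
```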
8.
Computer Networks and ISDN Systems, 1997, 29(8-13):1007-1017
Caching plays a vital role in the performance of any large-scale distributed system and, as the variety and number of Web applications grow, is becoming an increasingly important research topic within the Web community. Existing caching mechanisms are largely transparent to their users and cater for resources which are primarily read-only, offering little support for customisable or complex caching strategies. In this paper we examine the deficiencies in these mechanisms with regard to applications that require shared access to data, where clients may require a variety of consistency guarantees. We present “open” caching within an object-oriented framework, an approach to solving these problems which, instead of offering caching transparency, makes the caching mechanism highly visible, allowing great flexibility in caching choices. Our implementation is built upon the W3Objects infrastructure and allows clients to make caching decisions for individual resources with minimal impact upon other resources which do not support our mechanisms.
9.
Web-log mining for predictive Web caching (total citations: 3; self: 0; others: 3)
Caching is a well-known strategy for improving the performance of Web-based systems. The heart of a caching system is its page replacement policy, which selects the pages to be replaced in a cache when a request arrives. In this paper, we present a Web-log mining method for caching Web objects and use this algorithm to enhance the performance of Web caching systems. In our approach, we develop an n-gram-based prediction algorithm that can predict future Web requests. The prediction model is then used to extend the well-known GDSF caching policy. We empirically show that the system performance is improved using the predictive-caching approach.
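A minimal sketch of how an n-gram predictor could bias a GDSF-style priority is shown below. The GDSF priority follows the usual form L + frequency × cost / size; the way the predicted probability enters as a boost is an assumption for illustration, not necessarily the weighting used in the paper, and here a bigram model stands in for the general n-gram predictor.

```python
# Sketch: bigram prediction of the next request used to boost a GDSF-style cache priority.
# (The boost term and the bigram simplification are illustrative assumptions.)
from collections import defaultdict

class BigramPredictor:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sessions):
        for session in sessions:
            for prev, nxt in zip(session, session[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, prev):
        """Return {page: probability of being requested next}."""
        total = sum(self.counts[prev].values())
        return {p: c / total for p, c in self.counts[prev].items()} if total else {}

def gdsf_priority(clock, freq, cost, size, predicted_prob=0.0, boost=2.0):
    # Classic GDSF: clock + freq * cost / size; the prediction adds an assumed extra weight.
    return clock + freq * cost / size * (1.0 + boost * predicted_prob)

predictor = BigramPredictor()
predictor.train([["a.html", "b.html", "c.html"], ["a.html", "b.html", "d.html"]])
probs = predictor.predict("a.html")           # {'b.html': 1.0}
print(gdsf_priority(clock=0.0, freq=3, cost=10.0, size=5.0,
                    predicted_prob=probs.get("b.html", 0.0)))
```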
10.
Minh Tran, Wallapak Tavanapong, Wanida Putthividhya. Multimedia Tools and Applications, 2007, 34(1):25-56
Video streaming is vital for many important applications such as distance learning, digital video libraries, and movie-on-demand. Since video streaming requires significant server and networking resources, caching has been used to reduce the demand on these resources. In this paper, we propose a novel collaboration scheme for video caching on overlay networks, called Overlay Caching Scheme (OCS), to further minimize service delays and loads placed on an overlay network for video streaming applications. OCS is neither a centralized nor a hierarchical collaborative scheme. Despite its design simplicity, OCS effectively uses the aggregate storage space and capability of distributed overlay nodes to cache popular videos and serve nearby clients. Moreover, OCS is light-weight and adaptive to clients’ locations and request patterns. We also investigate other video caching techniques for overlay networks, including both collaborative and non-collaborative ones. Compared with these techniques on topologies inspired by actual networks, OCS offers extremely low average service delays and approximately half the server load. OCS also offers a smaller network load in most cases in our study.
11.
Hammouda, K.M., Kamel, M.S. IEEE Transactions on Knowledge and Data Engineering, 2004, 16(10):1279-1296
Document clustering techniques mostly rely on single-term analysis of the document data set, such as the vector space model. To achieve more accurate document clustering, more informative features, including phrases and their weights, are particularly important. Document clustering is particularly useful in many applications such as automatic categorization of documents, grouping search engine results, building a taxonomy of documents, and others. This article presents two key parts of successful document clustering. The first part is a novel phrase-based document index model, the document index graph, which allows for incremental construction of a phrase-based index of the document set with an emphasis on efficiency, rather than relying on single-term indexes only. It provides efficient phrase matching that is used to judge the similarity between documents. The model is flexible in that it can revert to a compact representation of the vector space model if we choose not to index phrases. The second part is an incremental document clustering algorithm based on maximizing the tightness of clusters by carefully watching the pair-wise document similarity distribution inside clusters. The combination of these two components creates an underlying model for robust and accurate document similarity calculation that leads to much improved results in Web document clustering over traditional methods.
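A much-simplified flavour of phrase-aware indexing is sketched below: it records consecutive word pairs per document and scores similarity by shared pairs. The real document index graph is considerably richer (it tracks full phrase paths and their frequencies); the names and the Jaccard scoring here are illustrative assumptions.

```python
# Much-simplified sketch of a phrase-aware index: store consecutive word pairs per document
# and score similarity by shared pairs. (Illustrative only; not the document index graph itself.)
from collections import defaultdict

class PhraseIndex:
    def __init__(self):
        self.edge_docs = defaultdict(set)    # (word_i, word_j) -> set of doc ids
        self.doc_edges = defaultdict(set)    # doc id -> set of word-pair edges

    def add_document(self, doc_id, text):
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.edge_docs[(a, b)].add(doc_id)
            self.doc_edges[doc_id].add((a, b))

    def phrase_similarity(self, d1, d2):
        """Jaccard similarity over shared consecutive word pairs."""
        e1, e2 = self.doc_edges[d1], self.doc_edges[d2]
        return len(e1 & e2) / len(e1 | e2) if e1 | e2 else 0.0

idx = PhraseIndex()
idx.add_document("d1", "web document clustering with phrase based indexing")
idx.add_document("d2", "phrase based indexing for web document clustering")
idx.add_document("d3", "mobile ad hoc network caching")
print(idx.phrase_similarity("d1", "d2"), idx.phrase_similarity("d1", "d3"))
```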
12.
13.
Computer Networks and ISDN Systems, 1994, 25(2):165-173
We describe the design and performance of a caching relay for the World Wide Web. We model the distribution of requests for pages from the web and see how this distribution affects the performance of a cache. We use the data gathered from the relay to make some general characterizations about the web.
14.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs. Based on data mining and information theory respectively, it proposes prediction algorithms that use sequence mining and deferred updating, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement policy. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation results show that integrating these algorithms effectively improves the cache hit ratio and the latency saving ratio.
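A benefit-driven replacement rule can be sketched as evicting the object whose estimated benefit is lowest. The benefit formula used below (re-access probability × fetch delay / size) is an assumption for illustration; the abstract does not give the actual formula.

```python
# Sketch of a benefit-driven cache replacement rule: evict the object with the lowest
# estimated benefit. The benefit formula (prob * fetch_delay / size) is an assumption.

class BenefitCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.items = {}                      # key -> (size, fetch_delay, reuse_prob)

    @staticmethod
    def benefit(size, fetch_delay, reuse_prob):
        return reuse_prob * fetch_delay / size

    def insert(self, key, size, fetch_delay, reuse_prob):
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.benefit(*self.items[k]))
            self.used -= self.items[victim][0]
            del self.items[victim]
        if size <= self.capacity:
            self.items[key] = (size, fetch_delay, reuse_prob)
            self.used += size

cache = BenefitCache(capacity=10)
cache.insert("index.html", size=4, fetch_delay=120.0, reuse_prob=0.9)
cache.insert("logo.png",   size=5, fetch_delay=40.0,  reuse_prob=0.2)
cache.insert("news.html",  size=4, fetch_delay=150.0, reuse_prob=0.7)   # evicts logo.png
print(list(cache.items))
```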
15.
Xiaopeng Fan, Jiannong Cao, Haixia Mao, Yunhuai Liu. Journal of Parallel and Distributed Computing, 2013
Cooperative caching is an efficient way to improve the performance of data access in mobile wireless networks: cache nodes select different data items for their limited storage in order to reduce the total access delay. With growing demand for sharing videos and other data, especially for mobile applications in an Internet-based Mobile Ad Hoc Network, considering the relations among data items in cooperative caching becomes more important than before. However, most existing works do not consider these inherent relations among data items, such as logical, temporal, or spatial relations. In this paper, we present a novel solution, Gossip-based Cooperative Caching (GosCC), to address the cache placement problem while taking the sequential relation among data items into account. Each mobile node records in its progress report the IDs of the data items cached locally and the ID of the data item currently in use. Each mobile node also makes use of these progress reports to determine whether a data item should be cached locally. These progress reports are propagated within the network in a gossip-based way. To improve the user experience, GosCC aims to provide users with an uninterrupted data access service. Simulation results show that GosCC achieves better performance than Benefit-based Data Caching and HybridCache in terms of average interruption intervals and average interruption times, while sacrificing message cost to a certain degree.
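The progress-report gossip can be sketched as below. The class names, the gossip fanout, and in particular the caching rule in should_cache are illustrative assumptions; the paper's placement decision is more involved.

```python
# Sketch of GosCC-style progress reports exchanged by gossip (names and the caching rule
# are illustrative assumptions, not the paper's algorithm).
import random

class Node:
    def __init__(self, node_id, sequence):
        self.node_id = node_id
        self.sequence = sequence             # ordered data items (e.g. video segments)
        self.position = 0                    # index of the item currently in use
        self.cached = set()
        self.reports = {}                    # node_id -> (cached ids, current item)

    def progress_report(self):
        return (self.node_id, frozenset(self.cached), self.sequence[self.position])

    def gossip(self, peers, fanout=2):
        """Push the local report to a few random peers."""
        for peer in random.sample(peers, min(fanout, len(peers))):
            sender, cached, current = self.progress_report()
            peer.reports[sender] = (cached, current)

    def should_cache(self, item):
        """Assumed rule: cache an item no peer already holds that lies ahead of a peer's position."""
        held_elsewhere = any(item in cached for cached, _ in self.reports.values())
        needed_soon = any(item in self.sequence[self.sequence.index(current) + 1:]
                          for _, current in self.reports.values()
                          if current in self.sequence)
        return needed_soon and not held_elsewhere

seq = ["s1", "s2", "s3", "s4"]
a, b = Node("A", seq), Node("B", seq)
a.cached, a.position = {"s1", "s2"}, 1
a.gossip([b])
print(b.should_cache("s3"), b.should_cache("s2"))   # True False
```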
16.
HCache: an effective hybrid P2P Web caching system (total citations: 1; self: 0; others: 1)
To address the problem of excessive replicas in current P2P Web caching systems, an effective hybrid P2P Web caching system, HCache, is proposed. HCache caches Web pages selectively according to users' access characteristics and the pages' priorities, thereby reducing the number of replicas in the P2P Web caching system. Based on the current popularity of Web objects, the LRU replacement policy is improved (ELRU), which raises the hit ratio of the P2P Web cache. Log-driven simulation experiments show that the HCache system improves the hit ratio and overall performance of Web caching.
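The abstract describes ELRU only as LRU improved with the current popularity of Web objects; one natural reading is sketched below (evict the cached object with the lowest popularity, breaking ties by recency). This rule is an assumption for illustration, not the paper's definition.

```python
# Sketch of a popularity-aware LRU ("ELRU"-style) eviction: among cached objects, evict the
# one with the lowest (popularity, recency) pair. The exact rule is an assumption.
from collections import OrderedDict

class PopularityLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.order = OrderedDict()           # key -> None, most recently used last
        self.hits = {}                       # key -> access count (popularity proxy)

    def access(self, key):
        if key in self.order:
            self.order.move_to_end(key)
            self.hits[key] += 1
            return True                      # cache hit
        if len(self.order) >= self.capacity:
            victim = min(self.order, key=lambda k: (self.hits[k], list(self.order).index(k)))
            del self.order[victim]
            del self.hits[victim]
        self.order[key] = None
        self.hits[key] = 1
        return False                         # cache miss

cache = PopularityLRU(capacity=2)
trace = ["a", "b", "a", "c", "a", "b"]
print([cache.access(x) for x in trace])      # [False, False, True, False, True, False]
```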
17.
Constraint-based document layout for the Web (total citations: 4; self: 0; others: 4)
Constraints can be used to specify declaratively the desired layout of a Web document. We present a system architecture in which both the author and the viewer can impose page layout constraints, some required and some preferential. The final appearance of the Web page is thus the result of negotiation between author and viewer, where this negotiation is carried out by solving the set of required and preferential constraints imposed by both parties. We identify two plausible system architectures, based on different ways of dividing the work of constraint solving between Web server and Web client. We describe a prototype constraint-based Web authoring system and viewing tool that provides linear arithmetic constraints for specifying the layout of the document as well as finite-domain constraints for specifying font size relationships. Finally, we provide an empirical evaluation of the prototype.
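A tiny example in the spirit of the required/preferential split described above is sketched below (it is not the prototype's own solver): the required constraints are hard, and a preferential one is softened with a slack variable that the objective minimizes. It uses scipy.optimize.linprog, and all the numbers are made up.

```python
# Required vs. preferential layout constraints solved as a small LP (illustrative numbers;
# the paper's prototype uses its own linear-arithmetic and finite-domain solvers).
from scipy.optimize import linprog

# Variables: x = [col1, col2, gutter, slack]
# Required (author): col1 + gutter + col2 == 600 (page width), gutter >= 10.
# Preferential (viewer): col1 == 2 * col2, softened via |col1 - 2*col2| <= slack.
c = [0, 0, 0, 1]                               # minimize the slack only
A_eq = [[1, 1, 1, 0]]
b_eq = [600]
A_ub = [[1, -2, 0, -1],                        #  col1 - 2*col2 - slack <= 0
        [-1, 2, 0, -1]]                        # -col1 + 2*col2 - slack <= 0
b_ub = [0, 0]
bounds = [(0, None), (0, None), (10, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)   # slack (res.fun) is 0: the preference can be fully satisfied
```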
18.
When asymmetric cryptography is used in wireless networks, public keys of the nodes need to be made available securely. In other networks, these public keys would have been certified by a certificate authority (CA). However, the existence of a single CA in large wireless networks such as mobile ad hoc networks and wireless sensor networks can lead to a communication hotspot problem and become an easy target for attacks. In this work, we propose a distributed technique, termed A-CACHE, to cache the public keys on regular nodes. One salient feature of our scheme is that some anchor nodes with larger cache memories are exploited. Due to the limited memory size that each node is allowed to dedicate for key caching, only a limited number of keys will be cached. Access to the public keys of other nodes is possible based on a chain of trust. In addition, multiple copies of public keys from different chains of trusted nodes provide fault-tolerant protections and guard against malicious attacks. We explain our technique in detail and investigate its prominent features. Through analysis and evaluations, we observe the existence of an optimum ratio to cache the keys of local nodes.
19.
Yan Zhang, Xu Zhou, Yinlong Liu, Bo Wang, Song Ci. Peer-to-Peer Networking and Applications, 2013, 6(4):425-433
For an ISP (Internet Service Provider) that has deployed P2P caches in more than one AS (autonomous system), cooperative caching, which makes these caches cooperate with each other, can save more of the cost of carrying P2P traffic than independent caching. However, existing cooperative caching algorithms use only objects’ popularity as the measurement to decide which objects should be cached, and the cost on intra-ISP links, which has a great impact on the benefits of cooperative caching, is not considered. In this paper, we first model the cooperative caching problem as an NP-complete problem, based on our analysis of the cost of serving requests with consideration of both the objects’ popularity and the cost on intra-ISP links. Then we propose a novel cooperative caching algorithm named cLGV (Cooperative, Lowest Global Value). The cLGV algorithm uses a new concept, the global value, to estimate the benefits of caching or replacing an object in the cooperative caching system, and the global value of each object is evaluated according to not only the object’s popularity in each AS but also the cost on intra-ISP links among ASes. Results of simulations driven by both synthetic and real traces indicate that our cLGV algorithm can save at least 23% more of the cost of carrying P2P traffic than existing cooperative caching algorithms.
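The notion of a global value that combines per-AS popularity with intra-ISP link cost can be sketched as below. The exact formula is an assumption (the abstract does not give it); here it is taken as the traffic cost the cooperating system avoids by keeping copies of the object in a given set of ASes, and the evicted object would be the one with the lowest such value.

```python
# Sketch of a cLGV-style "global value": estimated cost saved across all ASes by keeping an
# object cached inside the cooperating system. The formula is an illustrative assumption.

def global_value(popularity, intra_cost, external_cost, cached_in):
    """popularity[as_id]  : expected requests for the object originating in each AS
       intra_cost[a][b]   : unit cost of carrying traffic from AS b's cache to AS a
       external_cost      : unit cost of fetching the object from outside the ISP
       cached_in          : set of ASes currently holding a copy of the object"""
    value = 0.0
    for as_id, reqs in popularity.items():
        cheapest_internal = min((intra_cost[as_id][b] for b in cached_in), default=external_cost)
        value += reqs * (external_cost - min(cheapest_internal, external_cost))
    return value

popularity = {"AS1": 100, "AS2": 40}
intra_cost = {"AS1": {"AS1": 0.0, "AS2": 2.0}, "AS2": {"AS1": 2.0, "AS2": 0.0}}
# A single copy in AS1 saves external fetches for both ASes, at some intra-ISP link cost:
print(global_value(popularity, intra_cost, external_cost=5.0, cached_in={"AS1"}))   # 620.0
```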
20.
In content-oriented networks, popular contents are replicated at the intermediate nodes to enhance content delivery performance. Under cooperative caching, the caching nodes collaborate to leverage one another’s cache capability and to reduce the amount of traffic transferring inside the network. This study considers the cooperation among service providers (SPs). The transferable-payoff coalitional game model is applied for analysis. We investigate the stability of the grand coalition and show that the dual-based cost allocation is in the core. A linear program (LP) minimizing the network bandwidth-expense is used for the characteristic function of the game model. However, solving the LP is a challenge because of the large number of contents in the network. The Dantzig–Wolfe decomposition approach is further applied to decompose the large-scale problem into many subproblems, which can be solved in parallel. The analysis provides not only a deeper insight into cooperative caching among SPs but also content placement and distribution strategies as a solution to the LP.
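A generic form of the kind of bandwidth-expense minimization described above is sketched below. The notation is ours and the actual objective and constraints in the paper may differ; relaxing the placement variables to the unit interval is what keeps the problem a linear program, and the Dantzig–Wolfe step would then split it into per-content (or per-node) subproblems coordinated by a master program.

```latex
% Generic content-placement LP (illustrative notation, not the paper's exact model).
% x_{ck}: fraction of requests for content c served from caching node k,
% y_{ck}: (relaxed) indicator that content c is stored at node k,
% d_c: request rate for c, b_{ck}: bandwidth expense per request served from k,
% s_c: size of c, S_k: cache capacity of node k.
\begin{align}
\min_{x,\,y}\quad & \sum_{c}\sum_{k} d_c\, b_{ck}\, x_{ck} \\
\text{s.t.}\quad  & \sum_{k} x_{ck} = 1 && \forall c \\
                  & x_{ck} \le y_{ck} && \forall c,\, k \\
                  & \sum_{c} s_c\, y_{ck} \le S_k && \forall k \\
                  & x_{ck} \ge 0,\quad 0 \le y_{ck} \le 1 && \forall c,\, k
\end{align}
```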