Found 20 similar documents (search took 352 ms)
1.
肖刚 《计算机与数字工程》2012,40(4):75-77,142
To address the weak semantics and coarse granularity of query conditions, low retrieval efficiency, and network bandwidth consumption in P2P networks, this entry proposes an efficient metadata-based query algorithm. A unified metadata layer is built on top of an arbitrary P2P data-management layer; each node automatically extracts detailed metadata for its shared data. Every node stores not only the metadata of its own shared data but also the metadata of the most interesting data it has accessed, and manages this metadata efficiently in a database. All nodes thereby gain a self-learning capability and exploit the metadata to improve retrieval efficiency.
2.
3.
4.
In P2P-based video-on-demand systems, inefficient use of the client-side buffer degrades the quality of the streaming service. This entry proposes P2P_VOD, a new hybrid-P2P streaming VoD model that divides the client buffer into three zones, and describes in detail the buffer-replacement mechanism for client nodes. The mechanism jointly considers the balance of data-block replica counts and the hit rate of VCR operations, so that program data blocks are cached in a globally optimized way across nodes, effectively relieving server load. Comparative simulation experiments verify the model's advantages in startup delay and server load.
5.
汪永琳 《计算机工程与科学》2009,31(8)
This entry first reviews the principle of super-node P2P networks and points out their load-imbalance problem. To address it, an information index mechanism (IIM) is introduced that distributes the index of resource information across multiple super nodes. Simulation experiments show that, without noticeably reducing search efficiency, IIM balances resource information across super nodes and effectively resolves the load imbalance among super nodes in super-node P2P networks.
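The core idea of IIM, distributing index entries over super nodes so no single one carries the whole load, can be sketched as hash partitioning. The node names, hash choice, and workload below are illustrative assumptions, not details from the paper.

```python
# Sketch of the IIM idea from entry 5: spread resource-index entries across
# super nodes by hashing, so index load is balanced. All names are illustrative.
import hashlib

def owner(resource, super_nodes):
    # Hash the resource name onto one of the super nodes.
    i = int(hashlib.md5(resource.encode()).hexdigest(), 16) % len(super_nodes)
    return super_nodes[i]

supers = ["S0", "S1", "S2"]
index = {s: [] for s in supers}
for res in [f"file{i}" for i in range(30)]:
    index[owner(res, supers)].append(res)

loads = {s: len(v) for s, v in index.items()}  # index entries per super node
```

A lookup for a resource then contacts only `owner(resource, supers)` instead of a single central super node.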
6.
A P2P system is deployed in a mobile campus-network environment together with a cooperative caching strategy. The admission-control policy uses a threshold and the positional relationship of nodes to select which data to cache. The replacement policy evicts data using a value function, Cost, that weighs three factors: access frequency, data size, and the distance between regions. The consistency policy combines and improves on the strengths of the Plain-Push and Pull-Every-time schemes. Two sets of simulation experiments verify that this cooperative caching strategy performs well in reducing latency, cutting network communication overhead, and raising the cache hit rate.
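The value-function replacement in entry 6 can be sketched as follows. The abstract only names the three factors (access frequency, size, inter-region distance); the linear weighting and the weights themselves are assumptions for illustration.

```python
# Sketch of a value-function cache replacement in the spirit of entry 6.
# The exact form of Cost and the weights are assumptions; the abstract only
# states that frequency, size, and inter-region distance are considered.

def cost(freq, size, distance, w_size=0.5, w_dist=0.5):
    """Higher value = more worth keeping: frequently accessed,
    small, nearby objects score highest."""
    return freq / (w_size * size + w_dist * distance)

def evict_victim(cache):
    """Evict the entry with the lowest Cost value."""
    return min(cache, key=lambda k: cost(*cache[k]))

cache = {
    "a": (10, 4.0, 1.0),  # (access frequency, size in MB, region distance)
    "b": (2, 8.0, 3.0),
    "c": (6, 2.0, 2.0),
}
victim = evict_victim(cache)  # the large, rarely accessed, distant object
```

The same scoring could be reused by the admission-control step: only admit an object whose Cost exceeds that of the would-be victim.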
7.
8.
9.
A Hash-Based P2P Overlay Network for Mobile Environments  (Cited by: 1; self-citations: 0; others: 1)
Most hash-based P2P networks proposed so far assume stationary peers. When a node moves to a new position in the network, such structures lose efficiency in message delivery and related operations. This entry proposes a Hash-based P2P Overlay in Mobile Environments (H-MP2P) that allows nodes to move freely through the network. A node broadcasts its location information over the P2P network, so other nodes can learn of its movement and locate it. Theoretical analysis and experiments show that H-MP2P achieves good scalability, reliability, and efficiency, and is well suited to mobile environments.
10.
Flooding-based search in unstructured P2P networks imposes a heavy network load, while structured P2P networks require considerable overhead to maintain their topology. To address this, a layered P2P search mechanism with social-network properties is presented. Following the basic principles of social networks, nodes with high semantic similarity are placed in the same virtual community, and nodes proactively build search links within their community. Experimental results show that this search mechanism effectively improves the efficiency of resource search in P2P networks.
11.
Exploiting client caches to build large Web caches  (Cited by: 2; self-citations: 1; others: 1)
New demands brought by the continuing growth of the Internet will be met in part by more effective and comprehensive use of
caching. This paper proposes to exploit client browser caches in the context of cooperative proxy caching by constructing
the client caches within each organization (e.g., corporate networks) as a peer-to-peer (P2P) client cache. Via trace-driven
simulations we evaluate the potential performance benefit of cooperative proxy caching with/without exploiting client caches.
We show that exploiting client caches in cooperative proxy caching can significantly improve performance, particularly when
the size of individual proxy caches is limited compared to the universe of Web objects. We further devise a cooperative hierarchical
greedy-dual replacement algorithm (Hier-GD), which not only provides some cache coordination but also utilizes client caches.
Through Hier-GD, we explore the design issues of how to exploit client caches in cooperative proxy caching to build large
Web caches. We show that Hier-GD is technically practical and can potentially improve the performance of cooperative proxy
caching by utilizing client caches.
Yiming Hu
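Entry 11's Hier-GD builds on greedy-dual replacement. A minimal sketch of the classical GreedyDual-Size policy it extends is below; the hierarchical coordination between proxy and client caches is omitted, and the capacities and costs are made up for illustration.

```python
# Sketch of GreedyDual-Size, the replacement family behind Hier-GD (entry 11).
# Priority H(p) = L + cost(p)/size(p); evict the minimum-H object and inflate
# L to its priority, so long-resident objects age out gracefully.

class GreedyDualSize:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0     # inflation value, rises with each eviction
        self.H = {}      # object -> priority
        self.size = {}

    def access(self, obj, size, cost=1.0):
        """Returns True on a hit, False on a miss (object is then admitted)."""
        if obj in self.H:                          # hit: restore priority
            self.H[obj] = self.L + cost / self.size[obj]
            return True
        while self.used + size > self.capacity and self.H:
            victim = min(self.H, key=self.H.get)   # lowest-priority object
            self.L = self.H.pop(victim)            # inflate L to victim's H
            self.used -= self.size.pop(victim)
        self.H[obj] = self.L + cost / size
        self.size[obj] = size
        self.used += size
        return False

c = GreedyDualSize(capacity=10)
c.access("a", 6)
c.access("b", 4)
hit = c.access("a", 6)   # hit: "a" is still cached
c.access("c", 5)         # over capacity: evicts "a" (lowest priority)
```

With cost fixed at 1, the policy favors small objects, which matches the Web-caching setting the paper targets.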
12.
Theo Härder Andreas Bühmann 《The VLDB Journal The International Journal on Very Large Data Bases》2008,17(4):805-826
Caching is a proven remedy to enhance scalability and availability of software systems as well as to reduce latency of user
requests. In contrast to Web caching where single Web objects are accessed and kept ready somewhere in caches in the user-to-server
path, database caching uses full-fledged database management systems as caches, close to application servers at the edge of
the Web, to adaptively maintain sets of records from a remote database and to evaluate queries on them. We analyze a new class
of approaches to database caching where the extensions of query predicates that are to be evaluated are constructed by constraints
in the cache. Starting from the key concept of value completeness, we explore the application of cache constraints and their
implications on query evaluation correctness and on controllable cache loading called cache safeness. Furthermore, we identify
simple rules for the design of cache groups and their optimization before discussing the use of single cache groups and cache
group federations. Finally, we argue that predicate completeness can be used to develop new variants of constraint-based database
caching.
13.
Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa 《Computer Communications》2011,34(7):883-897
Traffic caused by P2P services dominates a large part of traffic on the Internet and imposes significant loads on it, so reducing P2P traffic within networks is an important issue for ISPs. In particular, a huge amount of traffic is transferred within backbone networks; therefore reducing P2P traffic is important for transit ISPs to improve the efficiency of network resource usage and reduce network capital cost. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the required content from caches. However, the design problem of cache locations and capacities has not been well investigated, although the effect of caches strongly depends on both. We propose an optimal design method, based on dynamic programming, for cache capacity and location that minimizes the total amount of P2P traffic, assuming that transit ISPs provide caches at transit links to access ISP networks. We apply the proposed design method to 31 actual ISP backbone networks and investigate the main factors determining cache efficiency. We also analyze the properties of network structures in which deploying caches is effective in reducing P2P traffic for transit ISPs. We show that transit ISPs can reduce the P2P traffic within their networks by about 50-85% by optimally designing caches at the transit links to the lower networks.
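The dynamic-programming design in entry 13 can be sketched as a budget-allocation DP: given the traffic each link would save at each cache capacity, choose per-link capacities that maximize total savings. The `traffic_saved` numbers below are made up; in the paper they would come from measured P2P demand, and the DP formulation here is an illustrative reconstruction, not the authors' exact recurrence.

```python
# Sketch of allocating a total cache budget across transit links by dynamic
# programming, in the spirit of entry 13. traffic_saved[i][c] = traffic
# reduced on link i by a cache of capacity c (illustrative numbers).

def allocate(traffic_saved, budget):
    """Return (max total traffic saved, capacity chosen per link)."""
    n = len(traffic_saved)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    choice = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                 # links considered so far
        for b in range(budget + 1):           # budget spent so far
            for c in range(b + 1):            # capacity given to link i-1
                v = best[i - 1][b - c] + traffic_saved[i - 1][c]
                if v > best[i][b]:
                    best[i][b], choice[i][b] = v, c
    alloc, b = [], budget
    for i in range(n, 0, -1):                 # backtrack chosen capacities
        alloc.append(choice[i][b])
        b -= choice[i][b]
    return best[n][budget], alloc[::-1]

saved = [
    [0, 5, 7, 8],   # link 0: diminishing returns per capacity unit
    [0, 3, 6, 9],   # link 1: linear returns
]
total, alloc = allocate(saved, 3)   # best split of 3 capacity units
```

The DP runs in O(n · budget²), which is practical for backbone-scale inputs where capacity is discretized coarsely.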
14.
Bin Huang, Zhigang Sun, Hongyi Chen, Jianbiao Mao, Ziwen Zhang 《Peer-to-Peer Networking and Applications》2014,7(4):485-496
Peer-to-peer (P2P) systems generate a major fraction of current Internet traffic, significantly increasing the load on ISP networks. To mitigate these negative impacts, many previous works in the literature have proposed caching P2P traffic, but very few have considered designing a distributed caching infrastructure in the edge network. This paper demonstrates that a distributed caching infrastructure is more suitable than traditional proxy cache servers, which cache data on disk, and that it is viable to use the memory of users in the edge network as the cache space. The paper presents the design and evaluation of a distributed network cache infrastructure for P2P applications, called BufferBank. BufferBank provides a number of application interfaces that let P2P applications make full use of the cache space. A three-level mapping is introduced and elaborated to improve the reliability and security of this distributed cache mechanism. Measurement results suggest that BufferBank reduces data-retrieval delay compared with a traditional disk-based P2P cache server.
15.
As the Internet has become a more central aspect for information technology, so have concerns with supplying enough bandwidth
and serving web requests to end users in an appropriate time frame. Web caching was introduced in the 1990s to help decrease
network traffic, lessen user perceived lag, and reduce loads on origin servers by storing copies of web objects on servers
closer to end users as opposed to forwarding all requests to the origin servers. Since web caches have limited space, web
caches must effectively decide which objects are worth caching or replacing for other objects. This problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The
goal of this research is to implement NNPCR in a real environment like Squid proxy server. In order to do so, we propose an
improved strategy of NNPCR referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and gain a 5–10% higher Correct Classification Ratio (CCR) than NNPCR. We implemented NNPCR-2 in Squid proxy server
and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested
against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results
demonstrate that NNPCR-2 made important, balanced decisions in relation to the hit rate and byte hit rate; the two performance
metrics most commonly used to measure the performance of web proxy caches.
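The idea behind NNPCR-2 in entry 15, a neural network scoring each object's chance of re-access so the lowest-scoring object is evicted, can be sketched with a single sigmoid neuron. The two features, the hand-set weights, and the function names are toy assumptions; NNPCR-2 itself trains a multilayer network on proxy trace data.

```python
# Sketch of neural-network cache replacement (entry 15): score each object's
# re-access likelihood with a tiny "network" and evict the lowest scorer.
# The single neuron and fixed weights here are illustrative assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reaccess_score(recency, frequency, w=(-0.8, 1.2), bias=0.0):
    # recency: normalized time since last access (higher = staler)
    # frequency: normalized access count (higher = more popular)
    return sigmoid(w[0] * recency + w[1] * frequency + bias)

def pick_victim(objects):
    """objects: {name: (recency, frequency)}; evict lowest re-access score."""
    return min(objects, key=lambda o: reaccess_score(*objects[o]))

victim = pick_victim({
    "img.png": (0.2, 0.9),   # recent and popular -> keep
    "old.zip": (0.9, 0.1),   # stale and unpopular -> evict
})
```

In the trained system the weights would come from supervised learning over trace files, with the CCR measuring how often the network's keep/evict classification matches the trace's actual future accesses.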
16.
In this paper we describe the design of an effective caching mechanism for resource-limited, definite-clause theorem-proving systems. Previous work in adapting caches for theorem proving relies on the use of unlimited-size caches. We show how unlimited-size caches are unsuitable in application contexts where resource-limited theorem provers are used to solve multiple problems from a single problem distribution. We introduce bounded-overhead caches, that is, those caches that contain at most a fixed number of entries and entail a fixed amount of overhead per lookup, and we examine cache design issues for bounded-overhead caches. Finally, we present an empirical evaluation of bounded-overhead cache performance, relying on a specially designed experimental methodology that separates hardware-dependent, implementation-dependent, and domain-dependent effects.
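A bounded-overhead cache as defined in entry 16, at most a fixed number of entries and constant work per lookup, can be sketched as a fixed-size LRU map. Entry 16 examines several bounded designs; the LRU choice and class below are one illustrative instance, not the paper's specific mechanism.

```python
# A bounded-overhead cache in the sense of entry 16: at most `maxsize`
# entries, O(1) work per lookup and insertion. Sketched as LRU eviction
# over an OrderedDict; the paper studies bounded caches more generally.
from collections import OrderedDict

class BoundedCache:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # O(1): mark most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)  # O(1): drop least recently used

cache = BoundedCache(2)
cache.put("p", 1)
cache.put("q", 2)
cache.get("p")       # refresh "p"
cache.put("r", 3)    # bound exceeded: evicts "q", the least recently used
```

For memoizing subgoal results in a resource-limited prover, the fixed bound is what keeps per-lookup overhead from growing with the number of problems solved.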
17.
Streaming-media applications on the Internet are constrained by network conditions such as delay and packet loss rate. This entry studies mitigating these effects by placing caching proxies at the network edge, and proposes a new cache-management algorithm, NRC: when a client accesses a streaming service, it obtains the media object in two ways, with part of the object content fetched from the proxy cache and the rest transferred directly from the origin streaming server. This accelerates streaming access and improves streaming quality of service; the algorithm adapts to both network and media-stream characteristics. Simulation experiments confirm that NRC, being aware of network and streaming characteristics, effectively reduces service delay and improves the overall quality of streaming service.
18.
19.
The delivery of multimedia over the Internet is affected by adverse network conditions such as high packet loss rate and long delay. This paper aims at mitigating such effects by leveraging client-side caching proxies. We present a novel cache architecture and associated cache management algorithms that turn edge caches into accelerators of streaming media delivery. This architecture allows partial caching of media objects and joint delivery from caches and origin servers. Most importantly, the caching algorithms are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit rate requirements, and the available bandwidth between clients and servers. Using Internet bandwidth models derived from proxy cache logs and measured over real Internet paths, we have conducted extensive simulations to evaluate the performance of various cache management algorithms. Our experiments demonstrate that network-aware caching algorithms can significantly reduce startup delay and improve stream quality. Our experiments also show that partial caching is particularly effective when bandwidth variability is not very high.
Correspondence to: Shudong Jin
This research was supported in part by NSF (awards ANI-9986397, ANI-0095988, ANI-0205294 and EJA-0202067) and by IBM. Part of this work was done while the first author was at IBM Research in 2001.
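The partial caching with joint delivery that entry 19 describes, where the proxy holds only part of a media object and the origin server supplies the remainder, can be sketched as prefix caching. The segment model, the popularity-to-prefix rule, and all names below are illustrative assumptions rather than the paper's actual algorithms.

```python
# Sketch of partial (prefix) caching with joint delivery, as in entry 19:
# the edge cache stores a prefix of each media object, and the origin server
# streams the rest. Sizing rules here are illustrative assumptions.

def prefix_length(object_len, popularity, budget_fraction=0.5):
    """More popular objects get a longer cached prefix (0..object_len).
    popularity is assumed normalized to [0, 1]."""
    return int(object_len * min(1.0, popularity * budget_fraction))

def plan_delivery(object_len, cached_prefix):
    """Split one request into (bytes from proxy, bytes from origin)."""
    from_cache = min(cached_prefix, object_len)
    return from_cache, object_len - from_cache

obj_len = 1000
prefix = prefix_length(obj_len, popularity=0.8)          # 400 bytes cached
from_proxy, from_origin = plan_delivery(obj_len, prefix)  # joint delivery
```

Serving the prefix locally is what hides startup delay: playback begins from the cache while the origin transfer of the suffix catches up, which is why the paper reports the biggest wins when bandwidth variability is modest.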
20.
In-network caching is the key technique by which Named Data Networking (NDN) achieves efficient information retrieval and effectively reduces traffic on the Internet backbone. It adds caching as a pervasive function to every network node: when a user requests content, any node that holds it (for example, a router) can, upon receiving the request, return the content directly to the user, improving response efficiency. However, NDN's ubiquitous caching makes the delivery path from content publisher to user...