Similar Documents
20 similar documents found; search took 352 ms.
1.
To address the problems of weakly semantic, coarse-grained query conditions, low retrieval efficiency, and high network bandwidth consumption in P2P networks, an efficient metadata-based query algorithm is proposed. A unified metadata layer is built on top of an arbitrary P2P data management layer: each node automatically extracts detailed metadata for its shared data, and every node stores not only the metadata of its local shared data but also the metadata of the most interesting data it has accessed, managing this metadata efficiently in a database. All nodes thereby gain a self-learning capability and make full use of metadata to improve retrieval efficiency.

2.
孙丽丽  欧阳松 《计算机工程》2008,34(20):127-128
P2P has great promise for distributed file sharing, but current P2P systems still lack effective information management mechanisms. This paper considers trust and semantic factors when constructing a super-node overlay network, so that semantically similar nodes are placed in the same domain whenever possible. Super-nodes are selected according to trust value, node capability, and dynamism, and an efficient P2P resource discovery algorithm based on semantics and a trust mechanism is proposed.

3.
Targeting the characteristics of super-node P2P systems, an effective and flexible caching strategy is proposed. The strategy uses a file-value metric to decide which objects to evict, and applies a threshold before admission so that only high-value hot files are cached. Trace-driven simulation experiments show that, compared with the existing LRU and LFU replacement policies, this strategy achieves a better cache hit rate and byte hit rate.
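The abstract names the two mechanisms (threshold-based admission, eviction of the lowest-value file) but not the value formula itself. The sketch below is a minimal illustration of that structure, with a hypothetical value metric (access count normalized by size) standing in for the paper's undefined one.

```python
# Sketch of threshold-admission, value-based cache replacement (value formula is assumed).
class ValueCache:
    def __init__(self, capacity_bytes, admit_threshold):
        self.capacity = capacity_bytes
        self.threshold = admit_threshold      # only files whose value exceeds this are cached
        self.store = {}                       # file_id -> (size, value)
        self.used = 0

    def value(self, access_count, size):
        # Hypothetical value metric: popular files are worth more, normalized by size.
        return access_count / size

    def access(self, file_id, size, access_count):
        if file_id in self.store:             # cache hit: refresh the stored value
            self.store[file_id] = (size, self.value(access_count, size))
            return True
        v = self.value(access_count, size)
        if v < self.threshold:                # admission filter: skip low-value files
            return False
        while self.used + size > self.capacity and self.store:
            victim = min(self.store, key=lambda f: self.store[f][1])   # evict lowest value
            self.used -= self.store.pop(victim)[0]
        if self.used + size <= self.capacity:
            self.store[file_id] = (size, v)
            self.used += size
        return False
```

Unlike LRU or LFU, such a policy can refuse to cache a file at all when its estimated value is below the threshold, which is the behavior the abstract credits for the improved hit rates.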

4.
In P2P-based video-on-demand systems, inefficient use of the client cache degrades the quality of the streaming service. A new hybrid-P2P streaming VoD model, P2P_VOD, is proposed: the client cache is divided into three regions, and the cache replacement mechanism of the client node is described in detail. By jointly considering the balance of data-block replica counts and the hit rate of node VCR operations, the caching of program data blocks across nodes is globally optimized and the server load is effectively relieved. Comparative simulation experiments confirm the model's advantages in startup delay and server load.

5.
This paper first introduces the principles of super-node structured P2P networks and points out their load-imbalance problem. To address it, an information index mechanism (IIM) is introduced that distributes resource indexes across multiple super-nodes. Simulation experiments show that IIM balances the resource information held by each super-node without noticeably reducing search efficiency, effectively solving the super-node load-imbalance problem in super-node structured P2P networks.

6.
A P2P system is deployed in a mobile campus network environment and a cooperative caching strategy is proposed. The admission control policy selects data to cache using a threshold and the positional relationship between nodes. The replacement policy selects data to evict using a value function, Cost, which takes three factors into account: access frequency, data size, and the distance between regions. The consistency policy combines and improves on the Plain-Push and Pull-Every-time schemes. Two sets of simulation experiments show that this cooperative caching strategy performs well in reducing latency, lowering network communication overhead, and improving the cache hit rate.
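The abstract names the three inputs of the Cost function but not how they are combined. The sketch below assumes one simple weighted form and evicts the item with the lowest cost; the formula and weights are assumptions, not the paper's definition.

```python
# Sketch of the cooperative-cache replacement idea: evict the item whose loss hurts least.
# The combination formula and weights are assumptions; the abstract only names the factors.
def cost(access_freq, size_bytes, region_distance, alpha=1.0, beta=1.0):
    """Higher cost = more valuable to keep: frequently accessed, cheap to hold,
    expensive to re-fetch from a distant region."""
    return (access_freq * (region_distance ** alpha)) / (size_bytes ** beta)

def pick_victim(cache_entries):
    # cache_entries: item_id -> (access_freq, size_bytes, region_distance)
    return min(cache_entries, key=lambda i: cost(*cache_entries[i]))

victim = pick_victim({
    "a": (50, 2_000_000, 1),   # popular, nearby
    "b": (5, 8_000_000, 1),    # rarely used, large, nearby -> likely evicted
    "c": (5, 8_000_000, 4),    # rarely used but far away, costly to re-fetch
})
print(victim)  # "b" under these assumed weights
```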

7.
Replica Management Strategy in Unstructured P2P Networks   Total citations: 1 (self: 1, other: 0)
陈宇  董健全 《计算机工程》2008,34(18):108-110
To improve the availability and reliability of data in P2P networks through dynamic replica management, a trend-prediction-based dynamic replica management mechanism is proposed. A time-series smoothing algorithm borrowed from economics is used to predict hot files, and three different strategies dynamically manage replica placement, deletion, and replacement. Simulation experiments and analysis of the related data show that this approach effectively improves the hit rate of resource search in P2P networks, reduces overall network overhead, and balances the load across nodes.
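The abstract does not name the smoothing algorithm; single exponential smoothing is one common time-series method of this kind, sketched here under that assumption, with the smoothing constant and top-k cutoff as illustrative parameters.

```python
# Sketch: forecast next-period request counts per file with single exponential smoothing,
# then treat the top-k forecasts as "hot" candidates for extra replicas.
def exponential_smoothing(history, alpha=0.3):
    """history: observed request counts per period; returns a forecast for the next period."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

def hot_files(request_histories, k=2):
    forecasts = {f: exponential_smoothing(h) for f, h in request_histories.items()}
    return sorted(forecasts, key=forecasts.get, reverse=True)[:k]

print(hot_files({
    "movie.mkv": [10, 14, 25, 40],   # rising trend -> likely hot
    "doc.pdf":   [30, 20, 12, 8],    # fading interest
    "song.mp3":  [5, 5, 6, 5],
}))
```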

8.
Existing P2P networks are large, highly dynamic, and strongly heterogeneous, and effective search has long been a core problem in P2P research. Addressing the high query overhead and low efficiency caused by the blindness of flooding-based search in unstructured P2P networks, this paper proposes SRVN, a P2P search mechanism based on semantic similarity. By accumulating historical search experience as routing hints, SRVN helps a searching node quickly discover the nodes most relevant to the query content, improving search efficiency and target hit rate. Experimental results show that the SRVN mechanism effectively improves Gnutella query performance.

9.
A Hash-Based P2P Overlay Network for Mobile Environments   Total citations: 1 (self: 0, other: 1)
Most hash-based P2P networks proposed so far assume stationary peers. When a node moves to a new position in the network, the efficiency of such structures in message delivery and related operations degrades. This paper proposes a hash-based P2P overlay for mobile environments (H-MP2P) that allows nodes to move freely within the network. A node can broadcast its location information over the P2P network, and other nodes can learn of its movement and locate it. Theoretical analysis and experiments show that H-MP2P achieves good scalability, reliability, and efficiency, making it well suited to mobile environments.

10.
刘浩 《计算机工程》2012,38(24):86-89
Flooding-based search in unstructured P2P networks imposes a heavy network load, while structured P2P networks require considerable overhead to maintain their topology. To address this problem, a layered P2P search mechanism with social-network characteristics is presented. Following basic social-network principles, nodes with high semantic similarity are grouped into the same virtual community, and nodes actively establish search links within their community. Experimental results show that this search mechanism effectively improves the efficiency of resource search in P2P networks.

11.
Exploiting client caches to build large Web caches   Total citations: 2 (self: 1, other: 1)
New demands brought by the continuing growth of the Internet will be met in part by more effective and comprehensive use of caching. This paper proposes to exploit client browser caches in the context of cooperative proxy caching by constructing the client caches within each organization (e.g., corporate networks) as a peer-to-peer (P2P) client cache. Via trace-driven simulations we evaluate the potential performance benefit of cooperative proxy caching with/without exploiting client caches. We show that exploiting client caches in cooperative proxy caching can significantly improve performance, particularly when the size of individual proxy caches is limited compared to the universe of Web objects. We further devise a cooperative hierarchical greedy-dual replacement algorithm (Hier-GD), which not only provides some cache coordination but also utilizes client caches. Through Hier-GD, we explore the design issues of how to exploit client caches in cooperative proxy caching to build large Web caches. We show that Hier-GD is technically practical and can potentially improve the performance of cooperative proxy caching by utilizing client caches.
Yiming Hu
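The abstract describes Hier-GD as a cooperative hierarchical greedy-dual replacement algorithm without spelling out the base policy. As a hedged illustration, the sketch below shows the classic GreedyDual-Size idea on which greedy-dual variants build (an inflation value L plus a per-object credit of cost/size), not the paper's hierarchical coordination itself.

```python
# Sketch of GreedyDual-Size, the standard base of greedy-dual replacement policies.
# H-value = L + fetch_cost/size; evict the minimum H and raise L to it ("inflation").
class GreedyDualSize:
    def __init__(self, capacity):
        self.capacity = capacity
        self.L = 0.0
        self.h = {}      # object -> H value
        self.size = {}   # object -> size
        self.used = 0

    def access(self, obj, size, fetch_cost):
        if obj in self.h:
            self.h[obj] = self.L + fetch_cost / size     # refresh credit on a hit
            return True
        while self.used + size > self.capacity and self.h:
            victim = min(self.h, key=self.h.get)
            self.L = self.h.pop(victim)                  # inflation: remember the evicted value
            self.used -= self.size.pop(victim)
        if self.used + size <= self.capacity:
            self.h[obj] = self.L + fetch_cost / size
            self.size[obj] = size
            self.used += size
        return False
```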

12.
Caching is a proven remedy to enhance scalability and availability of software systems as well as to reduce latency of user requests. In contrast to Web caching where single Web objects are accessed and kept ready somewhere in caches in the user-to-server path, database caching uses full-fledged database management systems as caches, close to application servers at the edge of the Web, to adaptively maintain sets of records from a remote database and to evaluate queries on them. We analyze a new class of approaches to database caching where the extensions of query predicates that are to be evaluated are constructed by constraints in the cache. Starting from the key concept of value completeness, we explore the application of cache constraints and their implications on query evaluation correctness and on controllable cache loading called cache safeness. Furthermore, we identify simple rules for the design of cache groups and their optimization before discussing the use of single cache groups and cache group federations. Finally, we argue that predicate completeness can be used to develop new variants of constraint-based database caching.

13.
Traffic caused by P2P services dominates a large part of traffic on the Internet and imposes significant loads on the Internet, so reducing P2P traffic within networks is an important issue for ISPs. In particular, a huge amount of traffic is transferred within backbone networks; therefore reducing P2P traffic is important for transit ISPs to improve the efficiency of network resource usage and reduce network capital cost. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the required content from caches. However, the design problem of cache locations and capacities has not been well investigated, although the effect of caches strongly depends on the cache locations and capacities. We propose an optimum design method of cache capacity and location for minimizing the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at transit links to access ISP networks. We apply the proposed design method to 31 actual ISP backbone networks and investigate the main factors determining cache efficiency. We also analyze the property of network structure in which deploying caches are effective in reducing P2P traffic for transit ISPs. We show that transit ISPs can reduce the P2P traffic within their networks by about 50-85% by optimally designing caches at the transit links to the lower networks.
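The abstract states that cache capacities and locations are chosen by dynamic programming to minimize total P2P traffic, but does not give the formulation. As an illustration only, the sketch below allocates a total cache budget across transit links so that estimated traffic reduction is maximized; the per-link reduction tables are assumed inputs, not data from the paper.

```python
# Illustrative DP: split a total cache budget over transit links to maximize traffic saved.
# reduction[i][c] = traffic saved at link i when given c capacity units (assumed inputs).
def allocate_cache(reduction, budget):
    n = len(reduction)
    best = [0.0] * (budget + 1)                   # best[c] = max saving so far with c units
    choice = [[0] * (budget + 1) for _ in range(n)]
    for i in range(n):
        new_best = best[:]
        for c in range(budget + 1):
            for give in range(1, min(c, len(reduction[i]) - 1) + 1):
                cand = best[c - give] + reduction[i][give]
                if cand > new_best[c]:
                    new_best[c], choice[i][c] = cand, give
        best = new_best
    alloc, c = [0] * n, budget                    # recover each link's capacity share
    for i in range(n - 1, -1, -1):
        alloc[i] = choice[i][c]
        c -= alloc[i]
    return best[budget], alloc

saved, alloc = allocate_cache(
    [[0, 10, 14, 16],    # link 0: diminishing returns
     [0, 6, 12, 18],     # link 1: scales well with capacity
     [0, 2, 3, 4]],      # link 2: little P2P traffic here
    budget=4)
print(saved, alloc)      # 28.0 [1, 3, 0] under these assumed tables
```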

14.
Peer-to-peer (P2P) systems generate a major fraction of current Internet traffic, which significantly increases the load on ISP networks. To mitigate these negative impacts, many previous works in the literature have proposed caching of P2P traffic, but very few have considered designing a distributed caching infrastructure in the edge network. This paper demonstrates that a distributed caching infrastructure is more suitable than traditional proxy cache servers, which cache data on disk, and that it is viable to use the memory of users in the edge network as the cache space. This paper presents the design and evaluation of a distributed network cache infrastructure for P2P applications, called BufferBank. BufferBank provides a number of application interfaces for P2P applications to make full use of the cache space. Three-level mapping is introduced and elaborated to improve the reliability and security of this distributed cache mechanism. Our measurement results suggest that BufferBank can decrease the data-obtaining delay compared with a traditional disk-based P2P cache server.

15.
As the Internet has become a more central aspect of information technology, so have concerns with supplying enough bandwidth and serving web requests to end users in an appropriate time frame. Web caching was introduced in the 1990s to help decrease network traffic, lessen user perceived lag, and reduce loads on origin servers by storing copies of web objects on servers closer to end users as opposed to forwarding all requests to the origin servers. Since web caches have limited space, web caches must effectively decide which objects are worth caching or replacing for other objects. This problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The goal of this research is to implement NNPCR in a real environment like Squid proxy server. In order to do so, we propose an improved strategy of NNPCR referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and gain a 5–10% increase in Correct Classification Ratio (CCR) than NNPCR. We implemented NNPCR-2 in Squid proxy server and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results demonstrate that NNPCR-2 made important, balanced decisions in relation to the hit rate and byte hit rate; the two performance metrics most commonly used to measure the performance of web proxy caches.

16.
In this paper we describe the design of an effective caching mechanism for resource-limited, definite-clause theorem-proving systems. Previous work in adapting caches for theorem proving relies on the use of unlimited-size caches. We show how unlimited-size caches are unsuitable in application contexts where resource-limited theorem provers are used to solve multiple problems from a single problem distribution. We introduce bounded-overhead caches, that is, those caches that contain at most a fixed number of entries and entail a fixed amount of overhead per lookup, and we examine cache design issues for bounded-overhead caches. Finally, we present an empirical evaluation of bounded-overhead cache performance, relying on a specially designed experimental methodology that separates hardware-dependent, implementation-dependent, and domain-dependent effects.
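The abstract defines a bounded-overhead cache only by its two constraints: at most a fixed number of entries and a fixed overhead per lookup. A minimal sketch meeting those constraints follows, using memoized subgoal results as an example key space and FIFO-style replacement as an assumed policy; the paper's actual replacement choices are not given in the abstract.

```python
# Sketch of a bounded-overhead cache for memoizing subgoal results:
# at most `max_entries` entries, and each lookup is a single hash probe (fixed overhead).
from collections import OrderedDict

class BoundedCache:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()           # goal -> cached result

    def lookup(self, goal):
        return self.entries.get(goal)          # one hash probe; no unbounded scanning

    def store(self, goal, result):
        if goal in self.entries:
            self.entries[goal] = result
            return
        if len(self.entries) >= self.max_entries:
            self.entries.popitem(last=False)   # assumed policy: drop the oldest entry
        self.entries[goal] = result

cache = BoundedCache(max_entries=2)
cache.store(("ancestor", "a", "b"), True)
cache.store(("ancestor", "a", "c"), True)
cache.store(("ancestor", "b", "c"), False)     # evicts the oldest entry to stay bounded
print(cache.lookup(("ancestor", "a", "b")))    # None: it was evicted
```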

17.
Streaming media applications on the Internet are constrained by network conditions such as delay and packet loss rate. This paper studies how caching proxies placed at the network edge can reduce these effects, and proposes a new cache management algorithm, NRC. When a user accesses a streaming service, the media object is obtained in two ways: part of the object is fetched from the proxy cache, while the remainder is delivered directly from the origin streaming server, thereby accelerating access to the stream and improving streaming quality of service; the algorithm is aware of both network characteristics and stream characteristics. Simulation experiments confirm that NRC, being network- and stream-aware, effectively reduces service delay and improves the overall quality of the streaming service.
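The abstract states only that part of each media object is served from the proxy cache and the rest from the origin server. A toy sketch of that split follows (often called prefix caching); the cached fraction is an assumed parameter here, whereas NRC itself adapts it to network and stream characteristics not detailed in the abstract.

```python
# Toy sketch of serving a stream partly from the edge cache and partly from the origin.
def serve_stream(object_id, total_bytes, cached_prefix_bytes, chunk=64_000):
    plan = []
    offset = 0
    while offset < total_bytes:
        end = min(offset + chunk, total_bytes)
        source = "proxy-cache" if end <= cached_prefix_bytes else "origin-server"
        plan.append((object_id, offset, end, source))
        offset = end
    return plan

for segment in serve_stream("news.ts", total_bytes=256_000, cached_prefix_bytes=128_000):
    print(segment)   # first half from the proxy cache, second half from the origin
```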

18.
A high-performance cooperative Web caching system, WebRing, is proposed. It includes a continuous-hash-based routing scheme for Web objects that guarantees any Web request reaches its target node with a single hash computation and at most one forwarding hop. In addition, a load-balancing algorithm that partitions the hash space according to node status markers greatly improves system throughput. The design eliminates the high latency and single points of failure caused by multi-level forwarding and repeated hash computations in traditional cooperative caching systems.
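The abstract gives the routing guarantee (one hash computation, at most one forwarding hop) but not the mechanism. The sketch below shows one plausible realization with a hash ring, purely as an assumption of how such routing could look rather than WebRing's actual scheme.

```python
# Sketch: route a Web request to its home cache node with a single hash computation.
# A request arriving at any node is forwarded at most once, to the owner of the key.
import hashlib
from bisect import bisect_right

class CacheRing:
    def __init__(self, nodes):
        # Place each node on a hash ring; an object belongs to the next node clockwise.
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, url):
        h = self._hash(url)                       # the single hash computation per request
        idx = bisect_right([p for p, _ in self.ring], h) % len(self.ring)
        return self.ring[idx][1]

ring = CacheRing(["cache-a", "cache-b", "cache-c"])
entry_node = "cache-a"                            # node that happened to receive the request
target = ring.owner("http://example.com/logo.png")
hops = 0 if target == entry_node else 1           # at most one forwarding hop
print(target, hops)
```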

19.
The delivery of multimedia over the Internet is affected by adverse network conditions such as high packet loss rate and long delay. This paper aims at mitigating such effects by leveraging client-side caching proxies. We present a novel cache architecture and associated cache management algorithms that turn edge caches into accelerators of streaming media delivery. This architecture allows partial caching of media objects and joint delivery from caches and origin servers. Most importantly, the caching algorithms are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit rate requirements, and the available bandwidth between clients and servers. Using Internet bandwidth models derived from proxy cache logs and measured over real Internet paths, we have conducted extensive simulations to evaluate the performance of various cache management algorithms. Our experiments demonstrate that network-aware caching algorithms can significantly reduce startup delay and improve stream quality. Our experiments also show that partial caching is particularly effective when bandwidth variability is not very high. Correspondence to: Shudong Jin. This research was supported in part by NSF (awards ANI-9986397, ANI-0095988, ANI-0205294 and EJA-0202067) and by IBM. Part of this work was done while the first author was at IBM Research in 2001.

20.
In-network caching is the key technique by which Named Data Networking (NDN) achieves efficient information retrieval and effectively reduces traffic on the Internet backbone. In-network caching adds caching as a universal function to every network node: when a user requests information, any network node (e.g., a router) that has cached the content can return it directly upon receiving the request, improving the efficiency of request handling. However, NDN's ubiquitous caching means that the delivery path from content publisher to user...

