Similar Documents
20 similar documents found
1.
Cooperative Caching Strategy in Mobile Ad Hoc Networks Based on Clusters
In this paper, we present a scheme, called Cluster Cooperative (CC), for caching in mobile ad hoc networks. In the CC scheme, the network topology is partitioned into non-overlapping clusters based on physical network proximity. On a local cache miss, each client looks for the data item within its cluster. If no client inside the cluster has cached the requested item, the request is forwarded to the next client on the routing path towards the server. A cache replacement policy, called Least Utility Value with Migration (LUV-Mi), is developed. The LUV-Mi policy is suitable for cooperation in a clustered ad hoc environment because it considers the performance of the entire cluster along with the performance of the local client. Simulation experiments show that the CC caching mechanism achieves significant improvements in cache hit ratio and average query latency in comparison with other caching strategies.
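The lookup flow summarized in this abstract (local cache, then cluster peers, then the routing path towards the server) can be pictured with a minimal Python sketch. The class and method names below are illustrative assumptions, not interfaces from the paper, and the LUV-Mi replacement policy is not modeled.

```python
# Illustrative sketch of the Cluster Cooperative (CC) lookup flow; all names are
# hypothetical, and cluster formation, routing, and LUV-Mi replacement are omitted.

class Cluster:
    def __init__(self):
        self.members = []


class Client:
    def __init__(self, client_id, cluster):
        self.client_id = client_id
        self.cluster = cluster
        cluster.members.append(self)
        self.local_cache = {}            # item_id -> data

    def resolve(self, item_id, path_to_server):
        # 1. Local cache lookup.
        if item_id in self.local_cache:
            return self.local_cache[item_id]
        # 2. Cluster lookup: ask peers in the same (non-overlapping) cluster.
        for peer in self.cluster.members:
            if peer is not self and item_id in peer.local_cache:
                return peer.local_cache[item_id]
        # 3. Cluster miss: forward the request to the next client on the routing
        #    path towards the server.
        if path_to_server:
            return path_to_server[0].resolve(item_id, path_to_server[1:])
        # 4. End of the modeled path: the request would reach the data server here.
        return None
```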

2.
This paper presents a caching algorithm that offers better reconstructed data quality to the requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on the dynamic caching probability, which is adjusted according to the priorities of content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of caching probability depends on the priorities of content, the multiplication factor adaptation, and the addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on a multiplication factor, popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
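As a rough illustration of how a caching probability could be adapted with a multiplication factor tied to the instantaneous cache-hit ratio and an addition factor tied to popularity and hit/miss ratios, consider the sketch below. The concrete formulas are placeholders and are not the ones derived in the paper.

```python
# Placeholder adaptation rule: the real multiplication/addition factors in the
# paper are computed differently; this only illustrates the overall structure.
def update_caching_probability(p, priority, hit_ratio, miss_ratio, popularity):
    """Adapt a router's caching probability for one content priority class."""
    # Multiplication factor driven by the instantaneous cache-hit ratio.
    m = 1.0 + (hit_ratio - 0.5)
    # Addition factor driven by the multiplication factor, the popularity of the
    # requested content, and the cache-hit / cache-miss ratios.
    a = 0.1 * m * popularity * (miss_ratio - hit_ratio)
    # Higher-priority content is cached more aggressively.
    p_new = p * m + priority * a
    # Keep the result a valid probability.
    return min(1.0, max(0.0, p_new))

# Example usage with illustrative measurements.
print(update_caching_probability(p=0.3, priority=1.0, hit_ratio=0.2,
                                 miss_ratio=0.8, popularity=0.7))
```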

3.
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co-operation in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses cached objects in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.

4.
5.
The caching of frequently accessed data items on the client side is an effective technique to improve performance in a mobile environment. Caching data in a wireless mobile computer can significantly reduce the bandwidth requirement. However, cache content needs to be validated; classical cache invalidation strategies are not suitable for mobile environments due to the disconnection frequency and mobility of the mobile clients. Attractive cache invalidation techniques are based on invalidation reports (IRs). However, IR-based cache invalidation schemes result in considerable consumption of uplink and downlink bandwidth. In this paper, we address these problems by presenting a new energy-efficient cache invalidation method for the wireless mobile environment, called the Adaptive Energy Efficient Cache Invalidation Scheme (AEECIS). The algorithm is adaptive since it changes the data dissemination strategy based on the current conditions. To reduce the bandwidth requirement, the server transmits in one of three modes: slow, fast, or super-fast. The mode is selected based on thresholds specified for time and for the number of clients requesting updated objects. An efficient implementation of AEECIS is presented, and simulations have been carried out to evaluate its caching effectiveness. The results demonstrate that it can substantially improve mobile caching by reducing the communication bandwidth (and thus energy consumption) for query processing. Also, compared to previous IR-based schemes, AEECIS can significantly reduce bandwidth consumption and the number of uplink requests.
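The three-mode dissemination rule described above (mode chosen from thresholds on elapsed time and on the number of clients requesting updated objects) might be sketched as follows. The threshold names and the exact combination rule are assumptions, not the AEECIS specification.

```python
# Hypothetical sketch of AEECIS-style mode selection; the actual thresholds and
# combination rule in the paper may differ.
def select_dissemination_mode(elapsed_time, waiting_clients,
                              time_threshold, client_threshold):
    """Choose the server broadcast mode: 'slow', 'fast', or 'super-fast'."""
    over_time = elapsed_time >= time_threshold
    many_clients = waiting_clients >= client_threshold
    if over_time and many_clients:
        return "super-fast"   # many clients have waited long for updated objects
    if over_time or many_clients:
        return "fast"
    return "slow"             # light load: conserve downlink bandwidth and client energy

# Example usage with illustrative thresholds.
print(select_dissemination_mode(elapsed_time=12, waiting_clients=40,
                                time_threshold=10, client_threshold=25))
```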

6.
To use and provision the storage resources of content centric networking (CCN) nodes effectively, this paper proposes a replacement-rate-based mechanism for dynamically borrowing cache space, built on top of a homogeneous cache allocation. Starting from the dynamic differences in how nodes use their storage space, the mechanism first justifies the soundness of borrowing cache resources; it then performs cache borrowing dynamically according to each node's demand for storage, assigning relatively idle storage resources to the nodes that need them more, in exchange for improved caching performance at overloaded nodes. The mechanism reduces the hop count of content requests and raises the cache hit ratio, trading a small extra cost for a significant reduction in content-request overhead and improving overall storage utilization. Simulation results verify its effectiveness.

7.
A Cooperative Caching and Routing Mechanism Based on Local Request Similarity in Named Data Networking
To cache and exploit response content efficiently in Named Data Networking (NDN), this paper proposes a cooperative caching and routing mechanism based on the local similarity of content request distributions. When making caching decisions, it combines redundancy elimination along the vertical request path with content placement within a horizontal local region. In the vertical direction, a path caching strategy based on the maximum content activity factor determines the hottest request region along the forwarding path; in the horizontal direction, a consistent-hashing cooperative caching scheme stores response content at designated nodes within the local region. During route lookup, local node caches are brought into the forwarding decision, and local cache lookups are performed dynamically according to the content activity level, increasing the probability that a content request is answered nearby. The mechanism reduces content request latency and cache redundancy and raises the cache hit ratio, trading a small extra cost for a large reduction in content-request overhead. Simulation results verify its effectiveness.
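The horizontal placement step uses consistent hashing to decide which node in a local region stores a given response. A generic consistent-hashing sketch of that idea is shown below; the ring construction and parameters are illustrative, not the paper's design.

```python
# Minimal consistent-hashing placement sketch: map each content name to one
# designated caching node inside a local region. Purely illustrative.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, node_ids, virtual_nodes=16):
        self._ring = []                       # sorted list of (hash, node_id)
        for node in node_ids:
            for v in range(virtual_nodes):
                self._ring.append((self._hash(f"{node}#{v}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, content_name):
        """Return the node in the local region responsible for caching content_name."""
        h = self._hash(content_name)
        idx = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[idx][1]

# Usage: direct the cached copy of a Data packet to one node in the region.
ring = ConsistentHashRing(["node-A", "node-B", "node-C"])
print(ring.node_for("/video/clip42"))
```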

8.
The fifth-generation (5G) wireless networks have to deal with high data rates and stringent latency requirements due to the massive invasion of connected devices and data-hungry applications. Edge caching is a promising technique to overcome these challenges by prefetching content closer to the end users in the edge node's local storage. In this paper, we analyze the performance of edge caching 5G networks with the aid of satellite communication systems. First, we investigate satellite-aided edge caching systems in two promising use cases: (a) in dense urban areas and (b) in sparsely populated regions, e.g., rural areas. Second, we study the effectiveness of satellite systems via the proposed satellite-aided caching algorithm, which can be used in three configurations: (a) mono-beam satellite, (b) multi-beam satellite, and (c) hybrid mode. Third, the proposed caching algorithm is evaluated using both empirical Zipf-distribution data and the more realistic MovieLens dataset. Last but not least, the proposed caching scheme is implemented and tested by our developed demonstrators, which allow real-time analysis of the cache hit ratio and cost.
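Since the evaluation above relies on a Zipf popularity model, a minimal sketch of filling an edge cache with the most popular contents under such a model is given below; the skew parameter and cache size are illustrative inputs, not values from the paper.

```python
# Illustrative most-popular-first placement under a Zipf request model.
import numpy as np

def zipf_popularity(num_contents, skew=0.8):
    """Normalized Zipf request probabilities for contents ranked 1..num_contents."""
    ranks = np.arange(1, num_contents + 1)
    weights = 1.0 / ranks ** skew
    return weights / weights.sum()

def fill_cache(num_contents, cache_size, skew=0.8):
    """Cache the highest-probability contents; return their ids and the offline hit ratio."""
    p = zipf_popularity(num_contents, skew)
    cached = np.argsort(p)[::-1][:cache_size]      # indices of the most popular contents
    return cached, p[cached].sum()                 # upper bound on the cache-hit ratio

cached, hit = fill_cache(num_contents=1000, cache_size=100)
print(f"cache hit ratio under Zipf(0.8): {hit:.2f}")
```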

9.
In mobile wireless data access networks, remote data access is expensive in terms of bandwidth consumption. An efficient caching scheme can reduce the amount of data transmission and, hence, bandwidth consumption. However, an update event makes the associated cached data objects obsolete and useless for many applications. Data access frequency and update frequency play a crucial role in deciding which data objects should be cached. Intuitively, frequently accessed but infrequently updated objects should be given higher preference for remaining in the cache; other objects should have lower preference, be evicted, or not be cached at all, to accommodate higher-preference objects. In this paper, we proposed Optimal Update-based Replacement, a replacement or eviction scheme for cache management in wireless data networks. To facilitate the replacement scheme, we also presented two enhanced cache access schemes, named Update-based Poll-Each-Read and Update-based Call-Back. The proposed cache management schemes were supported with strong theoretical analysis. Both analysis and extensive simulation results were given to demonstrate that the proposed schemes guarantee an optimal amount of data transmission by increasing the number of effective hits and outperform the popular Least Frequently Used scheme in terms of both effective hits and communication cost. Copyright © 2011 John Wiley & Sons, Ltd.
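The intuition stated above, keep objects that are read often but updated rarely, can be illustrated with a toy eviction rule; the scoring formula is an assumption for illustration and is not the paper's Optimal Update-based Replacement policy.

```python
# Toy update-aware eviction: evict the object with the lowest accesses-per-update value.
# The formula is illustrative, not the scheme analyzed in the paper.
def eviction_victim(cache_stats):
    """cache_stats: dict object_id -> (access_rate, update_rate)."""
    def keep_score(stats):
        access_rate, update_rate = stats
        # An update invalidates the cached copy, so value grows with accesses per update.
        return access_rate / (update_rate + 1e-9)
    return min(cache_stats, key=lambda oid: keep_score(cache_stats[oid]))

# Example: object "b" is updated far more often than it is read, so it is evicted first.
print(eviction_victim({"a": (10.0, 0.1), "b": (5.0, 8.0), "c": (2.0, 0.5)}))
```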

10.
Internet-based mobile ad hoc network (Imanet) is an emerging technique that combines a wired network (e.g. the Internet) and a mobile ad hoc network (Manet) to develop a ubiquitous communication infrastructure. However, in fulfilling users' demand to access various kinds of information, an Imanet faces several limitations, such as limited accessibility to the wired Internet, insufficient wireless bandwidth, and longer message latency. In this paper, we address the issues involved in information search and access in Imanets. An aggregate caching mechanism and a broadcast-based Simple Search (SS) algorithm are proposed for improving information accessibility and reducing average communication latency in Imanets. As a part of the aggregate cache, a cache admission control policy and a cache replacement policy, called Time and Distance Sensitive (TDS) replacement, are developed to reduce the cache miss ratio and improve information accessibility. We evaluate the impact of caching, cache management, and the number of access points that are connected to the Internet through extensive simulation. The simulation results indicate that the proposed aggregate caching mechanism can significantly improve Imanet performance in terms of throughput and average number of hops to access data items.
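One way to picture a "time and distance sensitive" eviction value, preferring items that were used recently and would be expensive to re-fetch over many hops, is the toy sketch below; the weighting and parameters are assumptions, not the TDS formula from the paper.

```python
# Toy eviction value in the spirit of TDS; the actual TDS policy is defined in the
# paper and is not reproduced here.
def keep_value(time_since_access, hops_to_data_source, alpha=0.5, max_hops=10):
    """Higher value = more worth keeping in the aggregate cache."""
    recency = 1.0 / (1.0 + time_since_access)            # decays for idle items
    distance = min(hops_to_data_source / max_hops, 1.0)  # costlier to re-fetch => keep
    return alpha * recency + (1.0 - alpha) * distance

def choose_victim(items):
    """items: dict item_id -> (time_since_access, hops_to_data_source)."""
    return min(items, key=lambda i: keep_value(*items[i]))

# Example: "b" was accessed long ago and sits close to an access point, so it is evicted.
print(choose_victim({"a": (2.0, 8), "b": (50.0, 1), "c": (5.0, 4)}))
```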

11.
12.
GroCoca: group-based peer-to-peer cooperative caching in mobile environment
In a mobile cooperative caching environment, we observe the need for cooperating peers to cache useful data items together, so as to improve the cache hit rate from peers. This can be achieved by capturing the data requirements of individual peers in conjunction with their mobility patterns, which we realize via a GROup-based COoperative CAching scheme (GroCoca). In GroCoca, we define a tightly-coupled group (TCG) as a collection of peers that possess similar mobility patterns and display similar data affinity. A family of algorithms is proposed to discover and maintain all TCGs dynamically. Furthermore, two cooperative cache management protocols, namely cooperative cache admission control and replacement, are designed to control data replicas and improve data accessibility in TCGs. A cache signature scheme is also adopted in GroCoca in order to provide information for the mobile clients to determine whether their TCG members are likely caching their desired data items and to perform cooperative cache replacement. Experimental results show that GroCoca outperforms the conventional caching scheme and the standard COoperative CAching scheme (COCA) in terms of access latency and global cache hit ratio. However, GroCoca generally incurs higher power consumption.

13.
Recently, content-centric networking (CCN) has become one of the important technologies for enabling future networks. Along with its recognized potential as a content retrieval and dissemination solution, CCN has also recently been considered a promising architecture for the Internet of Things (IoT) because of two main features: name-based routing and in-network caching. However, IoT is characterized by challenging features: the small storage capacity of resource-constrained devices, due to cost and energy limitations, and especially transient data that impose stringent requirements on information freshness. As a consequence, the intrinsic caching mechanisms of the CCN approach do not suit IoT domains well; hence, providing a specific caching policy at intermediate nodes is a very challenging task. This paper proposes an effective multi-attribute in-network caching decision algorithm that performs a caching strategy in a CCN-IoT network by considering a set of crucial attributes, including the content store size, the hop count, key temporal properties such as data freshness, and the node energy level. Simulation results show that our proposed approach outperforms two cache management schemes (probabilistic Least Recently Used and AlwaysCache–First In First Out) in terms of improving the total hit rate, reducing data retrieval delay, and enhancing content reusability in the IoT environment.
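A multi-attribute caching decision of the kind described above could be sketched as a weighted score over the listed attributes; the weights, normalizations, and threshold below are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative multi-attribute caching decision for an intermediate CCN-IoT node;
# weights and threshold are hypothetical.
def should_cache(free_store_ratio, hop_count, freshness_remaining,
                 energy_level, max_hops=10, threshold=0.5):
    """Return True if the node should cache the incoming Data packet."""
    score = (
        0.25 * free_store_ratio                      # room left in the content store (0..1)
        + 0.25 * min(hop_count / max_hops, 1.0)      # farther from the producer => cache
        + 0.30 * freshness_remaining                 # transient IoT data: remaining freshness (0..1)
        + 0.20 * energy_level                        # node battery level (0..1)
    )
    return score >= threshold

# Example usage with illustrative attribute values.
print(should_cache(free_store_ratio=0.6, hop_count=7,
                   freshness_remaining=0.8, energy_level=0.5))
```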

14.
Cooperative caching is an important technique to support pervasive Internet access. In order to ensure valid data access, the cache consistency must be maintained properly. However, this problem has not been sufficiently studied in mobile computing environments, especially those with ad hoc networks. There are two essential issues in cache consistency maintenance: consistency control initiation and data update propagation. Consistency control initiation not only decides the cache consistency provided to the users, but also impacts the consistency maintenance cost. This issue becomes more challenging in asynchronous and fully distributed ad hoc networks. To this end, we propose the predictive consistency control initiation (PCCI) algorithm, which adaptively initiates consistency control based on its online predictions of forthcoming data updates and cache queries. In order to efficiently propagate data updates through multi-hop wireless connections, the hierarchical data update propagation (HDUP) algorithm is proposed. Theoretical analysis shows that cooperation among the caching nodes facilitates data update propagation. Extensive simulations are conducted to evaluate performance of both PCCI and HDUP. Evaluation results show that PCCI cost-effectively initiates consistency control even when faced with dynamic changes in data update rate, cache query rate, node speed, and number of caching nodes. The evaluation results also show that HDUP saves cost for data update propagation by up to 66%. Copyright © 2009 John Wiley & Sons, Ltd.

15.
We consider the problem of bit error rate (BER) degradation caused by the power gain imbalance between the horizontal (H)-polarization and vertical (V)-polarization components in an orthogonal dual-polarization transmission system. To alleviate this BER degradation, we propose a non-orthogonal polarization-domain rotation scheme in which the axes of the H-polarization and V-polarization components are rotated by different angles at the transmitter and de-rotated at the receiver. In addition, in order to assess the effectiveness of the polarization-domain rotation scheme, we derive the closed-form BER expression under a practical dual-polarized channel model, which is represented by the cross-polarization ratio and the co-polarization ratio (CPR). We also derive approximated BER expressions for the two asymptotic values of CPR: balanced CPR and infinite CPR. With the derived BER expressions, we find the optimal rotation angles that jointly minimize the BER. According to the numerical results, about a 3 dB Eb/N0 gain is obtained at a BER of 10^-4 and a CPR of 10 dB by the polarization-domain rotation scheme with optimal rotation angles, compared with conventional orthogonal dual-polarization transmission. Copyright © 2015 John Wiley & Sons, Ltd.

16.
张涛  李强  张继良  张蔡霞 《电子学报》2017,45(11):2649-2655
To ease the tension between massive mobile traffic data and the capacity-limited backhaul links of the radio access network, this paper proposes a cooperative content caching architecture for software-defined radio access networks (SD-RAN). Under the control and management of the macro base station (MBS), small-cell base stations (SBSs) can store some highly popular content in their storage units in an ordered fashion. To address the limited storage space of an SBS, a cooperative content caching algorithm under the SD-RAN architecture is further proposed. In this algorithm, each SBS cache is split into two parts: (1) one part stores the common content that is most popular network-wide, to guarantee the local hit ratio of each small cell; (2) the other stores differentiated, relatively popular content to promote cooperation among the SBSs within the MBS. On this basis, a closed-form expression for the split parameter that minimizes the average content acquisition cost is derived analytically. Simulation results show that the algorithm significantly reduces the average content acquisition cost of SD-RAN under various system parameters.
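The two-part SBS cache split described above can be sketched as follows; the split parameter name (beta) and the striping of the cooperative part across SBSs are assumptions for illustration, and the paper's closed-form optimal split is not reproduced here.

```python
# Sketch of a two-part SBS cache: a common part shared by all SBSs plus a
# differentiated cooperative part. Parameter names are hypothetical.
def partition_sbs_cache(ranked_contents, cache_size, sbs_index, num_sbs, beta=0.5):
    """ranked_contents: content ids sorted by global popularity (most popular first)."""
    common_size = int(beta * cache_size)          # part 1: identical across all SBSs
    coop_size = cache_size - common_size          # part 2: differs per SBS for cooperation
    common = ranked_contents[:common_size]
    # Stripe the remaining popular contents across SBSs so neighbours hold different items.
    tail = ranked_contents[common_size:]
    cooperative = tail[sbs_index::num_sbs][:coop_size]
    return common, cooperative

# Example: 3 SBSs, 4 cache slots each, contents ranked 0 (most popular) to 11.
print(partition_sbs_cache(list(range(12)), cache_size=4, sbs_index=1, num_sbs=3))
```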

17.
With the increase in the number of smart wireless devices, the demand for higher data rates also grows, which puts immense pressure on the network. A vast majority of this demand comes from video files, and it is observed that only a few popular video files are requested frequently during any specified time interval. Recent studies have shown that caching provides better performance, as it minimizes the network load by avoiding fetching the same files multiple times from the server. In this paper, we propose to combine two ideas: proactive caching of files and content-based pricing in macro-femto heterogeneous networks. The femtocell access point (FAP) is allowed to manipulate its users' demand through content-based pricing and to serve the users' requests by proactively downloading suitable content into its cache memory, which reduces the load on the femtocell. In addition, an incentive mechanism is proposed which encourages the FAP to help macrocell users within its coverage zone by allowing access to its cached content, thereby reducing the macrocell load. The proposed content-based pricing and proactive caching scheme for femtocells is modeled as a Stackelberg game between the macrocell base station and the FAP to jointly maximize both of their utilities. Performance analysis of the scheme is presented for a single-femtocell scenario and compared with a conventional flat-pricing-based scheme via numerical examples. The results demonstrate a significant reduction in network load using our proposed scheme.

18.
Cost-Effective Caching for Mobility Support in IEEE 802.1X Frameworks
This paper is concerned with caching support at access points (APs) for fast handoff within IEEE 802.11 networks. A common flavor of current schemes is to let a mobile station preauthenticate or distribute the security context of the station proactively to neighboring APs. Each target AP caches the received context beforehand and can save itself backend-network authentication if the station reassociates. We present an approach to improving cache effectiveness under the least recently used (LRU) replacement policy, additionally allowing for distinct cache miss penalties indicative of authentication delay. We leverage widely used LRU caching techniques to effect a new model in which high-penalty cache entries are prevented from being prematurely evicted under the conventional replacement policy, so as to save frequent, expensive authentications with remote sites. This is accomplished by introducing software-generated reference requests that trigger the cache hardware machinery in APs to refresh certain entries in an automated manner. Performance evaluations are conducted using simulation and analytical modeling. Performance results show that our approach, when compared with the base LRU scheme, reduces authentication delay by more than 51 percent and cache miss ratio by over 28 percent on average. Quantitative and qualitative discussions indicate that our approach is applicable in pragmatic settings.
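The idea of protecting high-penalty entries in an LRU cache by issuing artificial "touch" references can be pictured with the minimal sketch below; the data structure, penalty threshold, and method names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative LRU cache with penalty-aware refresh via software-generated references.
from collections import OrderedDict

class PenaltyAwareLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # station_id -> (security_context, miss_penalty)

    def access(self, station_id):
        """Normal reference: promote the entry on a hit, as plain LRU does."""
        if station_id in self.entries:
            self.entries.move_to_end(station_id)
            return self.entries[station_id][0]
        return None                           # miss: backend authentication would follow

    def insert(self, station_id, context, miss_penalty):
        if station_id in self.entries:
            self.entries.move_to_end(station_id)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[station_id] = (context, miss_penalty)

    def refresh_high_penalty(self, penalty_threshold):
        """Software-generated references: re-touch entries whose re-authentication
        would be expensive, so plain LRU does not evict them prematurely."""
        for sid, (_, penalty) in list(self.entries.items()):
            if penalty >= penalty_threshold:
                self.entries.move_to_end(sid)
```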

19.
This work proposes a replication scheme that is implemented on top of a previously proposed system for MANETs that caches submitted queries in special nodes, called query directories, and uses them to locate the data (responses) that are stored in the nodes that first request them, called caching nodes. The system, which was named the distributed cache invalidation method (DCIM), includes client-based mechanisms for keeping the cached data consistent with the data source. In this work, we extend DCIM to handle cache replicas inside the MANET. For this purpose, we utilize a push-based approach within the MANET to propagate the server updates to replicas inside the network. The result is a hybrid approach that utilizes the benefits of pull approaches for client-server communication and those of push approaches inside the network between the replicas. The approach is analyzed analytically, and the appropriate number of replicas is obtained; it was concluded that full replication of the indices of data items at the query directory and two-partial replication of the data items themselves make the most sense. Simulation results based on ns2 demonstrate the ability of the added replication scheme to lower delays and improve the hit ratio at the cost of mild increases in overhead traffic. Copyright © 2013 John Wiley & Sons, Ltd.

20.
The explosive growth of mobile data traffic has led cellular operators to seek low-cost alternatives for cellular traffic off-loading. In this paper, we consider a content delivery network where a vehicular communication network composed of roadside units (RSUs) is integrated into a cellular network to serve as an off-loading platform. Each RSU, subject to its storage capacity, caches a subset of the contents of the central content server. Allocating a suitable subset of contents to each RSU cache so as to maximize the hit ratio of vehicle requests is the key problem targeted in this study. First, we propose a centralized solution in which we model the cache content placement problem as a submodular maximization problem and show that it is NP-hard. Second, we propose a distributed cooperative caching scheme, in which RSUs in an area periodically share information about their contents locally and thus update their caches. To this end, we model the distributed caching problem as a strategic resource allocation game that achieves at least 50% of the optimal solution. Finally, we evaluate our scheme using an urban mobility simulator under realistic conditions. On average, the results show an improvement of 8% in the hit ratio of the proposed method compared with other well-known cache content placement approaches.
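For the capacity-constrained placement problem formulated above as submodular maximization, a generic greedy sketch is given below. Greedy selection is a standard heuristic for such problems; it is not necessarily the paper's centralized solution, and the demand weights and capacities are illustrative inputs.

```python
# Generic greedy cache content placement under per-RSU capacity constraints.
# The model (demand weights per RSU/content pair) is an illustrative assumption.
def greedy_placement(rsus, contents, demand, capacity):
    """
    rsus: list of RSU ids; contents: list of content ids
    demand[(rsu, content)]: request weight that a content receives at an RSU
    capacity[rsu]: number of contents the RSU can store
    Returns placement: rsu -> set of cached contents.
    """
    placement = {r: set() for r in rsus}
    candidates = {(r, c) for r in rsus for c in contents}
    while candidates and any(len(placement[r]) < capacity[r] for r in rsus):
        # Pick the (RSU, content) pair that serves the largest remaining demand.
        r, c = max(candidates, key=lambda rc: demand.get(rc, 0.0))
        candidates.discard((r, c))
        if len(placement[r]) < capacity[r]:
            placement[r].add(c)
    return placement
```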
