Similar Documents
20 similar documents found.
1.
Integrating Web caching and Web prefetching in client-side proxies   (cited 2 times in total: 0 self-citations, 2 by others)
Web caching and Web prefetching are two important techniques for reducing the response time perceived by users. By integrating Web caching and Web prefetching, the two techniques can complement each other: Web caching exploits the temporal locality of Web objects, whereas Web prefetching utilizes their spatial locality. Without careful design, however, the integration of these two techniques can cause significant performance degradation to each other. In view of this, we propose an innovative cache replacement algorithm that not only considers the caching effect in the Web environment but also evaluates the prefetching rules provided by various prefetching schemes. Specifically, we formulate a normalized profit function to evaluate the profit of caching an object (i.e., either a non-implied object or an object implied by some prefetching rule). Based on this normalized profit function, we devise a Web cache replacement algorithm referred to as Algorithm IWCP (Integration of Web Caching and Prefetching). Using an event-driven simulation, we evaluate the performance of Algorithm IWCP under several circumstances. The experimental results show that Algorithm IWCP consistently outperforms the companion schemes on various performance metrics.
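As a rough illustration of the profit-based replacement idea, the sketch below evicts the object with the lowest normalized profit, discounting the access probability of a prefetched ("implied") object by the confidence of the rule that implied it. The field names and the exact formula are our assumptions, not the paper's definition.

def normalized_profit(obj):
    # Prefetched objects carry a "rule_confidence" that discounts their
    # access probability; non-implied objects default to a factor of 1.0.
    prob = obj["access_prob"] * obj.get("rule_confidence", 1.0)
    return prob * obj["fetch_cost"] / obj["size"]

def evict_until_fit(cache, capacity, incoming_size):
    # cache: {url: {"size": ..., "fetch_cost": ..., "access_prob": ...,
    #               optional "rule_confidence": ...}}
    # Evict the lowest-profit objects until the incoming object fits.
    used = sum(o["size"] for o in cache.values())
    while cache and used + incoming_size > capacity:
        victim = min(cache, key=lambda url: normalized_profit(cache[url]))
        used -= cache[victim]["size"]
        del cache[victim]
    return cache

A real implementation would also maintain the profit values incrementally instead of recomputing them on every eviction.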

2.
This paper proposes a novel contribution in the area of Web caching, and in particular Web cache replacement: an intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated through trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common policies for evaluating Web caching performance. The simulation results reveal that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively. In terms of byte hit ratio (BHR), performance improves by up to 2.57% and 26.25%, and in terms of latency saving ratio (LSR) by 8.3% and 18.9% over LRU and LFU, respectively.
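A minimal sketch of the split-cache structure, assuming a pluggable scoring function in place of the paper's neuro-fuzzy system; class and method names are illustrative only.

from collections import OrderedDict

class TwoLevelClientCache:
    # Short-term cache managed by LRU; its LRU victims are demoted into a
    # long-term cache whose own victims are chosen by score_fn (a stand-in
    # for the paper's neuro-fuzzy evaluation of object worth).
    def __init__(self, short_capacity, long_capacity, score_fn):
        self.short = OrderedDict()      # key -> object, kept in LRU order
        self.long = {}                  # key -> object
        self.short_capacity = short_capacity
        self.long_capacity = long_capacity
        self.score_fn = score_fn        # higher score = more worth keeping

    def admit(self, key, obj):
        self.short[key] = obj
        self.short.move_to_end(key)
        if len(self.short) > self.short_capacity:
            victim_key, victim = self.short.popitem(last=False)   # LRU victim
            self._admit_long(victim_key, victim)

    def _admit_long(self, key, obj):
        self.long[key] = obj
        if len(self.long) > self.long_capacity:
            worst = min(self.long, key=lambda k: self.score_fn(self.long[k]))
            del self.long[worst]

    def get(self, key):
        if key in self.short:
            self.short.move_to_end(key)     # refresh LRU position on a hit
            return self.short[key]
        return self.long.get(key)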

3.
Web prefetching is an attractive solution for reducing both the network resources consumed by Web services and the access latencies perceived by Web users. Unlike Web caching, which exploits temporal locality, Web prefetching utilizes the spatial locality of Web objects: it fetches objects that are likely to be accessed in the near future and stores them in advance. In this context, a sophisticated combination of the two techniques may yield significant improvements in the performance of the Web infrastructure. Considering that several caching policies have been proposed in the past, the challenge is to extend them using data mining techniques. In this paper, we present a clustering-based prefetching scheme in which a graph-based clustering algorithm identifies clusters of “correlated” Web pages based on users' access patterns. The scheme can be integrated easily into a Web proxy server, improving its performance. Through a simulation environment using a real data set, we show that the proposed integrated framework is robust and effective in improving the performance of the Web caching environment.
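The sketch below shows one simple way to realize the idea from session logs: build a page co-occurrence graph and treat its connected components as clusters of correlated pages, then prefetch the rest of the cluster the requested page belongs to. The weight threshold, names, and the choice of connected components as the clustering step are our assumptions; the paper's actual graph-clustering algorithm may differ.

from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(sessions, min_weight=2):
    # sessions: list of lists of page URLs visited in one user session.
    weight = defaultdict(int)
    for session in sessions:
        for a, b in combinations(set(session), 2):
            weight[frozenset((a, b))] += 1
    graph = defaultdict(set)
    for pair, w in weight.items():
        if w >= min_weight:             # keep only sufficiently frequent pairs
            a, b = tuple(pair)
            graph[a].add(b)
            graph[b].add(a)
    return graph

def clusters(graph):
    # Connected components stand in for the paper's graph clustering step.
    seen, result = set(), []
    for start in list(graph):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        result.append(comp)
    return result

def prefetch_candidates(page, page_clusters):
    # Prefetch the other members of the cluster containing the requested page.
    for comp in page_clusters:
        if page in comp:
            return comp - {page}
    return set()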

4.
The diverse server, client, and file object types used today slow Web performance. Caching alone offers limited relief because it cannot easily handle many different file types. One solution combines caching with Web prefetching: obtaining the Web data a client might need based on that client's past surfing activity. The prediction-by-partial-match model, for example, makes prefetching decisions by reviewing the URLs clients have accessed on a particular server and structuring them in a Markov predictor tree. The authors propose a variation of this model that builds common surfing patterns and regularities into the tree.
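A compact sketch of a generic Markov predictor tree over URL access sequences, with PPM-style back-off from the longest matching context; this illustrates the base model, not the authors' specific variation, and the order and names are assumptions.

class MarkovNode:
    def __init__(self):
        self.count = 0
        self.children = {}

def train(root, session, max_order=2):
    # Insert every subsequence of length <= max_order + 1 starting at each
    # position of the session, counting how often each context occurs.
    for i in range(len(session)):
        node = root
        for url in session[i:i + max_order + 1]:
            node = node.children.setdefault(url, MarkovNode())
            node.count += 1

def predict(root, recent, max_order=2):
    # Try the longest matching context first, backing off to shorter ones.
    for order in range(min(max_order, len(recent)), 0, -1):
        node, matched = root, True
        for url in recent[-order:]:
            if url not in node.children:
                matched = False
                break
            node = node.children[url]
        if matched and node.children:
            return max(node.children, key=lambda u: node.children[u].count)
    return None

# Example: after train(root, ["/index", "/news", "/sports"]),
# predict(root, ["/index", "/news"]) returns "/sports".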

5.
A data mining algorithm for generalized Web prefetching   (cited 8 times in total: 0 self-citations, 8 by others)
Predictive Web prefetching refers to the mechanism of deducing a client's forthcoming page accesses from its past accesses. In this paper, we present a new context for interpreting Web prefetching algorithms as Markov predictors and identify the factors that affect their performance. We propose a new algorithm, called WMo, which is based on data mining and is proven to be a generalization of existing ones. It is designed to address their specific limitations, and its characteristics cover all of the identified factors. It compares favorably with previously proposed algorithms, and it efficiently handles the increased number of candidates. We present a detailed performance evaluation of WMo with synthetic and real data. The experimental results show that WMo can provide significant improvements over previously proposed Web prefetching algorithms.

6.
Retrospective adaptive prefetching for interactive Web GIS applications   (cited 1 time in total: 0 self-citations, 1 by others)
A major task of a Web GIS (Geographic Information System) is to transfer map data to client applications over the Internet, which can be costly. Various solutions exist to improve this inefficient process; caching the responses to requests on the client side is the most commonly implemented one. However, this method may not be adequate by itself. Besides caching responses, predicting a client's next likely requests and updating the cache with the responses to those requests together provide a remarkable performance improvement. This procedure is called “prefetching” and makes caching mechanisms more effective and efficient. This paper proposes an efficient prefetching algorithm, Retrospective Adaptive Prefetch (RAP), built on a heuristic method that considers the previous actions of a given user. The algorithm reduces user-perceived response time and improves user navigation efficiency. Additionally, it adjusts the cache size automatically based on the memory size of the client's machine. RAP is compared with four other prefetching algorithms, and the experiments show that it provides better performance enhancements than the other methods.

7.
In this work we propose a prediction-by-partial-matching (PPM) technique to anticipate and prefetch web pages and files accessed via browsers. The goal is to reduce the delays incurred when loading the web pages and files that users visit. Since the number of visited web pages can be high, tree-based and table-based implementations can be inefficient in terms of representation. Therefore, we present an efficient way to implement prediction by partial matching as simple searches in the observation sequence. This lets us use a large number of states, long web-page access histories, and higher-order Markov chains at low complexity. Timing evaluations show that the proposed PPM implementation is significantly more efficient than previous implementations. We have enhanced the predictor with a confidence mechanism, implemented as saturating counters, which dynamically classifies web pages as predictable or unpredictable. Predictions are generated selectively, only from web pages classified as predictable, thus improving accuracy. The experiments show that prediction by partial matching of order 4 with a history of 500 web pages is optimal.
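The sketch below captures the two ingredients described above under our own simplifying assumptions: PPM realized as backward searches in the raw access history (no tree or table), and a per-page 2-bit saturating counter that gates prediction. The class name, counter thresholds, and defaults are illustrative, not the paper's exact design.

from collections import defaultdict

class SequencePPM:
    def __init__(self, order=4, history_limit=500):
        self.order = order
        self.history_limit = history_limit
        self.history = []
        self.confidence = defaultdict(lambda: 2)   # 0..3; start weakly predictable

    def _search_prediction(self):
        # Longest-context-first: find the most recent earlier occurrence of the
        # current context in the history and return the page that followed it.
        for k in range(min(self.order, len(self.history) - 1), 0, -1):
            context = self.history[-k:]
            for i in range(len(self.history) - k - 1, -1, -1):
                if self.history[i:i + k] == context:
                    return self.history[i + k]
        return None

    def access(self, page):
        prediction = self._search_prediction()
        last = self.history[-1] if self.history else None
        if prediction is not None and last is not None:
            # Update the saturating counter of the page we predicted from.
            if prediction == page:
                self.confidence[last] = min(3, self.confidence[last] + 1)
            else:
                self.confidence[last] = max(0, self.confidence[last] - 1)
        self.history.append(page)
        self.history = self.history[-self.history_limit:]

    def prefetch(self):
        # Predict only from pages currently classified as predictable.
        last = self.history[-1] if self.history else None
        if last is None or self.confidence[last] < 2:
            return None
        return self._search_prediction()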

8.
Users of a Web site usually perform their interest-oriented actions by clicking or visiting Web pages, and these actions are traced in access log files. Clustering Web users' access patterns can capture common user interests in a Web site and, in turn, build user profiles for advanced Web applications such as Web caching and prefetching. Conventional Web usage mining techniques for clustering Web user sessions can discover usage patterns directly, but cannot identify the latent factors or hidden relationships in users' navigational behaviour. In this paper, we propose an approach based on a vector space model, called Random Indexing, to discover such intrinsic characteristics of Web users' activities. The underlying factors are then utilised for clustering individual user navigational patterns and creating common user profiles, and the clustering results are used to predict and prefetch Web requests for the grouped users. We demonstrate the usability and superiority of the proposed Web user clustering approach through experiments on a real Web log file. The clustering and prefetching tasks are evaluated by comparison with previous studies, demonstrating better clustering performance and higher prefetching accuracy.
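A small sketch of the Random Indexing step under our own assumptions: each page receives a fixed sparse ternary index vector, and a user's context vector is the sum of the index vectors of the pages visited. The dimensionality, sparsity, and function names are illustrative; the resulting vectors (compared by cosine similarity) could then be fed to any standard clustering algorithm.

import random

def index_vector(dim, nonzeros, seed):
    # Sparse ternary random vector: a few +1/-1 entries, the rest zero.
    rng = random.Random(seed)
    vec = [0.0] * dim
    for pos in rng.sample(range(dim), nonzeros):
        vec[pos] = rng.choice((1.0, -1.0))
    return vec

def user_context_vectors(sessions, dim=300, nonzeros=10):
    # sessions: {user_id: [page, page, ...]}
    page_index, contexts = {}, {}
    for user, pages in sessions.items():
        ctx = [0.0] * dim
        for page in pages:
            if page not in page_index:
                page_index[page] = index_vector(dim, nonzeros, seed=hash(page))
            ctx = [c + x for c, x in zip(ctx, page_index[page])]
        contexts[user] = ctx
    return contexts

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0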

9.
This paper presents a PageRank-based prefetching technique for accesses to Web page clusters. The approach uses the link structure of a requested page to determine the “most important” linked pages and to identify the page(s) to be prefetched. The underlying premise is that in the case of cluster accesses, the next pages requested by users of the Web server are typically based on the current and previous pages requested; furthermore, if the requested pages have many links to some “important” page, that page has a higher probability of being requested next. An experimental evaluation of the prefetching mechanism is presented using real server logs. The results show that the PageRank-based scheme does better than random prefetching for clustered accesses, with hit rates of 90% in some cases.
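The sketch below illustrates the general idea under simplifying assumptions: a basic power-iteration PageRank over the cluster's link graph (without dangling-node redistribution), followed by selection of the highest-ranked pages linked from the requested page. It is not the paper's exact scheme, and all names are ours.

def pagerank(link_graph, damping=0.85, iterations=50):
    # link_graph: {page: set of pages it links to}
    pages = set(link_graph)
    for links in link_graph.values():
        pages |= set(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        nxt = {p: (1.0 - damping) / len(pages) for p in pages}
        for page in pages:
            links = link_graph.get(page, set())
            if not links:
                continue                  # dangling pages simply leak rank here
            share = damping * rank[page] / len(links)
            for target in links:
                nxt[target] += share
        rank = nxt
    return rank

def pages_to_prefetch(requested, link_graph, rank, k=2):
    # Prefetch the k highest-ranked pages linked from the requested page.
    linked = link_graph.get(requested, set())
    return sorted(linked, key=lambda p: rank.get(p, 0.0), reverse=True)[:k]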

10.
All uses of HTML forms may benefit from validation of the specified input field values. Simple validation matches individual values against specified formats, while more advanced validation may involve interdependencies between form fields. There is currently no standard for specifying or implementing such validation; today, CGI programmers often use Perl libraries for simple server-side validation or write customized JavaScript for client-side validation. We present PowerForms, an add-on to HTML forms that allows a purely declarative specification of input formats and sophisticated interdependencies between form fields. While our work may be seen as inspiration for a future extension of HTML, it is also available to CGI programmers today through a preprocessor that translates a PowerForms document into a combination of standard HTML and JavaScript that works on all combinations of platforms and browsers. The definitions of PowerForms formats are syntactically disjoint from the form itself, which allows a modular development style where the form is perhaps generated automatically by other tools and the formats and interdependencies are added separately. PowerForms has a clean semantics defined through a fixed-point process that resolves the interdependencies between all field values. Text fields are equipped with status icons (by default traffic lights) that continuously reflect the validity of the text entered so far, providing immediate feedback to the user; for other GUI components, the available options are dynamically filtered to present only the allowed values. PowerForms is integrated into the <bigwig> system for generating interactive Web services, but is also freely available as a stand-alone package in an Open Source distribution.

11.
This paper proposes using a user-level memory thread (ULMT) for correlation prefetching. In this approach, a user thread runs on a general-purpose processor in main memory, either in the memory controller chip or in a DRAM chip. The thread performs correlation prefetching in software, sending the prefetched data into the L2 cache of the main processor. The approach requires minimal hardware beyond the memory processor: the correlation table is a software data structure that resides in main memory, and the main processor only needs a few modifications to its L2 cache so that it can accept incoming prefetches. In addition, the approach has wide applicability, as it can effectively prefetch even for irregular applications. Finally, it is very flexible, as the prefetching algorithm can be customized by the user on a per-application basis. Our simulation results show that, through a new design of the correlation table and prefetching algorithm, our scheme delivers good results: nine mostly irregular applications show an average speedup of 1.32. Furthermore, our scheme works well in combination with a conventional processor-side sequential prefetcher, in which case the average speedup increases to 1.46. Finally, by exploiting the customization of the prefetching algorithm, we increase the average speedup to 1.53.
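A minimal sketch of a table-based correlation prefetcher: for each miss address, remember the few miss addresses that tended to follow it, and prefetch them when the address misses again. In the paper this table lives in main memory and is maintained by the memory-side thread; the structure, entry size, and names below are our assumptions, not the paper's new table design.

from collections import defaultdict, deque

class CorrelationPrefetcher:
    def __init__(self, successors_per_entry=4):
        # For each miss address, keep its most recent successor miss addresses.
        self.table = defaultdict(lambda: deque(maxlen=successors_per_entry))
        self.last_miss = None

    def on_miss(self, addr):
        # Record that `addr` followed the previous miss, most recent first.
        if self.last_miss is not None:
            succ = self.table[self.last_miss]
            if addr in succ:
                succ.remove(addr)
            succ.appendleft(addr)
        self.last_miss = addr
        # Return the addresses correlated with this miss as prefetch candidates.
        return list(self.table[addr])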

12.
A common problem of the packet-marking algorithms used in Differentiated Services is that they cannot provide the same level of protection to long and short flows; in particular, Web flows constrained by small congestion windows are prone to packet loss. This paper therefore proposes a Web-flow-friendly time sliding window mechanism, WFTSW. The new algorithm is based on an improved three-color time sliding window marker (TswTCM) and dynamically adjusts the rate threshold according to congestion-window information, so that short flows receive a higher level of protection. Simulation results verify the effectiveness and fairness of the proposed marking algorithm.

13.
Nesbit, K.J.; Smith, J.E. IEEE Micro, 2005, 25(1): 90-97
Over the past couple of decades, trends in both microarchitecture and the underlying semiconductor technology have significantly reduced microprocessor clock periods, which has significantly increased relative main-memory latencies as measured in processor clock cycles. To avoid large performance losses caused by long memory access delays, microprocessors rely heavily on a hierarchy of cache memories. But cache memories are not always effective, either because they are not large enough to hold a program's working set, or because memory access patterns don't exhibit behavior that matches a cache memory's demand-driven, line-structured organization. To partially overcome these limitations, we organize data cache prefetch information in a new way: a global history buffer (GHB) supports existing prefetch algorithms more effectively than conventional prefetch tables. It reduces stale table data, improving accuracy and reducing memory traffic, and it contains a more complete picture of cache-miss history while being smaller than conventional tables.
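The sketch below models the GHB structure in a simplified way: a FIFO of recent misses, an index table pointing at the newest entry for each key (e.g., a miss address or a load PC), and per-entry back-pointers that chain together all entries with the same key. Walking a chain recovers that key's recent miss history, which a prefetch algorithm can then process. A real GHB uses a fixed circular buffer; the list-based aging here trades efficiency for clarity, and the parameter names are ours.

class GlobalHistoryBuffer:
    def __init__(self, size=256):
        self.size = size
        self.entries = []       # list of (key, addr, prev_index), oldest first
        self.index = {}         # key -> index of the newest entry for that key

    def insert(self, key, addr):
        if len(self.entries) == self.size:
            # The oldest entry ages out; every surviving index shifts down by one,
            # and pointers into the aged-out entry become invalid (-1).
            self.entries.pop(0)
            self.entries = [(k, a, p - 1 if p > 0 else -1)
                            for (k, a, p) in self.entries]
            self.index = {k: i - 1 for k, i in self.index.items() if i > 0}
        prev = self.index.get(key, -1)
        self.entries.append((key, addr, prev))
        self.index[key] = len(self.entries) - 1

    def history(self, key, depth=4):
        # Walk the per-key chain backwards to recover recent misses for this key.
        out, i = [], self.index.get(key, -1)
        while i >= 0 and len(out) < depth:
            _, addr, prev = self.entries[i]
            out.append(addr)
            i = prev
        return out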

14.
Hash routing reduces cache misses for a cluster of Web proxies by eliminating the duplication of cache contents. In this paper, we investigate the optimization of hash routing performance by dynamically adapting object and DNS allocations to the traffic pattern. An analytical model is developed for hash routing that takes into consideration the original request distribution, the object allocation strategy, the speeds of the proxies, and the cache hit ratios. Based on this model, the optimal hash routing problem is studied. The analytical results are applied to the design of two adaptive hash routing schemes: ADA-OBJ optimizes object allocation under a static client configuration, and ADA-OBJ/DNS optimizes both object and DNS allocations under a dynamic client configuration. Trace-driven simulation experiments have been conducted to evaluate the performance of the proposed schemes. The results show that they significantly outperform the intuitive static hash routing scheme based only on the speeds of the proxies.
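For orientation, the sketch below implements one common way to realize the static, speed-only baseline mentioned at the end of the abstract: weighted rendezvous (highest-random-weight) hashing, which gives faster proxies a proportionally larger share of the object space while keeping object-to-proxy assignments stable. This is our illustration of the baseline idea, not the paper's analytical model or its adaptive ADA-OBJ / ADA-OBJ/DNS schemes.

import hashlib
import math

def assign_proxy(url, proxy_speeds):
    # proxy_speeds: {proxy_name: relative speed (weight)}
    best, best_score = None, float("-inf")
    for proxy, speed in proxy_speeds.items():
        digest = hashlib.md5((proxy + "|" + url).encode("utf-8")).hexdigest()
        h = (int(digest, 16) + 1) / float(16 ** 32 + 2)   # uniform draw in (0, 1)
        score = -speed / math.log(h)                      # weighted rendezvous score
        if score > best_score:
            best, best_score = proxy, score
    return best

# Example: assign_proxy("http://example.com/a.html", {"p1": 1.0, "p2": 2.0})
# routes roughly twice as many distinct URLs to p2 as to p1.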

15.
Based on the different characteristics of the caches at each level of a storage system, this paper proposes a network-based heterogeneous collaborative prefetching scheme, RLCP (RA and Linux collaborative prefetching). By using different prefetching and replacement algorithms on the storage client and the storage server, it builds a heterogeneous two-level storage system and, with a suitable collaborative prefetching policy, dynamically adjusts the aggressiveness of the prefetching algorithms. This improves the performance of the storage system and also provides a good solution for cloud storage.

16.
When caches aren't enough: data prefetching techniques   (cited 1 time in total: 0 self-citations, 1 by others)
Vander Wiel, S.P.; Lilja, D.J. Computer, 1997, 30(7): 23-30
With data prefetching, the memory system fetches data into the cache before the processor needs it, thereby reducing memory-access latency. Using the most suitable techniques is critical to maximizing data prefetching's effectiveness. The authors review three popular prefetching techniques: software-initiated prefetching, sequential hardware-initiated prefetching, and prefetching via reference prediction tables.
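As a rough illustration of the last technique, the sketch below models a reference prediction table that tracks each memory instruction's stride and issues a prefetch once the same stride has been seen twice in a row. The table layout, field names, and policy are our assumptions rather than the article's exact design.

class ReferencePredictionTable:
    def __init__(self):
        self.table = {}   # pc -> [last_addr, last_stride]

    def access(self, pc, addr):
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = [addr, 0]
            return None
        last_addr, last_stride = entry
        stride = addr - last_addr
        entry[0], entry[1] = addr, stride
        if stride != 0 and stride == last_stride:
            return addr + stride          # predicted next address to prefetch
        return None

# Example: for a load at pc=0x40 touching 100, 108, 116, the third access
# returns 124 as the address to prefetch.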

17.
Prefetching is a simple and general method for single-chain parallelisation of the Metropolis-Hastings algorithm, based on the idea of evaluating the posterior in parallel and ahead of time. Improved Metropolis-Hastings prefetching algorithms are presented and evaluated, and it is shown how to use available information to make better predictions of the future states of the chain and thereby increase the efficiency of prefetching considerably. The optimal acceptance rate for the prefetching random-walk Metropolis-Hastings algorithm is obtained for a special case and is shown to decrease in the number of processors employed. The performance of the algorithms is illustrated using a well-known macroeconomic model. Bayesian estimation of DSGE models, linearly or nonlinearly approximated, is identified as a potential area of application for prefetching methods; the generality of the proposed method, however, suggests that it could be applied in other contexts as well.
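The sketch below shows the plain "assume-reject" form of the prefetching idea, not the paper's improved predictors: draw several proposals from the current state, evaluate their posteriors speculatively (in a real implementation, in parallel on separate processors), and consume them sequentially until one is accepted, at which point the remaining speculative evaluations are discarded. All names and defaults are illustrative.

import math
import random

def prefetching_random_walk_mh(log_post, x0, steps, workers=4, scale=1.0, rng=None):
    # log_post: function returning the log posterior density at a point.
    rng = rng or random.Random(0)
    x, lp_x = x0, log_post(x0)
    chain = [x]
    while len(chain) <= steps:
        # Speculatively propose `workers` candidates from the current state.
        proposals = [x + rng.gauss(0.0, scale) for _ in range(workers)]
        lps = list(map(log_post, proposals))   # evaluated in parallel in practice
        for y, lp_y in zip(proposals, lps):
            if math.log(rng.random()) < lp_y - lp_x:
                x, lp_x = y, lp_y              # accepted: discard remaining work
                chain.append(x)
                break
            chain.append(x)                    # rejected: chain stays put
            if len(chain) > steps:
                break
    return chain

# Example: prefetching_random_walk_mh(lambda x: -0.5 * x * x, 0.0, 1000)
# samples a standard normal; the speculative evaluations pay off whenever
# rejections are common, which is why the optimal acceptance rate falls as
# more processors are used.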

18.
We present a storage management system that can adapt to the data access characteristics of the application that uses it, based on the collection and analysis of runtime statistics. This feature is especially useful in the storage management layer of database systems, where applications exhibit relatively predictable access patterns. Adaptive reorganization is performed by the storage management system in a manner that optimizes the access patterns of the system it serves. We enhance a log-structured storage system, which naturally caters for write optimization, with a statistics collection mechanism that determines the data access patterns of applications. The storage system can serve as a testbed for a variety of statistics analysis and clustering mechanisms, and higher-level application-specific data clustering mechanisms can override the storage system's low-level clustering. In addition, the analysis techniques and reorganization scheme can be used in other storage systems. Performance results from our prototype show potential response-time speedups of up to 83 percent over the basic log-structured file system in the best case, using a combination of storage reorganization and prefetching.

19.
The expanded role of radiology in clinical medicine and its emerging digital practice have made patient-image management a growing concern for health-care organizations. A fundamental aspect of patient-image management is to provide a radiologist with convenient access to prior images relevant to his or her reading of a recently taken radiological examination. For confirmation or evaluation purposes, radiologists often reference relevant prior images of the same patient when interpreting the images of a current examination. To alleviate the time and physical requirements on radiologists, many health-care organizations have adopted a prefetching strategy to meet their patient-image reference needs. Radiologists' patient-image reference knowledge understandably exhibits subtle individual variations and dynamically evolves over time, which makes an artificial intelligence-based inductive learning approach appealing. Central to patient-image prefetching is a knowledge base whose elements need continual update and individual customization. In this study, we extended a decision-rule induction technique (the CN2 algorithm) to address the challenging characteristics of the targeted learning, and we experimentally evaluated the extended algorithm using the learning performance of a backpropagation neural network as a benchmark. Overall, our evaluation results suggest that the extended algorithm exhibited satisfactory learning effectiveness and, at the same time, showed desirable noise tolerance, immunity to missing data, and robustness under limited training data.

20.
Once a big repository of static data, the Web has gradually evolved into a worldwide network of information and services known as the Semantic Web. This environment allows programs to autonomously interact with Web-accessible information and services. In this sense, mobile agent technology can help exploit this relatively new Web in a fully automated way, since Semantic Web resources are described in a computer-understandable form. In this paper, we present SWAM, a platform for building and deploying Prolog-based intelligent mobile agents on the Semantic Web. The article also reports examples and experimental results that illustrate and assess the benefits of SWAM.
