Similar Documents
 20 similar documents found (search time: 156 ms)
1.
The Scalable I/O (SIO) Initiative's Low-Level Application Programming Interface (SIO LLAPI) provides file system implementers with a simple low-level interface to support high-level parallel I/O interfaces efficiently and effectively. This paper describes a reference implementation and evaluation of the SIO LLAPI on the Intel Paragon multicomputer. The implementation provides a file system structure and striping algorithm compatible with the Parallel File System (PFS) of the Intel Paragon, and runs either inside the kernel or as a user-level library. Scatter-gather addressing read/write, asynchronous I/O, client caching and prefetching, a file access hint mechanism, collective I/O and highly efficient file copy have been implemented. Preliminary experience shows that the SIO LLAPI offers opportunities for significant performance improvement and is easy to implement. Several high-level file system interfaces and applications, such as PFS, ADIO and a Hartree-Fock application, are also implemented on top of SIO. The performance of PFS is at least as good as that of Intel's native PFS, and in many cases, such as small sequential file accesses, huge I/O requests and collective I/O, it is stable and much better. The SIO features help to support high-level interfaces easily, quickly and more efficiently, and the caching, prefetching and hint mechanisms are useful for obtaining better performance under different access patterns. The scalability and performance of SIO are limited by network latency, scalable network bandwidth, memory copy bandwidth, memory size and the pattern of I/O requests. The tradeoff between generality and efficiency should be considered in implementation.
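The scatter-gather read/write interface mentioned above gathers several non-contiguous file regions into one logical request. The following is a minimal sketch of that idea (not the SIO LLAPI itself; `scatter_read` and its parameters are hypothetical names, and a plain local file stands in for a striped parallel file):

```python
import os

def scatter_read(path, extents):
    """Read a list of (offset, length) extents from a file in one call.

    A toy stand-in for a scatter-gather read: the real SIO LLAPI issues
    such requests against files striped across I/O nodes; here we simply
    use os.pread on a local file.
    """
    results = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for offset, length in extents:
            results.append(os.pread(fd, length, offset))
    finally:
        os.close(fd)
    return results

# Example: fetch three non-contiguous regions with one logical request.
# chunks = scatter_read("data.bin", [(0, 4096), (65536, 4096), (1 << 20, 8192)])
```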

2.
A site-based proxy cache
In traditional proxy caches, any page visited on any Web server is cached independently, ignoring connections between pages, and users still have to frequently visit indexing pages just to reach useful informative ones, which causes significant waste of caching space and unnecessary Web traffic. To solve this problem, this paper introduces a site graph model to describe the WWW, and a site-based replacement strategy is built on it. The concept of "access frequency" is developed for evaluating whether a Web page is worth keeping in the cache. On the basis of the user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance test results have shown that the proposed proxy cache system achieves a higher hit ratio than traditional ones and can reduce user access latency effectively.
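As an illustration of frequency-driven replacement, the sketch below evicts the cached page with the lowest hit count when space runs out. It is not the paper's exact "access frequency" definition; the class and its tie-breaking rule are assumptions for illustration only:

```python
class FrequencyCache:
    """Minimal frequency-based replacement sketch: evict the cached page
    with the fewest hits when space is needed, breaking ties in favour of
    the least recently accessed page."""

    def __init__(self, capacity):
        self.capacity = capacity          # max number of cached pages
        self.entries = {}                 # url -> [hit_count, last_access_tick]
        self.tick = 0

    def access(self, url):
        self.tick += 1
        if url in self.entries:
            self.entries[url][0] += 1
            self.entries[url][1] = self.tick
            return True                   # cache hit
        if len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda u: (self.entries[u][0], self.entries[u][1]))
            del self.entries[victim]
        self.entries[url] = [1, self.tick]
        return False                      # cache miss
```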

3.
Based on the application of Java in Web Geographical Information Systems (WebGIS), a model of WebGIS based on Java Servlets is defined and a new distributed model is proposed. The distributed model relies on the implementation of distributed Servlets, and a locator is adopted in the model to manage client access to the Servlets and balance the load of the clustered Web servers. The programming rules for distributed Servlets and the operating principle of the locator are studied in detail. The model has been applied in a practical system, and its actual operation shows that it remarkably improves the access performance of the WebGIS application and possesses characteristics such as load balancing and fail-over.
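A toy version of the locator's dispatching role might look like the sketch below (in Python rather than the paper's Java Servlet setting; the server names and the round-robin policy are illustrative assumptions, since the paper's locator also considers server load and failures):

```python
import itertools

class Locator:
    """Toy round-robin locator: directs each client request to the next
    Web server in the cluster. A weighted or least-connections policy
    could be substituted without changing the interface."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request_path):
        server = next(self._cycle)
        return f"http://{server}{request_path}"

# locator = Locator(["gis1.example.org:8080", "gis2.example.org:8080"])
# print(locator.route("/servlet/MapQuery?layer=roads"))
```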

4.
Indexing techniques have been developed for wireless data broadcast environments in order to conserve the scarce power resources of mobile clients. However, the use of interleaved index segments in a broadcast cycle increases the average access latency for the clients. In this paper, broadcast-based spatial query processing methods (BBS) are presented for location-based services. In the BBS, broadcast data objects are sorted sequentially based on their locations, and the server broadcasts the location-dependent data along with an index segment. Then, a sequential prefetching and caching scheme is designed to reduce the query response time. The performance of this scheme is investigated in relation to various environmental variables, such as the distributions of the data objects, the average speed of the clients and the size of the service area.
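The core of the BBS idea, sorting broadcast objects by location and letting the client prefetch the upcoming, spatially nearby part of the cycle, can be sketched as follows (a simplification with hypothetical names; the paper's index segments and caching policy are not modeled):

```python
from math import hypot

def build_broadcast_cycle(objects):
    """Sort data objects by location (here: by x, then y) so that the
    broadcast order preserves spatial proximity, as in the BBS scheme."""
    return sorted(objects, key=lambda o: (o["x"], o["y"]))

def prefetch_window(cycle, client_pos, k):
    """Client-side sketch of sequential prefetching: find the broadcast
    object nearest to the client and cache it together with the next
    k-1 objects in broadcast order."""
    cx, cy = client_pos
    start = min(range(len(cycle)),
                key=lambda i: hypot(cycle[i]["x"] - cx, cycle[i]["y"] - cy))
    return [cycle[(start + i) % len(cycle)] for i in range(k)]

# objs = [{"id": i, "x": x, "y": y}
#         for i, (x, y) in enumerate([(2, 3), (9, 1), (4, 4), (7, 8)])]
# cycle = build_broadcast_cycle(objs)
# cached = prefetch_window(cycle, client_pos=(5, 5), k=2)
```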

5.
Data prefetching is an effective data access latency hiding technique to mask the CPU stalls caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been developed for single-core processors. Recent developments in processor technology have brought multicore processors into the mainstream. While some of the single-core prefetching t...
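For concreteness, a classic stride prefetcher, one of the long-standing single-core techniques this survey builds on, can be sketched in a few lines (illustrative only, with hypothetical names; it is not a proposal of the surveyed paper):

```python
def stride_prefetch_candidates(access_history, degree=2):
    """Sketch of a classic stride prefetcher: if the last few addresses
    differ by a constant stride, predict the next `degree` addresses so
    they can be fetched before the processor asks for them."""
    if len(access_history) < 3:
        return []
    a, b, c = access_history[-3:]
    stride = c - b
    if stride != 0 and (b - a) == stride:
        return [c + stride * i for i in range(1, degree + 1)]
    return []

# stride_prefetch_candidates([0x100, 0x140, 0x180])  ->  [0x1c0, 0x200]
```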

6.
The exponential growth of various services demands increased capacity in next-generation broadband wireless access networks, which points toward the deployment of femtocells in macrocell networks based on orthogonal frequency division multiple access. However, serious time-varying interference prevents this macro/femto overlaid network from realizing its true potential. In this article, we present a macro-services-guaranteed resource allocation scheme that can mitigate the dominant interferences and provide multiple services in macro/femto overlaid Third-Generation Partnership Project Long Term Evolution-Advanced networks. We model our multiple-services resource allocation scheme as a multiobjective optimization problem, which is non-deterministic polynomial-time (NP)-hard. We then give a low-complexity two-layer algorithm based on chordal graphs. Simulation results verify that the proposed scheme achieves better efficiency than previous works and raises the satisfaction ratio of Guaranteed Bit Rate (GBR) services while improving the average performance of non-GBR services.

7.
World Wide Web (WWW) services have grown to levels where significant delays are to be expected. Technologies like prefetching are likely to help users personalize their needs and reduce their waiting times. This paper first describes the architecture of prefetching, then classifies prefetching algorithms into three types, based on the branch model, based on the tree model, and others, and presents the basic ideas of some existing prefetching algorithms. Next, several models for controlling prefetching are introduced. Finally, trends and directions concerning prefetching algorithms are summarized.

8.
Web caching: A way to improve web QoS
As the Internet and World Wide Web grow at a fast pace, it is essential that the Web's performance keep up with increased demand and expectations. Web caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing the Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web caching literature. It discusses state-of-the-art Web caching schemes and techniques, with emphasis on recent developments in Web caching technology such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.

9.
This paper analyzes the most common security problems at the Web application level. A model, WALSG (Web Application Level Security Gateway), is presented to provide Web application level security. WALSG employs XML Schema to specify access control policies and security policies for HTML pages and cookies. WALSG can also be used as a security tool to define access control policies and security policies during Web site development.

10.
A structure was proposed for a multiple-input-multiple-output multicarrier code division multiple access (MIMO MC-CDMA) uplink transmission system. The linear zero-forcing V-BLAST (ZF V-BLAST) algorithm and the maximum ratio combining (MRC) scheme were applied at the receivers. The average bit error rate (BER) expression was derived on the condition that the number of receive antennas is larger than that of transmit antennas, and it was verified by simulations. Numerical results show that the numbers of transmit and receive antennas, as well as the number of subcarriers, all exert significant effects on the BER performance. Space diversity and frequency diversity show different abilities to improve the BER performance. The MIMO MC-CDMA system based on the linear ZF V-BLAST algorithm is capable of achieving better BER performance than the conventional MC-CDMA system by reducing the number of transmit antennas or increasing the number of receive antennas.
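For reference, the textbook linear zero-forcing detector on which ZF V-BLAST is based takes the following form (standard notation only; the paper's BER derivation and detection-ordering steps are not reproduced here):

```latex
% Linear zero-forcing detection: with receive vector y = Hs + n and a
% channel matrix H that has at least as many rows (receive antennas) as
% columns (transmit antennas), the transmitted symbol vector is estimated
% via the pseudo-inverse of H.
\hat{\mathbf{s}} = \left(\mathbf{H}^{H}\mathbf{H}\right)^{-1}\mathbf{H}^{H}\mathbf{y}
```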

11.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This article studies Web caching and prefetching mechanisms for wireless LANs. Based on data mining and information theory respectively, it proposes prediction algorithms that use sequence mining and deferred updating, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation results show that integrating these algorithms effectively improves the cache hit ratio and the latency saving ratio.

12.
A Data Cube Model for Prediction-Based Web Prefetching
Reducing Web latency is one of the primary concerns of Internet research. Web caching and Web prefetching are two effective techniques for latency reduction. A primary method for intelligent prefetching is to rank potential Web documents based on prediction models that are trained on past Web server and proxy server log data, and to prefetch the highly ranked objects. For this method to work well, the prediction model must be updated constantly, and different queries must be answered efficiently. In this paper we present a data cube model that represents Web access sessions for data mining, supporting the construction of the prediction model. The cube model organizes session data into three dimensions. With the data cube in place, we apply efficient data mining algorithms for clustering and correlation analysis. As a result of the analysis, the Web page clusters can then be used to guide the prefetching system. In this paper, we propose an integrated Web caching and Web prefetching model, where the issues of prefetching aggressiveness, replacement policy and increased network traffic are addressed together in an integrated framework. The core of our integrated solution is a prediction model based on statistical correlation between Web objects. This model can be frequently updated by querying the data cube of Web server logs. To our knowledge, this integrated data cube and prediction-based prefetching framework represents the first such effort.
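The prediction side of such a framework can be illustrated with a toy pairwise-correlation model over log sessions (a stand-in only; the paper's model is built by querying its three-dimensional data cube, and the function names below are hypothetical):

```python
from collections import defaultdict

def build_correlation_counts(sessions, window=2):
    """Count how often page B follows page A within a short window of the
    same session; a toy stand-in for a statistical-correlation model
    trained from server logs."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for i, a in enumerate(session):
            for b in session[i + 1:i + 1 + window]:
                counts[a][b] += 1
    return counts

def rank_prefetch_candidates(counts, current_page, top_k=3):
    """Rank the pages most strongly correlated with the page just requested."""
    followers = counts.get(current_page, {})
    return sorted(followers, key=followers.get, reverse=True)[:top_k]

# sessions = [["/index", "/news", "/sports"], ["/index", "/news", "/weather"]]
# model = build_correlation_counts(sessions)
# rank_prefetch_candidates(model, "/index")   # "/news" ranked first
```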

13.
Design and Implementation of a Simulator of Web Object Access Characteristics
石磊  陶永才 《计算机仿真》2006,23(1):133-136
Web caching is a highly effective way to improve Web performance, and caches can be placed at different points in the network: the client, the proxy server, or the origin server. Studies show that Web cache hit ratios can reach 30%-50%. The biggest problem in applying Web caching is cache management, and studying Web access characteristics is the foundation of effective cache management. A Web log generation simulator is very helpful for studying Web caching systems. There are currently two ways to generate simulated Web access logs: the log-driven approach and the mathematical modeling approach. The log-driven approach transforms historical logs to generate new ones, while the mathematical modeling approach builds mathematical models of Web object access characteristics and uses them to generate Web logs. By analyzing Web object access characteristics, this paper uses the mathematical modeling approach to simulate the popularity characteristics of Web objects in both the high-frequency and low-frequency regions, the heavy-tailed distribution of Web object sizes, and the temporal locality of Web accesses, and it designs and implements a Web log simulator, WEBSIM. The simulator not only generates Web object access logs but is also highly flexible, providing a basis for further research on Web caching and prefetching techniques.
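A minimal synthetic-trace generator in the spirit of such a simulator might combine Zipf-like popularity with heavy-tailed (Pareto) object sizes, the two characteristics named above (the parameter values and function name below are illustrative assumptions, not WEBSIM's actual model):

```python
import numpy as np

def generate_synthetic_trace(n_requests, n_objects, zipf_a=1.2,
                             pareto_shape=1.5, min_size=1024, seed=0):
    """Toy Web-trace generator: object popularity follows a Zipf-like law
    and object sizes follow a heavy-tailed Pareto distribution. Returns a
    list of (object_id, size_in_bytes) request records."""
    rng = np.random.default_rng(seed)
    # Zipf-like popularity: rank r is requested with probability ~ 1/r^a.
    ranks = np.arange(1, n_objects + 1)
    probs = 1.0 / ranks ** zipf_a
    probs /= probs.sum()
    object_ids = rng.choice(n_objects, size=n_requests, p=probs)
    # Heavy-tailed sizes, one per object.
    sizes = (min_size * (1.0 + rng.pareto(pareto_shape, size=n_objects))).astype(int)
    return [(int(o), int(sizes[o])) for o in object_ids]

# trace = generate_synthetic_trace(n_requests=10_000, n_objects=500)
```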

14.
The Relationship Between Web Cache Hit Ratio and Byte Hit Ratio
When studying Web cache performance, two evaluation metrics are generally considered: the hit ratio (HR) and the byte hit ratio (BHR). Most existing work focuses on only one of the two, or evaluates the merits of cache replacement algorithms simply by measuring both values, without evaluating their performance from the perspective of the relationship between the two metrics. This paper discusses the relationship between hit ratio and byte hit ratio in Web caching systems, proposes a Web cache performance evaluation metric, the hit ratio relation (FBR), and discusses its application in evaluating Web cache replacement algorithms and Web prefetching performance, providing a reference for measuring the performance of caching systems.
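For reference, the two standard metrics being related are defined as follows (the exact formula for the proposed FBR metric is given in the paper and is not reproduced here):

```latex
\mathrm{HR} = \frac{\text{number of requests served from the cache}}{\text{total number of requests}},
\qquad
\mathrm{BHR} = \frac{\text{bytes served from the cache}}{\text{total bytes requested}}
```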

15.
This paper proposes a novel contribution in the Web caching area, especially in Web cache replacement, the so-called intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated through trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common baselines for evaluating Web caching performance. The simulation results reveal that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU. In terms of byte hit ratio (BHR), performance is improved by up to 2.57% and 26.25%, and for latency saving ratio (LSR), performance is better by 8.3% and 18.9% over LRU and LFU, respectively.
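A bare-bones version of the two-level structure can be sketched as follows; the long-term eviction rule here is a simple placeholder where the paper plugs in its neuro-fuzzy decision, and all names and capacities are illustrative:

```python
from collections import OrderedDict

class TwoLevelClientCache:
    """Sketch of a two-level client cache: a short-term LRU cache fills
    first, and objects evicted from it move to a long-term cache whose
    eviction score is a simple placeholder for a learned model."""

    def __init__(self, short_capacity, long_capacity):
        self.short = OrderedDict()        # url -> hit count, in LRU order
        self.long = {}                    # url -> hit count
        self.short_capacity = short_capacity
        self.long_capacity = long_capacity

    def access(self, url):
        if url in self.short:
            self.short[url] += 1
            self.short.move_to_end(url)
            return True
        if url in self.long:
            self.long[url] += 1
            return True
        # Miss: fetch from the Internet into the short-term cache.
        if len(self.short) >= self.short_capacity:
            evicted_url, hits = self.short.popitem(last=False)   # LRU victim
            self._admit_to_long(evicted_url, hits)
        self.short[url] = 1
        return False

    def _admit_to_long(self, url, hits):
        if len(self.long) >= self.long_capacity:
            # Placeholder rule: evict the least-hit object. A trained
            # neuro-fuzzy classifier would replace this decision.
            victim = min(self.long, key=self.long.get)
            del self.long[victim]
        self.long[url] = hits
```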

16.
Analysis of Web Prefetching Models
The rapid growth of the WWW has led to network congestion and server overload. Caching is considered one of the effective ways to reduce server load, alleviate network congestion, and lower client access latency, but its effect is limited. To further improve WWW performance, prefetching has been introduced. This paper first introduces the basic idea of Web prefetching and the feasibility of studying it, then analyzes existing Web prefetching models, and finally gives the key properties that a Web prefetching model should have.

17.
Research on Web Prefetching Techniques
Prefetching is a major approach to improving the cache hit ratio and addressing the problem of Web access latency. This paper studies Web page prefetching, applies data mining to Web prefetching, and designs a Web prefetching model that provides personalized service to users; it describes in detail the methods for preprocessing Web logs and proposes a new prefetch replacement algorithm.
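The log preprocessing step typically parses raw entries and splits each client's requests into sessions. The sketch below does this for Common Log Format input with a 30-minute session timeout, a common convention rather than necessarily the paper's choice; the regular expression and function name are illustrative:

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?:GET|POST) (?P<url>\S+)[^"]*" \d+ \S+')

def sessionize(log_lines, timeout=timedelta(minutes=30)):
    """Parse Common Log Format lines and split each client's requests
    into sessions whenever the gap between consecutive requests exceeds
    the timeout."""
    visits = defaultdict(list)
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed entries
        ts = datetime.strptime(m["time"].split()[0], "%d/%b/%Y:%H:%M:%S")
        visits[m["ip"]].append((ts, m["url"]))
    sessions = []
    for ip, reqs in visits.items():
        reqs.sort()
        current = [reqs[0][1]]
        for (prev_ts, _), (ts, url) in zip(reqs, reqs[1:]):
            if ts - prev_ts > timeout:
                sessions.append(current)
                current = []
            current.append(url)
        sessions.append(current)
    return sessions
```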

18.
With the rapid development of the WWW and the sharp increase in network users, accurately predicting Web users' access behavior plays an important role in reducing users' perceived latency and enabling personalized recommendation. For the Markov model and all of its variants, higher-order models offer better prediction performance; however, higher-order models usually have higher state-space complexity. This paper proposes a new hybrid-order Markov model (HMPM) that stores sequences with identical prefixes in shared structures, reducing the state-space complexity. Simulation results show that the model improves prediction accuracy to a certain extent and also raises recall.
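The prefix-sharing idea can be illustrated with a toy trie-based predictor in which contexts that start with the same pages share one path (a sketch of the storage idea only, not the HMPM model's actual prediction rule; class and parameter names are hypothetical):

```python
class PrefixTreePredictor:
    """Toy illustration of prefix sharing: access contexts that begin with
    the same pages share a single path in a trie, instead of every
    high-order Markov state being stored separately."""

    def __init__(self, max_order=3):
        self.root = {}            # page -> child node; each node also holds "_next"
        self.max_order = max_order

    def train(self, session):
        # For each position, record what follows every context of length
        # 1..max_order ending there; contexts sharing a prefix share a path.
        for i in range(len(session) - 1):
            nxt = session[i + 1]
            for order in range(1, self.max_order + 1):
                if i + 1 - order < 0:
                    break
                node = self.root
                for page in session[i + 1 - order: i + 1]:
                    node = node.setdefault(page, {"_next": {}})
                node["_next"][nxt] = node["_next"].get(nxt, 0) + 1

    def predict(self, recent_pages):
        # Try the longest available context first, then back off to shorter ones.
        for start in range(max(0, len(recent_pages) - self.max_order), len(recent_pages)):
            node, found = self.root, True
            for page in recent_pages[start:]:
                if page not in node:
                    found = False
                    break
                node = node[page]
            if found and node.get("_next"):
                return max(node["_next"], key=node["_next"].get)
        return None

# p = PrefixTreePredictor(max_order=2)
# p.train(["/index", "/news", "/sports"])
# p.predict(["/index", "/news"])   # -> "/sports"
```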

19.
Web prefetching is an attractive solution to reduce the network resources consumed by Web services as well as the access latencies perceived by Web users. Unlike Web caching, which exploits the temporal locality, Web prefetching utilizes the spatial locality of Web objects. Specifically, Web prefetching fetches objects that are likely to be accessed in the near future and stores them in advance. In this context, a sophisticated combination of these two techniques may cause significant improvements on the performance of the Web infrastructure. Considering that there have been several caching policies proposed in the past, the challenge is to extend them by using data mining techniques. In this paper, we present a clustering-based prefetching scheme where a graph-based clustering algorithm identifies clusters of “correlated” Web pages based on the users’ access patterns. This scheme can be integrated easily into a Web proxy server, improving its performance. Through a simulation environment, using a real data set, we show that the proposed integrated framework is robust and effective in improving the performance of the Web caching environment.
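A toy version of graph-based clustering over access patterns is sketched below: pages that co-occur in the same session are connected, weak edges are dropped, and connected components are reported as clusters (the paper's clustering algorithm is more elaborate; the names and the threshold are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def cluster_pages(sessions, min_weight=2):
    """Connect pages that co-occur in the same session, keep edges whose
    co-occurrence count reaches min_weight, and return the connected
    components as clusters of correlated pages."""
    weight = defaultdict(int)
    pages = set()
    for session in sessions:
        pages.update(session)
        for a, b in combinations(sorted(set(session)), 2):
            weight[(a, b)] += 1
    adj = defaultdict(set)
    for (a, b), w in weight.items():
        if w >= min_weight:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components via depth-first search.
    clusters, seen = [], set()
    for page in pages:
        if page in seen:
            continue
        stack, component = [page], set()
        while stack:
            p = stack.pop()
            if p in seen:
                continue
            seen.add(p)
            component.add(p)
            stack.extend(adj[p] - seen)
        clusters.append(component)
    return clusters
```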

20.
Integrating Web Prefetching and Caching Using Prediction Models
Yang, Qiang; Zhang, Henry Hanning. World Wide Web, 2001, 4(4): 299-321
Web caching and prefetching have been studied in the past separately. In this paper, we present an integrated architecture for Web object caching and prefetching. Our goal is to design a prefetching system that can work with an existing Web caching system in a seamless manner. In this integrated architecture, a certain amount of caching space is reserved for prefetching. To empower the prefetching engine, a Web-object prediction model is built by mining the frequent paths from past Web log data. We show that the integrated architecture improves the performance over Web caching alone, and present our analysis on the tradeoff between the reduced latency and the potential increase in network load.
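The space-reservation idea can be sketched as a cache split into a demand partition and a prefetch partition (the 20% reservation and all names below are illustrative assumptions, not the paper's tuned configuration):

```python
from collections import OrderedDict

class PartitionedCache:
    """Sketch of an integrated cache: a fixed fraction of the space is
    reserved for prefetched objects so aggressive prefetching cannot
    crowd out demand-fetched ones. Both partitions use LRU here."""

    def __init__(self, capacity, prefetch_fraction=0.2):
        self.demand = OrderedDict()       # LRU cache for requested objects
        self.prefetch = OrderedDict()     # LRU cache for predicted objects
        self.prefetch_capacity = int(capacity * prefetch_fraction)
        self.demand_capacity = capacity - self.prefetch_capacity

    def _put(self, store, capacity, url):
        if capacity <= 0:
            return
        if url in store:
            store.move_to_end(url)
            return
        if len(store) >= capacity:
            store.popitem(last=False)     # evict least recently used
        store[url] = True

    def on_request(self, url):
        if url in self.prefetch:          # prefetch hit: promote to demand side
            del self.prefetch[url]
            self._put(self.demand, self.demand_capacity, url)
            return True
        hit = url in self.demand
        self._put(self.demand, self.demand_capacity, url)
        return hit

    def on_prediction(self, url):
        if url not in self.demand:
            self._put(self.prefetch, self.prefetch_capacity, url)
```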
