Similar Documents
20 similar documents found.
1.
A Tag-Based Cache-Cooperative Distributed Web Server System   Cited by 3 (0 self, 3 other)
林曼筠, 钱华林. 《软件学报》(Journal of Software), 2003, 14(1): 117-123
This paper surveys a leading technique for improving Web server performance, the distributed Web server system, and discusses the strengths and weaknesses of existing approaches. On this basis, a new distributed Web server system is proposed. The system uses a tag-based cache-cooperative Web request distribution method (TB-CCRD): the front end organizes the caches of the individual Web servers into one large virtual cache, raising the overall cache hit ratio and shortening request response times; TCP connection hand-off is handled in a distributed fashion to remove the front end as a performance bottleneck; and tags announce where each URL is cached, avoiding extra intra-system communication. The result is a scalable, high-performance distributed Web server system.
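As a rough illustration of the tag-based distribution idea (a minimal sketch under my own assumptions, not the paper's implementation; `TagDispatcher`, `announce`, and the hash fallback are all hypothetical), a front end could keep a table of which back-end server has announced a cached copy of each URL and dispatch accordingly:

```python
import hashlib

class TagDispatcher:
    """Illustrative front end: route each URL to a server that advertises it in cache."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.tag_table = {}          # url -> server that announced a cached copy

    def announce(self, server, url):
        # A back-end server "tags" a URL it has just cached.
        self.tag_table[url] = server

    def evict(self, url):
        # A back-end server tells the front end the copy is gone.
        self.tag_table.pop(url, None)

    def dispatch(self, url):
        # Prefer the server whose cache already holds the object.
        if url in self.tag_table:
            return self.tag_table[url]
        # Otherwise fall back to a deterministic hash placement.
        idx = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(self.servers)
        return self.servers[idx]

dispatcher = TagDispatcher(["web1", "web2", "web3"])
dispatcher.announce("web2", "/index.html")
assert dispatcher.dispatch("/index.html") == "web2"
```

In the actual TB-CCRD design the hand-off is done at the TCP level and processed in a distributed manner; the sketch only shows the routing decision.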

2.
While most users currently access Web applications from Web browser interfaces, pervasive computing is emerging and offering new ways of accessing Internet applications from any device at any location, using various modes of interface to interact with end users. The PC and its back-end servers remain important in a pervasive system, and the technology could involve new ways of interfacing with a PC and/or various types of gateways to back-end servers. In this research, a cellular phone was used as the pervasive device for accessing an Internet application prototype, a multimodal Web system (MWS), through voice user interface technology. This paper describes how MWS was developed to provide a secure interactive voice channel using an Apache Web server, a voice server, and Java technology. Securing multimodal applications proves more challenging than securing traditional Internet applications. Various standards have been developed within the context of the Java 2 Micro Edition (J2ME) platform to secure multimodal and wireless applications. In addition to covering these standards and their applicability to the MWS implementation, the paper also shows that multimodal user-interface pages can be generated with an XSLT stylesheet that transforms XML documents into various formats, including XHTML, WML, and VoiceXML.
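To make the XSLT step concrete, here is a minimal sketch (not the authors' stylesheets; it assumes the lxml library and uses hypothetical element names) that transforms one device-neutral XML page description into a VoiceXML fragment; sibling stylesheets could target XHTML or WML in the same way:

```python
from lxml import etree  # assumes lxml is installed

# Hypothetical device-neutral page description.
page = etree.XML("<page><prompt>Welcome to the multimodal Web system</prompt></page>")

# A tiny stylesheet mapping the page to VoiceXML; other stylesheets could emit XHTML or WML.
xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <vxml version="2.1">
      <form><block><xsl:value-of select="prompt"/></block></form>
    </vxml>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(page)))   # emits a VoiceXML document
```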

3.
Web-based network element management gives an administrator the ability to configure and monitor network devices over the Internet using a Web browser. The most direct way to accomplish this is to embed a Web server [Embedded Web Server (EWS)] into a network device, and use that server to provide a Web-based management user interface constructed with HTML, graphics, Java, and other features common to Web browsers. In this paper, we present EWS-based management application interface mechanisms for use between embedded management applications and embedded Web servers. We propose a guideline for choosing an efficient interface mechanism, based on the characteristics of the management information and Web documents. A Web-based management user interface delivered through embedded Web servers has many advantages, such as ubiquity, platform independence, and user-friendliness. To be truly useful, it must also have a low development cost and a short development time. We provide effective integration mechanisms for each interface and validate them by implementing them in an Internet router.

4.
杨德志, 许鲁, 张建刚. 《计算机科学》(Computer Science), 2007, 34(10): 143-145
BWMMS is the distributed metadata service subsystem of the BWFS file system. It exploits the dynamic and localized nature of the system's access load, using a simple centralized decision mechanism to manage how metadata request load is distributed across multiple metadata servers. To limit the bottleneck the centralized decision point might become, that point is placed at the end of the metadata request processing path. This paper describes the metadata-distribution information cache kept on each metadata server, which reduces the pressure on the back-end centralized decision point and improves metadata access efficiency, and uses test data to evaluate how the cache hit ratio affects the back-end decision point and metadata access performance.
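A minimal sketch of such a per-server distribution-information cache (the class and field names are my own; the abstract does not give an interface): each metadata server answers location lookups from its cache and consults the back-end centralized decision point only on a miss, so a higher hit ratio directly translates into less pressure on that decision point.

```python
class MetadataLocationCache:
    """Illustrative per-server cache of 'which metadata server owns this path'."""

    def __init__(self, central_decision_point):
        self.resolve_centrally = central_decision_point  # callable: path -> server id
        self.locations = {}
        self.hits = self.misses = 0

    def lookup(self, path):
        owner = self.locations.get(path)
        if owner is not None:
            self.hits += 1
            return owner
        # Miss: consult the back-end decision point and remember the answer.
        self.misses += 1
        owner = self.resolve_centrally(path)
        self.locations[path] = owner
        return owner

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```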

5.
As cluster-based Web servers are increasingly adopted to host a variety of network-based services, improving the performance of such servers has become critical to satisfying customer demand. In particular, user response time is an important factor in whether clients feel satisfied with a Web service. In this paper, we investigate the feasibility of minimizing a server's response time by exploiting the advantages of both user-level communication and coscheduling. We thus propose a coscheduled server model based on the recently proposed distributed PRESS Web server, where remote cache accesses can be coscheduled on different nodes to reduce response time. We evaluate this concept using two known coscheduling techniques, dynamic coscheduling (DCS) and DCS with immediate blocking. We have developed a comprehensive simulation testbed that captures the underlying communication layer in a cluster, the characteristics of various coscheduling algorithms, and the characteristics of the distributed server model, in order to estimate average delay and throughput under different system configurations. The accuracy of the VIA communication layer and the DCS mechanism is verified using measurements on a 16-node Linux cluster. Extensive simulation of four server models (PRESS over VIA, coscheduled PRESS with DCS, with DCS and blocking, and Adaptive) on 32-node cluster configurations indicates that the average response time of a distributed server can be reduced significantly by coscheduling the communicating processes. The DCS scheme reduced the average latency by up to a factor of four compared with the PRESS over VIA model, which uses only user-level communication.

6.
Computer Networks, 2002, 38(1): 75-97
We describe the design, implementation, and performance of a high-performance Web server accelerator that runs on an embedded operating system and improves Web server performance by caching data. It can serve Web data at rates an order of magnitude higher than those achieved by a high-performance Web server running on similar hardware under a conventional operating system such as Unix or NT. The superior performance of our system results in part from its highly optimized communications stack. To maximize hit rates and keep caches up to date, our accelerator provides an API that allows application programs to explicitly add, delete, and update cached data, so the accelerator can cache dynamic as well as static data. We describe how the accelerator can be scaled to multiple processors to increase performance and availability. The basic design alternatives place either a content router or a TCP router (without content routing) in front of a set of Web cache accelerator nodes, with the cache memory distributed across the accelerator nodes. Content-based routing reduces cache-node CPU cycles but can make the front-end router a bottleneck. With the TCP router, a request for a cached object may initially be sent to the wrong cache node; this consumes more cache-node CPU cycles but can provide higher aggregate throughput, because the TCP router becomes a bottleneck at a higher throughput than the content router. We quantify the throughput ranges in which the different designs are preferable, and also examine a combination of content-based and TCP routing. In addition, we present statistics from deployments of our accelerator used to improve performance at highly accessed sporting and event Web sites hosted by IBM.
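The two routing alternatives can be sketched as follows (an illustration under assumed names and hash placement, not the accelerator's code): a content router parses the URL and always reaches the owning cache node, while a TCP router picks a node blindly and that node may have to forward the request, trading front-end work for extra cache-node cycles.

```python
import hashlib, itertools

NODES = ["cache0", "cache1", "cache2"]
_next = itertools.cycle(range(len(NODES)))   # round-robin state for the TCP router

def owner_node(url):
    # Cache node whose memory is responsible for this object (hash placement assumed).
    return NODES[int(hashlib.sha1(url.encode()).hexdigest(), 16) % len(NODES)]

def content_route(url):
    # Content router: inspects the URL, so the request goes straight to the owner.
    return owner_node(url), None

def tcp_route(url):
    # TCP router: picks any node without parsing the URL; a wrong guess means the
    # first node must forward the request, spending extra cache-node CPU cycles.
    first = NODES[next(_next)]
    forwarded_to = None if first == owner_node(url) else owner_node(url)
    return first, forwarded_to
```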

7.
Computer Networks, 2002, 38(6): 795-808
Web content caches are often placed between end users and origin servers as a means to reduce server load, network usage, and, ultimately, user-perceived latency. Cached objects typically have associated expiration times, after which they are considered stale and must be validated with a remote server (the origin or another cache) before they can be sent to a client. A considerable fraction of cache “hits” involve stale copies that turn out to be current. These validations of current objects have small message sizes but nonetheless often induce latency comparable to full-fledged cache misses. Thus, the effectiveness of caches as a latency-reducing mechanism depends not only on content availability but also on its freshness. We propose policies under which caches proactively validate selected objects as they become stale, allowing more client requests to be processed locally. Our policies operate within the existing protocols and exploit natural properties of request patterns such as frequency and recency. We evaluate and compare the different policies using trace-based simulations.
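A minimal sketch of a proactive-validation decision in this spirit (the rule and thresholds are my own illustration, not the paper's policies): when an entry's expiration time passes, revalidate it in the background only if it has been requested often and recently, so that subsequent client hits can be served locally without a validation round trip.

```python
import time

def should_prerefresh(entry, now=None, min_requests=3, recency_window=300.0):
    """Decide whether to proactively revalidate a freshly expired cache entry.

    entry: dict with 'expires', 'request_count', 'last_access' (epoch seconds).
    Illustrative rule: refresh objects that are both popular and recently used.
    """
    now = time.time() if now is None else now
    if now < entry["expires"]:
        return False                      # still fresh, nothing to do
    frequently_used = entry["request_count"] >= min_requests
    recently_used = (now - entry["last_access"]) <= recency_window
    return frequently_used and recently_used

entry = {"expires": 1000.0, "request_count": 7, "last_access": 1190.0}
print(should_prerefresh(entry, now=1200.0))   # True: popular and recently touched
```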

8.
In a distributed video server system, cooperative caching makes full use of the idle memory on each server to form a larger, logically unified cooperative cache, which raises the overall cache hit ratio and thus the performance of the whole system. This article studies cooperative caching in that setting: it analyzes the motivation for adopting cooperative caching, compares several traditional cooperative caching mechanisms, and proposes a cooperative caching policy suited to distributed video server systems.

9.
To maintain quality of service, some heavily trafficked Web sites use multiple servers, which share information through a shared file system or data space. The Andrew file system (AFS) and distributed file system (DFS), for example, can facilitate this sharing. In other sites, each server might have its own independent file system. Although scheduling algorithms for traditional distributed systems do not address the special needs of Web server clusters well, a significant evolution in the computational approach to artificial intelligence and cognitive engineering shows promise for Web request scheduling. Not only is this transformation - from discrete symbolic reasoning to massively parallel and connectionist neural modeling - of compelling scientific interest, but it is also of considerable practical value. Our novel application of connectionist neural modeling to map Web page requests to Web server caches maximizes the hit ratio while balancing load among the caches. In particular, we have developed a new learning algorithm for fast Web page allocation on a server using the self-organizing properties of the neural network (NN).

10.
Do users wait less if proxy caches incorporate estimates of the current network conditions into document replacement algorithms? To answer this, we explore two new caching algorithms: (1) keep in the cache the documents that take the longest to retrieve; and (2) use a hybrid of several factors, trying to keep in the cache documents from servers that take a long time to connect to, that must be loaded over the slowest Internet links, that have been referenced most frequently, and that are small. The algorithms work by estimating Web page download delays or proxy-to-Web-server bandwidth from recent page fetches. The new algorithms are compared to the three best existing policies (LRU, LFU, and SIZE) using three measures (user response time, Web server load, and network bandwidth consumed) on workloads from Virginia Tech and Boston University.
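A rough sketch of the hybrid idea (the weights and field names are illustrative assumptions, not the paper's formula): each document's retained value grows with its estimated connection cost, transfer cost, and reference count, and shrinks with its size; the cache evicts the lowest-valued document first.

```python
def retention_score(doc, w_connect=1.0, w_bandwidth=1.0, w_refs=0.5, w_size=1.0):
    """Higher score = more worth keeping. All weights are illustrative."""
    connect_cost = w_connect * doc["connect_time_s"]            # slow servers are costly to revisit
    transfer_cost = w_bandwidth * (doc["size_bytes"] / max(doc["bandwidth_bps"], 1.0))
    popularity = w_refs * doc["ref_count"]
    size_penalty = w_size * (doc["size_bytes"] / 1024.0)        # prefer keeping small documents
    return connect_cost + transfer_cost + popularity - size_penalty

def choose_victim(cache):
    # Evict the document that is cheapest to lose under the hybrid estimate.
    return min(cache, key=retention_score)
```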

11.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching: proxy caching replicates Web content throughout the Internet, reducing latency from network delays and off-loading traffic from the primary servers, while Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. We present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing, while using time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and to the sequential-consistency file system protocols we propose to replace.

12.
A Web Cache Cluster Architecture Based on Decentralized Cooperation   Cited by 1 (0 self, 1 other)
Web object caching is an important way to reduce Web traffic and access latency. By analyzing existing Web cache systems, this paper proposes a Web cache cluster architecture based on decentralized cooperation. The architecture removes the need for a dedicated management server required by centralized systems, eliminates the risk that a failed management-server bottleneck brings the whole system down, and reduces the latency the management server introduces; at the same time it eliminates the multi-hop forwarding delay on cache misses and the content overlap found in decentralized systems, improving resource utilization and system efficiency while remaining scalable and robust.

13.
Replication of information across multiple servers is becoming a common approach to supporting popular Web sites. A distributed architecture with mechanisms to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname onto the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site, because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The more realistic assumption that Web node capacities are heterogeneous increases the complexity of the request assignment problem and severely affects the applicability and performance of existing load-sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes address both the non-uniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control.
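A minimal sketch of an adaptive-TTL computation (the scaling rule is an assumption for illustration, not the paper's scheme): the ADNS shortens the TTL it returns for mappings that point to low-capacity or heavily loaded nodes, and for client domains that generate many requests per address resolution, so that a single cached mapping cannot hide too much load.

```python
def adaptive_ttl(base_ttl, node_capacity, max_capacity, node_load, domain_request_rate):
    """Return a per-mapping TTL in seconds. All scaling choices are illustrative.

    - Less capable or more loaded nodes get shorter TTLs, so the ADNS can
      redirect traffic away from them sooner.
    - Client domains that send many requests per resolution also get shorter
      TTLs, limiting how much hidden load one cached mapping can generate.
    """
    capacity_factor = node_capacity / max_capacity            # in (0, 1]
    load_factor = max(1.0 - node_load, 0.05)                  # node_load in [0, 1]
    demand_factor = 1.0 / max(domain_request_rate, 1.0)
    ttl = base_ttl * capacity_factor * load_factor * demand_factor
    return max(int(ttl), 1)                                   # never hand out a zero TTL

print(adaptive_ttl(base_ttl=240, node_capacity=50, max_capacity=100,
                   node_load=0.8, domain_request_rate=4.0))   # short TTL for a busy, small node
```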

14.
A Web Container Integration Framework for a J2EE Application Server   Cited by 5 (0 self, 5 other)
林泊, 周明辉, 刘天成, 黄罡, 梅宏. 《软件学报》(Journal of Software), 2006, 17(5): 1195-1203
To address the shortcomings of the traditional way J2EE (Java 2 Platform, Enterprise Edition) application servers integrate a Web container, this paper proposes a two-layer Web container integration framework: the outer layer is independent of any particular Web container implementation and lets other modules, such as management and deployment tools, interact with the Web container, while the inner layer wraps, extends, or improves a specific Web container. The framework has been implemented in the J2EE application server PKUAS (Peking University Application Server). Test results show that the framework offers good pluggability, unifies the application server's configuration and management mechanisms, and performs well.

15.
This paper presents WebGALAXY, a flexible multi-modal user interface system that allows wide access to selected information on the World Wide Web (WWW) by integrating spoken and typed natural language queries with hypertext navigation. WebGALAXY extends our GALAXY spoken language system, a distributed client-server system for retrieving information from online sources through speech and natural language. WebGALAXY supports a spoken user interface via a standard telephone line as well as a graphical user interface via a standard Web browser using either a Java/JavaScript or a cgi-bin/forms front end. Natural language understanding is performed by the system, and information servers retrieve the requested information from various online resources, including WWW servers, Gopher servers, and CompuServe. Currently, queries in three domains are supported: weather, air travel, and points of interest around Boston.

16.
Research on Sharing Distributed Heterogeneous Spatial Data   Cited by 8 (0 self, 8 other)
In building a WebGIS system for earthquake prevention and disaster reduction, GML is introduced to describe spatial data, and a spatial database system is designed to enable the sharing of multi-source, heterogeneous data. The system is built on a three-tier architecture of client, server, and database server. The client tier implements the graphical user interface and data presentation. The server tier consists of a Web server and a GIS application server: the former mainly handles communication with clients, while the latter accesses the database server and handles acquiring, converting, and exporting multi-source spatial data as standard GML files. The database server stores local data and links to distributed heterogeneous databases. Java Servlets implement the functionality of the Web server and the GIS application server, a Java applet implements the client, and SVG is used to visualize the GML data.

17.
A Content-Based Dynamic Load-Balancing Algorithm for Web Servers   Cited by 1 (0 self, 1 other)
This work studies content-based dynamic load-balancing algorithms for Web servers and proposes a new algorithm that combines the number of servers assigned to each request class with the K value of the PICK-KX algorithm. Simulation results show that the algorithm strikes a good balance between a high cache hit ratio and a low load-computation overhead on the servers. Under extreme load, a cluster balanced with this algorithm can return more request responses.

18.
This paper describes a scalable architecture for Web servers designed to cope with the ongoing growth of Internet requirements. The paper first discusses the drawbacks of the traditional Web server architecture and the need for an innovative solution. The proposed design addresses two parameters that can dramatically impact Web server performance: (1) the need for a powerful data management system to cope with the increasing complexity of users' requests; and (2) an efficient caching mechanism to reduce the amount of redundant traffic. In this direction, a scalable solution based on distributed database technology that replaces the file system is described, and performance test results of the system are provided. The architecture is further extended by a collaborative caching system that builds an adaptive hierarchy of caches for Web servers, allowing them to keep up with changes in the traffic generated by the applications they run. Finally, some improvements to the proposed architecture are discussed.

19.
Developing a distributed scalable Java component server   Cited by 2 (0 self, 2 other)
We present approaches for a distributed, scalable Java component server. The first uses a resource broker model, whereby the system is composed of one or several entry-point servers, a resource broker, and a set of participating servers. The resource broker gives the system its dynamic scalability and load-balancing capability by notifying participants and providing information to the entry-point servers. An experimental version of this server has been developed. Two other approaches, based on Jini and JavaSpaces, are proposed; an experimental version of the latter is also compared with the resource broker model.

20.
Due to the heavy-tailed pattern of Internet traffic, it is crucial to monitor the incoming arrival rate in a Web system to preserve its performance. In this study, we focus on the arrival-rate processing mechanism as part of the design of an adaptive load-balancing Web algorithm. The arrival rate is one of the most important metrics to monitor in a Web site to avoid possible congestion of the Web servers. Six methods are analysed for detecting burstiness in the incoming traffic of a Web system. We define six burstiness factors to be individually included in an adaptive load-balancing algorithm, which also needs to monitor some Web server parameters continuously, such as the arrival rate at the server or its CPU utilization, in order to avoid an unexpected overload situation. We also define adaptive time slot scheduling based on the burstiness factor, which considerably reduces the overhead of the monitoring process by increasing the monitoring frequency when bursty traffic arrives at the system and decreasing it when no bursts are detected in the arrival rate. Simulation results on the behaviour of the six burstiness factors and of adaptive time slot scheduling when sudden changes are detected in the arrival rate are presented and discussed. The simulations consider a locally distributed, cluster-based Web information system.
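A minimal sketch of adaptive time slot scheduling driven by a burstiness signal (the ratio test and constants are illustrative assumptions, not one of the paper's six factors): the monitor samples often while the current arrival rate is well above its recent average and backs off when traffic is steady.

```python
from collections import deque

class AdaptiveMonitor:
    """Illustrative monitor that tunes its sampling interval to traffic burstiness."""

    def __init__(self, base_slot=10.0, min_slot=1.0, burst_threshold=2.0, history=20):
        self.base_slot = base_slot            # seconds between samples under calm traffic
        self.min_slot = min_slot              # floor used while a burst is in progress
        self.burst_threshold = burst_threshold
        self.rates = deque(maxlen=history)    # recent arrival-rate samples (req/s)

    def next_slot(self, current_rate):
        """Record the latest arrival rate and return the next monitoring interval."""
        self.rates.append(current_rate)
        avg = sum(self.rates) / len(self.rates)
        bursty = avg > 0 and current_rate / avg >= self.burst_threshold
        # Sample frequently during bursts, back off when the arrival rate is steady.
        return self.min_slot if bursty else self.base_slot

monitor = AdaptiveMonitor()
for rate in (20, 22, 19, 95):                 # a sudden spike in req/s
    slot = monitor.next_slot(rate)
print(slot)                                   # 1.0: the spike triggers frequent monitoring
```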
