Similar Documents
10 similar documents found (search time: 109 ms)
1.
Proxy caching is an effective approach to reducing response latency to client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging web applications require an increasing amount of server-side processing, yet current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform the required computation and provide faster responses. A weblet engine provides the execution environment on proxy servers as well as web servers, facilitating uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach, modifying the industry-standard e-commerce benchmark TPC-W to fit the weblet model and using its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.
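To make the migration idea concrete, here is a minimal sketch assuming a hypothetical `Weblet`/`WebletEngine` API (the paper's actual interfaces are not given in the abstract): a processing unit is serialized at the origin web server, installed into an engine running on the proxy, and executed there to answer requests locally.

```python
# Minimal sketch of the weblet idea: a self-contained processing unit that can
# run at the web server or be shipped to a proxy and executed there.
# Weblet and WebletEngine are illustrative names, not the paper's API.
import pickle

class Weblet:
    """A migratable processing unit: code plus state bundled together."""
    def __init__(self, state):
        self.state = state

    def run(self, request):
        # Server-side computation that would otherwise need a round trip to
        # the origin server; here, a trivial templated response.
        return f"Hello {request['user']}, items cached: {self.state['items']}"

class WebletEngine:
    """Uniform execution environment, deployable on proxy or web server."""
    def __init__(self):
        self.weblets = {}

    def install(self, name, blob):
        # Receive a serialized weblet migrated from the origin web server.
        self.weblets[name] = pickle.loads(blob)

    def handle(self, name, request):
        return self.weblets[name].run(request)

# The origin server serializes a weblet; the proxy installs and executes it,
# answering subsequent requests locally instead of contacting the origin.
blob = pickle.dumps(Weblet({"items": 3}))
proxy_engine = WebletEngine()
proxy_engine.install("greeter", blob)
print(proxy_engine.handle("greeter", {"user": "alice"}))
```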

2.
Songqing Chen, Xiaodong Zhang. Software, 2004, 34(14): 1381-1395
The amount of dynamic Web content and secured e-commerce transactions has been increasing dramatically on the Internet, where proxy servers between clients and Web servers are commonly used to share commonly accessed data and reduce Internet traffic. A significant and unnecessary Web access delay is caused by the overhead in proxy servers of processing two types of accesses, namely dynamic Web contents and secured transactions, which not only increases response time but also raises some security concerns. Conducting experiments on Squid proxy 2.3STABLE4, we have quantified this unnecessary processing overhead to show its significant impact on client access response times. We have also analyzed the technical difficulties in eliminating or reducing the processing overhead, and the security loopholes in the existing proxy structure. To address these performance and security concerns, we propose a simple but effective client-side technique that adds a detector interfacing with a browser. With this detector, a standard browser, such as Netscape/Mozilla, gains simple detection and scheduling functions, becoming what we call a detective browser. Upon an Internet request from a user, the detective browser can immediately determine whether the requested content is dynamic or secured. If so, the browser bypasses the proxy and forwards the request directly to the Web server; otherwise, the request is processed through the proxy. We implemented a detective browser prototype in Mozilla version 0.9.7 and tested its functionality and effectiveness. Since we have simply moved the necessary detection functions from a proxy server to a browser, the detective browser introduces little overhead to Internet access, and our software can easily be patched into existing browsers. Copyright © 2004 John Wiley & Sons, Ltd.
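As an illustration of the dispatch rule such a detective browser applies, the sketch below classifies a request as dynamic/secured or static using invented heuristics (https scheme, query strings, common dynamic-content extensions) and routes it accordingly; the real detector's criteria may differ.

```python
# Sketch of the detective-browser routing decision: dynamic or secured
# requests bypass the proxy, static requests go through it.
# The heuristics below are illustrative assumptions, not the paper's rules.
from urllib.parse import urlparse

DYNAMIC_HINTS = (".cgi", ".php", ".asp", ".jsp")

def is_dynamic_or_secured(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme == "https":      # secured transaction: bypass the proxy
        return True
    if parsed.query:                  # a query string implies dynamic content
        return True
    return parsed.path.endswith(DYNAMIC_HINTS)

def route(url: str, proxy: str, origin: str) -> str:
    # Return the next hop the browser should contact for this request.
    return origin if is_dynamic_or_secured(url) else proxy

print(route("https://shop.example.com/checkout", "proxy:3128", "shop.example.com"))
print(route("http://www.example.com/logo.png", "proxy:3128", "www.example.com"))
```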

3.
Using cable modems that operate at several hundred times the speed of conventional telephone modems, many cable operators are beginning to offer World Wide Web access and other data services to residential subscribers. Initial experience indicates that real-world hybrid fiber coaxial (HFC) networks are susceptible to a variety of radio-frequency impairments that significantly reduce the benefits of using high-speed cable modems. The effects of packet losses in the access network are particularly accentuated during subscriber accesses to remote servers on the Internet: the longer round-trip times of such accesses, together with the high packet loss rate, result in dramatic degradations in the performance perceived by subscribers. This paper shows that by using proxy servers to handle all remote accesses from an HFC access network, the performance of remote accesses can be significantly enhanced even when the proxy servers do not function as data caches. By handling packet losses that occur in the HFC network locally, at low latency and without the remote server even being aware of the loss, a proxy server enables faster recovery from packet losses. Most importantly, since it controls data transmissions over the local HFC network, the proxy server's transmission control protocol (TCP) implementation can be optimized for the loss characteristics of the HFC access network, enabling a significant increase in performance when the access network is lossy.
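The split-connection behavior described here can be sketched as follows, with placeholder addresses: the proxy terminates the subscriber's TCP connection locally and opens an independent connection to the remote server, so losses on the HFC side are retransmitted over the short local path only.

```python
# Simplified sketch of a split-connection relay. The subscriber's TCP session
# ends at the proxy; a second, independently tuned TCP connection carries the
# data to the remote server. Host names and ports are placeholders.
import socket, threading

def pipe(src, dst):
    # Copy bytes one direction until the source closes.
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def handle(client):
    # Second TCP connection, tuned independently of the lossy access link.
    upstream = socket.create_connection(("remote.example.com", 80))
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

listener = socket.socket()
listener.bind(("0.0.0.0", 8080))
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```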

4.
As user load on web servers becomes more globalised, duplicated regional mirroring is becoming an increasingly expensive way of meeting regional peak demand. Alternative solutions are being explored through dynamic load balancing, using distributed intelligent middleware to re-route traffic from busy regions to quieter ones as global load patterns change over a 24 h cycle. The same techniques can also be employed under fault and planned-maintenance conditions. One such solution, which provides the load balancing via reconfigurable dynamic proxy servers, is 'unobtrusive' in that it works with standard web browsers and web server technology. The technique employs an evolutionary algorithm to perform combinatorial optimisation against a dynamic performance-predicting model. This paper describes the solution, focussing on issues such as algorithm tuning, scalability and reliability. A prototype system is currently being trialled within the Systems Integration Department at British Telecommunications Research Labs, Adastral Park, and is the subject of several BT-held patents.
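A toy version of such an evolutionary search, with an invented quadratic-imbalance fitness standing in for the paper's performance-predicting model, might look like this:

```python
# Illustrative evolutionary loop: evolve a mapping of client regions to proxy
# servers against a latency-predicting model. The fitness function and all
# parameters are invented for this sketch.
import random

REGIONS, SERVERS = 8, 3
LOAD = [random.uniform(0.5, 2.0) for _ in range(REGIONS)]  # predicted regional demand

def fitness(assignment):
    # Penalize imbalance: the sum of squared per-server load stands in for
    # the queueing delay a real performance model would predict.
    per_server = [0.0] * SERVERS
    for region, server in enumerate(assignment):
        per_server[server] += LOAD[region]
    return -sum(l * l for l in per_server)

def evolve(generations=200, pop_size=30, mutation=0.1):
    pop = [[random.randrange(SERVERS) for _ in range(REGIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            for i in range(REGIONS):              # random point mutations
                if random.random() < mutation:
                    child[i] = random.randrange(SERVERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # best region -> server assignment found
```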

5.
In distributed oblivious transfer protocols, all proxy servers are usually assumed to be semi-honest to simplify the security analysis; if some proxy servers are malicious, however, the receiver may reconstruct incorrect messages. To counter such malicious proxy servers, a verifiable 1-out-of-N distributed oblivious transfer scheme is proposed. In addition to the properties of general distributed oblivious transfer, the scheme has two further features: the receiver R can interact with all proxy servers, avoiding the restriction on the threshold value k found in other schemes; and R can verify the correctness of all messages, preventing a malicious proxy server from compromising R's privacy by tampering with the secret share of some message and observing whether the tampering is detected.
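The verification property alone can be illustrated with the toy sketch below, in which the sender XOR-shares each message across the proxies and publishes a hash commitment, so the receiver detects any tampered share on reconstruction. This shows only the verifiability idea; the obliviousness of real distributed OT (hiding R's choice) is omitted entirely.

```python
# Toy illustration of share verification, NOT the paper's scheme: XOR secret
# sharing plus a published hash commitment lets the receiver detect a proxy
# that returns a tampered share.
import hashlib, secrets
from functools import reduce

def bytes_xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(message: bytes, n_proxies: int):
    # n-1 random shares plus one final share that XORs back to the message.
    shares = [secrets.token_bytes(len(message)) for _ in range(n_proxies - 1)]
    return shares + [reduce(bytes_xor, shares, message)]

def reconstruct_and_verify(shares, commitment):
    msg = reduce(bytes_xor, shares)
    if hashlib.sha256(msg).hexdigest() != commitment:
        raise ValueError("a proxy tampered with its share")
    return msg

msg = b"message chosen by R"
commitment = hashlib.sha256(msg).hexdigest()   # published by the sender
shares = share(msg, n_proxies=4)
print(reconstruct_and_verify(shares, commitment))      # verifies and returns msg
shares[0] = bytes_xor(shares[0], b"\x01" * len(msg))   # a malicious proxy
try:
    reconstruct_and_verify(shares, commitment)
except ValueError as err:
    print("detected:", err)
```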

6.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption, so much research has focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism, but many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses these clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into consideration the access speed disparity between memory and disk and replaces many expensive small disk I/Os with fewer large ones. If a client request cannot be fulfilled by the server from memory, the system fetches into memory the whole cluster containing the required object; future requests for other objects in the same cluster can then be satisfied directly from memory, and slow disk I/Os are avoided. Tulip also introduces a simple and efficient data duplication algorithm, so little maintenance work needs to be done when a server joins, leaves, or fails. Together with its local caching strategy, Tulip achieves better fault tolerance and load balancing capability at minimal cost. Our simulation results show that Tulip outperforms previous approaches.
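Cluster-granularity caching, the core of this design, can be sketched as follows, with simplified in-memory stand-ins for the disk-resident clusters:

```python
# Condensed sketch of cluster-granularity caching: objects that tend to be
# accessed together live in one cluster, and a miss pulls the whole cluster
# from disk into memory in a single large I/O. Data structures are
# simplified placeholders, not Tulip's actual layout.
DISK = {  # cluster_id -> {object_url: content}; stands in for on-disk clusters
    "c1": {"/a.html": b"<a>", "/a.css": b"a{}", "/a.js": b"f()"},
}
CLUSTER_OF = {url: cid for cid, objs in DISK.items() for url in objs}
memory_cache = {}   # cluster_id -> objects, loaded as one unit

def get(url):
    cid = CLUSTER_OF[url]
    if cid not in memory_cache:
        # One large read replaces several small per-object disk I/Os.
        memory_cache[cid] = DISK[cid]
    return memory_cache[cid][url]

get("/a.html")        # miss: loads cluster c1 with one "large" I/O
print(get("/a.css"))  # hit: co-clustered object served from memory
```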

7.
Anonymity technologies enable Internet users to maintain a level of privacy that prevents the collection of identifying information such as the IP address. Understanding the deployment of anonymity technologies on the Internet is important for analyzing current and future trends. In this paper, we provide a tutorial survey and a measurement study of anonymity technology usage on the Internet from multiple perspectives and platforms. First, we review currently deployed anonymity technologies and assess their usage levels, covering proxy servers, remailers, JAP, I2P, and Tor, together with the geo-location of deployed servers. Among these systems, proxy servers, Tor, and I2P are actively used, while remailers and JAP see minimal usage. We then analyze application-level protocol usage and anonymity technology usage across different applications. To this end, we perform a measurement study, collecting data from a Tor exit node, a P2P client, a large campus network, a departmental email server, and publicly available data on spam sources to assess the utilization of anonymizer technologies from various perspectives. Our results confirm previous findings regarding application usage and server geo-location distribution, with certain countries utilizing anonymity networks significantly more than others. Moreover, our application analysis reveals that Tor and proxy servers are used more than the other anonymity techniques.
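A drastically simplified version of one classification step in such a study, tallying observed client IPs against known anonymizer address lists (dummy lists here), could look like this:

```python
# Illustrative tally in the spirit of the measurement study: classify observed
# client IPs against known anonymizer address lists (e.g. a fetched Tor exit
# list) and count usage per technology. The lists and IPs are dummies.
from collections import Counter

TOR_EXITS = {"203.0.113.5"}       # placeholder for a fetched exit-node list
OPEN_PROXIES = {"198.51.100.9"}   # placeholder open-proxy list

def classify(ip):
    if ip in TOR_EXITS:
        return "tor"
    if ip in OPEN_PROXIES:
        return "proxy"
    return "direct"

observed = ["203.0.113.5", "198.51.100.9", "192.0.2.1", "203.0.113.5"]
print(Counter(classify(ip) for ip in observed))
```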

8.
To counter malicious Internet threats such as botnets and intrusions, ensuring the high reliability of network devices and the security and integrity of data is an urgent problem. This paper designs and implements a kernel-level network traffic monitoring system consisting of five modules: traffic capture, traffic detection, traffic statistics, a protection module, and a Web console. Both traffic capture and monitoring are performed entirely in the kernel, reducing the performance overhead. The system reduces the exposure of servers to malicious traffic attacks and provides strong protection for the many Linux servers in use.
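In the system described, capture happens in the kernel; the user-space sketch below only illustrates the kind of flow-statistics and thresholding logic involved, with invented thresholds and record formats:

```python
# User-space analogue of the statistics/protection logic: account bytes per
# flow and trigger a protection hook when a flow exceeds a threshold.
# Thresholds and the packet-record format are invented for this sketch.
from collections import defaultdict

flow_bytes = defaultdict(int)     # (src_ip, dst_port) -> accumulated bytes
ALERT_THRESHOLD = 10_000          # bytes per flow per window (illustrative)

def account(src_ip, dst_port, length):
    key = (src_ip, dst_port)
    flow_bytes[key] += length
    if flow_bytes[key] > ALERT_THRESHOLD:
        # Protection-module hook: e.g. drop or rate-limit the flow.
        print(f"blocking suspicious flow {key}")

# Feed already-parsed packet records (src_ip, dst_port, length).
for pkt in [("203.0.113.7", 22, 6000), ("203.0.113.7", 22, 6000)]:
    account(*pkt)
```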

9.
Currently, multicore systems are prevalent in desktops, laptops, and servers. A web proxy can save network traffic overhead and shorten communication cost, and with the fast development of wireless Internet access, web proxies will play an even more important role in the future. To obtain fast responses and a high hit rate from the proxy, we study the processing pipeline of a web proxy and exploit the parallelism that exists in the various kinds of proxy work flows. We propose the CP technique to build parallel tasks in a proxy system. The results show that our scheme can efficiently improve data throughput and fully utilize the computing resources provided by the multicore system.
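The sketch below shows a generic multicore proxy pattern, not the paper's CP technique itself: independent request-handling tasks are dispatched to a worker pool so all cores stay busy.

```python
# Generic sketch of multicore parallelism in a proxy: independent
# request-handling tasks run on a pool of workers. This is an assumed
# illustration, not the CP technique from the paper.
from concurrent.futures import ThreadPoolExecutor

def handle_request(url):
    # Stages a real proxy would run per request: cache lookup, possible
    # origin fetch, response rewriting; condensed to a placeholder here.
    return f"served {url}"

urls = [f"/page{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:   # e.g. one worker per core
    for result in pool.map(handle_request, urls):
        print(result)
```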

10.
Obtaining the route from the local host to a remote server through a proxy server
For a host on a LAN that accesses the Internet through a proxy server, all IP addresses are processed by the proxy when communicating with a remote server, so the host cannot communicate with the remote server directly over TCP/IP. We therefore use the application-layer HTTP protocol to communicate with the remote server, and use local port binding to identify, among the many packets collected, which ones were sent back by the designated remote server, thereby filtering out the corresponding route information.
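The two mechanisms combined here can be sketched as follows, with placeholder addresses: an HTTP request is issued through the proxy in absolute-URI form, and the socket is bound to a known local port so replies belonging to this probe can be identified in captured traffic.

```python
# Sketch of the two mechanisms the abstract relies on: HTTP through the proxy
# (direct TCP/IP to the remote server is blocked) and a bound local port so
# replies attributable to this probe can be filtered out of captured packets.
# Proxy and remote addresses are placeholders.
import socket

PROXY = ("proxy.local", 3128)
LOCAL_PORT = 40000              # known source port used to filter replies

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", LOCAL_PORT))   # local port binding
sock.connect(PROXY)
# Absolute-URI request form used when talking to a proxy.
sock.sendall(b"GET http://remote.example.com/ HTTP/1.1\r\n"
             b"Host: remote.example.com\r\n\r\n")
reply = sock.recv(4096)
print(reply.split(b"\r\n", 1)[0])    # status line of the proxied response
sock.close()
```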
