Similar Literature
20 similar documents found
1.
Songqing Chen  Xiaodong Zhang 《Software》2004,34(14):1381-1395
The amount of dynamic Web content and secured e‐commerce transactions has been dramatically increasing on the Internet, where proxy servers between clients and Web servers are commonly used to share commonly accessed data and reduce Internet traffic. A significant and unnecessary Web access delay is caused by the overhead in proxy servers in processing two types of accesses, namely dynamic Web content and secured transactions, which not only increases response time but also raises some security concerns. Conducting experiments on Squid proxy 2.3STABLE4, we have quantified the unnecessary processing overhead to show its significant impact on client access response times. We have also analyzed the technical difficulties in eliminating or reducing the processing overhead and the security loopholes in the existing proxy structure. To address these performance and security concerns, we propose a simple but effective client-side technique that adds a detector interfacing with a browser. With this detector, a standard browser, such as Netscape/Mozilla, gains simple detection and scheduling functions, becoming a detective browser. Upon an Internet request from a user, the detective browser can immediately determine whether the requested content is dynamic or secured. If so, the browser bypasses the proxy and forwards the request directly to the Web server; otherwise, the request is processed through the proxy. We implemented a detective browser prototype in Mozilla version 0.9.7 and tested its functionality and effectiveness. Since we have simply moved the necessary detection functions from a proxy server to a browser, the detective browser introduces little overhead to Internet access, and our software can easily be patched into existing browsers. Copyright © 2004 John Wiley & Sons, Ltd.
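The detection step the abstract describes — classify a request as dynamic or secured, and if so bypass the proxy — can be sketched as below. The abstract does not give the prototype's actual rules, so the scheme, query-string, and extension heuristics here are assumptions for illustration only.

```python
from urllib.parse import urlparse

# Hypothetical heuristics; the paper's actual detection rules are not
# stated in the abstract.
DYNAMIC_EXTENSIONS = {".cgi", ".asp", ".jsp", ".php"}

def should_bypass_proxy(url: str) -> bool:
    """Return True if the request should go directly to the origin server."""
    parts = urlparse(url)
    if parts.scheme == "https":     # secured transaction
        return True
    if parts.query:                 # query string implies dynamic content
        return True
    path = parts.path.lower()
    return any(path.endswith(ext) for ext in DYNAMIC_EXTENSIONS)
```

With such a check in the browser, only cacheable static requests ever reach the proxy, which is the source of the overhead reduction the paper measures.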

2.
As the Internet has become a more central aspect of information technology, so have concerns with supplying enough bandwidth and serving web requests to end users in an appropriate time frame. Web caching was introduced in the 1990s to help decrease network traffic, lessen user-perceived lag, and reduce loads on origin servers by storing copies of web objects on servers closer to end users rather than forwarding all requests to the origin servers. Since web caches have limited space, they must effectively decide which objects are worth caching or replacing with other objects. This problem is known as cache replacement. We used neural networks to solve this problem and proposed the Neural Network Proxy Cache Replacement (NNPCR) method. The goal of this research is to implement NNPCR in a real environment such as the Squid proxy server. To do so, we propose an improved strategy of NNPCR referred to as NNPCR-2. We show how the improved model can be trained with up to twelve times more data and achieve a 5–10% higher Correct Classification Ratio (CCR) than NNPCR. We implemented NNPCR-2 in the Squid proxy server and compared it with four other cache replacement strategies. In this paper, we use 84 times more data than NNPCR was tested against and present exhaustive test results for NNPCR-2 with different trace files and neural network structures. Our results demonstrate that NNPCR-2 made important, balanced decisions in relation to the hit rate and byte hit rate, the two performance metrics most commonly used to measure the performance of web proxy caches.
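The core idea — score each cached object with a trained network and evict the lowest-scoring one — can be sketched with a toy single-neuron scorer. NNPCR-2 trains a multilayer network on proxy trace data; the features, weights, and bias below are illustrative assumptions, not the paper's trained model.

```python
import math

# Toy stand-in for the trained network: weights are made up for the example.
WEIGHTS = {"recency": -0.8, "frequency": 1.2, "size": -0.4}
BIAS = 0.1

def cache_score(recency: float, frequency: float, size: float) -> float:
    """Sigmoid score in (0, 1): higher means more worth keeping in cache."""
    z = (WEIGHTS["recency"] * recency
         + WEIGHTS["frequency"] * frequency
         + WEIGHTS["size"] * size
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))

def choose_victim(objects: dict) -> str:
    """Pick the cached object (key) with the lowest score for replacement."""
    return min(objects, key=lambda k: cache_score(*objects[k]))
```

For example, a stale, rarely used object loses to a fresh, popular one:

```python
objects = {"a": (0.9, 0.1, 0.5),   # old, rarely requested
           "b": (0.1, 0.9, 0.2)}   # recent, frequently requested
choose_victim(objects)             # evicts "a"
```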

3.
An Effective Method for Tracing Encrypted Proxy Packets
周正 《计算机工程》2007,33(21):142-143
Starting from an analysis of network packets, this paper explains the working principle of the censorship-circumvention tool 无界 (Wujie, v6.9), proposes a method for detecting and monitoring users of this software at the network egress, and gives a generic detection method applicable to other circumvention tools. Experimental data demonstrate the effectiveness of the method.

4.
余顺争 《计算机工程与应用》2004,40(24):134-137,155
This paper proposes a design for a wireless Internet proxy gateway, a key device between mobile users and Web servers for providing wireless Internet service. The new design improves the quality of service and performance of the wireless Internet by combining Web prefetching with mobility-state estimation. The prefetching component brings part of the content to the proxy gateway according to access statistics and selection criteria, reducing the latency of information pulled by mobile users. The push component, based on mobility-state estimation, delivers to mobile users the information they are most likely to need or find interesting (for example, location-dependent advertisements), improving the accuracy (hit rate) of such pushes. Finally, the design is validated by computer simulation.

5.
Based on practical network-maintenance experience, this paper summarizes the role of proxy servers in providing Internet access to users of large campus networks and the various problems they present, analyzes these problems, and proposes practicable solutions.

6.
Design, implementation, and evaluation of differentiated caching services
With the dramatic explosion of online information, the Internet is undergoing a transition from a data communication infrastructure to a global information utility. PDAs, wireless phones, Web-enabled vehicles, modern PCs, and high-end workstations can be viewed as appliances that "plug in" to this utility for information. The increasing diversity of such appliances calls for an architecture for performance differentiation of information access. The key performance accelerator on the Internet is the caching and content distribution infrastructure. While many research efforts have addressed performance differentiation in the network and on Web servers, providing multiple levels of service in the caching system has received much less attention. This paper has two main contributions. First, we describe, implement, and evaluate an architecture for differentiated content caching services as a key element of the Internet content distribution architecture. Second, we describe a control-theoretical approach that lays well-understood theoretical foundations for resource management to achieve performance differentiation in proxy caches. An experimental study using the Squid proxy cache shows that differentiated caching services provide significantly better performance to the premium content classes.

7.
Proxy servers have been used to cache web objects to alleviate the load of web servers and to reduce network congestion on the Internet. In this paper, a central video server is connected to a proxy server via wide area networks (WANs) and the proxy server can reach many clients via local area networks (LANs). We assume a video can be either entirely or partially cached in the proxy to reduce WAN bandwidth consumption. Since the storage space and the sustained disk I/O bandwidth are limited resources in the proxy, how to efficiently utilize these resources to maximize the WAN bandwidth reduction is an important issue. We design a progressive video caching policy in which each video can be cached at several levels corresponding to cached data sizes and required WAN bandwidths. For a video, the proxy server determines whether to cache a smaller amount of data at a lower level or to gradually accumulate more data to reach a higher level. The proposed progressive caching policy allows the proxy to adjust the caching amount for each video based on its resource condition and the user access pattern. We investigate scenarios in which the access pattern is known a priori or unknown, and the effectiveness of the caching policy is evaluated.
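The resource-allocation question above — which videos get more cached data under a fixed proxy budget — can be sketched as a greedy choice by bandwidth saving per byte of storage. This is an illustrative simplification, not the paper's policy: it treats each level increment independently and ignores the per-video level ordering, and all numbers are made up.

```python
def allocate(levels, storage_budget):
    """Greedily pick caching-level increments by WAN-bandwidth saving per
    unit of proxy storage.

    levels: list of (video, step_cost, step_saving) tuples, one per
    incremental caching level.
    Returns the chosen increments in the order they were taken."""
    chosen, used = [], 0
    for video, cost, saving in sorted(
            levels, key=lambda t: t[2] / t[1], reverse=True):
        if used + cost <= storage_budget:
            chosen.append((video, cost, saving))
            used += cost
    return chosen
```

With a budget of 20 storage units and candidates `[("A", 10, 50), ("B", 20, 30), ("C", 5, 40)]`, the sketch takes C (ratio 8) and A (ratio 5) and skips B.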

8.
As research on Web services deepens, unified authentication in environments with multiple Web servers is becoming increasingly important. Based on the characteristics of the authentication schemes of different Web servers, this paper proposes an intelligent authentication proxy model for multiple Web sites that realizes unified authentication. The model uses cookies, network-traffic monitoring, the RSA cryptosystem, and other key techniques to authenticate against an individual Web server, and uses a mapping mechanism to coordinate multiple Web servers. The model has been deployed on a university campus network: a user logs in only once to complete a complex business process, making authentication transparent to the user.

9.
A popular technique to improve the scalability of a web based system is caching at proxy servers. Caching has the drawback that a cached page becomes stale when the page is updated at the web server. In some cases, staleness may not be completely avoided because the server may not wish to expend the processing and communication resources required to transmit all the updates immediately. In general, if updates are transmitted less frequently, the staleness will tend to increase, but the amount of resources consumed will be reduced. The tradeoff between resource consumption and staleness is investigated. A measure of staleness is defined and optimization problems are formulated. The solutions to these problems allow one to come up with an optimal strategy for transmitting page updates. Numerical examples showing the resource consumption/staleness tradeoff are presented.
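The tradeoff the abstract formulates can be illustrated with a deliberately simple model (not the paper's actual formulation): if updates are pushed every T seconds, average staleness grows like T/2 while transmission cost is proportional to 1/T, so the total cost c(T) = c_tx/T + c_st·T/2 has a closed-form minimum at T* = sqrt(2·c_tx/c_st).

```python
import math

def cost(T: float, c_tx: float, c_st: float) -> float:
    """Combined cost: transmissions per unit time plus average staleness,
    weighted by c_tx and c_st respectively (illustrative model)."""
    return c_tx / T + c_st * T / 2.0

def optimal_interval(c_tx: float, c_st: float) -> float:
    """Interval minimizing cost(); from d/dT (c_tx/T + c_st*T/2) = 0."""
    return math.sqrt(2.0 * c_tx / c_st)
```

For c_tx = 8 and c_st = 4 the optimum is T* = 2, and pushing either more or less often costs more, which is exactly the shape of the tradeoff curve described above.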

10.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption. Thus, much research has focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data-grouping overhead. It takes into consideration the access-speed disparity between memory and disk and replaces expensive small disk I/Os with fewer large ones. When a client request cannot be fulfilled by the server from memory, the system fetches the whole cluster containing the required object into memory, so future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. Tulip also introduces a simple and efficient data duplication algorithm; little maintenance work is needed when servers join, leave, or fail. Along with the local caching strategy, Tulip achieves better fault tolerance and load-balancing capability with minimal cost. Our simulation results show Tulip has better performance than previous approaches.
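The cluster-as-access-unit idea can be sketched as follows: on a miss, one large read pulls the whole cluster into memory so later requests for its neighbours avoid disk I/O. The cluster layout and the in-process "disk" here are stand-ins for Tulip's actual on-disk structures.

```python
# Toy cluster cache in the spirit of Tulip (illustrative, not the real system).
class ClusterCache:
    def __init__(self, disk_clusters):
        self.disk = disk_clusters          # cluster_id -> {obj: data} on "disk"
        self.obj_to_cluster = {o: c for c, objs in disk_clusters.items()
                               for o in objs}
        self.memory = {}                   # objects currently in memory
        self.disk_reads = 0                # count of large cluster reads

    def get(self, obj):
        if obj not in self.memory:         # miss: one large read, whole cluster
            cluster = self.obj_to_cluster[obj]
            self.memory.update(self.disk[cluster])
            self.disk_reads += 1
        return self.memory[obj]
```

Two requests to the same cluster cost a single disk read, which is precisely the substitution of one large I/O for many small ones that the abstract describes.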

11.
Currently, multicore systems are prevalent in desktops, laptops, and servers. The web proxy can save network traffic overhead and shorten communication cost, and with the fast development of wireless Internet access, the web proxy will take an even more important role in the future. To obtain fast responses and a high hit rate from the proxy, we study the processing of the web proxy and exploit the parallelism that exists in various kinds of proxy workflows. We propose the CP technique to build parallel tasks in a proxy system. The results show that our scheme can efficiently improve data throughput and fully utilize the computing resources provided by the multicore system.

12.
Why the ongoing surge in Internet popularity? The simplest explanation is that there is nothing else like it. Local area networks enable data exchange only with a select set of other users. The Internet is the largest wide area data network in existence: there are nearly 12 million hosts and over 250 000 web sites covering 83 countries. Currently there are around 40 million users, a number expected to grow to 100 million by the year 2000. The Internet can now support storage, searching, and transmission of full multimedia data, including audio, video, and formatted documents as well as conventional data. The Internet allows local and wide area users to communicate with more people in more ways, and provides access to the largest range of database servers in the world. The growth in demand for Internet access has been accompanied by the development of an ever-growing range of client-server tools and GUIs (graphical user interfaces). This tutorial paper discusses a number of the pertinent issues relating to the protocols, architecture, services, and facilities of the Internet.

13.
Water resources web applications or "web apps" are growing in popularity as a means to overcome many of the challenges associated with hydrologic simulations in decision-making. Water resources web apps fall outside the capabilities of standard web development software because of their spatial data components. These spatial data needs can be addressed using a combination of existing free and open source software (FOSS) for geographic information systems (FOSS4G) and FOSS for web development. However, the abundance of available FOSS projects can be overwhelming to new developers. In an effort to map out this web of FOSS features and capabilities, we reviewed many of the state-of-the-art FOSS projects in the context of those that have been used to develop water resources web apps published in the peer-reviewed literature in the last decade (2004–2014).

14.
A Centrally Managed Web Caching System and Its Performance Analysis
Sharing cached files is an important way to reduce network traffic and server load. After introducing Web caching techniques and ICP, the popular inter-cache communication protocol, this paper proposes a centrally managed Web caching system. The system dispatches each user HTTP request to an appropriate cache server according to a given algorithm, thereby eliminating the heavy inter-server communication and cache-processing overhead within the cache system and reducing the redundancy of cached content. Analysis shows that, compared with a simple ICP-based caching system, the centrally managed system offers higher caching efficiency, lower processing overhead, and smaller latency, and it also scales well.
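The dispatch step described above — each request URL maps to exactly one cache server, so servers never need to query each other as they do under ICP and no object is cached twice — can be sketched with simple modulo hashing. The abstract does not specify the actual dispatch algorithm, so this scheme is an illustrative assumption.

```python
import hashlib

def dispatch(url: str, servers: list) -> str:
    """Deterministically map a request URL to one cache server
    (illustrative modulo hashing; the paper's algorithm may differ)."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Because the mapping is deterministic, every request for the same URL lands on the same server, which is what removes both the inter-server queries and the cache redundancy. (A modulo scheme does remap most URLs when a server is added or removed; consistent hashing is the usual refinement.)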

15.
As Internet applications have developed and spread, proxy servers have quickly become familiar to users; with growing application demands, however, ordinary proxy servers can no longer satisfy users' filtering needs. This paper proposes a new proxy server that filters Web content based on JPEG images: it decides whether to filter a Web page by computing the similarity between the features of JPEG images intercepted by the proxy and the features of images in a reference feature library, thereby monitoring objectionable Web content and protecting a healthy and civilized network environment.

16.
高平  广晖  陈熹  李光松 《计算机工程》2021,47(8):140-148,156
Secure proxies are used by more and more Internet users to circumvent censorship and access restricted resources, so classifying secure proxy traffic is important for network security and network management. To compensate for the shortcomings of deep packet inspection in filtering and identifying undesirable content and to improve firewall traffic-detection capability, this paper proposes a method for classifying secure proxy traffic. Side-channel features used for classification, including payload length sequences and signal sequences, are extracted, and machine learning and deep learning algorithms are used to identify traffic from four widely used secure proxies: Shadowsocks, V2Ray, Freegate, and Ultrasurf. Experimental results show that classifying with side-channel features independent of payload content improves accuracy, F1 score, and other metrics compared with algorithms such as MLP and LSMP.

17.
Proxy caching is an effective approach to reduce the response latency to client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging web applications require increasing amounts of server-side processing, yet current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform required computation and provide faster responses. A weblet engine is developed to provide the execution environment on proxy servers as well as web servers and to facilitate uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating a weblet environment with the current web/proxy infrastructure.

18.
周文刚  马占欣 《微机发展》2007,17(4):120-124
Necessary and effective content filtering of Web pages is important for building a healthy and safe network environment. Reproducing the content of Web pages a user has successfully visited enables after-the-fact auditing of network access and provides data for improving the filtering mechanism. This paper analyzes the Web page access process and, based on an HTTP proxy server, implements keyword filtering and semantics-based content filtering of Web pages at the application layer; by storing the Web pages clients have successfully visited on the proxy server's disk, content reproduction is realized. Experiments show that semantic filtering distinguishes different viewpoints in a text well, with accuracy clearly higher than keyword filtering alone.

19.
《Computer》2002,35(3):18-21
As the computer industry focuses on system and network security, a growing number of users are taking a closer look at open source software in order to gauge whether its potential advantages outweigh its possible disadvantages. Although open source security has been around for years, it has never been as widely used as open source products like the Linux OS or the Apache Web server have been. John Pescatore, Internet security research director at market-research firm Gartner Inc., said open source security tools now represent 3 to 5 percent of security-software usage but could comprise 10 to 15 percent by 2007. A key factor in this potential growth is the quality of numerous open source security packages. Open source software products include free tools that users can download from the Internet, packages that come with commercial vendor support, and tools bundled with closed source products. The most popular tools include Netfilter and iptables; intrusion-detection systems such as Snort, Snare, and Tripwire; vulnerability scanners like Nessus and Saint; authentication servers such as Kerberos; and firewalls like T.Rex. Some companies are even beginning to use open source security to protect mission-critical applications.

20.
In this article, we describe an adaptation proxy, developed as part of the Kontti research project at VTT Information Technology, that lets mobile users access Web content that is not directly targeted to mobile user agents. More and more content is now available on the Internet, and there is a growing need for mobile users to be able to access it. The adaptation proxy can adapt Extensible Hypertext Markup Language (XHTML) documents into XHTML Mobile Profile (XHTML MP) and Wireless Markup Language (WML), and can perform media adaptation. At the system's core is an adaptation framework to which new source and target XML languages can be introduced with relatively little effort.
