Similar documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
周文刚  马占欣 《微机发展》2007,17(4):120-124
Necessary and effective content filtering of Web pages is important for creating a healthy and safe network environment. Reproducing the content of Web pages that users have successfully visited allows after-the-fact auditing of network access and provides data for improving the filtering mechanism. This paper analyzes the Web page access process and, based on an HTTP proxy server, implements keyword filtering and semantics-based content filtering of Web pages at the application layer; by storing the Web pages successfully visited by clients on the proxy server's disk, content replay is achieved. Experiments show that semantic filtering distinguishes different viewpoints in text well, with noticeably higher accuracy than keyword filtering alone.
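The keyword-filtering and content-replay steps described in this abstract can be illustrated with a minimal sketch. The blocked-word list, the hash-based file name, and the cache directory below are assumptions for illustration, not details from the paper: a fetched page is checked against a keyword list and, if allowed, a copy is written to the proxy's disk so the visit can be replayed during a later audit.

```python
import hashlib
from pathlib import Path

BLOCKED_KEYWORDS = {"violence", "gambling"}   # assumed example terms
CACHE_DIR = Path("/var/cache/webfilter")      # assumed replay store

def filter_and_store(url: str, html: str) -> bool:
    """Return True if the page may be delivered to the client.

    Allowed pages are written to disk so that a successful visit
    can be reproduced (content replay) during a later audit.
    """
    text = html.lower()
    if any(word in text for word in BLOCKED_KEYWORDS):
        return False                          # block the response
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    name = hashlib.sha1(url.encode("utf-8")).hexdigest() + ".html"
    (CACHE_DIR / name).write_text(html, encoding="utf-8")
    return True
```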

2.
《Computer Networks》1999,31(11-16):1725-1736
The World-Wide Web provides remote access to pages using its own naming scheme (URLs), transfer protocol (HTTP), and cache algorithms. Not only does using these special-purpose mechanisms have performance implications, but they make it impossible for standard Unix applications to access the Web. Gecko is a system that provides access to the Web via the NFS protocol. URLs are mapped to Unix file names, providing unmodified applications access to Web pages; pages are transferred from the Gecko server to the clients using NFS instead of HTTP, significantly improving performance; and NFS's cache consistency mechanism ensures that all clients have the same version of a page. Applications access pages as they would Unix files. A client-side proxy translates HTTP requests into file accesses, allowing existing Web applications to use Gecko. Experiments performed on our prototype show that Gecko is able to provide this additional functionality at a performance level that exceeds that of HTTP.
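The core idea, mapping a URL into a path under an NFS-mounted directory so that unmodified Unix tools can open Web pages like local files, can be sketched as follows. The mount point and the mapping rules are assumptions for illustration and do not reproduce Gecko's actual scheme.

```python
from urllib.parse import urlsplit

GECKO_MOUNT = "/gecko"   # assumed NFS mount point exported by the server

def url_to_path(url: str) -> str:
    """Map an http URL onto a file name below the mount point,
    so that ordinary Unix tools can open the page like a local file."""
    parts = urlsplit(url)
    path = parts.path or "/index.html"
    if path.endswith("/"):
        path += "index.html"          # assumed default document name
    return f"{GECKO_MOUNT}/{parts.netloc}{path}"

# e.g. url_to_path("http://example.com/docs/") -> "/gecko/example.com/docs/index.html"
```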

3.
This paper considers the placement of proxy servers in a network when data updates must be propagated and the distance between proxy servers and clients must be bounded. The goal is to find the optimal number and locations of proxy servers so that the total cost of data access in the network (including reads and updates) is minimized. Using a binary tree structure and dynamic programming, a polynomial-time algorithm with time complexity O(n²) is obtained, where n is the number of network nodes.

4.
5.
Who killed Gopher? An extensible murder mystery
Everyone knew there had to be an easier way to use the Internet than the Unix shell experience, mind you, but original editions of the Gopher and HyperText Transfer Protocols were painfully trivial hacks that did nothing FTP didn't already handle. So here begins our mysterious tale: why either protocol ever rose to prominence in the first place, and how the fratricidal drama eventually played out. Understanding how HTTP killed Gopher may lead us to the forces that may, in turn, topple HTTP.

6.
Vetter  R.J. Spell  C. Ward  C. 《Computer》1994,27(10):49-57
The World-Wide Web, an information service on the Internet, uses hypertext links to other textual documents or files. Users can click on a highlighted word or words in the text to obtain additional information about the selected word(s). Users can also access graphic pictures, images, audio clips, or even full-motion video through hypermedia, an extension of hypertext. One of the most popular graphics-oriented browsers is Mosaic, which was developed at the National Center for Supercomputing Applications (NCSA) as a way to graphically navigate the WWW. Mosaic browsers are currently available for Unix workstations running X Windows, PCs running Microsoft Windows, and Macintosh computers. Mosaic can access data in WWW servers, Wide Area Information Servers (WAIS), Gopher servers, Archie servers, and several others. The World-Wide Web is still evolving at a rapid pace. Distributed hypermedia systems on the Internet will continue to be an active area of development in the future. The flexibility of the WWW design, its use of hyperlinks, and the integration of existing WAIS and Gopher information resources make the WWW ideal for future research and study. Highly interactive multimedia applications will require more sophisticated tools than currently exist. The most significant issue that needs to be resolved is the mismatch between WWW system capabilities and user requirements in the areas of presentation and quality of service.

7.
Research and Implementation of a WWW Proxy Server
Proxy servers are one effective way to mitigate security problems. Based on an analysis of the HTTP protocol, this paper describes the workflow and implementation principles of a WWW proxy server, and a usable WWW proxy server was developed on that basis.

8.
董武  李晓辉 《微机发展》2003,13(10):74-76,116
Placing a proxy server between the Web server and the client allows clients with different capabilities to browse Internet content. By analyzing how images are represented in the page source, the role of an image within the page can be inferred. Based on that role, the client's capabilities, the network bandwidth, and user settings, the proxy server applies different conversion methods and strategies to transcode and re-encode images, so that the converted images suit different clients.

9.
An HTTP-Based Web Server Monitoring System
Targeting the WWW server, the most widely used service in Internet applications, this system adopts a monitoring model different from traditional SNMP and RMON: it bypasses the Web server's SNMP agent and MIB and instead uses the request/response model of the HTTP protocol, collecting the response status line and the entity body to monitor the Web server's running state and page content. Experiments show that this approach manages the Web server well and improves the real-time performance of management and monitoring.
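A minimal sketch of this polling idea, querying the server over plain HTTP and inspecting only the status line and body, might look like the following. The URL, the expected marker string, and the use of Python's standard library are assumptions for illustration, not the system described in the paper.

```python
import urllib.request

def check_server(url: str, expected_text: str) -> bool:
    """Fetch a page over HTTP and report whether the server looks healthy,
    based only on the response status line and the entity body."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and expected_text in body
    except OSError:
        return False   # connection refused, timeout, DNS failure, ...

# e.g. check_server("http://www.example.com/", "Welcome")
```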

10.
A Study of the Distributions of WWW Traffic Access Characteristics
WWW traffic appears as a series of access sequences, and the logs of Web servers and proxy servers record the process and characteristics of these sequences well. Studying the characteristics of WWW traffic is the basis for Web server research, Web middleware research, and the synthesis of artificial Web workloads. This paper analyzes the logs of one Web server and two proxy servers, focusing on the probability distribution of Web page requests, the probability distribution of static Web document sizes (including transferred documents), and the probability distribution of access distances of static Web documents; the results are compared with those reported in related literature. Experiments also confirm that when document size is used as the basis for Web cache replacement, the access frequency of Web documents should be considered as well.

11.
Measurement and Analysis of HTTP Traffic
Internet usage is increasing rapidly, and a large part of Internet traffic is generated by the World Wide Web (WWW) and its associated protocol, the HyperText Transfer Protocol (HTTP). Several important parameters that affect the performance of the WWW are bandwidth, scalability, and latency. To tackle these parameters and to improve the overall performance of the system, it is important to understand and characterize the application-level characteristics. This article reports on the measurement and analysis of HTTP traffic collected on the student access network at the Blekinge Institute of Technology in Karlskrona, Sweden. The analysis covers various HTTP traffic parameters, e.g., inter-session timings, inter-arrival timings, request message sizes, response codes, and number of transactions. The reported results can be useful for building synthetic workloads for simulation and benchmarking purposes.
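One of the simplest quantities mentioned here, the inter-arrival time of requests, is just the difference between consecutive request timestamps taken from an access log. The tiny sketch below assumes timestamps have already been parsed into seconds; the sample values are placeholders.

```python
from statistics import mean

def inter_arrival_times(timestamps: list[float]) -> list[float]:
    """Compute inter-arrival times (seconds) from request timestamps."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

# e.g. with request arrival times taken from an access log (placeholder values):
arrivals = [0.00, 0.12, 0.95, 1.02, 3.40]
gaps = inter_arrival_times(arrivals)
print(f"mean inter-arrival time: {mean(gaps):.2f} s")
```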

12.
This paper describes the process of building a stable data server based on the Linux operating system in the field of nuclear energy science. FTP, Web, and SSH services are set up by installing and configuring the vsftpd, Apache HTTP, and OpenSSH packages on Linux, and issues that require attention during the process, such as host access control and character-set conversion between different operating systems, are analyzed and resolved.

13.
This paper discusses the main characteristics of the HTTP protocol, analyzes the transmission processes of TCP and HTTP, explains why HTTP and TCP affect Web server performance, analyzes how various network characteristics influence Web server performance and the main factors involved, and finally presents the principal methods for optimizing the performance of WWW systems.

14.
A Preliminary Study of Web Page Security Techniques
With the rapid development of Internet technology, WWW technology has come into wide use. Some Web sites need access control on their Web pages, but the statelessness of the HTTP protocol makes it impossible to rely on HTTP alone for Web page access control. This paper presents methods for username/password verification of Web pages at the application layer using the query string, hidden form fields, and cookies, and compares these methods.
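The three carriers compared in this abstract differ mainly in where the credential token travels. The toy sketch below shows how a token could be pulled from each carrier; the field name "auth" and the token format are assumptions, and a real system would add proper session management and HTTPS.

```python
from http.cookies import SimpleCookie
from urllib.parse import parse_qs, urlsplit

def token_from_query(url: str) -> str | None:
    """Credential passed in the query string, e.g. ...?auth=abc123."""
    qs = parse_qs(urlsplit(url).query)
    return qs.get("auth", [None])[0]

def token_from_form(form_fields: dict[str, str]) -> str | None:
    """Credential passed in a hidden <input type="hidden" name="auth"> field."""
    return form_fields.get("auth")

def token_from_cookie(cookie_header: str) -> str | None:
    """Credential carried in the Cookie request header."""
    jar = SimpleCookie(cookie_header)
    return jar["auth"].value if "auth" in jar else None
```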

15.
As the Internet becomes more intelligent, analyzing the behavior of Internet users is a necessary task. By setting up a network proxy and recording the HTTP requests that users issue on the Internet, a user-behavior log database is built; the logs are then filtered and clustered according to the characteristics of Web access to reduce the data volume; finally, the Open Directory Project (ODP) open classification directory is used to classify and tally user behavior, so that the semantically uninformative ...

16.
The Internet and the WWW have encouraged the adoption of a three-tiered architectural style as the general framework for distributed computing. The thin client concept associated with this model is the most suitable for the WWW, because it can be easily mapped to the browser concept. Additionally, the separation of business logic (middle tier) and back-end services enables further flexibility in the design of this kind of service. Nevertheless, the advent of platform-independent software models like Java and the availability of common off-the-shelf services, like FTP or HTTP, offer new opportunities to more classical distributed computing models. In this paper we revisit the two-tiered model for Internet environments, and show that the fat client (user interface plus business logic) may still be valid, at least in some environments. We propose two solutions to provide full FTP access from a Java applet. Copyright © 2001 John Wiley & Sons, Ltd.

17.
Songqing Chen  Xiaodong Zhang 《Software》2004,34(14):1381-1395
The amount of dynamic Web contents and secured e-commerce transactions has been dramatically increasing on the Internet, where proxy servers between clients and Web servers are commonly used for the purpose of sharing commonly accessed data and reducing Internet traffic. A significant and unnecessary Web access delay is caused by the overhead in proxy servers to process two types of accesses, namely dynamic Web contents and secured transactions, not only increasing response time, but also raising some security concerns. Conducting experiments on Squid proxy 2.3STABLE4, we have quantified the unnecessary processing overhead to show its significant impact on increased client access response times. We have also analyzed the technical difficulties in eliminating or reducing the processing overhead and the security loopholes based on the existing proxy structure. In order to address these performance and security concerns, we propose a simple but effective technique from the client side that adds a detector interfacing with a browser. With this detector, a standard browser, such as the Netscape/Mozilla, will have simple detective and scheduling functions, called a detective browser. Upon an Internet request from a user, the detective browser can immediately determine whether the requested content is dynamic or secured. If so, the browser will bypass the proxy and forward the request directly to the Web server; otherwise, the request will be processed through the proxy. We implemented a detective browser prototype in Mozilla version 0.9.7, and tested its functionality and effectiveness. Since we have simply moved the necessary detective functions from a proxy server to a browser, the detective browser introduces little overhead to Internet accessing, and our software can be patched to existing browsers easily. Copyright © 2004 John Wiley & Sons, Ltd.
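The routing decision made by the detective browser can be approximated by a very small heuristic: secured (HTTPS) or apparently dynamic requests go straight to the origin server, everything else goes through the caching proxy. The specific URL patterns below are assumptions for illustration, not the detection rules of the paper.

```python
from urllib.parse import urlsplit

DYNAMIC_HINTS = (".cgi", ".php", ".asp", ".jsp")   # assumed markers of dynamic content

def use_proxy(url: str) -> bool:
    """Return True if the request should go through the caching proxy,
    False if the browser should contact the Web server directly."""
    parts = urlsplit(url)
    if parts.scheme == "https":                     # secured transaction
        return False
    if parts.query or parts.path.endswith(DYNAMIC_HINTS):
        return False                                # likely dynamic content
    return True                                     # static, cacheable content

# e.g. use_proxy("http://example.com/logo.png") -> True
#      use_proxy("https://shop.example.com/pay") -> False
```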

18.
A Centrally Managed Cooperative Web Caching System
Sharing Web documents cached at different proxies is an important way to reduce traffic and relieve network bottlenecks. On the basis of an analysis of the existing Internet Cache Protocol (ICP), a new cooperative Web caching system (CMCS) is proposed, analyzed, and compared with prior work. By spreading HTTP requests evenly across the proxies in the system, the heavy inter-proxy communication overhead and the resulting processing burden are eliminated. Under dynamically changing network conditions, the proxies are organized effectively to handle documents from origin servers. The system also avoids the earlier situation in which each proxy held a large amount of redundant content, causing the contents of the proxies to converge.
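Spreading requests evenly across proxies is essentially hash-based request routing: each URL is hashed to a single responsible proxy, so no proxy ever needs to query its peers about who holds a document. A minimal sketch of that idea, assuming a fixed list of proxies (the addresses are placeholders and the hashing scheme is not taken from the paper):

```python
import hashlib

PROXIES = ["proxy-a:3128", "proxy-b:3128", "proxy-c:3128"]   # placeholder addresses

def proxy_for(url: str) -> str:
    """Deterministically pick the proxy responsible for caching this URL,
    so every client resolves the same URL to the same proxy."""
    digest = hashlib.md5(url.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(PROXIES)
    return PROXIES[index]

# e.g. proxy_for("http://example.com/index.html") always returns the same entry
```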

19.
Obraczka  K. Danzig  P.B. Li  S.-H. 《Computer》1993,26(9):8-22
An overview of resource discovery services currently available on the Internet is presented. The authors concentrate on the following discovery tools: the Wide Area Information Servers (WAIS) project, Archie, Prospero, Gopher, the World-Wide Web (WWW), Netfind, the X.500 directory, Indie, the Knowbot Information Service (KIS), Alex, Semantic File Systems, and Nomenclator. The authors summarize the surveyed tools by presenting a taxonomy of their characteristics and design decisions. They also describe where to find and how to access several of the surveyed discovery services. They conclude with a discussion of future directions in the area of resource discovery and retrieval.

20.
Techniques for Accelerating Dynamic Web Pages
This paper surveys recent developments in techniques for accelerating dynamic Web pages. It first introduces active caching, in which applets supplied by the origin server run on the proxy server to perform the necessary processing and return results to the user without contacting the origin server. It then introduces server accelerators, which are placed in front of or inside the Web server to speed up access. Finally, it focuses on Edge Side Includes (ESI), a simple markup language for defining page fragments, which allows Web applications to be assembled and delivered dynamically at the edge of the Internet.
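To make the ESI idea concrete, here is a toy edge-side assembler: it scans a cached page template for ESI include tags and splices in the referenced fragments, which is roughly what an ESI-capable edge cache does when it assembles a page. The regular expression and the fragment lookup below are simplifications for illustration only.

```python
import re

ESI_TAG = re.compile(r'<esi:include\s+src="([^"]+)"\s*/?>')

def assemble(template: str, fetch_fragment) -> str:
    """Replace each <esi:include src="..."/> tag with the fragment it names.

    `fetch_fragment` is any callable that maps a fragment URL to its HTML,
    e.g. a local cache lookup or an HTTP fetch performed at the edge server.
    """
    return ESI_TAG.sub(lambda m: fetch_fragment(m.group(1)), template)

# Toy usage: static shell cached at the edge, fragment supplied per request.
page = '<html><body><esi:include src="/fragments/stock-ticker"/></body></html>'
print(assemble(page, lambda src: "<div>DJIA +0.4%</div>"))
```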

