Similar Documents
A total of 20 similar documents were found (search time: 796 ms).
1.
To gain a deeper understanding of website complexity, a web page complexity measurement system is developed. The system measures the complexity of a web page at two levels, transport level and content level, using a packet-trace-based approach rather than server or client logs; packet traces contain more information than either kind of log. Quantitative analyses show that different categories of web pages have different complexity characteristics. Experimental results show that a news web page usually loads many more elements, at more access levels, from many more web servers in diverse administrative domains, over many more concurrent transmission control protocol (TCP) flows. More than half of education pages involve only a few logical servers, with most elements of a page fetched from just one or two of them. Web game traffic after login usually has the fewest content types. The system can help web page designers build more efficient pages, and it can help researchers and Internet users understand the underlying communication details.
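As an illustration of the packet-trace approach described above, the following minimal Python sketch counts the distinct web servers and TCP flows seen during one page load. The capture file name and the use of the scapy library are assumptions, not part of the paper's system.

```python
# Minimal sketch of two transport-level complexity metrics from a packet trace.
# Assumes a capture file "trace.pcap" recorded during one page load (hypothetical
# file name) and the scapy library; this is not the paper's measurement system.
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("trace.pcap")

servers = set()          # distinct server IPs contacted (web-port side)
flows = set()            # distinct TCP flows, keyed by the 4-tuple

for pkt in packets:
    if IP in pkt and TCP in pkt:
        ip, tcp = pkt[IP], pkt[TCP]
        # Treat the side using a web port as the server.
        if tcp.dport in (80, 443):
            servers.add(ip.dst)
            flows.add((ip.src, tcp.sport, ip.dst, tcp.dport))
        elif tcp.sport in (80, 443):
            servers.add(ip.src)
            flows.add((ip.dst, tcp.dport, ip.src, tcp.sport))

print(f"web servers contacted: {len(servers)}")
print(f"TCP flows used:        {len(flows)}")
```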

2.
孔英会  高育栋 《电视技术》2015,39(20):54-58
A Watir-based framework for Internet of Things (IoT) web event processing is built. First, a ZigBee wireless sensor network is set up and the attributes of its end nodes are described on web pages deployed to a server, with microdata describing the static attributes of each sensor and a JavaScript file describing the dynamic attributes. The web automation testing framework Watir is then used, with CSS selector-based element location, to collect dynamic page data in real time; the collected data are preprocessed, event handling is attached, and the key, valid event data are extracted and stored. Watir is also used to test IoT web pages on different platforms under multiple event conditions. The results show that the Watir-based IoT web event processing method acquires event data accurately and efficiently.
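The paper itself uses Watir, a Ruby library. As an analogous illustration only, here is a Python/Selenium sketch that polls a dynamic sensor reading located by a CSS selector and records a simple event; the URL, the selector, and the threshold are hypothetical placeholders.

```python
# Analogous sketch in Python with Selenium (the paper uses Watir in Ruby).
# The URL and the CSS selector ".sensor-value" are hypothetical placeholders.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://example.com/iot/sensors")   # hypothetical sensor page

events = []
try:
    for _ in range(10):                        # poll the dynamic value 10 times
        element = driver.find_element(By.CSS_SELECTOR, ".sensor-value")
        reading = float(element.text)
        if reading > 30.0:                     # simple event condition (threshold)
            events.append(("over-threshold", reading, time.time()))
        time.sleep(1)
finally:
    driver.quit()

print(events)
```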

3.
A distributed file synchronization and replication system for web page tamper-proofing (Cited: 1; self-citations: 0; by others: 1)
赵莉  梁静 《电子设计工程》2012,20(23):50-52
To solve the problems of distributed file publishing and immediate recovery after tampering in a kernel-embedded web server anti-tampering system, a distributed file synchronization and replication system is designed based on the J2EE architecture and the MVC (model-view-controller) development pattern. The system distributes files to multiple web servers; when a file on some web server is detected to have been tampered with, the system quickly copies it back from the original repository to that server, achieving web page tamper-proofing. By comparing files and transmitting only the differing parts, the system improves resource utilization.
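A minimal sketch of the compare-and-restore idea follows: hash fixed-size blocks of the served copy against the original repository and rewrite only the blocks that differ. The paper's system is built on J2EE/MVC; the block size and function names here are illustrative choices.

```python
# Illustrative sketch: restore only the blocks of the served file that differ
# from the original repository copy (the paper's system is J2EE-based).
import hashlib, os

BLOCK = 4096  # bytes per block (arbitrary choice for the sketch)

def block_hashes(path):
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def restore_tampered(original_path, served_path):
    """Copy back only the blocks of served_path that differ from the original."""
    orig, served = block_hashes(original_path), block_hashes(served_path)
    with open(original_path, "rb") as src, open(served_path, "r+b") as dst:
        for i, h in enumerate(orig):
            if i >= len(served) or served[i] != h:
                src.seek(i * BLOCK)
                dst.seek(i * BLOCK)
                dst.write(src.read(BLOCK))
        dst.truncate(os.path.getsize(original_path))   # drop any appended junk
```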

4.
Tree menus are a common data structure in web application systems, but because traditional web applications use a synchronous interaction model, every menu operation is accompanied by a page refresh, which wastes resources and degrades system performance. To address this problem, the characteristics of Ajax are analyzed and an efficient tree menu scheme with dynamically loaded data is designed and implemented. The scheme calls a Web Service to access a remote database server and uses the data in the database…

5.
陈丹 《电子设计工程》2012,20(14):129-131,134
To provide users with useful information and to retrieve and filter web page information on the Internet according to user needs, an XML-based web information filter is designed. The system uses XML as the intermediate data exchange format and combines Microsoft .NET, database technology, and XML technology to design and implement a web service. Using XML-based storage and access for data exchange and processing between the web service and client applications, an XML-based web service is built that parses pages, filters out redundant information, and returns only the useful parsed results to the requesting client, thereby providing users with a service for specific information.
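The paper builds the service with Microsoft .NET; as an illustration of the parse-and-filter step only, here is a small Python sketch that filters items in an XML intermediate document by a user keyword. The sample XML and keyword are made up.

```python
# Illustrative sketch of the XML-as-intermediate-format idea: parse an XML
# document and keep only the items matching the user's keyword.
import xml.etree.ElementTree as ET

SAMPLE = """<items>
  <item><title>Campus sports news</title><url>http://a.example/1</url></item>
  <item><title>Stock quotes</title><url>http://a.example/2</url></item>
  <item><title>Sports schedule</title><url>http://a.example/3</url></item>
</items>"""

def filter_items(xml_text, keyword):
    """Return (title, url) pairs whose title contains the user's keyword."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("url"))
            for item in root.findall("item")
            if keyword.lower() in (item.findtext("title") or "").lower()]

print(filter_items(SAMPLE, "sports"))
# [('Campus sports news', 'http://a.example/1'), ('Sports schedule', 'http://a.example/3')]
```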

6.
The proportion of dynamic objects on the World Wide Web has been growing at a fast rate. In the e-commerce environment, these objects form the core of all web transactions. However, because of their additional resource requirements and changing nature, the performance of accessing dynamic Web content has been observed to be poor in current-generation Web services. We propose a framework called WebGraph that helps improve the response time for accessing dynamic objects. The WebGraph framework manages a graph for each Web page. The nodes of the graph represent weblets, components of the Web page that either stay static or change together; the edges of the graph define the inclusion relationships among the weblets. Both the nodes and the edges have attributes that are used in managing the Web pages. Instead of recomputing and recreating the entire page, the node and edge attributes are used to update only a subset of the weblets, which are then integrated to form the entire page. In addition to the performance benefit of lower response time, the WebGraph framework facilitates Web caching, quality-of-service (QoS) support, load balancing, overload control, personalized services, and security for both dynamic and static Web pages. A detailed implementation methodology for the proposed framework is also described. We have implemented the WebGraph framework in an experimental setup and have measured the performance improvement in terms of server response time, throughput, and connection rate. The results demonstrate the feasibility of the framework and validate a subset of its claimed advantages.
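A minimal sketch of the weblet-graph idea follows: nodes are page fragments, edges express inclusion, and only fragments marked dirty are regenerated before the page is reassembled. Class names and the assembly rule are illustrative, not WebGraph's actual API.

```python
# Illustrative sketch: regenerate only the stale fragments ("weblets") of a page
# and reassemble the whole page from cached and refreshed pieces.
class Weblet:
    def __init__(self, name, render):
        self.name, self.render = name, render
        self.cached, self.dirty = None, True

class PageGraph:
    def __init__(self):
        self.nodes, self.children = {}, {}   # name -> Weblet, name -> [child names]

    def add(self, weblet, parent=None):
        self.nodes[weblet.name] = weblet
        self.children.setdefault(weblet.name, [])
        if parent:
            self.children[parent].append(weblet.name)

    def invalidate(self, name):
        self.nodes[name].dirty = True        # e.g. the price fragment changed

    def assemble(self, name):
        w = self.nodes[name]
        if w.dirty or w.cached is None:      # recompute only stale fragments
            w.cached, w.dirty = w.render(), False
        parts = [self.assemble(c) for c in self.children[name]]
        return w.cached + "".join(parts)

g = PageGraph()
g.add(Weblet("page", lambda: "<h1>Catalog</h1>"))
g.add(Weblet("price", lambda: "<p>price: 9.99</p>"), parent="page")
print(g.assemble("page"))
```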

7.
Improving the performance of data transfers in the Internet (such as Web transfers) requires a detailed understanding of when and how delays are introduced. Unfortunately, the complexity of data transfers like those using HTTP is great enough that identifying the precise causes of delays is difficult. We describe a method for pinpointing where delays are introduced into applications like HTTP by using critical path analysis. By constructing and profiling the critical path, it is possible to determine what fraction of total transfer latency is due to packet propagation, network variation (e.g., queueing at routers or route fluctuation), packet losses, and delays at the server and at the client. We have implemented our technique in a tool called tcpeval that automates critical path analysis for Web transactions. We show that our analysis method is robust enough to analyze traces taken for two different TCP implementations (Linux and FreeBSD). To demonstrate the utility of our approach, we present the results of critical path analysis for a set of Web transactions taken over 14 days under a variety of server and network conditions. The results show that critical path analysis can shed considerable light on the causes of delays in Web transfers, and can expose subtleties in the behavior of the entire end-to-end system.
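As a toy illustration of the critical-path idea, the sketch below models a transfer as a DAG of events whose edges carry a delay and a category, finds the longest path, and reports each category's share of the total latency. The event names and delays are made up; this is not tcpeval.

```python
# Toy critical-path sketch: longest path through an event DAG, then the fraction
# of total latency attributed to each delay category.
from collections import defaultdict

# edges: (src_event, dst_event, delay_ms, category) -- made-up values
edges = [
    ("syn_sent", "syn_acked",   40, "propagation"),
    ("syn_acked", "req_sent",    2, "client"),
    ("req_sent", "resp_start", 120, "server"),
    ("resp_start", "resp_end",  60, "network"),
]

adj = defaultdict(list)
for u, v, d, c in edges:
    adj[u].append((v, d, c))

def critical_path(node):
    """Return (total_delay, list_of_(delay, category)) for the longest path."""
    best = (0, [])
    for v, d, c in adj[node]:
        total, path = critical_path(v)
        if total + d > best[0]:
            best = (total + d, [(d, c)] + path)
    return best

total, path = critical_path("syn_sent")
by_cat = defaultdict(int)
for d, c in path:
    by_cat[c] += d
print(total, {c: round(d / total, 2) for c, d in by_cat.items()})
```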

8.
A related-page mining model based on web user behavior (Cited: 11; self-citations: 0; by others: 11)
This article proposes a model for mining related pages based on web user behavior. The model mines proxy logs with statistical methods: its input is a web page and its output is a set of pages related to it. The underlying assumption is that pages visited by a group of users with similar interests are likely to be related. The model first identifies the users who are interested in the input page, then clusters them to find a group with a similar interest background that is most interested in the input page, and finally aggregates the pages this group is interested in to mine the pages related to the input page. The biggest difference from currently popular related-page retrieval algorithms is that the object of analysis is web user behavior: the model takes the view that the final judge of whether pages are related should be the user, and analyzing user behavior better uncovers users' latent judgments about page relatedness. A user's interest in a page is defined by the user's visit frequency to that page. Experiments show the model is feasible. It can be used to improve traditional IR by providing relevance feedback and query expansion, making it better suited to Internet retrieval, and it can also be used for related-topic prediction in content security.
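A simplified sketch of the behavior-based idea follows; the paper also clusters users by interest background, a step omitted here. Interest is approximated by visit frequency, aggregated over the users who visited the input page. The log records are toy data.

```python
# Simplified sketch: from a proxy log, find users who visited the input page and
# score other pages by those users' visit frequencies (clustering step omitted).
from collections import Counter, defaultdict

# proxy log as (user, url) records -- toy data for illustration
log = [
    ("u1", "/a"), ("u1", "/b"), ("u1", "/b"),
    ("u2", "/a"), ("u2", "/c"),
    ("u3", "/d"), ("u3", "/d"),
]

visits = defaultdict(Counter)          # user -> Counter of page visit counts
for user, url in log:
    visits[user][url] += 1

def related_pages(input_page, top_k=5):
    interested = [u for u, c in visits.items() if c[input_page] > 0]
    scores = Counter()
    for u in interested:
        for page, n in visits[u].items():
            if page != input_page:
                scores[page] += n      # weight by this user's visit frequency
    return scores.most_common(top_k)

print(related_pages("/a"))             # [('/b', 2), ('/c', 1)]
```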

9.
The simplified field-system functionality testing software (FSFTS) was presented. That software was successfully used for verification of the final system functionality and for quality evaluation of its components. Then, two solutions for the server/client software of the distributed measurement system, using the Web as the main transmission medium, were described. The essential parts of the final system are the server and the clients: the server collects remote data and makes them accessible to clients. Both server/client solutions are based on a Windows operating system. The first solution uses Microsoft's .NET environment; the second uses WAMP (Windows plus the free and open source software Apache, MySQL, and PHP). The main difference between the two solutions is how the client page is generated: in the .NET case only on the server side, and in the WAMP case on both the server and client sides. Both systems produce a map/plot/table showing the system data to the user through web browsers.

10.
A Web page classification algorithm based on a BP neural network (Cited: 3; self-citations: 0; by others: 3)
A web page classification algorithm based on a back-propagation (BP) neural network is proposed. Within the structure of a search engine, the page title, content headings, and content summary are extracted to represent a page; the vector space model is used to compute the relevance between category combinations and page combinations, vectorizing each page, and the trained BP neural network is then used to classify web pages. Experimental results show that the classification algorithm has practical value.
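For illustration only, here is a toy sketch of the BP training step: a one-hidden-layer network trained by back-propagation on made-up page feature vectors. The paper's actual features come from title/heading/summary relevance scores under the vector space model, which are not reproduced here.

```python
# Toy BP (back-propagation) sketch on made-up page feature vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 4))                       # 8 pages, 4 relevance features each
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)   # toy binary label

W1, b1 = rng.normal(0, 0.5, (4, 6)), np.zeros(6)
W2, b2 = rng.normal(0, 0.5, (6, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass (squared-error loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print("train accuracy:", ((out > 0.5) == y).mean())
```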

11.
孙万源  戴声奎  张冰冰 《通信技术》2012,45(5):33-35,39
After analyzing and comparing several methods of web parameter passing on the DaVinci DM365, this paper proposes using the embedded Boa server as the web server and CGI (Common Gateway Interface) to pass dynamic parameters between the server side and the client side. Finally, taking an RS485 serial port program as an example, the way the server-side CGI program runs is improved and optimized. Experimental results show that with the CGI program running in the background, dynamic web parameters can be passed while the stalls that occur when switching control pages are effectively eliminated.
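A sketch of the "background CGI" pattern follows: the CGI process answers the browser immediately and leaves the long-running RS485 job to a detached worker, so page switches do not stall. The paper's CGI runs on an embedded DM365 board (presumably written in C); this Python version and the worker path are illustrative assumptions.

```python
#!/usr/bin/env python3
# Sketch: respond at once and run the long RS485 task in a detached background
# process. The worker path "/opt/app/rs485_worker.py" is a hypothetical placeholder.
import os, subprocess, sys
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
baud = query.get("baud", ["9600"])[0]          # dynamic parameter from the page

# Detach the worker: new session, no inherited stdio, so this CGI can exit now.
subprocess.Popen(
    [sys.executable, "/opt/app/rs485_worker.py", "--baud", baud],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    start_new_session=True,
)

sys.stdout.write("Content-Type: text/html\r\n\r\n")
sys.stdout.write(f"<p>RS485 task started at {baud} baud.</p>")
```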

12.
Wide-area wireless Global System for Mobile Communications (GSM) networks the world over have been upgraded to support the general packet radio service (GPRS). GPRS brings "always-on" wireless data connectivity at bandwidths comparable to those of conventional fixed-line telephone modems. Unfortunately, many users have found the reality to be rather different, experiencing very disappointing performance when, for example, browsing the Web over GPRS. In this work, we show what causes the Web and its underlying transport protocol, TCP, to underperform in a GPRS wide-area wireless environment. We examine why certain GPRS network characteristics interact badly with TCP to yield problems such as link underutilization for short-lived flows, excess queueing for long-lived flows, ACK compression, poor loss recovery, and gross unfairness between competing flows. We also show that many Web browsers tend to be overly aggressive and, by opening too many simultaneous TCP connections, can aggravate matters. We present the design and implementation of a Web-optimizing proxy system called GPRSWeb that mitigates many of the GPRS link-related performance problems with a simple software update to a mobile device. The update is a link-aware middleware (a local "client proxy") that sits in the mobile device and communicates with a "server proxy" located at the other end of the wireless link, close to the wired-wireless border. The dual-proxy architecture collectively implements a number of key enhancements: an aggressive caching scheme that employs content-based hash keying to improve hit rates for dynamic content, preemptive push of Web page support resources to mobile clients, resource adaptation to suit client capabilities, delta-encoded transfer of modified pages, DNS lookup migration, and a UDP-based reliable transport protocol specifically optimized for use over GPRS. We show that these enhancements result in significant improvements in Web performance over GPRS links.
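As an illustration of content-based hash keying only (not GPRSWeb's protocol), the sketch below keys cache entries by a digest of the response body, so identical dynamic content served under different URLs is detected and stored once.

```python
# Sketch of content-based hash keying: identical response bodies map to the same
# cache entry regardless of URL, improving hit rates for dynamic content.
import hashlib

class ContentCache:
    def __init__(self):
        self.by_digest = {}          # digest -> body
        self.url_to_digest = {}      # last digest seen for each URL

    def store(self, url, body: bytes):
        digest = hashlib.sha256(body).hexdigest()
        hit = digest in self.by_digest           # same bytes seen before?
        self.by_digest[digest] = body
        self.url_to_digest[url] = digest
        return digest, hit

cache = ContentCache()
print(cache.store("/news?id=1", b"<html>today</html>"))       # (digest, False)
print(cache.store("/news?id=1&s=x", b"<html>today</html>"))   # same digest, True
```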

13.
Research on methods of web page information extraction (Cited: 2; self-citations: 0; by others: 2)
Information extraction is a branch of artificial intelligence; with it, the information people need can be extracted from web pages in a user-friendly way. The information extraction technique proposed here is a new method for inducing web page templates based on the DOM and page templates; it handles template induction well for pages with various layout elements, and a C++ implementation of the core algorithm is given.

14.
Current status of and countermeasures for sports websites at Shaanxi universities (Cited: 2; self-citations: 0; by others: 2)
Using online searches, literature review, and expert interviews, the construction status of the sports websites of 35 universities in Shaanxi is surveyed. Conclusion: the construction of sports websites at Shaanxi universities is still at an early stage; awareness of and attention to website construction are weak, and overall construction lags behind. Universities should therefore, in line with the functions of the network and the characteristics of the university physical education environment, make full use of the advantages of online information resources and accelerate the construction of sports websites in Shaanxi, providing a reference for popularizing and improving university sports networks and for promoting the rapid exchange and development of university online education.

15.
16.
Neural networks have been used in various applications on the World Wide Web, but most of them only rely on the available input-output examples without incorporating Web-specific knowledge, such as Web link analysis, into the network design. In this paper, we propose a new approach in which the Web is modeled as an asymmetric Hopfield Net. Each neuron in the network represents a Web page, and the connections between neurons represent the hyperlinks between Web pages. Web content analysis and Web link analysis are also incorporated into the model by adding a page content score function and a link score function into the weights of the neurons and the synapses, respectively. A simulation study was conducted to compare the proposed model with traditional Web search algorithms, namely, a breadth-first search and a best-first search using PageRank as the heuristic. The results showed that the proposed model performed more efficiently and effectively in searching for domain-specific Web pages. We believe that the model can also be useful in other Web applications such as Web page clustering and search result ranking.
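A toy sketch of the spreading-activation idea follows: each page is a neuron, directed hyperlinks are asymmetric synapses weighted by a link score, and a page-content score acts as the neuron's bias. The scores and the update rule are illustrative choices, not the paper's exact functions.

```python
# Toy asymmetric Hopfield-style activation spreading over a small page graph.
import numpy as np

pages = ["p0", "p1", "p2", "p3"]
content_score = np.array([0.9, 0.1, 0.6, 0.3])     # topic relevance of each page

# W[i, j]: weight of the link from page i to page j (0 if no hyperlink)
W = np.array([[0.0, 0.8, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.4],
              [0.3, 0.0, 0.0, 0.9],
              [0.0, 0.0, 0.0, 0.0]])               # note: asymmetric

a = np.array([1.0, 0.0, 0.0, 0.0])                 # activate the seed page p0
for _ in range(20):                                # iterate until (near) stable
    a = np.clip(np.tanh(a @ W + 0.2 * content_score), 0.0, 1.0)
    a[0] = 1.0                                     # keep the seed page clamped on

ranking = sorted(zip(pages, a), key=lambda t: -t[1])
print(ranking)    # highly activated pages are candidate domain-specific pages
```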

17.
Discovery of Web communities, groups of Web pages sharing common interests, is important for assisting users' information retrieval from the Web. This paper describes a method for visualizing Web communities and their internal structures. Visualization of Web communities in the form of graphs enables users to access related pages easily, and it often reflects the characteristics of the Web communities. Since related Web pages are often co-referred from the same Web page, the number of co-occurrences of references in a search engine is used for measuring the relation among pages. Two URLs are given to a search engine as keywords, and the number of pages retrieved for both URLs divided by the number of pages retrieved for either URL, which is called the Jaccard coefficient, is calculated as the criterion for evaluating the relation between the two URLs. The value is used to determine the length of an edge in the graph, so that vertices of related pages are located close to each other. Our visualization system based on the method succeeds in clarifying various genres of Web communities, although the system does not interpret the contents of the pages. The Jaccard coefficient is easy for computer systems to calculate, and it is well suited to visualization using the data acquired from a search engine.
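A small sketch of the Jaccard-based edge length follows: given hit counts from a search engine for two URLs used as keywords, J = |A and B| / |A or B|, and an edge is drawn shorter when J is larger. The hit counts and the length scaling are made-up illustration values.

```python
# Jaccard coefficient from search-engine hit counts, mapped to an edge length.
def jaccard(hits_a: int, hits_b: int, hits_both: int) -> float:
    """Pages citing both URLs divided by pages citing either (inclusion-exclusion)."""
    union = hits_a + hits_b - hits_both
    return hits_both / union if union else 0.0

def edge_length(j: float, base: float = 100.0, min_len: float = 10.0) -> float:
    """Related pages (large J) are placed close together; unrelated ones far apart."""
    return min_len if j >= 1.0 else min_len + base * (1.0 - j)

j = jaccard(hits_a=120, hits_b=80, hits_both=30)   # J = 30 / 170 ~= 0.176
print(round(j, 3), round(edge_length(j), 1))
```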

18.
Multi-agent data mining can not only apply different mining algorithms to different kinds of web data, but also mine in parallel at each site, avoiding web traffic overload. On the basis of a brief overview of agent technology and web data mining, multi-agent technology and web data mining are combined to design a new data mining model, which is then described in detail, together with some analysis and testing. The results show that the method effectively improves the speed, accuracy, and coverage of web data mining and increases data utilization.

19.
Media-centric networks deal with exchanging large media files between geographically distributed locations under strict deadlines. In such networks, resources need to be available at predetermined timeslots in the future and thus need to be reserved in advance, based on either flexible or fixed timeslot sizes. Reliability of the transfers is also important and can be attained by advance provisioning of redundant reservations. This, however, imposes additional costs, because redundant reservations are rarely used, causing network resources to be wasted. Further adaptation and network utilization can be achieved at runtime by reusing unused reservations to transfer extra data as long as no failure has been detected. In this article, we design, implement, and evaluate a resilient advance bandwidth-reservation approach based on flexible timeslots, combined with a runtime adaptation approach. We take into account the specific characteristics of media transfers. The quality and complexity of the proposed approach have been extensively compared with those of a fixed-timeslot algorithm. Our simulation results reveal that the highest admittance ratio and percentage of fully transferred requests in case of failures are almost always achieved by flexible timeslots, while the execution time of this approach is up to 17.5 times lower compared with the approaches using fixed timeslot sizes.
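As a toy illustration of advance reservation on a single link (not the paper's algorithm), the sketch below admits a request for a given bandwidth over a future interval only if, at every point where existing reservations start or end inside that window, the remaining capacity still covers the demand.

```python
# Toy single-link advance-reservation admission check over a future time window.
def admit(capacity, reservations, start, end, demand):
    """reservations: list of (start, end, bandwidth) already accepted."""
    # Time points inside the window where utilization can change.
    points = {start} | {s for s, e, b in reservations if start <= s < end} \
                     | {e for s, e, b in reservations if start < e <= end}
    for t in sorted(points):
        if t >= end:
            continue
        used = sum(b for s, e, b in reservations if s <= t < e)
        if used + demand > capacity:
            return False
    return True

existing = [(0, 10, 40), (5, 20, 30)]        # (start, end, Mbit/s), toy values
print(admit(capacity=100, reservations=existing, start=8, end=15, demand=25))  # True
print(admit(capacity=100, reservations=existing, start=8, end=15, demand=35))  # False
```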

20.
Building on a study of the structural characteristics of the Web, this method combines page structure and content: content is extracted from different regions of a page and assigned different weights to reflect its importance. Following the hyperlink relationships between pages, links are expanded so that the category information carried by a link's source page is propagated to the target page, improving classification. Experiments show that this method outperforms classification that relies on page content alone.
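A small sketch of the combination described above follows: each region's terms contribute with a region weight, and a page also receives a share of the category scores of the pages linking to it. The region weights, category vocabularies, and mixing factor are illustrative choices, not the paper's tuned values.

```python
# Sketch: region-weighted content scores plus category propagation over in-links.
from collections import Counter

REGION_WEIGHT = {"title": 3.0, "heading": 2.0, "body": 1.0}
CATEGORY_TERMS = {"sports": {"match", "team"}, "finance": {"stock", "market"}}

def content_scores(regions):
    """regions: {region_name: [terms]} -> weighted score per category."""
    scores = Counter()
    for region, terms in regions.items():
        w = REGION_WEIGHT.get(region, 1.0)
        for cat, vocab in CATEGORY_TERMS.items():
            scores[cat] += w * sum(1 for t in terms if t in vocab)
    return scores

def propagate(page_scores, in_links, alpha=0.3):
    """Mix each page's own scores with those of the pages linking to it."""
    mixed = {}
    for page, own in page_scores.items():
        combined = Counter({c: (1 - alpha) * v for c, v in own.items()})
        for src in in_links.get(page, []):
            for c, v in page_scores[src].items():
                combined[c] += alpha * v / max(len(in_links[page]), 1)
        mixed[page] = combined
    return mixed

scores = {"p1": content_scores({"title": ["stock"], "body": ["team"]}),
          "p2": content_scores({"title": ["match", "team"]})}
print(propagate(scores, in_links={"p1": ["p2"]}))
```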

