Query returned 20 similar documents; search time: 31 ms.
1.
黄晓华 《电脑编程技巧与维护》2012,(16):70,111
During the development of ASP.NET Web applications, page redirection is a frequently encountered problem. To address it, several methods for implementing page redirection are presented, and their respective characteristics and usage strategies are discussed.
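The paper's methods are ASP.NET-specific (commonly `Response.Redirect` and `Server.Transfer`); as a language-neutral sketch of the two basic redirection mechanisms they build on, here is a minimal Python illustration (function names are ours, not from the paper):

```python
# Sketch of the two HTTP-level redirection mechanisms. A server-side
# redirect costs an extra round trip and changes the browser's URL;
# a client-side (meta refresh) redirect is embedded in the returned page.

def http_redirect(location: str) -> str:
    """Build a server-side 302 redirect response."""
    return f"HTTP/1.1 302 Found\r\nLocation: {location}\r\n\r\n"

def meta_refresh(location: str, delay: int = 0) -> str:
    """Build an HTML page that redirects on the client side."""
    return (f'<html><head><meta http-equiv="refresh" '
            f'content="{delay};url={location}"></head></html>')
```

The trade-off mirrors the paper's topic: the first method is visible to the client (URL changes, bookmarkable), while the second keeps control in the delivered page.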
2.
Building a Local-Information Proxy Server with Squid (Times cited: 1; self-citations: 0; others: 1)
Modelled on the commercial-break pattern of television broadcasting, we envision an active proxy-center architecture as a means of supporting efficient delivery of local information: as Web pages are fetched through a dynamic proxy server that cooperates with the content server, local information is flexibly inserted into them on demand. This paper describes the design and the various information-delivery functions of a Squid-based proxy server. The scheme is completely transparent to both Web clients and content providers, and is practical to deploy.
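The core operation such a proxy performs is rewriting a fetched page before it reaches the client. A minimal sketch of that insertion step (the Squid integration itself is not shown; the helper below is illustrative only):

```python
def insert_local_info(page_html: str, snippet: str) -> str:
    """Insert a local-information snippet just before </body> of a
    fetched page, falling back to appending if no </body> is present."""
    marker = "</body>"
    if marker in page_html:
        return page_html.replace(marker, snippet + marker, 1)
    return page_html + snippet
```

Because the rewrite happens inside the proxy, neither the browser nor the origin server needs any modification, which is the transparency property the abstract emphasizes.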
3.
4.
Constraint-based document layout for the Web (Times cited: 4; self-citations: 0; others: 4)
Constraints can be used to specify declaratively the desired layout of a Web document. We present a system architecture in which both the author and the viewer can impose
page layout constraints, some required and some preferential. The final appearance of the Web page is thus the result of negotiation
between author and viewer, where this negotiation is carried out by solving the set of required and preferential constraints
imposed by both parties. We identify two plausible system architectures, based on different ways of dividing the work of constraint
solving between Web server and Web client. We describe a prototype constraint-based Web authoring system and viewing tool
that provides linear arithmetic constraints for specifying the layout of the document as well as finite-domain constraints
for specifying font size relationships. Finally, we provide an empirical evaluation of the prototype.
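The required-versus-preferential distinction can be illustrated on a single layout variable. This toy sketch (our construction, far simpler than the paper's linear-arithmetic solver) applies preferences in priority order, keeping each one only if it stays consistent with the required constraints and the preferences already accepted:

```python
def solve_width(page_width, required_min, preferences):
    """Resolve one column width w under required bounds
    required_min <= w <= page_width, then apply preferential
    interval constraints in priority order, skipping any
    preference that would make the system infeasible."""
    lo, hi = required_min, page_width
    for plo, phi in preferences:          # strongest preference first
        nlo, nhi = max(lo, plo), min(hi, phi)
        if nlo <= nhi:                    # consistent: accept it
            lo, hi = nlo, nhi
    return lo                             # pick the smallest feasible width
```

For example, with a viewer preference of at least 500 px and a conflicting author preference of at most 400 px, the higher-priority viewer preference wins and the author's is dropped, which is the negotiation behaviour the abstract describes.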
5.
A segmentation-based fine-grained peer sharing technique for delivering large media files in content distribution networks (Times cited: 1; self-citations: 0; others: 1)
Zongming Fei Mengkun Yang 《IEEE Transactions on Multimedia》2006,8(4):821-829
Delivering large media files over the Internet is a challenging task because it has some unique features that are different from delivering conventional Web documents. In this paper, we propose a fine-grained peer sharing technique for dealing with the problem in the context of content distribution networks. The key difference of the technique from conventional peer-to-peer systems is that the unit of peer sharing is not a complete media file, but at a finer granularity. By doing so, we improve the flexibility of replica servers for handling client requests. We analyze the storage requirement at replica servers and design a scheduling algorithm to coordinate the delivery process from multiple replica servers to a client. Our simulations show that the fine-grained peer sharing approach can reduce the initial latency of clients and the rejection rate of the system significantly over a simple peer sharing method.
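The coordination problem can be sketched as assigning each media segment to one of the replica servers that holds it. This greedy load-balancing sketch is our illustration, not the paper's scheduling algorithm:

```python
def schedule_segments(num_segments, holdings):
    """Assign each segment to one replica server that stores it,
    greedily picking the server with the fewest segments assigned
    so far. `holdings` maps server -> set of segment indices held."""
    load = {server: 0 for server in holdings}
    plan = []
    for seg in range(num_segments):
        candidates = [s for s, segs in holdings.items() if seg in segs]
        best = min(candidates, key=lambda s: load[s])  # least-loaded holder
        load[best] += 1
        plan.append((seg, best))
    return plan
```

Because the unit of sharing is a segment rather than a whole file, even a server holding only part of the file can contribute, which is the flexibility gain the abstract claims.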
6.
We describe the design of a system for fast and reliable HTTP service which we call Web++. Web++ achieves high reliability
by dynamically replicating web data among multiple web servers. Web++ selects the available server that is expected to provide
the fastest response time. Furthermore, Web++ guarantees data delivery given that at least one server containing the requested
data is available. After detecting a server failure, Web++ client requests are satisfied transparently to the user by another
server. Furthermore, the Web++ architecture is flexible enough for implementing additional performance optimizations. We describe
implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that
are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol
and does not require any changes to existing web browsers or the installation of any software on the client side. In
particular, Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype;
performance experiments indicate that the Web++ system with 3 servers improves the response time perceived by clients on average
by 36.6%, and in many cases by as much as 59%, when compared with the current web performance. In addition, we show that batch
resource transmission can improve the response time on average by 39% for clients with fast network connections and 21% for
the clients with 56 Kb modem connections.
This revised version was published online in August 2006 with corrections to the Cover Date.
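Web++'s server-selection and failover behaviour can be sketched as: try replicas in order of expected response time, and transparently move to the next one on failure. The function below is our illustration of that idea, not the Web++ client itself:

```python
def fetch_with_failover(servers, estimated_rtt, fetch):
    """Try replica servers in order of expected response time;
    if a fetch fails, fail over transparently to the next replica.
    Delivery is guaranteed as long as at least one replica is up."""
    for server in sorted(servers, key=lambda s: estimated_rtt[s]):
        try:
            return fetch(server)
        except ConnectionError:
            continue                      # server failed: try the next one
    raise ConnectionError("no replica available")
```

The caller never sees which replica answered, matching the abstract's claim that failures are masked transparently to the user.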
7.
Alberto Bartoli Milan Prica Etienne Antoniutti di Muro 《Concurrency and Computation》2006,18(7):701-724
We propose a service replication framework for unreliable networks. The service exhibits the same consistency guarantees about the order of execution of operation requests as its non‐replicated implementation. Such guarantees are preserved in spite of server replica failure or network failure (either between server replicas or between a client and a server replica), and irrespective of when the failure occurs. Moreover, the service guarantees that when a client sends an ‘update’ request multiple times, there is no risk that the request will be executed multiple times. No hypotheses are made about the retransmission timing of clients, e.g. the very same request might even arrive at different server replicas simultaneously. All of these features make the proposed framework particularly suitable for interaction between remote programs, a scenario that is gaining increasing importance. We discuss a prototype implementation of our replication framework based on Tomcat, a very popular Java‐based Web server. The prototype comes in two flavors: replication of HTTP client session data and replication of a counter accessed as a Web service. Copyright © 2005 John Wiley & Sons, Ltd.
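The at-most-once guarantee for retransmitted updates is typically achieved by caching results keyed on a client-assigned request ID. A minimal single-replica sketch of that idea (our illustration; the paper's framework also coordinates this across replicas):

```python
class AtMostOnceService:
    """Cache the result of each update by its request ID so that a
    retransmitted duplicate returns the cached result instead of
    executing the update a second time."""
    def __init__(self):
        self.results = {}   # request_id -> cached result
        self.counter = 0    # the replicated state being updated

    def update(self, request_id):
        if request_id in self.results:
            return self.results[request_id]   # duplicate: no re-execution
        self.counter += 1                     # perform the actual update
        self.results[request_id] = self.counter
        return self.counter
```

Note that correctness does not depend on when or how often the client retransmits, matching the abstract's "no hypotheses about retransmission timing".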
8.
9.
By placing a proxy server between the Web server and clients, clients of differing capabilities can browse Internet content. By analyzing how images are represented in a page's source file, the role of each image within the page can be inferred. Based on an image's role in the page, the client's capabilities, the network bandwidth, and user settings, the proxy server applies different conversion methods and strategies to transcode and re-encode images, so that the converted images suit different clients.
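The proxy's decision step can be sketched as a policy function mapping (image role, client capability, bandwidth) to a conversion strategy. Role names and thresholds below are illustrative assumptions, not values from the paper:

```python
def choose_transcoding(role, bandwidth_kbps, screen_width):
    """Pick an image conversion strategy from the image's role in the
    page, the client's screen width, and the available bandwidth."""
    if role == "decoration" and bandwidth_kbps < 64:
        return "drop"                       # omit purely decorative images
    quality = 85 if bandwidth_kbps >= 256 else 50
    max_width = min(screen_width, 1024)     # never upscale past the screen
    return f"jpeg,q={quality},w<={max_width}"
```

A real implementation would then re-encode the image bytes accordingly (e.g. with an imaging library) before forwarding the page to the client.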
10.
11.
The protocols used by the majority of Web transactions are HTTP/1.0 and HTTP/1.1. HTTP/1.0 is typically used with multiple concurrent connections between client and server during the process of Web page retrieval. This approach is inefficient because of the overhead of setting up and tearing down many TCP connections and because of the load imposed on servers and routers. HTTP/1.1 attempts to solve these problems through the use of persistent connections and pipelined requests, but there is inconsistent support for persistent connections, particularly with pipelining, from Web servers, user agents, and intermediaries. In addition, the use of persistent connections in HTTP/1.1 creates the problem of non-deterministic connection duration. Web browsers continue to open multiple concurrent TCP connections to the same server. This paper examines the idea of packaging the set of objects embedded on a Web page into a single bundle object for retrieval by clients. Based on measurements from popular Web sites and an implementation of the bundle mechanism, we show that if embedded objects on a Web page are delivered to clients as a single bundle, the response time experienced by clients is better than that provided by currently deployed mechanisms. Our results indicate that the use of bundles provides shorter overall download times and reduced average object delays as compared to HTTP/1.0 and HTTP/1.1. This approach also reduces the load on the network and servers. Implementation of the mechanism requires no changes to the HTTP protocol.
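The bundle idea amounts to packing all embedded objects into one archive so the whole set travels in a single HTTP response. The paper does not specify its on-the-wire bundle format; a zip archive stands in for it in this sketch:

```python
import io
import zipfile

def make_bundle(objects):
    """Pack a page's embedded objects (name -> bytes) into one
    in-memory archive deliverable as a single HTTP response body."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as bundle:
        for name, data in objects.items():
            bundle.writestr(name, data)
    return buf.getvalue()

def read_bundle(blob):
    """Client side: unpack the bundle back into individual objects."""
    with zipfile.ZipFile(io.BytesIO(blob)) as bundle:
        return {n: bundle.read(n) for n in bundle.namelist()}
```

One connection and one request replace a request per object, which is where the reduced setup overhead and server load come from.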
12.
Taking as its research object Web applications that use HTML as the document format, JavaScript as the client-side script, and JSP as the server-side code, and building on an analysis of the shortcomings of existing structure-extraction methods for Web applications, this paper statically analyzes the application's source code to obtain the directory structure and document types of the whole application, then further extracts the main structural elements within each page and stores the resulting information in XML. By constructing and traversing the XML syntax tree, the main components and the relationships between them are extracted, and a structural diagram of the Web application is finally produced. This improves the efficiency of maintenance and evolution work and effectively helps maintainers understand the entire application.
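The intermediate step of storing extracted structure as XML can be sketched with the standard library. The element and attribute names here are our assumptions, not the paper's schema:

```python
import xml.etree.ElementTree as ET

def build_structure_xml(pages):
    """Store extracted Web-application structure as XML: one <page>
    element per document (keyed by its path within the application),
    with its main structural elements as children."""
    root = ET.Element("webapp")
    for path, elements in pages.items():
        page = ET.SubElement(root, "page", path=path)
        for tag in elements:
            ET.SubElement(page, "element", tag=tag)
    return ET.tostring(root, encoding="unicode")
```

A later pass would traverse this tree (e.g. with `root.iter("page")`) to recover components and their relationships for the structure diagram.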
13.
赵中营 《电脑与微电子技术》2012,(21):53-55
In the Internet environment, applications running on a Web server cannot directly obtain client-side hardware information because of the constraints of the network architecture and protocols. After studying the structure of the Internet and Web-page applications, this paper presents a scheme that combines Windows Forms with Web pages, successfully solves this problem, and gives a concrete example.
14.
15.
Gunter Ollmann 《Network Security》2003,2003(6):13-17
Most organizations now have substantial investments in their online Internet presences. For major financial institutions and retailers, the Internet provides both a cost-effective means of presenting their offerings to customers, and a method of delivering a personalised 24/7 presence. In almost all cases, the preferred method of delivering these services is over common HTTP. Due to limitations within the protocol, there is no in-built facility to identify or track a particular customer (or session) uniquely within an application. Thus the connection between the customer's Web browser and the organisation's Web service is commonly referred to as being “stateless”. Because of this, organizations have been forced to adopt custom methods of managing client sessions if they wish to maintain state. An important aspect of correctly managing state information through session IDs relates directly to authentication processes. While it is possible to insist that a client using an organization's Web application provide authentication information for each “restricted” page or data submission, it would soon become tedious and untenable. Thus session IDs are not only used to follow clients throughout the Web application, they are also used to identify each unique, authenticated user, thereby indirectly regulating access to site content or information.
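Because session IDs stand in for authentication, they must be unguessable and delivered with protective cookie attributes. A minimal sketch of both pieces (header string assembled by hand for illustration):

```python
import secrets

def new_session_id() -> str:
    """Generate an unguessable session ID (128 bits of randomness);
    a predictable ID would let an attacker hijack authenticated
    sessions simply by guessing other users' identifiers."""
    return secrets.token_urlsafe(16)

def set_cookie_header(session_id: str) -> str:
    """Set-Cookie header carrying session state over stateless HTTP:
    HttpOnly keeps scripts from reading it, Secure keeps it off
    plaintext connections."""
    return f"Set-Cookie: SESSIONID={session_id}; HttpOnly; Secure"
```

The server then maps the ID to server-side session data on each request, which is exactly the custom state management the article describes organizations adopting.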
16.
A home page is the gateway to an organization's Web site. To design effective Web home pages, it is necessary to understand the fundamental drivers of users' perception of Web pages. Not only do designers have to understand potential users' frame of mind, they also have at their choosing a stupefying array of attributes – including numerous font types, audio, video, and graphics – all of which can be arranged on a page in different ways, compounding the complexity of the design task. A theoretical model capable of explaining user reactions at a molar level should be invaluable to Web designers as a complement to prevalent intuitive and heuristic approaches. Such a model transcends piecemeal page attributes to focus on overall Web page perceptions of users. Reasoning that people perceive the cyberspace of Web pages in ways similar to their perception of physical places, we use Kaplan and Kaplan's informational model of place perception from the field of environmental psychology to predict that only two dimensions, understanding of the information on a Web page and the involvement potential of a Web page, should adequately capture Web page perception at a molar level. We empirically verify the existence of these dimensions and develop valid scales for measuring them. Using a home page as a stimulus in a lab experiment, we find that understanding and involvement together account for a significant amount of the variance in the attitude toward the Web page and in the intention to browse the underlying Web site. We show that the informational model is a parsimonious and powerful theoretical framework to measure users' perceptions of Web home pages and that it could potentially serve as a guide to Web page design and testing efforts.
17.
Necessary and effective content filtering of Web pages is important for creating a healthy and safe network environment. Replaying the content of Web pages that users have successfully visited enables after-the-fact supervision of network access and provides data for improving the filtering mechanism. This paper analyzes the Web-page access process and, based on an HTTP proxy server, implements keyword filtering and semantics-based content filtering of Web pages at the application layer; by storing the pages that clients successfully visit on the proxy server's disk, content replay is achieved. Experiments show that semantic filtering distinguishes different viewpoints in text well, with markedly higher accuracy than keyword filtering alone.
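The first-stage keyword filter such a proxy applies can be sketched in a few lines (the paper's semantic stage, which judges viewpoint, is not reproduced here):

```python
def keyword_filter(text, banned):
    """Application-layer keyword filter: report whether a page passes
    and which banned keywords, if any, were found in its text."""
    hits = [word for word in banned if word in text]
    return (len(hits) == 0, hits)
```

In the paper's design this check runs in the proxy before the page is forwarded, and pages that pass are also written to disk so their content can be replayed later for supervision.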
18.
Information on the Internet often appears in many different languages. To understand this information in as little time as possible, and to use such texts in the Chinese text processing performed for Internet public-opinion monitoring, real-time online Web-page translation is needed. After analyzing the shortcomings of traditional Web-page translation methods, this paper proposes a real-time online Web-page translation system based on content negotiation and Web caching, which makes the translation service transparent to the client, saves the client redundant operations, and improves rendering efficiency for repeatedly requested pages. Analysis and experiments confirm the advantages of this scheme over traditional Web-page translation methods.
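The combination of content negotiation and caching can be sketched as: derive the target language from the Accept-Language header, and key the translation cache on the (page, language) pair so repeated requests skip both the fetch and the translation. This sketch is our illustration of the idea, not the paper's system:

```python
def serve_translated(url, accept_language, cache, fetch, translate):
    """Content negotiation with caching: the client's Accept-Language
    header selects the target language transparently, and repeated
    requests for the same (page, language) pair hit the cache."""
    lang = accept_language.split(",")[0].strip()  # highest-priority language
    key = (url, lang)
    if key not in cache:
        cache[key] = translate(fetch(url), lang)
    return cache[key]
```

Because negotiation uses a standard HTTP header the browser already sends, the client needs no extra action, which is the transparency property claimed in the abstract.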
19.
Implementing Load Balancing in a Distributed Web Server (Times cited: 3; self-citations: 2; others: 3)
A scalable distributed Web-server prototype is implemented and tested. The prototype clusters multiple servers that accept and process TCP connections for load balancing; the hosts' IP addresses are published via Round-Robin DNS, and any host can accept client requests. When a client attempts to establish a connection, the host decides whether to accept the connection and serve it, or to redirect it to another host. Each host keeps the system-load information that the cluster periodically multicasts, and uses the low-overhead IP-in-IP encapsulation technique to rewrite packets and redirect requests. Performance tests show that this technique outperforms RR-DNS, and that redirection based on system-load state works better than stateless redirection.
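The per-connection decision each host makes can be sketched as a threshold policy over the multicast load table. The threshold value and the serve-or-redirect rule below are illustrative assumptions, not the prototype's exact policy:

```python
def handle_connection(self_load, cluster_loads, threshold=0.75):
    """Decide whether to serve a new connection locally or redirect it
    (via IP-in-IP encapsulation in the prototype) to the least-loaded
    host known from the cluster's periodic load multicasts."""
    if self_load <= threshold:
        return "serve"
    target = min(cluster_loads, key=cluster_loads.get)
    return f"redirect:{target}"
```

This is the load-state-aware redirection that the tests found superior to stateless redirection: the choice of target uses fresh cluster-wide load information rather than a fixed rotation.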
20.
A data mining algorithm for generalized Web prefetching (Times cited: 8; self-citations: 0; others: 8)
Nanopoulos A. Katsaros D. Manolopoulos Y. 《IEEE Transactions on Knowledge and Data Engineering》2003,15(5):1155-1169
Predictive Web prefetching refers to the mechanism of deducing the forthcoming page accesses of a client based on its past accesses. In this paper, we present a new context for the interpretation of Web prefetching algorithms as Markov predictors. We identify the factors that affect the performance of Web prefetching algorithms. We propose a new algorithm called WMo, which is based on data mining and is proven to be a generalization of existing ones. It was designed to address their specific limitations and its characteristics include all the above factors. It compares favorably with previously proposed algorithms. Further, the algorithm efficiently addresses the increased number of candidates. We present a detailed performance evaluation of WMo with synthetic and real data. The experimental results show that WMo can provide significant improvements over previously proposed Web prefetching algorithms.
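The baseline that WMo generalizes, a Markov predictor over past accesses, can be sketched as follows. This is the simple first-order predictor, not the paper's WMo algorithm:

```python
from collections import Counter, defaultdict

def train_markov(sessions):
    """First-order Markov predictor: count the page -> next-page
    transitions observed in past client sessions."""
    model = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            model[cur][nxt] += 1
    return model

def prefetch(model, page):
    """Prefetch the most frequently observed successor of the
    current page, or nothing if the page has no history."""
    if not model[page]:
        return None
    return model[page].most_common(1)[0][0]
```

The proxy or server would issue a speculative fetch for the predicted page while the current one is being viewed; WMo's contribution is mining richer patterns than these single-step transitions.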