Similar Documents
 20 similar documents found; search took 937 ms
1.
The most widespread application on the Internet is the WWW service provided by Web servers. If a Web server could be turned into a platform capable of hosting agents, it would greatly advance the use of mobile agents on the Internet. Supporting and deploying mobile agents while retaining existing Web servers raises the problem of integrating mobile agents with Web servers. From an application perspective, this paper presents a middleware-based integration framework that effectively solves this integration problem.

2.
Benchmarking Web Server Performance   Cited by: 11 (self-citations: 0, other citations: 11)
Web server benchmarking is a way to understand how a Web server responds to different workloads, and it greatly aids capacity planning and performance tuning. This paper discusses the principles, methods, difficulties, and solutions of Web server benchmarking; describes the characteristics of Web workloads, the ON/OFF source model, and the browser/server architecture; and presents WSBench, a Web server benchmarking tool developed by the authors. WSBench generates an asymptotically self-similar sequence of HTTP requests and evaluates a Web server at four levels: static documents, dynamic documents without database access, dynamic documents with database access, and a mix of the three following Zipf's law. Results are reported as three metrics: requests per second, bytes per second, and round-trip time. Finally, the paper discusses Web server performance issues and how the metrics measured by WSBench suggest ways to improve server performance.
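As a rough sketch of how a Zipf-law request mix of the kind WSBench generates might be produced (this is an illustrative assumption, not WSBench's actual generator; all function names here are hypothetical):

```python
import random
from collections import Counter

def zipf_weights(n, s=1.0):
    """Unnormalized Zipf popularity weights 1/rank^s for document ranks 1..n."""
    return [1.0 / (rank ** s) for rank in range(1, n + 1)]

def sample_requests(num_docs, num_requests, s=1.0, seed=42):
    """Draw a sequence of document ids whose popularity follows Zipf's law."""
    rng = random.Random(seed)
    weights = zipf_weights(num_docs, s)
    return rng.choices(range(num_docs), weights=weights, k=num_requests)

requests = sample_requests(num_docs=100, num_requests=10000)
counts = Counter(requests)  # top-ranked documents dominate the request stream
```

A real workload generator would additionally shape request interarrival times (for instance with ON/OFF sources, as the abstract mentions) to obtain self-similar traffic, which a simple independent sampler like this does not do.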

3.
1 Introduction. As Internet technology and applications develop rapidly in both breadth and depth, people want not only to share information online but also to share, at large scale, computing power, services, and every other shareable resource over the ubiquitous Internet. However, network bottlenecks and the concentration of access on hot resources give rise to the "World Wide Wait" problem that users so often experience. One effective remedy is Web caching: storing frequently accessed, popular Web content on cache servers placed between Web servers and end users. Web caching is a rapidly developing field that attracts substantial effort from both academia and industry. Properly designed and deployed, Web caching brings many benefits, such as saving…
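The cache-server idea described above can be illustrated with the classic LRU (least recently used) eviction policy. The sketch below is a generic in-memory cache keyed by URL, not anything from the article itself:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache of the kind a Web caching proxy might keep in memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency of use

    def get(self, url):
        if url not in self.store:
            return None              # cache miss: a proxy would fetch from origin
        self.store.move_to_end(url)  # mark as most recently used
        return self.store[url]

    def put(self, url, body):
        self.store[url] = body
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("/index.html", "<html>...</html>")
```

A production cache server would also honor HTTP freshness headers (Expires, Cache-Control) rather than rely on recency alone.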

4.
In 1990, a group of students and faculty at McGill University in Canada developed an application called Archie, which periodically collected information about files scattered across FTP servers for users to search, opening the curtain on modern search engines; at that time the World Wide Web had not yet been born. At the end of 1990 the first Web server (nxoc01.cern.ch) went online, but it was not until 1993 that the first Web-based search…

5.
《网络与信息》2010,(10):38-39
Many companies run their own Web servers, but a newly deployed Web server that has not been access-tested may well fail to publish information as intended. This article walks through the troubleshooting process for a common Web server access failure, so that readers can respond quickly the next time they encounter a fault of the same type.

6.
A Performance Model and Parameter Analysis for Web Servers   Cited by: 3 (self-citations: 0, other citations: 3)
The Web is a worldwide information browsing system built on the client/server model. A Web server is highly integrative: it seamlessly links information of many types (text, images, audio, animation, etc.) with services such as News, FTP, Gopher, and Mail. How to make Web servers faster and improve their quality of service has therefore become a widespread concern. Earlier studies of client/server systems mostly focused on fault tolerance or storage characteristics and rarely considered the performance characteristics of Web server systems. One prior paper proposed only a simple idea for a Web server model on the basis of a few experiments; it considered few factors and was never validated.

7.
The traditional Web model does not allow the server to push data to clients, a problem that Web-based network management must solve. This article presents a management service system model consisting of a Web server, a data server, an intermediate server, communication middleware, and an application server, and discusses the system's design and implementation in Visual C++ in detail.

8.
陈联 《计算机工程与设计》2006,27(11):2054-2056
To improve the interactivity of Web-based distance education, this paper proposes an embedded annotation model for Web pages and discusses its implementation. The model supports direct online annotation of Web pages hosted on any server, precise positioning of annotations, and the sharing and recombination of annotation information.

9.
沈勇  朱超 《计算机与现代化》2012,(7):160-162,170
Embedded Web servers face the same network security problems as traditional Web servers. This paper introduces the architecture and characteristics of embedded Web technology and presents a security-hardening design for an embedded Web server based on the SSL protocol. Suitable SSL protocol implementations and Web server software packages are analyzed and selected to build a secure embedded Web server system, whose security is then tested and analyzed. Experiments show that the scheme guarantees the confidentiality, integrity, and non-repudiation of the information served by an embedded Web server, achieving the intended security hardening.
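As a rough illustration of the kind of SSL/TLS hardening described (the paper targets an embedded software package whose implementation is not shown here), Python's standard ssl module can wrap a plain HTTP server in TLS. The certificate and key file names are placeholders:

```python
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_tls_context(certfile=None, keyfile=None):
    """Server-side TLS context that refuses legacy SSL / early TLS versions."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if certfile:  # placeholder paths, e.g. "server.crt" / "server.key"
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

def serve_https(port=8443, certfile="server.crt", keyfile="server.key"):
    """Wrap a plain HTTP server socket in TLS (blocking; Ctrl-C to stop)."""
    httpd = HTTPServer(("", port), SimpleHTTPRequestHandler)
    ctx = make_tls_context(certfile, keyfile)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()
```

On a resource-constrained embedded device one would typically substitute a lighter TLS stack, but the handshake and certificate configuration follow the same pattern.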

10.
The moment you bring a Web server online, you expose part of your company to the public, at the mercy of others. Remotely exploitable vulnerabilities on a Web server can become your nightmare. One notable example: Microsoft Internet Information Services (IIS) 5.0 left an inglorious record of lost productivity and revenue.

11.
Bin Packing with Rejection in Internet Information Organization and Planning   Cited by: 4 (self-citations: 0, other citations: 4)
何勇  谈之奕  任峰 《计算机学报》2003,26(12):1765-1770
This paper studies the following bin packing problem with rejection: given a supply of identical one-dimensional bins and a set of items, each with two parameters, a length and a penalty, an item may either be packed into a bin, reducing that bin's remaining capacity, or be rejected. If an item must be packed but no open bin has enough remaining capacity, a new bin is opened; if an item is rejected, its penalty is incurred. The goal is to arrange the items so that the number of bins used plus the total penalty of the rejected items is minimized. This new combinatorial optimization problem arises from information organization and planning on intranets. The paper first gives a lower bound on the optimal objective value, usable in a branch-and-bound search for exact solutions. Since the problem is strongly NP-hard, the paper further studies the design and analysis of offline and online approximation algorithms: it presents an offline algorithm with absolute performance ratio 2 and an online algorithm with absolute performance ratio at most 3 and asymptotic performance ratio 2, and discusses lower bounds on achievable performance ratios.
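The pack-or-pay trade-off in the objective (bins used plus penalties paid) can be illustrated with a simple first-fit heuristic. This is only an illustrative sketch, not the paper's ratio-2 offline or ratio-3 online algorithm; the rejection rule (pay the penalty whenever it is cheaper than the fraction of a bin the item would occupy) is our own assumption:

```python
def first_fit_with_rejection(items, bin_size=1.0):
    """items: list of (length, penalty) pairs.
    Returns the heuristic objective: bins used + total penalty paid."""
    bins = []          # current load of each open bin
    penalty_paid = 0.0
    for length, penalty in items:
        # Reject when paying the penalty beats the bin capacity consumed.
        if penalty < length / bin_size:
            penalty_paid += penalty
            continue
        # Otherwise first-fit: place into the first bin with enough room.
        for i, load in enumerate(bins):
            if load + length <= bin_size:
                bins[i] = load + length
                break
        else:
            bins.append(length)  # no bin fits: open a new one
    return len(bins) + penalty_paid
```

For example, with items [(0.5, 2.0), (0.5, 2.0), (0.4, 0.1)] the first two items share one bin and the third is rejected, for an objective of 1.1.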

12.
吕其诚 《软件学报》1992,3(4):19-23
This paper presents an approximation algorithm for the optimal (k, m)-partition of an undirected graph and proves that it is a polynomial-time algorithm producing near-optimal solutions. In the worst case, the algorithm's performance guarantee is bounded by a parameter k that is independent of the input size.

13.
This paper presents an economic lot-sizing problem with perishable inventory and general economies-of-scale cost functions. For the case with backlogging allowed, a mathematical model is formulated, and several properties of the optimal solutions are explored. With the help of these optimality properties, a polynomial time approximation algorithm is developed by a new method. The new method adopts a shift technique to obtain a feasible solution of a subproblem and takes the optimal solution of the subproblem as an approximation solution of our problem. The worst case performance ratio of the approximation algorithm is proven to be (4√2 + 5)/7. Finally, an instance illustrates that the bound is tight.

14.
To cope with hardware resources that cannot keep pace with the growth of WebGIS data, this paper proposes a comprehensive platform for enterprise-level GIS applications based on ArcGIS Server's distributed technology. By using Web Service technology to introduce distributed processing servers into the business logic and spatial information tiers, the WebGIS system achieves truly distributed processing both logically and physically. Combining Web Service and ArcGIS Server technology decomposes a data-intensive WebGIS system into multiple distributed Web services, relieving the pressure on hardware resources. ArcGIS Server is further tuned for minimal server resource usage, minimal data transfer, the most suitable image formats, and the most efficient code execution, improving the performance of the whole system.

15.
Approximate sequencing for variable length tasks   Cited by: 3 (self-citations: 0, other citations: 3)
Based on applications to efficient information gathering over the Web, Czumaj et al. (Algorithms and Data Structures (Vancouver, BC, 1999), Lecture Notes in Computer Science, Vol. 1663, Springer, Berlin, 1999, p. 297) studied the Variable Length Sequencing Problem (VLSP), showed it is NP-complete, and presented a polynomial time algorithm for a very restricted version and an approximation algorithm for a slightly less restricted version. In this paper, we pinpoint the difficulty by showing that it is NP-complete in a strong sense even to approximate the VLSP within a factor of n^k for any fixed integer k. In addition, we show it is NP-hard to find the optimal solution even when all jobs follow the periodic property. Motivated by the NP-hardness of approximating VLSP, we consider an optimization version of maximizing the number of completed tasks and present an approximation algorithm with factor 2, together with a polynomial time algorithm for the optimal solution in the special case where the number of different types of tasks is restricted.

16.
G. J. Wöginger  Z. Yu 《Computing》1992,49(2):151-158
We investigate the problem of preemptively scheduling n jobs on m parallel machines. Whenever a machine switches from processing one job to processing another, a set-up time is necessary. The objective is to find a schedule which minimizes the maximum completion time. For m ≥ 2 machines, this problem obviously is NP-complete. For the case of job-dependent set-up times, Monma and Potts derived a polynomial time heuristic whose worst case ratio tends to 5/3 as the number of machines tends to infinity. In this paper, we examine the case of constant (job- and machine-independent) set-up times. We present a polynomial time approximation algorithm with worst case ratio 7/6 for m = 2 machines and worst case ratio at most 3/2 - 1/(2m) for m ≥ 3 machines. Moreover, for the case m = 2 we construct a fully polynomial time approximation scheme.

17.
We prove that the problem of finding, in an undirected graph with non-negative costs on edges, a minimum cost rooted spanning tree of depth 2 is NP-hard. We then prove that, in a graph of order n, this problem cannot be approximated within better than O(ln n), unless problems in NP can be solved by slightly superpolynomial algorithms. We also prove that the metric version of the problem is MAX-SNP-hard and, consequently, cannot be approximated by polynomial time approximation schemes, unless P = NP. We devise approximation algorithms for several restricted cases and, finally, a polynomial time algorithm approximating the general problem within ratio ln n.

18.
Tradeoffs between time complexities and solution optimalities are important when selecting algorithms for an NP-hard problem in different applications. The distinction between a theoretical upper bound and the actual solution optimality on realistic instances of an NP-hard problem is also a factor in selecting algorithms in practice. We consider the problem of partitioning a sequence of n distinct numbers into a minimum number of monotone (increasing or decreasing) subsequences. This problem is NP-hard, and the number of monotone subsequences can reach ⌈√(2n + 1/4) - 1/2⌉ in the worst case. We introduce a new algorithm, a modified version of the Yehuda-Fogel algorithm, that computes a solution of no more than ⌈√(2n + 1/4) - 1/2⌉ monotone subsequences in O(n^1.5) time. We then perform a comparative experimental study of three algorithms: a known approximation algorithm with approximation ratio 1.71 and time complexity O(n^3), a known greedy algorithm with time complexity O(n^1.5 log n), and our new modified Yehuda-Fogel algorithm. Our results show that the solutions computed by the greedy algorithm and the modified Yehuda-Fogel algorithm are close to those computed by the approximation algorithm, even though the theoretical worst-case error bounds of these two algorithms are not proved to be within a constant factor of the optimal solution. Our study indicates that for practical use the greedy algorithm and the modified Yehuda-Fogel algorithm can be good choices if running time is a major concern.
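The flavor of a greedy approach to this problem can be sketched as follows: scan the sequence and append each number to the first open subsequence it keeps monotone, opening a new subsequence otherwise. This is a generic illustrative sketch, not the paper's O(n^1.5 log n) greedy algorithm or the modified Yehuda-Fogel algorithm:

```python
def greedy_monotone_partition(seq):
    """Partition distinct numbers into monotone (incr. or decr.) subsequences."""
    subs = []  # each subsequence is a list; its direction is set by element two
    for x in seq:
        for s in subs:
            if len(s) == 1:                          # direction not yet fixed
                s.append(x)
                break
            if s[-2] < s[-1] < x or s[-2] > s[-1] > x:
                s.append(x)                          # extends the monotone run
                break
        else:
            subs.append([x])                         # open a new subsequence
    return subs

parts = greedy_monotone_partition([1, 5, 6, 2, 3, 4])
# parts == [[1, 5, 6], [2, 3, 4]]
```

Because only the last two elements of each subsequence are inspected, every append preserves monotonicity, but the number of subsequences this naive rule produces carries no worst-case guarantee.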

19.
We consider the problem of partitioning a graph into k components of roughly equal size while minimizing the capacity of the edges between different components of the cut. In particular we require that, for a parameter ν ≥ 1, no component contains more than ν · n/k of the graph vertices. For k = 2 and ν = 1 this problem is equivalent to the well-known Minimum Bisection problem, for which an approximation algorithm with a polylogarithmic approximation guarantee has been presented in [FK]. For arbitrary k and ν ≥ 2, a bicriteria approximation ratio of O(log n) was obtained by Even et al. [ENRS1] using the spreading-metrics technique. We present a bicriteria approximation algorithm that for any constant ν > 1 runs in polynomial time and guarantees an approximation ratio of O(log^1.5 n) (for a precise statement of the main result see Theorem 6). For ν = 1 and k ≥ 3 we show that no polynomial time approximation algorithm can guarantee a finite approximation ratio unless P = NP.

20.
Vertex cover is one of the best known NP-hard combinatorial optimization problems. Experimental work has claimed that evolutionary algorithms (EAs) perform fairly well for the problem and can compete with problem-specific ones. A theoretical analysis that explains these empirical results is presented concerning the random local search algorithm and the (1+1)-EA. Since it is not expected that an algorithm can solve the vertex cover problem in polynomial time, a worst case approximation analysis is carried out for the two considered algorithms and comparisons with the best known problem-specific ones are presented. By studying instance classes of the problem, general results are derived. Although arbitrarily bad approximation ratios of the (1+1)-EA can be proved for a bipartite instance class, the same algorithm can quickly find the minimum cover of the graph when a restart strategy is used. Instance classes where multiple runs cannot considerably improve the performance of the (1+1)-EA are considered and the characteristics of the graphs that make the optimization task hard for the algorithm are investigated and highlighted. An instance class is designed to prove that the (1+1)-EA cannot guarantee better solutions than the state-of-the-art algorithm for vertex cover if worst cases are considered. In particular, a lower bound for the worst case approximation ratio, slightly less than two, is proved. Nevertheless, there are subclasses of the vertex cover problem for which the (1+1)-EA is efficient. It is proved that if the vertex degree is at most two, then the algorithm can solve the problem in polynomial time.
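The (1+1)-EA analyzed above is straightforward to state: keep a single bitstring (one bit per vertex), flip each bit independently with probability 1/n, and accept the offspring if it is no worse. Below is a minimal sketch with a standard penalty-based fitness in which uncovered edges dominate cover size; the fitness function is our assumption, not necessarily the paper's:

```python
import random

def fitness(bits, edges, n):
    """Uncovered edges are penalized so heavily that any cover beats any non-cover."""
    uncovered = sum(1 for u, v in edges if not (bits[u] or bits[v]))
    return uncovered * (n + 1) + sum(bits)

def one_plus_one_ea(n, edges, steps=5000, seed=1):
    """(1+1)-EA: flip each bit with probability 1/n, keep the child if not worse."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(parent, edges, n)
    for _ in range(steps):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        f = fitness(child, edges, n)
        if f <= best:  # accepting equal fitness allows plateau moves
            parent, best = child, f
    return parent

# Path graph 0-1-2-3: a minimum vertex cover has size 2, e.g. {1, 2}.
cover = one_plus_one_ea(4, [(0, 1), (1, 2), (2, 3)])
```

On this tiny instance the EA reliably reaches a small cover; the paper's point is that on adversarial instance classes such convergence can be arbitrarily bad without restarts.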


Copyright©北京勤云科技发展有限公司  京ICP备09084417号