Similar Documents
20 similar documents found.
1.
陈向东 《计算机科学》2015,42(6):185-188, 215
Current research on self-adaptive software focuses largely on context awareness, quality-of-service modeling, and programming languages, leaving the process and principles of adaptation insufficiently explored. Focusing on software architecture, this paper studies the dynamic adaptation process and proposes an architecture reconfiguration method that adjusts the architecture by adding, removing, and replacing components and connectors. An experiment on dynamically adapting server-pool size in a cloud computing setting shows that dynamic adaptation can improve system trustworthiness and reduce operating cost.

2.
This paper analyzes a model of the component resource demands and resource dependencies that affect quality of service, and proposes a self-adaptive middleware framework. The framework dynamically senses load changes and adaptively adjusts server configuration parameters to maintain application quality of service, using a backtracking algorithm to search for an optimal configuration that satisfies the performance requirements. Experiments with an information query system as the test case show that the framework improves application performance.
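The backtracking configuration search this abstract describes can be sketched roughly as follows; the parameter space (thread count, cache size), the feasibility test, and the cost function below are illustrative assumptions, not the paper's actual model:

```python
# Exhaustive backtracking search over a small discrete configuration
# space, keeping the lowest-cost configuration that satisfies the
# performance requirement. All numbers here are made up for the sketch.
def backtrack_best(options, feasible, cost, partial=()):
    """Return the lowest-cost full configuration accepted by `feasible`."""
    if len(partial) == len(options):
        return partial if feasible(partial) else None
    best = None
    for choice in options[len(partial)]:
        candidate = backtrack_best(options, feasible, cost, partial + (choice,))
        if candidate is not None and (best is None or cost(candidate) < cost(best)):
            best = candidate
    return best


# Example: pick (threads, cache_mb) so that threads*10 + cache_mb >= 60,
# minimizing the resource cost threads + cache_mb.
options = [(2, 4, 8), (16, 32, 64)]
best = backtrack_best(
    options,
    feasible=lambda c: c[0] * 10 + c[1] >= 60,
    cost=lambda c: c[0] + c[1],
)
```

A real middleware would derive the feasibility test from a performance model of the application rather than a fixed inequality.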

3.
Adaptive Database Connection Pool Technology in NVMS
This paper analyzes the shortcomings of current mainstream database connection pools and, targeting the application requirements of a non-motorized vehicle management system (NVMS), proposes an adaptive connection pool scheme. The scheme dynamically adjusts its XML parameters according to the scale of the application, selects the corresponding management strategy, and reads the optimized configuration parameters at each initialization, making the pool self-adaptive. Comparative tests show that the scheme manages connection resources more effectively and improves database efficiency.
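A minimal sketch of a connection pool that reads its sizing parameters from XML at initialization, in the spirit of the scheme above; the element names (`min_size`, `max_size`) and the pool API are assumptions for illustration:

```python
import queue
import xml.etree.ElementTree as ET


class AdaptivePool:
    """Connection pool sized from an XML configuration document."""

    def __init__(self, xml_config: str, connect):
        root = ET.fromstring(xml_config)
        self.min_size = int(root.findtext("min_size"))
        self.max_size = int(root.findtext("max_size"))
        self._connect = connect
        self._idle = queue.Queue()
        # Pre-create the minimum number of connections.
        for _ in range(self.min_size):
            self._idle.put(self._connect())
        self.total = self.min_size

    def acquire(self):
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            if self.total < self.max_size:  # grow up to the configured cap
                self.total += 1
                return self._connect()
            return self._idle.get()  # block until a connection is released

    def release(self, conn):
        self._idle.put(conn)


config = "<pool><min_size>2</min_size><max_size>5</max_size></pool>"
pool = AdaptivePool(config, connect=lambda: object())
c = pool.acquire()
pool.release(c)
```

An adaptive pool as described in the abstract would additionally rewrite these XML parameters between runs based on the observed application scale.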

4.
Taking a campus network as an example, the network application server design involves a WEB server, an FTP server, a DNS server, a DHCP server, and a MAIL server, all configured on a single computer running Windows 2003 Server. This paper gives a detailed introduction to the functions and configuration steps of these servers.

5.
Network environment: our organization's LAN is briefly described as follows. The server is an HP LH3 running Windows NT 4.0 SP3, providing an internal WEB service through IIS 3.0; DHCP is not used, and TCP/IP parameters are configured manually. The server's IP address is 198.88.188.1 with subnet mask 255.255.255.0; the client machines run Windows 98, with IP addresses assigned from 198.88.188.2 onward and the same subnet mask 255.255.255.0. The machines in each department connect to hubs over twisted-pair cable, and the hubs in turn connect to the server through a switch (3COM SuperStack II Switch 1100; 3C16950).

6.
Starting from the Domain Name Service (DNS) and the practical need of ISPs to provide enterprises with a web presence on the Internet, this paper explains in detail the necessity of building virtual WEB servers, the supporting network service they require (virtual domain name service), and the supporting techniques (virtual host / virtual port), and gives configuration examples corresponding to the WEB server's operating modes.

7.
富月  杜琼 《自动化学报》2018,44(7):1250-1259
For a class of industrial operational processes with unknown dynamics, an adaptive control method based on neural-network compensation and multi-model switching is proposed. To fully account for the influence of lower-level tracking errors on the optimization and control of the whole operational process, the lower-level pole-placement control system is combined with the upper operational-layer dynamic model to form the dynamic model of the operational process. For this model with unknown parameters, a multi-model adaptive control algorithm is designed, consisting of a linear robust adaptive controller, a nonlinear adaptive controller based on neural-network compensation, and a switching mechanism. A recursive least-squares algorithm with a dead zone identifies the controller parameters online, overcoming the slow convergence and sensitivity to initial parameter values of projection algorithms. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.

8.
To address the problem that security parameters in current proactive secret sharing systems are set by empirical methods, this paper combines security detection techniques with proactive secret sharing schemes and proposes a dynamically adaptive secure architecture and response method for such systems. Using the system's security audit logs, and based on an assessment of the security risk posed by mobile attacks, it analyzes the gradual transition of the system's shareholding server group from an initially secure state to a compromised one, builds a state-transition model of the system, and gives quantitative methods for analyzing and evaluating its security. By comparing different threshold configurations, intrusion rates, and security thresholds, it describes the general steps for maintaining the security of a proactive secret sharing system: dynamically adjusting the running configuration to achieve adaptive control and management of system security. Concrete application steps for the method are given and its effectiveness is verified.

9.
白凤妮 《福建电脑》2009,25(2):164-164
To enable remote monitoring, over the oilfield enterprise network, of the equipment running at a combined oil-gas station, industrial PCs collect the equipment parameters centrally and are connected directly to a WEB server over optical fiber to form a LAN; the WEB server then publishes the running data on the network, achieving remote data monitoring. This paper elaborates the design of the system in terms of its overall architecture, data transmission, the remotely monitored equipment parameters, and the functions of the remote monitoring system.

10.
Research on a Runtime Monitoring Method for Self-Adaptive Reconfigurable Software Systems
唐姗  李丽萍  谭文安 《计算机科学》2013,40(11):191-196
Runtime monitoring, an important research topic in realizing self-adaptive software, has become an important design principle in many software engineering methods for improving the trustworthiness of software products. To address the problem that many existing monitoring methods entangle the system's monitoring logic with its business logic, this paper proposes a requirements-model-driven runtime monitoring method for self-adaptive reconfigurable software. Based on the system's goal model and property specifications, it describes how to build the system's monitoring model, generate and weave the monitoring code, and perform runtime diagnosis and adaptive reconfiguration. The method monitors, diagnoses, and adaptively reconfigures the running system through an external unit independent of the application, which eases system maintenance and management and better fits the idea of software reuse.

11.
We have implemented an efficient and scalable web cluster named LVS-CAD/FC (i.e. LVS with Content-Aware Dispatching and File Caching). In LVS-CAD/FC, a kernel-level one-way content-aware web switch based on TCP Rebuilding is implemented to examine and distribute the HTTP requests from clients to web servers, and the fast Multiple TCP Rebuilding is implemented to efficiently support persistent connections. In addition, a file-based web cache stores a small set of the most frequently accessed web files in server RAM to reduce disk I/Os, and a light-weight redirect method is developed to efficiently redirect requests to this cache. In this paper, we further propose new policies for content-based, workload-aware request distribution, in which the web switch considers both the content of requests and workload characterization when dispatching. In particular, web files with higher access frequencies are duplicated in more servers' file-based caches, so that hot web files can be served by more servers. Our goals are to improve cluster performance by obtaining better memory utilization and increasing cache hit rates while achieving load balancing among servers. Experimental results from a practical implementation on Linux show that LVS-CAD/FC is efficient and scales well. Moreover, LVS-CAD/FC with the proposed policies achieves 66.89% better performance than the Linux Virtual Server with a content-blind web switch.

12.
Jhonny Mertz  Ingrid Nunes 《Software》2018,48(6):1218-1237
Meeting performance and scalability requirements while delivering services is a critical issue in web applications. Recently, the latency and cost of Internet-based services have been encouraging the use of application-level caching to continue satisfying users' demands and improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern, unrelated to the application business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. Among all the issues involved with application-level caching, the decision of what should be cached must frequently be adjusted to cope with the application's evolution and usage, making it a challenging task. In this paper, we introduce an automated caching approach that identifies application-level cache content at runtime by monitoring system execution and adaptively managing caching decisions. Our approach is implemented as a framework that can be seamlessly integrated into new and existing web applications. In addition to reducing the effort required from developers to build a caching solution, an empirical evaluation showed that our approach significantly speeds up applications and improves hit ratios, with improvements ranging from 2.78% to 17.18%.
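The runtime decision of what to cache can be illustrated with a small sketch: monitor how often each key is requested and only start caching a result once its observed frequency crosses a threshold. The threshold and the lack of eviction are simplifying assumptions; the paper's framework makes these decisions automatically and adaptively:

```python
from collections import Counter


class AdaptiveCache:
    """Cache a computation's results only for keys requested repeatedly."""

    def __init__(self, compute, threshold: int = 2):
        self._compute = compute
        self._threshold = threshold
        self._hits = Counter()
        self._cache = {}
        self.computations = 0  # how many times the real computation ran

    def get(self, key):
        self._hits[key] += 1
        if key in self._cache:
            return self._cache[key]
        self.computations += 1
        value = self._compute(key)
        # Cache only keys that have proven to be requested repeatedly.
        if self._hits[key] >= self._threshold:
            self._cache[key] = value
        return value


cache = AdaptiveCache(compute=lambda k: k * 10)
results = [cache.get(7) for _ in range(3)]
```

A production version would also track hit ratios per key and un-cache content whose access pattern changes, which is the adaptive part the abstract emphasizes.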

13.
A Separated Dispatching Strategy for Web Cluster Servers
Using queueing theory, this paper analyzes the relationship between the overall performance of a Web cluster and its request dispatching strategy. The conclusion is that, when the cluster is not overloaded, a separated dispatching strategy in which some back-end servers handle only static requests while the others handle only dynamic requests outperforms a mixed strategy in which every back-end server handles both static and dynamic requests. Tests with the SPECweb99 benchmark further confirm this: with a load parameter of 120 connections, a Web cluster using the separated strategy completed 63 connections, whereas one using the mixed strategy completed only 36, a performance improvement of 22.5%.
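The separated strategy can be sketched in a few lines: static requests go only to one subset of back-end servers, dynamic requests only to the other. The server names, the round-robin choice within each group, and the suffix-based static/dynamic classification are illustrative assumptions:

```python
import itertools


class SeparatedDispatcher:
    """Route static and dynamic requests to disjoint server groups."""

    def __init__(self, static_servers, dynamic_servers):
        self._static = itertools.cycle(static_servers)
        self._dynamic = itertools.cycle(dynamic_servers)

    def dispatch(self, path: str) -> str:
        # Treat requests for script resources or queries as dynamic,
        # everything else as static content.
        is_dynamic = path.endswith((".php", ".cgi")) or "?" in path
        return next(self._dynamic if is_dynamic else self._static)


d = SeparatedDispatcher(["s1", "s2"], ["d1"])
targets = [d.dispatch(p) for p in ["/index.html", "/search.cgi?q=x", "/logo.png"]]
```

The intuition from the queueing analysis is that separating the two request classes keeps the short, cache-friendly static jobs from queueing behind long dynamic ones.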

14.
Cookies, generated by a Web server and stored as text on the user's hard disk or in memory, are the principal means of implementing authentication in Web applications. This paper analyzes how the cookie authentication mechanism works and its characteristics, discusses the security threats this mechanism is exposed to and its security requirements, and presents methods and measures for implementing secure cookie authentication.

15.
To reduce environmental impact, it is essential to make data centers green by turning off servers and tuning their speeds to the instantaneous offered load, that is, determining the dynamic configuration of web server clusters. We model the problem of selecting which servers will be on and finding their speeds through mixed integer programming; we also show how to combine such solutions with control theory. As a proof of concept, we implemented this dynamic configuration scheme in a web server cluster running Linux, with soft real-time requirements and QoS control, in order to guarantee both energy efficiency and a good user experience. In this paper, we show the performance of our scheme compared to other schemes, a comparison of centralized and distributed approaches to QoS control, and a comparison of schemes for choosing server speeds.
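For a tiny cluster, the on/off-and-speed selection problem can be imitated by brute force: enumerate which servers stay on and at what speed, and keep the cheapest configuration that covers the load. The speed levels, capacity, and power numbers below are made-up assumptions; the paper solves this with mixed integer programming, which this exhaustive sketch only approximates:

```python
import itertools

SPEEDS = [0.0, 0.5, 1.0]          # 0.0 means the server is off
CAPACITY_PER_SPEED = 100.0        # requests/s one server handles at speed 1.0
IDLE_POWER, DYNAMIC_POWER = 50.0, 100.0


def power(speed: float) -> float:
    """Power draw of one server: zero when off, idle + speed-proportional when on."""
    return 0.0 if speed == 0.0 else IDLE_POWER + DYNAMIC_POWER * speed


def configure(num_servers: int, load: float):
    """Cheapest (total_power, speeds) assignment whose capacity covers the load."""
    best = None
    for speeds in itertools.product(SPEEDS, repeat=num_servers):
        capacity = sum(s * CAPACITY_PER_SPEED for s in speeds)
        if capacity < load:
            continue  # infeasible: would violate the QoS requirement
        total_power = sum(power(s) for s in speeds)
        if best is None or total_power < best[0]:
            best = (total_power, speeds)
    return best


best_power, best_speeds = configure(3, load=120.0)
```

At realistic cluster sizes this enumeration explodes, which is precisely why the paper formulates the problem as a MIP and pairs it with a feedback controller.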

16.
Currently, non-functional requirements (NFRs) consume a considerable part of the software development effort. The good news is that most of them appear time and again during system development and, luckily, their solutions can often be described as a pattern, independently of any specific application or domain. Proof of this is found in current application servers and middleware platforms, which can provide configurable prebuilt services for managing some of these crosscutting concerns, or aspects. Nevertheless, these reusable pattern solutions present two shortcomings, among others: (1) they need to be applied manually; and (2) most of them do not use aspect-orientation, and, since NFRs are often crosscutting concerns, this leads to scattered and tangled representations of these concerns. Our approach aims to overcome these limitations by: (1) using model-driven techniques to reduce the development effort associated with systematically applying reusable solutions for satisfying NFRs; and (2) using aspect-orientation to improve the modularization of these crosscutting concerns. Regarding the first contribution, since the portion of a system related to NFRs is usually significant, the reduction in the development effort associated with these NFRs is also significant. Regarding the second, the use of aspect-orientation improves the maintenance and evolution of the non-functional requirements that are managed as aspects. An additional contribution of our work is to define a mapping and transition from aspectual requirements to aspect-oriented software architectures, which, in turn, contributes to the general issue of systematically relating requirements to architecture. Our approach is illustrated by applying it to a Toll Gate case study.

17.
Proxy caching is an effective approach to reduce the response latency of client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web: emerging web applications require an increasing amount of server-side processing, yet current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform the required computation and provide faster responses. A weblet engine is developed to provide the execution environment on proxy servers as well as web servers, facilitating uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.

18.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen  Kaplan 《Algorithmica》2002,33(3):300-334
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit from recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

19.
Aspect-Oriented Requirements Engineering focuses on the identification and modularisation of crosscutting concerns at early stages. There are different approaches in the requirements engineering community for dealing with crosscutting concerns, introducing the benefits of aspect-oriented approaches at these early stages of development. However, most of these approaches rely on Natural Language Processing techniques for aspect identification in textual documents and thus lack a unified process that generalises their application to other requirements artefacts such as use case diagrams or viewpoints. In this paper, we propose a process for mining early aspects, i.e. identifying crosscutting concerns at the requirements level. This process is based on a crosscutting pattern in which two different domains are related; these two domains may represent different artefacts of the requirements analysis, such as text and use cases, or concerns and use cases. The process uses syntactical and dependency-based analyses to automatically identify crosscutting concerns at the requirements level. Validation of the process is illustrated by applying it to several systems and comparing it with other early-aspects tools. A set of aspect-oriented metrics is also used in this validation.

20.
The growth of web-based applications in business and e-commerce is building up demand for high-performance web servers with better throughput and lower user-perceived latency. This demand is leading to a widespread substitution of powerful single servers by robust newcomers, cluster web servers, in many enterprise companies. In this respect, load-balancing algorithms play an important role in boosting the performance of cluster servers. Previous load-balancing algorithms, designed for handling static content in web services, suffer significant performance degradation under dynamic and database-driven workloads. We therefore propose an approximation-based load-balancing algorithm with admission control for cluster-based web servers. Since it is difficult to accurately determine the loads of web servers through feedback from distributed agents on them, we propose an analytical model of a web server to estimate each server's load. To achieve this, the algorithm classifies requests based on their service times, tracks the number of outstanding requests in each class on each web server node, and uses their resource demands to dynamically estimate the load of each node. A proportional integral (PI) controller from control theory handles the model's error. The estimated available capacity of each web server is then used for load-balancing and admission-control decisions. Implementation results with a standard benchmark confirm the effectiveness of the proposed scheme, which improves both the mean response time and the throughput of the cluster compared to rival load-balancing algorithms, and also avoids situations in which the cluster is overloaded, even when request rates exceed the cluster's capacity.
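The error-handling step described above can be sketched as a PI correction of the analytically estimated load toward the (delayed, noisy) measured load. The gains below are illustrative assumptions, not values from the paper:

```python
class PILoadCorrector:
    """Proportional-integral correction of a model-based load estimate."""

    def __init__(self, kp: float = 0.5, ki: float = 0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0  # accumulated estimation error

    def correct(self, estimated: float, measured: float) -> float:
        error = measured - estimated
        self.integral += error
        # Proportional term reacts to the current error; the integral
        # term removes persistent bias in the analytical model.
        return estimated + self.kp * error + self.ki * self.integral


pi = PILoadCorrector()
corrected = pi.correct(estimated=0.6, measured=0.8)
```

The corrected load, rather than the raw model output, would then feed the load-balancing and admission-control decisions.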
