Similar Documents
A total of 20 similar documents were found (search time: 15 ms)
1.
The most critical property exhibited by a heavy-tailed workload distribution (found in many WWW workloads) is that a very small fraction of tasks make up a large fraction of the workload, making the load very difficult to distribute in a distributed system. Load balancing and load sharing are the two predominant load distribution strategies used in such systems. Load sharing generally has better response time than load balancing because the latter can incur excessive overheads in selecting servers and partitioning tasks. We therefore further explored the least-loaded-first (LLF) load sharing approach and found two important limitations: (a) LLF does not consider the order of processing, and (b) when it assigns a task, LLF does not consider the processing capacity of servers. The high task size variation that exists in heavy-tailed workloads often causes smaller tasks to be severely delayed by large tasks. This paper proposes a size-based approach, called least flow-time first (LFF-SIZE), which reduces the delay caused by size variation while maintaining a balanced load in the system. LFF-SIZE takes the relative processing time of a task into account and dynamically assigns a task to the fittest server, i.e., the server with a lighter load and higher processing capacity. LFF-SIZE also uses a multi-section queue to separate larger tasks from smaller ones. This arrangement effectively reduces the delay of smaller tasks by larger ones, as small tasks are given a higher priority to be processed. Performance results from the LFF-SIZE implementation show a substantial improvement over existing load sharing and static size-based approaches under realistic heavy-tailed workloads.
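The dispatch idea in this abstract can be sketched in a few lines. The function names, section boundaries, and server fields below are illustrative assumptions, not the paper's implementation: a task goes to the "fittest" server (lowest load relative to capacity once the task is added), and each server keeps a multi-section priority queue so small tasks are served before large ones.

```python
import heapq

def fittest_server(servers, task_size):
    # Pick the server whose estimated flow time for this task is lowest:
    # queued work plus the new task, scaled by the server's processing capacity.
    return min(servers, key=lambda s: (s["load"] + task_size) / s["capacity"])

def dispatch(tasks, servers, section_bounds=(10.0, 100.0)):
    """Assign each task to the fittest server; each server holds a
    multi-section (priority) queue so smaller tasks run before larger ones."""
    for size in tasks:
        s = fittest_server(servers, size)
        s["load"] += size
        # Section index doubles as priority: smaller tasks get a lower index.
        section = sum(size > b for b in section_bounds)
        heapq.heappush(s["queue"], (section, size))
    return servers

servers = [{"load": 0.0, "capacity": c, "queue": []} for c in (1.0, 2.0)]
dispatch([5.0, 250.0, 8.0], servers)
```

With these toy numbers, the large 250-unit task lands on the high-capacity server, and the small tasks keep priority ahead of it in the queue.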

2.
The existing corpus of scientific knowledge in the field of database systems (DBS) has been concerned mostly with such problems as database technologies and system development methodologies. Relatively few efforts have been devoted to the problem of adapting an ongoing DBS in a systematic fashion. Notwithstanding the lack of sufficient prior knowledge, this adaptation problem is critical in DBS management, since a DBS should really be conceived as an evolving structure, rather than a stable one-shot phenomenon. Toward this end, this paper proposes a reliability-based decision support framework for evolving a DBS systematically. Both user satisfaction with the DBS and its usage pattern are monitored on a real-time basis and used for controlling it adaptively. Chance-constrained models are proposed to characterize suitable decision rules for DBS evolution decisions ranging from file reorganization to DBS restructuring. A four-schema architecture is also proposed for achieving data model independence in a DBS, thereby facilitating the control and evolution of the DBS.

3.
In a number of real-time applications such as target tracking, precise workloads are unknown a priori but may dynamically vary, for example, based on the changing number of targets to track. It is important to manage the CPU utilization, via feedback control, to avoid severe overload or underutilization even in the presence of dynamic workloads. However, it is challenging to model a real-time system for feedback control, as computer systems cannot be modeled via the laws of physics. In this paper, we present a novel closed-loop approach for utilization control based on formal fuzzy logic control theory, which is very effective in supporting the desired performance in a nonlinear dynamic system without requiring a system model. We mathematically prove the stability of the fuzzy closed-loop system. Further, in a real-time kernel, we implement and evaluate our fuzzy logic utilization controller as well as two existing utilization controllers based on linear and model predictive control theory for an extensive set of workloads. Our approach supports the specified average utilization set-point, while showing the best transient performance in terms of utilization control among the tested approaches.
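One control step of a fuzzy utilization controller like the one described can be sketched as follows. This is a generic Mamdani-style sketch with made-up membership functions, rule base, and singleton outputs; the paper's actual controller design is not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error, derror):
    """One step: fuzzify utilization error and its change, fire rules,
    defuzzify (weighted average) into a task-rate adjustment in [-1, 1]."""
    neg = {"e": tri(error, -1.0, -0.5, 0.0), "de": tri(derror, -1.0, -0.5, 0.0)}
    zero = {"e": tri(error, -0.5, 0.0, 0.5), "de": tri(derror, -0.5, 0.0, 0.5)}
    pos = {"e": tri(error, 0.0, 0.5, 1.0), "de": tri(derror, 0.0, 0.5, 1.0)}
    # Rule base: raise admitted load when utilization is below the set-point
    # (positive error), lower it when above; singleton outputs -1, 0, +1.
    rules = [
        (min(pos["e"], pos["de"]), +1.0),
        (min(zero["e"], zero["de"]), 0.0),
        (min(neg["e"], neg["de"]), -1.0),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

The appeal noted in the abstract is visible here: the rule base encodes qualitative control knowledge directly, with no plant model required.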

4.
In groupware, users must communicate about their intentions and maintain common knowledge via communication channels that are explicitly designed into the system. Depending upon the task, generic communication tools like chat or a shared whiteboard may not be sufficient to support effective coordination. We have previously reported on a methodology that helps the designer develop task-specific communication tools, called coordinating representations, for groupware systems. Coordinating representations lend structure and persistence to coordinating information. We have shown that coordinating representations are readily adopted by a user population, reduce coordination errors, and improve performance in a domain task. As we show in this article, coordinating representations present a unique opportunity to acquire user information in collaborative, user-adapted systems. Because coordinating representations support the exchange of coordinating information, they offer a window onto task- and coordination-specific knowledge that is shared by users. Because they add structure to communication, the information that passes through them can be easily exploited by adaptive technology. This approach provides a simple technique for acquiring user knowledge in collaborative, user-adapted systems. We document our application of this approach to an existing groupware system. Several empirical results are provided. First, we show how information that is made available by a coordinating representation can be used to infer user intentions. We also show how this information can be used to mine free-text chat for intent information, and show that this information further enhances intent inference. Empirical data show that an automatic plan generation component, which is driven by information from a coordinating representation, reduces coordination errors and cognitive effort for its users. Finally, our methodology is summarized, and we present a framework for comparing our approach to other strategies for user knowledge acquisition in adaptive systems.

5.
A model for assessing workloads called overall workload level (OWL) was developed by introducing linguistic variable sets and applying the analytic hierarchy process (AHP) to estimate the external workload imposed on a human operator in man–machine systems. To do this, a five-point linguistic variable set scale was constructed and their hierarchical prioritization procedures were set up. The task and workplace variables (e.g., physical, environmental, postural, and mental job demand workloads) which can obtain the operator's perception of workload are selected as workload factors and the AHP technique is used to collect different weights. Finally, OWL is calculated using a computer-assisted system to determine the level of overall workload impinged on an operator. The OWL was implemented in an actual industrial environment from a physiological and epidemiological viewpoint to determine the validity of the model. Furthermore, the results obtained by applying OWL were compared to the results obtained by applying the overall workload (OW) of the NASA task load index (TLX). The results show that there is a close linear relationship among the physiological measurements, the severity of injury and illness rates, OW, and OWL. Thus, this approach can be used for problem identification and for solving widespread occupational workloads.

Relevance to industry

The determination of workloads imposed on a human operator plays an important role in designing and evaluating an existing man–machine system. Therefore, a model for assessing workloads was developed to estimate the external workload imposed on a human operator in man–machine systems. This model can be used for problem identification and for solving widespread occupational workload.


6.
Research and Discussion on Web Database Security Mechanisms (total citations: 2; self-citations: 0; citations by others: 2)
Web database systems play an important role in people's daily lives. Starting from Web database access technologies, and after analyzing the security threats that databases face, this paper studies and discusses Web database security mechanisms from several aspects: building a security-mechanism model, firewall technology, data encryption, and backup and recovery.

7.
Diagnostic support systems that help solve problems in open and weak theory domains need to be context-sensitive in order to exhibit flexible and efficient behaviour. This paper presents a task-oriented methodology for analysing and modeling contextual knowledge at the knowledge level. We present a context-sensitive diagnosis approach (ConSID) which clarifies the connection between content and process knowledge. The former embodies the domain model, while the latter embodies the task and method models. We present a prototypical system, ConSID-Creek, that applies the ConSID approach to the medical diagnostic domain. We illustrate how the system integrates case-based and explanation-based reasoning paradigms when realizing the abductive subtask of the overall diagnostic task.

8.
The type of the workload on a database management system (DBMS) is a key consideration in tuning the system. Allocations for resources such as main memory can be very different depending on whether the workload type is Online Transaction Processing (OLTP) or Decision Support System (DSS). A DBMS also typically experiences changes in the type of workload it handles during its normal processing cycle. Database administrators must therefore recognize the significant shifts of workload type that demand reconfiguring the system in order to maintain acceptable levels of performance. We envision intelligent, autonomic DBMSs that have the capability to manage their own performance by automatically recognizing the workload type and then reconfiguring their resources accordingly. In this paper, we present an approach to automatically identifying a DBMS workload as either OLTP or DSS. Using data mining techniques, we build a classification model based on the most significant workload characteristics that differentiate OLTP from DSS and then use the model to identify any change in the workload type. We construct and compare classifiers built from two different sets of workloads, namely the TPC-C and TPC-H benchmarks and the Browsing and Ordering profiles from the TPC-W benchmark. We demonstrate the feasibility and success of these classifiers with TPC-generated workloads and with industry-supplied workloads.
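The kind of decision rule such a classifier produces can be illustrated with a toy version. The paper mines its model from TPC workload traces; the feature names and thresholds below are illustrative stand-ins only, chosen to reflect the usual OLTP/DSS contrast (short update-heavy queries vs. long scan-heavy read-mostly queries):

```python
def classify_workload(stats):
    """Classify a DBMS workload snapshot as OLTP, DSS, or MIXED using
    hand-picked (hypothetical) features and thresholds; a real classifier
    would learn these from benchmark traces via data mining."""
    # DSS signature: large scans, almost no writes.
    if stats["avg_rows_per_query"] > 10_000 and stats["write_ratio"] < 0.05:
        return "DSS"
    # OLTP signature: short queries with a substantial update component.
    if stats["avg_query_ms"] < 50 and stats["write_ratio"] > 0.2:
        return "OLTP"
    return "MIXED"
```

An autonomic DBMS would evaluate such a rule periodically and trigger reconfiguration (e.g., buffer pool reallocation) when the predicted workload type shifts.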

9.
With the vision of "Internet as a computer", complex software-intensive systems running on the Internet, or "Internetwares", can also be divided into multiple layers. Each layer has a different focus, implementation technique, and stakeholders. Monitoring and adaptation of such multilayer systems are challenging, because the mismatches and adaptations are interrelated across the layers. This interrelation makes it difficult to find out: 1) When a system change causes mismatches in one layer, how to identify all the cascaded mismatches on the other layers? 2) When an adaptation is performed at one layer, how to find out all the complementary adaptations required in other layers? This paper presents a model-based approach towards cross-layer monitoring and adaptation of multilayer systems. We provide standard meta-modelling languages for system experts to specify the concepts and constraints separately for each layer, as well as the relations among the concepts from different layers. Within each individual layer, we use run-time models to represent the system state specific to this layer, monitor the systems by evaluating model changes according to specified constraints, and support manual or semi-automated adaptation by modifying the models. When a change happens in the run-time model for one layer, either caused by system changes or by the adaptation, we synchronize the models for other layers to identify cascaded mismatches and complementary adaptations across the layers. We illustrate the approach on a simulated crisis management system, and are using it on a number of ongoing projects.

10.
Reverse engineering, also called reengineering, is used to modify systems that have functioned for many years, but which can no longer accomplish their intended tasks and, therefore, need to be updated. Reverse engineering can support the modification and extension of the knowledge in an already existing system. However, this can be an intricate task for a large, complex and poorly documented knowledge-based system. The rules in the knowledge base must be gathered, analyzed and understood, but also checked for verification and validation. We introduce an approach that uses reverse engineering for the knowledge in knowledge-based systems. The knowledge is encapsulated in rules, facts and conclusions, and in the relationships between them. Reverse engineering also collects functionality and source code. The outcome of reverse engineering is a model of the knowledge base, the functionality and the source code connected to the rules. These models are presented in diagrams using a graphic representation similar to Unified Modeling Language and employing ontology. Ontology is applied on top of rules, facts and relationships. From the diagrams, test cases are generated during the reverse engineering process and adopted to verify and validate the system.

11.
This paper presents a new hybrid improvement heuristic approach to simple straight and U-type assembly line balancing problems, based on the ideas of adaptive learning and simulated annealing. The proposed approach uses a weight parameter to perturb the task priorities of a solution to obtain improved solutions. The weight parameters are then modified using a learning strategy. The maximization of line efficiency (i.e., the minimization of the number of stations) and the equalization of workloads among stations (i.e., the minimization of the smoothness index or the minimization of the variation of workloads) are considered as the performance criteria. In order to clarify the proposed solution methodology, a well-known problem taken from the literature is solved. A computational study is conducted by solving a large number of benchmark problems available in the literature to compare the performance of the proposed approach to existing methods such as simulated annealing and genetic algorithms. Some test instances taken from the literature are also solved by the proposed approach. The results of the computational study show that the proposed approach performs quite effectively. It also yields optimal solutions for all test problems within a short computational time.

12.
Yin Fan, Zhang Jianming. 《计算机工程与设计》 (Computer Engineering and Design), 2005, 26(10): 2802-2804, 2827
Current collaborative learning systems rarely take into account learners' personalized attributes such as knowledge level, cognitive characteristics, and interests. To address this, a layered multi-agent learning system model based on learners' knowledge level is proposed. Combining intelligent-agent technology, the model realizes a layered system structure by delimiting learners' knowledge levels, and it better resolves problems that are widespread yet unsolved in existing learning systems: disorder and confusion, group-interaction effectiveness, personalization, and system communication overhead.

13.
The development of highly effective heuristics for search problems is a difficult and time-consuming task. We present a knowledge acquisition approach to incrementally model expert search processes. Although experts do not normally have complete introspective access to that knowledge, their explanations of actual search considerations seem very valuable in constructing a knowledge-level model of their search processes. Furthermore, as the basis of our knowledge acquisition approach, we substantially extend the work done on ripple-down rules, which allows knowledge acquisition and maintenance without requiring analysis by a knowledge engineer. This extension allows the expert to enter his domain terms during the KA process; thus the expert provides a knowledge-level model of his search process. We call this framework nested ripple-down rules. Our approach targets the implicit representation of the less clearly definable quality criteria by allowing the expert to limit his input to the system to explanations of the steps in the expert search process. These explanations are expressed in our search knowledge interactive language and are used to construct a knowledge base representing search control knowledge. We acquire the knowledge in the context of its use, which substantially supports the knowledge acquisition process. Thus, in this paper, we show that it is possible to build effective search heuristics efficiently at the knowledge level. We discuss how our system SmS1.3 (SmS for Smart Searcher) operates at the knowledge level as originally described by Newell. We complement our discussion by employing SmS for the acquisition of expert chess knowledge for performing a highly pruned tree search. These experimental results in the chess domain are evidence for the practicality of our approach.

14.
Time-predictability is the most important requirement for a real-time system, and researchers have therefore paid attention to the duration between the arrival and completion of a real-time task, called response time. RTA (Response Time Analysis) studies, however, rely on the same technique, yielding room for further improvement, especially regarding multiprocessor platforms. For this paper, we investigated the properties of an existing utilization-based schedulability analysis for global EDF (Earliest Deadline First) on a multiprocessor platform, and developed a new RTA technique based on the corresponding properties, which calculates the response times of tasks in task sets deemed schedulable by the existing analysis. We demonstrated through simulations that our proposed RTA technique not only calculates response times that are less pessimistic than those of the existing approach, but also successfully derives response times that cannot be obtained by the existing approach.
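For context, the classical RTA recurrence that such work builds on can be shown compactly. The sketch below is the standard uniprocessor fixed-priority fixed-point iteration, not the paper's multiprocessor global-EDF analysis:

```python
from math import ceil

def response_time(task_idx, tasks, deadline):
    """Classical uniprocessor fixed-priority RTA: iterate
    R = C_i + sum_j ceil(R / T_j) * C_j over higher-priority tasks j
    until it converges or exceeds the deadline.
    tasks = [(C, T), ...] sorted from highest to lowest priority."""
    C, _ = tasks[task_idx]
    r = C
    while True:
        interference = sum(ceil(r / T_j) * C_j for C_j, T_j in tasks[:task_idx])
        r_next = C + interference
        if r_next == r:
            return r      # converged: worst-case response time
        if r_next > deadline:
            return None   # unschedulable: response time exceeds the deadline
        r = r_next
```

For example, with higher-priority tasks (C=1, T=4) and (C=2, T=6), a task with C=3 and deadline 12 converges to a worst-case response time of 10.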

15.
Software configuration management (SCM) is an emerging discipline. An important aspect of realizing SCM is the task of maintaining the configurations of evolving software systems. In this paper, we provide an approach to resolving some of the conceptual and technical problems in maintaining configurations of evolving software systems. The approach provides a formal basis for existing notions of system architecture. The formal properties of this view of configurations provide the underpinnings for a rigorous notion of system integrity, and mechanisms to control the evolution of configurations. This approach is embodied in a language, NuMIL, to describe software system configurations, and a prototype environment to maintain software system configurations. We believe that the approach and the prototype environment offer a firm base to maintain software system configurations and, therefore, to implement SCM.

16.
In distributed real-time systems, if a task misses its deadline, an exception can be thrown. In this context, end-to-end deadline miss prediction mechanisms can reduce exception throwing because they define an estimated response time. With this estimated response time, the system can carry out remedial actions in time to avoid throwing an exception. In this work, we propose the Available Slack (AS) deadline miss prediction mechanism, which defines an estimated response time for distributed tasks using information such as computation time and end-to-end deadline. We show how AS behaves in simulations with different system workloads, including pipelines and balanced and non-balanced loads.
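The core of such a prediction can be sketched as a slack computation. This is an illustrative reading of the abstract, not the paper's exact AS mechanism: estimate the response time as the time elapsed so far plus the computation time of the remaining pipeline stages, and compare it against the end-to-end deadline.

```python
def available_slack(end_to_end_deadline, elapsed, remaining_stages):
    """Slack = deadline minus the estimated end-to-end response time
    (time elapsed so far plus remaining stage computation times)."""
    estimated_response = elapsed + sum(remaining_stages)
    return end_to_end_deadline - estimated_response

def predict_miss(end_to_end_deadline, elapsed, remaining_stages):
    # Negative slack predicts a deadline miss, so the system can take
    # remedial action before an exception would actually be thrown.
    return available_slack(end_to_end_deadline, elapsed, remaining_stages) < 0
```

For instance, a task 40 time units into a 100-unit deadline with 20- and 30-unit stages remaining still has 10 units of slack, so no miss is predicted yet.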

17.
To find new deep brain stimulation (DBS) waveforms and to simplify DBS parameter settings, a scheme for modulating the DBS waveform based on the idea of pulse-width modulation (PWM) is proposed. A new DBS waveform is formed using sine-wave-equivalent PWM (SPWM), and the effect of SPWM DBS is verified on a network model of Parkinson's disease. Simulation results show that SPWM DBS has the same effect as conventional periodic-pulse DBS, restoring the pathological firing of Parkinson's disease to normal. The amplitude of the SPWM modulating wave is the key parameter affecting the effect of SPWM DBS, and adjusting this amplitude can optimize the energy of the DBS.
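The SPWM idea can be sketched as comparing a sine modulating wave against a triangular carrier; the output pulse is high whenever the sine exceeds the carrier. The frequencies and amplitude below are illustrative assumptions, not the paper's parameters:

```python
import math

def spwm(t, f_mod=130.0, f_carrier=2000.0, amplitude=0.9):
    """SPWM sample at time t (seconds): high when the sine modulating
    wave exceeds a triangular carrier in [-1, 1]. Raising `amplitude`
    (the modulating-wave amplitude) widens the pulses and thus the
    delivered stimulation energy."""
    modulating = amplitude * math.sin(2 * math.pi * f_mod * t)
    phase = (t * f_carrier) % 1.0
    carrier = 4 * abs(phase - 0.5) - 1.0   # triangle wave in [-1, 1]
    return 1 if modulating > carrier else 0
```

This makes the abstract's point concrete: the pulse train is fully determined by one modulating amplitude rather than by separately tuned pulse width, rate, and amplitude.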

18.
Public access computer systems, e.g. videotex, enable the development of value-added services such as teleshopping and telebanking. Potential users may not have any significant experience of computers, or indeed any interest in learning how to operate them. The computers, therefore, need to be simple to use. This paper attempts to identify features of the teleshopping task which contribute to problems of usability. This identification is a prerequisite for subsequent experimental evaluation and system optimization. First, transactions are described in terms of a general model of the task. The videotex form of a particular transaction, shopping, is then examined and expectations of sources of difficulty are derived. The data from an observational study are used to identify sources of difficulty and to establish a set of operationalizable system variables contributing to user difficulties and errors. A model of the user is then described which accounts for such problems of usability in terms of mismatch between the knowledge used by the expert ideal user and the knowledge used in real transactions. The errors and the statements of difficulty from the observational study are used again to establish the knowledge sources which mismatched with the ideal user knowledge. Relationships between the system variables and these knowledge variables are identified. The operationalizability of the variables allows subsequent experimentation to quantify their effects, and to confirm the grouping and relationship of system characteristics with the incorrect or inadequate knowledge sources. The findings are intended to contribute to improving videotex transaction systems. The aims and the success of the approach are discussed, along with the role of the models as conceptual organizers.

19.
In this paper, we present a technique for building a high-availability (HA) database management system (DBMS). The proposed technique can be applied to any DBMS with little or no customization, and with reasonable performance overhead. Our approach is based on Remus, a commodity HA solution implemented in the virtualization layer, that uses asynchronous virtual machine state replication to provide transparent HA and failover capabilities. We show that while Remus and similar systems can protect a DBMS, database workloads incur a performance overhead of up to 32% compared to an unprotected DBMS. We identify the sources of this overhead and develop optimizations that mitigate the problems. We present an experimental evaluation using two popular database systems and industry-standard benchmarks, showing that for certain workloads, our optimized approach provides fast failover (≤ 3 s of downtime) with low performance overhead when compared to an unprotected DBMS. Our approach provides a practical means for existing, deployed database systems to be made more reliable with a minimum of risk, cost, and effort. Furthermore, this paper invites new discussion about whether the complexity of HA is best implemented within the DBMS, or as a service by the infrastructure below it.

20.