Similar Documents
20 similar documents found (search time: 31 ms)
1.
Physical data layout is a crucial factor in the performance of queries and updates in large data warehouses. Data layout enhances and complements other performance features such as materialized views and dynamic caching of aggregated results. Prior work has identified that the multidimensional nature of large data warehouses imposes natural restrictions on the query workload. A method based on a “uniform” query class approach has been proposed for data clustering and shown to be optimal. However, we believe that realistic query workloads will exhibit data access skew. For instance, if time is a dimension in the data model, then more queries are likely to focus on the most recent time interval. The query class approach does not adequately model the possibility of multidimensional data access skew. We propose the affinity graph model for capturing workload characteristics in the presence of access skew and describe an efficient algorithm for physical data layout. Our proposed algorithm considers declustering and load balancing issues which are inherent to the multidisk data layout problem. We demonstrate the validity of this approach experimentally.
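The affinity graph idea above can be sketched as a weighted graph whose edge weights count how often two data blocks are accessed by the same query; a skewed workload then shows up as a few heavy edges. A minimal illustration (the block names and workload are hypothetical, not from the paper):

```python
from collections import Counter
from itertools import combinations

def build_affinity_graph(query_accesses):
    """Build an affinity graph: nodes are data blocks, and an edge's weight
    counts how many queries access both endpoints together."""
    edges = Counter()
    for blocks in query_accesses:
        for a, b in combinations(sorted(set(blocks)), 2):
            edges[(a, b)] += 1
    return edges

# A skewed workload: most queries touch the most recent time partition.
workload = [["2024Q4", "2024Q3"], ["2024Q4", "2024Q3"], ["2024Q4", "2023Q1"]]
graph = build_affinity_graph(workload)
```

A layout algorithm could then co-locate (or deliberately decluster across disks) the block pairs with the heaviest edges.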

2.
Cloud computing is emerging as an increasingly important service-oriented computing paradigm. Management is key to providing accurate service availability and performance data, as well as enabling real-time provisioning that automatically supplies the capacity needed to meet service demands. In this paper, we present a unified reinforcement learning approach, namely URL, to automate the configuration of virtualized machines and of appliances running in the virtual machines. The approach lends itself to real-time autoconfiguration of clouds. It also makes it possible to adapt the VM resource budget and appliance parameter settings to cloud dynamics and changing workloads to provide service quality assurance. In particular, the approach has the flexibility to make a good trade-off between system-wide utilization objectives and appliance-specific SLA optimization goals. Experimental results on Xen VMs with various workloads demonstrate the effectiveness of the approach: it can drive the system into an optimal or near-optimal configuration in a few trial-and-error iterations.
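The trial-and-error flavor of such autoconfiguration can be illustrated with tabular Q-learning over a discretized configuration space. This is a toy sketch, not URL itself: the state space (memory-budget levels), the reward shape, and all constants are hypothetical.

```python
import random

def q_learn_config(levels=5, optimal=3, steps=2000, alpha=0.5, gamma=0.9,
                   eps=0.2, seed=0):
    """Tabular Q-learning over discrete VM memory-budget levels.
    Actions: shrink (-1), keep (0), grow (+1). The reward penalizes distance
    from a hypothetical optimal level, standing in for the SLA/utilization
    trade-off the real controller would measure."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)
    q = {(s, a): 0.0 for s in range(levels) for a in actions}
    s = 0
    for _ in range(steps):
        if rng.random() < eps:                      # explore
            a = rng.choice(actions)
        else:                                        # exploit
            a = max(actions, key=lambda x: q[(s, x)])
        s2 = min(levels - 1, max(0, s + a))
        r = -abs(s2 - optimal)                       # observed, not known a priori
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                              - q[(s, a)])
        s = s2
    return q

q = q_learn_config()
best_at_optimum = max((-1, 0, 1), key=lambda a: q[(3, a)])
```

After enough iterations the greedy action at the optimal level is "keep", mirroring the paper's claim of converging in a few trial-and-error iterations.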

3.
Computer systems are now powerful enough to run multiple virtual machines (VMs), each one running a separate operating system (OS) instance. In such an environment, direct and centralized energy management by a single OS is unfeasible. Accurately predicting idle intervals is one of the major approaches to saving disk drive energy. However, for intensive workloads it is difficult to find long idle intervals, and even when they exist, it is very difficult for a predictor to catch the idle spikes in the workloads. This paper proposes to divide the workloads into buckets of equal time length and to predict the number of forthcoming requests in each bucket instead of the length of the idle periods. By doing so, the bucket method makes the converted workload more predictable. The method also squeezes the execution of each request to the end of its respective bucket, thus extending the idle length. By deliberately reshaping the workloads so that the crests and troughs of each workload become aligned, we can aggregate the peaks and the idle periods of the workloads. Due to the extended idle length caused by this aggregation, energy can be conserved. Furthermore, as a result of aligning the peaks, resource utilization is improved when the system is active. A trace-driven simulator is designed to evaluate the idea. Three traces are employed to represent the workloads issued by three web servers residing in three VMs. The experimental results show that our method can save significant amounts of energy at the cost of a small reduction in quality of service.
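The first step of the bucket method, converting a trace of request arrival times into per-bucket request counts, might look like the following sketch (trace values and bucket length are made up for illustration):

```python
def bucketize(timestamps, bucket_len):
    """Convert a request trace into per-bucket request counts: predict counts
    per fixed-length bucket rather than the length of each idle period."""
    if not timestamps:
        return []
    n_buckets = int(max(timestamps) // bucket_len) + 1
    counts = [0] * n_buckets
    for t in timestamps:
        counts[int(t // bucket_len)] += 1
    return counts

trace = [0.2, 0.4, 1.1, 3.7, 3.9]            # request arrival times (seconds)
counts = bucketize(trace, bucket_len=1.0)    # -> [2, 1, 0, 2]
```

A bucket with a zero count is a candidate idle period; deferring each request to the end of its bucket then merges adjacent idle stretches, which is where the energy saving comes from.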

4.
The prevalence of intelligent video systems such as urban video surveillance and Google Glass is gradually changing our daily life. These systems apply online analysis to video streams to extract object information, which is then used to provide rich content-based services. However, while providing convenience to users, such systems also strain system resource utilization: online video analysis requires continuous and immediate processing of video streams, which typically demands massive investment in processing hardware and intolerable power consumption. In this paper, we propose to utilize the power of the cloud to improve the energy efficiency of intelligent video systems through video stream consolidation based on the fluctuation characteristics of analysis workloads. Our trace-driven study shows that the pressure on power consumption can be significantly alleviated while preserving processing ability in practical scenarios.

5.
Application of Load Balancing in 3D Rendering
This paper presents a method that accelerates 3D scene rendering by balancing the data-processing load between the CPU and the GPU (Graphics Processing Unit). GPU fabrication technology has advanced rapidly, driving equally rapid growth in GPU performance, so the traditional approach of optimizing rendering speed simply by reducing the graphics card's workload is no longer appropriate; instead, rendering should be optimized by balancing the processing load between the CPU and the GPU. Starting from this idea, we discuss and modify a traditional algorithm, and experimentally compare rendering speeds with balanced and unbalanced CPU/GPU processing loads. The results confirm that optimizing rendering speed by balancing the processing load is practical and effective.
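The balancing idea, giving each device work in proportion to its measured throughput so that both finish at the same time, can be sketched as follows (the throughput numbers are hypothetical):

```python
def split_workload(total_items, cpu_rate, gpu_rate):
    """Split rendering work so CPU and GPU finish together: each device gets
    a share proportional to its measured throughput (items per second)."""
    share = cpu_rate / (cpu_rate + gpu_rate)
    cpu_items = round(total_items * share)
    return cpu_items, total_items - cpu_items

# If the GPU processes 4x faster than the CPU, it should get 4x the work.
cpu, gpu = split_workload(1000, cpu_rate=1.0, gpu_rate=4.0)
```

In a real renderer the rates would be re-measured each frame, so the split adapts as scene complexity shifts the CPU/GPU bottleneck.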

6.
This paper builds on a body of European research on multiple resolution data bases (MRDBs), defining a conceptual framework for managing tasks in a multi-scale mapping project. The framework establishes a workload incorporating task difficulty, time to complete a task, required level of expertise, required resources, etc. Project managers must balance the workload among tasks with lower and higher complexity to produce a high quality cartographic product on time and within budget. We argue for increased emphasis on the role of symbol design, which often carries a lower workload than multi-scale mapping based primarily on geometry change. Countering expectations that combining symbol change with geometry change will increase workloads, we argue that in many cases, integration of the two can reduce workloads overall. To demonstrate our points, we describe two case studies drawn from a recent multi-scale mapping and database building project for Ada County, Idaho. We extend the concept of workload balancing, demonstrating that insertion of Level of Detail (LoD) datasets at intermediate scales can further reduce the workload. Previous work proposing LoDs has not reported empirical assessment, and we encourage small and large mapping organizations to contribute to such an effort.

7.
With the success of Internet video-on-demand (VoD) streaming services, the bandwidth required and the financial cost incurred by the host of the video server have become extremely large. Peer-to-peer (P2P) networks and proxies are two common ways to reduce the server workload. In this paper, we consider a peer-assisted Internet VoD system with proxies deployed at domain gateways. We formally present the video caching problem with the objectives of reducing the video server workload and avoiding inter-domain traffic, and we obtain its optimal solution. Inspired by the theoretical analysis, we develop a practical protocol named PopCap for Internet VoD services. Compared with previous work, PopCap requires no additional infrastructure support, is inexpensive, and copes well with the characteristic workloads of Internet VoD services. In simulation-based experiments driven by real-world data sets from YouTube, we find that PopCap can effectively reduce the video server workload and thus deliver superior traffic reduction at the video server.
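The core caching trade-off, keeping the videos that shed the most server load per unit of proxy storage, can be illustrated with a simple greedy fill. Note the paper derives an optimal solution; this greedy density heuristic and its data are purely illustrative:

```python
def cache_by_popularity(videos, capacity):
    """Greedy proxy cache fill: admit videos in decreasing order of
    requests-per-unit-size until the proxy's storage capacity is used."""
    chosen, used = [], 0
    for name, size, requests in sorted(videos, key=lambda v: v[2] / v[1],
                                       reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

# (name, size in GB, request count) -- hypothetical catalog.
videos = [("a", 4, 400), ("b", 2, 300), ("c", 3, 90), ("d", 1, 10)]
cached = cache_by_popularity(videos, capacity=6)
```

Every request served from the proxy avoids both server bandwidth and inter-domain traffic, which is exactly the pair of objectives the caching problem formalizes.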

8.
In dynamic datacenter networks (DDNs), there are two ways to handle growing traffic: adjusting the network topology according to the traffic, and placing virtual machines (VMs) to change the workload according to the topology. While previous work focused on only one of these two approaches, in this paper we jointly optimize virtual machine placement and topology design to achieve higher traffic scalability. We formulate this joint optimization problem as a mixed integer linear programming (MILP) model and design an efficient heuristic based on Lagrangian relaxation decomposition. To handle traffic dynamics, we introduce an online algorithm that balances algorithm performance and overhead. Our extensive simulations with various network settings and traffic patterns show that, compared with randomly placing VMs in fixed datacenter networks, our algorithm can reduce up to 58.78% of the traffic in the network and completely avoid traffic overflow in most cases. Furthermore, our online algorithm greatly reduces network cost without sacrificing too much network stability.

9.
The demand for so-called living or real-time data warehouses is increasing in many application areas such as manufacturing, event monitoring and telecommunications. In these fields, users normally expect short response times for their queries and high freshness for the requested data. However, meeting these fundamental requirements is challenging due to the high loads and the continuous flow of write-only updates and read-only queries that might be in conflict with each other. Therefore, we present the concept of workload balancing by election (WINE), which allows users to express their individual demands on the quality of service and the quality of data, respectively. WINE exploits this information to balance and prioritize both types of transactions—queries and updates—according to the varying user needs. A simulation study shows that our proposed algorithm outperforms competing baseline algorithms over the entire spectrum of workloads and user requirements.
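The election idea, letting user votes on quality of service versus quality of data tilt the priority between queries and updates, can be rendered as a toy priority queue. This is a simplified stand-in for WINE, not its actual algorithm; the weighting scheme and numbers are hypothetical:

```python
import heapq

def schedule(transactions, qos_weight):
    """Order transactions by a blended priority: qos_weight in [0, 1], where
    1 means users care only about query latency (favor queries) and 0 means
    they care only about data freshness (favor updates)."""
    heap = []
    for i, (kind, urgency) in enumerate(transactions):
        w = qos_weight if kind == "query" else 1 - qos_weight
        heapq.heappush(heap, (-(w * urgency), i, kind))  # max-priority first
    return [kind for _, _, kind in
            (heapq.heappop(heap) for _ in range(len(heap)))]

txns = [("query", 0.9), ("update", 0.9), ("query", 0.5)]
order = schedule(txns, qos_weight=0.8)   # latency-sensitive users
```

With a freshness-oriented vote (small `qos_weight`) the same workload would execute the update first, which is the balancing behavior the abstract describes.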

10.

With the recent advancements in Internet-based computing models, the usage of cloud-based applications to facilitate daily activities is significantly increasing and is expected to grow further. Since the workloads submitted by users of cloud-based applications differ in their quality of service (QoS) metrics, analyzing and identifying these heterogeneous cloud workloads to provide an efficient resource provisioning solution is a challenging issue to be addressed. In this study, we present an efficient resource provisioning solution using a metaheuristic-based clustering mechanism to analyze cloud workloads. The proposed workload clustering approach uses a combination of the genetic algorithm and the fuzzy C-means technique to find similar clusters according to the users' QoS requirements. Then, we use a grey wolf optimizer technique to make an appropriate scaling decision to provision the cloud resources for serving the cloud workloads. Besides, we design an extended framework to show the interaction between users, cloud providers, and the resource provisioning broker in the workload clustering process. The simulation results obtained under real workloads indicate that the proposed approach is efficient in terms of CPU utilization, elasticity, and response time compared with the other approaches.
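The fuzzy C-means step, assigning each workload a membership degree in every cluster rather than a hard label, can be sketched in one dimension. This is illustrative only: the paper couples fuzzy C-means with a genetic algorithm and a grey wolf optimizer, both omitted here, and the feature values are invented.

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=40):
    """Minimal 1-D fuzzy C-means with two clusters (c=2), deterministically
    initialized at the data extremes."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in points:
            d = [abs(x - ctr) or 1e-9 for ctr in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Center update: weighted mean with weights u^m.
        centers = [sum(u[k][i] ** m * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return sorted(centers)

# Two workload groups by, say, mean response time requirement (ms).
data = [1.0, 1.2, 0.9, 10.0, 10.5, 9.8]
centers = fuzzy_c_means(data)
```

The resulting cluster centers separate latency-sensitive from latency-tolerant workloads, which is the input the scaling decision then acts on.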


11.

12.
In general, operating systems (OSs) are designed to mediate access to device hardware by applications. They process different kinds of system calls using an indiscriminate kernel with a single configuration. Applications in cloud computing platforms are constructed from service components, each of which is assigned separately to an individual virtual machine (VM); this leads to homogeneous system calls on each VM. In addition, different VMs have different requirements for kernel functions and the configuration of system parameters. Therefore, the one-size-fits-all design incurs unnecessary performance overhead and restricts the OS's processing capacity in cloud computing. In this paper, we propose an adaptive model for cloud computing to resolve the conflict between generality and performance. Our model adaptively specializes the OS of a VM according to the resource-consuming characteristics of the workloads on the VM. We implement a prototype of the adaptive model, vSpec. There are five classes of VM—CPU-intensive, memory-intensive, I/O-intensive, network-intensive and compound—according to the resource-consuming characteristics of the workloads running on the VMs. vSpec specializes the OS of a VM according to the VM class. We perform comprehensive experiments to evaluate the effectiveness of vSpec on benchmarks and real-world applications.

13.
The use of virtualization technology (VT) has become widespread in modern datacenters and Clouds in recent years. In spite of their many advantages, such as provisioning of isolated execution environments and migration, current implementations of VT do not provide effective performance isolation between virtual machines (VMs) running on a physical machine (PM) due to workload interference between VMs. Generally, this interference is due to contention on physical resources, and it impacts performance differently in different workload configurations. To investigate the impacts of this interference, we formalize the concept of interference for a consolidated multi-tenant virtual environment. This formulation, represented as a mathematical model, can be used by schedulers to estimate the interference of a consolidated virtual environment in terms of the processing and networking workloads of running VMs, and the number of consolidated VMs. Based on the proposed model, we present a novel batch scheduler that reduces the interference of running tenant VMs by pausing VMs that have a higher impact on proliferation of the interference. The scheduler achieves this by selecting a set of VMs that produce the least interference using a 0–1 knapsack problem solver; the selected VMs are allowed to run and the other VMs are paused. Users are not troubled by the pausing and resumption of VMs for a short time because the scheduler is designed for the execution of batch-type applications such as scientific applications. Evaluation results on the makespan of VMs executed under the control of our scheduler show nearly 33% improvement in the best case and 7% improvement in the worst case compared to the case in which all VMs run concurrently. In addition, the results show that our scheduling algorithm outperforms serial and random scheduling of VMs.
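The VM-selection step can be framed as a standard 0/1 knapsack: each VM has an interference "weight" and a value (e.g. useful work), and the scheduler runs the subset that fits an interference budget. The dynamic-programming sketch below mirrors that formulation in spirit only; the weights, values, and capacity are hypothetical:

```python
def select_vms(vms, capacity):
    """0/1 knapsack DP: pick the subset of VMs whose total interference
    weight fits the budget while maximizing total value. Unselected VMs
    would be paused until the next scheduling round."""
    # dp[w] = (best value, chosen set) achievable with weight budget w.
    dp = [(0, frozenset())] * (capacity + 1)
    for name, weight, value in vms:
        for w in range(capacity, weight - 1, -1):   # reverse: use each VM once
            cand = (dp[w - weight][0] + value, dp[w - weight][1] | {name})
            if cand[0] > dp[w][0]:
                dp[w] = cand
    return dp[capacity]

# (name, interference weight, value) -- illustrative numbers.
vms = [("vm1", 3, 4), ("vm2", 2, 3), ("vm3", 4, 5)]
best_value, running = select_vms(vms, capacity=5)
```

Re-running the solver each round lets previously paused VMs resume, which is why the approach suits batch workloads rather than interactive ones.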

14.
15.
As the number of cores in chip multiprocessors (CMPs) increases rapidly, networks-on-chip (NoCs) have come to play the major role in ensuring performance and power scalability. In this paper, we propose multiple-combinational-channel (MCC), a load-balancing and deadlock-free interconnect network for cache-coherent non-uniform memory access (CC-NUMA). To balance load and reduce power dissipation, we combine low-usage channels and make high-usage channels independent and wide enough, since messages transmitted over the NoC have different widths and injection rates. Furthermore, based on an in-depth analysis of network traffic, we summarize four traffic patterns and establish several rules to avoid protocol-level deadlocks. We implement MCC on a 16-core CMP and evaluate the workload balance, area, power and performance using universal workloads. The experimental results show that MCC consumes nearly 21% less power than a multiple-physical-channel design with similar throughput. Moreover, MCC improves performance by 10% with similar area and power, compared to a packet-switching architecture with virtual channels.

16.
An Effective Dynamic Load Balancing Method
Dynamic load balancing is an important factor in the parallel computing performance of workstation networks. We first identify load movement as the fundamental source of load-balancing overhead, and derive a qualitative formula for the granularity of each load movement. We introduce a benefit-estimation method so that load balancing is performed only when it is beneficial, and we further propose a dynamic load balancing algorithm. Finally, experiments compare the algorithm's results against other load-balancing schemes and against performing no load balancing at all. Whether the workstations are idle or under varying loads and problem sizes, the method outperforms the method proposed by Siegell et al.
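The benefit-estimation idea, migrating load only when the compute time saved exceeds the cost of moving it, can be sketched with a toy cost model. The model and its constants are hypothetical; the paper derives its own granularity formula:

```python
def should_migrate(excess_load, speed, migrate_cost_per_unit):
    """Benefit estimation for dynamic load balancing: move load only when
    the computation time saved exceeds the time spent moving it."""
    time_saved = excess_load / speed            # seconds of compute offloaded
    move_cost = excess_load * migrate_cost_per_unit  # transfer time, seconds
    return time_saved > move_cost

# Moving 10 units saves 10/2 = 5 s of compute but costs 10*0.3 = 3 s: worth it.
decision = should_migrate(10, 2.0, 0.3)
```

This is the guard that prevents the pathological case where migration overhead swamps the balancing benefit, the root cause of overhead the abstract identifies.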

17.
The type of the workload on a database management system (DBMS) is a key consideration in tuning the system. Allocations for resources such as main memory can be very different depending on whether the workload type is Online Transaction Processing (OLTP) or Decision Support System (DSS). A DBMS also typically experiences changes in the type of workload it handles during its normal processing cycle. Database administrators must therefore recognize the significant shifts of workload type that demand reconfiguring the system in order to maintain acceptable levels of performance. We envision intelligent, autonomic DBMSs that have the capability to manage their own performance by automatically recognizing the workload type and then reconfiguring their resources accordingly. In this paper, we present an approach to automatically identifying a DBMS workload as either OLTP or DSS. Using data mining techniques, we build a classification model based on the most significant workload characteristics that differentiate OLTP from DSS and then use the model to identify any change in the workload type. We construct and compare classifiers built from two different sets of workloads, namely the TPC-C and TPC-H benchmarks and the Browsing and Ordering profiles from the TPC-W benchmark. We demonstrate the feasibility and success of these classifiers with TPC-generated workloads and with industry-supplied workloads.
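A heavily simplified stand-in for such a classifier is a one-feature decision stump: OLTP workloads are write-heavy while DSS workloads are almost read-only, so a threshold on the write ratio already separates the two. The feature choice, threshold rule, and training data here are illustrative, not the paper's data-mined model:

```python
def train_stump(samples):
    """Train a one-feature decision stump separating OLTP from DSS by the
    fraction of write statements in the workload."""
    oltp = [w for w, label in samples if label == "OLTP"]
    dss = [w for w, label in samples if label == "DSS"]
    # Midpoint between the closest opposing examples.
    threshold = (min(oltp) + max(dss)) / 2
    return lambda write_ratio: "OLTP" if write_ratio >= threshold else "DSS"

# (write_ratio, label): OLTP is write-heavy, DSS almost read-only.
training = [(0.45, "OLTP"), (0.60, "OLTP"), (0.02, "DSS"), (0.05, "DSS")]
classify = train_stump(training)
```

An autonomic DBMS would re-run `classify` on a sliding window of recent statements and trigger reconfiguration when the label flips.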

18.
We focus on load balancing policies for homogeneous clustered Web servers that tune their parameters on-the-fly to adapt to changes in the arrival rates and service times of incoming requests. The proposed scheduling policy, ADAPTLOAD, monitors the incoming workload and self-adjusts its balancing parameters according to changes in the operational environment such as rapid fluctuations in the arrival rates or document popularity. Using actual traces from the 1998 World Cup Web site, we conduct a detailed characterization of the workload demands and demonstrate how online workload monitoring can play a significant part in meeting the performance challenges of robust policy design. We show that the proposed load balancing policy, based on statistical information derived from recent workload history, provides similar performance benefits as locality-aware allocation schemes without requiring locality data. Extensive experimentation indicates that ADAPTLOAD is an effective scheme, even when servers must support both static and dynamic Web pages.
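A size-based policy of this family partitions the observed request-size distribution into contiguous ranges of roughly equal total load, so each server handles one size range. The sketch below shows that boundary computation in the spirit of ADAPTLOAD, not its exact algorithm; the request sizes are invented:

```python
def size_boundaries(sizes, n_servers):
    """Split sorted request sizes into contiguous ranges of roughly equal
    total load; server i serves requests whose size falls in range i."""
    sizes = sorted(sizes)
    target = sum(sizes) / n_servers
    boundaries, acc = [], 0.0
    for s in sizes:
        acc += s
        if acc >= target and len(boundaries) < n_servers - 1:
            boundaries.append(s)   # upper size bound for the current server
            acc = 0.0
    return boundaries

reqs = [1, 1, 2, 2, 4, 10]         # recent request sizes (KB)
cuts = size_boundaries(reqs, n_servers=2)   # sizes <= cuts[0] go to server 1
```

Recomputing the boundaries over a sliding history window is what makes the policy self-adjusting as arrival rates and document popularity drift.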

19.
The consolidation of multiple workloads and servers enables the efficient use of server and power resources in shared resource pools. We employ a trace-based workload placement controller that uses historical information to periodically and proactively reassign workloads to servers subject to their quality of service objectives. A reactive migration controller is introduced that detects server overload and underload conditions. It initiates the migration of workloads when the demand for resources exceeds supply. Furthermore, it dynamically adds and removes servers to maintain a balance of supply and demand for capacity while minimizing power usage. A host load simulation environment is used to evaluate several different management policies for the controllers in a time effective manner. A case study involving three months of data for 138 SAP applications compares three integrated controller approaches with the use of each controller separately. The study considers trade-offs between: (i) required capacity and power usage, (ii) resource access quality of service for CPU and memory resources, and (iii) the number of migrations. Our study sheds light on the question of whether a reactive controller or proactive workload placement controller alone is adequate for resource pool management. The results show that the most tightly integrated controller approach offers the best results in terms of capacity and quality but requires more migrations per hour than the other strategies.

20.
The most critical property exhibited by a heavy-tailed workload distribution (found in many WWW workloads) is that a very small fraction of tasks make up a large fraction of the workload, making the load very difficult to distribute in a distributed system. Load balancing and load sharing are the two predominant load distribution strategies used in such systems. Load sharing generally has better response time than load balancing because the latter can exhibit excessive overheads in selecting servers and partitioning tasks. We therefore further explored the least-loaded-first (LLF) load sharing approach and found two important limitations: (a) LLF does not consider the order of processing, and (b) when it assigns a task, LLF does not consider the processing capacity of servers. The high task size variation that exists in heavy-tailed workloads often causes smaller tasks to be severely delayed by large tasks. This paper proposes a size-based approach, called least flow-time first (LFF-SIZE), which reduces the delay caused by size variation while maintaining a balanced load in the system. LFF-SIZE takes the relative processing time of a task into account and dynamically assigns a task to the fittest server with a lighter load and higher processing capacity. LFF-SIZE also uses a multi-section queue to separate larger tasks from smaller ones. This arrangement effectively reduces the delay of smaller tasks by larger ones, as small tasks are given a higher priority to be processed. Performance results from the LFF-SIZE implementation show a substantial improvement over existing load sharing and static size-based approaches under realistic heavy-tailed workloads.
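The server-selection rule, sending a task where its estimated flow time (queued work plus the task itself, divided by processing capacity) is smallest, can be sketched as follows. This is only the selection rule, not the multi-section queue, and the server parameters are hypothetical:

```python
def assign_lff(task_size, servers):
    """Least-flow-time-first style pick: choose the server minimizing
    (current load + task size) / capacity, then account for the new load."""
    def flow_time(s):
        return (s["load"] + task_size) / s["capacity"]
    best = min(servers, key=flow_time)
    best["load"] += task_size
    return best["name"]

servers = [{"name": "s1", "load": 8.0, "capacity": 2.0},
           {"name": "s2", "load": 2.0, "capacity": 1.0}]
picked = assign_lff(2.0, servers)
```

Unlike plain least-loaded-first, this rule can route a task to a more loaded but faster server, addressing limitation (b) in the abstract: s1 wins here with flow time 5.0 only if s2's is worse, and indeed s2's 4.0 beats it.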

