Similar Documents
20 similar documents found.
1.
A Recurrent Neural Network Approach to Server Performance Prediction
Accurate and effective prediction of server load is an important part of a computer system performance management system. Traditional prediction methods such as least squares and double exponential smoothing often fail to capture the temporal dependencies in server load data. This work applies a recurrent neural network (RNN) based on local recurrence, trained with an improved RPROP learning algorithm, to server load prediction, and compares it with the traditional double exponential smoothing method. Experimental results show that the RNN's predictions are more than five percentage points better than those of double exponential smoothing, that the RNN has stronger predictive power, and that it can forecast over longer horizons.
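For context on the baseline mentioned above, the sketch below implements double exponential smoothing (Holt's linear method) in Python; it is not the paper's RNN/RPROP code, and the smoothing factors and load samples are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): Holt's double exponential smoothing,
# the baseline the RNN is compared against. `alpha`/`beta` are assumed smoothing factors.
def double_exponential_smoothing(series, alpha=0.5, beta=0.5, horizon=1):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    # Extrapolate the final level/trend `horizon` steps ahead.
    return [level + (i + 1) * trend for i in range(horizon)]

load = [0.42, 0.45, 0.50, 0.48, 0.55, 0.60, 0.58]   # hypothetical CPU load samples
print(double_exponential_smoothing(load, horizon=3))
```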

2.
Server consolidation is very attractive for cloud computing platforms as a way to improve energy efficiency and resource utilization. Advances in multi-core processors and virtualization technologies have enabled many workloads to be consolidated onto a single physical server. However, current virtualization technologies do not ensure performance isolation among guest virtual machines, so contention for shared resources degrades performance and can violate the service level agreement (SLA) of the cloud service. Minimizing performance interference among co-located virtual machines is therefore a key requirement for a successful server consolidation policy in cloud computing platforms. In this work, we propose a performance model that accounts for interference in the shared last-level cache and memory bus. The interference model estimates both how much an application will hurt others and how much it will suffer from others. We also present a virtual machine consolidation method called swim, which is based on this interference model. Experimental results show that the average performance degradation ratio under swim is comparable to that of the optimal allocation.
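The following is a minimal, hypothetical sketch of interference-aware placement in the spirit described above, not the paper's swim algorithm: a pairwise interference score (assumed to grow with each VM's shared-resource pressure) drives a greedy assignment of VMs to servers.

```python
# Hedged sketch, not the paper's swim algorithm: greedy placement that uses an assumed
# pairwise interference score (e.g., derived from cache/memory-bus pressure) to pick,
# for each VM, the server where the added mutual interference is smallest.
def interference(vm_a, vm_b):
    # Assumed model: degradation grows with the product of the two VMs'
    # shared-resource pressure (a number in [0, 1] per VM).
    return vm_a["pressure"] * vm_b["pressure"]

def place(vms, n_servers):
    servers = [[] for _ in range(n_servers)]
    for vm in sorted(vms, key=lambda v: v["pressure"], reverse=True):
        # Cost of adding `vm` to a server = interference with every co-located VM.
        cost = lambda srv: sum(interference(vm, other) for other in srv)
        best = min(servers, key=cost)
        best.append(vm)
    return servers

vms = [{"name": "vm%d" % i, "pressure": p}
       for i, p in enumerate([0.9, 0.7, 0.2, 0.4, 0.8, 0.1])]
for i, srv in enumerate(place(vms, 3)):
    print("server", i, [v["name"] for v in srv])
```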

3.
Computational grids hold great promise for using geographically separated, heterogeneous resources to solve large-scale complex problems. However, they face several major technical hurdles, including distributed resource management and effective job scheduling. This work focuses on online scheduling of real-time applications in distributed environments such as grids, specifically applications composed of several independent tasks, each with a prespecified deadline. The goal is to schedule applications with optimal overall completion time while meeting the specified deadlines. To achieve this, resource performance is predicted through workload modeling and queuing techniques, after which a mathematical neural model schedules the subtasks of the application. The main contributions of this work are incorporating the impatience factor and resource faults into the performance modeling of non-dedicated distributed systems, and presenting an efficient, fast parallel scheduling algorithm for time-constrained applications on heterogeneous resources. The proposed model is suitable for implementation on parallel machines and runs in O(1) time. The model was implemented on the GridSim toolkit and evaluated under various conditions and parameter settings. Simulation results show that in approximately 87.8% of cases the model schedules tasks so that all constraints are satisfied.
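As a simple point of reference only (the paper's queuing-based prediction and neural scheduler are not reproduced), the sketch below dispatches deadline-constrained independent tasks to heterogeneous resources with a classic earliest-deadline-first rule; resource speeds and task parameters are assumed.

```python
# Illustrative baseline only (not the paper's neural scheduler): earliest-deadline-first
# dispatch of independent tasks onto heterogeneous resources, using an assumed
# per-resource speed factor to estimate finish times.
import heapq

def edf_schedule(tasks, resources):
    """tasks: list of (name, work, deadline); resources: dict name -> speed."""
    # Priority queue of (time the resource becomes free, resource name).
    free_at = [(0.0, r) for r in resources]
    heapq.heapify(free_at)
    plan, missed = [], []
    for name, work, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF order
        t_free, res = heapq.heappop(free_at)
        finish = t_free + work / resources[res]
        heapq.heappush(free_at, (finish, res))
        (plan if finish <= deadline else missed).append((name, res, finish))
    return plan, missed

tasks = [("t1", 4.0, 5.0), ("t2", 2.0, 3.0), ("t3", 6.0, 10.0)]
resources = {"fast": 2.0, "slow": 1.0}
print(edf_schedule(tasks, resources))
```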

4.
Filtration consolidation is modeled taking into account the salt saturation of the soil and the nonisothermal, relaxational nature of the filtration process. A boundary-value problem is posed for a soil mass consolidated on an impermeable bed, an approximate solution is obtained, and results of numerical experiments are presented. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 71–79, November–December 2006.
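For readers unfamiliar with filtration consolidation, the classical one-dimensional (Terzaghi) consolidation equation that models of this family extend is shown below, where \(u(z,t)\) is the excess head, \(z\) the depth coordinate, and \(c_v\) the consolidation coefficient; the paper's salt-saturated, nonisothermal, relaxational formulation is not reproduced here.

```latex
% Classical background only; not the paper's extended model.
\[
  \frac{\partial u}{\partial t} = c_v \, \frac{\partial^2 u}{\partial z^2}
\]
```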

5.
The Video Server Estimator (VSE) is an analytical tool that allows a user to perform a cost/performance analysis of video servers with hierarchical storage. The underlying model comprises multiple systems, main memory, expanded storage, disks, and a tape library. The main objective of the tool is to allocate the video files optimally to the different storage media based on the system parameters and the video file request probability distribution. The cost and size of the video server that can accommodate a customer profile are determined, and the impact of design parameters on cost and performance is examined through a parametric analysis.
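A hedged illustration of the allocation idea (not the VSE model itself): place the most frequently requested files on the fastest storage tier that still has room. Tier names, capacities, and request probabilities below are assumptions.

```python
# Hedged illustration (not the VSE tool itself): allocate video files to storage tiers
# greedily by request probability, filling the fastest (most expensive) tier first.
# Tier capacities and the file popularity distribution are all assumed.
def allocate(files, tiers):
    """files: list of (name, size_gb, request_prob); tiers: list of (name, capacity_gb)."""
    placement = {tier: [] for tier, _ in tiers}
    remaining = dict((t, cap) for t, cap in tiers)
    # Most frequently requested files go to the fastest tier that can hold them.
    for name, size, prob in sorted(files, key=lambda f: f[2], reverse=True):
        for tier, _ in tiers:                      # tiers ordered fastest -> slowest
            if remaining[tier] >= size:
                placement[tier].append(name)
                remaining[tier] -= size
                break
    return placement

files = [("news", 2, 0.40), ("movie_a", 8, 0.25), ("movie_b", 8, 0.20), ("archive", 50, 0.15)]
tiers = [("memory", 4), ("disk", 20), ("tape", 500)]
print(allocate(files, tiers))
```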

6.
In virtualized data centers, services are provided by active virtual machines (VMs) running on physical machines (PMs). The manner in which VMs are mapped to PMs affects both data center performance and energy efficiency. Server consolidation makes it possible to place the VMs on a smaller number of PMs while still guaranteeing quality of service; utilization of the active PMs increases and fewer of them are required. Moreover, consolidation manages underloaded and overloaded PMs through VM migration. Given the capabilities of server consolidation and its role in developing cloud computing infrastructure, much research has been conducted in this area, yet no comprehensive, systematic study has examined the capabilities, advantages, and disadvantages of current approaches. This paper presents a systematic study of a number of credible works on server consolidation techniques. The proposed solutions are categorized, according to how the decision to run the consolidation algorithm is made, into four groups: static, dynamic, prediction-based dynamic, and hybrid methods. The advantages and disadvantages of the suggested approaches are then analyzed and compared, specifying the technique and idea applied in each work. In addition, by categorizing research aims and identifying assessment parameters, optimization approaches, and architecture types, the survey provides an overview of the researchers' perspectives.

7.
The consolidation of salt-saturated porous media is modeled with space-time nonlocality effects, both with and without the relaxation of the filtration rate taken into account. A numerical algorithm for modeling the dynamics of the process is proposed, and an asymptotic analysis of the excess-head problem is performed for weak spatial nonlocality. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 59–66, November–December 2008.

8.
A modern data center consists of thousands of servers, racks, and switches. This complex structure requires well-designed algorithms to use data center resources efficiently. Current virtual machine scheduling algorithms focus mainly on the initial allocation of virtual machines based on CPU, memory, and network bandwidth requirements. However, when tasks finish or leases expire, the associated virtual machines are deleted from the system, leaving resource fragments. Such fragments cause unbalanced resource utilization and degraded communication performance. This paper investigates the influence of the network on typical data center applications and proposes a self-adaptive, network-aware virtual machine clustering and consolidation algorithm that maintains an optimal system-wide state. The consolidation algorithm periodically checks whether consolidation is necessary and then clusters and consolidates virtual machines with an online heuristic to lower communication cost. Two benchmarks were run in a real environment to examine the network's influence on different tasks, and a cloud computing testbed was built to evaluate the algorithm. Real workload trace-driven simulations and testbed experiments show that the algorithm greatly shortens the average finish time of map-reduce tasks and reduces the delay of web applications. Simulation results also show that it considerably reduces the number of high-delay jobs, lowers the average traffic passing through aggregation switches, and improves communication among virtual machines.
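The sketch below is a hypothetical stand-in for the clustering step, not the paper's algorithm: it greedily merges the VM groups with the heaviest mutual traffic, subject to an assumed per-host slot limit, so chatty VMs end up co-located and cross-switch traffic drops.

```python
# Hedged sketch, not the paper's algorithm: greedily merge the VM pair/cluster with the
# heaviest mutual traffic until no merge fits under an assumed per-host slot limit.
def cluster_vms(traffic, max_per_host):
    """traffic: dict {(vm_a, vm_b): messages/s}; returns a list of VM clusters."""
    clusters = {vm: {vm} for pair in traffic for vm in pair}

    def inter(c1, c2):
        # Total traffic exchanged between two clusters.
        return sum(v for (a, b), v in traffic.items()
                   if (a in c1 and b in c2) or (a in c2 and b in c1))

    merged = True
    while merged:
        merged = False
        keys = list(clusters)
        best, best_val = None, 0.0
        for i, k1 in enumerate(keys):
            for k2 in keys[i + 1:]:
                if len(clusters[k1]) + len(clusters[k2]) <= max_per_host:
                    val = inter(clusters[k1], clusters[k2])
                    if val > best_val:
                        best, best_val = (k1, k2), val
        if best:
            k1, k2 = best
            clusters[k1] |= clusters.pop(k2)
            merged = True
    return list(clusters.values())

traffic = {("web", "cache"): 90, ("web", "db"): 40, ("db", "backup"): 5, ("cache", "db"): 60}
print(cluster_vms(traffic, max_per_host=2))
```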

9.
The Journal of Supercomputing - We present PPT-Multicore, an analytical model embedded in the Performance Prediction Toolkit (PPT) to predict parallel applications’ performance running on a...

10.
An overview is given of Q+, an interactive tool for performance modeling that uses graphical input and visual output. Two major enhancements are a subnetwork capability for structuring models hierarchically and an integrated expression capability. New capabilities include custom icons and temporal browsing: with the Q+ icon palette, users can draw their own icons and manipulate existing ones, and the browser allows browsing, editing, and updating of Q+ information, which can be textual or graphical. Automatic model building, operations management, and experimental design with Q+ are also discussed.

11.
The filtration consolidation of water-saturated, randomly inhomogeneous soil masses is studied. The field of excess head in a soil mass with random inclusions is obtained in the case of random consolidation coefficients. Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 89–100, January–February 2008.

12.
A mathematical model of the filtration consolidation of porous media saturated with saline solutions is developed. The filtration process is relaxational and occurs in a relaxation-compressible medium. A corresponding boundary-value problem is formulated, asymptotic approximations of the solutions are found, and an algorithm for numerical modeling of the process is proposed. Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 116–126, January–February 2008.

13.
In many domains, the previous decade was characterized by increasing data volumes and growing complexity of data analyses, creating new demands for batch processing on distributed systems. Effective operation of these systems is challenging when facing uncertainties about the performance of jobs and tasks under varying resource configurations, e.g., for scheduling and resource allocation. We survey predictive performance modeling (PPM) approaches to estimate performance metrics such as execution duration, required memory, or wait times of future jobs and tasks based on past performance observations. We focus on non-intrusive methods, i.e., methods that can be applied to any workload without modification, since the workload is usually a black box from the perspective of the systems managing the computational infrastructure. We classify and compare sources of performance variation, predicted performance metrics, limitations and challenges, required training data, use cases, and the underlying prediction techniques. We conclude by identifying several open problems and pressing research needs in the field.
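As a minimal example of the non-intrusive, history-based prediction the survey covers (not any specific surveyed method), the following fits a least-squares line to past (input size, runtime) observations and extrapolates for a future job; all numbers are hypothetical.

```python
# Hedged illustration of the surveyed idea, not any specific paper's method: a
# non-intrusive runtime predictor that fits a least-squares line to past
# (input size, execution time) observations and extrapolates for a future job.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx          # (slope, intercept)

# Hypothetical history: input size in GB vs. observed runtime in minutes.
sizes    = [1.0, 2.0, 4.0, 8.0, 16.0]
runtimes = [3.1, 5.8, 11.5, 22.9, 46.2]
a, b = fit_line(sizes, runtimes)
print("predicted runtime for 32 GB: %.1f min" % (a * 32 + b))
```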

14.
A server normally uses IP addresses assigned by its own data center, but sometimes an IP address belonging to a remote data center must be used on the server to implement certain special functions, a requirement that the Internet architecture makes difficult to satisfy. By using VPN technology to build a tunnel over the Internet, the server and the remote data center can be connected directly at the network layer; with appropriately configured routing rules, the remote data center's IP address can then be used on the server, which exhibits some distinctive characteristics at the application level.

15.
Performance prediction is an important engineering tool that provides valuable feedback on design choices in program synthesis and machine architecture development. We present an analytic performance modeling approach aimed at minimizing prediction cost while providing prediction accuracy sufficient to enable major code and data mapping decisions. Our approach is based on a performance simulation language called PAMELA. Apart from simulation, PAMELA features a symbolic analysis technique that enables PAMELA models to be compiled into symbolic performance models that trade prediction accuracy for the lowest possible solution cost. We demonstrate the approach through a large number of theoretical and practical modeling case studies, including six parallel programs and two distributed-memory machines. The average prediction error of our approach is less than 10 percent, while the average worst-case error is limited to 50 percent. This accuracy is shown to be sufficient to correctly select the best coding or partitioning strategy. For programs expressed in a high-level, structured programming model, such as data-parallel programs, symbolic performance modeling can be entirely automated. We report on experiments with a PAMELA model generator built within a data-parallel compiler for distributed-memory machines. Our results show that, with negligible program annotation, symbolic performance models are compiled automatically in seconds, while their solution cost is on the order of milliseconds.

16.
17.
When developing a general-purpose online computer examination system for universities with SQL Server as the database management platform, the large data volume makes running speed, performance, and maintainability the primary concerns. A conventional program design would cause heavy network traffic, slow business-logic processing, and low system efficiency. To solve these problems, the relevant business logic in the system is implemented with stored procedures, which greatly reduces network traffic and improves system performance and maintainability.
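A hedged sketch of the idea from a Python client: invoking a server-side stored procedure so the grading logic runs inside SQL Server and only the result crosses the network. The procedure name, parameters, and connection string are hypothetical, and pyodbc is assumed as the client driver.

```python
# Hedged sketch of the idea (all names and the connection string are hypothetical):
# moving grading logic into a SQL Server stored procedure so the client sends one
# round trip instead of shipping per-row data over the network.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=exam-db.example.edu;DATABASE=ExamDB;UID=app;PWD=secret"
)
cur = conn.cursor()

# One call executes the whole business rule server-side; only the result returns.
cur.execute("{CALL usp_GradeExam (?, ?)}", 2024001, "S1001")   # exam id, student id
row = cur.fetchone()
print("score:", row[0] if row else None)

conn.commit()
conn.close()
```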

18.
程敏, 余轮. 《计算机与数字工程》, 2010, 38(7): 166-168, 180
Tracking the migration routes of migratory birds can provide valuable guidance for avian influenza prevention and control. In the tracking and monitoring system described here, tracking nodes periodically collect and transmit positioning data, and the server receives the information from each node over the GPRS network and stores it, using SQL Server 2000 to manage the data so that users can perform large-volume data operations. By querying and analyzing the received data, the server can determine the working state of each node and adjust it, realizing a monitoring system that integrates data reception, data processing, and state control.

19.
Database system performance models are an important technical foundation for database system management and are widely used in tasks such as query scheduling, resource allocation, and performance tuning. Current performance models fall into two categories: analytical and statistical. Analytical models require an in-depth study of the query execution process inside the database system; they adapt well to dynamic workloads and need no costly sampling experiments, but modeling becomes complex under concurrent query execution and different database systems require different theoretical models. Statistical models do not analyze the query execution process; instead, they collect query execution parameters and train a mathematical model. Statistical modeling is simple, describes query interactions well, and predicts accurately, but sampling is expensive and adaptability to dynamic workloads is poor. This paper surveys the main literature on database system performance modeling, focusing on the principal modeling methods, and discusses the advantages and disadvantages of the two model classes, the difficulties of modeling, and possible countermeasures. On this basis, it outlines future research directions in the field of database system performance models, providing a reference for related research.
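To make the statistical modeling style concrete (an illustrative assumption, not a method from the surveyed literature), the sketch below predicts query latency from the k most similar previously observed queries, using assumed features such as estimated rows scanned, join count, and concurrency.

```python
# Hedged illustration of the "statistical" modeling style described above (not any
# specific paper's model): predict query latency as the average latency of the k most
# similar previously observed queries. Features are assumed to be
# (estimated rows scanned, number of joins, concurrent queries).
def predict_latency(history, query, k=3):
    """history: list of (features, latency_ms); query: features tuple."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda h: dist(h[0], query))[:k]
    return sum(lat for _, lat in nearest) / len(nearest)

history = [
    ((1e4, 1, 2), 35.0), ((5e4, 2, 2), 120.0), ((1e5, 2, 4), 310.0),
    ((2e4, 1, 1), 42.0), ((8e4, 3, 3), 280.0),
]
print("predicted latency:", predict_latency(history, (6e4, 2, 3)), "ms")
```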

20.
GPUs are gaining fast adoption as high-performance computing architectures, mainly because of their impressive peak performance, yet most applications achieve only small fractions of it. While both programmers and architects have clear opinions about the causes of this performance gap, finding and quantifying the real problems remains a task for performance modeling tools. In this paper, we sketch the landscape of modern GPUs’ performance limiters and optimization opportunities, and examine in detail modeling attempts for GPU-based systems. We highlight the specific features of the relevant contributions in this field, along with the optimization and design spaces they explore. We further use typical kernel examples with various computation and memory access patterns to assess the efficacy and usability of a set of promising approaches. We conclude that the available GPU performance modeling solutions are very sensitive to application and platform changes, and require significant tuning and calibration effort when new analyses are required.
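As one example of a first-order performance-limiter check of the kind such tools build on (background knowledge, not a result from the surveyed papers), the roofline bound caps attainable throughput by the smaller of peak compute and bandwidth times arithmetic intensity; the hardware numbers below are illustrative.

```python
# Hedged background example (not from the surveyed papers): the classic roofline
# bound on attainable throughput, min(peak compute, bandwidth * arithmetic intensity),
# often used as a first-order GPU performance limiter check. Numbers are illustrative.
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity_flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity_flops_per_byte)

# A kernel doing ~0.25 FLOP per byte moved is bandwidth-bound on this hypothetical GPU:
print(roofline(peak_gflops=10000, bandwidth_gbs=900,
               arithmetic_intensity_flops_per_byte=0.25))   # -> 225 GFLOP/s
```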
