Found 20 similar documents; search time 62 ms.
8.
石颖 (Shi Ying), 《计算机与数字工程》 (Computer & Digital Engineering), 2014, (8): 1464-1467
Addressing the problem of resource allocation in data centers, this paper discusses data-center virtualization technology and the core requirements of resource allocation: guaranteed data-center performance, dynamic allocation of resources, fairness of allocation, and revenue maximization. Several resource-allocation algorithms are examined for each of these aspects; the paper closes with a summary and views on future research directions.
9.
Ever-rising energy costs have become an enormous burden on enterprises, and effectively reducing energy expenditure is now an urgent priority for data-center managers. Three developments are quietly changing the direction of the low-voltage (ELV) engineering field. The first is the Kyoto Protocol: when it finally emerged in 2005, on the surface it was merely an international …
10.
孟君 (Meng Jun), 《计算机与应用化学》 (Computers and Applied Chemistry), 2011, 28(8): 961-964
Against the background of the IT-infrastructure upgrade project under way at Shenhua Group, and after reviewing the evolution of data centers in large energy-enterprise groups, this paper proposes the positioning and functional requirements of a next-generation data center. Combining the business needs of energy-enterprise groups with current trends in information technology, it then analyzes the challenges faced in data-center construction and offers corresponding countermeasures in the context of Shenhua Group's ongoing informatization project.
11.
Eun Kyung Lee, Indraneel Kulkarni, Dario Pompili, Manish Parashar, The Journal of Supercomputing, 2012, 60(2): 165-195
The increasing demand for faster computing and high storage capacity has resulted in an increase in energy consumption and heat generation in datacenters. Because of the increase in heat generation, cooling requirements have become a critical concern, both in terms of growing operating costs and their environmental and societal impacts. Presently, thermal management techniques attempt to thermally profile and control datacenters' cooling equipment to increase its efficiency. In conventional thermal management techniques, cooling systems are triggered by the temperature crossing predefined thresholds. Such reactive approaches result in delayed response, as the temperature may already be too high, which can degrade hardware performance.
12.
With advancements in virtualization technology, datacenters often face the challenge of managing large numbers of virtual machine (VM) requests. Because of this volume, it has become practically impossible to search all possible VM placements for the solution that best optimizes given design objectives, so datacenter managers have resorted to heuristic optimization algorithms for VM placement. In this paper, we employ the cuckoo search optimization (CSO) algorithm to solve the VM placement problem for datacenters. First, we use CSO to minimize the number of physical machines used for placement. Second, we implement a multiobjective CSO algorithm to simultaneously optimize the power consumption and resource wastage of the datacenter. Simulation results show that both CSO algorithms outperform the reordered grouping genetic algorithm (RGGA), the grouping genetic algorithm (GGA), improved least-loaded (ILL), and improved first-fit decreasing (IFFD) methods of VM placement.
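The single-objective variant the abstract describes can be illustrated as a search over VM orderings decoded by first-fit. This is a minimal sketch, not the authors' implementation: the swap-based perturbation stands in for a true Lévy flight, and the parameter names (`nests`, `pa`) and single-resource CPU model are assumptions.

```python
import random

def first_fit(order, vm_cpu, cap):
    """Decode a VM ordering into a placement via first-fit;
    return the number of physical machines used."""
    hosts = []  # remaining CPU capacity of each open physical machine
    for vm in order:
        for i, free in enumerate(hosts):
            if vm_cpu[vm] <= free:
                hosts[i] -= vm_cpu[vm]
                break
        else:
            hosts.append(cap - vm_cpu[vm])  # open a new machine
    return len(hosts)

def cuckoo_search(vm_cpu, cap, nests=10, iters=200, pa=0.25, seed=0):
    """Minimize the number of machines used by searching over VM orderings."""
    rng = random.Random(seed)
    n = len(vm_cpu)
    cost = lambda o: first_fit(o, vm_cpu, cap)
    pop = [rng.sample(range(n), n) for _ in range(nests)]
    best = min(pop, key=cost)[:]
    for _ in range(iters):
        # Levy-flight stand-in: perturb a random nest with 1-3 swaps
        cand = pop[rng.randrange(nests)][:]
        for _ in range(rng.randint(1, 3)):
            i, j = rng.randrange(n), rng.randrange(n)
            cand[i], cand[j] = cand[j], cand[i]
        # replace a randomly chosen nest if the candidate is no worse
        j = rng.randrange(nests)
        if cost(cand) <= cost(pop[j]):
            pop[j] = cand
        # abandon the worst fraction pa of nests
        pop.sort(key=cost)
        for k in range(int(nests * (1 - pa)), nests):
            pop[k] = rng.sample(range(n), n)
        if cost(pop[0]) < cost(best):
            best = pop[0][:]
    return best, cost(best)
```

With four VMs of 0.5 CPU each and unit-capacity machines, any ordering decodes to the optimal two machines, so the search converges immediately; on heterogeneous workloads the ordering matters and the search earns its keep.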
13.
Although dense-interconnection datacenter networks (DCNs) such as FatTree provide multiple paths and high bisection bandwidth for each server pair, the widely used single-path Transmission Control Protocol (TCP) and equal-cost multipath (ECMP) mechanisms cannot achieve high resource utilization because of poor resource exploitation and allocation. In this paper, we present LESSOR, a performance-oriented multipath forwarding scheme that improves DCNs' resource utilization. By adopting an OpenFlow-based centralized control mechanism, LESSOR computes a near-optimal transmission path and bandwidth provision for each flow according to the global network view, while maintaining a nearly real-time view of the network through a performance-oriented flow-observing mechanism. Deployments and comprehensive simulations show that LESSOR efficiently improves network throughput, exceeding ECMP by 4.9%–38.3% under different loads. LESSOR also provides a 2%–27.7% throughput improvement over Hedera. In addition, LESSOR significantly decreases the average flow completion time.
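The ECMP weakness this abstract targets comes from static per-flow hashing: every packet of a flow follows one hash-selected path, so two heavy flows that hash alike share a link regardless of load. A minimal sketch (the hash function and 5-tuple layout are illustrative assumptions, not what any particular switch implements):

```python
import zlib

def ecmp_path(flow, num_paths):
    """Static ECMP: hash the flow 5-tuple once; every packet of the flow
    then takes the same hash-selected path. Path choice is load-oblivious,
    so heavy flows can collide on one link while others sit idle."""
    return zlib.crc32(repr(flow).encode()) % num_paths
```

A centralized scheme like the one described instead observes flow performance and reassigns paths dynamically, which is exactly what this per-flow static mapping cannot do.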
14.
Miray Kas, The Journal of Supercomputing, 2012, 62(1): 214-226
For economic reasons, the traditional philosophy in data centers was to scale out rather than to scale up. However, advances in CMP technology have made chip multiprocessors more prevalent, and they are expected to become more affordable and power-efficient in the coming years. The current trend toward more densely packaged systems, together with increasing demand for higher performance, pushes the market toward placing datacenters on highly powerful chips with many cores on a single platform. However, increasing the number of cores on a single chip raises important chip-level problems concerning the use of shared resources and QoS satisfaction. After briefly surveying the current datacenter perspective, this paper captures the state of the art in chip multiprocessors through a detailed discussion of the studies that pave the way toward datacenters on-chip. Finally, a number of open research issues are highlighted with the intention of inspiring new contributions and developments in the field of datacenters on-chip.
15.
Information and communication technology (ICT) has a profound impact on the environment because of its large CO2 emissions. In recent years, research on "green", low-power networking infrastructures has become important to service/network providers and equipment manufacturers alike. Cloud computing, an emerging technology, can increase the utilization and efficiency of hardware equipment. A cloud datacenter needs a job scheduler to arrange resources for executing jobs. In this paper, we propose a scheduling algorithm for cloud datacenters using a dynamic voltage and frequency scaling (DVFS) technique. Our scheduling algorithm efficiently increases resource utilization and hence decreases the energy consumed in executing jobs. Experimental results show that our scheme reduces energy consumption more than competing schemes do, without sacrificing job-execution performance. We thus provide a green, energy-efficient scheduling algorithm using the DVFS technique for cloud computing datacenters.
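The core DVFS idea is to run each job at the lowest frequency that still meets its deadline, since dynamic power grows roughly with the cube of frequency. A minimal sketch; the cubic power model and function names are illustrative assumptions, not the paper's scheduler:

```python
def pick_frequency(cycles, deadline, freqs):
    """Return the lowest available frequency (Hz) that finishes a job of
    `cycles` CPU cycles within `deadline` seconds; fall back to the
    maximum frequency if the deadline is infeasible."""
    for f in sorted(freqs):
        if cycles / f <= deadline:
            return f
    return max(freqs)

def dynamic_energy(cycles, f, k=1.0):
    """Dynamic energy under the cubic model: P ~ k*f**3 and t = cycles/f,
    so E ~ k * f**2 * cycles; running slower for longer still saves energy."""
    return k * f ** 2 * cycles
```

For a 1-gigacycle job with a 1 s deadline and levels {0.8, 1.2, 2.0} GHz, the 0.8 GHz level misses the deadline, so 1.2 GHz is chosen, and under the cubic model it uses roughly a third of the energy that 2.0 GHz would.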
16.
Hang Liu, Eun Kyung Lee, Dario Pompili, Xiangwei Kong, The Journal of Supercomputing, 2013, 64(2): 383-408
Thermal cameras provide fine-grained thermal information that enables monitoring and autonomic thermal management in large datacenters. A real-time thermal monitoring network employing thermal cameras is proposed to cooperatively localize hotspots and extract their characteristics (i.e., temperature, size, and shape). These characteristics are used to classify the causes of hotspots and to make energy-efficient thermal management decisions such as job migration. Specifically, a sculpturing algorithm for extracting and reconstructing the shape characteristics of hotspots is proposed to minimize the network overhead. Experimental results show the validity of all the algorithms proposed in this paper.
17.
The optimal selection of a datacenter is one of the most important challenges in structuring a network for wide distribution of resources in a geographically distributed cloud environment, owing to the variety of datacenters with different quality-of-service (QoS) attributes. Both the user's requests and the conditions of the service-level agreements (SLAs) should be considered when selecting datacenters. Given the number of datacenters and the range of QoS attributes, selecting the optimal datacenter is an NP-hard problem, so a method is required that can suggest the best datacenter based on the user's request and the SLAs. Various attributes are considered in an SLA; the current research focuses on four important ones: cost, response time, availability, and reliability. In a geo-distributed cloud environment, the nearest suitable datacenter should be suggested after receiving the user's request, so that SLA violations can be minimized. In the approach proposed here, datacenters are clustered according to these four attributes so that the user can access them quickly based on the specific need. Cost and response time are taken as negative criteria, while availability and reliability are taken as positive, and the multi-objective NSGA-II algorithm selects the optimal datacenter according to these positive and negative attributes. The proposed method, called NSGAII_Cluster, is implemented alongside the Random, Greedy, and MOPSO algorithms, and the extent of SLA violation for each attribute is compared across the four methods. Simulation results indicate that, compared to Random, Greedy, and MOPSO, the proposed approach yields fewer SLA violations in terms of the cost, response time, availability, and reliability of the selected datacenters.
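The positive/negative attribute handling reduces to Pareto dominance over the 4-tuple (cost, response time, availability, reliability), with the first two minimized and the last two maximized. A minimal sketch of the dominance test and front extraction that NSGA-II builds on; representing datacenters as plain 4-tuples is a convenience assumption, not the paper's data model:

```python
def dominates(a, b):
    """Pareto dominance for datacenters described as 4-tuples
    (cost, response_time, availability, reliability): the first two
    objectives are minimized, the last two maximized. `a` dominates `b`
    if it is no worse in every objective and strictly better in one."""
    no_worse = (a[0] <= b[0] and a[1] <= b[1] and
                a[2] >= b[2] and a[3] >= b[3])
    strictly_better = (a[0] < b[0] or a[1] < b[1] or
                       a[2] > b[2] or a[3] > b[3])
    return no_worse and strictly_better

def pareto_front(datacenters):
    """Keep only non-dominated datacenters: the candidates NSGA-II's
    first front would contain."""
    return [d for d in datacenters
            if not any(dominates(o, d) for o in datacenters if o != d)]
```

A datacenter that is cheaper, faster, more available, and more reliable than another dominates it and ejects it from the front; incomparable trade-offs (e.g. cheaper but slower) both survive, which is why a multi-objective method is needed at all.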
18.
Software-defined networking (SDN) introduces a new approach to networking: by offering programmability and centralization, it can dynamically control and configure networks. In traditional networks the data plane performs the whole forwarding process, but SDN decouples the data plane from the control plane by using programmable software controllers to decide how different flows are forwarded. With the control plane implemented as an independent software layer, network management becomes much easier, and new policies can be applied by changing a few lines of code. Because resource allocation and meeting the required service-level agreements are critical in large-scale networks such as cloud datacenters, SDN can be very useful there. In such networks, a single logically centralized controller cannot handle all the network traffic and would become a bottleneck; therefore, multiple distributed controllers should be allocated across different regions of the network. Since the request rate of switches varies over time, dynamically allocating controllers uses network resources efficiently and can also reduce power consumption. In this paper, we propose a framework for provisioning software controllers in cloud datacenters using metaheuristic algorithms. These algorithms can be less accurate than other kinds, but their main characteristics (simplicity, flexibility, derivative-free operation, and local-optimum avoidance) make them good candidates for solving the controller provisioning and controller placement problems. Our framework improves computation time and reaches better results than other allocation techniques, though it is less accurate in some scenarios. We therefore believe the metaheuristic approach can be very useful in developing new SDN technologies in the future.
19.
Science China: Information Sciences (English edition), 2012, (7): 1493-1508
Modular datacenters (MDCs) use shipping containers, encapsulating thousands of servers, as large pluggable building blocks for mega datacenters. The MDC's service-free model places stricter demands on the fault tolerance of the modular datacenter network (MDCN). Based on the scale-out principle, in this paper we propose SCautz, a novel hybrid intra-container network for MDCs. SCautz comprises a base Kautz topology, created by interconnecting servers, and a small number of COTS (commercial off-the-shelf) switches. Each switch connects a specific number of servers to form a cluster; the clusters, acting as logical nodes, form multiple higher-level logical Kautz structures. SCautz's hybrid structure has several advantages. First, it supports multiple running modes for the MDC, while its full structure increases network capacity twofold. Second, it retains the throughput for processing one-to-x traffic in the presence of failures. Finally, its network performance degrades far more gracefully than its computation and storage capacity do. Results from theoretical analysis and simulations show that SCautz is highly viable for intra-container networks.
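For readers unfamiliar with the base topology: a Kautz graph K(d, n) has (d+1)·d^n vertices, each a string of n+1 symbols over a (d+1)-symbol alphabet with no two consecutive symbols equal, and each vertex has exactly d out-neighbors formed by shifting left and appending a differing symbol. A small sketch; representing nodes as tuples is a convenience, not SCautz's addressing scheme:

```python
from itertools import product

def kautz_nodes(d, n):
    """Vertices of the Kautz graph K(d, n): length-(n+1) strings over a
    (d+1)-symbol alphabet with no two consecutive symbols equal.
    There are (d + 1) * d**n of them."""
    syms = range(d + 1)
    return [s for s in product(syms, repeat=n + 1)
            if all(a != b for a, b in zip(s, s[1:]))]

def kautz_neighbors(node, d):
    """Out-neighbors: drop the first symbol and append any symbol that
    differs from the new last symbol (exactly d choices)."""
    return [node[1:] + (x,) for x in range(d + 1) if x != node[-1]]
```

The constant out-degree d and short diameter are what make the topology attractive for server-centric container networks: each server needs only a fixed, small number of ports regardless of scale.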
20.
A. G. Hessami, Neural Computing & Applications, 2007, 16(1): 21-25
One of the key success factors behind the human species is the ability to think, plan and pursue goals. Apart from wisdom and tacit knowledge, experience and awareness of the physical laws governing the universe are themselves the ingredients of success in a particular undertaking. However, unfamiliar circumstances pose a challenge to the extrapolation of the past experience to the future expectations. This is particularly true within the context of new discoveries or novel application of emerging and complex/adaptive technologies. Amongst many facets of performance pertaining to a product, process or undertaking, safety is generally a more demanding aspect to forecast, manage and deliver. The statutory framework poses further constraints on performance where the potential for harm to people or the environment arises from a product or system. The modernisation programme for the West Coast Main Line (WCML) railways in the UK involves adoption and implementation of a few advanced technologies to assist with better train protection and network utilisation. The extent, scope and inter-related nature of these advanced technologies within WCML modernisation programme necessitate a systematic framework commensurate with the scale of the problem and undertakings. To this end, an advanced risk forecasting model was devised at Railtrack to support the safety assessment and assurance of the programme within its three key evolutionary phases.