9 search results
1.
2.
Management of computing infrastructure in data centers is an important and challenging problem that needs to: (i) ensure availability of services conforming to the Service Level Agreements (SLAs); and (ii) reduce the Power Usage Effectiveness (PUE), i.e. the ratio of the total power, up to half of which is attributed to data center cooling, to the computing power used to service the workloads. Cooling energy consumption can be reduced by allowing higher-than-usual thermostat set temperatures while keeping the ambient temperature in the data center room within the manufacturer-specified server redline temperatures required for reliable operation. This paper proposes: (i) a Coordinated Job, Power, and Cooling Management (JPCM) policy, which performs (a) job management so as to allow an increase in the thermostat setting of the cooling unit while meeting the SLA requirements, (b) power management to reduce the produced thermal load, and (c) cooling management to dynamically adjust the thermostat setting; and (ii) a Model-driven coordinated Management Architecture (MMA), which uses a state-based model to dynamically decide the correct management policy to handle events, such as a new workload arrival or the failure of a cooling unit, that can trigger an increase in the ambient temperature. Each event is associated with a time window, referred to as the window-of-opportunity, after which the temperature at the inlet of one or more servers can exceed the redline temperature if proper management policies are not enforced. This window-of-opportunity decreases monotonically as the incoming workload increases. The selection among the management policies depends on their potential energy benefits and on whether their actuation delays fit within the window-of-opportunity. Simulations based on actual job traces from the ASU HPC data center show that JPCM can achieve up to 18% energy savings over separate power or job management policies. However, the long delay to reach a stable ambient temperature (in the case of cooling management through dynamic thermostat setting) can violate the server redline temperatures. A management decision chart is developed as part of MMA to autonomically employ the management policy with the maximum energy savings without violating the window-of-opportunity, and hence the redline temperatures. Further, a prototype of the JPCM is developed by configuring the widely used Moab cluster manager to dynamically change server priorities for job assignment.
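The MMA decision step can be illustrated with a minimal sketch: among the candidate management policies, pick the one with the highest estimated energy savings whose actuation delay still fits inside the event's window-of-opportunity. The policy names, delays, and savings values below are hypothetical placeholders, not figures from the paper.

```python
# Minimal sketch of an MMA-style decision step: choose the management policy
# with the highest estimated energy savings whose actuation delay fits inside
# the event's window-of-opportunity. All numbers below are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    name: str
    actuation_delay_s: float   # time until the policy takes effect
    energy_savings_est: float  # estimated savings (hypothetical units)

def select_policy(policies: list[Policy], window_of_opportunity_s: float) -> Optional[Policy]:
    """Return the best feasible policy, or None if no policy fits the window."""
    feasible = [p for p in policies if p.actuation_delay_s <= window_of_opportunity_s]
    return max(feasible, key=lambda p: p.energy_savings_est, default=None)

if __name__ == "__main__":
    candidates = [
        Policy("job-management", actuation_delay_s=30.0, energy_savings_est=5.0),
        Policy("power-management", actuation_delay_s=10.0, energy_savings_est=3.0),
        Policy("cooling-thermostat", actuation_delay_s=600.0, energy_savings_est=8.0),
    ]
    # A new workload arrival shrinks the window; the slow cooling actuation is ruled out.
    best = select_policy(candidates, window_of_opportunity_s=120.0)
    print(best.name if best else "no feasible policy")
```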
3.
High Performance Computing data centers have been growing rapidly, both in number and in size. Thermal management of data centers can address the dominant problems associated with cooling, such as the recirculation of hot air from the equipment outlets to their inlets and the appearance of hot spots. In this paper, we investigate assigning the incoming tasks to the machines of a data center in a way that affects heat recirculation and makes cooling more efficient. Using a low-complexity linear heat recirculation model, we formulate the problem of minimizing the peak inlet temperature within a data center through task assignment, which consequently leads to minimal cooling power consumption. We also provide two methods to solve the formulation, one based on a genetic algorithm and the other on sequential quadratic programming. We show through formalization that minimizing the peak inlet temperature allows for the lowest cooling power needs. Results from a simulated, small-scale data center show that solving the formulation leads to an inlet temperature distribution that is 2 °C to 5 °C lower than that of other approaches and achieves about 20%–30% cooling energy savings at moderate data center utilization rates. Moreover, our algorithms consistently outperform MinHR, a recirculation-reducing placement algorithm from the literature.
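A minimal sketch of the idea behind a linear heat-recirculation formulation: each server's inlet temperature is the cooling supply temperature plus a weighted sum of all servers' power draws, and the objective is the assignment-dependent peak of those inlet temperatures. The matrix coefficients and power values below are made up for illustration and are not taken from the paper.

```python
# Sketch of a linear heat-recirculation model: T_inlet = T_supply + D @ p,
# where D captures how each server's dissipated power raises every server's
# inlet temperature, and p depends on the task assignment. The matrix and
# power values below are illustrative assumptions only.
import numpy as np

D = np.array([              # heat-recirculation coefficients (degC per kW), hypothetical
    [0.10, 0.04, 0.01],
    [0.04, 0.12, 0.03],
    [0.01, 0.03, 0.08],
])
T_supply = 18.0             # cooling-unit supply air temperature (degC)

def peak_inlet_temp(power_kw: np.ndarray) -> float:
    """Peak inlet temperature for a given per-server power vector."""
    return float(np.max(T_supply + D @ power_kw))

# Two candidate assignments of the same total load (6 kW) across three servers:
balanced = np.array([2.0, 2.0, 2.0])
skewed   = np.array([0.5, 4.5, 1.0])
print(peak_inlet_temp(balanced), peak_inlet_temp(skewed))
# The assignment with the lower peak inlet temperature lets the cooling unit
# run warmer (i.e. needs less cooling power) while staying under the redline.
```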
4.
Dynamic networks, e.g. Mobile Ad hoc NETworks (MANETs), call for self-healing routing protocols that tolerate the topological changes imposed by node mobility. Moreover, emerging time-critical MANET applications, such as disaster response and rescue and battlefield operations, require support for real-time, reliable data streaming while maintaining energy efficiency. However, most energy-efficient routing protocols rely on configuration parameters that need to be estimated and specified before the deployment phase. This paper proposes a self-managing, energy-efficient multicast routing suite based on the self-stabilization paradigm. The suite uses: (i) WECM, a Waste Energy Cost Metric designed for energy-efficient route selection; (ii) SS-SPST-E, a Self-Stabilizing Shortest-Path Spanning Tree protocol for Energy efficiency that uses WECM to maintain an energy-efficient, self-healing routing structure; (iii) SS-SPST-Efc, an enhancement of SS-SPST-E with fault containment to decrease stabilization latency; (iv) AMO, an Analytical Model for Optimization framework to reduce the energy overhead of the route maintenance mechanism; and (v) self-configuration mechanisms that observe, estimate and disseminate the optimization parameters. WECM's innovation is that it accounts for the energy wasted in overhearing. The AMO framework considers the link state change rate, the application data traffic intensity, the application packet delivery requirements, and the stabilization latency. Numerical evaluations show that SS-SPST-E slightly increases energy consumption compared with non-adaptive energy-efficient protocols such as EWMA because of its mechanism for handling mobility. Simulation results show that SS-SPST-Efc achieves the best balance in the energy-reliability trade-off while conforming to the end-to-end packet delivery requirement with an accuracy between 80% and 100%. The energy-reliability balance, measured as the packet delivery ratio (PDR) per millijoule of energy expended, is at least 24% and 27% higher for SS-SPST-E and SS-SPST-Efc, respectively, compared to the MAODV and ODMRP protocols.
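The idea behind an overhearing-aware energy cost metric can be sketched as follows; the per-bit energy values and the simple additive form are illustrative assumptions, not the paper's exact WECM formula. The cost of using a link counts not only the transmit and receive energy but also the energy wasted by neighbors that overhear the transmission.

```python
# Illustrative sketch of an overhearing-aware link cost in the spirit of WECM:
# the cost charged to a link includes transmit energy, receive energy, and the
# energy wasted by every non-recipient neighbor that overhears the packet.
# The per-bit energy values and the additive form are assumptions for illustration.
E_TX_PER_BIT = 1.0e-6        # J/bit spent by the transmitter (hypothetical)
E_RX_PER_BIT = 0.5e-6        # J/bit spent by the intended receiver (hypothetical)
E_OVERHEAR_PER_BIT = 0.4e-6  # J/bit wasted by each overhearing neighbor (hypothetical)

def link_cost(packet_bits: int, num_overhearing_neighbors: int) -> float:
    """Energy cost of sending one packet over a link, counting overhearing waste."""
    useful = packet_bits * (E_TX_PER_BIT + E_RX_PER_BIT)
    wasted = packet_bits * E_OVERHEAR_PER_BIT * num_overhearing_neighbors
    return useful + wasted

# A dense neighborhood makes the same link much more expensive:
print(link_cost(8_000, num_overhearing_neighbors=2))
print(link_cost(8_000, num_overhearing_neighbors=10))
# A shortest-path spanning tree built on such weights is steered away from
# links whose transmissions waste energy at many overhearing nodes.
```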
5.
Job scheduling in data centers can be considered from a cyber–physical point of view, as it affects both the data center's computing performance (the cyber aspect) and its energy efficiency (the physical aspect). Driven by the growing need to green contemporary data centers, this paper uses recent technological advances in data center virtualization and proposes cyber–physical, spatio-temporal (i.e. start time and servers assigned), thermal-aware job scheduling algorithms that minimize the energy consumption of the data center under performance constraints (i.e. deadlines). Savings are possible by temporally “spreading” the workload, assigning it to energy-efficient computing equipment, and further reducing heat recirculation and therefore the load on the cooling systems. The paper provides three categories of thermal-aware, energy-saving scheduling techniques: (a) FCFS-Backfill-XInt and FCFS-Backfill-LRH, thermal-aware job placement enhancements to the popular first-come first-serve with back-filling (FCFS-backfill) scheduling policy; (b) EDF-LRH, an online earliest deadline first scheduling algorithm with thermal-aware placement; and (c) an offline genetic algorithm for SCheduling to minimize thermal cross-INTerference (SCINT), which is suited for batch scheduling of backlogs. Simulation results, based on real job logs from the ASU Fulton HPC data center, show that the thermal-aware enhancements to FCFS-backfill achieve up to 25% savings compared to FCFS-backfill with first-fit placement, depending on the intensity of the incoming workload, while SCINT achieves up to 60% savings. The performance of EDF-LRH nears that of the offline SCINT for low loads and degrades to that of FCFS-backfill for high loads. However, EDF-LRH requires only milliseconds to run, which is significantly faster than SCINT, the latter requiring up to hours of runtime depending on the number and size of the submitted jobs. Similarly, FCFS-Backfill-LRH is much faster than FCFS-Backfill-XInt, but it achieves only part of FCFS-Backfill-XInt's savings.
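A minimal sketch of an earliest-deadline-first dispatch loop with thermal-aware placement, in the spirit of EDF-LRH: jobs are taken in deadline order, and each job goes to the free server that contributes least to heat recirculation. The job fields and per-server recirculation scores are illustrative assumptions, not the paper's model.

```python
# Sketch of earliest-deadline-first dispatch with thermal-aware placement,
# in the spirit of EDF-LRH: take jobs in deadline order and place each one on
# the free server with the lowest heat-recirculation contribution. The job
# fields and per-server recirculation scores are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float  # seconds from now

@dataclass
class Server:
    name: str
    recirculation_score: float  # lower = contributes less recirculated heat
    busy: bool = False

def edf_thermal_schedule(jobs: list[Job], servers: list[Server]) -> list[tuple[str, str]]:
    placements = []
    for job in sorted(jobs, key=lambda j: j.deadline):           # EDF order
        free = [s for s in servers if not s.busy]
        if not free:
            break                                                # wait until a server frees up
        target = min(free, key=lambda s: s.recirculation_score)  # LRH-style placement
        target.busy = True
        placements.append((job.name, target.name))
    return placements

jobs = [Job("a", 300), Job("b", 120), Job("c", 600)]
servers = [Server("s1", 0.9), Server("s2", 0.2), Server("s3", 0.5)]
print(edf_thermal_schedule(jobs, servers))
```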
6.
The problem of maximizing multicast lifetime (MML) in wireless ad hoc networks is reexamined under a recently proposed transmitter-receiver power tradeoff (TRPT) model, in which the energy consumed by a node to reliably receive a bit is inversely proportional to the power level at which the bit is transmitted. Under the TRPT model, MML was conjectured to be NP-hard. We prove this conjecture under the assumption of bounded, discrete power levels.
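The TRPT model and the lifetime objective can be illustrated with a small sketch; the proportionality constant, power levels, and battery capacities below are hypothetical. Under TRPT, a node's per-bit receive energy is inversely proportional to the power at which the bit was transmitted, and the multicast lifetime is the minimum lifetime over all nodes in the tree.

```python
# Illustration of the transmitter-receiver power tradeoff (TRPT) model and the
# multicast-lifetime objective: receive energy per bit is inversely proportional
# to the transmit power level, and the tree's lifetime is the minimum node
# lifetime. The constant K, power levels, and battery capacities are hypothetical.
K = 1.0e-3  # proportionality constant: e_rx_per_bit = K / p_tx  (hypothetical)

def node_lifetime(battery_j: float, tx_power_w: float,
                  rx_from_power_w: list[float], bit_rate: float) -> float:
    """Lifetime (s) of a node transmitting at tx_power_w and receiving bits
    that were sent to it at the powers listed in rx_from_power_w."""
    rx_energy_per_bit = sum(K / p for p in rx_from_power_w)  # TRPT model
    drain_w = tx_power_w + bit_rate * rx_energy_per_bit
    return battery_j / drain_w

# Multicast lifetime = the earliest battery depletion among the tree's nodes.
lifetimes = [
    node_lifetime(100.0, tx_power_w=0.5, rx_from_power_w=[0.5], bit_rate=1e4),  # relay node
    node_lifetime(100.0, tx_power_w=0.0, rx_from_power_w=[0.2], bit_rate=1e4),  # leaf node
]
print(min(lifetimes))
# Raising a parent's transmit power cheapens reception at its children but
# drains the parent faster, which is the tension that makes maximizing this
# minimum lifetime hard.
```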
7.
Many time-critical applications of Mobile Ad hoc NETworks (MANETs), such as military applications and disaster response, call for proactive link and route maintenance to ensure low latency for reliable data delivery. The goal of this paper is to minimize the energy overhead due to the high control traffic caused by the periodic route and link maintenance operations of proactive routing protocols for MANETs. This paper: (i) categorizes the proactive protocols based on the maintenance operations they perform; (ii) derives analytical estimates of the optimum route and link update periods for the different protocol classes by considering (a) the data traffic intensity, (b) the link dynamics, (c) the target reliability, measured in terms of the Packet Delivery Ratio (PDR), and (d) the network size; and (iii) proposes a network-layer dynamic Optimization of Periodic Timers (OPT) method, based on the analytical estimates, to locally vary the update periods at the distributed nodes. Simulation results show that DSDV-Opt, a variation of the DSDV protocol that uses OPT, (i) achieves the target PDR with 98.7% accuracy while minimizing the overhead energy, (ii) improves the protocol's scalability, and (iii) reduces the control traffic for low data traffic intensity.
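The flavor of OPT-style timer tuning can be sketched as follows; the predicted-PDR model below is a hypothetical placeholder, not the paper's analytical estimate. A node picks the longest update period whose predicted packet delivery ratio still meets the target, so that control traffic, and hence overhead energy, is minimized.

```python
# Sketch of OPT-style tuning of a periodic update timer: pick the longest update
# period whose predicted packet delivery ratio (PDR) still meets the target.
# The predicted-PDR model below is a hypothetical placeholder; it only captures
# the qualitative dependence on how fast links change relative to how often
# routes are refreshed.
import math

def predicted_pdr(update_period_s: float, link_change_rate_hz: float) -> float:
    """Toy model: delivery succeeds if no link on the route changed since the
    last update, assuming Poisson link changes (illustrative assumption)."""
    return math.exp(-link_change_rate_hz * update_period_s)

def choose_update_period(candidates_s: list[float], link_change_rate_hz: float,
                         target_pdr: float) -> float:
    """Longest candidate period whose predicted PDR meets the target."""
    feasible = [t for t in sorted(candidates_s)
                if predicted_pdr(t, link_change_rate_hz) >= target_pdr]
    return feasible[-1] if feasible else min(candidates_s)

# Faster link dynamics force shorter (more energy-hungry) update periods:
print(choose_update_period([1, 2, 5, 10, 30], link_change_rate_hz=0.01, target_pdr=0.9))
print(choose_update_period([1, 2, 5, 10, 30], link_change_rate_hz=0.10, target_pdr=0.9))
```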
8.
In this paper, we propose an extension to the personal communication services (PCS) location management protocol that uses dynamically overlapped registration areas. The scheme is based on monitoring the aggregate mobility and call pattern of the users during each reconfiguration period and adapting to these patterns by either expanding or shrinking the registration areas at the end of each reconfiguration period. We analytically characterize the trade-off resulting from the inclusion or exclusion of a cell in a registration area in terms of the expected change in aggregate database access cost and signaling overhead. This characterization is used to guide the registration area adaptation so that the signaling and database access load on any given location register (LR) does not exceed a specified limit. Our simulation results show that it is useful to dynamically adapt the registration areas to the aggregate mobility and call patterns of the mobile units when the mobility pattern exhibits locality. For such mobility and call patterns, the proposed scheme can greatly reduce the average signaling and database access load on the LRs. Further, the cost of adapting the registration areas is shown to be low in terms of memory and communication requirements.
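The per-cell include/exclude decision can be sketched as follows; the cost model, rates, and load cap below are illustrative assumptions, not the paper's analytical characterization. A cell is added to a registration area when the expected reduction in registration signaling outweighs the expected increase in paging cost, subject to a load cap on the location register.

```python
# Sketch of the include/exclude tradeoff for growing a registration area (RA):
# adding a boundary cell saves the registrations of users who cross into it,
# but every incoming call now also pages that extra cell. The rates and unit
# costs are illustrative assumptions, not the paper's analytical model.
def net_benefit_of_including_cell(crossings_per_hr: float, calls_per_hr: float,
                                  reg_cost: float, page_cost: float) -> float:
    """Expected hourly cost change (positive = worth including the cell)."""
    saved_registration_cost = crossings_per_hr * reg_cost  # crossings no longer trigger registration
    added_paging_cost = calls_per_hr * page_cost           # one extra cell paged per incoming call
    return saved_registration_cost - added_paging_cost

def should_include(crossings_per_hr: float, calls_per_hr: float, reg_cost: float,
                   page_cost: float, lr_load: float, lr_load_limit: float,
                   cell_load: float) -> bool:
    """Include the cell only if it pays off and the LR stays under its load cap."""
    pays_off = net_benefit_of_including_cell(crossings_per_hr, calls_per_hr,
                                             reg_cost, page_cost) > 0
    within_limit = lr_load + cell_load <= lr_load_limit
    return pays_off and within_limit

# High-mobility, low-call-rate users favor larger (overlapping) registration areas:
print(should_include(crossings_per_hr=40, calls_per_hr=5, reg_cost=2.0,
                     page_cost=1.0, lr_load=90, lr_load_limit=100, cell_load=6))
```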
9.