Similar Documents
20 similar documents found.
1.
To meet the execution requirements of delay-sensitive applications and optimize the energy consumption of mobile devices, a task offloading strategy incorporating a caching mechanism is proposed for the mobile edge computing environment. Unlike approaches that focus only on computation offloading decisions, this strategy caches completed, repeatedly requested tasks and their associated data on the edge cloud, thereby reducing task offloading latency. The joint task caching and offloading decision problem on an edge cloud with limited computing and storage capacity is decomposed into two sub-problems. The task offloading sub-problem is shown to be convertible into a convex optimization problem over the decision variables, while the task caching sub-problem can be converted into a 0-1 integer programming problem. An interior-point method and a branch-and-bound method are designed to solve the two sub-problems respectively, yielding an offloading decision with optimal energy consumption under deadline constraints. Simulation examples demonstrate that the strategy achieves better energy-efficiency optimization in dynamic, heterogeneous task execution environments.
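As a hedged illustration of the branch-and-bound treatment of the 0-1 caching sub-problem, the sketch below solves a knapsack-style selection of tasks to cache under an edge-storage budget. The value/size model, all names, and all numbers are assumptions for illustration, not the paper's exact formulation:

```python
def branch_and_bound_cache(values, sizes, capacity):
    """Choose which tasks to cache (0-1 knapsack) by branch and bound.

    values[i]: benefit of caching task i (hypothetical units, e.g. saved delay)
    sizes[i]:  storage the cached task occupies on the edge cloud
    Returns (best_value, chosen index set).
    """
    n = len(values)
    # Sort items by value density; needed for the fractional upper bound.
    order = sorted(range(n), key=lambda i: values[i] / sizes[i], reverse=True)

    def upper_bound(idx, value, room):
        # Fractional-knapsack relaxation over the remaining items.
        for i in order[idx:]:
            if sizes[i] <= room:
                room -= sizes[i]
                value += values[i]
            else:
                return value + values[i] * room / sizes[i]
        return value

    best_value, best_set = 0, set()
    # Depth-first search over include/exclude decisions.
    stack = [(0, 0, capacity, set())]
    while stack:
        idx, value, room, chosen = stack.pop()
        if value > best_value:
            best_value, best_set = value, chosen
        if idx == n or upper_bound(idx, value, room) <= best_value:
            continue  # prune: the relaxation cannot beat the incumbent
        i = order[idx]
        stack.append((idx + 1, value, room, chosen))  # exclude task i
        if sizes[i] <= room:                          # include task i
            stack.append((idx + 1, value + values[i],
                          room - sizes[i], chosen | {i}))
    return best_value, best_set
```

With three candidate tasks of values 60/100/120 and sizes 10/20/30 under capacity 50, the search prunes the greedy-density branch and returns the optimal pair {1, 2}.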

2.
A crossbar scheduling mechanism for bufferless networks-on-chip is proposed. "Bufferless" means that the input-port buffers of routing nodes and the corresponding control logic are removed to reduce implementation overhead. The scheduling mechanism places a self-loopback channel at each input port and, building on deflection routing, adopts a priority policy based on the number of deflections and self-loopbacks, improving the performance of the bufferless network-on-chip by reducing the number of deflections. Experiments show that, compared with the basic bufferless mechanism, this scheduling mechanism improves network performance without increasing hardware overhead or energy consumption; compared with a typical buffered mechanism, it performs better under low load and slightly worse under high load, while reducing hardware overhead and energy consumption by 58.9% and 40.2% respectively and increasing operating frequency by 74.6%.

3.
Reducing network energy consumption is a key problem that must be solved to build green networks. The rate adaptivity of network devices' transmission service provides an effective way to optimize network energy consumption and improve energy efficiency. This paper proposes a global, distributed, energy-optimized routing strategy based on rate adaptation. From the perspective of global network routing, and according to the service characteristics of the network system, the network components that provide transmission service for data are abstracted as a processing domain. To derive the service rate and the average number of working-state transitions of a rate-adaptive processing domain, the domain's service process is treated as a service system with variable service rate. With the objective of minimizing total network energy consumption, subject to routing and performance constraints, a rate-adaptation-based energy-optimized routing model is established and solved with an improved ant colony algorithm. In simulation experiments, the proposed distributed heuristic for energy-optimized routing is compared with the OSPF and GreenOSPF energy-saving routing algorithms from the related literature in terms of energy consumption and delay. Results across multiple experimental settings show that the proposed strategy matches the rate-adaptation mechanism more effectively and achieves better energy savings, thereby optimizing and reducing energy consumption.

4.
Existing research on the virtual machine placement problem mainly aims at improving server resource utilization and energy efficiency. Considering the impact of VM placement on the energy consumption of both servers and network devices, and building on definitions of physical servers, VM resources and states, VM mapping, and the network communication matrix, this paper models the VM placement problem with joint energy and network optimization. The problem is abstracted as a bin-packing problem under multi-resource constraints combined with a quadratic assignment problem (QAP), and a VM placement algorithm, CSNEO, combining ant colony optimization (ACO) with the 2-exchange local search algorithm, is designed to solve it. Comparative experiments against four algorithms, including MDBP-ACO and vector-VM, show that CSNEO achieves higher VM placement efficiency while satisfying multi-dimensional resource constraints and, compared with placement algorithms that consider only network optimization, achieves better energy efficiency while also optimizing the network.

5.
许慧青, 王高才, 闵仁江. 《计算机科学》, 2017, 44(8): 76-81, 106
Most current research on caching decision strategies in Content-Centric Networks (CCN) does not jointly consider request hotspots, network energy consumption, content popularity, and node cooperation. This paper therefore proposes a cooperative caching strategy based on content popularity to optimize CCN energy consumption. The strategy treats all content-router nodes in one autonomous region of a CCN as a cooperative caching group and divides each node's cache capacity into two parts: one for caching content cooperatively with other nodes, and one for independently caching the locally most popular content. This increases the diversity of content replicas within the group, reduces repeated content transmission in the network, and thus optimizes network energy consumption. An energy optimization model is established, and an improved genetic algorithm is used to find the optimal solution to the group's energy optimization problem. Experimental results show that, compared with caching decision strategies from the related literature, the proposed strategy effectively reduces CCN energy consumption and improves scalability, providing guidance for CCN evolution and deployment.

6.
The energy consumption of cloud data centers has become a hot topic in industry and academia. Most existing work tries to reduce data center energy consumption from a technical perspective, or seeks an optimal trade-off between energy and performance. Since cloud computing is a commercial computing model, few existing results consider the influence of cloud pricing strategies on energy management mechanisms. This paper proposes an energy-cost optimization scheme for data centers based on dynamic pricing. A unified model of service price and energy cost is established; by studying the relationship between the two, service price and energy cost are jointly optimized so that data center revenue is maximized. Given the large scale and heavy workloads of data centers, a large-scale queueing system under a heavy-traffic approximation is used for modeling. Based on differences in service demand and electricity price among data centers, a load-routing mechanism across multiple data centers is designed to cut overall energy cost. For a single data center, a dual-threshold policy is defined to dynamically adjust server states (On/Off/Idle, etc.), further optimizing energy cost. Experimental results show that the proposed scheme optimizes data center energy cost well while satisfying users' QoS requirements and maximizing data center revenue.

7.
Rising energy costs and increasingly prominent environmental problems pose severe challenges for data centers, making the introduction of economical and environmentally friendly renewable energy urgent. However, the intermittency, instability, and abruptness of renewable energy prevent data centers from adapting to it effectively. Data centers have proposed solutions such as energy management strategies and load scheduling algorithms, but most existing results target energy optimization on the computing side and do not apply to storage. This paper therefore proposes an energy optimization scheme for renewable-energy-driven storage systems, using the characteristics of different storage media and an online-offline workload partitioning model to match workload energy demand with renewable energy supply. To guarantee both storage performance and energy efficiency, dual-drive and virtualized consolidation techniques are adopted for fine-grained energy control; an offline workload scheduling algorithm is also designed and implemented, further improving renewable energy utilization. Experimental results show that the scheme raises renewable energy utilization to 95% while keeping storage performance degradation below 9.8%.

8.
Multi-access edge computing (MEC) pushes computing and storage resources to the network edge, greatly improving the computing capability and responsiveness of Internet of Things (IoT) systems. However, MEC often faces growing computational demand under energy constraints, so efficient computation offloading and energy optimization mechanisms are an important research area in MEC. To maximize energy efficiency while guaranteeing computational efficiency, a two-tier edge-node (EN) relay network model is proposed, along with an optimal energy consumption algorithm (OECA) that jointly optimizes computing and channel resources. Energy efficiency in MEC is modeled as a 0-1 knapsack problem; aiming to minimize total system energy consumption, the system adaptively selects computing modes and allocates wireless channel resources. The algorithm is validated by simulation in Python. Results show that, compared with a directed-acyclic-graph-based offloading algorithm (DAGA), OECA increases network capacity by 18.3% and reduces energy consumption by 13.1%.
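The adaptive choice between local execution and offloading can be illustrated with a common textbook MEC energy model (local computation energy roughly k·f²·C, radio energy roughly p·b/r). All coefficients, names, and the deadline below are hypothetical, and this sketch is not OECA itself:

```python
def choose_mode(cycles, bits, f_local, rate, k=1e-26, p_tx=0.5, deadline=1.0):
    """Pick local execution or offloading for one task by comparing energy.

    cycles:   CPU cycles the task needs
    bits:     input data to upload if offloaded
    f_local:  local CPU frequency (Hz)
    rate:     uplink rate (bit/s)
    k, p_tx:  hypothetical chip energy coefficient and transmit power (W)
    Returns ("local" | "offload", device energy in joules), honoring the deadline.
    """
    # Local execution: E = k * f^2 * cycles, T = cycles / f
    e_local, t_local = k * f_local ** 2 * cycles, cycles / f_local
    # Offloading: the device pays only radio energy; edge compute time is
    # ignored in this simplified sketch.
    e_off, t_off = p_tx * bits / rate, bits / rate
    candidates = []
    if t_local <= deadline:
        candidates.append(("local", e_local))
    if t_off <= deadline:
        candidates.append(("offload", e_off))
    if not candidates:
        raise ValueError("no mode meets the deadline")
    return min(candidates, key=lambda c: c[1])  # minimum-energy feasible mode
```

For a 10^9-cycle task with 1 Mbit of input, a 1 GHz CPU, and a 10 Mbit/s uplink, the radio energy (0.05 J) undercuts the local computation energy, so offloading is chosen.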

9.
A dynamic control optimization strategy for coverage energy efficiency in wireless sensor networks
Energy constraint is one of the key problems in wireless sensor network measurement and control. Targeting the mobile-node placement problem, this paper proposes a communication energy evaluation metric for wireless sensor networks and uses a particle swarm optimization strategy to update node positions, giving the network greater flexibility and energy efficiency. Dijkstra's algorithm is used to obtain the network's optimal communication path for computing the energy metric. A dynamic energy control strategy puts idle nodes to sleep to reduce operating energy consumption. By optimizing the energy metric, communication energy consumption is reduced, achieving a reasonable balance between network coverage and communication energy consumption. Simulations of mobile target tracking show that combining the coverage energy-efficiency optimization algorithm with the dynamic energy control strategy improves the energy efficiency of wireless sensor network coverage.
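The Dijkstra-based computation of a minimum-energy communication path mentioned above can be sketched as follows; the graph encoding and the per-link energy weights are hypothetical:

```python
import heapq

def min_energy_path(graph, src, dst):
    """Dijkstra's algorithm over per-link energy costs.

    graph: {node: [(neighbor, energy_cost), ...]} with non-negative costs
    Returns (total_energy, path) for the minimum-energy route src -> dst.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue  # stale heap entry
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return float("inf"), []
    # Reconstruct the path by walking predecessors back to the source.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]
```

In the paper's setting the returned total would feed the communication energy evaluation metric; here it is simply the sum of link weights along the cheapest route.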

10.
With the rapid growth of network scale and traffic, the energy consumption of the Internet is becoming an increasingly serious problem. This paper studies the energy consumption of IP (Internet Protocol) over TDM over WDM core networks. It first analyzes the energy-saving mechanisms of multilayer networks and, on that basis, proposes an ITO model for multilayer network energy optimization. The model fully accounts for the multilayer network architecture and the modular structure of network devices, and jointly applies traffic-flow control and network resource configuration and management, so that traffic is reasonably distributed and routed across the layers and device modules can sleep at multiple granularities, reducing network energy consumption. Experimental results show that the ITO model effectively reduces network energy consumption; at low traffic, network power can be cut to 24%-38% of the peak-traffic level. The paper also experimentally examines the effects of traffic grooming and bypass, different layer combinations, dynamic device power, and the modular structure of network devices on energy optimization, providing a reference for building energy-efficient green networks.

11.
Energy consumption growth of the fifth-generation (5G) mobile network infrastructure can be significant due to the increased traffic demand from a massive number of end-users with increasing traffic volume, user density, and data rate. The emerging technologies of radio access networks (RAN), e.g., millimeter-wave (mm-wave) communication and large-scale antennas, contribute considerably to this increase in energy consumption. The multiband 2-tier heterogeneous network (HetNet), cloud radio access network (C-RAN), and heterogeneous cloud radio access network (H-CRAN) are considered the prospective RAN architectures of 5G mobile communication. This paper explores these novel architectures from the energy consumption and network power efficiency perspective, considering varying high-volume traffic load, the number of antennas, varying bandwidth, and varying density of low power nodes (LPNs), integrated with mm-wave communication and large-scale multiple antennas. The architectural differences of these networks are highlighted, and power consumption analytical models are developed that characterize the energy consumption of remote radio heads (RRHs), the baseband unit (BBU) pool, fronthaul, the macro base station (MBS), and small cell base stations (SCBs) in HetNet, C-RAN, and H-CRAN. The network power efficiency under propagation-environment and network constraints is investigated to identify the energy-efficient architecture for the 5G mobile network. The simulation results reveal that the power consumption of all these architectures increases in all considered scenarios due to increases in the power consumption of radio frequency components and in computation power. Moreover, C-RAN is the most energy-efficient RAN architecture due to its cooperative processing and reduced cooling and site-support equipment, while H-CRAN consumes the most energy among the compared 5G RAN architectures, mainly due to its high level of heterogeneity.

12.
Energy consumption is one of the most significant aspects of large-scale storage systems, where multilevel caches are widely used. In a typical hierarchical storage structure, upper-level storage serves as a cache for the lower level, forming a distributed multilevel cache system. In the past two decades, several classic LRU-based multilevel cache policies have been proposed to improve the overall I/O performance of storage systems. However, few power-aware multilevel cache policies focus on the storage devices at the bottom level, which consume more than 27% of the energy of the whole system [1]. To address this problem, we propose a novel power-aware multilevel cache (PAM) policy that can reduce the energy consumption of high-performance, high-bandwidth storage devices. In our PAM policy, an appropriate number of cold dirty blocks in the upper-level cache are identified and flushed directly to the storage devices, extending with high probability the time disks can remain in standby mode. To demonstrate the effectiveness of the proposed policy, we conduct several simulations with real-world traces. Compared to existing popular cache schemes such as PALRU, PB-LRU, and Demote, PAM reduces power consumption by up to 15% under different I/O workloads and improves energy efficiency by up to 50.5%.
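The idea of selecting cold dirty blocks in the upper-level cache for early flushing can be sketched as a toy LRU-ordered cache; class and method names are hypothetical illustrations, not the paper's implementation:

```python
from collections import OrderedDict

class UpperLevelCache:
    """Toy upper-level cache that flushes cold dirty blocks (PAM-style sketch).

    Blocks are kept in LRU order; flush_cold_dirty(n) selects up to n of the
    least-recently-used dirty blocks and writes them straight to storage, so
    the lower-level disks can stay in standby longer afterwards.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> dirty flag, LRU first

    def access(self, block_id, write=False):
        # A write marks the block dirty; any access moves it to the MRU end.
        dirty = self.blocks.pop(block_id, False) or write
        self.blocks[block_id] = dirty
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the LRU block (toy: drop)

    def flush_cold_dirty(self, n):
        # Pick up to n of the coldest (least recently used) dirty blocks.
        victims = [b for b, dirty in self.blocks.items() if dirty][:n]
        for b in victims:
            self.blocks[b] = False  # flushed to storage: now clean
        return victims
```

A run with four blocks, two of them written, flushes the colder dirty block first and leaves the rest untouched.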

13.
Bluetooth is one of the most widespread technologies for personal area networks; it allows portable devices to form multi-hop Bluetooth ad hoc networks, so-called scatternets. Routing is one of the challenges in scatternets because of its impact on network performance, and it should focus on reducing power consumption because most nodes are battery-operated portable devices. In this paper, we propose a routing protocol for Bluetooth scatternets that customizes the Ad hoc On-Demand Distance Vector (AODV) routing protocol, making it power-aware and suitable for scatternets. It enhances the AODV flooding mechanism by excluding all non-bridge slaves from taking part in the AODV route discovery process. In addition, it improves the AODV route discovery phase by considering the hop count, the predicted node power, and the average traffic intensity of each node as metrics for best-route selection. By removing HELLO packets, our protocol reduces control packet overhead and the power consumption of network devices. Simulation results show that the proposed protocol achieves considerable improvements over other enhanced AODV protocols, increasing the data delivery ratio by 10.78%, reducing the average end-to-end delay by 8.11%, and reducing the average energy consumption by 7.92%.

14.
The diversity of services delivered over wireless channels has increased people's desire to access these services ubiquitously from their mobile devices. However, a ubiquitous mobile computing environment faces several challenges, such as scarce bandwidth, limited energy resources, and frequent disconnection between the server and mobile devices. Caching frequently accessed data is an effective technique for improving network performance because it reduces network congestion, query delay, and power consumption. When caching is used, maintaining cache consistency becomes a major challenge, since data items updated on the server must also be updated in the caches of the mobile devices. In this paper we propose a new cache invalidation scheme, the Selective Adaptive Sorted (SAS) cache invalidation strategy, which overcomes the false invalidation problem present in most invalidation strategies in the literature. The performance of the proposed strategy is evaluated and compared with the selective cache invalidation strategy and the updated invalidation report strategy from the literature. Results show that a significant cost reduction can be obtained with the proposed strategy when measuring performance metrics such as delay, bandwidth, and energy.

15.
We propose an energy management framework to optimize the energy consumption of networks that use the Multiple Spanning Tree Protocol (MSTP), such as Carrier Grade Ethernet networks. The objective is to minimize the energy consumption of nodes and links while respecting QoS constraints. Energy management is performed through MSTP by choosing, from a given set, the most appropriate spanning trees and the most appropriate edges to operate while satisfying the traffic demands. A trade-off framework between energy consumption and network performance is proposed. Results show that it is possible to achieve good traffic engineering while operating the network close to the minimum energy value.

16.
In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing at a processor and releasing this processor due to I/O, synchronization, quantum expiration, or preemption. Queuing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis, where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of an initial burst of cache misses experienced by tasks when they return to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.
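The affinity-aware scheduling idea above can be sketched as a simple dispatch function. This is a hedged illustration; the names and the least-loaded tie-breaking are assumptions, not the paper's exact policies:

```python
def schedule(task_last_cpu, idle_cpus, load):
    """Affinity-aware dispatch sketch: prefer a task's previous processor.

    task_last_cpu: CPU the task last ran on (None if it never ran)
    idle_cpus:     set of currently idle CPUs
    load:          {cpu: run-queue length}, used as a tiebreaker
    Reusing the previous CPU keeps the task's working set warm in that
    processor's cache, avoiding the initial burst of cache misses.
    """
    if task_last_cpu in idle_cpus:
        return task_last_cpu  # cache-affine choice: data likely still resident
    if idle_cpus:
        return min(idle_cpus, key=lambda c: load[c])  # any idle CPU
    return min(load, key=load.get)  # least-loaded fallback under contention
```

The function only needs the task's last CPU, i.e., the "small amount of affinity information" the abstract mentions.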

17.
Power consumption is an important issue for cluster supercomputers as it directly affects running cost and cooling requirements. This paper investigates the memory energy efficiency of high-end data servers used for supercomputers. Emerging memory technologies allow memory devices to dynamically adjust their power states and enable free rides by overlapping multiple DMA transfers from different I/O buses to the same memory device. To achieve maximum energy saving, the memory management on data servers needs to judiciously utilize these energy-aware devices. As we explore different management schemes under five real-world parallel I/O workloads, we find that the memory energy behavior is determined by a complex interaction among four important factors: (1) cache hit rates that may directly translate performance gain into energy saving, (2) cache populating schemes that perform buffer allocation and affect access locality at the chip level, (3) request clustering that aims to temporally align memory transfers from different buses into the same memory chips, and (4) access patterns in workloads that affect the first three factors.

18.
Today the ICT industry accounts for 2–4% of worldwide carbon emissions, which are estimated to double in a business-as-usual scenario by 2020. A remarkable part of the large energy volume consumed in the Internet today is due to the over-provisioning of network resources such as routers, switches, and links to meet stringent reliability requirements. Performance and energy issues are therefore important factors in designing gigabit routers for future networks. However, the design and prototyping of energy-efficient routers is challenging for multiple reasons, such as the lack of power measurements from live networks and of a good understanding of how energy consumption varies under different traffic loads and switch/router configuration settings. Moreover, the exact energy saving gained by adopting different energy-efficient techniques in different hardware prototypes is often poorly known. In this article, we first propose a measurement framework that quantifies and profiles the detailed energy consumption of sub-components in the NetFPGA OpenFlow switch. We then propose a new power-scaling algorithm that adapts the operational clock frequencies, and thus the energy consumption, of the FPGA core and the Ethernet ports to the actual traffic load. We also propose a new energy profiling method that allows studying the detailed power performance of network devices. Results show that our energy-efficient solution achieves a higher level of energy efficiency than some existing approaches: the upper and lower bounds of the power consumption of the NetFPGA OpenFlow switch are shown to be 30% lower than those of the commercial HP Enterprise switch. Moreover, the new switch architecture can save up to 97% of the dynamic power consumption of the FPGA chip in the lowest frequency mode.

19.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen, Kaplan. Algorithmica, 2002, 33(3): 300-334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit from recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.
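A replacement policy driven by next-request-time information, as in the paper's model, can be illustrated by a Belady-style simulator that evicts the page whose next request is expected furthest in the future. The trace and the oracle supplying expected next-request times are hypothetical:

```python
def simulate(requests, expected_next_at, capacity):
    """Run an expected-next-request replacement policy over a trace.

    expected_next_at(page, t): expected time of page's next request after t,
    as would be derived from the page's next-request-time distribution.
    Returns the number of cache hits.
    """
    cache, hits = set(), 0
    for t, page in enumerate(requests):
        if page in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            # Evict the page expected to be requested furthest in the future.
            victim = max(cache, key=lambda p: expected_next_at(p, t))
            cache.remove(victim)
        cache.add(page)
    return hits
```

With a perfect oracle (exact next-request times), this reduces to Belady's offline-optimal policy; with expectations from per-page distributions it is the online analogue the abstract describes.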

20.
Performance evaluation of Web proxy cache replacement policies
Martin, Rich, Tai. Performance Evaluation, 2000, 39(1-4): 149-164
The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. In this paper we analyze the importance of different Web proxy workload characteristics in making good cache replacement decisions. We evaluate workload characteristics such as object size, recency of reference, frequency of reference, and turnover in the active set of objects. Trace-driven simulation is used to evaluate the effectiveness of various replacement policies for Web proxy caches. The extended duration of the trace (117 million requests collected over 5 months) allows long term side effects of replacement policies to be identified and quantified.

Our results indicate that higher cache hit rates are achieved using size-based replacement policies. These policies store a large number of small objects in the cache, thus increasing the probability of an object being in the cache when requested. To achieve higher byte hit rates a few larger files must be retained in the cache. We found frequency-based policies to work best for this metric, as they keep the most popular files, regardless of size, in the cache. With either approach it is important that inactive objects be removed from the cache to prevent performance degradation due to pollution.
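The size-based policies discussed above can be illustrated with a GreedyDual-Size sketch, a classic Web-cache policy that credits small objects more and ages entries via a global inflation value. This is a generic illustration with a uniform retrieval cost, not one of the paper's exact policies:

```python
import heapq

class GreedyDualSize:
    """GreedyDual-Size replacement: favors small objects, ages old ones.

    On admission an object gets credit H = L + cost/size (cost = 1 here);
    eviction removes the lowest-H object and raises the inflation value L
    to that H, so long-idle objects eventually lose their advantage.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.L = 0.0
        self.H = {}       # object -> current credit
        self.size = {}
        self.used = 0
        self.heap = []    # (H, object) heap with lazy deletion

    def _evict(self):
        while True:
            h, obj = heapq.heappop(self.heap)
            if self.H.get(obj) == h:     # skip stale heap entries
                self.L = h               # inflate: future credits start here
                self.used -= self.size.pop(obj)
                del self.H[obj]
                return

    def request(self, obj, size):
        """Serve a request; returns True on a cache hit."""
        if obj in self.H:                # hit: refresh the object's credit
            self.H[obj] = self.L + 1.0 / size
            heapq.heappush(self.heap, (self.H[obj], obj))
            return True
        while self.used + size > self.capacity:
            self._evict()
        self.H[obj] = self.L + 1.0 / size
        self.size[obj] = size
        self.used += size
        heapq.heappush(self.heap, (self.H[obj], obj))
        return False
```

Because credit is inversely proportional to size, a stream of small objects readily displaces one large object, which is exactly the behavior behind the higher hit rates (and lower byte hit rates) of size-based policies noted above.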

