Similar Documents
20 similar documents found (search time: 312 ms)
1.
To meet the performance requirements of data center networks for high concurrency and low tail latency, a distributed load-balancing gateway architecture for data center networks is proposed. The architecture comprises three core algorithmic models: a resource-pooling aggregation algorithm, a priority scheduling algorithm, and a dynamic load-balancing algorithm. Based on this architecture, an intelligent gateway is implemented on a field-programmable gate array (FPGA). Third-party testing shows that the gateway performs flexible, scalable load balancing on key packet fields, sustaining a line rate of 9.4 Gbps without packet loss, a loss rate of about 5% at a 10 Gbps line rate, and a port latency of 2 μs. Compared with general-purpose software and hardware load-balancing solutions, the architecture combines packet-priority-based scheduling with intelligent "pooling" of hardware storage resources for traffic management, ensuring efficient distribution of millions of data flows in the data center network and improving the performance of high-concurrency, low-latency applications. Under one million concurrent connections, the tail latency of network link responses stays below 60 ms.

2.
A Load-Balancing Method Based on an Elastic Cloud   (Cited: 1; self-citations: 0; citations by others: 1)
A load-balancing method based on an elastic cloud is proposed. The method constructs a load-balancing model framework and builds models that quantitatively evaluate virtual machine load and cluster resource utilization; to distribute tasks and elastically scale the virtual machine cluster, a task scheduling algorithm and an elastic scaling algorithm are designed. Experimental results show that the method achieves load balancing while effectively improving resource utilization.
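
The abstract does not give the paper's scheduling or scaling formulas; the sketch below is only a minimal illustration of the general idea, assuming a least-load dispatch rule and fixed upper/lower utilization thresholds (the names `dispatch`, `autoscale`, and the threshold values are hypothetical, not from the paper).

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    capacity: float          # abstract capacity units
    load: float = 0.0        # current load in the same units

    @property
    def utilization(self) -> float:
        return self.load / self.capacity

def dispatch(task_load: float, cluster: list[VM]) -> VM:
    """Send the task to the VM with the lowest utilization (simple least-load rule)."""
    target = min(cluster, key=lambda vm: vm.utilization)
    target.load += task_load
    return target

def autoscale(cluster: list[VM], high: float = 0.8, low: float = 0.3) -> str:
    """Scale out/in based on average cluster utilization (hypothetical thresholds)."""
    avg = sum(vm.utilization for vm in cluster) / len(cluster)
    if avg > high:
        cluster.append(VM(name=f"vm{len(cluster)}", capacity=1.0))  # scale out
        return "scale-out"
    if avg < low and len(cluster) > 1:
        idlest = min(cluster, key=lambda vm: vm.load)
        cluster.remove(idlest)                                       # scale in
        return "scale-in"
    return "steady"
```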

3.
To address the low throughput, high node transmission delay, and high packet loss rate of wireless laser communication networks, a load-balancing clustering algorithm based on an adaptive genetic algorithm is proposed. The AGCH algorithm groups the network nodes into clusters and selects cluster-head nodes; a resource scheduling model is then built to allocate and schedule the resources of the cluster heads, and the model is solved with an adaptive genetic algorithm, thereby improving load balancing across the wireless laser communication network. Experiments on network throughput, node transmission delay, and packet loss rate confirm the effectiveness and practicality of the algorithm.

4.
Load balancing among multiple controllers is one of the key concerns in SDN deployment research. Targeting the time efficiency of load balancing across controllers, this paper proposes a load-balancing mechanism based on a load-informing strategy (LILB). The mechanism comprises four core functional components: load measurement, load notification, balancing decision, and switch migration. Thanks to load notification, an overloaded controller can complete its balancing decision quickly without first collecting load information from the other controllers. To reduce the communication and processing overhead caused by load notification, a suppression algorithm is proposed to lower the notification frequency. The paper also presents decision methods for identifying the most heavily overloaded controller, the switch to migrate, and the target controller, together with an acceptance policy at the target controller that prevents load oscillation, and it designs a message-exchange mechanism for switch migration that supports smooth switching of controller roles during migration. Finally, the effectiveness of the proposed methods is validated in an experimental environment based on Floodlight and Mininet.
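
The paper's suppression algorithm is not reproduced here; the following is a minimal sketch of the general notification-suppression idea it describes, assuming a controller only broadcasts its load when the change since the last notification exceeds a threshold or a minimum interval has elapsed (threshold and interval values are illustrative).

```python
import time

class LoadNotifier:
    """Suppress redundant load notifications between SDN controllers (illustrative)."""

    def __init__(self, delta_threshold: float = 0.1, min_interval: float = 1.0):
        self.delta_threshold = delta_threshold   # relative load change that forces a notification
        self.min_interval = min_interval         # minimum seconds between notifications
        self.last_load = None
        self.last_sent = 0.0

    def maybe_notify(self, current_load: float, broadcast) -> bool:
        """Call broadcast(load) only when the change and elapsed time justify it."""
        now = time.monotonic()
        changed_enough = (
            self.last_load is None
            or abs(current_load - self.last_load) >= self.delta_threshold
        )
        waited_enough = (now - self.last_sent) >= self.min_interval
        if changed_enough and waited_enough:
            broadcast(current_load)
            self.last_load, self.last_sent = current_load, now
            return True
        return False
```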

5.
The two-dimensional FFT is a standard algorithm in image processing, widely used in image filtering, fast convolution, and target tracking. To meet the real-time processing requirements of high-resolution images, a multi-core parallel implementation of the 2D FFT is proposed for the domestically developed FT-X many-core DSP. Based on the many-core programming model, task initialization is completed through multi-core task deployment and address-space remapping, achieving data-parallel processing across 24 cores with a speedup of 19.8x. On this basis, an implicit transpose scheme based on DMA strided transfers is proposed; by allocating matrix addresses appropriately, it overcomes the stride-length limits of strided transfers for large matrices. Experimental results show that, at an 8K x 8K data size, the scheme saves 91% and 65% of the transpose time compared with direct transpose and instruction-based implicit transpose, respectively. A multi-core load-imbalance problem in a special case was also identified and resolved, reducing the per-core runtime gap from 64% to 12% and cutting the total runtime by 26%.
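
The FT-X specifics (DMA strided transfers, address remapping) are platform dependent; the sketch below only illustrates the row-column decomposition with an explicit transpose, i.e., the step the paper's implicit-transpose scheme is designed to avoid, using NumPy for brevity.

```python
import numpy as np

def fft2_row_column(image: np.ndarray) -> np.ndarray:
    """2D FFT as two passes of 1D FFTs with a transpose in between.

    On a many-core DSP each core would process a block of rows in each pass;
    the transpose is the step the paper replaces with DMA strided transfers.
    """
    stage1 = np.fft.fft(image, axis=1)        # 1D FFT along every row
    stage1_t = stage1.T                        # explicit transpose
    stage2 = np.fft.fft(stage1_t, axis=1)      # 1D FFT along rows of the transpose
    return stage2.T                            # transpose back to the original layout

# Sanity check against the library 2D FFT
a = np.random.rand(64, 64)
assert np.allclose(fft2_row_column(a), np.fft.fft2(a))
```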

6.
祝宇林 《信息技术》2005,29(4):110-111
This article reviews the evolution of three generations of load-balancing products, with emphasis on the performance of the third generation. It briefly introduces the two categories of software-based and hardware-based load-balancing products, compares their performance, and analyzes the advantages, disadvantages, and open problems of each. After outlining the development history and characteristics of the two categories, it further explains how load-balancing products work and the benefits they bring in handling network and service performance and in allocating resources.

7.
Users with real-time service requirements who are offloaded by load balancing can suffer severe inter-cell interference and the access delay inherent in traditional reactive methods, degrading the user experience. To address this, the paper proposes a proactive load-balancing algorithm based on interference coordination. To coordinate interference, orthogonal resources are allocated to cell-edge users through fractional frequency reuse; to minimize the total system resource overhead while meeting user data-rate requirements and balancing network load, the users' large-scale ...

8.
With the continued development of Internet technology, traffic in data center networks and national backbone networks keeps growing. Large-scale traffic, together with useless malicious traffic, strongly affects load balancing on multi-core processors, making load balancing an urgent problem for such processors. This paper proposes a dynamic multiple-hash technique on the DPDK platform to better address uneven traffic distribution across processor cores. Existing techniques such as RSS are analyzed, and by combining symmetric RSS with dynamic load-aware techniques, captured packets are dispatched to different receive queues, achieving load balancing across the processor cores.
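
DPDK's actual RSS configuration is not shown here; the sketch below only illustrates symmetric flow hashing (both directions of a flow map to the same receive queue) plus a crude dynamic remap of hot buckets, with data structures and the policy in `rebalance` assumed for illustration.

```python
import hashlib

NUM_QUEUES = 8
remap: dict[int, int] = {}        # bucket -> overriding queue, updated dynamically
queue_load = [0] * NUM_QUEUES     # packets dispatched per queue (illustrative counter)

def symmetric_bucket(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Order the endpoints canonically so A->B and B->A hash to the same bucket."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % (NUM_QUEUES * 16)

def dispatch(src_ip, dst_ip, src_port, dst_port) -> int:
    bucket = symmetric_bucket(src_ip, dst_ip, src_port, dst_port)
    queue = remap.get(bucket, bucket % NUM_QUEUES)
    queue_load[queue] += 1
    return queue

def rebalance() -> None:
    """Move one bucket from the busiest queue to the idlest one (toy policy)."""
    busiest = max(range(NUM_QUEUES), key=lambda q: queue_load[q])
    idlest = min(range(NUM_QUEUES), key=lambda q: queue_load[q])
    if queue_load[busiest] > 2 * max(queue_load[idlest], 1):
        for bucket in range(NUM_QUEUES * 16):
            if remap.get(bucket, bucket % NUM_QUEUES) == busiest:
                remap[bucket] = idlest
                break
```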

9.
To address load imbalance in virtual machine clusters in cloud environments, a cluster optimization algorithm based on VM migration is proposed. By monitoring node load in real time, the algorithm dynamically adjusts the weight of each resource and, based on these weights, selects for migration the VM that reduces host load the most. A prediction mechanism eliminates unnecessary migrations triggered by transient spikes in host resource utilization. When selecting the target node, a multi-objective decision method is used that balances management goals such as multi-resource matching rate and service-level agreement (SLA) violation rate. Experimental results show that, compared with load-balancing algorithms of the same type, the algorithm reduces the number of migrations and lowers the SLA violation rate.
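
The paper's weight-adjustment and prediction rules are not given in the abstract; the sketch below assumes a linear weighted load score per host, a short moving-average check to filter transient spikes, and a greedy choice of the VM whose removal lowers the overloaded host's score the most (all names and thresholds are illustrative).

```python
from statistics import mean

def host_score(usage: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of resource utilizations, e.g. usage = {'cpu': 0.9, 'mem': 0.6, 'net': 0.4}."""
    return sum(weights[r] * usage[r] for r in weights)

def predicted_overload(history: list[float], threshold: float = 0.85, window: int = 5) -> bool:
    """Treat a host as overloaded only if its recent average score exceeds the threshold,
    filtering out transient spikes (a simple stand-in for the paper's prediction mechanism)."""
    return mean(history[-window:]) > threshold

def pick_vm_to_migrate(host_usage: dict[str, float],
                       vm_usages: dict[str, dict[str, float]],
                       weights: dict[str, float]) -> str:
    """Choose the VM whose removal lowers the host's weighted score the most."""
    def score_without(vm: str) -> float:
        remaining = {r: host_usage[r] - vm_usages[vm][r] for r in weights}
        return host_score(remaining, weights)
    return min(vm_usages, key=score_without)
```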

10.
《信息技术》2016,(9):55-58
With the rapid growth of networks and the expansion of massive data, cloud storage technology is widely used, and dynamic load-balancing strategies for distributed storage are receiving growing attention. Building on existing load-balancing strategies, this paper proposes a weighted round-robin load algorithm that partitions nodes by configurable thresholds and consults a load table to assign storage tasks in round-robin fashion, so that the distributed system can reasonably improve resource utilization and dynamically adjust the workload of the storage nodes.
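
The paper's exact threshold partitioning and load-table layout are not given; the sketch below combines a standard smooth weighted round-robin with a simple threshold filter over a load table, as an illustration of the idea (class name, threshold value, and the load-table format are assumptions).

```python
class WeightedRoundRobin:
    """Smooth weighted round-robin over nodes below a load threshold (illustrative).

    `nodes` maps node name -> static weight; `load_table` maps node name -> current
    load in [0, 1]. Nodes at or above `threshold` are skipped for the current pick.
    """

    def __init__(self, nodes: dict[str, int], threshold: float = 0.8):
        self.weights = nodes
        self.threshold = threshold
        self.current = {n: 0 for n in nodes}

    def next_node(self, load_table: dict[str, float]) -> str | None:
        eligible = [n for n in self.weights if load_table.get(n, 0.0) < self.threshold]
        if not eligible:
            return None
        total = sum(self.weights[n] for n in eligible)
        for n in eligible:
            self.current[n] += self.weights[n]
        chosen = max(eligible, key=lambda n: self.current[n])
        self.current[chosen] -= total
        return chosen

# Example: node B has twice the weight of A, so it receives twice as many tasks.
wrr = WeightedRoundRobin({"A": 1, "B": 2})
loads = {"A": 0.2, "B": 0.3}
print([wrr.next_node(loads) for _ in range(6)])   # ['B', 'A', 'B', 'B', 'A', 'B']
```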

11.
In modern systems, many well-known techniques (e.g., dynamic voltage and frequency scaling, job scheduling, etc.) have been developed to achieve low power, high performance, appropriate quality of service, or other specific goals. Workload prediction is a critical factor in bringing these techniques into full play. However, it is very difficult to accurately predict the workloads of upcoming tasks if they vary drastically. In this paper, we propose a new hybrid fuzzy-Kalman filter and a corresponding area-efficient hardware architecture to accurately and quickly predict workloads with large variation. To decrease the hardware complexity while maintaining sufficient accuracy, the computation of the Kalman gain is simplified with a lookup-table method. In addition, the workload and covariance values in the Kalman filter are normalized and truncated to significantly reduce the bit length of the hybrid workload predictor. Furthermore, a simplified fuzzy controller is developed to adaptively adjust the measurement noise covariance of the Kalman filter so that the prediction error can be further lowered. Experimental results on real applications show that the proposed hybrid fuzzy-Kalman filter achieves lower prediction error and smaller hardware area than previous workload predictors.
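
The hardware architecture, lookup-table gain simplification, and fuzzy rule base are specific to the paper; the following is only a minimal software sketch of a scalar Kalman workload predictor whose measurement-noise covariance R is crudely adapted from the recent innovation, as a stand-in for the fuzzy controller (the adaptation rule and constants are assumptions).

```python
class AdaptiveKalmanPredictor:
    """Scalar Kalman filter over a random-walk workload model (illustrative)."""

    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.x = 0.0      # estimated workload
        self.p = 1.0      # estimate covariance
        self.q = q        # process noise covariance
        self.r = r        # measurement noise covariance (adapted below)

    def predict(self) -> float:
        """Predicted workload of the next task (random walk: next = current estimate)."""
        return self.x

    def update(self, measured: float) -> None:
        # Time update
        p_pred = self.p + self.q
        # Crude adaptation of R from the innovation, standing in for the fuzzy controller:
        # large surprises shrink R, so the filter trusts the measurement more.
        innovation = measured - self.x
        self.r = max(1e-4, 0.9 * self.r + 0.1 / (1.0 + abs(innovation)))
        # Measurement update
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * innovation
        self.p = (1.0 - k) * p_pred
```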

12.
MapReduce has become a popular model for large-scale data processing in recent years. Many works on MapReduce scheduling (e.g., load balancing and deadline-aware scheduling) have emphasized the importance of predicting the workload received by individual reducers. However, because the input characteristics and the user-specified map function of a given job are unknown to the MapReduce framework before the job starts, accurately predicting reducer workload is a difficult challenge. To address this challenge, we present ROUTE, a run-time robust reducer workload estimation technique for MapReduce. ROUTE progressively samples the partition sizes of the early-completed mappers, allowing it to perform estimation at run time while fulfilling the accuracy requirement specified by users. Moreover, by using robust estimation and bootstrap resampling techniques, ROUTE achieves high applicability to a wide variety of applications. Through experiments using both real and synthetic data on an 11-node Hadoop cluster, we show that ROUTE achieves high accuracy, with an error rate of no more than 10.92%, and improves the error rate by 40.6% compared with the state-of-the-art solution. In addition, through simulations using synthetic data, we show that ROUTE is robust to a variety of skewed distributions. Finally, we apply ROUTE to existing load-balancing and deadline-aware scheduling frameworks and show that it significantly improves their performance. Copyright © 2016 John Wiley & Sons, Ltd.
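
ROUTE's robust estimators are not reproduced here; the sketch below only illustrates the basic idea of estimating a reducer's total workload from the partition sizes reported by early-finishing mappers, with a bootstrap resample attaching a rough confidence interval (the function name, scaling rule, and percentile choice are assumptions).

```python
import random
from statistics import mean

def estimate_reducer_workload(sampled_partition_sizes: list[float],
                              total_mappers: int,
                              n_boot: int = 1000) -> tuple[float, float, float]:
    """Scale the mean partition size of early-completed mappers to all mappers,
    and bootstrap-resample to get a rough 90% interval for the estimate."""
    point = mean(sampled_partition_sizes) * total_mappers
    boot = []
    for _ in range(n_boot):
        resample = random.choices(sampled_partition_sizes, k=len(sampled_partition_sizes))
        boot.append(mean(resample) * total_mappers)
    boot.sort()
    lo = boot[int(0.05 * n_boot)]
    hi = boot[int(0.95 * n_boot)]
    return point, lo, hi

# Example: 8 of 100 mappers have finished and reported partition sizes (MB) for one reducer.
print(estimate_reducer_workload([12.0, 15.5, 11.2, 14.8, 13.1, 12.7, 16.0, 13.9], 100))
```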

13.

Cloud computing is one of the distributed resource-sharing technologies that offer resources on a pay-as-you-use basis. Platform as a Service, Infrastructure as a Service, and Software as a Service are the services provided by the cloud. The cloud service provider must ensure each end user's quality of service. Cloud utilization has been increasing rapidly in recent years. To avoid congestion and preserve the service-level agreement, large workloads must be balanced across the network. In this work, a new load-balancing approach is proposed for the dynamic resource-allocation process to improve stability and increase profit. The PBMM algorithm is devised for an effective load-balancing process through which resource scheduling is performed. The task size and the bidding value quoted by each customer are taken into account. To optimize the waiting time, resource tables and task tables are employed. The average waiting time and response time of special users are minimized. Simulation results show that the proposed load-balancing technique ensures the maximum profit and enhances load-balancing stability by increasing the number of special users.


14.
To address the low strategy dimensionality and poor adaptability to complex environments of current intelligent anti-jamming techniques, a multi-dimensional-strategy intelligent anti-jamming technique suited to complex environments is proposed. A single-tone signal is used as the probe signal, and frequency sweeping reduces the complexity of the probing stage to enable fast, real-time extraction of environment state features; a real-time intelligent decision-engine model based on a deep neural network is then designed to improve decision speed and accuracy. Simulation results show that the scheme can accurately predict communication quality and, guided by the objective function, select the optimal strategy among all feasible communication strategies. With a suitable sweep interval for the probe signal, the scheme achieves a decision accuracy close to 96% and good resource utilization, providing effective anti-jamming and good communication quality.

15.
《Microelectronics Journal》2014,45(8):1103-1117
This paper proposes a novel shared-resource routing scheme, SRNoC, that not only enhances network transmission performance but also provides a highly efficient load-balancing solution for NoC design. The proposed SRNoC scheme expands the NoC design space and provides a novel, effective NoC framework. SRNoC mainly consists of a topology and a routing algorithm. The topology is based on a shared-resource mechanism in which the routers are divided into groups and each group shares a set of specified link resources. Thanks to this mechanism, SRNoC can distribute the workload uniformly over the network, improving resource utilization and alleviating network congestion. The proposed routing algorithm is a minimal oblivious routing algorithm; its flexibility and efficiency improve average latency and saturation load. To evaluate the load-balance property of the network, we propose a method to calculate Φ, a characteristic value of load balance: the smaller Φ is, the better the load balance. Simulation results show that SRNoC dramatically improves average latency and saturation load for both synthetic traffic patterns and real application traffic traces, with negligible hardware overhead. Under the same simulation conditions, SRNoC can cut the total network workload down to 48.67% at the least; moreover, it reduces Φ by at least 45% compared with other routing algorithms, which means it achieves a better load-balance property.

16.
In Data Grid systems, quick data access is a challenging issue due to the high latency. Request failure is one of the most common problems in these systems and affects performance and access delay. Job scheduling and data replication are two main techniques for reducing access latency. In this paper, we propose two new neighborhood-based job-scheduling strategies and a novel neighborhood-based dynamic data replication algorithm (NDDR). The proposed algorithms reduce access latency by considering a variety of practical parameters for decision making, and they reduce access delay by considering the failure probability of a node in job scheduling, replica selection, and replica placement. The proposed neighborhood concept in job scheduling includes all nodes with low data-transmission costs, so we can select the best computational node and reduce the search time by running a hierarchical, parallel search. NDDR reduces access latency by selecting the best replica through a hierarchical search based on access time, storage queue workload, storage speed, and failure probability. NDDR improves load balancing and data locality by selecting the best replication place with regard to workload and temporal, geographical, and spatial locality. We evaluate the proposed algorithms using the OptorSim simulator in two scenarios. The simulations confirm that the proposed algorithms improve on similar existing algorithms by 11%, 15%, 12%, and 10% in terms of mean job time, replication frequency, mean data-access latency, and effective network usage, respectively.
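
The abstract lists the factors NDDR weighs for replica selection but not the exact formula; the sketch below simply ranks replicas by a weighted cost built from those factors (the linear form, field names, and weights are assumptions, not the paper's model).

```python
from dataclasses import dataclass

@dataclass
class Replica:
    site: str
    access_time: float       # estimated transfer time to the requesting node (s)
    queue_workload: float    # pending requests at the storage element
    storage_speed: float     # throughput of the storage element (MB/s)
    failure_prob: float      # probability the node fails during the transfer

def replica_cost(r: Replica, w: tuple[float, float, float, float] = (1.0, 0.5, 0.2, 2.0)) -> float:
    """Lower is better: slow access, long queues, slow storage, and likely failure all add cost."""
    wa, wq, ws, wf = w
    return (wa * r.access_time + wq * r.queue_workload
            + ws / max(r.storage_speed, 1e-6) + wf * r.failure_prob)

def select_replica(replicas: list[Replica]) -> Replica:
    return min(replicas, key=replica_cost)
```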

17.
In service-oriented architectures (SOA), current load-balancing algorithms adapt to and predict load fluctuations during scheduling delays rather poorly. To address this, an improved adaptive load-balancing algorithm with a prediction capability is proposed. When the workload arrival rate and service characteristics change, the load parameters are adjusted automatically, and the load weights and allocation of subsequent requests are predicted. Experimental results show that the algorithm is suitable for high-frequency, distributed service-cluster environments and can effectively shorten the average server response time and improve server performance.
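
The paper's weight-prediction model is not given in the abstract; the sketch below uses an exponentially weighted moving average (EWMA) of each server's measured load to predict its next-interval load and derive dispatch weights, purely as an illustrative stand-in (class name, alpha, and the inverse-load weighting are assumptions).

```python
class PredictiveBalancer:
    """Predict each server's next-interval load with an EWMA and weight requests
    inversely to the predicted load (illustrative stand-in for the paper's predictor)."""

    def __init__(self, servers: list[str], alpha: float = 0.3):
        self.alpha = alpha
        self.predicted = {s: 0.0 for s in servers}

    def observe(self, server: str, measured_load: float) -> None:
        """Fold a new load measurement into the prediction for the next interval."""
        prev = self.predicted[server]
        self.predicted[server] = self.alpha * measured_load + (1 - self.alpha) * prev

    def weights(self) -> dict[str, float]:
        """Dispatch probability per server, higher for servers predicted to be idler."""
        inv = {s: 1.0 / (1.0 + p) for s, p in self.predicted.items()}
        total = sum(inv.values())
        return {s: v / total for s, v in inv.items()}
```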

18.
Many-core technology is considered a key to improving the performance of recent computer systems. To obtain good performance from a many-core system, exploiting parallelism at the arithmetic level is not enough; the parallelization strategy must apply to both the hardware architecture and the software. Due to contention for shared hardware resources, the speedup ratio of a many-core system is usually much lower than the number of processor units. In this paper, we take connected-component labeling (CCL) as a data-intensive application and TILE64 as a many-core platform to implement a fast, linear-time, two-scan algorithm for labeling connected components in grayscale images. In the first scan, data-partition parallelism is applied: each core in the many-core system assigns provisional labels (PLs) to the object pixels in a sub-image and collects equivalence information into several tables. Two parallel modes, the cascade-path mode and the tree-path mode, are developed for the second scan, in which representative labels (RLs), derived with the help of the equivalence information, replace the PLs. According to the experimental results, a speedup of 10x over the single-core scenario is achieved when 32 processor units are activated. The experimental results also demonstrate that the efficiency of our TILE64 implementation is superior to that of other parallel labeling platforms.
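
The TILE64 implementation is parallel and handles grayscale images; the following is only a minimal sequential sketch of the classic two-scan labeling it builds on, for a binary image with 4-connectivity, using a union-find structure as the equivalence table.

```python
import numpy as np

def two_scan_label(binary: np.ndarray) -> np.ndarray:
    """Classic two-scan connected-component labeling (4-connectivity, sequential sketch)."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    parent = [0]                      # union-find table of provisional labels

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a: int, b: int) -> None:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    h, w = binary.shape
    # First scan: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                candidates = [l for l in (up, left) if l > 0]
                labels[y, x] = min(candidates)
                if up > 0 and left > 0:
                    union(up, left)
    # Second scan: replace provisional labels with their representatives.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```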

19.
盛洁  唐良瑞  郝建红 《电子学报》2013,41(2):321-328
Existing load-balancing methods for heterogeneous wireless networks fail to jointly consider traffic transfer from overloaded networks and admission control for new traffic; a hybrid load-balancing algorithm is therefore proposed. The algorithm first transfers an appropriate amount of traffic from overloaded cells to lightly loaded cells with overlapping coverage, based on each cell's load level and terminal mobility; it then provides differentiated service to newly arriving traffic of different priorities through an admission-control strategy of resource reservation and preemption priority. Simulation results show that the algorithm maintains system resource utilization while preserving the QoS of real-time and non-real-time traffic, and it effectively reduces the system blocking rate and the traffic handover probability compared with the reference algorithms.

20.
Energy efficiency is a contemporary and challenging issue in geographically distributed data centers. These data centers consume a significant amount of energy and have a negative impact on energy resources and the environment. To minimize energy cost and environmental impact, Internet service providers use approaches such as geographical load balancing (GLB). GLB refers to placing data centers in diverse geolocations to exploit variations in electricity prices, with the objective of minimizing the total energy cost. GLB helps minimize overall energy cost, achieve quality of service, and maximize resource utilization in geo-distributed data centers by employing optimal workload distribution and resource utilization in real time. In this paper, we summarize various optimization-based workload-distribution strategies and optimization techniques proposed in recent research, organized around commonly used factors such as workload type, load balancer, availability of renewable energy, energy storage, and data center server specification in geographically distributed data centers. The survey presents a systematic and novel taxonomy of workload distribution in data centers. Moreover, we discuss various challenges and open research issues along with their possible solutions.
