Similar Literature
20 similar documents found (search time: 0 ms)
1.
A current focus of virtual network research is virtual network embedding, but the node-mapping stage of traditional two-stage algorithms concentrates on improving network resource utilization while neglecting the overall load performance of the network. To avoid the one-sidedness of computing topology potential from a single intrinsic attribute, as existing mapping algorithms do, a second intrinsic node attribute is added to the node-mapping process. Because the two attributes differ by orders of magnitude, the entropy weight method is introduced: the entropy weights of the two attributes are computed to refine the topology-potential calculation, yielding an entropy-weight-based virtual network embedding algorithm. Simulation results show that the proposed algorithm improves the mapping acceptance ratio and reduces the overall network load.
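The entropy-weight step described above can be sketched as follows. This is a minimal illustration, assuming the two node attributes are supplied as columns of a small matrix; the function names and the simple weighted-sum potential score are ours, not the paper's.

```python
import math

def entropy_weights(attrs):
    """Entropy-weight method: attrs is a list of rows, one per node,
    each row holding that node's values for each attribute."""
    n, m = len(attrs), len(attrs[0])
    col_sums = [sum(row[j] for row in attrs) for j in range(m)]
    # Proportion contributed by node i under attribute j.
    p = [[row[j] / col_sums[j] for j in range(m)] for row in attrs]
    k = 1.0 / math.log(n)
    entropy = [-k * sum(p[i][j] * math.log(p[i][j])
                        for i in range(n) if p[i][j] > 0)
               for j in range(m)]
    d = [1.0 - e for e in entropy]  # degree of divergence per attribute
    return [dj / sum(d) for dj in d]

def topology_potential(attrs, weights):
    """Illustrative potential score: entropy-weighted sum of attributes."""
    return [sum(w * v for w, v in zip(weights, row)) for row in attrs]
```

An attribute whose values are identical across nodes has maximal entropy and therefore receives weight zero, which is exactly the "information content" intuition behind the method.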

2.
Virtual machine resource allocation algorithm in cloud computing (cited by: 1)
To address the resource waste of reservation-based virtual machine deployment in cloud computing and the narrowness of single-objective deployment schemes, a group-based multi-objective genetic algorithm for virtual machine resource allocation is proposed. The algorithm uses a group encoding and a resource encoding; the resource encoding consolidates virtual machines according to their historical resource demand. Through improved crossover and mutation operators, it jointly optimizes the number of physical machines and the physical resources occupied by the virtual machines. Experimental results show that the algorithm effectively reduces the number of physical machines used and raises physical resource utilization, thereby saving energy.
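As a rough illustration of the two objectives such a group-based scheme trades off, the sketch below scores a candidate VM-to-host assignment by the number of physical machines used and their mean utilization. The encoding, capacities, and function names here are hypothetical; the paper's actual GA operators are not reproduced.

```python
def placement_objectives(assignment, vm_demand, pm_capacity):
    """assignment[i] = index of the physical machine hosting VM i.
    Returns (machines used, mean utilization of the machines in use),
    or None if some machine would be overloaded."""
    load = {}
    for vm, pm in enumerate(assignment):
        load[pm] = load.get(pm, 0) + vm_demand[vm]
    if any(l > pm_capacity for l in load.values()):
        return None  # infeasible placement
    used = len(load)
    utilization = sum(load.values()) / (used * pm_capacity)
    return used, utilization
```

A multi-objective GA would rank candidate assignments by such a pair, preferring fewer machines and higher utilization.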

3.
陈港, 孟相如, 康巧燕, 阳勇. 《计算机应用》 2021, 41(11): 3309-3318
To address the fact that most mapping algorithms for virtual software-defined networks (vSDN) do not adequately consider the correlation between nodes and links, a vSDN embedding algorithm based on topology partition and cluster analysis is proposed. First, the complexity of the physical network is reduced by partitioning its topology according to shortest hop counts. Then, cluster analysis over node topology and resource attributes raises the request acceptance ratio of the embedding algorithm. Finally, the link constraints are folded into constraints on node bandwidth resources and node degree, and nodes that fail the link requirements are remapped, thereby optimizing both the node- and link-mapping processes. Experimental results show that the algorithm effectively improves the request acceptance ratio of SDN-based virtual network embedding over physical networks with low connectivity probability.

4.

Cloud computing is becoming a very popular form of distributed computing, in which digital resources are shared via the Internet. The user is presented with an overview of the many available resources. Cloud providers want to get the most out of their resources, while users want to pay less for better performance. Task scheduling is one of the most important aspects of cloud computing: to achieve high performance from a cloud computing system, tasks must be scheduled onto appropriate computing resources. The large search space makes this an NP-hard problem, so randomized search methods are required. Many solutions based on several algorithms have been proposed to date. This paper presents a hybrid algorithm called GSAGA to solve the Task Scheduling Problem (TSP) in cloud computing. Although the Genetic Algorithm (GA) has a strong ability to search the problem space, it performs poorly in terms of stability and local search. A more stable algorithm can therefore be created by combining the global search capacity of the GA with the Gravitational Search Algorithm (GSA). Our experimental results indicate that the proposed algorithm solves the problem more efficiently than the state of the art.


5.
For the practical situation in which the resources requested by virtual networks change dynamically, a virtual network embedding algorithm for dynamic virtual network requests (DVNR-VNE) is proposed. Based on mixed linear programming, different types of virtual network requests are preprocessed in separate queues, and an embedding model is built with minimal mapping cost and minimal migration cost as its optimization objectives. Requests that release resources are mapped first, so that more resources become available to support other virtual networks; newly arriving requests are embedded with an optimized virtual network embedding algorithm (WD-VNE). Simulations show that the algorithm lowers link mapping and migration costs while achieving a high request acceptance ratio.

6.
To address low resource utilization in network virtualization environments, a resource-correlation measurement model is built to characterize how well a virtual node matches a substrate node, and each virtual node is mapped onto a substrate node with which its resource correlation is strong. To shorten the mapping paths of virtual links, a node-adjacency model is built so that adjacent virtual nodes are mapped onto adjacent substrate nodes. Experimental results show that the proposed virtual network embedding algorithm balances the distribution of substrate network resources, lowers the resource cost of embedding, and raises the request acceptance ratio.

7.
Network virtualization is recognized as an effective way to overcome the ossification of the Internet. However, the virtual network mapping problem (VNMP) remains a critical challenge: how to map virtual networks onto the substrate network while using infrastructure resources efficiently. The problem can be divided into two phases: node mapping and link mapping. In the node mapping phase, existing algorithms usually map virtual nodes with a purely greedy strategy, without considering the topology among the virtual nodes, which results in excessively long substrate paths (with many hops). To address this, we propose a topology-aware mapping algorithm that takes the topology among the virtual nodes into account. In the link mapping phase, the new algorithm adopts the k-shortest-path algorithm. Simulation results show that the new algorithm greatly increases the long-term average revenue, the acceptance ratio, and the long-term revenue-to-cost ratio (R/C).
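The k-shortest-path step in the link mapping phase can be sketched as a best-first enumeration of loopless paths, as below. This is a didactic sketch, not the paper's implementation: production code would typically use Yen's algorithm, and the dict-of-dicts graph representation is our assumption.

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """Enumerate up to k loopless paths from src to dst in nondecreasing
    total cost, via best-first search over partial paths.
    graph: {node: {neighbor: edge_cost}}."""
    heap = [(0, [src])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(node, {}).items():
            if nxt not in path:  # keep paths simple (no loops)
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found
```

A link mapper would try the returned substrate paths in order, accepting the first one with enough residual bandwidth.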

8.
We define node closeness and path centrality for weighted networks, and propose a virtual network embedding algorithm based on network centrality analysis. It is a two-stage algorithm that quantitatively evaluates substrate nodes and mapping paths from a global perspective, improving the balance of resource selection. Simulation results show that, compared with mainstream prior work, the algorithm significantly improves the acceptance ratio of virtual network requests.

9.
In a cloud computing environment, companies can allocate resources according to demand. However, there may be a delay of minutes between the request for a new resource and that resource being ready for use. This makes reactive techniques, which request a new resource only when the system reaches a certain load threshold, unsuitable for the resource allocation process. To address this problem, it is necessary to predict the requests that will arrive in the next period of time and allocate the necessary resources before the system becomes overloaded. Several time series forecasting models can compute workload predictions from a history of monitoring data, but it is difficult to know which model is best in each case, and the task becomes even harder when little historical data is available for analysis. Most related work considers only single methods to evaluate forecasts. Other works propose selecting a suitable forecasting method for a given context, but that requires a significant amount of data to train the classifier. Moreover, the best solution may not be a specific model, but rather a combination of models. In this paper we propose an adaptive prediction method that uses genetic algorithms to combine time series forecasting models. Our method requires no prior training phase, because it adapts continuously as data arrive. To evaluate our proposal, we use three logs extracted from real Web servers. The results show that our proposal often produces the best result and is generic enough to adapt to various types of time series.
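The idea of evolving a combination of forecasters can be sketched as below: two deliberately simple base models (last-value and moving average) are blended, and a tiny elitist GA evolves the blend weights against walk-forward error on the observed history. The base models, parameters, and names are illustrative assumptions; they are not the models or GA settings used in the paper.

```python
import random

def naive_forecast(history):
    return history[-1]

def moving_avg_forecast(history, window=3):
    w = history[-window:]
    return sum(w) / len(w)

def mse_of_weights(weights, history):
    """Walk-forward mean squared error of the weighted combination."""
    err, n = 0.0, 0
    for t in range(3, len(history)):
        past, actual = history[:t], history[t]
        pred = (weights[0] * naive_forecast(past)
                + weights[1] * moving_avg_forecast(past))
        err += (pred - actual) ** 2
        n += 1
    return err / n

def evolve_weights(history, generations=30, pop_size=10, seed=1):
    """Elitist GA over 2-model blend weights [w, 1 - w]."""
    rng = random.Random(seed)
    pop = [[0.5, 0.5]] + [[w, 1 - w] for w in
                          (rng.random() for _ in range(pop_size - 1))]
    for _ in range(generations):
        pop.sort(key=lambda w: mse_of_weights(w, history))
        survivors = pop[:pop_size // 2]   # elitist truncation selection
        children = []
        for p in survivors:
            w = min(max(p[0] + rng.gauss(0, 0.1), 0.0), 1.0)  # Gaussian mutation
            children.append([w, 1 - w])
        pop = survivors + children
    return min(pop, key=lambda w: mse_of_weights(w, history))
```

Because the error is recomputed on the growing history each time, the weights can be re-evolved as new monitoring data arrive, with no separate training phase.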

10.
The limited computation, storage, and energy of sensor nodes has long constrained the development of wireless sensor networks. This paper therefore proposes an energy optimization method for wireless sensor networks based on a cloud-model PSO (particle swarm optimization) algorithm, comprising network clustering, network energy modeling, and iterative optimization with the cloud-model PSO. The cloud-model PSO uses a cloud theory model to select the inertia weight, which accelerates PSO convergence; benchmark function tests show it outperforms standard PSO and genetic algorithms. In the network model, a binary power-control algorithm reduces network energy consumption and extends node lifetime. Simulations and comparative analysis show that the proposed method converges quickly, keeps nodes alive longer, and effectively controls network energy consumption.
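The cloud-model inertia weight can be sketched with a normal cloud generator: the weight is drawn around an expectation Ex, with a dispersion that is itself randomized (entropy En, hyper-entropy He), and then plugged into an otherwise standard PSO update. All parameter values below (Ex = 0.7, c1 = c2 = 2, the sphere test function) are illustrative assumptions, not the paper's settings.

```python
import random

def cloud_inertia(rng, ex=0.7, en=0.1, he=0.01):
    """Normal cloud generator: a 'cloud drop' around expectation ex,
    whose own spread is randomized by the hyper-entropy he."""
    en_prime = rng.gauss(en, he)
    return rng.gauss(ex, abs(en_prime))

def pso_sphere(dim=2, particles=10, iters=50, seed=7):
    """Minimize the sphere function sum(x_i^2) using the cloud inertia weight."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]
    gbest = min(xs, key=f)[:]
    for _ in range(iters):
        for i in range(particles):
            w = cloud_inertia(rng)  # fresh cloud-model weight each update
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + 2.0 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 2.0 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
            if f(xs[i]) < f(gbest):
                gbest = xs[i][:]
    return gbest, f(gbest)
```

The randomness of the dispersion lets the swarm mix larger exploratory weights with smaller exploitative ones, which is the convergence-speed argument the abstract makes.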

11.
12.
NoC mapping is an important step in NoC design, and the quality of the mapping strongly affects the QoS constraints and communication power of the NoC. A cloud adaptive genetic algorithm for NoC mapping is proposed: a cloud model is used to improve the traditional genetic algorithm by automatically adjusting the crossover and mutation probabilities during the run, thereby optimizing the genetic search. For the specific problem of NoC mapping, a communication-power model under power and delay constraints is established. Experiments show that the method performs well on NoC mapping and reduces communication power.

13.
To study fast location of multi-attribute cloud resources in cloud peer-to-peer networks, and exploiting the advantages of such networks, a lookup algorithm for multi-attribute cloud resources over a cloud P2P network is proposed. On top of a hierarchical cloud P2P network, multi-dimensional indexes are built from resource types and attribute values. Related data are first clustered into the same resource cluster according to the type index; the value domain of each attribute is then divided into segments, and the corresponding resources are stored in them. Mechanisms such as resource-cluster merging and segment-neighbor maintenance make the algorithm more efficient and scalable. Simulations show that the algorithm locates multi-attribute cloud resources quickly, that its query latency does not grow significantly as the number of network nodes and type dimensions increases, and that it scales well.
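The two-level index described above, cluster by type first, then partition each attribute's value domain into segments, can be sketched in a few lines. The class name, segment width, and exact-filter step are our assumptions for illustration; the paper's distributed P2P routing is not reproduced.

```python
class ResourceIndex:
    """Sketch of a two-level index: resources cluster by type, and within a
    type each attribute's value domain is split into fixed-width segments."""
    def __init__(self, seg_width=10):
        self.seg_width = seg_width
        self.clusters = {}  # type -> {attr -> {segment: [(id, value)]}}

    def add(self, res_id, res_type, attrs):
        by_attr = self.clusters.setdefault(res_type, {})
        for name, value in attrs.items():
            seg = int(value // self.seg_width)
            by_attr.setdefault(name, {}).setdefault(seg, []).append((res_id, value))

    def lookup(self, res_type, attr, lo, hi):
        """Scan only the segments overlapping [lo, hi], then filter exactly."""
        segs = self.clusters.get(res_type, {}).get(attr, {})
        hits = []
        for seg in range(int(lo // self.seg_width), int(hi // self.seg_width) + 1):
            hits.extend(rid for rid, v in segs.get(seg, []) if lo <= v <= hi)
        return hits
```

A range query thus touches only the clusters and segments that can contain matches, which is what keeps lookup latency flat as the network grows.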

14.
With the advancement of the national space-ground integrated information network megaproject, reliable embedding of 5G-LEO-constellation network slices has become a research focus. Under an integrated 5G-LEO constellation architecture based on software-defined networking (SDN) and network function virtualization (NFV), the reliable embedding of 5G-LEO network slices is modeled as a mixed-integer linear program. On this basis, resource orchestration for slice requests is studied, and a reliable embedding algorithm based on breadth-first search is proposed. The algorithm jointly considers the reliability threshold of a slice request and the resource demands of its virtual network functions (VNFs), and ranks nodes by their reliability importance during virtual network embedding. Simulation results show that, subject to the reliability-threshold constraint, the algorithm maximizes the revenue-to-cost ratio, raises the embedding success ratio, and outperforms the baselines in slice reliability and acceptance ratio.

15.
To address the unbalanced allocation of multiple server-internal resources in cloud environments, a virtual machine scheduling algorithm with multi-dimensional resource coordination and aggregation, MCCA, is proposed. Built on a grouping genetic algorithm, it uses fuzzy logic and a control parameter based on the multi-dimensional variance of resource utilization to design a fitness function that guides the search of the solution space. The algorithm uses roulette-wheel selection and optimized crossover and mutation operators to obtain a near-optimal solution quickly and effectively. Simulations in CloudSim show that the algorithm has advantages in balancing multi-dimensional resource allocation and improving overall resource utilization.
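The roulette-wheel selection mentioned above is standard GA machinery and can be sketched as follows; the fitness values here are placeholders, not MCCA's fuzzy, variance-based fitness.

```python
import random

def roulette_select(population, fitness, rng):
    """Roulette-wheel selection: the probability of picking an individual
    is proportional to its fitness (higher fitness = more likely here)."""
    total = sum(fitness)
    pick = rng.random() * total
    acc = 0.0
    for ind, fit in zip(population, fitness):
        acc += fit
        if pick <= acc:
            return ind
    return population[-1]  # guard against floating-point round-off
```

In MCCA this would be called once per offspring slot, with fitness supplied by the fuzzy multi-dimensional-variance function the abstract describes.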

16.
Cloud computing is a major paradigm shift in computing. It provides high scalability and elasticity with a range of on-demand services, and a variety of distributed applications can be executed on a cloud's virtual machines (computing nodes). In a distributed application, virtual machine nodes need to communicate and coordinate with each other, and this coordination requires minimal inter-node latency for good performance. But when nodes belong to different clusters of the same cloud, or to a multi-cloud environment, network latency can be high, which makes it difficult to decide which node(s) to choose for executing a distributed application so that inter-node latency stays at a minimum. In this paper, we propose a solution to this problem: a model for grouping nodes with respect to network latency, with application scheduling done on the basis of that latency. The model is part of our proposed Cloud Scheduler module, which supports scheduling decisions on the basis of different criteria; network latency, and the node grouping that results from it, is one such criterion. The essence of the paper is that our latency grouping algorithm incurs no additional network traffic overhead, works well with incomplete latency information, and performs intelligent grouping on the basis of latency. The paper thus addresses an important problem in cloud computing: locating communicating virtual machines so that the latency between them is minimal, and grouping them with respect to inter-node latency.

17.
Cloud computing can reduce power consumption by using virtualized computational resources to provision an application’s computational resources on demand. Auto-scaling is an important cloud computing technique that dynamically allocates computational resources to applications to match their current loads precisely, thereby removing resources that would otherwise remain idle and waste power. This paper presents a model-driven engineering approach to optimizing the configuration, energy consumption, and operating cost of cloud auto-scaling infrastructure to create greener computing environments that reduce emissions resulting from superfluous idle resources. The paper provides four contributions to the study of model-driven configuration of cloud auto-scaling infrastructure by (1) explaining how virtual machine configurations can be captured in feature models, (2) describing how these models can be transformed into constraint satisfaction problems (CSPs) for configuration and energy consumption optimization, (3) showing how optimal auto-scaling configurations can be derived from these CSPs with a constraint solver, and (4) presenting a case study showing the energy consumption/cost reduction produced by this model-driven approach.

18.
The growing demand for and dependence upon cloud services have attracted an increasing level of threats to user data and security. Some critical web and cloud platforms have become constant targets of persistent malicious attacks that attempt to breach security protocols and access user data and information in an unauthorized manner. While some such compromises result from insider data and access leaks, a substantial proportion remains attributable to security flaws in the core web technologies with which such critical infrastructure and services are developed. This paper explores the direct impact and significance of security in the Software Development Life Cycle (SDLC) through a case study covering some 70 public-domain web and cloud platforms within Saudi Arabia. The major sources of security vulnerabilities within the target platforms, as well as the major factors that drive and influence them, are presented and discussed through experimental evaluation. The paper identifies core sources of security flaws in such critical infrastructure through automated security auditing and manual static code analysis. The work also proposes effective approaches, both automated and manual, for ensuring security throughout the SDLC and safeguarding user data integrity in the cloud.

19.
Ruth, P., Jiang, X., Xu, D., Goasguen, S. Computer, 2005, 38(5): 63-69
We have developed a middleware system that integrates and extends virtual machine and virtual network technologies to support mutually isolated virtual distributed environments in shared infrastructures like the grid and the PlanetLab overlay infrastructure. Integrating virtual network and on-demand virtual machine creation and customization technologies makes virtual distributed environments a reality. The Violin-based middleware system integrates and enhances such technologies to create virtual distributed environments.

20.
The virtual network (VN) embedding/mapping problem is recognized as an essential question, and a major challenge, of network virtualization: its target is to efficiently map the virtual nodes and virtual links onto the substrate network resources. Previous research focused on designing heuristic-based algorithms or on two-stage solutions that solve node mapping in the first stage and link mapping in the second. In this study, we propose a new VN embedding algorithm based on integer programming. We build a model of an augmented substrate graph and formulate the VN embedding problem as an integer program with an objective function and a set of constraints, adding a topology-awareness factor to the objective function; the problem is then solved in a single stage. Simulation results show that our algorithm greatly enhances the acceptance ratio and increases the revenue/cost (R/C) ratio and the revenue while decreasing the cost of VN embedding.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号