Similar Documents (20 results)
1.
Addressing the task scheduling problem in collaborative work, this paper builds a corresponding Markov decision process model and, on that basis, proposes an improved Q-learning algorithm based on simulated annealing. By introducing simulated annealing, combining it with a greedy strategy, and applying a filtering judgment over the state space, the algorithm significantly accelerates convergence and shortens execution time. Finally, a comparative analysis against related algorithms from other literature verifies the efficiency of the improved algorithm.
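
As an illustration of the idea, here is a minimal sketch of Q-learning whose exploration is driven by a simulated-annealing (Metropolis) acceptance rule with geometric cooling; the environment interface (`env.reset`, `env.step`, `env.actions`) and all hyperparameters are assumptions for illustration, not the paper's actual algorithm.

```python
import math
import random
from collections import defaultdict

def sa_q_learning(env, episodes=500, alpha=0.1, gamma=0.95, t0=1.0, cooling=0.99):
    """Q-learning whose exploration uses a simulated-annealing (Metropolis)
    acceptance rule instead of a fixed epsilon-greedy schedule."""
    q = defaultdict(float)          # Q[(state, action)] -> value
    temperature = t0
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            greedy = max(env.actions(state), key=lambda a: q[(state, a)])
            candidate = random.choice(env.actions(state))
            # Metropolis rule: keep a better candidate; accept a worse one
            # with a probability that shrinks as the temperature drops.
            delta = q[(state, candidate)] - q[(state, greedy)]
            if delta >= 0 or random.random() < math.exp(delta / temperature):
                action = candidate
            else:
                action = greedy
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(
                q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
        temperature = max(temperature * cooling, 1e-3)   # geometric cooling
    return q
```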

2.
    
The Internet of Things (IoT) has numerous applications in every domain, e.g., smart cities, where it provides intelligent services for sustainable urban living. Next-generation IoT networks are expected to be densely deployed in resource-constrained and lossy environments. Densely deployed nodes producing radically heterogeneous traffic patterns cause congestion and collisions in the network. At the medium access control (MAC) layer, mitigating channel collisions is still one of the main challenges for future IoT networks. Similarly, the standardized network layer uses a ranking mechanism based on hop counts and expected transmission counts (ETX), which often fails to adapt to the dynamic and lossy environment and degrades performance; the ranking mechanism also requires large control overheads to update rank information. Resource-constrained IoT devices operating in a low-power and lossy network (LLN) environment need an efficient solution to these problems. Reinforcement learning (RL) algorithms such as Q-learning have recently been used to solve learning problems on LLN devices such as sensors. Thus, in this paper, an RL-based optimization of dense LLN IoT devices with heavy heterogeneous traffic is devised. The proposed protocol learns collision information from the MAC layer and makes intelligent decisions at the network layer; it also enhances the operation of the trickle timer algorithm. A Q-learning model is employed to adaptively learn the channel collision probability and network-layer ranking states with an accumulated reward function. In simulations using Contiki 3.0 Cooja, the proposed intelligent scheme achieves a lower packet loss ratio, higher throughput, lower control overheads, and lower energy consumption than other state-of-the-art mechanisms.
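
A minimal sketch of how MAC-layer collision feedback could be folded into a Q-value used for parent ranking is given below; the reward shape, the weights on ETX and collision probability, and the class interface are illustrative assumptions rather than the protocol's actual design.

```python
from collections import defaultdict

class ParentRanker:
    """Toy Q-learning ranker that blends an ETX-style hop cost with an
    adaptively learned channel-collision estimate (assumed reward shape)."""
    def __init__(self, alpha=0.2, gamma=0.9, w_etx=0.6, w_coll=0.4):
        self.q = defaultdict(float)   # Q[parent_id] -> value
        self.alpha, self.gamma = alpha, gamma
        self.w_etx, self.w_coll = w_etx, w_coll

    def update(self, parent_id, etx, collision_prob, best_next_q=0.0):
        # Lower ETX and lower collision probability -> higher reward.
        reward = -(self.w_etx * etx + self.w_coll * collision_prob)
        old = self.q[parent_id]
        self.q[parent_id] = old + self.alpha * (reward + self.gamma * best_next_q - old)

    def best_parent(self, candidates):
        # Candidate with the highest learned value becomes the preferred parent.
        return max(candidates, key=lambda p: self.q[p])
```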

3.
    
A major problem in networking has always been energy consumption, and battery life is one parameter that can help improve energy efficiency. Existing research on wireless networking stresses reducing signaling messages or the time required for data transfer to address energy consumption. Routing or forwarding packets between network elements such as routers, switches, and wireless access points is complex in conventional networks. With the advent of Software Defined Networking (SDN) for 5G network architectures, distributed networking has moved toward centralized networking, in which the SDN controller is responsible for decision making; the controller pushes its decisions onto the network elements through a control-plane protocol called OpenFlow. Decentralized networks have been widely used because administrative hierarchies are easy to set up both physically and logically. Here, the centralized controller handles the policies, while the protocols used for routing procedures are designated by the decentralized controller. Ambience Awake is a location-centered routing protocol deployed in the 5G network architecture with the OpenFlow model. The Ambience Awake mechanism relies on the power consumption of the network elements during packet transmission in unicast and multicast scenarios. The signalling load and routing overhead improved by 30% during the routing procedure, and the proposed routing mechanism, running on top of the decentralized SDN controller, proves to be 19.59% more efficient than existing routing solutions.

4.
    
The emergence of Segment Routing (SR) provides a novel routing paradigm that uses a technique called source packet routing, in which the paths that packets follow are indicated at the ingress router. Compared with shortest-path-based routing in traditional distributed routing protocols, SR can realize flexible routing by implementing arbitrary flow splitting at the ingress router. Despite these advantages, it may be difficult to upgrade an existing IP network to a fully deployed SR network for economic and technical reasons; updating part of the traditional IP network to SR, thus forming a hybrid SR network, is a preferable choice. Because traffic changes dynamically over the course of a day, this paper proposes a weight adjustment algorithm, WASAR, to optimize routing in a dynamic hybrid SR network. WASAR can be divided into three steps. First, representative traffic matrices (TMs) and the expected TM are obtained from historical TMs through an ultra-scalable spectral clustering algorithm. Second, given the network topology, the initial weight setting, and the expected TM, link weight optimization and SR node deployment optimization are realized with a Deep Reinforcement Learning (DRL) algorithm. Third, the flow splitting ratios of SR nodes are optimized in a centralized online manner under dynamic traffic demands to improve network performance. In the evaluation, historical TMs are used to test the performance of the routing configuration obtained by WASAR. Extensive experimental results validate that the proposed WASAR algorithm has superior performance in reducing Maximum Link Utilization (MLU) under dynamic traffic.
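
For reference, the objective WASAR minimizes, Maximum Link Utilization, and a trivial proportional flow split at an SR node can be sketched as follows; the data structures are assumed for illustration, and the proportional split is only a stand-in for the optimized ratios, not the paper's online algorithm.

```python
def max_link_utilization(link_load, link_capacity):
    """MLU = max over links of load / capacity (both dicts keyed by link id)."""
    return max(link_load[l] / link_capacity[l] for l in link_load)

def proportional_split(demand, path_weights):
    """Split one demand over candidate segment paths in proportion to the
    given weights (placeholder for the optimized splitting ratios)."""
    total = sum(path_weights.values())
    return {path: demand * w / total for path, w in path_weights.items()}
```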

5.
    
In recent years, technology has developed rapidly while production costs have continued to fall. In this setting, Internet of Things (IoT) networks comprising sets of Unmanned Aerial Vehicles (UAVs) have received growing attention in applications ranging from civilian to military. Network security, however, poses a serious challenge to UAV networks, and the intrusion detection system (IDS) is an effective means of securing them. Classical IDSs are not adequate for modern computer networks with high bandwidth and heavy data traffic. To improve detection performance and reduce the false alarms generated by IDSs, several researchers have employed Machine Learning (ML) and Deep Learning (DL) algorithms for the intrusion detection problem. In this view, the current research article presents a deep reinforcement learning technique, optimized by the Black Widow Optimization (DRL-BWO) algorithm, for UAV networks. The DRL component involves an improved reinforcement-learning-based Deep Belief Network (DBN) for intrusion detection, and the BWO algorithm is applied to optimize the parameters of the DRL technique, improving the intrusion detection performance of UAV networks. An extensive set of experiments highlights the merits of the proposed model: from the simulation values, it attains high precision, recall, F-measure, and accuracy of 0.985, 0.993, 0.988, and 0.989, respectively.

6.
    
Distributed denial-of-service (DDoS) attacks are designed to interrupt network services such as email servers and webpages in traditional computer networks, and the enormous number of connected devices makes it difficult to operate such networks effectively. Software defined networks (SDN) are managed through a centralized control system: the controller is the brain of any SDN, composing the forwarding tables of all data-plane network switches. Despite the advantages of SDN controllers, DDoS attacks are easier to perpetrate than on traditional networks, because the controller is a single point of failure; if it fails, the entire network fails. This paper offers a Hybrid Deep Learning Intrusion Detection and Prevention (HDLIDP) framework, which blends signature-based detection and deep learning neural networks to detect and prevent intrusions. The framework improves detection accuracy while addressing the aforementioned problems. To validate the framework, experiments are conducted on both traditional and SDN datasets; the findings demonstrate a significant improvement in classification accuracy.
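
A two-stage signature-plus-learning check of the kind described can be sketched as below; the `fingerprint` method, the feature-extraction function, and the scikit-learn-style classifier are placeholders for illustration, not the actual HDLIDP components.

```python
def hybrid_detect(flow, signature_db, model, feature_fn, threshold=0.5):
    """Stage 1: exact signature match against known attack patterns.
    Stage 2: score unmatched flows with a trained classifier."""
    if flow.fingerprint() in signature_db:          # hypothetical flow API
        return "malicious", "signature"
    score = model.predict_proba([feature_fn(flow)])[0][1]
    return ("malicious", "dl") if score >= threshold else ("benign", "dl")
```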

7.
8.
    
Engineering, 2021, 7(8): 1087-1100
Communication-dependent and software-based distributed energy resources (DERs) are extensively integrated into modern microgrids, providing benefits such as increased distributed controllability, scalability, and observability. However, malicious cyber-attackers can exploit various potential vulnerabilities. In this study, a programmable adaptive security scanning (PASS) approach is presented to protect DER inverters against various power-bot attacks. Specifically, three types of attacks are considered: controller manipulation, replay, and injection attacks. The approach employs both a software-defined networking technique and a novel coordinated detection method capable of enabling programmable and scalable networked microgrids (NMs) in an ultra-resilient, time-saving, and autonomous manner. The coordinated detection method efficiently identifies the location and type of power-bot attacks without disrupting normal NM operations. Extensive simulation results validate the efficacy and practicality of PASS for securing NMs.

9.
Wang Xiaohong, Zeng Jing, Ma Xiangcai, Liu Fang. Packaging Engineering, 2020, 41(15): 245-252
Objective: To effectively remove several kinds of image blur and improve image quality, an image deblurring method based on deep reinforcement learning is proposed. Methods: Experiments are conducted on the GoPro and DIV2K datasets, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective evaluation metrics. High-dimensional features of the blurred image are extracted with a convolutional neural network, and a deblurring framework is built by combining deep reinforcement learning with several CNN deblurring tools; PSNR serves as the training reward function for selecting the optimal restoration strategy, so that the blurred image is restored step by step. Results: After training and testing, the proposed method achieves better subjective visual quality than existing mainstream algorithms, along with better PSNR and SSIM values. Conclusion: The experimental results show that the method effectively handles problems such as Gaussian blur and motion blur, achieves good visual quality, and offers a useful reference for the field of image deblurring.
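
The PSNR reward used to rank candidate restoration steps is straightforward to compute; the sketch below assumes 8-bit images stored as NumPy arrays and defines the per-step reward as the PSNR improvement, which is one plausible reading of the reward design rather than the paper's exact formulation.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def step_reward(reference, before, after):
    """Reward for one restoration step = PSNR gain it produced."""
    return psnr(reference, after) - psnr(reference, before)
```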

10.
Queuing networks have been used with partial success for analytical modelling of manufacturing systems. In this paper, we consider a tandem system with high traffic variability caused by downtime events in the first queue. We propose an improved approximation for departure variability in order to predict the waiting duration at the bottleneck queue located last in the line. We demonstrate that existing methods do not properly approximate such systems and provide reasons and insights. We then propose a new decomposition method that employs variability-function principles. We differentiate between two components of departure variability in multi-class systems: the 'within-class effect', the variability caused by the class's own inter-arrival and service time distributions, and the 'between-class effect', the variability caused by interactions with other classes. Our analysis shows that the first effect can be approximated by existing multi-class decomposition methods, while the second requires a new development; our proposed approximation for the between-class effect is based on simulating a suitable sub-system. The method can model different downtime policies (e.g. FCFS, priority). Numerical experiments show relative errors that are much smaller than those of existing procedures.
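
For context, two standard GI/G/1 building blocks that decomposition methods of this kind start from are the Kingman-type waiting-time approximation and the classical linear approximation of departure variability; the paper's variability-function refinement is meant to replace the latter.

```latex
% Kingman-type waiting time at a queue with utilization \rho, service time S,
% and squared coefficients of variation c_a^2 (arrivals) and c_s^2 (service):
W_q \approx \frac{\rho}{1-\rho}\cdot\frac{c_a^{2}+c_s^{2}}{2}\cdot E[S]
% Classical linear approximation of the departure-stream variability:
c_d^{2} \approx \rho^{2} c_s^{2} + \bigl(1-\rho^{2}\bigr)\, c_a^{2}
```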

11.
Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed version of it. The artificial neural network model, as a data-driven approach, instead tries to model the two processes together without such assumptions; in particular, feedforward backpropagation networks have shown advantages over analytical models in fault number prediction. In this paper, the following approach is explored. First, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to prediction performance. To provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are presented on a real data set.

12.
We develop an initial dynamic power-conscious routing scheme (MPR) that incorporates physical-layer and link-layer statistics to conserve power while compensating for the channel conditions and interference environment at the intended receiver. The aim of MPR is to route a packet on the path that requires the least total power and to have each node transmit with just enough power to ensure reliable communication. We evaluate the performance of MPR and present our preliminary results.
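
Once each link is weighted by the transmit power needed to cross it reliably, choosing the least-total-power route reduces to a shortest-path computation; the sketch below runs Dijkstra over assumed per-link power costs and is only an illustration of that reduction, not the MPR scheme itself.

```python
import heapq

def min_power_path(graph, source, dest):
    """graph[u] -> iterable of (v, power_cost); returns (total_power, path).
    Dijkstra with link weights equal to the transmit power each hop needs."""
    best = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dest:
            path = [u]
            while u in prev:          # rebuild the route by backtracking
                u = prev[u]
                path.append(u)
            return cost, path[::-1]
        if cost > best.get(u, float("inf")):
            continue                  # stale heap entry
        for v, p in graph.get(u, ()):
            nc = cost + p
            if nc < best.get(v, float("inf")):
                best[v], prev[v] = nc, u
                heapq.heappush(heap, (nc, v))
    return float("inf"), []
```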

13.
    
Containers are an emerging virtualization technology, widely adopted in the cloud to provide services because of their lightweight, flexible, isolated, and highly portable properties. Cloud services are often instantiated as clusters of interconnected containers. Due to stochastic service arrivals and the complicated cloud environment, it is challenging to achieve an optimal container placement (CP) scheme. We propose to leverage Deep Reinforcement Learning (DRL) for the CP problem, since it can learn from experience by interacting with the environment and does not rely on a mathematical model or prior knowledge. However, applying DRL directly does not lead to satisfying results because of the sophisticated environment states and huge action space. In this paper, we propose UNREAL-CP, a DRL-based method that places container instances on servers while considering end-to-end delay and resource utilization cost. The proposed method is an actor-critic approach, which has advantages in dealing with the huge action space. Moreover, the idea of auxiliary learning is included in our architecture: we design two auxiliary learning tasks about load balancing to improve algorithm performance. Compared to other DRL methods, extensive simulation results show that UNREAL-CP reduces delay and deployment cost by up to 28.6%, with high training efficiency and response speed.

14.
This paper analyzes the Low-Energy Adaptive Clustering Hierarchy (LEACH) routing protocol, improves the randomness in the number of elected cluster heads, and incorporates residual node energy into cluster-head election. Using the idea that two curves on a Euclidean plane intersect with high probability, multi-hop links are established between cluster heads and the base station, which solves the problem of excessive energy consumption caused by single-hop communication between cluster heads and the base station in the original protocol. Performance analysis and simulation experiments show that the improved protocol effectively balances node energy consumption and prolongs network lifetime.
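
A widely cited way to fold residual energy into LEACH's cluster-head election, in the spirit of the improvement described, is to scale the standard threshold T(n) by the node's remaining-to-initial energy ratio; the sketch below uses that form, with the node attributes assumed for illustration rather than taken from the paper.

```python
import random

def leach_threshold(p, round_no, residual_energy, initial_energy):
    """Energy-aware variant of the LEACH election threshold:
    T(n) = p / (1 - p * (r mod 1/p)) scaled by E_residual / E_initial."""
    base = p / (1.0 - p * (round_no % int(1.0 / p)))
    return base * (residual_energy / initial_energy)

def elects_itself(node, p, round_no):
    """A node eligible this round becomes cluster head if a uniform draw
    falls below its (energy-weighted) threshold."""
    t = leach_threshold(p, round_no, node.energy, node.initial_energy)
    return random.random() < t
```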

15.
Based on an analysis of the traditional LEACH and LEACH-C routing protocols in wireless sensor networks, and drawing on the idea of the MTE routing protocol, a new improved clustering hierarchical routing protocol (ICH) is proposed. Cluster-head nodes may transmit data packets over multiple hops, residual node energy is considered when selecting relay nodes, and the conditions for entering the next round are restricted. Experiments show that the improved ICH protocol achieves a better node survival rate than LEACH-C.

16.
A location-based routing protocol for underwater acoustic sensor networks
Sun Guizhi, Huang Yaoqun. Technical Acoustics, 2007, 26(4): 597-601
Because the underwater environment differs from the terrestrial one, protocols designed for wireless sensor networks cannot be applied directly to underwater sensor networks. Considering the characteristics of the underwater environment, a routing protocol suitable for underwater sensor networks is proposed; it is scalable and energy-efficient. Simulation results show that when node mobility is not very high, the protocol offers high energy efficiency, a high data delivery ratio, and low transmission delay.

17.
    
With the rising demand for data access, network service providers face the challenge of growing capital and operating costs while enhancing network capacity and meeting increased demand. To increase the efficacy of the Software Defined Network (SDN) and Network Function Virtualization (NFV) framework, network security configuration errors must be eradicated, since they create vulnerabilities that affect overall efficiency, reduce network performance, and increase maintenance cost. Existing frameworks lack security, and computer systems face abnormalities, which prompts the need for detection and mitigation methods that proactively keep the system in an operational state. The fundamental concept behind SDN-NFV is the move from execution on specific resources to a programming-based structure. This research combines SDN and NFV for rational decision making to control and monitor traffic in a virtualized environment. The combination is often seen as an extra burden in terms of resource usage in a heterogeneous network environment, but it also provides a solution to critical problems, especially regarding massive network traffic. Attacks have been expanding steadily and are hard to recognize and defend against with conventional methods, so an autonomous system is needed to recognize and characterize abnormal network traffic behaviour. Four types of assaults, namely HTTP Flood, UDP Flood, Smurf Flood, and SiDDoS Flood, are considered in the identified dataset to optimize the stability of the SDN-NFV environment and its security management, using several machine-learning-based characterization techniques: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), and Isolation Forest (IF). Python is used for simulation, with several utilities including the mine package and the open-source Python ML libraries Scikit-learn, NumPy, SciPy, and Matplotlib. A few flood assaults and Structured Query Language (SQL) injection anomalies are validated and effectively identified through the proposed procedure. The classification results are promising, with overall accuracy between 87% and 95% for the SVM, LR, KNN, and IF classifiers in judging whether network traffic is normal or anomalous in the SDN-NFV environment.
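
Since the abstract names scikit-learn classifiers (SVM, KNN, LR) and Isolation Forest for labelling traffic, a minimal version of such a comparison might look like the following; the random feature matrix stands in for the flood/SQL-injection dataset, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: rows are flow feature vectors, y is 0 = normal, 1 = attack.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# Isolation Forest is unsupervised: fit on normal traffic, flag outliers.
iso = IsolationForest(random_state=0).fit(X_tr[y_tr == 0])
pred = (iso.predict(X_te) == -1).astype(int)   # -1 -> anomaly -> attack
print("IF", accuracy_score(y_te, pred))
```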

18.
    
In recent years, Software Defined Networking (SDN) has become an important candidate for communication infrastructure in smart cities, producing a drastic increase in the demand for delivery of video services that are high resolution, multiview, and large-scale in nature. However, such delivery is easily influenced by the heterogeneous behaviour of users' wireless links, which may reduce video quality for some or all clients. The development of SDN opens new possibilities for sophisticated control of video conferences. Moreover, multicast routing with multiple Quality of Service (QoS) constraints is a Nondeterministic Polynomial time (NP) hard problem that can be solved only with the help of metaheuristic optimization algorithms. With this motivation, the current research paper presents a new Improved Black Widow Optimization with Levy Distribution model (IBWO-LD)-based multicast routing protocol for smart cities. The presented IBWO-LD model aims to minimize energy consumption and bandwidth utilization while improving the quality of the video streams that clients receive. In addition, a priority-based scheduling and classifier model is designed to allocate multicast requests based on application type and deadline constraints. A detailed experimental analysis was carried out to assess the outcomes under different aspects, and the results of a comprehensive comparison highlight the superiority of the proposed IBWO-LD model over other methods.

19.
    
As the complexity of deep learning (DL) networks and training data grows enormously, methods that scale with computation are becoming the future of artificial intelligence (AI) development. In this regard, the interplay between machine learning (ML) and high-performance computing (HPC) is an innovative paradigm for speeding up AI research and development. However, building and operating an HPC/AI converged system require broad knowledge to leverage the latest computing, networking, and storage technologies, and an HPC-based AI computing environment needs an appropriate resource allocation and monitoring strategy to use system resources efficiently. We therefore introduce a technique for building and operating a high-performance AI-computing environment with the latest technologies. Specifically, an HPC/AI converged system, the GIST AI-X computing cluster, is configured inside the Gwangju Institute of Science and Technology (GIST) by leveraging the latest Nvidia DGX servers, high-performance storage and networking devices, and various open-source tools; it can serve as a reference for building small or medium-sized HPC/AI converged systems for research and educational institutes. In addition, we propose a resource allocation method for DL jobs that efficiently utilizes computing resources with multi-agent deep reinforcement learning (mDRL). Through extensive simulations and experiments, we validate that the proposed mDRL algorithm helps the HPC/AI converged cluster improve both system utilization and power consumption. By deploying the proposed resource allocation method, total job completion time is reduced by around 20% and inefficient power consumption by around 40%.

20.
For the multi-stage virtual path (VP) control and VP topology optimization problems discussed in earlier work, whose optimization algorithms assume that a set of candidate routes exists between every source-destination (SD) node pair, this paper proposes a supplementary algorithm that can find all possible paths between any two nodes. On this basis, a dynamic virtual circuit (VC) routing strategy is further studied; unlike other routing strategies, it is considered in a more general network environment. Finally, a dynamic VP routing algorithm is given, which is an important component of the dynamic VC routing strategy. Theoretical analysis and experimental results show that these algorithms are correct and of high practical value.
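
The supplementary step of enumerating every loop-free path between a source-destination pair is typically done with a depth-first search and backtracking; the sketch below is a generic version of that enumeration, not the paper's algorithm.

```python
def all_simple_paths(graph, source, dest):
    """graph[u] -> iterable of neighbours; returns every loop-free path
    from source to dest, found by depth-first search with backtracking."""
    path, seen, out = [source], {source}, []

    def dfs(u):
        if u == dest:
            out.append(list(path))
            return
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                path.append(v)
                dfs(v)
                path.pop()        # backtrack
                seen.remove(v)

    dfs(source)
    return out
```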
