Similar Documents
20 similar documents found (search time: 31 ms)
1.
Edge Computing is a new technology in the Internet of Things (IoT) paradigm that allows sensitive data to be delivered to dispersed devices quickly and without delay. Edge is similar to Fog, except that its position at the end devices is much nearer to end-users, letting it process and respond to clients in less time. Further, it aids sensor networks, real-time streaming apps, and the IoT, all of which require high-speed and dependable internet access. For such an IoT system, the Resource Scheduling Process (RSP) is one of the most important tasks. This paper presents an RSP for Edge Computing (EC). The resource characteristics are first standardized and normalized. Next, a Fuzzy Control based Edge Resource Scheduling (FCERS) method is proposed for task scheduling. The results demonstrate that this technique enhances resource scheduling efficiency in EC and Quality of Service (QoS). The experimental study revealed that the proposed FCERS method converges faster than the other methods and reduces the total computing cost, execution time, and energy consumption on average compared to the baseline. The ES allocates more processing resources to each user when the availability of MDs is limited; this improves task execution time and reduces the total task computation cost. Additionally, the proposed FCERS method can more efficiently match user requests to suitable resource categories, better satisfying user requirements.
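The abstract does not describe how FCERS maps fuzzy inputs to scheduling decisions, so the following is only a minimal Python sketch of the general idea: triangular membership functions over normalized node load and latency feed a tiny two-rule base that scores candidate edge nodes. The membership shapes, rules, and node values are all illustrative assumptions, not the paper's design.

```python
# Minimal sketch of fuzzy resource scoring for edge task scheduling.
# Membership functions and the rule base are illustrative assumptions,
# not the FCERS design from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(cpu_load, latency):
    """Score an edge node from normalized CPU load and latency in [0, 1]."""
    low_load = tri(cpu_load, -0.5, 0.0, 0.6)
    low_lat = tri(latency, -0.5, 0.0, 0.6)
    high_load = tri(cpu_load, 0.4, 1.0, 1.5)
    # Two illustrative rules: prefer lightly loaded, low-latency nodes;
    # penalize heavily loaded ones.  min() acts as the fuzzy AND.
    good = min(low_load, low_lat)
    bad = high_load
    return good - bad

nodes = {"edge-1": (0.2, 0.1), "edge-2": (0.8, 0.3), "edge-3": (0.5, 0.05)}
best = max(nodes, key=lambda n: fuzzy_score(*nodes[n]))
print("schedule task on:", best)
```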

2.
Wireless sensor networks (WSNs) and the Internet of Things (IoT) have gained popularity in recent years as an underlying infrastructure for connected devices and sensors in smart cities. The data generated by these sensors are used by smart cities to strengthen their infrastructure, utilities, and public services. WSNs are suitable for long periods of data acquisition in smart cities. To make the networks of smart cities more reliable for sensitive information, a blockchain mechanism has been proposed. A key issue and challenge of WSNs in smart cities is scheduling resources efficiently, which extends the network lifetime of the sensors. In this paper, linear network coding (LNC) for WSNs with blockchain-enabled IoT devices is proposed. Applying LNC reduces the energy consumption of each node. The efficiency and reliability of the proposed model are evaluated and compared to those of existing models. Simulation results demonstrate that the proposed model increases efficiency in terms of the number of live nodes, packet delivery ratio, throughput, and optimized residual energy compared to other current techniques.
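To make the LNC idea concrete, here is a toy Python sketch of random linear coding over GF(2): each coded packet is the XOR of a random subset of equal-length source packets. Real LNC deployments typically work over a larger field such as GF(2^8) and pair coding with a decoder; this only illustrates the encoding step, with made-up sensor payloads.

```python
import random

# Toy random linear network coding over GF(2): each coded packet is the
# XOR of a random subset of source packets.  The field choice and the
# payloads are illustrative; decoding (Gaussian elimination) is omitted.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, n_coded):
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):          # avoid the useless all-zero combination
            coeffs[0] = 1
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_bytes(payload, p)
        coded.append((coeffs, payload))
    return coded

source = [b"tmp=21.5", b"hum=40.2", b"bat=87.0"]   # equal-length readings
for coeffs, payload in encode(source, 4):
    print(coeffs, payload)
```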

3.
With the rapid development of Internet technology, users have an increasing demand for data. The continuous popularization of traffic-intensive applications such as high-definition video, 3D visualization, and cloud computing has driven the rapid evolution of the communications industry. To cope with the huge traffic demand of today's users, 5G networks must be fast, flexible, reliable, and sustainable. Against this background, the academic community has proposed D2D communication. The main feature of D2D communication is that it enables direct communication between devices, thereby effectively improving resource utilization and reducing dependence on base stations, so it can effectively improve the throughput of multimedia data. One of the most significant factors affecting the performance of D2D communication is the co-channel interference that results when multiple D2D users multiplex the same channel resource of a cellular user. To solve this problem, this paper proposes a joint time-scheduling and power-control algorithm. The main idea is to maximize the number of allocated resources in each scheduling period while satisfying quality-of-service requirements. The constrained problem is decomposed into time-scheduling and power-control subproblems. The power-control subproblem is an NP-hard mixed-integer linear program, so a gradual power-control method is proposed. The time-scheduling subproblem is also NP-hard, with a convex-cardinality structure, so a heuristic scheme is proposed to optimize resource allocation. Simulation results show that the proposed algorithm effectively improves resource allocation and overcomes co-channel interference compared with existing algorithms.
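The abstract does not spell out the "gradual" power-control update, so the sketch below uses the classical Foschini–Miljanic-style iteration as a stand-in: each D2D link repeatedly scales its transmit power toward a target SINR, capped at a maximum. The channel gains, noise level, and target are invented numbers for illustration.

```python
# Sketch of an iterative ("gradual") power-control loop in the spirit of
# the classical Foschini-Miljanic update.  The paper's exact rule is not
# given in the abstract; gains and targets here are made-up numbers.

NOISE = 1e-9
P_MAX = 0.1          # watts
TARGET_SINR = 4.0    # linear scale

gain = [[1e-6, 2e-8], [3e-8, 8e-7]]   # gain[i][j]: transmitter j -> receiver i
power = [0.01, 0.01]

def sinr(i, power):
    interf = sum(gain[i][j] * power[j] for j in range(len(power)) if j != i)
    return gain[i][i] * power[i] / (interf + NOISE)

for step in range(20):
    # Each link scales power toward the target SINR, clipped at P_MAX.
    power = [min(P_MAX, power[i] * TARGET_SINR / sinr(i, power))
             for i in range(len(power))]

for i in range(len(power)):
    print(f"link {i}: p={power[i]:.4f} W, SINR={sinr(i, power):.2f}")
```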

4.
The development of Information and Communication Technology has led to the evolution of new computing and communication environments. The technological revolution of the Internet of Things (IoT) has produced applications in almost all domains, from health care and education to entertainment, using sensors and smart devices. One subset of IoT is the Internet of Medical Things (IoMT), which connects medical devices, hardware, and software applications through the internet. IoMT enables secure wireless communication over the Internet to allow efficient analysis of medical data. With these advancements and the widespread use of smart IoT devices in health care technology, threats and malware attacks increase during the transmission of highly confidential medical data. This work proposes a scheme integrating a machine learning approach and blockchain technology to detect malware during data transmission in IoMT. The proposed Machine Learning based Block Chain Technology malware detection scheme (MLBCT-Mdetect) is implemented in three steps: feature extraction, classification, and blockchain. Feature extraction is performed by calculating the weight of each feature and discarding the features with low weight. A Support Vector Machine classifier is employed in the second step to classify malware and benign nodes. The third step uses blockchain to store details of the selected features, which further improves malware detection with significant gains in speed and accuracy. MLBCT-Mdetect achieves higher accuracy with a low false positive rate and a higher true positive rate.
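The first two steps (weight-based feature reduction, then SVM classification) can be sketched in a few lines of scikit-learn. The paper's own weight computation is not specified, so mutual information and synthetic data stand in for it here; only the overall weight-filter-then-classify shape matches the description above.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch of the two ML steps: weight features, drop the low-weight ones,
# then classify malware vs. benign with an SVM.  The weighting scheme
# (mutual information) and the synthetic data are stand-ins.

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
weights = mutual_info_classif(X, y, random_state=0)
keep = weights >= weights.mean()          # discard low-weight features
X_train, X_test, y_train, y_test = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"kept {keep.sum()}/30 features, accuracy={clf.score(X_test, y_test):.3f}")
```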

5.
In the smart city paradigm, the deployment of Internet of Things (IoT) services and solutions requires extensive communication and computing resources to place and process IoT applications in real time, which consumes a lot of energy and increases operational costs. Usually, IoT applications are placed in the cloud to provide high-quality services and scalable resources. However, the existing cloud-based approach should consider the above constraints to efficiently place and process IoT applications. In this paper, an efficient optimization approach for placing IoT applications in a multi-layer fog-cloud environment is proposed using a Mixed-Integer Linear Programming (MILP) model. This approach takes into account IoT application requirements, available resource capacities, and geographical locations of servers, which helps optimize IoT application placement decisions with respect to multiple objectives such as data transmission, power consumption, and cost. Simulation experiments were conducted with various IoT applications (e.g., augmented reality, infotainment, healthcare, and compute-intensive) to simulate realistic scenarios. The results showed that the proposed approach outperformed the existing cloud-based approach, reducing data transmission by 64% and the associated processing and networking power consumption costs by up to 78%. Finally, a heuristic approach was developed to validate and imitate the presented approach. It showed outcomes comparable to the proposed model, with a gap between them of at most 5.4% of the total power consumption.
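A minimal MILP placement sketch illustrates the kind of model involved, here written with the PuLP library: binary variables assign each application to exactly one fog or cloud node, subject to capacity, minimizing a demand-weighted cost. All capacities, demands, and per-unit costs are invented, and the single cost coefficient is a crude proxy for the paper's multiple objectives.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Minimal MILP sketch of IoT application placement across fog and cloud
# nodes.  All numbers are invented for illustration.

apps = {"ar": 4, "health": 2, "video": 6}                  # CPU demand
nodes = {"fog1": (6, 1.0), "fog2": (5, 1.2), "cloud": (50, 3.0)}  # (cap, cost)

prob = LpProblem("iot_placement", LpMinimize)
x = {(a, n): LpVariable(f"x_{a}_{n}", cat=LpBinary)
     for a in apps for n in nodes}

# Objective: total demand-weighted cost (proxy for power + transmission).
prob += lpSum(apps[a] * nodes[n][1] * x[a, n] for a in apps for n in nodes)
for a in apps:                                   # each app placed exactly once
    prob += lpSum(x[a, n] for n in nodes) == 1
for n in nodes:                                  # respect node capacity
    prob += lpSum(apps[a] * x[a, n] for a in apps) <= nodes[n][0]

prob.solve()
for (a, n), var in x.items():
    if var.value() == 1:
        print(f"{a} -> {n}")
```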

6.
In this paper, we propose a fair resource reservation and scheduling algorithm for delay-bounded services. User applications initiate requests by specifying the tolerable delay and priorities. Packets are scheduled according to the requirements negotiated during the resource reservation phase. Instead of tracking the fair utilization before a packet can be served, as in WFQ, the bandwidth share is monitored after the packet is sent. This approach can significantly reduce the computational complexity of WFQ while maintaining long-term fairness. Examples and simulations illustrate the performance differences from WFQ.
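The "monitor the share after sending" idea can be sketched as follows: rather than computing WFQ virtual finish times before each transmission, the scheduler serves whichever backlogged flow is furthest below its weighted share of the bytes already sent, and settles the accounting after the packet goes out. The flows, weights, and packet sizes are made up; this is a minimal sketch of the principle, not the paper's algorithm.

```python
from collections import deque

# Post-send fairness accounting: pick the backlogged flow furthest below
# its weighted share of bytes sent so far, then update usage afterwards.

flows = {
    "voice": {"weight": 2, "sent": 0, "queue": deque([200] * 10)},
    "data":  {"weight": 1, "sent": 0, "queue": deque([1500] * 10)},
}

def next_flow():
    total_w = sum(f["weight"] for f in flows.values() if f["queue"])
    total_sent = sum(f["sent"] for f in flows.values()) or 1
    # Serve the backlogged flow with the largest fairness deficit.
    return min((name for name, f in flows.items() if f["queue"]),
               key=lambda n: flows[n]["sent"] / total_sent
                             - flows[n]["weight"] / total_w)

while any(f["queue"] for f in flows.values()):
    name = next_flow()
    pkt = flows[name]["queue"].popleft()
    flows[name]["sent"] += pkt          # account for usage *after* sending

for name, f in flows.items():
    print(name, "sent", f["sent"], "bytes")
```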

7.
One of the most rapidly growing areas in recent years is the Internet of Things (IoT), which has been applied in widespread fields such as healthcare, smart homes, and industry. Android is one of the most popular operating systems (OS) used by IoT devices for communication and data exchange, capturing more than 70 percent of the market share in 2021. Because of its popularity, the Android OS has been targeted by cybercriminals, introducing problems such as the theft of private information. As reported by a recent study, new Android malware appears almost every 10 seconds. Given this scale of exploitation, an accurate and secure detection system is needed to protect communication and data exchange on Android IoT devices. This paper introduces Droid-IoT, a collaborative framework that detects malicious Android IoT applications by using blockchain technology. Droid-IoT consists of four main engines: (i) a collaborative reporting engine, (ii) a static analysis engine, (iii) a detection engine, and (iv) a blockchain engine. Each engine contributes to detecting malicious applications, minimizing their risk, and reporting any malicious activity. All features are extracted automatically from the inspected applications and classified by the machine learning model, and the results are stored in the blockchain. The performance of Droid-IoT was evaluated by analyzing more than 6000 Android applications and comparing its detection rate with state-of-the-art tools. Droid-IoT achieved a detection rate of 97.74% with a low false positive rate by using an extreme gradient boosting (XGBoost) classifier.
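The detection-engine step maps directly onto the XGBoost API; the sketch below trains a classifier on synthetic features standing in for the statically extracted app features (permissions, API calls, and similar), with invented hyperparameters. It shows the classifier stage only, not the reporting or blockchain engines.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Sketch of the detection-engine step: an XGBoost classifier over
# extracted app features.  Synthetic data stands in for the static
# features Droid-IoT extracts; hyperparameters are illustrative.

X, y = make_classification(n_samples=2000, n_features=50, n_informative=12,
                           weights=[0.7, 0.3], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=7, stratify=y)

model = XGBClassifier(n_estimators=200, max_depth=5, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print(f"detection accuracy: {model.score(X_te, y_te):.4f}")
```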

8.
In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity from Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a core technology to increase the capacity of Radio Networks (RN) and enable Multiple-Input and Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, the key challenge of unfair radio resource handling remains unsolved when massive requests occur concurrently. Imbalanced resource utilization is one of the main issues, arising when the closest RRH is overloaded by receiving more requests than it can serve. To handle this issue effectively, Machine Learning (ML) algorithms play an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve the communication Quality of Service (QoS) in real-time IoT networks. KNN classifies users' requests and steers them to the optimal RRHs that retain higher power. The experimental dataset was generated in software, and the simulation results show a remarkable improvement of the proposed scheme over conventional methods in terms of several significant QoS parameters, including communication reliability, latency, and throughput.
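Since KNN steering is a standard supervised-classification pattern, a short sketch shows the shape of it: train on past (request features, chosen RRH) pairs, then route new requests to the RRH the classifier recommends. The feature set (user position and demanded rate) and the historical labeling rule are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Sketch of KNN-based request steering: learn from past request->RRH
# assignments, then recommend an RRH for each new request.  Features and
# the hypothetical labeling rule below are invented for illustration.

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 3))             # x, y, demanded Mbps
# Hypothetical historical label: RRH index chosen for each past request.
y = (X[:, 0] > 50).astype(int) + 2 * (X[:, 1] > 50).astype(int)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_requests = np.array([[10.0, 80.0, 20.0], [90.0, 15.0, 5.0]])
print("steer requests to RRHs:", knn.predict(new_requests))
```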

9.
This paper studies the problem of scheduling a multiple-load carrier subject to last-in-first-out loading constraints in an automobile assembly line. Two scheduling criteria, the throughput of the assembly line and the material handling distance, are considered in order to maximise the profit of the assembly line. Unlike other studies, the product mix and the weights of the scheduling criteria are considered variable. A scheduling approach is proposed for this problem: whenever the product mix or the weights of the scheduling criteria change, the approach selects an appropriate rule from a set of given rules. The proposed approach is compared with other approaches by simulation in order to verify its performance. The results indicate that, when the product mix and the weights of the scheduling criteria are variable, the proposed scheduling approach outperforms the alternatives.

10.
Well-organized datacentres with interconnected servers constitute the cloud computing infrastructure. User requests are submitted through an interface to these servers, which provide service on an on-demand basis. Scientific applications executed in the cloud using heterogeneous, dynamically allocated resources fall into the NP-hard problem category. Task scheduling in the cloud poses numerous challenges affecting cloud performance; if not handled properly, user satisfaction suffers. More recently, researchers have proposed meta-heuristic solutions for enriching task scheduling in the cloud environment. The prime aim of task scheduling is to utilize the available resources optimally and reduce the time span of task execution. An improved seagull optimization algorithm, which combines features of Cuckoo Search (CS) and the Seagull Optimization Algorithm (SOA), is proposed in this work to enhance scheduling performance in the cloud computing environment. The proposed algorithm aims to minimize the cost and time spent during task scheduling in the heterogeneous cloud environment. Performance evaluation was carried out with the CloudSim 3.0 toolkit by comparing the algorithm with Multi-objective Ant Colony Optimization (MO-ACO), ACO, and Min-Min algorithms. The proposed SOA-CS technique produced improvements of 1.06%, 4.2%, and 2.4% in makespan and reduced the overall cost by 1.74%, 3.93%, and 2.77% when compared with the PSO, ACO, and IDEA algorithms, respectively, with 300 VMs. The comparative simulation results show that the proposed improved seagull optimization algorithm fares better than its contemporaries.
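The distinctive part of such schedulers is the bi-objective fitness they minimize, so the sketch below shows a weighted makespan-plus-cost fitness for a task-to-VM assignment, with plain random search standing in for the SOA-CS population update (which the abstract does not describe). Task lengths, VM speeds and prices, and the 0.5/0.5 weights are invented.

```python
import random

# Sketch of a bi-objective scheduling fitness: weighted makespan plus
# execution cost of a task-to-VM assignment.  Random search stands in
# for the SOA-CS operators; all numbers are illustrative.

tasks = [400, 250, 600, 150, 500]       # task lengths (MI)
vm_speed = [100, 200, 150]              # MIPS per VM
vm_price = [0.02, 0.06, 0.04]           # cost per second per VM

def fitness(assign, w_time=0.5, w_cost=0.5):
    finish = [0.0] * len(vm_speed)
    cost = 0.0
    for t, v in zip(tasks, assign):
        sec = t / vm_speed[v]
        finish[v] += sec
        cost += sec * vm_price[v]
    return w_time * max(finish) + w_cost * cost

best = min(([random.randrange(len(vm_speed)) for _ in tasks]
            for _ in range(2000)), key=fitness)
print("assignment:", best, "fitness:", round(fitness(best), 3))
```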

11.
The term IoT refers to the interconnection and exchange of data among devices/sensors. IoT devices are often small, low cost, and limited in resources, and IoT issues and challenges continue to grow. Security and privacy are among the most important concerns in IoT applications such as smart buildings. Remote cybersecurity attacks are attacks that do not require physical access to the IoT network: the attacker can remotely access and communicate with the IoT devices through a wireless communication channel. Remote cybersecurity attacks are therefore a significant threat. Emerging applications in smart environments such as smart buildings require remote access for both users and resources. Since the user/building communication channel is insecure, a lightweight and secure authentication protocol is required. In this paper, we propose a new secure remote user mutual authentication protocol based on transitory identities and multi-factor authentication for the IoT smart building environment. The protocol ensures that only legitimate users can authenticate with smart building controllers in an anonymous, unlinkable, and untraceable manner. The protocol also avoids the clock synchronization problem and can resist quantum computing attacks. The security of the protocol is evaluated using two different methods: (1) informal analysis and (2) model checking using the Automated Validation of Internet Security Protocols and Applications (AVISPA) toolkit. The communication overhead and computational cost of the proposed protocol are analyzed. The security and performance analysis shows that our protocol is secure and efficient.
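To make the transitory-identity idea concrete, here is a toy Python sketch: both sides share a long-term key, the user proves knowledge of it under a fresh one-time pseudonym, and the controller answers with its own proof, so sessions are mutually authenticated yet unlinkable across runs. This is a generic HMAC challenge-response under invented message formats; it omits the multi-factor and quantum-resistance aspects of the actual protocol.

```python
import hmac, hashlib, secrets

# Toy mutual authentication with a transitory (one-time) identity.
# Message formats are invented; this is not the paper's protocol.

shared_key = secrets.token_bytes(32)        # provisioned out of band

def new_transitory_id():
    return secrets.token_hex(16)            # fresh pseudonym per session

def prove(key, tid, nonce):
    return hmac.new(key, (tid + nonce).encode(), hashlib.sha256).hexdigest()

# --- user side: fresh pseudonym, nonce, and proof of the shared key ---
tid = new_transitory_id()
nonce_u = secrets.token_hex(16)
tag_u = prove(shared_key, tid, nonce_u)

# --- controller side: verify the user, then respond with its own proof ---
assert hmac.compare_digest(tag_u, prove(shared_key, tid, nonce_u))
nonce_c = secrets.token_hex(16)
tag_c = prove(shared_key, tid + nonce_u, nonce_c)

# --- user verifies the controller (mutual authentication) ---
assert hmac.compare_digest(tag_c, prove(shared_key, tid + nonce_u, nonce_c))
print("mutual authentication succeeded under transitory id", tid)
```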

12.
Scheduling problems concern the allocation of limited resources over time among both parallel and sequential activities. Load balancing has been adopted as an optimization criterion for several scheduling problems. However, in many practical situations, a load-balanced solution may not be feasible or attainable. To deal with this limitation, this paper presents a generic mathematical model of load distribution for resource allocation, called desired load distribution (DLD). The objective is to develop a DLD model for scheduling of unrelated parallel machines that can be used both in centralized resource management settings and in agent-based distributed scheduling systems. The paper describes the proposed DLD model in detail, presents a dynamic-programming-based optimization algorithm for the proposed model, and then discusses its application to agent-based distributed scheduling.
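A tiny dynamic-programming sketch illustrates the flavor of fitting machine loads to a desired distribution: the DP state is the per-machine load vector, and the objective is the squared deviation from the target shares. The job sizes, target distribution, and objective are illustrative (and jobs are given machine-independent times only to keep the sketch small, unlike the paper's unrelated-machines setting).

```python
from functools import lru_cache

# DP sketch: place jobs on machines so final loads track a desired load
# distribution.  All numbers and the squared-error objective are
# illustrative, not the paper's DLD model.

jobs = (3, 5, 2, 7, 4)          # processing times (machine-independent here
                                # only to keep the sketch small)
desired = (0.5, 0.3, 0.2)       # target share of total load per machine
total = sum(jobs)

@lru_cache(maxsize=None)
def best(i, loads):
    """Min deviation from the desired distribution after placing jobs[i:]."""
    if i == len(jobs):
        return sum((l - d * total) ** 2 for l, d in zip(loads, desired)), loads
    candidates = []
    for m in range(len(loads)):
        nl = list(loads)
        nl[m] += jobs[i]
        candidates.append(best(i + 1, tuple(nl)))
    return min(candidates)

cost, final_loads = best(0, (0, 0, 0))
print("final loads:", final_loads, "deviation:", round(cost, 2))
```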

13.
With the continuous evolution of the smart grid and global energy interconnection technology, large numbers of intelligent terminals have been connected to the power grid and can serve as edge nodes providing resource services. Traditional cloud computing can provide storage and task computing services in the power grid, but it faces challenges such as resource bottlenecks, time delays, and limited network bandwidth. Edge computing is an effective supplement to cloud computing because it can provide users with local computing services at lower latency. However, because the resources of a single edge node are limited, resource-intensive tasks must be divided into many subtasks and assigned to different cooperating edge nodes, making efficient task scheduling an important issue. In this paper, a two-layer resource management scheme is proposed based on the concept of edge computing. In addition, a new task scheduling algorithm named GA-EC (Genetic Algorithm for Edge Computing) is put forth that can dynamically schedule tasks according to different scheduling goals. The simulation shows that the proposed algorithm has a beneficial effect on energy consumption and load balancing, and reduces time delay.
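A bare-bones genetic algorithm shows the kind of search GA-EC performs over subtask-to-node assignments. The fitness below minimizes makespan only (a proxy for delay and load balance); the real algorithm also weighs energy, and its operators and parameters are not given in the abstract, so everything here is an illustrative stand-in.

```python
import random

# Bare-bones GA for assigning subtasks to edge nodes.  Chromosome:
# one node index per subtask.  Fitness, operators, and parameters are
# illustrative assumptions.

random.seed(1)
tasks = [4, 2, 7, 3, 5, 1, 6]          # subtask costs
N_NODES, POP, GENS = 3, 30, 60

def fitness(chrom):                    # lower is better: makespan
    loads = [0] * N_NODES
    for t, n in zip(tasks, chrom):
        loads[n] += t
    return max(loads)

def crossover(a, b):
    cut = random.randrange(1, len(tasks))
    return a[:cut] + b[cut:]

def mutate(c):
    c = c[:]
    c[random.randrange(len(c))] = random.randrange(N_NODES)
    return c

pop = [[random.randrange(N_NODES) for _ in tasks] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]            # keep the better half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=fitness)
print("assignment:", best, "makespan:", fitness(best))
```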

14.
The world is rapidly changing with the advance of information technology, and the expansion of the Internet of Things (IoT) is a huge step in the development of the smart city. The IoT consists of connected devices that transfer information, and its architecture permits on-demand services from a public pool of resources. Cloud computing plays a vital role in developing IoT-enabled smart applications, and its integration enhances the offering of distributed resources in the smart city. Improper management of the security requirements of cloud-assisted IoT systems can create risks to availability, security, performance, confidentiality, and privacy. A key reason for the failure of cloud- and IoT-enabled smart city applications is improper security practice in the early stages of development. This article proposes a framework to collect security requirements during the initial development phase of cloud-assisted IoT-enabled smart city applications. Its three-layered architecture includes privacy-preserved stakeholder analysis (PPSA), security requirement modeling and validation (SRMV), and secure cloud assistance (SCA). A case study highlights the applicability and effectiveness of the proposed framework, and a hybrid survey enables the identification and evaluation of significant challenges.

15.
Internet of Things (IoT) devices handle large amounts of data in many fields, including medicine, business, and engineering. User authentication is paramount in the IoT era to assure the security of connected devices. However, traditional authentication methods and conventional biometrics-based approaches such as face recognition, fingerprints, and passwords are vulnerable to various attacks, including smudge attacks, heat attacks, and shoulder-surfing attacks. Behavioral biometrics, enabled by the powerful sensing capabilities of IoT devices such as smart wearables and smartphones, makes continuous authentication possible. Artificial Intelligence (AI)-based approaches promise to refine large amounts of homogeneous biometric data into innovative user authentication solutions. This paper presents a new continuous passive authentication approach capable of learning the signatures of IoT users from smartphone sensors such as the gyroscope, magnetometer, and accelerometer, recognizing users by their physical activities. The approach integrates convolutional neural network (CNN) and recurrent neural network (RNN) models to learn signatures of human activities from different users. A series of experiments on the MotionSense dataset validates the effectiveness of the proposed method, which offers a competitive verification accuracy of 98.4%. We compared the proposed method with several conventional machine learning and CNN models and found that it achieves higher identification accuracy than recently developed verification systems. The high accuracy achieved proves its effectiveness in recognizing IoT users passively through their physical activity patterns.
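A compact Keras sketch shows how a CNN+RNN pipeline for this task typically fits together: Conv1D layers extract local motion patterns from windowed accelerometer/gyroscope signals, an LSTM models their temporal order, and a softmax layer identifies the user. The window size, layer sizes, and random stand-in data are assumptions, not the paper's architecture or the MotionSense preprocessing.

```python
import numpy as np
from tensorflow.keras import layers, models

# Sketch of a CNN+RNN pipeline for sensor-based continuous
# authentication.  Shapes and layer sizes are illustrative.

N_USERS, WINDOW, CHANNELS = 10, 128, 6   # 6 = 3-axis accel + 3-axis gyro

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(32, 5, activation="relu"),   # local motion patterns
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                           # temporal dynamics
    layers.Dense(N_USERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data shaped like windowed MotionSense-style signals.
X = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, N_USERS, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:1]).argmax())           # predicted user id
```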

16.
Existing programming frameworks for heterogeneous clusters focus on exploiting heterogeneous resources without fully considering the longer job completion times caused by contention for shared resources. To address this, a Heterogeneous Dynamic Affinity Scheduling (HDAS) algorithm is proposed and implemented on top of the Hadoop+ framework and a heterogeneous task model. The algorithm uses Hadoop's heartbeat mechanism to monitor resource usage and real-time load on each node, computes the affinity between heterogeneous resources and tasks using resource-specific strategies, and dispatches tasks accordingly. This makes fuller use of system resources, reducing the task delays caused by shared-resource contention and improving overall system throughput, while ensuring that every application submitted to the system starts executing within a bounded time after launch. Experiments on 25 mixed workloads show that the Hadoop+ framework with HDAS achieves an average speedup of 21.9x over the Hadoop implementation, clearly better than the scheduling policy based on the heterogeneous task model alone (17.9x), and keeps the average task delay under 6% for 21 of the workloads. The optimization is especially pronounced on mixed workloads whose tasks have diverse resource demands.
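A small sketch of affinity-based dispatch from heartbeat reports conveys the core mechanism: score each node by how well its spare capacity matches a task's resource demand, penalizing contention, and send the task to the best-scoring node. The affinity formula and the node/task numbers below are illustrative stand-ins; the paper computes affinity with per-resource-type strategies that the abstract does not detail.

```python
# Affinity-based dispatch from heartbeat reports, in the spirit of HDAS.
# The affinity formula (resource fit minus shortfall penalty) and all
# numbers are illustrative assumptions.

heartbeats = {   # node -> normalized free capacity per resource type
    "node-a": {"cpu": 0.7, "gpu": 0.0, "io": 0.4},
    "node-b": {"cpu": 0.2, "gpu": 0.9, "io": 0.6},
    "node-c": {"cpu": 0.5, "gpu": 0.1, "io": 0.9},
}

def affinity(task_demand, free):
    """Higher when the node has spare capacity exactly where the task needs it."""
    return sum(min(free[r], d) - max(0.0, d - free[r])
               for r, d in task_demand.items())

tasks = [
    {"cpu": 0.6, "gpu": 0.0, "io": 0.1},   # CPU-bound map task
    {"cpu": 0.1, "gpu": 0.8, "io": 0.1},   # GPU-bound task
]
for i, t in enumerate(tasks):
    node = max(heartbeats, key=lambda n: affinity(t, heartbeats[n]))
    print(f"task {i} -> {node}")
```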

17.
In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems suffer from limited throughput, poor scalability, and high latency. Because consensus algorithms struggle to manage node identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, because of poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger in which nodes are placed according to their ranking positions in the graph. It allows honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. Using a three-step strategy, the protocol secures the system against double-spending attacks and achieves higher throughput and scalability. The first step is the safe entry of nodes into the system by verifying their private and public keys. The next step builds the advanced DAG ledger so that nodes can start producing blocks and verifying transactions. In the third step, a ranking algorithm separates the nodes created by attackers; after eliminating attacker nodes, the remaining nodes are ranked by their performance in the system, and true nodes' blocks are arranged in topological order. As a result, the ADR protocol is suitable for Internet of Things (IoT) applications. We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and network liveness while adding malicious nodes. Based on the simulation results, transaction performance was significantly improved over blockchains such as IOTA and ByteBall.
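The third step (drop attacker blocks, then arrange honest blocks in topological order) can be sketched with plain Kahn's algorithm over the approval DAG. The DAG and the flagged set below are invented inputs, and the ordering here is simple topological sorting rather than the paper's full ranking scheme.

```python
from collections import deque

# Sketch of the third ADR step: exclude blocks from flagged (attacker)
# producers, then topologically order the remaining DAG blocks.
# The DAG and flagging are invented; this is Kahn's algorithm, not the
# paper's full scoring scheme.

approves = {            # block -> blocks it approves (DAG edges)
    "b1": [], "b2": ["b1"], "b3": ["b1"], "b4": ["b2", "b3"], "bX": ["b2"],
}
flagged = {"bX"}        # blocks from nodes the ranking step rejected

blocks = {b: [p for p in ps if p not in flagged]
          for b, ps in approves.items() if b not in flagged}
indeg = {b: len(ps) for b, ps in blocks.items()}
children = {b: [c for c, ps in blocks.items() if b in ps] for b in blocks}

order, ready = [], deque(b for b, d in indeg.items() if d == 0)
while ready:
    b = ready.popleft()
    order.append(b)
    for c in children[b]:
        indeg[c] -= 1
        if indeg[c] == 0:
            ready.append(c)

print("confirmed block order:", order)   # e.g. ['b1', 'b2', 'b3', 'b4']
```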

18.
Cyberattacks are becoming increasingly sophisticated, requiring effective intrusion detection systems (IDSs) to monitor computer resources and report anomalous or suspicious actions. With the popularity of Internet of Things (IoT) technology, the security of IoT networks is becoming a vital problem. Because of the huge number and varied kinds of IoT devices, protecting an IoT framework with a typical IDS is challenging: typical IDSs have limitations when deployed in IoT networks because of resource constraints and complexity. Therefore, this paper presents a new Blockchain Assisted Intrusion Detection System using Differential Flower Pollination with Deep Learning (BAIDS-DFPDL) model for the IoT environment. The presented BAIDS-DFPDL model focuses on the identification and classification of intrusions in the IoT environment. To accomplish this, it uses blockchain (BC) technology for effective and secure data transmission among the agents. Besides, the model applies a Differential Flower Pollination based feature selection (DFPFS) technique to select features. Finally, sailfish optimization (SFO) with a Restricted Boltzmann Machine (RBM) model is applied for effectual recognition of intrusions. Simulation results on a benchmark dataset exhibit the enhanced performance of the BAIDS-DFPDL model over other models in recognizing intrusions.
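For the classification stage, a minimal scikit-learn sketch pairs an RBM feature learner with a simple classifier over synthetic traffic records. The flower-pollination feature selection and sailfish hyperparameter tuning of the actual BAIDS-DFPDL model are replaced here by defaults; only the RBM-then-classify shape matches the description.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Sketch of the classification stage: an RBM learns latent features of
# (synthetic) traffic records; a simple classifier labels intrusions.

X, y = make_classification(n_samples=1500, n_features=40, n_informative=10,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                    # RBM expects [0, 1] inputs
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=15, random_state=3)),
    ("clf", LogisticRegression(max_iter=500)),
])
pipe.fit(X_tr, y_tr)
print(f"intrusion-detection accuracy: {pipe.score(X_te, y_te):.3f}")
```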

19.
The requirement for high-quality seafood is a global challenge in today's world due to climate change and natural resource limitations. Modern Internet of Things (IoT) based fish farming systems can significantly optimize seafood production by minimizing resource utilization and improving healthy fish production. This objective requires intensive monitoring, prediction, and control of the leading factors that affect fish growth, including temperature, the potential of hydrogen (pH), water level, and feeding rate. This paper proposes an IoT-based predictive optimization approach for efficient control and energy utilization in smart fish farming. The proposed fish farm control mechanism uses predictive optimization to address the problems of water quality control and efficient energy consumption. Indoor and outdoor fish farm measurements are applied to predict the water quality parameters, and a novel objective function is proposed to achieve an optimal fish growth environment based on the predicted parameters. Fuzzy logic control calculates control parameters for IoT actuators from the predicted optimal water quality parameters while minimizing energy consumption. To evaluate the efficiency of the proposed system, the overall approach was deployed to a fish tank as a case study, and a number of experiments were carried out. The results show that the predictive optimization module maintained the water quality parameters at the optimal level with nearly 30% energy efficiency at the maximum actuator control rate compared with other control levels.
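The predictive-optimization loop can be sketched as: predict the next water-quality value for each candidate actuator rate, then pick the rate minimizing quality error plus an energy penalty. The linear one-step "prediction" model, setpoints, and weights below are all illustrative assumptions, not the paper's objective function.

```python
# Sketch of the predictive-optimization loop: choose an actuator rate
# that keeps the predicted water quality near its optimum while
# penalizing energy use.  All models and numbers are illustrative.

OPT_TEMP = 26.0          # optimal tank temperature (deg C)
ENERGY_WEIGHT = 0.05     # trade-off between quality error and energy

def predict_temp(current, outdoor, heater_rate):
    """Toy one-step prediction: drift toward outdoor, pushed by the heater."""
    return current + 0.1 * (outdoor - current) + 0.08 * heater_rate

def cost(current, outdoor, rate):
    err = predict_temp(current, outdoor, rate) - OPT_TEMP
    return err ** 2 + ENERGY_WEIGHT * rate       # quality error + energy

current, outdoor = 24.0, 18.0
rates = range(0, 101, 5)                         # candidate heater levels (%)
best_rate = min(rates, key=lambda r: cost(current, outdoor, r))
print(f"set heater to {best_rate}% "
      f"(predicted temp {predict_temp(current, outdoor, best_rate):.1f} C)")
```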

20.
The drum–buffer–rope (DBR) is a scheduling mechanism under the Theory of Constraints (TOC) philosophy. In DBR, the 'drum' is a production schedule on the capacity-constrained resources (CCRs), which sets the pace of production for the whole system; the 'rope' is a mechanism to release the required material to the CCRs; and the 'buffer' protects the CCRs from starvation due to statistical fluctuations. In a non-identical parallel machine flow-shop environment, estimating an efficient rope and time buffer for DBR implementation is not easy because of the complexity of non-identical parallel machine loading. This paper proposes a new scheduling method, called the modified DBR (MOD-DBR). It applies a backward finite-capacity scheduling technique, covering machine loading and detailed scheduling, in place of the rope mechanism in DBR. The scheduling performance of MOD-DBR is evaluated under variable processing-time conditions. The experimental results indicate that MOD-DBR without a time buffer outperformed DBR with a considerable buffer on average flow time, while matching its performance on tardiness, constraint resource utilization, and throughput.
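A toy sketch of backward finite-capacity loading conveys the core move MOD-DBR makes in place of the rope: walk back from each job's due date and claim the latest block of free slots on an available machine, so material is released as late as capacity allows. Jobs, machines, and unit-hour slots are invented, and infeasible jobs are not handled in this sketch.

```python
# Toy backward finite-capacity loading: schedule each job as late as its
# due date and machine capacity allow.  All data are illustrative.

jobs = [("J1", 3, 10), ("J2", 4, 9), ("J3", 2, 10)]   # (name, hours, due)
machines = {"M1": set(), "M2": set()}                 # occupied hour slots

def latest_fit(busy, hours, due):
    """Latest start so the job finishes by its due date on this machine."""
    for start in range(due - hours, -1, -1):
        if all(t not in busy for t in range(start, start + hours)):
            return start
    return None

schedule = {}
for name, hours, due in sorted(jobs, key=lambda j: j[2]):  # earliest due first
    options = [(m, latest_fit(busy, hours, due))
               for m, busy in machines.items()]
    # Pick the machine offering the latest feasible start.
    m, start = max((o for o in options if o[1] is not None),
                   key=lambda o: o[1])
    machines[m].update(range(start, start + hours))
    schedule[name] = (m, start, start + hours)

print(schedule)
```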
