Similar Documents (20 results)
1.

Fog computing is considered a formidable next-generation complement to cloud computing. Nowadays, in light of the dramatic rise in the number of IoT devices, several problems have arisen in cloud architectures. By introducing fog computing as an intermediate layer between user devices and the cloud, one can extend cloud computing's processing and storage capability. Offloading can be used as a mechanism that transfers computation, data, and energy consumption from resource-limited user devices to resource-rich fog/cloud layers in order to improve application quality of experience and overall system performance. This paper provides a systematic and comprehensive study of current and recent work on fog offloading mechanisms. The pros and cons of each selected paper are explored and analyzed to identify and address the potential and the open issues of offloading mechanisms in a fog environment. We classify offloading mechanisms in a fog system into four groups: computation-based, energy-based, storage-based, and hybrid approaches. Furthermore, this paper explores the offloading metrics, applied algorithms, and evaluation methods related to the chosen offloading mechanisms in fog systems. Additionally, the open challenges and future trends derived from the reviewed studies are discussed.


2.
Fog computing extends the computing power and data-analytics applications of cloud computing to the network edge, meeting the low-latency and mobility requirements of IoT devices, but it also introduces data security and privacy problems. Traditional attribute-based encryption techniques designed for cloud computing are not suitable for the resource-constrained IoT devices in a fog environment and make attribute changes hard to manage. To address this, an attribute-based encryption scheme that supports outsourced encryption/decryption and revocation is proposed, built on a three-layer "cloud-fog-terminal" system model. By introducing attribute-group keys, the scheme achieves dynamic key updates and satisfies the requirement of immediate attribute revocation in fog computing. On this basis, part of the complex encryption and decryption computation on terminal devices is outsourced to fog nodes to improve computational efficiency. Experimental results show that, compared with schemes such as KeyGen and Enc, the proposed scheme offers better computational efficiency and reliability.

3.
Every day, more and more data is produced by Internet of Things (IoT) applications. IoT data differ in amount, diversity, veracity, and velocity. Because of latency, handling many types of data in cloud computing is unsuitable for time-sensitive applications. When users move from one site to another, mobility also adds to the latency. By placing computing close to IoT devices with mobility support, fog computing addresses these problems. An efficient Load Balancing Algorithm (LBA) improves user experience and Quality of Service (QoS). A Classification of Request (CoR) based Resource-Adaptive LBA is proposed in this research. This technique clusters fog nodes using an efficient K-means clustering algorithm and then uses a decision tree approach to categorize requests. Classifying requests as time-sensitive or delay-tolerable facilitates the decision-making process, and the LBA operates based on these classifications. The MobFogSim simulation program is used to assess how well the algorithm performs with mobility features. The results demonstrate that the LBA improves total system performance, reaching 90.8%. Using the LBA, several metrics may be examined, including Response Time (RT), delay (d), Energy Consumption (EC), and latency. Through on-demand provisioning of the necessary resources to IoT users, the proposed LBA ensures effective resource usage.
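As an illustration of the pipeline this abstract describes (cluster fog nodes with K-means, then classify incoming requests with a decision tree before load balancing), the following minimal sketch uses scikit-learn on synthetic data; the feature choices, labels, and routing rule are assumptions, not the paper's actual implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic fog-node features: [cpu_capacity, free_memory, current_load] (assumed).
node_features = rng.random((30, 3))
node_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(node_features)

# Synthetic request features: [payload_kb, deadline_ms, cpu_cycles] (assumed),
# labeled 1 = time-sensitive, 0 = delay-tolerable.
req_features = rng.random((200, 3))
req_labels = (req_features[:, 1] < 0.3).astype(int)  # tight deadline -> time-sensitive
classifier = DecisionTreeClassifier(max_depth=3).fit(req_features, req_labels)

def dispatch(request):
    """Route time-sensitive requests to the least-loaded cluster, others to any cluster."""
    time_sensitive = classifier.predict([request])[0] == 1
    load_per_cluster = [
        node_features[node_clusters.labels_ == c, 2].mean() for c in range(3)
    ]
    target = int(np.argmin(load_per_cluster)) if time_sensitive else int(rng.integers(0, 3))
    return target, time_sensitive

print(dispatch([0.5, 0.1, 0.4]))  # -> (cluster_id, True) for a tight-deadline request
```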

4.
The evolution of edge computing devices has enabled machine intelligence techniques to process data close to its producers (the sensors) and end-users. Although edge devices are usually resource-constrained, distributing processing services among several nodes enables a processing capacity similar to cloud environments. However, the edge computing environment is highly dynamic, which affects the availability of nodes in the distributed system. In addition, the processing workload of each node can change constantly. Thus, the scaling of processing services needs to be adjusted rapidly, avoiding bottlenecks or wasted resources while meeting the applications' QoS requirements. This paper presents an auto-scaling subsystem for container-based processing services using online machine learning. The auto-scaling follows the MAPE-K control loop to dynamically adjust the number of containers in response to workload changes. We designed the approach for scenarios where the number of processing requests is unknown beforehand. We developed a hybrid auto-scaling mechanism that behaves reactively while a predictive online machine learning model is continuously trained. When the prediction model reaches a desirable performance, the auto-scaling acts proactively, using predictions to anticipate scaling actions. An experimental evaluation demonstrated the feasibility of the architecture. Using a real application workload, our solution achieved fewer service level agreement (SLA) violations and fewer scaling operations to meet demand than purely reactive and no-scaling approaches. It also wasted fewer resources than the other techniques.
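The hybrid reactive/proactive loop described above can be sketched roughly as follows; the thresholds, the simple exponential-smoothing predictor, and the accuracy criterion are illustrative assumptions, not the paper's actual MAPE-K implementation:

```python
class HybridAutoScaler:
    """Reactive scaling until an online predictor is accurate enough, then proactive."""

    def __init__(self, min_c=1, max_c=20, per_container_rps=50, alpha=0.3, tol=0.15):
        self.containers = min_c
        self.min_c, self.max_c = min_c, max_c
        self.per_container_rps = per_container_rps
        self.alpha = alpha          # smoothing factor of the online predictor
        self.tol = tol              # relative-error tolerance to switch to proactive mode
        self.forecast = None
        self.errors = []

    def _learn(self, workload):
        if self.forecast is not None:
            self.errors.append(abs(self.forecast - workload) / max(workload, 1.0))
        # Online update: exponential smoothing (stands in for the online ML model).
        self.forecast = workload if self.forecast is None else (
            self.alpha * workload + (1 - self.alpha) * self.forecast)

    def step(self, workload_rps):
        """One MAPE-K iteration: Monitor -> Analyze -> Plan -> Execute."""
        self._learn(workload_rps)
        recent = self.errors[-10:]
        proactive = len(recent) == 10 and sum(recent) / 10 < self.tol
        demand = self.forecast if proactive else workload_rps
        planned = max(self.min_c, min(self.max_c, -(-int(demand) // self.per_container_rps)))
        self.containers = planned   # Execute: set the container replica count
        return planned, "proactive" if proactive else "reactive"

scaler = HybridAutoScaler()
for load in [80, 120, 200, 260, 240, 230, 220, 210, 215, 225, 230, 240]:
    print(scaler.step(load))
```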

5.
Fog and cloud computing are ubiquitous computing paradigms based on the concepts of utility and grid computing. Cloud service providers permit flexible and dynamic access to virtualized computing resources on a pay-per-use basis for end users. Users with mobile devices prefer to process as many applications as possible locally, with a fog layer providing the infrastructure for storing and processing applications. If the fog layer of the mobile device cannot satisfy the resource demands, the job is transferred to the cloud for processing. Because of the large number of jobs and the limited resources, the fog layer is prone to deadlock at very large scale. Therefore, Quality of Service (QoS) and reliability are important aspects of a heterogeneous fog and cloud framework. In this paper, a Social Network Analysis (SNA) technique is used to detect deadlock over resources in the fog layer of the mobile device. A new concept of free-space fog is proposed, which helps to remove deadlock by collecting the available free resources from all allocated jobs. A set of rules is proposed for a deadlock manager to increase resource utilization in the fog layer and decrease the response time of requests when a deadlock is detected by the system. Two different clouds (a public cloud and a virtual private cloud), apart from the fog layer and the free-space fog, are used to manage deadlock effectively. Selection among them is done by assigning priorities to the requests and providing resources accordingly from the fog and the cloud. Therefore, QoS as well as reliability can be provided to users using the proposed framework. CloudSim is used to evaluate resource utilization using a Resource Pool Manager (RPM). The results show the effectiveness of the proposed technique.
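The paper applies SNA metrics to resources in the fog layer; a common graph-based way to flag a resource deadlock is to look for a cycle in the wait-for graph, as in the minimal sketch below. The job names and edges are hypothetical, and this is not the paper's SNA formulation:

```python
import networkx as nx

# Directed wait-for graph: an edge A -> B means job A waits for a resource held by job B.
waits_for = nx.DiGraph([
    ("job1", "job2"),
    ("job2", "job3"),
    ("job3", "job1"),   # closes a cycle -> deadlock among job1..job3
    ("job4", "job2"),
])

def detect_deadlock(graph):
    try:
        cycle = nx.find_cycle(graph)
        return [edge[0] for edge in cycle]          # jobs involved in the deadlock
    except nx.NetworkXNoCycle:
        return []

deadlocked = detect_deadlock(waits_for)
if deadlocked:
    # A deadlock manager could now reclaim free capacity (the "free-space fog" idea)
    # or escalate one of the jobs to the public / virtual private cloud.
    print("deadlock detected among:", deadlocked)
```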

6.
The limited energy supply, computing, storage, and transmission capabilities of mobile devices pose a number of challenges for improving the quality of service (QoS) of various mobile applications, which has stimulated the emergence of many enhanced mobile computing paradigms, such as mobile cloud computing (MCC), fog computing, and mobile edge computing (MEC). Mobile devices need to partition mobile applications into related tasks and decide which tasks should be offloaded to remote computing facilities provided by cloud computing, fog nodes, etc. It is very important yet difficult to decide which tasks should be offloaded and where they should be scheduled, since this greatly impacts the applications' timeliness and the mobile devices' lifetime. In this paper, we model the task scheduling problem at the end-user mobile device as an energy consumption optimization problem, while taking into account task dependency, data transmission, and other constraints such as task deadlines and cost. We further present several heuristic algorithms to solve it. A series of simulation experiments is conducted to evaluate the performance of the algorithms, and the results show that our proposed algorithms outperform state-of-the-art algorithms in energy efficiency as well as response time.
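A minimal sketch of an energy-aware offloading heuristic of the kind this abstract discusses is shown below; the energy model (local CPU energy vs. radio transmission energy) and all parameter values are illustrative assumptions rather than the paper's formulation:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cycles: float        # CPU cycles required
    data_bits: float     # input data to upload if offloaded
    deadline_s: float

# Hypothetical device/network parameters.
LOCAL_CPS = 1e9          # local CPU speed (cycles/s)
LOCAL_J_PER_CYCLE = 1e-9 # energy per local cycle (J)
REMOTE_CPS = 8e9         # fog/cloud CPU speed (cycles/s)
UPLINK_BPS = 5e6         # uplink rate (bits/s)
TX_POWER_W = 0.5         # radio transmit power (W)

def plan(tasks):
    """Greedy heuristic: offload a task when it saves energy and still meets its deadline."""
    decisions = {}
    for t in tasks:                      # assume tasks are already in dependency order
        e_local = t.cycles * LOCAL_J_PER_CYCLE
        t_local = t.cycles / LOCAL_CPS
        t_tx = t.data_bits / UPLINK_BPS
        e_offload = TX_POWER_W * t_tx    # device only pays for transmission
        t_offload = t_tx + t.cycles / REMOTE_CPS
        if e_offload < e_local and t_offload <= t.deadline_s:
            decisions[t.name] = "offload"
        elif t_local <= t.deadline_s:
            decisions[t.name] = "local"
        else:
            decisions[t.name] = "offload"  # last resort: remote is the only way to meet the deadline
    return decisions

print(plan([Task("decode", 4e8, 2e6, 1.0), Task("detect", 2e9, 5e5, 0.5)]))
```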

7.
Cloud computing allows the execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). On the other hand, general big-data batch processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Therefore, utilization-efficient scheduling of heterogeneous workloads is a challenging issue for cloud owners. In this paper, addressing the impact of workload heterogeneity on low utilization of the cloud computing system, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly in a preemptive way. However, utilization-efficient resource allocation requires an exact model of the workloads. So, first, a novel methodology to model the processing jobs and other cloud applications is proposed. Such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process. This enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and the associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to effectively and efficiently utilize idle reserved resources for the processing jobs in cloud paradigms. Then, performance metrics such as the service time of the processing jobs are investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experimental results. The simulation and experimental results also shed light on the achievable QoS level for the preemptively allocated processing jobs.
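The core idea of running batch jobs on idle reserved capacity and preempting them when the QoS-sensitive applications need the resources back can be sketched as below; the capacity numbers and the simple preemption rule are assumptions for illustration only, not the paper's Markovian model:

```python
class PreemptiveAllocator:
    """Batch jobs borrow idle reserved VMs and are preempted when applications reclaim them."""

    def __init__(self, reserved_vms=10):
        self.reserved_vms = reserved_vms
        self.app_vms = 0          # VMs in use by QoS-sensitive applications
        self.batch_vms = 0        # VMs lent to batch processing jobs

    def idle(self):
        return self.reserved_vms - self.app_vms - self.batch_vms

    def submit_batch(self, vms):
        granted = min(vms, self.idle())
        self.batch_vms += granted
        return granted            # batch jobs only ever get leftover capacity

    def app_demand(self, vms):
        needed = vms - self.app_vms
        if needed > self.idle():                     # not enough idle capacity:
            preempted = min(self.batch_vms, needed - self.idle())
            self.batch_vms -= preempted              # preempt batch jobs first
        self.app_vms = min(vms, self.reserved_vms - self.batch_vms)
        return self.app_vms

alloc = PreemptiveAllocator()
alloc.app_demand(4)        # applications take 4 VMs
alloc.submit_batch(6)      # batch jobs fill the remaining 6
alloc.app_demand(8)        # demand spike: 4 batch VMs are preempted
print(alloc.app_vms, alloc.batch_vms, alloc.idle())   # 8 2 0
```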

8.
With the massive growth of information generation, processing, and distribution in the Internet of Things (IoT), existing cloud architectures need to be designed more effectively using fog networks. The current IP-address-based Internet architecture is unable to deliver the desired Quality of Service (QoS) for the increasing demands of fog networking-based applications. To this end, Content-Centric Networking (CCN) has been developed as a potential future Internet architecture. CCN provides name-based content delivery and has been established as an architecture for next-generation fog applications. The CCN-based fog environment uses the caches of in-network fog nodes to place contents near the end-user devices. Generally, the caching capacity of fog nodes is very small compared to the content catalog size. Therefore, efficient content placement decisions are vital for improving network performance. To enhance content retrieval performance for end-users, a novel content caching scheme named "Dynamic Partitioning and Popularity based Caching for Optimized Performance (DPPCOP)" is proposed in this paper. First, the proposed scheme partitions the fog network by grouping the fog nodes into non-overlapping partitions to improve content distribution in the network and to ensure efficient content placement decisions. During partitioning, the scheme uses the Elbow method to obtain a "good" number of partitions. Then, the DPPCOP scheme analyzes the partition information along with content popularity and distance metrics to place popular contents near the end-user devices. Extensive simulations on realistic network topologies demonstrate the superiority of the DPPCOP caching strategy over existing schemes on various performance measurement parameters such as cache hit ratio, delay, and average network traffic load. This makes the proposed scheme suitable for next-generation CCN-based fog networks and futuristic Internet architectures for Industry 4.0.
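A rough sketch of two ingredients named above (choosing the number of fog-node partitions with the Elbow method, then filling each partition's cache with the most popular contents) is given below using scikit-learn K-means; the node coordinates, popularity values, and cache sizes are assumed, and the distance metric used by DPPCOP is omitted for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
node_xy = rng.random((40, 2))          # hypothetical fog-node coordinates

# Elbow method: pick k where the decrease in inertia flattens out
# (here: the largest second difference of the inertia curve).
ks = range(1, 9)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(node_xy).inertia_ for k in ks]
second_diff = np.diff(inertias, 2)
best_k = int(np.argmax(second_diff)) + 2          # +2 maps the index back to a value of k
partitions = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(node_xy)

# Popularity-based placement: each partition caches the most popular contents
# that fit in its (assumed) cache budget.
popularity = rng.zipf(1.3, 500)                   # hypothetical request counts per content
ranked = np.argsort(popularity)[::-1]
cache_size = 20
placement = {p: ranked[:cache_size].tolist() for p in range(best_k)}
print(best_k, placement[0][:5])
```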

9.
In fog computing, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is widely used to achieve fine-grained access control over data; however, its encryption and decryption computations place a heavy burden on resource-constrained IoT devices. This paper proposes an improved multi-authority CP-ABE access control scheme that supports computation outsourcing, offloading part of the encryption and decryption computation from IoT devices to nearby fog nodes. The scheme achieves fine-grained access control over data while reducing the computational overhead of IoT devices, making it suitable for practical IoT application scenarios. The efficiency and functionality of the proposed scheme are analyzed both theoretically and experimentally, and the results show that the scheme offers high system efficiency and practical value.

10.
The rapid proliferation of Internet of Things (IoT) devices, such as smart meters and water valves, into industrial critical infrastructures and control systems has put stringent performance and scalability requirements on modern Supervisory Control and Data Acquisition (SCADA) systems. While cloud computing has enabled modern SCADA systems to cope with the increasing amount of data generated by sensors, actuators, and control devices, there has been growing interest recently in deploying edge data centers in fog architectures to secure low latency and enhanced security for mission-critical data. However, fog security and privacy for SCADA-based IoT critical infrastructures remains an under-researched area. To address this challenge, this contribution proposes a novel security "toolbox" to reinforce the integrity, security, and privacy of SCADA-based IoT critical infrastructure at the fog layer. The toolbox incorporates a key feature: a cryptographic-based access approach to cloud services using identity-based cryptography and signature schemes at the fog layer. We present the implementation details of a prototype for our proposed secure fog-based platform and provide performance evaluation results to demonstrate the appropriateness of the proposed platform in a real-world scenario. These results can pave the way toward the development of a more secure and trusted SCADA-based IoT critical infrastructure, which is essential to counter cyber threats against next-generation critical infrastructure and industrial control systems. The experimental results demonstrate the superior performance of the secure fog-based platform compared to the multilevel user access control platform: around 2.8 seconds when adding five virtual machines (VMs), 3.2 seconds when adding 10 VMs, and 112 seconds when adding 1000 VMs.

11.
Smart cities, smart factories, and similar scenarios challenge the performance and connectivity of Internet of Things (IoT) devices. Edge computing compensates for these capability-constrained devices: by migrating intensive computing tasks from them to edge nodes (ENs), IoT devices can save more energy while still maintaining quality of service. Computation offloading decisions involve cooperation and complex resource management, and they should be made in real time according to the dynamic workload and network environment. Using simulation experiments, deep reinforcement learning agents are deployed on both the IoT devices and the edge nodes to maximize long-term utility, and federated learning is introduced to train the deep reinforcement learning agents in a distributed manner. First, an edge-computing-enabled IoT system is constructed: an IoT device downloads the existing model from the EN for training, and intensive computing tasks are offloaded to the EN for training; the IoT device uploads the updated parameters to the EN, and the EN aggregates these parameters with its own model to obtain a new model; the cloud can obtain and aggregate the new models from the ENs, and the IoT device can also obtain the updated parameters from the EN and apply them on the device. After multiple iterations, the IoT device achieves performance close to centralized training while reducing the transmission cost between IoT devices and edge nodes. Experiments confirm the effectiveness of the decision scheme and federated learning in dynamic IoT environments.
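The parameter-aggregation step described in this abstract (devices upload updated parameters, the edge node averages them into a new model) resembles federated averaging; a minimal NumPy sketch is below, with the model represented as a flat parameter vector and all data sizes and the least-squares objective assumed for illustration:

```python
import numpy as np

def local_update(global_params, local_data, lr=0.1, epochs=5):
    """Hypothetical on-device training step: least-squares gradient descent on local data."""
    w = global_params.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(updates, sample_counts):
    """Edge node aggregates device updates, weighted by the amount of local data."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w_i * u for w_i, u in zip(weights, updates))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
global_w = np.zeros(3)
devices = []
for _ in range(4):                           # four hypothetical IoT devices
    X = rng.normal(size=(50, 3))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

for _ in range(10):                          # several federated rounds via the edge node
    updates = [local_update(global_w, d) for d in devices]
    global_w = federated_average(updates, [len(d[1]) for d in devices])

print(np.round(global_w, 2))                 # approaches true_w
```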

12.
Fog computing provides quality of service for cloud infrastructure. As data computation intensifies, edge computing becomes difficult, so mobile fog computing is used to reduce traffic and data computation time in the network. In previous studies, software-defined networking (SDN) and network functions virtualization (NFV) were used separately in edge computing. Current industrial and academic research is working to integrate SDN and NFV in different environments to address challenges in performance, reliability, and scalability; SDN/NFV integration is still under development. The traditional Internet of Things (IoT) data analysis system is based only on a linear, time-variant system and needs an IoT data system with a high-precision model. This paper proposes a combined SDN and NFV architecture on an edge node server for IoT devices to reduce the computational complexity of cloud-based fog computing. SDN provides a generalized structure for the forwarding plane, which is separated from the control plane. Meanwhile, NFV concentrates on virtualization by combining the forwarding model with virtual network functions (VNFs), as a single VNF or a chain of VNFs, which leads to interoperability and consistency. The orchestrator layer in the proposed software-defined NFV is responsible for handling real-time tasks on an edge node server through the SDN controller via four actions: task creation, modification, operation, and completion. The proposed architecture is simulated on the EstiNet simulator, with total time delay, reliability, and satisfaction used as evaluation parameters. The simulation results are compared with those of existing architectures, such as the software-defined unified virtual monitoring function and ASTP, to analyze the performance of the proposed architecture. The analysis indicates that the proposed architecture achieves better performance in terms of total time delay (1800 s for 200 IoT devices), reliability (90%), and satisfaction (90%).
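The four-action task lifecycle that the orchestrator layer exposes (creation, modification, operation, completion) can be sketched as a small state machine; the class below is a hypothetical illustration and not the paper's EstiNet implementation:

```python
from enum import Enum

class TaskState(Enum):
    CREATED = "created"
    MODIFIED = "modified"
    OPERATING = "operating"
    COMPLETED = "completed"

class Orchestrator:
    """Tracks real-time tasks handled on an edge node server via the SDN controller."""

    def __init__(self):
        self.tasks = {}

    def create(self, task_id, vnf_chain):
        self.tasks[task_id] = {"state": TaskState.CREATED, "vnf_chain": list(vnf_chain)}

    def modify(self, task_id, vnf_chain):
        self.tasks[task_id].update(state=TaskState.MODIFIED, vnf_chain=list(vnf_chain))

    def operate(self, task_id):
        # In a real system this would install forwarding rules through the SDN controller.
        self.tasks[task_id]["state"] = TaskState.OPERATING

    def complete(self, task_id):
        self.tasks[task_id]["state"] = TaskState.COMPLETED

orch = Orchestrator()
orch.create("t1", ["firewall", "dpi"])
orch.modify("t1", ["firewall", "dpi", "nat"])   # extend the VNF chain
orch.operate("t1")
orch.complete("t1")
print(orch.tasks["t1"])
```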

13.
With the rapid growth of industry and transport, considerable attention has been given to air quality monitoring; however, conventional air quality monitoring methods cannot produce air quality information with adequate spatial and temporal resolution in a cost-effective and timely manner. In this paper, we propose a distinct methodology for air quality monitoring based on fog computing and the Internet of Things (IoT). We propose an embedded system in which sensors collect air quality information periodically and send it to fog nodes. Each fog node is a highly virtualized program hosted on a dedicated computing node equipped with a network interface. Data gathered by microprocessor-based IoT sensing devices are not sent directly to the cloud server for processing. Instead, they are sent through the nearest fog node to obtain fast, high-rate service. The fog node filters out non-actionable data (e.g., routine device measurements) and forwards the rest to the cloud for long-term storage and batch analytics. The cloud is a convenient location to run global analytics on information gathered from many shared devices over sustained periods (months, years). A general-purpose microprocessor and IoT cloud platforms were used to develop the whole infrastructure and the analysis model. Empirical results show that the proposed method effectively senses air quality and reveals how air quality changes over time.
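The fog-node behavior described here (serve readings locally, filter out routine measurements, and forward only actionable or summarized data to the cloud) can be sketched roughly as follows; the thresholds and the moving-average rule are illustrative assumptions, not the paper's design:

```python
from collections import deque

class AirQualityFogNode:
    """Keeps recent PM2.5 readings, flags anomalies, and batches routine data for the cloud."""

    def __init__(self, window=20, anomaly_delta=30.0, batch_size=50):
        self.recent = deque(maxlen=window)
        self.anomaly_delta = anomaly_delta   # jump (µg/m³) considered actionable (assumed)
        self.batch = []
        self.batch_size = batch_size

    def ingest(self, pm25):
        actionable = (
            len(self.recent) == self.recent.maxlen
            and abs(pm25 - sum(self.recent) / len(self.recent)) > self.anomaly_delta
        )
        self.recent.append(pm25)
        if actionable:
            return ("forward_now", pm25)          # push actionable reading to the cloud immediately
        self.batch.append(pm25)
        if len(self.batch) >= self.batch_size:
            summary = ("forward_batch", sum(self.batch) / len(self.batch))
            self.batch.clear()
            return summary                        # periodic summary for long-term storage/analytics
        return ("handled_locally", pm25)

node = AirQualityFogNode()
for value in [40] * 25 + [95]:
    result = node.ingest(value)
print(result)   # ('forward_now', 95)
```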

14.
Fog computing has become an effective platform for computing delay-sensitive IoT tasks. However, the increased scalability of IoT devices (IDs) makes it difficult for fog nodes to perform well. Volunteer computing (VC) has emerged as a supportive technology in which resource-capable IDs, such as computers and laptops, share their idle resources to compute IoT tasks. However, in VC-based approaches, improper selection of volunteer nodes (VNs) may result in an increased failure rate and delay. To address these challenges, this work proposes a Smart Admission Control strategy utilizing volunteer-enabled Fog-Cloud computing (SAC-VFC). The VNs are selected based on grey TOPSIS ranking. Incoming tasks are classified based on priority and delay and then scheduled using the Improved Jellyfish Algorithm (IJFA). A smart gateway (SGW) and a fog manager (FM) act as mediators for allocating tasks among volunteer, fog, and cloud resources. The FM performs similarity-based clustering of fog nodes using the enhanced Fuzzy C-Means (EFCM) clustering algorithm to manage resources. A simulation study suggests the superior performance of SAC-VFC over its peers in terms of average delay, average makespan, task success rate, and the fraction of tasks satisfying their deadlines.
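Volunteer-node selection by TOPSIS ranking works roughly as in the NumPy sketch below (plain TOPSIS rather than the grey variant used in the paper); the criteria, weights, and candidate values are assumptions for illustration:

```python
import numpy as np

# Candidate volunteer nodes scored on assumed criteria:
# [cpu_ghz, free_ram_gb, uptime_ratio, latency_ms] -- latency is a cost criterion.
X = np.array([
    [2.4, 8.0, 0.95, 20.0],
    [3.2, 4.0, 0.80, 12.0],
    [1.8, 16.0, 0.99, 35.0],
    [2.8, 8.0, 0.90, 18.0],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])
benefit = np.array([True, True, True, False])     # higher is better, except latency

def topsis(X, weights, benefit):
    norm = X / np.sqrt((X ** 2).sum(axis=0))       # vector-normalize each criterion
    V = norm * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)            # closeness to the ideal solution

scores = topsis(X, weights, benefit)
print(np.argsort(scores)[::-1])   # volunteer nodes ranked from best to worst
```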

15.
Internet of Things (IoT), fog computing, cloud computing, and data-driven techniques together offer a great opportunity for verticals such as the dairy industry to increase productivity by obtaining actionable insights that improve farming practices, thereby increasing efficiency and yield. In this paper, we present SmartHerd, a fog computing-assisted end-to-end IoT platform for animal behavior analysis and health monitoring in a dairy farming scenario. The platform follows a microservices-oriented design to support the distributed computing paradigm and addresses the major issue of constrained Internet connectivity in remote farm locations. We present the implementation of the designed software system in a mature 6-month real-world deployment, wherein data from wearables on cows is sent to a fog-based platform for classification and analysis, which includes decision-making capabilities and provides actionable insights to the farmer for the welfare of the animals. With fog-based computational assistance in the SmartHerd setup, we see an 84% reduction in the amount of data transferred to the cloud compared with the conventional cloud-based approach.

16.
In recent times, Internet of Things (IoT) applications, including smart transportation, smart healthcare, smart grid, smart city, etc., generate a large volume of real-time data for decision making. In the past decades, real-time sensory data have been offloaded to centralized cloud servers for data analysis through a reliable communication channel. However, due to the long communication distance between end-users and centralized cloud servers, the chances of network congestion, data loss, latency, and energy consumption become significantly higher. To address the challenges mentioned above, fog computing has emerged as a distributed environment that extends computation and storage facilities to the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimum latency and energy consumption while analyzing the data using a set of resource-constrained fog/edge devices. Thus, our survey covers the layered IoT architecture, evaluation metrics, and application aspects of fog computing and its progress over the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing the computing resources of fog networks are covered in this study. Moreover, we include an IoT use-case scenario to demonstrate fog data offloading and resource provisioning in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions for establishing interoperable communication and computation for next-generation IoT applications in fog networks.

17.
The Internet of Things (IoT) has drawn much attention in recent years. However, the image data captured by IoT terminal devices are closely related to users' personal information, which is sensitive and should be protected. Although traditional privacy-preserving outsourced computing solutions such as homomorphic cryptographic primitives can support privacy-preserving computing, they consume a significant amount of computation and storage resources, which becomes a heavy burden on IoT terminal devices with limited resources. In order to reduce the resource consumption of terminal devices, we propose an edge-assisted privacy-preserving outsourced computing framework for image processing, including image retrieval and classification. The edge nodes cooperate with the terminal device to protect data and support privacy-preserving computing on the semi-trusted cloud server. Under this framework, edge-assisted privacy-preserving image retrieval and classification schemes are proposed in this paper. The security analysis and performance evaluation show that the proposed schemes greatly reduce the computational, communication, and storage burden of the IoT terminal device while ensuring image data security.

18.
Fog computing is a technology that provides distributed computing, storage, and other services between cloud data centers and Internet of Things (IoT) devices; it can perform authentication at the network edge and provides a means of interacting with the cloud. In fog computing, securing the link between users and fog nodes with traditional security techniques remains inadequate: the system still faces threats such as eavesdropping and impersonation attacks, which poses new challenges for detection techniques. To address this problem, an impersonation attack detection scheme for fog computing based on the DQL (Double Q-learning) algorithm is proposed. Using channel parameters from physical-layer security, the scheme first addresses the Q-value overestimation problem of Q-learning to obtain the optimal impersonation attack test threshold, and then uses this threshold to detect impersonation attacks between users and fog nodes. Experimental results show that the algorithm outperforms traditional Q-learning in detecting impersonation attacks and is advantageous for fog computing security protection.
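Double Q-learning reduces the Q-value overestimation of standard Q-learning by keeping two estimators and letting one select the next action while the other evaluates it. The tabular sketch below illustrates the update rule on a toy problem in which the actions are candidate detection thresholds; the environment, states, and reward are assumptions for illustration, not the paper's channel-parameter formulation:

```python
import random

# Toy MDP: states 0 (monitoring) and 1 (alert); actions are candidate detection thresholds.
STATES = [0, 1]
ACTIONS = [0.1, 0.3, 0.5]                    # hypothetical test thresholds
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1
Q_A = {(s, a): 0.0 for s in STATES for a in ACTIONS}
Q_B = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment (assumed): reward peaks at threshold 0.3, random next state."""
    r = 1.0 - abs(action - 0.3) + random.gauss(0, 0.05)
    return r, random.choice(STATES)

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q_A[(state, a)] + Q_B[(state, a)])

state = 0
for _ in range(20000):
    action = epsilon_greedy(state)
    r, nxt = step(state, action)
    if random.random() < 0.5:
        # Update A: A picks the best next action, B evaluates it (curbing overestimation).
        a_star = max(ACTIONS, key=lambda a: Q_A[(nxt, a)])
        target = r + GAMMA * Q_B[(nxt, a_star)]
        Q_A[(state, action)] += ALPHA * (target - Q_A[(state, action)])
    else:
        b_star = max(ACTIONS, key=lambda a: Q_B[(nxt, a)])
        target = r + GAMMA * Q_A[(nxt, b_star)]
        Q_B[(state, action)] += ALPHA * (target - Q_B[(state, action)])
    state = nxt

best = max(ACTIONS, key=lambda a: Q_A[(0, a)] + Q_B[(0, a)])
print("preferred detection threshold:", best)   # converges toward 0.3
```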

19.
The emergent paradigm of fog computing advocates extending computational resources to the edge of the network, so that the transmission latency and bandwidth burden caused by cloud computing can be effectively reduced. Moreover, fog computing can support and facilitate some kinds of applications that do not cope well with certain features of cloud computing, for instance, applications that require low and predictable latency and geographically distributed applications. However, fog computing is not a substitute for, but rather a powerful complement to, cloud computing. This paper focuses on studying the interplay and cooperation between the edge (fog) and the core (cloud) in the context of the Internet of Things (IoT). We first propose a three-tier system architecture and mathematically characterize each tier in terms of energy consumption and latency. After that, simulations are performed to evaluate the system performance with and without fog involvement. The simulation results show that the three-tier system outperforms the two-tier system in terms of the assessed metrics.
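A very simple way to characterize a tier in terms of latency and energy, in the spirit of this abstract, is a transmission-plus-processing model like the sketch below; the formula structure and all parameter values are illustrative assumptions, not the paper's actual model:

```python
def tier_cost(data_bits, cycles, link_bps, link_w, cpu_cps):
    """Latency = transmission + processing time; energy = radio energy spent transmitting (assumed model)."""
    t_tx = data_bits / link_bps
    latency = t_tx + cycles / cpu_cps
    energy = link_w * t_tx
    return latency, energy

DATA, CYCLES = 4e6, 2e9          # hypothetical task: 4 Mb of input data, 2 Gcycles of work

# Two-tier (device -> cloud): long WAN path, fast servers.
cloud = tier_cost(DATA, CYCLES, link_bps=2e6, link_w=1.0, cpu_cps=20e9)

# Three-tier (device -> fog): short local link, slower fog node.
fog = tier_cost(DATA, CYCLES, link_bps=20e6, link_w=0.3, cpu_cps=5e9)

print(f"cloud: latency={cloud[0]:.2f}s energy={cloud[1]:.2f}J")
print(f"fog:   latency={fog[0]:.2f}s energy={fog[1]:.2f}J")
```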

20.
Moving resources closer to the user can facilitate the integration of new technologies such as edge, fog, and cloud computing and big data. However, this brings many challenges that must be overcome when distributing real-time stream processing, executing multiple applications in a safe multitenant environment, and orchestrating and managing services and resources in a hybrid fog/cloud federation. In this article, first, we propose a Business Process Model and Notation (BPMN) extension to enable Internet of Things (IoT)-aware business process (BP) modeling. The proposed extension takes into consideration heterogeneous IoT and non-IoT resources, resource capacities, quality of service constraints, and so forth. Second, we present a new IoT-fog-cloud based architecture, which (i) supports distributed inter- and intra-layer communication as well as real-time stream processing in order to treat IoT data immediately and improve the reliability of the entire system, (ii) enables multi-application execution within a multitenancy architecture using the single sign-on technique to guarantee data integrity in a multitenancy environment, and (iii) relies on orchestration and federation management services for deploying BPs onto the appropriate fog and/or cloud resources. Third, we model, using the proposed BPMN 2.0 extension, smart autistic child and coronavirus disease 2019 monitoring systems. We then propose prototypes for these two smart systems in order to carry out a set of extensive experiments illustrating the efficiency and effectiveness of our work.
