Similar Literature
20 similar documents found (search time: 31 ms)
1.
With the rapid development of the Internet of Things and the spread of 4G/5G wireless networks, the era of the Internet of Everything has arrived. The number of devices at the network edge is growing rapidly, and the data they generate has reached the zettabyte (ZB) scale. The key technologies of centralized big-data processing, centered on the cloud computing model, can no longer handle edge-generated data efficiently, mainly because: 1) the linearly growing capacity of centralized cloud computing cannot match the explosive growth of massive edge data; 2) transferring massive data from edge devices to the cloud center sharply increases the load on network transmission bandwidth and causes long network latency; 3) edge data involves personal privacy, making privacy and security issues particularly prominent; and 4) edge devices with limited battery energy consume considerable power when transmitting data to the cloud center. Consequently, edge-oriented big-data processing, centered on the edge computing model and targeting the massive data generated by network edge devices, has emerged. It complements the existing centralized, cloud-centered big-data processing: the two are applied together to big-data processing at the cloud center and at the network edge, and jointly address the problems listed above in the Internet of Everything era. The "edge" in edge computing is a relative concept, referring to any computing and network resource along the data path between the data source and the cloud computing center. The basic idea of edge computing is to run computing tasks on computing resources close to the data source. This article first systematically introduces the concept and principles of edge computing; it then instantiates the concept through existing research efforts as case studies (cloud task offloading, video analytics, smart home, smart city, intelligent transportation, and collaborative edge); finally, it presents the open challenges in the field of edge computing. The authors hope the article helps academia and industry understand and pay attention to edge computing, and inspires more researchers to study edge computing models in the era of edge-oriented big-data processing.

2.
In recent times, Internet of Things (IoT) applications, including smart transportation, smart healthcare, smart grid, smart city, etc., generate a large volume of real-time data for decision making. In the past decades, real-time sensory data have been offloaded to centralized cloud servers for data analysis through a reliable communication channel. However, due to the long communication distance between end-users and centralized cloud servers, network congestion, data loss, latency, and energy consumption increase significantly. To address these challenges, fog computing has emerged as a distributed environment that extends computation and storage facilities to the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimum latency and energy consumption while analyzing the data using a set of resource-constrained fog/edge devices. Our survey covers the layered IoT architecture, evaluation metrics, and application aspects of fog computing and its progress in the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing computing resources of fog networks are covered in this study. Moreover, we include an IoT use case scenario to demonstrate fog data offloading and resource provisioning in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions for establishing interoperable communication and computation for next-generation IoT applications in fog networks.

3.
With the advent of the Internet of Things (IoT) paradigm, the cloud model is unable to offer satisfactory service for latency-sensitive and real-time applications due to high latency and scalability issues. Hence, an emerging computing paradigm known as fog/edge computing evolved to offer services close to the data source and optimize quality-of-service (QoS) parameters such as latency, scalability, reliability, energy, privacy, and data security. This article presents the evolution of computing paradigms from the client-server model to edge computing, along with their objectives and limitations. A state-of-the-art review of Cloud Computing and the Cloud of Things (CoT) is presented that addresses their techniques, constraints, limitations, and research challenges. Further, we discuss the role and mechanisms of fog/edge computing and the Fog of Things (FoT), along with the need for their amalgamation with CoT. We review several architectures, features, applications, and open research challenges of fog/edge computing. The comprehensive survey of these computing paradigms offers in-depth knowledge of their various aspects, trends, motivations, visions, and integrated architectures. In the end, experimental tools and future research directions are discussed, with the hope that this study will serve as a stepping-stone in the field of emerging computing paradigms.

4.
With the development of smart sensors and wireless communication technologies, oilfield IoT systems have increased the frequency of field production data collection and the efficiency of production process control. However, existing IoT systems still rely on computing resources located in remote data centers for data processing and control, so network bandwidth and communication latency become serious bottlenecks. By applying edge computing techniques to the edge-layer devices of the IoT system and making full use of the computing and storage capacity of edge gateways, anomaly detection and alarm rule learning are implemented with the isolation forest algorithm, together with logical control of temperature and valve switching. Functions previously handled in the cloud are thus moved down to the edge, reducing the demands placed on the network and meeting the production needs of oilfields in remote areas.
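As a concrete illustration of the gateway-side detection described above, the following is a minimal sketch using scikit-learn's IsolationForest; the sensor fields, thresholds, and local control rule are illustrative assumptions, not the system's actual configuration.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical readings collected at the edge gateway: [temperature_C, pressure_kPa]
history = np.array([
    [62.1, 410.0], [61.8, 412.3], [63.0, 408.9], [62.4, 411.5],
    [61.9, 409.7], [62.7, 413.1], [62.2, 410.8], [63.1, 412.0],
])

detector = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
detector.fit(history)

def on_new_reading(temperature_c, pressure_kpa):
    """Runs locally on the gateway; only alarms need to be forwarded upstream."""
    sample = np.array([[temperature_c, pressure_kpa]])
    is_anomaly = detector.predict(sample)[0] == -1       # -1 marks an outlier
    close_valve = temperature_c > 80.0                   # assumed local control rule
    return {"anomaly": bool(is_anomaly), "close_valve": close_valve}

print(on_new_reading(95.3, 520.0))   # likely flags an anomaly and closes the valve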

5.
Cloud computing has grown to become a popular distributed computing service offered by commercial providers. More recently, edge and fog computing resources have emerged on the wide-area network as part of Internet of Things (IoT) deployments. These three resource abstraction layers are complementary and offer distinctive benefits. Scheduling applications on clouds has been an active area of research, with workflow and data flow models offering a flexible abstraction to specify applications for execution. However, the application programming and scheduling models for edge and fog are still maturing and can benefit from lessons learned on cloud resources. At the same time, there is also value in using these resources cohesively for application execution. In this article, we offer a taxonomy of concepts essential for specifying and solving the problem of scheduling applications on edge, fog, and cloud computing resources. We first characterize the resource capabilities and limitations of these infrastructures, and offer a taxonomy of application models, quality-of-service constraints and goals, and scheduling techniques, based on a literature review. We also tabulate key research prototypes and papers using this taxonomy. This survey benefits developers and researchers on these distributed resources in designing and categorizing their applications, selecting the relevant computing abstraction(s), and developing or selecting the appropriate scheduling algorithm. It also highlights gaps in the literature where open problems remain.

6.
With the proliferation of Internet of Things (IoT) and edge computing paradigms, billions of IoT devices are being networked to support data-driven and real-time decision making across numerous application domains, including smart homes, smart transport, and smart buildings. These ubiquitously distributed IoT devices send raw data to their respective edge device (e.g., IoT gateways) or directly to the cloud. The wide spectrum of possible application use cases makes the design and networking of the IoT and edge computing layers a very tedious process due to: (i) the complexity and heterogeneity of end-point networks (e.g., Wi-Fi, 4G, and Bluetooth); (ii) the heterogeneity of edge and IoT hardware resources and software stacks; (iii) the mobility of IoT devices; and (iv) the complex interplay between the IoT and edge layers. Unlike cloud computing, where researchers and developers seeking to test capacity planning, resource selection, network configuration, computation placement, and security management strategies have access to public cloud infrastructure (e.g., Amazon and Azure), establishing an IoT and edge computing testbed that offers a high degree of verisimilitude is not only complex, costly, and resource-intensive but also time-intensive. Moreover, testing in real IoT and edge computing environments is not feasible due to the high cost and diverse domain knowledge required to reason about their diversity, scalability, and usability. To support performance testing and validation of IoT and edge computing configurations and algorithms at scale, simulation frameworks should be developed. Hence, this article proposes a novel simulator, IoTSim-Edge, which captures the behavior of heterogeneous IoT and edge computing infrastructure and allows users to test their infrastructure and framework in an easy and configurable manner. IoTSim-Edge extends the capability of CloudSim to incorporate the different features of edge and IoT devices. The effectiveness of IoTSim-Edge is described using three test cases. Results show the capability of IoTSim-Edge in terms of application composition, battery-oriented modeling, heterogeneous protocol modeling, and mobility modeling, along with resource provisioning for IoT applications.

7.
In this paper, we propose the architecture of, and design and build a prototype of, a novel IoT system with intelligence distributed across multiple tiers, including the network edge. Our proposed architecture hosts a modular, three-tier IoT system comprising the edge, gateway (fog), and cloud tiers. The proposed system relies on data acquired by edge devices to realize a distributed machine learning model and achieves timely response at the edge using a lightweight machine learning model. In addition, it employs more sophisticated machine learning models at the higher fog and cloud tiers for wider-scope, long-term decision making. One of the prime objectives of the proposed system is reducing the volume of data transferred across tiers. This is attained through intelligent data filtering at the edge/gateway tiers to distill key events that provide the most relevant data points to higher-tier machine learning models at the gateway and cloud. This, in turn, reduces the outliers and redundant data that may impact the gateway and cloud models and reduces the inter-tier communications overhead. To demonstrate the merits of our proposed system, we build a proof-of-concept prototype hosting the three tiers, using COTS components and supporting networking technologies. We demonstrate the merits of the proposed system through extensive experiments. A major finding is that our system is capable of achieving prediction performance comparable to the centralized machine learning baseline model, while reducing the inter-tier communications overhead by up to 80%.
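A minimal sketch of the kind of edge-tier filtering described above, assuming a simple rolling z-score rule as the lightweight model; the window size, threshold, and sample values are hypothetical, not the paper's actual filter.

import statistics
from collections import deque

class EdgeFilter:
    """Forward a sample upstream only if it deviates strongly from recent history."""
    def __init__(self, window=50, z_threshold=2.5):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        forward = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            forward = abs(value - mean) / stdev > self.z_threshold
        self.history.append(value)
        return forward

f = EdgeFilter()
readings = [20.1, 20.3, 19.9, 20.0, 20.2] * 4 + [35.7]   # last value is a spike
forwarded = [x for x in readings if f.observe(x)]
print(forwarded)   # only the spike is sent to the gateway/cloud tiers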

8.
Many advances have been introduced recently for service-oriented computing and applications (SOCA). The Internet of Things (IoT) has become pervasive in various application domains. Fog/edge computing models have shown techniques that move computational and analytics capabilities from the centralized data centers, where most enterprise business services have been located, to the edge, where most customers' things, and their data and actions, reside. Network functions between the edge and the cloud can be dynamically provisioned and managed through service APIs. Microservice architectures are increasingly used to simplify the engineering, deployment, and management of distributed services, not only on powerful cloud-based machines but also on lightweight devices. Therefore, a key question for research in SOCA is how to leverage existing techniques, and develop new ones, to cope with and support the changes in data and computation resources as well as customer interactions arising in the era of IoT and fog/edge computing. In this editorial paper, we attempt to address this question by focusing on the concept of ensembles for IoT, network functions, and clouds.

9.
Edge computing can improve the processing quality of large-scale IoT stream data and reduce network operating costs by moving computation to edge devices. However, integrating cloud computing and edge computing for large-scale stream data faces two challenges. First, edge devices have limited computing and storage capacity and cannot support real-time processing of large-scale stream data on their own. Second, the unpredictability of stream data causes the collaboration at the edge to change constantly, so a flexible partitioning between edge services and cloud services is needed. This paper proposes a service-oriented approach for seamlessly integrating the cloud and the edge, enabling collaborative cloud-edge processing of large-scale stream data. The approach splits a cloud service into two parts that run on the cloud and on the edge, respectively. It also proposes a dynamic service scheduling mechanism based on an improved bipartite graph, so that when an event occurs, cloud services can be deployed to edge nodes at the appropriate time. The effectiveness of the approach is validated on real power-quality monitoring data.
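As a rough stand-in for the bipartite-graph scheduling idea above (not the paper's improved algorithm), the following sketch assigns services to edge nodes with a standard minimum-cost bipartite matching; the cost model and all numbers are assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: estimated cost of deploying service i on edge node j
latency = np.array([[5.0, 12.0, 9.0],
                    [7.0,  4.0, 11.0],
                    [10.0, 8.0,  3.0]])
load = np.array([[0.2, 0.6, 0.4],
                 [0.5, 0.1, 0.7],
                 [0.3, 0.4, 0.2]])
cost = latency + 10.0 * load            # assumed weighting of the two factors

services, nodes = linear_sum_assignment(cost)   # minimum-cost matching
for s, n in zip(services, nodes):
    print(f"deploy service {s} on edge node {n}")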

10.
Handling complex tasks in IoT applications is difficult because most IoT devices have limited resources, so IoT tasks with heavy processing and storage demands need to be offloaded to the resource-rich edge and cloud. In edge computing, factors such as the arrival rate, the nature and size of a task, network conditions, platform differences, and the energy consumption of IoT end devices affect the choice of an optimal offloading mechanism. A model is developed to make a dynamic decision on offloading tasks to the edge and cloud or executing them locally, by computing the expected time, energy consumption, and processing capacity. This dynamic decision is proposed as the processing-capacity-based decision mechanism (PCDM), which makes offloading decisions for new tasks by scheduling all available devices based on their processing capacity. The target devices are then selected for task execution with respect to energy consumption, task size, and network time. PCDM is developed in the EdgeCloudSim simulator for four different applications from various categories, such as time sensitivity, small task size, and low energy consumption. The PCDM offloading methodology is evaluated through simulations and compared with the multi-criteria decision support mechanism for IoT offloading (MEDICI). Strategies based on task weightage, termed PCDM-AI, PCDM-SI, PCDM-AN, and PCDM-SN, are developed and compared against five existing baseline strategies, namely IoT-P, Edge-P, Cloud-P, Random-P, and Probabilistic-P. These nine strategies are also developed using MEDICI with the same parameters as PCDM. Finally, all the approaches using PCDM and MEDICI are compared against each other for the four applications. The simulation results show that, for each application, a different approach performs best in terms of response time, total tasks executed, device energy consumption, and total application energy consumption.
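The following is a minimal sketch of a processing-capacity-style offloading decision in the spirit of PCDM: it compares estimated completion time and device energy for local, edge, and cloud execution. The cost model, weights, and constants are assumptions, not the paper's formulation.

def choose_target(task_cycles, task_bytes, device_mips, edge_mips, cloud_mips,
                  edge_bw_bps, cloud_bw_bps, tx_power_w, cpu_power_w):
    """Return 'local', 'edge', or 'cloud' for one task (coarse, illustrative model)."""
    def exec_time(capacity):
        return task_cycles / capacity                 # processing time
    def tx_time(bandwidth):
        return task_bytes * 8 / bandwidth             # input-transfer time

    options = {
        "local": (exec_time(device_mips), cpu_power_w * exec_time(device_mips)),
        "edge":  (tx_time(edge_bw_bps) + exec_time(edge_mips),
                  tx_power_w * tx_time(edge_bw_bps)),
        "cloud": (tx_time(cloud_bw_bps) + exec_time(cloud_mips),
                  tx_power_w * tx_time(cloud_bw_bps)),
    }
    # Assumed objective: weighted sum of response time and device-side energy.
    return min(options, key=lambda k: options[k][0] + 0.5 * options[k][1])

print(choose_target(task_cycles=2e9, task_bytes=5e5, device_mips=1e8,
                    edge_mips=2e9, cloud_mips=1e10,
                    edge_bw_bps=50e6, cloud_bw_bps=10e6,
                    tx_power_w=0.5, cpu_power_w=0.9))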

11.
Fog computing extends the computing power and data-analytics applications of cloud computing to the network edge, meeting the low-latency and mobility requirements of IoT devices, but it also raises data security and privacy concerns. Attribute-based encryption techniques from traditional cloud computing are not suitable for the resource-constrained IoT devices in a fog environment, and attribute changes are difficult to manage. This paper therefore proposes an attribute-based encryption scheme that supports outsourced encryption/decryption and revocation, and builds a three-layer "cloud-fog-terminal" system model. By introducing attribute-group keys, the scheme achieves dynamic key updates and satisfies the requirement of immediate attribute revocation in fog computing. On this basis, part of the complex encryption and decryption computation on terminal devices is outsourced to fog nodes to improve computational efficiency. Experimental results show that, compared with schemes such as KeyGen and Enc, the proposed scheme offers better computational efficiency and reliability.

12.
The emergent paradigm of fog computing advocates that computational resources can be extended to the edge of the network, so that the transmission latency and bandwidth burden caused by cloud computing can be effectively reduced. Moreover, fog computing can support and facilitate kinds of applications that do not cope well with some features of cloud computing, for instance, applications that require low and predictable latency, and geographically distributed applications. However, fog computing is not a substitute for but rather a powerful complement to cloud computing. This paper focuses on the interplay and cooperation between the edge (fog) and the core (cloud) in the context of the Internet of Things (IoT). We first propose a three-tier system architecture and mathematically characterize each tier in terms of energy consumption and latency. After that, simulations are performed to evaluate the system performance with and without fog involvement. The simulation results show that the three-tier system outperforms the two-tier system in terms of the assessed metrics.
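A minimal, hedged per-tier characterization consistent with the description above; the symbols and the linear form are assumptions, not the paper's notation. For a task of C CPU cycles with input size L served at tier k (device, fog, or cloud):

    T_k = L / B_k + d_k + C / f_k                     (transmission + propagation + processing delay)
    E_k = P_tx * (L / B_k) + P_idle * (C / f_k)       (device-side energy)

Here B_k is the uplink bandwidth to tier k, d_k its propagation delay, f_k its effective processing rate, and P_tx, P_idle the device's transmit and idle power. Under such a model, the fog tier is preferable whenever its smaller d_k and larger B_k outweigh its lower f_k relative to the cloud.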

13.
With the arrival of the 5G and IoT era and the growing adoption of cloud computing applications, edge computing and cloud computing, each with its own strengths, are bound to converge into cloud-edge collaboration, achieving complementary advantages and coordinated operation. SDN, with its flexible, open, and programmable network architecture, is regarded as an effective way to address the current coordination problems between cloud computing and edge computing. Based on the strengths and weaknesses of cloud and edge computing, this article reviews the necessity and concrete meaning of cloud-edge collaboration, and summarizes…

14.
With the continuing proliferation of network terminals and the rapid development of Internet applications, today's networks must not only handle ever-growing traffic but also satisfy users' diverse performance requirements. Cloud computing struggles to keep pace in aspects such as service latency and transmission overhead, whereas edge computing moves computing resources from the cloud down to the network edge and improves performance by processing data close to where it is produced. As one of the main representatives of artificial intelligence, deep learning can, on the one hand, be integrated into the edge computing framework to build an intelligent edge and, on the other hand, be deployed on the edge as a service to realize edge intelligence. Starting from the trend of converging edge computing and deep learning, this article introduces the concepts and application scenarios of "edge intelligence" and "intelligent edge", and describes the typical enabling technologies and their interrelations.

15.
Fog computing provides quality of service for cloud infrastructure, but as data computation intensifies, edge computing becomes difficult. Therefore, mobile fog computing is used to reduce traffic and data computation time in the network. In previous studies, software-defined networking (SDN) and network functions virtualization (NFV) were used separately in edge computing. Current industrial and academic research is working to integrate SDN and NFV in different environments to address challenges in performance, reliability, and scalability, but SDN/NFV integration is still under development. Traditional Internet of Things (IoT) data analysis systems are based only on linear, time-variant system models and therefore need an IoT data system with a high-precision model. This paper proposes a combined architecture of SDN and NFV on an edge node server for IoT devices to reduce the computational complexity of cloud-based fog computing. SDN provides a generalized structure of the forwarding plane, which is separated from the control plane. Meanwhile, NFV concentrates on virtualization by combining the forwarding model with virtual network functions (VNFs), as a single VNF or a chain of VNFs, which leads to interoperability and consistency. The orchestrator layer in the proposed software-defined NFV is responsible for handling real-time tasks using an edge node server through the SDN controller via four actions: task creation, modification, operation, and completion. Our proposed architecture is simulated in the EstiNet simulator, with total time delay, reliability, and satisfaction used as evaluation parameters. The simulation results are compared with those of existing architectures, such as the software-defined unified virtual monitoring function and ASTP, to analyze the performance of the proposed architecture. The analysis indicates that our proposed architecture achieves better performance in terms of total time delay (1800 s for 200 IoT devices), reliability (90%), and satisfaction (90%).
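A minimal sketch of the orchestrator's four task actions named above (creation, modification, operation, and completion) as a small state machine; the class layout, method names, and VNF names are hypothetical, not taken from the paper.

from enum import Enum, auto

class TaskState(Enum):
    CREATED = auto()
    MODIFIED = auto()
    OPERATING = auto()
    COMPLETED = auto()

class Orchestrator:
    def __init__(self):
        self.tasks = {}

    def create(self, task_id, vnf_chain):
        self.tasks[task_id] = {"state": TaskState.CREATED, "vnfs": list(vnf_chain)}

    def modify(self, task_id, vnf_chain):
        self.tasks[task_id].update(state=TaskState.MODIFIED, vnfs=list(vnf_chain))

    def operate(self, task_id):
        # In the described architecture, this step dispatches the task to an
        # edge node server via the SDN controller.
        self.tasks[task_id]["state"] = TaskState.OPERATING

    def complete(self, task_id):
        self.tasks[task_id]["state"] = TaskState.COMPLETED

orc = Orchestrator()
orc.create("t1", ["firewall", "deep-packet-inspection"])
orc.operate("t1")
orc.complete("t1")
print(orc.tasks["t1"]["state"])       # TaskState.COMPLETED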

16.
Recent developments in computer networks and the Internet of Things (IoT) have enabled easy access to data. However, the government and business sectors face difficulties in resolving cybersecurity issues such as novel attacks, hackers, and internet criminals. Presently, malware attacks and software piracy pose serious risks to the security of the IoT; they can steal confidential data, resulting in financial and reputational losses. Machine learning (ML) and deep learning (DL) models have been employed to secure the IoT cloud environment. This article presents an Enhanced Artificial Gorilla Troops Optimizer with Deep Learning Enabled Cybersecurity Threat Detection (EAGTODL-CTD) in IoT cloud networks. The presented EAGTODL-CTD model encompasses the identification of threats in the IoT cloud environment. The proposed EAGTODL-CTD model mainly focuses on the conversion of input binary files to color images, so that malware can be detected as an image classification problem. The EAGTODL-CTD model pre-processes the input data to transform it into a compatible format. For threat detection and classification, a cascaded gated recurrent unit (CGRU) model is exploited to determine class labels. Finally, the EAGTO approach is employed as a hyperparameter optimizer to tune the CGRU parameters, showing the novelty of our work. The performance of the EAGTODL-CTD model is assessed on a dataset comprising two class labels, namely malignant and benign. The experimental values report the superiority of the EAGTODL-CTD model with an increased accuracy of 99.47%.
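A minimal sketch of the binary-to-color-image preprocessing step mentioned above; the image width, padding rule, and RGB packing are assumptions, and "sample.bin" is only a placeholder path.

import numpy as np
from PIL import Image

def binary_to_rgb_image(path, width=256):
    """Pack a file's raw bytes into an RGB image for image-based malware classification."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    pad = (-len(data)) % (width * 3)                  # pad to complete RGB rows
    data = np.concatenate([data, np.zeros(pad, dtype=np.uint8)])
    pixels = data.reshape(-1, width, 3)               # height x width x RGB
    return Image.fromarray(pixels, mode="RGB")

# The resulting image would then be fed to the classifier (a CGRU in the paper;
# any image classifier can stand in for a quick sanity check).
binary_to_rgb_image("sample.bin").save("sample.png")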

17.
As the trend toward the Internet of Everything deepens, the number of end devices such as smartphones and smart glasses keeps increasing, so data grows far faster than network bandwidth; meanwhile, many new applications such as augmented reality and autonomous driving place stricter requirements on latency. Edge computing organizes the computing, network, and storage resources at the network edge into a unified platform that serves users, so that data can be processed promptly and effectively near its source. Unlike cloud computing, which transfers all data to data centers, this model bypasses the bottlenecks of network bandwidth and latency and has therefore attracted wide attention. This article first introduces the concept of edge computing and gives a definition; it then compares three representative edge computing platforms and uses application examples to analyze the advantages of edge computing for mobile and IoT applications; finally, it discusses the challenges currently facing edge computing.

18.
Edge computing extends cloud computing to the network edge; while it overcomes drawbacks of cloud computing such as high latency, poor mobility, and weak location awareness, it also introduces many security problems. Considering the openness, heterogeneity, and resource-constrained nodes of edge computing networks, this work designs a general six-layer intrusion detection system for edge computing and, on top of this architecture, proposes an edge computing intrusion detection scheme. Based on this scheme, an improved extreme learning machine intrusion detection algorithm suitable for edge deployment, TSS-ELM, is proposed. TSS-ELM adds a cloud-server training-sample screening stage to optimize the outer weights of the machine learning model, achieving efficient intrusion detection on edge-node data. Simulation results and analysis show that the algorithm outperforms existing algorithms in accuracy, time dependence, robustness, and false alarm rate.
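For reference, the following is a minimal sketch of a plain extreme learning machine, the base model that TSS-ELM extends; the cloud-side training-sample screening step is only indicated by a comment, since its exact criterion is not given here, and all data are synthetic.

import numpy as np

def train_elm(X, y, hidden=64, seed=0):
    """Random input weights; outer (output) weights solved by pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta = np.linalg.pinv(H) @ y           # outer weights (the part TSS-ELM optimizes)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy "intrusion" labels
# TSS-ELM would first screen the training samples on the cloud server here.
model = train_elm(X, y)
accuracy = ((predict_elm(model, X) > 0.5) == y).mean()
print(f"training accuracy ~ {accuracy:.2f}")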

19.
With the rapid development of Internet of Things (IoT) technology, a large number of devices with different functions have emerged (e.g., smart-home devices with various sensors, mobile intelligent transportation devices, and smart logistics or warehouse management devices). They are interconnected and widely used in fields such as smart cities and smart factories. However, these IoT devices have limited processing power and can hardly meet the needs of delay-sensitive, computation-intensive applications. Mobile edge…

20.
Smart cities, smart factories, and similar scenarios challenge the performance and connectivity of Internet of Things (IoT) devices. Edge computing compensates for these resource-constrained devices: by offloading intensive computing tasks from them to edge nodes (ENs), IoT devices can save more energy while maintaining quality of service. Computation offloading decisions involve collaboration and complex resource management, and should be made in real time according to dynamic workloads and network conditions. Using a simulation-based approach, deep reinforcement learning agents are deployed on both the IoT devices and the edge nodes to maximize long-term utility, and federated learning is introduced to train the deep reinforcement learning agents in a distributed way. An edge-computing-enabled IoT system is first built: an IoT device downloads the existing model from an EN for training and offloads intensive training tasks to the EN; the device uploads the updated parameters to the EN, which aggregates them with its own model to obtain a new model; the cloud can obtain and aggregate the new models from the ENs, and the device can also fetch the updated parameters from the EN and apply them locally. After several iterations, the IoT device achieves performance close to centralized training while reducing the transmission cost between IoT devices and edge nodes. Experiments confirm the effectiveness of the decision scheme and federated learning in dynamic IoT environments.
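A minimal sketch of the parameter exchange described above: devices train locally, upload parameters, and the edge node aggregates them by federated averaging. A linear model stands in for the deep reinforcement learning agent, and all shapes, weights, and data are assumptions.

import numpy as np

def local_update(global_params, local_data, lr=0.1):
    """One local training step on an IoT device (linear regression as a stand-in)."""
    X, y = local_data
    grad = X.T @ (X @ global_params - y) / len(y)
    return global_params - lr * grad

def edge_aggregate(updates, sizes):
    """Federated averaging at the edge node, weighted by local data size."""
    weights = np.array(sizes) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(4):                              # four IoT devices with local data
    X = rng.normal(size=(50, 3))
    devices.append((X, X @ true_w))

global_params = np.zeros(3)
for _ in range(20):                             # communication rounds
    updates = [local_update(global_params, d) for d in devices]
    global_params = edge_aggregate(updates, [len(d[1]) for d in devices])
print(global_params)                            # approaches true_w without sharing raw data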
