Similar Documents
20 similar documents found (search took 78 ms).
1.
The Internet of Things (IoT) has become a major technological development that offers smart infrastructure for cloud-edge services by interconnecting physical devices and virtual things across mobile applications and embedded devices. E-healthcare applications, which depend heavily on the IoT and cloud computing environment, offer several distinctive characteristics and use cases. Prior research has reported that the energy consumed by transmission is significantly higher than that consumed by sensing and processing, leading to rapid energy exhaustion. In this view, this paper introduces a new energy-efficient cluster-enabled clinical decision support system (EEC-CDSS) for embedded IoT environments. The presented EEC-CDSS model aims to transmit medical data from IoT devices effectively and to perform an accurate diagnostic process. The EEC-CDSS model incorporates a particle swarm optimization with Lévy distribution (PSO-L) based clustering technique, which clusters the set of IoT devices and reduces the amount of data transmitted. The IoT devices then forward the data to the cloud, where the actual classification is performed. For the classification process, a variational autoencoder (VAE) is used to determine the presence or absence of disease. To investigate the performance of the EEC-CDSS model, a wide range of simulations was carried out on heart disease and diabetes datasets. The obtained simulation results point out the superiority of the EEC-CDSS model in terms of energy efficiency and classification accuracy.
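The abstract does not specify how the Lévy distribution enters the PSO update. A common convention is to add a Lévy-flight perturbation (via Mantegna's algorithm) to the velocity so the swarm escapes local optima more easily; the sketch below follows that convention on a toy sphere cost, and every parameter value is an illustrative assumption rather than the paper's setting:

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-stable step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def pso_levy(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d])
                             + 0.01 * levy_step())  # Levy kick aids escape from local optima
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

random.seed(1)
best = pso_levy(lambda x: sum(v * v for v in x), dim=3)
```

In a clustering application the cost function would score a candidate set of cluster-head positions (e.g., total device-to-head distance), but the optimizer loop itself is unchanged.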

2.
In the IoT (Internet of Things) system, introducing the UAV (Unmanned Aerial Vehicle) as a new data-collection platform can solve the problem that IoT devices are unable to transmit data over long distances owing to their limited battery energy. However, an unreasonable distribution of UAVs can still lead to high total system energy consumption. To deal with this problem, this work proposes a deployment model for a multi-UAV mobile edge computing (MEC) system. The goal of the model is to minimize the energy consumed by the system during data transmission by optimizing the deployment of the UAVs. DEVIPSK (a differential evolution algorithm with variable population size, based on a mutation-strategy pool and initialized by K-Means) is proposed to solve the model. In DEVIPSK, the population is initialized by K-Means to obtain better initial UAV positions. Besides, considering the limitation of the fixed mutation strategy in traditional evolutionary algorithms, a mutation-strategy pool is used to update the positions of the UAVs. The experimental results show the superiority of DEVIPSK and provide guidance for UAV deployment in edge data collection for IoT systems.
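The K-Means seeding step can be sketched as follows: cluster the IoT device locations with Lloyd's algorithm, take the centroids as one candidate set of UAV positions, and jitter them to fill out the initial differential-evolution population. The jitter scheme and all sizes are assumptions for illustration, not taken from the paper:

```python
import random

def kmeans(points, k, iters=20):
    # Lloyd's algorithm: cluster device locations; centroids = candidate UAV positions
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

def init_population(points, k, pop_size, jitter=1.0):
    # Each DE individual = k UAV positions; seed from the K-Means centroids, then jitter
    base = kmeans(points, k)
    pop = [base]
    for _ in range(pop_size - 1):
        pop.append([(x + random.uniform(-jitter, jitter),
                     y + random.uniform(-jitter, jitter)) for x, y in base])
    return pop

random.seed(0)
devices = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(60)]
population = init_population(devices, k=4, pop_size=10)
```

The DE loop with its mutation-strategy pool would then evolve this population; seeding near the centroids simply starts the search close to a sensible coverage layout.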

3.
With the rapid development of the Internet of Things (IoT), electricity consumption data can be captured and recorded in the IoT cloud center. This provides a credible data source for enterprise credit scoring, one of the most vital elements in the financial decision-making process. Accordingly, this paper proposes using deep learning to train an enterprise credit scoring model on electricity consumption data. Instead of predicting a credit rating, our method generates an absolute credit score via a novel deep ranking model, the ranking extreme gradient boosting net (rankXGB). To boost performance, the rankXGB model combines several weak ranking models into a strong one. Because of the high computational cost and the vast amounts of data involved, we design an edge computing framework to reduce the latency of enterprise credit evaluation. Specifically, we design a two-stage deep learning task architecture consisting of cloud-based weak credit ranking and edge-based credit score calculation. In the first stage, we send the electricity consumption data of the evaluated enterprise to the cloud server, where multiple weak-ranking networks are executed in parallel to produce multiple weak-ranking results. In the second stage, the edge device fuses the ranking results generated in the cloud to produce a more reliable ranking result, from which an absolute credit score is calculated by score normalization. The experiments demonstrate that our method achieves accurate enterprise credit evaluation quickly.
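The second-stage fusion and normalization can be sketched simply: average the weak rankers' scores, then min-max normalize the fused scores onto an absolute credit-score scale. The averaging rule and the 300-850 score range are illustrative assumptions (the abstract does not define either):

```python
def fuse_rankings(weak_scores):
    # Average the scores produced by several weak rankers (run in the cloud)
    n = len(weak_scores[0])
    return [sum(s[i] for s in weak_scores) / len(weak_scores) for i in range(n)]

def normalize_scores(fused, lo=300, hi=850):
    # Min-max normalize fused scores onto an absolute credit-score scale
    mn, mx = min(fused), max(fused)
    return [lo + (hi - lo) * (s - mn) / (mx - mn) for s in fused]

# Three weak rankers scoring three enterprises (toy values)
weak = [[0.2, 0.9, 0.5], [0.1, 0.8, 0.6], [0.3, 1.0, 0.4]]
fused = fuse_rankings(weak)
scores = normalize_scores(fused)
```

Averaging is the simplest fusion rule; a learned weighting of the weak rankers would slot into `fuse_rankings` without changing the rest of the pipeline.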

4.
Wireless Sensor Networks (WSNs) are auto-configured, infrastructure-less wireless networks that monitor physical or environmental conditions such as temperature, sound, vibration, pressure and motion. A WSN may comprise thousands of Internet of Things (IoT) devices that sense and collect data from their surroundings, process the data and take automated decisions. On the other hand, the proliferation of these devices will soon cause a radio spectrum shortage, so we integrate Cognitive Radio (CR) functionality into these networks. CR can sense the unutilized spectrum of licensed users and use these empty bands when required. To keep the IoT nodes functional at all times, continuous energy is required; for this reason, energy harvesting techniques, mainly Radio Frequency (RF) energy harvesting, are preferred in IoT networks. This paper proposes a region-based multi-channel architecture in which the coverage area of the primary node is divided into an Energy Harvesting Region and a Communication Region. The Secondary User (SU), the unlicensed user, is an IoT node equipped with Cognitive Radio techniques (a CR-enabled IoT node/device) and is encouraged to harvest radio-frequency energy. To harvest energy efficiently and reduce the energy consumed during sensing, the concept of an overlapping region is introduced, which supports sensing multiple channels simultaneously and helps the SU find the best channel for transmitting data or the idle channel from which to harvest energy. The experimental analysis shows that the SU can harvest more energy in the overlapping region and that the architecture consumes less energy during data transmission than a single-channel design. We also show that the channel load is greatly reduced and channel utilization becomes more efficient. Thus, the proposed architecture proves to be cost-effective and energy-efficient.

5.
Mobile edge cloud networks can be used to offload computationally intensive tasks from Internet of Things (IoT) devices to nearby mobile edge servers, thereby lowering energy consumption and response time for ground mobile users or IoT devices. Integrating Unmanned Aerial Vehicles (UAVs) with mobile edge computing (MEC) servers will significantly benefit small, battery-powered, energy-constrained devices in 5G and future wireless networks. We address the problem of maximizing computation efficiency in UAV-assisted MEC (U-MEC) networks by jointly optimizing the user association and offloading indicator (OI), the computational capacity (CC), the power consumption, the time duration, and the location planning. Heavy tasks can be assigned to the UAV for faster processing, while small ones are processed locally by the mobile users (MUs). This paper uses the k-means clustering algorithm, the interior point method, and the conjugate gradient method to iteratively solve the non-convex multi-objective resource allocation problem. According to the simulation results, both the local and the offloading schemes yield optimal solutions.

6.
Educational institutions, with large numbers of defenseless people, are soft targets for terrorists. In the recent past, a number of such attacks have been executed around the world, and providing a secure environment for educational institutions is a challenging research task. This effort is motivated by recent assaults at the Army Public School, Peshawar, and Charsada University, Khyber Pakhtunkhwa, Pakistan, as well as the Santa Fe High School massacre in Texas, USA. This study uses the basic technologies of edge computing, cloud computing and IoT to design a smart emergency alarm system framework. The IoT, which is making this world smarter, can contribute significantly to the design of a Smart Security Framework (SSF) for educational institutions. In an emergency, all command and control centres must be informed within seconds to halt or minimize the loss. In this article, the SSF is proposed. The framework works on three layers. The first layer consists of the sensors and smart devices. All these sensors and smart devices are connected to the Emergency Control Room (ECR), which forms the second layer of the proposed framework and uses edge computing technologies to process massive data and information locally. The third layer uses cloud computing techniques to transmit and process data and information across the different command and control centres. The proposed system was tested in Cisco Packet Tracer 7. The results show that this approach can play an efficient role in security alerting, not only in educational institutions but in other organizations as well.

7.
The Internet of Things (IoT), which provides solutions for connecting things and devices, has increasingly developed into a vital tool for realizing intelligent life. Generally, resource-limited IoT sensors outsource their data to the cloud, which raises the concern that the transmission of IoT data happens without appropriate consideration of the profound security challenges involved. Although encryption technology can guarantee the confidentiality of private data, it hinders the usability of the data. Searchable encryption (SE) has been proposed to achieve secure data sharing and searching. However, most existing SE schemes are designed under conventional hardness assumptions and may be vulnerable to adversaries with quantum computers. Moreover, an untrusted cloud server may perform an unfaithful search execution. To address these problems, this paper proposes the first verifiable identity-based keyword search (VIBKS) scheme from lattices. In particular, a lattice-based delegation algorithm is adopted to help the data user verify both the correctness and the integrity of the search results. Besides, to reduce the communication overhead, we adopt the identity-based mechanism. We give a rigorous proof that the proposed VIBKS scheme is ciphertext-indistinguishable secure against a semi-honest-but-curious adversary. In addition, we give the detailed computation and communication complexity of VIBKS and conduct a series of experiments to validate its efficiency.

8.
Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. The focus of this paper is to propose an alternative model, a Deep Learning-based Human Activity Recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less powerful edge devices for recognition. The idea is to conduct the training of the model in the cloud and distribute it to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. To evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on the benchmark dataset show a significant increase in performance compared with state-of-the-art models.

9.
The Internet of Things (IoT) is a growing concept for smart cities in which data must be communicated between different networks and devices. In the IoT, communication should be rapid, with little delay and overhead. For this purpose, flooding is used for reliable data communication in the smart-city concept, but at the cost of higher overhead, energy consumption and packet drops. This paper aims to increase efficiency in terms of overhead and reliability in terms of delay by using multicasting and unicasting instead of flooding during packet forwarding in a smart city built on the IoT concept. Multicasting and unicasting are used within a receiver-initiated mesh-based topology to disseminate data to the cluster head. Smart-city networks are divided into clusters, and each cluster head or core node is responsible for transferring data to the desired receiver. To the best of our knowledge this protocol is a novel approach, and it proves very useful in terms of efficiency and reliability in the smart-city concept, because the IoT is a collection of devices with a common interest in transmitting data. The protocol was implemented in Network Simulator 2 (NS-2). The results show that the proposed protocol outperforms the benchmark schemes in overhead, throughput, packet drops, delay and energy consumption.

10.
Nowadays, there is a significant need for maintenance-free modern Internet of Things (IoT) devices that can monitor an environment. Such IoT devices are mobile embedded devices that provide data to the internet via a Low Power Wide Area Network (LPWAN). LPWAN is a promising communications technology that allows machine-to-machine (M2M) communication and is suitable for small mobile embedded devices. This paper presents a novel data-driven self-learning (DDSL) controller algorithm dedicated to controlling small, mobile, maintenance-free embedded IoT devices. The DDSL algorithm is based on a modified Q-learning algorithm that enables energy-efficient, data-driven behavior of mobile embedded IoT devices. The aim of the DDSL algorithm is to dynamically set the operation duty cycle according to an estimate of future collected data values, leading to effective operation of power-aware systems. The presented solution was tested on a historical data set and compared with a fixed-duty-cycle reference algorithm. The root mean square error (RMSE) and measurement-count parameters of the DDSL algorithm were compared with the reference algorithm, and two independent criteria (a performance score and a normalized geometric distance) were used for the overall evaluation and comparison. The experiments showed that the novel DDSL method reaches a significantly lower RMSE while the number of transmitted data items is less than or equal to that of the fixed-duty-cycle algorithm. The overall performance score is 40% higher than that of the reference algorithm based on static configuration settings.
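The core idea of duty-cycle selection by Q-learning can be sketched with the textbook update rule. The state/action discretization and the reward shape below are illustrative assumptions, not the paper's actual DDSL formulation: states are "low"/"high" prediction error of the data model, actions are candidate duty cycles, and the reward favours long (energy-saving) cycles except when prediction error is high:

```python
import random

ACTIONS = [60, 300, 900]           # candidate duty cycles in seconds (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def choose(q, state):
    # Epsilon-greedy action selection
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q[(state, a)])

def update(q, state, action, reward, next_state):
    # Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q[(next_state, a)] for a in range(len(ACTIONS)))
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

random.seed(2)
q = {(s, a): 0.0 for s in ("low", "high") for a in range(len(ACTIONS))}
for _ in range(2000):
    for state in ("low", "high"):
        a = choose(q, state)
        # Long cycles save energy, but sampling sparsely under high error is penalized
        reward = ACTIONS[a] / 900 - (2.0 if state == "high" and ACTIONS[a] > 60 else 0.0)
        update(q, state, a, reward, state)
```

After training, the greedy policy samples frequently when the data model's error is high and stretches the duty cycle when predictions are reliable, which is the behaviour the abstract describes.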

11.
The world is rapidly changing with the advance of information technology, and the expansion of the Internet of Things (IoT) is a huge step in the development of the smart city. The IoT consists of connected devices that transfer information, and its architecture permits on-demand services from a public pool of resources. Cloud computing plays a vital role in developing IoT-enabled smart applications, and its integration enhances the offering of distributed resources in the smart city. Improper management of the security requirements of cloud-assisted IoT systems can bring about risks to availability, security, performance, confidentiality, and privacy. A key reason for the failure of cloud- and IoT-enabled smart city applications is improper security practice at the early stages of development. This article proposes a framework for collecting security requirements during the initial development phase of cloud-assisted IoT-enabled smart city applications. Its three-layered architecture comprises privacy-preserved stakeholder analysis (PPSA), security requirement modeling and validation (SRMV), and secure cloud assistance (SCA). A case study highlights the applicability and effectiveness of the proposed framework, and a hybrid survey enables the identification and evaluation of the significant challenges.

12.
In the smart city paradigm, deploying Internet of Things (IoT) services and solutions requires extensive communication and computing resources to place and process IoT applications in real time, which consumes a lot of energy and increases operational costs. Usually, IoT applications are placed in the cloud to provide high-quality services and scalable resources. However, an efficient placement approach must take the above constraints into account. In this paper, an optimization approach for placing IoT applications in a multi-layer fog-cloud environment is proposed using a Mixed-Integer Linear Programming (MILP) model. The approach takes into account the requirements of the IoT applications, the available resource capacities, and the geographical locations of the servers, and optimizes placement decisions against multiple objectives such as data transmission, power consumption, and cost. Simulation experiments were conducted with various IoT applications (e.g., augmented reality, infotainment, healthcare, and compute-intensive workloads) to create realistic scenarios. The results show that the proposed approach outperforms the existing cloud-based approach, reducing data transmission by 64% and the associated processing and networking power consumption costs by up to 78%. Finally, a heuristic approach was developed to validate and imitate the presented model; it showed comparable outcomes, with a gap of at most 5.4% of the total power consumption.
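At toy scale, the capacity-constrained placement that the MILP solves can be illustrated by exhaustive search: enumerate every app-to-server assignment, discard those that overload a server, and keep the cheapest. The demand, capacity, and cost numbers below are invented for illustration; a real instance would hand the same data to a MILP solver rather than enumerate:

```python
from itertools import product

def place_apps(apps, servers, link_cost):
    """Exhaustive search over app->server placements (a toy stand-in for the MILP).
    apps: resource demand per application; servers: capacity per server;
    link_cost[i][j]: cost of placing app i on server j."""
    best, best_cost = None, float("inf")
    for assign in product(range(len(servers)), repeat=len(apps)):
        load = [0] * len(servers)
        for i, j in enumerate(assign):
            load[j] += apps[i]
        if any(load[j] > servers[j] for j in range(len(servers))):
            continue  # capacity constraint violated
        cost = sum(link_cost[i][j] for i, j in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

apps = [2, 3, 1]                      # resource demand of each IoT application
servers = [3, 4]                      # capacity of a fog node and a cloud node
link_cost = [[1, 5], [4, 2], [1, 3]]  # e.g. the fog node is cheap for apps 0 and 2
assign, cost = place_apps(apps, servers, link_cost)
```

The optimum places the latency-sensitive apps on the cheap fog node up to its capacity and sends the rest to the cloud, which mirrors the multi-layer trade-off the paper optimizes.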

13.
Wireless Sensor Networks (WSNs) are considered the intermediate layer in the Internet of Things (IoT) paradigm, and their effectiveness depends on the mode of deployment without sacrificing performance and energy efficiency. A WSN provides ubiquitous access to location, the status of different entities of the environment, and data acquisition for long-term IoT monitoring. Achieving high performance in WSN-IoT networks remains a real challenge, since deploying these networks over a large area consumes more power, which in turn degrades network performance. Developing robust, QoS-aware (quality-of-service-aware), energy-efficient routing protocols for WSN-assisted IoT devices therefore deserves further research to enhance the network lifetime. This paper proposes a Hybrid Energy Efficient Learning Protocol (HELP), which leverages a multi-tier adaptive framework to minimize energy consumption. HELP works as a two-tier mechanism that integrates powerful Extreme Learning Machines (ELM) into the clustering framework and employs a zonal optimization technique based on hybrid whale-dragonfly algorithms to achieve high QoS parameters. The framework uses a sub-area division algorithm to divide the network area into different zones. The Extreme Learning Machines employed in this framework categorize the Zone Cluster Heads (ZCH) based on distance and energy. After categorizing the zone cluster heads, the optimal routing path for energy-efficient data transfer is selected by the hybrid whale-dragonfly algorithms. Extensive simulations were carried out using OMNET++ with Python user-defined plugins, injecting dynamic mobility models into the networks to create a more realistic environment. Furthermore, the effectiveness of the proposed HELP is examined against existing protocols such as LEACH, M-LEACH, SEP, EACRP and SEEP; the results show that the proposed framework outperforms the other techniques in QoS parameters such as network lifetime, energy and latency.

14.
Nowadays, Internet of Things (IoT) and cloud platforms are widely used in various healthcare applications. The enormous quantity of data produced by IoT devices in the healthcare sector can be examined on the cloud platform instead of relying on the restricted storage and computation resources of mobile gadgets. To offer effective medical services, this article introduces an online medical decision support system (OMDSS) for chronic kidney disease (CKD) prediction. The presented model involves a set of stages, namely data gathering, preprocessing, and classification of medical data for the prediction of CKD. For classification, a logistic regression (LR) model is applied to classify the data instances into CKD and non-CKD. In addition, the Adaptive Moment Estimation (Adam) adaptive learning-rate optimization algorithm is applied to tune the parameters of the LR model. The performance of the introduced model is examined using a benchmark CKD dataset, and the experimental outcomes confirm the superior characteristics of the presented model on the applied dataset.
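Logistic regression tuned by Adam is standard enough to sketch directly: full-batch gradients of the logistic loss, with each parameter updated through Adam's bias-corrected first and second moments. The toy one-feature data below merely stands in for CKD features; hyperparameter values are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_lr_adam(X, y, lr=0.05, epochs=500, b1=0.9, b2=0.999, eps=1e-8):
    # Logistic regression, full-batch gradients, parameters updated by Adam
    d = len(X[0])
    w = [0.0] * (d + 1)   # last entry is the bias term
    m = [0.0] * (d + 1)   # first-moment estimates
    v = [0.0] * (d + 1)   # second-moment estimates
    t = 0
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(w[j] * xi[j] for j in range(d)) + w[d])
            err = p - yi                      # gradient of the logistic loss
            for j in range(d):
                grad[j] += err * xi[j] / len(X)
            grad[d] += err / len(X)
        t += 1
        for j in range(d + 1):
            m[j] = b1 * m[j] + (1 - b1) * grad[j]
            v[j] = b2 * v[j] + (1 - b2) * grad[j] ** 2
            mh = m[j] / (1 - b1 ** t)         # bias-corrected first moment
            vh = v[j] / (1 - b2 ** t)         # bias-corrected second moment
            w[j] -= lr * mh / (math.sqrt(vh) + eps)
    return w

# Toy separable data standing in for CKD vs non-CKD features (illustrative only)
X = [[0.1], [0.2], [0.8], [0.9]]
y = [0, 0, 1, 1]
w = train_lr_adam(X, y)
preds = [1 if sigmoid(w[0] * x[0] + w[1]) >= 0.5 else 0 for x in X]
```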

15.
Corona is a viral disease that has taken the form of an epidemic and has caused havoc worldwide since its first appearance in the city of Wuhan, China, in December 2019. Because its initial symptoms are similar to those of viral fever, the virus is challenging to identify early, and non-detection at an early stage can result in the death of the patient. Developing and densely populated countries face a scarcity of resources such as hospitals, ventilators, oxygen, and healthcare workers. Technologies like the Internet of Things (IoT) and artificial intelligence can play a vital role in diagnosing the COVID-19 virus at an early stage. To minimize the spread of the pandemic, IoT-enabled devices can collect patient data remotely and securely, and the collected data can be analyzed by a deep learning model to detect the presence of the COVID-19 virus. In this work, the authors propose a three-phase model to diagnose COVID-19 by incorporating a chatbot, IoT, and deep learning technology. In phase one, an AI-assisted chatbot guides an individual by asking about common symptoms. If even a single sign is detected, the second phase of diagnosis is considered, consisting of a thermal scanner and a pulse oximeter. In the case of high temperature and low oxygen saturation, the third phase of diagnosis is recommended, in which chest radiography images are analyzed by an AI-based model to diagnose the presence of the COVID-19 virus in the human body. The proposed model reduces human intervention through chatbot-based initial screening, sensor-based IoT devices, and deep learning-based X-ray analysis, and helps reduce the mortality rate by detecting the COVID-19 virus at an early stage.
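The three-phase escalation logic can be written as a small decision function. The thresholds (38.0 °C, 94% SpO2) and the outcome labels are illustrative assumptions, not clinical guidance or values from the paper:

```python
def covid_triage(symptoms, temperature_c=None, spo2=None):
    """Three-phase screening sketch: chatbot -> sensors -> X-ray referral.
    symptoms: dict of symptom name -> bool from the chatbot dialogue."""
    # Phase 1: chatbot symptom screening
    if not any(symptoms.values()):
        return "no further screening"
    # Phase 2: thermal scanner + pulse oximeter readings are required next
    if temperature_c is None or spo2 is None:
        return "measure temperature and SpO2"
    # Phase 3: escalate to AI-based chest radiography analysis
    if temperature_c >= 38.0 and spo2 < 94:
        return "refer for chest X-ray analysis"
    return "monitor at home"

result = covid_triage({"cough": True, "fever": False}, temperature_c=38.5, spo2=90)
```

Each phase only triggers the next when its cheaper predecessor raises a flag, which is how the model keeps human intervention and imaging workload low.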

16.
With the recent development of big data technology that collects and analyzes various data, technology that continuously collects and analyzes observed data is also drawing attention, and its importance is growing for data collection in areas that people cannot access. In general, it is not easy to properly deploy IoT wireless devices for data collection in these areas, and general wheel-based mobile devices are also inappropriate for relocation. Recently, research has been actively carried out on hopping movement models in place of wheel-based movement for inaccessible regions. The majority of studies so far, however, make the unrealistic assumption that all IoT devices know the overall state of the network and the current state of each device. Moreover, various physical terrain environments, such as coarse gravel and sand, can change from time to time, and it is impossible for all devices to recognize these changes in real time. In this paper, a method of estimating the varying environment from the migration success rate of the IoT hopping devices being relocated is proposed. This method can actively reflect the changing environment in real time and is a realistic, distributed relocation protocol, in contrast to unrealistic, theory-based relocation protocols. Another significant contribution of this paper is evaluating, for the first time, the protocol's performance with the OMNeT++ simulation tool while reflecting actual physical environmental conditions. Compared to previous studies, the proposed protocol actively reflects the state of the surrounding environment, which results in improved migration success rates and higher energy efficiency.
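One simple way a device can estimate a changing terrain from its own hop outcomes, without any global knowledge, is an exponentially weighted moving average of migration success. The abstract does not give the estimator, so this is an assumed sketch (the smoothing factor is illustrative):

```python
def update_env_estimate(estimate, success, alpha=0.2):
    # Exponentially weighted moving average of hop outcomes: recent hops count
    # more, so the estimate tracks terrain that changes over time.
    return (1 - alpha) * estimate + alpha * (1.0 if success else 0.0)

# A device starts on easy terrain, which then degrades (hops begin to fail)
est = 0.9
for outcome in [True, True, False, False, False, False]:
    est = update_env_estimate(est, outcome)
```

A device could then adapt its hop power or defer relocation whenever `est` drops below a threshold, keeping the whole decision local and distributed.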

17.
Energy conservation is a significant task in the Internet of Things (IoT) because IoT involves highly resource-constrained devices. Clustering is an effective technique for saving energy by reducing duplicate data, and in a clustering protocol the selection of the cluster head (CH) plays a key role in prolonging the lifetime of the network. However, most cluster-based protocols, including routing protocols for low-power and lossy networks (RPL), have used fuzzy logic and probabilistic approaches to select the CH node, which causes early battery depletion near the sink. To overcome this issue, this study proposes a lion optimization algorithm (LOA) for selecting the CH in RPL. LOA-RPL comprises three processes: cluster formation, CH selection, and route establishment. A cluster is formed using the Euclidean distance, CH selection is performed using the LOA, and route establishment uses residual-energy information. An extensive simulation is conducted in the network simulator ns-3 on various metrics, namely network lifetime, power consumption, packet delivery ratio (PDR), and throughput, and the performance of LOA-RPL is compared with those of RPL, fuzzy rule-based energy-efficient clustering and immune-inspired routing (FEEC-IIR), and the routing scheme for IoT that uses the shuffled frog-leaping optimization algorithm (RISA-RPL). The proposed LOA-RPL increases network lifetime by 20% and PDR by 5%-10% compared with RPL, FEEC-IIR, and RISA-RPL, and is also highly energy-efficient compared with other similar routing protocols.
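Two of the three processes are simple enough to sketch directly: cluster formation assigns each node to the nearest cluster head by Euclidean distance, and route establishment prefers the next hop with the most residual energy. The coordinates and energy values are toy data; the LOA-based CH selection itself is omitted:

```python
import math

def form_clusters(nodes, heads):
    # Cluster formation: each node joins its nearest cluster head (Euclidean distance)
    clusters = {h: [] for h in heads}
    for n in nodes:
        clusters[min(heads, key=lambda h: math.dist(n, h))].append(n)
    return clusters

def next_hop(candidates, energy):
    # Route establishment sketch: prefer the candidate with the most residual energy
    return max(candidates, key=lambda c: energy[c])

heads = [(0.0, 1.0), (10.0, 9.0)]
clusters = form_clusters([(0, 0), (1, 1), (9, 9), (10, 10)], heads)
hop = next_hop(heads, {(0.0, 1.0): 0.4, (10.0, 9.0): 0.7})
```

Routing through the most-charged head is what spreads load away from nodes near the sink, which is exactly the early-depletion problem the paper targets.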

18.
Developments in Information and Communication Technology have led to the evolution of new computing and communication environments. The technological revolution of the Internet of Things (IoT) has produced applications in almost all domains, from health care and education to entertainment, using sensors and smart devices. One subset of the IoT is the Internet of Medical Things (IoMT), which connects medical devices, hardware and software applications through the internet and enables secure wireless communication for efficient analysis of medical data. With these smart advancements and the exploitation of smart IoT devices in healthcare technology, the threat of malware attacks during the transmission of highly confidential medical data increases. This work proposes a scheme that integrates a machine learning approach and blockchain technology to detect malware during data transmission in the IoMT. The proposed Machine Learning based Blockchain Technology malware detection scheme (MLBCT-Mdetect) is implemented in three steps, namely feature extraction, classification, and blockchain. Feature extraction is performed by calculating the weight of each feature and discarding the features with low weight. In the second step, a Support Vector Machine classifier is employed to classify nodes as malware or benign. The third step uses the blockchain to store details of the selected features, which further improves malware detection with significant gains in speed and accuracy. MLBCT-Mdetect achieves higher accuracy with a low false positive rate and a high true positive rate.
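The abstract does not say how feature weights are computed, so the sketch below assumes one simple choice: weight each feature by the absolute difference of its mean over malware versus benign samples, then keep only features above a threshold. The data and threshold are illustrative:

```python
def feature_weights(samples, labels):
    # Assumed weighting: |mean over malware samples - mean over benign samples|
    d = len(samples[0])
    weights = []
    for j in range(d):
        mal = [s[j] for s, l in zip(samples, labels) if l == 1]
        ben = [s[j] for s, l in zip(samples, labels) if l == 0]
        weights.append(abs(sum(mal) / len(mal) - sum(ben) / len(ben)))
    return weights

def select_features(samples, weights, threshold):
    # Drop low-weight features; return the reduced samples and kept indices
    keep = [j for j, w in enumerate(weights) if w >= threshold]
    return [[s[j] for j in keep] for s in samples], keep

samples = [[1.0, 5.0], [0.9, 5.0], [0.1, 5.0], [0.0, 5.0]]
labels = [1, 1, 0, 0]   # 1 = malware node, 0 = benign node
w = feature_weights(samples, labels)
reduced, keep = select_features(samples, w, threshold=0.5)
```

The reduced feature vectors would then feed the SVM classifier in the scheme's second step.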

19.
With the emergence of the Internet of Things (IoT), embedded systems have changed in character and are applied in various domains such as healthcare, home automation and, in particular, Industry 4.0. These embedded IoT devices are mostly battery-driven, and Dynamic Random-Access Memory (DRAM) based main memory has been identified as the most significant source of high energy use in them. To achieve low power consumption in these devices, non-volatile memory (NVM) technologies such as Phase-change Random Access Memory (PRAM) and Spin-Transfer Torque Magnetic Random-Access Memory (STT-RAM) are becoming popular main-memory alternatives in embedded IoT devices because of features such as high density, byte addressability, high scalability and low power consumption. Additionally, Non-volatile Random-Access Memory (NVRAM) is widely adopted to save data in embedded IoT devices. Because NVM and flash memories have a limited lifetime, it is mandatory to manage NVRAM-based embedded devices with an intelligent controller that optimizes writes while considering the endurance issue. To address this challenge, the paper proposes a powerful, lightweight, machine learning-based, workload-adaptive write scheme for NVRAM, which can increase the lifetime and reduce the energy consumption of the processors. The proposed system consists of three phases: workload characterization, intelligent compression, and memory allocation. These phases distribute the write cycles across the NVRAM according to the energy-time consumption and the number of data bytes. Extensive experiments were carried out using an IoMT (Internet of Medical Things) benchmark, in which endurance factors such as application delay, energy and write time were evaluated and compared with different existing algorithms.
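The paper's ML-based scheme is not detailed in the abstract, but the endurance goal behind the memory-allocation phase (spreading write cycles evenly so no block wears out early) can be illustrated with a minimal wear-leveling allocator. This sketch covers only that idea, not the workload characterization or compression phases:

```python
def allocate_block(write_counts):
    # Endurance-aware allocation: route the next write to the least-worn block
    return min(range(len(write_counts)), key=lambda b: write_counts[b])

write_counts = [0, 0, 0, 0]   # writes absorbed by each NVRAM block so far
for _ in range(10):
    write_counts[allocate_block(write_counts)] += 1
```

After any number of writes the per-block counts differ by at most one, so the lifetime of the array is bounded by its average wear rather than by its hottest block.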

20.
The heterogeneous nodes in the Internet of Things (IoT) are relatively weak in computing power and storage capacity, so traditional network-security algorithms are not suitable for the IoT. Once these nodes alternate between normal and anomalous behavior, it is difficult for the network system to identify and isolate them in a short time, and the data transmission accuracy and the integrity of the network function are affected negatively. Based on the characteristics of the IoT, a lightweight local outlier factor detection method is used for node detection. To further determine whether a node is anomalous, the varying behavior of the nodes over time is considered, and a time series method is used so that the system responds effectively, within a short period of time, to the randomness and selectiveness of anomalously behaving nodes. Simulation results show that the proposed method can improve the accuracy of the data transmitted by the network and achieve better performance.
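The local outlier factor (LOF) itself is standard and small enough to sketch in full: compute each point's k-nearest neighbours, reachability distances, local reachability density (LRD), and finally the ratio of neighbour LRDs to the point's own. Scores near 1 mean a node's readings sit in a locally normal density; scores well above 1 flag an anomaly. The toy feature vectors below are illustrative:

```python
import math

def lof_scores(points, k=2):
    """Plain local outlier factor: a score well above 1 flags an anomalous reading."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # k nearest neighbours of each point (index 0 in the sort is the point itself)
    knn = [sorted(range(n), key=lambda j: dist[i][j])[1:k + 1] for i in range(n)]
    kdist = [dist[i][knn[i][-1]] for i in range(n)]  # distance to the k-th neighbour

    def reach(i, j):
        # Reachability distance of point i from neighbour j
        return max(kdist[j], dist[i][j])

    # Local reachability density: inverse of the mean reachability distance
    lrd = [k / sum(reach(i, j) for j in knn[i]) for i in range(n)]
    # LOF: average neighbour density relative to the point's own density
    return [sum(lrd[j] for j in knn[i]) / (k * lrd[i]) for i in range(n)]

normal = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
scores = lof_scores(normal + [(5.0, 5.0)], k=2)
```

In the node-detection setting the points would be per-node behaviour features, and the time-series stage would then track how each node's score evolves before isolating it.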


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号