Similar Documents
20 similar documents found.
1.
Collecting and storing big health data for further analysis is a challenging task because the data are voluminous and high-dimensional. Several cloud-based IoT health providers have been described in the literature. However, big data workloads raise a number of issues concerning processing time and overall network performance, and existing methods rely on under-performing optimization algorithms for optimizing the data. In the proposed method, the Chaotic Cuckoo Optimization algorithm is used for feature selection and a Convolutional Support Vector Machine (CSVM) for classification. The research presents a method for analyzing healthcare information that can be used for future prediction. The major goal is to handle a variety of data while improving efficiency and minimizing processing time. The suggested method employs a hybrid approach divided into two stages. In the first stage, it reduces the features using the Chaotic Cuckoo Optimization algorithm with Levy flight, opposition-based learning, and a distributor operator. In the second stage, CSVM is used, which combines the benefits of the convolutional neural network (CNN) and the SVM: the CSVM modifies CNN's convolution product to learn the structure hidden deep inside the data sources. For improved economic flexibility, greater protection, richer analytics with confidentiality, and lower operating cost, the suggested approach is built on fog computing. Overall, the experiments show that the suggested method can reduce the number of features in the datasets, improves accuracy by 82%, and decreases processing time.
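As a rough illustration of the feature-selection stage, the sketch below runs a cuckoo-style search with Mantegna Levy-flight steps over binary feature masks; the chaotic map and opposition-based learning components are omitted, and the fitness function, penalty weight, and synthetic dataset are assumptions rather than the authors' implementation.

```python
# Illustrative cuckoo-style feature selection with Levy flights (not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def fitness(mask):
    # Assumed objective: CV accuracy minus a small penalty per selected feature.
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum()

mask_of = lambda p: p > 0.5                  # threshold continuous positions to a mask
n_nests, n_feats, n_iter, pa = 10, X.shape[1], 10, 0.25
nests = rng.random((n_nests, n_feats))
scores = np.array([fitness(mask_of(p)) for p in nests])
best = nests[scores.argmax()].copy()
for _ in range(n_iter):
    for i in range(n_nests):
        cand = np.clip(nests[i] + 0.1 * levy_step(n_feats), 0, 1)   # Levy-flight move
        f = fitness(mask_of(cand))
        if f > scores[i]:
            nests[i], scores[i] = cand, f
    worst = scores.argsort()[: int(pa * n_nests)]   # abandon a fraction pa of worst nests
    nests[worst] = rng.random((len(worst), n_feats))
    scores[worst] = [fitness(mask_of(p)) for p in nests[worst]]
    if scores.max() > fitness(mask_of(best)):
        best = nests[scores.argmax()].copy()
print("selected features:", np.flatnonzero(mask_of(best)))
```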

2.
Fog infrastructure is a complex system due to the large number of heterogeneous resources that need to be shared. The number of embedded devices deployed with Internet of Things (IoT) technology has increased over the past few years, and these devices generate huge amounts of data. Devices in the IoT can be remotely connected and may be placed in different locations, which adds to the network delay. Real-time applications require high bandwidth with reduced latency to ensure Quality of Service (QoS). To achieve this, fog computing plays a vital role by processing requests locally with the nearest available resources at reduced latency. One of the major issues in a fog service is managing and allocating resources, and queuing theory is one of the most popular mechanisms for task allocation. In this work, an efficient model is designed to improve QoS through effective resource allocation, based on a Queuing Theory based Cuckoo Search (QTCS) model that optimizes the overall resource management process.
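As a toy illustration of combining queuing theory with a cuckoo-style search, the sketch below scores candidate task splits across fog nodes by their M/M/1 mean response time 1/(mu - lambda) and improves the split with a simple propose-and-keep random walk; the node rates, arrival rate, and search loop are assumptions, not the paper's QTCS model.

```python
# M/M/1-scored task allocation refined by a cuckoo-style random walk (assumed setup).
import numpy as np

rng = np.random.default_rng(1)
service_rate = np.array([12.0, 8.0, 10.0])   # mu_j: tasks/s each fog node serves (assumed)
total_arrivals = 24.0                        # aggregate task arrival rate lambda

def mean_response_time(split):
    """M/M/1 sojourn time 1/(mu - lambda_j) per node, averaged; inf if any node is unstable."""
    lam = total_arrivals * split
    if np.any(lam >= service_rate):
        return np.inf
    return float(np.mean(1.0 / (service_rate - lam)))

# Start from a proportional split (guaranteed stable), then search: propose a
# perturbed allocation and keep it only if it improves the queueing objective.
best = service_rate / service_rate.sum()
for _ in range(500):
    cand = np.abs(best + 0.05 * rng.standard_normal(len(best)))
    cand /= cand.sum()
    if mean_response_time(cand) < mean_response_time(best):
        best = cand
print("traffic split:", np.round(best, 3),
      "mean response time:", round(mean_response_time(best), 4), "s")
```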

3.
Bar-system DEM (discrete element method) is an effective method for solving strongly nonlinear structural problems, but as the scale of structural numerical computation grows, the computation time required by bar-system DEM expands dramatically. To improve its computational efficiency, this study proposes element-level and node-level parallel computation schemes, builds a parallel computing framework for bar-system DEM on a heterogeneous CPU-GPU platform, implements the corresponding geometrically nonlinear computation program, and realizes multi-threaded GPU parallel computation for bar-system DEM. The design of the parallel algorithm covers the data storage layout, the GPU thread computation model, the assembly of nodal physical quantities, and data-transfer optimization. Finally, large three-dimensional frame and spherical shell models are used to verify the accuracy of the parallel algorithm, and performance tests show that the parallel algorithm achieves a speedup of up to 12.7x.
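The element-level/node-level split can be illustrated in a few lines: element forces are mutually independent (one GPU thread per element in the paper; vectorised with NumPy here), and the node-level accumulation corresponds to the atomic adds a GPU kernel would use. The geometry, stiffness, and force law below are illustrative assumptions.

```python
# Sketch of element-level force computation followed by node-level assembly.
import numpy as np

n_nodes = 5
conn = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])   # element -> (node_i, node_j)
pos = np.linspace(0.0, 4.0, n_nodes)[:, None] * np.array([1.0, 0.0])  # 2-D coordinates
disp = 0.01 * np.random.default_rng(2).standard_normal((n_nodes, 2))
k = 100.0                                           # uniform axial stiffness (assumed)

# --- element level: one independent force computation per element ---
d = (pos[conn[:, 1]] + disp[conn[:, 1]]) - (pos[conn[:, 0]] + disp[conn[:, 0]])
L0 = np.linalg.norm(pos[conn[:, 1]] - pos[conn[:, 0]], axis=1)
L = np.linalg.norm(d, axis=1)
axial = k * (L - L0)                                # spring-like axial force (placeholder law)
f_elem = (axial / L)[:, None] * d                   # force on node_i; node_j gets the negative

# --- node level: scatter-add element forces to nodes (atomic add on a GPU) ---
f_node = np.zeros((n_nodes, 2))
np.add.at(f_node, conn[:, 0], f_elem)
np.add.at(f_node, conn[:, 1], -f_elem)
print(f_node.round(4))
```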

4.
Wireless sensor networks (WSNs) are considered the intermediate layer in the Internet of Things (IoT) paradigm, and their effectiveness depends on the mode of deployment without sacrificing performance and energy efficiency. A WSN provides ubiquitous access to location, the status of different entities in the environment, and data acquisition for long-term IoT monitoring. Achieving high performance in a WSN-IoT network remains a real challenge, since deploying these networks over a large area consumes more power, which in turn degrades network performance. Developing a robust, QoS (quality of service) aware, energy-efficient routing protocol for WSN-assisted IoT devices therefore deserves further research to enhance network lifetime. This paper proposes a Hybrid Energy Efficient Learning Protocol (HELP). The protocol leverages a multi-tier adaptive framework to minimize energy consumption. HELP works in a two-tier mechanism in which it integrates Extreme Learning Machines into the clustering framework and employs a zone-based optimization technique built on hybrid whale-dragonfly algorithms to achieve high QoS parameters. The framework uses a sub-area division algorithm to divide the network area into different zones. The Extreme Learning Machines (ELM) employed in this framework categorize each zone's cluster head (ZCH) based on distance and energy. After categorizing the zone cluster heads, the optimal routing path for energy-efficient data transfer is selected using the hybrid whale-dragonfly algorithm. Extensive simulations were carried out using OMNET++ with user-defined Python plugins, injecting dynamic mobility models into the networks to create a more realistic environment. Furthermore, the effectiveness of HELP is examined against existing protocols such as LEACH, M-LEACH, SEP, EACRP and SEEP, and the results show that the proposed framework outperforms these techniques in terms of QoS parameters such as network lifetime, energy, and latency.
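A loose sketch of the zonal cluster-head step is given below: the field is divided into a 2x2 grid of zones and each zone's head is the node with the best energy/distance score. The plain scoring rule stands in for the paper's ELM classifier, and the whale-dragonfly routing stage is not reproduced.

```python
# Zone division and cluster-head selection sketch (scoring rule is an assumption).
import numpy as np

rng = np.random.default_rng(6)
nodes = rng.random((60, 2)) * 100            # node positions in a 100 x 100 m field
energy = rng.uniform(0.2, 1.0, 60)           # residual energy per node (assumed units)

# 2 x 2 sub-area division: zone id from which half of each axis the node is in
zone_id = (nodes[:, 0] > 50).astype(int) * 2 + (nodes[:, 1] > 50).astype(int)
for z in range(4):
    members = np.flatnonzero(zone_id == z)
    if members.size == 0:
        continue
    center = nodes[members].mean(axis=0)
    dist = np.linalg.norm(nodes[members] - center, axis=1)
    score = energy[members] - 0.01 * dist    # favour high energy, proximity to zone centre
    zch = members[np.argmax(score)]
    print(f"zone {z}: {len(members)} nodes, cluster head = node {zch}")
```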

5.
With the advancement of the internet, there has also been a rise in cybercrime and digital attacks. The DDoS (Distributed Denial of Service) attack is the most dominant weapon for exploiting the vulnerabilities of the internet and poses a significant threat in the digital environment. These cyber-attacks are generated deliberately and consciously by the attacker to overwhelm the target with so much traffic that genuine users cannot use the target resources; as a result, the targeted services become inaccessible to legitimate users. To prevent these attacks, researchers use advanced machine learning classifiers that can accurately detect DDoS attacks. However, the challenge in using these techniques lies in the limits on the volume of data they can handle and the required processing time. In this research work, we propose a framework that reduces the dimensionality of the data by selecting the most important features, i.e., those that contribute most to predictive accuracy. We show that a 'lite' model trained on the reduced dataset not only saves computational power but also improves predictive performance, and that dimensionality reduction can improve both the effectiveness (recall) and the efficiency (precision) of the model compared with a model trained on the 'full' dataset.
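The 'full' versus 'lite' comparison can be sketched with standard tooling: rank features by importance, keep the top k, and retrain. The dataset, model, and k below are stand-ins for the DDoS traffic data and classifiers used in the paper.

```python
# Importance-ranked feature selection: compare a full model with a 'lite' model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=40, n_informative=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

full = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
top = np.argsort(full.feature_importances_)[::-1][:10]     # keep 10 most important features
lite = RandomForestClassifier(random_state=0).fit(Xtr[:, top], ytr)

for name, model, Xe in [("full", full, Xte), ("lite", lite, Xte[:, top])]:
    p = model.predict(Xe)
    print(name, "precision:", round(precision_score(yte, p), 3),
          "recall:", round(recall_score(yte, p), 3))
```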

6.
Journal of the Chinese Institute of Engineers, 2012, 35(5): 515-522
To improve work efficiency and productivity in agricultural environments, which are naturally protean, unpredictable, and laborious, many researchers have attempted to apply information technology to conventional agriculture. In particular, there is great interest in a reasonable and reliable service control model or architecture for controlling agricultural work automatically and intelligently. This article proposes a framework to support context-aware (smart) services in agricultural environments, based on a context-aware workflow model and USN/RFID. The suggested framework focuses on supporting autonomous agricultural services in the areas of growth control, protection against disease and insects, and output control of the agricultural computing environment, using a workflow model, one of the most successful service-automation models, driven by contexts gathered from various sensors in agricultural fields. The proposed framework, based on a context-aware workflow, can be greatly helpful in developing smart applications and work automation in agriculture.

7.
Satellite cloud-derived wind inversion is large in scale and computationally intensive, and the time-consuming serial inversion algorithm makes it very difficult to break through the efficiency bottleneck. We propose a parallel acceleration scheme for the cloud-derived wind inversion algorithm based on MPI cluster parallel techniques. Following a divide-and-conquer approach, wind-vector inversion tasks are assigned to each computing unit according to a defined strategy, and each unit executes its assigned tasks in parallel, thereby reducing the efficiency bottleneck caused by accumulated serial inversion time. Within this MPI-based parallel acceleration scheme, an algorithm based on performance prediction is proposed to effectively balance the load across the MPI cluster. A comparative analysis of experimental data using this parallel framework shows that the technique provides a clear acceleration of the cloud-derived wind inversion algorithm: the speedup of the MPI-based parallel algorithm reaches 14.96, which meets the expected estimate. This paper also proposes an efficiency optimization algorithm for cloud-derived wind inversion; with minimal loss of wind-vector accuracy, the optimized algorithm executes up to 13 times faster.
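A hypothetical mpi4py sketch of the divide-and-conquer scheme follows: the inversion tasks are split into chunks sized by each rank's predicted performance, each rank inverts its chunk, and the results are gathered. The per-task "inversion" is a placeholder, and the uniform performance prediction is an assumption.

```python
# Performance-weighted task partitioning over MPI (run: mpiexec -n 4 python wind_mpi.py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    n_tasks = 1000
    speeds = np.ones(size)                      # performance prediction (assumed equal here)
    share = (speeds / speeds.sum() * n_tasks).astype(int)
    share[-1] += n_tasks - share.sum()          # fix rounding so shares cover all tasks
    tasks = np.arange(n_tasks, dtype=float)
    chunks = np.split(tasks, np.cumsum(share)[:-1])
else:
    chunks = None

local = comm.scatter(chunks, root=0)            # divide ...
local_result = np.sqrt(local) * 0.5             # ... and conquer (placeholder inversion)
results = comm.gather(local_result, root=0)

if rank == 0:
    winds = np.concatenate(results)
    print("inverted", winds.size, "wind vectors")
```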

8.
Cloud computing is becoming a popular technology due to its functional properties and the variety of customer-oriented services it offers over the Internet. The design of reliable, high-quality cloud applications requires a strong Quality of Service (QoS) parameter metric, and building highly reliable cloud applications in a hyperconverged cloud ecosystem is a challenging job. The selection of cloud services is based on QoS parameters, which play essential roles in optimizing and improving cloud rankings. The emergence of cloud computing is significantly reshaping the digital ecosystem, and the numerous services offered by cloud service providers (CSPs) are playing a vital role in this transformation. Hyperconverged software-based unified utilities combine storage virtualization, compute virtualization, and network virtualization, and their availability has further raised the demand for QoS. Because of the diversity of services, the associated quality parameters are also abundant and call for a carefully designed mechanism to compare and identify the critical, common, and most impactful ones. It is also necessary to reconsider market needs in terms of service requirements and the QoS provided by various CSPs. This research provides a machine learning-based mechanism to monitor QoS in a hyperconverged environment with three core service parameters: service quality, server downtime, and cloud service outages.

9.
10.
With the development of service-oriented computing (SOC), web services have become an important and popular solution for designing the application systems of various enterprises. Nowadays, with numerous web services offered by providers on the network, it is difficult for users to select the most reliable one from a large number of services with the same functionality, so it is necessary to design feasible selection strategies that provide users with reliable services. Most existing methods attempt to select services according to accurate predictions of quality of service (QoS) values. However, because the network and user needs are dynamic, it is almost impossible to predict QoS values accurately, and accurate prediction is generally time-consuming. This paper proposes a prediction approach based on a service decision tree with post-pruning. It first defines five reliability levels for measuring the reliability of services. By analyzing service quality data from the network, the proposed method generates a training set and converts it into a service decision tree model. Using the generated model, the method discretizes the continuous attributes of a given service and classifies it into the corresponding reliability level. Moreover, a post-pruning strategy is applied to optimize the generated model and avoid over-fitting. Experimental results show that the proposed method is effective in predicting service reliability.
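As an illustration of decision-tree classification with post-pruning, the sketch below uses scikit-learn's minimal cost-complexity pruning (ccp_alpha) as a stand-in for the paper's post-pruning strategy; the QoS features and the five reliability levels are synthetic.

```python
# Reliability-level classification with an unpruned vs a cost-complexity-pruned tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
qos = rng.random((1500, 3))                       # e.g. response time, throughput, failure rate
score = qos @ np.array([-0.5, 0.4, -0.8])
level = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))  # 5 reliability levels
Xtr, Xte, ytr, yte = train_test_split(qos, level, random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
# Pick a pruning strength from the cost-complexity path, then refit the pruned tree.
alphas = unpruned.cost_complexity_pruning_path(Xtr, ytr).ccp_alphas
pruned = DecisionTreeClassifier(random_state=0,
                                ccp_alpha=alphas[len(alphas) // 2]).fit(Xtr, ytr)

print("unpruned leaves:", unpruned.get_n_leaves(), "acc:", round(unpruned.score(Xte, yte), 3))
print("pruned   leaves:", pruned.get_n_leaves(), "acc:", round(pruned.score(Xte, yte), 3))
```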

11.
Communication is important for providing intelligent services in connected vehicles: vehicles must be able to communicate with different places and exchange information while driving. For service operation, connected vehicles frequently attempt to download large amounts of data. They can request downloads from a road side unit (RSU), which provides infrastructure for connected vehicles. The RSU is a data bottleneck in a transportation system because data traffic is concentrated there, so it is not always appropriate for a connected vehicle to attempt a high-speed download from the RSU. If the mobile network between a connected vehicle and an RSU has poor connection quality, the efficiency and speed of the download decrease, which degrades the quality of the user experience. It is therefore important for a connected vehicle to connect to an RSU with consideration of network conditions in order to maximize download speed. The proposed method maximizes download speed from an RSU using a machine learning algorithm. Fog computing is used to collect and learn from network data: a fog server is integrated with the RSU to perform the computation. If the algorithm recognizes that conditions are not good for a mass data download, it does not attempt to download at high speed. The proposed method can thus improve the efficiency of high-speed downloads, a conclusion validated through extensive computer simulations.
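A minimal sketch of the fog-side decision might look as follows: a classifier trained on past observations of network conditions predicts whether a high-speed download is worthwhile. All features, labels, and thresholds are synthetic assumptions, not the paper's model.

```python
# Fog-side download decision: classify current conditions as download-worthy or not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 2000
rssi = rng.uniform(-100, -40, n)             # signal strength, dBm
speed = rng.uniform(0, 30, n)                # vehicle speed, m/s
load = rng.uniform(0, 1, n)                  # RSU load fraction
# Synthetic ground truth: past downloads that succeeded at high speed.
ok = (rssi > -75) & (load < 0.7) & (speed < 25)

X = np.column_stack([rssi, speed, load])
clf = RandomForestClassifier(random_state=0).fit(X, ok)

vehicle = np.array([[-68.0, 12.0, 0.4]])     # current conditions reported to the fog node
print("attempt high-speed download" if clf.predict(vehicle)[0] else "defer download")
```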

12.
During the last two decades, mobile communication systems (such as GSM, GPRS and 3G networks), wireless broadcasting networks, wireless local area networks (WLAN or WiFi), and wireless sensor networks have been successfully developed and widely deployed along different technological routes to provide a variety of communication services in different application scenarios. While making tremendous contributions to social progress and economic growth, these heterogeneous wireless networks consume a great deal of energy to achieve overlapping service coverage and, at the same time, generate strong electromagnetic interference (EMI) and radiation pollution, especially in big cities with high building density and large user populations. In order to guarantee the overall return on investment (ROI), improve user experience and quality of service (QoS), save energy, reduce EMI and radiation pollution, and enable the sustainable deployment of new profitable applications and services, this paper proposes a cross-network cooperation mechanism that effectively shares network resources and infrastructure, and then adaptively controls and matches multi-network energy distribution characteristics according to actual user and service requirements in different geographic areas. Some idle or lightly loaded base stations (BSs) are temporarily turned off to save energy and reduce EMI. Initial simulation results show the proposed approach can significantly improve overall energy efficiency and QoS performance across multiple cooperating wireless networks.

13.
The Medical Internet of Things (MIoT) is a collection of small, energy-efficient wireless sensor devices that monitor a patient's body. Healthcare networks transmit continuous monitoring data so that patients can live independently. Despite many improvements in MIoTs, critical issues remain that can affect the Quality of Service (QoS) of the network, and congestion handling is one of the factors that directly affects QoS. Congestion in an MIoT can cause higher energy consumption, delay, and the loss of important data. If a patient has an emergency, the life-critical signals must be transmitted with minimum latency: during emergencies, the MIoT has to monitor patients continuously and transmit data (e.g., ECG, BP, heart rate) with minimum delay. An efficient technique is therefore required that can deliver the emergency data of high-risk patients to medical staff on time and with maximum reliability. The main objective of this research is to monitor and transmit patients' real-time data efficiently and to prioritize emergency data. This paper proposes the Emergency Prioritized and Congestion Handling Protocol for Medical IoTs (EPCP_MIoT), which monitors patients efficiently and overcomes congestion by enabling different monitoring modes, while emergency data transmissions are prioritized and sent after a SIFS interval. The proposed technique was implemented and compared with a previous technique; the comparison shows that it outperforms previous techniques in terms of network throughput, end-to-end delay, energy consumption, and packet loss ratio.
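The emergency-first scheduling idea can be sketched with a priority queue: readings flagged as emergencies are always dequeued ahead of routine ones, mimicking the protocol's SIFS-priority transmission. The packet fields and vital-sign thresholds below are assumptions.

```python
# Emergency-first transmission queue sketch (thresholds and fields are illustrative).
import heapq, itertools

EMERGENCY, ROUTINE = 0, 1          # lower value = dequeued first
counter = itertools.count()        # tie-breaker preserves FIFO order within a class
queue = []

def enqueue(reading):
    prio = EMERGENCY if reading["heart_rate"] > 120 or reading["spo2"] < 90 else ROUTINE
    heapq.heappush(queue, (prio, next(counter), reading))

for r in [{"patient": 1, "heart_rate": 80,  "spo2": 97},
          {"patient": 2, "heart_rate": 135, "spo2": 95},   # tachycardia -> emergency
          {"patient": 3, "heart_rate": 76,  "spo2": 88}]:  # low SpO2    -> emergency
    enqueue(r)

while queue:
    prio, _, r = heapq.heappop(queue)
    print("send", "EMERGENCY" if prio == EMERGENCY else "routine ", r)
```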

14.
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, resulting in higher demand for data storage, computational capacity, and real-time processing. Cloud computing has traditionally played an important role in establishing the IoT. However, fog computing has recently emerged as a field complementing cloud computing thanks to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. IoT networks are nonetheless vulnerable to malicious attacks because of their open and shared nature, and various fog computing-based security models have been developed to protect them. A distributed architecture based on an intrusion detection system (IDS) provides a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes while successfully detecting advanced malicious threats. In this study, we examined the time-related aspects of network traffic data and present an intrusion detection model based on a two-layer bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism for traffic data classification, verified on the UNSW-NB15 benchmark dataset. We show that the suggested model outperforms numerous leading-edge network IDSs built on machine learning models in terms of accuracy, precision, recall, and F1 score.
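A minimal Keras sketch of a two-layer Bi-LSTM with dot-product self-attention for flow classification is shown below; the layer sizes, input shape, and attention variant are assumptions, and UNSW-NB15 preprocessing is omitted.

```python
# Two-layer Bi-LSTM with self-attention for binary traffic classification (sketch).
import tensorflow as tf
from tensorflow.keras import layers, Model

T, F = 20, 42                                   # timesteps x features per window (assumed)
inp = layers.Input(shape=(T, F))
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inp)   # Bi-LSTM layer 1
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)     # Bi-LSTM layer 2
attn = layers.Attention()([x, x])               # dot-product self-attention over timesteps
x = layers.GlobalAveragePooling1D()(attn)
out = layers.Dense(1, activation="sigmoid")(x)  # attack vs normal

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```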

15.
Discovering gradual moving object cluster patterns from trajectory streams makes it possible to characterize movement behavior in a real-time environment, which enables new applications and services. Since trajectory streams evolve rapidly, are continuously created, and cannot be stored indefinitely in memory, existing approaches designed for static trajectory datasets are not suitable for discovering gradual moving object cluster patterns from streams. This paper proposes a novel algorithm for discovering gradual moving object cluster patterns from trajectory streams using sliding window models. By processing the trajectory data in the current window, the mining algorithm can capture the trend and evolution of moving object cluster patterns. First, the density peaks clustering algorithm is used to identify the clusters of each snapshot, and the stable relationships among relatively few moving objects are used to improve clustering efficiency. Then, by intersecting clusters from different snapshots, the gradual moving object cluster pattern is updated; the relationship between clusters in adjacent snapshots and the gradual property are exploited to accelerate the updating process. Finally, experimental results on two real datasets demonstrate that the algorithm is effective and efficient.
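The snapshot-intersection update can be sketched compactly: candidate gradual clusters are kept as object-ID sets and intersected with each new snapshot's clusters. The snapshot clusters below are given directly; in the paper they would come from density peaks clustering of the stream.

```python
# Gradual moving-object cluster maintenance by snapshot intersection (toy data).
min_size = 3                                    # minimum cluster size to keep (assumed)

def update(candidates, new_clusters):
    """Intersect current candidate clusters with the new snapshot's clusters."""
    out = set()
    for cand in candidates:
        for cl in new_clusters:
            inter = frozenset(cand & cl)
            if len(inter) >= min_size:
                out.add(inter)
    return out

stream = [
    [{1, 2, 3, 4, 5}, {6, 7, 8}],               # snapshot t1 clusters
    [{1, 2, 3, 4}, {5, 6, 7, 8}],               # t2: object 5 drifts away
    [{1, 2, 3, 9}, {4, 5, 6, 7}],               # t3: object 4 drifts, 9 joins
]
candidates = {frozenset(c) for c in stream[0] if len(c) >= min_size}
for snap in stream[1:]:                          # slide over the window's snapshots
    candidates = update(candidates, [frozenset(c) for c in snap])
print("gradual clusters over the window:", [sorted(c) for c in candidates])
```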

16.
Mobile edge computing (MEC) provides effective cloud services and functionality at the edge device, improving the quality of service (QoS) of end users by offloading computation-heavy tasks. The introduction of deep learning (DL) and new hardware technologies paves the way for detecting the current traffic status, offloading data, and identifying cyberattacks in MEC. This study introduces an artificial intelligence with metaheuristic based data offloading technique for secure MEC (AIMDO-SMEC) systems. The proposed AIMDO-SMEC technique incorporates an effective traffic prediction module using Siamese Neural Networks (SNN) to determine the traffic status in the MEC system, and an adaptive sampling cross entropy (ASCE) technique for data offloading. Moreover, the modified salp swarm algorithm (MSSA) with extreme gradient boosting (XGBoost) is used to identify and classify the cyberattacks present in MEC systems. To examine the enhanced outcomes of the AIMDO-SMEC technique, a comprehensive experimental analysis was carried out; the results demonstrate its improved performance, with a minimal task completion time (CTT) of 0.680.
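As a sketch of the attack-classification component alone, the code below uses a random search as a stand-in for the MSSA hyperparameter optimiser and scikit-learn's gradient boosting as a stand-in for XGBoost; the traffic data are synthetic.

```python
# Metaheuristic-tuned boosted attack classifier, approximated by random search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=1500, n_features=20, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)   # imbalanced attack traffic

best_score, best_params = -np.inf, None
for _ in range(15):                          # "swarm" of candidate configurations
    params = {"n_estimators": int(rng.integers(50, 300)),
              "learning_rate": float(rng.uniform(0.01, 0.3)),
              "max_depth": int(rng.integers(2, 6))}
    score = cross_val_score(GradientBoostingClassifier(**params, random_state=0),
                            X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params
print("best params:", best_params, "cv accuracy:", round(best_score, 3))
```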

17.
18.
Model Predictive Control (MPC) has previously been applied to supply chain problems with promising results; however, most systems proposed so far have no information about future demand. Incorporating a forecasting methodology into an MPC framework can improve the efficiency of control actions by providing insight into the future. This paper explores that possibility by proposing a complete management framework for production-inventory systems based on MPC and a neural network time series forecasting model. The proposed framework is tested on industrial data in order to assess the efficiency of the method and the impact of forecast accuracy on overall control performance; to this end, it is compared with several alternative forecasting approaches implemented on the same industrial dataset. The results show that the proposed scheme can significantly improve the performance of the production-inventory system, because more accurate predictions are supplied to the MPC optimization problem solved in real time.
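A toy version of the MPC-plus-forecast loop is sketched below: at each step a demand forecast feeds a finite-horizon optimisation of production, and only the first planned move is applied (receding horizon). A moving average stands in for the paper's neural-network forecaster, and the dynamics and costs are assumptions.

```python
# Receding-horizon MPC for a production-inventory system with a forecast in the loop.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
demand = 50 + 10 * np.sin(np.arange(60) / 5) + rng.normal(0, 2, 60)
H, target, inv = 5, 100.0, 100.0                 # horizon, inventory setpoint, initial stock

def forecast(history, h):
    return np.full(h, np.mean(history[-5:]))     # moving average: NN forecaster stand-in

def mpc_cost(u, inv0, d_hat):
    inv_t, cost = inv0, 0.0
    for k in range(H):
        inv_t = inv_t + u[k] - d_hat[k]          # inventory balance dynamics
        cost += (inv_t - target) ** 2 + 0.1 * u[k] ** 2
    return cost

for t in range(10, 20):                          # simulate ten control steps
    d_hat = forecast(demand[:t], H)
    res = minimize(mpc_cost, x0=np.full(H, d_hat[0]), args=(inv, d_hat),
                   bounds=[(0, 200)] * H)
    u0 = res.x[0]                                # apply only the first planned move
    inv = inv + u0 - demand[t]                   # true demand realises
    print(f"t={t}: produce {u0:6.2f}, inventory {inv:7.2f}")
```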

19.
The prediction of particulate matter less than 2.5 micrometers in diameter (PM2.5) in fog and haze has received increasing attention, but prediction accuracy remains unsatisfactory: haze prediction algorithms based on traditional numerical and statistical methods perform poorly on nonlinear haze data. To improve prediction, this paper proposes a haze feature extraction and pollution-level identification pre-warning algorithm based on feature selection and ensemble learning. The Minimum Redundancy Maximum Relevance (mRMR) method is used to extract low-level haze features, and a deep belief network is used to extract high-level features. The eXtreme Gradient Boosting (XGBoost) algorithm is adopted to fuse the low-level and high-level features and to predict haze. A PM2.5 concentration pollution-grade classification index is established and the forecast data are graded accordingly, with expert knowledge used to assist in optimizing the pre-warning results. The experimental results show that the presented algorithm achieves better predictions than the widely used Support Vector Machine (SVM) and Back Propagation (BP) approaches, with greatly improved accuracy.
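The fuse-then-boost structure can be sketched as two feature tracks concatenated before a boosted model: mutual-information selection stands in for mRMR, PCA stands in for the deep belief network's high-level features, and scikit-learn's gradient boosting stands in for XGBoost; the data are synthetic.

```python
# Two-track feature extraction fused into a boosted regressor (all stand-ins).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=1000, n_features=25, n_informative=8,
                       noise=5.0, random_state=0)      # stand-in for PM2.5 data
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

low = SelectKBest(mutual_info_regression, k=8).fit(Xtr, ytr)     # "mRMR" track
high = PCA(n_components=5).fit(Xtr)                              # "DBN" track
fuse = lambda A: np.hstack([low.transform(A), high.transform(A)])

model = GradientBoostingRegressor(random_state=0).fit(fuse(Xtr), ytr)
pred = model.predict(fuse(Xte))
print("MAE of fused-feature model:", round(mean_absolute_error(yte, pred), 2))
```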

20.
Drug sensitivity prediction is one of the critical tasks in drug design and discovery. Several online databases and consortiums have recently provided open access to pharmacogenomic data, and these databases have helped in developing computational approaches to drug sensitivity prediction. Cancer is a complex disease involving heterogeneous responses to the same drug therapy among patients with the same tumour type. Several methods have been proposed in the literature to predict drug sensitivity, but they are not yet efficient enough. The present study proposes an ensemble learning framework for drug-response prediction using a modified rotation forest. The framework is compared with three state-of-the-art algorithms and two baseline methods on the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE) drug screens, and the proposed approach is also used to predict missing drug response values in the datasets. It outperforms its counterparts even though gene mutation data are not incorporated in its design, achieving an average mean square error of 3.14 and 0.404 on the GDSC and CCLE drug screens, respectively. The results show that the proposed framework has considerable potential to improve anti-cancer drug response prediction. Inspec keywords: medical computing, molecular biophysics, genomics, genetics, learning (artificial intelligence), patient treatment, drugs, cellular biophysics, cancer, biology computing, tumours, diseases. Other keywords: ensembled machine learning framework, drug sensitivity prediction, drug therapy, ensemble learning framework, drug-response prediction, Cancer Cell Line Encyclopedia drug screens, drug response values, CCLE drug screens, anti-cancer drug response prediction.
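A bare-bones rotation-forest regressor can be sketched as follows (the paper's modified rotation forest adds changes not reproduced here): each tree is trained on data rotated by PCA fitted to disjoint random feature subsets, and predictions are averaged.

```python
# Minimal rotation-forest regressor: block-diagonal PCA rotations + averaged trees.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X, y = make_regression(n_samples=600, n_features=20, noise=3.0, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

def fit_rotation(Xa, n_subsets=4):
    """Build a block-diagonal rotation from PCA on disjoint random feature subsets."""
    idx = rng.permutation(Xa.shape[1])
    R = np.zeros((Xa.shape[1], Xa.shape[1]))
    for s in np.array_split(idx, n_subsets):
        R[np.ix_(s, s)] = PCA(n_components=len(s)).fit(Xa[:, s]).components_.T
    return R

forest = []
for _ in range(25):                       # 25 rotated trees (size chosen arbitrarily)
    boot = rng.integers(0, len(Xtr), len(Xtr))   # bootstrap sample per tree
    R = fit_rotation(Xtr[boot])
    tree = DecisionTreeRegressor().fit(Xtr[boot] @ R, ytr[boot])
    forest.append((R, tree))

pred = np.mean([t.predict(Xte @ R) for R, t in forest], axis=0)
print("MSE:", round(mean_squared_error(yte, pred), 2))
```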
