Similar Literature
Found 20 similar articles (search time: 46 ms)
1.
Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. This paper proposes an alternative: a Deep Learning-based Human Activity Recognition framework for edge computing, which we call DL-HAR. The framework exploits the capabilities of cloud computing to train a deep learning model and then deploys it on less powerful edge devices for recognition; training is conducted in the cloud and the resulting model is distributed to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. To evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on the benchmark dataset show a significant increase in performance compared with state-of-the-art models.
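A minimal sketch of the cloud-train / edge-deploy split the abstract describes, assuming a TensorFlow/Keras stack with TFLite conversion (the paper does not name its tooling); data shapes and class count are illustrative placeholders for tri-axial accelerometer windows.

```python
import numpy as np
import tensorflow as tf

# --- Cloud side: train a small 1D-CNN on sensor windows (placeholder data) ---
x_train = np.random.rand(1000, 128, 3).astype("float32")  # 128 samples x 3 axes
y_train = np.random.randint(0, 6, 1000)                   # 6 activity classes

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(128, 3)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=2, verbose=0)

# --- Distribution step: compress the trained model for constrained edge nodes ---
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
open("har_edge.tflite", "wb").write(converter.convert())
```

The edge node would then run the quantized `.tflite` artifact with a TFLite interpreter, keeping training entirely in the cloud.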

2.
In recent years, wireless sensing technologies have provided a much sought-after alternative to expensive cabled monitoring systems. Wireless sensing networks forego the high data transfer rates associated with cabled sensors in exchange for low-cost and low-power communication between a large number of sensing devices, each of which features embedded data processing capabilities. As such, a new paradigm in large-scale data processing has emerged, one in which communication bandwidth is limited but distributed data processing centers are abundant. By taking advantage of this grid of computational resources, data processing tasks once performed independently by a central processing unit can now be parallelized, automated, and carried out within a wireless sensor network. By utilizing the intelligent organization and self-healing properties of many wireless networks, an extremely scalable multiprocessor computational framework can be developed to perform advanced engineering analyses. In this study, a novel parallelization of the simulated annealing stochastic search algorithm is presented and used to update structural models by comparing model predictions to experimental results. The resulting distributed model updating algorithm is validated within a network of wireless sensors by identifying the mass, stiffness, and damping properties of a three-story steel structure subjected to seismic base motion.
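A toy sketch of the distributed model-updating idea, assuming several independent simulated-annealing chains (one per "sensor node") searching for a stiffness value that reproduces a measured natural frequency; the single-DOF frequency model and all values below are stand-ins for the paper's three-story structure.

```python
import numpy as np

def measured_freq():                 # pretend this came from sensor data
    return 2.5                       # Hz

def predicted_freq(k, m=1000.0):     # simple SDOF model: f = sqrt(k/m) / (2*pi)
    return np.sqrt(k / m) / (2 * np.pi)

def anneal(seed, steps=2000, t0=1.0):
    rng = np.random.default_rng(seed)
    k = rng.uniform(1e5, 5e5)        # initial stiffness guess (N/m)
    cost = abs(predicted_freq(k) - measured_freq())
    for i in range(steps):
        t = t0 * (1 - i / steps)     # linear cooling schedule
        k_new = k + rng.normal(0, 1e4)
        c_new = abs(predicted_freq(k_new) - measured_freq())
        # Accept better moves always, worse moves with Boltzmann probability.
        if c_new < cost or rng.random() < np.exp(-(c_new - cost) / max(t, 1e-9)):
            k, cost = k_new, c_new
    return k, cost

# "Parallel" execution: each wireless node would run one chain locally and
# report back; here we simply loop over seeds and keep the best result.
best = min((anneal(s) for s in range(4)), key=lambda r: r[1])
print("identified stiffness:", best[0])
```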

3.
Text classification is an increasingly crucial topic in natural language processing. Traditional machine learning-based text classification methods have many disadvantages, such as dimension explosion, data sparsity, and limited generalization ability. This paper presents an extensive study of deep learning-based text classification models, including Convolutional Neural Network-based (CNN-based), Recurrent Neural Network-based (RNN-based), and attention mechanism-based models. Many studies have shown that deep learning-based text classification methods outperform traditional methods when processing large-scale and complex datasets, mainly because they avoid the cumbersome feature-extraction process and achieve higher prediction accuracy on large sets of unstructured data. We also summarize the shortcomings of traditional text classification methods and introduce the deep learning-based text classification process, including text preprocessing, distributed representation of text, construction of deep learning-based classification models, and performance evaluation.
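A minimal CNN-based classifier of the kind the survey covers, sketched in Keras; the vocabulary size, sequence length, class count, and random token data are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

vocab, seq_len, classes = 5000, 100, 4
x = np.random.randint(0, vocab, (200, seq_len))   # token-id sequences
y = np.random.randint(0, classes, 200)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 64),               # distributed representation
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram-style filters
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(classes, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)
```

The embedding layer plays the "distributed representation" role named in the abstract; an RNN- or attention-based variant would swap out only the middle layers.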

4.
With the rapid growth of Internet of Things (IoT) based models, the sheer amount of generated data makes cloud computing resources insufficient. Hence, edge computing-based techniques are becoming more popular in current research, making data storage and processing effective at the network edge. Edge computing offers advanced features such as parallel processing and data perception, yet providing privacy and data security over networks remains challenging. To address the security issues of edge computing, the Hash-based Message Authentication Code (HMAC) algorithm is used to preserve data from the various attacks that arise in a distributed network. This paper proposes a Trust Model for Secure Data Sharing (TM-SDS) built on the HMAC algorithm. Data security is ensured through local and global trust levels combined with centralized cloud processing, while conserving resources effectively. The proposed model achieved a packet delivery ratio of 84.25%, better than existing models, and data packets are securely transmitted between entities. The results show that the proposed TM-SDS model outperforms existing models in an efficient manner.
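Illustrative use of the HMAC primitive TM-SDS builds on, using Python's standard library; key provisioning and the paper's trust-level logic are out of scope here, and the key and payload are placeholders.

```python
import hashlib
import hmac

key = b"shared-secret-key"            # would be provisioned per edge node
payload = b'{"sensor": 17, "reading": 23.4}'

# Sender side: attach an authentication tag to the message.
tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Receiver side: recompute and compare in constant time to detect tampering.
def verify(key, payload, tag):
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, payload, tag)
```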

5.
A semiconductor distributor that plays a third-party role in the supply chain buys diverse components from different suppliers, warehouses them, and resells them to a number of electronics manufacturers with vendor-managed inventories, suffering both oversupply and shortage risks due to demand uncertainty. Demand fluctuation and supply chain complexity are increasing as product life cycles shorten in the consumer electronics era while lead times for capacity expansion in high-tech manufacturing remain long. Focusing on the realistic needs of a leading distributor of semiconductor components and modules, this study constructs a UNISON framework based on deep reinforcement learning (RL) for dynamically selecting the optimal demand forecast model for each product according to its demand pattern, empowering smart production for Industry 3.5. Deep RL, which integrates a deep learning architecture with an RL algorithm, can learn successful policies from the dynamic and complex real world, and its reward function mechanism can reduce the negative impact of demand uncertainty. An empirical study was conducted for validation, showing the practical viability of the proposed approach; indeed, the developed solution has been deployed in real settings.
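A simplified stand-in for the mechanism the abstract describes: an agent learns which forecast model to apply to each demand pattern, with reward equal to the negative forecast error. The paper uses deep RL; a tabular, bandit-style Q-learning loop is shown here to keep the mechanism visible, and all states, actions, and error values are synthetic.

```python
import numpy as np

patterns = ["stable", "seasonal", "intermittent"]      # states (demand patterns)
models = ["moving_avg", "exp_smooth", "lstm"]          # actions (forecast models)
rng = np.random.default_rng(0)

# Synthetic "true" error of each model on each pattern (unknown to the agent).
true_err = rng.uniform(0.05, 0.5, (len(patterns), len(models)))

Q = np.zeros((len(patterns), len(models)))
for step in range(5000):
    s = rng.integers(len(patterns))
    # Epsilon-greedy action selection.
    a = rng.integers(len(models)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    reward = -(true_err[s, a] + rng.normal(0, 0.02))   # noisy negative error
    Q[s, a] += 0.05 * (reward - Q[s, a])               # incremental value update

for i, p in enumerate(patterns):
    print(p, "->", models[int(np.argmax(Q[i]))])
```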

6.
With the rapid development of computer technology, deep learning is being applied ever more widely in engineering. In practice, however, the datasets available for training are often small-sample, high-dimensional, and sparse, which severely limits the applicability of conventional deep learning models. This paper establishes a physics-informed neural network model enhanced by transfer learning for solving forward and inverse mechanics problems with sparse data. The transfer-learning strategy uses knowledge already embedded in a source model to strengthen learning on the target task, improving learning efficiency and achieving good predictive performance without large amounts of data. The method trains the source model on datasets of thin plates (simply supported at two ends and clamped at two ends) and extracts neural network features from the source model via deep transfer learning; the source model is then fine-tuned on the sparse dataset of the target task, and the approach is validated on target tasks of predicting the response of plates with different boundary conditions (forward problem) and identifying those boundaries (inverse problem). The results show that the method achieves good accuracy and generalization on small-sample target tasks. Compared with purely data-driven deep learning models, the physics-informed neural network model effectively avoids the cost of data generation and problems such as mesh dependence.
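A schematic of the transfer-learning step described above, sketched in Keras under stated assumptions: a source network is trained on abundant plate data, its feature layers are frozen, and only the head is fine-tuned on the sparse target set. The PINN physics loss is omitted for brevity, and the input/output convention (plate coordinates to deflection) and all shapes are illustrative.

```python
import numpy as np
import tensorflow as tf

def make_net():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="tanh", input_shape=(2,)),
        tf.keras.layers.Dense(64, activation="tanh"),
        tf.keras.layers.Dense(1),                      # plate deflection w(x, y)
    ])

# Source task: abundant (here random placeholder) data for one boundary case.
source = make_net()
source.compile("adam", "mse")
xs, ws = np.random.rand(2000, 2), np.random.rand(2000, 1)
source.fit(xs, ws, epochs=5, verbose=0)

# Target task: reuse the learned features, retrain only the last layer on a
# handful of samples from a plate with different boundary conditions.
for layer in source.layers[:-1]:
    layer.trainable = False
source.compile("adam", "mse")                          # recompile after freezing
xt, wt = np.random.rand(20, 2), np.random.rand(20, 1)  # sparse target data
source.fit(xt, wt, epochs=50, verbose=0)
```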

7.
The goal of this study is to show emerging applications of deep learning technology in cancer imaging. Deep learning technology is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior. Applied to cancer imaging, deep learning can assist pathologists in detecting and classifying cancer in the early stages of its development, allowing patients to receive appropriate treatments that can increase their survival. Statistical and other analytical approaches, based on data from ScienceDirect (a source for scientific research), suggest that the sharp increase in studies of deep learning in cancer imaging is driven by the high mortality rates of some cancer types (e.g., lung and breast), in order to solve the consequential problems of more accurate detection and characterization of cancer types and so apply efficient anti-cancer therapies. This study also traces the trajectories of deep learning technology in cancer imaging at the level of scientific subject areas, universities, and countries with the highest scientific production in these research fields. In accordance with Amara's law, this new technology can generate a shift of technological paradigm for the diagnostic assessment of any cancer type and disease. It can also generate socioeconomic benefits for poor regions, which can send digital images to labs in developed regions to obtain cancer diagnoses, reducing as far as possible the current gap in the healthcare sector among regions.

8.
The sewer system plays an important role in collecting rainwater and conveying urban wastewater for treatment. Due to the harsh internal environment and complex structure of sewers, monitoring them is difficult. Researchers are developing methods such as the Internet of Things and Artificial Intelligence to monitor and detect faults in sewer systems. Deep learning is a promising artificial intelligence technology that can effectively identify and classify different sewer defects. However, existing deep learning-based solutions do not provide highly accurate predictions, and the number of defect classes considered for classification is very small, which can affect the robustness of the model in constrained environments. This paper therefore proposes a deep learning-based sewer condition monitoring framework that can detect and evaluate defects in sewer pipelines with high accuracy. We also introduce a large dataset of sewer defects covering 20 different defect classes found in sewer pipelines. This study modifies the original RegNet model by revising the squeeze-and-excitation (SE) block and adding a dropout layer and the Leaky Rectified Linear Unit (LeakyReLU) activation function to the block structure of the RegNet model. Several deep learning methods, including RegNet, ResNet50, very deep convolutional networks (VGG), and GoogleNet, were trained on the sewer defect dataset. The experimental results indicate that the proposed framework based on the modified RegNet (RegNet+) model achieves the highest accuracy, 99.5%, compared with commonly used deep learning models. The proposed model is robust, effectively classifies 20 different sewer defects, and can be utilized in real-world sewer condition monitoring applications.
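A sketch of a squeeze-and-excitation (SE) block carrying the two modifications the abstract names, a LeakyReLU activation and a dropout layer; the exact placement and reduction ratio inside the authors' RegNet+ are assumptions on our part.

```python
import tensorflow as tf

def se_block(x, ratio=8, drop=0.2):
    ch = x.shape[-1]
    s = tf.keras.layers.GlobalAveragePooling2D()(x)          # squeeze
    s = tf.keras.layers.Dense(ch // ratio)(s)
    s = tf.keras.layers.LeakyReLU()(s)                       # modification 1
    s = tf.keras.layers.Dropout(drop)(s)                     # modification 2
    s = tf.keras.layers.Dense(ch, activation="sigmoid")(s)   # excitation
    s = tf.keras.layers.Reshape((1, 1, ch))(s)
    return tf.keras.layers.Multiply()([x, s])                # channel reweighting

inp = tf.keras.Input((224, 224, 32))
out = se_block(inp)
tf.keras.Model(inp, out).summary()
```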

9.
This paper proposes a small-sample deep learning algorithm that introduces classical elastoplasticity theory as an auxiliary driver. It applies to the elastoplastic constitutive relations of arbitrary civil engineering materials and can effectively relieve the data-scarcity bottleneck commonly encountered when large deep learning models are applied in practice. The paper briefly outlines the general framework of classical elastoplasticity and then explains in detail how the elastoplastic equations are introduced into a conventional deep learning model; this process requires no attention to the specific form of the underlying theoretical constitutive model or its traditionally complex numerical implementation, preserving the simplicity, directness, and efficiency of data-driven techniques. To relieve the convergence difficulties caused by the more complex optimization objective, a training strategy adapted to the theory-aided drive, the "overfit-and-correct" method, is proposed, which stabilizes and accelerates convergence. Numerical experiments based on a refined elastoplastic constitutive model of structural steel verify the effectiveness of the theory-aided small-sample learning algorithm: a large deep learning model attains excellent generalization with only a few training samples, improving accuracy by 38.9% over a purely data-driven model. The theory-aided idea is transferable and can subsequently be applied to deep learning surrogate models at the structural level, helping more advanced, larger intelligent algorithms take root in civil engineering computation.
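A schematic of the theory-aided loss described above: a data term on the few available stress-strain samples plus a penalty pushing predictions toward a classical elastoplastic relation evaluated at collocation points. The 1D bilinear hardening model used as the "theory" term, and all material constants, are placeholders for the paper's full framework.

```python
import numpy as np
import tensorflow as tf

E, sy, H = 200e3, 355.0, 2e3          # elastic modulus, yield stress, hardening (MPa)

def theory_stress(eps):               # bilinear elastoplastic stand-in
    el = E * eps
    pl = tf.sign(eps) * (sy + H * (tf.abs(eps) - sy / E))
    return tf.where(tf.abs(el) <= sy, el, pl)

net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-3)

eps_d = np.random.uniform(-0.01, 0.01, (8, 1)).astype("float32")        # tiny data set
sig_d = theory_stress(tf.constant(eps_d)).numpy()
eps_c = np.linspace(-0.01, 0.01, 256).reshape(-1, 1).astype("float32")  # collocation pts

for step in range(2000):
    with tf.GradientTape() as tape:
        loss = (tf.reduce_mean((net(eps_d) - sig_d) ** 2)               # data term
                + 0.1 * tf.reduce_mean((net(eps_c) - theory_stress(eps_c)) ** 2))
    opt.apply_gradients(zip(tape.gradient(loss, net.trainable_variables),
                            net.trainable_variables))
```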

10.
Lightweight deep convolutional neural networks (CNNs) offer a good path to fast and accurate image-guided diagnostic procedures for COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as simplicity and procedural safety, have attracted many radiologists for scanning suspected COVID-19 cases. This paper proposes COVID-LWNet, a new framework of lightweight deep learning classifiers that identifies COVID-19 and pneumonia abnormalities in US images. Compared with traditional deep learning models, lightweight CNNs have shown strong performance in real-time vision applications on mobile devices with limited hardware resources. Four main lightweight deep learning models, namely MobileNets, ShuffleNets, MENet, and MnasNet, are used to identify the health status of lungs from US images. The public POCUS image dataset was used to validate the proposed COVID-LWNet framework. Three classes were investigated in this study: infectious COVID-19, bacterial pneumonia, and healthy lungs. The results show that the proposed MnasNet classifier achieved the best accuracy score and the shortest training time, 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of the proposed COVID-LWNet framework as a new mobile-based radiological tool for the clinical diagnosis of COVID-19 and other lung diseases.
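A sketch of the benchmarking loop behind this kind of study: fine-tune several lightweight backbones on three-class lung-ultrasound frames and time the training. Keras ships MobileNet variants out of the box; ShuffleNet, MENet, and MnasNet would need third-party implementations, so two available stand-ins are used here, and the data is a random placeholder.

```python
import time
import numpy as np
import tensorflow as tf

x = np.random.rand(64, 224, 224, 3).astype("float32")   # placeholder US frames
y = np.random.randint(0, 3, 64)                         # covid / bacterial / healthy

backbones = {
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "MobileNetV3Small": tf.keras.applications.MobileNetV3Small,
}
for name, ctor in backbones.items():
    # weights=None: random init, avoids downloading pretrained weights for the demo.
    base = ctor(include_top=False, weights=None,
                input_shape=(224, 224, 3), pooling="avg")
    model = tf.keras.Sequential([base, tf.keras.layers.Dense(3, activation="softmax")])
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    t0 = time.time()
    hist = model.fit(x, y, epochs=1, verbose=0)
    print(f"{name}: acc={hist.history['accuracy'][-1]:.3f}, "
          f"train_time={time.time() - t0:.1f}s")
```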

11.
Nowadays, the Internet of Things (IoT) has penetrated all facets of human life, while IoT devices remain heavily prone to cyberattacks. It has become important to develop an accurate system that can detect malicious attacks in IoT environments in order to mitigate security risks. The botnet is one of the most dreadful malicious entities and has affected many users over the past decades; it is challenging to recognize because of its excellent propagation and hiding capacities. Various approaches, including Machine Learning (ML) and Deep Learning (DL) techniques, have been employed to identify botnets at early stages. In spite of this, early botnet detection remains challenging due to the low number of features accessible in botnet datasets. The current study devises an IoT- and cloud-assisted Botnet Detection and Classification model utilizing the Rat Swarm Optimizer with Deep Learning (BDC-RSODL). The presented BDC-RSODL model comprises a series of processes: pre-processing, feature subset selection, classification, and parameter tuning. Initially, the network data is pre-processed to make it compatible with further processing. The RSO algorithm is then exploited for effective selection of a feature subset, and a Long Short Term Memory (LSTM) network is utilized for both identification and classification of botnets. Finally, the Sine Cosine Algorithm (SCA) is executed to fine-tune the hyperparameters of the LSTM model. To validate the promising performance of the BDC-RSODL system, a comprehensive comparison analysis was conducted; the obtained results confirm the supremacy of the BDC-RSODL model over recent approaches.
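The core classification stage of such a pipeline, sketched with a fixed feature subset and fixed hyperparameters; in the paper these are chosen by the Rat Swarm Optimizer and the Sine Cosine Algorithm respectively, both omitted here, and the flow data and indices are placeholders.

```python
import numpy as np
import tensorflow as tf

selected = [0, 3, 5, 8, 11]                            # indices RSO might return
flows = np.random.rand(500, 10, 16).astype("float32")  # (flows, timesteps, features)
labels = np.random.randint(0, 2, 500)                  # botnet vs. benign

x = flows[:, :, selected]                              # feature-subset selection
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, len(selected))),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
```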

12.
Data fusion is one of the challenging issues the healthcare sector has faced in recent years, and proper diagnosis and treatment from digital imagery is deemed the right solution. Intracerebral Haemorrhage (ICH), a condition characterized by injury to blood vessels in brain tissue, is one of the major causes of stroke. Images generated by X-rays and Computed Tomography (CT) are widely used for estimating the size and location of hemorrhages, and radiologists segment CT scan images with manual planimetry, a time-consuming process. Deep Learning (DL) is the preferred method for increasing the efficiency of ICH diagnosis. This paper presents a unique multi-modal data fusion-based feature extraction technique with a Deep Learning model, abbreviated FFE-DL, for Intracranial Haemorrhage Detection and Classification, also known as FFEDL-ICH. The proposed FFEDL-ICH model has four stages: preprocessing, image segmentation, feature extraction, and classification. The input image is first preprocessed with the Gaussian Filtering (GF) technique to remove noise. Second, the Density-based Fuzzy C-Means (DFCM) algorithm segments the images. The fusion-based feature extraction model is then implemented with a handcrafted feature (Local Binary Patterns) and deep features (Residual Network-152) to extract useful features. Finally, a Deep Neural Network (DNN) is implemented as the classification technique to differentiate multiple classes of ICH. The researchers used a benchmark intracranial haemorrhage dataset and simulated the FFEDL-ICH model to assess its diagnostic performance. The findings revealed that the proposed FFEDL-ICH model significantly outperforms existing models. For future research, the authors recommend improving the performance of the FFEDL-ICH model with learning-rate scheduling techniques for the DNN.
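A sketch of the fusion-based feature extraction step, assuming scikit-image for the handcrafted branch and Keras for the deep branch: an LBP histogram is concatenated with pooled ResNet-152 features and fed to a small DNN head. Preprocessing (Gaussian filtering) and DFCM segmentation are omitted, and the CT slice is random placeholder data.

```python
import numpy as np
import tensorflow as tf
from skimage.feature import local_binary_pattern

slice_gray = np.random.rand(224, 224)                      # segmented CT slice
slice_rgb = np.repeat(slice_gray[None, :, :, None], 3, axis=-1).astype("float32")

# Handcrafted branch: uniform LBP histogram (10 bins for P=8 uniform patterns).
lbp = local_binary_pattern(slice_gray, P=8, R=1, method="uniform")
hand, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# Deep branch: pooled ResNet-152 activations (untrained weights for the demo).
resnet = tf.keras.applications.ResNet152(include_top=False, weights=None,
                                         input_shape=(224, 224, 3), pooling="avg")
deep = resnet(slice_rgb).numpy()[0]

fused = np.concatenate([hand, deep])                       # 10 + 2048 features
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(fused.size,)),
    tf.keras.layers.Dense(5, activation="softmax"),        # ICH subtypes
])
print(classifier(fused[None, :].astype("float32")).shape)
```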

13.
Forecasting stock market trends is one of the most current topics and a significant research challenge due to the market's dynamic and unstable nature. Stock data is usually non-stationary, and its attributes are weakly correlated with one another, so several traditional Stock Technical Indicators (STIs) may incorrectly predict stock market trends. To study stock market characteristics using STIs and make efficient trading decisions, a robust model is required. This paper builds an Evolutionary Deep Learning Model (EDLM) that identifies stock price trends by using STIs; the proposed Deep Learning (DL) model establishes the concept of a correlation tensor. A Long Short Term Memory (LSTM) network is applied to datasets of the three most popular banking organizations, obtained from the live stock market of the National Stock Exchange (NSE), India, and encompassing trading days from 17 Nov 2008 to 15 Nov 2018. The work also conducts exhaustive experiments on the correlation of various STIs with stock price trends. The EDLM shows significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions, as it presents trend-based forecasting and achieved prediction accuracies of 63.59%, 56.25%, and 57.95% on the HDFC, Yes Bank, and SBI datasets, respectively. The results indicate that the proposed EDLM with a combination of STIs can often provide better results than other state-of-the-art algorithms.
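An illustrative construction of two common STIs (a simple moving average and RSI) and of LSTM input windows from a closing-price series; the correlation-tensor and evolutionary components of the EDLM are not reproduced, and the price series is synthetic.

```python
import numpy as np
import pandas as pd
import tensorflow as tf

close = pd.Series(np.cumsum(np.random.randn(300)) + 100)   # synthetic prices

sma = close.rolling(14).mean()                             # STI 1: moving average
delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)                        # STI 2: RSI

feats = pd.concat([close, sma, rsi], axis=1).dropna().to_numpy()
X = np.stack([feats[i:i + 20] for i in range(len(feats) - 20)])  # 20-day windows
y = (feats[20:, 0] > feats[19:-1, 0]).astype("float32")    # next-day up/down label

model = tf.keras.Sequential([tf.keras.layers.LSTM(32, input_shape=(20, 3)),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(X.astype("float32"), y, epochs=2, verbose=0)
```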

14.
Cyberbullying (CB) is a challenging issue in social media, and it is important to effectively identify its occurrence. Recently developed deep learning (DL) models pave the way for designing CB classifiers with maximum performance, while optimal hyperparameter tuning plays a vital role in enhancing overall results. This study introduces a Teacher Learning Genetic Optimization with Deep Learning Enabled Cyberbullying Classification (TLGODL-CBC) model for social media. The proposed TLGODL-CBC model identifies the existence or non-existence of CB in a social media context. Initially, the input data is cleaned and pre-processed to make it compatible with further processing. An independent recurrent autoencoder (IRAE) model is then utilized for the recognition and classification of CB. Finally, the TLGO algorithm optimally adjusts the parameters of the IRAE model, which constitutes the novelty of the work. To confirm the improved outcomes of the TLGODL-CBC approach, a wide range of simulations was executed and the outcomes investigated under several aspects. The simulation outcomes confirm the improvements of the TLGODL-CBC model over recent approaches.
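A rough sketch of a recurrent autoencoder with a classification head, standing in for the paper's independent recurrent autoencoder (IRAE): Keras has no IndRNN layer, so a plain LSTM is substituted, the TLGO hyperparameter search is omitted, and the embedded-post data is a placeholder.

```python
import numpy as np
import tensorflow as tf

seq_len, emb = 50, 32
x = np.random.rand(256, seq_len, emb).astype("float32")   # embedded posts
y = np.random.randint(0, 2, 256).astype("float32")        # bullying / not

inp = tf.keras.Input((seq_len, emb))
code = tf.keras.layers.LSTM(16)(inp)                      # encoder
dec = tf.keras.layers.RepeatVector(seq_len)(code)
dec = tf.keras.layers.LSTM(emb, return_sequences=True, name="recon")(dec)  # decoder
cls = tf.keras.layers.Dense(1, activation="sigmoid", name="cls")(code)     # classifier

model = tf.keras.Model(inp, [dec, cls])
# Joint objective: reconstruct the sequence and predict the CB label.
model.compile("adam", loss={"recon": "mse", "cls": "binary_crossentropy"})
model.fit(x, {"recon": x, "cls": y}, epochs=2, verbose=0)
```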

15.
Purpose: To combine deep learning with social networks and affective computing, exploring new methods and techniques for studying social-network user sentiment with deep neural networks, and to explore the model's application to user-need analysis and recommendation. Methods: Massive social-network data are automatically filtered and mined; a non-prior sentiment prediction method with long-term memory is studied; the network's massive user data and interpersonal relations are modeled by building LSTM models for correlated time series and merging them, together with their mutual relations, into a unified large deep recurrent network. The work specifically covers attention-based processing of heterogeneous social-network data; long-term memory modeling with deep LSTMs, including subnetwork selection, deep LSTM design, and large network architectures tailored to social networks; and a recommendation algorithm based on the social-network sentiment model and reinforcement learning. Results: The approach improves analysis accuracy, reduces dependence on prior assumptions, lightens the workload and bias of hand-built sentiment models, and strengthens generality across different network data. Conclusion: The results advance the integration of deep learning with affective computing, can push forward research on analyzing and predicting network user behavior, and are applicable to personalized recommendation, targeted advertising, and related fields, with broad academic significance and application prospects.
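A schematic of the architecture sketched above, under our own assumptions about its shape: one LSTM branch per correlated user time series, merged into a single deep recurrent network with a simple self-attention layer over the merged sequence; all sizes and class counts are illustrative.

```python
import tensorflow as tf

n_users, seq_len, feat = 3, 30, 8
inputs, branches = [], []
for i in range(n_users):
    inp = tf.keras.Input((seq_len, feat), name=f"user_{i}")
    branches.append(tf.keras.layers.LSTM(16, return_sequences=True)(inp))
    inputs.append(inp)

merged = tf.keras.layers.Concatenate(axis=-1)(branches)       # fuse related series
attn = tf.keras.layers.Attention()([merged, merged])          # self-attention
h = tf.keras.layers.LSTM(32)(attn)                            # unified deep RNN
out = tf.keras.layers.Dense(3, activation="softmax")(h)       # sentiment classes

model = tf.keras.Model(inputs, out)
model.compile("adam", "sparse_categorical_crossentropy")
model.summary()
```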

16.
Edge recognition of miniature structural parts produced by different machining processes
Aiming at the distinct edge features of miniature structural parts produced by different machining processes, an edge-recognition algorithm based on the idea of process matching is proposed. The algorithm computes an effective mean gradient to extract the edge transition zone of miniature parts made by different processes, builds a polynomial regression model of the transition zone, and differentiates it to locate the edge point precisely. Modeling and analysis of four common micro-machining processes show that the machining process strongly affects the edge region, so process matching should be incorporated into precise edge recognition. The algorithm accounts for the influence of the actual machining process, incorporates statistical methods, and, by building a mathematical model of the transition zone, achieves sub-pixel edge detection.
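A sketch of the sub-pixel localization step: fit a polynomial to the intensity profile across the edge transition zone, differentiate it analytically, and take the extremum of the derivative as the edge position. The synthetic sigmoid profile stands in for a real line scan, and the process-matching stage is omitted.

```python
import numpy as np

xs = np.arange(20.0)                              # pixel positions along the scan
profile = 100 / (1 + np.exp(-(xs - 9.3))) + np.random.normal(0, 0.5, xs.size)

coeffs = np.polyfit(xs, profile, deg=5)           # transition-zone regression model
deriv = np.polyder(coeffs)                        # analytic first derivative

fine = np.linspace(xs[0], xs[-1], 2000)           # sub-pixel sampling grid
edge = fine[np.argmax(np.abs(np.polyval(deriv, fine)))]
print(f"sub-pixel edge position: {edge:.3f} px")  # near the true 9.3 px
```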

17.
In recent years, with the development of machine learning and deep learning, it has become possible to identify and even control crop diseases using electronic devices instead of manual observation. This paper proposes a deep learning-based image recognition method for citrus diseases. We built a citrus image dataset covering six common citrus diseases; a deep learning network is trained on these images and can effectively identify and classify the diseases. In the experiments, we use the MobileNetV2 model as the primary network and compare it with other network models in terms of speed, model size, and accuracy. Results show that our method reduces prediction time and model size while maintaining good classification accuracy. Finally, we discuss the significance of using MobileNetV2 to identify and classify agricultural diseases on mobile terminals and put forward relevant suggestions.
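A minimal MobileNetV2 training setup of the kind the paper evaluates, with six citrus-disease classes; the image size and random placeholder data are illustrative, and random-initialized weights are used here to keep the demo self-contained (pretraining is a common but assumed choice).

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(32, 224, 224, 3).astype("float32")   # placeholder leaf images
y = np.random.randint(0, 6, 32)                         # six disease classes

base = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                         input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(6, activation="softmax")])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, verbose=0)
model.save("citrus_mobilenetv2.h5")   # small artifact suited to mobile deployment
```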

18.
Corona (COVID-19) is a viral disease that has taken the form of a pandemic and has caused havoc worldwide since its first appearance in Wuhan, China, in December 2019. Because its initial symptoms resemble viral fever, the virus is challenging to identify early, and non-detection at an early stage can result in the patient's death. Developing and densely populated countries face a scarcity of resources such as hospitals, ventilators, oxygen, and healthcare workers. Technologies like the Internet of Things (IoT) and artificial intelligence can play a vital role in diagnosing COVID-19 at an early stage. To minimize the spread of the pandemic, IoT-enabled devices can collect patients' data remotely in a secure manner, and the collected data can be analyzed by a deep learning model to detect the presence of the virus. In this work, the authors propose a three-phase model to diagnose COVID-19 by incorporating a chatbot, IoT, and deep learning technology. In phase one, an AI-assisted chatbot guides an individual by asking about common symptoms. If even a single sign is detected, the second phase of diagnosis follows, using a thermal scanner and pulse oximeter. In case of high temperature and low oxygen saturation, the third phase is recommended, in which chest radiography images are analyzed by an AI-based model to diagnose the presence of COVID-19 in the human body. The proposed model reduces human intervention through chatbot-based initial screening, sensor-based IoT devices, and deep learning-based X-ray analysis, and it helps reduce the mortality rate by detecting COVID-19 at an early stage.
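A plain-Python sketch of the three-phase triage logic described above; the symptom list, thresholds (38.0 °C, 94% SpO2), and function names are illustrative assumptions, and the phase-3 X-ray model is stubbed out.

```python
def phase1_chatbot(symptoms: dict) -> bool:
    """Flag the user if any common symptom is reported."""
    return any(symptoms.get(s) for s in ("fever", "cough", "fatigue", "anosmia"))

def phase2_sensors(temp_c: float, spo2: float) -> bool:
    """Thermal scanner + pulse oximeter screening."""
    return temp_c >= 38.0 or spo2 < 94.0

def phase3_xray(image) -> float:
    """Placeholder for the deep-learning chest-radiograph classifier."""
    return 0.87                          # model's predicted COVID probability

user = {"fever": True, "cough": False}
if phase1_chatbot(user) and phase2_sensors(temp_c=38.4, spo2=92.0):
    print("COVID probability from X-ray model:", phase3_xray(image=None))
```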

19.
With the rapid development of the Internet of Things (IoT), electricity consumption data can be captured and recorded in the IoT cloud center. This provides a credible data source for enterprise credit scoring, one of the most vital elements in financial decision-making. Accordingly, this paper proposes to train an enterprise credit scoring model by feeding electricity consumption data to deep learning. Instead of predicting a credit rating, our method generates an absolute credit score via a novel deep ranking model, the ranking extreme gradient boosting net (rankXGB). To boost performance, the rankXGB model combines several weak ranking models into a strong model. Given the high computational cost and vast amounts of data, we design an edge computing framework to reduce the latency of enterprise credit evaluation. Specifically, we design a two-stage deep learning task architecture consisting of cloud-based weak credit ranking and edge-based credit score calculation. In the first stage, the electricity consumption data of the evaluated enterprise is sent to the cloud server, where multiple weak-ranking networks are executed in parallel to produce multiple weak-ranking results. In the second stage, the edge device fuses the ranking results generated in the cloud into a more reliable result, from which an absolute credit score is calculated by score normalization. The experiments demonstrate that our method achieves accurate enterprise credit evaluation quickly.
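The edge-side fusion step sketched in isolation: the cloud stage returns one score per weak ranking network, and the edge averages them and min-max normalizes the fused score onto a 0-1000 credit scale. The weak-ranker outputs are simulated, and the rankXGB training itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_enterprises, n_weak_rankers = 50, 5
weak_scores = rng.random((n_weak_rankers, n_enterprises))  # from the cloud stage

fused = weak_scores.mean(axis=0)                           # stronger ensemble score
lo, hi = fused.min(), fused.max()
credit = 1000 * (fused - lo) / (hi - lo)                   # absolute credit score

target = 7                                                 # evaluated enterprise id
print(f"enterprise {target}: credit score {credit[target]:.0f}")
```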

20.
The Internet of Things (IoT) defines a network of devices connected to the internet that share massive amounts of data with each other and with a central location. Because these IoT devices are networked, they are prone to attacks. Various management tasks and network operations, such as security, intrusion detection, Quality-of-Service provisioning, performance monitoring, resource provisioning, and traffic engineering, require traffic classification. Due to the ineffectiveness of traditional classification schemes, such as port-based and payload-based methods, researchers have proposed machine learning-based traffic classification systems built on shallow neural networks; however, such models tend to misclassify internet traffic due to improper feature selection. This research presents an efficient multilayer deep learning-based classification system that overcomes these challenges and can classify internet traffic. To examine the performance of the proposed technique, the Moore dataset is used for training the classifier. The proposed scheme takes the pre-processed data and extracts flow features using a deep neural network (DNN); a maximum entropy classifier then classifies the internet traffic. The experimental results show that the proposed hybrid deep learning algorithm is effective and achieves high accuracy for internet traffic classification, i.e., 99.23%, the highest compared with a support vector machine (SVM)-based classification technique and a k-nearest neighbours (KNN)-based classification technique.
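A sketch of the hybrid pipeline above: a DNN learns flow features, and a maximum-entropy classifier, implemented here as multinomial logistic regression (its standard equivalent), does the final classification. The feature count, class count, and random data are placeholders for the Moore dataset.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

x = np.random.rand(1000, 248).astype("float32")   # pre-processed flow statistics
y = np.random.randint(0, 10, 1000)                # traffic classes

dnn = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(248,)),
    tf.keras.layers.Dense(64, activation="relu", name="features"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
dnn.compile("adam", "sparse_categorical_crossentropy")
dnn.fit(x, y, epochs=2, verbose=0)

# Extract the 64-d learned features and train the max-ent classifier on them.
extractor = tf.keras.Model(dnn.input, dnn.get_layer("features").output)
feats = extractor.predict(x, verbose=0)
maxent = LogisticRegression(max_iter=1000).fit(feats, y)
print("train accuracy:", maxent.score(feats, y))
```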
