Similar Articles
 20 similar articles retrieved (search time: 15 ms)
1.
Machine learning (ML) algorithms are often used to design effective intrusion detection (ID) systems for appropriate mitigation and effective detection of malicious cyber threats at the host and network levels. However, cybersecurity attacks are still increasing, and an ID system can play a vital role in detecting such threats. Existing ID systems often fail to detect malicious threats, primarily because they adopt approaches based on traditional ML techniques that pay little attention to accurate classification and feature selection. Thus, developing an accurate and intelligent ID system is a priority. The main objective of this study was to develop a hybrid intelligent intrusion detection system (HIIDS) that learns crucial feature representations efficiently and automatically from massive unlabeled raw network traffic data. Many ID datasets are publicly available to the cybersecurity research community. Robust Spark MLlib (machine learning library)-based classifiers, logistic regression (LR) and extreme gradient boosting (XGB), were used for anomaly detection, and a state-of-the-art deep learning model, a long short-term memory autoencoder (LSTMAE), was used for misuse attacks, together forming an efficient HIIDS that detects and classifies unpredictable attacks. Our approach uses the LSTM to capture temporal features and the autoencoder to capture global features more efficiently. To evaluate the efficacy of the proposed approach, experiments were conducted on a publicly available, contemporary real-life dataset, ISCX-UNB. The simulation results demonstrate that the proposed Spark MLlib and LSTMAE-based HIIDS significantly outperformed existing ID approaches, achieving an accuracy of up to 97.52% on the ISCX-UNB dataset under 10-fold cross-validation. These results make the proposed HIIDS promising for large-scale, real-world deployment.
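The hybrid pipeline above hinges on a simple principle: an autoencoder trained only on normal traffic reconstructs attacks poorly. A minimal sketch of that reconstruction-error test, using a linear (PCA-style) autoencoder and synthetic two-feature "traffic" instead of the paper's LSTMAE and ISCX-UNB data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-feature "normal traffic" lying near a 1-D subspace.
normal = rng.normal(0, 1, (200, 1)) @ np.array([[1.0, 0.5]])
normal += rng.normal(0, 0.05, normal.shape)

# Encoder = top principal direction; decoder = its transpose.
_, _, vt = np.linalg.svd(normal - normal.mean(axis=0))
w = vt[:1]                                    # (1, 2) projection

def recon_error(x):
    centered = x - normal.mean(axis=0)
    return np.linalg.norm(centered - centered @ w.T @ w, axis=1)

# Flag anything that reconstructs worse than normal traffic ever did.
threshold = recon_error(normal).max() * 1.5
attack = np.array([[3.0, -4.0]])              # far off the normal subspace
print(recon_error(attack)[0] > threshold)     # anomalous record flagged
```

The same thresholding logic applies when the encoder/decoder pair is a trained LSTMAE rather than a linear projection.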

2.
The growth of cloud computing is drastic, provisioning services to various industries where data security is a common issue that affects the intrusion detection system (IDS). An IDS is an essential component in fulfilling security requirements. Recently, diverse Machine Learning (ML) approaches have been used to model effective IDSs, most of which are categorized as supervised or unsupervised. However, an IDS based on supervised learning depends on labeled data, a common drawback that causes it to fail to identify attack patterns, while unsupervised learning alone also fails to provide satisfactory outcomes. Therefore, this work concentrates on a semi-supervised learning model, a Fuzzy-based semi-supervised approach through Latent Dirichlet Allocation (F-LDA), for intrusion detection in cloud systems, which helps resolve the aforementioned challenges. LDA provides good generalization ability when training on the labeled data, while a Fuzzy model is adopted to analyze the unlabelled data. Pre-processing is carried out to eliminate data redundancy in the network dataset. To validate the efficiency of F-LDA for intrusion detection, the model is tested on the NSL-KDD Cup dataset, a common traffic benchmark. Simulations performed in the MATLAB environment show that the proposed F-LDA achieves better accuracy and more promising outcomes than the prevailing approaches.

3.
With the development of information technology and the popularization of the Internet, people can connect to the Internet whenever and wherever they wish. Meanwhile, the security of network traffic is threatened by various online malicious behaviors. The aim of an intrusion detection system (IDS) is to detect network behaviors that are diverse and malicious. Since a conventional firewall cannot detect most malicious behaviors, such as malicious network traffic or computer abuse, advanced learning methods are introduced and integrated with intrusion detection approaches to improve their performance. However, very few related studies focus on both the effective detection of attacks and the representation of malicious behaviors with a graph. In this paper, a novel intrusion detection approach, IDBFG (Intrusion Detection Based on Feature Graph), is proposed: it first filters normal connections with grid partitions, then records the patterns of various attacks with a novel graph structure, and finally flags behaviors matching the patterns in the graph as intrusions. The experimental results on the KDD-Cup 99 dataset show that IDBFG outperforms SVM (Support Vector Machines) and Decision Tree, trained and tested in the original feature space, in terms of detection rate, false alarm rate, and run time.
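The grid-partition filtering step described above can be sketched in a few lines; the cell size and the two-feature points are hypothetical, and the full IDBFG graph structure for attack patterns is omitted:

```python
# Normal connections fall into grid cells learned from training data;
# connections landing outside those cells are passed on as intrusion
# candidates (in IDBFG, to the attack-pattern graph matcher).
def cell(point, size=1.0):
    return tuple(int(v // size) for v in point)

normal_train = [(0.2, 0.3), (0.4, 0.1), (1.2, 0.8)]   # toy normal traffic
normal_cells = {cell(p) for p in normal_train}

def is_candidate_intrusion(point):
    return cell(point) not in normal_cells

print(is_candidate_intrusion((0.5, 0.5)))   # False: in a "normal" cell
print(is_candidate_intrusion((5.0, 5.0)))   # True: unseen region
```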

4.
The Internet of Things (IoT) defines a network of devices connected to the internet that share a massive amount of data with each other and with a central location. Because these IoT devices are networked, they are prone to attacks. Various management tasks and network operations, such as security, intrusion detection, Quality-of-Service provisioning, performance monitoring, resource provisioning, and traffic engineering, require traffic classification. Due to the ineffectiveness of traditional classification schemes, such as port-based and payload-based methods, researchers have proposed machine learning-based traffic classification systems built on shallow neural networks. However, such models tend to misclassify internet traffic due to improper feature selection. In this research, an efficient multilayer deep learning-based classification system is presented to overcome these challenges and classify internet traffic. To examine the performance of the proposed technique, the Moore dataset is used for training the classifier. The proposed scheme takes the pre-processed data and extracts flow features using a deep neural network (DNN); a maximum entropy classifier then classifies the internet traffic. The experimental results show that the proposed hybrid deep learning algorithm is effective, achieving a high accuracy of 99.23% for internet traffic classification, the highest compared to support vector machine (SVM)- and k-nearest neighbours (KNN)-based classification techniques.
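A maximum entropy classifier, as used in the final stage above, is multinomial logistic regression: a softmax over linear scores. A minimal sketch with hypothetical weights and a toy feature vector standing in for the DNN-extracted flow features:

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy weights for 3 traffic classes over 4 extracted flow features.
W = np.array([[ 2.0, -1.0, 0.0, 0.5],
              [-1.0,  2.0, 0.5, 0.0],
              [ 0.0,  0.0, 1.5, 1.0]])
features = np.array([1.0, 0.2, 0.1, 0.0])

probs = softmax(W @ features)     # posterior probability per class
print(probs.argmax())             # predicted traffic class
```

In practice the weights are fit by maximizing the conditional log-likelihood; only the inference step is shown here.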

5.
Due to the polymorphic nature of malware, signature-based analysis is no longer sufficient against polymorphic and stealthy malware attacks. On the other hand, state-of-the-art methods like deep learning require a labelled dataset to train a supervised model, which is unlikely to be available in a production network where the data is unstructured and unlabeled; hence unsupervised learning is recommended. Behavioral study is one technique for eliciting traffic patterns. However, studies have shown that existing behavioral intrusion detection models share a few issues, parameterized here as common characteristics, namely a lack of prior information (p(θ)) and reduced parameters (θ). Therefore, this study utilizes a previously built Feature Selection Model to design a Predictive Analytics Model based on a Bayesian Network, improving prediction in the analysis. The Feature Selection Model learns significant labels as targets, and the Bayesian Network is a sophisticated probabilistic approach for predicting intrusion. Finally, the detection rate, accuracy, and false alarm rate of the model are evaluated against a subject-matter-expert model, Support Vector Machine (SVM), and k-nearest neighbor (k-NN) using simulated and ground-truth datasets. The ground-truth dataset, drawn from the production traffic of one of the largest healthcare providers in Malaysia, promotes realism in the real use-case scenario. Results show that the proposed model consistently outperformed the other models.

6.
Attacks on websites and network servers are among the most critical threats in network security, and network behavior identification is one of the most effective ways to identify malicious network intrusions. Analyzing abnormal network traffic patterns and classifying traffic based on labeled network traffic data are among the most effective approaches for network behavior identification. Traditional methods for network traffic classification use algorithms such as Naive Bayes, Decision Tree, and XGBoost. However, network traffic classification, which is required for network behavior identification, generally suffers from low accuracy even with recently proposed deep learning models. To improve network traffic classification accuracy, and thus the network intrusion detection rate, this paper proposes a new network traffic classification model, called ArcMargin, which incorporates metric learning into a convolutional neural network (CNN) to make the CNN model more discriminative. ArcMargin maps network traffic samples from the same category close together while mapping samples from different categories as far apart as possible. The metric learning regularization, called additive angular margin loss, is embedded in the objective function of traditional CNN models. The proposed ArcMargin model is validated on three datasets and compared with several related algorithms. According to a set of classification indicators, the ArcMargin model is shown to perform better in both network traffic classification tasks and open-set tasks. Moreover, in open-set tasks, the ArcMargin model can cluster unknown data classes that did not exist in the training dataset.
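The additive angular margin loss mentioned above has a concrete form: the angle between a sample's embedding and its true-class weight vector is enlarged by a margin m before the softmax, which forces tighter within-class clusters. A numpy sketch with toy embeddings and class centres (not the paper's CNN):

```python
import numpy as np

def arc_margin_loss(embedding, weights, label, s=8.0, m=0.5):
    # Normalize so logits are pure cosines of embedding/class angles.
    e = embedding / np.linalg.norm(embedding)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ e
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = s * cos
    logits[label] = s * np.cos(theta[label] + m)   # additive angular margin
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])                        # cross-entropy term

centres = np.array([[1.0, 0.0], [0.0, 1.0]])        # toy class-centre weights
x = np.array([0.9, 0.1])
# The margin makes the same sample "harder", enlarging its loss:
print(arc_margin_loss(x, centres, 0, m=0.5) > arc_margin_loss(x, centres, 0, m=0.0))
```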

7.
An IDS (intrusion detection system) provides a foremost front-line mechanism to guard networks, systems, data, and information. Intrusion detection has therefore grown into an active study area that contributes significantly to cyber-security techniques. Multiple techniques are in use, but a major concern in their implementation is the variation in their detection performance. The performance of an IDS lies in the accurate detection of attacks, and this accuracy can be raised by improving the recognition rate and significantly reducing the false alarm rate. To this end, many researchers have used different machine learning techniques, but these have limitations and do not perform efficiently on huge and complex data about systems and networks. This work focuses on the ELM (Extreme Learning Machine) technique due to its good capabilities in classification problems and in dealing with huge data. The ELM supports different activation functions, but the problem is finding out which function is most suitable and performs well in an IDS; this work investigates that problem. Well-known activation functions, sine, sigmoid, and radial basis, are explored, investigated, and applied to measure their performance on a GA (Genetic Algorithm) feature subset and on the full feature set. The NSL-KDD dataset is used as a benchmark. The empirical results are analyzed and compared among the different activation functions of the ELM. The results show that the radial basis and sine functions perform better on the GA feature set than on the full feature set, while the performance of the sigmoid function is almost equal on both. The proposed GA-based feature selection removed 21 of the 41 features, bringing accuracy up to 98% and enhancing the overall efficiency of the extreme learning machine in intrusion detection.
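The core of an ELM, as studied above, is a single hidden layer with random, fixed input weights and output weights solved in closed form via the Moore-Penrose pseudo-inverse, which makes swapping activation functions trivial. A minimal sketch on a toy problem (the data and layer sizes are hypothetical, not NSL-KDD):

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, T, n_hidden, act):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = act(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return lambda Xn: act(Xn @ W + b) @ beta

X = rng.normal(size=(100, 3))
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels

accs = {}
for name, act in [("sine", np.sin),
                  ("sigmoid", lambda z: 1 / (1 + np.exp(-z)))]:
    model = elm_fit(X, T, n_hidden=40, act=act)
    accs[name] = float(((model(X) > 0.5) == T).mean())
print(accs)   # training accuracy per activation on the toy problem
```

Because only `beta` is learned, training is a single linear solve, which is why ELMs scale well to large data.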

8.
The novel Software Defined Networking (SDN) architecture potentially resolves specific challenges arising from rapid internet growth and the static nature of conventional networks, managing organizational business requirements with distinctive features. Nevertheless, such benefits invite a more adverse environment entailing network breakdown, system paralysis, and online banking fraud and theft. As one of the most common and dangerous threats in SDN, a probe attack occurs when the attacker scans SDN devices to collect knowledge of system susceptibilities, which is then exploited to undermine the entire system. Precision, high performance, and real-time operation prove pivotal, and feature selection helps attain them by minimizing computation time, optimizing prediction performance, and providing a holistic understanding of the machine learning data. As the extension of intelligent machine learning algorithms into Intrusion Detection Systems (IDS) for SDN has garnered much scholarly attention over the past decade, this study proposes an effective IDS combining the Grey-wolf optimizer (GWO) and the Light Gradient Boosting Machine (LightGBM) classifier for probe attack identification. The InSDN dataset, a novel benchmarking dataset for SDN, was employed to train and test the proposed IDS. Assessment demonstrated optimized performance against peer IDSs in probe attack detection within SDN: the proposed IDS outperforms the state-of-the-art IDSs, achieving 99.8% accuracy, 99.7% recall, 99.99% precision, and a 99.8% F-measure.
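The grey-wolf optimizer used above updates each candidate solution toward the three best wolves (alpha, beta, delta) with an exploration coefficient that shrinks over time. A minimal sketch minimizing a toy sphere function rather than wrapping a LightGBM-based fitness as the paper does:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sum(x**2, axis=-1)           # toy fitness to minimize

wolves = rng.uniform(-5, 5, (20, 2))          # 20 wolves in 2-D space
for t in range(50):
    order = np.argsort(f(wolves))
    leaders = wolves[order[:3]]               # alpha, beta, delta
    a = 2 - 2 * t / 50                        # exploration -> exploitation
    new = np.zeros_like(wolves)
    for lead in leaders:
        r1 = rng.random(wolves.shape)
        r2 = rng.random(wolves.shape)
        A = 2 * a * r1 - a                    # encircling coefficients
        C = 2 * r2
        new += lead - A * np.abs(C * lead - wolves)
    wolves = new / 3                          # average of the three pulls

best = wolves[np.argmin(f(wolves))]
print(float(f(best)))                         # near the optimum at the origin
```

For feature selection, the same update is typically applied to continuous positions that are thresholded into feature-inclusion bitmasks before evaluating the classifier.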

9.
Applications of the internet-of-things (IoT) are increasingly being used in many facets of daily life, producing an enormous volume of data. Cloud computing and fog computing, two of the most common technologies used in IoT applications, have led to major security concerns, and cyberattacks are on the rise because present security measures are insufficient. Several artificial intelligence (AI)-based security solutions, such as intrusion detection systems (IDS), have been proposed in recent years. Intelligent technologies that require data preprocessing and augmentation of machine learning algorithm performance rely on feature selection (FS) techniques to increase classification accuracy by minimizing the number of features selected, and metaheuristic optimization algorithms have been widely used for feature selection in recent decades. In this paper, we propose a hybrid optimization algorithm for feature selection in IDS. The proposed algorithm is based on the grey wolf (GW) and dipper throated optimization (DTO) algorithms and is referred to as GWDTO. It strikes a better balance between the exploration and exploitation steps of the optimization process and thus achieves better performance. On the employed IoT-IDS dataset, the performance of the proposed GWDTO algorithm was assessed using a set of evaluation metrics and compared to other optimization approaches in the literature to validate its superiority. In addition, a statistical analysis was performed to assess the stability and effectiveness of the proposed approach. Experimental results confirmed its superiority in boosting the classification accuracy of intrusion detection in IoT-based networks.

10.
The extensive proliferation of modern information services and the ubiquitous digitization of society have raised cybersecurity challenges to new levels. With the massive number of connected devices, opportunities for potential network attacks are nearly unlimited. An additional problem is that many low-cost devices lack effective security protection, so they are easily hacked and conscripted into a network of bots (botnet) to perform distributed denial of service (DDoS) attacks. In this paper, we propose a novel intrusion detection system (IDS) based on deep learning that aims to identify suspicious behavior in modern heterogeneous information systems. The proposed approach is based on a deep recurrent autoencoder that learns time series of normal network behavior and detects notable network anomalies. An additional feature of the proposed IDS is that it is trained on an optimized dataset in which the number of features is reduced by 94% without loss of classification accuracy. Thus, the proposed IDS remains stable in response to slight system perturbations that do not represent network anomalies. The proposed approach is evaluated under different simulation scenarios and provides 99% detection accuracy over known datasets while reducing the training time by an order of magnitude.

11.
In recent years, technologies have developed progressively while production costs have continuously decreased. In this scenario, Internet of Things (IoT) networks comprised of sets of Unmanned Aerial Vehicles (UAVs) have received growing attention, from civilian to military applications. But network security poses a serious challenge to UAV networks, and the intrusion detection system (IDS) has proven an effective process for securing them. Classical IDSs are not adequate for the latest computer networks with their maximum bandwidth and data traffic. To improve detection performance and reduce the false alarms generated by an IDS, several researchers have employed Machine Learning (ML) and Deep Learning (DL) algorithms to address the intrusion detection problem. In this view, the current research article presents a deep reinforcement learning technique optimized by the Black Widow Optimization (DRL-BWO) algorithm for UAV networks. The DRL involves an improved reinforcement learning-based Deep Belief Network (DBN) for intrusion detection, and the BWO algorithm is applied for parameter optimization of the DRL technique, improving the intrusion detection performance of UAV networks. An extensive set of experimental analyses was performed to highlight the supremacy of the proposed model. The simulation values show that the proposed method is appropriate, attaining high precision, recall, F-measure, and accuracy values of 0.985, 0.993, 0.988, and 0.989 respectively.

12.
With the advancement of network communication technology, network traffic shows explosive growth, and consequently network attacks occur frequently. Network intrusion detection systems are still the primary means of detecting attacks, but two challenges continue to stymie the development of a viable system: imbalanced training data and new, undiscovered attacks. Therefore, this study proposes a unique deep learning-based intrusion detection method. We use two independent in-memory autoencoders, trained on regular network traffic and on attacks respectively, to capture the dynamic relationships between traffic features in the presence of unbalanced training data. The original data is then fed into a triplet network, forming a triplet with the data reconstructed by the two autoencoders, for training. Finally, the distance relationships within the triplets determine whether the traffic is an attack. In addition, to improve the accuracy of detecting unknown attacks, this research proposes an improved triplet loss function that pulls distances within the same class closer while pushing distances between different classes farther apart in the learned feature space. The proposed approach's effectiveness, stability, and significance are evaluated against advanced models on the Android Adware and General Malware Dataset (AAGM17), Knowledge Discovery and Data Mining Cup 1999 (KDDCUP99), the Canadian Institute for Cybersecurity's Intrusion Detection Evaluation Dataset (CICIDS2017), UNSW-NB15, and Network Security Lab-Knowledge Discovery and Data Mining (NSL-KDD) datasets. The achieved results confirm the superiority of the proposed method for network intrusion detection.
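The triplet loss underlying the improved loss function above has a simple core: the anchor-positive distance must undercut the anchor-negative distance by at least a margin, otherwise a penalty is incurred. A sketch with toy 2-D embeddings (the paper's improved variant and the autoencoder-reconstructed triplets are not reproduced):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the anchor toward the positive, push it from the negative.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])        # anchor
p = np.array([0.1, 0.0])        # same class: close
n = np.array([3.0, 0.0])        # different class: far

print(triplet_loss(a, p, n))                        # 0.0: already satisfied
print(triplet_loss(a, p, np.array([0.5, 0.0])))     # positive: push needed
```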

13.
Software defect prediction plays a very important role in software quality assurance, aiming to inspect as many potentially defect-prone software modules as possible. However, the performance of the prediction model is susceptible to the high dimensionality of datasets that contain irrelevant and redundant features. In addition, the software metrics used for defect prediction are almost entirely traditional features, as opposed to the deep semantic feature representations obtained from deep learning techniques. To address these two issues, we propose two solutions in this paper: (1) We leverage a novel non-linear manifold learning method, SOINN Landmark Isomap (SL-Isomap), to extract representative features by automatically selecting a reasonable number and position of landmarks, revealing the complex intrinsic structure hidden behind the defect data. (2) We propose a novel defect prediction model named DLDD based on hybrid deep learning techniques, which leverages a denoising autoencoder to learn true input features uncontaminated by noise and a deep neural network to learn abstract deep semantic features. We combine the squared error loss function of the denoising autoencoder with the cross-entropy loss function of the deep neural network, adjusting a hyperparameter to achieve the best prediction performance. We compare SL-Isomap with seven state-of-the-art feature extraction methods and the DLDD model with six baseline models across 20 open source software projects. The experimental results verify the superiority of SL-Isomap and DLDD on four evaluation indicators.

14.
Malicious traffic detection over the internet is one of the challenging areas for researchers seeking to protect network infrastructures from malicious activity. Several shortcomings of a network system can be leveraged by an attacker to gain unauthorized access through malicious traffic. Safeguarding against such attacks requires an efficient automatic system that can detect malicious traffic in time and avoid system damage. Many automated systems can currently detect malicious activity; however, their efficacy and accuracy need further improvement to detect malicious traffic in multi-domain systems. The present study focuses on detecting malicious traffic with high accuracy using machine learning techniques. The proposed approach uses two datasets, UNSW-NB15 and IoTID20, which contain data for local network traffic and IoT-based traffic, respectively. Both datasets were combined to increase the approach's ability to detect malicious traffic from local and IoT networks with high accuracy. Horizontally merging both datasets requires an equal number of features, which was achieved by reducing the feature count of each dataset to 30 using principal component analysis (PCA). The proposed model incorporates the stacked ensemble model extra boosting forest (EBF), a combination of tree-based models, namely the extra tree classifier, gradient boosting classifier, and random forest, in a stacked ensemble. Empirical results show that EBF performed significantly better, achieving the highest accuracy scores of 0.985 and 0.984 on the multi-domain dataset for two and four classes, respectively.
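The dimensionality-matching step above (reducing both datasets to the same number of PCA components before horizontal merging) can be sketched with an SVD-based PCA; the array sizes are toy stand-ins for the two datasets, and 2 components are used instead of the paper's 30:

```python
import numpy as np

def pca_reduce(X, k):
    # Center, then project onto the top-k principal axes from the SVD.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(3)
ds_a = rng.normal(size=(50, 7))      # stand-in for one dataset's features
ds_b = rng.normal(size=(40, 12))     # stand-in for the other's features

# Equal feature counts make the two sources stackable into one table.
merged = np.vstack([pca_reduce(ds_a, 2), pca_reduce(ds_b, 2)])
print(merged.shape)                  # (90, 2): one feature space, both sources
```

Note that PCA components from two different datasets are not semantically aligned axes; the merge only guarantees a common dimensionality, which is what the stacked ensemble consumes.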

15.
Traffic sign recognition (TSR), a critical task for automated driving and driver assistance systems, is challenging due to color fading, motion blur, and occlusion. Traditional methods based on convolutional neural networks (CNN) use only an end-layer feature as the input to TSR, which requires massive data for network training, and the computation-intensive training process results in inaccurate or delayed classification. Current state-of-the-art methods therefore have limited applications. This paper proposes a new TSR method integrating multi-layer features and a kernel extreme learning machine (ELM) classifier. The proposed method applies a CNN to extract multi-layer features of traffic signs, which present sufficient details and semantically abstract information across the multi-layer feature maps. The extraction of multi-scale features of traffic signs is made robust to object scale variation by applying a new multi-scale pooling operation. Further, the extracted features are combined into a multi-scale multi-attribute vector, which enhances the feature presentation ability for TSR. To handle nonlinear sampling problems in TSR, the kernel ELM classifier is adopted for efficient recognition. The kernel ELM has a powerful function approximation capability and can achieve an optimal and generalized solution for multiclass TSR. Experimental results demonstrate that the proposed method improves recognition accuracy, efficiency, and adaptivity to complex travel environments.

16.
A Network Intrusion Detection System (IDS) aims to maintain computer network security by detecting several forms of attacks and unauthorized uses of applications which often cannot be detected by firewalls. The feature selection approach plays an important role in constructing an effective network IDS. Various bio-inspired metaheuristic algorithms are used to reduce features, classifying network traffic as abnormal or normal within a shorter duration and with more accuracy. Therefore, this paper proposes a hybrid model for a network IDS based on hybridized bio-inspired metaheuristic algorithms to detect the generic attack. The proposed model has two objectives. The first is to reduce the number of selected features for the network IDS, met through hybridizing bio-inspired metaheuristic algorithms with each other in a hybrid model. The algorithms used in this paper are particle swarm optimization (PSO), multi-verse optimizer (MVO), grey wolf optimizer (GWO), moth-flame optimization (MFO), the whale optimization algorithm (WOA), the firefly algorithm (FFA), and the bat algorithm (BAT). The second objective is to detect the generic attack using machine learning classifiers, met by employing the support vector machine (SVM), C4.5 (J48) decision tree, and random forest (RF) classifiers. The UNSW-NB15 dataset, which covers nine attack types of which the generic attack is the most frequent, was used to assess the effectiveness of the proposed hybrid model; the model therefore aims to identify generic attacks. The results showed that J48 is the best classifier compared with SVM and RF in terms of the time needed to build the model. In terms of feature reduction for classification, the MFO-WOA and FFA-GWO models reduce the features to 15 with accuracy, sensitivity, and F-measure close to those of all features, whereas the MVO-BAT model reduces the features to 24 with the same accuracy, sensitivity, and F-measure as all features, for all classifiers.

17.
Recently, machine learning algorithms have been used to detect and classify network attacks, with performance evaluated on benchmark network intrusion datasets such as DARPA98, KDD'99, NSL-KDD, UNSW-NB15, and Caida DDoS. However, these datasets present two major challenges: imbalanced data and high dimensionality. High overall accuracy on an imbalanced dataset requires high accuracy for every attack type in the dataset; on the other hand, a large number of features increases the runtime load on the algorithms. A novel model is proposed in this paper to overcome these two concerns. The number of features in the model, which has been tested on CICIDS2017, is first optimized using genetic algorithms. This optimum feature set is used to classify network attacks with six well-known classifiers, chosen for high F1-score and g-mean values in minimum time. Afterwards, a multi-layer perceptron-based ensemble learning approach is applied to improve the model's overall performance. The experimental results show that the suggested model is suitable for feature selection as well as for classifying network attacks in an imbalanced dataset, with a high F1-score (0.91) and g-mean (0.99). Furthermore, it outperformed the base classifier models and voting procedures.

18.
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, resulting in higher demand for data storage, computational capacity, and real-time processing capabilities. Cloud computing has traditionally played an important role in establishing IoT. However, fog computing has recently emerged as a field complementing cloud computing due to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. IoT networks are nonetheless vulnerable to unwanted assaults because of their open and shared nature. As a result, various fog computing-based security models that protect IoT networks have been developed. A distributed architecture based on an intrusion detection system (IDS) provides a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes and successfully detect advanced malicious threats. In this study, we examined the time-related aspects of network traffic data and present an intrusion detection model based on a two-layered bidirectional long short-term memory (Bi-LSTM) with an attention mechanism for traffic data classification, verified on the UNSW-NB15 benchmark dataset. We show that the suggested model outperformed numerous leading-edge network IDSs based on machine learning models in terms of accuracy, precision, recall, and F1 score.
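The attention mechanism above can be reduced to its essential step: score each timestep's hidden state, softmax-normalize the scores, and take the weighted average as a fixed-size summary for the classifier. A sketch with random stand-ins for the Bi-LSTM hidden states and the learned scoring vector:

```python
import numpy as np

def attention_pool(H, v):
    scores = H @ v                     # one relevance score per timestep
    w = np.exp(scores - scores.max())  # stable softmax
    w /= w.sum()                       # attention weights sum to 1
    return w @ H                       # weighted context vector

rng = np.random.default_rng(4)
H = rng.normal(size=(10, 6))           # 10 timesteps, 6-dim hidden states
v = rng.normal(size=6)                 # learned scoring vector (hypothetical)

context = attention_pool(H, v)
print(context.shape)                   # (6,): fixed-size summary of the flow
```

In the full model, `H` would be the concatenated forward/backward Bi-LSTM outputs and `v` (or a small scoring MLP) would be trained jointly with the classifier.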

19.
Autism Spectrum Disorder (ASD) refers to a neuro-disorder in which an individual experiences long-lasting effects on communication and interaction with others. Advanced information technology employing artificial intelligence (AI) models has assisted in the early identification of ASD through pattern detection, and recent advances in AI models assist in the automated identification and classification of ASD, which helps reduce the severity of the disease. This study introduces an automated ASD classification using an owl search algorithm with machine learning (ASDC-OSAML) model. The proposed ASDC-OSAML model focuses on the identification and classification of ASD. To attain this, the model follows a min-max normalization approach as a pre-processing stage. Next, an owl search algorithm (OSA)-based feature selection (OSA-FS) model is used to derive feature subsets. Then, the beetle swarm antenna search (BSAS) algorithm with the Iterative Dichotomiser 3 (ID3) classification method is applied for ASD detection and classification, the BSAS algorithm serving to determine the parameter values of the ID3 classifier. The performance analysis of the ASDC-OSAML model is performed using a benchmark dataset, and an extensive comparison study highlights its supremacy over recent state-of-the-art approaches.

20.
Recently, the Erebus attack has proved to be a security threat to the blockchain network layer, and existing research faces challenges in detecting it there; cloud-based active defense and one-sided detection strategies hinder the detection of Erebus attacks. This study designs a detection approach that establishes a ReliefF_WMRmR-based two-stage feature selection algorithm and a deep learning-based multimodal classification detection model for Erebus attacks, responding to security threats at the blockchain network layer. The goal is to improve the performance of Erebus attack detection methods by combining traffic behavior with routing status, based on multimodal deep feature learning. Traffic behavior and routing status were first defined and used to describe the attack characteristics at the diverse stages of leak monitoring, hidden traffic overlay, and transaction identity forgery, clarifying how an Erebus attack affects routing transfer and traffic state at the blockchain network layer; the detection targets consequently become more relevant and sensitive. A two-stage feature selection algorithm based on ReliefF and weighted maximum relevance minimum redundancy (ReliefF_WMRmR) was designed to alleviate the overfitting of the training model caused by redundant information and noise in the multi-source features of the routing status and traffic behavior. The ReliefF algorithm was introduced to select strongly correlated and highly informative features from the labeled data; according to WMRmR, a feature selection framework was defined to eliminate weakly correlated features, remove redundant information, and reduce the detection overhead of the model. A multimodal deep learning model was constructed based on the multilayer perceptron (MLP) to settle the high false alarm rates incurred by multi-source data. Using this model, isolated inputs and deep learning were conducted on the selected routing status and traffic behavior. Redundant intermodal information was removed thanks to the complementarity of the multimodal network, followed by feature fusion and output feature representation to boost classification detection precision. The experimental results demonstrate that the proposed method can detect features, such as traffic data at key link nodes and route messages, in a real blockchain network environment, and that the model detects Erebus attacks effectively, improving on existing Erebus attack detection with a 1.05% increase in accuracy, 2.01% in recall rate, and 2.43% in F1-score.
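The ReliefF stage above can be illustrated by its simplest special case, the binary single-neighbour Relief update: a feature's weight grows when it separates a sample from its nearest miss (other class) more than from its nearest hit (same class). The four-point dataset here is a toy stand-in, not the paper's routing/traffic features:

```python
import numpy as np

def relief_weights(X, y):
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                          # exclude the sample itself
        hit_d = np.where(y == y[i], dist, np.inf)
        miss_d = np.where(y != y[i], dist, np.inf)
        hit, miss = X[np.argmin(hit_d)], X[np.argmin(miss_d)]
        # Reward separation from the miss, penalize spread to the hit.
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n

# Feature 0 separates the classes; feature 1 is noise.
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
w = relief_weights(X, y)
print(w[0] > w[1])   # the informative feature earns the larger weight
```

ReliefF proper averages over k nearest hits and misses per class and handles multi-class problems; thresholding the resulting weights gives the "strongly correlated" candidate set passed on to the WMRmR stage.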


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号