Similar Literature (20 results)
1.
The rapid growth in data generation and the increasing use of networked devices have greatly expanded the Internet's infrastructure. The interconnectivity of networks has brought various complexities to maintaining network availability, consistency, and confidentiality. Machine learning based intrusion detection systems have become essential for monitoring network traffic for malicious and illicit activities. An intrusion detection system monitors and controls the flow of network traffic with the help of computer systems. Various deep learning algorithms in intrusion detection systems have played a prominent role in identifying and analyzing intrusions in network traffic. When network traffic contains known or unknown intrusions, a machine-learning framework is therefore needed to identify and/or verify the intrusion. An intrusion detection scheme empowered with a fused machine learning technique (IDS-FMLT) is proposed to detect intrusions in a heterogeneous network consisting of different source networks and to protect the network from malicious attacks. The proposed IDS-FMLT system model obtained 95.18% validation accuracy and a 4.82% miss rate in intrusion detection.
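The abstract does not specify how the fused machine learning technique combines its base models, so the following is only a minimal sketch assuming a soft-voting fusion of common classifiers on flow features; the synthetic data and the choice of base learners are illustrative assumptions, not the authors' actual IDS-FMLT pipeline.

```python
# Hypothetical sketch of a "fused" (soft-voting) intrusion classifier.
# The base learners and data are placeholders, not the paper's IDS-FMLT.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for labelled network-flow features (0 = benign, 1 = intrusion).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

fused = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")                      # fuse class probabilities, not hard labels
fused.fit(X_tr, y_tr)

acc = accuracy_score(y_te, fused.predict(X_te))
print(f"validation accuracy: {acc:.2%}, miss rate: {1 - acc:.2%}")
```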

2.
In recent years, cybersecurity has attracted significant interest due to the rapid growth of the Internet of Things (IoT) and the widespread development of computer infrastructure and systems. It is thus becoming particularly necessary to identify cyber-attacks or irregularities in the system and to develop an efficient intrusion detection framework as an integral part of security. Researchers have worked on developing intrusion detection models that depend on machine learning (ML) methods to address these security problems. A data-driven intelligent intrusion detection system can exploit artificial intelligence (AI), and especially ML, techniques. Accordingly, we propose in this article an intrusion detection model based on a Real-Time Sequential Deep Extreme Learning Machine Cybersecurity Intrusion Detection System (RTS-DELM-CSIDS) security model. The proposed model first rates the security features according to their significance and then develops a comprehensive intrusion detection framework focused on the essential characteristics. Furthermore, we investigated the feasibility of our proposed RTS-DELM-CSIDS framework by performing dataset evaluations and calculating accuracy parameters to validate it. The experimental findings demonstrate that the RTS-DELM-CSIDS framework outperforms conventional algorithms. Finally, the proposed approach has not only research significance but also practical significance.

3.
The characterization of transportation hazards is paramount for protective packaging validation. It is used to estimate and simulate the loads and stresses occurring during transport, which are essential to optimize packaging and ensure that products will resist the transportation environment with the minimum amount of protective material. Characterizing road transportation vibrations is rather complex because of the nature of the dynamic motion produced by vehicles. For instance, different levels of vibration are induced in freight depending on the vehicle speed and the road surface, which often results in non-stationary random vibration. Road aberrations (such as cracks, potholes and speed bumps) also produce transient vibrations (shocks) that can damage products. Because shocks and random vibrations cannot be analysed with the same statistical tools, the shocks have to be separated from the underlying vibrations, and both of these dynamic loads have to be characterized separately because they have different damaging effects. This task is challenging because both types of vibration are recorded on a vehicle within the same vibration signal. This paper therefore proposes using machine learning to identify shocks present in acceleration signals measured on road vehicles. A machine learning algorithm is trained to identify shocks buried within road vehicle vibration signals, which are artificially generated using non-stationary random vibration and shock impulses that reproduce typical vehicle dynamic behaviour. The results show that the machine learning algorithm is considerably more accurate and reliable in identifying shocks than the more common approaches based on the crest factor. Copyright © 2016 John Wiley & Sons, Ltd.
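As a point of reference for the crest-factor baseline mentioned above, the sketch below flags windows of a synthetic vibration record whose crest factor (peak over RMS) exceeds a threshold; the signal, window length, and threshold are illustrative assumptions, not the paper's data or its trained model.

```python
# Hypothetical crest-factor shock detector on a synthetic vibration record.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
signal = rng.normal(0, 0.5, t.size)         # stand-in for random road vibration
signal[3000:3020] += 8.0 * np.exp(-np.arange(20) / 5.0)   # injected shock pulse

win = 256                                   # analysis window length, assumed
threshold = 5.0                             # crest-factor threshold, assumed
shocks = []
for start in range(0, signal.size - win, win):
    frame = signal[start:start + win]
    rms = np.sqrt(np.mean(frame ** 2))
    crest = np.max(np.abs(frame)) / rms if rms > 0 else 0.0
    if crest > threshold:
        shocks.append((start / fs, round(crest, 2)))   # time (s) of flagged window

print("windows flagged as containing shocks:", shocks)
```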

4.
Networks perform a significant function in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) is becoming an essential information protection strategy that monitors the state of the software and hardware operating on the network. Notwithstanding continued advances, current intrusion detection systems still experience difficulties in improving detection precision, reducing false alarm rates and identifying suspicious activities. To address these issues, several researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches. Machine learning models can accurately and efficiently identify the underlying differences between regular and irregular information. Artificial intelligence, particularly machine learning methods, can thus be used to develop an intelligent intrusion detection framework. To achieve this objective, we propose in this article an intrusion detection system based on a deep extreme learning machine (DELM), which first assesses the security features according to their prominence and then constructs an adaptive intrusion detection system focused on the important features. We then investigated the viability of our suggested DELM-based intrusion detection system by conducting dataset assessments and evaluating the performance factors to validate the system's reliability. The experimental results illustrate that the suggested framework outperforms traditional algorithms. In fact, the suggested framework is not only of interest to scientific research but also of functional importance.

5.
The number of botnet malware attacks on Internet devices has grown at the same rate as the number of devices connected to the Internet. Bot detection using machine learning (ML) with flow-based features has been extensively studied in the literature. Existing flow-based detection methods involve significant computational overhead and do not completely capture the network communication patterns that might reveal other features of malicious hosts. Recently, graph-based bot detection methods using ML have gained attention as a way to overcome these limitations, as graphs provide a real representation of network communications. The purpose of this study is to build a botnet malware detection system utilizing centrality measures for graph-based botnet detection and ML. We propose BotSward, a graph-based bot detection system that is based on ML. We apply efficient centrality measures, namely Closeness Centrality (CC), Degree Centrality (DC), and PageRank (PR), and compare them with others used in the state-of-the-art. The efficiency of the proposed method is verified on the publicly available Czech Technical University 13 dataset (CTU-13). The CTU-13 dataset contains 13 real botnet traffic scenarios that are connected to a command-and-control (C&C) channel and that cause malicious actions such as phishing, distributed denial-of-service (DDoS) attacks, spam attacks, etc. BotSward is robust to zero-day attacks, suitable for large-scale datasets, and produces better accuracy than state-of-the-art techniques. The proposed BotSward solution achieved 99% accuracy in botnet attack detection with a false positive rate as low as 0.0001%.
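The three centrality measures named above are standard graph metrics; a minimal sketch of computing them on a toy communication graph with networkx is shown below. The edge list is illustrative and stands in for parsed flow records, not the CTU-13 features used by BotSward.

```python
# Hypothetical sketch: per-host centrality features on a communication graph.
import networkx as nx

# Toy host-to-host communication edges (src, dst); stand-in for parsed NetFlow.
edges = [("10.0.0.1", "10.0.0.2"), ("10.0.0.1", "10.0.0.3"),
         ("10.0.0.2", "10.0.0.3"), ("10.0.0.4", "10.0.0.1"),
         ("10.0.0.5", "10.0.0.1"), ("10.0.0.5", "10.0.0.4")]
G = nx.DiGraph(edges)

features = {
    "closeness": nx.closeness_centrality(G),
    "degree": nx.degree_centrality(G),
    "pagerank": nx.pagerank(G, alpha=0.85),
}
for host in G.nodes:
    row = [features[m][host] for m in ("closeness", "degree", "pagerank")]
    print(host, [round(v, 3) for v in row])   # feature vector fed to an ML classifier
```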

6.
The internet, and particularly online social networking platforms, has revolutionized the way extremist groups influence and radicalize individuals. Recent research reveals that the process begins by exposing vast audiences to extremist content and then migrating potential victims to confined platforms for intensive radicalization. Consequently, social networks have evolved into a persuasive tool for extremism, serving as a recruitment platform and a vehicle for psychological warfare. Recognizing potentially radical text or material is thus vital to restrict the circulation of extremist narratives. The aim of this research work is to identify radical text in social media. Our contributions are as follows: (i) a new dataset to be employed in radicalization detection; (ii) an in-depth analysis of the new and previous datasets so that variations in extremist group narratives can be identified; (iii) an approach to training a classifier that employs religious features along with radical features to detect radicalization (see the sketch below); (iv) an analysis of the use of violent and bad words in radical, neutral and random groups using violence, terrorism and bad-word dictionaries. Our results clearly indicate that incorporating religious text in model training improves the accuracy, precision, recall, and F1-score of the classifiers. Second, a variation in extremist narrative was observed, implying that the use of a new dataset can have a substantial effect on classifier performance. In addition, violent and bad words differentiate radical users from random users, but for the neutral (anti-ISIS) group this requires further investigation.
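To make contribution (iii) concrete, the sketch below appends simple dictionary-hit counts (religious and violent word lists) to TF-IDF text features before training a classifier. The word lists, example messages, and classifier choice are illustrative assumptions, not the authors' dataset or lexicons.

```python
# Hypothetical sketch: TF-IDF features augmented with dictionary-hit counts.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["join the fight and strike the enemy",        # toy "radical" example
         "prayers and blessings for the community",     # toy "neutral" example
         "watching football with friends tonight"]      # toy "random" example
labels = [1, 0, 0]

religious_words = {"prayers", "blessings", "faith"}     # placeholder lexicons
violent_words = {"fight", "strike", "attack"}

def dict_counts(text):
    tokens = text.lower().split()
    return [sum(t in religious_words for t in tokens),
            sum(t in violent_words for t in tokens)]

vec = TfidfVectorizer()
X_tfidf = vec.fit_transform(texts)
X_dict = csr_matrix(np.array([dict_counts(t) for t in texts], dtype=float))
X = hstack([X_tfidf, X_dict])                           # combined feature matrix

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```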

7.
To address the new challenges that the redundancy, incompleteness, and unmanageability of massive alert volumes pose to network security management in large-scale network environments, a causality-based real-time intrusion alert correlation (RIAC) system is proposed to solve the problems of real-time correlation and visual management of massive alerts. The RIAC system uses distributed agents to capture and pre-process alert information in real time; the alerts are then analyzed and processed by a causal correlation engine to reveal the attack scenarios and attack intentions hidden behind them. The RIAC system was tested with the LLDOS1.0 attack-scenario dataset provided by the MIT Lincoln Lab and a real IPv6 dataset, and the experimental results verify its effectiveness and real-time performance.

8.
Advances in machine learning (ML) methods are important in industrial engineering and have attracted great attention in recent years. However, a comprehensive comparative study of the most advanced ML algorithms is lacking. Six integrated ML approaches for predicting the crack-repairing capacity of bacteria-based self-healing concrete are therefore proposed and compared. Six ML algorithms, including Support Vector Regression (SVR), Decision Tree Regression (DTR), Gradient Boosting Regression (GBR), Artificial Neural Network (ANN), Bayesian Ridge Regression (BRR) and Kernel Ridge Regression (KRR), are adopted to model the relationship and predict the crack closure percentage (CCP). Particle Swarm Optimization (PSO) is used for hyper-parameter tuning, and the importance of the parameters is analyzed. It is demonstrated that the integrated ML approaches have great potential for predicting the CCP and that PSO is efficient for hyper-parameter tuning. This research provides useful information for the design of bacteria-based self-healing concrete and can contribute to design tasks elsewhere in industrial engineering.
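A compact sketch of PSO searching the C and gamma hyper-parameters of one of the listed regressors (SVR) by cross-validated error is given below. The particle count, iteration budget, search bounds, and synthetic data are assumptions; the other five regressors would be tuned the same way.

```python
# Hypothetical sketch: PSO tuning of SVR hyper-parameters (C, gamma) by CV error.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)

def cv_error(params):                       # objective: mean CV MSE (lower is better)
    C, gamma = params
    model = SVR(C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

rng = np.random.default_rng(0)
n_particles, n_iter = 10, 20
lo, hi = np.array([0.1, 1e-4]), np.array([100.0, 1.0])   # assumed search bounds
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cv_error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([cv_error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (C, gamma):", gbest, "CV MSE:", pbest_val.min())
```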

9.
In this study, a phase field model is established to simulate the microstructure formation during the solidification of dendrites, taking the Al-Cu-Mg ternary alloy as an example, and machine learning and deep learning methods are combined with the Kim-Kim-Suzuki (KKS) phase field model to predict the quasi-phase equilibrium. The paper first uses the least squares method to obtain the required data and then applies eight machine learning methods and five deep learning methods to train quasi-phase equilibrium prediction models. After obtaining the different models, the paper compares their reliability on the test data and analyzes their performance with two evaluation criteria. This work finds that the deep learning models generally perform better than the machine learning models, with the Multilayer Perceptron (MLP) based quasi-phase equilibrium prediction model achieving the best performance and the Convolutional Neural Network (CNN) based model also achieving competitive results. The experimental results show that the proposed model can accurately predict the quasi-phase equilibrium of the KKS phase-field model, which proves that it is feasible to combine machine learning and deep learning methods with phase-field model simulation.

10.
This paper focuses on detecting diseased signals and classifying arrhythmias into two classes: ventricular tachycardia and premature ventricular contraction. The sole purpose of signal detection is to determine whether a signal has been collected from a healthy or a sick person. The proposed research approach presents a mathematical model for the signal detector based on calculating the instantaneous frequency (IF). Once a signal taken from a patient is detected, the classifier takes that signal as input and classifies the target disease by predicting the class label. When applying the classifier, templates are designed separately for ventricular tachycardia and premature ventricular contraction, and the similarity of a given signal to both templates is computed in the spectral domain. The empirical analysis reveals that the precisions of the detector and the applied classifier are 100% and 77.27%, respectively. Moreover, instantaneous frequency analysis provides a benchmark: the IF of a normal signal ranges from 0.8 to 1.1 Hz, whereas the IF range for ventricular tachycardia and premature ventricular contraction is 0.08–0.6 Hz. This indicates a serious loss of high-frequency content in the spectrum, implying that the heart's overall activity is slowed down. This study may help medical practitioners detect the type of heart disease based on signal analysis.
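The abstract does not detail the IF computation, so the sketch below estimates instantaneous frequency from the analytic signal (Hilbert transform) and applies the reported 0.8–1.1 Hz normal range as a simple decision rule. The synthetic waveform and the thresholding logic are illustrative assumptions, not the paper's detector.

```python
# Hypothetical sketch: instantaneous-frequency (IF) based signal screening.
import numpy as np
from scipy.signal import hilbert

fs = 250                                          # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t)              # stand-in for a recorded waveform

analytic = hilbert(signal)                        # analytic signal
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)     # IF in Hz
median_if = float(np.median(inst_freq))

# Decision rule taken from the reported ranges: ~0.8-1.1 Hz for normal signals,
# ~0.08-0.6 Hz for ventricular tachycardia / premature ventricular contraction.
if 0.8 <= median_if <= 1.1:
    verdict = "normal"
elif 0.08 <= median_if <= 0.6:
    verdict = "possible VT / PVC"
else:
    verdict = "outside modelled range"
print(f"median IF = {median_if:.2f} Hz -> {verdict}")
```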

11.
Recently, the Erebus attack has proved to be a security threat to the blockchain network layer, and existing research has faced challenges in detecting it. Cloud-based active defense and one-sided detection strategies hinder the detection of Erebus attacks. This study designs a detection approach by establishing a ReliefF_WMRmR-based two-stage feature selection algorithm and a deep learning-based multimodal classification detection model for Erebus attacks, responding to security threats to the blockchain network layer. The goal is to improve the performance of Erebus attack detection methods by combining traffic behavior with routing status based on multimodal deep feature learning. The traffic behavior and routing status were first defined and used to describe the attack characteristics at the diverse stages of leak monitoring, hidden traffic overlay, and transaction identity forgery, clarifying how an Erebus attack affects routing transfer and traffic state on the blockchain network layer. Consequently, the detection objects become more relevant and sensitive. A two-stage feature selection algorithm was designed based on ReliefF and weighted maximum relevance minimum redundancy (ReliefF_WMRmR) to alleviate the overfitting of the training model caused by redundant information and noise in the multi-source features of the routing status and traffic behavior. The ReliefF algorithm was introduced to select strongly correlated and highly informative features of the labeled data. Based on WMRmR, a feature selection framework was defined to eliminate weakly correlated features, remove redundant information, and reduce the detection overhead of the model. A multimodal deep learning model was constructed based on the multilayer perceptron (MLP) to address the high false alarm rates incurred by multi-source data. Using this model, the selected routing-status and traffic-behavior features were input separately and learned with deep networks; redundant inter-modal information was removed thanks to the complementarity of the multimodal network, followed by feature fusion and output feature representation to boost classification detection precision. The experimental results demonstrate that the proposed method can detect features, such as traffic data at key link nodes and routing messages, in a real blockchain network environment, and that the model can detect Erebus attacks effectively. Compared with existing Erebus attack detection, this study increases detection accuracy by 1.05%, recall by 2.01%, and the F1-score by 2.43%.
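The paper's ReliefF_WMRmR algorithm is not reproduced here; as a simplified stand-in in the same two-stage spirit, the sketch below ranks features by mutual-information relevance and then greedily adds features that balance relevance against correlation-based redundancy. The synthetic data and the number of kept features are assumptions.

```python
# Hypothetical two-stage selection: relevance ranking, then greedy mRMR-style pass.
# This is a simplified stand-in for the paper's ReliefF_WMRmR, not its implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=30, n_informative=6,
                           random_state=0)

# Stage 1: keep the top-k features by relevance to the label.
relevance = mutual_info_classif(X, y, random_state=0)
stage1 = np.argsort(relevance)[::-1][:15]

# Stage 2: greedily add features that are relevant but not redundant
# (redundancy = mean absolute correlation with already-selected features).
selected = [int(stage1[0])]
for _ in range(7):
    best, best_score = None, -np.inf
    for f in stage1:
        if f in selected:
            continue
        redundancy = np.mean([abs(np.corrcoef(X[:, f], X[:, s])[0, 1])
                              for s in selected])
        score = relevance[f] - redundancy
        if score > best_score:
            best, best_score = int(f), score
    selected.append(best)

print("selected feature indices:", selected)
```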

12.
The extremely imbalanced data problem is the core issue in anomaly detection: the amount of abnormal data is so small that we cannot obtain adequate information to analyze it. Mainstream methods focus on taking full advantage of the normal data, with the discrimination rule that data not belonging to the normal data distribution is anomalous. From a data science perspective, we concentrate instead on the abnormal data and generate artificial abnormal samples with machine learning methods. Among such techniques, the Synthetic Minority Over-sampling Technique (SMOTE) and its improved variants are representative milestones; they generate synthetic examples at random positions along selected line segments. In our work, we break the line-segment limitation and propose an Imbalanced Triangle Synthetic Data method. In theory, our method covers a wider range, and in experiments with real-world data it performs better than SMOTE and its refinements.
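To illustrate the difference between SMOTE's line-segment interpolation and the triangle-based idea described above, the sketch below generates synthetic minority samples both ways in plain NumPy. The toy data and neighbour counts are assumptions, and this is only a schematic of the idea, not the authors' exact algorithm.

```python
# Hypothetical sketch: SMOTE-style line interpolation vs. triangle interpolation.
import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))   # toy minority class

def nearest_neighbours(X, i, k):
    d = np.linalg.norm(X - X[i], axis=1)
    return np.argsort(d)[1:k + 1]                  # skip the point itself

def smote_like(X, n_new, k=5):
    """New points on the segment between a sample and one of its k neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = rng.choice(nearest_neighbours(X, i, k))
        lam = rng.random()
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)

def triangle_like(X, n_new, k=5):
    """New points inside the triangle formed by a sample and two of its neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j, m = rng.choice(nearest_neighbours(X, i, k), size=2, replace=False)
        w = rng.dirichlet([1.0, 1.0, 1.0])          # random barycentric coordinates
        out.append(w[0] * X[i] + w[1] * X[j] + w[2] * X[m])
    return np.array(out)

print(smote_like(minority, 3))
print(triangle_like(minority, 3))
```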

13.
In the field of natural language processing (NLP), the advancement of neural machine translation has paved the way for cross-lingual research. Yet most studies in NLP have evaluated the proposed language models on well-refined datasets. We investigate whether a machine translation approach is suitable for the multilingual analysis of unrefined datasets, in particular chat messages on Twitch. To address this, we collected a dataset of 7,066,854 and 3,365,569 chat messages from English and Korean streams, respectively. We employed several machine learning classifiers and neural networks with two different types of embedding: word-sequence embedding and the final layer of a pre-trained language model. The results indicate that the accuracy difference between English and English-to-Korean was relatively high, ranging from 3% to 12%, whereas for Korean data (Korean and Korean-to-English) it ranged from 0% to 2%. The results therefore imply that translation from a low-resource language (e.g., Korean) into a high-resource language (e.g., English) yields higher performance than the reverse. Several implications and limitations of the presented results are also discussed. For instance, we suggest that translating from resource-poor languages is a feasible way to apply the tools of resource-rich languages in further analysis.

14.
Stroke and cerebral haemorrhage are the second leading causes of death in the world after ischaemic heart disease. In this work, a dataset containing medical, physiological and environmental tests for stroke was used to evaluate the efficacy of machine learning, deep learning and a hybrid technique combining the two, together with a Magnetic Resonance Imaging (MRI) dataset for cerebral haemorrhage. In the first dataset (medical records), two features, namely diabetes and obesity, were created on the basis of the values of the corresponding features. The t-Distributed Stochastic Neighbour Embedding algorithm was applied to represent the high-dimensional dataset in a low-dimensional data space. Meanwhile, the Recursive Feature Elimination (RFE) algorithm was applied to rank the features according to their priority and their correlation with the target feature and to remove the unimportant features. The features were then fed into various classification algorithms, namely Support Vector Machine (SVM), K Nearest Neighbours (KNN), Decision Tree, Random Forest, and Multilayer Perceptron. All algorithms achieved strong results, with the Random Forest algorithm performing best: it reached an overall accuracy of 99% and classified stroke cases with Precision, Recall and F1 score of 98%, 100% and 99%, respectively. In the second dataset, the MRI image dataset was evaluated using the AlexNet model and an AlexNet + SVM hybrid technique. The hybrid AlexNet + SVM model performed better than the AlexNet model alone; it reached accuracy, sensitivity, specificity and Area Under the Curve (AUC) of 99.9%, 100%, 99.80% and 99.86%, respectively.
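A minimal sketch of the tabular-data part of the pipeline described above (RFE feature ranking followed by a Random Forest classifier) is given below. The synthetic data, feature counts, and estimator settings are assumptions, and the MRI/AlexNet branch is not shown.

```python
# Hypothetical sketch: RFE feature ranking feeding a Random Forest classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)  # stand-in records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=6)),           # drop the least important features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(classification_report(y_te, pipe.predict(X_te)))
```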

15.
Under rigid-body impacts such as wind-borne debris, laminated glass in buildings is highly prone to failure because of its inherent brittleness. To address this problem, a method for predicting the failure state of laminated glass under rigid-body impact is proposed that comprehensively considers design parameters such as glass configuration, interlayer type, support conditions, and dimensions. First, repeated rigid-body impact tests were conducted on several types of laminated glass, establishing an experimental database of 567 specimens with PVB interlayers and 210 specimens with SGP interlayers. A prediction model for the failure state of laminated glass was then built with a Whale Optimization Algorithm-tuned Kernel Extreme Learning Machine (WOA-KELM) and compared with corresponding models built with the Support Vector Machine (SVM) and the Least Squares Support Vector Machine (LSSVM). The results show that the WOA-KELM model predicts the failure state with an accuracy of 88.45%, predicting the failure of laminated glass well enough to meet the requirements of engineering applications, and that it outperforms the other models in both accuracy and real-time performance.
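The kernel extreme learning machine at the core of the WOA-KELM model has a closed-form training solution; a compact NumPy sketch of a binary RBF-kernel ELM classifier is shown below. The toy data, kernel width, and regularisation constant are assumptions, and the whale-optimisation tuning of those hyper-parameters is omitted.

```python
# Hypothetical sketch: RBF-kernel extreme learning machine (KELM) classifier.
# Whale-optimisation tuning of (C, gamma) is omitted; the values here are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
T = np.where(y_tr == 1, 1.0, -1.0)                  # +/-1 targets for binary output

C, gamma = 10.0, 0.5                                # assumed hyper-parameters
K = rbf_kernel(X_tr, X_tr, gamma)
beta = np.linalg.solve(K + np.eye(len(X_tr)) / C, T)   # closed-form output weights

scores = rbf_kernel(X_te, X_tr, gamma) @ beta
pred = (scores > 0).astype(int)
print("accuracy:", (pred == y_te).mean())
```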

16.
The unavailability of sufficient information for a proper diagnosis, incomplete communication or miscommunication between the patient and the clinician or among healthcare professionals, delayed or incorrect diagnosis, clinician fatigue, or even high diagnostic complexity within limited time can all lead to diagnostic errors. Diagnostic errors have adverse effects on the treatment of a patient: unnecessary treatments increase medical bills and deteriorate the patient's health. Such diagnostic errors, which harm the patient in various ways, could be minimized using machine learning. Machine learning algorithms can be used to diagnose various diseases with high accuracy, and their use could assist doctors in making timely decisions or serve as a second opinion or supporting tool. This study aims to provide a comprehensive review of research articles published from 2015 to mid-2020 that have used machine learning for the diagnosis of various diseases. We present the various machine learning algorithms used over the years to diagnose different diseases, and the results of this study show the distribution of machine learning methods by medical discipline. Based on our review, we also suggest directions for future research.

17.
Network intrusion detection based on cross-validated SVM
To address the high false-negative and false-positive rates of traditional intrusion detection systems, the support vector machine (SVM) is applied to intrusion detection and a cross-validation procedure is introduced into SVM training. With a radial basis function (RBF) kernel, the training set is divided into several subsets, and each subset is tested with a classifier trained on the remaining subsets. After the two optimal RBF parameters are obtained, they are applied to the final classifier. Experimental results show that the method can effectively detect intrusion attacks, achieving a higher detection rate and stronger generalization ability together with lower false-positive and false-negative rates, and can be applied effectively in intrusion detection systems.
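A minimal sketch of cross-validated selection of the two RBF parameters (C and gamma) for an SVM intrusion classifier is shown below; the synthetic flow features and the parameter grid are assumptions, not the paper's dataset or search ranges.

```python
# Hypothetical sketch: cross-validated selection of RBF-SVM parameters (C, gamma).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)          # stand-in for flow features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": [1e-3, 1e-2, 1e-1, 1]},
                    cv=5)                           # each subset tested against the rest
grid.fit(X_tr, y_tr)

print("best (C, gamma):", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))     # final classifier on held-out data
```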

18.
19.
The extreme learning machine (ELM) is a new algorithm for single-hidden-layer feedforward neural networks, with advantages such as fast training and high generalization performance. Applying it to soft-sensing avoids the high computational complexity of traditional neural networks and enables the rapid acquisition of parameters that are difficult to measure directly, so it has broad application prospects in the field of metrology and measurement technology.
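For reference, a single-hidden-layer ELM draws random input weights and fits only the output weights in closed form; the NumPy sketch below shows this on a toy regression task. The hidden-layer size, activation, and data are assumptions rather than any particular soft-sensing application.

```python
# Hypothetical sketch: a basic extreme learning machine (ELM) regressor in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))               # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2            # toy soft-sensing target

n_hidden = 100                                      # assumed hidden-layer size
W = rng.normal(size=(X.shape[1], n_hidden))         # random input weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                              # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # output weights via least squares

y_hat = H @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```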

20.
To address the high memory consumption, low classification accuracy, and poor generalization of the extreme learning machine when processing high-dimensional data, a batch hierarchical-encoding extreme learning machine algorithm is proposed. First, the dataset is processed in batches to reduce data dimensionality and input complexity; then a multilayer autoencoder structure encodes each batch in an unsupervised manner to extract deep features; finally, a manifold classifier containing an inheritance factor is constructed based on the idea of manifold regularization to preserve data integrity and improve the generalization of the algorithm. Experimental results show that the method is simple to implement and reaches classification accuracies of 92.16%, 99.35%, and 98.86% on the NORB, MNIST, and USPS datasets, respectively; compared with other extreme learning machine algorithms, it has clear advantages in reducing computational complexity and CPU memory consumption.
