Similar Documents
20 similar documents found (search time: 625 ms)
1.
Engineering, 2018, 4(1): 53-60
Cyberattack forms are complex and varied, and detecting and predicting dynamic attack types remain challenging tasks. Research on knowledge graphs is maturing in many fields, and some scholars have recently combined the knowledge-graph concept with cybersecurity to construct cybersecurity knowledge bases. This paper presents a cybersecurity knowledge base and deduction rules based on a quintuple model. Using machine learning, we extract entities and build an ontology to obtain a cybersecurity knowledge base. New rules are then deduced by calculating formulas and applying the path-ranking algorithm. The Stanford named entity recognizer (NER) is also used to train an extractor for useful information. Experimental results show that the Stanford NER provides many features and that its useGazettes parameter may be used to train a recognizer in the cybersecurity domain in preparation for future work.
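As a rough illustration of the path-ranking step mentioned above, the sketch below enumerates relation paths between two entities in a toy triple store. The entity and relation names are invented, and the code is a simplified stand-in, not the paper's implementation.

```python
from collections import defaultdict

# Toy knowledge graph: (head, relation, tail) triples, loosely mimicking a
# quintuple-style cybersecurity knowledge base (all names are hypothetical).
triples = [
    ("AttackerA", "uses", "MalwareX"),
    ("MalwareX", "exploits", "CVE-1"),
    ("CVE-1", "affects", "HostDB"),
    ("AttackerA", "targets", "HostDB"),
]

def relation_paths(triples, source, target, max_len=3):
    """Enumerate relation paths from source to target, the raw material
    for path-ranking features."""
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((r, t))
    paths, frontier = [], [(source, [])]
    for _ in range(max_len):
        nxt = []
        for node, path in frontier:
            for r, t in adj[node]:
                if t == target:
                    paths.append(path + [r])
                else:
                    nxt.append((t, path + [r]))
        frontier = nxt
    return paths

print(relation_paths(triples, "AttackerA", "HostDB"))
```

A path-ranking model would then weight each distinct relation path (here the direct "targets" edge versus the three-hop chain) as a feature for predicting new links.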

2.
3.
Since web services are essential in daily life, cybersecurity is increasingly important in the digital world. Malicious Uniform Resource Locators (URLs) are a common and serious threat: they host unsolicited content and lure unsuspecting users into scams such as theft of private information, monetary loss, and malware installation. It is therefore imperative to detect such threats. However, traditional blacklist-based approaches to malicious URL detection are easily bypassed and cannot detect newly generated malicious URLs. In this paper, we propose a novel malicious URL detection method based on a deep learning model to protect against web attacks. Specifically, we first use an auto-encoder to represent URLs; the represented URLs are then fed into a proposed composite neural network for detection. To evaluate the proposed system, we conducted extensive experiments on the HTTP CSIC 2010 dataset and a dataset we collected; the experimental results show the effectiveness of the proposed approach.
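Before an auto-encoder can represent a URL, the URL must be turned into fixed-length numeric input. Below is a minimal character-level encoding sketch; the vocabulary and sequence length are arbitrary illustrative choices, not the paper's.

```python
import string

# Illustrative character vocabulary for URLs (not the paper's choice).
VOCAB = string.ascii_lowercase + string.digits + ":/.-_?=&%"

def url_to_ids(url, max_len=40):
    """Map a URL to a fixed-length sequence of integer character ids
    (0 = padding / unknown), the usual input form for an auto-encoder."""
    ids = [VOCAB.index(c) + 1 if c in VOCAB else 0 for c in url.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

v = url_to_ids("http://evil.example/login?id=1")
print(len(v))
```

An auto-encoder would then compress such id sequences (via an embedding layer) into a dense representation and be trained to reconstruct them.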

4.
Network security situation awareness is an important foundation for network security management: it presents the security status of the target system by analyzing existing or potential cyber threats. In network offense and defense, the security state of the target system is affected by both offensive and defensive strategies. Exploiting this feature, this paper proposes a network security situation awareness method using a stochastic game in a cloud computing environment, quantifying the network security situation value from the utilities of both players. The method analyzes nodes based on the network security state of the target virtual machine, uses the virtual machine introspection mechanism to measure the impact of network attacks on the target virtual machine, and then dynamically evaluates the network security situation of the cloud environment from the attack-defense game process. For attack prediction, cyber threat intelligence serves as an important basis for potential threat analysis: intelligence applicable to the current security state is screened through a system hierarchy fuzzy optimization method, and the potential threat to the target system is analyzed using the screened intelligence. If no applicable cyber threat intelligence exists, the Nash equilibrium is used to predict attack behavior. Experimental results show that the proposed method accurately reflects changes in the network security situation and predicts attack behavior.
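When no applicable threat intelligence exists, the approach above falls back on a Nash equilibrium. For a 2x2 zero-sum game, the attacker's equilibrium mixed strategy has a closed form; the sketch below uses made-up utilities and is a generic game-theory identity, not the paper's actual quantification.

```python
def nash_mixed_2x2(payoff):
    """Attacker's mixed Nash strategy for a 2x2 zero-sum game, where
    payoff[i][j] is the attacker's utility for action i against defense j.
    The mix makes the defender indifferent between her two defenses."""
    (a, b), (c, d) = payoff
    denom = a - b - c + d
    if denom == 0:
        return 0.5, 0.5  # degenerate game: any mix is an equilibrium
    p = (d - c) / denom  # probability of attacker action 0
    return p, 1 - p

# Hypothetical utilities for two attack actions against two defenses.
print(nash_mixed_2x2([[3, 1], [0, 2]]))
```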

5.
Machine learning (ML) algorithms are often used to design intrusion detection (ID) systems for effective detection and mitigation of malicious cyber threats at the host and network levels. However, cybersecurity attacks are still increasing, and an ID system can play a vital role in detecting such threats. Existing ID systems often fail to detect malicious threats, primarily because they rely on traditional ML techniques that pay little attention to accurate classification and feature selection; developing an accurate and intelligent ID system is therefore a priority. The main objective of this study was to develop a hybrid intelligent intrusion detection system (HIIDS) that learns crucial feature representations efficiently and automatically from massive unlabeled raw network traffic data. Many ID datasets are publicly available to the cybersecurity research community. We combined robust Spark MLlib classifiers, logistic regression (LR) and extreme gradient boosting (XGB), for anomaly detection with a state-of-the-art deep learning model, a long short-term memory autoencoder (LSTMAE), for misuse attacks, to build an efficient HIIDS that detects and classifies unpredictable attacks. Our approach uses the LSTM to capture temporal features and the AE to capture global features more efficiently. To evaluate the efficacy of the proposed approach, experiments were conducted on a publicly available, contemporary real-life dataset, ISCX-UNB. Simulation results demonstrate that the proposed Spark MLlib and LSTMAE-based HIIDS significantly outperformed existing ID approaches, achieving an accuracy of up to 97.52% on the ISCX-UNB dataset under a 10-fold cross-validation test. Using the proposed HIIDS in large-scale real-world settings is therefore quite promising.
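The autoencoder half of such a system typically flags attacks by reconstruction error: a model trained only on normal traffic reconstructs anomalies poorly. Below is a minimal sketch of that decision rule with a stand-in "model" (a clipping map, purely illustrative; a real LSTMAE would be trained on traffic features).

```python
def anomaly_scores(batch, reconstruct):
    """Mean-squared reconstruction error per sample: high error suggests
    the sample lies outside the distribution the model learned."""
    return [sum((x - y) ** 2 for x, y in zip(v, reconstruct(v))) / len(v)
            for v in batch]

# Stand-in "autoencoder": identity with clipping to [0, 1], mimicking a
# model that only ever saw normal feature values in that range.
clip = lambda v: [min(max(x, 0.0), 1.0) for x in v]

normal = [0.2, 0.8, 0.5]   # hypothetical normalized traffic features
attack = [3.0, -2.0, 0.5]  # hypothetical out-of-range attack record
print(anomaly_scores([normal, attack], clip))
```

Thresholding these scores (e.g., at a high percentile of scores on held-out normal data) yields the final anomaly decision.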

6.
The Covid-19 epidemic poses a serious public health threat to the world, as people with little or no pre-existing immunity can be especially vulnerable to its effects. Developing surveillance systems that predict the Covid-19 pandemic at an early stage could therefore save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus: the long short-term memory (LSTM) and Holt-trend algorithms were applied to predict the numbers of confirmed cases and deaths. The real-time data used were collected from the World Health Organization (WHO). Three countries were considered to test the proposed models: Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models perform better in predicting the cases of coronavirus patients. Standard performance measures, mean squared error (MSE), root mean squared error (RMSE), mean error, and correlation, are employed to evaluate the proposed models. The correlation results of the LSTM are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries, and 99.86%, 98.876%, and 99.16% in predicting the number of deaths for Saudi Arabia, Italy, and Spain, respectively. Similarly, the correlation results of the Holt-trend model are 99.06%, 99.96%, and 99.94% in predicting the number of confirmed cases, and 99.80%, 99.96%, and 99.94% in predicting the number of deaths for Saudi Arabia, Italy, and Spain, respectively. These empirical results indicate the efficient performance of the presented models in predicting the numbers of confirmed cases and deaths in these countries, and such findings provide better insight into the future of the Covid-19 pandemic in general. The results were obtained by applying time series models, which should be considered for the sake of saving many lives.
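Holt's linear-trend method itself is simple enough to sketch directly. The smoothing parameters below are illustrative, not the values fitted in the study.

```python
def holt_forecast(series, alpha=0.8, beta=0.2, horizon=3):
    """Holt's linear-trend exponential smoothing: maintain a level and a
    trend estimate, then extrapolate linearly for the forecast horizon.
    alpha/beta are illustrative smoothing parameters."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# On a perfectly linear case count, the forecasts continue the line.
print(holt_forecast([10, 20, 30, 40, 50]))
```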

7.
A knowledge graph is a structured graph in which data obtained from multiple sources are standardized to acquire and integrate human knowledge. Research is actively being conducted to cover a wide variety of knowledge, since knowledge graphs can power applications that help humans. However, existing research constructs knowledge graphs without the time information that knowledge implies. Knowledge stored without time information becomes outdated over time, and the possibility that it later turns false or changes meaningfully is ignored. As a result, such graphs cannot reflect information that changes dynamically and cannot absorb newly emerging information. To solve this problem, this paper proposes Time-Aware PolarisX, an automatically extended knowledge graph that includes time information. Time-Aware PolarisX combines a BERT-based relation extractor with an ensemble NER entity extractor that includes a time tag, extracting knowledge as subject-relation-object triples from unstructured text. Two application experiments show that the proposed system overcomes the limitations of existing systems that ignore time information, for example when applied to a chatbot. We also verify through a comparative experiment that the extraction model improves accuracy over the existing model.
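A minimal sketch of the time-aware idea: store each extracted triple with its time tag so that newer facts supersede stale ones. The entities and dates are invented, and Time-Aware PolarisX's actual pipeline is far richer than this.

```python
from datetime import date

# Toy time-aware triple store: each fact carries the time it holds from,
# so queries can prefer the most recent value (names are hypothetical).
facts = []

def add_fact(subj, rel, obj, when):
    facts.append((subj, rel, obj, when))

def current_value(subj, rel):
    """Return the most recent object for (subj, rel), or None."""
    matches = [(w, o) for s, r, o, w in facts if s == subj and r == rel]
    return max(matches)[1] if matches else None

add_fact("ACME", "CEO", "Alice", date(2019, 1, 1))
add_fact("ACME", "CEO", "Bob", date(2022, 6, 1))
print(current_value("ACME", "CEO"))
```

A chatbot backed by a time-unaware store would keep answering "Alice"; keeping the time tag lets it answer with the currently valid fact.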

8.
Software-defined networking (SDN) represents a paradigm shift in network traffic management: it separates the data and control planes, with APIs used to communicate between them. The controller is central to the management of an SDN network and is therefore subject to security concerns. This research shows how a deep learning algorithm can detect intrusions in SDN-based IoT networks; overfitting, low accuracy, and efficient feature selection are all discussed. We propose a hybrid machine learning approach based on Random Forest and long short-term memory (LSTM). The study uses a new dataset built specifically for software-defined networks, and a feature selection technique is applied to obtain the best and most relevant features. Several experiments show that the proposed solution is a superior method for detecting flow-based anomalies. The performance of the proposed model is measured in terms of accuracy, recall, precision, F1 score, and detection time. Furthermore, a lightweight model for training is proposed that selects fewer features while maintaining performance. Experiments show that the adopted methodology outperforms existing models.

9.
Named entity recognition (NER) is one of the fundamental tasks in natural language processing (NLP); it aims to locate, extract, and classify named entities into predefined categories such as person, organization, and location. Most earlier research on identifying named entities relied on handcrafted features and very large knowledge resources, which is time consuming and inadequate for resource-scarce languages such as Arabic. Recently, deep learning has achieved state-of-the-art performance on many NLP tasks, including NER, without requiring handcrafted features. In addition, transfer learning has proven its efficiency in several NLP tasks by exploiting pretrained language models to transfer knowledge learned from large-scale datasets to domain-specific tasks. Bidirectional Encoder Representations from Transformers (BERT) is a contextual language model that generates semantic vectors dynamically according to the context of the words; its architecture relies on multi-head attention, which allows it to capture global dependencies between words. In this paper, we propose a deep learning-based model that fine-tunes BERT to recognize and classify Arabic named entities. The pretrained BERT context embeddings were used as input features to a bidirectional gated recurrent unit (BGRU) and were fine-tuned using two annotated Arabic named entity recognition (ANER) datasets. Experimental results demonstrate that the proposed model outperformed state-of-the-art ANER models, achieving F-measure values of 92.28% and 90.68% on the ANERCorp dataset and the merged ANERCorp and AQMAR dataset, respectively.

10.
11.
Networks serve a significant function in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) has become an essential information protection strategy that tracks the state of the software and hardware operating on the network. Notwithstanding steady progress, current intrusion detection systems still face difficulties in improving detection precision, reducing false alarm rates, and identifying suspicious activities. To address these issues, several researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches, which can accurately distinguish regular from irregular information with great efficiency. Artificial intelligence, particularly machine learning, can thus be used to develop an intelligent intrusion detection framework. To achieve this objective, this article proposes an intrusion detection system based on a deep extreme learning machine (DELM), which first assesses security features according to their prominence and then constructs an adaptive intrusion detection system focused on the important features. We then investigated the viability of the suggested DELM-based intrusion detection system by conducting dataset assessments and evaluating performance factors to validate system reliability. The experimental results illustrate that the suggested framework outclasses traditional algorithms; indeed, it is not only of interest to scientific research but also of functional importance.

12.
Cyber attacks on computer and network systems induce system quality and reliability problems and present a significant threat to the computer and network systems that we depend on heavily. Cyber attack detection involves monitoring system data and detecting the attack-induced quality and reliability problems caused by cyber attacks. Usually there are ongoing normal user activities on computer and network systems when an attack occurs, so the observed system data may be a mixture of attack data and normal use data (norm data). We have established a novel attack-norm separation approach to cyber attack detection that includes norm data cancelation, to improve data quality, as an important part of the approach. To demonstrate the importance of norm data cancelation, this paper presents a set of data modeling and analysis techniques developed to perform norm data cancelation before applying an existing anomaly detection technique, chi-square distance monitoring (CSDM), to the residual data for cyber attack detection. Specifically, a Markov chain model of norm data and an artificial neural network (ANN) for norm data cancelation are developed and tested. This set of techniques is compared with using CSDM alone. The results show a significant improvement in detection performance by CSDM with norm data cancelation over CSDM alone. Copyright © 2006 John Wiley & Sons, Ltd.
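The CSDM statistic itself is straightforward: a chi-square distance between the observed event-frequency profile and the expected (norm) profile, monitored over time. A sketch with invented profiles:

```python
def chi_square_distance(observed, expected):
    """Chi-square distance between an observed event-frequency vector and
    the expected norm profile: sum of (o - e)^2 / e over event types.
    Large values signal deviation from normal behavior."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

norm_profile = [0.5, 0.3, 0.2]  # hypothetical event mix under normal use
print(chi_square_distance([0.5, 0.3, 0.2], norm_profile))  # no deviation
print(round(chi_square_distance([0.1, 0.2, 0.7], norm_profile), 3))
```

In the paper's pipeline this statistic is computed on *residual* data, after the Markov-chain/ANN norm-cancelation step has removed the normal-use component.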

13.
In recent years, cybersecurity has attracted significant interest due to the rapid growth of the Internet of Things (IoT) and the widespread development of computer infrastructure and systems. It is thus becoming particularly necessary to identify cyber-attacks or irregularities in the system and to develop an efficient intrusion detection framework that is integral to security. Researchers have worked on developing intrusion detection models that depend on machine learning (ML) methods to address these security problems; an intelligent, data-driven intrusion detection device can exploit artificial intelligence (AI), and especially ML, techniques. Accordingly, we propose in this article an intrusion detection model based on a real-time sequential deep extreme learning machine cybersecurity intrusion detection system (RTS-DELM-CSIDS) security model. The proposed model initially rates the security aspects contributing to their significance and then develops a comprehensive intrusion detection framework focused on the essential characteristics. Furthermore, we investigated the feasibility of the proposed RTS-DELM-CSIDS framework by performing dataset evaluations and calculating accuracy parameters to validate it. The experimental findings demonstrate that the RTS-DELM-CSIDS framework outperforms conventional algorithms and that the proposed approach has not only research significance but also practical significance.

14.
ABSTRACT

In this paper, we propose a robust subspace learning method based on RPCA, named robust principal component analysis with projection learning (RPCAPL), which further improves feature extraction by projecting data samples into a suitable subspace. For subspace learning (SL) methods in clustering and classification tasks, it is also critical to construct an appropriate graph for discovering the intrinsic structure of the data. For this reason, we add a graph Laplacian matrix to the RPCAPL model to preserve the local geometric relationships between data samples and name the improved model RPCAGPL, which takes all samples as nodes in the graph and treats the affinity between pairs of connected samples as weighted edges. RPCAGPL not only globally captures the low-rank subspace structure of the data in the original space but also locally preserves the neighbor relationships between data samples.
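RPCA solvers typically alternate a singular-value step for the low-rank part with element-wise soft-thresholding (shrinkage) for the sparse error part. The shrinkage operator is easy to sketch; this is a generic RPCA building block, not this paper's full algorithm.

```python
def soft_threshold(x, tau):
    """Soft-thresholding (shrinkage) operator used in RPCA-style solvers to
    promote sparsity in the error term: sign(x) * max(|x| - tau, 0)."""
    if abs(x) <= tau:
        return 0.0
    return x - tau if x > 0 else x + tau

# Small entries are zeroed out (treated as clean); large entries shrink
# toward zero but survive as sparse corruptions.
print([soft_threshold(v, 1.0) for v in [-3.0, -0.5, 0.0, 0.5, 2.5]])
```

The proximal view: this operator solves min_y tau*|y| + (1/2)(y - x)^2, which is why it appears wherever an L1 penalty enters the objective.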

15.
Named entity recognition (NER) is essential in many natural language processing (NLP) tasks such as information extraction and document classification. A construction document usually contains critical named entities, and an effective NER method can provide a solid foundation for downstream applications to improve construction management efficiency. This study presents a NER method for Chinese construction documents based on conditional random fields (CRF), including a corpus design pipeline and a CRF model. The corpus design pipeline identifies typical NER tasks in construction management, enables word-based tokenization, and controls annotation consistency with a newly designed annotation specification. The CRF model engineers nine transition features and seven classes of state features, covering the impacts of word position, part-of-speech (POS), and word/character states within the context. The F1-measure on a labeled construction data set is 87.9%. Furthermore, as more domain knowledge features are infused, the marginal performance improvement of including POS information decreases, suggesting a promising research direction of POS customization to improve NLP performance with limited data.
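Decoding in a linear-chain CRF combines transition scores (tag-to-tag) and state scores (tag-given-observation) and finds the best tag sequence with the Viterbi algorithm. Below is a toy sketch with invented additive scores, not trained CRF weights.

```python
def viterbi(obs, states, start, trans, emit):
    """Most-likely tag sequence under additive (log-space) scores — the
    decoding step of a linear-chain CRF (and of HMMs)."""
    V = [{s: start[s] + emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + trans[p][s])
            col[s] = V[-1][prev] + trans[prev][s] + emit[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    tag = max(states, key=lambda s: V[-1][s])
    path = [tag]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Two toy tags (outside / entity) and hand-picked scores.
states = ["O", "ENT"]
start = {"O": 0.0, "ENT": -1.0}
trans = {"O": {"O": 0.0, "ENT": -0.5}, "ENT": {"O": -0.5, "ENT": 0.0}}
emit = {"O": {"the": 0.0, "bridge": -2.0},
        "ENT": {"the": -2.0, "bridge": 0.0}}
print(viterbi(["the", "bridge"], states, start, trans, emit))
```

A real CRF derives these scores as weighted sums of the engineered transition and state features described in the abstract.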

16.
Artificial intelligence, which has recently emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems demanded by society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are prominent in computer vision fields such as image classification, recognition, and object tracking. Training CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies have become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour image extraction were performed to overcome the data shortage problem. In addition, we propose a hierarchical ensemble technique to achieve high image classification accuracy even when trained on a small amount of data. First, we trained the UC-Merced land use dataset and the contour images for each image on pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet. We then apply a hierarchical ensemble technique to each combination in which these models can be deployed. These experiments were performed with training set proportions of 30%, 50%, and 70%, yielding a performance improvement of up to 4.68% over the average accuracy of the individual models.
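At its simplest, an ensemble step can be a per-image majority vote over the member models' predictions; hierarchical variants repeat such combination level by level. A sketch with invented class labels (the real study combines extracted features, not just votes):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Per-image majority vote: predictions is a list of per-model
    prediction lists; returns one combined label per image."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Three hypothetical CNNs classifying four land-use images.
print(ensemble_vote([["forest", "beach", "road", "road"],
                     ["forest", "road", "road", "beach"],
                     ["forest", "beach", "beach", "beach"]]))
```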

17.
Establishing a temperature-displacement correlation model is a key step in displacement-based performance assessment of long-span bridges. This paper proposes a method for building a multivariate temperature-displacement correlation model based on a long short-term memory (LSTM) neural network. Taking full advantage of the LSTM network's ability to account for the time-lag effect of displacement and to handle very long data sequences, the network is optimized with the adaptive moment estimation method, and dropout regularization is introduced to improve its predictive ability. On this basis, using long-term synchronized temperature and displacement monitoring data from a three-span continuous tied-arch bridge, the main temperature variables affecting the vertical displacement of the main girder are discussed, a multivariate temperature-displacement LSTM model is established, and it is compared with a multivariate correlation model based on a back-propagation (BP) neural network. The results show that the effective temperatures of the members have a clearly nonlinear relationship with the vertical displacement of the main girder, whereas the temperature differences between members and the temperature gradient of the main arch are linearly correlated with it; the effective temperature of the main arch and the temperature difference between the main girder and the main arch are the main temperature variables driving the girder's vertical displacement; and, compared with the BP model, the proposed LSTM model substantially reduces both the reconstruction and the prediction errors of the temperature-induced displacement.
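To let a recurrent model capture the time-lag (thermal inertia) effect described above, monitoring data are usually reshaped into lagged input windows paired with the next target value. A minimal sketch with invented temperature readings:

```python
def make_windows(series, lag):
    """Build (input window, next value) training pairs so a sequence model
    can learn the delayed response of displacement to temperature."""
    return [(series[i:i + lag], series[i + lag])
            for i in range(len(series) - lag)]

temps = [12.1, 12.4, 13.0, 13.9, 14.2]  # hypothetical effective temperatures
print(make_windows(temps, 3))
```

In the multivariate case each window would stack several temperature variables (member effective temperatures, member temperature differences, arch gradient) per time step, with girder displacement as the target.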

18.
Abstract

Water pollution is a major global environmental problem, and it poses a great environmental risk to public health and biological diversity. This work is motivated by assessing the potential environmental threat of coal mining through increased sulfate concentrations in river networks, which do not belong to any simple parametric distribution. However, existing network models mainly focus on binary or discrete networks and weighted networks with known parametric weight distributions. We propose a principled nonparametric weighted network model based on exponential-family random graph models and local likelihood estimation, and study its model-based clustering with application to large-scale water pollution network analysis. We do not require any parametric distribution assumption on network weights. The proposed method greatly extends the methodology and applicability of statistical network models. Furthermore, it is scalable to large and complex networks in large-scale environmental studies. The power of the proposed methods is demonstrated in simulation studies and a real application to sulfate pollution network analysis in the Ohio watershed located in Pennsylvania, United States.

19.
Engineering, 2021, 7(12): 1786-1796
This paper presents a vision-based crack detection approach for concrete bridge decks using an integrated one-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) method in the image frequency domain. The so-called 1D-CNN-LSTM algorithm is trained using thousands of images of cracked and non-cracked concrete bridge decks. In order to improve the training efficiency, images are first transformed into the frequency domain during a preprocessing phase. The algorithm is then calibrated using the flattened frequency data. LSTM is used to improve the performance of the developed network for long sequence data. The accuracy of the developed model is 99.05%, 98.9%, and 99.25%, respectively, for training, validation, and testing data. An implementation framework is further developed for future application of the trained model for large-scale images. The proposed 1D-CNN-LSTM method exhibits superior performance in comparison with existing deep learning methods in terms of accuracy and computation time. The fast implementation of the 1D-CNN-LSTM algorithm makes it a promising tool for real-time crack detection.
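The frequency-domain preprocessing step amounts to a discrete Fourier transform of the flattened image data. Below is a naive DFT sketch on a tiny signal, purely illustrative; real pipelines use an FFT over image pixels.

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform — a stand-in for the
    frequency-domain preprocessing applied to flattened image data."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# An alternating signal concentrates its energy at the Nyquist bin.
mag = [abs(c) for c in dft([1, 0, 1, 0])]
print(mag)
```

The magnitudes (or the complex coefficients, flattened) then become the 1D sequence fed to the 1D-CNN-LSTM.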

20.
Computer-assisted diagnosis (CAD) is an effective method to detect lung cancer from computed tomography (CT) scans, and the development of artificial neural networks has made CAD more accurate in detecting pathological changes. Due to the complexity of the lung environment, existing neural network training still requires large datasets, excessive time, and memory space. To meet this challenge, we analyze 3D volumes as serialized 2D slices and present a new lightweight convolutional neural network (CNN)-long short-term memory (LSTM) structure for lung nodule classification. Our network contains two main components: (a) optimized lightweight CNN layers with a tiny parameter space for extracting visual features of serialized 2D images, and (b) an LSTM network for learning relevant information among the 2D images. In all experiments, we compared the training results of several models; our model achieved an accuracy of 91.78% for lung nodule classification with an AUC of 93%. We used fewer samples and less memory space to train the model and achieved faster convergence. Finally, we analyzed and discussed the feasibility of migrating this framework to mobile devices. The framework can also be applied to cope with small amounts of training data and the development of mobile health devices in the future.
