Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
In a computer environment, an operating system is prone to malware, and even the Linux operating system is no exception. In recent years, malware has evolved, and attackers have become more skilled than they were a few years ago. Furthermore, Linux-based systems have become more attractive to cybercriminals because of the increasing use of the Linux operating system in web servers and Internet of Things (IoT) devices. Windows is the most widely used OS, so most research efforts have focused on its malware protection rather than on other operating systems. As a result, hundreds of research articles, documents, and methodologies dedicated to malware analysis have been reported. However, there has not been much literature concerning Linux security and protection from malware. To address these new challenges, it is necessary to develop a methodology that standardizes the steps required to perform in-depth malware analysis. A systematic analysis process makes the difference between a good and an ordinary malware analysis. Additionally, deep malware comprehension can yield faster and much more efficient malware eradication. To address all the challenges mentioned, this article proposes a methodology for malware analysis on the Linux operating system, a field traditionally overlooked compared to other operating systems. The proposed methodology is tested with a specific Linux malware sample, and the test results show high effectiveness in malware detection.
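Most malware-analysis methodologies, including Linux-focused ones, begin with static triage of the sample. A minimal sketch of that stage, assuming a hashing-plus-string-extraction first step (illustrative only, not the paper's exact procedure):

```python
import hashlib
import re

def triage(sample: bytes, min_len: int = 4):
    """Static triage of a binary sample: a cryptographic hash for
    identification plus printable-string extraction for quick inspection."""
    sha256 = hashlib.sha256(sample).hexdigest()
    # ASCII runs of at least `min_len` printable characters.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    strings = [m.group().decode() for m in re.finditer(pattern, sample)]
    return sha256, strings

# Made-up ELF-like sample bytes containing two embedded strings.
sample = b"\x7fELF\x02\x01\x01\x00" + b"/bin/sh\x00" + b"connect-back\x00"
digest, strings = triage(sample)
```

In a real workflow the hash would be checked against known-sample databases before any dynamic analysis is attempted.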

2.
Malicious software (malware) is one of the main cyber threats that organizations and Internet users currently face. Malware is software developed by cybercriminals for damaging purposes, such as corrupting systems and data and stealing sensitive data. The damage caused by malware increases substantially every day, so there is a need to detect malware efficiently and automatically and to remove threats from systems quickly. Although there are various approaches to tackling malware, its prevalence and stealthiness necessitate an effective method for the detection and prevention of malware attacks. Deep learning-based approaches have recently gained attention as suitable methods for detecting malware effectively. In this paper, a novel deep learning-based approach for detecting malware is proposed. The approach deploys novel feature selection, feature correlation, and feature representations to significantly reduce the feature space. It has been evaluated on a Microsoft prediction dataset with 21,736 malware samples drawn from 9 malware families, achieving 96.01% accuracy and outperforming existing malware detection techniques.
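One way a feature-correlation step shrinks the feature space is by discarding features that are strongly correlated with features already kept. A hedged pure-Python sketch (the greedy rule and the 0.95 cutoff are illustrative assumptions, not the paper's exact procedure):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sqrt(sum((a - mx) ** 2 for a in x))
    vy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def drop_correlated(columns, threshold=0.95):
    """Greedily keep a feature only if it is not strongly correlated
    with any feature already kept."""
    kept = []
    for name, col in columns.items():
        if all(abs(pearson(col, columns[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Made-up feature columns; "api_calls2" duplicates "api_calls" exactly.
columns = {
    "api_calls":  [1.0, 2.0, 3.0, 4.0],
    "api_calls2": [2.0, 4.0, 6.0, 8.0],
    "entropy":    [4.0, 1.0, 3.0, 2.0],
}
kept = drop_correlated(columns)
```

The redundant duplicate is dropped while the uncorrelated feature survives.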

3.
As nearly half of the incidents in enterprise security have been triggered by insiders, it is important to deploy a more intelligent defense system to assist enterprises in pinpointing and resolving, in real time, the incidents caused by insiders or malicious software (malware). Failing to do so may cause a serious loss of reputation as well as business. At the same time, modern network traffic has dynamic patterns, high complexity, and large volumes that make it more difficult to detect malware early. The ability to learn tasks sequentially is crucial to the development of artificial intelligence. Existing neurogenetic computation models with deep-learning techniques are able to detect complex patterns; however, the models have limitations, including catastrophic forgetting, and require intensive computational resources. As defense systems using deep-learning models require more time to learn new traffic patterns, they cannot perform fully online (on-the-fly) learning. Hence, an intelligent attack/malware detection system with on-the-fly learning capability is required. In this paper, a memory-prediction framework is adopted, and a simplified single cell assembled sequential hierarchical memory (s.SCASHM) model is proposed in place of the hierarchical temporal memory (HTM) model to speed up learning convergence and achieve on-the-fly learning. The s.SCASHM consists of a Single Neuronal Cell (SNC) model and a simplified Sequential Hierarchical Superset (SHS) platform. The s.SCASHM is implemented as the prediction engine of a user behavior analysis tool to detect insider attacks/anomalies. The experimental results show that the proposed memory model can predict users’ traffic behavior with an accuracy level ranging from 72% to 83% while performing on-the-fly learning.

4.
Techniques for analyzing the safety and reliability of analog-based electronic protection systems that serve to mitigate hazards in process control systems have been developed over many years and are reasonably well understood. An example is the protection system in a nuclear power plant. The extension of these techniques to systems that include digital computers is not well developed, and there is little consensus among software engineering experts and safety experts on how to analyze such systems. One possible technique is to extend hazard analysis to include digital computer-based systems. Software is frequently overlooked during system hazard analyses, but this is unacceptable when the software controls a potentially hazardous operation. In such cases, hazard analysis should be extended to fully cover the software. A method for performing software hazard analysis is proposed in this paper. The method concentrates on finding hazards during the early stages of the software life cycle, using an extension of HAZOP.

5.
Computer security requires statistical methods to quickly and accurately flag malicious programs. This article proposes a nonparametric Bayesian approach for classifying programs as benign or malicious and simultaneously clustering malicious programs. The analysis is based on the dynamic trace (DT) of instructions under a first-order Markov assumption. Each row of the trace's transition matrix is modeled using the Dirichlet process mixture (DPM) model. The DPM model clusters programs within each class (malicious or benign) and produces the posterior probability that a program is malware, which is used for classification. The novelty of the model is using this clustering algorithm to improve classification accuracy. The simulation study shows that the DPM model outperforms elastic net logistic (ENL) regression and the support vector machine (SVM) in classification performance under most scenarios, and also outperforms the spectral clustering method for grouping similar malware. In an analysis of real malicious and benign programs, the DPM model gives significantly better classification performance than the ENL model and results competitive with the SVM. More importantly, the DPM model identifies clusters of programs during the classification procedure, which is useful for reverse engineering.
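The model's input is the row-stochastic transition matrix of a first-order Markov chain over the traced instructions. A minimal sketch of estimating that matrix from a dynamic trace (the mnemonics below are made up, and the DPM layer on top is not shown):

```python
from collections import defaultdict

def transition_matrix(trace):
    """Estimate first-order Markov transition probabilities from a
    dynamic trace of instruction mnemonics."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for state, row in counts.items():
        total = sum(row.values())
        probs[state] = {nxt: c / total for nxt, c in row.items()}
    return probs

# A made-up instruction trace; each row of P sums to 1.
trace = ["mov", "add", "mov", "jmp", "mov", "add"]
P = transition_matrix(trace)
```

Each row of `P` is then the data object that the DPM approach would model with a Dirichlet-based mixture.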

6.
This study was conducted to enable prompt classification of malware, which is becoming increasingly sophisticated. To this end, we analyzed the important features of malware and the relative importance of selected features according to a learning model, to assess how those important features are identified. First, analysis features were extracted using Cuckoo Sandbox, an open-source malware analysis tool, and the extracted information was divided into five categories. The 804 extracted features were reduced by 70% by selecting only those most suitable for malware classification with a learning model-based feature selection method, recursive feature elimination. Next, these important features were analyzed, and the contribution of each was assessed with a Random Forest classifier. The results showed that system call features accounted for the largest share. Ultimately, it was possible to accurately identify the malware type using only 36 to 76 features for each of the four malware types with the most analysis samples available: Trojan, Adware, Downloader, and Backdoor.
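Recursive feature elimination repeatedly fits a model and discards the least important feature until the target count remains. A hedged sketch with a toy importance score (absolute difference of per-class feature means, standing in for the Random Forest importances used in the study; all data is made up):

```python
def importance(class0, class1, feat):
    """Toy importance: absolute difference of the feature's per-class means."""
    m0 = sum(row[feat] for row in class0) / len(class0)
    m1 = sum(row[feat] for row in class1) / len(class1)
    return abs(m0 - m1)

def rfe(class0, class1, features, keep):
    """Recursive feature elimination: drop the weakest feature one at a
    time until `keep` features remain."""
    feats = list(features)
    while len(feats) > keep:
        feats.remove(min(feats, key=lambda f: importance(class0, class1, f)))
    return feats

# "syscalls" separates the classes; "entropy" carries no signal here.
benign  = [{"syscalls": 1, "entropy": 5}, {"syscalls": 2, "entropy": 4}]
malware = [{"syscalls": 9, "entropy": 5}, {"syscalls": 8, "entropy": 4}]
kept = rfe(benign, malware, ["syscalls", "entropy"], keep=1)
```

The elimination order, not just the final set, is what reveals each feature's relative contribution.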

7.
Developments in Information and Communication Technology have led to the evolution of new computing and communication environments. The technological revolution around the Internet of Things (IoT) has produced applications in almost all domains, from health care and education to entertainment, using sensors and smart devices. One subset of IoT is the Internet of Medical Things (IoMT), which connects medical devices, hardware, and software applications through the Internet. IoMT enables secure wireless communication over the Internet to allow efficient analysis of medical data. With these smart advancements and the exploitation of smart IoT devices in health care technology, the threat of malware attacks during the transmission of highly confidential medical data increases. This work proposes a scheme that integrates a machine learning approach and blockchain technology to detect malware during data transmission in IoMT. The proposed Machine Learning based Block Chain Technology malware detection scheme (MLBCT-Mdetect) is implemented in three steps: feature extraction, classification, and blockchain. Feature extraction is performed by calculating the weight of each feature and discarding features with low weight. A Support Vector Machine classifier is employed in the second step to classify malware and benign nodes. The third step uses a blockchain to store details of the selected features, which improves the detection of malware with significant gains in speed and accuracy. MLBCT-Mdetect achieves higher accuracy with a low false-positive rate and a high true-positive rate.

8.
Various models which may be used for quantitative assessment of hardware, software and human reliability are compared in this paper. Important comparison criteria are the system life cycle phase in which the model is intended to be used, the failure category and reliability means considered in the model, the model purpose, and model characteristics such as the model construction approach, model output and model input. The main objective is to present limitations in the use of current models for reliability assessment of computer-based safety shutdown systems in the process industry and to provide recommendations on further model development. Main attention is given to presenting the overall concept of various models from a user's point of view rather than the technical details of specific models. A new failure classification scheme is proposed which shows how hardware and software failures may be modelled in a common framework.

9.
Minimizing total costs of forest roads with computer-aided design model
Abdullah E. Akay, Sadhana, 2006, 31(5): 621-633
Advances in personal computers (PCs) have increased interest in computer-based road-design systems that provide rapid evaluation of alternative alignments. Optimization techniques can give road managers a powerful tool that searches large numbers of alternative alignments in short spans of time. A forest road optimization model, integrated with two optimization techniques, was developed to help a forest road engineer evaluate alternative alignments in a faster and more systematic manner. The model aims at designing a path with minimum total road costs while conforming to design specifications, environmental requirements, and driver safety. To monitor the sediment production of the alternative alignments, the average sediment delivered to a stream from a road section was estimated using a road erosion/delivery model. The results indicated that this model has the potential to initiate a new procedure that improves the forest road-design process by employing the advanced hardware and software capabilities of PCs and modern optimization techniques.
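Searching alignments for a minimum-total-cost path can be framed as a shortest-path problem over candidate road segments. A hedged sketch using Dijkstra's algorithm (the graph, the cost figures, and the idea of folding a sediment penalty into edge weights are illustrative assumptions, not the paper's two optimization techniques):

```python
import heapq

def min_cost_path(graph, start, goal):
    """Dijkstra over candidate road segments; each edge weight bundles
    construction cost plus a sediment-delivery penalty."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Made-up segment network: nodes are control points, weights are total costs.
graph = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("D", 5.0)],
    "C": [("B", 1.0), ("D", 8.0)],
    "D": [],
}
path, cost = min_cost_path(graph, "A", "D")
```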

10.
In the present paper, an offshore platform model with sacrificial anode protection was simulated using the boundary element method. The potential and current density were calculated, and the distribution trend of the data was analyzed. To evaluate the computation results, a physical model was built at a given scale. The physical platform model was placed in a marine-environment modeling tank designed to simulate the real marine environment with seawater, and the calculated data were compared with those from the laboratory experiments. This study showed that the boundary element method is a powerful tool for the sacrificial anode protection of marine structures.

11.
A simple software method to derive a linearised output for a constant temperature anemometer (CTA) is presented. The method uses a nonlinear ratiometric-logarithmic function with two parameters whose optimal values are determined by minimising the objective function (mean square error) to improve the linearity of the CTA signal. The covariance matrix adaptation evolution strategy algorithm, which generates optimal values consistently, is employed to determine the optimal values of the linearisation parameters. The proposed linearisation algorithm was implemented using the LabVIEW 7.1 Professional Development System on a personal computer, which provides the facility to interface with the National Instruments data acquisition module PCMCIA-NI DAQCard-6024E. Experimental studies have been carried out using practical air-flow velocity measurement data obtained from the Dantec Dynamics practical guide. Performance measures such as full-scale error and root mean square error are used to compare the proposed method with methods reported for the linearisation of transducers. Experimental results reveal that the proposed evolutionary optimised nonlinear function-based software lineariser works well for the CTA and is suitable for computer-based flow measurement/control systems.
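The paper's exact ratiometric-logarithmic function is not reproduced here; as a hedged stand-in, the sketch below fits the two parameters of King's law (E² = A + B·Uⁿ, a standard hot-wire CTA relation) by brute-force MSE minimisation in place of the evolutionary (CMA-ES) optimiser:

```python
def king_voltage(u, a, b, n=0.5):
    """King's law for a hot-wire CTA: bridge voltage E with E^2 = A + B*U^n."""
    return (a + b * u ** n) ** 0.5

def fit_two_params(us, es, grid, n=0.5):
    """Brute-force search for (A, B) minimising the mean square error --
    a crude stand-in for an evolutionary optimiser like CMA-ES."""
    def mse(a, b):
        return sum((king_voltage(u, a, b, n) - e) ** 2
                   for u, e in zip(us, es)) / len(us)
    return min(((a, b) for a in grid for b in grid), key=lambda ab: mse(*ab))

# Synthetic calibration data generated with known parameters A=1.2, B=0.8.
us = [1.0, 4.0, 9.0, 16.0]
es = [king_voltage(u, 1.2, 0.8) for u in us]
grid = [i / 10 for i in range(1, 21)]   # candidate values 0.1 .. 2.0
best = fit_two_params(us, es, grid)
```

With noise-free synthetic data the search recovers the generating parameters exactly; real calibration data would leave a small residual MSE.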

12.
Android has dominated the smartphone market for more than a decade and has captured 87.8% of the market share. Such popularity has drawn the attention of cybercriminals and malware developers. Malicious applications can steal sensitive information such as contacts, read personal messages, record calls, send messages to premium-rate numbers, cause financial loss, gain access to the gallery, and access the user's geographic location. Numerous surveys on Android security have primarily focused on types of malware attack, their propagation, and techniques to mitigate them. To the best of our knowledge, the Android malware literature has never been explored using information modelling techniques, and contemporary research trends in Android malware research have never been promulgated from a semantic point of view. This paper identifies the intellectual core of the Android malware literature using Latent Semantic Analysis (LSA). An extensive corpus of 843 articles on Android malware and security, published during 2009–2019, was processed using LSA, and the truncated Singular Value Decomposition (SVD) technique was used for dimensionality reduction. Machine learning methods were then deployed to effectively segregate prominent topic solutions with minimal bias. Based on the observed term and document loading matrix values, five core research areas and twenty research trends were identified. Potential future research directions are also detailed to offer a quick reference for information scientists. The study concludes that Android security is crucial for pervasive Android devices. Static analysis is the most widely investigated core area within Android security research and is expected to remain in trend in the near future. The research trends indicate the need for a faster yet effective model to detect Android applications that use obfuscation, mount financial attacks, and steal user information.
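LSA starts from a term-document matrix, to which the truncated SVD is then applied to obtain the loading values. A minimal pure-Python sketch of the first stage, comparing raw term-count vectors by cosine similarity (the documents are made up, and the SVD step itself, e.g. via numpy.linalg.svd, is omitted):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(cnt * b[term] for term, cnt in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Three tiny "abstracts"; the first two share a topic, the third does not.
docs = ["android malware static analysis",
        "static analysis of android apps",
        "forest road design optimization"]
vecs = [Counter(d.split()) for d in docs]   # columns of a term-document matrix
sim_related = cosine(vecs[0], vecs[1])
sim_unrelated = cosine(vecs[0], vecs[2])
```

Truncating the SVD of this matrix is what lets LSA find similarity between documents that share topics but not exact terms.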

13.
Software reliability growth models based on nonhomogeneous Poisson processes are widely adopted tools for describing the stochastic failure behavior and measuring the reliability growth of software systems. The faults that eventually cause the failures are usually connected with each other in complicated ways. Considering a group of networked faults, we propose a new model to examine the reliability of software systems and assess the model's performance on real‐world data sets. Our numerical studies show that the new model, which captures networking effects among faults, fits the failure data well. We also formally study the optimal software release policy using multi‐attribute utility theory (MAUT), considering both a reliability attribute and a cost attribute. We find that, if the networking effects among different layers of faults were ignored by the software testing team, the utility‐maximizing time to release the software package to the market would be much later. A sensitivity analysis is also provided.
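For reference, the classic (non-networked) NHPP growth model that such work builds on is Goel-Okumoto, with mean value function m(t) = a(1 − e^(−bt)). A sketch under that assumption (parameter values are made up; this is not the paper's networked-fault model):

```python
from math import exp

def mean_failures(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t)):
    expected cumulative number of faults detected by test time t."""
    return a * (1.0 - exp(-b * t))

def no_failure_prob(x, t, a, b):
    """P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t))) under the NHPP."""
    return exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))

# Made-up parameters: ~100 total latent faults, detection rate 0.05 per unit time.
a, b = 100.0, 0.05
r_early = no_failure_prob(1.0, t=0.0, a=a, b=b)    # reliability at release time 0
r_late = no_failure_prob(1.0, t=50.0, a=a, b=b)    # reliability after more testing
```

A release policy then trades this growing reliability against the cost of continued testing, which is where the multi-attribute utility enters.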

14.
Because of the growing demand for increasingly complex computer-based systems, there is now an urgent need to provide tools to assist during the design of such systems. Formal specifications and formal methods provide such assistance, but their widespread adoption has been hindered by the so-called ‘math fear’ and the perception that the tools are too difficult, too time-consuming and too costly to use in a commercial environment. The aim of this article is to dispel the mystery surrounding the topic and to explain what formal methods are, how and why they are used, the benefits that accrue, and why the technology should be accepted on a broader front. The application of formal methods to the design of computer-based systems is discussed without resorting to jargon or mathematics. The discussion concentrates more on the software content of systems, but the arguments apply equally well to hardware. Some of the available tools are also introduced.

15.
16.
The superconducting fault current limiter (SFCL) has become one of the most promising current-limiting devices for solving the problem of increasing short-circuit currents in high-voltage power grids. This paper presents a resistive-type SFCL model developed using the simulation software PSCAD/EMTDC. After being verified against a finite-element model and experimental results, the model is used to study the impact of SFCLs on the power grid and the coordination between the SFCL and relay protection in a 10 kV distribution network. A series of simulations is carried out to find appropriate parameters of the SFCL model that cooperate with relay protection devices. The final result in this paper could provide an important quantitative basis for SFCL parameters to be applied in a real power system.
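In system-level studies a resistive SFCL is often represented by a simple first-order quench response. A hedged sketch of one common textbook form, R(t) = R_max·√(1 − e^(−t/τ)); the paper's PSCAD/EMTDC model is more detailed, and all parameter values here are made up:

```python
from math import exp, sqrt

def sfcl_resistance(t, r_max=2.0, tau=0.002):
    """Quench resistance (ohms) of a resistive SFCL, rising from 0 toward
    r_max with time constant tau (seconds) after the fault at t = 0."""
    if t <= 0.0:
        return 0.0
    return r_max * sqrt(1.0 - exp(-t / tau))

# Resistance sampled at 1 ms steps over the first 10 ms of a fault.
samples = [sfcl_resistance(k * 0.001) for k in range(11)]
```

The rise time relative to relay-protection operating times is exactly the coordination question the simulations above investigate.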

17.
Recently, the TLS protocol has been widely used to secure application data carried in network traffic, making it more difficult for attackers to decipher messages by capturing the traffic generated by communicating hosts. On the other hand, malware also adopts the TLS protocol when accessing the Internet, which renders most malware traffic detection methods, such as DPI (Deep Packet Inspection), ineffective. Some studies use statistical methods, extracting the observable data fields exposed in TLS connections to train machine learning classifiers that infer whether a traffic flow is malware or not. However, most of them adopt features based on the complete flow, such as flow duration, and seldom consider that the detection result should be produced as soon as possible. In this paper, we propose MalDetect, a system for encrypted malware traffic detection. MalDetect extracts features from only approximately 8 packets (the number varies between flows) at the beginning of a traffic flow, which makes it capable of detecting malware traffic before the malware behavior takes practical effect. In addition, observing that it is inefficient and time-consuming to re-train an offline classifier when new flow samples arrive, we deploy an Online Random Forest in MalDetect. This enables the classifier to update its parameters in online mode and eliminates the re-training process. MalDetect is written in C++ and open-sourced on GitHub. Furthermore, MalDetect is thoroughly evaluated from three aspects: effectiveness, timeliness, and performance.
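Restricting features to the head of the flow can be sketched as follows (the packet representation and the specific statistics are illustrative assumptions; MalDetect's actual TLS-field features are not shown):

```python
from statistics import mean, pstdev

def early_flow_features(packets, n=8):
    """Features from only the first n packets of a flow.  Each packet is a
    signed size in bytes: positive = outbound, negative = inbound.  Using
    only the flow head lets classification happen before the flow ends."""
    head = packets[:n]
    sizes = [abs(p) for p in head]
    return {
        "n_seen": len(head),
        "mean_size": mean(sizes),
        "std_size": pstdev(sizes),
        "out_ratio": sum(1 for p in head if p > 0) / len(head),
    }

# Made-up flow: the later packets never influence the feature vector.
packets = [100, -1500, 200, -1500, 300, -60, 40, -1500, 999, 999]
feats = early_flow_features(packets)
```

The resulting fixed-length vector is what an online classifier (here, an Online Random Forest) would consume.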

18.
Due to the polymorphic nature of malware, signature-based analysis is no longer sufficient against polymorphic and stealthy malware attacks. On the other hand, state-of-the-art methods such as deep learning require a labelled dataset as a target to train a supervised model. This is unlikely to be available in a production network, as the dataset is unstructured and has no labels; hence unsupervised learning is recommended. Behavioral study is one technique for eliciting traffic patterns. However, studies have shown that existing behavioral intrusion detection models have a few issues, which can be parameterized into common characteristics, namely a lack of prior information (p(θ)) and reduced parameters (θ). Therefore, this study utilizes a previously built Feature Selection Model to design a Predictive Analytics Model based on a Bayesian Network, used to improve the analysis prediction. The Feature Selection Model is used to learn a significant label as the target, and the Bayesian Network is a sophisticated probabilistic approach to predicting intrusion. Finally, the detection rate, accuracy, and false alarm rate of the model are evaluated against a subject-matter-expert model, a Support Vector Machine (SVM), and k-nearest neighbors (k-NN), using simulated and ground-truth datasets. The ground-truth dataset, drawn from the production traffic of one of the largest healthcare providers in Malaysia, is used to promote realism in the real use-case scenario. The results show that the proposed model consistently outperformed the other models.
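The role of the prior p(θ) can be seen in the simplest possible two-node Bayesian network, intrusion → alert, where observing an alert updates the prior by Bayes' rule (all probabilities below are made-up illustrations, not the paper's network):

```python
def posterior_intrusion(prior, p_alert_given_intrusion, p_alert_given_benign):
    """Posterior P(intrusion | alert) by Bayes' rule, starting from the
    prior P(intrusion) = p(theta)."""
    num = p_alert_given_intrusion * prior
    den = num + p_alert_given_benign * (1.0 - prior)
    return num / den

# A rare attack (1% prior) observed through a sensitive but noisy detector.
post = posterior_intrusion(prior=0.01,
                           p_alert_given_intrusion=0.90,
                           p_alert_given_benign=0.05)
```

Even a strong alert leaves the posterior far below certainty when the prior is small, which is why encoding prior information matters for false-alarm rates.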

19.
In the realm of safety‐related systems, a growing number of functions are realized by software, ranging from ‘firmware’ to autonomous decision‐taking software. To support (political) real‐world decision makers, quantitative risk assessment methodology quantifies the reliability of systems. The optimal choice of safety measures with respect to the available budget, for example under the UK ‘as low as reasonably practicable’ approach, requires quantification. If a system contains software, some accepted methods for quantifying software reliability exist, but none of them is generally applicable, as we will show. We propose a model bringing software into the quantitative risk assessment domain by introducing the failure of software modules (with their probabilities) as basic events in a fault tree. The method is known as TOPAAS (Task‐Oriented Probability of Abnormalities Analysis for Software). TOPAAS is a factor model allowing the quantification of the basic ‘software’ events in fault tree analyses. In this paper, we argue that this is the best approach currently available to industry. TOPAAS is a practical model by design and is currently undergoing field testing in risk assessments of programmable electronic safety‐related systems in tunnels and in the control systems of movable storm surge barriers in the Netherlands. The TOPAAS model is constructed to incorporate detailed fields of knowledge and to provide focus toward reliability quantification in the form of a probability measure of mission failure. Our development also provides context for further in‐depth research. Copyright © 2013 John Wiley & Sons, Ltd.
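Once each software module's failure probability is quantified, it enters the fault tree like any other basic event. A minimal sketch of evaluating a hypothetical tree with independent basic events (the tree structure and all numbers are illustrative, not a TOPAAS assessment):

```python
from math import prod

def and_gate(probs):
    """AND gate: all independent basic events must occur."""
    return prod(probs)

def or_gate(probs):
    """OR gate: at least one independent basic event occurs."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical top event: the hardware channel fails, OR both redundant
# software modules fail (module probabilities as a factor model might yield).
p_hw = 1e-4
p_sw_modules = [1e-3, 1e-3]
p_top = or_gate([p_hw, and_gate(p_sw_modules)])
```

Treating software failures as basic events this way is precisely what lets them sit alongside hardware events in a single quantitative tree.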

20.
This paper presents a methodology based on Virtual Reality (VR) for representing a manufacturing system in order to help with requirement analysis (RA) in CIM system development, suitable for SMEs. The methodology can reduce the costs and time involved at this stage by producing precise and accurate plans, specification requirements, and a design for CIM information systems, which are essential for small and medium-scale manufacturing enterprises. Virtual Reality is computer-based and has better visualization effects for representing manufacturing systems than any other graphical user interface, which helps users to collect information and decision needs quickly and correctly. A VR-RA tool is designed and developed as a software system to realize the features outlined in each phase of the methodology. A set of rules and a knowledge base are appended to the methodology to remove any inconsistency that could arise between the material and information flows during the requirement analysis. A novel environment for matching the physical and information model domains is suggested to delineate the requirements.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号