20 similar documents found; search took 15 ms
1.
Navneet Bhatt Adarsh Anand V. S. S. Yadavalli 《Quality and Reliability Engineering International》2021,37(2):648-663
The number of security failures discovered and publicly disclosed is increasing at an unprecedented pace, yet only a small fraction of the vulnerabilities encountered in the operational phase are exploited in the wild. It is difficult to find vulnerabilities during the early stages of the software development cycle, as security aspects are often not adequately known. To counter these security implications, firms usually release patches so that these security flaws cannot be exploited. It is a daunting task for a security manager to prioritize patches for the vulnerabilities most likely to be exploited. This paper fills this gap by applying different machine learning techniques to classify vulnerabilities based on their exploit history. Our work indicates that vulnerability characteristics such as severity, vulnerability type, software configuration, and vulnerability scoring parameters are important features for judging exploitability. Using such methods, it is possible to predict exploit-prone vulnerabilities with an accuracy above 85%. From this experiment, we conclude that a supervised machine learning approach can be a useful technique for predicting exploit-prone vulnerabilities.
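A toy sketch of the kind of feature-based classification the abstract describes: score each vulnerability from a couple of CVSS-like characteristics and flag those above a threshold as exploit-prone. The features, weights, and threshold here are all invented for illustration; the paper's actual approach uses trained machine learning classifiers over many more features.

```python
# Hypothetical feature weights, not taken from the paper.
SEVERITY_W = {"low": 1, "medium": 2, "high": 3}
VECTOR_W = {"local": 1, "network": 3}

def exploit_score(vuln):
    """Combine severity and access-vector features into a single score."""
    return SEVERITY_W[vuln["severity"]] * VECTOR_W[vuln["vector"]]

def classify(vulns, threshold=4):
    """Label each vulnerability exploit-prone if its score meets the threshold."""
    return [exploit_score(v) >= threshold for v in vulns]

sample = [
    {"severity": "high", "vector": "network"},    # score 9 -> exploit-prone
    {"severity": "low", "vector": "local"},       # score 1 -> not
    {"severity": "medium", "vector": "network"},  # score 6 -> exploit-prone
]
print(classify(sample))  # [True, False, True]
```

A real pipeline would replace this hand-tuned scorer with a classifier trained on labeled exploit history.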
2.
Nowadays, the number of software vulnerability incidents and the losses caused by them are growing exponentially. Existing security strategies and vulnerability detection and remediation approaches are not intelligent, automated, or self-managed, and are not competent to combat vulnerabilities and security threats or to provide organizations with a secure, self-managed software environment. Hence, there is a strong need for an intelligent, automated approach that optimizes security and prevents or mitigates vulnerabilities. Autonomic computing is a nature-inspired, self-management-based computational model. In this paper, an autonomic-computing-based integrated framework is proposed to automatically detect, raise an alarm for, assess, classify, prioritize, mitigate, and manage software vulnerabilities. The proposed framework uses a knowledge base and inference engine that automatically takes remediating actions on future occurrences of software security vulnerabilities through self-configuration, self-healing, self-prevention, and self-optimization as needed. The framework benefits industry and society in several respects: it is integrated, cross-concern, and intelligent, and it provides organizations with a more secure, self-managed environment. It reduces security risks and threats as well as monetary and reputational loss. It can be embedded easily in existing software, or incorporated as a built-in component of new software during development.
3.
4.
5.
Vulnerability analysis is the basis of network security technology. Quantitative vulnerability grading methods such as CVSS and WIVSS provide a reference for vulnerability management, but current methods ignore the risk elevation caused by a group of vulnerabilities and evaluate exploitability with low accuracy. To address these problems in current quantitative network security evaluation, this paper verifies the strong correlation between a vulnerability's type and its exploitability score and proposes a new quantitative grading method, ICVSS. ICVSS can explore attack paths using a continuity level defined by privilege, adds vulnerability type to the exploitability metrics, and uses the Analytic Hierarchy Process (AHP) to quantify the influence of vulnerability type on exploitability. Compared with CVSS and WIVSS, ICVSS is shown to discover attack paths consisting of a sequence of vulnerabilities for network security situation evaluation, with greater accuracy and stability.
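The AHP step mentioned above can be sketched as follows: derive priority weights for vulnerability types from a pairwise comparison matrix, using the normalized-column-average approximation of the principal eigenvector. The matrix values below are illustrative judgments, not taken from the paper.

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights: normalize each column,
    then average each row of the normalized matrix."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical pairwise judgments for three vulnerability types:
# m[i][j] > 1 means type i influences exploitability more than type j.
m = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
w = ahp_weights(m)
print(w)  # weights sum to 1; the first type dominates
```

A full AHP treatment would also compute the consistency ratio to check that the judgments are coherent.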
6.
Moatasem M. Draz Marwa S. Farhan Sarah N. Abdulkader M. G. Gafar 《Computers, Materials & Continua》2021,68(2):1919-1935
Software systems have been employed in many fields to reduce human effort; consequently, stakeholders are interested in continually updating their capabilities. Code smells are one of the obstacles in the software industry: characteristics of software source code that indicate a deeper design problem. These smells appear not only in the design but also in the implementation. Code smells introduce bugs, hurt software maintainability, and lead to higher maintenance costs. Uncovering code smells can be formulated as an optimization problem of finding the best detection rules. Although researchers have recommended different techniques to improve the accuracy of code smell detection, these methods remain unstable and need improvement. Previous research has sought to discover only a few smell types at a time (three or five) and did not define rules for detecting each type. Our research improves code smell detection by applying a search-based technique: we use the Whale Optimization Algorithm as a classifier to find ideal detection rules. The Fisher criterion is used as the fitness function, maximizing the between-class distance relative to the within-class variance. The proposed framework adopts if-then detection rules during the software development life cycle and identifies smell types in both medium and large projects. Experiments are conducted on five open-source software projects to discover the nine smell types that appear most often in code. The proposed detection framework achieves an average precision of 94.24% and recall of 93.4%, better than other search-based algorithms in the field. The framework improves code smell detection, which increases software quality while minimizing maintenance effort, time, and cost. Additionally, the resulting classification rules are analyzed to find the software metrics that differentiate the nine code smells.
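The Fisher criterion used as the fitness function can be illustrated directly: it is the squared between-class distance divided by the sum of within-class variances, so a higher value means a metric separates the two classes better. The 1-D metric values below are invented toy data, not from the study.

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Population variance around the mean."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_criterion(class_a, class_b):
    """Squared between-class distance over total within-class variance."""
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# Hypothetical metric values for smelly vs. clean code samples.
smelly = [8.0, 9.0, 10.0]
clean = [1.0, 2.0, 3.0]
print(fisher_criterion(smelly, clean))  # 36.75: well-separated classes
```

A search-based detector would maximize this value over candidate detection rules or feature subsets.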
7.
HyunChul Joh Yashwant K. Malaiya 《Quality and Reliability Engineering International》2014,30(8):1445-1459
A vulnerability discovery model attempts to model the rate at which vulnerabilities are discovered in a software product. Recent studies have shown that the S-shaped Alhazmi–Malaiya Logistic (AML) vulnerability discovery model often fits better than other models and demonstrates superior prediction capabilities for several major software systems. However, the AML model is based on the logistic distribution, which assumes a symmetrical discovery process with a peak in the center. Hence, when the discovery process does not follow a symmetrical pattern, a discovery model based on an asymmetrical distribution might perform better. Here, the relationship between the performance of S-shaped vulnerability discovery models and the skewness of target vulnerability datasets is examined. To study the possible dependence on skew, alternative S-shaped models based on the Weibull, Beta, Gamma, and Normal distributions are introduced and evaluated. The models are fitted to data from eight major software systems. Their applicability is examined using two separate approaches: a goodness-of-fit test to see how well the models track the data, and prediction capability measured by average error and average bias. It is observed that an excellent goodness of fit does not necessarily imply superior prediction capability. The results show that, when prediction capability is considered, all right-skewed datasets are represented better by the Gamma distribution-based model. The symmetrical models tend to predict better for left-skewed datasets; among them, the AML model is the best. Copyright © 2013 John Wiley & Sons, Ltd.
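For reference, the AML model's cumulative vulnerability count follows the logistic form Ω(t) = B / (B·C·e^(−ABt) + 1), where B is the total number of vulnerabilities eventually found. A minimal sketch with illustrative parameters (not fitted to any of the eight systems studied) shows the symmetric, S-shaped, saturating behavior the abstract refers to:

```python
import math

def aml(t, A=0.002, B=100.0, C=0.5):
    """Cumulative vulnerabilities discovered by time t under the AML model.
    A, B, C are illustrative parameters, not fitted values."""
    return B / (B * C * math.exp(-A * B * t) + 1)

counts = [aml(t) for t in range(0, 60, 10)]
print(counts)  # monotonically increasing, saturating toward B = 100
```

An asymmetric alternative such as the Gamma-based model replaces this logistic curve with one whose inflection point need not sit at the center of the discovery process.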
8.
In recent years, the number of exposed vulnerabilities has grown rapidly, and more and more attacks use these vulnerabilities, for example via malware, to intrude on target computers. Malware detection has attracted increasing attention yet still faces severe challenges. Because malware detection based on traditional machine learning relies on experts' experience to design features that distinguish malware families, it creates a bottleneck in feature engineering, and finding efficient features is time-consuming. Owing to its promising ability to automatically propose and select significant features, deep learning has gradually become a research hotspot. In this paper, aiming to detect malicious payloads and identify their categories with high accuracy, we propose a packet-based malicious payload detection and identification algorithm built on an object detection deep learning network. A dataset of malicious payloads for code execution vulnerabilities was constructed with the Metasploit framework and used to evaluate the performance of the proposed algorithm. The experimental results demonstrate that the proposed object detection network can efficiently find and identify malicious payloads with high accuracy.
9.
Android applications handle large amounts of sensitive data, so application developers use cryptographic algorithms to provide user data encryption, authentication, and data integrity protection. However, application developers often lack cryptographic expertise, so these algorithms may be used incorrectly, producing security vulnerabilities. Building on previous studies, this paper summarizes the characteristics of cryptographic misuse vulnerabilities in Android application software, establishes an evaluation model to rate the security risk of such vulnerabilities, and develops a repair strategy for them. On this basis, the paper designs and implements a secure container for cryptographic misuse vulnerabilities in Android applications: CM-Droid.
10.
Yogita Kansal Parmod Kumar Kapur Uday Kumar 《Quality and Reliability Engineering International》2019,35(1):62-73
The trend of software vulnerabilities over time has been modeled by various researchers and academicians in recent years, but none of them have considered an operational coverage function in vulnerability discovery modeling. In this paper, we propose a generalized statistical model that relates the operational coverage function to the expected number of vulnerabilities. During the operational phase, possible vulnerable sites are covered, and the vulnerabilities present at a given site are discovered with some probability. We assume that the proposed model follows nonhomogeneous Poisson process (NHPP) properties; thus, different distributions are used to formulate it. A numerical illustration shows that the proposed model performs better and fits the Google Chrome data well. The second focus of this paper is to evaluate the total cost incurred by the developer after software release and to identify the optimal vulnerability disclosure time through a multiobjective utility function. The proposed vulnerability discovery model aids this optimization; the optimal-time problem depends on the combined effect of cost, risk, and effort.
11.
Juan R. Bermejo Higuera Javier Bermejo Higuera Juan A. Sicilia Montalvo Javier Cubo Villalba Juan José Nombela Pérez 《Computers, Materials & Continua》2020,64(3):1555-1577
To detect security vulnerabilities in a web application, the security analyst must choose the best-performing static analysis security tool (SAST), in terms of discovering as many security vulnerabilities as possible. Comparing static analysis tools for web applications requires a benchmark adapted to the vulnerability categories of the well-known Open Web Application Security Project (OWASP) Top Ten. Information on the security effectiveness of commercial static analysis tools is not usually publicly accessible, and the state of the art shows that differences in the design and implementation of these tools lead to different effectiveness rates. Given the significant cost of commercial tools, this paper studies the performance of seven static tools using a newly proposed methodology and a new benchmark designed for the vulnerability categories of the OWASP Top Ten. Practitioners will thus have more precise information for selecting the best tool, using a benchmark adapted to the latest versions of the OWASP Top Ten. The results of this work were obtained using widely accepted metrics and are classified according to three degrees of web application criticality.
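The widely accepted metrics for such benchmarks are typically precision, recall, and F1 over a tool's findings against the benchmark's known vulnerabilities. A small sketch with invented counts (not results from the seven tools studied):

```python
def score(tp, fp, fn):
    """Precision, recall, and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical tool: 80 benchmark flaws found, 20 false alarms, 40 missed.
p, r, f = score(tp=80, fp=20, fn=40)
print(p, r, f)  # 0.8, ~0.667, ~0.727
```

Ranking tools by F1 balances the cost of false alarms against missed vulnerabilities, which matters more as application criticality rises.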
12.
13.
14.
15.
Urban dwellers are increasingly vulnerable to failures of the technological systems that supply them with goods and services. Extant techniques for analyzing those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of "exposure," which is shown to represent the vulnerability contributed by the technological system to the end user. The measure thus contributes to the theory and practice of vulnerability reduction. The results suggest both specific and general conclusions.
16.
Code defects can lead to software vulnerabilities and even produce vulnerability risks. Existing research shows that code detection based on text analysis can, to some extent, judge whether object-oriented code files are defective. However, these detection techniques rely mainly on text features and generalize poorly across programs. In contrast to the variability of code-as-text introduced by each developer's personal style, a programming language has a strict logical specification, reflecting both the rules and requirements of the language itself and the developer's underlying way of thinking. This article replaces text analysis with programming-logic modeling, breaking the limitation of code text analysis that relies solely on the probability of sentence/word occurrence in the code. It proposes an object-oriented programming-logic construction method based on method constraint relationships, selects features using hypothesis testing, and constructs a support vector machine classifier to detect defective class files, reducing the impact of personalized programming styles on the detection method. In the experiments, representative Android applications were selected to test and compare the proposed method. In cross-validated code defect detection accuracy, both the proposed method and the leading existing methods average above 90%. In cross-program detection, the proposed method outperforms the two leading methods in accuracy, recall, and F1 score.
17.
18.
Brian Martin 《Technology in Society》1996,18(4):511-523
Technological vulnerability refers to the chance that a technological system may fail due to outside impacts. The usual approaches to studying technological risk are not very useful for studying vulnerabilities of major systems such as energy, communication, or defense. Analyzing the relation of interest groups to vulnerabilities can be illuminating. In some cases, groups have an interest in maintaining practices that cause vulnerabilities; in other cases, groups have an interest in maintaining the vulnerabilities themselves. The latter cases are especially difficult to deal with, since they challenge prevailing belief systems.
19.
Identifying and Verifying Vulnerabilities through PLC Network Protocol and Memory Structure Analysis
Joo-Chan Lee Hyun-Pyo Choi Jang-Hoon Kim Jun-Won Kim Da-Un Jung Ji-Ho Shin Jung-Taek Seo 《Computers, Materials & Continua》2020,65(1):53-67
Cyberattacks on Industrial Control Systems (ICS) have recently been increasing, made more intelligent by advancing technologies, so cybersecurity for such systems is attracting attention. As a core control device, the Programmable Logic Controller (PLC) in an ICS carries out on-site control of the ICS. A cyberattack on the PLC damages the overall ICS, with Stuxnet and Duqu as the most representative cases. Thus, cybersecurity for PLCs is considered essential, and many researchers carry out a variety of analyses of PLC vulnerabilities as a preemptive effort against attacks. In this study, a vulnerability analysis was conducted on the XGB PLC. Security vulnerabilities were identified by analyzing the PLC's network protocols and memory structure, and were then used to launch a replay attack, a memory modulation attack, and FTP/web service account theft to verify the results. These attacks were shown to be able to make the PLC malfunction and disable it, and the identified vulnerabilities were documented.
20.
To address the security, deceptiveness, and interactivity problems of network deception, a five-layer deception and control architecture based on a deep-deception strategy is proposed, and on this basis a practically deployable active network defense system is built. The system integrates protection, deception, monitoring, control, and auditing. Under secure conditions, it deceives and controls the entire network intrusion process through network service deception, forged security vulnerabilities, operational behavior control, file system deception, and information deception. Its complete deception and control framework targets more than a single attack process: it also ensures that the system cannot easily be identified, greatly improving both the degree of deception and security.