Similar Documents
20 similar documents found (search time: 15 ms)
1.
Context: Passive testing is a technique in which traces collected from the execution of a system under test are examined for evidence of flaws in the system. Objective: In this paper we present a method for detecting the presence of security vulnerabilities by detecting evidence of their causes in execution traces. This is a new approach to security vulnerability detection. Method: Our method uses formal models of vulnerability causes, known as security goal models and vulnerability detection conditions (VDCs). The former are used to identify the causes of vulnerabilities and model their dependencies, and the latter to give a formal interpretation that is suitable for vulnerability detection using passive testing techniques. We have implemented modeling tools for security goal models and vulnerability detection conditions, as well as TestInv-Code, a tool that checks execution traces of compiled programs for evidence of VDCs. Results: We present the full definitions of security goal models and vulnerability detection conditions, as well as structured methods for creating both. We describe the design and implementation of TestInv-Code. Finally we show results obtained from running TestInv-Code to detect typical vulnerabilities in several open source projects. By testing versions with known vulnerabilities, we can quantify the effectiveness of the approach. Conclusion: Although the current implementation has some limitations, passive testing for vulnerability detection works well, and using models as the basis for testing ensures that users of the testing tool can easily extend it to handle new vulnerabilities.
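
A vulnerability detection condition of the kind described above can be pictured as a predicate checked passively against each event of a recorded trace. The sketch below assumes a hypothetical event format and a made-up VDC (a strcpy call without a prior length check); it is not TestInv-Code's actual representation.

    # Minimal sketch of passive, VDC-style trace checking (hypothetical event format).
    # A VDC is modeled here as a triggering event plus a condition that makes it
    # dangerous; a match is reported as evidence of a vulnerability cause.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    Event = Dict[str, object]   # e.g. {"op": "call", "callee": "strcpy", "checked_len": False}

    @dataclass
    class VDC:
        name: str
        triggers: Callable[[Event], bool]   # the action of interest
        violates: Callable[[Event], bool]   # the condition that makes it dangerous

    def check_trace(trace: List[Event], vdcs: List[VDC]) -> List[Tuple[int, str]]:
        """Scan a recorded execution trace and report (position, VDC name) matches."""
        findings = []
        for i, event in enumerate(trace):
            for vdc in vdcs:
                if vdc.triggers(event) and vdc.violates(event):
                    findings.append((i, vdc.name))
        return findings

    # Hypothetical VDC: strcpy called without a prior length check on the source buffer.
    unchecked_strcpy = VDC(
        name="unchecked strcpy",
        triggers=lambda e: e.get("op") == "call" and e.get("callee") == "strcpy",
        violates=lambda e: not e.get("checked_len", False),
    )

    trace = [
        {"op": "call", "callee": "strlen", "checked_len": True},
        {"op": "call", "callee": "strcpy", "checked_len": False},
    ]
    print(check_trace(trace, [unchecked_strcpy]))   # -> [(1, 'unchecked strcpy')]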

2.
Input validation vulnerabilities are common in Android apps, especially in inter-component communications. Malicious attacks can exploit this kind of vulnerability to bypass the Android security mechanism and compromise the integrity, confidentiality and availability of Android devices. However, so far there is no sound source-code-level approach that helps app developers detect input validation vulnerabilities in Android apps. In this paper, we propose a novel approach for detecting input validation flaws in Android apps and we implement a prototype named EasyIVD, which provides practical static analysis of Java source code. EasyIVD leverages backward program slicing to extract transaction and constraint slices from Java source code. Then EasyIVD validates these slices against predefined security rules to detect vulnerabilities of known patterns. To detect vulnerabilities of unknown patterns, EasyIVD extracts implicit security specifications as frequent patterns from the duplicated slices and verifies them. Then EasyIVD semi-automatically confirms the suspicious rule violations and reports the confirmed ones as vulnerabilities. We evaluate EasyIVD on four versions of original Android apps spanning from version 2.2 to 5.0. It detects 58 vulnerabilities including confused deputy attacks and denial of service attacks. Our results show that EasyIVD can provide a practical defensive solution for app developers.
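
The "validate slices against predefined security rules" step can be illustrated as a simple pattern check: a rule names a sink and a validation that must appear somewhere in the backward slice reaching that sink. The slice and rule formats below are assumptions for illustration, not EasyIVD's real data model; URLUtil.isValidUrl is used only as an example check.

    # Minimal sketch of rule-based validation of program slices (hypothetical slice
    # and rule formats).
    from typing import List, NamedTuple

    class Slice(NamedTuple):
        sink: str                 # e.g. "WebView.loadUrl"
        statements: List[str]     # backward slice leading to the sink

    class Rule(NamedTuple):
        sink: str
        required_check: str       # a validation that must appear somewhere in the slice

    def violations(slices: List[Slice], rules: List[Rule]) -> List[Slice]:
        """Report slices whose sink is covered by a rule but which lack the required check."""
        out = []
        for s in slices:
            for r in rules:
                if s.sink == r.sink and not any(r.required_check in st for st in s.statements):
                    out.append(s)
        return out

    rules = [Rule(sink="WebView.loadUrl", required_check="URLUtil.isValidUrl")]
    slices = [Slice(sink="WebView.loadUrl",
                    statements=['String u = intent.getStringExtra("url")',
                                'webView.loadUrl(u)'])]
    print(violations(slices, rules))   # the slice is reported: no URL validation before the sink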

3.
International Journal of Information Security - Memory corruption is a serious class of software vulnerabilities, which requires careful attention to be detected and removed from applications...

4.
Auto-parallelizing compilers for embedded applications have been unsuccessful due to the widespread use of pointer arithmetic and the complex memory model of multiple-address-space digital signal processors (DSPs). This work develops, for the first time, a complete auto-parallelization approach that overcomes these issues. It first combines a pointer conversion technique with a new modulo elimination transformation for program recovery, enabling later parallelization stages. Next, it integrates a novel data transformation technique that exposes the processor location of partitioned data. When this is combined with a new address resolution mechanism, it generates efficient programs that run on multiple address spaces without using message passing. Furthermore, as DSPs do not possess any data cache structure, an optimization is presented which transforms the program to exploit both remote data locality and local memory bandwidth. This parallelization approach is applied to the DSPstone and UTDSP benchmark suites, giving an average speedup of 3.78 on four Analog Devices TigerSHARC TS-101 processors.
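
The address resolution step is essentially a mapping from a global array index to the processor that owns it and the offset inside that processor's local memory. A minimal sketch for a block distribution, with illustrative sizes (the paper's actual data transformation is more general):

    # Minimal sketch of resolving a global array index to (processor, local offset)
    # under a block data partition, the kind of address resolution needed when
    # partitioned data lives in separate address spaces.
    from typing import Tuple

    def block_owner(global_index: int, n_elements: int, n_procs: int) -> Tuple[int, int]:
        """Map a global index to (owning processor, local index) under a block distribution."""
        block = (n_elements + n_procs - 1) // n_procs   # ceiling division: elements per processor
        return global_index // block, global_index % block

    # Example: a 1000-element array partitioned over 4 DSP address spaces.
    for g in (0, 249, 250, 999):
        print(g, "->", block_owner(g, 1000, 4))
    # 0 -> (0, 0), 249 -> (0, 249), 250 -> (1, 0), 999 -> (3, 249)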

5.
The development of reliable software for industrial critical systems benefits from the use of formal models and verification tools for detecting and correcting errors as early as possible. Ideally, with a complete model-based methodology, the formal models should be the starting point to obtain the final reliable code and the verification step should be done over the high-level models. However, this is not the case for many projects, especially when integrating existing code. In this paper, we describe an approach to verify concurrent C code by automatically extracting a high-level formal model that is suitable for analysis with existing tools. The basic components of our approach are: (1) a method to construct a labeled transition system from the source code that takes control flow and interaction among processes into account; (2) a modeling scheme for the behavior that is external to the program, namely the functionality provided by the operating system; (3) the use of demand-driven static analyses to make a further abstraction of the program, thus saving time and memory during its verification. The whole proposal has been implemented as an extension of the CADP toolbox, which already provides a variety of analysis modules for several input languages using labeled transition systems as the core model. The approach taken fits well within the existing architecture of CADP, which does not need to be altered to enable C program verification. We illustrate the use of the extended CADP toolbox by considering examples of the VLTS benchmark suite and C implementations of various concurrent programs.
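
The core model extracted from the code is a labeled transition system: states plus labeled transitions, over which reachability and other analyses run. A minimal sketch with an illustrative two-state example (not generated from any real C program):

    # Minimal sketch of a labeled transition system (LTS) and a reachability query,
    # the core model that toolboxes such as CADP analyse.
    from collections import deque
    from typing import Dict, List, Set, Tuple

    class LTS:
        def __init__(self, initial: str):
            self.initial = initial
            self.edges: Dict[str, List[Tuple[str, str]]] = {}   # state -> [(label, next_state)]

        def add(self, src: str, label: str, dst: str) -> None:
            self.edges.setdefault(src, []).append((label, dst))

        def reachable(self) -> Set[str]:
            """States reachable from the initial state (breadth-first)."""
            seen, todo = {self.initial}, deque([self.initial])
            while todo:
                s = todo.popleft()
                for _, nxt in self.edges.get(s, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return seen

    # A single lock modelled abstractly: "lock" / "unlock" labels on transitions.
    lts = LTS("idle")
    lts.add("idle", "lock", "critical")
    lts.add("critical", "unlock", "idle")
    print(sorted(lts.reachable()))   # ['critical', 'idle']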

6.
Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or “tweets”. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. “Toyota”) and user-generated tags (e.g. “#irony”). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.
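
Surface-level textual features of the kind usable for irony detection in tweets can be computed directly from the raw text. The features below (punctuation, capitalisation, emoticons, quotation, tell-tale hashtags) are generic examples of such cues, not the exact feature set defined in the paper:

    # Minimal sketch of surface-level irony cues extracted from a short text.
    import re

    def irony_features(tweet: str) -> dict:
        text = tweet.lower()
        return {
            "exclamations": tweet.count("!"),
            "question_marks": tweet.count("?"),
            "ellipsis": text.count("..."),
            "all_caps_words": sum(1 for w in tweet.split() if w.isupper() and len(w) > 1),
            "emoticons": len(re.findall(r"[:;]-?[)(dp]", text)),
            "quoted_spans": len(re.findall(r'"[^"]+"', tweet)),
            "irony_hashtag": int("#irony" in text or "#sarcasm" in text),
        }

    print(irony_features('Oh GREAT, another "reliable" Toyota... #irony :)'))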

7.
Malware detection is still an open problem. There are numerous attacks that take place every day where malware is used to steal private information, disrupt services, or sabotage industrial systems. In this paper, we combine three kinds of contextual information, namely static, dynamic, and instruction-based, for malware detection. This leads to the definition of more than thirty thousand features, a large feature set that covers a wide range of a sample's characteristics. Through experiments with one million files, we show that this feature set leads to machine-learning-based models that can detect both malware seen roughly at the time when the models are built, and malware first seen even months after the models were built (i.e., the detection models remain effective months ahead of time). This may be due to the comprehensiveness of the feature set.
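
Combining the three feature contexts amounts to merging per-context feature maps into one vector and training any standard classifier on labelled samples. The sketch below uses illustrative feature names, toy samples, and a random forest as an arbitrary classifier choice; none of these are prescribed by the paper.

    # Minimal sketch of combining static, dynamic, and instruction-based features
    # into one vector and training a classifier on labelled samples.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.ensemble import RandomForestClassifier

    def combine(static_f: dict, dynamic_f: dict, instr_f: dict) -> dict:
        # Prefix each group so the three contexts stay distinguishable after merging.
        merged = {}
        for prefix, feats in (("static", static_f), ("dyn", dynamic_f), ("instr", instr_f)):
            merged.update({f"{prefix}:{k}": v for k, v in feats.items()})
        return merged

    samples = [
        combine({"imports_crypto": 1}, {"writes_registry": 1}, {"xor_loops": 3}),
        combine({"imports_crypto": 0}, {"writes_registry": 0}, {"xor_loops": 0}),
    ]
    labels = [1, 0]   # 1 = malware, 0 = benign (toy labels)

    vec = DictVectorizer()
    X = vec.fit_transform(samples)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    print(clf.predict(vec.transform([combine({"imports_crypto": 1},
                                             {"writes_registry": 1},
                                             {"xor_loops": 2})])))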

8.
9.
Although detecting highlights in films is a trivial task for humans, previous studies have not determined whether a computer can be equipped with this capability. In this paper, we present a content-based system that automatically detects highlight scenes and predicts highlight scores in action movies. In particular, high-level image attributes and an early event detection approach are applied. Unlike current learning-based approaches that model the relationship between the whole highlight and the corresponding audiovisual features, the proposed system studies the temporal changes of a set of general features from a nonhighlight to a highlight scene. The experimental results indicate that the highlight detection task is technically feasible. They also provide critical insights into the feasibility of solving this challenging problem. For example, both audio and visual features are crucial, and the filming style can be captured using high-level image attributes, which further improve the overall detection performance.
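
Modelling the temporal change from non-highlight to highlight can be as simple as comparing feature statistics in the shots leading into a candidate scene with those of the preceding shots. The window length and feature values below are illustrative only:

    # Minimal sketch of a temporal-change feature: the difference between recent
    # shot features and the earlier context.
    from typing import List

    def temporal_change(features: List[List[float]], window: int = 3) -> List[float]:
        """Difference between the mean of the last `window` shots and the preceding ones."""
        if len(features) <= window:
            raise ValueError("need more shots than the window length")
        dims = len(features[0])
        recent, earlier = features[-window:], features[:-window]
        mean = lambda rows, d: sum(r[d] for r in rows) / len(rows)
        return [mean(recent, d) - mean(earlier, d) for d in range(dims)]

    # Per-shot [audio_energy, motion_intensity] leading into an action scene.
    shots = [[0.2, 0.1], [0.2, 0.1], [0.3, 0.2], [0.7, 0.8], [0.8, 0.9], [0.9, 0.9]]
    print(temporal_change(shots))   # large positive deltas suggest a rising highlight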

10.
11.
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure exercising the logic of each advice - statements, branches, and def-use pairs - at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach.
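
Once def-use pairs for an advice have been derived from a PCDU-like model, coverage measurement reduces to checking which pairs are exercised by recorded executions. The sketch below uses hypothetical node identifiers and a simplified ordering check (real def-use coverage also requires a definition-clear path between the def and the use):

    # Minimal sketch of def-use pair coverage measurement from an executed node trace.
    from typing import List, Set, Tuple

    def covered_pairs(trace: List[str], du_pairs: Set[Tuple[str, str]]) -> Set[Tuple[str, str]]:
        """Simplified check: a (def, use) pair counts as covered when the def node
        appears in the trace before the use node."""
        covered = set()
        for d, u in du_pairs:
            if d in trace and u in trace and trace.index(d) < trace.index(u):
                covered.add((d, u))
        return covered

    pairs = {("advice:x=def", "advice:use(x)"), ("advice:y=def", "advice:use(y)")}
    trace = ["joinpoint:call", "advice:x=def", "advice:branch", "advice:use(x)"]
    hit = covered_pairs(trace, pairs)
    print(f"def-use pair coverage: {len(hit)}/{len(pairs)}")   # 1/2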

12.
Malware is now one of the most serious problems facing modern societies. Although signature-based malicious code detection is the standard technique in all commercial antivirus software, it can only detect a virus once it has already caused damage and has been registered; it therefore fails to detect new (unknown) malware. Since most malware exhibits similar behavior, a behavior-based method can detect unknown malware. The behavior of a program can be represented by the set of API (application programming interface) calls it makes, so a classifier can be employed to construct a learning model from the API calls of a set of programs. Finally, an intelligent malware detection system is developed to detect unknown malware automatically. In addition, the control flow graph (CFG) is an appealing representation model for visualizing the structure of executable files, capturing another semantic aspect of programs. This paper presents a robust semantics-based method to detect unknown malware based on the combination of the CFG representation and the extracted API calls. The main contribution of this paper is extracting the CFG from programs and combining it with the extracted API calls to obtain more information about executable files; this new representation model is called API-CFG. In addition, to make the learning and classification process fast, the control flow graphs are converted into a set of feature vectors. Our approach is capable of classifying unseen benign and malicious code with high accuracy. The results show a statistically significant improvement over n-gram-based detection methods.
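
One simple way to flatten a CFG annotated with API calls into a feature vector is to count individual API calls together with API-to-API transitions along CFG edges. This encoding is an illustration of the API-CFG idea, not the paper's exact construction:

    # Minimal sketch of turning a CFG whose basic blocks are annotated with API
    # calls into a flat feature vector (API counts + API-to-API edge counts).
    from collections import Counter
    from typing import Dict, List, Tuple

    def api_cfg_features(blocks: Dict[str, List[str]],
                         edges: List[Tuple[str, str]]) -> Counter:
        feats = Counter()
        for apis in blocks.values():
            feats.update(f"api:{a}" for a in apis)
        for src, dst in edges:
            for a in blocks.get(src, []):
                for b in blocks.get(dst, []):
                    feats[f"edge:{a}->{b}"] += 1
        return feats

    blocks = {"b0": ["CreateFileA"], "b1": ["WriteFile"], "b2": ["CloseHandle"]}
    edges = [("b0", "b1"), ("b1", "b2")]
    print(api_cfg_features(blocks, edges))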

13.
Unified Approach for Developing Efficient Algorithmic Programs
A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by a recurrence and an initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which makes formal derivation and proof easier. After obtaining an efficient and correct algorithm, a trivial transformation is used to get the final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration. The techniques of partition and recurrence are not new: partition is a general approach for dealing with complicated objects and is typically used in the divide-and-conquer approach, while recurrence is used in algorithm analysis, in developing loop invariants, and in the dynamic programming approach. The main contribution is combining these two techniques into a unified and systematic approach to developing general, efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.
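
As a worked illustration of partition-and-recur (our example, not one from the paper), consider the maximum segment sum problem: partition segments by their last position, derive the recurrence e(0) = a[0], e(i) = max(e(i-1) + a[i], a[i]) with answer max_i e(i), and then transform the recurrence trivially into a loop program:

    # The recurrence above transformed into a loop program.
    def max_segment_sum(a):
        """Loop program obtained from the recurrence e(i); `best` tracks max_i e(i)."""
        best = ending_here = a[0]
        for x in a[1:]:
            ending_here = max(ending_here + x, x)   # the recurrence for e(i)
            best = max(best, ending_here)
        return best

    print(max_segment_sum([3, -4, 5, -1, 2, -6, 4]))   # 6 (the segment 5, -1, 2)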

14.
15.
Several styles and notations for representing concurrent programs are briefly explained and related to each other. It is demonstrated how the different language concepts found in concurrent programming conceptually evolve from classical styles of functional and applicative programming.

16.
Concurrent techniques have been widely adopted in software systems, and data races have become a great threat to the stability and security of concurrent systems. Previous precise race detection techniques either may miss many races or are only suitable for specific programs, such as programs executed in a virtual machine rather than on actual hardware. To solve these problems, this paper introduces a dynamic predictive race detector, called LayDetect, which detects predictable races in C/C++ programs. LayDetect applies an innovative layering technique, which can detect more races than other detectors such as FastTrack. We have implemented and evaluated LayDetect with well-known benchmarks and real-world applications. LayDetect detected 3.7 M races at run time, two orders of magnitude more than FastTrack, while the average slowdown (3.0×) and space overhead (34.1 MB) of LayDetect are similar to those of FastTrack.
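
For background, a happens-before race detector in the FastTrack family can be sketched with vector clocks over a recorded event trace. The sketch below checks write-write races only and does not reproduce LayDetect's layering technique; the event format is an assumption.

    # Minimal sketch of a happens-before (vector clock) write-write race check.
    # Events: ("acq"/"rel", thread, lock) or ("wr", thread, variable).
    from collections import defaultdict

    def find_races(trace):
        vc    = defaultdict(lambda: defaultdict(int))   # thread -> vector clock
        locks = defaultdict(lambda: defaultdict(int))   # lock   -> vector clock
        last_write = {}                                 # var -> (thread, clock component at write)
        races = []
        for kind, t, obj in trace:
            vc[t][t] = max(vc[t][t], 1)                 # lazily start each thread's clock at 1
            if kind == "acq":                           # join the lock's clock into the thread's
                for u, c in locks[obj].items():
                    vc[t][u] = max(vc[t][u], c)
            elif kind == "rel":                         # publish the thread's clock via the lock
                for u, c in vc[t].items():
                    locks[obj][u] = max(locks[obj][u], c)
                vc[t][t] += 1
            elif kind == "wr":
                if obj in last_write:
                    u, c = last_write[obj]
                    if u != t and vc[t][u] < c:         # previous write not ordered before this one
                        races.append((obj, u, t))
                last_write[obj] = (t, vc[t][t])
        return races

    # t1 writes x under lock m; t2 writes x without synchronisation -> race on x.
    trace = [("acq", "t1", "m"), ("wr", "t1", "x"), ("rel", "t1", "m"),
             ("wr", "t2", "x")]
    print(find_races(trace))   # [('x', 't1', 't2')]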

17.
Multicore and multi-threaded processors have become the norm for modern processors. Accordingly, concurrent programs have become more and more prevalent despite being difficult to write and understand. Although errors are highly likely to appear in concurrent code, conventional error detection methods such as model checking, theorem proving, and code analysis do not scale smoothly to concurrent programs. Testing is an indispensable technique for detecting concurrency errors, but it involves a great deal of manual work and is inefficient. This paper presents an automatic method for detecting concurrency errors in classes in object-oriented languages. The method uses a heuristic algorithm to automatically generate test cases that can effectively trigger errors. Then, each test case is executed automatically and a fast method is adopted to identify the actual concurrency error from anomalous run results. We have implemented a prototype of the method and applied it to some typical Java classes. Evaluation shows that our method is more effective and faster than previous work.
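
A generated concurrency test typically has the shape shown below: invoke methods of the class under test from parallel threads, repeat many times, and flag runs whose outcome differs from a sequential oracle. The racy counter class and the oracle are illustrative assumptions; a real generator would also vary the method sequences and arguments, and the race may not manifest on every run.

    # Minimal sketch of the shape of a generated concurrency test.
    import threading

    class Counter:                      # deliberately racy: read-modify-write without a lock
        def __init__(self):
            self.value = 0
        def increment(self, times):
            for _ in range(times):
                v = self.value
                self.value = v + 1

    def run_test(times=100_000, repetitions=20):
        anomalies = 0
        for _ in range(repetitions):
            c = Counter()
            threads = [threading.Thread(target=c.increment, args=(times,)) for _ in range(2)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            if c.value != 2 * times:    # oracle: sequential execution would give 2 * times
                anomalies += 1
        return anomalies

    print("anomalous runs:", run_test())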

18.
James Stanier, Des Watson. Software, 2012, 42(1): 117-130
Compilers use a variety of techniques to optimize and transform loops. However, many of these optimizations do not work when the loop is irreducible. Node splitting techniques transform irreducible loops into reducible loops, but many real-world compilers choose to leave them unoptimized. This article describes an empirical study of irreducibility in current versions of open-source software, and then compares them with older versions. We also study machine-generated C code from a number of software tools. We find that irreducibility is extremely rare, and is becoming less common with time. We conclude that leaving irreducible loops unoptimized is a perfectly feasible future-proof option due to the rarity of its occurrence in non-trivial software. Copyright © 2011 John Wiley & Sons, Ltd.
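
Whether a flow graph is irreducible can be tested with the classic T1/T2 reduction: repeatedly remove self-loops (T1) and merge a node into its unique predecessor (T2); the graph is reducible iff it collapses to a single node. The example graphs below are textbook cases, not drawn from the article's corpus:

    # Minimal sketch of a T1/T2 reducibility test on a control flow graph.
    def is_reducible(succ, entry):
        """succ: node -> iterable of successor nodes. Returns True iff reducible."""
        g = {n: set(s) for n, s in succ.items()}
        for n in list(g):                        # make sure every mentioned node has an entry
            for s in g[n]:
                g.setdefault(s, set())
        changed = True
        while changed and len(g) > 1:
            changed = False
            for n in list(g):
                if n not in g:
                    continue
                if n in g[n]:                    # T1: remove a self-loop
                    g[n].discard(n)
                    changed = True
                preds = [p for p in g if p != n and n in g[p]]
                if n != entry and len(preds) == 1:   # T2: merge n into its unique predecessor
                    p = preds[0]
                    g[p].discard(n)
                    g[p] |= (g[n] - {n})
                    del g[n]
                    changed = True
        return len(g) == 1

    reducible   = {"A": ["B"], "B": ["C"], "C": ["B", "D"], "D": []}
    irreducible = {"A": ["B", "C"], "B": ["C"], "C": ["B"]}   # two-entry loop between B and C
    print(is_reducible(reducible, "A"), is_reducible(irreducible, "A"))   # True False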

19.
20.
This paper introduces the notion of attention-from-motion, in which the objective is to identify, from an image sequence, only those objects in motion that capture visual attention (VA). Following an important concept in film production, namely the tracking shot, we define attention objects in motion (AOMs) as those objects that are tracked by the camera. Three components are proposed to form an attention-from-motion framework: (i) a new factorization form of the measurement matrix to describe the dynamic geometry of a moving object observed by a moving camera; (ii) determination of a single AOM based on the analysis of a certain structure of the motion matrix; (iii) an iterative framework for detecting multiple AOMs. The proposed analysis of structure from factorization enables the detection of AOMs even when only partial data is available due to occlusion and over-segmentation. Without recovering the motion of either the object or the camera, the proposed method can detect AOMs robustly from any combination of camera motion and object motion, even for degenerate motion.
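
For context, the classical affine factorization of the measurement matrix (the formulation that factorization-based methods build on; the paper's new factorization form is not reproduced here) stacks the image coordinates of P tracked points over F frames into a 2F-by-P matrix W. After subtracting the per-frame centroid translation t,

    \tilde{W} = W - \mathbf{t}\,\mathbf{1}^{\top} = M\,S,
    \qquad M \in \mathbb{R}^{2F \times 3}, \quad S \in \mathbb{R}^{3 \times P},
    \qquad \operatorname{rank}(\tilde{W}) \le 3,

where M encodes the camera/object motion and S the 3-D shape. Camera tracking of an AOM constrains the structure of the motion matrix M, which is the kind of structure analysed in component (ii).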

