Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Program dependence graphs are a well-established device to represent possible information flow in a program. Path conditions in dependence graphs have been proposed to express more detailed circumstances of a particular flow; they provide precise necessary conditions for information flow along a path or chop in a dependence graph. Ordinary Boolean path conditions, however, cannot express temporal properties, e.g., that for a specific flow it is necessary that some condition holds, and later another specific condition holds. In this contribution, we introduce temporal path conditions, which extend ordinary path conditions by temporal operators in order to express temporal dependencies between conditions for a flow. We present motivating examples, generation and simplification rules, application of model checking to generate witnesses for a specific flow, and a case study. We prove the following soundness property: if a temporal path condition for a path is satisfiable, then the ordinary Boolean path condition for the path is satisfiable. The converse does not hold, indicating that temporal path conditions are more precise. An extended abstract of the present article appeared in the 2007 Proceedings of the Seventh IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2007). The research of A. Lochbihler was partially supported by Deutsche Forschungsgemeinschaft, grant Sn11/9-1.
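To make the distinction concrete, here is a small hypothetical illustration (not taken from the paper) of an ordinary path condition versus a temporal one, written with LTL-style operators; the paper's exact operator set and notation may differ.

```latex
% Hypothetical example: c_1, c_2, c_3 are the conditions governing three
% consecutive steps of a dependence-graph path \pi.
% Ordinary Boolean path condition: a plain conjunction, no ordering information.
PC(\pi) \;=\; c_1 \wedge c_2 \wedge c_3
% Temporal path condition: "eventually" operators (F) record that c_1 must
% hold first, c_2 strictly later, and c_3 later still.
TPC(\pi) \;=\; c_1 \wedge \mathbf{F}\,\bigl(c_2 \wedge \mathbf{F}\, c_3\bigr)
% Soundness as stated above: satisfiability of TPC(\pi) implies satisfiability
% of PC(\pi); the converse fails, so TPC(\pi) is more precise.
```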

2.
Dynamic code generation is widely used in important everyday software such as web browsers and Flash players. In recent years, serious security problems have been exposed in these systems, providing new opportunities for control-flow hijacking attacks and corresponding defenses, and attracting increasing attention. Given that dynamically generated code resides in data regions, is executable, and depends directly on input, this paper surveys and analyzes new control-flow hijacking techniques from the perspectives of code injection attacks and code reuse attacks, and describes the main new defense methods from the perspectives of enforcement-based defense and moving target defense. We also propose a model for measuring the security of dynamic code generation systems, use it to compare and evaluate representative defense techniques, and discuss development trends and future research directions for attack and defense techniques targeting dynamically generated code.

3.
There are several similar, but not identical, definitions of control dependence in the literature. These definitions are given in terms of control flow graphs which have had extra restrictions imposed (for example, end-reachability). We define two new generalisations of non-termination insensitive and non-termination sensitive control dependence called weak and strong control-closure. These are defined for all finite directed graphs, not just control flow graphs, and hence allow control dependence to be applied to a wider class of program structures than before. We investigate all previous forms of control dependence in the literature and prove that, for the restricted graphs for which each is defined, vertex sets are closed under each if and only if they are either weakly or strongly control-closed. Low polynomial-time algorithms for producing minimal weakly and strongly control-closed sets over generalised control flow graphs are given. This paper is the first to define an underlying semantics for control dependence: we define two relations between graphs called weak and strong projections, and prove that the graph induced by a set of vertices is a weak/strong projection of the original if and only if the set is weakly/strongly control-closed. Thus, all previous forms of control dependence also satisfy our semantics. Weak and strong projections, therefore, precisely capture the essence of control dependence in our generalisations and all the previous, more restricted forms. More fundamentally, these semantics can be thought of as correctness criteria for future definitions of control dependence.
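For orientation, one of the classical formulations alluded to above (in the style of Ferrante, Ottenstein and Warren, for a control flow graph with a unique reachable end node) can be stated as follows; this is background only, not the paper's generalised definition.

```latex
% Classical, non-termination-insensitive control dependence on a CFG with a
% unique, reachable end node (background only):
w \text{ is control dependent on } u \;\iff\;
  \exists\ \text{successor } s \text{ of } u:\;
  w \text{ post-dominates } s \ \wedge\ w \text{ does not post-dominate } u.
```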

4.
Measuring the strength of dependence between two sets of random variables lies at the heart of many statistical problems, in particular, feature selection for pattern recognition. We believe that there are some basic desirable criteria for a measure of dependence not satisfied by many commonly employed measures, such as the correlation coefficient. Briefly stated, a measure of dependence should: (1) be model-free and invariant under monotone transformations of the marginals; (2) fully differentiate different levels of dependence; (3) be applicable to both continuous and categorical distributions; (4) not require the dependence of X on Y to be the same as the dependence of Y on X; (5) be readily estimated from data; and (6) be straightforwardly extended to multivariate distributions. The new measure of dependence introduced in this paper, called the coefficient of intrinsic dependence (CID), satisfies these criteria. The main motivating idea is that Y is strongly (weakly, resp.) dependent on X if and only if the conditional distribution of Y given X is significantly (mildly, resp.) different from the marginal distribution of Y. We measure the difference by the normalized integrated square difference distance so that the full range of dependence can be adequately reflected in the interval [0, 1]. The paper treats estimation of the CID, provides simulations and comparisons, and applies the CID to gene prediction and cancer classification based on gene-expression measurements from microarrays.
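The motivating idea carries over directly to a small estimator for categorical data. The sketch below is only an illustration of that idea, not the paper's estimator: it compares each conditional distribution P(Y | X = x) with the marginal P(Y) by an averaged squared difference, and the normalisation shown is one reasonable discrete choice (the paper's normalized integrated square difference handles continuous distributions as well).

```python
import numpy as np

def cid_like(x, y):
    """Sketch of a CID-style measure of the dependence of Y on X for
    categorical data: a normalized average squared difference between the
    conditional distributions P(Y|X=x) and the marginal P(Y).  It is 0 when
    X and Y are independent and 1 when Y is a function of X."""
    x, y = np.asarray(x), np.asarray(y)
    y_vals = np.unique(y)
    p_y = np.array([(y == v).mean() for v in y_vals])      # marginal P(Y)
    num = 0.0
    for xv in np.unique(x):
        mask = (x == xv)
        p_y_given_x = np.array([(y[mask] == v).mean() for v in y_vals])
        num += mask.mean() * np.sum((p_y_given_x - p_y) ** 2)
    den = 1.0 - np.sum(p_y ** 2)        # value of num when Y is determined by X
    return 0.0 if den == 0 else num / den

# Note the asymmetry (criterion 4): cid_like(x, y) need not equal cid_like(y, x).
rng = np.random.default_rng(0)
x = rng.integers(0, 3, 1000)
y = (x + rng.integers(0, 2, 1000)) % 3          # Y depends on X, with noise
print(cid_like(x, y), cid_like(rng.integers(0, 3, 1000), y))
```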

5.
An experimental evaluation of data dependence analysis techniques
Optimizing compilers rely upon program analysis techniques to detect data dependences between program statements. Data dependence information captures the essential ordering constraints of the statements in a program that need to be preserved in order to produce valid optimized and parallel code. Data dependence testing is very important for automatic parallelization, vectorization, and any other code transformation. In this paper, we examine the impact of data dependence analysis in practice. A number of data dependence tests have been proposed in the literature. In each test, there are different trade-offs between accuracy and efficiency. We present an experimental evaluation of several data dependence tests, including the Banerjee test, the I-Test, and the Omega test. We compare these tests in terms of data dependence accuracy, compilation efficiency, effectiveness in parallelization, and program execution performance. We analyze the reasons why a data dependence test can be inexact and we explain how the examined tests handle such cases. We run various experiments using the Perfect Club Benchmarks and the scientific library Lapack. We present the measured accuracy of each test and the reasons for any approximation. We compare these tests in terms of efficiency and we analyze the trade-offs between accuracy and efficiency. We also determine the impact of each data dependence test on the total compilation time. Finally, we measure the number of loops parallelized by each test and we compare the execution performance of each benchmark on a multiprocessor. Our results indicate that the Omega test is more accurate, but also very inefficient in the cases where the other two tests are inaccurate. In general, the cost of the Omega test is high and it uses a significant percentage of the total compilation time. Furthermore, the difference in accuracy of the Omega test over the Banerjee test and the I-Test does not improve parallelization and program execution performance.
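As a reminder of what such a test decides, here is the much simpler classical GCD test in a few lines of Python; it is not one of the three tests evaluated in the paper, but it shows the accuracy/efficiency trade-off in miniature: it is cheap, but it ignores loop bounds, so it can only ever answer "independent" or "maybe dependent".

```python
from math import gcd

def gcd_test(a, b, c):
    """Classical GCD dependence test (illustration only).  For array accesses
    A[a*i + c1] (write) and A[b*j + c2] (read) inside a loop nest, a dependence
    requires an integer solution of a*i - b*j = c, where c = c2 - c1.  No
    solution can exist unless gcd(a, b) divides c, so return False (provably
    independent) in that case and True (maybe dependent) otherwise."""
    g = gcd(abs(a), abs(b))
    if g == 0:                       # both coefficients zero: dependence iff c == 0
        return c == 0
    return c % g == 0

# for (i...) { A[2*i] = ...; ... = A[2*i + 1]; }  ->  2*i - 2*j = 1
print(gcd_test(2, 2, 1))   # False: the accesses touch disjoint elements
# for (i...) { A[4*i] = ...; ... = A[2*i + 2]; }  ->  4*i - 2*j = 2
print(gcd_test(4, 2, 2))   # True: a dependence may exist
```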

6.
Long-range dependence is a property of stochastic processes that has an important impact on network performance, especially on the buffer usage in routers. We analyze the presence of long-range dependence in on-chip processor traffic and study its impact on networks-on-chip. We measure long-range dependence in communication traces of processor IPs at the cycle-accurate level. We also study the impact of long-range dependence on a real network-on-chip using the SocLib simulation environment and traffic generators of our own. Our experiments show that long-range dependence is not a ubiquitous property of on-chip processor traffic and that its impact on the network-on-chip is highly correlated with the low-level communication protocol used.
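One standard way to check a trace for long-range dependence is to estimate its Hurst exponent; the sketch below uses the aggregated-variance method on synthetic data (the paper does not necessarily use this particular estimator). H close to 0.5 indicates short-range behaviour, H > 0.5 indicates long-range dependence.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent H of a traffic trace with the
    aggregated-variance method: for an LRD process, the variance of the
    m-aggregated series decays like m**(2H - 2), so H is recovered from the
    slope of log(variance) versus log(m)."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        agg = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(agg.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

# Example: i.i.d. (short-range) traffic should give H close to 0.5.
rng = np.random.default_rng(0)
print(hurst_aggvar(rng.poisson(5.0, 1 << 14)))
```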

7.
Computers & Chemistry, 1991, 15(3): 225-233
A program for the calculation of the effective charge and its volume dependence of solids of NaCl, CsCl, CaF2 and zincblende structures has been developed, which employs three different evaluations: (a) using the second Szigeti equation; (b) using the latter equation with γ_t = −(∂ ln ω_t/∂ ln V) obtained from the generalized first Szigeti equation; and (c) using Hardy's model for seven different short-range potential forms. It consists of the FORTRAN-77 program DLOGS.FOR, which reads the input data from up to three input files (INPUTx.INP, x = 1, 2, 3; each of them with up to four data sets), calls subroutines SZIG2.FOR, SZIG12.FOR and HARDY.FOR for the computations by the three evaluations mentioned, respectively, and produces the corresponding output files (OUTPUTx.OUT, x = 1, 2, 3). DLOGS.FOR uses subroutines DIFDAT.FOR and DIFMA.FOR for calculating percentage differences between data. HARDY.FOR uses subroutine TABDAT.FOR which performs calculations with the short-range potentials mentioned above.

8.
The number of CAD programs and their capabilities have risen greatly in recent times. As well, the number of Application Programmer Interface (API) products and the number of representation standards for display, database storage and communication has also risen. These applications, API products and representation standards are generally not compatible except through specific, individually programmed interfaces. Incompatibility of API software products arises because of: (i) different representations for the same information, and (ii) different ways of communicating with the API products. This article describes the derivation of a generic software architecture to overcome the second source of incompatibility. The derivation employs the “box structure” (system engineering) software development methodology in a generic, high-level manner, by considering activities performed with current CAD software, but without going into the details. The objective is to determine the types of software objects required and the types of messages that must be passed between them. The result is an architecture in which Tool objects embodying individual software tools are plugged into a Shell object, which holds the Tools together as a single program, provides for interactions between Tools, and controls when each Tool is active. In this way separately developed software tools can be combined seamlessly into a highly graphical and interactive environment.
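A minimal sketch of the Tool/Shell idea is given below; the class and method names (Shell, Tool, plug_in, handle, and so on) are hypothetical, since the article describes the architecture at a generic level rather than as a concrete API.

```python
# Minimal sketch of a Tool/Shell plug-in architecture (hypothetical API).
from abc import ABC, abstractmethod

class Tool(ABC):
    """A separately developed software tool that can be plugged into the Shell."""
    def __init__(self):
        self.shell = None            # set by the Shell when the Tool is plugged in

    @abstractmethod
    def handle(self, message: dict) -> None:
        """React to a message routed to this Tool by the Shell."""

class Shell:
    """Holds Tools together as one program, routes messages between them,
    and controls which Tool is currently active."""
    def __init__(self):
        self.tools = {}
        self.active = None

    def plug_in(self, name: str, tool: Tool) -> None:
        tool.shell = self
        self.tools[name] = tool

    def activate(self, name: str) -> None:
        self.active = name

    def send(self, target: str, message: dict) -> None:
        self.tools[target].handle(message)

class GeometryViewer(Tool):          # example Tool
    def handle(self, message: dict) -> None:
        print("viewer received:", message)

shell = Shell()
shell.plug_in("viewer", GeometryViewer())
shell.activate("viewer")
shell.send("viewer", {"type": "display", "entity": "line-42"})
```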

9.
In this paper, we extend job scheduling models to include aspects of history-dependent scheduling, where setup times for a job are affected by the aggregate activities of all predecessors of that job. Traditional approaches to machine scheduling typically address objectives and constraints that govern the relative sequence of jobs being executed using available resources. This paper optimises the operations of multiple unrelated resources to address sequential and history-dependent job scheduling constraints along with time window restrictions. We denote this consolidated problem as the general precedence scheduling problem (GPSP). We present several applications of the GPSP and show that many problems in the literature can be represented as special cases of history-dependent scheduling. We design new ways to model this class of problems and then proceed to formulate it as an integer program. We develop specialized algorithms to solve such problems. An extensive computational analysis over a diverse family of problem data instances demonstrates the efficacy of the novel approaches and algorithms introduced in this paper.

10.
Software birthmarks utilize certain specific program characteristics to validate the origin of software, so they can be applied to detect software piracy. One state-of-the-art technology on software birthmarks adopts dynamic system call dependence graphs as the unique signature of a program, which cannot be cluttered by existing obfuscation techniques and is also immune to the no-ops system call insertion attack. In this paper, we analyze its weaknesses and construct replacement attacks with the help of semantically equivalent system calls to unlock the high-frequency dependencies between the system calls in the victim’s original system call dependence graph. Our results show that the proposed replacement attacks can destroy the original birthmark successfully.

11.
JDiff: A differencing technique and tool for object-oriented programs
During software evolution, information about changes between different versions of a program is useful for a number of software engineering tasks. For example, configuration-management systems can use change information to assess possible conflicts among updates from different users. For another example, in regression testing, knowledge about which parts of a program are unchanged can help in identifying test cases that need not be rerun. For many of these tasks, a purely syntactic differencing may not provide enough information for the task to be performed effectively. This problem is especially relevant in the case of object-oriented software, for which a syntactic change can have subtle and unforeseen effects. In this paper, we present a technique for comparing object-oriented programs that identifies both differences and correspondences between two versions of a program. The technique is based on a representation that handles object-oriented features and, thus, can capture the behavior of object-oriented programs. We also present JDiff, a tool that implements the technique for Java programs. Finally, we present the results of four empirical studies, performed on many versions of two medium-sized subjects, that show the efficiency and effectiveness of the technique when used on real programs.

12.
This paper describes the use of market mechanisms for resource allocation in pervasive sensor applications to maximize their Value of Information (VoI), which combines the objectively measured Quality of Information (QoI) with the subjective value assigned to it by the users. The unique challenge of pervasive sensor applications that we address is the need for adjusting resource allocation in response to the changing application requirements and evolving sensor network conditions. We use two market mechanisms: auctions at individual sensor nodes to optimize routing, and switch options to optimize dynamic selection of sensor network services as well as switching between modes of operation in pervasive security applications. We also present scenarios of transient congestion management and a home security system to motivate the proposed techniques.

13.
We examine the dependence structure of electricity spot prices across regional markets in Australia. One of the major objectives in establishing a national electricity market was to provide a nationally integrated and efficient electricity market, limiting market power of generators in the separate regional markets. Our analysis is based on a GARCH approach to model the marginal price series in the considered regions in combination with copulae to capture the dependence structure between the marginals. We apply different copula models including Archimedean, elliptical and copula mixture models. We find a positive dependence structure between the prices for all considered markets, while the strongest dependence is exhibited between markets that are connected via interconnector transmission lines. Regarding the nature of dependence, the Student-t copula provides a good fit to the data, while the overall best results are obtained using copula mixture models due to their ability to also capture asymmetric dependence in the tails of the distribution. Interestingly, our results also suggest that for the four major markets, NSW, QLD, SA and VIC, the degree of dependence has decreased starting from the year 2008 towards the end of the sample period in 2010. Examining the Value-at-Risk of stylized portfolios constructed from electricity spot contracts in different markets, we find that the Student-t and mixture copula models outperform the Gaussian copula in a backtesting study. Our results are important for risk management and hedging decisions of market participants, in particular for those operating in several regional markets simultaneously.
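A much-simplified illustration of the copula approach is sketched below, using empirical ranks and a Gaussian copula on synthetic data rather than the paper's GARCH marginals and Student-t/mixture copulas; the series names nsw and vic are only labels borrowed from the market names above.

```python
import numpy as np
from scipy.stats import norm, kendalltau

rng = np.random.default_rng(0)
# stand-in for two regional spot-price series with positive dependence
common = rng.standard_normal(2000)
nsw = np.exp(0.8 * common + 0.6 * rng.standard_normal(2000))
vic = np.exp(0.7 * common + 0.7 * rng.standard_normal(2000))

def to_uniform(x):
    """Probability-integral transform via empirical ranks (pseudo-observations)."""
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / (len(x) + 1)

u, v = to_uniform(nsw), to_uniform(vic)
z1, z2 = norm.ppf(u), norm.ppf(v)          # map uniforms to normal scores
rho_gauss = np.corrcoef(z1, z2)[0, 1]      # Gaussian-copula correlation parameter
tau, _ = kendalltau(nsw, vic)              # rank dependence, invariant to marginals

print(f"Gaussian copula rho = {rho_gauss:.2f}, Kendall tau = {tau:.2f}")
```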

14.
We introduce the dependence distance, a new notion of the intrinsic distance between points, derived as a pointwise extension of statistical dependence measures between variables. We then introduce a dimension reduction procedure for preserving this distance, which we call the dependence map. We explore its theoretical justification, connection to other methods, and empirical behavior on real data sets.

15.
An important application of the dynamic program slicing technique is program debugging. In applications such as interactive debugging, the dynamic slicing algorithm needs to be efficient. In this context, we propose a new dynamic program slicing technique that is more efficient than the related algorithms reported in the literature. We use the program dependence graph as an intermediate program representation, and modify it by introducing the concepts of stable and unstable edges. Our algorithm is based on marking and unmarking the unstable edges as and when the dependences arise and cease during run-time. We call this the edge-marking algorithm. After an execution of a node x, an unstable edge (x, y) is marked if the node x uses the value of the variable defined at node y. A marked unstable edge (x, y) is unmarked after an execution of a node z if the nodes y and z define the same variable var, and the value of var computed at the node y does not affect the present value of var defined at the node z. We show that our algorithm is more time and space efficient than the existing ones. The worst-case space complexity of our algorithm is O(n²), where n is the number of statements in the program. We also briefly discuss an implementation of our algorithm.
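A toy sketch of the marking idea, restricted to data dependences on a hypothetical five-statement trace, is shown below; the paper's algorithm additionally handles control dependences and the full stable/unstable-edge machinery of the program dependence graph.

```python
# Sketch: dynamic data-dependence marking over an executed trace (hypothetical
# program; each statement is (node_id, defined variables, used variables)).
from collections import defaultdict

trace = [
    (1, {"a"}, set()),        # a = input()
    (2, {"b"}, set()),        # b = input()
    (3, {"c"}, {"a", "b"}),   # c = a + b
    (4, {"a"}, set()),        # a = 0        (kills the old definition of a)
    (5, {"d"}, {"a", "c"}),   # d = a * c
]

marked = defaultdict(set)     # marked[x] = nodes that x currently depends on
last_def = {}                 # variable -> node of its most recent definition

for node, defs, uses in trace:
    # Marking: after executing `node`, mark an edge to the defining node of
    # every variable it uses (the dependence has just arisen).
    marked[node] = {last_def[v] for v in uses if v in last_def}
    # Unmarking effect: a new definition of v supersedes the old one, so later
    # uses of v are no longer attributed to the killed definition.
    for v in defs:
        last_def[v] = node

def dynamic_slice(node):
    """Nodes reachable over the currently marked dependence edges."""
    seen, work = set(), [node]
    while work:
        n = work.pop()
        if n not in seen:
            seen.add(n)
            work.extend(marked[n])
    return sorted(seen)

print(dynamic_slice(5))   # -> [1, 2, 3, 4, 5]: d depends on a=0 and on c=a+b
```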

16.
Computer Networks, 1999, 31(8): 831-860
Secure coprocessors enable secure distributed applications by providing safe havens where an application program can execute (and accumulate state), free of observation and interference by an adversary with direct physical access to the device. However, for these coprocessors to be effective, participants in such applications must be able to verify that they are interacting with an authentic program on an authentic, untampered device. Furthermore, secure coprocessors that support general-purpose computation and will be manufactured and distributed as commercial products must provide these core sanctuary and authentication properties while also meeting many additional challenges, including:
  • the applications, operating system, and underlying security management may all come from different, mutually suspicious authorities;
  • configuration and maintenance must occur in a hostile environment, while minimizing disruption of operations;
  • the device must be able to recover from the vulnerabilities that inevitably emerge in complex software;
  • physical security dictates that the device itself can never be opened and examined; and
  • ever-evolving cryptographic requirements dictate that hardware accelerators be supported by reloadable on-card software.
This paper summarizes the hardware, software, and cryptographic architecture we developed to address these problems. Furthermore, with our colleagues, we have implemented this solution in a commercially available product.

17.
Spatial variation of land-surface properties is a major challenge to ecological and biogeochemical studies in the Amazon basin. The scale dependence of biophysical variation (e.g., mixtures of vegetation cover types), as depicted in Landsat observations, was assessed for the common land-cover types bordering the Tapajós National Forest, Central Brazilian Amazon. We first collected hyperspectral signatures of vegetation and soils contributing to the optical reflectance of landscapes in a 600 km² region. We then employed a spectral mixture model (AutoMCU) that utilizes bundles of the field spectra with Monte Carlo analysis to estimate sub-pixel cover of green plants, senescent vegetation and soils in Landsat Thematic Mapper (TM) pixels. The method proved useful for quantifying biophysical variability within and between individual land parcels (e.g., across different pasture conditions). Image textural analysis was then performed to assess surface variability at the inter-pixel scale. We compared the results from the textural analysis (inter-pixel scale) to spectral mixture analysis (sub-pixel scale). We tested the hypothesis that very high resolution, sub-pixel estimates of surface constituents are needed to detect important differences in the biophysical structure of deforested lands. Across a range of deforestation categories common to the region, there was strong correlation between the fractional green and senescent vegetation cover values derived from spectral unmixing and texture analysis variance results (r² > 0.85, p < 0.05). These results support the argument that, in deforested areas, biophysical heterogeneity at the scale of individual field plots (sub-pixel) is similar to that of whole clearings when viewed from the Landsat vantage point.
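The core unmixing step can be illustrated with ordinary non-negative least squares on made-up endmember spectra, as below; AutoMCU itself additionally draws endmember bundles with Monte Carlo sampling to obtain per-pixel uncertainty, which this sketch omits.

```python
# Sketch of linear spectral unmixing with non-negative least squares
# (hypothetical endmember reflectances, not real field spectra).
import numpy as np
from scipy.optimize import nnls

# Endmember spectra (columns): green vegetation, senescent vegetation, bare soil
# over 6 reflective Landsat TM bands -- made-up values for illustration.
E = np.array([
    [0.03, 0.08, 0.15],
    [0.05, 0.10, 0.20],
    [0.04, 0.18, 0.25],
    [0.45, 0.30, 0.30],
    [0.22, 0.35, 0.40],
    [0.10, 0.28, 0.38],
])

pixel = 0.55 * E[:, 0] + 0.25 * E[:, 1] + 0.20 * E[:, 2]   # synthetic mixed pixel

fractions, residual = nnls(E, pixel)        # non-negative least-squares fit
fractions /= fractions.sum()                # enforce sum-to-one after the fit
print(dict(zip(["green", "senescent", "soil"], fractions.round(3))))
```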

18.
Online discussions about software applications and services that take place on web-based communication platforms represent an invaluable knowledge source for diverse software engineering tasks, including requirements elicitation. The amount of research work on developing effective tool-supported analysis methods is rapidly increasing, as part of so-called software analytics. Textual messages in App store reviews, tweets, and online discussions taking place in mailing lists and user forums are processed by combining natural language techniques to filter out irrelevant data with text mining and machine learning algorithms to classify messages into different categories, such as bug reports and feature requests. Our research objective is to exploit a linguistic technique based on speech-acts for the analysis of online discussions with the ultimate goal of discovering requirements-relevant information. In this paper, we present a revised and extended version of the speech-acts based analysis technique, which we previously presented at CAiSE 2017, together with a detailed experimental characterisation of its properties. Datasets used in the experimental evaluation are taken from a widely used open source software project (161120 textual comments), as well as from an industrial project in the home energy management domain. We make them available for experiment replication purposes. On these datasets, our approach is able to successfully classify messages into Feature/Enhancement and Other, with F-measures of 0.81 and 0.84, respectively. We also found evidence that there is an association between types of speech-acts and categories of issues, and that there is correlation between some of the speech-acts and issue priority, thus motivating further research on the exploitation of our speech-acts based analysis technique in semi-automated multi-criteria requirements prioritisation.
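For readers unfamiliar with this kind of pipeline, the sketch below shows a plain bag-of-words baseline for the Feature/Enhancement vs. Other split on made-up messages; it stands in for, and is much simpler than, the speech-act based technique evaluated in the paper.

```python
# Illustrative baseline only: TF-IDF + logistic regression on invented messages,
# not the paper's speech-act based feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Please add an option to export the report as CSV",
    "It would be great if the dashboard supported dark mode",
    "The app crashes when I rotate the screen",
    "Thanks for the quick fix, works for me now",
]
labels = ["feature", "feature", "other", "other"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

print(clf.predict(["could you add support for two-factor authentication?"]))
```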

19.
XML graphs have been shown to be a simple and effective formalism for representing sets of XML documents in program analysis. The formalism has evolved over a six-year period with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey the use of XML graphs for program analysis with four very different languages: Xact (XML in Java), Java Servlets (Web application programming), XSugar (transformations between XML and non-XML data), and XSLT (stylesheets for transforming XML documents).

20.
Context: Recent studies showed that combining present data, which are derived from the current software version, with past data, which are derived from previous software versions, can improve the accuracy of change impact predictions. However, for a specific program, existing combined techniques can rely only on the version history of that program, if available, and the prediction results depend on the variety of available change impact examples. Objective: We propose a hybrid probabilistic approach that predicts the change impact for a software entity using, as training data, existing version histories of arbitrary software systems. Method: Change-impact predictors are learned from past change impact graphs (CIGs), extracted from the version history, along with their associations with different influencing factors of change propagation. The learning examples in CIGs are not specific to the software entities that are covered in those examples, and the change propagation influencing factors are structural and conceptual dependencies between software entities. Once our predictors are trained, they can predict change impacts regardless of the version history of the software under analysis. We evaluate our approach using four systems in two scenarios. First, we use as training data the CIGs extracted from previous versions of the system under analysis. Second, for each analyzed system, we use only the training data extracted from the other systems. Results: Our approach produces accurate predictions in terms of both precision and recall. Moreover, when training our classifiers with a large variety of CIGs extracted from the change histories of different projects, the recall scores of predicted impact sets were significantly improved. Conclusion: Our approach produces accurate predictions for new classes without recorded change histories, as well as for old classes. For the systems considered in our evaluation, once our approach is trained with a variety of CIGs, it can predict change impacts with good recall scores.
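The training setup can be pictured with a toy example: each learning instance is a (changed entity, candidate entity) pair described by dependence-related features and labelled with whether the candidate was impacted in a recorded CIG. The feature names and values below are hypothetical, and the classifier is only a stand-in for the paper's probabilistic predictors.

```python
# Toy sketch of learning change-impact predictors from entity-pair features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# features per pair: [structural coupling strength, conceptual (textual) similarity]
X_train = np.array([
    [0.9, 0.8],   # strongly coupled, similar vocabulary  -> impacted
    [0.7, 0.2],   # structurally coupled only             -> impacted
    [0.1, 0.9],   # only conceptually similar             -> impacted
    [0.1, 0.1],   # weak on both                          -> not impacted
    [0.0, 0.3],
    [0.2, 0.0],
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Because the features are not tied to particular classes, the trained model can
# score candidate classes of a different project with no change history.
candidates = np.array([[0.8, 0.6], [0.05, 0.1]])
print(model.predict_proba(candidates)[:, 1])   # estimated impact probabilities
```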
