Similar Literature
A total of 20 similar documents were found (search time: 46 ms).
1.
Input validation is essential and critical in Web applications. It is the enforcement of constraints that any input must satisfy before it is accepted to raise external effects. We have discovered some empirical properties for characterizing input validation in Web applications. In this paper, we propose an approach for the automated recovery of an input validation model from program source code. The recovered model is represented as a variant of the control flow graph, called the validation flow graph, which shows the essential input validation features implemented in programs. Based on the model, we then formulate two coverage criteria for testing input validation. The two criteria can be used to guide the structural testing of input validation in Web applications. We have evaluated the proposed approach through case studies and experiments.
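As a hedged illustration of the idea (not the authors' implementation), the following Python sketch represents a tiny validation flow graph and reports which accept/reject outcomes of the validation nodes are left uncovered by a set of executed paths; all node names and the coverage rule are invented for the example.

# Hypothetical sketch of a validation flow graph and an outcome-coverage check.
# Node and edge names are illustrative, not taken from the paper.
from collections import defaultdict

# Each validation node has an "accept" and a "reject" successor.
VALIDATION_NODES = {
    "check_age":   {"accept": "check_email", "reject": "error_age"},
    "check_email": {"accept": "process",     "reject": "error_email"},
}

def covered_outcomes(executed_paths):
    """Collect which (node, outcome) pairs were exercised by the test paths."""
    seen = defaultdict(set)
    for path in executed_paths:
        for src, dst in zip(path, path[1:]):
            for outcome, target in VALIDATION_NODES.get(src, {}).items():
                if target == dst:
                    seen[src].add(outcome)
    return seen

def uncovered(executed_paths):
    """Report validation outcomes not yet exercised (a simple coverage criterion)."""
    seen = covered_outcomes(executed_paths)
    return {(node, outcome)
            for node, edges in VALIDATION_NODES.items()
            for outcome in edges
            if outcome not in seen[node]}

if __name__ == "__main__":
    paths = [
        ["check_age", "check_email", "process"],   # both checks pass
        ["check_age", "error_age"],                # age check rejects
    ]
    print(uncovered(paths))  # {('check_email', 'reject')} -> reject branch untested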

2.
Research on a Front-End Tool for Program Analysis Based on Model Checking
叶俊民  谢茜  金聪  李明  张振方 《计算机科学》2010,37(5):118-122,174
This paper proposes a method for analyzing programs with model checking. Its main idea is to translate C/C++ source code into a Kripke structure equivalent to the control flow graph, describe the properties of the source program to be verified as CTL formulas, and perform the actual program analysis with the NuSMV model checker. Based on this idea, a tool that automatically translates C/C++ source code into NuSMV input has been designed and implemented. Experiments show that the method can analyze programs effectively.
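As a rough illustration of the translation step (not the tool described in the paper), the Python sketch below encodes a small control flow graph as a Kripke-style NuSMV module whose single variable pc ranges over the CFG nodes, together with one example CTL property; the node names and the property are invented.

# Hypothetical sketch: turn a small control flow graph into a NuSMV module
# whose single variable "pc" ranges over CFG nodes (a Kripke-structure view).
CFG = {
    "n0_entry": ["n1_cond"],
    "n1_cond":  ["n2_then", "n3_else"],
    "n2_then":  ["n4_exit"],
    "n3_else":  ["n4_exit"],
    "n4_exit":  ["n4_exit"],   # self-loop keeps the transition relation total
}

def cfg_to_nusmv(cfg, entry, spec):
    nodes = ", ".join(cfg)
    lines = ["MODULE main", "VAR", f"  pc : {{{nodes}}};", "ASSIGN",
             f"  init(pc) := {entry};", "  next(pc) := case"]
    for node, succs in cfg.items():
        target = succs[0] if len(succs) == 1 else "{" + ", ".join(succs) + "}"
        lines.append(f"    pc = {node} : {target};")
    lines += ["  esac;", f"CTLSPEC {spec}"]
    return "\n".join(lines)

if __name__ == "__main__":
    # paste the printed model into NuSMV to check that the exit node is always reached
    print(cfg_to_nusmv(CFG, "n0_entry", "AF (pc = n4_exit)"))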

3.
Embedded control systems usually have modes, such as a start-up mode, a normal operating mode, and an emergency mode. A program mode is represented by a constraint expression over the input variables, formed from combinations of their value ranges. Recovering modes from the source code not only allows checking whether the implemented modes are consistent with the design, but also enables more precise computation of the program's WCET. Based on an analysis of the source program, this paper proposes a new method for automatically extracting program modes. Starting from C source code and its control flow graph, the method adjusts the flow of nodes inside loops and removes nodes unrelated to input variables to obtain the input-variable-related control flow graph (ICFG); it then builds and solves a linear programming problem for each path of the ICFG to obtain every potential program mode and its input-variable constraint expression. Experimental results on benchmark programs demonstrate the feasibility and effectiveness of the method.
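The following Python sketch (an assumption-laden illustration, not the paper's algorithm) shows the feasibility check behind the last step: the input-variable constraints collected along one ICFG path are tested with a linear program via scipy.optimize.linprog, and a feasible path yields a candidate mode with a witness input; the variables and constraint values are made up.

# Hypothetical sketch: check whether the input-variable constraints collected
# along one path are feasible, using linear programming (scipy).
from scipy.optimize import linprog

# Constraints along one path, as A_ub @ x <= b_ub for inputs x = (temp, speed).
# e.g. temp >= 80  ->  -temp <= -80 ;  speed <= 20.
A_ub = [[-1, 0],   # -temp        <= -80   (temp >= 80)
        [ 0, 1]]   #        speed <=  20
b_ub = [-80, 20]

# Any objective works for a pure feasibility check; minimize 0.
res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 150), (0, 100)], method="highs")

if res.success:
    print("path is feasible -> it induces a program mode, witness input:", res.x)
else:
    print("path constraints are infeasible -> no mode for this path")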

4.
An alternative approach to developing reusable components from scratch is to recover them from existing systems. We apply program slicing, a program decomposition method, to the problem of extracting reusable functions from ill-structured programs. As with conventional slicing first described by M. Weiser (1984), a slice is obtained by iteratively solving data flow equations based on a program flow graph. We extend the definition of program slice to a transform slice, one that includes statements which contribute directly or indirectly to transform a set of input variables into a set of output variables. Unlike conventional program slicing, these statements do not include either the statements necessary to get input data or the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code and its partial specification, only in terms of input and output data. Using domain knowledge we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation we show how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of candidate functions makes this task easier than looking for the last function instruction in the original source code.
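The flavour of the computation can be sketched as follows (hypothetical Python, not the authors' algorithm): starting from the output variables, take the backward closure over def-use pairs while excluding the input statements and the binding-condition tests, so that only the transforming statements remain. Statement numbers and dependences are invented.

# stmt -> (variables defined, variables used); a tiny summing loop as example
STMTS = {
    1: ({"n"},     set()),           # n = read()            (input statement)
    2: ({"i"},     set()),           # i = 1
    3: ({"total"}, set()),           # total = 0
    4: ({"total"}, {"total", "i"}),  # total = total + i
    5: ({"i"},     {"i"}),           # i = i + 1
    6: (set(),     {"n", "i"}),      # loop test i <= n       (binding condition)
}

def transform_slice(stmts, outputs, exclude=frozenset()):
    """Backward closure over def-use pairs, skipping excluded statements."""
    worklist = set(outputs)
    slice_ = set()
    while worklist:
        var = worklist.pop()
        for s, (defs, uses) in stmts.items():
            if s in exclude or s in slice_ or var not in defs:
                continue
            slice_.add(s)
            worklist |= uses
    return sorted(slice_)

# Exclude the input statement (1) and the binding condition (6), as transform
# slicing does, and slice on the output variable "total".
print(transform_slice(STMTS, outputs={"total"}, exclude={1, 6}))  # [2, 3, 4, 5]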

5.
An Automatic Method for Generating Control Flow Graphs from Source Programs
Converting a source program into a control flow graph is one of the research topics of reverse engineering in software engineering. This paper presents a method and an implementation technique for generating the control flow graph corresponding to a source program. The method and technique can also be applied to program analysis and other areas of software engineering.
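For illustration only (this is not the tool from the paper, which targets a different source language), the Python sketch below builds a small control flow graph from source code using the standard ast module, handling just sequential statements and if/else.

# Hypothetical sketch: recover a tiny control flow graph from Python source.
import ast

def build_cfg(src):
    tree = ast.parse(src)
    edges, counter = [], {"n": 0}

    def new_node(label):
        counter["n"] += 1
        return f"{counter['n']}:{label}"

    def walk(stmts, pred):
        """Link statements sequentially; returns the exit nodes of the block."""
        exits = [pred]
        for stmt in stmts:
            node = new_node(type(stmt).__name__)
            for e in exits:
                edges.append((e, node))
            if isinstance(stmt, ast.If):
                then_exits = walk(stmt.body, node)
                else_exits = walk(stmt.orelse, node) if stmt.orelse else [node]
                exits = then_exits + else_exits
            else:
                exits = [node]
        return exits

    walk(tree.body, "entry")
    return edges

if __name__ == "__main__":
    src = "x = input()\nif x:\n    y = 1\nelse:\n    y = 2\nprint(y)"
    for a, b in build_cfg(src):
        print(a, "->", b)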

6.
Automated recovery of system features and their designs from program source codes is important in reverse engineering and system comprehension. It also helps in the testing of software. An error that is made by users in an input to an execution of a transaction and discovered only after the completion of the execution is called a posttransaction user-input error (PTUIE) of the transaction. For a transaction in any database application, usually, it is essential to provide transactions for correcting the effect that could result from any PTUIE of the transaction. We discover some probable properties that exist between the control flow graph of a transaction and the control flow graphs of transactions for correcting PTUIE of the former transaction. Through recognizing these properties, we present a novel approach for the automated approximate recovery of provisions and designs for transactions to correct PTUIE of transactions in a database application. The approach recognizes these properties through analyzing the source codes of transactions in the database application statically.

7.
We present an algorithm for extracting control flow graphs from Java bytecode that captures normal as well as exceptional control flow. We prove its correctness, in the sense that the behaviour of the extracted control flow graph is a sound over-approximation of the behaviour of the original program. This makes control flow graphs suitable for performing various static analyses, such as model checking of temporal safety properties. Analysing exceptional control flow for Java bytecode is difficult because of the stack-based nature of the language. We therefore develop the extraction in two stages. In the first, we abstract away from the complications arising from exceptional flows, and relativize the extraction on an oracle that is able to look into the stack and predict the exceptions that can be raised at each instruction. This idealized algorithm provides a specification for concrete extraction algorithms, which have to provide a suitable implementation for the oracle. We prove correctness of the idealized algorithm by means of behavioural simulation. In the second stage, we develop a concrete extraction algorithm that consists of two phases. In the first phase, the program is transformed into a BIR program, a stack-less intermediate representation of Java bytecode, from which the control flow graph is extracted in the second phase. We use this intermediate format because it provides the information needed to implement the oracle, and since it gives rise to more compact graphs. We show that the behaviour of the control flow graph extracted via the intermediate representation is a sound over-approximation of the behaviour of the graph extracted by the direct, idealized algorithm, and thus of the original program. The concrete extraction algorithm is implemented as the ConFlEx tool. A number of test cases are performed to evaluate the efficiency of the algorithm.

8.
常天佑  魏强  耿洋洋 《计算机应用》2017,37(12):3574-3580
Model checking programmable logic controller (PLC) programs with NuSMV requires the program model to be built by hand, which wastes effort and is error-prone. To address this problem, this paper proposes an automated method for constructing PLC program models based on state transitions. The method first analyzes the characteristics of the Structured Text (ST) language and parses the ST program into an abstract syntax tree; next, on top of the abstract syntax tree, it performs control flow analysis according to the different grammatical structures to generate a control flow graph; it then obtains the program dependence graph through data flow analysis; finally, it generates the NuSMV input model from the program dependence graph. Experimental results show that the proposed method automates the construction of NuSMV input models from ST programs, and that the constructed models both preserve the original characteristics of the ST programs and conform to the input format of the NuSMV model checker. Compared with traditional manual model construction, the method improves the efficiency and accuracy of model generation.

9.
Agent-oriented programming techniques seem appropriate for developing systems that operate in complex, dynamic, and unpredictable environments. We aim to address this requirement by developing model-checking techniques for the (automatic or semiautomatic) verification of rational-agent systems written in a logic-based agent-oriented programming language. Typically, developers apply model-checking techniques to abstract models of a system rather than the system implementation. Although this is important for detecting design errors at an early stage, developers might still introduce errors during coding. In contrast, developers can directly apply our model-checking techniques to systems implemented in an agent-oriented programming language, automatically verifying agent systems without the usual gap between design and implementation. We developed our techniques for AgentSpeak, a rational-agent programming language based on the AgentSpeak (L) abstract agent-oriented programming language. AgentSpeak shares many features of the agent-oriented programming paradigm. Similarly, we've developed techniques for automatically translating AgentSpeak programs into the model specification language of existing model-checking systems. In this way, we reduce the problem of verifying that an AgentSpeak system has certain BDI logic properties to a conventional LTL model-checking problem.

10.
For programs using secret information such as credit card numbers, preventing information-leaks is important. Denning, for example, has proposed a mechanism to certify that a given program does not violate a security policy. Kuninobu, on the other hand, has proposed a more practical framework for calculating the secrecy level of each output value from the secrecy level assigned to each input value, but no implementation has yet been explored. In this paper, we propose an implementation method for information-leak analysis, and show a system we have implemented based on program slicing. We have applied this system to a credit card program. Our results show that information-leak analysis before practical use of the program is important.
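A minimal sketch of the secrecy-level calculation, assuming a two-level lattice and data dependences already recovered by slicing (variable names and levels are invented, and this is not the implemented system):

LEVELS = {"public": 0, "secret": 1}

# secrecy level assigned to each input value
INPUT_LEVEL = {"card_number": "secret", "amount": "public"}

# data dependences recovered by slicing: variable -> variables it is computed from
DEPENDS_ON = {
    "masked_card": ["card_number"],
    "receipt":     ["masked_card", "amount"],
    "log_line":    ["amount"],
}

def secrecy_level(var, seen=frozenset()):
    """Max (join) of the levels of all inputs that flow into `var`."""
    if var in INPUT_LEVEL:
        return LEVELS[INPUT_LEVEL[var]]
    level = 0
    for dep in DEPENDS_ON.get(var, []):
        if dep not in seen:                         # guard against cyclic dependences
            level = max(level, secrecy_level(dep, seen | {var}))
    return level

for out in ("receipt", "log_line"):
    print(out, "->", "secret" if secrecy_level(out) else "public")
# receipt -> secret (depends on card_number); log_line -> public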

11.
A Method for Measuring Program Cyclomatic Complexity Based on Process Blueprints
This paper presents a method for measuring the cyclomatic complexity of programs based on process blueprints. Instead of extracting measurement information from the program's control flow graph, as is traditionally done, the method extracts it from the implementation-level representation of the process blueprint, the abstract implementation structure graph. This avoids parsing the program source code and constructing the control flow graph, simplifies the measurement process and its implementation, and improves the efficiency of the measurement.
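For reference, cyclomatic complexity itself is V(G) = E - N + 2P and can be computed from whichever graph the measurement information is extracted from; a minimal Python sketch with an invented graph:

def cyclomatic_complexity(edges, num_components=1):
    """V(G) = E - N + 2P for a graph given as a list of directed edges."""
    nodes = {n for e in edges for n in e}
    return len(edges) - len(nodes) + 2 * num_components

# a loop whose body contains an if/else
EDGES = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "back"), ("else", "back"), ("back", "cond"),
         ("cond", "exit")]

print(cyclomatic_complexity(EDGES))  # 7 edges - 6 nodes + 2*1 = 3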

12.
An integrated life-cycle model is presented for use in a software maintenance environment. The model represents information about the development and maintenance of software systems, emphasizing relationships between different phases of the software life cycle. It provides the basis for automated tools to assist maintenance personnel in making changes to existing software systems. The model is independent of particular specification, design, and programming languages because it represents only certain 'basic' semantic properties of software systems: control flow, data flow, and data structure. The software development processes by which one phase of the software life cycle is derived from another are represented by graph rewriting rules, which indicate how various components of a software system have been implemented. This approach permits analysis of the basic properties of a software system throughout the software life cycle. Examples are given to illustrate the integrated software life-cycle model during evolution.

13.
Graphs are ubiquitous in computer science. Moreover, in various application fields, graphs are equipped with attributes to express additional information such as names of entities or weights of relationships. Due to the pervasiveness of attributed graphs, it is highly important to have the means to express properties on attributed graphs to strengthen modeling capabilities and to enable analysis. Firstly, we introduce a new logic of attributed graph properties, where the graph part and attribution part are neatly separated. The graph part is equivalent to first-order logic on graphs as introduced by Courcelle. It employs graph morphisms to allow the specification of complex graph patterns. The attribution part is added to this graph part by reverting to the symbolic approach to graph attribution, where attributes are represented symbolically by variables whose possible values are specified by a set of constraints making use of algebraic specifications. Secondly, we extend our refutationally complete tableau-based reasoning method as well as our symbolic model generation approach for graph properties to attributed graph properties. Due to the new logic mentioned above, neatly separating the graph and attribution parts, and the categorical constructions employed only on a more abstract level, we can leave the graph part of the algorithms seemingly unchanged. For the integration of the attribution part into the algorithms, we use an oracle, allowing for flexible adoption of different available SMT solvers in the actual implementation. Finally, our automated reasoning approach for attributed graph properties is implemented in the tool AutoGraph integrating in particular the SMT solver Z3 for the attribute part of the properties. We motivate and illustrate our work with a particular application scenario on graph database query validation.
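To illustrate the division of labour (a sketch under invented attribute names, not the AutoGraph implementation), the attribute part of a matched pattern can be discharged with the Z3 Python bindings:

# Hypothetical sketch: the structural part of a property is matched on the
# graph, and the attribute constraints are handed to an SMT solver (Z3).
from z3 import Int, Solver, sat

# Attributes of a matched edge (u)-[weight]->(v), kept symbolic.
weight, capacity = Int("weight"), Int("capacity")

solver = Solver()
solver.add(weight > 0)             # weights are positive ...
solver.add(weight <= capacity)     # ... and never exceed the edge capacity
solver.add(capacity == 10)

if solver.check() == sat:
    print("attribute part satisfiable, e.g.:", solver.model())
else:
    print("no attribute assignment satisfies the pattern")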

14.
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of the machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data independent operations in parallel. In this paper, we discuss the design of a high level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects have not been discussed in detail since there are no new problems encountered. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
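The execution model can be sketched in a few lines of Python (an illustration, not DFL itself): every operator whose inputs are available may fire, and operators that become ready together are data-independent and could run in parallel. The graph and scheduler below are invented.

# Hypothetical data flow graph computing (a + b) * (a - b).
import operator

# node -> (operation, list of input nodes)
GRAPH = {
    "sum":  (operator.add, ["a", "b"]),
    "diff": (operator.sub, ["a", "b"]),
    "prod": (operator.mul, ["sum", "diff"]),
}

def run(graph, inputs):
    values = dict(inputs)                      # data already available
    pending = dict(graph)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise ValueError("graph has a cycle or a missing input")
        # every node in `ready` is data-independent of the others and could
        # be executed concurrently; here we simply fire them in turn
        for node in ready:
            op, deps = pending.pop(node)
            values[node] = op(*(values[d] for d in deps))
    return values

print(run(GRAPH, {"a": 7, "b": 3})["prod"])    # (7 + 3) * (7 - 3) = 40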

15.
In feature-oriented programming (FOP) a programmer decomposes a program in terms of features. Ideally, features are implemented modularly so that they can be developed in isolation. Access control mechanisms in the form of access or visibility modifiers are an important ingredient to attain feature modularity as they allow programmers to hide and expose internal details of a module's implementation. But developers of contemporary feature-oriented languages have not considered access control mechanisms so far. The absence of a well-defined access control model for FOP breaks encapsulation of feature code and leads to unexpected program behaviors and inadvertent type errors. We raise awareness of this problem, propose three feature-oriented access modifiers, and present a corresponding access modifier model. We offer an implementation of the model on the basis of a fully-fledged feature-oriented compiler. Finally, by analyzing ten feature-oriented programs, we explore the potential of feature-oriented modifiers in FOP.

16.
For the maintenance of software systems, developers have to completely understand the existing system. The usage of design patterns leads to benefits for new and young developers by enabling them to reuse the knowledge of their experienced colleagues. Design patterns can support a faster and better understanding of software systems. There are different approaches for supporting pattern recognition in existing systems by tools. They are evaluated by the Information Retrieval criteria precision and recall. An automated search based on structures has a highly positive influence on the manual validation of the results by developers. This validation of graphical structures is the most intuitive technique. In this paper a new approach for automated pattern search based on minimal key structures is presented. It is able to detect all patterns described by the GOF [15]. This approach is based on positive and negative search criteria for structures and is prototypically implemented using Rational Rose and Together.

17.
18.
In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a 'self-repair' operation in the event of agent loss to recover desirable information architecture properties, or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph called rigidity and global rigidity, which are required for formation shape maintenance and sensor network self-localization, respectively. For the self-repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that can be applied to make the local repair using only local information. We describe implementation and illustrate with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call these properties 2-vertex-rigidity and 2-vertex-global-rigidity, respectively). Information architectures with such properties would allow formation shape maintenance or self-localization to be performed even in the event of agent failure. We review a characterization of a class of 2-vertex-rigidity and develop a separate class, making significant strides towards a complete characterization. We also present a characterization of a class of 2-vertex-global-rigidity.

19.
Property-based testing has gained popularity in recent years in many areas of software development. The specification of assertions/properties helps to understand the semantics of pieces of code, and in modern programming environments, it can serve to test the program behavior. In this paper an XQuery property-based testing tool is presented, which makes it possible to test XQuery programs automatically. The tool is able to systematically generate XML instances (i.e., test cases) from a given XML schema, and to filter XML instances with input properties specified by the programmer. Additionally, the tool automatically checks output (respectively, input-output) properties in each output instance (respectively, each pair of input-output instances). The tool is able to report whether the XQuery program passes the test, that is, whether all the test cases satisfy the (input-)output property, as well as the number of test cases used for testing. In addition, if the XQuery program fails the test, the tool shows counterexamples found in the test cases. Properties are specified with XQuery Boolean functions, and the testing tool has been implemented in XQuery. Additionally, an XQuery path validation tool is presented. This tool is able to detect wrong paths in XQuery expressions. The path validation tool takes as input an XML schema, and it reports those paths in the XQuery program that do not match the XML schema. The path validation tool is a complement to the testing tool, rejecting XQuery programs that do not conform to the XML schema. The path validation tool has also been implemented in XQuery. Finally, a web tool has been developed that enables users to test and validate XQuery programs.
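The tool itself is specified and implemented in XQuery; as a language-neutral illustration of the same workflow, the sketch below uses Python with the hypothesis library to generate test cases, filter them with an input property, and check output and input-output properties (the program under test and the properties are invented).

from hypothesis import given, assume, strategies as st

def normalize(prices):
    """Toy program under test: drop non-positive prices and sort the rest."""
    return sorted(p for p in prices if p > 0)

@given(st.lists(st.integers(min_value=-100, max_value=100)))
def test_normalize(prices):
    assume(len(prices) > 0)                     # input property (filter)
    result = normalize(prices)
    assert result == sorted(result)             # output property: sorted
    assert all(p > 0 for p in result)           # output property: positive
    assert len(result) <= len(prices)           # input-output property

if __name__ == "__main__":
    test_normalize()   # hypothesis runs many generated test cases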

20.
In contemporary aspect-oriented languages, pointcuts are usually specified directly in terms of the structure of the source code. The definition of such low-level pointcuts requires aspect developers to have a profound understanding of the entire application's implementation and often leads to complex, fragile and hard-to-maintain pointcut definitions. To resolve these issues, we present an aspect-oriented programming system that features a logic-based pointcut language that is open such that it can be extended with application-specific pointcut predicates. These predicates define an application-specific model that serves as a contract that base program developers provide and aspect developers can depend upon. As a result, pointcuts can be specified in terms of this more high-level model of the application which confines all intricate implementation details that are otherwise exposed in the pointcut definitions themselves.
