Similar Literature
20 similar documents retrieved.
1.
Existing SPARQL-to-SQL translation techniques have limitations that reduce their robustness, efficiency and dependability. These limitations include the generation of inefficient or even incorrect SQL queries, lack of formal background, and poor implementations. Moreover, some of these techniques cannot be used over arbitrary DB schemas due to the lack of support for RDB to RDF mapping languages, such as R2RML. In this paper we present a technique (implemented in the -ontop- system) that tackles all these issues. We propose a formal approach for SPARQL-to-SQL translation that (i) generates efficient SQL by combining optimization techniques from the logic programming and SQL optimization fields; (ii) provides a well-defined specification of the SPARQL semantics used in the translation; and (iii) supports R2RML mappings over general relational schemas. We provide extensive benchmarks using the -ontop- system for Ontology Based Data Access (OBDA) and show that by using these techniques -ontop- is able to outperform well known SPARQL-to-SQL systems, as well as commercial triple stores, by several orders of magnitude.
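To make the flavour of such a translation concrete, the following Python sketch turns a single SPARQL basic graph pattern into SQL under a toy R2RML-style mapping. The mapping, table names, and columns are illustrative assumptions; the real -ontop- translation additionally applies the logic-programming and SQL optimizations described above.

```python
# Minimal sketch of mapping-based SPARQL-to-SQL translation (not the -ontop- algorithm).
# An R2RML-style mapping is reduced here to: predicate IRI -> (table, subject_column, object_column).
MAPPING = {
    "ex:name": ("person", "id", "name"),                 # hypothetical relational schema
    "ex:worksFor": ("employment", "person_id", "company_id"),
}

def translate_bgp(triples):
    """Translate a SPARQL basic graph pattern (list of (s, p, o), variables start with '?')
    into one SQL query by joining the mapped tables on shared variables."""
    selects, froms, conds, var_cols = [], [], [], {}
    for i, (s, p, o) in enumerate(triples):
        table, s_col, o_col = MAPPING[p]
        alias = f"t{i}"
        froms.append(f"{table} {alias}")
        for term, col in ((s, s_col), (o, o_col)):
            if term.startswith("?"):                      # variable: join or project
                if term in var_cols:
                    conds.append(f"{var_cols[term]} = {alias}.{col}")
                else:
                    var_cols[term] = f"{alias}.{col}"
                    selects.append(f"{alias}.{col} AS {term[1:]}")
            else:                                          # constant: filter
                conds.append(f"{alias}.{col} = '{term}'")
    sql = f"SELECT {', '.join(selects)} FROM {', '.join(froms)}"
    if conds:
        sql += " WHERE " + " AND ".join(conds)
    return sql

# All variables in the pattern are projected, e.g. { ?p ex:worksFor "acme" . ?p ex:name ?n }
print(translate_bgp([("?p", "ex:worksFor", "acme"), ("?p", "ex:name", "?n")]))
```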

2.
Test data generation is one of the most technically challenging steps of testing software, but most commercial systems currently incorporate very little automation for this step. This paper presents results from a project that is trying to find ways to incorporate test data generation into practical test processes. The results include a new procedure for automatically generating test data that incorporates ideas from symbolic evaluation, constraint-based testing, and dynamic test data generation. It takes an initial set of values for each input, and dynamically 'pushes' the values through the control-flow graph of the program, modifying the sets of values as branches in the program are taken. The result is usually a set of values for each input parameter that has the property that any choice from the sets will cause the path to be traversed. This procedure uses new analysis techniques, offers improvements over previous research results in constraint-based testing, and combines several steps into one coherent process. The dynamic nature of this procedure yields several benefits. Moving through the control flow graph dynamically allows path constraints to be resolved immediately, which is more efficient both in space and time, and more often successful than constraint-based testing. This new procedure also incorporates an intelligent search technique based on bisection. The dynamic nature of this procedure also allows certain improvements to be made in the handling of arrays, loops, and expressions; language features that are traditionally difficult to handle in test data generation systems. The paper presents the test data generation procedure, examples to explain the working of the procedure, and results from a proof-of-concept implementation. Copyright © 1999 John Wiley & Sons, Ltd.
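A minimal sketch of the "pushing values through branches" idea, assuming integer inputs and simple comparison branches; the paper's procedure also handles arrays, loops, and expressions and uses bisection as its search technique, none of which is shown here.

```python
# Sketch (assumed, simplified): dynamically "push" an input interval through the
# comparisons met along one program path, so any remaining value drives that path.
def narrow(interval, op, k):
    """Intersect the integer interval (lo, hi) with the constraint  x <op> k."""
    lo, hi = interval
    if op == ">":  lo = max(lo, k + 1)
    if op == ">=": lo = max(lo, k)
    if op == "<":  hi = min(hi, k - 1)
    if op == "<=": hi = min(hi, k)
    if op == "==": lo, hi = max(lo, k), min(hi, k)
    return (lo, hi) if lo <= hi else None      # None = the path is infeasible for this input

# Branches taken along the path under test (hypothetical example program):
#   if (x > 10) ... if (x <= 100) ...   (disequalities would be handled by splitting, omitted)
path_conditions = [(">", 10), ("<=", 100)]

domain = (-1000, 1000)
for op, k in path_conditions:
    domain = narrow(domain, op, k)
    if domain is None:
        break
print(domain)   # (11, 100): every value in this set traverses the sampled path
```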

3.
Automating Software Testing Using Program Analysis
During the last 10 years, code inspection for standard programming errors has largely been automated with static code analysis. During the next 10 years, we expect to see similar progress in automating testing, and specifically test generation, thanks to advances in program analysis, efficient constraint solvers, and powerful computers. Three new tools from Microsoft combine techniques from static program analysis, dynamic analysis, model checking, and automated constraint solving while targeting different application domains.

4.
Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools and compilers. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate a set of large benchmark programs of up to 5M lines of code each with which we evaluated different program analysis and testing tools and compilers. The generated benchmarks let us independently rediscover several issues in the evaluated tools. Copyright © 2014 John Wiley & Sons, Ltd.
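The core idea of stochastic parse trees can be sketched in a few lines of Python: production rules carry probabilities, and expansion samples rules according to those weights. The grammar below is a toy arithmetic grammar invented for illustration, not the Java grammar used by the tool.

```python
# Sketch (assumed grammar and weights): random program generation from a stochastic
# grammar, where each production carries the probability of being instantiated.
import random

GRAMMAR = {
    "expr": [(0.4, ["num"]),
             (0.3, ["expr", "+", "expr"]),
             (0.3, ["(", "expr", "*", "expr", ")"])],
}

def expand(symbol, depth=0, max_depth=6):
    if symbol not in GRAMMAR and symbol != "num":
        return symbol                                   # terminal token such as "+" or "("
    if symbol == "num" or depth >= max_depth:
        return str(random.randint(0, 9))                # leaf value; the depth cap forces termination
    weights, rules = zip(*GRAMMAR[symbol])
    rhs = random.choices(rules, weights=weights)[0]     # sample a production by its weight
    return " ".join(expand(s, depth + 1, max_depth) for s in rhs)

random.seed(1)
print(expand("expr"))   # prints a random expression; rule frequencies follow the assigned weights
```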

5.
With the advent of advanced program analysis and constraint solving techniques, several test generation tools use variants of symbolic execution. Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this paper, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution, while exploring only a fraction...
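One common way to realize such a search strategy is greedy best-first exploration guided by the distance to the nearest uncovered branch. The Python sketch below assumes callbacks for stepping a symbolic state and for the distance estimate; it illustrates the general scheme, not the specific strategies proposed in the paper.

```python
# Sketch (assumed interfaces): coverage-guided state selection for symbolic execution.
# Instead of plain DFS over paths, always expand the pending state whose next branch
# is estimated to be closest to a still-uncovered branch.
import heapq

def explore(initial_state, uncovered, distance_to_uncovered, step):
    """`step(state)` returns successor states, one per feasible branch side (the
    constraint solver prunes infeasible sides); each successor carries a `branch_id`.
    `distance_to_uncovered(state, uncovered)` is a static CFG distance estimate."""
    frontier = [(0, 0, initial_state)]
    tie = 0                                     # tie-breaker so states are never compared
    while frontier and uncovered:
        _, _, state = heapq.heappop(frontier)
        for succ in step(state):
            uncovered.discard(succ.branch_id)   # reaching this successor may cover a new branch
            tie += 1
            priority = distance_to_uncovered(succ, uncovered)
            heapq.heappush(frontier, (priority, tie, succ))
    return uncovered                            # whatever is left could not be covered
```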

6.
Static value analysis is a classical approach for verifying programs with floating-point computations. Value analysis mainly relies on abstract interpretation and over-approximates the possible values of program variables. State-of-the-art tools may however compute over-approximations that can be rather coarse for some very usual program expressions. In this paper, we show that constraint solvers can significantly refine approximations computed with abstract interpretation tools. More precisely, we introduce a hybrid approach combining abstract interpretation and constraint programming techniques in a single static and automatic analysis. This hybrid approach benefits from the strong points of abstract interpretation and constraint programming techniques, and thus, it is more effective than static analysers and constraint solvers, when used separately. We compared the efficiency of the system we developed—named rAiCp—with state-of-the-art static analyzers: rAiCp produces substantially more precise approximations and is able to check program properties on both academic and industrial benchmarks.
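The gain from combining the two techniques can be illustrated on a single expression: plain interval arithmetic for y = x*x - x over [0, 1] loses the correlation between the two occurrences of x, while a constraint-style domain-splitting pass recovers a much tighter range. The Python sketch below only illustrates that effect; it is not the rAiCp algorithm.

```python
# Sketch (assumed, simplified): refining a coarse interval computed by abstract
# interpretation with a constraint-style domain-splitting search.
def interval_eval(lo, hi):
    """Naive interval arithmetic for y = x*x - x: the two occurrences of x are
    treated as independent, as a plain abstract-interpretation pass would."""
    sq_lo = 0.0 if lo < 0 < hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - hi, sq_hi - lo

def refined_eval(lo, hi, splits=1024):
    """Constraint-style refinement: split the domain into small boxes, evaluate each,
    and take the union -- a much tighter over-approximation of the same expression."""
    width = (hi - lo) / splits
    bounds = [interval_eval(lo + i * width, lo + (i + 1) * width) for i in range(splits)]
    return min(b[0] for b in bounds), max(b[1] for b in bounds)

print(interval_eval(0.0, 1.0))   # coarse over-approximation: (-1.0, 1.0)
print(refined_eval(0.0, 1.0))    # refined: close to (-0.25, 0.0), the true range
```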

7.
Constraint solving is a frequent but expensive operation in symbolic-execution-based test generation. To improve the efficiency of test generation using constraint solving, four optimization techniques are usually applied to existing constraint solvers: constraint independence, constraint set simplification, constraint caching, and expression rewriting. In this paper, we conducted an empirical study using these four constraint optimization techniques in the well-known test generation tool KLEE on 77 GNU Coreutils applications, to systematically investigate how these optimization techniques affect the efficiency of test generation. The experimental results show that these constraint optimization techniques, as well as their combinations, do not significantly improve the efficiency of test generation across programs of all sizes. Moreover, we studied the constraint optimization techniques with respect to two static metrics of programs, lines of code (LOC) and cyclomatic complexity (CC). The experimental results show that the "constraint set simplification" technique can significantly improve the efficiency of test generation for programs with high LOC and CC values, while the "constraint caching" technique can significantly improve it for programs with low LOC and CC values. Finally, we propose four hybrid optimization strategies and practical guidelines based on different static metrics.
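Two of the four optimizations are easy to sketch: constraint independence (slicing the path condition down to the constraints that share variables with the query) and constraint caching (memoizing solver results keyed on the sliced constraint set). The Python sketch below assumes a backend solver object and constraint objects with a `.vars` attribute; it mirrors the general idea used in KLEE-style engines, not KLEE's actual code.

```python
# Sketch (assumed interfaces): constraint independence + constraint caching in front
# of a backend solver. Constraints are assumed hashable and to expose a set `.vars`.
class CachingSolver:
    def __init__(self, solver):
        self.solver = solver          # assumed backend exposing .solve(frozenset) -> model or None
        self.cache = {}
        self.hits = 0

    def relevant_slice(self, constraints, query_vars):
        """Constraint independence: keep only constraints transitively sharing
        variables with the query, so unrelated branches do not burden the solver."""
        keep, vars_ = [], set(query_vars)
        changed = True
        while changed:
            changed = False
            for c in constraints:
                if c not in keep and vars_ & c.vars:
                    keep.append(c)
                    vars_ |= c.vars
                    changed = True
        return frozenset(keep)

    def solve(self, constraints, query_vars):
        key = self.relevant_slice(constraints, query_vars)
        if key in self.cache:         # constraint caching: reuse an earlier result
            self.hits += 1
            return self.cache[key]
        result = self.solver.solve(key)
        self.cache[key] = result
        return result
```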

8.
Dynamic test case generation is an emerging class of software testing techniques. Because these techniques require no manual intervention, demand no special expertise from the tester, and report genuine program errors without false alarms, a growing number of researchers use them to find bugs in pre-release binary-level software. Existing techniques and their implementations, however, are not retargetable: they can only generate test cases and detect errors for binaries of one specific instruction set architecture (ISA). This paper proposes a new ISA-independent binary-level dynamic test case generation technique and Hunter, a system that implements it. Unlike previous dynamic test case generation techniques, Hunter is highly retargetable: it can check binaries of any ISA and generate, in a directed manner, test cases that steer execution down different paths. Hunter defines a meta instruction set architecture (MetaISA), maps all execution information collected while the binary runs onto MetaISA, and performs symbolic execution, constraint collection, constraint solving, and test case generation on the resulting MetaISA sequences, making the whole process ISA-independent. We implemented Hunter, retargeted it to the 32-bit x86, PowerPC, and Sparc ISAs, and used it to check six test programs with known bugs. The experimental results show that, thanks to MetaISA, Hunter can be retargeted to different ISAs easily and effectively with very little overhead, and that it can find deeply hidden bugs in binary applications written for the 32-bit x86, PowerPC, and Sparc ISAs.

9.
Benchmarks are vital tools in the performance measurement and evaluation of computer hardware and software systems. Standard benchmarks such as the TREC, TPC, SPEC, SAP, Oracle, Microsoft, IBM, Wisconsin, AS3AP, OO1, OO7, and XOO7 benchmarks have been used to assess system performance. These benchmarks are domain-specific in that they model typical applications and are tied to a problem domain. Test results from these benchmarks are estimates of possible system performance for certain pre-determined problem types. When the user domain differs from the standard problem domain, or when the application workload diverges from the standard workload, they do not provide an accurate way to measure the system performance of the user problem domain. System performance on the actual problem domain, in terms of data and transactions, may vary significantly from the standard benchmarks. In this research, we address the issue of domain boundness and workload boundness, which results in unrepresentative and irreproducible performance readings. We tackle the issue by proposing a domain-independent and workload-independent benchmark method developed from the perspective of the user requirements. We present a user-driven workload model to develop a benchmark through a process of workload requirements representation, transformation, and generation. We aim to create a more generalized and precise evaluation method that derives test suites from the actual user domain and application. The benchmark method comprises three main components: a high-level workload specification scheme, a translator of the scheme, and a set of generators to generate the test database and the test suite. The specification scheme is used to formalize the workload requirements, the translator to transform the specification, and the generators to produce the test database and the test workload. In web search, generic constructs are the main common carriers we adopt to capture and compose the workload requirements. We determined the requirements through analysis of the literature. In this study, we conducted ten baseline experiments to validate the feasibility and validity of the benchmark method, using an experimental prototype built to execute them. Experimental results demonstrate that the method is capable of modeling the standard benchmarks as well as more general benchmark requirements.
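The scheme-translator-generator pipeline can be pictured with a small Python sketch: a hypothetical workload specification is turned into a test database and a weighted transaction mix. The specification format, table, and templates below are invented for illustration and do not reproduce the paper's specification scheme.

```python
# Sketch (assumed spec format): a user-driven workload specification translated into
# a test database and a test workload, mirroring the scheme -> translator -> generator idea.
import random

SPEC = {
    "tables": {"orders": {"rows": 1000, "columns": {"id": "int", "amount": "float"}}},
    "transactions": [
        {"name": "lookup", "weight": 0.8, "template": "SELECT * FROM orders WHERE id = {id}"},
        {"name": "refund", "weight": 0.2, "template": "UPDATE orders SET amount = 0 WHERE id = {id}"},
    ],
}

def generate_database(spec):
    """Generator: produce test rows for each table described in the specification."""
    def value(col_type, i):
        return i if col_type == "int" else round(random.uniform(1, 500), 2)
    return {name: [{col: value(ty, i) for col, ty in table["columns"].items()}
                   for i in range(table["rows"])]
            for name, table in spec["tables"].items()}

def generate_workload(spec, n=10):
    """Translator + generator: instantiate transaction templates with the user-given mix."""
    txs = spec["transactions"]
    weights = [t["weight"] for t in txs]
    return [random.choices(txs, weights=weights)[0]["template"].format(id=random.randrange(1000))
            for _ in range(n)]

db = generate_database(SPEC)
for stmt in generate_workload(SPEC, n=5):
    print(stmt)
```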

10.
This paper describes MATISSE, a compiler able to translate a MATLAB subset to C targeting embedded systems. MATISSE uses LARA, an aspect-oriented programming language, to specify additional information and transformations to the input MATLAB code, for example, insertion of code for initialization of variables, and specification of types and shapes of variables. The compiler is being developed bearing in mind flexibility, multitarget and multitoolchain support, allowing for the generation of several implementations in C from the same reference code in MATLAB. In this paper, we also present a number of techniques being employed in MATLAB to C compilation, such as element-wise mapping operations, matrix views, weak types, and intrinsics. We validate these techniques using MATISSE and a set of representative benchmarks. More specifically, we evaluate the compiler with a set of 31 benchmarks using an embedded system board and a desktop computer. The results show speedups up to 1.8× by employing information provided by LARA aspects, when compared with C code generated without additional user information. When compared with the execution time of the original code running on MATLAB, the execution time of the generated C code achieved a geometric mean speedup of 13×. Copyright © 2016 John Wiley & Sons, Ltd.

11.
The role of information resource dictionary systems (data dictionary systems) is important in two phases of information resource management. First, information requirements analysis and specification, a complex activity requiring data dictionary support: the end result is the specification of an "Enterprise Model," which embodies the major activities, processes, information flows, organizational constraints, and concepts. This role is examined in detail after analyzing the existing approaches to requirements analysis and specification. Second, information modeling, which uses the information in the Enterprise Model to construct a formal, implementation-independent database specification: several information models and support tools that may aid in transforming the initial requirements into the final logical database design are examined. The metadata — knowledge about both data and processes — contained in the data dictionary can be used to provide views of data for the specialized tools that make up the database design workbench. The role of data dictionary systems in the integration of tools is discussed.

12.
Context: Declarative business processes are commonly used to describe permitted and prohibited actions in a business process. However, most current proposals of declarative languages fail in three aspects: (1) they tend to be oriented only towards the execution order of the activities; (2) optimization is oriented only towards minimizing the execution time or the resources used in the business process; and (3) declarative models cannot be executed in commercial Business Process Management Systems. Objective: This contribution aims to take these three aspects into account by means of: (1) the formalization of a hybrid model oriented towards optimizing the outcome data, combining a data-oriented declarative specification and a control-flow-oriented imperative specification; and (2) the automatic creation, from this hybrid model, of an imperative model that is executable in a standard Business Process Management System. Method: An approach based on the definition of a hybrid business process, which uses a constraint programming paradigm, is presented. This approach enables the optimized outcome data to be obtained at runtime for the various instances. Results: A language capable of defining a hybrid model is provided and applied to a case study. Likewise, the automatic creation of an executable constraint satisfaction problem is addressed, whose resolution allows us to attain the optimized outcome data. A brief computational study is also shown. Conclusion: A hybrid business process is defined for specifying the relationships between the declarative data components and the imperative control-flow components of a business process. In addition, the way in which this hybrid model automatically creates an entirely imperative model at design time is also defined. The resulting imperative model, executable in any commercial Business Process Management System, can obtain the optimized outcome data of the process at execution time.

13.
Industry, academia, and end users urgently need a big data benchmark with which to evaluate existing big data systems, improve current techniques, and develop new ones. This paper reviews the main work on big data benchmark development in recent years and compares their features and shortcomings. On this basis, it proposes a series of considerations for developing new big data benchmarks: 1) to evaluate both the individual sub-tools of a big data platform and the platform as a whole, component-oriented benchmarks and platform-wide benchmarks are needed, the latter being an organic combination of the former; 2) besides SQL queries, the workload must include the various complex analysis functions required by big data analytics tasks, covering all kinds of application needs; 3) as for evaluation metrics, in addition to performance metrics (response time and throughput), other metrics should also be considered, including system scalability, fault tolerance, energy efficiency, and security.

14.
Context: Despite the large number of publications on Search-Based Software Testing (SBST), there remain few publicly available tools. This paper introduces AUSTIN, a publicly available open source SBST tool for the C language. The paper is an extension of previous work [1]. It includes a new hill climb algorithm implemented in AUSTIN and an investigation into the effectiveness and efficiency of different pointer handling techniques implemented by AUSTIN’s test data generation algorithms. Objective: To evaluate the different search algorithms implemented within AUSTIN on open source systems with respect to effectiveness and efficiency in achieving branch coverage. Further, to compare AUSTIN against a non-publicly available, state-of-the-art Evolutionary Testing Framework (ETF). Method: First, we use example functions from open source benchmarks as well as common data structure implementations to check if the decision procedure for pointer inputs, introduced in this paper, differs in terms of effectiveness and efficiency compared to a simpler alternative that generates random memory graphs. A second empirical study formulates two alternate hypotheses regarding the effectiveness and efficiency of AUSTIN compared to the ETF. These hypotheses are tested using a paired Wilcoxon test. Results and Conclusion: The first study highlights some practical problems with the decision procedure for pointer inputs described in this paper. In particular, if the code under test contains insufficient guard statements to enforce constraints over pointers, then using a constraint solver for pointer inputs may be suboptimal compared to a method that generates random memory graphs. The programs used in the second study do not require any constraint solving for pointer inputs and consist of eight non-trivial, real-world C functions drawn from three embedded automotive software modules. For these functions, AUSTIN is competitive compared to the ETF, achieving an equal or higher branch coverage for six of the functions. In addition, for functions where AUSTIN’s branch coverage is equal or higher, AUSTIN is more efficient than the ETF.
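The hill climb at the heart of such SBST tools is easy to sketch: minimize a branch-distance fitness function by alternating small moves on each input. The Python sketch below targets a hypothetical branch `if (x == 2*y)` and is a generic alternating-variable hill climber, not AUSTIN's implementation.

```python
# Sketch (assumed, simplified): hill climbing on a branch-distance fitness function,
# the style of search SBST tools like AUSTIN use to cover a target branch.
def branch_distance(x, y):
    """Fitness for reaching the true side of the hypothetical target  if (x == 2 * y):
    zero means the branch is taken; smaller is closer."""
    return abs(x - 2 * y)

def hill_climb(inputs, fitness, step=1, max_iters=10_000):
    best = fitness(*inputs)
    for _ in range(max_iters):
        if best == 0:
            return inputs                              # target branch covered
        improved = False
        for i in range(len(inputs)):                   # alternating-variable exploration
            for delta in (-step, +step):
                candidate = list(inputs)
                candidate[i] += delta
                f = fitness(*candidate)
                if f < best:
                    inputs, best, improved = candidate, f, True
        if not improved:
            return None                                # stuck in a local optimum; restart elsewhere
    return None

print(hill_climb([100, 3], branch_distance))           # converges to inputs with x == 2 * y
```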

15.
The widespread use of service-oriented architectures (SOAs) and Web services in commercial software requires the adoption of development techniques that ensure the quality of Web services. Testing techniques and tools play a critical role in achieving the quality of SOA-based systems, yet existing techniques and tools for traditional systems are not suited to these new systems, so testing techniques and tools specific to Web services need to be developed. This article presents new testing techniques to automatically generate a set of test cases and data for Web services. The techniques presented here perturb the data of Web service messages with respect to data types, integrity, and consistency. To support these techniques, a tool (GenAutoWS) was developed and applied to real problems.
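Data perturbation of this kind can be sketched as type-aware mutation of the fields of a decoded message: each test case copies the request and mutates exactly one field. The payload and the mutation catalogue below are illustrative assumptions, not GenAutoWS's rules.

```python
# Sketch (assumed message format): data perturbation of a Web service request,
# mutating values by data type to probe type, integrity, and consistency handling.
import copy

def perturb(value):
    """Return a few type-aware mutations of one field value."""
    if isinstance(value, bool):                           # bool checked before int on purpose
        return [not value]
    if isinstance(value, int):
        return [0, -value, value + 1, 2**31 - 1]          # boundary and sign mutations
    if isinstance(value, float):
        return [0.0, -value, float("nan")]
    if isinstance(value, str):
        return ["", value * 100, "<tag>", "' OR '1'='1"]   # empty, oversized, markup, injection
    return [None]

def generate_tests(request):
    """One test case per (field, mutation): copy the request and perturb a single field."""
    tests = []
    for field, value in request.items():
        for mutated in perturb(value):
            case = copy.deepcopy(request)
            case[field] = mutated
            tests.append(case)
    return tests

# Hypothetical SOAP/REST payload already decoded into a dictionary.
print(len(generate_tests({"orderId": 42, "customer": "alice", "amount": 19.9, "express": True})))
```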

16.
The merger of three-dimensional graphics with the X Window System has recently been standardized by adapting PHIGS, the Programmer's Hierarchical Interactive Graphics System, to the X Window System with PEX, the PHIGS Extension to X. The standard programming library for PEX has been defined to be identical to PHIGS PLUS allowing PHIGS programs to port directly to the X environment. X uses a client server model to run applications as client processes which communicate with a server to perform graphical display and input. For improved performance, the PEX extension defines new server resources to reduce network traffic and to take advantage of graphics hardware existing on high-end servers. A side effect of this distributed model of computation is a distribution of PHIGS structures leading to a relaxation of the exclusive access which a PHIGS application usually maintains over its Central Structure Store. We exploit the distributed nature of a PEX/PHIGS client's Central Structure Store to provide access to it for other applications besides the originating PEX/PHIGS client. We refer to these other applications as tools since one of our primary goals is to create development tools for PHIGS programmers. Rather than concentrate on particular debugging tools, we focus upon easing the process of actually developing tools. Our goal is to supply a collection of routines which can be used by PHIGS programmers to create custom tools or other programs which require access to the graphics data of remote PHIGS processes. Our Tool Development Library provides the PHIGS programmer a small number of management routines which orchestrate the connection and mapping to the data of one or more remote PHIGS applications. Manipulation of remote PHIGS structures is accomplished just as easily as local operations and is performed using standard PHIGS calls. The remote application being accessed requires no changes to its source code. Obvious uses for the Tool Development Library are in the construction of PHIGS tools such as structure browsers, editors and debugging aids. Less obvious is the potential for developing collections of cooperating graphics applications which share graphics data.

17.
Testing is the most dominant validation activity used by industry today, and there is an urgent need for improving its effectiveness, both with respect to the time and resources for test generation and execution, and obtained test coverage. We present a new technique for automatic generation of real-time black-box conformance tests for non-deterministic systems from a determinizable class of timed automata specifications with a dense time interpretation. In contrast to other attempts, our tests are generated using a coarse equivalence class partitioning of the specification. To analyze the specification, to synthesize the timed tests, and to guarantee coverage with respect to a coverage criterion, we use the efficient symbolic techniques recently developed for model checking of real-time systems. Application of our prototype tool to a realistic specification shows promising results in terms of both the test suite size, and the time and space used for test generation.

18.
Testing is often cited as one of the most costly operations in testing dependable systems (Heimdahl et al. 2001). A particularly challenging task in testing is test-case generation. To improve the efficiency of test-case generation and reduce its cost, automated formal verification techniques such as model checking have recently been extended to automate test-case generation processes. In model-checking-assisted test-case generation, a test criterion is formulated as temporal logic formulae, which are used by a model checker to generate test cases satisfying the test criterion. Traditional test criteria such as the branch coverage criterion and newer temporal-logic-inspired criteria such as the property coverage criteria (Tan et al. 2004) are used with model-checking-assisted test generation. Two key questions in model-checking-assisted test generation are how efficiently a model checker may generate test suites for these criteria and how effective these test suites are. To answer these questions, we developed a unified framework for evaluating (1) the effectiveness of the test criteria used with model-checking-assisted test-case generation and (2) the efficiency of test-case generation for these criteria. The benefits of this work are three-fold: first, the computational study carried out in this work provides some measurements of the effectiveness and efficiency of various test criteria used with model-checking-assisted test case generation. These performance measurements are important factors to consider when a practitioner selects appropriate test criteria for an application of model-checking-assisted test generation. Second, we propose a unified test generation framework based on generalized Büchi automata. The framework uses the same model checker, in this case the SPIN model checker (Holzmann 1997), to generate test cases for different criteria and compare them on a consistent basis. Last but not least, we describe in great detail the methodology and automated test generation environment that we developed on the basis of our unified framework. Such details would be of interest to researchers and practitioners who want to use and extend this unified framework and its accompanying tools.
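A common way to drive a model checker into producing tests is through "trap" properties: for every coverage obligation, claim it can never be reached, and harvest the counterexample trace as a test case. The Python sketch below only illustrates that workflow; the branch labels and the model-checker wrapper are assumptions, and the paper's framework is built on generalized Büchi automata rather than this simplified form.

```python
# Sketch (assumed workflow): model-checking-assisted test generation with trap
# properties. For each coverage obligation we claim it is never reached; the model
# checker refutes the claim and its counterexample trace becomes the test case.
BRANCH_LABELS = ["b1_true", "b1_false", "b2_true"]   # hypothetical branch labels in the model

def trap_properties(labels):
    """One never-claim per branch: '[] !label' (the label is never observed)."""
    return {lbl: f"ltl trap_{lbl} {{ [] !{lbl} }}" for lbl in labels}

def tests_from_counterexamples(check):
    """`check(claim)` is an assumed wrapper around the model checker; it returns a
    counterexample trace if the claim is violated, else None (branch unreachable)."""
    suite = {}
    for lbl, claim in trap_properties(BRANCH_LABELS).items():
        trace = check(claim)
        if trace is not None:
            suite[lbl] = trace          # the trace doubles as an executable test case
    return suite

# Toy stand-in for the model checker: pretend every branch except b2_true is reachable.
print(tests_from_counterexamples(lambda claim: None if "b2_true" in claim else ["step1", "step2"]))
```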

19.
This paper reviews the current status of both research and commercial testing systems, and addresses the features necessary for a commercial test system. These include test case specification, test data generation, testbed generation, program instrumentation, automatic test execution and validation, as well as dynamic analysis of control and data flow. Of particular value is the linking of the details of the test to the program specification by means of an assertion language. These and other features are then described within the context of , an integrated system for testing Assembler, , and / 1 programs in a simulated test environment. This system is now being used to validate programs in a test laboratory.

20.
Real defects (e.g., resistive stuck-at or bridging faults) in very large-scale integration (VLSI) circuits cause intermediate voltages which cannot be modeled as ideal shorts. In this paper, we first show that the traditional zero-resistance model is not sufficient for fault simulation. Then, we present a resistive fault model for real defects and use fuzzy logic techniques for fault simulation and test pattern generation at the gate level. Our method uses a Takagi-Sugeno (TS) fuzzy system to accurately model digital VLSI circuits and produces much more realistic fault coverage compared to conventional methods. The experimental results include the fault coverage and test-pattern statistics for the ISCAS85 benchmarks.
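The use of a Takagi-Sugeno system here can be pictured with a two-rule example that maps a bridging-defect resistance to an intermediate node voltage instead of an ideal short. The membership functions, rule consequents, and thresholds below are invented for illustration, not the paper's calibrated model.

```python
# Sketch (assumed membership functions): a tiny Takagi-Sugeno style fuzzy rule base
# that maps a bridging-defect resistance to an intermediate output voltage, instead
# of the ideal-short (0 ohm) assumption of the traditional fault model.
def mu_low(r_ohm):        # membership of "low resistance" (behaves like a hard short)
    return max(0.0, min(1.0, (2000.0 - r_ohm) / 2000.0))

def mu_high(r_ohm):       # membership of "high resistance" (fault barely observable)
    return 1.0 - mu_low(r_ohm)

def ts_output_voltage(r_ohm, vdd=1.2):
    """TS inference: rule consequents are crisp voltages, combined by the
    weighted average of the rule firing strengths."""
    rules = [(mu_low(r_ohm), 0.5 * vdd),    # strong bridge pulls the node to mid-rail
             (mu_high(r_ohm), 1.0 * vdd)]   # weak bridge leaves the node near VDD
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den

for r in (100, 1000, 5000):
    v = ts_output_voltage(r)
    # Values between the assumed logic thresholds are reported as indeterminate ("X").
    print(r, round(v, 3), "interpreted as", "X" if 0.3 * 1.2 < v < 0.7 * 1.2 else "1")
```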
