Similar Literature (20 results)
1.
Code transformation and analysis tools provide support for software engineering tasks such as style checking, testing, calculating software metrics, and reverse- and re-engineering. In this paper we describe the architecture and applications of JTransform, a general Java source code processing and transformation framework. It consists of a Java parser generating a configurable parse tree and various visitors (transformers, tree evaluators) which produce different kinds of output. While our framework is written in Java, the paper also opens an opportunity for a new generation of XML-based source code tools. Copyright © 2004 John Wiley & Sons, Ltd.
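The abstract does not spell out JTransform's API, but the parse-tree-plus-visitors design it describes can be sketched as follows; class and method names here are illustrative assumptions, not JTransform's real interfaces.

```java
import java.util.ArrayList;
import java.util.List;

public class ParseTreeDemo {
    /** A parse-tree node with a node type and a list of children. */
    static class ParseNode {
        final String type;
        final List<ParseNode> children = new ArrayList<>();
        ParseNode(String type) { this.type = type; }
        <R> R accept(TreeVisitor<R> v) { return v.visit(this); }
    }

    /** Visitors ("tree evaluators") walk the tree and produce some output. */
    interface TreeVisitor<R> {
        R visit(ParseNode node);
    }

    /** Example visitor: counts method declarations as a toy software metric. */
    static class MethodCountVisitor implements TreeVisitor<Integer> {
        public Integer visit(ParseNode node) {
            int count = node.type.equals("Method") ? 1 : 0;
            for (ParseNode child : node.children) {
                count += child.accept(this);
            }
            return count;
        }
    }

    public static void main(String[] args) {
        ParseNode cls = new ParseNode("ClassDecl");
        cls.children.add(new ParseNode("Method"));
        cls.children.add(new ParseNode("Method"));
        System.out.println("methods: " + cls.accept(new MethodCountVisitor()));
    }
}
```

Other visitors (style checkers, transformers, XML emitters) would plug into the same interface, which is the kind of extensibility the abstract attributes to the framework.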

2.
C static analysis tools often use intermediate representations (IRs) that organize program data in a simple, well-structured manner. However, the C parsers that create IRs are slow, and because they are difficult to write, only a few implementations exist, limiting the languages in which a C static analysis can be written. To solve these problems, we investigate two language-independent, on-disk representations of C IRs: one using XML and the other using an Internet-standard binary encoding called eXternal Data Representation (XDR). We benchmark the parsing speeds of both options, finding the XML about a factor of 2 slower to parse than the C itself and the XDR over 6 times faster. Furthermore, we show that the XML files are far too large, at 19 times the size of the C source code, whereas the XDR is only 2.2 times the C size. We also demonstrate the portability of our XDR system by presenting a C source code querying tool written in Ruby. Our solution and the insights we gained from building it will be useful to analysis authors and other clients of C IRs. We have made our software freely available for download at http://www.cs.umd.edu/projects/PL/scil/. Copyright © 2010 John Wiley & Sons, Ltd.
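To make the size gap concrete, here is a toy, hedged sketch (not the paper's actual IR schema or byte layout) that encodes a single invented IR record both as XML text and as an XDR-style big-endian binary record; Java's DataOutputStream already writes big-endian values, which matches XDR conventions.

```java
// Toy comparison: one IR record (an id, a line number, and a name) encoded
// as XML versus an XDR-style fixed binary layout. The fields are invented
// purely to illustrate why the binary form is smaller and simpler to parse.
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class IrEncodingDemo {
    public static void main(String[] args) throws IOException {
        int id = 42, line = 17;
        String name = "counter";

        // XML form: self-describing but verbose.
        String xml = "<vardef id=\"" + id + "\" line=\"" + line
                   + "\"><name>" + name + "</name></vardef>";

        // XDR-style form: two 4-byte integers plus a length-prefixed,
        // 4-byte-aligned string.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(id);
        out.writeInt(line);
        byte[] bytes = name.getBytes(StandardCharsets.US_ASCII);
        out.writeInt(bytes.length);
        out.write(bytes);
        out.write(new byte[(4 - bytes.length % 4) % 4]);  // XDR padding to 4 bytes
        out.flush();

        System.out.println("XML bytes: " + xml.getBytes(StandardCharsets.UTF_8).length);
        System.out.println("XDR-style bytes: " + buf.size());
    }
}
```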

3.
As Grids rapidly expand in size and complexity, the task of benchmarking and testing, whether interactive or unattended, quickly becomes unmanageable. In this article we describe the difficulties of testing and benchmarking resources in large Grid infrastructures, and we present the software architecture and implementation of GridBench, an extensible tool for testing, benchmarking and ranking Grid resources. We give an overview of the GridBench services and tools, which support the easy definition, invocation and management of tests and benchmarking experiments. We also show how the tool can be used in the analysis of benchmarking results, and how the measurements can complement the information provided by Grid Information Services and serve as a basis for resource selection and user-driven resource ranking. To illustrate the usage of the tool, we describe scenarios for using the GridBench framework to perform test/benchmark experiments and analyze the results.

4.
Because dedicated performance-testing tools for distributed file systems are still lacking, many studies simply reuse traditional file system benchmarking tools and methods, which can hardly satisfy the requirements of distributed file system performance testing (such as avoiding cache effects and running concurrent multi-client tests). We therefore developed an online performance-testing platform for distributed file systems. By providing a variety of large-data-volume test workloads, the platform effectively avoids the influence of system caching and system aging on test results; it supports online performance testing of distributed file systems as well as concurrent multi-client testing, and it can visualize and analyze the test results. The platform is also designed to be extensible, so that it can accommodate a variety of testing requirements.

5.
Grammars are traditionally used to recognize or parse sentences in a language, but they can also be used to generate sentences. In grammar-based test generation (GBTG), context-free grammars are used to generate sentences that are interpreted as test cases. A generator reads a grammar G and generates L(G), the language accepted by the grammar. Often L(G) is so large that it is not practical to execute all of the generated cases. Therefore, GBTG tools support 'tags': extra-grammatical annotations which restrict the generation. Since its introduction in the early 1970s, GBTG has become well established: proven on industrial projects and widely published in academic venues. Despite the demonstrated effectiveness, tool support is uneven; some tools target specific domains, e.g. compiler testing, while others are proprietary. The tools can be difficult to use, and the precise meaning of the tags is sometimes unclear. As a result, while many testing practitioners and researchers are aware of GBTG, few have detailed knowledge or experience. We present YouGen, a new GBTG tool supporting many of the tags provided by previous tools. In addition, YouGen incorporates covering-array tags, which support a generalized form of pairwise testing. These tags add considerable power to GBTG tools and have been available only in limited form in previous GBTG tools. We provide semantics for the YouGen tags using parse trees and a new construct, generation trees. We illustrate YouGen with both simple examples and a number of industrial case studies. Copyright © 2010 John Wiley & Sons, Ltd.
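As a minimal, hedged sketch of the GBTG idea (not YouGen's tag notation), the following program expands a tiny context-free grammar into sentences up to a depth bound and treats each sentence as a test case; the grammar and the depth limit are invented stand-ins for the richer tag mechanisms the abstract describes.

```java
// Minimal grammar-based test generation: exhaustively expand a tiny grammar
// (expr -> num | "(" expr "+" expr ")") up to a depth bound and collect the
// resulting sentences as test inputs.
import java.util.ArrayList;
import java.util.List;

public class TinyGbtg {
    static List<String> expand(String symbol, int depth) {
        if (symbol.equals("num")) return List.of("1");
        if (!symbol.equals("expr")) return List.of(symbol);   // terminal symbol
        List<String> out = new ArrayList<>(expand("num", depth));
        if (depth > 0) {
            for (String left : expand("expr", depth - 1)) {
                for (String right : expand("expr", depth - 1)) {
                    out.add("(" + left + "+" + right + ")");
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (String testCase : expand("expr", 2)) {
            System.out.println(testCase);   // each sentence is one test case
        }
    }
}
```

Real tags would restrict or steer this expansion (for example, bounding how often a rule may recurse or requesting covering-array combinations) instead of the crude depth limit used here.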

6.
Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools and compilers. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate a set of large benchmark programs of up to 5M lines of code each, with which we evaluated different program analysis and testing tools and compilers. The generated benchmarks let us independently rediscover several issues in the evaluated tools. Copyright © 2014 John Wiley & Sons, Ltd.
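A minimal sketch of the stochastic parse tree idea: each production rule of a nonterminal carries a probability, and the generator samples rules according to those weights while emitting a random program. The grammar, the weights, and the fixed seed (used here so the generated benchmark is reproducible) are invented for illustration and are not the paper's actual tool.

```java
// Stochastic generation sketch: a "statement" is an assignment with
// probability 0.7 or an if-block with probability 0.3; a depth bound keeps
// the generated program finite.
import java.util.Random;

public class StochasticGen {
    static final Random RNG = new Random(12345);   // fixed seed: reproducible output

    static String statement(int depth) {
        if (depth == 0 || RNG.nextDouble() < 0.7) {
            return "x = x + 1;";
        }
        return "if (x > 0) { " + statement(depth - 1) + " }";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(statement(3));
        }
    }
}
```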

7.
XML has been acknowledged as the de facto standard for data representation and exchange over the World Wide Web. Being self-describing grants XML its great flexibility and wide acceptance, but it is also the cause of its main drawback: XML documents are huge in size. The large document size means that the amount of information that has to be transmitted, processed, stored, and queried is often larger than with other data formats. Several XML compression techniques have been introduced to deal with these problems. In this paper, we provide a complete survey of the state of the art in XML compression techniques. In addition, we present an extensive experimental study of the available implementations of these techniques. We report the behavior of nine XML compressors using a large corpus of XML documents that covers different natures and scales of XML documents. In addition to assessing and comparing the performance characteristics of the evaluated XML compression tools, the study also tries to assess the effectiveness and practicality of using these tools in the real world. Finally, we provide some guidelines and recommendations to help developers and users make an effective decision when selecting the most suitable XML compression tool for their needs.

8.
Software tools are fundamental to the comprehension, analysis, testing and debugging of application systems. A necessary first step in the development of many tools is the construction of a parser front-end that can recognize the implementation language of the system under development. In this paper, we describe our use of token decoration to facilitate recognition of ambiguous language constructs. We apply our approach to the C++ language, since its grammar is replete with ambiguous derivations such as the declaration/expression and template-declaration/expression ambiguities. We describe our implementation of a parser front-end for C++, keystone, and we describe our results in decorating tokens for our test suite, including the examples from Clause Three of the C++ standard. We are currently exploiting the keystone front-end to develop a taxonomy for implementation-based class testing and to reverse-engineer Unified Modeling Language (UML) class diagrams. Copyright © 2002 John Wiley & Sons, Ltd.
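The abstract does not show keystone's internals, but the following hedged sketch illustrates the general idea of token decoration: a pre-parse pass annotates each identifier token with extra information (here, whether it names a known type) that a later parsing stage can use to resolve declaration/expression ambiguities. All names are invented, and the real tool targets C++ rather than Java.

```java
// Token decoration sketch: mark identifier tokens as TYPE_NAME or PLAIN_NAME
// before parsing, so the parser can disambiguate constructs like "T(x);".
import java.util.List;
import java.util.Set;

class Token {
    enum Decoration { UNDECORATED, TYPE_NAME, PLAIN_NAME }
    final String text;
    Decoration decoration = Decoration.UNDECORATED;
    Token(String text) { this.text = text; }
}

class Decorator {
    private final Set<String> knownTypes;
    Decorator(Set<String> knownTypes) { this.knownTypes = knownTypes; }

    void decorate(List<Token> tokens) {
        for (Token t : tokens) {
            t.decoration = knownTypes.contains(t.text)
                    ? Token.Decoration.TYPE_NAME
                    : Token.Decoration.PLAIN_NAME;
        }
    }

    public static void main(String[] args) {
        List<Token> toks = List.of(new Token("T"), new Token("x"));
        new Decorator(Set.of("T")).decorate(toks);
        for (Token t : toks) System.out.println(t.text + " -> " + t.decoration);
    }
}
```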

9.
Many real-time systems are safety- and security-critical systems and, as a result, tools and techniques for verifying them are extremely important. Simulating and testing such systems can be exceedingly time-consuming, and these techniques provide only probabilistic measures of correctness. There are a number of model-checking tools for real-time systems. Although they provide formal verification for models, we still need to implement these models. To increase confidence in real-time programs written in real-time Java, this paper proposes a model-based approach to the development of such programs. First, models can be mechanically verified, to check whether they satisfy particular properties, by using current real-time model-checking tools. Then, programs can be derived from the model by following a systematic approach. We introduce the Timed Automata to RTSJ Tool (TART), a prototype tool that automatically generates real-time Java code from the model. Finally, we show the applicability of our approach by means of four examples: a gear controller, an audio/video protocol, a producer/consumer and the Fischer protocol. Copyright © 2011 John Wiley & Sons, Ltd.

10.
Property-based testing has gained popularity in recent years in many areas of software development. The specification of assertions/properties helps to understand the semantics of pieces of code, and in modern programming environments it can serve to test program behavior. In this paper an XQuery property-based testing tool is presented, which makes it possible to test XQuery programs automatically. The tool is able to systematically generate XML instances (i.e., test cases) from a given XML schema, and to filter XML instances with input properties specified by the programmer. Additionally, the tool automatically checks output (respectively, input-output) properties in each output instance (respectively, each pair of input-output instances). The tool reports whether the XQuery program passes the test, that is, whether all the test cases satisfy the (input-)output property, as well as the number of test cases used for testing. In addition, if the XQuery program fails the test, the tool shows counterexamples found in the test cases. Properties are specified with XQuery Boolean functions, and the testing tool has been implemented in XQuery. Additionally, an XQuery path validation tool is presented. This tool is able to detect wrong paths in XQuery expressions. The path validation tool takes as input an XML schema, and it reports those paths in the XQuery program that do not match the XML schema. The path validation tool complements the testing tool by rejecting XQuery programs that do not conform to the XML schema. The path validation tool has also been implemented in XQuery. Finally, a web tool has been developed that enables testing and validation of XQuery programs.
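The testing loop the abstract describes (generate test cases, filter by an input property, run the program under test, check an output property, report counterexamples) has a generic shape. The sketch below expresses that shape in Java for illustration only; the actual tool is implemented in XQuery and works on XML instances generated from a schema.

```java
// Generic property-based testing loop: keep inputs that satisfy the input
// property, run the program under test, and collect counterexamples that
// violate the output property.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class PropertyTester {
    static <I, O> List<I> run(List<I> testCases,
                              Predicate<I> inputProperty,
                              Function<I, O> programUnderTest,
                              Predicate<O> outputProperty) {
        List<I> counterexamples = new ArrayList<>();
        for (I input : testCases) {
            if (!inputProperty.test(input)) continue;            // filter inputs
            if (!outputProperty.test(programUnderTest.apply(input))) {
                counterexamples.add(input);                      // property violated
            }
        }
        return counterexamples;
    }

    public static void main(String[] args) {
        // Toy example: the "program" doubles a number; the output property is
        // that the result is even, so the test should pass.
        List<Integer> bad = run(List.of(1, 2, 3, 4),
                                n -> n > 0,
                                n -> n * 2,
                                r -> r % 2 == 0);
        System.out.println(bad.isEmpty() ? "test passed" : "counterexamples: " + bad);
    }
}
```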

11.
There are two main approaches to managing changes in XML documents: change-tracking and diff. Change-tracking tools, which record edit actions while they are performed on the source document, are able to capture the exact editing process. That is much more difficult for diff algorithms, which have to reconstruct it by comparing two different versions. Interestingly, these algorithms process both text-centric and data-centric XML documents the same way. In this paper, we show that more accurate, clear, and human-readable results can be achieved on text-centric resources by employing specific models and algorithms. We describe and discuss a specialized diff algorithm for such a class of documents. We also compare a Java implementation of the algorithm, named JNDiff, with other general-purpose or data-oriented diff tools, focusing on the quality of their output. Copyright © 2014 John Wiley & Sons, Ltd.

12.
In this paper, we consider ANSI C program slicing using XML (Extensible Markup Language). Our goal is to build a flexible, useful and uniform data interchange format for CASE tools, which is key to making it much easier to develop CASE tools such as program slicers. Although XML has great potential for such data interchange formats, we first point out that there are still many challenging problems to be solved. Then, as a first step toward our goal, we introduce ACML (ANSI C Markup Language), which describes the syntactic structure and static semantics of ANSI C code. In a preliminary experiment we obtained a good result: it took only 0.5 man-months to implement Weiser's slicer on top of ACML, whereas it took about 2 man-months to implement the ANSI C parser and static semantics analyzer of XCI (Experimental C Interpreter).
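ACML's schema is not given in the abstract, but a tool built on such an XML encoding would consume it with an ordinary XML parser. The hedged sketch below uses the standard Java DOM API to read an invented ACML-like fragment and list the variables each statement uses, which is the kind of def/use information a Weiser-style slicer starts from; the element and attribute names are assumptions, not ACML's real schema.

```java
// Read an ACML-like XML fragment with the standard DOM parser and report,
// for each variable occurrence, the source line of the enclosing statement.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class AcmlLikeReader {
    public static void main(String[] args) throws Exception {
        String xml = "<stmt line='3'><var>x</var><var>y</var></stmt>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList vars = doc.getElementsByTagName("var");
        for (int i = 0; i < vars.getLength(); i++) {
            Element var = (Element) vars.item(i);
            String line = ((Element) var.getParentNode()).getAttribute("line");
            System.out.println("line " + line + " uses " + var.getTextContent());
        }
    }
}
```

This illustrates the interchange argument: once the syntax and static semantics are available as XML, a slicer can be written against a standard parser instead of a hand-built C front end.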

13.
Alex Dvinsky, Roy Friedman. Software, 2015, 45(10): 1429-1455
This paper reports on our experience in designing and developing Chameleon, a highly portable and adaptable group communication framework for smartphones. Chameleon owes its level of portability to several design choices, including the following: (i) a layered architecture, where the headers of each layer have a standard XML-based format, enabling automatic, error-resistant generation of efficient serialization code on any platform; (ii) reliance only on the J2ME library, which serves as a least common denominator for Java dialects and facilitates automatic translation to .NET; (iii) flexible membership models; and (iv) support for multiple concurrent protocol stacks. Through a single codebase, Chameleon is currently available as an open-source project for J2ME, J2SE, Android, .NET CF, and .NET. Chameleon is easily extendable and is bundled with tools, configurations, and third-party code tuned in a way that lifts some of the burden normally associated with multi-platform development for smartphones. Both the header generation from XML and the automatic translation to .NET are readily available to any application that builds on Chameleon. Chameleon's threading model separates the execution of internal layers from the application's code, thereby protecting each from the other. As we describe in the paper, this simplifies layer development and allows the protocol stack to block application calls easily when internal algorithms require it. Additionally, this model simplifies testing, and an extensive testing framework is supplied along with Chameleon, which is also usable for testing application-specific layers. Copyright © 2014 John Wiley & Sons, Ltd.

14.
People who classify and identify things based on their observable or deducible properties (called "characters" by biologists) can benefit from databases and keys that assist them in naming a specimen. This paper discusses our approach to generating an identification tool based on the field guide concept. Our software accepts character lists either expressed as XML (which biologists rarely provide knowingly, although most databases can now export XML) or via ODBC connections to the data author's relational database. The software then produces an Electronic Field Guide (EFG) implemented as a collection of Java servlets. The resulting guide answers queries made locally to a backend, or to Internet data sources via HTTP, and returns XML. If, however, the query client requires HTML (e.g., if the EFG is responding to a human-centric browser interface that we or the remote application provides), or if some specialized XML is required, then the EFG forwards the XML to a servlet that applies an XSLT transformation to provide the look and feel that the client application requires. We compare our approach to the architecture of other taxon identification tools. Finally, we discuss how we combine this service with other biodiversity data services on the web to build integrated applications.
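The final XSLT step the abstract describes (the EFG returns XML, and a stylesheet adapts it to the look and feel the client requires) can be illustrated with the standard javax.xml.transform API; the stylesheet and file names below are placeholders, not the EFG's actual resources.

```java
// Apply an XSLT stylesheet to an XML result to produce HTML, as a servlet in
// the described architecture would do before returning the page to a browser.
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltStep {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("fieldguide.xsl")));
        t.transform(new StreamSource(new File("specimens.xml")),
                    new StreamResult(new File("specimens.html")));
    }
}
```

Swapping the stylesheet changes the presentation without touching the query backend, which is the point of keeping the EFG's native output as XML.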

15.
XML is a markup language used to describe data or documents. The main goal of XML is to facilitate the sharing of data across diverse information systems, especially via the Internet. XML Stylesheet Transformations (XSLT) is a standard approach to describing how to transform an XML document into another data format. Web technologies, used in ever-increasing numbers in our everyday lives, commonly employ XSLT to support data exchange among heterogeneous environments, and the growing burden on XSLT processors has increased the demand for high-performance XSLT processors. In this paper, we present an XSLT compiler, named Zebu, which can transform an XSLT stylesheet into the corresponding C program. The compiled program can then be used to transform documents without processing the XSLT stylesheet at transformation time. The results of experimental testing using standard benchmarks show that the proposed XSLT compiler performs well in processing XML transformations. Copyright © 2011 John Wiley & Sons, Ltd.

16.
17.
18.
Parallel programs present features such as concurrency, communication and synchronization that make testing a challenging activity. Because of these characteristics, the direct application of traditional testing is not always possible, and adequate testing criteria and tools are necessary. In this paper we investigate the challenges of validating message-passing parallel programs and present a set of specific testing criteria. We introduce a family of structural testing criteria based on a test model. The model captures the control and data flow of message-passing programs, considering both their sequential and parallel aspects. The criteria provide a coverage measure that can be used for evaluating the progress of the testing activity and also provide guidelines for the generation of test data. We also describe a tool, called ValiPar, which supports the application of the proposed testing criteria. Currently, ValiPar is configured for Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI). Results of the application of the proposed criteria to MPI programs are also presented and analyzed. Copyright © 2008 John Wiley & Sons, Ltd.

19.
Temporal XML: modeling, indexing, and query processing
In this paper we address the problem of modeling and implementing temporal data in XML. We propose a data model for tracking historical information in an XML document and for recovering the state of the document as of any given time. We study the temporal constraints imposed by the data model, and present algorithms for validating a temporal XML document against these constraints, along with methods for fixing inconsistent documents. In addition, we discuss different ways of mapping the abstract representation into a temporal XML document, and introduce TXPath, a temporal XML query language that extends XPath 2.0. In the second part of the paper, we present our approach for summarizing and indexing temporal XML documents. In particular, we show that by indexing continuous paths, i.e., paths that are valid continuously during a certain interval in a temporal XML graph, we can dramatically increase query performance. To achieve this, we introduce a new class of summaries, denoted TSummary, that adds the time dimension to the well-known path summarization schemes. Within this framework, we present two new summaries: LCP and Interval summaries. The indexing scheme, denoted TempIndex, integrates these summaries with additional data structures. We give a query processing strategy based on TempIndex and a type of ancestor-descendant encoding, denoted temporal interval encoding. We present a persistent implementation of TempIndex, and a comparison against a system based on a non-temporal path index and one based on DOM. Finally, we sketch a language for updates, and show that the cost of updating the index is compatible with real-world requirements.
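As a hedged illustration of the "continuous path" notion (a path that is valid continuously during a certain interval), the sketch below intersects the validity intervals of the elements along a path; a path is continuously valid for a period only if that intersection is non-empty. The Interval type and the numeric timestamps are invented and are not TempIndex's actual data structures.

```java
// Compute the interval over which every element on a path is valid; an empty
// intersection (null) means the path is never continuously valid.
import java.util.List;

public class ContinuousPath {
    record Interval(long from, long to) {
        Interval intersect(Interval other) {
            long lo = Math.max(from, other.from), hi = Math.min(to, other.to);
            return lo <= hi ? new Interval(lo, hi) : null;   // null = empty
        }
    }

    static Interval continuousValidity(List<Interval> elementValidity) {
        Interval result = new Interval(Long.MIN_VALUE, Long.MAX_VALUE);
        for (Interval iv : elementValidity) {
            if (result == null) return null;
            result = result.intersect(iv);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(continuousValidity(
                List.of(new Interval(0, 10), new Interval(5, 20), new Interval(7, 9))));
    }
}
```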

20.
While developing a two-dimensional sequence stratigraphy simulation to be used by geologists, our research group found tools such as portable graphics libraries and cross-platform compilers useful for supporting the simulation. The resources for developing the interface aspect of our work were inadequate, however. We have created several tools which aided the development of data entry mechanisms for large sets of numerical data. A major concern in designing our tool set was to enable our graphically oriented users to work in a natural manner, while still retaining the ability to achieve the precision required for an accurate simulation. Our Plotter mechanism allows users to view and edit curve data using the mouse on an X Window terminal. The DataSheet we developed provides a means for precise data entry from the keyboard. The Calculator works with ranges of values in the DataSheet to make large-scale changes. Although these tools were designed with a specific purpose in mind, they are more general and could be used for other applications.
