Retrieved 20 similar documents; search took 12 ms
1.
J. S. Briggs 《Software》1987,17(7):439-453
This paper describes the production of a system to control an electronic cricket scoreboard. The main feature of the system is the ability of the operator to ‘undo’ operations that he has performed, in order to correct errors that he has made. Undo is implemented by reversing the execution of the program. The code to perform the reversal is generated automatically and results in a minimal amount of state information being recorded.
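The paper generates the reversal code automatically from the program itself; the sketch below illustrates only the underlying idea, not Briggs's mechanism. Each scoreboard operation pushes a closure that records just enough state to invert it, so undo replays inverses in reverse order. All names are illustrative.

```python
# Hypothetical sketch (not the paper's actual system): each operation on the
# scoreboard records a minimal inverse action on an undo stack.
class Scoreboard:
    def __init__(self):
        self.runs = 0
        self.wickets = 0
        self._undo_stack = []  # inverse actions, most recent last

    def _record(self, inverse):
        self._undo_stack.append(inverse)

    def add_runs(self, n):
        # the inverse needs only n, not a full state snapshot
        self._record(lambda: setattr(self, "runs", self.runs - n))
        self.runs += n

    def take_wicket(self):
        self._record(lambda: setattr(self, "wickets", self.wickets - 1))
        self.wickets += 1

    def undo(self):
        # pop and execute the most recent inverse, if any
        if self._undo_stack:
            self._undo_stack.pop()()
```

Because each inverse captures only the delta of one operation, the stack stays small even after many operations, which mirrors the "minimal amount of state information" point above.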
2.
Context
Input/output transition system (IOTS) models are commonly used when the next input can arrive even before outputs are produced. The interaction between the tester and an implementation under test (IUT) is usually assumed to be synchronous. However, as the IUT can produce outputs at any moment, the tester should be prepared to accept all outputs from the IUT, or else be able to block (refuse) outputs of the implementation. Testing distributed, remote applications under the assumptions that communication is synchronous and actions can be blocked is unrealistic, since synchronous communication for such applications can only be achieved if special protocols are used. In this context, asynchronous tests can be more appropriate, reflecting the underlying test architecture, which includes queues.
Objective
In this paper, we investigate the problem of constructing test cases for given test purposes and specification input/output transition systems, when the communication between the tester and the implementation under test is assumed to be asynchronous, performed via multiple queues.
Method
When issuing verdicts, asynchronous tests should take into account the distortion caused by the queues in the observed interactions. First, we investigate how the test purpose can be transformed to account for this distortion when there is a single input queue and a single output queue. Then, we consider a more general problem, when there may be multiple queues.
Results
We propose an algorithm which constructs a sound test case by transforming the test purpose prior to composing it with the specification without queues.
Conclusion
The proposed algorithm mitigates the state explosion problem that usually occurs when queues are directly involved in the composition. Experimental results confirm the resulting state space reduction.
3.
Test automation is an important way to improve software testing efficiency. Object-oriented software testing based on UML models is a current research focus, but most of this work addresses class-level or integration testing; little of it considers how to automatically generate reasonably complete system-level test cases. Building on a survey of the state of the art, this paper uses a worked example to present a method for deriving system functional test threads from an activity diagram that describes the sequential dependencies among system use cases and from the activity diagrams of the use-case realizations.
4.
John Håkansson Bengt Jonsson Ola Lundqvist 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(4):456-471
This paper is concerned with the problem of checking, by means of testing, that a software component satisfies a specification of temporal safety properties. Checking that an actual observed behavior conforms to the specification is performed by a test oracle, which can be either a human tester or a software module. We present a technique for automatically generating test oracles from specifications of temporal safety properties in a metric temporal logic. The logic can express quantitative timing properties, and can also express properties of data values by means of a quantification construct. The generated oracle works online in the sense that checking is performed simultaneously with observation. The technique has been implemented and used in case studies at Volvo Technical Development Corporation.
5.
Jeff Offutt Shaoying Liu Aynur Abdurazik Paul Ammann 《Software Testing, Verification and Reliability》2003,13(1):25-53
Although the majority of software testing in industry is conducted at the system level, most formal research has focused on the unit level. As a result, most system‐level testing techniques are only described informally. This paper presents formal testing criteria for system level testing that are based on formal specifications of the software. Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide in a form that can be automatically manipulated. This paper presents general criteria for generating test inputs from state‐based specifications. The criteria include techniques for generating tests at several levels of abstraction for specifications (transition predicates, transitions, pairs of transitions and sequences of transitions). These techniques provide coverage criteria that are based on the specifications and are made up of several parts, including test prefixes that contain inputs necessary to put the software into the appropriate state for the test values. The test generation process includes several steps for transforming specifications to tests. These criteria have been applied to a case study to compare their ability to detect seeded faults. Copyright © 2003 John Wiley & Sons, Ltd.
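One of the abstraction levels listed above is coverage of pairs of transitions. As a hedged illustration (the toy state machine and its encoding are ours, not the paper's), the sketch below enumerates the transition pairs of a small specification as test requirements: a pair (t1, t2) qualifies when t1 ends in the state where t2 starts.

```python
# Each transition is (source state, input event, target state); the machine
# here is an illustrative coin/push vending machine, not from the paper.
transitions = [
    ("idle", "coin", "ready"),
    ("ready", "push", "idle"),
    ("ready", "coin", "ready"),
]

def transition_pairs(ts):
    # a transition-pair test requirement chains t1 into t2
    return [(t1, t2) for t1 in ts for t2 in ts if t1[2] == t2[0]]
```

Each resulting pair would then be prefixed with inputs that drive the machine into the pair's starting state, as the criteria above describe.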
6.
To disseminate information via broadcasting, a data server must construct a broadcast “program” that meets the needs of the client population. Existing works on generating broadcast programs have shown the effectiveness of nonuniform broadcast programs in reducing the average access times of objects for nonuniform access patterns. However, these broadcast programs perform poorly for range queries. The article presents a novel algorithm to generate broadcast programs that facilitate range queries without sacrificing much on the performance of single object retrievals.
7.
Nowadays, high performance applications exploit multi-level architectures, due to the presence of hardware accelerators like GPUs inside each computing node. Data transfers occur at two different levels: inside the computing node, between the CPU and the accelerators, and between computing nodes. We consider the case where the intra-node parallelism is handled with HMPP compiler directives and message-passing programming with MPI is used to program the inter-node communications. This way of programming such a heterogeneous architecture is costly and error-prone. In this paper, we specifically demonstrate the transformation of HMPP programs designed to exploit a single computing node equipped with a GPU into a heterogeneous HMPP + MPI program exploiting multiple GPUs located on different computing nodes.
8.
Effectively reducing testing cost is an important research problem in software test optimization. Introducing genetic algorithms into software testing provides the necessary driving force for generating test scenarios; however, genetic algorithms have poor local search ability and low search efficiency in the later stages of evolution, which makes them time-consuming. Based on UML activity diagrams, this paper proposes a hybrid genetic algorithm for generating test scenarios. The method combines a genetic algorithm with hill climbing, effectively accelerating test scenario generation. To avoid the locality problem, a population-generation function is invoked before each hill-climbing step. Experimental results show that, compared with a simple genetic algorithm, the hybrid genetic algorithm not only effectively solves the locality problem but also greatly improves the efficiency of test scenario generation, reducing software testing cost.
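As a hedged illustration of the hill-climbing refinement such a hybrid genetic algorithm can apply to each individual (the encoding of activity-diagram test scenarios is abstracted into a plain numeric fitness here; all names are ours):

```python
# Minimal hill-climbing refinement step: repeatedly move to the best
# neighbour until no neighbour improves the (minimized) fitness.
def hill_climb(x, fitness, step=1, max_iters=100):
    for _ in range(max_iters):
        neighbours = [x - step, x + step]
        best = min(neighbours + [x], key=fitness)
        if best == x:
            return x  # local optimum reached
        x = best
    return x
```

In the hybrid scheme described above, a step like this would polish each individual between generations, compensating for the GA's weak local search in late evolution.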
9.
Generating software test data by evolution    Total citations: 5 (self-citations: 0, by others: 5)
Michael C.C. McGraw G. Schatz M.A. 《IEEE Transactions on Software Engineering》2001,27(12):1085-1110
This paper discusses the use of genetic algorithms (GAs) for automatic software test data generation. This research extends previous work on dynamic test data generation where the problem of test data generation is reduced to one of minimizing a function. In our work, the function is minimized by using one of two genetic algorithms in place of the local minimization techniques used in earlier research. We describe the implementation of our GA-based system and examine the effectiveness of this approach on a number of programs, one of which is significantly larger than those for which results have previously been reported in the literature. We also examine the effect of program complexity on the test data generation problem by executing our system on a number of synthetic programs that have varying complexities.
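A minimal sketch of the core recipe, under our own simplified encoding rather than the paper's: covering a branch such as `if x == 42` is recast as minimizing the branch distance |x - 42|, and a small genetic algorithm searches the input space for a zero of that function. The operators and parameters are illustrative, not those of the GA-based system described above.

```python
import random

def branch_distance(x):
    # distance to satisfying the hypothetical branch predicate x == 42
    return abs(x - 42)

def evolve(pop_size=20, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            break  # branch covered: a test input was found
        parents = pop[: pop_size // 2]  # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            # averaging crossover plus a small mutation
            children.append((a + b) // 2 + rng.randint(-3, 3))
        pop = parents + children
    return min(pop, key=branch_distance)
```

Replacing the local minimizer of earlier dynamic test data generation with a population-based search like this is exactly the substitution the abstract describes, here reduced to its simplest possible form.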
10.
A system is described which inputs EBNF syntax equations as text, checks them and builds a corresponding syntax graph representation. An EBNF parser, with full error recovery, is included. The system is designed using the principles of modular decomposition and data abstraction, and is presented as a case study in the application of these principles to program design. The system is programmed in Pascal-plus, and has been used as a basis for the automatic generation of parsers.
11.
Valdivino Alexandre de Santiago Júnior Nandamudi Lankalapalli Vijaykumar 《Software Quality Journal》2012,20(1):77-143
Natural Language (NL) deliverables suffer from ambiguity, poor understandability, incompleteness, and inconsistency. However, NL is straightforward, and stakeholders are familiar with using it to produce their software requirements documents. This paper presents a methodology, SOLIMVA, which aims at model-based test case generation considering NL requirements deliverables. The methodology is supported by a tool that makes it possible to automatically translate NL requirements into Statechart models. Once the Statecharts are derived, another tool, GTSC, is used to generate the test cases. SOLIMVA uses combinatorial designs to identify scenarios for system and acceptance testing, and it requires that a test designer define the application domain by means of a dictionary. Within the dictionary there is a Semantic Translation Model in which, among other features, a word sense disambiguation method helps in the translation process. Using a space application software product as a case study, we compared SOLIMVA with a previous manual approach developed by an expert under two aspects: test objectives coverage and characteristics of the Executable Test Cases. In the first aspect, the SOLIMVA methodology not only covered the test objectives associated with the expert's scenarios but also proposed a better strategy, with test objectives clearly separated according to the directives of combinatorial designs. The Executable Test Cases derived in accordance with the SOLIMVA methodology not only possessed similar characteristics to the expert's Executable Test Cases but also predicted behaviors that did not exist in the expert's strategy. The key benefits of applying the SOLIMVA methodology/tool within a Verification and Validation process are the ease of use and, at the same time, the support of a formal method, consequently leading to a potential acceptance of the methodology in complex software projects.
12.
Torbjörn Björkman 《Computer Physics Communications》2011,182(5):1183-1186
The CIF2Cell program generates the geometrical setup for a number of electronic structure programs based on the crystallographic information in a Crystallographic Information Framework (CIF) file. The program will retrieve the space group number, Wyckoff positions and crystallographic parameters, make a sensible choice for Bravais lattice vectors (primitive or principal cell) and generate all atomic positions. Supercells can be generated and alloys are handled gracefully. The code currently has output interfaces to the electronic structure programs ABINIT, CASTEP, CPMD, Crystal, Elk, Exciting, EMTO, Fleur, RSPt, Siesta and VASP.
Program summary
Program title: CIF2Cell
Catalogue identifier: AEIM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL version 3
No. of lines in distributed program, including test data, etc.: 12 691
No. of bytes in distributed program, including test data, etc.: 74 933
Distribution format: tar.gz
Programming language: Python (versions 2.4–2.7)
Computer: Any computer that can run Python (versions 2.4–2.7)
Operating system: Any operating system that can run Python (versions 2.4–2.7)
Classification: 7.3, 7.8, 8
External routines: PyCIFRW [1]
Nature of problem: Generate the geometrical setup of a crystallographic cell for a variety of electronic structure programs from data contained in a CIF file.
Solution method: The CIF file is parsed using routines contained in the library PyCIFRW [1], and crystallographic as well as bibliographic information is extracted. The program then generates the principal cell from symmetry information, crystal parameters, space group number and Wyckoff sites. Reduction to a primitive cell is then performed, and the resulting cell is output to suitably named files along with documentation of the information source generated from any bibliographic information contained in the CIF file. If the space group symmetries are not present in the CIF file, the program will fall back on internal tables, so only the minimal input of space group, crystal parameters and Wyckoff positions is required. Additional key features are handling of alloys and supercell generation.
Additional comments: Currently implements support for the following general purpose electronic structure programs: ABINIT [2,3], CASTEP [4], CPMD [5], Crystal [6], Elk [7], exciting [8], EMTO [9], Fleur [10], RSPt [11], Siesta [12] and VASP [13–16].
Running time: The examples provided in the distribution take only seconds to run.
References:
[1] J.R. Hester, A validating CIF parser: PyCIFRW, Journal of Applied Crystallography 39 (4) (2006) 621–625, doi:10.1107/S0021889806015627.
[2] X. Gonze, G.-M. Rignanese, M. Verstraete, J.-M. Beuken, Y. Pouillon, R. Caracas, F. Jollet, M. Torrent, G. Zerah, M. Mikami, P. Ghosez, M. Veithen, J.-Y. Raty, V. Olevano, F. Bruneval, L. Reining, R. Godby, G. Onida, D.R. Hamann, D.C. Allan, A brief introduction to the ABINIT software package, Zeitschrift für Kristallographie 220 (12) (2005) 558–562.
[3] X. Gonze, B. Amadon, P.-M. Anglade, J.-M. Beuken, F. Bottin, P. Boulanger, F. Bruneval, D. Caliste, R. Caracas, M. Côté, T. Deutsch, L. Genovese, P. Ghosez, M. Giantomassi, S. Goedecker, D. Hamann, P. Hermet, F. Jollet, G. Jomard, S. Leroux, M. Mancini, S. Mazevet, M. Oliveira, G. Onida, Y. Pouillon, T. Rangel, G.-M. Rignanese, D. Sangalli, R. Shaltaf, M. Torrent, M. Verstraete, G. Zerah, J. Zwanziger, ABINIT: first-principles approach to material and nanosystem properties, Computer Physics Communications 180 (12) (2009) 2582–2615, doi:10.1016/j.cpc.2009.07.007.
[4] S.J. Clark, M.D. Segall, C.J. Pickard, P.J. Hasnip, M.J. Probert, K. Refson, M.C. Payne, First principles methods using CASTEP, Zeitschrift für Kristallographie 220 (12) (2005) 567–570.
[5] URL: http://www.cpmd.org.
[6] R. Dovesi, R. Orlando, B. Civalleri, C. Roetti, V.R. Saunders, C.M. Zicovich-Wilson, CRYSTAL: a computational tool for the ab initio study of the electronic properties of crystals, Zeitschrift für Kristallographie 220 (2005) 571–573, doi:10.1524/zkri.220.5.571.65065.
[7] URL: http://elk.sourceforge.net.
[8] URL: http://exciting-code.org.
[9] L. Vitos, Computational Quantum Mechanics for Materials Engineers: The EMTO Method and Applications, Springer, London, 2007, doi:10.1007/978-1-84628-951-4.
[10] URL: http://www.flapw.de.
[11] J.M. Wills, O. Eriksson, M. Alouani, D.L. Price, Full-potential LMTO total energy and force calculations, in: H. Dreyssé (Ed.), Electronic Structure and Physical Properties of Solids: The Uses of the LMTO Method, Springer, 1996, pp. 148–167.
[12] J.M. Soler, E. Artacho, J.D. Gale, A. García, J. Junquera, P. Ordejón, D. Sánchez-Portal, The SIESTA method for ab initio order-N materials simulation, Journal of Physics: Condensed Matter 14 (11) (2002) 2745, URL: http://stacks.iop.org/0953-8984/14/i=11/a=302.
[13] G. Kresse, J. Hafner, Ab initio molecular dynamics for liquid metals, Phys. Rev. B 47 (1) (1993) 558–561, doi:10.1103/PhysRevB.47.558.
[14] G. Kresse, J. Hafner, Ab initio molecular-dynamics simulation of the liquid–metal amorphous-semiconductor transition in germanium, Phys. Rev. B 49 (20) (1994) 14251–14269, doi:10.1103/PhysRevB.49.14251.
[15] G. Kresse, J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Computational Materials Science 6 (1) (1996) 15–50, doi:10.1016/0927-0256(96)00008-0.
[16] G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54 (16) (1996) 11169–11186, doi:10.1103/PhysRevB.54.11169.
13.
Peter Schrammel Tom Melham Daniel Kroening 《International Journal on Software Tools for Technology Transfer (STTT)》2016,18(3):319-334
Testing of reactive systems is challenging because long input sequences are often needed to drive them into a state to test a desired feature. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the amount of time required for resetting is high. This article presents an approach to discovering a test case chain—a single software execution that covers a group of test goals and minimizes overall test execution time. Our technique targets the scenario in which test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain and minimize the number of test case chains if a single test case chain is infeasible. We report experimental results with our ChainCover tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators.
14.
Generating test data with enhanced context-free grammars    Total citations: 1 (self-citations: 0, by others: 1)
The use of context-free grammars to improve functional testing of very-large-scale integrated circuits is described. It is shown that enhanced context-free grammars are effective tools for generating test data. The discussion covers preliminary considerations, the first tests, generating systematic tests, and testing subroutines. The author's experience using context-free grammars to generate tests for VLSI circuit simulators indicates that they are remarkably effective tools that virtually anyone can use to debug virtually any program.
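A hedged sketch of the technique in miniature (the grammar and the depth-limit "enhancement" are our illustration, not the article's notation): test inputs are produced by randomly expanding a context-free grammar, here one for arithmetic expressions, with a depth cutoff to guarantee finite outputs.

```python
import random

# Toy grammar: nonterminals map to lists of productions; anything not in
# the table is a terminal. The last production of each nonterminal is
# non-recursive, so forcing it past the depth limit guarantees termination.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["num", "*", "term"], ["num"]],
    "num": [["0"], ["1"], ["2"]],
}

def generate(symbol="expr", depth=0, rng=None, max_depth=6):
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    prods = GRAMMAR[symbol]
    # past the depth limit, always pick the last (shortest) production
    prod = prods[-1] if depth >= max_depth else rng.choice(prods)
    return "".join(generate(s, depth + 1, rng, max_depth) for s in prod)
```

Every generated string is a syntactically valid expression, which is precisely why grammar-based generators are so effective for feeding parsers and simulators systematic yet varied inputs.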
15.
Combinatorial testing (CT) is an effective technique to test software with multiple configurable parameters. It is used to detect interaction faults caused by the combination effect of parameters. CT test generation aims at generating covering arrays that cover all t-way parameter combinations, where t is a given covering strength. In practical CT usage scenarios, there are usually constraints between parameters, and the performance of existing constraint-handling methods degrades fast when the number of constraints increases. The contributions of this paper are (1) we propose a new one-test-at-a-time algorithm for CT test generation, which uses pseudo-Boolean optimization to generate each new test case; (2) we have found that pursuing the maximum coverage for each test case is uneconomic, and a possible balance point is to keep the approximation ratio in [0.8,0.9]; (3) we propose a new self-adaptive mechanism to stop the optimization process at a proper time when generating each test case; (4) extensive experimental results show that our algorithm works fine on existing benchmarks, and the constraint-handling ability is better than existing approaches when the number of constraints is large; and (5) we propose a method to translate shielding parameters (a common type of constraints) into normal constraints.
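The paper's one-test-at-a-time loop uses pseudo-Boolean optimization to choose each test; the sketch below substitutes a simple greedy choice (our stand-in, not the paper's method) to show the overall structure for strength t = 2 (pairwise) without constraints.

```python
from itertools import combinations, product

def pairwise_tests(domains):
    """Greedy one-test-at-a-time covering array for strength t=2."""
    params = list(domains)
    # all parameter-value pairs that must be covered
    uncovered = {
        ((p1, v1), (p2, v2))
        for p1, p2 in combinations(params, 2)
        for v1 in domains[p1]
        for v2 in domains[p2]
    }
    candidates = [dict(zip(params, vs)) for vs in product(*domains.values())]
    tests = []
    while uncovered:
        # pick the candidate test covering the most still-uncovered pairs
        def gain(t):
            return sum(
                1 for (a, b) in uncovered
                if t[a[0]] == a[1] and t[b[0]] == b[1]
            )
        best = max(candidates, key=gain)
        tests.append(best)
        uncovered = {
            (a, b) for (a, b) in uncovered
            if not (best[a[0]] == a[1] and best[b[0]] == b[1])
        }
    return tests
```

The paper's point (2) corresponds to not insisting that `best` be the true maximum: stopping the per-test optimization at roughly 80–90% of the optimal coverage gain is reported to be more economical overall.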
16.
Niloofar Razavi Azadeh Farzan Sheila A. McIlraith 《International Journal on Software Tools for Technology Transfer (STTT)》2014,16(1):49-65
Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that need to be tested to account for scheduler behavior. Testing all possible schedules is not practical. Consequently, most effective testing algorithms only test a select subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing that is based on techniques from artificial intelligence (AI) automated planning. We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program, and using information collected from this run to model the behavior of the program, and to predict new runs that contain bugs with one of the above noted violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, in our approach feasibility of the predicted runs is guaranteed and, therefore, all generated runs are fully usable for testing. Moreover, our planning-based approach has the merit that it can easily accommodate a variety of violation patterns which serve as the selection criteria for guiding search in the state space of concurrent runs. This is achieved by simply modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework [35]. 
Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs.
17.
The trustworthiness of embedded software is critical to the success of space missions. Aerospace embedded software widely adopts interrupt-driven concurrency, and nondeterministic interrupt preemption can introduce concurrency defects that cause serious safety problems. Studies show that atomicity violations are the most prominent class of interrupt concurrency defects. Existing methods for detecting atomicity violations in interrupt-driven embedded software cannot achieve both high precision and high scalability, and their effectiveness on real aerospace embedded software has not been demonstrated. To improve the precision and efficiency of detecting such defects, we conducted an empirical study of 82 atomicity violations in aerospace embedded software and characterized them along five dimensions: the scope of the atomic region, the interrupt nesting depth, the access interleaving pattern, the shared-data access mode, and the access granularity. On this basis, we propose intAtom-S, a precise and efficient static detection method for atomicity violations. First, based on a fine-grained memory access model parameterized by numerical invariants, it introduces a rule-based method to exclude accesses to flag variables and uses abstract interpretation for precise shared-data analysis; this analysis refines shared-data access granularity down to individual array elements and can identify shared memory-mapped I/O addresses. Second, it uses lightweight data-flow analysis to match all concurrent three-access sequences that fit the defect interleaving patterns, taking them as candidate atomicity violations. Finally, it applies path-condition-based constraint solving to analyze the path feasibility of the serial access pairs and concurrent three-access sequences among the candidates, progressively eliminating false positives to obtain the final atomicity violation results. Experiments on a benchmark suite and 8 real aerospace embedded programs show that, compared with the state of the art, intAtom-S reduces the false positive rate by 72% and triples detection efficiency. Moreover, it completes the analysis of real aerospace embedded software quickly, with an average false positive rate of only 8.9%, and it found 23 defects that have been confirmed by the developers.
18.
This paper describes how an Abstract Programming Interface (API) and its implementation can be generated from the syntax definition of a data type. In particular we describe how a grammar (in SDF) can be used to generate a library of access functions that manipulate the parse trees of terms over this syntax. Application of this technique in the ASF+SDF Meta-Environment has resulted in the elimination of 47% of the handwritten code, thus greatly improving both maintainability of the tools and their flexibility with respect to changes in the parse tree format. Although the focus is on ATerms, the issues discussed and the techniques described are more generic and are relevant in related areas such as XML data-binding.
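As a toy illustration of the idea (our own encoding, far simpler than the ATerm-based library described in the paper), access functions can be generated mechanically from a table of constructors and field names, instead of being written by hand:

```python
# Hypothetical syntax definition: constructor name -> ordered field names.
SYNTAX = {"Plus": ["lhs", "rhs"], "Num": ["value"]}

def make_api(syntax):
    """Generate make_* and get_* functions over tuple-encoded parse trees."""
    api = {}
    for constructor, fields in syntax.items():
        # constructor function: tags a tuple with the constructor name
        api[f"make_{constructor}"] = (
            lambda *args, _c=constructor: (_c,) + args
        )
        for i, field in enumerate(fields):
            # accessor function: positional lookup hidden behind a name
            api[f"get_{field}"] = lambda t, _i=i: t[_i + 1]
    return api
```

Because clients only use the generated `make_*`/`get_*` names, the underlying tree encoding can change without touching client code, which is the maintainability and flexibility benefit the abstract points to.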
19.
Jukka Perkiö Antti Tuominen Taneli Vähäkangas Petri Myllymäki 《Multimedia Tools and Applications》2012,57(1):5-27
Measuring image similarity is an important task for various multimedia applications. Similarity can be defined at two levels: at the syntactic (lower, context-free) level and at the semantic (higher, contextual) level. As long as one deals with the syntactic level, defining and measuring similarity is a relatively straightforward task, but as soon as one starts dealing with semantic similarity, the task becomes very difficult. We examine the use of simple readily available syntactic image features combined with other multimodal features to derive a similarity measure that captures the weak semantics of an image. The weak semantics can be seen as an intermediate step between low-level image understanding and full semantic image understanding. We investigate the use of single modalities alone and see how the combination of modalities affects the similarity measures. We also test the measure on a multimedia retrieval task on TV series data, although our main motivation is to understand how different modalities relate to each other.
20.
Guido Boella Luigi Di Caro Alice Ruggeri Livio Robaldo 《Journal of Intelligent Information Systems》2014,43(2):231-246
Nowadays, there is a huge amount of textual data coming from on-line social communities like Twitter or encyclopedic data provided by Wikipedia and similar platforms. This Big Data Era created novel challenges to be faced in order to make sense of large data storages as well as to efficiently find specific information within them. In a more domain-specific scenario like the management of legal documents, the extraction of semantic knowledge can support domain engineers in finding relevant information more rapidly, and can provide assistance within the process of constructing application-based legal ontologies. In this work, we face the problem of automatically extracting structured knowledge to improve semantic search and ontology creation on textual databases. To achieve this goal, we propose an approach that first relies on well-known Natural Language Processing techniques like Part-Of-Speech tagging and Syntactic Parsing. Then, we transform this information into generalized features that aim at capturing the surrounding linguistic variability of the target semantic units. These new featured data are finally fed into a Support Vector Machine classifier that computes a model to automate the semantic annotation. We first tested our technique on the problem of automatically extracting semantic entities and involved objects within legal texts. Then, we focused on the identification of hypernym relations and definitional sentences, demonstrating the validity of the approach on different tasks and domains.