Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
An alternative approach to developing reusable components from scratch is to recover them from existing systems. We apply program slicing, a program decomposition method, to the problem of extracting reusable functions from ill-structured programs. As with conventional slicing first described by M. Weiser (1984), a slice is obtained by iteratively solving data flow equations based on a program flow graph. We extend the definition of program slice to a transform slice, one that includes statements which contribute directly or indirectly to transform a set of input variables into a set of output variables. Unlike conventional program slicing, these statements do not include either the statements necessary to get input data or the statements which test the binding conditions of the function. Transform slicing presupposes the knowledge that a function is performed in the code and its partial specification, only in terms of input and output data. Using domain knowledge we discuss how to formulate expectations of the functions implemented in the code. In addition to the input/output parameters of the function, the slicing criterion depends on an initial statement, which is difficult to obtain for large programs. Using the notions of decomposition slice and concept validation we show how to produce a set of candidate functions, which are independent of line numbers but must be evaluated with respect to the expected behavior. Although human interaction is required, the limited size of candidate functions makes this task easier than looking for the last function instruction in the original source code.
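As a concrete (and heavily simplified) illustration of the flow-graph-based, iterative style of slicing described above, the sketch below computes a conservative backward slice from a slicing criterion using only per-statement def/use sets. It is flow-insensitive, ignores control dependence, and is not the authors' transform-slicing algorithm.

```python
# Minimal, conservative backward-slice sketch: a statement joins the slice if it
# defines a variable that some already-included statement still needs. Iterate
# to a fixed point, in the spirit of slicing by solving data-flow equations.

def backward_slice(stmts, criterion_stmt, criterion_vars):
    """stmts: {stmt_id: {"def": set_of_vars, "use": set_of_vars}}"""
    in_slice = {criterion_stmt}
    relevant = {criterion_stmt: set(criterion_vars)}   # variables still to be explained
    changed = True
    while changed:                                     # fixed-point iteration
        changed = False
        needed = set().union(*(relevant[t] for t in in_slice))
        for s, info in stmts.items():
            if s not in in_slice and info["def"] & needed:
                in_slice.add(s)
                relevant[s] = set(info["use"])         # its uses must now be explained too
                changed = True
    return sorted(in_slice)

# x := input; y := x*x; print(y)  -- slicing on (3, {"y"}) pulls in 2, then 1
prog = {
    1: {"def": {"x"}, "use": set()},
    2: {"def": {"y"}, "use": {"x"}},
    3: {"def": set(), "use": {"y"}},
}
print(backward_slice(prog, 3, {"y"}))   # -> [1, 2, 3]
```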

2.
陈星  史再峰  姚素英  张之圣 《计算机工程》2012,38(15):251-253,257
为加快多制式视频后处理芯片的验证进度,以约束随机化和功能覆盖率收敛技术为指导,提出基于类定向测试的芯片验证方法,给出定向测试中的权重修正过程。仿真实验结果表明,该方法能够提高覆盖盲点被击中的概率、减少重复配置,使输入输出制式覆盖率快速收敛,验证效率比传统方法提升60%~70%。  相似文献   

3.
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements, and for improving estimates of the fault detection capacity of a test set.
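A small sketch of the static side of this metric, under the simplifying assumption of pairwise (t = 2) coverage over explicit finite value domains: it measures what fraction of all parameter-pair value combinations a given test set covers. It is an illustration only, not the paper's tooling.

```python
# Pairwise (2-way) combinatorial coverage of a test set: for every pair of
# parameters, count how many of the possible value pairs appear in some test.
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """tests: list of value tuples; domains: list of per-parameter value lists."""
    covered, total = 0, 0
    for i, j in combinations(range(len(domains)), 2):
        all_pairs = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}
        covered += len(all_pairs & seen)
        total += len(all_pairs)
    return covered / total

domains = [["on", "off"], ["ipv4", "ipv6"], [0, 1]]
tests = [("on", "ipv4", 0), ("off", "ipv6", 1), ("on", "ipv6", 1)]
print(f"pairwise coverage: {pairwise_coverage(tests, domains):.2f}")   # 8/12 = 0.67
```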

4.
Computing involving data in a logo-syllabic oriental language like Chinese is more difficult than in English. Many attempts have been made to develop bilingual or multilingual processing systems.1–3 Most of them are based on microcomputers like the IBM-PC. With the widespread use of workstations and window systems, much better multi-lingual processing environments can be provided. This paper describes the design and implementation of the program cxterm, a Chinese terminal emulator for the X Window System. We discuss the representation of multi-byte international characters, the problems of Chinese character input and output, the pros and cons of various approaches, and the design decisions for cxterm. A feature of cxterm is its independence of input methods. A user can incorporate new input methods into cxterm at run-time, without changing the program code. We also compare our approach with related work in multi-lingual input/output in X, and describe how cxterm performs better in terms of efficiency, flexibility, and user-friendliness.

5.
An NMOS implementation of a new built-in self-test PLA design is presented. The layouts for its additional test circuitry result in approximately 15-percent overhead for most large PLAs, a significantly better overhead than that of any existing scheme. Both the input test patterns and the output responses, which are compressed into a string of parity bits, are independent of the functions that the PLA realizes, and the 15-percent overhead includes the storage needed for the fault-free compressed output data. The fault coverage of this approach consists of all single and (1 − 2^−(2n+m)) of all multiple stuck, crosspoint, and bridging faults in the original PLA and the additional test circuitry (n and m are the number of input variables and product terms, respectively). The article begins with a short review of existing design schemes.

6.
Testing model transformations poses several challenges, among them the automatic generation of appropriate input test models and the specification of oracle functions. Most approaches for the generation of input models ensure a certain coverage of the source meta-model or the transformation implementation code, whereas oracle functions are frequently defined using query or graph languages. However, these two tasks are usually performed independently regardless of their common purpose, and sometimes, there is a gap between the properties exhibited by the generated input models and those considered by the transformations. Recently, we proposed a formal specification language for the declarative formulation of transformation properties (by means of invariants, pre-, and postconditions) from which we generated partial oracle functions used for transformation testing. Here, we extend the usage of our specification language for the automated generation of input test models by SAT solving. The testing process becomes more intentional because the generated models ensure a certain coverage of the transformation requirements. Moreover, we use the same specification to consistently derive both the input test models and the oracle functions. A set of experiments is presented, aimed at measuring the efficacy of our technique.
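To make the notion of a partial oracle concrete, the sketch below checks a toy class-to-table transformation against a single postcondition instead of a full expected output. The transformation, the property, and the model encoding are invented for illustration and are not the paper's specification language.

```python
# Partial oracle idea: verify a declared property of the output rather than
# compare against a complete expected result.

def class2table(model):
    """Toy transformation: every class becomes a table with the same name."""
    return [{"table": c["name"], "columns": list(c["attrs"])} for c in model]

def postcondition(model, output):
    # every class must map to exactly one table carrying the same name
    names = [t["table"] for t in output]
    return all(names.count(c["name"]) == 1 for c in model)

model = [{"name": "Order", "attrs": ["id", "date"]}, {"name": "Item", "attrs": ["sku"]}]
out = class2table(model)
assert postcondition(model, out), "transformation violates its postcondition"
print("postcondition holds for", len(out), "tables")
```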

7.
Managing the large volumes of data produced by emerging scientific and engineering simulations running on leadership-class resources has become a critical challenge. The data have to be extracted off the computing nodes and transported to consumer nodes so that they can be processed, analyzed, visualized, archived, and so on. Several recent research efforts have addressed data-related challenges at different levels. One attractive approach is to offload expensive input/output operations to a smaller set of dedicated computing nodes known as a staging area. However, even using this approach, the data still have to be moved from the staging area to consumer nodes for processing, which continues to be a bottleneck. In this paper, we investigate an alternate approach, namely moving the data-processing code to the staging area instead of moving the data to the data-processing code. Specifically, we describe the ActiveSpaces framework, which provides (1) programming support for defining the data-processing routines to be downloaded to the staging area and (2) runtime mechanisms for transporting codes associated with these routines to the staging area, executing the routines on the nodes that are part of the staging area, and returning the results. We also present an experimental performance evaluation of ActiveSpaces using applications running on the Cray XT5 at Oak Ridge National Laboratory. Finally, we use a coupled fusion application workflow to explore the trade-offs between transporting data and transporting the code required for data processing during coupling, and we characterize sweet spots for each option.

8.
For large-area remote sensing applications, the preliminary work is to query the coverage of a given data source and to determine whether areas obscured by cloud or snow, or not covered at all, can be substituted by data sources of the same scale. Because the query systems of different satellite data providers are generally independent of one another, this work is tedious and time-consuming. This paper describes a fast coverage-query technique for multi-source remote sensing data and its implementation in the ENVI/IDL environment. First, the latitude/longitude coordinates of the four corners of each image quick-look are obtained from the metadata and written to a vector footprint file; then an algorithm detects the pixel coordinates of the four corners of the quick-look's valid extent; finally, a batch of quick-looks from the same data source is automatically geometrically corrected into GeoTIFF images carrying geographic coordinates, so that quick-looks from any data source can be overlaid with administrative boundaries in GIS software. Since large volumes of data can be processed in batches, with no manual intervention once the initial metadata format has been configured, the efficiency of querying multi-source data coverage is greatly improved, and one can quickly and accurately judge whether the coverage of valid data meets application requirements. The method is therefore highly practical and widely applicable.
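The footprint step (corner coordinates from metadata turned into something a GIS can overlay) can be roughly illustrated as follows. This is a simplified, north-up approximation written in Python rather than the ENVI/IDL implementation, which performs a proper geometric correction of rotated quick-looks.

```python
# From four quick-look corner coordinates, derive a simple north-up affine
# geotransform (GDAL-style ordering) and a closed footprint polygon.

def footprint_and_geotransform(corners, width, height):
    """corners: [(lon, lat)] for UL, UR, LR, LL; width/height in pixels."""
    lons = [c[0] for c in corners]
    lats = [c[1] for c in corners]
    ul_lon, ul_lat = min(lons), max(lats)
    px_w = (max(lons) - min(lons)) / width
    px_h = (max(lats) - min(lats)) / height
    geotransform = (ul_lon, px_w, 0.0, ul_lat, 0.0, -px_h)   # origin + pixel size
    polygon = corners + [corners[0]]                         # closed ring for GIS overlay
    return geotransform, polygon

gt, ring = footprint_and_geotransform(
    [(116.1, 40.2), (117.3, 40.2), (117.3, 39.1), (116.1, 39.1)], 1024, 1024)
print(gt)
```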

9.
Modelling a software system is often a challenging prerequisite to automatic test case generation. Modelling the navigation structure of a dynamic web application is particularly challenging because of the presence of a large number of pages that are created dynamically and the difficulty of reaching a dynamic page unless a set of appropriate input values are provided for the parameters. To address the first challenge, some form of abstraction is required to enable scalable modelling. For the second challenge, techniques are required to select appropriate input values for parameters and systematically combine them to reach new pages. This paper presents a combinatorial approach in building a navigation graph for dynamic web applications. The navigation graph can then be used to automatically generate test sequences for testing web applications. The novelty of our approach is twofold. First, we use an abstraction scheme to control the page explosion problem, where pages that are likely to have the same navigation behaviour are grouped together and are represented as a single node in the navigation graph. Second, assuming that values of individual parameters are supplied manually or generated from other techniques, we combine parameter values such that well-defined combinatorial coverage of input parameter values is achieved. Using combinatorial coverage can significantly reduce the number of requests that have to be submitted while still achieving effective coverage of the navigation structure. We implement our combinatorial approach in a tool, Tansuo, and apply the tool on seven open-source web applications. We evaluate the effectiveness of Tansuo's exploration process guided by t-way coverage, for t = 1,2,3, with respect to code coverage, and find that the navigation structure exploration by Tansuo, in general, results in high code coverage (more than 80% statement coverage for most of our subject applications when dead code is removed). We compare Tansuo's effectiveness with two other navigation graph tools and find that Tansuo is more effective. Our empirical results indicate that using pairwise coverage in Tansuo results in the efficient generation of navigation graphs and effective exploration of dynamic web applications.
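The pairwise combination step can be pictured with a small greedy sketch (an illustration of the general idea, not the Tansuo implementation): given candidate values for each request parameter, keep adding the full value combination that covers the most still-uncovered value pairs until every pair appears in some request.

```python
# Greedy construction of a small set of parameter-value combinations that
# covers every pair of values; fine for the small domains typical of requests.
from itertools import combinations, product

def greedy_pairwise(domains):
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(domains)), 2)
                 for a, b in product(domains[i], domains[j])}
    suite = []
    while uncovered:
        # pick the full combination covering the most still-uncovered pairs
        best = max(product(*domains),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        suite.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return suite

# e.g. three request parameters with two or three candidate values each
print(greedy_pairwise([["admin", "guest"], ["en", "zh", "fr"], ["GET", "POST"]]))
```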

10.

Unit testing is widely used in software development. One important activity in unit testing is automatic test data generation. Constraint-based test data generation is a technique for automatic generation of test data, which uses symbolic execution to generate constraints. Unit testing only tests functions instead of the whole program, where individual functions typically have preconditions imposed on their inputs. Conventional symbolic execution cannot detect these preconditions, let alone converting these preconditions into constraints. To overcome these limitations, we propose a novel unit test data generation approach using rule-directed symbolic execution for dealing with functions with missing input preconditions. Rule-directed symbolic execution uses predefined rules to detect preconditions in the individual function, and generates constraints for inputs based on preconditions. We introduce implicit constraints to represent preconditions, and unify implicit constraints and program constraints into integrated constraints. Test data generated based on integrated constraints can explore previously unreachable code and help developers find more functional faults and logical faults. We have implemented our approach in a tool called CTS-IC, and applied it to real-world projects. The experimental results show that rule-directed symbolic execution can find preconditions (implicit constraints) automatically from an individual function. Moreover, the unit test data generated by our approach achieves higher coverage than similar tools and efficiently mitigates missing input preconditions problems in unit testing for individual functions.
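The rule-directed idea can be pictured with a toy rule: an operation found in the unit under test (here, a division) implies a precondition on its inputs (the divisor is non-zero), which becomes an implicit constraint on generated test data. The sketch below uses Python's ast module and random input generation instead of symbolic execution, and it is not the CTS-IC tool.

```python
# Rule: every variable used as a divisor yields the implicit constraint "!= 0".
import ast, inspect, random

def divisor_names(func):
    tree = ast.parse(inspect.getsource(func))
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Div, ast.FloorDiv, ast.Mod)):
            if isinstance(node.right, ast.Name):
                names.add(node.right.id)
    return names

def generate_inputs(func, params, tries=1000):
    forbidden_zero = divisor_names(func)           # implicit constraints from the rule
    for _ in range(tries):
        cand = {p: random.randint(-10, 10) for p in params}
        if all(cand[p] != 0 for p in forbidden_zero if p in cand):
            return cand                            # satisfies the implicit constraints
    return None

def average_rate(total, count):
    return total / count                           # implicit precondition: count != 0

print(generate_inputs(average_rate, ["total", "count"]))
```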


11.
Complex computer codes are often too time expensive to be directly used to perform uncertainty propagation studies, global sensitivity analysis or to solve optimization problems. A well known and widely used method to circumvent this inconvenience consists in replacing the complex computer code by a reduced model, called a metamodel, or a response surface that represents the computer code and requires acceptable calculation time. One particular class of metamodels is studied: the Gaussian process model that is characterized by its mean and covariance functions. A specific estimation procedure is developed to adjust a Gaussian process model in complex cases (non-linear relations, highly dispersed or discontinuous output, high-dimensional input, inadequate sampling designs, etc.). The efficiency of this algorithm is compared to the efficiency of other existing algorithms on an analytical test case. The proposed methodology is also illustrated for the case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater.
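For orientation, a minimal Gaussian-process surrogate (squared-exponential kernel, plain posterior formulas) looks like the sketch below; the paper's specific estimation procedure for difficult cases (discontinuous output, high dimension, poor designs) is not reproduced here, and the sine function stands in for an expensive simulator.

```python
# Minimal GP metamodel: fit on a few expensive runs, predict mean and
# standard deviation elsewhere.
import numpy as np

def sq_exp(a, b, length=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    K = sq_exp(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp(x_test, x_train)
    Kss = sq_exp(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

x = np.linspace(0, 1, 8)            # a few "expensive" code runs
y = np.sin(2 * np.pi * x)           # stand-in for the simulator output
mu, sd = gp_predict(x, y, np.linspace(0, 1, 50))
print(mu[:3], sd[:3])
```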

12.
周小莉  赵建华 《软件学报》2021,32(7):2103-2117
The core of a data-driven intelligent system is its data-processing algorithms. The high correctness requirements on these algorithms make their testing expensive, so the scale of testing must be reduced effectively, and regression test selection is an effective means of controlling test scale. Because dynamic information flow in data-driven intelligent systems is weak, coincidental correctness occurs with high probability, and this phenomenon causes the test sets selected by common regression test selection techniques to contain many test cases that cannot detect faults. We therefore approach the problem from the perspective of coincidental correctness and propose a regression test selection technique based on coincidental correctness probability, which further excludes test cases likely to exhibit coincidental correctness. The method takes code coverage into account while guaranteeing, from the coincidental-correctness perspective, that the reduced test set adequately exercises the modified code. Depending on whether suite reduction or fault-detection ability is emphasized, we propose two selection strategies based on minimization and on safe techniques, respectively, and give three concrete selection algorithms. In experiments comparing our method with a safe test selection technique, all three selection algorithms substantially reduce the size of the test set, improve the precision of test selection, and improve the combined measure of safety and precision.
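A schematic of the selection idea, with the coincidental-correctness probabilities taken as given (the paper's estimation of these probabilities and its three concrete algorithms are not reproduced): keep the tests that cover the modified code, then drop those whose estimated probability of coincidental correctness is too high, falling back to the least-suspect test so the change is never left uncovered.

```python
def select(tests, modified_lines, cc_prob, threshold=0.5):
    """tests: {name: covered line set}; cc_prob: {name: estimated P(coincidental correctness)}."""
    covering = {t for t, lines in tests.items() if lines & modified_lines}
    kept = {t for t in covering if cc_prob.get(t, 0.0) <= threshold}
    # never return an empty suite for a covered change: keep the least-suspect test
    if covering and not kept:
        kept = {min(covering, key=lambda t: cc_prob.get(t, 0.0))}
    return sorted(kept)

tests = {"t1": {10, 11}, "t2": {11, 42}, "t3": {42, 43}, "t4": {7}}
print(select(tests, modified_lines={42}, cc_prob={"t2": 0.8, "t3": 0.2}))  # ['t3']
```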

13.
王颖  王冰青  关永  李晓娟  王瑞 《软件学报》2021,32(6):1867-1881
The Robot Operating System (ROS) is an open-source system widely used in robot development; it provides developers with hardware abstraction, device drivers, library functions, visualization, message passing, package management, and many other capabilities, and has broad application prospects. ROS integrates packages implementing different functions, such as localization and mapping, motion planning, perception, and simulation, but these packages may contain defects that compromise the safety and reliability of the whole robot system. This paper proposes a differential fuzzing method for testing packages across different ROS versions in order to uncover such defects. Our method consists of two modules: test case generation and differential fuzzing execution. First, input files are loaded and processed, and test case files are generated by a strategy-based generation method. Second, nodes communicate through the topic mechanism, and the test case files produced by the previous module are used as the common fuzz input to differentially fuzz packages from different ROS versions. Next, inconsistent outputs in the test results are differenced and evaluated; seeds that meet the evaluation criteria are kept and fed back to the generation module to produce new test cases, which effectively improves seed quality and code coverage. Finally, the causes of the inconsistent outputs are analyzed to locate defects. We applied the method to robot coordinate transformation, testing the TF and TF2 packages, which transform coordinates between different reference frames. The experiments show that TF is more accurate than TF2 in its functional implementation, and that a function implementing coordinate rotation in TF2 contains a defect.
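The differential part of the method reduces to a harness like the following: one randomly generated input is fed to two versions of a component and the outputs are compared within a tolerance, with mismatching seeds kept for later analysis. The two transform functions below are hypothetical stand-ins for illustration, not the real TF/TF2 APIs.

```python
# Generic differential-fuzzing loop: same random seed into two implementations,
# record any outputs that disagree beyond a tolerance.
import math, random

def transform_v1(x, y, theta):           # rotate a 2-D point by theta (matrix form)
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def transform_v2(x, y, theta):           # same rotation written via complex numbers
    z = complex(x, y) * complex(math.cos(theta), math.sin(theta))
    return (z.real, z.imag)

def differential_fuzz(rounds=10000, tol=1e-9):
    mismatches = []
    for _ in range(rounds):
        seed = (random.uniform(-100, 100), random.uniform(-100, 100),
                random.uniform(-math.pi, math.pi))
        a, b = transform_v1(*seed), transform_v2(*seed)
        if max(abs(a[0] - b[0]), abs(a[1] - b[1])) > tol:
            mismatches.append((seed, a, b))   # keep the seed for later analysis
    return mismatches

print(len(differential_fuzz()), "inconsistent outputs")
```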

14.
A framework for automatic generation of structural test data
Noting that the generation of test data for both control-flow and data-flow coverage in structural testing can be reduced to path-oriented test data generation, this paper proposes a general framework for the automatic generation of structural test data based on control flow and data flow. The framework selects test paths optimized for the coverage criterion adopted in control-flow or data-flow testing and, with an improved iterative relaxation method at its core, generates test data for the selected paths. A prototype of the framework was developed around three algorithms: automatic test data generation for path coverage, for branch coverage, and for data-flow coverage. Experimental results show that the framework is feasible.

15.
Software testing techniques and criteria are considered complementary since they can reveal different kinds of faults and test distinct aspects of the program. The functional criteria, such as Category Partition, are difficult to automate and are usually manually applied. Structural and fault-based criteria generally provide measures to evaluate test sets. The existing supporting tools produce a lot of information including: input and produced output, structural coverage, mutation score, faults revealed, etc. However, such information is not linked to functional aspects of the software. In this work, we present an approach based on machine learning techniques to link test results from the application of different testing techniques. The approach groups test data into similar functional clusters. After this, according to the tester's goals, it generates classifiers (rules) that have different uses, including selection and prioritization of test cases. The paper also presents results from experimental evaluations and illustrates such uses.
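One way to picture the link between functional clusters and structural data is the toy pipeline below: test executions are clustered on a behavioural feature vector, and a small decision tree over coverage vectors is then read off as human-usable rules. The feature choices and the use of scikit-learn are illustrative assumptions, not the paper's experimental setup.

```python
# Cluster tests by (toy) functional features, then learn rules over coverage
# vectors that predict the functional cluster; such rules can drive selection
# or prioritization.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
functional_features = rng.random((30, 2))                     # e.g. normalized input/output values
coverage_vectors = (rng.random((30, 5)) > 0.5).astype(int)    # 5 code blocks hit or not

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(functional_features)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(coverage_vectors, clusters)

# human-readable rules, e.g. "block_2 covered and block_4 not covered -> cluster 1"
print(export_text(tree, feature_names=[f"block_{i}" for i in range(5)]))
```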

16.
A timed-automata-based testing method for real-time systems
A testing method for real-time systems is proposed based on a variant of timed automata (TA), the timed safety input/output automaton (TSIOA). The method first converts a system model described as a TSIOA into an untimed stable transition graph of symbolic states (USTGSS) containing no abstract time-delay transitions. It then uses a labeled transition system (LTS) based testing method to statically generate transition-action sequences, containing time-delay variables, that satisfy various structural coverage criteria. Finally, a procedure is given for constructing and executing test cases from these sequences; it introduces an objective function over the time-delay variables and uses linear constraint solving to determine their values dynamically.

17.
Code-coverage guided prioritized test generation
Most automatic test generation research focuses on generation of test data from pre-selected program paths or input domains or program specifications. This paper presents a methodology for a full solution to code-coverage-based test case generation, which includes code coverage-based path selection, test data generation and actual test case representation in program’s original languages. We implemented this method in an automatic testing framework, eXVantage. Experimental results and industrial trials show that the framework is able to generate tests to achieve program line coverage from 20% to 98% with reduced overall testing effort. Our major contributions include an innovative coverage-based program prioritization algorithm, a novel path selection algorithm that takes into consideration program priority and functional calling relationship, and a constraint solver for test data generation that derives constraints from bytecode and solves complex constraints involving strings and dynamic objects.
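A common concrete instance of coverage-based prioritization is the greedy "additional coverage" ordering sketched below; it is shown only to make the idea tangible and is not the eXVantage algorithm itself.

```python
# Greedy additional-coverage prioritization: repeatedly pick the test that
# adds the most not-yet-covered lines.

def prioritize(tests):
    """tests: {name: set of covered lines}; returns names in priority order."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:      # nothing new left: append the rest
            order.extend(sorted(remaining))
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

suite = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 2}, "t4": {5}}
print(prioritize(suite))        # ['t1', 't2', 't4', 't3']
```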

18.
A structured approach to debugging is presented based on restricting the class of programs that can be written. This is achieved by defining a domain specific language, eliminating control flow by using aggregate operations and eliminating side effects by using an applicative language as a base. The domain specific language is realized by defining domain specific operators as extensions to the base language; a set of operators are defined in the paper for data processing applications. The debugging strategy consists of setting up a sequence of input/output transformations; this process takes the input to the output in partial stages so that the correctness of each transformation relative to its input and output set can be verified. The approach is illustrated by a simple data processing problem.
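The staged-transformation style of debugging can be pictured as a chain of aggregate operations whose intermediate outputs are each checked before the next stage runs; the operators and the payroll-like data below are invented for illustration.

```python
# Each stage applies an aggregate operation and is verified against a small
# expected set before the pipeline continues.

def stage(name, fn, data, expected):
    out = fn(data)
    assert out == expected, f"stage '{name}' failed: {out!r}"
    return out

records = [("ann", 12.0, 40), ("bob", 10.0, 45)]
gross = stage("gross pay", lambda rs: [(n, r * h) for n, r, h in rs],
              records, [("ann", 480.0), ("bob", 450.0)])
total = stage("total", lambda rs: sum(p for _, p in rs), gross, 930.0)
print(total)
```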

19.
祝玉芬  刘超 《计算机工程》2003,29(21):45-47
This paper defines test cases and presents a basic method for generating them from UML activity diagram models, covering test scenario generation based on the control-flow structure of the activity diagram and test data generation for the inputs of each activity. Exploiting the hierarchical nature of activity diagram models, it introduces hierarchical, per-activity test profiles and an input/output data description convention, which allow the user to supply constraints on test data generation level by level on the activity diagram. It also gives a method for generating basic test data from the test profiles, and a method for producing a set of test cases by combining test scenarios with the basic test data.

20.
A novel parameter learning scheme using multi-signal processing is developed in this paper for estimating the parameters of a Hammerstein nonlinear model with output disturbance. The Hammerstein nonlinear model consists of a static nonlinear block and a dynamic linear block, and the multi-signals are devised so that the nonlinear block parameters and the linear block parameters can be estimated separately, which greatly simplifies the parameter estimation procedure. First, using the input–output data of the separable signals, the linear block parameters are computed by a correlation analysis method, so that the influence of output noise is handled effectively. In addition, a model-error probability density function technique is employed to estimate the nonlinear block parameters from the measurable input–output data of random signals, which not only controls the state-space distribution of the model error but also drives the error distribution towards a normal distribution. The simulation results demonstrate that the developed approach obtains high learning accuracy and small modeling error, which verifies its effectiveness.
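For reference, one common way of writing the Hammerstein structure with an output disturbance is shown below; the exact basis used for the nonlinearity in the paper may differ.

```latex
% Hammerstein model: static nonlinearity followed by linear dynamics,
% with an additive output disturbance e(k).
\begin{align*}
  \bar{u}(k) &= f\bigl(u(k)\bigr) = \sum_{i=1}^{p} c_i\, f_i\bigl(u(k)\bigr), % nonlinear block as a basis expansion
\\
  y(k)       &= \frac{B(q^{-1})}{A(q^{-1})}\,\bar{u}(k) + e(k). % linear block in the shift operator q^{-1}
\end{align*}
```

With a separable input (for example a binary or Gaussian signal), the cross-correlation between u and y reflects only the linear dynamics up to a constant gain (a Bussgang-type result), which is consistent with the abstract's use of correlation analysis to estimate the linear block before the nonlinear coefficients c_i are fitted from the random-signal data.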
