A total of 20 similar records found; search time: 46 ms
1.
N.C.W.M. Braspenning J.M. van de Mortel-Fronczak J.E. Rooda 《Electronic Notes in Theoretical Computer Science》2006,164(4):13-28
New methods and techniques are needed to reduce the very costly integration and test effort (in terms of lead time, costs, and resources) in the development of high-tech multi-disciplinary systems. To facilitate this effort reduction, we propose a method called model-based integration. This method allows formal executable models of system components that are not yet physically realized to be integrated with available realizations of other components. The combination of models and realizations is then used for early analysis of the integrated system by means of validation, verification, and testing. This analysis enables early detection and prevention of problems that would otherwise occur during real integration, resulting in a significant reduction of the effort invested in the real integration and test phases. This paper illustrates how models of components, developed for model-based integration, can be used for automated model-based testing, which allows time-efficient determination of the conformance of component realizations with respect to their requirements. The combination of model-based integration and model-based testing is practically illustrated in a realistic industrial case study. Results obtained from this study encourage further research on model-based integration as a prominent method to reduce the integration and test effort.
2.
Sofia Cassel Falk Howar Bengt Jonsson Bernhard Steffen 《Formal Aspects of Computing》2016,28(2):233-263
We present a black-box active learning algorithm for inferring extended finite state machines (EFSMs) by dynamic black-box analysis. EFSMs can be used to model both the data flow and the control behavior of software and hardware components. Different dialects of EFSMs are widely used in tools for model-based software development, verification, and testing. Our algorithm infers a class of EFSMs called register automata. Register automata have a finite control structure, extended with variables (registers), assignments, and guards. Our algorithm is parameterized on a particular theory, i.e., a set of operations and tests on the data domain that can be used in guards. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query must in general have to be usable in our learning model. We also show that, under these conditions, our framework induces a generalization of the classical Nerode equivalence and canonical automata construction to the symbolic setting. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.
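To make the register-automaton formalism concrete, here is a minimal sketch of an interpreter for a register automaton over the theory of equality. The two-state automaton, its labels, and its guards are illustrative inventions, not an example taken from the paper:

```python
# A minimal register-automaton interpreter over the theory of equality.
# The states, registers, and guarded transitions below are hypothetical
# illustrations, not the paper's exact formalism.

class RegisterAutomaton:
    def __init__(self, transitions, initial, accepting):
        # Each transition is a 5-tuple (src, label, guard, assign, dst):
        #   guard(registers, value)  -> bool
        #   assign(registers, value) -> updated registers dict
        self.transitions = transitions
        self.initial = initial
        self.accepting = accepting

    def accepts(self, word):
        state, regs = self.initial, {}
        for label, value in word:
            for src, lbl, guard, assign, dst in self.transitions:
                if src == state and lbl == label and guard(regs, value):
                    regs = assign(dict(regs), value)
                    state = dst
                    break
            else:
                return False  # no enabled transition for this input
        return state in self.accepting

def store(regs, v):
    regs["r"] = v
    return regs

# Accept "register(x) . login(y)" only when y equals the stored x.
ra = RegisterAutomaton(
    transitions=[
        ("q0", "register", lambda r, v: True, store, "q1"),
        ("q1", "login", lambda r, v: r["r"] == v, lambda r, v: r, "q2"),
    ],
    initial="q0",
    accepting={"q2"},
)

print(ra.accepts([("register", 42), ("login", 42)]))  # True
print(ra.accepts([("register", 42), ("login", 7)]))   # False
```

The guard on the login transition compares the fresh data value against the stored register; constraints of exactly this shape are what the tree queries are designed to infer symbolically.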
3.
Constant C. Jeron T. Marchand H. Rusu V. 《IEEE Transactions on Software Engineering》2007,33(8):558-574
In this paper, we describe a methodology integrating verification and conformance testing. A specification of a system - an extended input-output automaton, which may be infinite-state - and a set of safety properties ("nothing bad ever happens") and possibility properties ("something good may happen") are assumed. The properties are first tentatively verified on the specification using automatic techniques based on approximated state-space exploration, which are sound, but, as a price to pay for automation, are not complete for the given class of properties. Because of this incompleteness and of state-space explosion, the verification may not succeed in proving or disproving the properties. However, even if verification did not succeed, the testing phase can proceed and provide useful information about the implementation. Test cases are automatically and symbolically generated from the specification and the properties and are executed on a black-box implementation of the system. The test execution may detect violations of conformance between implementation and specification; in addition, it may detect violation/satisfaction of the properties by the implementation and by the specification. In this sense, testing completes verification. The approach is illustrated on simple examples and on a bounded retransmission protocol.
4.
Software systems are all around us, playing vital roles in our daily life, and their correct functioning is of prime concern. In addition to classical testing techniques, formal techniques such as model checking are used to reinforce the quality and reliability of software systems. However, obtaining the behavior model of an unknown software system, which is essential for model-based techniques, is a challenging task. To mitigate this problem, an emerging black-box analysis technique called model learning can be applied. It complements existing model-based testing and verification approaches by providing behavior models of black-box systems fully automatically. This paper surveys the model learning technique, which has recently attracted much attention from researchers, especially in the domains of testing and verification. First, we review the background and foundations of model learning, which form the basis of the subsequent sections. Second, we present some well-known model learning tools and summarize their merits and shortcomings in a comparison table. Third, we describe successful applications of model learning in multidisciplinary fields and discuss current challenges along with possible future work, followed by concluding remarks.
5.
Chase JG Le Compte AJ Suhaimi F Shaw GM Lynn A Lin J Pretty CG Razak N Parente JD Hann CE Preiser JC Desaive T 《Computer methods and programs in biomedicine》2011,102(2):156-171
Tight glycemic control (TGC) has emerged as a major research focus in critical care due to its potential to simultaneously reduce both mortality and costs. However, repeating the initial successful TGC trials that reduced mortality and other outcomes has proven difficult, with more failures than successes. Hence, there has been growing debate over the necessity of TGC, its goals, the risk of severe hypoglycemia, and target cohorts. This paper provides a review of TGC via new analyses of data from several clinical trials, including SPRINT, Glucontrol, and a recent NICU study. It thus provides both a review of the problem and the major background factors driving it, as well as a novel model-based analysis designed to examine these dynamics from a new perspective. Using these clinical results and analyses, the goal is to develop new insights that shed greater light on the leading factors that make TGC difficult and inconsistent, as well as the requirements they thus impose on the design and implementation of TGC protocols. A model-based analysis of insulin sensitivity using data from three different critical care units, comprising over 75,000 h of clinical data, is used to analyse variability in metabolic dynamics using a clinically validated model-based insulin sensitivity metric (SI). Variation in SI provides a new interpretation and explanation for the variable results seen (across cohorts and studies) in applying TGC. In particular, significant intra- and inter-patient variability in insulin resistance (1/SI) is seen to be a major confounder that makes TGC difficult over diverse cohorts, yielding variable results over many published studies and protocols. Further factors that exacerbate this variability in glycemic outcome include measurement frequency and whether a protocol is blind to carbohydrate administration.
6.
7.
Network protocol testing, which evolved from software testing, is a key means of validating network protocol implementations. IPv6-oriented protocol testing, as one of its branches, was proposed to verify whether an IPv6 protocol stack satisfies the protocol standards. This article describes in detail the conformance testing tools and methods used in developing an IPv6 protocol stack for small devices, including PacketGenator, the TAHI Project, and TTCN-3. Combined with the Ethereal packet-capture tool, these were used to test the protocol stack developed by the project team, with good results in practice.
8.
In this paper, we describe a quality-directed perspective on the lifecycle process of designing and assembling communications systems and services. We claim this perspective addresses some of the industrial concerns of quality and productivity for the protocol engineering process, while allowing for some of the best formal techniques known for protocol synthesis, verification, conformance testing and performance assessment. We hope that this perspective will assist in the development of a generic conceptual framework which enables the evolution, integration and practical application of protocol engineering models, methods, languages and tools.
9.
Jingyi Wang Jun Sun Qixia Yuan Jun Pang 《International Journal on Software Tools for Technology Transfer (STTT)》2018,20(6):689-704
Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is often considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically “learn” models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions to be answered. For instance, how much should we generalize from the observed samples, and how fast would learning converge? Would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? Moreover, how well does learning scale to real-world applications? If the answer is negative, what are potential methods to improve the efficiency of learning? In this work, we first investigate existing algorithms for learning probabilistic models for model checking and propose an evolution-based approach for better controlling the degree of generalization. Then, we present existing approaches to learning abstract models to improve the efficiency of learning for scalability reasons. Lastly, we conduct an empirical study in order to answer the above questions. Our findings include that the effectiveness of learning may sometimes be limited, and that it is worth investigating how abstraction should be done properly in order to learn abstract models.
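As a point of comparison for the question of learned-model accuracy versus direct sampling, the following sketch estimates a time-bounded reachability probability in a toy three-state Markov chain both by exact transient analysis and by sampling executions. The chain and its probabilities are made up for illustration:

```python
import random

# A toy 3-state Markov chain; estimate the probability of reaching the
# absorbing "target" state 2 within 5 steps by sampling executions, and
# compare against the exact value from transient analysis.

P = {0: [(0.5, 0), (0.5, 1)],
     1: [(0.3, 1), (0.7, 2)],
     2: [(1.0, 2)]}  # state 2 is absorbing

def sample_hits_target(steps, rng):
    s = 0
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for p, nxt in P[s]:
            acc += p
            if r < acc:
                s = nxt
                break
        if s == 2:
            return True
    return False

def exact_hit_probability(steps):
    # Propagate the state distribution step by step; state 2 is absorbing.
    dist = {0: 1.0, 1: 0.0, 2: 0.0}
    for _ in range(steps):
        new = {0: 0.0, 1: 0.0, 2: 0.0}
        for s, mass in dist.items():
            for p, nxt in P[s]:
                new[nxt] += mass * p
        dist = new
    return dist[2]

rng = random.Random(0)
n = 100_000
estimate = sum(sample_hits_target(5, rng) for _ in range(n)) / n
print(round(exact_hit_probability(5), 4), round(estimate, 4))
```

With 100,000 sampled executions the estimate typically lands within about 0.01 of the exact value; that is the kind of baseline a learned model must beat within the same time budget.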
10.
11.
Danial Jahed Armaghani Edy Tonnizam Mohamad Mohsen Hajihassani Saffet Yagiz Hossein Motaghedi 《Engineering with Computers》2016,32(2):189-206
The uniaxial compressive strength (UCS) of rock is crucial for any type of project constructed in or on a rock mass. The test conducted to measure the UCS of rock is expensive and time-consuming, and imposes restrictions on sample preparation. For this reason, the UCS of rock may be estimated using simple rock tests such as the point load index (Is(50)), Schmidt hammer (Rn) and p-wave velocity (Vp) tests. To estimate the UCS of granitic rock as a function of relevant rock properties such as Rn, Vp and Is(50), rock cores were collected from the face of the Pahang–Selangor fresh water tunnel in Malaysia. Afterwards, 124 samples were prepared and tested in accordance with the relevant standards, and a dataset was obtained. This dataset was then used to estimate the UCS of rock via three nonlinear prediction tools, namely non-linear multiple regression (NLMR), an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The models were then examined using several performance indices, including the coefficient of determination (R2), variance accounted for and root mean squared error, together with a simple ranking procedure, and the best prediction model was selected. An R2 of 0.951 on the testing dataset suggests the superiority of the ANFIS model, against 0.651 and 0.886 for the NLMR and ANN techniques, respectively. The results point out that the ANFIS model can predict the UCS of rocks with higher capacity than the others. Although the developed model may be useful at a preliminary design stage, it should be used with caution and only for the specified rock types.
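As a sketch of the simplest member of this model family, the following fits a linear UCS-versus-Is(50) relation by least squares and computes R2, the index used above to rank models. The data points are synthetic illustrations, not the paper's Pahang–Selangor dataset, and the paper's actual models (NLMR, ANN, ANFIS) are nonlinear:

```python
# Estimating UCS from a simple index test by linear regression.
# The (Is(50) in MPa, UCS in MPa) pairs below are made up for illustration.
data = [(2.1, 52.0), (3.4, 80.5), (4.0, 95.2), (5.2, 121.0),
        (6.1, 143.8), (7.3, 170.1), (8.0, 186.9)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in data)
sxx = sum((x - mean_x) ** 2 for x, _ in data)

slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Coefficient of determination R^2: fraction of UCS variance explained.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in data)
ss_tot = sum((y - mean_y) ** 2 for _, y in data)
r2 = 1 - ss_res / ss_tot

print(f"UCS = {slope:.2f} * Is50 + {intercept:.2f}, R^2 = {r2:.3f}")
```

A high R2 on a training set alone is not sufficient; as in the paper, models should be ranked on a held-out testing dataset.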
12.
13.
Compositional reasoning aims to improve the scalability of verification tools by reducing the original verification task into subproblems. The simplification is typically based on assume-guarantee reasoning principles, and requires user guidance to identify appropriate assumptions for components. In this paper, we propose a fully automated approach to compositional reasoning that consists of automated decomposition, using a hypergraph partitioning algorithm for balanced clustering of variables, and assumption discovery, using the L* algorithm for active learning of regular languages. We present a symbolic implementation of the learning algorithm and incorporate it in the model checker NuSMV. In some cases, our experiments demonstrate significant savings in the computational requirements of symbolic model checking.
This research was partially supported by ARO grant DAAD19-01-1-0473, and NSF grants ITR/SY 0121431 and CCR0306382.
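The classical non-circular assume-guarantee rule that such approaches automate can be stated as follows: given component models M1 and M2 and a property P, it suffices to find an assumption A such that

```latex
% Non-circular assume-guarantee rule: if M_1 composed with the assumption A
% satisfies P, and M_2 satisfies A, then the full composition satisfies P.
\[
\frac{M_1 \parallel A \models P \qquad M_2 \models A}
     {M_1 \parallel M_2 \models P}
\]
```

In the learning-based setting, candidate assumptions A are generated by L*, with the model checker answering the membership and equivalence queries.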
14.
We propose a novel framework for learning normal logic programs from transitions of interpretations. Given a set of pairs of interpretations (I, J) such that J = T_P(I), where T_P is the immediate consequence operator, we infer the program P. The learning framework can be repeatedly applied for identifying Boolean networks from basins of attraction. Two algorithms have been implemented for this learning task and are compared using examples from the biological literature. We also show how to incorporate background knowledge and inductive biases, and then apply the framework to learning transition rules of cellular automata.
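A minimal sketch of the immediate consequence operator T_P for a propositional normal logic program, viewed as a Boolean network update. The three-rule program is a made-up illustration of the kind of (I, T_P(I)) pairs the framework learns from:

```python
# The immediate consequence operator T_P for a propositional normal logic
# program. Each rule is head <- (positive body atoms, negated body atoms);
# the program itself is an invented toy example.
program = {
    "p": [({"q"}, set())],   # p <- q
    "q": [({"p"}, {"r"})],   # q <- p, not r
    "r": [(set(), {"r"})],   # r <- not r  (a toggle)
}

def tp(interpretation):
    """T_P(I): atoms having at least one rule whose body is true in I."""
    out = set()
    for head, bodies in program.items():
        for pos, neg in bodies:
            if pos <= interpretation and not (neg & interpretation):
                out.add(head)
                break
    return out

# Iterating T_P yields the state transitions from which, conversely,
# the program could be learned.
state, trace = set(), []
for _ in range(4):
    trace.append(sorted(state))
    state = tp(state)
print(trace)  # [[], ['r'], [], ['r']]
```

Here the empty interpretation and {r} form a two-cycle, i.e. a small attractor of the induced Boolean network.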
15.
Whether different iSCSI protocol implementations can interoperate depends on whether each implementation conforms to the iSCSI protocol specification. Addressing this problem, this paper studies the principles and workflow of iSCSI conformance testing, designs a test system suited to iSCSI conformance testing together with an XML-described iSCSI conformance test suite, and presents a fairly complete set of iSCSI conformance test cases. Running these test cases against a product's iSCSI implementation uncovered inconsistencies between the implementation under test and the protocol specification.
16.
Jae-Ho Lee Jong-Gyu Hwang Ducko Shin Kang-Mi Lee Sung-Un Kim 《Computer Standards & Interfaces》2009,31(2):362-371
Verification and conformance testing of protocol specifications, the key parts of the protocol development process, are complementary technologies employed to increase confidence that a system will function as stated in its specification. In this paper, we verify the safety and liveness of a protocol specified as a Labeled Transition System (LTS) by using a model-checking method and implementing a testing tool, which experimentally checks for deadlock and for reachability from the initial state to an arbitrary state. The testing tool can use the modal mu-calculus to assess whether protocol model properties, expressed in modal logic, meet the protocol specification. In addition, we propose a conformance testing tool that checks the correct implementation of test sequences derived by the UIO method from the specification of the verified protocol. The tool is implemented in C++ in the Microsoft Windows NT environment.
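To illustrate the UIO idea, the sketch below checks whether a candidate input sequence is a UIO (Unique Input/Output) sequence for a state of a small Mealy machine, i.e. whether the output it produces identifies that state uniquely. The machine and the sequences are hypothetical, not taken from the paper's protocol:

```python
# Checking a candidate UIO sequence on a toy Mealy machine.
# transitions[(state, input)] = (next_state, output)
transitions = {
    ("s0", "a"): ("s1", "0"), ("s0", "b"): ("s0", "1"),
    ("s1", "a"): ("s2", "1"), ("s1", "b"): ("s0", "0"),
    ("s2", "a"): ("s0", "0"), ("s2", "b"): ("s1", "1"),
}
states = {"s0", "s1", "s2"}

def output_word(state, inputs):
    out = []
    for i in inputs:
        state, o = transitions[(state, i)]
        out.append(o)
    return "".join(out)

def is_uio(state, inputs):
    """True iff the output of `inputs` from `state` differs from the
    output from every other state, i.e. the sequence identifies `state`."""
    word = output_word(state, inputs)
    return all(output_word(s, inputs) != word for s in states - {state})

print(is_uio("s1", ["a"]))       # True: only s1 outputs "1" on input a
print(is_uio("s0", ["a"]))       # False: s0 and s2 both output "0"
print(is_uio("s0", ["a", "a"]))  # True: "01" is unique to s0
```

In conformance testing, such sequences are appended to transition tests so that the implementation's tail state can be confirmed from outputs alone.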
17.
18.
根据协议一致性测试的特点和要求,针对BGP4+协议的特点,介绍了一种BGP4+协议一致性测试系统的设计方法。该方法使用带有控制通道的多个虚拟测试器环境,用TCL语言作为测试例的描述语言,具有很好的测试能力和灵活性。在介绍BGP4+的基础上,设计并实现了该协议一致性测试系统,并对一基于FreeBSD平台的BGP4+一致性测试实现的结果进行了分析。 相似文献
19.
For a Petri net N and a marking M, let R_N(M) be the set of markings reachable from M, and let C_N(M) be the set of markings M′ such that M′ ∈ R_N(M) and M ∈ R_N(M′); C_N(M) is the strongly connected component of R_N(M) to which M belongs. If R_N(M) = C_N(M), then N is said to be M-reversible, and if N is M-reversible for every marking M, then N is said to be reversible. In this paper the following results are presented. (1) C_N(M) is semilinear, and therefore it is decidable whether (i) M′ ∈ C_N(M), (ii) C_N(M) is a finite set, (iii) C_N(M) ⊆ C_N′(M′), and (iv) given two markings M and M′ such that M ≥ M′, there is a nonnegative integer k such that M + k(M − M′) ∈ R_N(M′). (2) It is decidable whether (i) N is M-reversible or not, and (ii) N is reversible or not. (3) Given a Petri net N and a marking M, we can construct an M-reversible Petri net N′ such that C_N(M) = R_N′(M). (4) The equality problem for the sets of all firing sequences of an M-reversible Petri net N and an M′-reversible Petri net N′ is decidable. Some related problems are also discussed.
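The definitions above can be explored directly on a small example: the sketch below computes the reachability set by breadth-first search over markings and checks M-reversibility for a toy two-place cyclic net. The net is an invented example, not one from the paper:

```python
from collections import deque

# A two-place cyclic Petri net over places (p1, p2); each transition is a
# pair (consume vector, produce vector). The net is a toy illustration.
transitions = [((1, 0), (0, 1)),   # t1: p1 -> p2
               ((0, 1), (1, 0))]   # t2: p2 -> p1

def fire(marking, t):
    consume, produce = t
    if all(m >= c for m, c in zip(marking, consume)):
        return tuple(m - c + p for m, c, p in zip(marking, consume, produce))
    return None  # t is not enabled at this marking

def reachable(m0):
    """R_N(m0): all markings reachable from m0, by breadth-first search."""
    seen, queue = {m0}, deque([m0])
    while queue:
        m = queue.popleft()
        for t in transitions:
            nxt = fire(m, t)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

m0 = (1, 0)
r = reachable(m0)
# M-reversible: m0 is reachable back from every marking in R_N(m0),
# i.e. R_N(m0) coincides with the strongly connected component C_N(m0).
reversible = all(m0 in reachable(m) for m in r)
print(sorted(r), reversible)
```

Exhaustive search like this works only for bounded nets; the paper's decidability results rest instead on the semilinearity of the sets involved.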
20.
Verification is applied to software as a proof method with respect to its requirements. Software testing is necessary because verification is often infeasible. Automation is desirable since the complexity and the effort involved are significant. However, automated software testing is commonly used to ensure confidence in the conformance of an implementation to an abstract model, not to its requirement properties. In this paper, we introduce the notion of property relevance of test cases. Property-relevant test cases can be used to determine property violations. It is shown how to detect the properties relevant to a test case. New coverage criteria based on property relevance are introduced. Automated generation of test suites satisfying these criteria is also presented. Finally, feasibility is illustrated with an empirical evaluation.
Franz Wotawa