(1) Chair of Service Engineering for Distributed Systems, Universität Göttingen, Germany; (2) Lehrstuhl für Programmiersysteme, Universität Dortmund, Germany; (3) METAFrame Technologies GmbH, Dortmund, Germany
Abstract:
Test-based model generation by classical automata learning is very expensive: it requires an impractically large number of queries to the system, each of which must be implemented as a system-level test case. Key to the tractability of observation-based model generation are powerful optimizations that exploit different kinds of expert knowledge to drastically reduce the number of required queries, and thus the testing effort. In this paper, we present a thorough experimental analysis of the second-order effects between such optimizations in order to maximize their combined impact.