Experimenting with software testbeds for evaluating new technologies
Authors: Mikael Lindvall, Ioana Rus, Paolo Donzelli, Atif Memon, Marvin Zelkowitz, Aysu Betin-Can, Tevfik Bultan, Chris Ackermann, Bettina Anders, Sima Asgari, Victor Basili, Lorin Hochstein, Jörg Fellmann, Forrest Shull, Roseanne Tvedt, Daniel Pech, Daniel Hirschbach
Affiliations: (1) Computer Science Department, University of Maryland, College Park, MD 20742, USA; (2) Fraunhofer Center for Experimental Software Engineering, College Park, MD, USA; (3) University of California at Santa Barbara, Santa Barbara, CA, USA; (4) Informatics Institute, Middle East Technical University, Ankara, Turkey; (5) Department of Computer Science and Engineering, University of Nebraska-Lincoln, 256 Avery Hall, Lincoln, NE 68588-0115, USA
Abstract: The evolution of a new technology depends on a sound theoretical basis for developing it, as well as on its experimental validation. To support such experimentation, we investigated the creation of a software testbed and the feasibility of using the same testbed for experimenting with a broad set of technologies. The testbed is a set of programs, data, and supporting documentation that allows researchers to test a new technology on a standard software platform. An important component of this testbed is the Unified Model of Dependability (UMD), which was used to elicit dependability requirements for the testbed software. With a collection of seeded faults and known issues in the target system, we can determine whether a new technology is adept at uncovering defects or provides the other benefits claimed by its developers. In this paper, we present the Tactical Separation Assisted Flight Environment (TSAFE) testbed, for which we modeled and evaluated dependability requirements and defined faults to be seeded for experimentation. We describe two completed experiments conducted on the testbed. The first studies a technology that identifies architectural violations and evaluates its ability to detect them. The second studies model checking as part of design for verification. We conclude by describing ongoing experimental work that studies testing, using the same testbed. Our conclusion is that even though these three experiments target very different technologies, using and reusing the same testbed is beneficial and cost effective.
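To illustrate how evaluation against a seeded-fault catalog can be scored, the following is a minimal Java sketch. It is not the authors' actual harness: the class name TestbedScorer, the method names, and the fault identifiers are all hypothetical. It shows the core idea the abstract describes: comparing the defects reported by a technology under study against the testbed's known ground truth of seeded faults.

import java.util.HashSet;
import java.util.Set;

/**
 * Minimal sketch (hypothetical names) of scoring a technology
 * against a testbed's catalog of seeded faults.
 */
public class TestbedScorer {

    /** Faults deliberately seeded into the testbed (known ground truth). */
    private final Set<String> seededFaults;

    public TestbedScorer(Set<String> seededFaults) {
        this.seededFaults = seededFaults;
    }

    /** Fraction of seeded faults that the technology under study reported. */
    public double detectionRate(Set<String> reportedDefects) {
        Set<String> detected = new HashSet<>(seededFaults);
        detected.retainAll(reportedDefects); // intersection: seeded AND reported
        return (double) detected.size() / seededFaults.size();
    }

    public static void main(String[] args) {
        // Hypothetical fault identifiers, for illustration only.
        Set<String> seeded = Set.of("ARCH-01", "SYNC-02", "GUI-03");
        Set<String> reported = Set.of("ARCH-01", "SYNC-02", "FALSE-07");

        TestbedScorer scorer = new TestbedScorer(seeded);
        System.out.printf("Detection rate: %.2f%n", scorer.detectionRate(reported));
        // Prints: Detection rate: 0.67 (2 of 3 seeded faults found)
    }
}

In the two completed experiments, the reported defects would come from the architectural-violation detector and the model checker, respectively; sharing one ground-truth catalog across such different technologies is what makes reusing the testbed cost effective.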
Contact Information: Daniel Hirschbach, Email:
Keywords: Empirical study; Technology evaluation; Software testbed
This article is indexed in SpringerLink and other databases.