Similar articles
 20 similar records found.
1.
This paper introduces a model-based approach for minimizing test sets that validate the interaction of human-computer systems. The novelty of the approach is twofold: (i) test cases are generated and selected to holistically cover both the behavioral model and the complementary fault model of the system under test (SUT); (ii) methods known from state-based conformance testing and graph theory are extended to construct efficient, heuristic search-based algorithms for minimizing the test sets constructed in step (i), also taking structural features into account. Experience shows that the approach can reduce test costs considerably, by up to 60%. Fevzi Belli received the M.S., Ph.D., and Habilitation degrees in electrical engineering and computer science from the Berlin Technical University. He is presently a Professor of Software Engineering in the Faculty of Computer Science, Electrical Engineering and Mathematics, University of Paderborn, Paderborn, Germany. Prior to this, he headed several projects at a software house in Munich, was a Professor of Computing Science at the Hochschule Bremerhaven, and was a faculty member of the University of Maryland, European Division. He has chaired several international conferences, e.g., ISSRE 1998, and is author or co-author of more than 100 papers published in scientific journals and conference proceedings. His research interests are in testing, fault tolerance and reliability of software, and programming techniques. Christof J. Budnik received the MS degree in electrical engineering and computer science in 2001 from the University of Paderborn. In 2002, he joined the Department of Computer Science, Electrical Engineering and Mathematics at the same university, where he is currently a faculty member. His research interests are in the areas of software quality, testing of interactive systems, and safety-critical user interfaces.
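A deliberately simplified sketch of heuristic test-set minimization in the spirit described above: keep a small subset of tests that still covers every requirement drawn from the behavioral and fault models. The greedy set-cover pass and all identifiers below are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative greedy reduction of a test set: keep a subset of tests that still
# covers every coverage requirement (e.g., edges of the behavioral model plus
# edges of a complementary fault model). Test ids and requirements are invented.

def minimize_tests(coverage: dict[str, set[str]]) -> list[str]:
    """coverage maps a test-case id to the set of requirements it covers."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Greedy heuristic: take the test covering the most uncovered items.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining requirements are not coverable by any test
        selected.append(best)
        uncovered -= coverage[best]
    return selected

tests = {
    "t1": {"e1", "e2", "f1"},        # behavioral edges e1, e2 and fault edge f1
    "t2": {"e2", "e3"},
    "t3": {"e1", "e3", "f1", "f2"},
}
print(minimize_tests(tests))          # ['t3', 't1']
```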

2.
Understanding a software system at the source-code level requires understanding the different concerns that it addresses, which in turn requires a way to identify these concerns in the source code. Whereas some concerns are explicitly represented by program entities (like classes, methods and variables) and thus are easy to identify, crosscutting concerns are not captured by a single program entity but are scattered over many program entities and tangled with other concerns. Because of their crosscutting nature, such concerns are difficult to identify and reduce the understandability of the system as a whole. In this paper, we report on a combined experiment in which we try to identify crosscutting concerns in the JHotDraw framework automatically. We first apply three independently developed aspect mining techniques to JHotDraw and evaluate and compare their results. Based on this analysis, we present three interesting combinations of these techniques, and show how these combinations provide a more complete coverage of the detected concerns than the original techniques do individually. Our results are a first step towards improving the understandability of a system that contains crosscutting concerns, and can be used as a basis for refactoring the identified crosscutting concerns into aspects. M. Ceccato is a PhD student at ITC-irst in Trento, Italy. He received his degree in Software Engineering from the University of Padova, Italy, in 2003. His master's thesis concerned the re-engineering of an existing large data warehouse application; the project was developed in the Information Technology department at Alcoa Servizi. His research interests are in source code analysis and manipulation, especially the migration of object-oriented code to aspect-oriented programming. He collaborates with King’s College London and Loyola College in Maryland on automatic support for this migration process. He has been involved in the organization and program committees of a number of AOP-related events, such as the Late Workshop in Chicago (2005) and Bonn, Germany (2006), held within the major aspect-oriented programming conference (AOSD), and the 3rd European Workshop on Aspects in Software (EWAS’06) in Enschede, The Netherlands. Marius Marin is a Ph.D. researcher in the Software Evolution Research Laboratory at Delft University of Technology, the Netherlands. He received an engineering degree from the Technical University of Civil Engineering, Bucharest, in 2000, and a Licentiate in Economic Computer Science from the Academy of Economic Studies, Bucharest, in 2002. Before starting his Ph.D. studies, he worked as a software engineer in industry. His main research interests are in the areas of reverse engineering, software modularization and modeling, and aspect-oriented software development. He is the main author of the publicly available aspect mining tool FINT and publishes at international conferences on the aforementioned topics. He has been involved in program and organizing committees of several workshops related to aspect mining. Kim Mens obtained his Ph.D. in Computer Science at the Vrije Universiteit Brussel, on “architectural conformance checking,” for which he used a declarative meta-programming approach. After his Ph.D. he became a full-time professor (chargé de cours) at the Université catholique de Louvain-la-Neuve (UCL). 
In addition to his current interest in logic meta-programming and intensional views, Kim Mens is one of the originators of the reuse contracts technique for automatically detecting conflicts in evolving software. He has been formally involved in several research networks related to software evolution. He has a strong interest in object-oriented and aspect-oriented software development and has actively participated in the organization of several workshops and conferences on those topics. He combines all these different research interests under the common denominator of co-evolution (between source code and earlier life-cycle software artifacts). Other research topics that fit this common theme and in which he is interested are software architecture, software maintenance, reverse engineering, software transformation, software restructuring and renovation, aspect mining and evolution of aspect programs. L. Moonen is an assistant professor in the Software Evolution Research Lab at Delft University of Technology and a researcher at the Centre for Mathematics and Computer Science (CWI), the Netherlands. His research interests are the design and development of advanced program analysis tools and techniques that support development, maintenance and evolution of large software systems. Concrete topics include the reverse engineering and exploration of views on software systems and their use for understanding and assessing software quality attributes such as evolvability, reliability and security. Dr. Moonen received an MSc (cum laude, Computer Science, 1996) and PhD (Computer Science, 2002) from the University of Amsterdam. He is one of the founders of the Software Improvement Group, a company that specializes in tools and consultancy to help organizations solve their legacy problems. He publishes regularly at, and serves on organizing-, steering- and program committees of, international workshops and conferences on reverse engineering (WCRE), source code analysis (SCAM), software maintenance (ICSM), program understanding (ICPC), reengineering (CSMR), aspect mining (Dagstuhl 06302, TEAM) and software security (CoBaSSA). Paolo Tonella is a senior researcher at ITC-irst, Trento, Italy. He received his laurea degree cum laude in Electronic Engineering from the University of Padova, Italy, in 1992, and his Ph.D. degree in Software Engineering from the same University, in 1999, with the thesis “Code Analysis in Support to Software Maintenance.” Since 1994 he has been a full time researcher of the Software Engineering group at ITC-irst. He participated in several industrial and European Community projects on software analysis and testing. He is the author of “Reverse Engineering of Object Oriented Code,” Springer, 2005. His current research interests include reverse engineering, aspect oriented programming, empirical studies, Web applications and testing. Tom Tourwé obtained the degree of Licentiate in Computer Science in 1997 and Ph.D. in Science in 2002 at the Vrije Universiteit Brussel. He is currently associated to the Centrum voor Wiskunde en Informatica, based in Amsterdam, The Netherlands, where he works as a post- doctoral researcher in the Ideals project. His main research interests lie in the broad area of software engineering, and include aspect-oriented software evolution and re-engineering in particular. He published several peer-reviewed articles on these topics in international journals and conferences, and organised a number of workshops on those themes.  相似文献   
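As a rough illustration of why combining miners can widen coverage, the toy sketch below merges the candidate sets reported by three hypothetical mining techniques via union and a two-out-of-three vote; the technique and candidate names are invented and do not correspond to the specific techniques or combinations evaluated in the paper.

```python
# Toy combination of aspect-mining results: each (hypothetical) miner reports a
# set of candidate crosscutting-concern seeds. A union widens coverage, while a
# two-out-of-three vote trades coverage for precision.

miner_a = {"Observer.notify", "Persistence.save", "Logging.log"}
miner_b = {"Persistence.save", "Undo.execute"}
miner_c = {"Logging.log", "Persistence.save", "Command.execute"}

techniques = [miner_a, miner_b, miner_c]

union = set().union(*techniques)
majority = {c for c in union if sum(c in t for t in techniques) >= 2}

print("union   :", sorted(union))     # broadest coverage of candidate concerns
print("majority:", sorted(majority))  # ['Logging.log', 'Persistence.save']
```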

3.
4.
Component middleware provides dependable and efficient platforms that support key functional and quality-of-service (QoS) needs of distributed real-time embedded (DRE) systems. Component middleware, however, also introduces challenges for DRE system developers, such as evaluating the predictability of DRE system behavior and choosing the right design alternatives before committing to a specific platform or platform configuration. Model-based technologies help address these issues by enabling design-time analysis and providing the means to automate the development, deployment, configuration, and integration of component-based DRE systems. To this end, this paper applies model checking techniques to DRE design models using model transformations to verify key QoS properties of component-based DRE systems developed using Real-time CORBA. We introduce a formal semantic domain for a general class of DRE systems that enables the verification of distributed non-preemptive real-time scheduling. Our results show that model-based techniques enable design-time analysis of timed properties and can be applied to effectively predict, simulate, and verify the event-driven behavior of component-based DRE systems. This research was supported by NSF Grants CCR-0225610 and ACI-0204028. Gabor Madl is a Ph.D. student and a graduate student researcher at the Center for Embedded Computer Systems at the University of California, Irvine. His advisor is Nikil Dutt. His research interests include the formal verification, optimization, component-based composition, and QoS management of distributed real-time embedded systems. He received his M.S. in computer science from Vanderbilt University and in computer engineering from the Budapest University of Technology and Economics. Dr. Sherif Abdelwahed received his Ph.D. degree in Electrical and Computer Engineering from the University of Toronto, Canada, in 2001. During 2000–2001, he was a research scientist with the system diagnosis group at the Rockwell Scientific Company. Since 2001 he has been with the Department of Electrical Engineering and Computer Science at Vanderbilt University as a Research Assistant Professor. His research interests include verification and control of distributed real-time systems, and model-based diagnosis of discrete-event and hybrid systems. Dr. Douglas C. Schmidt is a Professor of Computer Science, Associate Chair of the Computer Science and Engineering program, and a Senior Researcher in the Institute for Software Integrated Systems (ISIS), all at Vanderbilt University. He has published over 300 technical papers and 6 books that cover a range of research topics, including patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and applications. Dr. Schmidt has served as a Deputy Office Director and a Program Manager at DARPA, where he led the national R&D effort on middleware for DRE systems. In addition to his academic research and government service, Dr. Schmidt has over fifteen years of experience leading the development of ACE, TAO, CIAO, and CoSMIC, which are widely used, open-source DRE middleware frameworks and model-driven tools that contain a rich set of components and domain-specific languages that implement patterns and product-line architectures for high-performance DRE systems.

5.
Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality attributes. These attributes typically have intrinsic conflicts and must be considered simultaneously in order to reach a final design decision. AHP (Analytic Hierarchy Process), an important decision-making technique, has been leveraged to resolve such conflicts. AHP can help provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be so sensitive that the smallest change in intermediate priority weights can alter the final order of design alternatives. In this paper, we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the decision process. We apply our method to an example of a real-world distributed architecture presented in the literature. The results are promising in that they make important decision consequences explicit in terms of key design tradeoffs and the architecture's capability to handle future quality attribute changes. These expose critical decisions which are otherwise too subtle to be detected in standard AHP results. Liming Zhu is a PhD candidate in the School of Computer Science and Engineering at the University of New South Wales. He is also a member of the Empirical Software Engineering Group at National ICT Australia (NICTA). He obtained his BSc from Dalian University of Technology in China. After moving to Australia, he obtained his MSc in computer science from the University of New South Wales. His principal research interests include software architecture evaluation and empirical software engineering. Aybüke Aurum is a senior lecturer at the School of Information Systems, Technology and Management, University of New South Wales. She received her BSc and MSc in geological engineering, and her MEngSc and PhD in computer science. She also works as a visiting researcher at National ICT Australia (NICTA). Dr. Aurum is one of the editors of the books “Managing Software Engineering Knowledge”, “Engineering and Managing Software Requirements” and “Value-Based Software Engineering”. Her research interests include management of the software development process, software inspection, requirements engineering, decision making and knowledge management in software development. She is on the editorial boards of the Requirements Engineering Journal and the Asian Academy Journal of Management. Ian Gorton is a Senior Researcher at National ICT Australia. Until March 2004 he was Chief Architect in Information Sciences and Engineering at the US Department of Energy's Pacific Northwest National Laboratory. Previously he worked at Microsoft and IBM, as well as in other research positions. His interests include software architectures, particularly those for large-scale, high-performance information systems that use commercial off-the-shelf (COTS) middleware technologies. He received a PhD in Computer Science from Sheffield Hallam University. Dr. Ross Jeffery is Professor of Software Engineering in the School of Computer Science and Engineering at UNSW and Program Leader in Empirical Software Engineering in National ICT Australia Ltd. (NICTA). 
His current research interests are in software engineering process and product modeling and improvement, electronic process guides and software knowledge management, software quality, software metrics, software technical and management reviews, and software resource modeling and estimation. His research has involved over fifty government and industry organizations over a period of 15 years and has been funded by industry, government and universities. He has co-authored four books and over one hundred and twenty research papers. He has served on the editorial boards of the IEEE Transactions on Software Engineering and the Wiley International Series in Information Systems, and he is Associate Editor of the Journal of Empirical Software Engineering. He is a founding member of the International Software Engineering Research Network (ISERN). He was elected Fellow of the Australian Computer Society for his contribution to software engineering research.
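For readers unfamiliar with AHP, the sketch below derives priority weights from a single pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector; the matrix values are invented. The paper's contribution sits on top of such rankings, making the underlying tradeoffs explicit and probing how sensitive the final order is to changes in these weights.

```python
# Minimal AHP-style priority calculation using the geometric-mean approximation
# of the principal eigenvector; the 3x3 pairwise comparison matrix is invented.

import math

def ahp_weights(matrix):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# How strongly each design alternative is preferred over the others with
# respect to one quality attribute (Saaty's 1-9 scale).
pairwise = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]
print([round(w, 3) for w in ahp_weights(pairwise)])   # approx. [0.648, 0.23, 0.122]
```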

6.
Software testing is an essential process in software development. It is also very costly, often consuming half the financial resources assigned to a project. The most laborious part of software testing is the generation of test data, which is currently still a principally manual process. Hence, automating test-data generation can significantly cut the total cost of software testing and of the software development cycle in general. A number of automated test-data generation approaches have already been explored. This paper highlights the goal-oriented approach as a promising way to devise automated test-data generators. A range of optimization techniques can be used within these goal-oriented test-data generators, and their respective characteristics, when applied in this setting, remain relatively unexplored. Therefore, this paper conducts a comparative study of the effectiveness of the most commonly used optimization techniques.
James Miller (Corresponding author)

Man Xiao received a B.S. degree in Space Physics and Electronics Information Engineering from the University of Wuhan, China, and an M.S. degree in Software Engineering from the University of Alberta, Canada. She is now a Software Engineer at a small start-up company in Edmonton, Alberta, Canada. Mohamed El-Attar is a Ph.D. candidate (Software Engineering) at the University of Alberta and a member of the STEAM laboratory. His research interests include Requirements Engineering, in particular with UML and use cases, object-oriented analysis and design, model transformation and empirical studies. Mohamed received a B.S. in Computer Systems Engineering from Carleton University. Marek Reformat received his M.S. degree from the Technical University of Poznan, Poland, and his Ph.D. from the University of Manitoba, Canada. His interests are related to simulation and modeling in the time domain, and to evolutionary computing and its application to optimization problems. For three years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, he is with the Department of Electrical and Computer Engineering at the University of Alberta. His research interests lie in the application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, and of probabilistic and evidence theories to intelligent data analysis that translates data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. He has been a member of the program committees of several conferences related to computational intelligence and evolutionary computing. James Miller received his B.S. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system. Subsequently, he worked at the United Kingdom’s National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship and subsequently a senior lectureship in Computer Science. Initially during this period his research interests were in computer vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in software and systems engineering. In 2000, he joined the Department of Electrical and Computer Engineering at the University of Alberta as a full professor and in 2003 became an adjunct professor at the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate verification and validation issues of software, embedded and ubiquitous computer systems. He has published over one hundred refereed journal and conference papers on software and systems engineering (see www.steam.ualberta.ca for details on recent directions), currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement, and sits on the editorial board of the Journal of Empirical Software Engineering.
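A minimal example of the goal-oriented, search-based idea: a fitness (branch distance) measures how close an input is to driving execution down a target branch, and a simple optimizer searches for inputs that reach it. The function under test, the fitness and the hill climber below are illustrative assumptions, not the generators compared in the paper.

```python
# Sketch of search-based test-data generation: minimize a branch-distance
# fitness until an input takes the target branch. Everything here is invented.

import random

def under_test(x, y):
    if x * 2 == y + 10:              # target branch we want to cover
        return "target"
    return "other"

def branch_distance(x, y):
    # 0 when the target condition holds; larger means "further away".
    return abs(x * 2 - (y + 10))

def hill_climb(steps=10_000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    best = branch_distance(x, y)
    for _ in range(steps):
        nx, ny = x + random.choice([-1, 1]), y + random.choice([-1, 1])
        d = branch_distance(nx, ny)
        if d <= best:
            x, y, best = nx, ny, d
        if best == 0:
            break
    return x, y

x, y = hill_climb()
print(x, y, under_test(x, y))        # inputs satisfying x*2 == y+10, e.g. "6 2 target"
```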

7.
This paper presents a metamodel for modeling system features and relationships between features. The underlying idea of this metamodel is to employ features as first-class entities in the problem space of software and to improve the customization of software by explicitly specifying both static and dynamic dependencies between system features. In this metamodel, features are organized into hierarchical structures by refinement relationships, static dependencies between features are specified by constraint relationships, and dynamic dependencies between features are captured by interaction relationships. A first-order-logic-based method is proposed to formalize constraints and to verify constraints and customization. This paper also presents a framework for interaction classification, and an informal mapping between interactions and constraints through constraint semantics. Hong Mei received the BSc and MSc degrees in computer science from the Nanjing University of Aeronautics and Astronautics (NUAA), China, in 1984 and 1987, respectively, and the PhD degree in computer science from Shanghai Jiao Tong University in 1992. He is currently a professor of Computer Science at Peking University, China. His current research interests include Software Engineering and Software Engineering Environment, Software Reuse and Software Component Technology, Distributed Object Technology, and Programming Language. He has published more than 100 technical papers. Wei Zhang received the BSc in Engineering Thermophysics and the MSc in Computer Science from the Nanjing University of Aeronautics and Astronautics (NUAA), China, in 1999 and 2002, respectively. He is currently a PhD student at the School of Electronics Engineering and Computer Science of Peking University, China. His research interests include feature-oriented requirements modeling, feature-driven software architecture design and feature-oriented software reuse. Haiyan Zhao received both the BSc and MSc degrees in Computer Science from Peking University, China, and the PhD degree in Information Engineering from the University of Tokyo, Japan. She is currently an associate professor of Computer Science at Peking University, China. Her research interests include Software Reuse, Domain Engineering, Domain Specific Language and Program Transformation.
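A toy illustration of verifying a customization against static feature constraints, here reduced to simple requires/excludes checks rather than the paper's first-order formalization; the feature names and constraints are invented.

```python
# Illustrative check of feature-customization constraints: "requires" and
# "excludes" dependencies between features are evaluated against a selection.

requires = {"SpellCheck": {"Editor"}, "CloudSync": {"Account"}}
excludes = {"OfflineMode": {"CloudSync"}}

def check(selection: set[str]) -> list[str]:
    violations = []
    for f in selection:
        for needed in requires.get(f, ()):
            if needed not in selection:
                violations.append(f"{f} requires {needed}")
        for banned in excludes.get(f, ()):
            if banned in selection:
                violations.append(f"{f} excludes {banned}")
    return violations

print(check({"Editor", "SpellCheck"}))                    # [] -> valid customization
print(check({"SpellCheck", "OfflineMode", "CloudSync"}))  # three violations (order may vary)
```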

8.
A practical approach to testing GUI systems
GUI systems are becoming increasingly popular thanks to their ease of use when compared against traditional systems. However, GUI systems are often challenging to test due to their complexity and special features. Traditional testing methodologies are not designed to deal with the complexity of GUI systems; using these methodologies can result in increased time and expense. In our proposed strategy, a GUI system will be divided into two abstract tiers—the component tier and the system tier. On the component tier, a flow graph will be created for each GUI component. Each flow graph represents a set of relationships between the pre-conditions, event sequences and post-conditions for the corresponding component. On the system tier, the components are integrated to build up a viewpoint of the entire system. Tests on the system tier will interrogate the interactions between the components. This method for GUI testing is simple and practical; we will show the effectiveness of this approach by performing two empirical experiments and describing the results found.
Corresponding author: James Miller

Ping Li   received her M.Sc. in Computer Engineering from the University of Alberta, Canada, in 2004. She is currently working for Waterloo Hydrogeologic Inc., a Schlumberger Company, as a Software Quality Analyst. Toan Huynh   received a B.Sc. in Computer Engineering from the University of Alberta, Canada. He is currently a PhD candidate at the same institution. His research interests include: web systems, e-commerce, software testing, vulnerabilities and defect management, and software approaches to the production of secure systems. Marek Reformat   received his M.Sc. degree from Technical University of Poznan, Poland, and his Ph.D. from University of Manitoba, Canada. His interests were related to simulation and modeling in time-domain, as well as evolutionary computing and its application to optimization problems. For three years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, Marek Reformat is with the Department of Electrical and Computer Engineering at University of Alberta. His research interests lay in the areas of application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, as well as probabilistic and evidence theories to intelligent data analysis leading to translating data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. Dr. Reformat has been a member of program committees of several conferences related to Computational Intelligence and evolutionary computing. He is a member of the IEEE Computer Society and ACM. James Miller   received the B.Sc. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system. Subsequently, he worked at the United Kingdom’s National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship, and subsequently a senior lectureship in Computer Science. Initially during this period his research interests were in Computer Vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in Software and Systems Engineering. In 2000, he joined the Department of Electrical and Computer Engineering at the University of Alberta as a full professor and in 2003 became an adjunct professor at the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate software verification and validation issues across various domains, including embedded, web-based and ubiquitous environments. He has published over one hundred refereed journal and conference papers on Software and Systems Engineering (see www.steam.ualberta.ca for details on recent directions); and currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement; and sits on the editorial board of the Journal of Empirical Software Engineering.   相似文献   
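The component-tier idea can be pictured with a tiny event-flow graph for an invented dialog: enumerate simple event sequences from an entry event to a terminating event, each of which becomes a candidate test when paired with its pre- and post-conditions. This is a sketch under stated assumptions, not the authors' tool.

```python
# Sketch of a component-tier flow graph for a hypothetical GUI dialog: nodes are
# events, edges say which event may follow which, and tests are derived by
# enumerating paths from the entry event to a terminating event.

flow = {
    "open":      ["type_name", "cancel"],
    "type_name": ["click_ok", "cancel"],
    "click_ok":  [],
    "cancel":    [],
}

def paths(graph, node, path=()):
    path = path + (node,)
    if not graph[node]:
        yield path
    for nxt in graph[node]:
        yield from paths(graph, nxt, path)

for p in paths(flow, "open"):
    print(" -> ".join(p))
# open -> type_name -> click_ok
# open -> type_name -> cancel
# open -> cancel
```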

9.
An empirical study of predicting software faults with case-based reasoning
The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures in our CBR system are also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models perform better than models based on multiple linear regression. Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the Director of the Empirical Software Engineering Laboratory. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, and statistical modeling. He has published more than 200 refereed papers in these areas. He has been a principal investigator and project leader in a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the Association for Computing Machinery, the IEEE Computer Society, and the IEEE Reliability Society. He served as the general chair of the 1999 International Symposium on Software Reliability Engineering (ISSRE’99), and the general chair of the 2001 International Conference on Engineering of Computer Based Systems. Also, he has served on technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal, and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems. Naeem Seliya received the M.S. degree in Computer Science from Florida Atlantic University, Boca Raton, FL, USA, in 2001. He is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include software engineering, computational intelligence, data mining, software measurement, software reliability and quality engineering, software architecture, computer data security, and network intrusion detection. He is a student member of the IEEE Computer Society and the Association for Computing Machinery.
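A compact sketch of the flavor of prediction described above, under the assumption of a two-metric module representation: the k nearest previously developed modules under the Mahalanobis distance are combined with inverse-distance weighting. The module data are invented and this is not the authors' tooling.

```python
# Minimal CBR-style fault estimate: k nearest cases (Mahalanobis distance)
# combined with inverse-distance weighting. Metric values are invented.

import numpy as np

cases = np.array([[10, 2.0], [40, 5.5], [25, 3.1], [60, 8.0]])   # metrics per module
faults = np.array([0, 3, 1, 7])                                  # known fault counts

def predict(query, k=2):
    inv_cov = np.linalg.inv(np.cov(cases, rowvar=False))
    diffs = cases - query
    dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)          # inverse-distance weights
    return float(np.sum(weights * faults[nearest]) / np.sum(weights))

print(round(predict(np.array([30, 4.0])), 2))        # weighted estimate between 1 and 3
```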

10.
Designs almost always require tradeoffs between competing design choices to meet system requirements. We present a framework for evaluating design choices with respect to meeting competing requirements. Specifically, we develop a model to estimate the performance of a UML design subject to changing levels of security and fault-tolerance. This analysis gives us a way to identify design solutions that are infeasible. Multi-criteria decision making techniques are applied to evaluate the remaining feasible alternatives. The method is illustrated with two examples: a small sensor network and a system for controlling traffic lights. Dr. Anneliese Amschler Andrews is Professor and Chair of the Department of Computer Science at the University of Denver. Before that she was the Huie Rogers Endowed Chair in Software Engineering at Washington State University. Dr. Andrews is the author of a text book and over 130 articles in the area of Software Engineering, particularly software testing and maintenance. Dr. Andrews holds an MS and PhD from Duke University and a Dipl.-Inf. from the Technical University of Karlsruhe. She served as Editor-in-Chief of the IEEE Transactions on Software Engineering. She has also served on several other editorial boards including the IEEE Transactions on Reliability, the Empirical Software Engineering Journal, the Software Quality Journal, the Journal of Information Science and Technology, and the Journal of Software Maintenance. She was Director of the Colorado Advanced Software Institute from 1995 to 2002. CASI's mission was to support technology transfer research related to software through collaborations between industry and academia. Ed Mancebo studied software engineering at Milwaukee School of Engineering and computer science at Washington State University. His masters thesis explored applying systematic decision making methods to software engineering problems. He is currently a software developer at Amazon.com. Dr. Per Runeson is a professor in software engineering at Lund University, Sweden. His research interests include methods to facilitate, measure and manage aspects of software quality. He received a PhD from Lund University in 1998 and has industrial experience as a consulting expert. He is a member of the editorial board of Empirical Software Engineering and several program committees, and currently has a senior researcher position funded by the Swedish Research Council. Robert France is currently a Full Professor in the Department of Computer Science at Colorado State University. His research interests are in the area of Software Engineering, in particular formal specification techniques, software modeling techniques, design patterns, and domain-specific modeling languages. He is an Editor-in-Chief of the Springer journal on Software and System Modeling (SoSyM), and is a Steering Committee member and past Steering Committee Chair of the MoDELS/UML conference series. He was also a member of the revision task forces for the UML 1.x standards.  相似文献   
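The multi-criteria step can be pictured with a deliberately simplified weighted-sum ranking (the paper applies more elaborate multi-criteria decision-making techniques on top of a performance model); the attribute weights, scores and alternative names below are invented.

```python
# Small weighted-sum illustration of trading off competing attributes across
# feasible design alternatives.

weights = {"performance": 0.5, "security": 0.3, "fault_tolerance": 0.2}

alternatives = {
    "replicated_encrypted": {"performance": 0.6, "security": 0.9, "fault_tolerance": 0.9},
    "single_node_fast":     {"performance": 0.9, "security": 0.6, "fault_tolerance": 0.3},
}

def score(attrs):
    return sum(weights[a] * attrs[a] for a in weights)

for name, attrs in sorted(alternatives.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(attrs):.2f}")
# replicated_encrypted: 0.75
# single_node_fast: 0.69  -> close scores are a cue to check weight sensitivity
```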

11.
Many statechart-based testing strategies result in specifying a set of paths to be executed through a (flattened) statechart. These techniques can usually be easily automated so that the tester does not have to go through the tedious procedure of deriving paths manually to comply with a coverage criterion. The next step is then to take each test path individually and derive test requirements leading to fully specified test cases. This requires that we determine the system state required for each event/transition that is part of the path to be tested and the input parameter values for all events and actions associated with the transitions. We propose here a methodology towards the automation of this procedure, which is based on a careful normalization and analysis of operation contracts and transition guards written with the Object Constraint Language (OCL). It is illustrated by one case study that exemplifies the steps of our methodology and provides a first evaluation of its applicability. The scope of the testing activity depends on what is modeled by the statechart. If the statechart models the behavior of a single class, then it can be used to support unit testing. If the behavior of a class cluster, a subsystem or a component is modeled, then we are concerned with integration testing. If the whole system is modeled, then the focus of statechart-based testing is system testing. Lionel C. Briand is on the faculty of the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, where he founded and leads the Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/Squall.htm). He has been granted the Canada Research Chair in Software Quality Engineering and is also a visiting professor at the Simula laboratories, University of Oslo, Norway. Before that he was the software quality engineering department head at the Fraunhofer Institute for Experimental Software Engineering, Germany. Dr. Briand also worked as a research scientist for the Software Engineering Laboratory, a consortium of the NASA Goddard Space Flight Center, CSC, and the University of Maryland. He has been on the program, steering, or organization committees of many international IEEE conferences such as ICSE, ICSM, ISSRE, and METRICS. He is the co-editor-in-chief of Empirical Software Engineering (Springer) and is a member of the editorial board of Systems and Software Modeling (Springer). He was on the board of IEEE Transactions on Software Engineering from 2000 to 2004. His research interests include object-oriented analysis and design, inspections and testing in the context of object-oriented development, quality assurance and control, project planning and risk analysis, and technology evaluation. Lionel received the BSc and MSc degrees in geophysics and computer systems engineering from the University of Paris VI, France. He received the PhD degree in computer science, with high honors, from the University of Paris XI, France. Yvan Labiche received the BSc in Computer System Engineering from the graduate school of engineering CUST (Centre Universitaire des Sciences et Techniques, Clermont-Ferrand), France. He completed a Master's in fundamental computer science and production systems in 1995 (Université Blaise Pascal, Clermont-Ferrand, France). While doing his Ph.D. in Software Engineering, completed in 2000 at LAAS/CNRS in Toulouse, France, Yvan worked with Aerospatiale Matra Airbus (now EADS Airbus) on the definition of testing strategies for safety-critical, on-board software developed using object-oriented technologies. In January 2001, Dr. Yvan Labiche joined the Department of Systems and Computer Engineering at Carleton University as an Assistant Professor. His research interests include object-oriented analysis and design, software testing in the context of object-oriented development, and technology evaluation. He is a member of the IEEE. Jim (Jingfeng) Cui completed his BSc in Industrial Automation Control at the School of Information and Engineering, Northeastern University, China. He received a Master of Applied Science (specialization in Software Engineering) in 2004 from the Ottawa-Carleton Institute of Electrical and Computer Engineering, Ottawa, Canada. During his graduate studies, he was awarded the Ontario Graduate Scholarship in Science and Technology. He is now a senior Software Architect at Sunyard System & Engineering Co., Ltd., China. His interests include Object-Oriented Software Development, Quality Assurance, and Content Management Systems.

12.
Mutation testing has traditionally been used as a defect injection technique to assess the effectiveness of a test suite as represented by a “mutation score.” Recently, mutation testing tools have become more efficient, and industrial usage of mutation analysis is experiencing growth. Mutation analysis entails adding or modifying test cases until the test suite is sufficient to detect as many mutants as possible and the mutation score is satisfactory. The augmented test suite resulting from mutation analysis may reveal latent faults and provides a stronger test suite to detect future errors which might be injected. Software engineers often look for guidance on how to augment their test suite using information provided by line and/or branch coverage tools. As the use of mutation analysis grows, software engineers will want to know how the emerging technique compares with and/or complements coverage analysis for guiding the augmentation of an automated test suite. Additionally, software engineers can benefit from an enhanced understanding of efficient mutation analysis techniques. To address these needs for additional information about mutation analysis, we conducted an empirical study of the use of mutation analysis on two open source projects. Our results indicate that a focused effort on increasing mutation score leads to a corresponding increase in line and branch coverage to the point that line coverage, branch coverage and mutation score reach a maximum but leave some types of code structures uncovered. Mutation analysis guides the creation of additional “common programmer error” tests beyond those written to increase line and branch coverage. We also found that 74% of our chosen set of mutation operators is useful, on average, for producing new tests. The remaining 26% of mutation operators did not produce new test cases because their mutants were immediately detected by the initial test suite, indirectly detected by test suites we added to detect other mutants, or were not able to be detected by any test.
Corresponding author: Laurie Williams

Ben Smith is a second-year Ph.D. student in Computer Science at North Carolina State University working as a research assistant under Dr. Laurie Williams. He received his Bachelor's degree in Computer Science in May 2007 and hopes to receive his doctorate in 2012. He has begun work on developing SQL coverage metrics as a predictive measure of the security of a web application. This fall, he will begin the doctoral preliminary exam and work as a Testing Manager for the NCSU CSC Senior Design Center, North Carolina State's capstone course for Computer Science. Finally, he has designed and maintained the websites for the Center for Open Software Engineering and ESEM 2009. Laurie Williams is an Associate Professor in the Computer Science Department of the College of Engineering at North Carolina State University. She leads the Software Engineering Research group and is also the Director of the North Carolina State University Laboratory for Collaborative System Development and of the Center for Open Software Engineering. She is also technical co-director of the Center for Open Software Engineering (COSE) and the area technical director of the Secure Open Systems Initiative (SOSI) at North Carolina State University. Laurie received her Ph.D. in Computer Science from the University of Utah, her MBA from Duke University, and her BS in Industrial Engineering from Lehigh University. She worked for IBM for nine years in Raleigh, NC, before returning to academia. Laurie's research interests include agile software development methodologies and practices, collaborative/pair programming, software reliability and testing, and software engineering for secure systems development.
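For readers unfamiliar with the metric, a mutation score is simply the fraction of generated mutants that the test suite detects ("kills"); the sketch below computes it for an invented set of mutant verdicts. In mutation analysis, tests are then added until the surviving, non-equivalent mutants are killed and the score is satisfactory.

```python
# Quick illustration of a mutation score; mutant ids and verdicts are invented.

results = {               # mutant id -> True if some test failed against it
    "AOR_1": True, "ROR_3": True, "COR_2": False,
    "SDL_7": True, "ROR_9": False,
}

killed = sum(results.values())
score = killed / len(results)
print(f"mutation score: {killed}/{len(results)} = {score:.0%}")   # 3/5 = 60%
```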

13.
While integrating components into systems, we will be confronted with problems concerned with the interoperability of components due to the interaction mismatches at multiple levels, such as interaction behaviors between components and features imposed by architectural styles. In this paper, we studied the interoperability of components and explored the approach to supporting high interoperability of components involved in mismatching interactions. First, we formalized components involved in different architectural styles in the pi-calculus. Next, we studied the formal foundation of the interoperability of components for reasoning about the conditions under which two heterogeneous components are possible to interoperate and interconnect together properly. Then, we described a wrapper-based solution for integrating components into systems that impose mismatching assumptions about usage of the components. In the end, we presented an agent-based implementation for the solution, in which agents are used to wrap components and can automatically resolve multiple levels of interaction mismatches between components. We also gave a simple example to illustrate our approach.
Corresponding author: Hong Mei

Wenpin Jiao received his BA and MS degrees in computer science from East China University of Science and Technology in 1991 and 1997, respectively, and his Ph.D. degree in computer science from the Institute of Software, Chinese Academy of Sciences, in 2000. From 2000 to 2002, he was a postdoctoral fellow in the Department of Computer Science at the University of Victoria, Canada. Since 2004, he has been an associate professor in the School of Electronics Engineering and Computer Science at Peking University. His major research focus is on autonomous component technology, multi-agent systems, and software engineering. Hong Mei received his BA and MS degrees in computer science from Nanjing University of Aeronautics and Astronautics in 1984 and 1987, respectively, and his Ph.D. degree in computer science from Shanghai Jiao Tong University in 1992. From 1992 to 1994, he was a postdoctoral research fellow at Peking University. Since 1997, he has been a professor and Ph.D. advisor in the Department of Computer Science and Engineering at Peking University. He has also served as vice dean of the School of Electronics Engineering and Computer Science and of the Capital Development Institute at Peking University. His current research interests include Software Engineering and Software Engineering Environments, Software Reuse and Software Component Technology, Distributed Object Technology, Software Production Technology, and Programming Languages. He is a member of the Expert Committee for Computer Science and Technology of the State 863 High-Tech Program, a chief scientist of the State 973 Fundamental Research Program, a consultant for Bell Labs Research China, the director of the Special Interest Group on Software Engineering of the China Computer Federation (CCF), a member of the editorial boards of Science in China (Series F), ACTA ELECTRONICA SINICA and the Journal of Software, and a guest professor at NUAA. He has also served on the program committees of various international conferences.

14.
The pairwise attribute noise detection algorithm
Analyzing the quality of data prior to constructing data mining models is emerging as an important issue. Algorithms for identifying noise in a given data set can provide a good measure of data quality. Considerable attention has been devoted to detecting class noise or labeling errors. In contrast, limited research work has been devoted to detecting instances with attribute noise, in part due to the difficulty of the problem. We present a novel approach for detecting instances with attribute noise and demonstrate its usefulness with case studies using two different real-world software measurement data sets. Our approach, called the Pairwise Attribute Noise Detection Algorithm (PANDA), is compared with a nearest-neighbor, distance-based outlier detection technique (denoted DM) investigated in the related literature. Since what constitutes noise is domain specific, our case studies use a software engineering expert to inspect the instances identified by the two approaches to determine whether they actually contain noise. It is shown that PANDA provides better noise detection performance than the DM algorithm. Jason Van Hulse is a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include data mining and knowledge discovery, machine learning, computational intelligence and statistics. He is a student member of the IEEE and the IEEE Computer Society. He received the M.A. degree in mathematics from Stony Brook University in 2000, and is currently Director, Decision Science at First Data Corporation. Taghi M. Khoshgoftaar is a professor at the Department of Computer Science and Engineering, Florida Atlantic University, and the director of the Empirical Software Engineering and Data Mining and Machine Learning Laboratories. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, machine learning, and statistical modeling. He has published more than 300 refereed papers in these subjects. He has been a principal investigator and project leader in a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the IEEE, the IEEE Computer Society, and the IEEE Reliability Society. He served as the program chair and general chair of the IEEE International Conference on Tools with Artificial Intelligence in 2004 and 2005, respectively. Also, he has served on technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal, and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems. Haiying Huang received the M.S. degree in computer engineering from Florida Atlantic University, Boca Raton, Florida, USA, in 2002. She is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. Her research interests include software engineering, computational intelligence, data mining, software measurement, software reliability, and quality engineering.
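As a rough intuition for pairwise attribute-noise scoring (a simplified stand-in, not the published PANDA algorithm), the sketch below scores each instance by how far each of its attribute values deviates from the instances that share its value on the paired attribute; the data are invented.

```python
# Simplified pairwise deviation score: for every ordered attribute pair (j, k),
# group instances by attribute k and compare each instance's attribute j with
# its group mean, normalized by the global spread of attribute j.

from itertools import permutations
from statistics import mean, pstdev

data = [
    (1, 10), (1, 11), (1, 12),
    (2, 20), (2, 21), (2, 95),    # the last pair is inconsistent with its group
]

def noise_scores(rows):
    scores = [0.0] * len(rows)
    for j, k in permutations(range(len(rows[0])), 2):
        spread = pstdev([r[j] for r in rows]) or 1.0
        groups = {}
        for r in rows:
            groups.setdefault(r[k], []).append(r[j])
        for i, r in enumerate(rows):
            scores[i] += abs(r[j] - mean(groups[r[k]])) / spread
    return scores

for row, s in sorted(zip(data, noise_scores(data)), key=lambda t: -t[1]):
    print(row, round(s, 2))        # (2, 95) receives the highest score
```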

15.
This paper presents a methodology for estimating users’ opinion of the quality of a software product. Users’ opinion changes with time as they progressively become more acquainted with the software product. In this paper, we study the dynamics of users’ opinion and offer a method for assessing users’ final perception, based on measurements in the early stages of product release. The paper also presents methods for collecting users’ opinion and from the derived data, shows how their initial belief state for the quality of the product is formed. It adapts aspects of Belief Revision theory in order to present a way of estimating users’ opinion, subsequently formed after their opinion revisions. This estimation is achieved by using the initial measurements and without having to conduct surveys frequently. It reports the correlation that users tend to infer among quality characteristics and represents this correlation through a determination of a set of constraints between the scores of each quality characteristic. Finally, this paper presents a fast and automated way of forming users’ new belief state for the quality of a product after examining their opinion revisions. Dimitris Stavrinoudis received his degree in Computer Engineering from Patras University and is a Ph.D. student of Computer Engineering and Informatics Department. He worked as a senior computer engineer and researcher at the R.A. Computer Technology Institute. He has participated in research and development projects in the areas of software engineering, databases and educational technologies. Currently, he works at the Hellenic Open University. His research interests include software quality, software metrics and measurements. Michalis Xenos received his degree and Ph.D. in Computer Engineering from Patras University. He is a Lecturer in the Informatics Department of the School of Sciences and Technology of the Hellenic Open University. He also works as a researcher in the Computer Technology Institute of Patras and has participated in over 15 research and development projects in the areas of software engineering and IT development management. His research interests include, inter alia, Software Engineering and Educational Technologies. He is the author of 6 books in Greek and over 30 papers in international journals and conferences. Pavlos Peppas received his B.Eng. in Computer Engineering from Patras University (1988), and his Ph.D. in Computer Science from Sydney University (1994). He joined Macquarie University, Sydney, as a lecturer in September 1993, and was promoted to a senior lecturer in October 1998. In January 2000, he took up an appointment at Intrasoft, Athens, where he worked as a senior specialist in the Data Warehousing department. He joint Athens Information Technology in February 2003 as a senior researcher, and since November 2003 he is an associate professor at the Dept of Business Administration at the University of Patras. He also holds an adjunct associate professorship at the School of Computer Science and Engineering at the University of New South Wales. His research interests lie primarily within the area of Artificial Intelligence, and more specifically in logic-based approaches to Knowledge Representation and Reasoning with application in robotics, software engineering, organizational knowledge management, and the semantic web. Dimitris Christodoulakis received his degree in Mathematics from the University of Athens and his Ph.D. in Informatics from the University of Bonn. 
He was a researcher at the National Informatics Centre of Germany. He is a Professor and Vice President of the Computer Engineering and Informatics Department of Patras University. He has been Scientific Coordinator of many research and development projects in the following areas: knowledge and database systems, very-large-volume information storage, hypertext, and natural language technology for Modern Greek. He is author or co-author of many articles published at international conferences and editor of several conference proceedings. He has been responsible for the development of proofing tools for Microsoft Corp. He is Vice Director of the Research Academic Computer Technology Institute (RACTI).

16.
17.
Commercial off-the-shelf (COTS) middleware is now widely used to develop distributed real-time and embedded (DRE) systems. DRE systems are themselves increasingly combined to form systems of systems that have diverse quality of service (QoS) requirements. Earlier generations of COTS middleware, such as Object Request Brokers (ORBs) based on the CORBA 2.x standard, did not facilitate the separation of QoS policies from application functionality, which made it hard to configure and validate complex DRE applications. The new generation of component middleware, such as the CORBA Component Model (CCM) based on the CORBA 3.0 standard, addresses the limitations of earlier generation middleware by establishing standards for implementing, packaging, assembling, and deploying component implementations. There has been little systematic empirical study of the performance characteristics of component middleware implementations in the context of DRE systems. This paper therefore provides four contributions to the study of CCM for DRE systems. First, we describe the challenges involved in benchmarking different CCM implementations. Second, we describe key criteria for comparing different CCM implementations using key black-box and white-box metrics. Third, we describe the design of our CCMPerf benchmarking suite to illustrate test categories that evaluate aspects of CCM implementations to determine their suitability for the DRE domain. Fourth, we use CCMPerf to benchmark the CIAO implementation of CCM and analyze the results. These results show that the CIAO implementation based on the more sophisticated CORBA 3.0 standard has comparable DRE performance to that of the TAO implementation based on the earlier CORBA 2.x standard. Arvind S. Krishna is a PhD student in the Electrical Engineering and Computer Science Department at Vanderbilt University and a member of the Institute for Software Integrated Systems. He received his MA in management from the Birla Institute of Technology and Science (BITS), Pilani, India, and his MS in computer science from the University of California, Irvine. His research interests include patterns, real-time Java technologies for Real-Time CORBA, model-integrated QA techniques, and tools for partial evaluation and specialization of middleware. He is a student member of the IEEE and ACM. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Balachandran Natarajan is a senior staff engineer at the Institute for Software Integrated Systems and a PhD student in electrical engineering and computer science at Vanderbilt University. His research focuses on applying patterns, optimization principles, and frameworks to build high-performance, dependable, and real-time distributed systems. He received his MS in computer science from Washington University. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Aniruddha Gokhale is an assistant professor in the Electrical Engineering and Computer Science Department at Vanderbilt University and a senior research scientist at the Institute for Software Integrated Systems. His research focuses on real-time component middleware optimizations, distributed systems and networks, model-driven software synthesis applied to component middleware-based distributed systems, and distributed resource management. He received his PhD in computer science from Washington University. Contact him at the Inst. 
for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Douglas C. Schmidt is a professor in the Electrical Engineering and Computer Science Department at Vanderbilt University and a senior research scientist at the Institute for Software Integrated Systems. His research interests include patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded middleware and applications running over high-speed networks and embedded system interconnects. He received his PhD in information and computer science from the University of California, Irvine. Contact him at the Inst. for Software Integrated Systems, 2015 Terrace Pl., Nashville, TN 37203. Nanbor Wang is a Research Scientist in the Distributed Technologies Group at the Tech-X Corporation in Boulder, Colorado. He received M.S. and Ph.D. degrees in Computer Science from Washington University in St. Louis, Missouri. While working toward his degrees, he also worked as a Research Associate in the Center for Distributed Object Computing in the Department of Computer Science, where he conducted research on the design, implementation and analysis of object-oriented and component-based techniques for the development of distributed systems and the management of extra-functional concerns. Dr. Wang's work currently focuses on developing and applying middleware techniques, such as CORBA and Grid computing, to enable distributed and parallel scientific applications, such as distributed data analysis, remote visualization and collaboration, and workflow management for large-scale scientific applications. Gautam H. Thaker was born in Amdavad, India, in 1955. He holds a BSEE (1975) and an MSEE (1977) from Clemson University, Clemson, SC. He spent the 1985-86 academic year at M.I.T. as a visiting researcher. His research interests include the analysis, design, construction and validation of real-time command-and-control systems. In particular, he has focused on interactions between operating systems, networking protocols, and middleware technologies.
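A minimal sketch of the black-box side of such benchmarking: invoke an operation repeatedly, then report throughput and latency percentiles. The no-op stand-in below is where a real remote or component invocation would go; CCMPerf itself is not reproduced here.

```python
# Tiny black-box micro-benchmark harness: throughput plus latency statistics.

import time
import statistics

def invoke():
    pass                      # placeholder for a remote/component operation

def benchmark(calls=10_000):
    samples = []
    start = time.perf_counter()
    for _ in range(calls):
        t0 = time.perf_counter()
        invoke()
        samples.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    samples.sort()
    return {
        "throughput_cps": calls / elapsed,              # calls per second
        "mean_us": statistics.mean(samples) * 1e6,      # average latency
        "p99_us": samples[int(0.99 * calls) - 1] * 1e6, # tail latency
    }

print(benchmark())
```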

18.
19.
20.