Similar Literature
20 similar documents found (search time: 31 ms)
1.
Context: Replication plays an important role in experimental disciplines, yet many uncertainties remain about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be between the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of the typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications that provides SE experimenters with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it accounts for the full range of replications, from those less similar to those more similar to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve particular verification purposes and afford different degrees of change. Each replication type helps to discover the experimental conditions that might influence the results. The proposed classification can be used to identify the changes made in a replication and, based on these, to understand the level of verification achieved.

2.
In no science or engineering discipline does it make sense to speak of isolated experiments: the results of a single experiment cannot be viewed as representative of the underlying reality. Experiment replication is the repetition of an experiment to double-check its results, and multiple replications of an experiment increase the confidence in its results. Software engineering has tried its hand at the identical (exact) replication of experiments in the way of the natural sciences (physics, chemistry, etc.). After numerous attempts over the years, apart from experiments replicated by the same researchers at the same site, no exact replications have yet been achieved. One key reason for this is the complexity of the software development setting, which prevents the many experimental conditions from being identically reproduced. This paper reports research into whether non-exact replications can be of any use. We propose a process aimed at researchers running non-exact replications; researchers enacting this process will be able to identify new variables that may be affecting experiment results. The process consists of four phases: replication definition and planning, replication operation and analysis, replication interpretation, and analysis of the replication's contribution. To test the effectiveness of the proposed process, we conducted a multiple-case study, revealing the variables learned from two different replications of an experiment.

3.
The role of replications in Empirical Software Engineering
Replications play a key role in Empirical Software Engineering by allowing the community to build knowledge about which results or observations hold under which conditions. Therefore, not only can a replication that produces results similar to those of the original experiment be viewed as successful; a replication that produces results different from those of the original experiment can also be viewed as successful. In this paper we identify two types of replications: exact replications, in which the procedures of an experiment are followed as closely as possible, and conceptual replications, in which the same research question is evaluated using a different experimental procedure. The focus of this paper is on exact replications. We further divide them into two sub-categories: dependent replications, where researchers attempt to keep all the conditions of the experiment the same or very similar, and independent replications, where researchers deliberately vary one or more major aspects of the conditions of the experiment. We then discuss the role played by each type of replication in terms of its goals, benefits, and limitations. Finally, we highlight the importance of producing adequate documentation for an experiment (original or replication) to allow for replication. A properly documented replication provides the details necessary to gain a sufficient understanding of the study being replicated without requiring the replicator to slavishly follow the given procedures.

4.
Context: Two recent mapping studies intended to verify the current state of replication of empirical studies in Software Engineering (SE) identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012) and a second group of studies concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012). Objective: In this article, our goal is to analyze and discuss the contents of the second set of studies about replications to increase our understanding of the current state of the work on replication in empirical software engineering research. Method: We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by the two previous mapping studies covering the period 1996–2012, complemented by manual and automatic search procedures that collected articles published in 2013. Results: We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant for our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012. Conclusions: Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.

5.
The verification and validation activity plays a fundamental role in improving software quality. Determining which techniques are the most effective for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible for the technique (InScope or OutScope, respectively). We used the materials, design and procedures of the original experiment, but in order to adapt the experiment to our context we: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP proved more effective than BT at detecting InScope faults, with the session/program and group variables having significant effects. BT proved more effective than EP at detecting OutScope faults, with no effect from the session/program and group variables. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor, which can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults; we believe these differences are due to a combination of the fatigue effect and a technique x program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies with certainty. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
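The stratified randomization step mentioned above balances a covariate (here, programming experience) across experimental groups. A minimal sketch of that kind of assignment, with hypothetical subjects and group names (none of this is taken from the paper's materials):

```python
import random
from collections import defaultdict

def stratified_assignment(subjects, experience, groups=("EP-first", "BT-first"), seed=42):
    """Stratified randomization: shuffle subjects within each experience
    stratum, then deal them round-robin so strata are balanced across groups."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in subjects:
        strata[experience[s]].append(s)
    assignment = {}
    for stratum in strata.values():
        rng.shuffle(stratum)
        for i, s in enumerate(stratum):
            assignment[s] = groups[i % len(groups)]
    return assignment

# Hypothetical use: six subjects with two experience levels
exp = {"s1": "high", "s2": "low", "s3": "high", "s4": "low", "s5": "high", "s6": "low"}
print(stratified_assignment(list(exp), exp))
```

Each group ends up with the same mix of high- and low-experience subjects, so experience cannot confound the technique comparison.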

6.
Context: Software patterns encapsulate expert knowledge for constructing successful solutions to recurring problems. Although a large collection of software patterns is available in the literature, empirical evidence on how well various patterns help in problem solving is limited and inconclusive. The context of these empirical findings is also not well understood, limiting the applicability and generalizability of the findings. Objective: To characterize the research design of empirical studies exploring software pattern application involving human participants. Method: We conducted a systematic mapping study to identify and analyze 30 primary empirical studies on software pattern application, including 24 original studies and 6 replications. We characterize the research design in terms of the questions researchers have explored and the context of empirical research efforts. We also classify the studies in terms of the measures used for evaluation and the threats to validity considered during study design and execution. Results: Use of software patterns in maintenance is the most commonly investigated theme, explored in 16 studies. Object-oriented design patterns are evaluated in 14 studies, while 4 studies evaluate architectural patterns. We identified 10 different constructs with 31 associated measures used to evaluate software patterns. Measures of 'efficiency' and 'usability' are commonly used to evaluate the problem-solving process, while measures of 'completeness', 'correctness' and 'quality' are commonly used to evaluate the final artifact. Overall, 'time to complete a task' is the most frequently used measure, employed in 15 studies to measure 'efficiency'. For qualitative measures, studies do not report approaches for minimizing biases 27% of the time. Nine studies do not discuss any threats to validity. Conclusion: Subtle differences in study design and execution can limit comparison of findings. Establishing baselines for participants' experience level, providing appropriate training, standardizing problem sets, and employing commonly used measures to evaluate performance can support replication and comparison of results across studies.

7.
Context: There are several empirical principles related to the distribution of faults in a software system (e.g. the Pareto principle) that are widely applied in practice and have been thoroughly studied in software engineering research, which provides evidence in their favor. However, knowledge of the underlying probability distribution of faults, which would enable a systematic approach to and refinement of these principles, is still quite limited. Objective: In this paper we study the probability distribution of faults detected during verification in four consecutive releases of a large-scale complex software system for telecommunication exchanges. This is the first such study analyzing a closed software system, replicating two previous studies of open source software. Method: We consider the Weibull, lognormal, double Pareto, Pareto, and Yule–Simon probability distributions, and investigate how well these distributions fit our empirical fault data using non-linear regression. Results: The results indicate that the double Pareto distribution is the most likely choice for the underlying probability distribution. This is not consistent with the previous studies of open source software. Conclusion: The study shows that understanding the probability distribution of faults in complex software systems is more complicated than previously thought. Comparison with previous studies shows that the fault distribution strongly depends on the environment, and only further replications would make it possible to build up a general theory for a given context.
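As a rough illustration of this kind of analysis (not the paper's actual code or data), one can fit candidate distributions to the empirical complementary CDF of fault counts with non-linear regression; here a lognormal versus Weibull comparison on made-up data:

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical faults-per-module counts; the paper used four telecom releases.
faults = np.array([1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 13, 17, 25, 40], float)

x = np.sort(faults)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)  # empirical complementary CDF

def lognorm_sf(x, sigma, mu):
    # survival function of a lognormal with log-mean mu, log-sd sigma
    return stats.lognorm.sf(x, s=sigma, scale=np.exp(mu))

def weibull_sf(x, shape, scale):
    return stats.weibull_min.sf(x, c=shape, scale=scale)

for name, sf, p0 in [("lognormal", lognorm_sf, [1.0, 1.0]),
                     ("Weibull", weibull_sf, [1.0, 5.0])]:
    params, _ = optimize.curve_fit(sf, x, ccdf, p0=p0, bounds=(1e-6, np.inf))
    rss = np.sum((ccdf - sf(x, *params)) ** 2)  # residual sum of squares
    print(f"{name}: params={params}, RSS={rss:.4f}")
```

Comparing the residual sums of squares (or an information criterion) across candidates is the usual way to pick the most plausible underlying distribution.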

8.
Background: Test-Driven Development (TDD) is claimed to have positive effects on external code quality and programmers' productivity. The main driver for these possible improvements is the tests enforced by the test-first nature of TDD, as previously investigated in a controlled experiment (i.e. the original study). Aim: Our goal is to examine the nature of the relationship between tests and external code quality as well as programmers' productivity in order to verify/refute the results of the original study. Method: We conducted a differentiated and partial replication of the original setting and the related analyses, with a focus on the role of tests. Specifically, while the original study compared test-first vs. test-last, our replication employed the test-first treatment only. The replication involved 30 students, working in pairs or as individuals, in the context of a graduate course, and resulted in 16 software artifacts developed. We performed linear regression to test the original study's hypotheses, and analyses of covariance to test the additional hypotheses imposed by the changes in the replication settings. Results: We found a significant correlation (Spearman coefficient = 0.66, with p-value = 0.004) between the number of tests and productivity, and a positive regression coefficient (p-value = 0.011). We found no significant correlation (Spearman coefficient = 0.41 with p-value = 0.11) between the number of tests and external code quality (regression coefficient p-value = 0.0513). In both cases we observed no statistically significant interaction caused by the subject units being individuals or pairs. Further, our results are consistent with the original study despite changes in the timing constraints for finishing the task and the enforced development processes. Conclusions: This replication study confirms the results of the original study concerning the relationship between the number of tests and external code quality as well as programmer productivity. Moreover, this replication allows us to identify additional context variables for which the original results still hold, namely the subject unit, timing constraint and isolation of the test-first process. Based on our findings, we recommend that practitioners implement as many tests as possible in order to achieve higher baselines for quality and productivity.
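The reported analysis pairs a Spearman rank correlation with a linear regression of the outcome on the number of tests. A minimal sketch with made-up data (the numbers below are illustrative, not the study's):

```python
import numpy as np
from scipy import stats

# Hypothetical per-artifact data: number of tests and a productivity score
tests = np.array([3, 7, 8, 12, 15, 18, 22, 25, 30, 34])
productivity = np.array([0.8, 1.1, 1.0, 1.6, 1.4, 2.0, 2.1, 2.4, 2.2, 2.9])

rho, p_rho = stats.spearmanr(tests, productivity)   # rank correlation
reg = stats.linregress(tests, productivity)          # slope = regression coefficient

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")
print(f"slope = {reg.slope:.3f} (p = {reg.pvalue:.4f}), intercept = {reg.intercept:.3f}")
```

Using both statistics, as the study does, guards against the correlation being an artifact of either monotone-but-nonlinear or linear-but-noisy relationships.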

9.
Context: Misuse case modeling is a well-known technique in the domain of capturing and specifying functional security requirements. It provides a mechanism for security analysts to consider and account for security requirements in the early stages of a development process instead of relying on generic defensive mechanisms that are added to software systems towards the latter stages of development. Objective: Many research contributions in the area of misuse case modeling have been devoted to extending the notation to increase its coverage of additional security-related semantics. However, there is a lack of research evaluating how misuse case models are perceived by their readers. A misread or misinterpreted misuse case model can have dire consequences downstream, leading to the development of an insecure system. Method: This paper presents an assessment of the design of the original misuse case modeling notation based on the Physics of Notations framework. A number of improvements to the notation were suggested. A survey and a controlled experiment were carried out to compare the cognitive effectiveness of the new notation against the original notation. Results: The survey had 55 participants, who mostly indicated that the new notation is more semantically transparent than the original notation. The results of the experiment show that subjects reading diagrams developed using the new notation performed their tasks an average of 6 minutes faster, against an overall average task time of approximately 14.5 minutes. The experimental tasks only required subjects to read diagrams, not to create them. Conclusion: The main finding of this paper is that the use of colors and icons has improved the readability of misuse case diagrams. Software engineering notations are usually black and white; it is expected that the readability of other software notations would improve if they utilized colors and icons.

10.
Context: Performance-related failures of Distributed and Real-Time Software Systems (DRTSs) can be very costly, e.g., the explosion of a nuclear reactor. In previous work we reported a stress testing methodology for detecting performance-related Real-Time (RT) faults in DRTSs based on the design UML model of a System Under Test (SUT). The stress methodology aims at increasing the chances of RT failures (violations of RT constraints). Objective: After stress testing a SUT and finding RT faults, an immediate question is how to fix (debug) those RT faults and prevent the same RT violations in the future and after deployment. If appropriate solutions to this challenge cannot be found, stress testing and its findings (detection of RT faults) will be of little or no use to the quality assurance goals of the development team. Method: To move towards systematically solving the performance-related problems causing RT faults, we develop a customized version of the standard Software Performance Engineering process and conduct an experiment on a DRTS. The process is applied iteratively to a SUT while the results from stress testing reveal scenarios in which RT constraints are still violated. Results: Applying the performance engineering paradigm in this context to a real DRTS enables systematic analysis of performance-related defects and their fixes. Conclusion: The contributions of this work are an initial approach to software performance engineering based on stress testing, and an analysis, based on experimentation, of the open issues that need to be addressed in order to improve the approach.

11.
Context: Research into software engineering teams focuses on human and social team factors. Social psychology deals with the study of team formation and has found that personality factors and group processes such as team climate are related to team effectiveness. However, there are only a handful of empirical studies dealing with personality and team climate and their relationship to software development team effectiveness. Objective: We present aggregate results of a twice-replicated quasi-experiment that evaluates the relationships between personality, team climate, product quality and satisfaction in software development teams. Method: Our experimental study measures team members' personalities based on the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) and their team climate preferences and perceptions across four factors (participative safety, support for innovation, team vision and task orientation). We aggregate the results of the three studies through a meta-analysis of correlations. The study was conducted with students. Results: The aggregation of results from the baseline experiment and two replications corroborates the following findings. There is a positive relationship between all four climate factors and satisfaction in software development teams. Teams whose members score highest for the agreeableness personality factor have the highest satisfaction levels. The results unveil a significant positive correlation between the extraversion personality factor and software product quality. High participative safety and task orientation climate perceptions are significantly related to quality. Conclusions: First, more efficient software development teams can be formed by heeding personality factors like agreeableness and extraversion. Second, the team climate generated in software development teams should be monitored for team member satisfaction. Finally, aspects like people feeling safe giving their opinions or encouraging team members to work hard at their job can have an impact on software quality. Software project managers can take advantage of these factors to promote developer satisfaction and improve the resulting product.
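The aggregation step is a meta-analysis of correlations, conventionally done with Fisher's z transform. A minimal, generic sketch of that computation (the correlations and sample sizes below are placeholders, not the study's data):

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect meta-analysis of correlations via Fisher's z:
    transform each r to z, weight by n-3 (inverse variance since
    var(z) = 1/(n-3)), average, and back-transform to r."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)           # Fisher z transform
    ws = ns - 3.0                 # inverse-variance weights
    z_bar = np.sum(ws * zs) / np.sum(ws)
    return np.tanh(z_bar)         # back-transform

# Placeholder correlations from a baseline experiment and two replications
print(pooled_correlation(rs=[0.45, 0.30, 0.52], ns=[24, 18, 20]))
```

Weighting by sample size means larger replications pull the pooled estimate harder, which is exactly why aggregating three studies gives a steadier answer than any one of them.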

12.
Context: Sharing expert knowledge is a key process in developing software products. Since expert knowledge is mostly tacit, the acquisition and sharing of tacit knowledge, along with the development of a transactive memory system (TMS), are significant factors in effective software teams. Objective: We seek to enhance our understanding of human factors in the software development process and provide support for the agile approach, particularly in its advocacy of social interaction, by answering two questions: How do software development teams acquire and share tacit knowledge? What roles do tacit knowledge and transactive memory play in successful team performance? Method: A theoretical model describing the process for acquiring and sharing tacit knowledge and developing a TMS through social interaction is presented, and a second, predictive model addresses the two research questions above. The elements of the predictive model and other demographic variables were incorporated into a larger online survey for software development teams, completed by 46 software SMEs comprising 181 individual team members. Results: Our results show that team tacit knowledge is acquired and shared directly through good quality social interactions and through the development of a TMS, with quality of social interaction playing a greater role than transactive memory. Both TMS and team tacit knowledge predict effectiveness, but not efficiency, in software teams. Conclusion: It is concluded that TMS and team tacit knowledge can differentiate between low- and high-performing teams in terms of effectiveness, where more effective teams have a competitive advantage in developing new products and bringing them to market. As face-to-face social interaction is key, collocated, functionally rich, domain expert teams are advocated rather than distributed teams, though arguably the team manager may be in a separate geographic location provided that there is frequent communication and effective use of issue tracking tools, as in agile teams.

13.
A Replicated Experiment to Assess Requirements Inspection Techniques
This paper presents the independent replication of a controlled experiment which compared three defect detection techniques (Ad Hoc, Checklist, and Defect-based Scenario) for software requirements inspections, and evaluated the benefits of collection meetings after individual reviews. The results of our replication were partially different from those of the original experiment. Unlike the original experiment, we did not find any empirical evidence of better performance when using scenarios. To explain these negative findings we provide a list of hypotheses. On the other hand, the replication confirmed one result of the original experiment: the defect detection rate is not improved by collection meetings. The independent replication was made possible by the existence of an experimental kit provided by the original investigators. We discuss the difficulties we encountered in applying the package to our environment, as a result of different cultures and skills. Using our results, experience and suggestions, other researchers will be able to improve the original experimental design before attempting further replications.

14.
Further Experiences with Scenarios and Checklists
Software inspection is one of the best methods of verifying software documents. It is a complex process with many possible variations, most of which have received little or no evaluation. This paper reports on the evaluation of one component of the inspection process, detection aids, specifically the Scenario and Checklist approaches. The evaluation is by subject-based experimentation and is currently one of three independent experiments on the same hypothesis. The paper describes the experimental process and the resulting analysis of the experimental data, and compares the results of this experiment with those of the other experiments. This replication is broadly supportive of the results from the original experiment, namely that the Scenario approach is superior to the Checklist approach, and that the meeting component of a software inspection is not an effective defect detection mechanism. This experiment also tentatively proposes additional relationships between general academic performance and individual inspection performance, and between meeting loss and group inspection performance.

15.
Context: When we empirically evaluate competing prediction systems in software engineering, we obtain conflicting results. Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation for interpreting results, with a particular focus on continuous prediction systems. Method: A new framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the likelihood of a result relative to the baseline technique of random 'predictions', that is, guessing, and (3) calculation of effect sizes. Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing. Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
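For illustration, a minimal sketch of a Standardised-Accuracy-style statistic, SA = 1 - MAR/MAR_P0, where MAR is the mean absolute residual of the predictions and MAR_P0 is the mean absolute residual of random guessing. Approximating guessing by permuting the actual values is an assumption of this sketch, not a detail taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mar(actual, predicted):
    """Mean absolute residual."""
    return np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float)))

def standardised_accuracy(actual, predicted, n_runs=5000):
    """SA = 1 - MAR / MAR_P0, with MAR_P0 estimated by 'predicting' each
    case with another randomly chosen case's actual value (guessing)."""
    actual = np.asarray(actual, float)
    mar_p0 = np.mean([mar(actual, rng.permutation(actual)) for _ in range(n_runs)])
    return 1.0 - mar(actual, predicted) / mar_p0

# Hypothetical effort values and model predictions
actual = [120, 80, 200, 150, 95]
predicted = [110, 90, 180, 160, 100]
print(f"SA = {standardised_accuracy(actual, predicted):.2f}")
```

SA near 0 means the prediction system is doing no better than guessing; values near 1 mean residuals are small relative to the guessing baseline, which is what makes it a meaningful yardstick across studies.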

16.
Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality attributes. These attributes typically conflict and must be considered simultaneously in order to reach a final design decision. AHP (Analytic Hierarchy Process), an important decision-making technique, has been leveraged to resolve such conflicts and can provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be so sensitive that the smallest change in intermediate priority weights can alter the final order of design alternatives. In this paper, we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the decision process. We apply our method to an example of a real-world distributed architecture presented in the literature. The results are promising: they make important decision consequences explicit in terms of key design tradeoffs and the architecture's capability to handle future quality attribute changes, exposing critical decisions that are otherwise too subtle to be detected in standard AHP results.
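A minimal sketch of the standard AHP step this abstract builds on: computing priority weights from a reciprocal pairwise-comparison matrix via the principal eigenvector, then perturbing one judgment to probe the ranking sensitivity the authors analyze. The matrix values are made up:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the normalized principal eigenvector (standard AHP)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)            # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical comparisons of three design alternatives on one attribute
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_priorities(A))   # roughly [0.65, 0.23, 0.12]

# Sensitivity probe: soften one judgment and check whether the ranking flips
A2 = A.copy(); A2[0, 1] = 2.0; A2[1, 0] = 0.5
print(ahp_priorities(A2))
```

Re-running the priority computation under small perturbations like this is the simplest way to see how fragile a final ranking is, which is the kind of sensitive point the paper's techniques make explicit.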

17.
This paper presents a replication of an empirical study regarding the impact of individual factors on the effectiveness of requirements inspections. Experimental replications are important for verifying results and investigating the generality of empirical studies. We utilized the lab package and procedures from the original study, with some changes and additions, to conduct the replication with 69 professional developers in three different companies in Turkey. In general, the results of the replication were consistent with those of the original study. The main result from the original study, supported in the replication, was that inspectors whose degree is in a field related to software engineering are less effective during a requirements inspection than inspectors whose degrees are in other fields. In addition, we found that Company, Experience, and English Proficiency impacted inspection effectiveness.

18.
A Software Product Line is a set of software systems in a domain which share some common features but also exhibit significant variability. A feature model is a variability modeling artifact which represents the differences among software products in terms of the variability relationships among their features. Given a feature model along with a reference model developed in the domain engineering lifecycle, a concrete product of the family is derived by selecting features in the feature model (referred to as the configuration process) and by instantiating the reference model. However, feature model configuration can be a cumbersome task because: (1) feature models may consist of a large number of features, which are hard to comprehend and maintain; and (2) many factors, including technical limitations, implementation costs, and stakeholders' requirements and expectations, must be considered in the configuration process. Recognizing these issues, a significant amount of research effort has been dedicated to different aspects of feature model configuration, such as automating the configuration process. Several approaches have been proposed to alleviate the feature model configuration challenges by applying visualization and interaction techniques. However, there has been limited empirical insight into the impact of visualization and interaction techniques on the feature model configuration process. In this paper, we present a set of visualization and interaction interventions for representing and configuring feature models, which are then empirically validated to measure their impact. An empirical study was conducted following the principles of controlled experiments in software engineering and applying the well-known software quality standard ISO 9126 to operationalize the variables investigated in the experiment. The results of the empirical study revealed that the employed visualization and interaction interventions significantly reduced the time needed to comprehend and change feature model configurations. Additionally, according to the results, the proposed interventions are easy to use and easy to learn for the participants.

19.
Many small software organizations have recognized the need to improve their software products. Evaluating the software product alone seems insufficient, since its quality is largely dependent on the process used to create it. Thus, small organizations are asking for evaluations of their software processes as well as their products. The ISO/IEC 14598-5 standard is already used as a methodological basis for evaluating software products. This article explores how it can be combined with the CMMI to produce a methodology that can be tailored for process evaluation, helping small organizations improve their software processes.

20.
Context: Requirements optimization has been widely studied in the Search Based Software Engineering (SBSE) literature. However, previous approaches have not handled requirement interactions, such as the dependencies that may exist between requirements: and, or, precedence, and cost- and value-based constraints. Objective: To introduce a multi-objective search-based requirements selection technique using chromosome repair, and to evaluate it on both synthetic and real-world data sets in order to assess its effectiveness and scalability. The paper extends and improves upon our previous conference paper on requirements interaction management. Method: The popular multi-objective evolutionary algorithm NSGA-II was used to produce baseline data for each data set in order to determine how many solutions on the Pareto front fail to meet five different requirement interaction constraints. The results for this baseline are compared to those obtained using the previously studied archive-based approach and the repair-based approach introduced in this paper. Results: The repair-based approach was found to produce more solutions on the Pareto front and better convergence and diversity of results than the previously studied NSGA-II and archive-based NSGA-II approaches, based on the Kruskal–Wallis test, in most cases. The repair-based approach was also found to scale almost as well as the previous approach. Conclusion: There is evidence to indicate that the repair-based algorithm introduced in this paper is a suitable technique for extending previous work on requirements optimization to handle the requirement interaction constraints arising from dependencies: and, or, precedence, and cost- and value-based constraints.
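A minimal sketch of a chromosome-repair operator of the general kind the abstract describes, here handling only "requires" (precedence/and) dependencies on a binary requirement-selection chromosome. The data structure and the greedy force-select policy are assumptions of this sketch, not the paper's algorithm:

```python
def repair(chromosome, requires):
    """Force a binary requirements-selection chromosome to satisfy
    'requires' dependencies: if requirement i is selected, transitively
    select every prerequisite j listed in requires[i]."""
    fixed = list(chromosome)
    changed = True
    while changed:                      # iterate until a fixed point
        changed = False
        for i, selected in enumerate(fixed):
            if selected:
                for j in requires.get(i, ()):
                    if not fixed[j]:
                        fixed[j] = 1
                        changed = True
    return fixed

# Hypothetical instance: requirement 3 requires 1, which requires 0
requires = {3: [1], 1: [0]}
print(repair([0, 0, 1, 1], requires))  # -> [1, 1, 1, 1]
```

Repairing offspring like this keeps every individual feasible, which is why the Pareto front it yields contains no constraint-violating solutions, unlike the plain NSGA-II baseline.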
