Similar Literature
 20 similar documents retrieved (search time: 31 ms)
1.
Context: Software documents are core artifacts produced and consumed in documentation activities across the software lifecycle. Knowledge-based approaches have been used extensively in software development for decades; however, the software engineering community lacks a comprehensive understanding of how these approaches are used in software documentation, especially documentation of software architecture design. Objective: The objective of this work is to explore how knowledge-based approaches are employed in software documentation, their influence on the quality of software documents, and the costs and benefits of using these approaches. Method: We use a systematic literature review to identify the primary studies on knowledge-based approaches in software documentation, following a pre-defined review protocol. Results: Sixty studies were finally selected, from which we identified twelve quality attributes of software documents, four cost categories, and nine benefit categories of using knowledge-based approaches in software documentation. Architecture understanding is the top benefit of using knowledge-based approaches in software documentation, while the cost of retrieving information from documents is the major concern. Conclusions: The findings of this review suggest several future research directions that are critical and promising but underexplored in current research and practice: (1) use knowledge-based approaches to improve the quality attributes of software documents that currently receive less attention, especially credibility, conciseness, and unambiguity; (2) apply knowledge-based approaches to the knowledge content of software documents, which receives little attention in current applications, to further improve the practice of software documentation; (3) focus more on the application of software documents through knowledge-based approaches (knowledge reuse, retrieval, reasoning, and sharing) in order to make the most of software documents; and (4) evaluate the costs and benefits of using knowledge-based approaches in software documentation both qualitatively and quantitatively.

2.

Context

In software development, testing is an important mechanism both to identify defects and to ensure that completed products work as specified. This is a common practice in single-system development, and it continues to hold in Software Product Lines (SPL). Even though extensive research has been done in the SPL testing field, it is necessary to assess the current state of research and practice in order to provide practitioners with evidence that enables its further development.

Objective

This paper focuses on testing in SPL and has the following goals: investigate state-of-the-art testing practices, synthesize available evidence, and identify gaps between required techniques and the approaches available in the literature.

Method

A systematic mapping study was conducted with a set of nine research questions, in which 120 studies, dated from 1993 to 2009, were evaluated.

Results

Although several aspects of testing have been covered by single-system development approaches, many cannot be directly applied in the SPL context due to its specific issues. In addition, particular aspects of SPL are not covered by the existing SPL approaches, and when they are covered, the literature offers only brief overviews. This scenario indicates that additional investigation, both empirical and practical, should be performed.

Conclusion

The results can help in understanding the needs of SPL testing by identifying points that still require additional investigation, since important aspects of software product lines have not yet been addressed.

3.
Both software organisations and the academic community are aware that the requirements phase of software development is in need of further support. We address this problem by creating a specialised Requirements Capability Maturity Model (R-CMM). The model focuses on the requirements engineering process as defined within the established Software Engineering Institute's (SEI's) software process improvement framework. Our empirical work with software practitioners is a primary motivation for creating this requirements engineering process improvement model. Although all organisations in our study were involved in software process improvement (SPI), they all showed a lack of control over many requirements engineering activities. This paper describes how the requirements engineering (RE) process is decomposed and prioritised in accordance with maturity goals set by the SEI's Software Capability Maturity Model (SW-CMM). Our R-CMM builds on the SEI's framework by identifying and defining recommended RE sub-processes that meet maturity goals. This new focus will help practitioners to define their RE process with a view to setting realistic goals for improvement. Sarah Beecham is a research fellow in the Department of Maths and Computing at The Open University in the UK. She is currently working on the EPSRC-funded CRESTES project, looking into modelling resource estimation for long-lived software. She has recently completed her PhD for a program of work entitled "A Requirements-based Software Process Maturity Model". Her current research interests are in estimation for software evolution and maintenance and in the general area of software process improvement, with a particular focus on empirical methods in software engineering and requirements engineering. Tracy Hall leads the Systems & Software Research Group in the Department of Computer Science at the University of Hertfordshire. She specialises in the empirical investigation of technical and non-technical issues within software engineering. During the past ten years Tracy has successfully collaborated with many companies on a variety of research projects. She is very active in the empirical software engineering community and is regularly invited to talk about empirical methods both in the UK and abroad. Tracy is an accomplished researcher, having published over twenty high-quality journal papers. Austen Rainer is a senior lecturer at the University of Hertfordshire. He studied for his PhD at Bournemouth University, in conjunction with IBM Hursley Park. His current research interests include open source software development, longitudinal case study research, and the credibility of empirical evidence for researchers and software practitioners.

4.
Context: Service-Orientation (SO) is a rapidly emerging paradigm for the design and development of adaptive and dynamic software systems. Software Product Line Engineering (SPLE) has also gained attention as a promising and successful software reuse paradigm over the last decade and has proven to provide effective solutions for managing the growing complexity of software systems. Objective: This study aims at characterizing and identifying the existing research on employing and leveraging SO and SPLE. Method: We conducted a systematic mapping study to identify and analyze the related literature. We identified 81 primary studies, dated from 2000 to 2011, and classified them with respect to research focus, type of research, and contribution. Result: The mapping synthesizes the available evidence about the synergy points and integration of SO and SPLE. The analysis shows that the majority of studies focus on service variability modeling and adaptive systems by employing SPLE principles and approaches. In particular, SPLE approaches, especially feature-oriented approaches for variability modeling, have been applied to the design and development of service-oriented systems, while SO is employed in software product line contexts to realize product lines that reconcile flexibility, scalability, and dynamism in product derivation, thereby creating dynamic software product lines. Conclusion: Our study summarizes and characterizes the SO and SPLE topics researchers have investigated over the past decade and identifies promising research directions arising from the synergy generated by integrating methods and techniques from these two areas.

5.
Context: Research about motivation in software engineering has provided important insights into characterizing factors and outcomes related to motivation. However, the complex relationships among these factors, including the moderating and mediating effects of organisational and individual characteristics, still require deeper explanatory investigation. Objective: Our general goal is to build explanatory theories of motivation in different software organisations and to integrate these local theories towards a comprehensive understanding of the role of motivation in the effectiveness of individuals and the teams in which they work. In this article, we describe the integrative synthesis of the results of two case studies performed with software organisations in different business contexts. Method: We performed two case studies using a multiple-case, replication design, focusing on the software engineers as the unit of analysis. For 13 months, we conducted semi-structured interviews, diary studies, and document analyses, and analysed the collected data using grounded theory procedures. The results of the two cases were synthesized using a meta-ethnography-supported process. Results: We built translations of the concepts and propositions from the two studies into one another. We then used the translations to build a central story of motivation that synthesizes the individual stories. This synthesis is contextualized by the differences in organisational and individual characteristics. Conclusion: The differences in organisational contexts and in the characteristics of the software engineers in each study provided rich explanations for contrasts in perceptions and feelings about motivation in the two organisations. The theory that emerged from the synthesis, supported by these explanations, provides a deeper understanding of the interplay between motivators and outcomes, and the needs and personal goals of the software engineers. This theory also characterises the role of team cohesion in motivation, advancing previous models of motivation in software engineering.

6.
7.
Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and in the computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze the relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that the challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider the special challenges posed by scientific software, such as oracle problems, when developing testing techniques.

8.
Context: Software startups are newly created companies with no operating history that aim to produce cutting-edge technologies quickly. These companies develop software under highly uncertain conditions, tackling fast-growing markets with a severe lack of resources. Therefore, software startups present a unique combination of characteristics which pose several challenges to software development activities. Objective: This study aims to structure and analyze the literature on software development in startup companies, thereby determining the potential for technology transfer and identifying software development work practices reported by practitioners and researchers. Method: We conducted a systematic mapping study, developing a classification schema, ranking the selected primary studies according to their rigor and relevance, and analyzing the reported software development work practices in startups. Results: A total of 43 primary studies were identified and mapped, synthesizing the available evidence on software development in startups. Only 16 studies are entirely dedicated to software development in startups, of which 10 result in a weak contribution (advice and implications (6); lessons learned (3); tool (1)). Nineteen studies focus on managerial and organizational factors. Moreover, only 9 studies exhibit high scientific rigor and relevance. From the reviewed primary studies, 213 software engineering work practices were extracted, categorized, and analyzed. Conclusion: This mapping study provides the first systematic exploration of the state of the art of software startup research. The existing body of knowledge is limited to a few high-quality studies. Furthermore, the results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.

9.
Target setting in software quality function deployment (SQFD) is very important since it is directly related to the development of high-quality products with high customer satisfaction. However, target setting is usually done subjectively in practice, which is not scientific. Two quantitative approaches for setting target values, benchmarking and primitive linear regression, have been developed and applied in the past to overcome this problem (Akao and Yoji, 1990). But these approaches cannot be used to assess the impact of unachieved targets on the satisfaction of customer requirements. In addition, both of them are based on linear regression and are not very practical in many applications. In this paper, we present an innovative quantitative method of setting technical targets in SQFD that enables analysis of the impact of unachieved target values on customer satisfaction. It is based on assessing the impact of technical attributes on the satisfaction of customer requirements. In addition, both linear and nonlinear regression techniques are utilized in our method, which improves on the existing quantitative methods based only on linear regression. Frank Liu is currently an associate professor and director of the McDonnell Douglas Foundation software engineering laboratory at the University of Missouri-Rolla. He has been working on requirements engineering, software quality management, and knowledge-based software engineering since 1992. He has published about 50 papers in peer-reviewed journals and conferences in the above areas and several other software engineering application areas. He has participated, as a PI or Co-PI, in research projects with total funding of more than four million dollars, sponsored by the National Science Foundation, Sandia National Laboratory, the U.S. Air Force, the University of Missouri Research Board, and Toshiba Corporation. He has served as a program committee member for many conferences and was a program committee vice chair for the 2000 International Conference on Software Engineering and Knowledge Engineering. Kunio Noguchi is a senior quality expert in the software engineering center at Toshiba Corporation. He has published several papers in the area of quality management systems. Anuj Dhungana was an M.S. graduate student in the computer science department at Texas Tech University when he performed this research. V.V.N.S.N. Srirangam A. was an M.S. graduate student in the computer science department at Texas Tech University when he performed this research. Praveen Inuganti was an M.S. graduate student in the computer science department at the University of Missouri-Rolla when he performed this research.
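A rough sketch of the kind of regression-based target analysis described above: fit both a linear and a nonlinear (quadratic) model relating a technical attribute to customer satisfaction, then estimate the satisfaction impact of an unachieved target. The data, attribute, and target values below are hypothetical and do not come from the paper.

```python
# Hedged illustration (not the authors' actual method): relate a technical
# attribute's achieved level to customer satisfaction with a linear and a
# quadratic fit, then estimate the satisfaction loss from missing a target.
import numpy as np

# Hypothetical observations: attribute level vs. satisfaction (1-5 survey scale).
attr_level = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
satisfaction = np.array([1.2, 2.1, 3.4, 4.0, 4.2])

linear = np.poly1d(np.polyfit(attr_level, satisfaction, deg=1))
quadratic = np.poly1d(np.polyfit(attr_level, satisfaction, deg=2))

target, achieved = 4.5, 3.5  # planned target vs. level actually reached
for name, model in (("linear", linear), ("quadratic", quadratic)):
    impact = model(target) - model(achieved)
    print(f"{name}: estimated satisfaction loss from missing target = {impact:.2f}")
```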

10.
Context: Software quality issues are commonly reported when offshoring software development. Value-based software engineering addresses this by ensuring that key stakeholders have a common understanding of quality. Objective: This work seeks to understand the levels of alignment between key stakeholder groups within a company on the priority given to aspects of software quality developed as part of an offshoring relationship. Furthermore, the study aims to identify factors impacting the levels of alignment identified. Method: Three case studies were conducted, with representatives of key stakeholder groups ranking aspects of software quality in a hierarchical cumulative exercise. The results are analysed using Spearman rank correlation coefficients and inertia. The results were then discussed with the groups to gain a deeper understanding of the issues impacting alignment. Results: Various levels of alignment were found between the groups. The reasons for misalignment include cultural factors, control of quality in the development process, short-term versus long-term orientations, understanding of the cost-benefits of quality improvements, communication, and coordination. Conclusions: The factors that negatively affect alignment can vary greatly between cases. The work emphasises the need for greater support to align company-internal success-critical stakeholder groups in their understanding of quality when offshoring software development.
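A minimal sketch of the alignment analysis the abstract describes: two stakeholder groups rank the same quality aspects, and the Spearman rank correlation indicates how well their priorities align. The aspect names, rankings, and group labels are hypothetical.

```python
# Hypothetical data: ranks (1 = highest priority) given to the same quality
# aspects by two stakeholder groups in an offshoring relationship.
from scipy.stats import spearmanr

quality_aspects = ["reliability", "maintainability", "usability",
                   "performance", "security", "portability"]
onshore_rank = [1, 2, 3, 4, 5, 6]    # ranks from the onshore client group
offshore_rank = [2, 5, 3, 1, 4, 6]   # ranks from the offshore supplier group

for aspect, r1, r2 in zip(quality_aspects, onshore_rank, offshore_rank):
    print(f"{aspect:15s} onshore={r1}  offshore={r2}")

rho, p_value = spearmanr(onshore_rank, offshore_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho near +1 suggests aligned priorities; values near 0 or negative suggest
# misalignment worth discussing between the groups.
```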

11.
Context: Software engineering organizations routinely define and implement processes to support, guide, and control project execution. An assumption underlying this process-centric approach to business improvement is that the quality of the process will influence the quality, cost, and time-to-release of the software produced. A critical question thus arises of what constitutes quality for software engineering processes. Objective: To identify the criteria used by experienced practitioners to judge the quality of software engineering processes, and to understand how knowledge of these criteria and their relationships may be useful for those undertaking software process improvement activities. Method: Interviews were conducted with 17 experienced software engineering practitioners from a range of geographies, roles, and industry sectors. Published reports from 30 software process improvement case studies were selected from multiple peer-reviewed software journals. A qualitative Grounded Theory-based methodology was employed to systematically analyze the collected data and synthesize a model of quality for software engineering processes. Results: The synthesized model suggests that practitioners perceive the overall quality of a software engineering process based on four quality attributes: suitability, usability, manageability, and evolvability. Furthermore, these judgements are influenced by key properties related to the semantic content, structure, representation, and enactment of that process. The model indicates that these attributes correspond to particular organizational perspectives and that these differing views may explain role-based conflicts in the judgement of process quality. Conclusion: Consensus exists amongst practitioners about which characteristics of software engineering process quality most influence project outcomes. The model presented provides a terminological framework that can facilitate precise discussion of software engineering process issues, and a set of criteria that can support activities for software process definition, evaluation, and improvement. The potential exists for further development of this model to facilitate optimization of process properties to match organizational needs.

12.
Context: Eye-tracking is a means to collect evidence regarding participants' cognitive processes. Eye-trackers monitor participants' visual attention by collecting eye-movement data, which are useful for gaining insights into participants' cognitive processes during reasoning tasks. Objective: The Evidence-Based Software Engineering (EBSE) paradigm was proposed in 2004 and, since then, has been used to provide detailed insights into different topics in software engineering research and practice. Systematic Literature Reviews (SLRs) are also useful in the context of EBSE by bringing together all existing evidence of research and results about a particular topic. This SLR evaluates the current state of the art of using eye-trackers in software engineering and provides evidence on the uses and contributions of eye-trackers to empirical studies in software engineering. Method: We perform an SLR covering eye-tracking studies in software engineering published from 1990 up to the end of 2014. To search all recognised resources, instead of applying a manual search, we perform an extensive automated search using Engineering Village. We identify 36 relevant publications, including nine journal papers, two workshop papers, and 25 conference papers. Results: The software engineering community started using eye-trackers in the 1990s and they have become increasingly recognised as useful tools for conducting empirical studies since 2006. We observe that researchers use eye-trackers to study model comprehension, code comprehension, debugging, collaborative interaction, and traceability. Moreover, we find that studies use different metrics based on eye-movement data to obtain quantitative measures. We also report the limitations of current eye-tracking technology, which threaten the validity of previous studies, along with suggestions to mitigate these limitations. Conclusion: Notwithstanding these limitations and threats, we conclude that the advent of new eye-trackers makes the use of these tools easier and less obtrusive and that the software engineering community could benefit more from this technology.

13.
Context: The reuse of software has been a research topic for more than 50 years. Throughout that time, many proposed approaches, tools, and techniques have reached maturity. However, reuse is not yet a widespread practice and some issues need to be further investigated. The latest study on software reuse trends dates back to 2005, and we think it should be updated. Objective: To identify the current trends in software reuse research. Method: A tertiary study based on systematic secondary studies published up to July 2018. Results: We identified 4,423 works related to software reuse, from which 3,102 were filtered by selection criteria and quality assessment to produce a final set of 56 relevant studies. We identified 30 current research topics and 127 proposals for future work, grouped into three broad categories: Software Product Lines, Other reuse approaches, and General reuse topics. Conclusions: Frequently reported topics include Requirements and Testing in the category of lifecycle phases for Software Product Lines, and Systematic reuse for decision making in the category of General reuse. The most mentioned future work proposals were Requirements, and Evolution and Variability management for Software Product Lines, and Systematic reuse for decision making. The identified trends, based on future work proposals, demonstrate that software reuse is still an interesting area for research. Researchers can use these trends as a guide to lead their future projects.

14.
Context: Technical debt is a software engineering metaphor referring to the eventual financial consequences of trade-offs, made throughout all development phases, between shrinking product time to market and poorly specifying or implementing a software product. Given its interdisciplinary nature, spanning software engineering and economics, research on managing technical debt should be balanced between software engineering and economic theories. Objective: The aim of this study is to analyze research efforts on technical debt by focusing on their financial aspect. Specifically, the analysis is carried out with respect to: (a) how financial aspects are defined in the context of technical debt and (b) how they relate to the underlying software engineering concepts. Method: In order to achieve the abovementioned goals, we employed a standard method for SLRs and applied it to studies retrieved from seven general-scope digital libraries. In total, we selected 69 studies relevant to the financial aspect of technical debt. Results: The most common financial terms used in technical debt research are principal and interest, whereas the financial approaches most frequently applied for managing technical debt are real options, portfolio management, cost/benefit analysis, and value-based analysis. However, the application of such approaches lacks consistency, i.e., the same approach is applied differently in different studies, and in some cases lacks a clear mapping between financial and software engineering concepts. Conclusion: The results are expected to prove beneficial for the communication between technical managers and project managers, in the sense that they provide a common vocabulary and help in setting up quality-related goals during software development. To achieve this, we introduce: (a) a glossary of terms and (b) a classification scheme for the financial approaches used for managing technical debt. Based on these, we have been able to underline interesting implications for researchers and practitioners.
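As a hedged illustration of the principal/interest vocabulary and the cost/benefit style of analysis mentioned in this abstract (not a method from any of the reviewed studies), a minimal break-even calculation might look as follows; all figures are hypothetical.

```python
# Principal = one-off effort to remove the debt; interest = recurring extra
# effort paid each release while the debt remains. Hypothetical numbers.
principal = 40.0              # person-hours to refactor the debt-carrying module
interest_per_release = 6.0    # extra person-hours per release caused by the debt
releases_per_year = 8

# Simple cost/benefit view: repaying the principal pays off once the
# cumulative interest exceeds the refactoring cost.
break_even_releases = principal / interest_per_release
print(f"Repayment breaks even after {break_even_releases:.1f} releases "
      f"(about {break_even_releases / releases_per_year:.1f} years).")
```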

15.
An empirical study of predicting software faults with case-based reasoning
The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system are evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression. Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the Director of the Empirical Software Engineering Laboratory. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, and statistical modeling. He has published more than 200 refereed papers in these areas. He has been a principal investigator and project leader in a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the Association for Computing Machinery, the IEEE Computer Society, and the IEEE Reliability Society. He served as the general chair of the 1999 International Symposium on Software Reliability Engineering (ISSRE'99) and the general chair of the 2001 International Conference on Engineering of Computer Based Systems. He has also served on technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems. Naeem Seliya received the M.S. degree in Computer Science from Florida Atlantic University, Boca Raton, FL, USA, in 2001. He is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include software engineering, computational intelligence, data mining, software measurement, software reliability and quality engineering, software architecture, computer data security, and network intrusion detection. He is a student member of the IEEE Computer Society and the Association for Computing Machinery.
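A minimal sketch, not the authors' implementation, of the CBR configuration the abstract identifies as best performing: a Mahalanobis-distance similarity function with an inverse-distance-weighted solution over the k nearest historical modules. The module metrics and fault counts are hypothetical.

```python
import numpy as np

def predict_faults(case_metrics, case_faults, new_module, k=3, eps=1e-6):
    """Estimate faults for new_module from its k most similar historical cases."""
    cov_inv = np.linalg.pinv(np.cov(case_metrics, rowvar=False))
    diffs = case_metrics - new_module
    # Mahalanobis distance of the new module to every stored case.
    dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)   # inverse-distance weighting
    return float(np.dot(weights, case_faults[nearest]) / weights.sum())

# Historical cases: rows = modules, columns = metrics (e.g., LOC, complexity, churn).
case_metrics = np.array([[120, 10, 3], [300, 25, 8], [80, 5, 1],
                         [500, 40, 15], [200, 18, 6]], dtype=float)
case_faults = np.array([2, 7, 0, 14, 5], dtype=float)

new_module = np.array([250, 20, 7], dtype=float)
print(f"Expected faults: {predict_faults(case_metrics, case_faults, new_module):.1f}")
```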

16.
Context: Software networks are directed graphs of static dependencies between source code entities (functions, classes, modules, etc.). These structures can be used to investigate the complexity and evolution of large-scale software systems and to compute metrics associated with software design. The extraction of software networks is also the first step in reverse engineering activities. Objective: The aim of this paper is to present SNEIPL, a novel approach to the extraction of software networks that is based on a language-independent, enriched concrete syntax tree representation of the source code. Method: The applicability of the approach is demonstrated by the extraction of software networks representing real-world, medium to large software systems written in different languages belonging to different programming paradigms. To investigate the completeness and correctness of the approach, class collaboration networks (CCNs) extracted from real-world Java software systems are compared to CCNs obtained by other tools, namely Dependency Finder, which extracts entity-level dependencies from Java bytecode, and Doxygen, which implements a language-independent fuzzy-parsing approach to dependency extraction. We also compared SNEIPL to the fact extractors present in language-independent reverse engineering tools. Results: Our approach to dependency extraction is validated on six real-world, medium to large-scale software systems written in Java, Modula-2, and Delphi. The results of the comparative analysis involving ten Java software systems show that the networks formed by SNEIPL are highly similar to those formed by Dependency Finder and more precise than the comparable networks formed with the help of Doxygen. Regarding the comparison with language-independent reverse engineering tools, SNEIPL provides both language-independent extraction and representation of fact bases. Conclusion: SNEIPL is a language-independent extractor of software networks and consequently enables language-independent network-based analysis of software systems, computation of software design metrics, and extraction of fact bases for reverse engineering activities.
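A small illustrative sketch (not part of SNEIPL) of how an extracted software network can be held in a directed graph and simple design metrics read off it; the class names and dependencies are hypothetical.

```python
import networkx as nx

# Class collaboration network: an edge A -> B means "class A depends on class B".
ccn = nx.DiGraph()
ccn.add_edges_from([
    ("OrderController", "OrderService"),
    ("OrderService", "OrderRepository"),
    ("OrderService", "PaymentGateway"),
    ("InvoiceService", "OrderRepository"),
])

for cls in ccn.nodes:
    fan_out = ccn.out_degree(cls)   # classes this class depends on
    fan_in = ccn.in_degree(cls)     # classes that depend on this class
    print(f"{cls}: fan-in={fan_in}, fan-out={fan_out}")
```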

17.
Context: Software product line engineering (SPLE) is a growing area showing promising results in research and practice. In order to foster its further development and acceptance in industry, it is necessary to assess the quality of the research so that proper evidence for adoption and validity is ensured. This holds in particular for requirements engineering (RE) within SPLE, where a growing number of approaches have been proposed. Objective: This paper focuses on RE within SPLE and has the following goals: assess research quality, synthesize evidence to suggest important implications for practice, and identify research trends, open problems, and areas for improvement. Method: A systematic literature review was conducted with three research questions, assessing 49 studies dated from 1990 to 2009. Results: The evidence for adoption of the methods is not mature, given the primary focus on toy examples. The proposed approaches still have serious limitations in terms of the rigor, credibility, and validity of their findings. Additionally, most approaches still lack tool support addressing the heterogeneous and mostly textual nature of requirements formats, and address only the proactive SPLE adoption strategy. Conclusions: Further empirical studies should be performed with sufficient rigor to enhance the body of evidence in RE within SPLE. In this context, there is a clear need for studies comparing alternative methods. In order to address scalability and popularization of the approaches, future research should be invested in tool support and in addressing combined SPLE adoption strategies.

18.
Context: The Unified Modeling Language (UML) has become the de facto standard for software modeling. UML models are often used to visualize, understand, and communicate the structure and behavior of a system. UML activity diagrams (ADs) are often used to elaborate and visualize individual use cases. Due to their higher level of abstraction and process-oriented perspective, UML ADs are also highly suitable for model-based test generation. In the last two decades, different researchers have used UML ADs for test generation. Despite the growing use of UML ADs for model-based testing, there are currently no comprehensive and unbiased studies on the topic. Objective: To present a comprehensive and unbiased overview of the state of the art on model-based testing using UML ADs. Method: We review and structure the current body of knowledge on model-based testing using UML ADs by performing a systematic mapping study using well-known guidelines. We pose nine research questions, outline our selection criteria, and develop a classification scheme. Results: The results comprise 41 primary studies analyzed against nine research questions. We also highlight the current trends and research gaps in model-based testing using UML ADs and discuss some shortcomings for researchers and practitioners working in this area. The results show that the existing approaches to model-based testing using UML ADs tend to rely on intermediate formats and formalisms for model verification and test generation, employ a multitude of graph-based coverage criteria, and use graph search algorithms. Conclusion: We present a comprehensive overview of the existing approaches to model-based testing using UML ADs. We conclude that (1) UML ADs are not being used for non-functional testing, (2) only a few approaches have been validated against realistic, industrial case studies, (3) most approaches target very restricted application domains, and (4) there is currently a clear lack of holistic approaches for model-based testing using UML ADs.
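A hedged sketch of the graph-search idea mentioned in the results above: an activity diagram reduced to a directed graph, with every path from the initial to the final node enumerated as an abstract test path (a simple path-coverage criterion). The diagram is hypothetical, and cycles and guards are ignored for brevity.

```python
# Hypothetical activity diagram as an adjacency list; nodes are activities.
activity_graph = {
    "initial": ["validate input"],
    "validate input": ["process order", "reject order"],
    "process order": ["send confirmation"],
    "reject order": ["final"],
    "send confirmation": ["final"],
    "final": [],
}

def test_paths(graph, node="initial", path=None):
    """Depth-first enumeration of initial-to-final paths as abstract test cases."""
    path = (path or []) + [node]
    if not graph[node]:          # reached the final node
        yield path
        return
    for nxt in graph[node]:
        yield from test_paths(graph, nxt, path)

for p in test_paths(activity_graph):
    print(" -> ".join(p))
```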

19.
Context: Gamification seeks to improve users' engagement, motivation, and performance when carrying out a certain task by incorporating game mechanics and elements, thus making that task more attractive. Much research has studied the application of gamification in software engineering to increase the engagement and results of developers. Objective: The objective of this paper is to carry out a systematic mapping of the field of gamification in software engineering in an attempt to characterize the state of the art, identifying gaps and opportunities for further research. Method: We carried out a systematic mapping with a view to finding the primary studies in the existing literature, which were later classified and analyzed according to four criteria: the software process area addressed, the gamification elements used, the type of research method followed, and the type of forum in which they were published. A subjective evaluation of the studies was also carried out to evaluate them in terms of methodology, empirical evidence, integration with the organization, and replicability. Results: As a result of the systematic mapping we found 29 primary studies, published between January 2011 and June 2014. Most of them focus on software development and, to a lesser extent, requirements, project management, and other support areas. In the main, they consider very simple gamification mechanics such as points and badges, and few provide empirical evidence of the impact of gamification. Conclusions: Existing research in the field is quite preliminary, and more research effort analyzing the impact of gamification in SE is needed. Future work should look at other game mechanics in addition to the basic ones and should tackle software process areas that have not been fully studied, such as requirements, project management, maintenance, or testing. Most studies share a lack of methodological support that would make their proposals replicable in other settings. The integration of gamification with an organization's existing tools is also an important challenge that needs to be taken up in this field.

20.
Context: The software industry has widely adopted Agile software development methods. The Agile literature proposes a few key metrics, but little is known about actual metrics use in Agile teams. Objective: The objective of this paper is to increase knowledge of the reasons for and effects of using metrics in industrial Agile development. We focus on the metrics that Agile teams use themselves, rather than the ones used from outside by software engineering researchers. In addition, we analyse the influence of the used metrics. Method: This paper presents a systematic literature review (SLR) on using metrics in industrial Agile software development. We identified 774 papers, which we reduced to 30 primary studies through our paper selection process. Results: The results indicate that the reasons for and the effects of using metrics are focused on the following areas: sprint planning, progress tracking, software quality measurement, fixing software process problems, and motivating people. Additionally, we show that although Agile teams use many metrics suggested in the Agile literature, they also use many custom metrics. Finally, the most influential metrics in the primary studies are Velocity and Effort estimate. Conclusion: The use of metrics in Agile software development is similar to that in traditional software development: projects and sprints need to be planned and tracked, quality needs to be measured, and problems in the process need to be identified and fixed. Future work should focus on metrics that had high importance but low prevalence in our study, as they can offer the largest impact to the software industry.
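An illustrative sketch (with hypothetical numbers) of how the two most influential metrics named above, Velocity and Effort estimate, are commonly combined for sprint planning: average completed story points per sprint give the velocity, which is then used to forecast how many sprints the estimated remaining backlog will take.

```python
import math

completed_points = [21, 18, 24, 20]   # story points finished in recent sprints (hypothetical)
velocity = sum(completed_points) / len(completed_points)

remaining_effort = 130                # effort estimate for the remaining backlog, in story points
sprints_needed = math.ceil(remaining_effort / velocity)

print(f"Velocity = {velocity:.1f} points/sprint; "
      f"about {sprints_needed} sprints to clear the estimated backlog.")
```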

