Similar Documents
20 similar documents found.
1.
Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research.
Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process.
Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools.
Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult.
Conclusion: We recommend removing advice to use structured questions to construct search strings and including advice to use a quasi-gold standard, based on a limited manual search, to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.
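As an illustration of the quasi-gold-standard idea recommended above, the recall (sensitivity) of a candidate search string can be computed against the set of papers found by a limited manual search. The following Python sketch is illustrative only; the DOIs and result sets are hypothetical, not data from the study.

def quasi_gold_standard_recall(automated_hits, gold_standard):
    """Recall (sensitivity) of an automated search against a
    quasi-gold standard built from a limited manual search."""
    found = automated_hits & gold_standard
    return len(found) / len(gold_standard) if gold_standard else 0.0

# Hypothetical papers identified by a manual search of a few selected venues
gold_standard = {"doi:10.1000/a", "doi:10.1000/b", "doi:10.1000/c", "doi:10.1000/d"}
# Hypothetical result set returned by a candidate search string
automated_hits = {"doi:10.1000/a", "doi:10.1000/c", "doi:10.1000/x"}

recall = quasi_gold_standard_recall(automated_hits, gold_standard)
print(f"Quasi-gold-standard recall: {recall:.0%}")  # 50% -> the search string needs refinement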

2.
Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far has been on mapping and aggregating evidence through systematic reviews.
Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization improve its automotive testing process. With this we contribute by (1) providing experiences of using evidence-based processes to analyze a real-world automotive test process and (2) providing evidence of challenges and related solutions for automotive software testing processes.
Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process, including case study research (to gain an understanding of practical questions and define a research scope), a systematic literature review (to identify solutions through systematic literature), and value stream mapping (to map out an improved automotive test process based on the current situation and the improvement suggestions identified). These are followed by reflections on the EBSE process used.
Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 of those 26 challenges, our domain-specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process.
Conclusions: Overall, we found that the evidence-based process as presented in this study helps in the technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g. scoping systematic reviews to focus more on concrete industry problems, and understanding strategies for conducting EBSE with respect to effort and quality of the evidence).

3.
Context: The ISO/IEC 29110 standard has been studied since its release in 2011, and its impact and evaluation over recent years have been quite diverse. The standard is structured in five parts describing the business terms, the main Very Small Entity (VSE) profile concepts, process assessment guidelines, the specification of all the generic profile group, and the implementation management and engineering guide for the entry and basic profiles.
Objective: The main purpose of this work is to provide an analysis of the research carried out on ISO/IEC 29110 during the last ten years and of the literature that has developed around it. The literature is analyzed using a traditional mapping study of ISO/IEC 29110 and its parts. All these studies are categorized into a set of topics to which authors have been contributing. This work helps us identify the main research topics within the primary studies.
Method: The mapping study is conducted as a traditional systematic mapping with a categorization of the primary studies. The main search is enhanced with additional searches for each member of the ISO/IEC 29110 series.
Results: A search strategy is defined to conduct this mapping study. 184 papers were retrieved from the literature and selected as primary studies. Our study identifies the reference studies in this area, characterizes them, and identifies which aspects have been treated.
Conclusion: The results of this mapping reveal that ISO/IEC 29110 has been used in a broad range of small contexts, and the main contributions come essentially from research experiences during the last ten years. The literature around this standard is classified based on a well-known classification schema, the activity around the standard, and the types of studies that have been carried out. Research topics are diverse, and we have identified the research methods used by the primary studies. In conclusion, more research and experimental outcomes are needed in order to observe how VSEs behave under specific circumstances.

4.
Context: Two recent mapping studies intended to verify the current state of replication of empirical studies in Software Engineering (SE) identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012) and a second group of studies concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012).
Objective: In this article, our goal is to analyze and discuss the contents of the second set of studies about replications to increase our understanding of the current state of work on replication in empirical software engineering research.
Method: We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by the two previous mapping studies covering the period 1996–2012, complemented by manual and automatic search procedures that collected articles published in 2013.
Results: We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant for our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012.
Conclusions: Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.

5.
Ergonomics, 2012, 55(10): 1266–1277
Workers in physically demanding occupations (PDOs) are frequently subjected to physical selection tests. To avoid legal ramifications, workplaces must be able to show that any personnel selection procedures reflect the inherent requirements of the job. A job task analysis (JTA) is fundamental in determining the work tasks required for employees. To date, there are no published instructions guiding PDO researchers on how to conduct job task analyses. Job task analysis research for non-PDOs offers some insight into the expected reliability and validity of data obtained on the most prevalent task domains in job analysis (importance, frequency, time spent and difficulty). This review critiques such research, and the existing published material on JTA of PDOs, and provides recommendations for future research and practice.

Practitioner Summary: There are no published guidelines for physically demanding occupation (PDO) researchers conducting job task analysis (JTA). Given the legal consequences of improperly conducted JTA, scientifically valid instructions for JTA practitioners are required. This review critiques existing research that analyses the reliability of JTA data, and provides guidelines for PDO researchers conducting JTA.

6.
Context: The Web has had a significant impact on all aspects of our society. As our society relies more and more on the Web, the dependability of web applications has become increasingly important. To make these applications more dependable, for the past decade researchers have proposed various techniques for testing web-based software applications. Our literature search for related studies retrieved 147 papers in the area of web application testing that appeared between 2000 and 2011.
Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends in this specialized field.
Method: We review and structure the body of knowledge related to web application testing through a systematic mapping (SM) study. As part of this study, we pose two sets of research questions, define selection and exclusion criteria, and systematically develop and refine a classification schema. In addition, we conduct a bibliometrics analysis of the papers included in our study.
Results: Our study includes a set of 79 papers (from the 147 retrieved papers) published in the area of web application testing between 2000 and 2011. We present the results of our systematic mapping study. Our mapping data is available through a publicly accessible repository. We derive the observed trends, for instance, in terms of types of papers, sources of information used to derive test cases, and types of evaluations used in papers. We also report the demographic and bibliometric trends in this domain, including top-cited papers, active countries and researchers, and top venues in this research area.
Conclusion: We discuss the emerging trends in web application testing and the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers obtain an overview of existing web application testing approaches and identify areas in the field that require more attention from the research community.

7.
Context: With the increased use of software for running key functions in modern society, it is of utmost importance to understand software robustness and how to support it. Although there have been many contributions to the field, there is a lack of a coherent and summary view.
Objective: To address this issue, we have conducted a literature review in the field of robustness.
Method: This review has been conducted by following guidelines for systematic literature reviews. Systematic reviews are used to find and classify all existing and available literature in a certain field.
Results: From 9193 initial papers found in three well-known research databases, the 144 relevant papers were extracted through a multi-step filtering process with independent validation in each step. These papers were then further analyzed and categorized based on their development phase, domain, research, contribution and evaluation type. The results indicate that most existing results on software robustness focus on verification and validation of commercial off-the-shelf (COTS) components or operating systems, or propose design solutions for robustness, while there is a lack of results on how to elicit and specify robustness requirements. The research typically consists of solution proposals with little to no evaluation, and when there is some evaluation it is primarily done with small, toy/academic example systems.
Conclusion: We conclude that there is a need for more software robustness research on real-world, industrial systems and on software development phases other than testing and design, in particular on requirements engineering.
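A multi-step filtering process of the kind described above can be organized as a pipeline of inclusion predicates, with the paper count recorded after each step so that every step can be validated independently. The sketch below is a minimal Python illustration; the step names, predicates and sample papers are hypothetical and not the criteria used in the review.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str
    venue: str
    duplicate: bool = False

def not_duplicate(p: Paper) -> bool:
    return not p.duplicate

def title_mentions_robustness(p: Paper) -> bool:
    return "robust" in p.title.lower()

def abstract_in_scope(p: Paper) -> bool:
    return "software" in p.abstract.lower()

# Ordered filtering steps; each step's output is counted for independent validation
STEPS = [
    ("remove duplicates", not_duplicate),
    ("title screening", title_mentions_robustness),
    ("abstract screening", abstract_in_scope),
]

def run_selection(papers):
    remaining = list(papers)
    for name, keep in STEPS:
        remaining = [p for p in remaining if keep(p)]
        print(f"after {name}: {len(remaining)} papers")
    return remaining

# Hypothetical candidate papers
papers = [
    Paper("Robustness testing of COTS components", "Software fault injection study ...", "ISSRE"),
    Paper("A survey of robust statistics", "Statistical estimators under outliers ...", "JASA"),
    Paper("Robust requirements for software", "Software requirements elicitation ...", "RE", duplicate=True),
]
selected = run_selection(papers)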

8.
Context: The software architecture of a system is the result of a set of architectural decisions. The topic of architectural decisions in software engineering has received significant attention in recent years. However, no systematic overview exists of the state of research on architectural decisions.
Objective: The goal of this study is to provide a systematic overview of the state of research on architectural decisions. Such an overview helps researchers reflect on previous research and plan future research. Furthermore, it helps practitioners understand the state of research and how research results can help them in their architectural decision-making.
Method: We conducted a systematic mapping study covering studies published between January 2002 and January 2012. We defined six research questions. We queried six reference databases and obtained an initial result set of 28,895 papers. We followed a search and filtering process that resulted in 144 relevant papers.
Results: After classifying the 144 relevant papers for each research question, we found that current research focuses on documenting architectural decisions. We found that only a few studies describe architectural decisions from industry. We identified potential future research topics: domain-specific architectural decisions (such as mobile), achieving specific quality attributes (such as reliability or scalability), uncertainty in decision-making, and group architectural decisions. Regarding empirical evaluations of the papers, around half of the papers use systematic empirical evaluation approaches (such as surveys or case studies). Still, few papers on architectural decisions use experiments.
Conclusion: Our study confirms the increasing interest in the topic of architectural decisions. This study helps the community reflect on the past ten years of research on architectural decisions. Researchers are offered a number of promising future research directions, while practitioners learn what existing papers offer.

9.
Context: Model-driven code generation is being increasingly applied to enhance software development from the perspectives of maintainability, extensibility and reusability. However, aspect-oriented code generation from models is an area that is currently underdeveloped.
Objective: In this study we provide a survey of existing research on aspect-oriented modeling and code generation to discover current work and identify needs for future research.
Method: A systematic mapping study was performed to find relevant studies. Classification schemes were defined, and the 65 selected primary studies were classified on the basis of research focus, contribution type and research type.
Results: Papers of the solution-proposal research type are in the majority. Altogether, aspect-oriented modeling appears to be the most-studied area, divided into modeling notations and process (36%) and model composition and interaction management (26%). The majority of contributions are methods.
Conclusion: Aspect-oriented modeling and composition mechanisms have been discussed extensively in the existing literature, while more research is needed in the area of model-driven code generation. Furthermore, we have observed that previous research has frequently focused on proposing solutions, and thus there is a need for research that validates and evaluates the existing proposals in order to provide firm foundations for aspect-oriented model-driven code generation.

10.
Context: Identifying refactoring opportunities in object-oriented code is an important stage that precedes the actual refactoring process. Several techniques have been proposed in the literature to identify opportunities for various refactoring activities.
Objective: This paper provides a systematic literature review of existing studies identifying opportunities for code refactoring activities.
Method: We performed an automatic search of the relevant digital libraries for potentially relevant studies published through the end of 2013, performed pilot and author-based searches, and selected 47 primary studies (PSs) based on inclusion and exclusion criteria. The PSs were analyzed based on a number of criteria, including the refactoring activities, the approaches to refactoring opportunity identification, the empirical evaluation approaches, and the data sets used.
Results: The results indicate that research in the area of identifying refactoring opportunities is highly active. Most of the studies have been performed by academic researchers using nonindustrial data sets. Extract Class and Move Method were found to be the most frequently considered refactoring activities. The results show that researchers use six primary existing approaches to identify refactoring opportunities and six approaches to empirically evaluate the identification techniques. Most of the systems used in the evaluation process were open-source, which helps to make the studies repeatable. However, a relatively high percentage of the data sets used in the empirical evaluations were small, which limits the generality of the results.
Conclusions: It would be beneficial to perform further studies that consider more refactoring activities, involve researchers from industry, and use large-scale and industrial-based systems.
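The abstract does not name the six identification approaches, but one widely used family is metric-based detection of "feature envy", which flags a Move Method opportunity when a method accesses another class more heavily than its own. The Python sketch below is a hypothetical illustration of that idea; the access counts and the threshold are invented for the example and are not taken from the review.

def move_method_candidates(access_counts, ratio_threshold=2.0):
    """access_counts maps (owning_class, method) -> {accessed_class: count}.
    Flags methods that access some other class far more than their own."""
    candidates = []
    for (own_class, method), counts in access_counts.items():
        internal = counts.get(own_class, 0)
        for other_class, external in counts.items():
            if other_class != own_class and external >= ratio_threshold * max(internal, 1):
                candidates.append((own_class, method, other_class))
    return candidates

# Hypothetical per-method access counts extracted from source code
accesses = {
    ("Order", "print_invoice"): {"Order": 1, "Customer": 5},   # envies Customer
    ("Order", "total_price"):   {"Order": 6, "Product": 2},    # mostly uses its own class
}
for owner, method, target in move_method_candidates(accesses):
    print(f"Consider moving {owner}.{method} to {target}")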

11.
Context: Enterprise Architecture (EA) is a strategy to align business and Information Technology (IT) within an enterprise. EA is managed, developed, and maintained through an EA Implementation Methodology (EAIM).
Objective: The aims of this study are to identify the effective practices used by existing EAIMs, the factors that affect the effectiveness of EAIMs, the tools currently used by existing EAIMs, and the open problems and areas of EAIM needing improvement.
Method: A Systematic Literature Review (SLR) was carried out. 669 papers were retrieved by a manual search in 6 databases, and 46 primary studies were finally included.
Results: Of these studies, 33% were journal articles, 41% were conference papers, and 26% were book chapters. In total, 28 practices, 19 factors, and 15 tools were identified and analysed.
Conclusion: Several rigorous studies have been conducted in order to provide effective EAIMs; however, there are still problems in components of EAIM, including: a lack of tool support for the whole of EA implementation, deficiencies in addressing EAIM practices (especially modeling, management, and maintenance), a lack of consideration of non-functional requirements in existing EAIMs, and no appropriate consideration of requirements analysis in most existing EAIMs. This review provides researchers with some guidelines for future research on this topic. It also provides broad information on EAIM, which could be useful for practitioners.

12.
Internet topology mapping studies utilize large-scale topology maps to analyze various characteristics of the Internet. IP alias resolution, the task of mapping IP addresses to their corresponding routers, is an important task in building such topology maps. In this paper, we present a new probe-based IP alias resolution tool called palmtree. Palmtree can be used to complement existing schemes, improving the overall success of the alias resolution process during topology map construction. In addition, palmtree incurs a linear probing overhead to identify IP aliases. The experimental results obtained over the Internet2 and GEANT networks, as well as four major Internet Service Providers (ISPs), are quite promising regarding the utility of palmtree in obtaining more accurate network topology maps.

13.
Context: The software product line engineering (SPLE) community has provided several different approaches for assessing the feasibility of SPLE adoption and selecting transition strategies. These approaches usually include many rules and guidelines which are very often implicit or scattered over different publications. Hence, for practitioners it is not always easy to select and use these rules to support the decision-making process. Even when the rules are known, the lack of automated support for storing and executing them seriously impedes the decision-making process.
Objective: We aim to evaluate the impact of a decision support system (DSS) on decision-making in SPLE adoption. In alignment with this goal, we provide a decision support model (DSM) and the corresponding DSS.
Method: First, we apply a systematic literature review (SLR) to the existing primary studies that discuss and present approaches for analyzing the feasibility of SPLE adoption and transition strategies. Second, based on the data extraction and synthesis activities of the SLR, the required questions and rules are derived and implemented in the DSS. Third, to validate the approach we conduct multiple case studies.
Results: In the course of the SLR, 31 primary studies were identified, from which we could construct 25 aspects, 39 questions and 312 rules. We have developed the DSS tool Transit-PL that embodies these elements.
Conclusions: The multiple-case-study validation showed that the adoption of the developed DSS tool is justified to support the decision-making process in SPLE adoption.
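A DSS of the kind described can be thought of as a set of questions whose answers feed rules that map to recommended transition strategies. The sketch below illustrates this structure in Python; the questions, rules and strategy texts are hypothetical placeholders and are not taken from Transit-PL.

# Hypothetical answers to assessment questions about the organization
answers = {
    "existing_products_share_features": True,
    "domain_is_stable": True,
    "upfront_investment_possible": False,
}

# Each rule pairs a condition over the answers with a recommended strategy
RULES = [
    (lambda a: a["existing_products_share_features"] and not a["upfront_investment_possible"],
     "Extractive adoption: mine the product line from existing products"),
    (lambda a: a["domain_is_stable"] and a["upfront_investment_possible"],
     "Proactive adoption: build the platform up front"),
    (lambda a: not a["domain_is_stable"],
     "Reactive adoption: grow the platform as new products arrive"),
]

def recommend(answers):
    """Return every strategy whose rule condition holds for the given answers."""
    return [advice for condition, advice in RULES if condition(answers)]

for advice in recommend(answers):
    print(advice)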

14.
Context: Software testing is a knowledge-intensive process, and thus Knowledge Management (KM) principles and techniques should be applied to manage software testing knowledge.
Objective: This study surveys existing research on KM initiatives in software testing in order to identify the state of the art in the area as well as directions for future research. Aspects such as purposes, types of knowledge, technologies and research type are investigated.
Method: The mapping study was performed by searching seven electronic databases. We considered studies published up to December 2013. The initial result set comprised 562 studies. From this set, a total of 13 studies were selected. For these 13, we performed snowballing and directly searched the publications of the researchers and research groups that carried out these studies.
Results: From the mapping study, we identified 15 studies addressing KM initiatives in software testing, which were reviewed in order to extract relevant information on a set of research questions.
Conclusions: Although only a few studies were found that address KM initiatives in software testing, the mapping shows an increasing interest in the topic in recent years. Reuse of test cases is the perspective that has received the most attention. From the KM point of view, most of the studies discuss aspects related to providing automated support for managing testing knowledge by means of a KM system. Moreover, as a main conclusion, the results show that KM is pointed out as an important strategy for increasing test effectiveness, as well as for improving the selection and application of suitable techniques, methods and test cases. On the other hand, the inadequacy of existing KM systems appears to be the most frequently cited problem related to applying KM in software testing.

15.
Systematic literature studies have received much attention in empirical software engineering in recent years. They have become a powerful tool to collect and structure reported knowledge in a systematic and reproducible way. We distinguish systematic literature reviews, which systematically analyze reported evidence in depth, from systematic mapping studies, which structure a field of interest in a broader, usually quantified manner. Due to the rapidly increasing body of knowledge in software engineering, researchers who want to capture the published work in a domain often face an extensive amount of publications, which need to be screened, rated for relevance, classified, and eventually analyzed. Although there are several guidelines for conducting literature studies, they do not yet help researchers cope with the specific difficulties encountered in the practical application of these guidelines. In this article, we present an experience-based guideline to aid researchers in designing systematic literature studies, with special emphasis on the data collection and selection procedures. Our guideline aims at providing a blueprint for a practical and pragmatic path through the plethora of currently available practices and deliverables, capturing the dependencies among the individual steps. The guideline emerges from various mapping studies and literature reviews conducted by the authors and provides recommendations for the general study design, data collection, and study selection procedures. Finally, we share our experiences and lessons learned in applying the different practices of the proposed guideline.

16.
Ontologies are becoming the preferred way of representing, dealing with and reasoning about large volumes of information in several domains. In consequence, the creation, evaluation and maintenance of ontologies have become an engineering process that needs to be managed and measured using sound and reliable methods. As part of any ontology engineering or revision process, metrics can play a role in identifying possible problems or incorrect use of ontology elements, along with providing a kind of quality assessment that complements reviews requiring expert inspection. However, in spite of the fact that there are a number of ontology metric proposals described in the literature, there is a lack of empirical studies that provide a basis for their interpretation. This paper reports a systematic exploration of existing ontology metrics over a large set of ontologies extracted from the Swoogle search engine. The OntoRank value used inside Swoogle to rank search results is used as a contrast for some existing metrics. The results show that, in general, the evaluated metrics do not seem to be indicators of that ranking, but they can be helpful in identifying particular kinds of ontologies.
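Contrasting a candidate ontology metric with a reference ranking such as OntoRank can be done with a rank correlation. The following Python sketch is a minimal illustration assuming hypothetical metric values; it uses Spearman's coefficient and ignores tied ranks for brevity.

def ranks(values):
    """Rank positions (1 = largest value); ties are not handled."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation for untied data."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical values for five ontologies
onto_rank   = [0.91, 0.74, 0.55, 0.32, 0.10]   # reference ranking scores
class_count = [120, 45, 300, 18, 7]            # candidate metric per ontology
print(f"Spearman correlation: {spearman(onto_rank, class_count):+.2f}")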

17.
Context: Test-driven development (TDD) has been extensively researched and compared to traditional approaches (test-last development, TLD). Existing literature reviews show varying results for TDD.
Objective: This study investigates how the conclusions of existing literature reviews change when taking two study quality dimensions into account, namely rigor and relevance.
Method: In this study a systematic literature review has been conducted, and the results of the identified primary studies have been analyzed with respect to rigor and relevance scores using the assessment rubric proposed by Ivarsson and Gorschek (2011). Rigor and relevance are rated on a scale, which is explained in this paper. Four categories of studies were defined based on high/low rigor and relevance.
Results: We found that studies in the four categories come to different conclusions. In particular, studies with high rigor and relevance scores show clear results for improvement in external quality, which seems to come with a loss of productivity. At the same time, high rigor and relevance studies only investigate a small set of variables. Other categories contain many studies showing no difference, hence biasing the results negatively for the overall set of primary studies. Given the classification, differences from previous literature reviews could be highlighted.
Conclusion: Strong indications are obtained that external quality is positively influenced, which has to be further substantiated by industry experiments and longitudinal case studies. Future studies in the high rigor and relevance category would contribute greatly by focusing on a wider set of outcome variables (e.g. internal code quality). We also conclude that considering rigor and relevance in TDD evaluation is important, given the differences in results between categories and in comparison to previous reviews.
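The four categories used above come from crossing high/low rigor with high/low relevance scores. The Python sketch below shows one hypothetical way to assign studies to those categories; the scores, maximum values and the 0.5 cut-off are illustrative assumptions, not the thresholds used in the study.

def category(rigor, relevance, rigor_max=3.0, relevance_max=4.0, cut=0.5):
    """Assign a study to one of the four rigor/relevance quadrants."""
    hi_rig = rigor / rigor_max >= cut
    hi_rel = relevance / relevance_max >= cut
    return ("high" if hi_rig else "low") + " rigor / " + ("high" if hi_rel else "low") + " relevance"

# Hypothetical (rigor, relevance) scores for three primary studies
studies = {
    "Study A": (2.5, 4.0),
    "Study B": (1.0, 3.0),
    "Study C": (2.0, 1.0),
}
for name, (rig, rel) in studies.items():
    print(f"{name}: {category(rig, rel)}")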

18.
Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference.
Aims: This study assesses the impact of systematic literature reviews (SLRs), which are the recommended EBSE method for aggregating evidence.
Method: We used the standard systematic literature review method, employing a manual search of 10 journals and 4 conference proceedings.
Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of the SLRs was fair, with only three scoring less than 2 out of 4.
Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory, appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrates the potential value of EBSE for synthesising evidence and making it available to practitioners.

19.
Context: The Unified Modeling Language (UML) has become the de facto standard for software modeling. UML models are often used to visualize, understand, and communicate the structure and behavior of a system. UML activity diagrams (ADs) are often used to elaborate and visualize individual use cases. Due to their higher level of abstraction and process-oriented perspective, UML ADs are also highly suitable for model-based test generation. In the last two decades, different researchers have used UML ADs for test generation. Despite the growing use of UML ADs for model-based testing, there are currently no comprehensive and unbiased studies on the topic.
Objective: To present a comprehensive and unbiased overview of the state of the art in model-based testing using UML ADs.
Method: We review and structure the current body of knowledge on model-based testing using UML ADs by performing a systematic mapping study using well-known guidelines. We pose nine research questions, outline our selection criteria, and develop a classification scheme.
Results: The results comprise 41 primary studies analyzed against nine research questions. We also highlight the current trends and research gaps in model-based testing using UML ADs and discuss some shortcomings for researchers and practitioners working in this area. The results show that the existing approaches to model-based testing using UML ADs tend to rely on intermediate formats and formalisms for model verification and test generation, employ a multitude of graph-based coverage criteria, and use graph search algorithms.
Conclusion: We present a comprehensive overview of the existing approaches to model-based testing using UML ADs. We conclude that (1) UML ADs are not being used for non-functional testing, (2) only a few approaches have been validated against realistic, industrial case studies, (3) most approaches target very restricted application domains, and (4) there is currently a clear lack of holistic approaches for model-based testing using UML ADs.
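As a concrete flavor of graph-based test generation from an activity diagram, the following Python sketch enumerates all simple paths from the initial to the final node of a small hypothetical diagram, corresponding to a basic path-coverage-style criterion. The diagram and the criterion are illustrative only; the surveyed approaches use a variety of intermediate formalisms and coverage criteria.

def simple_paths(graph, start, end, path=None):
    """Yield every simple (cycle-free) path from start to end in a directed graph."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:               # keep paths simple (no repeated nodes)
            yield from simple_paths(graph, nxt, end, path)

# Hypothetical activity diagram encoded as an adjacency list of activity nodes
activity_diagram = {
    "initial": ["check stock"],
    "check stock": ["in stock?"],
    "in stock?": ["ship order", "backorder"],
    "ship order": ["final"],
    "backorder": ["notify customer"],
    "notify customer": ["final"],
}
for test_path in simple_paths(activity_diagram, "initial", "final"):
    print(" -> ".join(test_path))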

20.
Context: Global Software Engineering (GSE) continues to experience substantial growth and is fundamentally different from collocated development. As a result, software managers have a pressing need for support in how to successfully manage teams in a global environment. Unfortunately, de facto process frameworks such as the Capability Maturity Model Integration (CMMI®) do not explicitly cater for the complex and changing needs of global software management.
Objective: To develop a Global Teaming (GT) process area that addresses specific problems relating to temporal, cultural, geographic and linguistic distance and that meets the complex and changing needs of global software management.
Method: We carried out three in-depth case studies of GSE within industry from 1999 to 2007. To supplement these studies we conducted three literature reviews. This allowed us to identify factors which are important to GSE. Based on a gap analysis between these GSE factors and the CMMI®, we developed the GT process area. Finally, the literature and our empirical data were used to identify threats to software projects if these processes are not implemented.
Results: Our new GT process area brings together practices drawn from the GSE literature and our previous empirical work, including many socio-technical factors important to global software development. The GT process area presented in this paper encompasses recommended practices that can be used independently or with existing models. We found that if managers are not proactive in implementing new GT practices, they are putting their projects under threat of failure. We therefore include a list of threats that, if ignored, could have an adverse effect on an organization's competitive advantage, employee satisfaction, timescales, and software quality.
Conclusion: The GT process area and associated threats presented in this paper provide both a guide and motivation for software managers to better understand how to manage technical talent across the globe.
