Similar Literature
20 similar documents retrieved.
1.
Preliminary guidelines for empirical research in software engineering   (total citations: 5; self-citations: 0; citations by others: 5)
Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research. The guidelines are intended to assist researchers, reviewers, and meta-analysts in designing, conducting, and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection, and analysis and reporting of empirical studies.

2.
Qualitative methods in empirical studies of software engineering   (total citations: 1; self-citations: 0; citations by others: 1)
While empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has introduced a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study the nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behaviour. The paper presents several qualitative methods for data collection and analysis and describes how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout.

3.
Many software engineering problems have been addressed with search algorithms. Search algorithms usually depend on several parameters (e.g., population size and crossover rate in genetic algorithms), and the choice of these parameters can have an impact on the performance of the algorithm. The No Free Lunch theorem formally proves that it is impossible to tune a search algorithm so that it has optimal settings for all possible problems. How, then, should the parameters of a search algorithm be set for a given software engineering problem? In this paper, we carry out the largest empirical analysis so far on parameter tuning in search-based software engineering. More than one million experiments were carried out and statistically analyzed in the context of test data generation for object-oriented software using the EvoSuite tool. Results show that tuning does indeed have an impact on the performance of a search algorithm. But, at least in the context of test data generation, it does not seem easy to find good settings that significantly outperform the “default” values suggested in the literature. This has very practical value for both researchers (e.g., when different techniques are compared) and practitioners: using “default” values is a reasonable and justified choice, whereas parameter tuning is a long and expensive process that might or might not pay off in the end.
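To make the tuning question concrete, here is a minimal Python sketch, not the EvoSuite/Java setup used in the paper: a toy genetic algorithm whose fitness function, parameter values, and "default versus tuned" comparison are all illustrative assumptions.

```python
# Sketch: compare a "default" GA configuration against an alternative tuning
# on a toy search problem. All names and values are illustrative assumptions.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for "branch conditions to cover"

def fitness(candidate):
    # Toy fitness: number of matched positions (higher is better).
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def run_ga(pop_size, crossover_rate, mutation_rate, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: max(2, pop_size // 2)]
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, len(TARGET))
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation.
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = children
    return max(fitness(ind) for ind in pop)

# "Default" settings versus one alternative tuning (both assumed for the example).
default_score = run_ga(pop_size=50, crossover_rate=0.75, mutation_rate=0.02)
tuned_score = run_ga(pop_size=200, crossover_rate=0.90, mutation_rate=0.05)
print("default:", default_score, "tuned:", tuned_score)
```

In a real study, each configuration would be run many times and the results compared statistically, which is exactly the kind of analysis the paper performs at scale.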

4.
Context: Two recent mapping studies, which were intended to verify the current state of replication of empirical studies in Software Engineering (SE), identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012) and a second group of studies concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012). Objective: In this article, our goal is to analyze and discuss the contents of the second set of studies about replications in order to increase our understanding of the current state of the work on replication in empirical software engineering research. Method: We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by the two previous mapping studies covering the period 1996–2012, complemented by manual and automatic search procedures that collected articles published in 2013. Results: We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant for our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012. Conclusions: Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.

5.
Software engineering is knowledge-intensive work, and how to manage software engineering knowledge has received much attention. This systematic review identifies empirical studies of knowledge management initiatives in software engineering, and discusses the concepts studied, the major findings, and the research methods used. Seven hundred and sixty-two articles were identified, of which 68 were studies in an industry context. Of these, 29 were empirical studies and 39 were reports of lessons learned. More than half of the empirical studies were case studies. The majority of empirical studies relate to technocratic and behavioural aspects of knowledge management, while there are few studies relating to economic, spatial and cartographic approaches. A finding reported across multiple papers was the need not to focus exclusively on explicit knowledge, but also to consider tacit knowledge. We also describe implications for research and for practice.

6.
Elmidaoui, Sara; Cheikhi, Laila; Idri, Ali; Abran, Alain. Journal of Computer Science and Technology, 2020, 35(5): 1147–1174

Maintaining software once it has been delivered to end users is laborious and, over the software's lifetime, is most often considerably more expensive than the initial development. The prediction of software maintainability has emerged as an important research topic to address industry expectations for reducing costs, in particular maintenance costs. Researchers and practitioners have been working on proposing and identifying a variety of techniques, ranging from statistical to machine learning (ML), for better prediction of software maintainability. This review analyzes the empirical evidence on the accuracy of software product maintainability prediction (SPMP) using ML techniques. The paper discusses the findings of 77 selected studies published from 2000 to 2018 according to the following criteria: maintainability prediction techniques, validation methods, accuracy criteria, overall accuracy of ML techniques, and the techniques offering the best performance. The review followed the well-known systematic review process. The results show that ML techniques are frequently used in predicting maintainability. In particular, artificial neural network (ANN), support vector machine/regression (SVM/R), regression & decision tree (DT), and fuzzy & neuro-fuzzy (FNF) techniques are more accurate in terms of PRED and MMRE. The N-fold and leave-one-out cross-validation methods, and the MMRE and PRED accuracy criteria, are frequently used in empirical studies. In general, ML techniques outperformed non-machine-learning techniques, e.g., regression analysis (RA), while FNF outperformed SVM/R, DT, and ANN in most experiments. However, while many techniques were reported to be superior, no specific one can be identified as the best.

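The two accuracy criteria the review reports as most frequently used, MMRE and PRED, can be computed as in the following Python sketch; the formulas are the standard definitions, while the maintainability values are invented for illustration.

```python
# Sketch of the two standard accuracy criteria:
#   MRE_i   = |actual_i - predicted_i| / actual_i
#   MMRE    = mean of MRE_i over all observations (lower is better)
#   PRED(l) = fraction of observations with MRE_i <= l, commonly l = 0.25 (higher is better)
def mmre(actual, predicted):
    mres = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(mres) / len(mres)

def pred(actual, predicted, level=0.25):
    mres = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(1 for m in mres if m <= level) / len(mres)

# Illustrative maintainability values (e.g., change effort); numbers are made up.
actual = [10.0, 20.0, 15.0, 40.0]
predicted = [12.0, 18.0, 16.0, 30.0]
print(f"MMRE = {mmre(actual, predicted):.3f}")
print(f"PRED(0.25) = {pred(actual, predicted):.2f}")
```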

7.
The popularity of empirical methods in software engineering research is on the rise. Surveys, experiments, metrics, case studies, and field studies are examples of empirical methods used to investigate both software engineering processes and products. The increased application of empirical methods has also brought about an increase in discussions about adapting these methods to the peculiarities of software engineering. In contrast, the ethical issues raised by empirical methods have received little, if any, attention in the software engineering literature. This article is intended to introduce the ethical issues raised by empirical research to the software engineering research community and to stimulate discussion of how best to deal with these ethical issues. Through a review of the ethical codes of several fields that commonly employ humans and artifacts as research subjects, we have identified major ethical issues relevant to empirical studies of software engineering. These issues are illustrated with real empirical studies of software engineering.

8.
Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. They are particularly useful for (a) poorly understood problem domains where little knowledge exists for the humans to develop effective algorithms; (b) domains where there are large databases containing valuable implicit regularities to be discovered; or (c) domains where programs must adapt to changing conditions. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development and maintenance tasks could be formulated as learning problems and approached in terms of learning algorithms. This paper deals with the subject of applying machine learning in software engineering. In the paper, we first provide the characteristics and applicability of some frequently utilized machine learning algorithms. We then summarize and analyze the existing work and discuss some general issues in this niche area. Finally we offer some guidelines on applying machine learning methods to software engineering tasks and use some software development and maintenance tasks as examples to show how they can be formulated as learning problems and approached in terms of learning algorithms.
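As an illustration of formulating a maintenance task as a learning problem, the following hedged sketch casts module defect prediction as supervised classification; it assumes scikit-learn is available, and the feature names and values are invented for the example.

```python
# Sketch: a software maintenance task (module defect prediction) expressed as a
# supervised learning problem. Features and labels below are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_of_code, cyclomatic_complexity, past_changes]
X_train = [
    [120, 4, 2],
    [850, 25, 14],
    [300, 9, 5],
    [990, 31, 20],
]
y_train = [0, 1, 0, 1]  # 1 = module was defect-prone, 0 = not

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Predict for an unseen module described by the same metrics.
print(model.predict([[400, 12, 7]]))
```

The same pattern (metrics in, label or estimate out) is how tasks such as effort estimation or maintainability prediction are typically cast as learning problems.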

9.
Empirical software engineering has a long history of utilizing statistical significance testing, and in many ways it has become the backbone of the topic. What is less obvious is how much consideration has been given to its adoption. Statistical significance testing was initially designed for testing hypotheses in a very different area, and hence the question must be asked: does it transfer into empirical software engineering research? This paper attempts to address this question. The paper finds that this transference is far from straightforward, resulting in several problems in its deployment within the area. Problems exist principally in formulating hypotheses, in calculating probability values and the associated cut-off value, and in constructing the sample and its distribution. Hence, the paper concludes that the field should explore other avenues of analysis, in an attempt to establish which analysis approaches are preferable under which conditions when conducting empirical software engineering studies.
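For concreteness, the following Python sketch (with invented data, assuming SciPy is available) shows the kind of analysis the paper questions: a non-parametric significance test against a fixed 0.05 cut-off, complemented here by a Vargha-Delaney A12 effect size.

```python
# Sketch: comparing two techniques with a significance test and a fixed cut-off,
# plus an effect size. The coverage values are made up for illustration.
from scipy.stats import mannwhitneyu

technique_a = [0.61, 0.72, 0.68, 0.75, 0.66, 0.70]  # e.g., branch coverage per run
technique_b = [0.58, 0.63, 0.65, 0.60, 0.62, 0.64]

stat, p_value = mannwhitneyu(technique_a, technique_b, alternative="two-sided")
print(f"p-value = {p_value:.4f}, 'significant' at 0.05: {p_value < 0.05}")

# Vargha-Delaney A12 effect size: probability that a random observation from A
# exceeds a random observation from B (0.5 means no difference).
wins = sum(a > b for a in technique_a for b in technique_b)
ties = sum(a == b for a in technique_a for b in technique_b)
a12 = (wins + 0.5 * ties) / (len(technique_a) * len(technique_b))
print(f"A12 = {a12:.2f}")
```

The paper's concern is precisely that steps such as choosing the hypothesis, the test, the cut-off, and the sample are often transferred from other disciplines without questioning whether they fit software engineering data.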

10.
One of the most important aspects of achieving secure software systems in the software development process is what is known as Security Requirements Engineering. However, very few reviews address this theme in a systematic, thorough and unbiased manner; none of them performs a systematic review of security requirements engineering, and there is therefore no sufficiently good context in which to operate. In this paper we carry out a systematic review of the existing literature concerning security requirements engineering in order to summarize the evidence regarding this issue and to provide a framework/background in which to appropriately position new research activities.

11.
Context: With the increased use of software for running key functions in modern society, it is of utmost importance to understand software robustness and how to support it. Although there have been many contributions to the field, there is a lack of a coherent and summary view. Objective: To address this issue, we have conducted a literature review in the field of robustness. Method: This review has been conducted by following guidelines for systematic literature reviews. Systematic reviews are used to find and classify all existing and available literature in a certain field. Results: From 9193 initial papers found in three well-known research databases, 144 relevant papers were extracted through a multi-step filtering process with independent validation in each step. These papers were then further analyzed and categorized based on their development phase, domain, research, contribution and evaluation type. The results indicate that most existing work on software robustness focuses on verification and validation of commercial off-the-shelf (COTS) components or operating systems, or proposes design solutions for robustness, while there is a lack of results on how to elicit and specify robustness requirements. The research typically consists of solution proposals with little to no evaluation, and when there is some evaluation it is primarily done with small, toy or academic example systems. Conclusion: We conclude that there is a need for more software robustness research on real-world, industrial systems and on software development phases other than testing and design, in particular on requirements engineering.

12.
One of the main goals of an applied research field such as software engineering is the transfer and widespread use of research results in industry. To impact industry, researchers developing technologies in academia need to provide tangible evidence of the advantages of using them. This can be done through step-wise validation, enabling researchers to gradually test and evaluate technologies and finally to try them in real settings with real users and applications. The evidence obtained, together with detailed information on how the validation was conducted, offers rich decision-support material for industry practitioners seeking to adopt new technologies and for researchers looking for an empirical basis on which to build new or refined technologies. This paper presents a model for evaluating the rigor and industrial relevance of technology evaluations in software engineering. The model is applied and validated in a comprehensive systematic literature review of evaluations of requirements engineering technologies published in software engineering journals. The aim is to show the applicability of the model and to characterize how evaluations are carried out and reported, in order to evaluate the state of research. The review shows that the model can be applied to characterize evaluations in requirements engineering. The findings from applying the model also show that the majority of technology evaluations in requirements engineering lack both industrial relevance and rigor. In addition, the research field does not show any improvement in terms of industrial relevance over time.

13.
The efforts to address user experience (UX) in product development keep growing, as demonstrated by the proliferation of workshops and conferences bringing together academics and practitioners who aim at creating interactive software able to satisfy their users. This special issue focuses on the “Interplay between User Experience Evaluation and Software Development”, stating that the gap between human-computer interaction and software engineering with regard to usability has somewhat been narrowed. Unfortunately, our experience shows that software development organizations perform few usability engineering activities or none at all. Several authors acknowledge that, in order to understand the reasons for the limited impact of usability engineering and UX methods, and to try to change this situation, it is fundamental to thoroughly analyze current software development practices, involving practitioners and possibly working from inside the companies. This article contributes to this research line by reporting an experimental study conducted with software companies. The study has confirmed that still too many companies either neglect usability and UX or do not properly consider them. Interesting problems emerged, and this article gives suggestions on how they may be properly addressed, since their solution is the starting point for reducing the gap between research and practice in usability and UX. It also provides further evidence of the value of the research method called Cooperative Method Development, which is based on the collaboration of researchers and practitioners in carrying out empirical research; it was used in one step of the study and proved instrumental in showing practitioners why to improve their development processes and how to do so.

14.
If empirical software engineering is to grow as a valid scientific endeavor, the ability to acquire, use, share, and compare data collected from a variety of sources must be encouraged. This is necessary to validate the formal models being developed within computer science. However, within the empirical software engineering community this has not been easily accomplished. This paper analyses experiences from a number of projects, and defines the issues, which include the following: (1) How should data, testbeds, and artifacts be shared? (2) What limits should be placed on who can use them and how? How does one limit potential misuse? (3) What is the appropriate way to credit the organization and individual that spent the effort collecting the data, developing the testbed, and building the artifact? (4) Once shared, who owns the evolved asset? As a solution to these issues, the paper proposes a framework for an empirical software engineering artifact agreement. Such an agreement is intended to address the needs for both creator and user of such artifacts and should foster a market in making available and using such artifacts. If this framework for sharing software engineering artifacts is commonly accepted, it should encourage artifact owners to make the artifacts accessible to others (gaining credit is more likely and misuse is less likely). It may be easier for other researchers to request artifacts since there will be a well-defined protocol for how to deal with relevant matters.

15.
In many studies in software engineering, students are used instead of professional software developers, although the objective is to draw conclusions valid for professional software developers. This paper presents a study where the difference between the two groups is evaluated. People from the two groups have individually carried out a non-trivial software engineering judgement task involving the assessment of how ten different factors affect the lead-time of software development projects. It is found that the differences are only minor, and it is concluded that software engineering students may be used instead of professional software developers under certain conditions. These conditions are identified and described based on generally accepted criteria for validity evaluation of empirical studies.

16.
Metamodels for Computer-based Engineering Design: Survey and recommendations   (total citations: 47; self-citations: 1; citations by others: 46)
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today’s engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper, we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning and kriging. We survey their existing application in engineering design, and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations, and how common pitfalls can be avoided.
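As a small illustration of one of the surveyed techniques, the following Python sketch fits a kriging-style metamodel, here a Gaussian-process regressor from scikit-learn standing in for an expensive analysis code; the toy function, sample design, and kernel choice are assumptions made for the example.

```python
# Sketch: build a cheap metamodel of an "expensive" analysis code from a small
# design of experiments, then query the metamodel instead of the real code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_analysis(x):
    # Stand-in for a costly simulation; in practice this would be the real code.
    return np.sin(3.0 * x) + 0.5 * x

# A small design of experiments over the input range.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_analysis(X_train).ravel()

metamodel = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
metamodel.fit(X_train, y_train)

# Cheap predictions (with uncertainty estimates) replace further expensive runs.
X_new = np.array([[0.3], [1.1], [1.9]])
mean, std = metamodel.predict(X_new, return_std=True)
print(mean, std)
```

The survey's caution applies here as well: because the underlying code is deterministic, classical error-based statistics must be interpreted carefully when validating such approximations.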

17.
18.
Based on empirical material from the area of software engineering, this article discusses the issue of plans and planning as an integral part of, and prerequisite for, software development work. It relates observed practices to literature produced by the Computer Supported Cooperative Work community. Empirical studies of software development practice seldom address re-planning. By analyzing the empirical material from one project, we are able to show how certain kinds of coordination problems arise and how they may be dealt with. The empirical research does not focus primarily on the character of plans; instead, it raises the question of what means are necessary, and should be provided, in order to cope with situations in which plans do not work out. In relation to plans, special emphasis is placed on “due process”, i.e. how the project plan and the company-wide project model are maintained to enable the identification and articulation of deviations from them. On the basis of our empirical analysis, we propose to support the articulation and coordination work necessary in situations where plans do not adequately work out.

19.
Domain analysis is crucial and central to software product line engineering (SPLE), as it is one of the main instruments for deciding what to include in a product and how it should fit into the overall software product line. For this reason, many domain analysis solutions have been proposed by both researchers and industry practitioners. Domain analysis comprises various modeling and scoping activities. This paper presents a systematic review of the domain analysis solutions presented up to 2007. The goal of the review is to analyze the level of industrial application and/or empirical validation of the proposed solutions, with the purpose of mapping maturity in terms of industrial application, as well as the extent to which proposed solutions have been evaluated in terms of usability and usefulness. The findings of this review indicate that, although many new domain analysis solutions for software product lines have been proposed over the years, the absence of qualitative and quantitative results from empirical application and/or validation makes it hard to evaluate the potential of the proposed solutions with respect to their usability and/or usefulness for industry adoption. The detailed results of the systematic review can be used by individual researchers to identify large gaps in research that provide opportunities for future work, and from a general research perspective lessons can be learned from the absence of validation as well as from the good examples presented. From an industry practitioner's view, the results can be used to gauge to what extent solutions have been applied and/or validated and in what manner, which is valuable input prior to industry adoption of a domain analysis solution.

20.
Systematic literature studies have received much attention in empirical software engineering in recent years. They have become a powerful tool for collecting and structuring reported knowledge in a systematic and reproducible way. We distinguish between systematic literature reviews, which systematically analyze reported evidence in depth, and systematic mapping studies, which structure a field of interest in a broader, usually quantified manner. Due to the rapidly increasing body of knowledge in software engineering, researchers who want to capture the published work in a domain often face an extensive number of publications, which need to be screened, rated for relevance, classified, and eventually analyzed. Although there are several guidelines for conducting literature studies, they do not yet help researchers cope with the specific difficulties encountered in the practical application of these guidelines. In this article, we present an experience-based guideline to aid researchers in designing systematic literature studies, with special emphasis on the data collection and selection procedures. Our guideline aims at providing a blueprint for a practical and pragmatic path through the plethora of currently available practices and deliverables, capturing the dependencies among the single steps. The guideline emerges from various mapping studies and literature reviews conducted by the authors and provides recommendations for the general study design, data collection, and study selection procedures. Finally, we share our experiences and lessons learned in applying the different practices of the proposed guideline.

