Similar Literature
20 similar documents found
1.
The increasing competition among industries has driven the emergence of various tools and methods for maintenance decision-making support. This paper identifies in the literature the application areas of industrial maintenance decision-making, the relationships between these areas, and the ways in which authors integrate tools and methods. This information makes it possible to identify trends and deficiencies in this context, helping to centralize the efforts required for future work. This work follows a series of structured steps for a systematic literature review of papers related to the main topic available in online databases. The selected papers are subject to a content assessment and grouped according to the application areas. The direct comparison between these areas and the construction of a relational matrix provide a quantitative interpretation of the results and well-structured information. Additionally, this paper proposes a framework based on information from the literature, which summarizes the origin and flow of information used in the development of models, showing the relationship among application areas of decision making. The research identifies trends focused on the joint optimization of production systems and on the increasing deployment of methods for autonomous equipment prediction.

2.
Context: The web has had a significant impact on all aspects of our society. As our society relies more and more on the web, the dependability of web applications has become increasingly important. To make these applications more dependable, for the past decade researchers have proposed various techniques for testing web-based software applications. Our literature search for related studies retrieved 193 papers in the area of web application testing, which appeared between 2000 and 2013. Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends and empirical evidence in this specialized field. Methods: We systematically review the body of knowledge related to functional testing of web applications through a systematic literature review (SLR) study. This SLR is a follow-up and complementary study to a recent systematic mapping (SM) study that we conducted in this area. As part of this study, we pose three sets of research questions, define selection and exclusion criteria, and synthesize the empirical evidence in this area. Results: Our pool of studies includes a set of 95 papers (from the 193 retrieved papers) published in the area of web application testing between 2000 and 2013. The data extracted during our SLR study is available through a publicly accessible online repository. Among our results are the following: (1) the list of test tools in this area and their capabilities, (2) the types of test models and fault models proposed in this domain, (3) the way the empirical studies in this area have been designed and reported, and (4) the state of empirical evidence and industrial relevance. Conclusion: We discuss the emerging trends in web application testing and the implications for researchers and practitioners in this area. The results of our SLR can help researchers to obtain an overview of existing web application testing approaches, fault models, tools, metrics and empirical evidence, and subsequently identify areas in the field that require more attention from the research community.

3.
Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research. Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process. Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools. Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult. Conclusion: We recommend removing the advice to use structured questions to construct search strings and adding advice to use a quasi-gold standard, based on a limited manual search, to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.
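To make the quasi-gold-standard recommendation above concrete, evaluating a candidate search string reduces to a recall-style calculation against the relevant studies found by the limited manual search. The sketch below is purely illustrative and not taken from the reviewed paper; the study identifiers and the 0.8 threshold are hypothetical placeholders.

```python
# Quasi-sensitivity of an automated search, evaluated against a
# quasi-gold standard (QGS) built from a limited manual search.
# Study identifiers below are hypothetical placeholders.

def quasi_sensitivity(retrieved: set[str], qgs: set[str]) -> float:
    """Fraction of known-relevant (QGS) studies found by the automated search."""
    if not qgs:
        raise ValueError("quasi-gold standard must not be empty")
    return len(retrieved & qgs) / len(qgs)

# Example: a manual search of two venues yielded 8 known-relevant studies;
# the candidate search string retrieved 6 of them (plus some noise).
qgs = {f"S{i}" for i in range(1, 9)}
retrieved = {"S1", "S2", "S3", "S5", "S6", "S8", "X1", "X2"}

score = quasi_sensitivity(retrieved, qgs)
print(f"quasi-sensitivity = {score:.2f}")  # 0.75
# One would typically keep refining the search string until quasi-sensitivity
# reaches a predefined threshold (e.g. 0.8 in this hypothetical setup).
```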

4.
Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and in the computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider the special challenges posed by scientific software, such as oracle problems, when developing testing techniques.
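A technique commonly proposed for the oracle problem mentioned above is metamorphic testing: rather than checking an exact expected output, a test checks a relation that must hold between the outputs of related inputs. The sketch below is a minimal illustration of that idea using a made-up numerical kernel; it is not drawn from any of the surveyed studies.

```python
import math
import random

def mean_energy(samples: list[float]) -> float:
    """Stand-in for a scientific kernel whose exact output has no easy oracle."""
    return math.fsum(x * x for x in samples) / len(samples)

def test_metamorphic_scaling(trials: int = 100) -> None:
    """Metamorphic relation: scaling every input by k must scale the result by k**2.
    No exact expected value is needed, which sidesteps the oracle problem."""
    rng = random.Random(0)
    for _ in range(trials):
        samples = [rng.uniform(-10.0, 10.0) for _ in range(50)]
        k = rng.uniform(0.5, 3.0)
        base = mean_energy(samples)
        scaled = mean_energy([k * x for x in samples])
        assert math.isclose(scaled, k * k * base, rel_tol=1e-9), (k, base, scaled)

if __name__ == "__main__":
    test_metamorphic_scaling()
    print("metamorphic relation held for all trials")
```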

5.

Context

In recent years, many software companies have considered Software Process Improvement (SPI) as essential for successful software development. These companies have also shown special interest in IT Service Management (ITSM). SPI standards have evolved to incorporate ITSM best practices.

Objective

This paper presents a systematic literature review of ITSM Process Improvement initiatives based on the ISO/IEC 15504 standard for process assessment and improvement.

Method

A systematic literature review based on the guidelines proposed by Kitchenham and the review protocol template developed by Biolchini et al. is performed.

Results

Twenty-eight relevant studies related to ITSM Process Improvement have been found. From the analysis of these studies, nine different ITSM Process Improvement initiatives have been detected. Seven of these initiatives use ISO/IEC 15504 conformant process assessment methods.

Conclusion

During the last decade, in order to satisfy the ongoing demand of mature software development companies for assessing and improving ITSM processes, different models which use the measurement framework of ISO/IEC 15504 have been developed. However, it is still necessary to define a method with the necessary guidelines to implement both software development processes and ITSM processes while reducing the amount of effort, especially because some processes of the two categories overlap.

6.
Context: Organizations working in software development are aware that processes are very important assets, and they are very conscious of the need to deploy well-defined processes with the goal of improving software product development and, particularly, quality. Software process modeling languages are an important support for describing and managing software processes in software-intensive organizations. Objective: This paper seeks to identify which software process modeling languages have been defined in the last decade, the relationships and dependencies among them and, starting from the current state, to define directions for future research. Method: A systematic literature review was developed. 1929 papers were retrieved by a manual search in 9 databases, and 46 primary studies were finally included. Results: Since 2000, more than 40 languages have been reported for the first time, each of them with a concrete purpose. We show that different base technologies have been used to define software process modeling languages. We provide a scheme in which each language is registered together with the year it was created, the base technology used to define it, and whether it is considered a starting point for later languages. This scheme is used to illustrate the trend in software process modeling languages. Finally, we present directions for future research. Conclusion: This review presents the different software process modeling languages that have been developed in the last ten years, showing the relevant fact that model-based SPMLs (Software Process Modeling Languages) are considered a current trend. Each of these languages has been designed with a particular motivation, to solve problems which had been detected. However, there are still several problems to face, which have become evident in this review. This lets us provide researchers with some guidelines for future research on this topic.

7.

Context

Service-Oriented Computing (SOC) is a promising computing paradigm which facilitates the development of adaptive and loosely coupled service-based applications (SBAs). Many of the technical challenges pertaining to the development of SBAs have been addressed; however, there are still outstanding questions relating to the processes required to develop them.

Objective

The objective of this study is to systematically identify process models for developing service-based applications (SBAs) and review the processes within them. This will provide a useful starting point for any further research in the area. A secondary objective of the study is to identify process models which facilitate the adaptation of SBAs.

Method

In order to achieve this objective a systematic literature review (SLR) of the existing software engineering literature is conducted.

Results

During this research, 722 studies were identified using a predefined search strategy; this number was narrowed down to 57 studies based on a set of strict inclusion and exclusion criteria. The results are reported both quantitatively, in the form of a mapping study, and qualitatively, in the form of a narrative summary of the key processes identified.

Conclusion

There are many process models reported for the development of SBAs, varying in detail and maturity; this review has identified and categorised the processes within those process models. The review has also identified and evaluated process models which facilitate the adaptation of SBAs.

8.
Context: Identifying refactoring opportunities in object-oriented code is an important stage that precedes the actual refactoring process. Several techniques have been proposed in the literature to identify opportunities for various refactoring activities. Objective: This paper provides a systematic literature review of existing studies identifying opportunities for code refactoring activities. Method: We performed an automatic search of the relevant digital libraries for potentially relevant studies published through the end of 2013, performed pilot and author-based searches, and selected 47 primary studies (PSs) based on inclusion and exclusion criteria. The PSs were analyzed based on a number of criteria, including the refactoring activities, the approaches to refactoring opportunity identification, the empirical evaluation approaches, and the data sets used. Results: The results indicate that research in the area of identifying refactoring opportunities is highly active. Most of the studies have been performed by academic researchers using nonindustrial data sets. Extract Class and Move Method were found to be the most frequently considered refactoring activities. The results show that researchers use six primary existing approaches to identify refactoring opportunities and six approaches to empirically evaluate the identification techniques. Most of the systems used in the evaluation process were open-source, which helps to make the studies repeatable. However, a relatively high percentage of the data sets used in the empirical evaluations were small, which limits the generality of the results. Conclusions: It would be beneficial to perform further studies that consider more refactoring activities, involve researchers from industry, and use large-scale and industrial-based systems.
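Many of the identification techniques the review classifies are metrics-based heuristics. As a purely illustrative sketch (not one of the surveyed approaches; the class names and access counts are hypothetical), a crude "feature envy" check can flag a Move Method candidate when a method accesses members of another class more often than members of its own:

```python
from collections import Counter

def move_method_candidate(accesses: dict[str, int], own_class: str) -> str | None:
    """Flag a Move Method opportunity when a method accesses members of some other
    class strictly more often than members of its own class (a crude feature-envy check).
    `accesses` maps class name -> number of member accesses made by the method."""
    if not accesses:
        return None
    envied, count = Counter(accesses).most_common(1)[0]
    if envied != own_class and count > accesses.get(own_class, 0):
        return envied
    return None

# Hypothetical example: Order.calculate_shipping() touches Customer data 5 times
# but its own class only twice -> suggest moving it towards Customer.
print(move_method_candidate({"Customer": 5, "Order": 2}, own_class="Order"))  # Customer
print(move_method_candidate({"Order": 4, "Customer": 1}, own_class="Order"))  # None
```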

9.
As Building Information Modeling (BIM) workflows are becoming very relevant for the different stages of a project's lifecycle, more data is produced and managed across it. The information and data accumulated in BIM-based projects present an opportunity for analysis and for the extraction of project knowledge from the inception to the operation phase. In other industries, Machine Learning (ML) has been demonstrated to be an effective approach to automate processes and extract useful insights from different types and sources of data. The rapid development of ML applications, the growing generation of BIM-related data in projects, and the different needs for use of this data present serious challenges to adopting and effectively applying ML techniques to BIM-based projects in the Architecture, Engineering, Construction and Operations (AECO) industry. While research on the use of BIM data through ML has increased in the past decade, it is still in a nascent stage. In order to assess where the industry stands today, this paper carries out a systematic literature review (SLR) identifying and summarizing common emerging areas of application and utilization of ML within the context of BIM-generated data. Moreover, the paper identifies research gaps and trends. Based on the observed limitations, prominent future research directions are suggested, focusing on information architecture and data, application scalability, and human-information interactions.

10.
Context: Numerous open source software projects are based on volunteer collaboration and require a continuous influx of newcomers for their continuity. Newcomers face barriers that can lead them to give up. These barriers hinder both developers willing to make a single contribution and those willing to become project members. Objective: This study aims to identify and classify the barriers that newcomers face when contributing to open source software projects. Method: We conducted a systematic literature review of papers reporting empirical evidence regarding the barriers that newcomers face when contributing to open source software (OSS) projects. We retrieved 291 studies by querying 4 digital libraries. Twenty studies were identified as primary. We performed a backward snowballing approach and searched for other papers published by the authors of the selected papers to identify potential studies. Then, we used a coding approach inspired by the open coding and axial coding procedures from Grounded Theory to categorize the barriers reported by the selected studies. Results: We identified 20 studies providing empirical evidence of barriers faced by newcomers to OSS projects while making a contribution. From the analysis, we identified 15 different barriers, which we grouped into five categories: social interaction, newcomers’ previous knowledge, finding a way to start, documentation, and technical hurdles. We also classified the problems with regard to their origin: newcomers, community, or product. Conclusion: The results are useful to researchers and OSS practitioners willing to investigate or to implement tools to support newcomers. We mapped technical and non-technical barriers that hinder newcomers’ first contributions. The most evidenced barriers are related to socialization, appearing in 75% (15 out of 20) of the studies analyzed, with a strong focus on interactions in mailing lists (receiving answers and socializing with other members). There is a lack of in-depth studies on technical issues, such as code issues. We also noticed that the majority of the studies relied on historical data gathered from software repositories and that there was a lack of experiments and qualitative studies in this area.

11.
Context: Software documents are core artifacts produced and consumed in the documentation activity of the software lifecycle. Meanwhile, knowledge-based approaches have been extensively used in software development for decades; however, the software engineering community lacks a comprehensive understanding of how knowledge-based approaches are used in software documentation, especially the documentation of software architecture design. Objective: The objective of this work is to explore how knowledge-based approaches are employed in software documentation, their influence on the quality of software documentation, and the costs and benefits of using these approaches. Method: We use a systematic literature review method to identify the primary studies on knowledge-based approaches in software documentation, following a pre-defined review protocol. Results: Sixty studies are finally selected, in which twelve quality attributes of software documents, four cost categories, and nine benefit categories of using knowledge-based approaches in software documentation are identified. Architecture understanding is the top benefit of using knowledge-based approaches in software documentation. The cost of retrieving information from documents is the major concern when using knowledge-based approaches in software documentation. Conclusions: The findings of this review suggest several future research directions that are critical and promising but underexplored in current research and practice: (1) using knowledge-based approaches to improve the quality attributes of software documents that receive less attention, especially credibility, conciseness, and unambiguity; (2) using knowledge-based approaches with the knowledge content in software documents that receives less attention in current applications, to further improve the practice of the software documentation activity; (3) putting more focus on the application of software documents using knowledge-based approaches (knowledge reuse, retrieval, reasoning, and sharing) in order to make the most use of software documents; and (4) evaluating the costs and benefits of using knowledge-based approaches in software documentation qualitatively and quantitatively.

12.
Although lean production (LP) is usually associated with complexity reduction, it has been increasingly applied in highly complex socio-technical systems (CSS) (e.g. healthcare), in which the complexity level cannot be reduced below a certain (high) threshold. This creates a paradoxical situation, which is not well understood in theory and may underlie the frustrating results of many lean implementations. This article presents a systematic literature review of how LP has dealt with complexity, both in theory and in practice, from a complexity science perspective. The review was based on 94 papers, which were analyzed according to seven criteria: how the concept of complexity is being used in lean research; the complexity level of the studied systems; the compatibility between the methodological approach and the nature of complexity; how complexity is managed by LP; barriers to LP in CSS; side-effects of LP in CSS; and whether complexity is always detrimental to LP. A research agenda is also proposed.

13.
Context: Technical debt is a software engineering metaphor referring to the eventual financial consequences of trade-offs between shrinking a product's time to market and poorly specifying or implementing the software product, throughout all development phases. Based on its interdisciplinary nature, i.e. software engineering and economics, research on managing technical debt should be balanced between software engineering and economic theories. Objective: The aim of this study is to analyze research efforts on technical debt, focusing on their financial aspect. Specifically, the analysis is carried out with respect to: (a) how financial aspects are defined in the context of technical debt and (b) how they relate to the underlying software engineering concepts. Method: In order to achieve the abovementioned goals, we employed a standard method for SLRs and applied it to studies retrieved from seven general-scope digital libraries. In total, we selected 69 studies relevant to the financial aspect of technical debt. Results: The most common financial terms used in technical debt research are principal and interest, whereas the financial approaches that have been most frequently applied to managing technical debt are real options, portfolio management, cost/benefit analysis and value-based analysis. However, the application of such approaches lacks consistency, i.e., the same approach is applied differently in different studies, and in some cases lacks a clear mapping between financial and software engineering concepts. Conclusion: The results are expected to prove beneficial for the communication between technical managers and project managers, in the sense that they will provide a common vocabulary and will help in setting up quality-related goals during software development. To achieve this we introduce: (a) a glossary of terms and (b) a classification scheme for the financial approaches used for managing technical debt. Based on these, we have been able to underline interesting implications for researchers and practitioners.
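The principal/interest vocabulary above lends itself to a simple cost/benefit check: repaying the debt costs the principal once, while keeping it accrues interest every iteration. The sketch below uses hypothetical effort figures and a deliberately simplified, non-compounding interest model; it is not a method proposed by the reviewed studies.

```python
def breakeven_iterations(principal: float, interest_per_iteration: float) -> float:
    """Number of iterations after which accumulated interest (extra maintenance
    effort per iteration) exceeds the one-off cost of repaying the principal
    (refactoring effort). Assumes constant, non-compounding interest."""
    if interest_per_iteration <= 0:
        return float("inf")  # debt that costs nothing never pays back
    return principal / interest_per_iteration

# Hypothetical figures: repaying the debt costs 12 person-days;
# carrying it costs an extra 1.5 person-days of maintenance per iteration.
iters = breakeven_iterations(principal=12.0, interest_per_iteration=1.5)
print(f"repaying pays off after ~{iters:.1f} iterations")  # ~8.0
```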

14.
Context: Business process modeling is an essential part of understanding and redesigning the activities that a typical enterprise uses to achieve its business goals. The quality of a business process model has a significant impact on the development of any enterprise and the IT support for that process. Objective: Since the insights on what constitutes modeling quality are constantly evolving, it is unclear whether research on business process modeling quality already covers all major aspects of modeling quality. Therefore, the objective of this research is to determine the state of the art on business process modeling quality: what aspects of process modeling quality have been addressed until now, and which gaps remain to be covered? Method: We performed a systematic literature review of peer-reviewed articles on business process modeling quality published between 2000 and August 2013. To analyze the contributions of the papers we use the Formal Concept Analysis technique. Results: We found 72 studies addressing quality aspects of business process models. These studies were classified into different dimensions: addressed model quality type, research goal, research method, and type of research result. Our findings suggest that there is no generally accepted framework of model quality types. Most research focuses on empirical and pragmatic quality aspects, specifically with respect to improving the understandability or readability of models. Among the various research methods, experimentation is the most popular one. The results from published research most often take the form of intangible knowledge. Conclusion: We believe there is a lack of an encompassing and generally accepted definition of business process modeling quality. This evidences the need for the development of a broader quality framework capable of dealing with the different aspects of business process modeling quality. Different dimensions of business process quality and of the process of modeling still require further research.

15.
16.
17.
Context: Semantically annotating web services is gaining more attention as an important aspect to support the automatic matchmaking and composition of web services. Therefore, the support of well-known and agreed ontologies, and of tools for the semantic annotation of web services, is becoming a key concern in helping the diffusion of semantic web services. Objective: The objective of this systematic literature review is to summarize the current state of the art in supporting the semantic annotation of web services by providing answers to a set of research questions. Method: The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, a total of 35 primary studies were identified as relevant. A manual search led to the identification of 9 additional primary studies that were not found during the automatic search of the digital libraries. Required information was extracted from these 44 studies against the selected research questions and finally reported. Results: Our systematic literature review identified some approaches available for semantically annotating functional and non-functional aspects of web services. However, many of the approaches are either not validated or the validation done lacks credibility. Conclusion: We believe that a substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services.

18.
The increasing tendency of network service users to use cloud computing encourages web service vendors to supply services that have different functional and non-functional (quality of service) features and to provide them in a service pool. Based on supply and demand rules, and because of the exuberant growth of the services on offer, cloud service brokers face tough competition against each other in providing quality of service enhancements. Such competition leads to a difficult and complicated process of service selection and composition when supplying composite services in the cloud, which should be considered an NP-hard problem. How to select appropriate services from the service pool, overcome composition restrictions, determine the importance of different quality of service parameters, focus on the dynamic characteristics of the problem, and address rapid changes in the properties of the services and the network appear to be among the most important issues that must be investigated and addressed. In this paper, utilizing a systematic literature review, important questions that can be raised about the research performed in addressing the above-mentioned problem have been extracted and put forth. Then, by dividing the research into four main groups based on the problem-solving approaches and identifying the investigated quality of service parameters, intended objectives, and development environments, beneficial results and statistics are obtained that can contribute to future research.
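A common baseline for the selection step described above is simple additive weighting over normalized quality-of-service attributes: each candidate service receives a utility score and the best-scoring candidate is picked per task. The sketch below is illustrative only; the service names, QoS values and weights are hypothetical, and composition constraints are ignored.

```python
# Simple additive weighting (SAW) over normalized QoS attributes.
# "cost" and "latency" are lower-is-better; "availability" is higher-is-better.
# Service names, values and weights below are hypothetical.

CANDIDATES = {
    "svcA": {"cost": 0.8, "latency": 120.0, "availability": 0.99},
    "svcB": {"cost": 0.5, "latency": 200.0, "availability": 0.95},
    "svcC": {"cost": 1.2, "latency": 80.0,  "availability": 0.999},
}
WEIGHTS = {"cost": 0.3, "latency": 0.3, "availability": 0.4}
LOWER_IS_BETTER = {"cost", "latency"}

def normalize(attr: str, value: float, column: list[float]) -> float:
    """Min-max normalize a QoS value to [0, 1], flipping lower-is-better attributes."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if attr in LOWER_IS_BETTER else score

def utility(name: str) -> float:
    """Weighted sum of normalized QoS attributes for one candidate service."""
    total = 0.0
    for attr, weight in WEIGHTS.items():
        column = [qos[attr] for qos in CANDIDATES.values()]
        total += weight * normalize(attr, CANDIDATES[name][attr], column)
    return total

best = max(CANDIDATES, key=utility)
print(best, {name: round(utility(name), 3) for name in CANDIDATES})
```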

19.
These days, endless streams of data are generated by various sources such as sensors, applications, and users. Due to possible issues in the sources, such as malfunctions in sensors, platforms, or communication, the generated data might be of low quality, and this can lead to wrong outcomes for the tasks that rely on these data streams. Therefore, controlling the quality of data streams has become increasingly significant. Many approaches have been proposed for controlling the quality of data streams, and hence, various research areas have emerged in this field. To the best of our knowledge, there is no systematic literature review of research papers within this field that comprehensively reviews approaches, classifies them, and highlights the challenges. In this paper, we present the state of the art in the area of quality control of data streams and characterize it along four dimensions. The first dimension represents the goal of the quality analysis, which can be either quality assessment or quality improvement. The second dimension focuses on the quality control method, which can be online, offline, or hybrid. The third dimension focuses on the quality control technique, and finally, the fourth dimension represents whether the quality control approach uses any contextual information (inherent, system, organizational, or spatiotemporal context) or not. We compare and critically review the related approaches proposed in the last two decades along these dimensions. We also discuss the open challenges and future research directions.
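As a concrete illustration of the online end of the design space described above, a stream quality monitor can attach simple rules (missing values, out-of-range readings, stale timestamps) to incoming readings and report a running quality score. The sketch below uses hypothetical sensor bounds and is not taken from any of the reviewed approaches.

```python
import time
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float | None
    timestamp: float  # seconds since epoch

# Hypothetical plausibility bounds for a temperature sensor.
VALUE_RANGE = (-40.0, 85.0)
MAX_AGE_SECONDS = 60.0

def check(reading: Reading, now: float) -> list[str]:
    """Return the list of quality rules this reading violates (empty list = OK)."""
    issues = []
    if reading.value is None:
        issues.append("missing value")
    elif not (VALUE_RANGE[0] <= reading.value <= VALUE_RANGE[1]):
        issues.append("out of range")
    if now - reading.timestamp > MAX_AGE_SECONDS:
        issues.append("stale")
    return issues

def quality_ratio(readings: list[Reading], now: float) -> float:
    """Share of readings that pass all checks -- a crude stream quality score."""
    if not readings:
        return 1.0
    ok = sum(1 for r in readings if not check(r, now))
    return ok / len(readings)

now = time.time()
batch = [
    Reading("t1", 21.5, now - 5),
    Reading("t1", None, now - 10),   # missing value
    Reading("t1", 140.0, now - 2),   # out of range
    Reading("t1", 22.0, now - 300),  # stale
]
print(f"quality = {quality_ratio(batch, now):.2f}")  # 0.25
```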

20.
Context: Software process simulation modelling (SPSM) captures the dynamic behaviour and uncertainty in the software process. The existing literature has conflicting claims about its practical usefulness: SPSM is useful and has an industrial impact; SPSM is useful but has no industrial impact yet; SPSM is not useful and has little potential for industry. Objective: To assess the conflicting standpoints on the usefulness of SPSM. Method: A systematic literature review was performed to identify, assess and aggregate empirical evidence on the usefulness of SPSM. Results: In the primary studies to date, the persistent trend is that of proof-of-concept applications of software process simulation for various purposes (e.g. estimation, training, process improvement, etc.). They score poorly on the stated quality criteria. Also, only a few studies report some initial evaluation of the simulation models for the intended purposes. Conclusion: There is a lack of conclusive evidence to substantiate the claimed usefulness of SPSM for any of the intended purposes. The few studies that report the cost of applying simulation do not support the claim that it is an inexpensive method. Furthermore, there is a paramount need for improvement in conducting and reporting simulation studies, with an emphasis on evaluation against the intended purpose.
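For readers unfamiliar with what such simulation models look like, the core idea can be shown with a toy Monte Carlo model: sample uncertain activity attributes many times and inspect the distribution of outcomes. All parameters below are made up for illustration; this is not a model from any of the primary studies.

```python
import random
import statistics

def simulate_release(rng: random.Random) -> float:
    """One Monte Carlo run of a toy development process: development effort is
    uncertain, and a fraction of the work comes back as rework after testing.
    All parameters are hypothetical."""
    dev_effort = rng.triangular(40.0, 90.0, 60.0)   # person-days (low, high, mode)
    defect_rate = rng.uniform(0.05, 0.25)           # share of work needing rework
    rework_effort = dev_effort * defect_rate * rng.uniform(1.0, 2.0)
    test_effort = 0.3 * dev_effort
    return dev_effort + test_effort + rework_effort

rng = random.Random(42)
totals = sorted(simulate_release(rng) for _ in range(10_000))
print(f"median effort: {statistics.median(totals):.1f} person-days")
print(f"80th percentile: {totals[int(0.8 * len(totals))]:.1f} person-days")
```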
