Similar Documents
A total of 20 similar documents were retrieved.
1.

Context

Service-Oriented Computing (SOC) is a promising computing paradigm that facilitates the development of adaptive and loosely coupled service-based applications (SBAs). Many of the technical challenges pertaining to the development of SBAs have been addressed; however, there are still outstanding questions relating to the processes required to develop them.

Objective

The objective of this study is to systematically identify process models for developing service-based applications (SBAs) and review the processes within them. This will provide a useful starting point for any further research in the area. A secondary objective of the study is to identify process models which facilitate the adaptation of SBAs.

Method

In order to achieve this objective a systematic literature review (SLR) of the existing software engineering literature is conducted.

Results

During this research, 722 studies were identified using a predefined search strategy; this number was narrowed down to 57 studies based on a set of strict inclusion and exclusion criteria. The results are reported both quantitatively, in the form of a mapping study, and qualitatively, in the form of a narrative summary of the key processes identified.

Conclusion

Many process models have been reported for the development of SBAs, varying in detail and maturity; this review has identified and categorised the processes within those process models. The review has also identified and evaluated process models which facilitate the adaptation of SBAs.

2.
The increasing competition among industries has driven the emergence of various tools and methods for maintenance decision-making support. This paper identifies in the literature the application areas of industrial maintenance decision-making, the relationships between these areas, and the ways in which authors integrate tools and methods. This information makes it possible to identify trends and deficiencies in this context, helping to centralize the efforts required for future work. This work follows a series of structured steps for a systematic literature review of papers related to the main topic available in online databases. The selected papers are subject to a content assessment and grouped according to the application areas. The direct comparison between these areas and the construction of a relational matrix provide a quantitative interpretation of the results and well-structured information. Additionally, this paper proposes a framework based on information from the literature, which summarizes the origin and flow of information used in the development of models, showing the relationship among application areas of decision making. The research undertaken identifies trends focused on joint production systems optimization and on increasing the deployment of methods for autonomous equipment predictions.

3.
Semantic web services are gaining more attention as an important element of the emerging semantic web. Therefore, testing semantic web services is becoming a key concern as an essential quality assurance measure. The objective of this systematic literature review is to summarize the current state of the art of functional testing of semantic web services by providing answers to a set of research questions. The review follows a predefined procedure that involves automatically searching 5 well-known digital libraries. After applying the selection criteria to the results, a total of 34 studies were identified as relevant. Required information was extracted from the studies and summarized. Our systematic literature review identified some approaches available for deriving test cases from the specifications of semantic web services. However, many of the approaches are either not validated or the validation done lacks credibility. We believe that a substantial amount of work remains to be done to improve the current state of research in the area of testing semantic web services.

4.
5.

Context

Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped.

Objective

This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research.

Method

A systematic mapping study is performed to find as much literature as possible, and the 64 papers found are classified with respect to focus, research type, and contribution type.

Results

A majority of the papers are of the proposal research type (64%). System testing is the largest group with respect to research focus (40%), followed by management (23%). Method contributions are in the majority.

Conclusions

More validation and evaluation research is needed to provide a better foundation for SPL testing.

6.
Context

Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code.

Objective

This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software.

Method

We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software.

Results

We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them.

Conclusions

Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
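
The oracle problem named above is often worked around with metamorphic testing, where a known relation between inputs and outputs replaces an exact expected result. The following is a minimal, hypothetical sketch in Python (not taken from the study above): a toy trapezoid-rule integrator is checked against the additivity relation integral(a,b) + integral(b,c) ≈ integral(a,c) instead of against exact reference values.

import math
import random

def trapezoid(f, a, b, n=1000):
    """Toy numerical integrator under test (composite trapezoid rule)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def metamorphic_additivity_check(f, a, b, c, tol=1e-4):
    """Metamorphic relation: the integral over [a, c] equals the sum of
    the integrals over [a, b] and [b, c]; no exact oracle is needed."""
    lhs = trapezoid(f, a, b) + trapezoid(f, b, c)
    rhs = trapezoid(f, a, c)
    return abs(lhs - rhs) <= tol * max(1.0, abs(rhs))

if __name__ == "__main__":
    random.seed(0)
    for _ in range(100):
        a = random.uniform(-2.0, 0.0)
        c = random.uniform(0.0, 2.0)
        b = random.uniform(a, c)
        assert metamorphic_additivity_check(math.sin, a, b, c), \
            "metamorphic relation violated"
    print("all metamorphic checks passed")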

7.
8.
Context

Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research.

Objective

To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process.

Method

We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools.

Results

We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult.

Conclusion

We recommend removing advice to use structured questions to construct search strings and including advice to use a quasi-gold standard, based on a limited manual search, to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.
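
The quasi-gold standard mentioned in the conclusion is commonly used to compute a quasi-sensitivity for an automated search: the share of studies found by the limited manual search that the automated search string also retrieves. A minimal sketch of that calculation; the study identifiers are invented for illustration.

def quasi_sensitivity(automated_results, quasi_gold_standard):
    """Quasi-sensitivity = |automated ∩ QGS| / |QGS|,
    where the QGS comes from a limited manual search."""
    qgs = set(quasi_gold_standard)
    if not qgs:
        raise ValueError("quasi-gold standard must not be empty")
    found = qgs & set(automated_results)
    return len(found) / len(qgs)

# Hypothetical study identifiers, for illustration only.
qgs = {"S01", "S02", "S03", "S04", "S05"}
automated = {"S01", "S02", "S04", "S09", "S12", "S17"}

print(f"quasi-sensitivity: {quasi_sensitivity(automated, qgs):.2f}")  # 0.60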

9.
Context

Organizations working in software development are aware that processes are very important assets, and they are very conscious of the need to deploy well-defined processes with the goal of improving software product development and, particularly, quality. Software process modeling languages are an important support for describing and managing software processes in software-intensive organizations.

Objective

This paper seeks to identify which software process modeling languages have been defined in the last decade, the relationships and dependencies among them and, starting from the current state, to define directions for future research.

Method

A systematic literature review was conducted. 1929 papers were retrieved by a manual search in 9 databases, and 46 primary studies were finally included.

Results

Since 2000, more than 40 languages have been reported for the first time, each with a concrete purpose. We show that different base technologies have been used to define software process modeling languages. We provide a scheme where each language is registered together with the year it was created, the base technology used to define it, and whether it is considered a starting point for later languages. This scheme is used to illustrate the trend in software process modeling languages. Finally, we present directions for future research.

Conclusion

This review presents the different software process modeling languages that have been developed in the last ten years, showing the relevant fact that model-based SPMLs (Software Process Modeling Languages) are a current trend. Each of these languages has been designed with a particular motivation, to solve problems which had been detected. However, there are still several problems to face, which have become evident in this review. This allows us to provide researchers with some guidelines for future research on this topic.

10.
Context

Identifying refactoring opportunities in object-oriented code is an important stage that precedes the actual refactoring process. Several techniques have been proposed in the literature to identify opportunities for various refactoring activities.

Objective

This paper provides a systematic literature review of existing studies identifying opportunities for code refactoring activities.

Method

We performed an automatic search of the relevant digital libraries for potentially relevant studies published through the end of 2013, performed pilot and author-based searches, and selected 47 primary studies (PSs) based on inclusion and exclusion criteria. The PSs were analyzed based on a number of criteria, including the refactoring activities, the approaches to refactoring opportunity identification, the empirical evaluation approaches, and the data sets used.

Results

The results indicate that research in the area of identifying refactoring opportunities is highly active. Most of the studies have been performed by academic researchers using nonindustrial data sets. Extract Class and Move Method were found to be the most frequently considered refactoring activities. The results show that researchers use six primary existing approaches to identify refactoring opportunities and six approaches to empirically evaluate the identification techniques. Most of the systems used in the evaluation process were open-source, which helps to make the studies repeatable. However, a relatively high percentage of the data sets used in the empirical evaluations were small, which limits the generality of the results.

Conclusions

It would be beneficial to perform further studies that consider more refactoring activities, involve researchers from industry, and use large-scale and industrial-based systems.
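
As a purely illustrative aside (not one of the six approaches identified in the review), Move Method opportunities are often explained with a feature-envy style heuristic: a method that accesses members of another class more often than members of its own class is a candidate for moving. A minimal sketch over pre-extracted, hypothetical access counts:

from typing import Dict

def move_method_candidates(access_counts: Dict[str, Dict[str, int]],
                           owner: Dict[str, str]):
    """Flag methods whose accesses to some foreign class strictly exceed
    accesses to their own class (a toy feature-envy heuristic).

    access_counts: method -> {class name -> number of member accesses}
    owner:         method -> class that currently declares it
    """
    candidates = []
    for method, counts in access_counts.items():
        own = counts.get(owner[method], 0)
        for cls, n in counts.items():
            if cls != owner[method] and n > own:
                candidates.append((method, owner[method], cls))
    return candidates

# Hypothetical access data, for illustration only.
accesses = {
    "Order.compute_shipping": {"Order": 1, "Customer": 4},
    "Order.total_price":      {"Order": 5, "Customer": 1},
}
owners = {m: m.split(".")[0] for m in accesses}

print(move_method_candidates(accesses, owners))
# [('Order.compute_shipping', 'Order', 'Customer')]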

11.
With the advancement of Web 2.0 and the development of the Internet of Things (IoT), many tasks can now be handled with the help of handheld devices. Web APIs or web services provide immense power to the IoT and act as a backbone in its successful journey. Web services can perform tasks with a single click, and they are available over the internet in great quantity, quality, and variety. This leads to a need for service management in the service repository. Building a well-managed and structured service repository is still challenging, as services are dynamic and documentation is limited. It is also not trivial to discover, select, and recommend services from a pool of services. Web service clustering (WSC) plays a vital role in enhancing the service discovery, selection, and recommendation process by analyzing the similarity among services. In this paper, a total of 84 research papers are selected through a systematic process, and different state-of-the-art techniques based on web service clustering are investigated and analyzed. Furthermore, this Systematic Literature Review (SLR) also presents the various mandatory and optional steps of WSC, evaluation measures, and datasets. Research challenges and future directions are also identified, which will help researchers to provide innovative solutions in this area.
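
A frequently used baseline in the WSC literature clusters services by the textual similarity of their descriptions; the exact pipeline differs from paper to paper, so the following is only a minimal sketch, assuming scikit-learn and invented service descriptions (TF-IDF vectors followed by k-means):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical service descriptions, for illustration only.
services = {
    "WeatherByCity":   "returns current weather forecast for a city",
    "ForecastDaily":   "seven day weather forecast by location",
    "CurrencyConvert": "convert an amount between two currencies",
    "ExchangeRates":   "latest currency exchange rates by country",
}

# Represent each service description as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(services.values())

# Group similar services; k is fixed here, but would normally be tuned.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for name, label in zip(services, kmeans.labels_):
    print(f"cluster {label}: {name}")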

12.
Context

Business process modeling is an essential part of understanding and redesigning the activities that a typical enterprise uses to achieve its business goals. The quality of a business process model has a significant impact on the development of any enterprise and IT support for that process.

Objective

Since the insights on what constitutes modeling quality are constantly evolving, it is unclear whether research on business process modeling quality already covers all major aspects of modeling quality. Therefore, the objective of this research is to determine the state of the art on business process modeling quality: What aspects of process modeling quality have been addressed until now and which gaps remain to be covered?

Method

We performed a systematic literature review of peer reviewed articles as published between 2000 and August 2013 on business process modeling quality. To analyze the contributions of the papers we use the Formal Concept Analysis technique.

Results

We found 72 studies addressing quality aspects of business process models. These studies were classified into different dimensions: addressed model quality type, research goal, research method, and type of research result. Our findings suggest that there is no generally accepted framework of model quality types. Most research focuses on empirical and pragmatic quality aspects, specifically with respect to improving the understandability or readability of models. Among the various research methods, experimentation is the most popular one. The results from published research most often take the form of intangible knowledge.

Conclusion

We believe there is a lack of an encompassing and generally accepted definition of business process modeling quality. This evidences the need for the development of a broader quality framework capable of dealing with the different aspects of business process modeling quality. Different dimensions of business process quality and of the process of modeling still require further research.
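
Formal Concept Analysis, the technique the authors use to analyze the papers' contributions, derives "concepts" as maximal pairs of objects and shared attributes from a binary context. A brute-force toy sketch, adequate only for very small contexts; the papers and quality aspects below are invented:

from itertools import combinations

# Hypothetical formal context: paper -> addressed quality aspects.
context = {
    "P1": {"understandability", "empirical"},
    "P2": {"understandability", "pragmatic"},
    "P3": {"pragmatic", "empirical"},
}

objects = sorted(context)

def common_attributes(objs):
    """Attributes shared by all objects in objs (the 'prime' operator)."""
    if not objs:
        return set.union(*context.values())
    return set.intersection(*(context[o] for o in objs))

def objects_having(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# Every formal concept arises as (A'', A') for some object subset A.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attributes(set(objs))
        extent = objects_having(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))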

13.
Context

Technical debt is a software engineering metaphor, referring to the eventual financial consequences of trade-offs between shortening a product's time to market and poorly specifying or implementing the software, throughout all development phases. Given its interdisciplinary nature, i.e. spanning software engineering and economics, research on managing technical debt should be balanced between software engineering and economic theories.

Objective

The aim of this study is to analyze research efforts on technical debt, focusing on their financial aspect. Specifically, the analysis is carried out with respect to: (a) how financial aspects are defined in the context of technical debt and (b) how they relate to the underlying software engineering concepts.

Method

In order to achieve the abovementioned goals, we employed a standard method for SLRs and applied it to studies retrieved from seven general-scope digital libraries. In total we selected 69 studies relevant to the financial aspect of technical debt.

Results

The most common financial terms used in technical debt research are principal and interest, whereas the financial approaches that have been most frequently applied for managing technical debt are real options, portfolio management, cost/benefit analysis and value-based analysis. However, the application of such approaches lacks consistency, i.e., the same approach is applied differently in different studies, and in some cases lacks a clear mapping between financial and software engineering concepts.

Conclusion

The results are expected to prove beneficial for the communication between technical managers and project managers, in the sense that they will provide a common vocabulary and will help in setting up quality-related goals during software development. To achieve this, we introduce: (a) a glossary of terms and (b) a classification scheme for financial approaches used for managing technical debt. Based on these, we have been able to underline interesting implications for researchers and practitioners.
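
The principal/interest metaphor discussed above is often operationalised as a one-off remediation cost (principal) versus a recurring extra maintenance cost (interest) paid for every period the debt is kept. The undiscounted break-even sketch below, with invented numbers and a deliberately simple model, only illustrates that reading; it is not a model from the reviewed studies.

from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal: float             # one-off cost to refactor (person-hours)
    interest_per_release: float  # recurring extra effort while the debt remains

def break_even_releases(item: DebtItem) -> float:
    """Number of releases after which repaying the principal
    costs less than continuing to pay interest (undiscounted)."""
    if item.interest_per_release <= 0:
        return float("inf")  # debt never 'bites'; repaying may not pay off
    return item.principal / item.interest_per_release

# Hypothetical debt items, for illustration only.
items = [
    DebtItem("duplicated pricing logic", principal=40, interest_per_release=12),
    DebtItem("outdated build scripts",   principal=80, interest_per_release=3),
]

planned_releases = 6
for item in items:
    decision = "repay now" if break_even_releases(item) < planned_releases else "defer"
    print(f"{item.name}: break-even at {break_even_releases(item):.1f} releases -> {decision}")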

14.
Although lean production (LP) is usually associated with complexity reduction, it has been increasingly applied in highly complex socio-technical systems (CSS) (e.g. healthcare), in which the complexity level cannot be reduced below a certain (high) threshold. This creates a paradoxical situation, which is not well understood in theory and may underlie the frustrating results of many lean implementations. This article presents a systematic literature review of how LP has dealt with complexity, both in theory and in practice, from a complexity science perspective. The review was based on 94 papers, which were analyzed according to seven criteria: how the concept of complexity is being used in lean research; the complexity level of the studied systems; the compatibility between the methodological approach and the nature of complexity; how complexity is managed by LP; barriers to LP in CSS; side-effects of LP in CSS; and whether complexity is always detrimental to LP. A research agenda is also proposed.

15.
Context

Software documents are core artifacts produced and consumed in the documentation activity of the software lifecycle. Knowledge-based approaches have been used extensively in software development for decades; however, the software engineering community lacks a comprehensive understanding of how knowledge-based approaches are used in software documentation, especially the documentation of software architecture design.

Objective

The objective of this work is to explore how knowledge-based approaches are employed in software documentation, their influence on the quality of software documentation, and the costs and benefits of using these approaches.

Method

We use a systematic literature review method to identify the primary studies on knowledge-based approaches in software documentation, following a pre-defined review protocol.

Results

Sixty studies are finally selected, in which twelve quality attributes of software documents, four cost categories, and nine benefit categories of using knowledge-based approaches in software documentation are identified. Architecture understanding is the top benefit of using knowledge-based approaches in software documentation. The cost of retrieving information from documents is the major concern when using knowledge-based approaches in software documentation.

Conclusions

The findings of this review suggest several future research directions that are critical and promising but underexplored in current research and practice: (1) there is a need to use knowledge-based approaches to improve the quality attributes of software documents that receive less attention, especially credibility, conciseness, and unambiguity; (2) using knowledge-based approaches with the knowledge content in software documents that receives less attention in current applications, to further improve the practice of the software documentation activity; (3) putting more focus on the application of software documents using knowledge-based approaches (knowledge reuse, retrieval, reasoning, and sharing) in order to make the most use of software documents; and (4) evaluating the costs and benefits of using knowledge-based approaches in software documentation both qualitatively and quantitatively.

16.
17.
As Building Information Modeling (BIM) workflows are becoming very relevant for the different stages of the project’s lifecycle, more data is produced and managed across it. The information and data accumulated in BIM-based projects present an opportunity for analysis and extraction of project knowledge from the inception to the operation phase. In other industries, Machine Learning (ML) has been demonstrated to be an effective approach to automate processes and extract useful insights from different types and sources of data. The rapid development of ML applications, the growing generation of BIM-related data in projects, and the different needs for use of this data present serious challenges to adopting and effectively applying ML techniques to BIM-based projects in the Architecture, Engineering, Construction and Operations (AECO) industry. While research on the use of BIM data through ML has increased in the past decade, it is still in a nascent stage. In order to assess where the industry stands today, this paper carries out a systematic literature review (SLR) identifying and summarizing common emerging areas of application and utilization of ML within the context of BIM-generated data. Moreover, the paper identifies research gaps and trends. Based on the observed limitations, prominent future research directions are suggested, focusing on information architecture and data, application scalability, and human-information interactions.
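
As a purely illustrative aside, many ML applications on BIM-generated data reduce to supervised learning over tabular attributes exported from a model. The features, the label, and the scikit-learn pipeline below are assumptions made for this sketch, not findings of the review:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical table exported from a BIM model: one row per asset.
df = pd.DataFrame({
    "age_years":         [2, 15, 7, 20, 1, 12, 9, 18],
    "operating_hours":   [3_000, 40_000, 12_000, 55_000, 800, 30_000, 20_000, 50_000],
    "num_work_orders":   [0, 6, 2, 9, 0, 4, 3, 8],
    "needs_maintenance": [0, 1, 0, 1, 0, 1, 0, 1],   # assumed label
})

X = df.drop(columns="needs_maintenance")
y = df["needs_maintenance"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train a simple classifier and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))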

18.
Context

Semantically annotating web services is gaining more attention as an important aspect to support the automatic matchmaking and composition of web services. Therefore, the support of well-known and agreed ontologies and tools for the semantical annotation of web services is becoming a key concern to help the diffusion of semantic web services.

Objective

The objective of this systematic literature review is to summarize the current state of the art for supporting the semantical annotation of web services by providing answers to a set of research questions.

Method

The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, a total of 35 primary studies were identified as relevant. A manual search led to the identification of 9 additional primary studies that were not reported during the automatic search of the digital libraries. Required information was extracted from these 44 studies against the selected research questions and finally reported.

Results

Our systematic literature review identified some approaches available for semantically annotating functional and non-functional aspects of web services. However, many of the approaches are either not validated or the validation done lacks credibility.

Conclusion

We believe that a substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services.

19.
Context

Software process simulation modelling (SPSM) captures the dynamic behaviour and uncertainty in the software process. Existing literature has conflicting claims about its practical usefulness: SPSM is useful and has an industrial impact; SPSM is useful and has no industrial impact yet; SPSM is not useful and has little potential for industry.

Objective

To assess the conflicting standpoints on the usefulness of SPSM.

Method

A systematic literature review was performed to identify, assess and aggregate empirical evidence on the usefulness of SPSM.

Results

In the primary studies, to date, the persistent trend is that of proof-of-concept applications of software process simulation for various purposes (e.g. estimation, training, process improvement, etc.). They score poorly on the stated quality criteria. Also, only a few studies report some initial evaluation of the simulation models for the intended purposes.

Conclusion

There is a lack of conclusive evidence to substantiate the claimed usefulness of SPSM for any of the intended purposes. A few studies that report the cost of applying simulation do not support the claim that it is an inexpensive method. Furthermore, there is a paramount need for improvement in conducting and reporting simulation studies with an emphasis on evaluation against the intended purpose.
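
Process simulation models of the kind reviewed here are often Monte Carlo or discrete-event models of effort, defects, and rework. The toy model below, with assumed parameters, is only a sketch of that flavour of simulation, not any model from the primary studies:

import random

def simulate_project(num_tasks=50, rework_prob=0.3, seed=None):
    """Toy Monte Carlo process model: each task takes a random effort,
    and with some probability needs one round of rework."""
    rng = random.Random(seed)
    total_effort = 0.0
    reworked = 0
    for _ in range(num_tasks):
        effort = rng.uniform(4, 16)          # nominal effort in person-hours
        total_effort += effort
        if rng.random() < rework_prob:       # defect found downstream
            total_effort += 0.5 * effort     # assumed rework cost
            reworked += 1
    return total_effort, reworked

# Repeat the simulation to estimate the effort distribution.
runs = [simulate_project(seed=i) for i in range(1000)]
efforts = [e for e, _ in runs]
print(f"mean effort: {sum(efforts) / len(efforts):.0f} person-hours")
print(f"90th percentile: {sorted(efforts)[int(0.9 * len(efforts))]:.0f} person-hours")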

20.
These days, endless streams of data are generated by various sources such as sensors, applications, users, etc. Due to possible issues in the sources, such as malfunctions in sensors, platforms, or communication, the generated data might be of low quality, and this can lead to wrong outcomes for the tasks that rely on these data streams. Therefore, controlling the quality of data streams has become increasingly significant. Many approaches have been proposed for controlling the quality of data streams, and hence, various research areas have emerged in this field. To the best of our knowledge, there is no systematic literature review of research papers within this field that comprehensively reviews approaches, classifies them, and highlights the challenges. In this paper, we present the state of the art in the area of quality control of data streams, and characterize it along four dimensions. The first dimension represents the goal of the quality analysis, which can be either quality assessment or quality improvement. The second dimension focuses on the quality control method, which can be online, offline, or hybrid. The third dimension focuses on the quality control technique, and finally, the fourth dimension represents whether the quality control approach uses any contextual information (inherent, system, organizational, or spatiotemporal context) or not. We compare and critically review the related approaches proposed in the last two decades along these dimensions. We also discuss the open challenges and future research directions.
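
Online quality assessment of the kind surveyed above is often explained with simple per-window metrics such as completeness (share of non-missing readings) and validity (share of readings inside an expected range). A minimal sketch with hypothetical sensor readings and thresholds:

from collections import deque

class StreamQualityMonitor:
    """Sliding-window completeness and validity checks for one sensor stream."""

    def __init__(self, window_size=10, valid_range=(-40.0, 60.0)):
        self.window = deque(maxlen=window_size)
        self.low, self.high = valid_range

    def push(self, reading):
        """reading is a float, or None when the value is missing."""
        self.window.append(reading)

    def metrics(self):
        n = len(self.window)
        present = [r for r in self.window if r is not None]
        valid = [r for r in present if self.low <= r <= self.high]
        return {
            "completeness": len(present) / n if n else 1.0,
            "validity": len(valid) / len(present) if present else 1.0,
        }

# Hypothetical temperature readings (None = dropped message, 999.0 = sensor glitch).
monitor = StreamQualityMonitor(window_size=5)
for reading in [21.5, 21.7, None, 999.0, 22.0]:
    monitor.push(reading)

print(monitor.metrics())  # {'completeness': 0.8, 'validity': 0.75}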
