Similar Documents (20 results)
1.

Context

Cost advantage has been one of the primary drivers of successful offshoring engagements for Indian software and services companies. However, the emphasis has shifted from cost advantage to the vendors' ability to deliver high-quality software products and services. Meeting clients' high quality requirements is a challenge due to the very nature of developing and delivering software through offshoring.

Objective

The objective of this research paper is to identify and evaluate the key determinants of quality for software projects delivered through the offshoring model.

Method

A detailed survey was conducted among project managers and project leads of a leading midsize Indian IT services company to evaluate the relationship between the determinants and the attributes of quality.

Results

Of the six determinants, our research reveals that requirements uncertainty has a significant association with all the attributes of quality. Process maturity and trained personnel have a moderate association, while communication and control, knowledge transfer and integration, and technical infrastructure have a relatively low association with software quality attributes in the case of offshoring.

Conclusion

It is concluded that the complexities of offshoring necessitate proper capture of requirements. In addition, a high level of process maturity and the availability of trained personnel will help vendors achieve software quality. The paper provides a set of implications for practice and directions for further research.

2.

Aim

In this article, the factors influencing the motivation of software engineers are studied with the goal of guiding the definition of motivational programs.

Method

Using a set of 20 motivational factors compiled in a systematic literature review and a general theory of motivation, a survey questionnaire was created to evaluate the influence of these factors on individual motivation. Then, the questionnaire was applied to a semi-random sample of 176 software engineers from 20 software companies located in Recife-PE, Brazil.

Results

The survey results show the actual level of motivation for each motivator in the target population. Using principal component analysis on the values of all motivators, a five-factor structure was identified and used to propose a guideline for the creation of motivational programs for software engineers.

Conclusions

The five-factor structure provides an intuitive categorization for the set of variables and can be used to explain other motivational models presented in the literature. This contributes to a better understanding of motivation in software engineering.
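As a rough sketch of the analysis step described in item 2, the following Python fragment extracts a five-component structure from Likert-style survey responses with principal component analysis; the 176 x 20 response matrix is randomly generated stand-in data, not the study's dataset.

# Hedged sketch: deriving a five-factor structure from motivator ratings with PCA.
# The response matrix is hypothetical stand-in data for the survey described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(176, 20)).astype(float)  # Likert answers, 20 motivators

scaled = StandardScaler().fit_transform(responses)
pca = PCA(n_components=5)          # retain five components, as in the study
scores = pca.fit_transform(scaled)

print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
# Loadings show how strongly each motivator contributes to each component.
loadings = pca.components_.T       # shape: (20 motivators, 5 components)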

3.

Context

Model-driven approaches deal with the provision of models, transformations between them, and code generators to address software development. This approach has the advantage of defining a conceptual structure, where the models used by business managers and analysts can be mapped into more detailed models used by software developers. This alignment between high-level business specifications and lower-level information technology (IT) models is crucial to the field of service-oriented development, where meaningful business services and process specifications are those relevant to real business scenarios.

Objective

This paper presents a model-driven approach which, starting from high-level computational-independent business models (CIMs) - the business view - sets out guidelines for obtaining lower-level platform-independent behavioural models (PIMs) - the information system view. A key advantage of our approach is the use of real high-level business models, not just requirements models, which, by means of model transformations, helps software developers to make the most of the business knowledge for specifying and developing business services.

Method

This proposal is framed in a method for service-oriented development of information systems whose main characteristic is the use of services as first-class objects. The method follows an MDA-based approach, proposing a set of models at different levels of abstraction and model transformations to connect them.

Results

The paper presents the complete set of CIM and PIM metamodels and the specification of the mappings between them, whose clear advantage is the support for alignment between the high-level business view and IT. The proposed model-driven process is being implemented in an MDA tool; a first prototype has been used to develop a travel agency case study that illustrates the proposal.

Conclusion

This study shows how a model-driven approach helps to solve the alignment problem between the business view and the information system view that arises when adopting service-oriented approaches for software development.

4.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.
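To make the idea of an ML-based SDEE model concrete, here is a minimal analogy-based (k-nearest-neighbours) estimator, one of the technique families reviews like this cover; all project features and effort values below are hypothetical.

# Hedged sketch: analogy-based effort estimation with k-nearest neighbours.
# The historical project data is invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# columns: size (KLOC), team experience (years), complexity (1-5)
X = np.array([[10, 3, 2], [25, 5, 3], [40, 2, 4], [60, 4, 5],
              [15, 6, 2], [80, 3, 5], [30, 4, 3], [55, 5, 4]], dtype=float)
effort = np.array([12, 30, 65, 100, 15, 140, 38, 85], dtype=float)  # person-months

model = KNeighborsRegressor(n_neighbors=3).fit(X, effort)
predicted = model.predict([[35, 4, 3]])[0]  # estimate for a new project

# MMRE (mean magnitude of relative error), a common accuracy measure in SDEE studies
mmre = float(np.mean(np.abs(model.predict(X) - effort) / effort))
print(f"predicted effort: {predicted:.1f} person-months, training MMRE: {mmre:.2f}")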

5.

Context

In order to ensure the high quality of a process model repository, refactoring operations can be applied to correct anti-patterns, such as overlap of process models, inconsistent labeling of activities, and overly complex models. However, if a process model collection is created and maintained by different people over a long period of time, manual detection of such refactoring opportunities becomes difficult, simply due to the number of processes in the repository. Consequently, there is a need for techniques to detect refactoring opportunities automatically.

Objective

This paper proposes a technique for automatically detecting refactoring opportunities.

Method

We developed the technique based on metrics that measure the consistency of activity labels as well as the extent to which processes overlap and the type of overlap that they have. We evaluated it by applying it to two large process model repositories.

Results

The evaluation shows that the technique can be used to pinpoint the approximate location of three types of refactoring opportunities with high precision and recall and of one type of refactoring opportunity with high recall, but low precision.

Conclusion

We conclude that the technique presented in this paper can be used in practice to automatically detect a number of anti-patterns that can be corrected by refactoring.
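As a toy illustration of the label-consistency ingredient, the sketch below flags near-duplicate activity labels via pairwise string similarity; the labels and the cut-off are hypothetical, and the paper defines its own consistency and overlap metrics.

# Hedged sketch: detecting inconsistently labeled activities by string similarity.
# Labels and the 0.55 cut-off are hypothetical illustrations.
from difflib import SequenceMatcher
from itertools import combinations

labels = ["Approve invoice", "Invoice approval", "Approve the invoice",
          "Ship goods", "Goods shipment"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for a, b in combinations(labels, 2):
    score = similarity(a, b)
    if 0.55 <= score < 1.0:  # near-duplicates suggest a renaming refactoring
        print(f"refactoring candidate: {a!r} ~ {b!r} (similarity {score:.2f})")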

6.

Context

Customer collaboration is a vital feature of Agile software development.

Objective

This article addresses the importance of adequate customer involvement on Agile projects, and the impact of different levels of customer involvement on real-life Agile projects.

Method

We conducted a Grounded Theory study involving 30 Agile practitioners from 16 software development organizations in New Zealand and India, over a period of 3 years.

Results

We discovered that Lack of Customer Involvement was one of the biggest challenges faced by Agile teams. Customers were not as involved on these Agile projects as Agile methods demand. We describe the causes of inadequate customer collaboration, its adverse consequences on self-organizing Agile teams, and Agile Undercover — a set of strategies used by the teams to practice Agile despite insufficient or ineffective customer involvement.

Conclusion

Customer involvement is important on Agile projects. Inadequate customer involvement causes adverse problems for Agile teams. The Agile Undercover strategies we've identified can assist Agile teams facing a similar lack of customer involvement.

7.

Context

Studying work practices in the context of Global Software Development (GSD) projects entails multiple opportunities and challenges for researchers. Understanding and tackling these challenges requires a careful and rigorous application of research methods.

Objective

We want to contribute to the understanding of the challenges of studying GSD by reflecting on several obstacles we had to deal with when conducting ethnographically-informed research on offshoring in German small to medium-sized enterprises.

Method

The material for this paper is based on reflections and field notes from two research projects: an exploratory ethnographic field study, and a study that was framed as a Business Ethnography. For the analysis, we took a Grounded Theory-oriented coding and analysis approach in order to identify issues and challenges documented in our research notes.

Results

We introduce the concept of Business Ethnography and discuss our experiences of adapting and implementing this action research concept for our study. We identify and discuss three primary issues: understanding complex global work practices from a local perspective, adapting to changing interests of the participants, and dealing with micro-political frictions between the cooperating sites.

Conclusions

We identify common interests between the researchers and the companies as both a challenge and an opportunity for studies on offshoring. Building on our experiences from the field, we argue for an active conceptualization of struggles and conflicts in the field, as well as for extending the role of the ethnographer to that of a learning mediator.

8.

Introduction

Maintaining a large diagnostic knowledge base (KB) is a demanding task for any person or organization. Future clinical decision support systems (CDSSs) may rely on multiple, smaller, and more focused KBs that are developed and maintained at different locations and that work together seamlessly. A cross-domain inference tool has great clinical import and utility.

Methods

We developed a modified multi-membership Bayes formulation to facilitate cross-domain probabilistic inference among KBs with overlapping diseases. Two KBs were developed for evaluation: non-infectious generalized blistering diseases (GBD) and autoimmune diseases (AID). After the KBs were finalized, they were evaluated separately for validity.

Results

Ten cases from medical journal case reports were used to evaluate this “cross-domain” inference across the two KBs. The resultant non-error rate (NER) was 90%, and the average of probabilities assigned to the correct diagnosis (AVP) was 64.8% for cross-domain consultations.

Conclusion

A novel formulation is now available to deal with problems occurring in a clinical diagnostic decision support system with multi-domain KBs. The use of this formulation will help in the development of more integrated KBs with more focused knowledge domains.
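For intuition about inference across overlapping KBs, here is a deliberately simplified plain naive Bayes sketch that merges finding likelihoods from two tiny hypothetical KBs; it illustrates the setting only and is not the paper's modified multi-membership formulation.

# Hedged sketch: plain naive Bayes over two overlapping hypothetical KBs.
# Disease priors and P(finding | disease) values are invented for illustration.
import math

kb_gbd = {"pemphigus":  {"blisters": 0.9, "oral_lesions": 0.7},
          "pemphigoid": {"blisters": 0.8, "oral_lesions": 0.2}}
kb_aid = {"pemphigus":  {"autoantibodies": 0.85},
          "lupus":      {"autoantibodies": 0.9, "oral_lesions": 0.4}}

priors = {"pemphigus": 0.2, "pemphigoid": 0.4, "lupus": 0.4}
findings = ["blisters", "oral_lesions", "autoantibodies"]

def likelihood(disease, finding):
    # Take the finding's probability from whichever KB knows it;
    # fall back to a small default when neither KB covers it.
    for kb in (kb_gbd, kb_aid):
        if finding in kb.get(disease, {}):
            return kb[disease][finding]
    return 0.05

log_scores = {d: math.log(p) + sum(math.log(likelihood(d, f)) for f in findings)
              for d, p in priors.items()}
total = sum(math.exp(s) for s in log_scores.values())
for d in sorted(log_scores, key=log_scores.get, reverse=True):
    print(f"P({d} | findings) ~ {math.exp(log_scores[d]) / total:.2f}")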

9.

Context

Agile software development, with its emphasis on producing working code through frequent releases, extensive client interactions, and iterative development, has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive its adoption and use.

Objective

We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology.

Method

Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses.

Results

Our analyses reveal that subjective norm and training play a significant role in influencing software developers’ use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices.

Conclusion

We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change.

10.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies that examine different types of problems and different levels of programmer experience are necessary.
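For reference, the task both groups implemented, sparse-matrix dense-vector multiplication, looks as follows in sequential form using the compressed sparse row (CSR) layout; the subjects wrote parallel MPI and XMTC versions, so this Python sketch shows only the underlying algorithm.

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# CSR encoding of A = [[4, 0, 1],
#                      [0, 2, 0],
#                      [3, 0, 5]] -- only the nonzeros are stored.
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]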

11.

Context

To guarantee the success of Business Process Modelling (BPM) it is necessary to check whether the activities and tasks described by Business Processes (BPs) are sound and well coordinated.

Objective

This article describes and validates a Formal Compositional Verification Approach (FCVA) that uses a Model-Checking (MC) technique to specify and verify BPs.

Method

This is performed using the Communicating Sequential Processes + Time (CSP+T) process calculus, which adds new constructs to timed Business Process Model and Notation (BPMN) modelling entities for non-functional requirement specification.

Results

Using our proposal we are able to specify the BP Task Model (BPTM) associated with BPs by formalising the timed BPMN notational elements. The proposal also allows us to apply MC to BPTM verification. A real-life example of verifying a BPTM in the field of Customer Relationship Management (CRM) is discussed as a practical application of FCVA.

Conclusion

This approach facilitates the verification of complex BPs from independently verified local processes, and establishes a feasible way to use process calculi to verify BPs using state-of-the-art MC tools.

12.

Context

Service-Oriented Computing (SOC) is a promising computing paradigm that facilitates the development of adaptive and loosely coupled service-based applications (SBAs). Many of the technical challenges pertaining to the development of SBAs have been addressed; however, there are still outstanding questions relating to the processes required to develop them.

Objective

The objective of this study is to systematically identify process models for developing service-based applications (SBAs) and review the processes within them. This will provide a useful starting point for any further research in the area. A secondary objective of the study is to identify process models which facilitate the adaptation of SBAs.

Method

In order to achieve this objective, a systematic literature review (SLR) of the existing software engineering literature is conducted.

Results

During this research, 722 studies were identified using a predefined search strategy; this number was narrowed down to 57 studies based on a set of strict inclusion and exclusion criteria. The results are reported both quantitatively, in the form of a mapping study, and qualitatively, in the form of a narrative summary of the key processes identified.

Conclusion

Many process models for the development of SBAs have been reported, varying in detail and maturity; this review has identified and categorised the processes within those process models. The review has also identified and evaluated process models that facilitate the adaptation of SBAs.

13.

Context

Assessing software quality at the early stages of the design and development process is very difficult, since most software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes, and for this purpose software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, prediction models built on one data set show a significant deterioration in accuracy when used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already-built predictive models and adapts them (one at a time) to new data, using an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be extended to predictive problems involving more than two classification labels.

Results

Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines to attain it.

Conclusion

Our approach can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality, since data is very scarce and predictive models built from one data set are therefore hard to generalize and reuse on new data.
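The abstract does not spell out the adaptation algorithm, so what follows is only a generic ant-colony-optimization sketch under strong assumptions: artificial ants probabilistically pick a single decision threshold, pheromone evaporates each iteration, and thresholds that classify the new data well are reinforced. The data and the one-threshold "model" are hypothetical stand-ins for the paper's richer predictive models.

# Hedged sketch: generic ACO tuning one decision threshold on new data.
# The data, candidate set, and all ACO parameters are hypothetical.
import random

random.seed(1)
# Context-specific data: (metric_value, is_stable); stable classes score lower here.
data = [(random.gauss(0.4 if stable else 0.7, 0.1), stable)
        for stable in [True, False] * 50]

candidates = [i / 20 for i in range(1, 20)]    # candidate thresholds 0.05 .. 0.95
pheromone = {t: 1.0 for t in candidates}

def accuracy(threshold):
    return sum((x < threshold) == stable for x, stable in data) / len(data)

for _ in range(30):                            # ACO iterations
    total = sum(pheromone.values())
    ants = random.choices(candidates,          # ants pick thresholds in proportion
                          weights=[pheromone[t] / total for t in candidates],
                          k=10)                # to the pheromone on each choice
    for t in pheromone:
        pheromone[t] *= 0.9                    # evaporation
    for t in ants:
        pheromone[t] += accuracy(t)            # reinforce good choices

best = max(candidates, key=lambda t: pheromone[t])
print(f"adapted threshold: {best:.2f}, accuracy on new data: {accuracy(best):.2%}")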

14.

Context

A particular strength of agile systems development approaches is that they encourage a move away from 'introverted' development by involving the customer in all areas of development, leading to more innovative and hence more valuable information systems. However, a move toward open innovation requires a focus that goes beyond a single customer representative and involves a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way.

Objective

This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment.

Method

We draw on two illustrative cases from industry.

Results

We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while the use of short iterations, stand-up meetings and the presence of an on-site customer reduce the amount of time for sharing ideas outside the team.

Conclusion

A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

15.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by introducing a method for applying SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

16.

Context

Software defect prediction studies usually build models using within-company data, but very few have focused on prediction models trained with cross-company data. In practice, it is difficult to employ models built on within-company data because of the lack of local data repositories. Recently, transfer learning has attracted more and more attention for building classifiers in a target domain using data from a related source domain. It is very useful in cases where the distributions of training and test instances differ, but is it appropriate for cross-company software defect prediction?

Objective

In this paper, we consider the cross-company defect prediction scenario, where source and target data are drawn from different companies. In order to harness cross-company data, we exploit transfer learning to build a fast and highly effective prediction model.

Method

Unlike prior work, which selects training data similar to the test data, we propose a novel algorithm called Transfer Naive Bayes (TNB) that uses the information of all the proper features in the training data. Our solution estimates the distribution of the test data and transfers cross-company data information into the weights of the training data. The defect prediction model is built on these weighted data.

Results

This article presents a theoretical analysis of the comparative methods and reports experimental results on data sets from different organizations. The results indicate that TNB is more accurate in terms of AUC (the area under the receiver operating characteristic curve) and runs faster than state-of-the-art methods.

Conclusion

It is concluded that when there are too few local training data to train good classifiers, useful knowledge from training data with a different distribution may help at the feature level. We are optimistic that our transfer learning method can guide optimal resource allocation strategies, which may reduce software testing cost and increase the effectiveness of the software testing process.
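In the spirit of the instance-weighting idea described above, here is a simplified sketch: each cross-company training instance is weighted by how many of its feature values fall inside the target data's per-feature ranges, and a weighted Gaussian naive Bayes is trained on the result. The data is synthetic and the weighting formula is an assumption-laden simplification, not the published TNB derivation.

# Hedged sketch: range-based instance weighting plus weighted Gaussian naive Bayes.
# All data and the weighting formula are illustrative simplifications.
import numpy as np

rng = np.random.default_rng(7)
X_src = rng.normal(0.0, 1.5, size=(200, 4))            # source-company metrics
y_src = (X_src.sum(axis=1) > 0).astype(int)            # defective = 1 (synthetic labels)
X_tgt = rng.normal(0.5, 1.0, size=(50, 4))             # unlabeled target-company data

lo, hi = X_tgt.min(axis=0), X_tgt.max(axis=0)
s = ((X_src >= lo) & (X_src <= hi)).sum(axis=1)        # features inside target ranges
k = X_src.shape[1]
w = s / (k - s + 1) ** 2                               # gravitation-style weights

def weighted_gaussian_nb_predict(X, y, w, x_new):
    log_posteriors = []
    for c in (0, 1):
        wc = w[y == c]
        prior = wc.sum() / w.sum()
        mu = np.average(X[y == c], axis=0, weights=wc)
        var = np.average((X[y == c] - mu) ** 2, axis=0, weights=wc) + 1e-6
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x_new - mu) ** 2 / var)
        log_posteriors.append(np.log(prior) + loglik)
    return int(np.argmax(log_posteriors))

print("predicted label for first target module:",
      weighted_gaussian_nb_predict(X_src, y_src, w, X_tgt[0]))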

17.
Identifying relevant studies in software engineering

Context

Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is to design and execute an appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries.

Objective

The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE.

Method

We systematically selected and analytically studied a large number of papers (SLRs) to understand the state of the practice of search strategies in EBSE. Having identified the limitations of the current ad hoc nature of the search strategies used by SE researchers for SLRs, we devised a systematic and evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of a 'quasi-gold standard' (QGS), which consists of a collection of known studies, and a corresponding 'quasi-sensitivity' into the search process for evaluating search performance.

Results

We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research.

Conclusion

The findings from the case studies show that the approach is able to improve the rigor of the search process in an SLR and can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using a series of case studies on varying research topics in SE.
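The 'quasi-sensitivity' evaluation reduces to a simple set computation, sketched below with hypothetical study identifiers: it is the fraction of the quasi-gold standard that a candidate search string actually retrieves.

# Hedged sketch: scoring a search strategy against a quasi-gold standard (QGS).
# All study identifiers are hypothetical.
qgs = {"s01", "s02", "s03", "s04", "s05"}         # known relevant studies (the QGS)
retrieved = {"s01", "s02", "s04", "s09", "s12"}   # what the search string returned

quasi_sensitivity = len(qgs & retrieved) / len(qgs)
precision_vs_qgs = len(qgs & retrieved) / len(retrieved)
print(f"quasi-sensitivity: {quasi_sensitivity:.0%}, precision vs QGS: {precision_vs_qgs:.0%}")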

18.

Context

One of the important issues in software testing is providing an automated test oracle. Test oracles are reliable sources of how the software under test must operate; in particular, they are used to evaluate the actual results produced by the software. However, in order to generate an automated test oracle, several challenges need to be addressed: output-domain generation, mapping the input domain to the output domain, and a comparator to decide on the accuracy of the actual outputs.

Objective

This paper proposes an automated test oracle framework to address all of these challenges.

Method

I/O Relationship Analysis is used to generate the output domain automatically, and Multi-Networks Oracles based on artificial neural networks are introduced to handle the second challenge. The last challenge is addressed using an automated comparator that adjusts the oracle precision by defining the comparison tolerance. The proposed approach was evaluated using an industry-strength case study into which some faults were injected. The quality of the proposed oracle was measured by assessing its accuracy, precision, misclassification error, and practicality. Mutation testing was used to provide the evaluation framework by implementing two different versions of the case study: a Golden Version and a Mutated Version. Furthermore, a comparative study between existing automated oracles and the proposed one is provided, based on which challenges each can automate.

Results

The results indicate that the proposed approach automated 97% of the oracle generation process in this experiment. The accuracy of the proposed oracle was up to 98.26%, and the oracle detected up to 97.7% of the injected faults.

Conclusion

Consequently, the results of the study highlight the practicality of the proposed oracle, in addition to the automation it offers.
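As a loose illustration of the oracle idea, the sketch below trains a small neural network on golden input/output pairs and uses a tolerance-based comparator to judge a deliberately faulty system under test. The function under test, the training data, and the tolerance value are hypothetical; the paper's I/O Relationship Analysis and Multi-Networks Oracles are considerably more elaborate.

# Hedged sketch: a neural-network oracle plus a tolerance-based comparator.
# buggy_sqrt, the training data, and TOLERANCE are hypothetical illustrations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.linspace(0.0, 100.0, 500).reshape(-1, 1)
y = np.sqrt(X).ravel()                          # golden behaviour the oracle learns
oracle = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                    random_state=0)).fit(X, y)

def buggy_sqrt(v):
    # System under test with an injected fault for large inputs.
    return v ** 0.5 if v < 50 else v ** 0.5 + 0.5

TOLERANCE = 0.25  # must also absorb the oracle's own approximation error
for v in (9.0, 25.0, 64.0, 81.0):
    expected = oracle.predict([[v]])[0]
    actual = buggy_sqrt(v)
    verdict = "pass" if abs(actual - expected) <= TOLERANCE else "FAIL"
    print(f"input {v:5.1f}: oracle~{expected:5.2f} actual={actual:5.2f} -> {verdict}")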

19.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived from either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to the user needs, but that schema may not be supported by the data actually available. The second, on the contrary, ensures perfect traceability and consistency with the data sources (in fact, it guarantees the presence of data to be used in analytical processing) but does not guard against missing business user needs. To face this issue, the need to define hybrid methodologies for conceptual design has emerged in recent years.

Objective

The objective of the paper is to use a hybrid methodology based on different multidimensional models in order to gather the advantages of each of them.

Method

The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas reconciled with data sources.

Results

As a case study, we illustrate how our methodology can be applied to the university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular, conventional methodologies.

Conclusion

In conclusion, we highlight how the hybrid methodology improves conceptual schema quality. Finally, we outline our ongoing work on introducing automatic design techniques into the methodology on the basis of logic programming.

20.
A methodology to assess the impact of design patterns on software quality

Context

Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. To date, the results on the effect of design patterns on software quality have been controversial.

Aims

This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology for comparing sets of "canonical" solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. It is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as a function of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects, and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality while weakening others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns and alternative designs, and it highlights thresholds beyond which the design pattern becomes more or less beneficial than the alternative design. More specifically, the identification of such thresholds can be very useful for decision making during system design and refactoring.
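A worked toy version of the threshold idea: express a structural quality attribute for the pattern and for the alternative as functions of the number of variant classes n, then find the crossover point. The two cost functions below are invented for illustration and are not the paper's equations.

# Hedged sketch: locating the threshold where a pattern beats an alternative.
# Both complexity functions are hypothetical stand-ins for the paper's equations.
def pattern_complexity(n):
    # e.g., Strategy-like design: context + interface + n variant classes
    return n + 2

def alternative_complexity(n):
    # e.g., a single class whose conditional logic grows faster with n
    return 1 + 1.5 * n

threshold = next(n for n in range(1, 100)
                 if pattern_complexity(n) < alternative_complexity(n))
print(f"pattern becomes beneficial (lower complexity) from n = {threshold}")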
