Similar Documents
 20 similar documents were retrieved (search time: 590 ms).
1.

Context

Studies on global software development have documented severe coordination and communication problems among coworkers due to geographic dispersion and consequent dependency on technology. These problems are exacerbated by the increasing complexity of the work undertaken by global teams. However, despite these problems, global software development is on the rise and firms are adopting global practices across the board, raising the question: what does successful global software development look like, and what can we learn from its practitioners?

Objective

This study draws on practice-based studies of work to examine successful work practices of global software developers. The primary aim of this study was to understand how workers develop practices that allow them to function effectively across geographically dispersed locations.

Method

An ethnographically-informed field study was conducted with data collection at two international locations of a firm. Interview, observation and archival data were collected. A total of 42 interviews and 3 weeks of observations were conducted.

Results

Teams spread across different locations around the world developed work practices through sociomaterial bricolage. Two facets of technology use were necessary for the creation of these practices: multiplicity of media and relational personalization at dyadic and team levels. New practices were triggered by the need to achieve a work-life balance, which was disturbed by global development. Reflecting on my role as a researcher, I underscore the importance of understanding researchers’ own frames of reference and using research practices that mirror informants’ work practices.

Conclusion

Software developers on global teams face unique challenges which necessitate a shift in their work practices. Successful teams are able to create practices that span locations while still being tied to location-based practices. Inventive use of material and social resources is central to the creation of these practices.

2.
A methodology to assess the impact of design patterns on software quality (total citations: 1; self-citations: 0; citations by others: 1)

Context

Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, results on the effect of design patterns on software quality have been controversial.

Aims

This study proposes an analytical methodology for comparing design patterns to alternative designs. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. It is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as a function of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
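As a rough illustration of this kind of analytical comparison (not the paper's actual equations), the sketch below expresses a hypothetical coupling attribute for a Strategy-like pattern and for a conditional-based alternative as functions of the number of variant classes n, and locates the threshold at which the pattern becomes the better choice. The metrics, equations and threshold search are assumptions made purely for illustration.

```python
# Illustrative sketch only: hypothetical coupling equations for a Strategy-like
# pattern versus a conditional-based alternative, expressed as functions of the
# number of variant classes n, plus a search for the threshold at which the
# pattern becomes the better (lower-coupling) choice.

def coupling_pattern(n: int) -> int:
    """Hypothetical coupling of a Strategy-like design: context, interface and
    factory overhead plus one link per concrete strategy class."""
    return 3 + n

def coupling_alternative(n: int) -> int:
    """Hypothetical coupling of a single class whose n conditional branches each
    depend on two collaborator classes."""
    return 2 * n

def pattern_threshold(max_n: int = 100):
    """Smallest n at which the pattern scores no worse than the alternative."""
    for n in range(1, max_n + 1):
        if coupling_pattern(n) <= coupling_alternative(n):
            return n
    return None

if __name__ == "__main__":
    for n in (1, 2, 3, 5, 10):
        print(n, coupling_pattern(n), coupling_alternative(n))
    print("pattern becomes beneficial at n =", pattern_threshold())
```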

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality while weakening others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns and alternative designs, and it highlights thresholds beyond which the design pattern becomes more, or less, beneficial than the alternative design. More specifically, the identification of such thresholds can be very useful for decision making during system design and refactoring.

3.

Context

Customer collaboration is a vital feature of Agile software development.

Objective

This article addresses the importance of adequate customer involvement on Agile projects, and the impact of different levels of customer involvement on real-life Agile projects.

Method

We conducted a Grounded Theory study involving 30 Agile practitioners from 16 software development organizations in New Zealand and India, over a period of 3 years.

Results

We discovered that Lack of Customer Involvement was one of the biggest challenges faced by Agile teams. Customers were not as involved on these Agile projects as Agile methods demand. We describe the causes of inadequate customer collaboration, its adverse consequences on self-organizing Agile teams, and Agile Undercover — a set of strategies used by the teams to practice Agile despite insufficient or ineffective customer involvement.

Conclusion

Customer involvement is important on Agile projects. Inadequate customer involvement causes adverse problems for Agile teams. The Agile Undercover strategies we’ve identified can assist Agile teams facing a similar lack of customer involvement.

4.
Identifying relevant studies in software engineering (total citations: 2; self-citations: 0; citations by others: 2)

Context

Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is to design and execute an appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries.

Objective

The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE.

Method

We have systematically selected and analytically studied a large number of papers (SLRs) to understand the state of the practice of search strategies in EBSE. Having identified the limitations of the current ad hoc nature of the search strategies used by SE researchers for SLRs, we have devised a systematic and evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of a ‘quasi-gold standard’ (QGS), which consists of a collection of known studies, and the corresponding ‘quasi-sensitivity’, into the search process for evaluating search performance.
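To make the QGS idea concrete, quasi-sensitivity can be read as the proportion of the known (quasi-gold standard) studies that an automated search actually retrieves. The sketch below is a minimal illustration under that reading; the study identifiers and the 0.80 acceptance threshold are hypothetical examples, not values from the paper.

```python
# Minimal sketch: evaluate a candidate search string against a quasi-gold
# standard (QGS) by computing quasi-sensitivity, i.e. the fraction of the known
# QGS studies that the automated search retrieves.

def quasi_sensitivity(retrieved: set, qgs: set) -> float:
    """Proportion of the quasi-gold standard found by the automated search."""
    if not qgs:
        raise ValueError("The quasi-gold standard must not be empty.")
    return len(retrieved & qgs) / len(qgs)

# Hypothetical example: a QGS of five manually identified studies, of which the
# candidate search string retrieves four (plus some studies outside the QGS).
qgs = {"s01", "s02", "s03", "s04", "s05"}
retrieved = {"s01", "s02", "s04", "s05", "s09", "s12"}

sensitivity = quasi_sensitivity(retrieved, qgs)
print(f"quasi-sensitivity = {sensitivity:.2f}")

# The search string would be refined until quasi-sensitivity reaches a chosen
# acceptance threshold (0.80 is used here purely as an example value).
if sensitivity < 0.80:
    print("refine the search string and try again")
```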

Results

We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research.

Conclusion

Based on the case studies, we report that the approach is able to improve the rigor of the search process in an SLR and that it can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using a series of case studies on varying research topics in SE.

5.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena e.g. theories of the actual behaviour of software projects at the level of the project and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy has basic requirements: data must be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which they refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

6.
7.

Context

Studying work practices in the context of Global Software Development (GSD) projects entails multiple opportunities and challenges for researchers. Understanding and tackling these challenges requires a careful and rigorous application of research methods.

Objective

We want to contribute to the understanding of the challenges of studying GSD by reflecting on several obstacles we had to deal with when conducting ethnographically-informed research on offshoring in German small to medium-sized enterprises.

Method

The material for this paper is based on reflections and field notes from two research projects: an exploratory ethnographic field study, and a study that was framed as a Business Ethnography. For the analysis, we took a Grounded Theory-oriented coding and analysis approach in order to identify issues and challenges documented in our research notes.

Results

We introduce the concept of Business Ethnography and discuss our experiences of adapting and implementing this action research concept for our study. We identify and discuss three primary issues: understanding complex global work practices from a local perspective, adapting to changing interests of the participants, and dealing with micro-political frictions between the cooperating sites.

Conclusions

We identify common interests between the researchers and the companies as both a challenge and an opportunity for studies on offshoring. Building on our experiences from the field, we argue for an active conceptualization of struggles and conflicts in the field, as well as for extending the role of the ethnographer to that of a learning mediator.

8.

Context

User participation in information system (IS) development has received much research attention. However, prior empirical research regarding the effect of user participation on IS success is inconclusive. This might be because previous studies overlook the effect of the particular components of user participation and other possible mediating factors.

Objective

The objective of this study is to empirically examine how user influence and user responsibility affect IS project performance. We inspect whether user influence and user responsibility improve the quality of the IS development process and in turn lead to project success, or whether they have a direct positive influence on project success.

Method

We conducted a survey of 151 IS project managers in order to understand the impact of user influence and user responsibility on IS project performance. Regression analysis was conducted to assess the relationship among user influence, user responsibility, organizational technology learning, project control, user-developer interaction, and IS project management performance.
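As a rough sketch of the kind of regression-based mediation check described above, the snippet below fits ordinary least squares models on synthetic data with a single mediator (project control). The data, effect sizes and the choice of a single mediator are illustrative assumptions, not the study's actual measurements or model.

```python
import numpy as np

# Synthetic sketch of a regression-based mediation check: does a development
# process variable (here, project control) mediate the effect of user influence
# on project performance? All data and effect sizes are fabricated for
# illustration; the single mediator simplifies the study's model.

rng = np.random.default_rng(42)
n = 151  # sample size chosen to mirror the survey

user_influence = rng.normal(size=n)
project_control = 0.6 * user_influence + rng.normal(scale=0.8, size=n)      # mediator
performance = 0.5 * project_control + 0.1 * user_influence + rng.normal(scale=0.8, size=n)

def ols_coefficients(y, *xs):
    """Ordinary least squares with an intercept; returns the slopes for xs."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(total_effect,) = ols_coefficients(performance, user_influence)
(a_path,) = ols_coefficients(project_control, user_influence)
direct_effect, b_path = ols_coefficients(performance, user_influence, project_control)

print(f"total effect of user influence:   {total_effect: .2f}")
print(f"indirect effect via the mediator: {a_path * b_path: .2f}")
print(f"direct effect of user influence:  {direct_effect: .2f}")
```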

Results

This study shows that user responsibility and user influence have a positive effect on project performance through the mediating IS development processes, namely organizational technology learning, project control, and user-developer interaction.

Conclusion

Our results suggest that user responsibility and user influence play important roles in indirectly and directly impacting project management performance, respectively. The analysis implies that organizations and project managers should use both user participation and user influence to improve process performance and, in turn, increase project success.

9.

Context

Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline.

Objective

The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice.

Method

A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering.

Results

As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis.

Conclusion

We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice.

10.
11.

Context

Adopting IT innovation in organizations is a complex decision process driven by technical, social and economic issues. Thus, organizations that decide to adopt an innovation face uncertainty about the success of its implementation, as the actual use of a new technology might not be the one expected. The misalignment between planned and effective use of an innovation is called the assimilation gap.

Objective

This research aims at defining a quantitative instrument for measuring the assimilation gap and applying it to the case of the adoption of OSS.

Method

In this paper, we use Arthur’s theory of path dependence and increasing returns. In particular, we model the use of software applications (planned or actual) by stochastic processes defined by the daily amounts of files created with the applications. We quantify the assimilation gap by comparing the resulting models using measures of proximity.
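A minimal sketch of this quantification step is given below, under stated assumptions: planned and actual use are represented as daily counts of created files, and proximity is measured with the total variation distance between the normalized series. Both the data and the choice of distance are illustrative stand-ins for the stochastic models and proximity measures used in the study.

```python
import numpy as np

# Illustrative sketch: planned and actual application use are represented as
# daily counts of files created, and the assimilation gap is quantified as the
# total variation distance between the two normalized series. Both the data and
# the distance measure are stand-ins for the study's models and proximity measures.

def assimilation_gap(planned_daily, actual_daily) -> float:
    """Gap in [0, 1]: 0 means actual use matches the plan, 1 means no overlap."""
    p = np.asarray(planned_daily, dtype=float)
    a = np.asarray(actual_daily, dtype=float)
    if p.shape != a.shape:
        raise ValueError("Both series must cover the same observation period.")
    # Compare the distribution of use over time rather than raw volumes.
    p_dist = p / p.sum() if p.sum() else p
    a_dist = a / a.sum() if a.sum() else a
    return 0.5 * float(np.abs(p_dist - a_dist).sum())

# Hypothetical data: the rollout plan expects steady daily use of the new
# application, while observed use tapers off after an initial spike.
planned = [10, 10, 10, 10, 10, 10, 10]
actual = [14, 9, 6, 4, 3, 2, 2]

print(f"assimilation gap = {assimilation_gap(planned, actual):.2f}")
```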

Results

We apply and validate our method on a real case study of the introduction of OpenOffice. We found a gap between the planned and the effective use, despite well-defined directives to use the new OSS technology. These findings suggest a need for strategy re-calibration that takes into account environmental factors and individual attitudes.

Conclusions

The theory of path dependence is a valid instrument for modelling the assimilation gap, provided that information on the strategy toward innovation and quantitative data on actual use are available.

12.

Context

Agile information systems development (ISD) has received much attention from both the practitioner and researcher community over the last 10-15 years. However, it is still unclear what precisely constitutes agile ISD.

Objective

Based on four empirical studies conducted over a 10-year period from 1999 to 2008, the objective of this paper is to show how the meaning and practice of agile ISD have evolved over time and, on this basis, to speculate about what comes next.

Method

Four phases of research were conducted using a grounded theory approach. For each research phase, qualitative interviews were held in American and/or Danish companies and a grounded theory was inductively discovered through careful data analysis. Subsequently, the four unique theories were analyzed for common themes, and a global theory was identified across the empirical data.

Results

In 1999 companies were developing software at high speed in a desperate rush to be first to market. In 2001 a new high-speed/quick-results development process had become established practice. In 2003 changes in the market created the need for a more balanced view on speed and quality, and in 2008 companies were successfully combining agile and plan-driven approaches to achieve the benefits of both. The studies reveal a two-stage pattern in which dramatic changes in the market cause disruption of established practices and process adaptations, followed by consolidation of lessons learnt into a once-again stable software development process.

Conclusion

The cyclical history of punctuated process evolution makes it possible to distinguish pre-agility from current practices (agility), and on this basis, to speculate about post-agility: a possible next cycle of software process evolution concerned with proactively pursuing the dual goal of agility and alignment through a diversity of means.

13.

Context

Specification matching techniques are crucial for effective retrieval processes. Despite the prevalence of object-oriented methodologies, little attention has been given to using the Unified Modeling Language (UML) for matching.

Objective

This paper presents a two-stage framework for matching two UML specifications and quantifying the results based on the systematic integration of their structural and behavioral similarities in order to identify the candidate component set for reuse.

Method

The first stage in the framework is an evaluation of the similarities between UML class diagrams using the Structure-Mapping Engine (SME), a simulation of the analogical reasoning approach known as the structure-mapping theory. The second stage, performed on the components identified in the first stage, is based on a graph-similarity scoring algorithm in which UML class diagrams and sequence diagrams are transformed into an SME representation and a Message-Object-Order Graph (MOOG). The effectiveness of the proposed framework was evaluated using a case study.
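For orientation only, the sketch below shows a generic iterative graph-similarity scoring computation (a Blondel-style node-similarity update) over two small dependency graphs. It is not the SME or MOOG algorithm used in the framework; the graphs, the update rule and the iteration count are illustrative assumptions.

```python
import numpy as np

# Generic graph-similarity scoring sketch (a Blondel-style iterative node
# similarity). It only illustrates the kind of computation a graph-matching
# stage performs; it is NOT the SME or MOOG algorithm used in the framework.

def node_similarity(adj_query, adj_candidate, iterations: int = 20):
    """Return a |candidate| x |query| matrix of node-to-node similarity scores."""
    a = np.asarray(adj_query, dtype=float)
    b = np.asarray(adj_candidate, dtype=float)
    s = np.ones((b.shape[0], a.shape[0]))
    for _ in range(iterations):
        s = b @ s @ a.T + b.T @ s @ a      # propagate similarity along edges
        norm = np.linalg.norm(s)
        s = s / norm if norm else s        # keep the scores bounded
    return s

# Hypothetical example: two tiny class-dependency graphs, where adj[i, j] == 1
# means class i depends on class j.
query = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]])
candidate = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])

print(np.round(node_similarity(query, candidate), 2))
```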

Results

The experimental results showed a reduction in potential mismatches and an overall high precision and recall.

Conclusion

It is concluded that the two-stage framework is capable of performing more precise matching than single-stage matching frameworks. Moreover, the two-stage framework could be utilized within a reuse process, bypassing the need for extra information when retrieving the components described by UML.

14.

Context

Human resources play a critical role in software project success. However, people are still the least formalized factor in today’s process models. Generally, people are assigned to roles and project teams are formed on the basis of project leaders’ experience of people, constraints (e.g. availability) and skill requirements. Yet this process has to take multiple factors into account. Few works in the literature model this process. Most of these are informal proposals focusing on the individual assignment of people to project tasks and do not consider other aspects like team formation as a whole.

Objective

In this paper we formulate a formal model for assigning human resources to software project teams. Additionally, we describe the key results of the knowledge management process enacted to output the elements of the model.

Method

The model elements were identified using the Delphi expert consultation method and applying psychological tests. The proposed model was implemented in a software tool and validated on two software development organization assignment scenarios.

Results

We built a formal model for the process of assigning human resources to software project teams. This model takes into account as many factors as possible and aids the assignment of individuals to project roles, as well as the formation of the team as a whole. We found that the rules identified for forming software development project teams are useful. From the tests we found that the model implementation was feasible: all executions of the implemented problem-solving algorithms returned feasible solutions within response times that can be considered acceptable.
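To make the assignment idea concrete, the deliberately simplified sketch below scores people per role and picks the mapping with the highest total competence by exhaustive search. The roles, scores and brute-force search are illustrative assumptions; the actual model also encodes Delphi-derived competences, psychological traits and team-level formation rules.

```python
from itertools import permutations

# Deliberately simplified sketch of role assignment: maximize total competence
# with one person per role and at most one role per person. The roles, scores
# and exhaustive search are illustrative, not the paper's actual model.

competence = {  # person -> {role: score in [0, 1]}  (hypothetical data)
    "Ana":   {"analyst": 0.9, "developer": 0.6, "tester": 0.4},
    "Boris": {"analyst": 0.5, "developer": 0.8, "tester": 0.7},
    "Chen":  {"analyst": 0.6, "developer": 0.7, "tester": 0.9},
}
roles = ["analyst", "developer", "tester"]

def best_assignment(people: dict, roles: list):
    """Brute-force the role -> person mapping with the highest total competence."""
    best, best_score = {}, -1.0
    for chosen in permutations(people, len(roles)):
        assignment = dict(zip(roles, chosen))
        score = sum(people[person][role] for role, person in assignment.items())
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

assignment, score = best_assignment(competence, roles)
print(assignment, round(score, 2))
```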

Conclusion

Using the Delphi method we were able to propose software project roles and competences. Psychological tests and data mining tools identified useful rules for forming software project teams. These were used to build a formal model. This model was built into a tool that returns role assignments in acceptable response times. This decision support tool helps managers assign people to roles and to form teams. Using the tool, project leaders can flexibly evaluate different team make-ups, taking into account several factors, as well as different constraints and objectives.

15.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already-built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality prediction problems involving more than two classification labels.
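The toy sketch below illustrates the general shape of such an ACO-based adaptation loop on a single-threshold rule: ants probabilistically propose candidate thresholds, proposals are scored on the new data, and pheromone reinforces the thresholds that classify it well. The rule, data and parameters are assumptions for illustration and do not reproduce the paper's algorithm.

```python
import random

# Toy sketch of adapting an existing rule-based model to new data with ant
# colony optimization (ACO). The "model" is a single threshold rule; everything
# here (rule, data, parameters) is illustrative, not the paper's setup.

random.seed(0)

candidates = [2, 4, 6, 8, 10]          # candidate thresholds for the adapted rule
pheromone = {t: 1.0 for t in candidates}

# Hypothetical "new, unseen" data: (metric value, label), label 1 = unstable class.
new_data = [(1, 0), (3, 0), (5, 0), (7, 1), (9, 1), (11, 1), (6, 1), (4, 0)]

def accuracy(threshold: int) -> float:
    """Accuracy of the rule 'predict 1 when metric > threshold' on the new data."""
    hits = sum((value > threshold) == bool(label) for value, label in new_data)
    return hits / len(new_data)

def pick_threshold() -> int:
    """Pheromone-proportional (roulette-wheel) choice of a candidate threshold."""
    return random.choices(candidates, weights=[pheromone[t] for t in candidates])[0]

for _ in range(30):                                # iterations of the colony
    ants = [pick_threshold() for _ in range(5)]    # each ant proposes a threshold
    best = max(ants, key=accuracy)
    for t in pheromone:                            # pheromone evaporation
        pheromone[t] *= 0.9
    pheromone[best] += accuracy(best)              # reinforce the best proposal

adapted = max(pheromone, key=pheromone.get)
print("adapted threshold:", adapted, "accuracy on new data:", accuracy(adapted))
```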

Results

Results show that our approach outperforms the machine-learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines on how to attain it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality, since data are very scarce and predictive models built from one data set are therefore hard to generalize and reuse on new data.

16.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

17.

Context

The popularity of open source software development in the last decade has brought about increased interest from industry in how to use open source components, participate in the open source community, build business models around this type of software development, and learn more about open source development methodologies. There is a need to understand the results of research in this area.

Objective

Given this need to understand the research conducted so far, the aim of this study is to summarize the findings of research on the industrial usage of open source components and development methodologies, as well as on companies’ participation in the open source community.

Method

A systematic review was conducted through searches in library databases and manual identification of articles from the open source conference. The search was first carried out in May 2009 and repeated in May 2010.

Results

In 2009, 237 articles were initially found, from which 19 were selected based on content and quality; in 2010, 76 new articles were found, from which four were selected. Twenty-three articles were identified in total.

Conclusions

The articles could be divided into four categories: open source as part of component-based software engineering, business models with open source in commercial organizations, company participation in open source development communities, and usage of open source processes within a company.

18.

Context

Since the introduction of evidence-based software engineering in 2004, systematic literature review (SLR) has been increasingly used as a method for conducting secondary studies in software engineering. Two tertiary studies, published in 2009 and 2010, identified and analysed 54 SLRs published in journals and conferences in the period between 1st January 2004 and 30th June 2008.

Objective

In this article, our goal was to extend and update the two previous tertiary studies to cover the period between 1st July 2008 and 31st December 2009. We analysed the quality, coverage of software engineering topics, and potential impact of published SLRs for education and practice.

Method

We performed automatic and manual searches for SLRs published in journals and conference proceedings, analysed the relevant studies, and compared and integrated our findings with the two previous tertiary studies.

Results

We found 67 new SLRs addressing 24 software engineering topics. Among these studies, 15 were considered relevant to the undergraduate educational curriculum, and 40 appeared of possible interest to practitioners. We found that the number of SLRs in software engineering is increasing, the overall quality of the studies is improving, and the number of researchers and research organisations worldwide that are conducting SLRs is also increasing and spreading.

Conclusion

Our findings suggest that the software engineering research community is starting to adopt SLRs consistently as a research method. However, the majority of the SLRs did not evaluate the quality of the primary studies and failed to provide guidelines for practitioners, thus decreasing their potential impact on software engineering practice.

19.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived from either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to user needs, but that schema may not be supported by the data actually available. In contrast, the second ensures perfect traceability and consistency with the data sources (it guarantees the presence of the data to be used in analytical processing) but does not guard against missing business user needs. To address this issue, the need has emerged in recent years to define hybrid methodologies for conceptual design.

Objective

The objective of the paper is to use a hybrid methodology based on different multidimensional models in order to combine the advantages of each of them.

Method

The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas reconciled with data sources.

Results

As a case study, we illustrate how our methodology can be applied to the university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular, conventional methodologies.

Conclusion

In conclusion, we highlight how the hybrid methodology improves conceptual schema quality. Finally, we outline our current work on introducing automatic design techniques into the methodology on the basis of logic programming.

20.

Context

Service-Oriented Computing (SOC) is a promising computing paradigm which facilitates the development of adaptive and loosely coupled service-based applications (SBAs). Many of the technical challenges pertaining to the development of SBAs have been addressed; however, there are still outstanding questions relating to the processes required to develop them.

Objective

The objective of this study is to systematically identify process models for developing service-based applications (SBAs) and review the processes within them. This will provide a useful starting point for any further research in the area. A secondary objective of the study is to identify process models which facilitate the adaptation of SBAs.

Method

In order to achieve this objective, a systematic literature review (SLR) of the existing software engineering literature is conducted.

Results

During this research, 722 studies were identified using a predefined search strategy; this number was narrowed down to 57 studies based on a set of strict inclusion and exclusion criteria. The results are reported both quantitatively in the form of a mapping study, and qualitatively in the form of a narrative summary of the key processes identified.

Conclusion

Many process models for the development of SBAs have been reported, varying in detail and maturity; this review has identified and categorised the processes within those process models. The review has also identified and evaluated process models which facilitate the adaptation of SBAs.
