Similar literature
20 similar documents found.
1.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived through either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to the user needs, but that schema may not be supported by the data actually available. The second, on the contrary, ensures perfect traceability and consistency with the data sources—in fact, it guarantees the presence of the data to be used in analytical processing—but it does not protect against missing business user needs. To address this issue, the need has emerged in recent years to define hybrid methodologies for conceptual design.

Objective

The objective of the paper is to use a hybrid methodology based on different multidimensional models in order to combine the advantages of each of them.

Method

The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas that have been reconciled with the data sources.
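To give a feel for the kind of check that reconciling a requirement-driven schema with the data sources involves, the sketch below tests whether a candidate functional dependency (for example, a dimension hierarchy level requested by the users) actually holds in the extracted source data. It is a minimal illustration only; the helper, the attribute names and the sample rows are invented and do not reproduce the paper's algorithm.

```python
def fd_holds(rows, determinant, dependent):
    """Check whether the functional dependency determinant -> dependent
    holds in the extracted source data (each row is a dict)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in determinant)
        value = tuple(row[a] for a in dependent)
        if key in seen and seen[key] != value:
            return False   # the same determinant maps to two different values
        seen[key] = value
    return True

# Does City -> Region hold in the sales extract, i.e. is the hierarchy level
# requested by the users actually supported by the available data?
sales = [
    {"City": "Turin", "Region": "Piedmont", "Amount": 10},
    {"City": "Milan", "Region": "Lombardy", "Amount": 25},
    {"City": "Turin", "Region": "Piedmont", "Amount": 7},
]
print(fd_holds(sales, ["City"], ["Region"]))   # True
```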

Results

As a case study, we illustrate how our methodology can be applied to a university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular conventional methodologies.

Conclusion

In conclusion, we highlight how the hybrid methodology improves conceptual schema quality. Finally, we outline our ongoing work devoted to introducing automatic design techniques into the methodology on the basis of logic programming.

2.

Context

Agile software development, with its emphasis on producing working code through frequent releases, extensive client interaction and iterative development, has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive its adoption and use.

Objective

We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology.

Method

Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses.

Results

Our analyses reveal that subjective norm and training play a significant role in influencing software developers’ use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices.

Conclusion

We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change.

3.

Context

Software development now faces many more challenges than ever before due to its intrinsically high complexity and the increasing demands of the quick-service-ready paradigm.

Objective

As industry now calls on developers to deliver higher-quality software systems, the existing methodologies and standards of software engineering provide insufficient guidance for the rapid development of quality business software.

Method

In this work, we discuss the advantages of the pattern-based software development. We verify the benefits using a pattern-based software framework called OS2F, and a corresponding system design architecture that is intended for the rapid development of web applications.
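As a rough illustration of the separation the paper argues for, the sketch below uses the Strategy pattern to keep a business rule out of the framework-side code. It is a generic, hypothetical example; the class names are invented and this is not OS2F code.

```python
from abc import ABC, abstractmethod

class DiscountRule(ABC):
    """Business rule, kept separate from the application plumbing."""
    @abstractmethod
    def apply(self, amount: float) -> float: ...

class SeasonalDiscount(DiscountRule):
    def apply(self, amount: float) -> float:
        return amount * 0.9          # 10% off during the campaign

class NoDiscount(DiscountRule):
    def apply(self, amount: float) -> float:
        return amount

class CheckoutService:
    """Framework-side code: knows nothing about concrete business rules."""
    def __init__(self, rule: DiscountRule):
        self.rule = rule

    def total(self, amount: float) -> float:
        return self.rule.apply(amount)

print(CheckoutService(SeasonalDiscount()).total(100.0))  # 90.0
```

The point of the pattern here is that new or changed business rules plug in as new DiscountRule classes, without touching the checkout (framework) code.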

Results

The objective of the framework/architecture is that, through software patterns, developers should be able to separate the work of system development from the business rules, so as to reduce the problems caused by a developer's lack of business experience.

Conclusion

Through a suitable pattern-based software framework, the quality of the product can thus be enhanced, software development time and cost decreased, and software evolution robustness improved.

4.

Context

Cost advantage has been one of the primary drivers of successful offshoring engagements of Indian software and services companies. However, the emphasis has shifted from cost advantage to the vendors' ability to deliver high quality software products and services. Meeting the clients' high quality requirements is a challenge due to the very nature of developing and delivering software through offshoring.

Objective

The objective of this research paper is to identify and evaluate the key determinants of quality in the case of software projects delivered through the offshoring model.

Method

A detailed survey was conducted among project managers/project leads of a leading midsize Indian IT services company to evaluate the relationship between the determinants and the attributes of quality.

Results

Out of the six determinants, our research reveals that requirements uncertainty has a significant association with all the attributes of quality. While process maturity and trained personnel have a moderate association, communication and control, knowledge transfer and integration, and technical infrastructure have a relatively low association with software quality attributes in the case of offshoring.

Conclusion

It is concluded that the complexities of offshoring necessitate proper capture of requirements. In addition, a high level of process maturity and the availability of trained personnel on the project will help vendors achieve software quality. The paper provides a set of implications for practice and directions for further research.

5.
Identifying relevant studies in software engineering

Context

Systematic literature review (SLR) has become an important research methodology in software engineering since the introduction of evidence-based software engineering (EBSE) in 2004. One critical step in applying this methodology is to design and execute an appropriate and effective search strategy. This is a time-consuming and error-prone step, which needs to be carefully planned and implemented. There is an apparent need for a systematic approach to designing, executing, and evaluating a suitable search strategy for optimally retrieving the target literature from digital libraries.

Objective

The main objective of the research reported in this paper is to improve the search step of undertaking SLRs in software engineering (SE) by devising and evaluating systematic and practical approaches to identifying relevant studies in SE.

Method

We have systematically selected and analytically studied a large number of papers (SLRs) to understand the state of the practice of search strategies in EBSE. Having identified the limitations of the current ad hoc nature of the search strategies used by SE researchers for SLRs, we have devised a systematic and evidence-based approach to developing and executing optimal search strategies in SLRs. The proposed approach incorporates the concept of a ‘quasi-gold standard’ (QGS), which consists of a collection of known studies, and a corresponding ‘quasi-sensitivity’ measure into the search process for evaluating search performance.
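Quasi-sensitivity can be read as the recall of the automated search measured against the quasi-gold standard. The sketch below is a minimal illustration under that reading; the study identifiers are invented.

```python
def quasi_sensitivity(retrieved, quasi_gold_standard):
    """Fraction of the known (QGS) studies that the search string actually retrieves."""
    qgs = set(quasi_gold_standard)
    return len(qgs & set(retrieved)) / len(qgs)

qgs = {"S01", "S02", "S03", "S04", "S05"}          # manually identified known studies
retrieved = {"S01", "S03", "S04", "S09", "S17"}    # hits returned by the digital-library search
print(quasi_sensitivity(retrieved, qgs))           # 0.6 -> the search string may need refinement
```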

Results

We conducted two participant-observer case studies to demonstrate and evaluate the adoption of the proposed QGS-based systematic search approach in support of SLRs in SE research.

Conclusion

We report, based on the case studies, that the approach is able to improve the rigor of the search process in an SLR, and that it can serve as a supplement to the guidelines for SLRs in EBSE. We plan to further evaluate the proposed approach using a series of case studies on varying research topics in SE.

6.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture design and application contexts, which makes architecture goal definition and architecture design non-trivial steps, rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures. The framework is based on state-of-the-art results from literature and practice. We validate our framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared to the opinions of experts on these reference architectures, documented in the literature and in dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and five types of reference architectures formed by combining specific values from that space. Reference architectures that can be classified into one of these types have a better chance of success. The validation of our framework confirms its quality as a tool for analysing the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture's chance of success.

7.

Aim

In this article, factors influencing the motivation of software engineers are studied with the goal of guiding the definition of motivational programs.

Method

Using a set of 20 motivational factors compiled in a systematic literature review and a general theory of motivation, a survey questionnaire was created to evaluate the influence of these factors on individual motivation. The questionnaire was then applied to a semi-random sample of 176 software engineers from 20 software companies located in Recife-PE, Brazil.

Results

The survey results show the actual level of motivation for each motivator in the target population. Using principal component analysis on the values of all motivators, a five-factor structure was identified and used to propose a guideline for the creation of motivational programs for software engineers.
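As a rough sketch of this analysis step, principal component analysis can be applied to the standardised questionnaire responses to extract a small number of components. The code below uses randomly generated placeholder data and scikit-learn; it does not reproduce the paper's actual analysis, which may, for example, apply factor rotation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder: 176 respondents x 20 motivational factors rated on a 1-5 scale
responses = rng.integers(1, 6, size=(176, 20)).astype(float)

X = StandardScaler().fit_transform(responses)      # standardise each motivator
pca = PCA(n_components=5).fit(X)                   # retain a five-component structure

print(pca.explained_variance_ratio_.round(2))          # variance explained by each component
print(np.abs(pca.components_[0]).argsort()[::-1][:5])  # motivators loading most on component 1
```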

Conclusions

The five-factor structure provides an intuitive categorization for the set of variables and can be used to explain other motivational models presented in the literature. This contributes to a better understanding of motivation in software engineering.

8.

Context

Customer collaboration is a vital feature of Agile software development.

Objective

This article addresses the importance of adequate customer involvement on Agile projects, and the impact of different levels of customer involvement on real-life Agile projects.

Method

We conducted a Grounded Theory study involving 30 Agile practitioners from 16 software development organizations in New Zealand and India, over a period of 3 years.

Results

We discovered that Lack of Customer Involvement was one of the biggest challenges faced by Agile teams. Customers were not as involved on these Agile projects as Agile methods demand. We describe the causes of inadequate customer collaboration, its adverse consequences on self-organizing Agile teams, and Agile Undercover — a set of strategies used by the teams to practice Agile despite insufficient or ineffective customer involvement.

Conclusion

Customer involvement is important on Agile projects. Inadequate customer involvement causes adverse problems for Agile teams. The Agile Undercover strategies we have identified can assist Agile teams facing a similar lack of customer involvement.

9.

Context

A software artefact typically makes its functionality available through a specialized Application Programming Interface (API) describing the set of services offered to client applications. In fact, building any software system usually involves managing a plethora of APIs, which complicates the development process. In Model-Driven Engineering (MDE), where models are the key elements of any software engineering activity, this API management should take place at the model level. Therefore, tools that facilitate the integration of APIs and MDE are clearly needed.

Objective

Our goal is to automate the implementation of API-MDE bridges for supporting both the creation of models from API objects and the generation of such API objects from models. In this sense, this paper presents the API2MoL approach, which provides a declarative rule-based language to easily write mapping definitions to link API specifications and the metamodel that represents them. These definitions are then executed to convert API objects into model elements or vice versa. The approach also allows both the metamodel and the mapping to be automatically obtained from the API specification (bootstrap process).

Method

After implementing the API2MoL engine, its correctness was validated using several APIs. Since APIs are normally large, we then developed a tool to implement the bootstrap process, which was also validated.

Results

We provide a toolkit (language and bootstrap tool) for the creation of bridges between APIs and MDE. The current implementation focuses on Java APIs, although its adaptation to other statically typed object-oriented languages is straightforward. The correctness, expressiveness and completeness of the approach have been validated with the Swing, SWT and JTwitter APIs.

Conclusion

API2MoL frees developers from having to manually implement the tasks of obtaining models from API objects and generating such objects from models. This helps to manage API models in MDE-based solutions.

10.

Context

Practitioners may use design patterns to organize program code. Various empirical studies have investigated the effects of pattern deployment and work experience on the effectiveness and efficiency of program maintenance. However, results from these studies are not all consistent. Moreover, these studies have not considered some interesting factors, such as a maintainer’s prior exposure to the program under maintenance.

Objective

This paper aims at identifying what factors may contribute to the productivity of maintainers in the context of making correct software changes when they work on programs with deployed design patterns.

Method

We performed an empirical study involving 118 human subjects with three change tasks on a medium-sized program to explore the possible effects of a suite of six human and program factors on the productivity of maintainers, measured by the time taken to produce a correctly revised program in a course-based setting. The factors we studied include the deployment of design patterns and the presence of pattern-unaware solutions, as well as the maintainer’s prior exposure to design patterns, the subject program and the programming language, and prior work experience.

Results

Among the factors under examination, we find that the deployment of design patterns, prior exposure to the program and the presence of pattern-unaware solutions are strongly correlated with the time taken to correctly complete maintenance tasks. We also report some interesting observations from the experiment.

Conclusion

A new factor, namely the presence of pattern-unaware solutions, contributes to the efficient completion of maintenance tasks on programs with deployed design patterns. Moreover, we conclude from the study that neither prior exposure to design patterns nor prior exposure to the programming language is supported by sufficient evidence to be a significant factor, whereas the subjects’ exposure to the program under maintenance is notably more important.

11.

Purpose

The purpose of this paper is to characterize reconciliation among the plan-driven, agile, and free/open source software models of software development.

Design/methodology/approach

An automated quasi-systematic review identified 42 papers, which were then analyzed.

Findings

The main findings are: there exist distinct levels of reconciliation (organization, group, and process); few studies deal with reconciliation among the three models of development; a significant amount of work addresses reconciliation between plan-driven and agile development; several large organizations (such as Microsoft, Motorola, and Philips) are interested in trying to combine these models; and reconciliation among software development models is still an open issue, since it is an emerging area and research on most proposals is at an early stage.

Research limitations

Automated searches may not capture relevant papers in publications that are not indexed. Other data sources not amenable to execution of the protocol were not used. Data extraction was performed by only one researcher, which may increase the risk of threats to internal validity.

Implications

This characterization is important for practitioners wanting to stay current with the state of research. This review will also assist the scientific community working with software development processes to build a common understanding of the challenges that must be faced, and to identify areas where research is lacking. Finally, the results will be useful to the software industry, which is calling for solutions in this area.

Originality/value

There is no other systematic review on this subject, and reconciliation among software development models is an emerging area. This study helps to identify and consolidate the work done so far and to guide future research. The conclusions are an important step towards expanding the body of knowledge in the field.

12.
Analogy-based software effort estimation using Fuzzy numbers

Background

Early stage software effort estimation is a crucial task for project bidding and feasibility studies. Since the data collected during the early stages of a software development lifecycle are always imprecise and uncertain, it is very hard to deliver accurate estimates. Analogy-based estimation, which is one of the popular estimation methods, is rarely used during the early stage of a project because of the uncertainty associated with attribute measurement and data availability.

Aims

We have integrated analogy-based estimation with Fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle, using all available early data. In particular, this paper proposes a new software project similarity measure and a new adaptation technique based on Fuzzy numbers.
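The paper's own similarity measure and adaptation technique are not reproduced here; the sketch below only illustrates the general idea of comparing projects whose early-stage attributes are given as triangular fuzzy numbers (low, most likely, high). The distance used is a common textbook choice, and the attribute names and values are invented.

```python
def tfn_distance(a, b):
    """Euclidean-style distance between two triangular fuzzy numbers (l, m, u)."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0) ** 0.5

def project_similarity(p, q):
    """Similarity between two projects described by fuzzy attribute values."""
    d = sum(tfn_distance(p[k], q[k]) for k in p) / len(p)
    return 1.0 / (1.0 + d)          # map average distance to a (0, 1] similarity score

# Early-stage attributes given as (low, most-likely, high) estimates
new_project  = {"team_size": (3, 4, 6),  "screens": (10, 12, 15)}
past_project = {"team_size": (4, 5, 7),  "screens": (11, 14, 18)}
print(round(project_similarity(new_project, past_project), 3))
```

In analogy-based estimation, the most similar past projects would then be retrieved and their (possibly adapted) efforts combined into the estimate for the new project.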

Method

Empirical evaluations with a jack-knifing procedure have been carried out using five benchmark data sets of software projects, namely ISBSG, Desharnais, Kemerer, Albrecht and COCOMO, and the results are reported. The results are compared to those obtained by methods employed in the literature using case-based reasoning and stepwise regression.

Results

On all data sets, the empirical evaluations have shown that the proposed similarity measure and adaptation technique were able to significantly improve the performance of analogy-based estimation during the early stages of software development. The results have also shown that the proposed method outperforms some well-known estimation techniques such as case-based reasoning and stepwise regression.

Conclusions

It is concluded that the proposed estimation model could form a useful approach for early stage estimation, especially when the available data are largely uncertain.

13.

Context

Frequent changes to groups of software entities belonging to different parts of the system may indicate unwanted couplings between those parts. Visualizations of co-changing software entities have been proposed to help developers identify unwanted couplings. Identifying unwanted couplings, however, is only the first step towards an important goal of a software architect: to improve the decomposition of the software system. An in-depth analysis of co-changing entities is needed to understand the underlying reasons for co-changes, and also determine how to resolve the issues.
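As a rough sketch of how co-changing entities can be detected in the first place, the snippet below counts how often pairs of files appear together in the same commit; the commit history shown is invented, and the paper's tool builds a far richer, interactive model of the system's evolution.

```python
from collections import Counter
from itertools import combinations

# Each commit is the set of files changed together (placeholder history)
commits = [
    {"ui/View.java", "core/Model.java"},
    {"core/Model.java", "db/Schema.sql"},
    {"ui/View.java", "core/Model.java", "core/Controller.java"},
]

co_changes = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        co_changes[pair] += 1

# Pairs that change together most often are candidate unwanted couplings
for pair, count in co_changes.most_common(3):
    print(count, pair)
```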

Objective

In this paper we discuss how interactive visualizations can support the process of analyzing the identified unwanted couplings.

Method

We applied a tool that interactively visualizes software evolution in 10 working sessions with architects and developers of a large embedded software system having a development history of more than a decade.

Results

The participants of the working sessions were overall very positive about their experiences with the interactive visualizations. In 70% of the cases investigated, a decision could be taken on how to resolve the unwanted couplings.

Conclusion

Our experience suggests that interactive visualization not only helps to identify unwanted couplings but also helps experts to reason about and resolve them.

14.

Context

Modern software engineering demands that professionals and researchers proactively and collectively work towards exploring and experimenting with viable and valuable mechanisms for detecting degenerative bugs, security holes, and possible deviations at the earliest stage. Recognising this need, we introduce a novel methodology for estimating the defect proneness of class structures in object-oriented (OO) software systems at the design stage.

Objective

The objective of this work is to develop an estimation model that provides a meaningful assessment of the defect proneness of object-oriented software packages at the design phase of the SDLC. This framework enhances the efficiency of the SDLC through design quality improvement.

Method

This involves a data-driven methodology based on an empirical study of the relationship between design parameters and defect proneness. In the first phase, a mapping between the design metrics and the normal occurrence pattern of defects is carried out. This is represented as a set of non-linear multifunctional regression equations that reflect the influence of individual design metrics on defect proneness. The defect proneness estimation model is then generated by a weighted linear combination of these regression equations, with the weighting coefficients evaluated through the GQM (Goal Question Metric) paradigm.
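Schematically, the estimation model combines per-metric regression estimates with GQM-derived weights. The sketch below illustrates that combination only; the metric names, regression shapes and weights are placeholders, not the paper's fitted model.

```python
import math

# Hypothetical per-metric regression functions fitted on historical defect data
regressions = {
    "WMC": lambda x: 0.8 * math.log1p(x),   # weighted methods per class
    "CBO": lambda x: 0.05 * x ** 1.2,       # coupling between objects
    "DIT": lambda x: 0.3 * x,               # depth of inheritance tree
}

# Weights elicited through the GQM exercise (placeholders summing to 1)
weights = {"WMC": 0.5, "CBO": 0.3, "DIT": 0.2}

def defect_proneness(metrics):
    """Weighted linear combination of the per-metric regression estimates."""
    return sum(weights[m] * regressions[m](metrics[m]) for m in regressions)

print(round(defect_proneness({"WMC": 25, "CBO": 9, "DIT": 3}), 3))
```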

Results

Model evaluation and validation were carried out with a selected set of cases, and the results are promising. The current study successfully dealt with three projects, and it opens up the opportunity to extend the approach to a wide range of projects across industries.

Conclusion

Defect proneness estimation at the design stage provides effective feedback to the design architect, enabling them to identify and reduce the number of defects in the modules appropriately. This results in a considerable improvement in software design, leading to cost-effective products.

15.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena, e.g. theories of the actual behaviour of software projects at the level of the project and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy has the basic requirements that data be time-indexed, relatively fine-grained and collected contemporaneously with the events to which the data refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

16.

Context

Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost.

Objective

The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects.

Method

We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis.

Results

We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of products and services, efficient outsourcing relationship management, and an organisation's track record of successful projects, which are generally considered important by outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common across three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common across the three sizes of organisation (small, medium and large). We also identified four factors, namely appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource, as being common across the two decades studied (1990-1999 and 2000-mid 2008).

Conclusions

Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in the OSDO business, such as skilled human resource, appropriate infrastructure and quality of products and services.

17.

Context

The globalisation of activities associated with software development and use has introduced many challenges in practice, and also (therefore) many for research. While the predominant approach to research in software engineering has followed a positivist science model, this approach may be sub-optimal when addressing problems with a dominant social or cultural dimension, such as those frequently encountered when studying work practices in a globally distributed team setting. The investigation of such a team reported in this paper provides one example of an alternative approach to research in a global context, through a longitudinal interpretive field study seeking to understand how global virtual teams mediated the use of technology. The study involved a large collective of faculty and support staff plus student members based in the geographically and temporally distant locations of New Zealand, the United States of America and Sweden.

Objective

Our focus in this paper is on the conduct of research in the context of global software activities, and in particular, as applied to the actions and interactions of global virtual teams. We consider the appropriateness of various methodologies and methods in enabling such issues to be addressed.

Method

We describe how we undertook a substantial field study of global virtual teams, and highlight how the adopted structuration theory, action research and grounded theory methodologies, applied to the analysis of email data, enabled us to deliver effectively against our goals.

Results

We believe that the approach taken suited a research context in which situated practices were occurring over time in a highly complex domain, ensuring that our results were both strongly grounded and relevant to practice. It has resulted in the generation of substantive theory and techniques that have been adapted and applied on a pilot basis in further field settings.

Conclusion

We conclude that globally distributed teamwork presents a complex context which demands new research approaches, beyond the limited set customarily applied by software engineering researchers. We advocate experimenting with different research methodologies and methods so that we have a more rounded repertoire to address the most important and relevant issues in global software development research, with the forms of rigour that suit the chosen approach.

18.

Context

We are strong advocates of evidence-based software engineering (EBSE) in general and systematic literature reviews (SLRs) in particular. We believe it is essential that the SLR methodology is used constructively to support software engineering research.

Objective

This study aims to assess the value of mapping studies which are a form of SLR that aims to identify and categorise the available research on a broad software engineering topic.

Method

We conducted a multi-case, participant-observer case study drawing on five examples of studies that were based on preceding mapping studies. We also validated our results by contacting two other researchers who had undertaken studies based on preceding mapping studies and by assessing review comments related to our follow-on studies.

Results

Our original case study identified 11 unique benefits that can accrue from basing research on a preceding mapping study, of which only two were case specific. We also identified nine problems associated with using preceding mapping studies, of which two were case specific. These results were consistent with the information obtained from the validation activities. We did not find an example of an independent research group making use of a mapping study produced by other researchers.

Conclusion

Mapping studies can save time and effort for researchers and provide baselines to assist new research efforts. However, they must be of high quality in terms of completeness and rigour if they are to be a reliable basis for follow-on research.

19.

Context

Formal methods are very useful in the software industry and are becoming of paramount importance to practical engineering techniques. They involve the design and modeling of various system aspects, usually expressed through different paradigms. These different formalisms make the verification of the developed system as a whole more difficult.

Objective

In this paper, we propose to combine two modeling formalisms in order to express both the functional and the timed security requirements of a system, so that all the requirements are expressed in a single formalism.

Method

First, the system behavior is specified according to its functional requirements using the Timed Extended Finite State Machine (TEFSM) formalism. Second, this model is augmented by applying a set of dedicated algorithms to integrate timed security requirements specified in the Nomad language. This language is well suited to expressing security properties such as permissions, prohibitions and obligations with time considerations.
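The augmentation step can be pictured as adding extra guards to the transitions of the functional model. The sketch below is purely illustrative: it does not use Nomad syntax, and the state, action and clock names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    src: str
    dst: str
    action: str
    guards: list = field(default_factory=list)   # boolean conditions over clocks/variables

def add_timed_prohibition(transitions, action, guard):
    """Augment every transition carrying `action` with an extra guard that
    encodes a timed prohibition (e.g. 'action forbidden within t seconds of X')."""
    for t in transitions:
        if t.action == action:
            t.guards.append(guard)
    return transitions

model = [Transition("Idle", "Auth", "login"), Transition("Auth", "Booked", "book_ticket")]
add_timed_prohibition(model, "book_ticket", "clock_since_login > 2")  # invented constraint
print(model[1].guards)
```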

Results

The proposed algorithms produce a global TEFSM specification of the system that includes both its functional and security timed requirements.

Conclusion

It is concluded that it is possible to merge several requirement aspects described with different formalisms into a global specification that can be used for several purposes such as code generation, specification correctness proof, model checking or automatic test generation. In this paper, we applied our approach to a France Telecom Travel service to demonstrate its scalability and feasibility.

20.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already-built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality prediction problems involving more than two classification labels.
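As a rough sketch of the adaptation idea, the snippet below uses an ant-colony-style search to re-tune the thresholds of a simple classification rule against context-specific data. The rule, the candidate thresholds and the data are invented placeholders, and the authors' actual algorithm is not reproduced.

```python
import random

def accuracy(thresholds, data):
    """Fraction of context-specific examples the adapted rule classifies correctly."""
    hits = 0
    for x, label in data:
        predicted = int(x["coupling"] > thresholds["coupling"] or
                        x["size"] > thresholds["size"])
        hits += (predicted == label)
    return hits / len(data)

def aco_adapt(data, candidates, ants=10, iterations=30, rho=0.1):
    """Adapt the rule's thresholds to new data with a simple ant-colony search."""
    pheromone = {v: [1.0] * len(c) for v, c in candidates.items()}
    best, best_acc = None, -1.0
    for _ in range(iterations):
        for _ in range(ants):
            # Each ant picks one candidate value per threshold, guided by pheromone
            choice = {v: random.choices(range(len(c)), weights=pheromone[v])[0]
                      for v, c in candidates.items()}
            thresholds = {v: candidates[v][i] for v, i in choice.items()}
            acc = accuracy(thresholds, data)
            if acc > best_acc:
                best, best_acc, best_choice = thresholds, acc, choice
        for v in pheromone:   # evaporation plus reinforcement of the best trail so far
            pheromone[v] = [(1 - rho) * p for p in pheromone[v]]
            pheromone[v][best_choice[v]] += rho * best_acc
    return best, best_acc

data = [({"coupling": 9, "size": 300}, 1), ({"coupling": 2, "size": 80}, 0),
        ({"coupling": 7, "size": 120}, 1), ({"coupling": 3, "size": 500}, 1)]
candidates = {"coupling": [2, 4, 6, 8], "size": [100, 200, 400]}
print(aco_adapt(data, candidates))
```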

Results

Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines for attaining it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality, since data are very scarce and predictive models built from one data set are therefore hard to generalize and reuse on new data.
