Similar Articles
3.

Context

Release scheduling deals with the selection and assignment of deliverable features to a sequence of consecutive product deliveries while several constraints are fulfilled. Although agile software development represents a major approach to software engineering, there is no well-established conceptual definition and sound methodological support of agile release scheduling.

Objective

To propose a solution, we present (1) a conceptual model for agile scheduling, (2) a novel multiple knapsack-based optimization model, and (3) a branch-and-bound optimization algorithm for agile release scheduling.
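
For intuition, here is a minimal branch-and-bound sketch of the multiple-knapsack idea, with hypothetical features, efforts, values and release capacities; the paper's actual model is richer (it also addresses resource load and delivery risk).

```python
# Minimal branch-and-bound sketch of multiple-knapsack release scheduling.
# All data are hypothetical; the published model handles more constraints.
features = [  # (name, effort, business value)
    ("login", 5, 8), ("search", 7, 10), ("export", 4, 5), ("audit", 6, 7),
]
capacity = [10, 9]  # effort capacity of release 1 and release 2
best = {"value": -1, "plan": None}

def solve(i, load, value, plan):
    """Branch on placing feature i in each release or in the backlog."""
    bound = value + sum(v for _, _, v in features[i:])  # optimistic bound
    if bound <= best["value"]:
        return                                          # prune this branch
    if i == len(features):
        best["value"], best["plan"] = value, dict(plan)
        return
    name, effort, val = features[i]
    for r in range(len(capacity)):
        if load[r] + effort <= capacity[r]:
            load[r] += effort
            plan[name] = r + 1
            solve(i + 1, load, value + val, plan)
            load[r] -= effort                           # backtrack
            del plan[name]
    solve(i + 1, load, value, plan)                     # leave in backlog

solve(0, [0] * len(capacity), 0, {})
print(best["plan"], "total value:", best["value"])
```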

Method

To evaluate our model, simulations were carried out on seven real-life and several generated data sets.

Results

The developed algorithm strives to prevent resource overload and resource underload, and mitigates risks of delivery slippage.

Conclusion

The results of the experiment suggest that this approach can provide optimized, semi-automatic release schedule generation and support more informed decisions through on-the-fly what-if analysis, tailoring the best schedule to the specific project context.

4.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, we fill this gap by introducing a method for applying SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

5.

Context

In training disciplined software development, the PSP is said to result in such effects as increased estimation accuracy, better software quality, earlier defect detection, and improved productivity. However, the existing literature offers little in the way of a systematic mechanism that can be easily adopted to assess and interpret these effects.

Objective

The purpose of this study is to explore the possibility of devising a feasible assessment model that ties critical software engineering values to the pertinent PSP metrics.

Method

A systematic review of the literature was conducted to establish such an assessment model (which we call the Plan-Track-Review model). Both mean and median approaches, along with a set of simplified procedures, were used to assess the commonly accepted PSP training effects. A series of statistical analyses then followed to increase understanding of the relationships among the PSP metrics and to help interpret the application results.
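
A minimal sketch of the mean/median style of effect assessment, using a hypothetical metric (size-estimation error in early versus late PSP programs) and made-up data:

```python
# Hypothetical per-engineer size-estimation errors, |est - actual| / actual,
# averaged over early (1-3) and late (7-9) PSP programs.
from statistics import mean, median

early = [0.42, 0.35, 0.50, 0.28, 0.61]
late = [0.30, 0.33, 0.22, 0.25, 0.40]

for label, agg in (("mean", mean), ("median", median)):
    improvement = (agg(early) - agg(late)) / agg(early)
    print(f"{label}: {agg(early):.2f} -> {agg(late):.2f} "
          f"({improvement:.0%} improvement)")
```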

Results

Based on the results of this study, the PSP training effect on an engineer's controllability, manageability, and reliability is quite positive and largely consistent with the literature. However, its effect on one's ability to predict project outcomes in general (and project size in particular) is not as evident as the literature suggests. As for overall project efficiency, our results show a moderate improvement. Our initial findings also suggest that earlier-stage PSP effects could influence later-stage training outcomes.

Conclusion

It is concluded that the Plan-Track-Review model and its associated framework can be used to assess PSP effects on disciplined software development. The generated summary report provides useful feedback for both PSP instructors and students against internal as well as external standards.

6.

Context

Component identification, the process of evolving a legacy system into a finely organized component-based system, is a critical part of software reengineering. Many component identification approaches have been developed based on agglomerative hierarchical clustering algorithms, but there has been little thorough investigation of which algorithm is appropriate for component identification.

Objective

This paper focuses on analyzing agglomerative hierarchical clustering algorithms in software reengineering, and then identifying their respective strengths and weaknesses in order to apply them effectively for future practical applications.

Method

A series of experiments were conducted for 18 clustering strategies combined from various similarity measures, weighting schemes and linkage methods. Eleven subject systems with different application domains and source code sizes were used in the experiments. The component identification results were evaluated against the proposed size, coupling and cohesion criteria.
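
To make one of the 18 strategy combinations concrete, here is a sketch (assuming SciPy and toy data) of Jaccard similarity with complete linkage over a binary entity-by-feature matrix; other combinations swap the metric, weighting or linkage.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Rows: program entities; columns: resources/attributes each entity uses.
X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=bool)

dist = pdist(X, metric="jaccard")        # 1 - Jaccard similarity
tree = linkage(dist, method="complete")  # farthest-neighbour merging
print(fcluster(tree, t=0.6, criterion="distance"))  # component per entity
```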

Results

The experimental results suggest that the choice of similarity measure, weighting scheme and linkage method has a marked effect on component identification with respect to the proposed size, coupling and cohesion criteria, so the hierarchical clustering algorithms produced quite different clustering results.

Conclusions

According to the experimental results, no single clustering algorithm produces perfectly satisfactory results. Nevertheless, the algorithms demonstrated varied capabilities to identify components with respect to the proposed size, coupling and cohesion criteria.

7.

Context

During development, managers, analysts and designers often need to know whether enough requirements analysis work has been done and whether it is safe to proceed to the design stage.

Objective

This paper describes a new, simple and practical method for assessing our confidence in a set of requirements.

Method

We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project.
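
For flavour, a minimal sketch of an ordinal assessment with conservative (weakest-link) aggregation; the factor names are hypothetical, not necessarily the paper's four factors.

```python
SCALE = {"low": 0, "medium": 1, "high": 2}

assessment = {  # analyst's rating per (hypothetical) confidence factor
    "stakeholder agreement": "high",
    "requirement stability": "medium",
    "goal coverage": "medium",
    "testability": "low",
}

# Conservative rule: overall confidence equals the weakest factor.
overall = min(assessment.values(), key=SCALE.get)
print("overall confidence:", overall)
```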

Results

We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money.

Conclusion

Our meta-level assessment of requirements provides a practical and pragmatic method that can prove useful to managers, analysts and designers who need to know when sufficient requirements analysis has been performed.

8.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena, e.g. theories of the actual behaviour of software projects at the project level and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy has basic requirements: the data must be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which they refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

9.

Context

One of the difficulties faced by software development Project Managers is estimating the cost and schedule for new projects. Previous industry surveys have concluded that software size and cost estimation is a significant technical area of concern. In order to estimate cost and schedule it is important to have a good understanding of the size of the software product to be developed. There are a number of techniques used to derive software size, with function points being amongst the most documented.

Objective

In this paper we explore the utility of function point software sizing techniques when applied to two levels of software requirements documentation in a commercial software development organisation. The goal of the research is to appraise the value (cost/benefit) which functional sizing techniques can bring to the project planning and management of software projects within a small-to-medium sized software development enterprise (SME).

Method

Functional counts were made at the bid and detailed functional specification stages for each of five commercial projects used in the research. Three variants of the NESMA method were used to determine these function counts. Through a structured interview session, feedback on the sizing results was obtained to evaluate their feasibility and potential future contribution to the company.
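
For orientation, a sketch of the two early NESMA-style counts using commonly cited weights (verify against the NESMA counting manual before relying on them; the project figures here are invented):

```python
def indicative_fp(ilf, eif):
    """Indicative method: data functions only."""
    return 35 * ilf + 15 * eif

def estimated_fp(ilf, eif, ei, eo, eq):
    """Estimated method: data functions rated Low, transactions Average."""
    return 7 * ilf + 5 * eif + 4 * ei + 5 * eo + 4 * eq

print(indicative_fp(ilf=4, eif=2))                    # bid stage
print(estimated_fp(ilf=4, eif=2, ei=10, eo=6, eq=5))  # specification stage
```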

Results

The results of our research suggest there is value in performing size estimates at two appropriate stages in the software development lifecycle, with simplified methods providing the optimal return on effort expended.

Conclusion

The ‘Estimated NESMA’ is the most appropriate tool for size estimation in the company studied. Software sizing provides a valuable contribution that would augment, but not replace, the company’s existing cost estimation approach.

10.

Context

Testing a module that has memory using the black-box approach has been found to be expensive and relatively ineffective. Conversely, testing without knowledge of the specifications (the white-box approach) may not be effective in showing whether a program has been implemented as stated in its specifications. We therefore propose a grey-box approach called Module Documentation-based Testing, or MD-Test, the heart of which is the automatic generation of the test oracle from the external and internal views of the module.

Objective

This paper presents an empirical analysis and comparison of MD-Test against three existing testing tools.

Method

The experiment was conducted using a mutation-testing approach, in two phases that assessed the capability of MD-Test in general and its capability of evaluating test results in particular.
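
The core of mutation testing can be sketched in a few lines: seed small faults into a subject and count how many of them the tests kill (toy subject and mutants, not the study's actual setup).

```python
SUBJECT = "def add(a, b):\n    return a + b\n"
MUTANTS = [SUBJECT.replace("a + b", m) for m in ("a - b", "a * b", "a")]

def tests_pass(source):
    env = {}
    exec(source, env)  # load the subject (or a mutant) dynamically
    return env["add"](2, 3) == 5 and env["add"](0, 4) == 4

killed = sum(not tests_pass(m) for m in MUTANTS)
print(f"mutation score: {killed}/{len(MUTANTS)}")
```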

Results

The results of the general assessment indicate that MD-Test is more effective than the other three tools under comparison, being able to detect all faults. The second phase of the experiment, which is central to this study, compares the capabilities of MD-Test and JUnit-black using the test evaluation results. The analysis of these results shows that MD-Test is more effective and efficient, detecting at least as many faults as the black-box approach.

Conclusion

It is concluded that test evaluation using the grey-box approach is more effective and efficient than the black-box approach when testing a module that has memory.

11.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture designs and application contexts, which makes architecture goal definition and architecture design non-trivial steps rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures, based on state-of-the-art results from the literature and practice. We validate the framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared to the opinions of experts on these reference architectures, documented in the literature and in dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and five types of reference architectures formed by combining specific values from that space. Reference architectures that can be classified as one of these types have a better chance of success. The validation of our framework confirms its quality as a tool for analysing the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture's chance of success.

12.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes, and software quality prediction models have been used extensively for this purpose. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, prediction models built on one data set show significantly degraded accuracy when used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already-built predictive models and adapts them (one at a time) to new data, using an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be extended to predictive problems involving more than two classification labels.
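
A heavily reduced sketch of the adaptation idea (not the authors' algorithm): ants sample candidate perturbations of an existing rule's threshold, and pheromone accumulates on perturbations that classify the new data well.

```python
import random

new_data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # (metric, label)
candidates = [0.3, 0.5, 0.7]  # candidate thresholds for the inherited rule
pheromone = {c: 1.0 for c in candidates}

def accuracy(th):
    return sum((x > th) == bool(y) for x, y in new_data) / len(new_data)

for _ in range(20):  # colony iterations
    weights = [pheromone[c] for c in candidates]
    th = random.choices(candidates, weights)[0]  # pheromone-biased pick
    for c in pheromone:
        pheromone[c] *= 0.9                      # evaporation
    pheromone[th] += accuracy(th)                # reinforcement

best = max(candidates, key=pheromone.get)
print("adapted threshold:", best, "accuracy on new data:", accuracy(best))
```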

Results

Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines to attain it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models already built from common domain data and adapting them to context-specific data. This suits the domain of software quality, where data are very scarce and predictive models built from one data set are hard to generalize and reuse on new data.

13.

Context

Customer collaboration is a vital feature of Agile software development.

Objective

This article addresses the importance of adequate customer involvement on Agile projects, and the impact of different levels of customer involvement on real-life Agile projects.

Method

We conducted a Grounded Theory study involving 30 Agile practitioners from 16 software development organizations in New Zealand and India, over a period of 3 years.

Results

We discovered that Lack of Customer Involvement was one of the biggest challenges faced by Agile teams. Customers were not as involved on these Agile projects as Agile methods demand. We describe the causes of inadequate customer collaboration, its adverse consequences on self-organizing Agile teams, and Agile Undercover — a set of strategies used by the teams to practice Agile despite insufficient or ineffective customer involvement.

Conclusion

Customer involvement is important on Agile projects. Inadequate customer involvement causes adverse problems for Agile teams. The Agile Undercover strategies we’ve identified can assist Agile teams facing a similar lack of customer involvement.

14.

Context

Studies on global software development have documented severe coordination and communication problems among coworkers due to geographic dispersion and consequent dependency on technology. These problems are exacerbated by increase in the complexity of work undertaken by global teams. However, despite these problems, global software development is on the rise and firms are adopting global practices across the board, raising the question: What does successful global software development look like and what can we learn from its practitioners?

Objective

This study draws on practice-based studies of work to examine successful work practices of global software developers. The primary aim of this study was to understand how workers develop practices that allow them to function effectively across geographically dispersed locations.

Method

An ethnographically-informed field study was conducted with data collection at two international locations of a firm. Interview, observation and archival data were collected. A total of 42 interviews and 3 weeks of observations were conducted.

Results

Teams spread across different locations around the world developed work practices through sociomaterial bricolage. Two facets of technology use were necessary for the creation of these practices: multiplicity of media and relational personalization at dyadic and team levels. New practices were triggered by the need to achieve a work-life balance, which was disturbed by global development. Reflecting on my role as a researcher, I underscore the importance of understanding researchers’ own frames of reference and using research practices that mirror informants’ work practices.

Conclusion

Software developers on global teams face unique challenges which necessitate a shift in their work practices. Successful teams are able to create practices that span locations while remaining tied to location-based practices. Inventive use of material and social resources is central to the creation of these practices.

15.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).
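
Assuming the 46% reduction is computed against the MPI mean, the reported figures imply roughly:

$$\bar t_{\mathrm{MPI}} \approx \frac{4.8\ \text{h}}{0.46} \approx 10.4\ \text{h}, \qquad \bar t_{\mathrm{XMTC}} \approx 10.4\ \text{h} - 4.8\ \text{h} \approx 5.6\ \text{h}.$$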

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary which examine different types of problems and different levels of programmer experience.

16.

Context

For large software projects it is important to have some traceability between artefacts from different phases (e.g. requirements, designs, code), and between artefacts and the involved developers. However, if capturing traceability information during the project feels laborious, developers will often be sloppy about registering the relevant traceability links, leaving the information incomplete. This makes automated, tool-based collection of traceability links a tempting alternative, but it has the opposite challenge of generating too many potential trace relationships, not all of which are equally relevant.

Objective

This paper evaluates how to rank such auto-generated trace relationships.

Method

We present two approaches for such a ranking: a Bayesian technique and a linear inference technique. Both techniques depend on the interaction event trails left behind by collaborating developers while working within a development tool.
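
A minimal sketch of the linear-inference flavour (illustrative weighting only): artefacts touched close together in a developer's event trail score higher, so auto-generated trace links can be ranked by that score.

```python
from collections import Counter

# One developer's artefact interactions, in chronological order.
trail = ["Req12", "DesignA", "Req12", "ClassFoo", "DesignA", "ClassFoo"]

WINDOW, scores = 3, Counter()
for i in range(len(trail)):
    for j in range(i + 1, min(i + WINDOW, len(trail))):
        if trail[i] != trail[j]:
            pair = tuple(sorted((trail[i], trail[j])))
            scores[pair] += 1.0 / (j - i)  # nearer events weigh more

for pair, score in scores.most_common():  # ranked candidate trace links
    print(pair, round(score, 2))
```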

Results

The outcome of a preliminary study suggests the advantage of the linear approach; we also explore the challenges and potential of the two techniques.

Conclusion

The advantage of the two techniques is that they can be used to provide traceability insights that are contextual and would have been much more difficult to capture manually. We also present some key lessons learnt during this research.

17.

Context

Formal methods are very useful in the software industry and are becoming of paramount importance in practical engineering techniques. They involve designing and modeling various system aspects, usually expressed through different paradigms. These differing formalisms make verifying the developed system as a whole more difficult.

Objective

In this paper, we propose combining two modeling formalisms so that both the functional and the timed security requirements of a system are expressed in a single formalism.

Method

First, the system behavior is specified according to its functional requirements using the Timed Extended Finite State Machine (TEFSM) formalism. Second, this model is augmented by applying a set of dedicated algorithms to integrate timed security requirements specified in the Nomad language, which is suited to expressing security properties such as permissions, prohibitions and obligations with time considerations.
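
A toy rendering of the integration step, with hypothetical states and a made-up prohibition rule (the real TEFSM and Nomad semantics are considerably richer): a timed transition is augmented so that the security rule becomes an additional guard.

```python
transitions = {
    # (state, input) -> (next state, guard over clock t in seconds)
    ("idle", "book"): ("booking", lambda t: True),
    ("booking", "pay"): ("paid", lambda t: t <= 300),
}

# Nomad-style rule (hypothetical): prohibition of "pay" while t < 5.
nxt, guard = transitions[("booking", "pay")]
transitions[("booking", "pay")] = (nxt, lambda t: guard(t) and t >= 5)

_, g = transitions[("booking", "pay")]
print(g(2), g(60), g(400))  # False True False: rule and timing enforced
```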

Results

The proposed algorithms produce a global TEFSM specification of the system that includes both its functional and security timed requirements.

Conclusion

It is concluded that it is possible to merge several requirement aspects described in different formalisms into a global specification that can be used for several purposes, such as code generation, specification correctness proofs, model checking or automatic test generation. We applied our approach to a France Telecom Travel service to demonstrate its scalability and feasibility.

18.
A methodology to assess the impact of design patterns on software quality

Context

Software quality is considered one of the most important concerns of software production teams. Design patterns are documented solutions to common design problems that are expected to enhance software quality, yet results on their actual effect on software quality remain controversial.

Aims

This study aims to propose a methodology for analytically comparing design patterns to alternative designs. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. It is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as functions of these numbers of classes. The exploratory designs were produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
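
As a hypothetical instance of such equations (not taken from the paper), a Strategy-like pattern might trade class count for per-method complexity as the number of alternative behaviours $n$ grows:

$$\mathrm{NOC}_{\text{pattern}}(n) = n + 2, \quad \mathrm{NOC}_{\text{alt}}(n) = 1, \qquad \mathrm{CC}^{\max}_{\text{pattern}}(n) = 1, \quad \mathrm{CC}^{\max}_{\text{alt}}(n) = n + 1.$$

The crossover value of $n$ at which one design starts to dominate, given a weighting of the attributes, is exactly the kind of threshold the methodology aims to expose.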

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality while weakening others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns with alternative designs, and it highlights thresholds beyond which the design pattern becomes more or less beneficial than the alternative. Identifying such thresholds can be very useful for decision making during system design and refactoring.

19.

Context

User participation in information system (IS) development has received much research attention. However, prior empirical research regarding the effect of user participation on IS success is inconclusive. This might be because previous studies overlook the effect of the particular components of user participation and other possible mediating factors.

Objective

The objective of this study is to empirically examine how user influence and user responsibility affect IS project performance. We inspect whether user influence and user responsibility improve the quality of the IS development process and in turn lead to project success, or whether they have a direct positive influence on project success.

Method

We conducted a survey of 151 IS project managers in order to understand the impact of user influence and user responsibility on IS project performance. Regression analysis was conducted to assess the relationship among user influence, user responsibility, organizational technology learning, project control, user-developer interaction, and IS project management performance.

Results

This study shows that user responsibility and user influence have a positive effect on project performance through the promotion of IS development processes as mediators, including organizational technology learning, project control, and user-IS interaction.

Conclusion

Our results suggest that user responsibility and user influence play important roles in indirectly and directly impacting project management performance, respectively. The analysis implies that organizations and project managers should use both user participation and user influence to improve process performance and, in turn, increase project success.

20.

Context

Staff turnover in organizations is an important issue that should be taken into account, mainly for two reasons:
1. Employees carry an organization’s knowledge in their heads and take it with them wherever they go.
2. Knowledge accessibility is limited to the amount of knowledge employees want to share.

Objective

The aim of this work is to provide a set of guidelines to develop knowledge-based Process Asset Libraries (PAL) to store software engineering best practices, implemented as a wiki.

Method

Fieldwork was carried out in a 2-year training course in agile development. This was validated in two phases (with and without PAL), which were subdivided into two stages: Training and Project.

Results

The study demonstrates that, on the one hand, the learning process can be facilitated using PAL to transfer software process knowledge, and on the other hand, products were developed by junior software engineers with a greater degree of independence.

Conclusion

PAL, as a knowledge repository, helps software engineers learn about development processes and improves the use of agile processes.
