Similar Literature
 20 similar documents found (search time: 31 ms)
1.
2.

Aim

In this article, factors influencing the motivation of software engineers are studied with the goal of guiding the definition of motivational programs.

Method

Using a set of 20 motivational factors compiled in a systematic literature review and a general theory of motivation, a survey questionnaire was created to evaluate the influence of these factors on individual motivation. The questionnaire was then administered to a semi-random sample of 176 software engineers from 20 software companies located in Recife-PE, Brazil.

Results

The survey results show the actual level of motivation for each motivator in the target population. Using principal component analysis on the values of all motivators, a five-factor structure was identified and used to propose a guideline for the creation of motivational programs for software engineers.
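A minimal sketch of how such a factor structure might be extracted from survey responses with principal component analysis; the simulated response matrix, the Likert scale, and the choice of five components are illustrative assumptions, not the authors' actual instrument or analysis.

```python
# Illustrative sketch: extracting a five-component structure from survey data.
# The response matrix and the number of components are assumptions for
# demonstration; the paper's actual questionnaire and analysis may differ.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 176 respondents rating 20 motivational factors on a 1-5 Likert scale (simulated).
responses = rng.integers(1, 6, size=(176, 20)).astype(float)

# Standardize each motivator before extracting components.
scaled = StandardScaler().fit_transform(responses)

pca = PCA(n_components=5)
scores = pca.fit_transform(scaled)

print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# Loadings show which motivators group together under each component.
loadings = pca.components_.T  # shape: (20 motivators, 5 components)
print("Loadings shape:", loadings.shape)
```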

Conclusions

The five-factor structure provides an intuitive categorization for the set of variables and can be used to explain other motivational models presented in the literature. This contributes to a better understanding of motivation in software engineering.

3.

Context

User participation in information system (IS) development has received much research attention. However, prior empirical research regarding the effect of user participation on IS success is inconclusive. This might be because previous studies overlook the effect of the particular components of user participation and other possible mediating factors.

Objective

The objective of this study is to empirically examine how user influence and user responsibility affect IS project performance. We inspect whether user influence and user responsibility improve the quality of the IS development process and in turn lead to project success, or whether they have a direct positive influence on project success.

Method

We conducted a survey of 151 IS project managers in order to understand the impact of user influence and user responsibility on IS project performance. Regression analysis was conducted to assess the relationship among user influence, user responsibility, organizational technology learning, project control, user-developer interaction, and IS project management performance.
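A minimal sketch of how a mediated relationship of this kind could be checked with ordinary least-squares regressions; the variable names, the simulated data, and the two-step procedure are assumptions for illustration, not the study's actual analysis.

```python
# Illustrative sketch of a simple mediation check with OLS regressions:
# does user influence affect project performance through project control?
# Variable names and simulated data are assumptions, not the study's data set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 151
user_influence = rng.normal(size=n)
project_control = 0.6 * user_influence + rng.normal(scale=0.8, size=n)   # mediator
performance = 0.5 * project_control + 0.1 * user_influence + rng.normal(scale=0.8, size=n)

# Step 1: predictor -> mediator
m1 = sm.OLS(project_control, sm.add_constant(user_influence)).fit()
# Step 2: predictor + mediator -> outcome
X = sm.add_constant(np.column_stack([user_influence, project_control]))
m2 = sm.OLS(performance, X).fit()

print("influence -> control:", round(m1.params[1], 3))
print("direct effect of influence:", round(m2.params[1], 3))
print("effect of control (mediator):", round(m2.params[2], 3))
```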

Results

This study shows that user responsibility and user influence have a positive effect on project performance through the promotion of IS development processes as mediators, including organizational technology learning, project control, and user-IS interaction.

Conclusion

Our results suggest that user responsibility and user influence play important roles in indirectly and directly impacting project management performance, respectively. The results of the analysis imply that organizations and project managers should use both user participation and user influence to improve process performance and, in turn, increase project success.

4.

Context

Pointer analysis is an important building block of optimizing compilers and program analyzers for the C language. Various methods with precision and performance trade-offs have been proposed. Among them, cycle elimination has been successfully used to improve the scalability of context-insensitive pointer analyses without losing any precision.

Objective

In this article, we present a new method for context-sensitive pointer analysis with an effective application of cycle elimination.

Method

To obtain similar benefits of cycle elimination for context-sensitive analysis, we propose a novel constraint-based formulation that uses sets of contexts as annotations. Our method is not based on binary decision diagrams (BDDs). Instead, we directly use invocation graphs to represent context sets and apply a hash-consing technique to deal with the exponential blow-up of contexts.
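A minimal sketch of the hash-consing idea mentioned above: structurally equal context sets are canonicalized to one shared object so that comparisons and reuse stay cheap. The representation of a context as a tuple of call sites, and the helper name, are assumptions for demonstration.

```python
# Illustrative sketch of hash-consing context sets: structurally equal sets are
# canonicalized to a single shared object, so equality checks on annotations
# stay cheap even when contexts are numerous. Representing a "context" as a
# tuple of call sites is an assumption for demonstration.
_interned = {}

def intern_context_set(contexts):
    """Return a canonical, shared frozenset for the given contexts."""
    key = frozenset(contexts)
    return _interned.setdefault(key, key)

# Two analyses building the "same" context set get the identical object back.
a = intern_context_set([("main", 1), ("foo", 3)])
b = intern_context_set([("foo", 3), ("main", 1)])
assert a is b            # O(1) identity check replaces deep comparison

# Unions of annotations are themselves interned, so shared results are reused.
c = intern_context_set(a | intern_context_set([("bar", 7)]))
print(len(_interned), "distinct context sets interned")
```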

Result

Experimental results on C programs ranging from 20,000 to 290,000 lines show that applying cycle elimination to our new formulation results in a 4.5× speedup over the previous BDD-based approach.

Conclusion

We showed that cycle elimination is an effective method for improving the scalability of context-sensitive pointer analysis.

5.

Context

Software defect prediction studies have usually built models using within-company data, but very few have focused on prediction models trained with cross-company data. In practice, it is difficult to employ models built on within-company data because such local data repositories are often lacking. Recently, transfer learning has attracted more and more attention for building classifiers in a target domain using data from a related source domain. It is very useful when the distributions of training and test instances differ, but is it appropriate for cross-company software defect prediction?

Objective

In this paper, we consider the cross-company defect prediction scenario, where source and target data are drawn from different companies. In order to harness cross-company data, we exploit transfer learning to build a fast and highly effective prediction model.

Method

Unlike prior works that select training data similar to the test data, we propose a novel algorithm called Transfer Naive Bayes (TNB), which uses the information of all the proper features in the training data. Our solution estimates the distribution of the test data and transfers cross-company data information into the weights of the training data. The defect prediction model is then built on these weighted data.
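A minimal sketch of this weighting idea: each source-company training instance is weighted by how well its feature values fall within the ranges observed in the target (test) data, and a Naive Bayes model is fit on the weighted instances. The specific weighting formula and the simulated data below are assumptions for demonstration and may differ from the paper's exact TNB definition.

```python
# Illustrative sketch of TNB-style instance weighting for cross-company data.
# The weighting formula below is an assumption for demonstration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 5))
y_train = rng.integers(0, 2, size=200)
X_test = rng.normal(loc=0.5, size=(80, 5))           # different distribution

lo, hi = X_test.min(axis=0), X_test.max(axis=0)       # target-data feature ranges
s = ((X_train >= lo) & (X_train <= hi)).sum(axis=1)   # features inside the range
k = X_train.shape[1]
weights = s / (k - s + 1.0) ** 2                      # more "similar" -> larger weight

model = GaussianNB().fit(X_train, y_train, sample_weight=weights)
print("Predicted defect labels for first 5 test modules:", model.predict(X_test[:5]))
```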

Results

This article presents a theoretical analysis of the comparative methods and shows experimental results on data sets from different organizations. They indicate that TNB is more accurate in terms of AUC (the area under the receiver operating characteristic curve) and requires less runtime than the state-of-the-art methods.

Conclusion

It is concluded that when there are too few local training data to train good classifiers, useful knowledge from training data with a different distribution may help at the feature level. We are optimistic that our transfer learning method can guide optimal resource allocation strategies, which may reduce software testing cost and increase the effectiveness of the software testing process.

6.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived from either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to the user needs, but the schema may not be supported by the actual data availability. On the contrary, the second ensures perfect traceability and consistency with the data sources, since it guarantees the presence of data to be used in analytical processing, but it does not prevent missing business user needs. To face this issue, the necessity has emerged in recent years to define hybrid methodologies for conceptual design.

Objective

The objective of the paper is to use a hybrid methodology based on different multidimensional models in order to combine the advantages of each of them.

Method

The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas reconciled with data sources.

Results

As a case study, we illustrate how our methodology can be applied to the university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular and conventional methodologies.

Conclusion

In conclusion, we highlight how the hybrid methodology improves the quality of the conceptual schema. Finally, we outline our ongoing work devoted to introducing automatic design techniques into the methodology on the basis of logic programming.

7.

Context

A software artefact typically makes its functionality available through a specialized Application Programming Interface (API) describing the set of services offered to client applications. In fact, building any software system usually involves managing a plethora of APIs, which complicates the development process. In Model-Driven Engineering (MDE), where models are the key elements of any software engineering activity, this API management should take place at the model level. Therefore, tools that facilitate the integration of APIs and MDE are clearly needed.

Objective

Our goal is to automate the implementation of API-MDE bridges for supporting both the creation of models from API objects and the generation of such API objects from models. In this sense, this paper presents the API2MoL approach, which provides a declarative rule-based language to easily write mapping definitions to link API specifications and the metamodel that represents them. These definitions are then executed to convert API objects into model elements or vice versa. The approach also allows both the metamodel and the mapping to be automatically obtained from the API specification (bootstrap process).
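A minimal, hypothetical sketch of the bridging idea described above: declarative mapping entries relate API types to metamodel elements, and a small engine walks API objects to produce model elements (the reverse direction would be analogous). This is not API2MoL's actual rule syntax; all names and structures here are assumptions for illustration.

```python
# Hypothetical sketch of an API-to-model bridge. Not API2MoL's real syntax.
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    type_name: str
    attributes: dict = field(default_factory=dict)

# Declarative mapping: API class name -> (metamodel type, attribute extractors).
MAPPING = {
    "Button": ("UIButton", {"label": lambda o: o["text"], "width": lambda o: o["w"]}),
    "Window": ("UIWindow", {"title": lambda o: o["title"]}),
}

def api_to_model(api_obj):
    """Convert one API object (represented here as a dict) into a model element."""
    type_name, extractors = MAPPING[api_obj["class"]]
    attrs = {name: extract(api_obj) for name, extract in extractors.items()}
    return ModelElement(type_name, attrs)

button = {"class": "Button", "text": "OK", "w": 80}
print(api_to_model(button))  # ModelElement(type_name='UIButton', attributes={'label': 'OK', 'width': 80})
```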

Method

After implementing the API2MoL engine, its correctness was validated using several APIs. Since APIs are normally large, we then developed a tool to implement the bootstrap process, which was also validated.

Results

We provide a toolkit (language and bootstrap tool) for the creation of bridges between APIs and MDE. The current implementation focuses on Java APIs, although its adaptation to other statically typed object-oriented languages is straightforward. The correctness, expressiveness and completeness of the approach have been validated with the Swing, SWT and JTwitter APIs.

Conclusion

API2MoL frees developers from having to manually implement the tasks of obtaining models from API objects and generating such objects from models. This helps to manage API models in MDE-based solutions.

8.

Objective

To determine personal and workplace factors associated with quad bike loss of control events (LCEs) on New Zealand farms.

Methods

Rural community databases were used to sample 130 farmers and farm employees (workers). Fieldwork and a survey investigated the prevalence of LCEs, farm type, farm terrain, personal measures, and vehicle driving exposures.

Results

Seventy-nine workers (61%) described a total of 200 LCEs. Increased driver height, increased body mass, non-flat farm terrain, increased driving speed and distance, and greater whole-body vibration exposure were significantly associated with LCEs.

Conclusions

Taller and heavier drivers of quad bikes should be particularly vigilant for risk of an LCE. Vehicle speed, distance driven and choice of driving routes over difficult terrain are potentially modifiable factors which have behavioural components and should be considered as management strategies for reducing risk of on-farm quad bike LCEs.

Relevance to industry

Quad bike accidents are a considerable problem in agriculture. This research has identified a number of physical and driving factors that should be considered in the management strategies for reducing risk of on-farm quad bike accidents.

9.

Context

Method engineering approaches are often based on the assumption that method users are able to explicitly express their situational method requirements. Similar to systems requirements, method requirements are often vague and hard to explicate. In this paper we address the issue of involving method users early in method configuration. This is done by borrowing ideas from user-centered design and prototyping and implementing them at the method engineering layer.

Objective

We design a computerized tool, MC Sandbox, to capture method requirements through method-user-centered method configuration, hence bridging the gap between systems developers’ and method engineers’ understanding of, and expectations for, a situational method.

Method

The research method adopted can be characterized as multi-grounded action research. Our implementation of multi-grounded action research follows the traditional ‘canonical’ action research method, which has cycles of diagnosing, action planning, action taking, evaluating, and specifying learning. The research project comprised three such action research cycles where 10 action cases were performed.

Results

MC Sandbox has proven useful in eliciting and negotiating method requirements in a continuously ongoing dialog between the method users and the method engineers during configuration workshops. The results also show that the method engineer role rotated among the systems developers and that they were indeed committed to the negotiated methods during the systems development projects.

Conclusion

It is possible for method users to actively participate and construct suitable situational methods if they are provided with appropriate high-level modelling concepts, such as method components, configuration packages and configuration templates. This way, the project members’ understanding of the current development practice develops incrementally, both in terms of understanding the needs and the available method support. In addition, both method requirements and commitments are made explicit, which are important aspects when working with method configuration from a collaboration point of view.

10.

Context

Customer collaboration is a vital feature of Agile software development.

Objective

This article addresses the importance of adequate customer involvement on Agile projects, and the impact of different levels of customer involvement on real-life Agile projects.

Method

We conducted a Grounded Theory study involving 30 Agile practitioners from 16 software development organizations in New Zealand and India, over a period of 3 years.

Results

We discovered that Lack of Customer Involvement was one of the biggest challenges faced by Agile teams. Customers were not as involved on these Agile projects as Agile methods demand. We describe the causes of inadequate customer collaboration, its adverse consequences on self-organizing Agile teams, and Agile Undercover — a set of strategies used by the teams to practice Agile despite insufficient or ineffective customer involvement.

Conclusion

Customer involvement is important on Agile projects. Inadequate customer involvement causes adverse problems for Agile teams. The Agile Undercover strategies we have identified can assist Agile teams facing a similar lack of customer involvement.

11.

Context

Different method calls may make different contributions to the precision of the final application when abstracted into call strings. Existing call string based pointer analysis algorithms do not consider such contribution differences and hence may not achieve the best cost-effectiveness.

Objective

To be more cost-effective, we try to leverage the contribution information of each method call in call string based pointer analysis.

Method

The paper first proposes a contribution-based call stack abstraction method, which abstracts call stacks into call strings while taking the contribution information into consideration. We then apply the new call stack abstraction method to the pointer analysis of AspectJ programs and propose a concern-sensitive points-to analysis method. In addition, the new abstraction method is applied to multi-threaded Java programs, resulting in a thread-sensitive pointer analysis method.
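A minimal sketch of the abstraction idea: instead of keeping a fixed-length suffix of the call stack (as in classic k-limited call strings), keep only the call sites whose estimated contribution to precision is high, up to a budget. The contribution scores, threshold, and call-site encoding below are assumptions for demonstration, not the paper's actual scheme.

```python
# Illustrative sketch of contribution-based call-stack abstraction.
# Contribution scores and the threshold are assumptions for demonstration.
def abstract_stack(call_stack, contribution, threshold=0.5, budget=3):
    """Abstract a call stack (outermost call first) into a call string.

    call_stack   -- list of call-site identifiers
    contribution -- dict mapping call site -> estimated contribution score
    """
    kept = [site for site in call_stack if contribution.get(site, 0.0) >= threshold]
    return tuple(kept[-budget:])      # keep the most recent high-contribution sites

stack = ["main:12", "dispatch:40", "log:7", "handler:88", "advice:3"]
scores = {"main:12": 0.2, "dispatch:40": 0.9, "log:7": 0.1,
          "handler:88": 0.8, "advice:3": 0.7}

print(abstract_stack(stack, scores))   # ('dispatch:40', 'handler:88', 'advice:3')
```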

Results

The experimental results show that the two pointer analysis methods with contribution-based call stack abstraction can be more cost-effective than ordinary call string based approaches, for an application that detects harmful advices and an application that detects inter-thread data flow.

Conclusion

These pointer analysis methods show concretely and clearly that contribution-based call stack abstraction can lead to better cost-effectiveness for the given applications.

12.

Context

Cost advantage has been one of the primary drivers of successful offshoring engagements of Indian software and services companies. However, the emphasis has shifted from cost advantage to the vendors’ ability to deliver high-quality software products and services. Meeting the clients’ high quality requirements is a challenge due to the very nature of developing and delivering software through offshoring.

Objective

The objective of this research paper is to identify and evaluate the key determinants of quality in the case of software projects delivered through the offshoring model.

Method

A detailed survey was conducted among project managers/project leaders (leads) of a leading midsize Indian IT services company to evaluate the relationship of the determinants with the attributes of quality.

Results

Out of six determinants, our research reveals that requirements uncertainty has a significant association with all the attributes of quality. While process maturity and trained personnel have a moderate association, communication and control, knowledge transfer and integration, and technical infrastructure have relatively low associations with software quality attributes in the case of offshoring.

Conclusion

It is concluded that the complexities in offshoring necessitate proper capturing of requirements. In addition, a high level of process maturity and the availability of trained personnel will help vendors achieve software quality. The paper provides a set of implications for practice and directions for further research.

13.

Context

In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers’ usability expectations and many companies have folded as a result of not considering Web usability issues. No studies currently exist with regard to either the use of usability evaluation methods for the Web or the benefits they bring.

Objective

The objective of this paper is to summarize the current knowledge that is available as regards the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years.

Method

A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner.

Results

The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. The results also show that the type of method most widely used was that of User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of the Web application development, which is the most costly phase in which to perform changes. A list of the UEMs that were found is also provided in order to guide novice usability practitioners.

Conclusions

From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state-of-the-art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.

14.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena, e.g. theories of the actual behaviour of software projects at the level of the project and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy requires that data be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which the data refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

15.
A methodology to assess the impact of design patterns on software quality

Context

Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, the results on the effect of design patterns on software quality have been controversial.

Aims

This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. The study is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as a function of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
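A minimal sketch of what such an analytical comparison can look like: a structural quality attribute is expressed as a function of the number of concrete variants for a pattern-based design and for an alternative design, and the threshold where one overtakes the other is located. The two equations below are invented for illustration, not the paper's actual models.

```python
# Illustrative, invented equations comparing a pattern-based design with an
# alternative design as a function of the number of concrete variants n.
def coupling_pattern(n):        # e.g. Strategy-like design: context + n strategies
    return n + 2                # one extra interface, constant client coupling

def coupling_alternative(n):    # e.g. conditional-based design
    return 2 * n                # client couples to every concrete variant

threshold = next(n for n in range(1, 100) if coupling_pattern(n) < coupling_alternative(n))
print(f"Pattern design yields lower coupling once n >= {threshold} variants")
for n in (1, 2, 3, 5):
    print(n, coupling_pattern(n), coupling_alternative(n))
```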

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality, while they might weaken others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns and alternative designs, and it highlights thresholds beyond which the design pattern becomes more or less beneficial than the alternative design. More specifically, identifying such thresholds can be very useful for decision making during system design and refactoring.

16.

Context

Input/output transition system (IOTS) models are commonly used when the next input can arrive even before outputs are produced. The interaction between the tester and an implementation under test (IUT) is usually assumed to be synchronous. However, as the IUT can produce outputs at any moment, the tester should be prepared to accept all outputs from the IUT, or else be able to block (refuse) outputs of the implementation. Testing distributed, remote applications under the assumptions that communication is synchronous and actions can be blocked is unrealistic, since synchronous communication for such applications can only be achieved if special protocols are used. In this context, asynchronous tests can be more appropriate, reflecting the underlying test architecture, which includes queues.

Objective

In this paper, we investigate the problem of constructing test cases for given test purposes and specification input/output transition systems, when the communication between the tester and the implementation under test is assumed to be asynchronous, performed via multiple queues.

Method

When issuing verdicts, asynchronous tests should take into account the distortion caused by the queues in the observed interactions. First, we investigate how the test purpose can be transformed to account for this distortion when there is a single input queue and a single output queue. Then, we consider the more general problem in which there may be multiple queues.
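A minimal sketch of the distortion in question: with FIFO queues between tester and IUT, an output may be observed later than in the synchronous trace, i.e. after inputs that the tester actually sent afterwards, while outputs are never reordered among themselves. The event encoding and the example trace below are assumptions for demonstration.

```python
# Illustrative sketch: enumerate the traces a tester could observe for a given
# synchronous trace when outputs may be delayed by the queues but not reordered.
def observable_traces(trace):
    """trace: list of ('?', msg) inputs and ('!', msg) outputs in synchronous order."""
    inputs = [e for e in trace if e[0] == '?']
    outputs = [e for e in trace if e[0] == '!']
    # Minimum number of inputs that must precede each output (from the original trace).
    min_inputs_before = []
    seen_inputs = 0
    for e in trace:
        if e[0] == '?':
            seen_inputs += 1
        else:
            min_inputs_before.append(seen_inputs)

    def merge(i, o, prefix):
        if i == len(inputs) and o == len(outputs):
            yield tuple(prefix)
            return
        if i < len(inputs):                                   # tester may send the next input now
            yield from merge(i + 1, o, prefix + [inputs[i]])
        if o < len(outputs) and i >= min_inputs_before[o]:    # output may be delayed, never reordered
            yield from merge(i, o + 1, prefix + [outputs[o]])

    return list(merge(0, 0, []))

for t in observable_traces([('?', 'a'), ('!', 'x'), ('?', 'b')]):
    print(t)   # ('?','a'),('!','x'),('?','b')  and  ('?','a'),('?','b'),('!','x')
```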

Results

We propose an algorithm which constructs a sound test case, by transforming the test purpose prior to composing it with the specification without queues.

Conclusion

The proposed algorithm mitigates the state explosion problem which usually occurs when queues are directly involved in the composition. Experimental results confirm the resulting state space reduction.

17.

Context

Studying work practices in the context of Global Software Development (GSD) projects entails multiple opportunities and challenges for researchers. Understanding and tackling these challenges requires a careful and rigorous application of research methods.

Objective

We want to contribute to the understanding of the challenges of studying GSD by reflecting on several obstacles we had to deal with when conducting ethnographically-informed research on offshoring in German small to medium-sized enterprises.

Method

The material for this paper is based on reflections and field notes from two research projects: an exploratory ethnographic field study, and a study that was framed as a Business Ethnography. For the analysis, we took a Grounded Theory-oriented coding and analysis approach in order to identify issues and challenges documented in our research notes.

Results

We introduce the concept of Business Ethnography and discuss our experiences of adapting and implementing this action research concept for our study. We identify and discuss three primary issues: understanding complex global work practices from a local perspective, adapting to changing interests of the participants, and dealing with micro-political frictions between the cooperating sites.

Conclusions

We identify common interests between the researchers and the companies as both a challenge and an opportunity for studies on offshoring. Building on our experiences from the field, we argue for an active conceptualization of struggles and conflicts in the field as well as for extending the role of the ethnographer to that of a learning mediator.

18.

Context

Implied scenarios are unexpected behaviors in scenario specifications. Detecting and handling them is essential for the correctness of the specifications, and identifying their causes is essential for handling them. Most recent research focuses on detecting the implied scenarios themselves, or on only a limited set of their causes.

Objective

The purpose of this research is to provide an approach to detecting the causes of implied scenarios.

Method

A scenario specification consists of a set of events and a set of relative orders between those events, which the implementation is expected to enforce. The orders that cannot be inherently enforced are the unenforceable orders, and their existence leads to implied scenarios. To obtain the unenforceable orders, we first provide a method to represent both the specification and its implementation as sets of orders between events, called causal order graphs. The differences between them are then the unenforceable orders.
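A minimal sketch of this difference computation: both the specification and the implementation are represented as sets of ordered event pairs (a must happen before b), and the pairs required by the specification but not enforceable by the implementation point to potential causes of implied scenarios. The event names and the example order sets are assumptions for demonstration.

```python
# Illustrative sketch of detecting unenforceable orders as a set difference
# between specification orders and implementation-enforceable orders.
spec_orders = {
    ("req_sent", "req_received"),
    ("req_received", "ack_sent"),
    ("ack_sent", "ack_received"),
    ("ack_received", "data_sent"),     # spec wants data only after the ack arrives
}

impl_orders = {
    ("req_sent", "req_received"),
    ("req_received", "ack_sent"),
    ("ack_sent", "ack_received"),
    # the component sending data cannot observe ack_received in this example,
    # so the last specification order cannot be enforced locally
}

unenforceable = spec_orders - impl_orders
for before, after in sorted(unenforceable):
    print(f"Cannot enforce: {before} -> {after}  (potential cause of an implied scenario)")
```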

Results

Because the unenforceable orders consist of events and their order relations specified in the scenario specification, they can point out which part of the scenario specification should be considered in order to handle the implied scenarios. In addition, our approach supports the synchronous, asynchronous, and FIFO communication styles without state explosion or heavy computational overhead. To validate our approach, we provide two case studies.

Conclusions

This approach helps a designer to effectively correct the scenario specification by identifying where it should be fixed, especially in large cases and under various communication styles.

19.

Context

The loose coupling of services and Service-Based Applications (SBAs) has made them an ideal platform for context-based run-time adaptation. There has been a lot of research into implementation techniques for adapting SBAs, but little effort has focused on the software process required to guide the adaptation.

Objective

This paper aims to bridge that gap by providing an empirically grounded software process model that can be used by software practitioners who want to build adaptable SBAs. The process model focuses only on adaptation-specific issues.

Method

The process model presented in this paper is based on data collected through interviews with 10 practitioners occupying various roles within eight different companies. The data was analyzed using qualitative data analysis techniques. We used the output to develop a set of activities, tasks, stakeholders and artifacts that were used to construct the process model.

Results

The outcome of the data analysis process was a process model identifying nine sets of adaptation process attributes. These can be used in conjunction with an organisation’s existing development life-cycle or another reference life-cycle.

Conclusion

The process model developed in this paper provides a solid reference for practitioners who are planning to develop adaptable SBAs. It has advantages over similar approaches in that it focuses on software process rather than the specific adaptation mechanism implementation techniques.

20.

Context

Formal methods are very useful in the software industry and are becoming of paramount importance in practical engineering techniques. They involve the design and modeling of various system aspects, usually expressed through different paradigms. These different formalisms make the verification of the overall developed system more difficult.

Objective

In this paper, we propose to combine two modeling formalisms in order to express both the functional and the timed security requirements of a system, so that all the requirements are expressed in a single formalism.

Method

First, the system behavior is specified according to its functional requirements using the Timed Extended Finite State Machine (TEFSM) formalism. Second, this model is augmented by applying a set of dedicated algorithms to integrate timed security requirements specified in the Nomad language. This language is suited to expressing security properties such as permissions, prohibitions and obligations with time considerations.

Results

The proposed algorithms produce a global TEFSM specification of the system that includes both its functional and security timed requirements.

Conclusion

It is concluded that it is possible to merge several requirement aspects described with different formalisms into a global specification that can be used for several purposes such as code generation, specification correctness proof, model checking or automatic test generation. In this paper, we applied our approach to a France Telecom Travel service to demonstrate its scalability and feasibility.
