Similar literature (20 documents found)
1.
Digital piracy intention research has yielded different sets of piracy intention determinants based on various theoretical models. In this study, we reviewed the digital piracy literature and empirically compared the two theoretical models most often used: the theory of planned behaviour (TPB) and the Hunt–Vitell ethical decision model. Data were obtained from university students in South Korea, and structural equation modelling (SEM) was employed to examine and compare the two competing theoretical models in terms of explanatory power, overall model fit and paths' significance. The findings of this study revealed that the TPB is a more appropriate model for predicting digital piracy than the Hunt–Vitell ethical decision model.

2.
Pressure to compress the development life cycle and to reduce the duration and resources committed to testing led to experimentation in testing at the NASA Goddard Space Flight Center's Software Engineering Laboratory. This study investigates the trend to reduce developer testing and to rely increasingly on inspection techniques and independent functional testing in order to shorten the development life cycle, improve testing productivity, and improve software quality. An approach is developed to conduct this comparison. In particular, the problem faced by software researchers of obtaining a comprehensive characterization of software projects, so that similar project types can be identified for comparative studies, is addressed using expert opinion.

3.
Our study was initiated to provide a better understanding of the factors influencing employees’ non-work-related computing (NWRC) behavior by comparing two models, one based on Triandis’ theory of interpersonal behavior (TIB) and the other derived from the theory of planned behavior (TPB). Results of the study showed that the TIB-based model had higher explanatory power than the TPB-based model. Specifically, affect, social factors, and perceived consequences significantly influenced employees’ intention to engage in NWRC, while intention to engage in it, habit, and facilitating conditions determined employees’ NWRC behavior. Implications of these findings are discussed.

4.
This paper compared two versions of the technology acceptance model (TAM) in understanding the determinants of user intention to use wireless technology in the workplace. The first model is derived from the original TAM and includes perceived usefulness, perceived ease of use, attitude and behavioral intention, while the alternative model is a parsimonious version in which the attitude construct is removed. The results indicated that TAM, whether original or parsimonious, is successful in explaining user intention to use wireless technology in organizations. In addition, the parsimonious model showed a better fit than the original model.

5.
Monthly Federal Funds interest rate values, set by the Federal Open Market Committee, have been the subject of much speculation prior to the announcement of their new values each period. In this study we use four competing methodologies to model and forecast the behavior of these short-term Federal Funds interest rates: time series, Taylor rule, econometric and neural network. The time series forecasts use only past values of Federal Funds rates. The celebrated Taylor rule methodology theorizes that Federal Funds rate values are influenced solely by deviations from a desired level of inflation and from potential output. The econometric and neural network models use the inputs of both the time series and Taylor rule approaches. Using monthly data from 1958 to the end of 2005 we distinguish between in-sample and out-of-sample sets to train, evaluate, and compare the models’ effectiveness. Our results indicate that econometric modeling performs better than the other approaches when the data are divided into the two sets of the pre-Greenspan and Greenspan periods. However, when the data sample is divided into three groups of low, medium and high Federal Funds rates, the neural network approach does best. An earlier version was presented at the 2007 International Joint Conference on Neural Networks in Orlando. We are thankful to Larry Medsker, regional editor of Neural Computing and Applications, and to two anonymous referees for very insightful comments that helped us improve the final version.
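For reference, a minimal sketch of the classical Taylor rule that this abstract refers to, using Taylor's original 0.5 weights as an illustrative assumption (the paper estimates its own specification, which is not reproduced here):

```python
def taylor_rule_rate(inflation, inflation_target, output_gap, real_rate=2.0):
    """Classical Taylor (1993) rule: suggested nominal Federal Funds rate.

    All arguments are in percent. The 0.5 coefficients and the 2% neutral
    real rate are Taylor's textbook values, used here only for illustration.
    """
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Example: 3% inflation, 2% target, output 1% above potential
print(taylor_rule_rate(3.0, 2.0, 1.0))  # -> 6.0
```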

6.
This work aims at evaluating a graphical notation for modelling software (and other kinds of) development methodologies, thus demonstrating how useful the graphical aspects can be for sharing knowledge between the people responsible for documenting information and those responsible for understanding and putting it into practice. We acknowledge the importance of having a common set of symbols that can be used to create, use and disseminate information for a larger audience than is possible today, given the variety of alternatives and the lack of a common ground. Using a cognitive dimensions framework, we make a standard evaluation of the elements and diagrams of the notation proposed to support the ISO/IEC 24744 methodology metamodel standard, considering the trade-offs between different dimensions. Based on this analysis, we suggest improvements to the existing notation, in the context of improving communication between the creators and users of methodologies.

7.
A comparison of time domains (i.e., execution time vs. calendar time) is made for software reliability models, with the purpose of reaching some general conclusions about their relative desirability. The comparison is made by using a generic failure intensity function that represents a large majority of the principal models. The comparison is based on how well the function fits the estimated failure intensity, where the failure intensity is estimated with respect to both kinds of time. The failure intensity in each time domain is examined for trends. Failure intensity estimates are calculated from carefully collected data. The execution time domain is found to be highly superior to the calendar time domain.

8.
Organizations involved in process improvement programs need to deal with different process improvement and assessment models. As not all process improvement and assessment models have an equivalent scope, the selection of a particular model to guide the improvement strategy may result in a partial, constrained view of the areas where the organization may obtain competitive advantages. As a mitigation strategy, organizations should have a detailed understanding of the differences in the scope of the available models. Whatever the model they adopt, companies should be aware of relevant areas that may be missed or treated in more or less detail in the models under consideration. In addition, the need to deal with different assessment models usually arises in second- and third-party assessments, when prospects or potential contractors decide to conduct an assessment of a subcontractor’s capabilities using a model that may not be the same as the reference model selected by the target subcontractor. In these situations, companies are at risk of overlooking relevant processes and practices. This paper describes a case study developed for the aerospace industry, based on the mapping of two assessment models widely deployed in this activity sector: CMMI-DEV and SPICE for Space, a variant of ISO/IEC 15504. A detailed gap analysis is provided identifying those aspects that should be considered both as potential improvement areas and as sources of risk. An extended assessment activity methodology is proposed that considers the results of model traceability analysis as a key factor for conducting the assessments.

9.
Risk can cause software projects to fail and thus bring losses to enterprises, and it is one of the research hotspots of software engineering. Personnel turnover risk is a major risk in the software project process, yet it has received little attention. Information entropy can effectively measure how evenly a system's elements are distributed, so a quantitative personnel turnover risk measurement model based on information entropy is proposed: the more evenly the personnel influence the software project, the smaller the risk; otherwise, the departure of key personnel will have a major impact on the project. The paper not only discusses the rationality of the model but also gives a worked example of it, and the data the model requires can be obtained from within the enterprise. Practice shows that the model is scientific and reasonable and can serve as a basis for enterprises to control the personnel turnover risk of software projects.
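A minimal sketch of how an entropy-based turnover-risk indicator along these lines could be computed, assuming each member's influence on the project is expressed as a share summing to 1; the normalization below is an illustrative assumption, not the paper's exact formula:

```python
import math

def turnover_risk(influence_shares):
    """Entropy-based indicator of personnel-turnover risk (illustrative).

    influence_shares: each member's share of influence on the project
    (non-negative values summing to 1). Even shares give high entropy and
    low assumed risk; a project dominated by one key person gives low
    entropy and high risk.
    """
    entropy = -sum(p * math.log(p) for p in influence_shares if p > 0)
    max_entropy = math.log(len(influence_shares))  # entropy of a perfectly even split
    return 1.0 - entropy / max_entropy             # 0 = evenly spread, 1 = one person dominates

print(turnover_risk([0.25, 0.25, 0.25, 0.25]))   # -> 0.0  (even influence, low risk)
print(turnover_risk([0.97, 0.01, 0.01, 0.01]))   # -> ~0.88 (key-person concentration, high risk)
```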

10.
Requirements engineering is one of the most crucial steps in the software development process. Without a well-written requirements specification, developers do not know what to build, users do not know what to expect, and there is no way to validate that the created system actually meets the original needs of the user. Much of the recent emphasis on a software engineering discipline has centered on the formalization of software specifications and their flowdown to system design and verification. Undoubtedly, the incorporation of such sound, complete, and unambiguous traceability is vital to the success of any project. However, it has been our experience through years of work (on both sides) within the government and private-sector military industrial establishment that many projects fail even before they reach the formal specification stage. That is because too often the developer does not truly understand or address the real requirements of the user and the user's environment. The purpose of this research and report is to investigate the key players and their roles, along with the existing methods and obstacles, in requirements elicitation. The article concentrates on key activities and methods for gathering this information, as well as offering new approaches and ideas for improving the transfer and recording of this information. Our hope is that this article will become an informal policy reminder/guideline for engineers and project managers alike. The success of our products and systems is largely determined by our attention to the human dimensions of the requirements process. We hope this article will bring attention to this oft-neglected element in software development and encourage discussion about how to address the issue effectively.

11.
With the advent of social coding sites, software development has entered a new era of collaborative work. Social coding sites (e.g., GitHub) integrate social networking and distributed version control in a unified platform to facilitate collaborative development across the world. One unique characteristic of such sites is that the past development experience developers expose on them conveys implicit metrics of a developer’s programming capability and expertise, which can be applied in many areas, such as software developer recruitment for IT corporations. Motivated by this intuition, we aim to develop a framework to effectively locate developers with the right coding skills. To achieve this goal, we devise a generative probabilistic expert ranking model, on top of which consistency among projects is incorporated as graph regularization to enhance the expert ranking, and a relevance-propagation perspective is introduced to illustrate the model. For evaluation, StackOverflow is leveraged to complement the ground truth of experts. Finally, a prototype system, SCSMiner, which provides an expert search service based on a real-world dataset crawled from GitHub, is implemented and demonstrated.

12.
This paper presents the motivation, development and an application of a unique methodology for solving industrial optimization problems using existing legacy simulation software programs. The methodology is based on approximation models generated by applying design of experiments (DOE) methodologies and response surface methods to high-fidelity simulations, coupled with classical optimization methodologies. Several DOE plans are included, in order to be able to adopt the appropriate level of detail. The approximations are based on stochastic interpolation techniques or on classical least squares methods. The optimization methods include both local and global techniques. Finally, an application from the plastic molding industry (process simulation) demonstrates the methodology and the software package.

13.
Previous research has provided evidence that a combination of static code metrics and software history metrics can be used to predict with surprising success which files in the next release of a large system will have the largest numbers of defects. In contrast, very little research exists to indicate whether information about individual developers can profitably be used to improve predictions. We investigate whether files in a large system that are modified by an individual developer consistently contain either more or fewer faults than the average of all files in the system. The goal of the investigation is to determine whether information about which particular developer modified a file is able to improve defect predictions. We also extend earlier research evaluating use of counts of the number of developers who modified a file as predictors of the file’s future faultiness. We analyze change reports filed for three large systems, each containing 18 releases, with a combined total of nearly 4 million LOC and over 11,000 files. A buggy file ratio is defined for programmers, measuring the proportion of faulty files in Release R out of all files modified by the programmer in Release R-1. We assess the consistency of the buggy file ratio across releases for individual programmers both visually and within the context of a fault prediction model. Buggy file ratios for individual programmers often varied widely across all the releases that they participated in. A prediction model that takes account of the history of faulty files that were changed by individual developers shows improvement over the standard negative binomial model of less than 0.13% according to one measure, and no improvement at all according to another measure. In contrast, augmenting a standard model with counts of cumulative developers changing files in prior releases produced up to a 2% improvement in the percentage of faults detected in the top 20% of predicted faulty files. The cumulative number of developers interacting with a file can be a useful variable for defect prediction. However, the study indicates that adding information to a model about which particular developer modified a file is not likely to improve defect predictions.
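A minimal sketch of the buggy file ratio as defined above, assuming simple sets of file names per developer and per release (the data format and helper names are illustrative, not the study's actual pipeline):

```python
def buggy_file_ratio(files_modified_prev_release, faulty_files_this_release):
    """Buggy file ratio for one developer.

    files_modified_prev_release: set of files the developer changed in release R-1.
    faulty_files_this_release:  set of files reported faulty in release R.
    Returns the proportion of the developer's R-1 files that turned out faulty
    in R, or None if the developer modified no files in R-1.
    """
    if not files_modified_prev_release:
        return None
    buggy = files_modified_prev_release & faulty_files_this_release
    return len(buggy) / len(files_modified_prev_release)

# Example: a developer touched 4 files in release R-1, 1 of which is faulty in R
print(buggy_file_ratio({"a.c", "b.c", "c.c", "d.c"}, {"b.c", "x.c"}))  # -> 0.25
```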

14.
Software and Systems Modeling - This paper provides a comprehensive overview and analysis of research work on how uncertainty is currently represented in software models. The survey presents the...

15.
Service-oriented computing is a paradigm for effectively delivering software services in a dynamic environment. Accordingly, many service-oriented software engineering (SOSE) methodologies have been proposed and practiced in both academia and industry. Some of these methodologies share common features (e.g. cover similar life-cycle phases) but are presented for different purposes, ranging from project management to system modernization, and from business analysis to technical solutions development. Given this diversity in the methodologies available in the literature, it is very hard for a company to decide which methodology would best fit its specific needs. With this aim, we took a feature analysis approach and devised a framework for comparing the existing SOA methodologies. Unlike existing comparison frameworks, ours specifically highlights aspects that are specific to SOA and aims to differentiate the methodologies that are truly service-oriented from those that deal little with service aspects. As such, the criteria defined in the framework can be used as a checklist for selecting a SOSE methodology.

16.
We present a software package that allows the construction and display of structural models of proteins starting from the amino acid sequence written in the one-letter code of standard data bank format. The software includes a very fast and efficient algorithm aimed at finding the global energy minimum of the potential function describing the molecular interactions. The whole package is conceived to have maximum flexibility. Completely automatic procedures are envisaged for standard problems. For non-standard problems, the construction procedure can be interactively adapted to meet different options.

17.
Open Source Software (OSS) is an alternative to proprietary software. It is growing in popularity, which has brought about an increase in research interest. Most research studies have focused on identifying individual personal motives for participating in the development of an OSS project, analyzing specific solutions, or the OSS movement itself. No studies have been found that investigate the impact of user experience and training on OSS. The study reported here sought to identify factors that predict acceptance of technologies based on OSS after training in these solutions. A research model based on the Technology Acceptance Model (Davis, 1989) was developed. Furthermore, the possible moderating effects of users’ gender, age and level of education were analyzed. It was found that external determinants such as user training, user fit, technological complexity and trainers’ support were important indicators of the success of adopting these solutions.

18.
The paper is based on a review of research on media selection and related topics on the one hand, and on an exploratory pilot survey on the other. In summarising the review, the authors propose that the factors explaining media choice be grouped into five categories: (1) the properties of the medium itself affect its choice, (2) properties of the user affect media choice, (3) the communication situation plays an important role, (4) macro factors explain media choice, and (5) media choice can be explained as the outcome of a dynamic multiparty negotiation process. The pilot survey compares Japanese and Finnish students’ preference of media in various communication situations. The survey results encourage reserving a certain amount of explanatory force for local macro factors, or culture, in explaining media choice.

19.
The development and implementation of open source software (OSS) is one of the most current topics within the academic, business and political environments. Traditionally, research in OSS has focused on identifying individual personal motives for participating in the development of an OSS project, analyzing specific OSS solutions, or the OSS movement itself. Nevertheless, user acceptance of this type of technology has received very little attention. For this reason, the main purpose of the current study is to identify the variables and factors that have a direct effect on individual attitude towards OSS adoption. Therefore, we have developed a model of user acceptance of a solution based on OSS, taking the technology acceptance model as its foundation. Findings show that OSS is a viable solution for information management in organizations.

20.