Similar articles
20 similar articles found
1.
Eken, Beyza; Palma, Francis; Başar, Ayşe; Tosun, Ayşe 《Software Quality Journal》2021,29(1):159-194
Software Quality Journal - Community-aware metrics through socio-technical developer networks or organizational structures have already been studied in the software bug prediction field....

2.
Social network services are emerging as a promising IT-based business, with some services, such as Facebook, Cyworld and Xiaonei, already being provided commercially. However, it is not yet clear which potential audience groups will become key social network service participants. Moreover, the process by which an individual actually decides to start using a social network service may differ somewhat from that of current web-based community services. Hence, the aims of this paper are twofold. First, we empirically examine how individual characteristics affect actual user acceptance of social network services. To examine these individual characteristics, we apply the Technology Acceptance Model (TAM) to construct an amended model that focuses on three individual differences: social identity, altruism and telepresence, and one perceived construct imported from psychology-based research: perceived encouragement. Next, we examine whether users’ perception of a target social network service as a human relationship-oriented service or as a task-oriented service moderates the relationship between the perceived constructs and actual use. As a result, we find that perceived encouragement and perceived orientation are significant constructs affecting actual use of social network services.

3.
Software defect prediction helps to optimize testing resource allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability from a set of historical data, presumably drawn from the same or similar project settings as those under prediction. However, such historical data are not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from the data of other projects. This paper investigates defect prediction in the cross-project context, focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open-source projects. The major conclusions from our experiments are: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) prediction results obtained using training data from other projects meet our criteria for acceptance on average: defects in 18 out of 34 cases were predicted with a recall greater than 70% and a precision greater than 50%; (3) the results of cross-project defect prediction are related to the distributional characteristics of the data sets, which is valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results obtained with training data selected by our approach are comparable to those obtained with training data from the same project.
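The idea of selecting cross-project training data by distributional similarity can be sketched as follows. This is an illustrative simplification, not the authors' actual algorithm: the function names (`characterize`, `select_training_set`) and the choice of per-metric mean and standard deviation as the distributional characteristics are assumptions.

```python
import math

def characterize(rows):
    """Distributional characteristics of a data set: per-metric mean and std."""
    n, m = len(rows), len(rows[0])
    means = [sum(r[i] for r in rows) / n for i in range(m)]
    stds = [math.sqrt(sum((r[i] - means[i]) ** 2 for r in rows) / n) for i in range(m)]
    return means + stds

def select_training_set(target, candidates):
    """Pick the candidate project whose metric distribution is closest to the target's."""
    t = characterize(target)
    def dist(c):
        v = characterize(c)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(t, v)))
    return min(candidates, key=dist)

# Toy example: two candidate projects, one distributionally similar to the target.
target = [[10.0, 2.0], [12.0, 2.5], [11.0, 1.8]]
similar = [[9.5, 2.1], [11.5, 2.4], [10.8, 2.0]]
dissimilar = [[100.0, 50.0], [120.0, 55.0], [110.0, 48.0]]
best = select_training_set(target, [dissimilar, similar])
```

A real selector would compare many more characteristics and pick data at the instance rather than the project level; the point here is only that distributional summaries, not labels, drive the choice.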

4.
This study analysed Student Internet Users’ (SIUs’) perception of Web usability. Adopting a user-testing method, seven Web Usability Factors (WUFs) were tested for their significance in affecting the ease of use of a website. Several website elements were also tested for their significance in affecting the WUFs. The results show that the most significant WUF relates to the aesthetic appeal of a website, i.e., Use of Colour and Font. However, it was found that most Web developers do not focus on this important WUF. Elements such as a site map, site search, product image catalogue and others were also found to positively affect SIUs’ perception of the WUFs. The results presented in this paper can be used as guidelines for designing usable websites for SIUs.

5.
Over the last decade there has been growing interest in the strategic management literature in the factors that influence a company's ability to use IT. There is general consensus that knowledge and competency are necessary for developing an IT capability, but there is very little understanding of what the necessary competencies are and how they influence IS usage in different contexts. The small and medium-sized enterprise context is particularly interesting for two reasons: it constitutes a major part of the economy, and it has been relatively unsuccessful in exploiting e-business.

6.
Context: Software defect prediction plays a crucial role in estimating the most defect-prone components of software, and a large number of studies have pursued improving prediction accuracy within a project or across projects. However, the rules for making an appropriate decision between within- and cross-project defect prediction when available historical data are insufficient remain unclear.
Objective: The objective of this work is to validate the feasibility of a predictor built with a simplified metric set for software defect prediction in different scenarios, and to investigate practical guidelines for the choice of training data, classifier and metric subset for a given project.
Method: First, based on six typical classifiers, three types of predictors varying in the size of the software metric set were constructed in three scenarios. Then, we validated the acceptable performance of the predictor based on Top-k metrics using statistical methods. Finally, we attempted to minimize the Top-k metric subset by removing redundant metrics, and we tested the stability of such a minimum metric subset with one-way ANOVA tests.
Results: The study was conducted on 34 releases of 10 open-source projects available in the PROMISE repository. The findings indicate that predictors built with either Top-k metrics or the minimum metric subset can provide acceptable results compared with benchmark predictors. A guideline for choosing a suitable simplified metric set in different scenarios is presented in Table 12.
Conclusion: The experimental results indicate that (1) the choice of training data for defect prediction should depend on the specific accuracy requirement; (2) a predictor built with a simplified metric set works well and is very useful when limited resources are available; (3) simple classifiers (e.g., Naïve Bayes) also tend to perform well when using a simplified metric set for defect prediction; and (4) in several cases, the minimum metric subset can be identified to facilitate general defect prediction with acceptable loss of prediction precision in practice.
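A Top-k metric selection step can be sketched as ranking metrics by a simple filter score and keeping the k best. This is a toy illustration, not the paper's procedure: the correlation-based score, the metric names, and the function names are all assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def top_k_metrics(rows, labels, names, k):
    """Rank metrics by |correlation| with the defect label and keep the top k."""
    scores = []
    for j, name in enumerate(names):
        col = [r[j] for r in rows]
        scores.append((abs(pearson(col, labels)), name))
    scores.sort(reverse=True)
    return [name for _, name in scores[:k]]

# Toy data: 'loc' tracks the defect label, 'noise' does not.
rows = [[10, 3], [50, 3], [20, 1], [80, 1], [15, 4], [90, 4]]
labels = [0, 1, 0, 1, 0, 1]
selected = top_k_metrics(rows, labels, ["loc", "noise"], k=1)
```

Minimizing the subset further would then drop metrics that are redundant with ones already selected, e.g. by checking inter-metric correlation.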

7.
It is well known that software defect prediction is one of the most important tasks for software quality improvement. Defect predictors allow test engineers to focus on defective modules, so that testing resources can be allocated effectively and quality assurance costs reduced. For within-project defect prediction (WPDP), there should be sufficient data within a company to train a prediction model. Without such local data, cross-project defect prediction (CPDP) is feasible, since it uses data collected from similar projects in other companies. Software defect data sets suffer from the class imbalance problem, which increases the difficulty for a learner to predict defects. In addition, the impact of imbalanced data on the real performance of models can be hidden by the performance measures chosen. We investigate whether class imbalance learning can benefit CPDP. In our approach, asymmetric misclassification costs and similarity weights obtained from distributional characteristics are combined to guide an appropriate resampling mechanism. We performed the A-statistic effect size test to evaluate the magnitude of the improvement, and the Wilcoxon rank-sum test for statistical significance. The experimental results show that our approach can provide higher prediction performance than both an existing CPDP technique and an existing class imbalance technique.
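The effect of an asymmetric misclassification cost can be seen in a minimal decision rule: predict "defective" whenever the expected cost of a miss exceeds the expected cost of a false alarm. This is a generic cost-sensitive sketch, not the paper's method; the 5:1 cost ratio and the function name are assumptions.

```python
def cost_sensitive_predict(p_defect, cost_fn=5.0, cost_fp=1.0):
    """Predict 'defective' (True) when the expected cost of missing a defect
    (false negative) exceeds the expected cost of a false alarm (false positive)."""
    return p_defect * cost_fn > (1.0 - p_defect) * cost_fp

# With a 5:1 cost ratio the decision threshold drops from 0.5 to 1/6:
# even a 20%-probable defect is flagged for inspection.
flag = cost_sensitive_predict(0.2)
```

With symmetric costs (`cost_fn=cost_fp`) the rule reduces to the usual 0.5 threshold, which is why cost asymmetry changes which modules get inspected even when the underlying classifier is unchanged.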

8.
An empirical study of a model for program error prediction
A model is presented for estimating the number of errors remaining in a program at the beginning of the testing phase of development. The relationships between the errors occurring in a program and the various factors that affect software development, such as programmer skill, are statistically analyzed. The model is then derived using the factors significantly identified in the analysis. On the basis of data collected during the development of large-scale software systems, it is shown that factors such as frequency of program specification change, programmer skill, and volume of program design documentation are significant and that the model based on these factors is more reliable than conventional error prediction methods based on program size alone.

9.
Software change prediction is crucial for efficiently planning resource allocation during the testing and maintenance phases of software. Moreover, correct identification of change-prone classes in the early phases of the software development life cycle helps in developing cost-effective, good-quality and maintainable software. An effective software change prediction model should recognize change-prone and not change-prone classes equally well. However, this is often not the case, as software practitioners frequently have to deal with imbalanced data sets in which instances of one class far outnumber those of the other. In such a scenario, the minority class is not predicted with much accuracy, leading to strategic losses. This study evaluates a number of techniques for handling imbalanced data sets using various data sampling methods and MetaCost learners on six open-source data sets. The results of the study advocate the use of the resample-with-replacement sampling method for effective imbalanced learning.
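Resampling with replacement can be sketched in a few lines: duplicate randomly drawn minority instances until the classes are balanced. A minimal sketch, not the study's exact implementation; the sample data and function name are assumptions.

```python
import random

def resample_with_replacement(samples, seed=42):
    """samples: list of (features, label) pairs with labels 0/1.
    Oversample the minority class with replacement until both classes
    are equally represented."""
    rng = random.Random(seed)
    pos = [s for s in samples if s[1] == 1]
    neg = [s for s in samples if s[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Draw duplicates uniformly at random from the minority class.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

changed = [([1.0], 1), ([1.2], 1)]                           # change-prone (minority)
stable = [([0.1], 0), ([0.2], 0), ([0.3], 0), ([0.4], 0)]    # not change-prone
balanced = resample_with_replacement(stable + changed)
```

Because duplicates are exact copies, this method risks overfitting the minority class; that trade-off is exactly what studies like this one evaluate against cost-sensitive alternatives such as MetaCost.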

10.
Although many researchers have studied different factors that affect E-Learning outcomes, there is little research assessing the intervening role of readiness factors in E-Learning outcomes. This study proposes a conceptual model to determine the role of readiness factors in the relationship between E-Learning factors and E-Learning outcomes. Readiness factors are divided into three main groups: technical, organizational and social. A questionnaire was completed by 96 respondents. The sample consists of teachers at Tehran high schools who use technology-based education. Hierarchical regression analysis was performed; its results strongly support the appropriateness of the proposed model and show that the readiness factors variable plays a moderating role in the relationship between E-Learning factors and outcomes. The latent moderated structuring (LMS) technique and MPLUS3 software were also used to determine each variable’s ranking. Results show that organizational readiness factors have the most important effect on E-Learning outcomes, and that teachers’ motivation and training is the most important E-Learning factor. The findings of this research will be helpful for both academics and practitioners of E-Learning systems.

11.
While extensive research in data mining has been devoted to developing better classification algorithms, relatively little research has been conducted to examine the effects of feature construction, guided by domain knowledge, on classification performance. However, in many application domains, domain knowledge can be used to construct higher-level features to potentially improve performance. For example, past research and regulatory practice in early warning of bank failures has resulted in various explanatory variables, in the form of financial ratios, that are constructed based on bank accounting variables and are believed to be more effective than the original variables in identifying potential problem banks. In this study, we empirically compare the performance of two sets of classifiers for bank failure prediction, one built using raw accounting variables and the other built using constructed financial ratios. Four popular data mining methods are used to learn the classifiers: logistic regression, decision tree, neural network, and k-nearest neighbor. We evaluate the classifiers on the basis of expected misclassification cost under a wide range of possible settings. The results of the study strongly indicate that feature construction, guided by domain knowledge, significantly improves classifier performance and that the degree of improvement varies significantly across the methods.
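Domain-guided feature construction of the kind described, raw accounting variables turned into financial ratios, can be sketched as a simple preprocessing step. The field names and the particular ratios below are illustrative assumptions, not the study's actual variable set.

```python
def construct_ratios(record):
    """Turn raw accounting variables into domain-informed financial ratios
    (hypothetical field names and ratios, for illustration only)."""
    assets = record["total_assets"]
    return {
        "roa": record["net_income"] / assets,        # return on assets
        "equity_ratio": record["equity"] / assets,   # capital adequacy proxy
        "loan_to_asset": record["loans"] / assets,   # exposure proxy
    }

# One bank's raw record (toy numbers) and its constructed feature vector.
bank = {"net_income": 50.0, "total_assets": 1000.0, "equity": 80.0, "loans": 600.0}
features = construct_ratios(bank)
```

The classifiers are then trained on `features` instead of the raw record; the ratios normalize away bank size, which is one reason such constructed features tend to separate problem banks better than the raw variables.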

12.
Recently, the information systems (IS) community has called for further investigation of the behavioral aspects of enterprise systems. The purpose of this paper is to apply individual-level measurement of cultural orientation, such as power distance and uncertainty avoidance, to recent findings on computer self-efficacy (CSE) and ERP adoption beliefs, such as perceived ease of use, based on a survey of 101 ERP system experts. An online survey methodology is used to gather data from various industrial fields. The research model is constructed based on the findings of previous studies in IS, management, and cultural psychology. The results indicate that low power distance and high uncertainty avoidance cultural orientations influence general CSE. In addition, uncertainty avoidance positively influences the ease of use of ERP systems. As expected, general CSE positively influences the ease of use of ERP systems. Training and managerial interventions through communication to improve these cultural orientations would be effective for successful ERP system projects. The findings of this research should be helpful to project managers, IS researchers, and ERP practitioners who want to understand the behavioral aspects of ERP system adoption in organizations.

13.
Reuse of software assets in application development has held promise but faced challenges. In addressing these challenges, research has focused on organizational- and project-level factors while neglecting grass-roots-level adoption of reusable assets. Our research investigated factors associated with individual software developers’ intention to reuse software assets and integrated them into TAM. Towards that end, 13 project managers were interviewed and 207 software developers were surveyed in India. Results revealed that technological-level (infrastructure) and individual-level factors (reuse-related experience and self-efficacy) were the major determinants. Implications are discussed.

14.
In defect prediction studies, open-source and real-world defect data sets are frequently used. The quality of these data sets is one of the main factors affecting the validity of defect prediction methods. One such issue is repeated data points in defect prediction data sets. The main goal of this paper is to explore how low-level metrics are derived. The paper also presents a cleansing algorithm that removes repeated data points from defect data sets. The method was applied to 20 data sets, including five open-source sets, and the area under the curve (AUC) and precision performance parameters improved by 4.05% and 6.7%, respectively. In addition, this work discusses how static code metrics should be used in bug prediction. The study provides tips for obtaining better defect prediction results.
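The core of such a cleansing step, dropping exact duplicate data points while keeping the first occurrence, can be sketched as follows. A minimal sketch, not the paper's algorithm; the toy rows and column meanings are assumptions.

```python
def remove_repeated_points(rows):
    """Drop exact duplicate rows (metric vector + label), keeping the
    first occurrence of each and preserving the original order."""
    seen, cleaned = set(), []
    for row in rows:
        key = tuple(row)          # rows must be hashable to dedupe
        if key not in seen:
            seen.add(key)
            cleaned.append(row)
    return cleaned

data = [
    [12, 3, 1, 0],   # loc, complexity, churn, defective?
    [40, 7, 5, 1],
    [12, 3, 1, 0],   # exact repeat of the first row
    [40, 7, 5, 1],   # repeat
    [8, 2, 0, 0],
]
cleaned = remove_repeated_points(data)
```

Repeats matter because a duplicate that lands in both the training and the test split leaks information and inflates AUC and precision, which is consistent with the improvements the paper reports after cleansing.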

15.
Donald E. Knuth 《Software: Practice and Experience》1971,1(2):105-133
A sample of programs, written in FORTRAN by a wide variety of people for a wide variety of applications, was chosen ‘at random’ in an attempt to discover quantitatively ‘what programmers really do’. Statistical results of this survey are presented here, together with some of their apparent implications for future work in compiler design. The principal conclusion which may be drawn is the importance of a program ‘profile’, namely a table of frequency counts which record how often each statement is performed in a typical run; there are strong indications that profile-keeping should become a standard practice in all computer systems, for casual users as well as system programmers. This paper is the report of a three month study undertaken by the author and about a dozen students and representatives of the software industry during the summer of 1970. It is hoped that a reader who studies this report will obtain a fairly clear conception of how FORTRAN is being used, and what compilers can do about it.
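A program "profile" in Knuth's sense is just a table of per-statement execution counts. The hand-instrumented sketch below illustrates what a profiler collects automatically; the counter keys and the demo function are assumptions for illustration.

```python
from collections import Counter

profile = Counter()  # statement label -> execution count

def summed(n):
    """Sum 0..n-1, incrementing a frequency count at each instrumented statement
    (what an automatic profiler would do for every statement)."""
    profile["init"] += 1
    total = 0
    for i in range(n):
        profile["loop_body"] += 1
        total += i
    profile["return"] += 1
    return total

result = summed(5)
# profile now shows the loop body dominating the counts, the kind of
# observation that tells a compiler (or programmer) where optimization pays off.
```

Knuth's point was precisely this skew: a small fraction of statements accounts for nearly all executions, so the profile tells you where optimization effort matters.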

16.
Metrics for aspect-oriented software have been proposed and used to investigate the benefits and disadvantages of crosscutting concern modularisation. Some of these metrics have been neither rigorously defined nor analytically evaluated, and there are few empirical data showing typical values of these metrics in aspect-oriented software. In this paper, we provide rigorous definitions, usage guidelines, an analytical evaluation, and empirical data from ten open-source projects, determining the values of six metrics for aspect-oriented software (lines of code, weighted operations in module, depth of inheritance tree, number of children, crosscutting degree of an aspect, and coupling on advice execution). We discuss how each of these metrics can be used to identify shortcomings in existing aspect-oriented software.

17.
While argumentation-based negotiation has been accepted as a promising alternative to game-theoretic or heuristic-based negotiation, no evidence has been provided to confirm this theoretical advantage. We propose a model of bilateral negotiation extending a simple monotonic concession protocol by allowing the agents to exchange information about their underlying interests and possible alternatives to achieve them during the negotiation. We present an empirical study that demonstrates (through simulation) the advantages of this interest-based negotiation approach over the more classic monotonic concession approach to negotiation.
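The monotonic concession baseline the paper extends can be sketched as two agents alternately moving their offers toward each other by a fixed step until the offers cross or both hit their reservation limits. This is a toy of the protocol, not the paper's model: fixed steps, the midpoint settlement rule, and all parameter names are assumptions.

```python
def monotonic_concession(a_start, b_start, a_min, b_max, step=1.0):
    """Seller (a) asks high and concedes downward; buyer (b) bids low and
    concedes upward. Returns the agreed price once the offers cross, or
    None if both agents reach their reservation limits without agreement."""
    a_offer, b_offer = a_start, b_start
    while a_offer > b_offer:
        a_offer = max(a_offer - step, a_min)        # seller concedes
        if a_offer <= b_offer:
            break
        b_offer = min(b_offer + step, b_max)        # buyer concedes
        if a_offer == a_min and b_offer == b_max and a_offer > b_offer:
            return None                             # no zone of agreement
    return (a_offer + b_offer) / 2.0                # settle at the midpoint

price = monotonic_concession(a_start=100.0, b_start=80.0, a_min=70.0, b_max=95.0)
```

Interest-based negotiation improves on this by letting agents exchange *why* they want what they want, which can reveal alternative deals that pure concession over a single price never finds.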

18.
An empirical crew rostering problem drawn from the customer service section of a department store in southern Taiwan is addressed in this paper. The service section established relevant service facilities and functions to provide services for customers as well as distinguished guests and visitors. The crew rostering task is concerned with assigning multi-functional workers to different types of jobs and scheduling working shifts for each worker within a given time horizon, where the available and demanded workforce vary from one shift to another. The current crew rostering method is seniority-oriented: job lines are generated and bids are then taken in order of decreasing seniority. The most senior worker has the widest range of job lines from which to select so as to best satisfy his or her preferences, and successive crew members bid for the remaining job lines. This method has some drawbacks. To overcome them, this paper develops a problem-specific, three-stage approach to the crew rostering problem, making it more equitable and personalized for workers by considering the management goals of worker–job suitability, worker–worker compatibility and worker–shift fondness. Owing to the vagueness of job characteristics and personal attributes, a fuzzy method is used to improve the evaluation of suitability, compatibility and fondness. The utility similarities of fuzzy assessments with the linguistic grade of ‘very good’ are used to measure the degree of fit with the management goals. A linear goal programming model is proposed to fulfill the ‘efficient assignment/match from the right’ policy. The proposed approach ensures that the right workers are assigned to the right jobs, that the right workers are placed together in a job, and that pleasing working shifts are given to the workers. An illustrative application demonstrates the implementation of the proposed approach.

19.
An empirical study of predicting software faults with case-based reasoning
The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach based on case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest-neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest-neighbor cases on performance accuracy is also explored, and the benefits of using metric-selection procedures for our CBR system are evaluated. Case studies of a large legacy telecommunications system are used for our analysis. We observe that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models perform better than models based on multiple linear regression.

Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the Director of the Empirical Software Engineering Laboratory. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, and statistical modeling. He has published more than 200 refereed papers in these areas. He has been a principal investigator and project leader in a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the Association for Computing Machinery, the IEEE Computer Society, and the IEEE Reliability Society. He served as general chair of the 1999 International Symposium on Software Reliability Engineering (ISSRE’99) and of the 2001 International Conference on Engineering of Computer Based Systems, and has served on the technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems.

Naeem Seliya received the M.S. degree in Computer Science from Florida Atlantic University, Boca Raton, FL, USA, in 2001. He is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include software engineering, computational intelligence, data mining, software measurement, software reliability and quality engineering, software architecture, computer data security, and network intrusion detection. He is a student member of the IEEE Computer Society and the Association for Computing Machinery.
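The CBR prediction scheme, estimate a new module's faults from its k most similar past modules, can be sketched with Euclidean distance and inverse-distance weighting. This is a toy stand-in: the paper's best configuration used Mahalanobis distance, and the case data and function names here are assumptions.

```python
import math

def predict_faults(module, case_base, k=2):
    """Estimate the fault count of a new module as the inverse-distance-
    weighted average of the fault counts of its k nearest past cases.
    Euclidean distance stands in for the similarity function here."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(case_base, key=lambda c: dist(module, c[0]))[:k]
    # Closer cases get larger weights; epsilon avoids division by zero.
    weights = [1.0 / (dist(module, m) + 1e-9) for m, _ in neighbours]
    return sum(w * f for w, (_, f) in zip(weights, neighbours)) / sum(weights)

# Case base: (metric vector, observed fault count) for past modules.
cases = [([10.0, 2.0], 1), ([50.0, 9.0], 8), ([12.0, 3.0], 2), ([48.0, 8.0], 7)]
estimate = predict_faults([49.0, 8.5], cases, k=2)
```

Swapping the distance function or the solution algorithm (e.g. an unweighted average) changes the estimate, which is exactly the design space the empirical study explores.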

20.
Several studies have demonstrated the superior performance of ensemble classification algorithms, whereby multiple member classifiers are combined into one aggregated and powerful classification model, over single models. In this paper, two rotation-based ensemble classifiers are proposed as modeling techniques for customer churn prediction. In Rotation Forests, feature extraction is applied to feature subsets in order to rotate the input data for training base classifiers, while RotBoost combines Rotation Forest with AdaBoost. In an experimental validation based on data sets from four real-life customer churn prediction projects, Rotation Forest and RotBoost are compared to a set of well-known benchmark classifiers. Moreover, variations of Rotation Forest and RotBoost implementing three alternative feature extraction algorithms are compared: principal component analysis (PCA), independent component analysis (ICA) and sparse random projections (SRP). The performance of rotation-based ensemble classifiers is found to depend upon (i) the performance criterion used to measure classification performance and (ii) the feature extraction algorithm implemented. In terms of accuracy, RotBoost outperforms Rotation Forest, but none of the considered variations offers a clear advantage over the benchmark algorithms. However, in terms of AUC and top-decile lift, the results clearly demonstrate the competitive performance of Rotation Forests compared to the benchmark algorithms. Moreover, ICA-based Rotation Forests outperform all other considered classifiers and are therefore recommended as a well-suited alternative classification technique for the prediction of customer churn that allows for improved marketing decision making.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号