Similar Literature
20 similar documents found.
1.
Software crashes are severe manifestations of software bugs. Debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is highly demanded by developers. Locating the bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of bug-fix locations. However, there are currently no systematic studies on locating the changes to a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We also propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features derived from our empirical study, and we train the model using data from historically fixed crashes. We evaluated ChangeLocator on six release versions of the NetBeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5, and 10 changes in the recommended list, respectively. It significantly outperforms the existing state-of-the-art approach.
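The abstract does not spell out the learning model or its features; purely as an illustration of the train-on-fixed-crashes-then-rank-changes idea, here is a minimal Python sketch using scikit-learn, with hypothetical change-level features and a random forest as a stand-in learner.

```python
# Minimal sketch of ranking changes by crash-inducing likelihood.
# The feature names and model choice are illustrative assumptions,
# not the features or learner actually used by ChangeLocator.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: features of one change (e.g., lines touched, files touched,
# overlap with crash stack frames); label 1 = known crash-inducing.
X_train = np.array([[120, 4, 2], [10, 1, 0], [300, 9, 5], [25, 2, 1]])
y_train = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score the candidate changes for a new crash bucket and recommend the top-k.
candidates = {"c1": [200, 6, 3], "c2": [15, 1, 0], "c3": [80, 3, 2]}
scores = {cid: model.predict_proba([f])[0, 1] for cid, f in candidates.items()}
top_k = sorted(scores, key=scores.get, reverse=True)[:10]
print(top_k)
```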

2.
Debugging—the process of identifying, localizing and fixing bugs—is a key activity in software development. Due to issues such as non-determinism and difficulties in reproducing failures, debugging concurrent software is significantly more challenging than debugging sequential software. A number of methods, models and tools for debugging concurrent and multicore software have been proposed, but the body of work partially lacks a common terminology and a more recent view of the problems to solve. This suggests the need for a classification and an up-to-date comprehensive overview of the area. This paper presents the results of a systematic mapping study in the field of debugging of concurrent and multicore software in the last decade (2005–2014). The study is guided by two objectives: (1) to summarize the recent publication trends and (2) to clarify current research gaps in the field. Through a multi-stage selection process, we identified 145 relevant papers. Based on these, we summarize the publication trend in the field by showing the distribution of publications with respect to year, publication venues, representation of academia and industry, and active research institutes. We also identify research gaps in the field based on attributes such as types of concurrency bugs, types of debugging processes, types of research and research contributions. The main observations from the study are that, during the years 2005–2014: (1) there was no focal conference or venue for publishing papers in this area; hence, a large variety of conference and journal venues (90) were used to publish relevant papers; (2) in terms of publication contribution, academia was more active in this area than industry; (3) most publications in the field address the data race bug; (4) bug identification is the most common stage of debugging addressed by articles in the period; (5) six types of research approaches were found, with solution proposals being the most common one; and (6) the published papers essentially focus on four different types of contributions, with "methods" being the most common type. We can further conclude that quite a number of aspects are still not sufficiently covered in the field, most notably: (1) correction and fixing of bugs in terms of the debugging process; (2) order violation, suspension and starvation in terms of concurrency bugs; (3) validation and evaluation research in terms of research type; and (4) metrics in terms of research contribution. It is clear that the concurrent, parallel and multicore software community needs broader studies in debugging. This systematic mapping study can help direct such efforts.

3.
Software has become ubiquitous in our daily lives, and with its increasing functionality and complexity comes a frequently tedious and prolonged debugging process. Of the three activities in program debugging (failure detection, fault localization, and bug fixing), the focus of this paper is on the first, failure detection, under the condition that there is no test oracle that can be used to automatically determine the success or failure of all the executions. More precisely, the outputs of many executions have to be verified manually, or the expected outputs are not even available. We want to determine whether there is a solution that helps programmers predict the execution results, and how good these predicted results are when they are used to help programmers find the locations of bugs. A framework is proposed to reduce the effort of output verification, using a strategy based on the Hamming distance or K-Means clustering to predict the results of test executions. These predictions and the statement coverage of each test case are used to compute the suspiciousness of each statement according to a fault localization technique and to produce a ranking that is examined to locate the bugs. Case studies using 22 programs and seven fault localization techniques were conducted to evaluate the fault localization effectiveness of the proposed framework on 1203 faulty versions, some of which have a single bug and others multiple bugs. A discussion of factors that may affect the accuracy of execution result prediction and the resulting fault localization effectiveness is also presented. Our data suggest that, compared with fault localization techniques that use execution results verified against the expected outputs, those using predicted execution results are generally as good as, and can even be more effective than, the former (in the sense of examining a smaller number of statements to locate the first faulty statement).
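The seven fault localization techniques are not named in the abstract; as one concrete illustration of the last step (computing statement suspiciousness from coverage and pass/fail outcomes, whether verified or predicted), here is a small sketch using the well-known Ochiai formula, which may or may not be among the techniques the paper evaluated.

```python
# Minimal sketch: rank statements by Ochiai suspiciousness using per-test
# statement coverage and pass/fail outcomes. The outcomes could equally be
# predicted labels (e.g., from clustering of outputs) instead of
# oracle-verified ones; this sketch does not implement the prediction step.
import math

coverage = {                     # test id -> set of covered statement ids
    "t1": {1, 2, 3}, "t2": {1, 3}, "t3": {2, 4}, "t4": {1, 2, 4},
}
failed = {"t2", "t4"}            # (possibly predicted) failing tests

statements = set().union(*coverage.values())
total_failed = len(failed)

def ochiai(stmt):
    ef = sum(1 for t, cov in coverage.items() if stmt in cov and t in failed)
    ep = sum(1 for t, cov in coverage.items() if stmt in cov and t not in failed)
    denom = math.sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

ranking = sorted(statements, key=ochiai, reverse=True)
print([(s, round(ochiai(s), 3)) for s in ranking])
```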

4.
Software developers, testers and customers routinely submit issue reports to software issue trackers to record the problems they face in using a software system. The issues are then directed to appropriate experts for analysis and fixing. However, submitters often misclassify an improvement request as a bug and vice versa, which costs valuable developer time. Hence automated classification of the submitted reports would be of great practical utility. In this paper, we analyze how machine learning techniques may be used to perform this task. We apply different classification algorithms, namely naive Bayes, linear discriminant analysis, k-nearest neighbors, support vector machine (SVM) with various kernels, decision tree and random forest, separately, to classify the reports from three open-source projects. We evaluate their performance in terms of F-measure, average accuracy and weighted average F-measure. Our experiments show that random forests perform best, while SVM with certain kernels also achieves high performance.
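A minimal sketch of this classification task, assuming TF-IDF text features (the paper's actual feature extraction is not described in the abstract) and the random forest learner it found to perform best; the tiny inline dataset is purely illustrative.

```python
# Classify issue reports as "bug" vs. "improvement" with TF-IDF features
# and a random forest. Dataset and labels are illustrative only.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = [
    "app crashes with null pointer exception on startup",
    "please add dark mode to the settings screen",
    "wrong total shown after deleting an item",
    "would be nice to export reports as pdf",
]
labels = ["bug", "improvement", "bug", "improvement"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(reports, labels)
print(clf.predict(["export to csv would help a lot",
                   "application freezes when saving"]))
```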

5.
This paper suggests ways to facilitate creativity and innovation in software development. The paper applies four perspectives – Product, Project, Process, and People – to identify an outlook for software innovation. The paper then describes a new facility – the Software Innovation Research Lab (SIRL) – and a new method concept for software innovation – Essence – based on views, modes, and team roles. Finally, the paper reports on an early experiment using SIRL and Essence and identifies further research.

6.
Forking is the creation of a new software repository by copying another repository. Though forking is controversial in the traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use the code as their own and make changes. A deep understanding of repository forking can provide important insights for the OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We conduct surveys, and analyze the programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features, keep copies, etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42% of the developers we surveyed agree that an automated recommendation tool would be useful to help them pick repositories to fork, while more than 44.4% of developers do not value such a tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer's preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have a higher percentage of organizations, more followers and earlier registration in GitHub. Our results show that forking is mainly used for making contributions to original repositories, and it is beneficial for the OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.

7.
Given a source string u and a target string w, deciding whether w can be obtained by applying a string morphism to u (i.e., uniformly replacing the symbols in u by strings) constitutes an NP-complete problem. We present a multivariate analysis of this problem (and its many variants) from the viewpoint of parameterised complexity theory, thereby pinning down the sources of its computational hardness. Our results show that most parameterised variants of the string morphism problem are fixed-parameter intractable and, apart from some very special cases, tractable variants can only be obtained by considering a large part of the input as parameters, namely the length of w and the number of different symbols in u.
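For readers who want to experiment with the underlying decision problem, here is a straightforward backtracking checker; its exponential worst case is consistent with the NP-completeness stated above, and the allow_empty flag covers the erasing and nonerasing variants.

```python
# Brute-force decision procedure for the string morphism problem:
# is there a mapping h from symbols of u to strings such that replacing
# every symbol of u by its image yields w?
def has_morphism(u: str, w: str, allow_empty: bool = True) -> bool:
    def solve(i: int, j: int, h: dict) -> bool:
        if i == len(u):
            return j == len(w)
        sym = u[i]
        if sym in h:                      # symbol already mapped: image must match here
            img = h[sym]
            return w.startswith(img, j) and solve(i + 1, j + len(img), h)
        lo = 0 if allow_empty else 1      # try every possible image for a new symbol
        for k in range(lo, len(w) - j + 1):
            h[sym] = w[j:j + k]
            if solve(i + 1, j + k, h):
                return True
            del h[sym]
        return False
    return solve(0, 0, {})

print(has_morphism("aba", "xyzxy"))   # True: a -> "xy", b -> "z"
print(has_morphism("aa", "xyz"))      # False: "xyz" has odd length
```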

8.
In this paper, we identify and solve a multi-join optimization problem for Arbitrary Feature-based social image Similarity JOINs (AFS-JOIN). Given two collections (i.e., R and S) of social images that carry visual, spatial and textual (i.e., tag) information, the multiple joins based on arbitrary features retrieve the pairs of images from different users that are visually or textually similar, or spatially close. To address this problem, we propose three methods to facilitate the multi-join processing: 1) two baseline approaches (i.e., a naïve join approach and a maximal threshold (MT)-based approach), and 2) a Batch Similarity Join (BSJ) method. In the BSJ method, given m users' join requests, the requests are first converted and grouped into m″ clusters, which correspond to m″ join boxes, where m > m″. To speed up BSJ processing, the feature distance space is partitioned into cubes based on four segmentation schemes; the image pairs falling in the cubes are indexed by a cube tree index; BSJ processing is thus transformed into searching, with the aid of the index, for the image pairs falling in the cubes affected by the m″ AFS-JOINs. An extensive experimental evaluation using real and synthetic datasets shows that our proposed BSJ technique outperforms the state-of-the-art solutions.
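As a point of reference, the naive baseline join mentioned above can be sketched in a few lines; feature extraction, textual and spatial similarity, and the BSJ cube-tree index are all omitted.

```python
# Naive baseline: a threshold similarity join over image feature vectors
# from two collections R and S, keeping pairs whose Euclidean feature
# distance is within a per-request threshold. Illustrative only.
import numpy as np

R = {"r1": np.array([0.1, 0.2]), "r2": np.array([0.8, 0.9])}
S = {"s1": np.array([0.12, 0.22]), "s2": np.array([0.5, 0.5])}

def naive_join(R, S, threshold):
    """Return all (r, s) pairs whose feature distance is <= threshold."""
    return [(r, s) for r, vr in R.items() for s, vs in S.items()
            if np.linalg.norm(vr - vs) <= threshold]

print(naive_join(R, S, threshold=0.1))
```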

9.
In this paper, a steganographic scheme adopting the concept of generalized K_d-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, where the embedding distortion measured by the order-d Minkowski distance is no larger than K. In contrast to other pixel matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy suitable for arbitrary relative capacity can be developed. Additionally, an optimization algorithm, namely the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, the proposed scheme performs comparably to state-of-the-art content-based approaches after being combined with image models.
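The generalized scheme and its optimized reference table cannot be reconstructed from the abstract alone; the following sketch shows only the classic special case of pixel matching with N = 2, a 5-ary digit, and distortion at most 1 (the EMD-style reference function), to make the embed/extract mechanics concrete.

```python
# Pixel matching, special case N = 2: embed a base-5 digit into a pixel
# pair by changing at most one pixel by +/-1, using the EMD-style
# reference function f(x, y) = (x + 2y) mod 5. Not the paper's
# generalized K_d-distance N-dimensional scheme.
def f(x, y):
    return (x + 2 * y) % 5

def embed(x, y, digit):
    """Embed a base-5 digit into the pixel pair (x, y)."""
    if f(x, y) == digit:
        return x, y
    # Try every +/-1 modification of a single pixel (distortion <= 1).
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if f(x + dx, y + dy) == digit:
            return x + dx, y + dy
    raise RuntimeError("unreachable: some +/-1 change always matches")

def extract(x, y):
    return f(x, y)

stego = embed(100, 57, digit=3)
print(stego, extract(*stego))  # the extracted digit equals 3
```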

10.
Learning from imbalanced data is a challenging task in a wide range of applications, and it attracts significant research efforts from the machine learning and data mining communities. As a natural approach to this issue, oversampling balances the training samples by replicating existing samples or synthesizing new ones. In general, synthesis outperforms replication by supplying additional information on the minority class. However, the additional information needs to follow the same normal distribution as the training set, which further constrains the new samples within the predefined range of the training set. In this paper, we present the Wiener process oversampling (WPO) technique, which brings a physical phenomenon into sample synthesis. WPO constructs a robust decision region by expanding the attribute ranges of the training set while keeping the same normal distribution. The satisfactory performance of WPO can be achieved with much lower computational complexity. In addition, by integrating WPO with ensemble learning, the WPOBoost algorithm outperforms many prevalent imbalanced-learning solutions.
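The exact WPO construction is not given in the abstract; a rough sketch of the general idea, perturbing minority samples with Wiener-process (Brownian-motion) increments, i.e. Gaussian noise whose variance grows linearly with a time horizon, might look as follows. The parameters and the WPOBoost ensemble step are assumptions, not the paper's algorithm.

```python
# Synthesize new minority-class samples by adding Wiener-process increments
# to randomly chosen existing minority samples. Illustrative sketch only.
import numpy as np

def wiener_oversample(X_min, n_new, t=1.0, sigma=0.1, rng=None):
    """Create n_new synthetic samples from minority samples X_min."""
    rng = np.random.default_rng(rng)
    base = X_min[rng.integers(0, len(X_min), size=n_new)]
    # Wiener increment over horizon t: N(0, sigma^2 * t) per attribute.
    noise = rng.normal(0.0, sigma * np.sqrt(t), size=base.shape)
    return base + noise

X_minority = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
X_synthetic = wiener_oversample(X_minority, n_new=5, rng=0)
print(X_synthetic.shape)  # (5, 2)
```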

11.
Existing spatiotemporal indexes suffer from either large update cost or poor query performance, except for the B^x-tree (the state of the art), which consists of multiple B^+-trees indexing the 1D values transformed from the (multi-dimensional) moving objects based on a space-filling curve (Hilbert, in particular). This curve, however, does not consider object velocities, and as a result, query processing with a B^x-tree retrieves a large number of false hits, which seriously compromises its efficiency. It is natural to wonder "can we obtain better performance by also capturing the velocity information, using a Hilbert curve of a higher dimensionality?". This paper provides a positive answer by developing the B^dual-tree, a novel spatiotemporal access method leveraging pure relational methodology. We show, with theoretical evidence, that the B^dual-tree indeed outperforms the B^x-tree in most circumstances. Furthermore, our technique can effectively answer progressive spatiotemporal queries, which are poorly supported by B^x-trees.
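To make the "index position and velocity through one space-filling-curve key" idea concrete, here is a simplified sketch that uses a Morton (Z-order) curve instead of the Hilbert curve of the B^dual-tree; the dimensionality, bit budget, and update handling are all simplifications.

```python
# Map an object's quantized position *and* velocity into a single 1D key
# via bit interleaving (Morton / Z-order), so a conventional relational
# index can store it. Illustrative only; the B^dual-tree uses a Hilbert curve.
def interleave_bits(values, bits=8):
    """Interleave `bits` bits of each value into one Morton key."""
    key = 0
    for b in range(bits):
        for i, v in enumerate(values):
            key |= ((v >> b) & 1) << (b * len(values) + i)
    return key

# 4D key over (x, y, vx, vy), each quantized to 0..255.
x, y, vx, vy = 130, 77, 12, 250
key = interleave_bits([x, y, vx, vy], bits=8)
print(key)  # store `key` as the 1D index attribute in a relational table
```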

12.
The article deals with the problem of identifying an unknown frequency of a shifted sinusoidal signal y(t) = σ0 + σ·sin(ωt + φ). A new approach to estimating the frequency of a shifted sinusoidal signal is suggested; it is robust with respect to unmodeled disturbances present in the measurement of the useful signal. In contrast to known analogs, the given approach permits controlling the time of estimation of the unknown frequency ω. The dimension of the suggested identification algorithm is lower than that of known analogs.
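The paper's identification algorithm is not described in the abstract; purely as a baseline for comparison, a standard FFT-based estimate of ω from noisy samples of y(t) can be written as follows.

```python
# Estimate the unknown angular frequency of a biased, noisy sinusoid from
# samples by locating the dominant FFT bin. Not the paper's adaptive
# identification scheme; a conventional baseline only.
import numpy as np

fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
omega_true = 2 * np.pi * 37.0             # "unknown" angular frequency
y = 0.5 + 1.3 * np.sin(omega_true * t + 0.7) + 0.05 * np.random.randn(t.size)

Y = np.fft.rfft(y - y.mean())             # remove the bias sigma_0 first
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
omega_hat = 2 * np.pi * freqs[np.argmax(np.abs(Y))]
print(omega_hat, omega_true)              # the two values should be close
```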

13.
Representative skyline computation is a fundamental issue in the database area that has attracted much attention in recent years. A notable definition of the representative skyline is the distance-based representative skyline (DBRS). Given an integer k, a DBRS includes k representative skyline points that aim at minimizing the maximal distance between a non-representative skyline point and its nearest representative. In the 2D space, the state-of-the-art algorithm to compute the DBRS is based on dynamic programming (DP) and takes O(km²) time, where m is the number of skyline points. Clearly, such a DP-based algorithm cannot be used for handling large-scale datasets due to the quadratic time cost. To overcome this problem, in this paper, we propose a new approximate algorithm called ARS, and a new exact algorithm named PSRS, based on a carefully designed parametric search technique. We show that the ARS algorithm can guarantee a solution that is at most ε larger than the optimal solution. The proposed ARS and PSRS algorithms run in O(k log² m log(T/ε)) and O(k² log³ m) time respectively, where T is no more than the maximal distance between any two skyline points. We also propose an improved exact algorithm, called PSRS+, based on an effective lower and upper bounding technique. We conduct extensive experimental studies over both synthetic and real-world datasets, and the results demonstrate the efficiency and effectiveness of the proposed algorithms.
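Neither ARS nor PSRS can be reconstructed from the abstract; as a simple stand-in that conveys the objective (pick k representatives minimizing the maximum distance from any skyline point to its nearest representative), here is a farthest-point, k-center-style heuristic over a given set of skyline points.

```python
# Greedy farthest-point selection of k representatives from skyline points.
# A simple illustrative heuristic, not the paper's ARS/PSRS algorithms.
import numpy as np

def pick_representatives(skyline, k):
    skyline = np.asarray(skyline, dtype=float)
    reps = [0]                                    # start from an arbitrary point
    dist = np.linalg.norm(skyline - skyline[0], axis=1)
    while len(reps) < k:
        nxt = int(np.argmax(dist))                # point farthest from current reps
        reps.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(skyline - skyline[nxt], axis=1))
    return reps, dist.max()                       # chosen indices, achieved radius

skyline_points = [(1, 9), (2, 7), (4, 5), (6, 3), (9, 1)]
reps, radius = pick_representatives(skyline_points, k=2)
print(reps, radius)
```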

14.
Disjunctive Temporal Problems (DTPs) with Preferences (DTPPs) extend DTPs with piecewise constant preference functions associated with each constraint of the form l ≤ x − y ≤ u, where x, y are (real or integer) variables, and l, u are numeric constants. The goal is to find an assignment to the variables of the problem that maximizes the sum of the preference values of satisfied DTP constraints, where such values are obtained by aggregating the preference functions of the satisfied constraints under a "max" semantics. The state-of-the-art approach in the field, implemented in the native DTPP solver Maxilitis, extends the approach of the native DTP solver Epilitis. In this paper we present alternative approaches that translate DTPPs into Maximum Satisfiability problems over sets of Boolean combinations of constraints of the form l ≺ x − y ≺ u, with ≺ ∈ {<, ≤}, extending previous work that dealt with constant preference functions only. We prove correctness and completeness of the approaches. Results obtained with the Satisfiability Modulo Theories (SMT) solvers Yices and MathSAT on randomly generated DTPPs and DTPPs built from real-world benchmarks show that one of our translations is competitive with, and can be faster than, Maxilitis (this is an extended and revised version of Bourguet et al. 2013).

15.
In this paper, typical learning tasks including data condensation, binary classification, identification of the independence between random variables, and conditional density estimation are described from the unified perspective of a linear combination of densities, and accordingly a direct estimation framework based on a linear combination of Gaussian components (i.e., Gaussian basis functions) under the integrated square error criterion is proposed to solve these learning tasks. The proposed direct estimation framework has three advantages. Firstly, unlike most existing state-of-the-art methods, which estimate each component density in this linear combination and then combine them linearly, it can directly estimate the linear combination of densities as a whole, with approximation accuracy at least comparable to, and sometimes better than, existing density estimation methods. Secondly, the time complexity of the proposed direct estimation framework is O(l³), where l is the number of Gaussian components in the framework; these are generally viewed as the Gaussian distributions of the clusters in a dataset, so l is generally much smaller than the size of the dataset, making the framework very suitable for large datasets. Thirdly, the proposed framework can be used to develop alternative approaches to classification, data condensation, identification of the independence between random variables, conditional density estimation, and the identification of similarity between multiple source domains and a target domain. Our preliminary experimental results on several typical applications indicate the power of the proposed direct estimation framework.

16.
Mobile apps (applications) have become a popular form of software, and user reviews of apps have become an important feedback resource. Users may raise issues in their reviews when they use apps, such as a functional bug, a network lag, or a request for a feature. Understanding these issues can help developers focus on users' concerns, and help users evaluate similar apps for download or purchase. However, we do not know which types of issues are raised in a review. Moreover, the amount of user reviews is huge and the nature of the review text is unstructured and informal. In this paper, we analyze 3,902 user reviews of 11 mobile apps in a Chinese app store, 360 Mobile Assistant, and uncover 17 issue types. Then, we propose an approach, CSLabel, that can label user reviews based on the raised issue types. CSLabel uses a cost-sensitive learning method to mitigate the effects of imbalanced data, and optimizes the setting of the support vector machine (SVM) classifier's kernel function. Results show that CSLabel can correctly label reviews with a precision of 66.5%, a recall of 69.8%, and an F1 measure of 69.8%. In comparison with the state-of-the-art approach, CSLabel improves the precision by 14%, the recall by 30%, and the F1 measure by 22%. Finally, we apply our approach to two real scenarios: 1) we provide an overview of 1,076,786 user reviews from 1,100 apps in the 360 Mobile Assistant, and 2) we find that some issue types have a negative correlation with users' evaluation of apps.
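A minimal sketch of cost-sensitive labeling with an SVM, using class weights against the imbalance and an RBF kernel; the issue-type labels, features, and cost settings below are illustrative and do not reproduce CSLabel's actual configuration.

```python
# Cost-sensitive multi-class labeling of app reviews with an SVM:
# class weights counteract imbalanced issue types. Illustrative data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

reviews = [
    "the app crashes every time I open the camera",
    "videos keep buffering even on wifi",
    "please support landscape mode on tablets",
    "login fails after the latest update",
]
issue_types = ["functional bug", "network lag", "feature request", "functional bug"]

clf = make_pipeline(
    TfidfVectorizer(),
    SVC(kernel="rbf", class_weight="balanced"),  # cost-sensitive via class weights
)
clf.fit(reviews, issue_types)
print(clf.predict(["add a dark theme please", "app freezes on startup"]))
```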

17.
Efficiently answering reachability queries, which check whether one vertex can reach another in a directed graph, has been studied extensively in recent years. However, the size of the graphs that people are facing and generating nowadays is growing so rapidly that simple algorithms, such as BFS and DFS, are no longer feasible. Although Refined Online Search algorithms can scale to large graphs, they all suffer from the false positive problem. In this paper, we analyze the cause of false positives and propose an efficient high-dimensional coordinate generating method based on Graph Dominance Drawing (HD-GDD) to answer reachability queries in linear or even constant time. We conduct experiments on different graph structures and different graph sizes to fully evaluate the performance and behavior of our proposal. Empirical results demonstrate that our method outperforms state-of-the-art algorithms and can handle very large graphs.
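For contrast with the constant-time labeling scheme, the plain BFS baseline that the paper argues does not scale is easy to state.

```python
# Baseline reachability check by breadth-first search over a directed graph.
# The HD-GDD coordinate labels that answer queries in (near) constant time
# are not reproduced here.
from collections import deque

def reachable(graph, src, dst):
    """graph: dict mapping vertex -> list of successors (directed edges)."""
    if src == dst:
        return True
    seen, queue = {src}, deque([src])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, []):
            if w == dst:
                return True
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(reachable(g, "a", "d"), reachable(g, "c", "a"))  # True False
```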

18.
In this research we studied the assimilation process of a technological innovation (i.e., technovation) called Radio Frequency Identification (RFID). Like many other technovations, RFID is considered a revolutionary one, but its assimilation is an evolutionary process. Here, we extend conventional assimilation theories and initiate an intellectual argument by introducing extension as an important stage of assimilation, which is contextual and highly relevant to the RFID assimilation process. Data for the empirical tests were collected via a survey of 221 livestock farms in Australia that use RFID for livestock identification and tracing. We examined ten Technology-Organization-Environment (TOE) factors across four stages of the RFID assimilation process. Empirical results, based on Partial Least Squares (PLS)-based Structural Equation Modelling (SEM), suggest that the assimilation of the RFID technovation does involve four stages: initiation, adoption, routinization, and extension. We also found that a single factor may have different effects on different stages of assimilation, which may even differ in direction. For instance, external environmental uncertainty has a positive impact on RFID adoption while it has a negative impact on RFID extension. The paper discusses the results and practical implications in detail.

19.
The Doob graph D(m, n), where m > 0, is a Cartesian product of m copies of the Shrikhande graph and n copies of the complete graph K4 on four vertices. The Doob graph D(m, n) is a distance-regular graph with the same parameters as the Hamming graph H(2m + n, 4). We give a characterization of MDS codes in Doob graphs D(m, n) with code distance at least 3. Up to equivalence, there are m³/36 + 7m²/24 + 11m/12 + 1 − (m mod 2)/8 − (m mod 3)/9 MDS codes with code distance 2m + n in D(m, n), two codes with distance 3 in each of D(2, 0) and D(2, 1) and with distance 4 in D(2, 1), and one code with distance 3 in each of D(1, 2) and D(1, 3) and with distance 4 in each of D(1, 3) and D(2, 2).

20.
We propose techniques for processing SPARQL queries over a large RDF graph in a distributed environment. We adopt a "partial evaluation and assembly" framework. Answering a SPARQL query Q is equivalent to finding subgraph matches of the query graph Q over the RDF graph G. Based on the properties of subgraph matching over a distributed graph, we introduce local partial matches as partial answers in each fragment of the RDF graph G. For assembly, we propose two methods: centralized and distributed assembly. We analyze our algorithms both theoretically and experimentally. Extensive experiments over both real and benchmark RDF repositories of billions of triples confirm that our method is superior to the state-of-the-art methods in both system performance and scalability.
