Similar Documents
20 similar documents found (search time: 15 ms)
1.
We use 810 versions of the Linux kernel, released over a period of 14 years, to characterize the system's evolution, using Lehman's laws of software evolution as a basis. We investigate different possible interpretations of these laws, as reflected by different metrics that can be used to quantify them. For example, system growth has traditionally been quantified using lines of code or number of functions, but the functional growth of an operating system like Linux can also be quantified using the number of system calls. In addition, we use the availability of the source code to track metrics, such as McCabe's cyclomatic complexity, that have not previously been tracked across so many versions. We find that the data supports several of Lehman's laws, mainly those concerned with growth and with the stability of the process. We also make some novel observations, e.g., that the average complexity of functions decreases with time, though this is mainly due to the addition of many small functions.
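Two of the growth measures mentioned above are easy to approximate mechanically. The sketch below uses crude, assumed heuristics (the function-definition regex is deliberately rough, and the checkout path is hypothetical); it counts lines of code and C function definitions in one released source tree, and running it once per release yields the growth series.

```python
# A minimal sketch (crude, assumed heuristics) of two growth metrics:
# total lines of code and number of C function definitions in a tree.
import re
from pathlib import Path

# very rough matcher for the opening of a C function definition;
# prototypes (ending in ';') do not match because we require '{'
FUNC_RE = re.compile(r"^[A-Za-z_][\w\s\*]*\([^;{]*\)\s*\{", re.MULTILINE)

def growth_metrics(tree):
    loc = functions = 0
    for path in Path(tree).rglob("*.c"):
        text = path.read_text(errors="ignore")
        loc += text.count("\n")
        functions += len(FUNC_RE.findall(text))
    return loc, functions

# run once per release and track the two series over time
print(growth_metrics("linux-2.6.0"))  # hypothetical checkout directory
```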

2.
Lines of code metrics are routinely used as measures of software system complexity, programmer productivity, and defect density, and are used to predict both effort and cost. The guidelines for using a direct metric, such as lines of code, as a proxy for a quality factor such as complexity or defect density, or in derived metrics such as cost and effort, are clear. Amongst other criteria, the direct metric must be linearly related to, and accurately predict, the quality factor, and these properties must be validated through statistical analysis following a rigorous validation methodology. In this paper, we conduct such an analysis to determine the validity and utility of lines of code as a measure, using the ISBSG-10 data set. We find that it fails to meet the specified validity tests and, therefore, has limited utility in derived measures.

3.
Applying metrics to software is a way to measure and improve software quality. Many metrics apply to software implementations (code), so they cannot be used early in the life cycle. We survey ten modularity and structural complexity metrics applicable to software designs, and summarize the results of empirical validation studies when they are available. We present a database schema from which most of these metrics can be computed. Aspects of each metric require further study and refinement. However, using some of them during design may still be beneficial.

4.
This paper defines two suites of metrics, which address static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, wherein complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance, and Size criticality, are identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilization of various components. In this paper, both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means to measure issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
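The two static density metrics can be read as simple ratios. The sketch below is an assumed formulation following the usual reading of these names (constituents per component, and actual versus available interactions), not necessarily the paper's exact formulas.

```python
# A minimal sketch (assumed formulas) of the two static density metrics:
#   Component Packing Density: constituents of a given type per component
#   Component Interaction Density: actual / maximum available interactions
def packing_density(num_constituents, num_components):
    return num_constituents / num_components

def interaction_density(actual_interactions, available_interactions):
    return actual_interactions / available_interactions

# an assembly of 20 components exposing 180 possible interaction slots
print(packing_density(num_constituents=500, num_components=20))  # 25.0
print(interaction_density(actual_interactions=45,
                          available_interactions=180))           # 0.25
```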

5.
Three software complexity measures (Halstead's E, McCabe's v(G), and length as measured by the number of statements) were compared to programmer performance on two software maintenance tasks. In an experiment on understanding, length and v(G) correlated with the percentage of statements correctly recalled. In an experiment on modification, most significant correlations were obtained with metrics computed on modified rather than unmodified code. All three metrics correlated with both the accuracy of the modification and the time to completion. Relationships in both experiments occurred primarily in unstructured rather than structured code, and in code with no comments. The metrics were also most predictive of performance for less experienced programmers. Thus, these metrics appear to assess psychological complexity primarily where programming practices do not provide assistance in understanding the code.

6.
Ontology languages such as OWL are being widely used as the Semantic Web movement gains momentum. With the proliferation of the Semantic Web, more and more large-scale ontologies are being developed in real-world applications to represent and integrate knowledge and data. There is an increasing need for measuring the complexity of these ontologies in order for people to better understand, maintain, reuse and integrate them. In this paper, inspired by the concept of software metrics, we propose a suite of ontology metrics, at both the ontology level and the class level, to measure the design complexity of ontologies. The proposed metrics are analytically evaluated against Weyuker's criteria. We have also performed empirical analysis on public domain ontologies to show the characteristics and usefulness of the metrics. We point out possible applications of the proposed metrics to ontology quality control. We believe that the proposed metric suite is useful for managing ontology development projects.
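As a rough illustration of class-level ontology metrics, the sketch below computes two common ingredients of design-complexity measures, hierarchy depth and number of directly declared properties, over a toy subclass graph. The metrics shown are illustrative assumptions, not necessarily the paper's proposed suite.

```python
# A minimal sketch (assumed, illustrative metrics) of class-level
# ontology complexity over a toy subclass hierarchy.
subclass_of = {            # child -> parent (None marks the root)
    "Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal", "Animal": None,
}
properties = {             # class -> directly declared properties
    "Animal": ["hasAge"], "Mammal": ["hasFur"], "Dog": ["hasBreed"],
}

def depth(cls):
    """Depth of a class in the subclass hierarchy (root has depth 0)."""
    parent = subclass_of.get(cls)
    return 0 if parent is None else 1 + depth(parent)

for cls in subclass_of:
    print(cls, "depth:", depth(cls),
          "properties:", len(properties.get(cls, [])))
```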

7.
Algorithms can be used to prove and to discover new theorems. This paper shows how algorithmic skills in general, and the notion of invariance in particular, can be used to derive many results from Euclid's algorithm. We illustrate how to use the algorithm as a verification interface (i.e., how to verify theorems) and as a construction interface (i.e., how to investigate and derive new theorems). The theorems that we verify are well-known and most of them are included in standard number-theory books. The new results concern distributivity properties of the greatest common divisor and a new algorithm for efficiently enumerating the positive rationals in two different ways. One way is known and is due to Moshe Newman. The second is new and corresponds to a deforestation of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Newman's algorithm. A short review of the original papers by Stern and Brocot is also included.
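Newman's enumeration is short enough to state as code. The sketch below uses the known recurrence x_{k+1} = 1 / (2*floor(x_k) + 1 - x_k), which walks the positive rationals in Calkin-Wilf order, each rational appearing exactly once.

```python
# A minimal sketch of Newman's enumeration of the positive rationals.
from fractions import Fraction

def newman_enumeration(n):
    """First n positive rationals via x_{k+1} = 1/(2*floor(x_k)+1-x_k),
    starting from x_0 = 1; every positive rational appears exactly once."""
    x = Fraction(1)
    out = []
    for _ in range(n):
        out.append(x)
        x = 1 / (2 * (x.numerator // x.denominator) + 1 - x)
    return out

print([str(x) for x in newman_enumeration(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```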

8.
Measuring software products and processes is essential for improving software productivity and quality. In order to evaluate the complexity of object-oriented software, several complexity metrics have been proposed. Among them, Chidamber and Kemerer's metrics are the best known for object-oriented software. Their metrics evaluate the complexity of classes in terms of internal, inheritance, and coupling complexity. Though reused classes from a class library usually have better quality than newly-developed ones, their metrics treat inheritance and coupling complexity in the same way. This article first proposes a revision of Chidamber and Kemerer's metrics that can be applied to software constructed by reusing software components. Then, we give an analysis of data collected from the development of object-oriented software using a GUI framework. We compare the original metrics with the revised ones by evaluating the accuracy of estimating the effort to fix faults, and show the validity and usefulness of the revised metrics.
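Two of the Chidamber and Kemerer metrics can be demonstrated directly on live Python classes. The sketch below is illustrative only (it shows the original metrics, not the article's revision): Depth of Inheritance Tree (DIT) and Number of Children (NOC).

```python
# A minimal sketch of two Chidamber-Kemerer metrics on Python classes.
def dit(cls):
    """DIT: length of the longest path from cls to the inheritance root
    (object itself contributes depth 0)."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

def noc(cls):
    """NOC: number of immediate subclasses."""
    return len(cls.__subclasses__())

class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Circle(Shape): pass

print(dit(Triangle))  # 3
print(noc(Shape))     # 2
```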

9.
Context: Software metrics may be used in fault prediction models to improve software quality by predicting fault location. Objective: This paper aims to identify software metrics and to assess their applicability in software fault prediction. We investigated the influence of context on the selection and performance of metrics. Method: This systematic literature review includes 106 papers published between 1991 and 2011. The selected papers are classified according to metrics and context properties. Results: Object-oriented metrics (49%) were used nearly twice as often as traditional source code metrics (27%) or process metrics (24%). Chidamber and Kemerer's (CK) object-oriented metrics were the most frequently used. According to the selected studies, there are significant differences in fault prediction performance among the metrics used. Object-oriented and process metrics have been reported to be more successful in finding faults than traditional size and complexity metrics. Process metrics seem to be better at predicting post-release faults than any static code metrics. Conclusion: More studies should be performed on large industrial software systems to find metrics more relevant for industry and to answer the question of which metrics should be used in a given context.

10.
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation.
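For concreteness, the sketch below shows one plausible instance of a spatial complexity measure, an assumed illustration rather than the paper's exact definition: the average number of lines separating each call site from the definition of the function it calls.

```python
# A minimal sketch (assumed definition) of a spatial-complexity measure:
# mean line distance between call sites and the callees' definitions.
def spatial_complexity(def_lines, calls):
    """def_lines: {function_name: line of its definition}
    calls: [(call_site_line, callee_name), ...]"""
    distances = [abs(line - def_lines[callee]) for line, callee in calls]
    return sum(distances) / len(distances)

def_lines = {"parse": 10, "emit": 250}
calls = [(40, "parse"), (45, "emit"), (300, "parse")]
print(spatial_complexity(def_lines, calls))  # (30 + 205 + 290) / 3 = 175.0
```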

11.
Software practitioners need ways to assess their software, and metrics can provide an automated way to do that, providing valuable feedback with little effort earlier than the testing phase. Semantic metrics were proposed to quantify aspects of software quality based on the meaning of the software's task in the domain. Unlike traditional software metrics, semantic metrics do not rely on code syntax. Instead, semantic metrics are calculated from domain information, using the knowledge base of a program understanding system. Because semantic metrics do not rely on code syntax, they can be calculated before code is fully implemented. This article evaluates the semantic metrics theoretically and empirically. We find that the semantic metrics compare well to existing metrics and show promise as early indicators of software quality.

12.
13.
In a multicore transactional memory (TM) system, concurrent execution threads interact and interfere with each other through shared memory. The less interference a thread provokes, the better for the system. However, as a programmer is primarily interested in optimizing her individual code's performance rather than the system's overall performance, she has no natural incentive to provoke as little interference as possible. Hence, a TM system must be designed to be compatible with good programming incentives (GPI), i.e., writing efficient code for the overall system should coincide with writing code that optimizes an individual thread's performance. We show that with most contention managers (CM) proposed in the literature so far, TM systems are not GPI compatible. We provide a generic framework for CMs that base their decisions on priorities and explain how to modify Timestamp-like CMs so as to feature GPI compatibility. In general, however, priority-based conflict resolution policies are prone to be exploited by selfish programmers. In contrast, a simple non-priority-based manager that resolves conflicts at random is GPI compatible.
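The randomized policy is almost a one-liner. The sketch below (an assumed toy transaction model, not the paper's framework) resolves every conflict with a fair coin flip, so no thread can improve its own odds by how its code is written.

```python
# A minimal sketch (assumed model) of a randomized contention manager:
# every conflict is resolved by a fair coin flip, leaving selfish
# programmers nothing to exploit.
import random

def resolve(tx_a, tx_b):
    """Return (winner, loser); the loser aborts and retries."""
    return (tx_a, tx_b) if random.random() < 0.5 else (tx_b, tx_a)

wins = {"tx1": 0, "tx2": 0}
for _ in range(10_000):            # simulate repeated conflicts
    winner, _loser = resolve("tx1", "tx2")
    wins[winner] += 1
print(wins)                        # roughly 50/50 -- nothing to game
```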

14.
15.
A software complexity metric is a quantitative measure of the difficulty of comprehending and working with a specific piece of software. The majority of metrics currently in use focus on a program's "microcomplexity": how difficult the details of the software are to deal with. This paper proposes a method of measuring the "macrocomplexity," i.e., how difficult the overall structure of the software is to deal with, as well as the microcomplexity. We evaluate this metric using data obtained during the development of a compiler/environment project involving over 30,000 lines of C code. The new metric's performance is compared to the performance of several other popular metrics, with mixed results. We then discuss how these metrics, or any other metrics, may be used to help increase project management efficiency.

16.
T. R. Hopkins, Software, 1996, 26(8): 967-982
We use knot count and path count metrics to identify which routines in the Level 1 basic linear algebra subroutines (BLAS) might benefit from code restructuring. We then consider how logical restructuring and the improvements in the facilities available from successive versions of Fortran have allowed us to reduce the complexity of the code, as measured by knot count, path count and cyclomatic complexity, and to improve the user interface of one of the identified routines, which computes the Euclidean norm of a vector. With these reductions in complexity we hope to have contributed to improvements in the maintainability and clarity of the code. Software complexity metrics and the control graph are used to quantify and provide a visual guide to the quality of the software, and the performance of two Fortran code restructuring tools is reported. Finally, we give some indication of the cost of the extra numerical robustness offered by the BLAS routine over the use of the new Fortran 90 intrinsic functions.
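The extra robustness referred to at the end is the classic scaled accumulation used by the Level 1 BLAS norm routines. The sketch below (an illustration, not the BLAS source) contrasts it with the naive formula, which overflows as soon as any element's square exceeds the floating-point range.

```python
# A minimal sketch contrasting a naive Euclidean norm with a scaled
# accumulation in the style of the BLAS snrm2/dnrm2 routines.
import math

def norm_naive(x):
    # overflows once any x[i]**2 exceeds the floating-point range
    return math.sqrt(sum(v * v for v in x))

def norm_scaled(x):
    # rescale by the running maximum so every squared term stays <= 1
    scale, ssq = 0.0, 1.0
    for v in x:
        av = abs(v)
        if av == 0.0:
            continue
        if av > scale:
            ssq = 1.0 + ssq * (scale / av) ** 2
            scale = av
        else:
            ssq += (av / scale) ** 2
    return scale * math.sqrt(ssq)

big = [1e200, 1e200]
print(norm_naive(big))   # inf -- squaring overflows
print(norm_scaled(big))  # 1.4142...e+200, the correct value
```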

17.
All previously proposed threshold proxy signature schemes have been based on discrete logarithms and require a protocol to generate and verify a shared secret among the proxy group. Therefore, the proxy signers must perform many expensive modular exponentiations and communications to obtain and verify a shared secret. Moreover, in most of the existing threshold proxy signature schemes, the receiver cannot find out who signed the proxy signatures. We propose an efficient (t, n) threshold proxy signature scheme based on Schnorr's scheme. Compared with existing (t, n) threshold proxy signature schemes, our scheme reduces the amount of computation and communication. In our method, not only can the original signer know who generated the proxy signature, but the receiver can also verify the identities of the group signers who made the proxy signature. Our scheme also makes auditing a document's signers convenient and fair.
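For background, the sketch below shows plain single-signer Schnorr signatures, the building block the scheme extends; it is not the (t, n) threshold proxy protocol itself, and the tiny parameters are insecure, for illustration only.

```python
# A minimal, toy-parameter sketch of plain Schnorr signatures.
# Requires Python 3.8+ for pow() with a negative exponent and modulus.
import hashlib
import secrets

q = 1019            # prime subgroup order
p = 2 * q + 1       # 2039, also prime
g = 4               # generator of the order-q subgroup of Z_p*

def h(r, msg):
    data = str(r).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key in [1, q-1]
    return x, pow(g, x, p)             # (private, public)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = h(r, msg)
    s = (k + x * e) % q
    return e, s

def verify(y, msg, sig):
    e, s = sig
    # g^s * y^(-e) == g^(k + xe) * g^(-xe) == g^k == r
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return h(r, msg) == e

x, y = keygen()
sig = sign(x, b"delegate signing rights")
assert verify(y, b"delegate signing rights", sig)
```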

18.
As the cost of programming becomes a major component of the cost of computer systems, it becomes imperative that program development and maintenance be better managed. One measurement a manager could use is programming complexity. Such a measure can be very useful if the manager is confident that the higher the complexity measure for a programming project, the more effort it takes to complete the project and perhaps to maintain it. Until recently, most measures of complexity were based only on intuition and experience. In the past three years, two objective metrics have been introduced: McCabe's cyclomatic number v(G) and Halstead's effort measure E. This paper reports an empirical study designed to compare these two metrics with a classic size measure, lines of code. A fourth metric, based on a model of programming, is introduced and shown to be better than the previously known metrics for some experimental data.
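The two objective metrics compared in the study have simple textbook definitions, sketched below: Halstead's effort E = D * V, with volume V = N log2(n) and difficulty D = (n1/2)(N2/n2), and McCabe's cyclomatic number v(G) = e - n + 2p for a control-flow graph.

```python
# A minimal sketch (textbook definitions) of Halstead's effort E
# and McCabe's cyclomatic number v(G).
import math

def halstead_effort(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences.
    V = N * log2(n), D = (n1/2) * (N2/n2), E = D * V."""
    n = n1 + n2
    N = N1 + N2
    volume = N * math.log2(n)
    difficulty = (n1 / 2) * (N2 / n2)
    return difficulty * volume

def cyclomatic_number(edges, nodes, components=1):
    """v(G) = e - n + 2p for a control-flow graph."""
    return edges - nodes + 2 * components

print(halstead_effort(n1=10, n2=7, N1=24, N2=19))
print(cyclomatic_number(edges=9, nodes=8))  # 3
```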

19.
Context: Effort-aware models, e.g., effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort involved in fixing the bugs. Since the effort of current bugs is not yet known and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models are forced to use approximations, such as the number of lines of code (LOC) of the predicted files. Objective: Although the choice of these approximations is critical for the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper, we investigate the question: is LOC a good measure of effort for use in effort-aware models? Method: We perform an empirical study on four open source projects, for which we obtain explicitly-recorded effort data, and compare the use of LOC to various complexity, size and churn metrics as measures of effort. Results: We find that a combination of complexity, size and churn metrics is a better measure of effort than LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, using LOC under-estimates the amount of effort required, compared to our best effort predictor, by approximately 66%. Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For the case of effort-aware bug prediction, using LOC provides results similar to combining complexity, churn, size and LOC as a proxy for effort when prioritizing the most risky files. However, for the purpose of effort estimation, using LOC may under-estimate the amount of effort required.
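The core of an effort-aware model is ranking by predicted faults per unit of effort rather than by fault count alone. The sketch below (assumed data shapes and made-up numbers) shows that ordering.

```python
# A minimal sketch (assumed data) of effort-aware prioritization:
# rank files by predicted faults per unit of estimated effort, so
# inspecting the top of the list finds the most faults per hour spent.
files = [
    # (name, predicted_faults, estimated_effort such as LOC or a
    #  combined complexity/size/churn metric)
    ("core.c", 8.0, 4000),
    ("util.c", 3.0, 300),
    ("parser.c", 5.0, 1000),
]

ranked = sorted(files, key=lambda f: f[1] / f[2], reverse=True)
for name, faults, effort in ranked:
    print(f"{name}: {faults / effort:.4f} predicted faults per unit effort")
# util.c ranks first: fewer faults, but far less effort to inspect
```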

20.
Correcting design decay in source code is not a trivial task. Diagnosing and subsequently correcting inconsistencies between a software system's code and its design rules (e.g., database queries are only allowed in the persistence layer) and coding conventions can be complex, time-consuming and error-prone. Providing support for this process is therefore highly desirable, but it is of far greater complexity than suggesting basic corrective actions for simplistic implementation problems (like the "declare a local variable for non-declared variable" suggested by Eclipse). We present an abductive reasoning approach to inconsistency correction that consists of (1) a means for developers to document and verify a system's design and coding rules, (2) an abductive logic reasoner that hypothesizes possible causes of inconsistencies between the system's code and the documented rules, and (3) a library of corrective actions for each hypothesized cause. This work builds on our previous work, where we expressed design rules as equality relationships between sets of source code artifacts (e.g., the set of methods in the persistence layer is the same as the set of methods that query the database). In this paper, we generalize our approach to design rules expressed as user-defined binary relationships between two sets of source code artifacts (e.g., every state-changing method should invoke a persistence method). We illustrate our approach on the design of IntensiVE, a tool suite that enables defining sets of source code artifacts intensionally (by means of logic queries) and verifying relationships between such sets.
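A design rule stated as an equality between two intensionally defined sets can be checked mechanically. The sketch below (an assumed toy representation, not IntensiVE itself) verifies the persistence-layer rule used as the example above and reports each direction of violation.

```python
# A minimal sketch (assumed representation) of checking one design rule:
# the set of methods in the persistence layer must equal the set of
# methods that query the database. Both sets are defined intensionally
# by predicates over simple method records.
methods = [
    {"name": "save_user",   "layer": "persistence", "queries_db": True},
    {"name": "find_user",   "layer": "persistence", "queries_db": True},
    {"name": "render_form", "layer": "ui",          "queries_db": True},  # violation
]

in_persistence = {m["name"] for m in methods if m["layer"] == "persistence"}
queries_db     = {m["name"] for m in methods if m["queries_db"]}

if in_persistence != queries_db:
    # each asymmetric difference hypothesizes a different cause/correction
    print("queries outside persistence:", queries_db - in_persistence)
    print("persistence methods that never query:", in_persistence - queries_db)
```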
