Similar Documents
20 similar documents found.
1.
A Framework for Measuring Java Coupling Based on Slicing Techniques
In research on object-oriented measurement, some characteristics have been measured through simple statistical methods and information-source-based methods, such as the basic metrics, the CK metrics, and the Aoki metrics. This paper adopts a program-slicing-based approach to measuring coupling in Java: by measuring the coupling relationships present in Java source programs, it derives a coupling measurement method that is more precise than traditional approaches.

2.
This paper presents an integration of chamfer metrics into mathematical morphology. Because chamfer metrics can approximate the Euclidean metric accurately, morphological operations based on chamfer metrics give a good approximation to morphological operations that use Euclidean discs as structuring elements. First, a formal definition of chamfer metrics is presented and some properties are discussed. Then, a number of morphological operations based on chamfer metrics are defined. These include the medial axis, the medial line, size and antisize distributions, and the opening transform. A theoretical analysis of some properties of these operators is provided. This analysis concentrates on the relation between distance transformations and reconstructions and the morphological operators just mentioned. This leads to a number of efficient algorithms for the computation of the morphological operators. All algorithms (except for the opening transform) require a fixed number of image scans and are based on local operations only. An algorithm for the opening transform that is 50–100 times as fast as the brute-force algorithm is presented. This research was supported by the Foundation for Computer Science in the Netherlands (SION), with financial support from the Netherlands Organization for Scientific Research (NWO). This research was part of a project in which the TNO Human Factors Research Institute, CWI, and the University of Amsterdam cooperated.
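For orientation, here is a minimal sketch of the two-pass chamfer (3-4) distance transform that such morphological operators build on; the weights a = 3 and b = 4 approximate the Euclidean metric up to a scale factor of 3. The function name and array conventions are our own, not the paper's:

```python
import numpy as np

def chamfer_distance_transform(binary, a=3, b=4):
    """Two-pass 3-4 chamfer distance transform.

    `binary` is a 2D array where nonzero pixels are foreground; returns the
    chamfer distance of each pixel to the nearest foreground pixel.
    """
    INF = 10**9
    h, w = binary.shape
    d = np.where(binary > 0, 0, INF).astype(np.int64)

    # Forward pass: propagate distances from top-left neighbours.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y-1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y-1, x-1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y-1, x+1] + b)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x-1] + a)

    # Backward pass: propagate distances from bottom-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y+1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y+1, x-1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y+1, x+1] + b)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x+1] + a)
    return d
```

An erosion or dilation by a chamfer disc of radius r then reduces to thresholding this transform at r, which is why such operators need only a fixed number of image scans.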

3.
Machine learning offers a systematic framework for developing metrics that use multiple criteria to assess the quality of machine translation (MT). However, learning introduces additional complexities that may impact the resulting metric’s effectiveness. First, a learned metric is more reliable for translations that are similar to its training examples; this calls into question whether it is as effective in evaluating translations from systems that are not its contemporaries. Second, metrics trained from different sets of training examples may exhibit variations in their evaluations. Third, expensive developmental resources (such as translations that have been evaluated by humans) may be needed as training examples. This paper investigates these concerns in the context of using regression to develop metrics for evaluating machine-translated sentences. We track a learned metric’s reliability across a 5-year period to measure the extent to which the learned metric can evaluate sentences produced by other systems. We compare metrics trained under different conditions to measure their variations. Finally, we present an alternative formulation of metric training in which the features are based on comparisons against pseudo-references in order to reduce the demand on human-produced resources. Our results confirm that regression is a useful approach for developing new metrics for MT evaluation at the sentence level.
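As a rough illustration of the regression formulation with pseudo-reference features: the feature set, toy data, and scores below are invented for the example, and the paper's actual features are certainly richer.

```python
from sklearn.linear_model import LinearRegression

def ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also occur in the reference."""
    cand = [tuple(candidate[i:i+n]) for i in range(len(candidate) - n + 1)]
    ref = {tuple(reference[i:i+n]) for i in range(len(reference) - n + 1)}
    return sum(g in ref for g in cand) / max(len(cand), 1)

def features(candidate, pseudo_references):
    # One feature per n-gram order, max-pooled over the pseudo-references.
    return [max(ngram_precision(candidate, r, n) for r in pseudo_references)
            for n in (1, 2)]

# Toy training data: tokenized system outputs, pseudo-references produced by
# other MT systems, and human adequacy scores (all invented for illustration).
candidates = [["the", "cat", "sat"], ["a", "dog", "barked", "loud"]]
pseudo_refs = [[["the", "cat", "sat", "down"]], [["the", "dog", "barked"]]]
human_scores = [0.9, 0.6]

X = [features(c, prs) for c, prs in zip(candidates, pseudo_refs)]
model = LinearRegression().fit(X, human_scores)
print(model.predict([features(["the", "cat", "ran"], [["the", "cat", "sat"]])]))
```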

4.
It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users’ privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including both algorithms for distributed constraint optimization (DCOP) and distributed constraint satisfaction (DisCSPs)), and researchers have begun to propose metrics for analysis of privacy loss in such multiagent algorithms. Unfortunately, a general quantitative framework to compare these existing metrics for privacy loss or to identify dimensions along which to construct new metrics is currently lacking. This paper presents three key contributions to address this shortcoming. First, the paper presents VPS (Valuations of Possible States), a general quantitative framework to express, analyze and compare existing metrics of privacy loss. Based on a state-space model, VPS is shown to capture various existing measures of privacy created for specific domains of DisCSPs. The utility of VPS is further illustrated through analysis of privacy loss in DCOP algorithms, when such algorithms are used by personal assistant agents to schedule meetings among users. In addition, VPS helps identify dimensions along which to classify and construct new privacy metrics, and it also supports their quantitative comparison. Second, the paper presents key inference rules that may be used in analysis of privacy loss in DCOP algorithms under different assumptions. Third, detailed experiments based on the VPS-driven analysis lead to the following key results: (i) decentralization by itself does not provide superior protection of privacy in DisCSP/DCOP algorithms when compared with centralization; instead, privacy protection also requires the presence of uncertainty about agents’ knowledge of the constraint graph. (ii) One needs to carefully examine the metrics chosen to measure privacy loss; the qualitative properties of privacy loss, and hence the conclusions that can be drawn about an algorithm, can vary widely based on the metric chosen. This paper should thus serve as a call to arms for further privacy research, particularly within the DisCSP/DCOP arena.
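A schematic sketch of the state-space idea as we read it from the abstract: an observer maintains a set of states it considers possible for another agent, and privacy loss is measured by how much observations shrink that set. The function names, the elimination-based valuation, and the meeting-slot example are illustrative assumptions, not the paper's definitions.

```python
def privacy_loss(initial_states, consistent, observations):
    """Fraction of initially possible states eliminated by observations.

    `consistent(state, obs)` returns True if `state` could have produced `obs`.
    """
    remaining = {s for s in initial_states
                 if all(consistent(s, o) for o in observations)}
    return 1 - len(remaining) / len(initial_states)

# Example: an agent's private meeting-time preference is one of 8 slots; an
# observed proposal rules out every slot before it.
states = set(range(8))
obs = [3]  # a proposal implies the preferred slot is >= the proposal
loss = privacy_loss(states, lambda s, o: s >= o, obs)  # 3/8 of states eliminated
print(loss)
```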

5.
In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven to be correct, there is still a question of how complete the specification is and whether it really covers all the behaviors of the system. The challenge of making the verification process as exhaustive as possible is even more crucial in simulation-based verification, where the infeasible task of checking all input sequences is replaced by checking a test suite consisting of a finite subset of them. It is very important to measure the exhaustiveness of the test suite, and indeed there has been extensive research in the simulation-based verification community on coverage metrics, which provide such a measure. It turns out that no single measure can be absolute, leading to the development of numerous coverage metrics whose usage is determined by industrial verification methodologies. On the other hand, prior research of coverage in formal verification has focused solely on state-based coverage. In this paper we adapt the work done on coverage in simulation-based verification to the formal-verification setting in order to obtain new coverage metrics. Thus, for each of the metrics used in simulation-based verification, we present a corresponding metric that is suitable for the setting of formal verification and describe an algorithmic way to check it.

6.
This work addresses the problem of detecting novel sentences from an incoming stream of text data by studying the performance of different novelty metrics and proposing a mixed metric that is able to adapt to different performance requirements. Existing novelty metrics can be divided into two types, symmetric and asymmetric, based on whether the ordering of sentences is taken into account. After a comparative study of several different novelty metrics, we observe complementary behavior in the two types of metrics. This finding motivates a new framework of novelty measurement, i.e. the mixture of both symmetric and asymmetric metrics. This new framework of novelty measurement performs well under performance requirements ranging from high precision to high recall, as well as for data with different percentages of novel sentences. Because it does not require any prior information, the new metric is very suitable for real-time knowledge base applications such as novelty mining systems where no training data is available beforehand.
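A minimal sketch of mixing a symmetric and an asymmetric novelty metric, in the spirit of the abstract; the specific metrics chosen here (cosine distance and a "new word" ratio) and the mixing weight are our illustrative assumptions.

```python
import math
from collections import Counter

def cosine_distance(a, b):  # symmetric: sentence order does not matter
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return 1 - dot / (na * nb) if na and nb else 1.0

def new_word_ratio(sentence, history):  # asymmetric: depends on what came before
    seen = {w for s in history for w in s}
    return sum(w not in seen for w in sentence) / max(len(sentence), 1)

def mixed_novelty(sentence, history, alpha=0.5):
    sym = min(cosine_distance(sentence, h) for h in history) if history else 1.0
    asym = new_word_ratio(sentence, history)
    return alpha * sym + (1 - alpha) * asym

sent = "new vaccine approved today".split()
hist = ["vaccine trials started".split()]
print(mixed_novelty(sent, hist))
```

Tuning alpha toward the symmetric or asymmetric component is one way such a mixture could trade precision against recall.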

7.
Software metrics are used to measure different attributes of software. To practically measure software attributes using these metrics, metric thresholds are needed. Many researchers have attempted to identify these thresholds based on personal experience. However, the resulting experience-based thresholds cannot be generalized due to the variability of personal experiences and the subjectivity of opinions. The goal of this paper is to propose an automated clustering framework based on the expectation maximization (EM) algorithm where clusters are generated using a simplified 3-metric set (LOC, LCOM, and CBO). Given these clusters, different threshold levels for software metrics are systematically determined such that each threshold reflects a specific level of software quality. The proposed framework comprises two major steps: the clustering step, where the software quality historical dataset is decomposed into a fixed set of clusters using the EM algorithm, and the threshold extraction step, where thresholds, specific to each software metric in the resulting clusters, are estimated using statistical measures such as the mean (μ) and the standard deviation (σ) of each software metric in each cluster. The paper's findings highlight the capability of EM-based clustering, using a minimum metric set, to group software quality datasets according to different quality levels.
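A compact sketch of the two-step framework described above, using scikit-learn's EM-based GaussianMixture. The stand-in data, the number of clusters, and the μ + σ threshold rule are illustrative assumptions consistent with the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for a quality-history dataset with columns (LOC, LCOM, CBO).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=3.0, sigma=1.0, size=(200, 3))

# Step 1: clustering -- decompose the dataset into k quality clusters via EM.
gm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gm.predict(X)

# Step 2: threshold extraction -- per cluster, per metric, use mean + std dev.
for k in range(gm.n_components):
    cluster = X[labels == k]
    mu, sigma = cluster.mean(axis=0), cluster.std(axis=0)
    for name, m, s in zip(["LOC", "LCOM", "CBO"], mu, sigma):
        print(f"cluster {k}: {name} threshold = {m + s:.1f}")
```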

8.
Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that constitute a way for solving many common Feature Envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel Entity Placement metric that quantifies how well entities have been placed in system classes. The proposed methodology can be regarded as a semi-automatic approach since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects.
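A toy sketch of the distance notion underlying such approaches: take the distance of a method from a class as 1 minus the Jaccard similarity between the members the method accesses and the members of the class. This is our reading of the general idea, not the paper's exact definition or its precondition checks.

```python
def jaccard_distance(entity_uses: set, class_members: set) -> float:
    union = entity_uses | class_members
    return 1 - len(entity_uses & class_members) / len(union) if union else 0.0

def suggest_move_method(method_uses, home_class, other_classes):
    """Suggest the class closest to the method, if closer than its home class."""
    home = jaccard_distance(method_uses, home_class)
    best_cls, best = min(((c, jaccard_distance(method_uses, members))
                          for c, members in other_classes.items()),
                         key=lambda t: t[1])
    return best_cls if best < home else None

# Example: a method that mostly touches members of class B ("Feature Envy").
uses = {"b.total", "b.items", "a.log"}
print(suggest_move_method(uses, {"a.log", "a.cfg"},
                          {"B": {"b.total", "b.items", "b.tax"}}))  # -> "B"
```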

9.
Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU, METEOR and the related NIST metric, are becoming increasingly important in MT research and development. This paper presents a significance test-driven comparison of n-gram-based automatic MT evaluation metrics. Statistical significance tests use bootstrapping methods to estimate the reliability of automatic machine translation evaluations. Based on this reliability estimation, we study the characteristics of different MT evaluation metrics and how to construct reliable and efficient evaluation suites.
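A minimal sketch of bootstrap resampling for comparing two MT systems under an automatic metric, in the spirit described above; `metric` stands for any corpus-level scorer (e.g. a BLEU implementation), and all names are ours.

```python
import random

def paired_bootstrap(metric, sys_a, sys_b, refs, n_samples=1000, seed=0):
    """Estimate how often system A beats system B on resampled test sets."""
    rng = random.Random(seed)
    idx = range(len(refs))
    wins = 0
    for _ in range(n_samples):
        sample = [rng.choice(idx) for _ in idx]  # resample sentences w/ replacement
        a = metric([sys_a[i] for i in sample], [refs[i] for i in sample])
        b = metric([sys_b[i] for i in sample], [refs[i] for i in sample])
        wins += a > b
    return wins / n_samples  # ~confidence that A outscores B under this metric
```

A win rate near 0.5 signals that the observed score difference is not reliable at the given test-set size, which is the kind of conclusion such significance tests support.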

10.
This article investigates how software metrics can be defined as part of a metasystem approach to the development of software specifications environments. Environment transformation language (ETL) is used to define metric computations. In our approach, metrics applicable to a particular specification environment (e.g., a data flow diagram environment) are defined in conjunction with a formal definition of that environment as given in environment definition language (EDL). The goal is to support a metric-driven approach to software development by incorporating, with relative ease, the metric computation into the software specification environment. As representative examples, metrics for data flow diagrams, structure charts, and resource flow graphs are described and analyzed. We demonstrate how an analyst, interacting with the system, can use metrics to improve the quality of the deliverables.

11.
Similarity measurements between 3D objects and 2D images are useful for the tasks of object recognition and classification. The authors distinguish between two types of similarity metrics: metrics computed in image-space (image metrics) and metrics computed in transformation-space (transformation metrics). Existing methods typically use image metrics; namely, metrics that measure the difference in the image between the observed image and the nearest view of the object. An example of such a measure is the Euclidean distance between feature points in the image and their corresponding points in the nearest view. (This measure can be computed by solving the exterior orientation calibration problem.) In this paper the authors introduce a different type of metric: transformation metrics. These metrics penalize for the deformations applied to the object to produce the observed image. In particular, the authors define a transformation metric that optimally penalizes for “affine deformations” under weak-perspective. A closed-form solution, together with the nearest view according to this metric, are derived. The metric is shown to be equivalent to the Euclidean image metric, in the sense that they bound each other from both above and below. It therefore provides an easy-to-use closed-form approximation for the commonly-used least-squares distance between models and images. The authors demonstrate an image understanding application, where the true dimensions of a photographed battery charger are estimated by minimizing the transformation metric.

12.
Fault diagnosis methods based on graph neural networks usually determine the similarity between samples according to a metric and then construct the graph topology from it. However, a single metric may not measure the similarity between data samples accurately, and thus may fail to characterize the relationships between samples; the choice of metric therefore greatly affects the diagnostic performance of the graph neural network. To address the problem that a single metric cannot accurately characterize the correlations between data samples, this paper proposes Multi-GAT, a fault diagnosis model that constructs the graph from multiple metrics: the results of three metrics are combined to judge the strength of the correlations between data samples. The paper also improves the scoring function of the graph attention network so that it determines the similarity between data samples more accurately according to the strength of their correlations. Experiments on the paper's benchmark datasets show that Multi-GAT improves diagnostic accuracy and exhibits good stability.
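An illustrative sketch, entirely our own construction, of one way to build a graph adjacency from several similarity metrics by majority vote: an edge is kept only if enough metrics agree that two samples are neighbours. The three metrics, the k-NN rule, and the vote threshold are assumptions, not the paper's method.

```python
import numpy as np

def multi_metric_adjacency(X, metrics, k=5, min_votes=2):
    """X: (n, d) samples; metrics: list of pairwise-distance functions."""
    n = len(X)
    votes = np.zeros((n, n), dtype=int)
    for dist in metrics:
        D = np.array([[dist(X[i], X[j]) for j in range(n)] for i in range(n)])
        for i in range(n):
            for j in np.argsort(D[i])[1:k+1]:  # k nearest, skipping self
                votes[i, j] += 1
    return (votes >= min_votes).astype(int)

metrics = [
    lambda a, b: np.linalg.norm(a - b),  # Euclidean
    lambda a, b: np.abs(a - b).sum(),    # Manhattan
    lambda a, b: 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12),  # cosine
]
A = multi_metric_adjacency(np.random.default_rng(0).normal(size=(20, 8)), metrics)
print(A.sum(), "edges kept")
```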

13.
Performance Evaluation, 2006, 63(9-10): 988-1004
Web prefetching techniques have been pointed out to be especially important to reduce perceived web latencies and, consequently, a considerable amount of work can be found in the open literature. In general, however, it is not possible to make a fair comparison among the proposed prefetching techniques, for three main reasons: (i) the underlying baseline system where prefetching is applied differs widely among the studies; (ii) the workload used in the presented experiments is not the same; (iii) different key performance metrics are used to evaluate their benefits. This paper focuses on the third reason. Our main concern is to identify which are the meaningful indexes when studying the performance of different prefetching techniques. For this purpose, we propose a taxonomy based on three categories, which permits us to identify analogies and differences among the indexes commonly used. In order to check, in a more formal way, the relation between them, we run experiments and statistically estimate the correlation among a representative subset of those metrics. The statistical results help us to suggest which indexes should be selected when performing evaluation studies, depending on the different elements in the considered web architecture. The choice of the appropriate key metric is of paramount importance for a correct and representative study. As our experimental results show, depending on the metric used to check the system performance, results can not only vary widely but also lead to opposite conclusions.
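A small sketch of the statistical step described above: estimating pairwise correlation among candidate performance indexes over a set of experiment runs. The index names and the stand-in data are illustrative; `pearsonr` and `spearmanr` are standard SciPy functions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
runs = {  # one value per experiment for each candidate index (stand-in data)
    "latency_reduction": rng.normal(size=50),
    "hit_ratio": rng.normal(size=50),
    "byte_hit_ratio": rng.normal(size=50),
}
names = list(runs)
for i, a in enumerate(names):
    for b in names[i+1:]:
        r, _ = pearsonr(runs[a], runs[b])
        rho, _ = spearmanr(runs[a], runs[b])
        print(f"{a} vs {b}: pearson={r:+.2f} spearman={rho:+.2f}")
```

Highly correlated indexes are redundant, so a study can report one representative per correlated group, which is the kind of recommendation the paper derives.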

14.
Context: Software quality attributes are assessed by employing appropriate metrics. However, the choice of such metrics is not always obvious and is further complicated by the multitude of available metrics. To assist metrics selection, several properties have been proposed. However, although metrics are often used to assess successive software versions, there is no property that assesses their ability to capture structural changes along evolution. Objective: We introduce a property, Software Metric Fluctuation (SMF), which quantifies the degree to which a metric score varies due to changes occurring between successive versions of a system. Regarding SMF, metrics can be characterized as sensitive (changes induce high variation in the metric score) or stable (changes induce low variation in the metric score). Method: The SMF property has been evaluated by: (a) a case study on 20 OSS projects to assess the ability of SMF to characterize different metrics differently, and (b) a case study on 10 software engineers to assess SMF's usefulness in the metric selection process. Results: The results of the first case study suggest that different metrics that quantify the same quality attributes differ in their fluctuation. We also provide evidence that an additional factor related to metrics’ fluctuation is the function used for aggregating a metric from the micro to the macro level. In addition, the outcome of the second case study suggests that SMF is capable of helping practitioners in metric selection, since: (a) different practitioners have different perceptions of metric fluctuation, and (b) this perception is less accurate than the systematic approach that SMF offers. Conclusions: SMF is a useful metric property that can improve the accuracy of metric selection. Based on SMF, we can differentiate metrics based on their degree of fluctuation. Such results can provide input to researchers and practitioners in their metric selection processes.
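A sketch of one way to quantify metric fluctuation across successive versions, consistent with the idea above; using the coefficient of variation of per-version scores is our illustrative choice, not the paper's formula.

```python
import statistics

def fluctuation(scores_per_version):
    """Higher values = more sensitive metric; lower values = more stable."""
    mu = statistics.mean(scores_per_version)
    return statistics.stdev(scores_per_version) / mu if mu else 0.0

cbo_per_version = [10.2, 10.4, 10.3, 10.5]   # stable across releases
lcom_per_version = [30.0, 55.0, 41.0, 70.0]  # sensitive to changes
print(fluctuation(cbo_per_version), fluctuation(lcom_per_version))
```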

15.
Software development is fundamentally based on cognitive processes. Our motivating hypothesis is that amounts of various kinds of information in software artifacts may have useful statistical relationships with software-engineering attributes. This paper proposes measures of size, complexity and coupling in terms of the amount of information, building on formal definitions of these software-metric families proposed by Briand, Morasca, and Basili. Ordinary graphs represent relationships between pairs of nodes. We extend prior work with ordinary graphs to hypergraphs representing relationships among sets of nodes. Some software engineering abstractions, such as set-use relations for public variables, are better represented as hypergraphs than ordinary (binary) graphs. Traditional software metrics are based on counting. In contrast, we adopt information theory as the basis for measurement, because the design decisions embodied by software are information. This paper proposes software metrics of size, complexity, and coupling based on information in the pattern of incident hyperedges. For comparison, we also define corresponding counting-based metrics. Three exploratory case studies illustrate some of the distinctive features of the proposed metrics. The case studies found that information theory-based software metrics make distinctions that counting metrics do not, which may be relevant to software engineering quality and process. We also identify situations when information theory-based metrics are simply proportional to corresponding counting metrics.
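A toy sketch of the flavour of an information-theoretic measure over a hypergraph: the entropy of the distribution of node "incidence patterns" (which hyperedges touch each node). This is our own illustration of the general idea, not the paper's exact definitions.

```python
import math
from collections import Counter

def pattern_entropy(hyperedges, nodes):
    """hyperedges: list of frozensets of nodes; returns entropy in bits."""
    patterns = Counter(frozenset(i for i, e in enumerate(hyperedges) if n in e)
                       for n in nodes)
    total = sum(patterns.values())
    return -sum((c / total) * math.log2(c / total) for c in patterns.values())

# Set-use relations for public variables as hyperedges over modules:
edges = [frozenset({"m1", "m2", "m3"}), frozenset({"m2", "m4"})]
print(pattern_entropy(edges, ["m1", "m2", "m3", "m4"]))
```

A counting-based analogue would simply report the number of hyperedges or incidences; the entropy version additionally distinguishes how unevenly the incidences are distributed, which matches the abstract's claim that information-based metrics make distinctions counting metrics do not.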

16.
This paper describes a number of efficient algorithms for morphological operations which use discs defined by chamfer distances as structuring elements. It presents an extension to previous work on extending metrics (such as the p-q-metrics). Theoretical results and algorithms are presented for p-q-r-metrics, which are not extending. These metrics can approximate the Euclidean metric close enough for most practical situations. The algorithms are based on an analysis of the structure of shortest paths in the p-q-r-metric and of the set of values this metric can assume. Efficient algorithms are presented for the medial axis and the opening transform. The opening transform algorithm is two orders of magnitude faster than a more straightforward algorithm. This research was supported by the Foundation for Computer Science in The Netherlands (SION) with financial support from The Netherlands Organisation for Scientific Research (NWO). Part of it was performed while the author was a guest at the Technical University of Vienna in the framework of the Erasmus exchange program of the European Community.

17.
Multicast multi-layered communications must implement efficient control algorithms to address undesirable network behaviors. This paper proposes two multi-metric algorithms for computing the rates of the video layers and improving the global video quality of a multicast session. In fact, we show that a single-metric approach may degrade some network parameters without obtaining any substantial improvements. Our first algorithm combines three metrics and a set of weights in such a way that one metric can be prioritized. This leads to an improved quality of multicast sessions, as we show through a number of experiments. In networks where the available resources are highly variable, however, the stability of the video quality is compromised if absolute values of the metrics are adopted. We then propose a second algorithm that uses the relative values of the metrics on a per-entry basis. Computation of the global quality of the multicast session is based on a differential matrix that stores the metrics for each receiver. This scheme takes into account the dynamics of the available resources and the heterogeneity of receivers. The great benefit of this approach is that the global video quality is always improved on every iteration of the algorithm.
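A minimal sketch of the prioritized weighted combination described for the first algorithm: normalize each metric, then combine with weights that favour one of them. The metric names, weights, and candidate layer rates below are illustrative assumptions.

```python
def combined_score(metrics: dict, weights: dict) -> float:
    """metrics: name -> value in [0, 1], already normalized, higher = better."""
    total = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in metrics) / total

candidate_rates = {  # hypothetical per-layer rate options (kbps) and metrics
    64:  {"bandwidth_fit": 0.9, "loss": 0.8, "delay": 0.6},
    128: {"bandwidth_fit": 0.7, "loss": 0.9, "delay": 0.8},
}
weights = {"bandwidth_fit": 0.5, "loss": 0.3, "delay": 0.2}  # prioritize bandwidth
best = max(candidate_rates, key=lambda r: combined_score(candidate_rates[r], weights))
print(best)
```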

18.
Graph layout algorithms typically conform to one or more aesthetic criteria (e.g. minimizing the number of bends, maximizing orthogonality). Determining the extent to which a graph drawing conforms to an aesthetic criterion tends to be done informally, and varies between different algorithms. This paper presents formal metrics for measuring the aesthetic presence in a graph drawing for seven common aesthetic criteria, applicable to any graph drawing of any size. The metrics are useful for determining the aesthetic quality of a given graph drawing, or for defining a cost function for genetic algorithms or simulated annealing programs. The metrics are continuous, so that aesthetic quality is not stated as a binary conformance decision (i.e. the drawing either conforms to the aesthetic or not), but can be stated as the extent of aesthetic conformance using a number between 0 and 1. The paper presents the seven metric formulae. The application of these metrics is demonstrated through the aesthetic analysis of example graph drawings produced by common layout algorithms.
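A sketch of one continuous aesthetic metric in the spirit described above: crossing quality as 1 minus the number of actual edge crossings over an upper bound on possible crossings, yielding a value in [0, 1]. The normalization is our reading of the general scheme, not the paper's exact formula, and the crossing test ignores degenerate collinear cases.

```python
from itertools import combinations

def segments_cross(p1, p2, p3, p4):
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
            and ccw(p1, p2, p3) != ccw(p1, p2, p4))

def crossing_metric(pos, edges):
    """pos: node -> (x, y); edges: list of node pairs; 1.0 = no crossings."""
    pairs = [(e, f) for e, f in combinations(edges, 2)
             if not set(e) & set(f)]  # edges sharing a node cannot properly cross
    if not pairs:
        return 1.0
    crossings = sum(segments_cross(pos[a], pos[b], pos[c], pos[d])
                    for (a, b), (c, d) in pairs)
    return 1 - crossings / len(pairs)

pos = {"a": (0, 0), "b": (2, 2), "c": (0, 2), "d": (2, 0)}
print(crossing_metric(pos, [("a", "b"), ("c", "d")]))  # one crossing -> 0.0
```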

19.
Discrete surface Ricci flow
This work introduces a unified framework for discrete surface Ricci flow algorithms, including spherical, Euclidean, and hyperbolic Ricci flows, which can design Riemannian metrics on surfaces with arbitrary topologies by user-defined Gaussian curvatures. Furthermore, the target metrics are conformal (angle-preserving) to the original metrics. A Ricci flow conformally deforms the Riemannian metric on a surface according to its induced curvature, such that the curvature evolves like a heat diffusion process. Eventually, the curvature becomes the user-defined curvature. Discrete Ricci flow algorithms are based on a variational framework. Given a mesh, all possible metrics form a linear space, and all possible curvatures form a convex polytope. The Ricci energy is defined on the metric space, which reaches its minimum at the desired metric. The Ricci flow is the negative gradient flow of the Ricci energy. Furthermore, the Ricci energy can be optimized using Newton's method more efficiently. Discrete Ricci flow algorithms are rigorous and efficient. Our experimental results demonstrate the efficiency, accuracy and flexibility of the algorithms. They have the potential for a wide range of applications in graphics, geometric modeling, and medical imaging. We demonstrate their practical values by global surface parameterizations.
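As a condensed reading of the abstract (our summary, with notation assumed: $u_i$ the discrete conformal factor at vertex $i$, $K_i$ its current Gaussian curvature, and $\bar{K}_i$ the user-defined target curvature), the flow and the Ricci energy it descends take the standard forms:

```latex
\frac{du_i(t)}{dt} = \bar{K}_i - K_i(t),
\qquad
E(\mathbf{u}) = \int_{\mathbf{u}_0}^{\mathbf{u}} \sum_i \bigl(\bar{K}_i - K_i\bigr)\, du_i
```

Per the abstract, the flow drives each $K_i$ toward $\bar{K}_i$ like heat diffusion, and the energy $E$ can be minimized more efficiently with Newton's method.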

20.
High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.
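A rough sketch of a conceptual-cohesion computation in the C3 spirit: represent each method by the text of its identifiers and comments, and take the average pairwise cosine similarity of a class's methods, clamped at zero. The paper builds on latent semantic indexing; plain TF-IDF here is our simplifying assumption.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def conceptual_cohesion(method_texts):
    """Average pairwise textual similarity of a class's methods, in [0, 1]."""
    if len(method_texts) < 2:
        return 1.0
    vectors = TfidfVectorizer().fit_transform(method_texts)
    sims = [cosine_similarity(vectors[i], vectors[j])[0, 0]
            for i, j in combinations(range(len(method_texts)), 2)]
    return max(sum(sims) / len(sims), 0.0)

methods = [  # identifier/comment text extracted per method (invented example)
    "open file read buffer parse header",
    "read file stream buffer close",
    "compute tax total price discount",  # conceptually unrelated: lowers cohesion
]
print(conceptual_cohesion(methods))
```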

