20 similar documents found.
1.
《IEEE Transactions on Software Engineering》1987,(6):697-708
Software metrics are computed for the purpose of evaluating certain characteristics of the software developed. A Fortran static source code analyzer, FORTRANAL, was developed to study 31 metrics, including a new hybrid metric introduced in this paper, and applied to a database of 255 programs, all of which were student assignments. Comparisons among these metrics are performed. Their cross-correlation confirms the internal consistency of some of the metrics that belong to the same class. To remedy the incompleteness of most of these metrics, the proposed metric incorporates context sensitivity to structural attributes extracted from a flow graph. It is also concluded that many volume metrics have similar performance, while some control metrics surprisingly correlate well with typical volume metrics in the test samples used. A flexible class of hybrid metrics can incorporate both volume and control attributes in assessing software complexity.
2.
Howard A. Rubin 《Information Systems Management》1997,14(2):7-14
Successful outsourcing experiences are not based on the levying of penalties for failure but on the accrual of expected benefits by both parties involved in the outsourcing agreement. For these benefits to accrue, IS managers must implement a proactive, forward-looking oversight mechanism designed to ensure that the outsourcing provider operates in a performance zone that provides the expected business value. Outsourcing oversight metrics—key performance-monitoring parameters built into the outsourcing agreement and assessed on an ongoing basis—form the basis of such a mechanism.
3.
Information about the distribution of metric data is important for understanding and using object-oriented software metrics. The distributions of object-oriented size metrics, coupling metrics, and even inheritance metrics have been studied, but, apart from the Lack of Cohesion in Methods (LCOM) metric, the distributions of cohesion metrics have received little attention. Existing empirical studies show that LCOM is not a good cohesion metric, so it is necessary to investigate the distributions of other cohesion metrics. This paper reports an empirical study of the distributions of 17 metrics in total, including lack-of-cohesion metrics, connectivity-based cohesion metrics, and similarity-based cohesion metrics, across 112 open-source Java projects. For each metric, the data of each project were fitted with power-law and lognormal distributions, and the fitting results were analyzed using meta-analysis. The results show that unnormalized cohesion metrics can be fitted by lognormal and power-law distributions, but the normalized similarity-based cohesion metrics (including CC, LSCC, SCOM, and SCC) can be fitted by a lognormal distribution only after excluding special classes with at most one method or with no fields, while for the connectivity-based cohesion metrics (including TCC, LCC, DCD, and DCI), only the data of the corresponding unnormalized versions follow a lognormal or power-law distribution. This empirical study helps researchers and practitioners better understand and use cohesion metrics, and in particular shows how existing methods can be used to determine thresholds for cohesion metrics.
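The per-project fitting step lends itself to a short sketch. Below is a minimal Python example that fits one metric's values with a lognormal (maximum likelihood via scipy) and a continuous power law (the standard Clauset-style MLE), then compares log-likelihoods. The synthetic data, function names, and the choice of xmin are illustrative assumptions, not taken from the study.

```python
# Minimal sketch: fit a lognormal and a continuous power law to one
# metric's per-project values by maximum likelihood. Synthetic data.
import numpy as np
from scipy import stats

def fit_lognormal(x):
    """MLE lognormal fit (location fixed at 0); returns params and log-likelihood."""
    shape, loc, scale = stats.lognorm.fit(x, floc=0)
    ll = stats.lognorm.logpdf(x, shape, loc=loc, scale=scale).sum()
    return (shape, scale), ll

def fit_power_law(x, xmin=None):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    x = np.asarray(x, dtype=float)
    xmin = x.min() if xmin is None else xmin
    tail = x[x >= xmin]
    n = len(tail)
    alpha = 1.0 + n / np.log(tail / xmin).sum()
    # Log-likelihood of the continuous power-law density on [xmin, inf).
    ll = n * np.log((alpha - 1.0) / xmin) - alpha * np.log(tail / xmin).sum()
    return alpha, ll

# Synthetic stand-in for one project's (strictly positive) cohesion values.
values = stats.lognorm.rvs(0.8, scale=5.0, size=1000, random_state=0)
params_ln, ll_ln = fit_lognormal(values)
alpha, ll_pl = fit_power_law(values)
print(f"lognormal LL = {ll_ln:.1f}; power law alpha = {alpha:.2f}, LL = {ll_pl:.1f}")
```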
4.
Mark Klein 《Computer Supported Cooperative Work (CSCW)》2012,21(4-5):449-473
Humanity now finds itself faced with a range of highly complex and controversial challenges—such as climate change, the spread of disease, international security, scientific collaborations, product development, and so on—that call upon us to bring together large numbers of experts and stakeholders to deliberate collectively on a global scale. Collocated meetings can, however, be impractically expensive, severely limit the concurrency and thus breadth of interaction, and are prone to serious dysfunctions such as polarization and hidden profiles. Social media such as email, blogs, wikis, chat rooms, and web forums provide unprecedented opportunities for interacting on a massive scale, but have yet to realize their potential for helping people deliberate effectively, typically generating poorly organized, unsystematic, and highly redundant contributions of widely varying quality. Large-scale argumentation systems represent a promising approach to addressing these challenges, by virtue of providing a simple systematic structure that radically reduces redundancy and encourages clarity. They do, however, raise an important challenge. How can we ensure that the attention of the deliberation participants is drawn, especially in large complex argument maps, to where it can best serve the goals of the deliberation? How can users, for example, find the issues they can best contribute to, assess whether some intervention is needed, or identify the results that are mature and ready to "harvest"? Can we enable, for large-scale distributed discussions, the ready understanding that participants typically have of the progress and needs of small-scale, collocated discussions? This paper addresses these important questions, discussing (1) the strengths and limitations of current deliberation technologies, (2) how argumentation technology can help address these limitations, and (3) how we can use attention-mediation metrics to enhance the effectiveness of large-scale argumentation-based deliberations.
5.
《Information Security Journal: A Global Perspective》2013,22(3):57-67
Malware is becoming more and more aggressive, and new techniques are emerging that allow malicious code to evade detection by antiviruses. Metamorphic malware is a particularly insidious kind of virus that changes its form at each infection. In this article, a technique for detecting metamorphic viruses is proposed that is based on identifying specific features of the assembly code, such as the instructions that change the contents of the registers, the instructions that change the control flow, and the potential code fragmentation. Such features have been derived from the analysis of a large dataset of malware. The experimentation suggests that the proposed technique achieves very high precision (over 97%) in recognizing metamorphic malware, and also allows for distinguishing among different families of malware.
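As a rough illustration of the feature-extraction idea, the sketch below counts register-modifying and control-flow instructions in a textual disassembly and normalizes the counts. The two mnemonic sets are small illustrative subsets chosen here, not the feature definitions used in the article.

```python
# Toy sketch: extract instruction-category frequencies from a disassembly
# listing. The mnemonic sets are illustrative subsets, not the article's.
REGISTER_WRITES = {"mov", "add", "sub", "xor", "lea", "inc", "dec", "pop"}
CONTROL_FLOW = {"jmp", "je", "jne", "jz", "jnz", "call", "ret", "loop"}

def extract_features(disassembly: str) -> dict:
    reg = flow = total = 0
    for line in disassembly.splitlines():
        parts = line.split()
        if not parts:
            continue
        mnemonic = parts[0].lower()
        total += 1
        reg += mnemonic in REGISTER_WRITES
        flow += mnemonic in CONTROL_FLOW
    # Ratios make samples of different length comparable.
    return {"reg_write_ratio": reg / max(total, 1),
            "control_flow_ratio": flow / max(total, 1)}

print(extract_features("mov eax, 1\nxor ebx, ebx\njnz 0x401000\nret"))
```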
6.
Distinguishing Computer-Generated Images from Natural Images Using Channel and Pixel Correlation
Journal of Computer Science and Technology - With the recent tremendous advances of computer graphics rendering and image editing technologies, computer-generated fake images, which in general do...
7.
Modular Robot Motion Planning Using Similarity Metrics
In order for a modular self-reconfigurable robotic system to autonomously change from its current state to a desired one, it is critical to have a cost function (or metric) that reflects the effort required to reconfigure. A reconfiguration sequence can consist of single module motions, or the motion of a branch of modules. For single module motions, the minimization of metrics on the set of sets of module center locations serves as the driving force for reconfiguration. For branch motions, the question becomes which branches should be moved so as to minimize overall effort. Another way to view this is as a pattern matching problem in which the desired configuration is viewed as a void, and we seek branch motions that best fill the void. A precise definition of goodness of fit is therefore required. In this paper, we address the fundamental question of how closely geometric figures can be made to match under a given group of transformations (e.g., rigid-body motions), and what it means to bisect two shapes. We illustrate these ideas in the context of applications in modular robot motion planning.
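One concrete way to score how well two configurations match under rigid-body motions is point-set registration. The sketch below uses the standard Kabsch algorithm to find the best rigid alignment of two equal-size sets of module centers and reports the residual RMSD as a match cost. This is a generic stand-in under simplifying assumptions (known point correspondences), not the paper's group-theoretic formulation, and all names are invented.

```python
# Sketch: cost of matching two point sets under rigid-body motions via the
# Kabsch algorithm (optimal rotation + translation, then residual RMSD).
import numpy as np

def rigid_match_cost(P, Q):
    """RMSD between corresponding point sets P, Q (n x 3) after optimal rigid alignment."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)      # remove translation
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)        # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt        # optimal rotation (applied as Pc @ R)
    return np.sqrt(np.mean(np.sum((Pc @ R - Qc) ** 2, axis=1)))

cube = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
moved = cube @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 5.0
print(rigid_match_cost(cube, moved))  # ~0: identical shapes up to a rigid motion
```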
8.
Giancarlo Succi, Witold Pedrycz, Snezana Djokic, Paolo Zuliani, Barbara Russo 《Empirical Software Engineering》2005,10(1):81-104
The object-oriented metrics suite proposed by Chidamber and Kemerer (CK) is a measurement approach towards improved object-oriented design and development practices. However, existing studies show traces of collinearity between some of the metrics and low ranges of other metrics, two facts that may endanger the validity of models based on the CK suite. As high correlation may be an indicator of collinearity, in this paper we empirically determine to what extent high correlations and low ranges might be expected among CK metrics. To draw conclusions as general as possible, we extract the CK metrics from a large data set (200 public domain projects) and apply statistical meta-analysis techniques to strengthen the validity of our results. Homogeneously across the projects, we found moderate (0.50) to high (>0.80) correlations between some of the metrics, and low ranges of other metrics. The results of this empirical analysis supply researchers and practitioners with three main recommendations: a) avoid using in prediction systems those CK metrics that correlate at more than 0.80; b) test for collinearity the metrics that present moderate correlations (between 0.50 and 0.60); c) avoid using as the response in continuous parametric regression analysis the metrics presenting low variance. This suggests that a prediction system may not be based on the whole CK metrics suite, but only on a subset consisting of those metrics that present neither high correlation nor low ranges.
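The per-project screening step can be sketched compactly: compute pairwise correlations among CK metrics, flag pairs above the 0.80 threshold, and flag low-variance metrics. The column names, synthetic data, and the variance cutoff below are illustrative; the study aggregates such per-project results with meta-analysis.

```python
# Sketch of a collinearity/low-range screen over CK metrics for one project.
# Data are synthetic; RFC is made deliberately collinear with WMC.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
wmc = rng.lognormal(2.0, 0.7, 300)
df = pd.DataFrame({
    "WMC": wmc,
    "RFC": wmc * 2.5 + rng.normal(0, 3, 300),   # collinear with WMC
    "CBO": rng.lognormal(1.2, 0.6, 300),
    "NOC": rng.poisson(0.2, 300),               # low-range metric
})

corr = df.corr(method="spearman")
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > 0.80:
            print(f"high correlation: {a} vs {b} (rho = {corr.loc[a, b]:.2f})")
for col in df.columns:
    if df[col].var() < 1.0:                     # illustrative variance cutoff
        print(f"low variance: {col} (var = {df[col].var():.2f})")
```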
9.
This paper describes an approach to detecting distributed denial of service (DDoS) attacks that is based on fundamentals of Information Theory, specifically Kolmogorov Complexity. A theorem derived using principles of Kolmogorov Complexity states that the joint complexity measure of random strings is lower than the sum of the complexities of the individual strings when the strings exhibit some correlation. Furthermore, the joint complexity measure varies inversely with the amount of correlation. We propose a distributed active network-based algorithm that exploits this property to correlate arbitrary traffic flows in the network to detect possible denial-of-service attacks. One of the strengths of this algorithm is that it does not require special filtering rules and hence it can be used to detect any type of DDoS attack. We implement and investigate the performance of the algorithm in an active network. Our results show that DDoS attacks can be detected in a manner that is not sensitive to legitimate background traffic. This research was funded by the Defense Advanced Research Projects Agency (DARPA) under contract F30602-01-C-0182 and managed by the Air Force Research Laboratory (AFRL) Information Directorate.
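Since Kolmogorov complexity is uncomputable, any practical system must approximate it, and compressed size is the usual upper-bound proxy. The sketch below uses zlib in that role to test whether two traffic samples share structure, i.e., whether C(xy) is noticeably smaller than C(x) + C(y). The scoring function and thresholding are hypothetical stand-ins, not the algorithm from the paper.

```python
# Sketch: compression as a Kolmogorov-complexity proxy. Correlated flows
# compress better jointly than separately.
import os
import zlib

def C(data: bytes) -> int:
    """Compressed size as an upper-bound estimate of Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def correlation_score(x: bytes, y: bytes) -> float:
    """Positive values indicate shared structure between the two flows."""
    return (C(x) + C(y) - C(x + y)) / min(C(x), C(y))

attack_a = b"GET /index.html HTTP/1.1\r\nHost: victim\r\n" * 50
attack_b = b"GET /index.html HTTP/1.1\r\nHost: victim\r\n" * 50
random_flow = os.urandom(2000)

print(correlation_score(attack_a, attack_b))     # high: flows share structure
print(correlation_score(attack_a, random_flow))  # near zero: unrelated flows
```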
10.
A study is presented in which it is determined whether software product metrics gathered statically from designs or source code may be helpful in predicting the number of run-time faults that will be encountered during execution. Metrics examined include intermodule metrics such as fan-in and fan-out, as well as intramodule metrics such as cyclomatic complexity and size. Our study indicates that it may be possible, with certain classes of software products, to predict the run-time behaviour using well-known static intermodule metrics.
11.
Olague H.M., Etzkorn L.H., Gholston S., Quattlebaum S. 《IEEE Transactions on Software Engineering》2007,33(6):402-419
Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.
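A minimal sketch of the kind of model being validated here: a binary logistic regression from class-level metrics to a fault-prone flag. The three-metric feature set and the data are synthetic stand-ins for the Rhino defect data, assumed purely for illustration.

```python
# Sketch: binary logistic regression predicting fault-prone classes from
# class-level metrics. Synthetic data; feature columns mimic WMC, CBO, DIT.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.lognormal(2.0, 0.8, n),   # WMC-like
    rng.lognormal(1.5, 0.6, n),   # CBO-like
    rng.integers(1, 6, n),        # DIT-like
])
# Synthetic ground truth: bigger, more coupled classes fail more often.
p = 1.0 / (1.0 + np.exp(-(0.05 * X[:, 0] + 0.2 * X[:, 1] - 3.0)))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```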
12.
It has long been accepted that construction of holograms by computer simulation of Maxwell's equations of wave propagation is extremely difficult and expensive. A noteworthy and considerably less expensive departure from computer hologram generation in the strict sense is the binary Fourier hologram technique [1], but it still leaves much to be desired. These and other problems have, for most practical purposes, forced computer holography into the category of a novelty and an educational exercise [2].
13.
Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults
《IEEE Transactions on Software Engineering》2006,32(10):771-789
In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to the fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics predict low-severity faults in fault-prone classes better than high-severity faults.
14.
Quality of software is one of the most critical concerns in software system development, and many products fail to meet their quality objectives when initially constructed. Software quality is strongly affected by the actual dynamics of the development process. This article proposes the use of the Markov decision process (MDP) for the assessment of software quality, because an MDP is a useful technique for abstracting a model of the dynamics of the development process and testing its impact on quality. Additionally, MDP modeling of the dynamics leads to early prediction of quality, from the design phases all the way through the different stages of development. The proposed approach is based on the stochastic nature of the software development process, including the project architecture, the construction strategy of the Software Quality Assurance system, its qualification actions, and the team assignment strategy. It accepts these factors as inputs and generates a relative quality degree as an output. The approach has been demonstrated for the design phase with a case study taken from the literature. The results prove its robustness and its capability to identify appropriate policies in terms of quality, cost, and time.
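A toy value-iteration sketch can make the MDP framing concrete: states abstract quality levels, actions are QA choices, and the optimal policy trades review effort against expected quality gain. All states, actions, transition probabilities, and rewards below are invented for illustration; they are not the article's model.

```python
# Toy MDP: states 0 = low quality, 1 = medium, 2 = high (absorbing goal);
# actions 0 = "proceed", 1 = "review" (costly but improves quality).
import numpy as np

P = np.array([  # P[action, state, next_state]
    [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.0, 1.0]],   # proceed
    [[0.3, 0.6, 0.1], [0.0, 0.4, 0.6], [0.0, 0.0, 1.0]],   # review
])
R = np.array([  # immediate reward per (action, state)
    [0.0, 0.0, 1.0],
    [-0.2, -0.2, 1.0],  # reviewing costs effort
])
gamma, V = 0.9, np.zeros(3)
for _ in range(200):            # value iteration to a fixed point
    Q = R + gamma * (P @ V)     # Q[a, s] = expected discounted return
    V = Q.max(axis=0)
print("optimal action per state:", Q.argmax(axis=0))  # review pays off early on
```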
15.
Programming and Computer Software - The scope of volunteer computing systems is steadily expanding. The Berkeley Open Infrastructure for Network Computing (BOINC) is currently the most popular...
16.
Existing clustering-based methods for segmentation and fiber tracking of diffusion tensor magnetic resonance images (DT-MRI) are based on a formulation of a similarity measure between diffusion tensors, or measures that combine translational and diffusion tensor distances in some ad hoc way. In this paper we propose to use the Fisher information-based geodesic distance on the space of multivariate normal distributions as an intrinsic distance metric. An efficient and numerically robust shooting method is developed for computing the minimum geodesic distance between two normal distributions, together with an efficient graph-clustering algorithm for segmentation. Extensive experimental results involving both synthetic data and real DT-MRI images demonstrate that in many cases our method leads to more accurate and intuitively plausible segmentation results vis-à-vis existing methods.
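For reference, the Fisher information metric on the manifold of multivariate normals N(μ, Σ), whose geodesics the shooting method integrates, has a standard textbook line element; the closed form for the distance is known only in special cases, which is why a numerical method is needed. The formulas below are the standard forms, not quotations from the paper.

```latex
% Standard Fisher-Rao line element on multivariate normals N(mu, Sigma):
ds^2 \;=\; d\mu^{\top}\,\Sigma^{-1}\,d\mu
      \;+\; \tfrac{1}{2}\,\operatorname{tr}\!\bigl[(\Sigma^{-1}\,d\Sigma)^{2}\bigr]

% With the mean held fixed, the geodesic distance has a closed form in the
% eigenvalues \lambda_i of \Sigma_1^{-1}\Sigma_2; the general case has no
% known closed form, hence the shooting method:
d(\Sigma_1, \Sigma_2) \;=\; \Bigl[\tfrac{1}{2}\,\textstyle\sum_i \ln^2 \lambda_i\Bigr]^{1/2}
```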
17.
18.
Computer-Generated Graphite Pencil Rendering of 3D Polygonal Models
Researchers in non-photorealistic rendering have investigated the display of three-dimensional worlds using various display models. In particular, recent work has focused on the modeling of traditional artistic media and styles such as pen-and-ink illustration and watercolor painting. By providing 3D rendering systems that use these alternative display models, users can generate traditional illustration renderings of their three-dimensional worlds. In this paper we present our graphite pencil 3D renderer. We have broken the problem of simulating pencil drawing down into four fundamental parts: (1) simulating the drawing materials (graphite pencil and drawing paper, blenders and kneaded eraser), (2) modeling the drawing primitives (individual pencil strokes and mark-making to create tones and textures), (3) simulating the basic rendering techniques used by artists and illustrators familiar with pencil rendering, and (4) modeling the control of the drawing composition. Each part builds upon the others and is essential to developing the framework for higher-level rendering methods and tools. In this paper we present parts 2, 3, and 4 of our research. We present non-photorealistic graphite pencil rendering methods for outlining and shading. We also present the control of drawing steps from preparatory sketches to finished rendering results. We demonstrate the capabilities of our approach with a variety of images generated from 3D models.
19.
A 3D Computer Gouache Brush Model
This paper proposes a 3D computer gouache brush model consisting of four parts: brush-unit selection, pigment simulation, pigment diffusion, and overall control. The model has a simple structure and can be applied directly to conventional 3D surfaces, and the resulting images have the appearance of hand-painted gouache. The examples in the paper show that the model produces satisfactory simulation results.
20.
Project evaluation is essential to understand and assess the key aspects of a project that make it either a success or a failure. Success or failure is influenced by a large number of factors, and it is often hard to measure them objectively. This paper addresses this by introducing a new method for identifying and assessing key project characteristics that are crucial for a project's success. The method consists of a number of well-defined steps, which are described in detail. The method is applied to two case studies from different application domains and continents. It is concluded that patterns can be detected in the data sets. Further, the analysis of the two data sets shows that the proposed method using subjective factors is useful, since it provides an increased understanding, insight, and assessment of which project factors might affect project success.