Similar Documents
20 similar documents found (search time: 46 ms)
1.
The need to improve software productivity and quality has driven research on software metrics technology and the development of software metrics tools to support related activities. To support object-oriented software metrics practice effectively, a model-based approach to object-oriented software metrics is proposed in this paper. The approach guides metrics users in adopting a quality metrics model to measure object-oriented software products; the model itself can be developed top-down. The approach explicitly introduces the notions of absolute normalization computation and relative normalization computation for a metrics model. Moreover, a generic software metrics tool, the Jade Bird Object-Oriented Metrics Tool (JBOOMT), is designed to implement this approach. The parser-based technique adopted by the tool makes the source-program information accurate and complete for measurement. It supports customizable hierarchical metrics models, provides a flexible user interface for manipulating the models, and supports both absolute and relative normalization mechanisms in different situations.
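The abstract does not define the two normalization schemes precisely; the Python sketch below illustrates one plausible reading: a hierarchical quality model whose leaf metrics are normalized either against fixed reference bounds (absolute) or against the other modules measured in the same run (relative), then aggregated bottom-up with weights. All names, bounds, and formulas here are illustrative assumptions, not the JBOOMT implementation.

```python
# Illustrative sketch of a hierarchical quality-metrics model with
# absolute vs. relative normalization (assumed semantics, not JBOOMT's).

def normalize_absolute(value, lower, upper):
    """Map a raw metric onto [0, 1] against fixed reference bounds."""
    if upper == lower:
        return 0.0
    return min(max((value - lower) / (upper - lower), 0.0), 1.0)

def normalize_relative(value, all_values):
    """Map a raw metric onto [0, 1] against the other measured modules."""
    lo, hi = min(all_values), max(all_values)
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def score_node(node, raw, cohort):
    """Recursively aggregate a weighted hierarchy of metrics."""
    if "metric" in node:                            # leaf: a single raw metric
        v = raw[node["metric"]]
        if node.get("mode") == "absolute":
            return normalize_absolute(v, *node["bounds"])
        return normalize_relative(v, cohort[node["metric"]])
    return sum(w * score_node(child, raw, cohort)   # inner node: weighted sum
               for w, child in node["children"])

# Hypothetical model fragment: a quality factor built from two class metrics.
model = {"children": [
    (0.6, {"metric": "WMC", "mode": "absolute", "bounds": (0, 50)}),
    (0.4, {"metric": "DIT", "mode": "relative"}),
]}
raw = {"WMC": 23, "DIT": 4}
cohort = {"WMC": [10, 23, 41], "DIT": [1, 4, 6]}
print(score_node(model, raw, cohort))
```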

2.
Like all engineering disciplines, software engineering relies on quantitative analysis to support rationalized decision making. Software engineering researchers and practitioners have traditionally relied on software metrics to quantify attributes of software products and processes. Whereas traditional software metrics are typically based on a syntactic analysis of software products, we introduce and discuss metrics that are based on a semantic analysis: our metrics do not reflect the form or structure of software products, but rather the properties of their function. At a time when software systems grow increasingly large and complex, the focus on diagnosing, identifying and removing every fault in the software product ought to relinquish the stage to a more measured, more balanced, and more realistic approach, which emphasizes failure avoidance, in addition to fault avoidance and fault removal. Semantic metrics are a good fit for this purpose, reflecting as they do a system’s ability to avoid failure rather than the likelihood that it is free of faults.

3.
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
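The abstract does not spell out the exact scoring protocol, but the sketch below shows the kind of frame-level computation such scoring software performs: greedy IoU matching of detected boxes against ground-truth boxes, yielding per-frame precision and recall. The 0.5 threshold and the greedy matching strategy are assumptions, not necessarily the framework's rules.

```python
# Frame-level detection scoring: greedy IoU matching of detections to
# ground truth (an assumed protocol, not necessarily the paper's).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def score_frame(detections, ground_truth, thresh=0.5):
    """Return (true positives, false positives, false negatives) for one frame."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    return tp, len(detections) - tp, len(unmatched)

tp, fp, fn = score_frame([(10, 10, 50, 50)], [(12, 8, 48, 52), (100, 100, 120, 120)])
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)   # 1.0, 0.5
```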

4.
5.
The complexities of computer auditing have created a need for decision support for the EDP auditor. Traditional statistical techniques have proven valuable; however, there are important qualitative components which must be incorporated in the analysis. More importantly, there is a need for decision aids which not only produce analyses and probability estimates but are also able to explain their analysis and conclusions. Recent developments in artificial intelligence have made possible the development of expert systems which provide these capabilities. In this paper we present the motivation, framework, and development strategy for a decision support system for EDP auditing. This project was funded by a grant from the Peat, Marwick, Mitchell Foundation through its Research Opportunities in Auditing program. The views expressed herein are those of the authors and do not necessarily reflect the views of the Peat, Marwick, Mitchell Foundation. The authors are indebted to the participants of the workshops at the University of Florida and the University of Wisconsin for their helpful comments. In particular, we would like to thank Rowland Ataise, Alan Friedberg, Steve Johnson, Andy Luzi, Stan Biggs, Lynn McKell, and Marshall Romney.

6.
Job shop scheduling with setup times, deadlines and precedence constraints
In the last 15 years several procedures have been developed that can find solutions of acceptable quality in reasonable computing time to Job Shop Scheduling problems in environments that do not involve sequence-dependent setup times of the machines. The presence of the latter, however, changes the picture dramatically. In this paper we adapt one of the best known heuristics, the Shifting Bottleneck Procedure, to the case when sequence dependent setup times play an important role. This is done by treating the single machine scheduling problems that arise in the process as Traveling Salesman Problems with time windows, and solving the latter by an efficient dynamic programming algorithm. The model treated here also incorporates precedence constraints, release times and deadlines. Computational experience on a vast array of instances, mainly from the semiconductor industry, shows our procedure to advance substantially the state of the art. Paper presented in New York at MISTA 2005. E. Balas supported by the National Science Foundation through grant DMI-9802773 and by the Office of Naval Research through contract #N00014-97-1-0133.
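The abstract describes solving the one-machine subproblems as Traveling Salesman Problems with time windows by dynamic programming. The sketch below is a textbook Held-Karp-style DP over job subsets that tracks the earliest feasible completion time, with sequence-dependent setups, release times, and deadlines. It is a minimal illustration under stated assumptions, not the authors' algorithm, and because it enumerates all subsets it only scales to small instances.

```python
# Held-Karp-style DP for sequencing jobs on one machine with
# sequence-dependent setup times, release times and deadlines.
# A small illustrative solver, exponential in the number of jobs.

def best_makespan(proc, release, deadline, setup):
    """Return the minimum feasible completion time, or None if infeasible.

    proc[j], release[j], deadline[j]: processing time, release time, deadline.
    setup[i][j]: setup time when job j directly follows job i (setup[j][j]
    is taken, by assumption, as the initial setup on an empty machine).
    """
    n = len(proc)
    # state: (frozenset of scheduled jobs, last job) -> earliest completion time
    states = {}
    for j in range(n):
        t = max(release[j], 0) + setup[j][j] + proc[j]
        if t <= deadline[j]:
            states[(frozenset([j]), j)] = t
    for size in range(2, n + 1):
        new_states = {}
        for (done, last), t in states.items():
            if len(done) != size - 1:
                continue
            for j in range(n):
                if j in done:
                    continue
                start = max(t + setup[last][j], release[j])
                finish = start + proc[j]
                if finish > deadline[j]:
                    continue
                key = (done | {j}, j)
                if finish < new_states.get(key, float("inf")):
                    new_states[key] = finish
        states.update(new_states)
    full = [t for (done, _), t in states.items() if len(done) == n]
    return min(full) if full else None

# Three hypothetical jobs.
proc = [3, 2, 4]
release = [0, 1, 0]
deadline = [20, 8, 15]
setup = [[1, 2, 1],
         [2, 1, 3],
         [1, 2, 1]]
print(best_makespan(proc, release, deadline, setup))   # 14
```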

7.
To make effective use of protein tandem mass spectrometry data and improve the accuracy of protein identification, a KNN-based algorithm for scoring matches between protein sequences and tandem mass spectra is proposed. Match scoring between protein sequences and tandem mass spectra is a key technique in database-search protein identification. However, existing algorithms do not make good use of the ion intensity information in tandem mass spectra. To address this problem, this paper gives a reasonable partition of all ions according to their ion types, abstracts a high-dimensional intensity feature vector, builds an intensity-matching knowledge set on a known high-accuracy data set, and finally constructs a KNN-based match-scoring algorithm for sequences and spectra. Experimental results show that the proposed algorithm makes more effective use of the structural information in tandem mass spectra and improves the accuracy of protein identification.
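A minimal sketch of the general idea, assuming scikit-learn and entirely hypothetical feature extraction: intensity values grouped by ion type are flattened into a fixed-length vector, a labelled reference set (correct vs. incorrect matches) plays the role of the knowledge set, and the KNN score of a candidate match is the fraction of its neighbours that are correct matches. Ion types, vector layout, and the data are assumptions, not the paper's definitions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical sketch of KNN-based peptide-spectrum match scoring.
# Feature layout, ion types, and data are illustrative assumptions.

ION_TYPES = ["b", "y", "b2+", "y2+"]     # assumed partition of fragment ions
MAX_IONS = 8                             # assumed fixed number of positions per type

def intensity_vector(matched_intensities):
    """Flatten per-ion-type intensity lists into one fixed-length feature vector."""
    vec = []
    for ion in ION_TYPES:
        values = matched_intensities.get(ion, [])[:MAX_IONS]
        vec.extend(values + [0.0] * (MAX_IONS - len(values)))
    return np.array(vec)

# "Knowledge set": feature vectors from a high-accuracy reference data set,
# labelled 1 for correct sequence-spectrum matches and 0 for decoys.
rng = np.random.default_rng(0)
X_ref = rng.random((200, len(ION_TYPES) * MAX_IONS))
y_ref = (X_ref.mean(axis=1) > 0.5).astype(int)   # stand-in labels for the demo

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_ref, y_ref)

candidate = intensity_vector({"b": [0.9, 0.7, 0.4], "y": [0.8, 0.6]})
score = knn.predict_proba([candidate])[0][1]     # fraction of "correct" neighbours
print(f"match score: {score:.2f}")
```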

8.
Attacks on computer systems are now attracting increased attention. While the current trends in software vulnerability discovery indicate that the number of newly discovered vulnerabilities continues to be significant, the time between the public disclosure of vulnerabilities and the release of an automated exploit is shrinking. Thus, assessing the vulnerability exploitability risk is critical because this allows decision-makers to prioritize among vulnerabilities, allocate resources to patch and protect systems from these vulnerabilities, and choose between alternatives. Common Vulnerability Scoring System (CVSS) metrics have become the de facto standard for assessing the severity of vulnerabilities. However, the CVSS exploitability measures assign subjective values based on the views of experts. Two of the factors in CVSS, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence it is unknown whether it considers software properties as a factor. In this work, we introduce a novel measure, Structural Severity, which is based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. These properties represent metrics that can be objectively derived from attack surface analysis, vulnerability analysis, and exploitation analysis. To illustrate the proposed approach, 25 reported vulnerabilities of the Apache HTTP server and 86 reported vulnerabilities of the Linux kernel have been examined at the source code level. The results show that the proposed approach, which uses more detailed information, can objectively measure the risk of vulnerability exploitability, and its results can differ from the CVSS base scores.
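The abstract lists attack entry points, vulnerability location, dangerous system calls, and reachability analysis as ingredients of Structural Severity but not how they are combined. The sketch below shows one assumed combination: a breadth-first search over a call graph to check whether the vulnerable function is reachable from any entry point, and whether it can in turn reach a dangerous call, with a simple additive score. The weights and the toy graph are illustrative, not the paper's model.

```python
from collections import deque

# Assumed sketch of a "structural severity" style score: reachability over a
# call graph from attack entry points to the vulnerable function, and from
# there to dangerous system calls. Weights and graph are illustrative.

def reachable(graph, sources, target):
    """Breadth-first search: can `target` be reached from any of `sources`?"""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def structural_severity(call_graph, entry_points, vuln_func, dangerous_calls):
    score = 0.0
    if reachable(call_graph, entry_points, vuln_func):
        score += 0.6                     # attacker-reachable vulnerability
        if any(reachable(call_graph, [vuln_func], d) for d in dangerous_calls):
            score += 0.4                 # can drive a dangerous system call
    return score

# Toy call graph for an imaginary server.
call_graph = {
    "handle_request": ["parse_header", "log"],
    "parse_header": ["copy_field"],          # copy_field holds the vulnerability
    "copy_field": ["memcpy"],
}
print(structural_severity(call_graph,
                          entry_points=["handle_request"],
                          vuln_func="copy_field",
                          dangerous_calls=["memcpy", "system"]))   # 1.0
```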

9.
The rising costs of software development and maintenance have naturally aroused interest in tools and measures to quantify and analyze software complexity. Many software metrics have been studied widely because of their potential usefulness in predicting the complexity and quality of software. Most of the work reported in this area has been related to non-real-time software. In this paper we report and discuss the results of an experimental investigation of some important metrics and their relationships for a class of 202 Pascal programs used in a real-time distributed processing environment. While some of our observations confirm independent studies, we have noted significant differences. For instance, the correlations between McCabe's control complexity measure and Halstead's metrics are low in comparison to a previous study. Studies of the type reported here are important for understanding the relationships between software metrics.
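The central quantitative claim is about the correlation between McCabe's and Halstead's metrics across the 202 programs. The few lines below show how such a correlation study is typically computed, using Pearson's r over per-program metric values; the numbers are invented for illustration, not the paper's data.

```python
import numpy as np

# Illustrative correlation analysis between two software metrics measured
# over a set of programs (values are invented, not the paper's data).

mccabe = np.array([4, 7, 12, 3, 9, 15, 6, 11])              # cyclomatic complexity
halstead_volume = np.array([120, 310, 410, 95, 260, 700, 150, 380])

r = np.corrcoef(mccabe, halstead_volume)[0, 1]
print(f"Pearson correlation: {r:.2f}")   # a low r would suggest the metrics
                                         # capture different aspects of complexity
```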

10.
A relative complexity technique that combines the features of many complexity metrics to predict performance and reliability of a computer program is presented. Relative complexity aggregates many similar metrics into a linear compound metric that describes a program. Since relative complexity is a static measure, it is expanded by measuring relative complexity over time to find a program's functional complexity. It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do. Thus, developers can save time by choosing one metric to do the work of many.
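As a rough illustration of a "linear compound metric", the sketch below standardizes several raw metrics per module and collapses them with principal-component weights; measuring the same compound on successive builds would give the over-time view the abstract calls functional complexity. This is an assumed construction under numpy, not necessarily the authors' exact definition.

```python
import numpy as np

# Assumed sketch of a relative-complexity style compound metric:
# standardize raw metrics per module, then combine principal components
# weighted by the variance they explain.

# rows = modules, columns = raw metrics (LOC, cyclomatic, fan-out, ...)
raw = np.array([[120,  8, 3],
                [430, 25, 9],
                [ 60,  4, 1],
                [300, 17, 6]], dtype=float)

z = (raw - raw.mean(axis=0)) / raw.std(axis=0)         # standardize each metric
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                 # principal components
weights = eigvals / eigvals.sum()                      # variance-explained weights

components = z @ eigvecs                               # module scores per component
relative_complexity = components @ weights             # one compound value per module
print(relative_complexity)
```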

11.
Object-oriented software metrics are one of the important means of understanding and assuring the quality of object-oriented software. By comparing a metric value with its threshold, one can simply and intuitively judge whether the software is likely to contain defects. Methods for determining metric thresholds fall mainly into unsupervised learning methods based on the distribution of metric data and supervised learning methods based on the correlation between metrics and defects. Each family has advantages and drawbacks: unsupervised methods need no label information and are easy to implement, but the resulting thresholds usually have poor defect-prediction performance; supervised methods use machine learning algorithms to improve the defect-prediction performance of the thresholds, but label information is hard to obtain in practice and the techniques for linking metrics to defects are complex. In recent years researchers in both camps have made considerable progress, yet the determination of thresholds for object-oriented software metrics still faces challenges that urgently need to be addressed. This paper systematically surveys recent work in this area at home and abroad. It first states the research problems of metric-threshold determination, then summarizes progress on unsupervised and supervised methods and organizes their underlying theories and implementation techniques, next briefly introduces other techniques related to object-oriented metric thresholds, and finally summarizes the challenges currently facing this line of research and suggests directions for future work.
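To make the two families concrete, the sketch below derives a threshold for one metric in both ways: an unsupervised threshold taken from the metric's own distribution (here the 70th percentile, one common distribution-based choice) and a supervised threshold chosen by scanning candidate cut-offs and keeping the one that best separates defective from clean classes (here by maximizing Matthews correlation). The specific statistics and data are illustrative, not results from the survey.

```python
import numpy as np

# Two ways to derive a defect-prediction threshold for one OO metric
# (illustrative choices: 70th percentile vs. MCC-maximizing scan).

metric = np.array([3, 5, 8, 12, 4, 20, 7, 15, 6, 25])   # e.g. WMC per class
defective = np.array([0, 0, 0, 1, 0, 1, 0, 1, 0, 1])    # labels (supervised only)

# Unsupervised: use only the metric's distribution.
unsup_threshold = np.percentile(metric, 70)

# Supervised: scan candidate thresholds, keep the best Matthews correlation.
def mcc(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

candidates = np.unique(metric)
sup_threshold = max(candidates, key=lambda t: mcc((metric >= t).astype(int), defective))

print("unsupervised threshold:", unsup_threshold)
print("supervised threshold:  ", sup_threshold)
```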

12.
Jones, C. Computer, 1994, 27(9): 98-100
The software industry is an embarrassment when it comes to measurement and metrics. Many software managers and practitioners, including tenured academics in software engineering and computer science, seem to know little or nothing about these topics. Many of the measurements found in the software literature are not used with enough precision to replicate the author's findings, a canon of scientific writing in other fields. Several of the most widely used software metrics have been proved unworkable, yet they continue to show up in books, encyclopedias, and refereed journals. So long as these invalid metrics are used carelessly, there can be no true software engineering, only a kind of amateurish craft that uses rough approximations instead of precise measurement. The paper considers three significant and widely used software metrics that are invalid under various conditions: lines of code or LOC metrics, software science or Halstead metrics, and the cost-per-defect metric. Fortunately, two metrics that actually generate useful information (complexity metrics and function-point metrics) are growing in use and importance.
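Jones's long-standing objection to cost per defect is that fixed testing costs make the metric penalize quality; a few lines of arithmetic make the effect visible. The figures below are invented purely to illustrate that argument.

```python
# Why cost per defect penalizes quality: fixed test-preparation and
# execution costs get divided by fewer and fewer defects. Invented figures.

fixed_test_cost = 10000          # writing and running the test suite (same either way)
cost_per_fix = 200               # marginal cost of repairing one defect

for defects_found in (100, 10, 1):
    total = fixed_test_cost + cost_per_fix * defects_found
    print(f"{defects_found:3d} defects: total ${total:,}, "
          f"cost per defect ${total / defects_found:,.0f}")
# 100 defects: total $30,000, cost per defect $300
#  10 defects: total $12,000, cost per defect $1,200
#   1 defects: total $10,200, cost per defect $10,200
```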

13.
In this study, defect tracking is used as a proxy method to predict software readiness. The number of remaining defects in an application under development is one of the most important factors in deciding whether a piece of software is ready to be released. By comparing the predicted number of faults with the number of faults discovered in testing, a software manager can decide whether the software is likely ready for release. The predictive model developed in this research can predict: (i) the number of faults (defects) likely to exist, (ii) the estimated number of code changes required to correct a fault, and (iii) the estimated amount of time (in minutes) needed to make those changes in the respective classes of the application. The model uses product metrics as independent variables; these metrics are selected according to the nature of the source code with regard to architecture layers, types of faults, and the contribution factors of the metrics. A neural network model with a genetic training strategy is introduced to improve prediction results for estimating software readiness. This genetic-net combines a genetic algorithm with a statistical estimator to produce a model that also indicates the usefulness of its inputs. The model is divided into three parts: (1) a prediction model for the presentation logic tier, (2) a prediction model for the business tier, and (3) a prediction model for the data access tier. Existing object-oriented and complexity metrics are used in the business-tier model, while new sets of metrics are proposed for the presentation logic and data access tiers; these metrics are validated using data extracted from real-world applications. The trained models can be used as tools to assist software managers in making release decisions.
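As a rough illustration of the "genetic-net" idea, the sketch below evolves the weights of a tiny one-hidden-layer network with a plain genetic algorithm, using synthetic product metrics as inputs and fault counts as the target. Network shape, GA settings, and data are all assumptions; the point is only the combination of a neural model with genetic rather than gradient-based training.

```python
import numpy as np

# Toy "genetic-net": a one-hidden-layer regressor whose weights are evolved
# by a simple genetic algorithm instead of backpropagation. Network shape,
# GA settings, and data are illustrative assumptions.

rng = np.random.default_rng(1)
X = rng.random((80, 5))                               # 5 product metrics per class
y = X @ np.array([3.0, 1.0, 0.0, 2.0, 0.5]) + 0.1 * rng.standard_normal(80)

N_IN, N_HID = 5, 4
N_W = N_IN * N_HID + N_HID                            # hidden + output weights

def predict(w, X):
    Wh = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    wo = w[N_IN * N_HID:]
    return np.tanh(X @ Wh) @ wo                       # predicted fault count

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)         # negative MSE

pop = rng.standard_normal((60, N_W))
for _ in range(200):
    ranked = pop[np.argsort([fitness(w) for w in pop])]
    parents = ranked[-20:]                            # keep the fittest individuals
    children = []
    while len(parents) + len(children) < len(pop):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(N_W) < 0.5                  # uniform crossover
        children.append(np.where(mask, a, b)
                        + 0.05 * rng.standard_normal(N_W))   # Gaussian mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("training MSE of best individual:", -fitness(best))
```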

14.
In this paper, a control integration method based on agent cooperation, called ASOJI, is proposed. It designs the architecture of integrated application systems in distributed computing environments as an agent community composed of nested agent federations, addressing three aspects: architecture style, agent cooperation, and composition semantics. By defining an activity-sharing-oriented joint intention through stepwise refinement, ASOJI not only supports transparent specification of the architecture for software composition but also closes the gap between agent theory and the engineering realization of control integration.

15.
Software practitioners need ways to assess their software, and metrics can provide an automated way to do that, providing valuable feedback with little effort earlier than the testing phase. Semantic metrics were proposed to quantify aspects of software quality based on the meaning of software's task in the domain. Unlike traditional software metrics, semantic metrics do not rely on code syntax. Instead, semantic metrics are calculated from domain information, using the knowledge base of a program understanding system. Because semantic metrics do not rely on code syntax, they can be calculated before code is fully implemented. This article evaluates the semantic metrics theoretically and empirically. We find that the semantic metrics compare well to existing metrics and show promise as early indicators of software quality.

16.
In geographic information retrieval, queries often name geographic regions that do not have a well-defined boundary, such as “Southern France.” We provide two algorithmic approaches to the problem of computing reasonable boundaries of such regions based on data points that have evidence indicating that they lie either inside or outside the region. Our problem formulation leads to a number of subproblems related to red-blue point separation and minimum-perimeter polygons, many of which we solve algorithmically. We give experimental results from our implementation and a comparison of the two approaches. This research is supported by the EU-IST Project No. IST-2001-35047 (SPIRIT) and by grant WO 758/4-2 of the German Research Foundation (DFG).
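The paper's two algorithms are not described in the abstract. As a point of comparison, the sketch below (assuming numpy and scipy) computes the simplest conceivable boundary for an imprecise region: the convex hull of the "inside" (red) evidence points, then counts how many "outside" (blue) points it wrongly encloses. Real separation algorithms trade off perimeter against such misclassifications; this baseline only illustrates the problem setup, not the paper's methods.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Baseline for an imprecise-region boundary: convex hull of the "inside"
# evidence points, then count the "outside" points it wrongly encloses.
# Purely an illustration of the problem setup, not the paper's algorithms.

def inside_hull(hull, p, eps=1e-9):
    """A point is inside the hull iff it satisfies every facet inequality."""
    return np.all(hull.equations[:, :-1] @ p + hull.equations[:, -1] <= eps)

rng = np.random.default_rng(3)
red = rng.normal(loc=0.0, scale=1.0, size=(40, 2))    # evidence: inside the region
blue = rng.normal(loc=3.0, scale=1.5, size=(40, 2))   # evidence: outside the region

hull = ConvexHull(red)
misclassified = sum(inside_hull(hull, p) for p in blue)
perimeter = hull.area          # for 2-D hulls, `area` is the perimeter
print(f"perimeter {perimeter:.2f}, blue points wrongly enclosed: {misclassified}")
```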

17.
In this paper, we propose Athena, an approach for the real-time performance analysis of distributed software with reliability constraints. The approach is based on the joint real-time and reliability analysis of distributed programs. Athena introduces two important factors: imperfect nodes and link reliability. The algorithms proposed in Athena generate sub-graphs, compute the reliability of each sub-graph, calculate the transmission time for all transmission paths of each data file, and compute the response time of each data file under its reliability constraint. In this way, the real-time performance of distributed software with reliability constraints can be evaluated. This paper is supported by the National Science Foundation of China under grant 60273076.
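The abstract describes combining node and link reliabilities per sub-graph and weighing transmission times. The sketch below shows one assumed version of that computation for a single data file: each candidate transmission path gets a reliability (the product of its node and link reliabilities) and a transmission time, and the response time is reported only over paths meeting a reliability constraint. Names, figures, and the constraint are illustrative, not Athena's.

```python
# Assumed sketch of reliability-constrained response-time analysis for one
# data file in a distributed program: each candidate path has a reliability
# (product of imperfect-node and link reliabilities) and a transmission time.

node_rel = {"A": 0.99, "B": 0.95, "C": 0.98, "D": 0.97}
link_rel = {("A", "B"): 0.99, ("B", "D"): 0.90, ("A", "C"): 0.97, ("C", "D"): 0.96}
link_time = {("A", "B"): 4.0, ("B", "D"): 6.0, ("A", "C"): 5.0, ("C", "D"): 5.0}

def path_reliability(path):
    rel = 1.0
    for node in path:
        rel *= node_rel[node]
    for hop in zip(path, path[1:]):
        rel *= link_rel[hop]
    return rel

def path_time(path):
    return sum(link_time[hop] for hop in zip(path, path[1:]))

paths = [("A", "B", "D"), ("A", "C", "D")]     # candidate routes for the data file
feasible = [(path_time(p), path_reliability(p)) for p in paths
            if path_reliability(p) >= 0.85]    # reliability constraint (assumed)
print("response time under constraint:",
      min(t for t, _ in feasible) if feasible else None)
```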

18.
The Object-Oriented (OO) paradigm has become increasingly popular in recent years. Researchers agree that, although maintenance may turn out to be easier for OO systems, it is unlikely that the maintenance burden will completely disappear. One approach to controlling software maintenance costs is the utilization of software metrics during the development phase, to help identify potential problem areas. Many new metrics have been proposed for OO systems, but only a few of them have been validated. The purpose of this research is to empirically explore the validation of three existing OO design complexity metrics and, specifically, to assess their ability to predict maintenance time. This research reports the results of validating three metrics, Interaction Level (IL), Interface Size (IS), and Operation Argument Complexity (OAC). A controlled experiment was conducted to investigate the effect of design complexity (as measured by the above metrics) on maintenance time. Each of the three metrics by itself was found to be useful in the experiment in predicting maintenance performance.

19.
We consider cryptographic and physical zero-knowledge proof schemes for Sudoku, a popular combinatorial puzzle. We discuss methods that allow one party, the prover, to convince another party, the verifier, that the prover has solved a Sudoku puzzle, without revealing the solution to the verifier. The question of interest is how a prover can show: (i) that there is a solution to the given puzzle, and (ii) that he knows the solution, while not giving away any information about the solution to the verifier. In this paper we consider several protocols that achieve these goals. Broadly speaking, the protocols are either cryptographic or physical. By a cryptographic protocol we mean one in the usual model found in the foundations of cryptography literature. In this model, two machines exchange messages, and the security of the protocol relies on computational hardness. By a physical protocol we mean one that is implementable by humans using common objects, and preferably without the aid of computers. In particular, our physical protocols utilize items such as scratch-off cards, similar to those used in lotteries, or even just simple playing cards. The cryptographic protocols are direct and efficient, and do not involve a reduction to other problems. The physical protocols are meant to be understood by “lay-people” and implementable without the use of computers. Research of R. Gradwohl was supported by US-Israel Binational Science Foundation Grant 2002246. Research of M. Naor was supported in part by a grant from the Israel Science Foundation. Research of B. Pinkas was supported in part by the Israel Science Foundation (grant number 860/06). Research of G.N. Rothblum was supported by NSF grant CNS-0430450 and NSF grant CFF-0635297.
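The protocols themselves are not spelled out in the abstract. The toy round below only illustrates the commitment flavour of such schemes: the prover applies a fresh random relabelling of the digits 1-9 to its solution, commits to every cell with a salted hash, and opens just the one row, column, or box the verifier challenges, which should be a permutation of 1-9 yet reveals nothing about the underlying digits. A real protocol repeats this with fresh commitments and also binds the relabelled grid to the published clues; both steps are omitted here, so this is not one of the paper's protocols and not a secure scheme.

```python
import hashlib, os, random

# Toy commit-and-reveal round inspired by zero-knowledge Sudoku proofs.
# It omits clue consistency and soundness amplification, so it illustrates
# the commitment mechanics only, not a secure protocol.

def commit(value):
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + bytes([value])).hexdigest()

def open_ok(value, salt, digest):
    return hashlib.sha256(salt + bytes([value])).hexdigest() == digest

def prove_one_round(solution, challenge_cells):
    relabel = dict(zip(range(1, 10), random.sample(range(1, 10), 9)))
    grid = [[relabel[v] for v in row] for row in solution]          # hide the digits
    commitments, openings = {}, {}
    for r in range(9):
        for c in range(9):
            salt, digest = commit(grid[r][c])
            commitments[(r, c)] = digest
            openings[(r, c)] = (grid[r][c], salt)
    # Prover sends all commitments, then opens only the challenged cells.
    revealed = {cell: openings[cell] for cell in challenge_cells}
    return commitments, revealed

def verify(commitments, revealed):
    values = []
    for cell, (value, salt) in revealed.items():
        if not open_ok(value, salt, commitments[cell]):
            return False
        values.append(value)
    return sorted(values) == list(range(1, 10))    # challenged group is a permutation

# Verifier challenges, say, row 4 of a (hypothetical) solved grid.
solution = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
challenge = [(4, c) for c in range(9)]
commitments, revealed = prove_one_round(solution, challenge)
print("round accepted:", verify(commitments, revealed))
```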

20.
In the summer of 2001, Florida Gulf Coast University was awarded a 2-year, $200,000 grant from the National Center for Academic Transformation to redesign a required General Education course entitled Understanding the Visual and Performing Arts. The course redesign project had two main goals: infuse appropriate technology into the course in meaningful ways and reduce the cost of delivering the course. Faculty members in the humanities and arts were adamant that the redesigned course be structured in such a way that it offered a coherent and consistent learning experience for all students and that it maintained the use of essays as an important strategy for learning in the class. The redesign project led to the creation of a wholly online course with all students registered in two large sections. One of the ways in which we continued to incorporate essay writing into the course was to use a computer application, the Intelligent Essay Assessor (IEA) from Pearson Knowledge Technologies, to score two shorter essays. Through detailed assessment, we have demonstrated that the computer software has an inter-rater reliability of 81% as compared to the 54% inter-rater reliability of the holistic scoring by humans. In this essay, we provide general background on the redesign project and a more detailed discussion of the appropriate use and the reliability of the Intelligent Essay Assessor.
