Similar Literature
1.
Among the key factors for the success of a metrics program are the regularity of metrics collection, a seamless and efficient data collection methodology, and the presence of non-intrusive automated data collection tools. This paper presents the software process data warehousing architecture SPDW+ as a solution to the frequent, seamless, and automated capturing of software quality metrics, and their integration in a central repository for a full range of analyses. The striking features of the SPDW+ ETL (data extraction, transformation, and loading) approach are that it addresses heterogeneity issues related to the software development context, it is automatable and non-intrusive, and it allows different capturing frequency and latency strategies, hence allowing both analysis and monitoring of software metrics. The paper also provides a reference framework that details three orthogonal dimensions for considering ETL issues in the software development process context, used to develop SPDW+ ETL. The advantages of SPDW+ are: (1) flexibility to meet the requirements of the frequent changes in SDP environments; (2) support for monitoring, which implies the execution of frequent and incremental loads; (3) automation of the complex and time-consuming task of capturing metrics, making it seamless; (4) freedom of choice regarding management models and support tools used in projects; and (5) cohesion and consistency of the information stored in the metrics repository which will be used to compare data of different projects. The paper presents the reference framework, illustrates the key role played by the metrics capturing process in a metrics program using a case study, and presents the striking features of SPDW+ and its ETL approach, as well as an evaluation based on a prototype implementation.

2.
Stark  G. Durst  R.C. Vowell  C.W. 《Computer》1994,27(9):42-48
The amount of code in NASA systems has continued to grow over the past 30 years. This growth brings with it the increased risk of system failure caused by software. Thus, managing the risks inherent in software development and maintenance is becoming a highly visible and important field. The metrics effort within NASA's Mission Operations Directorate has helped managers and engineers better understand their processes and products. The toolkit helps ensure consistent data collection across projects and increases the number and types of analysis options available to project personnel. The decisions made on the basis of metrics analysis have helped project engineers make decisions about project and mission readiness by removing the inherent optimism of “engineering judgment”.

3.
Presents empirical evidence that metrics on communication artifacts generated by groupware tools can be used to gain significant insight into the development process that produced them. We describe a test-bed for developing and testing communication metrics, a senior-level software engineering project course at Carnegie Mellon University, in which we conducted several studies and experiments from 1991 to 1996 with more than 400 participants. Such a test-bed is an ideal environment for empirical software engineering, providing sufficient realism while allowing for controlled observation of important project parameters. We describe three proof-of-concept experiments to illustrate the value of communication metrics in software development projects. Finally, we propose a statistical framework based on structural equations for validating these communication metrics.

4.
Context: Formal methods, and particularly formal verification, are becoming more feasible to use in the engineering of large, highly dependable software-based systems, but so far have had little rigorous empirical study. Their artefacts and activities differ from those of conventional software engineering, and the nature and drivers of productivity for formal methods are not yet understood.
Objective: To develop a research agenda for the empirical study of productivity in software projects using formal methods, and in particular formal verification. To this end we aim to identify research questions about productivity in formal methods, survey the existing literature on these questions to establish their face validity, and identify metrics and data sources relevant to them.
Method: We define a space of GQM goals as an investigative framework, focusing on productivity from the perspective of managers of projects using formal methods. We then derive questions for these goals using Easterbrook et al.'s (2008) taxonomy of research questions. To establish face validity, we document the literature to date that reflects on these questions and then explore possible metrics related to them. Extensive use is made of literature concerning the L4.verified project completed within NICTA, as it is one of the few projects to achieve code-level formal verification for a large-scale, industrially deployed software system.
Results: We identify more than thirty research questions on the topic in need of investigation. These questions arise not just from the new type of project context, but also from the different artefacts and activities in formal methods projects. Prior literature supports the need for research on the questions in our catalogue, but as yet provides little evidence about them. We identify the metrics that would be needed to investigate the questions. Although at the highest level concepts such as size, effort, and rework are common to all software projects, in formal methods projects measurement of these concepts at the micro level will exhibit significant differences.
Conclusions: Empirical software engineering for formal methods is a large open research field. For the empirical software engineering community, our paper provides a view into the entities and research questions in this domain. For the formal methods community, we identify some of the benefits that empirical studies could bring to the effective management of large formal methods projects, and list some basic metrics and data sources that could support such studies. Understanding productivity is important in its own right for efficient software engineering practice, but it can also support future research on the cost-effectiveness of formal methods and on the emerging field of Proof Engineering.

5.
Developing and selecting high-quality software applications is fundamental. It is important that software applications can be evaluated for every relevant quality characteristic using validated metrics. Software engineers have put forward hundreds of quality metrics for software programs while largely disregarding databases. However, the data aspects of software are important, because the size of the data and their systemic nature contribute to many aspects of a system's quality. In this paper, we propose internal metrics that measure those properties of a relational database that influence its complexity. Considering the main characteristics of a relational table, we propose the number of attributes (NA) of a table, the depth of the referential tree (DRT) of a table, and the referential degree (RD) of a table. These measures are characterized using measurement theory, in particular the formal framework proposed by Zuse. As many important issues faced by the software engineering community can only be addressed by experimentation, an experiment has been carried out to validate these metrics.
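As an illustration of the three table metrics named in the abstract above, the sketch below computes NA, RD, and DRT over a toy schema. The dictionary representation of the schema, the cycle guard, and the counting conventions are assumptions made here for illustration, not the authors' formal definitions.

```python
# Hypothetical schema: table name -> attributes and referenced tables (FKs).
schema = {
    "customer":   {"attributes": ["id", "name", "city"], "fks": []},
    "orders":     {"attributes": ["id", "customer_id", "date"], "fks": ["customer"]},
    "order_line": {"attributes": ["order_id", "product", "qty"], "fks": ["orders"]},
}

def na(table):
    """Number of attributes (NA) of a table."""
    return len(schema[table]["attributes"])

def rd(table):
    """Referential degree (RD): number of foreign keys in the table."""
    return len(schema[table]["fks"])

def drt(table, seen=frozenset()):
    """Depth of the referential tree (DRT): longest chain of FK references."""
    if table in seen:  # guard against reference cycles
        return 0
    refs = schema[table]["fks"]
    if not refs:
        return 0
    return 1 + max(drt(r, seen | {table}) for r in refs)

print(na("orders"), rd("orders"), drt("order_line"))  # 3 1 2
```

With real databases these values would be read from the catalog tables rather than a hand-written dictionary.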

6.
Software organizations face challenges in managing and sustaining their measurement programs over time. The complexity of a measurement program increases with an exploding number of goals and metrics to collect, while organizations usually have a limited budget and limited resources for metrics collection. It has long been recognized that goals need to be prioritized and that those priorities ought to drive the selection of metrics. At the same time, the dynamic nature of organizations requires measurement programs to adapt to changes in stakeholders, their goals, information needs, and priorities. It is therefore crucial for organizations to use structured approaches that provide transparency, traceability, and guidance in choosing an optimal set of metrics that addresses the highest-priority information needs under limited resources. This paper proposes a decision support framework for metrics selection (DSFMS) built upon the widely used Goal Question Metric (GQM) approach. The core of the framework comprises an iterative goal-based metrics selection process that incorporates decision-making mechanisms, a pre-defined Attributes/Metrics Repository, and a Traceability Model among GQM elements. We also discuss alternative prioritization and optimization techniques that let organizations tailor the framework to their needs. The GQM-DSFMS framework was evaluated through a case study in a CMMI Level 3 software company.
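The "optimal set of metrics under limited resources" idea can be pictured as a small selection problem. The sketch below is not the paper's DSFMS process: the candidate metrics, their collection costs, and their priority values are invented, and exhaustive search stands in for whichever prioritization or optimization technique an organization adopts.

```python
from itertools import combinations

# Hypothetical candidate metrics: (name, collection cost, priority value).
candidates = [
    ("defect_density", 3, 9),
    ("velocity", 1, 4),
    ("code_churn", 2, 5),
    ("mttr", 4, 7),
]

def select_metrics(candidates, budget):
    """Pick the subset with the highest total priority value whose total
    collection cost fits the budget (exhaustive search is fine for the
    small candidate sets typical of a GQM plan)."""
    best, best_value = (), 0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            cost = sum(c for _, c, _ in subset)
            value = sum(v for _, _, v in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [name for name, _, _ in best], best_value

print(select_metrics(candidates, budget=6))
```

For larger catalogues the same formulation is a 0/1 knapsack problem and would be solved with dynamic programming instead of enumeration.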

7.
Cultivation and engineering of a software metrics program
This paper reports on a case study of an organization that implements a software metrics program to measure the effects of its improvement efforts. The program measures key indicators of all completed projects and summarizes progress information in a quarterly management report. The implementation turns out to be long and complex, as the organization is confronted with dilemmas based on contradictory demands and value conflicts. The process is interpreted as a combination of a rational engineering process in which a metrics program is constructed and put into use, and an evolutionary cultivation process in which basic values of the software organization are confronted and transformed. The analysis exemplifies the difficulties and challenges that software organizations face when bringing known principles for software metrics programs into practical use. The article discusses the insights gained from the case in six lessons that may be used by Software Process Improvement managers in implementing a successful metrics program.

8.
Jones  C. 《Computer》1994,27(9):98-100
The software industry is an embarrassment when it comes to measurement and metrics. Many software managers and practitioners, including tenured academics in software engineering and computer science, seem to know little or nothing about these topics. Many of the measurements found in the software literature are not used with enough precision to replicate the author's findings, a canon of scientific writing in other fields. Several of the most widely used software metrics have been proved unworkable, yet they continue to show up in books, encyclopedias, and refereed journals. So long as these invalid metrics are used carelessly, there can be no true software engineering, only a kind of amateurish craft that uses rough approximations instead of precise measurement. The paper considers three significant and widely used software metrics that are invalid under various conditions: lines-of-code (LOC) metrics, software science (Halstead) metrics, and the cost-per-defect metric. Fortunately, two metrics that actually generate useful information, complexity metrics and function-point metrics, are growing in use and importance.
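One well-known argument against cost-per-defect, which Jones has made, is purely arithmetic: because writing and running test cases carries fixed costs, cost per defect rises as quality improves, making better products look worse. A sketch with invented figures:

```python
fixed_test_cost = 10000.0  # writing and running test cases: paid regardless
cost_per_fix = 200.0       # variable cost of repairing one defect

for defects_found in (100, 10):
    total = fixed_test_cost + cost_per_fix * defects_found
    print(f"{defects_found} defects -> {total / defects_found:.0f} per defect")
```

The release with ten times fewer defects shows a four-times-higher cost per defect, which is exactly the distortion that makes the metric invalid as a quality indicator.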

9.
Driven by the urgent need to thoroughly identify and accentuate the merits of agent technology, we present MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection, and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.

10.
Managing software engineering projects requires an ability to comprehend and balance the technological, economic, and social bases through which large software systems are developed. It requires people who can formulate strategies for developing systems in the presence of ill-defined requirements, new computing technologies, and recurring dilemmas with existing computing arrangements. This necessarily assumes skill in acquiring adequate computing resources, controlling projects, coordinating development schedules, and employing and directing competent staff. It also requires people who can organize the process for developing and evolving software products with locally available resources. Managing software engineering projects is as much a job of social interaction as it is one of technical direction. This paper examines the social arrangements that a software manager must deal with in developing and using new computing systems, evaluating the appropriateness of software engineering tools or techniques, directing the evolution of a system through its life cycle, organizing and staffing software engineering projects, and assessing the distributed costs and benefits of local software engineering practices. The purpose is to underscore the role of social analysis of software engineering practices as a cornerstone in understanding what it takes to productively manage software projects.

11.
Research on applying PSP in support of RUP
江瑜 《计算机工程与设计》2005,26(9):2543-2545,2564
RUP (Rational Unified Process) is a software engineering process developed and marketed by Rational Software; it provides a method for rigorously assigning tasks and responsibilities within a development organization. PSP (Personal Software Process) is a guiding framework for an individual software engineer's process improvement, providing metrics, operational steps, and templates that help engineers improve their personal software engineering skills. After outlining the principles of PSP and RUP, this paper explores using PSP to support RUP: by improving the personal software process, the effect of the organization's overall process improvement is enhanced, with the goal of improving software product quality and raising software development efficiency.

12.
Driven by market requirements, software services organizations have adopted various software engineering process models (such as the capability maturity model (CMM), capability maturity model integration (CMMI), and ISO 9001:2000) and practice the project management concepts defined in the project management body of knowledge. While this has definitely helped organizations bring some method into the madness of software development, there is a persistent demand for comparing various groups within the organization in terms of their practice of these defined process models. Even though many metrics exist for comparison, given the variety of projects in terms of technology, life cycle, and so on, finding a single metric that caters to this is difficult. This paper proposes a model for arriving at a rating of group maturity within the organization. Considering the linguistic, imprecise, and uncertain nature of software measurements, a fuzzy logic approach is used for the proposed model. Unhindered by barriers such as differences in technology or life cycle, the proposed model helps the organization compare its groups with reasonable precision.
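A fuzzy-logic rating of the kind described above can be sketched minimally with triangular membership functions that map a crisp process-compliance score to linguistic maturity ratings. The membership shapes, the 0-10 score scale, and the rating labels below are assumptions made for illustration, not the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rate_group(score):
    """Map a crisp process-compliance score (0-10) to fuzzy memberships
    in three linguistic maturity ratings."""
    return {
        "low":    tri(score, -1, 0, 5),
        "medium": tri(score, 2, 5, 8),
        "high":   tri(score, 5, 10, 11),
    }

memberships = rate_group(6.5)
print(max(memberships, key=memberships.get))  # medium
```

A full model would fuzzify several input attributes, apply a rule base, and defuzzify, but the essential move, from imprecise measurements to a linguistic rating, is the one shown here.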

13.
Analyzing software measurement data with clustering techniques
For software quality estimation, software development practitioners typically construct quality-classification or fault prediction models using software metrics and fault data from a previous system release or a similar software project. Engineers then use these models to predict the fault proneness of software modules in development. Software quality estimation using supervised-learning approaches is difficult without software fault measurement data from similar projects or earlier system releases. Cluster analysis with expert input is a viable unsupervised-learning solution for predicting software modules' fault proneness and potential noisy modules. Data analysts and software engineering experts can collaborate more closely to construct and collect more informative software metrics.
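A minimal version of the unsupervised approach described above: cluster modules by their metric vectors with plain k-means, then let an expert label the resulting clusters (e.g. likely fault-prone vs. not). The module measurements below are invented, and generic k-means stands in for whatever clustering technique the analysts choose.

```python
import random

# Hypothetical module measurements: (lines of code, cyclomatic complexity).
modules = [(120, 4), (150, 5), (900, 38), (110, 3), (870, 35), (140, 6)]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over metric vectors; an expert then inspects and
    labels each cluster rather than relying on historical fault data."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

for cluster in kmeans(modules, k=2):
    print(sorted(cluster))
```

On these well-separated toy points the two clusters recover the small, simple modules and the large, complex ones, which is the separation an expert would then label.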

14.
A metrics suite for object oriented design
Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement.
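Two of the suite's six design metrics, depth of inheritance tree (DIT) and number of children (NOC), are easy to sketch for Python classes via introspection. The toy hierarchy is invented, and the definitions follow the common reading of the metrics rather than the paper's formal ontological treatment.

```python
# Toy class hierarchy (invented for illustration).
class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Rectangle(Polygon): pass

def dit(cls):
    """Depth of inheritance tree: longest path from the class to the root."""
    parents = [b for b in cls.__bases__ if b is not object]
    return 0 if not parents else 1 + max(dit(p) for p in parents)

def noc(cls):
    """Number of children: count of immediate subclasses."""
    return len(cls.__subclasses__())

print(dit(Triangle), noc(Polygon))  # 2 2
```

The remaining metrics in the suite (WMC, CBO, RFC, LCOM) need method-level and coupling information, which is why the paper pairs the metric definitions with an automated collection tool.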

15.
Evolving software programs requires that software developers reason quantitatively about the modularity impact of several concerns, which are often scattered over the system. In this respect, concern-oriented software analysis is rising to a dominant position in software development, and measurement techniques play a fundamental role in assessing the concern modularity of a software system. Unfortunately, existing measurements are still fundamentally module-oriented rather than concern-oriented. Moreover, the few available concern-oriented metrics are defined in non-systematic, non-shared ways and mainly focus on static properties of a concern, even though many properties can only be accurately quantified at run-time. Hence, novel concern-oriented measurements and, in particular, shared and systematic ways to define them are still welcome. This paper lays the foundation for a unified framework for concern-driven measurement. The framework provides a basic terminology and criteria for defining novel concern metrics. To evaluate the framework's feasibility and effectiveness, we show how it can be used to adapt some classic metrics to quantify concerns and, in particular, to instantiate new dynamic concern metrics from their static counterparts.

16.
Mining software repositories using analytics-driven dashboards provides a unifying mechanism for understanding, evaluating, and predicting the development, management, and economics of large-scale systems and processes. Dashboards enable measurement and interactive graphical displays of complex information and support flexible analytic capabilities for user customizability and extensibility. Dashboards commonly include system requirements and design metrics because they provide leading indicators for project size, growth, and volatility. This article focuses on dashboards that have been used on actual large-scale software projects as well as example empirical relationships revealed by the dashboards. The empirical results focus on leading indicators for requirements and designs of large-scale software systems based on insights from two sets of software projects containing 14 systems and 23 systems.

17.
The Method for Method Configuration (MMC) has been proposed as a method engineering approach to tailoring information systems development methods. This meta-method has been used on a variety of methods, but none of these studies have focused on the ability to manage method tailoring with the intention to promote specific values and goals, such as agile ones. This paper explores how MMC has been used during three software development projects to manage method tailoring with the intention to promote agile goals and values. Through content examples of method configurations we have shown that it is possible to use MMC and its conceptual framework on eXtreme Programming and we report on lessons learned with regard to maintaining coherency with the overall goals of the original method.

18.
Measurement of software development productivity is needed in order to control software costs, but it is discouragingly labor-intensive and expensive. Computer-aided software engineering (CASE) technologies, especially repository-based, integrated CASE, have the potential to support the automation of this measurement. We discuss the conceptual basis for the development of automated analyzers for function point and software reuse measurement for object-based CASE. Both analyzers take advantage of a representation of the application system that is stored within an object repository and that contains the necessary information about the application system. We also discuss metrics for software reuse measurement, including reuse leverage, reuse value, and reuse classification, motivated by managerial requirements and by the efforts, within industry and the IEEE, to standardize measurement. The functionality and analytical capabilities of state-of-the-art automated software metrics analyzers are illustrated in the context of an investment banking application similar to systems deployed at the New York City-based investment bank where these tools were developed and tested.
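Reuse measures of the kind named above admit simple formulations. The sketch below uses one common reading (reuse as a percentage of delivered size, plus cost avoided relative to all-new development) with invented size and cost figures; these are not necessarily the authors' exact definitions of reuse leverage and reuse value.

```python
# Hypothetical size figures for an application assembled from a repository.
reused_size = 3200   # size (e.g. function points) delivered from reused objects
custom_size = 800    # size newly written for this application
total_size = reused_size + custom_size
cost_new, cost_reuse = 1.00, 0.20  # assumed relative cost per unit of size

reuse_percent = 100 * reused_size / total_size
# Cost avoided relative to building everything from scratch.
cost_with_reuse = custom_size * cost_new + reused_size * cost_reuse
savings = 1 - cost_with_reuse / (total_size * cost_new)

print(f"reuse = {reuse_percent:.0f}%, cost avoided = {savings:.0%}")
```

The point of the repository-based analyzers in the abstract is that both size figures can be computed automatically from the stored representation instead of being counted by hand.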

19.
The paper shows how software metrics can be used to plan and control software projects. Software metrics will be essential if the software industry is to continue growing and developing complex systems. The only way to increase knowledge of the software development and maintenance processes and the final product is to measure them and use the measurements in models for estimating their future behaviour. The emphasis of this paper is on complexity metrics and reliability models, and especially on their use for fault content estimation and control of the development and maintenance processes. Empirical results and guidelines of how to use complexity metrics and reliability models are presented.
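Fault content estimation from a reliability model can be sketched with the Goel-Okumoto NHPP model, a classic reliability-growth model chosen here purely for illustration (the abstract does not prescribe a specific model). The parameter values below are invented, not fitted to real failure data.

```python
import math

# Goel-Okumoto NHPP model: expected faults found by test time t is
# m(t) = a * (1 - exp(-b * t)), where a is the total fault content
# and b the per-fault detection rate (both normally fitted to data).
a, b = 120.0, 0.05

def found_by(t):
    """Expected cumulative number of faults detected by time t."""
    return a * (1 - math.exp(-b * t))

t = 40.0                     # test time observed so far (e.g. weeks)
remaining = a - found_by(t)  # estimated residual fault content
print(f"found by t={t:g}: {found_by(t):.1f} faults, remaining: {remaining:.1f}")
```

The residual fault estimate is exactly the "fault content estimation" use the abstract mentions: it tells a manager how much latent defect work remains before release.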

20.
李旭  刘宗田  强宇 《计算机工程》2006,32(19):71-73
The TSP (Team Software Process) development process emphasizes letting the data speak, demanding a degree of precision that most software companies find hard to reach, so a strategy of "moderate measurement" should be followed. Analyzing process data can not only reduce the measurement workload but also provide references and suggestions for subsequent development and process improvement. This paper applies Formal Concept Analysis (FCA) to the TSP measurement model and mines valuable information through association rules based on the concept lattice. The effectiveness and practicality of the method are verified on an experimental project.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号