Found 20 similar documents; search took 15 ms.
1.
2.
Mark A. Johnson 《Software Quality Journal》1995,4(1):15-31
This paper presents a case history of Mentor Graphics using a set of quality metrics to track development progress for a recent major software release. It provides background on how Mentor Graphics originally began using software metrics to measure product quality, how this became accepted, and how these metrics later fell out of favour. To restore these metrics to effective use, process changes were required for setting quality and metric targets, and for the way the metrics are used for tracking development progress. With these process changes in place, and the addition of a new metric, the case history demonstrates that the metric set could be used effectively to indicate problems in this release and help manage changes to the plan for completion of the release. The lessons learned in this case history are presented, along with subsequent data that further validates these metrics.
3.
The aim of this research is to improve the usability and acceptance of the quality index in practice. The quality index (QI) used to calculate acknowledged project effort is empirically evaluated in order to find interchangeable metric sets that can be used when calculating the QI. In the evaluation, ten metric sets of four metrics each were used to calculate ten different quality indexes, which were then evaluated on 25 projects. The results indicate that six of the metric sets are interchangeable, making the calculation of the QI easy.
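The abstract does not give the exact formula for the QI, so the following is only a minimal sketch, assuming the index is a weighted combination of four normalized metric values; the function name and weights are illustrative, not the paper's.

```python
def quality_index(metrics, weights=None):
    """Combine a set of four normalized metric values (each in 0..1) into a
    single quality index. The exact formula is not given in the abstract;
    a plain (optionally weighted) average is assumed here for illustration."""
    if weights is None:
        weights = [1.0] * len(metrics)
    return sum(m * w for m, w in zip(metrics, weights)) / sum(weights)

# Two "interchangeable" metric sets should yield nearly the same QI:
set_a = [0.8, 0.7, 0.9, 0.6]
set_b = [0.78, 0.72, 0.88, 0.62]
qi_a = quality_index(set_a)   # 0.75
qi_b = quality_index(set_b)
```

Under this reading, two metric sets are interchangeable when swapping one for the other leaves the computed QI essentially unchanged across projects.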
4.
J. H. Poore 《Software》1988,18(11):1017-1027
Software is a product in serious need of quality control technology. Major effort notwithstanding, software engineering has produced few metrics for aspects of software quality that have the potential of being universally applicable. The present paper suggests that, although universal metrics are elusive, metrics that are applicable and useful in a fully defined setting are readily available. A theory is presented that a well-defined software work group can articulate their operational concept of quality and derive useful metrics for that concept and their environment.
5.
Fernando Alonso 《Expert systems with applications》2012,39(8):7524-7535
Expert systems are built from knowledge traditionally elicited from the human expert. It is precisely knowledge elicitation from the expert that is the bottleneck in expert system construction. On the other hand, a data mining system, which automatically extracts knowledge, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation gathered from a medical diagnosis project called Intelligent Interpretation of Isokinetics Data, which we developed. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.
6.
Some theoretical considerations for a suite of metrics for the integration of software components
V. Lakshmi Narasimhan 《Information Sciences》2007,177(3):844-864
This paper defines two suites of metrics, which address static and dynamic aspects of component assembly. The static metrics measure complexity and criticality of component assembly, wherein complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance and Size criticalities, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilization of various components. In this paper both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means to measure issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
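As a rough sketch of the two static complexity metrics named above, one plausible reading is that packing density is an average of some constituent count per component, and interaction density is the fraction of available interactions actually used; the exact definitions are the paper's, and the numbers below are illustrative.

```python
def packing_density(n_constituents, n_components):
    """Component Packing Density (sketch): average number of a chosen
    constituent type (e.g. lines of code, operations) per component
    in the assembly."""
    return n_constituents / n_components

def interaction_density(actual_interactions, available_interactions):
    """Component Interaction Density (sketch): ratio of interactions
    actually used to interactions available among the components."""
    return actual_interactions / available_interactions

cpd = packing_density(500, 10)       # 50 constituents per component
cid = interaction_density(6, 24)     # a quarter of available interactions used
```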
7.
8.
Norman F. Schneidewind 《Software Quality Journal》1995,4(1):49-68
Software quality metrics have potential for helping to assure the quality of software on large projects such as the Space Shuttle flight software. It is feasible to validate metrics for controlling and predicting software quality during design by validating metrics against a quality factor. Quality factors, like reliability, are of more interest to customers than metrics, like complexity. However, quality factors cannot be collected until late in a project. Therefore the need arises to validate metrics, which developers can collect early in a project, against a quality factor. We investigate the feasibility of validating metrics for controlling and predicting quality on the Space Shuttle. The key to the approach is the use of validated metrics for early identification and resolution of quality problems.
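One common form of such validation, not necessarily the paper's exact criterion, is to check how well the ranking induced by an early metric agrees with the ranking induced by the later quality factor, for example via Spearman rank correlation. The data below is invented for illustration; the tie-free ranking is a simplification.

```python
def rank(xs):
    """Return 1-based ranks of xs (assumes no ties, for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

complexity = [10, 25, 3, 40, 18]   # metric collected early, at design time
faults     = [2, 5, 0, 9, 4]       # quality factor observed late (post-release)
rho = spearman(complexity, faults)
```

A metric whose rho against the quality factor is consistently high across projects is a candidate for early tracking in place of the factor itself.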
9.
Ontologies, which are formal representations of knowledge within a domain, can be used for designing and sharing conceptual models of enterprise information for the purpose of enhancing understanding, communication and interoperability. For representing a body of knowledge, different ontologies may be designed. Recently, designing ontologies in a modular manner has emerged as a way to achieve better reasoning performance, more efficient ontology management and change handling. One of the important challenges in the employment of ontologies and modular ontologies in modeling information within enterprises is the evaluation of the suitability of an ontology for a domain and of the performance of inference operations over it. In this paper, we present a set of semantic metrics for evaluating ontologies and modular ontologies. These metrics measure cohesion and coupling of ontologies, two important notions in the process of assessing ontologies for enterprise modeling. The proposed metrics are based on semantics-based definitions of relatedness and of dependencies between local symbols, and between local and external symbols, of ontologies. Based on these semantic definitions, not only the explicitly asserted knowledge in ontologies but also the implied knowledge, which is derived through inference, is considered for ontology assessment. We present several empirical case studies investigating the correlation between the proposed metrics and reasoning performance, an important issue for the applicability of ontologies in real-world information systems.
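The paper's metrics are semantics-based and include inferred knowledge; the sketch below captures only the simpler structural intuition behind cohesion and coupling, with invented counts, and is not the paper's definition.

```python
def cohesion(internal_relations, n_local_symbols):
    """Cohesion (structural sketch): share of related local-symbol pairs
    among all possible pairs of local symbols in a module."""
    possible = n_local_symbols * (n_local_symbols - 1) / 2
    return internal_relations / possible if possible else 0.0

def coupling(external_refs, total_refs):
    """Coupling (structural sketch): share of a module's references that
    point at symbols defined in other ontology modules."""
    return external_refs / total_refs if total_refs else 0.0

c1 = cohesion(6, 4)    # every pair of the 4 local symbols is related
c2 = coupling(3, 12)   # 3 of 12 references cross module boundaries
```

High cohesion with low coupling is the usual target when splitting an ontology into modules, mirroring the same heuristic in software design.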
10.
ROGET: A knowledge-based system for acquiring the conceptual structure of a diagnostic expert system
James S. Bennett 《Journal of Automated Reasoning》1985,1(1):49-74
This paper describes ROGET, a knowledge-based system that assists a domain expert with an important design task encountered during the early phases of expert-system construction. ROGET conducts a dialogue with the expert to acquire the expert system's conceptual structure, a representation of the kinds of domain-specific inferences that the consultant will perform and the facts that will support these inferences. ROGET guides this dialogue on the basis of a set of advice and evidence categories. These abstract categories are domain independent and can be employed to guide initial knowledge acquisition dialogues with experts for new applications. This paper discusses the nature of an expert system's conceptual structure and describes the organization and operation of the ROGET system that supports the acquisition of conceptual structures.
11.
The quality of group tacit knowledge
Organizational knowledge creation theory explains the process of making available and amplifying knowledge created by individuals as well as crystallizing and connecting it to an organization’s knowledge system. What individuals get to know in their (working) lives benefits their colleagues and, eventually, the wider organization. In this article, we briefly review central elements in organizational knowledge creation theory and show a research gap related to the quality of tacit knowledge in a group. We advance organizational knowledge creation theory by developing the concept of “quality of group tacit knowledge.” Based on this concept, we further develop a comprehensive model explaining different levels of tacit knowledge quality that a group can achieve. Finally, we discuss managerial implications resulting from our model and outline imperatives for future theory building and empirical research.
12.
《Expert systems with applications》2014,41(11):5466-5482
In a context characterized by a growing demand for networked services, users of advanced applications sometimes face network performance troubles that may actually prevent them from completing their tasks. Therefore, providing assistance for user communities that have difficulties using the network has been identified as one of the major issues of performance-related support activities. Despite the advances network management has made over the last years, there is a lack of guidance services that provide users with information going beyond merely presenting network properties. In this light, the research community has been highlighting the importance of User-Perceived Quality (UPQ) scores, such as Quality of Experience (QoE) and Mean Opinion Score (MOS), in the evaluation of network services for network applications. However, despite their potential to assist end-users in dealing with network performance troubles, only a few types of network applications have well-established UPQ scores. Moreover, these scores are defined through experiments conducted essentially in the laboratory rather than in actual usage. This paper thus presents a knowledge and Collaboration-based Network Users’ Support (CNUS) Case-Based Reasoning (CBR) Process that predicts UPQ scores to assist users, focusing on collaboration among them through the sharing of their experiences in using network applications. It builds (i) a knowledge base that includes not only information about network performance problems but also applications’ characteristics, (ii) a case base that contains users’ opinions, and (iii) a user database that stores users’ profiles. By processing them, CNUS benefits users by indicating the degree of satisfaction they may achieve, based on the general opinion of members of their communities in similar contexts.
In order to evaluate the suitability of CNUS, a CBR system was built and validated through an experimental study conducted in the laboratory with a multi-agent system that simulated scenarios in which users request assistance. The simulation was supported by an ontology of network services and applications and by a reputation scheme implemented through the PageRank algorithm. The results of the study pointed to the effectiveness of CNUS and its resilience to collusive and incoherent user behaviors. They also showed the influence of knowledge about network characteristics, users’ profiles and application features on computer-based support activities.
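The core CBR step, retrieving similar cases and reusing their opinions, can be sketched as below. This is a generic nearest-neighbour CBR sketch, not the CNUS implementation: the context features, similarity function and case layout are all assumptions for illustration.

```python
def similarity(ctx_a, ctx_b):
    """Inverse-distance similarity over shared numeric context features
    (feature names such as latency_ms are illustrative)."""
    keys = ctx_a.keys() & ctx_b.keys()
    dist = sum((ctx_a[k] - ctx_b[k]) ** 2 for k in keys) ** 0.5
    return 1.0 / (1.0 + dist)

def predict_upq(query_ctx, case_base, k=3):
    """Predict a User-Perceived Quality score as the similarity-weighted
    mean opinion of the k most similar stored cases."""
    nearest = sorted(case_base,
                     key=lambda c: similarity(query_ctx, c["ctx"]),
                     reverse=True)[:k]
    weights = [similarity(query_ctx, c["ctx"]) for c in nearest]
    return sum(w * c["upq"] for w, c in zip(weights, nearest)) / sum(weights)

cases = [
    {"ctx": {"latency_ms": 30, "loss_pct": 0.1}, "upq": 4.5},
    {"ctx": {"latency_ms": 120, "loss_pct": 1.0}, "upq": 2.5},
    {"ctx": {"latency_ms": 40, "loss_pct": 0.2}, "upq": 4.2},
]
score = predict_upq({"latency_ms": 35, "loss_pct": 0.15}, cases, k=2)
```

In CNUS terms, the case base would hold community members' opinions, and the predicted score indicates the satisfaction a user can expect in a similar network context.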
13.
E-learning systems provide a promising solution as an information exchanging channel. Improved technologies could mean faster and easier access to information but do not necessarily ensure the quality of this information; for this reason it is essential to develop valid and reliable methods of quality measurement and carry out careful information quality evaluations. This paper proposes an assessment model for information quality in e-learning systems based on a quality framework we proposed previously, which consists of 14 quality dimensions grouped into three quality factors: intrinsic, contextual representation and accessibility. The measurement scheme uses the relative importance of each dimension as a parameter in a linear equation. Previously, we applied a goal-question-metric approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. In this paper, the proposed metrics were computed to produce a numerical rating indicating the overall quality of the information published in a particular e-learning system. The data collection and evaluation processes were automated using a web data extraction technique, and results on a case study are discussed. This assessment model could be useful to e-learning system designers, providers and users, as it provides a comprehensive indication of the quality of information in such systems.
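A linear, importance-weighted scheme of the kind described can be sketched as follows; the dimension names, scores and weights below are invented stand-ins, since the abstract does not list the 14 dimensions or their weights.

```python
# Illustrative dimension scores (0..1) paired with relative-importance
# weights; the real framework has 14 dimensions in three factors.
dimensions = {
    "accuracy":      (0.9, 0.15),
    "completeness":  (0.7, 0.10),
    "accessibility": (0.8, 0.05),
}

def information_quality(dims):
    """Overall IQ as a linear combination of dimension scores, each
    weighted by its relative importance, normalized by total weight."""
    total_weight = sum(w for _, w in dims.values())
    return sum(score * w for score, w in dims.values()) / total_weight

iq = information_quality(dimensions)
```

Normalizing by the total weight keeps the rating on the same 0..1 scale as the individual dimension scores, even if not all dimensions are measured.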
14.
In the field of software architecture, a paradigm shift is occurring from describing the outcome of the architecting process to describing the Architectural Knowledge (AK) created and used during architecting. Many AK models have been defined to represent domain concepts and their relationships, and they can be used for sharing and reusing AK across organizations, especially in geographically distributed contexts. However, different AK domain models can represent different concepts, making effective AK sharing challenging. When multiple AK models coexist, predicting AK sharing quality from the concept differences across models is necessary to understand the mapping quality from one AK model to another. Previous work in this area lacks validation in the actual practice of AK sharing. In this paper, we carry out validation using four AK sharing case studies, and we improve on the previous prediction models by developing a new mapping quality prediction model that (i) improves the prediction accuracy of the recall rate of AK sharing quality and (ii) provides a better balance between prediction effort and accuracy for AK sharing quality.
15.
Solving problems in a complex application domain often requires a seamless integration of existing knowledge derivation systems which have been independently developed for solving subproblems using different inferencing schemes. This paper presents the design and implementation of an Integrated Knowledge Derivation System (IKDS) which allows the user to query against a global database containing data derivable by the rules and constraints of a number of cooperative heterogeneous systems. The global knowledge representation scheme, the global knowledge manipulation language and the global knowledge processing mechanism of IKDS are described in detail. For global knowledge representation, the dynamic aspects of knowledge such as derivational relationships and restrictive dependencies among data items are modeled by a Function Graph to uniformly represent the capabilities (or knowledge) of the rule-based systems, while the usual static aspects such as data items and their structural interrelationships are modeled by an object-oriented model. For knowledge manipulation, three types of high-level, exploratory queries are introduced to allow the user to query the global knowledge base. For deriving the best global answers for queries, the global knowledge processing mechanism allows the rules and constraints in different component systems to be indiscriminately exploited despite the incompatibilities in their inferencing mechanisms and interpretation schemes. Several key algorithms required for the knowledge processing mechanism are described in this paper. The main advantage of this integration approach is that rules and constraints can in effect be shared among heterogeneous rule-based systems, so that they can freely exchange their data and operate as parts of a single system. IKDS achieves the integration at the rule level instead of at the system level.
IKDS has been implemented in C, running on a network of heterogeneous component systems which contains three independently developed expert systems with different rule formats and inferencing mechanisms.
16.
Expert systems are a branch of artificial intelligence: computer systems that emulate the decision-making ability of a human expert. The knowledge base is a key component of such a system and its core. Taking an expert system for textile process design and management as an example, this paper introduces the concept and architecture of knowledge-based expert systems, explains how knowledge is acquired and stored in the system, and describes how that knowledge is represented.
17.
Knowledge systems development and use have been significantly encumbered by the difficulties of eliciting and formalizing the expertise upon which knowledge workers rely. This paper approaches the problem from an examination of the knowledge competencies of knowledge workers in order to define a universe of discourse for knowledge elicitation. It outlines two categories and several types of knowledge that could serve as the foundations for the development of a theory of expertise.
18.
Context
Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, the results on the effect of design patterns on software quality have been controversial.
Aims
This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.
Method
The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. The study is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations expressing the values of the various structural quality attributes as functions of these numbers of classes. The exploratory designs were produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
Results
The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality while weakening others.
Conclusions
The proposed methodology is applicable for comparing patterns and alternative designs, and it identifies thresholds beyond which a design pattern becomes more or less beneficial than the alternative design. The identification of such thresholds can be very useful for decision making during system design and refactoring.
19.
Packages are important high-level organizational units for large object-oriented systems. Package-level metrics characterize attributes of packages such as size, complexity, and coupling. There is a need for empirical evidence to support the collection of these metrics and their use as early indicators of important external software quality attributes. In this paper, three suites of package-level metrics (Martin, MOOD and CK) are evaluated and compared empirically in predicting the number of pre-release faults and the number of post-release faults in packages. Eclipse, one of the largest open source systems, is used as a case study. The results indicate that the prediction models based on the Martin suite are more accurate than those based on the MOOD and CK suites across releases of Eclipse.
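The Martin suite mentioned above is built on afferent/efferent package couplings; its best-known derived metrics, instability and distance from the main sequence, can be computed directly from those counts. The sketch below uses Martin's standard formulas with invented example counts; how the faults were regressed on these metrics is not shown here.

```python
def instability(ca, ce):
    """Martin's instability I = Ce / (Ca + Ce), where Ca counts afferent
    (incoming) couplings and Ce efferent (outgoing) couplings of a package.
    I near 1 means the package depends on others more than it is depended on."""
    return ce / (ca + ce) if (ca + ce) else 0.0

def main_sequence_distance(abstractness, inst):
    """Normalized distance from the 'main sequence', D = |A + I - 1|;
    values near 0 indicate a good balance of abstractness and stability."""
    return abs(abstractness + inst - 1)

i = instability(ca=2, ce=6)           # mostly outgoing dependencies
d = main_sequence_distance(0.25, i)   # abstractness A = 0.25 assumed
```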
20.
In this paper we develop an evaluation framework for Knowledge Management Systems (KMS). The framework builds on the theoretical foundations underlying organizational Knowledge Management (KM) to identify key KM activities and the KMS capabilities required to support each activity. These capabilities are then used to form a benchmark for evaluating KMS. Organizations selecting KMS can use the framework to identify gaps and overlaps in the extent to which the capabilities provided and utilized by their current KMS portfolio meet the KM needs of the organization. Other applications of the framework are also discussed.
Brent Furneaux