Similar Documents
20 similar documents found (search time: 0 ms)
1.
Inductive machine learning has become an important approach to automated knowledge acquisition from databases. The disjunctive normal form (DNF), as the common analytic representation of decision trees and decision tables (rules), provides a basis for formal analysis of uncertainty and complexity in inductive learning. A theory for general decision trees is developed based on C. Shannon's (1949) expansion of the discrete DNF, and a probabilistic induction system, PIK, is further developed for extracting knowledge from real-world data. We then combine formal and practical approaches to study how data characteristics affect the uncertainty and complexity of inductive learning. Three important data characteristics, namely disjunctiveness, noise, and incompleteness, are studied. The combination of leveled pruning, leveled condensing, and resampling estimation turns out to be a very powerful method for dealing with highly disjunctive and inadequate data. Finally, the PIK system is compared with other recent inductive learning systems on a number of real-world domains.
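Shannon's expansion, on which the decision-tree theory above rests, splits a Boolean function f on a variable x into f = x·f|x=1 ∨ ¬x·f|x=0; applying it recursively yields a decision tree. A minimal sketch of one expansion step for a DNF (the representation and names are illustrative, not from the PIK system):

```python
# Sketch of Shannon's expansion for a DNF: f = x·f|x=1 ∨ ¬x·f|x=0.
# A DNF is modelled as a list of terms, each term a dict {variable: required bool}.

def cofactor(dnf, var, value):
    """Restrict a DNF by fixing var = value."""
    restricted = []
    for term in dnf:
        if var in term:
            if term[var] == value:      # literal satisfied: drop it from the term
                restricted.append({v: b for v, b in term.items() if v != var})
            # literal contradicted: the whole term vanishes
        else:
            restricted.append(dict(term))  # term does not mention var
    return restricted

def shannon_expand(dnf, var):
    """Return the two cofactors (f|var=1, f|var=0), i.e. the branches of a
    decision-tree node that tests var."""
    return cofactor(dnf, var, True), cofactor(dnf, var, False)

# f = (a AND b) OR (NOT a AND c), expanded on a:
f = [{"a": True, "b": True}, {"a": False, "c": True}]
pos, neg = shannon_expand(f, "a")   # pos: [{"b": True}], neg: [{"c": True}]
```

Recursing on the cofactors until each is constant yields a decision tree; this is the link between DNF representations and tree induction that the theory above formalizes.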

2.
This paper describes knowledge acquisition strategies developed in the course of handcrafting a diagnostic system and reports on their consequent implementation in MORE, an automated knowledge acquisition system. We describe MORE in some detail, focusing on its representation of domain knowledge, rule generation capabilities, and interviewing techniques. MORE's approach is shown to embody methods which may prove fruitful to the development of knowledge acquisition systems in other domains.

3.
4.
5.
6.
A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a straightforward task. This paper presents a statistical method for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. This paper describes the systematic development of a Navigation Sensor Management (NSM) Expert System from Kalman Filter covariance data. The development method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. ANOVA results show that statistically different position accuracies are obtained when different navigation aids are used, the number of navigation aids is changed, the trajectory is varied, or the performance history is altered. Because these four factors significantly affect the decision metric, an appropriate parameter framework was designed, and a simulation example base was created. The example base contained over 900 training examples from nearly 300 simulations. The ID3 algorithm was then applied to the example base, yielding classification "rules" in the form of decision trees. The NSM expert system consists of seventeen decision trees that predict the performance of a specified integrated navigation sensor configuration. The performance of these decision trees was assessed on two arbitrary trajectories, and the performance results are presented using a predictive metric. The test trajectories used to evaluate the system's performance show that the NSM Expert adapts to new situations and provides reasonable estimates of sensor configuration performance.
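The ID3 step described above, inducing decision trees from a table of examples, can be sketched generically. This is a minimal textbook ID3 (entropy-based information gain over nominal attributes), not the NSM system's implementation; the toy data and all names are illustrative:

```python
import math
from collections import Counter

# Minimal ID3 sketch: examples are dicts of attribute -> value, with a
# parallel list of class labels. The tree greedily splits on the attribute
# with the highest information gain.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(examples, labels, attr):
    n = len(examples)
    remainder = 0.0
    for v in {e[attr] for e in examples}:
        subset = [l for e, l in zip(examples, labels) if e[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def id3(examples, labels, attrs):
    if len(set(labels)) == 1:
        return labels[0]                               # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]    # majority leaf
    best = max(attrs, key=lambda a: info_gain(examples, labels, a))
    branches = {}
    for v in {e[best] for e in examples}:
        sub = [(e, l) for e, l in zip(examples, labels) if e[best] == v]
        branches[v] = id3([e for e, _ in sub], [l for _, l in sub],
                          [a for a in attrs if a != best])
    return (best, branches)                            # internal node

def classify(tree, example):
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[example[attr]]
    return tree

# Toy example: the class depends only on attribute "x".
tree = id3([{"x": 1, "y": 0}, {"x": 1, "y": 1},
            {"x": 0, "y": 0}, {"x": 0, "y": 1}],
           ["a", "a", "b", "b"], ["x", "y"])
```

Applied per decision metric, such induction yields one tree per prediction target, analogous to the seventeen trees of the NSM system.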

7.
Very large knowledge bases constitute an important step for artificial intelligence and will have significant effects on the field of natural language processing. This paper describes LUKE, a tool that allows a knowledge base builder to create an English language interface by associating words and phrases with knowledge base entities. The philosophy behind LUKE is that knowledge about language is built up at the same time as knowledge about the world. LUKE assumes no linguistic expertise on the part of the user; that expertise is built directly into the tool itself. LUKE draws its power from a large set of heuristics about how words are typically used to describe the world. This research was supported in part by the National Science Foundation under contract IRI-8858085.

8.
This article is an account of the evolution of the French-speaking research community on knowledge acquisition and knowledge modelling echoing the complex and cross-disciplinary trajectory of the field. In particular, it reports the most significant steps in the parallel evolution of the web and the knowledge acquisition paradigm, which finally converged with the project of a semantic web. As a consequence of the huge amount of available data in the web, a paradigm shift occurred in the domain, from knowledge-intensive problem solving to large-scale data acquisition and management. We also pay a tribute to Rose Dieng, one of the pioneers of this research community.

9.
10.
Attribute reduction and rule generation (attribute value reduction) are the two main processes of knowledge acquisition. A self-optimizing approach for these processes, based on a difference comparison table, is proposed. In the attribute reduction process, the conventional logic computation is transferred to a matrix computation, combined with ideas from evolutionary computation, to construct a self-adaptive optimizing algorithm; some sub-algorithms and proofs are presented in detail. In the rule generation process, the orderly attribute value reduction algorithm (OAVRA), which simplifies the complexity of rule knowledge, is presented. Experimental results support the approach as an effective and efficient method for knowledge acquisition.
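The difference comparison table driving the reduction step can be illustrated with the standard rough-set discernibility-matrix formulation: for each pair of objects with different decisions, record which condition attributes distinguish them, then find a minimal attribute subset that hits every entry. This is a generic textbook sketch, not the paper's self-optimizing matrix algorithm; the table and names are illustrative:

```python
from itertools import combinations

# Decision table: each row is (condition-attribute dict, decision value).

def discernibility_matrix(rows, attrs):
    """For every pair of rows with different decisions, record the set of
    condition attributes on which the pair differs."""
    entries = []
    for (c1, d1), (c2, d2) in combinations(rows, 2):
        if d1 != d2:
            diff = {a for a in attrs if c1[a] != c2[a]}
            if diff:
                entries.append(diff)
    return entries

def is_reduct(subset, entries):
    """A subset preserves discernibility iff it intersects every entry."""
    return all(subset & e for e in entries)

def minimal_reduct(attrs, entries):
    """Exhaustive search for a smallest hitting set (fine for small tables)."""
    for k in range(1, len(attrs) + 1):
        for subset in combinations(sorted(attrs), k):
            if is_reduct(set(subset), entries):
                return set(subset)
    return set(attrs)

rows = [({"a": 0, "b": 0, "c": 1}, 0),
        ({"a": 1, "b": 0, "c": 1}, 1),
        ({"a": 0, "b": 1, "c": 0}, 1)]
entries = discernibility_matrix(rows, ["a", "b", "c"])
reduct = minimal_reduct(["a", "b", "c"], entries)   # {"a", "b"}
```

The exhaustive search is exponential, which is why approaches like the paper's use matrix computation and evolutionary heuristics instead of brute force.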

11.
In this research, we present the concept of Hyperaudio as a non-linear presentation of auditory information, together with theoretical assumptions about how Hyperaudio differs from existing non-linear information media. We present a study comparing textual and auditory information presented either linearly or non-linearly, and the interaction of these presentation formats with different underlying text types. Learners studied two different text types either as written text on a computer screen, presented in a linear or non-linear manner, or as the same information in audio files, likewise presented linearly or non-linearly. Results show overall advantages of linear over non-linear information presentation, and of written over auditory text, on learning performance assessed with an essay task and a multiple-choice test. Interaction effects indicate that non-linearity increases cognitive load (assessed with a self-report measure) for auditory instruction compared with linear presentation, while cognitive load in processing written text is not affected by linearity. Further, the text type (expository vs. linear) interacts with presentation format: expository text leads to comparable learning outcomes in linear and non-linear formats, while presenting the linear text type as hypertext or Hyperaudio is rather unbeneficial.

12.
Human experts tend to introduce intermediate terms in giving their explanations. The expert's explanation of such terms is operational for the context that triggered the explanation; however, term definitions often remain incomplete. Further, the expert's (re)use of these terms is hierarchical (similar to natural language). In this paper, we argue that a hierarchical incremental knowledge acquisition (KA) process that captures the expert's terms and operationalizes them while incompletely defined makes the KA task more effective. Towards this we present our knowledge representation formalism Nested Ripple Down Rules (NRDR), a substantial extension of the (Multiple Classification) Ripple Down Rules (RDR) KA framework. The incremental KA process with NRDR as the underlying knowledge representation has confirmation holistic features. This allows simultaneous incremental modelling and KA and eases the knowledge base (KB) development process. Our NRDR formalism preserves the strength of incremental refinement methods, that is, the ease of maintenance of the KB. It also addresses some of their shortcomings: repetition, lack of explicit modelling, and readability. KBs developed with NRDR describe an explicit model of the domain, which greatly enhances the reusability of the acquired knowledge. This paper also presents a theoretical framework for analysing the structure of RDR in general and NRDR in particular. Using this framework, we analyse the conditions under which RDR converges towards the target KB, and we discuss the maintenance problems of NRDR as a function of this convergence. Further, we analyse the conditions under which NRDR offers an effective approach for domain modelling. We show that maintaining NRDR requires effort similar to maintaining RDR for most of the KB development cycle, and that when an NRDR KB shows increased maintenance requirements compared with RDR during its development, the added requirement can be handled automatically using stored past seen cases.
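A single-classification Ripple Down Rule tree, the base formalism that NRDR extends, can be sketched as follows. The structure, conditions, and conclusions are hypothetical illustrations; NRDR's nesting of expert-defined terms into hierarchies is not reproduced here:

```python
# Minimal single-classification Ripple Down Rules sketch.

class Rule:
    def __init__(self, cond, conclusion):
        self.cond = cond            # predicate over a case
        self.conclusion = conclusion
        self.except_ = None        # refinement tried when this rule fires
        self.else_ = None          # alternative tried when it does not

def infer(rule, case, last=None):
    """Walk the RDR tree; the conclusion of the last rule that fired wins."""
    if rule is None:
        return last
    if rule.cond(case):
        return infer(rule.except_, case, rule.conclusion)
    return infer(rule.else_, case, last)

# Incremental maintenance: a misclassified case is fixed locally by adding
# an exception under the rule that fired, never by editing existing rules.
root = Rule(lambda c: True, "default")
root.except_ = Rule(lambda c: c["temp"] > 38, "fever")
root.except_.except_ = Rule(lambda c: c["temp"] > 41, "hyperthermia")
```

Because every fix is a local exception, existing behaviour on past cases is preserved, which is the low-maintenance property the analysis above argues NRDR retains for most of the KB development cycle.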

13.
14.
An architecture for knowledge acquisition systems is proposed based upon the integration of existing methodologies, techniques and tools which have been developed within the knowledge acquisition, machine learning, expert systems, hypermedia and knowledge representation research communities. Existing tools are analyzed within a common framework to show that their integration can be achieved in a natural and principled fashion. A system design is synthesized from what already exists, putting a diversity of well-founded and widely used approaches to knowledge acquisition within an integrative framework. The design is intended to be clean and simple, easy to understand, and easy to implement. A detailed architecture for integrated knowledge acquisition systems is proposed that also derives from parallel cognitive and theoretical studies.

15.
While knowledge-based systems are used extensively to assist in making decisions, a critical factor affecting their performance and reliability is the quantity and quality of their knowledge bases. Knowledge acquisition requires an in-depth comprehension both of knowledge modeling and of the applicable domain. Many knowledge acquisition tools have been developed to support knowledge base development. However, a weakness revealed in these tools is a domain-dependent and complex acquisition process: domain dependence limits the applicable areas, and a complex acquisition process makes a tool difficult to use. In this paper, we present a goal-driven knowledge acquisition tool (GDKAT) that helps elicit and store experts' declarative and procedural knowledge in knowledge bases for a user-defined domain. The tool is implemented using object-oriented design methodology in a C++ environment under Windows. An example demonstrating the GDKAT is also delineated. While the application domain for the example presented is reflow soldering in surface-mount printed circuit board assembly, the GDKAT can be used to develop knowledge bases for other domains as well.

16.
This paper presents an approach for the design and validation of a prototype knowledge acquisition tool in the domain of business planning. Results from previous work on problem-solving in the business domain indicate that there are wide differences both in the ways problems are represented and in the ways solution strategies are selected. These differences can have a significant effect on the suitability of knowledge acquisition techniques, and the tool has been designed to accommodate them. Problem decomposition and simplification techniques are employed by the tool to elicit the appropriate information for managerial decision making. The prototype tool has been validated in the field with 35 managers using ten test scenarios. The results of the validation process are presented, and implications for the design of such tools in the business domain are discussed.

17.
Knowledge acquisition has been identified as the bottleneck of knowledge engineering. One reason is the lack of an integrated methodology able to provide tools and guidelines for the elicitation of knowledge as well as the verification and validation of the system developed. Even though methods addressing this issue have been proposed, they only loosely relate knowledge acquisition to the remaining part of the software development life cycle. To alleviate this problem, we have developed a framework in which knowledge acquisition is integrated with system specifications to facilitate the verification, validation, and testing of the prototypes as well as the final implementation. To support the framework, we have developed a knowledge acquisition tool, TAME. It provides an integrated environment to acquire and generate specifications about the functionality and behavior of the target system, and the representation of the domain knowledge and domain heuristics. Together, the tool and the framework can thus enhance the verification, validation, and maintenance of expert systems through their life cycles. © 1994 John Wiley & Sons, Inc.

18.
Frame-based systems that employ inheritance networks as a form of knowledge representation have a number of inherent knowledge acquisition problems, one of the most significant being the transfer of knowledge itself to the representation system. The problem of concept classification, and specifically that of determining the location of a new concept in an existing network inheritance hierarchy, is discussed here using an experimental knowledge-base editor, KRE. Tools that support the process of knowledge base construction must allow the user to concentrate on the domain problems, not on low-level representation-system decisions. KRE, written in C, is a knowledge acquisition tool that assists the knowledge engineer by using an interactive acquisition strategy during the process of concept classification. The process of classification, and its advantages over other knowledge representation approaches, are presented.

19.
The paper presents results of a study on collecting machining strategies for machining assistants and process planning, conducted at the NMSU-Integrated Manufacturing Systems Laboratory (IMSL). The project aims to improve and advance the solicitation, documentation, and automation of machining knowledge/data acquisition, and its integration with CAD/CAM/CAE systems. This paper emphasizes the knowledge acquisition phase of the study, which utilizes artificial neural networks.

20.
By combining vague sets and rough sets in fuzzy data processing, we propose a vague-rough set approach for extracting knowledge under uncertain environments. We compute all attribute reductions using the vague-rough lower approximation distribution, concepts of attribute reduction, and the discernibility matrix in a vague decision information system (VDIS). Results on extracting decision rules from a VDIS show that the proposed approaches extend the corresponding method in classical rough set theory and provide a new avenue to knowledge acquisition from uncertain, vague data.
