Similar Articles
 20 similar articles found (search time: 15 ms)
1.
2.
One of the important topics in knowledge base revision is the design of an efficient implementation algorithm. Algebraic approaches have attractive properties and well-understood implementation methods, which makes them a natural candidate for this problem. This paper presents an algebraic approach to revising propositional rule-based knowledge bases. First, a method is introduced for transforming a propositional rule-based knowledge base into a Petri net: the knowledge base is represented by the net itself, and the facts by its initial marking. The consistency check of a knowledge base is thus equivalent to the reachability problem for Petri nets. Since the reachability of a Petri net can be decided by whether its state equation has a solution, the consistency check can also be implemented by an algebraic approach. Furthermore, algorithms are introduced to revise a propositional rule-based knowledge base, as well as extended logic programs. Compared with related work, the algorithms presented in the paper are efficient, with polynomial time complexities.
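The algebraic check described above rests on the Petri net state equation M = M0 + C·x, where C is the incidence matrix and x a nonnegative integer firing-count vector; solvability of this equation is, in general, a necessary condition for reachability. The following is a minimal sketch, not the paper's algorithm: the brute-force search and its `max_firings` bound are assumptions for illustration on a toy net.

```python
from itertools import product

def state_equation_has_solution(C, M0, M, max_firings=5):
    """Check the Petri net state equation M = M0 + C @ x for a
    nonnegative integer firing-count vector x, by brute force.
    C is a places-by-transitions incidence matrix (list of lists)."""
    n_places, n_trans = len(C), len(C[0])
    for x in product(range(max_firings + 1), repeat=n_trans):
        if all(M0[p] + sum(C[p][t] * x[t] for t in range(n_trans)) == M[p]
               for p in range(n_places)):
            return True
    return False

# Toy net: two places, one transition moving a token from p0 to p1.
C = [[-1],   # p0 loses a token when t0 fires
     [ 1]]   # p1 gains a token
print(state_equation_has_solution(C, M0=[1, 0], M=[0, 1]))  # True
print(state_equation_has_solution(C, M0=[1, 0], M=[2, 0]))  # False
```

An unsolvable state equation proves the target marking unreachable, which is what makes the purely algebraic consistency test possible.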

3.
Extracting expertise from experts: Methods for knowledge acquisition   (Total citations: 3; self-citations: 0; citations by others: 3)
Abstract: Knowledge acquisition is the biggest bottleneck in the development of expert systems. Fortunately, the process of translating expert knowledge to a form suitable for expert system development can benefit from methods developed by cognitive science to reveal human knowledge structures. There are two classes of these investigative methods, direct and indirect. We provide reviews, criteria for use, and literature sources for all principal methods. Direct methods discussed are: interviews, questionnaires, observation of task performance, protocol analysis, interruption analysis, closed curves, and inferential flow analysis. Indirect methods include: multidimensional scaling, hierarchical clustering, general weighted networks, ordered trees, and repertory grid analysis.

4.
Abstract: In building a knowledge-based system, it is sometimes possible to save time by applying some machine learning process to a set of historical cases. In some problem domains, however, such cases may not be available. In addition, the classes, attributes and attribute values that comprise the partial domain model in terms of which cases are expressed may also not be available explicitly. In these circumstances, the repertory grid technique offers a single process for both building a partial domain model and generating a training set of examples. Alternatively, examples can be elicited directly. This paper explores the relationship between knowledge acquisition from examples and the repertory grid technique, and discusses the shared need for machine learning. Fragments of business-strategy knowledge are used to illustrate the discussion.

5.
A formal, foundational approach to autonomous knowledge acquisition is presented. In particular, "learning from examples" and "learning from being told" and the relation of these approaches to first-order representation systems are investigated. It is assumed initially that the only information available for acquisition is a stream of facts, or ground atomic formulae, describing a domain. On the basis of this information, hypotheses expressed in set-theoretic terms and concerning the application domain may be proposed. As further instances are received, the hypothesized relations may be modified or discarded, and new relations formed. The intent, though, is to characterize those hypotheses that may potentially be formed, rather than to specify the subset of the hypotheses that, for whatever reason, should be held.
Formal systems are derived by means of which the set of potential hypotheses is precisely specified, and a procedure is derived for restoring the consistency of a set of hypotheses after conflicting evidence is encountered. In addition, this work is extended to the case where a learning system may be "told" arbitrary sentences concerning a domain. This includes an investigation of the relation between acquiring knowledge and reasoning deductively. However, the interaction of these approaches leads to immediate difficulties, which likely require informal, pragmatic techniques for their resolution. The overall framework is intended both as a foundation for investigating autonomous approaches to learning and as a basis for the development of such autonomous systems.

6.
7.
An approach toward improving the accessibility of the knowledge and information structures of expert systems is described; it is based upon a foundation development environment called the Rule-Based Frame System (RBFS), which forms the kernel of a larger system, IDEAS. RBFS is a knowledge representation language, within which a distinction is drawn between information which represents the world or domain, and knowledge which states how to make conclusions based upon the domain. Information takes the form of frames, for system processing, but is presented to the user/developer as an associative network via a Visual Editor for the Generation of Associative Networks (VEGAN). Knowledge takes the form of production rules, which are connected at suitable points in the domain model, but again it is presented to the user via a graphical interface known as the Knowledge Encoding Tool (KET). KET is designed to assist in knowledge acquisition in expert systems. It uses a combination of decision support trees and associative networks as its representation. A combined use of VEGAN and KET will enable domain experts to interactively create and test their knowledge base with minimum involvement on the part of a knowledge engineer. The inclusion of learning features in VEGAN/KET is desirable for this purpose. The main objective of these tools, therefore, is to encourage rapid prototyping by the domain expert. VEGAN and KET are implemented in the Poplog environment on SUN 3/50 workstations.

8.
Abstract. Knowledge systems development and use have been significantly encumbered by the difficulties of eliciting and formalizing the expertise upon which knowledge workers rely. This paper approaches the problem from an examination of the knowledge competencies of knowledge workers in order to define a universe of discourse for knowledge elicitation. It outlines two categories and several types of knowledge that could serve as the foundations for the development of a theory of expertise.

9.
The Web is becoming a global market place, where the same services and products are offered by different providers. When obtaining a service or buying a product, consumers have to select one provider among many alternatives. In real life, when obtaining a service, many consumers depend on user reviews. User reviews—presumably written by other consumers—provide details on the consumers' experiences and thus are more informative than ratings. The downside is that such user reviews are written in natural language, making them extremely difficult for computers to interpret. Therefore, current technologies cannot process user reviews automatically and require substantial human effort for tasks such as writing and reading reviews, aggregating existing information, and finally choosing among the possible candidates. In this paper, we represent consumers' reviews as machine-processable structures using ontologies and develop a layered multiagent framework to enable consumers to find satisfactory service providers for their needs automatically. The framework can still function successfully when consumers evolve their language and when deceptive reviewers enter the system. We show the flexibility of the framework by employing different algorithms for various tasks and evaluate them for different circumstances.

10.
As the world increasingly moves towards a knowledge-based economy, user requirements become an important factor driving the collaborative evolution of product design. To map user requirements onto the product model, they are generally distilled into knowledge that can inform design decisions. However, because users are interest-driven participants rather than professional design engineers, user knowledge acquisition is often ineffective, and rapid knowledge acquisition under dynamic user requirements remains a significant challenge. This paper presents an approach to user knowledge acquisition in the product design process, which captures the tangible requirements of users under the premise that users participate adequately. In this approach, the typical information flow is divided into four stages: submission, interaction, knowledge discovery, and model evolution. In the submission stage, natural language processing is used to transform free-text solutions into data, so that large-scale user requirements can be managed computationally. In the interaction stage, users are helped to improve their solutions through an iterative recommendation process. In the knowledge discovery stage, after less-relevant partial solutions are removed and vacant items are predicted and filled in, the final collection of user design information is obtained. Finally, based on rough set theory, design knowledge is extracted to support decisions on the product model. A washing machine design project is used as a case study to illustrate the implementation of the proposed approach.
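The rough set machinery invoked in the final stage can be illustrated with its two core constructs, the lower and upper approximations of a target concept under an indiscernibility relation. This is a minimal sketch of standard rough set theory, not the paper's pipeline; the attributes and data are invented for illustration.

```python
from collections import defaultdict

def approximations(rows, labels, target):
    """Rough-set lower/upper approximation of the set of objects whose
    label equals `target`, using indiscernibility over all attributes:
    objects with identical attribute tuples fall into the same block."""
    blocks = defaultdict(list)
    for idx, row in enumerate(rows):
        blocks[tuple(row)].append(idx)
    target_set = {i for i, lab in enumerate(labels) if lab == target}
    # Lower approximation: blocks entirely inside the target concept.
    lower = {i for b in blocks.values() if set(b) <= target_set for i in b}
    # Upper approximation: blocks that touch the target concept at all.
    upper = {i for b in blocks.values() if set(b) & target_set for i in b}
    return lower, upper

# Invented example: objects 0 and 1 are indiscernible but labelled differently.
rows = [("hi", "round"), ("hi", "round"), ("lo", "round"), ("lo", "flat")]
labels = ["buy", "skip", "skip", "buy"]
lower, upper = approximations(rows, labels, "buy")
print(lower, upper)  # {3} {0, 1, 3}
```

Certain rules (here, for object 3 only) come from the lower approximation; the gap between the two approximations is the boundary region where the data cannot decide.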

11.
Classical expert systems are rule based, depending on predicates expressed over attributes and their values. In the process of building expert systems, the attributes and the constants used to interpret their values need to be specified. Standard techniques for doing this are drawn from psychology, for instance, interviewing and protocol analysis. This paper describes a statistical approach to deriving interpreting constants for given attributes. It is also possible to suggest the need for attributes beyond those given. The approach for selecting an interpreting constant is demonstrated by an example. The data to be fitted are first generated by selecting a representative collection of instances of the narrow decision addressed by a rule, then making a judgement for each instance, and defining an initial set of potentially explanatory attributes. A decision rule graph plots the judgements made against pairs of attributes. It reveals rules and key instances directly. It also shows when no rule is possible, thus suggesting the need for additional attributes. A study of a collection of seven rule-based models shows that the attributes defined during the fitting process improved the fit of the final models to the judgements by twenty percent over models built with only the initial attributes.

12.
This article is an account of the evolution of the French-speaking research community on knowledge acquisition and knowledge modelling echoing the complex and cross-disciplinary trajectory of the field. In particular, it reports the most significant steps in the parallel evolution of the web and the knowledge acquisition paradigm, which finally converged with the project of a semantic web. As a consequence of the huge amount of available data in the web, a paradigm shift occurred in the domain, from knowledge-intensive problem solving to large-scale data acquisition and management. We also pay a tribute to Rose Dieng, one of the pioneers of this research community.

13.
Knowledge acquisition is the process of collecting domain knowledge, documenting the knowledge, and transforming it into a computerized representation. Due to the difficulties involved in eliciting knowledge from human experts, knowledge acquisition has been identified as a bottleneck in the development of knowledge-based systems. Over the past decades, a number of automatic knowledge acquisition techniques have been developed. However, the performance of these techniques suffers from the so-called curse of dimensionality, i.e., difficulties arise when many irrelevant (or redundant) parameters exist. This paper presents a heuristic approach based on statistics and greedy search for dimensionality reduction to facilitate automatic knowledge acquisition. The approach deals with classification problems. Specifically, Chi-square statistics are used to rank the importance of individual parameters. Then, a backward search procedure is employed to eliminate parameters (less important parameters first) that do not contribute to class separability. The algorithm is very efficient and was found to be effective when applied to a variety of problems with different characteristics.
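The two-step idea, Chi-square ranking followed by backward elimination of the least important parameters, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the consistency test used as the "class separability" criterion and the example data are assumptions.

```python
from collections import Counter

def chi_square(values, labels):
    """Chi-square statistic of one categorical attribute against the class."""
    n = len(values)
    obs = Counter(zip(values, labels))
    v_tot, l_tot = Counter(values), Counter(labels)
    stat = 0.0
    for v in v_tot:
        for lab in l_tot:
            expected = v_tot[v] * l_tot[lab] / n
            stat += (obs.get((v, lab), 0) - expected) ** 2 / expected
    return stat

def consistent(rows, labels, keep):
    """True if no two rows agree on the kept attributes but differ in class."""
    seen = {}
    for row, lab in zip(rows, labels):
        key = tuple(row[i] for i in sorted(keep))
        if seen.setdefault(key, lab) != lab:
            return False
    return True

def backward_select(rows, labels):
    """Drop attributes, least important first, while classes stay separable."""
    n_attr = len(rows[0])
    ranked = sorted(range(n_attr),
                    key=lambda i: chi_square([r[i] for r in rows], labels))
    keep = set(range(n_attr))
    for i in ranked:
        if len(keep) > 1 and consistent(rows, labels, keep - {i}):
            keep.discard(i)
    return sorted(keep)

# Invented data: attribute 1 is noise; attributes 0 and 2 are redundant copies.
rows = [("a", "x", "r"), ("a", "y", "r"), ("b", "x", "s"), ("b", "y", "s")]
labels = ["p", "p", "q", "q"]
print(backward_select(rows, labels))  # [2]
```

A single informative attribute survives: the noise attribute goes first (lowest Chi-square), then one of the two redundant copies is eliminated because the remaining one still separates the classes.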

14.
Horn knowledge bases are widely used in many applications. The paper is concerned with the optimal compression of propositional Horn production rule bases, one of the most important kinds of knowledge base used in practice. The problem of knowledge compression is interpreted as a problem of Boolean function minimization. It was proved by P.L. Hammer and A. Kogan (1993) that the minimization of Horn functions, i.e., Boolean functions associated with Horn knowledge bases, is NP-complete. The paper deals with the minimization of quasi-acyclic Horn functions, a class which properly includes the two practically significant classes of quadratic and acyclic functions. A procedure is developed for recognizing in quadratic time the quasi-acyclicity of a function given by a Horn CNF, and a graph-based algorithm is proposed for the quadratic-time minimization of quasi-acyclic Horn functions.
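The paper's minimization procedure is not reproduced here; as background, the semantics a compression must preserve is the forward-chaining closure of the Horn rules — two rule bases are equivalent iff they derive the same atoms from every fact set. The closure can be computed with the following naive sketch (quadratic in the worst case, not the linear-time algorithm known for Horn formulae).

```python
def forward_closure(rules, facts):
    """Forward chaining over propositional Horn rules (body_atoms -> head).
    Repeatedly fires any rule whose body is fully derived, until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

# Toy rule base: a -> b, b -> c, (a and c) -> d.
rules = [({"a"}, "b"), ({"b"}, "c"), ({"a", "c"}, "d")]
print(sorted(forward_closure(rules, {"a"})))  # ['a', 'b', 'c', 'd']
```

For instance, adding the rule `({"a"}, "c")` to this base would not change any closure, so an optimal compression may discard it.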

15.
Describes how, in the process of extracting the optical flow through space-time filtering, one must consider the constraints associated with the motion uncertainty, as well as the spatial and temporal sampling rates of the sequence of images. The motion uncertainty satisfies the Cramer-Rao (CR) inequality, which is shown to be a function of the filter parameters. On the other hand, the spatial and temporal sampling rates have lower bounds, which depend on the motion uncertainty, the maximum support in the frequency domain, and the optical flow. These lower bounds on the sampling rates and on the motion uncertainty are constraints that constitute an intrinsic part of the computational structure of space-time filtering. The author shows that if these constraints are used simultaneously, the filter parameters cannot be arbitrarily determined but instead have to satisfy consistency constraints. By using explicit representations of uncertainties in extracting visual attributes, one can constrain the range of values assumed by the filter parameters.

16.
Attribute reduction and rule generation (attribute value reduction) are the two main processes of knowledge acquisition. A self-optimizing approach for knowledge acquisition, based on a difference comparison table and aimed at these two processes, is proposed. In the attribute reduction process, the conventional logic computation is transformed into a matrix computation, combined with ideas from evolutionary computation used to construct the self-adaptive optimizing algorithm. In addition, several sub-algorithms and proofs are presented in detail. In the rule generation process, the orderly attribute value reduction algorithm (OAVRA), which simplifies the complexity of the rule knowledge, is presented. The approach provides an effective and efficient method for knowledge acquisition, as supported by the experiments.
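A "difference comparison table" of this kind resembles the discernibility matrix of rough set theory: for every pair of objects in different classes, record the attributes on which they differ, then find a small attribute set hitting every entry. The sketch below uses a greedy set cover as a stand-in for the paper's self-adaptive evolutionary search; the greedy heuristic and the example data are assumptions for illustration.

```python
from collections import Counter

def discernibility_entries(rows, labels):
    """For each pair of objects with different classes, collect the set of
    attribute indices on which they differ (the difference-comparison entries)."""
    entries = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if labels[i] != labels[j]:
                diff = frozenset(k for k in range(len(rows[i]))
                                 if rows[i][k] != rows[j][k])
                if diff:
                    entries.append(diff)
    return entries

def greedy_reduct(entries):
    """Greedy set cover: repeatedly pick the attribute that distinguishes
    the most still-uncovered object pairs."""
    reduct, uncovered = set(), list(entries)
    while uncovered:
        counts = Counter(a for entry in uncovered for a in entry)
        best = counts.most_common(1)[0][0]
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return reduct

# Invented table: attribute 2 never separates the classes on its own.
rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
labels = [0, 0, 1, 1]
print(greedy_reduct(discernibility_entries(rows, labels)))
```

Greedy cover gives a small reduct quickly but not always a minimal one, which is why more elaborate searches (such as the evolutionary scheme above) are used in practice.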

17.
This paper addresses the critical issues of knowledge acquisition in developing knowledge-based expert systems for engineering tasks. First, it reviews the role of knowledge acquisition and its current practice in expert system development. Then, a new approach based on three stages of knowledge refinement is suggested to improve the process of knowledge acquisition. This approach, called "rule verification without rule construction", is proposed to allow knowledge engineers and domain experts to play a more intimate and balanced role in developing intelligent systems. The communication tool developed for this concept is called a "knowledge map", which provides a systematic way of indexing and quantifying a piece of knowledge in the problem space by defining important attributes as the axes of the map. This approach is demonstrated by constructing a two-dimensional map for a knowledge-based engineering design system, IDRILL, which we are currently developing. Future expansions of this knowledge acquisition technique are summarized in the conclusions of this paper. This paper was presented in part at the 1986 ASME International Computers in Engineering Conference in Chicago, IL, and appeared in the proceedings of that conference.

18.
One of the key technologies related to knowledge and data engineering is the acquisition of knowledge and data in the development and utilization of information systems, and the strategies to capture new knowledge and data. In practice, millions of documents, including technical reports, government files, newspapers, books, magazines, letters, bank checks, etc., have to be processed every day, and knowledge has to be acquired from them. This paper presents a new approach to document analysis for automatic knowledge acquisition. The traditional approaches have two major disadvantages: (1) they are not effective for processing documents with high geometrical complexity; in particular, the top-down approach can process only simple documents which have a specific format or contain some a priori information; (2) the top-down approach needs to split large components into small ones iteratively, while the bottom-up approach needs to merge small components into large ones iteratively, both of which are time consuming. The new approach is based on a modified fractal signature and can overcome the above weaknesses.

19.
In this paper, we develop a technique for acquiring the finite set of attributes or variables which the expert uses in a classification problem for characterising and discriminating a set of elements. This set will constitute the schema of a training data set to which an inductive learning algorithm will be applied. The technique developed uses ideas taken from psychology, in particular from Kelly's Personal Construct Theory. While we agree that Kelly's repertory grid technique is an efficient way to do this, it has several disadvantages, which we try to overcome by using a fuzzy repertory table. With the suggested technique, we aim to obtain the set of attributes and values which the expert can use to "measure" the object type (class) in the classification problem. We will also acquire some general rules to identify the expert's evident knowledge; these rules will comprise concepts belonging to their conceptual structure.

20.
Previous studies have reported the importance and benefits of situating students in a real-world learning environment with access to digital-world resources. At the same time, researchers have indicated the need to develop learning guidance mechanisms or tools for assisting students to learn in such a complex learning scenario. In this study, a grid-based knowledge acquisition approach is proposed and a Mindtool is developed to help students organize and share knowledge for differentiating a set of learning targets based on what they have observed in the field. An experiment has been conducted in an elementary school Natural Science course for differentiating different species of butterflies. Forty-one fifth-grade students have been assigned to a control group and an experimental group to compare the effect of the conventional approach and that of the proposed approach. The experimental results show that the proposed approach not only improves students’ learning achievements, but also significantly enhances their ability of identifying species in the field.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号