Similar Articles
20 similar articles found
1.
We present a new approach to the effective development of complex retrieval components for case-based reasoning (CBR) systems. Our approach goes beyond traditional CBR by allowing incremental refinement of an existing retrieval knowledge base during routine use of the system. Refinement takes place through direct expert-system interaction while the expert accomplishes their given tasks. We borrow ideas from ripple-down rules (RDR), a proven method for the very effective and efficient acquisition of classification knowledge during routine use of a knowledge-based system (KBS).

In our approach the expert is only required to explain why, for a given problem, a certain case should be retrieved. A complex retrieval knowledge base is developed incrementally as a composition of many simple retrieval functions. This approach is effective both for developing highly tailored, complex retrieval knowledge bases for CBR and for providing an intuitive, feasible procedure for the expert. The approach has been implemented in our CBR system MIKAS (Menu construction using an Incremental Knowledge Acquisition System), which automatically constructs a menu strongly tailored to the individual requirements and food preferences of a client.

2.
Automated cinematic reasoning about camera behavior
Automated control of a virtual camera is useful for both linear animation and interactive virtual environments. It has been partially addressed in the past by numeric constraint optimization and by idiom-based approaches. We have constructed a knowledge-based system that allows users to experiment with various cinematic genres and view the results as animated 3D movies. We followed a knowledge-acquisition process that converts domain-expert principles into declarative rules, and our system uses non-monotonic reasoning to support absolute rules, default rules, and arbitrary user choices. We evaluated the tool by generating various movies and showing some of the results to a group of expert viewers.

3.
Research on a fuzzy expert system based on colored Petri nets
To address the uncertainty of knowledge representation and the large number of rules in fuzzy expert systems for substation reactive-power control, this paper proposes a knowledge representation and rule acquisition method based on fuzzy colored Petri nets. Exploiting the graphical environment of Petri nets, the method distinguishes the different variables of a fuzzy rule base with different colors, and represents the same variable across different rules with that variable's color set, forming a fuzzy colored Petri net model. Building on the properties of colored Petri nets, the inference process is examined in detail and a heuristic search strategy based on colored fuzzy Petri nets is proposed. Applied to a fuzzy expert system for substation reactive-power control, the results show that this fuzzy knowledge representation and acquisition method is very effective for large, complex substation fuzzy expert control systems.
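A rough sketch of the kind of fuzzy Petri net inference the abstract describes, assuming a simple min-composition firing rule with per-rule certainty factors; the paper's colored-token encoding and heuristic search strategy are not reproduced here, and all place and rule names are hypothetical.

```python
# Minimal fuzzy Petri net inference sketch (illustrative assumption, not
# the paper's model). Places hold fuzzy truth degrees in [0, 1]; a
# transition (rule) fires with degree min(inputs) * certainty_factor,
# propagated to its output place if that improves the current degree.

class FuzzyPetriNet:
    def __init__(self):
        self.marking = {}        # place name -> truth degree in [0, 1]
        self.transitions = []    # (input_places, output_place, cf)

    def add_rule(self, inputs, output, cf):
        self.transitions.append((inputs, output, cf))

    def infer(self):
        # Repeatedly fire enabled transitions until the marking is stable.
        changed = True
        while changed:
            changed = False
            for inputs, output, cf in self.transitions:
                if all(p in self.marking for p in inputs):
                    degree = min(self.marking[p] for p in inputs) * cf
                    if degree > self.marking.get(output, 0.0):
                        self.marking[output] = degree
                        changed = True

net = FuzzyPetriNet()
net.marking = {"voltage_low": 0.8, "reactive_power_high": 0.9}
net.add_rule(["voltage_low", "reactive_power_high"], "switch_in_capacitor", 0.95)
net.infer()
print(net.marking["switch_in_capacitor"])   # min(0.8, 0.9) * 0.95
```

Coloring, in the paper's sense, would let one net template serve many variables by tagging tokens with a color set instead of duplicating places per variable.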

4.
We present techniques used in ADELE, a second-generation expert system (SGES), to support knowledge acquisition in the diagnostic domain. The approach was studied within the SGES framework and is based on the connection between knowledge acquisition and explanation. When new heuristic knowledge is acquired, justifications for it are sought in domain models to support the knowledge-acquisition process. ADELE is a medical diagnostic reasoning system for electromyography.

5.
A general language for specifying resource allocation and timetabling problems is presented. The language is based on an expert-system paradigm, developed previously by the authors, that enables the solution of resource allocation problems using experts' knowledge and heuristics. The language enables the specification of a problem in terms of resources, activities, allocation rules, and constraints, and thus provides a convenient knowledge-acquisition tool. Its syntax is powerful, allowing the specification of rules and constraints that are very difficult to formulate with traditional approaches, and it also supports the specification of various control and backtracking strategies. We constructed a generalized inference engine that runs compiled resource allocation problem specification language (RAPS) programs and provides all the necessary control structures. This engine acts as an expert-system shell and is called expert system for resource allocation (ESRA). The performance of RAPS combined with ESRA is demonstrated by analyzing its solution of a typical resource allocation problem.

6.
An improved method for acquiring rule knowledge
Knowledge acquisition is the most fundamental and important step in building an expert system, yet it is also the bottleneck in developing one. This paper proposes an improved technique for the automatic machine acquisition of rule knowledge. It treats learning as a heuristic search through a space of symbolic descriptions and can induce decision rules from examples of expert decisions, greatly simplifying the transfer of knowledge from expert to machine.

7.
In computer board-game playing, self-play learning relies only on the sequence of moves and the final win/loss outcome. Apart from the rules of the game, no domain knowledge is presupposed and no expert guidance is given. Although self-play learning based on minimax, alpha-beta pruning, and Monte Carlo search has achieved excellent results, targeted research on evaluating the quality of learning examples is still lacking. This paper therefore proposes, for the first time, a quality-evaluation method for self-play game-position learning examples. The method uses a composite sample-size index T, a linear combination of example duplication rate and example count, to determine the size of the learning sample. Experiments on checkers show that the method effectively controls the size of the learning sample, greatly reducing the computational cost of generating examples without degrading learning performance.
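The abstract states that T is a linear combination of example duplication rate and example count, but gives no coefficients. The sketch below assumes illustrative weights and a simple normalization; the function name and parameters are hypothetical, not taken from the paper.

```python
# Sketch of a sample-size index T combining example count and duplication
# rate linearly. The weights w_dup and w_count and the normalization cap
# are assumptions for illustration; the paper does not state them here.

def duplication_rate(examples):
    """Fraction of examples that duplicate an earlier example."""
    if not examples:
        return 0.0
    return 1.0 - len(set(examples)) / len(examples)

def sample_size_index(examples, w_dup=0.5, w_count=0.5, max_count=10000):
    """T grows with corpus size but is penalized by redundancy."""
    n = min(len(examples), max_count) / max_count   # normalized count
    return w_count * n - w_dup * duplication_rate(examples)

# Self-play positions encoded as strings; heavy repetition lowers T,
# signaling that generating further examples adds little value.
fresh = [f"pos{i}" for i in range(100)]
stale = ["pos0"] * 90 + [f"pos{i}" for i in range(10)]
print(sample_size_index(fresh) > sample_size_index(stale))
```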

8.
Knowledge-based search in competitive domains
Artificial intelligence programs operating in competitive domains typically use brute-force search if the domain can be modeled as a search tree, or else use non-search heuristics, as in production-rule-based expert systems. While brute-force techniques have recently proven to be a viable method for modeling domains with smaller search spaces, such as checkers and chess, the same techniques cannot succeed in more complex domains, such as shogi or go. This research uses a cognitive-based modeling strategy to develop a heuristic search technique based on cognitive thought processes with minimal domain-specific knowledge. The cognitive-based search technique provides a significant reduction in search-space complexity and, furthermore, enables the search paradigms to be extended to domains that are not typically thought of as search domains, such as aerial combat or corporate takeovers.
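For contrast with the brute-force baseline the abstract mentions, here is a toy comparison of plain minimax against alpha-beta pruning on a tiny game tree; it shows the kind of search-space reduction knowledge (here, good move ordering) buys, but it is not the paper's cognitive-based technique.

```python
# Brute-force minimax vs. alpha-beta pruning on a toy depth-2 game tree.
# Leaves are static evaluations; counters track how many nodes each
# method visits. Alpha-beta with the best move searched first prunes
# siblings that cannot affect the root value.

def minimax(tree, maximizing, counter):
    counter[0] += 1
    if isinstance(tree, int):            # leaf: static evaluation
        return tree
    values = [minimax(child, not maximizing, counter) for child in tree]
    return max(values) if maximizing else min(values)

def alphabeta(tree, maximizing, alpha, beta, counter):
    counter[0] += 1
    if isinstance(tree, int):
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, alphabeta(child, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune remaining siblings
                break
        return value
    value = float("inf")
    for child in tree:
        value = min(value, alphabeta(child, True, alpha, beta, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[6, 9], [3, 5], [1, 2]]          # root is maximizing; best move first
full, pruned = [0], [0]
best = minimax(tree, True, full)
assert best == alphabeta(tree, True, float("-inf"), float("inf"), pruned)
print(best, full[0], pruned[0])          # same value, fewer nodes visited
```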

9.
Automatic acquisition of classification judgment strategies
Acquiring the decision strategies and problem-solving rules that experts use in decision making and problem solving is a core goal of knowledge acquisition. Against the background of the Moving Basis Heuristics model, this paper describes the algorithmic ideas for automatically acquiring the judgment strategies experts apply in classification tasks, and the architecture of the Polynome system.

10.
Most expert systems applied in plant pathology treat the problem of selecting a treatment in a conventional manner, by means of production rules that associate with each pathology the most suitable chemical product. This makes it difficult to generate useful explanations. To generate satisfactory explanations, the system's knowledge must be based on the strategies used by human experts. This article introduces our approach to identifying and representing strategic knowledge in an expert system for pest control in greenhouses. We present an introduction to the application domain and analyze the strategic knowledge involved. We distinguish between the underlying strategy and the practical strategy the expert uses to solve the problem. From this we propose a preliminary architecture based on strategic reasoning agents.

11.
This paper focuses on industrial design and simulation processes, especially in the automotive and aerospace sectors. Designers use business models (called expert models) such as CAD (computer-aided design) and CAE (computer-aided engineering) models to optimize and streamline the engineering process. Each expert model contains information such as parameters, expert rules, and mathematical relations (parametric models, for example) that are shared by several users across several different domains (mechanical, thermal, acoustic, fluid, etc.). This information is exploited simultaneously in a concurrent-engineering context. It underlies an imperfect collaboration process, because existing tools do not manage encapsulated information well and cannot ensure that parameters and rules remain consistent (the same parameter values, for example) across different heterogeneous expert models. In this context, we propose an approach to managing knowledge using configurations synchronized with expert models, enabling designers to use parameters consistently in a collaborative context. Our approach, called KCModel (knowledge configuration model), allows acquisition, traceability, re-use and consistency of the explicit knowledge used in configuration.

12.
CHUNKER is a chess program that uses chunked knowledge to improve its performance. Its domain is a subset of king-and-pawn endings in chess that has been studied for over 300 years. CHUNKER has a large library of chunk instances, where each chunk type has a property list and each instance has a set of values for these properties. This allows CHUNKER to reason about positions that come up in the search that would otherwise have to be handled by additional search. Thus the program is able to solve the most difficult problem in its present domain (a problem that would require 45 ply of search and on the order of 10^13 years of CPU time for the best of present-day chess programs) in 18 ply and one minute of CPU time. Further, CHUNKER is undoubtedly the world's foremost expert in its domain: it has discovered two mistakes in the literature and has been instrumental in discovering a new theorem about the domain that allows positions to be assessed with a new degree of ease and confidence. In this paper we show how the libraries are compiled, describe how CHUNKER works, and discuss our plans for extending it to play the whole domain of king-and-pawn endings.

13.
Fuzzy rule induction in a set covering framework

14.
Expert systems have vast potential in intelligent process control. In this paper, the design of process-control systems using a knowledge-based approach is discussed. Expert-system techniques have been used to design controllers for process-control systems. The expert system is developed in conjunction with the successful application of a systematic design approach. Design knowledge is represented using rules, facts and frames. The design process consists of a sequence of operations obtained through heuristics and experience with the design techniques.

15.
General-purpose generative planners use domain-independent search heuristics to generate solutions for problems in a variety of domains. However, in some situations these heuristics force the planner to perform inefficiently or produce solutions of poor quality. Learning from experience can help to identify the particular situations in which the domain-independent heuristics need to be overridden. Most past learning approaches are fully deductive and eagerly acquire correct control knowledge from a necessarily complete domain theory and a few examples to focus their scope. These learning strategies are hard to generalize to nonlinear planning, where it is difficult to capture correct explanations of the interactions among goals, multiple planning-operator choices, and situational data. In this article, we present a lazy learning method that combines a deductive and an inductive strategy to efficiently learn control knowledge incrementally with experience. We present hamlet, a system we developed that learns control knowledge to improve both search efficiency and the quality of the solutions generated by a nonlinear planner, namely prodigy4.0. We have identified three lazy aspects of our approach from which we believe hamlet greatly benefits: lazy explanation of successes, incremental refinement of acquired knowledge, and lazy learning to override only the default behavior of the problem solver. We show empirical results that support the effectiveness of this overall lazy learning approach, in terms of improving the efficiency of the problem solver and the quality of the solutions produced.

16.
Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solving computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms, including SGS, which lets us derive expected hitting time results that explain empirical run-time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that expected hitting time is a rational function (a ratio of two polynomials) of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. On application networks, our approach is highly competitive for Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique-tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique-tree clustering and vice versa.
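The motivation for the additive measure can be made concrete with a small numeric sketch: given per-variable probabilities for an explanation (a simplifying assumption for illustration, not SGS itself), a single zero entry collapses the multiplicative utility to 0 for every explanation containing it, while the additive utility still ranks them.

```python
# Multiplicative vs. additive explanation utility, assuming an explanation
# is scored from a list of per-variable probabilities p_i (illustrative
# simplification of MPE utility; not the SGS algorithm).

import math

def multiplicative_utility(probs):
    return math.prod(probs)

def additive_utility(probs):
    return sum(probs)

with_zero_a = [0.9, 0.8, 0.0]   # one deterministic zero entry
with_zero_b = [0.1, 0.1, 0.0]   # clearly worse, same zero entry

# Both explanations have multiplicative utility 0, so the multiplicative
# measure cannot guide local search between them; the additive one can.
print(multiplicative_utility(with_zero_a), multiplicative_utility(with_zero_b))
print(additive_utility(with_zero_a) > additive_utility(with_zero_b))
```

This mirrors how MAXSAT gain counts satisfied clauses additively instead of requiring all clauses to hold at once.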

17.
Knowledge acquisition for medical knowledge bases can be aided by programs that suggest possible values for portions of the data. This paper presents an experiment used in designing a heuristic to aid knowledge acquisition. The heuristic helps determine numerical data from stylized literature excerpts in the context of knowledge acquisition for the QMR medical knowledge base. Quantitative suggestions from the heuristic are shown to agree substantially with the data incorporated in the final version of the knowledge base. The experiment shows the potential of knowledge-base-specific heuristics for simplifying the task of knowledge-base creation.

18.
We propose a grammar-based genetic programming framework that generates variable-selection heuristics for solving constraint satisfaction problems. This approach can be considered a generation hyper-heuristic. A grammar to express heuristics is extracted from successful human-designed variable-selection heuristics. The search is performed on the derivation sequences of this grammar using a strongly typed genetic programming framework. The approach brings two innovations to grammar-based hyper-heuristics in this domain: the incorporation of if-then-else rules into the function set, and the implementation of overloaded functions capable of handling different input dimensionality. Moreover, the heuristic search space is explored using not only evolutionary search but also two alternative, simpler strategies, namely iterated local search and parallel hill climbing. We tested our approach on synthetic and real-world instances. The newly generated heuristics outperform human-designed heuristics. Our results suggest that the constrained search space imposed by the proposed grammar is the main factor in the generation of good heuristics. However, to generate more general heuristics, the composition of the training set and the search methodology played an important role. We found that increasing the variability of the training set improved the generality of the evolved heuristics, and the evolutionary search strategy produced slightly better results.

19.
The use of ontologies in knowledge engineering arose as a solution to the difficulties associated with acquiring knowledge, commonly referred to as the knowledge acquisition bottleneck. The knowledge-level model represented in an ontology provides a much more structured and principled approach compared with earlier transfer-of-symbolic-knowledge approaches, but brings with it a new problem, which can be termed the ontology-acquisition (and maintenance) bottleneck. Each ontological approach offers a different structure, different terms and different meanings for those terms. The unifying theme across approaches is the considerable effort associated with developing, validating and connecting ontologies. We propose an approach to engineering ontologies by retrospectively and automatically discovering them from existing data and knowledge sources in the organization. The method offered assists in the identification of similar and different terms and includes strategies for developing a shared ontology. The approach uses a human-centered, concept-based knowledge-processing technique, known as formal concept analysis, to generate an ontology from examples. To assist classification of examples and to identify the salient features of each example, we use a rapid and incremental knowledge acquisition and representation technique known as ripple-down rules. The method can be used as an alternative or complement to other approaches.
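Formal concept analysis, which the abstract relies on, derives (extent, intent) pairs from an object-attribute table. A brute-force sketch under a tiny hypothetical context (the objects and attributes below are made up for illustration; the ripple-down-rules side is not shown):

```python
# Enumerate the formal concepts of a small object-attribute context.
# A formal concept is a pair (extent, intent) where the extent is
# exactly the set of objects sharing the intent, and the intent is
# exactly the set of attributes common to the extent.

from itertools import combinations

context = {
    "sparrow": {"flies", "has_feathers"},
    "penguin": {"swims", "has_feathers"},
    "trout":   {"swims", "has_scales"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def extent(attrs):
    """Objects possessing all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(sorted(context), r):
        a = intent(set(objs))
        e = extent(a)                 # closure: extent of the intent
        concepts.add((frozenset(e), frozenset(a)))

for e, a in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(e), "<->", sorted(a))
```

Ordering these concepts by extent inclusion yields the concept lattice from which an ontology's class hierarchy can be read off.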

20.

A value-approximation-based global search algorithm is suggested for solving resource-constrained allocation problems in high-level synthesis. Value approximation is preferred because it can start from expert heuristics, can estimate the global structure of the search problem, and can optimize heuristics. We are concerned with those allocation problems that have a hidden global structure that value approximation may uncover. The value approximation applied here computes the cost of the current solution and estimates the cost of the solution that could be achieved by performing a global search on the hidden structure starting from the current solution. We transcribed the allocation problem into a special form of weighted CNF formulae to suit our approach, and extended the formalism to pipeline operations. Comparisons are made with expert heuristics, and the scaling of computation time and performance is compared.
