Found 20 similar documents (search time: 0 ms)
1.
Elio Toppano 《Applied Artificial Intelligence》2013,27(3):191-224
It is generally admitted that several models differing along various dimensions are needed for executing complex engineering tasks such as diagnosis and monitoring. A key problem is thus to decide which model to use in a particular situation, when faced with a specified problem-solving task and reasoning objectives. We address this problem within the Multimodeling framework for reasoning about physical systems that we proposed in a previous work. After characterizing the space of possible models in the Multimodeling approach, we formulate the selection problem using the conceptual tools offered by the economic theory of rationality. In this frame we illustrate a preference-based model selection method that is used to navigate the universe of available models of a system, searching for the model that best matches a given task and reasoning objectives. The method exploits a model map, a metalevel concept representing the ontology and teleology of each model and the transformational relations (abstractions and approximations) connecting each model to other models. The model map is used to compare models on the basis of their content and to understand what can be gained or lost when switching from one model to another. Finally, some implications of the foregoing selection method for developing action-based diagnostic systems are discussed.
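The preference-based selection idea can be sketched as scoring candidate models against a task's reasoning objectives. This is only an illustrative sketch; the model names, properties, and weighting scheme below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: rank candidate models of a physical system by how
# well their declared properties match a task's reasoning objectives,
# penalized by each model's (assumed) computational cost.

MODELS = {
    "lumped_qualitative":  {"granularity": "coarse", "behavior": "qualitative", "cost": 1},
    "lumped_numeric":      {"granularity": "coarse", "behavior": "numeric",     "cost": 3},
    "distributed_numeric": {"granularity": "fine",   "behavior": "numeric",     "cost": 9},
}

def score(model_props, objectives, weights):
    """Weighted count of satisfied objectives, minus a weighted cost penalty."""
    matched = sum(weights[k] for k, v in objectives.items() if model_props.get(k) == v)
    return matched - model_props["cost"] * weights.get("cost", 0)

def select_model(objectives, weights):
    """Pick the model in the map that best matches the stated objectives."""
    return max(MODELS, key=lambda name: score(MODELS[name], objectives, weights))

task = {"behavior": "qualitative", "granularity": "coarse"}
weights = {"behavior": 2.0, "granularity": 1.0, "cost": 0.1}
best = select_model(task, weights)
```

A real model map would also record the abstraction/approximation links between models; here only the per-model scoring step is shown.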
2.
《IEEE Transactions on Knowledge and Data Engineering》1991,3(1):89-99
A browser concept based on a connectionist architecture is presented. The concept utilizes both distributed and local representations. A proof-of-concept system is implemented for an integrally developed, Honeywell-proprietary knowledge acquisition tool. In the browser, concepts and relations in a knowledge base are represented using microfeatures. The microfeatures can encode semantic attributes, structural features, contextual information, etc. Desired portions of the knowledge base can then be associatively retrieved based on a structured cue. An ordered list of partial matches is presented to the user for selection. Microfeatures can also be used as bookmarks: they can be placed dynamically at appropriate points in the knowledge base and subsequently used as retrieval cues. The browser concept can be applied wherever there is a need for conveniently inspecting and manipulating structured information.
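The associative-retrieval step can be sketched as overlap matching between a structured cue and per-node microfeature sets, with partial matches returned in ranked order. All node and feature names below are hypothetical:

```python
# Hypothetical sketch of microfeature-based associative retrieval:
# each knowledge-base node carries a set of microfeatures; a structured
# cue is matched against them and partial matches are ranked by overlap.

KB = {
    "valve_stuck":  {"domain:hydraulics", "kind:fault", "phase:diagnosis"},
    "pump_model":   {"domain:hydraulics", "kind:component"},
    "login_screen": {"domain:ui", "kind:widget"},
}

def retrieve(cue, kb):
    """Return (node, overlap) pairs sorted by number of shared microfeatures."""
    hits = [(name, len(feats & cue)) for name, feats in kb.items()]
    hits = [h for h in hits if h[1] > 0]          # keep partial matches only
    return sorted(hits, key=lambda h: -h[1])

matches = retrieve({"domain:hydraulics", "kind:fault"}, KB)
```

A bookmark in this scheme is just an extra microfeature added to a node at browse time, so it can later serve as a retrieval cue.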
3.
Jos de Bruijn, David Pearce, Axel Polleres, Agustín Valverde 《Knowledge and Information Systems》2010,25(1):81-104
In the ongoing discussion about combining rules and ontologies on the Semantic Web a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to define a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper, we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL, we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.
4.
Yang Chen, Daisy Zhe Wang, Sean Goldberg 《The VLDB Journal: The International Journal on Very Large Data Bases》2016,25(6):893-918
Recent years have seen a drastic rise in the construction of web knowledge bases (e.g., Freebase, YAGO, DBpedia). These knowledge bases store structured information about real-world people, places, organizations, etc. However, due to the limitations of human knowledge, web corpora, and information extraction algorithms, the knowledge bases are still far from complete. To infer the missing knowledge, we propose the Ontological Pathfinding (OP) algorithm to mine first-order inference rules from these web knowledge bases. The OP algorithm scales up via a series of optimization techniques, including a new parallel rule-mining algorithm, a pruning strategy to eliminate unsound and inefficient rules before applying them, and a novel partitioning algorithm to break the learning task into smaller independent sub-tasks. Combining these techniques, we develop the first rule-mining system that scales to Freebase, the largest public knowledge base, with 112 million entities and 388 million facts. We mine 36,625 inference rules in 34 h; no existing system achieves this scale. Based on the mining algorithm and the optimizations, we develop an efficient inference engine. As a result, we infer 0.9 billion new facts from Freebase in 17.19 h. We use cross-validation to evaluate the inferred facts and estimate a degree of expansion of 0.6 over Freebase, with a precision approaching 1.0. Our approach outperforms state-of-the-art mining algorithms and inference engines in terms of both performance and quality.
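The inference step, once a rule is mined, amounts to a relational join over the fact store. A minimal sketch with one hypothetical mined rule, livesIn(x, y) ∧ cityIn(y, z) ⇒ livesIn(x, z), over toy facts (Freebase-scale systems partition and parallelize this join):

```python
# Hypothetical sketch: apply a single mined first-order rule bottom-up
# over a small triple store and collect the newly inferred facts.

facts = {
    ("livesIn", "alice", "paris"),
    ("cityIn", "paris", "france"),
    ("livesIn", "bob", "lyon"),
    ("cityIn", "lyon", "france"),
}

def infer(facts):
    """One pass of livesIn(x,y) & cityIn(y,z) => livesIn(x,z), joined on the city."""
    city_to_country = {(s, o) for p, s, o in facts if p == "cityIn"}
    new = set()
    for p, person, city in facts:
        if p == "livesIn":
            for c, country in city_to_country:
                if c == city:
                    triple = ("livesIn", person, country)
                    if triple not in facts:
                        new.add(triple)
    return new

inferred = infer(facts)
```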
5.
Kathryn Blackmond Laskey 《Artificial Intelligence》2008,172(2-3):140-178
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
6.
Updating knowledge bases
We consider the problem of updating a knowledge base, where a knowledge base is realised as a normal (logic) program. We present procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also prove various properties of the procedures, including their correctness.
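The setting can be sketched with a minimal clause representation. This is only the data model, not the paper's update procedures (which must also account for negative body literals when deleting):

```python
# Naive sketch of the setting: a normal program as (head, body) clauses,
# where a body literal is either an atom or ("not", atom).

program = [
    ("p", ["q", ("not", "r")]),   # p :- q, not r.
    ("q", []),                    # q.
]

def insert_atom(program, atom):
    """Insert an atom by adding it as a fact (unit clause)."""
    return program + [(atom, [])]

def delete_atom(program, atom):
    """Naive deletion: drop every clause whose head is the atom.
    (A correct procedure must also handle the atom's occurrences in bodies.)"""
    return [c for c in program if c[0] != atom]

p2 = insert_atom(program, "r")
p3 = delete_atom(p2, "q")
```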
7.
Lu, J.J., Nerode, A., Subrahmanian, V.S. 《IEEE Transactions on Knowledge and Data Engineering》1996,8(5):773-785
Deductive databases that interact with, and are accessed by, reasoning agents in the real world (such as logic controllers in automated manufacturing, weapons guidance systems, aircraft landing systems, land-vehicle maneuvering systems, and air-traffic control systems) must have the ability to deal with multiple modes of reasoning. Specifically, the types of reasoning we are concerned with include, among others, reasoning about time, reasoning about quantitative relationships that may be expressed in the form of differential equations or optimization problems, and reasoning about numeric modes of uncertainty about the domain which the database seeks to describe. Such databases may need to handle diverse forms of data structures, and frequently they may require use of the assumption-based nonmonotonic representation of knowledge. A hybrid knowledge base is a theoretical framework capturing all the above modes of reasoning. The theory tightly unifies the constraint logic programming scheme of Jaffar and Lassez (1987), the generalized annotated logic programming theory of Kifer and Subrahmanian (1989), and the stable model semantics of Gelfond and Lifschitz (1988). New techniques are introduced which extend both the work on annotated logic programming and the stable model semantics.
8.
9.
Probabilistic knowledge bases
We define a new fixpoint semantics for rule-based reasoning in the presence of weighted information. The semantics is illustrated on a real-world application requiring such reasoning. Optimizations and approximations of the semantics are shown so as to make the semantics amenable to very large scale real-world applications. We finally prove that the semantics is probabilistic and reduces to the usual fixpoint semantics of stratified Datalog if all information is certain. We implemented various knowledge discovery systems which automatically generate such probabilistic decision rules. In collaboration with a bank in Hong Kong, we use one such system to forecast currency exchange rates.
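A weighted fixpoint computation can be sketched as follows. The combination function (rule confidence times body weight, maximized over derivations) and the rule/fact names are assumptions for illustration, not the paper's exact semantics:

```python
# Hypothetical sketch of a weighted fixpoint: each rule carries a
# confidence, a derived fact's weight is confidence * body weight,
# and we iterate, keeping the max over derivations, until stable.

rules = [
    # (head_pred, body_pred, confidence)
    ("rises", "strong_exports", 0.7),
    ("buy",   "rises",          0.9),
]
facts = {"strong_exports": 1.0}

def fixpoint(rules, facts):
    weights = dict(facts)
    changed = True
    while changed:
        changed = False
        for head, body, conf in rules:
            if body in weights:
                w = conf * weights[body]
                if w > weights.get(head, 0.0):
                    weights[head] = w
                    changed = True
    return weights

result = fixpoint(rules, facts)
```

Note that if every confidence and fact weight is 1.0, the computation degenerates to the ordinary bottom-up fixpoint of a stratified Datalog program, mirroring the reduction the abstract mentions.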
10.
11.
12.
LUAN Shangmin, DAI Guozhong & LI Wei (Institute of Software, Chinese Academy of Sciences, Beijing, China; Department of Computer Science & Technology, Beijing University of Aeronautics & Astronautics, Beijing, China) 《Science in China Series F (English Edition)》2005,48(6):681-692
Knowledge base revision, which is also called belief revision, is an important topic in artificial intelligence and philosophy, and many approaches have been introduced in recent years [1-26]. An important topic for knowledge base revision is to introduce a programmable approach [9]. This paper focuses on showing a programmable approach to revise a knowledge base consisting of clauses. Knowledge base revision is important from both the theoretical and applied point of view [27]. Doyle's truth maintenance…
13.
Distributed shared memory for roaming large volumes
Castanié, L., Mion, C., Cavin, X., Lévy, B. 《IEEE Transactions on Visualization and Computer Graphics》2006,12(5):1299-1306
We present a cluster-based volume rendering system for roaming very large volumes. This system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing, and rendering for optimal volume roaming.
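The check-peers-before-disk policy at the heart of the cache hierarchy can be sketched in a few lines. The class and method names are hypothetical, and real peer lookups of course go over the network rather than touching a neighbor's dictionary:

```python
# Hypothetical sketch of the miss-handling policy: on a local brick-cache
# miss, check peer nodes' caches before falling back to (slow) disk.

class BrickCache:
    def __init__(self, peers=()):
        self.local = {}               # brick_id -> data
        self.peers = list(peers)      # other nodes' caches
        self.disk_reads = 0

    def _load_from_disk(self, brick_id):
        self.disk_reads += 1
        return f"data({brick_id})"    # stand-in for a real disk read

    def get(self, brick_id):
        if brick_id in self.local:            # local hit
            return self.local[brick_id]
        for peer in self.peers:               # remote hit: fetch from a peer
            if brick_id in peer.local:
                data = peer.local[brick_id]
                break
        else:                                 # miss everywhere: go to disk
            data = self._load_from_disk(brick_id)
        self.local[brick_id] = data           # cache for next time
        return data

a, b = BrickCache(), BrickCache()
a.peers, b.peers = [b], [a]
b.get("brick7")     # b loads from disk
a.get("brick7")     # a fetches from b; no disk access on a
```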
14.
Human knowledge in any expertise area changes with respect to time. Two types of such knowledge can be identified: time-independent and time-dependent. It is shown that maintaining the latter is harder than maintaining the former. The present paper applies research results in the area of temporal databases in order to maintain a rule-based knowledge base whose content changes with respect to real-world time. It is shown that the approach simplifies the maintenance of time-dependent knowledge. It also enables the study of the evolution of knowledge with respect to time, which is knowledge in its own right. Three distinct solutions are proposed and evaluated. Their common characteristic is that knowledge is stored in a database; therefore, all the advantages of databases are inherited by knowledge bases. Implementations are also reported.
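The valid-time idea from temporal databases can be sketched as storing each rule version with an interval, so the rule base can be queried "as of" any time point. Rule contents and years below are invented for illustration:

```python
# Hypothetical sketch: rules stored with valid-time intervals, so the
# rule base can be queried at any time point and its evolution inspected.

rules = [
    # (rule_id, text, valid_from, valid_to)  -- half-open interval [from, to)
    ("r1", "IF fever THEN flu",            2000, 2010),
    ("r1", "IF fever AND cough THEN flu",  2010, 9999),
    ("r2", "IF rash THEN allergy",         2005, 9999),
]

def rules_at(rules, year):
    """Return the rules valid at the given time point, keyed by rule id."""
    return {rid: text for rid, text, t0, t1 in rules if t0 <= year < t1}

old = rules_at(rules, 2008)
new = rules_at(rules, 2015)
```

Because superseded versions are kept rather than overwritten, the history of a rule (here, r1) remains queryable, which is the "knowledge about knowledge evolution" the abstract refers to.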
15.
Combining multiple knowledge bases
Combining knowledge present in multiple knowledge base systems into a single knowledge base is discussed. A knowledge-based system can be considered an extension of a deductive database in that it permits function symbols as part of the theory. Alternative knowledge bases that deal with the same subject matter are considered. The authors define the concept of combining knowledge present in a set of knowledge bases and present algorithms to maximally combine them so that the combination is consistent with respect to the integrity constraints associated with the knowledge bases. For this, the authors define the concept of maximality and prove that the algorithms presented combine the knowledge bases to generate a maximal theory. The authors also discuss the relationships between combining multiple knowledge bases and the view update problem.
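The flavor of consistency-preserving combination can be sketched with a greedy pass and a toy integrity constraint. This is not the authors' algorithm (a genuine maximality guarantee needs more than one greedy order), just an illustration of the idea:

```python
# Hypothetical sketch: merge facts from several knowledge bases, keeping
# only additions that leave the merged set consistent with a toy
# integrity constraint (no atom asserted both plain and negated).

def consistent(facts):
    """Toy integrity check: reject sets containing f and 'not f' together."""
    return not any(("not " + f) in facts for f in facts)

def combine(kbs):
    merged = set()
    for kb in kbs:
        for fact in sorted(kb):               # deterministic iteration order
            if consistent(merged | {fact}):
                merged.add(fact)
    return merged

kb1 = {"bird(tweety)", "flies(tweety)"}
kb2 = {"penguin(tweety)", "not flies(tweety)"}
combined = combine([kb1, kb2])
```

Note the outcome depends on the order facts are considered; the paper's notion of maximality characterizes such order-dependent maximal consistent combinations.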
16.
The framework for representing domain ontologies presented in this paper extends existing ontological models and traditional frame-based formalisms. This work was motivated by the representational challenges posed by the domains of experimental sciences (biology, chemistry, physics) and the task of intelligent text retrieval. A detailed ontology for the field of experimental molecular biology is presented, which is used to illustrate the need for and application of the features of the framework. An extended frame-based formalism is defined to support these features. The ability of the framework to support intelligent retrieval from a knowledge base of molecular-biology research papers is demonstrated by providing answers to queries that could not be fully answered using previous approaches. The extensions to the ontological framework include: category conversions, processes that change the category or identity of their participants; object histories, to track substances through a series of experimental processes, including category conversions; object complexes, temporary configurations of objects with properties of their own; and process complexes, groups or sequences of interrelated actions that comprise an experimental technique or procedure. Features of the frame-based formalism include: slot groups, for identifying sets of relations that license common inferences; and open-filler classes, which combine knowledge of likely slot values with the ability to handle unexpected values. Evaluation techniques used to assess the adequacy of the ontology are presented: the ontology's conceptual coverage of the domain, its potential usefulness in improving the quality of query answering, and its formal consistency and reusability by the knowledge-sharing community are evaluated.
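Two of the listed features, open-filler slots and category conversions, can be sketched minimally. All class, slot, and value names below are hypothetical, chosen only to echo the molecular-biology flavor of the domain:

```python
# Hypothetical sketch of two framework features: an open-filler slot that
# accepts unexpected values while recording whether the value was a known
# likely filler, and a category conversion that changes a frame's identity.

class Frame:
    def __init__(self, category, **slots):
        self.category = category
        self.slots = slots

KNOWN_SOLVENTS = {"water", "ethanol"}   # assumed set of likely fillers

def fill_solvent(frame, value):
    """Open-filler slot: accept any value, flag whether it was expected."""
    frame.slots["solvent"] = value
    frame.slots["solvent_known"] = value in KNOWN_SOLVENTS

def convert_category(frame, new_category):
    """Category conversion: a process changes its participant's category."""
    frame.category = new_category

rxn = Frame("digestion", enzyme="EcoRI", substrate="plasmid DNA")
fill_solvent(rxn, "water")
known = rxn.slots["solvent_known"]
convert_category(rxn, "digested-sample")
```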
17.
Updating knowledge bases II
We consider the problem of updating a knowledge base, where a knowledge base is realised as a (logic) program. In a previous paper, we presented procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also proved various properties of the procedures, including their correctness. Here we present mutually recursive versions of the update procedures and prove their correctness and other properties. We then generalise the procedures so that we can update an (arbitrary) program with an (arbitrary) formula. The correctness of the update procedures for programs is also proved.
18.
19.
Verification of non-monotonic knowledge bases
Neli P. Zlatareva 《Decision Support Systems》1997,21(4):253-261
Non-monotonic Knowledge-Based Systems (KBSs) must undergo quality assurance procedures for the following two reasons: (i) belief revision (if such is provided) cannot always guarantee the structural correctness of the knowledge base, and in certain cases may introduce new semantic errors in the revised theory; (ii) non-monotonic theories may have multiple extensions, and some types of functional errors which do not violate structural properties of a given extension are hard to detect without testing the overall performance of the KBS. This paper presents an extension of the distributed verification method, which is meant to reveal structural and functional anomalies in non-monotonic KBSs. Two classes of anomalies are considered: (i) structural anomalies which manifest themselves within a given extension (such as logical inconsistencies, structural incompleteness, and intractabilities caused by circular rule chains), and (ii) functional anomalies related to the overall performance of the KBS (such as the existence of complementary rules and some types of rule subsumptions). The corresponding verification tests are presented and illustrated on an extended example.
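One of the structural checks mentioned, detecting circular rule chains, reduces to cycle detection in the rule dependency graph. A minimal depth-first sketch (rule names are hypothetical; this is not the paper's verification method itself):

```python
# Hypothetical sketch of one structural check: detect circular rule
# chains via depth-first search over the rule dependency graph.

rules = {
    # head -> atoms its body depends on
    "a": ["b"],
    "b": ["c"],
    "c": ["a"],      # closes the cycle a -> b -> c -> a
    "d": ["a"],
}

def has_cycle(deps):
    visiting, done = set(), set()

    def visit(node):
        if node in visiting:
            return True                   # back edge: circular chain found
        if node in done or node not in deps:
            return False                  # finished node, or a base atom
        visiting.add(node)
        if any(visit(n) for n in deps[node]):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in list(deps))

cyclic = has_cycle(rules)
acyclic = has_cycle({"a": ["b"], "b": []})
```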
20.