Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
Many formalisms for reasoning about knowing commit an agent to being logically omniscient. Logical omniscience is an unrealistic principle on which to build a real-world agent, since it commits the agent to knowing infinitely many things. A number of formalizations of knowledge have been developed that do not ascribe logical omniscience to agents. With few exceptions, these approaches are modifications of the possible-worlds semantics. In this paper we use a combination of several general techniques for building non-omniscient reasoners. First we provide for the explicit representation of notions such as problems, solutions, and problem solving activities, notions which are usually left implicit in discussions of autonomous agents. A second technique is to take explicitly into account the notion of resource when we formalize reasoning principles. We use the notion of resource to describe interesting principles of reasoning that are used for ascribing knowledge to agents. For us, resources are abstract objects. We make extensive use of ordering and inaccessibility relations on resources, but we do not find it necessary to define a metric. Using principles about resources without using a metric is one of the strengths of our approach. We describe the architecture of a reasoner, built from a finite number of components, that solves a puzzle involving reasoning about knowing by explicitly using the notion of resource. Our approach allows the use of axioms about belief ordinarily used in problem solving, such as axiom K of modal logic, without being forced to attribute logical omniscience to any agent. In particular we address the issue of how we can use resource-unbounded (e.g., logically omniscient) reasoning to attribute knowledge to others without introducing contradictions. We do this by showing how omniscient reasoning can be introduced as a conservative extension over resource-bounded reasoning.

2.
The role of abduction in database view updating
The problem of view updating in databases consists in modifying the extension of view relations (i.e., relations defined in terms of base ones) by transforming only the content of the extensional database, i.e., the extensional representation of base relations. This task is non-deductive in nature, and its relationships with non-monotonic reasoning, and specifically with abduction, have recently been pointed out. In this paper we investigate the role of abduction in view updating, singling out similarities and differences between view updating and abduction. View updating is regarded as a two-step process: first, view definitions (and constraints) are used to reduce a view update into updates on base relations; then, the content of the extensional database is taken into account to determine the actual transformations to be performed. The first step is abductive in nature; we apply to this step a definition of abduction based on deduction, which characterizes by means of a single logical formula the conditions on base predicates that accomplish an update request. We then show how, in the second step, the set of transactions to be performed can be obtained from the formula generated in the first step. We provide a formal result showing the correctness of the approach.
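As a toy illustration of this two-step scheme (a minimal sketch, not the paper's deductive characterization; the view definition, relation names, and set representation are all invented for the example), consider a view defined as a conjunction of base relations:

```python
# Step 1 (abductive): reduce a view insertion into conditions on base tuples.
# Step 2: consult the extensional database to turn conditions into a transaction.
# All names here are illustrative, not the paper's formalism.

def abduce_insert(view_def, tuple_):
    """Step 1: 'insert tuple_ into the view' holds if every base relation
    in the view definition contains the tuple."""
    return {(rel, tuple_) for rel in view_def}

def transaction(conditions, edb):
    """Step 2: only conditions not already satisfied by the stored
    extension become insertions in the transaction."""
    return {("insert", rel, t)
            for rel, t in conditions if t not in edb.get(rel, set())}

# View v(X) defined as p(X) AND q(X); request: insert 'a' into v.
view_v = ["p", "q"]
edb = {"p": {"a", "b"}, "q": {"b"}}

conds = abduce_insert(view_v, "a")   # {('p', 'a'), ('q', 'a')}
print(transaction(conds, edb))       # {('insert', 'q', 'a')} -- p(a) already holds
```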

3.
We show how the ntcc calculus, a model of temporal concurrent constraint programming capable of modeling asynchronous and non-deterministic timed behavior, can be used for modeling real musical processes. In particular, we show how the expressiveness of ntcc allows one to implement complex interactions among musical processes handling different kinds of partial information. The ntcc calculus integrates two dimensions of soft computing: a horizontal dimension dealing with partial information and a vertical one in which non-determinism comes into play. This integration is an improvement over constraint satisfaction and concurrent constraint programming models, allowing a more natural representation of a variety of musical processes. We use the non-determinism facility of ntcc to build weaker representations of musical processes that greatly simplify the formal expression and analysis of their properties. We argue that this modeling strategy provides a runnable specification for music problems that eases the task of formally reasoning about them. We show how the linear temporal logic associated with ntcc gives a very expressive setting for formally proving the existence of interesting musical properties of a process. We give examples of musical specifications in ntcc and use the linear temporal logic for proving properties of a realistic musical problem.

4.
Intelligent behavior of database and knowledge base management systems is often seen as restricted to comfortable support for query answering (including limited forms of reasoning) and navigating within the stored data. However, active notification of clients about changes in the database is an important requirement for advanced interaction between the database and its client applications, which usually hold (derived) subsets of the database contents under their control. The incremental maintenance of such externally materialized views is an important open problem. In addition to some necessary changes to the known view maintenance procedures, a way of translating updates through an API and a way for clients to accept such updates have to be defined. We describe the main features of a solution to this problem implemented in the knowledge base server ConceptBase.
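The flavor of such active notification can be sketched as follows (a minimal observer-style sketch, assuming a callback-based client interface; this is illustrative and not ConceptBase's actual API):

```python
# Clients register a view predicate with the server; on each base update the
# server pushes only the matching delta instead of having clients re-query.

class ViewServer:
    def __init__(self):
        self.facts = set()
        self.subscriptions = []          # (predicate, callback) pairs

    def register_view(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def insert(self, fact):
        if fact in self.facts:
            return                       # no change, no notification
        self.facts.add(fact)
        # Incremental maintenance: push the delta to views it affects.
        for predicate, callback in self.subscriptions:
            if predicate(fact):
                callback(("insert", fact))

server = ViewServer()
server.register_view(lambda f: f[0] == "employee",
                     lambda delta: print("client view updated:", delta))
server.insert(("employee", "ada"))   # client is notified of the delta
server.insert(("project", "x"))      # outside the view: no notification
```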

5.
General Convergence Results for Linear Discriminant Updates
The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure-of-progress construction that gives insight into when and how such algorithms converge. The measure-of-progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing ones. When applied to known algorithms, our method automatically produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
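For concreteness, here is a minimal sketch of the two endpoint update rules the quasi-additive class interpolates between, in the usual mistake-driven loop (the learning rate, initial weights, and toy data are illustrative assumptions, not values from the paper):

```python
import numpy as np

def perceptron_update(w, x, y, eta=1.0):
    # Additive update on a mistake: w <- w + eta * y * x
    return w + eta * y * x

def winnow_update(w, x, y, eta=1.0):
    # Winnow-style multiplicative update: w_i <- w_i * exp(eta * y * x_i)
    return w * np.exp(eta * y * x)

def train(update, X, Y, dim):
    w = np.ones(dim)                 # all-ones start suits the multiplicative rule
    mistakes = 0
    for x, y in zip(X, Y):           # labels y in {-1, +1}
        if y * np.dot(w, x) <= 0:    # mistake-driven: update only on errors
            w, mistakes = update(w, x, y), mistakes + 1
    return w, mistakes

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([1, -1, 1])
print(train(perceptron_update, X, Y, 2))
print(train(winnow_update, X, Y, 2))
```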

6.
We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. Our main result shows that the training problem for 2-cascade neural nets (which have only two non-input nodes, one of which is hidden) is NP-complete, which implies that finding an optimal net (in terms of the number of non-input units) consistent with a set of examples is also NP-complete. This result also demonstrates a surprising gap between the computational complexities of one-node (perceptron) and two-node neural net training problems, since the perceptron training problem can be solved in polynomial time by linear programming techniques. We conjecture that training a k-cascade neural net, which is a classical threshold network training problem, is also NP-complete for each fixed k ≥ 2. We also show that the problem of finding an optimal perceptron (in terms of the number of non-zero weights) consistent with a set of training examples is NP-hard. Our neural net learning model encapsulates the idea of modular neural nets, a popular approach to overcoming the scaling problem in training neural nets. We investigate how much easier the training problem becomes if the class of concepts to be learned is known a priori and the net architecture is allowed to be sufficiently non-optimal. Finally, we classify several neural net optimization problems within the polynomial-time hierarchy.
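The contrast with the one-node case can be made concrete: finding a consistent perceptron is a linear programming feasibility problem. A minimal sketch, assuming a margin normalization of 1 (any positive constant works for separable data):

```python
import numpy as np
from scipy.optimize import linprog

def train_perceptron_lp(X, y):
    """Find (w, b) with y_i * (w . x_i + b) >= 1 for all examples, as an
    LP feasibility problem with a dummy (zero) objective."""
    n, d = X.shape
    # Variables: [w_1..w_d, b]; constraints: -y_i * (x_i . w + b) <= -1.
    A_ub = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))   # weights may be negative
    return (res.x[:d], res.x[d]) if res.success else None

X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([-1, 1, 1, 1])            # the OR function: linearly separable
print(train_perceptron_lp(X, y))       # a separating (w, b); None for e.g. XOR
```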

7.
In this essay I will consider two theses that are associated with Frege, and will investigate the extent to which Frege really believed them. Much of what I have to say will come as no surprise to scholars of the historical Frege. But Frege is not only a historical figure; he also occupies a site on the philosophical landscape that has allowed his doctrines to seep into the subconscious water table. And scholars in a wide variety of different scholarly establishments then sip from these doctrines. I believe that some Frege-interested philosophers at various of these establishments might find my conclusions surprising. Some of these philosophical establishments have arisen from an educational milieu in which Frege is associated with some specific doctrine at the expense of not even being aware of other milieux where other specific doctrines are given sole prominence. The two theses which I will discuss illustrate this point. Each of them is called Frege's Principle, but by philosophers from different milieux. By calling them milieux I do not want to convey the idea that they are each located at some specific socio-politico-geographico-temporal location. Rather, it is a matter of their each being located at different places on the intellectual landscape. For this reason one might (and I sometimes will) call them (interpretative) traditions.

8.
9.
In traditional approaches to object-oriented programming, objects are active, while relations between them are passive. The activeness of an object reveals itself when the object invokes a method (function) as a reaction to a message from another object (or itself). While this model is suitable for some tasks, like arranging interactions between windows, widgets and the end-user in a typical GUI environment, it is not appropriate for others. Business application development is one example. In this domain, relations between conceptual objects are at least as important as the objects themselves, and a more appropriate model for this field would be one where relations are active while objects are passive. A version of such a model is presented in this paper. The model considers a system as consisting of a set of objects, a code of laws, and a set of connectors, each connector hanging on a group of objects that must obey a certain law. The formal logical semantics of this model is presented as a way of analyzing the set of all possible trajectories of all possible systems; the analysis allows one to differentiate valid trajectories from invalid ones. The procedural semantics is presented as a state machine that, given an initial state, generates all possible trajectories that can be derived from this state. This generator can be considered a model of a connector scheduler that allows various degrees of parallelism, from sequential execution to the maximum possible parallelism. In conclusion, a programming language that could be appropriate for the proposed computing environment is discussed, and the problems of applying the model to the business domain are outlined.

10.
Xiang, Y., Wong, S.K.M., Cercone, N. Machine Learning, 1997, 26(1): 65-92
Several scoring metrics are used in different search procedures for learning probabilistic networks. We study the properties of cross entropy in learning a decomposable Markov network. Though entropy and related scoring metrics have been widely used, their microscopic properties and asymptotic behavior in a search have not been analyzed. We present such a microscopic study of a minimum entropy search algorithm, and show that it learns an I-map of the domain model when the data size is large. Search procedures that modify a network structure one link at a time have been commonly used for efficiency. Our study indicates that a class of domain models cannot be learned by such procedures. This suggests that prior knowledge about the problem domain, together with a multi-link search strategy, would provide an effective way to uncover many domain models.
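As a small illustration of an entropy-style link score (illustrative only, not the paper's exact metric), the empirical mutual information between two variables measures the cross-entropy reduction obtained by linking them:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical I(X;Y) from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# X paired with a noisy copy of itself scores high; X paired with a
# constant (independent) variable scores exactly zero.
data = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0)]
print(mutual_information(data))                      # > 0: worth linking
print(mutual_information([(x, 9) for x, _ in data])) # 0.0: no link warranted
```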

11.
Representing diagnosis knowledge
This paper considers the representation problem: namely, how to go from an abstract problem to a formal representation of that problem. We consider this for two conceptions of logic-based diagnosis, namely abductive and consistency-based diagnosis. We show how to represent diagnostic problems that can be conceptualised causally in each of the frameworks, and show that both representations of the same problems give the same answers. This is a local transformation that allows for an expressive (albeit propositional) language for giving the constraints on what symptoms and causes can coexist, including non-strict causation. This non-strict causation can be represented in each framework without adding special reasoning constructs to either framework. This is presented as a starting point for a study of the representation problem in diagnosis, rather than as an end in itself.

12.
There is growing interest in algorithms for processing and querying continuous data streams (i.e., data seen only once in a fixed order) with limited memory resources. In its most general form, a data stream is actually an update stream, i.e., one comprising data-item deletions as well as insertions. Such massive update streams arise naturally in several application domains (e.g., monitoring of large IP network installations or processing of retail-chain transactions). Estimating the cardinality of set expressions defined over several (possibly distributed) update streams is perhaps one of the most fundamental query classes of interest; as an example, such a query may ask: "what is the number of distinct IP source addresses seen in passing packets from both router R1 and R2 but not router R3?". Earlier work only addressed very restricted forms of this problem, focusing solely on the special case of insert-only streams and specific operators (e.g., union). In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel, hash-based synopsis data structure, termed the 2-level hash sketch. We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using only space that is significantly sublinear in the sizes of the streaming input (multi-)sets. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Finally, we propose an optimized, time-efficient stream synopsis (based on 2-level hash sketches) that provides similar, strong accuracy-space guarantees while requiring only guaranteed logarithmic maintenance time per update, thus making our methods applicable to truly rapid-rate data streams. Our results from an empirical study of our synopsis and estimation techniques verify the effectiveness of the approach.
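The core second-level trick can be sketched as follows (a simplified, illustrative reading of one sketch bucket, not the paper's exact layout or estimators): per bucket, a net item count plus one net counter per identifier bit survives deletions, and lets the bucket be tested for holding exactly one distinct item:

```python
class Bucket:
    """One first-level hash bucket with bit-level second-level counters."""
    BITS = 32

    def __init__(self):
        self.count = 0                 # net number of items in the bucket
        self.bit = [0] * self.BITS     # net count of items with bit j set

    def update(self, item, delta):     # delta = +1 for insert, -1 for delete
        self.count += delta
        for j in range(self.BITS):
            if (item >> j) & 1:
                self.bit[j] += delta

    def singleton(self):
        # Exactly one distinct item iff every bit counter is 0 or count.
        return self.count > 0 and all(b in (0, self.count) for b in self.bit)

    def extract(self):
        # Recover the identifier of a singleton bucket from its bit counters.
        return sum(1 << j for j, b in enumerate(self.bit) if b == self.count)

b = Bucket()
b.update(41, +1)
b.update(99, +1)
b.update(99, -1)                 # 99 inserted, then deleted: fully cancels
print(b.singleton(), b.extract())   # True 41
```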

13.
We present a new computational approach to the problem of detecting potential inconsistencies in knowledge bases. For such inconsistencies, we characterize the sets of possible input facts that will allow the knowledge-based system to derive the contradiction. The state-of-the-art approach to this problem is represented by the COVADIS system, which checks simple rule bases. The COVADIS approach relies on forward chaining and is strongly related to the way an ATMS computes labels for deducible facts. Here, we present an alternative computation method that employs backward chaining in a kind of abductive reasoning. This approach gives more focused reasoning, thus requiring much less computation and memory than COVADIS. Further, since our method is very similar to SLD-resolution, it is suitable for handling the more powerful knowledge base form represented by Horn clause bases. Finally, our method is easily extended to uncertain knowledge bases, assuming that the uncertainty calculus is modeled by possibilistic logic. This extension allows us to model the effect of user-defined belief thresholds for inference chains.
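The backward-chaining idea can be sketched on a tiny propositional Horn rule base (a minimal sketch; the rules, the "askable" input facts, and the naive set-union enumeration are invented for the example, not COVADIS or the paper's actual procedure):

```python
from itertools import product

RULES = {                     # head: list of alternative rule bodies
    "false":  [["hot", "frozen"]],
    "hot":    [["summer"], ["oven_on"]],
    "frozen": [["winter"]],
}
ASKABLE = {"summer", "oven_on", "winter"}   # possible input facts

def explanations(goal):
    """Backward-chain from goal, collecting the sets of input facts
    from which the goal becomes derivable."""
    if goal in ASKABLE:
        return [frozenset([goal])]
    results = []
    for body in RULES.get(goal, []):
        # Every subgoal must be explained; combine one explanation each.
        for combo in product(*(explanations(g) for g in body)):
            results.append(frozenset().union(*combo))
    return results

# Each set is a combination of inputs that makes the rule base inconsistent.
print(explanations("false"))
# [frozenset({'summer', 'winter'}), frozenset({'oven_on', 'winter'})]
```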

14.
We describe an algorithm for finding a minimum spanning tree of the weighted complete graph induced by a set of n points in Euclidean d-space. The algorithm requires nearly linear expected time for points that are independently uniformly distributed in the unit d-cube. The first step of the algorithm is the spiral search procedure described by Bentley et al. [BWY82] for finding a supergraph of the MST that has O(n) edges. (The constant factor in the bound depends on d.) The next step is that of sorting the edges of the supergraph by weight using a radix distribution, or bucket, sort. These steps require linear expected time. Finally, Kruskal's algorithm is used with the sorted edges, requiring O(nα(cn, n)) time in the worst case, with c > 6. Since the function α(cn, n) grows very slowly, this step requires linear time for all practical purposes. This result improves the previous best O(n log log* n), and employs a much simpler algorithm. Also, this result demonstrates the robustness of bucket sorting, which requires O(n) expected time in this case despite the probabilistic dependence among the edge weights.
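A minimal sketch of the last two steps, with the spiral-search supergraph replaced by a hand-written edge list (the bucket boundaries and the union-find variant are ordinary choices, not specifics from the paper):

```python
def bucket_sort(edges, num_buckets):
    """Distribute (weight, u, v) edges into equal-width buckets, then sort
    each small bucket; expected near-linear for well-spread weights."""
    lo = min(w for w, _, _ in edges)
    hi = max(w for w, _, _ in edges)
    width = (hi - lo) / num_buckets or 1.0
    buckets = [[] for _ in range(num_buckets)]
    for e in edges:
        buckets[min(int((e[0] - lo) / width), num_buckets - 1)].append(e)
    return [e for b in buckets for e in sorted(b)]

def kruskal(n, sorted_edges):
    parent = list(range(n))
    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = v = parent[parent[v]]
        return v
    mst = []
    for w, u, v in sorted_edges:
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two components: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(1.5, 0, 1), (0.3, 1, 2), (2.7, 0, 2), (0.9, 2, 3)]
print(kruskal(4, bucket_sort(edges, num_buckets=4)))
# [(0.3, 1, 2), (0.9, 2, 3), (1.5, 0, 1)]
```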

15.
Resource-Bounded Paraconsistent Inference
In this paper, a new framework for reasoning from inconsistent propositional belief bases is presented. A family of resource-bounded paraconsistent inference relations is introduced. Such inference relations are based on S-3 entailment, an inference relation logically weaker than the classical one and parametrized by a set S of propositional variables. The computational complexity of our relations is identified, and their logical properties are analyzed. Among the strong features of our framework is the fact that tractability is ensured each time |S| is bounded and a limited amount of knowledge is taken into account within the belief base. Furthermore, the binary connectives ∧ and ∨ behave in a classical manner. Finally, our framework is general enough to encompass several paraconsistent multi-valued logics (including S-3, J3 and its restrictions), the standard coherence-based approach to inconsistency handling (based on the selection of consistent subbases), and some signed systems for paraconsistent reasoning as specific cases.

16.
Artificial intelligence researchers have been designing representation systems for default and abductive reasoning. Logic programming researchers have been working on techniques to improve the efficiency of Horn clause deduction systems. This paper describes how one such default and abductive reasoning system (namely Theorist) can be translated into Horn clauses (with negation as failure), so that we can combine the clarity of abductive reasoning systems with the efficiency of Horn clause deduction systems. We thus show how the advances in expressive power pursued by artificial intelligence workers can directly utilise the advances in efficiency achieved by logic programming researchers. Actual code from a running system is given.

17.
When updating a knowledge base, several problems may arise. One of the most important is that of integrity constraint satisfaction. The classic approach to this problem has been to develop methods for checking whether a given update violates an integrity constraint. An alternative approach consists of trying to repair integrity constraint violations by performing additional updates that maintain knowledge base consistency. Another major problem in knowledge base updating is that of view updating, which determines how an update request should be translated into an update of the underlying base facts. We propose a new method for updating knowledge bases while maintaining their consistency. Our method can be used for both integrity constraint maintenance and view updating. It can also be combined with any integrity checking method for view updating and integrity checking. The kinds of updates handled by our method are: updates of base facts, view updates, updates of deductive rules, and updates of integrity constraints. Our method is based on events and transition rules, which explicitly define the insertions and deletions induced by a knowledge base update. Using these rules, an extension of the SLDNF procedure allows us to obtain all possible minimal ways of updating a knowledge base without violating any integrity constraint.
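The flavor of events and transition rules can be sketched for a single derived predicate (a minimal illustration in place of the paper's SLDNF machinery; the emp/staff/fired rule is invented for the example):

```python
# For the derived predicate emp(X) :- staff(X), not fired(X), the transition
# rules below explicitly give the insertion/deletion events on emp induced
# by a base-fact event, instead of recomputing the view from scratch.

def induced_events(staff, fired, base_event):
    kind, rel, x = base_event
    if (kind, rel) == ("insert", "staff") and x not in fired:
        return [("insert", "emp", x)]    # ins_emp(X) <- ins_staff(X), not fired(X)
    if (kind, rel) == ("insert", "fired") and x in staff:
        return [("delete", "emp", x)]    # del_emp(X) <- ins_fired(X), staff(X)
    if (kind, rel) == ("delete", "staff") and x not in fired:
        return [("delete", "emp", x)]
    if (kind, rel) == ("delete", "fired") and x in staff:
        return [("insert", "emp", x)]
    return []                            # event does not affect the view

staff, fired = {"ann", "bob"}, {"bob"}
print(induced_events(staff, fired, ("insert", "fired", "ann")))  # [('delete', 'emp', 'ann')]
print(induced_events(staff, fired, ("insert", "staff", "cat")))  # [('insert', 'emp', 'cat')]
```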

18.
We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight β^m, where β is a constant in [0, 1) and m is the number of mistakes made by the expert in the past. We show that it is better to use sums of binomials as weights. In particular, we present a deterministic algorithm using binomial weights that has a better worst-case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights naturally arise from a version space argument. We also show how both exponential and binomial weighting schemes can be used to make prediction algorithms robust against noise.
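For reference, the exponential-weights baseline that binomial weights improve on looks roughly like this (a minimal sketch; beta = 0.5 and the toy experts are illustrative choices, not values from the paper):

```python
def weighted_majority(expert_preds, mistakes, beta=0.5):
    """Predict with the weighted majority of {0,1} expert predictions,
    weighting each expert by beta ** (its past mistakes)."""
    w1 = sum(beta ** m for p, m in zip(expert_preds, mistakes) if p == 1)
    w0 = sum(beta ** m for p, m in zip(expert_preds, mistakes) if p == 0)
    return 1 if w1 >= w0 else 0

def run(stream, expert_fns, beta=0.5):
    mistakes = [0] * len(expert_fns)
    errors = 0
    for x, y in stream:
        preds = [f(x) for f in expert_fns]
        if weighted_majority(preds, mistakes, beta) != y:
            errors += 1
        mistakes = [m + (p != y) for m, p in zip(mistakes, preds)]
    return errors, mistakes

experts = [lambda x: 1, lambda x: 0, lambda x: x % 2]   # third expert is perfect here
stream = [(i, i % 2) for i in range(20)]
print(run(stream, experts))   # few master errors; the perfect expert stays at weight 1
```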

19.
Maximal word functions occur in data retrieval applications and have connections with ranking problems, which in turn were first investigated in relation to data compression [21]. By the maximal word function of a language L ⊆ Σ*, we mean the problem of finding, on input x, the lexicographically largest word belonging to L that is smaller than or equal to x. In this paper we present a parallel algorithm for computing maximal word functions for languages recognized by one-way nondeterministic auxiliary pushdown automata (and hence for the class of context-free languages). This paper is a continuation of a stream of research focusing on the problem of identifying properties other than membership which are easily computable for certain classes of languages. For a survey, see [24].
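A maximal word function is easy to visualize in the much simpler deterministic regular case (a minimal sequential sketch, restricted to words of the same length as x so that lexicographic comparison is position-by-position; the paper's parallel algorithm for auxiliary PDAs is far more general):

```python
def max_word_leq(x, delta, start, accepting, alphabet):
    """Largest word of length len(x) accepted by the DFA and <= x, or None."""
    n = len(x)
    # ok[k] = states from which an accepting state is reachable in exactly k steps
    ok = [set(accepting)]
    for _ in range(n):
        ok.append({q for q in delta if any(delta[q][a] in ok[-1] for a in alphabet)})

    def max_fill(q, k):
        # Greedily extend with the largest symbols that keep acceptance reachable.
        out = []
        for i in range(k, 0, -1):
            a = max(c for c in alphabet if delta[q][c] in ok[i - 1])
            out.append(a)
            q = delta[q][a]
        return "".join(out)

    best, q = None, start
    for i, xi in enumerate(x):
        # Branch off x with the largest feasible symbol below x[i]; a branch
        # at a later position always beats one at an earlier position.
        for a in sorted((c for c in alphabet if c < xi), reverse=True):
            if delta[q][a] in ok[n - i - 1]:
                best = x[:i] + a + max_fill(delta[q][a], n - i - 1)
                break
        q = delta[q][xi]
    return x if q in ok[0] else best   # x itself is accepted, else the best branch

# L = words over {a, b} with an even number of b's; 'ba' is not in L.
delta = {0: {"a": 0, "b": 1}, 1: {"a": 1, "b": 0}}
print(max_word_leq("ba", delta, 0, {0}, "ab"))   # 'aa'
```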

20.
The development of the Semantic Web will require agents to use common domain ontologies to facilitate communication of conceptual knowledge. However, the proliferation of domain ontologies may also result in conflicts between the meanings assigned to the various terms. That is, agents with diverse ontologies may use different terms to refer to the same meaning or the same term to refer to different meanings. Agents will need a method for learning and translating similar semantic concepts between diverse ontologies. Only recently have researchers diverged from the last decade's common-ontology paradigm to one involving agents that can share knowledge using diverse ontologies. This paper describes how we address this knowledge-sharing problem by introducing a methodology and algorithms for multi-agent knowledge sharing and learning in a peer-to-peer setting. We demonstrate how this approach enables multi-agent systems to assist groups of people in locating, translating, and sharing knowledge using our Distributed Ontology Gathering Group Integration Environment (DOGGIE), and describe our proof-of-concept experiments. DOGGIE synthesizes agent communication, machine learning, and reasoning for information sharing in the Web domain.
