2.
This paper introduces a novel logical framework for concept learning called brave induction, which uses brave inference for induction and is useful for learning from incomplete information. Brave induction is weaker than explanatory induction, which is normally used in inductive logic programming, and stronger than learning from satisfiability, a general setting of concept learning in clausal logic. We first investigate formal properties of brave induction, then develop an algorithm for computing hypotheses in full clausal theories. Next, we extend the framework to induction in nonmonotonic logic programs. We analyze the computational complexity of decision problems for induction on propositional theories. Finally, we provide examples of problem solving by brave induction in systems biology, requirements engineering, and multiagent negotiation.
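The core distinction can be sketched in propositional terms: a hypothesis H bravely explains an observation O when O holds in some model of B ∪ H, whereas explanatory induction requires O to hold in every model (entailment). A minimal model-enumeration sketch, assuming this reading of the abstract; the bird/flies example and all names are hypothetical:

```python
from itertools import product

def models(clauses, atoms):
    """Enumerate assignments over `atoms` satisfying every clause.
    A clause is a set of (atom, polarity) literals, read disjunctively."""
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if all(any(world[a] == pol for a, pol in clause) for clause in clauses):
            yield world

def explains(background, hypothesis, obs, atoms, mode="brave"):
    """Brave induction: obs holds in SOME model of B ∪ H.
    Explanatory induction: obs holds in EVERY model of B ∪ H."""
    vals = [w[obs] for w in models(background + hypothesis, atoms)]
    if not vals:                       # B ∪ H inconsistent: no explanation
        return False
    return any(vals) if mode == "brave" else all(vals)

# Hypothetical example: B states a bird exists; H says birds fly unless abnormal.
atoms = ["bird", "flies", "ab"]
B = [{("bird", True)}]                                   # fact: bird
H = [{("bird", False), ("flies", True), ("ab", True)}]   # bird -> flies v ab
print(explains(B, H, "flies", atoms, "brave"))           # True
print(explains(B, H, "flies", atoms, "explanatory"))     # False
```

H is a brave explanation of `flies` because some model of B ∪ H makes `flies` true, but it is not an explanatory one: the model where the bird is abnormal and does not fly also satisfies B ∪ H.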
4.
Chiaki Sakama, Katsumi Inoue, Taisuke Sato. Annals of Mathematics and Artificial Intelligence, 2021, 89(12): 1133-1153.
This paper introduces a novel approach to computing logic programming semantics. First, a propositional Herbrand base is represented in a vector...
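The abstract is truncated, but one plausible reading is that interpretations over the Herbrand base become 0/1 vectors and the immediate-consequence operator T_P becomes matrix arithmetic. A minimal sketch under that assumption, with a hypothetical three-atom program; the encoding shown here is illustrative, not the paper's actual construction:

```python
# Hypothetical definite program: p <- q.  q <- r.  r <- (a fact).
atoms = ["p", "q", "r"]
rules = [("p", ["q"]), ("q", ["r"]), ("r", [])]

idx = {a: i for i, a in enumerate(atoms)}
body = [[0] * len(atoms) for _ in rules]   # body[r][a] = 1 iff atom a is in rule r's body
head = [[0] * len(atoms) for _ in rules]   # head[r][a] = 1 iff atom a is rule r's head
for r, (h, bs) in enumerate(rules):
    head[r][idx[h]] = 1
    for b in bs:
        body[r][idx[b]] = 1

def tp(v):
    """One T_P step as vector arithmetic: rule r fires when every body atom
    is true in v (a fact's empty body fires unconditionally)."""
    fired = [sum(b * x for b, x in zip(body[r], v)) >= sum(body[r])
             for r in range(len(rules))]
    return [1 if any(fired[r] and head[r][j] for r in range(len(rules))) else 0
            for j in range(len(atoms))]

# Least model = fixpoint of T_P from the all-zero interpretation.
v = [0] * len(atoms)
while tp(v) != v:
    v = tp(v)
print([a for a in atoms if v[idx[a]]])   # ['p', 'q', 'r']
```

Starting from the empty interpretation, the fact `r` fires first, then `q`, then `p`, reaching the least model in three steps.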
5.
We propose a novel framework for learning normal logic programs from transitions of interpretations. Given a set of pairs of interpretations (I, J) such that J = T_P(I), where T_P is the immediate consequence operator, we infer the program P. The learning framework can be applied repeatedly to identify Boolean networks from basins of attraction. Two algorithms have been implemented for this learning task and are compared on examples from the biological literature. We also show how to incorporate background knowledge and inductive biases, then apply the framework to learning transition rules of cellular automata.
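The learning task can be sketched concretely: from all (I, J) transition pairs of a Boolean network, build one most-specific rule per positive transition, then generalize by dropping body literals while the rules still reproduce every transition. This is a naive sketch in the spirit of learning from interpretation transitions, not the paper's actual algorithm; the two-gene network is hypothetical:

```python
from itertools import product

atoms = ["p", "q"]

def step(state):
    """Hypothetical 2-gene Boolean network: p(t+1) = q(t), q(t+1) = not p(t)."""
    p, q = state
    return (q, not p)

transitions = [(s, step(s)) for s in product([False, True], repeat=len(atoms))]

def predict(rules, src):
    """Apply learned rules (a list of conjunctive bodies per atom) to a state."""
    state = dict(zip(atoms, src))
    return {a: any(all(state[x] == v for x, v in body) for body in rules[a])
            for a in atoms}

def learn(transitions):
    """Most-specific rules (whole source state as body) per positive transition,
    then greedy literal dropping while all transitions are still reproduced."""
    rules = {a: [] for a in atoms}
    for src, dst in transitions:
        for k, a in enumerate(atoms):
            if dst[k]:
                rules[a].append(list(zip(atoms, src)))
    consistent = lambda: all(predict(rules, s) == dict(zip(atoms, d))
                             for s, d in transitions)
    for a in atoms:
        for body in rules[a]:
            for lit in list(body):
                body.remove(lit)          # try generalizing
                if not consistent():
                    body.append(lit)      # needed: put it back
    return rules

learned = learn(transitions)
```

On this network the learner recovers the intended rules: every body for `p` reduces to the single literal q = true, and every body for `q` to p = false.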
10.
To explain observations from nonmonotonic background theories, one often needs to remove some hypotheses as well as add others. Moreover, some observations should not be explained, while others are to be explained. To formalize these situations, extended abduction was introduced by Inoue and Sakama (1995); it generalizes traditional abduction in that it can compute negative explanations by removing hypotheses, and anti-explanations to unexplain negative observations. In this paper, we propose a computational mechanism for extended abduction. When a background theory is written as a normal logic program, we introduce its transaction program for computing extended abduction. A transaction program is a set of non-deterministic production rules that declaratively specify the addition and deletion of abductive hypotheses. Abductive explanations are then computed as the fixpoint of a transaction program using a bottom-up model generation procedure. The correctness of the proposed procedure is shown for the class of acyclic covered abductive logic programs. In the context of deductive databases, a transaction program provides a declarative specification of database updates.
This revised version was published online in June 2006 with corrections to the Cover Date.
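The idea of explanations that both add and remove hypotheses can be sketched by brute force for propositional definite programs: search over sets of abducibles to insert and to delete until every positive observation follows from the least model and no negative observation does. This is a naive search sketch, not the paper's transaction-program mechanism, and the bird/injured example is hypothetical:

```python
from itertools import chain, combinations

def least_model(rules, facts):
    """Least model of a definite program; rules are (head, [body atoms]) pairs."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def extended_abduce(rules, facts, abducibles, positive, negative):
    """Find (E, F): add E from the abducibles, remove F from the abducible
    facts, so all positive observations hold and no negative one does."""
    def subsets(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    addable = abducibles - facts
    removable = abducibles & facts
    for E in subsets(addable):
        for F in subsets(removable):
            m = least_model(rules, (facts - set(F)) | set(E))
            if all(p in m for p in positive) and not any(n in m for n in negative):
                return set(E), set(F)
    return None

# Hypothetical theory: birds fly; an injured bird counts as broken.
rules = [("flies", ["bird"]), ("broken", ["injured"])]
facts = {"bird", "injured"}
result = extended_abduce(rules, facts, {"injured"},
                         positive=["flies"], negative=["broken"])
```

Here the negative observation `broken` is unexplained by a negative explanation: removing the abducible fact `injured` (E = empty, F = {injured}) keeps `flies` derivable while `broken` no longer follows.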