Similar Documents
 10 similar documents found; search time: 0 ms
1.
This article considers some semantical properties of the postulates Weak Determinacy (WD) and Rational Contraposition (RC). In particular, we provide two representation theorems for preferential inference relations satisfying WD and RC, respectively. This solves two open problems presented by H. Bezzazi, D. Makinson and R. Pino Pérez in [Journal of Logic and Computation, 7, 1997].
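The preferential inference relations studied above can be illustrated with a small sketch (this is the standard KLM-style semantics, not the paper's own construction; the bird/penguin encoding and abnormality counts below are hypothetical): a conclusion follows preferentially from a premise if it holds in every *minimal* premise-world under a preference order.

```python
from itertools import product

# Hypothetical illustration of preferential inference (KLM-style):
# alpha |~ beta  iff  beta holds in every *minimal* model of alpha,
# minimality taken with respect to a strict preference on worlds.

def preferential_entails(worlds, preferred, alpha, beta):
    """worlds: iterable of states; preferred(u, v): True if u is
    strictly preferred to v; alpha, beta: predicates on worlds."""
    alpha_worlds = [w for w in worlds if alpha(w)]
    minimal = [w for w in alpha_worlds
               if not any(preferred(u, w) for u in alpha_worlds)]
    return all(beta(w) for w in minimal)

# Worlds are (bird, penguin, flies) truth assignments; prefer worlds
# with fewer "abnormalities": a bird that does not fly, or a penguin
# that does fly, each count as one abnormality.
worlds = list(product([False, True], repeat=3))

def abnormality(w):
    bird, penguin, flies = w
    return int(bird and not flies) + int(penguin and flies)

preferred = lambda u, v: abnormality(u) < abnormality(v)

# "bird" preferentially entails "flies" ...
print(preferential_entails(worlds, preferred,
                           lambda w: w[0], lambda w: w[2]))           # True
# ... but "bird and penguin" does not.
print(preferential_entails(worlds, preferred,
                           lambda w: w[0] and w[1], lambda w: w[2]))  # False
```

The second query shows the characteristic nonmonotonic behavior: strengthening the premise can defeat a previously valid conclusion.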

2.
The Current State of Research on Nonmonotonic Reasoning   Total citations: 1 (self-citations: 0; citations by others: 1)
1. Introduction  As early as 1959, McCarthy [1] observed that common sense and commonsense reasoning are difficult to handle, because a conclusion currently drawn in commonsense reasoning may later be retracted when new facts are added. This is the so-called "nonmonotonicity."
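The retraction phenomenon described above can be sketched with one default rule (a toy illustration, not taken from the article; the `tweety` atoms are hypothetical): a monotonic logic only ever gains conclusions as facts are added, whereas here a new fact withdraws one.

```python
# Toy nonmonotonic inference: "birds fly unless known not to".

def conclusions(facts):
    concluded = set(facts)
    # Default rule fires only while its exception is absent.
    if "bird(tweety)" in facts and "not_flies(tweety)" not in facts:
        concluded.add("flies(tweety)")
    return concluded

kb1 = {"bird(tweety)"}
kb2 = kb1 | {"not_flies(tweety)"}            # learn a new fact

print("flies(tweety)" in conclusions(kb1))   # True
print("flies(tweety)" in conclusions(kb2))   # False: conclusion withdrawn
```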

3.
4.
5.
6.
In this paper, I compare the accuracy, efficiency and stability of different numerical strategies for computing approximate solutions to the nonlinear rational expectations commodity market model. I find that polynomial and spline function collocation methods are superior to the space discretization, linearization and least squares curve-fitting methods that have been preferred by economists in the past.

7.
The task of generating minimal models of a knowledge base is at the computational heart of diagnosis systems like truth maintenance systems, and of nonmonotonic systems like autoepistemic logic, default logic, and disjunctive logic programs. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Ψ1, Ψ2, …, with the following properties: first, Ψ1 is the class of all Horn knowledge bases; second, if a knowledge base T is in Ψk, then T has at most k minimal models, and all of them may be found in time O(lk²), where l is the length of the knowledge base; third, for an arbitrary knowledge base T, we can find the minimum k such that T belongs to Ψk in time polynomial in the size of T; and, last, where K is the class of all knowledge bases, it is the case that K = ⋃k Ψk, that is, every knowledge base belongs to some class in the hierarchy. The algorithm is incremental, that is, it is capable of generating one model at a time.
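The base case Ψ1 can be sketched directly (an assumed encoding for illustration, not the paper's algorithm): a definite Horn knowledge base has a unique minimal model, computable by forward chaining.

```python
# Minimal model of a definite Horn KB by forward chaining.

def minimal_model(horn_kb):
    """horn_kb: list of (body, head) rules, with body a set of atoms.
    Facts are rules with an empty body."""
    model, changed = set(), True
    while changed:
        changed = False
        for body, head in horn_kb:
            if head not in model and body <= model:
                model.add(head)      # rule fires: add its head
                changed = True
    return model

kb = [(set(), "a"),                  # fact: a
      ({"a"}, "b"),                  # a -> b
      ({"a", "b"}, "c"),             # a & b -> c
      ({"d"}, "e")]                  # d -> e (never fires)
print(sorted(minimal_model(kb)))     # ['a', 'b', 'c']
```

The higher classes Ψk generalize this by allowing a bounded number of minimal models, each recoverable by a similar, incrementally restarted propagation.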

8.
Software metrics-based quality estimation models can be effective tools for identifying which modules are likely to be fault-prone or not fault-prone. The use of such models prior to system deployment can considerably reduce the likelihood of faults discovered during operations, hence improving system reliability. A software quality classification model is calibrated using metrics from a past release or similar project, and is then applied to modules currently under development. Subsequently, a timely prediction of which modules are likely to have faults can be obtained. However, software quality classification models used in practice may not provide a useful balance between the two misclassification rates, especially when there are very few faulty modules in the system being modeled. This paper presents, in the context of case-based reasoning, two practical classification rules that allow appropriate emphasis on each type of misclassification as per the project requirements. The suggested techniques are especially useful for high-assurance systems where faulty modules are rare. The proposed generalized classification methods emphasize the costs of misclassifications and the unbalanced distribution of the faulty program modules. We illustrate the proposed techniques with a case study that consists of software measurements and fault data collected over multiple releases of a large-scale legacy telecommunication system. In addition to investigating the two classification methods, a brief relative comparison of the techniques is also presented. It is indicated that the level of classification accuracy and model-robustness observed for the case study would be beneficial in achieving high software reliability of its subsequent system releases. Similar observations are made from our empirical studies with other case studies.
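A cost-sensitive, case-based classification rule in the spirit described above can be sketched as follows (the paper's actual rules differ; the metric vectors, neighborhood size, and cost ratio here are all hypothetical): a module is labeled fault-prone when the proportion of faulty modules among its nearest past cases exceeds a threshold derived from the ratio of misclassification costs.

```python
import math

def classify(case, library, k=3, cost_ratio=5.0):
    """library: list of (metrics_vector, is_faulty) past cases.
    cost_ratio = C(missed faulty module) / C(false alarm); a high
    ratio lowers the decision threshold, accepting more false
    alarms in exchange for fewer missed faults."""
    nearest = sorted(library, key=lambda cb: math.dist(case, cb[0]))[:k]
    faulty_frac = sum(is_faulty for _, is_faulty in nearest) / k
    threshold = 1.0 / (1.0 + cost_ratio)     # Bayes-style cutoff
    return faulty_frac >= threshold

# Hypothetical (lines_of_code, complexity) measurements.
library = [((10, 2), False), ((12, 3), False), ((11, 2), False),
           ((90, 40), True), ((85, 38), True)]
print(classify((88, 39), library))   # True: close to faulty cases
print(classify((11, 2),  library))   # False
```

Raising `cost_ratio` is how a high-assurance project would bias the rule toward flagging rare faulty modules, matching the unbalanced-distribution concern raised in the abstract.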

9.
10.
Graphical models have been widely applied to uncertain reasoning in knowledge-based systems. For many of the problems tackled, a single graphical model is constructed before individual cases are presented, and the model is used to reason about each new case. In this work, we consider a class of problems whose solution requires inference over a very large number of models that are impractical to construct a priori. We conduct a case study in the domain of vehicle monitoring and then generalize the approach taken. We show that the previously held negative belief on the applicability of graphical models to such problems is unjustified. We propose a set of techniques based on domain decomposition, model separation, model approximation, model compilation, and re-analysis to meet the computational challenges imposed by the combinatorial explosion. Experimental results on vehicle monitoring demonstrate good performance at near-real-time speed.
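The kind of per-case probabilistic query such systems answer can be illustrated on the smallest possible graphical model (a two-node Bayesian network with purely hypothetical numbers, unrelated to the vehicle-monitoring study): computing a posterior by enumeration.

```python
# Two-node Bayesian network: fault -> alarm.
p_fault = 0.01
p_alarm_given = {True: 0.95, False: 0.02}    # P(alarm | fault)

def posterior_fault(alarm=True):
    """P(fault | alarm) by enumerating the joint distribution."""
    joint = {f: (p_fault if f else 1 - p_fault) *
                (p_alarm_given[f] if alarm else 1 - p_alarm_given[f])
             for f in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(round(posterior_fault(), 3))   # 0.324
```

The techniques listed in the abstract exist precisely because enumeration like this explodes combinatorially once many interacting models and variables are involved.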


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号