Similar Documents
20 similar documents found (search time: 620 ms)
1.
The main stream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump; the inference is not conclusive. To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as the applicability of rules, what is assumed by rules, priority between rules, and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic, but the more far-reaching the extension, the greater the problems it meets. It seems impossible to give a logical reconstruction of the totality of legal argumentation. The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what error means. The main difference, however, is that the conventions of science are much more precise than those of legal scholarship. Consequently, determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are no more certain than the contested theory itself. A strict and formal logical analysis cannot give us a full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of precision.
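The interplay of rule applicability and priority that the abstract describes can be made concrete with a toy sketch (ours, not a system from the text), in which a prima facie conclusion stands unless a strictly higher-priority applicable rule supports its negation:

```python
# Toy sketch (ours, not a system from the text): a prima facie conclusion
# holds if some applicable rule supports it and no strictly higher-priority
# applicable rule supports its negation.

def conclude(facts, rules):
    """facts: set of atoms; rules: list of (priority, premises, conclusion),
    where a conclusion is an atom 'p' or its negation '-p'."""
    applicable = [r for r in rules if r[1] <= facts]
    conclusions = set()
    for priority, _premises, concl in applicable:
        negation = concl[1:] if concl.startswith('-') else '-' + concl
        defeated = any(p2 > priority and c2 == negation
                       for (p2, _, c2) in applicable)
        if not defeated:
            conclusions.add(concl)
    return conclusions
```

Adding a fact can withdraw an earlier conclusion, which is exactly the nonmonotonic "jump" the abstract speaks of: the inference is defeasible, not conclusive.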

2.
A process is called computable if it can be modelled by a transition system that has a recursive structure—implying finite branching. The equivalence relation between transition systems considered is strong bisimulation equivalence. The transition systems studied in this paper can be associated to processes specified in common specification languages such as CCS, LOTOS, ACP and PSF. As a means for defining transition systems up to bisimulation equivalence, the specification language μCRL is used. Two simple fragments of μCRL are singled out, yielding universal expressivity with respect to recursive and primitive recursive transition systems. For both these domains the following properties are classified in the arithmetical hierarchy: bisimilarity and perpetuity (both Π⁰₁), regularity (having a bisimilar, finite representation, Σ⁰₂), acyclic regularity (Σ⁰₁), and deadlock freedom (distinguishing deadlock from successful termination, Π⁰₁). Finally, it is shown that in the domain of primitive recursive transition systems over a fixed, finite label set, a genuine hierarchy in bisimilarity can be defined by the complexity of the witnessing relations, which extends r.e. bisimilarity. Hence, primitive recursive transition systems already form an interesting class.
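For finite transition systems — the decidable base case of the classifications above — strong bisimilarity can be decided by shrinking the full relation to its greatest fixpoint. A minimal, unoptimized sketch (our illustration; the paper's results concern infinite, recursive systems):

```python
# Unoptimized sketch (ours): decide strong bisimilarity of two states of a
# *finite* labelled transition system by refining the full relation down to
# the greatest fixpoint of the bisimulation transfer condition.

def _transfers(p, q, trans, rel):
    """Every move p --a--> p2 must be matched by some q --a--> q2
    with (p2, q2) still in rel."""
    for (x, a, p2) in trans:
        if x != p:
            continue
        if not any(y == q and b == a and (p2, q2) in rel
                   for (y, b, q2) in trans):
            return False
    return True

def bisimilar(states, trans, s, t):
    """trans: set of (source, label, target) triples."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            p, q = pair
            if not (_transfers(p, q, trans, rel)
                    and _transfers(q, p, trans, rel)):
                rel.discard(pair)
                changed = True
    return (s, t) in rel
```

For example, a state that does one `a`-step and stops is bisimilar to another such state, but not to a state that can loop on `a` forever.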

3.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast. On a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to a binder. The program can compile an Automath text into one big Automath single-line-style λ-term. It outputs such a term using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn are different from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.

4.
We describe the architecture of a sentence generation module that maps a language-neutral deep representation to a language-specific sentence-semantic specification, which is then handed over to a conventional front-end generator. Lexicalization is seen as the main task in the mapping step, and we specifically examine the role of verb semantics in the process. By separating the various kinds of knowledge involved, for related languages (such as English and German) the task of multilingual sentence generation can be treated as a variant of the monolingual paraphrasing problem.

5.
Pseudo-Linear Scale-Space Theory
It has been observed that linear, Gaussian scale-space and nonlinear, morphological erosion and dilation scale-spaces generated by a quadratic structuring function have a lot in common. Indeed, far-reaching analogies have been reported, which seems to suggest the existence of an underlying isomorphism. However, an actual mapping appears to be missing. In the present work a one-parameter isomorphism is constructed in closed form, which encompasses linear and both types of morphological scale-spaces as (non-uniform) limiting cases. The unfolding of the one-parameter family provides a means to transfer known results from one domain to the other. Moreover, for any fixed and non-degenerate parameter value one obtains a novel type of pseudo-linear multiscale representation that is, in a precise way, in between the familiar ones. This is of interest in its own right, as it enables one to balance the pros and cons of linear versus morphological scale-space representations in any particular situation.
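As a point of reference for one of the two scale-space types being unified, here is a minimal 1-D sketch (ours, with an invented signal) of a dilation scale-space generated by the quadratic structuring function b_t(z) = −z²/(4t); the linear counterpart would replace the max by Gaussian convolution:

```python
# 1-D illustration (ours; signal invented) of a morphological dilation
# scale-space: (f (+) b_t)(x) = max_y [ f(y) - (x - y)^2 / (4 t) ],
# i.e. dilation by the quadratic structuring function b_t(z) = -z^2/(4t).

def dilate_quadratic(f, t):
    n = len(f)
    return [max(f[y] - (x - y) ** 2 / (4.0 * t) for y in range(n))
            for x in range(n)]
```

As the scale t grows, maxima of the signal spread out quadratically, mirroring the way Gaussian blur spreads mass in the linear scale-space.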

6.
The degree to which information sources are pre-processed by Web-based information systems varies greatly. In search engines like Altavista, little pre-processing is done, while in knowledge integration systems, complex site-specific wrappers are used to integrate different information sources into a common database representation. In this paper we describe an intermediate point between these two models. In our system, information sources are converted into a highly structured collection of small fragments of text. Database-like queries to this structured collection of text fragments are approximated using a novel logic called WHIRL, which combines inference in the style of deductive databases with ranked retrieval methods from information retrieval (IR). WHIRL allows queries that integrate information from multiple Web sites, without requiring the extraction and normalization of object identifiers that can be used as keys; instead, operations that in conventional databases require equality tests on keys are approximated using IR similarity metrics for text. This leads to a reduction in the amount of human engineering required to field a knowledge integration system. Experimental evidence is given showing that many information sources can be easily modeled with WHIRL, and that inferences in the logic are both accurate and efficient.
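The abstract's key move — replacing equality tests on keys with IR similarity metrics — can be illustrated with a small TF-IDF cosine-similarity sketch (ours, not WHIRL's implementation; the strings are invented):

```python
# Sketch (ours, not WHIRL's implementation; strings invented) of a
# similarity join: instead of an equality test on keys, score record pairs
# by TF-IDF cosine similarity of their textual names.
import math
from collections import Counter

def tfidf_vectors(docs):
    n = len(docs)
    df = Counter()                       # document frequency per token
    for d in docs:
        df.update(set(d.lower().split()))
    vecs = []
    for d in docs:
        tf = Counter(d.lower().split())
        v = {w: tf[w] * math.log(n / df[w]) for w in tf}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vecs.append({w: x / norm for w, x in v.items()})
    return vecs

def cosine(u, v):
    return sum(x * v.get(w, 0.0) for w, x in u.items())
```

A pair like "ibm corporation" / "ibm corp" scores high without sharing a normalized key, while unrelated names score near zero; the join then ranks candidate matches by this score.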

7.
Conceptual Clustering, Categorization, and Polymorphy
In this paper we describe WITT, a computational model of categorization and conceptual clustering that has been motivated and guided by research on human categorization. Properties of categories to which humans are sensitive include best or prototypical members, relative contrasts between categories, and polymorphy (neither necessary nor sufficient feature rules). The system uses pairwise feature correlations to determine the similarity between objects and clusters of objects, giving the system a flexible representation scheme that can model both common-feature categories and polymorphous categories. This intercorrelation measure is cast in terms of an information-theoretic evaluation function that directs WITT's search through the space of clusterings. This information-theoretic similarity metric can also be used to explain basic-level and typicality effects that occur in humans. WITT has been tested both on artificial domains and on data from the 1985 World Almanac, and we have examined the effect of various system parameters on the quality of the model's behavior.

8.
Learning to Recognize Volcanoes on Venus
Burl, Michael C.; Asker, Lars; Smyth, Padhraic; Fayyad, Usama; Perona, Pietro; Crumpler, Larry; Aubele, Jayne. Machine Learning, 1998, 30(2-3): 165-194.
Dramatic improvements in sensor and image acquisition technology have created a demand for automated tools that can aid in the analysis of large image databases. We describe the development of JARtool, a trainable software system that learns to recognize volcanoes in a large data set of Venusian imagery. A machine learning approach is used because it is much easier for geologists to identify examples of volcanoes in the imagery than it is to specify domain knowledge as a set of pixel-level constraints. This approach can also provide portability to other domains without the need for explicit reprogramming; the user simply supplies the system with a new set of training examples. We show how the development of such a system requires a completely different set of skills than are required for applying machine learning to toy world domains. This paper discusses important aspects of the application process not commonly encountered in the toy world, including obtaining labeled training data, the difficulties of working with pixel data, and the automatic extraction of higher-level features.

9.
We consider the design of a strongly-typed language with user-defined types in which it is arranged that, given that a type is available, it is immaterial to the user whether it is a user-defined type or one of the primitive types with representations selected by the implementer. This scheme provides unprecedented freedom in choosing the primitive types; by making these machine-dependent we can ensure production of programs that are easily and efficiently portable between computers of different architectures. A general discussion of the implementer's responsibilities in choosing primitive types appropriate to his machine is illustrated by considering implementation choices for translation of the language into BCPL. Finally we discuss the contribution of the language to the solution of the portability problem.

10.
We present a framework for intensional reasoning in typed λ-calculus. In this family of calculi, called Modal Pure Type Systems (MPTSs), a propositions-as-types interpretation can be given for normal modal logics. MPTSs are an extension of the Pure Type Systems (PTSs) of Barendregt (1992). We show that they retain the desirable meta-theoretical properties of PTSs, and briefly discuss applications in the area of knowledge representation.

11.
In this paper we propose an algorithm for structure learning in predictive expert systems based on a probabilistic network representation. The idea is to have the simplest structure (minimum number of links) with acceptable predictive capability. The algorithm starts by building a tree structure based on measuring mutual information between pairs of variables, and then it adds links as necessary to obtain a certain predictive performance. We have applied this method to ozone prediction in Mexico City, where the ozone level is used as a global indicator of air quality in different parts of the city. It is important to predict the ozone level a day, or at least several hours, in advance, to reduce the health hazards and industrial losses that occur when the ozone reaches emergency levels. As a first approximation we obtained a tree-structured dependency model for predicting ozone in one part of the city. We observe that even with only three parameters, its estimations are acceptable. A causal network representation and the structure learning techniques produced some very interesting results for the ozone prediction problem. First, we gained some insight into the dependence structure of the phenomenon. Second, we got an indication of which variables are important, and which less so, for ozone forecasting; taking this into account, the measurement and computational costs of ozone prediction could be reduced. And third, we obtained satisfactory short-term ozone predictions based on a small set of the most important parameters.
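The first step the abstract describes — building a tree structure from pairwise mutual information — corresponds to the classical Chow-Liu construction; a compact sketch under that reading (our code, with invented binary data):

```python
# Sketch under our reading of the abstract's first step: estimate pairwise
# mutual information from data and keep a maximum-weight spanning tree
# (the Chow-Liu construction); further links would then be added as needed.
# Variable names and data are invented.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_edges(columns):
    """columns: dict variable -> list of observations; returns tree edges."""
    names = sorted(columns)
    scored = sorted(((mutual_information(columns[a], columns[b]), a, b)
                     for i, a in enumerate(names) for b in names[i + 1:]),
                    reverse=True)
    parent = {v: v for v in names}       # union-find for cycle detection
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    edges = []
    for _mi, a, b in scored:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            edges.append((a, b))
    return edges
```

Greedily keeping the highest-MI edges that do not close a cycle yields the tree with the fewest links that best captures pairwise dependence, matching the paper's minimal-structure goal.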

12.
Xiang, Y.; Wong, S.K.M.; Cercone, N. Machine Learning, 1997, 26(1): 65-92.
Several scoring metrics are used in different search procedures for learning probabilistic networks. We study the properties of cross entropy in learning a decomposable Markov network. Though entropy and related scoring metrics have been widely used, their microscopic properties and asymptotic behavior in a search have not been analyzed. We present such a microscopic study of a minimum entropy search algorithm, and show that it learns an I-map of the domain model when the data size is large. Search procedures that modify a network structure one link at a time have been commonly used for efficiency. Our study indicates that a class of domain models cannot be learned by such procedures. This suggests that prior knowledge about the problem domain, together with a multi-link search strategy, would provide an effective way to uncover many domain models.

13.
With the aim of individualising human-computer interaction, an Intelligent Tutoring System (ITS) has to keep track of what and how the student has learned. Hence, it is necessary to maintain a Student Model (SM) dealing with complex knowledge representation, such as incomplete and inconsistent knowledge and belief revision. With this in view, the main objective of this paper is to present and discuss the student modelling approach we have adopted to implement Pitagora 2.0, an ITS based on a co-operative learning model and designed to support teaching-learning activities in a Euclidean Geometry context. In particular, this approach has led us to develop two distinct modules that cooperate to implement the SM of Pitagora 2.0. The first module resembles a classical student model, in the sense that it maintains a representation of the current student knowledge level, which the teacher can use to tune teaching strategies to the specific student's needs. In addition, our system contains a second module that implements a virtual partner, called the companion. This module consists of a computational model of an average student which cooperates with the student during the learning process. It calls for the use of machine learning algorithms that allow the companion to improve in parallel with the real student. Computational results obtained when testing this module in simulation experiments are also presented.

14.
In order to make micro 3-D structures, we are designing a table-sized factory, the Nano Manufacturing World (NMW). In NMW we use a new process that fuses semiconductor processing, for precision, with mechanical machining, for three-dimensionality. To realize this process, we designed the three new mechanisms presented in this paper: a multi-face shape-making beam, a co-focus rotational robot, and micro mechanical tools. By actually fabricating a micro Gojunoto (five-storied pagoda) with these mechanisms, we confirmed their validity for integrated 3-D shape construction and assembly.

15.
The modelling of natural phenomena through the use of computer-generated graphics has attracted much interest recently. It is believed that such methods will lead to new breakthroughs in understanding nature. One of the most popular methods is the cellular automata method, where cells are made to propagate and form cellular patterns according to certain predefined rules. Although much of the work in this area is for recreational purposes, as in the Game of Life, there can be more serious aspects to it. One of these is the use of such methods to predict and simulate the growth behaviour of cell clusters in real-life situations. In this study, an attempt is made to formalise certain rules for modelling the growth characteristics of unicellular populations. The proposed methodology models three fundamental factors: first, the generic propagational characteristics of a cell; second, the effect of factors adverse to growth; and, third, the effect of spatial constraints. The first two factors, relating to the population of a cell colony, can be modelled mathematically; the third factor determines the visual appearance of the cell colony. Patterns resulting from some computational simulations are presented and discussed.
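The three factors can be caricatured on a small grid with a toy rule set (entirely ours, invented for illustration): generic propagation to empty neighbours, death by overcrowding as an adverse factor, and the grid boundary as a spatial constraint:

```python
# Toy growth rules (entirely invented, for illustration): cells colonize
# empty neighbours (generic propagation), die when overcrowded (an adverse
# factor), and cannot leave the grid (a spatial constraint).

def step(alive, size, crowd_limit=4):
    def neighbours(cell):
        x, y = cell
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < size and 0 <= y + dy < size]
    born = {c for cell in alive for c in neighbours(cell) if c not in alive}
    survivors = {c for c in alive
                 if sum(n in alive for n in neighbours(c)) < crowd_limit}
    return survivors | born
```

A seed in the interior grows faster than one in a corner, so the spatial constraint shapes the colony's visual appearance while the first two factors drive its population count.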

16.
We investigate 2-tape weighted finite automata, called weighted finite transducers (WFT), and their applications to image processing. We show that probabilistic mutually recursive function systems (PMRFS) can be simulated by iterative weighted finite transductions. We conjecture that iterative WFT are stronger than PMRFS and give examples of WFT that support this conjecture. We also show that the family of images defined by iterative WFT is closed under continuous invertible WFT relations, which include invertible affine transformations as a special case. We give examples of iterative WFT which can compute mathematical functions given by a Taylor series with regular coefficients that cannot be computed by WFA. We discuss the implementation of an efficient image manipulation system, including efficient algorithms for applying a WFT to an image in either pixel or WFA representation and for composing WFTs. The system also includes the Culik-Kari recursive WFA inference algorithm as a conversion from pixel representation to WFA representation. This work was supported by the National Science Foundation under Grant No. CCR-9202396. The work of the second author was partially supported by Grant of the Slovak Academy of Sciences No. 88 and by EC Cooperative Action IC 1000, Algorithms for Future Technologies (Project ALTEC).

17.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.

18.
19.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243-263.
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
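For orientation, the plain TD(λ) weight update that TDLEAF(λ) adapts to game-tree search looks as follows for a linear evaluation function (a hedged sketch of standard TD(λ), not of KnightCap's code; the tree-search part, which backs values up from principal-variation leaves, is omitted, and all numbers are invented):

```python
# Hedged sketch of the standard TD(lambda) update for a linear evaluation
# J(x) = w . features(x); TDLEAF(lambda) applies the same update to the
# values of principal-variation leaves found by game-tree search (the
# search itself is omitted here). All numbers are invented.

def td_lambda_update(w, features, values, rewards, alpha=0.1, lam=0.7):
    """features[t], values[t]: feature vector and current eval of position t;
    rewards[t]: reward observed between t and t+1 (typically zero until the
    game result)."""
    n = len(values)
    deltas = [rewards[t] + values[t + 1] - values[t] for t in range(n - 1)]
    new_w = list(w)
    for t in range(n - 1):
        # lambda-discounted sum of future temporal differences
        g = sum(lam ** (j - t) * deltas[j] for j in range(t, n - 1))
        for i, f in enumerate(features[t]):
            new_w[i] += alpha * f * g
    return new_w
```

Each position's weights are nudged toward the λ-discounted sum of later prediction errors, so the evaluation of early positions gradually agrees with the eventual game result.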

20.
This paper presents a real-time navigation system named the Destination Driven Navigator for a mobile robot operating in unstructured static and dynamic environments. We have designed a new obstacle representation method named Cross-Line Obstacle Representation and a new concept of work space to reduce the robot's search space and the environment storage cost, an Adapted Regression Model to predict dynamic obstacles' motion, Multi-State Path Repair rules to quickly turn an infeasible path into a feasible one, and a path-planning algorithm to generate a path. A high-level Destination Driven Navigator uses these methods, models and algorithms to guide a mobile robot travelling in various environments while avoiding static and dynamic obstacles. A group of experiments has been conducted. The results show that the Destination Driven Navigator is a powerful and effective paradigm for robot motion planning and obstacle avoidance.
