Similar Documents
20 similar documents found (search time: 46 ms)
1.
Modelling reasoning with legal cases has been a central concern of AI and Law since the 1980s. The approach which represents cases as factors and dimensions has been a central part of that work. In this paper I consider how several varieties of the approach can be applied to the interesting case of Popov v Hayashi. After briefly reviewing some key landmarks of the approach, the case is represented in terms of factors and dimensions, and further explored using the theory-construction and argumentation-schemes approaches.
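To make the factor-and-dimension style of representation concrete, the sketch below (Python) shows one way a case such as Popov v Hayashi could be encoded as a set of factors, each favouring one party; the factor names and the comparison helper are illustrative assumptions of mine, not the representation used in the paper.

    from dataclasses import dataclass
    from typing import FrozenSet, Optional, Set

    @dataclass(frozen=True)
    class Factor:
        name: str
        favours: str  # "P" (plaintiff) or "D" (defendant)

    @dataclass
    class Case:
        name: str
        factors: FrozenSet[Factor]
        outcome: Optional[str] = None  # "P", "D", or None for an undecided problem case

    def shared_factors(problem: Case, precedent: Case, side: str) -> Set[Factor]:
        """Factors favouring the given side that the problem shares with a precedent."""
        return {f for f in problem.factors & precedent.factors if f.favours == side}

    # Hypothetical factors for the fact situation in Popov v Hayashi (illustrative only).
    ball_in_glove = Factor("BallLandedInGlove", "P")
    catch_not_completed = Factor("CatchNotCompleted", "D")
    popov = Case("Popov v Hayashi", frozenset({ball_in_glove, catch_not_completed}))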

2.
Research in Artificial Intelligence (AI) and the Law has maintained an emphasis on knowledge representation and formal reasoning during a period when statistical, data-driven approaches have ascended to dominance within AI as a whole. Electronic discovery is a legal application area, with substantial commercial and research interest, where there are compelling arguments in favor of both empirical and knowledge-based approaches. We discuss the cases for both perspectives, as well as the opportunities for beneficial synergies.

3.
This paper studies the use of hypothetical and value-based reasoning in US Supreme Court cases concerning the United States Fourth Amendment. Drawing upon formal AI & Law models of legal argument, a semi-formal reconstruction is given of parts of the Carney case, which has been studied previously in AI & Law research on case-based reasoning. As part of the reconstruction, a semi-formal proposal is made for extending the formal AI & Law models with forms of metalevel reasoning in several argument schemes. The result is compared with Rissland’s (1989) analysis in terms of dimensions and Ashley’s (2008) analysis in terms of his process model of legal argument with hypotheticals.

4.
AI and Law: What about the future?
The introduction of the results of AI and Law research into actual legal practice is advancing disturbingly slowly. One of the problems is that most research can be classified as either theoretical or pragmatic, while combinations of the two are scarce. This interferes with the need for feedback as well as with the need to obtain support, both financially and from actual legal practice. The conclusion of this paper is that an emphasis on research that generates operational and sophisticated systems is necessary in order to provide a future for AI and Law.

5.
The paper raises the problem of formalising several key concepts in legal theory, namely goal, function and value. The law is viewed from the perspective of computer science in law, and I intend to apply requirements engineering methods to law. Berman and Hafner's [1993. Representing teleological structure in case-based legal reasoning: the missing link. In: Proceedings of the Fourth International Conference on AI and Law. ACM Press, New York, pp. 50–59] challenge to model case-based reasoning in the legal domain, and the more systematic research collected in the Artificial Intelligence and Law journal in 2002, are considered as possible approaches to solutions. The term “goal” covers purposes, policies, interests, values, etc. The formalisation has different levels, depending on the distinct meanings of the term “law”. For example, legal drafting is closer to engineering than to legal reasoning. European Union law and the implementation of EU directives provide us with court decisions based on the teleological method, and national implementation measures could employ goal-driven systems engineering techniques. Conclusions: (1) the formalisation is a challenging problem; (2) experts from other domains will gain from an explicit representation of the aims behind the law; (3) analysis of the structure of law is not enough; studies of the content of law are required, too.

6.
Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesianism in law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modeling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
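As a rough illustration of what a kappa-calculus reading of degrees of evidential strength can look like, the sketch below (Python) maps Bolding-Ekelöf-style degrees to kappa ranks (orders of magnitude of surprise) and combines independent items of evidence with the standard ranking-theoretic rules; the degree labels and the mapping are my own illustrative assumptions, not Åqvist's model or the paper's reformulation.

    # Kappa ranks measure surprise: P(A) is roughly of the order eps**kappa(A).
    # A degree of evidential strength for a claim A is read here as the rank of not-A:
    # the stronger the evidence for A, the more surprising it would be for A to be false.
    # The degree-to-rank mapping below is an illustrative assumption only.

    DEGREE_TO_KAPPA_OF_NEGATION = {
        "merely possible": 0,  # A being false would not be surprising at all
        "presumable": 1,
        "probable": 2,
        "evident": 3,          # A being false would be very surprising
    }

    def kappa_and(kappa_a: int, kappa_b: int) -> int:
        """Rank of a conjunction of independent propositions: surprise accumulates."""
        return kappa_a + kappa_b

    def kappa_or(kappa_a: int, kappa_b: int) -> int:
        """Rank of a disjunction: the less surprising disjunct dominates."""
        return min(kappa_a, kappa_b)

    def order_of_magnitude_probability(kappa: int, eps: float = 0.1) -> float:
        """Read a rank as an order-of-magnitude probability."""
        return eps ** kappa

    # Example: a claim held "evident" gives its negation rank 3, i.e. roughly a 1-in-1000 chance.
    print(order_of_magnitude_probability(DEGREE_TO_KAPPA_OF_NEGATION["evident"]))  # ~0.001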

7.
This article provides an overview of, and thematic justification for, the special issue of the journal Artificial Intelligence and Law entitled “E-Discovery”. In attempting to define a characteristic “AI & Law” approach to e-discovery, and since a central theme of AI & Law involves computationally modeling legal knowledge, reasoning and decision making, we focus on the theme of representing and reasoning with litigators’ theories or hypotheses about document relevance through a variety of techniques, including machine learning. We also identify two emerging techniques for enabling users’ document queries to better express the theories of relevance and connect them to documents: social network analysis and a hypothesis ontology.

8.
In this paper I shall discuss the notion of argument, and the importance of argument in AI and Law. I shall distinguish four areas where argument has been applied: in modelling legal reasoning based on cases; in the presentation and explanation of results from a rule-based legal information system; in the resolution of normative conflict and problems of non-monotonicity; and as a basis for dialogue games to support the modelling of the process of argument. The study of argument is held to offer prospects of real progress in the field of AI and Law, and the purpose of this paper is to provide an overview of this work and of the connections between the various strands.

9.
In this paper, we report the results of the first phase of a three-part study of the emerging role of Artificial Intelligence (AI) as an academic discipline. The stratification among journals that are formal communication vehicles for AI research is described and analyzed. Based on a survey of business faculty from AACSB-accredited schools around the country, 30 leading AI journals are ranked and evaluated in terms of their academic quality and reputation. The paper also reports on the progress toward the development of a cumulative tradition for AI and its foundation. The second phase of this research focuses on the knowledge utilization of AI researchers and practitioners. In the third phase, the findings from the above two studies will be analyzed and used as a basis to develop a taxonomy of research for Artificial Intelligence.

10.
Researchers in the field of AI and Law have developed a number of computational models of the arguments that skilled attorneys make based on past cases. However, these models have not accounted for the ways that attorneys use middle-level normative background knowledge (1) to organize multi-case arguments, (2) to reason about the significance of differences between cases, and (3) to assess the relevance of precedent cases to a given problem situation. We present a novel model that accounts for these argumentation phenomena. An evaluation study showed that arguments about the significance of distinctions based on this model help predict the outcome of cases in the area of trade secrets law, confirming the quality of these arguments. The model forms the basis of an intelligent learning environment called CATO, which was designed to help beginning law students acquire basic argumentation skills. CATO uses the model for a number of purposes, including the dynamic generation of argumentation examples. In a second evaluation study, carried out in the context of an actual legal writing course, we compared instruction with CATO against the best traditional legal writing instruction. The results indicate that CATO's example-based instructional approach is effective in teaching basic argumentation skills. However, a more “integrated” approach appears to be needed if students are to achieve better transfer of these skills to more complex contexts. CATO's argumentation model and instructional environment are a contribution to the research fields of AI and Law, Case-Based Reasoning, and AI and Education.

11.
There is much interest in moving AI out into real-world applications, a move which has been encouraged by recent funding which has attempted to show that industry and commerce can benefit from the Fifth Generation of computing. In this article I suggest that the legal application area is one which is very much more complex than it might at first sight seem. I use arguments from the sociology of law to indicate that viewing the legal system as simply a rule-bound discipline is inherently naïve. This, while not new in jurisprudence, is certainly novel to the field of artificial intelligence, as the literature of AI and Law indicates. The socio-legal argument provided is set within the context of AI as one more example of the failure of scientific success and method to transmit itself easily into the social sciences.

12.
This paper presents a methodology to design and implement programs intended to decide cases, described as sets of factors, according to a theory of a particular domain based on a set of precedent cases relating to that domain. We use Abstract Dialectical Frameworks (ADFs), a recent development in AI knowledge representation, as the central feature of our design method. ADFs will play a role akin to that played by Entity–Relationship models in the design of database systems. First, we explain how the factor hierarchy of the well-known legal reasoning system CATO can be used to instantiate an ADF for the domain of US Trade Secrets. This is intended to demonstrate the suitability of ADFs for expressing the design of legal case-based systems. The method is then applied to two other legal domains often used in the literature of AI and Law. In each domain, the design is provided by the domain analyst expressing the cases in terms of factors organised into an ADF, from which an executable program can be implemented in a straightforward way by taking advantage of the closeness of the acceptance conditions of the ADF to components of an executable program. We evaluate the ease of implementation, the performance and efficacy of the resulting program, the ease of refinement of the program, and the transparency of the reasoning. This evaluation suggests ways in which factor-based systems, which are limited by taking as their starting point the representation of cases as sets of factors and so abstracting away the particular facts, can be extended to address open issues in AI and Law by incorporating the case facts to improve the decision, and by considering justification and reasoning with portions of precedents.
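To illustrate the claimed closeness of ADF acceptance conditions to components of an executable program, here is a minimal sketch (Python) in which intermediate issues are Boolean functions over base-level factors and a root node decides the case; the factor and issue names are loosely inspired by the US Trade Secrets domain but are illustrative assumptions of mine, not the ADF developed in the paper.

    # Sketch: ADF-style acceptance conditions written directly as Boolean functions over factors.
    # Factor and issue names are illustrative assumptions, not the paper's actual ADF.

    def info_was_a_trade_secret(case: dict) -> bool:
        # Acceptance condition: the information was valuable and efforts were made to keep it secret.
        return case.get("InfoValuable", False) and case.get("SecurityMeasures", False)

    def info_was_misappropriated(case: dict) -> bool:
        # Acceptance condition: improper means were used, or a duty of confidence was breached.
        return case.get("ImproperMeans", False) or case.get("BreachOfConfidence", False)

    def find_for_plaintiff(case: dict) -> bool:
        # Root node: the decision follows from the acceptance of the two intermediate issues.
        return info_was_a_trade_secret(case) and info_was_misappropriated(case)

    # A problem case described as the set of factors present in its facts.
    example_case = {"InfoValuable": True, "SecurityMeasures": True, "BreachOfConfidence": True}
    print(find_for_plaintiff(example_case))  # True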

13.
This article is an exercise in computational jurisprudence. It seems clear that the field of AI and Law should draw upon the insights of legal philosophers, whenever possible. But can the computational perspective offer anything in return? I will explore this question by focusing on the concept of OWNERSHIP, which has been debated in the jurisprudential literature for centuries. Although the intellectual currents here flow mostly in one direction – from legal philosophy to AI – I will show that there are also some insights to be gained from a computational analysis of the OWNERSHIP relation. In particular, the article suggests a computational explanation for the emergence of abstract property rights, divorced from concrete material objects.

14.
Watson, David. Minds and Machines, 2019, 29(3): 417-440.

Artificial intelligence (AI) has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning methods—namely lasso penalties, bagging, and boosting—offer subtler, more interesting analogies to human reasoning as both an individual and a social phenomenon. Despite the temptation to fall back on anthropomorphic tropes when discussing AI, however, I conclude that such rhetoric is at best misleading and at worst downright dangerous. The impulse to humanize algorithms is an obstacle to properly conceptualizing the ethical challenges posed by emerging technologies.


15.
The study of the formal attributes of legal systems such as consistency, completeness, independence and generality is of special interest in legal philosophy and legal theory. Apart from concern with the content of the law, these formal attributes constitute desiderata without which a legal system is considered deficient. Legisprudence is a relatively new discipline within legal theory that studies these formal (and other) attributes of law at the level of law-making (i.e. legislation). This trend in legal theory is paralleled by research in the so-called field of legimatics, which focuses generally on the use of informatics in the process of drafting legislation. One approach within legimatics studies the limits and constraints of applying artificial intelligence (AI) techniques and methods to the law-making process (e.g. Jurix, 1993), as well as the application of these techniques to certain tasks within this process (Jurix, 1993; Valente, 1995; den Haan, 1996). This paper discusses normative conflicts, their explication and typology, and relates these to the conceptualisation of legal knowledge and methods for representing it. We examine some common approaches in legal theory for the explication of normative conflicts and show their limitations. In particular, we argue that these common approaches do not pay sufficient attention to the role 'world knowledge' plays in the analysis of normative conflicts. Finally, we suggest alternative ways of dealing with the problems that arise from inconsistency in law. The observations we make are relevant for the development of computer programs designed to assist in the law-making process.

16.
‘AI & Law’ research has been around since the 1970s, albeit with shifting emphasis. This is an overview of the contributions of digital technologies, both artificial intelligence and non-AI smart tools, to both the legal professions and the police. For example, we briefly consider text mining and automated case summarization, tools supporting argumentation, tools concerning sentencing based on the technique of case-based reasoning, the role of abductive reasoning, research into applying AI to legal evidence, tools for fighting crime and tools for identification.

17.
To gain an intuitive view of the current state of development and the research frontiers of artificial intelligence, to analyse the similarities and differences between domestic and international research, and to support AI research in China, this study takes journal papers from 2008-2019 in the Web of Science and CNKI databases as its basis and uses CiteSpace to draw scientific knowledge maps and perform visual analysis of these papers. The objective data and knowledge maps show that: after 2016, AI research entered a new wave of activity, with a "China-US dual leadership" pattern; in terms of publication quality, North America is currently the region with the highest level of AI research; at present, universities are the main force in AI research, and an integrated industry-academia-research system has not yet formed; research topics show distinct characteristics of the times, with artificial neural networks, algorithms, big data, robotics, computer vision, and legal ethics among the current research hotspots. Finally, based on the evolution map of AI research and high-frequency burst terms, three research frontiers are proposed for the field: "deep reinforcement learning", "AI+", and "intelligent social science", providing directional suggestions for subsequent AI research.

18.
González, Rodrigo. AI & Society, 2020, 35(2): 441-450.

This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of an utterance's meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes' view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes, which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section I explain Descartes' view about language and mind. In the second section, I analyze the imitation game as a method for assessing intelligence, to show that Turing bites the bullet with it. Then, in the third section, I elaborate on Schank and Abelson's Script Applier Mechanism (hereafter SAM), which supposedly casts doubt on Descartes' denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint which is irreducible to the decomposition of algorithmic mechanisms.
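To make concrete what an algorithmic decomposition of linguistic understanding in the SAM tradition looks like, here is a toy sketch (Python) of a script applier that fills in events a story leaves unstated; the restaurant script and its event names are illustrative assumptions of mine, not Schank and Abelson's actual implementation.

    # Toy sketch of script application: infer unstated events from a stereotyped event sequence.
    # The restaurant script below is an illustrative assumption, not SAM's actual knowledge base.

    RESTAURANT_SCRIPT = ["enter", "sit down", "order", "eat", "pay", "leave"]

    def apply_script(script, mentioned_events):
        """Return the full event sequence, marking which events were stated and which inferred."""
        return [(event, "stated" if event in mentioned_events else "inferred") for event in script]

    story = {"enter", "order", "leave"}  # "John went to a restaurant, ordered lobster, and left."
    for event, status in apply_script(RESTAURANT_SCRIPT, story):
        print(f"{event}: {status}")
    # The applier "infers" that John sat down, ate, and paid, even though the story never says so --
    # the kind of gap-filling the paper treats as a purely algorithmic mechanism.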


19.
Although the Latent Damage System was produced in the late 1980s, Susskind, a co-developer, asserts in his latest book that this and similar systems will have a profound influence upon the future direction and practice of law, by bringing about a shift in the legal paradigm (Susskind, The Future of Law: Facing the Challenges of Information Technology, 1996, pp. 105, 286). As part of the research into the conflict which, in my view, exists between the artificial intelligence and law movement and adversarial argumentation in the litigatory process, I analyse the claims and objectives made by the developers of the Latent Damage System and suggest that current technological know-how is incapable of representing dynamic, adversarial legal environments. In consequence, I contend that intelligence-based applications cannot provide authentic and automatic access to resolving adversarial legal disputes.

20.
I want increased confidence in my programs. I want my own and other people's programs to be more readable. I want a new discipline of programming that augments my thought processes. Therefore, I create and explore a new discipline of programming in my BabyUML laboratory. I select, simplify and twist UML and other languages to demonstrate how they help bridge the gap between me as a programmer and the objects running in my computer. The focus is on the run-time objects: their structure, their interaction, and their individual behaviors. Trygve Reenskaug is professor emeritus of informatics at the University of Oslo. He has 40 years of experience in software engineering research and the development of industrial-strength software products. He has extensive teaching and speaking experience, including keynotes, talks and tutorials. His firsts include the Autokon system for computer-aided design of ships with an end-user programming language, structured programming, and a database-oriented architecture from 1960; object-oriented applications and role (collaboration) modeling from 1973; Model-View-Controller, the world's first reusable object-oriented framework, from 1979; and the OOram role modeling method and tool from 1983. Trygve was a member of the UML Core Team and was a contributor to UML 1.4. The goal of his current research is to create a new, high-level discipline of programming that lets us reclaim the mastery of software.
