Similar Documents
1.
The IDG was originally founded to carry out research into the collection and processing of documentation relating to Italian legislation, case law and legal authority. The Institute has since concentrated on automated documentation and legal informatics, as well as the application of artificial intelligence to the law. This article describes the many projects undertaken at the Institute. Elio Fameli has, since 1971, been a researcher at the Istituto per la Documentazione Giuridica of Florence of the Italian National Research Council. He has worked in legal informatics generally but is mainly interested in problems specifically related to information systems and the application of artificial intelligence to the law. Currently, his research is focused on legal expert systems and the methodology and technology for their construction. He has, since 1975, been an editor of the international journal Informatica e Diritto and a member of the editorial board of the International Bibliography on Computers and the Law, published as a regular issue of the aforementioned journal.

2.
This summary essay comments on the contents of, and issues raised by, the special number of Computers and the Humanities on Computers and the Teaching of Literature. It argues against the use of hypertextual resources without a careful pedagogical understanding of the danger they present of encouraging students to become passive consumers rather than active thinkers. It argues for the use of computer-mediated conversation, computer modeling, and computer analyses of texts as appropriate applications in the literature classroom. Rosanne G. Potter is a Professor of English at Iowa State University; her research interests are in computational analyses of play dialogue and reader responses to literary texts. She has published essays in Computers and the Humanities since 1981 and an edited collection, Literary Computing and Literary Criticism (Philadelphia: U of Pennsylvania P, 1989).

3.
In this article we describe two core ontologies of law that specify knowledge common to all domains of law. The first one, FOLaw, describes and explains dependencies between types of knowledge in legal reasoning; the second one, the LRI-Core ontology, captures the main concepts in legal information processing. Although FOLaw has proved to be of high practical value in various applied European ICT projects, its reuse is rather limited, as it is concerned with the structure of legal reasoning rather than with legal knowledge itself: like many other “legal core ontologies”, FOLaw is therefore more an epistemological framework than an ontology. For this reason we also developed LRI-Core. As we argue here that legal knowledge is based to a large extent on common-sense knowledge, LRI-Core is particularly inspired by research on abstract common-sense concepts. The main categories of LRI-Core are physical, mental and abstract concepts. Roles cover in particular social worlds. Another special category is occurrences: terms that denote events and situations. We illustrate the use of LRI-Core with an ontology for Dutch criminal law, developed in the e-Court European project.

4.
5.
This paper describes two research projects, both involving Latin and Greek lexicography, undertaken at the University of Florence and at the Italian National Research Council respectively. The first involves the creation of a Dictionary of Justinian's constitutions based on the emperor's legislative lexicon formed in the Corpus Iuris and elsewhere; the most demanding aspect of this task has been the creation of the Dictionary of the Novellae. The other project involves the creation of a Lexicon of the Novellae in the Authenticum version. Anna Maria Bartoletti Colombo is a Research Director of the CNR and teaches at the University of Florence. She took part in the first experiments in computational linguistics with the Historical Dictionary of the Italian Language of the Accademia della Crusca. Moving on to the Italian Legal Dictionary of the CNR, she concentrated her interests on the language of law. Her major works are the Vocabolario delle Costituzioni Greche e Latine di Giustiniano and the Lessico delle Novellae di Giustiniano, as described in her article in this issue.

6.
This article presents an overview of educational computing for the non-specialist from the perspective of the classroom teacher as user of courseware rather than as programmer, developer, or researcher. It considers diverse terms associated with courseware and CAI and offers a working definition of courseware and a pragmatic task-related typology broad enough to accommodate many humanities domains. It further discusses sources for dedicated as well as crossover courseware products in the humanities, resources for information, and courseware interest in Computers and the Humanities as evidenced by articles, reviews, and special issues. Estelle Irizarry is professor of Spanish at Georgetown University and author of twenty books on Hispanic literature. She is Courseware Editor of Computers and the Humanities and Editor of Hispania.

7.
Considerable attention has been given to the accessibility of legal documents, such as legislation and case law, in legal information retrieval (query formulation, search algorithms), in legal information dissemination practice (numerous examples of on-line access to formal sources of law), and in legal knowledge-based systems (by translating the contents of those documents into ready-to-use rule-based and case-based systems). Within AI & law, however, hardly any attempt has been made to make the contents of sources of law, and the relations among them, more accessible to those without a legal education. This article presents a theory about translating sources of law into information accessible to persons without a legal education. It illustrates the theory with two elaborated examples of such translation ventures. In the first example, formal sources of law in the domain of exchanging police information are translated into rules of thumb useful for policemen. In the second example, the goal of providing non-legal professionals with insight into legislative procedures is translated into a framework for making sources of law available through an integrated legislative calendar. Although the theory itself does not support automating the several stages described, in this article some hints are given as to what such automation would have to look like.
Laurens Mommers

8.
The importance of reasoning in law is pointed out. Law and jurisprudence belong to the reasoning-conscious disciplines; accordingly, there is a long tradition of logic in law. The specific methods of professional work in law are to be seen in close connection with legal reasoning. The advent of computers at first did not touch upon legal reasoning (or professional work in law): computers could initially be used only for general auxiliary functions (e.g., numerical calculations in tax law). Gradually, the use of computers for auxiliary functions in law has become more specific and more sophisticated (e.g., legal information retrieval), touching more closely upon professional legal work. Moreover, renewed interest in AI has also fostered interest in AI in law, especially legal expert systems. AI techniques can be used in support of legal reasoning. Yet until now legal expert systems have remained in the research and development stage and have hardly succeeded in becoming a profitable tool for the profession. It is therefore hoped that the two lines of computer support, for auxiliary functions in law and for immediate support of legal reasoning, may unite in the future. Herbert Fiedler is professor of Legal Informatics, general theory of law and penal law in the Department of Economics and Law at the University of Bonn.

9.
This article describes an ontological model of norms. The basic assumption is that a substantial part of a legal system is grounded on the concept of agency. Since a legal system aims at regulating a society, its goal can be achieved only by affecting the behaviour of the members of that society. We assume that a society is made up of agents (which can be individuals, institutions, software programs, etc.), that agents have beliefs, goals and preferences, and that they commit to intentions in order to choose a line of behaviour. The role of norms, within a legal system, is to specify how and when the chosen behaviour agrees with the basic principles of the legal system. In this article, we show how a model based on plans can be the basis for the ontological representation of norms, which are expressed as constraints on the possible plans an agent may choose to guide its behaviour. Moreover, the paper describes how the proposed model can be linked to the upper level of a philosophically well-founded ontology (DOLCE); in this way, the model is set in a wider perspective, which opens the way to further developments.
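The core idea — norms represented as constraints on the plans an agent may choose — can be sketched in a few lines. This is an illustrative toy model, not the DOLCE-linked ontology of the article; the names (`Norm`, `compliant_plans`) and the example facts are invented for the illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

# A plan is simply an ordered list of action names (hypothetical encoding).
Plan = List[str]

@dataclass
class Norm:
    """A norm expressed as a constraint over plans (toy model)."""
    name: str
    permits: Callable[[Plan], bool]  # True if the plan complies with the norm

def compliant_plans(plans: List[Plan], norms: List[Norm]) -> List[Plan]:
    """Keep only the plans that satisfy every norm."""
    return [p for p in plans if all(n.permits(p) for n in norms)]

# Example: a prohibition ("never disclose data before anonymising it")
# phrased as a constraint on action ordering within a plan.
no_early_disclosure = Norm(
    "no_early_disclosure",
    lambda p: "disclose" not in p
    or ("anonymise" in p and p.index("anonymise") < p.index("disclose")),
)

plans = [
    ["collect", "anonymise", "disclose"],
    ["collect", "disclose"],
]
print(compliant_plans(plans, [no_early_disclosure]))
# → [['collect', 'anonymise', 'disclose']]
```

Only the plan that anonymises before disclosing survives the filter, mirroring how a norm narrows the space of admissible lines of behaviour.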

10.
Trends in Computing with DNA
As an emerging new research area, DNA computation, or more generally biomolecular computation, extends into other fields such as nanotechnology and material design, and is developing into a new sub-discipline of science and engineering. This paper provides a brief survey of some concepts and developments in this area. In particular, several approaches are described for biomolecular solutions of the satisfiability problem (using bit strands, DNA tiles and graph self-assembly). Theoretical models such as the primer splicing systems, as well as the recent model of forbidding and enforcing, are also described. We review some experimental results of self-assembly of DNA nanostructures and nanomechanical devices, as well as the design of an autonomous finite state machine.
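The generate-and-filter style of biomolecular SAT solving — every candidate assignment encoded as a strand in a tube, with each clause applied as an extraction step that discards violating strands — can be mimicked in ordinary code. A minimal sketch, simulating the tube with a Python set; the function name and encoding are our assumptions, not taken from the survey.

```python
from itertools import product

def generate_and_filter_sat(n_vars, clauses):
    """Simulate DNA-style SAT solving: the 'tube' initially holds every
    possible assignment (each a bit strand); each clause step keeps only
    the strands that satisfy it.
    clauses: lists of literals, e.g. 3 means x3 and -3 means (not x3)."""
    tube = set(product([0, 1], repeat=n_vars))  # all 2^n candidate strands
    for clause in clauses:
        tube = {s for s in tube
                if any(s[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)}
    return tube  # the surviving strands are exactly the satisfying assignments

# (x1 or not x2) and (x2 or x3)
print(sorted(generate_and_filter_sat(3, [[1, -2], [2, 3]])))
# → [(0, 0, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
```

As with the laboratory procedure it imitates, the cost is hidden in the initial "tube": the candidate set is exponential in the number of variables, which is precisely what the massive parallelism of DNA is meant to absorb.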

11.
This paper traces the history of the Text Encoding Initiative, through the Vassar Conference and the Poughkeepsie Principles to the publication, in May 1994, of the Guidelines for Electronic Text Encoding and Interchange. The authors explain the types of questions that were raised, the attempts made to resolve them, the TEI project's aims, and the general organization of the TEI committees, and they discuss the project's future. Nancy Ide is Associate Professor and chair of Computer Science at Vassar College, and Visiting Researcher at CNRS. She is president of the Association for Computers and the Humanities and chair of the Steering Committee of the Text Encoding Initiative. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup. Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative.

12.
The use of the computer in the legal arena has seen a rapid increase over the last decade, ranging from the simple level, where it is used to gain access to information, to the complex level, where it is used for diagnostic and decision-making purposes. This second use of technology makes certain assumptions at the theoretical level regarding the nature of law and legal reasoning. This paper outlines in brief the jurisprudential assumptions made, and the resulting questions raised, as a consequence of the possibility of ascribing legal functions to intelligent computer systems.

13.
Written laws, records and legal materials form the very foundation of a democratic society. Lawmakers, legal scholars and everyday citizens alike need, and are entitled, to access the current and historic materials that comprise, explain, define, critique and contextualize their laws and legal institutions. The preservation of legal information in all formats is imperative. Thus far, the twenty-first century has witnessed unprecedented mass-scale acceptance and adoption of digital culture, which has resulted in an explosion in digital information. However, digitally born materials, especially those that are published directly and independently to the Web, are presently at an extremely high risk of permanent loss. Our legal heritage is no exception to this phenomenon, and efforts must be put forth to ensure that our current body of digital legal information is not lost. The authors explored the role of the United States law library community in the preservation of digital legal information. Through an online survey of state and academic law library directors, it was determined that those represented in the sample recognize that digitally born legal materials are at high risk for loss, yet their own digital preservation projects have primarily focused upon the preservation of digitized print materials, rather than digitally born materials. Digital preservation activities among surveyed libraries have been largely limited by a lack of funding, staffing and expertise; however, these barriers could be overcome by collaboration with other institutions, as well as participation in a large-scale regional or national digital preservation movement, which would allow for resource-sharing among participants. One such collaborative digital preservation program, the Chesapeake Project, is profiled in the article and explored as a collaborative effort that may be expanded upon or replicated by other institutions and libraries tackling the challenges of digital preservation.  

14.
In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work integrates rule-based and case-based reasoning with intelligent information retrieval. When using the case-based reasoning methodology, or in our case the specialisation of case-based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule-based reasoning, case-based reasoning and case-based retrieval is contrasted with the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case-based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains, rather than single-purpose legal expert systems, we focus on the principles behind developing legal knowledge-based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule-based part of our system, which provides advice on the Credit Act, has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule-based legal knowledge-based systems, and we explain the process of integrating this existing commercial rule-based system with the case-based reasoning and retrieval architecture.
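The idea of inducing rules from cases and then using those rules as indices for case retrieval can be illustrated with a deliberately naive sketch. The "induction" here is a majority vote per fact pair, standing in for IKBALS's specialised induction algorithm, and all case data is invented for the example.

```python
from collections import Counter

def induce_rules(cases):
    """Naively induce one rule per (attribute, value) pair: predict the
    majority outcome among the cases sharing that pair (a toy stand-in
    for a real induction algorithm)."""
    buckets = {}
    for c in cases:
        for attr, val in c["facts"].items():
            buckets.setdefault((attr, val), []).append(c["outcome"])
    return {key: Counter(outs).most_common(1)[0][0] for key, outs in buckets.items()}

def retrieve(cases, query_facts):
    """Use rule antecedents as indices: return the cases that share at
    least one fact pair with the query."""
    return [c for c in cases
            if any(c["facts"].get(a) == v for a, v in query_facts.items())]

# Hypothetical credit-law cases, not drawn from the Act itself.
cases = [
    {"facts": {"loan_type": "personal", "disclosed": "no"}, "outcome": "breach"},
    {"facts": {"loan_type": "mortgage", "disclosed": "yes"}, "outcome": "compliant"},
]
rules = induce_rules(cases)
print(rules[("disclosed", "no")])                   # → breach
print(len(retrieve(cases, {"disclosed": "yes"})))   # → 1
```

The induced antecedents double as an index: a new case is matched on its fact pairs, so retrieval and rule application share one structure, which is the integration point the paper describes.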

15.
This paper addresses the problem of resource allocation for distributed real-time periodic tasks, operating in environments that undergo unpredictable changes and that defy the specification of meaningful worst-case execution times. These tasks are supplied by input data originating from various environmental workload sources. Rather than using worst-case execution times (WCETs) to describe the CPU usage of the tasks, we assume here that execution profiles are given to describe the running time of the tasks in terms of the size of the input data of each workload source. The objective of resource allocation is to produce an initial allocation that is robust against fluctuations in the environmental parameters. We try to maximize the input size (workload) that can be handled by the system, and hence to delay possible (costly) reallocations as long as possible. We present an approximation algorithm based on first-fit and binary search that we call FFBS. As we show here, the first-fit algorithm produces solutions that are often close to optimal. In particular, we show analytically that FFBS is guaranteed to produce a solution that is at least 41% of optimal, asymptotically, under certain reasonable restrictions on the running times of tasks in the system. Moreover, we show that if at most 12% of the system utilization is consumed by input-independent tasks (e.g., constant-time tasks), then FFBS is guaranteed to produce a solution that is at least 33% of optimal, asymptotically. Finally, we present simulations to compare the FFBS approximation algorithm with a set of standard (local search) heuristics such as hill-climbing, simulated annealing, and random search. The results suggest that FFBS, in combination with other local improvement strategies, may be a reasonable approach for resource allocation in dynamic real-time systems.
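A simplified reading of the first-fit-plus-binary-search idea: binary-search a common workload scale, and accept a scale if first-fit can still pack every task's resulting utilization onto the processors. This sketch is our own reconstruction under that reading, not the paper's FFBS implementation; the execution profiles and parameters are invented.

```python
def first_fit(utils, n_procs):
    """First-fit packing: place each task on the first processor whose
    total utilization stays <= 1.0; report whether every task fits."""
    loads = [0.0] * n_procs
    for u in utils:
        for i in range(n_procs):
            if loads[i] + u <= 1.0:
                loads[i] += u
                break
        else:
            return False  # this task fit on no processor
    return True

def ffbs_sketch(profiles, n_procs, lo=0.0, hi=10.0, eps=1e-4):
    """Binary-search the largest workload scale w for which first-fit
    still packs all tasks. Each profile maps input size to utilization."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if first_fit([p(mid) for p in profiles], n_procs):
            lo = mid  # feasible: try a larger workload
        else:
            hi = mid  # infeasible: shrink the search interval
    return lo

# Two linear execution profiles and one constant-time task, two processors.
profiles = [lambda w: 0.1 * w, lambda w: 0.2 * w, lambda w: 0.1]
print(round(ffbs_sketch(profiles, 2), 2))  # → 5.0
```

The heaviest profile (0.2·w) saturates a processor at w = 5, so the search settles there; the returned scale is the largest workload the initial allocation can absorb before a reallocation becomes necessary.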
David Juedes is a tenured associate professor and assistant chair for computer science in the School of Electrical Engineering and Computer Science at Ohio University. Dr. Juedes received his Ph.D. in Computer Science from Iowa State University in 1994, and his main research interests are algorithm design and analysis, the theory of computation, algorithms for real-time systems, and bioinformatics. Dr. Juedes has published numerous conference and journal papers and has acted as a referee for IEEE Transactions on Computers, Algorithmica, SIAM Journal on Computing, Theoretical Computer Science, Information and Computation, Information Processing Letters, and other conferences and journals. Dazhang Gu is a software architect and researcher at Pegasus Technologies (NeuCo), Inc. He received his Ph.D. in Electrical Engineering and Computer Science from Ohio University in 2005. His main research interests are real-time systems, distributed systems, and resource optimization. He has published conference and journal papers on these subjects and has refereed for the Journal of Real-Time Systems, IEEE Transactions on Computers, and IEEE Transactions on Parallel and Distributed Systems, among others. He has also served as a session chair and publications chair for several conferences. Frank Drews is an Assistant Professor of Electrical Engineering and Computer Science at Ohio University. Dr. Drews received his Ph.D. in Computer Science from the Clausthal University of Technology in Germany in 2002. His main research interests are resource management for operating systems and real-time systems, and bioinformatics. Dr. Drews has numerous publications in conferences and journals and has served as a reviewer for IEEE Transactions on Computers, the Journal of Systems and Software, and other conferences and journals. 
He was Publication Chair for the OCCBIO’06 conference, Guest Editor of a Special Issue of the Journal of Systems and Software on “Dynamic Resource Management for Distributed Real-Time Systems”, and organizer of special tracks at the IEEE IPDPS WPDRTS workshops in 2005 and 2006. Klaus Ecker received his Ph.D. in Theoretical Physics from the University of Graz, Austria, and his Dr. habil. in Computer Science from the University of Bonn. Since 1978 he has been a professor in the Department of Computer Science at the Clausthal University of Technology, Germany, and since 2005 he has been a visiting professor at Ohio University. His research interests are parallel processing and the theory of scheduling, especially in real-time systems, and bioinformatics. Prof. Ecker has published widely in the above-mentioned areas in well-reputed journals and proceedings of international conferences. He is also the author of two monographs on scheduling theory. Since 1981 he has organized annual international workshops on parallel processing. He is an associate editor of Real-Time Systems and a member of the German Gesellschaft fuer Informatik (GI) and of the Association for Computing Machinery (ACM). Lonnie R. Welch received a Ph.D. in Computer and Information Science from the Ohio State University. Currently, he is the Stuckey Professor of Electrical Engineering and Computer Science at Ohio University. Dr. Welch performs research in the areas of real-time systems, distributed computing and bioinformatics. His research has been sponsored by the Defense Advanced Research Projects Agency, the Navy, NASA, the National Science Foundation and the Army. Dr. Welch has twenty years of research experience in the area of high performance computing. In his graduate work at Ohio State University, he developed a high performance 3-D graphics rendering algorithm, and he invented a parallel virtual machine for object-oriented software. 
For the past 15 years his research has focused on middleware and optimization algorithms for high performance computing. His research has produced three successive generations of adaptive resource management (RM) middleware for high performance real-time systems. The project has resulted in two patents and more than 150 publications. Professor Welch also collaborates on diabetes research with faculty at the Edison Biotechnology Institute and on genomics research with faculty in the Department of Environmental and Plant Biology at Ohio University. Dr. Welch is a member of the editorial boards of IEEE Transactions on Computers, The Journal of Scalable Computing: Practice and Experience, and The International Journal of Computers and Applications. He is also the founder of the International Workshop on Parallel and Distributed Real-time Systems and of the Ohio Collaborative Conference on Bioinformatics. Silke Schomann graduated in 2003 with an M.Sc. in Computer Science from Clausthal University of Technology, where she has been working as a scientific assistant since then. She is currently working on her Ph.D. thesis in computer science at the same university.

16.
This paper presents a new algorithm to find an appropriate similarity under which we apply legal rules analogically. Since there may exist a lot of similarities between the premises of a rule and a case in inquiry, we have to select an appropriate similarity that is relevant to both the legal rule and a top goal of our legal reasoning. For this purpose, a new criterion to distinguish the appropriate similarities from the others is proposed and tested. The criterion is based on Goal-Dependent Abstraction (GDA): it selects a similarity such that an abstraction based on the similarity never loses the information necessary to prove the ground (purpose of legislation) of the legal rule. In order to cope with our huge space of similarities, our GDA algorithm uses some constraints to prune useless similarities.

17.
This paper describes a computer-assisted analysis of semantic patterning in William Blake's The Four Zoas and considers the way in which such patterns contribute to the structure and meaning of the work. The analysis involves examining combinations and recombinations of images across the text for concentrations of images and image groups, recurring images, and patterns in the distribution of individual images and clusters of images. Statistical correlation routines were used to determine the degree of correlation among images across the entire text as well as in specific text segments. Principal components analysis enabled the identification of thematic clusters of images, and the distribution of these clusters across the text was in turn examined to determine their patterning. Finally, time series analysis and Fourier analysis were used to find and verify patterns in the distribution of images across the text. Fourier analysis revealed striking patterns in the distribution of imagery in the Zoas, which suggests that Blake may have used such patterns to help convey the poem's powerful thematic statements. Nancy M. Ide is associate professor of Computer Science at Vassar College in Poughkeepsie, New York. She is currently president of the Association for Computers and the Humanities. Professor Ide has written a textbook for humanities computing entitled Pascal for the Humanities, as well as numerous papers in the fields of humanities computing and computational linguistics.
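The Fourier step described above — detecting periodicity in where an image recurs across a text — can be sketched by building a 0/1 indicator series over text segments and inspecting its spectrum. A hypothetical miniature (NumPy assumed; the segment counts are invented), not the analysis actually applied to the Zoas:

```python
import numpy as np

def dominant_period(occurrences, n_segments):
    """Given the segment indices where an image occurs, build a 0/1
    indicator series and return the period (in segments) of its
    strongest non-constant frequency component."""
    series = np.zeros(n_segments)
    series[occurrences] = 1.0
    # Remove the mean so the constant (DC) component does not dominate.
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    k = int(np.argmax(spectrum[1:]) + 1)  # strongest bin, skipping DC
    return n_segments / k

# An image recurring every 12 segments of a hypothetical 96-segment text.
print(dominant_period(list(range(0, 96, 12)), 96))  # → 12.0
```

A strong spike at one frequency, as here, is the spectral signature of the kind of regular imagery distribution the paper reports; a diffuse spectrum would indicate no such patterning.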

18.
To help design an environment in which professionals without legal training can make effective use of public sector legal information on planning and the environment – for Add-Wijzer, a European e-government project – we evaluated their perceptions of usefulness and usability. In concurrent think-aloud usability tests, lawyers and non-lawyers carried out information retrieval tasks on a range of online legal databases. We found that non-lawyers reported twice as many difficulties as those with legal training (p = 0.001), that the number of difficulties and the choice of database affected successful completion, and that the non-lawyers had surprisingly few problems understanding legal terminology. Instead, they had more problems understanding the syntactical structure of legal documents and collections. The results support the constraint attunement hypothesis (CAH) of the effects of expertise on information retrieval, with implications for the design of systems to support the effective understanding and use of information.

19.
With the proliferation of extremely high-dimensional data, feature selection algorithms have become indispensable components of the learning process. Strangely, despite extensive work on the stability of learning algorithms, the stability of feature selection algorithms has been relatively neglected. This study is an attempt to fill that gap by quantifying the sensitivity of feature selection algorithms to variations in the training set. We assess the stability of feature selection algorithms based on the stability of the feature preferences that they express in the form of weights or scores, ranks, or a selected feature subset. We examine a number of measures to quantify the stability of feature preferences and propose an empirical way to estimate them. We perform a series of experiments with several feature selection algorithms on a set of proteomics datasets. The experiments allow us to explore the merits of each stability measure and create stability profiles of the feature selection algorithms. Finally, we show how stability profiles can support the choice of a feature selection algorithm. Alexandros Kalousis received the B.Sc. degree in computer science, in 1994, and the M.Sc. degree in advanced information systems, in 1997, both from the University of Athens, Greece. He received the Ph.D. degree in meta-learning for classification algorithm selection from the University of Geneva, Department of Computer Science, Geneva, in 2002. Since then he has been a Senior Researcher at the same university. His research interests include relational learning with kernels and distances, stability of feature selection algorithms, and feature extraction from spectral data. Julien Prados is a Ph.D. student at the University of Geneva, Switzerland. In 1999 and 2001, he received the B.Sc. and M.Sc. degrees in computer science from the University Joseph Fourier (Grenoble, France). 
After a year of work in industry, he joined the Geneva Artificial Intelligence Laboratory, where he is working on bioinformatics and data mining tools for mass spectrometry data analysis. Melanie Hilario has a Ph.D. in computer science from the University of Paris VI and currently works at the University of Geneva's Artificial Intelligence Laboratory. She has initiated and participated in several European research projects on neuro-symbolic integration, meta-learning, and biological text mining. She has served on the program committees of many conferences and workshops in machine learning, data mining, and artificial intelligence. She is currently an Associate Editor of the International Journal on Artificial Intelligence Tools and a member of the Editorial Board of the Intelligent Data Analysis journal.
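One common way to make the subset-based notion of stability concrete is to resample the training set and average the pairwise Jaccard similarity of the selected feature subsets. The sketch below follows that scheme; the selector interface, the choice of Jaccard, and the data are illustrative assumptions, not the paper's exact protocol.

```python
import random

def jaccard(a, b):
    """Jaccard similarity of two feature subsets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def selection_stability(select, data, k_resamples=20, seed=0):
    """Estimate the stability of a feature selector as the average
    pairwise Jaccard similarity of the subsets it picks on bootstrap
    resamples of the training data."""
    rng = random.Random(seed)
    subsets = []
    for _ in range(k_resamples):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        subsets.append(select(sample))
    pairs = [(i, j) for i in range(k_resamples) for j in range(i + 1, k_resamples)]
    return sum(jaccard(subsets[i], subsets[j]) for i, j in pairs) / len(pairs)

# A degenerate selector that always returns the same features is
# perfectly stable, giving the maximum score of 1.0.
data = list(range(100))
print(selection_stability(lambda sample: [0, 1, 2], data))  # → 1.0
```

A selector whose choices drift with each resample scores closer to 0, which is exactly the kind of profile the paper uses to compare feature selection algorithms.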

20.
Legal knowledge-based systems (KBSs) are, by definition, grounded on law. Very often the relevant law is subject to routine amendment and repeal, such changes occurring at irregular and unpredictable intervals. These systems are thus particularly affected by significant problems of adaptation, a fact which has limited their practical take-up. If they are to be of more practical use, the maintenance issues associated with these systems must be taken seriously. In this paper we discuss the issues associated with the maintenance of legal KBSs and describe a suite of maintenance tools designed to address these issues.
