Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Situation theory in the sense of Barwise and Perry (1983) and nonmonotonic reasoning have been relatively disparate research programs in AI, the former focusing on computational approaches to natural language processing while the latter has been extensively used as a basic architecture for rational agents. The aim of the present paper is to suggest one way that these two approaches might fit together. Specifically, a situation semantics is given for a system of defeasible reasoning.

2.
Conclusions reached using common sense reasoning from a set of premises are often subsequently revised when additional premises are added. Because we do not always accept previous conclusions in light of subsequent information, common sense reasoning is said to be nonmonotonic. But in the standard formal systems usually studied by logicians, if a conclusion follows from a set of premises, that same conclusion still follows no matter how the premise set is augmented; that is, the consequence relations of standard logics are monotonic. Much recent research in AI has been devoted to the attempt to develop nonmonotonic logics. After some motivational material, we give four formal proofs that there can be no nonmonotonic consequence relation that is characterized by universal constraints on rational belief structures. In other words, a nonmonotonic consequence relation that corresponds to universal principles of rational belief is impossible. We show that the nonmonotonicity of common sense reasoning is a function of the way we use logic, not a function of the logic we use. We give several examples of how nonmonotonic reasoning systems may be based on monotonic logics.
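The monotonicity contrast drawn in this abstract can be made concrete with a toy defeasible rule. The sketch below is illustrative only (it is not the paper's formalism; the bird/penguin example is the standard one from the nonmonotonic-reasoning literature): enlarging the premise set can withdraw a previously drawn conclusion, which no monotonic consequence relation permits.

```python
# A defeasible rule: "birds fly" fires unless an exception (penguin) is
# among the premises. Adding a premise can therefore retract a conclusion,
# which is exactly what monotonic consequence relations forbid.

def concludes_flies(premises):
    """Defeasibly conclude 'flies' from a set of premise atoms."""
    return "bird" in premises and "penguin" not in premises

assert concludes_flies({"bird"}) is True               # Tweety flies
assert concludes_flies({"bird", "penguin"}) is False   # conclusion withdrawn
```

Here the nonmonotonicity lives in how the rule is *used* (the exception check over the current premise set), not in the underlying logic, echoing the paper's thesis.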

3.
The widespread tendency, even within AI, to anthropomorphize machines makes it easier to convince us of their intelligence. How can any putative demonstration of intelligence in machines be trusted if the AI researcher readily succumbs to make-believe? This is (what I shall call) the forensic problem of anthropomorphism. I argue that the Turing test provides a solution. This paper illustrates the phenomenon of misplaced anthropomorphism and presents a new perspective on Turing's imitation game. It also examines the role of the Turing test in relation to the current dispute between human-level AI and 'mindless intelligence'.

4.
Artificial intelligence (AI) is once again a topic of huge interest for computer scientists around the world. Whilst advances in the capability of machines are being made at an incredible rate, there is also increasing focus on the need for computerised systems to be able to explain their decisions, at least to some degree. It is also clear that data and knowledge in the real world are characterised by uncertainty. Fuzzy systems can provide decision support which both handles uncertainty and has explicit representations of uncertain knowledge and inference processes. However, it is not yet clear how any decision support systems, including those featuring fuzzy methods, should be evaluated to determine whether their use is permitted. This paper presents a conceptual framework of indistinguishability as the key component of the evaluation of computerised decision support systems. Case studies are presented which clearly demonstrate that human expert performance is less than perfect, together with techniques that may enable fuzzy systems to emulate human-level performance, including its variability. In conclusion, this paper argues for the need for "fuzzy AI" in two senses: (i) the need for fuzzy methodologies (in the technical sense of Zadeh's fuzzy sets and systems) as knowledge-based systems to represent and reason with uncertainty; and (ii) the need for fuzziness (in the non-technical sense), with an acceptance of imperfect performance in evaluating AI systems.
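The fuzzy methodology the abstract refers to can be illustrated with a minimal sketch of Zadeh-style fuzzy sets. The membership function and the rule below are hypothetical examples of mine, not taken from the paper's case studies:

```python
# A triangular membership function, the simplest Zadeh-style fuzzy set,
# and the firing strength of one fuzzy rule evaluated against it.

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule: IF temperature is "high" THEN recommend cooling.
# The rule fires to the degree that the input belongs to "high".
temp = 78.0
firing = tri(temp, 60, 80, 100)    # degree to which 78 is "high"
assert abs(firing - 0.9) < 1e-9    # (78 - 60) / (80 - 60) = 0.9
```

Partial firing strengths like 0.9, rather than a hard true/false, are what gives fuzzy decision support its explicit representation of uncertainty.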

5.
An analysis of Ray Kurzweil's recent book The Singularity Is Near is given, along with Drew McDermott's recent critique. The conclusion is that Kurzweil does an excellent job of fleshing out one particular plausible scenario regarding the future of AI, in which human-level AI first arrives via human-brain emulation. McDermott's arguments against the notion of Singularity via iteratively self-improving AI, as described by Kurzweil, are considered and found wanting. However, it is pointed out that the scenario focused on by Kurzweil is not the only plausible one; and an alternative is discussed, in which human-level AI arrives first via non-human-like AIs operating in virtual worlds.

6.
Defining functions by pattern matching over the arguments is advantageous for understanding and reasoning, but it tends to expose the implementation of a datatype. Significant effort has been invested in tackling this loss of modularity; however, decoupling patterns from concrete representations while maintaining soundness of reasoning has been a challenge. Inspired by the development of invertible programming, we propose an approach to program refactoring based on a right-invertible language rinv—every function has a right (or pre-) inverse. We show how this new design is able to permit a smooth incremental transition from programs with algebraic datatypes and pattern matching, to ones with proper encapsulation, while maintaining simple and sound reasoning.
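The decoupling idea above can be sketched informally. rinv is the paper's own language; the Python analogue below is mine: functions "pattern match" on an abstract view produced by a conversion that has a right inverse, so the concrete representation (here a join-list of nested pairs) stays encapsulated.

```python
# View-based matching: clients inspect to_list(t), never the tuple shape,
# and from_list is a right inverse of to_list (to_list(from_list(xs)) == xs).

def to_list(t):
    """View: flatten a join-list (nested pairs) into a plain list."""
    if t is None:
        return []
    if not isinstance(t, tuple):
        return [t]
    left, right = t
    return to_list(left) + to_list(right)

def from_list(xs):
    """Right inverse of to_list: rebuild *a* join-list from a plain list."""
    t = None
    for x in reversed(xs):
        t = x if t is None else (x, t)
    return t

def head(t):
    """Client code matches on the view, not on the representation."""
    view = to_list(t)
    return view[0] if view else None

jl = ((1, 2), (3, 4))                       # one concrete representation
assert to_list(jl) == [1, 2, 3, 4]
assert to_list(from_list([1, 2, 3, 4])) == [1, 2, 3, 4]
assert head(jl) == 1
```

The representation can later be swapped (e.g. to a balanced tree) without touching `head`, which is the modularity benefit the abstract describes.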

7.
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general an Oracle AI might be safer than unrestricted AI, but still remains potentially dangerous.

8.
In order to express incomplete knowledge, extended logic programs have been proposed as logic programs with classical negation along with negation as failure. This paper discusses ways to deal with a broad class of common sense knowledge by using extended logic programs. For this purpose, we present a uniform approach for dealing with both incomplete and contradictory programs, as a simple framework of hypothetical reasoning in which some rules are dealt with as candidate hypotheses that can be used to augment the background theory. This theory formation framework can be used for default reasoning, contradiction removals, the closed world assumption, and abduction. We also show a translation of the theory formation framework to an extended logic program whose answer sets correspond to the consistent belief sets of augmented theories.
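The theory-formation framework can be caricatured in a few lines. This is my own propositional toy, not the paper's answer-set translation: candidate hypotheses augment a background theory only while the result stays consistent, and different hypothesis orderings yield different belief sets, mirroring multiple extensions.

```python
# Literals are strings; "-p" is the classical negation of atom "p".

def consistent(literals):
    """A set of literals is consistent iff no atom appears both ways."""
    return not any(("-" + l) in literals
                   for l in literals if not l.startswith("-"))

def augment(background, hypotheses):
    """Add each candidate hypothesis (preconditions, conclusion) whose
    preconditions hold and whose conclusion preserves consistency.
    The order of hypotheses matters, like multiple belief sets."""
    theory = set(background)
    for pre, concl in hypotheses:
        if pre <= theory and consistent(theory | {concl}):
            theory.add(concl)
    return theory

# More specific default listed first so it takes priority.
defaults = [({"penguin"}, "-flies"), ({"bird"}, "flies")]
assert augment({"bird"}, defaults) == {"bird", "flies"}
assert augment({"bird", "penguin"}, defaults) == {"bird", "penguin", "-flies"}
```

The consistency check is the hypothetical-reasoning step: a contradictory candidate is simply not adopted, which is how the framework copes with contradictory programs.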

9.
Audiences in argumentation frameworks   (total citations: 1; self-citations: 0; citations by others: 1)

10.
Over the past decade, a large number of emerging multimedia applications and services have appeared, bringing abundant multimedia data for cutting-edge multimedia research. Multimedia research has made important progress in directions such as image/video content analysis, multimedia search and recommendation, streaming services, and multimedia content delivery. At the same time, thanks to major breakthroughs in deep learning, artificial intelligence (AI), formally recognized as a discipline in the 1950s, has entered a "new" wave of development. A question therefore naturally arises: what happens when multimedia meets AI? To answer this question, this paper introduces the concept of multimedia intelligence by studying the mutual influence between multimedia and AI. We explore this mutual influence from two aspects: first, multimedia pushes AI toward greater explainability; second, AI in turn injects new ways of thinking into multimedia research. These two aspects form a virtuous cycle in which multimedia and AI continually promote each other. This paper discusses related work and progress, and shares insights on research directions worth further exploration, in the hope of inspiring new ideas for the future development of multimedia intelligence.

11.
AI in Latin America is healthy and growing in at least five countries and expanding to other nations. For instance, ITESM's AI graduate programs have graduated students from Bolivia, Peru, and Ecuador, who are returning to their countries to work in universities and companies. AI is a young scientific discipline compared to other sciences. Since its creation in the mid-twentieth century by Alan Turing and various American researchers, it has grown steadily and spread across the world, including Latin America. This has been facilitated by the sharing of a common language and culture, but most importantly, by the great scientific challenges posed by AI's objectives.

12.
13.
In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and by so doing AI researchers condemned their enterprise to failure. About the same time, a logician, Yehoshua Bar-Hillel, pointed out that AI optimism was based on what he called the "first step fallacy". First step thinking has the idea of a successful last step built in. Limited early success, however, is not a valid basis for predicting the ultimate success of one's project. Climbing a hill should not give one any assurance that if he keeps going he will reach the sky. Perhaps one may have overlooked some serious problem lying ahead. There is, in fact, no reason to think that we are making progress towards AI or, indeed, that AI is even possible, in which case claiming incremental progress towards it would make no sense. In current excited waiting for the singularity, religion and technology converge. Hard-headed materialists desperately yearn for a world where our bodies no longer have to grow old and die. They will be transformed into information, as Google digitizes old books, and we will achieve the promise of eternal life. As an existential philosopher, however, I suggest that we may have to overcome the desperate desire to digitalize our bodies so as to achieve immortality, and, instead, face up to and maybe even enjoy our embodied finitude.

14.
It was noted recently that the framework of default logics can be exploited for detecting outliers. Outliers are observations expressed by sets of literals that feature unexpected properties. These observations are not explicitly provided in input (as happens with abduction) but, rather, they are hidden in the given knowledge base. Unfortunately, in the two related formalisms for specifying defaults, Reiter's default logic and extended disjunctive logic programs, the most general outlier detection problems turn out to lie at the third level of the polynomial hierarchy. In this note, we analyze the complexity of outlier detection for two very simple classes of default theories, namely NU and DNU, for which the entailment problem is solvable in polynomial time. We show that, for these classes, checking for the existence of an outlier is nonetheless intractable. This result contributes to further showing the inherent intractability of outlier detection in default reasoning.

15.
Modelling reasoning with legal cases has been a central concern of AI and Law since the 1980s. The approach which represents cases as factors and dimensions has been a central part of that work. In this paper I consider how several varieties of the approach can be applied to the interesting case of Popov v Hayashi. After briefly reviewing some of the key landmarks of the approach, the case is represented in terms of factors and dimensions, and further explored using theory construction and argumentation schemes approaches.

16.
A program property is a predicate on programs. In this paper we explore program properties for safety, progress and parallel composition, of the form U ↝ V, where U and V are either predicates on states of a program or program properties, and ↝ satisfies three rules that are also enjoyed by implication. We show how such properties can be used to reason about concurrent programs. Our motivation is to explore methods of reasoning based on a very small number of widely-known rules.

17.
Deep learning models have achieved high performance across different domains, such as medical decision-making, autonomous vehicles, and decision support systems, among many others. However, despite this success, the inner mechanisms of these models are opaque because their internal representations are too complex for a human to understand. This opacity makes it hard to understand the how or the why of the predictions of deep learning models. There has been a growing interest in model-agnostic methods that make deep learning models more transparent and explainable to humans. Some researchers recently argued that for a machine to achieve human-level explainability, it needs to provide causally understandable explanations to humans, a property also known as causability. A specific class of algorithms that has the potential to provide causability is counterfactuals. This paper presents an in-depth systematic review of the diverse existing literature on counterfactuals and causability for explainable artificial intelligence (AI). We performed a Latent Dirichlet Allocation (LDA) topic modelling analysis under a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to find the most relevant literature articles. This analysis yielded a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. Our research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Furthermore, our findings suggest that the explanations derived from popular algorithms in the literature provide spurious correlations rather than cause-effect relationships, leading to sub-optimal, erroneous, or even biased explanations. Thus, this paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches for explainable AI.
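The basic mechanics of a counterfactual explanation can be sketched with a toy model. This is a generic illustration of mine, not one of the surveyed algorithms: find the smallest change to an input feature that flips the model's decision, which is the "what would have to differ" that a counterfactual explanation reports.

```python
# Toy loan classifier and a naive one-feature counterfactual search.
# Real methods jointly optimize distance to the original input and
# validity of the flipped prediction; this sketch only shows the idea.

def model(income, debt):
    """Hypothetical decision rule: approve iff income - debt >= 50."""
    return income - debt >= 50

def counterfactual_income(income, debt, step=1):
    """Raise income by `step` until the decision flips; return it."""
    x = income
    while not model(x, debt):
        x += step
    return x

assert model(60, 30) is False               # 60 - 30 = 30 < 50: rejected
assert counterfactual_income(60, 30) == 80  # "had income been 80, approved"
assert model(80, 30) is True
```

Note that such a counterfactual describes the *model's* decision boundary, not a real-world causal mechanism, which is precisely the gap between counterfactual explanations and causability that the review identifies.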

18.
In this paper a survey of elaboration tolerance in logical AI is provided. John McCarthy views elaboration tolerance as the key property of any formalism that can represent information in the common sense informatic situation. The goal of studying elaboration tolerance is finding a formalism for describing problems logically that is as elaboration tolerant as natural language and the associated background knowledge. We first introduce the missionaries and cannibals problem and the elaborations of it provided by John McCarthy as the test examples for studying elaboration tolerance. Then we introduce the study of elaboration tolerance from three aspects. First, the study of elaboration tolerance in existing systems, such as the Causal Calculator and ABSFOL, is introduced. Second, the study of specific elaborations, such as elaborations of actions, is presented. Last but not least, a formal definition of elaboration tolerance and evaluation tools are provided.

19.
Based on the view that symmetry recognition plays an essential role in human reasoning about the laws of physical phenomena, we propose a reasoning paradigm in which symmetries assist in the discovery of physical laws. Within this paradigm, symmetries are used as constraints which enable us to specify, derive, and generalize these equations. The symmetry-based reasoning is extracted and formalized from Einstein's work on relativity. We claim that the reasoning procedure thus formalized provides a general reasoning architecture that is common to dimensional analysis in engineering, mathematical proofs, and common sense reasoning. This symmetry-based reasoning system has been implemented as a symbol-processing system with a production system and a formula-processing system. Using the symmetry-based reasoning system, the equation of Black's law of specific heat is demonstrated to be specified, derived, and generalized.
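The dimensional-analysis instance of constraint-based reasoning mentioned above can be shown with a classic worked example. The pendulum case is my illustration, not from the paper: requiring the dimensions on both sides of a candidate law to match fixes the exponents, and hence the form of the equation, before any experiment.

```python
# Candidate law for a pendulum's period: T ~ m^a * L^b * g^c.
# Matching dimensions (M, L, T) on both sides gives linear constraints:
#   mass:   a = 0
#   length: b + c = 0        (g carries one power of L)
#   time:   -2c = 1          (g carries T^-2; target is T^1)
from fractions import Fraction as F

a = F(0)          # mass cannot appear
c = F(-1, 2)      # from -2c = 1
b = -c            # from b + c = 0

assert (a, b, c) == (F(0), F(1, 2), F(-1, 2))
# So T ~ sqrt(L / g): the period is independent of mass, and the
# functional form follows from the dimensional constraints alone.
```

This is the sense in which symmetry-style constraints "specify and derive" an equation: they prune the space of candidate laws down to a one-parameter family.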

20.
Artificial intelligence: where has it been, and where is it going?   (total citations: 3; self-citations: 0; citations by others: 3)
The directions for near-future development of artificial intelligence (AI) can be described in terms of four dichotomies: the use of reasoning versus the use of knowledge; the roles of parallel and of serial systems; systems that perform and systems that learn to perform; and programming languages derived from the search metaphor versus languages derived from the logical reasoning metaphor. Although the author believes that there are reasons for emphasizing knowledge systems (production systems) that are serial, capable of expert performance, and designed in terms of the search metaphor, the other pathways are also important and should not be ignored. In particular, empirical work is needed in the construction and testing of the performance of large systems to explore all of these branching pathways.
