Similar documents
20 similar documents found; search time: 31 ms
1.
How to Pass a Turing Test   (total citations: 1; self-citations: 0; citations by others: 1)
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax – a study of relations among symbols (including meanings) – and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: The base case of semantic understanding – understanding a domain in terms of itself – is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition.

2.
Since Aristotle it has been recognised that a valid syllogism cannot have two particular premises. However, that is not how lay people see it, at least as long as the premises read 'many', 'most', etc., instead of a plain 'some'. The lay people are right if one considers that these syllogisms have not strict but approximate (Zadeh) validity. Typically, only particular premises are available in everyday life, and one is dependent on such syllogisms. Some rules on the usage of particular premises are given below.
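As a rough illustration of what approximate (Zadeh-style) validity can mean here (these are not the rules given in the paper), quantifier words can be read as assumed lower bounds on proportions and chained multiplicatively; the conclusion is sound only when the middle term B is contained in A:

```python
# Toy sketch of Zadeh's multiplicative chaining syllogism: from
# "Q1 A's are B's" and "Q2 B's are C's" conclude "at least Q1*Q2 A's
# are C's", valid under the assumption B is a subset of A.
# The numeric bounds attached to the words are assumptions.

QUANTIFIERS = {"almost all": 0.95, "most": 0.80, "many": 0.60, "some": 0.10}

def chain(q1: str, q2: str) -> float:
    """Lower bound on the proportion of A's that are C's (assumes B ⊆ A)."""
    return QUANTIFIERS[q1] * QUANTIFIERS[q2]

def nearest_word(p: float) -> str:
    """Strongest quantifier word whose bound is still guaranteed."""
    return max((w for w, b in QUANTIFIERS.items() if b <= p),
               key=QUANTIFIERS.get, default="some")

if __name__ == "__main__":
    p = chain("most", "most")      # 0.8 * 0.8 = 0.64
    print(p, nearest_word(p))      # -> 0.64 'many'
```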

3.
In this paper I consider how the computer can or should be accepted in Japanese schools. The concept of teaching in Japan stresses learning from a long-term perspective, whereas the instructional technology on which CAI and tutoring systems depend emphasizes step-by-step attainment over relatively short periods. The former is reluctant to use the computer, but both share the Platonic, goal-oriented perspective. However, the Socratic teacher, who intends to activate students' innate disposition to become better, would find another way of teaching and of using the computer.

4.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.
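The paper's formal cost definition is not reproduced here, but a minimal sketch can combine the three ingredients it mentions: the guest graph's structure, externally specified costs on the host graph, and externally specified usage frequencies. In this toy version each guest edge, weighted by its usage, is charged the cheapest host path between the images of its endpoints; the graphs, weights and mapping below are all made up for illustration:

```python
# Hedged sketch: score an encoding of a "guest" (logical) graph into a
# "host" (storage) graph as  sum over guest edges of
#   usage(edge) * cheapest host path between the endpoint images.

import heapq
from collections import defaultdict

def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def encoding_cost(guest_edges, usage, host_edges, host_cost, embed):
    """guest_edges: [(a,b)]; usage: {(a,b): freq}; host_edges: [(x,y)];
    host_cost: {(x,y): cost}; embed: guest node -> host node."""
    adj = defaultdict(list)
    for (x, y) in host_edges:
        c = host_cost[(x, y)]
        adj[x].append((y, c))
        adj[y].append((x, c))
    total = 0.0
    for (a, b) in guest_edges:
        dist = dijkstra(adj, embed[a])
        total += usage[(a, b)] * dist[embed[b]]
    return total

# Toy example: a 3-node guest path encoded in a 4-node host cycle.
guest = [("p", "q"), ("q", "r")]
use = {("p", "q"): 10, ("q", "r"): 1}
host = [(0, 1), (1, 2), (2, 3), (3, 0)]
cost = {e: 1.0 for e in host}
print(encoding_cost(guest, use, host, cost, {"p": 0, "q": 1, "r": 3}))  # 12.0
```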

5.
Fritz Böhle 《AI & Society》1994,8(3):207-215
The increasing use of computer-controlled automation systems has brought with it a bias towards a purely scientific approach to work. This tends to undermine the significance of experiential knowledge and sensory perception when working with highly automated processes. This paper argues for a recognition of the importance of subjectifying action in carrying out work practices. Without it, complex technical systems cannot be effectively operated. Moreover, the contradictory demands that arise for workers could have negative consequences in terms of work burdens and health risks.

6.
This paper discusses some of the key drivers that will enable businesses to operate effectively on-line, and looks at how the notion of a website will become that of an on-line presence supporting the main activities of an organisation. This is placed in the context of the development of the information society, which will allow individuals, as consumers or employees, quick, inexpensive and on-demand access to vast quantities of entertainment, services and information. The paper draws on an example of these developments in Australasia.

7.
A multicriterion optimization method is proposed for complex systems with parameters ranked by descending importance. The method requires weaker expert estimates for choosing an optimal alternative from a set of equally good solutions, by formally specifying the functional dependence between the ranked parameter weights. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 167–170, November–December, 1991.
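The functional form relating the ranked weights in the paper is not reproduced here; as a hedged stand-in, one common choice is weights that decay geometrically with rank, combined into a weighted score. The decay factor and the toy alternatives below are assumptions:

```python
# Hedged sketch: pick among equally good alternatives when criteria are
# ranked by importance, using assumed geometrically decaying weights.

def ranked_weights(n_criteria: int, decay: float = 0.5):
    raw = [decay ** i for i in range(n_criteria)]   # rank 0 = most important
    s = sum(raw)
    return [w / s for w in raw]

def best_alternative(alternatives):
    """alternatives: dict name -> list of criterion scores, ordered by rank."""
    n = len(next(iter(alternatives.values())))
    w = ranked_weights(n)
    def score(values):
        return sum(wi * vi for wi, vi in zip(w, values))
    return max(alternatives, key=lambda name: score(alternatives[name]))

print(best_alternative({"A": [0.9, 0.2, 0.1], "B": [0.7, 0.9, 0.9]}))  # -> B
```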

8.
Recently, researchers have mainly been interested in searching for data items that are globally similar to the query, not in searching inside data items. This paper presents an algorithm, called the generalized virtual node (GVN) algorithm, to search for data items whose parts (subdatatypes) are similar to the incoming query. We call this subdatatype-based multimedia retrieval. Each multimedia datatype, such as image and audio, is represented in this paper as a k-dimensional signal in the spatio-temporal domain. A k-dimensional signal is transformed into characteristic features, and these features are stored in a hierarchical multidimensional structure, called the k-tree. Each node on the k-tree contains partial content corresponding to spatial and/or temporal positions in the data. The k-tree structure allows us to build a unified retrieval model for any type of multimedia data. It also eliminates unnecessary comparisons in cross-media querying. The experimental results of the use of the new GVN algorithm for subaudio and subimage retrieval show that it requires much less retrieval time than earlier algorithms such as brute force and partial matching, while its accuracy remains acceptable.
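A hedged, heavily simplified sketch of this kind of hierarchical, part-based search is shown below for a 1-D signal: each node of a binary tree over the signal stores a feature (here just the segment mean) plus value bounds used for pruning, and the query is matched against leaf segments of roughly its own length. Overlapping windows, real feature vectors and cross-media handling are omitted, and none of the names below come from the paper:

```python
# Hedged sketch of hierarchical part-based retrieval over a 1-D signal.

class KTreeNode:
    """One node: a segment of the signal plus a toy feature (its mean)."""
    def __init__(self, signal, start, length, leaf_len):
        seg = signal[start:start + length]
        self.start, self.length = start, length
        self.lo, self.hi = min(seg), max(seg)          # bounds used for pruning
        self.feature = sum(seg) / length
        self.children = []
        if length > leaf_len:
            half = length // 2
            self.children = [KTreeNode(signal, start, half, leaf_len),
                             KTreeNode(signal, start + half, length - half, leaf_len)]

def search(node, q_feat, tol, hits):
    """Collect (start, length) of leaf segments whose feature matches the query."""
    if q_feat < node.lo - tol or q_feat > node.hi + tol:
        return hits                 # no sub-segment mean can fall within tol
    if not node.children and abs(node.feature - q_feat) <= tol:
        hits.append((node.start, node.length))
    for child in node.children:
        search(child, q_feat, tol, hits)
    return hits

signal = [0, 0, 0, 0, 9, 9, 8, 9, 1, 0, 1, 0, 0, 1, 0, 1]
query = [9, 8, 9, 9]
root = KTreeNode(signal, 0, len(signal), leaf_len=len(query))
print(search(root, sum(query) / len(query), tol=1.0, hits=[]))   # -> [(4, 4)]
```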

9.
A note on dimensions and factors   (total citations: 1; self-citations: 1; citations by others: 0)
In this short note, we discuss several aspects of dimensions and the related construct of factors. We concentrate on those aspects that are relevant to articles in this special issue, especially those dealing with the analysis of the wild animal cases discussed in Berman and Hafner's 1993 ICAIL article. We review the basic ideas about dimensions, as used in HYPO, and point out differences with factors, as used in subsequent systems like CATO. Our goal is to correct certain misconceptions that have arisen over the years.

10.
A memory-coupled multiprocessor—well suited to bit-wise operation—can be utilized to operate as a 1024-item cellular processing unit. Each processor works on 32 bits, and 32 such processors are combined into a multiprocessor. The information is stored in the vertical direction, as defined and described in earlier papers [1] on vertical processing. The two-dimensional array (32 × 32 bits) is composed of the 32-bit machine words of the coupled processors on the one hand and of the 32 processors in nearest-neighbour topology on the other. The bit-wise cellular operation at one of the 1024 points is realized by the program of the processor—possibly assisted by appropriate microprogram sequences. Dedicated to Professor Willard L. Miranker on the occasion of his 60th birthday
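A hedged, single-word illustration of the vertical storage idea (the paper's 32-processor, 1024-cell machine is not modelled here): the state bits of 32 cells are packed into one 32-bit word, so a single bitwise expression updates all 32 cells per step. The particular update rule, an XOR of the two neighbours (elementary Rule 90), is only an example:

```python
# Hedged sketch of bit-parallel ("vertical") cellular processing: one
# 32-bit word holds the state bit of 32 cells on a ring.

MASK = (1 << 32) - 1

def step(word: int) -> int:
    left = ((word << 1) | (word >> 31)) & MASK    # cyclic left neighbour
    right = ((word >> 1) | (word << 31)) & MASK   # cyclic right neighbour
    return (left ^ right) & MASK                  # one expression updates 32 cells

state = 1 << 16                                   # a single live cell
for _ in range(4):
    print(format(state, "032b"))
    state = step(state)
```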

11.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced into the hypothesis-testing process. The entire decision space is partitioned into distinct correct, uncertain and incorrect regions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor in the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of the individual sensors can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
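The optimal Bayes-risk rules derived in the paper are not reproduced here; the sketch below only illustrates the two-layer structure with an explicit 'uncertain' outcome. The confidence threshold, the per-sensor confusion model and the priors are all assumptions:

```python
# Hedged sketch of two-layer fusion with an "uncertain" outcome: each
# sensor decides locally; the fusion centre acts as a virtual sensor
# whose observations are the local decisions.

HYPS = ["H0", "H1", "H2"]
THRESH = 0.6                       # assumed confidence threshold

def local_decision(posterior):
    """Thresholded MAP rule: best hypothesis, or 'uncertain' if unconvincing."""
    best = max(posterior, key=posterior.get)
    return best if posterior[best] >= THRESH else "uncertain"

def fuse(decisions, conf_model, prior):
    """conf_model[s][h][d] = assumed P(sensor s decides d | hypothesis h)."""
    post = {}
    for h in HYPS:
        p = prior[h]
        for s, d in enumerate(decisions):
            p *= conf_model[s][h].get(d, 1e-9)
        post[h] = p
    z = sum(post.values()) or 1e-12
    post = {h: p / z for h, p in post.items()}
    return local_decision(post)     # the fusion centre reuses the same rule

def confusion(correct=0.7, uncertain=0.2):
    wrong = (1.0 - correct - uncertain) / (len(HYPS) - 1)
    return {h: {**{d: (correct if d == h else wrong) for d in HYPS},
                "uncertain": uncertain} for h in HYPS}

sensors = [confusion(), confusion()]
prior = {h: 1 / 3 for h in HYPS}
print(fuse(["H1", "uncertain"], sensors, prior))   # -> 'H1'
```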

12.
The paper introduces the concept of Computer-based Informated Environments (CBIEs) to indicate an emergent form of work organisation facilitated by information technology. It first addresses the problem of inconsistent meanings of the informate concept in the literature, and it then focuses on those cases which, it is believed, show conditions of plausible informated environments. Finally, the paper looks at those factors that when found together contribute to building a CBIE. It makes reference to CBIEs as workplaces that comprise a non-technocentric perspective and questions whether CBIEs truly represent an anthropocentric route of information technology.

13.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

14.
Summary Distributed Mutual Exclusion algorithms have been mainly compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests. Indeed, they adopt FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Short-Job-First, Head-Of-Line, Shortest-Remaining-Job-First etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines will be inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages than Maekawa-type algorithms per critical section execution. Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 from the University of Rome La Sapienza and the Ph.D. degree in Computer Science from the University of Rome La Sapienza in 1994. Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications. Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome La Sapienza. From 1983 to 1991 he was a researcher at the University of Rome Tor Vergata. He is currently full professor in Computer Science at the University of Rome La Sapienza. His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET
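A hedged, centralised simulation of the gated-batch idea is sketched below (the algorithm in the paper is distributed and Maekawa-type; quorums and message exchange are omitted): requests arriving while a batch is in service are gated into the next batch, and each batch is served in priority order, so high priority is honoured without starvation:

```python
# Hedged, centralised sketch of a gated-batch priority discipline.

import heapq

class GatedBatchLock:
    def __init__(self):
        self.current = []       # batch being served (heap ordered by priority)
        self.next_batch = []    # requests gated until the current batch drains

    def request(self, name, priority):
        heapq.heappush(self.next_batch, (priority, name))

    def grant(self):
        """Return the next request to enter the critical section, or None."""
        if not self.current:                        # gate opens: swap batches
            self.current, self.next_batch = self.next_batch, []
        return heapq.heappop(self.current)[1] if self.current else None

lock = GatedBatchLock()
for name, prio in [("A", 5), ("B", 1)]:
    lock.request(name, prio)
print(lock.grant())        # 'B' (higher priority = lower number)
lock.request("C", 0)       # arrives late: gated into the next batch
print(lock.grant())        # 'A' is still served before C, so A cannot starve
print(lock.grant())        # 'C'
```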

15.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation is given to the problem. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
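The paper's criterion and convergence results are not reproduced here; the following is only a minimal sketch in the same spirit, where each cluster is represented by a small 'sampling' (core) of points rather than a single centre. The distance, core size and refresh rule are assumptions:

```python
# Hedged sketch of sampling-based clustering: points join the nearest
# sampling; each sampling is then refreshed with the points closest to
# its cluster's mean (the cluster's "most typical" elements).

import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def point_to_sampling(p, sampling):
    return min(dist2(p, s) for s in sampling)

def cluster(points, k=2, core_size=2, iters=10, seed=0):
    rng = random.Random(seed)
    samplings = [rng.sample(points, core_size) for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: point_to_sampling(p, samplings[i]))
            groups[j].append(p)
        for i, g in enumerate(groups):
            if not g:
                continue
            centroid = [sum(c) / len(g) for c in zip(*g)]
            samplings[i] = sorted(g, key=lambda q: dist2(q, centroid))[:core_size]
    return samplings, groups

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9)]
cores, groups = cluster(pts)
print(cores)   # typically one core per natural cluster
```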

16.
In two recent books, Jerry Fodor has developed a set of sufficient conditions for an object 'X' to non-naturally and non-derivatively mean X. In an earlier paper we presented three reasons for thinking Fodor's theory to be inadequate. One of these problems we have dubbed the Pathologies Problem. In response to queries concerning the relationship between the Pathologies Problem and what Fodor calls Block's Problem, we argue that, while Block's Problem does not threaten Fodor's view, the Pathologies Problem does. We would like to thank Ray Elugardo, Pat Manfredi, and Donna Summerfield for helpful comments on an earlier paper on Fodorian Semantics, 'X' means X: Semantics Fodor-Style. We would especially like to thank Ned Block for extended e-mail conversations about Block's Problem. Block agrees that his problem is not the same as our Pathologies Problem. Contrary to what we say here, he still maintains that his objection can ultimately be made to work to defeat Fodor's theory of meaning. His elaboration of Block's Problem is different from the one we present here. Versions of a related paper were presented at the 1991 Annual Meeting of the Southern Society for Philosophy and Psychology as well as the Canadian Society for History and Philosophy of Science.

17.
In 1996, the author attended a seminar on ethics given by C. West Churchman at the University of California, Berkeley. During that year, the author also interviewed Churchman several times regarding the future direction of information sciences in general and the information systems research field in particular. This article is a compilation of the seminar and the interviews. It is set in the context of both Churchman's earlier and his current views of a global god, good, kindness, and caring. C. West Churchman holds that global ethics should lead to the study and design of information systems to solve large and difficult problems of humankind such as poverty, crime and disease. His Global Ethical Management (GEM) of information sciences translates into abandoning the current goals and boundaries of the information sciences fields and changing what constitutes valid research to globally ethical endeavors.

18.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term model is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in the formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of model in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates in which model is a key concept. We also suggest that the need for a philosophical discussion of the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

19.
A neural network for recognition of handwritten musical notes, based on the well-known Neocognitron model, is described. The Neocognitron has been used for the 'what' pathway (symbol recognition), while contextual knowledge has been applied for the 'where' pathway (symbol placement). In this way, we benefit from dividing up the processing of this complicated recognition task. Also, different degrees of intrusiveness in learning have been incorporated in the same network: more intrusive supervised learning has been implemented in the lower neuron layers and less intrusive learning in the upper one. In this way, the network adapts itself to the handwriting of the user. The network consists of a 13×49 input layer and three pairs of simple and complex neuron layers. It has been trained to recognize 20 symbols of unconnected notes on a musical staff and was tested with a set of unlearned input notes. Its recognition rate for individual unseen notes was up to 93%, averaging 80% over all categories. These preliminary results indicate that a modified Neocognitron could be a good candidate for identification of handwritten musical notes.
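A hedged re-sketch of the described architecture in a modern framework (PyTorch) is given below: three simple/complex (convolution/pooling) layer pairs over a 13×49 input and 20 output classes. Channel counts, kernel sizes, ReLU and the pooling choices are assumptions rather than values from the paper, and the network is untrained:

```python
# Hedged, modern re-sketch of a Neocognitron-like stack of three
# simple/complex layer pairs for 13x49 inputs and 20 symbol classes.

import torch
import torch.nn as nn

class NeocognitronLike(nn.Module):
    def __init__(self, n_classes: int = 20):
        super().__init__()
        def sc_pair(c_in, c_out):   # one "simple" plus one "complex" layer
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),   # S-layer
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, ceil_mode=True),         # C-layer
            )
        self.features = nn.Sequential(sc_pair(1, 8), sc_pair(8, 16), sc_pair(16, 32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):           # x: (batch, 1, 13, 49)
        return self.head(self.features(x))

net = NeocognitronLike()
print(net(torch.zeros(1, 1, 13, 49)).shape)   # -> torch.Size([1, 20])
```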

20.
The derivative-based approach to solving the optimal toll problem is demonstrated in this paper for a medium-scale network. It is shown that although the method works for most small problems with only a few links tolled, it fails to converge for larger-scale problems. This failure led to the development of an alternative genetic algorithm (GA) based approach for finding optimal toll levels for a given set of chargeable links. A variation on the GA-based approach is used to identify the best toll locations, making use of the location indices suggested by Verhoef (2002).
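A minimal GA sketch for evolving toll levels on a fixed set of chargeable links is given below; it is not the paper's algorithm. The objective function is a placeholder where a network assignment model would supply the welfare measure, and the population size, mutation scale and selection scheme are assumptions:

```python
# Hedged GA sketch: evolve a vector of toll levels for N_LINKS links.

import random

N_LINKS, POP, GENS = 3, 30, 40
rng = random.Random(1)

def objective(tolls):
    # Placeholder welfare function with one interior optimum per link;
    # in practice this would come from a traffic assignment model.
    return -sum((t - 2.0 * (i + 1)) ** 2 for i, t in enumerate(tolls))

def mutate(tolls, scale=0.5):
    return [max(0.0, t + rng.gauss(0, scale)) for t in tolls]

def crossover(a, b):
    return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

pop = [[rng.uniform(0, 10) for _ in range(N_LINKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=objective, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print([round(t, 2) for t in max(pop, key=objective)])   # should approach [2, 4, 6]
```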
