20 similar documents found; search took 46 milliseconds
1.
David Beisecker 《Minds and Machines》2006,16(1):43-55
No philosopher has worked harder than Dan Dennett to set the possibility of machine mentality on firm philosophical footing. Dennett’s defense of this possibility has both a positive and a negative thrust. On the positive side, he has developed an account of mental activity that is tailor-made for the attribution of intentional states to purely mechanical contrivances, while on the negative side, he pillories as mystery mongering and skyhook grasping any attempts to erect barriers to the conception of machine mentality by excavating gulfs to keep us “bona fide” thinkers apart from the rest of creation. While I think he’s “won” the rhetorical tilts with his philosophical adversaries, I worry that Dennett’s negative side sometimes gets the better of him, and that this obscures advances that can be made on the positive side of his program. In this paper, I show that Dennett is much too dismissive of original intentionality in particular, and that this notion can be put to good theoretical use after all. Though deployed to distinguish different grades of mentality, it can (and should) be incorporated into a philosophical account of the mind that is recognizably Dennettian in spirit.
2.
Manipulatives—physical learning materials such as cubes or tiles—are prevalent in educational settings across cultures and
have generated substantial research into how actions with physical objects may support children’s learning. The ability to
integrate digital technology into physical objects—so-called ‘digital manipulatives’—has generated excitement over the potential
to create new educational materials. However, without a clear understanding of how actions with physical materials lead to
learning, it is difficult to evaluate or inform designs in this area. This paper is intended to contribute to the development
of effective tangible technologies for children’s learning by summarising key debates about the representational advantages
of manipulatives under two key headings: offloading cognition—where manipulatives may help children by freeing up valuable cognitive resources during problem solving, and conceptual metaphors—where perceptual information or actions with objects have a structural correspondence with more symbolic concepts. The review
also indicates possible limitations of physical objects—most importantly that their symbolic significance is only granted
by the context in which they are used. These arguments are then discussed in light of tangible designs drawing upon the authors’
current research into tangibles and young children’s understanding of number.
3.
In D’Ariano (Philosophy of Quantum Information and Entanglement, Cambridge University Press, Cambridge, UK, 2010), one of
the authors proposed a set of operational postulates to be considered for axiomatizing Quantum Theory. The underlying idea
is to derive Quantum Theory as the mathematical representation of a fair operational framework, i.e. a set of rules which allows the experimenter to make predictions on future events on the basis of suitable tests, e.g. without interference from uncontrollable sources and having local control and low experimental complexity. In addition
to causality, two main postulates have been considered: PFAITH (existence of a pure preparationally faithful state), and FAITHE
(existence of a faithful effect). These postulates have exhibited an unexpected theoretical power, excluding all known nonquantum
probabilistic theories. In the same paper also postulate PURIFY-1 (purifiability of all states) has been introduced, which
later has been reconsidered in the stronger version PURIFY-2 (purifiability of all states unique up to reversible channels
on the purifying system) in Chiribella et al. (Reversible realization of physical processes in probabilistic theories, arXiv:0908.1583).
There, it has been shown that Postulate PURIFY-2, along with causality and local discriminability, narrow the probabilistic
theory to something very close to the quantum one. In the present paper we test the above postulates on some nonquantum probabilistic
models. The first model—the two-box world—is an extension of the Popescu–Rohrlich model (Found Phys, 24:379, 1994), which achieves the greatest violation of the CHSH
inequality compatible with the no-signaling principle. The second model—the two-clock world—is actually a full class of models, all having a disk as the convex set of states of the local system. One of them corresponds to the two-rebit world, namely qubits with a real Hilbert space. The third model—the spin-factor—is a sort of n-dimensional generalization of the clock. Finally, the last model is the classical probabilistic theory. We see how each model violates some of the proposed postulates, when and how teleportation can be achieved, and we analyze
other interesting connections between these postulate violations, along with deep relations between the local and the non-local
structures of the probabilistic theory.
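The maximal CHSH violation attributed above to the Popescu–Rohrlich box is easy to check numerically. The sketch below (function names are illustrative, not from the paper) encodes the standard PR-box definition, P(a, b | x, y) = 1/2 when a ⊕ b = x·y and 0 otherwise, and evaluates the CHSH combination of correlators:

```python
from itertools import product

def pr_box(a, b, x, y):
    """Joint outcome probability P(a, b | x, y) of the Popescu-Rohrlich box:
    the outcomes satisfy a XOR b = x AND y with probability one."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(p, x, y):
    """E(x, y) = sum over outcomes a, b of (-1)^(a XOR b) * P(a, b | x, y)."""
    return sum((-1) ** (a ^ b) * p(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

def chsh(p):
    """CHSH combination E(0,0) + E(0,1) + E(1,0) - E(1,1)."""
    return (correlator(p, 0, 0) + correlator(p, 0, 1)
            + correlator(p, 1, 0) - correlator(p, 1, 1))

print(chsh(pr_box))  # 4.0
```

The value 4 is the algebraic maximum compatible with no-signaling, strictly above the quantum (Tsirelson) bound 2√2 ≈ 2.83 and the classical bound 2.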
4.
In this paper, I explore the implications of Fodor’s attacks on the Computational Theory of Mind (CTM), which get their most recent airing in The Mind Doesn’t Work That Way. I argue that if Fodor is right that the CTM founders on the global nature of abductive inference, then several of the philosophical views about the mind that he has championed over the years founder as well. I focus on Fodor’s accounts of mental causation, psychological explanation, and intentionality.
5.
A reflexive dispositional analysis of mechanistic perception
John Dilworth 《Minds and Machines》2006,16(4):479-493
The field of machine perception is based on standard informational and computational approaches to perception. But naturalistic
informational theories are widely regarded as being inadequate, while purely syntactic computational approaches give no account
of perceptual content. Thus there is a significant need for a novel, purely naturalistic perceptual theory not based on informational
or computational concepts, which could provide a new paradigm for mechanistic perception. Now specifically evolutionary naturalistic approaches to perception have been—perhaps surprisingly—almost completely neglected for this purpose. Arguably
perceptual mechanisms enhance evolutionary fitness by facilitating sensorily mediated causal interactions between an organism Z and items X in its environment. A ‘reflexive’ theory of perception of this kind is outlined, according
to which an organism Z perceives an item X just in case X causes a sensory organ zi of Z to cause Z to acquire a disposition
toward the very same item X that caused the perception. The rest of the paper shows how an intuitively plausible account of mechanistic perception can
be developed and defended in terms of the reflexive theory. Also, a compatibilist option is provided for those who wish to
preserve a distinct informational concept of perception.
6.
Joel Walmsley 《Minds and Machines》2008,18(3):331-348
In this paper, I outline two strands of evidence for the conclusion that the dynamical approach to cognitive science both
seeks and provides covering law explanations. Two of the most successful dynamical models—Kelso’s model of rhythmic finger
movement and Thelen et al.’s model of infant perseverative reaching—can be seen to provide explanations which conform to the
famous explanatory scheme first put forward by Hempel and Oppenheim. In addition, many prominent advocates of the dynamical
approach also express the provision of this kind of explanation as a goal of dynamical cognitive science. I conclude by briefly outlining two consequences. First, dynamical cognitive science’s explanatory
style may strengthen its links to the so-called “situated” approach to cognition, but, secondly, it may also undermine the
widespread intuition that dynamics is related to emergentism in the philosophy of mind.
7.
Paul Schweizer 《Minds and Machines》1994,4(3):259-282
The paper examines the status of conscious presentation with regard to mental content and intentional states. I argue that conscious presentation of mental content should be viewed on the model of a secondary quality, as a subjective effect of the microstructure of an underlying brain state. The brain state is in turn viewed as the instantiation of an abstract computational state, with the result that introspectively accessible content is interpreted as a presentation of the associated computational state realized by the brain. However, if the relation between consciousness and representational content is construed in this manner, then conscious presentation does not provide an adequate foundation for the claim that human mental states are intrinsically intentional. On this model, I argue that functionalism is able to account for (non-intrinsic) intentionality, but not for consciousness, which has implications for the computational paradigm, as well as for Searle's Chinese room thought experiment.
8.
Mark Bishop 《Minds and Machines》2009,19(4):507-516
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers,
especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates
genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented
herein is a simple development of that originally presented in Putnam’s monograph Representation and Reality (Bradford Books, Cambridge, 1988), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q (x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
9.
Elise Bonzon Marie-Christine Lagasquie-Schiex Jérôme Lang Bruno Zanuttini 《Autonomous Agents and Multi-Agent Systems》2009,18(1):1-35
Game theory is a widely used formal model for studying strategic interactions between agents. Boolean games (Harrenstein, Logic in Conflict, PhD thesis, 2004; Harrenstein et al., Theoretical Aspects of Rationality and Knowledge, pp. 287–298, San Francisco: Morgan Kaufmann, 2001) yield a compact representation of 2-player zero-sum static games with
binary preferences: an agent’s strategy consists of a truth assignment of the propositional variables she controls, and a
player’s preferences are expressed by a plain propositional formula. These restrictions (2-player, zero-sum, binary preferences)
strongly limit the expressivity of the framework. We first generalize the framework to n-player games which are not necessarily zero-sum. We give simple characterizations of Nash equilibria and dominated strategies,
and investigate the computational complexity of the associated problems. Then, we relax the last restriction by coupling Boolean
games with a compact preference representation, namely CP-nets.
This article is a revised and extended version of the two conference articles [4] and [3].
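The generalized framework described above can be illustrated with a brute-force search for pure-strategy Nash equilibria in a small non-zero-sum Boolean game. The game itself (its variables and goal formulas) is invented for the example and does not come from the article:

```python
from itertools import product

# Hypothetical 2-player, non-zero-sum Boolean game:
# player 1 controls variable p, player 2 controls variable q.
controls = {1: ['p'], 2: ['q']}
goals = {1: lambda v: v['p'] == v['q'],   # player 1's goal: p <-> q
         2: lambda v: v['p'] or v['q']}   # player 2's goal: p \/ q

variables = [x for vs in controls.values() for x in vs]

def deviations(v, player):
    """All profiles reachable when `player` re-assigns only her own variables."""
    own = controls[player]
    for bits in product([False, True], repeat=len(own)):
        yield {**v, **dict(zip(own, bits))}

def is_nash(v):
    """Pure Nash equilibrium with binary preferences: no player whose goal is
    false under v can make it true by changing only her own variables."""
    return all(goals[p](v) or not any(goals[p](d) for d in deviations(v, p))
               for p in controls)

equilibria = [v for bits in product([False, True], repeat=len(variables))
              for v in [dict(zip(variables, bits))]
              if is_nash(v)]
print(equilibria)  # [{'p': True, 'q': True}]
```

Here only p = q = True is stable: in every other profile some unsatisfied player can flip her own variable and satisfy her goal, which matches the equilibrium characterizations the abstract refers to.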
10.
11.
Bradley Rives 《Minds and Machines》2009,19(2):199-227
It is commonplace in cognitive science that concepts are individuated in terms of the roles they play in the cognitive lives
of thinkers, a view that Jerry Fodor has recently dubbed ‘Concept Pragmatism’. Quinean critics of Pragmatism have long
argued that it founders on its commitment to the analytic/synthetic distinction, since without such a distinction there is
plausibly no way to distinguish constitutive from non-constitutive roles in cognition. This paper considers Fodor’s empirical
arguments against analyticity, and in particular his arguments against lexical decomposition and definitions, and argues that
Concept Pragmatists have two viable options with respect to them. First, Concept Pragmatists can confront them head-on, and
argue that they do not show that lexical items are semantically primitive or that lexical concepts are internally unstructured.
Second, Pragmatists may accept that these arguments show that lexical concepts are atomic, but insist that this need not entail
that Pragmatism is false. For there is a viable version of Concept Pragmatism that does not take lexical items to be semantically
structured or lexical concepts to be internally structured. Adopting a version of Pragmatism that takes meaning relations
to be specified by inference rules, or meaning postulates, allows one to accept the empirical arguments in favor of Concept
Atomism, while at the same time denying that such arguments show that there are no analyticities. The paper concludes by responding
to Fodor’s recent objection that such a version of Concept Pragmatism has unhappy consequences concerning the relation between
concept constitution and concept possession.
12.
Computability and Complexity in Self-assembly
James I. Lathrop Jack H. Lutz Matthew J. Patitz Scott M. Summers 《Theory of Computing Systems》2011,48(3):617-647
This paper explores the impact of geometry on computability and complexity in Winfree’s model of nanoscale self-assembly.
We work in the two-dimensional tile assembly model, i.e., in the discrete Euclidean plane ℤ×ℤ. Our first main theorem says
that there is a roughly quadratic function f such that a set A ⊆ ℤ⁺ is computably enumerable if and only if the set X_A = {(f(n), 0) ∣ n ∈ A}—a simple representation of A as a set of points on the x-axis—self-assembles in Winfree’s sense. In contrast, our second main theorem says that there are decidable sets D ⊆ ℤ×ℤ that do not self-assemble in Winfree’s sense.
13.
We study the bit complexity of the sorting problem for asynchronous distributed systems. We show that for every network with
a tree topology T, every sorting algorithm must send at least bits in the worst case, where L is the set of possible initial values, and Δ_T is the sum of distances from all the vertices to a median of the tree. In addition, we present an algorithm that sends at most bits for such trees. These bounds are tight if either L = Ω(N^{1+ε}) or Δ_T = Ω(N^2). We also present results regarding average distributions. These results suggest that sorting is an inherently nondistributive
problem, since it requires an amount of information transfer that is equal to the concentration of all the data in a single
processor, which then distributes the final results to the whole network. The importance of bit complexity—as opposed to message
complexity—stems also from the fact that, in the lower bound discussion, no assumptions are made as to the nature of the algorithm.
Received May 2, 1994; revised December 22, 1995.
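The quantity Δ_T used in the bounds above, the sum of distances from all vertices to a median of the tree, is simple to compute directly. The sketch below (helper names are ours, not from the paper) brute-forces it by running a BFS from every candidate median, which is adequate for small trees:

```python
from collections import deque

def distances(adj, src):
    """BFS distances from src in an unweighted tree given as an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def delta_T(adj):
    """Delta_T: the minimum over vertices m of the total distance from all
    vertices to m, i.e. the sum of distances to a median of the tree."""
    return min(sum(distances(adj, m).values()) for m in adj)

# A path on 5 vertices: the median is the middle vertex, so
# Delta_T = 2 + 1 + 0 + 1 + 2 = 6.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(delta_T(path5))  # 6
```

On a star, by contrast, the center is the median and Δ_T equals the number of leaves, the shape that makes distributed sorting cheapest in terms of the distance term.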
14.
We present a new approach for designing external graph algorithms and use it to design simple, deterministic and randomized
external algorithms for computing connected components, minimum spanning forests, bottleneck minimum spanning forests, maximal
independent sets (randomized only), and maximal matchings in undirected graphs. Our I/O bounds compete with those of previous approaches. We also introduce a semi-external model, in which the vertex set but
not the edge set of a graph fits in main memory. In this model we give an improved connected components algorithm, using new
results for external grouping and sorting with duplicates. Unlike previous approaches, ours is purely functional—without side
effects—and is thus amenable to standard checkpointing and programming language optimization techniques. This is an important
practical consideration for applications that may take hours to run.
15.
Philippe Schlenker 《Journal of Logic, Language and Information》2007,16(3):325-356
Heim (1983) suggested that the analysis of presupposition projection requires that the classical notion of meanings as truth conditions be replaced with a dynamic notion of meanings as Context Change Potentials. But as several researchers (including Heim herself) later noted, the dynamic framework is insufficiently predictive: although
it allows one to state that, say, the dynamic effect of F and G is to first update a Context Set C with F and then with G (i.e., C[F and G] = C[F][G]), it fails to explain why there couldn’t be a ‘deviant’ conjunction and* which performed these operations in the opposite order (i.e., C[F and* G] = C[G][F]). We provide a formal introduction to
a competing framework, the Transparency theory, which addresses this problem. Unlike dynamic semantics, our analysis is fully classical, i.e., bivalent and static. And it derives the projective behavior of connectives from their bivalent meaning and their syntax. We concentrate on the formal properties
of a simple version of the theory, and we prove that (i) full equivalence with Heim’s results is guaranteed in the propositional
case (Theorem 1), and that (ii) the equivalence can be extended to the quantificational case (for any generalized quantifiers), but only
when certain conditions are met (Theorem 2).
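The dynamic picture sketched above can be made concrete with a toy model. In the following sketch (an assumed formalization for illustration only, not Heim’s or Schlenker’s actual system), a context set is a set of worlds, a sentence pairs a presupposition with an assertion, and update is partial: it is defined only when the presupposition holds throughout the context. The contrast between C[F and G] = C[F][G] and the deviant C[F and* G] = C[G][F] then surfaces as a presupposition failure:

```python
def update(C, sentence):
    """C[S]: defined only if S's presupposition holds in every world of C;
    then keep the worlds where S's assertion is true."""
    presup, assertion = sentence
    if not all(presup(w) for w in C):
        raise ValueError("presupposition failure")
    return {w for w in C if assertion(w)}

# Worlds record whether France has a king and whether that king is bald.
worlds = {("king", "bald"), ("king", "not-bald"), ("no-king", "n/a")}

has_king = (lambda w: True, lambda w: w[0] == "king")              # presupposes nothing
king_bald = (lambda w: w[0] == "king", lambda w: w[1] == "bald")   # presupposes a king

# C[F and G] = C[F][G]: the first conjunct guarantees the second's presupposition.
print(update(update(worlds, has_king), king_bald))   # {('king', 'bald')}

# The deviant 'and*' updates in the opposite order, C[G][F], and fails at once:
try:
    update(update(worlds, king_bald), has_king)
except ValueError as e:
    print(e)  # presupposition failure
```

This is exactly the asymmetry the dynamic framework stipulates rather than derives, which is the explanatory gap the Transparency theory is meant to close.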
16.
This paper addresses one of the least studied, although very important, problems of machine translation—the problem of morphological
mismatches between languages and their handling during transfer. The level at which we assume transfer to be carried out is
the Deep-Syntactic Structure (DSyntS) as proposed in the Meaning-Text Theory. DSyntS is abstract enough to avoid all types of surface morphological divergences.
For the remaining ‘genuine’ divergences between grammatical significations, we propose a morphological transfer model. To
illustrate this model, we apply it to the transfer of grammemes of definiteness and aspect for the language pair Russian–German
and German–Russian, respectively.
17.
Dennett's intended rapprochement between physical realism and intentional relativism fails because it is premised upon conflicting arguments governing the status of design. Indeed, Dennett's remarks on design serve to highlight tensions buried deep within his theory. For inasmuch as Dennett succeeds in objectifying attributions of design, attributions of intentionality readily follow suit, leading to a form of intentional realism. But inasmuch as Dennett is successful in relativizing attributions of design, scientific realism at large is subject to renewed anti-realistic criticism. Dennettian-inspired considerations of adaptationism substantiate the former move towards intentional realism, while considerations of the relativity of artifactual design encourage the latter move towards physical relativism. The ambivalence intrinsic to Dennett's 'mild realism' can be viewed as a function of these two conflicting positions on design, for Dennett can no more avoid objectifying intentionality when he is realistic about design than he can avoid relativizing physical causality when relativistic about design.
18.
Hannu Nurmi 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(3):281-288
One of the founders of social choice theory, Marquis de Condorcet, assigned truth degrees to propositions expressing preferences
over options. Although his work is often discussed in terms of probability theory, it is arguable that his truth degree lends
itself to a more natural interpretation as a fuzzy preference. We shall review some of Condorcet’s results in the light of
this interpretation. The first twentieth-century applications of fuzzy concepts to social choice appeared rather shortly after the introduction by L. A. Zadeh of the concept of fuzzy binary relation in the early 1970s. The early applications dealt with
experimental anomalies and their accountability with the aid of fuzzy preference relations and fuzzy goal states. Considerable
literature now exists on various solution concepts in fuzzy voting games and many important theorems of traditional social
choice theory have found their counterpart in fuzzy social choice. The natural next step would seem to be the design of fuzzy
mechanisms and institutions.
The author wishes to thank—without implicating—Didier Dubois, Javier Montero and Rudolf Seising for useful comments on an
earlier draft.
19.
Graham Pont 《Nexus Network Journal》2005,7(1):76-85
Plato divided science (episteme) into ‘science of action’ (praktike) and ‘science of mere knowing’ (gnostike). His argument is the first known attempt to distinguish what is now recognised as technology from more purely rational science. Aristotle coined the compound term technologia and thereby established this new department of science within
the general system of knowledge. Plato did not develop his novel characterisation of the architect any further, for the ancient
Greeks did not consider architecture a fine or estimable art. The best available source of Greek architectural pedagogy is
the Roman Vitruvius. Graham Pont discusses Vitruvius’s distinction between the ‘practical’ side of architecture (fabrica) and the ‘theoretical’ (ratiocinatio), and examines the mathematical preparation of ancient Greek and Roman architects.
20.
We consider summation of consecutive values φ(v), φ(v + 1), ..., φ(w) of a meromorphic function φ(z), where v, w ∈ ℤ. We assume that φ(z) satisfies a linear difference equation L(y) = 0 with polynomial coefficients, and that a summing operator for L exists (such an operator can be found—if it exists—by the Accurate Summation algorithm or, alternatively, by Gosper’s algorithm when ord L = 1). The notion of bottom summation, which covers the case where φ(z) has poles in ℤ, is introduced.
The text was submitted by the authors in English.
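The role of a summing operator can be illustrated with a textbook telescoping example; the concrete choice of φ is ours, not the authors’. For φ(k) = k·k!, the kind of antidifference Gosper’s algorithm produces is g(k) = k!, since g(k+1) − g(k) = (k+1)! − k! = k·k!, so sums of consecutive values collapse in closed form:

```python
from math import factorial

# Hypothetical worked example of a summing operator:
# phi(k) = k * k! has antidifference g(k) = k!, because g(k+1) - g(k) = k * k!.
phi = lambda k: k * factorial(k)
g = lambda k: factorial(k)

# Verify the telescoping identity on a range of consecutive integer arguments.
assert all(g(k + 1) - g(k) == phi(k) for k in range(0, 20))

def summed(v, w):
    """sum_{k=v}^{w} phi(k), evaluated without iteration via g(w+1) - g(v)."""
    return g(w + 1) - g(v)

print(summed(1, 5))                               # 719
print(summed(1, 5) == sum(phi(k) for k in range(1, 6)))  # True
```

Here φ has no poles in ℤ; the paper’s bottom-summation notion is precisely what extends this telescoping picture to meromorphic φ that do have integer poles.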