Similar Documents
Retrieved 20 similar documents (search time: 36 ms)
1.
This paper aims to provide a basis for renewed talk about use in computing. Four current discourse arenas are described. Different intentions manifest in each arena are linked to failures in translation, as different terminologies cross disciplinary and national boundaries non-reflexively. Analysis of transnational use-discourse dynamics reveals much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to clearer perception of the implications of letting new systems become primary media of social interaction.

2.
Numeration systems, the basis of which is defined by a linear recurrence with integer coefficients, are considered. We give conditions on the recurrence under which the function of normalization, which transforms any representation of an integer into the normal one (obtained by the usual algorithm), can be realized by a finite automaton. Addition is a particular case of normalization. The same questions are discussed for the representation of real numbers in basis θ, where θ is a real number > 1, in connection with symbolic dynamics. In particular, it is shown that if θ is a Pisot number, then normalization and addition in basis θ are computable by a finite automaton. This work has been supported by the PRC Mathématiques et Informatique.
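As a concrete illustration of normalization in a linear-recurrence numeration system, here is a minimal Python sketch (ours, not from the paper, and using greedy re-expansion rather than the automaton construction) for the Fibonacci system: an arbitrary digit string is converted into the normal, greedily obtained representation, which has no two adjacent 1s.

```python
# Minimal sketch (not from the paper): normalization in the Fibonacci
# numeration system U_{n+2} = U_{n+1} + U_n with U_0 = 1, U_1 = 2.
# The normal (greedy / Zeckendorf) representation has no two adjacent 1s.

def value(digits):
    """Value of a digit list d_0, d_1, ... (least-significant digit first)."""
    basis = [1, 2]
    while len(basis) < len(digits):
        basis.append(basis[-1] + basis[-2])
    return sum(d * u for d, u in zip(digits, basis))

def normalize(digits):
    """Greedy (normal) representation of the same integer."""
    n = value(digits)
    basis = [1, 2]
    while basis[-1] <= n:
        basis.append(basis[-1] + basis[-2])
    out = [0] * len(basis)
    for i in reversed(range(len(basis))):   # greedy algorithm
        if basis[i] <= n:
            out[i] = 1
            n -= basis[i]
    return out

print(normalize([0, 2, 1]))   # 0*1 + 2*2 + 1*3 = 7  ->  [0, 1, 0, 1, 0] (2 + 5)
```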

3.
We address the problem of training multilayer perceptrons to instantiate a target function. In particular, we explore the accuracy of the trained network on a test set of previously unseen patterns — the generalisation ability of the trained network. We systematically evaluate alternative strategies designed to improve the generalisation performance. The basic idea is to generate a diverse set of networks, each of which is designed to be an implementation of the target function. We then have a set of trained, alternative versions — a version set. The goal is to achieve useful diversity within this set, and thus generate potential for improved generalisation performance of the set as a whole when compared to the performance of any individual version. We define this notion of useful diversity, we define a metric for it, we explore a number of ways of generating it, and we present the results of an empirical study of a number of strategies for exploiting it to achieve maximum generalisation performance. The strategies encompass statistical measures as well as a selector-net approach which proves to be particularly promising. The selector net is a form of metanet that operates in conjunction with a version set.
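The version-set idea can be mimicked with off-the-shelf tools. The sketch below is ours, not the authors' experimental setup, and includes no selector net: it trains several small scikit-learn MLPs from different random initialisations, uses pairwise disagreement as a crude diversity proxy, and compares majority voting with the individual networks; the dataset and hyperparameters are arbitrary choices.

```python
# Rough sketch of a "version set": same task, several differently initialised
# MLPs, exploited here by simple majority voting (no selector net).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=600, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

versions = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=seed).fit(X_tr, y_tr)
            for seed in range(7)]

preds = np.array([m.predict(X_te) for m in versions])          # shape (7, n_test)
individual = [(p == y_te).mean() for p in preds]                # accuracy of each version
diversity = np.mean([(preds[i] != preds[j]).mean()              # pairwise disagreement
                     for i in range(7) for j in range(i + 1, 7)])
vote = (preds.mean(axis=0) > 0.5).astype(int)                   # majority vote

print("individual accuracies:", np.round(individual, 3))
print("mean pairwise disagreement:", round(float(diversity), 3))
print("majority-vote accuracy:", round(float((vote == y_te).mean()), 3))
```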

4.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is the Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.

5.
In a previous study (P. B. Slater, Eur. Phys. J. B 17, 471 (2000)), several remarkably simple exact results were found, in certain specialized m-dimensional scenarios (m ≤ 4), for the a priori probability that a pair of qubits is unentangled/separable. The measure used was the volume element of the Bures metric (identically one-fourth the statistical distinguishability [SD] metric). Here, making use of a newly-developed (Euler angle) parameterization of the 4 × 4 density matrices of Tilma, Byrd and Sudarshan, we extend the analysis to the complete 15-dimensional convex set (C) of arbitrarily paired qubits, the total SD volume of which is known to be π^8/1680 = π^8/(2^4·3·5·7) ≈ 5.64794. Using advanced quasi-Monte Carlo procedures (scrambled Halton sequences) for numerical integration in this high-dimensional space, we approximately (5.64851) reproduce that value, while obtaining an estimate of 0.416302 for the SD volume of separable states. We conjecture that this is but an approximation to π^6/2310 = π^6/(2·3·5·7·11) ≈ 0.416186. The ratio of the two volumes, 8/(11π^2) ≈ 0.0736881, would then constitute the exact Bures/SD probability of separability. The SD area of the 14-dimensional boundary of C is 142π^7/12285 = 2·71·π^7/(3^3·5·7·13) ≈ 34.911, while we obtain a numerical estimate of 1.75414 for the SD area of the boundary of separable states. PACS: 03.67.-; 03.65.Ud; 02.60.Jh; 02.40.Ky
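The constants quoted above, as reconstructed here, can be verified with a few lines of Python:

```python
# Quick numerical check of the constants quoted above (as reconstructed here).
from math import pi

total_sd_volume = pi**8 / 1680           # SD volume of the full 15-dim set C
conjectured_sep = pi**6 / 2310           # conjectured SD volume of separable states
sep_probability = 8 / (11 * pi**2)       # their ratio
boundary_area   = 142 * pi**7 / 12285    # SD area of the 14-dim boundary of C

print(round(total_sd_volume, 5))   # 5.64794
print(round(conjectured_sep, 6))   # 0.416186
print(round(sep_probability, 7))   # 0.0736881
print(round(boundary_area, 3))     # 34.911
```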

6.
'Racial' disparities among cancers, particularly of the breast and prostate, are something of a mystery. For the US, in the face of slavery and its sequelae, centuries of interbreeding have greatly leavened genetic differences between Blacks and Whites, but marked contrasts in disease prevalence and progression persist. Adjustment for socioeconomic status and lifestyle, while statistically accounting for much of the variance in breast cancer, only begs the question of ultimate causality. Here we propose a more basic biological explanation that extends the theory of immune cognition to include an elaborate tumor-control mechanism constituting the principal selection pressure acting on pathologically mutating cell clones. The interplay between them occurs in the context of an embedding, highly structured system of culturally specific psychosocial stress. A rate-distortion argument finds that this larger system is able to literally write an image of itself onto the disease process, in terms of enhanced risk behaviour, accelerated mutation rate, and depressed mutation control. The dynamics are analogous to punctuated equilibrium in simple evolutionary systems, accounting for the staged nature of disease progression. We conclude that 'social exposures' are, for human populations, far more than incidental cofactors in cancer etiology. Rather, they are part of the basic biology of the disorder. The aphorism that culture is as much a part of human biology as the enamel on our teeth appears literally true at a fundamental cellular level.

7.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the Legs and Arms level, the Coordinator level and the Supervisor level. The Coordinator level is devoted to controlling leg movements and to ensuring the stability of the whole biped. In practice, perturbation effects threaten the equilibrium of the humanoid robot and can only be compensated using a dynamic control strategy, based on dynamic stability studies with a center-of-mass acceleration control and a force distribution on each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. According to studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced. These classes allow one to define time intervals in which the biped is able to maintain its posture. This notion is used for the definition of a reconfigurable high-level control of the robot.

8.
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. However, it was shown recently that a second equally important motivation for using parallel computers exists: Within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class computed in parallel is better than the one obtained on a sequential computer. What represents a better solution depends on the problem under consideration. Thus, for optimization problems, 'better' means closer to optimal. Similarly, for numerical problems, a solution is better than another one if it is more accurate. The present paper continues this line of inquiry by exploring another class enjoying the aforementioned property, namely, cryptographic problems in a real-time setting. In this class, 'better' means more secure. A real-time cryptographic problem is presented for which the parallel solution is provably, considerably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not to demonstrate merely that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further. It is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup—a phenomenon itself originally thought to be impossible.

9.
The “explicit-implicit” distinction
Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant-time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of 'explicit' and 'implicit' is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for 'explicit' and 'implicit' which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the above-mentioned debate between classicists and connectionists.

10.
Modular Control and Coordination of Discrete-Event Systems
In the supervisory control of discrete-event systems based on controllable languages, a standard way to handle state explosion in large systems is by modular supervision: either horizontal (decentralized) or vertical (hierarchical). However, unless all the relevant languages are prefix-closed, a well-known potential hazard with modularity is that of conflict. In decentralized control, modular supervisors that are individually nonblocking for the plant may nevertheless produce blocking, or even deadlock, when operating on-line concurrently. Similarly, a high-level hierarchical supervisor that predicts nonblocking at its aggregated level of abstraction may inadvertently admit blocking in a low-level implementation. In two previous papers, the authors showed that nonblocking hierarchical control can be guaranteed provided high-level aggregation is sufficiently fine; the appropriate conditions were formalized in terms of control structures and observers. In this paper we apply the same technique to decentralized control, when specifications are imposed on local models of the global process; in this way we remove the restriction in some earlier work that the plant and specification (marked) languages be prefix-closed. We then solve a more general problem of coordination: namely, how to determine a high-level coordinator that forestalls conflict in a decentralized architecture when it potentially arises, but is otherwise minimally intrusive on low-level control action. Coordination thus combines both vertical and horizontal modularity. The example of a simple production process is provided as a practical illustration. We conclude with an appraisal of the computational effort involved.
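The conflict phenomenon is easy to reproduce on toy automata. The sketch below is ours, not from the paper: it composes two small "supervisors" that are each trim (nonblocking) in isolation and reports reachable states of their synchronous product from which no marked state can be reached; the tiny example and all names are illustrative only.

```python
# Minimal sketch (not from the paper): two supervisors, each nonblocking on
# its own, whose synchronous product contains a reachable blocking state.
from collections import deque

def product(a1, a2):
    """Synchronous product of automata given as (initial, {state: {event: next}}, marked)."""
    (i1, d1, m1), (i2, d2, m2) = a1, a2
    init = (i1, i2)
    delta, marked = {}, set()
    queue, seen = deque([init]), {init}
    while queue:
        s1, s2 = queue.popleft()
        moves = {e: (d1[s1][e], d2[s2][e])
                 for e in d1.get(s1, {}) if e in d2.get(s2, {})}
        delta[(s1, s2)] = moves
        if s1 in m1 and s2 in m2:
            marked.add((s1, s2))
        for t in moves.values():
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return init, delta, marked

def blocking_states(aut):
    """Reachable states from which no marked state is reachable (co-reachability fixed point)."""
    _, delta, marked = aut
    coreachable = set(marked)
    changed = True
    while changed:
        changed = False
        for s, moves in delta.items():
            if s not in coreachable and any(t in coreachable for t in moves.values()):
                coreachable.add(s)
                changed = True
    return [s for s in delta if s not in coreachable]

# Each supervisor is trim by itself: S1 marks the strings "ab" and "c",
# S2 marks "ac" and "c".  After the shared event 'a' their product deadlocks.
S1 = (0, {0: {'a': 1, 'c': 3}, 1: {'b': 2}, 2: {}, 3: {}}, {2, 3})
S2 = (0, {0: {'a': 1, 'c': 3}, 1: {'c': 2}, 2: {}, 3: {}}, {2, 3})
print(blocking_states(product(S1, S2)))   # [(1, 1)] -- conflict (blocking)
```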

11.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

12.
This paper presents enhancements developed for robust two-and-three-quarter-dimensional meshing, including: (1) automated interval assignment by integer programming for submapped surfaces and volumes, (2) surface submapping, and (3) volume submapping. An introduction to the simplex method, an optimization technique of integer programming, is presented. Simplification of complex geometry is required for the formulation of the integer programming problem. A method of i-j unfolding is defined which explains how irregular geometry can be realigned into a simplified form that is suitable for submap interval assignment solutions. Also presented is the process by which submapping eliminates the decomposition of surface geometry, through a pseudo-decomposition process, producing suitable mapped meshes. The process of submapping involves the creation of interpolated virtual edges, user-defined vertex types and i-j-k space traversals. The creation of interpolated virtual edges is the method by which submapping automatically subdivides surface geometry. The interpolated virtual edge is formulated according to an interpolation scheme using the node discretization of curves on the surface. User-defined vertex types allow direct user control of surface decomposition and interval assignment by modifying i-j-k space traversals. Volume submapping takes the geometry decomposition to a higher level by using mapped virtual surfaces to eliminate decomposition of complex volumes.
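For a single mapped (four-sided) surface, interval assignment reduces to choosing one integer interval count per pair of opposite curve chains, as close as possible to the goal counts implied by curve length and target element size. The toy sketch below is ours; the paper formulates the general submapped case as an integer program, whereas this one-face case can simply be brute-forced.

```python
# Toy interval assignment for one mapped four-sided surface (not the paper's
# general integer-programming formulation): opposite curves must get equal
# integer interval counts, chosen close to the goal counts.
def assign_intervals(goals, max_intervals=100):
    """goals = (g0, g1, g2, g3); curve 0 is opposite curve 2, and 1 opposite 3."""
    def best_shared(ga, gb):
        # single integer count shared by an opposite pair of curves
        return min(range(1, max_intervals + 1),
                   key=lambda n: abs(n - ga) + abs(n - gb))
    n02 = best_shared(goals[0], goals[2])
    n13 = best_shared(goals[1], goals[3])
    return n02, n13, n02, n13

# Goal counts = curve length / target element size, e.g. lengths
# 10.2, 4.9, 11.7, 5.3 with element size 1.0:
print(assign_intervals((10.2, 4.9, 11.7, 5.3)))   # (11, 5, 11, 5)
```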

13.
In this paper we present a modal approach to contrastive logic, the logic of contrasts as these appear in natural-language conjunctions such as 'but'. We use a simple modal logic, which is an extension of the well-known S5 logic, and base the contrastive operators proposed by Francez in [2] on the basic modalities that appear in this logic. We thus obtain a logic for contrastive operators that is more in accord with the tradition of intensional logic, and that, moreover, we argue, has some more natural properties. Particular attention is paid to nesting contrastive operators. We show that nestings of 'but' give quite natural results, and indicate how nestings of other contrastive operators can be done adequately. Finally, we discuss the example of the Hangman's Paradox and some similarities (and differences) with default reasoning. But 'but' us no buts, as they say. Also partially supported by Nijmegen University, Toernooiveld, 6525 ED Nijmegen, The Netherlands.

14.
Horst, Steven. Minds and Machines, 1999, 9(3): 347-381
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the 'vindication' of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy. That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of 'healthy' that is applied to foods reveals that it means 'conducive to having a healthy body', and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.

15.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η > 0 is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
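A small sketch of the admissibility test as reconstructed above (the symbols η, τ, σ are our reconstruction): two clusters of points are η-admissible when the smaller of their diameters is at most 2η times the distance between them.

```python
# Sketch of the eta-admissibility test reconstructed above:
# admissible  <=>  min(diam(tau), diam(sigma)) <= 2 * eta * dist(tau, sigma)
import numpy as np

def diam(points):
    pts = np.asarray(points, dtype=float)
    return max(np.linalg.norm(p - q) for p in pts for q in pts)

def dist(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return min(np.linalg.norm(p - q) for p in a for q in b)

def admissible(tau, sigma, eta):
    return min(diam(tau), diam(sigma)) <= 2.0 * eta * dist(tau, sigma)

tau   = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]   # diam ~ 1.414
sigma = [[4.0, 0.0], [5.0, 0.0], [5.0, 1.0]]   # dist(tau, sigma) = 3
print(admissible(tau, sigma, eta=0.5))   # True:  1.414 <= 2*0.5*3 = 3
print(admissible(tau, sigma, eta=0.2))   # False: 1.414 >  2*0.2*3 = 1.2
```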

16.
We present a new definition of optimality intervals for the parametric right-hand-side linear programming (parametric RHS LP) problem φ(λ) = min{c^T x | Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time. This research was partially funded by the United States Navy Office of Naval Research under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
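As a numerical illustration of φ(λ) (notation as reconstructed here; this is not the paper's algorithm), one can sample the optimal value with an off-the-shelf LP solver and observe the piecewise linear, convex shape and its breakpoints:

```python
# Sample phi(lambda) = min{ c^T x : A x = b + lambda * b_bar, x >= 0 } on a grid.
# Small example with one breakpoint at lambda = 0 (phi is piecewise linear, convex).
import numpy as np
from scipy.optimize import linprog

c     = np.array([1.0, 3.0, 0.0])
A_eq  = np.array([[1.0, 1.0, 0.0],     # x1 + x2 = 1 + lambda
                  [1.0, 0.0, 1.0]])    # x1 + slack = 1  (i.e. x1 <= 1)
b     = np.array([1.0, 1.0])
b_bar = np.array([1.0, 0.0])

def phi(lam):
    res = linprog(c, A_eq=A_eq, b_eq=b + lam * b_bar,
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun if res.success else None   # None where the LP is infeasible

for lam in np.linspace(-1.5, 1.5, 7):
    print(f"lambda = {lam:5.2f}   phi = {phi(lam)}")
```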

17.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and consider the free orthogonality monoid over (X, #). Within its lattice of closed subsets, let C0 denote the subset consisting of all closures of bounded orthogonal sets. We show that C0 is a suborthomodular lattice of the full lattice of closed subsets, and we provide a necessary and sufficient condition for C0 to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

18.
Optimal shape design problems for an elastic body made from physically nonlinear material are presented. Sensitivity analysis is done by differentiating the discrete equations of equilibrium. Numerical examples are included.
Notation:
- U_ad: set of admissible continuous design parameters
- U_ad^h: set of admissible discrete design parameters
- α: function from U_ad^h defining the shape of the body
- α_h: function from U_ad^h defining the approximated shape of the body
- vector of nodal values of α_h
- {α_n}: sequence of functions tending to α
- Ω(α): domain defined by α
- K: bulk modulus
- μ: shear modulus
- penalty parameter for the contact condition
- V(α): space of virtual displacements in Ω(α)
- V_h(α_h): finite element approximation of V(α)
- J: cost functional
- J_h: discretized cost functional
- algebraic form of J_h
- σ(u): stress tensor
- e(u): strain tensor
- K: stiffness matrix
- f: force vector
- b(q): term arising from nonlinear boundary conditions
- q: vector of nodal degrees of freedom
- p: vector of adjoint state variables
- J: Jacobian of the isoparametric mapping
- |J|: determinant of J
- N: vector of shape function values on the parent element
- L: matrix of shape function derivatives on the parent element
- G: matrix of Cartesian derivatives of shape functions
- X: matrix of nodal coordinates of the element
- D: matrix of elastic coefficients
- B: strain-displacement matrix
- Γ_P: part of the boundary where tractions are prescribed
- Γ_u: part of the boundary where displacements are prescribed
- variable part of the boundary
- strain invariant

19.
Existence of coherent extensions of coherent conditional probabilities is one of the major merits of de Finetti's theory of probability. However, coherent extensions which meet some special property, like σ-additivity or disintegrability, can fail to exist. An example is given where a coherent and σ-additive conditional probability cannot be extended preserving both σ-additivity and coherence. Motivated by such an example, conditions are provided in order that a coherent and σ-additive conditional probability admits a coherent and σ-additive extension. Moreover, conditions are given for the existence of disintegrations, possibly σ-additive, of a probability along a partition.

20.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable, in a sense compulsory, choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.
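A minimal sketch of the three-valued reading described above (our illustration; the operations among conditional events and the general measure t are not reproduced): E|H is true on E ∩ H, false on H \ E, and undetermined outside H, and choosing t(E|H) = P(E ∩ H)/P(H) recovers conditional probability as one instance of the general conditional measure.

```python
# Three-valued conditional event E|H and conditional probability as one
# particular choice of the measure t(E|H).  Events are sets of sample points.
def conditional_event(E, H):
    """Return the three-valued indicator of E|H."""
    def indicator(omega):
        if omega not in H:
            return "undetermined"
        return omega in E            # True on E∩H, False on H\E
    return indicator

def t(E, H, P):
    """t(E|H) = P(E∩H) / P(H): conditional probability as a conditional measure."""
    return sum(P[w] for w in E & H) / sum(P[w] for w in H)

# A loaded die: E = "even outcome", H = "outcome at least 4".
P = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.2, 5: 0.2, 6: 0.3}
E, H = {2, 4, 6}, {4, 5, 6}
e_given_h = conditional_event(E, H)
print(e_given_h(6), e_given_h(5), e_given_h(2))   # True False undetermined
print(round(t(E, H, P), 3))                       # 0.5 / 0.7 = 0.714
```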
