Similar Documents
20 similar documents found (search time: 31 ms)
1.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined, and it is proposed that this class be identified with the class of obvious logical inferences. The definition is compared with other approaches. An algorithm implementing the obviousness decision procedure follows directly from the definition.
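The abstract does not spell out the decision procedure itself. As a hedged illustration only, one common way to operationalize "obvious" inference is to accept exactly those conclusions refutable within a small, fixed number of unit-resolution rounds; the `is_obvious` function and the bounded-rounds criterion below are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch: one plausible reading of "obvious inference" is
# derivability by a bounded number of unit-resolution rounds over
# ground clauses. This criterion is an illustrative assumption,
# not the definition proposed in the paper.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def unit_resolve(clauses):
    """One round of unit resolution: drop complements of unit literals."""
    units = {c[0] for c in clauses if len(c) == 1}
    return {tuple(lit for lit in clause if negate(lit) not in units)
            for clause in clauses}

def is_obvious(premises, conclusion, max_steps=2):
    """Premises |- conclusion counts as 'obvious' if refuting
    premises + ~conclusion needs at most max_steps unit rounds."""
    clauses = set(premises) | {(negate(conclusion),)}
    for _ in range(max_steps):
        clauses = unit_resolve(clauses)
        if () in clauses:          # empty clause: contradiction found
            return True
    return False

# Example: from p and p -> q (encoded as ~p ∨ q), q follows obviously.
print(is_obvious([("p",), ("~p", "q")], "q"))  # True
```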

2.
This paper aims to provide a basis for renewed talk about use in computing. Four current discourse arenas are described. The different intentions manifest in each arena are linked to failures in translation, with different terminologies crossing disciplinary and national boundaries non-reflexively. Analysis of transnational use-discourse dynamics reveals much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to a clearer perception of the implications of letting new systems become the primary media of social interaction.

3.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the Legs and Arms level, the Coordinator level and the Supervisor level. The Coordinator level is devoted to controlling leg movements and ensuring the stability of the whole biped. Perturbation effects threaten the equilibrium of the robot and can only be compensated for using a dynamic control strategy, based on dynamic stability studies with center-of-mass acceleration control and a force distribution over each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. From studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced. These classes define time intervals in which the biped is able to maintain its posture, and this notion is used to define a reconfigurable high-level control of the robot.
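The equilibrium classes above bound the time a posture remains tenable. As a hedged illustration (not BIPMAN's actual model), under the free-fall assumption the center of mass moves ballistically, and one simple proxy for such a bound is how long the CoM ground projection stays over the support region, here reduced to a 1-D support interval; all numbers are illustrative.

```python
# Hedged sketch: a crude stand-in for an "equilibrium class" time bound.
# The CoM moves ballistically (free-fall assumption); the posture window
# is the time its ground projection stays inside the support interval,
# capped by the free-fall time to the ground. Illustrative only.
def posture_window(x0, v0, support, g=9.81, z0=1.0):
    """Seconds until the ballistic CoM ground projection exits the
    support interval (x_min, x_max), capped by time to hit the ground."""
    x_min, x_max = support
    t_fall = (2 * z0 / g) ** 0.5          # time for CoM to reach ground
    if v0 > 0:
        t_exit = (x_max - x0) / v0
    elif v0 < 0:
        t_exit = (x_min - x0) / v0
    else:
        t_exit = float("inf")             # projection never moves
    return min(t_exit, t_fall)

# CoM 5 cm inside the forward support edge, drifting forward at 0.3 m/s:
print(round(posture_window(0.10, 0.3, (-0.05, 0.15)), 3))  # ~0.167 s
```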

4.
Modular Control and Coordination of Discrete-Event Systems   (Cited: 1; self-citations: 0; citations by others: 0)
In the supervisory control of discrete-event systems based on controllable languages, a standard way to handle state explosion in large systems is modular supervision, either horizontal (decentralized) or vertical (hierarchical). However, unless all the relevant languages are prefix-closed, a well-known potential hazard of modularity is conflict. In decentralized control, modular supervisors that are individually nonblocking for the plant may nevertheless produce blocking, or even deadlock, when operating online concurrently. Similarly, a high-level hierarchical supervisor that predicts nonblocking at its aggregated level of abstraction may inadvertently admit blocking in a low-level implementation. In two previous papers, the authors showed that nonblocking hierarchical control can be guaranteed provided high-level aggregation is sufficiently fine; the appropriate conditions were formalized in terms of control structures and observers. In this paper we apply the same technique to decentralized control, where specifications are imposed on local models of the global process; in this way we remove the restriction in some earlier work that the plant and specification (marked) languages be prefix-closed. We then solve a more general problem of coordination: namely, how to determine a high-level coordinator that forestalls conflict in a decentralized architecture when it potentially arises, but is otherwise minimally intrusive on low-level control action. Coordination thus combines both vertical and horizontal modularity. A simple production process is provided as a practical illustration. We conclude with an appraisal of the computational effort involved.
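The standard test for the conflict described above is nonblocking of the supervisors' synchronous product: every reachable joint state must still be able to reach a marked state. The tiny automaton encoding below is an illustrative assumption, not the paper's formalism.

```python
# Hedged sketch: conflict test for two modular supervisors via
# nonblocking of their synchronous product. Automata are encoded as
# (initial, {state: {event: next_state}}, marked_set); an event fires
# only when both components enable it. Illustrative encoding only.
def product(a, b):
    init = (a[0], b[0])
    trans, marked, stack, seen = {}, set(), [init], {init}
    while stack:
        (x, y) = s = stack.pop()
        trans[s] = {}
        for e in set(a[1][x]) & set(b[1][y]):   # jointly enabled events
            t = (a[1][x][e], b[1][y][e])
            trans[s][e] = t
            if t not in seen:
                seen.add(t); stack.append(t)
        if x in a[2] and y in b[2]:
            marked.add(s)
    return init, trans, marked

def nonblocking(aut):
    """True iff every reachable state can reach a marked state."""
    _, trans, marked = aut
    ok, changed = set(marked), True
    while changed:
        changed = False
        for s, out in trans.items():
            if s not in ok and any(t in ok for t in out.values()):
                ok.add(s); changed = True
    return all(s in ok for s in trans)

# Each supervisor is individually nonblocking, but each insists on taking
# "its" event first, so jointly they deadlock before the marked state:
s1 = (0, {0: {"a": 1}, 1: {"b": 2}, 2: {}}, {2})
s2 = (0, {0: {"b": 1}, 1: {"a": 2}, 2: {}}, {2})
print(nonblocking(product(s1, s2)))  # False: the pair conflicts
```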

5.
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a 'no touch' group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. 'See and touch' subjects physically picked up a virtual plate possessing solidity and weight, using a technique called tactile augmentation. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. 'See and touch' subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than did the 'no touch' group. In Experiment 2 (a pilot study), subjects physically bit a chocolate bar in one condition, and imagined biting a chocolate bar in another condition. Subjects rated the event as more fun and more realistic when allowed to physically bite the chocolate bar. The results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique, with a large freedom of motion, for adding physical texture, force-feedback cues, smell and taste to virtual objects. Examples of practical applications are discussed. Based in part on 'Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments' by Hunter Hoffman, which appeared in the Proceedings of the IEEE Virtual Reality Annual International Symposium '98, Atlanta, GA, pp 59–63. © 1998 IEEE.

6.
Examples from the history of Automated Theorem Proving are given in order to show that even a seemingly mechanical activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in the cultural pruning of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, focusing on problems of man-machine interaction in the area of computer-assisted proof processing.

7.
'Racial' disparities among cancers, particularly of the breast and prostate, are something of a mystery. For the US, in the face of slavery and its sequelae, centuries of interbreeding have greatly leavened genetic differences between Blacks and Whites, but marked contrasts in disease prevalence and progression persist. Adjustment for socioeconomic status and lifestyle, while statistically accounting for much of the variance in breast cancer, only begs the question of ultimate causality. Here we propose a more basic biological explanation that extends the theory of immune cognition to include an elaborate tumor-control mechanism constituting the principal selection pressure acting on pathologically mutating cell clones. The interplay between them occurs in the context of an embedding, highly structured system of culturally specific psychosocial stress. A rate-distortion argument finds that the larger system is able literally to write an image of itself onto the disease process, in terms of enhanced risk behaviour, accelerated mutation rate, and depressed mutation control. The dynamics are analogous to punctuated equilibrium in simple evolutionary systems, accounting for the staged nature of disease progression. We conclude that 'social exposures' are, for human populations, far more than incidental cofactors in cancer etiology. Rather, they are part of the basic biology of the disorder. The aphorism that culture is as much a part of human biology as the enamel on our teeth appears literally true at a fundamental cellular level.
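For background only: the rate-distortion argument invoked above rests on Shannon's standard rate-distortion function; the paper's specific biological construction is not reproduced here, and the closing gloss is a paraphrase of the abstract's claim.

```latex
% Background only: the standard rate-distortion function. The minimum
% information rate needed to describe a source X within average
% distortion D is
\[
  R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;
      \mathbb{E}[\,d(X,\hat{X})\,] \le D} I(X;\hat{X}),
\]
% so, on the abstract's reading, a larger embedding system can "write an
% image of itself" onto a subsystem exactly insofar as the channel
% between them admits a sufficiently low-distortion encoding.
```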

8.
This paper is a discussion of how the Application Perspective works in practice. We talk about values and attitudes in system development and computer systems, and we illustrate how they have been carried out in practice with examples from the Florence project. The metaphors 'utensil' and 'epaulet' refer to questions about how we conceive of the computer system we are to design in the system development process. Our experience is that, in the scientific community, technical challenges mean making computer systems that may be characterised as epaulets: they have fancy technical features, but are not particularly useful. Making small, simple, but useful computer systems, more like utensils, does not earn as much credit, even if the development process may be just as challenging.

9.
This paper presents enhancements for robust two-and-three-quarter-dimensional meshing, including: (1) automated interval assignment by integer programming for submapped surfaces and volumes, (2) surface submapping, and (3) volume submapping. An introduction to the simplex method, the optimization technique used in the integer programming, is presented. Simplification of complex geometry is required for the formulation of the integer programming problem; a method of i-j unfolding is defined which explains how irregular geometry can be realigned into a simplified form suitable for submap interval assignment solutions. Also presented is the process by which submapping eliminates the decomposition of surface geometry, through a pseudodecomposition process, producing suitable mapped meshes. The process of submapping involves the creation of interpolated virtual edges, user-defined vertex types and i-j-k space traversals. The creation of interpolated virtual edges is the method by which submapping automatically subdivides surface geometry; the interpolated virtual edge is formulated according to an interpolation scheme using the node discretization of curves on the surface. User-defined vertex types allow direct user control of surface decomposition and interval assignment by modifying i-j-k space traversals. Volume submapping takes the geometry decomposition to a higher level by using mapped virtual surfaces to eliminate the decomposition of complex volumes.
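To make the interval-assignment idea concrete: opposite edge chains of a mapped surface must carry equal interval counts while staying close to user-desired sizes, which is a small integer program. The toy objective and brute-force search below are illustrative assumptions; the paper's formulation uses a simplex-based integer-programming technique.

```python
# Hedged sketch: interval assignment as a tiny integer program.
# Opposite sides of a mapped surface must have equal interval sums;
# we minimize total deviation from the desired counts by brute force.
# Illustrative stand-in for the paper's simplex-based formulation.
from itertools import product

def assign_intervals(desired, pairs, lo=1, hi=8):
    """desired: {edge: target interval count};
    pairs: list of (side_A_edges, side_B_edges) that must match.
    Returns a feasible assignment minimizing total deviation."""
    edges = list(desired)
    best, best_cost = None, float("inf")
    for combo in product(range(lo, hi + 1), repeat=len(edges)):
        x = dict(zip(edges, combo))
        if all(sum(x[e] for e in A) == sum(x[e] for e in B)
               for A, B in pairs):
            cost = sum(abs(x[e] - desired[e]) for e in edges)
            if cost < best_cost:
                best, best_cost = x, cost
    return best

# Quad patch: north/south and east/west sides must match for a mapped mesh.
print(assign_intervals({"n": 4, "s": 5, "e": 2, "w": 3},
                       [(("n",), ("s",)), (("e",), ("w",))]))
```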

10.
Horst, Steven. Minds and Machines, 1999, 9(3): 347–381
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy. That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of 'healthy' that is applied to foods reveals that it means 'conducive to having a healthy body'; hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of a demonstration of their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.

11.
The “explicit-implicit” distinction   (Cited: 3; self-citations: 3; citations by others: 0)
Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant-time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of 'explicit' and 'implicit' is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for 'explicit' and 'implicit' which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the above-mentioned debate between classicists and connectionists.

12.
This paper presents an alternative to the 'speech acts with STRIPS' approach to implementing dialogue: a fully implemented AI planner which generates and analyses the semantics of utterances using a single linguistic act for all contexts. Using this act, the planner can model problematic conversational situations, including felicitous and infelicitous instances of bluffing, lying, sarcasm, and stating the obvious. The act has negligible effects, and its precondition can always be proved. Speaker maxims enable the speaker to plan to deceive, as well as to generate implicatures, while hearer maxims enable the hearer to recognise deceptions and interpret implicatures. The planner proceeds by achieving parts of the constructive proof of a goal. It incorporates an epistemic theorem prover, which embodies a deduction model of belief, and a constructive logic.
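One way to picture the single act with an always-provable precondition and negligible effects is as an operator whose only effect is the record that the utterance occurred, leaving deception and implicature to the speaker and hearer maxims. The rendering below, including all names, is an illustrative assumption, not the paper's implementation.

```python
# Hedged sketch: a STRIPS-style rendering of the abstract's single
# linguistic act. The precondition is trivially true in any state; the
# only effect is the record of the utterance itself. Illustrative only.
def utter(state, speaker, hearer, proposition):
    """The single linguistic act: applicable in any state; its sole
    effect is adding the fact that the utterance took place."""
    new_state = set(state)
    new_state.add(("uttered", speaker, hearer, proposition))
    return new_state

state = frozenset()
state = utter(state, "A", "B", "the door is open")
print(("uttered", "A", "B", "the door is open") in state)  # True
```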

13.
This study demonstrates an objective method for evaluating the enhanceability of commercial software. It examines the relationship between enhancement and repair, and suggests that enhancement be considered when developing formal models of defect cause. An alternative definition of defect-prone software is presented that concentrates attention on software requiring unusually high repair effort relative to the magnitude of planned enhancement.
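As a hedged illustration of that alternative definition, one minimal operationalization is a repair-to-enhancement ratio screened against the portfolio median; the ratio, the threshold factor, and the module names below are all assumptions for illustration, not the study's metric.

```python
# Hedged sketch: flag software whose repair load is unusually high
# relative to planned enhancement. The median screen and threshold
# factor are illustrative assumptions, not the paper's definition.
def defect_prone(modules, factor=2.0):
    """modules: {name: (repair_changes, planned_enhancement_changes)}.
    Flags modules whose repair/enhancement ratio exceeds
    factor * the median ratio across the portfolio."""
    ratios = {m: r / max(e, 1) for m, (r, e) in modules.items()}
    median = sorted(ratios.values())[len(ratios) // 2]
    return [m for m, q in ratios.items() if q > factor * median]

# Hypothetical modules: parser needs heavy repair for little enhancement.
print(defect_prone({"parser": (40, 10), "ui": (5, 20), "io": (8, 15)}))
# ['parser']
```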

14.
In terms of Groenendijk and Stokhof's (1984) formalization of exhaustive interpretation, many conversational implicatures can be accounted for. In this paper we justify and generalize this approach. Our justification proceeds by relating their account, via Halpern and Moses' (1984) non-monotonic theory of 'only knowing', to the Gricean maxim of Quality and the first sub-maxim of Quantity. The approach of Groenendijk and Stokhof (1984) is generalized so that it can also account for implicatures that are triggered in subclauses not entailed by the whole complex sentence.
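For readers unfamiliar with the formalization: the rendering below is the textbook form of the exhaustivity operator commonly attributed to Groenendijk and Stokhof (1984); their own presentation differs in detail, so treat this as background, not the paper's generalization.

```latex
% Hedged background: a standard rendering of the exhaustivity operator.
% Applied to a term answer F and question predicate P, exhaustive
% interpretation picks the minimal predicate extensions making the
% answer true:
\[
  \mathrm{exh} \;=\;
  \lambda F\,\lambda P\,.\; F(P) \,\wedge\,
    \neg\exists P'\,\bigl[\,F(P') \wedge P' \subset P\,\bigr]
\]
% Example: answering "Who walks?" with "John" is exhaustified to
% "nobody other than John walks", deriving the implicature as part of
% the exhaustive reading.
```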

15.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is the Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.
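As a concrete point of contact, the strong Kleene three-valued connectives underlie the Kleene algebras mentioned above, with the third value read as an information gap in a partially defined information system; the numeric encoding below is a common convenience choice, not the paper's notation.

```python
# Illustrative sketch: strong Kleene three-valued connectives, with the
# third value U read as an information gap ("not yet defined"). The
# numeric encoding F=0, U=0.5, T=1 is a standard convenience choice
# under which negation is 1-x, conjunction is min, disjunction is max.
F, U, T = 0.0, 0.5, 1.0

def k_not(a):      return 1.0 - a
def k_and(a, b):   return min(a, b)
def k_or(a, b):    return max(a, b)

# A gap propagates unless the defined value already settles the result:
print(k_and(U, F))   # 0.0 -- false however the gap is filled
print(k_or(U, T))    # 1.0 -- true however the gap is filled
print(k_and(U, T))   # 0.5 -- still undetermined
```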

16.
This position paper argues that extending the CSP model to a richer set of tasks, such as constraint optimization, probabilistic inference and decision-theoretic tasks, can be done within a unifying framework called bucket elimination. The framework allows uniform hybrids that combine elimination and conditioning, guided by the problem's structure, and it explicates the tradeoffs between space and time and between time and accuracy.
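A minimal sketch of bucket elimination on a binary CSP: each constraint goes in the bucket of its highest-ordered variable; buckets are processed last-to-first by joining their relations, projecting out the bucket variable, and passing the result down. The tiny relational encoding is an assumption for illustration; the full framework covers optimization and probabilistic inference by swapping the join/project operators.

```python
# Hedged sketch: bucket elimination deciding consistency of a CSP.
from itertools import product

def bucket_elimination(order, domains, constraints):
    """order: elimination order (processed last-to-first);
    constraints: list of (scope_tuple, allowed_tuple_set).
    Returns True iff the CSP has a solution."""
    buckets = {v: [] for v in order}
    for scope, rel in constraints:
        buckets[max(scope, key=order.index)].append((scope, rel))
    for v in reversed(order):
        if not buckets[v]:
            continue
        scope = sorted({u for s, _ in buckets[v] for u in s},
                       key=order.index)
        rest = tuple(u for u in scope if u != v)
        joined = set()
        for combo in product(*(domains[u] for u in scope)):
            asg = dict(zip(scope, combo))
            if all(tuple(asg[u] for u in s) in r for s, r in buckets[v]):
                joined.add(tuple(asg[u] for u in rest))
        if not joined:
            return False          # empty message: no solution
        if rest:                  # pass the projected relation down
            buckets[max(rest, key=order.index)].append((rest, joined))
    return True

# x < y < z over {1,2,3} is consistent; x < y and y < x is not.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
print(bucket_elimination(["x", "y", "z"], doms,
                         [(("x", "y"), lt), (("y", "z"), lt)]))  # True
print(bucket_elimination(["x", "y"], doms,
                         [(("x", "y"), lt), (("y", "x"), lt)]))  # False
```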

17.
Coordinating Multiple Agents via Reinforcement Learning   (Cited: 2; self-citations: 0; citations by others: 2)
In this paper, we use reinforcement learning techniques to solve agent coordination problems in task-oriented environments. The Fuzzy Subjective Task Structure model (FSTS) is presented to model general agent coordination. We show that an agent coordination problem modeled in FSTS is a Decision-Theoretic Planning (DTP) problem, to which reinforcement learning can be applied. Two learning algorithms, coarse-grained and fine-grained, are proposed to address agents' coordination behavior at two different levels. The coarse-grained algorithm operates at one level and tackles hard system constraints; the fine-grained algorithm operates at the other level and handles soft constraints. We argue that it is important to explicitly model and exploit coordination-specific information (particularly system constraints), which underpins the two algorithms and contributes to their effectiveness. The algorithms are formally proved to converge and experimentally shown to be effective.
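The FSTS model itself is not reproduced here; as a generic illustration only of reinforcement learning applied to coordination, two agents below learn through a shared joint-action value table that matching actions pays off. The learning rate and exploration settings are arbitrary assumptions.

```python
# Hedged sketch: a one-step (bandit-style) Q update on joint actions in
# a pure coordination game -- reward only when the agents match. This
# is a generic illustration, not the paper's coarse/fine-grained FSTS
# algorithms.
import random

random.seed(0)
actions = ["left", "right"]
Q = {(a, b): 0.0 for a in actions for b in actions}
alpha, eps = 0.2, 0.1

def reward(a, b):
    return 1.0 if a == b else 0.0   # agents are paid only to coordinate

for _ in range(2000):
    if random.random() < eps:       # occasional joint exploration
        a, b = random.choice(actions), random.choice(actions)
    else:                           # greedy joint action
        a, b = max(Q, key=Q.get)
    r = reward(a, b)
    Q[(a, b)] += alpha * (r - Q[(a, b)])   # incremental value update

print(max(Q, key=Q.get))  # a matched pair, e.g. ('left', 'left')
```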

18.
The temporal property to-always has been proposed for specifying progress properties of concurrent programs. Although the to-always properties are a subset of the leads-to properties for a given program, to-always has more convenient proof rules and in some cases more accurately describes the desired system behavior. In this paper, we give a predicate transformer wta, derive some of its properties, and use it to define to-always. Proof rules for to-always are derived from the properties of wta. We conclude by briefly describing two application areas, nondeterministic data-flow networks and self-stabilizing systems, where to-always properties are useful.
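The abstract does not state the definitions; the following is a hedged reconstruction from its remarks alone (an assumption, not the paper's formalism), chosen so that every to-always property entails the corresponding leads-to property.

```latex
% Hedged reconstruction, assumed from the abstract rather than taken
% from the paper: read "P to-always Q" as "whenever P holds, the
% program eventually reaches a point from which Q holds forever",
% with wta(Q) the weakest such launching predicate:
\[
  P \;\text{to-always}\; Q
  \;\;\equiv\;\;
  P \,\mapsto\, \mathit{wta}(Q),
  \qquad
  \mathit{wta}(Q) \;=\; \text{the weakest } W \text{ such that }
  W \Rightarrow \Box Q .
\]
% Since $\Box Q \Rightarrow Q$, any to-always property entails the
% leads-to property $P \mapsto Q$, matching the abstract's remark that
% to-always properties are a subset of the leads-to properties.
```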

19.
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. However, it was shown recently that a second, equally important motivation for using parallel computers exists: within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class computed in parallel is better than one obtained on a sequential computer. What counts as a better solution depends on the problem under consideration. Thus, for optimization problems, better means closer to optimal; similarly, for numerical problems, a solution is better than another if it is more accurate. The present paper continues this line of inquiry by exploring another class enjoying the aforementioned property, namely cryptographic problems in a real-time setting. In this class, better means more secure. A real-time cryptographic problem is presented for which the parallel solution is provably, considerably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not merely to demonstrate that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further: it is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup, a phenomenon itself originally thought to be impossible.

20.
The Greeks had a ready answer for what happens when the mind suddenly finds the answer to a question for which it had been searching: insight was regarded as a gift of the Muses; its origins were divine. This served to highlight the Greeks' belief that there are some things which are not meant to be scientifically explained. The essence of insight is that it comes from some supernatural source, unpredicted and unfettered; in other words, the origins of insight are unconscious, and hence unexplainable. Wittgenstein felt that, as long as there continues to be a noun expression like 'to have a moment of insight' which functions in the same way as the expression 'to have a hunger pang', thereby inducing us to treat 'moment of insight' as the name of an experience, people will keep stumbling over the same puzzling difficulties and find themselves staring at something which no explanation seems capable of clearing up. To the founders of AI, this argument reeked of obscurantism. The moment of insight, they felt, is indeed a mystery, but it is one that begs to be explained in causal terms. Indeed, the problem of insight served as one of the leading problems in the evolution of AI. Hence anyone interested in the foundations of AI is compelled to examine the manner in which the early pioneers of the field sought to explain the eureka experience. In this paper I look at some of the key conceptual developments which paved the way for Newell and Simon's theory of GPS: the fundamental changes in the notion of the unconscious, namely the emergence of the cognitive unconscious, which took place in the nineteenth and early twentieth centuries. In so doing, I hope to clarify what Wittgenstein may have had in mind in his strictures against mechanist attempts to analyse the nature of insight.
