Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
I begin by tracing some of the confusions regarding levels and reduction to a failure to distinguish two different principles according to which theories can be viewed as hierarchically arranged: epistemic authority and ontological constitution. I then argue that the notion of levels relevant to the debate between symbolic and connectionist paradigms of mental activity answers to neither of these models, but is rather correlative to the hierarchy of functional decompositions of cognitive tasks characteristic of homuncular functionalism. Finally, I suggest that the incommensurability of the intentional and extensional vocabularies constitutes a strong prima facie reason to conclude that there is little likelihood of filling in the story of Bechtel's missing level in such a way as to bridge the gap between such homuncular functionalism and his own model of mechanistic explanation.

3.
4.
5.
Common Lisp [25],[26] includes a dynamic datatype system of moderate complexity, as well as predicates for checking the types of language objects. Additionally, an interesting predicate of two type specifiers, SUBTYPEP, is included in the language. This subtypep predicate provides a mechanism with which to query the Common Lisp type system regarding containment relations among the various built-in and user-defined types. While subtypep is rarely needed by an applications programmer, the efficiency of a Common Lisp implementation can depend critically upon the quality of its subtypep predicate: the run-time system typically calls upon subtypep to decide what sort of representations to use when making arrays; the compiler calls upon subtypep to interpret user declarations, on which efficient data representation and code generation decisions are based. As might be expected due to the complexity of the Common Lisp type system, there may be type containment questions which cannot be decided. In these cases subtypep is expected to return "can't determine", in order to avoid giving an incorrect answer. Unfortunately, most Common Lisp implementations have abused this license by answering "can't determine" in all but the most trivial cases. In particular, most Common Lisp implementations of SUBTYPEP fail on the basic axioms of the Common Lisp type system itself [25],[26]. This situation is particularly embarrassing for Lisp, the premier symbol processing language, in which the implementation of complex symbolic logical operations should be relatively easy. Since subtypep was presumably included in Common Lisp to answer the hard cases of type containment, this lazy evaluation limits the usefulness of an important language feature.
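The containment query the abstract describes can be sketched in miniature. The following is a hypothetical illustration (not Common Lisp's actual SUBTYPEP, and the type names are merely a toy fragment of its numeric hierarchy): a three-valued containment test over a hand-built supertype lattice, where the `(False, False)` answer plays the role of "can't determine".

```python
# Direct supertype edges of a toy type hierarchy (illustrative only).
SUPERTYPES = {
    "fixnum": {"integer"},
    "bignum": {"integer"},
    "integer": {"rational"},
    "ratio": {"rational"},
    "rational": {"real"},
    "float": {"real"},
    "real": {"number"},
    "number": {"t"},
}

# Every type name the lattice knows about.
KNOWN = set(SUPERTYPES) | {s for ss in SUPERTYPES.values() for s in ss}

def subtypep(sub, sup):
    """(True, True): sub is contained in sup; (False, True): provably not;
    (False, False): can't determine (a type name outside the lattice)."""
    if sub not in KNOWN or sup not in KNOWN:
        return (False, False)
    # Walk upward from sub; containment holds iff we reach sup.
    seen, frontier = set(), {sub}
    while frontier:
        if sup in frontier:
            return (True, True)
        seen |= frontier
        frontier = {p for t in frontier for p in SUPERTYPES.get(t, set())} - seen
    return (False, True)
```

A real SUBTYPEP must also handle compound specifiers (ranges, AND/OR/NOT), which is where implementations tend to give up; the point of the sketch is only the three-valued protocol.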

6.
Grid-Computing     
Grid computing, a term introduced in the mid-1990s [1,2], denotes an architecture for distributed systems that builds on the World Wide Web and extends the Web vision. Grid computing integrates the resources of a community, a so-called virtual organization. The hope is that this makes compute- and/or data-intensive tasks tractable which no single organization could solve on its own. A grid is a computing, network, and software infrastructure, built according to the grid-computing approach, for sharing resources with the goal of carrying out the tasks of a virtual organization.

7.
This work was partially supported by the Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo of the CNR under grant no. 89.0052.699.

8.
An implicit approximate factorization (AF) algorithm is constructed, which has the following characteristics.
–  In two dimensions: The scheme is unconditionally stable, has a 3×3 stencil and at steady state has a fourth-order spatial accuracy. The temporal evolution is time accurate either to first or second order through choice of parameter.
–  In three dimensions: The scheme has almost the same properties as in two dimensions except that it is now only conditionally stable, with the stability condition (the CFL number) being dependent on the cell aspect ratios, Δy/Δx and Δz/Δx. The stencil is still compact and fourth-order accuracy at steady state is maintained.
Numerical experiments on a two-dimensional shock-reflection problem show the expected improvement over lower-order schemes, not only in accuracy (measured by the L2 error) but also in the dispersion. It is also shown how the same technique is immediately extendable to Runge-Kutta type schemes, resulting in improved stability in addition to the enhanced accuracy.
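Spatial order of accuracy of the kind claimed above is usually verified empirically from L2 errors on successively refined grids. A minimal sketch of that procedure (using the standard second-order central difference for u'' on u(x) = sin(x) as a stand-in, not the paper's AF scheme):

```python
import numpy as np

def l2_error(n):
    """L2 error of the 3-point central difference for u''(x), u = sin(x)."""
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    u = np.sin(x)
    # central difference for the second derivative at interior points
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    exact = -np.sin(x[1:-1])
    return np.sqrt(h * np.sum((d2u - exact) ** 2))

# Halving h should divide the error by 2**p for a p-th order scheme.
e1, e2 = l2_error(100), l2_error(200)
order = np.log2(e1 / e2)   # observed order p, close to 2 here
```

For the fourth-order scheme of the abstract the same measurement would give an observed order near 4 at steady state.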

9.
10.
Book reviews     
Professor Richard Ennals is author of Artificial Intelligence and Human Institutions (Springer 1991) and co-editor, with Phil Molyneux, of Managing with Information Technology (Springer 1993).

11.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term model is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of model in the area of Software Engineering from a philosophical point of view, an important topic since the confusion directly affects other debates where model is a key concept. We would also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called: Philosophy of Engineering.

12.
This paper proposes the use of accessible information (data/knowledge) to infer inaccessible data in a distributed database system. Inference rules are extracted from databases by means of knowledge discovery techniques. These rules can derive inaccessible data due to a site failure or network partition in a distributed system. Such query answering requires combining incomplete and partial information from multiple sources. The derived answer may be exact or approximate. Our inference process involves two phases to reason with reconstructed information. One phase involves using local rules to infer inaccessible data. A second phase involves merging information from different sites. We shall call such reasoning processes cooperative data inference. Since the derived answer may be incomplete, new algebraic tools are developed for supporting operations on incomplete information. A weak criterion called toleration is introduced for evaluating the inferred results. The conditions that assure the correctness of combining partial results, known as sound inference paths, are developed. A solution is presented for terminating an iterative reasoning process on derived data from multiple knowledge sources. The proposed approach has been implemented on a cooperative distributed database testbed, CoBase, at UCLA. 
The experimental results validate the feasibility of this proposed concept and show that it can significantly improve the availability of distributed knowledge-base/database systems.
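The two-phase process described above (local rules infer inaccessible data, then partial answers from sites are merged) can be sketched in a toy form. Everything here is illustrative, not the CoBase implementation: the `rank`/`salary` attributes and the rule table are invented for the example.

```python
# Rule extracted from historical data: rank -> plausible salary interval.
# Stands in for the inference rules mined by knowledge discovery.
SALARY_RULE = {
    "junior": (40_000, 60_000),
    "senior": (80_000, 120_000),
}

def infer_salary(employee, site_up):
    """Phase 1: answer exactly if the site is reachable, otherwise
    derive an approximate (interval) answer from a local rule."""
    if site_up:
        return ("exact", employee["salary"])
    lo, hi = SALARY_RULE[employee["rank"]]
    return ("approximate", (lo, hi))

def merge(a, b):
    """Phase 2: merge answers from two sites, preferring exact answers."""
    return a if a[0] == "exact" else b

emp = {"rank": "senior", "salary": 95_000}
ans = merge(infer_salary(emp, site_up=False),   # partitioned site
            infer_salary(emp, site_up=True))    # reachable site
```

The "toleration" criterion of the paper would accept the approximate interval answer whenever no exact answer survives the merge.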

13.
While user modelling has produced many research-based systems, comparatively little progress has been made in the development of user modelling components for commercial software systems. The development of minimalist user modelling components which are simplified to provide just enough assistance to a user through a pragmatic adaptive user interface is seen by many as an important step toward this goal. This paper describes the development, implementation, and evaluation of a minimalist user modelling component for the Tax and Investment Management Strategizer (TIMS), a complex commercial software system for financial management. This user modelling component manages several levels of adaptations designed to assist novice users in dealing with the complexity of this software package. Important issues and considerations for the development of user modelling components for commercial software systems and the evaluation of such systems in commercial settings are also discussed.

14.
A fundamental objective of human–computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the right thing at the right time in the right way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time). User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given.

15.
User modelling in interactive explanations
In this paper I consider how user modelling can be used to improve the provision of complex explanations, and discuss in detail the user modelling component of the EDGE explanation system. This allows a user model to be both updated and used in an explanatory dialogue with the user. The model is updated based on the interactions with the user, relationships between concepts and a reviseable expertise level. The model in turn influences the planning of the explanation, allowing a more understandable explanation to be generated. I argue that both user modelling and an interactive style of presentation are important for explanations to be acceptable and understandable, and that each reinforces the other.

16.
User modeling in dialog systems: Potentials and hazards
Alfred Kobsa, AI & Society, 1990, 4(3): 214-231
In order to be capable of exhibiting a wide range of cooperative behavior, a computer-based dialog system must have available assumptions about the current user's goals, plans, background knowledge and (false) beliefs, i.e., maintain a so-called user model. Apart from cooperativity aspects, such a model is also necessary for intelligent coherent dialog behavior in general. This article surveys recent research on the problem of how such a model can be constructed, represented and used by a system during its interaction with the user. Possible applications, as well as potential problems concerning the advisability of application, are then discussed. Finally, a number of guidelines are presented which should be observed in future research to reduce the risk of a potential misuse of user modeling technology.

17.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity, so that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η > 0 is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
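For clusters represented by axis-aligned bounding boxes, the admissibility test above is a one-liner. A minimal sketch, assuming boxes given as `(lows, highs)` coordinate tuples (the representation and helper names are my own, not the paper's):

```python
import math

def diam(box):
    """Diameter of an axis-aligned box (lows, highs)."""
    lo, hi = box
    return math.dist(lo, hi)

def dist(box_a, box_b):
    """Euclidean distance between two axis-aligned boxes:
    combine the per-axis gaps (zero where the boxes overlap)."""
    (alo, ahi), (blo, bhi) = box_a, box_b
    gaps = [max(bl - ah, al - bh, 0.0)
            for al, ah, bl, bh in zip(alo, ahi, blo, bhi)]
    return math.hypot(*gaps)

def admissible(box_a, box_b, eta):
    """min{diam(a), diam(b)} <= 2*eta*dist(a, b) — the eta-admissibility test."""
    return min(diam(box_a), diam(box_b)) <= 2.0 * eta * dist(box_a, box_b)
```

Touching clusters have dist = 0 and are never admissible, forcing further subdivision near the singularity, which is exactly the behaviour the condition is designed to produce.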

18.
We show that a mixed state ρ = Σ_{m,n} a_{mn}|m⟩⟨n| can be realized by an ensemble of pure states {p_k, |ψ_k⟩}. Employing this form, we discuss the relative entropy of entanglement of Schmidt-correlated states. Also, we calculate the distillable entanglement of a class of mixed states. PACS: 03.67.-a; 03.65.Bz; 03.65.Ud
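The ensemble realization ρ = Σ_k p_k |ψ_k⟩⟨ψ_k| is easy to demonstrate numerically. An illustrative NumPy sketch (the particular single-qubit ensemble is chosen for the example, not taken from the paper):

```python
import numpy as np

# A hypothetical ensemble: |0> with probability 1/4, |+> with probability 3/4.
psi0 = np.array([1.0, 0.0], dtype=complex)                 # |0>
psi1 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)  # |+>
ensemble = [(0.25, psi0), (0.75, psi1)]

# rho = sum_k p_k |psi_k><psi_k|
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

trace = np.trace(rho).real                  # should equal 1
hermitian = np.allclose(rho, rho.conj().T)  # should be True
purity = np.trace(rho @ rho).real           # < 1 for a genuinely mixed state
```

Many different ensembles realize the same ρ; the paper's point is to pick a form convenient for computing entanglement measures of Schmidt-correlated states.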

19.
In this paper, we investigate the numerical solution of a model equation u_xx = exp(−u/ε) (and several slightly more general problems) when ε ≪ 1, using the standard central difference scheme on nonuniform grids. In particular, we are interested in the error behaviour in two limiting cases: (i) the total mesh point number N is fixed while the regularization parameter ε → 0, and (ii) ε is fixed while N → ∞. Using a formal analysis, we show that a generalized version of a special piecewise uniform mesh [12] and an adaptive grid based on the equidistribution principle share some common features. The optimal meshes give rates of convergence bounded by |log(ε)| as ε → 0 for given N, which are shown to be sharp by numerical tests.
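The discretisation referred to above, the 3-point central difference for u_xx on a nonuniform grid, is the second derivative of the local quadratic interpolant through (x_{i−1}, x_i, x_{i+1}). A sketch of that operator (illustrative, checked here on u = x², for which the formula is exact):

```python
import numpy as np

def d2_nonuniform(x, u):
    """Central-difference approximation of u_xx at interior points of a
    (possibly nonuniform) grid x, using spacings h- and h+."""
    hm = x[1:-1] - x[:-2]   # backward spacings h_i^-
    hp = x[2:] - x[1:-1]    # forward spacings  h_i^+
    return 2.0 * (u[:-2] / (hm * (hm + hp))
                  - u[1:-1] / (hm * hp)
                  + u[2:] / (hp * (hm + hp)))

x = np.linspace(0.0, 1.0, 11) ** 2   # a graded (nonuniform) mesh
d2 = d2_nonuniform(x, x ** 2)        # u = x^2, so u_xx = 2 everywhere
```

On a uniform grid (h⁻ = h⁺ = h) the formula reduces to the familiar (u_{i−1} − 2u_i + u_{i+1})/h². The paper's question is how to place the points x_i (piecewise uniform versus equidistributed) so the error survives the ε → 0 limit.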

20.
A linear evolution equation for a thermodynamic variable F, odd under time-reversal, is obtained from the exact equation derived by Robertson from the Liouville equation for the information-theoretic phase-space distribution. One obtains an exact expression for τ, the relaxation time for F. For very short correlation times τ_c, τ is time-independent for t > τ_c if C(t) ≡ ⟨F exp(−iLt)F⟩_0, the equilibrium time correlation, decays exponentially for t > τ_c; here L is the Liouville operator. So long as C(t) is such that τ decays rapidly to a steady-state value, the t → ∞ limit of τ agrees with the fluctuation-dissipation theorem in applications to fluid transport.

