Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper discusses some of the key drivers that will enable businesses to operate effectively on-line, and looks at how the notion of a website will become one of an on-line presence which will support the main activities of an organisation. This is placed in the context of the development of the information society, which will allow individuals - as consumers or employees - quick, inexpensive and on-demand access to vast quantities of entertainment, services and information. The paper draws on an example of these developments in Australasia.

2.
We propose and compare two induction principles, called always and sometime, for proving inevitability properties of programs. They are respective formalizations and generalizations of Floyd's invariant-assertions method and Burstall's intermittent-assertions method for proving total correctness of sequential programs, whose methodological advantages and disadvantages have been discussed in a number of previous papers. Both principles are formalized in the abstract setting of arbitrary nondeterministic transition systems and illustrated by appropriate examples. The sometime method is interpreted as a recursive application of the always method; hence always can be considered a special case of sometime. These proof methods are strongly equivalent in the sense that a proof by one induction principle can be rewritten into a proof by the other. The first two theorems of the paper show that an invariant for the always method can be translated into an invariant for the sometime method, even if every recursive application of the latter is required to be of finite length. The third and main theorem of the paper shows how to translate an invariant for the sometime method into an invariant for the always method. It is emphasized that this translation technique follows the idea of transforming recursive programs into iterative ones. Of course, a general translation technique does not imply that the original sometime invariant and the resulting always invariant are equally understandable. This is illustrated by an example.
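A schematic reading of the two principles for a nondeterministic transition system with state set $S$, initial condition $\mathit{Init}$ and target predicate $\mathit{Goal}$ (the notation and side conditions below are an illustrative rendering, not the paper's exact formulation). The always style asks for an invariant $I$ and a variant $\delta : S \to (W,\prec)$ into a well-founded set such that, with free variables universally quantified,

\[
\mathit{Init}(s) \Rightarrow I(s), \qquad
I(s) \wedge \neg\mathit{Goal}(s) \Rightarrow \exists s'.\; s \to s', \qquad
I(s) \wedge \neg\mathit{Goal}(s) \wedge s \to s' \;\Rightarrow\; I(s') \wedge \delta(s') \prec \delta(s),
\]

so that every execution from $\mathit{Init}$ inevitably reaches $\mathit{Goal}$. The sometime (intermittent-assertion) style instead chains lemmas of the form "if sometime $p$ holds then sometime $q$ holds", each proved by well-founded induction, until "sometime $\mathit{Init}$" yields "sometime $\mathit{Goal}$".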

3.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term model is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in the formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of model in the area of Software Engineering from a philosophical point of view, an important topic since the confusion it causes directly affects other debates in which model is a key concept. We also suggest that the need for a philosophical discussion of the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called Philosophy of Engineering.

4.
In this paper I consider how the computer can or should be accepted in Japanese schools. The concept of teaching in Japan stresses learning from a long-term perspective, whereas the instructional technology on which CAI and tutoring systems depend emphasizes step-by-step attainments over relatively short periods. The former is reluctant to use the computer, but both share a Platonic, goal-oriented perspective. The Socratic teacher, however, who intends to activate students' innate disposition to become better, would find another way of teaching and of using the computer.

5.
Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought, because there is no evidence of consciousness besides private experience. The intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (presence or lack of) consciousness - thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution - has the skeptical consequence Harnad accepts: there is in fact no evidence for me that anyone else but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than to give up all claim to know that others think. It would be better to allow that (passing) Turing's Test evidences - even strongly evidences - thought.

6.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type multiplication by a function. We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem.

7.
Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions, and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

8.
We analyze the four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (or part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are the most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators within each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
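For concreteness, a minimal sketch of this kind of pipeline (an illustration only, not the authors' code: the choice of scikit-learn, the number of frequent words, and the toy block and feature handling are all assumptions):

```python
# Illustrative stylometry sketch: split each novel into 5000-word blocks,
# extract two of the style markers (most-frequent-word rates and mean sentence
# length), and attribute blocks with cross-validated discriminant analysis.
from collections import Counter
import re

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

BLOCK = 5000        # words per text block, as in the study
TOP_WORDS = 20      # hypothetical number of most frequent words used as features

def blocks(text, size=BLOCK):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words) - size + 1, size)]

def features(block, vocab):
    words = block.split()
    counts = Counter(words)
    rates = [counts[w] / len(words) for w in vocab]                 # frequent-word marker
    sentences = [s for s in re.split(r"[.!?]+", block) if s.strip()]
    mean_len = float(np.mean([len(s.split()) for s in sentences]))  # sentence-length marker
    return rates + [mean_len]

def block_attribution_scores(novels):
    """novels: dict mapping volume name -> full text of that volume."""
    vocab = [w for w, _ in Counter(" ".join(novels.values()).split()).most_common(TOP_WORDS)]
    X, y = [], []
    for name, text in novels.items():
        for b in blocks(text):
            X.append(features(b, vocab))
            y.append(name)
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, np.array(X), np.array(y), cv=5)     # cross-validated accuracy
```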

9.
Although the top-down development paradigm has successfully been applied to master the complexity of large systems, it has not yet been accepted as a useful paradigm for fault-tolerant system design. This is mainly due to a problem that is sometimes referred to as the lazy programmer paradox. The lazy programmer paradox was already present, and solved, in top-down development methods for non-critical systems. However, the problem has re-appeared in an even more serious variant for critical systems. A few toy examples concerning exception handling in an Ada-like language are used to explain and illustrate the paradox. One possible solution to the problem is to use a specification language in which one can express that certain behaviours of a system are preferred over others. This paper proposes deontic logic as such a specification language; a short and rather informal introduction to deontic logic is therefore included, and a non-trivial example illustrates how deontic logic can be used to solve the lazy programmer paradox. (Supported by NWO/SION Project 612-316-022: Fault Tolerance: Paradigms, Models, Logics, Construction.)
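As a rough illustration of the kind of specification at stake (the operator reading and the concrete norms below are assumptions made here, not formulas taken from the paper): writing $O\varphi$ for "it is obligatory that $\varphi$", a fault-tolerance requirement can separate the ideal behaviour from the prescribed reaction to its violation,

\[
O(\neg \mathit{exception}), \qquad \mathit{exception} \rightarrow O(\mathit{handled}).
\]

A "lazy" implementation that always raises the exception satisfies the second, conditional requirement trivially, but it violates the primary obligation $O(\neg \mathit{exception})$; expressing that the exception-free behaviours are preferred is exactly what the deontic specification adds.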

10.
How to Pass a Turing Test
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax - a study of relations among symbols (including meanings) - and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: the base case of semantic understanding - understanding a domain in terms of itself - is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition.

11.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced into the hypothesis testing process. The entire decision space is partitioned into distinct regions of correct, uncertain and incorrect decisions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor in the system; this virtual sensor makes its decision based on the decisions reached by the set of sensors. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of the individual sensors can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
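A minimal sketch of the two-layer structure (the likelihood tables, the posterior threshold standing in for the paper's Bayes-risk-optimal rules, and the independence assumption are all simplifications made here, not the paper's derivation):

```python
# Toy two-layer fusion with an explicit "uncertain" decision region.
import numpy as np

HYPOTHESES = [0, 1, 2]               # three hypotheses
PRIOR = np.array([1/3, 1/3, 1/3])
UNCERTAIN = -1                       # label for the uncertain region
THRESHOLD = 0.7                      # decide only when the posterior is confident enough

def local_decision(likelihoods):
    """Layer 1: a sensor maps its likelihood vector P(obs | H_k) to a decision."""
    post = likelihoods * PRIOR
    post = post / post.sum()
    k = int(np.argmax(post))
    return k if post[k] >= THRESHOLD else UNCERTAIN

def fused_decision(decisions, confusion):
    """Layer 2: the fusion center is a virtual sensor whose 'observation' is the
    tuple of local decisions; confusion[k][d] = P(a sensor decides d | H_k)."""
    post = PRIOR.copy()
    for d in decisions:                                   # sensors treated as independent
        post = post * np.array([confusion[k][d] for k in HYPOTHESES])
    post = post / post.sum()
    k = int(np.argmax(post))
    return k if post[k] >= THRESHOLD else UNCERTAIN

# toy usage: two sensors observing under H1
confusion = {k: {k: 0.8, UNCERTAIN: 0.1, **{j: 0.05 for j in HYPOTHESES if j != k}}
             for k in HYPOTHESES}
local = [local_decision(np.array([0.1, 0.8, 0.1])),
         local_decision(np.array([0.2, 0.6, 0.2]))]
print(local, fused_decision(local, confusion))            # e.g. [1, -1] 1
```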

12.
For the multiring and hypercube, a method of conflictless realization of an arbitrary permutation of large data items that can be divided into many smaller data blocks was considered, and its high efficiency was demonstrated.

13.
The mainstream of legal theory tends to incorporate unwritten principles into the law. Weighing of principles plays a great role in legal argumentation, inter alia in statutory interpretation. A weighing and balancing of principles and other prima facie reasons is a jump: the inference is not conclusive. To deal with defeasibility and weighing, a jurist needs both belief-revision logic and nonmonotonic logic. The systems of nonmonotonic logic included in the present volume provide logical tools enabling one to speak precisely about various kinds of rules about rules, dealing with such things as the applicability of rules, what is assumed by rules, priority between rules, and the burden of proof. Nonmonotonic logic is an example of an extension of the domain of logic, but the more far-reaching the extension is, the greater the problems it meets. It seems impossible to make a logical reconstruction of the totality of legal argumentation. The lawyers' search for reasons has no obvious end point. Ideally, the search for reasons may end when one arrives at a coherent totality of knowledge; in other words, coherence is the termination condition of reasoning. Both scientific knowledge and knowledge of legal and moral norms progress by trial and error, and one must resort to a certain convention to define what error means. The main difference is, however, that the conventions of science are much more precise than those of legal scholarship. Consequently, the determination of error in legal science is often holistic and circular: the reasons determining that a legal theory is erroneous are not more certain than the contested theory itself. A strict and formal logical analysis cannot give us the full grasp of legal rationality. A weaker logical theory, allowing for nonmonotonic steps, comes closer, at the expense of an inevitable loss of computational efficiency. Coherentist epistemology grasps even more of this rationality, at the expense of a loss of preciseness.
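A toy rendering of "priority between rules" and of nonmonotonic retraction (the rules, priorities and resolution strategy below are invented purely for illustration and are not taken from the volume): adding a fact withdraws a previously derivable conclusion.

```python
# Prioritized defeasible rules: adding the fact "minor" retracts the earlier
# conclusion "enforceable" -- the nonmonotonic step discussed above.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: frozenset     # facts required for the rule to be applicable
    conclusion: str         # a literal, possibly of the form "not X"
    priority: int           # higher number wins a conflict (e.g. lex specialis)

RULES = [
    Rule("general",  frozenset({"contract"}),          "enforceable",     1),
    Rule("specific", frozenset({"contract", "minor"}), "not enforceable", 2),
]

def holds(facts, literal):
    """Defeasibly derive `literal`: some applicable rule concludes it and no
    applicable higher-priority rule concludes its negation."""
    neg = literal[4:] if literal.startswith("not ") else "not " + literal
    pro = [r for r in RULES if r.conclusion == literal and r.premises <= facts]
    con = [r for r in RULES if r.conclusion == neg and r.premises <= facts]
    return bool(pro) and not any(c.priority > max(p.priority for p in pro) for c in con)

print(holds({"contract"}, "enforceable"))            # True
print(holds({"contract", "minor"}, "enforceable"))   # False: defeated by the specific rule
```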

14.
Today, documents and data are likely to be encountered in electronic form. This creates a challenge for the legal system since its rules of evidence evolved to deal with tangible (physical) evidence. Digital evidence differs from tangible evidence in various respects, which raise important issues as to how digital evidence is to be authenticated, ascertained to be reliable and determined to be admissible in criminal or civil proceedings. This article explains how digital evidence differs from traditional physical evidence and reviews the current state of the law with regard to the processes of authentication, reliability and admissibility.

15.
Changes and interrelations among computer usage, computer attitude, and skill transfer of elderly Japanese computer users were investigated over a one-year period. Each participant, aged 60 to 76 years, was provided with one touchscreen-based computer specialized for e-mail handling for 12 months. Participants' usage of the computer, mouse and/or keyboard, and computer attitudes were investigated. The results showed that the Liking factor of the computer attitude scale was a possible predictor of computer usage. The results suggested the existence of four different types of users' adaptation to computers, according to a combination of the Liking and Confidence dimensions of computer attitude.

16.
We construct equivalent localized versions of a formula, adding assumptions simultaneously to various locations, where the particular location determines what is added. Inference rules that take advantage of localized formulas are presented for sequent calculi in which the left-hand side of sequents can be used to accumulate the background assumptions (or contexts) of assertions. The intended application is to the automatic generation of tractable justifying lemmas for substitution operations for interactive proof development systems, especially those concerned with mathematical topics where manipulation of deeply embedded terms is desirable.

17.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of signal, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." (A Neuropsychology Textbook)

18.
Making use of the fact that two-level grammars (TLGs) may be thought of as finite specifications of context-free grammars (CFGs) with infinite sets of productions, known techniques for parsing CFGs are applied to TLGs by first specifying a canonical CFG, called the skeleton grammar, obtained from the cross-reference of the TLG. Under very natural restrictions it can be shown that for these grammar pairs there exists a one-to-one correspondence between leftmost derivations in the TLG and leftmost derivations in its skeleton grammar. With these results a straightforward parsing algorithm for restricted TLGs is given.

19.
The current proposal for applying the so-called fast O(N log^a N) algorithms to multivariate polynomials is that the univariate methods be applied recursively, much in the way more conventional algorithms are used. Since the size of the problems for which a fast algorithm is more efficient than a classical one is rather large, the recursive approach compounds this size completely out of any practical range. The degree homomorphism is proposed here as an alternative to this recursive approach. It is argued that methods based on the degree homomorphism and a fast algorithm may be viable alternatives to more conventional algorithms for certain multivariate problems in the setting of algebraic manipulation. Several such problems are discussed, including polynomial multiplication, powering, division (both exact and with remainder), greatest common divisors and factoring. (This research was supported by NRC Grant A9284.)
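One way to read the degree homomorphism is as a packing substitution that maps a multivariate problem to a single univariate one, so that one fast univariate multiplication does the whole job. The sketch below is an interpretation of that idea for bivariate multiplication, not the paper's algorithm, and a naive convolution stands in for the fast O(N log^a N) routine:

```python
# Multiply two bivariate polynomials via one univariate multiplication.
# Polynomials are dicts mapping (deg_x, deg_y) -> integer coefficient.

def univariate_multiply(a, b):
    # placeholder for a fast method (e.g. FFT-based); naive convolution here
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def multiply_bivariate(p, q):
    # stride larger than any x-degree the product can reach, so packed
    # coefficients of different y-degrees never interfere
    stride = max(i for i, _ in p) + max(i for i, _ in q) + 1
    def pack(poly):
        coeffs = [0] * (stride * (max(j for _, j in poly) + 1))
        for (i, j), c in poly.items():
            coeffs[j * stride + i] += c      # x^i y^j  ->  t^(j*stride + i)
        return coeffs
    packed = univariate_multiply(pack(p), pack(q))
    return {(k % stride, k // stride): c for k, c in enumerate(packed) if c}

# usage: (1 + x*y) * (x + y) = x + y + x^2*y + x*y^2
p = {(0, 0): 1, (1, 1): 1}
q = {(1, 0): 1, (0, 1): 1}
print(multiply_bivariate(p, q))   # {(1, 0): 1, (0, 1): 1, (2, 1): 1, (1, 2): 1}
```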

20.
Modular and platform methods for product family design: literature analysis
After the industrial revolution, the literature has mentioned different principles to allow a better management of production and product life cycle activities. For example, the principle of standardization was first mentioned in the literature by an automobile engineer and put into practice by Henry Ford. Standardization has made possible the configuration of different products using a large set of common components. Another strategy, called modularization, was first mentioned in the literature in the 1960s: modularity proposes to group components of products into modules for practical production objectives. Today, modularity and standardization are promising tools in product family development because they allow a variety of products to be designed using the same modules of components, called platforms. Using platforms allows important family design savings and eases manufacturing. In this paper we give a literature review of the platform concept with a special interest in efficient product family development. The paper is organized as follows: Section 1 mentions the general context of modularity for developing product variety, Section 2 details the importance of product architectures in the literature for modular design, and Section 3 points to some important works that apply modular and platform methodologies. (This revised version was published in June 2005 with corrected page numbers.)
