Similar Documents
20 similar documents found (search time: 31 ms)
1.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type – or part of speech – information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators within each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
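A minimal sketch (ours, not the authors' code) of the block-based style-marker extraction described above; the syllable and part-of-speech markers are omitted here because they need language-specific tooling:

```python
import re
from collections import Counter

def style_markers(text, block_size=5000):
    """Split a text into 5000-word blocks and count simple style markers."""
    words = re.findall(r"[^\W\d_]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    blocks = [words[i:i + block_size] for i in range(0, len(words), block_size)]
    features = []
    for block in blocks:
        counts = Counter(block)
        features.append({
            "most_frequent": counts.most_common(10),                # most frequent words
            "word_len_text": sum(map(len, block)) / len(block),     # word length in text
            "word_len_vocab": sum(map(len, counts)) / len(counts),  # word length in vocabulary
        })
    avg_sentence_len = sum(len(s.split()) for s in sentences) / len(sentences)
    return features, avg_sentence_len
```

Feature vectors of this kind, one per block, are what a discriminant analysis would then separate by author or volume.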

2.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of "signal", raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes. "Blood courses through our veins, and information through our central nervous system." — A Neuropsychology Textbook.

3.
Summary Distributed Mutual Exclusion algorithms have been mainly compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests; indeed, they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Shortest-Job-First, Head-Of-Line, Shortest-Remaining-Job-First, etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines will be inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batch exchange fewer messages than Maekawa-type algorithms per critical section execution. Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 and the Ph.D. degree in Computer Science in 1994, both from the University of Rome "La Sapienza". Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications. Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome "La Sapienza". From 1983 to 1991 he was a researcher at the University of Rome "Tor Vergata". He is currently full professor in Computer Science at the University of Rome "La Sapienza". His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
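A minimal single-process sketch of the gated-batch idea (our illustration; the paper's version is distributed and needs inter-process synchronization): requests arriving while the current batch is being served wait behind the gate, so even a steady stream of high-priority requests cannot starve older ones.

```python
import heapq
import itertools

class GatedBatchQueue:
    """Serve requests batch by batch; within a batch, order by priority."""
    def __init__(self):
        self._seq = itertools.count()  # tie-breaker for equal priorities
        self.batch = []                # heap of requests currently being served
        self.pending = []              # requests that arrived after the gate closed

    def submit(self, priority, request):
        self.pending.append((priority, next(self._seq), request))

    def next_request(self):
        if not self.batch:             # current batch exhausted: open the gate
            self.batch, self.pending = self.pending, []
            heapq.heapify(self.batch)  # serialize the new batch by priority
        return heapq.heappop(self.batch)[2] if self.batch else None
```

Every submitted request is served no later than the end of the batch after the one in progress, which is why the discipline is starvation-free.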

4.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

5.
A New Class of Depth-Size Optimal Parallel Prefix Circuits
Given n values x1, x2, ..., xn and an associative binary operation ∘, the prefix problem is to compute x1 ∘ x2 ∘ ··· ∘ xi for 1 ≤ i ≤ n. Many combinational circuits for solving the prefix problem, called prefix circuits, have been designed. It has been proved that the size s(D(n)) and the depth d(D(n)) of an n-input prefix circuit D(n) satisfy the inequality d(D(n)) + s(D(n)) ≥ 2n − 2; thus, a prefix circuit is depth-size optimal if d(D(n)) + s(D(n)) = 2n − 2. In this paper, we construct a new depth-size optimal prefix circuit SL(n). In addition, we can build depth-size optimal prefix circuits whose depth can be any integer between d(SL(n)) and n − 1. SL(n) has the same maximum fan-out lg n + 1 as Snir's SN(n), but the depth of SL(n) is smaller; thus, SL(n) is faster. Compared with another optimal prefix circuit LYD(n), d(LYD(n)) + 2 ≥ d(SL(n)) ≥ d(LYD(n)). However, LYD(n) may have a fan-out of at most 2 lg n − 2, and the fan-out of LYD(n) is greater than that of SL(n) for almost all n ≥ 12. Because an operation node with greater fan-out occupies more chip area and is slower in VLSI implementation, in most cases SL(n) needs less area and may be faster than LYD(n). Moreover, it is much easier to design SL(n) than LYD(n).
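For reference, the prefix problem itself is easy to state in code; a sequential sketch of the specification (ours, not the paper's circuit construction):

```python
def prefixes(xs, op):
    """Return [x1, x1∘x2, ..., x1∘x2∘···∘xn] for an associative op."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

# Example with the associative operation max:
# prefixes([3, 1, 4, 1, 5], max) -> [3, 3, 4, 4, 5]
```

A prefix circuit computes the same n outputs with a network of operation nodes; its depth is the longest path, its size the node count, and SL(n) makes their sum meet the 2n − 2 lower bound.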

6.
Summary A framework is proposed for the structured specification and verification of database dynamics. In this framework, the conceptual model of a database is a many sorted first order linear tense theory whose proper axioms specify the update and the triggering behaviour of the database. The use of conceptual modelling approaches for structuring such a theory is analysed. Semantic primitives based on the notions of event and process are adopted for modelling the dynamic aspects. Events are used to model both atomic database operations and communication actions (input/output). Nonatomic operations to be performed on the database (transactions) are modelled by processes in terms of trigger/reaction patterns of behaviour. The correctness of the specification is verified by proving that the desired requirements on the evolution of the database are theorems of the conceptual model. Besides the traditional data integrity constraints, requirements of the form "Under condition W, it is guaranteed that the database operation Z will be successfully performed" are also considered. Such liveness requirements have been ignored in the database literature, although they are essential to a complete definition of the database dynamics.

Notation

Classical Logic Symbols (Appendix 1)
- ∀  for all (universal quantifier)
- ∃  exists at least once (existential quantifier)
- ¬  not (negation)
- →  implies (implication)
- ↔  is equivalent to (equivalence)
- ∧  and (conjunction)
- ∨  or (disjunction)

Tense Logic Symbols (Appendix 1)
- G  always in the future
- G0  always in the future and now
- F  sometime in the future
- F0  sometime in the future or now
- H  always in the past
- H0  always in the past and now
- P  sometime in the past
- P0  sometime in the past or now
- X  in the next moment
- Y  in the previous moment
- L  always
- M  sometime

Event Specification Symbols (Sects. 3 and 4.1)
- (x)  immediately after the occurrence of x
- (x)  immediately before the occurrence of x
- (x)  x is enabled, i.e., x may occur next
- {w1} x {w2}  states that if w1 holds before the occurrence of x, then w2 will hold after the occurrence of x (change rule)
- [oa1, ..., oan] x  states that only the object attributes oa1, ..., oan are modifiable by x (scope rule)
- {{w}} x  states that if x may occur next, then w holds (enabling rule)

Process Specification Symbols (Sects. 5.3 and 5.4)
- ::  for causal rules
- for behavioural rules

Transition-Pattern Composition Symbols (Sects. 5.2 and 5.3)
- ;  sequential composition
- ¦  choice composition
- parallel composition
- :|  guarded alternative composition

Location Predicates (Sect. 5.2)
- (z)  immediately after the occurrence of the last event of z (after)
- (z)  immediately before the occurrence of the first event of z (before)
- (z)  after the beginning of z and before the end of z (during)
- (z)  before the occurrence of an event of z (at)

7.
In this paper I consider how the computer can or should be accepted in Japanese schools. The concept of teaching in Japan stresses learning from a long-term perspective, whereas instructional technology, on which CAI and tutoring systems depend, emphasizes step-by-step attainment in a relatively short time. The former is reluctant to use the computer, but both share the Platonic perspective, which is goal-oriented. However, the Socratic teacher, who intends to activate students' innate disposition to become better, would find another way of teaching and of using the computer.

8.
A note on dimensions and factors
In this short note, we discuss several aspects of dimensions and the related construct of factors. We concentrate on those aspects that are relevant to articles in this special issue, especially those dealing with the analysis of the wild animal cases discussed in Berman and Hafner's 1993 ICAIL article. We review the basic ideas about dimensions, as used in HYPO, and point out differences with factors, as used in subsequent systems like CATO. Our goal is to correct certain misconceptions that have arisen over the years.

9.
How to Pass a Turing Test
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax – a study of relations among symbols (including meanings) – and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: the base case of semantic understanding – understanding a domain in terms of itself – is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition.

10.
We study the question of which optimization problems can be optimally or approximately solved by greedy or greedy-like algorithms. For definiteness, we limit the present discussion to some well-studied scheduling problems, although the underlying issues apply in a much more general setting. Of course, the main benefit of greedy algorithms lies in both their conceptual simplicity and their computational efficiency. Based on the experience from online competitive analysis, it seems plausible that we should be able to derive approximation bounds for greedy-like algorithms exploiting only the conceptual simplicity of these algorithms. To this end, we need (and will provide) a precise definition of what we mean by "greedy" and "greedy-like".
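As a concrete instance of the kind of rule the paper formalizes (our example, not necessarily one of the authors'): shortest-processing-time-first is a greedy algorithm that provably minimizes the sum of completion times on a single machine.

```python
def spt_schedule(processing_times):
    """Greedy rule: run shortest jobs first; minimizes total completion time."""
    order = sorted(processing_times)
    total, clock = 0, 0
    for p in order:
        clock += p        # this job finishes at time `clock`
        total += clock
    return order, total

# spt_schedule([4, 1, 3]) -> ([1, 3, 4], 13): completion times 1, 4, 8
```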

11.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

12.
An object-oriented framework in essence defines an architecture for a family of applications or subsystems in a given domain. Every application in the family obeys these architectural restrictions. Such frameworks are typically delivered as collections of inter-dependent abstract classes, together with their concrete subclasses. The abstract classes and their interdependencies implicitly realize the architecture. Developing a new application reusing classes of a framework requires a thorough understanding of the framework architecture. We introduce an approach called Design by Framework Completion, in which an exemplar (an executable visual model for a minimal instantiation of the architecture) is used for documenting frameworks. We propose exploration of exemplars as a means for learning the architecture, following which new applications can be built by replacing selected pieces of the exemplar. For the piece to be replaced, the inheritance lattice around its class provides the space of alternatives; one of these classes may be suitably adapted (say, by sub-classing) to create the new replacement. Design by Framework Completion proposes a paradigm shift when designing in the presence of reusable components: it enables a much simpler top-down approach for creating applications, as opposed to the prevalent "search for components and assemble them bottom-up" strategy. We believe that this paradigm shift is essential because components can only be fitted together if they all obey the same architectural rules that govern the framework.

13.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp. One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of narrowest reference class. Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available. We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
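A toy sketch (ours, using the classic Tweety/penguin example rather than the paper's wild-animal cases) of the probabilistic alternative: preferring the most specific reference class is just ordinary conditioning.

```python
# Toy joint distribution over (bird, penguin, flies); counts play the role of probabilities.
individuals = (
    [{"bird": True, "penguin": False, "flies": True}] * 90 +   # typical birds
    [{"bird": True, "penguin": True,  "flies": False}] * 10    # penguins
)

def p_flies(**conditions):
    """Conditional probability P(flies | conditions)."""
    matching = [x for x in individuals
                if all(x[k] == v for k, v in conditions.items())]
    return sum(x["flies"] for x in matching) / len(matching)

print(p_flies(bird=True))     # 0.9 -- the "birds fly" default
print(p_flies(penguin=True))  # 0.0 -- more specific evidence overrides it
```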

14.
A multicriterion optimization method is proposed for complex systems with parameters ranked by descending importance. The method requires weaker expert estimates for choosing an optimal alternative from the set of equally good solutions, by formally specifying the functional dependence between the weights of the ranked parameters. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 167–170, November–December, 1991.

15.
Prequential model selection and delete-one cross-validation are data-driven methodologies for choosing between rival models on the basis of their predictive abilities. For a given set of observations, the predictive ability of a model is measured by the model's accumulated prediction error and by the model's average-out-of-sample prediction error, respectively, for prequential model selection and for cross-validation. In this paper, given i.i.d. observations, we propose nonparametric regression estimators—based on neural networks—that select the number of hidden units (or neurons) using either prequential model selection or delete-one cross-validation. As our main contributions: (i) we establish rates of convergence for the integrated mean-squared errors in estimating the regression function using off-line or batch versions of the proposed estimators and (ii) we establish rates of convergence for the time-averaged expected prediction errors in using on-line versions of the proposed estimators. We also present computer simulations (i) empirically validating the proposed estimators and (ii) empirically comparing the proposed estimators with certain novel prequential and cross-validated mixture regression estimators.
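A minimal sketch of delete-one (leave-one-out) cross-validation for selecting the number of hidden units, using scikit-learn as a stand-in (our choice of library; the paper defines its estimators abstractly and proves convergence rates for them):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)

scores = {}
for k in (1, 2, 4, 8):  # candidate numbers of hidden units
    net = MLPRegressor(hidden_layer_sizes=(k,), max_iter=5000, random_state=0)
    # average out-of-sample squared prediction error over n single-point test sets
    mse = -cross_val_score(net, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
    scores[k] = mse

best = min(scores, key=scores.get)
print(scores, "-> choose", best, "hidden units")
```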

16.
A weaving W is a simple arrangement of lines (or line segments) in the plane together with a binary relation specifying which line is above the other. A system of lines (or line segments) in 3-space is called a realization of W if its projection into the plane is W and the above-below relations between the lines respect the specifications. Two weavings are equivalent if the underlying arrangements of lines are combinatorially equivalent and the above-below relations are the same. An equivalence class of weavings is said to be a weaving pattern. A weaving pattern is realizable if at least one element of the equivalence class has a three-dimensional realization. A weaving (pattern) W is called perfect if, along each line (line segment) of W, the lines intersecting it are alternately above and below. We prove that (i) a perfect weaving pattern of n lines is realizable if and only if n ≤ 3, (ii) a perfect m by n weaving pattern of line segments (in a grid-like fashion) is realizable if and only if min(m, n) ≤ 3, and (iii) if n is sufficiently large, then almost all weaving patterns of n lines are nonrealizable. János Pach has been supported in part by Hungarian NFSR Grant 1812, NSF Grant CCR-8901484, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), a National Science Foundation Science and Technology Center, under NSF Grant STC88-09648. Richard Pollack has been supported in part by NSA Grant MDA904-89-H-2030, NSF Grants DMS-85-01947 and CCR-8901484, and DIMACS. Emo Welzl has been supported in part by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM) and DIMACS.

17.
Summary Making use of the fact that two-level grammars (TLGs) may be thought of as finite specifications of context-free grammars (CFGs) with infinite sets of productions, known techniques for parsing CFGs are applied to TLGs by first specifying a canonical CFG G′ — called the skeleton grammar — obtained from the cross-reference of the TLG G. Under very natural restrictions it can be shown that for these grammar pairs (G, G′) there exists a 1–1 correspondence between leftmost derivations in G and leftmost derivations in G′. With these results a straightforward parsing algorithm for restricted TLGs is given.

18.
Consideration was given to open networks of single-server nodes of two types. Nodes of the first type are characterized by bypasses; those of the second type, by the possibility of arrival of negative customers. Positive customers are served in the nodes according to the FCFS discipline. Positive and negative customers arrive in Poisson ("simplest") flows. Invariance of the stationary distribution of network states to the functional form of the service-time distributions in the nodes of the first type, under fixed first moments of these distributions, was established.

19.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced in the hypothesis testing process. The entire decision space is partitioned into distinct correct, uncertain, and incorrect regions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor to the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of individual sensors can be quantified by the probabilities of correct, incorrect, and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
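A minimal single-sensor sketch of a decision rule with an uncertain region (a Chow-style rejection rule; an illustration of the idea, not the paper's multisensor derivation):

```python
import numpy as np

def decide(likelihoods, priors, threshold=0.8):
    """Pick the MAP hypothesis, or report 'uncertain' if its posterior is low."""
    post = np.asarray(likelihoods) * np.asarray(priors)
    post = post / post.sum()
    k = int(np.argmax(post))
    return k if post[k] >= threshold else "uncertain"

# Three hypotheses; the observation favors H1, but not decisively:
print(decide([0.5, 0.9, 0.4], [1/3, 1/3, 1/3]))  # -> 'uncertain'
print(decide([0.1, 0.9, 0.1], [1/3, 1/3, 1/3]))  # -> 1
```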

20.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990-4, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text-samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating maximum erosion of the Clinic's findings to include "five of our 54 tests", which "can amount, at most, to half of one percent" (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity and replicability that have dogged the Clinic's work from the outset.
