Similar Documents
 20 similar documents retrieved (search time: 31 ms).
1.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources, but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. Finally, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity of O(n³), performs very well on highly cumulative instances.
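A minimal sketch (not from the paper) of the highly disjunctive / highly cumulative distinction described above: on a cumulative resource of capacity C, two activities can never overlap when their combined requirement exceeds C, so the fraction of such pairs indicates how disjunctive an instance is. The capacities and requirements below are illustrative assumptions.

```python
from itertools import combinations

def disjunction_ratio(requirements, capacity):
    """Fraction of activity pairs that can never execute in parallel
    on a cumulative resource of the given capacity."""
    pairs = list(combinations(requirements, 2))
    blocked = sum(1 for a, b in pairs if a + b > capacity)
    return blocked / len(pairs) if pairs else 0.0

# many activities needing more than half the capacity -> highly disjunctive
print(disjunction_ratio([6, 7, 6, 8, 5], capacity=10))   # 1.0
# small requirements -> highly cumulative
print(disjunction_ratio([2, 3, 2, 1, 3], capacity=10))   # 0.0
```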

2.
The concept of information is virtually ubiquitous in contemporary cognitive science. It is claimed to be processed (in cognitivist theories of perception and comprehension), stored (in cognitivist theories of memory and recognition), and otherwise manipulated and transformed by the human central nervous system. Fred Dretske's extensive philosophical defense of a theory of informational content (semantic information) based upon the Shannon-Weaver formal theory of information is subjected to critical scrutiny. A major difficulty is identified in Dretske's equivocations in the use of the concept of a signal bearing informational content. Gibson's alternative conception of information (construed as analog by Dretske), while avoiding many of the problems located in the conventional use of signal, raises different but equally serious questions. It is proposed that, taken literally, the human CNS does not extract or process information at all; rather, whatever information is construed as locatable in the CNS is information only for an observer-theorist and only for certain purposes.
"Blood courses through our veins, and information through our central nervous system." — A Neuropsychology Textbook

3.
Given a finite set E ⊂ Rⁿ, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation is given to the problem. The proposed algorithm operates on groups of points, called samplings (samplings may also be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

4.
Summary. Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

5.
In order to give new insight into fundamental problems of quantum mechanics, relativity, and mind, we propose a world model suggested by the monadology of Leibniz. The world is assumed to consist of monads which have their individuality and whose primary attribute is a space-time frame rather than a position in space-time. Each monad has the freedom to change its frame. Accompanying this change, the world time is advanced and the world state departs from unitary evolution. This model explains not only the measurement process of quantum mechanics but also the passing of "now" and the origin of free will.

6.
Prequential model selection and delete-one cross-validation are data-driven methodologies for choosing between rival models on the basis of their predictive abilities. For a given set of observations, the predictive ability of a model is measured by the model's accumulated prediction error and by the model's average out-of-sample prediction error, respectively, for prequential model selection and for cross-validation. In this paper, given i.i.d. observations, we propose nonparametric regression estimators, based on neural networks, that select the number of hidden units (or neurons) using either prequential model selection or delete-one cross-validation. As our main contributions: (i) we establish rates of convergence for the integrated mean-squared errors in estimating the regression function using off-line or batch versions of the proposed estimators, and (ii) we establish rates of convergence for the time-averaged expected prediction errors in using on-line versions of the proposed estimators. We also present computer simulations (i) empirically validating the proposed estimators and (ii) empirically comparing the proposed estimators with certain novel prequential and cross-validated mixture regression estimators.
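A minimal sketch (not the authors' estimators) contrasting the two selection criteria named above: the accumulated one-step-ahead prediction error (prequential) and the average out-of-sample error over leave-one-out splits (delete-one cross-validation), used here to pick the number of hidden units. scikit-learn's MLPRegressor stands in for the paper's neural-network regression estimators, and the data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 120
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(n)

def fit(h, xs, ys):
    """Fit a one-hidden-layer network with h hidden units."""
    return MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000,
                        random_state=0).fit(xs, ys)

def prequential_error(h, warmup=20):
    """Accumulated squared one-step-ahead prediction error."""
    err = 0.0
    for t in range(warmup, n):
        net = fit(h, x[:t], y[:t])              # train on the data seen so far
        err += (net.predict(x[t:t + 1])[0] - y[t]) ** 2
    return err

def delete_one_cv_error(h):
    """Average squared out-of-sample error over leave-one-out splits."""
    err = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        net = fit(h, x[mask], y[mask])
        err += (net.predict(x[i:i + 1])[0] - y[i]) ** 2
    return err / n

for h in (1, 2, 4, 8):                          # candidate numbers of hidden units
    print(h, prequential_error(h), delete_one_cv_error(h))
```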

7.
A memory-coupled multiprocessor—well suited to bit-wise operation—can be utilized to operate as a 1024-item cellular processing unit. Each processor works on 32 bits, and 32 such processors are combined into a multiprocessor. The information is stored in the vertical direction, as defined and described in earlier papers [1] on vertical processing. The two-dimensional array (32 × 32 bits) is composed of the 32-bit machine words of the coupled processors on the one hand and of the 32 processors in a nearest-neighbour topology on the other hand. The bit-wise cellular operation at one of the 1024 points is realized by the program of the processor—possibly assisted by appropriate microprogram sequences.
Dedicated to Professor Willard L. Miranker on the occasion of his 60th birthday.
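A toy illustration (not the paper's code) of the vertical storage scheme described above: a 32 × 32 bit field is kept as 32 machine words, so a single word-wide Boolean operation updates 32 cells at once. The XOR-of-neighbours rule is an arbitrary stand-in for a bit-wise cellular operation.

```python
MASK = (1 << 32) - 1          # one 32-bit machine word

def cellular_step(grid):
    """One toy cellular update on a 32 x 32 bit field stored as 32 words:
    each cell becomes the XOR of its four nearest neighbours (wrap-around),
    computed with whole-word operations rather than per-cell loops."""
    out = []
    for r in range(32):
        up    = grid[(r - 1) % 32]
        down  = grid[(r + 1) % 32]
        left  = ((grid[r] << 1) | (grid[r] >> 31)) & MASK   # rotate the row left
        right = ((grid[r] >> 1) | (grid[r] << 31)) & MASK   # rotate the row right
        out.append((up ^ down ^ left ^ right) & MASK)
    return out

grid = [0] * 32
grid[16] = 1 << 16            # a single live cell in the centre
grid = cellular_step(grid)    # after one step, its four neighbours are set
```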

8.
Though there are good reasons to improve instruction in pronunciation, the teaching of pronunciation has lost popularity among language teachers. This is because the traditional indirect analyses of sounds according to places and manners of articulation are clumsy when applied to classroom teaching. By shifting the focus of instruction to the direct feedback of real-time acoustic analysis in the visual mode, instructors are freed from the complex and often unproductive terminology of articulatory phonetics, and students are freed from the burden of translating instructors' general comments such as "try again" or "repeat after me" into plans for specific changes.
Garry Molholt is Assistant Professor of Linguistics and English as a Second Language, and Coordinator of Computer Assisted Instruction. His research interests are the applications of speech processing to instruction in the acquisition of second-language phonology. He has published "Computer Assisted Instruction in Pronunciation for Chinese Speakers of American English" in TESOL Quarterly, and (with Ari Presler) "Correlation between Human and Machine Ratings of Test of Spoken English Reading Passages" in Technology and Language Testing.

9.
We analyze the four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type – or part of speech – information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
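A small sketch (not the authors' pipeline) of how two of the six style markers, sentence length in words and word length in text, can be computed over five-thousand-word blocks; the file name in the usage comment is a placeholder.

```python
import re
from statistics import mean

def word_blocks(text, size=5000):
    """Group consecutive sentences into blocks of roughly `size` words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    block, count = [], 0
    for s in sentences:
        words = re.findall(r"\w+", s.lower())
        block.append(words)
        count += len(words)
        if count >= size:
            yield block
            block, count = [], 0

def style_markers(text):
    """Per-block mean sentence length (in words) and mean word length."""
    rows = []
    for block in word_blocks(text):
        rows.append({
            "mean_sentence_len": mean(len(s) for s in block),
            "mean_word_len": mean(len(w) for s in block for w in s),
        })
    return rows

# usage with a hypothetical plain-text edition of one volume:
# markers = style_markers(open("ince_memed_1.txt", encoding="utf-8").read())
```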

10.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c,p) or ist(c,¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
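A compact formalization (our notation, offered as an assumed reading rather than the paper's own formula) of the nonbivalence claim above:

```latex
\neg\,\forall c\,\forall p\,\bigl(\mathrm{ist}(c,p) \lor \mathrm{ist}(c,\neg p)\bigr)
```

That is, for some problematic context c and proposition p, neither ist(c,p) nor ist(c,¬p) need hold.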

11.
Summary. Many reductions among combinatorial problems are known in the context of NP-completeness. These reductions preserve the optimality of solutions. However, they may change the relative error of approximate solutions dramatically. In this paper, we apply a new type of reduction, called continuous reductions. When one problem is continuously reduced to another, any approximation algorithm for the latter problem can be transformed into an approximation algorithm for the former. Moreover, the performance ratio is preserved up to a constant factor. We relate the problem Minimum Number of Inverters in CMOS-Circuits, which arises in the context of logic synthesis, to several classical combinatorial problems such as Maximum Independent Set and Deletion of a Minimum Number of Vertices (Edges) in Order to Obtain a Bipartite (Partial) Subgraph.
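A hedged formal reading (our notation, not taken from the paper) of "the performance ratio is preserved up to a constant factor": if problem Π₁ continuously reduces to Π₂ and an approximation algorithm for Π₂ achieves performance ratio R, then the induced algorithm for Π₁ satisfies

```latex
R_{\Pi_1} \;\le\; c \cdot R_{\Pi_2} \qquad \text{for some constant } c \ge 1 \text{ independent of the instance.}
```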

12.
Summary. Directed node-label controlled graph grammars (DNLC grammars) are sequential graph-rewriting systems. In a direct derivation step of a DNLC grammar a single node is rewritten. Both the rewriting of a node and the embedding of a daughter graph in a host graph are controlled by the labels of nodes only. We study the use of these grammars to define string languages. In particular, we provide a characterization of the class of context-free string languages in terms of DNLC grammars.

13.
Ward Elliott (from 1987) and Robert Valenza (from 1989) set out to find the true Shakespeare from among 37 anti-Stratfordian Claimants. As directors of the Claremont Shakespeare Authorship Clinic, Elliott and Valenza developed novel attributional tests, from which they concluded that most Claimants are not-Shakespeare. From 1990 to 1994, Elliott and Valenza developed tests purporting further to reject much of the Shakespeare canon as not-Shakespeare (1996a). Foster (1996b) details extensive and persistent flaws in the Clinic's work: data were collected haphazardly; canonical and comparative text-samples were chronologically mismatched; procedural controls for genre, stanzaic structure, and date were lacking. Elliott and Valenza counter by estimating maximum erosion of the Clinic's findings to include "five of our 54 tests, which can amount, at most, to half of one percent" (1998). This essay provides a brief history, showing why the Clinic foundered. Examining several of the Clinic's representative tests, I evaluate claims that Elliott and Valenza continue to make for their methodology. A final section addresses doubts about accuracy, validity, and replicability that have dogged the Clinic's work from the outset.

14.
The goals of public education, as well as conceptions of human intelligence and learning, are undergoing a transformation through the application of military-sponsored information technologies and information-processing models of human thought. Recent emphases in education on thinking skills, learning strategies, and computer-based technologies are the latest episodes in the postwar military agenda to engineer intelligent components, human and artificial, for the optimal performance of complex technological systems. Public education serves increasingly as a human factors laboratory and production site for this military enterprise, whose high-performance technologies and command-and-control paradigms have also played central roles in the emergence of the information economy.
"Our final hope is to develop the brain as a natural resource ... Human intelligence will be the weapon of the future." (Luis Alberto Machado)
This paper will also appear, under the title "Mental Material", in Cyborg Worlds: The Military Information Society, eds. Les Levidow and Kevin Robins, London: Free Association Press (in press).

15.
Summary. When searching unsuccessfully for a fixed element in a random binary search tree, the number of comparisons made whose result is "less" is independent of the number of comparisons whose result is "greater". This principle can be used to compute the mean and variance of the total number of comparisons involved in both a successful and an unsuccessful search.
This work was supported in part by a Hertz Graduate Fellowship and by the Xerox Palo Alto Research Center.
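A simulation sketch (ours, not from the paper) that builds random binary search trees, searches unsuccessfully for a fixed element, and tallies the "less" and "greater" comparison counts separately, so that their sample covariance and the mean and variance of the total can be inspected empirically.

```python
import random

def insert(node, key):
    """Insert key into a BST stored as nested dicts."""
    if node is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < node["key"] else "right"
    node[side] = insert(node[side], key)
    return node

def unsuccessful_search(node, target):
    """Count comparisons with result 'less' and 'greater' while searching."""
    less = greater = 0
    while node is not None:
        if target < node["key"]:
            less, node = less + 1, node["left"]
        else:
            greater, node = greater + 1, node["right"]
    return less, greater

n, trials, target = 200, 2000, 0.5           # the target key is never inserted
samples = []
for _ in range(trials):
    tree = None
    for _ in range(n):
        tree = insert(tree, random.random())
    samples.append(unsuccessful_search(tree, target))

less_mean = sum(l for l, _ in samples) / trials
greater_mean = sum(g for _, g in samples) / trials
totals = [l + g for l, g in samples]
total_mean = sum(totals) / trials
total_var = sum((t - total_mean) ** 2 for t in totals) / (trials - 1)
cov = sum((l - less_mean) * (g - greater_mean) for l, g in samples) / (trials - 1)
print(total_mean, total_var, cov)            # covariance should be close to zero
```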

16.
Consideration was given to open networks of single-server nodes of two types. The nodes of the first type are characterized by bypasses; those of the second type, by the possible arrival of negative customers. Positive customers are serviced in the nodes according to the FCFS discipline. Positive and negative customers arrive in Poisson (simplest) flows. Invariance of the stationary distribution of network states with respect to the functional form of the service-time distributions in the nodes of the first type, for fixed first moments of these distributions, is established.

17.
In studying the surjectivity of set-valued mappings, a modification of the acute-angle lemma (or the equilibrium theorem) is used. This allows one to weaken the coerciveness condition. Some applications to differential equations (inclusions) with Neumann boundary conditions are considered in the Sobolev spaces W_p^1(Ω), in which operators are used that are not coercive in the classical sense.

18.
Through key examples and constructs, exact and approximate, complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation.
This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.

19.
In this paper, a novel neural network approach to real-time collision-free path planning of robot manipulators in a nonstationary environment is proposed, which is based on a biologically inspired neural network model for dynamic trajectory generation of a point mobile robot. The state space of the proposed neural network is the joint space of the robot manipulators, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The real-time robot path is planned through the varying neural activity landscape that represents the dynamic environment. The proposed model for robot path planning with safety consideration is capable of planning a real-time comfortable path without suffering from the "too close" or "too far" problems. The model algorithm is computationally efficient: the computational complexity depends linearly on the neural network size. The effectiveness and efficiency are demonstrated through simulation studies.
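For orientation, a standard Grossberg-style shunting equation of the kind the abstract refers to is sketched below; the constants and connection terms are generic assumptions, and the paper's exact formulation may differ. Here x_i is the activity of the neuron associated with one point of the joint-space grid:

```latex
\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\Bigl([I_i]^{+} + \sum_{j} w_{ij}\,[x_j]^{+}\Bigr) - (D + x_i)\,[I_i]^{-},
\qquad [a]^{+} = \max(a,0), \quad [a]^{-} = \max(-a,0),
```

where A is the passive decay rate, B and D bound the activity from above and below, I_i is the external input (e.g., from the target or obstacles), and w_ij are lateral connection weights to neighbouring neurons.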

20.
In 1996, the author attended a seminar on ethics given by C. West Churchman at the University of California, Berkeley. During that year, the author also interviewed Churchman several times regarding the future direction of the information sciences in general and the information systems research field in particular. This article is a compilation of the seminar and the interviews. It is set in the context of both Churchman's earlier and his current views of a global god, good, kindness, and caring. C. West Churchman holds that global ethics should lead to the study and design of information systems to solve large and difficult problems of humankind, such as poverty, crime, and disease. His Global Ethical Management (GEM) of the information sciences translates into abandoning the current goals and boundaries of the information sciences fields and changing what constitutes valid research to globally ethical endeavors.
