Similar Documents
20 similar documents found.
1.
Schedulers for larger classes of pinwheel instances
The pinwheel is a hard-real-time scheduling problem for scheduling satellite ground stations to service a number of satellites without data loss. Given a multiset of positive integers (an instance) A = {a_1, ..., a_n}, the problem is to find an infinite sequence (schedule) of symbols from {1, 2, ..., n} such that there is at least one symbol i within any interval of a_i symbols (slots). Not all instances A can be scheduled; for example, no successful schedule exists for instances whose density, ρ(A) = Σ_i (1/a_i), is larger than 1. It has been shown that all instances whose densities are less than a 0.5 density threshold can always be scheduled. If a schedule exists, another concern is the design of a fast on-line scheduler (FOLS), which can generate each symbol of the schedule in constant time. Based on the idea of integer reduction, two new FOLSs which can schedule different classes of pinwheel instances are proposed in this paper. One uses single-integer reduction and the other uses double-integer reduction. They both improve on the previous 0.5 result and have density thresholds of 13/20 and 2/3, respectively. In particular, if the elements in A are large, the density thresholds asymptotically approach ln 2 and 1/√2, respectively. This research was supported in part by ONR Grant N00014-87-K-0833, and was done while Francis Chin was visiting the Computer Science Program, The University of Texas at Dallas, Richardson, TX 75083, USA.
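As a rough Python illustration (mine, not the paper's integer-reduction FOLS): the density test is immediate, and small instances can be decided exactly by searching the finite space of deadline vectors. The exact search is exponential, which is why constant-time online schedulers with good density thresholds matter.

```python
from fractions import Fraction
from itertools import product

def density(a):
    """rho(A) = sum of 1/a_i; a schedule cannot exist when rho(A) > 1."""
    return sum(Fraction(1, ai) for ai in a)

def schedulable(a):
    """Exact but exponential check: a state is the vector of remaining
    deadlines, and a schedule exists iff the start state survives the
    greatest-fixed-point pruning of states with no legal successor."""
    n = len(a)
    states = set(product(*[range(1, ai + 1) for ai in a]))

    def moves(d):
        for i in range(n):
            nxt = tuple(a[j] if j == i else d[j] - 1 for j in range(n))
            if min(nxt) >= 1:          # no other symbol missed its deadline
                yield nxt

    changed = True
    while changed:                     # prune states that are stuck
        changed = False
        for d in list(states):
            if not any(m in states for m in moves(d)):
                states.discard(d)
                changed = True
    return tuple(a) in states

print(density((2, 4, 8)), schedulable((2, 4, 8)))   # 7/8 True
print(density((2, 3, 6)), schedulable((2, 3, 6)))   # 1 False: density <= 1 alone is not sufficient
```

The second instance shows why density thresholds below 1 are the right currency: {2, 3, 6} has density exactly 1 yet admits no schedule.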

2.
Research with computer systems and musical grammars into improvisation as found in the tabla drumming system of North India has indicated that certain musical sentences comprise (a) variable prefixes, and (b) fixed suffixes (or cadences) identical with those of their original rhythmic themes. It was assumed that the cadence functioned as a kind of target in linear musical space, and yet experiments showed that defining what exactly constituted the cadence was problematic. This paper addresses the problem of the status of cadential patterns, and demonstrates the need for a better understanding and formalization of ambiguity in musico-cognitive processing. It would appear from the discussion that the cadence is not a discrete unit in itself, but just part of an ever-present underlying framework comprising the entire original rhythmic theme. Improvisations (variations), it is suggested, merely break away from and rejoin this framework at important structural points. This endorses the theory of simultaneity. However, the general cognitive implications are still unclear, and further research is required to explore musical ambiguity and the interaction of musical, linguistic, and spatio-motor grammars.

3.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
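For reference, the Klee-Minty construction the abstract revisits is usually stated as the following LP (this is the common textbook variant, not necessarily the exact scaling the paper analyzes); the simplex method with Dantzig's pivoting rule visits all 2^n vertices of this deformed cube:

\[
\begin{aligned}
\max\quad & \sum_{j=1}^{n} 2^{\,n-j} x_j \\
\text{s.t.}\quad & \sum_{j=1}^{i-1} 2^{\,i-j+1} x_j + x_i \le 5^{\,i}, \qquad i = 1, \dots, n,\\
& x_j \ge 0, \qquad j = 1, \dots, n.
\end{aligned}
\]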

4.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (X*, #) denote the free orthogonality monoid over (X, #). Let C0(X*, #) be the subset of C(X*, #) consisting of all closures of bounded orthogonal sets. We show that C0(X*, #) is a suborthomodular lattice of C(X*, #), and we provide a necessary and sufficient condition for C0(X*, #) to carry a full set of dispersion-free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

5.
Infinitesimal Perturbation Analysis (IPA) estimators are based on particular couplings of parametric families of discrete event systems, where small changes in the parameter value typically cause small changes in the timing of events and, for finite horizons, the sequence of states visited remains the same. We consider another coupling approach based on the uniformization procedure and a simple generalization of it. In our case any small change in the parameter value causes a change in the state of the system; our parameterization of trajectories keeps them highly synchronized, hence the effect of such changes can be estimated, sometimes efficiently. In this framework, we define three types of performance sensitivity estimators for a broad class of performance measures and with respect to a range of parameter values. Performance measures on finite deterministic horizons are considered and it is shown that they are unbiased under mild conditions. We show that for some systems the derivative estimators can be calculated from a nominal sample path of the system.
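A minimal sketch of the coupling idea (parameter names and the M/M/1 setting are mine, not the paper's estimators): two systems with nearby service rates are driven by one Poisson clock and the same uniform draws, so a finite-difference sensitivity can be read off the synchronized paths.

```python
import random

def coupled_mean_queue(lam, mu1, mu2, T=10_000.0, seed=1):
    """Uniformization coupling: both M/M/1 systems (service rates mu1, mu2)
    share a Poisson clock of rate LAM and the same uniform draws, keeping
    their sample paths synchronized (common random numbers)."""
    rng = random.Random(seed)
    LAM = lam + max(mu1, mu2)                  # bounds every event rate
    t, q1, q2, area1, area2 = 0.0, 0, 0, 0.0, 0.0
    while t < T:
        dt = rng.expovariate(LAM)
        area1 += q1 * dt
        area2 += q2 * dt
        t += dt
        u = rng.random()
        if u < lam / LAM:                      # arrival in both systems
            q1 += 1
            q2 += 1
        else:                                  # potential service completion
            v = (u - lam / LAM) * LAM          # shared draw in [0, LAM - lam)
            if v < mu1 and q1 > 0: q1 -= 1     # real departure or self-loop
            if v < mu2 and q2 > 0: q2 -= 1
    return area1 / T, area2 / T

lam, mu, d = 0.5, 1.0, 0.01
m1, m2 = coupled_mean_queue(lam, mu, mu + d)
print((m2 - m1) / d)   # finite-difference sensitivity of mean queue length to mu
```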

6.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d : φ, stating that d is a proof term of formula φ under hypotheses Γ; it can be viewed as a typing relation by the Curry–Howard isomorphism. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
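Here is a small Python toy of the de Bruijn machinery (mine, not extracted from the paper's Coq development): shifting of free indices and capture-avoiding substitution, the two operations such an encoding rests on.

```python
def shift(t, d, cutoff=0):
    """Add d to every free variable (index >= cutoff) in a lambda term.
    Terms: ('var', k) | ('lam', body) | ('app', f, a)."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Capture-avoiding substitution t[j := s] on de Bruijn terms."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

# beta-reduce (\x. x (\y. x)) z  ==>  z (\y. z), with z a free variable.
redex_body = ('app', ('var', 0), ('lam', ('var', 1)))
z = ('var', 0)
result = shift(subst(redex_body, 0, shift(z, 1)), -1)
print(result)   # ('app', ('var', 0), ('lam', ('var', 1)))
```

Note how no renaming is ever needed: the indices inside the binder are adjusted by shift instead, which is precisely what makes de Bruijn encodings convenient in a proof assistant.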

7.
We give an O(φ^k · n²) fixed-parameter tractable algorithm for 1-Sided Crossing Minimization. The constant in the running time is the golden ratio φ = (1+√5)/2 ≈ 1.618. The parameter k is the number of allowed edge crossings.

8.
This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

9.
Agent-based technology has been identified as an important approach for developing next generation manufacturing systems. One of the key techniques needed for implementing such advanced systems will be learning. This paper first discusses learning issues in agent-based manufacturing systems and reviews related approaches, then describes how to enhance the performance of an agent-based manufacturing system through learning from history (based on distributed case-based learning and reasoning) and learning from the future (through system forecasting simulation). Learning from history is used to enhance coordination capabilities by minimizing communication and processing overheads. Learning from the future is used to adjust promissory schedules through forecasting simulation, by taking into account the shop floor interactions, production and transportation time. Detailed learning and reasoning mechanisms are described and partial experimental results are presented.

10.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c, p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.

11.
In this paper, we consider the class of Boolean μ-functions, which are the Boolean functions definable by μ-expressions (Boolean expressions in which no variable occurs more than once). We present an algorithm which transforms a Boolean formula E into an equivalent μ-expression, if possible, in time linear in ‖E‖ times a factor depending only on m, where ‖E‖ is the size of E and m is the number of variables that occur more than once in E. As an application, we obtain a polynomial-time algorithm for Mundici's problem of recognizing μ-functions from k-formulas [17]. Furthermore, we show that recognizing Boolean μ-functions is co-NP-complete for functions essentially dependent on all variables, and we give a bound close to co-NP for the general case.
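As a toy illustration (the formula syntax and helper are my own, not the paper's algorithm), the quantity m that drives the time bound is just a variable-occurrence count:

```python
import re
from collections import Counter

def repeated_variables(formula: str):
    """Return the variables occurring more than once; their number is the m
    in the bound. Assumed syntax: identifiers plus &, |, ~, parentheses."""
    occurrences = Counter(re.findall(r'[A-Za-z_]\w*', formula))
    return {v: c for v, c in occurrences.items() if c > 1}

print(repeated_variables("(x & y) | (~x & z)"))   # {'x': 2}, so m = 1
```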

12.
A variotherm mold for micro metal injection molding
In this paper, a variotherm mold was designed and fabricated for the production of 316L stainless steel microstructures by micro metal injection molding (MIM). The variotherm mold incorporated a rapid heating/cooling system, vacuum unit, hot sprue and cavity pressure transducer. The design of the variotherm mold and the process cycle of MIM using the variotherm mold were described. Experiments were conducted to evaluate the molded microstructures produced using the variotherm mold and a conventional mold. The experiments showed that microstructures of higher aspect ratio, such as 60 μm × height 191 μm and 40 μm × height 174 μm microstructures, could be injection molded with complete filling and demolded successfully using the variotherm mold. Molded microstructures with dimensions of 60 μm × height 191 μm were successfully debound and sintered without visual defects.

13.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu(u)² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
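In symbols (the prior is the one the abstract states; the notation x_i, d_i for the noisy samples is my own), the error functional, the per-weight prior, and the λ-averaged MSE estimate read:

\[
E_\lambda(f) = \sum_i \bigl(f(x_i) - d_i\bigr)^2 + \lambda \int f_{uu}(u)^2\,du,
\qquad
\Pr(f \mid \lambda) \propto \exp\Bigl(-\lambda \int f_{uu}(u)^2\,du\Bigr),
\]
\[
\hat f_{\mathrm{MSE}}(x) = \int f(x)\,\Pr(f \mid D)\,df
= \int \Bigl[\int f(x)\,\Pr(f \mid \lambda, D)\,df\Bigr]\Pr(\lambda \mid D)\,d\lambda .
\]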

14.
In this paper we first give a general definition of what is meant by the total-step, single-step, and successive relaxation iterative methods, and we apply these concepts to general systems of linear equations. In the special case of a matrix with zero diagonal entries we obtain the well-known Jacobi, Gauss-Seidel, and relaxation iterative methods. Theorem 1 gives conditions for the convergence of the single-step iterative method for general, non-negative matrices. The proof is similar to that given by Stein and Rosenberg in [2] (1948) for a special case. A corollary gives conditions for the convergence of the relaxation iterative method for non-negative matrices. Further on, we prove Theorem 2 about the convergence of the relaxation iterative method for diagonally dominant matrices.
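To make the three schemes concrete, here is a small NumPy sketch (the example matrix is mine; it is diagonally dominant, the setting of Theorem 2):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Total-step method: every component is updated from the previous iterate."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, iters=100, omega=1.0):
    """Single-step method; with omega != 1 it becomes the relaxation method."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

# A diagonally dominant system (the numbers are mine, for illustration).
A = np.array([[4., 1., 1.],
              [1., 5., 2.],
              [0., 2., 6.]])
b = np.array([6., 8., 8.])
print(jacobi(A, b))
print(gauss_seidel(A, b))                 # Gauss-Seidel
print(gauss_seidel(A, b, omega=1.1))      # relaxation
print(np.linalg.solve(A, b))              # direct solution for comparison
```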

15.
Speech perception relies on the human ability to decode continuous, analogue sound pressure waves into discrete, symbolic labels (phonemes) with linguistic meaning. Aspects of this signal-to-symbol transformation have been intensively studied over many decades, using psychophysical procedures. The perception of (synthetic) syllable-initial stop consonants has been especially well studied, since these sounds display a marked categorization effect: they are typically dichotomised into voiced and unvoiced classes according to their voice onset time (VOT). In this case, the category boundary is found to have a systematic relation to the (simulated) place of articulation, but there is no currently-accepted explanation of this phenomenon. Categorization effects have now been demonstrated in a variety of animal species as well as humans, indicating that their origins lie in general auditory and/or learning mechanisms, rather than in some phonetic module specialized to human speech processing. In recent work, we have demonstrated that appropriately-trained computational learning systems (neural networks) also display the same systematic behaviour as human and animal listeners. Networks are trained on simulated patterns of auditory-nerve firings in response to synthetic continua of stop-consonant/vowel syllables varying in place of articulation and VOT. Unlike real listeners, such a software model is amenable to analysis aimed at extracting the phonetic knowledge acquired in training, so providing a putative explanation of the categorization phenomenon. Here, we study three learning systems: single-layer perceptrons, support vector machines and Fisher linear discriminants. We highlight similarities and differences between these approaches. We find that the modern inductive inference technique for small sample sizes of support vector machines gives the most convincing results. Knowledge extracted from the trained machine indicated that the phonetic percept of voicing is easily and directly recoverable from auditory (but not acoustic) representations.
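A minimal sketch of the kind of comparison described (the synthetic VOT data below is invented and stands in for the simulated auditory-nerve patterns the paper actually uses):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented one-dimensional stand-in: VOT in ms; 0 = voiced, 1 = unvoiced.
rng = np.random.default_rng(0)
vot_voiced = rng.normal(15, 8, 200)      # short voice onset times
vot_unvoiced = rng.normal(60, 12, 200)   # long voice onset times
X = np.concatenate([vot_voiced, vot_unvoiced]).reshape(-1, 1)
y = np.array([0] * 200 + [1] * 200)

for model in (SVC(kernel="linear"), LinearDiscriminantAnalysis()):
    model.fit(X, y)
    # Locate the category boundary: the VOT where the predicted label flips.
    grid = np.linspace(0, 100, 1001).reshape(-1, 1)
    boundary = grid[np.argmax(model.predict(grid) == 1)][0]
    print(type(model).__name__, "boundary ~", round(boundary, 1), "ms")
```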

16.
No work is inherently either visible or invisible. We always see work through a selection of indicators: straining muscles, finished artifacts, a changed state of affairs. The indicators change with context, and that context becomes a negotiation about the relationship between visible and invisible work. With shifts in industrial practice these negotiations require longer chains of inference and representation, and may become solely abstract. This article provides a framework for analyzing invisible work in CSCW systems. We sample across a variety of kinds of work to enrich the understanding of how invisibility and visibility operate. Processes examined include creating a non-person in domestic work; disembedding background work; and going backstage. Understanding these processes may inform the design of CSCW systems and the development of related social theory.

17.
The first proposals for various component tools of what is now called the translator's workstation or translator's workbench are traced back to the 1970s and early 1980s in various, often independent, proposals at different stages in the development of computers and in their use by translators.

18.
Since Aristotle it has been recognised that a valid syllogism cannot have two particular premises. However, that is not how a lay person sees it, at least as long as the premises read "many", "most", etc., instead of a plain "some". The lay people are right if one considers that these syllogisms do not have strict but approximate (Zadeh) validity. Typically, only particular premises are available in everyday life, and one is dependent on such syllogisms. Some rules on the usage of particular premises are given below.

19.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type "multiplication by a function". We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem.

20.
A fundamental objective of human–computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the right thing at the right time in the right way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time). User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given.
