Similar Articles
20 similar articles found (search time: 31 ms)
1.
Abstract The main concern of this paper is to scale a given system of differential equations in such a way that, in the associated analogue computer set-up, the voltages at the outputs of the integrators neither exceed the bound set by the reference voltage nor fall below the bound set by the resolution of the computing elements. Estimation theorems are derived that allow this question to be settled a priori, i.e. without knowing the solution of the system of differential equations. The estimates first use ordinary norms and then Kamke norms. The theorem of Perron mentioned in the title is obtained by choosing special norms and dispensing with the lower estimate. The analysis is complicated by the relative weakness of the requirement that the right-hand side of the system dx/dt = f(x,t) satisfy the condition "‖x‖ ≥ a implies ‖f(x,t)‖ ≤ v(t)·‖x‖" (‖·‖ := a norm, a a positive real number). As a consequence, for estimates with Kamke norms it no longer seems possible to make use of the methods customary in the literature on existence proofs and estimation theorems. To resolve this question, a conditional form of the well-known theorem of Gronwall (also called the theorem of Bellman) is developed.
A conditional version of the integral inequality of gronwall, a slight generalization of a stability theorem of perron, and overflow-free scaling of analogue computer set-ups
Summary The main subject of this paper is the scaling of a given set of differential equations in such a way that the output voltages of the integrators of the associated analogue computer set-up do not exceed certain upper and lower bounds imposed by the reference voltage and the limited power of resolution of the elements of the analogue computer. The paper gives a priori bounds on the solution of the differential set. Some of these bounds work with norms, others with Kamke norms. Perron's stability theorem mentioned in the title of this paper results by inserting special norms and neglecting lower bounds. A difficulty arises from the relative weakness of the condition "‖x‖ ≥ a implies ‖f(x,t)‖ ≤ v(t)·‖x‖" on the right-hand side of the set dx/dt = f(x,t), where ‖·‖ is any norm and a is a positive real constant. As a consequence of this, it seems no longer possible to use the usual techniques known from the literature on existence theorems and bounds for the solution of differential equations. To cope with this situation, a conditional version of the well-known theorem of Gronwall (also known by the name of the Lemma of Bellman) will be derived.
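For orientation, the unconditional Gronwall-Bellman inequality that the paper's conditional version generalizes can be stated as follows; this is the standard textbook form, not the conditional variant derived in the paper:

\[
u(t) \le c + \int_{t_0}^{t} v(s)\,u(s)\,ds \quad (t \ge t_0,\ v \ge 0,\ c \ge 0)
\quad\Longrightarrow\quad
u(t) \le c\,\exp\!\left(\int_{t_0}^{t} v(s)\,ds\right).
\]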

This paper is part of a dissertation prepared at the Institut für Angewandte Mathematik of the Technische Hochschule München under the supervision of Prof. Dr. rer. nat. habil. J. Heinhold.

2.
Dr. T. Ström 《Computing》1972,10(1-2):1-7
It is a commonly occurring problem to find good norms ‖·‖ or logarithmic norms μ(·) for a given matrix A, in the sense that they should be close to the spectral radius ρ(A) and the spectral abscissa α(A), respectively. Examples may be the certification that A is convergent, i.e. ρ(A) ≤ ‖A‖ < 1, or stable, i.e. α(A) ≤ μ(A) < 0. Often the ordinary norms do not suffice and one would like to try simple modifications of them, such as using an ordinary norm for a diagonally transformed matrix. This paper treats this problem for some of the ordinary norms.
Minimization of Norms and Logarithmic Norms by Diagonal Transformations
Abstract A frequently occurring practical problem is the construction of good norms ‖·‖ and logarithmic norms μ(·) for a given matrix A. "Good" here means that ‖A‖ should approximate the spectral radius ρ(A) = max |λ_i| well and μ(A) the spectral abscissa α(A) = max Re λ_i. Examples are found for convergent matrices, where ρ(A) ≤ ‖A‖ < 1 is desired, and for stable matrices, where α(A) ≤ μ(A) < 0 has to be shown. We investigate here how far one can get with diagonal transformations and the most common norms.
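As a minimal illustration of the idea (not taken from the paper), the sketch below computes the ∞-norm and the logarithmic ∞-norm of a diagonally transformed matrix D⁻¹AD and compares them with the spectral radius and spectral abscissa; the matrix A and the hand-picked scaling d are assumed for the example, and the formulas used for ‖·‖∞ and μ∞(·) are the standard ones.

import numpy as np

def norm_inf(A):
    # induced infinity norm: maximum absolute row sum
    return np.max(np.sum(np.abs(A), axis=1))

def lognorm_inf(A):
    # logarithmic infinity norm: max_i ( a_ii + sum_{j != i} |a_ij| )
    off_diag = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return np.max(np.diag(A) + off_diag)

A = np.array([[-1.0, 4.0],
              [ 0.1, -2.0]])
d = np.array([8.0, 1.0])                 # hand-picked diagonal scaling (illustrative)
B = np.diag(1.0 / d) @ A @ np.diag(d)    # diagonally transformed matrix D^-1 A D

eigs = np.linalg.eigvals(A)
print("spectral radius  :", max(abs(eigs)), " ||A||_inf:", norm_inf(A), " ||D^-1AD||_inf:", norm_inf(B))
print("spectral abscissa:", max(eigs.real), " mu_inf(A):", lognorm_inf(A), " mu_inf(D^-1AD):", lognorm_inf(B))

After the scaling, μ∞(D⁻¹AD) is negative, which certifies stability (spectral abscissa < 0) even though μ∞(A) is not.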

3.
We consider the on-line competitiveness for scheduling a single resource non-preemptively in order to maximize its utilization. Our work examines this model when parameterizing an instance by a new value which we term the patience. This parameter measures each job's willingness to endure a delay before starting, relative to this same job's processing time. Specifically, the slack of a job is defined as the gap between its release time and the last possible time at which it may be started while still meeting its deadline. We say that a problem instance has patience α if each job with length J has a slack of at least α·J. Without any restrictions placed on the job characteristics, previous lower bounds show that no algorithm, deterministic or randomized, can guarantee a constant bound on the competitiveness of a resulting schedule. Previous researchers have analyzed a problem instance by parameterizing based on the ratio between the longest job's processing time and the shortest job's processing time. Our main contribution is to provide a fine-grained analysis of the problem when simultaneously parameterized by patience and the range of job lengths. We are able to give tight or almost tight bounds on the deterministic competitiveness for all parameter combinations. If viewing the analysis of each parameter individually, our evidence suggests that parameterizing solely on patience provides a richer analysis than parameterizing solely on the ratio of the job lengths. For example, in the special case where all jobs have the same length, we generalize a previous bound of 2 for the deterministic competitiveness with arbitrary slacks, showing that the competitiveness for any α ≥ 0 is exactly 1 + 1/(⌊α⌋+1). Without any bound on the job lengths, a simple greedy algorithm is (2 + 1/α)-competitive for any α > 0. More generally we will find that for any fixed ratio of job lengths, the competitiveness of the problem tends towards 1 as the patience is increased. The converse is not true, as for any fixed α ≥ 0 we find that the competitiveness is bounded away from 1, no matter what further restrictions are placed on the ratio of job lengths.
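As a sketch of what a simple greedy strategy for this model can look like (the specific rule below, starting the longest currently startable job whenever the resource is idle, is an assumption for illustration and not necessarily the algorithm analyzed in the paper):

def greedy_schedule(jobs):
    """Single resource, non-preemptive; jobs are (release, length, deadline) triples.
    A job may be started at any time t with release <= t and t + length <= deadline.
    Whenever the resource is idle, start the longest job that is still startable."""
    remaining = sorted(jobs)          # sorted by release time
    schedule = []                     # (start_time, job) pairs actually run
    t = 0.0                           # earliest time the resource is free
    while remaining:
        t = max(t, remaining[0][0])   # if idle, wait for the next release
        startable = [j for j in remaining if j[0] <= t and t + j[1] <= j[2]]
        if startable:
            job = max(startable, key=lambda j: j[1])   # greedy choice: longest job
            schedule.append((t, job))
            remaining.remove(job)
            t += job[1]               # resource is busy until the job completes
        else:
            # discard jobs whose last possible start time has already passed
            remaining = [j for j in remaining if max(t, j[0]) + j[1] <= j[2]]
    return schedule

jobs = [(0, 4, 9), (1, 2, 4), (2, 3, 12)]    # (release, length, deadline), illustrative
print(greedy_schedule(jobs))                 # job (1, 2, 4) cannot be fit and is lost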

4.
Rohn  Jiří 《Reliable Computing》1997,3(3):315-323
During the recent years, a number of linear problems with interval data have been proved to be NP-hard. These results may seem rather obscure as regards the ways in which they were obtained. This survey paper is aimed at demonstrating that in fact it is not so, since many of these results follow easily from the recently established fact that for the subordinate matrix norm ‖·‖∞,1 it is NP-hard to decide whether ‖A‖∞,1 ≥ 1 holds, even in the class of symmetric positive definite rational matrices. After a brief introduction into the basic topics of complexity theory in Section 1 and formulation of the underlying norm complexity result in Section 2, we present NP-hardness results for checking properties of interval matrices (Section 3), computing enclosures (Section 4), solvability of rectangular linear interval systems (Section 5), and linear and quadratic programming (Section 6). Due to space limitations, proofs are mostly only sketched to reveal the unifying role of the norm complexity result; technical details are omitted.
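To make the quantity concrete, here is a small brute-force sketch of the subordinate norm ‖A‖∞,1 = max over sign vectors x in {−1, 1}ⁿ of ‖Ax‖₁ (a standard characterization); the exponential enumeration is consistent with the NP-hardness of deciding ‖A‖∞,1 ≥ 1, and the example matrix is assumed for illustration.

import itertools
import numpy as np

def norm_inf_1(A):
    # ||A||_{inf,1} = max_{||x||_inf <= 1} ||A x||_1; the maximum of this convex
    # function over the unit cube is attained at a vertex, i.e. a +/-1 sign vector.
    n = A.shape[1]
    return max(np.abs(A @ np.array(signs)).sum()
               for signs in itertools.product((-1.0, 1.0), repeat=n))

A = np.array([[2.0, -1.0],
              [1.0,  3.0]])
print(norm_inf_1(A))          # 5.0 for this example
print(norm_inf_1(A) >= 1.0)   # the NP-hard decision problem, by brute force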

5.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular and let (, ) denote the free orthogonality monoid over (X, #). Let C0(, ) be the subset of C(, ), consisting of all closures of bounded orthogonal sets. We show that C0(, ) is a suborthomodular lattice of C(, ) and we provide a necessary and sufficient condition for C0(, ) to carry a full set of dispersion free states. The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

6.
Our starting point is a definition of conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different (besides probability) conditional measures.

7.
The I/O automaton paradigm of Lynch and Tuttle models asynchrony through an interleaving parallel composition. The recognition that such interleaving models can in fact be viewed as special cases of synchronous parallel composition has been very limited. Let a set of finite-state I/O automata be given, drawing actions from a fixed finite action set that contains a distinguished subset. In this article we establish a translation T from these I/O automata to a class of ω-automata closed under a synchronous parallel composition, such that T is monotonic with respect to implementation relative to the distinguished action subset, and linear with respect to composition. Thus, for I/O automata A1, ..., An and B1, ..., Bn with A the interleaving composition of the Ai and B the interleaving composition of the Bi, if the distinguished subset is the set of actions common to both A and B, then A implements B (in the sense of I/O automata) if and only if the language of the synchronous composition of T(A1), ..., T(An) is contained in the language of the synchronous composition of T(B1), ..., T(Bn). For the class of ω-automata, we use the L-process model. This result enables one to verify systems specified by I/O automata through model checkers such as COSPAN or SMV, which operate on models with synchronous parallel composition. The translation technique generalizes to other interleaving models, although in each case the translation map must match the specific model.

8.
A variotherm mold for micro metal injection molding
In this paper, a variotherm mold was designed and fabricated for the production of 316L stainless steel microstructures by micro metal injection molding (MIM). The variotherm mold incorporated a rapid heating/cooling system, vacuum unit, hot sprue and cavity pressure transducer. The design of the variotherm mold and the process cycle of MIM using the variotherm mold were described. Experiments were conducted to evaluate the molded microstructures produced using the variotherm mold and a conventional mold. The experiments showed that microstructures of higher aspect ratio, such as 60 μm × height 191 μm and 40 μm × height 174 μm microstructures, could be injection molded with complete filling and demolded successfully using the variotherm mold. Molded microstructures with dimensions of 60 μm × height 191 μm were successfully debound and sintered without visual defects.

9.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure, that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

10.
Dr. P. Thieler 《Computing》1978,19(4):303-312
Let A be an n×n matrix with the property ‖I − A‖ < 1. Let Y be an approximation of the inverse of A. This paper shows how to get a componentwise error estimate for Y that does not require too much numerical effort but generally presents better results than global error estimates do. Although proved by means of interval mathematics, the given error estimate can also be calculated in the absence of any implementation of interval arithmetic.
On componentwise error estimates for inverse matrices
Abstract Let A be an n×n matrix with the property ‖I − A‖ < 1. Let Y be an approximation of the inverse of A. This paper shows how a componentwise error estimate for Y can be obtained whose computation is not very expensive but which in general is sharper than global error estimates. Although proved with the tools of interval mathematics, the given error estimate can also be computed when no interval arithmetic is implemented.
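For comparison, the sketch below evaluates the standard global (norm-wise) residual bound for an approximate inverse, not the componentwise estimate derived in the paper, using an assumed example matrix: with R = I − AY and ‖R‖ < 1 one has ‖A⁻¹ − Y‖ ≤ ‖YR‖ / (1 − ‖R‖).

import numpy as np

def inverse_error_bound(A, Y):
    # Global residual bound: from A^-1 = Y (I - R)^-1 with R = I - A @ Y it follows
    # that ||A^-1 - Y|| <= ||Y @ R|| / (1 - ||R||) whenever ||R|| < 1.
    R = np.eye(A.shape[0]) - A @ Y
    r = np.linalg.norm(R, np.inf)
    assert r < 1.0, "bound is only valid when ||I - A Y|| < 1"
    return np.linalg.norm(Y @ R, np.inf) / (1.0 - r)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
Y = np.array([[ 0.29, -0.11],
              [-0.19,  0.41]])   # rough approximation of inv(A) = [[0.3,-0.1],[-0.2,0.4]]
print("bound     :", inverse_error_bound(A, Y))
print("true error:", np.linalg.norm(np.linalg.inv(A) - Y, np.inf))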


This research was supported in part by Sonderforschungsbereich 72-Approximation und Optimierung, University of Bonn.

11.
This paper uses Thiele rational interpolation to derive a simple method for computing the Randles–Sevcik function π1/2χ(x), with relative error at most 1.9 × 10−5 for −∞ < x < ∞. We develop a piecewise approximation method for the numerical computation of π1/2χ(x) on the union (−∞, −10) ∪ [−10, 10] ∪ (10, ∞). This approximation is particularly convenient to employ in electrochemical applications where four significant digits of accuracy are usually sufficient. Although this paper is primarily concerned with the approximation of the Randles–Sevcik function, some examples are included that illustrate how Thiele rational interpolation can be employed to generate useful approximations to other functions of interest in scientific work.
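As background, Thiele's interpolating continued fraction is built from inverse differences; the sketch below is a generic implementation (not the paper's specific piecewise approximation), and the nodes and test function are assumed purely for illustration.

import numpy as np

def thiele_coefficients(xs, ys):
    # Inverse-difference table; a[k] = phi[k][k] gives the continued fraction
    #   f(x) ~ a[0] + (x - xs[0]) / (a[1] + (x - xs[1]) / (a[2] + ...)).
    # Assumes no inverse difference degenerates (no division by zero) for these nodes.
    n = len(xs)
    phi = [[0.0] * n for _ in range(n)]
    for i in range(n):
        phi[i][0] = ys[i]
    for k in range(1, n):
        for i in range(k, n):
            phi[i][k] = (xs[i] - xs[k - 1]) / (phi[i][k - 1] - phi[k - 1][k - 1])
    return [phi[k][k] for k in range(n)]

def thiele_eval(xs, a, x):
    # Evaluate the continued fraction from the innermost term outward.
    r = a[-1]
    for k in range(len(a) - 2, -1, -1):
        r = a[k] + (x - xs[k]) / r
    return r

xs = np.linspace(0.5, 2.5, 6)
ys = np.tan(xs)                      # a test function with a pole inside the range
a = thiele_coefficients(xs, ys)
print(thiele_eval(xs, a, 1.0), np.tan(1.0))   # compare interpolant with the true value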

12.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion, or the minimal square error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function with one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f_λ was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df; this problem is reduced to computing a one-dimensional integral (in the general setting, the MAP estimate is not equal to the MSE estimate). (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
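A minimal sketch of the classical single-weight approach that the paper argues beyond: for a fixed weight lam, the regularized estimate minimizes ||f − d||² + lam·||D2 f||² over discrete 1-D signals (here all samples are observed, so this is denoising rather than true interpolation); the data, the second-difference smoothness term and the weight grid are all assumptions for the example.

import numpy as np

def regularize(d, lam):
    # Minimizer of ||f - d||^2 + lam * ||D2 f||^2, where D2 is the discrete
    # second difference; the minimizer solves (I + lam * D2^T D2) f = d.
    n = len(d)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)

x = np.linspace(0.0, 1.0, 50)
d = np.sin(2 * np.pi * x) + 0.2 * np.random.randn(50)   # noisy measurements
for lam in (0.1, 10.0, 1000.0):        # each weight yields a different estimate f_lam;
    f_lam = regularize(d, lam)         # the paper averages over all weights instead
    print(lam, float(np.mean((f_lam - np.sin(2 * np.pi * x)) ** 2)))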

13.
Schedulers for larger classes of pinwheel instances
The pinwheel is a hard-real-time scheduling problem for scheduling satellite ground stations to service a number of satellites without data loss. Given a multiset of positive integers (instance) A = {a1, ..., an}, the problem is to find an infinite sequence (schedule) of symbols from {1, 2, ..., n} such that there is at least one symbol i within any interval of ai symbols (slots). Not all instances A can be scheduled; for example, no successful schedule exists for instances whose density, Σi (1/ai), is larger than 1. It has been shown that all instances whose densities are less than a 0.5 density threshold can always be scheduled. If a schedule exists, another concern is the design of a fast on-line scheduler (FOLS) which can generate each symbol of the schedule in constant time. Based on the idea of integer reduction, two new FOLSs which can schedule different classes of pinwheel instances are proposed in this paper. One uses single-integer reduction and the other uses double-integer reduction. They both improve the previous 0.5 result and have density thresholds of 13/20 and 2/3, respectively. In particular, if the elements in A are large, the density thresholds will asymptotically approach ln 2 and 1/√2, respectively. This research was supported in part by ONR Grant N00014-87-K-0833, and was done while Francis Chin was visiting the Computer Science Program, The University of Texas at Dallas, Richardson, TX 75083, USA.
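A small sketch of how a pinwheel instance and a candidate periodic schedule fit together (the verifier below and the example instances are illustrative, not the on-line schedulers of the paper):

def satisfies(schedule, A):
    # schedule: one period of a cyclic schedule (list of symbols 1..n, repeated forever)
    # A: the instance [a1, ..., an]; symbol i must occur in every window of A[i-1] slots.
    p = len(schedule)
    for i, a in enumerate(A, start=1):
        if i not in schedule:
            return False
        if a >= p:
            continue                     # a whole period fits into every such window
        extended = schedule * 2          # covers every cyclic window of length a < p
        if any(i not in extended[s:s + a] for s in range(p)):
            return False
    return True

print(satisfies([1, 2, 1, 3], [2, 4, 4]))   # True: density 1/2 + 1/4 + 1/4 = 1
print(satisfies([1, 2, 1, 2], [2, 3, 12]))  # False: symbol 3 never occurs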

14.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence parameterized by a set of actions and a natural number n. It is proved that two transition systems are equivalent in this sense iff they give the same truth value to all qu-mu-calculus formulae in which the actions occurring in the modal operators are contained in the given action set and the time constraints have values less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions and a natural number n, produces a reduced transition system equivalent (in the above sense) to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving the equivalence.

15.
Abstract In the following paper the notions of total-step, single-step and relaxation method are first formulated in general and then applied to general systems of linear equations. In the special case of a matrix with vanishing main diagonal one obtains in this way the well-known Jacobi, Gauss-Seidel and relaxation methods. Theorem 1 makes a statement about the convergence of the single-step method for general non-negative matrices. The proof proceeds similarly to a special case already treated by Stein and Rosenberg [2] in 1948. A corollary yields a statement about the convergence of the relaxation method for non-negative matrices. Theorem 2 on the convergence of the relaxation method for diagonally dominant matrices is also proved.
Summary In this paper we give a general definition of what is meant by a total-step, single-step and successive relaxation iterative method, and we apply these concepts to systems of linear equations. In the special case of a matrix with zero diagonal entries we obtain the well-known Jacobi, Gauss-Seidel and relaxation iterative methods. Theorem 1 gives conditions for the convergence of the single-step iterative method for general, non-negative matrices. The proof is similar to that given by Stein and Rosenberg in [2] (1948) for a special case. A corollary gives conditions for the convergence of the relaxation iterative method for non-negative matrices. Further on we prove Theorem 2 about the convergence of the relaxation iterative method with diagonally dominant matrices.
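As a concrete reminder of what the single-step (Gauss-Seidel) iteration does, here is a minimal sketch for a small, strictly diagonally dominant system; the matrix and right-hand side are assumed for the example.

import numpy as np

def gauss_seidel(A, b, iters=50):
    # Single-step iteration: each updated component is used immediately
    # within the same sweep; converges e.g. for strictly diagonally dominant A.
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])       # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
print(np.linalg.solve(A, b))          # reference solution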

16.
Games such as CHESS, GO and OTHELLO can be represented by minimax game trees. Among various search procedures to solve such game trees, α-β and SSS* are perhaps the most well known. Although it is proved that SSS* explores only a subset of the nodes explored by α-β, α-β is commonly believed to be faster in real applications, since it requires very little memory space and hence its storage management cost is low. Contrary to this folklore, however, this paper reports, using the OTHELLO game as an example, that SSS* is much faster than α-β. It is also demonstrated that SSS* can be modified to make the required memory space controllable to some extent, while retaining the high efficiency of the original SSS*. This research was partially supported by the Ministry of Education, Science and Culture of Japan, under a Scientific Grant-in-Aid.
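For reference, a minimal α-β pruning sketch over an explicit game tree (the tree encoding and leaf values are assumed for illustration; this is the textbook procedure, not the paper's OTHELLO program):

def alphabeta(node, alpha, beta, maximizing):
    # A node is either a number (leaf value) or a list of child nodes.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: remaining children are pruned
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:          # alpha cut-off
                break
        return value

tree = [[3, 5], [6, [9, 7]], [1, 2]]   # small illustrative tree of leaf evaluations
print(alphabeta(tree, float("-inf"), float("inf"), True))   # minimax value: 6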

17.
The first proposals for various component tools of what is now called the translator's workstation or translator's workbench are traced back to the 1970s and early 1980s in various, often independent, proposals at different stages in the development of computers and in their use by translators.

18.
Indecomposable local maps of one-dimensional tessellation automata are studied. The main results of this paper are the following. (1) For any alphabet containing two or more symbols and for any n ≥ 1, there exist indecomposable scope-n local maps over that alphabet. (2) If the alphabet is a finite field of prime order, then a linear scope-n local map over it is indecomposable if and only if its associated polynomial is an irreducible polynomial of degree n − 1 over that field, except for a trivial case. (3) Result (2) is no longer true if the alphabet is a finite field whose order is not prime.
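To fix ideas, the sketch below applies one step of a linear scope-3 local map over the two-element field to a cyclic configuration; the coefficient convention for the associated polynomial and the example data are assumptions made for illustration only.

def linear_local_map_step(config, coeffs):
    # One step of a linear scope-n local map over GF(2) on a cyclic configuration:
    # new cell i = sum_k coeffs[k] * config[(i + k) mod m]  (mod 2), with n = len(coeffs).
    # Under the convention used here, the associated polynomial is
    # coeffs[0] + coeffs[1]*X + ... + coeffs[n-1]*X^(n-1).
    m, n = len(config), len(coeffs)
    return [sum(coeffs[k] * config[(i + k) % m] for k in range(n)) % 2
            for i in range(m)]

config = [1, 0, 0, 1, 0, 1, 1, 0]   # cyclic configuration of 8 cells
coeffs = [1, 0, 1]                  # scope-3 linear map; polynomial 1 + X^2
print(linear_local_map_step(config, coeffs))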

19.
Summary In a simple language for finite automata based on SCCS we introduce three different delay operators, two of which are different versions of an unbounded but finite delay operator. It is argued that the usual notion of bisimulation is inadequate and two generalisations are proposed. In both cases we give a complete axiomatisation for finite terms of the language and prove that certain forms of induction are sound. In one case we give a complete axiomatisation.

20.
In this paper, we define what we call a unitary immersion of a nonlinear system. We observe that, for classical Hamiltonian systems, this notion contains, in some sense, the concept of quantization. We restrict our attention to degree-zero unitary immersions, where all observation functions must be represented by operators of the type multiplication by a function. We show that the problem of classifying such degree-zero unitary immersions of a given nonlinear system is not obvious. In some cases, we solve this problem. Chargé de Recherche au CNRS. Maître de Conférences.
