Similar Documents (20 results)
1.
Results from the application of potential theory to optimization are used to extend the use of (Helmholtz) diffusion and diffraction equations to the optimization of their solutions u(x, ε) with respect to both x and ε. If the aim function is modified so that the optimal point does not change, then the function u(x, ε) is convex in (x, ε) for small ε. The possibility of using the heat-conduction equation with a simple boundary layer for global optimization is investigated. A method is designed for making the solution U(x,t) of such equations have a positive-definite matrix of second mixed derivatives with respect to x for any x in the optimization domain and any small t < 0 (the point is remote from the extremum), or a negative-definite matrix in x (the point is close to the extremum). For functions u(x, ε) and U(x,t) having these properties, the gradient method and the Newton–Kantorovich method are used in the first and second stages of optimization, respectively.
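To make the two-stage idea concrete, here is a minimal Python sketch: plain gradient steps while the numerically estimated Hessian is not yet positive definite, switching to Newton steps once it is. The objective, step size, and switching test are illustrative assumptions and do not reproduce the paper's diffusion-equation construction.

    import numpy as np

    def grad(f, x, h=1e-5):
        # Central-difference gradient estimate.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    def hess(f, x, h=1e-4):
        # Finite-difference Hessian estimate (symmetrized).
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            e = np.zeros_like(x); e[i] = h
            H[:, i] = (grad(f, x + e, h) - grad(f, x - e, h)) / (2 * h)
        return 0.5 * (H + H.T)

    def two_stage_minimize(f, x0, steps=200, lr=0.1):
        # Stage 1: gradient steps while the Hessian is not positive definite
        # (point assumed far from the minimum); stage 2: Newton steps once it is.
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            H = hess(f, x)
            if np.all(np.linalg.eigvalsh(H) > 0):
                x = x - np.linalg.solve(H, grad(f, x))   # Newton step near the minimum
            else:
                x = x - lr * grad(f, x)                  # gradient step far from it
        return x

    # Illustrative objective with a well-defined minimum near the origin.
    f = lambda x: np.sum(x**2) + 0.3 * np.sin(5 * x[0])
    print(two_stage_minimize(f, np.array([2.0, -1.5])))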

2.
Conditions are presented under which the maximum of the Kolmogorov complexity (algorithmic entropy) K(ω_1…ω_N) is attained, given the cost f(ω_i) of a message ω_1…ω_N. Various extremal relations between the message cost and the Kolmogorov complexity are also considered; in particular, the minimization problem for the function f(ω_i) − T·K(ω_1…ω_N) is studied, where T is a parameter called the temperature by analogy with thermodynamics. We also study domains of small variation of this function.

3.
A nonlinear stochastic integral equation of the Hammerstein type of the form x(t; ω) = h(t, x(t; ω)) + ∫_S k(t, s; ω) f(s, x(s; ω); ω) dμ(s) is studied, where t ∈ S, a measure space with certain properties, ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P), and the integral is a Bochner integral. A random solution of the equation is defined to be an almost surely continuous m-dimensional vector-valued stochastic process on S which is bounded with probability one for each t ∈ S and which satisfies the equation almost surely. Several theorems are proved which give conditions such that a unique random solution exists. AMS (MOS) subject classifications (1970): Primary: 60H20, 45G99. Secondary: 60G99.

4.
5.
Summary It is shown how the weakest-precondition approach to proving total correctness of nondeterministic programs can be formalized in infinitary logic. The weakest-precondition technique is extended to hierarchically structured programs by adding a new primitive statement for operational abstraction, the nondeterministic assignment statement, to the guarded commands of Dijkstra. The infinitary logic Lω₁ω is shown to be strong enough to express the weakest preconditions for Dijkstra's guarded commands, but too weak for the extended guarded commands. Two possible solutions are considered: going to the essentially stronger infinitary logic Lω₁ω₁, and restricting the power of the nondeterministic assignment statement in a way which allows the weakest preconditions to be expressed in Lω₁ω.
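For orientation, the weakest preconditions of Dijkstra's guarded alternative and repetitive constructs in their standard textbook form (quoted from the usual guarded-command treatment, not from the paper itself); the unbounded disjunction in the loop rule is exactly where an infinitary logic such as Lω₁ω becomes natural:

    wp(\mathbf{if}\ \square_i\, B_i \rightarrow S_i\ \mathbf{fi},\, R)
      \;=\; \Big(\bigvee_i B_i\Big) \wedge \bigwedge_i \big(B_i \Rightarrow wp(S_i, R)\big)

    wp(\mathbf{do}\ \square_i\, B_i \rightarrow S_i\ \mathbf{od},\, R) \;=\; \bigvee_{k \ge 0} H_k(R),
      \qquad H_0(R) = R \wedge \neg\bigvee_i B_i,
      \qquad H_{k+1}(R) = H_0(R) \vee wp(\mathbf{if}\ \square_i\, B_i \rightarrow S_i\ \mathbf{fi},\, H_k(R))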

6.
Summary Tsokos [12] showed the existence of a unique random solution of the random Volterra integral equation (*) x(t; ω) = h(t; ω) + ∫_0^t k(t, τ; ω) f(τ, x(τ; ω)) dτ, where ω ∈ Ω, the supporting set of a probability measure space (Ω, A, P). It was required that f satisfy a Lipschitz condition in a certain subset of a Banach space. By using an extension of Banach's contraction-mapping principle, it is shown here that a unique random solution of (*) exists when f is (, )-uniformly locally Lipschitz in the same subset of the Banach space considered in [12].
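As a concrete illustration of the fixed-point machinery behind such existence results, a minimal Python sketch of Picard (successive-approximation) iteration for a deterministic sample-path Volterra equation x(t) = h(t) + ∫_0^t k(t, s) f(s, x(s)) ds on a grid; the particular h, k, and f below are illustrative assumptions, not taken from [12].

    import numpy as np

    def picard_volterra(h, k, f, T=1.0, n=200, iters=50):
        # Fixed-point (Picard) iteration for x(t) = h(t) + int_0^t k(t,s) f(s, x(s)) ds,
        # discretized with the trapezoidal rule on a uniform grid.
        t = np.linspace(0.0, T, n)
        dt = t[1] - t[0]
        x = h(t).copy()
        for _ in range(iters):
            fx = f(t, x)
            new = np.empty_like(x)
            for i in range(n):
                w = np.full(i + 1, dt)
                w[0] = w[-1] = dt / 2            # trapezoidal weights on [0, t_i]
                new[i] = h(t[i]) + np.dot(k(t[i], t[:i + 1]) * fx[:i + 1], w)
            x = new
        return t, x

    # Illustrative data: h(t) = 1, k(t,s) = exp(-(t-s)), f(s,x) = sin(x).
    t, x = picard_volterra(lambda t: np.ones_like(t),
                           lambda t, s: np.exp(-(t - s)),
                           lambda s, x: np.sin(x))
    print(x[-1])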

7.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of Ω-branching programs, where Ω is a semiring homomorphism, which generalizes ordinary branching programs, ∨-branching programs [M2], and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of bounded alternating Ω-branching programs, we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to oblivious, linear-length-bounded branching programs investigated in the past.

8.
This paper is an informal introduction to the theory of types which uses a connective for the intersection of two types and a constant for a universal type, besides the usual connective for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterised exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types and therefore probably not so expensive to implement mechanically as these systems. There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one, from which the others are derived, and describes its most basic properties. No proofs are given but the motivation is shown by examples. A comprehensive bibliography is included.
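A standard illustration of what the intersection connective adds (a folklore example from the intersection-type literature, not taken from this paper): the self-application λx.xx has no simple type, but becomes typable once x may carry both σ and σ → τ:

    \frac{x : \sigma \cap (\sigma \to \tau) \vdash x : \sigma \to \tau
          \qquad
          x : \sigma \cap (\sigma \to \tau) \vdash x : \sigma}
         {x : \sigma \cap (\sigma \to \tau) \vdash x\,x : \tau}
    \qquad\text{hence}\qquad
    \vdash \lambda x.\,x\,x : (\sigma \cap (\sigma \to \tau)) \to \tau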

9.
Definability of Polyadic Lifts of Generalized Quantifiers
We study generalized quantifiers on finite structures. With every function f: ℕ → ℕ we associate a quantifier Q_f by letting Q_f x φ say "there are at least f(n) elements x satisfying φ", where n is the size of the universe. This is the general form of what is known as a monotone quantifier of type ⟨1⟩. We study so-called polyadic lifts of such quantifiers. The particular lifts we consider are Ramseyfication, branching and resumption. In each case we get exact criteria for definability of the lift in terms of simpler quantifiers.
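Read concretely: over a finite universe of size n, Q_f x φ(x) holds iff at least f(n) elements satisfy φ. A minimal Python sketch of this semantics (the choice f(n) = ⌈n/2⌉, the "at least half" quantifier, is an illustrative assumption):

    import math

    def Q(f, universe, phi):
        # Q_f x phi(x): at least f(n) elements of the universe satisfy phi,
        # where n is the size of the universe.
        n = len(universe)
        return sum(1 for a in universe if phi(a)) >= f(n)

    f = lambda n: math.ceil(n / 2)                  # a monotone type-<1> quantifier: "at least half"
    print(Q(f, range(10), lambda a: a % 3 == 0))    # 4 of 10 elements satisfy phi, and 4 >= 5 is False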

10.
The open exponential queueing network with negative customers (G-network) was considered. Each arriving customer was given a set of random parameters such as the route defining the sequence of network nodes passed by the customer, the route length, the size, and the servicing duration at each stage of the route. It was assumed that a negative customer arriving at the sth node either kills the positive customer in a randomly chosen server or does not affect it at all (each with a given probability), and with the complementary probability transforms it into a negative customer which, after an exponentially distributed time, arrives at the sth node with the given probability. The multidimensional stationary probability distribution of the network states was proved to be representable in multiplicative form.
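For comparison, the classical product-form result for Gelenbe's original single-class G-network (quoted here as background; the paper above proves a multiplicative form for a more general model): if λ_i⁺ and λ_i⁻ denote the solutions of the nonlinear traffic equations for positive and negative flows into node i, and r_i is the service rate there, then

    \pi(k_1,\dots,k_N) \;=\; \prod_{i=1}^{N} (1 - q_i)\, q_i^{\,k_i},
    \qquad
    q_i \;=\; \frac{\lambda_i^{+}}{r_i + \lambda_i^{-}} \;<\; 1 .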

11.
It is not yet known (1997) whether the Solar system is stable or not. A common belief is that the Solar system is stable if and only if it is not a resonant system, i.e., whenever its orbital frequencies ω_i satisfy an inequality |∑_i n_i ω_i| ≥ ε for all integers n_i with ∑_i |n_i| ≤ N; a similar inequality is true for randomly chosen frequencies. In this paper, we show that the Solar system does not have such resonances, and therefore (if the above-mentioned belief is correct), it is stable.
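A direct way to test a given frequency vector for such low-order resonances is to enumerate all nonzero integer vectors n with ∑_i |n_i| ≤ N and check whether |∑_i n_i ω_i| stays above the threshold. A minimal Python sketch (the frequencies, N, and ε below are illustrative placeholders, not the Solar-system data used in the paper):

    from itertools import product

    def min_integer_combination(omega, N):
        # Smallest |sum_i n_i * omega_i| over nonzero integer vectors n with sum_i |n_i| <= N.
        best = float("inf")
        for n in product(range(-N, N + 1), repeat=len(omega)):
            if any(n) and sum(abs(c) for c in n) <= N:
                best = min(best, abs(sum(c * w for c, w in zip(n, omega))))
        return best

    omega = [1.0, 2.718, 0.577]   # illustrative frequencies, arbitrary units
    N, eps = 4, 1e-3
    print(min_integer_combination(omega, N) >= eps)   # True here: no resonance up to order N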

12.
The two basic performance parameters that capture the complexity of any VLSI chip are the area of the chip, A, and the computation time, T. A systematic approach for establishing lower bounds on A is presented. This approach relates A to the bisection flow, φ. A theory of problem transformation based on φ, which captures both AT² and A complexity, is developed. A fundamental problem, namely element uniqueness, is chosen as a computational prototype. It is shown under general input/output protocol assumptions that any chip that decides if n elements (each with (1+ε)log n bits) are unique must have φ = Ω(n log n), and thus AT² = Ω(n² log² n) and A = Ω(n log n). A theory of VLSI transformability reveals the inherent AT² and A complexity of a large class of related problems. This work was supported in part by the Semiconductor Research Corporation under contract RSCH 84-06-049-6.

13.
Two video-based human-computer interaction tools are introduced that can activate a binary switch and issue a selection command. BlinkLink, as the first tool is called, automatically detects a user's eye blinks and accurately measures their durations. The system is intended to provide an alternate input modality to allow people with severe disabilities to access a computer. Voluntary long blinks trigger mouse clicks, while involuntary short blinks are ignored. The system enables communication using blink patterns: sequences of long and short blinks which are interpreted as semiotic messages. The second tool, EyebrowClicker, automatically detects when a user raises his or her eyebrows and then triggers a mouse click. Both systems can initialize themselves, track the eyes at frame rate, and recover in the event of errors. No special lighting is required. The systems have been tested with interactive games and a spelling program. Results demonstrate overall detection accuracy of 95.6% for BlinkLink and 89.0% for EyebrowClicker.
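The long-versus-short blink decision reduces to thresholding measured blink durations; a minimal sketch of that logic (the 300 ms threshold and the list-of-durations interface are illustrative assumptions, not BlinkLink's actual parameters):

    LONG_BLINK_SEC = 0.30   # illustrative threshold between involuntary and voluntary blinks

    def classify_blinks(durations_sec):
        # Long voluntary blinks trigger a mouse click; short involuntary blinks are ignored.
        return ["click" if d >= LONG_BLINK_SEC else "ignore" for d in durations_sec]

    print(classify_blinks([0.08, 0.45, 0.12, 0.60]))   # ['ignore', 'click', 'ignore', 'click']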

14.
Abstract The theorems on the convergence of successive overrelaxation methods (SOR) and alternating direction methods (ADI), known for systems of linear equations, are carried over to analogous methods for systems of nonlinear equations. However, as with other iterative methods for nonlinear problems, only so-called local convergence theorems can be proved. It is further investigated when there exist difference approximations for nonlinear elliptic differential equations such that the functional matrix of the resulting nonlinear system of equations is symmetric and positive definite; in that case SOR converges for 0 < ω < 2, and ADI converges. Such approximations can be derived at least for more general semilinear equations when the differential equation is the Euler equation of a variational problem. An example is given at the end.
Iterative solutions for systems of non-linear equations and discretisation of elliptic differential equations
Summary The theorems on the convergence of successive overrelaxation methods (SOR) and alternating direction methods (ADI), known for systems of linear equations, are transferred to analogous methods for systems of nonlinear equations. In doing so, only so-called local convergence theorems can be proved, as is the case with other iteration procedures for nonlinear problems. Furthermore, it is examined under what conditions there exist difference approximations for nonlinear elliptic differential equations such that the functional matrix of the resulting system of nonlinear equations is symmetric and positive definite. SOR for 0 < ω < 2 and ADI then converge. Such approximations can be derived at least for more general semilinear equations if the differential equation is the Euler equation of a variational problem. Finally, an example is given.


Dedicated to Professor Dr. L. Collatz on the occasion of his 60th birthday
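A minimal sketch of a nonlinear (one-step SOR-Newton) iteration of the kind covered by such local convergence theorems: each sweep updates one unknown at a time by a scalar Newton correction damped with the relaxation factor ω ∈ (0, 2). The test system below is an illustrative semilinear example, not the one from the paper.

    import numpy as np

    def nonlinear_sor(F, x0, omega=1.2, sweeps=100, h=1e-7):
        # One-step SOR-Newton for F(x) = 0: for each component i, take a damped
        # scalar Newton step on F_i with respect to x_i, using the latest values
        # of the other components.
        x = np.asarray(x0, dtype=float)
        for _ in range(sweeps):
            for i in range(len(x)):
                fi = F(x)[i]
                e = np.zeros_like(x); e[i] = h
                dfi = (F(x + e)[i] - fi) / h      # finite-difference dF_i/dx_i
                x[i] -= omega * fi / dfi
        return x

    # Illustrative semilinear system: F(x) = A x + sin(x) - b, with A symmetric positive definite.
    A = np.array([[4.0, -1.0], [-1.0, 3.0]])
    b = np.array([1.0, 2.0])
    F = lambda x: A @ x + np.sin(x) - b
    x = nonlinear_sor(F, np.zeros(2))
    print(x, F(x))   # the residual F(x) should be close to zero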

15.
A solution to the N-bit parity problem employing a single multiplicative neuron model, called the translated multiplicative neuron (t-neuron), is proposed. The t-neuron presents the following advantages: (a) for any N ≥ 1, only one t-neuron is necessary, with a threshold activation function and parameters defined within a specific interval; (b) no learning procedures are required; and (c) the computational cost is the same as the one associated with a simple McCulloch-Pitts neuron. Therefore, the t-neuron solution to the N-bit parity problem has the lowest computational cost among the neural solutions presented to date.
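To see why one multiplicative unit can suffice in principle: mapping each bit b ∈ {0, 1} to 1 − 2b and multiplying yields +1 for even parity and −1 for odd parity, so a single product followed by a threshold decides N-bit parity. A minimal sketch of this idea (not the paper's exact t-neuron parameterization):

    def parity_multiplicative(bits):
        # Map each bit b to 1 - 2b (0 -> +1, 1 -> -1); the product of the mapped
        # inputs is +1 for even parity and -1 for odd parity, so thresholding
        # the single product decides parity.
        prod = 1
        for b in bits:
            prod *= 1 - 2 * b
        return 1 if prod < 0 else 0   # 1 = odd parity, 0 = even parity

    print(parity_multiplicative([1, 0, 1, 1]))   # three ones -> odd parity -> 1
    print(parity_multiplicative([1, 0, 1, 0]))   # two ones -> even parity -> 0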

16.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
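A minimal numerical sketch of the "average over the weight" idea for a one-dimensional smoothing problem: for each λ on a grid, compute the regularized solution f_λ, weight it by the Gaussian marginal likelihood of the data under that λ (with a flat prior on λ), and average. The second-difference smoothness prior, the noise variance, and the λ grid are illustrative assumptions, not the paper's exact model.

    import numpy as np

    def average_over_lambda(y, sigma2=0.05, lambdas=np.logspace(-2, 3, 40)):
        # Model: y = f + noise, noise ~ N(0, sigma2*I), prior f ~ N(0, (lambda*K)^{-1}).
        # For each lambda, compute the posterior mean f_lambda and the marginal
        # likelihood p(y | lambda); return the likelihood-weighted average of f_lambda.
        n = len(y)
        D = np.diff(np.eye(n), 2, axis=0)         # discrete second-difference operator
        K = D.T @ D + 1e-8 * np.eye(n)            # smoothness precision (small ridge keeps it invertible)
        sols, logps = [], []
        for lam in lambdas:
            prior_cov = np.linalg.inv(lam * K)
            C = sigma2 * np.eye(n) + prior_cov    # marginal covariance of y given lambda
            sols.append(prior_cov @ np.linalg.solve(C, y))          # posterior mean f_lambda
            _, logdet = np.linalg.slogdet(C)
            logps.append(-0.5 * (y @ np.linalg.solve(C, y) + logdet))
        logps = np.asarray(logps)
        w = np.exp(logps - logps.max())
        w /= w.sum()
        return np.sum(w[:, None] * np.array(sols), axis=0)

    x = np.linspace(0.0, 1.0, 60)
    y = np.sin(2 * np.pi * x) + 0.2 * np.random.randn(60)
    print(average_over_lambda(y)[:5])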

17.
We study definability problems and algorithmic issues for infinite structures that are finitely presented. After a brief overview of different classes of finitely presentable structures, we focus on structures presented by automata or by model-theoretic interpretations. These two ways of presenting a structure are related. Indeed, a structure is automatic if, and only if, it is first-order interpretable in an appropriate expansion of Presburger arithmetic or, equivalently, in the infinite binary tree with prefix order and equal-length predicate. Similar results hold for ω-automatic structures and appropriate expansions of the real ordered group. We also discuss the relationship to automatic groups. The model checking problem for FO(∃^∞), first-order logic extended by the quantifier "there are infinitely many", is proved to be decidable for automatic and ω-automatic structures. Further, the complexity for various fragments of first-order logic is determined. On the other hand, several important properties not expressible in FO, such as isomorphism or connectedness, turn out to be undecidable for automatic structures. Finally, we investigate methods for proving that a structure does not admit an automatic presentation, and we establish that the class of automatic structures is closed under Feferman–Vaught-like products.

18.
The notion of a β-perfect graph is defined and some classes of β-perfect graphs are described, although the characterization of the complete class of β-perfect graphs remains an open question. A bound on the chromatic number for graphs without even holes is derived. Translated from Kibernetika, No. 2, pp. 8–11, March–April, 1990.

19.
In this work we propose and implement a new variant of the well-known write-invalidate protocol, called Y-invalidate. Whereas the former protocol required that every copy of a page be invalidated every time that page is updated, our variant invalidates a copy of a page at process A only at the next synchronization point which is relevant to A, and/or if the copy was modified by the owner's process after the page was copied to A. We thus avoid invalidating copies of pages that were modified but never read after modification, and avoid, of course, the associated overhead. Y-invalidate is basically a weak-consistency protocol. Its main advantage is that it implements weak consistency without the need to merge copies of a page that were updated in different machines. To the best of our knowledge, this is the first variant of weak-consistency protocols which does not merge multiple copies of pages. Unlike other variants of weak consistency, Y-invalidate supports implicit synchronization points in the program by invalidating copies of shared-memory pages that are referenced by while-loops. In this way, Y-invalidate approximates strict consistency. The Y-invalidate protocol was implemented in the ParC system, which is a powerful parallel extension of the C language. The ParC compiler was modified to detect some of the implicit synchronization points in the source code. Experimental results show significant improvement compared to both the traditional write-invalidate protocol and weak consistency.
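A minimal sketch of one reading of the invalidation rule above: a cached copy is dropped at the holder's next relevant synchronization point, or once the owner has written the page after the copy was taken. The version-counter bookkeeping and field names are illustrative assumptions, not the ParC implementation.

    class PageCopy:
        # A cached copy of a shared page held by some process.
        def __init__(self, copied_at_version):
            self.copied_at_version = copied_at_version   # owner's write count when the copy was made
            self.valid = True

    def maybe_invalidate(copy, owner_version, at_relevant_sync_point):
        # Y-invalidate-style decision (sketch): invalidate at a synchronization point
        # relevant to the holder, or if the owner has modified the page since copying.
        if at_relevant_sync_point or owner_version > copy.copied_at_version:
            copy.valid = False
        return copy.valid

    c = PageCopy(copied_at_version=3)
    print(maybe_invalidate(c, owner_version=3, at_relevant_sync_point=False))   # True: copy still valid
    print(maybe_invalidate(c, owner_version=5, at_relevant_sync_point=False))   # False: owner wrote since copy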

20.
This paper presents the design, fabrication, and characterization of easy-to-handle electroplated nickel microgrippers with SU-8 adaptors for heterogeneous micro/nano assembly applications. Two distinctive designs of microgrippers as end-effectors for micro/nano assembly applications have been developed in this work. The first design is a 200 μm thick electroplated nickel microgripper with a plastic mechanical displacement amplifier that is driven by a piezoelectric actuator. The piezoelectric actuator is capable of creating 5 μm displacement, which is amplified to 10 μm by the plastic mechanical amplifier, and finally such displacement generates 50–139 μm microgripper tip displacement. The second design is a 20 μm thick electroplated nickel microgripper embedded in an SU-8 adaptor for easy-to-handle operation. The second design is electro-thermally actuated using a set of Joule-heated bent beams. With an applied actuation voltage in the range of 2–4 V, the microgripper generates tip displacements of 4–32 μm. Extensive thermal and mechanical finite element modeling has been carried out, and measurement results were compared with the simulation results. Such easy-to-handle microgrippers can be used for micro/nano pick-and-place assembly applications. This work was supported by the National Institute of Standards and Technology-Advanced Technology Program (NIST-ATP 70NANB1H3021). The authors would like to thank the members of the Design Engineering Group at Zyvex Corporation, Mr. Yohannes Desta from the Center for Advanced Microstructures and Devices (CAMD) at Louisiana State University for the valuable technical discussions, and the members of the Micro and Nano Device and Systems (MiNDS) Laboratory and cleanroom staff at the University of Texas at Dallas.
