Similar Documents
20 similar documents found (search time: 15 ms)
1.
For nonlinear nonautonomous dynamic systems, partial equiasymptotic stability of the zero equilibrium position, equiasymptotic stability of a partial equilibrium position, and partial equiasymptotic stability of a partial equilibrium position are studied by the method of Lyapunov functions. Examples are given.

Translated from Avtomatika i Telemekhanika, No. 2, 2005, pp. 3–16. Original Russian Text Copyright © 2005 by Alekseeva, Vorotnikov, Feofanova.

2.
We apply and extend the priority algorithm framework introduced by Borodin, Nielsen, and Rackoff to define greedy-like algorithms for the (uncapacitated) facility location and set cover problems. These problems have been the focus of extensive research from the point of view of approximation algorithms, and for both problems greedy-like algorithms have been proposed and analyzed. The priority algorithm definitions are general enough to capture a broad class of algorithms that can be characterized as greedy-like, while still restrictive enough that non-trivial lower bounds on the approximability of the problems by algorithms in this class can be derived. Our results are orthogonal to complexity considerations, and hence apply to algorithms that are not necessarily polynomial time.
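For concreteness, here is the classic greedy set-cover heuristic, the prototypical greedy-like algorithm that a priority framework of this kind is meant to capture (a minimal textbook sketch, not the paper's formal priority-algorithm model):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered
    elements; achieves the classical H_n ~ ln(n) approximation."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Priority rule: largest marginal coverage first.
        best = max(sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= best
    return chosen

# Example usage:
universe = range(1, 8)
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}]
print(greedy_set_cover(universe, sets))
```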

3.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight λ actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work: (1) computing the MAP estimate, that is, the function f maximizing Pr(f | D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (2) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f | D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (3) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
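A minimal numerical sketch of the averaging idea, assuming a discretized one-dimensional signal, Gaussian noise of known variance, and a second-difference smoothness prior made proper with a small ridge term (all illustrative choices, not the paper's exact setup): each per-λ solution f_λ is weighted by the marginal likelihood of the data under that λ.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, sigma2, delta = 50, 0.05, 1e-3
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, np.sqrt(sigma2), n)  # noisy samples of f

# Second-difference operator: the smoothness term is ||D f||^2.
D = np.diff(np.eye(n), n=2, axis=0)
P = D.T @ D + delta * np.eye(n)  # prior precision (up to the weight lambda)

solutions, log_w = [], []
for lam in np.logspace(-2, 4, 30):
    # f_lambda minimizes ||f - y||^2 / sigma2 + lam * f^T P f.
    f_lam = np.linalg.solve(np.eye(n) / sigma2 + lam * P, y / sigma2)
    solutions.append(f_lam)
    # Marginal likelihood of y under lambda: y ~ N(0, sigma2*I + (lam*P)^-1).
    cov = sigma2 * np.eye(n) + np.linalg.inv(lam * P)
    log_w.append(multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov))

log_w = np.array(log_w)
w = np.exp(log_w - log_w.max())
w /= w.sum()
f_avg = np.array(solutions).T @ w  # f_lambda averaged, weighted by Pr(lambda | D)
```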

4.
The transition rule F of a cellular automaton may sometimes be regarded as a rule of growth of a crystal from a seed. A study is made of the iterates F, F², …. For certain one-dimensional growth rules, the limiting shapes of the crystals are computed, and an asymptotic formula for the size of the crystal as a function of time is obtained.
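As a toy illustration, one can iterate a simple one-dimensional growth rule from a single seed and watch the crystal size grow linearly with time (the rule below, where a cell becomes occupied if it or a neighbor is occupied, is a hypothetical example, not a rule studied in the paper):

```python
def step(cells):
    """One application of the growth rule F: a cell becomes (and
    stays) occupied if it or one of its two neighbors is occupied."""
    grown = set()
    for c in cells:
        grown.update((c - 1, c, c + 1))
    return grown

crystal = {0}  # the seed
for t in range(1, 6):
    crystal = step(crystal)
    print(f"t={t}: size={len(crystal)}")  # sizes 3, 5, 7, ... (2t + 1)
```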

5.
Parallel construction of a suffix tree with applications
Many string manipulations can be performed efficiently on suffix trees. In this paper a CRCW PRAM algorithm is presented that constructs the suffix tree associated with a string of n symbols in O(log n) time with n processors. The algorithm requires Θ(n²) space. However, the space needed can be reduced to O(n^(1+ε)) for any 0 < ε ≤ 1, with a corresponding slow-down proportional to 1/ε. Efficient parallel procedures are also given for some string problems that can be solved with suffix trees.

The results of this paper have been achieved independently and simultaneously in [AI-86] and [LSV-86]. The research of U. Vishkin was supported by NSF Grant NSF-CCR-8615337, ONR Grant N00014-85-K-0046, and the Foundation for Research in Electronics, Computers, and Communication, administered by the Israeli Academy of Sciences and Humanities. The research of A. Apostolico was carried out in part while visiting the Istituto di Analisi dei Sistemi e Informatica, Rome, with support from the Italian National Research Council. The research of G. M. Landau, B. Schieber, and U. Vishkin was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy under Contract DE-AC02-76ER03077.
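For intuition about what such a structure supports, a naive suffix trie, whose quadratic node count mirrors the Θ(n²) space bound above, answers substring queries in time proportional to the pattern length (a sequential toy illustration, not the paper's parallel algorithm):

```python
def build_suffix_trie(s):
    """Naive sequential construction: insert every suffix of s.
    O(n^2) time and space -- illustration only."""
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

def is_substring(trie, pattern):
    """Walk the trie: pattern occurs in s iff the walk never falls off."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_suffix_trie("banana")
print(is_substring(trie, "nan"), is_substring(trie, "nab"))  # True False
```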

6.
A central component of the analysis of panel clustering techniques for the approximation of integral operators is the so-called η-admissibility condition min{diam(τ), diam(σ)} ≤ 2η dist(τ, σ), which ensures that the kernel function is approximated only on those parts of the domain that are far from the singularity. Typical techniques based on a Taylor expansion of the kernel function require a subdomain to be far enough from the singularity that the parameter η has to be smaller than a given constant depending on properties of the kernel function. In this paper, we demonstrate that any η is sufficient if interpolation instead of Taylor expansion is used for the kernel approximation, which paves the way for grey-box panel clustering algorithms.
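A small sketch of how such an admissibility condition is checked in practice, assuming clusters represented by axis-aligned bounding boxes (a common implementation choice; the box representation is an assumption, not part of the abstract):

```python
import math

def diam(box):
    """Diameter of an axis-aligned box given as (lows, highs)."""
    lo, hi = box
    return math.dist(lo, hi)

def dist(a, b):
    """Distance between two axis-aligned boxes (0 if they overlap)."""
    (alo, ahi), (blo, bhi) = a, b
    gaps = [max(blo[i] - ahi[i], alo[i] - bhi[i], 0.0) for i in range(len(alo))]
    return math.hypot(*gaps)

def admissible(tau, sigma, eta):
    """eta-admissibility: min(diam) <= 2 * eta * dist."""
    return min(diam(tau), diam(sigma)) <= 2.0 * eta * dist(tau, sigma)

tau = ((0.0, 0.0), (1.0, 1.0))
sigma = ((3.0, 0.0), (4.0, 1.0))
print(admissible(tau, sigma, eta=0.5))  # True: diam ~1.41 <= 2 * 0.5 * 2.0
```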

7.
Let (X, #) be an orthogonality space such that the lattice C(X, #) of closed subsets of (X, #) is orthomodular, and let (M, #) denote the free orthogonality monoid over (X, #). Let C₀(M, #) be the subset of C(M, #) consisting of all closures of bounded orthogonal sets. We show that C₀(M, #) is a suborthomodular lattice of C(M, #), and we provide a necessary and sufficient condition for C₀(M, #) to carry a full set of dispersion-free states.

The work of the second author on this paper was supported by National Science Foundation Grant GP-9005.

8.
Coordinating Multiple Agents via Reinforcement Learning
In this paper, we attempt to use reinforcement learning techniques to solve agent coordination problems in task-oriented environments. The Fuzzy Subjective Task Structure model (FSTS) is presented to model general agent coordination. We show that an agent coordination problem modeled in FSTS is a Decision-Theoretic Planning (DTP) problem, to which reinforcement learning can be applied. Two learning algorithms, coarse-grained and fine-grained, are proposed to address agents' coordination behavior at two different levels. The coarse-grained algorithm operates at one level and tackles hard system constraints; the fine-grained algorithm operates at the other level and handles soft constraints. We argue that it is important to explicitly model and explore coordination-specific information (particularly system constraints), which underpins the two algorithms and contributes to their effectiveness. The algorithms are formally proved to converge and experimentally shown to be effective.

9.
Summary. Let {Y_n, −∞ < n < ∞} be either a function of a stationary Markov chain with countable state space, or a finitary process in the sense of Heller [3]. The purpose of this note is to prove that if {Y_n, −∞ < n < ∞} is mixing, then it is a K-shift. (Definitions will be given below.)

If T is a measure-preserving transformation on a probability space, then the following implications relevant to the present paper are known: (1) T is a K-shift ⇒ T is (r+1)-mixing ⇒ T is r-mixing ⇒ T is totally ergodic ⇒ T is ergodic, and (2) none of the reverse implications holds in general: T ergodic ⇏ T totally ergodic ⇏ T r-mixing ⇏ T a K-shift.

It is not known whether the classes of r-mixing and (r+1)-mixing transformations are distinct. (1-mixing is also called mixing.) The results of this note then imply that, for the classes of transformations we are considering, r-mixing and (r+1)-mixing are equivalent.

This research was partially supported by NSF Grant GP-19660.

10.
A key managerial challenge, of interest to academics and practitioners alike, is the assessment and management of customer satisfaction. In this paper, we examine the underlying processes involving consumer satisfaction and switching patterns among ISPs using different satisfaction models, including the expectations-disconfirmation model, the attribution model, and an affective model. Our results indicate that the satisfaction levels of ISP consumers are generally relatively low, despite the fact that consumer expectations of ISPs are also low, reflecting mediocrity in the marketplace. In addition, consumers attribute their dissatisfaction to ISP indifference and believe that managing dissatisfaction is within the control of the ISP. Moreover, affective factors play an important role in satisfaction processes and switching behavior. Customer service, including technical support and responsiveness of service staff, is an important determinant of ISP selection. We suggest that as the ISP market matures, service providers that pay attention to affective factors and to building relationships with their customers will have a competitive advantage in the marketplace of the future.

11.
We generalize Cuckoo Hashing to d-ary Cuckoo Hashing and show how this yields a simple hash table data structure that stores n elements in (1 + ε)n memory cells, for any constant ε > 0. Assuming uniform hashing, accessing or deleting table entries takes at most d = O(ln(1/ε)) probes, and the expected amortized insertion time is constant. This is the first dictionary that has worst-case constant access time and expected constant update time, works with (1 + ε)n space, and supports satellite information. Experiments indicate that d = 4 probes suffice for ε ≈ 0.03. We also describe variants of the data structure that allow the use of hash functions that can be evaluated in constant time.
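A compact sketch of d-ary cuckoo insertion (the class name, table size, and toy hash functions below are hypothetical; a full implementation would rehash on failure rather than give up):

```python
import random

class DAryCuckoo:
    """Toy d-ary cuckoo hash table: each key may live in one of d cells."""

    def __init__(self, n_cells, d=4):
        self.table = [None] * n_cells
        self.d = d
        self.seeds = range(d)

    def _slots(self, key):
        # d candidate cells per key, one per seeded hash (toy hashing).
        return [hash((seed, key)) % len(self.table) for seed in self.seeds]

    def lookup(self, key):
        return any(self.table[i] == key for i in self._slots(key))

    def insert(self, key, max_kicks=100):
        for _ in range(max_kicks):
            slots = self._slots(key)
            for i in slots:             # a free candidate cell? done.
                if self.table[i] is None:
                    self.table[i] = key
                    return True
            i = random.choice(slots)    # else evict a random occupant
            self.table[i], key = key, self.table[i]
        return False  # would trigger a rehash in a full implementation

t = DAryCuckoo(n_cells=16, d=4)
for k in "abcdefgh":
    t.insert(k)
print(t.lookup("c"), t.lookup("z"))  # True False
```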

12.
We show that a number of geometric problems can be solved on a √n × √n mesh-connected computer (MCC) in O(√n) time, which is optimal to within a constant factor, since a nontrivial data movement on an MCC requires Ω(√n) time. The problems studied here include multipoint location, planar point location, trapezoidal decomposition, intersection detection, intersection of two convex polygons, Voronoi diagram, the largest empty circle, the smallest enclosing circle, etc. The O(√n) algorithms for all of the above problems are based on the classical divide-and-conquer problem-solving strategy.

This work was supported in part by the National Science Foundation under Grant DCR 8420814. A preliminary version was presented at the 1987 FJCC, Dallas, TX.

13.
This paper is an informal introduction to the theory of types which uses a connective ∩ for the intersection of two types and a constant ω for a universal type, besides the usual connective → for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterized exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types, and therefore probably not so expensive to implement mechanically as these systems.

There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one, from which the others are derived, and describes its most basic properties. No proofs are given, but the motivation is shown by examples. A comprehensive bibliography is included.
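As a standard textbook illustration of what the intersection and universal types buy (not an example taken from this paper): self-application, untypable in simple types, becomes typable once the argument can be assigned both a function type and its domain type, and ω types even unsolvable terms:

```latex
% Self-application: x is used both at type sigma -> tau and at type sigma.
\lambda x.\, x\,x \;:\; (\sigma \cap (\sigma \to \tau)) \to \tau

% The universal type omega is assigned to every term, including the
% unsolvable term (\lambda x. x x)(\lambda x. x x):
(\lambda x.\, x\,x)(\lambda x.\, x\,x) \;:\; \omega
```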

14.
A well-known problem in default logic is the ability of naive reasoners to explain both g and ¬g from a set of observations. This problem is treated in at least two different ways within that camp.

One approach is examination of the various explanations and choosing among them on the basis of various explanation comparators. A typical comparator is choosing the explanation that depends on the most specific observation, similar to the notion of the narrowest reference class.

Others examine default extensions of the observations and choose whatever is true in any extension, or what is true in all extensions, or what is true in preferred extensions. Default extensions are sometimes thought of as acceptable models of the world that are discarded as more knowledge becomes available.

We argue that the notions of specificity and extension lack clear semantics. Furthermore, we show that the problems these ideas were supposed to solve can be handled easily within a probabilistic framework.
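The flavor of the probabilistic alternative can be seen in the stock birds-and-penguins example (the numbers and reference classes below are illustrative assumptions, not from the paper): conditioning on the most specific applicable reference class dissolves the conflict between competing defaults.

```python
# Conditional probabilities for the hypothetical reference classes.
p_fly = {
    ("bird",): 0.90,             # typical birds fly
    ("bird", "penguin"): 0.01,   # penguins (a subclass of birds) do not
}

def prob_flies(evidence):
    """Condition on the most specific reference class covered by the
    evidence -- no conflicting 'extensions' ever arise."""
    applicable = [cls for cls in p_fly if set(cls) <= set(evidence)]
    most_specific = max(applicable, key=len)
    return p_fly[most_specific]

print(prob_flies(["bird"]))             # 0.9
print(prob_flies(["bird", "penguin"]))  # 0.01 -- specificity handled for free
```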

15.
Let H be a separable Hilbert space. We consider the manifold M consisting of density operators ρ on H such that ρ^p is of trace class for some p ∈ (0, 1). We say σ is nearby ρ if there exists C > 1 such that C⁻¹ρ ≤ σ ≤ Cρ. We show that the space of points nearby ρ can be furnished with the two flat connections known as the (±)-affine structures, which are dual relative to the BKM metric. We furnish M with a norm making it into a Banach manifold.

16.
An optimal O(log log n)-time CRCW-PRAM algorithm for computing all period lengths of a string is presented. Previous parallel algorithms compute the period only if it is shorter than half of the length of the string. The algorithm can be used to find all initial palindromes of a string in the same time and processor bounds. Both algorithms are the fastest possible over a general alphabet. We derive a lower bound for finding initial palindromes by modifying a known lower bound for finding the period length of a string [9]. When p processors are available the bounds become Θ(⌈n/p⌉ + log log_⌈1+p/n⌉ 2p).

This work was partially supported by NSF Grant CCR-90-14605. D. Breslauer was partially supported by an IBM Graduate Fellowship while studying at Columbia University and by a European Research Consortium for Informatics and Mathematics postdoctoral fellowship.
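Sequentially, all period lengths of a string fall out of the classical KMP border (failure) function, using the fact that p is a period of s exactly when s has a border of length |s| − p; the O(n) sketch below illustrates the object being computed, not the parallel algorithm itself:

```python
def all_periods(s):
    """Return all period lengths of s via the KMP border array.
    p is a period of s iff s has a border of length len(s) - p."""
    n = len(s)
    border = [0] * (n + 1)  # border[i] = longest proper border of s[:i]
    k = 0
    for i in range(1, n):
        while k > 0 and s[i] != s[k]:
            k = border[k]
        if s[i] == s[k]:
            k += 1
        border[i + 1] = k
    periods, b = [], border[n]
    while True:             # walk the chain of borders of the whole string
        periods.append(n - b)
        if b == 0:
            break
        b = border[b]
    return periods

print(all_periods("abaabaabaaba"))  # [3, 6, 9, 11, 12]
```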

17.
We consider the half-space range-reporting problem: Given a set S of n points in ℝ^d, preprocess it into a data structure so that, given a query half-space γ, all k points of S ∩ γ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in ℝ³, and an output-sensitive algorithm for computing a level in an arrangement of planes in ℝ³, whose time complexity is O((b + n) n^ε), where b is the size of the level.

Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.
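The query semantics are easy to state in code: the naive O(n) scan below is the baseline that the dynamic data structure is designed to beat (an illustration only, assuming points given as coordinate tuples):

```python
import numpy as np

def halfspace_report(points, a, b):
    """Report all points p with <a, p> <= b, i.e. inside the query
    half-space {x : <a, x> <= b}. Naive O(n) scan -- the structure in
    the paper answers such queries in time sublinear in n."""
    points = np.asarray(points)
    mask = points @ np.asarray(a) <= b
    return points[mask]

pts = [(0, 0, 0), (1, 2, 3), (5, 5, 5), (-1, 0, 2)]
print(halfspace_report(pts, a=(1, 1, 1), b=4))  # points with x + y + z <= 4
```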

18.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
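The two markers that separate best, most frequent words and sentence lengths, are straightforward to extract; the sketch below computes them per five-thousand-word block (the tokenizer and marker details are simplifications, not the paper's exact procedure):

```python
import re
from collections import Counter

BLOCK = 5000  # words per text block, following the paper's setup

def word_blocks(text):
    """Consecutive 5000-word blocks of a novel."""
    words = re.findall(r"\w+", text.lower())
    return [words[i:i + BLOCK] for i in range(0, len(words) - BLOCK + 1, BLOCK)]

def most_frequent_word_marker(text, vocab):
    """Relative frequency of each tracked frequent word, per block."""
    return [
        {w: Counter(block)[w] / BLOCK for w in vocab}
        for block in word_blocks(text)
    ]

def sentence_length_marker(text):
    """Sentence lengths (in words) across the text."""
    sentences = re.split(r"[.!?]+", text)
    return [len(s.split()) for s in sentences if s.strip()]
```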

19.
Changes and interrelations among computer usage, computer attitude, and skill transfer of elderly Japanese computer users were investigated over a one-year period. Each participant, aged 60 to 76 years, was provided with a touchscreen-based computer specialized for e-mail handling for 12 months. Participants' usage of the computer, mouse and/or keyboard, and computer attitudes were investigated. The results showed that the Liking factor of the computer attitude scale was a possible predictor of computer usage. The results suggested the existence of four different types of users' adaptation to computers, according to a combination of the Liking and Confidence dimensions of computer attitude.

20.
Although the top-down development paradigm has successfully been applied to master the complexity of large systems, it has not yet been accepted as a useful paradigm for fault-tolerant system design. This is mainly due to a problem that is sometimes referred to as the lazy programmer's paradox. The lazy programmer's paradox was already present and solved in top-down development methods for non-critical systems. However, the problem has reappeared in an even more serious variant for critical systems. A few toy examples concerning exception handling in an Ada-like language are used to explain and illustrate the paradox. One possible solution to the problem is to use a specification language in which one can express that certain behaviours of a system are preferred over others. This paper proposes deontic logic as such a specification language. Therefore, a short and rather informal introduction to deontic logic is included. A non-trivial example is included to illustrate how deontic logic can be used to solve the lazy programmer's paradox.

Supported by NWO/SION Project 612-316-022: Fault Tolerance: Paradigms, Models, Logics, Construction.

