Similar Documents
10 similar documents found (search time: 140 ms).
1.
Consider a binary string x_0 of Kolmogorov complexity K(x_0) ≈ n. The question is whether there exist two strings x_1 and x_2 such that the approximate equalities K(x_i | x_j) ≈ n and K(x_i | x_j, x_k) ≈ n hold for all 0 ≤ i, j, k ≤ 2 with i ≠ j, j ≠ k, i ≠ k. We prove that the answer is positive if we require the equalities to hold up to an additive term O(log K(x_0)). It becomes negative in the case of better accuracy, namely O(log n).
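Written out as a display (a direct transcription of the relations above, with K(· | ·) denoting conditional Kolmogorov complexity and ≈ holding up to the stated additive terms):

```latex
K(x_0) \approx n, \qquad K(x_i \mid x_j) \approx n, \qquad K(x_i \mid x_j, x_k) \approx n
\quad \text{for all } 0 \le i, j, k \le 2,\ i \ne j,\ j \ne k,\ i \ne k.
```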

2.
Summary. We investigate the valuedness of finite transducers in connection with their inner structure. We show: the valuedness of a finite-valued nondeterministic generalized sequential machine (NGSM) M with n states and output alphabet Δ is at most the maximum of (1 − 1/#Δ) · (2^k1 · k3)^n · n^n · #Δ^(n^3 · k4/3) and (1/#Δ) · (2^k2 · k3 · (1 + k4))^n · n^n, where k1 ≤ 6.25 and k2 ≤ 11.89 are constants and k3 ≥ 1 and k4 ≥ 0 are local structural parameters of M. There are two simple criteria which characterize the infinite valuedness of an NGSM. By these criteria, it is decidable in polynomial time whether or not an NGSM is infinite-valued. In both cases, #Δ > 1 and #Δ = 1, the above upper bound for the valuedness is almost optimal. By reduction, all results can be easily generalized to normalized finite transducers.
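In display form, the claimed upper bound on the valuedness reads (Δ is the output alphabet, #Δ its cardinality; a transcription of the statement above):

```latex
\max\!\left\{\left(1-\tfrac{1}{\#\Delta}\right)\left(2^{k_1}k_3\right)^{n} n^{n}\,\#\Delta^{\,n^{3}k_4/3},\ \ \tfrac{1}{\#\Delta}\left(2^{k_2}k_3(1+k_4)\right)^{n} n^{n}\right\}
```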

3.
Languages L_n = {1^x 2^(ix) : i, x ∈ ℕ, 1 ≤ i ≤ n} were used to show that, for each k, one-way non-sensing deterministic multihead finite automata (1-MFA) with k+1 heads are more powerful than such automata with k heads, even if we consider only 2-bounded languages (Chrobak). For k let f(k) be the maximal number n such that the language L_n can be recognized by a 1-MFA with k heads. We present a precise inductive formula for f(k). It may be shown that, for k ≥ 3, [formula not preserved in the source].
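To make the witness languages concrete, here is a small membership checker for L_n (an illustrative helper, not code from the paper; it assumes x ≥ 1):

```python
def in_L(w: str, n: int) -> bool:
    """Check whether w belongs to L_n = {1^x 2^(i*x) : i, x >= 1, 1 <= i <= n}."""
    x = len(w) - len(w.lstrip("1"))         # length of the leading block of 1s
    rest = w[x:]
    if x == 0 or set(rest) - {"2"}:         # must be 1^x followed only by 2s
        return False
    y = len(rest)                           # length of the block of 2s
    return y % x == 0 and 1 <= y // x <= n  # y = i*x for some 1 <= i <= n

assert in_L("1122222222", 5)   # x = 2, i = 4, so the word is in L_5
```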

4.
Summary. The k-th threshold function, T_k^n, is defined as T_k^n(x_1, ..., x_n) = 1 if Σ_{i=1}^n x_i ≥ k, and 0 otherwise, where x_i ∈ {0,1} and the summation is arithmetic. We prove that any monotone network computing T_3^n(x_1, ..., x_n) contains at least 2.5n − 5.5 gates. This research was supported by the Science and Engineering Research Council of Great Britain, UK.
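A reference implementation of T_k^n may help; the first version uses the arithmetic definition, the second an AND/OR formula, since the 2.5n − 5.5 lower bound concerns networks built from monotone gates (function names are illustrative, not from the paper):

```python
from itertools import combinations

def threshold(k: int, xs: list[int]) -> int:
    """T_k^n by the arithmetic definition: 1 iff at least k of the 0/1 inputs are 1."""
    return int(sum(xs) >= k)

def threshold_monotone(k: int, xs: list[int]) -> int:
    """The same function as a monotone formula: an OR over all k-subsets of the
    AND of their inputs. Far larger than optimal monotone networks, but it uses
    only AND/OR, the gate types to which the 2.5n - 5.5 bound applies."""
    return int(any(all(xs[i] for i in c) for c in combinations(range(len(xs)), k)))

assert threshold(3, [1, 0, 1, 1, 0]) == threshold_monotone(3, [1, 0, 1, 1, 0]) == 1
```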

5.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem φ(λ) = min{cᵀx | Ax = b + λb̄, x ≥ 0}. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function φ(λ). As a consequence, the optimality intervals form a partition of the closed interval {λ : |φ(λ)| < ∞}. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of φ(λ) is polynomial in terms of the size of the parametric problem, we show that the latter can be solved in polynomial time. This research was partially funded by the United States Navy, Office of Naval Research, under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
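A minimal sketch of the object φ(λ), using an off-the-shelf LP solver as the subroutine the algorithm requires (the problem data are invented; the paper's method locates breakpoints exactly rather than by grid sampling):

```python
import numpy as np
from scipy.optimize import linprog

# phi(lam) = min { c^T x : A x = b + lam * b_bar, x >= 0 }
c     = np.array([1.0, 2.0, 0.5])
A     = np.array([[1.0, 1.0, 1.0]])
b     = np.array([1.0])
b_bar = np.array([1.0])

def phi(lam: float) -> float:
    res = linprog(c, A_eq=A, b_eq=b + lam * b_bar, bounds=[(0, None)] * len(c))
    return res.fun if res.success else np.inf   # infeasible => phi(lam) = +inf

# phi is piecewise linear and convex where finite; kinks in the sampled
# values mark its breakpoints.
for lam in np.linspace(-2.0, 2.0, 9):
    print(f"{lam:+.2f}  {phi(lam):.4f}")
```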

6.
Summary. Richardson splitting applied to a consistent system of linear equations Cx = b with a singular matrix C yields an iterative method x^(k+1) = Ax^k + b, where A has the eigenvalue one. It is known that each sequence of iterates converges to a vector x* = x*(x^0) if and only if A is semi-convergent. In order to enclose such vectors we consider the corresponding interval iteration [x]^(k+1) = [A][x]^k + [b] with ρ(|[A]|) = 1, where |[A]| denotes the absolute value of the interval matrix [A]. If |[A]| is irreducible we derive a necessary and sufficient criterion for the existence of a limit [x]* = [x]*([x]^0) of each sequence of interval iterates. We describe the shape of [x]* and give a connection between the convergence of the interval iterates and the convergence of the powers of [A]. Dedicated to Professor G. Maeß on the occasion of his 65th birthday.
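A toy sketch of the underlying point iteration (not the interval version; the matrix and splitting parameter are invented):

```python
import numpy as np

# Consistent singular system C x = b. Richardson splitting: x <- x + omega*(b - C x),
# i.e. x^{k+1} = A x^k + omega*b with A = I - omega*C; A has eigenvalue 1 since C
# is singular, and for this data A is semi-convergent (eigenvalues 1 and 0.4).
C = np.array([[ 2.0, -1.0],
              [-2.0,  1.0]])          # rank 1, hence singular
b = np.array([1.0, -1.0])             # lies in the range of C => consistent
omega = 0.2
A = np.eye(2) - omega * C

x = np.zeros(2)                       # the limit depends on this starting vector
for k in range(200):
    x = A @ x + omega * b
print(x, C @ x - b)                   # residual ~ 0: x solves C x = b
```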

7.
A review of the methods for global optimization reveals that most methods have been developed for unconstrained problems. They need to be extended to general constrained problems because most of the engineering applications have constraints. Some of the methods can be easily extended while others need further work. It is also possible to transform a constrained problem into an unconstrained one by using penalty or augmented Lagrangian methods and solve the problem that way. Some of the global optimization methods find all the local minimum points while others find only a few of them. In any case, all the methods require a very large number of calculations; therefore, the computational effort to obtain a global solution is generally substantial. The methods for global optimization can be divided into two broad categories: deterministic and stochastic. Some deterministic methods are based on certain assumptions about the cost function that are not easy to check; these methods are not very useful, since they are not applicable to general problems. Other deterministic methods are based on certain heuristics which may not lead to the true global solution. Several stochastic methods have been developed as variations of pure random search (a minimal sketch follows the notation list below). Some methods are useful only for discrete optimization problems while others can be used for both discrete and continuous problems. The main characteristics of each method are identified and discussed. The selection of a method for a particular application depends on several attributes, such as the types of design variables, whether or not all local minima are desired, and the availability of gradients of all the functions.

Notation:
- number of equality constraints
- ()ᵀ: transpose of a vector
- A: a hypercubic cell in clustering methods
- distance between two adjacent mesh points
- probability that a uniform sample of size N contains at least one point in a subset A of S
- A(v, x): aspiration level function
- A_ε: the set of points with cost function values less than f(x_G*) + ε; same as A_f(ε)
- A_f(ε): the set of points at which the cost function value is within ε of f(x_G*)
- A_φ(ε): the set of points x with φ[f(x)] smaller than ε
- A_N: the set of N random points
- A_q: the set of sample points with cost function value at most f_q
- α_Q: the contraction coefficient; −1 ≤ α_Q ≤ 0
- α_E: the expansion coefficient; α_E > 1
- α_R: the reflection coefficient; 0 < α_R ≤ 1
- A_x(ε): the set of points that are within distance ε of x_G*
- D: diagonal form of the Hessian matrix
- det(·): determinant of a matrix
- d_j: a monotonic function of the number of failed local minimizations
- dt: infinitesimal change in time
- dx: infinitesimal change in design
- ε: a small positive constant
- ε(t): a real function called the noise coefficient
- ε_0: initial value for ε(t)
- exp(·): the exponential function
- f^(c): the record; smallest cost function value over X^(c)
- φ[f(x)]: functional for calculating the volume fraction of a subset
- second-order approximation to f(x)
- f(x): the cost function
- an estimate of the upper bound of the global minimum
- f_E: the cost function value at x_E
- f_L: the cost function value at x_L
- f_opt: the current best minimum function value
- f_P: the cost function value at x_P
- f_Q: the cost function value at x_Q
- f_q: a function value used to reduce the random sample
- f_R: the cost function value at x_R
- f_S: the cost function value at x_S
- f_TF^min: a common minimum cost function value for several trajectories
- f_TF^opt: the best current minimum value found so far for f_TF^min
- f_W: the cost function value at x_W
- G: minimum number of points in a cell (A) to be considered full
- Γ(·): the gamma function
- a factor used to scale the global optimum cost in the zooming method
- minimum distance assumed to exist between two local minimum points
- g_i(x): constraints of the optimization problem
- H: the size of the tabu list
- H(x*): the Hessian matrix of the cost function at x*
- h_j: half side length of a hypercube
- h_m: minimum half side length of hypercubes in one row
- I: the identity matrix
- ILIM: a limit on the number of trials before the temperature is reduced
- J: the set of active constraints
- K: estimate of the total number of local minima
- k: iteration counter
- the number of times a clustering algorithm is executed
- L: Lipschitz constant, defined in Section 2
- L: the number of local searches performed
- λ_i: the corresponding pole strengths
- log(·): the natural logarithm
- LS: local search procedure
- M: number of local minimum points found in L searches
- m: total number of constraints
- m(t): mass of a particle as a function of time
- m(·): the Lebesgue measure of a set
- average cost value for a number of random sample points in S
- N: the number of sample points taken from a uniform random distribution
- n: number of design variables
- n(t): nonconservative resistance forces
- n_c: number of cells; S is divided into n_c cells
- NT: number of trajectories
- π: the constant pi (3.1415926)
- P_i(j): hypersphere approximating the j-th cluster at stage i
- p(x^(i)): Boltzmann-Gibbs distribution; the probability of finding the system in a particular configuration
- p_g: a parameter corresponding to each reduced sample point, defined in (36)
- Q: an orthogonal matrix used to diagonalize the Hessian matrix
- the relative size of the i-th region of attraction, i = 1, ..., K
- r_i(j): radius of the j-th hypersphere at stage i
- R_{x*}: region of attraction of a local minimum x*
- r_j: radius of a hypersphere
- r: a critical distance; determines whether a point is linked to a cluster
- Rⁿ: the set of n-tuples of real numbers
- a hyperrectangle set used to approximate S
- S: the constraint set
- a user-supplied parameter used to determine r
- s: the number of failed local minimizations
- T: the tabu list
- t: time
- T(x): the tunneling function
- T_c(x): the constrained tunneling function
- T_i: the temperature of a system at configuration i
- TLIMIT: a lower limit for the temperature
- TR: a factor between 0 and 1 used to reduce the temperature
- u(x): a unimodal function
- V(x): the set of all feasible moves at the current design
- v(x): an oscillating small perturbation
- V(y^(i)): Voronoi cell of the code point y^(i)
- v^(−1): an inverse move
- v_k: a move; the change from the previous to the current design
- w(t): an n-dimensional standard Wiener process
- x: design variable vector of dimension n
- x#: a movable pole used in the tunneling method
- x(0): a starting point for a local search procedure
- X^(c): a sequence of feasible points {x(1), x(2), ..., x(c)}
- x(t): design vector as a function of time
- X*: the set of all local minimum points
- x*: a local minimum point of f(x)
- x*(i): poles used in the tunneling method
- x_G*: a global minimum point of f(x)
- transformed design space
- dx/dt: the velocity vector of the particle as a function of time
- d²x/dt²: acceleration vector of the particle as a function of time
- x_C: centroid of the simplex excluding x_L
- x_c: a pole point used in the tunneling method
- x_E: an expansion point of x_R along the direction from x_C to x_R
- x_L: the best point of a simplex
- x_P: a new trial point
- x_Q: a contraction point
- x_R: a reflection point; reflection of x_W on x_C
- x_S: the second worst point of a simplex
- x_W: the worst point of a simplex
- the reduced sample point with the smallest function value of a full cell
- Y: the set of code points
- y^(i): a code point; a point that represents all the points of the i-th cell
- z: a random number uniformly distributed in (0, 1)
- Z^(c): the set of points x where f^(c) is smaller than f(x)
- [·]⁺: max(0, ·)
- |·|: absolute value
- ‖·‖: the Euclidean norm
- ∇f[x(t)]: the gradient of the cost function
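As a concrete anchor for the stochastic category mentioned above, a minimal pure-random-search sketch (an invented example, not a method evaluated in the paper):

```python
import numpy as np

def pure_random_search(f, lower, upper, N=10_000, seed=0):
    """Baseline stochastic global optimizer: sample N points uniformly in the
    box [lower, upper] and keep the best. Most stochastic methods in the
    review (multistart, clustering, simulated annealing) refine this idea."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower), np.asarray(upper)
    X = rng.uniform(lower, upper, size=(N, lower.size))
    values = np.apply_along_axis(f, 1, X)
    best = values.argmin()
    return X[best], values[best]

# Example: a multimodal cost function with many local minima (Rastrigin-like).
f = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))
x_best, f_best = pure_random_search(f, [-5.12, -5.12], [5.12, 5.12])
print(x_best, f_best)
```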

8.
For the equation x′(t) = λx(t)(1 − (1/τ) ∫_{t−σ−τ}^{t−σ} x(u) du), with λ > 0, σ > 0, τ > 0, conditions for the stability of a nonzero stationary solution under small perturbations are determined.
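One way to probe such a stability statement numerically is a forward-Euler scheme with a stored history. The sketch below uses the parameter names from the reconstruction above (λ, σ, τ), which may differ from the paper's original letters:

```python
import numpy as np

# Reconstructed model: x'(t) = lam*x(t)*(1 - (1/tau) * integral over [t-sig-tau, t-sig] of x)
lam, sig, tau, dt = 0.5, 0.3, 1.0, 0.001
steps = int(50 / dt)
delay = int((sig + tau) / dt)        # history length: sig + tau time units
x = np.empty(delay + steps)
x[:delay + 1] = 1.05                 # constant history: small perturbation of x = 1

for k in range(delay, delay + steps - 1):
    window = x[k - delay : k - int(sig / dt)]   # samples of x on [t-sig-tau, t-sig]
    x[k + 1] = x[k] + dt * lam * x[k] * (1 - window.sum() * dt / tau)

print(x[-1])  # drifts back toward the stationary value 1 when it is stable
```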

9.
Recently, Yamashita and Fukushima [11] established an interesting quadratic convergence result for the Levenberg-Marquardt method without the nonsingularity assumption. This paper extends the result of Yamashita and Fukushima by using μ_k = ‖F(x_k)‖^δ, where δ ∈ [1,2], instead of μ_k = ‖F(x_k)‖² as the Levenberg-Marquardt parameter. If ‖F(x)‖ provides a local error bound for the system of nonlinear equations F(x) = 0, it is shown that the sequence {x_k} generated by the new method converges to a solution quadratically, which is stronger than dist(x_k, X*) → 0 given by Yamashita and Fukushima. Numerical results show that the method performs well for singular problems.
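A bare-bones sketch of the modified Levenberg-Marquardt iteration (generic notation; the paper's globalization safeguards are omitted and the singular test problem is invented):

```python
import numpy as np

def lm_step(F, J, x, delta=1.5):
    """One Levenberg-Marquardt step with mu_k = ||F(x_k)||**delta, delta in [1,2]."""
    Fx, Jx = F(x), J(x)
    mu = np.linalg.norm(Fx) ** delta
    # Solve (J^T J + mu I) d = -J^T F for the LM direction d.
    d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
    return x + d

# Singular test problem: F(x) = (x1^2, x1*x2) has the nonisolated solution
# set {x1 = 0}, where the Jacobian is singular.
F = lambda x: np.array([x[0]**2, x[0] * x[1]])
J = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
x = np.array([0.5, 1.0])
for _ in range(10):
    x = lm_step(F, J, x)
print(x, np.linalg.norm(F(x)))   # x1 is driven rapidly to 0
```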

10.
Many recent papers have dealt with the application of feedforward neural networks in financial data processing. This powerful neural model can implement very complex nonlinear mappings, but when outputs are not available or clustering of patterns is required, the use of unsupervised models such as self-organizing maps is more suitable. The present work shows the capabilities of self-organizing feature maps for the analysis and representation of financial data and for aid in financial decision-making. For this purpose, we analyse the Spanish banking crisis of 1977–1985 and the Spanish economic situation in 1990 and 1991, making use of this unsupervised model. Emphasis is placed on the analysis of the synaptic weights, fundamental for delimiting regions on the map, such as bankrupt or solvent regions, where similar companies are clustered. The time evolution of the companies and other important conclusions can be drawn from the resulting maps.

Characters and symbols used and their meaning:
- n_x: x dimension of the neuron grid, in number of neurons
- n_y: y dimension of the neuron grid, in number of neurons
- n: dimension of the input vector; number of input variables
- (i, j): indices of a neuron on the map
- k: index of the input variables
- w_ijk: synaptic weight that connects the k-th input with the (i, j) neuron on the map
- W_ij: weight vector of the (i, j) neuron
- x_k: k-th component of the input vector
- X: input vector
- α(t): learning rate
- α_0: starting learning rate
- α_f: final learning rate
- R(t): neighbourhood radius
- R_0: starting neighbourhood radius
- R_f: final neighbourhood radius
- t: iteration counter
- t_rf: number of iterations until reaching R_f
- t_f: number of iterations until reaching α_f
- h(·): lateral interaction function
- σ: standard deviation
- ∀: for every
- d(x, y): distance between the vectors x and y
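A compact sketch of the SOM training loop that the notation above describes (illustrative linear schedules for α(t) and R(t), not the paper's exact ones):

```python
import numpy as np

def train_som(X, nx=10, ny=10, t_f=5000, alpha0=0.5, alphaf=0.01, R0=5.0, Rf=1.0, seed=0):
    """Self-organizing map: W[i, j] holds the weight vector W_ij of neuron (i, j)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.random((nx, ny, n))
    grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij"), axis=-1)
    for t in range(t_f):
        frac = t / t_f
        alpha = alpha0 + (alphaf - alpha0) * frac          # learning rate alpha(t)
        R = R0 + (Rf - R0) * frac                          # neighbourhood radius R(t)
        x = X[rng.integers(len(X))]                        # one input vector
        d = np.linalg.norm(W - x, axis=-1)                 # distances d(W_ij, x)
        win = np.unravel_index(d.argmin(), d.shape)        # best-matching neuron
        g = np.linalg.norm(grid - np.array(win), axis=-1)  # grid distance to winner
        h = np.exp(-(g / R) ** 2)[..., None]               # lateral interaction h(.)
        W += alpha * h * (x - W)                           # SOM weight update
    return W

# Toy usage: map 4 financial ratios of 200 hypothetical companies onto a 10x10 grid.
X = np.random.default_rng(1).random((200, 4))
W = train_som(X)
```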
