Similar Literature
20 similar documents found (search time: 46 ms)
1.
Summary: The k-th threshold function T_k^n is defined by T_k^n(x_1,...,x_n) = 1 if and only if x_1 + ... + x_n ≥ k, where x_i ∈ {0,1} and the summation is arithmetic. We prove that any monotone network computing T_3^n(x_1,...,x_n) contains at least 2.5n - 5.5 gates. This research was supported by the Science and Engineering Research Council of Great Britain, UK.
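As a quick illustration of the definition above (of the function itself, not of the monotone-network lower bound), a minimal Python sketch:

```python
def threshold(k, xs):
    """k-th threshold function T_k^n: 1 iff at least k of the 0/1 inputs are 1."""
    return 1 if sum(xs) >= k else 0

# Example: T_3^5 is 1 on (1, 0, 1, 1, 0) because three inputs are set, and 0 otherwise.
print(threshold(3, [1, 0, 1, 1, 0]))  # -> 1
print(threshold(3, [1, 0, 0, 1, 0]))  # -> 0
```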

2.
We prove that the weight function wt on a set of messages uniquely determines a linear code of dimension k up to equivalence. We propose a natural way to extend the rth generalized Hamming weight, that is, a function on r-subspaces of a code C, to a function on a larger domain. Using this, we show that, for each linear code C and any integer r ≤ k = dim C, a linear code exists whose weight distribution corresponds to a part of the generalized weight spectrum of C, from the rth weights to the kth. In particular, the minimum distance of this code is proportional to the rth generalized weight of C. Translated from Problemy Peredachi Informatsii, No. 2, 2005, pp. 26–41. Original Russian Text Copyright © 2005 by Nogin. Supported in part by the Russian Foundation for Basic Research, project no. 02-01-01041, and INTAS, grant no. 00-738.
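For context, the rth generalized Hamming weight d_r(C) is the smallest support size of an r-dimensional subcode of C (d_1 is the minimum distance). A brute-force Python sketch for small binary codes, illustrative only and not the paper's construction; the [7,4] Hamming generator matrix below is just a convenient example:

```python
from itertools import combinations

def gf2_span(vectors, length):
    """All GF(2) linear combinations of the given 0/1 tuples."""
    span = {(0,) * length}
    for v in vectors:
        span |= {tuple(a ^ b for a, b in zip(s, v)) for s in span}
    return span

def generalized_hamming_weight(generator, r):
    """d_r(C): minimum support size over all r-dimensional subcodes (brute force)."""
    length = len(generator[0])
    nonzero = [c for c in gf2_span(generator, length) if any(c)]
    best = length
    for subset in combinations(nonzero, r):
        sub = gf2_span(subset, length)
        if len(sub) != 2 ** r:          # chosen words were not linearly independent
            continue
        support = {i for v in sub for i, bit in enumerate(v) if bit}
        best = min(best, len(support))
    return best

# [7,4] binary Hamming code; d_1 equals its minimum distance, 3.
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]
print([generalized_hamming_weight(G, r) for r in (1, 2, 3, 4)])
```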

3.
We generalize the recent relative loss bounds for on-line algorithms, in which the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This models situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single-segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however, the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length ℓ, we can still bound the additional loss of our algorithm over the best partition. For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence; the additional loss is then expressed in terms of L, the loss of the best partition with k+1 segments. Our algorithms for tracking the predictions of the best expert are simple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert and spend O(1) time per weight in each trial.
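A simplified fixed-share-style sketch of such a forecaster: one multiplicative update per expert weight, followed by a small redistribution of weight so the algorithm can recover when the best expert changes between segments. The learning rate, sharing rate and square loss below are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def fixed_share_sketch(expert_preds, outcomes, eta=2.0, alpha=0.05):
    """Weighted-average forecaster with a share step (simplified sketch).

    expert_preds: list of trials, each a list of n expert predictions in [0, 1].
    outcomes:     list of true outcomes in [0, 1].
    Returns the algorithm's predictions, one per trial.
    """
    n = len(expert_preds[0])
    weights = [1.0 / n] * n
    predictions = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        predictions.append(sum(w * p for w, p in zip(weights, preds)) / total)
        # Loss update: one multiplicative update per expert weight (O(1) per weight).
        updated = [w * math.exp(-eta * (p - y) ** 2) for w, p in zip(weights, preds)]
        # Share step: mix a fraction of the total mass back to every expert.
        pool = sum(updated)
        weights = [(1 - alpha) * w + alpha * pool / n for w in updated]
    return predictions
```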

4.
This paper presents a new approach to constructing a small n-column 0,1-matrix for two given integers n and k (k < n), such that every k-column projection contains all 2^k possible row vectors, i.e., is surjective onto {0,1}^k. An upper bound on the number of rows of the matrix is given. This approach has considerable advantages for small k and practical sizes of n. It can be applied to the test generation of VLSI circuits, the design of fault-tolerant systems, and other fields.
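The covering property itself is easy to state in code; a small Python check (not the paper's construction), with a trivial hypothetical matrix M as input:

```python
from itertools import combinations, product

def covers_all_projections(matrix, k):
    """True iff every k-column projection of the 0/1 matrix contains all 2^k row patterns."""
    n = len(matrix[0])
    for cols in combinations(range(n), k):
        seen = {tuple(row[c] for c in cols) for row in matrix}
        if len(seen) < 2 ** k:
            return False
    return True

# All 2^3 possible rows over 3 columns trivially cover every k = 2 projection.
M = [list(bits) for bits in product((0, 1), repeat=3)]
print(covers_all_projections(M, 2))  # -> True
```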

5.
In this paper we consider the problem of L^1 sensitivity minimization for linear plants with commensurate input delays. We describe a procedure for computing the minimum performance, and we characterize optimal solutions. The computations involve solving a one-parameter family of finite-dimensional linear programs. Explicit solutions are presented for important special cases.
Notation:
X* - dual space of a normed linear space X
all elements in S with norm 1
S^⊥ - the annihilator subspace of S (in X*)
^⊥S - the pre-annihilator subspace of S (in X)
BV(X) - functions of bounded variation on X
C_0(X) - continuous functions on a locally compact space X such that, for all ε > 0, the set {x : |f(x)| ≥ ε} is compact
C^N(a, b) - vectors of continuous functions on (a, b)
The authors acknowledge support from the Army Research Office, Center for Intelligent Control, under grant DAAL03-86-K-0171, and the National Science Foundation, under grant 8810178-ECS.

6.
Given a nonempty set F of functions determined by known nodes a = x_0 < ... < x_n = b and known compact intervals w_i, i = 0,...,n, and d_i, i = 1,...,n, the main aim of the present paper is to show that the lower and upper envelope functions of F exist, are in F, and are easily computable. This is achieved essentially by giving simple formulas for computing two vectors whose interval hull is the interval hull of the tolerance polyhedron T; the envelopes exist iff T ≠ ∅ iff F ≠ ∅. They can serve for solving the following problem: assume that φ is a monotonically increasing functional on the set of Lipschitz-continuous functions f : [a,b] → R (e.g., φ(f) = ∫_a^b f(x) dx, φ(f) = min f([a,b]), or φ(f) = max f([a,b])), and that the available information about a function g : [a,b] → R is "g ∈ F"; then the problem is to find the best possible interval inclusion of φ(g). Obviously, this inclusion is given by the interval whose endpoints are the values of φ at the lower and upper envelopes. Complete formulas for computing this interval are given for the case φ(f) = ∫_a^b f(x) dx.
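As an illustration of the monotonicity argument in the integral case (not the paper's formulas): applying the functional to the lower and upper envelopes yields the enclosure. A minimal sketch for piecewise-linear envelopes over hypothetical nodes:

```python
def integral_enclosure(xs, lower_vals, upper_vals):
    """Enclosure of the integral of any function squeezed between piecewise-linear
    lower/upper envelopes given by their node values (trapezoid rule is exact here)."""
    def trapz(vals):
        return sum((xs[i + 1] - xs[i]) * (vals[i] + vals[i + 1]) / 2
                   for i in range(len(xs) - 1))
    return trapz(lower_vals), trapz(upper_vals)

# Any g between the envelopes has its integral inside the returned interval.
print(integral_enclosure([0.0, 1.0, 2.0], [0.0, 0.5, 0.0], [0.2, 1.0, 0.4]))
```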

7.
Languages L_n = {1^x 2^(ix) : i, x ∈ N, 1 ≤ i ≤ n} were used to show that, for each k, one-way non-sensing deterministic finite automata (1-MFA) with k+1 heads are more powerful than such automata with k heads, even if we consider only 2-bounded languages (Chrobak). For each k, let f(k) be the maximal number n such that the language L_n can be recognized by a 1-MFA with k heads. We present a precise inductive formula for f(k); an explicit evaluation for k ≥ 3 is also given.

8.
In many applications, we need to find a minimum cost partition of a network separating a given pair of nodes. A classical example is the Max-Flow Min-Cut Theorem, where the cost of the partition is defined to be the sum of capacities of arcs connecting the two parts. Other similar concepts, such as the minimum weighted sparsest cut and the flux cut, have also been introduced. There is always a cost associated with a cut, and we always seek the min-cost cut separating a given pair of nodes. A natural generalization from the separation of a given pair is to find all minimum cost cuts separating all pairs of nodes, with arbitrary costs associated with all 2^(n-1) - 1 cuts. In the present paper, we show that n - 1 minimum cost cuts are always sufficient to separate all pairs of nodes. A further generalization is to consider k-way partitions rather than two-way partitions. An interesting relationship exists between k-way partitions, the multicommodity flow problem, and the minimum weighted sparsest cut. Namely, if the saturated arcs in a multicommodity flow problem form a k-way partition (k ≤ 4), then the k-way partition contains a two-way partition, and this two-way partition is the minimum weight sparsest cut. This work is supported in part by the NSF under Grant MIP-8700767 and the micro program under Grants 506205 and 506215, Intergraph, and Data General.
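For the classical two-terminal special case mentioned first (Max-Flow Min-Cut), a self-contained Edmonds-Karp sketch computes the minimum cut value; this is illustrative only, not the paper's n - 1 cut construction:

```python
from collections import deque

def min_cut_value(n, edges, s, t):
    """Minimum s-t cut value of an undirected graph, via Edmonds-Karp max flow.
    edges: (u, v, capacity) triples over nodes 0..n-1."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:            # no augmenting path left: flow value = min cut
            return flow
        bottleneck, v = float("inf"), t
        while v != s:                  # find the bottleneck capacity on the path
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                  # push the flow and update residual capacities
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Small example: the cheapest way to disconnect node 0 from node 3 costs 3.
print(min_cut_value(4, [(0, 1, 2), (0, 2, 1), (1, 3, 1), (2, 3, 2), (1, 2, 1)], 0, 3))
```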

9.
Here we investigate the problem of transforming a nonlinear system on the torus T^n, by using global feedback and global changes of coordinates, into an invariant system (i.e., a control system whose f and g_i are (left and right) invariant vector fields when T^n is considered as a Lie group). We provide a complete answer when n = 2, and give a sufficient condition and necessary conditions in the more general case.

10.
Conclusion: The obvious deficiency of the method (1.3), (1.9) is the possible difficulty of the operation involved. In connection with this, one can note that all the statements given above remain valid if the number in question is replaced by some positive lower bound of |f(t_k, x)| on the relevant set. In computational methods, the presence of the Lipschitz constant is considered a deficiency. In connection with this, we note that the Lipschitz constant L can be replaced by any of its upper estimates; for example, for a differentiable function f(z) one can take an upper bound of |f'(z)|. Translated from Kibernetika, No. 2, pp. 71–74, March–April, 1987.
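A one-line numerical illustration of that last remark, with a hypothetical function not taken from the paper:

```python
import math

# For a differentiable f, any bound on |f'| is a valid Lipschitz constant:
# f(z) = sin(3z) has |f'(z)| = 3|cos(3z)| <= 3, so L = 3 works on any interval.
f = lambda z: math.sin(3 * z)
L = 3.0
z1, z2 = 0.4, 1.1
assert abs(f(z1) - f(z2)) <= L * abs(z1 - z2)
```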

11.
A population dynamics model with distributed delays was considered. For this model, with a uniform distribution of delays and a_n = 0, nonnegativity and convexity of the sequence a_k (0 ≤ k ≤ n) was shown to be a sufficient stability condition. Therefore, there is no need to constrain the reproduction rate or the mean delay.

12.
Tools for computational differentiation transform a program that computes a numerical function F(x) into a related program that computes F'(x) (the derivative of F). This paper describes how techniques similar to those used in computational-differentiation tools can be used to implement other program transformations; in particular, a variety of transformations for computational divided differencing. The specific technical contributions of the paper are as follows:
– It presents a program transformation that, given a numerical function F(x) defined by a program, creates a program that computes F[x_0, x_1], the first divided difference of F(x), where F[x_0, x_1] = (F(x_0) - F(x_1)) / (x_0 - x_1) for x_0 ≠ x_1.
– It shows how computational first divided differencing generalizes computational differentiation.
– It presents a second program transformation that permits the creation of higher-order divided differences of a numerical function defined by a program.
– It shows how to extend these techniques to handle functions of several variables.
The paper also discusses how computational divided-differencing techniques could lead to faster and/or more robust programs in scientific and graphics applications. Finally, the paper describes how computational divided differencing relates to the numerical-finite-differencing techniques that motivated Robert Paige's work on finite differencing of set-valued expressions in SETL programs.
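The flavor of the first transformation can be sketched with a small value class that carries (F(x_0), F(x_1), F[x_0, x_1]) through a program, using the product rule (F·G)[x_0, x_1] = F[x_0, x_1]·G(x_0) + F(x_1)·G[x_0, x_1]. This is an illustration of the idea, not the paper's transformation tool:

```python
from dataclasses import dataclass

@dataclass
class DD:
    """Carries (F(x0), F(x1), F[x0, x1]) through arithmetic operations."""
    v0: float   # F(x0)
    v1: float   # F(x1)
    dd: float   # F[x0, x1]

    @staticmethod
    def var(x0, x1):
        return DD(x0, x1, 1.0)          # the identity function x

    @staticmethod
    def const(c):
        return DD(c, c, 0.0)            # constants have zero divided difference

    def __add__(self, other):
        return DD(self.v0 + other.v0, self.v1 + other.v1, self.dd + other.dd)

    def __sub__(self, other):
        return DD(self.v0 - other.v0, self.v1 - other.v1, self.dd - other.dd)

    def __mul__(self, other):
        # (F*G)[x0, x1] = F[x0, x1]*G(x0) + F(x1)*G[x0, x1]
        return DD(self.v0 * other.v0, self.v1 * other.v1,
                  self.dd * other.v0 + self.v1 * other.dd)

def f(x, two):
    # the same straight-line program runs on plain floats and on DD values
    return x * x * x - two * x

x0, x1 = 1.0000001, 1.0
print(f(DD.var(x0, x1), DD.const(2.0)).dd)    # propagated divided difference
print((f(x0, 2.0) - f(x1, 2.0)) / (x0 - x1))  # naive formula, prone to cancellation
```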

13.
Do Dang-Khoa, Reliable Computing, 2004, 10(6): 489-500
The spigot approach used in the previous paper (Reliable Computing 7 (3) (2001), pp. 247–273) for root computation is now applied to natural logarithms. The logarithm ln Q with rational Q > 1 is decomposed into a sum of two addends k_1·ln Q_1 + k_2·ln Q_2 with integer k_1, k_2; each addend is then computed by the spigot algorithm, and the summation is carried out using integer arithmetic. The whole procedure is not literally a spigot algorithm, but the advantages are the same: only integer arithmetic is needed, while arbitrary accuracy is achieved and absolute reliability is guaranteed. The concrete procedure, based on a decomposition parameterized by natural numbers p, q (p, q ≥ 1, p < q), is simple and ready for implementation. In addition to the cited paper, means for determining an upper bound for the biggest integer occurring in the process of spigot computing are now provided, which is essential for the reliability of machine computation.
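The "integer arithmetic only" flavor can be sketched with a scaled-integer series for ln(q/p); this uses a generic atanh expansion, not the paper's decomposition or its bound on intermediate integers:

```python
def ln_ratio_scaled(p, q, digits):
    """Integer-only approximation of ln(q/p), returned scaled by 10**digits.
    Uses ln(q/p) = 2*atanh(z) = 2*sum_{j>=0} z**(2j+1)/(2j+1) with z = (q-p)/(q+p)."""
    scale = 10 ** (digits + 5)          # a few guard digits against truncation
    num, den = q - p, q + p             # z = num/den, 0 < z < 1 for 0 < p < q
    term = 2 * scale * num // den       # 2*z, scaled
    total, j = 0, 0
    while term:
        total += term // (2 * j + 1)
        term = term * num * num // (den * den)
        j += 1
    return total // 10 ** 5

# ln 2 = 0.6931471805599453... ; prints the first 30 digits as one integer.
print(ln_ratio_scaled(1, 2, 30))
```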

14.
Let F_q be a finite field with q elements and f a rational function over F_q. No polynomial-time deterministic algorithm is known for the problem of deciding whether f induces a permutation of F_q. The problem has been shown to be in co-R ⊆ co-NP, and in this paper we prove that it is in R ⊆ NP and hence in ZPP, and that it is deterministic polynomial-time reducible to the problem of factoring univariate polynomials over F_q. Besides the problem of recognizing prime numbers, it seems to be the only natural decision problem in ZPP not known to be in P. A deterministic test and a simple probabilistic test for permutation functions are also presented.
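For very small fields the permutation property can, of course, be decided by brute force; a tiny Python sketch over a prime field Z/pZ (not one of the paper's tests):

```python
def is_permutation_poly(coeffs, p):
    """Brute-force check (feasible only for small prime p) that the polynomial with
    the given coefficients (lowest degree first) permutes Z/pZ."""
    def evaluate(x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule mod p
            acc = (acc * x + c) % p
        return acc
    return len({evaluate(x) for x in range(p)}) == p

# x^3 permutes F_5 because gcd(3, 5 - 1) = 1; x^2 does not (it is 2-to-1 off zero).
print(is_permutation_poly([0, 0, 0, 1], 5))  # x^3 -> True
print(is_permutation_poly([0, 0, 1], 5))     # x^2 -> False
```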

15.
The analytic structure of rational interpolants (R.I.) f(z) built from randomly perturbed data is explored; the interpolation nodes x_j, j = 1,...,M, are real points where the function f takes the prescribed data values. It is assumed that the data are randomly perturbed values of a rational function of degrees (m/n), where m and n are the degrees of the numerator and denominator, respectively. Much attention is paid to the R.I. family of degrees (m-1)/(n+1), in the small-stochasticity régime. The main result is that the additional zero and pole are located near the root of the same random polynomial, called the Froissart Polynomial (F.P.). Under a Gaussian hypothesis on the noise, the random real root of the F.P. is distributed according to a Cauchy-Lorentz law, with parameters such that the integrated probability over the interpolation interval [x_1, x_M] is always larger than 1/2; in two cases studied in detail, it reaches 2/3 in one case and almost 3/4 in the other. For the families of degrees (m+k)/(n+k), numerical explorations point to similar phenomena; inspection shows that, in the mean, the localization occurs in the complex and/or real vicinity of the interpolation interval.

16.
There is a great deal of research aimed toward the development of temporal logics and model-checking algorithms that can be used to verify properties of systems. In this paper, we present a methodology and supporting tools that allow researchers and practitioners to automatically generate model-checking algorithms for temporal logics from algebraic specifications. These tools are extensions of algebraic compiler generation tools and are used to specify model checkers as mappings of the form L_s → L_t, where L_s is a temporal logic source language and L_t is a target language representing sets of states of a model M, such that a formula is mapped to the set of states of M in which it holds. The algebraic specifications for a model checker define the logic source language, the target language representing sets of states in a model, and the embedding of the source language into the target language. Since users can modify and extend existing specifications or write original specifications, new model-checking algorithms for new temporal logics can be easily and quickly developed; this allows the user more time to experiment with the logic and its model-checking algorithm instead of developing its implementation. Here we show how this algebraic framework can be used to specify model-checking algorithms for CTL, a real-time CTL, CTL*, and a custom extension called CTL^e that makes use of propositions labeling the edges as well as the nodes of a model. We also show how the target language can be changed to a language of binary decision diagrams to generate symbolic model checkers from algebraic specifications.
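As background on what such a generated checker computes: the set-based semantics maps each formula to the set of model states satisfying it. A minimal explicit-state sketch for the CTL operators EX and E[p U q] over a hypothetical four-state model (hand-written, not generated from algebraic specifications):

```python
def check_EX(transitions, target_states):
    """States with at least one successor in target_states (CTL EX)."""
    return {s for s, succs in transitions.items() if succs & target_states}

def check_EU(transitions, p_states, q_states):
    """Least fixpoint for E[p U q]: q ∪ (p ∩ EX(result))."""
    result = set(q_states)
    while True:
        new = result | (p_states & check_EX(transitions, result))
        if new == result:
            return result
        result = new

# Hypothetical model: successor sets per state, plus the states labeled p and q.
transitions = {0: {1}, 1: {2}, 2: {2}, 3: {0}}
labels = {"p": {0, 1, 3}, "q": {2}}
print(check_EU(transitions, labels["p"], labels["q"]))  # -> {0, 1, 2, 3}
```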

17.
Battle and Lemarié independently derived wavelets by orthonormalizing B-splines. The scaling function φ_m(t) corresponding to the Battle-Lemarié wavelet ψ_m(t) is given by φ_m(t) = Σ_k α_{m,k} B_m(t - k), where B_m(t) is the mth-order central B-spline and the coefficients α_{m,k} satisfy an orthonormalization condition. In this paper, we propose an FFT-based algorithm for computing the expansion coefficients α_{m,k} and the two-scale relations of the scaling functions and wavelets. The algorithm is very simple and can be easily implemented. Moreover, the expansion coefficients can be efficiently and accurately obtained via multiple sets of FFT computations. The computational approach presented here is noniterative and is more efficient than the matrix approach recently proposed in the literature.
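The orthonormalization step behind such constructions can be sketched numerically: the expansion coefficients are the Fourier coefficients of the reciprocal square root of the sampled spline autocorrelation spectrum. A small numpy illustration for the linear-spline case, with an arbitrarily chosen grid size; this is a generic sketch of the FFT idea, not the paper's algorithm:

```python
import numpy as np

# Integer samples of the linear B-spline autocorrelation (the cubic central
# B-spline at the integers): 1/6, 4/6, 1/6, placed circularly on a length-N grid.
N = 256
a = np.zeros(N)
a[0], a[1], a[-1] = 4 / 6, 1 / 6, 1 / 6
A = np.fft.fft(a).real                  # sampled spectrum; real and positive here
c = np.fft.ifft(1.0 / np.sqrt(A)).real  # orthonormalization coefficients
print(c[:4])                            # the coefficients decay rapidly with |k|
```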

18.
The paper describes an improved algorithm for computing cohomologies of Lie (super)algebras. The original algorithm, developed earlier by the author of this paper, is based on the decomposition of the entire cochain complex into minimal subcomplexes. The suggested improvement consists in replacing the arithmetic of rational or integer numbers by the more efficient arithmetic of modular fields, and in using the relationship dim H^k(F_p) ≥ dim H^k(Q) between the dimensions of cohomologies over an arbitrary modular field F_p = Z/pZ and the field of rational numbers Q. This inequality allows us to rapidly find the subcomplexes for which dim H^k(F_p) > 0 (the number of such subcomplexes is usually not great) using computations over an arbitrary F_p, and then carry out all required computations over Q in these subcomplexes.

19.
Based on Kuhn-Tucker optimality criteria and using a grapho-analytical method, simple closed formulae are given for the optimal diameter and thickness of centrally compressed struts of circular hollow section. These dimensions minimize the cross-sectional area and fulfill the design constraints. In the prebuckling range, overall and local elastic buckling constraints are defined. In the post-buckling range, the optimal sizes can be calculated using the formulae valid for the prebuckling range, but instead of the limiting D/t-ratio valid for semi-compact tubes, the plastic limiting D/t-ratio should be used to avoid local buckling. The comparative calculations show the mass savings achievable by using a higher-strength steel, as well as the larger axial shortening of struts made of higher-strength steel.
Notation:
A - cross-sectional area, (6)
D - diameter
E - modulus of elasticity
f - overall buckling factor, (3), Fig. 2
I - moment of inertia
k - end restraint factor, (4)
L - strut length
P - factored axial compressive force
P_cr - critical buckling force, (3)
P_y - squash load, (3)
r - radius of gyration
t - thickness
factor, (17)
D/t - diameter/thickness ratio
limiting elastic D/t-ratio
limiting plastic D/t-ratio
D/L - diameter/length ratio
kL/r - slenderness
modified slenderness (slenderness divided by the Euler slenderness)
Euler slenderness
yield stress
axial shortening
axial shortening at the squash load, = P_y L/(A E)
plastic axial shortening, see (17)
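To make the notation concrete, a small Python sketch of the elastic (Euler) buckling check for a thin-walled circular hollow strut, with hypothetical dimensions; this is illustrative only, not the paper's closed-form optimum:

```python
import math

def euler_buckling(D, t, L, E=210e9, k=1.0):
    """Elastic critical load and slenderness of a thin-walled circular hollow strut.
    Thin-wall approximations: A ~ pi*D*t, I ~ pi*D**3*t/8 (D = mean diameter)."""
    A = math.pi * D * t                          # cross-sectional area
    I = math.pi * D ** 3 * t / 8                 # moment of inertia
    P_cr = math.pi ** 2 * E * I / (k * L) ** 2   # Euler critical force
    slenderness = k * L / math.sqrt(I / A)       # kL/r with r = sqrt(I/A)
    return P_cr, slenderness

# Hypothetical strut: D = 200 mm, t = 6 mm, L = 4 m, pinned ends (k = 1).
P_cr, lam = euler_buckling(0.200, 0.006, 4.0)
print(f"P_cr = {P_cr / 1e3:.0f} kN, slenderness kL/r = {lam:.0f}")
```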

20.
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries; the required numbers of bit operations are stated in terms of n and ||A||, where ||A|| denotes the largest entry in absolute value and the exponent adjustment by +o(1) captures additional lower-order (polylogarithmic) factors involving positive real constants C1, C2, C3. The bit complexity results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications. To B. David Saunders on the occasion of his 60th birthday.
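For contrast with the asymptotically fast algorithms above, the division-free setting can be illustrated with plain cofactor expansion, which uses only ring additions, subtractions and multiplications but takes exponential time (a toy sketch, not the paper's method):

```python
def det_division_free(M):
    """Determinant by cofactor expansion along the first row: ring operations only."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_division_free(minor)
    return total

print(det_division_free([[2, 1, 0], [1, 3, 1], [0, 1, 2]]))  # -> 8
```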

