Similar Literature (20 results)
1.
We consider dynamic evaluation of algebraic functions (matrix multiplication, determinant, convolution, Fourier transform, etc.) in the model of Reif and Tate; i.e., if f(x1, …, xn) = (y1, …, ym) is an algebraic problem, we consider serving online requests of the form “change input xi to value v” or “what is the value of output yi?” We present techniques for showing lower bounds on the worst-case time complexity per operation for such problems. The first gives lower bounds in a wide range of rather powerful models (for instance, history-dependent algebraic computation trees over any infinite subset of a field, the integer RAM, and the generalized real RAM model of Ben-Amram and Galil). Using this technique, we show optimal Ω(n) bounds for dynamic matrix–vector product, dynamic matrix multiplication, and dynamic discriminant and an Ω( ) lower bound for dynamic polynomial multiplication (convolution), providing a good match with Reif and Tate's O( ) upper bound. We also show linear lower bounds for dynamic determinant, matrix adjoint, and matrix inverse and an Ω( ) lower bound for the elementary symmetric functions. The second technique is the communication complexity technique of Miltersen, Nisan, Safra, and Wigderson, which we apply to the setting of dynamic algebraic problems, obtaining similar lower bounds in the word RAM model. The third technique gives lower bounds in the weaker straight-line program model. Using this technique, we show an Ω((log n)^2 / log log n) lower bound for dynamic discrete Fourier transform. Technical ingredients of our techniques are the incompressibility technique of Ben-Amram and Galil and the lower bound for depth-two superconcentrators of Radhakrishnan and Ta-Shma. The incompressibility technique is extended to arithmetic computation in arbitrary fields.
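To make the dynamic setting concrete, the following is a minimal sketch (mine, not from the paper) of a data structure for dynamic matrix–vector product y = Ax: it serves “change input xi” requests in O(n) time and “query output yj” requests in O(1) time, which is consistent with the Ω(n) per-operation bound stated above.

```python
# Minimal sketch (not from the paper): dynamic matrix-vector product y = A x.
# An update touches every output coordinate, so it costs O(n) per operation,
# matching the Omega(n) lower bound for this problem.

class DynamicMatVec:
    def __init__(self, A):
        self.A = [row[:] for row in A]          # n x n matrix, kept fixed
        n = len(A)
        self.x = [0.0] * n                      # current input vector
        self.y = [0.0] * n                      # maintained output y = A x

    def change_input(self, i, v):
        """Serve 'change input x_i to value v' in O(n) time."""
        delta = v - self.x[i]
        self.x[i] = v
        for j in range(len(self.y)):            # update every output coordinate
            self.y[j] += self.A[j][i] * delta

    def query_output(self, j):
        """Serve 'what is the value of output y_j?' in O(1) time."""
        return self.y[j]

if __name__ == "__main__":
    d = DynamicMatVec([[1, 2], [3, 4]])
    d.change_input(0, 5.0)                      # x = (5, 0)
    d.change_input(1, 1.0)                      # x = (5, 1)
    print(d.query_output(0), d.query_output(1)) # 7.0 19.0
```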

2.
The initial value problem of the Korteweg-de Vries (KdV) equation posed on the real line ℝ defines a nonlinear map K_R from the space H^s(ℝ) to the space C(ℝ, H^s(ℝ)) for any given real number s ≥ 0. In this paper we prove that the map K_R is computable for any integer s ≥ 3.
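The displayed equation did not survive extraction; for orientation, a standard form of the KdV initial value problem on the real line is shown below (the paper's normalization of the coefficients may differ).

```latex
\partial_t u + \partial_x^3 u + u\,\partial_x u = 0,
\qquad u(x,0) = \varphi(x),
\qquad x \in \mathbb{R},\ \varphi \in H^s(\mathbb{R}).
```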

3.
This paper studies a system of m robots operating in a set of n work locations connected by aisles in a √n × √n grid, where m ≤ n. From time to time the robots need to move along the aisles, in order to visit disjoint sets of locations. The movement of the robots must comply with the following constraints: (1) no two robots can collide at a grid node or traverse a grid edge at the same time; (2) a robot's sensory capability is limited to detecting the presence of another robot at a neighboring node. We present a deterministic protocol that, for any small constant ε>0, allows m≤(1-ε)n robots to visit their target locations in O( ) time, where each robot visits no more than dn targets and no target is visited by more than one robot. We also prove a lower bound showing that our protocol is optimal. Prior to this paper, no optimal protocols were known for d>1. For d=1, optimal protocols were known only for m≤ , while for general m ≤ n only a suboptimal randomized protocol was known.

4.
We present an algorithm that—given a set of clauses S saturated under some semantic refinements of the resolution calculus—automatically constructs a Herbrand model of S. The model is represented by a set of atoms with equality and disequality constraints interpreted over the finite tree algebra; hence the problem of evaluating first-order formulae in the model is decidable.

5.
This paper presents a number of new ideas and results on graph reduction applied to graphs of bounded treewidth. S. Arnborg, B. Courcelle, A. Proskurowski, and D. Seese (J. Assoc. Comput. Mach. 40, 1134–1164 (1993)) have shown that many decision problems on graphs can be solved in linear time on graphs of bounded treewidth, using a finite set of reduction rules. These algorithms can be used to solve problems on graphs of bounded treewidth without the need to obtain a tree decomposition of the input graph first. We show that the reduction method can be extended to solve the construction variants of many decision problems on graphs of bounded treewidth, including all problems definable in monadic second order logic. We also show that a variant of these reduction algorithms can be used to solve (constructive) optimization problems in O(n) time. For example, optimization and construction variants of I S and H C N can be solved in this way on graphs of small treewidth. Additionally, we show that the results of H. L. Bodlaender and T. Hagerup (SIAM J. Comput. 27, 1725–1746 (1998)) can be applied to our reduction algorithms, which results in parallel reduction algorithms that use O(n) operations and O(log n log* n) time on an EREW PRAM, or O(log n) time on a CRCW PRAM.
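As an illustration of the reduction-rule style of algorithm, here is a sketch of two classical safe rules for Maximum Independent Set (isolated and pendant vertices); these are textbook rules chosen for brevity, not the rule sets constructed in the paper.

```python
# Illustrative sketch only: two classical safe reduction rules for Maximum
# Independent Set on a simple graph -- take every isolated vertex, and take every
# pendant (degree-1) vertex while deleting its unique neighbour.  The paper's
# reduction systems for graphs of bounded treewidth are far more general.

def reduce_mis(adj):
    """adj: dict vertex -> set of neighbours.  Returns (reduced adjacency, forced vertices)."""
    adj = {v: set(ns) for v, ns in adj.items()}    # work on a copy
    forced = []                                    # vertices safely placed in the solution
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue                           # already deleted in this pass
            if not adj[v]:                         # Rule 1: isolated vertex -> take it
                forced.append(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:                 # Rule 2: pendant vertex -> take it,
                (u,) = adj[v]                      # and delete its unique neighbour u
                forced.append(v)
                for w in adj[u]:
                    if w != v:
                        adj[w].discard(u)
                del adj[u]
                del adj[v]
                changed = True
    return adj, forced

if __name__ == "__main__":
    # The path a-b-c-d reduces completely; {a, c} is a maximum independent set.
    graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(reduce_mis(graph))   # ({}, ['a', 'c'])
```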

6.
This paper introduces formative processes, composed of transitive partitions. Given a family of sets, a formative process ending in the Venn partition Σ of that family is shown to exist. Sufficient criteria are also singled out for a transitive partition to model (via a function from set variables to unions of sets in the partition) all set-literals modeled by Σ. On the basis of such criteria a procedure is designed that mimics a given formative process by another in which sets have finite rank bounded by C(|Σ|), with C a specific computable function. As a by-product, one of the core results on decidability in computable set theory is rediscovered, namely the one concerning the satisfiability of unquantified set-theoretic formulae involving Boolean operators, the singleton-former, and the powerset operator. The method described (which is able to exhibit a set-solution when the answer is affirmative) can be extended to solve the satisfiability problem for broader fragments of set theory.

7.
A distance quasi-metric for pattern recognition is presented. The “quasi” modifier distinguishes the metric from “true” distance metrics, which obey a set of standard constraints. By relaxing one of these constraints and coupling the metric with a fast multidimensional search technique, it achieves improved accuracy and efficiency compared to other metrics in recognizing handwritten digit samples. A high-level design of a fast optical comparator for computing the distance in O( ) is also presented.

8.
We consider the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones. First, we describe a general method for utilizing the decidability of bisimulation problems to solve (certain instances of) the corresponding simulation problems. For certain process classes, the method allows us to design effective reductions of simulation problems to their bisimulation counterparts, and some new decidability results for simulation have already been obtained in this way. Then we establish the decidability border for the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones w.r.t. the hierarchy of process rewrite systems. In particular, we show that simulation preorder (in both directions) and simulation equivalence are decidable in EXPTIME between pushdown processes and finite-state ones. On the other hand, simulation preorder is undecidable between PA and finite-state processes in both directions. These results also hold for those PA and finite-state processes which are deterministic and normed, and thus immediately extend to trace preorder. Regularity (finiteness) w.r.t. simulation and trace equivalence is also shown to be undecidable for PA. Finally, we prove that simulation preorder (in both directions) and simulation equivalence are intractable between all classes of infinite-state systems (in the hierarchy of process rewrite systems) and finite-state ones. This result is obtained by showing that the problem whether a BPA (or BPP) process simulates a finite-state one is PSPACE-hard and the other direction is co-NP-hard; consequently, simulation equivalence between BPA (or BPP) and finite-state processes is also co-NP-hard.
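For readers unfamiliar with the notion, the sketch below computes the simulation preorder on a finite labelled transition system by the standard greatest-fixpoint refinement; it only illustrates the definition, whereas the paper's results concern the harder case in which one of the processes is infinite-state (pushdown, PA, BPA, BPP).

```python
# Sketch: naive greatest-fixpoint computation of the simulation preorder on a
# finite labelled transition system.  A pair (p, q) survives iff every move of p
# can be matched by q with the successors again related.

def simulation_preorder(states, trans):
    """trans: dict state -> list of (action, successor) pairs."""
    rel = {(p, q) for p in states for q in states}        # start from everything
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                any(a == b and (p2, q2) in rel for (b, q2) in trans.get(q, []))
                for (a, p2) in trans.get(p, [])
            )
            if not ok:                                     # q cannot match some move of p
                rel.discard((p, q))
                changed = True
    return rel

if __name__ == "__main__":
    # q simulates p (it can match p's 'a'-move), but p does not simulate q.
    states = {"p", "p1", "q", "q1", "q2"}
    trans = {"p": [("a", "p1")], "q": [("a", "q1"), ("b", "q2")]}
    rel = simulation_preorder(states, trans)
    print(("p", "q") in rel, ("q", "p") in rel)            # True False
```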

9.
This paper reviews a technique of adaptive wavelet expansions and introduces the novel concept of “biased wavelets.” These are functions that are localized in time and in frequency but, unlike conventional wavelets, have an adjustable nonzero mean component. Under mild conditions, it is shown that a conventional mother wavelet can be used to construct a family of biased wavelets which spans the set of finite-energy functions L2(ℝ). Numerical tests suggest that the introduction of the adjustable “bias” considerably improves the representation capabilities of wavelet expansions. A problem of electrocardiographic data compression is used for illustration purposes. Test signals were extracted from the MIT–BIH ECG Compression Test Database.
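The abstract does not spell out the construction, so purely as an illustration of “a time/frequency-localized function with an adjustable nonzero mean”, the sketch below adds a scaled Gaussian window to a zero-mean Mexican-hat mother wavelet; this is an assumed construction for illustration, not necessarily the family defined in the paper.

```python
# Illustration only (assumed construction, not necessarily the paper's): add a
# scaled Gaussian window to a zero-mean mother wavelet, so the bias parameter b
# controls the mean while the function stays localized in time and frequency.
import numpy as np

def mexican_hat(t):
    """Zero-mean Mexican-hat (Ricker) mother wavelet, up to normalization."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def biased_wavelet(t, b):
    """Mexican hat plus a Gaussian 'bias' term with adjustable weight b."""
    return mexican_hat(t) + b * np.exp(-t**2 / 2.0)

if __name__ == "__main__":
    t = np.linspace(-8.0, 8.0, 4001)
    dt = t[1] - t[0]
    for b in (0.0, 0.5, 1.0):
        integral = biased_wavelet(t, b).sum() * dt      # ~0 for b = 0, nonzero otherwise
        print(f"b = {b:.1f}: integral of psi_b ~ {integral:.3f}")
```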

10.
The problem of existence of predictive complexity for the absolute loss game is studied. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. For perfectly mixable loss functions (logarithmic loss and square loss are among them), predictive complexity is defined, like Kolmogorov complexity, to within an additive constant. The absolute loss function is not perfectly mixable, and the question of existence of the corresponding predictive complexity, defined to within an additive constant, is open. We prove that in the case of the absolute loss game the predictive complexity can be defined to within an additive term O( ), where n is the length of a sequence of outcomes. We also prove that in some restricted settings this bound cannot be improved.

11.
Weakly useful sequences
An infinite binary sequence x is defined to be
(i) strongly useful if there is a computable time bound within which every decidable sequence is Turing reducible to x; and
(ii) weakly useful if there is a computable time bound within which all the sequences in a non-measure 0 subset of the set of decidable sequences are Turing reducible to x.
Juedes, Lathrop, and Lutz [Theoretical Computer Science 132 (1994) 37] proved that every weakly useful sequence is strongly deep in the sense of Bennett [The Universal Turing Machine: A Half-Century Survey, 1988, 227] and asked whether there are sequences that are weakly useful but not strongly useful. The present paper answers this question affirmatively. The proof is a direct construction that combines the martingale diagonalization technique of Lutz [SIAM Journal on Computing 24 (1995) 1170] with a new technique, namely, the construction of a sequence that is “computably deep” with respect to an arbitrary, given uniform reducibility. The abundance of such computably deep sequences is also proven and used to show that every weakly useful sequence is computably deep with respect to every uniform reducibility.
Keywords: Computability; Randomness; Random sequence; Computational depth; Logical depth; Computable measure; Resource-bounded measure; Useful; Weakly useful

12.
For an arbitrary n×n constant matrix A, the following two facts are well known:
• (1/n) Re(trace A) − max_{j=1,…,n} Re λ_j(A) ≤ 0;
• If U is a unitary matrix, one can always find a skew-Hermitian matrix A so that U = e^A.
In this note we present the extension of these two facts to the context of linear time-varying dynamical systems. As a by-product, this result suggests that the notion of “slowly varying state-space systems”, commonly used in the literature, is mathematically not natural to the problem of exponential stability.
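A quick numerical sanity check of the two constant-matrix facts (a sketch using NumPy/SciPy; the note itself concerns the time-varying extension):

```python
# Numerical sanity check of the two constant-matrix facts quoted above.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
n = 5

# Fact 1: (1/n) Re(trace A) - max_j Re(lambda_j(A)) <= 0.  The trace is the sum
# of the eigenvalues, so the average real part cannot exceed the largest one.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gap = np.real(np.trace(A)) / n - np.max(np.real(np.linalg.eigvals(A)))
print("fact 1 gap (should be <= 0):", gap)

# Fact 2: every unitary U equals e^A for some skew-Hermitian A.  Build a random
# unitary via QR, take the principal matrix logarithm, and check both claims.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Askew = logm(Q)
print("skew-Hermitian residual:", np.linalg.norm(Askew + Askew.conj().T))
print("reconstruction error   :", np.linalg.norm(expm(Askew) - Q))
```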

13.
We present deterministic upper and lower bounds on the slowdown required to simulate an (n, m)-PRAM on a variety of networks. The upper bounds are based on a novel scheme that exploits the splitting and combining of messages. This scheme can be implemented on an n-node d-dimensional mesh (for constant d) and on an n-leaf pruned butterfly and attains the smallest worst-case slowdown to date for such interconnections, namely, O(n^{1/d} (log(m/n))^{1-1/d}) for the d-dimensional mesh (with constant d) and O( ) for the pruned butterfly. In fact, the simulation on the pruned butterfly is the first PRAM simulation scheme on an area-universal network. Finally, we prove restricted and unrestricted lower bounds on the slowdown of any deterministic PRAM simulation on an arbitrary network, formulated in terms of the bandwidth properties of the interconnection as expressed by its decomposition tree.

14.
We investigate the nature of the phase transition (sharp or coarse) for random constraint satisfaction problems. We first give a sharp-threshold criterion specified for CSPs, derived from the Friedgut–Bourgain criterion. Thus, we obtain a complete and precise classification of the nature of the threshold for symmetric Boolean CSPs. In particular, we show that it is governed by two local properties strongly related to the problems and .

15.
This paper describes the theory and algorithms of distance transform for fuzzy subsets, called fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then finding the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝ^n is defined as the integral of fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset, and it is shown that the shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ≤1, the θ-support of a fuzzy subset is the set of all points in ℝ^n with membership values greater than or equal to θ. It is shown that, for any fuzzy subset, for any nonzero θ≤1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as support). FDT is defined as a process on a fuzzy subset that assigns to a point its fuzzy distance from the complement of the support. The theoretical framework of FDT in continuous space is extended to digital cubic spaces, and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing FDT of a fuzzy digital object. It is shown that the algorithm terminates in a finite number of steps and when it does so, it correctly computes FDT. Several potential applications of fuzzy distance transform in medical imaging are presented. Among these are the quantification of blood vessels and trabecular bone thickness in the regime of limited spatial resolution, where these objects become fuzzy.
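A hedged sketch of the overall idea on a 2D digital grid: propagate distances Dijkstra-style from the complement of the support, with the cost of a step between 4-neighbours taken as the average of their membership values times the (unit) step length. The paper's dynamic-programming algorithm and its metric analysis in n-dimensional digital spaces are more careful than this illustration.

```python
# Sketch of a fuzzy distance transform on a 2D membership image (illustrative only).
# Cost of moving between 4-neighbours p, q: (mu(p) + mu(q)) / 2 times the unit step.
# Distances are propagated Dijkstra-style from pixels outside the support (mu = 0).
import heapq
import numpy as np

def fuzzy_distance_transform(mu):
    h, w = mu.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for i in range(h):                       # seed: background pixels at distance 0
        for j in range(w):
            if mu[i, j] == 0:
                dist[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                         # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + (mu[i, j] + mu[ni, nj]) / 2.0
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return dist

if __name__ == "__main__":
    mu = np.zeros((7, 7))
    mu[1:6, 1:6] = 0.5                       # a fuzzy square of membership 0.5
    mu[2:5, 2:5] = 1.0                       # with a full-membership core
    print(np.round(fuzzy_distance_transform(mu), 2))
```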

16.
We consider planar circuits, formulas, and multilective planar circuits. It is shown that planar circuits and formulas are incomparable. An Ω(n log n) lower bound is given for the multilective planar circuit complexity of a decision problem, and an Ω(n^{3/2}) lower bound is given for the multilective planar circuit complexity of a multiple-output function.

17.
In this paper, we provide a method to safely store a document in perhaps the most challenging setting: a highly decentralized replicated storage system where up to half of the storage servers may incur arbitrary failures, including alterations to data stored in them. Using an error correcting code (ECC), e.g., a Reed–Solomon code, one can take n pieces of a document and replace each piece with another piece of size larger by a factor of n/(n−2t), such that it is possible to recover the original set even when up to t of the larger pieces are altered. For t close to n/2 the space blowup factor of this scheme is close to n, and the overhead of an ECC such as the Reed–Solomon code degenerates to that of a trivial replication code. We show a technique to reduce this large space overhead for high values of t. Our scheme blows up each piece by a factor slightly larger than two using an erasure code which makes it possible to recover the original set using n/2−O(n/d) of the pieces, where d≈80 is a fixed constant. Then we attach to each piece O(d log n/log d) additional bits to make it possible to identify a large enough set of unmodified pieces, with negligible error probability, assuming that at least half the pieces are unmodified and with low complexity. For values of t close to n/2 we achieve a large asymptotic space reduction over the best possible space blowup of any ECC in a deterministic setting. Our approach makes use of a d-regular expander graph to compute the bits required for the identification of n/2−O(n/d) good pieces.
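To make the space accounting concrete, a back-of-the-envelope comparison: the n/(n−2t) blow-up is the standard bound for an MDS code correcting t corruptions (the exact factor is elided in the abstract as rendered), and the identification-bit term below drops all constants.

```python
# Back-of-the-envelope comparison of per-piece space blow-up (illustrative only).
# A classical MDS code (e.g. Reed-Solomon) that recovers from t altered pieces
# out of n blows each piece up by about n / (n - 2t); the scheme sketched above
# uses a factor of roughly 2 plus O(d log n / log d) identification bits (d ~ 80).
import math

def mds_blowup(n, t):
    return n / (n - 2 * t)                          # requires t < n/2

def scheme_identification_bits(n, d=80):
    return d * math.log2(n) / math.log2(d)          # O(d log n / log d), constants dropped

if __name__ == "__main__":
    n = 1000
    for t in (100, 400, 490, 499):
        print(f"t = {t:3d}: MDS blow-up ~ {mds_blowup(n, t):7.1f}x, "
              f"scheme ~ 2x + ~{scheme_identification_bits(n):.0f} extra bits/piece")
```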

18.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle PHP_n^{cn} and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). Also we prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.

19.
The SRL (speciate re-entrant logic) of King (1989) is a sound, complete and decidable logic designed specifically to support formalisms for the HPSG (head-driven phrase structure grammar) of Pollard and Sag (1994). The SRL notion of modellability in a signature is particularly important for HPSG, and the present paper modifies an elegant method due to Blackburn and Spaan (1993) in order to prove that
–  modellability in each computable signature is 1 0
–  modellability in some finite signature is 1 0 -hard (hence not decidable), and
–  modellability in some finite signature is decidable.
Since each finite signature is a computable signature, we conclude that 01-completeness is the least upper bound on the complexity of modellability both in finite signatures and in computable signatures, though not a lower bound in either.

20.
1 [11] is a decidable subclass of first-order clausal logic without equality. [7] shows that 1 becomes undecidable when equational literals are allowed, but remains decidable if equality is restricted to ground terms only. First, we extend this decidability result to some non-ground equational literals. By carefully restricting the use of the equality predicate we obtain a new decidable class, called 1 =*. We show that existing paramodulation calculi do not terminate on 1 =*, and we define a new simplification rule which allows us to ensure termination. Second, we show that the automatic extraction of Herbrand models is possible from saturated sets in 1 =* not containing the empty clause □. These models are represented by certain finite sets of (possibly equational and non-ground) linear atoms. The difficult point here is to show that this formalism is suitable as a model representation mechanism, i.e. that the evaluation of arbitrary non-equational first-order formulae in such interpretations is a decidable problem.

