Similar Documents
20 similar documents found (search time: 15 ms)
1.
Constant-time distributed dominating set approximation   (cited: 1; self-citations: 0; by others: 1)
Finding a small dominating set is one of the most fundamental problems of classical graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary, possibly constant parameter k and maximum node degree , our algorithm computes a dominating set of expected size in rounds. Each node has to send messages of size . This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds. Received: 9 September 2003, Accepted: 2 September 2004, Published online: 13 January 2005. The work presented in this paper was supported (in part) by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322.
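The fractional relaxation that such LP-based algorithms start from can be stated in a few lines. The sketch below solves the centralized dominating-set LP with scipy; the example graph, function name, and use of a generic LP solver are illustrative assumptions, not the paper's distributed algorithm.

```python
# Fractional (LP-relaxed) minimum dominating set: minimize sum(x_v) subject to
# sum_{u in N[v]} x_u >= 1 for every vertex v, with 0 <= x_v <= 1.
# Centralized sketch only; the paper's contribution is a distributed algorithm
# that approximates this LP and rounds the result.
import numpy as np
from scipy.optimize import linprog

def fractional_dominating_set(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    vertices = sorted(adj)
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    A_ub = np.zeros((n, n))
    for v in vertices:
        for u in adj[v] | {v}:          # closed neighbourhood N[v]
            A_ub[idx[v], idx[u]] = -1.0  # -sum_{u in N[v]} x_u <= -1
    b_ub = -np.ones(n)
    c = np.ones(n)                       # objective: minimize sum of x_v
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    return dict(zip(vertices, res.x))

# Example: a 4-cycle; every optimal fractional solution has value 4/3.
print(fractional_dominating_set({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```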

2.
All the results given in the paper hold true. In the proof of Theorem 1, change steps IV, V, and VI to IV. for every , add to P; V. for every , add , , , , , to P; VI. for every , add to P.

3.
The condition-based approach for consensus solvability consists of identifying sets of input vectors, called conditions, for which there exists an asynchronous protocol solving consensus despite the occurrence of up to f process crashes. This paper investigates , the largest set of conditions which allow us to solve the consensus problem in an asynchronous shared memory system. The first part of the paper shows that is made up of a hierarchy of classes of conditions, where d is a parameter (called degree of the condition), starting with and ending with d = 0, where . We prove that each one is strictly contained in the previous one: . Various properties of the hierarchy are also derived. It is shown that a class can be characterized in two equivalent but complementary ways: one is convenient for designing protocols while the other is for analyzing the class properties. The paper also defines a linear family of conditions that can be used to derive many specific conditions. In particular, for each d, two natural conditions are presented. The second part of the paper is devoted to the design of efficient condition-based protocols. A generic condition-based protocol is presented. This protocol can be instantiated with any condition C, , and requires at most shared memory read/write operations per process in the synchronization part of the protocol. Thus, the value (f-d) represents the difficulty of the class . An improvement of the protocol for the conditions in is also presented. Received: 15 November 2001, Accepted: 15 April 2003, Published online: 6 February 2004. Parts of this work have previously appeared in [23] and [25].

4.
We give a complete characterization of the complexity of the element distinctness problem for n elements of bits each on deterministic and nondeterministic one-tape Turing machines. We present an algorithm running in time for deterministic machines and nondeterministic solutions that are of time complexity . For elements of logarithmic size , on nondeterministic machines, these results close the gap between the known lower bound and the previous upper bound . Additional lower bounds are given to show that the upper bounds are optimal for all other possible relations between m and n. The upper bounds employ hashing techniques, while the lower bounds make use of the communication complexity of set disjointness. Received: 23 April 2001, Published online: 2 September 2003. Holger Petersen: Supported by Deutsche Akademie der Naturforscher Leopoldina, grant number BMBF-LPD 9901/8-1 of Bundesministerium für Bildung und Forschung.

5.
Fast allocation and deallocation with an improved buddy system   (cited: 1; self-citations: 0; by others: 1)
We propose several modifications to the binary buddy system for managing dynamic allocation of memory blocks whose sizes are powers of two. The standard buddy system allocates and deallocates blocks in time in the worst case (and on an amortized basis), where n is the size of the memory. We present three schemes that improve the running time to O(1) time, where the time bound for deallocation is amortized for the first two schemes. The first scheme uses just one more word of memory than the standard buddy system, but may result in greater fragmentation than necessary. The second and third schemes have essentially the same fragmentation as the standard buddy system, and use bits of auxiliary storage, which is but for all and . Finally, we present simulation results estimating the effect of the excess fragmentation in the first scheme. Received: 4 May 2003, Published online: 22 December 2004. Gerth Stølting Brodal: Supported by the Carlsberg Foundation (contract number ANS-0257/20). Partially supported by the Future and Emerging Technologies Programme of the EU under contract number IST-1999-14186 (ALCOM-FT). Basic Research in Computer Science, www.brics.dk, funded by the Danish National Research Foundation. Erik D. Demaine: Partially supported by the Natural Science and Engineering Research Council of Canada (NSERC). J. Ian Munro: Supported by the Natural Science and Engineering Research Council of Canada (NSERC) and the Canada Research Chair in Algorithm Design. This paper includes several results that appeared in preliminary form in the Proceedings of the 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FST & TCS 99) [8].
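For context, a minimal sketch of the standard binary buddy scheme that the paper improves on: allocation splits the smallest sufficiently large free block, and deallocation repeatedly merges a block with its free buddy. The class layout and field names are illustrative assumptions, not the authors' data structures.

```python
# Minimal standard binary buddy allocator over a memory of 2**max_order words.
# alloc/free walk the free lists level by level, which is where the logarithmic
# worst case comes from; the paper's schemes reduce this to O(1).
class BuddyAllocator:
    def __init__(self, max_order):
        self.max_order = max_order
        self.free_lists = {k: set() for k in range(max_order + 1)}
        self.free_lists[max_order].add(0)       # one free block: the whole memory

    def alloc(self, order):
        # Find the smallest free block of order >= the requested order.
        k = order
        while k <= self.max_order and not self.free_lists[k]:
            k += 1
        if k > self.max_order:
            raise MemoryError("out of memory")
        addr = self.free_lists[k].pop()
        while k > order:                         # split down to the requested size
            k -= 1
            self.free_lists[k].add(addr + (1 << k))   # right half becomes free
        return addr

    def free(self, addr, order):
        # Merge with the buddy block as long as the buddy is also free.
        while order < self.max_order:
            buddy = addr ^ (1 << order)
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            addr = min(addr, buddy)
            order += 1
        self.free_lists[order].add(addr)

mem = BuddyAllocator(max_order=4)                # 16-word memory
a = mem.alloc(2); b = mem.alloc(2)               # two 4-word blocks
mem.free(a, 2); mem.free(b, 2)                   # coalesces back into one 16-word block
```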

6.
We consider distributed broadcasting in radio networks, modeled as undirected graphs, whose nodes have no information on the topology of the network, nor even on their immediate neighborhood. For randomized broadcasting, we give an algorithm working in expected time in n-node radio networks of diameter D, which is optimal, as it matches the lower bounds of Alon et al. [1] and Kushilevitz and Mansour [16]. Our algorithm improves the best previously known randomized broadcasting algorithm of Bar-Yehuda, Goldreich and Itai [3], running in expected time . (In fact, our result holds also in the setting of n-node directed radio networks of radius D.) For deterministic broadcasting, we show the lower bound on broadcasting time in n-node radio networks of diameter D. This implies previously known lower bounds of Bar-Yehuda, Goldreich and Itai [3] and Bruschi and Del Pinto [5], and is sharper than any of them in many cases. We also give an algorithm working in time , thus shrinking - for the first time - the gap between the upper and the lower bound on deterministic broadcasting time to a logarithmic factor. Received: 1 August 2003, Accepted: 18 March 2005, Published online: 15 June 2005 Dariusz R. Kowalski: This work was done during the stay of Dariusz Kowalski at the Research Chair in Distributed Computing of the Université du Québec en Outaouais, as a postdoctoral fellow. Research supported in part by KBN grant 4T11C04425. Andrzej Pelc: Research of Andrzej Pelc was supported in part by NSERC discovery grant and by the Research Chair in Distributed Computing of the Université du Québec en Outaouais.
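For background, a toy simulation of a Decay-style phase in the spirit of the cited protocol of Bar-Yehuda, Goldreich, and Itai: informed nodes keep transmitting, stopping after each step with probability 1/2, and a node receives the message only when exactly one of its neighbours transmits. The function names and phase length are illustrative assumptions; this is not the improved algorithm of the paper.

```python
import random

# One Decay-style phase in a radio network given as an undirected graph:
# every informed node starts transmitting and, after each step, stops with
# probability 1/2; a node hears the message in a step only if exactly one
# neighbour transmits (two or more collide and nothing is received).
def decay_phase(adj, informed, steps):
    active = set(informed)                       # informed nodes still transmitting
    newly_informed = set()
    for _ in range(steps):
        transmitting = set(active)
        for v in adj:
            if v in informed or v in newly_informed:
                continue
            senders = [u for u in adj[v] if u in transmitting]
            if len(senders) == 1:                # exactly one sender: no collision
                newly_informed.add(v)
        active = {u for u in active if random.random() < 0.5}
    return informed | newly_informed

# Example: broadcast from node 0, repeating phases until everyone is informed.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
informed = {0}
while len(informed) < len(adj):
    informed = decay_phase(adj, informed, steps=4)
print(sorted(informed))
```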

7.
This paper proposes a new approach and new techniques for on-line monitoring of concurrent programs to ensure that some of their safety properties are not violated. The techniques modify erroneous systems, which violate a certain safety property, into new systems which satisfy the safety property. They do so by adding a new layer that controls the scheduling of steps in the system. We formally characterize the relationship between the erroneous and the new system. Safety monitors for mutual-exclusion, -exclusion, and the producer-consumer tasks are presented. Proofs for the mutual-exclusion task and the -exclusion task are presented to demonstrate the applicability of our approach. Received: May 2001, Accepted: December 2002. An extended abstract of this work appears in the Proceedings of the Fifth International Symposium on Autonomous Decentralized Systems (ISADS) 2001. Part of this work was done while the first author visited Wayne State University.
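A minimal sketch of the control-layer idea for one safety property, mutual exclusion: every step that would enter a critical section is routed through a monitor, which postpones the step whenever taking it now would violate the property. The class and method names are illustrative assumptions, not the paper's constructions.

```python
import threading

# Sketch of a safety-monitoring layer for mutual exclusion: processes ask the
# monitor before entering their critical section, and the monitor delays (rather
# than allows) any step that would put two processes in the section at once.
class MutualExclusionMonitor:
    def __init__(self):
        self._cond = threading.Condition()
        self._inside = None                      # process currently in the CS, if any

    def request_enter(self, pid):
        with self._cond:
            while self._inside is not None:      # entering now would violate safety,
                self._cond.wait()                # so the step is postponed
            self._inside = pid

    def notify_exit(self, pid):
        with self._cond:
            if self._inside == pid:
                self._inside = None
                self._cond.notify_all()

monitor = MutualExclusionMonitor()

def worker(pid):
    monitor.request_enter(pid)
    print(f"process {pid} in critical section")
    monitor.notify_exit(pid)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```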

8.
Abstract. We obtain a multivariate extension of a classical result of Schoenberg on cardinal spline interpolation. Specifically, we prove the existence of a unique function in , polyharmonic of order p on each strip , , and periodic in its last n variables, whose restriction to the parallel hyperplanes , , coincides with a prescribed sequence of n-variate periodic data functions satisfying a growth condition in . The constructive proof is based on separation of variables and on Micchelli’s theory of univariate cardinal -splines. Keywords: cardinal -splines, polyharmonic functions, multivariable interpolation. Mathematics Subject Classification (2000): 41A05, 41A15, 41A63

9.
The complexity of constructing pseudorandom generators from hard functions   (cited: 3; self-citations: 3; by others: 0)
We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focussing on constant-depth circuits. We show that, starting from a function computable in alternating time O(l) with O(1) alternations that is hard on average (i.e. there is a constant such that every circuit of size fails to compute f on at least a 1/poly(l) fraction of inputs), we can construct a PRG computable by DLOGTIME-uniform constant-depth circuits of size polynomial in n. Such a PRG implies under DLOGTIME-uniformity. On the negative side, we prove that starting from a worst-case hard function (i.e. there is a constant such that every circuit of size fails to compute f on some input), for every positive constant there is no black-box construction of a PRG computable by constant-depth circuits of size polynomial in n. We also study worst-case hardness amplification, which is the related problem of producing an average-case hard function starting from a worst-case hard one. In particular, we deduce that there is no black-box worst-case hardness amplification within the polynomial time hierarchy. These negative results are obtained by showing that polynomial-size constant-depth circuits cannot compute good extractors and list-decodable codes.

10.
Coupling and self-stabilization   (cited: 1; self-citations: 0; by others: 1)
A randomized self-stabilizing algorithm is an algorithm that, whatever the initial configuration is, reaches a set of legal configurations in finite time with probability 1. The proof of convergence towards is generally done by exhibiting a potential function , which measures the “vertical” distance of any configuration to , such that decreases with non-null probability at each step of . We propose here a method, based on the notion of coupling, which makes use of a “horizontal” distance between any pair of configurations, such that decreases in expectation at each step of . In contrast with classical methods, our coupling method does not require the knowledge of . In addition to the proof of convergence, the method allows us to assess the convergence rate according to two different measures. Proofs produced by the method are often simpler or give better upper bounds than their classical counterparts, as exemplified here on Herman's mutual exclusion and Iterated Prisoner's Dilemma algorithms in the case of cyclic graphs.
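Herman's algorithm, one of the two case studies mentioned above, is easy to simulate: on an odd-size ring each process holds a bit, a process "has a token" when its bit equals its left neighbour's, and in every synchronous step a token holder draws a fresh random bit while the others copy their left neighbour. A small simulation sketch follows; the ring size and initial configuration are illustrative.

```python
import random

# Herman's self-stabilizing token ring (odd ring size): process i holds bit x[i]
# and has a token when x[i] == x[i-1].  Each synchronous step, a token holder
# picks a fresh random bit while every other process copies its left neighbour;
# the token count never increases and eventually reaches one, whatever the
# initial configuration.
def herman_step(x):
    n = len(x)
    return [random.randint(0, 1) if x[i] == x[i - 1] else x[i - 1]
            for i in range(n)]

def tokens(x):
    return sum(1 for i in range(len(x)) if x[i] == x[i - 1])

x = [random.randint(0, 1) for _ in range(11)]    # arbitrary initial configuration
steps = 0
while tokens(x) > 1:
    x = herman_step(x)
    steps += 1
print(f"stabilized to a single token after {steps} steps")
```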

11.
This paper investigates two preemptive semi-online scheduling problems to minimize makespan on two uniform machines. In the first semi-online problem, we know in advance that all jobs have their processing times between p and rp. In the second semi-online problem, we know the size of the largest job in advance. For the first problem, we design a semi-online algorithm that is optimal for every combination of machine speed ratio and job processing time ratio; for the second, we give a semi-online algorithm that is optimal for every machine speed ratio. Received: 2 December 2003, Published online: 16 January 2004. This research is supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE, China, and the National Natural Science Foundation of China (10271110).

12.
We describe several observations regarding the completeness and the complexity of bounded model checking and propose techniques to solve some of the associated computational challenges. We begin by defining the completeness threshold ( ) problem: for every finite model M and an LTL property , there exists a number such that if there is no counterexample to in M of length or less, then M satisfies the property. Finding this number, if it is sufficiently small, offers a practical method for making bounded model checking complete. We describe how to compute an overapproximation to for a general LTL property using Büchi automata, following the Vardi–Wolper LTL model checking framework. This computation is based on finding the initialized diameter and initialized recurrence diameter (the longest loop-free path from an initial state) of the product automaton. We show a method for finding a recurrence diameter with a formula of size O(k log k) (or O(k (log k)²) in practice), where k is the attempted depth, which is an improvement over the previously known method that requires a formula of size O(k²). Based on the value of , we prove that the complexity of standard SAT-based BMC is doubly exponential and that, consequently, there is a complexity gap of an exponent between this procedure and standard LTL model checking. We discuss ways to bridge this gap.
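How a completeness threshold makes BMC complete can be summarised in a small driver loop: increase the unrolling depth until either a counterexample is found or the threshold is exceeded. The encoding and SAT-solving helpers below are illustrative stubs, not part of the paper.

```python
# Sketch of how a completeness threshold CT makes bounded model checking
# complete: if no counterexample of length <= CT exists, the property holds.
# `encode_counterexample` and `sat_solve` are illustrative stubs standing in
# for the usual unrolling of the transition relation and a SAT solver call.
def bounded_model_check(model, prop, completeness_threshold,
                        encode_counterexample, sat_solve):
    for k in range(completeness_threshold + 1):
        formula = encode_counterexample(model, prop, k)   # paths of length k
        assignment = sat_solve(formula)                   # None if unsatisfiable
        if assignment is not None:
            return ("counterexample", k, assignment)
    # No counterexample up to the threshold, hence none exists at all.
    return ("property holds", completeness_threshold, None)
```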

13.
By the sometimes so-called Main Theorem of Recursive Analysis, every computable real function is necessarily continuous. We wonder whether, and which kinds of, hypercomputation allow for the effective evaluation of also discontinuous . More precisely, the present work considers the following three super-Turing notions of real function computability: (i) relativized computation, specifically given oracle access to the Halting Problem or its jump ; (ii) encoding the input and/or output y = f(x) in weaker ways also related to the Arithmetic Hierarchy; (iii) nondeterministic computation. It turns out that any computable in the first or second sense is still necessarily continuous, whereas the third type of hypercomputation provides the required power to evaluate, for instance, the discontinuous Heaviside function.

14.
In general, it is undecidable whether an arbitrary context-free grammar has a regular solution. Past work has focused on special cases, such as one-letter grammars, non-self-embedded grammars and finite-language grammars, for which regular counterparts have been proven to exist. However, little is known about grammars with the self-embedding property. Using systems of equations, we highlight a number of subclasses of grammars with self-embedding terms, such as and , that can still have regular languages as solutions. Constructive proofs that allow these subclasses of context-free grammars to be transformed to regular expressions are provided. We also point out a subclass of context-free grammars that is inherently non-regular. Our latest results help demarcate more precisely the boundary between regular and non-regular languages within the context-free domain. Received: 17 January 2003, Published online: 17 February 2004. Stefan Andrei: stefan@infoiasi.ro

15.
We show that for arbitrary positive integers , with probability , the gcd of two linear combinations of these integers with rather small random integer coefficients coincides with . This naturally leads to a probabilistic algorithm for computing the gcd of several integers, with probability , via just one gcd of two numbers of about the same size as the initial data (namely, the above linear combinations). This algorithm can be repeated to achieve any desired confidence level.
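A sketch of this reduction: draw small random coefficients, form two linear combinations of the inputs, and take a single two-argument gcd; repeating the trial and combining the results drives the failure probability down. The coefficient bound below is an illustrative choice, not the one analysed in the paper.

```python
import math
import random
from functools import reduce

# Probabilistic gcd of several integers via one gcd of two random linear
# combinations.  Each trial's gcd(u, v) is always a multiple of gcd(numbers)
# and equals it with high probability, so taking the gcd across trials
# sharpens the answer.  The coefficient bound is an illustrative choice.
def probabilistic_gcd(numbers, trials=5, bound=1 << 16):
    result = 0
    for _ in range(trials):
        u = sum(random.randrange(1, bound) * a for a in numbers)
        v = sum(random.randrange(1, bound) * a for a in numbers)
        result = math.gcd(result, math.gcd(u, v))   # gcd(0, x) == x on first trial
    return result

numbers = [462, 1071, 1155]
print(probabilistic_gcd(numbers), reduce(math.gcd, numbers))   # both 21 (w.h.p.)
```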

16.
In this paper, a method for matching complex objects in line-drawings is presented. Our approach is based on the notion of -signatures, which are a special kind of histogram of forces [17,19,28]. Such histograms have low time complexity and describe signatures that are invariant to fundamental geometrical transformations such as scaling, translation, symmetry, and rotation. This article presents a new application of this notion in the field of symbol identification and recognition. To improve the efficiency of matching, we propose using an approximation of the -signature from Fourier series and the associated matching. Received: 7 October 2002, Accepted: 1 December 2002, Published online: 4 July 2003

17.
Eye and gaze tracking for interactive graphic display   (cited: 11; self-citations: 0; by others: 11)
This paper describes a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require a static head to work well and a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a hierarchical classification scheme that deals with the classes that tend to be misclassified. This leads to an improvement in classification error. The angular gaze accuracy is about horizontally and vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve gaze-contingent interactive graphic display. Received: 21 July 2002, Accepted: 3 February 2004, Published online: 8 June 2004. Correspondence to: Qiang Ji
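A GRNN is essentially a kernel-weighted average of stored training outputs, so the gaze-mapping step can be sketched in a few lines; the feature layout, calibration data, and bandwidth below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of a generalized regression neural network (GRNN): the prediction is a
# Gaussian-kernel weighted average of the stored training outputs, so the
# mapping from pupil/glint features to screen coordinates needs no analytic form.
def grnn_predict(train_x, train_y, query, sigma=1.0):
    d2 = np.sum((train_x - query) ** 2, axis=1)        # squared distances to query
    w = np.exp(-d2 / (2.0 * sigma ** 2))               # kernel weights
    return (w[:, None] * train_y).sum(axis=0) / w.sum()

# Toy calibration set: pupil-parameter vectors -> screen coordinates (pixels).
features = np.array([[0.1, 0.2], [0.4, 0.1], [0.8, 0.7], [0.3, 0.9]])
screen   = np.array([[100, 120], [400, 110], [800, 600], [300, 700]])
print(grnn_predict(features, screen, np.array([0.35, 0.15]), sigma=0.2))
```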

18.
We consider asynchronous consensus in the shared-memory setting. We present the first efficient low-contention consensus algorithm for the weak-adversary-scheduler model. The algorithm achieves consensus in total work and (hot-spot) contention, both expected and with high probability. The algorithm assumes the value-oblivious scheduler, which is defined in the paper. Previous efficient consensus algorithms for weak adversaries suffer from memory contention. Yonatan Aumann: This work was partially completed while the author was at Harvard University, supported in part by ONR contract ONR-N00014-91-J-1981. Michael A. Bender: This work was supported in part by HRL Laboratories, Sandia National Laboratories, and NSF Grants ACI-032497, CCR-0208670, and EIA-0112849. This work was partially completed while the author was at Harvard University, supported in part by NSF grants CCR-9700365, CCR-9504436, and CCR-9313775. An early version of this paper was presented at the 23rd International Colloquium on Automata, Languages, and Programming (ICALP 96) [8].

19.
Unambiguity in alternating Turing machines has received considerable attention in the context of analyzing globally unique games by Aida et al. [ACRW] and in the design of efficient protocols involving globally unique games by Crasmaru et al. [CGRS]. This paper explores the power of unambiguity in alternating Turing machines in the following settings: 1. We show that the unambiguity-based hierarchies - AUPH, UPH, and UPH - are infinite in some relativized world. For each , we construct another relativized world where the unambiguity-based hierarchies collapse so that they have exactly k distinct levels and their k-th levels coincide with PSPACE. These results shed light on the relativized power of the unambiguity-based hierarchies, and parallel the results known for the case of the polynomial hierarchy. 2. For every , we define the bounded-level unambiguous alternating solution class UAS(k) as the class of all sets L for which there exists a polynomial-time alternating Turing machine N, which need not be unambiguous on every input, with at most k alternations such that if and only if x is accepted unambiguously by N. We construct a relativized world where, for all and . 3. Finally, we show that robustly k-level unambiguous alternating polynomial-time Turing machines, i.e., polynomial-time alternating Turing machines that for every oracle have k alternating levels and are unambiguous, accept languages that are computable in , for every oracle A. This generalizes a result of Hartmanis and Hemachandra [HH].

20.
We study various combinatorial complexity measures of Boolean functions related to some natural arithmetic problems about binary polynomials, that is, polynomials over . In particular, we consider the Boolean function deciding whether a given polynomial over is squarefree. We obtain an exponential lower bound on the size of a decision tree for this function, and derive an asymptotic formula, having a linear main term, for its average sensitivity. This allows us to estimate other complexity characteristics such as the formula size, the average decision tree depth and the degrees of exact and approximative polynomial representations of this function. Finally, using a different method, we show that testing squarefreeness and irreducibility of polynomials over cannot be done in for any odd prime p. Similar results are obtained for deciding coprimality of two polynomials over as well.
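The Boolean function studied here has a compact classical algorithm: a binary polynomial f of degree at least one is squarefree exactly when gcd(f, f') = 1. The sketch below applies this standard test with polynomials encoded as bitmasks; it illustrates the function being analysed, not the paper's lower-bound techniques.

```python
# Squarefreeness of a polynomial over GF(2), encoded as an integer bitmask in
# which bit i is the coefficient of x^i.  Over GF(2), only odd-degree terms
# survive differentiation, and a nonconstant polynomial with zero derivative is
# a perfect square (a polynomial in x^2).
def gf2_deriv(f):
    d = 0
    for i in range(1, f.bit_length()):
        if i % 2 == 1 and (f >> i) & 1:
            d |= 1 << (i - 1)
    return d

def gf2_mod(a, b):
    # Remainder of polynomial division over GF(2) via repeated top-bit cancellation.
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_squarefree(f):
    d = gf2_deriv(f)
    if d == 0:             # polynomial in x^2: a perfect square,
        return f == 1      # unless f is the constant 1
    return gf2_gcd(f, d) == 1

# x^3 + x + 1 is irreducible (hence squarefree); x^2 + 1 = (x + 1)^2 is not.
print(is_squarefree(0b1011), is_squarefree(0b101))   # True False
```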
