Similar Documents (20 results)
1.
We prove a relationship between the Cleaning problem and the Balanced Vertex-Ordering problem, namely that the minimum total imbalance of a graph equals twice the brush number of the graph. This equality has consequences for both problems. On one hand, it allows us to prove the NP-completeness of the Cleaning problem, which was conjectured by Messinger et al. [M.-E. Messinger, R.J. Nowakowski, P. Prałat, Cleaning a network with brushes, Theoret. Comput. Sci. 399 (2008) 191-205]. On the other hand, it also enables us to design a faster algorithm for the Balanced Vertex-Ordering problem [J. Kára, J. Kratochvíl, D. Wood, On the complexity of the balanced vertex ordering problem, Discrete Math. Theor. Comput. Sci. 9 (1) (2007) 193-202].
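As a concrete illustration of the quantity involved, the following minimal sketch (not taken from the paper) computes the total imbalance of a given vertex ordering: each vertex contributes the absolute difference between the number of its neighbours placed before it and after it, and the minimum total imbalance is this sum minimized over all orderings.

```python
def total_imbalance(adj, ordering):
    """Total imbalance of a vertex ordering: sum over all vertices of
    |#neighbours placed earlier - #neighbours placed later|.
    Illustrative sketch only; the paper relates the minimum of this
    quantity over all orderings to twice the brush number."""
    pos = {v: i for i, v in enumerate(ordering)}
    total = 0
    for v, neighbours in adj.items():
        before = sum(1 for u in neighbours if pos[u] < pos[v])
        after = len(neighbours) - before
        total += abs(before - after)
    return total
```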

2.
The early algorithms for in-place merging focused mainly on time complexity, while the structure of the algorithms themselves was neglected; as a result, most of them are intricate and of only theoretical significance. For this reason, the paper simplifies the unstable in-place merge of Geffert et al. [V. Geffert, J. Katajainen, T. Pasanen, Asymptotically efficient in-place merging, Theoret. Comput. Sci. 237 (2000) 159-181]. The simplified algorithm is simple yet practical, and has a small time complexity.
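For reference only, here is a straightforward rotation-based in-place merge of two adjacent sorted runs. It uses O(1) extra space but may take quadratic time, so it should be read as an illustration of the problem rather than as the algorithm discussed above.

```python
def merge_in_place(a, lo, mid, hi):
    """Merge the sorted runs a[lo:mid] and a[mid:hi] in place.
    Simple O(1)-extra-space sketch (quadratic worst case); not the
    asymptotically efficient merge of Geffert et al."""
    i, j = lo, mid
    while i < j < hi:
        if a[i] <= a[j]:
            i += 1
        else:
            # Rotate a[i..j] one step right, bringing a[j] in front of a[i].
            tmp = a[j]
            k = j
            while k > i:
                a[k] = a[k - 1]
                k -= 1
            a[i] = tmp
            i += 1
            j += 1
```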

3.
In the subset-sums ratio problem, we want to find two disjoint sets of numbers such that the ratio between the larger and the smaller of the two subset sums is as small as possible. We show a new Fully Polynomial-Time Approximation Scheme (FPTAS) for this problem which simplifies the algorithm proposed in [C. Bazgan, M. Santha, Z. Tuza, Efficient approximation algorithms for the SUBSET-SUMS EQUALITY problem, J. Comput. System Sci. 64 (2) (2002) 160–170, announced in ICALP 1998]. The key insight of the new algorithm is to solve the problem under the additional constraint that the output sets must contain certain designated numbers. While the new problem is harder (in the sense that the original problem can be reduced to it), it still admits an FPTAS, which is surprisingly simpler than the FPTAS for the original problem.
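For small instances the objective is easy to check by brute force; the hypothetical helper below (not the FPTAS of the paper) places each number in the first set, the second set, or neither, and returns the best achievable ratio.

```python
from itertools import product

def best_subset_sums_ratio(nums):
    """Exhaustive search for the subset-sums ratio objective on a small
    instance: two disjoint, non-empty subsets whose larger-to-smaller
    sum ratio is minimal.  Exponential time; illustration only."""
    best, best_assign = float('inf'), None
    for assign in product((0, 1, 2), repeat=len(nums)):  # 0 = unused
        s1 = sum(x for x, a in zip(nums, assign) if a == 1)
        s2 = sum(x for x, a in zip(nums, assign) if a == 2)
        if s1 == 0 or s2 == 0:
            continue
        ratio = max(s1, s2) / min(s1, s2)
        if ratio < best:
            best, best_assign = ratio, assign
    return best, best_assign
```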

4.
Approximation algorithms for terrain guarding
We present approximation algorithms and heuristics for several variations of terrain guarding problems, where we need to guard a terrain in its entirety by a minimum number of guards. Terrain guarding has applications in telecommunications, namely in the setting up of antenna networks for wireless communication. Our approximation algorithms transform the terrain guarding instance into a Minimum Set Cover instance, which is then solved by the standard greedy approximation algorithm [J. Comput. System Sci. 9 (1974) 256-278]. The approximation algorithms achieve approximation ratios of O(log n), where n is the number of vertices in the input terrain. We also briefly discuss some heuristic approaches for solving other variations of terrain guarding problems, for which no approximation algorithms are known. These heuristic approaches do not guarantee non-trivial approximation ratios but may still yield good solutions.
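The greedy Set Cover routine such a reduction relies on is standard; a minimal sketch (with an assumed input representation of a universe plus a list of candidate sets) is given below.

```python
def greedy_set_cover(universe, candidate_sets):
    """Classical greedy Set Cover: repeatedly pick the candidate set that
    covers the most still-uncovered elements, giving the well-known
    logarithmic approximation ratio."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(candidate_sets)),
                key=lambda k: len(candidate_sets[k] & uncovered))
        if not candidate_sets[i] & uncovered:
            raise ValueError("the candidate sets do not cover the universe")
        chosen.append(i)
        uncovered -= candidate_sets[i]
    return chosen
```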

5.
We consider the online scheduling problem with m−1 (m ≥ 2) uniform machines, each with a processing speed of 1, and one machine with a speed of s, 1 ≤ s ≤ 2, to minimize the makespan. The worst-case bound of the well-known list scheduling (LS) algorithm for this setting was established in [Y. Cho, S. Sahni, Bounds for list schedules on uniform processors, SIAM J. Comput. 9 (1980) 91-103]. An algorithm with a better competitive ratio was proposed in [R. Li, L. Shi, An on-line algorithm for some uniform processor scheduling, SIAM J. Comput. 27 (1998) 414-422]; it has a worst-case bound of 2.8795 for large m and s = 2. In this note we present a 2.45-competitive algorithm for m ≥ 4 and any s, 1 ≤ s ≤ 2.
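For context, a generic list scheduling rule on uniform machines assigns each job, in arrival order, to the machine on which it would finish earliest. The sketch below shows this baseline; it is not the 2.45-competitive algorithm of the note.

```python
def list_schedule(jobs, speeds):
    """Online list scheduling (LS) on uniform machines: place each job,
    in arrival order, on the machine that would complete it earliest.
    jobs[i] is a processing requirement; speeds[m] is a machine speed."""
    finish = [0.0] * len(speeds)
    assignment = []
    for p in jobs:
        m = min(range(len(speeds)), key=lambda k: finish[k] + p / speeds[k])
        finish[m] += p / speeds[m]
        assignment.append(m)
    return assignment, max(finish)   # schedule and resulting makespan
```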

6.
We describe an O(n^3/log n)-time algorithm for the all-pairs shortest paths problem for a real-weighted directed graph with n vertices. This slightly improves a series of previous, slightly subcubic algorithms by Fredman (SIAM J. Comput. 5:49–60, 1976), Takaoka (Inform. Process. Lett. 43:195–199, 1992), Dobosiewicz (Int. J. Comput. Math. 32:49–60, 1990), Han (Inform. Process. Lett. 91:245–250, 2004), Takaoka (Proc. 10th Int. Conf. Comput. Comb., Lect. Notes Comput. Sci., vol. 3106, pp. 278–289, Springer, 2004), and Zwick (Proc. 15th Int. Sympos. Algorithms and Computation, Lect. Notes Comput. Sci., vol. 3341, pp. 921–932, Springer, 2004). The new algorithm is surprisingly simple and different from previous ones. A preliminary version of this paper appeared in Proc. 9th Workshop Algorithms Data Struct. (WADS), Lect. Notes Comput. Sci., vol. 3608, pp. 318–324, Springer, 2005.
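The cubic baseline these algorithms improve upon is the classical Floyd-Warshall recurrence; a compact sketch is included here purely for reference.

```python
def floyd_warshall(dist):
    """All-pairs shortest paths on a dense n x n matrix in O(n^3) time --
    the baseline that the subcubic algorithms above improve upon.
    dist[i][j] is the direct edge weight (float('inf') if no edge)."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            dik = dist[i][k]
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    return dist
```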

7.
We present upper and lower bounds of the computational complexity of the two-way communication model of multiple-prover quantum interactive proof systems whose verifiers are limited to measure-many two-way quantum finite automata. We prove that (i) the languages recognized by those multiple-prover systems running in expected polynomial time are exactly the ones in NEXP, the nondeterministic exponential-time complexity class, (ii) if we further require verifiers to be one-way quantum finite automata, then their associated proof systems recognize context-free languages but not beyond languages in NE, the nondeterministic linear exponential-time complexity class, and moreover, (iii) when no time bound is imposed, the proof systems become as powerful as Turing machines. The first two results answer affirmatively an open question, posed by Nishimura and Yamakami [J. Comput. System Sci. 75 (2009) 255–269], of whether multiple-prover quantum interactive proof systems are more powerful than single-prover ones. Our proofs are simple and intuitive, although they heavily rely on an earlier result on multiple-prover classical interactive proof systems of Feige and Shamir [J. Comput. System Sci. 44 (1992) 259–271].

8.
Hashiguchi has studied the limitedness problem of distance automata (DA) in a series of papers [J. Comput. System Sci. 24 (1982) 233; Theoret. Comput. Sci. 72 (1990) 27; Theoret. Comput. Sci. 233 (2000) 19]. The distance of a DA can be limited or unbounded. Given that the distance of a DA is limited, Hashiguchi proved in Hashiguchi (2000) that the distance of the automaton is bounded by 2^(4n^3 + n lg(n+2) + n), where n is the number of states. In this paper, we study again Hashiguchi's solution to the limitedness problem. We have made a number of simplifications and improvements on Hashiguchi's method. We are able to improve the upper bound to 2^(3n^3 + n lg n + n − 1).

9.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics 86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n^5) time while the PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002) takes O(n^11) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case. Part of this work was done during a visit by Z.-Z. Chen to City University of Hong Kong.

10.
A box graph is the intersection graph of orthogonal rectangles in the plane. We show that maximum independent set and minimum vertex cover on box graphs can be solved in subexponential time by applying Miller's simple cycle planar separator theorem [J. Comput. System Sci. 32 (1986) 265-279] (in spite of the fact that the input box graph might be strongly non-planar).
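To make the terminology concrete, a box graph can be built explicitly from its rectangles; the sketch below (illustrative only, unrelated to the separator-based algorithm) constructs the adjacency lists of the intersection graph.

```python
def box_graph(rects):
    """Intersection graph of axis-parallel rectangles.
    rects[i] = (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2;
    two boxes are adjacent iff they overlap (touching counts)."""
    def overlap(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
    n = len(rects)
    return {i: [j for j in range(n) if j != i and overlap(rects[i], rects[j])]
            for i in range(n)}
```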

11.
In this paper, we prove that random graphs only have trivial stable colorings. Our result improves Theorem 4.1 in [Proc. 20th IEEE Symp. on Foundations of Comput. Sci., 1979, pp. 39-46]. It can be viewed as an effective version of Corollary 2.13 in [SIAM J. Comput. 29 (2) (2000) 590-599]. As a byproduct, we also give an upper bound on the size of induced regular subgraphs in random graphs.
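One common way to obtain a stable coloring is iterated colour refinement: vertices are repeatedly re-coloured by the multiset of their neighbours' colours until no class splits further. The generic sketch below is offered under that assumption; it is not the argument of the paper.

```python
def color_refinement(adj):
    """Naive colour refinement: refine vertex colours by the sorted
    multiset of neighbour colours until the colouring is stable."""
    colors = {v: 0 for v in adj}
    while True:
        signature = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                     for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {v: palette[signature[v]] for v in adj}
        if refined == colors:
            return colors            # stable colouring reached
        colors = refined
```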

12.
Optical interconnections have attracted the attention of many engineers and scientists owing to their potential for gigahertz transfer rates and concurrent access to the bus in a pipelined fashion. These unique characteristics of optical interconnections give us the opportunity to reconsider traditional algorithms designed for ideal parallel computing models, such as PRAMs. Since the PRAM model is far from practical, not all algorithms designed on this model can be implemented on a realistic parallel computing system. From this point of view, we study Cole's pipelined merge sort [Cole R. Parallel merge sort. SIAM J Comput 1988;14:770–85] on the CREW PRAM and extend it in an innovative way to an optical interconnection model, the LARPBS (Linear Array with Reconfigurable Pipelined Bus System) model [Pan Y, Li K. Linear array with a reconfigurable pipelined bus system—concepts and applications. J Inform Sci 1998;106:237–58]. Although Cole's algorithm is optimal, communication details have not been provided because it was designed for a PRAM. We close this gap in our sorting algorithm on the LARPBS model and obtain an O(log N)-time optimal sorting algorithm using O(N) processors. This is a substantial improvement over the previous best sorting algorithm on the LARPBS model, which runs in O(log N log log N) worst-case time using N processors [Datta A, Soundaralakshmi S, Owens R. Fast sorting algorithms on a linear array with a reconfigurable pipelined bus system. IEEE Trans Parallel Distribut Syst 2002;13(3):212–22]. Our solution allows processors to be assigned and reused efficiently. We also discover two new properties of Cole's sorting algorithm, which are presented as lemmas in this paper.

13.
We present a linear time approximation algorithm with a performance ratio of 1/2 for finding a maximum weight matching in an arbitrary graph. Such a result is already known and is due to Preis [STACS'99, Lecture Notes in Comput. Sci., Vol. 1563, 1999, pp. 259-269]. Our algorithm uses a new approach which is much simpler than the one given by Preis and needs no amortized analysis for its running time.
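A simpler way to achieve the same 1/2 performance ratio is the classical greedy rule that scans edges in non-increasing weight order; it runs in O(m log m) rather than linear time and is shown below only for comparison. It is neither Preis's algorithm nor the algorithm of this paper.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximate maximum weight matching: consider edges by
    non-increasing weight and keep an edge if both endpoints are free.
    edges is a list of (u, v, weight) triples."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```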

14.
The article “Replication and consistency in a distributed environment” by Breitbart and Korth [Yuri Breitbart, Henry F. Korth, J. Comput. System Sci. 59 (1) (1999) 29-69] presents replication graphs as an efficient means to handle concurrency control in replicated databases. This technical note identifies and explains two inaccuracies in this article:
- The basic global serializability protocol BGS given by Breitbart and Korth [Yuri Breitbart, Henry F. Korth, Replication and consistency in a distributed environment, J. Comput. System Sci. 59 (1) (1999) 29-69] does not always guarantee serializability if combined with two-phase locking. We show that this problem can be avoided with a minor change to the protocol.
- The theorem on minimal deadlock sets for the protocol BGS appears incorrect, and we give a counterexample to support this claim, together with a brief discussion of its consequences.
Please note that this does not affect the applicability of replication graphs as a concept.

15.
A new decomposition scheme for bipartite graphs, namely canonical decomposition, was introduced by Fouquet et al. [Internat. J. Found. Comput. Sci. 10 (1999) 513-533]. The so-called weak-bisplit graphs are totally decomposable with respect to this decomposition. We present here some optimization problems for general bipartite graphs which can be solved efficiently on weak-bisplit graphs.

16.
Borodin et al. (Algorithmica 37(4):295–326, 2003) gave a model of greedy-like algorithms for scheduling problems, and Angelopoulos and Borodin (Algorithmica 40(4):271–291, 2004) extended their work to facility location and set cover problems. We generalize their model to include other optimization problems, and apply the generalized framework to graph problems. Our goal is to define an abstract model that captures the intrinsic power and limitations of greedy algorithms for various graph optimization problems, as Borodin et al. (Algorithmica 37(4):295–326, 2003) did for scheduling. We prove bounds on the approximation ratio achievable by such algorithms for basic graph problems such as shortest path, weighted vertex cover, Steiner tree, and independent set. For example, we show that, for the shortest path problem, no algorithm in the FIXED priority model can achieve any approximation ratio (even one dependent on the graph size), but the well-known Dijkstra's algorithm is an optimal ADAPTIVE priority algorithm. We also prove that the approximation ratio for weighted vertex cover achievable by ADAPTIVE priority algorithms is exactly 2. Here, a new lower bound matches the known upper bounds (Johnson in J. Comput. Syst. Sci. 9(3):256–278, 1974). We give a number of other lower bounds for priority algorithms, as well as a new approximation algorithm for the minimum Steiner tree problem with weights in the interval [1,2]. S. Davis's research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332. Views expressed are not endorsed by the NSF. R. Impagliazzo's research was supported by NSF grants CCR-0098197, CCR-0313241, and CCR-0515332. Views expressed are not endorsed by the NSF. Some work was done while at the Institute for Advanced Study, supported by the State of New Jersey.
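Since the abstract singles out Dijkstra's algorithm as an optimal ADAPTIVE priority algorithm for shortest paths, a standard binary-heap implementation is sketched below for reference (the adjacency-list input format and non-negative weights are assumptions of the sketch).

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra's single-source shortest paths: repeatedly settle the
    vertex of smallest tentative distance.  adj[u] is a list of
    (v, weight) pairs with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```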

17.
Very high order residual distribution schemes for scalar problems on unstructured triangular meshes were constructed in [Abgrall R, Roe PL. High order fluctuation schemes on triangular meshes. J Sci Comput 2003;19(1-3):3-36]. However, that construction was quite involved and not very flexible. Here, following [Abgrall R. Essentially non-oscillatory residual distribution schemes for hyperbolic problems. J Comput Phys 2006;214(2):773-808], we develop a systematic way of constructing very high order non-oscillatory schemes for such meshes. Applications to scalar problems and to systems are given.

18.
Resolving an issue open since Fenner, Fortnow, and Kurtz raised it in [S. Fenner, L. Fortnow, S. Kurtz, Gap-definable counting classes, J. Comput. System Sci. 48 (1) (1994) 116–148], we prove that LWPP is not uniformly gap-definable and that WPP is not uniformly gap-definable. We do so in the context of a broader investigation, via the polynomial degree bound technique, of the lowness, Turing hardness, and inclusion relationships of counting and other central complexity classes.

19.
Double hashing with bucket capacity one is augmented with multiple passbits to obtain a significant reduction in unsuccessful search lengths. This improves the analysis of Martini et al. [P.M. Martini, W.A. Burkhard, Double hashing with multiple passbits, Internat. J. Found. Comput. Sci. 14 (6) (2003) 1165-1188] by providing a closed-form expression for the expected unsuccessful search length.
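The probe sequence of plain double hashing with bucket capacity one is sketched below; the passbit mechanism analysed in the paper is not modelled, and h1 and h2 are assumed hash functions supplied by the caller (with the table size chosen prime so that every non-zero step visits all slots).

```python
def double_hash_probes(key, table_size, h1, h2):
    """Yield the probe sequence of classic double hashing:
    slot_i = (h1(key) + i * h2(key)) mod table_size."""
    start = h1(key) % table_size
    step = h2(key) % table_size or 1      # step must be non-zero
    for i in range(table_size):
        yield (start + i * step) % table_size
```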

20.
We present a quadratic identity on the number of perfect matchings of plane graphs by the method of graphical condensation, which generalizes the results found by Propp [J. Propp, Generalized domino-shuffling, Theoret. Comput. Sci. 303 (2003) 267–301], Kuo [E.H. Kuo, Applications of graphical condensation for enumerating matchings and tilings, Theoret. Comput. Sci. 319 (2004) 29–57], and Yan, Yeh, and Zhang [W.G. Yan, Y.-N. Yeh, F.J. Zhang, Graphical condensation of plane graphs: A combinatorial approach, Theoret. Comput. Sci. 349 (2005) 452–461].
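To make the counted quantity concrete, the number of perfect matchings of a small graph can be computed by exhaustive search, matching the first remaining vertex with each of its neighbours in turn; the sketch below is exponential-time and unrelated to graphical condensation.

```python
def count_perfect_matchings(adj, vertices=None):
    """Brute-force count of perfect matchings: match the first remaining
    vertex with each neighbour and recurse on what is left."""
    if vertices is None:
        vertices = sorted(adj)
    if not vertices:
        return 1
    v, rest = vertices[0], vertices[1:]
    return sum(count_perfect_matchings(adj, [x for x in rest if x != u])
               for u in adj[v] if u in rest)
```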
