Similar Articles
20 similar articles found (search time: 46 ms)
1.
2.
Danvy’s functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe “printf” function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe “scanf” function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL ’85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
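To make the continuation-passing idea concrete, here is a minimal Python sketch of format combinators in the style of Danvy's solution. All names (lit, num, cat, sprintf) are illustrative, and Python cannot express the Hindley-Milner typing guarantee that the ML solution statically enforces; the sketch shows only the control structure.

```python
# A format is a function from a continuation (String -> t) to the rest
# of the computation; composing fragments threads the output string.

def lit(s):
    """A literal piece of the format: consumes no argument."""
    return lambda k: lambda out: k(out + s)

def num(k):
    """A %d-style directive: consumes one integer argument."""
    return lambda out: lambda n: k(out + str(n))

def cat(f, g):
    """Sequential composition of two format fragments."""
    return lambda k: f(g(k))

def sprintf(fmt):
    """Run a format: start with the identity continuation and empty output."""
    return fmt(lambda out: out)("")

fmt = cat(lit("x = "), cat(num, lit("!")))
print(sprintf(fmt)(42))  # -> "x = 42!"
```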

3.
Huaming Zhang, Algorithmica 57(2):381–397, 2010
We study the problem of transforming plane triangulations into irreducible triangulations, which are plane graphs with a quadrangular exterior face, triangular interior faces and no separating triangles. Our linear time transformation reveals important relations between the minimum Schnyder’s realizers of plane triangulations (Bonichon et al., Proceedings of the 20th Annual Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science, vol. 2607, pp. 499–510, Springer, Berlin, 2003; Research Report RR-1279-02, LaBRI, University of Bordeaux, France; Brehm, Diploma thesis, FB Mathematik und Informatik, Freie Universität Berlin, 2000) and the transversal structures of irreducible triangulations (Fusy, Proceedings of 13th International Symposium on Graph Drawing, Lecture Notes in Computer Science, vol. 3843, pp. 177–188, Springer, Berlin, 2005; He, SIAM J. Comput. 22:1218–1226, 1993). The transformation morphs a 3-connected plane graph into an internally 4-connected plane graph. Therefore some of the graph algorithms designed specifically for 4-connected plane graphs can be applied to 3-connected plane graphs indirectly. As an example of such applications, we present a linear time algorithm that produces a planar polyline drawing for a plane graph with n vertices in a grid of size bounded by W×H, where $W\leq\lfloor\frac{2n-2}{3}\rfloor$ and $W+H\leq\lfloor\frac{4n-4}{3}\rfloor$. It uses at most $\lfloor\frac{2n-5}{3}\rfloor$ bends, and each edge uses at most one bend. Our algorithm is area optimal. Compared with the existing area optimal polyline drawing algorithm proposed in Bonichon et al. (Proceedings of the 28th International Workshop on Graph-Theoretic Concepts in Computer Science, Lecture Notes in Computer Science, vol. 2573, pp. 35–46, Springer, Berlin, 2002), our algorithm uses a smaller number of bends. Their bend bound is n−2.

4.
The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360–368, 2006) in the context of biological networks. The problem is to decide if a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a graph pattern-matching problem variant, where the structure of the occurrence of the pattern is not of interest but the only requirement is the connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575–586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653–664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software.
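To pin down the definition (not the paper's algebraic FPT technique), a brute-force Graph Motif checker might look like this; the names are illustrative and the running time is exponential.

```python
from collections import Counter
from itertools import combinations

def has_motif(adj, color, motif):
    """Decide Graph Motif by brute force: is there a connected set of
    vertices whose multiset of colors equals the multiset `motif`?
    adj: dict vertex -> set of neighbours; color: dict vertex -> color."""
    k = len(motif)
    target = Counter(motif)
    for subset in combinations(adj, k):
        if Counter(color[v] for v in subset) != target:
            continue
        # Check connectedness of the induced subgraph by a traversal.
        chosen, seen, stack = set(subset), {subset[0]}, [subset[0]]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in chosen and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == k:
            return True
    return False

adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
color = {1: "r", 2: "g", 3: "r", 4: "g"}
print(has_motif(adj, color, ["r", "g", "r"]))  # True: {1,2,3} is connected
```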

5.
Given natural limitations on the length of DNA sequences, designing phylogenetic reconstruction methods which are reliable under limited information is a crucial endeavor. There have been two approaches to this problem: reconstructing partial but reliable information about the tree (Mossel in IEEE Comput. Biol. Bioinform. 4:108–116, 2007; Daskalakis et al. in SIAM J. Discrete Math. 25:872–893, 2011; Daskalakis et al. in Proc. of RECOMB 2006, pp. 281–295, 2006; Gronau et al. in Proc. of the 19th Annual SODA 2008, pp. 379–388, 2008), and reaching “deeper” in the tree through reconstruction of ancestral sequences. In the latter category, Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006) settled an important conjecture of M. Steel (My favourite conjecture. Preprint, 2001), showing that, under the CFN model of evolution, all trees on n leaves with edge lengths bounded by the Ising model phase transition can be recovered with high probability from genomes of length O(log n) with a polynomial time algorithm. Their methods had a running time of O(n^{10}). Here we enhance our methods from Daskalakis et al. (Proc. of RECOMB 2006, pp. 281–295, 2006) with the learning of ancestral sequences and provide an algorithm for reconstructing a sub-forest of the tree which is reliable given available data, without requiring a priori known bounds on the edge lengths of the tree. Our methods are based on an intuitive minimum spanning tree approach and run in O(n^{3}) time. For the case of full reconstruction of trees with edges under the phase transition, we maintain the same asymptotic sequence length requirements as in Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006), despite the considerably faster running time.

6.
In this paper, a compressed membership problem for finite automata, both deterministic (DFAs) and non-deterministic (NFAs), with compressed transition labels is studied. The compression is represented by straight-line programs (SLPs), i.e. context-free grammars generating exactly one string. A novel technique of dealing with SLPs is employed: the SLPs are recompressed, so that substrings of the input word are encoded in the SLPs labelling the transitions of the NFA (DFA) in the same way as in the SLP representing the input text. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. Furthermore, in order to reflect the recompression in the NFA, we need to modify it only a little; in particular, its size stays polynomial in the input size. Using this technique it is shown that the compressed membership problem for NFAs with compressed labels is in NP, thus confirming the conjecture of Plandowski and Rytter (Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999) and extending the partial result of Lohrey and Mathissen (in CSR, LNCS, vol. 6651, pp. 275–288, Springer, Berlin, 2011); as this problem is known to be NP-hard (Plandowski and Rytter, Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999), we settle its exact computational complexity. Moreover, the same technique applied to the compressed membership problem for DFAs with compressed labels yields that this problem is in P; this problem is known to be P-hard (Markey and Schnoebelen, Inf. Process. Lett. 90(1):3–6, 2004; Beaudry et al., SIAM J. Comput. 26(1):138–152, 1997).
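For readers unfamiliar with the compression model, a straight-line program can be pictured as a grammar in which every nonterminal has exactly one rule, so it derives exactly one string. A tiny illustrative expander follows (names illustrative; fully decompressing defeats the purpose in general, since the derived string can be exponentially longer than the grammar).

```python
def expand(slp, symbol):
    """Expand an SLP nonterminal to the unique string it derives.
    slp maps each nonterminal to its single right-hand side, a sequence
    of terminals (1-char strings) and nonterminals."""
    rhs = slp.get(symbol)
    if rhs is None:          # terminal symbol
        return symbol
    return "".join(expand(slp, s) for s in rhs)

# X3 -> X2 X2, X2 -> X1 X1, X1 -> "ab": derives "ab" repeated 4 times.
slp = {"X1": ["a", "b"], "X2": ["X1", "X1"], "X3": ["X2", "X2"]}
print(expand(slp, "X3"))  # abababab
```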

7.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. Our algorithm gives an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs as a corollary. This is the first non-trivial exact exponential time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
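As a definition-level illustration (not the paper's algorithm), a brute-force computation of cutwidth over all layouts might look like this; it runs in O(n!) time and is only meant to pin down the objective.

```python
from itertools import permutations

def cutwidth(vertices, edges):
    """Exact cutwidth by enumerating all linear layouts. The cut after
    position i counts the edges with exactly one endpoint among the
    first i+1 vertices of the layout."""
    best = float("inf")
    for layout in permutations(vertices):
        pos = {v: i for i, v in enumerate(layout)}
        width = max(
            (sum(1 for u, v in edges if (pos[u] <= i) != (pos[v] <= i))
             for i in range(len(layout) - 1)),
            default=0,
        )
        best = min(best, width)
    return best

# A 4-cycle has cutwidth 2.
print(cutwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```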

8.
The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k n^{O(1)}. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k n^{O(1)}, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS, due to Thomassé (ACM Trans. Algorithms 6(2), 2010).

9.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past are based on the penalization of the ℓ1 norm of wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimizations based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 1:97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images with even higher quality. This study reassures the feasibility of using the ℓ0 “norm” for image restoration problems.
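The practical difference between the two penalties is easiest to see through their proximal operators: ℓ1 penalization of coefficients leads to soft thresholding, while ℓ0 penalization leads to hard thresholding. A small numpy sketch of these standard operators follows (it is not the MDAL iteration itself).

```python
import numpy as np

def soft_threshold(x, lam):
    """prox of lam*||x||_1: shrink every coefficient toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """prox of lam*||x||_0: keep a coefficient iff |x| > sqrt(2*lam)."""
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

x = np.array([-3.0, -0.5, 0.1, 2.0])
print(soft_threshold(x, 1.0))  # [-2. -0.  0.  1.]  (all entries shrunk)
print(hard_threshold(x, 1.0))  # [-3.  0.  0.  2.]  (survivors kept intact)
```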

10.
Matthias Möller, Computing 95(5):425–448, 2013
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method by solution-dependent artificial diffusion. Its magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is attested to $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC-type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.

11.
The goal of clustering is to identify subsets called clusters which usually correspond to objects that are more similar to each other than they are to objects from other clusters. We have proposed the MACLAW method, a cooperative coevolution algorithm for data clustering, which has shown good results (Blansché and Gançarski, Pattern Recognit. Lett. 27(11), 1299–1306, 2006). However the complexity of the algorithm increases rapidly with the number of clusters to find. We propose in this article a parallelization of MACLAW, based on a message-passing paradigm, as well as an analysis of the application's performance, with experimental results. We show that we reach near-optimal speedups when searching for 16 clusters, a typical problem instance for which the sequential execution duration is an obstacle to the MACLAW method. Further, our approach is original because we use the P2P-MPI grid middleware (Genaud and Rattanapoka, Lecture Notes in Comput. Sci., vol. 3666, pp. 276–284, 2005), which both provides the message passing library and infrastructure services to discover computing resources. We also put forward that the application can be tightly coupled with the middleware to make the parallel execution nearly transparent for the user.

12.
We first consider the problem of finding a maximum size stable matching if incomplete lists and ties are both allowed, but ties are on one side only. For this problem we give a simple, linear time 3/2-approximation algorithm, improving on the best known approximation factor 5/3 of Irving and Manlove (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007). Next, we show how this extends to the Hospitals/Residents problem with the same ratio if the residents have strict orders. We also give a simple linear time algorithm for the general problem with approximation factor 5/3, improving on the best known 15/8-approximation algorithm of Iwama, Miyazaki and Yamauchi (SODA ’07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 288–297, 2007). For the cases considered in this paper it is NP-hard to approximate within a factor of 21/19 by the result of Halldórsson et al. (ACM Transactions on Algorithms 3(3):30, 2007). Our algorithms not only give better approximation ratios than the cited ones, but are much simpler and run significantly faster. Also, we may drop a restriction used in (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007), and the analysis is substantially simpler. Preliminary versions of this paper appeared in (Király, Egres Technical Report TR-2008-04, www.cs.elte.hu/egres/, 2008; Király in Proceedings of MATCH-UP 2008: Matching Under Preferences – Algorithms and Complexity, Satellite Workshop of ICALP, July 6, 2008, Reykjavík, Iceland, pp. 36–45, 2008; Király in ESA 2008, Lecture Notes in Computer Science, vol. 5193, pp. 623–634, 2008). For related results obtained since then, see Sect. 5.
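For background, the classical Gale–Shapley algorithm solves the basic variant with complete strict preference lists exactly; the difficulty addressed in the paper comes from ties and incomplete lists. A standard sketch of that baseline follows (it is not the paper's 3/2-approximation algorithm).

```python
def gale_shapley(men_pref, women_pref):
    """Classical stable matching for complete strict preference lists.
    men_pref[m] is m's ranked list of women; likewise women_pref[w]."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    next_choice = {m: 0 for m in men_pref}
    engaged = {}                      # woman -> man
    free = list(men_pref)
    while free:
        m = free.pop()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)            # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # {'b': 'y', 'a': 'x'}
```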

13.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable for bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).

14.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
  2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, s^{(c)}(n), is less than 2^n.
  3. Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam because the notion of weak uniformity more fully and easily interpolates between full uniformity and non-uniformity of circuits.

15.
Ordering heuristics are a powerful tool in CSP search algorithms. Among the most successful ordering heuristics are heuristics which enforce a fail-first strategy by using the Min-domain property (Haralick and Elliott, Artif. Intell. 14:263–313, 1980; Bessiere and Regin, MAC and combined heuristics: two reasons to forsake FC (and CBJ?) on hard problems. In Proc. CP 96, pp. 61–75, Cambridge, MA, 1996; Smith and Grant, Trying harder to fail first. In European Conference on Artificial Intelligence, pp. 249–253, 1998; Dechter, Constraint Processing. Morgan Kaufmann, 2003). Ordering heuristics have been introduced recently to asynchronous backtracking (ABT), for distributed constraint satisfaction (DisCSP) (Zivan and Meisels, Dynamic ordering for asynchronous backtracking on DisCSPs. In CP-2005, pp. 32–46, Sitges (Barcelona), Spain, 2005). However, the pioneering study of dynamically ordered ABT, ABT_DO, has shown that a straightforward implementation of the Min-domain heuristic does not produce the expected improvement over a static ordering. The present paper proposes an asynchronous dynamic ordering which does not follow the standard restrictions on the position of reordered agents in ABT_DO. Agents can be moved to a position that is higher than that of the target of the backtrack. Combining the Nogood-triggered heuristic and the Min-domain property in this new class of heuristics results in the best performing version of ABT_DO. The new version of retroactively ordered ABT is faster by a large factor than the best form of ABT.
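The Min-domain (fail-first) rule at the heart of these heuristics is simple to state in code: always branch on the variable with the fewest remaining consistent values. A minimal, centralized illustration follows (the ABT_DO protocol itself is distributed and asynchronous; names are illustrative).

```python
def select_min_domain(domains, assigned):
    """Fail-first: pick the unassigned variable with the smallest
    remaining domain, so likely dead ends are discovered early."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {"x": {1, 2, 3}, "y": {1}, "z": {2, 3}}
print(select_min_domain(domains, assigned={}))  # 'y' (domain of size 1)
```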

16.
The Travelling Salesman Problem is one of the fundamental and intensively studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides’s algorithm with an approximation factor of \(\frac{3}{2}\) , even though the so-called Held-Karp LP relaxation of the problem is conjectured to have the integrality gap of only \(\frac{4}{3}\) . Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al. (FOCS, 550–559, 2011), and then by Mömke and Svensson (FOCS, 560–569, 2011). In this paper, we provide an improved analysis of the approach presented in Mömke and Svensson (FOCS, 560–569, 2011) yielding a bound of \(\frac{13}{9}\) on the approximation factor, as well as a bound of \(\frac{19}{12}+\varepsilon\) for any ε>0 for a more general Travelling Salesman Path Problem in graphic metrics.

17.
We consider the problem of doing fast and reliable estimation of the number z of non-zero entries in a sparse Boolean matrix product. This problem has applications in databases and computer algebra. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1±ε approximation of z (with small probability of error) in expected time O(n) for any $\varepsilon> 4/\sqrt[4]{z}$. The previously best estimation algorithm, due to Cohen (J. Comput. Syst. Sci. 53(3):441–453, 1997), uses time O(n/ε²). We also present a variant using O(sort(n)) I/Os in expectation in the cache-oblivious model. In contrast to these results, the currently best algorithms for computing a sparse Boolean matrix product use time ω(n^{4/3}) (resp. ω(n^{4/3}/B) I/Os), even if the result matrix is restricted to nonzero entries. Our algorithm combines the size estimation technique of Bar-Yossef et al. (Proceedings of the 6th International Workshop on Randomization and Approximation Techniques (RANDOM ’02), pp. 1–10, 2002) with a particular class of pairwise independent hash functions that allows the sketch of sets of a certain product form to be computed in expected time O(n) and O(sort(n)) I/Os. We then describe how sampling can be used to maintain (independent) sketches of matrices that allow estimation to be performed in time o(n) if z is sufficiently large. This gives a simpler alternative to the sketching technique of Ganguly et al. (Proceedings of the 24th ACM Symposium on Principles of Database Systems (PODS ’05), pp. 259–270, 2005), and matches a space lower bound shown in that paper. Finally, we present experiments on real-world data sets that show the accuracy of both our methods to be significantly better than the worst-case analysis predicts.
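The size-estimation primitive of Bar-Yossef et al. can be illustrated with the classic k-minimum-values idea: hash each element to [0,1), keep the k smallest hash values, and estimate the number of distinct elements as (k−1)/v_k, where v_k is the k-th smallest value kept. A self-contained sketch under those assumptions follows (names illustrative, not the paper's matrix-product machinery).

```python
import hashlib

def kmv_estimate(stream, k=64):
    """Estimate the number of distinct elements in `stream` by keeping
    the k smallest hash values in [0, 1) and returning (k-1)/v_k.
    Falls back to exact counting when there are fewer than k values."""
    def h(x):  # hash to a pseudo-uniform value in [0, 1)
        d = hashlib.sha1(str(x).encode()).digest()
        return int.from_bytes(d[:8], "big") / 2**64
    smallest = sorted(set(h(x) for x in stream))[:k]
    if len(smallest) < k:
        return len(smallest)
    return (k - 1) / smallest[-1]

data = [i % 1000 for i in range(10_000)]   # 1000 distinct values
print(round(kmv_estimate(data)))           # roughly 1000
```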

18.
We study the exact controllability, by a reduced number of controls, of coupled cascade systems of PDE's and the existence of exact insensitizing controls for the scalar wave equation. We give a necessary and sufficient condition for the observability of abstract-coupled cascade hyperbolic systems by a single observation, the observation operator being either bounded or unbounded. Our proof extends the two-level energy method introduced in Alabau-Boussouira (SIAM J Control Opt 42:871–906, 2003) and Alabau-Boussouira and Léautaud (J Math Pures Appl 99:544–576, 2013) for symmetric coupled systems, to cascade systems which are examples of non-symmetric coupled systems. In particular, we prove the observability of two coupled wave equations in cascade if the observation and coupling regions both satisfy the Geometric Control Condition (GCC) of Bardos et al. (SIAM J Control Opt 30:1024–1065, 1992). By duality, this solves the exact controllability, by a single control, of 2-coupled abstract cascade hyperbolic systems. Using transmutation, we give null-controllability results for the multidimensional heat and Schrödinger 2-coupled cascade systems under GCC and for any positive time. By our method, we can treat cases where the control and coupling coefficients have disjoint supports, partially solving an open question raised by de Teresa (CPDE 25:39–72, 2000). Moreover we answer the question of the existence of exact insensitizing locally distributed as well as boundary controls of scalar multidimensional wave equations, raised by Lions (Actas del Congreso de Ecuaciones Diferenciales y Aplicaciones (CEDYA), Universidad de Málaga, pp 43–54, 1989) and later on by Dáger (SIAM J Control Opt 45:1758–1768, 2006) and Tebou (C R Acad Sci Paris 346(Sér I):407–412, 2008).

19.
Given a set of points \(P \subset\mathbb{R}^{d}\) , the k-means clustering problem is to find a set of k centers \(C = \{ c_{1},\ldots,c_{k}\}, c_{i} \in\mathbb{R}^{d}\) , such that the objective function \(\sum_{x \in P} e(x,C)^{2}\) , where e(x,C) denotes the Euclidean distance between x and the closest center in C, is minimized. This is one of the most prominent objective functions that has been studied with respect to clustering. D²-sampling (Arthur and Vassilvitskii, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA’07, pp. 1027–1035, SIAM, Philadelphia, 2007) is a simple non-uniform sampling technique for choosing points from a set of points. It works as follows: given a set of points \(P \subset\mathbb{R}^{d}\) , the first point is chosen uniformly at random from P. Subsequently, a point from P is chosen as the next sample with probability proportional to the square of the distance of this point to the nearest previously sampled point. D²-sampling has been shown to have nice properties with respect to the k-means clustering problem. Arthur and Vassilvitskii (Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA’07, pp. 1027–1035, SIAM, Philadelphia, 2007) show that k points chosen as centers from P using D²-sampling give an O(log k) approximation in expectation. Ailon et al. (NIPS, pp. 10–18, 2009) and Aggarwal et al. (Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 15–28, Springer, Berlin, 2009) extended the results of Arthur and Vassilvitskii to show that O(k) points chosen as centers using D²-sampling give an O(1) approximation to the k-means objective function with high probability. In this paper, we further demonstrate the power of D²-sampling by giving a simple randomized (1+ε)-approximation algorithm that uses D²-sampling in its core.
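The sampling procedure described above translates directly into code; a minimal sketch follows (this is the k-means++ style seeding step, with illustrative names).

```python
import random

def d2_sample(points, k, rng=random):
    """Pick k centers by D^2-sampling: the first uniformly at random,
    then each next point with probability proportional to its squared
    distance to the nearest center chosen so far."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [rng.choice(points)]
    while len(centers) < k:
        weights = [min(sq_dist(p, c) for c in centers) for p in points]
        centers.append(rng.choices(points, weights=weights, k=1)[0])
    return centers

pts = [(0, 0), (0, 1), (10, 10), (10, 11), (20, 0)]
print(d2_sample(pts, 3))  # tends to pick centers from well-separated groups
```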

20.
The NP-complete problem Proper Interval Vertex Deletion is to decide whether an input graph on n vertices and m edges can be turned into a proper interval graph by deleting at most k vertices. Van Bevern et al. (In: Proceedings WG 2010. Lecture notes in computer science, vol. 6410, pp. 232–243, 2010) showed that this problem can be solved in $\mathcal {O}((14k +14)^{k+1} kn^{6})$ time. We improve this result by presenting an $\mathcal {O}(6^{k} kn^{6})$ time algorithm for Proper Interval Vertex Deletion. Our fixed-parameter algorithm is based on a new structural result stating that every connected component of a {claw, net, tent, C₄, C₅, C₆}-free graph is a proper circular arc graph, combined with a simple greedy algorithm that solves Proper Interval Vertex Deletion on {claw, net, tent, C₄, C₅, C₆}-free graphs in $\mathcal {O}(n+m)$ time. Our approach also yields a polynomial-time 6-approximation algorithm for the optimization variant of Proper Interval Vertex Deletion.
