Similar Literature
20 similar documents found.
1.
Numerous problems in Theoretical Computer Science can be solved very efficiently using powerful algebraic constructions. Computing shortest paths, constructing expanders, and proving the PCP Theorem are just a few examples of this phenomenon. The quest for combinatorial algorithms that do not use heavy algebraic machinery, but are roughly as efficient, has become a central field of study in this area. Combinatorial algorithms are often simpler than their algebraic counterparts. Moreover, in many cases, combinatorial algorithms and proofs provide additional understanding of the studied problems. In this paper we initiate the study of combinatorial algorithms for Distributed Graph Coloring problems. In a distributed setting a communication network is modeled by a graph $G=(V,E)$ of maximum degree $\varDelta$. The vertices of $G$ host the processors, and communication is performed over the edges of $G$. The goal of distributed vertex coloring is to color $V$ with $(\varDelta + 1)$ colors such that any two neighbors are colored with distinct colors. Currently, efficient algorithms for vertex coloring that require $O(\varDelta + \log^* n)$ time are based on the algebraic algorithm of Linial (SIAM J Comput 21(1):193–201, 1992) that employs set-systems. The best currently-known combinatorial, set-system-free algorithm, due to Goldberg et al. (SIAM J Discret Math 1(4):434–446, 1988), requires $O(\varDelta^2+\log^* n)$ time. We significantly improve over this by devising a combinatorial $(\varDelta + 1)$-coloring algorithm that runs in $O(\varDelta + \log^* n)$ time. This exactly matches the running time of the best-known algebraic algorithm. In addition, we devise a tradeoff for computing an $O(\varDelta \cdot t)$-coloring in $O(\varDelta/t + \log^* n)$ time, for almost the entire range $1 < t < \varDelta$. We also compute a Maximal Independent Set in $O(\varDelta + \log^* n)$ time on general graphs, and in $O(\log n/ \log \log n)$ time on graphs of bounded arboricity. Prior to our work, these results could only be achieved using algebraic techniques. We believe that our algorithms are more suitable for real-life networks with limited resources, such as sensor networks.
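
As a point of reference for what a $(\varDelta+1)$-coloring is, the following minimal sequential greedy sketch (not the paper's distributed algorithm; the graph, vertex names and greedy order are illustrative assumptions) shows why $\varDelta+1$ colors always suffice: a vertex with at most $\varDelta$ neighbors always has a free color.

```python
# Sequential greedy (Delta+1)-coloring -- an illustrative sketch only.
# The paper's algorithms are distributed; this just shows why Delta+1
# colors always suffice: a vertex of degree <= Delta sees at most Delta
# occupied colors, so one of the Delta+1 colors is always free.

def greedy_coloring(adj):
    """adj: dict mapping vertex -> iterable of neighbors."""
    delta = max((len(nbrs) for nbrs in adj.values()), default=0)
    color = {}
    for v in adj:                       # any vertex order works
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(delta + 1) if c not in used)
    return color

if __name__ == "__main__":
    # 5-cycle: Delta = 2, so 3 colors suffice.
    cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(greedy_coloring(cycle))
```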

2.
For hyper-rectangles in $\mathbb{R}^{d}$, Auer (1997) proved a PAC bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$, where $\varepsilon$ and $\delta$ are the accuracy and confidence parameters. It is still an open question whether one can obtain the same bound for intersection-closed concept classes of VC-dimension $d$ in general. We present a step towards a solution of this problem, showing on one hand a new PAC bound of $O(\frac{1}{\varepsilon}(d\log d + \log \frac{1}{\delta}))$ for arbitrary intersection-closed concept classes, complementing the well-known bounds $O(\frac{1}{\varepsilon}(\log \frac{1}{\delta}+d\log \frac{1}{\varepsilon}))$ and $O(\frac{d}{\varepsilon}\log \frac{1}{\delta})$ of Blumer et al. (1989) and Haussler, Littlestone and Warmuth (1994). Our bound is established using the closure algorithm, which generates as its hypothesis the intersection of all concepts that are consistent with the positive training examples. On the other hand, we show that many intersection-closed concept classes, including e.g. maximum intersection-closed classes, satisfy an additional combinatorial property that allows a proof of the optimal bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$. For such improved bounds the choice of the learning algorithm is crucial, as there are consistent learning algorithms that need $\Omega(\frac{1}{\varepsilon}(d\log\frac{1}{\varepsilon} +\log\frac{1}{\delta}))$ examples to learn some particular maximum intersection-closed concept classes.
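
To make the closure algorithm concrete for the hyper-rectangle case mentioned above, here is a small sketch (not from the paper; the data, dimension and function names are assumptions): the hypothesis is simply the coordinate-wise bounding box of the positive examples, i.e. the intersection of all rectangles consistent with them.

```python
import numpy as np

# Closure algorithm specialized to axis-aligned hyper-rectangles in R^d:
# the hypothesis is the smallest rectangle containing all positive examples,
# i.e. the intersection of all consistent concepts.  Illustrative sketch only.

def closure_hypothesis(positive_points):
    """positive_points: (m, d) array of positively labeled samples."""
    lo = positive_points.min(axis=0)
    hi = positive_points.max(axis=0)
    return lo, hi

def predict(lo, hi, x):
    return bool(np.all(lo <= x) and np.all(x <= hi))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target_lo, target_hi = np.array([0.2, 0.3]), np.array([0.7, 0.9])
    pts = rng.uniform(0, 1, size=(200, 2))
    pos = pts[np.all((pts >= target_lo) & (pts <= target_hi), axis=1)]
    lo, hi = closure_hypothesis(pos)
    print(lo, hi)  # shrinks toward the target rectangle as samples grow
```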

3.
In this paper we study gossip-based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)\varDelta)$ rounds with high probability, where $D$ and $\varDelta$ are the diameter and the maximum degree in the network, respectively. For many topologies and selections of $k$ this bound improves previous results; in particular, for graphs with a constant maximum degree it implies that uniform gossip is order optimal and the stopping time is $\varTheta(k + D)$. To eliminate the factor of $\varDelta$ from the upper bound we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $\mathcal{S}$. The stopping time of TAG is $O(k+\log n +d(\mathcal{S})+t(\mathcal{S}))$, where $t(\mathcal{S})$ is the stopping time of the spanning tree protocol, and $d(\mathcal{S})$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order-optimal protocol. The first is for $k=\varOmega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\varTheta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance. In turn, this leads to the optimality of TAG on these graphs for $k=\varOmega(\mathrm{polylog}(n))$. The technique used in our proofs relies on queuing theory, which is an interesting approach that can be useful in future gossip analysis.
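
For intuition about what uniform algebraic gossip does, the following toy simulation (a sketch under simplifying assumptions: GF(2) coefficients, a push-only round model, a small ring network; it is not the paper's TAG protocol) spreads k messages as random linear combinations and counts rounds until every node can decode.

```python
import random
import numpy as np

# Toy simulation of uniform algebraic gossip (random linear network coding
# over GF(2)), push variant: in every round each node sends a random GF(2)
# combination of the coded packets it holds to a uniformly random neighbor,
# and a node can decode once its coefficient vectors have rank k.

def gf2_rank(rows):
    """Rank over GF(2) of a collection of 0/1 numpy vectors."""
    pivots = {}                            # pivot column -> reduced basis row
    rank = 0
    for r in rows:
        r = r.copy()
        while r.any():
            p = int(np.argmax(r))          # leftmost 1 of r
            if p in pivots:
                r ^= pivots[p]             # eliminate it and continue
            else:
                pivots[p] = r
                rank += 1
                break
    return rank

def simulate(adj, k):
    packets = {v: [] for v in adj}         # coded packets held by each node
    for i in range(k):                     # node i initially holds message i
        unit = np.zeros(k, dtype=np.uint8)
        unit[i] = 1
        packets[i].append(unit)
    rounds = 0
    while any(gf2_rank(packets[v]) < k for v in adj):
        rounds += 1
        sends = []
        for v in adj:
            if packets[v]:
                combo = np.zeros(k, dtype=np.uint8)
                for p in packets[v]:       # random GF(2) linear combination
                    if random.randint(0, 1):
                        combo ^= p
                sends.append((random.choice(adj[v]), combo))
        for target, combo in sends:        # deliveries after all choices
            if combo.any():
                packets[target].append(combo)
    return rounds

if __name__ == "__main__":
    random.seed(1)
    n, k = 8, 3
    ring = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
    print("rounds until every node can decode:", simulate(ring, k))
```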

4.
This paper introduces the notion of distributed verification without preprocessing. It focuses on the Minimum-weight Spanning Tree (MST) verification problem and establishes tight upper and lower bounds for the time and message complexities of this problem. Specifically, we provide an MST verification algorithm that achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time, where m is the number of edges in the given graph G, n is the number of nodes, and D is G's diameter. On the other hand, we show that any MST verification algorithm must send $\tilde{\varOmega}(m)$ messages and incur $\tilde{\varOmega}(\sqrt{n} + D)$ time in the worst case. Our upper bound result appears to indicate that the verification of an MST may be easier than its construction, since for MST construction both lower bounds of $\tilde{\varOmega}(m)$ messages and $\tilde{\varOmega}(\sqrt{n} + D)$ time hold, but at the moment there is no known distributed algorithm that constructs an MST and achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time. Specifically, the best known time-optimal algorithm (using $\tilde{O}(\sqrt{n} + D)$ time) requires $O(m+n^{3/2})$ messages, and the best known message-optimal algorithm (using $\tilde{O}(m)$ messages) requires $O(n)$ time. On the other hand, our lower bound results indicate that the verification of an MST is not significantly easier than its construction.

5.
This paper considers scheduling tasks while minimizing the power consumption of one or more processors, each of which can go to sleep at a fixed cost $\alpha$. There are two natural versions of this problem, both considered extensively in recent work: minimize the total power consumption (including computation time), or minimize the number of “gaps” in execution. For both versions in a multiprocessor system, we develop a polynomial-time algorithm based on sophisticated dynamic programming. In a generalization of the power-saving problem, where each task can execute in any of a specified set of time intervals, we develop a $(1+\frac{2}{3}\alpha)$-approximation, and show that dependence on $\alpha$ is necessary. In contrast, the analogous multi-interval gap scheduling problem is set-cover hard (and thus not $o(\lg n)$-approximable), even in the special cases of just two intervals per job or just three unit intervals per job. We also prove several other hardness-of-approximation results. Finally, we give an $O(\sqrt{n})$-approximation for maximizing throughput given a hard upper bound on the number of gaps.

6.
Gábor Wiener, Algorithmica, 2013, 67(3):315–323
A set system $\mathcal{H} \subseteq 2^{[m]}$ is said to be separating if for every pair of distinct elements x,y∈[m] there exists a set $H\in\mathcal{H}$ such that H contains exactly one of them. The search complexity of a separating system $\mathcal{H} \subseteq 2^{[m]}$ is the minimum number of questions of type “$x \in H$?” (where $H \in\mathcal{H}$) needed in the worst case to determine a hidden element x∈[m]. If we receive the answer before asking a new question then we speak of the adaptive complexity, denoted by $\mathrm{c}(\mathcal{H})$; if the questions are all fixed beforehand then we speak of the non-adaptive complexity, denoted by $\mathrm{c}_{na}(\mathcal{H})$. If we are allowed to ask the questions in at most k rounds then we speak of the k-round complexity of $\mathcal{H}$, denoted by $\mathrm{c}_{k}(\mathcal{H})$. It is clear that $|\mathcal{H}| \geq \mathrm{c}_{na}(\mathcal{H}) = \mathrm{c}_{1}(\mathcal{H}) \geq \mathrm{c}_{2}(\mathcal{H}) \geq \cdots \geq \mathrm{c}_{m}(\mathcal{H}) = \mathrm{c}(\mathcal{H})$. A group of problems raised by G.O.H. Katona is to characterize those separating systems for which some of these inequalities are tight. In this paper we discuss set systems $\mathcal{H}$ with the property $|\mathcal{H}| = \mathrm{c}_{k}(\mathcal{H})$ for any k≥3. We give a necessary condition for this property by proving a theorem about traces of hypergraphs which also has its own interest.
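
A small sketch of adaptive search with a separating system may help fix the definitions (it is a greedy heuristic for illustration, not the paper's extremal characterization; the universe, the set system and the hidden element are assumptions):

```python
# Adaptive search with a separating set system: keep the set of elements
# still consistent with all answers and greedily ask the question H that
# splits the remaining candidates most evenly.  Heuristic sketch only.

def adaptive_search(universe, H, hidden):
    candidates = set(universe)
    questions = 0
    while len(candidates) > 1:
        # the set whose "yes"/"no" sides are most balanced on the candidates
        best = min(H, key=lambda S: abs(len(candidates & S) - len(candidates - S)))
        questions += 1
        if hidden in best:                 # answer to the question "x in H?"
            candidates &= best
        else:
            candidates -= best
    return questions, candidates.pop()

if __name__ == "__main__":
    universe = range(8)
    # the three "bit indicator" sets form a separating system on [8]
    H = [{x for x in universe if (x >> j) & 1} for j in range(3)]
    print(adaptive_search(universe, H, hidden=5))   # 3 questions suffice
```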

7.
We relate the exponential complexities $2^{s(k)n}$ of $k$-SAT and the exponential complexity $2^{s(\mathrm{EVAL}(\varPi_2\,3\text{-CNF}))n}$ of $\mathrm{EVAL}(\varPi_2\,3\text{-CNF})$ (the problem of evaluating quantified formulas of the form $\forall\vec{x}\,\exists\vec{y}\,F(\vec{x},\vec{y})$, where $F$ is a 3-CNF in the $\vec{x}$ variables and $\vec{y}$ variables) and show that $s(\infty)$ (the limit of $s(k)$ as $k\to\infty$) is at most $s(\mathrm{EVAL}(\varPi_2\,3\text{-CNF}))$. Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for $\mathrm{EVAL}(\varPi_2\,3\text{-CNF})$ running in time $2^{cn}$ with $c<1$. On the other hand, a nontrivial exponential-time algorithm for $\mathrm{EVAL}(\varPi_2\,3\text{-CNF})$ would provide a $k$-SAT solver with a better exponent than all current algorithms for sufficiently large $k$. We also show that several syntactic restrictions of the evaluation problem $\mathrm{EVAL}(\varPi_2\,3\text{-CNF})$ have nontrivial algorithms, and provide strong evidence that the hardest cases of $\mathrm{EVAL}(\varPi_2\,3\text{-CNF})$ must have a mixture of clauses of two types: one universally quantified literal and two existentially quantified literals, or only existentially quantified literals. Moreover, the hardest cases must have at least $n-o(n)$ universally quantified variables, and hence only $o(n)$ existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable $k$-CNFs and the application of the Sparsification Lemma.

8.
A pair of unit clauses is called conflicting if it is of the form $(x)$, $(\bar{x})$. A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28:411–421, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least $\hat{\varphi} m$ clauses, where $\hat{\varphi}=(\sqrt{5}-1)/2$. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F' with m' clauses such that we can simultaneously satisfy at least $\hat{\varphi} m+(1-\hat{\varphi})m'+(2-3\hat{\varphi})n''/2$ clauses (in F), where n'' is the number of variables in F which are not in F'. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and $m(\sqrt{5}-1)/2$. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31:335–354, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and $(2\sqrt{5}+4)k$ clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most $(7+3\sqrt{5})k$ variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
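
The m/2 bound mentioned above has a one-line explanation that is easy to turn into code: any clause not satisfied by an assignment has all its literals false there, so the complementary assignment satisfies it; hence one of the two satisfies at least half the clauses. The sketch below is illustrative only (the clause encoding and names are assumptions, and it is unrelated to the paper's kernelization):

```python
# Why m/2 satisfied clauses are always achievable (the guarantee the first
# parameterization is measured against): a clause not satisfied by an
# assignment has all literals false there, so the complementary assignment
# satisfies it; hence max over {a, complement of a} satisfies >= m/2 clauses.

def count_satisfied(clauses, assignment):
    """clauses: list of lists of nonzero ints (DIMACS-style literals);
    assignment: dict var -> bool."""
    return sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def at_least_half(clauses, variables):
    a = {v: True for v in variables}      # any assignment works here
    b = {v: False for v in variables}     # ... and its complement
    return max((a, b), key=lambda asg: count_satisfied(clauses, asg))

if __name__ == "__main__":
    clauses = [[1, -2], [-1, 3], [-3], [2, 3]]
    best = at_least_half(clauses, variables={1, 2, 3})
    print(best, count_satisfied(clauses, best))  # >= len(clauses) / 2
```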

9.
In this paper, we introduce the concepts of $\lambda$-statistical convergence of order $\theta$ and strong $\lambda$-summability of order $\theta$ for sequences of fuzzy numbers. Further, the same concepts are extended to sequences of fuzzy functions, and we introduce the spaces $S_\lambda^\theta(\hat{f})$ and $\omega_{\lambda p}^\theta(\hat{f})$. Some inclusion relations within those spaces, as well as the underlying relation between the two spaces, are also obtained.

10.
We develop a stability and convergence theory for a Discontinuous Galerkin (DG) formulation of a highly indefinite Helmholtz problem in $\mathbb{R}^{d}$, $d\in \{1,2,3\}$. The theory covers conforming as well as non-conforming generalized finite element methods. In contrast to conventional Galerkin methods, where a minimal resolution condition is necessary to guarantee unique solvability, it is proved that the DG method admits a unique solution under much weaker conditions. As an application we present the error analysis for the $hp$-version of the finite element method explicitly in terms of the mesh width $h$, polynomial degree $p$ and wavenumber $k$. It is shown that the optimal convergence order estimate is obtained under the conditions that $kh/\sqrt{p}$ is sufficiently small and the polynomial degree $p$ is at least $O(\log k)$. On regular meshes, the first condition is improved to the requirement that $kh/p$ be sufficiently small.

11.
We study broadcasting, also known as one-to-all communication, in synchronous radio networks with known topology modeled by undirected (symmetric) graphs, where the interference range of a node may exceed its transmission range. In this model, if two nodes are connected by a transmission edge they can communicate directly. On the other hand, if two nodes are connected by an interference edge they cannot communicate directly, and a transmission of one node disables reception of any message at the other node. For a network $G$, we define its interference distance $d_I$ as the smallest integer $d$ such that, for any interference edge $e$, there exists a simple path formed of at most $d$ transmission edges connecting the endpoints of $e$. In this model the schedule of transmissions is precomputed in advance. It is based on full knowledge of the size and the topology (including the location of transmission and interference edges) of the network. We are interested in the design of fast broadcasting schedules that are energy efficient, i.e., based on a bounded number of transmissions executed at each node. We denote by $n$ the number of nodes, by $D_T$ the diameter of the subnetwork induced by the transmission edges, and by $\varDelta$ the maximum combined degree (formed of transmission and interference edges) of the network. We contribute the following new results: (1) We prove that for networks with interference distance $d_I\ge 2$ any broadcasting schedule requires at least $D_T+\varOmega(\varDelta \cdot \frac{\log{n}}{\log{\varDelta}})$ rounds. (2) We provide, for networks modeled by bipartite graphs, an algorithm that computes $1$-shot (each node transmits at most once) broadcasting schedules of length $O(\varDelta \cdot \log{n})$. (3) The main result of the paper is an algorithm that computes a $1$-shot broadcasting schedule of length at most $4 \cdot D_T + O(\varDelta \cdot d_I \cdot \log^4{n})$ for networks with arbitrary topology. Note that in view of the lower bound from (1), if $d_I$ is poly-logarithmic in $n$ this broadcast schedule is a poly-logarithmic factor away from the optimal solution.

12.
The discrete logarithm problem modulo a composite (abbreviated DLPC) is the following: given a (possibly) composite integer $n \ge 1$ and elements ${a, b \in \mathbb{Z}_n^*}$, determine an ${x \in \mathbb{N}}$ satisfying $a^x = b$ if one exists. The question whether integer factoring can be reduced in deterministic polynomial time to the DLPC remains open. In this paper we consider the problem ${{\rm DLPC}_\varepsilon}$ obtained by adding to the DLPC the constraint ${x\le (1-\varepsilon)n}$, where ${\varepsilon}$ is an arbitrary fixed number, ${0 < \varepsilon\le\frac{1}{2}}$. We prove that factoring $n$ reduces in deterministic subexponential time to the ${{\rm DLPC}_\varepsilon}$ with ${O_\varepsilon((\ln n)^2)}$ queries for moduli less than or equal to $n$.
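
To pin down the constrained problem ${\rm DLPC}_\varepsilon$ itself, here is a deliberately naive brute-force sketch (exponential in $\log n$, with illustrative parameter choices; the paper's contribution is a reduction from factoring to an oracle for this problem, not an algorithm for it):

```python
# Naive solver for DLPC_eps: search for an exponent x <= (1 - eps) * n with
# a^x = b (mod n).  This brute force is exponential in log n and only serves
# to make the problem statement concrete; the paper reduces factoring to an
# *oracle* for this problem in deterministic subexponential time.

def dlpc_eps(a, b, n, eps):
    bound = int((1 - eps) * n)
    value = 1 % n                     # a^0 mod n
    for x in range(bound + 1):
        if value == b % n:
            return x
        value = (value * a) % n
    return None                       # no exponent within the allowed range

if __name__ == "__main__":
    n, a = 91, 3                      # 91 = 7 * 13 is composite, gcd(3, 91) = 1
    b = pow(a, 20, n)
    print(dlpc_eps(a, b, n, eps=0.5))  # finds 20 or a smaller valid exponent
```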

13.
We consider transactional memory contention management in the context of balanced workloads, where if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial-time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm to provide a competitive ratio. The first algorithm, Clairvoyant, is $O(\sqrt{s})$-competitive, where s is the number of shared resources. This algorithm depends on explicitly knowing the conflict graph at each time step of execution. The second algorithm, Non-Clairvoyant, is $O(\sqrt{s} \cdot \log n)$-competitive, with high probability, which is only an $O(\log n)$ factor worse, but does not require knowledge of the conflict graph, where n is the number of transactions. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal, since there is no polynomial-time contention management algorithm for the balanced transaction scheduling problem that is better than $O((\sqrt{s})^{1-\varepsilon})$-competitive for any constant ε>0, unless NP ⊆ ZPP. To our knowledge, these results are significant improvements over the best previously known $O(s)$ competitive ratio bound.

14.
A compact discontinuous Galerkin (CDG) method is devised for nearly incompressible linear elasticity by replacing the global lifting operator that determines the numerical trace of the stress tensor in a local discontinuous Galerkin method (cf. Chen et al., Math Probl Eng 20, 2010) with the local lifting operator, and by removing some jump terms. It possesses a compact stencil, which means that the degrees of freedom in one element are only connected to those in the immediate neighboring elements. Optimal error estimates in the broken energy norm, $H^1$-norm and $L^2$-norm are derived for the method, which are uniform with respect to the Lamé constant $\lambda$. Furthermore, we obtain a post-processed $H(\mathrm{div})$-conforming displacement by projecting the displacement and the corresponding trace of the CDG method into the Raviart–Thomas element space, and obtain optimal error estimates for this numerical solution in the $H(\mathrm{div})$-seminorm and $L^2$-norm, which are uniform with respect to $\lambda$. A series of numerical results are offered to illustrate the numerical performance of our method.

15.
Let $Q$ be a complete residuated lattice. Let $\text{SetR}(Q)$ be the category of sets with similarity relations with values in $Q$ (called $Q$-sets), which is an analogy of the category of classical sets with relations as morphisms. A cut in a $Q$-set $(A,\delta)$ is a system $(C_{\alpha})_{\alpha \in Q}$, where the $C_{\alpha}$ are subsets of $A\times Q$. It is well known that in the category $\text{SetR}(Q)$ there is a close relation between special cuts (called f-cuts) in a $Q$-set on the one hand and fuzzy sets in the same $Q$-set on the other hand. Moreover, there exists a completion procedure according to which any cut $(C_{\alpha})_{\alpha}$ can be extended to an f-cut $(\overline{C_{\alpha}})_{\alpha}$. In the paper, we prove that the completion procedure is, in some sense, the best possible. This is expressed by a theorem which states that the category of f-cuts is a full reflective subcategory of the category of cuts.

16.
Most state-of-the-art approaches for Satisfiability Modulo Theories ($SMT(\mathcal{T})$) rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory $\mathcal{T}$ (a $\mathcal{T}$-solver). Often $\mathcal{T}$ is the combination $\mathcal{T}_1 \cup \mathcal{T}_2$ of two (or more) simpler theories ($SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$), s.t. the specific $\mathcal{T}_i$-solvers must be combined. Up to a few years ago, the standard approach to $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$ was to integrate the SAT solver with one combined $\mathcal{T}_1 \cup \mathcal{T}_2$-solver, obtained from two distinct $\mathcal{T}_i$-solvers by means of evolutions of Nelson and Oppen's (NO) combination procedure, in which the $\mathcal{T}_i$-solvers deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$ procedure called Delayed Theory Combination (DTC), in which each $\mathcal{T}_i$-solver interacts directly and only with the SAT solver, in such a way that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for $SMT(\mathcal{T}_1 \cup \mathcal{T}_2)$. On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required of the SAT solver can be controlled. In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the $\mathcal{T}_i$-solvers, which relate the amount of extra Boolean search required of the SAT solver by DTC with the number of deductions and case-splits required of the $\mathcal{T}_i$-solvers by NO in order to perform the same tasks: (i) under the same hypotheses of deduction capabilities of the $\mathcal{T}_i$-solvers required by NO, DTC causes no extra Boolean search; (ii) using $\mathcal{T}_i$-solvers with limited or no deduction capabilities, the extra Boolean search required can be reduced down to a negligible amount by controlling the quality of the $\mathcal{T}$-conflict sets returned by the $\mathcal{T}_i$-solvers.

17.
We present a technique for numerically solving convection-diffusion problems in domains $\varOmega$ with curved boundary. The technique consists in approximating the domain $\varOmega$ by polyhedral subdomains $\mathsf{D}_h$ where a finite element method is used to solve for the approximate solution. The approximation is then suitably extended to the remaining part of the domain $\varOmega$. This approach allows for the use of only polyhedral elements; there is no need to fit the boundary in order to obtain an accurate approximation of the solution. To achieve this, the boundary condition on the border of $\varOmega$ is transferred to the border of $\mathsf{D}_h$ by using simple line integrals. We apply this technique to the hybridizable discontinuous Galerkin method and provide extensive numerical experiments showing that, whenever the distance of $\mathsf{D}_h$ to $\partial \varOmega$ is of the order of the meshsize $h$, the convergence properties of the resulting method are the same as those for the case in which $\varOmega =\mathsf{D}_h$. We also show numerical evidence indicating that the ratio of the $L^2(\varOmega)$ norm of the error in the scalar variable computed with $d>0$ to that computed with $d=0$ remains constant (and fairly close to one), whenever the distance $d$ is proportional to $\min\{h,Pe^{-1}\}/(k+1)^2$, where $Pe$ is the so-called Péclet number.

18.
Given a DNF formula f on n variables, the two natural size measures are the number of terms, or size s(f), and the maximum width of a term, w(f). It is folklore that small DNF formulas can be made narrow: if a formula has m terms, it can be $\epsilon$-approximated by a formula of width $\log(m/\epsilon)$. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-w DNF, irrespective of its size, can be $\epsilon$-approximated by a width-w DNF with at most $(w\log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby & Velickovic (1991; Algorithmica 16(4/5):415–433, 1996) to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$ time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of f for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run time of $n^{\exp(O(\sqrt{\log\log n}))}$ (Luby & Velickovic 1991; Algorithmica 16(4/5):415–433, 1996).
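
The folklore narrowing step quoted above is easy to demonstrate: dropping every term wider than $\log_2(m/\epsilon)$ changes the function on at most an $\epsilon$ fraction of uniformly random inputs, by a union bound over the at most m dropped terms, each accepting at most an $\epsilon/m$ fraction. A small sketch follows (the example formula and encoding are assumptions; this is the easy direction, not the paper's sparsification):

```python
import itertools
import math

# "Small DNFs can be made narrow": drop every term wider than log2(m/eps).
# Each dropped term accepts at most a 2^(-width) <= eps/m fraction of
# uniform inputs, so by a union bound the truncated formula disagrees with
# the original on at most an eps fraction.  Exact enumeration for few vars.

def narrow(dnf, eps):
    """dnf: list of terms; a term is a list of nonzero ints (sign = polarity)."""
    m = len(dnf)
    max_width = math.log2(m / eps)
    return [term for term in dnf if len(term) <= max_width]

def evaluate(dnf, x):
    """x: dict var -> bool; a DNF is true if some term has all literals true."""
    return any(all(x[abs(l)] == (l > 0) for l in term) for term in dnf)

def disagreement(f, g, variables):
    points = list(itertools.product([False, True], repeat=len(variables)))
    diff = sum(
        evaluate(f, dict(zip(variables, p))) != evaluate(g, dict(zip(variables, p)))
        for p in points
    )
    return diff / len(points)

if __name__ == "__main__":
    vars_ = [1, 2, 3, 4, 5]
    f = [[1, 2], [-1, -3, 4, 5]]              # the second term is "wide"
    g = narrow(f, eps=0.25)
    print(len(f), "->", len(g), "terms; disagreement:",
          disagreement(f, g, vars_))          # at most eps = 0.25
```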

19.
A number of algorithms for computing the simulation preorder (and equivalence) on Kripke structures are available. Let $\varSigma$ denote the state space, ${\rightarrow}$ the transition relation and $P_{\mathrm{sim}}$ the partition of $\varSigma$ induced by simulation equivalence. While some algorithms are designed to reach the best space bounds, whose dominating additive term is $|P_{\mathrm{sim}}|^2$, other algorithms are devised to attain the best time complexity $O(|P_{\mathrm{sim}}||{\rightarrow}|)$. We present a novel simulation algorithm which is both space and time efficient: it runs in $O(|P_{\mathrm{sim}}|^2 \log |P_{\mathrm{sim}}| + |\varSigma|\log |\varSigma|)$ space and $O(|P_{\mathrm{sim}}||{\rightarrow}|\log |\varSigma|)$ time. Our simulation algorithm thus reaches the best space bounds while closely approaching the best time complexity.
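
For readers who want the baseline such algorithms improve on, here is a naive fixpoint computation of the simulation preorder (quadratic space and far from the stated bounds; the states, labels and example structure are illustrative assumptions, and this is not the paper's algorithm):

```python
# Naive fixpoint computation of the simulation preorder on a Kripke
# structure: start from all label-compatible pairs and repeatedly delete
# (s, t) if some move of s cannot be matched by t.  A pair (s, t) in the
# result means "t simulates s".  Baseline sketch only.

def simulation_preorder(states, succ, label):
    """succ: dict state -> set of successors; label: dict state -> label."""
    R = {(s, t) for s in states for t in states if label[s] == label[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            # every successor of s must be simulated by some successor of t
            if any(all((s2, t2) not in R for t2 in succ[t]) for s2 in succ[s]):
                R.discard((s, t))
                changed = True
    return R

if __name__ == "__main__":
    states = {"a", "b", "c"}
    succ = {"a": {"b"}, "b": {"c"}, "c": set()}
    label = {"a": "p", "b": "q", "c": "q"}
    # the deadlocked state c is simulated by b, but not vice versa
    print(sorted(simulation_preorder(states, succ, label)))
```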

20.
Using S.L. Sobolev's method, we construct the interpolation splines minimizing the semi-norm in $K_2(P_2)$, where $K_2(P_2)$ is the space of functions $\varphi$ such that $\varphi^{\prime}$ is absolutely continuous, $\varphi^{\prime\prime}$ belongs to $L_2(0,1)$ and $\int_0^1(\varphi^{\prime\prime}(x)+\varphi(x))^2dx<\infty$. Explicit formulas for the coefficients of the interpolation splines are obtained. The resulting interpolation spline is exact for the trigonometric functions $\sin x$ and $\cos x$. Finally, in a few numerical examples the qualities of the defined splines and $D^2$-splines are compared. Furthermore, the relationship of the defined splines with an optimal quadrature formula is shown.
