Found 20 similar documents (search time: 31 ms)
1.
Given an alphabet Σ = {1,2,…,|Σ|}, a text string T ∈ Σ^n and a pattern string P ∈ Σ^m, for each i = 1,2,…,n−m+1 define L_p(i) as the p-norm distance when the pattern is aligned below the text and starts at position i of the text. The problem of pattern matching with L_p distance is to compute L_p(i) for every i = 1,2,…,n−m+1. We discuss the problem for p = 1, 2, ∞. First, in the case of L_1 matching (pattern matching with an L_1 distance) we show a reduction of the string matching with mismatches problem to the L_1 matching problem, and we present an algorithm that approximates the L_1 matching up to a factor of 1+ε, which has an O((1/ε^2) n log m log|Σ|) run time. Then, the L_2 matching problem (pattern matching with an L_2 distance) is solved with a simple O(n log m) time algorithm. Finally, we provide an algorithm that approximates the L_∞ matching up to a factor of 1+ε with a run time of O((1/ε) n log m log|Σ|). We also generalize the problem of string matching with mismatches to have weighted mismatches, and present an O(n log^4 m) algorithm that approximates the results of this problem up to a factor of O(log m) in the case that the weight function is a metric.
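For intuition about what the fast algorithms above are approximating, the L_p(i) values can be computed directly by the naive O(nm) baseline that those algorithms improve upon. This is an illustrative sketch of the problem definition, not any of the paper's algorithms.

```python
# Naive computation of the L_p distances defined above: for each alignment i,
# L_p(i) is the p-norm of the componentwise difference between the pattern
# and the aligned text window. Runs in O(nm) time.

def lp_distances(text, pattern, p):
    """Return [L_p(1), ..., L_p(n-m+1)] for all alignments of pattern in text."""
    n, m = len(text), len(pattern)
    out = []
    for i in range(n - m + 1):
        diffs = [abs(text[i + j] - pattern[j]) for j in range(m)]
        if p == float("inf"):
            out.append(max(diffs))
        else:
            out.append(sum(d ** p for d in diffs) ** (1.0 / p))
    return out

T = [1, 3, 2, 4, 2]
P = [2, 3]
print(lp_distances(T, P, 1))             # L1 distances: [1.0, 2.0, 1.0, 3.0]
print(lp_distances(T, P, float("inf")))  # L-infinity distances: [1, 1, 1, 2]
```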
2.
Connected dominating set (CDS) in unit disk graphs has a wide range of applications in wireless ad hoc networks. A number
of approximation algorithms for constructing a small CDS in unit disk graphs have been proposed in the literature. The majority
of these algorithms follow a general two-phased approach. The first phase constructs a dominating set, and the second phase
selects additional nodes to interconnect the nodes in the dominating set. In the performance analyses of these two-phased
algorithms, the relation between the independence number α and the connected domination number γ_c of a unit disk graph plays the key role. The best-known relation between them is α ≤ (11/3)γ_c + 1. In this paper, we prove that α ≤ 3.4306γ_c + 4.8185. This relation leads to tighter upper bounds on the approximation ratios of two approximation algorithms proposed in the literature.
3.
Philippe Raïpin Parvédy Michel Raynal Corentin Travers 《Theory of Computing Systems》2010,47(1):259-287
The k-set agreement problem is a generalization of the consensus problem: considering a system made up of n processes where each process proposes a value, each non-faulty process has to decide a value such that a decided value is a proposed value, and no more than k different values are decided. It has recently been shown that, in the crash failure model, min(⌊f/k⌋+2, ⌊t/k⌋+1) is a lower bound on the number of rounds for the non-faulty processes to decide (where t is an upper bound on the number of process crashes, and f, 0 ≤ f ≤ t, is the actual number of crashes).
4.
Roel Verstappen 《Journal of scientific computing》2011,49(1):94-110
Large eddy simulation (LES) seeks to predict the dynamics of spatially filtered turbulent flows. The very essence is that
the LES-solution contains only scales of size ≥Δ, where Δ denotes some user-chosen length scale. This property enables us
to perform a LES when it is not feasible to compute the full, turbulent solution of the Navier-Stokes equations. Therefore,
in case the large eddy simulation is based on an eddy viscosity model we determine the eddy viscosity such that any scales
of size <Δ are dynamically insignificant. In this paper, we address the following two questions: how much eddy diffusion is
needed to (a) balance the production of scales of size smaller than Δ; and (b) damp any disturbances having a scale of size
smaller than Δ initially. From this we deduce that the eddy viscosity ν_e has to depend on the invariants q = (1/2)tr(S^2) and r = −(1/3)tr(S^3) of the (filtered) strain rate tensor S. The simplest model is then given by ν_e = (3/2)(Δ/π)^2 |r|/q. This model is successfully tested for a turbulent channel flow (Re_τ = 590).
5.
Let mad(G) denote the maximum average degree (over all subgraphs) of G and let χ_i(G) denote the injective chromatic number of G. We prove that if Δ ≥ 4 and mad(G) < 14/5, then χ_i(G) ≤ Δ+2. When Δ = 3, we show that mad(G) < 36/13 implies χ_i(G) ≤ 5. In contrast, we give a graph G with Δ = 3, mad(G) = 36/13, and χ_i(G) = 6.
6.
Huaming Zhang 《Algorithmica》2010,57(2):381-397
We study the problem of transforming plane triangulations into irreducible triangulations, which are plane graphs with a quadrangular exterior face, triangular interior faces and no separating triangles. Our linear time transformation reveals important relations between the minimum Schnyder’s realizers of plane triangulations (Bonichon et al., Proceedings of the 20th Annual Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science, vol. 2607, pp. 499–510, Springer, Berlin, 2003; Research Report RR-1279-02, LaBRI, University of Bordeaux, France; Brehm, Diploma thesis, FB Mathematik und Informatik, Freie Universität Berlin, 2000) and the transversal structures of irreducible triangulations (Fusy, Proceedings of 13th International Symposium on Graph Drawing, Lecture Notes in Computer Science, vol. 3843, pp. 177–188, Springer, Berlin, 2005; He, SIAM J. Comput. 22:1218–1226, 1993). The transformation morphs a 3-connected plane graph into an internally 4-connected plane graph. Therefore some of the graph algorithms designed specifically for 4-connected plane graphs can be applied to 3-connected plane graphs indirectly. As an example of such applications, we present a linear time algorithm that produces a planar polyline drawing for a plane graph with n vertices in a grid of size bounded by W×H, where W ≤ ⌊(2n−2)/3⌋ and W+H ≤ ⌊(4n−4)/3⌋. It uses at most ⌊(2n−5)/3⌋ bends, and each edge uses at most one bend. Our algorithm is area optimal. Compared with the existing area optimal polyline drawing algorithm proposed in Bonichon et al. (Proceedings of the 28th International Workshop on Graph-Theoretic Concepts in Computer Science, Lecture Notes in Computer Science, vol. 2573, pp. 35–46, Springer, Berlin, 2002), our algorithm uses a smaller number of bends. Their bend bound is n−2.
7.
Some Hamiltonians are constructed from the unitary Ř_{i,i+1}(θ, φ)-matrices, where θ and φ are time-independent parameters. We show that the entanglement sudden death (ESD) can happen in these closed Yang–Baxter systems. It is found that the ESD is not only sensitive to the initial condition, but also has a great connection with different Yang–Baxter systems. Especially, we find that the meaningful parameter φ has a great influence on ESD.
8.
Dijkstra (Commun ACM 17(11):643–644, 1974) introduced the notion of self-stabilizing algorithms and presented three such algorithms for the problem of mutual exclusion on a ring of n processors. The third algorithm is the most interesting of these three but is rather non-intuitive. In Dijkstra (Distrib Comput 1:5–6, 1986) a proof of its correctness was presented, but the question of determining its worst case complexity, that is, providing an upper bound on the number of moves of this algorithm until it stabilizes, remained open. In this paper we solve this question and prove an upper bound of (3 + 13/18)n^2 + O(n) for the complexity of this algorithm. We also show a lower bound of (1 + 5/6)n^2 − O(n) for the worst case complexity. For computing the upper bound, we use two techniques: potential functions and amortized analysis. We also present a new three-state self-stabilizing algorithm for mutual exclusion and show a tight bound of (5/6)n^2 + O(n) for the worst case complexity of this algorithm. Beauquier and Debas (Proceedings of the second workshop on self-stabilizing systems, pp 17.1–17.13, 1995) presented a similar three-state algorithm, with an upper bound of (5 + 3/4)n^2 + O(n) and a lower bound of (1/8)n^2 − O(n) for its stabilization time. For this algorithm we prove an upper bound of (3/2)n^2 + O(n) and show a lower bound of n^2 − O(n). In terms of worst case performance, the algorithm in Beauquier and Debas (1995) is better than the one in Dijkstra (1974), and our algorithm is better than both.
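For intuition about the kind of move counting done in this analysis, here is a runnable sketch of Dijkstra's first (K-state) algorithm from the 1974 paper cited above, which is simpler than the third (three-state) algorithm the abstract analyzes; it is background, not the analyzed algorithm.

```python
# Sketch of Dijkstra's first (K-state) self-stabilizing mutual-exclusion
# algorithm on a unidirectional ring. Machine 0 is privileged when
# S[0] == S[n-1]; machine i > 0 is privileged when S[i] != S[i-1].
# Each move by a privileged machine counts as one step; with K >= n the
# ring reaches a state with exactly one privilege from any initial state.

def privileged(S):
    n = len(S)
    return [i for i in range(n)
            if (i == 0 and S[0] == S[n - 1]) or (i > 0 and S[i] != S[i - 1])]

def stabilize(S, K):
    """Fire a privileged machine (central daemon) until one privilege remains."""
    moves = 0
    while len(privileged(S)) > 1:
        i = privileged(S)[0]          # the daemon picks some privileged machine
        if i == 0:
            S[0] = (S[0] + 1) % K
        else:
            S[i] = S[i - 1]
        moves += 1
    return moves

S = [2, 0, 1, 0]                      # an arbitrary (possibly corrupted) state
steps = stabilize(S, K=5)
assert len(privileged(S)) == 1        # legitimate state: exactly one privilege
print(steps, S)
```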
9.
We prove that the concept class of disjunctions cannot be pointwise approximated by linear combinations of any small set of arbitrary real-valued functions. That is, suppose that there exist functions φ_1, …, φ_r : {−1,1}^n → ℝ with the property that every disjunction f on n variables has ‖f − Σ_{i=1..r} α_i φ_i‖_∞ ≤ 1/3 for some reals α_1, …, α_r. We prove that then r ≥ exp{Ω(√n)}, which is tight. We prove an incomparable lower bound for the concept class of decision lists. For the concept class of majority functions, we obtain a lower bound of Ω(2^n/n), which almost meets the trivial upper bound of 2^n for any concept class. These lower bounds substantially strengthen and generalize the polynomial approximation lower bounds of Paturi (1992) and show that the regression-based agnostic learning algorithm of Kalai et al. (2005) is optimal.
10.
Given an undirected graph and 0 ≤ ε ≤ 1, a set of nodes is called an ε-near clique if all but an ε fraction of the pairs of nodes in the set have a link between them. In this paper we present a fast synchronous network algorithm that uses small messages and finds a near-clique. Specifically, we present a constant-time algorithm that finds, with constant probability of success, a linear size ε-near clique if there exists an ε^3-near clique of linear size in the graph. The algorithm uses messages of O(log n) bits. The failure probability can be reduced to n^{−Ω(1)} by increasing the time complexity by a logarithmic factor, and the algorithm also works if the graph contains a clique of size Ω(n/(log log n)^α) for some α ∈ (0,1). Our approach is based on a new idea of adapting property testing algorithms to the distributed setting.
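The ε-near-clique definition above is easy to check directly. This is a centralized checker illustrating the definition, not the distributed algorithm of the paper.

```python
# Direct check of the eps-near-clique definition: a vertex set is an
# eps-near clique if all but an eps fraction of its pairs are joined
# by an edge. Edges are stored as unordered pairs of vertices.
from itertools import combinations

def is_near_clique(nodes, edges, eps):
    pairs = list(combinations(sorted(nodes), 2))
    if not pairs:
        return True
    missing = sum(1 for (u, v) in pairs
                  if (u, v) not in edges and (v, u) not in edges)
    return missing <= eps * len(pairs)

edges = {(1, 2), (1, 3), (2, 3), (3, 4)}   # triangle {1,2,3} plus a pendant
assert is_near_clique({1, 2, 3}, edges, 0.0)         # a true clique
assert not is_near_clique({1, 2, 3, 4}, edges, 0.2)  # 2 of 6 pairs missing
assert is_near_clique({1, 2, 3, 4}, edges, 0.5)
```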
11.
Given a “black box” function to evaluate an unknown rational polynomial f ∈ ℚ[x] at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. That is, we determine the sparsity t ∈ ℤ_{>0}, the shift α ∈ ℚ, the exponents 0 ≤ e_1 < e_2 < ⋯ < e_t, and the coefficients c_1, …, c_t ∈ ℚ∖{0} such that f(x) = c_1(x−α)^{e_1} + c_2(x−α)^{e_2} + ⋯ + c_t(x−α)^{e_t}.
12.
Consider the following model on the spreading of messages. A message initially convinces a set of vertices, called the seeds, of the Erdős-Rényi random graph G(n,p). Whenever more than a ρ ∈ (0,1) fraction of a vertex v’s neighbors are convinced of the message, v will be convinced. The spreading proceeds asynchronously until no more vertices can be convinced. This paper derives lower bounds on the minimum number of initial seeds, min-seed(n,p,δ,ρ), needed to convince a δ ∈ {1/n,…,n/n} fraction of vertices at the end. In particular, we show that (1) there is a constant β > 0 such that min-seed(n,p,δ,ρ) = Ω(min{δ,ρ}n) with probability 1 − n^{−Ω(1)} for p ≥ β(ln(e/min{δ,ρ}))/(ρn), and (2) min-seed(n,p,δ,1/2) = Ω(δn/ln(e/δ)) with probability 1 − exp(−Ω(δn)) − n^{−Ω(1)} for all p ∈ [0,1]. The hidden constants in the Ω notations are independent of p.
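The spreading process described above is easy to simulate on a fixed graph. This sketch runs the process to its fixed point on a small hand-built graph (the graph and ρ are made up for illustration; the paper's bounds concern random graphs G(n,p)).

```python
# Simulation of the spreading model: a vertex becomes convinced once more
# than a rho fraction of its neighbors are convinced; iterate to a fixed
# point. adj maps each vertex to its set of neighbors.

def spread(adj, seeds, rho):
    """Return the final set of convinced vertices."""
    convinced = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v not in convinced and nbrs:
                if len(nbrs & convinced) > rho * len(nbrs):
                    convinced.add(v)
                    changed = True
    return convinced

# Path 1-2-3-4: with rho = 0.4, seeding vertex 1 convinces 2 (1 of 2
# neighbors exceeds 0.4*2), then 3, then 4.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(spread(adj, {1}, rho=0.4)))   # -> [1, 2, 3, 4]
```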
13.
Complexity of Hard-Core Set Proofs
We study a fundamental result of Impagliazzo (FOCS’95) known as the hard-core set lemma. Consider any function f: {0,1}^n → {0,1} which is “mildly hard”, in the sense that any circuit of size s must disagree with f on at least a δ fraction of inputs. Then, the hard-core set lemma says that f must have a hard-core set H of density δ on which it is “extremely hard”, in the sense that any circuit of size s′ = O(s/((1/ε^2) log(1/(εδ)))) must disagree with f on at least a (1−ε)/2 fraction of inputs from H.
14.
Consider a rooted tree T of arbitrary maximum degree d representing a collection of n web pages connected via a set of links, all reachable from a source home page represented by the root of T. Each web page i carries a probability p_i representative of the frequency with which it is visited. By adding hotlinks (shortcuts from a node to one of its descendants) we wish to minimize the expected number of steps ℓ needed to visit pages from the home page, expressed as a function of the entropy H(p) of the access probabilities p. This paper introduces several new strategies for effectively assigning hotlinks in a tree. For assigning exactly one hotlink per node, our method guarantees an upper bound on ℓ of 1.141H(p)+1 if d > 2 and 1.08H(p)+2/3 if d = 2. We also present the first efficient general methods for assigning at most k hotlinks per node in trees of arbitrary maximum degree, achieving bounds on ℓ of at most 2H(p)/log(k+1) + 1 and H(p)/(log(k+d) − log d) + 1, respectively. All our methods are strong, i.e., they provide the same guarantees on all subtrees after the assignment. We also present an algorithm implementing these methods in O(n log n) time, an improvement over the previous O(n^2) time algorithms. Finally we prove an Ω(n log n) lower bound on the running time of any strong method that guarantees an average access time strictly better than 2H(p).
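The quantity being minimized above, the expected number of steps ℓ, is simple to compute on a concrete tree. This toy illustration (the tree, probabilities, and the single hotlink are made up) shows how one hotlink reduces ℓ, alongside the entropy H(p) that the paper's bounds are stated in.

```python
# Expected access cost before and after adding a hotlink, plus the entropy
# H(p) of the access distribution. Depths count steps from the home page.
import math

def expected_steps(depth, p):
    """depth, p: dicts mapping page -> depth from root / access probability."""
    return sum(p[v] * depth[v] for v in p)

def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# A path root -> a -> b -> c; most traffic goes to the deep page c.
depth = {"a": 1, "b": 2, "c": 3}
p = {"a": 0.1, "b": 0.1, "c": 0.8}
before = expected_steps(depth, p)                     # 0.1*1 + 0.1*2 + 0.8*3 = 2.7
after = expected_steps({"a": 1, "b": 2, "c": 1}, p)   # hotlink root->c: c is 1 step
print(before, after, entropy(p))
```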
15.
Hazem M. Bahig 《The Journal of supercomputing》2008,43(1):99-104
In this paper, we study the merging of two sorted arrays … and … on EREW PRAM with two restrictions: (1) the elements of the two arrays are taken from the integer range [1,n], where n = max(n_1, n_2); (2) the elements are taken from either a uniform distribution or a non-uniform distribution such that …, for 1 ≤ i ≤ p (the number of processors). We give a new optimal deterministic algorithm that runs in … time using p processors on EREW PRAM. For …, the running time of the algorithm is O(log^(g) n), which is faster than the previous results, where log^(g) n = log log^(g−1) n for g > 1 and log^(1) n = log n. We also extend the domain of input data to [1, n^k], where k is a constant.
16.
Fast Algorithms for the Density Finding Problem
We study the problem of finding a specific density subsequence of a sequence arising from the analysis of biomolecular sequences. Given a sequence A = (a_1,w_1), (a_2,w_2), …, (a_n,w_n) of n ordered pairs (a_i, w_i) of numbers a_i and widths w_i > 0 for each 1 ≤ i ≤ n, two nonnegative numbers ℓ, u with ℓ ≤ u, and a number δ, the Density Finding Problem is to find, over all O(n^2) consecutive subsequences A(i,j) satisfying the width constraint ℓ ≤ w(i,j) = Σ_{r=i..j} w_r ≤ u, the consecutive subsequence A(i*, j*) whose density is closest to δ. The extensively studied Maximum-Density Segment Problem is a special case of the Density Finding Problem with δ = ∞. We show that the Density Finding Problem has a lower bound of Ω(n log n) in the algebraic decision tree model of computation. We give an algorithm for the Density Finding Problem that runs in optimal O(n log n) time and O(n log n) space for the case when there is no upper bound on the width of the sequence, i.e., u = w(1,n). For the general case, we give an algorithm that runs in O(n log^2 m) time and O(n + m log m) space, where m = … and w_min = min_{r=1..n} w_r. As a byproduct, we give another O(n) time and space algorithm for the Maximum-Density Segment Problem.
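The problem statement above has a direct brute-force baseline. The density formula itself appears only as an image in the source, so this sketch assumes the usual convention from maximum-density segment problems, density = (Σ a_r)/w(i,j); it is the O(n^2) scan the paper's algorithms improve upon, not the paper's method.

```python
# Brute-force O(n^2) baseline for the Density Finding Problem: scan every
# consecutive subsequence whose total width lies in [lo, hi] and keep the
# one whose density (assumed sum(a)/sum(w)) is closest to delta.

def density_finding(pairs, lo, hi, delta):
    """pairs: list of (a_r, w_r) with w_r > 0; returns ((i, j), density)."""
    best = None
    n = len(pairs)
    for i in range(n):
        a_sum = w_sum = 0.0
        for j in range(i, n):
            a_sum += pairs[j][0]
            w_sum += pairs[j][1]
            if lo <= w_sum <= hi:
                d = a_sum / w_sum
                if best is None or abs(d - delta) < abs(best[1] - delta):
                    best = ((i, j), d)
    return best

pairs = [(3, 1), (1, 1), (5, 1), (1, 1)]
(i, j), d = density_finding(pairs, lo=2, hi=3, delta=2.0)
print((i, j), d)   # segment (0, 1) has density (3+1)/2 = 2.0, exactly delta
```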
17.
Maurice Jansen 《Theory of Computing Systems》2011,49(2):343-354
The determinantal complexity of a polynomial f(x_1, x_2, …, x_n) is the minimum m such that f = det_m(L(x_1, x_2, …, x_n)), where L(x_1, x_2, …, x_n) is a matrix whose entries are affine forms in the x_i's over some field.
18.
An M-matrix which satisfies the Hecke algebraic relations is presented. Via the Yang–Baxterization approach, we obtain a unitary solution Ř(θ, φ_1, φ_2) of the Yang–Baxter equation. It is shown that any pure two-qutrit entangled state can be generated via the universal Ř-matrix assisted by local unitary transformations. A Hamiltonian is constructed from the Ř-matrix, and the Berry phase of the Yang–Baxter system is investigated. Specifically, for φ_1 = φ_2, the Hamiltonian can be represented based on three sets of SU(2) operators, and three oscillator Hamiltonians can be obtained. Under this framework, the Berry phase can be interpreted.
19.
We consider the optimal makespan C(P, m, i) of an arbitrary set P of independent jobs scheduled with i preemptions on a multiprocessor with m identical processors. We compare the ratio for such makespans for i and j preemptions, respectively, where i < j. This ratio depends on P, but we are interested in the P that maximizes this ratio, i.e. we calculate a formula for the worst case ratio G(m, i, j) defined as
G(m,i,j) = max C(P,m,i)/C(P,m,j), where the maximum is taken over all sets P of independent jobs.
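As background for the makespans being compared above, the fully preemptive optimum has a classical closed form, max(largest job, average load), achieved by McNaughton's wrap-around rule with at most m−1 preemptions. This sketch computes that quantity; it is standard background, not the paper's formula for G(m, i, j).

```python
# Optimal preemptive makespan on m identical machines (McNaughton's
# wrap-around rule): the schedule length is the larger of the longest
# single job and the average load per machine.

def preemptive_makespan(jobs, m):
    """Optimal makespan with unrestricted preemptions on m machines."""
    return max(max(jobs), sum(jobs) / m)

jobs = [4.0, 3.0, 3.0, 2.0]
print(preemptive_makespan(jobs, 2))   # max(4, 12/2) = 6.0
print(preemptive_makespan(jobs, 4))   # max(4, 12/4) = 4.0
```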
20.
Daniel M. Kane 《Computational Complexity》2011,20(2):389-412
We prove asymptotically optimal bounds on the Gaussian noise sensitivity and Gaussian surface area of degree-d polynomial threshold functions. In particular, we show that for f a degree-d polynomial threshold function, the Gaussian noise sensitivity of f with parameter ε is at most d·arcsin(√(2ε−ε^2))/π. This bound translates into an optimal bound on the Gaussian surface area of such functions, namely that the Gaussian surface area is at most d/√(2π). Finally, we note that the latter result implies bounds on the runtime of agnostic learning algorithms for polynomial threshold functions.
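A quick way to read the bound above: the trigonometric identity arcsin(√(2ε−ε^2)) = arccos(1−ε) rewrites it as d·arccos(1−ε)/π. The snippet below verifies this identity numerically; connecting the d = 1 case to Sheppard's formula for halfspaces is background reading, not part of the abstract.

```python
# Numerical check that arcsin(sqrt(2*eps - eps^2)) equals arccos(1 - eps)
# for eps in [0, 1], which puts the noise-sensitivity bound above in the
# form d * arccos(1 - eps) / pi.
import math

for eps in [0.01, 0.1, 0.3, 0.5]:
    lhs = math.acos(1 - eps)
    rhs = math.asin(math.sqrt(2 * eps - eps ** 2))
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```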