20 similar documents found (search time: 15 ms)
1.
We prove that the concept class of disjunctions cannot be pointwise approximated by linear combinations of any small set of arbitrary real-valued functions. That is, suppose that there exist functions $\phi_{1}, \ldots, \phi_{r} : \{-1,1\}^{n} \to \mathbb{R}$ with the property that every disjunction f on n variables has $\|f - \sum_{i=1}^{r} \alpha_{i}\phi_{i}\|_{\infty} \leq 1/3$ for some reals $\alpha_{1}, \ldots, \alpha_{r}$. We prove that then $r \geq \exp\{\Omega(\sqrt{n})\}$, which is tight. We prove an incomparable lower bound for the concept class of decision lists. For the concept class of majority functions, we obtain a lower bound of $\Omega(2^{n}/n)$, which almost meets the trivial upper bound of $2^{n}$ for any concept class. These lower bounds substantially strengthen and generalize the polynomial approximation lower bounds of Paturi (1992) and show that the regression-based agnostic learning algorithm of Kalai et al. (2005) is optimal.
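The flavor of this bound can be checked numerically for tiny n. The sketch below (Python with numpy assumed) fits a disjunction with all monomials of degree at most d by least squares; least squares does not find the best ℓ∞ combination, so this only illustrates the setup, not the theorem:

```python
import itertools
import numpy as np

def linf_error_of_lstsq_fit(n, degree):
    """l_inf error of a least-squares fit of the n-variable disjunction
    by all monomials of degree <= `degree` (illustrative only; the paper's
    bound concerns the best l_inf approximation over arbitrary bases)."""
    points = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    # Convention: x_i = -1 encodes "variable i is true"; the disjunction
    # outputs +1 unless every x_i = +1.
    f = np.where((points == -1).any(axis=1), 1.0, -1.0)
    # Design matrix: one column per subset S, |S| <= degree, chi_S(x) = prod_{i in S} x_i.
    cols = []
    for k in range(degree + 1):
        for S in itertools.combinations(range(n), k):
            cols.append(points[:, list(S)].prod(axis=1))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.max(np.abs(A @ coef - f))
```

With the full basis (degree = n) the fit is exact, while already at degree 1 the ℓ∞ error is far above the 1/3 threshold appearing in the statement.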
2.
Rahul Tripathi, Theory of Computing Systems, 2010, 46(2):193–221
The 1-versus-2 queries problem, which has been extensively studied in computational complexity theory, asks in its generality whether every efficient algorithm that makes at most 2 queries to a $\Sigma^{p}_{k}$-complete language $L_{k}$ has an efficient simulation that makes at most 1 query to $L_{k}$. We obtain solutions to this problem for hypotheses weaker than previously considered. We prove that:

(I) for each k ≥ 2, $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt} \subseteq \mathrm{ZPP}^{\Sigma^{p}_{k}[1]} \Rightarrow \mathrm{PH} = \Sigma^{p}_{k}$, and
(II) $\mathrm{P}^{\mathrm{NP}[2]}_{tt} \subseteq \mathrm{ZPP}^{\mathrm{NP}[1]} \Rightarrow \mathrm{PH} = \mathrm{S}_{2}^{p}$.

Here, for any complexity class $\mathcal{C}$ and integer j ≥ 1, we define $\mathrm{ZPP}^{\mathcal{C}[j]}$ to be the class of problems solvable by zero-error randomized algorithms that run in polynomial time, make at most j queries to $\mathcal{C}$, and succeed with probability at least 1/2 + 1/poly(·). This same definition of $\mathrm{ZPP}^{\mathcal{C}[j]}$, also considered in Cai and Chakaravarthy (J. Comb. Optim. 11(2):189–202, 2006), subsumes the class of problems solvable by randomized algorithms that always answer correctly in expected polynomial time and make at most j queries to $\mathcal{C}$.

Hemaspaandra, Hemaspaandra, and Hempel (SIAM J. Comput. 28(2):383–393, 1998), for k > 2, and Buhrman and Fortnow (J. Comput. Syst. Sci. 59(2):182–194, 1999), for k = 2, had obtained the same consequence as ours in (I) using the stronger hypothesis $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt} \subseteq \mathrm{P}^{\Sigma^{p}_{k}[1]}$. Fortnow, Pavan, and Sengupta (J. Comput. Syst. Sci. 74(3):358–363, 2008) had obtained the same consequence as ours in (II) using the stronger hypothesis $\mathrm{P}^{\mathrm{NP}[2]}_{tt} \subseteq \mathrm{P}^{\mathrm{NP}[1]}$.
3.
We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $C^{t}$ and try to select a parameter vector $w^{t-1}$ such that the total variance over a sequence of trials $\sum_{t=1}^{T} (w^{t-1})^{\top} C^{t} w^{t-1}$ is not much larger than the total variance of the best parameter vector u chosen in hindsight. Two parameter spaces in ℝ^n are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\sum_{t=1}^{T} C^{t}$. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions u. For this purpose, directions are represented as dyads $uu^{\top}$ and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
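For the simplex case, the Exponentiated Gradient update can be sketched in a few lines (numpy assumed; the learning rate eta = 0.5 is a hypothetical choice, whereas the paper tunes it to obtain its regret bounds):

```python
import numpy as np

def eg_variance_minimization(covariances, eta=0.5):
    """Exponentiated Gradient sketch for online variance minimization on the
    probability simplex: multiplicative update with the gradient of w^T C w,
    followed by renormalization."""
    n = covariances[0].shape[0]
    w = np.full(n, 1.0 / n)           # start at the uniform distribution
    total = 0.0
    for C in covariances:
        total += w @ C @ w            # variance suffered in this trial
        grad = 2.0 * (C @ w)          # gradient of w^T C w at the current w
        w = w * np.exp(-eta * grad)   # multiplicative (exponentiated) step
        w /= w.sum()                  # renormalize back onto the simplex
    return total, w
```

On a stationary sequence, e.g. every $C^{t} = \mathrm{diag}(1, 0.01)$, the weight vector drifts toward the minimum-variance portfolio, which here puts almost all mass on the low-variance coordinate.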
4.
We investigate the diameter problem in the streaming and sliding-window models. We show that, for a stream of n points or a sliding window of size n, any exact algorithm for diameter requires $\Omega(n)$ bits of space. We present a simple ε-approximation algorithm for computing the diameter in the streaming model. Our main result is an ε-approximation algorithm that maintains the diameter in two dimensions in the sliding-window model using $O((1/\epsilon^{3/2}) \log^{3} n(\log R + \log\log n + \log(1/\epsilon)))$ bits of space, where R is the maximum, over all windows, of the ratio of the diameter to the minimum non-zero distance between any two points in the window.
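A standard way to get a (1+ε)-approximation of the diameter in a plain stream, keeping only O(1/√ε) numbers, is to track extreme projections along a few fixed directions. The sketch below (plain Python) shows that classical technique; the paper's sliding-window algorithm is more involved, since old points must expire:

```python
import math

class StreamingDiameter2D:
    """Classic streaming (1+eps)-approximation of the diameter in 2D: keep
    only the min/max projection along O(1/sqrt(eps)) fixed directions.
    Adjacent directions differ by at most sqrt(eps) radians, so the best
    kept direction loses only a (1 - O(eps)) factor of the true diameter."""
    def __init__(self, eps):
        k = max(2, math.ceil(math.pi / math.sqrt(eps)))  # number of directions
        self.dirs = [(math.cos(j * math.pi / k), math.sin(j * math.pi / k))
                     for j in range(k)]
        self.lo = [math.inf] * k
        self.hi = [-math.inf] * k

    def add(self, x, y):
        for j, (dx, dy) in enumerate(self.dirs):
            p = x * dx + y * dy
            self.lo[j] = min(self.lo[j], p)
            self.hi[j] = max(self.hi[j], p)

    def diameter(self):
        return max(h - l for l, h in zip(self.lo, self.hi))
```

The estimate never exceeds the true diameter and underestimates it by at most a cos(π/2k) factor, which is 1 − O(ε) for k = ⌈π/√ε⌉.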
5.
Roel Verstappen, Journal of Scientific Computing, 2011, 49(1):94–110
Large eddy simulation (LES) seeks to predict the dynamics of spatially filtered turbulent flows. The very essence is that the LES solution contains only scales of size ≥ Δ, where Δ denotes some user-chosen length scale. This property enables us to perform an LES when it is not feasible to compute the full, turbulent solution of the Navier-Stokes equations. Therefore, when the large eddy simulation is based on an eddy viscosity model, we determine the eddy viscosity such that any scales of size < Δ are dynamically insignificant. In this paper, we address the following two questions: how much eddy diffusion is needed to (a) balance the production of scales of size smaller than Δ; and (b) damp any disturbances having a scale of size smaller than Δ initially. From this we deduce that the eddy viscosity $\nu_{e}$ has to depend on the invariants $q = \frac{1}{2}\mathrm{tr}(S^{2})$ and $r = -\frac{1}{3}\mathrm{tr}(S^{3})$ of the (filtered) strain rate tensor S. The simplest model is then given by $\nu_{e} = \frac{3}{2}(\Delta/\pi)^{2}\,|r|/q$. This model is successfully tested for a turbulent channel flow (Re_τ = 590).
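The proposed model is straightforward to evaluate pointwise. A minimal sketch (numpy assumed) computes the two invariants and the eddy viscosity from a given filtered strain-rate tensor:

```python
import numpy as np

def eddy_viscosity(S, delta):
    """Eddy viscosity nu_e = (3/2) (Delta/pi)^2 |r| / q from the invariants
    q = 1/2 tr(S^2) and r = -1/3 tr(S^3) of the filtered strain-rate tensor S
    (S symmetric, and trace-free for incompressible flow)."""
    S = np.asarray(S, dtype=float)
    q = 0.5 * np.trace(S @ S)
    r = -(1.0 / 3.0) * np.trace(S @ S @ S)
    return 1.5 * (delta / np.pi) ** 2 * abs(r) / q
```

For example, the trace-free tensor S = diag(1, 1, −2) gives q = 3 and r = 2, so with Δ = π the model returns ν_e = 1.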
6.
Given a “black box” function to evaluate an unknown rational polynomial $f \in \mathbb{Q}[x]$ at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. That is, we determine the sparsity $t \in \mathbb{Z}_{>0}$, the shift $\alpha \in \mathbb{Q}$, the exponents $0 \leq e_{1} < e_{2} < \cdots < e_{t}$, and the coefficients $c_{1}, \ldots, c_{t} \in \mathbb{Q}\setminus\{0\}$ such that $f(x) = c_{1}(x-\alpha)^{e_{1}} + c_{2}(x-\alpha)^{e_{2}} + \cdots + c_{t}(x-\alpha)^{e_{t}}$.
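The target representation can be illustrated with dense polynomials and exact rational arithmetic: a Taylor shift by α rewrites f in the basis (x−α)^i, and a brute-force search over candidate shifts picks the sparsest rewriting. This is only a toy illustration (Python, fractions); the paper's algorithms recover α from black-box evaluations modulo p, without any such search:

```python
from fractions import Fraction

def shifted_coeffs(coeffs, alpha):
    """Coefficients c_i with f(x) = sum_i c_i (x - alpha)^i, by repeated
    synthetic division by (x - alpha); `coeffs` lists f lowest degree first."""
    a = [Fraction(c) for c in coeffs][::-1]   # highest degree first
    out = []
    for _ in range(len(a)):
        rem = Fraction(0)
        q = []
        for c in a:                # one synthetic division pass (Horner)
            rem = rem * alpha + c
            q.append(rem)
        out.append(rem)            # remainder = next coefficient c_i
        a = q[:-1]                 # continue with the quotient
    return out                     # c_0, c_1, ..., c_n

def sparsest_shift(coeffs, candidates):
    """Brute-force search over candidate shifts for the sparsest rewriting."""
    best = min(candidates,
               key=lambda al: sum(c != 0 for c in shifted_coeffs(coeffs, al)))
    terms = [(i, c) for i, c in enumerate(shifted_coeffs(coeffs, best)) if c != 0]
    return best, terms
```

For instance, f(x) = x³ − 6x² + 17x − 18 has four nonzero coefficients in the power basis, but only two in the basis shifted by α = 2, since f(x) = (x−2)³ + 5(x−2).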
7.
Given an undirected graph and 0 ≤ ε ≤ 1, a set of nodes is called an ε-near clique if all but an ε fraction of the pairs of nodes in the set have a link between them. In this paper we present a fast synchronous network algorithm that uses small messages and finds a near-clique. Specifically, we present a constant-time algorithm that finds, with constant probability of success, a linear-size ε-near clique if there exists an ε³-near clique of linear size in the graph. The algorithm uses messages of O(log n) bits. The failure probability can be reduced to $n^{-\Omega(1)}$ by increasing the time complexity by a logarithmic factor, and the algorithm also works if the graph contains a clique of size $\Omega(n/(\log\log n)^{\alpha})$ for some α ∈ (0,1). Our approach is based on a new idea of adapting property testing algorithms to the distributed setting.
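The definition itself is easy to check directly for a candidate node set, which is useful for testing the output of any such algorithm (plain Python sketch; `edges` is a set of unordered pairs):

```python
from itertools import combinations

def is_eps_near_clique(nodes, edges, eps):
    """Check the definition directly: all but an eps fraction of the pairs
    of nodes in `nodes` must be joined by an edge."""
    nodes = list(nodes)
    pairs = list(combinations(nodes, 2))
    if not pairs:
        return True                     # the empty/singleton set is trivially a near-clique
    missing = sum(1 for u, v in pairs
                  if (u, v) not in edges and (v, u) not in edges)
    return missing <= eps * len(pairs)
```

For example, K4 with one edge removed misses 1 of its 6 pairs, so it is a 0.2-near clique but not a 0.1-near clique.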
8.
S. Gupta, Theory of Computing Systems, 1996, 29(6):661–672
Paul et al. [12] proved that deterministic Turing machines can be sped up by a factor of $\log^{*} t(n)$ using four alternations; that is, $\mathrm{DTIME}(t(n)\log^{*} t(n)) \subseteq \Sigma_{4}(t(n))$. Balcázar et al. [1] noted that two alternations are sufficient to achieve this speed-up of deterministic Turing machines; that is, $\mathrm{DTIME}(t(n)\log^{*} t(n)) \subseteq \Sigma_{2}(t(n))$. We supply a proof of this speed-up and, using that, show that for each time-constructible function t(n), $\mathrm{DTIME}(t(n)) \subset \Sigma_{2}(t(n))$; that is, two alternations are strictly more powerful than deterministic time. An important corollary is that at least one (or both) of the immediate generalizations of the result $\mathrm{DTIME}(n) \subset \mathrm{NTIME}(n)$ [12] must be true: $\mathrm{NTIME}(n) \neq \mathrm{co\text{-}NTIME}(n)$ or, for each time-constructible function t(n), $\mathrm{DTIME}(t(n)) \subset \mathrm{NTIME}(t(n))$.
This work was supported in part by NSF Grant CCR-8909071. The preliminary version of the work was done when the author was a graduate student at The Ohio State University.
9.
Given an alphabet Σ = {1,2,…,|Σ|}, a text string T ∈ Σ^n, and a pattern string P ∈ Σ^m, for each i = 1,2,…,n−m+1 define $L_{p}(i)$ as the p-norm distance when the pattern is aligned below the text and starts at position i of the text. The problem of pattern matching with $L_{p}$ distance is to compute $L_{p}(i)$ for every i = 1,2,…,n−m+1. We discuss the problem for p = 1, 2, ∞. First, in the case of $L_{1}$ matching (pattern matching with an $L_{1}$ distance) we show a reduction of the string matching with mismatches problem to the $L_{1}$ matching problem and we present an algorithm that approximates the $L_{1}$ matching up to a factor of 1+ε, which has an $O(\frac{1}{\varepsilon^{2}} n \log m \log|\Sigma|)$ run time. Then, the $L_{2}$ matching problem (pattern matching with an $L_{2}$ distance) is solved with a simple O(n log m) time algorithm. Finally, we provide an algorithm that approximates the $L_{\infty}$ matching up to a factor of 1+ε with a run time of $O(\frac{1}{\varepsilon} n \log m \log|\Sigma|)$. We also generalize the problem of string matching with mismatches to have weighted mismatches and present an $O(n\log^{4} m)$ algorithm that approximates the results of this problem up to a factor of O(log m) in the case that the weight function is a metric.
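The exact L2 case illustrates where the O(n log m) bound comes from: expanding the square reduces all alignments to a single cross-correlation, computable with an FFT. A sketch (numpy assumed):

```python
import numpy as np

def l2_matching(text, pattern):
    """All L2 distances ||T[i..i+m-1] - P||_2 at once, using the identity
    sum (t - p)^2 = sum t^2 + sum p^2 - 2 sum t*p; computing the correlation
    term with an FFT is what yields the O(n log m) bound."""
    T = np.asarray(text, dtype=float)
    P = np.asarray(pattern, dtype=float)
    n, m = len(T), len(P)
    # Sliding sums of t^2 over each window of length m, via prefix sums.
    csum = np.concatenate([[0.0], np.cumsum(T * T)])
    win_sq = csum[m:] - csum[:-m]                  # length n - m + 1
    # Cross-correlation sum_j T[i+j] * P[j] for every i: convolve T with
    # the reversed pattern using an FFT of power-of-two length >= n + m - 1.
    size = 1 << (n + m).bit_length()
    corr = np.fft.irfft(np.fft.rfft(T, size) * np.fft.rfft(P[::-1], size), size)
    cross = corr[m - 1 : n]                        # correlation at each alignment
    return np.sqrt(np.maximum(win_sq + np.dot(P, P) - 2.0 * cross, 0.0))
```

For T = (1,2,3,4) and P = (3,4), the three alignments give distances √8, √2, and 0 (the last alignment matches exactly).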
10.
Amir Shpilka, Computational Complexity, 2009, 18(4):495–525
In this work we give two new constructions of ε-biased generators. Our first construction significantly extends a result of
Mossel et al. (Random Structures and Algorithms 2006, pages 56-81), and our second construction answers an open question of
Dodis and Smith (STOC 2005, pages 654-663). In particular we obtain the following results:
11.
We show a construction of a PCP with both sub-constant error and almost-linear size. Specifically, for some constant 0 < α < 1, we construct a PCP verifier for checking satisfiability of Boolean formulas that on input of size n uses $\log n + O((\log n)^{1-\alpha})$ random bits to make 7 queries to a proof of size $n \cdot 2^{O((\log n)^{1-\alpha})}$, where each query is answered by an $O((\log n)^{1-\alpha})$-bit long string, and the verifier has perfect completeness and error $2^{-\Omega((\log n)^{\alpha})}$.
12.
This paper studies the notions of self-reducibility and autoreducibility. Our main result regarding length-decreasing self-reducibility is that any complexity class $\mathcal{C}$ that has a (logspace) complete language and is closed under polynomial-time (logspace) padding has the property that if all $\mathcal{C}$-complete languages are length-decreasing (logspace) self-reducible then $\mathcal{C} \subseteq \mathrm{P}$ ($\mathcal{C} \subseteq \mathrm{L}$). In particular, this result applies to NL, NP and PSPACE. We also prove an equivalent of this theorem for function classes (for example, for #P).
We also show that for several hard function classes, in particular for #P, it is the case that all their complete functions are deterministically autoreducible. In particular, we show the following result. Let f be a #P parsimonious function with two preimages of 0. We show that there are two FP functions h and t such that for all inputs x we have f(x) = t(x) + f(h(x)), h(x) ≠ x, and t(x) ∈ {0,1}. Our results regarding single-query autoreducibility of #P functions can be contrasted with random self-reducibility, for which it is known that if a #P-complete function were random self-reducible with one query then the polynomial hierarchy would collapse.
13.
Alexander A. Sherstov, Computational Complexity, 2010, 19(1):135–150
We solve an open problem in communication complexity posed by Kushilevitz and Nisan (1997). Let $R_{\epsilon}(f)$ and $D^{\mu}_{\epsilon}(f)$ denote the randomized and μ-distributional communication complexities of f, respectively (ε a small constant). Yao's well-known minimax principle states that $R_{\epsilon}(f) = \max_{\mu}\{D^{\mu}_{\epsilon}(f)\}$. Kushilevitz and Nisan (1997) ask whether this equality is approximately preserved if the maximum is taken over product distributions only, rather than all distributions μ. We give a strong negative answer to this question. Specifically, we prove the existence of a function $f : \{0,1\}^{n} \times \{0,1\}^{n} \to \{0,1\}$ for which $\max_{\mu\,\mathrm{product}}\{D^{\mu}_{\epsilon}(f)\} = \Theta(1)$ but $R_{\epsilon}(f) = \Theta(n)$. We also obtain an exponential separation between the statistical query dimension and sign-rank, solving a problem previously posed by the author (2007).
14.
We present in this paper an analysis of a semi-Lagrangian second order Backward Difference Formula combined with hp-finite element method to calculate the numerical solution of convection diffusion equations in ℝ². Using mesh dependent norms, we prove that the a priori error estimate has two components: one corresponds to the approximation of the exact solution along the characteristic curves, which is $O(\Delta t^{2} + h^{m+1}(1 + \frac{|\log h|}{\Delta t}))$; and the second, which is $O(\Delta t^{p} + \|\vec{u} - \vec{u}_{h}\|_{L^{\infty}})$, represents the error committed in the calculation of the characteristic curves. Here, m is the degree of the polynomials in the finite element space, $\vec{u}$ is the velocity vector, $\vec{u}_{h}$ is the finite element approximation of $\vec{u}$, and p denotes the order of the method employed to calculate the characteristic curves. Numerical examples support the validity of our estimates.
15.
We study algorithms simulating a system evolving with Hamiltonian $H = \sum_{j=1}^{m} H_{j}$, where each of the $H_{j}$, j = 1, …, m, can be simulated efficiently. We are interested in the cost of approximating $e^{-iHt}$, $t \in \mathbb{R}$, with error ε. We consider algorithms based on high order splitting formulas that play an important role in quantum Hamiltonian simulation. These formulas approximate $e^{-iHt}$ by a product of exponentials involving the $H_{j}$, j = 1, …, m. We obtain an upper bound for the number of required exponentials. Moreover, we derive the order of the optimal splitting method that minimizes our upper bound. We show significant speedups relative to previously known results.
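A first-order Lie-Trotter product already illustrates the setup (numpy assumed); the paper analyzes high-order splitting formulas, which converge much faster than this first-order sketch:

```python
import numpy as np

def expm_hermitian(H, t):
    """e^{-i H t} for Hermitian H, via the eigendecomposition H = V diag(w) V^dag."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def trotter(Hs, t, n):
    """First-order splitting: (prod_j e^{-i H_j t/n})^n approximates e^{-iHt}
    with error O(t^2 / n) when the H_j do not commute."""
    d = Hs[0].shape[0]
    step = np.eye(d, dtype=complex)
    for H in Hs:
        step = step @ expm_hermitian(H, t / n)
    return np.linalg.matrix_power(step, n)
```

For two non-commuting 2x2 terms (the Pauli matrices σ_z and σ_x), the spectral-norm error shrinks roughly like 1/n as the number of time slices n grows, matching the first-order rate.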
16.
Philippe Raïpin Parvédy, Michel Raynal, Corentin Travers, Theory of Computing Systems, 2010, 47(1):259–287
The k-set agreement problem is a generalization of the consensus problem: considering a system made up of n processes where each process proposes a value, each non-faulty process has to decide a value such that a decided value is a proposed value, and no more than k different values are decided. It has recently been shown that, in the crash failure model, $\min(\lfloor \frac{f}{k}\rfloor + 2, \lfloor \frac{t}{k}\rfloor + 1)$
17.
Complexity of Hard-Core Set Proofs
We study a fundamental result of Impagliazzo (FOCS'95) known as the hard-core set lemma. Consider any function f : {0,1}^n → {0,1} which is “mildly hard”, in the sense that any circuit of size s must disagree with f on at least a δ fraction of inputs. Then, the hard-core set lemma says that f must have a hard-core set H of density δ on which it is “extremely hard”, in the sense that any circuit of size $s' = O(s/(\frac{1}{\epsilon^{2}}\log(\frac{1}{\epsilon\delta})))$ must disagree with f on at least a $(1-\epsilon)/2$ fraction of inputs from H.
18.
Seth Pettie, Distributed Computing, 2010, 22(3):147–166
We present efficient algorithms for computing very sparse low distortion spanners in distributed networks and prove some non-trivial lower bounds on the tradeoff between time, sparseness, and distortion. All of our algorithms assume a synchronized distributed network, where relatively short messages may be communicated in each time step. Our first result is a fast distributed algorithm for finding an $O(2^{\log^{*} n} \log n)$
19.
We design approximation algorithms for the vertex ordering problems Minimum Linear Arrangement, Minimum Containing Interval Graph, and Minimum Storage-Time Product, achieving approximation factors of $O(\sqrt{\log n}\,\log\log n)$.
20.
In classical constraint satisfaction, redundant modeling has been shown effective in increasing constraint propagation and reducing search space for many problem instances. In this paper, we investigate, for the first time, how to obtain the same benefits from redundant modeling in weighted constraint satisfaction problems (WCSPs), a common soft constraint framework for modeling optimization and over-constrained problems. Our work focuses on a popular and special class of problems, namely, permutation problems. First, we show how to automatically generate a redundant permutation WCSP model from an existing permutation WCSP using generalized model induction. We then uncover why naively combining mutually redundant permutation WCSPs by posting channeling constraints as hard constraints and relying on the standard node consistency (NC*) and arc consistency (AC*) algorithms would miss pruning opportunities, which are available even in a single model. Based on these observations, we suggest two approaches to handle the combined WCSP models. In our first approach, we propose $m\text{-NC}_{c}^{*}$ and $m\text{-AC}_{c}^{*}$ and their associated algorithms for effectively enforcing node and arc consistencies in a combined model with m sub-models. The two notions are strictly stronger than NC* and AC* respectively. While the first approach specifically refines NC* and AC* so as to apply to combined models, in our second approach, we propose a parameterized local consistency LB(m,Φ). The consistency can be instantiated with any local consistency Φ for single models and applied to a combined model with m sub-models. We also provide a simple algorithm to enforce LB(m,Φ). With the two suggested approaches, we demonstrate their applicability on several permutation problems in the experiments. Prototype implementations of our proposed algorithms confirm that applying $2\text{-NC}_{c}^{*}$, $2\text{-AC}_{c}^{*}$, and LB(2,Φ) on combined models allows far more constraint propagation than applying the state-of-the-art AC*, FDAC*, and EDAC* algorithms on single models of hard benchmark problems.