20 similar documents found (search time: 62 ms)
1.
Rahul Tripathi 《Theory of Computing Systems》2010,46(2):193-221
The 1-versus-2 queries problem, which has been extensively studied in computational complexity theory, asks in its generality whether every efficient algorithm that makes at most 2 queries to a $\Sigma^{p}_{k}$-complete language $L_{k}$ has an efficient simulation that makes at most 1 query to $L_{k}$. We obtain solutions to this problem for hypotheses weaker than previously considered. We prove that:

(I) for each k≥2, $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{ZPP}^{\Sigma^{p}_{k}[1]} \Rightarrow \mathrm{PH}=\Sigma^{p}_{k}$, and
(II) $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{ZPP}^{\mathrm{NP}[1]} \Rightarrow \mathrm{PH}=\mathrm{S}^{p}_{2}$.

Here, for any complexity class $\mathcal{C}$ and integer j≥1, we define $\mathrm{ZPP}^{\mathcal{C}[j]}$ to be the class of problems solvable by zero-error randomized algorithms that run in polynomial time, make at most j queries to $\mathcal{C}$, and succeed with probability at least 1/2+1/poly(·). This same definition of $\mathrm{ZPP}^{\mathcal{C}[j]}$, also considered in Cai and Chakaravarthy (J. Comb. Optim. 11(2):189–202, 2006), subsumes the class of problems solvable by randomized algorithms that always answer correctly in expected polynomial time and make at most j queries to $\mathcal{C}$.

Hemaspaandra, Hemaspaandra, and Hempel (SIAM J. Comput. 28(2):383–393, 1998), for k>2, and Buhrman and Fortnow (J. Comput. Syst. Sci. 59(2):182–194, 1999), for k=2, had obtained the same consequence as ours in (I) using the stronger hypothesis $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{P}^{\Sigma^{p}_{k}[1]}$. Fortnow, Pavan, and Sengupta (J. Comput. Syst. Sci. 74(3):358–363, 2008) had obtained the same consequence as ours in (II) using the stronger hypothesis $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{P}^{\mathrm{NP}[1]}$.
2.
We study statistical sum-tests and independence tests, in particular for computably enumerable semimeasures on a discrete domain. Among other things, we prove that for universal semimeasures every $\Sigma^{0}_{1}$-sum-test is bounded, but unbounded $\Pi^{0}_{1}$-sum-tests exist, and we study to what extent the latter can be universal. For universal semimeasures, in the unary case of sum-tests we leave open whether universal $\Pi^{0}_{1}$-sum-tests exist, whereas in the binary case of independence tests we prove that they do not exist.
3.
We prove that the concept class of disjunctions cannot be pointwise approximated by linear combinations of any small set of arbitrary real-valued functions. That is, suppose that there exist functions $\phi_{1}, \ldots, \phi_{r}: \{-1,1\}^{n} \to \mathbb{R}$ with the property that every disjunction f on n variables has $\|f - \sum_{i=1}^{r} \alpha_{i}\phi_{i}\|_{\infty} \leq 1/3$ for some reals $\alpha_{1}, \ldots, \alpha_{r}$. We prove that then $r \geq \exp\{\Omega(\sqrt{n})\}$, which is tight. We prove an incomparable lower bound for the concept class of decision lists. For the concept class of majority functions, we obtain a lower bound of $\Omega(2^{n}/n)$, which almost meets the trivial upper bound of $2^{n}$ for any concept class. These lower bounds substantially strengthen and generalize the polynomial approximation lower bounds of Paturi (1992) and show that the regression-based agnostic learning algorithm of Kalai et al. (2005) is optimal.
4.
Roel Verstappen 《Journal of scientific computing》2011,49(1):94-110
Large eddy simulation (LES) seeks to predict the dynamics of spatially filtered turbulent flows. The very essence is that the LES solution contains only scales of size ≥Δ, where Δ denotes some user-chosen length scale. This property enables us to perform a LES when it is not feasible to compute the full, turbulent solution of the Navier-Stokes equations. Therefore, in case the large eddy simulation is based on an eddy viscosity model, we determine the eddy viscosity such that any scales of size <Δ are dynamically insignificant. In this paper, we address the following two questions: how much eddy diffusion is needed to (a) balance the production of scales of size smaller than Δ; and (b) damp any disturbances having a scale of size smaller than Δ initially. From this we deduce that the eddy viscosity $\nu_{e}$ has to depend on the invariants $q = \frac{1}{2}\mathrm{tr}(S^{2})$ and $r = -\frac{1}{3}\mathrm{tr}(S^{3})$ of the (filtered) strain rate tensor S. The simplest model is then given by $\nu_{e} = \frac{3}{2}(\Delta/\pi)^{2}\,|r|/q$. This model is successfully tested for a turbulent channel flow ($\mathrm{Re}_{\tau}=590$).
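As a minimal numerical sketch (not from the paper), the proposed model can be evaluated directly from a filtered strain-rate tensor; the example tensor S and filter width Δ below are illustrative assumptions:

```python
import numpy as np

def eddy_viscosity(S, delta):
    """Eddy viscosity nu_e = (3/2) * (delta/pi)^2 * |r| / q, where
    q = (1/2) tr(S^2) and r = -(1/3) tr(S^3) are the invariants of the
    (symmetric, traceless) filtered strain-rate tensor S."""
    q = 0.5 * np.trace(S @ S)
    r = -np.trace(S @ S @ S) / 3.0
    return 1.5 * (delta / np.pi) ** 2 * abs(r) / q

# Illustrative symmetric, traceless strain-rate tensor (units: 1/s).
S = np.array([[1.0,  0.5, 0.0],
              [0.5, -2.0, 0.3],
              [0.0,  0.3, 1.0]])
nu_e = eddy_viscosity(S, delta=0.1)
```

Note that q ≥ 0 always, so the model switches off (ν_e = 0) only when the third invariant r vanishes, i.e. when no energy is transferred to scales below Δ.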
5.
Given a “black box” function to evaluate an unknown rational polynomial $f \in \mathbb{Q}[x]$ at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. That is, we determine the sparsity $t \in \mathbb{Z}_{>0}$, the shift $\alpha \in \mathbb{Q}$, the exponents $0 \leq e_{1} < e_{2} < \cdots < e_{t}$, and the coefficients $c_{1}, \ldots, c_{t} \in \mathbb{Q} \setminus \{0\}$ such that $f(x) = c_{1}(x-\alpha)^{e_{1}} + c_{2}(x-\alpha)^{e_{2}} + \cdots + c_{t}(x-\alpha)^{e_{t}}$.
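As an illustrative sketch (the function name and example polynomial are our own, not from the paper), exact rational evaluation of the t-sparse shifted form is straightforward; the interpolation problem above is the inverse task of recovering t, α, the e_i, and the c_i from such black-box evaluations:

```python
from fractions import Fraction

def eval_shifted(coeffs, exps, alpha, x):
    """Evaluate f(x) = sum_i c_i * (x - alpha)^{e_i}, the t-sparse
    shifted power form, exactly over the rationals."""
    return sum(c * (x - alpha) ** e for c, e in zip(coeffs, exps))

# Hypothetical example: f = 3(x - 1/2)^5 + 2(x - 1/2)^9 is 2-sparse in
# the basis shifted by alpha = 1/2, though dense in the standard basis.
alpha = Fraction(1, 2)
c, e = [Fraction(3), Fraction(2)], [5, 9]
y = eval_shifted(c, e, alpha, Fraction(2))  # exact rational value f(2)
```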
6.
We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $C^{t}$ and try to select a parameter vector $w^{t-1}$ such that the total variance over a sequence of trials $\sum_{t=1}^{T} (w^{t-1})^{\top} C^{t} w^{t-1}$ is not much larger than the total variance of the best parameter vector u chosen in hindsight. Two parameter spaces in $\mathbb{R}^{n}$ are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\sum_{t=1}^{T} C^{t}$. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated with a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions u. For this purpose, directions are represented as dyads $uu^{\top}$ and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
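A minimal sketch of the simplex case, assuming a plain multiplicative update with gradient $2C^{t}w$ of the per-trial loss $w^{\top}C^{t}w$ (the learning rate and the random covariance stream are illustrative assumptions, not the paper's tuned parameters):

```python
import numpy as np

def eg_update(w, C, eta=0.1):
    """One Exponentiated Gradient step on the probability simplex:
    multiply each weight by exp(-eta * gradient), then renormalize
    so the weights remain a probability vector."""
    g = 2.0 * (C @ w)              # gradient of w^T C w
    w_new = w * np.exp(-eta * g)
    return w_new / w_new.sum()

rng = np.random.default_rng(0)
w = np.ones(4) / 4                 # start at the uniform distribution
for _ in range(50):
    A = rng.standard_normal((4, 4))
    C = A @ A.T                    # random covariance matrix for trial t
    w = eg_update(w, C)
```

The multiplicative form keeps w strictly inside the simplex, which is what the relative entropy regularization mentioned above enforces.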
7.
In classical constraint satisfaction, redundant modeling has been shown effective in increasing constraint propagation and reducing search space for many problem instances. In this paper, we investigate, for the first time, how to obtain the same benefit from redundant modeling in weighted constraint satisfaction problems (WCSPs), a common soft constraint framework for modeling optimization and over-constrained problems. Our work focuses on a popular and special class of problems, namely, permutation problems. First, we show how to automatically generate a redundant permutation WCSP model from an existing permutation WCSP using generalized model induction. We then uncover why naively combining mutually redundant permutation WCSPs by posting channeling constraints as hard constraints and relying on the standard node consistency (NC*) and arc consistency (AC*) algorithms would miss pruning opportunities, which are available even in a single model. Based on these observations, we suggest two approaches to handle the combined WCSP models. In our first approach, we propose $m\text{-NC}_{c}^{*}$ and $m\text{-AC}_{c}^{*}$ and their associated algorithms for effectively enforcing node and arc consistency in a combined model with m sub-models. The two notions are strictly stronger than NC* and AC* respectively. While the first approach specifically refines NC* and AC* so as to apply to combined models, in our second approach we propose a parameterized local consistency LB(m,Φ). The consistency can be instantiated with any local consistency Φ for single models and applied to a combined model with m sub-models. We also provide a simple algorithm to enforce LB(m,Φ). With the two suggested approaches, we demonstrate their applicability on several permutation problems in the experiments. Prototype implementations of our proposed algorithms confirm that applying $2\text{-NC}_{c}^{*}$, $2\text{-AC}_{c}^{*}$, and LB(2,Φ) on combined models allows far more constraint propagation than applying the state-of-the-art AC*, FDAC*, and EDAC* algorithms on single models of hard benchmark problems.
8.
Sayed Yousef Monir Vaghefi Sayed Mahmoud Monir Vaghefi 《Neural computing & applications》2011,20(7):1055-1060
A multilayer feedforward neural network with two hidden layers was designed and developed for prediction of the phosphorus content of electroless Ni–P coatings. The input parameters of the network were the pH, metal turnover, and loading of an electroless bath. The output parameter was the phosphorus content of the electroless Ni–P coatings. The temperature and molar ratio of the bath were constant ($91^{\circ}\mathrm{C}$, $0.4\ \mathrm{Ni}^{++}/\mathrm{H}_{2}\mathrm{PO}_{2}^{-}$). The network was trained and tested using the data gathered from our own experiments. The goal of the study was to estimate the accuracy of this type of neural network in prediction of the phosphorus content. The study result shows that this type of network has high accuracy even when the number of hidden neurons is very low. Some comparisons between the network's predictions and our own experimental data are given.
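A sketch of the architecture described above (two hidden layers, three inputs, one output); the layer widths, activations, initial weights, and sample bath parameters are illustrative assumptions, and a real model would be trained on the experimental data:

```python
import numpy as np

def mlp_forward(x, weights):
    """Forward pass of a feedforward network with two hidden layers
    (tanh activations) and a single linear output unit.
    `weights` is a list of (W, b) pairs, one per layer."""
    for W, b in weights[:-1]:
        x = np.tanh(W @ x + b)     # hidden layers
    W, b = weights[-1]
    return float(W @ x + b)        # linear output: predicted P content

rng = np.random.default_rng(1)
sizes = [3, 5, 5, 1]               # inputs: pH, metal turnover, loading
weights = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
           for n, m in zip(sizes, sizes[1:])]
x = np.array([4.8, 2.0, 1.5])      # hypothetical bath parameters
p_content = mlp_forward(x, weights)
```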
9.
A set A is nontrivial for the linear-exponential-time class $\mathrm{E}=\mathrm{DTIME}(2^{\mathrm{lin}})$ if for any k≥1 there is a set $B_{k} \in \mathrm{E}$ such that $B_{k}$ is (p-m-)reducible to A and $B_{k} \not\in \mathrm{DTIME}(2^{k\cdot n})$. I.e., intuitively, A is nontrivial for E if there are arbitrarily complex sets in E which can be reduced to A. Similarly, a set A is nontrivial for the polynomial-exponential-time class $\mathrm{EXP}=\mathrm{DTIME}(2^{\mathrm{poly}})$ if for any k≥1 there is a set $\hat{B}_{k} \in \mathrm{EXP}$ such that $\hat{B}_{k}$ is reducible to A and $\hat{B}_{k} \not\in \mathrm{DTIME}(2^{n^{k}})$. We show that these notions are independent, namely, there are sets $A_{1}$ and $A_{2}$ in E such that $A_{1}$ is nontrivial for E but trivial for EXP and $A_{2}$ is nontrivial for EXP but trivial for E. In fact, the latter can be strengthened to show that there is a set A∈E which is weakly EXP-hard in the sense of Lutz (SIAM J. Comput. 24:1170–1189, 1995) but E-trivial.
10.
The min-sum k-clustering problem is to partition a metric space (P,d) into k clusters $C_{1},\ldots,C_{k} \subseteq P$ such that $\sum_{i=1}^{k}\sum_{p,q\in C_{i}}d(p,q)$ is minimized.
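The objective can be sketched directly (a toy example of our own, not from the paper; here each unordered pair is counted once, whereas a double sum over p,q ∈ C_i counts each pair twice, which only scales the objective by 2):

```python
from itertools import combinations

def min_sum_cost(clusters, d):
    """Min-sum k-clustering objective: the sum over clusters of all
    pairwise distances within each cluster (unordered pairs)."""
    return sum(d(p, q) for C in clusters for p, q in combinations(C, 2))

# Toy metric: points on a line with d(p, q) = |p - q|.
clusters = [[0, 1, 2], [10, 11]]
cost = min_sum_cost(clusters, lambda p, q: abs(p - q))  # 4 + 1
```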
11.
We present in this paper an analysis of a semi-Lagrangian second order Backward Difference Formula combined with the hp-finite element method to calculate the numerical solution of convection diffusion equations in $\mathbb{R}^{2}$. Using mesh dependent norms, we prove that the a priori error estimate has two components: one corresponds to the approximation of the exact solution along the characteristic curves, which is $O(\Delta t^{2}+h^{m+1}(1+\frac{|\log h|}{\Delta t}))$; and the second, which is $O(\Delta t^{p}+\|\vec{u}-\vec{u}_{h}\|_{L^{\infty}})$, represents the error committed in the calculation of the characteristic curves. Here, m is the degree of the polynomials in the finite element space, $\vec{u}$ is the velocity vector, $\vec{u}_{h}$ is the finite element approximation of $\vec{u}$, and p denotes the order of the method employed to calculate the characteristic curves. Numerical examples support the validity of our estimates.
12.
The “Priority Algorithm” is a model of computation introduced by Borodin, Nielsen and Rackoff ((Incremental) Priority algorithms,
Algorithmica 37(4):295–326, 2003) which formulates a wide class of greedy algorithms. For an arbitrary set $\mathbb{S}$ of jobs, we are interested in whether or not there exists a priority algorithm that gains optimal profit on every subset of $\mathbb{S}$. In the case where the jobs are all intervals, we characterize such sets $\mathbb{S}$ and give an efficient algorithm (when $\mathbb{S}$ is finite) for determining this. We show that in general, however, the problem is NP-hard.
13.
We investigate the arithmetic formula complexity of the elementary symmetric polynomials $S^{k}_{n}$. We show that every multilinear homogeneous formula computing $S^{k}_{n}$ has size at least $k^{\Omega(\log k)}n$, and that product-depth d multilinear homogeneous formulas for $S^{k}_{n}$ have size at least $2^{\Omega(k^{1/d})}n$. Since $S^{n}_{2n}$ has a multilinear formula of size $O(n^{2})$, we obtain a superpolynomial separation between multilinear and multilinear homogeneous formulas. We also show that $S^{k}_{n}$ can be computed by homogeneous formulas of size $k^{O(\log k)}n$, answering a question of Nisan and Wigderson. Finally, we present a superpolynomial separation between monotone and non-monotone formulas in the noncommutative setting, answering a question of Nisan.
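Although the paper concerns formula-size lower bounds, $S^{k}_{n}$ itself is cheap to evaluate at a point; a sketch (our own illustration, not from the paper) of the standard O(nk) dynamic program over the product $\prod_{i}(1 + x_{i}z)$:

```python
def elem_sym(values, k):
    """Evaluate the elementary symmetric polynomial S_n^k at the given
    values: e[j] accumulates S_i^j for the prefix of inputs seen so far,
    updated in decreasing j so each x is used at most once per term."""
    e = [1] + [0] * k
    for x in values:
        for j in range(k, 0, -1):
            e[j] += x * e[j - 1]
    return e[k]

# S_4^2(1, 2, 3, 4) = 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4 = 35
s = elem_sym([1, 2, 3, 4], 2)
```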
14.
Mohammad Ali Abam Mark de Berg Mohammad Farshi Joachim Gudmundsson Michiel Smid 《Algorithmica》2011,61(1):207-225
Let (S,d) be a finite metric space, where each element p∈S has a non-negative weight w(p). We study spanners for the set S with respect to a weighted distance function.