Similar Literature (20 results)
1.
2.
In the List H-Homomorphism Problem, for a graph H that is a parameter of the problem, an instance consists of an undirected graph G with a list constraint \({L(v) \subseteq V(H)}\) for each vertex \({v \in V(G)}\), and the objective is to determine whether there is a list H-homomorphism \({f:V(G) \to V(H)}\), that is, \({f(v) \in L(v)}\) for every \({v \in V(G)}\) and \({(f(u),f(v)) \in E(H)}\) whenever \({(u,v) \in E(G)}\). We consider the problem of testing list H-homomorphisms in the following weighted setting: an instance consists of an undirected graph G, list constraints L, weights imposed on the vertices of G, and a map \({f:V(G) \to V(H)}\) given as oracle access. The objective is to determine whether f is a list H-homomorphism or far from any list H-homomorphism. Farness is measured by the total weight of the vertices \({v \in V(G)}\) at which f(v) must be changed in order to make f a list H-homomorphism. In this paper, we classify graphs H with respect to the number of queries to f required to test list H-homomorphisms. Specifically, we show that (i) list H-homomorphisms are testable with a constant number of queries if and only if H is a reflexive complete graph or an irreflexive complete bipartite graph and (ii) list H-homomorphisms are testable with a sublinear number of queries if and only if H is a bi-arc graph.

3.
A circuit C compresses a function \({f : \{0,1\}^n\rightarrow \{0,1\}^m}\) if given an input \({x\in \{0,1\}^n}\), the circuit C can shrink x to a shorter \({\ell}\)-bit string x′ such that later, a computationally unbounded solver D will be able to compute f(x) based on x′. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size \({s=n^c}\). Motivated by cryptographic applications, we focus on average-case \({(\ell,\epsilon)}\) incompressibility, which guarantees that on a random input \({x\in \{0,1\}^n}\), for every size s circuit \({C:\{0,1\}^n\rightarrow \{0,1\}^{\ell}}\) and any unbounded solver D, the success probability \({\Pr_x[D(C(x))=f(x)]}\) is upper-bounded by \({2^{-m}+\epsilon}\). While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work, we present the following results:
  1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) which is incompressible by size \({n^c}\) circuits with communication \({\ell=(1-o(1)) \cdot n}\) and error \({\epsilon=n^{-c}}\). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
  2. We show that it is possible to achieve negligible error parameter \({\epsilon=n^{-\omega(1)}}\) for nonboolean functions. Specifically, assuming that E is hard for exponential size \({\Sigma_3}\)-circuits, we construct a nonboolean function \({f:\{0,1\}^n\rightarrow \{0,1\}^m}\) which is incompressible by size \({n^c}\) circuits with \({\ell=\Omega(n)}\) and extremely small \({\epsilon=n^{-c} \cdot 2^{-m}}\). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative error deterministic extractor which may be of independent interest.
  3. We show that the task of constructing an incompressible boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) with negligible error parameter \({\epsilon}\) cannot be achieved by “existing proof techniques”. Namely, nondeterministic reductions (or even \({\Sigma_i}\) reductions) cannot get \({\epsilon=n^{-\omega(1)}}\) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (STOC 08).

4.
This paper considers the quantum query complexity of almost all functions in the set \({\mathcal{F}}_{N,M}\) of \({N}\)-variable Boolean functions with on-set size \({M (1\le M \le 2^{N}/2)}\), where the on-set size is the number of inputs on which the function is true. The main result is that, for all functions in \({\mathcal{F}}_{N,M}\) except for a polynomially small fraction, the quantum query complexity is \({ \Theta\left(\frac{\log{M}}{c + \log{N} - \log\log{M}} + \sqrt{N}\right)}\) for a constant \({c > 0}\). This is quite different from the quantum query complexity of the hardest function in \({\mathcal{F}}_{N,M}\): \({\Theta\left(\sqrt{N\frac{\log{M}}{c + \log{N} - \log\log{M}}} + \sqrt{N}\right)}\). In contrast, almost all functions in \({\mathcal{F}}_{N,M}\) have the same randomized query complexity \({\Theta(N)}\) as the hardest one, up to a constant factor.

5.
Scaling skyline queries over high-dimensional datasets remains challenging because most existing algorithms assume dimensional independence, discarding the correlation distribution when establishing their worst-case complexity. In this paper, we present HashSkyline, a systematic and correlation-aware approach for scaling skyline queries over high-dimensional datasets with three novel features: First, it offers a fast hash-based method that prunes non-skyline points by utilizing data correlation characteristics and speeds up the overall skyline evaluation for correlated datasets. Second, we develop \(HashSkyline_{GPU}\), which can dramatically reduce the response time for anti-correlated and independent datasets by capitalizing on the parallel processing power of GPUs. Third, the HashSkyline approach uses a pivot cell-based mechanism combined with a correlation threshold to determine the correlation distribution characteristics of a given dataset, enabling adaptive configuration of HashSkyline for skyline query evaluation by auto-switching between \(HashSkyline_{CPU}\) and \(HashSkyline_{GPU}\). We evaluate the validity of HashSkyline using both synthetic and real datasets. Our experiments show that HashSkyline incurs significantly less pre-processing cost and achieves significantly higher overall query performance than existing state-of-the-art algorithms.
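For context, the sketch below (not part of the paper) spells out the dominance test underlying skyline queries and a naive \(O(n^2)\) baseline that correlation-aware pruning such as HashSkyline's is designed to beat; the function names and the "smaller is better" convention are illustrative assumptions.

```python
import numpy as np

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better
    in at least one (here 'better' means smaller)."""
    return np.all(p <= q) and np.any(p < q)

def naive_skyline(points):
    """O(n^2) baseline: keep every point not dominated by any other point."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# toy example: 2-d points, smaller is better in both dimensions
pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(naive_skyline(pts))   # [3.0, 3.0] is dropped: it is dominated by [2.0, 2.0]
```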

6.
Most entropy notions \({H(.)}\), like Shannon or min-entropy, satisfy a chain rule stating that for random variables \({X,Z}\), and \({A}\) we have \({H(X|Z,A)\ge H(X|Z)-|A|}\). That is, by conditioning on \({A}\) the entropy of \({X}\) can decrease by at most the bitlength \({|A|}\) of \({A}\). Such chain rules are known to hold for some computational entropy notions like Yao’s and unpredictability-entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem for which we give a strong negative answer: we construct joint distributions \({(X,Z,A)}\), where \({A}\) is a distribution over a single bit, such that the HILL entropy \({H^{\mathsf{HILL}}(X|Z)}\) is large but \({H^{\mathsf{HILL}}(X|Z,A)}\) is basically zero. Our counterexample makes only the minimal assumption that \({{\mathbf{NP}} \nsubseteq{\mathbf{P/poly}}}\). Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation to the underlying object.

7.
Aggregate similarity search, also known as the aggregate nearest-neighbor (ANN) query, finds many useful applications in spatial and multimedia databases. Given a group Q of M query objects, it retrieves from a database the objects most similar to Q, where the similarity is an aggregation (e.g., \({{\mathrm{sum}}}\), \(\max \)) of the distances between each retrieved object p and all the objects in Q. In this paper, we propose an added flexibility to the query definition, where the similarity is an aggregation over the distances between p and any subset of \(\phi M\) objects in Q for some support \(0< \phi \le 1\). We call this new definition flexible aggregate similarity search and accordingly refer to a query as a flexible aggregate nearest-neighbor (FANN) query. We present algorithms for answering FANN queries exactly and approximately. Our approximation algorithms are especially appealing: they are simple, highly efficient, work well in both low and high dimensions, and return near-optimal answers with guaranteed constant-factor approximations in any dimension. Extensive experiments on large real and synthetic datasets from 2 to 74 dimensions demonstrate their superior efficiency and high quality.
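To make the flexible aggregate distance concrete: for monotone aggregates such as sum and max, the optimal subset of \(\phi M\) query objects for a candidate p is simply its \(\lceil \phi M\rceil\) nearest objects in Q, so a brute-force evaluator can be written as in the sketch below. This illustrates the problem definition only, not one of the paper's algorithms; all names are hypothetical.

```python
import math
import numpy as np

def fann_score(p, Q, phi, agg="sum"):
    """Flexible aggregate distance of candidate p to query set Q:
    aggregate over the closest ceil(phi * |Q|) query points."""
    k = max(1, math.ceil(phi * len(Q)))
    d = np.sort(np.linalg.norm(Q - p, axis=1))[:k]   # k smallest distances
    return d.sum() if agg == "sum" else d.max()

def fann_naive(P, Q, phi, agg="sum"):
    """Exact but brute-force FANN: scan every database object and keep the best."""
    scores = [fann_score(p, Q, phi, agg) for p in P]
    return P[int(np.argmin(scores))]

# toy usage: phi = 0.5 means only the best half of the query group counts
Q = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
P = np.array([[0.5, 0.0], [5.0, 5.0]])
print(fann_naive(P, Q, phi=0.5))   # -> [0.5, 0.0]
```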

8.
The calculus \(\mathrm{T}^{-}\) is a successor-free version of Gödel’s T. It is well known that a number of important complexity classes, such as \(\textsc{logspace}\), \(\textsc{p}\), \(\textsc{linspace}\), \(\textsc{etime}\) and \(\textsc{pspace}\), are captured by natural fragments of \(\mathrm{T}^{-}\) and related calculi. We introduce the calculus \(\mathrm{T}^{\backsim}\), a non-deterministic variant of \(\mathrm{T}^{-}\), and compare the computational power of \(\mathrm{T}^{\backsim}\) and \(\mathrm{T}^{-}\). First, we provide a denotational semantics for \(\mathrm{T}^{\backsim}\) and prove this semantics to be adequate. Furthermore, we prove that \(\textsc{linspace}\subseteq \mathcal {G}^{\backsim }_{0} \subseteq \textsc{linspace}\) and \(\textsc{etime}\subseteq \mathcal {G}^{\backsim }_{1} \subseteq \textsc{pspace}\), where \(\mathcal {G}^{\backsim }_{0}\) and \(\mathcal {G}^{\backsim }_{1}\) are classes of problems decidable by certain fragments of \(\mathrm{T}^{\backsim}\). (It is proved elsewhere that the corresponding fragments of \(\mathrm{T}^{-}\) equal \(\textsc{linspace}\) and \(\textsc{etime}\), respectively.) Finally, we show a way to interpret \(\mathrm{T}^{\backsim}\) in \(\mathrm{T}^{-}\).

9.
Architectures depict design principles: paradigms that can be understood by all, allow thinking on a higher plane, and help avoid low-level mistakes. They provide means for ensuring correctness by construction by enforcing global properties characterizing the coordination between components. An architecture can be considered as an operator A that, applied to a set of components \({\mathcal{B}}\), builds a composite component \({A(\mathcal{B})}\) meeting a characteristic property \({\Phi}\). Architecture composability is a basic and common problem faced by system designers. In this paper, we propose a formal and general framework for architecture composability based on an associative, commutative and idempotent architecture composition operator \({\oplus}\). The main result is that if two architectures \({A_1}\) and \({A_2}\) enforce respectively safety properties \({\Phi_{1}}\) and \({\Phi_{2}}\), the architecture \({A_{1} \oplus A_{2}}\) enforces the property \({\Phi_{1} \land \Phi_{2}}\); that is, both properties are preserved by architecture composition. We also establish preservation of liveness properties by architecture composition. The presented results are illustrated by a running example and a case study.

10.
This paper studies the problem of approximating a function f in a Banach space \(\mathcal{X}\) from measurements \(l_j(f)\), \(j=1,\ldots ,m\), where the \(l_j\) are linear functionals from \(\mathcal{X}^*\). Quantitative results for such recovery problems require additional information about the sought-after function f. This additional information takes the form of an assumption that f belongs to a certain model class \(K\subset \mathcal{X}\). Since there are generally infinitely many functions in K which share these same measurements, the best approximation is the center of the smallest ball B, called the Chebyshev ball, which contains the set \(\bar{K}\) of all f in K with these measurements. Therefore, the problem is reduced to analytically or numerically approximating this Chebyshev ball. Most results study this problem for classical Banach spaces \(\mathcal{X}\), such as the \(L_p\) spaces, \(1\le p\le \infty \), and for K the unit ball of a smoothness space in \(\mathcal{X}\). Our interest in this paper is in the model classes \(K=\mathcal{K}(\varepsilon ,V)\), with \(\varepsilon >0\) and V a finite dimensional subspace of \(\mathcal{X}\), which consist of all \(f\in \mathcal{X}\) such that \(\mathrm{dist}(f,V)_\mathcal{X}\le \varepsilon \). These model classes, called approximation sets, arise naturally in application domains such as parametric partial differential equations, uncertainty quantification, and signal processing. A general theory for the recovery of approximation sets in a Banach space is given. This theory includes tight a priori bounds on optimal performance and algorithms for finding near optimal approximations. It builds on the initial analysis given in Maday et al. (Int J Numer Method Eng 102:933–965, 2015) for the case when \(\mathcal{X}\) is a Hilbert space, further studied in Binev et al. (SIAM UQ, 2015). It is shown how the recovery problem for approximation sets is connected with well-studied concepts in Banach space theory such as liftings and the angle between spaces. Examples are given that show how this theory can be used to recover several recent results on sampling and data assimilation.

11.
The set Q of all primitive words over an alphabet X was first defined and studied by Shyr and Thierrin (Proceedings of the 1977 Inter. FCT-Conference, Poznan, Poland, Lecture Notes in Computer Science 56, pp. 171–176 (1977)). They showed that for the case |X| ≥ 2, the set Q, along with the sets \({Q^{(i)} = \{f^i\,|\,f \in Q\}, i\geq 2}\), is disjunctive. Since then, these disjunctive sets have often been cited. After Shyr and Thierrin showed that the two half sets \({Q_{ev} = \{f \in Q\,|\,|f| = {\rm even}\}}\) and \({Q_{od} = Q \setminus Q_{ev}}\) of Q are disjunctive, Chien proved that each of the sets \({Q_{p,r}= \{u\in Q\,|\,|u|\equiv r\,(mod\,p) \},\,0\leq r < p}\), is disjunctive, where p is a prime number. In this paper, we generalize this property by proving that all the languages \({Q_{n,r}= \{u\in Q\,|\,|u|\equiv r\,(mod\,n) \},\, 0\leq r < n}\), are disjunctive, where n is any positive integer. We also prove that for any n ≥ 1 and k ≥ 2, \({(Q_{n,0})^k}\) is a regular language. Some algebraic properties related to the family of languages \({\{Q_{n,r} \mid n \geq 2, 0 \leq r < n\}}\) are investigated.
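As a small illustration of the definitions above (not from the paper), the following sketch tests primitivity, using the standard fact that a word w is a proper power if and only if w occurs inside ww at an index other than 0 and |w|, and then tests membership in \({Q_{n,r}}\).

```python
def is_primitive(w: str) -> bool:
    """A word is primitive iff it is not u^k for any shorter word u and k >= 2.
    Standard test: w is a proper power iff w occurs inside (w + w) at an index
    other than 0 and len(w)."""
    return (w + w).find(w, 1) == len(w)

def in_Q_n_r(w: str, n: int, r: int) -> bool:
    """Membership in Q_{n,r}: primitive words whose length is congruent to r mod n."""
    return is_primitive(w) and len(w) % n == r

# examples over the alphabet {a, b}
assert is_primitive("aab") and not is_primitive("abab")   # abab = (ab)^2 is not primitive
assert in_Q_n_r("aab", 2, 1)                              # |aab| = 3 ≡ 1 (mod 2)
```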

12.
Let \(H_{1}, H_{2},\ldots ,H_{n}\) be separable complex Hilbert spaces with \(\dim H_{i}\ge 2\) and \(n\ge 2\). Assume that \(\rho \) is a state in \(H=H_1\otimes H_2\otimes \cdots \otimes H_n\). \(\rho \) is called strong-k-separable \((2\le k\le n)\) if \(\rho \) is separable for any k-partite division of H. In this paper, an entanglement witness criterion for strong-k-separability is obtained, which says that \(\rho \) is not strong-k-separable if and only if there exist a k-division space \(H_{m_{1}}\otimes \cdots \otimes H_{m_{k}}\) of H, a finite-rank linear elementary operator \(\Lambda :\mathcal {B}(H_{m_{2}}\otimes \cdots \otimes H_{m_{k}})\rightarrow \mathcal {B}(H_{m_{1}})\) that is positive on product states, and a state \(\rho _{0}\in \mathcal {S}(H_{m_{1}}\otimes H_{m_{1}})\), such that \(\mathrm {Tr}(W\rho )<0\), where \(W=(\mathrm{Id}\otimes \Lambda ^{\dagger })\rho _{0}\) is an entanglement witness. In addition, several different methods of constructing entanglement witnesses for multipartite states are also given.

13.
We introduce two scheduling problems, the flexible bandwidth allocation problem (\(\textsc {FBAP}\)) and the flexible storage allocation problem (\(\textsc {FSAP}\)). In both problems, we have an available resource and a set of requests, each consisting of a minimum and a maximum resource requirement for the duration of its execution, as well as a profit accrued per allocated unit of the resource. In \(\textsc {FBAP}\), the goal is to assign the available resource to a feasible subset of requests such that the total profit is maximized, while in \(\textsc {FSAP}\) we also require that each satisfied request is given a contiguous portion of the resource. Our problems generalize the classic bandwidth allocation problem (BAP) and storage allocation problem (SAP) and are therefore \(\text {NP-hard}\). Our main results are a 3-approximation algorithm for \(\textsc {FBAP}\) and a \((3+\epsilon )\)-approximation algorithm for \(\textsc {FSAP}\), for any fixed \(\epsilon >0 \). These algorithms make nonstandard use of the local ratio technique. Furthermore, we present a \((2+\epsilon )\)-approximation algorithm for \(\textsc {SAP}\), for any fixed \(\epsilon >0 \), thus improving the best known ratio of \(\frac{2e-1}{e-1} + \epsilon \). Our study is also motivated by critical resource allocation problems arising in all-optical networks.

14.
Non-negative matrix factorization (NMF) is widely used in feature extraction and dimension reduction. Essentially, it is the optimization problem of determining two non-negative low-rank matrices \(W_{m \times k}\) and \(H_{k \times n}\) for a given matrix \(A_{m \times n}\), satisfying \(A_{m \times n} \approx W_{m \times k}H_{k \times n}\). In this paper, a novel approach is proposed that improves image decomposition and reconstruction quality by introducing a Singular Value Decomposition (SVD)-based initialization scheme for the factor matrices W and H, together with a measure called the choosing rule for determining the optimum value of the factor rank k. The input image is first decomposed using SVD to obtain its singular values and corresponding singular vectors. The number of main components is then extracted as the rank value k, and the singular values and corresponding singular vectors are used to initialize W and H based on the selected rank k. Finally, convergent results are obtained using multiplicative and additive update rules. However, the convergence of iterative NMF algorithms is very slow on most platforms, limiting their practicality. To this end, a parallel implementation of the described improved NMF algorithm is presented using CUDA, a framework for parallelizing algorithms on massively parallel processors, i.e., many-core graphics processors. Experimental results show that our approach achieves better decomposition quality than traditional NMF implementations and a dramatic speedup compared to serial schemes as well as existing distributed-system implementations.
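As a rough illustration of the pipeline described above (SVD-based seeding followed by multiplicative updates), here is a minimal NumPy sketch. The exact initialization and choosing rule from the paper are not reproduced; taking absolute values of the leading singular vectors (an NNDSVD-like simplification) is an assumption, and all names are illustrative.

```python
import numpy as np

def svd_init(A, k):
    """Seed W, H from the leading k singular triplets; magnitudes are kept to
    enforce non-negativity (a simplification; the paper's scheme may differ)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    W = np.abs(U[:, :k]) * np.sqrt(s[:k])
    H = np.sqrt(s[:k])[:, None] * np.abs(Vt[:k, :])
    return W, H

def nmf_multiplicative(A, k, iters=200, eps=1e-9):
    """Standard Lee-Seung multiplicative updates minimizing ||A - WH||_F."""
    W, H = svd_init(A, k)
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy usage: factor a random non-negative 100 x 80 matrix with rank 10
A = np.abs(np.random.rand(100, 80))
W, H = nmf_multiplicative(A, k=10)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative reconstruction error
```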

15.
We initiate the study of the Remote Set Problem (\({\mathsf{RSP}}\)) on lattices, which, given a lattice, asks to find a set of points containing a point that is far from the lattice. We show a polynomial-time deterministic algorithm that on a rank n lattice \({\mathcal{L}}\) outputs a set of points, at least one of which is \({\sqrt{\log n / n} \cdot \rho(\mathcal{L})}\)-far from \({\mathcal{L}}\), where \({\rho(\mathcal{L})}\) stands for the covering radius of \({\mathcal{L}}\) (i.e., the maximum possible distance of a point in space from \({\mathcal{L}}\)). As an application, we show that the covering radius problem with approximation factor \({\sqrt{n / \log n}}\) lies in the complexity class \({\mathsf{NP}}\), improving a result of Guruswami et al. (Comput Complex 14(2):90–121, 2005) by a factor of \({\sqrt{\log n}}\). Our results apply to any \({\ell_p}\) norm for \({2 \leq p \leq \infty}\) with the same approximation factors (except for a loss of \({\sqrt{\log \log n}}\) for \({p = \infty}\)). In addition, we show that the output of our algorithm for \({\mathsf{RSP}}\) contains a point whose \({\ell_2}\) distance from \({\mathcal{L}}\) is at least \({(\log n/n)^{1/p} \cdot \rho^{(p)}(\mathcal{L})}\), where \({\rho^{(p)}(\mathcal{L})}\) is the covering radius of \({\mathcal{L}}\) measured with respect to the \({\ell_p}\) norm. The proof technique involves a theorem on balancing vectors due to Banaszczyk (Random Struct Algorithms 12(4):351–360, 1998) and the “six standard deviations” theorem of Spencer (Trans Am Math Soc 289(2):679–706, 1985).

16.
Consider a set of labels L and a set of unordered trees \(\mathcal{T}=\{\mathcal{T}^{(1)},\mathcal{T}^{(2)},\ldots ,\mathcal{T}^{(k)}\}\), where each tree \(\mathcal{T}^{(i)}\) is distinctly leaf-labeled by some subset of L. One fundamental problem is to find the biggest tree (called a supertree) representing \(\mathcal{T}\) that minimizes the disagreements with the trees in \(\mathcal{T}\) under certain criteria. In this paper, we focus on two particular supertree problems, namely, the maximum agreement supertree problem (MASP) and the maximum compatible supertree problem (MCSP). These two problems are known to be NP-hard for k≥3. This paper gives improved algorithms for both MASP and MCSP. In particular, our results imply the first polynomial time algorithms for both MASP and MCSP when both k and the maximum degree D of the input trees are constant.

17.
The construction of quantum MDS codes has been studied by many authors. We refer to the table on page 1482 of (IEEE Trans Inf Theory 61(3):1474–1484, 2015) for known constructions. However, only a few q-ary quantum MDS \([[n,n-2d+2,d]]_q\) codes with minimum distance \(d>\frac{q}{2}\) have been constructed, and only for sparse lengths \(n>q+1\). In the case \(n=\frac{q^2-1}{m}\), where \(m|q+1\) or \(m|q-1\), there are complete results. In the case \(n=\frac{q^2-1}{m}\), where \(m|q^2-1\) is a factor of neither \(q-1\) nor \(q+1\), no q-ary quantum MDS code with \(d> \frac{q}{2}\) has been constructed. In this paper we propose a direct approach to constructing Hermitian self-orthogonal codes over \(\mathbf{F}_{q^2}\), and we give some new q-ary quantum codes in this case. Moreover, many new q-ary quantum MDS codes with lengths of the form \(\frac{w(q^2-1)}{u}\) and minimum distances \(d > \frac{q}{2}\) are presented.

18.
We study unextendible maximally entangled bases (UMEBs) in \(\mathbb {C}^{d}\bigotimes \mathbb {C}^{d}\) and connect the problem to partial Hadamard matrices. We show that for a given special UMEB in \(\mathbb {C}^{d}\bigotimes \mathbb {C}^{d}\), there is a partial Hadamard matrix which cannot be extended to a Hadamard matrix in \(\mathbb {C}^{d}\). As a corollary, any \((d-1)\times d\) partial Hadamard matrix can be extended to a Hadamard matrix, which answers a conjecture about \(d=5\). We obtain that for any d there is a UMEB except for \(d=p\ \text {or}\ 2p\), where \(p\equiv 3\mod 4\) and p is a prime. The existence of different kinds of constructions of UMEBs in \(\mathbb {C}^{nd}\bigotimes \mathbb {C}^{nd}\) for any \(n\in \mathbb {N}\) and \(d=3\times 5 \times 7\) is also discussed.
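As a side illustration of the matrices involved (not part of the paper's construction), the sketch below checks whether a ±1 matrix is a partial Hadamard matrix, i.e., whether its rows are pairwise orthogonal; function and variable names are illustrative.

```python
import numpy as np

def is_partial_hadamard(M):
    """A k x d matrix with entries in {+1, -1} is a partial Hadamard matrix
    if its rows are pairwise orthogonal; for k = d it is a Hadamard matrix."""
    M = np.asarray(M)
    if not np.all(np.abs(M) == 1):
        return False
    G = M @ M.T                      # Gram matrix of the rows
    d = M.shape[1]
    return np.array_equal(G, d * np.eye(M.shape[0], dtype=M.dtype))

# a 3 x 4 partial Hadamard matrix (three mutually orthogonal ±1 rows of length 4)
M = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1]]
print(is_partial_hadamard(M))   # True; this one extends to a 4 x 4 Hadamard matrix
```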

19.
An interval extension of the successive matrix squaring (SMS) method for computing the weighted Moore–Penrose inverse \(A^{\dagger }_{MN}\), along with rigorous error bounds, is proposed for a given full rank \(m \times n\) complex matrix A, where M and N are two Hermitian positive definite matrices of orders m and n, respectively. Starting with a suitably chosen complex interval matrix containing \(A^{\dagger }_{MN}\), this method generates a sequence of complex interval matrices, each enclosing \(A^{\dagger }_{MN}\) and converging to it. A new method is developed for constructing an initial complex interval matrix containing \(A^{\dagger }_{MN}\). Convergence theorems are established. The R-order of convergence is shown to be at least l, where \(l \ge 2\). A number of numerical examples are worked out to demonstrate the method's efficiency and effectiveness. Graphs are plotted to show how the number of iterations and the computational time vary with the matrix dimensions. It is observed that the interval SMS (ISMS) is more stable than SMS.
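The abstract above concerns an interval, weighted (M, N) extension; as background only, here is a minimal NumPy sketch of the plain successive matrix squaring idea for the ordinary Moore–Penrose inverse (M = N = I, no interval arithmetic). The linear iteration \(X_{k+1} = PX_k + Q\) with \(P = I - \beta A^{*}A\) and \(Q = \beta A^{*}\) converges to \(A^{\dagger}\) for \(0 < \beta < 2/\sigma_{\max}(A)^2\), and j squarings of the block matrix \(\left(\begin{smallmatrix} P & Q \\ 0 & I\end{smallmatrix}\right)\) yield the iterate after \(2^j\) steps. Names and parameters are illustrative, not from the paper.

```python
import numpy as np

def sms_pinv(A, squarings=40):
    """Plain (non-interval, unweighted) successive matrix squaring sketch for the
    Moore-Penrose inverse A^+. The top-right block of T^(2^j), where
    T = [[P, Q], [0, I]], equals the iterate X_{2^j} of X_{k+1} = P X_k + Q."""
    m, n = A.shape
    beta = 1.0 / np.linalg.norm(A, 2) ** 2           # safe step size: 1 / sigma_max^2
    P = np.eye(n) - beta * A.conj().T @ A            # n x n
    Q = beta * A.conj().T                            # n x m
    T = np.block([[P, Q], [np.zeros((m, n)), np.eye(m)]])
    for _ in range(squarings):
        T = T @ T
    return T[:n, n:]                                 # top-right block -> A^+

# sanity check against NumPy's pinv on a random full-rank matrix
A = np.random.rand(6, 4)
print(np.allclose(sms_pinv(A), np.linalg.pinv(A), atol=1e-8))
```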

20.
Shpilka & Wigderson (IEEE conference on computational complexity, vol 87, 1999) posed the problem of proving exponential lower bounds for (nonhomogeneous) depth-three arithmetic circuits with bounded bottom fanin over a field \({{\mathbb{F}}}\) of characteristic zero. We resolve this problem by proving an \({N^{\Omega(\frac{d}{\tau})}}\) lower bound for (nonhomogeneous) depth-three arithmetic circuits with bottom fanin at most \({\tau}\) computing an explicit \({N}\)-variate polynomial of degree \({d}\) over \({{\mathbb{F}}}\). Meanwhile, Nisan & Wigderson (Comp Complex 6(3):217–234, 1997) posed the problem of proving super-polynomial lower bounds for homogeneous depth-five arithmetic circuits. Over fields of characteristic zero, we show a lower bound of \({N^{\Omega(\sqrt{d})}}\) for homogeneous depth-five circuits (resp. also for depth-three circuits) with bottom fanin at most \({N^{\mu}}\), for any fixed \({\mu < 1}\). This resolves the problem posed by Nisan and Wigderson only partially because of the added restriction on the bottom fanin (a general homogeneous depth-five circuit has bottom fanin at most \({N}\)).
