20 similar documents found; search took 0 ms
1.
In this paper, we show that the problem of deterministically factoring multivariate polynomials reduces to the problem of deterministic polynomial identity testing. Specifically, we show that given an arithmetic circuit (either explicitly or via black-box access) that computes a multivariate polynomial f, the task of computing arithmetic circuits for the factors of f can be solved deterministically, given a deterministic algorithm for the polynomial identity testing problem (we require either a white-box or a black-box algorithm, depending on the representation of f). Together with the easy observation that deterministic factoring implies a deterministic algorithm for polynomial identity testing, this establishes an equivalence between these two central derandomization problems of arithmetic complexity. Previously, such an equivalence was known only for multilinear circuits (Shpilka & Volkovich, 2010).
2.
3.
Khan, Lal Said; Khan, Majid; Jamal, Sajjad Shaukat; Amin, Muhammad 《Multimedia Tools and Applications》2022,81(23):33591-33611
Multimedia Tools and Applications - One of the most critical aspects of this technologically progressive era is the propagation of information through an unsecured communication channel. The...
4.
Oblivious polynomial evaluation (OPE) is a two-party protocol that allows a receiver R to learn an evaluation f(α) of a sender S's polynomial f(x), whilst keeping both α and f(x) private. This protocol has attracted considerable attention recently, as it has wide-ranging applications in cryptography.
In this article we review some of these applications and take an in-depth look at the special case of information-theoretic OPE. Specifically, we provide a current and critical review of the existing information-theoretic OPE protocols in the literature. We divide these protocols into two distinct cases (three-party and distributed OPE), allowing for the easy distinction and classification of future information-theoretic OPE protocols. In addition, we develop several modifications and extensions to existing schemes, resulting in increased security, flexibility and efficiency. Lastly, we identify a security flaw in a previously published OPE scheme.
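The three-party (trusted-initializer) flavour of information-theoretic OPE can be sketched very compactly. The following is a minimal illustration of the classical commodity-based idea, not any specific protocol from this survey: an initializer deals a random masking polynomial S to the sender and one evaluation of it to the receiver, after which a single blinded round suffices. All names are our own.

```python
import random
from math import comb

P = 2**31 - 1   # prime field modulus (illustrative choice)

def poly_eval(coeffs, x, p=P):
    """Horner evaluation; coefficient lists are low-degree-first."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def trusted_initializer(deg, p=P):
    """Deal correlated randomness: a random masking polynomial S to the
    sender, and a random point d together with g = S(d) to the receiver."""
    S = [random.randrange(p) for _ in range(deg + 1)]
    d = random.randrange(p)
    return S, (d, poly_eval(S, d, p))

def receiver_blind(alpha, d, p=P):
    """Receiver sends t = alpha - d; t reveals nothing, since d is uniform."""
    return (alpha - d) % p

def sender_respond(f, S, t, p=P):
    """Sender returns V(x) = f(x + t) + S(x); the random S hides f
    information-theoretically."""
    shifted = [0] * len(f)
    for i, c in enumerate(f):                  # expand c * (x + t)^i
        for k in range(i + 1):
            shifted[k] = (shifted[k] + c * comb(i, k) * pow(t, i - k, p)) % p
    return [(a + s) % p for a, s in zip(shifted, S)]

def receiver_finish(V, d, g, p=P):
    """V(d) = f(d + t) + S(d) = f(alpha) + g, so subtracting g yields f(alpha)."""
    return (poly_eval(V, d, p) - g) % p
```

Running the four moves with f(x) = 2x² + 3x + 5 and α = 7 returns f(7) = 124; each party sees only uniformly distributed messages.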
5.
6.
We show that derandomizing Polynomial Identity Testing is essentially equivalent to proving arithmetic circuit lower bounds for NEXP. More precisely, we prove that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over the integers computes an identically zero polynomial, then either (i) NEXP ⊄ P/poly, or (ii) Permanent is not computable by polynomial-size arithmetic circuits. We also prove a (partial) converse: if Permanent requires superpolynomial-size arithmetic circuits, then one can test in subexponential time whether a given arithmetic circuit of polynomially bounded degree computes an identically zero polynomial. Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: if RP = P, then NEXP is not computable by polynomial-size arithmetic circuits. Thus establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP. We also prove unconditionally that NEXP^RP does not have polynomial-size Boolean or arithmetic circuits. Finally, we show that NEXP ⊄ P/poly if both BPP = P and low-degree testing is in P; here low-degree testing is the problem of checking whether a given Boolean circuit computes a function that is close to some low-degree polynomial over a finite field.
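The randomized coRP algorithm whose derandomization is at stake here is the Schwartz–Zippel test: evaluate the circuit at random points and declare "zero" only if every evaluation vanishes. A minimal sketch follows; the gate encoding is our own toy format, and evaluating modulo a large prime (rather than over the integers) is an illustrative simplification that is sound whenever the circuit's degree is far below P.

```python
import random

P = 2**61 - 1   # a large prime; we evaluate over GF(P) (illustrative choice)

def eval_circuit(circuit, point, p=P):
    """Evaluate a straight-line arithmetic circuit. Gates (our toy encoding):
      ('in', i)     -- variable x_i        ('const', c)  -- constant c
      ('add', a, b) / ('mul', a, b)        -- refer to earlier gate indices
    The last gate is the output."""
    vals = []
    for gate in circuit:
        op = gate[0]
        if op == 'in':
            vals.append(point[gate[1]] % p)
        elif op == 'const':
            vals.append(gate[1] % p)
        elif op == 'add':
            vals.append((vals[gate[1]] + vals[gate[2]]) % p)
        else:  # 'mul'
            vals.append((vals[gate[1]] * vals[gate[2]]) % p)
    return vals[-1]

def probably_zero(circuit, nvars, trials=20, p=P):
    """Schwartz-Zippel: a nonzero polynomial of degree d vanishes at a random
    point of GF(p)^nvars with probability at most d/p, so a 'zero' verdict
    errs with probability <= (d/p)^trials (one-sided error)."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(nvars)]
        if eval_circuit(circuit, point, p) != 0:
            return False          # witness found: certainly nonzero
    return True

# Example: (x+y)^2 - (x^2 + 2xy + y^2), identically zero.
C = [('in', 0), ('in', 1), ('add', 0, 1), ('mul', 2, 2), ('mul', 0, 0),
     ('mul', 0, 1), ('add', 5, 5), ('add', 4, 6), ('mul', 1, 1),
     ('add', 7, 8), ('const', -1), ('mul', 10, 9), ('add', 3, 11)]
assert probably_zero(C, 2)        # always True for a true identity
```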
7.
Wang Jiying 《Computer Programming Skills & Maintenance》2014,(2):41-43
The system fundamentally resolves the problems of traditional examinations, such as low efficiency, long feedback times, and wasted resources. The paper describes the system's database design, main functional modules, and several key implementation techniques, and provides concrete implementation code.
8.
9.
Harry L. Trentelman Paolo Rapisarda 《Mathematics of Control, Signals, and Systems (MCSS)》1999,12(1):24-61
In this paper new algorithms are developed for J-spectral factorization of polynomial matrices. These algorithms are based on the calculus of two-variable polynomial matrices and associated quadratic differential forms, and share the common feature that the problem is lifted from the original one-variable polynomial context to a two-variable polynomial context. The problem of polynomial J-spectral factorization is thus reduced to a problem of factoring a constant matrix obtained from the coefficient matrices of the polynomial matrix to be factored. In the second part of the paper, we specifically address the problem of computing polynomial J-spectral factors in the context of H∞ control. For this, we propose an algorithm that uses the notion of a Pick matrix associated with a given two-variable polynomial matrix.
Date received: January 1, 1998. Date revised: October 15, 1998.
10.
《Journal of Symbolic Computation》1995,20(4):363-397
We consider the problem of factoring univariate polynomials over a finite field. We demonstrate that the new baby step/giant step factoring method, recently developed by Kaltofen and Shoup, can be made into a very practical algorithm. We describe an implementation of this algorithm, and present the results of empirical tests comparing this new algorithm with others. When factoring polynomials modulo large primes, the algorithm allows much larger polynomials to be factored using a reasonable amount of time and space than was previously possible. For example, this new software has been used to factor a "generic" polynomial of degree 2048 modulo a 2048-bit prime in under 12 days on a Sun SPARCstation 10, using 68 MB of main memory.
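The workhorse that the baby step/giant step method accelerates is classical distinct-degree factorization: gcd(f, x^(p^d) − x) collects exactly the irreducible factors of f whose degree divides d. A minimal pure-Python sketch of that classical step (our illustration, not the Kaltofen–Shoup implementation) is below; polynomials are coefficient lists, low degree first.

```python
def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poldivmod(a, f, p):
    """Quotient and remainder of a by f over GF(p)."""
    a = a[:]
    q = [0] * max(1, len(a) - len(f) + 1)
    inv = pow(f[-1], -1, p)
    for shift in range(len(a) - len(f), -1, -1):
        c = a[shift + len(f) - 1] * inv % p
        q[shift] = c
        for i, fi in enumerate(f):
            a[shift + i] = (a[shift + i] - c * fi) % p
    return trim(q), trim(a[:len(f) - 1] or [0])

def polmulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return poldivmod(res, f, p)[1]

def polgcd(a, b, p):
    while b != [0]:
        a, b = b, poldivmod(a, b, p)[1]
    inv = pow(a[-1], -1, p)        # normalize to a monic gcd
    return [c * inv % p for c in a]

def polpowmod(base, e, f, p):
    result, base = [1], poldivmod(base, f, p)[1]
    while e:
        if e & 1:
            result = polmulmod(result, base, f, p)
        base = polmulmod(base, base, f, p)
        e >>= 1
    return result

def distinct_degree(f, p):
    """For monic squarefree f over GF(p), return pairs (g, d), where g is the
    product of all monic irreducible degree-d factors of f."""
    out, h, d = [], [0, 1], 0          # h tracks x^(p^d) mod f
    while len(f) - 1 > 2 * d:
        d += 1
        h = polpowmod(h, p, f, p)
        hx = h[:] + [0] * max(0, 2 - len(h))
        hx[1] = (hx[1] - 1) % p        # h - x
        g = polgcd(f, trim(hx), p)
        if len(g) > 1:
            out.append((g, d))
            f = poldivmod(f, g, p)[0]
            h = poldivmod(h, f, p)[1]
    if len(f) > 1:
        out.append((f, len(f) - 1))    # what remains is irreducible
    return out
```

For example, over GF(3), `distinct_degree([2, 0, 0, 0, 1], 3)` splits x⁴ + 2 = (x² + 2)(x² + 1) into its degree-1 part x² + 2 = (x + 1)(x + 2) and its irreducible degree-2 part x² + 1.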
11.
Ovidiu Daescu 《Algorithmica》2004,38(1):131-143
In this paper we give bounds on the complexity of some algorithms for approximating 2-D and 3-D polygonal paths with the infinite beam measure of error. While the time/space complexities of the algorithms known for other error measures are well understood, path approximation with the infinite beam measure seems to be harder due to the complexity of some geometric structures that arise in the known approaches. Our results answer some open problems left in previous work. We also present a more careful analysis of the time complexity of the general iterative algorithm for the infinite beam measure and show that it can perform much better in practice than in the worst case. We then propose a new approach for path approximation that consists of a breadth-first traversal of the path approximation graph, without constructing the graph. This approach can be applied to any error criterion in any constant dimension. The running time of the new algorithm depends on the size of the output: if the optimal approximating path has m vertices, the algorithm performs F(m) iterations rather than n−1 as in previous approaches, where F(m) ≤ n−1 is the number of vertices of the path approximation graph that can be reached with at most m−2 edges. This is the first output-sensitive path approximation algorithm.
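The breadth-first idea can be sketched in a few lines. The following is our simplification, not the paper's algorithm: we use the maximum distance from the skipped vertices to the infinite line through the two endpoints as the error criterion (a stand-in for the beam measure), and the neighbors of each vertex are generated on demand, so the approximation graph is never materialized.

```python
import math
from collections import deque

def line_error(pts, i, j):
    """Max distance from the interior points pts[i+1..j-1] to the infinite
    line through pts[i] and pts[j]."""
    (x1, y1), (x2, y2) = pts[i], pts[j]
    dx, dy = x2 - x1, y2 - y1
    L = math.hypot(dx, dy) or 1.0
    return max((abs(dy * (x - x1) - dx * (y - y1)) / L
                for x, y in pts[i + 1:j]), default=0.0)

def min_vertex_approx(pts, eps):
    """Breadth-first search over the implicit approximation graph:
    vertex j is a neighbor of i iff the segment (i, j) is within tolerance.
    BFS discovery order yields a fewest-edge (fewest-vertex) path."""
    n = len(pts)
    parent = {0: None}
    frontier = deque([0])
    while frontier:
        i = frontier.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):            # edges generated on demand
            if j not in parent and line_error(pts, i, j) <= eps:
                parent[j] = i
                frontier.append(j)
    path, v = [], n - 1                      # reconstruct the subsequence
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]
```

Since consecutive vertices are always within tolerance, the target is always reachable; the BFS visits only the vertices reachable within the output's edge count, which is the output-sensitive behavior described above.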
12.
13.
14.
New classes of Linearly Independent Ternary Arithmetic (LITA) transforms, which form the bases of ternary arithmetic polynomial expansions, are introduced here. Recursive equations defining the LITA transforms and the corresponding butterfly diagrams are shown. Various properties of, and relations between, the introduced classes of new transforms are discussed. The computational cost of calculating LITA transforms and applications of the corresponding polynomial expansions in logic design are also discussed.
15.
16.
17.
M. E. Shaikin 《Automation and Remote Control》2003,64(8):1264-1274
Statistical inference is investigated under the following constraint on the covariance structure of the observation vector: the covariance matrices belong to some commutative matrix algebra. Commutative approximation of arbitrary covariance structures and statistical estimation of the parameters of a given commutative structure are studied. The results are applied to the statistical classification of Gaussian vectors having commutative covariance structure.
18.
19.
Commutative Encryption and Watermarking (CEW) is a technology that combines encryption and watermarking to provide comprehensive security protection for multimedia information. This paper presents a novel CEW scheme based on orthogonal decomposition. Unlike current CEW schemes, the proposed scheme not only achieves the integration of encryption and watermarking in the final data but also places no specific restrictions on the choice of encryption and watermarking algorithms. The proposed CEW therefore possesses higher security and applicability. Experimental results demonstrate that the proposed scheme preserves the performance of the selected encryption and watermarking algorithms and shows more robustness than other current CEW schemes.
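Why orthogonal decomposition makes the two operations commute can be shown in a toy model (our illustration, not the paper's construction): project the host signal onto two mutually orthogonal subspaces, let the cipher act only on one component and the watermark embedder only on the other. The scrambling matrix K and the additive embedding below are deliberately simplistic stand-ins for real algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4

# Random orthonormal basis, split into two mutually orthogonal subspaces.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B1, B2 = Q[:, :k], Q[:, k:]

K = rng.standard_normal((k, k))    # secret invertible mixing matrix (toy cipher)

def encrypt(x):
    # Scramble only the B1 coefficients; the B2 component passes through.
    # (Decryption would apply the inverse of K; omitted here.)
    return B1 @ (K @ (B1.T @ x)) + B2 @ (B2.T @ x)

def watermark(x, wm):
    # Embed the watermark additively in the B2 component.
    return x + B2 @ wm

def detect(y, ref):
    # Non-blind detection: the watermark survives encryption because the
    # cipher never touches the B2 subspace.
    return B2.T @ (y - ref)

x = rng.standard_normal(n)
wm = rng.standard_normal(n - k)

a = watermark(encrypt(x), wm)
b = encrypt(watermark(x, wm))
assert np.allclose(a, b)                       # the operations commute
assert np.allclose(detect(a, encrypt(x)), wm)  # detection without decryption
```

The asserts hold because B1ᵀB2 = 0: each operation is the identity on the other's subspace, so their order is irrelevant, and the B2 projection of the encrypted, watermarked signal still carries the watermark exactly.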
20.
The polynomial identity testing (PIT) problem is known to be challenging even for constant-depth arithmetic circuits. In this work, we study the complexity of two special but natural cases of identity testing: the first is a case of depth-3 PIT, the other of depth-4 PIT. Our first problem is a vast generalization of verifying whether a bounded top fan-in depth-3 circuit equals a sparse polynomial (given as a sum of monomial terms). Formally, given a depth-3 circuit C having constantly many general product gates and arbitrarily many semidiagonal product gates, test whether the output of C is identically zero. A semidiagonal product gate in C computes a product of the form m · ∏_{i=1}^{b} l_i^{e_i}, where m is a monomial, l_i is a linear polynomial, and b is a constant. We give a deterministic polynomial-time test, along with the computation of leading monomials of semidiagonal circuits over local rings. The second problem is verifying a given sparse polynomial factorization, which is a classical question (von zur Gathen, FOCS 1983): given multivariate sparse polynomials f, g_1, ..., g_t, explicitly check whether f = ∏_{i=1}^{t} g_i. For the special case when every g_i is a sum of univariate polynomials, we give a deterministic polynomial-time test. We characterize the factors of such g_i's and even show how to test the divisibility of f by powers of such polynomials. The common tools used are Chinese remaindering and dual representation. The dual representation of polynomials (Saxena, ICALP 2008) is a technique to express a product-of-sums of univariates as a sum-of-products of univariates. We generalize this technique by combining it with a generalized Chinese remaindering to solve these two problems (over any field).
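The flavor of the dual representation can be seen in its simplest two-block instance (our toy version; Saxena's general technique handles powers of linear forms in many variables via interpolation): a power of a sum of two univariates, (u(x) + v(y))^e, is rewritten by the binomial theorem as a sum of e + 1 products of univariates, turning a product-of-sums into a sum-of-products. The univariates u and v below are arbitrary hypothetical choices used only for the check.

```python
from math import comb
import random

def dual_representation(e):
    """(u + v)^e = sum_{k=0}^{e} C(e,k) * u^k * v^(e-k): each summand is a
    product of a univariate in x and a univariate in y."""
    return [(comb(e, k), k, e - k) for k in range(e + 1)]

# toy univariates (hypothetical choices, just for the check)
u = lambda x: 1 + 2 * x
v = lambda y: 3 * y * y

def eval_sum_of_products(x, y, e):
    """Evaluate the sum-of-products form at a point."""
    return sum(c * u(x)**a * v(y)**b for c, a, b in dual_representation(e))

# the two representations agree everywhere; spot-check at random points
random.seed(0)
for _ in range(5):
    x, y = random.randrange(100), random.randrange(100)
    assert eval_sum_of_products(x, y, 3) == (u(x) + v(y))**3
```

The point of the rewriting is structural: sums of products of univariates admit coefficient extraction and divisibility arguments variable by variable, which is what the tests described above exploit.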