Similar Documents
20 similar documents found (search time: 31 ms)
1.
Given a DNF formula f on n variables, the two natural size measures are the number of terms or size s(f) and the maximum width of a term w(f). It is folklore that small DNF formulas can be made narrow: if a formula has m terms, it can be $\epsilon$-approximated by a formula with width $\log(m/\epsilon)$. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width w DNF, irrespective of its size, can be $\epsilon$-approximated by a width w DNF with at most $(w\log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby & Velickovic (1991; Algorithmica 16(4/5):415–433, 1996) to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$ time algorithm that computes an additive $\epsilon$ approximation to the fraction of satisfying assignments of f for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run time of $n^{\exp(O(\sqrt{\log\log n}))}$ (Luby & Velickovic 1991; Algorithmica 16(4/5):415–433, 1996).
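
The folklore narrowing step mentioned at the start of this abstract is easy to make concrete. The sketch below is only an illustration of that easy direction (not of the paper's sparsification result), with a hypothetical toy representation of terms as variable→value dicts and a sampled error estimate:

```python
import math
import random

def drop_wide_terms(terms, eps):
    """Folklore narrowing: a term with more than log2(m/eps) literals is satisfied
    by a uniformly random assignment with probability < eps/m, so discarding all
    such terms changes the DNF on less than an eps fraction of inputs (union bound)."""
    m = len(terms)
    width_cap = math.ceil(math.log2(m / eps))
    return [t for t in terms if len(t) <= width_cap]

def eval_dnf(terms, assignment):
    # A term is a dict {variable: required boolean value}; the DNF is their OR.
    return any(all(assignment[v] == b for v, b in t.items()) for t in terms)

# Hypothetical toy formula: compare the original and the narrowed DNF empirically.
random.seed(0)
n, eps = 30, 0.05
terms = [{random.randrange(n): random.random() < 0.5
          for _ in range(random.randrange(1, 21))} for _ in range(200)]
narrow = drop_wide_terms(terms, eps)
samples = [{v: random.random() < 0.5 for v in range(n)} for _ in range(5000)]
err = sum(eval_dnf(terms, x) != eval_dnf(narrow, x) for x in samples) / len(samples)
print(f"kept {len(narrow)}/{len(terms)} terms, empirical error ~ {err:.4f}")
```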

2.
A number of algorithms for computing the simulation preorder (and equivalence) on Kripke structures are available. Let $\varSigma$ denote the state space, ${\rightarrow}$ the transition relation and $P_{\mathrm{sim}}$ the partition of $\varSigma$ induced by simulation equivalence. While some algorithms are designed to reach the best space bounds, whose dominating additive term is $|P_{\mathrm{sim}}|^2$, other algorithms are devised to attain the best time complexity $O(|P_{\mathrm{sim}}||{\rightarrow}|)$. We present a novel simulation algorithm which is both space and time efficient: it runs in $O(|P_{\mathrm{sim}}|^2 \log |P_{\mathrm{sim}}| + |\varSigma|\log |\varSigma|)$ space and $O(|P_{\mathrm{sim}}||{\rightarrow}|\log |\varSigma|)$ time. Our simulation algorithm thus reaches the best space bounds while closely approaching the best time complexity.
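
The abstract takes the simulation preorder as given. For orientation only, a naive fixpoint computation of that preorder is sketched below (hypothetical toy structure); this is merely the textbook definition made executable, far from the space- and time-efficient algorithm the paper presents:

```python
def simulation_preorder(states, label, succ):
    """Naive fixpoint: (s, t) survives iff label(s) == label(t) and every
    successor of s is simulated by some successor of t."""
    rel = {(s, t) for s in states for t in states if label[s] == label[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            if all(any((s2, t2) in rel for t2 in succ[t]) for s2 in succ[s]):
                continue
            rel.discard((s, t))  # s has a move that t cannot match
            changed = True
    return rel

# Hypothetical 4-state structure: a, b carry label 'p'; c, d carry label 'q'.
states = ["a", "b", "c", "d"]
label = {"a": "p", "b": "p", "c": "q", "d": "q"}
succ = {"a": ["c"], "b": ["c", "d"], "c": [], "d": []}
print(sorted(simulation_preorder(states, label, succ)))
```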

3.
We prove two main results on how arbitrary linear threshold functions $f(x) = \mathrm{sign}(w \cdot x - \theta)$ over the n-dimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every n-variable threshold function f is $\epsilon$-close to a threshold function depending only on $\mathrm{Inf}(f)^2 \cdot \mathrm{poly}(1/\epsilon)$ many variables, where $\mathrm{Inf}(f)$ denotes the total influence or average sensitivity of f. This is an exponential sharpening of Friedgut's well-known theorem (Friedgut, Combinatorica 18(1):474–483, 1998), which states that every Boolean function f is $\epsilon$-close to a function depending only on $2^{O(\mathrm{Inf}(f)/\epsilon)}$ many variables, for the case of threshold functions. We complement this upper bound by showing that $\Omega(\mathrm{Inf}(f)^2 + 1/\epsilon^2)$ many variables are required for $\epsilon$-approximating threshold functions. Our second result is a proof that every n-variable threshold function is $\epsilon$-close to a threshold function with integer weights at most $\mathrm{poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})}$. This is an improvement, in the dependence on the error parameter $\epsilon$, on an earlier result of Servedio (Comput Complex 16(2):180–209, 2007), which gave a $\mathrm{poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2})}$ bound. Our improvement is obtained via a new proof technique that uses strong anti-concentration bounds from probability theory. The new technique also gives a simple and modular proof of the original result of Servedio (2007) and extends to give low-weight approximators for threshold functions under a range of probability distributions other than the uniform distribution.
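
For reference, the total influence (average sensitivity) used in this abstract is the standard notion: with x uniform on the Boolean hypercube and $x^{\oplus i}$ denoting x with its i-th coordinate flipped,

$$\mathrm{Inf}_i(f) \;=\; \Pr_{x}\big[f(x) \neq f(x^{\oplus i})\big], \qquad \mathrm{Inf}(f) \;=\; \sum_{i=1}^{n} \mathrm{Inf}_i(f).$$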

4.
The parallel complexity class $\mathsf{NC}^1$ has many equivalent models such as polynomial size formulae and bounded width branching programs. Caussinus et al. (J. Comput. Syst. Sci. 57:200–212, 1992) considered arithmetizations of two of these classes, $\#\mathsf{NC}^1$ and $\#\mathsf{BWBP}$. We further this study to include arithmetization of other classes. In particular, we show that counting paths in branching programs over visibly pushdown automata is in $\mathsf{FLogDCFL}$, while counting proof-trees in logarithmic width formulae has the same power as $\#\mathsf{NC}^1$. We also consider polynomial-degree restrictions of $\mathsf{SC}^i$, denoted $\mathsf{sSC}^i$, and show that the Boolean class $\mathsf{sSC}^1$ is sandwiched between $\mathsf{NC}^1$ and $\mathsf{L}$, whereas $\mathsf{sSC}^0$ equals $\mathsf{NC}^1$. On the other hand, the arithmetic class $\#\mathsf{sSC}^0$ contains $\#\mathsf{BWBP}$ and is contained in $\mathsf{FL}$, and $\#\mathsf{sSC}^1$ contains $\#\mathsf{NC}^1$ and is in $\mathsf{SC}^2$. We also investigate some closure properties of the newly defined arithmetic classes.

5.
This paper introduces the notion of distributed verification without preprocessing. It focuses on the Minimum-weight Spanning Tree (MST) verification problem and establishes tight upper and lower bounds for the time and message complexities of this problem. Specifically, we provide an MST verification algorithm that achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time, where m is the number of edges in the given graph G, n is the number of nodes, and D is G's diameter. On the other hand, we show that any MST verification algorithm must send $\tilde{\varOmega}(m)$ messages and incur $\tilde{\varOmega}(\sqrt{n} + D)$ time in the worst case. Our upper bound result appears to indicate that the verification of an MST may be easier than its construction, since for MST construction, both lower bounds of $\tilde{\varOmega}(m)$ messages and $\tilde{\varOmega}(\sqrt{n} + D)$ time hold, but at the moment there is no known distributed algorithm that constructs an MST and achieves simultaneously $\tilde{O}(m)$ messages and $\tilde{O}(\sqrt{n} + D)$ time. Specifically, the best known time-optimal algorithm (using $\tilde{O}(\sqrt{n} + D)$ time) requires $O(m+n^{3/2})$ messages, and the best known message-optimal algorithm (using $\tilde{O}(m)$ messages) requires $O(n)$ time. On the other hand, our lower bound results indicate that the verification of an MST is not significantly easier than its construction.
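
The distributed algorithm itself is beyond a short sketch, but the correctness criterion behind MST verification is the classical cycle property: a spanning tree is minimum iff no non-tree edge is lighter than the heaviest edge on the tree path between its endpoints. A minimal sequential check of that criterion (naive per-edge tree walk, assuming tree_edges already spans the vertices 0..n-1) might look like:

```python
from collections import defaultdict

def is_mst(n, edges, tree_edges):
    """Cycle-property check underlying MST verification: the tree is an MST iff
    every non-tree edge (u, v, w) is no lighter than the heaviest edge on the
    u-v path in the tree.  The abstract's point is doing this distributedly."""
    if len(tree_edges) != n - 1:
        return False
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_path(src, dst):
        # DFS from src to dst, carrying the heaviest edge weight seen so far.
        stack, seen = [(src, -1)], {src}
        while stack:
            node, best = stack.pop()
            if node == dst:
                return best
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, max(best, w)))
        return None  # dst unreachable: not a spanning tree

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    for u, v, w in edges:
        if frozenset((u, v)) in tree_set:
            continue
        heaviest = max_on_path(u, v)
        if heaviest is None or w < heaviest:
            return False
    return True

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 1)]
print(is_mst(4, edges, [(0, 1, 1), (1, 2, 2), (2, 3, 1)]))  # True
```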

6.
We present a data structure that stores a sequence $s[1..n]$ over alphabet $[1..\sigma]$ in $n\mathcal{H}_{0}(s) + o(n)(\mathcal{H}_{0}(s)+1)$ bits, where $\mathcal{H}_{0}(s)$ is the zero-order entropy of s. This structure supports the queries access, rank and select, which are fundamental building blocks for many other compressed data structures, in worst-case time $\mathcal{O}(\lg\lg\sigma)$ and average time $\mathcal{O}(\lg\mathcal{H}_{0}(s))$. The worst-case complexity matches the best previous results, yet these had been achieved with data structures using $n\mathcal{H}_{0}(s)+o(n\lg\sigma)$ bits. On highly compressible sequences the $o(n\lg\sigma)$ bits of the redundancy may be significant compared to the $n\mathcal{H}_{0}(s)$ bits that encode the data. Our representation, instead, compresses the redundancy as well. Moreover, our average-case complexity is unprecedented. Our technique is based on partitioning the alphabet into characters of similar frequency. The subsequence corresponding to each group can then be encoded using fast uncompressed representations without harming the overall compression ratios, even in the redundancy. The result also improves upon the best current compressed representations of several other data structures. For example, we achieve (i) compressed redundancy, retaining the best time complexities, for the smallest existing full-text self-indexes; (ii) compressed permutations $\pi$ with times for $\pi()$ and $\pi^{-1}()$ improved to loglogarithmic; and (iii) the first compressed representation of dynamic collections of disjoint sets. We also point out various applications to inverted indexes, suffix arrays, binary relations, and data compressors. Our structure is practical on large alphabets. Our experiments show that, as predicted by theory, it dominates the space/time tradeoff map of all the sequence representations, both in synthetic and application scenarios.
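
The core decomposition the abstract describes (partitioning the alphabet by frequency) can be illustrated in a few lines. The sketch below is a deliberately naive stand-in: plain Python lists replace the compressed rank/select structures of the paper, and access is answered by a linear scan rather than in $\mathcal{O}(\lg\lg\sigma)$ time:

```python
from collections import Counter
import math

def partition_by_frequency(s):
    """Group symbols of similar frequency, then store (i) the sequence of group
    ids and (ii) one subsequence per group.  Inside a group all symbols are
    roughly equally likely, so a plain fixed-width encoding of the subsequence
    is good enough -- that is the idea behind the entropy bound."""
    freq = Counter(s)
    group_of = {c: int(math.log2(len(s) / freq[c])) for c in freq}
    group_seq = [group_of[c] for c in s]
    subseqs = {}
    for c in s:
        subseqs.setdefault(group_of[c], []).append(c)
    return group_seq, subseqs

def access(group_seq, subseqs, i):
    # access(i): read the group at position i, then the rank of i within that
    # group (naive scan here; the real structure answers this with rank/select).
    g = group_seq[i]
    rank = group_seq[:i].count(g)
    return subseqs[g][rank]

s = "abracadabra"
gs, sub = partition_by_frequency(s)
assert all(access(gs, sub, i) == s[i] for i in range(len(s)))
print("group ids:", gs)
```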

7.
In this paper, we give the first construction of a pseudorandom generator, with seed length O(log n), for $\mathsf{CC}^0[p]$, the class of constant-depth circuits with unbounded fan-in MOD p gates, for some prime p. More accurately, the seed length of our generator is O(log n) for any constant error $\epsilon > 0$. In fact, we obtain our generator by fooling distributions generated by low-degree polynomials, over $\mathbb{F}_p$, when evaluated on the Boolean cube. This result significantly extends previous constructions that either required a long seed (Luby et al. 1993) or could only fool the distribution generated by linear functions over $\mathbb{F}_p$, when evaluated on the Boolean cube (Lovett et al. 2009; Meka & Zuckerman 2009). En route to constructing our PRG, we prove two structural results for low-degree polynomials over finite fields that can be of independent interest.
  1. Let f be an n-variate degree d polynomial over ${\mathbb{F}_p}$ . Then, for every ${\epsilon > 0}$ , there exists a subset ${S \subset [n]}$ , whose size depends only on d and ${\epsilon}$ , such that ${\sum_{\alpha \in \mathbb{F}_p^n: \alpha \ne 0, \alpha_S=0}|\hat{f}(\alpha)|^2 \leq \epsilon}$ . Namely, there is a constant size subset S such that the total weight of the nonzero Fourier coefficients that do not involve any variable from S is small.
  2. Let f be an n-variate degree d polynomial over ${\mathbb{F}_p}$ . If the distribution of f when applied to uniform zero–one bits is ${\epsilon}$ -far (in statistical distance) from its distribution when applied to biased bits, then for every ${\delta > 0}$ , f can be approximated over zero–one bits, up to error δ, by a function of a small number (depending only on ${\epsilon,\delta}$ and d) of lower degree polynomials.

8.
We relate the exponential complexities $2^{s(k)n}$ of $k$-SAT and the exponential complexity $2^{s(\mathrm{EVAL}(\Pi_2 3\text{-CNF}))n}$ of $\mathrm{EVAL}(\Pi_2 3\text{-CNF})$ (the problem of evaluating quantified formulas of the form $\forall\vec{x}\,\exists\vec{y}\,F(\vec{x},\vec{y})$, where F is a 3-CNF in the variables $\vec{x}$ and $\vec{y}$), and show that $s(\infty)$ (the limit of $s(k)$ as $k\to\infty$) is at most $s(\mathrm{EVAL}(\Pi_2 3\text{-CNF}))$. Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for $\mathrm{EVAL}(\Pi_2 3\text{-CNF})$ running in time $2^{cn}$ with $c<1$. On the other hand, a nontrivial exponential-time algorithm for $\mathrm{EVAL}(\Pi_2 3\text{-CNF})$ would provide a $k$-SAT solver with a better exponent than all current algorithms for sufficiently large k. We also show that several syntactic restrictions of the evaluation problem $\mathrm{EVAL}(\Pi_2 3\text{-CNF})$ have nontrivial algorithms, and provide strong evidence that the hardest cases of $\mathrm{EVAL}(\Pi_2 3\text{-CNF})$ must have a mixture of clauses of two types: one universally quantified literal and two existentially quantified literals, or only existentially quantified literals. Moreover, the hardest cases must have at least $n-o(n)$ universally quantified variables, and hence only $o(n)$ existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable $k$-CNFs and the application of the Sparsification Lemma.
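
For readers unfamiliar with the notation, $s(k)$ is the standard exponential-complexity exponent of Impagliazzo and Paturi, and the Strong Exponential-Time Hypothesis says that these exponents tend to 1:

$$s(k) \;=\; \inf\{\delta \ge 0 : k\text{-SAT is solvable in time } O(2^{\delta n})\}, \qquad s(\infty) \;=\; \lim_{k\to\infty} s(k), \qquad \text{SETH:}\ \ s(\infty) = 1.$$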

9.
We report progress on the NL versus UL problem.
  • We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including $\mathsf{ReachFewL} \subseteq \mathsf{UL}$ and $\mathsf{LFew} \subseteq \mathsf{UL}^{\mathsf{FewL}}$.
  • We investigate the complexity of min-uniqueness—a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n]. We investigate the relation between UOptL[log n] and other existing complexity classes.
  • We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that $\mathsf{ULH} \subseteq \mathsf{L}^{\mathsf{promiseUL}}$, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.

    10.
    We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
    1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of $n^k$ size and $n^{1-\delta}$ depth for some k and δ, then for every $\epsilon > 0$, there is a δ′ > 0 such that CircEval has circuits of $n^{1+\epsilon}$ size and $n^{1-\delta^{\prime}}$ depth. Moreover, the resulting circuits require only $\tilde{O}(n^{\epsilon})$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
    2. Lower bounds for quantified Boolean formulas. Let c, d > 1 and e < 1 satisfy $c < (1 - e^{d})/d$. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in $\mathrm{TIME}[n^c]$, or the Circuit Evaluation problem cannot be solved with circuits of $n^d$ size and $n^e$ depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have $n^c$-time uniform NC circuits, for all c < 2.

    11.
    The inverse and reverse counterparts of the single-machine scheduling problem $1||L_{\max}$ are studied in [2], in which the complexity classification is provided for various combinations of adjustable parameters (due dates and processing times) and for five different types of norm: $\ell_{1}, \ell_{2}, \ell_{\infty}, \ell_{H}^{\Sigma}$, and $\ell_{H}^{\max}$. It appears that the $O(n^{2})$-time algorithm for the reverse problem with adjustable due dates contains a flaw. In this note, we present the structural properties of the reverse model, establishing a link with the forward scheduling problem with due dates and deadlines. For the four norms $\ell_{1}, \ell_{\infty}, \ell_{H}^{\Sigma}$, and $\ell_{H}^{\max}$, the complexity results are derived based on the properties of the corresponding forward problems, while the case of the norm $\ell_{2}$ is treated separately. As a by-product, we resolve an open question on the complexity of problem $1||\sum \alpha_{j}T_{j}^{2}$.

    12.
    The Hamiltonian Cycle problem is the problem of deciding whether an n-vertex graph G has a cycle passing through all vertices of G. It is a classic NP-complete problem. Finding an exact algorithm that solves it in $\mathcal{O}^{*}(\alpha^{n})$ time for some constant α<2 was a notorious open problem until very recently, when Björklund presented a randomized algorithm that uses $\mathcal{O}^{*}(1.657^{n})$ time and polynomial space. The Longest Cycle problem, in which the task is to find a cycle of maximum length, is a natural generalization of the Hamiltonian Cycle problem. For a claw-free graph G, finding a longest cycle is equivalent to finding a closed trail (i.e., a connected even subgraph, possibly consisting of a single vertex) that dominates the largest number of edges of some associated graph H. Using this translation we obtain two deterministic algorithms that solve the Longest Cycle problem, and consequently the Hamiltonian Cycle problem, for claw-free graphs: one algorithm that uses $\mathcal{O}^{*}(1.6818^{n})$ time and exponential space, and one algorithm that uses $\mathcal{O}^{*}(1.8878^{n})$ time and polynomial space.
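
For context, the classical $\mathcal{O}^{*}(2^{n})$ benchmark that such results improve on is the Held–Karp dynamic program over vertex subsets. A minimal (exponential-space) sketch of that baseline, unrelated to the claw-free machinery of the paper:

```python
def hamiltonian_cycle_exists(adj):
    """Held-Karp style DP: dp[mask][v] is True iff there is a path that starts
    at vertex 0, visits exactly the vertices in mask, and ends at v."""
    n = len(adj)
    if n == 1:
        return True
    full = (1 << n) - 1
    dp = [[False] * n for _ in range(1 << n)]
    dp[1][0] = True  # the path consisting of vertex 0 only
    for mask in range(1 << n):
        if not (mask & 1):
            continue  # every partial path must contain the start vertex 0
        for v in range(n):
            if not dp[mask][v]:
                continue
            for u in range(n):
                if adj[v][u] and not (mask >> u) & 1:
                    dp[mask | (1 << u)][u] = True
    # Close the cycle: a Hamiltonian path 0 -> ... -> v with an edge back to 0.
    return any(dp[full][v] and adj[v][0] for v in range(1, n))

# Hypothetical example: the 4-cycle 0-1-2-3-0 has a Hamiltonian cycle.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(hamiltonian_cycle_exists(C4))  # True
```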

    13.
    Let $Q$ be a complete residuated lattice. Let $\text{SetR}(Q)$ be the category of sets with similarity relations with values in $Q$ (called $Q$-sets), which is an analogy of the category of classical sets with relations as morphisms. A cut in a $Q$-set $(A,\delta)$ is a system $(C_{\alpha})_{\alpha \in Q}$, where the $C_{\alpha}$ are subsets of $A\times Q$. It is well known that in the category $\text{SetR}(Q)$ there is a close relation between special cuts (called f-cuts) in a $Q$-set, on the one hand, and fuzzy sets in the same $Q$-set, on the other hand. Moreover, there exists a completion procedure according to which any cut $(C_{\alpha})_{\alpha}$ can be extended onto an f-cut $(\overline{C_{\alpha}})_{\alpha}$. In the paper, we prove that the completion procedure is, in some sense, the best possible. This is expressed by the theorem which states that the category of f-cuts is a full reflective subcategory of the category of cuts.

    14.
    A C-coloured graph is a graph, possibly directed, whose edges are coloured with colours from the set C. Clique-width is a complexity measure for C-coloured graphs, for finite sets C. Rank-width is an equivalent complexity measure for undirected graphs and has good algorithmic and structural properties; it is in particular related to the vertex-minor relation. We discuss some possible extensions of the notion of rank-width to C-coloured graphs. There is not a unique natural notion of rank-width for C-coloured graphs. We define two notions of rank-width for them, both based on a coding of C-coloured graphs by $\mathbb{F}^{*}$-graphs ($\mathbb{F}$-coloured graphs, with $\mathbb{F}$ a field, where each edge has exactly one colour from $\mathbb{F}\setminus\{0\}$), named respectively $\mathbb{F}$-rank-width and $\mathbb{F}$-bi-rank-width. The two notions are equivalent to clique-width. We then present a notion of vertex-minor for $\mathbb{F}^{*}$-graphs and prove that $\mathbb{F}^{*}$-graphs of bounded $\mathbb{F}$-rank-width are characterised by a list of $\mathbb{F}^{*}$-graphs to exclude as vertex-minors (this list is finite if $\mathbb{F}$ is finite). An algorithm that decides in time $O(n^{3})$ whether an $\mathbb{F}^{*}$-graph with n vertices has $\mathbb{F}$-rank-width (resp. $\mathbb{F}$-bi-rank-width) at most k, for fixed k and fixed finite field $\mathbb{F}$, is also given. Graph operations to check MSOL-definable properties on $\mathbb{F}^{*}$-graphs of bounded $\mathbb{F}$-rank-width (resp. $\mathbb{F}$-bi-rank-width) are presented. A specialisation of all these notions to graphs without edge colours is presented, which shows that our results generalise the ones for undirected graphs.

    15.
    We study the null controllability of Kolmogorov-type equations $\partial_t f + v^\gamma \partial_x f - \partial_v^2 f = u(t,x,v) 1_{\omega}(x,v)$ in a rectangle $\Omega$, under an additive control supported in an open subset $\omega$ of $\Omega$. For $\gamma=1$, with periodic-type boundary conditions, we prove that null controllability holds in any positive time, with any control support $\omega$. This improves the previous result by Beauchard and Zuazua (Ann Inst H Poincaré Anal Non Linéaire 26:1793–1815, 2009), in which the control support was a horizontal strip. With Dirichlet boundary conditions and a horizontal strip as control support, we prove that null controllability holds in any positive time if $\gamma=1$, or if $\gamma=2$ and $\omega$ contains the segment $\{v=0\}$, and only in large time if $\gamma=2$ and $\omega$ does not contain the segment $\{v=0\}$. Our approach, inspired by Benabdallah et al. (C R Math Acad Sci Paris 344(6):357–362, 2007) and Lebeau and Robbiano (Commun Partial Differ Equ 20:335–356, 1995), is based on two key ingredients: the observability of the Fourier components of the solution of the adjoint system, uniformly with respect to the frequency, and the explicit exponential decay rate of these Fourier components.

    16.
    Initially developed in the context of the ${\tt REGILINK}$ project, the ${\tt SIMUL}$ 3.2 econometric software is able to estimate and run large-scale dynamic multi-regional, multi-sectoral models. The package includes a data bank management module, ${\tt GEBANK}$, which performs the usual data import/export functions and transformations (especially RAS and aggregation), a graphic module, ${\tt GRAPHE}$, and a cartographic module, ${\tt GEOGRA}$, for "typical use". For "atypical use" the package includes ${\tt CHRONO}$, which helps with the WDC (Working Days Correction) estimation, and ${\tt GNOMBR}$, which replaces floating-point arithmetic by multi-precision arithmetic in a program. Although the current package includes only basic estimation (OLS) and solving (Gauss–Seidel) algorithms, it allows users to implement the equations in their reduced form ${Y_{r,b}=X_{r,b} + \varepsilon}$ and to use alternative econometric equations. ${\tt SIMUL}$ provides results and report documentation in ASCII and LaTeX formats. The next releases of ${\tt SIMUL}$ should improve the OLS procedure according to Wilkinson's criteria, and include the Hildreth–Lu algorithm and a comparative-statics option. Later, the package should allow other model implementations (Input–Output, VAR, etc.). Even if it is probably outclassed by the major software packages in terms of design and sets of statistical tests, ${\tt SIMUL}$ freely provides basic, evolutive tools to estimate and run, easily and safely, large-scale multi-sectoral, multi-regional econometric models.

    17.
    The notion of plaintext awareness (${\mathsf{PA}}$) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks (${\mathsf{IND}\text{-}\mathsf{CCA}}$), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving ${\mathsf{PA}}$ in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving ${\mathsf{PA}}$ of schemes in one model cannot be adapted to the other model. Existing research addresses ${\mathsf{PA}}$ in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of ${\mathsf{PA}}$ in proof strategies of ${\mathsf{IND}\text{-}\mathsf{CCA}}$ security, and explores relationships between ${\mathsf{PA}}$ and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and ${\mathsf{IND}\text{-}\mathsf{CCA}}$-secure. Our construction is inspired by the Fujisaki–Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of $\gamma$-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is ${\mathsf{PA}}$ and ${\mathsf{IND}\text{-}\mathsf{CCA}}$-secure, obtained by applying our construction to a popular IB-KEM and optimizing it for better performance.

    18.
    19.
    Recently, Shabtay and Bensoussan (2012) developed an original exact pseudo-polynomial algorithm and an efficient $\varepsilon$-approximation algorithm (FPTAS) for maximizing the weighted number of just-in-time jobs in a two-machine flow shop problem. The complexity of the FPTAS is $O((n^{4}/\varepsilon)\log(n/\varepsilon))$, where $n$ is the number of jobs. In this note we suggest another pseudo-polynomial algorithm that can be converted to a new FPTAS which improves Shabtay–Bensoussan's complexity result and runs in $O(n^{3}/\varepsilon)$ time.

    20.
    Uncertainty relations for more than two observables have found use in quantum information, though commonly known relations pertain to a pair of observables. We present novel uncertainty and certainty relations of state-independent form for the three Pauli observables with use of the Tsallis $\alpha$-entropies. For all real $\alpha \in (0;1]$ and integer $\alpha \ge 2$, lower bounds on the sum of three $\alpha$-entropies are obtained. These bounds are tight in the sense that they are always reached with certain pure states. The necessary and sufficient condition for equality is that the qubit state is an eigenstate of one of the Pauli observables. Using concavity with respect to the parameter $\alpha$, we derive approximate lower bounds for non-integer $\alpha \in (1;+\infty)$. In the case of pure states, the developed method also allows us to obtain upper bounds on the entropic sum for real $\alpha \in (0;1]$ and integer $\alpha \ge 2$. For applied purposes, entropic bounds are often used with averaging over the individual entropies. Combining the obtained bounds leads to a band in which the rescaled average $\alpha$-entropy ranges in the pure-state case. The width of this band depends essentially on $\alpha$. It can be interpreted as evidence of sensitivity in quantifying complementarity.
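
For concreteness, the Tsallis $\alpha$-entropy of a probability distribution $p=(p_1,\dots,p_k)$ referred to in this abstract is the standard quantity

$$H_{\alpha}(p) \;=\; \frac{1}{1-\alpha}\left(\sum_{i} p_i^{\alpha} - 1\right), \qquad \alpha > 0,\ \alpha \neq 1,$$

which recovers the Shannon entropy in the limit $\alpha \to 1$. The bounds described above concern sums of three such entropies, one for the outcome distribution of each Pauli measurement on the qubit state.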
