Similar Documents
20 similar documents found (search time: 15 ms)
1.
A novel feature selection algorithm is designed for high-dimensional data classification. Relevant features are selected by minimizing a least square loss function with an \({\ell _{2,1}}\)-norm regularization term, so that the representation error between the selected features and the labels is minimized. By taking into account both the local and global structures of the data distribution through subspace learning, an efficient optimization algorithm is proposed to solve the joint objective function and select the most representative and noise-resistant features, enhancing classification performance. Experiments conducted on benchmark datasets show that the proposed approach is more effective and robust than existing feature selection algorithms.
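The abstract does not state the objective explicitly; a common formulation of \({\ell _{2,1}}\)-regularized least squares for feature selection is \({\min_W \Vert XW-Y\Vert _F^2+\lambda \Vert W\Vert _{2,1}}\). A minimal Python sketch solving it by proximal gradient descent (the subspace-learning terms of the paper are omitted; all names and the toy data are illustrative):

```python
import numpy as np

def prox_l21(W, t):
    # row-wise soft thresholding: proximal operator of t * ||.||_{2,1}
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W

def l21_feature_selection(X, Y, lam=1.0, n_iter=500):
    """Minimize ||X W - Y||_F^2 + lam * ||W||_{2,1} by proximal gradient.
    Rows of W driven to zero correspond to discarded features."""
    d = X.shape[1]
    W = np.zeros((d, Y.shape[1]))
    eta = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)  # step from the Lipschitz constant
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ W - Y)
        W = prox_l21(W - eta * grad, eta * lam)
    return W

# toy usage: rank features by the row norms of W
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Y = X[:, :3] @ rng.standard_normal((3, 2))  # only the first 3 features matter
W = l21_feature_selection(X, Y, lam=5.0)
print(np.argsort(-np.linalg.norm(W, axis=1))[:5])  # informative features rank first
```

The \({\ell _{2,1}}\) penalty shrinks entire rows of \({W}\) to zero jointly, which is what makes it suitable for selecting features rather than individual coefficients.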

2.
In recent years, sparse representation-based classification (SRC) has made great progress in face recognition (FR). However, SRC places too much emphasis on the sparsity of noise, which limits its suitability for real-world data. In this paper, we propose a robust \(l_{2,1}\)-norm sparse representation framework that constrains the noise penalty by the \(l_{2,1}\)-norm. The \(l_{2,1}\)-norm takes advantage of both the discriminative nature of the \(l_1\)-norm and the systemic representation of the \(l_2\)-norm. In addition, we use the nuclear norm to constrain the coefficient matrix. Motivated by the Fisher criterion, we propose the Fisher discriminant-based \(l_{2,1}\)-norm sparse representation method for FR, which takes a supervised approach: we consider the within-class scatter and between-class scatter when all of the label information is available. The paper shows that the model provides stronger discriminant power than the classical sparse representation models and can be solved by the alternating direction method of multipliers. Additionally, it is robust to contiguous occlusion noise. Extensive experiments demonstrate that our method achieves significantly better results than SRC and other sparse representation methods for FR when addressing large regions with contiguous occlusion.

3.
Improving the data-representation performance of an auto-encoder helps in obtaining a satisfactory deep network. One strategy to enhance this performance is to incorporate sparsity into the auto-encoder. Traditionally, sparsity has been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, it is well known that \(l_1\) regularization is a widely used technique for inducing sparsity. Thus, this paper introduces a smoothed \(l_1\) regularization instead of the commonly used KL divergence to enforce sparsity for auto-encoders. Experimental results show that the smoothed \(l_1\) regularization works better than the KL divergence.
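The abstract does not specify the smoothing; one standard differentiable surrogate for \(|x|\) is \(\sqrt{x^2+\varepsilon }\). A minimal PyTorch sketch of a sparse auto-encoder trained with this penalty in place of the KL divergence term (architecture and hyperparameters are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)          # hidden code to be sparsified
        return self.decoder(h), h

def smoothed_l1(h, eps=1e-3):
    # smooth surrogate for |h|: sqrt(h^2 + eps) is differentiable at 0
    return torch.sqrt(h ** 2 + eps).sum(dim=1).mean()

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)              # toy batch
opt.zero_grad()
recon, h = model(x)
loss = nn.functional.mse_loss(recon, x) + 0.01 * smoothed_l1(h)
loss.backward()
opt.step()
```

Unlike the KL penalty, which pushes average activations toward a target sparsity level, the smoothed \(l_1\) term directly shrinks individual hidden activations toward zero.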

4.
A method of constructing \({n^2 \times n^2}\) matrix realizations of Temperley–Lieb algebras is presented. The single loop of these realizations is \({d=\sqrt{n}}\). In particular, a \({9 \times 9}\) matrix realization with single loop \({d=\sqrt{3}}\) is discussed. A unitary Yang–Baxter matrix \({\breve{R}(\theta,q_{1},q_{2})}\) is obtained via the Yang–Baxterization process. The entanglement properties and geometric properties (i.e., Berry phase) of this Yang–Baxter system are explored.

5.
We investigate the modified trace distance measure of coherence recently introduced in Yu et al. [Phys. Rev. A 94, 060302(R), 2016]. We show that for any single-qubit state, the modified trace distance measure of coherence is equal to the \(l_{1}\)-norm of coherence. For any d-dimensional quantum system, an analytical formula of this measure for a class of maximally coherent mixed states is provided. The trade-off relation between the coherence quantified by the new measure and the mixedness quantified by the trace norm is also discussed. Furthermore, we explore the relation between the modified trace distance measure of coherence and other measures such as the \(l_{1}\)-norm of coherence and the geometric measure of coherence.
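For reference, the \(l_{1}\)-norm of coherence used here has the closed form \(C_{l_{1}}(\rho )=\sum _{i\ne j}|\rho _{ij}|\) in the fixed reference basis; a small numpy sketch (the example state is illustrative):

```python
import numpy as np

def l1_coherence(rho):
    # C_{l1}(rho): sum of the absolute values of the off-diagonal entries
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

# single-qubit example: the pure state |+> attains the qubit maximum C_{l1} = 1
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(l1_coherence(rho))  # -> 1.0
```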

6.
We initiate the study of the Remote Set Problem (\({\mathsf{RSP}}\)) on lattices, which, given a lattice, asks to find a set of points containing a point that is far from the lattice. We show a polynomial-time deterministic algorithm that, on a rank-n lattice \({\mathcal{L}}\), outputs a set of points, at least one of which is \({\sqrt{\log n / n} \cdot \rho(\mathcal{L})}\)-far from \({\mathcal{L}}\), where \({\rho(\mathcal{L})}\) stands for the covering radius of \({\mathcal{L}}\) (i.e., the maximum possible distance of a point in space from \({\mathcal{L}}\)). As an application, we show that the covering radius problem with approximation factor \({\sqrt{n / \log n}}\) lies in the complexity class \({\mathsf{NP}}\), improving a result of Guruswami et al. (Comput Complex 14(2):90–121, 2005) by a factor of \({\sqrt{\log n}}\). Our results apply to any \({\ell_p}\) norm for \({2 \leq p \leq \infty}\) with the same approximation factors (except for a loss of \({\sqrt{\log \log n}}\) for \({p = \infty}\)). In addition, we show that the output of our algorithm for \({\mathsf{RSP}}\) contains a point whose \({\ell_2}\) distance from \({\mathcal{L}}\) is at least \({(\log n/n)^{1/p} \cdot \rho^{(p)}(\mathcal{L})}\), where \({\rho^{(p)}(\mathcal{L})}\) is the covering radius of \({\mathcal{L}}\) measured with respect to the \({\ell_p}\) norm. The proof technique involves a theorem on balancing vectors due to Banaszczyk (Random Struct Algorithms 12(4):351–360, 1998) and the “six standard deviations” theorem of Spencer (Trans Am Math Soc 289(2):679–706, 1985).

7.
Architectures depict design principles: paradigms that can be understood by all, that allow thinking on a higher plane, and that help avoid low-level mistakes. They provide means for ensuring correctness by construction by enforcing global properties characterizing the coordination between components. An architecture can be considered as an operator A that, applied to a set of components \({\mathcal{B}}\), builds a composite component \({A(\mathcal{B})}\) meeting a characteristic property \({\Phi}\). Architecture composability is a basic and common problem faced by system designers. In this paper, we propose a formal and general framework for architecture composability based on an associative, commutative and idempotent architecture composition operator \({\oplus}\). The main result is that if two architectures \({A_{1}}\) and \({A_{2}}\) enforce respectively safety properties \({\Phi_{1}}\) and \({\Phi_{2}}\), the architecture \({A_{1} \oplus A_{2}}\) enforces the property \({\Phi_{1} \land \Phi_{2}}\); that is, both properties are preserved by architecture composition. We also establish preservation of liveness properties by architecture composition. The presented results are illustrated by a running example and a case study.

8.
A circuit C compresses a function \({f : \{0,1\}^n\rightarrow \{0,1\}^m}\) if given an input \({x\in \{0,1\}^n}\), the circuit C can shrink x to a shorter \({\ell}\)-bit string x′ such that later, a computationally unbounded solver D will be able to compute f(x) based on x′. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size \({s=n^c}\). Motivated by cryptographic applications, we focus on average-case \({(\ell,\epsilon)}\) incompressibility, which guarantees that on a random input \({x\in \{0,1\}^n}\), for every size s circuit \({C:\{0,1\}^n\rightarrow \{0,1\}^{\ell}}\) and any unbounded solver D, the success probability \({\Pr_x[D(C(x))=f(x)]}\) is upper-bounded by \({2^{-m}+\epsilon}\). While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work, we present the following results:
  (1) Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) which is incompressible by size \({n^c}\) circuits with communication \({\ell=(1-o(1)) \cdot n}\) and error \({\epsilon=n^{-c}}\). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
  (2) We show that it is possible to achieve a negligible error parameter \({\epsilon=n^{-\omega(1)}}\) for nonboolean functions. Specifically, assuming that E is hard for exponential size \({\Sigma_3}\)-circuits, we construct a nonboolean function \({f:\{0,1\}^n\rightarrow \{0,1\}^m}\) which is incompressible by size \({n^c}\) circuits with \({\ell=\Omega(n)}\) and extremely small \({\epsilon=n^{-c} \cdot 2^{-m}}\). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative error deterministic extractor, which may be of independent interest.
  (3) We show that the task of constructing an incompressible boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) with negligible error parameter \({\epsilon}\) cannot be achieved by “existing proof techniques”. Namely, nondeterministic reductions (or even \({\Sigma_i}\) reductions) cannot achieve \({\epsilon=n^{-\omega(1)}}\) for boolean incompressible functions. Our results also apply to constructions of standard Nisan–Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (STOC 08).

9.
Shpilka & Wigderson (IEEE Conference on Computational Complexity, vol 87, 1999) posed the problem of proving exponential lower bounds for (nonhomogeneous) depth-three arithmetic circuits with bounded bottom fanin over a field \({{\mathbb{F}}}\) of characteristic zero. We resolve this problem by proving an \({N^{\Omega(\frac{d}{\tau})}}\) lower bound for (nonhomogeneous) depth-three arithmetic circuits with bottom fanin at most \({\tau}\) computing an explicit \({N}\)-variate polynomial of degree \({d}\) over \({{\mathbb{F}}}\). Meanwhile, Nisan & Wigderson (Comput Complex 6(3):217–234, 1997) posed the problem of proving super-polynomial lower bounds for homogeneous depth-five arithmetic circuits. Over fields of characteristic zero, we show a lower bound of \({N^{\Omega(\sqrt{d})}}\) for homogeneous depth-five circuits (resp. also for depth-three circuits) with bottom fanin at most \({N^{\mu}}\), for any fixed \({\mu < 1}\). This resolves the problem posed by Nisan and Wigderson only partially because of the added restriction on the bottom fanin (a general homogeneous depth-five circuit has bottom fanin at most \({N}\)).

10.
Based on spatial conforming and nonconforming mixed finite element methods combined with the classical L1 time stepping method, two fully discrete approximate schemes with unconditional stability are first established for the time-fractional diffusion equation with Caputo derivative of order \(0<\alpha <1\). For the conforming scheme, spatial global superconvergence and a temporal convergence order of \(O(h^2+\tau ^{2-\alpha })\) are derived for both the original variable u in the \(H^1\)-norm and the flux \(\vec {p}=\nabla u\) in the \(L^2\)-norm, by virtue of properties of the bilinear element and an interpolation postprocessing operator, where h and \(\tau \) are the step sizes in space and time, respectively. At the same time, the optimal convergence rates in time and space for the nonconforming scheme are also investigated using special characters of the \(\textit{EQ}_1^{\textit{rot}}\) nonconforming element: the original variable u converges with orders \(O(h+\tau ^{2-\alpha })\) and \(O(h^2+\tau ^{2-\alpha })\) in the broken \(H^1\)-norm and \(L^2\)-norm, respectively, while the approximation of the flux \(\vec {p}\) converges with order \(O(h+\tau ^{2-\alpha })\) in the \(L^2\)-norm. Numerical examples are provided to demonstrate the theoretical analysis.
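The classical L1 scheme referenced here approximates the Caputo derivative on a uniform grid by \(D_t^{\alpha }u(t_n)\approx \frac{\tau ^{-\alpha }}{\Gamma (2-\alpha )}\sum _{k=0}^{n-1}b_{n-1-k}(u^{k+1}-u^{k})\) with weights \(b_j=(j+1)^{1-\alpha }-j^{1-\alpha }\). A minimal Python sketch applying it to the scalar test problem \(D_t^{\alpha }u=-u\) (the test equation and parameters are illustrative, not from the paper):

```python
import numpy as np
from math import gamma

def caputo_l1_weights(alpha, n):
    # L1 weights b_j = (j+1)^{1-alpha} - j^{1-alpha}, j = 0..n-1
    j = np.arange(n)
    return (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)

def fractional_relaxation(alpha=0.5, T=1.0, N=200, u0=1.0):
    """Solve D_t^alpha u = -u, u(0) = u0, by the L1 scheme on a uniform grid."""
    tau = T / N
    c = tau ** (-alpha) / gamma(2 - alpha)   # scaling of the L1 quadrature
    b = caputo_l1_weights(alpha, N)
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):
        # history term: sum_{k=0}^{n-2} b_{n-1-k} * (u^{k+1} - u^k)
        hist = np.dot(b[n - 1:0:-1], np.diff(u[:n]))
        # with b_0 = 1, the step equation c*(u^n - u^{n-1}) + c*hist = -u^n gives:
        u[n] = c * (u[n - 1] - hist) / (c + 1.0)
    return u

print(fractional_relaxation()[-1])  # approximates E_{1/2}(-1) ≈ 0.4276
```

The exact solution of this test problem is the Mittag–Leffler function \(u(t)=E_{\alpha }(-t^{\alpha })\), which gives a convenient accuracy check.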

11.
In this paper, a linearized locally conservative mixed finite element method is proposed and analyzed for the Poisson–Nernst–Planck (PNP) equations, where the mass fluxes and the potential flux are introduced as new vector-valued variables in the equations for the ionic concentrations (Nernst–Planck equations) and the equation for the electrostatic potential (Poisson equation), respectively. These flux variables are crucial in PNP equations for determining the Debye layer and computing the electric current accurately. The Raviart–Thomas mixed finite element is employed for the spatial discretization, while the backward Euler scheme with linearization is adopted for the temporal discretization and for decoupling the nonlinear terms, so that three linear equations are solved separately at each time step. The proposed method is efficient in practice and locally preserves mass conservation. By deriving the boundedness of the numerical solutions in certain strong norms, an unconditionally optimal error analysis is obtained for all six unknowns: the concentrations p and n, the mass fluxes \({{\varvec{J}}}_p=\nabla p + p {\varvec{\sigma }}\) and \({{\varvec{J}}}_n=\nabla n - n {\varvec{\sigma }}\), the potential \(\psi \), and the potential flux \({\varvec{\sigma }}= \nabla \psi \) in the \(L^{\infty }(L^2)\) norm. Numerical experiments are carried out to demonstrate the efficiency and to validate the convergence theorem of the proposed method.

12.
In this paper, we study the ordering of states under the Tsallis relative \(\alpha \)-entropies of coherence and the \(l_{1}\) norm of coherence for single-qubit states. Firstly, we show that any Tsallis relative \(\alpha \)-entropy of coherence and the \(l_{1}\) norm of coherence give the same ordering for single-qubit pure states. However, they do not generate the same ordering for some high-dimensional states, even pure ones. Secondly, we consider the three special Tsallis relative \(\alpha \)-entropies of coherence for \(\alpha =2, 1, \frac{1}{2}\) and show that these three measures and the \(l_{1}\) norm of coherence do not generate the same ordering for some single-qubit mixed states. Nevertheless, they may generate the same ordering if we restrict attention to a special subset of single-qubit mixed states. Furthermore, we find that any two of these three special measures generate different orderings for single-qubit mixed states. Finally, we discuss the degree of violation between the \(l_{1}\) norm of coherence and the Tsallis relative \(\alpha \)-entropies of coherence. In a sense, this degree measures the difference between these two coherence measures in ordering states.

13.
In many parallel and distributed multiprocessor systems, the processors are connected via various types of interconnection networks. The topological structure of an interconnection network is typically modeled as a graph. Among the many kinds of network topologies, the crossed cube is one of the most popular. In this paper, we investigate the panpositionable panconnectedness problem with respect to the crossed cube. A graph G is r-panpositionably panconnected if for any three distinct vertices x, y, z of G and for any integer \(l_1\) satisfying \(r \le l_1 \le |V(G)| - r - 1\), there exists a path \(P = [x, P_1, y, P_2, z]\) in G such that (i) \(P_1\) joins x and y with \(l(P_1) = l_1\) and (ii) \(P_2\) joins y and z with \(l(P_2) = l_2\) for any integer \(l_2\) satisfying \(r \le l_2 \le |V(G)| - l_1 - 1\), where |V(G)| is the total number of vertices in G and \(l(P_1)\) (respectively, \(l(P_2)\)) is the length of path \(P_1\) (respectively, \(P_2\)). By mathematical induction, we demonstrate that the n-dimensional crossed cube \(CQ_n\) is n-panpositionably panconnected. This result indicates that path embedding joining x and z through an intermediate vertex y in \(CQ_n\) is extremely flexible. Moreover, applying our result, crossed cube problems such as panpositionable pancyclicity, panpositionable Hamiltonian connectedness, and panpositionable Hamiltonicity can be solved.

14.
Most entropy notions \({H(.)}\) like Shannon or min-entropy satisfy a chain rule stating that for random variables \({X, Z,}\) and \({A}\) we have \({H(X|Z,A)\ge H(X|Z)-|A|}\). That is, by conditioning on \({A}\) the entropy of \({X}\) can decrease by at most the bitlength \({|A|}\) of \({A}\). Such chain rules are known to hold for some computational entropy notions like Yao's and unpredictability-entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, for which we give a strong negative answer: we construct joint distributions \({(X,Z,A)}\), where \({A}\) is a distribution over a single bit, such that the HILL entropy \({H^{\mathsf{HILL}}(X|Z)}\) is large but \({H^{\mathsf{HILL}}(X|Z,A)}\) is basically zero. Our counterexample makes only the minimal assumption that \({{\mathbf{NP}} \nsubseteq {\mathbf{P/poly}}}\). Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation of the underlying object.

15.
Quantum coherence is an important physical resource in quantum computation and quantum information processing. Among the appropriate measures of quantum coherence, the \(l_1\) norm of coherence is a widely known and easy-to-use coherence quantifier. In this paper, we discuss the superadditivity inequalities and strong subadditivity of the \(l_1\) norm of coherence. We show that the \(l_1\) norm of coherence is superadditive for all states, which gives a positive answer to a conjecture in Liu and Li (Int J Theor Phys 56:494, 2017).
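Superadditivity here means \(C_{l_1}(\rho ^{AB})\ge C_{l_1}(\rho ^{A})+C_{l_1}(\rho ^{B})\) for the reduced states. A small numpy sketch checking the inequality on a random two-qubit state (the random-state construction is illustrative):

```python
import numpy as np

def l1_coherence(rho):
    # sum of the absolute values of the off-diagonal entries
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def partial_trace(rho, keep):
    # two-qubit helper: keep = 0 returns rho_A, keep = 1 returns rho_B
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real            # random two-qubit density matrix
lhs = l1_coherence(rho)
rhs = l1_coherence(partial_trace(rho, 0)) + l1_coherence(partial_trace(rho, 1))
print(lhs >= rhs - 1e-12)            # superadditivity: True
```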

16.
A Neumann series of Bessel functions (NSBF) representation for solutions of Sturm–Liouville equations and for their derivatives is obtained. The representation possesses an attractive feature for applications: for all real values of the spectral parameter \(\omega \), the estimate of the difference between the exact solution and the approximate one (the truncated NSBF) depends on N (the truncation parameter) and the coefficients of the equation, but does not depend on \(\omega \). A similar result is valid when \(\omega \in {\mathbb {C}}\) belongs to a strip \(\left| \hbox {Im }\omega \right| <C\). This feature makes the NSBF representation especially useful for applications requiring computation of solutions over large intervals of \(\omega \). Error and decay rate estimates are obtained. An algorithm for solving initial value, boundary value, or spectral problems for the Sturm–Liouville equation is developed and illustrated on a test problem.

17.
Using Bloch's parametrization for qudits (d-level quantum systems), we write the Hilbert–Schmidt distance (HSD) between two generic n-qudit states as a Euclidean distance between two vectors of observable mean values in \(\mathbb {R}^{\Pi _{s=1}^{n}d_{s}^{2}-1}\), where \(d_{s}\) is the dimension of qudit s. Then, applying the generalized Gell–Mann matrices to generate \(\mathrm{SU}(d_{s})\), we use that result to obtain the Hilbert–Schmidt quantum coherence (HSC) of n-qudit systems. As examples, we consider in detail one-qubit, one-qutrit, two-qubit, and two copies of one-qubit states. In this last case, the possibility of controlling local and non-local coherences by tuning local populations is studied, and the contrasting behaviors of HSC, \(l_{1}\)-norm coherence, and relative entropy of coherence in this regard are noted. We also investigate the decoherent dynamics of these coherence functions under the action of qutrit dephasing and dissipation channels. Finally, we analyze the non-monotonicity of HSD under tensor products and report the first instance of a consequence (for coherence quantification) of this kind of property of a quantum distance measure.
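For reference, the HSD between density matrices is \(d_{HS}(\rho ,\sigma )=\Vert \rho -\sigma \Vert _{HS}=\sqrt{\mathrm{Tr}[(\rho -\sigma )^{\dagger }(\rho -\sigma )]}\); a small numpy sketch (the example states are illustrative):

```python
import numpy as np

def hs_distance(rho, sigma):
    # Hilbert-Schmidt distance: sqrt(Tr[(rho - sigma)^dagger (rho - sigma)])
    delta = rho - sigma
    return np.sqrt(np.trace(delta.conj().T @ delta).real)

# one-qubit example: |0><0| versus |+><+|
rho = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
sigma = np.outer(plus, plus.conj())
print(hs_distance(rho, sigma))  # -> 1.0
```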

18.
In this paper, we propose a locking-free stabilized mixed finite element method for the linear elasticity problem, which employs a jump penalty term for the displacement approximation. The continuous piecewise k-order polynomial space is used for the stress and the discontinuous piecewise \((k-1)\)-order polynomial space for the displacement, where we require that \(k\ge 3\) in two dimensions and \(k\ge 4\) in three dimensions. The method is proved to be stable and k-order convergent for the stress in the \(H(\mathrm {div})\)-norm and for the displacement in the \(L^2\)-norm. Further, the convergence does not deteriorate in the nearly incompressible or incompressible case. Finally, numerical results are presented to illustrate the optimal convergence of the stabilized mixed method.

19.
The Relief algorithm, proposed by Kira and Rendell, is a feature selection algorithm for binary classification, and its computational complexity increases remarkably with both the number of samples and the number of features. In order to reduce this complexity, a quantum feature selection algorithm based on the Relief algorithm, called the quantum Relief algorithm, is proposed. In the algorithm, all features of each sample are superposed in a certain quantum state through the CMP and rotation operations; then the swap test and measurement are applied to this state to obtain the similarity between two samples. After that, Near-hit and Near-miss are obtained by calculating the maximal similarity, and are further applied to update the feature weight vector WT to get \({\overline{WT}}\), which determines the relevant features with the threshold \(\tau \). In order to verify our algorithm, a simulation experiment based on IBM Q with a simple example is performed. Efficiency analysis shows the computational complexity of our proposed algorithm is O(M), while the complexity of the original Relief algorithm is O(NM), where N is the number of features for each sample and M is the size of the sample set. Thus, our quantum Relief algorithm achieves a substantial speedup over the classical one.
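For context, a minimal numpy sketch of the classical Relief weight update that the quantum algorithm accelerates (a simplified variant; the distance metric, sampling scheme, and toy data are illustrative):

```python
import numpy as np

def relief(X, y, n_iter=None, rng=None):
    """Classical Relief (Kira & Rendell) for binary classification.
    Returns one weight per feature; features with weight above a
    threshold tau are considered relevant."""
    rng = rng or np.random.default_rng(0)
    m, n = X.shape
    n_iter = n_iter or m
    w = np.zeros(n)
    for _ in range(n_iter):
        i = rng.integers(m)
        x, c = X[i], y[i]
        d = np.abs(X - x).sum(axis=1)                    # L1 distances to x
        d[i] = np.inf                                    # exclude x itself
        hit = X[np.where(y == c, d, np.inf).argmin()]    # nearest same-class sample
        miss = X[np.where(y != c, d, np.inf).argmin()]   # nearest other-class sample
        w += (x - miss) ** 2 - (x - hit) ** 2            # reward separating features
    return w / n_iter

# toy usage: feature 0 separates the classes, feature 1 is noise
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
print(relief(X, y))  # the weight of feature 0 should dominate
```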

20.
\(L_1\) regularization is widely used in various applications as a sparsifying transform. In Wasserman et al. (J Sci Comput 65(2):533–552, 2015) the reconstruction of Fourier data with \(L_1\) minimization using the sparsity of edges was proposed, known as the sparse PA method. With the sparse PA method, the given Fourier data are reconstructed on a uniform grid through convex optimization based on \(L_1\) regularization of the jump function. In this paper, building on the method of Wasserman et al. (J Sci Comput 65(2):533–552, 2015), we propose to use a domain decomposition method to further enhance the quality of the sparse PA method. The main motivation of this paper is to minimize the global effect of strong edges in \(L_1\) regularization, under which the reconstructed function near weak edges does not benefit from the sparse PA method. For this, we split the given domain into several subdomains and apply \(L_1\) regularization in each subdomain separately. The split function is not necessarily periodic, so we adopt the Fourier continuation method in each subdomain to find Fourier coefficients defined on the subdomain that are consistent with the given global Fourier data. The numerical results show that the proposed domain decomposition method yields sharp reconstructions near both strong and weak edges. The proposed method is suitable when the reconstruction is required only locally.
