Similar Documents
20 similar documents found (search time: 31 ms)
1.
Splines play an important role as solutions of various interpolation and approximation problems that minimize special functionals in some smoothness spaces. In this paper, we show in a strictly discrete setting that splines of degree m−1 also solve a minimization problem with a quadratic data term and an m-th order total variation (TV) regularization term. In contrast to problems with quadratic regularization terms involving m-th order derivatives, the spline knots are not known in advance but depend on the input data and the regularization parameter λ. More precisely, the spline knots are determined by the contact points of the m-th discrete antiderivative of the solution with the tube of width 2λ around the m-th discrete antiderivative of the input data. We point out that the dual formulation of our minimization problem can be considered as a support vector regression problem in the discrete counterpart of the Sobolev space W^m_{2,0}. From this point of view, the solution of our minimization problem has a sparse representation in terms of discrete fundamental splines.
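As a rough illustration of the discrete energy described in this abstract (quadratic data term plus m-th order TV of the differences), here is a minimal sketch that solves it with a generic convex solver (cvxpy) rather than with the tube/contact-point characterization; the function name, the test signal, and the parameter choices m=2, λ=0.5 are assumptions for illustration only.

```python
import numpy as np
import cvxpy as cp

def tv_spline_denoise(f, m=2, lam=1.0):
    """Hedged sketch: minimize 0.5*||u - f||_2^2 + lam * ||D^m u||_1,
    where D^m is the m-th order finite-difference matrix."""
    n = len(f)
    D = np.eye(n)
    for _ in range(m):
        D = np.diff(D, axis=0)          # apply first differences m times -> (n-m) x n matrix
    u = cp.Variable(n)
    objective = 0.5 * cp.sum_squares(u - f) + lam * cp.norm1(D @ u)
    cp.Problem(cp.Minimize(objective)).solve()
    return u.value

# toy piecewise-linear signal with noise; for m=2 the minimizer should be spline-like
t = np.linspace(0.0, 1.0, 60)
f = np.abs(t - 0.5) + 0.05 * np.random.default_rng(0).standard_normal(60)
u = tv_spline_denoise(f, m=2, lam=0.5)
```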

2.
We study the complexity of worst-case time-domain identification of linear time-invariant systems using model sets consisting of degree-n rational models with poles in a fixed region of the complex plane. For a given noise level δ and tolerance level τ, the number of required output samples and the total sampling time should be as small as possible. In discrete time, using known fractional covers for certain polynomial spaces (with the same norm), we show that the complexity is O(n^2) for the H∞ norm, O(n) for the ℓ2 norm, and exponential in n for the ℓ1 norm, for each δ and τ. We also show that these bounds are tight. For the continuous-time case we prove analogous results, and show that the input signals may be compactly supported step functions with equally spaced nodes. We show, however, that the internodal spacing must approach 0 as n increases.

3.
We describe Gauss–Newton-type methods for fitting implicitly defined curves and surfaces to given unorganized data points. The methods are suitable not only for least-squares approximation; they can also deal with general error functions, such as approximations to the ℓ1 or ℓ∞ norm of the vector of residuals. Two different definitions of the residuals will be discussed, which lead to two different classes of methods: direct methods and data-based ones. In addition, we discuss the continuous versions of the methods, which admit geometric interpretations as evolution processes. It is shown that the data-based methods, which are less costly because they avoid the computation of closest points, can efficiently deal with error functions that are adapted to noisy and uncertain data. We also observe that the interpretation as an evolution process allows one to deal with regularization issues and with additional constraints.
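For concreteness, a minimal Gauss–Newton loop for fitting an implicit circle c(x, y) = (x−a)^2 + (y−b)^2 − r^2 to scattered points using plain algebraic residuals; this is only a generic sketch, not the paper's direct or data-based residual definitions, and all names and data are illustrative.

```python
import numpy as np

def fit_circle_gauss_newton(pts, p0, iters=20):
    """Hedged sketch: Gauss-Newton on the algebraic residuals of an implicit circle.
    Parameters p = (a, b, r): center (a, b) and radius r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        a, b, r = p
        dx, dy = pts[:, 0] - a, pts[:, 1] - b
        res = dx**2 + dy**2 - r**2                              # residual per data point
        J = np.column_stack([-2 * dx, -2 * dy, -2 * r * np.ones(len(pts))])  # Jacobian
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)         # Gauss-Newton step
        p = p + step
    return p

# noisy points near the unit circle; rough initial guess
pts = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)
pts += 0.01 * np.random.default_rng(1).standard_normal(pts.shape)
print(fit_circle_gauss_newton(pts, p0=[0.2, -0.1, 0.8]))
```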

4.
A Unified Primal-Dual Algorithm Framework Based on Bregman Iteration
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal and image processing applications. We also show the connections to existing methods, in particular Bregman iteration (Osher et al., Multiscale Model. Simul. 4(2):460–489, 2005) based methods, such as linearized Bregman (Osher et al., Commun. Math. Sci. 8(1):93–111, 2010; Cai et al., SIAM J. Imag. Sci. 2(1):226–252, 2009, CAM Report 09-28, UCLA, March 2009; Yin, CAAM Report, Rice University, 2009) and split Bregman (Goldstein and Osher, SIAM J. Imag. Sci., 2, 2009). The convergence of the general algorithm framework is proved under mild assumptions. The applications to ℓ1 basis pursuit, TV–L^2 minimization and matrix completion are demonstrated. Finally, the numerical examples show that the proposed algorithms are easy to implement, efficient, stable and flexible enough to cover a wide variety of applications.
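As one concrete instance of the Bregman-type methods mentioned here, the following is a minimal sketch of linearized Bregman iteration for ℓ1 basis pursuit in one common parameterization (v-update followed by soft shrinkage); the parameters μ, δ, the iteration count, and the toy problem are ad hoc assumptions, not the paper's framework.

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, delta=None, iters=3000):
    """Hedged sketch of linearized Bregman for min ||u||_1 s.t. Au = b.
    For large mu the limit approximates the basis pursuit solution."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step length choice
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ u)                                        # Bregman/gradient update
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)      # soft shrinkage
    return u

# toy underdetermined system with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
u_hat = linearized_bregman(A, A @ x_true)
```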

5.
We study the sizes of connected components according to their excesses during a random graph process built with n vertices. The considered model is the continuous one defined in [17]. An ℓ-component is a connected component with ℓ edges more than vertices; ℓ is also called the excess of such a component. As our main result, we show that when ℓ and n/ℓ are both large, the expected number of vertices that ever belong to an ℓ-component is about 12^{1/3} ℓ^{1/3} n^{2/3}. We also obtain limit theorems for the number of creations of ℓ-components.

6.
We show that subclasses of q-ary separable Goppa codes Γ(L, G) with L = {α ∈ GF(q^{2ℓ}): G(α) ≠ 0} and with special Goppa polynomials G(x) can be represented as a chain of equivalent and embedded codes. For all codes of the chain we obtain an improved bound for the dimension and find an exact value of the minimum distance. A chain of binary codes is considered as a particular case with specific properties.

7.
We propose in this paper minimization algorithms for image restoration using dual functionals and dual norms. In order to extract a clean image u from a degraded version f = Ku + n (where f is the observation, K is a blurring operator and n represents additive noise), we impose a standard regularization penalty Φ(u) = ∫ φ(|Du|) dx < ∞ on u, where φ is positive, increasing and has at most linear growth at infinity. However, on the residual f − Ku we impose a dual penalty Φ*(f − Ku) < ∞, instead of the more standard fidelity term. In particular, when φ is convex, homogeneous of degree one, and with linear growth (for instance the total variation of u), we recover the (BV, BV*) decomposition of the data f, as suggested by Y. Meyer (Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, vol. 22, Am. Math. Soc., Providence, 2001). Practical minimization methods are presented, together with theoretical and experimental results and comparisons to illustrate the validity of the proposed models. Moreover, we also show that by a slight modification of the associated Euler-Lagrange equations, we obtain well-behaved approximations and improved results.

8.
A general multi-scale vectorial total variation model with a spatially adapted regularization parameter for color image restoration is introduced in this paper. This total variation model contains an L^τ data fidelity term for any τ ∈ [1,2]. The use of a spatially dependent regularization parameter improves the reconstruction of features in the image while providing adequate smoothing of the homogeneous parts. The automated adaptation of this regularization parameter is driven by local statistical characteristics of the noise which contaminates the image. The corresponding multi-scale vectorial total variation model is solved by Fenchel duality and inexact semismooth Newton techniques. Numerical results are presented for the cases τ=1 and τ=2, which reconstruct images contaminated with salt-and-pepper noise and Gaussian noise, respectively.

9.
A. Frommer, Computing 70(2):87–109, 2003
We consider a seed system Ax = b together with a shifted linear system of the form (A + σI)x = b. We develop modifications of the BiCGStab(ℓ) method which allow the seed and the shifted system to be solved at the expense of just the matrix-vector multiplications needed to solve Ax = b via BiCGStab(ℓ). On the shifted system, these modifications do not perform the corresponding BiCGStab(ℓ) method, but we show that, in the case that A is positive real and σ ≥ 0, the resulting method is still a well-smoothed variant of BiCG. Numerical examples from an application arising in quantum chromodynamics are given to illustrate the efficiency of the method developed.
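Purely as an illustration of the two systems involved (not of the Krylov-subspace sharing that is the point of the paper), here is a sketch that solves the seed and a shifted system independently with SciPy's stock BiCGStab; the matrix, shift, and tolerance are arbitrary assumptions.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import bicgstab

# Naive version: two separate BiCGStab solves. Frommer's method obtains the shifted
# solution from (essentially) the same matrix-vector products as the seed solve;
# this sketch deliberately does not implement that recycling.
n = 300
A = sparse_random(n, n, density=0.02, format="csr", random_state=0) + 10.0 * identity(n)
b = np.ones(n)

x_seed, info_seed = bicgstab(A, b, atol=1e-10)            # seed system Ax = b
sigma = 0.5
x_shifted, info_shifted = bicgstab(A + sigma * identity(n), b, atol=1e-10)  # (A + sigma*I)x = b
```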

10.
We study the parameters of bent and hyper-bent (HB) functions in n variables over a field P = F_q with q = 2^ℓ elements, ℓ > 1. Any such function is identified with a function F: Q → P, where Q = GF(q^n). The latter has a reduced trace representation F = tr^Q_P(Φ), where Φ(x) is a uniquely defined polynomial of a special type. It is shown that the most accurate generalization of results on parameters of bent functions from the case ℓ = 1 to the case ℓ > 1 is obtained if, instead of the nonlinearity degree of a function, one considers its binary nonlinearity index (in the case ℓ = 1 these parameters coincide). We construct a class of HB functions that generalize the binary HB functions found in [1]; we indicate a set of parameters q and n for which there are no other HB functions. We introduce the notion of the period of a function and establish a relation between periods of (hyper-)bent functions and their frequency characteristics. Original Russian text © A.S. Kuz'min, V.T. Markov, A.A. Nechaev, V.A. Shishkin, A.B. Shishkov, 2008, published in Problemy Peredachi Informatsii, 2008, Vol. 44, No. 1, pp. 15–37.

11.
Regularized energies with ℓ1 fitting have attracted considerable interest in recent years, and numerous aspects of the problem have been studied, mainly to solve various problems arising in image processing. In this paper we focus on a rather simple form where the regularization term is a quadratic functional applied to the first-order differences between neighboring pixels. We derive a semi-explicit expression for the minimizers of this energy which shows that the solution is an affine function in the neighborhood of each data set. We then describe the volumes of data for which the same system of affine equations leads to the minimum of the relevant energy. Our analysis involves an intermediate result on random matrices constructed from truncated neighborhood sets. We also point out some drawbacks due to the ℓ1 fitting. A fast, simple and exact optimization method is proposed. By way of application, we separate impulse noise from Gaussian noise in a degraded image.
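A minimal sketch of an energy of this type (ℓ1 data fidelity plus a quadratic penalty on first-order differences), solved here with a generic convex solver rather than the paper's fast exact scheme; the function name, the value of β, and the test signal are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def l1_fit_quadratic_reg(f, beta=2.0):
    """Hedged sketch: minimize ||u - f||_1 + beta * sum_i (u[i+1] - u[i])^2."""
    u = cp.Variable(len(f))
    energy = cp.norm1(u - f) + beta * cp.sum_squares(cp.diff(u))
    cp.Problem(cp.Minimize(energy)).solve()
    return u.value

# 1-D toy signal with a few impulse-noise outliers; the l1 term rejects the outliers
f = np.linspace(0.0, 1.0, 50)
f[[10, 25, 40]] += 3.0
u = l1_fit_quadratic_reg(f)
```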

12.
In this paper, we focus on the generalization ability of the empirical risk minimization technique in the framework of agnostic learning, and consider the support vector regression method as a special case. We give a set of analytic conditions that characterize the empirical risk minimization methods and their approximations that are distribution-free consistent. Then, utilizing the weak topology of the feature space, we show that support vector regression, possibly with a discontinuous kernel, is distribution-free consistent. Moreover, a tighter generalization error bound is shown to be achieved in certain cases if the value of the regularization parameter grows as the sample size increases. The results carry over to ν-support vector regression.
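As a concrete instance of the ν-support vector regression the abstract refers to, here is a small scikit-learn example on synthetic data; the library, kernel, and parameter values are illustrative assumptions, since the paper's contribution is the consistency analysis rather than an implementation.

```python
import numpy as np
from sklearn.svm import NuSVR

# Illustrative nu-SVR fit on a noisy sine curve; all parameters are arbitrary choices.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale")
model.fit(X, y)
print(model.predict(np.array([[0.0], [1.5]])))
```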

13.
We design approximation algorithms for the vertex ordering problems Minimum Linear Arrangement, Minimum Containing Interval Graph, and Minimum Storage-Time Product, achieving approximation factors of O(√(log n) log log n), O(√(log n) log log n), and O(√(log T) log log T), respectively, the last running in time polynomial in T (T being the sum of execution times). The technical contribution of our paper is to introduce "ℓ2² spreading metrics" (which can be computed by semidefinite programming) as relaxations for both undirected and directed "permutation metrics," which are induced by permutations of {1,2,…,n}. The techniques introduced in the recent work of Arora, Rao and Vazirani (Proc. of 36th STOC, pp. 222–231, 2004) can be adapted to exploit the geometry of such ℓ2² spreading metrics, giving a powerful tool for the design of divide-and-conquer algorithms. In addition to their applications to approximation algorithms, the study of such ℓ2² spreading metrics as relaxations of permutation metrics is interesting in its own right. We show how our results imply that, in a certain sense we make precise, ℓ2² spreading metrics approximate permutation metrics on n points to a factor of O(√(log n) log log n).

14.
A numerical method for the computation of the best constant in a Sobolev inequality involving the spaces H^2(Ω) and C^0(Ω̄) is presented. Green's functions corresponding to the solution of Poisson problems are used to express the solution. This (kind of) non-smooth eigenvalue problem is then formulated as a constrained optimization problem and solved with two different strategies: an augmented Lagrangian method, together with finite element approximations, and a Green's functions based approach. Numerical experiments show the ability of the methods in computing this best constant for various two-dimensional domains, and the remarkable convergence properties of the augmented Lagrangian based iterative method.

15.
In this work we develop an efficient algorithm for the application of the method of fundamental solutions to inhomogeneous polyharmonic problems, that is, problems governed by equations of the form Δ^ℓ u = f, ℓ ∈ ℕ, in circular geometries. Following the ideas of Alves and Chen (Adv. Comput. Math. 23:125–142, 2005), the right hand side of the equation in question is approximated by a linear combination of fundamental solutions of the Helmholtz equation. A particular solution of the inhomogeneous equation is then easily obtained from this approximation, and the resulting homogeneous problem in the method of particular solutions is subsequently solved using the method of fundamental solutions. The fact that both the problem of approximating the right hand side and the homogeneous boundary value problem are posed in a circular geometry makes it possible to develop efficient matrix decomposition algorithms with fast Fourier transforms for their solution. The efficacy of the method is demonstrated on several test problems.

16.
The generalized windows scheduling problem for n jobs on multiple machines is defined as follows: given is a sequence I = 〈(w_1, ℓ_1), (w_2, ℓ_2), …, (w_n, ℓ_n)〉 of n pairs of positive integers that are associated with the jobs 1, 2, …, n, respectively. The processing length of job i is ℓ_i slots, where a slot is the processing time of one unit of length. The goal is to repeatedly and non-preemptively schedule all the jobs on the fewest possible machines such that the gap (window) between two consecutive beginnings of executions of job i is at most w_i slots. This problem arises in push broadcast systems in which data are transmitted on multiple channels. The problem is NP-hard even for unit-length jobs, and a (1+ε)-approximation algorithm is known for this case by approximating the natural lower bound W(I) = Σ_{i=1}^{n} (1/w_i). The techniques used for approximating unit-length jobs cannot be extended to arbitrary-length jobs, mainly because the optimal number of machines might be arbitrarily larger than the generalized lower bound W(I) = Σ_{i=1}^{n} (ℓ_i/w_i). The main result of this paper is an 8-approximation algorithm for the WS problem with arbitrary lengths using new methods, different from those used for the unit-length case. The paper also presents another algorithm that uses 2(1+ε)W(I) + log w_max machines and a greedy algorithm that is based on a new tree representation of schedules. The greedy algorithm is optimal for some special cases, and computational experiments show that it performs very well in practice.
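A tiny worked instance of the generalized lower bound W(I) = Σ ℓ_i/w_i mentioned above; the job list is made up for illustration.

```python
from math import ceil

# Made-up instance: (w_i, l_i) pairs, i.e., (window, length) per job.
jobs = [(3, 1), (5, 2), (2, 1), (8, 3)]

# W(I) = sum over jobs of l_i / w_i. In the long run, job i occupies at least an
# l_i/w_i fraction of some machine's slots, so any schedule needs >= ceil(W(I)) machines.
W = sum(l / w for w, l in jobs)
print(W, ceil(W))   # 1/3 + 2/5 + 1/2 + 3/8 ~ 1.61 -> at least 2 machines
```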

17.
In the classic Josephus problem, elements 1, 2, …, n are placed in order around a circle and a skip value k is chosen. The problem proceeds in n rounds, where each round consists of traveling around the circle from the current position and selecting the k-th remaining element to be eliminated from the circle. After n rounds, every element is eliminated. Special attention is given to the last surviving element; denote it by j. We generalize this popular problem by introducing a uniform number of lives ℓ, so that elements are not eliminated until they have been selected for the ℓ-th time. We prove two main results: (1) when n and k are fixed, j is constant for all values of ℓ larger than the n-th Fibonacci number; in other words, the last surviving element stabilizes with respect to increasing the number of lives. (2) When n and j are fixed, there exists a value of k that allows j to be the last survivor simultaneously for all values of ℓ; in other words, certain skip values ensure that a given position is the last survivor, regardless of the number of lives. For the first result we give an algorithm for determining j (and the entire sequence of selections) that uses O(n^2) arithmetic operations.
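To make the process concrete, here is a brute-force simulation of the generalized Josephus game with a uniform number of lives ℓ; it is only a naive reference implementation (exponentially slower than the O(n^2)-operation algorithm the abstract mentions for large ℓ), and the sample parameters are arbitrary.

```python
from collections import deque

def last_survivor(n, k, lives):
    """Simulate the Josephus process where each element must be selected `lives`
    times before it is eliminated; return the last element to be eliminated."""
    circle = deque(range(1, n + 1))
    remaining = {i: lives for i in range(1, n + 1)}
    last = None
    while circle:
        circle.rotate(-(k - 1))        # skip k-1 remaining elements; the k-th is selected
        chosen = circle[0]
        remaining[chosen] -= 1
        if remaining[chosen] == 0:
            last = circle.popleft()    # eliminated on its final (lives-th) selection
        else:
            circle.rotate(-1)          # survives this selection; counting resumes after it
    return last

print(last_survivor(n=5, k=2, lives=1))   # classic Josephus: survivor is 3
print(last_survivor(n=7, k=3, lives=2))   # generalized variant with two lives
```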

18.
We present two new techniques for regular expression searching and use them to derive faster practical algorithms. Based on the specific properties of Glushkov's nondeterministic finite automaton construction algorithm, we show how to encode a deterministic finite automaton (DFA) using O(m 2^m) bits, where m is the number of characters, excluding operator symbols, in the regular expression. This compares favorably against the worst case of O(m 2^m |Σ|) bits needed by a classical DFA representation (where Σ is the alphabet) and the O(m 2^{2m}) bits needed by the Wu and Manber approach implemented in Agrep. We also present a new way to search for regular expressions which is able to skip text characters. The idea is to determine the minimum length ℓ of a string matching the regular expression, manipulate the original automaton so that it recognizes all the reverse prefixes of length up to ℓ of the strings originally accepted, and use it to skip text characters as done for exact string matching in previous work. We combine these techniques into two algorithms, one able and one unable to skip text characters. The algorithms are simple to implement, and our experiments show that they permit fast searching for regular expressions, normally faster than any existing algorithm.
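The skipping technique is built on the minimum length ℓ of a string matching the regular expression. As a small illustration of how that quantity can be computed, here is a sketch over a toy regex syntax tree (the AST encoding and example pattern are assumptions for illustration, not the paper's representation).

```python
# Toy AST nodes: ('lit', c), ('cat', a, b), ('alt', a, b), ('star', a).
def min_match_len(node):
    """Minimum length of a string matched by the (toy) regex AST."""
    kind = node[0]
    if kind == 'lit':
        return 1
    if kind == 'cat':
        return min_match_len(node[1]) + min_match_len(node[2])   # lengths add up
    if kind == 'alt':
        return min(min_match_len(node[1]), min_match_len(node[2]))  # best alternative
    if kind == 'star':
        return 0                                                  # star matches the empty string
    raise ValueError(f"unknown node kind: {kind}")

# (ab|c)d*  ->  shortest match is "c", so the minimum length is 1
ast = ('cat',
       ('alt', ('cat', ('lit', 'a'), ('lit', 'b')), ('lit', 'c')),
       ('star', ('lit', 'd')))
print(min_match_len(ast))
```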

19.
Anonymizing bipartite graph data using safe groupings
Private data often come in the form of associations between entities, such as customers and products bought from a pharmacy, which are naturally represented in the form of a large, sparse bipartite graph. As with tabular data, it is desirable to be able to publish anonymized versions of such data, to allow others to perform ad hoc analysis of aggregate graph properties. However, existing tabular anonymization techniques do not give useful or meaningful results when applied to graphs: small changes or masking of the edge structure can radically change aggregate graph properties. We introduce a new family of anonymizations for bipartite graph data, called (k, ℓ)-groupings. These groupings preserve the underlying graph structure perfectly, and instead anonymize the mapping from entities to nodes of the graph. We identify a class of “safe” (k, ℓ)-groupings that have provable guarantees to resist a variety of attacks, and show how to find such safe groupings. We perform experiments on real bipartite graph data to study the utility of the anonymized version, and the impact of publishing alternate groupings of the same graph data. Our experiments demonstrate that (k, ℓ)-groupings offer strong tradeoffs between privacy and utility.
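To illustrate the shape of the published data (not the paper's "safe" grouping construction), here is a sketch in which the bipartite edges are released exactly on anonymous node ids, while each group of k customers (resp. ℓ products) is released only as an entity set paired with a node-id set, never the within-group correspondence; all names, sizes, and the random grouping are assumptions for illustration.

```python
import random

def group_and_relabel(entities, group_size, prefix, seed):
    """Hedged sketch: assign anonymous node ids and form random groups of `group_size`.
    Returns the secret entity->node map and the released (entity set, node-id set) pairs."""
    rng = random.Random(seed)
    ids = [f"{prefix}{i}" for i in range(len(entities))]
    rng.shuffle(ids)
    node_of = dict(zip(entities, ids))                 # secret mapping, never published
    shuffled = list(entities)
    rng.shuffle(shuffled)
    groups = [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]
    released = [(sorted(g), sorted(node_of[e] for e in g)) for g in groups]
    return node_of, released

customers = [f"cust{i}" for i in range(6)]
products = [f"prod{i}" for i in range(4)]
edges = [("cust0", "prod1"), ("cust1", "prod1"), ("cust2", "prod3"), ("cust5", "prod0")]

c_map, c_groups = group_and_relabel(customers, group_size=3, prefix="c", seed=1)  # k = 3
p_map, p_groups = group_and_relabel(products, group_size=2, prefix="p", seed=2)   # l = 2
published_edges = [(c_map[c], p_map[p]) for c, p in edges]   # graph structure preserved exactly
```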

20.
This paper proposes a numerical algorithm for image registration using energy minimization and nonlinear elasticity regularization. An application to the registration of gene expression data to a neuroanatomical mouse atlas in two dimensions is shown. We apply nonlinear elasticity regularization to allow larger and smoother deformations, and further enforce optimality constraints on the landmark point distance for better feature matching. To overcome the difficulty of minimizing the nonlinear elasticity functional due to the nonlinearity in the derivatives of the displacement vector field, we introduce a matrix variable to approximate the Jacobian matrix and solve the simplified Euler-Lagrange equations. By comparison with image registration using linear regularization, experimental results show that the proposed nonlinear elasticity model needs fewer numerical corrections, such as regridding steps, for binary image registration; it recovers the ground truth better and produces larger mutual information. Most importantly, the landmark point distance and the L^2 dissimilarity measure between the gene expression data and the corresponding mouse atlas are smaller than for the registration model with biharmonic regularization.
