Similar documents
20 similar documents found (search time: 31 ms)
1.
We consider a system of N points $x_1 < \dots < x_N$ on a segment of the real line. An ideal system (crystal) is a system in which all distances between neighbors are the same. Deviation from idealness is characterized by a system of finite differences $\Delta_i^1 = x_{i+1} - x_i$, $\Delta_i^{k+1} = \Delta_{i+1}^k - \Delta_i^k$, for all admissible i and k. We find asymptotic estimates, as $N \to \infty$ and $k \to \infty$, for a system of points minimizing the potential energy of a Coulomb system in an external field.

2.
We introduce two hierarchies of unknown ordinal height. The hierarchies are induced by natural fragments of a calculus based on finite types and Gödel’s T, and all the classes in the hierarchies are uniformly defined without referring to explicit bounds. Deterministic complexity classes like LOGSPACE, P, PSPACE, LINSPACE and EXP are captured by the hierarchies. Typical subrecursive classes are also captured, e.g. the small relational Grzegorczyk classes $\mathcal{E}^0_*$, $\mathcal{E}^1_*$ and $\mathcal{E}^2_*$.

3.
In this paper we present a Fourier-type error analysis of four recent discontinuous Galerkin methods for diffusion equations: the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J. Numer. Anal. 47(1):475–698, 2009); the DDG method with interface corrections (Liu and Yan in Commun. Comput. Phys. 8(3):541–564, 2010); the DDG method with symmetric structure (Vidden and Yan in SIAM J. Numer. Anal., 2011); and a DG method with nonsymmetric structure (Yan, A discontinuous Galerkin method for nonlinear diffusion problems with nonsymmetric structure, 2011). The Fourier-type $L^2$ error analysis demonstrates the optimal convergence of the four DG methods with suitable numerical fluxes. The theoretically predicted errors agree well with the numerical results.

4.
Given a set of points V in the plane, the Euclidean bottleneck matching problem is to match each point with some other point such that the longest Euclidean distance between matched points, resulting from this matching, is minimized. To solve this problem, we define k-relative neighborhood graphs (kRNG), which are derived from Toussaint's relative neighborhood graphs (RNG). Two points are called k-relative neighbors if and only if there are fewer than k points of V which are closer to both of the two points than the two points are to each other. A kRNG is an undirected graph $(V, E_r^k)$ where $E_r^k$ is the set of pairs of points of V which are k-relative neighbors. We prove that there exists an optimal solution of the Euclidean bottleneck matching problem which is a subset of $E_r^{17}$. We also prove that $|E_r^k| < 18kn$, where n is the number of points in V. Our algorithm first constructs a 17RNG, which takes $O(n^2)$ time. We then use Gabow and Tarjan's bottleneck maximum cardinality matching algorithm for general graphs, whose time complexity is $O((n \log n)^{0.5} m)$, where m is the number of edges in the graph, to solve the bottleneck maximum cardinality matching problem in the 17RNG; this takes $O(n^{1.5} \log^{0.5} n)$ time. The total time complexity of our algorithm for the Euclidean bottleneck matching problem is $O(n^2 + n^{1.5} \log^{0.5} n)$.
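For illustration only, here is a minimal sketch of building a kRNG directly from the definition above by a naive $O(n^3)$ pairwise check; the point set and function names are hypothetical, and this is not the $O(n^2)$ construction referenced in the abstract.

```python
# Naive kRNG construction from the definition: p and q are k-relative neighbors
# iff fewer than k other points are strictly closer to both p and q than
# dist(p, q).  Points are (x, y) tuples; this is an illustrative sketch only.
from math import dist
from itertools import combinations

def k_rng_edges(points, k):
    edges = []
    for p, q in combinations(points, 2):
        d = dist(p, q)
        # count points lying in the "lune" of p and q
        lune = sum(1 for r in points
                   if r != p and r != q and dist(r, p) < d and dist(r, q) < d)
        if lune < k:
            edges.append((p, q))
    return edges

pts = [(0, 0), (1, 0), (2, 0), (0.5, 1.0)]
print(len(k_rng_edges(pts, 17)))   # with only four points and k = 17, every pair qualifies
```

With k = 17 this produces the edge set $E_r^{17}$ on which the abstract's bottleneck matching step is then run.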

5.
Dr. P. Pottinger 《Computing》1976,17(2):163-167
Some estimates for the relative projection constant $P(\Pi_{n+k}^k, C^k[a,b])$ are given. By constructing a polynomial operator $L_n: C^0[a,b] \to \Pi_n^0$ associated with a given polynomial operator $H_{n+k}: C^k[a,b] \to \Pi_{n+k}^k$, we obtain a lower bound for the projection constant. An upper bound for $P(\Pi_{n+k}^k, C^k[a,b])$ is obtained by determining the norms of appropriate polynomial operators $P_{n+k}: C^k[a,b] \to \Pi_{n+k}^k$. Further, we give a convergence property for the sequence $(P_{n+k})_{n \in \mathbb{N}}$.

6.
A compact discontinuous Galerkin (CDG) method is devised for nearly incompressible linear elasticity by replacing the global lifting operator that determines the numerical trace of the stress tensor in a local discontinuous Galerkin method (cf. Chen et al., Math Probl Eng 20, 2010) with the local lifting operator and removing some jump terms. It possesses a compact stencil, which means that the degrees of freedom in one element are only connected to those in the immediately neighboring elements. Optimal error estimates in the broken energy norm, $H^1$-norm and $L^2$-norm are derived for the method, and they are uniform with respect to the Lamé constant $\lambda$. Furthermore, we obtain a post-processed $H(\mathrm{div})$-conforming displacement by projecting the displacement and the corresponding trace of the CDG method into the Raviart–Thomas element space, and obtain optimal error estimates for this numerical solution in the $H(\mathrm{div})$-seminorm and $L^2$-norm, which are uniform with respect to $\lambda$. A series of numerical results are offered to illustrate the numerical performance of our method.

7.
Multi-letter quantum finite automata (QFAs) can be thought of as quantum variants of the one-way multi-head finite automata (Hromkovič, Acta Informatica 19:377–384, 1983). It has been shown that these one-way QFAs (multi-letter QFAs) can accept with no error some regular languages, for example $(a+b)^*b$, that are not acceptable by the QFAs of Moore and Crutchfield (Theor Comput Sci 237:275–306, 2000) or of Kondacs and Watrous (66–75, 1997); observe that 1-letter QFAs are exactly the measure-once QFAs (MO-1QFAs) of Moore and Crutchfield (Theor Comput Sci 237:275–306, 2000). In this paper, we study the decidability of the equivalence and minimization problems of multi-letter QFAs. Three new results are presented: (1) Given a $k_1$-letter QFA ${\mathcal A}_1$ and a $k_2$-letter QFA ${\mathcal A}_2$ over the same input alphabet $\Sigma$, they are equivalent if and only if they are $(n^2 m^{k-1} - m^{k-1} + k)$-equivalent, where $m = |\Sigma|$ is the cardinality of $\Sigma$, $k = \max(k_1, k_2)$, and $n = n_1 + n_2$, with $n_1$ and $n_2$ being the numbers of states of ${\mathcal A}_1$ and ${\mathcal A}_2$, respectively. When $k = 1$, this result implies the decidability of equivalence of measure-once QFAs (Moore and Crutchfield in Theor Comput Sci 237:275–306, 2000). (It is worth mentioning that our technical method is essentially different from the previous ones used in the literature.) (2) A polynomial-time $O(m^{2k-1} n^8 + k m^k n^6)$ algorithm is designed to determine the equivalence of any two multi-letter QFAs (see Theorems 2 and 3; observe that a brute-force algorithm based on the decidability result of point (1) would have exponential worst-case time complexity). Observe also that the time complexity is expressed here in terms of the number of states of the multi-letter QFAs, and k can be seen as a constant. (3) It is shown that the state minimization problem of multi-letter QFAs is solvable in EXPSPACE. This implies that the state minimization problem of MO-1QFAs (see Moore and Crutchfield in Theor Comput Sci 237:275–306, 2000, page 304, Problem 5), an open problem stated in that paper, is also solvable in EXPSPACE.
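As a quick sanity check of the bound in result (1), the $k=1$ case (measure-once QFAs) works out as follows; the algebra below is our own illustration, with $n = n_1 + n_2$ and $m = |\Sigma|$ as in the abstract.

```latex
% Specializing the (n^2 m^{k-1} - m^{k-1} + k)-equivalence bound to k = 1:
\[
  \left.\, n^{2} m^{k-1} - m^{k-1} + k \,\right|_{k=1}
  \;=\; n^{2} m^{0} - m^{0} + 1
  \;=\; n^{2} - 1 + 1
  \;=\; n^{2}.
\]
% Hence two MO-1QFAs with n_1 and n_2 states are equivalent iff they are
% (n_1 + n_2)^2-equivalent, matching the decidability remark for k = 1.
```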

8.
Zeev Nutov 《Algorithmica》2012,63(1-2):398-410
We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph $J=(V,E_J)$ and connectivity requirements $\{r(u,v): u,v \in V\}$, find a minimum-size set I of new edges (any edge is allowed) such that the graph $J \cup I$ contains r(u,v) internally disjoint uv-paths for all $u,v \in V$. In Rooted NCA there is $s \in V$ such that $r(u,v)>0$ implies $u=s$ or $v=s$. For large values of $k=\max_{u,v \in V} r(u,v)$, NCA is at least as hard to approximate as Label-Cover and thus is unlikely to admit an approximation ratio polylogarithmic in k. Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were $O(k \ln n)$ for NCA and $O(\ln n)$ for Rooted NCA. In this paper we give an approximation algorithm with ratios $O(k \ln^2 k)$ for NCA and $O(\ln^2 k)$ for Rooted NCA. This is the first approximation algorithm with ratio independent of n, and thus the ratio is a constant for any fixed k. Our algorithm is based on the following new structural result, which is of independent interest: if $\mathcal{D}$ is a set of node pairs in a graph J, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in $\mathcal{D}$ is $O(\ell^2)$, where $\ell$ is the maximum connectivity in J of a pair in $\mathcal{D}$.

9.
We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
  1. Lower bounds against medium-uniform circuits. Informally, a circuit class is “medium uniform” if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including:
     • For all k, P is not contained in P-uniform SIZE($n^k$). That is, for all k, there is a language $L_k \in \mathsf{P}$ that does not have $O(n^k)$-size circuits constructible in polynomial time. This improves Kannan’s lower bound from 1982 that NP is not in P-uniform SIZE($n^k$) for any fixed k.
     • For all k, NP is not in $\mathsf{P}^{\mathsf{NP}}_{||}$-uniform SIZE($n^k$). This also improves Kannan’s theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.
     • For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size $n^k$.
  2. Eliminating non-uniformity and (non-uniform) circuit lower bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform NC$^1$ in ACC$^0$/poly or TC$^0$/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds and leads to the following new connection:
     • Consider the following task: given a TC$^0$ circuit C of $n^{O(1)}$ size, output yes when C is unsatisfiable, and output no when C has at least $2^{n-2}$ satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If it can be solved deterministically in $2^{n-\omega(\log n)}$ time, then $\mathsf{NEXP} \not\subset \mathsf{TC}^0/\mathrm{poly}$.
Another application is to derandomize randomized TC$^0$ simulations of NC$^1$ on almost all inputs:
     • Suppose $\mathsf{NC}^1 \subseteq \mathsf{BPTC}^0$. Then, for every $\varepsilon > 0$ and every language L in NC$^1$, there is a LOGTIME-uniform TC$^0$ circuit family of polynomial size recognizing a language L′ such that L and L′ differ on at most $2^{n^{\varepsilon}}$ inputs of length n, for all n.

10.
The “direct product problem” is a fundamental question in complexity theory which seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most $1 - \varepsilon$ in computing the Boolean function f on input distribution $\mu$, then for $\alpha \le 1$, every $\alpha \varepsilon T k$-query algorithm has success probability at most $(2^{\alpha \varepsilon}(1-\varepsilon))^k$ in computing the k-fold direct product $f^{\otimes k}$ correctly on k independent inputs from $\mu$. In light of examples due to Shaltiel, this statement gives an essentially optimal trade-off between the query bound and the error probability. Using this DPT, we show that for an absolute constant $\alpha > 0$, the worst-case success probability of any $\alpha R_2(f) k$-query randomized algorithm for $f^{\otimes k}$ falls exponentially with k. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of $O(\mathrm{bs}(f)\, k)$. Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve $f^{\otimes k}$. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.

11.
In this paper we give some properties of interval operators F which guarantee the convergence of the interval sequence $\{X_k\}$ defined by $X_{k+1} := F(X_k) \cap X_k$ to a unique fixed interval $\hat X$. This interval $\hat X$ encloses the “zero-set” $X^*$ of a function strip $G(x) := [g(x), \bar g(x)]$. For some known interval operators we investigate under which assumptions these properties are valid.
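As a concrete illustration of the iteration scheme $X_{k+1} := F(X_k) \cap X_k$ (not of the particular operators analyzed in the paper), the following sketch uses a simple interval-Newton-style operator F for the hypothetical function $g(x) = x^2 - 2$; intervals are plain (lo, hi) tuples.

```python
# Minimal sketch of the iteration X_{k+1} := F(X_k) ∩ X_k for one example
# operator F.  Here F is an interval-Newton-style step for g(x) = x^2 - 2
# on a positive interval (a hypothetical example, not the function strip
# studied in the paper).

def intersect(a, b):
    """Intersection of two intervals, or None if they are disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def F(x):
    """N(x) = m - g(m) / g'(x), evaluated in interval arithmetic,
       for g(x) = x^2 - 2 and an interval x with x[0] > 0."""
    m = 0.5 * (x[0] + x[1])            # midpoint
    gm = m * m - 2.0                   # g(m)
    dlo, dhi = 2.0 * x[0], 2.0 * x[1]  # g'(x) = 2x on the interval
    cands = [m - gm / dlo, m - gm / dhi]
    return (min(cands), max(cands))

X = (1.0, 2.0)                         # initial enclosure of the zero of g
for k in range(20):
    nxt = intersect(F(X), X)           # X_{k+1} := F(X_k) ∩ X_k
    if nxt is None or nxt == X:        # stop: intersection empty or X no longer shrinks
        break
    X = nxt
print(X)                               # tight enclosure of sqrt(2)
```

By construction the iterates are nested ($X_{k+1} \subseteq X_k$), which is the basic mechanism behind convergence statements of the kind the abstract describes.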

12.
13.
Dr. R. Haverkamp 《Computing》1984,32(4):343-355
Let $p_n$ denote the polynomial of degree n or less that interpolates a given smooth function f at the Čebyšev nodes $t_j^n = \cos(j\pi/n)$, $0 \le j \le n$, and let ‖·‖ be the maximum norm in $C[-1, 1]$. It is proved that for k-th derivatives ($2 \le k \le n$) estimates of the following type hold: $$\parallel f^{(k)} - p_n^{(k)} \parallel \leqslant c_k n^{k - 1} \inf \{ \parallel f^{(k)} - q\parallel : q \in \Pi _{n - k} \}.$$ In this relation $c_k$ depends only on k, and $\Pi_{n-k}$ denotes the space of polynomials of degree up to $n-k$.

14.
Interpreting the Ritz method as a procedure to compute elements of best approximation with respect to the norm $\|z\|_R^2 = (Az, z)$, similar methods are obtained by substituting a norm $\|z\|_g = \|g(A)z\|$, for a suitable function g, in place of $\|\cdot\|_R$. Such methods are applicable to large classes of linear operators. Taking bounded or unbounded normal operators for A and polynomials in the variables $w$ and $\bar w$ for g, it is demonstrated how to obtain some estimates of the error.

15.
In many real-life situations, we want to reconstruct a dependency $y = f(x_1, \ldots, x_n)$ from known experimental results $x_i^{(k)}, y^{(k)}$. In other words, we want to interpolate the function f from its known values $y^{(k)} = f(x_1^{(k)}, \ldots, x_n^{(k)})$ at finitely many points $\bar x^{(k)} = (x_1^{(k)}, \ldots, x_n^{(k)})$, $1 \le k \le N$. There are many functions that go through given points; how do we choose one of them? The main goal of finding f is to be able to predict y based on the $x_i$. If we get the $x_i$ from measurements, then usually we only get intervals that contain the $x_i$. As a result of applying f, we get an interval $\mathbf{y}$ of possible values of y. It is reasonable to choose the f for which the resulting interval is the narrowest possible. In this paper, we formulate this choice problem in mathematical terms, solve the corresponding problem for several simple cases, and describe the application of these solutions to intelligent control.
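A minimal sketch of the selection criterion, assuming two hypothetical candidate interpolants through the same data points and approximating the image of an input interval by dense sampling (a crude stand-in for proper interval arithmetic; the data, interval and names are illustrative only):

```python
# Among interpolants passing through the same points, prefer the one whose
# image of the input interval is narrowest.
import numpy as np

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 0.0])

f_lin = lambda x: np.interp(x, xs, ys)       # piecewise-linear interpolant
coeffs = np.polyfit(xs, ys, 2)               # quadratic through the same points
f_quad = lambda x: np.polyval(coeffs, x)

grid = np.linspace(0.8, 1.2, 1001)           # input interval [0.8, 1.2]
for name, f in [("linear", f_lin), ("quadratic", f_quad)]:
    vals = f(grid)
    print(name, "image width ~", vals.max() - vals.min())
```

Here the interpolant with the smaller image width would be chosen; the paper formulates this choice problem exactly and solves it in several simple cases.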

16.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized $4^k \cdot n^{O(1)}$ time polynomial-space algorithm, as well as a deterministic $4.98^k \cdot n^{O(1)}$ time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic $2^{k+o(k)} + n^{O(1)}$ time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most $5k+3$ vertices, while Tree Contraction does not have a polynomial kernel unless NP $\subseteq$ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with $4k^2$ vertices.

17.
Dr. G. Merz 《Computing》1974,12(3):195-201
Using generating functions, we obtain, in the case of $n+1$ equidistant data points, a method for calculating the interpolating spline function $s(x)$ of degree $2k+1$ with boundary conditions $s^{(\kappa)}(x_0) = y_0^{(\kappa)}$, $s^{(\kappa)}(x_n) = y_n^{(\kappa)}$, $\kappa = 1(1)k$, which only requires the inversion of a matrix of order k. The applicability of our method in the case of general boundary conditions is also mentioned.

18.
In this paper, we present a new approach to hp-adaptive finite element methods. Our a posteriori error estimates and hp-refinement indicator are inspired by the work on gradient/derivative recovery of Bank and Xu (SIAM J Numer Anal 41:2294–2312, 2003; SIAM J Numer Anal 41:2313–2332, 2003). For an element $\tau$ of degree p, $R(\partial^p u_{hp})$, the (piecewise-linear) recovered function of $\partial^p u_{hp}$, is used to approximate $|\varepsilon|_{1,\tau} = |\hat{u}_{p+1} - u_{p}|_{1,\tau}$, which serves as our local error indicator. Under sufficient conditions on the smoothness of u, it can be shown that $\|\partial^{p}(\hat{u}_{p+1} - u_{p})\|_{0,\Omega}$ is a superconvergent approximation of $\|(I - R)\partial^p u_{hp}\|_{0,\Omega}$. Based on this, we develop a heuristic hp-refinement indicator based on the ratio between the two quantities on each element. Also in this work, we introduce nodal basis functions for special elements where the polynomial degree along edges is allowed to be different from the overall element degree. Several numerical examples are provided to show the effectiveness of our approach.

19.
We consider minimal quadrature formulae for the Hilbert spaces $H_R^2$ and $L_R^2$ consisting of functions which are analytic on the open disc with radius R and centre at the origin; the inner products are the boundary contour integral for $H_R^2$ and the area integral over the disc for $L_R^2$. Such formulae can be viewed as interpolatory, generalizing—in two ways—Markoff's idea to construct the classical Gaussian quadrature formulae. This can be done simultaneously for both spaces using the same Hermitian interpolating operator. The advantage of this approach to minimal formulae is that we get a nonlinear system of equations for the nodes of the minimal formulae alone, in contrast to the coupled system for nodes and weights which arises from the minimality conditions. The uncoupled system that we obtain is numerically solvable for reasonable numbers of nodes, and numerical tests show that the resulting minimal formulae are very well suited for the integration of functions with boundary singularities.

20.
The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time $7^k n^{O(1)}$. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity $4.65^k n^{O(1)}$, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS, due to Thomassé (ACM Trans. Algorithms 6(2), 2010).
