Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper we develop and analyze a new superconvergent local discontinuous Galerkin (LDG) method for approximating solutions to the fourth-order Euler–Bernoulli beam equation in one space dimension. We prove the $L^2$ stability of the scheme and several optimal $L^2$ error estimates for the solution and for the three auxiliary variables that approximate derivatives of different orders. Our numerical experiments demonstrate optimal rates of convergence. We also prove superconvergence results towards particular projections of the exact solutions. More precisely, we prove that the LDG solution and its spatial derivatives (up to third order) are $\mathcal{O}(h^{k+3/2})$ super close to particular projections of the exact solutions for $k$th-degree polynomial spaces, while computational results show a higher $\mathcal{O}(h^{k+2})$ convergence rate. Our proofs are valid for arbitrary regular meshes, for $P^k$ polynomials with $k\ge 1$, and for periodic, Dirichlet, and mixed boundary conditions. These superconvergence results will be used to construct asymptotically exact a posteriori error estimates by solving a local steady problem on each element. This will be reported in Part II of this work, where we will prove that the a posteriori LDG error estimates for the solution and its derivatives converge to the true errors in the $L^2$-norm under mesh refinement.
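The optimal and superconvergent rates quoted above are typically verified by computing observed orders from errors on successively refined meshes. A minimal sketch of that standard log-ratio computation (the error values below are placeholders imitating an $\mathcal{O}(h^{k+2})$ trend for $k=1$, not data from the cited paper):

```python
import math

# Hypothetical L2 errors measured on a sequence of uniform meshes h, h/2, h/4, ...
# (placeholder values chosen to mimic an O(h^{k+2}) trend for k = 1).
mesh_sizes = [1/8, 1/16, 1/32, 1/64]
errors     = [2.1e-3, 2.7e-4, 3.4e-5, 4.2e-6]

# Observed order between consecutive meshes: log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
for (h0, e0), (h1, e1) in zip(zip(mesh_sizes, errors), zip(mesh_sizes[1:], errors[1:])):
    rate = math.log(e0 / e1) / math.log(h0 / h1)
    print(f"h = {h1:.5f}   error = {e1:.2e}   observed order ~ {rate:.2f}")
```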

2.
Gábor Wiener, Algorithmica 2013, 67(3): 315–323
A set system $\mathcal{H} \subseteq 2^{[m]}$ is said to be separating if for every pair of distinct elements $x,y\in[m]$ there exists a set $H\in\mathcal{H}$ that contains exactly one of them. The search complexity of a separating system $\mathcal{H} \subseteq 2^{[m]}$ is the minimum number of questions of the type “$x\in H$?” (where $H \in\mathcal{H}$) needed in the worst case to determine a hidden element $x\in[m]$. If we receive the answer before asking a new question then we speak of the adaptive complexity, denoted by $\mathrm{c}(\mathcal{H})$; if the questions are all fixed beforehand then we speak of the non-adaptive complexity, denoted by $\mathrm{c}_{na}(\mathcal{H})$. If we are allowed to ask the questions in at most $k$ rounds then we speak of the $k$-round complexity of $\mathcal{H}$, denoted by $\mathrm{c}_{k}(\mathcal{H})$. It is clear that $|\mathcal{H}| \geq \mathrm{c}_{na}(\mathcal{H}) = \mathrm{c}_{1}(\mathcal{H}) \geq \mathrm{c}_{2}(\mathcal{H}) \geq \cdots \geq \mathrm{c}_{m}(\mathcal{H}) = \mathrm{c}(\mathcal{H})$. A group of problems raised by G.O.H. Katona asks to characterize those separating systems for which some of these inequalities are tight. In this paper we discuss set systems $\mathcal{H}$ with the property $|\mathcal{H}| = \mathrm{c}_{k}(\mathcal{H})$ for any $k\geq 3$. We give a necessary condition for this property by proving a theorem about traces of hypergraphs which is also of independent interest.
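A minimal sketch of the adaptive search game formalized above: keep the set of candidates consistent with all answers so far and greedily ask the question that splits it most evenly. The greedy rule and the bit-vector example family are illustrative choices, not constructions from the cited paper.

```python
def adaptive_search(m, family, hidden):
    """Locate `hidden` in {0,...,m-1} with questions "is x in H?" for H in `family`.

    Greedy strategy: always ask the set that splits the current candidate set
    most evenly.  If `family` is separating, the candidate set shrinks to a
    single element.  Returns (element_found, number_of_questions)."""
    candidates = set(range(m))
    questions = 0
    while len(candidates) > 1:
        # pick the question whose two possible answers are most balanced
        H = min(family, key=lambda s: abs(len(candidates & s) - len(candidates - s)))
        inside, outside = candidates & H, candidates - H
        if not inside or not outside:
            break  # family is not separating on the remaining candidates
        questions += 1
        candidates = inside if hidden in H else outside
    return next(iter(candidates)), questions

# Example: the "binary digits" separating system on [8] needs 3 questions.
m = 8
family = [frozenset(x for x in range(m) if (x >> bit) & 1) for bit in range(3)]
print(adaptive_search(m, family, hidden=5))   # -> (5, 3)
```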

3.
In this paper, we introduce the concepts of $\lambda$-statistical convergence of order $\theta$ and strong $\lambda$-summability of order $\theta$ for sequences of fuzzy numbers. The same concepts are then extended to sequences of fuzzy functions, and the spaces $S_\lambda^\theta(\hat{f})$ and $\omega_{\lambda p}^\theta(\hat{f})$ are introduced. Some inclusion relations within these spaces, as well as the relation between the two spaces, are obtained.
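A small numerical illustration of the defining density behind $\lambda$-statistical convergence of order $\theta$, using an ordinary scalar sequence in place of fuzzy numbers (an intentional simplification of this sketch; the choices $\lambda_n=n$ and $\theta=1$ reduce it to ordinary statistical convergence):

```python
import math

def lambda_statistical_density(x, L, eps, lam, theta, n):
    """(1 / lam(n)**theta) * |{k in I_n : |x(k) - L| >= eps}| with I_n = (n - lam(n), n]."""
    lam_n = lam(n)
    I_n = range(n - int(lam_n) + 1, n + 1)
    count = sum(1 for k in I_n if abs(x(k) - L) >= eps)
    return count / lam_n ** theta

# Toy scalar sequence (the fuzzy-number setting of the paper is replaced by a
# scalar one for illustration): x_k = k on perfect squares, 1 otherwise.
x = lambda k: float(k) if math.isqrt(k) ** 2 == k else 1.0

for n in (10**2, 10**3, 10**4, 10**5):
    d = lambda_statistical_density(x, L=1.0, eps=0.5, lam=lambda m: m, theta=1.0, n=n)
    print(n, d)   # densities ~ 1/sqrt(n) -> 0: the sequence is statistically convergent to 1
```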

4.
In this paper, we consider the $(\in_{\gamma},\in_{\gamma}\vee\,\mathrm{q}_{\delta})$-fuzzy and $(\overline{\in}_{\gamma},\overline{\in}_{\gamma}\vee\,\overline{\mathrm{q}}_{\delta})$-fuzzy subnear-rings (ideals) of a near-ring. Some new characterizations are also given. In particular, we introduce the concepts of (strong) prime $(\in_{\gamma},\in_{\gamma}\vee\,\mathrm{q}_{\delta})$-fuzzy ideals of near-rings and discuss the relationship between strong prime $(\in_{\gamma},\in_{\gamma}\vee\,\mathrm{q}_{\delta})$-fuzzy ideals and prime $(\in_{\gamma},\in_{\gamma}\vee\,\mathrm{q}_{\delta})$-fuzzy ideals of near-rings.

5.
We present a technique for numerically solving convection-diffusion problems in domains $\varOmega$ with curved boundary. The technique consists in approximating the domain $\varOmega$ by polyhedral subdomains $\mathsf{D}_h$ on which a finite element method is used to solve for the approximate solution. The approximation is then suitably extended to the remaining part of the domain $\varOmega$. This approach allows for the use of only polyhedral elements; there is no need to fit the boundary in order to obtain an accurate approximation of the solution. To achieve this, the boundary condition on the border of $\varOmega$ is transferred to the border of $\mathsf{D}_h$ by using simple line integrals. We apply this technique to the hybridizable discontinuous Galerkin method and provide extensive numerical experiments showing that, whenever the distance of $\mathsf{D}_h$ to $\partial\varOmega$ is of the order of the meshsize $h$, the convergence properties of the resulting method are the same as those for the case $\varOmega=\mathsf{D}_h$. We also show numerical evidence indicating that the ratio of the $L^2(\varOmega)$ norm of the error in the scalar variable computed with $d>0$ to that computed with $d=0$ remains constant (and fairly close to one) whenever the distance $d$ is proportional to $\min\{h,Pe^{-1}\}/(k+1)^2$, where $Pe$ is the so-called Péclet number.
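The transfer of the Dirichlet datum from $\partial\varOmega$ to $\partial\mathsf{D}_h$ by a line integral can be pictured in one dimension: the value imposed at the computational boundary is the exact boundary value minus an integral of an extrapolated derivative across the gap of width $d$. A hypothetical 1D sketch with a known smooth function standing in for the discrete solution (the function, the constant extrapolation and the quadrature are illustrative assumptions, not the HDG construction of the cited paper):

```python
import math

def transferred_bc(g_at_Gamma, du_extrap, d, nquad=4):
    """Transfer the Dirichlet value from x = 1 (boundary of Omega) to x = 1 - d
    (boundary of the computational subdomain) via a line integral of the
    extrapolated derivative:  u_tilde(1 - d) = g(1) - int_{1-d}^{1} u'(s) ds."""
    # simple midpoint quadrature on the gap [1 - d, 1]
    h = d / nquad
    nodes = [1 - d + (i + 0.5) * h for i in range(nquad)]
    return g_at_Gamma - h * sum(du_extrap(s) for s in nodes)

# Stand-in for the discrete solution: u(x) = sin(x), boundary datum g = sin(1).
u, du = math.sin, math.cos
for d in (0.1, 0.05, 0.025):
    # crude constant extrapolation of the derivative from inside D_h = (0, 1 - d)
    approx = transferred_bc(u(1.0), lambda s, d=d: du(1.0 - d), d)
    exact = u(1.0 - d)
    print(f"d = {d:<6} transferred-value error = {abs(approx - exact):.2e}")  # decreases like O(d^2)
```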

6.
A $C^0$-weak Galerkin (WG) method is introduced and analyzed in this article for solving the biharmonic equation in 2D and 3D. A discrete weak Laplacian is defined for $C^0$ functions, which is then used to design the weak Galerkin finite element scheme. This WG finite element formulation is symmetric, positive definite and parameter free. Optimal order error estimates are established for the weak Galerkin finite element solution in both a discrete $H^2$ norm and the standard $H^1$ and $L^2$ norms with appropriate regularity assumptions. Numerical results are presented to confirm the theory. As a technical tool, a refined Scott–Zhang interpolation operator is constructed to assist the corresponding error estimates. This refined interpolation preserves the volume mass of order $(k+1-d)$ and the surface mass of order $(k+2-d)$ for $P_{k+2}$ finite element functions in $d$-dimensional space.

7.
The main aim of this paper is to study the nonconforming $EQ_1^{rot}$ quadrilateral finite element approximation to second-order elliptic problems on anisotropic meshes. Optimal order error estimates in the broken energy norm and in the $L^2$-norm are obtained, and three numerical experiments are carried out to confirm the theoretical results.

8.
In this article we propose a class of so-called two-grid hp-version discontinuous Galerkin finite element methods for the numerical solution of a second-order quasilinear elliptic boundary value problem of monotone type. The key idea in this setting is to first discretise the underlying nonlinear problem on a coarse finite element space $V(\mathcal{T}_{H},\boldsymbol{P})$. The resulting ‘coarse’ numerical solution is then exploited to provide the data needed to linearise the underlying discretisation on the finer space $V(\mathcal{T}_{h},\boldsymbol{p})$; thereby, only a linear system of equations is solved on the richer space $V(\mathcal{T}_{h},\boldsymbol{p})$. In this article both the a priori and the a posteriori error analysis of the two-grid hp-version discontinuous Galerkin finite element method is developed. Moreover, we propose and implement an hp-adaptive two-grid algorithm, which is capable of designing both the coarse and fine finite element spaces $V(\mathcal{T}_{H},\boldsymbol{P})$ and $V(\mathcal{T}_{h},\boldsymbol{p})$, respectively, in an automatic fashion. Numerical experiments are presented for both two- and three-dimensional problems; in each case, we demonstrate that the CPU time required to compute the numerical solution to a given accuracy is typically less when the two-grid approach is exploited than with the standard discontinuous Galerkin method.

9.
In this paper we study gossip-based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)\varDelta)$ rounds with high probability, where $D$ and $\varDelta$ are the diameter and the maximum degree in the network, respectively. For many topologies and selections of $k$ this bound improves previous results; in particular, for graphs with a constant maximum degree it implies that uniform gossip is order optimal and the stopping time is $\varTheta(k + D)$. To eliminate the factor of $\varDelta$ from the upper bound we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $\mathcal{S}$. The stopping time of TAG is $O(k+\log n +d(\mathcal{S})+t(\mathcal{S}))$, where $t(\mathcal{S})$ is the stopping time of the spanning tree protocol and $d(\mathcal{S})$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order-optimal protocol. The first is for $k=\varOmega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\varTheta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance; in turn, this leads to the optimality of TAG on these graphs for $k=\varOmega(\mathrm{polylog}(n))$. The technique used in our proofs relies on queuing theory, which is an interesting approach that can be useful in future gossip analysis.
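A minimal simulation of uniform algebraic gossip in the push model: every round each node sends one random GF(2) combination of the coded packets it holds to a uniformly random neighbour, and dissemination is complete once every node's received combinations span all $k$ messages. The cycle topology, the GF(2) field and the initial placement of the messages are illustrative assumptions of this sketch, not the paper's exact setting.

```python
import random
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of coefficient vectors (numpy uint8 arrays)."""
    basis = []
    for r in rows:
        r = r.copy()
        for b in basis:
            if r[np.argmax(b)]:      # eliminate r at the pivot of each basis vector
                r ^= b
        if r.any():
            basis.append(r)
    return len(basis)

def algebraic_gossip_rounds(adj, k, seed=0):
    """Rounds until every node can decode k messages on graph `adj` (dict: node -> neighbours)."""
    rng = random.Random(seed)
    n = len(adj)
    store = {v: [] for v in adj}
    # messages 0..k-1 initially placed on nodes 0..k-1 (an illustrative assumption)
    for m in range(k):
        store[m % n].append(np.eye(k, dtype=np.uint8)[m])
    rounds = 0
    while any(gf2_rank(store[v]) < k for v in adj):
        rounds += 1
        sent = []
        for v in adj:                # synchronous round: collect all pushes first
            if store[v]:
                pkt = np.zeros(k, dtype=np.uint8)
                for vec in store[v]:
                    if rng.randint(0, 1):
                        pkt ^= vec   # random GF(2) combination of held packets
                sent.append((rng.choice(adj[v]), pkt))
        for target, pkt in sent:
            store[target].append(pkt)
    return rounds

# Example: a cycle of 16 nodes disseminating k = 4 messages.
n, k = 16, 4
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(algebraic_gossip_rounds(cycle, k))
```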

10.
Let $Q$ be a complete residuated lattice, and let $\mathrm{SetR}(Q)$ be the category of sets with similarity relations with values in $Q$ (called $Q$-sets), which is an analogue of the category of classical sets with relations as morphisms. A cut in a $Q$-set $(A,\delta)$ is a system $(C_{\alpha})_{\alpha \in Q}$, where the $C_{\alpha}$ are subsets of $A\times Q$. It is well known that in the category $\mathrm{SetR}(Q)$ there is a close relation between special cuts (called f-cuts) in a $Q$-set on the one hand and fuzzy sets in the same $Q$-set on the other. Moreover, there exists a completion procedure by which any cut $(C_{\alpha})_{\alpha}$ can be extended to an f-cut $(\overline{C_{\alpha}})_{\alpha}$. In this paper, we prove that the completion procedure is, in some sense, the best possible. This is expressed by a theorem stating that the category of f-cuts is a full reflective subcategory of the category of cuts.

11.
Linear kernel support vector machines (SVMs) using either the $L_{1}$-norm or the $L_{2}$-norm have emerged as an important and widely used classification algorithm for many applications such as text chunking, part-of-speech tagging, information retrieval, and dependency parsing. $L_{2}$-norm SVMs usually provide slightly better accuracy than $L_{1}$-SVMs in most tasks. However, $L_{2}$-norm SVMs produce many near-but-nonzero feature weights, and computing these nonsignificant weights is highly time-consuming. In this paper, we present a cutting-weight algorithm to guide the optimization process of the $L_{2}$-SVMs toward a sparse solution. Before checking optimality, our method automatically discards a set of near-but-nonzero feature weights. The final solution is then obtained once the objective function is satisfied by the remaining features and hypothesis. One characteristic of our cutting-weight algorithm is that it requires no changes to the original learning objective. To verify this concept, we conduct experiments on three well-known benchmarks, i.e., CoNLL-2000 text chunking, SIGHAN-3 Chinese word segmentation, and Chinese word dependency parsing. Our method achieves feature-parameter reduction rates of 1–10 times in comparison with the original $L_{2}$-SVMs, with slightly better accuracy and a lower training time cost. In terms of run-time efficiency, our method is reasonably faster than the original $L_{2}$-regularized SVMs. For example, our sparse $L_{2}$-SVM is 2.55 times faster than the original $L_{2}$-SVM at the same accuracy.
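The idea of steering an $L_2$-regularized SVM toward a sparse weight vector by discarding near-but-nonzero weights can be sketched generically with sub-gradient (Pegasos-style) training plus periodic pruning. This is only an illustration of weight cutting, not the cutting-weight algorithm of the cited paper; the relative pruning threshold is a hypothetical hyperparameter.

```python
import numpy as np

def sparse_l2_svm(X, y, lam=0.1, epochs=30, prune_tol=0.05, seed=0):
    """Linear L2-regularized hinge-loss SVM trained by sub-gradient descent,
    with weights far below the largest weight zeroed out ("cut") after every
    epoch.  A generic sketch, not the exact algorithm of the cited paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * X[i].dot(w)
            # sub-gradient of lam/2 * ||w||^2 + max(0, 1 - y_i w.x_i)
            grad = lam * w - (y[i] * X[i] if margin < 1.0 else 0.0)
            w -= eta * grad
        thresh = prune_tol * np.abs(w).max()
        w[np.abs(w) < thresh] = 0.0                   # cut near-but-nonzero weights
    return w

# Toy example: 200 points in 20 dimensions where only 3 features matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2])
w = sparse_l2_svm(X, y)
print("nonzero weights:", np.count_nonzero(w), "of", w.size)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```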

12.
We develop a stability and convergence theory for a discontinuous Galerkin (DG) formulation of a highly indefinite Helmholtz problem in $\mathbb{R}^{d}$, $d\in\{1,2,3\}$. The theory covers conforming as well as non-conforming generalized finite element methods. In contrast to conventional Galerkin methods, where a minimal resolution condition is necessary to guarantee unique solvability, it is proved that the DG method admits a unique solution under much weaker conditions. As an application we present the error analysis for the $hp$-version of the finite element method explicitly in terms of the mesh width $h$, polynomial degree $p$ and wavenumber $k$. It is shown that the optimal convergence order estimate is obtained under the conditions that $kh/\sqrt{p}$ is sufficiently small and the polynomial degree $p$ is at least $O(\log k)$. On regular meshes, the first condition is improved to the requirement that $kh/p$ be sufficiently small.

13.
For hyper-rectangles in $\mathbb{R}^{d}$ Auer (1997) proved a PAC bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$, where $\varepsilon$ and $\delta$ are the accuracy and confidence parameters. It is still an open question whether one can obtain the same bound for intersection-closed concept classes of VC-dimension $d$ in general. We present a step towards a solution of this problem, showing on the one hand a new PAC bound of $O(\frac{1}{\varepsilon}(d\log d + \log \frac{1}{\delta}))$ for arbitrary intersection-closed concept classes, complementing the well-known bounds $O(\frac{1}{\varepsilon}(\log \frac{1}{\delta}+d\log \frac{1}{\varepsilon}))$ and $O(\frac{d}{\varepsilon}\log \frac{1}{\delta})$ of Blumer et al. (1989) and of Haussler, Littlestone and Warmuth (1994). Our bound is established using the closure algorithm, which generates as its hypothesis the intersection of all concepts that are consistent with the positive training examples. On the other hand, we show that many intersection-closed concept classes, including e.g. maximum intersection-closed classes, satisfy an additional combinatorial property that allows a proof of the optimal bound of $O(\frac{1}{\varepsilon}(d+\log \frac{1}{\delta}))$. For such improved bounds the choice of the learning algorithm is crucial, as there are consistent learning algorithms that need $\Omega(\frac{1}{\varepsilon}(d\log\frac{1}{\varepsilon} +\log\frac{1}{\delta}))$ examples to learn some particular maximum intersection-closed concept classes.
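For axis-aligned hyper-rectangles the closure algorithm mentioned above has a one-line realization: the intersection of all rectangles consistent with the positive examples is their coordinate-wise bounding box, so the hypothesis can only err on positive points (one-sided error). A small sketch (the sampling distribution and target box are illustrative assumptions):

```python
import numpy as np

def closure_hyperrectangle(X, y):
    """Closure algorithm for axis-aligned hyper-rectangles in R^d: the hypothesis
    is the intersection of all rectangles containing the positive examples,
    i.e. their coordinate-wise bounding box."""
    pos = X[y == 1]
    if len(pos) == 0:                       # no positives: the empty concept
        return None
    return pos.min(axis=0), pos.max(axis=0)

def predict(box, X):
    if box is None:
        return np.zeros(len(X), dtype=int)
    lo, hi = box
    return np.all((X >= lo) & (X <= hi), axis=1).astype(int)

# Toy PAC-style experiment: target rectangle [0.2, 0.7]^2, uniform samples.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 2))
y_train = np.all((X_train >= 0.2) & (X_train <= 0.7), axis=1).astype(int)
box = closure_hyperrectangle(X_train, y_train)

X_test = rng.uniform(size=(10000, 2))
y_test = np.all((X_test >= 0.2) & (X_test <= 0.7), axis=1).astype(int)
print("generalization error:", np.mean(predict(box, X_test) != y_test))
```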

14.
We relate the exponential complexities $2^{s(k)n}$ of $k$-SAT and the exponential complexity $2^{s(\mathrm{EVAL}(\Pi_2\,3\text{-CNF}))n}$ of $\mathrm{EVAL}(\Pi_2\,3\text{-CNF})$ (the problem of evaluating quantified formulas of the form $\forall\vec{x}\,\exists\vec{y}\,F(\vec{x},\vec{y})$, where $F$ is a 3-CNF in the variables $\vec{x}$ and $\vec{y}$) and show that $s(\infty)$ (the limit of $s(k)$ as $k\to\infty$) is at most $s(\mathrm{EVAL}(\Pi_2\,3\text{-CNF}))$. Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for $\mathrm{EVAL}(\Pi_2\,3\text{-CNF})$ running in time $2^{cn}$ with $c<1$. On the other hand, a nontrivial exponential-time algorithm for $\mathrm{EVAL}(\Pi_2\,3\text{-CNF})$ would provide a $k$-SAT solver with a better exponent than all current algorithms for sufficiently large $k$. We also show that several syntactic restrictions of the evaluation problem $\mathrm{EVAL}(\Pi_2\,3\text{-CNF})$ have nontrivial algorithms, and provide strong evidence that the hardest cases of $\mathrm{EVAL}(\Pi_2\,3\text{-CNF})$ must have a mixture of clauses of two types: clauses with one universally quantified literal and two existentially quantified literals, and clauses with only existentially quantified literals. Moreover, the hardest cases must have at least $n-o(n)$ universally quantified variables, and hence only $o(n)$ existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable $k$-CNFs and the application of the Sparsification Lemma.
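A brute-force reference evaluator for $\Pi_2$ 3-CNF sentences $\forall\vec{x}\,\exists\vec{y}\,F(\vec{x},\vec{y})$, exponential in both variable blocks, useful only for tiny instances and far from the nontrivial algorithms discussed above (the clause encoding is an assumption of this sketch):

```python
from itertools import product

def eval_pi2_3cnf(clauses, x_vars, y_vars):
    """Evaluate  forall x exists y  F(x, y)  for a 3-CNF F.

    `clauses` is a list of clauses; each clause is a tuple of literals, a literal
    being (variable_name, polarity) with polarity True for the positive literal.
    Runs in time 2^(|x| + |y|): a reference brute force only."""
    def satisfied(assign):
        return all(any(assign[v] == pol for v, pol in clause) for clause in clauses)

    for xs in product([False, True], repeat=len(x_vars)):
        x_assign = dict(zip(x_vars, xs))
        if not any(satisfied({**x_assign, **dict(zip(y_vars, ys))})
                   for ys in product([False, True], repeat=len(y_vars))):
            return False       # this x has no witnessing y
    return True

# forall x1 x2 exists y : (x1 or x2 or y) and (~x1 or ~x2 or ~y)   -- true
clauses = [(("x1", True), ("x2", True), ("y", True)),
           (("x1", False), ("x2", False), ("y", False))]
print(eval_pi2_3cnf(clauses, x_vars=["x1", "x2"], y_vars=["y"]))   # True
```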

15.
We study broadcasting, also known as one-to-all communication, in synchronous radio networks with known topology modeled by undirected (symmetric) graphs, where the interference range of a node is likely to exceed its transmission range. In this model, if two nodes are connected by a transmission edge they can communicate directly. On the other hand, if two nodes are connected by an interference edge they cannot communicate directly, and a transmission by one node disables reception of any message at the other node. For a network $G$, its interference distance $d_I$ is the smallest integer $d$ such that for any interference edge $e$ there exists a simple path formed of at most $d$ transmission edges connecting the endpoints of $e$. In this model the schedule of transmissions is precomputed in advance, based on full knowledge of the size and the topology (including the location of transmission and interference edges) of the network. We are interested in the design of fast broadcasting schedules that are energy efficient, i.e., based on a bounded number of transmissions executed at each node. We write $n$ for the number of nodes, $D_T$ for the diameter of the subnetwork induced by the transmission edges, and $\varDelta$ for the maximum combined degree (over transmission and interference edges) of the network. We contribute the following new results: (1) We prove that for networks with interference distance $d_I\ge 2$ any broadcasting schedule requires at least $D_T+\varOmega(\varDelta \cdot \frac{\log n}{\log \varDelta})$ rounds. (2) For networks modeled by bipartite graphs, we provide an algorithm that computes $1$-shot broadcasting schedules (each node transmits at most once) of length $O(\varDelta \cdot \log n)$. (3) The main result of the paper is an algorithm that computes a $1$-shot broadcasting schedule of length at most $4 \cdot D_T + O(\varDelta \cdot d_I \cdot \log^4 n)$ for networks with arbitrary topology. Note that, in view of the lower bound from (1), if $d_I$ is poly-logarithmic in $n$ this broadcast schedule is a poly-logarithmic factor away from the optimal solution.
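The interference distance $d_I$ can be computed straight from its definition: for every interference edge, measure the shortest path between its endpoints in the transmission graph and take the maximum. A minimal sketch on a hypothetical six-node network:

```python
from collections import deque

def interference_distance(trans_adj, interference_edges):
    """d_I = max over interference edges (u, v) of the length (in transmission
    edges) of a shortest transmission path between u and v."""
    def bfs_dist(src, dst):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return dist[u]
            for w in trans_adj.get(u, ()):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return float("inf")    # endpoints not connected by transmission edges

    return max(bfs_dist(u, v) for u, v in interference_edges)

# Hypothetical 6-node network: a transmission path 0-1-2-3-4-5 and two
# interference edges that "jump over" parts of that path.
trans_adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
interference_edges = [(0, 2), (1, 4)]
print(interference_distance(trans_adj, interference_edges))   # -> 3
```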

16.
Using S.L. Sobolev’s method, we construct the interpolation splines minimizing the semi-norm in $K_2(P_2)$, where $K_2(P_2)$ is the space of functions $\varphi$ such that $\varphi^{\prime}$ is absolutely continuous, $\varphi^{\prime\prime}$ belongs to $L_2(0,1)$, and $\int_0^1(\varphi^{\prime\prime}(x)+\varphi(x))^2\,dx<\infty$. Explicit formulas for the coefficients of the interpolation splines are obtained. The resulting interpolation spline is exact for the trigonometric functions $\sin x$ and $\cos x$. Finally, in a few numerical examples the qualities of the defined splines and of $D^2$-splines are compared. Furthermore, the relationship of the defined splines with an optimal quadrature formula is shown.

17.
This article introduces a class of piecewise-constant image segmentation models that involve $L^1$ norms as data fidelity measures. The $L^1$ norms make it possible to segment images with low contrast or with outliers such as impulsive noise. The regions to be segmented are represented as smooth functions instead of the Heaviside expression of level set functions as in the level set method. In order to deal with both the non-smooth data-fitting and regularization terms, we use a variable splitting scheme to obtain constrained optimization problems, and apply an augmented Lagrangian method to solve them. This results in fast and efficient iterative algorithms for piecewise-constant image segmentation. The segmentation framework is extended to vector-valued images as well as to a multi-phase model to deal with an arbitrary number of regions. We show comparisons with Chan–Vese models that use $L^2$ fidelity measures, to highlight the benefit of the $L^1$ ones.
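Why the $L^1$ fidelity copes with impulsive noise can be seen in a stripped-down two-phase model: minimizing $\sum|f-c|$ over a region makes the optimal constant $c$ a median rather than a mean. The sketch below alternates labelling and median updates only; the TV regularization, the variable splitting and the augmented Lagrangian of the cited paper are deliberately omitted, and the synthetic image is an illustrative assumption.

```python
import numpy as np

def l1_two_phase(f, iters=20):
    """Alternating minimisation of  sum_{u=1} |f - c1| + sum_{u=0} |f - c2|
    over a binary labelling u and two constants.  The L1 fidelity makes c1, c2
    region *medians*, hence robust to impulsive noise.  Regularization is
    omitted: this is only a data-fidelity illustration."""
    c1, c2 = np.percentile(f, 75), np.percentile(f, 25)
    for _ in range(iters):
        u = np.abs(f - c1) <= np.abs(f - c2)   # label each pixel by the closer constant
        if u.any():
            c1 = np.median(f[u])
        if (~u).any():
            c2 = np.median(f[~u])
    return u, c1, c2

# Synthetic image: a bright square (0.8) on a dark background (0.2),
# corrupted by 5% salt-and-pepper impulses at 0 and 1.
rng = np.random.default_rng(0)
f = np.full((64, 64), 0.2)
f[16:48, 16:48] = 0.8
impulses = rng.random(f.shape) < 0.05
f[impulses] = rng.choice([0.0, 1.0], size=int(impulses.sum()))
u, c1, c2 = l1_two_phase(f)
print("estimated constants:", c1, c2)   # medians stay near 0.8 and 0.2 despite the impulses
```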

18.
The concepts of $(\overline{\in},\overline{\in}\vee\,\overline{q})$-fuzzy (p-, q- and a-) ideals of BCI-algebras are introduced and some related properties are investigated. In particular, we describe the relationships among ordinary fuzzy (p-, q- and a-) ideals, $(\in,\in\vee\, q)$-fuzzy (p-, q- and a-) ideals and $(\overline{\in},\overline{\in}\vee\,\overline{q})$-fuzzy (p-, q- and a-) ideals of BCI-algebras. Moreover, we prove that a fuzzy set of a BCI-algebra $X$ is an $(\overline{\in},\overline{\in}\vee\,\overline{q})$-fuzzy a-ideal of $X$ if and only if it is both an $(\overline{\in},\overline{\in}\vee\,\overline{q})$-fuzzy p-ideal and an $(\overline{\in},\overline{\in}\vee\,\overline{q})$-fuzzy q-ideal. Finally, we give some characterizations of three particular cases of BCI-algebras in terms of these generalized fuzzy ideals.

19.
We present and analyze a finite volume scheme of arbitrary order for elliptic equations in the one-dimensional setting. In this scheme, the control volumes are constructed by using the Gauss points in subintervals of the underlying mesh. We provide a unified proof of the inf-sup condition, and show that our finite volume scheme has optimal convergence rates in the energy and $L^2$ norms for the approximation error. Furthermore, we prove that the derivative error is superconvergent at all Gauss points, and in some special cases the convergence rate can reach $h^{r+2}$ and even $h^{2r}$, compared with the $h^{r+1}$ rate of the counterpart finite element method. Here $r$ is the polynomial degree of the trial space. All theoretical results are justified by numerical tests.

20.
We investigate efficient algorithms and a practical implementation of an explicit-type high-order timestepping method based on Krylov subspace approximations, for possible application to large-scale engineering problems in electromagnetics. We consider a semi-discrete form of Maxwell’s equations resulting from a high-order spectral-element discontinuous Galerkin discretization in space, whose solution can be expressed analytically by a large matrix exponential of dimension $\kappa\times\kappa$. We project the matrix exponential into a small Krylov subspace by the Arnoldi process based on the modified Gram–Schmidt algorithm and perform a matrix exponential operation with a much smaller matrix of dimension $m\times m$ ($m\ll\kappa$). To compute this matrix exponential, we obtain the eigenvalues of the $m\times m$ matrix using available library packages and evaluate an ordinary exponential function at the eigenvalues. The scheme involves mainly matrix–vector multiplications, and its convergence rate is generally $O(\Delta t^{m-1})$ in time, so it allows taking a larger timestep size as $m$ increases. We demonstrate CPU time reduction compared with the five-stage fourth-order Runge–Kutta method at a given accuracy. We also demonstrate error behaviors for long-time simulations. Case studies are also presented, showing loss of orthogonality that can be recovered by adding a low-cost reorthogonalization technique.
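The core Krylov step described above reduces $\exp(\Delta t\,A)v$ for a huge $A$ to a matrix exponential of a small $m\times m$ Hessenberg matrix: $\exp(\Delta t\,A)v\approx \beta\,V_m\exp(\Delta t\,H_m)e_1$ with $\beta=\Vert v\Vert$. A dense toy sketch of the Arnoldi process with modified Gram–Schmidt (the random skew-symmetric test matrix stands in for the spectral-element DG operator, which in practice is applied matrix-free; `scipy.linalg.expm` replaces the eigenvalue-based small exponential of the paper):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, dt, m):
    """Approximate exp(dt*A) @ v in the Krylov subspace K_m(A, v) using the
    Arnoldi process with modified Gram-Schmidt:
        exp(dt*A) v  ~  beta * V_m @ expm(dt*H_m) @ e_1."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = np.dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    return beta * V[:, :m] @ expm(dt * Hm)[:, 0]

# Toy check against the full matrix exponential (skew-symmetric A, as for a
# lossless Maxwell discretization).
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B - B.T
v = rng.standard_normal(200)
dt = 0.01
exact = expm(dt * A) @ v
for m in (5, 10, 20):
    err = np.linalg.norm(arnoldi_expm_action(A, v, dt, m) - exact)
    print(f"m = {m:2d}   error = {err:.2e}")
```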
