Similar Documents (20 results)
1.
In this paper, we consider the backward problem for a diffusion equation with the space-fractional Laplacian, i.e. determining the initial distribution from final-value measurement data. To overcome the ill-posedness of the backward problem, we present a so-called negative exponential regularization method. Based on a conditional stability estimate and an a posteriori regularization parameter choice rule, convergence rate estimates are established under an a priori bound assumption on the exact solution. Finally, several numerical examples show that the numerical methods are effective.
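To make the filtered backward step concrete, here is a minimal spectral sketch on a periodic 1D grid. The quasi-boundary-type filter 1/(e^{-|ξ|^{2s}T} + α) used below is an assumption for illustration; the paper's negative exponential regularization and its a posteriori parameter rule may differ, and all names and parameter values are illustrative.

```python
import numpy as np

def backward_fractional_diffusion(g, L, T, s, alpha):
    """Recover u(., 0) from final data g = u(., T) on a periodic grid.

    Sketch only: forward symbol exp(-|xi|^{2s} T), inverted through the
    quasi-boundary-type filter 1 / (exp(-|xi|^{2s} T) + alpha). The
    paper's negative exponential filter and parameter rule may differ.
    """
    n = g.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular frequencies
    decay = np.exp(-np.abs(xi) ** (2.0 * s) * T)      # forward semigroup symbol
    f_hat = np.fft.fft(g) / (decay + alpha)           # damped backward step
    return np.fft.ifft(f_hat).real

# usage: synthesize noisy final data from a known initial state, then invert
L, T, s, n = 2.0 * np.pi, 0.5, 0.7, 256
x = np.linspace(0.0, L, n, endpoint=False)
f_true = np.sin(x) + 0.5 * np.sin(3.0 * x)
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
g = np.fft.ifft(np.exp(-np.abs(xi) ** (2.0 * s) * T) * np.fft.fft(f_true)).real
g = g + 1e-3 * np.random.randn(n)                     # measurement noise
f_rec = backward_fractional_diffusion(g, L, T, s, alpha=1e-3)
```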

2.
In this paper, we take advantage of an analytical method to solve the advection-dispersion equation (ADE) for identifying contamination problems. First, the Fourier series expansion technique is employed to calculate the concentration field C(x, t) at any time t < T. Then, we consider a direct regularization obtained by adding an extra term αC(x, 0) to the final condition, which leads to a Fredholm integral equation of the second kind. The termwise separable property of the kernel function permits us to transform it into a two-point boundary value problem. The uniform convergence and error estimate of the regularized solution Cα(x, t) are provided, and a strategy for selecting the regularization parameter is suggested. The solver used in this work can recover the spatial distribution of the groundwater contaminant concentration. Several numerical examples show that the new approach can retrieve all past data very well and copes with problems with heterogeneous parameters, even when the final data are seriously noisy.
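Mode by mode, the effect of the extra term can be sketched as follows (assuming, for illustration, that the spatial ADE operator admits eigenpairs (λ_k, φ_k) under homogeneous boundary conditions; the paper's kernel formulation is equivalent but more general):

```latex
% Expand C(x,t) = \sum_k c_k(t)\,\varphi_k(x), so c_k(t) = e^{-\lambda_k t} c_k(0).
% The regularized final condition \alpha C_\alpha(x,0) + C_\alpha(x,T) = g(x) gives
c_k^{\alpha}(0) = \frac{g_k}{\alpha + e^{-\lambda_k T}},
\qquad
C_\alpha(x,t) = \sum_k \frac{e^{-\lambda_k t}}{\alpha + e^{-\lambda_k T}}\, g_k\, \varphi_k(x),
% so the unstable amplification e^{\lambda_k T} of high modes is capped by 1/\alpha.
```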

3.
This paper is concerned with the Cauchy problem for the Helmholtz equation in a smooth, bounded domain. The Fourier–Bessel method with Tikhonov regularization is applied to obtain a regularized solution to the problem with noisy data. Convergence and stability are established for a suitable choice of the regularization parameter. Numerical experiments are also presented to show the effectiveness of the proposed method.

4.
An anchored analysis of variance (ANOVA) method is proposed in this paper to decompose statistical moments. Compared to the standard ANOVA with mutually orthogonal component functions, the anchored ANOVA, with an arbitrary choice of the anchor point, loses orthogonality if the same measure is employed. However, an advantage of the anchored ANOVA is the considerably reduced number of deterministic solver computations, which renders the uncertainty quantification of real engineering problems much easier. Different from existing methods, the covariance decomposition of the output variance is used in this work to account for the interactions between non-orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic test cases. In particular, the sensitivity of existing methods to the choice of anchor point is analysed via the Ishigami case, and we point out that the covariance decomposition is free of this issue. Moreover, with a truncated anchored ANOVA expansion, numerical results show that the proposed approach is less sensitive to the anchor point. Covariance-based sensitivity indices (SI) are also computed and compared with variance-based SI. Furthermore, we emphasize that the covariance decomposition generalizes in a straightforward way to higher-order moments. For academic problems, results show that the method converges to the exact solution for both the skewness and the kurtosis. Finally, the proposed method is applied to a realistic case: estimating the uncertainties of chemical reactions in a hypersonic flow around a space vehicle during atmospheric reentry. Copyright © 2015 John Wiley & Sons, Ltd.
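The covariance decomposition invoked here rests on a simple identity, sketched below for a generic anchored expansion f = Σ_u f_u with non-orthogonal terms:

```latex
% For f = \sum_u f_u with non-orthogonal anchored terms f_u:
\operatorname{Var}(f)
  = \operatorname{Cov}\Big(\sum_u f_u,\, f\Big)
  = \sum_u \operatorname{Cov}(f_u, f)
  = \sum_u \Big[ \operatorname{Var}(f_u)
    + \operatorname{Cov}\Big(f_u, \sum_{v \ne u} f_v\Big) \Big].
% Exact without orthogonality; the cross-covariances vanish in standard ANOVA,
% recovering the usual variance decomposition as a special case.
```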

5.
In this paper, we propose a closed-loop supply chain network configuration model and a solution methodology that aim to address several research gaps in the literature. The proposed solution methodology employs a novel metaheuristic algorithm, along with the popular gradient descent search method, to aid location-allocation and pricing-inventory decisions in a two-stage process. In the first stage, we use an improved version of the particle swarm optimisation (PSO) algorithm, which we call improved PSO (IPSO), to solve the location-allocation problem (LAP). The IPSO algorithm is developed by introducing mutation to avoid premature convergence and by embedding an evolutionary game-based procedure known as replicator dynamics to increase the rate of convergence. The results obtained through the application of IPSO are used as input in the second stage to solve the inventory-pricing problem. In this stage, we use the gradient descent search method to determine the selling price of new products and the buy-back price of returned products, as well as inventory cycle times for both product types. Numerical evaluations undertaken using problem instances of different scales confirm that the proposed IPSO algorithm performs better than comparable traditional PSO, simulated annealing (SA) and genetic algorithm (GA) methods.
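A minimal sketch of PSO with mutation, the first of the two IPSO ingredients (the replicator-dynamics acceleration is omitted, and all parameter values and names are illustrative, not the paper's):

```python
import numpy as np

def ipso_sketch(objective, dim, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, p_mut=0.1, bounds=(-10.0, 10.0)):
    """PSO with random mutation (illustrative parameters).

    Sketch of one IPSO ingredient only: mutation against premature
    convergence. The replicator-dynamics step is omitted.
    """
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        mutate = np.random.rand(n_particles) < p_mut   # perturb a few particles
        x[mutate] += np.random.normal(0.0, 0.1 * (hi - lo), (mutate.sum(), dim))
        x = np.clip(x, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# usage: minimise a 5-dimensional sphere function
best_x, best_f = ipso_sketch(lambda z: float(np.sum(z ** 2)), dim=5)
```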

6.
An a-posteriori error estimator for finite element analysis proposed by Zienkiewicz and Zhu is analysed and shown to be effective and convergent. In addition, we analyse wider classes of estimators of which the Zienkiewicz–Zhu estimator is a special case. It is shown that some of these estimators are asymptotically exact. Numerical evidence supporting the analysis is presented.
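In its commonly cited form (an assumption here, since the abstract does not restate it), the Zienkiewicz–Zhu indicator compares a recovered stress with the finite element stress:

```latex
% Recovered (smoothed) stress \sigma^* vs. finite element stress \sigma_h:
\eta_K = \|\sigma^{*} - \sigma_h\|_{L^2(K)},
\qquad
\eta = \Big( \sum_K \eta_K^2 \Big)^{1/2},
\qquad
\theta = \frac{\eta}{\|\sigma - \sigma_h\|_{L^2(\Omega)}}.
% Asymptotic exactness means the effectivity index \theta \to 1 as h \to 0.
```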

7.
Compressive imaging has been intensively studied during the past few years; it is capable of reconstructing high-resolution images at sampling ratios far below the Nyquist rate. In contrast to previous works, a new l0-l2 minimisation approach is proposed for compressive imaging in this paper, regularised by sparsity constraints in three complementary frames. The new approach stems from the observation that images of practical interest may consist of different morphological components (e.g. point singularities, oscillating textures, curvilinear edges) and therefore cannot be sparsely represented in a single frame. The alternating split Lagrangian method is further exploited to solve the l0-l2 minimisation problem, leading to an efficient iteration scheme for compressive imaging from partial Fourier data. In addition, we analyse the convergence properties of the proposed algorithm and compare its performance against several recently proposed methods. Numerical simulations on natural and magnetic resonance images show that the proposed approach achieves state-of-the-art performance.
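A generic sketch of the alternating splitting such schemes use, with assumed frames W_i, penalty μ and multipliers b_i (the paper's exact updates may differ):

```latex
% Splitting d_i = W_i x, penalty \mu, multipliers b_i:
x^{k+1} = \arg\min_x \; \|A x - y\|_2^2
  + \frac{\mu}{2} \sum_i \|W_i x - d_i^{k} + b_i^{k}\|_2^2,
\qquad
d_i^{k+1} = \mathcal{H}_{\sqrt{2\lambda/\mu}}\big( W_i x^{k+1} + b_i^{k} \big),
\qquad
b_i^{k+1} = b_i^{k} + W_i x^{k+1} - d_i^{k+1}.
% \mathcal{H}_t is hard thresholding (zero entries with magnitude \le t),
% the proximal map of the \ell_0 penalty; the x-update is a least-squares
% problem solvable by FFTs when A is a partial Fourier operator.
```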

8.
Computable a-posteriori error estimates for finite element solutions are derived in an asymptotic form for h → 0, where h measures the size of the elements. The approach is similar to the residual method but differs from it in the use of norms of negative Sobolev spaces corresponding to the given bilinear (energy) form. For clarity, the presentation is restricted to one-dimensional model problems. More specifically, the source, eigenvalue, and parabolic problems are considered, involving a linear, self-adjoint operator of the second order. Generalizations to more general one-dimensional problems are straightforward, and the results also extend to higher space dimensions, although this involves some additional considerations. The estimates can be used for a practical a-posteriori assessment of the accuracy of a computed finite element solution, and they provide a basis for the design of adaptive finite element solvers.

9.
The theory and mathematical bases of a-posteriori error estimates are explained. It is shown that the Medial Axis of a body can be used to decompose it into a set of mutually non-overlapping quadrilateral and triangular primitives. A mesh generation scheme used to generate quadrilaterals inside these primitives is also presented, together with its relevant implementation aspects. A new h-refinement strategy based on a weighted average energy norm and enhanced by strain energy density ratios is proposed, and two typical problems are solved to demonstrate its efficiency over the conventional refinement strategy in the relative improvement of global asymptotic convergence.

10.
In this paper, an inverse source problem for the helium production-diffusion equation on a columnar symmetric domain is investigated. Based on an a priori assumption, an optimal error bound analysis and a conditional stability result are given. The problem is ill-posed, and the Landweber iteration regularization method is used to deal with it. Convergence estimates are presented under both a priori and a posteriori regularization parameter choice rules, and in both cases the convergence error estimates are order optimal. Numerical examples show that the regularization method is effective and stable for this ill-posed problem.
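The Landweber scheme referenced here admits a compact generic sketch. The Python below assumes a discretized linear forward operator K (a plain matrix) and uses Morozov's discrepancy principle as the a posteriori stopping rule; it illustrates the iteration type, not the paper's specific implementation.

```python
import numpy as np

def landweber(K, g, delta, tau=1.1, max_iter=5000):
    """Landweber iteration f_{k+1} = f_k + omega * K^T (g - K f_k).

    Generic sketch: K is a discretized forward operator, g the noisy data
    with noise level delta; iteration stops once ||K f - g|| <= tau * delta
    (discrepancy principle), which acts as the regularization.
    """
    omega = 1.0 / np.linalg.norm(K, 2) ** 2   # ensures 0 < omega < 2/||K||^2
    f = np.zeros(K.shape[1])
    for _ in range(max_iter):
        r = g - K @ f
        if np.linalg.norm(r) <= tau * delta:  # a posteriori stopping rule
            break
        f = f + omega * (K.T @ r)
    return f
```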

11.
P G Awate, P V Saraph. Sadhana, 1997, 22(1): 83-100
The well-known priority dispatching rule MOD (Modified Operational Due Date) in job shop scheduling considers job urgency through the ODD (Operational Due Date) and also incorporates the SPT (Shortest Processing Time) effect in prioritising operationally late jobs, leading to robust behaviour in Mean Tardiness (MT) with respect to the tightness/looseness of due dates. In the present paper, we study an extension of the MOD rule that uses job-waiting-time-based discrimination among operationally late jobs to protect long jobs from excessive delays, incorporating an 'acceleration property' into the scheduling rule. Formally, we employ a weighted-SPT dispatching priority index of the form (Processing time)/(Waiting time)^α for operationally late jobs, while the priority index is the ODD for operationally non-late jobs; the latter class of jobs has lower priority than the former. In the context of assembly job shop scheduling, some of the existing literature focuses on the concept of 'staging delay', i.e., the waiting of components or sub-assemblies for their counterparts before assembly. Some existing approaches attempt dynamic anticipation of staging delay problems and re-prioritisation of operations along converging branches. In the present paper, rather than depending on such a centralised and largely backward-scheduling approach, we consider a partially decentralised approach, endowing jobs with a priority index that yields an 'acceleration property' based on a 'look-back' in terms of waiting time, rather than a 'look-ahead'. In the particular case when α is set to zero and all jobs at a machine are operationally late, our rule agrees with MOD, as both exhibit the SPT effect. In simulation tests of our priority scheme for assembly job shops, in comparison with leading heuristics in the literature, we found our rule to be particularly effective in (1) minimising conditional mean tardiness and (2) minimising the 99th percentile of the tardiness distribution, through a proper choice of α. We also exploit an interesting duality between the scheduling and queueing-control versions of the problem. Based on this, some exact and heuristic analysis is given to guide the choice of α, which is also supported by numerical evidence.
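The priority index described above is simple enough to state as code. A small sketch follows; the epsilon guard and tie-handling are illustrative choices, and the paper's analysis guiding α is not reproduced.

```python
def priority_index(proc_time, waiting_time, odd, now, alpha):
    """Weighted-SPT index from the rule above (smaller = higher priority).

    Operationally late jobs (now > ODD) use p / w**alpha, so long waits
    accelerate a job; non-late jobs queue behind them, ordered by ODD.
    """
    if now > odd:                                   # operationally late
        return (0, proc_time / max(waiting_time, 1e-9) ** alpha)
    return (1, odd)                                 # non-late class

# pick the next job on a machine: with alpha = 0 and all jobs late,
# this reduces to the SPT ordering of MOD
queue = [{"p": 4.0, "w": 6.0, "odd": 10.0},
         {"p": 2.0, "w": 1.0, "odd": 25.0}]
nxt = min(queue, key=lambda j: priority_index(j["p"], j["w"], j["odd"],
                                              now=12.0, alpha=0.5))
```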

12.
We propose a new embedded finite element method to simulate partial differential equations over domains with internal interfaces. Our approach belongs to the family of surrogate/approximate interface methods and relies on the idea of shifting the location and value of the jump interface conditions. This choice preserves optimal convergence rates while avoiding small cut cells and the related problematic issues typical of traditional embedded methods. The proposed approach is accurate, robust, efficient, and simple to implement. We apply this concept to internal interface computations in the context of the mixed Poisson problem, also known as the Darcy flow problem. An extensive set of numerical tests demonstrates the performance of the proposed approach.

13.
14.
One of the most commonly used methods for modeling multivariate time series is the vector autoregressive model (VAR). VAR is generally used to identify lead, lag, and contemporaneous relationships describing Granger causality within and between time series. In this article, we investigate the VAR methodology for analyzing data consisting of multilayer time series that are spatially interdependent. When modeling VAR relationships for such data, the dependence between time series is both a curse and a blessing: a curse because it requires modeling the between-series correlation, i.e. the contemporaneous relationships, which may be challenging with likelihood-based methods; a blessing because the spatial correlation structure can be used to specify the lead-lag relationships within and between time series, within and between layers. To address these challenges, we propose an L1/L2-regularized likelihood estimation method. The lead, lag, and contemporaneous relationships are estimated using an efficient algorithm that exploits sparsity in the VAR structure, accounts for the spatial dependence, and models the error dependence. We present a case study to illustrate the applicability of our method. In the supplementary materials available online, we assess the performance of the proposed VAR model and compare it with existing methods in a simulation study.
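As a rough illustration of L1-penalized VAR estimation (not the authors' method: their estimator additionally exploits the spatial structure and models the error dependence), one can fit each VAR equation with a lasso penalty. The helper below is a hypothetical sketch using scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_var(Y, p=1, lam=0.05):
    """Equation-by-equation L1-penalized VAR(p) fit (generic sketch).

    Y has shape (T, k): T time points, k series. Returns A of shape
    (k, k*p); row j holds the lag-1..p coefficients for series j.
    """
    T, k = Y.shape
    # lagged design matrix: [Y_{t-1}, ..., Y_{t-p}] for t = p..T-1
    X = np.hstack([Y[p - l - 1:T - l - 1] for l in range(p)])
    A = np.zeros((k, k * p))
    for j in range(k):
        fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        A[j] = fit.fit(X, Y[p:, j]).coef_
    return A
```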

15.
In this article, we propose a semi-analytical method to tackle the two-dimensional backward heat conduction problem (BHCP) by using a quasi-boundary idea. First, the Fourier series expansion technique is employed to calculate the temperature field u(x, y, t) at any time t < T. Second, we consider a direct regularization by adding an extra term αu(x, y, 0) to reach a second-kind Fredholm integral equation for u(x, y, 0). The termwise separable property of the kernel function permits us to obtain a closed-form regularized solution. In addition, a strategy for choosing the regularization parameter is suggested. Testing several numerical examples, we find that the proposed scheme is robust and applicable to the two-dimensional BHCP.

16.
In this paper we propose a numerical algorithm based on the method of fundamental solutions (MFS) for simultaneously recovering a space-dependent heat source and the initial data in an inverse heat conduction problem. The problem is transformed into a homogeneous backward-type inverse heat conduction problem and a Dirichlet boundary value problem for Poisson's equation. We use an improved method of fundamental solutions to solve the backward-type inverse heat conduction problem and apply the finite element method to solve the well-posed direct problem. The Tikhonov regularization method, combined with the generalized cross validation rule for selecting a suitable regularization parameter, is applied to obtain a stable regularized solution to the backward-type inverse heat conduction problem. Numerical experiments on four examples in one and two dimensions show the effectiveness of the proposed algorithm.
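The Tikhonov-plus-GCV step is standard enough to sketch. The following assumes a generic discretized linear system Ax ≈ b (as would come from the MFS collocation, which is omitted here); the SVD filter-factor form and the GCV score are textbook formulas, not the paper's code.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas=np.logspace(-8, 0, 50)):
    """Tikhonov solution x = argmin ||Ax - b||^2 + lam ||x||^2, with lam
    picked by generalized cross validation over a log grid (sketch only;
    it ignores the residual component outside the range of A)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best_gcv, best_lam = np.inf, lambdas[0]
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam)                    # filter factors
        gcv = np.sum(((1 - f) * beta) ** 2) / (len(b) - np.sum(f)) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam = gcv, lam
    x = Vt.T @ (s / (s ** 2 + best_lam) * beta)        # filtered SVD solution
    return x, best_lam
```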

17.
Jose M. Vidal-Sanz. TEST, 2009, 18(1): 96-114
We consider the nonparametric estimation of spectral densities for second-order stationary random fields on a d-dimensional lattice. We discuss some drawbacks of standard methods and propose modified estimator classes with an improved bias convergence rate, emphasizing the use of kernel methods and the choice of an optimal smoothing number. We prove uniform consistency and study the uniform asymptotic distribution when the optimal smoothing number is estimated from the sampled data.

18.
In this paper, we study the efficient numerical integration of functions with sharp gradients and cusps. An adaptive integration algorithm is presented that systematically improves the accuracy of the integration of a set of functions. The algorithm is based on a divide-and-conquer strategy and is independent of the location of the sharp gradient or cusp. The error analysis reveals that for a C0 function (derivative discontinuity at a point), a rate of convergence of n + 1 is obtained. Two applications of the adaptive integration scheme are studied. First, we use the adaptive quadratures for the integration of the regularized Heaviside function, a strongly localized function that is used for modeling sharp gradients. Then the adaptive quadratures are employed in the enriched finite element solution of the all-electron Coulomb problem in crystalline diamond. The source term and enrichment functions of this problem have sharp gradients and cusps at the nuclei. We show that the optimal rate of convergence is obtained with only a marginal increase in the number of integration points with respect to the pure finite element solution with the same number of elements. The adaptive integration scheme is simple, robust, and directly applicable to any generalized finite element method employing enrichments with sharp local variations or cusps in n-dimensional parallelepiped elements. Copyright © 2012 John Wiley & Sons, Ltd.
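A minimal divide-and-conquer quadrature in the spirit of the abstract (the paper targets n-dimensional parallelepiped elements; the sketch below is plain 1D adaptive Simpson): the recursion bisects wherever the local error indicator is large, so points cluster near a cusp without its location being supplied.

```python
def adaptive_simpson(f, a, b, tol=1e-8, depth=50):
    """Divide-and-conquer quadrature sketch: bisect until the two half
    estimates agree with the whole-interval estimate, so refinement
    concentrates near sharp gradients or cusps automatically."""
    def simpson(l, r):
        return (r - l) / 6.0 * (f(l) + 4.0 * f(0.5 * (l + r)) + f(r))

    def rec(l, r, whole, d):
        m = 0.5 * (l + r)
        left, right = simpson(l, m), simpson(m, r)
        if d == 0 or abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return rec(l, m, left, d - 1) + rec(m, r, right, d - 1)

    return rec(a, b, simpson(a, b), depth)

# cusp at x = 0.3: subdivision clusters there without being told where it is
val = adaptive_simpson(lambda x: abs(x - 0.3) ** 0.5, 0.0, 1.0)
```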

19.
The problem of differentiating non-smooth functions of specimen displacements measured during material removal is discussed. This problem arises when employing the layer removal method, namely the method of rings and strips, for residual stress depth profiling. It is shown that the problem is ill-posed and that special solution methods are required to obtain a stable solution. The stability of the solution strongly affects the resulting accuracy of the residual stress evaluation in the investigated material. The present study discusses a numerical approach to solving such ill-posed problems. The proposed approach, based on Tikhonov regularization and a regularized finite difference method, provides a stable approximate solution, including a pointwise error estimate. The advantage of this approach is that it does not require any knowledge of the unknown exact solution; a pointwise error estimate of the measured data is the only prior information that must be available. In addition, the approach guarantees convergence of the approximate solution to the unknown exact one as the perturbation of the initial data approaches zero.
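Stable numerical differentiation via Tikhonov regularization can be sketched as a small least-squares problem. The formulation below, which recasts differentiation as inverting a trapezoidal integration matrix with a first-difference penalty, is a generic illustration, not the article's exact regularized finite difference method.

```python
import numpy as np

def regularized_derivative(y, h, lam):
    """Stable derivative of noisy samples y on a uniform grid with step h:
    minimize ||C u - (y - y[0])||^2 + lam * ||D u||^2, where C performs
    trapezoidal integration and D penalizes rough derivatives."""
    n = len(y)
    C = np.tril(np.full((n, n), h))       # (C u)_i ~ integral of u up to x_i
    C[:, 0] = h / 2
    C[np.diag_indices(n)] = h / 2
    C[0] = 0.0                            # integral over empty interval
    D = np.diff(np.eye(n), axis=0) / h    # first-difference smoother
    return np.linalg.solve(C.T @ C + lam * (D.T @ D), C.T @ (y - y[0]))

# usage: differentiate a noisy sine; lam trades noise damping vs. bias
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 1e-3 * np.random.randn(200)
dy = regularized_derivative(y, x[1] - x[0], lam=1e-6)
```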

20.
Winfried Stute. TEST, 2001, 10(2): 393-403
In this paper we provide consistency and distributional convergence results for functions of residuals from an ARCH(p) time series. An application to a goodness-of-fit problem is also discussed.
