Similar Articles
20 similar articles found (search time: 453 ms)
1.
A Nonlinear Stepsize Control (NSC) framework has been proposed by Toint [Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization, Optim. Methods Softw. 28 (2013), pp. 82–95] for unconstrained optimization, generalizing many trust-region and regularization algorithms. More recently, worst-case complexity bounds for the generic NSC framework were proved by Grapiglia et al. [On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization, Math. Program. 152 (2015), pp. 491–520] in the context of non-convex problems. In this paper, improved complexity bounds are obtained for convex and strongly convex objectives.
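As a toy illustration of the stepsize-control mechanism that frameworks like NSC generalize (a hypothetical one-dimensional sketch, not Toint's framework itself; the thresholds 0.25/0.75 and the halving/doubling rule are illustrative choices), a basic trust-region loop adapts its radius from the ratio of actual to predicted reduction:

```python
import math

def trust_region_min(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=100):
    """One-dimensional trust-region sketch: take the Newton (or steepest
    descent) step clipped to the radius delta, then adapt delta from the
    ratio rho of actual to predicted reduction."""
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        step = -g / h if h > 0 else -math.copysign(delta, g)
        step = max(-delta, min(delta, step))          # respect the radius
        pred = -(g * step + 0.5 * h * step * step)    # model reduction
        actual = f(x) - f(x + step)
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.25:                                # accept the step
            x += step
        if rho > 0.75:
            delta *= 2.0                              # model is trustworthy
        elif rho < 0.25:
            delta *= 0.5                              # shrink the region
    return x
```

Regularization methods replace the radius constraint by a penalty term, which is the unifying view the NSC framework formalizes.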

2.
Over the past decade, ℓ1 regularization has emerged as a powerful way to learn classifiers with implicit feature selection. More recently, mixed-norm (e.g., ℓ1/ℓ2) regularization has been utilized as a way to select entire groups of features. In this paper, we propose a novel direct multiclass formulation specifically designed for large-scale and high-dimensional problems such as document classification. Based on a multiclass extension of the squared hinge loss, our formulation employs ℓ1/ℓ2 regularization so as to force weights corresponding to the same features to be zero across all classes, resulting in compact and fast-to-evaluate multiclass models. For optimization, we employ two globally-convergent variants of block coordinate descent, one with line search (Tseng and Yun in Math. Program. 117:387–423, 2009) and the other without (Richtárik and Takáč in Math. Program. 1–38, 2012a; Tech. Rep. arXiv:1212.0873, 2012b). We present the two variants in a unified manner and develop the core components needed to efficiently solve our formulation. The end result is a pair of block coordinate descent algorithms specifically tailored to our multiclass formulation. Experimentally, we show that block coordinate descent performs favorably compared to other solvers such as FOBOS, FISTA and SpaRSA. Furthermore, we show that our formulation obtains very compact multiclass models and outperforms ℓ1/ℓ2-regularized multiclass logistic regression in terms of training speed, while achieving comparable test accuracy.
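The group-sparsity effect of ℓ1/ℓ2 regularization described above comes from its proximal operator, which either shrinks a feature's weight vector across all classes or zeroes it out entirely. A minimal sketch of that operator (illustrative, not the paper's solver; the function name is hypothetical):

```python
import math

def group_soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_2 applied to one feature's weight
    vector w across all classes: shrink the whole group toward zero, and
    discard it entirely when its Euclidean norm is below tau."""
    norm = math.sqrt(sum(v * v for v in w))
    if norm <= tau:
        return [0.0] * len(w)            # the entire feature is removed
    scale = 1.0 - tau / norm
    return [scale * v for v in w]
```

Because the whole group is zeroed at once, the resulting model drops features jointly for all classes, which is what makes the learned multiclass models compact.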

3.
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to establish fast convergence of the methods. However, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method which always satisfies the sufficient descent condition. This paper makes use of both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.
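A simple way to see why a descent safeguard matters: a standard (two-term) nonlinear CG direction can fail the sufficient descent test, in which case one common remedy is to restart with steepest descent. The sketch below uses the Polak-Ribiere+ beta with such a safeguard; it is illustrative only and is not the cited three-term scheme, and the function name is hypothetical:

```python
def safeguarded_direction(g_new, g_old, d_prev, c=1e-4):
    """Compute a Polak-Ribiere+ CG direction and restart with steepest
    descent whenever the sufficient descent test g'd <= -c*||g||^2 fails."""
    beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g_old))
                    / max(sum(go * go for go in g_old), 1e-16))
    d = [-gn + beta * dp for gn, dp in zip(g_new, d_prev)]
    gd = sum(gn * di for gn, di in zip(g_new, d))
    gg = sum(gn * gn for gn in g_new)
    if gd > -c * gg:                     # sufficient descent violated
        d = [-gn for gn in g_new]        # restart with steepest descent
    return d
```

Three-term methods like the one cited build the extra term into the update itself, so the sufficient descent condition holds by construction rather than by restarting.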

4.
We study the problem of minimizing the sum of a smooth convex function and a convex block-separable regularizer and propose a new randomized coordinate descent method, which we call ALPHA. At every iteration, our method updates a random subset of coordinates drawn from an arbitrary distribution. No coordinate descent method capable of handling an arbitrary sampling had previously been studied for this problem. ALPHA is a very flexible algorithm: in special cases, it reduces to deterministic and randomized methods such as gradient descent, coordinate descent, parallel coordinate descent and distributed coordinate descent, in both nonaccelerated and accelerated variants. The variants with arbitrary (or importance) sampling are new. We provide a complexity analysis of ALPHA, from which we deduce as direct corollaries complexity bounds for its many variants, all matching or improving the best known bounds.
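The basic ingredient, coordinate updates drawn from a non-uniform sampling distribution, can be sketched on a smooth quadratic without a regularizer (a toy instance, not ALPHA itself; the function name and the probability vector are illustrative assumptions):

```python
import random

def random_cd(A, b, p, iters=2000, seed=0):
    """Randomized coordinate descent for f(x) = 0.5*x'Ax - b'x with an
    arbitrary sampling distribution p: draw coordinate i ~ p and minimize
    f exactly along it, i.e. x_i = (b_i - sum_{j!=i} A_ij x_j) / A_ii."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.choices(range(n), weights=p)[0]
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Importance sampling corresponds to choosing p proportional to per-coordinate smoothness constants, which is one of the special cases the paper's analysis covers.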

5.
In this paper, a new pattern search method is proposed for solving systems of nonlinear equations. We introduce a new non-monotone strategy based on a convex combination of the maximum function value over some preceding successful iterates and the current function value. First, when the iterates are far from the optimizer, we produce a stronger non-monotone strategy than that of Gasparo et al. [Nonmonotone algorithms for pattern search methods, Numer. Algorithms 28 (2001), pp. 171–186]. Second, when the iterates are near the optimizer, we produce a weaker non-monotone strategy than that of Ahookhosh and Amini [An efficient nonmonotone trust-region method for unconstrained optimization, Numer. Algorithms 59 (2012), pp. 523–540]. Third, when the iterates are neither near nor far from the optimizer, we produce an intermediate non-monotone strategy lying between those two. Global convergence is established, and numerical results for the proposed algorithm are reported.
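The convex-combination idea amounts to accepting a trial point against a reference value R_k that interpolates between the monotone test (current value only) and the max-type non-monotone test. A minimal sketch (the weighting parameter eta and the function name are illustrative):

```python
def nonmonotone_reference(f_hist, f_curr, eta):
    """Reference value R_k = eta*max(f_hist) + (1-eta)*f_curr for a
    non-monotone acceptance test: eta near 1 gives a weaker (more
    permissive) test, eta near 0 a stronger, nearly monotone one."""
    return eta * max(f_hist) + (1.0 - eta) * f_curr
```

Varying eta with the estimated distance to the optimizer is exactly the mechanism that lets one strategy behave "stronger" far away and "weaker" near the solution.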

6.
Many recent applications in machine learning and data fitting call for the algorithmic solution of structured smooth convex optimization problems. Although the gradient descent method is a natural choice for this task, it requires exact gradient computations and hence can be inefficient when the problem size is large or the gradient is difficult to evaluate. Therefore, there has been much interest in inexact gradient methods (IGMs), in which an efficiently computable approximate gradient is used to perform the update in each iteration. Currently, non-asymptotic linear convergence results for IGMs are typically established under the assumption that the objective function is strongly convex, which is not satisfied in many applications of interest, while linear convergence results that do not require the strong convexity assumption are usually asymptotic in nature. In this paper, we combine the best of these two types of results by developing a framework for analysing the non-asymptotic convergence rates of IGMs when they are applied to a class of structured convex optimization problems that includes least squares regression and logistic regression. We then demonstrate the power of our framework by proving, in a unified manner, new linear convergence results for three recently proposed algorithms: the incremental gradient method with increasing sample size [R.H. Byrd, G.M. Chin, J. Nocedal, and Y. Wu, Sample size selection in optimization methods for machine learning, Math. Program. Ser. B 134 (2012), pp. 127–155; M.P. Friedlander and M. Schmidt, Hybrid deterministic–stochastic methods for data fitting, SIAM J. Sci. Comput. 34 (2012), pp. A1380–A1405], the stochastic variance-reduced gradient (SVRG) method [R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, Advances in Neural Information Processing Systems 26: Proceedings of the 2013 Conference, 2013, pp. 315–323], and the incremental aggregated gradient (IAG) method [D. Blatt, A.O. Hero, and H. Gauchman, A convergent incremental gradient method with a constant step size, SIAM J. Optim. 18 (2007), pp. 29–51]. We believe that our techniques will find further applications in the non-asymptotic convergence analysis of other first-order methods.
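Of the three algorithms analysed, SVRG is the easiest to sketch: each epoch computes one full gradient at a snapshot and then performs cheap inner steps with a variance-reduced stochastic gradient. A 1-D least-squares toy instance (step size, epoch counts and the function name are illustrative, not the tuned choices of the cited analysis):

```python
import random

def svrg_1d(a, b, x0=0.0, step=0.1, epochs=30, m=None, seed=0):
    """SVRG sketch for f(x) = (1/n) * sum_i 0.5*(a_i*x - b_i)^2: each
    epoch takes a snapshot xs, computes the full gradient mu there, then
    runs m inner steps using g_i(x) - g_i(xs) + mu, whose variance
    vanishes as x approaches xs."""
    rng = random.Random(seed)
    n = len(a)
    if m is None:
        m = 2 * n
    x = x0
    for _ in range(epochs):
        xs = x
        mu = sum(ai * (ai * xs - bi) for ai, bi in zip(a, b)) / n
        for _ in range(m):
            i = rng.randrange(n)
            g = a[i] * (a[i] * x - b[i]) - a[i] * (a[i] * xs - b[i]) + mu
            x -= step * g
    return x
```

The correction term is what allows a constant step size and the linear convergence rates that the paper's framework recovers without strong convexity.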

7.
We introduce a new iterative algorithm for solving the Ky Fan inequality over the fixed point set of a nonexpansive mapping, where the cost bifunction is monotone without Lipschitz-type continuity. The algorithm is based on the ergodic iteration method for solving multi-valued variational inequalities proposed by Bruck [On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space, J. Math. Anal. Appl. 61 (1977), pp. 159–164] and the auxiliary problem principle for equilibrium problems [P.N. Anh, T.N. Hai, and P.M. Tuan, On ergodic algorithms for equilibrium problems, J. Glob. Optim. 64 (2016), pp. 179–195]. By choosing suitable regularization parameters, we present a detailed convergence analysis for the algorithm and give some illustrative examples.
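The ergodic idea that goes back to Bruck can be illustrated on a plane rotation, a nonexpansive map whose plain iterates never converge while their ergodic (Cesaro) averages do converge to the fixed point (a toy illustration, not the paper's algorithm; the function name is hypothetical):

```python
def ergodic_average(x0, n):
    """For the nonexpansive rotation T(x, y) = (-y, x), the iterates
    T^k(x0) cycle forever, but the running average of the first n
    iterates converges to the unique fixed point (0, 0)."""
    x, y = x0
    sx, sy = 0.0, 0.0
    for _ in range(n):
        sx, sy = sx + x, sy + y
        x, y = -y, x                     # one application of T
    return sx / n, sy / n
```

Averaging is what buys convergence without Lipschitz-type continuity of the bifunction, at the price of a slower (ergodic) rate.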

8.
In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy, where a critical algorithmic feature for the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point of this paper is the practical performance of the line search and trust region algorithm variants in Matlab software, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow. Our numerical experience suggests that the adaptive algorithms outperform traditional AL methods in terms of efficiency and reliability. As with traditional AL algorithms, the adaptive methods are matrix-free and thus represent a viable option for solving large-scale problems.
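The interplay of multiplier updates and adaptive penalty increases can be shown on a one-variable equality-constrained problem whose AL subproblem has a closed-form minimizer (a minimal sketch with illustrative constants 0.25 and 10, not the cited adaptive strategy):

```python
def augmented_lagrangian(iters=40):
    """Adaptive AL sketch for min x^2 s.t. x - 1 = 0 (solution x* = 1,
    lambda* = -2): minimize the AL function in closed form, update the
    multiplier when the constraint violation improves enough, otherwise
    increase the penalty parameter."""
    lam, rho, best_viol = 0.0, 1.0, float('inf')
    x = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2.0 + rho)   # argmin_x x^2 + lam*(x-1) + (rho/2)*(x-1)^2
        c = x - 1.0                      # constraint violation
        if abs(c) <= 0.25 * best_viol:
            lam += rho * c               # first-order multiplier update
            best_viol = abs(c)
        else:
            rho *= 10.0                  # adaptive penalty increase
    return x, lam
```

Only first-order (gradient) information on the subproblem is needed in general, which is why such methods can be implemented matrix-free.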

9.
We introduce a Steffensen-type method (STTM) for solving nonlinear equations in a Banach space setting. We then present a local convergence analysis for STTM using recurrence relations. Numerical examples validating the theoretical results are also provided, showing that STTM is faster than other methods [I.K. Argyros, J. Ezquerro, J.M. Gutiérrez, M. Hernández, and S. Hilout, On the semilocal convergence of efficient Chebyshev-Secant-type methods, J. Comput. Appl. Math. 235 (2011), pp. 3195–3206; J.A. Ezquerro and M.A. Hernández, An optimization of Chebyshev's method, J. Complexity 25 (2009), pp. 343–361] under similar convergence conditions.
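For orientation, the classical scalar Steffensen iteration replaces the derivative in Newton's method by a divided difference built from the function value itself (the cited STTM works in Banach spaces; this is only the textbook scalar prototype):

```python
def steffensen(f, x, tol=1e-12, max_iter=50):
    """Scalar Steffensen iteration: approximate f'(x) by the divided
    difference (f(x + f(x)) - f(x)) / f(x), giving a derivative-free
    method with Newton-like (quadratic) local convergence."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx        # divided-difference slope
        x -= fx / g
    return x
```

Like Newton's method it converges quadratically near a simple root, but it needs no derivative evaluations, only two function values per step.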

10.
International Journal of Computer Mathematics, 2012, 89(14): 3273–3296.
We introduce the new idea of recurrent functions to provide a new semilocal convergence analysis for Newton-type methods. It turns out that our sufficient convergence conditions are weaker, and the error bounds are tighter than in earlier studies in many interesting cases [X. Chen, On the convergence of Broyden-like methods for nonlinear equations with nondifferentiable terms, Ann. Inst. Statist. Math. 42 (1990), pp. 387–401; X. Chen and T. Yamamoto, Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optim. 10 (1989), pp. 37–48; Y. Chen and D. Cai, Inexact overlapped block Broyden methods for solving nonlinear equations, Appl. Math. Comput. 136 (2003), pp. 215–228; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in Nonlinear Functional Analysis and Applications, L.B. Rall, ed., Academic Press, New York, 1971, pp. 425–472; P. Deuflhard, Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms, Springer Series in Computational Mathematics, Vol. 35, Springer-Verlag, Berlin, 2004; P. Deuflhard and G. Heindl, Affine invariant convergence theorems for Newton's method and extensions to related methods, SIAM J. Numer. Anal. 16 (1979), pp. 1–10; Z. Huang, A note of Kantorovich theorem for Newton iteration, J. Comput. Appl. Math. 47 (1993), pp. 211–217; L.V. Kantorovich and G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982; D. Li and M. Fukushima, Globally Convergent Broyden-like Methods for Semismooth Equations and Applications to VIP, NCP and MCP, Optimization and Numerical Algebra (Nanjing, 1999), Ann. Oper. Res. 103 (2001), pp. 71–97; C. Ma, A smoothing Broyden-like method for the mixed complementarity problems, Math. Comput. Modelling 41 (2005), pp. 523–538; G.J. Miel, Unified error analysis for Newton-type methods, Numer. Math. 33 (1979), pp. 391–396; G.J. Miel, Majorizing sequences and error bounds for iterative methods, Math. Comp. 34 (1980), pp. 185–202; I. Moret, A note on Newton type iterative methods, Computing 33 (1984), pp. 65–73; F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985), pp. 71–84; W.C. Rheinboldt, A unified convergence theory for a class of iterative processes, SIAM J. Numer. Anal. 5 (1968), pp. 42–63; T. Yamamoto, A convergence theorem for Newton-like methods in Banach spaces, Numer. Math. 51 (1987), pp. 545–557; P.P. Zabrejko and D.F. Nguen, The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates, Numer. Funct. Anal. Optim. 9 (1987), pp. 671–684; A.I. Zinčenko, Some approximate methods of solving equations with non-differentiable operators (Ukrainian), Dopovidi Akad. Nauk Ukraïn. RSR (1963), pp. 156–161]. Applications and numerical examples, involving a nonlinear integral equation of Chandrasekhar-type, and a differential equation are also provided in this study.
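Semilocal results of this kind guarantee convergence from data at the starting point alone. The classical Kantorovich-type test (one common scalar form, shown here for orientation; it is not the recurrent-function technique of the paper, and the function name is hypothetical) checks h = L*beta*eta <= 1/2, where beta bounds the inverse derivative, eta the first Newton step, and L is a Lipschitz constant for f':

```python
def newton_kantorovich(f, fp, L, x0, iters=10):
    """Scalar Newton iteration plus a classical Kantorovich-type test:
    with beta = 1/|f'(x0)| and eta = beta*|f(x0)|, convergence is
    guaranteed when h = L*beta*eta <= 1/2. Returns (h, root estimate)."""
    beta = 1.0 / abs(fp(x0))
    eta = beta * abs(f(x0))
    h = L * beta * eta
    x = x0
    for _ in range(iters):
        x -= f(x) / fp(x)
    return h, x
```

Weaker sufficient conditions, like those the paper derives, enlarge the set of starting points for which such a certificate can be issued.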

11.
To save Jacobian evaluations and achieve a faster convergence rate, Yang [A higher-order Levenberg–Marquardt method for nonlinear equations, Appl. Math. Comput. 219(22) (2013), pp. 10682–10694, doi:10.1016/j.amc.2013.04.033] proposed a higher-order Levenberg–Marquardt (LM) method that computes the LM step and two additional approximate LM steps for nonlinear equations. Under the local error bound condition, global and local convergence of this method was proved using a trust region technique. However, the two approximate LM steps are not necessarily descent directions, so standard line search techniques cannot be applied directly to obtain the convergence properties of this higher-order LM method. Hence, in this paper, we employ the nonmonotone second-order Armijo line search proposed by Zhou [On the convergence of the modified Levenberg–Marquardt method with a nonmonotone second order Armijo type line search, J. Comput. Appl. Math. 239 (2013), pp. 152–161] to guarantee the global convergence of the higher-order LM method. Moreover, the local convergence is preserved under the local error bound condition. Numerical results show that the new method is efficient.
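The underlying LM step solves a regularized normal-equations system; the sketch below shows the basic iteration for a two-dimensional system with the common parameter choice mu = ||F||^2 used under the local error bound condition (a basic LM sketch, not the cited higher-order variant; the function name is hypothetical):

```python
def levenberg_marquardt(F, J, x, iters=20):
    """Basic LM iteration for a 2-D system F(x) = 0: at each step solve
    (J'J + mu*I) d = -J'F with mu = ||F||^2, using a closed-form 2x2
    inverse, then set x <- x + d."""
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        mu = sum(v * v for v in Fx)
        # normal-equations matrix M = J'J + mu*I and right-hand side g = -J'F
        M = [[sum(Jx[k][i] * Jx[k][j] for k in range(2)) + (mu if i == j else 0.0)
              for j in range(2)] for i in range(2)]
        g = [-sum(Jx[k][i] * Fx[k] for k in range(2)) for i in range(2)]
        det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
        d = [(M[1][1] * g[0] - M[0][1] * g[1]) / det,
             (M[0][0] * g[1] - M[1][0] * g[0]) / det]
        x = [x[0] + d[0], x[1] + d[1]]
    return x
```

The higher-order variant reuses the Jacobian Jx for additional approximate steps, which is where the savings in Jacobian evaluations come from.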

12.
International Journal of Computer Mathematics, 2012, 89(16): 3468–3482.
In this paper, a spline collocation method is applied to solve a system of fourth-order boundary-value problems associated with obstacle, unilateral and contact problems. The presented method depends on four collocation points to be satisfied by four parameters θ_j ∈ (0, 1], j = 1(1)4, in each subinterval. It turns out that the proposed method, when applied to the concerned system, is a fourth-order convergent method and gives numerical results which are better than those produced by other spline methods [E.A. Al-Said and M.A. Noor, Finite difference method for solving fourth-order obstacle problems, Int. J. Comput. Math. 81(6) (2004), pp. 741–748; F. Geng and Y. Lin, Numerical solution of a system of fourth order boundary value problems using variational iteration method, Appl. Math. Comput. 200 (2008), pp. 231–241; J. Rashidinia, R. Mohammadi, R. Jalilian, and M. Ghasemi, Convergence of cubic-spline approach to the solution of a system of boundary-value problems, Appl. Math. Comput. 192 (2007), pp. 319–331; S.S. Siddiqi and G. Akram, Solution of the system of fourth order boundary value problems using non polynomial spline technique, Appl. Math. Comput. 185 (2007), pp. 128–135; S.S. Siddiqi and G. Akram, Numerical solution of a system of fourth order boundary value problems using cubic non-polynomial spline method, Appl. Math. Comput. 190(1) (2007), pp. 652–661; S.S. Siddiqi and G. Akram, Solution of the system of fourth order boundary value problems using cubic spline, Appl. Math. Comput. 187(2) (2007), pp. 1219–1227; Siraj-ul-Islam, I.A. Tirmizi, F. Haq, and S.K. Taseer, Family of numerical methods based on non-polynomial splines for solution of contact problems, Commun. Nonlinear Sci. Numer. Simul. 13 (2008), pp. 1448–1460]. Moreover, an absolute stability analysis shows that the method is A-stable. Two numerical examples (one for each case of boundary conditions) are given to illustrate the practical usefulness of the method developed.

13.
14.
International Journal of Computer Mathematics, 2012, 89(6): 1351–1369.
We use more precise majorizing sequences than in earlier studies such as [J. Appell, E. De Pascale, J.V. Lysenko, and P.P. Zabrejko, New results on Newton–Kantorovich approximations with applications to nonlinear integral equations, Numer. Funct. Anal. Optim. 18 (1997), pp. 1–17; I.K. Argyros, Concerning the ‘terra incognita’ between convergence regions of two Newton methods, Nonlinear Anal. 62 (2005), pp. 179–194; F. Cianciaruso, A further journey in the ‘terra incognita’ of the Newton–Kantorovich method, Nonlinear Funct. Anal. Appl. 15 (2010), pp. 173–183; F. Cianciaruso and E. De Pascale, Newton–Kantorovich approximations when the derivative is Hölderian: Old and new results, Numer. Funct. Anal. Optim. 24 (2003), pp. 713–723; F. Cianciaruso, E. De Pascale, and P.P. Zabrejko, Some remarks on the Newton–Kantorovich approximations, Atti Sem. Mat. Fis. Univ. Modena 48 (2000), pp. 207–215; E. De Pascale and P.P. Zabrejko, Convergence of the Newton–Kantorovich method under Vertgeim conditions: A new improvement, Z. Anal. Anwendungen 17 (1998), pp. 271–280; J.A. Ezquerro and M.A. Hernández, On the R-order of convergence of Newton's method under mild differentiability conditions, J. Comput. Appl. Math. 197 (2006), pp. 53–61; J.V. Lysenko, Conditions for the convergence of the Newton–Kantorovich method for nonlinear equations with Hölder linearizations (in Russian), Dokl. Akad. Nauk BSSR 38 (1994), pp. 20–24; P.D. Proinov, New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems, J. Complexity 26 (2010), pp. 3–42; J. Rokne, Newton's method under mild differentiability conditions with error analysis, Numer. Math. 18 (1971/72), pp. 401–412; B.A. Vertgeim, On conditions for the applicability of Newton's method (in Russian), Dokl. Akad. Nauk SSSR 110 (1956), pp. 719–722; B.A. Vertgeim, On some methods for the approximate solution of nonlinear functional equations in Banach spaces, Uspekhi Mat. Nauk 12 (1957), pp. 166–169 (in Russian); English transl.: Amer. Math. Soc. Transl. 16 (1960), pp. 378–382; P.P. Zabrejko and D.F. Nguen, The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates, Numer. Funct. Anal. Optim. 9 (1987), pp. 671–684; A.I. Zinčenko, Some approximate methods of solving equations with non-differentiable operators (Ukrainian), Dopovidi Akad. Nauk Ukraïn. RSR (1963), pp. 156–161] to provide a semilocal convergence analysis for Newton's method under Hölder differentiability conditions. Our sufficient convergence conditions are also weaker even in the Lipschitz differentiability case. Moreover, the results are obtained under the same or less computational cost. Numerical examples are provided where earlier conditions do not hold but for which the new conditions are satisfied.

15.
This paper describes an accelerated HPE-type method based on general Bregman distances for solving convex–concave saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Solodov and Svaiter [An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions, Math. Oper. Res. 25(2) (2000), pp. 214–230] where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm presented in He and Monteiro [An accelerated HPE-type algorithm for a class of composite convex–concave saddle-point problems, SIAM J. Optim. 26 (2016), pp. 29–56] in two ways, namely: (a) it deals with general monotone SP problems instead of bilinear structured SPs and (b) it is based on general Bregman distances instead of the Euclidean one. Similar to the algorithm of He and Monteiro, it has the advantage that it works for any constant choice of proximal stepsize. Moreover, a suitable choice of the stepsize yields a method with the best known iteration-complexity for solving monotone SP problems. Computational results show that the new method is superior to Nesterov's [Smooth minimization of non-smooth functions, Math. Program. 103(1) (2005), pp. 127–152] smoothing scheme.
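The ancestor of all HPE-type schemes is the extragradient iteration: predict with the current gradient, then correct with the gradient at the predicted point. On the bilinear toy saddle point min_x max_y x*y (a sketch of the classical method, not the cited Bregman HPE framework; the function name is hypothetical):

```python
def extragradient(x, y, tau=0.5, iters=200):
    """Extragradient iteration for min_x max_y x*y, whose unique saddle
    point is (0, 0): plain gradient descent-ascent spirals outward here,
    while the predict-then-correct scheme converges for tau < 1."""
    for _ in range(iters):
        xb, yb = x - tau * y, y + tau * x    # prediction (extrapolation)
        x, y = x - tau * yb, y + tau * xb    # correction
    return x, y
```

HPE frameworks generalize the prediction step to an inexact proximal sub-inclusion, and the Bregman variant replaces Euclidean distances by a Bregman divergence adapted to the feasible set.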

16.
International Journal of Computer Mathematics, 2012, 89(17): 3762–3779.
In order to solve the large sparse systems of linear equations arising from numerical solutions of two-dimensional steady incompressible viscous flow problems in primitive variable formulation, Ran and Yuan [On modified block SSOR iteration methods for linear systems from steady incompressible viscous flow problems, Appl. Math. Comput. 217 (2010), pp. 3050–3068] presented the block symmetric successive over-relaxation (BSSOR) and the modified BSSOR iteration methods based on the special structures of the coefficient matrices. In this study, we present the modified alternating direction-implicit (MADI) iteration method for solving the linear systems. Under suitable conditions, we establish convergence theorems for the MADI iteration method. In addition, the optimal parameter involved in the MADI iteration method is estimated in detail. Numerical experiments show that the MADI iteration method is a feasible and effective iterative solver.
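For readers unfamiliar with relaxation-type solvers, the plain (point) SOR iteration conveys the basic mechanism that BSSOR and MADI refine for the block structure of flow problems (a generic sketch, not the methods of the paper; the function name and the relaxation factor are illustrative):

```python
def sor(A, b, omega=1.2, iters=200):
    """Point SOR for Ax = b: sweep the unknowns, relaxing each exact
    coordinate solve by the factor omega; for symmetric positive definite
    A, any 0 < omega < 2 gives convergence."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x
```

Block and alternating-direction variants apply the same relaxation idea to whole sub-blocks or directions at a time, which is what exploits the structure of the discretized flow equations.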

17.
International Journal of Computer Mathematics, 2012, 89(8): 1662–1672.
Motivated by the Chinese Postman Problem, Boesch, Suffel, and Tindell [The spanning subgraphs of Eulerian graphs, J. Graph Theory 1 (1977), pp. 79–84] proposed the supereulerian graph problem, which seeks a characterization of graphs with a spanning Eulerian subgraph. Pulleyblank [A note on graphs spanned by Eulerian graphs, J. Graph Theory 3 (1979), pp. 309–310] showed that the supereulerian problem, even within planar graphs, is NP-complete. In this paper, we settle an open problem raised by An and Xiong on the characterization of supereulerian graphs with small matching numbers. A well-known theorem of Chvátal and Erdős [A note on Hamilton circuits, Discrete Math. 2 (1972), pp. 111–135] states that if G satisfies α(G) ≤ κ(G), then G is Hamiltonian. Flandrin and Li showed in 1989 that every 3-connected claw-free graph G with α(G) ≤ 2κ(G) is Hamiltonian. Our characterization is also applied to show that every 2-connected claw-free graph G with α(G) ≤ 3 is Hamiltonian, with only one well-characterized exceptional class.
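Conditions of Chvátal–Erdős type compare the independence number α(G) to a connectivity parameter. For very small graphs α(G) can simply be computed by brute force (an exponential-time illustration only, nothing like the structural arguments of the paper; the function name is hypothetical):

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force alpha(G) for a graph on vertices 0..n-1 given as an
    edge list: return the size of the largest vertex set containing no
    edge. Exponential time; for illustration on tiny graphs only."""
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            if all(not (u in S and v in S) for u, v in edges):
                return r
    return 0
```

For the 5-cycle, α = 2 while κ = 2, so the Chvátal–Erdős condition α(G) ≤ κ(G) holds and C5 is indeed Hamiltonian.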

18.
A trust-funnel method is proposed for solving nonlinear optimization problems with general nonlinear constraints. It extends the method presented by Gould and Toint [Nonlinear programming without a penalty function or a filter, Math. Prog. 122(1) (2010), pp. 155–196], originally proposed for equality-constrained optimization problems only, to problems with both equality and inequality constraints and where simple bounds are also considered. Like the original method, ours uses neither a filter nor a penalty function and treats the objective function and the constraints as independently as possible. To handle the bounds, an active-set approach is employed. We then exploit techniques developed for derivative-free optimization (DFO) to obtain a method that can also be used to solve problems where the derivatives are unavailable or are available only at prohibitive cost. The resulting approach extends the DEFT-FUNNEL algorithm presented by Sampaio and Toint [A derivative-free trust-funnel method for equality-constrained nonlinear optimization, Comput. Optim. Appl. 61(1) (2015), pp. 25–49], which implements a derivative-free trust-funnel method for equality-constrained problems. Numerical experiments with the extended algorithm show that our approach compares favourably to other well-known model-based algorithms for DFO.

19.
The purpose of this study is to give a Taylor polynomial approximation for the solution of hyperbolic type partial differential equations with constant coefficients. The technique used is an improved Taylor matrix method, which has been given for solving ordinary differential, integral and integro-differential equations [M. Gülsu and M. Sezer, A method for the approximate solution of the high-order linear difference equations in terms of Taylor polynomials, Int. J. Comput. Math. 82(5) (2005), pp. 629–642; M. Gülsu and M. Sezer, On the solution of the Riccati equation by the Taylor matrix method, Appl. Math. Comput. 188 (2007), pp. 446–449; A. Karamete and M. Sezer, A Taylor collocation method for the solution of linear integro-differential equations, Int. J. Comput. Math. 79(9) (2002), pp. 987–1000; N. Kurt and M. Çevik, Polynomial solution of the single degree of freedom system by Taylor matrix method, Mech. Res. Commun. 35 (2008), pp. 530–536; N. Kurt and M. Sezer, Polynomial solution of high-order linear Fredholm integro-differential equations with constant coefficients, J. Franklin Inst. 345 (2008), pp. 839–850; Ş. Nas, S. Yalçınbaş, and M. Sezer, A method for approximate solution of the high-order linear Fredholm integro-differential equations, Int. J. Math. Edu. Sci. Technol. 27(6) (1996), pp. 821–834; M. Sezer, Taylor polynomial solution of Volterra integral equations, Int. J. Math. Edu. Sci. Technol. 25(5) (1994), pp. 625–633; M. Sezer, A method for approximate solution of the second order linear differential equations in terms of Taylor polynomials, Int. J. Math. Edu. Sci. Technol. 27(6) (1996), pp. 821–834; M. Sezer, M. Gülsu, and B. Tanay, A matrix method for solving high-order linear difference equations with mixed argument using hybrid Legendre and Taylor polynomials, J. Franklin Inst. 343 (2006), pp. 647–659; S. Yalçınbaş, Taylor polynomial solutions of nonlinear Volterra–Fredholm integral equation, Appl. Math. Comput. 127 (2002), pp. 196–206; S. Yalçınbaş and M. Sezer, The approximate solution of high-order linear Volterra–Fredholm integro-differential equations in terms of Taylor polynomials, Appl. Math. Comput. 112 (2000), pp. 291–308]. Some numerical examples, which consist of initial and boundary conditions, are given to illustrate the reliability and efficiency of the method. Also, the results obtained are compared with known results; the error analysis is performed and the accuracy of the solution is shown.

20.
We introduce a new ergodic algorithm for solving equilibrium problems over the fixed point set of a nonexpansive mapping. In contrast to the existing algorithm of Kim [The Bruck's ergodic iteration method for the Ky Fan inequality over the fixed point set, Int. J. Comput. Math. 94 (2017), pp. 2466–2480], our algorithm uses self-adaptive step sizes. As a result, the proposed algorithm converges under milder conditions. Moreover, at each step of our algorithm, instead of solving strongly convex subproblems, we only have to compute a subgradient of a convex function, so the algorithm has a lower computational cost.
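The appeal of a self-adaptive step size is that it is computed from quantities available at the current iterate. A classical example is Polyak's rule for subgradient steps, shown here on a one-variable problem where the optimal value f* = 0 is assumed known (an illustration of the general idea, not the paper's step-size rule; the function name is hypothetical):

```python
def polyak_subgradient(x0=0.0, iters=60):
    """Subgradient method for f(x) = (x - 2)^2 with Polyak's self-adaptive
    step t_k = (f(x_k) - f*) / ||g_k||^2, assuming f* = 0 is known; the
    step size adapts itself to the progress still to be made."""
    x = x0
    for _ in range(iters):
        g = 2.0 * (x - 2.0)              # gradient (a subgradient here)
        if g == 0.0:
            break
        t = (x - 2.0) ** 2 / (g * g)     # Polyak step with f* = 0
        x -= t * g
    return x
```

No Lipschitz constant or strong convexity modulus enters the step computation, which mirrors why self-adaptive rules permit convergence under milder conditions.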


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号