Similar documents
20 similar documents found (search time: 93 ms)
1.
A. Buzdin, Computing (1998) 61(3): 257–276
In this paper, we present the tangential block decomposition for block-tridiagonal matrices, which is in many respects similar to the frequency filtering method by Wittum [8] and to the tangential frequency filtering decomposition by Wagner [6]–[7]. In contrast to the methods of Wittum and Wagner, for the class of model problems our approach does not use any test vectors in its implementation. Like many iterative methods, it needs only bounds for the extremal eigenvalues. The theoretical properties of our scheme are similar to those of the ADI method. The practical convergence of the presented method is illustrated by numerical examples.
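For orientation (generic structure, not taken from the paper): the block-tridiagonal matrices targeted by such decompositions have the form

A = \begin{pmatrix} A_1 & B_1 & & \\ C_1 & A_2 & B_2 & \\ & \ddots & \ddots & \ddots \\ & & C_{N-1} & A_N \end{pmatrix},

with square diagonal blocks A_i and coupling blocks B_i, C_i; a tangential or frequency-filtering decomposition replaces the exact block factorization by an approximate one that is cheap to invert, and here the bounds on the extremal eigenvalues take over the role that test vectors play in the Wittum and Wagner methods.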

2.
A general method is presented for solving game problems of approach for dynamic Volterra-evolution systems. The method is based on the method of resolving functions [5] and techniques from the theory of multivalued mappings. Properties of resolving functions are studied in detail, and cases are singled out in which resolving functions can be derived in analytical form. The proposed scheme covers a wide range of functional-differential systems, in particular integral, integro-differential, and differential-difference systems of equations that describe the dynamics of a conflict-controlled process. Game problems for systems with fractional Riemann-Liouville derivatives and regularized Dzhrbashyan-Nersesyan derivatives are studied in more detail; we call them fractal games. An important role in the representation of solutions of such systems is played by the generalized Mittag-Leffler matrix functions introduced here. The use of asymptotic representations of these functions within the framework of the method allows us to establish sufficient conditions for the resolvability of game problems. A formal definition of parallel approach is given and illustrated by game problems for systems with fractional derivatives. Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 3–32, May–June, 2000.
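As background (standard definitions, not quoted from the paper): the Riemann-Liouville fractional derivative of order \alpha \in (n-1, n) is

D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_0^t \frac{f(s)}{(t-s)^{\alpha-n+1}}\, ds,

and the two-parameter Mittag-Leffler function, whose matrix analogue enters the solution representations mentioned above, is

E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)},   E_{\alpha,\beta}(A) = \sum_{k=0}^{\infty} \frac{A^k}{\Gamma(\alpha k + \beta)}

for a square matrix A; the generalized Mittag-Leffler matrix functions introduced in the paper may extend these definitions.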

3.
Conclusion. Transformation (6), which smooths the f(x) level lines, explains the effectiveness of r(α)-algorithms from visual geometrical considerations. It may be regarded as a satisfactory interpretation of space dilation in the direction of the difference of two successive subgradients. On the other hand, it preserves the gradient flavor of the method, in contrast to the classical ellipsoid method [11, 12], which is a successful interpretation of the subgradient method with space dilation in the direction of the subgradient. A sensible combination of ellipsoids of a special kind [5] with the ellipsoids ell(x_0, a, b, c) is quite capable of producing, on the basis of a one-dimensional space dilation operator, effective algorithms that solve a broader class of problems than convex programming problems, e.g., the problem of finding saddle points of convex-concave functions, particular cases of the problem of solving variational inequalities, and special classes of linear and nonlinear complementarity problems. Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 113–134, January–February, 1996.
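For context (a standard construction, not quoted from the paper): Shor's one-dimensional space dilation operator in the direction of a unit vector \xi with coefficient \alpha > 0 is

R_{\alpha}(\xi) = I_n + (\alpha - 1)\, \xi \xi^{\top},   \|\xi\| = 1,

which stretches space by the factor \alpha along \xi and leaves the orthogonal complement unchanged; the r(\alpha)-algorithms discussed above apply it in the direction of the difference of two successive subgradients, whereas the ellipsoid method applies it in the direction of the subgradient itself.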

4.
Conclusion. We have proposed a modification of the orthogonal Faddeev method [6] for solving various SLAEs and for the inversion and pseudoinversion of matrices. The proposed version of the method relies on the Householder and Jordan-Gauss methods, and its computational complexity is approximately half that of [6]. This method, combined with the matrix-graph method [9] of formalized SPPC structure design, has been applied to synthesize a number of AP architectures that efficiently implement the proposed method. Goal-directed isomorphic and homeomorphic transformations of the LFG of the original algorithm (5) lead to a one-dimensional (linear) AP of fixed size, with minimum hardware and time costs and with minimized input-output channel width. The proposed algorithm (5) has been implemented on a 4-processor AP with Motorola DSP96002 processors as PEs (Fig. 7). Applying algorithm (5) to solve an SLAE with a coefficient matrix A with M = N = 100 and one right-hand side on this AP produced a load factor η = 0.82; for inversion of a matrix A of the same size we achieved η = 0.77. The sequence of transformations and the partitioning of a trapezoidal planar LFG described in this article have been generalized to other LA algorithms described by triangular planar LFGs and executed on linear APs. It is shown that the AP structures synthesized in this study execute all the above-listed algorithms no less efficiently than the modified Faddeev algorithm, provided their PEs are initially tuned to the execution of the corresponding operators. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 47–66, March–April, 1996.
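For reference (a standard formula, not specific to this paper): the Householder reflection used in such orthogonal elimination steps is

H = I - 2\, \frac{v v^{\top}}{v^{\top} v},

an orthogonal, symmetric matrix chosen so that Hx is a multiple of a unit coordinate vector for a given column x; the abstract attributes the roughly halved operation count to combining such Householder steps with Jordan-Gauss elimination.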

5.
XCS [1, 2] represents a new form of learning classifier system [3] that uses accuracy as a means of guiding fitness for selection within a Genetic Algorithm. The combination of accuracy-based selection and a dynamic niche-based deletion mechanism achieves a long sought-after goal: the reliable production, maintenance, and proliferation of the sub-population of optimally general, accurate classifiers that map the problem domain [4]. Wilson [2] and Lanzi [5, 6] have demonstrated the applicability of XCS to the identification of the optimal action-chain leading to the optimum trade-off between reward distance and magnitude. However, Lanzi [6] also demonstrated that XCS has difficulty in finding an optimal solution to the long action-chain environment Woods-14 [7]. Whilst these findings have shed some light on the ability of XCS to form long action-chains, they have not provided a systematic and, above all, controlled investigation of the limits of XCS learning within multiple-step environments. In this investigation a set of confounding variables in such problems is identified. These are controlled using carefully constructed FSW environments [8, 9] of increasing length. Whilst the investigations demonstrate that XCS is able to establish the optimal sub-population [O] [4] when generalisation is not used, it is shown that the introduction of generalisation imposes low bounds on the length of action-chains that can be identified and chosen between to find the optimal pathway. Where these bounds are reached, a form of over-generalisation caused by the formation of dominant classifiers can occur. This form is investigated further, and the Domination Hypothesis is introduced to explain its formation and preservation.

6.
Conclusion. The proposed approach leads to flexible decision support algorithms and procedures that easily adapt to changing requirements. The application of the proposed principles is illustrated in [12] with the aim of allowing for the specific features of the problem and accelerating the convergence of distributed decision support systems. The application of these principles to the construction of various control procedures and decision support schemes is demonstrated in [13–19]. At the present time, in connection with the active transition to the market and operation in a rapidly changing reality, we can expect an increase in demand for algorithms, procedures, and schemes that divide the domains of competence, sharply delineate the domains of responsibility, and clearly separate the fields of action of the “center” and the “periphery” [11]. The need for such procedures will also be felt in financial management support [26, 27] and in macro- and microeconomic modeling and forecasting [20–26]. This is because in our rapidly changing world we are often unable to identify several separate criteria for optimization; we are often forced to look for a decision that is admissible by a whole range of formal and informal criteria and is stable under small perturbations of both the criteria and the external conditions [28–30]. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 161–175, March–April, 1996.

7.
The paper is concerned with a game-type optimization problem on permutations, where one or both players have combinatorial constraints on their strategies. A mathematical model of such problems is constructed and analyzed. A modified graphical method is proposed to solve (2×n)- and (m×2)-dimensional problems. High-dimensional problems are reduced to linear programming and combinatorial optimization problems. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 103–114, November–December 2007.
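As a reminder of the standard reduction that the high-dimensional case relies on (the combinatorial constraints of the paper are an additional layer on top of it): a matrix game with payoff matrix A = (a_{ij}) is solved for the first player's mixed strategy x by the linear program

maximize v   subject to   \sum_i a_{ij} x_i \ge v for all j,   \sum_i x_i = 1,   x_i \ge 0,

while the (2×n)- and (m×2)-dimensional cases admit a graphical solution, which is the setting of the modified graphical method mentioned above.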

8.
A stochastic successive approximation method is analyzed with a view to solving risk assessment problems that reduce to a renewal integral equation and, in particular, to assessing the insolvency risk of an insurance company. Integrals in the equation are evaluated approximately, for example by the Monte Carlo method. Iterations of the method are proved to converge uniformly with probability one. Theoretical results are illustrated by numerical computations. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 116–130, November–December 2008.
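For orientation (a generic renewal-type equation, not necessarily the exact equation of the paper): one typically deals with

Z(t) = z(t) + \int_0^t Z(t - s)\, dF(s),

solved by successive approximations Z_{k+1}(t) = z(t) + \int_0^t Z_k(t - s)\, dF(s), with each integral estimated approximately, e.g. by Monte Carlo sampling from F; the uniform convergence with probability one stated above refers to such a stochastic iteration.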

9.
10.
In this paper Beth–Smullyan's tableaux method is extended to fuzzy propositional logic. The fuzzy tableaux method is based on the concepts of t-truth and extended graded formula. As in classical logic, it is a refutation procedure: a closed fuzzy tableau beginning with the extended graded formula [r, A], asserting that it is not t-true, is a tableau proof of the graded formula (A, r). Theorems of soundness, completeness, and decidability are proved.

11.
We exploit the gap in ability between human and machine vision systems to craft a family of automatic challenges that tell human and machine users apart via graphical interfaces, including Internet browsers. Turing proposed [Tur50] a method whereby human judges might validate “artificial intelligence” by failing to distinguish between human and machine interlocutors. Stimulated by the “chat room problem” posed by Udi Manber of Yahoo!, and influenced by the CAPTCHA project [BAL00] of Manuel Blum et al. of Carnegie-Mellon Univ., we propose a variant of the Turing test using pessimal print: that is, low-quality images of machine-printed text synthesized pseudo-randomly over certain ranges of words, typefaces, and image degradations. We show experimentally that judicious choice of these ranges can ensure that the images are legible to human readers but illegible to several of the best present-day optical character recognition (OCR) machines. Our approach is motivated by a decade of research on performance evaluation of OCR machines [RJN96, RNN99] and on quantitative stochastic models of document image quality [Bai92, Kan96]. The slow pace of evolution of OCR and other species of machine vision over many decades [NS96, Pav00] suggests that pessimal print will defy automated attack for many years. Applications include 'bot' barriers and database rationing. Received: February 14, 2002 / Accepted: March 28, 2002. An expanded version of: A.L. Coates, H.S. Baird, R.J. Fateman (2001) Pessimal Print: a reverse Turing Test. In: Proc. 6th Int. Conf. on Document Analysis and Recognition, Seattle, Wash., USA, September 10–13, pp. 1154–1158. Correspondence to: H. S. Baird

12.
In this paper, the first results of work on the development and implementation of an adaptive control system for car parking are presented. The proposed approach to constructing a car control system differs from the commonly accepted approach [1–3] and is based on the autonomous adaptive control method [4] developed by the research team headed by A.A. Zhdanov. In the framework of this method, car control and the training and retraining of the control system are implemented in one process while satisfying all necessary conditions and constraints. This allows the control system to adjust to the parameters of a particular car and to readjust automatically if these parameters vary during the car's use. A program modeling car parking based on the principles developed in this study is described, and examples of parking a car are presented for particular cases of the mutual location of obstacles and the desired slot in the parking place.

13.
A stability criterion for a vector integer linear problem of lexicographic optimization is obtained. A regularization method is proposed that allows a possibly unstable output problem to be reduced to a sequence of perturbed, stable, equivalent problems. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 125–130, November–December, 1999.

14.
The temporal stability and effective order of two different direct high-order Stokes solvers are examined. Both solvers start from the primitive-variables formulation of the Stokes problem but differ in the numerical uncoupling they apply to the Stokes operator. One of the solvers introduces an intermediate divergence-free velocity in order to perform a temporal splitting (J. Comput. Phys. [1991] 97, 414–443), while the other treats the whole Stokes problem through the evaluation of a divergence-free acceleration field (C.R. Acad. Sci. Paris [1994] 319 Serie I, 1455–1461; SIAM J. Sci. Comput. [2000] 22(4), 1386–1410). The second uncoupling is known to be consistent with the harmonicity of the pressure field (SIAM J. Sci. Comput. [2000] 22(4), 1386–1410). Both solvers proceed in two steps: a pressure evaluation based on a Neumann condition extrapolated in time (of theoretical order J_e), and a time-implicit diffusion step (of theoretical order J_i) for the final velocity. These solvers are implemented with a Chebyshev mono-domain collocation scheme and a Legendre spectral-element collocation scheme. The numerical stability of these four options is investigated for the sixteen combinations of (J_e, J_i), 1 ≤ J_e, J_i ≤ 4.
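As an illustration of the extrapolated Neumann condition (a generic example, not the paper's exact formulas): a second-order extrapolation (J_e = 2) of the pressure boundary data uses

p^{*, n+1} = 2 p^{n} - p^{n-1},

while the diffusion step for the velocity is treated implicitly with a scheme of order J_i (for example, a backward-differentiation formula); the stability study above covers all sixteen pairings 1 ≤ J_e, J_i ≤ 4.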

15.
Ill-posed inverse problems of recovering nonlinear dependencies in observational data are considered. An inductive method is developed for the construction of a Bayesian model of support vector regression in Bernstein form. A new Bayesian evidence criterion is used to compare the adequacy of models. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 179–188, July–August 2007.

16.
V. Scholtyssek, Calcolo (1995) 32(1-2): 17-38
The inverse eigenvalue problem for symmetric matrices (IEP) can be formulated as a system of two matrix equations. For solving the system, a variation of Newton's method is used which was proposed by Fusco and Zecca [Calcolo XXIII (1986), pp. 285–303] for the simultaneous computation of eigenvalues and eigenvectors of a given symmetric matrix. An iteration step of this method consists of a Newton step followed by an orthonormalization, with the consequence that each iterate satisfies one of the given equations. The method is proved to converge locally quadratically to regular solutions. The algorithm and some numerical examples are presented. In addition, it is shown that the so-called Method III proposed by Friedland, Nocedal, and Overton [SIAM J. Numer. Anal., 24 (1987), pp. 634–667] for solving the IEP can be constructed similarly to the method presented here.
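For context (one common way of stating the problem, which may differ in detail from the formulation used in the paper): the additive inverse eigenvalue problem seeks parameters c = (c_1, ..., c_n) such that A(c) = A_0 + \sum_k c_k A_k has prescribed eigenvalues \lambda_1^*, ..., \lambda_n^*; written as a system of matrix equations this reads

A(c)\, Q = Q\, \Lambda^*,   Q^{\top} Q = I,   \Lambda^* = diag(\lambda_1^*, ..., \lambda_n^*),

which matches the iteration described above: a Newton step on (c, Q) followed by orthonormalization of Q keeps the second equation satisfied exactly at every iterate.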

17.
The choice table provides one of the techniques for the representation of functions in continuous-valued logic [1]. The need to synthesize functions from choice tables arises in the design of hybrid [2] and analog [3] computers, and also in other applications of continuous-valued logic surveyed in [4]. The structure of the original table is determined by the external specification of the device or unit being designed. Algorithms are available for the synthesis of continuous-valued logic functions from choice tables of a special form, for instance from ordered choice tables [ ]. It is noted in [ ] that a general algorithm to synthesize a continuous-valued logic function from an arbitrary choice table is still unknown. In the present article, we derive a criterion that decides whether a given choice table defines some continuous-valued logic function and construct a simple algorithm to synthesize the function from the table. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 42–49, March–April, 1998.

18.
In [J. Comput. Phys. 193:115–135, 2004] and [Comput. Fluids 34:642–663, 2005], Qiu and Shu developed a class of high-order weighted essentially non-oscillatory (WENO) schemes based on Hermite polynomials, termed HWENO (Hermite WENO) schemes, for solving nonlinear hyperbolic conservation law systems, and applied them as limiters for Runge-Kutta discontinuous Galerkin (RKDG) methods on structured meshes. In this continuation paper, we extend the method to solve two-dimensional problems on unstructured meshes. The emphasis is again on the application of the HWENO finite volume methodology as a limiter for RKDG methods, so as to maintain the compactness of RKDG methods. Numerical experiments for the two-dimensional Burgers' equation and the Euler equations of compressible gas dynamics are presented to show the effectiveness of these methods. The research was partially supported by the European project ADIGMA on the development of innovative solution algorithms for aerodynamic simulations, NSFC grant 10671091, and JSNSF grant BK2006511.
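For completeness (standard test problems, quoted here only as definitions): the two-dimensional Burgers' equation used in such experiments is

u_t + (u^2/2)_x + (u^2/2)_y = 0,

and the Euler equations form the conservation-law system for the density, momentum, and total energy of a compressible gas; both serve as tests of the effectiveness of the limited RKDG scheme.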

19.
A popular method for the discretization of conservation laws is the finite volume (FV) method, used extensively in CFD and based on a piecewise constant approximation of the solution sought. However, the FV method has problems with the approximation of diffusion terms. Therefore, in several works [17–19, 1, 12, 16, 2], a combination of the FV and FE methods is used. To this end, it is necessary to construct various combinations of simplicial FE meshes with suitable associated FV grids. This is rather complicated from the point of view of mesh refinement, particularly in 3D problems [20, 21], and it is desirable to use only one mesh. The combination of FV and FE discretizations on the same triangular grid is proposed in [39]. Another possibility is to use the DG method (see [7] or [9] and the references there for a general survey). Here we shall use a compromise between the DG FE method and the FV method, using piecewise linear discontinuous finite elements over the grid T_h and a piecewise constant approximation of the convective terms on the same grid. Dedicated to Professor Ivo Babuška on the occasion of his 75th birthday. Received: May 2001 / Accepted: September 2001
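As a point of reference (the generic finite volume form, not the specific scheme of the paper): an FV discretization of a conservation law u_t + div f(u) = 0 evolves cell averages by

\frac{d}{dt} \bar{u}_K + \frac{1}{|K|} \sum_{e \subset \partial K} \int_e H(u_K, u_{K_e}, n_e)\, dS = 0,

where H is a numerical flux on the face e between cell K and its neighbour K_e; the piecewise constant representation behind this formula is the source of the difficulty with diffusion terms mentioned above and motivates the FV/FE and DG combinations discussed in the text.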

20.
In [Turek (1996), Int. J. Numer. Meth. Fluids 22, 987–1011], we performed numerical comparisons of different time-stepping schemes for the incompressible Navier–Stokes equations. In this paper, we present the numerical analysis, in the context of the Navier–Stokes equations, of a modified time-stepping θ-scheme which has recently been proposed by Glowinski [Glowinski (2003). In: Ciarlet, P. G., and Lions, J. L. (eds.), Handbook of Numerical Analysis, Vol. IX, North-Holland, Amsterdam, pp. 3–1176]. Like the well-known classical Fractional-Step-θ-scheme, which was introduced by Glowinski [Glowinski (1985). In: Murman, E. M. and Abarbanel, S. S. (eds.), Progress and Supercomputing in Computational Fluid Dynamics, Birkhäuser, Boston, MA; Bristeau et al. (1987). Comput. Phys. Rep. 6, 73–187] and which is still one of the most popular time-stepping schemes, with or without operator-splitting techniques, this new scheme consists of three substeps with nonequidistant substepping to build one macro time step. However, in contrast to the Fractional-Step-θ-scheme, the second substep can be formulated as an extrapolation step using previously computed data only, and the two remaining substeps look like Backward Euler steps, so that no expensive operator evaluations for the right-hand-side vector with older solutions, as for instance in the Crank–Nicolson scheme, have to be performed. This modified scheme is implicit, strongly A-stable, and second-order accurate, which promises advantageous behavior, particularly in implicit CFD simulations of the nonstationary Navier–Stokes equations. Representative numerical results, based on the software package FEATFLOW [Turek (2000). FEATFLOW Finite element software for the incompressible Navier–Stokes equations: User Manual, Release 1.2, University of Dortmund], are obtained for typical flow problems of benchmark character which provide a fair rating of the solution schemes, particularly in long-time simulations. Dedicated to David Gottlieb on the occasion of his 60th anniversary.
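For comparison (the classical scheme's substepping as it is commonly stated; the modified scheme of the paper differs as described above): the Fractional-Step-θ-scheme splits a macro time step of size k into three substeps of lengths

\theta k,   (1 - 2\theta) k,   \theta k,   with   \theta = 1 - \frac{\sqrt{2}}{2} \approx 0.293,

a choice that makes the overall scheme second-order accurate and strongly A-stable; the modified θ-scheme keeps the three-substep structure but turns the middle substep into a pure extrapolation from previously computed data.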
