20 similar documents found.
1.
Electron Repulsion Integrals (ERIs) are a common bottleneck in ab initio computational chemistry. It is known that sorted/reordered execution of ERIs results in efficient SIMD/vector processing. This paper shows that reconfigurable computing and heterogeneous processor architectures can also benefit from a deliberate ordering of ERI tasks. However, realizing these benefits as net speedup requires a very rapid sorting mechanism. This paper presents two such mechanisms. Included in this study are analytical, simulation-based, and experimental benchmarking approaches to consider five use cases for ERI sorting, i.e. SIMD processing, reconfigurable computing, limited address spaces, instruction cache exploitation, and data cache exploitation. Specific consideration is given to existing cache-based processors, FPGAs, and the Cell Broadband Engine processor. It is proposed that the analyses conducted in this work should be built upon to aid the development of software autotuners which will produce efficient ab initio computational chemistry codes for a variety of computer architectures.
2.
JunKyu Lee, Yu Bi, Gregory D. Peterson, Robert J. Hinde, Robert J. Harrison, Computer Physics Communications 180 (12) (2009) 2574–2581
The Scalable Parallel Random Number Generators library (SPRNG) supports fast and scalable random number generation with good statistical properties for parallel computational science applications. To accelerate SPRNG on high performance reconfigurable computing systems, we present the Hardware Accelerated SPRNG library (HASPRNG). Ported to the Xilinx University Program (XUP) and Cray XD1 reconfigurable computing platforms, HASPRNG includes the reconfigurable logic for Field Programmable Gate Arrays (FPGAs) along with a programming interface that performs integer random number generation producing results identical to SPRNG. This paper describes the reconfigurable logic of HASPRNG, which exploits the mathematical properties and data parallelism residing in the SPRNG algorithms to achieve high performance, and describes how to use the programming interface to minimize the communication overhead between FPGAs and microprocessors. The programming interface allows a user to use HASPRNG in the same way as SPRNG 2.0 on platforms such as the Cray XD1. We also describe how to install and use HASPRNG. As a sample High Performance Reconfigurable Computer (HPRC) application, we discuss an FPGA π-estimator and compare it to a software π-estimator. HASPRNG shows a 1.7× speedup over SPRNG on the Cray XD1 and obtains substantial speedup for an HPRC application.
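As a point of reference, a software Monte Carlo π-estimator of the kind used as the sample HPRC application can be sketched in a few lines of Python; this is an illustrative stand-in only (the SPRNG/HASPRNG interfaces are not reproduced, and the per-stream seeding here does not give the independence guarantees that SPRNG provides):

import numpy as np

def estimate_pi(n_samples, stream_id=0):
    # One seeded generator per stream, mimicking the independent parallel
    # streams that SPRNG/HASPRNG supply (hypothetical seeding scheme).
    rng = np.random.default_rng(1234 + stream_id)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = np.count_nonzero(x * x + y * y <= 1.0)  # hits inside the quarter circle
    return 4.0 * inside / n_samples

# Average the estimates from several independent streams.
print(np.mean([estimate_pi(10**6, s) for s in range(4)]))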
Program summary
Program title: HASPRNG
Catalogue identifier: AEER_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEER_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 594 928
No. of bytes in distributed program, including test data, etc.: 6 509 724
Distribution format: tar.gz
Programming language: VHDL (XUP and Cray XD1), C++ (XUP), C (Cray XD1)
Computer: PowerPC 405 (XUP) / AMD 2.2 GHz Opteron processor (Cray XD1)
Operating system: Linux
File size: 15 MB (XUP) / 22 MB (Cray XD1)
Classification: 4.13
Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations such as π-estimation can consume limitless random numbers as long as the hardware resources for the computation are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The library presented here accelerates the generators of independent streams of random numbers.
Solution method: Multiple copies of random number generators in FPGAs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. HASPRNG is a random number generator library that allows a computational science application to employ these multiple copies of random number generators to boost performance. Users can interface HASPRNG with software code executing on microprocessors and/or with hardware applications executing on FPGAs.

3.
Virtualizing access to scientific applications with the Application Hosting Environment
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.
Program summary
Program title: Application Hosting Environment 2.0
Catalogue identifier: AEEJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence, Version 2
No. of lines in distributed program, including test data, etc.: not applicable
No. of bytes in distributed program, including test data, etc.: 1 685 603 766
Distribution format: tar.gz
Programming language: Perl (server), Java (client)
Computer: x86
Operating system: Linux (server), Linux/Windows/MacOS (client)
RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
Classification: 6.5
External routines: VirtualBox (server), Java (client)
Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
Running time: Not applicable
References:
[1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
[2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406–418.
4.
Grid computing is distributed computing performed transparently across multiple administrative domains. Grid middleware, which is meant to enable access to grid resources, is currently widely seen as being too heavyweight and, in consequence, unwieldy for general scientific use. Its heavyweight nature, especially on the client-side, has severely restricted the uptake of grid technology by computational scientists. In this paper, we describe the Application Hosting Environment (AHE) which we have developed to address some of these problems. The AHE is a lightweight, easily deployable environment designed to allow the scientist to quickly and easily run legacy applications on distributed grid resources. It provides a higher level abstraction of a grid than is offered by existing grid middleware schemes such as the Globus Toolkit. As a result, the computational scientist does not need to know the details of any particular underlying grid middleware and is isolated from any changes to it on the distributed resources. The functionality provided by the AHE is ‘application-centric’: applications are exposed as web services with a well-defined standards-compliant interface. This allows the computational scientist to start and manage application instances on a grid in a transparent manner, thus greatly simplifying the user experience. We describe how a range of computational science codes have been hosted within the AHE and how the design of the AHE allows us to implement complex workflows for deployment on grid infrastructure.
5.
Given the resurgent attractiveness of single-instruction-multiple-data (SIMD) processing, it is important for high-performance computing applications to be SIMD-capable. The Hartree-Fock SCF (HF-SCF) application, in its canonical form, cannot fully exploit SIMD processing. Prior attempts to implement Electron Repulsion Integral (ERI) sorting functionality to essentially “SIMD-ify” the HF-SCF application have met with frustration because of the low throughput of the sorting functionality. With greater awareness of computer architecture, we discuss how the sorting functionality may be practically implemented to provide high performance. Overall system performance analysis, including memory locality analysis, is also conducted, and further emphasises that a system with ERI sorting is capable of very high throughput. We discuss two alternative implementation options, with one immediately accessible software-based option discussed in detail. The impact of workload characteristics on expected performance is also discussed, and it is found that, in general, as basis set size increases the potential performance of the system also increases. Consideration is given to conventional CPUs, GPUs, FPGAs, and the Cell Broadband Engine architecture.
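To make the sorting idea concrete, a minimal and purely illustrative Python sketch of the bucketing step is shown below; it is not the authors' implementation, and compute_eri_batch is a hypothetical routine standing in for a class-specific integral kernel:

from collections import defaultdict

def sort_eri_tasks(tasks):
    # Group ERI tasks by their integral-class key (e.g. the angular-momentum
    # pattern of the shell quartet), so each bucket follows one code path.
    buckets = defaultdict(list)
    for class_key, quartet in tasks:
        buckets[class_key].append(quartet)
    return buckets

def run_sorted(tasks, compute_eri_batch):
    # Dispatch each uniform bucket as a single SIMD/vector-friendly batch.
    return {key: compute_eri_batch(key, quartets)
            for key, quartets in sort_eri_tasks(tasks).items()}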
6.
7.
Marc Baboulin, Alfredo Buttari, Jack Dongarra, Jakub Kurzak, Julie Langou, Julien Langou, Piotr Luszczek, Stanimire Tomov, Computer Physics Communications 180 (12) (2009) 2526–2533
On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
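The mixed-precision idea can be illustrated with a short, self-contained sketch (not the ITER-REF code itself): factor the matrix once in single precision, then iteratively refine the solution using double-precision residuals.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    # The expensive O(n^3) factorization is done on a single-precision copy.
    lu, piv = lu_factor(A.astype(np.float32))
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                      # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Cheap correction step reusing the single-precision factors.
        x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    return x

Provided A is not too ill-conditioned, the refined x reaches double-precision accuracy while most of the arithmetic runs at single-precision speed.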
Program summary
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution of the system is obtained by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. To improve the computed solution, an iterative process can be applied which produces a correction at each iteration; this yields the method commonly known as iterative refinement. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes

8.
This paper describes an algorithm and a computer program which solve numerically (virtually exactly) the equations of the restricted open-shell Hartree-Fock and Hartree-Fock-Slater models for diatomic molecules.
9.
Masato Ida, Computer Physics Communications 143 (2) (2002) 142–154
The semi-Lagrangian method using the hybrid cubic-rational interpolation function [M. Ida, Comput. Fluid Dyn. J. 10 (2001) 159] is modified into a conservative method by incorporating the concept discussed in [R. Tanaka et al., Comput. Phys. Commun. 126 (2000) 232]. In the method due to Tanaka et al., not only a physical quantity but also its integrated value within a computational cell are used as dependent variables, and mass conservation is achieved by imposing a constraint on the fourth-order polynomial used as the interpolation function. In the present method, a hybrid cubic-rational function whose optimal mixing ratio was determined theoretically is employed for the interpolation, and its derivative is used to update the physical quantity. The numerical oscillations appearing in results obtained with the method of Tanaka et al. are sufficiently suppressed by the use of the hybrid function.
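For orientation only, the basic (non-conservative) semi-Lagrangian update with a standard cubic Lagrange interpolant looks like the sketch below; the paper's hybrid cubic-rational function and conservative constraint are not reproduced here.

import numpy as np

def semi_lagrangian_step(f, u, dx, dt):
    # One step of f_t + u f_x = 0 with constant u and periodic boundaries:
    # trace each grid point back to its departure point and interpolate f there.
    n = f.size
    xd = np.arange(n) * dx - u * dt          # departure points
    j = np.floor(xd / dx).astype(int)        # grid cell to the left of each departure point
    s = xd / dx - j                          # fractional position in [0, 1)
    fm1, f0, f1, f2 = (f[(j + k) % n] for k in (-1, 0, 1, 2))
    # Cubic Lagrange weights on the stencil {j-1, j, j+1, j+2}.
    w_m1 = -s * (s - 1) * (s - 2) / 6
    w_0 = (s + 1) * (s - 1) * (s - 2) / 2
    w_1 = -(s + 1) * s * (s - 2) / 2
    w_2 = (s + 1) * s * (s - 1) / 6
    return w_m1 * fm1 + w_0 * f0 + w_1 * f1 + w_2 * f2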
10.
A domain decomposition algorithm for molecular dynamics simulation of atomic and molecular systems with arbitrary shape and non-periodic boundary conditions is described. The molecular dynamics program uses the cell multipole method for efficient calculation of long range electrostatic interactions and a multiple time step method to facilitate bigger time steps. The system is enclosed in a cube, and the cube is divided into a hierarchy of cells. The deepest-level cells are assigned to processors such that each processor has contiguous cells, and static load balancing is achieved by redistributing the cells so that each processor has approximately the same number of atoms. The resulting domains have irregular shape and may have more than 26 neighbors. Atoms constituting bond angles and torsion angles may straddle more than two processors. An efficient strategy is devised for initial assignment and subsequent reassignment of such multiple-atom potentials to processors. At each step, computation is overlapped with communication, greatly reducing the effect of communication overhead on parallel performance. The algorithm is tested on a spherical cluster of water molecules, a hexasaccharide, and an enzyme, both solvated by a spherical cluster of water molecules. In each case a spherical boundary containing oxygen atoms with only repulsive interactions is used to prevent evaporation of water molecules. The algorithm shows excellent parallel efficiency even for a small number of cells/atoms per processor.
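As a rough illustration of the static load-balancing step (not the authors' code), deepest-level cells can be handed out in contiguous runs until each processor holds approximately the same number of atoms:

def assign_cells(cell_atom_counts, n_procs):
    # Greedy contiguous split: walk the ordered cells and advance to the next
    # rank once the running atom count reaches the per-rank target.
    target = sum(cell_atom_counts) / n_procs
    assignment, owner, acc = [], 0, 0.0
    for count in cell_atom_counts:
        assignment.append(owner)
        acc += count
        if acc >= target and owner < n_procs - 1:
            owner += 1
            acc -= target          # carry the overshoot into the next rank
    return assignment              # assignment[i] = rank owning cell i

# e.g. assign_cells([5, 3, 8, 2, 7, 6, 4], 3) -> [0, 0, 0, 1, 1, 2, 2]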
11.
A numerical program is presented which facilitates a computation pertaining to the full set of one-gluon loop diagrams (including ghost loop contributions), with M attached external gluon lines in all possible ways. The feasibility of such a task rests on a suitably defined master formula, which is expressed in terms of a set of Grassmann and a set of Feynman parameters. The program carries out the Grassmann integration and performs the Lorentz trace on the involved functions, expressing the result as a compact sum of parametric integrals. The computation is based on tracing the structure of the final result, thus avoiding all intermediate unnecessary calculations and directly writing the output. Similar terms entering the final result are grouped together. The running time of the program demonstrates its effectiveness, especially for large M.
Program summary
Program title: DILOG2
Program identifier: ADXN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXN_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Programming language: FORTRAN 90
Computer(s) for which the program has been designed: Personal Computer
Operating system(s) for which the program has been designed: Windows 98, XP, LINUX
Number of processors used: one
No. of lines in distributed program, including test data, etc.: 2000
No. of bytes in distributed program, including test data, etc.: 16 249
Distribution format: tar.gz
External routines/libraries used: none
CPC Program Library subprograms used: none
Nature of problem: The computation of one-gluon/ghost loop diagrams in QCD with many external gluon lines is a time-consuming task, practically beyond reasonable reach of analytic procedures. We apply recently proposed master formulas to the computation of such diagrams with an arbitrary number (M) of external gluon lines, achieving a final result which reduces the problem to one involving integrals over the standard set, for given M, of Feynman parameters.
Solution method: The structure of the master expressions is analyzed from a numerical computation point of view. Using the properties of Grassmann variables we identify all the different forms of terms that appear in the final result; each form is called a “structure”. We calculate theoretically the number of terms belonging to every “structure” and organize the whole procedure into separate calculations of the terms belonging to each “structure”. Terms which do not contribute to the final result are thereby avoided. The final result, extending to large values of M, is presented with terms belonging to the same “structure” grouped together.
Restrictions: M is coded as a 2-digit integer. Overflow in the dimension of the arrays used is expected to appear for M ≥ 20 on a processor that uses 4-byte integers, or for M ≥ 34 on a processor with 8-byte integers.
Running time: Depends on M; see the enclosed figures.

12.
V.M. Burke, C.J. Noble, V. Faro-Maza, A. Maniopoulou, N.S. Scott, Computer Physics Communications 180 (12) (2009) 2450–2451
To complete the 2DRMP package an asymptotic program, such as FARM, is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. However, in 2DRMP, R has already been constructed for each scattering energy during propagation. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP.
New version program summary
Program title: FARM_2DRMP
Catalogue identifier: ADAZ_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADAZ_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 13 806
No. of bytes in distributed program, including test data, etc.: 134 462
Distribution format: tar.gz
Programming language: Fortran 95 and MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorized or parallelized?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK
Does the new version supersede the previous version?: No
Nature of problem: The program solves the scattering problem in the asymptotic region of R-matrix theory where exchange is negligible.
Solution method: A radius is determined at which the wave function, calculated as a Gailitis expansion [4] with accelerated summing [5] over terms, converges. The R-matrix is propagated from the boundary of the internal region to this radius and the K-matrix is calculated. Collision strengths or cross sections may then be calculated.
Reasons for new version: To complete the 2DRMP package [6] an asymptotic program, such as FARM [7], is needed. The original version of FARM is designed to construct the physical R-matrix, R, from surface amplitudes contained in the H-file. However, in 2DRMP, R has already been constructed for each scattering energy during propagation, and each R is stored in one of the RmatT files described in Fig. 8 of [6]. Therefore, this modified version of FARM, known as FARM_2DRMP, has been developed solely for use with 2DRMP. Instructions on its use and the corresponding test data are provided with 2DRMP [6].
Summary of revisions: FARM_2DRMP contains two codes, farm.f and farm_par.f90. The former is a serial code while the latter is a parallel F95 code that employs an MPI harness to enable the nenergy energies to be computed simultaneously across ncore cores, with each core processing either ⌊nenergy/ncore⌋ or ⌈nenergy/ncore⌉ energies (a block-distribution sketch is given after the references below). The input files, input.d and H, and the output file farm.out are as described in [7]. Both codes read R directly from RmatT.
Restrictions: FARM_2DRMP is for use solely with 2DRMP and for a specified L, S and Π combination. The energy range specified in input.d must match that specified in energies.data.
Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is 9 secs.
References:
[1] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, visited 22 July, 2009.
[2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, visited 22 July, 2009.
[3] HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, visited 22 July, 2009.
[4] M. Gailitis, J. Phys. B 9 (1976) 843.
[5] C.J. Noble, R.K. Nesbet, Comput. Phys. Comm. 33 (1984) 399.
[6] N.S. Scott, M.P. Scott, P.G. Burke, T. Stitt, V. Faro-Maza, C. Denis, A. Maniopoulou, Comput. Phys. Comm. 180 (12) (2009) 2424–2449, this issue.
[7] V.M. Burke, C.J. Noble, Comput. Phys. Comm. 85 (1995) 471.
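To make the ⌊nenergy/ncore⌋ / ⌈nenergy/ncore⌉ split mentioned above concrete, a standard block distribution of energy indices over cores can be written as follows (an illustrative Python sketch, not the farm_par.f90 code):

def energy_block(rank, ncore, nenergy):
    # Half-open index range [lo, hi) of the energies handled by `rank`:
    # the first (nenergy % ncore) ranks get ceil(nenergy/ncore) energies,
    # the remaining ranks get floor(nenergy/ncore).
    base, extra = divmod(nenergy, ncore)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# e.g. 10 energies over 4 cores -> (0, 3), (3, 6), (6, 8), (8, 10)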
13.
14.
P. S. Kostenetskii, A. V. Lepikhov, L. V. Sokolinskii, Automation and Remote Control 68 (5) (2007) 847–859
A new approach to data layout and load balancing is proposed for multiprocessor hierarchical-architecture relational database systems. A database multiprocessor model is described that enables simulation and examination of arbitrary hierarchical multiprocessor configurations in the context of on-line transaction processing applications. An important subclass of symmetrical multiprocessor hierarchies is considered, and a new data layout strategy based on the method of partial mirroring is proposed for it. The disk space used to replicate the data is evaluated analytically. For symmetrical hierarchies possessing a certain regularity, theorems estimating the cost of replica formation are proved. An efficient load-balancing method based on the partial mirroring technique is proposed. The methods described are oriented toward clusters and Grid systems.
15.
N.S. Scott, M.P. Scott, P.G. Burke, T. Stitt, V. Faro-Maza, C. Denis, A. Maniopoulou, Computer Physics Communications 180 (12) (2009) 2424–2449
The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years a series of related R-matrix program packages have been published periodically in CPC. These packages are primarily concerned with low-energy scattering where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
Program summary
Program title: 2DRMP
Catalogue identifier: AEEA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 196 717
No. of bytes in distributed program, including test data, etc.: 3 819 727
Distribution format: tar.gz
Programming language: Fortran 95, MPI
Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
Has the code been vectorised or parallelised?: Yes. 16 cores were used for the small test run
Classification: 2.4
External routines: BLAS, LAPACK, PBLAS, ScaLAPACK
Subprograms used: ADAZ_v1_1
Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
Solution method: Two-dimensional R-matrix propagation theory. The (r1,r2) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region. On the boundary of the internal region ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems, enabling the internal region to be extended far beyond that which is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically and the R-matrix basis states are constructed to allow for both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available.
Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions.
Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4], which is an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files.
Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s).
References:
[1] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009.
[2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009.
[3] HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009.
[4] Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.
16.
A new formulation of the problem of constructing parallel asynchronous abstract programs of a desired length is proposed for parallel computer systems. The planning conditions are represented as a system of Boolean equations (constraints) whose solutions define the feasible plans for activating the program modules specified in the planner's knowledge base. Constraints on the number of processors and on the time delays arising during execution of the program modules are taken into account.
17.
The direct problem of technical diagnosis is considered, namely, determining the technical state of a combinatorial discrete device from the results of testing. Graph and analytical models of the behavior of the combinatorial discrete device, allowing for the technical state of its elements, are presented. A method is proposed for isolating the suspected logical malfunctions under which the observed behavior of the combinatorial discrete device is possible.
18.
E. S. Kirik, Automation and Remote Control 68 (4) (2007) 645–656
A robust analogue of the Nadaraya-Watson regression estimator is considered. A solution is obtained in the class of censoring algorithms. A criterion and an iterative procedure for determining the censored sample are proposed. The criterion is based on an analysis of the estimation residuals (errors).
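For reference, the Nadaraya-Watson estimate at a point x is a kernel-weighted average of the observed responses, ŷ(x) = Σ_i K((x − x_i)/h) y_i / Σ_i K((x − x_i)/h). A minimal Python sketch of a censoring variant in the spirit described above is given below; the paper's specific criterion and iteration procedure are not reproduced, and the MAD-based outlier rule is an illustrative assumption:

import numpy as np

def nw_estimate(x0, x, y, h, keep=None):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel and optional 0/1 censor weights.
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    if keep is not None:
        w = w * keep
    return np.sum(w * y) / np.sum(w)

def censored_nw(x, y, h, c=2.5, n_iter=3):
    # Iteratively censor points whose estimation residuals are large, then re-estimate.
    keep = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        fitted = np.array([nw_estimate(xi, x, y, h, keep) for xi in x])
        resid = y - fitted
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust MAD scale
        keep = (np.abs(resid) <= c * scale).astype(float)
    return lambda x0: nw_estimate(x0, x, y, h, keep)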
19.
Joe Pitt-Francis, Pras Pathmanathan, Miguel O. Bernabeu, Rafel Bordas, Jonathan Cooper, Alexander G. Fletcher, Gary R. Mirams, Philip Murray, James M. Osborne, Alex Walter, S. Jon Chapman, Alan Garny, Ingeborg M.M. van Leeuwen, Philip K. Maini, Blanca Rodríguez, Sarah L. Waters, Jonathan P. Whiteley, Helen M. Byrne, David J. Gavaghan, Computer Physics Communications 180 (12) (2009) 2452–2471
20.
A. M. Tsykunov, Automation and Remote Control 70 (2) (2009) 271–282
The problem is solved of constructing a robust control system for a linear nonstationary multidimensional plant that compensates parametric and bounded external perturbations to within δ if the derivatives of the output vector are not measured, and compensates them fully if the derivatives are measured.