Similar documents
20 similar documents found (search time: 375 ms)
1.
Fuzzy systems are represented as series expansions of fuzzy basis functions, which are algebraic superpositions of fuzzy membership functions. Using the Stone-Weierstrass theorem, it is proved that linear combinations of the fuzzy basis functions are capable of uniformly approximating any real continuous function on a compact set to arbitrary accuracy. Based on the fuzzy basis function representation, an orthogonal least-squares (OLS) learning algorithm is developed for designing fuzzy systems from given input-output pairs; the OLS algorithm selects the significant fuzzy basis functions, which are then used to construct the final fuzzy system. The fuzzy basis function expansion is used to approximate a controller for the nonlinear ball-and-beam system, and the simulation results show that the control performance is improved by incorporating some common-sense fuzzy control rules.
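As a rough illustration of the kind of fuzzy basis function expansion described in this abstract, the sketch below builds normalized Gaussian basis functions and fits the expansion coefficients by ordinary least squares; plain least squares stands in for the OLS subset-selection step, and all names, widths, and the toy target are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

def fuzzy_basis(x, centers, width):
    """Normalized Gaussian fuzzy basis functions p_j(x) (illustrative form)."""
    mu = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    return mu / mu.sum(axis=1, keepdims=True)  # normalization over the fuzzy sets

# toy input-output pairs sampled from an unknown continuous target
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

centers = np.linspace(-3, 3, 15)                # one fuzzy set per grid point
P = fuzzy_basis(x, centers, width=0.5)          # design matrix of basis functions
theta, *_ = np.linalg.lstsq(P, y, rcond=None)   # expansion coefficients

print("max approximation error:", np.max(np.abs(P @ theta - y)))
```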

2.
C. Heuberger 《Computing》1999,63(4):341-349
We consider digit expansions in redundant number systems to base q and call such an expansion minimal if its cost is minimal. We describe an efficient algorithm for determining a minimal representation and give an explicit characterization of optimal representations for odd q. Received: July 20, 1999; revised: August 23, 1999
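The abstract's digit set and cost measure were lost in extraction; purely as an illustration of a redundant expansion to an odd base q, the sketch below computes a balanced base-q representation with digits in {-(q-1)/2, …, (q-1)/2}. It is not claimed to be the paper's minimal representation or algorithm.

```python
def balanced_digits(n, q):
    """Balanced base-q expansion of n for odd q >= 3 (least significant digit first)."""
    assert q % 2 == 1 and q >= 3
    half = (q - 1) // 2
    digits = []
    while n != 0:
        d = ((n + half) % q) - half   # pick the unique digit in [-half, half]
        digits.append(d)
        n = (n - d) // q
    return digits or [0]

# 100 = 1 - 9 + 27 + 81 in balanced base 3
print(balanced_digits(100, 3))   # [1, 0, -1, 1, 1]
```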

3.
Copy theory is a signal-generation technique built on research into Walsh functions. Its main content includes generating non-sinusoidal orthogonal functions by means of shift and symmetric copy methods and control in…
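Since the abstract is cut off, the "symmetric copy" it mentions is illustrated here only by the recursive construction of Walsh-Hadamard matrices, which is a common example of the idea; the sketch is an assumption and is not taken from the paper.

```python
import numpy as np

def walsh_hadamard(order):
    """Build a 2**order x 2**order Walsh-Hadamard matrix by repeated symmetric copying."""
    H = np.array([[1]])
    for _ in range(order):
        # append a copy of the current block and a sign-inverted (symmetric) copy
        H = np.block([[H, H], [H, -H]])
    return H

H8 = walsh_hadamard(3)
print(H8 @ H8.T)   # rows are mutually orthogonal: the product is 8 * I
```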

4.
This paper deals with the problem of constructing confidence regions for the parameters of truncated series expansion models. The models are represented using orthonormal basis functions, and we extend the ‘Leave-out Sign-dominant Correlation Regions’ (LSCR) algorithm such that non-asymptotic confidence regions for the parameters can be constructed in the presence of unmodelled dynamics. The constructed regions have guaranteed probability of containing the true parameters for any finite number of data points. The algorithm is first developed for FIR models and then extended to models with generalized orthonormal basis functions. The usefulness of the developed approach is demonstrated for FIR and Laguerre models in simulation examples.

5.
Boolean functions and pseudo-Boolean functions are widely used in many fields, and representing them as polynomials helps characterize some of their properties. This paper first gives a fast algorithm for computing the polynomial representation of a Boolean function under the condition that the output is available for every input. The algorithm uses only mod-2 addition, requires few operations, and is simple, easy to program, accurate, and fast. Moreover, it extends readily to a fast algorithm for the polynomial representation of pseudo-Boolean functions: one only needs to replace mod-2 addition with real addition. A comparison then illustrates the fast algorithm for the polynomial representation of pseudo-Boolean functions, and it is pointed out that any pseudo-Boolean function can be represented in polynomial form. Finally, examples further verify the correctness of the algorithm.
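A minimal sketch of the classical fast Moebius transform, the standard mod-2 procedure of the kind the abstract describes: it recovers the algebraic-normal-form coefficients of a Boolean function from its full truth table, and replacing the XOR with real subtraction yields the multilinear coefficients of a pseudo-Boolean function. Function names are illustrative.

```python
def anf_coefficients(truth_table):
    """ANF coefficients of a Boolean function via the fast Moebius transform (mod-2 addition only).

    truth_table[x] is f(x) for x = 0 .. 2**n - 1, where bit i of x is the i-th input."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for x in range(len(a)):
            if x & bit:
                a[x] ^= a[x ^ bit]
    return a

def multilinear_coefficients(values):
    """Same transform with real subtraction: multilinear coefficients of a pseudo-Boolean function."""
    a = list(values)
    n = len(a).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for x in range(len(a)):
            if x & bit:
                a[x] -= a[x ^ bit]
    return a

# f(x1, x2) = x1 XOR x2 has ANF x1 + x2 (mod 2) and real form x1 + x2 - 2*x1*x2
print(anf_coefficients([0, 1, 1, 0]))          # [0, 1, 1, 0]
print(multilinear_coefficients([0, 1, 1, 0]))  # [0, 1, 1, -2]
```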

6.
7.
In this paper, we propose a methodology for training a new model of artificial neural network called the generalized radial basis function (GRBF) neural network. This model is based on the generalized Gaussian distribution, which parametrizes the Gaussian distribution by adding a new parameter τ. The generalized radial basis function allows different radial basis functions to be represented by updating the new parameter τ. For example, when GRBF takes a value of τ=2, it represents the standard Gaussian radial basis function. The model parameters are optimized through a modified version of the extreme learning machine (ELM) algorithm. In the methodology proposed (MELM-GRBF), the centers of each GRBF were taken randomly from the patterns of the training set and the radius and τ values were determined analytically, taking into account that the model must fulfil two constraints: locality and coverage. A thorough experimental study is presented to test its overall performance. Fifteen datasets were considered, including binary and multi-class problems, all of them taken from the UCI repository. The MELM-GRBF was compared to ELM with sigmoidal, hard-limit, triangular basis and radial basis functions in the hidden layer and to the ELM-RBF methodology proposed by Huang et al. (2004) [1]. The MELM-GRBF obtained better results in accuracy than the corresponding sigmoidal, hard-limit, triangular basis and radial basis functions for almost all datasets, producing the highest mean accuracy rank when compared with these other basis functions for all datasets.
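A minimal sketch of the generalized radial basis function described above, trained ELM-style: centers are drawn at random from the training patterns and the output weights are solved by least squares. The radius and τ values below are placeholders, not the analytic rules derived in the paper, and the toy data is illustrative.

```python
import numpy as np

def grbf(X, centers, radius, tau):
    """Generalized RBF activations exp(-(||x - c|| / radius)**tau); tau = 2 gives the standard Gaussian RBF."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / radius) ** tau)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))                 # toy patterns
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # toy binary target

idx = rng.choice(len(X), size=25, replace=False)  # centers taken randomly from the training set
H = grbf(X, X[idx], radius=1.5, tau=1.5)          # hidden-layer output matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # ELM step: analytic output weights

print("training accuracy:", np.mean((H @ beta > 0.5) == (y > 0.5)))
```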

8.
Assuming that the parameters of a generalized hypergeometric function depend linearly on a small variable ε, the successive derivatives of the function with respect to that small variable are evaluated at ε=0 to obtain the coefficients of the ε-expansion of the function. The procedure, which is quite naive, benefits from simple explicit expressions of the derivatives, to any order, of the Pochhammer and reciprocal Pochhammer symbols with respect to their argument. The algorithm may be used algebraically, irrespective of the values of the parameters. It reproduces the exact results obtained by other authors in cases of especially simple parameters. Implemented numerically, the procedure improves considerably, for higher orders in ε, the numerical expansions given by other methods.
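For illustration, one explicit expression of the kind referred to above is the standard first-order derivative of a shifted Pochhammer symbol (a digamma identity; the paper also covers higher orders and the reciprocal symbol, which are not shown here):

```latex
\[
  \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\,(a+\varepsilon)_{n}\Big|_{\varepsilon=0}
    = (a)_{n}\,\bigl[\psi(a+n)-\psi(a)\bigr],
  \qquad
  (a)_{n} = \frac{\Gamma(a+n)}{\Gamma(a)} .
\]
```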

9.
10.
It is shown that the Burchnall-Chaundy expansions, which are of fundamental importance in the theory of Appell's functions, can easily be implemented and generalized by means of the operator factorization method, which provides a simple and universal base, both for a new theory of hypergeometric series and for the development of effective new algorithms for computer-aided symbolic transformations of these series. Five new generalized expansions are derived, including 44 Burchnall-Chaundy expansions, as well as many new expansions, some of which are related to the Horn series.

11.
Algorithms for computing integral bases of an algebraic function field are implemented in some computer algebra systems. They are used, e.g., for the integration of algebraic functions. The method used by Maple 5.2 and AXIOM is given by Trager in [Trager, 1984]. He adapted an algorithm of Ford and Zassenhaus [Ford, 1978], which computes the ring of integers in an algebraic number field, to the case of a function field. It turns out that, using algebraic geometry, one can write a faster algorithm. The method we give is based on Puiseux expansions. One can see this as a variant of Coates' algorithm as described in [Davenport, 1981]. Some difficulties in computing with Puiseux expansions can be avoided using a sharp bound for the number of terms required, which is given in Section 3. In Section 5 we derive which denominator is needed in the integral basis. Using this result, 'intermediate expression swell' can be avoided. The Puiseux expansions generally introduce algebraic extensions; these extensions will not appear in the resulting integral basis.

12.
Closed-loop analysis of generalized predictive control systems
By converting generalized predictive control into an internal model control structure, this paper derives quantitative expressions for the controller and the filter involved, and on this basis analyzes the closed-loop dynamic behavior, stability, and robustness of the system. These theoretical results provide a basis for the design of generalized predictive control systems.

13.
This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. It is proposed in this paper that the initial set of internal representations can be formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the input of the network with the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule, known as Oja's rule. Various generalized Hebbian rules were experimentally tested and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the use of the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms used to perform nontrivial training tasks. The improvement of the convergence becomes significant as the size and complexity of the training task increase.
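A minimal sketch of the Hebbian pre-training the abstract refers to: Oja's rule applied to centered input data, which drives a weight vector toward the leading principal direction with unit norm. Only a single unit is shown; generalized variants (e.g., Sanger's rule) extract several components for several hidden units. Names, learning rate, and the toy data are illustrative.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule: a Hebbian term plus an implicit decay that keeps ||w|| near 1."""
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))  # correlated toy inputs
X -= X.mean(axis=0)

w = rng.normal(scale=0.1, size=X.shape[1])
for _ in range(30):                      # a few passes over the data
    for x in rng.permutation(X):
        w = oja_update(w, x)

print("||w|| after training:", np.linalg.norm(w))   # close to 1
```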

14.
Convex multi-task feature learning
We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select—not learn—a few common variables across the tasks. Editors: Daniel Silver, Kristin Bennett, Richard Caruana. This is a longer version of the conference paper (Argyriou et al. in Advances in neural information processing systems, vol. 19, 2007a). It includes new theoretical and experimental results.
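A minimal numpy sketch of the alternating scheme summarized above, written for the square loss: the supervised step solves one regularized problem per task with a shared matrix D fixed, and the unsupervised step updates D in closed form from the stacked task weights. The smoothing constant eps, the toy data, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm

def multitask_features(Xs, ys, gamma=1.0, iters=50, eps=1e-6):
    """Alternate per-task solves (penalty gamma * w' D^{-1} w) with the closed-form update of D."""
    d = Xs[0].shape[1]
    D = np.eye(d) / d                                   # start from an isotropic feature matrix
    for _ in range(iters):
        Dinv = np.linalg.inv(D + eps * np.eye(d))
        # supervised step: task-specific weight vectors for fixed D
        W = np.column_stack([np.linalg.solve(X.T @ X + gamma * Dinv, X.T @ y)
                             for X, y in zip(Xs, ys)])
        # unsupervised step: shared feature matrix from the current task weights
        C = np.real(sqrtm(W @ W.T + eps * np.eye(d)))
        D = C / np.trace(C)
    return W, D

# toy tasks that share the same two relevant features
rng = np.random.default_rng(0)
w_true = np.zeros(10); w_true[:2] = [1.0, -2.0]
Xs = [rng.standard_normal((60, 10)) for _ in range(4)]
ys = [X @ (w_true + 0.1 * rng.standard_normal(10)) for X in Xs]

W, D = multitask_features(Xs, ys)
print(np.round(np.diag(D), 3))   # most of the trace concentrates on the shared features
```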

15.

The algorithm selection problem is defined as identifying the best-performing machine learning (ML) algorithm for a given combination of dataset, task, and evaluation measure. The human expertise required to evaluate the increasing number of ML algorithms available has resulted in the need to automate the algorithm selection task. Various approaches have emerged to handle the automatic algorithm selection challenge, including meta-learning. Meta-learning is a popular approach that leverages accumulated experience for future learning and typically involves dataset characterization. Existing meta-learning methods often represent a dataset using predefined features and thus cannot be generalized across different ML tasks, or alternatively, learn a dataset’s representation in a supervised manner and therefore are unable to deal with unsupervised tasks. In this study, we propose a novel learning-based task-agnostic method for producing dataset representations. Then, we introduce TRIO, a meta-learning approach, that utilizes the proposed dataset representations to accurately recommend top-performing algorithms for previously unseen datasets. TRIO first learns graphical representations for the datasets, using four tools to learn the latent interactions among dataset instances and then utilizes a graph convolutional neural network technique to extract embedding representations from the graphs obtained. We extensively evaluate the effectiveness of our approach on 337 datasets and 195 ML algorithms, demonstrating that TRIO significantly outperforms state-of-the-art methods for algorithm selection for both supervised (classification and regression) and unsupervised (clustering) tasks.


16.
The paper reviews studies on the representations and expansions of weighted pseudoinverse matrices with positive semidefinite weights and on the construction of iterative methods and regularized problems for the calculation of weighted pseudoinverses and weighted normal pseudosolutions based on these representations and expansions. The use of these methods to solve constrained least squares problems is examined. Continued from Cybernetics and Systems Analysis, 44, No. 1, 36–55 (2008). Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 75–102, May–June 2008.

17.
An orthogonal expansion of the space of quadratically integrable nonlinear functionals of a Gaussian random process is considered. The problem of explicitly constructing the expansion terms is restated as the generation of differential equations for its coordinate functions. On the basis of the results obtained, the method of canonical expansions from linear statistical analysis is generalized to the nonlinear case.

18.
This paper is concerned with structural and algorithmic aspects of certain R-bases in polynomial rings R[Xij] over a commutative ring R with 1. These bases are related to standard tableaux. We shall examine the main tools in full detail: (symmetrized) bideterminants, Capelli operators, hyperdominance, and generalized Laplace's expansions. These tools are then applied to the representation theory of symmetric groups. In particular, we present an algorithm which efficiently computes for every skew module of a symmetric group an R-basis which is adapted to a Specht series. This result is a constructive, characteristic-free analogue of the celebrated Littlewood-Richardson rule. This paper will serve as the basis for a possible generalization of that rule to more general shapes.

19.
20.
Consider a collection of waveforms, each of which is treated as a set of independent variables containing information about some other (dependent) variable. This paper addresses the problem of finding informationally efficient expansions of the waveforms. A procedure is described for determining conditional entropy efficient basis functions for the given collection of waveforms, where the entropy is conditioned on the specified dependent variable. Use of these basis functions for approximate waveform reconstruction minimizes the loss of information about the dependent variable (the degree of approximation depending upon the number of basis functions used).
