11.
Neural Processing Letters - The exceptional capabilities of the much celebrated Real Adaboost ensembles for solving decision and classification problems are universally recognized. These...
12.
Lyhyaoui A. Martinez M. Mora I. Vaquez M. Sancho J.-L. Figueiras-Vidal A.R. 《Neural Networks, IEEE Transactions on》1999,10(6):1474-1481
Explores the possibility of constructing RBF classifiers which, somewhat like support vector machines, use a reduced number of samples as centroids, selecting those samples directly. Because sample selection is a hard computational problem, the selection is performed after a previous vector quantization; in this way, other similar machines are also obtained using centroids selected from those learned in a supervised manner. Several ways of designing these machines are considered, in particular with respect to sample selection, as well as different criteria to train them. Simulation results for well-known classification problems show very good performance of the corresponding designs, improving on support vector machines while substantially reducing the number of units. This shows that the interest in selecting samples (or centroids) efficiently is justified. Many new research avenues arise from these experiments and discussions, as suggested in the conclusions.
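The quantize-then-use-as-centroids pipeline described in this abstract can be sketched in a few lines; the following is a minimal illustration only, assuming a toy two-Gaussian problem, a plain k-means quantizer, and a least-squares readout (the paper's selection criteria are more elaborate, and here all codebook vectors are kept as centroids rather than a selected subset):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-class problem: two unit-variance Gaussian clouds
n = 300
X0 = rng.standard_normal((n, 2)) + [-1.5, 0.0]
X1 = rng.standard_normal((n, 2)) + [+1.5, 0.0]
X = np.vstack([X0, X1])
y = np.r_[-np.ones(n), np.ones(n)]

def kmeans(X, k, iters=30):
    """Plain k-means (the vector-quantization step)."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C

# VQ first, then use the codebook vectors as RBF centroids
C = np.vstack([kmeans(X0, 4), kmeans(X1, 4)])
sigma = 1.0
Phi = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
w = np.linalg.lstsq(Phi, y, rcond=None)[0]   # least-squares readout
acc = np.mean(np.sign(Phi @ w) == y)
```

With only eight centroids the classifier already approaches the Bayes accuracy of this toy problem, which is the compactness argument the abstract makes against keeping every support vector.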
13.
Navia-Vázquez A. Pérez-Cruz F. Artés-Rodríguez A. Figueiras-Vidal A.R. 《Journal of Signal Processing Systems》2004,37(2-3):223-235
Many learning algorithms have been used for data mining applications, including Support Vector Classifiers (SVCs), which have shown improved capabilities with respect to other approaches, since they provide a natural mechanism for implementing Structural Risk Minimization (SRM), yielding machines with good generalization properties. SVC leads to the optimal-hyperplane (maximal-margin) criterion for separable datasets; in the nonseparable case, it minimizes the L1 norm of the training errors plus a regularizing term that controls machine complexity. The L1 norm is chosen because it allows the minimization to be solved with a Quadratic Programming (QP) scheme, as in the separable case. But the L1 norm is not truly an "error counting" term as the Empirical Risk Minimization (ERM) inductive principle indicates, and it therefore leads to a biased solution. This effect is especially severe in low-complexity machines, such as linear classifiers or machines with few nodes (neurons, kernels, basis functions). Since one of the main goals in data mining is explanation, these reduced architectures are of great interest because they underlie other techniques such as input selection or rule extraction. Training SVMs as accurately as possible in these situations (i.e., without this bias) is therefore an interesting goal. We propose here an unbiased implementation of SVC by introducing a more appropriate "error counting" term. In this way the number of classification errors is truly minimized, while the maximal-margin solution is still obtained in the separable case. QP can no longer be used to solve the new minimization problem, so we apply instead an iterated Weighted Least Squares (WLS) procedure. This modification of the Support Vector Machine cost function to solve ERM was not possible until now with the Quadratic or Linear Programming techniques commonly used, but it becomes possible with the iterated WLS formulation.
Computer experiments show that the proposed method is superior to the classical approach in the sense that it truly solves the ERM problem.
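The iterated-WLS machinery at the heart of this approach can be illustrated on a simpler, standard problem: minimizing an L1 cost by repeatedly solving weighted least-squares problems whose weights are set from the current residuals. This is a generic IRLS sketch for robust linear regression, not the paper's SVC cost; the data and weighting rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linear regression contaminated with gross outliers
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
y[:20] += 8.0 * rng.standard_normal(20)          # 10% gross outliers

def irls_l1(X, y, iters=50, eps=1e-6):
    """Minimize ||y - X b||_1 by iterated weighted least squares."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary L2 start
    for _ in range(iters):
        r = y - X @ b
        w = 1.0 / np.maximum(np.abs(r), eps)     # WLS weights ~ 1/|residual|
        Xw = X * w[:, None]                      # weighted normal equations:
        b = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # (X^T W X) b = X^T W y
    return b

b_l2 = np.linalg.lstsq(X, y, rcond=None)[0]
b_l1 = irls_l1(X, y)
```

Each pass is an ordinary WLS solve, so the non-quadratic cost is handled without QP or LP machinery, which is the same leverage the abstract claims for its "error counting" SVC cost.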
14.
Salcedo-Sanz S. Bousono-Calzon C. Figueiras-Vidal A.R. 《Wireless Communications, IEEE Transactions on》2003,2(2):277-283
The broadcast scheduling problem (BSP) arises in frame design for packet radio networks (PRNs). The frame structure determines the main communication parameters: communication delay and throughput. The BSP is a combinatorial optimization problem known to be NP-hard. To solve it, we propose an algorithm with two main steps that arise naturally from the problem structure: the first tackles the hardest constraints, and the second carries out the throughput optimization. The algorithm combines a Hopfield neural network for constraint satisfaction with a genetic algorithm for achieving maximal throughput. Its performance is compared with that of existing algorithms in several benchmark cases; in all of them, our algorithm finds the optimal frame length and outperforms previous algorithms in the resulting throughput.
15.
Fernando Díaz-de-María Aníbal R. Figueiras-Vidal 《International Journal of Adaptive Control and Signal Processing》1997,11(7):585-601
Non-linear prediction is a natural way to increase the quality of speech coders. In particular, low-delay CELP-type coders can incorporate this improvement because the predictor adaptation is backward. Consequently, there is the possibility of using neural networks as predictors, since their weights (usually more numerous than required in the linear approach) do not have to be transmitted. We apply a radial basis function (RBF) network for this purpose, since it computes a regularized solution to the prediction problem; as a result, the stability of the non-linear autoregressive synthesis system can be guaranteed. Investigations of how to combine non-linear with linear predictors indicate that a cascade of an RBF network and a linear filter is a suitable choice, since it provides good results and its application to analysis-by-synthesis coders yields large computational advantages with respect to the parallel configuration. This hybrid predictor has been tested in a low-delay code-excited predictive coder, providing an average improvement of 0.4 dB with respect to a CELP coder. Additionally, subjective listening tests give the proposed coder a slight preference over the CELP coder. These results are encouraging, because we consider that the proposed coder can be implemented in real time after some improvements, which are detailed as subjects of further work. © 1997 John Wiley & Sons, Ltd.
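The core idea of an RBF network computing a regularized (ridge) solution to a one-step prediction problem can be shown on a toy series; the sketch below fits a Gaussian-RBF predictor to the logistic map and compares it with a linear predictor. The series, center placement, and ridge term are illustrative assumptions, not the paper's speech-coding setup:

```python
import numpy as np

# Toy nonlinear series: the logistic map x[n] = 4 x[n-1](1 - x[n-1])
N = 600
x = np.empty(N)
x[0] = 0.3
for n in range(1, N):
    x[n] = 4.0 * x[n - 1] * (1.0 - x[n - 1])

u = x[:-1][:, None]                  # predictor input x[n-1]
t = x[1:]                            # one-step-ahead target x[n]

# Gaussian RBF layer with fixed, evenly spaced centers
C = np.linspace(0.0, 1.0, 12)[:, None]
sigma = 0.1
def phi(u):
    return np.exp(-((u - C.T) ** 2) / (2 * sigma ** 2))

ntr = 400
Phi = phi(u[:ntr])
lam = 1e-6                           # ridge term = the regularized solution
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(12), Phi.T @ t[:ntr])
mse_rbf = np.mean((t[ntr:] - phi(u[ntr:]) @ w) ** 2)

# Linear one-tap predictor baseline
A = np.column_stack([np.ones(ntr), u[:ntr, 0]])
ab = np.linalg.lstsq(A, t[:ntr], rcond=None)[0]
mse_lin = np.mean((t[ntr:] - (ab[0] + ab[1] * u[ntr:, 0])) ** 2)
```

On this strongly nonlinear map the linear predictor captures essentially nothing, while the small RBF network predicts almost exactly, which is the gain the abstract pursues inside a cascade with a linear filter.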
16.
Arenas-Garcia J. Figueiras-Vidal A.R. Sayed A.H. 《Signal Processing, IEEE Transactions on》2006,54(3):1078-1090
Combination approaches provide an interesting way to improve adaptive filter performance. In this paper, we study the mean-square performance of a convex combination of two transversal filters. The individual filters are independently adapted using their own error signals, while the combination is adapted by means of a stochastic gradient algorithm in order to minimize the error of the overall structure. General expressions are derived that show that the method is universal with respect to the component filters, i.e., in steady-state, it performs at least as well as the best component filter. Furthermore, when the correlation between the a priori errors of the components is low enough, their combination is able to outperform both of them. Using energy conservation relations, we specialize the results to a combination of least mean-square filters operating both in stationary and in nonstationary scenarios. We also show how the universality of the scheme can be exploited to design filters with improved tracking performance.
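A minimal simulation of this convex-combination scheme, with two LMS filters and a sigmoidal mixing parameter adapted by stochastic gradient on the overall error, might look as follows; the plant, step sizes, and clipping of the mixing variable are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Plant identification: unknown FIR plant, white input, additive noise
N, M = 5000, 8
w_true = rng.standard_normal(M)
x = rng.standard_normal(N + M)
d = np.array([w_true @ x[n:n + M] for n in range(N)]) + 0.1 * rng.standard_normal(N)

mu1, mu2, mu_a = 0.1, 0.005, 10.0      # fast LMS, slow LMS, mixing step
w1, w2, a = np.zeros(M), np.zeros(M), 0.0
mse = np.zeros(3)                      # steady-state error power: fast, slow, combo
for n in range(N):
    u = x[n:n + M]
    y1, y2 = w1 @ u, w2 @ u
    lam = sigmoid(a)
    y = lam * y1 + (1.0 - lam) * y2    # convex combination of the two outputs
    e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y
    w1 += mu1 * e1 * u                 # each filter adapts with its own error
    w2 += mu2 * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)  # gradient step on overall error
    a = np.clip(a, -4.0, 4.0)          # keep the mixing away from hard 0/1
    if n >= N // 2:
        mse += np.array([e1 ** 2, e2 ** 2, e ** 2])
mse /= N - N // 2
```

In this stationary scenario the slow filter wins in steady state, and the combination's error power ends up close to it rather than to the fast filter, illustrating the universality claim.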
17.
The attractive possibility of applying layerwise block training algorithms to multilayer perceptrons (MLPs), which offers initial advantages in computational effort, is refined in this article by introducing a sensitivity correction factor in the formulation. This results in a clear performance advantage, which we verify in several applications. The reasons for this advantage are discussed and related to implicit connections with second-order techniques, natural-gradient formulations through Fisher's information matrix, and sample selection. Extensions to recurrent networks and other research lines are suggested at the close of the article.
18.
Arenas-Garcia J. Gomez-Verdejo V. Figueiras-Vidal A.R. 《IEEE transactions on instrumentation and measurement》2005,54(6):2239-2249
Among adaptive filtering algorithms, Widrow and Hoff's least mean square (LMS) has probably become the most popular because of its robustness, good tracking properties, and simplicity. A drawback of LMS is that the step size implies a compromise between speed of convergence and final misadjustment. Combining LMS filters with different speeds alleviates this compromise, as demonstrated by our studies on a two-filter combination that we call the combination of LMS filters (CLMS). Here, we extend this scheme in two directions. First, we propose a generalization that combines multiple LMS filters with different step sizes, providing the combination with better tracking capabilities. Second, we use a different mixing parameter for each weight of the filter, making their adaptation speeds independent. Simulation examples in plant identification and noise cancellation applications show the validity of the new schemes compared with the CLMS filter and with previous variable-step approaches.
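One way the multi-filter generalization can be sketched is with a softmax-parameterized mixture of several LMS filters, the mixing logits adapted by stochastic gradient on the overall error; the softmax activation and all parameter values here are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Plant identification with three LMS filters of different step sizes
N, M, K = 4000, 8, 3
w_true = rng.standard_normal(M)
x = rng.standard_normal(N + M)
d = np.array([w_true @ x[n:n + M] for n in range(N)]) + 0.1 * rng.standard_normal(N)

mus = np.array([0.1, 0.02, 0.004])   # per-component step sizes
W = np.zeros((K, M))                 # one weight vector per component filter
a = np.zeros(K)                      # mixing logits
mu_a = 5.0
err = np.zeros(K + 1)                # steady-state error power: components + combo
for n in range(N):
    u = x[n:n + M]
    y_k = W @ u
    lam = np.exp(a - a.max())
    lam /= lam.sum()                 # softmax mixing weights
    y = lam @ y_k
    e_k = d[n] - y_k
    e = d[n] - y
    W += mus[:, None] * e_k[:, None] * u[None, :]  # independent LMS updates
    a += mu_a * e * lam * (y_k - y)  # softmax-Jacobian gradient on overall error
    if n >= N // 2:
        err += np.append(e_k ** 2, e ** 2)
err /= N - N // 2
```

The gradient term `lam * (y_k - y)` is the derivative of the combined output with respect to each logit, so the mixture drifts toward whichever step size currently gives the smallest error.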
19.
Generalizing CMAC architecture and training.
F J Gonzalez-Serrano A R Figueiras-Vidal A Artes-Rodriguez 《Neural Networks, IEEE Transactions on》1998,9(6):1509-1514
The cerebellar model articulation controller (CMAC) is a simple and fast neural network based on local approximations. However, its rigid structure reduces its approximation accuracy and convergence speed with heterogeneous inputs. In this paper, we propose a generalized CMAC (GCMAC) network that allows a different degree of generalization for each input. Its representation abilities are analyzed, and a set of local relationships that the output function must satisfy is derived. An adaptive growing method for the network is also presented. The validity of our approach and methods is shown through simulated examples.
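The basic fixed-generalization CMAC that the GCMAC extends can be sketched as a set of overlapping tilings trained with a delta rule; this toy 1-D sketch uses a single generalization parameter (all sizes and the learning rate are illustrative assumptions, and there is no per-input generalization as in the paper):

```python
import numpy as np

# 1-D CMAC: C_tiles overlapping tilings (generalization = C_tiles), delta rule
C_tiles, n_bins = 8, 64
lo, hi = 0.0, 2 * np.pi
width = (hi - lo) / n_bins
W = np.zeros((C_tiles, n_bins + 1))  # one weight table per tiling

def active(x):
    """One active cell per tiling; tilings are offset by width / C_tiles."""
    return [int((x - lo + c * width / C_tiles) // width) for c in range(C_tiles)]

def predict(x):
    return sum(W[c, i] for c, i in enumerate(active(x)))

rng = np.random.default_rng(4)
beta = 0.3
for _ in range(20000):               # delta-rule training on sin(x)
    x = rng.uniform(lo, hi)
    e = np.sin(x) - predict(x)
    for c, i in enumerate(active(x)):
        W[c, i] += beta * e / C_tiles  # spread the correction over active cells

xs = np.linspace(lo, hi - 1e-9, 200)
err_max = max(abs(np.sin(x) - predict(x)) for x in xs)
```

Each input activates exactly one cell per tiling, so lookup and update cost is constant, which is the speed the abstract refers to; the rigidity criticized there comes from every input dimension sharing the same tiling scheme.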
20.
Weighted least squares training of support vector classifiers leading to compact and adaptive schemes
Navia-Vazquez A. Perez-Cruz F. Artes-Rodriguez A. Figueiras-Vidal A.R. 《Neural Networks, IEEE Transactions on》2001,12(5):1047-1059
An iterative block training method for support vector classifiers (SVCs) based on weighted least squares (WLS) optimization is presented. The algorithm, which minimizes structural risk in the primal space, is applicable to both linear and nonlinear machines. In some nonlinear cases, it is necessary first to project the data onto an intermediate-dimensional space by means of either principal component analysis or clustering techniques. The proposed approach yields very compact machines; the complexity reduction with respect to the SVC solution is especially notable in problems with highly overlapped classes. Furthermore, the formulation in terms of WLS minimization makes the development of adaptive SVCs straightforward, opening up new fields of application for this type of model: online processing of large amounts of (static/stationary) data, as well as online updating in nonstationary scenarios (adaptive solutions). The performance of this new algorithm is analyzed by means of several simulations.