20 similar documents found.
1.
Athanassios Vasilakos Demetris Stathakis 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2005,9(5):332-340
Granulation of information is a new way to describe the increased complexity of natural phenomena. The lack of clear borders in nature calls for a more efficient way to process such data. Land use, both in general and as perceived in satellite images, is a typical example of data that are inherently not clearly delimited. A granular neural network (GNN) approach is used here to facilitate land use classification. The GNN model combines membership functions of spectral as well as non-spectral spatial information to produce land use categories. Spectral information refers to IRS satellite image bands; the non-spectral data are topographic, namely slope, aspect, and elevation. The processing is done through a standard neural network trained with the back-propagation learning algorithm. A thorough presentation of the results is given in order to evaluate the merits of this method.
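As a rough illustration of the granulation step described above, the sketch below (Python) builds triangular low/medium/high membership features for each input and feeds them to a back-propagation-trained network. The membership shapes, layer sizes, and the synthetic stand-ins for the IRS bands and terrain attributes are assumptions, not the paper's actual design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def tri(x, a, b, c):
    """Triangular fuzzy membership rising from a to a peak at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def granulate(X):
    """Replace each raw feature by low/medium/high membership degrees."""
    cols = []
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        mid = (lo + hi) / 2.0
        cols += [tri(X[:, j], lo - 1e-9, lo, mid),   # low
                 tri(X[:, j], lo, mid, hi),          # medium
                 tri(X[:, j], mid, hi, hi + 1e-9)]   # high
    return np.stack(cols, axis=1)

# Toy stand-ins for spectral bands plus slope/aspect/elevation (6 inputs).
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = (X[:, 0] + X[:, 4] > 1.0).astype(int)  # synthetic land-use label

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(granulate(X), y)
print("training accuracy:", clf.score(granulate(X), y))
```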
2.
Granular neural networks for numerical-linguistic data fusion and knowledge discovery
Yan-Qing Zhang Fraser M.D. Gagliano R.A. Kandel A. 《IEEE Transactions on Neural Networks》2000,11(3):658-667
We present a neural-networks-based knowledge discovery and data mining (KDDM) methodology based on granular computing, neural computing, fuzzy computing, linguistic computing, and pattern recognition. The major issues include 1) how to make neural networks process both numerical and linguistic data in a database, 2) how to convert fuzzy linguistic data into related numerical features, 3) how to use neural networks to do numerical-linguistic data fusion, 4) how to use neural networks to discover granular knowledge from numerical-linguistic databases, and 5) how to use discovered granular knowledge to predict missing data. In order to answer the above concerns, a granular neural network (GNN) is designed to deal with numerical-linguistic data fusion and granular knowledge discovery in numerical-linguistic databases. From a data granulation point of view, the GNN can process granular data in a database. From a data fusion point of view, the GNN makes decisions based on different kinds of granular data. From a KDDM point of view, the GNN is able to learn internal granular relations between numerical-linguistic inputs and outputs, and to predict new relations in a database. The GNN is also capable of greatly compressing low-level granular data to high-level granular knowledge, with some compression error and a data compression rate. To do KDDM in huge databases, parallel and distributed GNNs will be investigated in the future.
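The conversion of fuzzy linguistic data into numerical features (issue 2 above) can be pictured as below. The vocabulary, the trapezoidal membership shapes, and the `encode` helper are hypothetical; the paper's own granulation scheme may differ.

```python
# Hypothetical fuzzy vocabulary: each linguistic term is a trapezoid
# (a, b, c, d) on a normalized 0-1 scale, flat at 1 between b and c.
TERMS = {
    "low":    (0.0, 0.0, 0.2, 0.4),
    "medium": (0.3, 0.45, 0.55, 0.7),
    "high":   (0.6, 0.8, 1.0, 1.0),
}

def trapezoid(x, a, b, c, d):
    left = 1.0 if x >= b else ((x - a) / (b - a) if b > a else 0.0)
    right = 1.0 if x <= c else ((d - x) / (d - c) if d > c else 0.0)
    return max(0.0, min(left, right))

def encode(value):
    """Fuse numerical and linguistic records into one feature space:
    a number becomes its membership in every term, a linguistic term
    becomes full membership in itself and none in the others."""
    if isinstance(value, str):
        return [1.0 if t == value else 0.0 for t in TERMS]
    return [trapezoid(value, *TERMS[t]) for t in TERMS]

print(encode(0.35))      # numeric record    -> graded memberships
print(encode("medium"))  # linguistic record -> crisp membership vector
```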
3.
Granular neural networks: A study of optimizing allocation of information granularity in input space
In this paper, we develop a granular input space for neural networks, especially for multilayer perceptrons (MLPs). Unlike conventional neural networks, a neural network with granular input is built on top of a well-trained numeric neural network. We explore an efficient way of forming granular input variables so that the corresponding granular outputs of the neural network achieve the highest values of the criteria of specificity (and support). When we augment neural networks by distributing information granularity across the input variables, the output of a network shows different levels of sensitivity to different input variables. Capturing the relationship between input variables and the output is of great help for mining knowledge from the data, and in this way important features of the data can be easily found. As an essential design asset, information granules are considered in this construct. The quantification of information granules is viewed as a level of granularity given by the expert. The detailed optimization procedure for allocating information granularity is realized by an improved partheno-genetic algorithm (IPGA). The proposed algorithm is shown to be effective in numeric studies completed for synthetic data and data coming from the Machine Learning and StatLib repositories. Moreover, the experimental studies offer a deep insight into the specificity of input features.
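A minimal sketch of the core idea, assuming a trained one-hidden-layer MLP with ReLU activations: intervals whose widths are set by the allocated granularity are propagated through the weights, and the width of the output interval yields a specificity score. The weights, the per-input granularity vector `eps`, and the specificity formula are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def interval_forward(lo, hi, W1, b1, W2, b2):
    """Propagate an elementwise input interval [lo, hi] through the net:
    lower bounds take lo where a weight is positive and hi where negative."""
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    h_lo = relu(Wp @ lo + Wn @ hi + b1)
    h_hi = relu(Wp @ hi + Wn @ lo + b1)
    Vp, Vn = np.maximum(W2, 0), np.minimum(W2, 0)
    return Vp @ h_lo + Vn @ h_hi + b2, Vp @ h_hi + Vn @ h_lo + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # "well-trained" weights
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

x = rng.random(4)                          # numeric input
eps = np.array([0.05, 0.01, 0.02, 0.02])   # granularity allocated per input
y_lo, y_hi = interval_forward(x - eps, x + eps, W1, b1, W2, b2)
specificity = 1.0 / (1.0 + (y_hi - y_lo).item())  # narrower output = more specific
print(y_lo, y_hi, specificity)
```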
4.
Machine Learning - We introduce a declarative differentiable programming framework, based on the language of Lifted Relational Neural Networks, where small parameterized logic programs are used to...
5.
Granular neural web agents for stock prediction
Y.-Q. Zhang S. Akkaladevi G. Vachtsevanos T. Y. Lin 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2002,6(5):406-413
A granular neural Web-based stock prediction agent is developed using the granular neural network (GNN), which can discover fuzzy rules. Stock data sets are downloaded from the www.yahoo.com website and inserted into database tables using a Java program. The GNN is then trained using sample data for any stock. After learning from past stock data, the GNN is able to use the discovered fuzzy rules to make future predictions. Simulations with six different stocks (msft, orcl, dow, csco, ibm, km) show that the granular neural stock prediction agent gives lower average errors with large amounts of past training data and higher average errors with smaller amounts. Java Servlets, JavaScript, and JDBC are used, with an SQL database as the back end. The performance of the GNN algorithm is compared with that of the BP algorithm by training on the same data and predicting future stock values; the average error of the GNN is less than that of the BP algorithm.
6.
The process neural network (PrNN) is an ANN model suited to learning problems with signal inputs; its elementary unit is the process neuron (PN), an emerging neuron model. There is an essential difference between the process neuron and traditional neurons, but there also exists a relation between them: the former can be approximated by the latter to any precision. First, the PN model and some PrNNs are introduced briefly. Then, two PN approximation theorems are presented and proved in detail. Each theorem gives an approximating model for the PN model, i.e., the time-domain feature expansion model and the orthogonal decomposition feature expansion model. Some corollaries are given for the PrNNs based on these two theorems. Thereafter, simulation studies are performed on some simulated signal sets and a real dataset. The results show that the PrNN can effectively suppress noise polluting the signals and generalizes quite well. Finally, some problems on PrNNs are discussed and further research directions are suggested.
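For orientation, the formulation below is the commonly cited process-neuron model; the paper's exact notation may differ. The second display shows the time-domain feature expansion idea: sampling the signals reduces the PN to a traditional neuron, which is what the approximation theorems make precise.

```latex
% Commonly cited process-neuron model: inputs and weights are functions
% of time, aggregated by an integral before the activation f.
\[
  y = f\!\Big( \int_{0}^{T} \sum_{i=1}^{n} w_i(t)\, x_i(t)\, \mathrm{d}t - \theta \Big)
\]
% Time-domain feature expansion: sampling at t_1, ..., t_m reduces the PN
% to a traditional neuron over the nm numeric inputs x_i(t_j), with
% w_{ij} \approx w_i(t_j)\,\Delta t.
\[
  y \approx f\!\Big( \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij}\, x_i(t_j) - \theta \Big)
\]
```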
7.
8.
Eugene Wong 《Algorithmica》1991,6(1-6):466-478
The first purpose of this paper is to present a class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube. These algorithms, based on both diffusion processes and simulated annealing, are implementable as analog integrated circuits. Such circuits can be viewed as generalizations of neural networks of the Hopfield type, and are called “diffusion machines.” Our second objective is to show that “learning” in these networks can be achieved by a set of three interconnected diffusion machines: one that learns, one to model the desired behavior, and one to compute the weight changes.
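A minimal software sketch of the underlying idea, not the analog circuit: Langevin-type simulated annealing on the unit hypercube, where gradient descent is perturbed by noise whose amplitude decays under a logarithmic cooling schedule. The cost function, step size, and schedule are illustrative assumptions.

```python
import numpy as np

def anneal(grad, dim, steps=20000, dt=1e-3, seed=0):
    """Langevin-type annealing on [0,1]^dim: gradient step plus noise
    whose variance decays with a slow (logarithmic) cooling schedule."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)
    for k in range(1, steps + 1):
        T = 1.0 / np.log(k + 1.0)
        x = x - dt * grad(x) + np.sqrt(2.0 * T * dt) * rng.normal(size=dim)
        x = np.clip(x, 0.0, 1.0)   # keep iterates inside the hypercube
    return x

# Gradient of a rippled quadratic: many local minima on the hypercube.
f_grad = lambda x: 2.0 * (x - 0.9) + 3.0 * np.cos(9.0 * x)
print(anneal(f_grad, dim=2))
```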
9.
Boosting neural networks
Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, AdaBoost, has been applied with great success to several benchmark machine learning problems, using mainly decision trees as base classifiers. In this article we investigate whether AdaBoost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the AdaBoost algorithm. In particular, we compare training methods based on sampling the training set and on weighting the cost function. The results suggest that random resampling of the training data is not the main explanation for the improvements brought by AdaBoost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4% error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5% error on the UCI letters data set and 8.1% error on the UCI satellite data set, which is significantly better than boosted decision trees.
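A sketch of the "sampling" variant compared above: discrete AdaBoost with small neural networks as base classifiers, where each network is trained on a set resampled according to the current example weights. The dataset, network sizes, and round count are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, random_state=0)
y = 2 * y - 1                         # labels in {-1, +1}
n = len(y)
w = np.full(n, 1.0 / n)               # example weights
F = np.zeros(n)                       # boosted score
rng = np.random.default_rng(0)

for t in range(10):
    idx = rng.choice(n, size=n, p=w)  # resample by the current weights
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300,
                        random_state=t).fit(X[idx], y[idx])
    h = clf.predict(X)
    err = np.sum(w * (h != y))
    if err >= 0.5 or err == 0.0:      # base learner too weak or perfect
        break
    alpha = 0.5 * np.log((1.0 - err) / err)
    F += alpha * h
    w *= np.exp(-alpha * y * h)       # up-weight the mistakes
    w /= w.sum()

print("boosted training error:", np.mean(np.sign(F) != y))
```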
10.
Knowledge about the distribution of a statistical estimator is important for various purposes, such as the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data-generating process on the basis of the information provided by a given set of random observations. In this article we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
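The pairs bootstrap described above can be sketched as follows: refit the same network on resampled observations and collect one parameter's estimates. Note the caveat, not handled here, that network weights are only identified up to symmetries such as hidden-unit permutations; the data, architecture, and tracked weight are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=200)

def fit_weight(xi, yi):
    """Refit the network and return one tracked parameter estimate."""
    net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000,
                       random_state=0).fit(xi, yi)
    return net.coefs_[0][0, 0]        # a single input-to-hidden weight

estimates = []
for b in range(50):                                # B bootstrap replicates
    idx = rng.integers(0, len(y), size=len(y))     # resample (x, y) pairs
    estimates.append(fit_weight(x[idx], y[idx]))

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"95% bootstrap interval for the weight: [{lo:.3f}, {hi:.3f}]")
```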
11.
12.
Paolini Emilio De Marinis Lorenzo Cococcioni Marco Valcarenghi Luca Maggiani Luca Andriolli Nicola 《Neural computing & applications》2022,34(18):15589-15601
Neural Computing and Applications - Photonics-based neural networks promise to outperform electronic counterparts, accelerating neural network computations while reducing power consumption and...
13.
Sang-Woo Moon Seong-Gon Kong 《IEEE Transactions on Neural Networks》2001,12(2):307-317
This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded as bit strings that correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control.
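A toy sketch of the optimization loop, assuming a plain bit-string GA with one-point crossover and bit-flip mutation rather than the paper's 2D encoding and modified operators; the fitness function is a stand-in for building a BBNN from the configuration bits and scoring it.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.integers(0, 2, size=64)         # hypothetical optimal bit string

def fitness(bits):
    """Stand-in: the real fitness would configure a BBNN and score it."""
    return int(np.sum(bits == TARGET))

pop = rng.integers(0, 2, size=(30, 64))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(-scores)]           # best first
    children = []
    while len(children) < len(pop) - 5:      # elitism: carry over the top 5
        p1, p2 = pop[rng.integers(5)], pop[rng.integers(5)]
        cut = rng.integers(1, 63)
        child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
        child[rng.random(64) < 0.02] ^= 1              # bit-flip mutation
        children.append(child)
    pop = np.vstack([pop[:5], children])

print("best fitness:", fitness(pop[0]), "/ 64")
```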
14.
Stochastic neural networks
Eugene Wong 《Algorithmica》1991,6(1):466-478
The first purpose of this paper is to present a class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube. These algorithms, based on both diffusion processes and simulated annealing, are implementable as analog integrated circuits. Such circuits can be viewed as generalizations of neural networks of the Hopfield type, and are called diffusion machines. Our second objective is to show that learning in these networks can be achieved by a set of three interconnected diffusion machines: one that learns, one to model the desired behavior, and one to compute the weight changes. This research was supported in part by U.S. Army Research Office Grant DAAL03-89-K-0128.
15.
Hybridization of the probabilistic neural networks with feed-forward neural networks for forecasting
Mehdi Khashei Mehdi Bijari 《Engineering Applications of Artificial Intelligence》2012,25(6):1277-1288
Feed-forward neural networks (FFNNs) are among the most important neural networks and can be applied to a wide range of forecasting problems with a high degree of accuracy. Several large-scale forecasting competitions involving a large number of commonly used time series forecasting models conclude that combining forecasts from more than one model often leads to improved performance, especially when the models in the ensemble are quite different. In the literature, several hybrid models have been proposed by combining different time series models. In this paper, in contrast to the traditional hybrid models, a novel hybridization of feed-forward neural networks (FFNNs) with probabilistic neural networks (PNNs) is proposed in order to yield more accurate results than traditional feed-forward neural networks. In the proposed model, the estimated values of the FFNN models are modified based on the distinguished trend of their residuals and an optimum step length, which are obtained, respectively, from a probabilistic neural network and a mathematical programming model. Empirical results with three well-known real data sets indicate that the proposed model can be an effective way to construct hybrid models more accurate than FFNN models. Therefore, it can be applied as an appropriate alternative for forecasting tasks, especially when higher forecasting accuracy is needed.
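A heavily simplified sketch of the correction scheme under stated assumptions: an MLP classifier stands in for the PNN that detects the residual trend, and the mean absolute residual stands in for the step length that the paper obtains from a mathematical programming model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
series = np.sin(t / 10.0) + 0.05 * t + 0.1 * rng.normal(size=300)

def windows(s, k=5):
    """Lagged windows of length k as inputs, the next value as target."""
    X = np.stack([s[i:i + k] for i in range(len(s) - k)])
    return X, s[k:]

X, y = windows(series)
ffnn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                    random_state=0).fit(X[:250], y[:250])
resid = y[:250] - ffnn.predict(X[:250])

# Residual-trend classifier (an MLP standing in for the PNN).
trend = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000,
                      random_state=0).fit(X[:250], (resid > 0).astype(int))
step = np.abs(resid).mean()   # stand-in for the optimized step length

pred = ffnn.predict(X[250:])
hybrid = pred + step * (2 * trend.predict(X[250:]) - 1)
print("MSE FFNN:  ", np.mean((pred - y[250:]) ** 2))
print("MSE hybrid:", np.mean((hybrid - y[250:]) ** 2))
```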
16.
Pareto evolutionary neural networks
For forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimization. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimization. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case. This study constructs a novel methodology for implementing multiobjective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework: the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs, and degree of connectivity, and maintains a set of the Pareto-optimal ENNs that it discovers. New generalization methods, needed to deal with properties unique to multiobjective error minimization that are not apparent in the uni-objective case, are presented and compared on synthetic data, with a novel method based on bootstrapping of the training data shown to significantly improve generalization ability. Finally, experimental evidence is presented demonstrating the general application potential of the framework by generating populations of ENNs for forecasting 37 different international stock indexes.
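The multiobjective core reduces to maintaining the non-dominated set of networks, as in the sketch below; the error vectors are hypothetical, and smaller values are taken as better on every objective.

```python
import numpy as np

def pareto_front(errors):
    """Indices of candidates not dominated on every objective (minimize)."""
    errors = np.asarray(errors)
    front = []
    for i, e in enumerate(errors):
        dominated = np.any(np.all(errors <= e, axis=1) &
                           np.any(errors < e, axis=1))
        if not dominated:
            front.append(i)
    return front

# Hypothetical population: columns = (RMSE, maximum absolute error).
population_errors = [(0.20, 0.90), (0.25, 0.50), (0.30, 0.45),
                     (0.22, 0.80), (0.21, 1.10), (0.35, 0.44)]
print("Pareto-optimal models:", pareto_front(population_errors))
```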
17.
Eric A. Wan 《Applied Intelligence》1993,3(1):91-105
Traditional feedforward neural networks are static structures that simply map input to output. To better reflect the dynamics in the biological system, time dependency is incorporated into the network by using Finite Impulse Response (FIR) linear filters to model the processes of axonal transport, synaptic modulation, and charge dissipation. While a constructive proof gives a theoretical equivalence between the class of problems solvable by the FIR model and the static structure, certain practical and computational advantages exist for the FIR model. Adaptation of the network is achieved through an efficient gradient descent algorithm, which is shown to be a temporal generalization of the popular backpropagation algorithm for static networks. Applications of the network are discussed with a detailed example of using the network for time series prediction.
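A minimal sketch of a single FIR synapse feeding a neuron, assuming a tanh activation: the output at time t depends on a short window of past inputs weighted by the filter taps, which is the temporal generalization of the static weighted sum.

```python
import numpy as np

def fir_neuron(x, taps, bias=0.0):
    """One neuron whose synapse is an FIR filter: the activation at time t
    sees a window of the k most recent inputs, newest first."""
    k = len(taps)
    out = np.empty(len(x))
    for t in range(len(x)):
        window = x[max(0, t - k + 1):t + 1][::-1]
        out[t] = np.tanh(np.dot(taps[:len(window)], window) + bias)
    return out

t = np.linspace(0.0, 4.0 * np.pi, 200)
signal = np.sin(t)
taps = np.array([0.5, 0.3, 0.2])   # 3-tap synaptic filter
print(fir_neuron(signal, taps)[:5])
```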
18.
Parallel consensual neural networks
Benediktsson J.A. Sveinsson J.R. Ersoy O.K. Swain P.H. 《IEEE Transactions on Neural Networks》1997,8(1):54-64
A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times, and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
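A rough sketch of the consensus step, with simple elementwise transforms standing in for the paper's binary and wavelet-packet data transforms, and training accuracy standing in for its optimized output weighting.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

# Stage networks, each trained on a differently transformed copy of the data.
transforms = [lambda a: a, lambda a: a ** 2, lambda a: np.tanh(a)]
stages, weights = [], []
for f in transforms:
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                        random_state=0).fit(f(Xtr), ytr)
    stages.append(net)
    weights.append(net.score(f(Xtr), ytr))   # weight each stage by accuracy

weights = np.array(weights) / np.sum(weights)
proba = sum(w * net.predict_proba(f(Xte))
            for w, net, f in zip(weights, stages, transforms))
print("consensual accuracy:", np.mean(np.argmax(proba, axis=1) == yte))
```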
19.
The authors discuss the requirements of learning for generalization, where traditional methods based on gradient descent have limited success. A stochastic learning algorithm based on simulated annealing in weight space is presented. The authors verify the convergence properties and feasibility of the algorithm. An implementation of the algorithm and validation experiments are described.
20.
REUVEN R. LEVARY 《International Journal of Systems Science》2013,44(7):1415-1418
Neural networks integrated with rule-based systems that have a knowledge base offer more capabilities than networks not integrated with such systems.