Similar Documents
Found 20 similar documents.
1.
We introduce a design procedure for fuzzy systems based on information granulation and genetic optimization. Information granulation, and the resulting information granules themselves, become an important design aspect of fuzzy models. By accommodating the formalism of fuzzy sets, the model is geared towards capturing relationships between information granules (fuzzy sets) rather than concentrating on plain numeric data. Information granulation realized with standard C-Means clustering helps determine the initial values of the parameters of the fuzzy models. This concerns, in particular, such essential components of the rules as the initial apexes of the membership functions in the premise part of the fuzzy rules and the initial values of the polynomial functions in the consequence part. The initial parameters are then tuned with the aid of genetic algorithms (GAs) and the least squares method (LSM). The overall design methodology is a hybrid development process involving structural and parametric optimization: genetic algorithms and C-Means are used to generate a structurally as well as parametrically optimized fuzzy model. To identify the structure and estimate the parameters of the fuzzy model, we exploit joint and successive identification methodologies realized by means of genetic algorithms. The proposed model is evaluated on experimental data, and its performance is contrasted with the behavior of fuzzy models available in the literature.
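As a rough illustration of the granulation step, a minimal 1D fuzzy C-Means can supply cluster centers that could serve as initial apexes of premise membership functions. This is a sketch only; the function and parameter names are ours, not the paper's, and the centers are initialized deterministically across the data range for reproducibility.

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Minimal 1D fuzzy C-Means (requires c >= 2).

    Returns sorted cluster centers, usable as initial apexes of
    membership functions in a fuzzy model (illustrative sketch).
    """
    lo, hi = min(data), max(data)
    # deterministic initialization: centers spread evenly over the data range
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        # membership of each point in each cluster (fuzzifier m)
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid division by zero
            u.append([
                1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                for i in range(c)
            ])
        # update centers as membership-weighted means
        centers = [
            sum((u[k][i] ** m) * data[k] for k in range(len(data))) /
            sum(u[k][i] ** m for k in range(len(data)))
            for i in range(c)
        ]
    return sorted(centers)
```

On two well-separated clumps of data, the returned centers settle near the clump means, which is exactly the kind of starting point the abstract describes handing off to GA/LSM tuning.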

2.
Support vector machines (SVMs) are state-of-the-art tools for classification. However, their limited explanation capability is also their main weakness, which is why SVMs are typically regarded as incomprehensible black-box models. In the present study, a rule extraction algorithm is proposed to extract comprehensible rules from SVMs and enhance their explanation capability. The proposed algorithm uses the support vectors from a trained SVM model and combines them with genetic algorithms to construct rule sets. The method can not only generate rule sets from SVMs over mixed discrete and continuous variables but can also simultaneously select the important variables in the rule set. Measurements of accuracy, sensitivity, specificity, and fidelity are used to compare the performance of the proposed method with direct rule learners and several rule-extraction techniques for SVMs. The results indicate that the proposed method performs at least as well as the most successful direct rule learners. Finally, an actual pressure-ulcer case was studied, and the results indicate the practicality of the proposed method in real applications.

3.
To address the heavy computational burden caused by the complexity of guidance and control algorithms when a missile tracks a target in a 3D environment, a real-time dynamic target-tracking model is proposed in which a generalized regression neural network (GRNN) is optimized by a genetic algorithm with implicit crossover (RCGA). The missile defense zone is discretized into small modules to generate input data, and for each acceptable set of target parameters the RCGA estimates the navigation constant and the missile attention time. The input and output target-parameter sets are used to build the training data set required by the GRNN, and the trained GRNN is then applied in a real-time missile guidance system for target trajectories at arbitrary positions. The effectiveness and reliability of the proposed algorithm are verified with a tactical target simulation model. Simulation results show that, compared with several other target-tracking algorithms, the proposed algorithm achieves better real-time performance and higher target-localization accuracy, with a miss rate close to zero.
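For context, the GRNN at the core of such a model is simply a kernel-weighted average of training outputs. A 1D sketch follows; the smoothing parameter `sigma` stands in for what the genetic optimization would tune, and all names are illustrative rather than taken from the paper.

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """Generalized regression neural network prediction:
    a Gaussian-kernel-weighted average of the training outputs."""
    # kernel weight of each training sample relative to the query point
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    # normalized weighted average of the outputs
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)
```

Because there is no iterative training, a GRNN is well suited to the real-time use the abstract describes: prediction is a single pass over the stored samples.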

4.
Creating and rendering intermediate geometric primitives is one approach to visualizing data sets in 3D space. Several algorithms have been developed to construct isosurfaces from uniformly distributed 3D data sets. These algorithms assume that the function value varies linearly along the edges of each cell, but for irregular 3D data sets this assumption does not hold. Moreover, the depth sorting of cells is more complicated for irregular data sets, and it is indispensable for generating isosurface images, or semitransparent isosurface images, when the Z-buffer method is not adopted. In this paper, isosurface models based on the assumption that the function value has a nonlinear distribution within a tetrahedron are proposed. A depth sorting algorithm and data structures are developed for irregular data sets whose cells may be subdivided into tetrahedra. Implementation issues are discussed, and experimental results are shown to illustrate the potential of this technique.
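The linear-variation baseline that this paper relaxes can be sketched as standard edge interpolation within one tetrahedron: find where the isosurface level crosses each edge, assuming the value varies linearly between the two endpoints (names here are ours, for illustration).

```python
from itertools import combinations

def iso_crossings(verts, vals, iso):
    """For one tetrahedron (4 vertices with scalar values), return the
    points where an isosurface of level `iso` crosses the edges, under
    the linear-variation assumption."""
    pts = []
    for i, j in combinations(range(4), 2):
        a, b = vals[i], vals[j]
        if (a - iso) * (b - iso) < 0:          # edge straddles the level
            t = (iso - a) / (b - a)            # linear interpolation factor
            pts.append(tuple(verts[i][k] + t * (verts[j][k] - verts[i][k])
                             for k in range(3)))
    return pts
```

For irregular data the paper argues this per-edge linearity is exactly what breaks down, motivating its nonlinear within-tetrahedron models.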

5.
Most association rule mining algorithms use a randomly generated single seed to initialize a population, without paying attention to the effectiveness of that population in evolutionary learning. Recent research has shown a significant impact of the initial population on the production of good solutions over several generations of a genetic algorithm. Single-seed genetic algorithms suffer from two major challenges: (1) solutions vary, since different seeds generate different initial populations; and (2) it is difficult to define a good seed for a specific application. To avoid these problems, in this paper we propose MSGA, a new multiple-seeds-based genetic algorithm that generates multiple seeds from different domains of the solution space to discover high-quality rules from a large data set. The scheme introduces an m-domain model and an m-seeds selection process, through which the whole solution space is subdivided into m equally sized domains and a seed is selected from each domain. Using these seeds, the method generates an effective initial population for evolutionary learning of the fitness value of each rule. As a result, strong search efficiency is obtained at the beginning of the evolution, achieving fast convergence. MSGA is tested with different mutation and crossover operators for mining interesting Boolean association rules from four real-world data sets, and the results are compared to different single-seed genetic algorithms under the same conditions.
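The m-domain idea can be sketched for an integer-encoded solution space: split the space into m equally sized domains, draw one seed per domain, and cluster initial individuals around each seed so every region is represented. This is our own minimal rendering of the scheme, not the paper's encoding.

```python
import random

def m_seeds(space_size, m, rng):
    """Split an integer-encoded solution space [0, space_size) into m
    equally sized domains and draw one seed from each domain."""
    width = space_size // m
    return [rng.randrange(i * width, (i + 1) * width) for i in range(m)]

def initial_population(space_size, m, per_seed, seed=1):
    """Build an initial population by perturbing each domain seed,
    so the population covers the whole solution space."""
    rng = random.Random(seed)
    pop = []
    for s in m_seeds(space_size, m, rng):
        # small perturbations around each seed, clamped to the space
        pop.extend(min(space_size - 1, max(0, s + rng.randint(-3, 3)))
                   for _ in range(per_seed))
    return pop
```

Compared with a single random seed, this guarantees that no domain of the space starts the evolution unrepresented, which is the mechanism the abstract credits for fast convergence.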

6.
Features are the basic elements which transform CAD data into instructions necessary for automatic generation of manufacturing process plans. In this paper, a hybrid of graph-based and hint-based techniques is proposed to automatically extract interacting features from solid models. The graph-based hints generated by this approach are in geometrical and topological compliance with their corresponding features. They indicate whether the feature is 2.5D, floorless or 3D. To reduce the product model complexity while extracting features, a method to remove fillets existing in the boundary of a 2.5D feature is also proposed. Finally, three geometric completion algorithms, namely, Base-Completion, Profile-Completion and 3D-volume generation algorithms are proposed to generate feature volumes. The base-completion and profile-completion algorithms generate maximal volumes for 2.5D features. The 3D volume generation algorithm extracts 3D portions of the part.

7.
Advocating the Use of Imprecisely Observed Data in Genetic Fuzzy Systems
In our opinion, and in accordance with current literature, the precise contribution of genetic fuzzy systems to the corpus of the machine learning theory has not been clearly stated yet. In particular, we question the existence of a set of problems for which the use of fuzzy rules, in combination with genetic algorithms, produces more robust models, or classifiers that are inherently better than those arising from the Bayesian point of view. We will show that this set of problems actually exists, and comprises interval and fuzzy valued datasets, but it is not being exploited. Current genetic fuzzy classifiers deal with crisp classification problems, where the role of fuzzy sets is reduced to give a parametric definition of a set of discriminant functions, with a convenient linguistic interpretation. Provided that the customary use of fuzzy sets in statistics is vague data, we propose to test genetic fuzzy classifiers over imprecisely measured data and design experiments well suited to these problems. The same can be said about genetic fuzzy models: the use of a scalar fitness function assumes crisp data, where fuzzy models, a priori, do not have advantages over statistical regression.

8.
Implicit Surface-Based Geometric Fusion
This paper introduces a general purpose algorithm for reliable integration of sets of surface measurements into a single 3D model. The new algorithm constructs a single continuous implicit surface representation which is the zero-set of a scalar field function. An explicit object model is obtained using any implicit surface polygonization algorithm. Object models are reconstructed from both multiple-view conventional 2.5D range images and hand-held sensor range data. To our knowledge this is the first geometric fusion algorithm capable of reconstructing 3D object models from noisy hand-held sensor range data. This approach has several important advantages over existing techniques. The implicit surface representation allows reconstruction of unknown objects of arbitrary topology and geometry, and its continuity enables reliable reconstruction of complex geometry. Correct integration of overlapping surface measurements in the presence of noise is achieved using geometric constraints based on measurement uncertainty, which ensures that the algorithm is robust to significant levels of measurement noise. Previous implicit surface-based approaches use discrete representations, resulting in unreliable reconstruction for regions of high curvature or thin surface sections; direct representation of the implicit surface boundary ensures correct reconstruction of object surfaces of arbitrary topology. Fusion of overlapping measurements is performed using operations in 3D space only. This avoids the local 2D projection required by many previous methods, which limits the object surface geometry that can be reliably reconstructed. All previous geometric fusion algorithms developed for conventional range sensor data are based on the 2.5D image structure, preventing their use for hand-held sensor data. Performance evaluation of the new integration algorithm against existing techniques demonstrates improved reconstruction of complex geometry.

9.
Imbalanced data classification, an important type of classification task, is challenging for standard learning algorithms. Among the strategies for handling the problem, data-level imbalanced learning methods have attracted ample attention from researchers in recent years. However, most data-level approaches linearly generate new instances from local neighbor information rather than from the overall data distribution. Differing from these algorithms, in this study we develop a new data-level method, generative learning (GL), to deal with imbalanced problems. In GL, we fit the distribution of the original data with a Gaussian mixture model and generate new data on the basis of that distribution. The generated data, including synthetic minority and majority classes, are used to train learning models. The proposed method is validated through experiments on real-world data sets. Results show that our approach is competitive with other methods, such as SMOTE, SMOTE-ENN, SMOTE-TomekLinks, Borderline-SMOTE, and safe-level-SMOTE, and a Wilcoxon signed-rank test confirms the significant superiority of our proposal.
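The contrast with neighbor-based oversampling can be sketched in the spirit of GL: fit a distribution to the minority class and sample new points from it until the classes are balanced. The paper uses a full Gaussian mixture model; for brevity this stdlib-only sketch fits a single Gaussian component to 1D data, and all names are illustrative.

```python
import random
import statistics

def generate_balanced(minority, majority, seed=0):
    """Distribution-based oversampling sketch: fit a Gaussian to the 1D
    minority class and draw synthetic points from it until the minority
    matches the majority in size."""
    rng = random.Random(seed)
    mu = statistics.fmean(minority)
    sigma = statistics.stdev(minority)
    need = len(majority) - len(minority)
    synthetic = [rng.gauss(mu, sigma) for _ in range(need)]
    return minority + synthetic
```

Unlike SMOTE-style interpolation between neighbors, samples drawn this way reflect the fitted global distribution rather than line segments between existing points.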

10.
To solve many-objective optimization problems (MaOPs) with evolutionary algorithms (EAs), maintaining convergence and diversity is essential and difficult. Improved multi-objective evolutionary algorithms (MOEAs), usually based on the genetic algorithm (GA), have been applied to MaOPs; they use GA crossover and mutation operators to generate new solutions. In this paper, a new approach based on decomposition and the MOEA/D (multi-objective evolutionary algorithm based on decomposition) framework is proposed: the model- and clustering-based estimation of distribution algorithm (MCEDA). MCEDA is a new estimation of distribution algorithm (EDA) framework intended to extend EDAs to MaOPs, implemented as two similar algorithms, MCEDA/B (based on the bits model) and MCEDA/RM (based on the regular model). In MCEDA, the problem is decomposed into several subproblems. For each subproblem, a clustering algorithm divides the population into several subgroups, and on each subgroup an estimation model is created to generate the new population. Two kinds of models are adopted: the newly proposed bits model and the regularity model used in RM-MEDA (a regularity-model-based multi-objective estimation of distribution algorithm). A non-dominated selection operator is applied to improve convergence. The proposed algorithms have been tested on the DTLZ benchmark test suite for evolutionary algorithms. Comparison with several state-of-the-art algorithms indicates that MCEDA is a competitive and promising approach.
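The decomposition step that MOEA/D-style frameworks rely on can be shown concretely with the Tchebycheff scalarization: each weight vector turns a many-objective vector into a single subproblem value to be minimized (a standard construction; this sketch is not MCEDA itself).

```python
def tchebycheff(f, weights, ideal):
    """Tchebycheff scalarization: the weighted worst deviation of the
    objective vector f from the ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

def best_for_subproblem(candidates, weights, ideal):
    """Pick the candidate objective vector that minimizes the
    subproblem defined by this weight vector."""
    return min(candidates, key=lambda f: tchebycheff(f, weights, ideal))
```

Varying the weight vector across subproblems spreads the search along the Pareto front, which is the mechanism MCEDA builds its per-subproblem estimation models on top of.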

11.
We present a technique using Markov models with spectral features for recognizing 2D shapes. We analyze the properties of Fourier spectral features derived from closed contours of 2D shapes and use these features for 2D pattern recognition. We develop algorithms for reestimating the parameters of hidden Markov models. To demonstrate the effectiveness of our models, we have tested our methods on two image databases: hand tools and unconstrained handwritten numerals. We achieve high recognition rates of 99.4 percent and 96.7 percent without rejection on these two sets of image data, respectively.
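A common way to derive Fourier spectral features from a closed contour is to treat the boundary points as complex numbers, take low-order DFT magnitudes, and normalize by the first harmonic so the features are invariant to scale (and, via magnitudes, to rotation and starting point). This sketch shows that construction in general; the paper's exact feature set may differ.

```python
import cmath
import math

def fourier_descriptors(contour, k=4):
    """Normalized Fourier descriptor magnitudes for a closed 2D contour,
    given as a list of (x, y) points."""
    n = len(contour)
    z = [complex(x, y) for x, y in contour]
    def F(u):
        # u-th DFT coefficient of the complex contour signal
        return sum(z[t] * cmath.exp(-2j * math.pi * u * t / n)
                   for t in range(n)) / n
    base = abs(F(1))                 # first harmonic sets the scale
    return [abs(F(u)) / base for u in range(2, 2 + k)]
```

For a sampled circle the higher harmonics vanish, so all descriptors are near zero, and scaling the contour leaves the descriptors unchanged.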

12.
This paper introduces a new neural-fuzzy technique combined with genetic algorithms for the prediction of permeability in petroleum reservoirs. The methodology uses neural networks to generate membership functions and to approximate permeability automatically from digitized data (well logs) obtained from oil wells. The trained networks are used as fuzzy rules and hyper-surface membership functions. The results of these rules are interpolated based on the membership grades and on the parameters of the defuzzification operators, which are optimized by genetic algorithms. The use of the integrated methodology is demonstrated via a case study in a petroleum reservoir offshore Western Australia. The results show that the integrated neural-fuzzy-genetic-algorithm (INFUGA) gives the smallest error on the unseen data when compared to similar algorithms. INFUGA is expected to provide a significant improvement when the unseen data come from a mixed or complex distribution.

13.
Recently, evolutionary multiobjective optimization (EMO) algorithms have been utilized for the design of accurate and interpretable fuzzy rule-based systems. This research area is often referred to as multiobjective genetic fuzzy systems (MoGFS), where EMO algorithms are used to search for non-dominated fuzzy rule-based systems with respect to their accuracy and interpretability. In this paper, we examine the ability of EMO algorithms to efficiently search for Pareto optimal or near Pareto optimal fuzzy rule-based systems for classification problems. We use NSGA-II (elitist non-dominated sorting genetic algorithm), its variants, and MOEA/D (multiobjective evolutionary algorithm based on decomposition) in our multiobjective fuzzy genetics-based machine learning (MoFGBML) algorithm. Classification performance of obtained fuzzy rule-based systems by each EMO algorithm is evaluated for training data and test data under various settings of the available computation load and the granularity of fuzzy partitions. Experimental results in this paper suggest that reported classification performance of MoGFS in the literature can be further improved using more computation load, more efficient EMO algorithms, and/or more antecedent fuzzy sets from finer fuzzy partitions.
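The non-dominated sorting at the heart of NSGA-II splits candidate objective vectors into successive Pareto fronts. A compact O(n^2 m) sketch of that first phase follows (for minimization; the original algorithm adds book-keeping for efficiency):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts, best first."""
    remaining = list(points)
    fronts = []
    while remaining:
        # current front: points not dominated by any other remaining point
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

In an accuracy-versus-interpretability search, the first front returned is exactly the set of non-dominated rule-based systems the abstract refers to.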

14.
Cluster analysis is sensitive to noise variables intrinsically contained within high dimensional data sets. As the size of data sets increases, clustering techniques robust to noise variables must be identified. This investigation gauges the capabilities of recent clustering algorithms applied to two real data sets increasingly perturbed by superfluous noise variables. The recent techniques include mixture models of factor analysers and auto-associative multivariate regression trees. Statistical techniques are integrated to create two approaches useful for clustering noisy data: multivariate regression trees with principal component scores and multivariate regression trees with factor scores. The tree techniques generate the superior clustering results.

15.
We present a new method of surface reconstruction that generates smooth and seamless models from sparse, noisy, nonuniform, and low resolution range data. Data acquisition techniques from computer vision, such as stereo range images and space carving, produce 3D point sets that are imprecise and nonuniform when compared to laser or optical range scanners. Traditional reconstruction algorithms designed for dense and precise data do not produce smooth reconstructions when applied to vision-based data sets. Our method constructs a 3D implicit surface, formulated as a sum of weighted radial basis functions. We achieve three primary advantages over existing algorithms: (1) the implicit functions we construct estimate the surface well in regions where there is little data, (2) the reconstructed surface is insensitive to noise in data acquisition because we can allow the surface to approximate, rather than exactly interpolate, the data, and (3) the reconstructed surface is locally detailed, yet globally smooth, because we use radial basis functions that achieve multiple orders of smoothness.
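The approximate-versus-interpolate trade-off the abstract describes can be shown in miniature: a 1D weighted-RBF fit where a diagonal smoothing term relaxes exact interpolation. This is a stdlib-only sketch under our own simplifications (Gaussian basis, 1D data), not the paper's 3D formulation.

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, sigma=1.0, smooth=0.0):
    """Fit f(x) = sum_i w_i * phi(|x - x_i|) with Gaussian basis phi.
    With smooth > 0 the fit approximates rather than interpolates,
    the mechanism the abstract credits for noise insensitivity."""
    phi = lambda r: math.exp(-(r / sigma) ** 2)
    A = [[phi(abs(xi - xj)) + (smooth if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - ci)) for wi, ci in zip(w, xs))
```

With `smooth=0.0` the fitted function passes exactly through the samples; increasing `smooth` pulls it away from noisy samples toward a smoother curve.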

16.
Computers & Geosciences, 2006, 32(4): 497-511
When seismic profiles deviate significantly from straight lines, the results from 2D traveltime inversion programs will be in error due to the inherent 3D component present in the data. Thus, it is necessary to use a program that can handle the 3D aspects of the acquisition geometry. This study compares the performance and results from two computer programs for 3D seismic tomography. These algorithms are the package for First Arrival Seismic Tomography (FAST) and a Local Earthquake tomography program, PStomo_eq. Although both codes invert for the velocity field using the conjugate gradient solver LSQR, the common smoothness constraint is handled differently. In addition, the programs do not incorporate the same options for user-specified constraints. These differences in implementation are clearly observed in the inverted velocity fields obtained in this study. Both FAST and PStomo_eq are applied to synthetic and real data sets with crooked line geometry. First arrival traveltimes from seismic data acquired in the Siljan ring impact area are used for the real data set test. The results show that FAST gives smoother models than PStomo_eq. On the real data set PStomo_eq showed a better correlation to the information at hand. Different criteria exist for what is desirable in a model; thus, the choice of which program to use will mostly depend on the particular goals of the study.

17.
Recent research shows that rule-based models perform well when classifying large data sets such as data streams with concept drift. A genetic algorithm is a strong rule-based classification algorithm, but it has been used only for mining small static data sets. If the genetic algorithm can be made scalable and adaptable by reducing its I/O intensity, it will become an efficient and effective tool for mining large data sets such as data streams. In this paper a scalable and adaptable online genetic algorithm is proposed to mine classification rules from data streams with concept drift. Since data streams are generated continuously at a rapid rate, the proposed method does not use a fixed static data set for fitness calculation. Instead, it extracts a small snapshot of training examples from the current part of the data stream whenever data is required for fitness calculation. The proposed method also builds rules for all classes separately, in a parallel, independent, iterative manner. This makes the method scalable to data streams and adaptable to concept drifts in a fast and natural way, without storing the whole stream, or part of it in compressed form, as other rule-based algorithms do. The results of the proposed method are comparable with those of standard methods used for mining data streams.
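The snapshot-based fitness idea can be sketched directly: instead of a fixed training set, each round of fitness evaluation consumes the next few examples from the live stream. The rule encoding below is illustrative only, not the paper's.

```python
from itertools import islice

def snapshot(stream, size):
    """Take the next `size` (x, label) examples from the live stream
    as the training snapshot for one round of fitness evaluation."""
    return list(islice(stream, size))

def rule_fitness(rule, examples):
    """Fitness of a candidate rule = its accuracy on the current snapshot."""
    return sum(1 for x, label in examples if rule(x) == label) / len(examples)
```

Because each call to `snapshot` advances the stream, fitness is always measured against the current concept, which is what makes the approach adaptable to drift without storing the stream.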

18.
A regularization-based approach to 3D reconstruction from multiple images is proposed. As one of the most widely used multiple-view 3D reconstruction algorithms, Space Carving can produce a Photo Hull of a scene, which is at best a coarse volumetric model. The two-view stereo algorithm, on the other hand, can generate a more accurate reconstruction of the surfaces, provided that a given surface is visible to both views. The proposed method is essentially a data fusion approach to 3D reconstruction, combining the above two algorithms by means of regularization. The process is divided into two steps: (1) computing the Photo Hull from multiple calibrated images and (2) selecting two of the images as input and solving the two-view stereo problem by global optimization, using the Photo Hull as the regularizer. Our dynamic programming implementation of this regularization-based stereo approach potentially provides an efficient and robust way of reconstructing 3D surfaces. The results of an implementation of this theory are presented on real data sets and compared with peer algorithms.

19.
We propose a new framework to reconstruct building details by automatically assembling 3D templates on coarse textured building models. In a preprocessing step, we generate an initial coarse model to approximate a point cloud computed using Structure from Motion and Multi View Stereo, and we model a set of 3D templates of facade details. Next, we optimize the initial coarse model to enforce consistency between geometry and appearance (texture images). Then, building details are reconstructed by assembling templates on the textured faces of the coarse model. The 3D templates are automatically chosen and located by our optimization‐based template assembly algorithm that balances image matching and structural regularity. In the results, we demonstrate how our framework can enrich the details of coarse models using various data sets.
