Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Training data development with the D-optimality criterion
The importance of using optimum experimental design (OED) concepts when selecting data for training a neural network is highlighted in this paper. We demonstrate that an optimality criterion borrowed from another field, namely the D-optimality criterion used in OED, can be used to enhance the training value of a small training data set. This is important in cases where resources are limited and collecting data is expensive, hazardous, or time consuming. The analysis results in the cases considered indicate that even with a small set of training examples, as long as the training data set was chosen according to the D-optimality criterion, the network was able to generalize and, as a result, was able to fit complex surfaces.
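As a sketch of how the D-optimality criterion can drive data selection, the greedy routine below picks, from a pool of candidate inputs, the subset whose information matrix X^T X has maximal determinant. This is a minimal illustration, not the authors' procedure; the ridge term and all names are our own.

```python
import numpy as np

def d_optimal_subset(candidates, n_select, ridge=1e-8):
    """Greedily pick rows of `candidates` (a design matrix) that maximize
    det(X^T X), i.e. the D-optimality criterion."""
    chosen = []
    remaining = list(range(len(candidates)))
    for _ in range(n_select):
        best_i, best_logdet = None, -np.inf
        for i in remaining:
            X = candidates[chosen + [i]]
            # the ridge term keeps the information matrix invertible
            # while fewer rows than columns have been chosen
            M = X.T @ X + ridge * np.eye(X.shape[1])
            _, logdet = np.linalg.slogdet(M)
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen
```

For a quadratic model on [-1, 1] this recovers the classical D-optimal support points -1, 0, 1, which is a useful sanity check on the greedy heuristic.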

2.
The problem under consideration is to obtain a measurement schedule for training neural networks. This task is perceived as an experimental design over a given design space, carried out so as to minimize the difference between the neural network and the system being considered. This difference can be expressed in many different ways, and one of them, namely the D-optimality criterion, is used in this paper. In particular, the paper presents a unified and comprehensive treatment of this problem by discussing both existing and previously unpublished properties of optimum experimental design (OED) for neural networks. The consequences of these properties are discussed as well. A hybrid algorithm that can be used for both the training and the data development of neural networks is another important contribution of this paper. A careful analysis of the algorithm is presented, and a comprehensive convergence analysis based on the Lyapunov method is given. The paper contains a number of numerical examples that justify the application of OED theory to neural networks. Moreover, an industrial application example dealing with a valve actuator is given.

3.
The problem of designing optimal blood sampling protocols for kinetic experiments in pharmacology, physiology and medicine is briefly described, followed by a presentation of several interesting results based on sequentially optimized studies we have performed in more than 75 laboratory animals. Experiences with different algorithms and design software are also presented. The overall approach appears to be highly efficacious, from the standpoints of both laboratory economics and resulting model accuracy. Optimal sampling schedules (OSS) have a number of distinct time points equal to the number of unknown parameters for a popular class of models. Replication rather than distribution of samples provides maximum accuracy when additional sampling is feasible, and specific replicates can be used to weight some parameter accuracies more than others, even when a D-optimality criterion is used. Our sequential experiment scheme often converged in one step, and the resulting optimal sampling schedules were reasonably robust, allowing for biological variation among the animals studied.

4.
The note proposes an efficient nonlinear identification algorithm that combines locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximal model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances the model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selection criterion, after which the entire model construction procedure becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can be chosen with ease from a wide range of values.

5.
Various sequential derivative-free optimization algorithms exist for solving black-box optimization problems. Two important building blocks in these algorithms are the trust region and the geometry improvement. In this paper, we propose to incorporate the D-optimality criterion, well known in the design of experiments, into these algorithms in two different ways. Firstly, it is used to define a trust region that adapts its shape to the locations of the points in which the objective function has been evaluated. Secondly, it is used to determine an optimal geometry-improving point. The proposed trust region and geometry improvement can both be implemented into existing sequential algorithms.

6.
Finding proper starting points for all the intersection curves between two surfaces is a key step in numerical tracing methods for surface-surface intersection (SSI) problems. A number of methods [1] have been introduced to calculate the starting points. Cugini et al. [2] introduced the concept of shrinking bounding boxes to calculate starting points; this method is simple and in some cases effective, but it may miss some intersection components. Muellenheim [3] presented a…

7.
The accuracy of different approximating response surfaces is investigated. In classical response surface methodology (CRSM), the true response function is usually replaced with a low-order polynomial. In Kriging, the true response function is replaced with a low-order polynomial and an error-correcting function. In this paper the error part of the approximating response surface is obtained from simple point Kriging theory. The combined polynomial and error-correcting function will be referred to as a Kriging surface approximation.

To be able to use Kriging, the spatial correlation or covariance must be known. In this paper, the error is assumed to have a normal distribution, and the covariance is assumed to depend on only one parameter, which is found with the maximum-likelihood method. A weighted least-squares procedure is used to determine the trend before simple point Kriging is applied to the error function. In CRSM the surface approximation is determined through an ordinary least-squares fit. In both cases the D-optimality criterion has been used to distribute the design points.

From this investigation we have found that a low-order polynomial assumption should be made with the Kriging approach. We have also concluded that Kriging resolves abrupt changes in the response, e.g. due to buckling, contact or plastic deformation, better than CRSM.
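The "trend plus kriged error" construction described above can be illustrated in one dimension. In this sketch the single covariance parameter is fixed by hand rather than fitted by maximum likelihood, the trend fit is ordinary rather than weighted least squares, and all names are our own.

```python
import numpy as np

def gauss_cov(d, theta):
    # one-parameter Gaussian covariance, mirroring the
    # single-parameter covariance model assumed in the paper
    return np.exp(-theta * d ** 2)

def kriging_surface(x_train, y_train, x_query, theta=50.0, nugget=1e-10):
    """Linear trend + simple point Kriging of the residual (error) part."""
    # trend by least squares
    A = np.column_stack([np.ones_like(x_train), x_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    resid = y_train - A @ beta
    # simple point Kriging of the error-correcting function
    D = np.abs(x_train[:, None] - x_train[None, :])
    K = gauss_cov(D, theta) + nugget * np.eye(len(x_train))
    w = np.linalg.solve(K, resid)
    k_q = gauss_cov(np.abs(x_query[:, None] - x_train[None, :]), theta)
    return beta[0] + beta[1] * x_query + k_q @ w
```

Because the error term interpolates the residuals, the surface reproduces the training responses exactly (up to the tiny nugget), which is the property that lets Kriging resolve abrupt response changes better than a pure polynomial fit.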

8.
To find starting points for all the intersection curves, one of the surfaces is subdivided into small surface patches. Based on an algorithm for computing the minimum distance between two surfaces, the intersections of every patch with the other surface are detected, and starting points are calculated by dichotomy. This algorithm shows superior efficiency in both computational complexity and the number of iterations required. It can be used to determine exact starting points on all possible solution curves between any kind of parametric sculptured surfaces.
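The dichotomy (bisection) step can be shown on a one-dimensional analogue: once subdivision has isolated an interval on which two curves cross, halving the interval refines the starting point. The surface-patch and minimum-distance machinery of the paper is omitted; this is only the root-refinement idea.

```python
def bisect_root(f, a, b, tol=1e-10):
    """Dichotomy: repeatedly halve [a, b] until the sign-change
    bracket is narrower than `tol`."""
    fa = f(a)
    assert fa * f(b) <= 0, "interval must bracket a sign change"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m                # root lies in the left half
        else:
            a, fa = m, f(m)      # root lies in the right half
    return 0.5 * (a + b)
```

For example, a starting point for the intersection of y = x^2 and y = x + 1 is the root of x^2 - x - 1 on [1, 2], the golden ratio.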

9.
A new robust neurofuzzy model construction algorithm is introduced for modeling a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces so as to enhance model transparency, with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, is extended to fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.

10.
A two-level learning method for radial basis function neural networks
Building an RBF (radial basis function) neural network model hinges on determining the hidden-center vectors, the basis-width parameter, and the number of hidden nodes. To design a structurally simple RBF network with good generalization performance, this paper proposes a new two-level learning method. At the lower level, a parsimonious RBF network model is constructed automatically by an algorithm combining regularized orthogonal least squares with D-optimal experimental design; at the upper level, particle swarm optimization selects the best combination of the three learning parameters that affect the network's generalization performance, namely the basis width, the regularization coefficient, and the D-optimality cost coefficient. Simulation examples demonstrate the effectiveness of the method.

11.
We address the problem of finding an optimal polygonal approximation of digitized curves. Several optimal algorithms have already been proposed to determine the minimum number of points that define a polygonal approximation of the curve with respect to an error criterion. We present a new algorithm with reasonable complexity that determines the optimal polygonal approximation under the mean-square error norm. The idea is to estimate the remaining number of segments and to integrate this cost into the A* algorithm. The solution is optimal in terms of the minimum number of segments.

12.
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and M-estimators for parameter estimation. The M-estimator is a classical robust parameter estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model-structure robustness criterion from experimental design for tackling ill-conditioning in the model structure. OFR, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions with improved performance, via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
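The IRLS inner loop for an M-estimator can be sketched in isolation, here with the Huber weight function and a MAD scale estimate. These are standard textbook choices that we assume for illustration; the abstract does not commit to a specific loss function, and the Gram-Schmidt structure selection is omitted.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber M-estimate of linear regression weights via
    iteratively reweighted least squares (IRLS)."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]      # start from OLS
    for _ in range(n_iter):
        r = y - X @ w
        # robust scale via median absolute deviation (floored for safety)
        s = max(np.median(np.abs(r)) / 0.6745, 1e-8)
        u = np.abs(r) / s
        # Huber weights: 1 for small residuals, delta/|u| for outliers
        wt = np.where(u <= delta, 1.0, delta / u)
        W = np.diag(wt)
        w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return w
```

With one gross outlier in otherwise clean data, the IRLS fit recovers the true line where ordinary least squares would be pulled away.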

13.
The computational tasks required in the coordinate metrology of manufactured surfaces, including Point Measurement Planning (PMP), Substitute Geometry Estimation (SGE), and Deviation Zone Evaluation (DZE), are typically conducted sequentially. This paper presents a methodology that integrates PMP, SGE, and DZE in order to reduce the inherent sources of computational uncertainty in the coordinate measurement of planar surfaces. The methodology is developed as a closed loop of the three tasks, where the results of SGE and DZE are used to properly revise the set of sample points. The goal of this study is to find a PMP approach that represents the inspected surface efficiently using a minimal number of sampled points. Several different sampling strategies are presented and employed for the inspection of manufactured surfaces with various patterns of geometric-deviation distribution. A comprehensive experimental study is conducted and statistically analyzed to identify the most reliable sampling strategy to serve as the PMP engine of the loop. It is shown that the developed methodology effectively finds the minimum deviation zone of the surfaces using a small number of sample points. The adaptive computational coordinate metrology method developed in this work can potentially be utilized in the inspection of other geometric features and freeform surfaces.

14.
Surrogate models are widely used to predict the response function of a system and to quantify the uncertainty associated with it. Constructing a surrogate model requires response quantities at preselected sample points, which can be obtained in two ways: either the surrogate model is constructed using one-shot experimental design techniques, or the sample points are generated sequentially so that optimal sample points for a specific problem can be obtained. This paper presents a comprehensive comparison between these two types of sampling techniques for the construction of more accurate surrogate models. The two most popular one-shot sampling strategies, Latin hypercube sampling and the Sobol sequence, and four different sequential experimental designs (SED), namely Monte Carlo intersite projected (MCIP), Monte Carlo intersite projected threshold (MCIPT), optimizer projected (OP) and LOLA-Voronoi (LV), are considered in the present study. Two widely used surrogate models, polynomial chaos expansion and Kriging, are used to check the applicability of the experimental design techniques. Three different types of numerical problems are solved using the two above-mentioned surrogate models, considering all the experimental design techniques independently. Further, all the results are compared with standard Monte Carlo simulation (MCS). The overall study shows that SED predicts the response functions accurately with an acceptable number of sample points, even for high-dimensional problems, maintaining the balance between accuracy and efficiency. More specifically, Kriging based on MCIPT and on LV outperforms the other combinations.
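Latin hypercube sampling, the first of the one-shot designs compared above, can be written in a few lines: each axis is split into n equal bins and every bin is hit exactly once per axis. This is a generic textbook version, not tied to the paper's implementation.

```python
import numpy as np

def latin_hypercube(n, dim, rng=None):
    """n samples in [0, 1)^dim with one point per axis-aligned bin."""
    rng = np.random.default_rng(rng)
    # row i lands in bin i of every axis...
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    # ...then each axis is permuted independently to decouple them
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]
    return u
```

The defining property is easy to check: flooring each coordinate times n gives a permutation of the bin indices 0..n-1 on every axis.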

15.
The present work determines the optimal number of cells for minimum-weight design of an aircraft wing under strength and natural-frequency constraints for two load cases: (i) uniform loading and (ii) a tip moment. Two SUMT optimization algorithms, with and without parameters, have been used, and suggestions for faster convergence of one of them are given. The importance of different starting design points and convergence criteria in reaching the constrained minimum is shown. The variables considered are length, chord, skin thickness and various spar thicknesses. The natural frequency has been obtained using the exact continuum theory of cylindrical tubes, and a comparison with elementary theory has been made. The optimization results indicate that increasing the number of cells beyond two does not lead to any substantial reduction or increase in weight. Also, a stringent convergence criterion and more than one starting point are necessary for better results.

16.
Atomistic models are a very valuable simulation tool in the field of materials science. Among them are continuous cellular automata (CCA), which can accurately simulate the chemical etching process used in micro-electro-mechanical systems (MEMS) micromachining. Due to the intrinsic atomistic nature of CCA, simulation results are obtained in the form of a cloud of points, so data visualization has usually been problematic. When these models are used as part of a computer-aided design tool, good data visualization is very important. In this paper, a minimum-energy model implemented with the level set (LS) method for improving the visual representation of simulated MEMS is presented. Additionally, the sparse field method has been applied to reduce the high computational cost of the original LS. Finally, some reconstructed surfaces with completely different topologies are presented, demonstrating the effectiveness of our implementation and its ability to reproduce any real surface, including flat and smooth ones.

17.
To address the complex intrinsic correlations of triple (tensor) data, a collaborative-clustering recommendation algorithm based on parallel factor analysis (PARAFAC) is proposed. The algorithm uses PARAFAC to decompose the tensor and mine the correlations and latent topics among multi-dimensional data entities. First, the triple tensor data are clustered using the PARAFAC decomposition; then, three recommendation models based on collaborative clustering are proposed and compared experimentally to obtain the best one; finally, the proposed collaborative-clustering model is compared with a recommendation model based on higher-order singular value decomposition (HOSVD). On the last.fm dataset, the PARAFAC collaborative-clustering algorithm improves recall and precision over the HOSVD tensor decomposition by an average of 9.8 and 3.7 percentage points, respectively, and on the delicious dataset by an average of 11.6 and 3.9 percentage points. The experimental results show that the proposed algorithm mines the latent information and intrinsic relations in the tensor more effectively, achieving recommendations with high precision and recall.

18.
张琳  陈燕  汲业  张金松 《计算机应用研究》2011,28(11):4071-4073
To remedy two drawbacks of the traditional K-means algorithm, namely that the number of clusters must be fixed in advance and that the result is sensitive to the choice of initial cluster centers, a density-based approach is adopted: an Eps neighbourhood and the minimum number of objects minpts it must contain are used to exclude isolated points, and the non-repeating core points are taken as initial cluster centers. The ratio of within-cluster distance to between-cluster distance serves as the evaluation criterion, and the number of clusters that minimizes this criterion is taken as the optimal cluster number. These improvements effectively overcome the shortcomings of the K-means algorithm. Finally, several examples illustrate the application of the improved algorithm; they show that it achieves higher clustering accuracy than the original algorithm and better realizes compact within-cluster and well-separated between-cluster structure.
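A minimal sketch of the initialization described above: core points (at least minpts neighbours within Eps) that do not duplicate an already-chosen center seed a standard Lloyd iteration. The cluster-number search via the distance-ratio criterion is omitted, and all names are our own.

```python
import numpy as np

def density_initial_centers(X, eps, minpts):
    """Core points (>= minpts neighbours within eps) become initial
    centers; isolated points are excluded, and a core point is kept
    only if no chosen center already lies within eps of it."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    centers = []
    for i in range(len(X)):
        if (d[i] <= eps).sum() - 1 >= minpts:            # core point test
            if all(np.linalg.norm(X[i] - c) > eps for c in centers):
                centers.append(X[i])
    return np.array(centers)

def kmeans(X, centers, n_iter=20):
    """Plain Lloyd iterations from the given initial centers."""
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        lbl = np.argmin(dist, axis=1)
        centers = np.array([X[lbl == k].mean(axis=0)
                            for k in range(len(centers))])
    return centers, lbl
```

On two tight blobs plus one isolated outlier, the density test yields exactly one seed per blob and rejects the outlier, so the cluster number need not be fixed in advance.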

19.
In this paper, an approach for robust stability analysis of a digital closed-loop system with digital controller implementations subject to finite word length (FWL) effects is proposed. Uncertainties caused by roundoff and computational errors due to FWL effects are expressed as a function of the mantissa bit number when floating-point arithmetic is used in the process. Then, based on the Small Gain Theorem and the Bellman-Gronwall Lemma, a sufficient stability criterion for the digital closed-loop system is derived. The eigenvalue sensitivity of the closed-loop system is developed in terms of mixed matrix-2/Frobenius norms. By minimizing this eigenvalue sensitivity and using an orthogonal Hermitian transform, an optimal similarity transformation can be obtained. Substituting this optimal transformation into the stability criterion yields the minimum mantissa bit number required to implement stabilizing digital controllers. The main contributions are that this approach provides an analytical closed-form solution for the optimal transformation and, in addition to the stability criterion, allows the stabilizing controllers to be implemented with a lower mantissa bit number when this optimal transformation is used. Finally, detailed numerical design processes and simulation results illustrate the effectiveness of the proposed scheme.

20.
Liquid-liquid equilibrium (LLE) data are important in the chemical industry for the design of separation equipment, yet they are troublesome to determine experimentally. In this paper, a new method for the correlation of ternary LLE data is presented. The method is implemented using a combined structure in which a neural network (NN) is trained by a genetic algorithm (GA). NN coefficients that satisfy the equilibrium criterion were obtained using the GA. In the training phase, experimental concentration data and the corresponding activity coefficients were used as input and output, respectively. In the test phase, the trained NN was used to correlate the whole experimental data set given only one initial value. The calculated results were compared with the experimental data, and very low root-mean-square deviation errors were obtained between experimental and calculated values. Using this model, tie-line and solubility-curve data for LLE can be obtained from only a few experimental data points.
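The GA-trained-network idea can be illustrated on a toy regression task: an elitist real-coded GA searches the weights of a tiny MLP directly, with no gradients. The architecture, genetic operators, and the task are our assumptions for illustration; the paper trains on LLE concentration/activity-coefficient data.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_forward(w, x, hidden=4):
    """Tiny 1-input, 1-output tanh MLP; `w` flat-packs all weights."""
    W1, b1 = w[:hidden], w[hidden:2 * hidden]
    W2, b2 = w[2 * hidden:3 * hidden], w[3 * hidden]
    h = np.tanh(np.outer(x, W1) + b1)        # (n_samples, hidden)
    return h @ W2 + b2

def ga_train(x, y, pop_size=60, n_gen=200, hidden=4, sigma=0.3):
    """Elitist real-coded GA minimizing the squared error of the net."""
    n_w = 3 * hidden + 1
    pop = rng.normal(0, 1, (pop_size, n_w))
    def fitness(p):
        return np.array([np.mean((nn_forward(w, x, hidden) - y) ** 2)
                         for w in p])
    fit = fitness(pop)
    for _ in range(n_gen):
        order = np.argsort(fit)
        parents = pop[order[:pop_size // 2]]          # truncation selection
        children = []
        for _ in range(pop_size - 1):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random(n_w)
            child = alpha * a + (1 - alpha) * b       # blend crossover
            child += rng.normal(0, sigma, n_w) * (rng.random(n_w) < 0.2)
            children.append(child)
        pop = np.vstack([pop[order[0]], children])    # keep the elite
        fit = fitness(pop)
    best = pop[np.argmin(fit)]
    return best, fit.min()
```

Elitism makes the best fitness non-increasing across generations, which is what lets the GA settle on coefficients that satisfy the fitting criterion without any derivative information.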


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)