Similar Literature
19 similar records found (search time: 343 ms)
1.
Real-time and reliable measurements of effluent quality are essential for improving operating efficiency and reducing energy consumption in the wastewater treatment process. Because traditional effluent quality measurements suffer from low accuracy and unstable performance, we propose a selective ensemble extreme learning machine modeling method to enhance effluent quality predictions. The extreme learning machine algorithm is used as the component model inside a selective ensemble framework, since it runs much faster and generalizes better than other popular learning algorithms. Ensembles of extreme learning machine models overcome the variation across different simulation trials of a single model. Selective ensembling based on a genetic algorithm is then used to exclude poor components from the available ensemble, reducing computational complexity and improving generalization. The proposed method is verified with data from an industrial wastewater treatment plant located in Shenyang, China. Experimental results show that it offers stronger generalization and higher accuracy than partial least squares, neural network partial least squares, a single extreme learning machine, and a non-selective ensemble extreme learning machine model.
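The extreme learning machine used as the component model above can be sketched as follows: hidden-layer weights are drawn at random and only the output weights are solved in closed form. This is a minimal illustration, not the authors' implementation; the function and variable names are our own.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, rng=None):
    """Train a basic extreme learning machine for regression.

    Hidden-layer weights are random and fixed; only the output
    weights beta are fitted, via a least-squares solve.
    """
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy 1-D function as a quick check
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
W, b, beta = elm_train(X, y, n_hidden=30, rng=1)
pred = elm_predict(X, W, b, beta)
print(np.mean((pred - y) ** 2))  # small training MSE
```

The speed advantage the abstract mentions comes from replacing iterative backpropagation with a single linear solve.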

2.
Sheet metal forming technologies have been intensively studied for decades to meet the increasing demand for lightweight metal components. To overcome the springback that occurs in sheet metal forming processes, numerous compensation methods have been developed. For most existing methods, however, the development cycle remains time-consuming and demands high computational or capital cost. In this paper, a novel theory-guided regularization method for training deep neural networks (DNNs), embedded in a learning system, is introduced to learn the intrinsic relationship between the workpiece shape after springback and the required process parameter, e.g., loading stroke, in sheet metal bending processes. By directly bridging the workpiece shape to the process parameter, springback issues in process design are circumvented. The regularization method draws on a well-recognized theory of material mechanics, Swift's law, by penalizing divergence from this law throughout network training. It is implemented through a multi-task learning architecture, with the learning of the extra tasks regularized during training. The stress-strain curve describing the material properties and the prior knowledge used to guide learning are stored in the database and the knowledge base, respectively. The predicted loading stroke for a new workpiece shape is obtained by importing the target geometry through the user interface. In this research, the neural models outperformed a traditional machine learning model, support vector regression, in experiments with different amounts of training data. Through a series of studies varying the structure and amount of training data, the workpiece material, and the applied bending process, the theory-guided DNN achieved better generalization and learning consistency than purely data-driven DNNs, especially when only scarce and scattered experimental data are available for training, which is often the case in practice. The theory-guided DNN should also be applicable to other sheet metal forming processes. It provides an alternative springback compensation method with a significantly shorter development cycle and lower capital cost and computational requirements than traditional compensation methods in the sheet metal forming industry.

3.
Learning a compact predictive model in an online setting has recently gained a great deal of attention. Combining online learning with sparsity-inducing regularization enables faster learning with a smaller memory footprint than previous learning frameworks. Many optimization methods and learning algorithms have been developed on the basis of online learning with L1 regularization. L1 regularization tends to truncate some types of parameters, such as those that occur rarely or have a small range of values, unless they are emphasized in advance; yet adding a pre-processing step would make it very difficult to preserve the advantages of online learning. We propose a new regularization framework for sparse online learning. Focusing on the regularization term, we enhance the state-of-the-art approach by integrating information from all previous subgradients of the loss function into the regularization term. The resulting algorithms let online learning adjust the truncation intensity of each feature without pre-processing, eventually eliminating the bias of L1 regularization. We establish theoretical properties of the framework, its computational complexity, and an upper bound on the regret. Experiments demonstrate that our algorithms outperform previous methods on many classification tasks.
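The kind of L1-regularized online update this line of work builds on can be sketched as a gradient step followed by soft thresholding (a FOBOS-style truncation). This is a generic baseline under assumed names, not the paper's proposed subgradient-integrating algorithm.

```python
import numpy as np

def l1_online_step(w, x, y, eta=0.1, lam=0.01):
    """One FOBOS-style online update for L1-regularized least squares:
    a gradient step, then soft thresholding (the 'truncation')."""
    grad = (w @ x - y) * x                  # squared-loss gradient for one sample
    w = w - eta * grad                      # gradient step
    return np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # soft-threshold

rng = np.random.default_rng(0)
w_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])   # sparse ground truth
w = np.zeros(5)
for _ in range(2000):
    x = rng.normal(size=5)
    y = w_true @ x
    w = l1_online_step(w, x, y, eta=0.05, lam=0.01)
print(np.round(w, 2))  # near-sparse estimate close to w_true
```

The small constant shrinkage applied at every step is exactly the bias the abstract says its subgradient-aware regularizer aims to eliminate.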

4.
Frequent counting is a very frequently required operation in machine learning algorithms. A typical machine learning task, learning the structure of a Bayesian network (BN) through metric scoring, is introduced as an example that relies heavily on frequent counting. A fast calculation method for frequent counting, enhanced with two cache layers, is then presented for learning BNs. The main contribution of our approach is to eliminate the comparison operations in frequent counting by introducing a multi-radix number system calculation. Both mathematical analysis and an empirical comparison between our method and the state-of-the-art solution are conducted. The results show that our method clearly outperforms the state-of-the-art solution on the problem of learning BNs.
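The multi-radix idea, encoding a joint configuration of discrete variables as a single integer so counts accumulate by array indexing rather than key comparisons, can be sketched as follows (illustrative code with our own names, not the paper's implementation, and without its cache layers):

```python
from math import prod

def mixed_radix_counts(rows, cards):
    """Count joint configurations of discrete variables.

    Each row (one value per variable) maps to a single integer in a
    mixed-radix number system whose radices are the variable
    cardinalities, so counting is one array increment per row,
    with no comparison operations on keys.
    """
    counts = [0] * prod(cards)
    for row in rows:
        idx = 0
        for v, c in zip(row, cards):
            idx = idx * c + v          # mixed-radix encoding
        counts[idx] += 1
    return counts

# Two binary variables and one ternary variable
rows = [(0, 1, 2), (0, 1, 2), (1, 0, 0)]
counts = mixed_radix_counts(rows, cards=(2, 2, 3))
print(counts[0 * 6 + 1 * 3 + 2])  # frequency of configuration (0, 1, 2) -> 2
```

For BN structure scoring, each row would be the observed values of a candidate parent set plus the child variable.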

5.
Network traffic classification based on ensemble learning and co-training   (total citations: 4; self-citations: 0; others: 4)
Classification of network traffic is an essential step in much network research. However, with the rapid evolution of Internet applications, the effectiveness of port-based and payload-based identification approaches has greatly diminished in recent years, and many researchers have turned their attention to alternative machine-learning-based methods. This paper presents a novel machine-learning-based classification model that combines the ensemble learning paradigm with co-training techniques. Whereas most previous approaches employ a single classifier, our method applies multiple classifiers and semi-supervised learning, which helps overcome three shortcomings: limited flow accuracy, weak adaptability, and a huge demand for labeled training data. Statistical characteristics of IP flows are extracted from packet-level traces to establish the feature set; the classification model is then created and tested, and the empirical results prove its feasibility and effectiveness.

6.
Knowledge acquisition with machine learning techniques is a fundamental requirement for knowledge discovery from databases and data mining systems. Two techniques in particular, inductive learning and theory revision, have been used toward this end. A method that combines both approaches to effectively acquire theories (regularities) from a set of training examples is presented. Inductive learning is used to acquire new regularities from the training examples, and theory revision is used to improve an initial theory. In addition, a theory preference criterion that combines an MDL-based heuristic with the Laplace estimate has been successfully employed to select the most promising theory. The resulting algorithm, developed by integrating inductive learning and theory revision under this criterion, can deal with complex problems and obtains theories that are useful in terms of predictive accuracy.

7.
Multi-layer network design, which performs light-path design and IP routing design at the same time, has attracted great attention for IP over WDM network design. The multi-layer network design problem can be formulated as a MILP (mixed integer linear programming) problem. However, the MILP problem for a large-scale network cannot be solved directly because of the huge number of variables involved. To cope with this, a calculation method has been proposed that decomposes the original MILP problem into smaller sub-problems and obtains an approximate solution by solving these smaller MILP problems; its drawback is degraded calculation accuracy. We therefore propose a novel method that solves the original MILP problem using the results of the sub-problems. We evaluate the proposed method through computational experiments and show its effectiveness.

8.
Motion deblurring is a basic problem in the field of image processing and analysis. This paper proposes a new method for single-image blind deblurring that benefits both kernel estimation and non-blind deconvolution. Experiments show that fine image details corrupt the structure of the estimated kernel, especially when the blur kernel is large, so we extract the image structure with salient edges using an RTV-based method. In addition, traditional motion blur kernel estimation based on sparse priors does yield a sparse blur kernel, but these priors do not ensure the continuity of the kernel and sometimes induce noisy estimates. We therefore propose an L0-based kernel refinement method to overcome these shortcomings. For non-blind deconvolution we adopt an L1/L2 regularization term. Compared with the traditional method, the L1/L2-norm method adapts better to image structure, and the constructed energy functional better describes the sharp image. An effective algorithm based on alternating minimization is presented for this model.

9.
Random vector functional link (RVFL) networks belong to a class of single-hidden-layer neural networks in which some parameters are randomly selected. Their network structure, which contains direct links between inputs and outputs, is unique, and stability analysis and real-time performance are two difficulties for control systems based on neural networks. In this paper, combining the advantages of RVFL with the ideas of the online sequential extreme learning machine (OS-ELM) and the initial-training-free online extreme learning machine (ITFOELM), a novel online learning algorithm, named the initial-training-free online random vector functional link algorithm (ITF-ORVFL), is investigated for training RVFL networks. The link vector of an RVFL network can be determined analytically from sequentially arriving data by ITF-ORVFL at high learning speed, and the stability of nonlinear systems based on this learning algorithm is analyzed. The experimental results indicate that the proposed ITF-ORVFL is effective in coping with nonparametric uncertainty.
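The distinguishing feature of RVFL networks, direct input-output links concatenated with the random hidden layer, can be sketched as follows. This is a generic batch-trained RVFL with invented names, not the ITF-ORVFL algorithm itself.

```python
import numpy as np

def rvfl_train(X, y, n_hidden=20, rng=None):
    """Random vector functional link network: the output layer sees
    both the random hidden features AND the raw inputs (direct links)."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.hstack([np.tanh(X @ W + b), X])        # hidden features + direct links
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic link/output weights
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([np.tanh(X @ W + b), X]) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = X[:, 0] + np.sin(2 * X[:, 1])
W, b, beta = rvfl_train(X, y, n_hidden=25, rng=1)
print(np.mean((rvfl_predict(X, W, b, beta) - y) ** 2))  # small training MSE
```

An online variant such as ITF-ORVFL updates `beta` recursively as samples arrive instead of solving the batch least-squares problem.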

10.
With the rapid development of information technology, knowledge is growing explosively, and this rapid updating of knowledge challenges traditional education, pushing people to change their learning concepts and methods. The Internet, with its abundant resources, advanced technology, and convenient access to information, is widely favored. E-learning, as an innovative form of distance learning, strongly challenges traditional learning with its unique advantages and will become a major avenue of lifelong learning. In this paper, the basic framework of an e-learning platform is built on the basis of the state of network education at home and abroad, the functions of its modules are analyzed, and ideas for adapting the e-learning platform to new conditions are put forward.

11.
TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization   (total citations: 1; self-citations: 0; others: 1)
In this paper an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The OP-ELM originally proposed a wrapper methodology around the extreme learning machine (ELM), meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines and Gaussian processes, and against the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average 27% better mean square error) and provides more reliable results in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM.
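The Tikhonov (L2) stage of this cascade is a ridge solve for the output weights. A minimal sketch under our own variable names follows; the LARS/L1 ranking stage is omitted.

```python
import numpy as np

def ridge_output_weights(H, y, alpha=1e-2):
    """Tikhonov-regularized solve for output weights:
    beta = (H^T H + alpha I)^-1 H^T y, which stays well conditioned
    even when hidden-layer outputs are strongly correlated."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

# Nearly collinear hidden-layer outputs: the unregularized normal
# equations are ill-conditioned, but the ridge weights stay moderate.
rng = np.random.default_rng(0)
h1 = rng.normal(size=100)
H = np.column_stack([h1, h1 + 1e-6 * rng.normal(size=100)])
y = h1 + 0.01 * rng.normal(size=100)
beta = ridge_output_weights(H, y, alpha=1e-2)
print(np.round(beta, 3))  # weight split roughly evenly, about 0.5 each
```

This numerical-stability effect is the stated motivation for adding the L2 penalty on top of the OP-ELM pruning.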

12.
In this paper, we investigate the use of the L1/2 regularization method for variable selection based on Cox's proportional hazards model. The L1/2 regularization can be taken as representative of the Lq (0 < q < 1) regularizations and has been demonstrated to have many attractive properties. To solve the L1/2-penalized Cox model, we propose a coordinate descent algorithm with a new univariate half-thresholding operator that is applicable to high-dimensional biological data. Simulation results on standard artificial data show that the L1/2 regularization method can select variables more accurately than the Lasso and SCAD methods. Results on real DNA microarray datasets indicate that the L1/2 regularization method performs competitively.
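The univariate half-thresholding operator at the heart of such coordinate-descent solvers can be sketched in the form given by Xu et al.'s L1/2 thresholding theory. This is our transcription of the published formula, worth checking against the paper; it is not the paper's Cox-model code.

```python
import math

def half_threshold(x, lam):
    """Half-thresholding operator for L1/2 regularization (after Xu et al.):
    zeroes small inputs, shrinks large ones less aggressively than
    L1 soft thresholding. Our transcription of the published formula."""
    t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)   # threshold level
    if abs(x) <= t:
        return 0.0
    phi = math.acos((lam / 8) * (abs(x) / 3) ** (-1.5))
    return (2 / 3) * x * (1 + math.cos(2 * math.pi / 3 - 2 * phi / 3))

print(half_threshold(0.5, 1.0))  # 0.0: below the threshold
print(half_threshold(3.0, 1.0))  # shrunk toward zero but nonzero
```

A coordinate-descent solver applies this operator to each coordinate's partial residual in turn, just as Lasso coordinate descent applies soft thresholding.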

13.
In this paper, we propose a novel method for fast face recognition: L1/2-regularized sparse representation using hierarchical feature selection. Hierarchical feature selection compresses the scale and dimension of the global dictionary, which directly reduces the computational cost of the sparse representation in which our approach is rooted. It consists of Gabor wavelets and an extreme learning machine auto-encoder (ELM-AE), applied hierarchically. In the Gabor wavelet part, local features are extracted at multiple scales and orientations to form a Gabor-feature-based image, which in turn improves the recognition rate. Moreover, for occluded face images, the scale of the Gabor-feature-based global dictionary can be compressed accordingly, because redundancies exist in the Gabor-feature-based occlusion dictionary. In the ELM-AE part, the dimension of the Gabor-feature-based global dictionary can be compressed because high-dimensional face images can be rapidly represented by low-dimensional features. By introducing L1/2 regularization, our approach produces a sparser and more robust representation than L1-regularized sparse representation-based classification (SRC), further reducing the computational cost of the sparse representation. In comparison with related work such as SRC and Gabor-feature-based SRC, experimental results on a variety of face databases demonstrate the great computational advantage of our method, while achieving a comparable or even better recognition rate.

14.
Nowadays, a series of methods based on an L1 penalty have been proposed to solve the variable selection problem for the Cox proportional hazards model. In 2010, Xu et al. proposed L1/2 regularization and proved that the L1/2 penalty is sparser than the L1 penalty in linear regression models. In this paper, we propose a novel shooting method for L1/2 regularization and apply it to the Cox model for variable selection. Experimental results based on comprehensive simulation studies and the real Primary Biliary Cirrhosis and diffuse large B-cell lymphoma datasets show that the L1/2-regularization shooting method performs competitively.

15.
Sparsity-driven classification technologies have attracted much attention in recent years because they provide more compressive representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVMs has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed; the sparsification of KLR has been studied less. Existing sparsifications of KLR are mainly based on L1-norm and L2-norm penalties, which yield solutions that are not as sparse as they should be. A recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the 1-norm and 2-norm and, furthermore, that the model can be solved efficiently by a simple iterative thresholding procedure. The objective function treated in L1/2 regularization theory is, however, of quadratic form, whose gradient is linear in its variables (a so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the 1/2 quasi-norm kernel logistic regression (1/2-KLR). This version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for 1/2-KLR and prove its convergence. A series of simulations demonstrates that 1/2-KLR often obtains sparser solutions than the existing sparsity-driven versions of KLR at the same or better accuracy level, even in comparison with sparse SVMs (1-SVM and 2-SVM).
We also show an exclusive advantage of 1/2-KLR: the regularization parameter in the algorithm can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promoting capability of different sparsity-driven classifiers. As an illustration of the benefits of 1/2-KLR, we give two applications in semi-supervised learning, showing that 1/2-KLR can be successfully applied to classification tasks in which only a few data points are labeled.

16.
This paper presents a novel noise-robust graph-based semi-supervised learning algorithm to address the challenging problem of semi-supervised learning with noisy initial labels. Inspired by the successful use of sparse coding for noise reduction, we give a new L1-norm formulation of Laplacian regularization for graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is defined explicitly over the eigenvectors of the normalized Laplacian matrix, we formulate graph-based semi-supervised learning as an L1-norm linear reconstruction problem that can be solved efficiently by sparse coding. Furthermore, by working with only a small subset of eigenvectors, we develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Finally, we evaluate the proposed algorithm on noise-robust image classification. Experimental results on several benchmark datasets demonstrate the promising performance of the proposed algorithm.

17.
Wang Yibin, Pei Gensheng, Cheng Yusheng. 《智能系统学报》 (CAAI Transactions on Intelligent Systems), 2019, 14(4): 831-842
Applying regularized or kernel extreme learning machine theory to multi-label classification improves algorithm stability to some extent. However, the regularization terms these algorithms add to the loss function are all based on the L2 norm, so the resulting models lack sparse representation. Elastic-net regularization both guarantees model robustness and enables sparse learning, but how an extreme learning machine combined with the elastic net can solve multi-label problems has rarely been studied. Motivated by this, this paper proposes a multi-label learning algorithm that adds elastic-net regularization to the kernel extreme learning machine. First, the feature space of the multi-label data is mapped with a radial basis kernel function; then, the elastic-net regularization term is imposed on the kernel extreme learning machine's loss function; finally, coordinate descent is used to iteratively solve for the output weights and obtain the final predicted labels. Comparative experiments and statistical analysis show that the proposed algorithm performs better.
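The coordinate-descent step for an elastic-net penalty can be illustrated by the generic univariate update rule: soft-threshold by the L1 weight, then shrink by the L2 weight. This is a textbook-style sketch with our own names, not the paper's kernel extreme learning machine solver.

```python
def elastic_net_update(rho, z, l1, l2):
    """Univariate coordinate-descent update for the elastic net:
    soft-threshold the partial-residual correlation rho by l1,
    then shrink by the l2 (ridge) term. z is the coordinate's
    squared norm in the design."""
    s = max(abs(rho) - l1, 0.0) * (1.0 if rho >= 0 else -1.0)  # soft threshold
    return s / (z + l2)                                        # ridge shrinkage

print(elastic_net_update(2.0, 1.0, l1=0.5, l2=0.5))  # 1.0
print(elastic_net_update(0.3, 1.0, l1=0.5, l2=0.5))  # 0.0: truncated to zero
```

Cycling this update over all output-weight coordinates until convergence is the iterative scheme the abstract describes.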

18.
Evolutionary selection extreme learning machine optimization for regression   (total citations: 2; self-citations: 1; others: 1)
Neural network regression models can approximate unknown datasets with low error. As an important global regression method, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, thanks to its good generalization performance and fast implementation. The randomness of the input weights lets the nonlinear combination reach arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, and the larger-weight nodes are reassigned to the updated ELM. We put L/2 trivial hidden nodes into a candidate reservoir, then generate L/2 new hidden nodes and combine them with L hidden nodes from this candidate reservoir. A second ranking is used to choose among these hidden nodes; fitness-proportional selection selects L/2 hidden nodes and recombines the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that its regression performance is better than that of the traditional ELM and the Bayesian ELM at lower cost.
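The node-ranking idea, scoring each hidden node ("gene") by the magnitude of its output weight and keeping the strongest, can be sketched as follows. This is a simplified selection step with invented names, not the full evolutionary loop with its candidate reservoir and recombination.

```python
import numpy as np

def prune_hidden_nodes(W, b, beta, keep):
    """Keep the `keep` hidden nodes with the largest |output weight|,
    treating each node (a column of W with its entries of b and beta)
    as one 'gene' to be selected."""
    order = np.argsort(-np.abs(beta))[:keep]   # rank nodes by |beta|, descending
    return W[:, order], b[order], beta[order]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 6))                    # 3 inputs, 6 hidden nodes
b = rng.normal(size=6)
beta = np.array([0.9, -0.01, 1.5, 0.02, -2.0, 0.1])
W2, b2, beta2 = prune_hidden_nodes(W, b, beta, keep=3)
print(beta2)  # the three largest-magnitude weights: [-2.0, 1.5, 0.9]
```

In the full algorithm, the discarded half would go to the candidate reservoir and compete with freshly generated nodes in the next generation.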

19.
Lp-norm regularized support vector machine classification algorithm   (total citations: 6; self-citations: 3; others: 3)
The L2-norm penalized support vector machine (SVM) is one of the most widely used classifier algorithms, and L1-norm and L0-norm penalized SVM algorithms that perform feature selection and classifier construction simultaneously have also been proposed. In both of those methods, however, the regularization order is fixed in advance, at p = 2 or p = 1. Our experimental studies show that, for different data, different regularization orders can improve the prediction accuracy of the classification algorithm. This paper proposes a new design paradigm for Lp-norm regularized SVM classifiers in which the order p of the regularization norm can range over 0 < p ≤ 2, with the L2-norm, L1-norm, and L0-norm penalized SVMs as special or limiting cases.

