20 similar documents found (search time: 6 ms)
1.
Feature selection for logistic regression (LR) remains a challenging problem. In this paper, we present a new feature selection method for logistic regression based on a combination of zero-norm and l2-norm regularization. However, the discontinuity of the zero-norm makes it difficult to find the optimal solution. We apply a suitable nonconvex approximation of the zero-norm to derive a robust difference of convex functions (DC) program, and use the DC optimization algorithm (DCA) to solve the problem effectively; the corresponding DCA converges linearly. Numerical experiments on benchmark datasets show that, compared with traditional methods, the proposed method reduces the number of input features while maintaining accuracy. Furthermore, as a practical application, the proposed method is used to classify licorice seeds directly from near-infrared spectroscopy data. Simulation results in different spectral regions illustrate that the proposed method achieves classification performance equivalent to traditional logistic regression while suppressing more features. These results show the feasibility and effectiveness of the proposed method.
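The objective in item 1, logistic loss plus an approximated zero-norm and an l2 term, can be sketched as follows. This is a minimal illustration that replaces the paper's DCA with plain (sub)gradient descent on an exponential approximation of the zero-norm; the function name and all hyperparameter values are my own, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_sparse_logreg(X, y, lam=0.01, mu=0.001, alpha=5.0, lr=0.2, iters=1000):
    """Logistic regression penalised by an exponential zero-norm
    surrogate, 1 - exp(-alpha*|w_i|), plus an l2 term (mu/2)*||w||^2.
    Minimised here by plain (sub)gradient descent as a stand-in for
    the DCA described in the abstract."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n                       # logistic-loss gradient
        # subgradient of lam * sum(1 - exp(-alpha*|w_i|)); zero at w_i = 0
        grad_zero = lam * alpha * np.exp(-alpha * np.abs(w)) * np.sign(w)
        w -= lr * (grad_loss + grad_zero + mu * w)
    return w
```

On synthetic data where only one feature is informative, the penalty keeps the irrelevant weights near zero while the informative weight survives.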
2.
Cheng Soon Ong 《Optimization methods & software》2013,28(4):830-854
Sparsity of a classifier is a desirable condition for high-dimensional data and large sample sizes. This paper investigates the two complementary notions of sparsity for binary classification: sparsity in the number of features and sparsity in the number of examples. Several different losses and regularizers are considered: the hinge loss and ramp loss, and ℓ2, ℓ1, approximate ℓ0, and capped ℓ1 regularization. We propose three new objective functions that further promote sparsity, the capped ℓ1 regularization with hinge loss, and the ramp loss versions of approximate ℓ0 and capped ℓ1 regularization. We derive difference of convex functions algorithms (DCA) for solving these novel non-convex objective functions. The proposed algorithms are shown to converge in a finite number of iterations to a local minimum. Using simulated data and several data sets from the University of California Irvine (UCI) machine learning repository, we empirically investigate the fraction of features and examples required by the different classifiers.
3.
Nedyalko Petrov Antoniya Georgieva Ivan Jordanov 《Neural computing & applications》2013,22(7-8):1499-1508
A further investigation of our intelligent machine vision system for pattern recognition and texture image classification is discussed in this paper. A data set of 335 texture images is to be classified into several classes, based on their texture similarities, while no a priori human vision expert knowledge about the classes is available. Hence, unsupervised learning and self-organizing maps (SOM) neural networks are used for solving the classification problem. Nevertheless, in some of the experiments, a supervised texture analysis method is also considered for comparison purposes. Four major experiments are conducted: in the first one, classifiers are trained using all the extracted features without any statistical preprocessing; in the second simulation, the available features are normalized before being fed to a classifier; in the third experiment, the trained classifiers use linear transformations of the original features, received after preprocessing with principal component analysis; and in the last one, transforms of the features obtained after applying linear discriminant analysis are used. During the simulation, each test is performed 50 times implementing the proposed algorithm. Results from the employed unsupervised learning, after training, testing, and validation of the SOMs, are analyzed and critically compared with results from other authors.
4.
Self-organizing maps with asymmetric neighborhood function
The self-organizing map (SOM) is an unsupervised learning method as well as a type of nonlinear principal component analysis that forms a topologically ordered mapping from the high-dimensional data space to a low-dimensional representation space. It has recently found wide applications in such areas as visualization, classification, and mining of various data. However, when the data sets to be processed are very large, a copious amount of time is often required to train the map, which seems to restrict the range of putative applications. One of the major culprits for this slow ordering time is that a kind of topological defect (e.g., a kink in one dimension or a twist in two dimensions) gets created in the map during training. Once such a defect appears in the map during training, the ordered map cannot be obtained until the defect is eliminated, for which the number of iterations required is typically several times larger than in the absence of the defect. In order to overcome this weakness, we propose that an asymmetric neighborhood function be used for the SOM algorithm. Compared with the commonly used symmetric neighborhood function, we found that an asymmetric neighborhood function accelerates the ordering process of the SOM algorithm, though this asymmetry tends to distort the generated ordered map. We demonstrate that the distortion of the map can be suppressed by improving the asymmetric neighborhood function SOM algorithm. The number of learning steps required for perfect ordering in the case of the one-dimensional SOM is numerically shown to be reduced from O(N³) to O(N²) with an asymmetric neighborhood function, even when the improved algorithm is used to get the final map without distortion.
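A minimal sketch of the idea in item 4: a one-dimensional SOM whose Gaussian neighborhood is wider on one side of the winner than the other. This is a toy illustration of an asymmetric neighborhood function, not the authors' improved algorithm, and all parameter values are illustrative.

```python
import numpy as np

def train_som_1d(data, n_nodes=20, epochs=50, lr=0.2,
                 sigma_left=1.0, sigma_right=3.0, rng=None):
    """1-D SOM with an asymmetric Gaussian neighbourhood: the window
    extends further to the right of the winning node than to the left
    (sigma_right > sigma_left)."""
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(data.min(), data.max(), n_nodes)   # initial codebook
    idx = np.arange(n_nodes)
    for _ in range(epochs):
        for x in rng.permutation(data):
            c = np.argmin(np.abs(w - x))               # best-matching node
            d = idx - c
            sigma = np.where(d >= 0, sigma_right, sigma_left)
            h = np.exp(-(d / sigma) ** 2 / 2.0)        # asymmetric window
            w += lr * h * (x - w)
    return w
```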
5.
Presents an extension of the self-organizing learning algorithm of feature maps in order to improve its convergence to neighborhood-preserving maps. The Kohonen learning algorithm is controlled by two learning parameters, which have to be chosen empirically because there exist neither rules nor a method for their calculation. Consequently, time-consuming parameter studies often have to be carried out before a neighborhood-preserving feature map is obtained. To avoid these lengthy numerical studies, a method that determines the learning parameters automatically is presented here and incorporated into the learning algorithm. To this end, system models of the learning and organizing process are developed so that they can be tracked and predicted by linear and extended Kalman filters. The learning parameters are optimal within the system models, so that the self-organizing process converges automatically to a neighborhood-preserving feature map of the learning data.
6.
Self-organizing nets for optimization
Given some optimization problem and a series of typically expensive trials of solution candidates sampled from a search space, how can we efficiently select the next candidate? We address this fundamental problem by embedding simple optimization strategies in learning algorithms inspired by Kohonen's self-organizing maps and neural gas networks. Our adaptive nets or grids are used to identify and exploit search space regions that maximize the probability of generating points closer to the optima. Net nodes are attracted by candidates that lead to improved evaluations, thus, quickly biasing the active data selection process toward promising regions, without loss of ability to escape from local optima. On standard benchmark functions, our techniques perform more reliably than the widely used covariance matrix adaptation evolution strategy. The proposed algorithm is also applied to the problem of drag reduction in a flow past an actively controlled circular cylinder, leading to unprecedented drag reduction.
7.
Artificial neural networks provide a new approach to image compression. A self-organizing feature map (SOM) network can be used for lossy image compression: the image is divided into small blocks, and the network is trained so that the feature vectors cluster automatically, grouping the blocks into classes whose number is far smaller than the number of blocks; a mapping table then stores this information. In this way, identical or very similar parts of the image are assigned to the same class, reducing redundancy and enabling lossy compression. Because the method is based on a neural network, it adapts well and can easily be combined with other compression techniques into more effective hybrid schemes, giving it good practical value.
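The block-clustering compression scheme of item 7 can be sketched as follows. For brevity this uses plain k-means as a stand-in for SOM training: the codebook plays the role of the class representatives and the label array is the mapping table. Function and parameter names are illustrative.

```python
import numpy as np

def compress_blocks(img, block=4, n_codes=16, iters=10, rng=None):
    """Lossy compression by block clustering: split the image into
    block x block patches, cluster the patch vectors, and store a
    small codebook plus one index per patch (the mapping table)."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    # extract non-overlapping block x block patches as flat vectors
    patches = (img[:h - h % block, :w - w % block]
               .reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)
               .reshape(-1, block * block)
               .astype(float))
    # k-means on the patch vectors (stand-in for SOM training)
    codebook = patches[rng.choice(len(patches), n_codes, replace=False)]
    for _ in range(iters):
        d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_codes):
            if np.any(labels == k):
                codebook[k] = patches[labels == k].mean(0)
    return codebook, labels
```

Storing 16 codebook vectors plus one 4-bit index per patch is far smaller than the raw pixel data, which is exactly the redundancy reduction the abstract describes.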
8.
Self-organizing maps for the skeletonization of sparse shapes
Singh R. Cherkassky V. Papanikolopoulos N. 《Neural Networks, IEEE Transactions on》2000,11(1):241-248
This paper presents a method for computing the skeleton of planar shapes and objects which exhibit sparseness (lack of connectivity), within their image regions. Such sparseness in images may occur due to poor lighting conditions, incorrect thresholding or image sub-sampling. Furthermore, in document image analysis, sparse shapes are characteristic of texts faded due to aging and/or poor ink quality. Given the pixel distribution for a shape, the proposed method involves an iterative evolution of a piecewise-linear approximation of the shape skeleton by using a minimum spanning tree-based self-organizing map (SOM). By constraining the SOM to lie on the edges of the Delaunay triangulation of the shape distribution, the adjacency relationships between regions in the shape are detected and used in the evolution of the skeleton. The SOM, on convergence, gives the final skeletal shape. The skeletonization is invariant to Euclidean transformations. The potential of the method is demonstrated on a variety of sparse shapes from different application domains.
9.
G. E. Stavroulakis L. N. Polyakova 《Structural and Multidisciplinary Optimization》1996,12(2-3):167-176
The impact of difference of convex (DC) optimization techniques on structural analysis algorithms for nonsmooth and nonconvex problems is investigated in this paper. Algorithms for the numerical solutions are proposed and studied. The relation to more general optimization techniques and to computational mechanics algorithms is also discussed. The theory is illustrated by a composite beam delamination example.
10.
Nonlinear control synthesis by convex optimization
A stability criterion for nonlinear systems, recently derived by the third author, can be viewed as a dual to Lyapunov's second theorem. The criterion is stated in terms of a function which can be interpreted as the stationary density of a substance that is generated all over the state-space and flows along the system trajectories toward the equilibrium. The new criterion has a remarkable convexity property, which in this note is used for controller synthesis via convex optimization. Recent numerical methods for verification of positivity of multivariate polynomials based on sum of squares decompositions are used.
11.
12.
Self-organizing maps, vector quantization, and mixture modeling
Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive expectation-maximization (EM) algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis.
13.
Alireza Karimi 《Automatica》2007,43(8):1395-1402
Robust control synthesis of linear time-invariant SISO polytopic systems is investigated using the polynomial approach. A convex set of all stabilizing controllers for a polytopic system is given over an infinite-dimensional space. A finite-dimensional approximation of this set is obtained using the orthonormal basis functions and represented by a set of LMIs thanks to the KYP lemma. Then, an LMI based convex optimization problem for robust pole placement with sensitivity function shaping in two- and infinity-norm is proposed. The simulation results show the effectiveness of the proposed method.
14.
A self-organizing learning algorithm (SLA) that departs from current optimization-algorithm frameworks is proposed. It fuses the parallel search of genetic algorithms with the serial search of simulated annealing, and combines the particle swarm learning mechanism with tabu search, so that the system learns through interaction with its environment and can handle high-dimensional nonlinear optimization problems that traditional optimization methods cannot. SLA has two intelligent learning stages, self-learning and mutual learning: it first performs a neighborhood tabu search driven by the self-learning mechanism to guarantee convergence to local extrema, and then, through an information-sharing platform, performs a wide-area tabu search driven by the mutual-learning mechanism to guarantee convergence to the global extremum. By interacting with the environment, the system adaptively adjusts its search strategy and parameters, so the search effectively avoids blind exploration and exhibits considerable self-organization. Finally, comparative simulations on high-dimensional test functions show that SLA remains stable and transparent in its interaction with the environment when moving from small low-dimensional spaces to very large high-dimensional ones, and that its global search ability and overall robustness are clearly superior to those of other search methods.
15.
In the Vehicle Routing Problem with Backhauls (VRPB), a central depot, a fleet of homogeneous vehicles, and a set of customers are given. The set of customers is divided into two subsets: the first (second) set of linehauls (backhauls) consists of customers with known quantities of goods to be delivered from (collected to) the depot. The VRPB objective is to design a set of minimum-cost routes, originating and terminating at the central depot, that service the set of customers. In this paper, we develop a self-organizing feature map algorithm that uses unsupervised competitive neural network concepts. The definition of the architecture of the neural network and its learning rule are the main contribution. The architecture consists of two types of chains: linehaul and backhaul chains. Linehaul chains interact exclusively with linehaul customers; similarly, backhaul chains interact exclusively with backhaul customers. Additional types of interaction are introduced in order to form a feasible VRPB solution when the algorithm converges. The generated routes are then improved using the well-known 2-opt procedure. The implemented algorithm is compared with other approaches in the literature, and computational results are reported for standard benchmark test problems. They show that the proposed approach is competitive with the most efficient metaheuristics.
16.
When the robot's forward kinematics or the camera's extrinsic calibration contains errors, hand-eye calibration algorithms based on nonlinear optimization cannot guarantee that the objective function converges to the global minimum. To address this, a globally optimal hand-eye calibration algorithm based on quaternion theory and convex relaxation is proposed. Considering the effect of the angle between the rotation axes of the end-effector's relative motions on the accuracy of the calibration solution, a random sample consensus (RANSAC) algorithm is first used to pre-screen the calibration data by these angles. The rotation matrix is then parameterized with quaternions, a polynomial geometric-error objective function and constraints are constructed, and a convex-relaxation global optimization algorithm based on linear matrix inequalities (LMIs) is used to solve for the globally optimal hand-eye transformation matrix. Experimental results show that the algorithm finds the global optimum: the mean geometric error of the hand-eye transformation matrix is no greater than 1.4 mm and the standard deviation is below 0.16 mm, slightly better than quaternion-based nonlinear optimization.
17.
The main result in this paper is to establish some new characterizations of convex functions, in which we also simplify the proof of the characterizations given by Bessenyei and Páles.
18.
Michel Neuhaus Horst Bunke 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2005,35(3):503-514
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
19.
Moosaei Hossein Bazikar Fatemeh Ketabchi Saeed Hladík Milan 《Applied Intelligence》2022,52(3):2634-2654
Universum data that do not belong to any class of a classification problem can be exploited to utilize prior knowledge to improve generalization performance. In this paper,...
20.
Recently, it has been shown that the regret of the Follow the Regularized Leader (FTRL) algorithm for online linear optimization can be bounded by the total variation of the cost vectors rather than the number of rounds. In this paper, we extend this result to general online convex optimization. In particular, this resolves an open problem that has been posed in a number of recent papers. We first analyze the limitations of the FTRL algorithm as proposed by Hazan and Kale (in Machine Learning 80(2–3), 165–188, 2010) when applied to online convex optimization, and extend the definition of variation to a gradual variation, which is shown to be a lower bound of the total variation. We then present two novel algorithms that bound the regret by the gradual variation of the cost functions. Unlike previous approaches that maintain a single sequence of solutions, the proposed algorithms maintain two sequences of solutions, which makes it possible to achieve a variation-based regret bound for online convex optimization. To establish the main results, we discuss a lower bound for FTRL that maintains only one sequence of solutions, and a necessary condition on the smoothness of the cost functions for obtaining a gradual variation bound. We extend the main results three-fold: (i) we present a general method to obtain a gradual variation bound measured by a general norm; (ii) we extend the algorithms to a class of online non-smooth optimization with a gradual variation bound; and (iii) we develop a deterministic algorithm for online bandit optimization in the multi-point bandit setting.
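For context, the baseline FTRL algorithm analysed in item 20 can be sketched for online linear optimization over an l2 ball with a quadratic regularizer. This is a textbook version, not the paper's variation-based variants; the function and parameter names are illustrative.

```python
import numpy as np

def ftrl(cost_vectors, eta=0.1, radius=1.0):
    """Follow the Regularized Leader: at round t play
    x_t = argmin_{||x|| <= radius}  <sum of past costs, x> + ||x||^2/(2*eta).
    With this regularizer the minimiser is -eta * (cost sum), projected
    back onto the l2 ball."""
    d = len(cost_vectors[0])
    g_sum = np.zeros(d)
    plays, total_cost = [], 0.0
    for g in cost_vectors:
        x = -eta * g_sum                      # unconstrained minimiser
        norm = np.linalg.norm(x)
        if norm > radius:                     # project onto the ball
            x *= radius / norm
        total_cost += float(g @ x)            # cost g is revealed after playing x
        plays.append(x)
        g_sum += g
    return plays, total_cost
```

With a constant cost vector, the iterates move toward the best fixed point in the ball after a couple of rounds, so the cumulative cost stays close to that of the best fixed comparator.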