Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present here a new proximity matrix based on an entropic measure and also a clustering algorithm (LEGClust) that builds layers of subgraphs based on this matrix, and uses them and a hierarchical agglomerative clustering technique to form the clusters. Our approach capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure we are able, with no assumption about the cluster shapes, to capture the local structure of the data, forcing the clustering method to reflect this structure. We present several experiments on artificial and real data sets that provide evidence on the superior performance of this new algorithm when compared with competing ones.
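A minimal sketch of the agglomerative step that such methods build on, using a generic precomputed proximity matrix in place of the paper's entropic measure and layered subgraphs (which are specific to LEGClust), might look like this:

```python
import numpy as np

def agglomerate(prox, n_clusters):
    """Single-linkage agglomerative clustering from a precomputed
    proximity (dissimilarity) matrix. An entropic proximity such as
    LEGClust's would be plugged in here in place of plain distance."""
    n = prox.shape[0]
    clusters = [{i} for i in range(n)]        # each point starts alone
    while len(clusters) > n_clusters:
        best, best_d = (0, 1), np.inf
        # find the pair of clusters with the smallest single-linkage proximity
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(prox[i, j] for i in clusters[a] for j in clusters[b])
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        clusters[a] |= clusters.pop(b)        # merge the closest pair
    return clusters

# toy data: two well-separated groups on the line
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
prox = np.abs(pts[:, None] - pts[None, :])    # distance-based proximity
print(agglomerate(prox, 2))
```

With a distance-based proximity this recovers the two obvious groups; the point of the paper is that replacing `prox` by an entropic, locally adaptive measure lets the same agglomerative machinery follow arbitrary cluster shapes.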
The joint estimation of the location vector and the shape matrix of a set of independent and identically Complex Elliptically Symmetric (CES) distributed observations is investigated from both the theoretical and computational viewpoints. This joint estimation problem is framed in the original context of semiparametric models, allowing us to handle the (generally unknown) density generator as an infinite-dimensional nuisance parameter. In the first part of the paper, a computationally efficient and memory-saving implementation of the robust and semiparametric efficient R-estimator for shape matrices is derived. Building upon this result, in the second part, a joint estimator, relying on Tyler's M-estimator of location and on the R-estimator of shape matrix, is proposed, and its Mean Squared Error (MSE) performance is compared with the Semiparametric Cramér-Rao Bound (SCRB).
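Tyler-type shape estimation, which the joint estimator above builds on, is commonly computed by a fixed-point iteration. The following sketch uses real-valued, pre-centred data for simplicity and the usual trace normalisation; the data-generating parameters are purely illustrative:

```python
import numpy as np

def tyler_shape(X, n_iter=100, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of the shape matrix.
    X: (n, p) centred observations. The result is normalised so that
    trace(Sigma) = p, the usual identifiability constraint."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # per-sample weights 1 / (x' Sigma^{-1} x)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        new = (p / n) * (X * w[:, None]).T @ X
        new *= p / np.trace(new)              # fix the scale ambiguity
        if np.linalg.norm(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    return sigma

rng = np.random.default_rng(0)
true_shape = np.array([[2.0, 0.5], [0.5, 1.0]])
true_shape *= 2 / np.trace(true_shape)        # trace-normalised target
X = rng.multivariate_normal(np.zeros(2), true_shape, size=5000)
est = tyler_shape(X)
print(est)
```

Because the weights divide out the radial part of each observation, the iteration is distribution-free over the CES family, which is exactly what makes it a natural companion to the semiparametric treatment of the density generator.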
Online-to-offline (OTO) is a new commercial model with enormous market potential. Online customer orders are forwarded to offline brick-and-mortar stores for fulfilment, combining the two channels of a dual-channel supply chain. OTO overcomes many disadvantages of the traditional dual-channel supply chain but still faces uncertain market demand. To reduce the inventory risk caused by demand uncertainty, this paper employs lateral inventory transshipment to pool inventory risk in the OTO supply chain. We model centralised OTO and decentralised OTO with and without transshipment, and then analyse the different scenarios. Our results demonstrate that there exists a unique Nash equilibrium of inventory order levels in the dual channels and an optimal transshipment price that maximises the profit of the entire supply chain. Finally, we provide a numerical example with a uniform demand distribution. Our analyses offer many managerial insights and show that transshipment always benefits the OTO supply chain.
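To see why pooling helps the chain as a whole, the following Monte Carlo sketch compares expected chain profit with and without transshipment for a two-channel newsvendor under uniform demand. All parameter values are hypothetical, and the transshipment price is omitted because, at the level of the whole chain, it is an internal transfer that cancels out; it matters only for the decentralised ordering decisions the paper analyses:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical parameters (not the paper's): unit selling price and unit cost
price, cost = 10.0, 4.0
q_online, q_store = 60.0, 60.0                 # stocking level per channel
d = rng.uniform(0, 100, size=(100_000, 2))     # uniform demand in each channel

def chain_profit(q1, q2, d, transship):
    """Expected profit of the whole chain, by simulation."""
    sold = np.minimum(q1, d[:, 0]) + np.minimum(q2, d[:, 1])
    rev = price * sold
    if transship:
        # leftover units at one location cover unmet demand at the other
        left = np.maximum(q1 - d[:, 0], 0) + np.maximum(q2 - d[:, 1], 0)
        short = np.maximum(d[:, 0] - q1, 0) + np.maximum(d[:, 1] - q2, 0)
        rev += price * np.minimum(left, short)
    return np.mean(rev - cost * (q1 + q2))

p_no = chain_profit(q_online, q_store, d, transship=False)
p_ts = chain_profit(q_online, q_store, d, transship=True)
print(p_no, p_ts)
```

Whenever one channel is overstocked while the other is short, transshipment converts otherwise wasted inventory into sales, so the simulated chain profit with transshipment is never lower, consistent with the paper's conclusion.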
The field of superamphiphobic surface fabrication has evolved rapidly in the last decade; however, research on important issues such as sustainability and green chemistry procedures is still scarce. Herein, a simple method of microwave irradiation (MW) to minimize energy consumption during the preparation of superamphiphobic aluminum (Al) surfaces is reported. Al substrates are first etched in diluted HCl solutions to generate a microstructure and then irradiated in a commercial microwave unit for several time intervals, temperatures, and pressures. The surfaces are then coated with different compounds, and the wettability is tested with high and very-low surface tension liquids. Optical profilometry and scanning electron microscopy images show that the density of hierarchical micro-nanostructures increases with MW time, temperature, and pressure. At 170 °C and 7.9 bar, the surfaces present a high density of structures and re-entrant topographies. The obtained coatings display excellent repellence to liquids with surface tensions as low as 27.5 mN m−1. X-ray photoelectron spectroscopy data show the importance of efficient surface functionalization for the production of superamphiphobicity in Al substrates. The results show that MW irradiation of Al substrates can be a green and efficient method for fabricating superamphiphobic surfaces.
Iris recognition has been widely used in several scenarios with very satisfactory results. As one of the earliest stages of the process, image segmentation underlies everything that follows and plays a crucial role in the success of the recognition task. In this paper we analyze the relationship between the accuracy of the iris segmentation process and the error rates of three typical iris recognition methods. We selected 5000 images from the UBIRIS, CASIA and ICE databases that the chosen segmentation algorithm can accurately segment, and artificially simulated four types of segmentation inaccuracies. The obtained results allowed us to conclude that there is a strong relationship between translational segmentation inaccuracies – which lead to errors in phase – and the recognition error rates.
A new method for the recognition of spoken emotions is presented, based on features of the glottal airflow signal. Its effectiveness is tested on the new optimum-path forest (OPF) classifier as well as on six previously established classification methods: the Gaussian mixture model (GMM), support vector machine (SVM), artificial neural network – multilayer perceptron (ANN-MLP), k-nearest neighbor rule (k-NN), Bayesian classifier (BC) and the C4.5 decision tree. The speech database used in this work was collected in an anechoic environment with ten speakers (5 M and 5 F), each speaking ten sentences in four different emotions: Happy, Angry, Sad, and Neutral. The glottal waveform was extracted from fluent speech via inverse filtering. The investigated features included the glottal symmetry and MFCC vectors of various lengths, both for the glottal and the corresponding speech signal. Experimental results indicate that the best performance is obtained with the glottal-only features, with SVM and OPF generally providing the highest recognition rates, while GMM and the combination of glottal and speech features performed comparatively worse. For this text-dependent, multi-speaker task, the top-performing classifiers achieved perfect recognition rates in the case of 6th-order glottal MFCCs.
An implicit tenet of modern search heuristics is that there is a mutually exclusive balance between two desirable goals: search diversity (or distribution), i.e., search through a maximum number of distinct areas, and search intensity, i.e., maximum exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false in parallel systems. We argue that it is possible to devise methods that exhibit both high search intensity and high search diversity during the whole algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be used together in a framework for the development of advanced collective search methods that exhibit search intensity and search diversity simultaneously. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses conducted at the level of meta-meta-heuristics, because all arguments are independent of the specific problems handled (such as scheduling, planning, etc.), of the specific solution methods (such as genetic algorithms, simulated annealing, tabu search, etc.), and of the specific neighborhood or genetic operators (2-opt, crossover, etc.).
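For binary solution encodings, the distance metric described above, the minimum number of local search steps between two solutions, reduces to the Hamming distance when the local move is a single bit flip. A small illustrative sketch (the function names are ours, not the paper's):

```python
def hamming(a, b):
    """Minimum number of single-bit-flip local-search moves needed to
    transform solution a into solution b -- a natural distance metric
    for binary encodings."""
    return sum(x != y for x, y in zip(a, b))

def diversity(population):
    """Mean pairwise distance: one simple scalar summary of how widely
    a set of parallel search processes is spread over the landscape.
    A coordination policy could redirect processes when it drops."""
    n = len(population)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(hamming(population[i], population[j]) for i, j in pairs) / len(pairs)

pop = ["00000", "00111", "11111"]
print(diversity(pop))
```

A coordination policy in the sense of the paper would monitor such a measure across the parallel processes and reassign a process to an unexplored region whenever it strays too close to another, keeping diversity high without relaxing local exploitation.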
When numerical CSPs are used to solve systems of n equations in n variables, the preconditioned interval Newton operator plays two key roles: first, it allows handling the n equations as a global constraint, hence achieving a powerful contraction; second, it can rigorously prove the existence of solutions. However, neither of these advantages carries over to under-constrained systems of equations, which have manifolds of solutions. A new framework is proposed in this paper to extend the advantages of the preconditioned interval Newton operator to under-constrained systems of equations. This is achieved simply by allowing the domains of the NCSP to be parallelepipeds, which generalize the boxes usually used as domains.
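In one dimension, the (unpreconditioned) interval Newton contraction N(X) = m − f(m)/F′(X), intersected with X, can be sketched as follows for f(x) = x² − 2 on X = [1, 2]. This toy version ignores the outward (directed) rounding that a rigorous implementation requires:

```python
def interval_newton_sqrt2(lo=1.0, hi=2.0, n_iter=5):
    """1-D interval Newton contraction for f(x) = x^2 - 2 on [lo, hi].
    N(X) = m - f(m) / F'(X), intersected with X. In exact arithmetic,
    N(X) strictly inside X proves a root exists in X (here sqrt(2))."""
    for _ in range(n_iter):
        m = 0.5 * (lo + hi)                    # midpoint of the box
        fm = m * m - 2.0
        # interval derivative F'(X) = [2*lo, 2*hi] (positive on [1, 2]);
        # dividing fm by it gives an interval, sorted to handle the sign
        q = sorted([fm / (2.0 * lo), fm / (2.0 * hi)])
        n_lo, n_hi = m - q[1], m - q[0]        # N(X) = m - q
        lo, hi = max(lo, n_lo), min(hi, n_hi)  # intersect N(X) with X
    return lo, hi

lo, hi = interval_newton_sqrt2()
print(lo, hi)
```

The box contracts quadratically onto √2. For an under-constrained system, no such point-like contraction exists, which is why the paper replaces boxes by parallelepipeds aligned with the solution manifold.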