20 similar documents found.
1.
Hamid Masoud, Saeed Jalili, Seyed Mohammad Hossein Hasheminejad 《Applied Intelligence》2013,38(3):289-314
Combinatorial Particle Swarm Optimization (CPSO) is a relatively recent technique for solving combinatorial optimization problems. CPSO has been used in different applications, e.g., partitional clustering and project scheduling problems, and it has shown very good performance. In the partitional clustering problem, CPSO needs the number of clusters to be specified in advance. However, in many clustering problems the correct number of clusters is unknown and usually cannot be estimated. In this paper, an improved version, called CPSOII, is proposed as a dynamic clustering algorithm that automatically finds the best number of clusters and simultaneously categorizes data objects. CPSOII uses a renumbering procedure as a preprocessing step and several extended PSO operators to increase population diversity and remove redundant particles. The renumbering procedure increases population diversity, speed of convergence and quality of solutions. For performance evaluation, we have examined CPSOII on both artificial and real data. Experimental results show that CPSOII is effective and robust, and can solve clustering problems successfully with both known and unknown numbers of clusters. Comparing the results obtained by CPSOII with CPSO and other clustering techniques such as KCPSO, CGA and K-means reveals that CPSOII yields promising results; for example, it improves the value of the DBI criterion by 9.26% on the Hepato data set.
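For reference, the DBI criterion mentioned above is the Davies-Bouldin index; its standard definition (the dispersion and distance measures used in the paper may differ in detail) is

$\mathrm{DBI} = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \frac{S_i + S_j}{d(c_i, c_j)}$,

where $S_i$ is the average distance of the objects in cluster $i$ to its centroid $c_i$ and $d(c_i, c_j)$ is the distance between centroids; lower values indicate better clusterings.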
2.
Pairwise data clustering by deterministic annealing
Hofmann T., Buhmann J.M. 《IEEE transactions on pattern analysis and machine intelligence》1997,19(1):1-14
Partitioning a data set and extracting hidden structure from the data arise in different application areas of pattern recognition, speech and image processing. Pairwise data clustering is a combinatorial optimization method for data grouping which extracts hidden structure from proximity data. We describe a deterministic annealing approach to pairwise clustering which shares the robustness properties of maximum entropy inference. The resulting Gibbs probability distributions are estimated by mean-field approximation. A new structure-preserving algorithm to cluster dissimilarity data and to simultaneously embed these data in a Euclidean vector space is discussed, which can be used for dimensionality reduction and data visualization. The suggested embedding algorithm, which outperforms conventional approaches, has been implemented to analyze dissimilarity data from protein analysis and from linguistics. The algorithm for pairwise data clustering is used to segment textured images.
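In outline (a generic sketch of deterministic annealing rather than the paper's exact cost function), the cluster assignments $M$ are given a Gibbs distribution at computational temperature $T$, and the mean-field approximation replaces the exact expectations with self-consistent assignment probabilities:

$P(M) = \frac{\exp(-\mathcal{H}(M)/T)}{\sum_{M'} \exp(-\mathcal{H}(M')/T)}, \qquad \langle M_{i\nu} \rangle \approx \frac{\exp(-\varepsilon_{i\nu}/T)}{\sum_{\mu} \exp(-\varepsilon_{i\mu}/T)}$,

where $\mathcal{H}$ is the pairwise clustering cost and $\varepsilon_{i\nu}$ is the mean-field cost of assigning object $i$ to cluster $\nu$; lowering $T$ concentrates the distribution on low-cost partitions.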
3.
We show a practical application of a well-known nonequilibrium relation, the Jarzynski equality, in quantum computation. Its implementation may open a way to solve combinatorial optimization problems, i.e., the minimization of a real single-valued cost function with many arguments. It is known that the ordinary quantum computational algorithm for solving this kind of hard optimization problem has a bottleneck: its computation must be made extremely slow to avoid relevant errors. However, the novel strategy presented in this study might overcome such a difficulty.
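For context, the Jarzynski equality in its standard classical form relates the work $W$ performed in a finite-time (nonequilibrium) process to the equilibrium free-energy difference $\Delta F$:

$\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}, \qquad \beta = 1/(k_B T)$,

where the average is taken over realizations of the process; how this relation is exploited in the quantum-computation setting is the subject of the paper.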
4.
María Teresa Gallegos 《Computational statistics & data analysis》2010,54(3):637-648
Statistical clustering criteria with free scale parameters and unknown cluster sizes are inclined to create small, spurious clusters. To mitigate this tendency a statistical model for cardinality-constrained clustering of data with gross outliers is established, its maximum likelihood and maximum a posteriori clustering criteria are derived, and their consistency and robustness are analyzed. The criteria lead to constrained optimization problems that can be solved by using iterative, alternating trimming algorithms of k-means type. Each step in the algorithms requires the solution of a λ-assignment problem known from combinatorial optimization. The method allows one to estimate the numbers of clusters and outliers. It is illustrated with a synthetic data set and a real one.
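As a rough illustration of the alternating trimming idea (a simplified k-means-style sketch that omits the cardinality constraints and the λ-assignment step of the paper; all names below are illustrative):

import numpy as np

def trimmed_kmeans(X, k, n_outliers, n_iter=100, seed=0):
    # Simplified trimmed k-means: alternately assign points to their nearest
    # center, discard the n_outliers points that fit worst as outliers, and
    # update the centers from the retained points only.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    keep = np.arange(len(X))
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, k) distances
        labels = d.argmin(axis=1)
        keep = np.argsort(d.min(axis=1))[: len(X) - n_outliers]          # trim worst fits
        new_centers = np.array([
            X[keep][labels[keep] == j].mean(axis=0)
            if np.any(labels[keep] == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    outliers = np.setdiff1d(np.arange(len(X)), keep)
    return centers, labels, outliers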
5.
A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing
Lipo Wang, Sa Li, F. Tian, Xiuju Fu 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2004,34(5):2119-2125
Recently, Chen and Aihara demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems than both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because its chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, stochastic chaotic simulated annealing, which uses a noisy chaotic neural network. We show the effectiveness of this new approach on two difficult combinatorial optimization problems: a traveling salesman problem and a channel assignment problem for cellular mobile communications.
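A commonly cited form of the transiently chaotic (Chen-Aihara-style) neuron dynamics, with an additive noise term as used in noisy chaotic networks, is sketched below; the exact coefficients and annealing schedules in the paper may differ:

$x_i(t) = \frac{1}{1 + e^{-y_i(t)/\varepsilon}}$
$y_i(t+1) = k\, y_i(t) + \alpha \Bigl( \sum_{j} w_{ij} x_j(t) + I_i \Bigr) - z(t)\bigl(x_i(t) - I_0\bigr) + n(t)$
$z(t+1) = (1-\beta)\, z(t)$

where $z(t)$ is the decaying self-feedback term responsible for the transient chaos and $n(t)$ is random noise whose amplitude is likewise annealed toward zero, adding SSA-like stochasticity on top of CSA.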
6.
Simulated annealing using a reversible jump Markov chain Monte Carlo algorithm for fuzzy clustering
In this paper, an approach is proposed for automatically clustering a data set into a number of fuzzy partitions using simulated annealing with a reversible jump Markov chain Monte Carlo algorithm. This is in contrast to the widely used fuzzy clustering scheme, the fuzzy c-means (FCM) algorithm, which requires a priori knowledge of the number of clusters. The proposed approach performs the clustering by optimizing a cluster validity index, the Xie-Beni index. It makes use of the homogeneous reversible jump Markov chain Monte Carlo (RJMCMC) kernel as the proposal, so that the algorithm is able to jump between different dimensions, i.e., numbers of clusters, until the correct value is obtained. Different moves, like birth, death, split, merge, and update, are used for sampling a candidate state given the current state. The effectiveness of the proposed technique in optimizing the Xie-Beni index, and thereby determining the appropriate clustering, is demonstrated for both artificial and real-life data sets. As part of the investigation, the utility of the fuzzy clustering scheme for classifying pixels in an IRS satellite image of Kolkata is studied. A technique for reducing the computational effort in the case of satellite image data is incorporated.
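For reference, the Xie-Beni index being optimized has the standard form

$XB(U, V; X) = \frac{\sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{2} \|x_k - v_i\|^2}{n \cdot \min_{i \neq j} \|v_i - v_j\|^2}$,

where $u_{ik}$ are the fuzzy memberships, $v_i$ the cluster centers and $n$ the number of data points; smaller values indicate compact, well-separated partitions, so the annealing minimizes $XB$ across different numbers of clusters $c$.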
7.
Microarray technology has made it possible to monitor the expression levels of many genes simultaneously across a number of experimental conditions. Fuzzy clustering is an important tool for analyzing microarray gene expression data. In this article, a real-coded Simulated Annealing (VSA) based fuzzy clustering method with variable-length configuration is developed and combined with a popular Artificial Neural Network (ANN) based classifier. The idea is to refine the clustering produced by VSA using the ANN classifier to obtain improved clustering performance. The proposed technique is used to cluster three publicly available real-life microarray data sets. Its superior performance is demonstrated by comparison with some widely used existing clustering algorithms, and a statistical significance test has been conducted to establish that this superiority is statistically significant. Finally, the biological relevance of the clustering solutions is established.
8.
A hybrid clustering algorithm combining K-harmonic means and simulated annealing particle swarm optimization
Considering the respective strengths and weaknesses of the K-harmonic means and simulated annealing particle swarm clustering algorithms, a hybrid clustering algorithm that fuses K-harmonic means with simulated annealing particle swarm optimization is proposed. First, the particle swarm is divided into several subswarms by the K-harmonic means method, and each particle updates its position according to its personal best and the global best of its own subswarm. A simulated annealing mechanism is also introduced, which suppresses premature convergence and improves computational accuracy. The hybrid clustering algorithm is validated on four data sets (Iris, Zoo, Wine and Image Segmentation), with the F-measure as the criterion for evaluating clustering quality. The results show that the hybrid algorithm effectively avoids getting trapped in local optima and strengthens the global search ability of the algorithm while maintaining convergence speed, clearly improving the clustering results. The algorithm is currently used in a water-quality analysis system for healthy aquaculture at a freshwater farming base in Wuxi and runs well.
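For reference, standard forms of the ingredients named above are (the paper's exact hybridization details may differ): the K-harmonic means objective

$KHM(X, C) = \sum_{i=1}^{n} \frac{k}{\sum_{j=1}^{k} 1/\|x_i - c_j\|^{p}}$, typically with $p > 2$,

and the particle swarm update, here with the subswarm best $g^{sub}$ in place of a single global best,

$v_i \leftarrow w\, v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g^{sub} - x_i), \qquad x_i \leftarrow x_i + v_i$,

combined with a simulated-annealing-style acceptance of occasional worse moves with probability $\exp(-\Delta E / T)$ to suppress premature convergence.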
9.
Tommaso Urli 《Constraints》2015,20(4):473-473
10.
11.
Thomas Watson 《Computational Complexity》2013,22(4):727-769
We define a combinatorial checkerboard to be a function $f : \{1, \ldots, m\}^d \to \{1, -1\}$ of the form $f(u_1, \ldots, u_d) = \prod_{i=1}^{d} f_i(u_i)$ for some functions $f_i : \{1, \ldots, m\} \to \{1, -1\}$. This is a variant of combinatorial rectangles, which can be defined in the same way but using $\{0, 1\}$ instead of $\{1, -1\}$. We consider the problem of constructing explicit pseudorandom generators for combinatorial checkerboards. This is a generalization of small-bias generators, which correspond to the case $m = 2$. We construct a pseudorandom generator that $\epsilon$-fools all combinatorial checkerboards with seed length $O\bigl(\log m + \log d \cdot \log\log d + \log^{3/2} \frac{1}{\epsilon}\bigr)$. Previous work by Impagliazzo, Nisan, and Wigderson implies a pseudorandom generator with seed length $O\bigl(\log m + \log^2 d + \log d \cdot \log \frac{1}{\epsilon}\bigr)$. Our seed length is better except when $\frac{1}{\epsilon} \geq d^{\omega(\log d)}$.
12.
Combinatorial problems appear in many areas in science, engineering, biomedicine, business, and operations research. This article presents a new intelligent computing approach for solving combinatorial problems, involving permutations and combinations, by incorporating logic programming. An overview of applied combinatorial problems in various domains is given. Computationally hard and popular combinatorial problems such as the traveling salesman problem are discussed to illustrate the usefulness of the logic programming approach. Detailed discussions of the implementation of combinatorial problems, with time complexity analyses, are presented in Prolog, the standard language of logic programming. These programs can be easily integrated into other systems to implement logic programming in combinatorics.
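The article's examples are in Prolog; as a language-neutral illustration of the same permutation-search idea for the traveling salesman problem, a minimal brute-force sketch (illustrative only, exponential in the number of cities) is:

from itertools import permutations

def tsp_bruteforce(dist):
    # Exhaustive traveling-salesman search over permutations of cities
    # 1..n-1 (city 0 fixed as the start), returning the best tour and cost.
    # Only practical for small n: there are (n-1)! candidate tours.
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Example: 4 cities with a symmetric distance matrix.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_bruteforce(D))   # ((0, 1, 3, 2, 0), 18)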
13.
Reconfigurable accelerators can improve processing time on combinatorial problems with fine-grained parallelism. Such problems contain a huge number of logical operations (NOT, AND and OR) that can be evaluated simultaneously, a characteristic that varies considerably from problem to problem. Because of this variability, such combinatorial problems are approached using instance-specific reconfiguration: hardware tailored to a specific algorithm and a specific set of input data. Boolean satisfiability (SAT for short) is a common combinatorial problem that exhibits fine-grained parallelism and varies considerably from instance to instance. Its solution is thus an ideal candidate for improvement through instance-specific reconfiguration. In fact, simulations of an instance-specific accelerator show potential speed-ups in execution time by a factor of up to 140,000 over a software solver. The authors detail the results of their prototype, which achieves an order-of-magnitude speed-up in the execution of difficult satisfiability problems.
14.
Clustering is a well-known technique for identifying intrinsic structure and extracting useful information from large amounts of data. One of the most extensively used clustering techniques is the fuzzy c-means algorithm. However, the standard fuzzy c-means objective function becomes computationally problematic for large amounts of data and for measurement uncertainty in the data objects, and fuzzy c-means also makes it difficult to set optimal parameters for the clustering method. The goal of this paper is therefore to produce an alternative generalization of FCM clustering to deal with more complicated data, called quadratic entropy based fuzzy c-means. The paper develops effective quadratic entropy fuzzy c-means using combinations of a regularization function, quadratic terms, mean distance functions, and kernel distance functions, giving a complete framework for constructing effective quadratic entropy based fuzzy clustering algorithms. It establishes an effective way of estimating memberships and updating centers by minimizing the proposed objective functions. To reduce the number of iterations of the proposed techniques, a new algorithm for initializing the cluster centers is also proposed. To assess cluster validity and choose the number of clusters, the silhouette method is used. For the first time, the synthetic control chart time series is segmented directly using the proposed methods to examine their performance, and the results show that the proposed clustering techniques have advantages over standard FCM and the very recent ClusterM-k-NN in segmenting the synthetic control chart time series.
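One plausible form of a quadratic-entropy regularized fuzzy c-means objective, given only as an illustration (the paper's actual formulation, kernel terms and regularization weights may differ), is

$J(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik} \|x_k - v_i\|^2 + \lambda \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}(1 - u_{ik}), \qquad \text{subject to } \sum_{i=1}^{c} u_{ik} = 1$,

where the quadratic entropy term $\sum u_{ik}(1 - u_{ik})$ plays the role of the logarithmic entropy regularizer used in entropy-based FCM variants.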
15.
16.
Extended Hopfield models for combinatorial optimization
The extended Hopfield neural network proposed by Abe et al. (1992) for solving combinatorial optimization problems with equality and/or inequality constraints has the drawback that it frequently stabilizes in states where neurons are ambiguously classified as active or inactive. We introduce into the model a competitive activation mechanism and derive a new expression for the penalty energy that allows us to significantly reduce the number of neurons with intermediate activation levels. The new version of the model is validated experimentally on the set covering problem. Our results confirm the importance of introducing competitive activation mechanisms into Hopfield neural-network models.
17.
We discuss two experimental designs and show how to use them to evaluate difficult empirical combinatorial problems. We restrict our analysis here to the knapsack problem but comment more generally on the use of computational testing to analyze the performance of algorithms.
18.
Bagging for path-based clustering
Fischer B., Buhmann J.M. 《IEEE transactions on pattern analysis and machine intelligence》2003,25(11):1411-1415
A resampling scheme for clustering with similarity to bootstrap aggregation (bagging) is presented. Bagging is used to improve the quality of path-based clustering, a data clustering method that can extract elongated structures from data in a noise robust way. The results of an agglomerative optimization method are influenced by small fluctuations of the input data. To increase the reliability of clustering solutions, a stochastic resampling method is developed to infer consensus clusters. A related reliability measure allows us to estimate the number of clusters, based on the stability of an optimized cluster solution under resampling. The quality of path-based clustering with resampling is evaluated on a large image data set of human segmentations.
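A generic bootstrap/co-association consensus scheme in the same spirit (not the authors' exact path-based procedure; the clusterer and parameters below are illustrative) might look like:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def consensus_clustering(X, k, n_boot=50, seed=0):
    # Generic resampling consensus: cluster bootstrap subsamples, count how
    # often each pair of points lands in the same cluster (co-association),
    # then cluster that consensus matrix once more.
    rng = np.random.default_rng(seed)
    n = len(X)
    co, counts = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(n_boot):
        idx = np.unique(rng.choice(n, size=n, replace=True))   # bootstrap sample
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(X[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        co[np.ix_(idx, idx)] += same
        counts[np.ix_(idx, idx)] += 1.0
    co = np.divide(co, counts, out=np.zeros_like(co), where=counts > 0)
    # Cluster the consensus (1 - co-association) distances.
    # (Older scikit-learn versions use affinity= instead of metric=.)
    final = AgglomerativeClustering(n_clusters=k, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - co)
    return final, co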
19.
The pixel labeling problems in computer vision are often formulated as energy minimization tasks. Algorithms such as graph cuts and belief propagation are prominent; however, they are only applicable to specific energy forms. For general optimization, Markov Chain Monte Carlo (MCMC) based simulated annealing can estimate the minimum states, but only very slowly. This paper presents a sampling paradigm for faster optimization. First, in contrast to previous MCMC methods, the detailed balance constraint is eliminated: reversible Markov chain jumps are essential for sampling an arbitrary posterior distribution, but they are not essential for optimization tasks. This allows a computationally simple window cluster sampler. Second, the proposal states are generated from combined sets of local minima, which achieves a substantial increase in speed compared to uniformly labeled cluster proposals. Third, under a coarse-to-fine strategy, a maximum window size variable is incorporated along with the temperature variable during simulated annealing. The proposed window annealing is experimentally shown to be many times faster than, and capable of finding lower energies than, the previous Gibbs and Swendsen-Wang cut (SW-cut) samplers. In addition, the proposed method is compared with deterministic algorithms such as graph cuts, belief propagation, and spectral methods in their own specific energy forms. Window annealing displays competitive performance in all domains.
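The pixel-labeling energies referred to above typically have the standard pairwise MRF form, and simulated annealing accepts a proposed relabeling $x \to x'$ with a Metropolis-style probability; a generic sketch (not the paper's specific proposal mechanism) is

$E(x) = \sum_{p \in \mathcal{P}} D_p(x_p) + \sum_{(p,q) \in \mathcal{N}} V_{pq}(x_p, x_q), \qquad P(\text{accept}) = \min\bigl\{1, \exp\bigl(-(E(x') - E(x))/T(t)\bigr)\bigr\}$,

where $D_p$ is the data term, $V_{pq}$ the smoothness term over neighboring pixels, and $T(t)$ a decreasing temperature; window annealing differs in how the cluster proposals $x'$ are generated, as described above.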
20.
Mark Sh. Levin 《Advances in Engineering Software》2011,42(12):1089-1098
A four-layer framework for the domain of combinatorial optimization problems/models is suggested for structuring and solving applied problems: (1) basic combinatorial models and multicriteria decision making problems (e.g., clustering, knapsack problem, multiple choice problem, multicriteria ranking, assignment/allocation); (2) composite models/procedures (e.g., multicriteria combinatorial problems, morphological clique problem); (3) basic (standard) solving frameworks, e.g.: (i) Hierarchical Morphological Multicriteria Design (HMMD) (ranking, combinatorial synthesis based on the morphological clique problem), (ii) multi-stage design (two-level HMMD), (iii) a special multi-stage composite framework (clustering, assignment/location, multiple choice problem); and (4) domain-oriented solving frameworks, e.g.: (a) design of modular software, (b) design of test inputs for multi-function system testing, (c) combinatorial planning of medical treatment, (d) design and improvement of communication network topology, (e) a multi-stage framework for information retrieval, (f) combinatorial evolution and forecasting of software and devices. The multi-layer approach covers the ‘decision cycle’, i.e., problem statement, models, algorithms/procedures, solving schemes, decisions, decision analysis and improvement.