Similar Documents
1.
A new mapping heuristic is developed, based on the recently proposed Mean Field Annealing (MFA) algorithm. An efficient implementation scheme, which decreases the complexity of the proposed algorithm by asymptotic factors, is also given. Performance of the proposed MFA algorithm is evaluated in comparison with two well-known heuristics: Simulated Annealing and Kernighan-Lin. Results of the experiments indicate that MFA can be used as an alternative heuristic for solving the mapping problem. The inherent parallelism of MFA is exploited by designing an efficient parallel algorithm for the proposed heuristic.

2.
Clustering is an important research area with numerous applications in pattern recognition, machine learning, and data mining. Since the clustering problem on numeric data sets can be formulated as a typical combinatorial optimization problem, much research has addressed the design of heuristic algorithms for finding sub-optimal solutions in a reasonable period of time. However, most heuristic clustering algorithms are sensitive to the initialization and do not guarantee high-quality results. Recently, the Approximate Backbone (AB), i.e., the commonly shared intersection of several sub-optimal solutions, has been proposed to address this sensitivity to initialization. In this paper, we introduce the AB into heuristic clustering to overcome the initialization sensitivity of conventional heuristic clustering algorithms. The main advantage of the proposed method is its ability to restrict the initial search space around the optimal result by defining the AB, thereby reducing the impact of initialization on clustering and eventually improving the performance of heuristic clustering. Experiments on synthetic and real-world data sets validate the effectiveness of the proposed approach in comparison to three conventional heuristic clustering algorithms and three other algorithms with improved initialization.
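A minimal sketch of how an approximate backbone could seed heuristic clustering, assuming k-means as the underlying heuristic: points that every sub-optimal run groups together form the backbone, and the largest backbone groups supply the initial centres for a final run. The co-association construction and the seeding rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def approximate_backbone_kmeans(X, k, n_runs=5, seed=0):
    """X: (n, d) numpy array. Assumes at least k backbone groups emerge."""
    rng = np.random.RandomState(seed)
    # 1. Collect several sub-optimal solutions (single-init k-means runs).
    labelings = [KMeans(n_clusters=k, n_init=1,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
                 for _ in range(n_runs)]
    n = len(X)
    # 2. Pairs of points grouped together in *every* run.
    together = np.ones((n, n), dtype=bool)
    for lab in labelings:
        together &= (lab[:, None] == lab[None, :])
    # 3. Backbone groups = classes of the "always together" relation.
    unvisited, groups = set(range(n)), []
    while unvisited:
        i = unvisited.pop()
        members = {i} | {j for j in unvisited if together[i, j]}
        unvisited -= members
        groups.append(sorted(members))
    # 4. Seed the final run with the centres of the k largest backbone groups.
    groups.sort(key=len, reverse=True)
    init = np.array([X[g].mean(axis=0) for g in groups[:k]])
    return KMeans(n_clusters=k, init=init, n_init=1).fit(X)

# Usage: model = approximate_backbone_kmeans(X, k=3)
```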

3.
Based on the sensitivity matrix, a simple sensitivity measure is defined that reflects the influence of a single input node on the performance of the whole network. An input-layer pruning algorithm for neural networks is then proposed on the basis of this measure. Finally, the effectiveness of the method is verified on two pattern classification examples from the UCI machine learning repository.
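A minimal sketch of sensitivity-driven input pruning, with the paper's matrix-based sensitivity definition replaced by a finite-difference approximation on a trained network; the network configuration, the keep ratio, and the helper names are assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def input_sensitivities(model, X, eps=1e-3):
    """Mean absolute change of the network output per unit change of each input."""
    X = np.asarray(X, dtype=float)
    base = model.predict_proba(X)
    scores = []
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] += eps
        scores.append(np.abs(model.predict_proba(Xp) - base).mean() / eps)
    return np.array(scores)

def prune_inputs(X, y, keep_ratio=0.5):
    # Train, score each input, keep the most sensitive ones, retrain.
    model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                          random_state=0).fit(X, y)
    s = input_sensitivities(model, X)
    keep = np.sort(np.argsort(s)[::-1][:max(1, int(keep_ratio * np.shape(X)[1]))])
    pruned = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                           random_state=0).fit(np.asarray(X)[:, keep], y)
    return pruned, keep
```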

4.
Association pattern mining, one of the important branches of data mining, aims to discover associations or correlations among itemsets. However, traditional mining methods based on the support-confidence framework have two shortcomings: they produce too many patterns (frequent itemsets and rules), and some of the mined rules are uninteresting to the user, useless, or even misleading. Effective pruning of useless patterns during mining is therefore necessary. Introducing chi-square analysis into the correlation measurement of patterns, i.e., using the chi-square test to measure the correlation between itemsets and between rule antecedents and consequents, is an effective pruning method. Experimental results show that adding the chi-square test on top of the support measure effectively prunes uncorrelated patterns and thus reduces the number of frequent itemsets and rules.
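A minimal sketch of the chi-square pruning step, assuming simple transaction counts are available: for a rule A -> B, the 2x2 contingency table of A versus B is built over the n transactions, and the rule is kept only if the chi-square statistic reaches the 0.05 critical value (3.841 for one degree of freedom). The helper names are illustrative.

```python
def chi_square(n, n_a, n_b, n_ab):
    """n transactions, n_a containing A, n_b containing B, n_ab containing both."""
    table = [[n_ab, n_a - n_ab],
             [n_b - n_ab, n - n_a - n_b + n_ab]]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * column total / n.
            expected = (table[i][0] + table[i][1]) * (table[0][j] + table[1][j]) / n
            if expected > 0:
                chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

def is_correlated(n, n_a, n_b, n_ab, critical=3.841):
    # Keep the rule only if A and B are significantly correlated.
    return chi_square(n, n_a, n_b, n_ab) >= critical

# Example: 1000 transactions, A in 300, B in 400, both in 200.
# chi_square(1000, 300, 400, 200) is about 127, so the rule A -> B is kept.
```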

5.
In the multivariate one-way analysis of variance, a test statistic based solely on the rank orders of the data is proposed. In the two-group case the statistic simplifies to a test of Puri and Sen [19]. Monte Carlo simulation techniques are used to evaluate the performance of the test statistic under various distributions. These evaluations include the simulated significance levels and power functions. By the nature of the proposed test it is evident that the new procedure is easier to implement than other existing rank-based tests.

6.
Probabilistic methods for causal discovery are based on the detection of patterns of correlation between variables. Grounded in statistical theory, they have revolutionised the study of causality. However, when correlation itself is unreliable, so are probabilistic methods: unusual data can lead to spurious causal links, while nonmonotonic functional relationships between variables can prevent the detection of causal links. We describe a new heuristic method for inferring causality between two continuous variables, based on randomness and unimodality tests and making few assumptions about the data. We evaluate the method against probabilistic and additive-noise algorithms on real and artificial datasets, and show that it performs competitively.

7.
Ensemble pruning deals with the reduction of base classifiers prior to combination in order to improve generalization and prediction efficiency. Existing ensemble pruning algorithms require much pruning time. This paper presents a fast pruning approach: pattern mining based ensemble pruning (PMEP). In this algorithm, the prediction results of all base classifiers are organized as a transaction database, and an FP-Tree structure is used to compact the prediction results. A greedy pattern mining method is then used to find the ensemble of size k. After obtaining the ensembles of all possible sizes, the one with the best accuracy is output. Experimental results show that, compared with Bagging, GASEN, and Forward Selection, PMEP achieves the best prediction accuracy and keeps the final ensemble small; more importantly, its pruning time is much shorter than that of the other ensemble pruning algorithms.
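A minimal sketch of the outer loop described above, with the FP-Tree pattern mining replaced by plain greedy forward selection on validation votes; it illustrates the idea of picking the best sub-ensemble over all sizes rather than the authors' algorithm.

```python
import numpy as np

def majority_vote_accuracy(votes, y):
    """votes: (n_classifiers, n_samples) array of non-negative integer labels."""
    pred = np.array([np.bincount(col).argmax() for col in votes.T])
    return float(np.mean(pred == y))

def greedy_prune(votes, y):
    remaining = list(range(votes.shape[0]))
    chosen, best = [], (0.0, [])
    while remaining:
        # Add the classifier that most improves the sub-ensemble's vote accuracy.
        gains = [(majority_vote_accuracy(votes[chosen + [c]], y), c)
                 for c in remaining]
        acc, c = max(gains)
        chosen.append(c)
        remaining.remove(c)
        if acc > best[0]:
            best = (acc, list(chosen))   # remember the best ensemble size seen so far
    return best                          # (validation accuracy, indices of kept classifiers)
```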

8.
To address the spurious branches produced by traditional skeleton extraction algorithms, which interfere with skeleton-based recognition and in particular can strongly distort the description of the object's shape, a new skeleton pruning algorithm based on skeleton weights is proposed. The core of the algorithm is to obtain the skeleton weights from the distance transform and, during pruning, to use these weights, which characterize the importance of each branch, as the criterion for deciding whether a branch is removed. Experimental results show that, as a preprocessing step for object skeleton representation and recognition, the weight-based skeleton pruning algorithm makes the skeleton reflect the stable structure of the target object more accurately.

9.
To identify the random error coefficients of an inertial measurement unit (IMU), the Allan variance method is adopted. For large data volumes, a simplified Allan variance algorithm is presented that exploits the characteristics of the log-log curve to reduce the amount of computation while preserving the accuracy of the Allan variance analysis; the algorithm is applied to long-duration static data from a MEMS inertial measurement unit. The results show that the method can effectively identify each error coefficient of the IMU.
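A minimal sketch of a non-overlapping Allan variance evaluated only at logarithmically spaced cluster sizes, which is one common way to keep the cost low on long static recordings; the exact simplification used in the paper may differ, and the sampling period and helper name are assumptions.

```python
import numpy as np

def allan_deviation(samples, dt, points_per_decade=10):
    """samples: rate measurements at a fixed period dt; returns (taus, sigma(tau))."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Logarithmically spaced cluster sizes up to n // 2.
    sizes = np.unique(np.logspace(0, np.log10(n // 2),
                                  num=int(np.log10(n // 2) * points_per_decade),
                                  dtype=int))
    taus, adev = [], []
    for m in sizes:
        k = n // m
        means = samples[:k * m].reshape(k, m).mean(axis=1)   # cluster averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)            # Allan variance at tau = m * dt
        taus.append(m * dt)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)

# Example for gyro samples at 100 Hz:
# taus, adev = allan_deviation(gyro_z, dt=0.01)
# The slopes of log(adev) versus log(tau) identify the noise terms
# (-1/2: angle random walk, 0: bias instability, +1/2: rate random walk).
```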

10.
Modifications to a grammatical inference scheme by Feldman et al. are presented. A comparison of the relative performance of the original and modified schemes is made using the complexity measures of Feldman and Wharton. The case where a complex model is used to generate the sample set is then analyzed. A set of 104 samples was found that trained the program to infer the grammar that corresponded to the original model. The results of a study of the performance of this algorithm when there is a large number of samples are then presented. The major conclusion of this study is that the modified scheme has superior performance on small sample sets but is highly unsuitable for large ones.

11.
In the point site labeling problem, we are given a set P={p1,p2,…,pn} of point sites in the plane. The label of a point pi is an axis-parallel rectangle of specified size. The objective is to label the maximum number of points on the map so that the placed labels are mutually non-overlapping. Here, we investigate a special class of the point site labeling problem where (i) the labels of all points have the same height but may differ in length, (ii) the label of a point pi touches the point at one of its four corners, and (iii) the label of one point does not obscure any other point in P. We describe an efficient heuristic algorithm for this problem. We run our algorithm as well as the algorithm Rules proposed by Wagner et al. on randomly generated point sets and on the available benchmarks. The results produced by our algorithm are almost the same as those of Rules in most cases, but our algorithm is faster than Rules on dense instances. We have also computed the optimum solutions for all the examples considered by designing an algorithm that performs an exhaustive search in the worst case; this exhaustive search algorithm runs reasonably fast for most of the examples considered.
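To make the label model concrete, here is a minimal sketch, not the authors' heuristic: each candidate label is a fixed-height rectangle touching its point at one of the four corners, and a greedy pass accepts a placement only if it overlaps no accepted label and covers no other point.

```python
def candidate_boxes(x, y, w, h):
    # (xmin, ymin, xmax, ymax) for the four corner placements of one label.
    return [(x, y, x + w, y + h), (x - w, y, x, y + h),
            (x, y - h, x + w, y), (x - w, y - h, x, y)]

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def covers_point(box, p):
    return box[0] < p[0] < box[2] and box[1] < p[1] < box[3]

def greedy_labeling(points, widths, height):
    """points: list of (x, y); widths: per-point label lengths; height: common height."""
    placed = {}                      # point index -> chosen box
    for i, (x, y) in enumerate(points):
        for box in candidate_boxes(x, y, widths[i], height):
            ok = (all(not overlaps(box, b) for b in placed.values()) and
                  all(j == i or not covers_point(box, p)
                      for j, p in enumerate(points)))
            if ok:
                placed[i] = box
                break
    return placed                    # points with no valid corner are left unlabelled
```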

12.
The rectangle packing problem often appears in encasement and cutting as well as in very large-scale integration design. Many algorithms, such as genetic algorithms, simulated annealing, and other heuristics, have been proposed to solve it. In this paper, a new heuristic algorithm is proposed based on two important concepts, namely the corner-occupying action and the caving degree. The algorithm is tested on twenty-one rectangle-packing instances and achieves optimal solutions for 16 of them within reasonable runtime. Experimental results demonstrate that the developed algorithm is fairly efficient for solving the rectangle packing problem.

13.
In this paper, we study survivability in waveband switching optical networks and propose a new heuristic algorithm called Protection based on Survivable Integrated Auxiliary Graph (PSIAG) to tolerate single-link failures. The survivable integrated auxiliary graph (SIAG) is composed of a single virtual topology layer and multiple waveband-plane layers, and it effectively solves the routing and waveband assignment problem. In PSIAG, the waveband sub-path grouping scheme based on the SIAG is used to save switching ports in MG-OXCs. For each demand, PSIAG first computes a single-hop or multi-hop route-pair, consisting of a primary path and a link-disjoint backup path, on the virtual topology layer. If the route-pair cannot be found on the virtual topology layer, PSIAG then computes a hybrid multi-hop route-pair by jointly using the virtual topology layer and the waveband-plane layers. Simulation results show that PSIAG achieves better performance than the previous algorithm.

14.
Eye movement is a simple, repetitive movement that enables humans to interact with the environment. Common daily activities, such as reading a book or watching television, involve this natural activity, which consists of rapidly shifting the gaze from one region to another. In clinical applications, identifying the main components of eye movement during visual exploration, such as fixations and saccades, is the objective of eye movement analysis; however, in patients affected by motor control disorders the identification of fixations is not trivial. This work presents a new fixation identification algorithm based on the analysis of variance and covariance: the main idea is to use bivariate statistical analysis, comparing the variance over x and y, to identify fixations. We describe the new algorithm and compare it with the common dispersion-based fixation algorithm. To demonstrate the performance of our approach, we tested the algorithm in a group of healthy subjects and in patients affected by motor control disorders.
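A minimal sketch of the window-based idea, assuming gaze coordinates sampled at a fixed rate: a window is labelled a fixation when the spread of both the x and y samples stays below a threshold. The window length and threshold are illustrative, and the paper's bivariate test additionally exploits the covariance between the two coordinates.

```python
import numpy as np

def variance_fixations(x, y, win=30, max_std_deg=0.8):
    """x, y: gaze angles in degrees; win: window length in samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    fixation = np.zeros(len(x), dtype=bool)
    for start in range(0, len(x) - win + 1):
        seg = slice(start, start + win)
        # A quiet window in both coordinates marks all of its samples as fixation.
        if np.std(x[seg]) < max_std_deg and np.std(y[seg]) < max_std_deg:
            fixation[seg] = True
    return fixation
```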

15.
In this work a new optimization method, called the heuristic Kalman algorithm (HKA), is presented. This new algorithm is proposed as an alternative approach for solving continuous, non-convex optimization problems. The principle of HKA is to explicitly consider the optimization problem as a measurement process designed to give an estimate of the optimum. A specific procedure, based on the Kalman estimator, was developed to improve the quality of the estimate obtained through the measurement process. The main advantage of HKA, compared to other metaheuristics, lies in the small number of parameters that need to be set by the user. Further, it is shown that HKA converges almost surely to a near-optimal solution. The efficiency of HKA was evaluated in detail using several non-convex test problems, in both the unconstrained and constrained cases. The results were then compared to those obtained via other metaheuristics. The numerical experiments show that HKA is a promising approach for solving non-convex optimization problems, particularly in terms of computation time and success ratio.
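A simplified sketch of the principle, not the published equations: candidates are drawn from a Gaussian, the mean of the best candidates is treated as a noisy measurement of the optimum, and a Kalman-style gain blends that measurement into the current estimate while shrinking the search spread. The per-dimension gain and variance updates below are illustrative assumptions.

```python
import numpy as np

def hka_minimize(f, x0, sigma0, n_samples=50, n_best=10, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    m, s = np.asarray(x0, float), np.asarray(sigma0, float)
    for _ in range(iters):
        cand = rng.normal(m, s, size=(n_samples, len(m)))
        order = np.argsort([f(c) for c in cand])
        best = cand[order[:n_best]]
        xi = best.mean(axis=0)                 # "measurement" of the optimum
        v = best.var(axis=0) + 1e-12           # its uncertainty
        gain = s**2 / (s**2 + v)               # Kalman-style gain, per dimension
        m = m + gain * (xi - m)                # update the estimate of the optimum
        s = np.sqrt((1.0 - gain) * s**2)       # shrink the search spread
    return m, f(m)

# Usage on a toy non-convex function:
# rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
# x_opt, f_opt = hka_minimize(rosenbrock, x0=[0.0, 0.0], sigma0=[2.0, 2.0])
```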

16.
In this paper, we propose a new pruning algorithm to obtain the optimal number of hidden units of a single layer of a fully connected neural network (NN). The technique relies on a global sensitivity analysis of the model output. The relevance of the hidden nodes is determined by analysing the Fourier decomposition of the variance of the model output. Each hidden unit is assigned a ratio (the fraction of variance which the unit accounts for) that gives its ranking. This quantitative information therefore leads to a suggestion of the most favorable units to eliminate. Experimental results suggest that the method can be seen as an effective tool available to the user for controlling the complexity of NNs.
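A minimal sketch of ranking hidden units by the fraction of the output variance they account for. The paper derives this ratio from a Fourier decomposition of the output variance; the proxy below simply uses the variance of each unit's contribution to a linear output and ignores covariances between units, so it is only an illustration, and the weight layout is an assumption.

```python
import numpy as np

def hidden_unit_ratios(W1, b1, v, X):
    """Single hidden layer with tanh units and linear output y = v . tanh(X W1 + b1)."""
    a = np.tanh(X @ W1 + b1)                 # hidden activations, shape (n, h)
    contrib_var = (v**2) * a.var(axis=0)     # variance of each unit's contribution
    return contrib_var / contrib_var.sum()   # ranking ratios, sum to 1

def units_to_prune(W1, b1, v, X, keep_fraction=0.95):
    r = hidden_unit_ratios(W1, b1, v, X)
    order = np.argsort(r)[::-1]
    cum = np.cumsum(r[order])
    # Keep the highest-ranked units covering keep_fraction of the variance.
    kept = order[:np.searchsorted(cum, keep_fraction) + 1]
    return sorted(set(range(len(v))) - set(kept))   # suggested units to eliminate
```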

17.
A comparative analysis of methods for pruning decision trees
In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a top-down approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in the literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to conclusions on the predictive accuracy of simplified trees that are opposite to some drawn in the literature; we attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method.
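As a concrete reference point for one of the compared methods, here is a minimal sketch of reduced error pruning on a dictionary-encoded binary tree, which is an assumed encoding: working bottom-up, a subtree is replaced by a leaf predicting the majority class of the pruning-set examples reaching it whenever this does not increase the error on that pruning set.

```python
from collections import Counter

# node = {'leaf': label}  or
# node = {'feature': j, 'threshold': t, 'left': node, 'right': node}

def predict(node, x):
    while 'leaf' not in node:
        node = node['left'] if x[node['feature']] <= node['threshold'] else node['right']
    return node['leaf']

def errors(node, X, y):
    return sum(predict(node, x) != t for x, t in zip(X, y))

def rep_prune(node, X, y):
    if 'leaf' in node or not X:
        return node
    # Split the pruning set along this node and prune the children first (bottom-up).
    left_idx = [i for i, x in enumerate(X) if x[node['feature']] <= node['threshold']]
    left_set = set(left_idx)
    right_idx = [i for i in range(len(X)) if i not in left_set]
    node['left'] = rep_prune(node['left'], [X[i] for i in left_idx], [y[i] for i in left_idx])
    node['right'] = rep_prune(node['right'], [X[i] for i in right_idx], [y[i] for i in right_idx])
    # Replace the subtree by its majority-class leaf if that is no worse here.
    leaf = {'leaf': Counter(y).most_common(1)[0][0]}
    return leaf if errors(leaf, X, y) <= errors(node, X, y) else node
```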

18.
The main feature of the method suggested in this paper is the assignment of priorities to work elements: priority elements are preferred to non-priority elements when elements are assigned to stations. The method gives the minimum number of stations under a predetermined cycle time, with work element times considered invariant. It has been tested on nearly all the problems available in the pertinent literature and yields results that are better than or similar to those reported there. A computer program incorporating the new heuristic method is presented in the paper.
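A minimal sketch of the assignment loop under a fixed cycle time, in which priority elements are preferred whenever several ready elements fit in the remaining station time; the precedence handling and tie-breaking rule are simplifying assumptions, not the paper's exact procedure.

```python
def balance(times, precedes, priority, cycle_time):
    """times: {elem: time}; precedes: {elem: set of predecessors}; priority: set of elements."""
    assert max(times.values()) <= cycle_time, "every element must fit in one station"
    unassigned, stations = set(times), []
    while unassigned:
        station, remaining = [], cycle_time
        while True:
            # Elements whose predecessors are all assigned and that still fit.
            ready = [e for e in unassigned
                     if precedes.get(e, set()) <= (set(times) - unassigned)
                     and times[e] <= remaining]
            if not ready:
                break
            # Prefer priority elements, then longer tasks as a tie-break.
            e = max(ready, key=lambda t: (t in priority, times[t]))
            station.append(e)
            unassigned.remove(e)
            remaining -= times[e]
        stations.append(station)
    return stations

# Example:
# balance({'a': 4, 'b': 3, 'c': 5}, {'c': {'a'}}, priority={'b'}, cycle_time=7)
# -> [['b', 'a'], ['c']]
```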

19.
The alpha-beta technique for searching game trees is analyzed, in an attempt to provide some insight into its behavior. The first portion of this paper is an expository presentation of the method together with a proof of its correctness and a historical discussion. The alpha-beta procedure is shown to be optimal in a certain sense, and bounds are obtained for its running time with various kinds of random data.
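For reference, a minimal sketch of the alpha-beta procedure being analysed, written as a depth-limited minimax search that cuts off a branch as soon as its value can no longer affect the result; the game interface (children, evaluate, is_terminal) is hypothetical.

```python
def alphabeta(node, depth, alpha, beta, maximizing, game):
    if depth == 0 or game.is_terminal(node):
        return game.evaluate(node)
    if maximizing:
        value = float('-inf')
        for child in game.children(node):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:        # beta cutoff: the opponent will avoid this line
                break
        return value
    else:
        value = float('inf')
        for child in game.children(node):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if alpha >= beta:        # alpha cutoff
                break
        return value

# Called as:
# alphabeta(root, depth=6, alpha=float('-inf'), beta=float('inf'),
#           maximizing=True, game=my_game)
```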

20.
It is very expensive and time-consuming to annotate huge amounts of data, and active learning is a suitable approach to minimize the annotation effort. A novel active learning approach, coupled K nearest neighbor pseudo pruning (CKNNPP), is proposed in this paper; it queries examples by the KNNPP method. KNNPP applies the k nearest neighbor technique to search for the k labeled samples nearest to each unlabeled sample. When these k labeled samples do not all belong to the same class, the corresponding unlabeled sample is queried, given its correct label by the supervisor, and added to the labeled training set; otherwise the unlabeled sample is neither selected nor actually removed, which is why the scheme is called pseudo pruning. The idea is inspired by k nearest neighbor pruning preprocessing. The samples selected by KNNPP are considered to lie near or on the optimal classification hyperplane, which is crucial for active learning. In particular, to avoid drift of the optimal classification hyperplane after adding a queried sample, the CKNNPP method queries two samples with different class labels (like a couple, annotated by the supervisor) via KNNPP and adds them to the training set simultaneously in each iteration. CKNNPP provides good performance; it is simple, effective, and robust, and handles classification problems with unbalanced datasets better than existing methods. The computational complexity of CKNNPP is then analyzed. Additionally, a new stopping criterion is applied, and the classifier in each active learning iteration is implemented with Lagrangian Support Vector Machines. Finally, twelve UCI datasets, image datasets of aircraft, and a radar high-resolution range profile dataset are used to validate the feasibility and effectiveness of the proposed method. The results show that CKNNPP achieves superior performance compared with seven other state-of-the-art active learning approaches.
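A minimal sketch of the KNNPP selection rule as described: an unlabelled sample is queried when its k nearest labelled neighbours do not all share the same class, i.e. when it appears to lie near the decision boundary. The coupled (CKNNPP) pairing of opposite-class queries is not sketched, and the helper name is illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knnpp_query_indices(X_labeled, y_labeled, X_unlabeled, k=3):
    nn = NearestNeighbors(n_neighbors=k).fit(X_labeled)
    _, idx = nn.kneighbors(X_unlabeled)      # k labelled neighbours of each unlabelled point
    to_query = []
    for i, neigh in enumerate(idx):
        labels = set(y_labeled[j] for j in neigh)
        if len(labels) > 1:                  # neighbours disagree -> near the boundary
            to_query.append(i)
    return to_query                          # ask the supervisor to label these samples
```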
