Similar Literature
20 similar documents found (search time: 278 ms).
1.
Only a subset of the boundary points, the segment borders, have to be taken into account in searching for the optimal multisplit of a numerical value range with respect to the most commonly used attribute evaluation functions of classification learning algorithms. Segments and their borders can be found efficiently in a linear-time preprocessing step. In this paper we expand the applicability of segment borders by showing that inspecting them alone suffices in optimizing any convex evaluation function. For strictly convex evaluation functions inspecting all segment borders is also necessary. These results are derived directly from Jensen's inequality. We also study the evaluation function Training Set Error, which is not strictly convex. With that function the data can be preprocessed into an even smaller number of cut point candidates, called alternations, when striving for an optimal partition. Examining all alternations also seems necessary, since, analogously to strictly convex functions, the placement of neighboring cut points affects the optimality of an alternation. We test empirically the reduction of the number of cut point candidates that can be obtained for Training Set Error on real-world data.
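To make the idea of candidate cut points concrete, the sketch below (illustrative Python, not the paper's code; the function name and data are assumptions) finds the classical boundary points of a sorted numeric attribute. The segment borders studied by the authors are a further subset of these candidates.

```python
from collections import Counter

def boundary_cut_points(values, labels):
    """Candidate cut points for splitting a numeric attribute.

    A cut point between two adjacent distinct attribute values is kept
    only if the class distributions of the two value groups differ;
    runs of values that are pure and of the same class never need to be
    split. Sorting dominates the cost; the scan itself is linear.
    """
    pairs = sorted(zip(values, labels))
    groups = []               # list of (value, Counter of class labels)
    for v, c in pairs:
        if groups and groups[-1][0] == v:
            groups[-1][1][c] += 1
        else:
            groups.append((v, Counter({c: 1})))
    cuts = []
    for (v1, d1), (v2, d2) in zip(groups, groups[1:]):
        # Keep the midpoint unless both adjacent bins are pure and of
        # the same class.
        same_pure = len(d1) == 1 == len(d2) and set(d1) == set(d2)
        if not same_pure:
            cuts.append((v1 + v2) / 2.0)
    return cuts

# Example: only the boundaries where the class may change survive.
vals   = [1, 2, 3, 4, 5, 6, 7, 8]
labels = ['a', 'a', 'a', 'b', 'b', 'a', 'a', 'a']
print(boundary_cut_points(vals, labels))   # [3.5, 5.5]
```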

2.
General and Efficient Multisplitting of Numerical Attributes
Elomaa  Tapio  Rousu  Juho 《Machine Learning》1999,36(3):201-244
Often in supervised learning numerical attributes require special treatment and do not fit the learning scheme as well as one could hope. Nevertheless, they are common in practical tasks and, therefore, need to be taken into account. We characterize the well-behavedness of an evaluation function, a property that guarantees the optimal multi-partition of an arbitrary numerical domain to be defined on boundary points. Well-behavedness reduces the number of candidate cut points that need to be examined in multisplitting numerical attributes. Many commonly used attribute evaluation functions possess this property; we demonstrate that the cumulative functions Information Gain and Training Set Error as well as the non-cumulative functions Gain Ratio and Normalized Distance Measure are all well-behaved. We also devise a method of finding optimal multisplits efficiently by examining the minimum number of boundary point combinations that is required to produce partitions which are optimal with respect to a cumulative and well-behaved evaluation function. Our empirical experiments validate the utility of optimal multisplitting: it produces consistently better partitions than alternative approaches do and it only requires comparable time. In top-down induction of decision trees the choice of evaluation function has a more decisive effect on the result than the choice of partitioning strategy; optimizing the value of most common attribute evaluation functions does not raise the accuracy of the produced decision trees. In our tests the construction time using optimal multisplitting was, on average, twice that required by greedy multisplitting, which in turn required on average twice the time of binary splitting.
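As a hedged illustration of what "optimal multisplitting" computes, the following sketch (illustrative, not the authors' algorithm) finds the partition into at most k intervals, bounded by the candidate cut points, that minimizes Training Set Error by dynamic programming over prefixes.

```python
from bisect import bisect_right
from collections import Counter

def optimal_multisplit_tse(values, labels, cuts, k):
    """Optimal <=k-interval split minimizing Training Set Error.

    Intervals must start and end at candidate cut points; the error of
    an interval is its size minus the count of its majority class.
    Dynamic programming over prefixes of the cut-point list.
    """
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [c for _, c in pairs]
    # Interval boundaries expressed as example indices.
    bounds = [0] + [bisect_right(xs, c) for c in sorted(cuts)] + [len(xs)]

    def err(i, j):            # error of examples bounds[i]..bounds[j]
        seg = ys[bounds[i]:bounds[j]]
        return len(seg) - max(Counter(seg).values()) if seg else 0

    INF = float("inf")
    n = len(bounds) - 1       # number of atomic blocks
    # best[m][j] = min error of splitting first j blocks into <= m intervals
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0
    for m in range(1, k + 1):
        best[m][0] = 0
        for j in range(1, n + 1):
            best[m][j] = min(best[m - 1][i] + err(i, j) for i in range(j))
    return best[k][n]

vals   = [1, 2, 3, 4, 5, 6, 7, 8]
labels = ['a', 'a', 'a', 'b', 'b', 'a', 'a', 'a']
print(optimal_multisplit_tse(vals, labels, [3.5, 5.5], k=3))  # 0
```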

3.
Necessary and Sufficient Pre-processing in Numerical Range Discretization
The time complexities of class-driven numerical range discretization algorithms depend on the number of cut point candidates. Previous analysis has shown that only a subset of all cut points, the segment borders, have to be taken into account in optimal discretization with respect to many goodness criteria. In this paper we show that inspecting segment borders alone suffices in optimizing any convex evaluation function. For strictly convex evaluation functions inspecting all of them is also necessary, since the placement of neighboring cut points affects the optimality of a segment border. With the training set error function, which is not strictly convex, it suffices to inspect an even smaller set of cut point candidates, called alternations, when striving for an optimal partition. On the other hand, we prove that failing to check an alternation may lead to suboptimal discretization. We present a linear-time algorithm for finding all alternation points. The number of alternation points is typically much lower than the total number of cut points. In our experiments, running the discretization algorithm over the sequence of alternation points led to a significant speed-up.

4.
Branch-and-bound (branch-and-prune) is an effective integer programming method for solving small-scale TSP instances. As the size of the TSP grows, so does the complexity of solving it. In the computational study of the TSP, local search algorithms can quickly find locally optimal tours. By combining local search with branch-and-bound, the tour corresponding to the upper bound obtained by branch-and-bound is refined by local search, so that the upper bound approaches the global optimum more quickly; this improves the solution efficiency of the algorithm and enlarges the size of TSP instances that branch-and-bound can solve.
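The combination described above can be sketched as follows (an illustrative Python sketch, not the paper's implementation): a 2-opt local search improves the incumbent tour, and its length serves as the upper bound that lets branch-and-bound prune partial tours earlier.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Local search: keep reversing segments while the tour gets shorter."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# Random symmetric instance; the 2-opt tour length is the upper bound a
# branch-and-bound solver would use to prune partial tours whose lower
# bound already exceeds it.
random.seed(0)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
ub_tour = two_opt(list(range(n)), dist)
print("upper bound:", round(tour_length(ub_tour, dist), 3))
```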

5.

Image registration is a common task in medical image analysis, and a significant number of algorithms have been developed to perform rigid and non-rigid image registration. In particular, the free-form deformation algorithm is frequently used to carry out non-rigid registration; however, it is computationally very intensive. In this work, we describe an approach based on profiling data to identify parts of this algorithm for which parallel implementations can be developed. The proposed approach assesses the efficiency of the algorithm by applying performance-analysis techniques commonly available in traditional computer operating systems. Hence, this article provides guidelines to support researchers working on medical image processing and analysis in achieving real-time non-rigid image registration applications using common computing systems. According to our experimental findings, significant speedups can be accomplished by parallelizing sequential snippets, i.e., code regions that are executed more than once. For the costly functions previously identified in the studied free-form deformation algorithm, the developed parallelization decreased the runtime by up to seven times relative to the corresponding single-threaded implementation. The implementations were developed with the Open Multi-Processing (OpenMP) application programming interface. In conclusion, this study confirms that, based on call-graph visualization and the detected performance bottlenecks, one can easily find and evaluate snippets that are potential optimization targets, in addition to throughput in memory accesses.
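The workflow of profiling first and parallelizing only the hot regions can be mimicked in Python rather than OpenMP; the sketch below is an illustrative analogue (the function transform_slice is a made-up stand-in for a costly free-form-deformation step, not code from the paper).

```python
import cProfile, pstats
from multiprocessing import Pool

def transform_slice(z):
    """Stand-in for the costly per-slice work of a deformation model."""
    return sum((z * i) % 97 for i in range(200_000))

def register(volume_depth=32):
    return [transform_slice(z) for z in range(volume_depth)]

if __name__ == "__main__":
    # Step 1: profile the sequential code to find the hotspot.
    cProfile.run("register()", "prof.out")
    pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)

    # Step 2: parallelize only the dominant region found above.
    with Pool() as pool:
        result = pool.map(transform_slice, range(32))
    print(len(result), "slices processed in parallel")
```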


6.
《Computers & chemistry》1993,17(2):203-207
We have implemented the Smith and Waterman dynamic programming algorithm on the massively parallel MP1104 computer from MasPar and compared its ability to detect remote protein sequence homologies with that of other commonly used database search algorithms. Dynamic programming algorithms are normally too compute-intensive to permit full database searches; however, on the MP1104 a search of the Swiss-Prot database takes about 15 s. This nearly interactive speed of database searching permits one to optimize the parameters for each query. Most of the common database search methods (FASTA, FASTDB and BLAST) gain their speed by using approximations such as word matching or eliminating gaps from the alignments, which prevents them from detecting remote homologies. By using queries from protein superfamilies containing a large number of family members of diverse similarities, we have measured the ability of each of these algorithms to detect the remotest members of each superfamily. Using these superfamilies, we have found that the algorithms, in order of decreasing sensitivity, are BLAZE, FASTDB, FASTA and BLAST. Hence, massively parallel computers allow one to have maximal sensitivity and search speed simultaneously.
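For reference, here is a minimal sequential sketch of the Smith-Waterman recurrence with a linear gap penalty; the scoring parameters and sequences are illustrative, and the massively parallel version discussed in the abstract distributes this computation across the MasPar's processors.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment score by the Smith-Waterman recurrence.

    H[i, j] is the best score of an alignment ending at a[i-1], b[j-1];
    negative prefixes are cut off by the max with 0, which is what makes
    the alignment local.
    """
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return int(H.max())

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```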

7.
The conventional local optimization path and coevolutionary processes are studied when "local Nash equilibrium (NE) traps" exist. Conventional NE search algorithms in games with local optima can misidentify NE by following a local optimization path. We prove that any iterative NE search algorithm based on local optimization cannot differentiate real NE and "local NE traps". Coevolutionary programming, a parallel and global search algorithm, is applied to overcome this problem. To remedy the poor convergence of simple coevolutionary programming, hybrid coevolutionary programming is suggested. The conventional NE algorithms, simple coevolutionary programming, and hybrid coevolutionary algorithms are tested through a simple numerical example and transmission-constrained electricity market examples.
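A "local NE trap" can be reproduced with a tiny discrete game (an illustrative example, not from the paper): a local best-response search that only inspects neighboring strategies may stop at a profile that a global check rejects.

```python
import numpy as np

def is_nash(A, B, i, j):
    """Global check: no unilateral deviation improves either payoff."""
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

def local_search(A, B, i, j):
    """Each player only looks at neighboring strategies (i-1, i+1 and j-1, j+1)."""
    moved = True
    while moved:
        moved = False
        for ni in (i - 1, i + 1):
            if 0 <= ni < A.shape[0] and A[ni, j] > A[i, j]:
                i, moved = ni, True
                break
        for nj in (j - 1, j + 1):
            if 0 <= nj < B.shape[1] and B[i, nj] > B[i, j]:
                j, moved = nj, True
                break
    return i, j

# Payoffs chosen so that (0, 0) is a local trap while (2, 2) is the NE.
A = np.array([[3, 0, 0],
              [0, 1, 0],
              [4, 0, 5]])
B = A.copy()
i, j = local_search(A, B, 0, 0)
print((i, j), "locally stable, Nash:", is_nash(A, B, i, j))  # (0, 0) ... False
```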

8.
This paper presents a hybrid model named CLA-DE for global numerical optimization. The model is based on cellular learning automata (CLA) and the differential evolution algorithm. The main idea is to learn the most promising regions of the search space using cellular learning automata. Learning automata in the CLA iteratively partition the search dimensions of a problem and learn the most admissible partitions. To facilitate cooperation among the CLA cells and improve their impact on each other, the differential evolution algorithm is incorporated, which speeds up communication and information exchange among neighboring cells. The proposed model is compared with some evolutionary algorithms to demonstrate its effectiveness. Experiments are conducted on a group of benchmark functions that are commonly used in the literature. The results show that the proposed algorithm can achieve near-optimal solutions in all cases and is highly competitive with the compared algorithms.
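Below is a minimal sketch of the underlying differential evolution step (DE/rand/1/bin) with illustrative parameter values; the cellular-learning-automata layer that CLA-DE adds on top is only indicated in a comment, not implemented.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, iters=200, seed=0):
    """Plain DE/rand/1/bin; the CLA layer in CLA-DE would additionally
    learn promising partitions of each dimension (not modeled here)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True      # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) <= fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

def sphere(x):
    return float(np.sum(x ** 2))

x, v = differential_evolution(sphere, [(-5, 5)] * 5)
print(round(v, 6))
```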

9.
SOMS is a general surrogate-based multistart algorithm, which is used in combination with any local optimizer to find global optima for computationally expensive functions with multiple local minima. SOMS differs from previous multistart methods in that a surrogate approximation is used by the multistart algorithm to help reduce the number of function evaluations necessary to identify the most promising points from which to start each nonlinear programming local search. SOMS's numerical results are compared with four well-known methods, namely, Multi-Level Single Linkage (MLSL), MATLAB's MultiStart, MATLAB's GlobalSearch, and GLOBAL. In addition, we propose a class of wavy test functions that mimic the wavy nature of objective functions arising in many black-box simulations. Extensive comparisons of algorithms on the wavy test functions and on earlier standard global-optimization test functions are done for a total of 19 different test problems. The numerical results indicate that SOMS performs favorably in comparison to alternative methods and does especially well on wavy functions when the number of function evaluations allowed is limited.
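The core idea of ranking candidate start points with a cheap surrogate before running local searches can be sketched as follows (an illustrative Python sketch using a Gaussian RBF interpolant and scipy's local optimizer; SOMS itself is more elaborate, and the wavy test function here only echoes the paper's wavy benchmarks).

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_multistart(f, bounds, n_sample=60, n_starts=4, seed=1):
    """Multistart guided by a cheap surrogate (illustrative sketch).

    Fit a Gaussian RBF interpolant to a space-filling sample, rank the
    sample points by surrogate value, and launch a local optimizer only
    from the most promising ones, so fewer expensive evaluations are
    spent on poor starting points.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_sample, len(bounds)))
    y = np.array([f(x) for x in X])

    # Gaussian RBF surrogate: solve K w = y, predict with the same kernel.
    eps = 1.0
    K = np.exp(-eps * np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(n_sample), y)

    def surrogate(x):
        return float(np.exp(-eps * np.linalg.norm(X - x, axis=1) ** 2) @ w)

    starts = X[np.argsort([surrogate(x) for x in X])[:n_starts]]
    results = [minimize(f, x0, bounds=bounds) for x0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun

def wavy(x):                      # many local minima, global minimum at 0
    return float(np.sum(x ** 2) + 2.0 * np.sum(1 - np.cos(3 * x)))

x, v = surrogate_multistart(wavy, [(-4, 4)] * 2)
print(np.round(x, 3), round(v, 4))
```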

10.
This paper presents a saddle point programming approach to compute the medial axis (MA). After exploring the saddle point properties of the medial axis transform (MAT), the mathematical programming method is employed to establish the saddle point programming model of the MAT. By using the optimality conditions, i.e., the number and distribution of the tangent points between the boundary and the medial axis disk, the one- and two-dimensional saddle point algorithms are developed. To determine the branch points, it is better to consider their generating mechanism; here, we identify a branch point according to sudden changes in the solutions of the one-dimensional saddle point algorithm. Hence, all the regular and irregular points of the MA can be computed by a general algorithm, which is shown to be efficient and accurate by numerical examples.

11.
In order to find hyperparameters for a machine learning model, algorithms such as grid search or random search are used over the space of possible values of the model's hyperparameters. These search algorithms select the solution that minimizes a specific cost function. In language models, perplexity is one of the most popular cost functions. In this study, we propose a fractional nonlinear programming model that finds the optimal perplexity value. The special structure of the model allows us to approximate it by a linear programming model that can be solved using the well-known simplex algorithm. To the best of our knowledge, this is the first attempt to use optimization techniques to find perplexity values in the language modeling literature. We apply our model to find hyperparameters of a language model and compare it to the grid search algorithm. Furthermore, we illustrate that it results in lower perplexity values. We perform this experiment on a real-world dataset from SwiftKey to validate our proposed approach.
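The grid-search baseline mentioned above can be sketched quickly (illustrative unigram model and toy data, not the paper's SwiftKey setup): grid search simply evaluates held-out perplexity at each candidate hyperparameter value and keeps the minimizer, whereas the paper formulates this selection as a fractional program approximated by a linear program.

```python
import math
from collections import Counter

def perplexity(k, train, held_out):
    """Perplexity of an add-k-smoothed unigram model on held-out text."""
    counts = Counter(train)
    vocab = set(train) | set(held_out)
    total = len(train)
    log_prob = sum(
        math.log((counts[w] + k) / (total + k * len(vocab))) for w in held_out
    )
    return math.exp(-log_prob / len(held_out))

train = "the cat sat on the mat the cat ate".split()
held_out = "the cat sat on the rug".split()

# Grid search: evaluate every candidate k and keep the one with the
# lowest held-out perplexity (the cost function named in the abstract).
grid = [0.01, 0.05, 0.1, 0.5, 1.0, 2.0]
best_k = min(grid, key=lambda k: perplexity(k, train, held_out))
print(best_k, round(perplexity(best_k, train, held_out), 2))
```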

12.
This paper presents a computationally efficient algorithm for function approximation with piecewise linear sigmoidal nodes. A one hidden layer network is constructed one node at a time using the well-known method of fitting the residual. The task of fitting an individual node is accomplished using a new algorithm that searches for the best fit by solving a sequence of quadratic programming problems. This approach offers significant advantages over derivative-based search algorithms (e.g., backpropagation and its extensions). Unique characteristics of this algorithm include: finite step convergence, a simple stopping criterion, solutions that are independent of initial conditions, good scaling properties and a robust numerical implementation. Empirical results are included to illustrate these characteristics.
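The "fit the residual" construction can be illustrated with a simplified sketch (assumptions: ramp nodes on a breakpoint grid fitted by least squares stand in for the paper's piecewise linear sigmoidal nodes fitted by quadratic programming).

```python
import numpy as np

def fit_by_residual(x, y, n_nodes=6):
    """Greedy one-node-at-a-time approximation (simplified sketch).

    Each node is a ramp max(0, x - b); its breakpoint b is chosen from a
    grid and its weight by least squares against the current residual.
    """
    residual = y.astype(float).copy()
    nodes, bias = [], residual.mean()
    residual -= bias
    for _ in range(n_nodes):
        best = None
        for b in np.linspace(x.min(), x.max(), 50):
            phi = np.maximum(0.0, x - b)
            denom = phi @ phi
            if denom < 1e-12:
                continue
            w = (phi @ residual) / denom          # least-squares weight
            err = np.sum((residual - w * phi) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b)
        _, w, b = best
        nodes.append((w, b))
        residual -= w * np.maximum(0.0, x - b)
    return bias, nodes

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x)
bias, nodes = fit_by_residual(x, y)
yhat = bias + sum(w * np.maximum(0.0, x - b) for w, b in nodes)
print("RMSE:", round(float(np.sqrt(np.mean((y - yhat) ** 2))), 3))
```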

13.
曾明华  全轲 《计算机应用》2020,40(7):1908-1912
To address the tendency of particle swarm optimization (PSO) to become trapped in local optima when solving bilevel programming problems, an improved hybrid cuckoo search quantum-behaved particle swarm optimization (ICSQPSO) algorithm based on the Metropolis criterion of simulated annealing (SA) is proposed. First, the hybrid algorithm introduces the Metropolis criterion of SA so that, during the search, good solutions are accepted and bad solutions are also accepted with a certain probability, which strengthens the global search ability. Second, an improved dynamic-step Lévy flight is designed for the cuckoo search algorithm to keep the swarm highly diverse during optimization and guarantee search breadth. Finally, the preference-based random walk mechanism of cuckoo search is used to help particles escape from local optima. Numerical experiments on 13 bilevel programming instances, covering nonlinear programming, fractional programming, and problems with multiple lower-level programs, show that the optimal objective values obtained by ICSQPSO are significantly better than those of the compared algorithms on 12 instances and only slightly worse on one, and that on half of the instances the results are better than those of the compared algorithms by 50%. Hence, the optimization ability of ICSQPSO on bilevel programming problems is clearly superior to that of the compared algorithms.
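The two ingredients highlighted above can be sketched independently of the full ICSQPSO loop (illustrative Python with assumed parameter values, not the paper's code): the Metropolis criterion accepts some worse moves, and a Lévy flight generates occasional long jumps.

```python
import math, random

def metropolis_accept(f_new, f_old, temperature):
    """SA Metropolis rule: always accept improvements, accept worse
    solutions with probability exp(-(f_new - f_old) / T)."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(3)
f = lambda x: x * x
x, best = 4.0, 16.0
for t in range(1, 201):
    cand = x + 0.1 * levy_step()              # occasional long jumps
    if metropolis_accept(f(cand), f(x), temperature=1.0 / t):
        x = cand
    best = min(best, f(x))
print(round(best, 4))
```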

14.
The computation burden of intensive numerical real-time algorithms is a problem encountered in robotics and many other fields. A cost-effective solution for the implementation of these algorithms requires knowledge of computer architecture, compiler technology and algorithms. A cost-effective numeric processing methodology using a combined hardware-software approach and taking advantage of logic programming tools is presented. The methodology is based on optimizing the numerical calculation process of the algorithm. It also enables the specification of hardware resources. The process uses a rule-based system (RBS) implemented in the logic programming language Prolog to automatically reduce the number of operations in the numerical execution of the algorithm and optimizes the use of hardware resources. The methodology provides a solution for the problems of handshake overhead and algorithm translation efficiency. The Direct Kinematics Solution (DKS), a robot arm control algorithm, is presented as a case study to illustrate the methodology. The proposed methodology is general and can be extended to the optimization or implementation of other algorithms.

15.
Query Optimization Techniques in the Shared-Nothing Parallel Database System PBASE/2
Query optimization is a core technology of parallel database systems. This paper presents the distinctive two-phase optimization strategy used in PBASE/2, a shared-nothing parallel database system developed by the authors. To shrink the huge search space of parallel query optimization, PBASE/2 divides it into two phases: sequential optimization and parallelization. In the sequential optimization phase, the communication cost incurred after parallelization is estimated in advance and added to the cost model of sequential optimization, and the dynamic programming search algorithm is revised and extended, guaranteeing that the minimum-cost plan obtained in the sequential optimization phase…
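The idea of folding a communication cost into the sequential cost model can be illustrated with a toy dynamic-programming join-order search (an illustrative sketch with made-up statistics and a hypothetical per-tuple network cost; PBASE/2's actual optimizer is not reproduced here).

```python
from itertools import combinations

# Toy statistics: relation cardinalities and join selectivities.
card = {"R": 1000, "S": 200, "T": 5000}
sel = {frozenset("RS"): 0.01, frozenset("ST"): 0.002, frozenset("RT"): 0.05}
COMM_PER_TUPLE = 0.1          # assumed network cost added to the sequential model

def join_size(rels):
    size = 1.0
    for r in rels:
        size *= card[r]
    for pair in combinations(sorted(rels), 2):
        size *= sel.get(frozenset(pair), 1.0)
    return size

def best_plan(rels):
    """Dynamic programming over subsets: cheapest join order where each
    join pays a local cost plus a communication (repartition) term."""
    rels = frozenset(rels)
    memo = {frozenset({r}): (0.0, r) for r in rels}

    def solve(s):
        if s in memo:
            return memo[s]
        best = None
        subsets = [frozenset(c) for k in range(1, len(s))
                   for c in combinations(sorted(s), k)]
        for left in subsets:
            right = s - left
            lc, lp = solve(left)
            rc, rp = solve(right)
            local = join_size(s)                              # produce the result
            comm = COMM_PER_TUPLE * (join_size(left) + join_size(right))
            cost = lc + rc + local + comm
            if best is None or cost < best[0]:
                best = (cost, f"({lp} JOIN {rp})")
        memo[s] = best
        return best

    return solve(rels)

cost, plan = best_plan("RST")
print(plan, round(cost, 1))
```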

16.
Multi-agent path finding (MAPF) is the problem of planning paths for multiple agents, with the key constraint that the agents can follow their planned paths simultaneously without colliding. MAPF has many applications in logistics, the military, security, and other fields. This paper systematically surveys and classifies the main MAPF research results at home and abroad. According to the planning mode, MAPF algorithms are divided into centralized planning algorithms and decentralized execution algorithms. Centralized planning algorithms are the most classical and most widely used MAPF algorithms and fall into four categories: A*-based search, conflict-based search, increasing-cost-tree-based search, and reduction-based methods. Decentralized execution algorithms are reinforcement-learning-based MAPF algorithms that have emerged in artificial intelligence; according to the improvement technique used, they are divided into expert-demonstration, improved-communication, and task-decomposition methods. Based on this classification, the characteristics and applicability of the various MAPF algorithms are compared, the strengths and weaknesses of existing algorithms are analyzed, the challenges they face are pointed out, and directions for future work are outlined.

17.
Two of the most complex optimization problems encountered in the design of third generation optical networks are the dynamic routing and wavelength assignment (DRWA) problem under the assumptions of ideal and non-ideal physical layers. Both these problems are NP-complete in nature. They are challenging due to the presence of multiple local optima in the search space. Even heuristics-based algorithms fail to solve these problems efficiently as the search space is non-convex. This paper reports the performance of a metaheuristic, that is, an evolutionary programming algorithm, in solving different optical network optimization problems. The primary motivation behind adopting this approach is to reduce the algorithm execution time. It is demonstrated that the same basic approach can be used to solve different optimization problems by designing problem-specific fitness functions. Also, it is shown how the algorithm performance can be improved by integrating suitable soft constraints with the original constraints. Exhaustive simulation studies are carried out assuming the presence of different levels of linear impairments such as switch and demultiplexer crosstalk and non-linear impairments like four wave mixing to illustrate the superiority of the proposed algorithms.

18.
The harmony search (HS) algorithm is inspired by the music improvisation process in which a musician searches for the best harmony and keeps polishing it to improve its aesthetics. The efficiency of evolutionary algorithms depends on the extent of balance between diversification and intensification during the course of the search: an ideal evolutionary algorithm should explore efficiently in the beginning and exploit more strongly toward the end. In this paper, a two-phase harmony search (TPHS) algorithm is proposed that attempts to strike this balance by concentrating on diversification in the first phase, using catastrophic mutation, and then switching to intensification, using local search, in the second phase. The performance of TPHS is analyzed and compared with four state-of-the-art HS variants on all 30 IEEE CEC 2014 benchmark functions. The numerical results demonstrate the superiority of the proposed TPHS algorithm in terms of accuracy, particularly on multimodal functions, when compared with other state-of-the-art HS variants; further comparison with state-of-the-art evolutionary algorithms reveals excellent performance of TPHS on composition functions. Composition functions are combined, rotated, shifted, and biased versions of other unimodal and multimodal test functions and mimic the difficulties of real search spaces by providing a massive number of local optima and different shapes for different regions of the search space. The performance of the TPHS algorithm is also evaluated on a real-life problem from the field of computer vision, the camera calibration problem, i.e., a 12-dimensional highly nonlinear optimization problem with several local optima.
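For orientation, here is a minimal sketch of plain harmony search with illustrative parameter values; TPHS's two phases (catastrophic mutation for exploration, then local search for exploitation) are modifications on top of this loop and are not implemented here.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=0):
    """Basic harmony search (no two-phase schedule).

    Each new harmony is improvised component-wise: with probability HMCR
    a value is picked from the harmony memory (and pitch-adjusted with
    probability PAR), otherwise it is drawn at random; the new harmony
    replaces the worst one in memory if it is better.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    hm = rng.uniform(lo, hi, size=(hms, len(bounds)))
    fit = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(len(bounds))
        for d in range(len(bounds)):
            if rng.random() < hmcr:
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                      # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        worst = fit.argmax()
        if (fn := f(new)) < fit[worst]:
            hm[worst], fit[worst] = new, fn
    return hm[fit.argmin()], fit.min()

def rastrigin(x):
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

x, v = harmony_search(rastrigin, [(-5.12, 5.12)] * 5)
print(round(v, 3))
```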

19.
Several decomposition methods have been proposed for the distributed optimal design of quasi-separable problems encountered in Multidisciplinary Design Optimization (MDO). Some of these methods are known to have numerical convergence difficulties that can be explained theoretically. We propose a new decomposition algorithm for quasi-separable MDO problems. In particular, we propose a decomposed problem formulation based on the augmented Lagrangian penalty function and the block coordinate descent algorithm. The proposed solution algorithm consists of inner and outer loops. In the outer loop, the augmented Lagrangian penalty parameters are updated. In the inner loop, our method alternates between solving an optimization master problem and solving disciplinary optimization subproblems. The coordinating master problem can be solved analytically; the disciplinary subproblems can be solved using commonly available gradient-based optimization algorithms. The augmented Lagrangian decomposition method is derived such that existing proofs can be used to show convergence of the decomposition algorithm to Karush–Kuhn–Tucker points of the original problem under mild assumptions. We investigate the numerical performance of the proposed method on two example problems.
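A simplified two-discipline sketch of augmented-Lagrangian coordination is shown below (illustrative problem and parameters, not the paper's formulation): each discipline holds its own copy of the shared variable, the inner loop alternates block minimizations, and the outer loop updates the multiplier and penalty, loosely mirroring the inner/outer structure described in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Toy quasi-separable problem: two disciplines share one linking variable y.
#   minimize f1(x1, y) + f2(x2, y)
# Each discipline gets its own copy of y; the copies are coordinated by
# an augmented Lagrangian penalty on the inconsistency y1 - y2.
def f1(x1, y):
    return (x1 - 1.0) ** 2 + (y - 2.0) ** 2

def f2(x2, y):
    return (x2 + 2.0) ** 2 + (y + 1.0) ** 2

lam, rho = 0.0, 1.0
x1, y1, x2, y2 = 0.0, 0.0, 0.0, 0.0
for outer in range(30):                       # outer loop: multiplier update
    for inner in range(5):                    # inner loop: block coordinate descent
        r1 = minimize(lambda v: f1(v[0], v[1]) + lam * (v[1] - y2)
                      + 0.5 * rho * (v[1] - y2) ** 2, [x1, y1])
        x1, y1 = r1.x
        r2 = minimize(lambda v: f2(v[0], v[1]) + lam * (y1 - v[1])
                      + 0.5 * rho * (y1 - v[1]) ** 2, [x2, y2])
        x2, y2 = r2.x
    lam += rho * (y1 - y2)                    # augmented Lagrangian update
    rho *= 1.2                                # mild penalty increase

# Both copies of y converge to the same value (about 0.5), which solves
# the undecomposed problem with x1 = 1 and x2 = -2.
print(round(x1, 3), round(x2, 3), round(y1, 3), round(y2, 3))
```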

20.
Block-matching motion estimation is a key technique widely adopted in international video coding standards. Many fast block-matching methods reduce the amount of computation by limiting the number of search points, but compared with full search they easily produce matching errors. This paper describes a multilevel successive elimination algorithm (MSEA) that applies a new elimination condition and, on this basis, proposes a new algorithm that introduces a scaled partial distortion elimination (PDE) technique, used to scale the accumulated partial error and the current minimum error. Experiments show that, compared with the family of exhaustive search algorithms consisting of full search (FS), the successive elimination algorithm (SEA), and the multilevel successive elimination algorithm (MSEA), the proposed algorithm greatly improves search efficiency; compared with MSEA, it saves about 75% of the operations per macroblock on average. While preserving image quality, the algorithm greatly increases the speed of video encoding.
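The successive-elimination and partial-distortion ideas can be sketched in a few lines of Python (illustrative data and block sizes, not the paper's scaled multilevel variant): a cheap sum-norm bound skips candidates that cannot beat the current best, and the SAD accumulation aborts early once it exceeds the current minimum.

```python
import numpy as np

def sad_with_pde(block, cand, best_so_far):
    """Row-wise SAD with partial distortion elimination: stop as soon as
    the accumulated error exceeds the current minimum."""
    acc = 0
    for r in range(block.shape[0]):
        acc += int(np.abs(block[r].astype(int) - cand[r].astype(int)).sum())
        if acc >= best_so_far:
            return None                      # pruned
    return acc

def block_match(block, ref, search=4):
    """Full-search motion estimation with an SEA bound and PDE early exit."""
    N = block.shape[0]
    block_sum = int(block.sum())
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = search + dy, search + dx
            cand = ref[y:y + N, x:x + N]
            # Successive elimination: |sum(block) - sum(cand)| <= SAD,
            # so candidates with a larger gap than `best` can be skipped.
            if abs(block_sum - int(cand.sum())) >= best:
                continue
            sad = sad_with_pde(block, cand, best)
            if sad is not None and sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(24, 24), dtype=np.uint8)
search, true_dy, true_dx = 4, 2, -1
block = ref[search + true_dy:search + true_dy + 16,
            search + true_dx:search + true_dx + 16].copy()
print(block_match(block, ref, search=search))   # ((2, -1), 0)
```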
