Similar Documents
20 similar documents found.
1.
A heuristic method for solving large-scale multi-facility location problems is presented. The method is analogous to Cooper's method (SIAM Rev. 6 (1964) 37), using the authors' single facility location method (Comput. Optim. Appl. 21 (2002) 213) as a parallel subroutine, and reassigning customers to facilities by the heuristic of nearest center reclassification. Numerical results are reported. Scope and purpose: We study the multiple facility location problem (MFLP). The objective in the MFLP is to locate facilities that optimally serve a given set of customers. MFLPs have many applications in Operations Research and a rich literature; see Drezner (Location Sci. 3(4) (1995) 275) for a recent survey. MFLPs involve, in addition to the location decision, the assignment of customers to facilities. The MFLP is therefore a special clustering problem; the clusters here are the sets of customers assigned to the same facility. We propose a parallel heuristic method for solving MFLPs, using ideas from cluster analysis (nearest mean reclassification; Cluster Analysis, 3rd Edition, Edward Arnold, London, 1993) and the authors' Newton bracketing method for convex minimization (Comput. Optim. Appl. 21 (2002) 213) as a subroutine. The method is suitable for large-scale problems, as illustrated by numerical examples.
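The alternating location-allocation idea can be sketched briefly. Below is a minimal, hypothetical Python illustration of a Cooper-style heuristic: customers are assigned to the nearest facility, each resulting single-facility problem is re-solved, and the two steps alternate. The single-facility step uses plain Weiszfeld iterations as a stand-in for the authors' Newton bracketing subroutine, and all function names and parameters are invented for illustration.

```python
# Illustrative sketch of a Cooper-style alternating heuristic for the
# multi-facility (Weber) location problem. Weiszfeld iterations replace the
# paper's Newton bracketing subroutine; names and parameters are hypothetical.
import numpy as np

def weiszfeld(points, weights, iters=100, eps=1e-9):
    """Approximate geometric median of weighted points (single-facility step)."""
    x = np.average(points, axis=0, weights=weights)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)  # avoid /0
        w = weights / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

def cooper_heuristic(customers, weights, n_facilities, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = customers[rng.choice(len(customers), n_facilities, replace=False)]
    for _ in range(iters):
        # Allocation step: nearest-center reclassification of customers.
        dist = np.linalg.norm(customers[:, None, :] - centers[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        # Location step: re-solve each single-facility problem independently
        # (these subproblems could be run in parallel, as in the paper).
        new_centers = centers.copy()
        for j in range(n_facilities):
            mask = assign == j
            if mask.any():
                new_centers[j] = weiszfeld(customers[mask], weights[mask])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, assign

# Example: 200 random customers, 5 facilities.
pts = np.random.default_rng(1).random((200, 2))
centers, assign = cooper_heuristic(pts, np.ones(len(pts)), 5)
```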

2.
3.
In many support vector-based clustering algorithms, a key computational bottleneck is the cluster labeling time of each data point, which restricts the scalability of the method. In this paper, we review a general framework of support vector-based clustering using dynamical systems and propose a novel method that reduces the labeling time to log-linear in the size of the data. We also give the theoretical background of the proposed method. Various large-scale benchmark results are provided to show the effectiveness and efficiency of the proposed method.

4.
This paper is focused on the structural static reanalysis problem with modification of supports. An efficient reanalysis method is proposed. The method is based on the introduction of the modified master stiffness matrices, the rank-one decomposition of the corresponding incremental stiffness matrix, and the sparse Cholesky rank-one update/downdate algorithm. Adding and deleting supports with arbitrary orientations can both be handled. Numerical examples show that exact results can be obtained by the proposed method, and that the computational times can be significantly reduced in comparison with the direct analysis method.
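As background, the rank-one update of a Cholesky factorization can be illustrated with a small dense sketch. The paper works with sparse factors and also supports downdates; the routine below is only a generic dense update and its names are hypothetical.

```python
# Illustrative dense rank-one Cholesky update: given K = L @ L.T, produce the
# factor of K + v v^T without refactorizing. The paper applies a *sparse*
# update/downdate of this kind; this dense sketch only shows the idea.
import numpy as np

def chol_rank1_update(L, v):
    """Return the lower-triangular factor of L @ L.T + outer(v, v)."""
    L = L.copy()
    v = v.copy()
    n = len(v)
    for k in range(n):
        r = np.hypot(L[k, k], v[k])
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
            v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

# Quick check on a random SPD matrix.
rng = np.random.default_rng(0)
A = rng.random((6, 6)); K = A @ A.T + 6 * np.eye(6)
v = rng.random(6)
L = np.linalg.cholesky(K)
L1 = chol_rank1_update(L, v)
assert np.allclose(L1 @ L1.T, K + np.outer(v, v))
```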

5.
Location Science, 1998, 6(1–4): 25–39
We present demand point aggregation procedures for the p-median and p-center network location models. A coarse aggregation structure is initially obtained by partitioning the demand points according to a grid imposed over the demand region. A "row-column" aggregation algorithm is used to determine the spacing of rows and columns of the grid to exploit the problem structure. A second step involves locating aggregate demand points on the subnetworks induced by the cells of the grid partitioning. The aggregate demand point set so obtained then defines an approximating location model; alternatively, it may initialize an iterative network location–allocation procedure to find the aggregate demand points. We have tested our procedures on data sets based on maps from the TIGER/Line database of the United States Census Bureau, and report on our computational experience.
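A minimal sketch of cell-based aggregation, assuming a plain uniform grid rather than the paper's row-column spacing algorithm or subnetwork placement, is the following (all names are illustrative):

```python
# Illustrative grid-based demand-point aggregation: demand points falling in
# the same grid cell are replaced by a single weighted aggregate point.
import numpy as np

def aggregate_on_grid(points, demand, n_rows, n_cols):
    xmin, ymin = points.min(axis=0)
    xmax, ymax = points.max(axis=0)
    col = np.minimum(((points[:, 0] - xmin) / (xmax - xmin) * n_cols).astype(int), n_cols - 1)
    row = np.minimum(((points[:, 1] - ymin) / (ymax - ymin) * n_rows).astype(int), n_rows - 1)
    cell = row * n_cols + col
    agg_pts, agg_dem = [], []
    for c in np.unique(cell):
        mask = cell == c
        w = demand[mask]
        # Demand-weighted centroid as the cell's aggregate demand point.
        agg_pts.append(np.average(points[mask], axis=0, weights=w))
        agg_dem.append(w.sum())
    return np.array(agg_pts), np.array(agg_dem)

pts = np.random.default_rng(2).random((1000, 2))
agg_pts, agg_dem = aggregate_on_grid(pts, np.ones(len(pts)), n_rows=8, n_cols=8)
```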

6.
Cooperative co-evolution algorithms perform well on large-scale global optimization problems. Their core idea is a divide-and-conquer strategy: a high-dimensional problem is decomposed into several subproblems, which are then optimized separately. However, existing decomposition methods usually incur substantial computational cost to obtain an accurate variable grouping. By using historical information from recursive interaction detection to simplify the grouping process, the interaction checks for certain sets of variables can be avoided; on this basis, this paper proposes a novel three-layer recursive differential grouping strategy (NTRDG). Compared with four other existing grouping methods, NTRDG consumes less computation without sacrificing grouping accuracy. Simulation results show that NTRDG is highly competitive for solving large-scale global optimization problems.
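The interaction test that underlies differential grouping can be illustrated with a small sketch. The code below shows only the basic pairwise finite-difference check, not the three-layer recursive strategy or the reuse of historical information described in the paper; names and tolerances are illustrative.

```python
# Illustrative pairwise interaction check in the spirit of differential
# grouping for cooperative co-evolution.
import numpy as np

def interact(f, x, i, j, h=1.0, eps=1e-6):
    """Return True if variables i and j appear non-separable in f around x."""
    xi = x.copy(); xi[i] += h
    delta1 = f(xi) - f(x)                 # effect of perturbing x_i
    xj = x.copy(); xj[j] += h
    xij = xj.copy(); xij[i] += h
    delta2 = f(xij) - f(xj)               # same perturbation, with x_j shifted
    return abs(delta1 - delta2) > eps     # differing effects => interaction

# Example: x0*x1 couples variables 0 and 1; variable 2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
x = np.ones(3)
print(interact(f, x, 0, 1))   # True
print(interact(f, x, 0, 2))   # False
```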

7.
This article describes a new direct method for linear static reanalysis of structures, based on the displacement method. The modifications introduced can take the form of adding, eliminating, or substituting one or more elements. It is not necessary to maintain the total number of degrees of freedom. This new method is based on introducing the modifications as constraints on the original system by means of Lagrange multipliers. As a direct result of the reanalysis, the nodal forces on the modified section are also obtained. This method is simpler and more efficient than previous ones, requiring only the software necessary for the modified Crout method. The corresponding FORTRAN 77 subroutines are included in an Appendix.
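The constraint-via-Lagrange-multipliers idea can be illustrated with a generic bordered-system sketch. The example below is not the paper's formulation or its modified Crout solver; it only shows how constraints appended with multipliers lead to an augmented linear system whose extra unknowns behave like constraint forces.

```python
# Illustrative sketch of imposing linear constraints on a static system
# K u = f via Lagrange multipliers, by solving the bordered (saddle-point)
# system [[K, A.T], [A, 0]] [u; lam] = [f; b].
import numpy as np

def solve_with_constraints(K, f, A, b):
    """Solve K u = f subject to A u = b using Lagrange multipliers."""
    n, m = K.shape[0], A.shape[0]
    bordered = np.block([[K, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(bordered, np.concatenate([f, b]))
    return sol[:n], sol[n:]          # displacements u, multipliers lam

# Example: a 3-DOF spring chain with an added support enforcing u_0 = 0.
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
f = np.array([0., 0., 1.])
A = np.array([[1., 0., 0.]])         # constraint u_0 = 0
u, lam = solve_with_constraints(K, f, A, np.array([0.]))
```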

8.
A technique is suggested as an approach to developing controls for large-scale dynamical systems. The method has the advantage that it enables one to formulate a small-scale dynamic optimization problem based upon certain key indicators of behavior related to the large-scale system. The resultant control problem is algebraic.

9.
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also in other areas of research that lead to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
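The underlying projection (downfolding) construction can be sketched as follows. This is the generic textbook construction with a simple self-consistency loop, not the authors' numerically generalized scheme; convergence is only expected when the targeted eigenvalue is well separated from the spectrum of the projected-out block.

```python
# Illustrative projection / downfolding sketch: high-"energy" degrees of
# freedom (the Q block) are folded into a small effective matrix on the P
# block, H_eff(E) = H_PP + H_PQ (E - H_QQ)^{-1} H_QP, and the lowest
# eigenvalue is iterated to self-consistency.
import numpy as np

def downfolded_lowest_eig(H, p, iters=100, tol=1e-12):
    """Lowest eigenvalue of symmetric H via an effective model on the first p indices."""
    H_pp, H_pq = H[:p, :p], H[:p, p:]
    H_qp, H_qq = H[p:, :p], H[p:, p:]
    E = np.linalg.eigvalsh(H_pp)[0]              # initial guess from the P block
    for _ in range(iters):
        G = np.linalg.solve(E * np.eye(H_qq.shape[0]) - H_qq, H_qp)
        H_eff = H_pp + H_pq @ G
        E_new = np.linalg.eigvalsh(H_eff)[0]
        if abs(E_new - E) < tol:
            break
        E = E_new
    return E

# Example: diagonally dominant symmetric matrix; compare with a direct solve.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 40))
H = (A + A.T) / 10 + np.diag(np.arange(40.0))
print(downfolded_lowest_eig(H, p=5), np.linalg.eigvalsh(H)[0])
```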

10.
This research integrates component mode synthesis with a multilevel design optimization strategy to develop a design methodology that makes the dynamic modification of complex structures simpler and more tractable. The component mode synthesis formulates the eigenvalue equation of the entire structure in terms of the vibration characteristics of local structure components. With this particular feature, the component mode synthesis facilitates a two-stage procedure for the dynamic modification of a complex structure: the lower-level design optimization modifies the local structure components whose performances are prescribed by the optimal solution of the upper-level design optimization. The paper first discusses the formulation and the derivation of the Kuhn-Tucker necessary conditions for the proposed design optimization procedure and then presents numerical examples to demonstrate its numerical implementation and applicability.

11.
12.
A modification of the combinatorial truncation method for optimization over vertex-located sets is considered. The modification allows dealing with degenerate solutions of auxiliary problems. The form of the truncating inequality is substantiated. An example is given to illustrate the application of the method.

13.
Parallel factor analysis (PARAFAC) is a tensor (multiway array) factorization method that finds hidden factors (component matrices) in multidimensional data. Most existing PARAFAC algorithms, especially the alternating least squares (ALS) algorithm, need to compute Khatri-Rao products of tall factors and multiply large matrices; they therefore have high computational cost and memory requirements and are not suitable for very large-scale problems. Hence, PARAFAC for large-scale data tensors is still a challenging problem. In this paper, we propose a new approach based on a modified ALS algorithm which computes Hadamard products instead of Khatri-Rao products and employs relatively small matrices. The new algorithms are able to process extremely large-scale tensors with billions of entries. Extensive experiments confirm the validity and high performance of the developed algorithm in comparison with other well-known algorithms.
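The role of Hadamard products in ALS can be seen in a small sketch. The code below is the generic CP/PARAFAC ALS update in which the R x R normal-equation matrix is a Hadamard product of small factor Gram matrices and the MTTKRP is computed by tensor contraction, so no tall Khatri-Rao matrix is ever formed explicitly; it is not the paper's large-scale algorithm, and all names are illustrative.

```python
# Illustrative CP/PARAFAC ALS for a 3-way tensor using Hadamard products of
# R x R Gram matrices and einsum-based MTTKRP (no explicit Khatri-Rao product).
import numpy as np

def cp_als(X, R, iters=100, seed=0):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    for _ in range(iters):
        # Update A: MTTKRP via einsum, Gram term via a Hadamard product.
        M = np.einsum('ijk,jr,kr->ir', X, B, C)
        A = M @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        M = np.einsum('ijk,ir,kr->jr', X, A, C)
        B = M @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        M = np.einsum('ijk,ir,jr->kr', X, A, B)
        C = M @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Example: recover a rank-3 random tensor and report the relative fit error.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (10, 11, 12))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=3)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```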

14.
15.
A procedure to reanalyze a damaged structure using a finite element force method of analysis is presented. Unlike similar methods that use the displacement method of analysis, the procedure presented here yields a direct, rather than an iterative method. A computer program was written and illustrative examples were solved and are presented herein. Residual elastic strength is defined and calculated for damaged structures that were originally optimally designed for minimum weight.

16.
The main limitations of reanalysis methods using CUDA (Compute Unified Device Architecture) for large-scale engineering optimization problems are low efficiency on a single GPU and the GPU memory bottleneck. To overcome these bottlenecks, an efficient parallel independent coefficient (IC) reanalysis method is developed on a multi-GPU platform. The IC reanalysis procedure is reconstructed to accommodate the use of multiple GPUs. The matrices and vectors are partitioned and prepared for each GPU to achieve good load balance with little communication between GPUs. This study also proposes an effective technique to overlap computation and communication by using a non-blocking communication strategy: each GPU continues with its subsequent tasks while communication proceeds simultaneously. Furthermore, the CSR format is used on each GPU to save memory. Finally, large-scale vehicle design problems are solved with the developed solver. According to the test results, the multi-GPU based IC reanalysis method has the potential to handle real large-scale problems and reduce the design cycle.
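The overlap of computation and non-blocking communication can be sketched generically. In the illustration below, mpi4py stands in for the paper's inter-GPU transfers; the ring exchange, tags, and interior/boundary split are all hypothetical.

```python
# Generic sketch of overlapping computation with non-blocking communication:
# post the boundary exchange first, do interior work while the message is in
# flight, then finish the boundary once the data has arrived.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.arange(8, dtype=float) + 8 * rank    # this rank's slice of a vector
ghost = np.zeros(1)
reqs = []
if size > 1:
    right, left = (rank + 1) % size, (rank - 1) % size
    # Post non-blocking send/receive of the boundary value ...
    reqs = [comm.Isend(local[-1:], dest=right, tag=0),
            comm.Irecv(ghost, source=left, tag=0)]

# ... and overlap: interior work proceeds while communication is in flight.
interior = local[1:-1].sum()

MPI.Request.Waitall(reqs)                       # boundary data now available
boundary = local[0] + local[-1] + ghost[0]
print(rank, interior + boundary)
```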

17.
We reveal a connection between the incompressibility method and the Lovász local lemma in the context of Ramsey theory. We obtain bounds by repeatedly encoding objects of interest and thereby compressing strings. The method is demonstrated on the example of van der Waerden numbers; in particular, we reprove a known lower bound for van der Waerden numbers. The method can also be used to obtain lower bounds for Ramsey numbers, large transitive subtournaments, and other Ramsey phenomena.
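For background, the symmetric form of the Lovász local lemma that such lower-bound arguments typically invoke can be stated as follows (this is standard material, not a statement taken from the paper):

```latex
% Symmetric Lovász local lemma (standard statement, given for background).
% Let $A_1,\dots,A_n$ be events with $\Pr[A_i] \le p$, each mutually
% independent of all but at most $d$ of the others. Then
\[
  e\,p\,(d+1) \le 1
  \;\Longrightarrow\;
  \Pr\Bigl[\,\bigcap_{i=1}^{n} \overline{A_i}\,\Bigr] > 0 .
\]
```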

18.
Finding maximum weight connected subgraphs within networks is a fundamental combinatorial optimization problem from both theoretical and practical standpoints. One of the most prominent applications of this problem appears in Systems Biology, where it corresponds to the detection of active subnetworks within gene interaction networks. Due to its importance, several modeling and algorithmic strategies have been proposed for tackling the maximum weight connected subgraph problem (MWCS) in recent years; the most effective strategies typically depend on the use of integer linear programming (ILP). Nonetheless, this means that large-scale networks (such as those appearing in Systems Biology) can become burdensome; moreover, not all practitioners may have access to an ILP solver. In this paper, a unified modeling and algorithmic scheme is designed to solve the MWCS and some of its application-oriented variants with cardinality constraints or budget constraints. The proposed framework is based on a general node-based model which is tackled by a Relax-and-Cut scheme, i.e., Lagrangian relaxation combined with constraint generation; this yields a heuristic procedure capable of providing both dual and primal bounds. The approach is enhanced by additional valid inequalities, lifted valid inequalities, primal heuristics, and variable-fixing procedures. Computational results on instances from the literature, as well as on additional large-scale instances, show that the proposed framework is competitive with existing approaches and finds improved solutions for some unsolved instances from the literature. The effect of initializing a Branch-and-Cut approach with information from the Relax-and-Cut is also investigated. The implemented approach is made available online.
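The way a Lagrangian relaxation yields dual bounds can be illustrated with a toy sketch. The problem below (choose at most k items to maximize total weight, with the cardinality constraint relaxed) is purely illustrative and unrelated to the paper's MWCS model; the subgradient step size is a simple 1/t rule.

```python
# Minimal sketch of Lagrangian relaxation with a projected subgradient update,
# showing how a relaxation produces dual (upper) bounds for a maximization.
import numpy as np

def lagrangian_dual_bound(w, k, iters=200):
    lam, best = 0.0, np.inf
    for t in range(1, iters + 1):
        x = (w - lam > 0).astype(float)        # inner maximization is trivial
        dual = (w - lam) @ x + lam * k         # L(lam): an upper (dual) bound
        best = min(best, dual)
        g = k - x.sum()                        # subgradient of L at lam
        lam = max(0.0, lam - (1.0 / t) * g)    # projected subgradient step
    return best

w = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
print(lagrangian_dual_bound(w, k=2))           # optimum is 9; bound >= 9
```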

19.
Dear editor, Employing observability analysis of a dynamic system is necessary to determine the efficiency of a Kalman filter designed to estimate the state of...
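The observability check referred to here can be illustrated for a linear system. The sketch below builds the standard observability matrix and tests its rank; it is textbook material rather than the letter's specific analysis, and the example model is hypothetical.

```python
# Small sketch of a linear observability check: for x_{k+1} = A x_k,
# y_k = C x_k, the pair (A, C) is observable iff the observability matrix
# [C; CA; ...; CA^{n-1}] has full column rank.
import numpy as np

def is_observable(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O) == n

# Example: constant-velocity model observed through position only.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))   # True: position measurements also reveal velocity
```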

20.
The popular dynamic reanalysis methods, such as combined approximation (CA), mainly focus on the frequency domain. In time domain analysis, the main challenge for reanalysis methods is to calculate the responses in each iteration. Because of this difficulty, popular reanalysis methods are not available for time domain schemes such as the Newmark-$\beta$ and central difference (CDF) methods. Therefore, a novel adaptive time-based global reanalysis (ATGR) algorithm for the Newmark-$\beta$ method is suggested. If basis vectors were generated to predict the response in each time step, the computational cost of reanalysis would increase significantly. To improve the efficiency, an adaptive reanalysis algorithm is suggested. Moreover, in order to enhance the accuracy of the popular combined approximation (CA) reanalysis, a global strategy is suggested to construct the basis vectors. Numerical examples show that accurate approximations are achieved efficiently for time domain problems.
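As a reference point for the time-domain setting, a plain Newmark-β stepper (without any reanalysis acceleration) looks as follows; γ = 1/2, β = 1/4 gives the unconditionally stable average-acceleration scheme, and all names below are illustrative.

```python
# Illustrative Newmark-beta time stepping for M u'' + C u' + K u = f(t),
# written in the standard textbook form. The paper's adaptive, reanalysis-based
# acceleration is not reproduced here.
import numpy as np

def newmark_beta(M, C, K, f, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    a1, a2, a3 = 1/(beta*dt**2), 1/(beta*dt), 1/(2*beta) - 1
    a4, a5, a6 = gamma/(beta*dt), gamma/beta - 1, dt*(gamma/(2*beta) - 1)
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)   # consistent initial accel.
    K_eff = K + a1*M + a4*C                          # factor once, reuse each step
    us = [u.copy()]
    for n in range(1, n_steps + 1):
        rhs = (f(n*dt)
               + M @ (a1*u + a2*v + a3*a)
               + C @ (a4*u + a5*v + a6*a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = a1*(u_new - u) - a2*v - a3*a
        v = v + dt*((1 - gamma)*a + gamma*a_new)
        u, a = u_new, a_new
        us.append(u.copy())
    return np.array(us)

# Example: undamped single-DOF oscillator with natural frequency 1 rad/s.
M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[1.0]])
us = newmark_beta(M, C, K, lambda t: np.zeros(1), np.array([1.0]),
                  np.zeros(1), dt=0.05, n_steps=200)
print(us[-1, 0], np.cos(200*0.05))   # numerical vs. analytical displacement
```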

