Similar Documents
20 similar documents found (search time: 15 ms)
1.
A zonal grid algorithm for direct numerical simulation (DNS) of incompressible turbulent flows within a Finite-Volume framework is presented. The algorithm uses fully coupled embedded grids and a conservative treatment of the grid-interface variables. A family of conservative prolongation operators is tested on a 2D vortex dipole and a 3D turbulent boundary layer flow. These tests show that both first- and second-order interpolation preserve the overall second-order spatial accuracy of the scheme. The first-order conservative interpolation has a smaller damping effect on the solution, but the second-order conservative interpolation has better spectral properties. The application of this algorithm to a boundary layer flow separating and reattaching due to the presence of a streamwise pressure gradient demonstrates the power and usefulness of the presented algorithm. This simulation was made possible by the zonal grid algorithm, which reduced the required mesh from about 500 × 10⁶ to 130 × 10⁶ grid cells.
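The abstract does not spell the operators out; as a minimal 1D sketch, conservative prolongation of cell averages can look as follows (the function names and the central-slope choice are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def prolong_first_order(coarse):
    """Piecewise-constant prolongation: each coarse cell average is
    copied to its two fine children, so the integral over the parent
    cell is preserved exactly, hence 'conservative'."""
    return np.repeat(coarse, 2)

def prolong_second_order(coarse):
    """Piecewise-linear prolongation with central slopes. The two
    children average back to the parent value, so conservation of the
    cell mean is retained while spectral behavior improves."""
    slope = np.gradient(coarse)            # central-difference slope per cell
    fine = np.empty(2 * coarse.size)
    fine[0::2] = coarse - 0.25 * slope     # left child
    fine[1::2] = coarse + 0.25 * slope     # right child
    return fine

coarse = np.sin(np.linspace(0, np.pi, 16))
for p in (prolong_first_order, prolong_second_order):
    fine = p(coarse)
    # conservation check: the mean of the children equals the parent average
    assert np.allclose(0.5 * (fine[0::2] + fine[1::2]), coarse)
```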

2.

Using a matrix of drop size distributions (DSDs), measured by a microscale array of disdrometers, a method of spatial and temporal DSD interpolation is presented. The goal of this interpolation technique is to estimate the DSD above the disdrometer array as a function of three spatial coordinates, time and drop diameter. This interpolation algorithm assumes simplified drop dynamics, based on cloud advection and terminal velocity of raindrops. Once a 3D DSD has been calculated, useful quantities such as radar reflectivity Z and rainfall rate R can be computed and compared with corresponding rain gauge and weather radar data.
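The quantities Z and R mentioned above follow from a binned DSD by standard moment sums. A sketch under common unit conventions (N(D) in m⁻³ mm⁻¹, D in mm; the Atlas et al. (1973) terminal-velocity fit and the Marshall-Palmer-like test DSD are assumed stand-ins, not necessarily the paper's choices):

```python
import numpy as np

def reflectivity_dbz(N, D, dD):
    """Radar reflectivity factor Z = sum N(D) D^6 dD  [mm^6 m^-3]."""
    Z = np.sum(N * D**6 * dD)
    return 10.0 * np.log10(Z)

def rain_rate_mm_h(N, D, dD):
    """Rain rate R = (pi/6) * 3.6e-3 * sum N(D) D^3 v(D) dD  [mm/h],
    with v(D) in m/s from the Atlas et al. (1973) fit."""
    v = 9.65 - 10.3 * np.exp(-0.6 * D)     # terminal velocity, m/s
    return np.pi / 6 * 3.6e-3 * np.sum(N * D**3 * v * dD)

D = np.arange(0.25, 6.0, 0.25)             # drop diameter bins, mm
dD = 0.25
N = 8000.0 * np.exp(-2.0 * D)              # Marshall-Palmer-like DSD, m^-3 mm^-1
print(reflectivity_dbz(N, D, dD), rain_rate_mm_h(N, D, dD))
```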

3.
A New Genetic Algorithm for the Degree-Constrained Minimum Spanning Tree Problem
Chromosome encoding is a key element of genetic algorithms, and the quality of the encoding directly affects algorithm performance. This paper proposes PC encoding, a spanning tree encoding method based on process control. A PC code is a fixed-length integer vector. To solve a particular spanning tree problem with PC encoding, an effective algorithm is first selected and modified into a controllable one, and the encoding vector is then used to control the algorithm's execution, so that each code yields a unique spanning tree. To solve the degree-constrained minimum spanning tree (DCMST) problem, a process-controllable degree-constrained spanning tree construction algorithm, PC-Prim, is designed on the basis of the D-Prim algorithm. A genetic algorithm for the DCMST problem that uses PC-Prim as its decoder is given. Simulation results show that this genetic algorithm outperforms the other compared algorithms in both solution accuracy and running time.
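The abstract does not give PC-Prim in detail; a plausible sketch of a code-vector-controlled, degree-constrained Prim process (all identifiers and the candidate-selection rule are assumptions, not the paper's decoder):

```python
import numpy as np

def pc_prim(W, code, d_max):
    """Decode a fixed-length integer vector `code` into one
    degree-constrained spanning tree via a controlled Prim process:
    at every step the code entry selects among the weight-sorted
    feasible cut edges, so each vector maps to a unique tree.
    Illustrative sketch, not the paper's implementation."""
    n = W.shape[0]
    in_tree = {0}
    degree = np.zeros(n, dtype=int)
    edges = []
    for step in range(n - 1):
        # feasible cut edges: tree endpoint still below the degree bound
        cand = sorted((W[i, j], i, j)
                      for i in in_tree if degree[i] < d_max
                      for j in range(n) if j not in in_tree)
        if not cand:
            return None                    # code decodes to an infeasible state
        w, i, j = cand[code[step] % len(cand)]
        edges.append((i, j, w))
        degree[i] += 1
        degree[j] += 1
        in_tree.add(j)
    return edges

W = np.random.rand(8, 8); W = (W + W.T) / 2
tree = pc_prim(W, code=np.zeros(7, dtype=int), d_max=3)  # all-zeros code = greedy D-Prim
```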

4.
5.
An Intrusion Detection Method Based on Quantum Genetic Clustering
Existing clustering-based intrusion detection algorithms require the number of clusters to be preset, and their performance is affected by the input order of the initial data. To address this, a new intrusion detection method based on quantum genetic clustering is proposed. Its basic idea is to first build initial clusters automatically, then optimize the combination of these initial clusters with an improved quantum genetic algorithm, and finally perform intrusion detection. Experimental results show that the method can effectively detect intrusion data in the network.
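As a hedged illustration of the quantum-genetic machinery such methods build on, here is a toy Han-Kim-style quantum-inspired evolutionary loop on a OneMax problem; it is not the paper's detection pipeline, and all names and constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop, delta = 20, 10, 0.05 * np.pi

def observe(theta):
    # collapse Q-bit angles to binary strings: P(bit = 1) = sin^2(theta)
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

theta = np.full((pop, n_bits), np.pi / 4)   # start at equal superposition
best, best_fit = None, -1
for gen in range(100):
    bits = observe(theta)
    fit = bits.sum(axis=1)                  # toy fitness: OneMax
    if fit.max() > best_fit:
        best_fit, best = fit.max(), bits[fit.argmax()].copy()
    # rotation gate: steer each Q-bit's probability toward the best bits
    direction = np.where(best == 1, +1.0, -1.0)
    worse = (fit < best_fit)[:, None]
    theta += delta * direction * worse
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)  # keep both outcomes reachable
print(best_fit, best)
```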

6.
A previously presented hybrid finite volume/particle method for the solution of the joint velocity-frequency-composition probability density function (JPDF) transport equation in complex 3D geometries is extended for parallel computing. The parallelization strategy is based on domain decomposition. The finite volume method (FVM) and the particle method (PM) are parallelized separately, and the algorithm is fully synchronous. For the FVM, a standard method based on transferring data in ghost cells is used. Moreover, a subdomain interior decomposition algorithm to efficiently solve the implicit time integration for hyperbolic systems is described. The parallelization of the PM is more complicated due to the use of a sub-time stepping algorithm for the particle trajectory integration: each particle obeys its local CFL criterion, and the distances covered per global time step can vary significantly. Therefore, an efficient algorithm which deals with this issue with minimum communication effort was devised and implemented. Numerical tests validating the parallel against the serial algorithm are presented, and the effectiveness of the subdomain interior decomposition for the implicit time integration is investigated. A 3D dump-combustor configuration test case with about 2.5 × 10⁵ cells was used to demonstrate the good performance of the parallel algorithm. The hybrid algorithm scales well, and the maximum speedup on 60 processors for this configuration was 50 (≈80% parallel efficiency).
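The per-particle sub-time stepping described above can be pictured with a minimal serial 1D sketch (the velocity field and constants are invented for illustration):

```python
import numpy as np

def advance_particles(x, u, dx, dt_global, cfl=0.5):
    """Advance each particle through one global step using local
    sub-steps: dt_local = cfl * dx / |u(x)|, clipped so the particle
    lands exactly on the global time level. Fast particles take many
    small steps, slow ones few, which is what makes the communication
    pattern of a parallel implementation irregular."""
    for p in range(x.size):
        t = 0.0
        while t < dt_global:
            vel = u(x[p])
            dt = min(cfl * dx / max(abs(vel), 1e-12), dt_global - t)
            x[p] += vel * dt
            t += dt
    return x

u = lambda x: 1.0 + np.sin(x)               # assumed local velocity field
x = advance_particles(np.linspace(0, 2 * np.pi, 8), u, dx=0.1, dt_global=0.2)
```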

7.
We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max–Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC’s ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusions that local structural learning with H2PC in the form of local neighborhood induction is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available.

8.
This work is devoted to the development of efficient parallel algorithms for the direct numerical simulation (DNS) of incompressible flows on modern supercomputers. In doing so, a Poisson equation needs to be solved at each time-step to project the velocity field onto a divergence-free space. Due to the non-local nature of its solution, this elliptic system is the part of the algorithm that is most difficult to parallelize. The Poisson solver presented here is restricted to problems with one uniform periodic direction. It is a combination of a block-preconditioned Conjugate Gradient (PCG) method and an FFT diagonalization. The latter decomposes the original system into a set of mutually independent 2D systems that are solved by means of the PCG algorithm. For the most ill-conditioned systems, which correspond to the lowest Fourier frequencies, the PCG is replaced by a direct Schur-complement based solver. The previous version of the Poisson solver was conceived for single-core (and dual-core) processors, and therefore the distributed memory model with the message-passing interface (MPI) was used. The advent of multi-core architectures motivated the use of a two-level hybrid MPI + OpenMP parallelization with the shared memory model on the second level. Advantages and implementation details of the additional OpenMP parallelization are presented and discussed in this paper. Numerical experiments show that, within its range of efficient scalability, the previous MPI-only parallelization is slightly outperformed by the MPI + OpenMP approach. More importantly, the hybrid parallelization has made it possible to significantly extend the range of efficient scalability. Here, the solver has been successfully tested up to 12800 CPU cores for meshes with up to 10⁹ grid points. Estimations based on the presented results show that this range can potentially be stretched to approximately 200,000 cores. Finally, several examples of DNS simulations are briefly presented to illustrate some potential applications of the solver.
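The core idea, an FFT along the periodic direction decoupling the 3D Poisson problem into mutually independent 2D systems, can be sketched in pure NumPy/SciPy; a direct sparse factorization stands in here for the paper's PCG and Schur-complement solvers, and the grid sizes are arbitrary:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

nx, ny, nz, h = 16, 16, 16, 1.0 / 16          # x periodic; y, z Dirichlet

def lap1d(n):
    # second-order 1D Laplacian with homogeneous Dirichlet ends
    return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

A2d = sp.kronsum(lap1d(nz), lap1d(ny)).tocsc()  # 2D Laplacian in (y, z)
I2d = sp.identity(ny * nz, format='csc')

f = np.random.rand(nx, ny * nz)               # right-hand side, one row per x-plane
fhat = np.fft.fft(f, axis=0)                  # diagonalize the periodic direction

# eigenvalues of the periodic 1D Laplacian: -(2 - 2 cos(2*pi*k/nx)) / h^2
lam = -(2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)) / h**2

uhat = np.empty_like(fhat)
for k in range(nx):                           # nx mutually independent 2D systems
    lu = splu((A2d + lam[k] * I2d).tocsc())   # direct solve stands in for PCG/Schur
    uhat[k] = lu.solve(fhat[k].real) + 1j * lu.solve(fhat[k].imag)

u = np.fft.ifft(uhat, axis=0).real.reshape(nx, ny, nz)
```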

9.
The L∞ norm has been widely studied as a criterion for curve-fitting problems. This procedure is well suited to problems in numerical analysis [7], where errors due to round-off are assumed to have an underlying uniform distribution. The L∞ norm, or Chebychev, problem allows for the "worst case", in the sense of requiring the largest absolute error to be a minimum. Stiefel [9] developed a method called the "exchange method" for finding Chebychev estimates. The method presented here differs from the exchange method in several important respects: we use a "reduced basis", with determinants used instead of an explicit basis inverse, and we develop a procedure for multiple pivots, that is, skipping extreme point solutions. The algorithm is specialized to solve the problem with an intercept term and a single independent variable. Results of computational experience with a computer-code version of the algorithm are presented.
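The Chebychev line-fitting problem itself is equivalent to a small linear program, which makes a compact reference point for the exchange-type method described; this LP formulation is a standard equivalent, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_line_fit(x, y):
    """min t  s.t.  -t <= a + b*x_i - y_i <= t  for all i.
    Variables are (a, b, t); the optimal t is the minimized
    largest absolute residual, i.e. the L-infinity criterion."""
    n = x.size
    # r_i = a + b*x_i - y_i;  encode  r_i - t <= 0  and  -r_i - t <= 0
    A = np.block([[np.ones((n, 1)), x[:, None], -np.ones((n, 1))],
                  [-np.ones((n, 1)), -x[:, None], -np.ones((n, 1))]])
    b = np.concatenate([y, -y])
    c = np.array([0.0, 0.0, 1.0])           # minimize t only
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None), (None, None), (0, None)])
    a, slope, t = res.x
    return a, slope, t

x = np.linspace(0, 1, 20)
y = 2.0 + 3.0 * x + np.random.uniform(-0.1, 0.1, 20)
print(chebyshev_line_fit(x, y))
```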

10.
Traditional hyper-L-shaped tile simulation algorithms rely mainly on exhaustive enumeration, which is inefficient and of limited applicability. To address these problems, a 3D Cartesian coordinate system is introduced into triple-loop networks, and a hyper-L-shaped tile simulation algorithm for the generalized triple-loop network G(N; s1, s2, s3) is proposed in this coordinate system. The hyper-L-shaped tile simulation is implemented with C++ and OpenGL, and the related parameters l, m and n, as well as the diameter D of the triple-loop network, are obtained. Experimental results show that the algorithm achieves high execution efficiency and strong generality.
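The diameter D of G(N; s1, s2, s3) that the tile method recovers geometrically can also be checked by plain BFS on the circulant digraph; a brute-force reference, not the paper's algorithm, with made-up example parameters:

```python
from collections import deque

def triple_loop_diameter(N, s1, s2, s3):
    """BFS from node 0 over the directed triple-loop network
    G(N; s1, s2, s3), where node i links to (i+s1), (i+s2), (i+s3)
    mod N. By vertex transitivity, the eccentricity of node 0 is
    the diameter D."""
    dist = [-1] * N
    dist[0] = 0
    q = deque([0])
    while q:
        i = q.popleft()
        for s in (s1, s2, s3):
            j = (i + s) % N
            if dist[j] < 0:
                dist[j] = dist[i] + 1
                q.append(j)
    return max(dist)

print(triple_loop_diameter(100, 1, 5, 13))  # example parameters, not from the paper
```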

11.
于群英  杨文荣 《微机发展》2008,18(6):210-213
To address DNS server deployment and failure problems in complex network environments, this paper explains the working principles of DNS in detail. Starting from the DNS structure and query mechanism, failure symptoms are analyzed: the network diagnostic command nslookup is used to gather relevant data, and the returned data are compared and analyzed to determine the cause of a failure and to work out a remedy. A systematic troubleshooting strategy is proposed that proceeds step by step from the inside out, from the Intranet to the Internet. Practice shows that this strategy improves DNS efficiency in complex network environments and greatly shortens DNS troubleshooting time, keeping the network running smoothly.

12.
Direct numerical simulation (DNS) offers useful information about the understanding and modeling of turbulent flow. However, few DNSs of wall-bounded compressible turbulent flows have been performed. The objective of this paper is to construct a DNS algorithm which can simulate the compressible turbulent flow between the adiabatic and isothermal walls accurately and efficiently. Since this flow is the simplest turbulent flow with adiabatic and isothermal walls, it is ideal for the modeling of compressible turbulent flow near the adiabatic and isothermal walls. The present DNS algorithm for wall-bounded compressible turbulent flow is based on the B-spline collocation method in the wall-normal direction. In addition, the skew-symmetric form for the convection term is used in the DNS algorithm to maintain numerical stability. The validity of the DNS algorithm is confirmed by comparing our results with those of an existing DNS of the compressible turbulent flow between isothermal walls [J. Fluid Mech. 305 (1995) 159]. The applicability and usefulness of the DNS algorithm are demonstrated by the stable computation of the DNS of compressible turbulent flow between adiabatic and isothermal walls.
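The skew-symmetric convection form owes its robustness to a discrete energy identity; a minimal 1D periodic illustration with a random field (for 3D incompressible flow the form ½(u·∇u + ∇·(uu)) is consistent with u·∇u because ∇·u = 0; only the discrete-operator property is demonstrated here):

```python
import numpy as np

n, h = 64, 2 * np.pi / 64
u = np.random.default_rng(1).standard_normal(n)   # arbitrary periodic field

def ddx(f):
    # second-order central difference on a periodic grid:
    # a skew-symmetric discrete operator (D^T = -D)
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

advective = u * ddx(u)                        # u du/dx, advective form
skew = 0.5 * (u * ddx(u) + ddx(u * u))        # 1/2 (u du/dx + d(u^2)/dx)

# discrete kinetic-energy production sum_i u_i * C(u)_i:
# exactly zero (to round-off) for the skew form, generically
# nonzero for the advective form, hence the numerical stability
print(np.dot(u, advective), np.dot(u, skew))
```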

13.
We present a new constructive solving approach for systems of 3D geometric constraints. The solver is based on the cluster rewriting approach, which can efficiently solve large systems of constraints on points, and incrementally handle changes to a system, but can so far solve only a limited class of problems. The new solving approach extends the class of problems that can be solved, while retaining the advantages of the cluster rewriting approach. Whereas previous cluster rewriting solvers only determined rigid clusters, we also determine two types of non-rigid clusters, i.e. clusters with particular degrees of freedom. This allows us to solve many additional problems that cannot be decomposed into rigid clusters, without resorting to expensive algebraic solving methods. In addition to the basic ideas of the approach, an incremental solving algorithm, two methods for solution selection, and a method for mapping constraints on 3D primitives to constraints on points are presented.

14.
This paper presents a 100-line Python code for general 3D topology optimization. The code adopts the Abaqus Scripting Interface that provides convenient access to advanced finite element analysis (FEA). It is developed for the compliance minimization with a volume constraint using the Bi-directional Evolutionary Structural Optimization (BESO) method. The source code is composed of a main program controlling the iterative procedure and five independent functions realizing input model preparation, FEA, mesh-independent filter and BESO algorithm. The code reads the initial design from a model database (.cae file) that can be of arbitrary 3D geometries generated in Abaqus/CAE or converted from various widely used CAD modelling packages. This well-structured code can be conveniently extended to various other topology optimization problems. As examples of easy modifications to the code, extensions to multiple load cases and nonlinearities are presented. This code is useful for researchers in the topology optimization field and for practicing engineers seeking automated conceptual design tools. With further extensions, the code could solve sophisticated 3D conceptual design problems in structural engineering, mechanical engineering and architecture practice. The complete code is given in the appendix section and can also be downloaded from the website: www.rmit.edu.au/research/cism/.
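The BESO design update at the heart of such codes fits in a few lines; a hard-kill sketch with random stand-in sensitivities (the paper's actual code delegates the FEA to Abaqus and applies a mesh-independent filter, both omitted here; names and the evolution rate are assumptions):

```python
import numpy as np

def beso_update(sens, x, vol_target, er=0.02):
    """One BESO design update. `sens` are (filtered) element
    sensitivities and `x` the current 0/1 design. The volume fraction
    is reduced by the evolution rate `er` each iteration until the
    target is reached; the elements with the highest sensitivities
    stay solid, the rest are removed."""
    vol_next = max(vol_target, x.mean() * (1.0 - er))
    n_solid = int(round(vol_next * x.size))
    threshold = np.sort(sens)[::-1][n_solid - 1]
    return (sens >= threshold).astype(float)

sens = np.random.rand(1000)                 # stand-in for compliance sensitivities
x = np.ones(1000)                           # start from the full design domain
for _ in range(40):
    x = beso_update(sens, x, vol_target=0.5)
print(x.mean())                             # settles at the volume constraint
```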

15.
The original optimization version of the artificial immune network (opt-aiNet) suffers from an indeterminate direction of local search, the lack of an efficient mechanism to regulate local versus global search, and the purely random regeneration of new antibodies. To address these problems, this paper puts forward a novel prediction-based immune network (PiNet) to solve multimodal function optimization more efficiently, accurately and reliably. The algorithm mimics natural phenomena in the immune system such as clonal selection, affinity maturation, the immune network, immune memory and immune prediction. The proposed algorithm adds two main features to opt-aiNet: the information carried by antibodies across consecutive generations is used to point out the direction of local search and to adjust the balance between local and global search, and memory cells are employed to generate new antibodies with high affinities. Theoretical analysis and experiments on 10 widely used benchmark problems show that, compared with the opt-aiNet method, the PiNet algorithm improves search performance significantly in success rate, convergence speed, search ability, solution quality and algorithm stability.

16.
The density-based notion of clustering is widely used owing to its easy implementation and its ability to detect arbitrarily shaped clusters in the presence of noisy data points without requiring prior knowledge of the number of clusters. Density-based spatial clustering of applications with noise (DBSCAN) is the first algorithm proposed in the literature that uses the density-based notion for cluster detection. Since most real data sets today contain feature spaces with adjacent nested clusters, DBSCAN is clearly unsuitable for detecting adjacent clusters of variable density, because it uses the global density parameters: the neighborhood radius N_rad and the minimum number of points in a neighborhood N_pts. The efficiency of DBSCAN therefore depends on these initial parameter settings; for DBSCAN to work properly, the neighborhood radius must be less than the distance between two clusters, otherwise the algorithm merges the two clusters and detects them as one. In this paper: 1) we propose an improved version of the DBSCAN algorithm that detects adjacent clusters of varying density by using the concept of neighborhood difference, within the density-based approach and without adding much computational complexity to the original DBSCAN algorithm; 2) we validate our experimental results using the space density indexing (SDI) internal cluster measure, recently proposed by one of the authors, to demonstrate the quality of the proposed clustering method. Our experimental results also suggest that the proposed method is effective in detecting adjacent nested clusters of variable density.
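The baseline DBSCAN that the proposed method modifies is short enough to state in full, which makes the role of the two global parameters explicit; a textbook implementation, not the authors' improved variant:

```python
import numpy as np

def dbscan(X, n_rad, n_pts):
    """Classic DBSCAN. n_rad is the global neighborhood radius (eps)
    and n_pts the minimum neighborhood size; because both are global,
    adjacent clusters of different densities get merged or fragmented,
    which is the weakness the paper targets. Returns labels, -1 = noise."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    neighbors = [np.where(dist[i] <= n_rad)[0] for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < n_pts:
            continue                         # already assigned, or not a core point
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:                         # expand the cluster through core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= n_pts:
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

X = np.concatenate([np.random.randn(100, 2), np.random.randn(100, 2) + 8])
print(np.unique(dbscan(X, n_rad=1.0, n_pts=5)))
```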

17.
Cosmological simulations of structure and galaxy formation have played a fundamental role in the study of the origin, formation and evolution of the Universe. These studies have improved enormously with the use of supercomputers and parallel systems and, more recently, grid-based systems and Linux clusters. Here we present the new version of the tree N-body parallel code FLY, which runs on a PC Linux cluster using the one-sided communication paradigm of MPI-2, and we show the performance obtained. FLY is included in the Computer Physics Communications Program Library. This new version was developed using the Linux Cluster of CINECA, an IBM cluster with 1024 Intel Xeon Pentium IV 3.0 GHz processors. The results show that it is possible to run a 64-million-particle simulation in less than 15 minutes per time-step, and that the code scales with the number of processors. This leads us to propose FLY as a code for very large N-body simulations with more than 10⁹ particles, with the higher resolution of a pure tree code. The new version of FLY is available at the CPC Program Library, http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0.html [U. Becciani, M. Comparato, V. Antonuccio-Delogu, Comput. Phys. Comm. 174 (2006) 605].

18.
In distributed data mining, adopting a flat node distribution model can affect scalability. To address the problem of modularity, flexibility and scalability, we propose a Hierarchically-distributed Peer-to-Peer (HP2PC) architecture and clustering algorithm. The architecture is based on a multi-layer overlay network of peer neighborhoods. Supernodes, which act as representatives of neighborhoods, are recursively grouped to form higher level neighborhoods. Within a certain level of the hierarchy, peers cooperate within their respective neighborhoods to perform P2P clustering. Using this model, we can partition the clustering problem in a modular way across neighborhoods, solve each part individually using a distributed K-means variant, then successively combine clusterings up the hierarchy where increasingly more global solutions are computed. In addition, for document clustering applications, we summarize the distributed document clusters using a distributed keyphrase extraction algorithm, thus providing interpretation of the clusters. Results show decent speedup, reaching 165 times faster than centralized clustering for a 250-node simulated network, with comparable clustering quality to the centralized approach. We also provide comparison to the P2P K-means algorithm and show that HP2PC accuracy is better for typical hierarchy heights. Results for distributed cluster summarization match those of their centralized counterparts with up to 88% accuracy.
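The flavor of the hierarchical scheme, cluster locally and then cluster the representatives, can be reduced to a weighted two-level K-means toy; HP2PC's overlay network, supernode election and keyphrase summarization are out of scope, and sklearn's K-means stands in for the distributed variant:

```python
import numpy as np
from sklearn.cluster import KMeans

def two_level_kmeans(peers_data, k_local, k_global):
    """Level 1: every peer clusters its own data independently.
    Level 2: a supernode clusters the peer centroids, weighting each
    centroid by the number of points it represents, so that dense
    peers count more in the global solution."""
    centroids, weights = [], []
    for X in peers_data:                     # local, independent clusterings
        km = KMeans(n_clusters=k_local, n_init=10).fit(X)
        centroids.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k_local))
    C = np.vstack(centroids)
    w = np.concatenate(weights).astype(float)
    top = KMeans(n_clusters=k_global, n_init=10).fit(C, sample_weight=w)
    return top.cluster_centers_

peers = [np.random.randn(200, 2) + off for off in ([0, 0], [10, 0], [0, 10])]
print(two_level_kmeans(peers, k_local=3, k_global=3))
```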

19.
Sun Liang, Xing Jian-chun, Wang Zhen-yu, Zhang Xun, Liu Liang. Neural Computing & Applications, 2018, 29(5): 1311-1330

Image contour-based feature extraction has been applied in fields such as image recognition and virtual reality. However, image contour features are easily affected by factors like noise, rotation and thresholds during extraction and processing. To solve this problem, this paper proposes a contour-coding image recognition algorithm based on level set and BP neural network models. Firstly, a level set model is employed to extract the contours of images. Secondly, the image coding method proposed herein is used to code images horizontally, vertically and obliquely. Finally, a BP neural network model is trained to recognize the image codes. The validity of the proposed algorithm is verified using a set of actual engineering part images as well as the MPEG and PLANE databases. The results show that the proposed method achieves a high recognition rate with small training samples, and exhibits good robustness to external disturbances such as noise and image scaling and rotation.

20.
The interactive program MOCRAFT differs from other microcomputer versions of CRAFT in several respects. Unlike some versions, which are written in BASIC and slow to execute, limited in problem size, or lacking the features of the original, MOCRAFT is a full implementation of the SHARE Library FORTRAN code.

MOCRAFT also contains many features not found in any other version of the well-known and powerful facilities layout program CRAFT (Computerized Relative Allocation of Facilities Technique, by Armour and Buffa). For example, MOCRAFT can utilize both cost-flow and REL data at the same time, and is therefore a true multiple-objective method. It can also accommodate constraints between arbitrary points and departments in the layout.

Originally developed at the University of Wisconsin-Milwaukee for control panel layouts, the source code for MOCRAFT has been extensively edited and enhanced at Cleveland State University. The code was first implemented on CSU's VAX network by the author in 1985, and the first PC version was compiled by D'Souza and Mohanty in 1987. This version of MOCRAFT, by the author, improves on the first PC version and runs on a standard-configuration IBM PC, XT or AT with at least 256K bytes of RAM.
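CRAFT's core improvement step, evaluating pairwise department exchanges against a flow-times-distance cost, is easy to sketch; departments are reduced to points at fixed slots here, whereas the real program exchanges equal-area or adjacent departments and recomputes centroids:

```python
import numpy as np
from itertools import combinations

def layout_cost(assign, flow, dist):
    """Total material-handling cost: sum of flow[i][j] times the
    distance between the slots currently holding departments i and j."""
    return sum(flow[i, j] * dist[assign[i], assign[j]]
               for i in range(len(assign)) for j in range(len(assign)))

def craft_exchange(flow, dist):
    """First-improvement pairwise exchange: keep swapping two
    departments whenever the swap reduces total cost, in the spirit
    of CRAFT's exchange heuristic."""
    assign = list(range(len(flow)))
    improved = True
    while improved:
        improved = False
        base = layout_cost(assign, flow, dist)
        for a, b in combinations(range(len(assign)), 2):
            assign[a], assign[b] = assign[b], assign[a]
            if layout_cost(assign, flow, dist) < base - 1e-12:
                improved = True
                break                        # keep the improving swap
            assign[a], assign[b] = assign[b], assign[a]   # undo
    return assign

n = 6
slots = np.random.rand(n, 2) * 10            # fixed slot coordinates (illustrative)
dist = np.linalg.norm(slots[:, None] - slots[None, :], axis=-1)
flow = np.random.randint(0, 20, (n, n)); np.fill_diagonal(flow, 0)
print(craft_exchange(flow, dist))
```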

