Similar Documents
 10 similar documents found (search time: 218 ms)
1.
The quantum Monte Carlo diagonalization, or stochastic diagonalization, serves as a computational method for solving quantum Hamiltonian models exactly. Although it is based on a variational method, in which the solution approaches the optimal eigenstate of a huge Hamiltonian matrix, the diagonalization method is difficult in practice because of the rapidly increasing number of quantum states. In this paper, we suggest an improved implementation for finding the ground state via exact diagonalization of the Hubbard and t-J model Hamiltonians. A great increase in computational capability is achieved through an optimized code based on Boolean operations, a reduction of the state space using symmetry properties, and an effective variation on the trial ground state. Our method is limited mainly by the memory needed to store the components of the trial ground state. Run on a single personal computer, the method finds exact solutions in a relatively short time with 10^8-10^9 basis states.
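The Boolean-operation encoding mentioned above typically represents each basis state as a bitmask of site occupations, so that hopping terms reduce to bit tests and shifts. A minimal sketch of that idea (the function name `hop` and the single-spin-species simplification are our own, not from the paper):

```python
def hop(state, i, j):
    """Apply a hopping move c_j^dagger c_i to a bitmask-encoded basis state.

    Bit k of `state` is 1 when site k is occupied (one spin species shown).
    Returns (new_state, fermionic_sign), or None if the move is forbidden
    by the Pauli principle.
    """
    if not (state >> i) & 1 or (state >> j) & 1:
        return None  # site i empty, or site j already occupied
    new_state = (state & ~(1 << i)) | (1 << j)
    # Fermionic sign: parity of occupied sites strictly between i and j.
    lo, hi = sorted((i, j))
    between = ((1 << hi) - 1) & ~((1 << (lo + 1)) - 1)
    sign = -1 if bin(state & between).count("1") % 2 else 1
    return new_state, sign
```

Because every operation is a machine-word bit manipulation, matrix elements of a Hubbard-type Hamiltonian can be generated without ever storing the matrix explicitly.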

2.
The Euclidean Distance Transform (EDT) is an important tool in image analysis and machine vision. This paper provides an area-efficient hardware solution to the computation of the EDT of a binary image. An O(n) hardware algorithm for computing the EDT of an n×n image is presented, and a pipelined 2D array architecture for hardware implementation is designed. The architecture has a regular structure with locally connected identical processing elements, and pipelining further reduces hardware resources. Such an array architecture is easily scalable to images of different sizes and is suitable for implementation on reconfigurable devices such as FPGAs. Results of an FPGA-based implementation show that the hardware can process about 6000 images of size 512×512 per second, far above the video rate of 30 frames per second.
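As a software reference for what such hardware computes, here is a brute-force squared-EDT in NumPy (the function name and the squared-distance convention are our own choices; the paper's pipelined array produces the same map in O(n) hardware steps):

```python
import numpy as np

def edt_sq(binary):
    """Brute-force squared Euclidean distance transform of a binary image.

    Each pixel receives the squared distance to the nearest foreground (1)
    pixel.  O(n^4) reference code, for checking faster implementations.
    """
    fg = np.argwhere(binary)                 # coordinates of foreground pixels
    h, w = binary.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # Squared distance from every pixel to every foreground pixel, then min.
    d2 = ((pts[:, None, :] - fg[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).reshape(h, w)
```

A hardware design instead streams rows through identical processing elements, so the per-image cost drops from quartic to linear in n.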

3.
In this work, we propose a structured computational framework for modelling the envelope of the swept volume, that is, the boundary of the volume obtained by sweeping an input solid along a trajectory of rigid motions. Our framework is adapted to the well-established industry-standard brep format to enable its implementation in modern CAD systems. This is achieved via a “local analysis”, which covers parametrizations and singularities, as well as a “global theory”, which tackles face-boundaries, self-intersections and trim curves. Central to the local analysis is the “funnel”, which serves as a natural parameter space for the basic surfaces constituting the sweep. The trimming problem is reduced to the problem of surface–surface intersections of these basic surfaces. Based on the complexity of these intersections, we introduce a novel classification of sweeps as decomposable and non-decomposable. Further, we construct an invariant function θ on the funnel which efficiently separates decomposable and non-decomposable sweeps. Through a geometric theorem we also show intimate connections between θ, local curvatures and the inverse trajectory used in earlier works as an approach towards trimming. In contrast to the inverse-trajectory approach of testing points, θ is a computationally robust global function. It is the key to a complete structural understanding, and an efficient computation, of both the singular locus and the trim curves, which are central to a stable implementation. Several illustrative outputs of a pilot implementation are included.

4.
The problem of building an ℓ0-sampler is to sample near-uniformly from the support set of a dynamic multiset. This problem has a variety of applications within data analysis, computational geometry and graph algorithms. In this paper, we abstract a set of steps for building an ℓ0-sampler, based on sampling, recovery and selection. We analyze the implementation of an ℓ0-sampler within this framework, and show how prior constructions of ℓ0-samplers can all be expressed in terms of these steps. Our experimental contribution is to provide a first detailed study of the accuracy and computational cost of ℓ0-samplers.
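The sampling/recovery/selection decomposition can be illustrated with a toy sketch: level j keeps only items whose hash survives j coin flips, and at query time we select the sparsest level that is exactly 1-sparse. This is a hypothetical simplification (class name and explicit per-level dictionaries are ours); real constructions replace the dictionaries with sparse-recovery sketches so that space stays polylogarithmic:

```python
import random

class L0Sampler:
    """Toy l0-sampler over a dynamic multiset with insertions and deletions."""

    def __init__(self, levels=32, seed=0):
        self.salt = random.Random(seed).getrandbits(64)
        self.levels = levels
        self.counts = [dict() for _ in range(levels)]  # item -> net frequency

    def _level(self, item):
        # Sampling step: geometric level from the item's hash bits.
        h = hash((self.salt, item)) & 0xFFFFFFFF
        j = 0
        while j + 1 < self.levels and (h >> j) & 1:
            j += 1
        return j

    def update(self, item, delta):
        # The item participates in every level up to its own.
        for j in range(self._level(item) + 1):
            c = self.counts[j]
            c[item] = c.get(item, 0) + delta
            if c[item] == 0:
                del c[item]          # deletions cancel insertions exactly

    def query(self):
        # Recovery + selection: unique survivor at the sparsest usable level.
        for c in reversed(self.counts):
            live = [k for k, v in c.items() if v != 0]
            if len(live) == 1:
                return live[0]
        return None
```

Deletions are handled for free because only net frequencies are stored, which is exactly why ℓ0-samplers work over fully dynamic streams.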

5.
There exist algorithms, also called “fast” algorithms, which exploit the special structure of Toeplitz matrices to, for example, solve a linear system of equations in O(n^2) flops. However, some implementations of classical algorithms that do not use this structure (O(n^3) flops) greatly reduce the time to solution when several cores are available. It is therefore necessary to revisit “fast” algorithms so that they retain the benefits of new hardware and software. In this work, we propose a new approach to the Generalized Schur Algorithm, a well-known algorithm for the solution of Toeplitz systems, that works on a block-Toeplitz matrix. Our algorithm is based on matrix–matrix multiplications, allowing it to exploit an efficient implementation of this operation where one exists. Our algorithm also makes use of the thread-level parallelism offered by multicore processors to decrease execution time.
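To illustrate how Toeplitz structure cuts the cost from O(n^3) to O(n^2), here is a sketch of the classical Levinson recursion for a symmetric positive-definite Toeplitz system. Note this is a different fast algorithm chosen for brevity, not the paper's Generalized Schur variant, which targets block-Toeplitz matrices and is built on matrix-matrix products:

```python
import numpy as np

def toeplitz_solve(r, b):
    """Solve T x = b, where T is symmetric positive-definite Toeplitz.

    r is the first column of T (r[0] on the diagonal).  Levinson recursion,
    O(n^2) flops versus O(n^3) for a structure-oblivious dense solve.
    """
    t = np.asarray(r, float) / r[0]     # normalize so the diagonal is 1
    b = np.asarray(b, float) / r[0]
    n = len(b)
    x = np.array([b[0]])
    y = np.array([-t[1]]) if n > 1 else np.array([])
    beta = 1.0
    alpha = -t[1] if n > 1 else 0.0
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha
        mu = (b[k] - t[1:k + 1][::-1] @ x) / beta
        x = np.concatenate([x + mu * y[::-1], [mu]])
        if k < n - 1:                   # extend the Yule-Walker solution
            alpha = -(t[k + 1] + t[1:k + 1][::-1] @ y) / beta
            y = np.concatenate([y + alpha * y[::-1], [alpha]])
    return x
```

The recursion updates two vectors per step with O(n) work, which is precisely the kind of fine-grained dependency chain that makes parallelizing fast Toeplitz solvers nontrivial.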

6.
Working with an integer bilinear programming formulation of a one-dimensional cutting-stock problem, we develop an ILP-based local-search heuristic. The ILPs holistically integrate the master problem and subproblem of the usual price-driven pattern-generation paradigm, resulting in a unified model that generates new patterns in situ. We work harder to generate new columns, but we are guaranteed that new columns give an integer linear-programming improvement (rather than the continuous linear-programming improvement sought by usual price-driven column generation). The method is well suited to practical restrictions, such as when a limited number of cutting patterns should be employed, and our goal is to generate a profile of solutions trading off trim loss against the number of patterns utilized. We describe our implementation and the results of computational experiments on instances from a chemical-fiber company.
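The trim-loss versus pattern-count trade-off can be made concrete with two small helpers (hypothetical names and data layout, not the paper's model): a pattern's trim loss is the stock length minus the material it cuts, and a solution's profile point pairs total loss with the number of distinct patterns run.

```python
def trim_loss(stock_len, widths, pattern):
    """Trim loss of one cutting pattern.

    pattern[i] = number of pieces of widths[i] cut from one stock roll.
    """
    used = sum(w * k for w, k in zip(widths, pattern))
    if used > stock_len:
        raise ValueError("pattern exceeds stock length")
    return stock_len - used

def profile_point(stock_len, widths, patterns, runs):
    """(total trim loss, number of distinct patterns) for one solution."""
    total = sum(runs[p] * trim_loss(stock_len, widths, pat)
                for p, pat in enumerate(patterns) if runs[p] > 0)
    n_used = sum(1 for r in runs if r > 0)
    return total, n_used
```

Sweeping a cap on `n_used` while minimizing `total` yields exactly the kind of solution profile the abstract describes.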

7.
The author presents a polynomial-based algorithm for high-order multidimensional interpolation at the coarse–fine interface in the context of adaptive mesh refinement on structured Cartesian grids. The proposed algorithm reduces coarse–fine interpolation to matrix–vector products by exploiting the static mesh geometry and a family of nonsingularity-preserving stencil transformations. As such, no linear system is solved at runtime and the ill-conditioning of the Vandermonde matrix is avoided. The algorithm is also generic in that D, the dimensionality of the computational domain, and p, the degree of the interpolating polynomial, are both arbitrary positive integers. Stability and accuracy are verified by interpolating simple functions, and by applying the proposed method to adaptively solving Poisson’s equation and the convection–diffusion equation. The companion MATLAB® package, AMRCFI, is freely available for convenience and further implementation details.
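The reduction to matrix-vector products can be sketched in 1D: because the mesh geometry is static, the interpolation operator is assembled once from the coarse and fine point locations, and every later interpolation is a single matvec. This is an illustration of the general idea only (function name ours; the paper's stencil transformations, which avoid the Vandermonde conditioning issue, are not reproduced here):

```python
import numpy as np

def interp_operator(coarse_x, fine_x, p):
    """Precompute the coarse-to-fine interpolation matrix for degree p.

    M = V_fine @ pinv(V_coarse): fitting a degree-p polynomial to the
    coarse samples and evaluating it at the fine points, folded into one
    matrix.  Assemble once, then interpolate with M @ coarse_values.
    """
    Vc = np.vander(np.asarray(coarse_x, float), p + 1)
    Vf = np.vander(np.asarray(fine_x, float), p + 1)
    return Vf @ np.linalg.pinv(Vc)

# Usage: exact for polynomials up to degree p.
M = interp_operator([0, 1, 2], [0.5, 1.5], p=2)
fine_vals = M @ np.array([0.0, 1.0, 4.0])   # samples of f(x) = x^2
```

Precomputing M is what removes runtime linear solves; the explicit pseudo-inverse here is where the paper's nonsingularity-preserving transformations would instead be used.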

8.
Concept detection is an important problem for efficient indexing and retrieval in large video archives. In this work we present the KavTan system, which performs high-level semantic classification in one of the largest TV archives of Turkey. In this system, concept detection is performed using generalized visual and audio concept-detection modules that are supported by video text detection, audio keyword spotting and specialized audio-visual semantic detection components. The performance of the presented framework was assessed objectively over a wide range of semantic concepts (5 high-level, 14 visual, 9 audio, 2 supplementary) using a significant amount of precisely labeled ground-truth data. The KavTan system achieves successful high-level concept detection in unconstrained TV broadcasts by efficiently exploiting multimodal information systematically extracted from both the spatial and the temporal extent of multimedia data.

9.

The implementation of periodic boundary conditions (PBCs) is one of the most important and difficult steps in the computational analysis of structures and materials. This is especially true for mechanical metamaterials, which typically possess intricate geometries and designs that make finding and implementing the correct PBCs a difficult challenge. In this work, we analyze one of the most common PBC implementation techniques, and we implement and validate an alternative generic method suitable for simulating any possible 2D microstructural geometry with a quadrilateral unit cell, regardless of symmetry and mode of deformation. A detailed schematic of how both methods can be employed to study 3D systems is also presented.
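The most common implementation technique ties each boundary node to its partner on the opposite face, imposing u_right = u_left + H · (x_right − x_left) for an applied macroscopic displacement gradient H. A minimal sketch of the node-matching step for an axis-aligned rectangular cell (function name and data layout are ours; the paper's generic method handles arbitrary quadrilateral cells):

```python
def pbc_pairs(nodes, lx, tol=1e-9):
    """Match left/right boundary nodes of a rectangular unit cell of width lx.

    nodes is a list of (x, y) coordinates.  Returns (i_left, i_right) index
    pairs on which the periodic constraint can then be imposed.
    """
    left = [i for i, (x, y) in enumerate(nodes) if abs(x) < tol]
    right = [i for i, (x, y) in enumerate(nodes) if abs(x - lx) < tol]
    # Pair opposite nodes by matching y-coordinate; assumes a conforming
    # mesh, i.e. both faces carry the same set of y values.
    left.sort(key=lambda i: nodes[i][1])
    right.sort(key=lambda i: nodes[i][1])
    return list(zip(left, right))
```

The difficulty the abstract points to is precisely that this matching breaks down for non-conforming or non-rectangular cells, which is what motivates a generic method.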


10.