Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We investigate the evolution in time of the probability distribution function for some wave and Maxwell equations in random media, in which parameters such as the permeability and permittivity fluctuate randomly in space; more precisely, two different media interface randomly in space. We numerically compute the probability distribution and density of the output solutions. The underlying numerical and statistical techniques are the polynomial chaos Galerkin projection, which has been used extensively for simulating partial differential equations with uncertainties, and Monte Carlo simulation.
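As a toy illustration of the Monte Carlo side of such a study (not the paper's actual wave or Maxwell solver), one can sample a random two-media coefficient field and estimate statistics of an output functional. In this hypothetical sketch the output is the effective coefficient of a one-dimensional layered medium (the harmonic mean of layer permittivities); all function names and parameter values are assumptions for illustration:

```python
import random
import statistics

def effective_permittivity(eps_a, eps_b, p, n_layers, rng):
    """Each layer is medium A with probability p, else medium B;
    the harmonic mean is the effective series coefficient in 1D."""
    layers = [eps_a if rng.random() < p else eps_b for _ in range(n_layers)]
    return n_layers / sum(1.0 / e for e in layers)

def monte_carlo_stats(n_samples=20000, seed=0):
    """Monte Carlo estimate of the mean and spread of the output."""
    rng = random.Random(seed)
    samples = [effective_permittivity(1.0, 4.0, 0.5, 50, rng)
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)
```

A histogram of `samples` would approximate the output probability density that the abstract refers to.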

2.
The convergence to steady state solutions of the Euler equations for high order weighted essentially non-oscillatory (WENO) finite difference schemes with the Lax-Friedrichs flux splitting (Jiang and Shu, in J. Comput. Phys. 126:202–228, 1996) is investigated. Numerical evidence in Zhang and Shu (J. Sci. Comput. 31:273–305, 2007) indicates that there exist slight post-shock oscillations when high order WENO schemes are used to solve problems containing shock waves. Even though these oscillations are small in magnitude and do not affect the “essentially non-oscillatory” property of the WENO schemes, they are responsible for the numerical residual hanging at the truncation error level of the scheme instead of settling down to machine zero. Unlike the strategy adopted in Zhang and Shu (J. Sci. Comput. 31:273–305, 2007), in which a new smoothness indicator was introduced to facilitate convergence to steady states, in this paper we study the effect of the local characteristic decomposition on steady state convergence. Numerical tests indicate that the slight post-shock oscillation has a close relationship with the local characteristic decomposition process. When this process is based on an average Jacobian at the cell interface using the Roe average, as is the standard procedure for WENO schemes, such post-shock oscillation appears. If we instead use upwind-biased interpolation to approximate the physical variables, including the velocity and enthalpy, on the cell interface to compute the left and right eigenvectors of the Jacobian for the local characteristic decomposition, the slight post-shock oscillation can be removed or reduced significantly, and the numerical residual settles down to lower values than for other WENO schemes and can reach machine zero for many test cases. This new procedure is also effective for higher order WENO schemes and for WENO schemes with different smoothness indicators.
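For readers unfamiliar with the building block involved, a minimal sketch of the fifth-order WENO reconstruction with the classical Jiang–Shu smoothness indicators (applied component-wise, without the characteristic decomposition discussed above) might look like:

```python
def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the left-biased interface value
    v_{i+1/2} from the five cell averages v = (v[i-2], ..., v[i+2])."""
    vm2, vm1, v0, vp1, vp2 = v
    # third-order candidate reconstructions on the three substencils
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators (large where the stencil crosses a shock)
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the ideal linear weights (1/10, 6/10, 3/10)
    a0 = 0.1 / (eps + b0)**2
    a1 = 0.6 / (eps + b1)**2
    a2 = 0.3 / (eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

On smooth data the three indicators agree and the weights revert to the ideal fifth-order linear combination; near a discontinuity the weight of the offending substencil collapses.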

3.
A distributed approach is described for solving lineality (or linearity) space (LS) problems with large cardinalities and a large number of dimensions. The LS solution has applications in engineering, science, and business, and includes a subset of solutions of the more general extended linear complementarity problem (ELCP). A parallel MATLAB framework is employed and results are computed on an 8-node Rocks based cluster computer using Remote Procedure Calls (RPCs) and the MPICH2 Message Passing Interface (MPI). Results show that both approaches perform comparably when solving distributed LS problems. This indicates that when deciding which parallel approach to use, the implementation details particular to the method are the decisive factors, which in this investigation give MPICH2 MPI the advantage.
Mario E. Caire

4.
Noise is present in many real-world continuous optimization problems. Stochastic search algorithms such as Evolution Strategies (ESs) have been proposed as effective search methods in such contexts. In this paper, we provide a mathematical analysis of the convergence of a (1+1)-ES on unimodal spherical objective functions in the presence of noise. We prove for a multiplicative noise model that, for a positive expected value of the noisy objective function, convergence or divergence happens depending on the infimum of the support of the noise. Moreover, we investigate convergence rates and show that log-linear convergence is preserved in the presence of noise. This result provides a strong theoretical foundation for the robustness of ESs with respect to noise.
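A minimal sketch of the algorithm being analyzed may help: a (1+1)-ES with 1/5th-success-rule step-size adaptation on a noisy sphere function, using a multiplicative noise model whose support has a positive infimum (the convergent regime of the abstract). All parameter values here are illustrative assumptions, not the paper's settings:

```python
import math
import random

def noisy_sphere(x, noise_level, rng):
    """Multiplicative noise: f(x) = ||x||^2 * N with N uniform on
    [1 - noise_level, 1 + noise_level]; the infimum of the support is > 0."""
    n = 1.0 + noise_level * (2.0 * rng.random() - 1.0)
    return sum(xi * xi for xi in x) * n

def one_plus_one_es(dim=5, iters=2000, noise_level=0.05, seed=1):
    """(1+1)-ES: one parent, one Gaussian-mutated offspring per iteration."""
    rng = random.Random(seed)
    x = [1.0] * dim
    sigma = 0.3
    fx = noisy_sphere(x, noise_level, rng)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = noisy_sphere(y, noise_level, rng)
        if fy <= fx:                 # accept the offspring
            x, fx = y, fy
            sigma *= 1.5             # 1/5th rule: grow step size on success,
        else:
            sigma *= 1.5 ** -0.25    # shrink slowly on failure
    return math.sqrt(sum(xi * xi for xi in x))
```

The returned distance to the optimum shrinks roughly log-linearly over the run, which is the behavior the paper proves is preserved under this noise model.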

5.
We present the first exact and robust implementation of the 3D Minkowski sum of two non-convex polyhedra. Our implementation decomposes the two polyhedra into convex pieces, performs pairwise Minkowski sums on the convex pieces, and constructs their union. We achieve exactness and the handling of all degeneracies by building upon 3D Nef polyhedra as provided by Cgal. The implementation also supports open and closed polyhedra. This allows the handling of degenerate scenarios like the tight passage problem in robot motion planning. The bottleneck of our approach is the union step. We address efficiency by optimizing this step by two means: we implement an efficient decomposition that yields a small number of convex pieces, and develop, test and optimize multiple strategies for uniting the partial sums by consecutive binary union operations. The decomposition that we implemented as part of the Minkowski sum is interesting in its own right. It is the first robust implementation of a decomposition of polyhedra into convex pieces that yields at most O(r²) pieces, where r is the number of edges whose adjacent facets comprise an angle of more than 180 degrees with respect to the interior of the polyhedron. This work was partially supported by the Netherlands’ Organisation for Scientific Research (NWO) under project no. 639.023.301.
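A 2D analogue of the convex-pairwise step conveys the idea: the Minkowski sum of two convex polygons can be computed in linear time by merging their edges by angle. The sketch below assumes counter-clockwise vertex lists and is a generic illustration, not Cgal's 3D Nef machinery:

```python
def minkowski_sum_convex(P, Q):
    """Minkowski sum of two convex polygons (CCW vertex lists) by edge
    merging: walk both boundaries, always taking the edge that turns less."""
    def reorder(poly):
        # start at the bottom-most (then left-most) vertex
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]
    P, Q = reorder(P), reorder(Q)
    P = P + [P[0], P[1]]   # sentinels so P[i+1] is always valid
    Q = Q + [Q[0], Q[1]]
    result = []
    i = j = 0
    while i < len(P) - 2 or j < len(Q) - 2:
        result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # cross product of the two candidate edges decides which to advance
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:
            i += 1
        if cross <= 0 and j < len(Q) - 2:
            j += 1
    return result
```

In 3D the analogous pairwise step is far more involved, which is why the paper's decomposition and union strategies matter.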

6.
Given an undirected multigraph G=(V,E), a family $\mathcal{W}Given an undirected multigraph G=(V,E), a family W\mathcal{W} of areas WV, and a target connectivity k≥1, we consider the problem of augmenting G by the smallest number of new edges so that the resulting graph has at least k edge-disjoint paths between v and W for every pair of a vertex vV and an area W ? WW\in \mathcal{W} . So far this problem was shown to be NP-complete in the case of k=1 and polynomially solvable in the case of k=2. In this paper, we show that the problem for k≥3 can be solved in O(m+n(k 3+n 2)(p+kn+nlog n)log k+pkn 3log (n/k)) time, where n=|V|, m=|{{u,v}|(u,v)∈E}|, and p=|W|p=|\mathcal{W}| .  相似文献   

7.
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize from the topological viewpoint the apparent contour of E, namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer. These two topological theorems allow us to find the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
Maurizio Paolini

8.
In this paper we continue the study, which was initiated in (Ben-Artzi et al. in Math. Model. Numer. Anal. 35(2):313–303, 2001; Fishelov et al. in Lecture Notes in Computer Science, vol. 2667, pp. 809–817, 2003; Ben-Artzi et al. in J. Comput. Phys. 205(2):640–664, 2005 and SIAM J. Numer. Anal. 44(5):1997–2024, 2006), of the numerical resolution of the pure streamfunction formulation of the time-dependent two-dimensional Navier-Stokes equation. Here we focus on enhancing our second-order scheme, introduced in the last three aforementioned articles, to fourth order accuracy. We construct fourth order approximations for the Laplacian, the biharmonic and the nonlinear convective operators. The scheme is compact (nine-point stencil) for the Laplacian and the biharmonic operators, which are both treated implicitly in the time-stepping scheme. The approximation of the convective term is compact in the no-leak boundary conditions case and is nearly compact (thirteen-point stencil) in the case of general boundary conditions. However, we stress that in no case was an unphysical boundary condition applied to our scheme. Numerical results demonstrate that fourth order accuracy is actually obtained for several test cases.
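For orientation, the nine-point compact Laplacian stencil in its classical Mehrstellen form, (1/(6h²))·[[1,4,1],[4,−20,4],[1,4,1]], can be applied as below. This is a generic illustration of a compact stencil, not the authors' full fourth-order streamfunction scheme; it attains fourth order for Poisson problems only with a corresponding right-hand-side correction:

```python
def compact_laplacian(u, h):
    """Apply the nine-point compact (Mehrstellen) Laplacian stencil to the
    interior of a 2D grid function u with uniform spacing h."""
    n, m = len(u), len(u[0])
    out = [[None] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (u[i-1][j-1] + u[i-1][j+1] + u[i+1][j-1] + u[i+1][j+1]
                         + 4.0 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
                         - 20.0 * u[i][j]) / (6.0 * h * h)
    return out
```

On u = x² + y², whose fourth derivatives vanish, the stencil reproduces the exact Laplacian value 4.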

9.
Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture, that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (or VMs), which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced, new VMs can be leased and later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise.
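A toy event-driven sketch of the VM-leasing idea (emphatically not the paper's EC2 simulator): with on-demand leasing, all tasks of a job start together (the gang property), and the peak number of simultaneously leased VMs is what drives cost. The job tuples are hypothetical:

```python
def simulate_gang(jobs):
    """Each job (arrival, vms, duration) leases `vms` machines at `arrival`
    and releases them all at arrival + duration (gang: tasks start together).
    Returns (makespan, peak_vms), the peak being a proxy for leasing cost."""
    events = []
    for arrival, vms, duration in jobs:
        events.append((arrival, vms))               # lease
        events.append((arrival + duration, -vms))   # release
    events.sort()  # at equal times, releases (negative delta) sort first
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    makespan = max(a + d for a, _, d in jobs)
    return makespan, peak
```

A scheduler that queues jobs instead of leasing freely would trade a longer makespan for a lower peak, which is exactly the performance/cost tension the paper evaluates.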

10.
Computational Anatomy aims at the study of variability in anatomical structures from images. Variability is encoded by the spatial transformations existing between anatomical images and a template selected as reference. In the absence of a more justified model for inter-subject variability, transformations are considered to belong to a convenient family of diffeomorphisms which provides a suitable mathematical setting for the analysis of anatomical variability. One of the proposed paradigms for diffeomorphic registration is the Large Deformation Diffeomorphic Metric Mapping (LDDMM). In this framework, transformations are characterized as end points of paths parameterized by time-varying flows of vector fields defined on the tangent space of a Riemannian manifold of diffeomorphisms and computed from the solution of the non-stationary transport equation associated to these flows. With this characterization, optimization in LDDMM is performed on the space of non-stationary vector field flows, resulting in a time- and memory-consuming algorithm. Recently, an alternative characterization of paths of diffeomorphisms based on constant-in-time flows of vector fields has been proposed in the literature. With this parameterization, diffeomorphisms constitute solutions of stationary ODEs. In this article, the stationary parameterization is included for diffeomorphic registration in the LDDMM framework. We formulate the variational problem related to this registration scenario and derive the associated Euler-Lagrange equations. Moreover, the performance of the non-stationary vs the stationary parameterizations in real and simulated 3D-MRI brain datasets is evaluated. Compared to the non-stationary parameterization, our proposal provides similar results in terms of image matching and local differences between the diffeomorphic transformations while drastically reducing memory and time requirements.
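One computational advantage of a stationary velocity field is that its flow can be exponentiated by scaling and squaring: take a small Euler step, then repeatedly compose the map with itself. A one-dimensional sketch of the principle (illustrative only; real registration codes apply this to discretized vector fields with interpolation):

```python
import math

def exp_flow(v, n=15):
    """Time-1 flow of the stationary ODE x'(t) = v(x(t)) by scaling and
    squaring: phi ~ (id + v/2^n), squared (self-composed) n times,
    giving 2^n Euler steps at the cost of n compositions."""
    def step(x):
        return x + v(x) / (2 ** n)
    phi = step
    for _ in range(n):
        phi = (lambda f: (lambda x: f(f(x))))(phi)   # phi <- phi o phi
    return phi
```

For v(x) = x the exact time-1 map is x·e, which the sketch reproduces to a few parts in 10⁵ with n = 15; a non-stationary flow would instead require storing and integrating a whole time series of fields.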

11.
In recent years we have seen a tremendous growth in the amount of freely available 3D content, in part due to breakthroughs for 3D model design and acquisition. For example, advances in range sensor technology and design software have dramatically reduced the manual labor required to construct 3D models. As collections of 3D content continue to grow rapidly, the ability to perform fast and accurate retrieval from a database of models has become a necessity. At the core of this retrieval task is the fundamental challenge of defining and evaluating similarity between 3D shapes. Some effective methods dealing with this challenge consider similarity measures based on the visual appearance of models. While collections of rendered images are discriminative for retrieval tasks, such representations come with a few inherent limitations such as restrictions in the image viewpoint sampling and high computational costs. In this paper we present a novel algorithm for model similarity that addresses these issues. Our proposed method exploits techniques from spherical signal processing to efficiently evaluate a visual similarity measure between models. Extensive evaluations on multiple datasets are provided.

12.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics 86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n⁵) time while the PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002) takes O(n¹¹) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case. Part of this work was done while Z.-Z. Chen was visiting City University of Hong Kong.

13.
We present a technique for analyzing the number of cache misses incurred by multithreaded cache oblivious algorithms on an idealized parallel machine in which each processor has a private cache. We specialize this technique to computations executed by the Cilk work-stealing scheduler on a machine with dag-consistent shared memory. We show that a multithreaded cache oblivious matrix multiplication incurs cache misses when executed by the Cilk scheduler on a machine with P processors, each with a cache of size Z, with high probability. This bound is tighter than previously published bounds. We also present a new multithreaded cache oblivious algorithm for 1D stencil computations incurring cache misses with high probability, one for Gaussian elimination and back substitution, and one for the length computation part of the longest common subsequence problem incurring cache misses with high probability. This work was supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract No. NBCH30390004.
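The divide-and-conquer structure that makes matrix multiplication cache oblivious is easy to sketch (a generic serial illustration, not the Cilk implementation analyzed in the paper): the recursion halves the problem until submatrices fit in any given cache of size Z, without the algorithm ever knowing Z:

```python
def co_matmul(A, B):
    """Cache-oblivious matrix multiply for n x n matrices, n a power of two:
    recursively split each matrix into four quadrants so that some recursion
    level fits every level of the memory hierarchy."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]

    def rec(ai, aj, bi, bj, ci, cj, m):
        if m == 1:
            C[ci][cj] += A[ai][aj] * B[bi][bj]
            return
        h = m // 2
        # eight recursive quadrant products: C_ij += A_ik * B_kj
        for di in (0, h):
            for dj in (0, h):
                for dk in (0, h):
                    rec(ai + di, aj + dk, bi + dk, bj + dj, ci + di, cj + dj, h)

    rec(0, 0, 0, 0, 0, 0, n)
    return C
```

In the multithreaded setting the paper analyzes, independent quadrant products are spawned as parallel tasks and the work-stealing scheduler determines which caches actually service each subproblem.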

14.
We consider initial value problems for semilinear parabolic equations, which possess a dispersive term, nonlocal in general. This dispersive term is not necessarily dominated by the dissipative term. In our numerical schemes, the time discretization is done by linearly implicit schemes. More specifically, we discretize the initial value problem by the implicit–explicit Euler scheme and by the two-step implicit–explicit BDF scheme. In this work, we extend the results in Akrivis et al. (Math. Comput. 67:457–477, 1998; Numer. Math. 82:521–541, 1999), where the dispersive term (if present) was dominated by the dissipative one and was integrated explicitly. We also derive optimal order error estimates. We provide various physically relevant applications of dispersive–dissipative equations and systems fitting in our abstract framework.
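The flavor of a linearly implicit (implicit–explicit) Euler step can be seen on a scalar model problem u' = −a·u + b·u, taking the stiff dissipative part implicitly and the remainder explicitly. This is a toy sketch of the time-stepping idea only, not the paper's abstract framework:

```python
import math

def imex_euler(a, b, u0, T, n):
    """IMEX Euler for u'(t) = -a*u + b*u over [0, T] with n steps:
    (u_{m+1} - u_m)/k = -a*u_{m+1} + b*u_m, one linear solve per step."""
    k = T / n
    u = u0
    for _ in range(n):
        u = (u + k * b * u) / (1.0 + k * a)
    return u
```

Because only the dissipative part is implicit, each step costs a single linear solve, yet the scheme remains stable for stiff a where a fully explicit step would blow up.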

15.
In this paper, the Minimum Polynomial Extrapolation method (MPE) is used to accelerate the convergence of the Characteristic–Based–Split (CBS) scheme for the numerical solution of steady state incompressible flows with heat transfer. The CBS scheme is a fractional step method for the solution of the Navier–Stokes equations, while the MPE method is a vector extrapolation method which transforms the original sequence into another sequence converging to the same limit faster than the original one, without explicit knowledge of the sequence generator. The developed algorithm is tested on a two-dimensional benchmark problem (a buoyancy–driven convection problem) where the Navier–Stokes equations are coupled with the temperature equation. The obtained results show the benefit of applying the extrapolation procedure to the CBS scheme and the resulting reduction in the computational time of the simulation.
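For scalar sequences at the lowest order, MPE reduces to Aitken's Δ² process, which conveys the flavor of such extrapolation: the dominant geometric error term is eliminated using only terms of the sequence, with no knowledge of the generator. A minimal sketch (the full vector MPE instead solves a small least-squares problem over several difference vectors):

```python
def aitken(seq):
    """Aitken's delta-squared process: if x_n -> s with x_n - s ~ c*r^n,
    then s ~ x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2*x_{n+1} + x_n)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        d = x2 - 2.0 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / d if d != 0.0 else x2)
    return out
```

For a linearly convergent fixed-point iteration the extrapolated values land on the limit immediately, which is the kind of acceleration the paper exploits for the CBS iteration.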

16.
The long-term dynamic behavior of many dynamical systems evolves on a low-dimensional, attracting, invariant slow manifold, which can be parameterized by only a few variables (“observables”). The explicit derivation of such a slow manifold (and thus, the reduction of the long-term system dynamics) is often extremely difficult or practically impossible. For this class of problems, the equation-free framework has been developed to enable performing coarse-grained computations, based on short full model simulations. Each full model simulation should be initialized so that the full model state is consistent with the values of the observables and close to the slow manifold. To compute such an initial full model state, a class of constrained runs functional iterations was proposed (Gear and Kevrekidis, J. Sci. Comput. 25(1), 17–28, 2005; Gear et al., SIAM J. Appl. Dyn. Syst. 4(3), 711–732, 2005). The schemes in this class only use the full model simulator and converge, under certain conditions, to an approximation of the desired state on the slow manifold. In this article, we develop an implementation of the constrained runs scheme that is based on a (preconditioned) Newton-Krylov method rather than on a simple functional iteration. The functional iteration and the Newton-Krylov method are compared in detail using a lattice Boltzmann model for one-dimensional reaction-diffusion as the full model simulator. Depending on the parameters of the lattice Boltzmann model, the functional iteration may converge slowly or even diverge. We show that both issues are largely resolved by using the Newton-Krylov method, especially when a coarse grid correction preconditioner is incorporated.
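The contrast between a functional iteration and a Newton-type method shows up already in the scalar case. In the sketch below the derivative is approximated by a finite difference, the scalar analogue of the matrix-free directional derivatives used in Newton–Krylov methods; this is a generic illustration, not the constrained-runs scheme itself:

```python
import math

def fixed_point(g, x0, iters):
    """Plain functional iteration x <- g(x); converges linearly at best,
    and diverges when |g'| > 1 at the fixed point."""
    x = x0
    for _ in range(iters):
        x = g(x)
    return x

def newton_fd(f, x0, iters, h=1e-7):
    """Newton's method with a finite-difference derivative (matrix-free in
    spirit): x <- x - f(x) / ((f(x + h) - f(x)) / h)."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        dfx = (f(x + h) - fx) / h
        x = x - fx / dfx
    return x
```

On f(x) = cos(x) − x, Newton reaches the root in a handful of iterations, while the functional iteration x ← cos(x) needs dozens; when the iteration map is not a contraction at all, only the Newton variant survives, which mirrors the paper's findings.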

17.
This paper presents the design and implementation of a new comparative analytical framework for studying the usability of modern high breakdown robust estimators. The emphasis is on finding the intrinsic limits, in terms of size and relative spatial accuracy, of such techniques in solving the emerging challenges of the segmentation of fine structures. A minimum threshold for the distance between separable structures is shown to depend mainly on the scale estimation error. A scale invariant performance measure is introduced to quantify the finite sample bias of the scale estimate of a robust estimator, and the measure is evaluated for some state-of-the-art high breakdown robust estimators using datasets containing at least two close but distinct structures with varying distances and inlier ratios. The results show that the new generation of density-based robust estimators (such as the pbM-estimator and TSSE) perform more poorly on datasets containing only a small number of samples in each structure than estimators based on direct processing of the residuals (such as MSSE). An important message of this paper is that an estimator that performs best in some circumstances may not be competitive in others, particularly when performance on relatively large and/or well-separated structures is compared with performance on closely spaced fine structures.

18.
A system reduction scheme is devised related to a multibody formulation from which the dynamic response of a wind turbine is determined. In this formulation, each substructure is described in its own frame of reference, which is moving freely in the vicinity of the moving substructure. The Ritz bases spanning the reduced system comprise rigid-body modes and some dynamic low-frequency elastic eigenmodes compatible with the kinematic constraints of the related substructure. The high-frequency elastic modes are presumed to cause merely quasi-static displacements, and thus are included in the expansion via a quasi-static correction. The results show that with the derived reduction scheme only 2 dynamic modes are necessary for the blade substructure when the remaining modes are treated as quasi-static. Moreover, it is shown that it has little to no effect whether the gyroscopic stiffness matrix in a stopped situation or under nominal operational conditions is used to derive the functional basis of the modal expansion.

19.
Changing basic conditions in the field of logistics require intuitive planning methods as well as new ways of supporting and training the operative staff. Virtual and augmented reality (VR and AR) show great promise here.
Rupert Reif, http://www.fml.mw.tum.de

20.
The reconstruction of geometry or, in particular, the shape of objects is a common issue in image analysis. Starting from a variational formulation of such a problem on a shape manifold, we introduce a regularization technique incorporating statistical shape knowledge. The key idea is to consider a Riemannian metric on the shape manifold which reflects the statistics of a given training set. We investigate the properties of the regularization functional and illustrate our technique by applying it to region-based and edge-based segmentation of image data. In contrast to previous works, our framework can be considered on arbitrary (finite-dimensional) shape manifolds and allows the use of Riemannian metrics for regularization of a wide class of variational problems in image processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号