Similar Literature
20 similar documents found (query time: 17 ms)
1.
This paper presents a new mesh optimization approach that aims to improve mesh quality on the boundary. The existing mesh untangling and smoothing algorithms (Vachal et al. in J Comput Phys 196:627–644, 2004; Knupp in J Numer Methods Eng 48:1165–1185, 2002), which have been proved to work well for interior mesh optimization, are enhanced by adding constraints from surface and curve shape functions that approximate the boundary geometry of the finite element mesh. The enhanced constrained optimization guarantees that the boundary nodes being optimized always move on the approximated boundary. A dual-grid hexahedral meshing method is used to generate sample meshes for testing the proposed mesh optimization approach. As complementary treatments, appropriate mesh topology modifications, including buffering element insertion and local mesh refinement, are performed to eliminate concave and distorted elements on the boundary. Finally, optimization results for several examples demonstrate the effectiveness of the proposed approach.

2.
We present a skeleton-driven image deformation method based on the MLS deformation algorithm (Schaefer et al. in SIGGRAPH, vol. 25, pp. 533–540, 2006). We improve the MLS deformation by defining a new skeleton-based weight function. Unlike a weight function based on control points, ours benefits from the shape information of the undeformed object and keeps the deformation local, so our method achieves a more realistic effect. For cartoon video, we propose a new method that tracks the skeleton through the video, builds a new origin skeleton and a new target skeleton on each frame, and applies our image deformation to each frame while maintaining spatiotemporal consistency. Results demonstrate that our method decreases squeezing artifacts and uses fewer control points.
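For context, the control-point MLS deformation that this abstract builds on can be sketched as follows: a minimal 2D affine MLS deformation (per Schaefer et al.) with the usual control-point weights w_i = 1/|p_i − v|^(2α). This is a hypothetical illustration; the paper's skeleton-based weight function is not reproduced here.

```python
def mls_affine_deform(v, src, dst, alpha=1.0):
    """Affine moving-least-squares deformation of 2D point v, given
    control points src and their displaced positions dst.
    Weights: w_i = 1 / |p_i - v|^(2*alpha)."""
    ws = []
    for p in src:
        d2 = (p[0] - v[0]) ** 2 + (p[1] - v[1]) ** 2
        if d2 == 0.0:                      # v coincides with a control point
            return dst[src.index(p)]
        ws.append(1.0 / d2 ** alpha)
    W = sum(ws)
    # weighted centroids p*, q*
    pstar = (sum(w * p[0] for w, p in zip(ws, src)) / W,
             sum(w * p[1] for w, p in zip(ws, src)) / W)
    qstar = (sum(w * q[0] for w, q in zip(ws, dst)) / W,
             sum(w * q[1] for w, q in zip(ws, dst)) / W)
    # 2x2 moment matrices A = sum w p̂ᵀp̂ and B = sum w p̂ᵀq̂
    A = [[0.0, 0.0], [0.0, 0.0]]
    B = [[0.0, 0.0], [0.0, 0.0]]
    for w, p, q in zip(ws, src, dst):
        ph = (p[0] - pstar[0], p[1] - pstar[1])
        qh = (q[0] - qstar[0], q[1] - qstar[1])
        for i in range(2):
            for j in range(2):
                A[i][j] += w * ph[i] * ph[j]
                B[i][j] += w * ph[i] * qh[j]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    M = [[sum(Ainv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # f(v) = (v - p*) M + q*
    vh = (v[0] - pstar[0], v[1] - pstar[1])
    return (vh[0] * M[0][0] + vh[1] * M[1][0] + qstar[0],
            vh[0] * M[0][1] + vh[1] * M[1][1] + qstar[1])
```

Because the fit is affine, the deformation reproduces pure translations and other affine maps of the control points exactly.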

3.
In this paper, a new shape modeling approach is presented that enables direct Boolean intersection between acquired and designed geometry without model conversion. At its core is a new method for direct intersection and Boolean operations between designed geometry (objects bounded by NURBS and polygonal surfaces) and scanned geometry (objects represented by point-cloud data). We use the moving least-squares (MLS) surface as the underlying surface representation for acquired point-sampled geometry. Based on the MLS surface definition, we derive a closed-form formula for computing the curvature of planar curves on the MLS surface. A set of intersection algorithms is successively developed, including line/MLS-surface intersection, curvature-adaptive plane/MLS-surface intersection, and polygonal-mesh/MLS-surface intersection. Further, an algorithm for NURBS/MLS-surface intersection is developed: it first adaptively subdivides the NURBS surface into a polygonal mesh, then intersects that mesh with the MLS surface; the intersection points are mapped back to the NURBS surface through the Gauss-Newton method. Based on these algorithms, a prototype system has been implemented. Through various examples from the system, we demonstrate that direct Boolean intersection between designed and acquired geometry offers a useful and effective means for shape modeling applications involving point-cloud data.

4.
In this paper, we present a segmentation algorithm which partitions a mesh based on the premise that a 3D object consists of a core body and its constituent protrusible parts. Our approach is based on prominent feature extraction and core approximation and segments the mesh into perceptually meaningful components. Based upon the aforementioned premise, we present a methodology to compute the prominent features of the mesh, to approximate the core of the mesh and finally to trace the partitioning boundaries which will be further refined using a minimum cut algorithm. Although the proposed methodology is aligned with a general framework introduced by Lin et al. (IEEE Trans. Multimedia 9(1):46–57, 2007), new approaches have been introduced for the implementation of distinct stages of the framework leading to improved efficiency and robustness. The evaluation of the proposed algorithm is addressed in a consistent framework wherein a comparison with the state of the art is performed.

5.
This paper presents a second-order accurate adaptive Godunov method for two-dimensional (2D) compressible multicomponent flows, extending the adaptive moving mesh method of Tang et al. (SIAM J. Numer. Anal. 41:487–515, 2003) from structured quadrangular meshes to unstructured triangular meshes. The algorithm solves the governing equations of 2D multicomponent flows with a fully conservative, second-order accurate Godunov scheme, and the finite-volume approximations of the mesh equations with a relaxed Jacobi-type iteration. A geometry-based conservative interpolation remaps the solutions from the old mesh to the newly resulting mesh, and a simple slope limiter and a new monitor function are chosen to obtain oscillation-free solutions and to automatically track and resolve both small local and large solution gradients. Several numerical experiments demonstrate the robustness and efficiency of the proposed method: a quasi-2D Riemann problem, the double-Mach reflection problem, the forward-facing step problem, and two shock-wave/bubble interaction problems.
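The abstract does not name the slope limiter; minmod is a common choice for oscillation-free second-order reconstructions and is sketched below purely as an illustration (not necessarily the limiter this paper uses).

```python
def minmod(a, b):
    """Minmod slope limiter: return the smaller-magnitude slope when the
    signs agree, else zero (this suppresses oscillations at extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited slopes for cell averages u on a uniform 1D grid,
    comparing the backward and forward differences in each interior cell."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]
```

At a local maximum the two one-sided differences have opposite signs, so the limited slope is zero and no new extremum is created by the reconstruction.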

6.
The moving-window discrete Fourier transform (MWDFT) is a dynamic spectrum analysis in which each analysis interval differs from the previous one by including the next signal sample and excluding the oldest one (Dillard in IEEE Trans Inform Theory 13:2–6, 1967; Comput Elect Eng 1:143–152, 1973; USA Patent 4023028, May 10, 1977). Such spectrum analysis is needed for time-frequency localization of signals with given peculiarities (Tolimieri and An in Time-Frequency Representations, Birkhauser, Basel, 1998). Using the well-known fast Fourier transform (FFT) for this purpose is inefficient; recursive algorithms that use only one complex multiplication to update one spectrum sample per analysis interval are more effective. The author improves one such algorithm so that a single complex multiplication updates two, four, or even eight (for complex signals) spectrum samples simultaneously. Problems of realization and application of the MWDFT are also considered in the paper.
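The recursive update behind the MWDFT can be sketched as a sliding DFT: drop the oldest sample, add the newest, and rotate by one twiddle factor, so each bin costs one complex multiplication per step. A minimal illustration (function names are our own, not from the paper):

```python
import cmath

def direct_dft_bin(window, k):
    """Direct evaluation of DFT bin k of one window, for reference."""
    N = len(window)
    return sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
               for n, x in enumerate(window))

def sliding_dft(signal, N, k):
    """Yield bin k of the length-N DFT for each window position.
    Each update uses a single complex multiplication:
        X <- (X + x[n] - x[n-N]) * exp(2*pi*j*k/N)."""
    twiddle = cmath.exp(2j * cmath.pi * k / N)
    X = direct_dft_bin(signal[:N], k)   # initialize on the first window
    yield X
    for n in range(N, len(signal)):
        X = (X + signal[n] - signal[n - N]) * twiddle
        yield X
```

The identity behind the update: subtracting the outgoing sample and adding the incoming one gives the sum over the shifted window, and the multiplication by the twiddle factor re-indexes it to the standard DFT phase convention.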

7.
A new mesh optimization framework for 3D triangular surface meshes is presented, which formulates the task as an energy minimization problem in the same spirit as Hoppe et al. (SIGGRAPH '93: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 1993). The desired mesh properties are controlled through a global energy function including data-attached terms measuring fidelity to the original mesh, shape potentials favoring high-quality triangles, and connectivity as well as budget terms controlling the sampling density. The optimization algorithm modifies the mesh connectivity as well as the vertex positions. Solutions for the vertex repositioning step are obtained by a discrete graph cut algorithm examining global combinations of local candidates.

8.
The error estimates of automatic integration by pure floating-point arithmetic are intrinsically embedded with uncertainty, which in critical cases can make the computation problematic. To avoid this problem, we use product rules to implement a self-validating subroutine for bivariate cubature over rectangular regions. Unlike previous self-validating integrators for multiple variables (Storck in Scientific Computing with Automatic Result Verification, pp. 187–224, Academic Press, San Diego, 1993; Wolfe in Appl. Math. Comput. 96:145–159, 1998), which use derivatives of specific higher orders for the error estimates, we extend the ideas for univariate quadrature investigated in Chen (Computing 78(1):81–99, 2006) to bivariate cubature, enabling locally adaptive error estimates through full use of the Peano kernel theorem. A mechanism for actively recognizing unreachable error bounds is also set up. We demonstrate the effectiveness of our approach by comparing it with a conventional integrator.
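As background, a product rule applies a one-dimensional quadrature rule along each axis of the rectangle. The sketch below uses a two-point Gauss-Legendre rule in each variable, in plain floating point, without the paper's self-validating error-bound machinery:

```python
def product_cubature(f, ax, bx, ay, by):
    """Product of two-point Gauss-Legendre rules over [ax,bx] x [ay,by].
    Exact for polynomials of degree up to 3 in each variable."""
    g = 3 ** -0.5                 # Gauss-Legendre nodes on [-1, 1]
    nodes = [-g, g]               # both weights are 1 on [-1, 1]

    def mapto(a, b, t):
        """Map a node t in [-1, 1] to the interval [a, b]."""
        return 0.5 * (b - a) * t + 0.5 * (a + b)

    s = sum(f(mapto(ax, bx, tx), mapto(ay, by, ty))
            for tx in nodes for ty in nodes)
    # Jacobian factor (bx-ax)/2 * (by-ay)/2 from the two interval maps
    return 0.25 * (bx - ax) * (by - ay) * s
```

A self-validating version would replace this floating-point evaluation with interval arithmetic and add a rigorous remainder term; the product structure itself stays the same.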

9.
We propose and analyze a nonparametric region-based active contour model for segmenting cluttered scenes. The proposed model is unsupervised and assumes pixel intensities are independently and identically distributed. The energy functional consists of a geometric regularization term that penalizes the length of the partition boundaries and a region-based image term that uses histograms of pixel intensity to distinguish regions. More specifically, the region data term encourages segmentations in which the local histograms within each region are approximately homogeneous. An advantage of using local histograms in the data term is that histogram differentiation is not required to solve the energy minimization problem. We use the Wasserstein distance with exponent 1 to measure the dissimilarity between two histograms. The Wasserstein distance is a metric and faithfully measures the distance between two histograms, compared to many pointwise distances; moreover, it is insensitive to oscillations, making our model robust to noise. A fast global minimization method based on (Chan et al. in SIAM J. Appl. Math. 66(5):1632–1648, 2006; Bresson et al. in J. Math. Imaging Vis. 28(2):151–167, 2007) is employed to solve the proposed model. The advantages of this method are twofold: first, its computational time is less than that of gradient descent on the associated Euler-Lagrange equation (Chan et al. in Proc. of SSVM, pp. 697–708, 2007); second, it finds a global minimizer. Finally, we propose a variant of our model that properly segments a cluttered scene with local illumination changes. This research is supported by ONR grant N00014-09-1-0105 and NSF grant DMS-0610079.
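For 1D histograms on a common set of bins with equal total mass, the Wasserstein distance with exponent 1 reduces to the L1 distance between the cumulative distributions, which is cheap to evaluate and needs no histogram differentiation. A minimal sketch:

```python
def wasserstein1(hist_p, hist_q):
    """W1 distance between two 1D histograms defined on the same bins
    (normalized internally so both carry unit mass). For 1D distributions
    this equals the L1 distance between the two CDFs, with unit bin width."""
    assert len(hist_p) == len(hist_q)
    total_p, total_q = sum(hist_p), sum(hist_q)
    cdf_p = cdf_q = 0.0
    dist = 0.0
    for p, q in zip(hist_p, hist_q):
        cdf_p += p / total_p
        cdf_q += q / total_q
        dist += abs(cdf_p - cdf_q)   # accumulate |F_p(x) - F_q(x)| per bin
    return dist
```

Shifting a unit spike by two bins yields distance 2, matching the "earth mover" intuition of transporting one unit of mass a distance of two bins; a pointwise distance such as L1 on the raw histograms would not see how far the mass moved.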

10.
Transaction-level modeling is used in hardware design to describe designs at a higher level than the register-transfer level (RTL) (e.g., Cai and Gajski in CODES+ISSS '03: Proceedings of the 1st IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis, pp. 19–24, 2003; Chen et al. in FMCAD '07: Proceedings of the Formal Methods in Computer Aided Design, pp. 53–61, 2007; Mahajan et al. in MEMOCODE '07: Proceedings of the 5th IEEE/ACM International Conference on Formal Methods and Models for Codesign, pp. 123–132, 2007; Swan in DAC '06: Proceedings of the 43rd Annual Conference on Design Automation, pp. 90–92, 2006). Each transaction represents a unit of work, which is also a useful unit for design verification. In such models, many properties of interest involve interactions between multiple transactions; examples are ordering relationships in sequential processing and hazard checking in pipelined circuits. Writing such properties at the RTL requires significant expertise in understanding the higher-level computation performed by a given RTL design, and possibly instrumentation of the RTL to express the property of interest. This is a barrier to the easy use of such properties in RTL designs.

11.
In this paper, we present an extensive experimental comparison of existing similarity metrics addressing the quality assessment problem of mesh segmentation. We introduce a new metric, named the 3D Normalized Probabilistic Rand Index (3D-NPRI), which outperforms the others in terms of properties and discriminative power. This comparative study includes a subjective experiment with human observers and is based on a corpus of manually segmented models. This corpus is an improved version of our previous one (Benhabiles et al. in IEEE International Conference on Shape Modeling and Applications (SMI), 2009). It is composed of a set of 3D mesh models grouped in different classes, each associated with several manual ground-truth segmentations. Finally, the 3D-NPRI is applied to evaluate six recent segmentation algorithms using our corpus and the corpus of Chen et al. (ACM Trans. Graph. (SIGGRAPH) 28(3), 2009).
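As background, the probabilistic and normalized variants discussed here build on the plain Rand index, which counts the pairs of elements on whose grouping two segmentations agree. A minimal sketch over per-element labels (not the full 3D-NPRI, which additionally averages over multiple ground truths and normalizes):

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Plain Rand index between two segmentations of the same elements,
    each given as a list of per-element labels. For every unordered pair
    of elements, the two segmentations agree if both place the pair in
    the same segment or both place it in different segments."""
    assert len(seg_a) == len(seg_b)
    agree = total = 0
    for i, j in combinations(range(len(seg_a)), 2):
        same_a = seg_a[i] == seg_a[j]
        same_b = seg_b[i] == seg_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total
```

Identical segmentations score 1.0; the probabilistic variant replaces the hard agreement test with the empirical probability, over several ground-truth segmentations, that a pair belongs together.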

12.
In this paper, we consider the distributed scheduling problem for channel access in TDMA wireless mesh networks. The problem is to assign time-slots to nodes so that every node can communicate with all its one-hop neighbors within its assigned time-slots, with the objective of minimizing the cycle length, i.e., the total number of distinct time-slots in one scheduling cycle. In single-channel ad hoc networks, the best known bounds for this problem are K² in arbitrary graphs (Chlamtac and Pinter in IEEE Trans. Comput. C-36(6):729–737, 1987) and 25K in unit disk graphs, where K is the maximum node degree. Wireless mesh networks have multiple channels, and different nodes can use different control channels to reduce congestion on the control channels. We consider two scheduling models. In the first, each node has two radios and scheduling is done on both radios simultaneously; we prove that the cycle length in arbitrary graphs is at most 2K. In the second, time-slots are scheduled for the nodes regardless of the number of radios on them; here we prove an upper bound of (4K−2). We also propose greedy algorithms with different criteria. The basic idea of these algorithms is to order conflicting nodes by specific criteria, such as node identifier, node degree, or the number of conflicting neighbors; a node is not assigned time-slots until all neighboring nodes with higher criterion values that might conflict with it have already been assigned. All these algorithms are fully distributed and easy to implement. Simulations verify the performance of these algorithms.
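The greedy rule described above (order conflicting nodes by a criterion, then give each node the smallest slot unused by its already-scheduled conflicting neighbors) can be sketched as follows. The paper's algorithms are fully distributed; this sequential sketch only illustrates the assignment rule, and the names are our own.

```python
def greedy_slot_assignment(conflicts, criterion):
    """Assign each node the smallest time-slot not used by any conflicting
    neighbor, processing nodes in decreasing order of the given criterion
    (e.g. node degree or identifier). `conflicts` maps each node to the
    list of nodes it conflicts with."""
    slots = {}
    for node in sorted(conflicts, key=criterion, reverse=True):
        used = {slots[nb] for nb in conflicts[node] if nb in slots}
        slot = 0
        while slot in used:          # smallest slot free of conflicts
            slot += 1
        slots[node] = slot
    return slots
```

Because each node only avoids the slots of its conflicting neighbors, the number of slots used is at most one more than the maximum number of conflicts per node, which is the standard greedy-coloring bound.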

13.
We study the on-line minimum weighted bipartite matching problem in arbitrary metric spaces. Here, n not necessarily distinct points of a metric space M are given, and are to be matched on-line with n points of M revealed one by one. The cost of a matching is the sum of the distances of the matched points, and the goal is to find or approximate its minimum. The competitive ratio of the deterministic problem is known to be Θ(n); see (Kalyanasundaram, B., Pruhs, K. in J. Algorithms 14(3):478–488, 1993) and (Khuller, S., et al. in Theor. Comput. Sci. 127(2):255–267, 1994). It was conjectured in (Kalyanasundaram, B., Pruhs, K. in Lecture Notes in Computer Science, vol. 1442, pp. 268–280, 1998) that a randomized algorithm may perform better against an oblivious adversary, namely with an expected competitive ratio Θ(log n). We prove a slightly weaker result by showing an O(log³ n) upper bound on the expected competitive ratio. As an application, the same upper bound holds for the notoriously hard fire station problem, where M is the real line; see (Fuchs, B., et al. in Electronic Notes in Discrete Mathematics, vol. 13, 2003) and (Koutsoupias, E., Nanavati, A. in Lecture Notes in Computer Science, vol. 2909, pp. 179–191, 2004). The authors were partially supported by OTKA grants T034475 and T049398.
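For intuition, even the natural deterministic strategy of matching each arriving point to its nearest free server can be driven far from the offline optimum, which is why the competitive ratio question is nontrivial. A small sketch on the real line (this is a generic heuristic, not the paper's randomized algorithm):

```python
def greedy_online_matching(servers, requests):
    """Match each arriving request to the nearest still-unmatched server
    on the real line. Returns (total cost, list of (request, server))."""
    free = list(servers)
    matching = []
    cost = 0.0
    for r in requests:
        s = min(free, key=lambda s: abs(s - r))  # nearest free server
        free.remove(s)
        cost += abs(s - r)
        matching.append((r, s))
    return cost, matching
```

With servers at 0 and 2 and requests arriving at 1 then 0, greedy pays 1 + 2 = 3 while the offline optimum (1→2, 0→0) pays 1; chaining such configurations is the standard way to force a large deterministic competitive ratio.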

14.
15.
In this paper, we propose a tailored-finite-point method for a class of singular perturbation problems in unbounded domains. First, we use the artificial boundary method (Han in Frontiers and Prospects of Contemporary Applied Mathematics, 2005) to reduce the original problem to one on a bounded computational domain. We then propose a new approach to construct a discrete scheme for the reduced problem, in which the finite-point method is tailored to particular properties or solutions of the problem. The numerical results show that our new method achieves very high accuracy on a very coarse mesh even for very small ε; in contrast, the traditional finite element method does not produce satisfactory results on the same mesh. Han was supported by the NSFC Project No. 10471073. Z. Huang was supported by NSFC Projects No. 10301017 and 10676017 and by the National Basic Research Program of China under grant 2005CB321701. R.B. Kellogg was supported by the Boole Centre for Research in Informatics at National University of Ireland, Cork and by Science Foundation Ireland under the Basic Research Grant Programme 2004 (Grants 04/BR/M0055, 04/BR/M0055s1).

16.
We present an improved technique for data hiding in polygonal meshes, based on the work of Bogomjakov et al. (Comput. Graph. Forum 27(2):637–642, 2008). Like their method, we use an arrangement of primitives relative to a reference ordering to embed a message. But instead of directly interpreting the index of a primitive in the reference ordering as the encoded/decoded bits, we slightly modify the mapping so that the chance of encoding an additional bit is doubled compared to the original scheme. We illustrate the inefficiency in the original mapping with an intuitive representation using a binary tree.
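The underlying idea of embedding bits in an ordering of primitives can be sketched as follows: with m primitives still unplaced, choosing which one comes next from the reference ordering encodes ⌊log₂ m⌋ bits. This illustrates the baseline direct-index mapping, not the paper's improved variant; function names are our own.

```python
from math import floor, log2

def embed(reference, bits):
    """Permute `reference` so that each choice of the next primitive
    encodes floor(log2(remaining)) bits of the message."""
    pool = list(reference)
    out, i = [], 0
    while pool:
        nbits = floor(log2(len(pool))) if len(pool) > 1 else 0
        value = 0
        for b in bits[i:i + nbits]:       # read the next nbits as an index
            value = value * 2 + b
        i += nbits
        out.append(pool.pop(value))       # value < 2**nbits <= len(pool)
    return out

def extract(reference, permuted):
    """Recover the embedded bits from the permuted arrangement."""
    pool = list(reference)
    bits = []
    for prim in permuted:
        nbits = floor(log2(len(pool))) if len(pool) > 1 else 0
        idx = pool.index(prim)
        if nbits:
            bits.extend(int(c) for c in format(idx, f"0{nbits}b"))
        pool.pop(idx)
    return bits
```

Note the inefficiency the paper targets: with m not a power of two, only ⌊log₂ m⌋ bits are encoded although there are m choices, so part of the ordering's capacity is wasted.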

17.
Scent has been well documented as having significant effects on emotion (Alaoui-Ismaili in Physiol Behav 62(4):713–720, 1997; Herz et al. in Motiv Emot 28(4):363–383, 2004), learning (Smith et al. in Percept Mot Skills 74(2):339–343, 1992; Morgan in Percept Mot Skills 83(3)(2):1227–1234, 1996), memory (Herz in Am J Psychol 110(4):489–505, 1997) and task performance (Barker et al. in Percept Mot Skills 97(3)(1):1007–1010, 2003). This paper describes an experiment in which environmentally appropriate scent was presented as an additional sensory modality consistent with other aspects of a virtual environment called DarkCon. Subjects' game-play habits were recorded as an additional factor for analysis. Subjects were randomly assigned to receive scent during the VE and/or afterward during a task of recall of the environment. It was hypothesized that scent presentation during the VE would significantly improve recall, and that subjects who were presented with scent during the recall task, in addition to experiencing the scented VE, would perform best on the recall task. Skin conductance was a significant predictor of recall, over and above experimental group. Finally, it was hypothesized that subjects' game-play habits would affect both their behavior in and recall of the environment. The results are encouraging for the use of scent in virtual environments, and directions for future research are discussed. The project described herein has been sponsored by the US Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the US Government; no official endorsement should be inferred.

18.
We propose a method that uses finite model builders to construct infinite models of first-order formulae. The constructed models are Herbrand interpretations in which the interpretation of the predicate symbols is specified by tree tuple automata (Comon et al. 1997). Our approach is based on formula transformation: a formula ϕ is transformed into a formula Δ(ϕ) such that ϕ has a model representable by a tree tuple automaton iff Δ(ϕ) has a finite model. This paper is an extended version of Peltier (2008).

19.
In the field of design of computer experiments (DoCE), Latin hypercube designs are frequently used for the approximation and optimization of black boxes. In certain situations, a special type of design is needed, consisting of two separate designs, one being a subset of the other. These nested designs can be used for training and test sets, models with different levels of accuracy, linking parameters, and sequential evaluations. In this paper, we construct nested maximin Latin hypercube designs for up to ten dimensions. We show that different types of grids should be considered when constructing nested designs and discuss how to determine which grid to use for a specific application. To determine nested maximin designs for dimensions higher than two, four variants of the ESE algorithm of Jin et al. (J Stat Plan Inference 134(1):268–287, 2005) are introduced and compared; our main focus is on GROUPRAND, the most successful of the four. In the numerical comparison, we consider calculation times, the space-fillingness of the obtained designs, and the performance of the different grids. Maximin distances for different numbers of points are provided; the corresponding nested maximin designs can be found on the website.
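As background, a maximin Latin hypercube design maximizes the smallest pairwise distance among its points while each coordinate remains a permutation of the levels. Both defining properties are straightforward to check; the sketch below verifies them for a candidate design (it is only the evaluation step, not the ESE/GROUPRAND search itself):

```python
from itertools import combinations
from math import dist

def is_latin_hypercube(points):
    """True if, in every dimension, the coordinates of the n points
    form a permutation of the levels 0..n-1."""
    n, d = len(points), len(points[0])
    return all(sorted(p[j] for p in points) == list(range(n))
               for j in range(d))

def maximin_distance(points):
    """The maximin criterion value: the smallest pairwise Euclidean
    distance, which a maximin design seeks to maximize."""
    return min(dist(a, b) for a, b in combinations(points, 2))
```

A search algorithm such as ESE repeatedly swaps coordinate values between two points (which preserves the Latin hypercube property) and keeps swaps that increase `maximin_distance`.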

20.
In Misra (ACM Trans Program Lang Syst 16(6):1737–1767, 1994), Misra introduced the powerlist data structure, which is well suited to expressing recursive, data-parallel algorithms. Misra and other researchers have shown how powerlists can be used to prove the correctness of several algorithms, and this success has encouraged some researchers to pursue automated proofs of theorems about powerlists (Kapur 1997; Kapur and Subramaniam 1995; Form Methods Syst Des 13(2):127–158, 1998). In this paper, we show how ACL2 can be used to verify theorems about powerlists. We depart from previous approaches in two significant ways. First, the powerlists we use are not the regular structures defined by Misra; that is, we do not require powerlists to be balanced trees. As we will see, this complicates some of the proofs, but it also allows us to state theorems that are otherwise beyond the language of powerlists. Second, we wish to prove the correctness of powerlist algorithms as much as possible within the logic of powerlists. Previous approaches have relied on intermediate lemmas that are unproven (indeed unstated) within the powerlist logic. We believe these lemmas must be formalized if the final theorems are to serve as a foundation for subsequent work, e.g., in the verification of system libraries; in our experience, some of these unproven lemmas presented the biggest obstacle to finding an automated proof. We illustrate our approach with two case studies involving Batcher sorting and prefix sums.
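The prefix-sum case study rests on the classic powerlist recursion over the zip constructor: ps(p ⨝ q) = (t − q) ⨝ t, where t = ps(p + q) and + is pointwise. A sketch on ordinary Python lists of power-of-two length (the paper's ACL2 development, not reproduced here, also handles unbalanced powerlists):

```python
def zip_pl(p, q):
    """Powerlist zip: interleave two equal-length lists element by element."""
    out = []
    for a, b in zip(p, q):
        out.extend([a, b])
    return out

def prefix_sum(x):
    """Powerlist-style prefix sum (length must be a power of two).
    Recursion: ps(p zip q) = (t - q) zip t, where t = ps(p + q)."""
    if len(x) == 1:
        return list(x)
    p, q = x[0::2], x[1::2]                       # deinterleave into p zip q
    t = prefix_sum([a + b for a, b in zip(p, q)])  # prefix sums of the pair sums
    return zip_pl([a - b for a, b in zip(t, q)], t)
```

This is the recursion underlying Ladner-Fischer-style parallel prefix computation: each level does pointwise additions in parallel and halves the problem size, giving logarithmic depth.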


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号