Similar Articles
 13 similar articles found (search time: 15 ms)
1.
郁宏春, 孙军. 《计算机工程》 2004, 30(17): 154-156
Blotch (dirt) noise is a common problem in old films, typically appearing as dark or bright spots. An effective method is proposed to remove this noise and recover the lost data. Blotches are detected by analyzing the continuity of brightness in the spatio-temporal domain: for each candidate pixel, the smoothest direction is selected, the probability that the pixel is uncorrupted is estimated, and a soft decision is made on whether the pixel is a blotch. A spatio-temporal filter adaptively controlled by this probability then removes the blotch and restores the data, and a protection mechanism is introduced to handle cases where the continuity analysis fails. Experiments show that the method is effective, practical, and easy to implement.
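A minimal sketch of the probability-controlled spatio-temporal filtering idea on three consecutive grayscale frames (the probability model, the temporal-median repair, and the scale constant are assumptions; the actual method also selects the smoothest spatio-temporal direction and includes a protection mechanism):

```python
# Illustrative sketch, not the authors' exact detector: soft blotch removal on three
# consecutive grayscale frames using a probability-weighted temporal filter.
import numpy as np

def remove_blotches(prev, cur, nxt, scale=0.1):
    """Return a restored version of `cur` (float arrays in [0, 1])."""
    # Temporal discontinuity: a blotch in `cur` usually differs from BOTH neighbours.
    d = np.minimum(np.abs(cur - prev), np.abs(cur - nxt))
    # Soft decision: probability that the pixel is uncorrupted (large jump -> low prob).
    p_clean = np.exp(-d / scale)
    # Candidate replacement: temporal median of the three frames.
    repaired = np.median(np.stack([prev, cur, nxt]), axis=0)
    # Probability-controlled blend: keep the original value where it looks clean.
    return p_clean * cur + (1.0 - p_clean) * repaired

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1) for _ in range(3)]
    frames[1][20:24, 20:24] = 1.0          # simulate a bright blotch
    out = remove_blotches(*frames)
    print(out[22, 22], frames[1][22, 22])  # repaired value vs. corrupted value
```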

2.
This paper proposes a generic methodology for segmentation and reconstruction of volumetric datasets based on a deformable model, the topological active volumes (TAV). This model, based on a polyhedral mesh, integrates features of region-based and boundary-based segmentation methods in order to fit the contours of the objects and model their inner topology. Moreover, it implements automatic procedures, the so-called topological changes, that alter the mesh structure and allow the segmentation of complex features such as pronounced curvatures or holes, as well as the detection of several objects in the scene. This work presents the TAV model and the segmentation methodology and explains how changes in the TAV structure can improve the adjustment process. In particular, it focuses on increasing the mesh density in complex image areas in order to improve the adjustment to object surfaces. The suitability of the mesh structure and the segmentation methodology is analyzed, and the accuracy of the proposed model is demonstrated on both synthetic and real images.
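A minimal illustrative sketch of the density-increase idea, reduced to 2-D (the TAV model itself works on a 3-D polyhedral mesh, so the gradient criterion and threshold below are assumptions):

```python
# Sketch: split mesh edges whose midpoint lies in a high-gradient image area,
# so the mesh becomes denser where the object surface is complex.
import numpy as np

def densify_edges(vertices, edges, image, grad_thresh=0.2):
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    new_vertices = list(vertices)
    new_edges = []
    for i, j in edges:
        mid = (vertices[i] + vertices[j]) / 2.0
        r, c = int(round(mid[0])), int(round(mid[1]))
        if grad[r, c] > grad_thresh:              # complex area: insert a node
            k = len(new_vertices)
            new_vertices.append(mid)
            new_edges += [(i, k), (k, j)]
        else:
            new_edges.append((i, j))
    return np.array(new_vertices), new_edges

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[:, 32:] = 1.0   # vertical step edge in the image
    verts = np.array([[32.0, 8.0], [32.0, 56.0]])
    print(densify_edges(verts, [(0, 1)], img))    # the edge crosses the step -> subdivided
```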

3.
Most numerical simulation methods for cloth draping are based on mechanical models, which are typically represented graphically by a uniform grid. Because fabric is a very flexible material, many wrinkles appear on its surface when it undergoes free or constrained motion (collisions, applied loads, supports). The difficulty during a simulation run is to represent realistically the surface of the mechanical system and its associated motion, both of which are strongly tied to the mesh discretization. We propose a new method based on adaptive meshing that allows the mechanical system to behave without the constraints imposed by a uniform mesh. Numerical examples are given to show the efficiency of the method.
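A minimal sketch of the adaptive-refinement idea on a 1-D cloth cross-section (the bending-angle criterion and its threshold are assumptions, not the paper's formulation):

```python
# Sketch: insert new particles where a draped polyline bends sharply, so wrinkles
# receive a locally finer discretization than flat regions.
import numpy as np

def refine_polyline(points, angle_thresh_deg=20.0):
    pts = [points[0]]
    for a, b, c in zip(points[:-2], points[1:-1], points[2:]):
        u, v = b - a, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle > angle_thresh_deg:              # sharp bend: refine around it
            pts.append((a + b) / 2.0)
            pts.append(b)
            pts.append((b + c) / 2.0)
        else:
            pts.append(b)
    pts.append(points[-1])
    return np.array(pts)

if __name__ == "__main__":
    flat = np.array([[x, 0.0] for x in range(5)], dtype=float)
    flat[2, 1] = 0.8                              # a wrinkle in the middle
    print(refine_polyline(flat).shape)            # more points near the wrinkle
```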

4.
Two approaches are described that improve the efficiency of optical flow computation without incurring loss of accuracy. The first approach segments images into regions of moving objects. The method is based on a previously defined Galerkin finite element method on a triangular mesh combined with a multiresolution segmentation approach for object flow computation. Images are automatically segmented into subdomains of moving objects by an algorithm that employs a hierarchy of mesh coarseness for the flow computation, and these subdomains are reconstructed over a finer mesh on which flow is recomputed more accurately. The second approach uses an adaptive mesh in which the resolution increases where motion is found to occur. Optical flow is computed over a reasonably coarse mesh, and this is used to construct an optimal adaptive mesh in a way that differs from the gradient methods reported in the literature. The finite element mesh reduces computational effort by allowing processing to focus on particular objects of interest in a scene (i.e. those areas where motion is detected). The proposed methods were tested on real and synthetic image sequences, and promising results are reported.
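A minimal sketch of the second approach's refinement criterion (the block-averaging rule, cell size, and threshold are assumptions, not the paper's finite element formulation): refine only where the coarse flow indicates motion.

```python
# Sketch: flag coarse grid cells whose mean coarse-flow magnitude exceeds a threshold,
# so the subsequent, more accurate flow computation concentrates on moving regions.
import numpy as np

def refine_where_moving(flow_u, flow_v, cell=8, thresh=0.5):
    """Return a boolean mask of coarse cells that should be subdivided."""
    h, w = flow_u.shape
    mag = np.hypot(flow_u, flow_v)
    refine = np.zeros((h // cell, w // cell), dtype=bool)
    for i in range(h // cell):
        for j in range(w // cell):
            block = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            refine[i, j] = block.mean() > thresh   # motion detected in this cell
    return refine

if __name__ == "__main__":
    u = np.zeros((64, 64)); v = np.zeros((64, 64))
    u[16:32, 16:32] = 2.0                  # synthetic moving patch
    print(refine_where_moving(u, v).sum(), "cells flagged for refinement")
```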

5.
In this paper we introduce and analyze a fully discrete approximation for a parabolic problem with a nonlinear boundary condition which implies that solutions blow up in finite time. We use standard linear elements with mass lumping for the space variable. For the time discretization we write the problem in an equivalent form obtained by introducing an appropriate time re-scaling, and then use explicit Runge-Kutta methods for this equivalent problem. To motivate our procedure we first present it for a simple ordinary differential equation and show how the blow-up time is approximated in this case. We obtain necessary and sufficient conditions for the blow-up of the numerical solution and prove that the numerical blow-up time converges to the continuous one. We also study, for the explicit Euler approximation, the localization of blow-up points for the numerical scheme. Received October 4, 2001; revised March 27, 2002; published online July 8, 2002.
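As a worked illustration of the ODE case mentioned above, here is a minimal sketch for u' = u^p with p > 1, whose exact blow-up time is T = u0^(1-p)/(p-1); the adaptive step stands in for the time re-scaling and explicit Euler replaces the Runge-Kutta schemes, so this is an assumption-laden simplification rather than the paper's scheme:

```python
# Sketch: the numerical blow-up time (sum of accepted steps until the solution
# exceeds a huge cap) approaches the exact blow-up time T as h -> 0.
def numerical_blowup_time(u0=1.0, p=2.0, h=1e-3, u_max=1e12):
    u, t = u0, 0.0
    while u < u_max:              # "numerical blow-up": solution exceeds a huge cap
        dt = h / u ** (p - 1)     # rescaled (shrinking) time step
        u += dt * u ** p          # explicit Euler step for u' = u^p
        t += dt
    return t

if __name__ == "__main__":
    exact = 1.0 / (2.0 - 1.0)     # u0 = 1, p = 2  =>  T = 1
    for h in (1e-1, 1e-2, 1e-3):
        print(h, numerical_blowup_time(h=h), "exact:", exact)
```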

6.
An algorithm for generating unstructured tetrahedral meshes of arbitrarily shaped three-dimensional regions is described. The algorithm works for regions without cracks as well as for regions with one or multiple cracks. It incorporates aspects of well-known meshing procedures but includes some original steps. It uses an advancing front technique, along with an octree, to develop local guidelines for the size of generated elements. The advancing front technique is based on a standard procedure found in the literature, with two additional steps to ensure valid volume mesh generation for virtually any domain. The first additional step generates elements considering only the topology of the current front, and the second is a back-tracking procedure with face deletion that ensures a mesh can be generated even when problems occur during the advance of the front. To improve mesh quality (as far as element shape is concerned), an a posteriori local mesh improvement procedure is used. The performance of the algorithm is evaluated by application to a number of realistically complex, cracked geometries.

7.
A Fast Modeling Method for Personalized 3D Human Body Models   Total citations: 11 (self-citations: 0, others: 11)
Following a model-reuse approach, a standard human body model is edited by entering 21 anthropometric parameters, producing in real time a personalized 3D human body model with the corresponding body shape. The editing process completes in real time; the generated body model can be animated conveniently; and the 21 carefully selected anthropometric parameters cover details over the whole body, describing its external shape fairly completely and meeting the need for personalized human body models.
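A minimal sketch of one way such parameter-driven editing could work (the linear displacement-basis morph, the parameter names, and the array sizes are assumptions, not the paper's method):

```python
# Sketch: edit a standard body mesh by blending per-parameter displacement fields
# derived from differences between measured and standard anthropometric values.
import numpy as np

def personalize(base_vertices, displacement_basis, params, standard_params):
    """
    base_vertices      : (V, 3) standard body model
    displacement_basis : (P, V, 3) vertex offsets per unit change of each parameter
    params             : (P,) measured values (e.g. stature, chest girth, ...)
    standard_params    : (P,) values of the standard model
    """
    delta = np.asarray(params) - np.asarray(standard_params)
    return base_vertices + np.tensordot(delta, displacement_basis, axes=1)

if __name__ == "__main__":
    V, P = 1000, 21                                   # 21 anthropometric parameters
    rng = np.random.default_rng(1)
    base = rng.normal(size=(V, 3))
    basis = rng.normal(scale=0.01, size=(P, V, 3))
    std = np.full(P, 1.0)
    measured = std + rng.normal(scale=0.1, size=P)    # one subject's measurements
    print(personalize(base, basis, measured, std).shape)
```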

8.
Finite element analysis of 3D phenomena produces a set of function values defined at the nodes of the mesh. Various modes of graphical representation of such results are possible, including contour plots on cross-sections and surfaces of constant field value. In this paper, we propose an algorithm for the generation of such models. The continuous surfaces representing constant field values are obtained element by element by linear interpolation of nodal values. The normalized gradient of the computed values is used for surface smoothing and shading. The method uses shaded polygon and shaded triangle-strip primitives to meet industry standards for graphics hardware.
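A minimal sketch of the per-element interpolation step for a single tetrahedral element (the marching-style edge test below is an illustration under assumed inputs, not the paper's full pipeline):

```python
# Sketch: find where an iso-surface of constant field value crosses the edges of one
# tetrahedral element by linear interpolation of the nodal values.
import numpy as np
from itertools import combinations

def iso_crossings(nodes, values, iso):
    """nodes: (4, 3) tet vertices, values: (4,) nodal field values."""
    points = []
    for i, j in combinations(range(4), 2):
        vi, vj = values[i], values[j]
        if (vi - iso) * (vj - iso) < 0:           # edge straddles the iso-value
            t = (iso - vi) / (vj - vi)            # linear interpolation parameter
            points.append(nodes[i] + t * (nodes[j] - nodes[i]))
    return np.array(points)                        # 3 or 4 points -> triangle/quad facet

if __name__ == "__main__":
    tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    vals = np.array([0.0, 1.0, 1.0, 1.0])
    print(iso_crossings(tet, vals, iso=0.5))       # three mid-edge points
```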

9.
Ulrike Storck. 《Computing》 2000, 65(3): 271-280
We present a verified numerical integration algorithm with an adaptive strategy for smooth integrands. Verified representations of the remainder term are derived, and a new adaptive strategy is given which delivers a desired integral enclosure whose error is usually bounded by a specified error bound. Here, we discuss more closely the distribution of the specified error bound onto the subintervals used in the algorithm and present numerical results depending on the distribution of the error. Received March 15, 1999; revised December 1, 1999.
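A minimal sketch of the error-bound-distribution idea using ordinary floating-point adaptive Simpson quadrature (note that this does not produce the verified interval enclosures of the paper; the bisection rule and halving of the tolerance are the standard textbook choices):

```python
# Sketch: the requested error bound is split between the two subintervals each time
# an interval is bisected, so the total budget is respected across the partition.
def adaptive_simpson(f, a, b, tol):
    def simpson(a, b):
        m = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

    def recurse(a, b, tol, whole):
        m = 0.5 * (a + b)
        left, right = simpson(a, m), simpson(m, b)
        if abs(left + right - whole) <= 15.0 * tol:      # standard Simpson error test
            return left + right
        # distribute the error bound onto the two subintervals
        return recurse(a, m, tol / 2.0, left) + recurse(m, b, tol / 2.0, right)

    return recurse(a, b, tol, simpson(a, b))

if __name__ == "__main__":
    import math
    print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))   # should be close to 2
```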

10.
The boundary concentrated FEM, a variant of the hp-version of the finite element method, is proposed for the numerical treatment of elliptic boundary value problems. It is particularly suited for equations with smooth coefficients and non-smooth boundary conditions. In the two-dimensional case it is shown that the Cholesky factorization of the resulting stiffness matrix requires O(N log^4 N) units of storage and can be computed with O(N log^8 N) work, where N denotes the problem size. Numerical results confirm the theoretical estimates. Received October 4, 2001; revised August 19, 2002; published online October 24, 2002.

11.
Mining discriminative spatial patterns in image data is an emerging subject of interest in medical imaging, meteorology, engineering, biology, and other fields. In this paper, we propose a novel approach for detecting spatial regions that are highly discriminative among different classes of three-dimensional (3D) image data. The main idea of our approach is to treat the initial 3D image as a hyper-rectangle and search for discriminative regions by adaptively partitioning the space into progressively smaller hyper-rectangles (sub-regions). We use statistical information about each hyper-rectangle to guide the selectivity of the partitioning. A hyper-rectangle is partitioned only if its attribute cannot adequately discriminate among the distinct labeled classes and it is sufficiently large for further splitting. To evaluate the discriminative power of the attributes corresponding to the detected regions, we performed classification experiments on artificial and real datasets. Our results show that the proposed method outperforms major competitors, achieving 30% and 15% better classification accuracy on synthetic and real data, respectively, while reducing by two orders of magnitude the number of statistical tests required by voxel-based approaches.
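A minimal 2-D, two-class sketch of the adaptive partitioning (the mean-intensity attribute, the plain t-statistic, and all thresholds are assumptions; the paper's statistical criteria and its 3-D setting differ):

```python
# Sketch: recursively split a rectangle until its attribute separates the classes
# well or the rectangle becomes too small; report the discriminative regions found.
import numpy as np

def t_stat(a, b):
    na, nb = len(a), len(b)
    return abs(a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / na + b.var(ddof=1) / nb + 1e-12)

def find_regions(images, labels, box, t_thresh=3.0, min_size=4, out=None):
    out = [] if out is None else out
    r0, r1, c0, c1 = box
    feats = images[:, r0:r1, c0:c1].mean(axis=(1, 2))   # one attribute per image
    t = t_stat(feats[labels == 0], feats[labels == 1])
    if t >= t_thresh:
        out.append((box, t))                             # region is discriminative
    elif min(r1 - r0, c1 - c0) >= 2 * min_size:          # otherwise split, if large enough
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for sub in [(r0, rm, c0, cm), (r0, rm, cm, c1), (rm, r1, c0, cm), (rm, r1, cm, c1)]:
            find_regions(images, labels, sub, t_thresh, min_size, out)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.normal(size=(40, 32, 32))
    labels = np.repeat([0, 1], 20)
    imgs[labels == 1, 4:8, 4:8] += 1.0                   # small class-specific signal
    print(find_regions(imgs, labels, (0, 32, 0, 32)))
```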

12.
A video-on-demand (VOD) server needs to store hundreds of movie titles and to support thousands of concurrent accesses. This, technically and economically, imposes a great challenge on the design of the disk storage subsystem of a VOD server. Because demand differs between movie titles, the numbers of concurrent accesses to each movie can differ greatly. We define the access profile as the number of concurrent accesses to each movie title that should be supported by a VOD server. The access profile is derived from the popularity of each movie title and thus serves as a major design goal for the disk storage subsystem. Since some popular (hot) movie titles may be concurrently accessed by hundreds of users, while a current high-end magnetic disk array can only support tens of concurrent accesses, it is necessary to replicate and/or stripe the hot movie files over multiple disk arrays. The consequence of replication and striping of hot movie titles is a potential increase in the required number of disk arrays. Therefore, how to replicate, stripe, and place the movie files over a minimum number of magnetic disk arrays such that a given access profile can be supported is an important problem. In this paper, we formulate the problem of video file allocation over disk arrays, show that it is an NP-hard problem, and present heuristic algorithms that find near-optimal solutions. The results of this study can be applied to the design of the storage subsystem of a VOD server to minimize cost or to maximize the utilization of disk arrays.
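A minimal sketch of a greedy first-fit allocation heuristic (the per-array stream and storage capacities, and the heuristic itself, are illustrative assumptions, not the paper's algorithms):

```python
# Sketch: allocate an access profile over as few disk arrays as possible, assuming
# each replica of a movie occupies one storage slot, and an array holds at most
# `slots` replicas and serves at most `streams` concurrent accesses.
def allocate(profile, streams=30, slots=10):
    """profile: dict movie -> required concurrent accesses."""
    arrays = []                                    # each: {"load": int, "movies": [...]}
    # hottest movies first, so their replicas spread over many arrays
    for movie, demand in sorted(profile.items(), key=lambda kv: -kv[1]):
        remaining = demand
        while remaining > 0:
            for arr in arrays:                     # first fit onto an existing array
                if len(arr["movies"]) < slots and arr["load"] < streams:
                    take = min(remaining, streams - arr["load"])
                    arr["load"] += take
                    arr["movies"].append((movie, take))
                    remaining -= take
                    break
            else:                                  # no array had room: open a new one
                arrays.append({"load": 0, "movies": []})
    return arrays

if __name__ == "__main__":
    profile = {"hot_movie": 120, "warm_movie": 45, "cold_movie": 5}
    plan = allocate(profile)
    print(len(plan), "arrays")
    for a in plan:
        print(a)
```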
