Similar Documents
20 similar documents found (search time: 31 ms)
1.
Statistical shape modeling is a widely used technique for the representation and analysis of the shapes and shape variations present in a population. A statistical shape model models the distribution in a high‐dimensional shape space, where each shape is represented by a single point. We present a design study on the intuitive exploration and visualization of shape spaces and shape models. Our approach focuses on the dual‐space nature of these spaces. The high‐dimensional shape space represents the population, whereas object space represents the shape of the 3D object associated with a point in shape space. A 3D object view provides local details for a single shape. The high‐dimensional points in shape space are visualized using a 2D scatter plot projection, the axes of which can be manipulated interactively. This results in a dynamic scatter plot, with the further extension that each point is visualized as a small version of the object shape that it represents. We further enhance the population‐object duality with a new type of view aimed at shape comparison. This new “shape evolution view” visualizes shape variability along a single trajectory in shape space, and serves as a link between the two spaces described above. Our three‐view exploration concept strongly emphasizes linked interaction between all spaces. As the cursor moves over the scatter plot or evolution views, shapes are dynamically interpolated and shown in the object view. Conversely, camera manipulation in the object view affects the object visualizations in the other views. We present a GPU‐accelerated implementation, and show the effectiveness of the three‐view approach using a number of real‐world cases. In these, we demonstrate how this multi‐view approach can be used to visually explore important aspects of a statistical shape model, including specificity, compactness and reconstruction error.
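A minimal sketch of the shape-space/object-space duality described above, assuming a PCA‐based statistical shape model; the function names and toy data below are illustrative, not from the paper:

```python
import numpy as np

# Minimal PCA shape model: each row of X is one training shape,
# flattened as (x1, y1, z1, x2, y2, z2, ...).
def fit_shape_model(X, n_modes=2):
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                # principal variation modes
    weights = (X - mean) @ modes.T      # 2D scatter-plot coordinates
    return mean, modes, weights

def interpolate(mean, modes, w_a, w_b, t):
    """Shape on the trajectory from scatter point w_a to w_b (t in [0, 1])."""
    w = (1.0 - t) * w_a + t * w_b
    return mean + w @ modes             # back to object space

# Toy population: 50 shapes of 100 3D points each.
X = np.random.default_rng(0).normal(size=(50, 300))
mean, modes, weights = fit_shape_model(X)
shape = interpolate(mean, modes, weights[0], weights[1], 0.5)
print(shape.shape)                      # (300,) -> 100 interpolated 3D points
```

Here the two mode weights serve as the scatter-plot coordinates, and interpolating between two weight vectors corresponds to one trajectory of the kind the shape evolution view visualizes.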

2.
3.
In this work we present a point classification algorithm for multi‐variate data. Our method is based on the concept of attribute subspaces, which are derived from a set of user specified attribute target values. Our classification approach enables users to visually distinguish regions of saliency through concurrent viewing of these subspaces in single images. We also allow a user to threshold the data according to a specified distance from attribute target values. Based on the degree of thresholding, the remaining data points are assigned radii of influence that are used for the final coloring. This limits the view to only those points that are most relevant, while maintaining a similar visual context.
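A hedged sketch of the thresholding-and-radius step: per-attribute normalization and the linear falloff below are illustrative choices, not taken from the paper.

```python
import numpy as np

def classify_points(values, targets, threshold, max_radius=5.0):
    """values: (n_points, n_attributes); targets: per-attribute target values.
    Returns per-attribute distances and a radius of influence for points
    within the threshold (others get radius 0 and are hidden)."""
    dist = np.abs(values - targets)     # distance in each attribute subspace
    dist = dist / dist.max(axis=0)      # normalize each subspace to [0, 1]
    keep = dist <= threshold
    # Closer to the target -> larger radius, shrinking to 0 at the threshold.
    radius = np.where(keep, max_radius * (1.0 - dist / threshold), 0.0)
    return dist, radius

rng = np.random.default_rng(1)
values = rng.random((1000, 3))          # 3 attribute subspaces
targets = np.array([0.2, 0.5, 0.9])
dist, radius = classify_points(values, targets, threshold=0.25)
```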

4.
Consistent segmentation is central to many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter‐frame variation across the whole sequence. We propose a local‐to‐global approach to co‐segment point cloud sequences of articulated objects into near‐rigid moving parts. Our method starts from a per‐frame point clustering, derived from a robust voting‐based trajectory analysis. The local segments are then progressively propagated to the neighboring frames with a cut propagation operation, and further merged through all frames using a novel space‐time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagating and merging, in both the space and time dimensions, makes our co‐segmentation algorithm especially robust in handling noise, occlusions and pose/view variations that are usually associated with raw scan data.
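The propagation step could look roughly like the nearest-neighbor label transfer below; this is a simplified stand-in assuming SciPy's cKDTree, and the paper's voting-based trajectory analysis and cut formulation are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_labels(points_src, labels_src, points_dst, max_dist=0.05):
    """Carry per-frame segment labels to a neighboring frame: each
    destination point takes the label of its nearest source point,
    or -1 (unlabeled) if nothing lies within max_dist."""
    tree = cKDTree(points_src)
    dist, idx = tree.query(points_dst)
    return np.where(dist <= max_dist, labels_src[idx], -1)

rng = np.random.default_rng(0)
f0 = rng.random((500, 3))
labels0 = (f0[:, 0] > 0.5).astype(int)            # two toy segments
f1 = f0 + rng.normal(scale=0.01, size=f0.shape)   # next frame, slight motion
labels1 = propagate_labels(f0, labels0, f1)
```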

5.
The generation of inbetween frames that interpolate a given set of key frames is a major component in the production of a 2D feature animation. Our objective is to considerably reduce the cost of the inbetweening phase by offering an intuitive and effective interactive environment that automates inbetweening when possible while allowing the artist to guide, complement, or override the results. Tight inbetweens, which interpolate similar key frames, are particularly time‐consuming and tedious to draw. Therefore, we focus on automating these high‐precision and expensive portions of the process. We have designed a set of user‐guided semi‐automatic techniques that fit well with current practice and minimize the number of required artist gestures. We present a novel technique for stroke interpolation from only two keys, which combines a stroke motion constructed from logarithmic spiral vertex trajectories with a stroke deformation based on curvature averaging and twisting warps. We discuss our system in the context of a feature animation production environment and evaluate our approach with real production data.
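A minimal sketch of a logarithmic-spiral vertex trajectory between two key positions, assuming the spiral center is already known (in the paper it comes from the stroke motion construction; here it is simply a parameter):

```python
import numpy as np

def log_spiral_interp(p0, p1, center, t):
    """Position at parameter t in [0, 1] on the logarithmic spiral through
    p0 (t=0) and p1 (t=1) around the given center, using 2D points as
    complex numbers."""
    z0 = complex(*p0) - complex(*center)
    z1 = complex(*p1) - complex(*center)
    # Decompose the similarity z1/z0 into scaling r and rotation theta,
    # then apply the fraction t of both: r**t and theta*t.
    ratio = z1 / z0
    r, theta = abs(ratio), np.angle(ratio)
    z = z0 * (r ** t) * np.exp(1j * theta * t)
    return np.array([z.real, z.imag]) + np.asarray(center, float)

# Vertex moving a quarter turn outwards around the origin:
mid = log_spiral_interp((1.0, 0.0), (0.0, 2.0), (0.0, 0.0), 0.5)
# mid is roughly (1, 1): halfway in both angle and (log-)scale.
```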

6.
Modern supercomputers enable increasingly large N‐body simulations using unstructured point data. The structures implied by these points can be reconstructed implicitly. Direct volume rendering of radial basis function (RBF) kernels in domain‐space offers flexible classification and robust feature reconstruction, but achieving performant RBF volume rendering remains a challenge for existing methods on both CPUs and accelerators. In this paper, we present a fast CPU method for direct volume rendering of particle data with RBF kernels. We propose a novel two‐pass algorithm: first sampling the RBF field using coherent bounding hierarchy traversal, then subsequently integrating samples along ray segments. Our approach performs interactively for a range of data sets from molecular dynamics and astrophysics up to 82 million particles. It does not rely on level of detail or subsampling, and offers better reconstruction quality than structured volume rendering of the same data, exhibiting comparable performance and requiring no additional preprocessing or memory footprint other than the BVH. Lastly, our technique enables multi‐field, multi‐material classification of particle data, providing better insight and analysis.
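A brute-force sketch of the two-pass idea: sample a Gaussian RBF field, then integrate the samples front-to-back along a ray. The BVH-based coherent traversal that makes the paper's method fast is replaced here with direct evaluation, and the transfer function is a trivial stand-in.

```python
import numpy as np

def rbf_field(sample_pts, centers, weights, sigma):
    """Pass 1: evaluate a Gaussian RBF particle field at sample points.
    (A real renderer would cull particles with a BVH; this is brute force.)"""
    d2 = ((sample_pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (weights * np.exp(-d2 / (2 * sigma**2))).sum(axis=1)

def integrate_ray(origin, direction, centers, weights, sigma,
                  t_max=10.0, n_samples=256):
    """Pass 2: front-to-back compositing of field samples along one ray."""
    ts = np.linspace(0.0, t_max, n_samples)
    values = rbf_field(origin + ts[:, None] * direction,
                       centers, weights, sigma)
    dt = t_max / n_samples
    color, transmittance = 0.0, 1.0
    for v in values:
        alpha = 1.0 - np.exp(-v * dt)       # opacity from sampled density
        color += transmittance * alpha * v  # emission proportional to value
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:            # early ray termination
            break
    return color

rng = np.random.default_rng(0)
centers = rng.random((200, 3)) * 4.0
c = integrate_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                  centers, np.ones(200), sigma=0.2)
```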

7.
We present Forward Light Cuts, a novel approach to real‐time global illumination using forward rendering techniques. We focus on unshadowed diffuse interactions for the first indirect light bounce in the context of large models such as the complex scenes usually encountered in CAD application scenarios. Our approach efficiently generates and uses a multiscale radiance cache by exploiting the geometry‐specific stages of the graphics pipeline, namely the tessellator unit and the geometry shader. To do so, we assimilate virtual point lights to the scene's triangles and design a stochastic decimation process chained with a partitioning strategy that accounts for both close‐by strong light reflections and distant regions from which numerous virtual point lights collectively contribute strongly to the end pixel. Our probabilistic solution is supported by a mathematical analysis and a number of experiments covering a wide range of application scenarios. As a result, our algorithm requires no precomputation of any kind, is compatible with dynamic viewpoints, lighting conditions, geometry and materials, and scales to tens of millions of polygons on current graphics hardware.
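A sketch of VPL generation with stochastic decimation: each triangle contributes one virtual point light, a random fraction is kept, and kept lights are reweighted by the inverse keep probability so the expected contribution is unchanged. The area-proportional power and the keep_prob value are illustrative assumptions, not the paper's multiscale partitioning.

```python
import numpy as np

def make_vpls(tri_verts, keep_prob, rng):
    """One virtual point light per triangle (at its centroid), then a
    stochastic decimation: keep each VPL with probability keep_prob and
    divide its power by keep_prob to keep the estimate unbiased."""
    centroids = tri_verts.mean(axis=1)                       # (n_tris, 3)
    e1 = tri_verts[:, 1] - tri_verts[:, 0]
    e2 = tri_verts[:, 2] - tri_verts[:, 0]
    power = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)   # ~ triangle area
    kept = rng.random(len(centroids)) < keep_prob
    return centroids[kept], power[kept] / keep_prob

rng = np.random.default_rng(7)
tris = rng.random((10000, 3, 3))
vpl_pos, vpl_power = make_vpls(tris, keep_prob=0.1, rng=rng)
```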

8.
We present a flexible and highly efficient hardware‐assisted volume renderer built upon the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA‐based visibility ordering, achieving rendering and sorting performance of over 6 M Tet/s for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data‐parallel solution that is neither bound by GPU memory size nor reliant on auxiliary volume information. In addition, iso‐surfaces can be readily extracted during the rendering process, and time‐varying data are handled without extra burden.
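A simplified stand-in for the visibility ordering, sorting tetrahedra back-to-front by centroid depth; note that a plain depth sort is only an approximation, and exact ordering for non-convex meshes needs an MPVO-style topological sort rather than this sketch.

```python
import numpy as np

def visibility_order(tet_indices, vertices, eye):
    """Back-to-front ordering of tetrahedra by centroid distance to the
    eye -- a simple approximation of a visibility sort."""
    centroids = vertices[tet_indices].mean(axis=1)   # (n_tets, 3)
    depth = np.linalg.norm(centroids - eye, axis=1)
    return np.argsort(-depth)                        # farthest first

vertices = np.random.default_rng(3).random((100, 3))
tets = np.random.default_rng(4).integers(0, 100, (500, 4))
order = visibility_order(tets, vertices, eye=np.array([0.5, 0.5, 5.0]))
```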

9.
Segmenting a moving foreground (fg) from its background (bg) is a fundamental step in many Machine Vision and Computer Graphics applications. Nevertheless, hardly any attempts have been made to tackle this problem in dynamic 3D scanned scenes. Scanned dynamic scenes are typically challenging due to noise and large missing parts. Here, we present a novel approach for motion segmentation in dynamic point‐cloud scenes designed to cater to the unique properties of such data. Our key idea is to augment fg/bg classification with an active learning framework by refining the segmentation process in an adaptive manner. Our method initially classifies the scene points as either fg or bg in an unsupervised manner, by training discriminative RBF‐SVM classifiers on automatically labeled, high‐certainty fg/bg points. Next, we adaptively detect unreliable classification regions (i.e. where fg/bg separation is uncertain), locally add more training examples to better capture the motion in these areas, and re‐train the classifiers to fine‐tune the segmentation. This not only improves segmentation accuracy, but also allows our method to perform in a coarse‐to‐fine manner, thereby efficiently processing high‐density point clouds. Additionally, we present a unique interactive paradigm for enhancing this learning process, by using a manual editing tool. The user explicitly edits the RBF‐SVM decision borders in unreliable regions in order to refine and correct the classification. We provide extensive qualitative and quantitative experiments on both real (scanned) and synthetic dynamic scenes.
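A hedged sketch of the adaptive refinement loop using scikit-learn's RBF‐SVM; the `oracle` callback that labels uncertain points is a purely illustrative stand-in for the paper's local relabeling (or, in the interactive variant, for the user's edits to the decision borders).

```python
import numpy as np
from sklearn.svm import SVC

def refine_segmentation(feats, seed_feats, seed_labels, oracle, rounds=3):
    """Active fg/bg refinement: train an RBF-SVM on high-certainty seeds,
    find the least reliable points (smallest |decision value|), obtain
    labels for them from `oracle`, and retrain."""
    X, y = seed_feats, seed_labels
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        margin = np.abs(clf.decision_function(feats))
        uncertain = np.argsort(margin)[:50]      # most unreliable points
        X = np.vstack([X, feats[uncertain]])
        y = np.concatenate([y, oracle(uncertain)])
    return clf.predict(feats)                    # final fg/bg labels
```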

10.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally‐varying fragments (t‐fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t‐fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv‐sampling for depth‐of‐field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t‐fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
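A compact sketch of a per-frame BVH rebuild over fragment bounding boxes using a spatial-median split; the dictionary-based node layout is illustrative, not the paper's GPU representation.

```python
import numpy as np

def build_bvh(boxes, indices=None, leaf_size=4, axis=0):
    """Recursive spatial-median BVH build over bounding boxes.
    boxes: (n, 2, 3) min/max corners. Returns nested dict nodes."""
    if indices is None:
        indices = np.arange(len(boxes))
    bounds = (boxes[indices, 0].min(0), boxes[indices, 1].max(0))
    if len(indices) <= leaf_size:
        return {"bounds": bounds, "leaf": indices}
    mid = 0.5 * (bounds[0][axis] + bounds[1][axis])   # spatial median split
    centers = boxes[indices].mean(axis=1)[:, axis]
    left, right = indices[centers < mid], indices[centers >= mid]
    if len(left) == 0 or len(right) == 0:             # degenerate: split evenly
        half = len(indices) // 2
        left, right = indices[:half], indices[half:]
    nxt = (axis + 1) % 3
    return {"bounds": bounds,
            "children": [build_bvh(boxes, left, leaf_size, nxt),
                         build_bvh(boxes, right, leaf_size, nxt)]}

corners = np.sort(np.random.default_rng(2).random((1000, 2, 3)), axis=1)
root = build_bvh(corners)
```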

11.
Modeling of realistic garments is essential for online shopping and many other applications including virtual characters. Most existing methods either require a multi‐camera capture setup or a restricted mannequin pose. We address the garment modeling problem from a single input image. We design an all‐pose garment outline interpretation and a shading‐based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline with an oriented facet determined by the mannequin pose to generate the initial 3D garment model. Shape details such as folds and wrinkles are modeled by shape‐from‐shading techniques to improve the realism of the garment model. Our method achieves result quality similar to that of prior methods from just a single image, significantly improving the flexibility of garment modeling.

12.
We present a novel L4RW (Laziness‐based Realistic Real‐time Responsive Rebalance in Walking) technique to synthesize 4RW animations under unexpected external perturbations with minimal locomotion effort. We first devise a lazy dynamic rebalance model, which specifies the dynamic balance conditions, defines the rebalance effort, and automatically selects the suitable rebalance strategy using the laziness law after an unexpected perturbation. Based on the model, L4RW searches over a motion capture (mocap) database for an appropriate motion segment to follow, and the transition motion is generated by interpolating the active response dynamic motion. A support vector machine (SVM)‐based training, classification, and prediction algorithm is applied to reduce the search space; it is trained offline only once. Our algorithm classifies the mocap database into many rebalance strategy‐specific subsets and then online predicts responsive motions within the subset matching the selected strategy. The rebalance effort, the ‘extrapolated center of mass’ (XCoM) and environment constraints are selected as feature attributes for the SVM feature vector. Furthermore, the segments in each subset are sorted by rebalance effort, and our algorithm searches for an acceptable segment starting from the least‐effort segment. Compared with previous methods, our search is over two orders of magnitude faster, and our algorithm creates more realistic and smooth 4RW animation.
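The XCoM feature is a standard biomechanics construct (Hof's extrapolated center of mass); a minimal version, with the inverted-pendulum length as a parameter:

```python
import numpy as np

def extrapolated_center_of_mass(com, com_velocity, leg_length, g=9.81):
    """XCoM: the CoM position extrapolated along its velocity, scaled by
    the inverted-pendulum eigenfrequency omega0 = sqrt(g / l). Balance is
    recoverable while the XCoM stays inside the base of support."""
    omega0 = np.sqrt(g / leg_length)
    return com + com_velocity / omega0

# A CoM 0.9 m above ground moving forward at 0.6 m/s:
xcom = extrapolated_center_of_mass(np.array([0.0, 0.0, 0.9]),
                                   np.array([0.6, 0.0, 0.0]),
                                   leg_length=0.9)
```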

13.
This paper presents a digital storytelling approach that generates automatic animations for time‐varying data visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. Specifically, we analyze information related to a given event and abstract it as an event graph, which represents data features as nodes and event relationships as links. This graph embeds a tree‐like hierarchical structure which encodes data features at different scales. Next, narrative structures are built by exploring starting nodes and suitable search strategies in this graph. Different stages of narrative structures are considered in our automatic rendering parameter decision process to generate animations as digital stories. We integrate this animation generation approach into an interactive exploration process of time‐varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a storm surge application that our approach allows semantic visualization of time‐varying data and easy animation generation for users without special knowledge about the underlying visualization techniques.
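A toy sketch of growing one narrative structure from an event graph by greedily following high-importance links; the node names, scores and graph contents below are hypothetical stand-ins for the paper's event features and search strategies.

```python
# Event graph: nodes are data features, links are event relationships.
def narrative_path(graph, scores, start, length=5):
    """graph: {node: [neighbors]}; scores: {node: importance}.
    Greedily follow the highest-scoring unvisited neighbor."""
    path, visited = [start], {start}
    while len(path) < length:
        options = [n for n in graph.get(path[-1], []) if n not in visited]
        if not options:
            break
        best = max(options, key=lambda n: scores.get(n, 0.0))
        path.append(best)
        visited.add(best)
    return path

graph = {"surge_peak": ["flood_A", "flood_B"], "flood_A": ["recede"],
         "flood_B": ["recede"], "recede": []}
scores = {"flood_A": 0.9, "flood_B": 0.4, "recede": 0.7}
print(narrative_path(graph, scores, "surge_peak"))
```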

14.
We present novel visual and interactive techniques for exploratory visualization of animal kinematics using instantaneous helical axes (IHAs). The helical axis has been used in orthopedics, biomechanics, and structural mechanics as a construct for describing rigid body motion. Within biomechanics, recent imaging advances have made possible accurate high‐speed measurements of individual bone positions and orientations during experiments. From this high‐speed data, instantaneous helical axes of motion may be calculated. We address questions of effective interactive, exploratory visualization of this high‐speed 3D motion data. A 3D glyph that encodes all parameters of the IHA in visual form is presented. Interactive controls are used to examine the change in the IHA over time and relate the IHA to anatomical features of interest selected by a user. The techniques developed are applied to a stereoscopic, interactive visualization of the mechanics of pig mastication and assessed by a team of evolutionary biologists who found interactive IHA‐based analysis a useful addition to more traditional motion analysis techniques.
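Computing the IHA itself is standard rigid-body kinematics; a minimal version from an angular velocity and the linear velocity of one tracked point:

```python
import numpy as np

def instantaneous_helical_axis(omega, v, p):
    """IHA of a rigid body from its angular velocity omega (rad/s) and the
    linear velocity v of a tracked point p. Returns the axis direction,
    the axis point closest to p, and the translation speed along the axis."""
    w2 = omega @ omega
    if w2 < 1e-12:
        raise ValueError("pure translation: helical axis undefined")
    direction = omega / np.sqrt(w2)
    axis_point = p + np.cross(omega, v) / w2   # closest axis point to p
    slide = (omega @ v) / np.sqrt(w2)          # translation along the axis
    return direction, axis_point, slide

d, s, slide = instantaneous_helical_axis(np.array([0.0, 0.0, 2.0]),
                                         np.array([1.0, 0.0, 0.5]),
                                         np.zeros(3))
```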

15.
In this paper, we propose a technique to produce artistic strokes in a variety of drawing materials based on example images. Our approach is to divide example strokes scanned from images into small pieces along their stroke directions and synthesize a novel stroke by rearranging them along a user‐specified curve. The visual quality of a synthesized stroke is maintained by utilizing the connectivity information stored in a directed graph constructed in the preprocessing step. At run‐time, the graph is traversed to find the path best matching the user specification, given as a curve and additional information. The results of our experiments show that visually convincing strokes of various materials can be generated efficiently.
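A greedy stand-in for the graph traversal: at each step, pick the successor piece whose direction best matches the local tangent of the user curve. The piece directions and edge lists are assumed inputs, and the paper's full matching criterion is richer than this dot-product score.

```python
import numpy as np

def synthesize_stroke(pieces, edges, target_dirs):
    """pieces[i]: unit direction of one scanned stroke piece;
    edges[i]: pieces allowed to follow piece i in the directed graph;
    target_dirs: unit tangents sampled along the user curve."""
    cur = int(np.argmax(pieces @ target_dirs[0]))   # best-aligned start piece
    path = [cur]
    for d in target_dirs[1:]:
        nxt = edges.get(cur, [])
        if not nxt:
            break
        cur = max(nxt, key=lambda j: pieces[j] @ d)  # best direction match
        path.append(cur)
    return path

pieces = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
edges = {0: [1], 1: [0, 2], 2: [1]}
print(synthesize_stroke(pieces, edges, [np.array([1.0, 0.2]),
                                        np.array([0.5, 0.8])]))
```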

16.
This paper presents an efficient technique for synthesizing motions by stitching, or splicing, an upper‐body motion retrieved from a motion space on top of the existing lower‐body locomotion of another motion. Compared to the standard motion splicing problem, motion space splicing imposes new challenges, as both the upper‐ and lower‐body motions might not be known in advance. Our technique is the first motion (space) splicing technique that propagates temporal and spatial properties of the lower‐body locomotion to the newly generated upper‐body motion and vice versa. Whereas existing techniques only adapt the upper‐body motion to fit the lower‐body motion, our technique also adapts the lower‐body locomotion based on the upper‐body task for a more coherent full‐body motion. In this paper, we will show that our decoupled approach is able to generate high‐fidelity full‐body motion for interactive applications such as games.

17.
We present an approach for editing shadows in all‐frequency lighting environments. To support artistic control, we propose to decouple shadowing from lighting and focus on providing intuitive controls to edit the former. To accomplish this task, we precompute and store scene visibility information separately from lighting and BRDFs and allow artists to edit visibility directly, by providing operations to select shadows and edit their shape. To facilitate a wider range of editing operations, we generalize visibility from binary to three‐channel floating‐point quantities and introduce a novel shadow representation based on computation of visibility ratios between the original render and the edited one. We demonstrate our results for diffuse and glossy surfaces, still scenes and animations.
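The visibility-ratio idea reduces to a per-channel division guarded against zeros; a minimal sketch with hypothetical data:

```python
import numpy as np

def visibility_ratio(v_original, v_edited, eps=1e-4):
    """Per-channel ratio between edited and original visibility; applying
    it to a render reproduces the shadow edit without recomputing lighting."""
    return v_edited / np.maximum(v_original, eps)

def apply_shadow_edit(render, ratio):
    return render * ratio   # reshadow the original frame

rng = np.random.default_rng(5)
v_orig = rng.random((4, 4, 3))           # three-channel visibility
v_edit = np.clip(v_orig * 1.5, 0, 1)     # e.g. a brightened shadow region
edited = apply_shadow_edit(rng.random((4, 4, 3)),
                           visibility_ratio(v_orig, v_edit))
```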

18.
Matrix Trees     
We propose a new data representation for octrees and kd‐trees that improves upon the memory size and algorithm speed of existing techniques. While pointerless approaches exploit the regular structure of the tree to facilitate efficient data access, their memory footprint becomes prohibitively large as the height of the tree increases. Pointer‐based trees require memory consumption proportional to the number of tree nodes, thus exploiting the typical sparsity of large trees. Yet, their traversal is slowed by the need to follow explicit pointers across the different levels. Our solution is a pointerless approach that represents each tree level with its own matrix, as opposed to traditional pointerless trees that use only a single vector. This novel data organization allows us to fully exploit the tree's regular structure and improve the performance of tree operations. By using a sparse matrix data structure we obtain a representation that is suited for sparse and dense trees alike. In particular, it uses less total memory than pointer‐based trees even when the data set is extremely sparse. We show how our approach is easily implemented on the GPU and illustrate its performance in typical visualization scenarios.
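A sketch of the per-level idea, with Python dictionaries standing in for the sparse matrices: parent and child cells are found by integer index arithmetic alone, with no pointers between levels.

```python
class LevelOctree:
    """Pointerless per-level octree: level L holds a sparse map from a
    cell's integer coordinates to its payload."""
    def __init__(self, depth):
        self.levels = [dict() for _ in range(depth)]  # sparse, one per level

    def insert(self, level, x, y, z, value):
        self.levels[level][(x, y, z)] = value

    def parent(self, level, x, y, z):
        # Parent cell is found by halving the coordinates.
        return self.levels[level - 1].get((x // 2, y // 2, z // 2))

    def children(self, level, x, y, z):
        # The eight candidate children double the coordinates plus offsets.
        nxt = self.levels[level + 1]
        return [nxt[(2*x+dx, 2*y+dy, 2*z+dz)]
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
                if (2*x+dx, 2*y+dy, 2*z+dz) in nxt]

tree = LevelOctree(depth=3)
tree.insert(0, 0, 0, 0, "root")
tree.insert(1, 1, 0, 0, "child")
assert tree.parent(1, 1, 0, 0) == "root"
```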

19.
20.
The selection of an appropriate global transfer function is essential for visualizing time‐varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in‐situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time‐varying volume data. Unlike previous approaches, which require pre‐processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in‐situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time‐varying simulation data that alleviates the cost associated with reloading and caching large data sets.
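A hedged sketch in which a per-ray histogram of sampled values stands in for the paper's ray attenuation functions; re-applying a new opacity mapping then needs only this compact record, not the original volume.

```python
import numpy as np

def ray_record(values, n_bins=64):
    """Compact per-ray representation: a histogram of the scalar values
    sampled along the ray (the 3D data can then be discarded)."""
    return np.histogram(values, bins=n_bins, range=(0.0, 1.0))

def recomposite(hist, edges, opacity_tf):
    """Re-apply a new opacity transfer function to the stored record:
    accumulated opacity without touching the volume."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    alpha = opacity_tf(centers)                 # opacity per value bin
    # n samples of opacity a attenuate light by (1-a)^n; multiply bins.
    transmittance = np.prod((1.0 - alpha) ** hist)
    return 1.0 - transmittance

vals = np.random.default_rng(2).random(256)     # samples along one ray
hist, edges = ray_record(vals)
opacity = recomposite(hist, edges, lambda v: 0.2 * v)
```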
