Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
Imitating personal handwriting is non-trivial. In this paper, we attempt to address the challenging problem of automatic handwriting facsimile. We focus on Chinese calligraphic writings due to their rich variation in style, their high artistic value, and the fact that they are among the most difficult candidates for the problem. We first analyze the structures and shapes of the constituent components, i.e., strokes and radicals, of characters in sample calligraphic writings by the same writer. To generate calligraphic writing in the style of the writer, we facsimile the individual character elements as well as the layout relationships used to compose the character, both in the writer's personal writing style. To test our algorithm, we compare our facsimiled Chinese calligraphic writings with the original writings. Our results are found to be acceptable in most cases, some of which are difficult to differentiate from the real ones. More results and supplementary materials are provided on our project website at http://www.cs.hku.hk/~songhua/facsimile/ .

2.
In this paper we present a pipeline for rendering dynamic 2D/3D line drawings efficiently. Our main goal is to create efficient static renditions and coherent animations of line drawings in a setting where lines can be added, deleted and arbitrarily transformed on-the-fly. Such a dynamic setting enables us to handle interactively sketched 2D line data, as well as arbitrarily transformed 3D line data, in a unified manner. We evaluate the proximity of screen-projected strokes to simplify them while preserving their continuity. We achieve this by using a special data structure that facilitates efficient proximity calculations in a dynamic setting. This on-the-fly proximity evaluation also facilitates the generation of appropriate visibility cues to mitigate depth ambiguities and visual clutter for 3D line data. As we perform all these operations using only line data, we can create line drawings from 3D models without any surface information. We demonstrate the effectiveness and applicability of our approach by showing several examples with initial line representations obtained from a variety of sources: 2D and 3D hand-drawn sketches and 3D salient geometry lines obtained from 3D surface representations.
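The abstract does not specify the proximity data structure, so the following is only a minimal sketch, assuming a uniform screen-space grid (spatial hash): projected stroke samples are hashed into cells, and a sample is dropped when it lies within a radius of already-kept geometry. The cell size, radius and keep-or-drop rule are illustrative assumptions, not the authors' method.

```python
# Sketch: uniform-grid proximity testing of screen-space stroke samples.
from collections import defaultdict
import math

def build_grid(points, cell):
    """Hash existing screen-space points into square grid cells."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def has_close_neighbor(grid, points, p, radius, cell):
    """Check the 3x3 cell neighborhood around p for any point within `radius`."""
    cx, cy = int(p[0] // cell), int(p[1] // cell)
    for gx in (cx - 1, cx, cx + 1):
        for gy in (cy - 1, cy, cy + 1):
            for j in grid.get((gx, gy), []):
                if math.hypot(p[0] - points[j][0], p[1] - points[j][1]) <= radius:
                    return True
    return False

def simplify_stroke(stroke, kept_points, grid, radius, cell):
    """Keep only samples that are not within `radius` of already-kept geometry."""
    kept = []
    for p in stroke:
        if not has_close_neighbor(grid, kept_points, p, radius, cell):
            grid[(int(p[0] // cell), int(p[1] // cell))].append(len(kept_points))
            kept_points.append(p)
            kept.append(p)
    return kept
```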

3.
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses ray tracing on a per-vertex basis, which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Conventional defocus simulation was previously done by distribution ray tracing, so a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina under the best human accommodation effort. The blur field is stored as texture data and referenced by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.
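To make the blur-field idea concrete, here is a minimal sketch of the lookup-and-displace step: a precomputed 3D grid over the projected view volume is sampled trilinearly per vertex, and the vertex is jittered by the looked-up blur radius before one of several render passes that are later blended. The grid layout, coordinate convention and jitter scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: trilinear lookup of a precomputed blur field and per-vertex displacement.
import numpy as np

def sample_blur_field(blur_grid, ndc):
    """Trilinear lookup of a scalar blur radius at a point in [-1, 1]^3 NDC space."""
    res = np.array(blur_grid.shape)
    u = (np.clip(ndc, -1.0, 1.0) * 0.5 + 0.5) * (res - 1)
    i0 = np.floor(u).astype(int)
    i1 = np.minimum(i0 + 1, res - 1)
    f = u - i0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[0]) if dx == 0 else f[0]) * \
                    ((1 - f[1]) if dy == 0 else f[1]) * \
                    ((1 - f[2]) if dz == 0 else f[2])
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                value += w * blur_grid[idx]
    return value

def displace_vertex(ndc_pos, blur_grid, jitter_dir):
    """Offset a projected vertex by the local blur radius along one jitter direction."""
    r = sample_blur_field(blur_grid, np.asarray(ndc_pos, dtype=float))
    return np.asarray(ndc_pos, dtype=float) + r * np.asarray(jitter_dir, dtype=float)
```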

4.
This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a trade-off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes.
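The control mechanism mentioned at the end can be illustrated with a minimal sketch: a per-frame controller that nudges the shading-buffer scale factor so that measured frame time tracks a target. The gain, clamping values and proportional update rule are illustrative assumptions, not the paper's controller.

```python
# Sketch: framerate-driven adjustment of the low-resolution shading buffer size.
class ResizeController:
    def __init__(self, target_ms, scale=1.0, gain=0.05, min_scale=0.25, max_scale=1.0):
        self.target_ms = target_ms
        self.scale = scale          # linear resize factor of the shading buffer
        self.gain = gain
        self.min_scale = min_scale
        self.max_scale = max_scale

    def update(self, frame_ms):
        # Shrink the shading buffer when frames take too long; grow it back
        # toward full resolution when there is headroom.
        error = (self.target_ms - frame_ms) / self.target_ms
        self.scale = min(self.max_scale,
                         max(self.min_scale, self.scale * (1.0 + self.gain * error)))
        return self.scale

# Usage: scale = controller.update(last_frame_ms); shade at (scale*W, scale*H),
# then resample to (W, H) with the feature-preserving reconstruction pass.
```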

5.
In this paper, we introduce a new representation, radiance transfer fields (RTF), for rendering interreflections in dynamic scenes under low-frequency illumination. The RTF describes the radiance transferred by an individual object to its surrounding space as a function of the incident radiance. An important property of the RTF is its independence from the scene configuration, enabling interreflection computation in dynamic scenes. Moreover, RTFs naturally fit into the rendering framework of precomputed shadow fields, incurring negligible cost to add interreflection effects. In addition, RTFs can be used to compute interreflections for both diffuse and glossy objects. We also show that RTF data can be highly compressed by clustered principal component analysis (CPCA), which not only reduces the memory cost but also accelerates rendering. Finally, we present experimental results demonstrating our techniques.

6.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated on the GPU, enabling real-time post-processing for both off-line and interactive applications.
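As a rough illustration of splatting into depth layers and compositing them, here is a minimal sketch that classifies each source pixel into one of three layers (behind focus, in focus, in front of focus), splats it with a flat kernel sized by its circle of confusion, and composites the layers back to front. The circle-of-confusion model and the three-way classification are simplified assumptions, not the paper's exact per-pixel layered formulation.

```python
# Sketch: three-layer splatting depth of field on CPU (for small images only).
import numpy as np

def circle_of_confusion(depth, focal_depth, aperture):
    return aperture * np.abs(depth - focal_depth) / np.maximum(depth, 1e-6)

def layered_dof(color, depth, focal_depth, aperture, eps=0.05):
    """color: (H, W, 3) floats; depth: (H, W). Returns a blurred (H, W, 3) image."""
    h, w, _ = color.shape
    layers = np.zeros((3, h, w, 4))                         # RGBA accumulation per layer
    coc = circle_of_confusion(depth, focal_depth, aperture)
    layer_id = np.where(depth < focal_depth - eps, 2,       # 2: in front of focus
               np.where(depth > focal_depth + eps, 0, 1))   # 0: behind, 1: in focus
    for y in range(h):
        for x in range(w):
            r = max(int(coc[y, x]), 0)
            li = layer_id[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            weight = 1.0 / ((y1 - y0) * (x1 - x0))          # flat splat kernel
            layers[li, y0:y1, x0:x1, :3] += color[y, x] * weight
            layers[li, y0:y1, x0:x1, 3] += weight
    out = np.zeros((h, w, 3))
    for li in (0, 1, 2):                                    # back-to-front composite
        a = np.clip(layers[li, ..., 3:4], 0.0, 1.0)
        rgb = layers[li, ..., :3] / np.maximum(layers[li, ..., 3:4], 1e-6)
        out = rgb * a + out * (1.0 - a)
    return out
```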

7.
Recent soft shadow mapping techniques based on back-projection can render high-quality soft shadows in real time. However, real-time high-quality rendering of large penumbrae is still challenging, especially when multilayer shadow maps are used to reduce single-light-sample silhouette artifacts. In this paper, we present an efficient algorithm to attack this problem. We first present a GPU-friendly packet-based approach that renders a packet of neighboring pixels together to amortize the cost of computing visibility factors. Then, we propose a hierarchical technique to quickly locate the contour edges, further reducing the computation cost. Finally, we suggest a multi-view shadow map approach to reduce the single-light-sample artifact. We also demonstrate its higher image quality and higher efficiency compared to existing depth-peeling approaches.

8.
Fiber tracking is a standard tool to estimate the course of major white matter tracts from diffusion tensor magnetic resonance imaging (DT-MRI) data. In this work, we aim at supporting the visual analysis of classical streamlines from fiber tracking by integrating context from anatomical data, acquired by a T1-weighted MRI measurement. To this end, we suggest a novel visualization metaphor, which is based on data-driven deformation of geometry and has been inspired by a technique for anatomical fiber preparation known as Klingler dissection. We demonstrate that our method conveys the relation between streamlines and surrounding anatomical features more effectively than standard techniques like slice images and direct volume rendering. The method works automatically, but its GPU-based implementation allows for additional, intuitive interaction.

9.
Motion-Based Painterly Rendering
Previous painterly rendering techniques normally use image gradients to decide stroke orientations. Image gradients are good for expressing object shapes, but have difficulty expressing the flow or movement of objects. In real painting, using brush strokes that correspond to the actual movement of objects allows viewers to recognize the objects' motion better and thus conveys a sense of dynamism. In this paper, we propose a novel painterly rendering algorithm that expresses dynamic objects based on their motion information. We first extract motion information (magnitude, direction, standard deviation) of a scene from a set of consecutive images taken from the same view. The motion directions are then used to determine stroke orientations in regions with significant motion, while image gradients determine stroke orientations where little motion is observed. Our algorithm is useful for realistically and dynamically representing moving objects. We have applied our algorithm to rendering landscapes: a scene is segmented into dynamic and static regions, and the actual movement of dynamic objects is expressed using motion-based strokes.
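The orientation rule described above (motion direction where motion is significant, image gradient elsewhere) can be sketched in a few lines. How the flow is estimated and the threshold value are assumptions; the sketch only shows the per-pixel selection step.

```python
# Sketch: choose stroke orientation from motion where it is significant,
# otherwise from the image gradient (strokes follow isophotes).
import numpy as np

def stroke_orientations(gray, flow, motion_threshold=0.5):
    """gray: (H, W) intensity image; flow: (H, W, 2) per-pixel motion vectors.
    Returns per-pixel stroke angles in radians."""
    gy, gx = np.gradient(gray.astype(float))
    gradient_angle = np.arctan2(gy, gx) + np.pi / 2      # perpendicular to the gradient
    motion_angle = np.arctan2(flow[..., 1], flow[..., 0])
    motion_mag = np.hypot(flow[..., 0], flow[..., 1])
    return np.where(motion_mag > motion_threshold, motion_angle, gradient_angle)
```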

10.
Many categories of objects, such as human faces, can be naturally viewed as a composition of several different layers. For example, a bearded face with glasses can be decomposed into three layers: a layer for the glasses, a layer for the beard, and a layer for the other permanent facial features. While modeling such a face with a linear subspace model can be very difficult, layer separation allows for easy modeling and modification of certain structures while leaving others unchanged. In this paper, we present a method for automatic layer extraction and its applications to face synthesis and editing. Layers are automatically extracted by utilizing the differences between subspaces and are modeled separately. We show that our method can be used for tasks such as beard removal (virtual shaving), beard synthesis, and beard transfer, among others.

11.
Style Transfer Functions for Illustrative Volume Rendering
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
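To give a feel for the image-based lighting model, here is a minimal sketch of a lit-sphere ("sphere map") lookup combined with a density-driven choice of style: the view-space normal indexes into a sphere map, and the sample's density selects and blends between two style maps. Blending exactly two adjacent styles per sample, and the breakpoint parameterization, are illustrative simplifications of the paper's style transfer functions.

```python
# Sketch: sphere-map style lookup driven by a density-based style transfer function.
import numpy as np

def lit_sphere_lookup(sphere_map, normal_vs):
    """Map a view-space normal to sphere-map texel coordinates and fetch its color."""
    n = normal_vs / np.linalg.norm(normal_vs)
    h, w, _ = sphere_map.shape
    u = int((n[0] * 0.5 + 0.5) * (w - 1))
    v = int((-n[1] * 0.5 + 0.5) * (h - 1))
    return sphere_map[v, u]

def shade_sample(density, normal_vs, style_maps, breakpoints):
    """style_maps: list of sphere maps; breakpoints: sorted densities, one per style."""
    i = int(np.searchsorted(breakpoints, density))
    i = min(max(i, 1), len(style_maps) - 1)
    t = (density - breakpoints[i - 1]) / max(breakpoints[i] - breakpoints[i - 1], 1e-6)
    t = min(max(t, 0.0), 1.0)
    c0 = lit_sphere_lookup(style_maps[i - 1], normal_vs)
    c1 = lit_sphere_lookup(style_maps[i], normal_vs)
    return (1 - t) * c0 + t * c1
```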

12.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactively rendering depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
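The pyramid-based blurring part can be illustrated with a minimal sketch: build a downsampled pyramid, map each pixel's circle of confusion to a pyramid level, and blend between the two nearest levels. The disocclusion handling mentioned in the abstract is omitted, and the 2x2 box downsampling and grayscale input are assumptions made for brevity.

```python
# Sketch: circle-of-confusion-driven blur by blending levels of an image pyramid.
import numpy as np

def downsample(img):
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def build_pyramid(img, levels):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def sample_level(pyr, level, y, x):
    img = pyr[level]
    s = 2 ** level
    return img[min(y // s, img.shape[0] - 1), min(x // s, img.shape[1] - 1)]

def dof_blur(img, coc, levels=4):
    """img: (H, W) grayscale; coc: per-pixel circle of confusion in pixels."""
    pyr = build_pyramid(img, levels)
    out = np.zeros_like(img, dtype=float)
    lvl = np.clip(np.log2(np.maximum(coc, 1.0)), 0, levels - 1)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            l0 = int(lvl[y, x]); l1 = min(l0 + 1, levels - 1)
            t = lvl[y, x] - l0
            out[y, x] = (1 - t) * sample_level(pyr, l0, y, x) + t * sample_level(pyr, l1, y, x)
    return out
```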

13.
14.
We extend the rendering technique for continuous scatterplots to allow for a broad class of interpolation methods within the spatial grid instead of only linear interpolation. To do this, we propose an approach that projects the image of a cell from the spatial domain to the scatterplot domain. We approximate this image using either the convex hull or an axis-aligned rectangle that forms a tight fit of the projected points. In both cases, the approach relies on subdivision in the spatial domain to control the approximation error introduced in the scatterplot domain. Acceleration of this algorithm in homogeneous regions of the spatial domain is achieved using an octree hierarchy. The algorithm is scalable and adaptive since it allows us to balance computation time and scatterplot quality. We evaluate and discuss the results with respect to accuracy and computational speed. Our methods are applied to examples of 2-D transfer function design.
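The subdivision idea can be sketched for the axis-aligned-rectangle variant: a 2D spatial cell is mapped into the (v1, v2) scatterplot domain by an arbitrary interpolant, its image is bounded by a rectangle of sampled points, and the cell is subdivided until that rectangle is small enough, at which point the cell's spatial area is deposited as density. The corner/edge sampling, error threshold and depth limit are illustrative assumptions; the octree acceleration is omitted.

```python
# Sketch: adaptive spatial subdivision for a continuous scatterplot of one 2D cell.
import numpy as np

def cell_bounds(f, x0, x1, y0, y1, n=3):
    """Axis-aligned rectangle around the image of the cell under f(x, y) -> (v1, v2)."""
    xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
    pts = np.array([f(x, y) for x in xs for y in ys])
    return pts.min(axis=0), pts.max(axis=0)

def accumulate(f, x0, x1, y0, y1, hist, extent, max_size, depth=0, max_depth=8):
    """hist: 2D density histogram; extent: ((v1min, v1max), (v2min, v2max))."""
    lo, hi = cell_bounds(f, x0, x1, y0, y1)
    if np.all(hi - lo <= max_size) or depth >= max_depth:
        v1, v2 = 0.5 * (lo + hi)                      # deposit at the image's center
        b0, b1 = hist.shape
        i = int(np.clip((v1 - extent[0][0]) / (extent[0][1] - extent[0][0]) * b0, 0, b0 - 1))
        j = int(np.clip((v2 - extent[1][0]) / (extent[1][1] - extent[1][0]) * b1, 0, b1 - 1))
        hist[i, j] += (x1 - x0) * (y1 - y0)           # spatial area of the (sub)cell
        return
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    for a, b, c, d in ((x0, xm, y0, ym), (xm, x1, y0, ym),
                       (x0, xm, ym, y1), (xm, x1, ym, y1)):
        accumulate(f, a, b, c, d, hist, extent, max_size, depth + 1, max_depth)
```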

15.
Traversing voxels along a three-dimensional (3D) line is one of the most fundamental algorithms for voxel-based applications. This paper presents a new 6-connectivity integer algorithm for this task. The proposed algorithm accepts voxels having different sizes in the x, y and z directions. To explain the idea of the proposed approach, a 2D algorithm is first considered and then extended to 3D. The algorithm is multi-step, as up to three voxels may be added in one iteration. It accepts both integer and floating-point input. The new algorithm was compared to other popular voxel traversing algorithms. Counting the number of arithmetic operations showed that the proposed algorithm requires the fewest operations per traversed voxel. A comparison of the CPU time spent using either integer or floating-point arithmetic confirms that the proposed algorithm is the most efficient. The algorithm is simple and compact, which also makes it attractive for hardware implementation.
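For context, below is a minimal sketch of the classical floating-point 3D-DDA (Amanatides-Woo style) 6-connected traversal that algorithms like this one are typically compared against; it is not the paper's multi-step integer algorithm. It does honor per-axis voxel sizes, as the paper's setting requires.

```python
# Sketch: classical 6-connected voxel traversal along a line segment (baseline method).
def traverse_voxels(p0, p1, voxel_size=(1.0, 1.0, 1.0)):
    """Yield integer voxel coordinates along the segment p0 -> p1, one axis step per move."""
    pos = [int(p0[i] // voxel_size[i]) for i in range(3)]
    end = [int(p1[i] // voxel_size[i]) for i in range(3)]
    d = [p1[i] - p0[i] for i in range(3)]
    step, t_max, t_delta = [0] * 3, [float('inf')] * 3, [float('inf')] * 3
    for i in range(3):
        if d[i] != 0:
            step[i] = 1 if d[i] > 0 else -1
            next_boundary = (pos[i] + (1 if d[i] > 0 else 0)) * voxel_size[i]
            t_max[i] = (next_boundary - p0[i]) / d[i]     # ray parameter of next boundary
            t_delta[i] = voxel_size[i] / abs(d[i])        # parameter step per voxel
    yield tuple(pos)
    for _ in range(sum(abs(end[i] - pos[i]) for i in range(3))):
        axis = t_max.index(min(t_max))                    # cross the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        yield tuple(pos)
```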

16.
Improvements in biological data acquisition and genome sequencing now make it possible to reconstruct the entire metabolic networks of many living organisms. The size and complexity of these networks prohibit manual drawing and thereby create a need for dedicated visualization techniques. An efficient representation of such a network should preserve the topological information of metabolic pathways while respecting biological drawing conventions. These constraints complicate the automatic generation of such visualizations, as they raise graph-drawing issues. In this paper we propose a method to lay out the entire metabolic network while preserving the pathway information as much as possible. The method is flexible: it lets the user decide whether or not node duplication should be performed, and thus whether or not the network topology is preserved. Our technique combines partitioning, node placement and edge bundling to provide a pseudo-orthogonal visualization of the metabolic network. To ease pathway information retrieval, we also provide complementary interaction tools that emphasize relevant pathways in the entire metabolic context.

17.
We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We utilize this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and arrange them by performing a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides for high fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches.
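The nonnegative decomposition step can be illustrated with a small sketch that fits a target image as a nonnegative combination of Gaussian kernels via nonnegative least squares. It uses isotropic kernels on a fixed grid rather than the paper's possibly overlapping anisotropic kernels, and is only practical for small images; both choices are simplifying assumptions.

```python
# Sketch: nonnegative decomposition of a small target image into Gaussian kernels.
import numpy as np
from scipy.optimize import nnls

def gaussian_basis(h, w, centers, sigma):
    """Stack one flattened isotropic Gaussian per kernel center into a (h*w, K) matrix."""
    ys, xs = np.mgrid[0:h, 0:w]
    cols = [np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2)).ravel()
            for (cy, cx) in centers]
    return np.stack(cols, axis=1)

def decompose_image(img, grid_step=8, sigma=4.0):
    """Fit img (H, W, grayscale) with nonnegative kernel weights; return the fit too."""
    h, w = img.shape
    centers = [(y, x) for y in range(0, h, grid_step) for x in range(0, w, grid_step)]
    A = gaussian_basis(h, w, centers, sigma)
    weights, _ = nnls(A, img.ravel())            # nonnegative least-squares fit
    return centers, weights, (A @ weights).reshape(h, w)
```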

18.
In this work, we present a technique based on kernel density estimation for rendering smooth curves. With this approach, we produce uncluttered and expressive pictures, revealing frequency information about one or multiple curves, independent of the level of detail in the data, the zoom level, and the screen resolution. With this technique the visual representation scales seamlessly from an exact line drawing (for low-frequency, low-complexity curves) to a probability density estimate for more intricate situations. This scale-independence facilitates displays based on non-linear time, enabling high-resolution accuracy for recent values, accompanied by long historical series for context. We demonstrate the functionality of this approach in the context of prediction scenarios and in the context of streaming data.
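The core idea can be sketched per pixel column: all curve samples falling in a column are turned into a 1D kernel density estimate over the vertical axis, so dense or noisy regions render as a smooth distribution rather than a cluttered polyline. The Gaussian kernel, fixed bandwidth and column binning are assumptions; the paper's exact estimator may differ.

```python
# Sketch: per-column kernel density estimate of one curve as a density image.
import numpy as np

def column_density_image(t, values, width, height, bandwidth=2.0):
    """t, values: samples of one curve; returns a (height, width) density image."""
    img = np.zeros((height, width))
    t_span = (t.max() - t.min()) + 1e-9
    v_span = (values.max() - values.min()) + 1e-9
    cols = np.clip(((t - t.min()) / t_span * (width - 1)).astype(int), 0, width - 1)
    rows = (values - values.min()) / v_span * (height - 1)
    ys = np.arange(height)[:, None]              # pixel centers along the vertical axis
    for c in range(width):
        vs = rows[cols == c]
        if vs.size:
            # Sum of Gaussian kernels centered on every sample in this column.
            img[:, c] = np.exp(-0.5 * ((ys - vs[None, :]) / bandwidth) ** 2).sum(axis=1) / vs.size
    return img
```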

19.
The parallel vectors (PV) operator is a feature extraction approach for defining line-type features such as creases (ridges and valleys) in scalar fields, as well as separation, attachment, and vortex core lines in vector fields. In this work, we extend PV feature extraction to higher-order data represented by piecewise analytical functions defined over grid cells. The extraction uses PV in two distinct stages. First, seed points on the feature lines are placed by evaluating the inclusion form of the PV criterion with reduced affine arithmetic. Second, a feature flow field is derived from the higher-order PV expression where the features can be extracted as streamlines starting at the seeds. Our approach allows for guaranteed bounds regarding accuracy with respect to existence, position, and topology of the features obtained. The method is suitable for parallel implementation and we present results obtained with our GPU-based prototype. We apply our method to higher-order data obtained from discontinuous Galerkin fluid simulations.
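The underlying point-wise PV criterion is that two vector fields v and w are parallel where their cross product vanishes. The paper evaluates an inclusion form of this criterion over whole cells with reduced affine arithmetic; the sketch below only tests sampled points, as a simplified illustration.

```python
# Sketch: point-wise parallel-vectors test v x w = 0 on sampled field values.
import numpy as np

def parallel_vectors_mask(v, w, eps=1e-3):
    """v, w: arrays of shape (..., 3); True where the two fields are (nearly) parallel."""
    cross = np.cross(v, w)
    # Normalize by the field magnitudes so the test is scale-invariant.
    denom = np.linalg.norm(v, axis=-1) * np.linalg.norm(w, axis=-1) + 1e-12
    return np.linalg.norm(cross, axis=-1) / denom < eps
```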

20.
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which makes it possible to distinguish between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size or opacity to give rise to a wide range of line-based styles.

