Similar Documents
 20 similar documents found (search time: 15 ms)
1.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, in which source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated on the GPU, enabling real-time post-processing for both off-line and interactive applications.
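A minimal sketch of the layered idea, not the authors' exact pipeline: each source pixel is assigned to a near, in-focus, or far layer from its signed circle of confusion, each layer is blurred in proportion to that circle (a gather approximation of scattering), and the layers are composited back to front. The thin-lens CoC model, layer thresholds, and parameter values are illustrative assumptions.

```python
# Sketch of per-pixel layered depth of field (illustrative, not the paper's exact method).
# Assumptions: a simple signed-CoC model and three fixed layers (near / in-focus / far).
import numpy as np
from scipy.ndimage import gaussian_filter

def circle_of_confusion(depth, focus_depth, aperture=8.0):
    """Signed CoC in pixels: negative = nearer than focus, positive = farther."""
    return aperture * (depth - focus_depth) / np.maximum(depth, 1e-6)

def layered_dof(color, depth, focus_depth):
    coc = circle_of_confusion(depth, focus_depth)
    # Assign every source pixel to one of three destination layers.
    near, focus, far = coc < -0.5, np.abs(coc) <= 0.5, coc > 0.5
    out = np.zeros_like(color)
    # Approximate scattering by blurring each layer with its mean CoC radius,
    # then compositing far -> focus -> near so that closer layers win.
    for mask in (far, focus, near):
        if not mask.any():
            continue
        sigma = max(float(np.abs(coc[mask]).mean()), 1e-3)
        layer = np.where(mask[..., None], color, 0.0)
        weight = mask.astype(float)
        blurred = gaussian_filter(layer, sigma=(sigma, sigma, 0))
        w = gaussian_filter(weight, sigma=sigma)[..., None]
        out = np.where(w > 1e-4, blurred / np.maximum(w, 1e-4), out)
    return out

# Usage: img = layered_dof(np.random.rand(256, 256, 3), np.random.rand(256, 256) * 10, 5.0)
```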

2.
Annoying shaky motion is one of the most significant problems in home videos, since hand shake is unavoidable when shooting with a hand-held camcorder. Video stabilization is an important technique for solving this problem, but the stabilized videos produced by some current methods have reduced resolution and are still not very stable. In this paper, we propose a robust and practical method for full-frame video stabilization that takes the user's capturing intention into account, removing not only high-frequency shaky motions but also low-frequency unexpected movements. To infer the user's capturing intention, we first consider the regions of interest in the video to estimate which regions or objects the user wants to capture, and then use a polyline to estimate a new, stable camcorder motion path while avoiding cutting out the regions or objects of interest. We then fill the dynamic and static missing areas caused by frame alignment using content from other frames, preserving the resolution and quality of the original video, and smooth the discontinuous regions with a three-dimensional Poisson-based method. After these automatic operations, a full-frame stabilized video is obtained in which the important regions and objects are preserved.
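The path-smoothing step can be illustrated in isolation: a hypothetical per-frame 2D camera trajectory is replaced by a heavily smoothed version, and the difference gives the per-frame correction. This sketch uses a plain moving average instead of the paper's ROI-aware polyline fitting, and omits the full-frame inpainting.

```python
# Illustrative smoothing of a shaky 2D camera path (not the paper's polyline + ROI method).
import numpy as np

def smooth_path(path, window=31):
    """path: (N, 2) array of per-frame camera translations; returns a smoothed path."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(path, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode="valid") for d in range(path.shape[1])],
        axis=1)

# The per-frame correction applied when re-rendering each frame:
# correction = smooth_path(raw_path) - raw_path
```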

3.
4.
A person's handwriting appears differently within a typical range of variations, and the shapes of handwritten characters also interact in complex ways with their neighbors. This makes automatic synthesis of handwritten characters and paragraphs very challenging. In this paper, we propose a method for synthesizing handwritten text in a writer's personal style. The synthesis algorithm consists of two phases. First, we create multidimensional morphable models for different characters from one writer's data. Then, we compute a cursive probability to decide whether each pair of neighboring characters is conjoined. By jointly modeling the handwriting style and the conjoining property through a novel trajectory optimization, final handwritten words can be synthesized from a set of collected samples. Furthermore, paragraph layouts are automatically generated and adjusted according to the writer's style obtained from the same dataset. We demonstrate that our method can successfully synthesize an entire paragraph that mimics a writer's handwriting using his or her collected handwriting samples.

5.
Creating variations of an image object is an important task, which usually requires manipulating the skeletal structure of the object. However, most existing methods (such as image deformation) only allow stretching the skeletal structure of an object; modifying skeletal topology remains a challenge. This paper presents a technique for synthesizing image objects with different skeletal structures while respecting an input image object. To apply the technique, a user first annotates the skeletal structure of the input object by specifying a number of strokes in the input image, and draws corresponding strokes in an output domain to define new skeletal structures. Example texture pieces are then sampled along the strokes in the input image and pasted along the strokes in the output domain, following their orientations. The result is obtained by optimizing the texture sampling and seam computation. To demonstrate its effectiveness, the proposed method is used to synthesize challenging skeletal structures, such as skeletal branches, and a wide range of image objects with various skeletal structures.

6.
This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a trade‐off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature‐preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer‐grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel‐bound scenes.
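The framerate control mechanism mentioned at the end can be sketched as a proportional feedback loop on the resize factor; the gain, bounds, and placeholder render pass below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a feedback controller that resizes the shading buffer to hold a target framerate.
# The render pass is a placeholder; gain, bounds, and target are illustrative assumptions.
import time

class FramebufferLODController:
    def __init__(self, target_ms=16.7, scale=1.0, gain=0.05):
        self.target_ms = target_ms
        self.scale = scale      # fraction of full resolution used for the shading buffer
        self.gain = gain

    def update(self, last_frame_ms):
        # Too slow -> shrink the shading buffer; too fast -> grow it back toward full size.
        error = (self.target_ms - last_frame_ms) / self.target_ms
        self.scale = min(1.0, max(0.1, self.scale * (1.0 + self.gain * error)))
        return self.scale

def render_frame(scale):
    # Placeholder for: shade at (scale * width, scale * height), then resample to full
    # resolution with a feature-preserving (depth/normal-aware) reconstruction filter.
    time.sleep(0.005 * scale * scale)   # fake cost roughly proportional to shaded pixels

controller = FramebufferLODController()
for _ in range(5):
    start = time.perf_counter()
    render_frame(controller.scale)
    controller.update((time.perf_counter() - start) * 1000.0)
```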

7.
In this paper we present a method for automatic interpolation between adjacent discrete levels of detail to obtain smooth LOD transitions in image space. We do this in two passes: the two LOD levels are rendered individually and then combined in a separate pass. The interpolation is formulated so that only one level has to be updated per frame while the other is reused from the previous frame, giving roughly the same render cost as simple, non-interpolated discrete LOD rendering, with only the slight overhead of the final combination pass. Additionally, we describe customized interpolation schemes using visibility textures. The method was designed for easy integration into existing engines. It requires neither sorting nor blending of objects, and it imposes no constraints on the LODs used: the LODs can be coplanar, alpha-masked, animated, impostors, or intersecting, while still interpolating smoothly.
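A minimal sketch of the image-space interpolation, under the assumption of a fixed-length alpha ramp: the two LOD renders are simply blended per pixel, and only one of the two buffers needs to be refreshed in a given frame.

```python
# Sketch of image-space cross-fading between two discrete LOD renders.
# The alpha ramp length is an assumption; the arrays stand in for the two rendered levels.
import numpy as np

def crossfade(previous_lod_buffer, current_lod_buffer, frame_in_transition, transition_frames=10):
    alpha = min(1.0, frame_in_transition / float(transition_frames))
    return (1.0 - alpha) * previous_lod_buffer + alpha * current_lod_buffer

# During a transition only the newly selected LOD needs re-rendering each frame;
# the other operand can simply be the buffer kept from the previous frame.
old_level = np.zeros((4, 4, 3))
new_level = np.ones((4, 4, 3))
blended = crossfade(old_level, new_level, frame_in_transition=3)
```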

8.

9.
In this paper, we introduce a new representation – radiance transfer fields (RTF) – for rendering interreflections in dynamic scenes under low-frequency illumination. The RTF describes the radiance transferred by an individual object to its surrounding space as a function of the incident radiance. One important property of the RTF is its independence of the scene configuration, which enables interreflection computation in dynamic scenes. Another is that RTFs fit naturally into the rendering framework of precomputed shadow fields, so interreflection effects can be added at negligible cost. In addition, RTFs can be used to compute interreflections for both diffuse and glossy objects. We also show that RTF data can be highly compressed by clustered principal component analysis (CPCA), which not only reduces the memory cost but also accelerates rendering. Finally, we present experimental results demonstrating our techniques.
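The CPCA compression mentioned at the end can be sketched as k-means clustering of the transfer vectors followed by a low-rank PCA within each cluster; the cluster count and rank below are illustrative choices, not values from the paper.

```python
# Sketch of clustered principal component analysis (CPCA) for compressing transfer vectors.
# Cluster count and per-cluster rank are illustrative, not the paper's settings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cpca_compress(vectors, n_clusters=8, rank=4):
    """vectors: (N, D) transfer samples. Returns labels, per-cluster PCA models, coefficients."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    models, coeffs = [], np.zeros((len(vectors), rank))
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        pca = PCA(n_components=rank).fit(vectors[idx])
        models.append(pca)
        coeffs[idx] = pca.transform(vectors[idx])
    # Reconstruct sample i via models[labels[i]].inverse_transform(coeffs[i:i+1]).
    return labels, models, coeffs

labels, models, coeffs = cpca_compress(np.random.rand(1000, 64))
```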

10.
Fiber tracking is a standard tool to estimate the course of major white matter tracts from diffusion tensor magnetic resonance imaging (DT-MRI) data. In this work, we aim to support the visual analysis of classical streamlines from fiber tracking by integrating context from anatomical data acquired by a T1-weighted MRI measurement. To this end, we suggest a novel visualization metaphor, which is based on data-driven deformation of geometry and was inspired by a technique for anatomical fiber preparation known as Klingler dissection. We demonstrate that our method conveys the relation between streamlines and surrounding anatomical features more effectively than standard techniques such as slice images and direct volume rendering. The method works automatically, but its GPU-based implementation allows for additional, intuitive interaction.

11.
Recent soft shadow mapping techniques based on back-projection can render high-quality soft shadows in real time. However, real-time high-quality rendering of large penumbrae is still challenging, especially when multilayer shadow maps are used to reduce the single-light-sample silhouette artifact. In this paper, we present an efficient algorithm to attack this problem. We first present a GPU-friendly, packet-based approach that renders a packet of neighboring pixels together to amortize the cost of computing visibility factors. We then propose a hierarchical technique to quickly locate contour edges, further reducing the computation cost. Finally, we suggest a multi-view shadow map approach to reduce the single-light-sample artifact. We also demonstrate higher image quality and higher efficiency compared to existing depth-peeling approaches.

12.
This paper describes a fast rendering algorithm for verifying spectacle lens designs. Our method simulates refraction corrections for astigmatism as well as myopia and presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our method performs ray tracing on a per-vertex basis, warping the environment map to produce a real-time refracted image that is subjectively as good as ray-traced output. Defocus was conventionally simulated by distribution ray tracing, for which a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values are determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina under the best human accommodation effort. The blur field is stored as texture data and referenced by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.

13.
This paper proposes a novel system that “rephotographs” a historical photograph using a collection of images. Rather than finding the exact viewpoint of the historical photo, users only need to take a number of photographs around the target scene. We adopt structure from motion to estimate the spatial relationship among these photographs and construct a 3D point cloud. Based on user-specified correspondences between the projected 3D point cloud and the historical photograph, the camera parameters of the historical photograph are estimated. We then combine forward- and backward-warped images to render the result. Finally, inpainting and content-preserving warping are used to refine it, and a photograph from the same viewpoint as the historical one is produced from the photo collection.

14.
15.
This article focuses on real-time image correction techniques that enable projector-camera systems to display images on surfaces that are not optimized for projection, such as geometrically complex, coloured and textured surfaces. It reviews hardware-accelerated methods such as pixel-precise geometric warping, radiometric compensation, multi-focal projection, and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super-resolution, high-dynamic-range and high-speed projection are discussed. These techniques open a variety of new applications for projection displays, some of which are also presented in this report.
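One of the reviewed techniques, radiometric compensation, has a commonly used per-pixel formulation that is easy to sketch: subtract the ambient contribution from the desired image and divide by the measured surface response. The simple linear model and the clamping below are assumptions; the report covers more elaborate variants.

```python
# Common per-pixel radiometric compensation model (illustrative; not the report's full method).
# All inputs are HxWx3 float images in [0, 1].
import numpy as np

def compensate(desired, ambient, surface_response, eps=1e-3):
    """Projector input such that desired ≈ ambient + surface_response * input."""
    comp = (desired - ambient) / np.maximum(surface_response, eps)
    return np.clip(comp, 0.0, 1.0)   # clamping loses content the projector cannot reach

projector_input = compensate(np.full((2, 2, 3), 0.6),
                             np.full((2, 2, 3), 0.1),
                             np.full((2, 2, 3), 0.8))
```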

16.
We present an alternative approach to creating digital camouflage images that follows human perceptual intuition and mirrors the physical creation procedure used by artists. Our method is based on a two-scale decomposition of the input images. We modify the large-scale layer of the background image according to structural importance, using energy optimization, and the detail layer by controlling its spatial variation. A gradient correction is introduced to prevent halo artifacts. Users can control how difficult the camouflage effect is to perceive through a few parameters. Our camouflage images look natural and have fewer long coherent edges in the hidden region. Experimental results show that our algorithm yields visually pleasing camouflage images.
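The two-scale decomposition the method builds on can be sketched with a plain low-pass split: the large-scale layer is a blurred copy of the image and the detail layer is the residual. The Gaussian split below is an illustrative stand-in; the paper's energy optimization and gradient correction are not reproduced.

```python
# Minimal two-scale decomposition into a large-scale layer and a detail layer.
# A plain Gaussian split is used for illustration; the paper's pipeline is more involved.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale(image, sigma=8.0):
    large_scale = gaussian_filter(image, sigma=(sigma, sigma, 0))
    detail = image - large_scale
    return large_scale, detail   # the two layers are edited separately, then recombined

img = np.random.rand(128, 128, 3)
base, detail = two_scale(img)
reconstructed = base + detail    # exact by construction
```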

17.
We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We use this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and how to arrange the patches by a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides high-fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches.
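The nonnegative decomposition can be illustrated with a small nonnegative least-squares fit over a fixed bank of isotropic Gaussian kernels; the kernel grid and width are assumptions (the paper uses possibly overlapping anisotropic kernels), and the surface-patch construction is not covered here.

```python
# Sketch: approximate an image as a nonnegative combination of fixed Gaussian kernels.
# Kernel grid and width are illustrative; the paper optimizes anisotropic kernels.
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel(h, w, cy, cx, sigma):
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

def decompose(image, grid=8, sigma=4.0):
    h, w = image.shape
    centers = [(cy, cx) for cy in np.linspace(0, h - 1, grid)
                        for cx in np.linspace(0, w - 1, grid)]
    basis = np.stack([gaussian_kernel(h, w, cy, cx, sigma).ravel()
                      for cy, cx in centers], axis=1)
    weights, _ = nnls(basis, image.ravel())   # nonnegative weight per kernel
    return weights.reshape(grid, grid), (basis @ weights).reshape(h, w)

weights, approx = decompose(np.random.rand(32, 32))
```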

18.
The rendering of large data sets can result in cluttered displays and non-interactive update rates, leading to time-consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen-space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen-space measure is shown to better capture significant visual structures in the data than data-space measures. The presented method is implemented on the GPU, allowing interactive creation of high-quality graphical representations of multivariate data sets containing tens of thousands of items.
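A simplified version of a distance-transform-based screen-space measure, under assumptions that differ from the paper's exact formulation: rasterize the full and abstracted data to binary masks, take the Euclidean distance transform of the abstraction, and average it over the pixels covered by the original rendering.

```python
# Illustrative screen-space quality measure using a distance transform
# (not the paper's exact metric). Masks are boolean HxW rasterizations.
import numpy as np
from scipy.ndimage import distance_transform_edt

def screen_space_error(original_mask, abstracted_mask):
    # Distance from every pixel to the nearest pixel drawn by the abstraction...
    dist_to_abstraction = distance_transform_edt(~abstracted_mask)
    # ...averaged over the pixels the original rendering actually covers.
    return float(dist_to_abstraction[original_mask].mean())

orig = np.zeros((64, 64), bool); orig[10:50, 10:50] = True
abst = np.zeros((64, 64), bool); abst[10:50:4, 10:50:4] = True
print(screen_space_error(orig, abst))
```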

19.
A collage can provide a summary of the collection of photos in an album. In this paper, we introduce a novel approach to constructing photo collages in a hierarchical narrative manner. As opposed to previous methods that focus on spatial coherence in the collage layout, our narrative collage arranges the photos according to the basic narrative elements of literary writing, i.e., character, setting, and plot. Face, time, and place attributes are exploited to embody these narrative elements in the collage. Photos are then organized into a hierarchical structure that captures multiple levels of detail in the events recorded by the album. Such a hierarchical narrative collage presents a chronological visual overview of what happened in the album. Experimental results show that our approach offers a better summarization for browsing photo album content than previous methods.

20.
We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which makes it possible to distinguish between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size, or opacity to give rise to a wide range of line-based styles.
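The core convolution step can be sketched with a synthetic feature-skeleton mask convolved by a circular brush footprint; the image-space feature fitting and the profile-to-style mapping are outside this snippet, and the footprint shape is an assumption.

```python
# Sketch: stylized lines as the convolution of a brush footprint along a feature skeleton.
# The skeleton here is a synthetic mask; the paper extracts it by image-space feature fitting.
import numpy as np
from scipy.signal import fftconvolve

def brush_footprint(radius=3, opacity=0.6):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return opacity * (x ** 2 + y ** 2 <= radius ** 2).astype(float)

def render_strokes(skeleton_mask, radius=3):
    footprint = brush_footprint(radius)
    ink = fftconvolve(skeleton_mask.astype(float), footprint, mode="same")
    return np.clip(ink, 0.0, 1.0)   # stroke intensity per pixel

skeleton = np.zeros((64, 64)); skeleton[32, 8:56] = 1.0   # a synthetic horizontal feature line
strokes = render_strokes(skeleton)
```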

